XXVII Brazilian Congress on Biomedical Engineering: Proceedings of CBEB 2020, October 26–30, 2020, Vitória, Brazil — ISBN 9783030706012, 303070601X

This book presents cutting-edge research and developments in the field of Biomedical Engineering.


English · 2274 pages · 2022



Table of contents:
Organization
Steering Committee
Program Committee
Preface
Contents
Basic Industrial Technology in Health
1 Analysis of the Acoustic Power Emitted by a Physiotherapeutic Ultrasound Equipment Used at the Brazilian Air Force Academy
Abstract
1 Introduction
2 Materials and Methods
2.1 Evaluation of the Relative Error of the ROP
2.2 Repeatability Analysis of ROP Measurement
2.3 Estimation of the ROP Variation Coefficient
3 Results
3.1 Evaluation of Rated Output Power (ROP)
3.2 Repeatability Analysis of ROP Measurement
3.3 Estimation of the ROP Variation Coefficient
4 Discussion
5 Conclusion
Acknowledgements
References
2 Synthesis and Antibacterial Activity of Maleimides
Abstract
1 Introduction
2 Material and Methods
2.1 Synthesis
2.2 Antimicrobial Activity
3 Results
3.1 Synthesis and Structural Characterization
3.2 Biological Activity
4 Discussion
5 Conclusions
Acknowledgements
References
Bioengineering
3 Study of the Effect of Bioceramic Compressive Socks on Leg Edema
Abstract
1 Introduction
2 Materials and Methods
2.1 Statistical Analysis
3 Results
4 Discussion
5 Conclusion
Conflict of Interest
References
4 Analysis of the Electric Field Behavior on a Stimulation Chamber Applying the Superposition Principle
1 Introduction
2 Materials and Methods
2.1 Experimental Study
2.2 Computational Study
3 Results
3.1 Experimental Results
3.2 Computational Results
4 Discussion
5 Conclusions
References
5 Development of a Non-rigid Model Representing the Venous System of a Specific Patient
Abstract
1 Introduction
2 Materials and Methods
2.1 Geometric Model
2.2 Bipartite Molds Manufacturing
2.3 Core Manufacturing
2.4 Silicone Preparation
2.5 Silicone Injection
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
6 Comparative Study of Rheological Models for Pulsatile Blood Flow in Realistic Aortic Arch Aneurysm Geometry by Numerical Computer Simulation
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusion
References
7 Total Lung Capacity Maneuver as a Tool to Screen the Relative Lung Volume in Balb/c Mice
1 Introduction
2 Materials and Methods
2.1 Animals
2.2 Assessment of Total Lung Capacity
2.3 Data Analysis
3 Results
4 Discussion
5 Conclusion
References
8 The Influence of Cardiac Ablation on the Electrophysiological Characterization of Rat Isolated Atrium: Preliminary Analysis
1 Introduction
2 Methods
2.1 Experimental Protocol
2.2 Optical and Electrical Mapping
2.3 Electrophysiological Characterization
3 Results
3.1 Time Analysis
3.2 Dominant Frequency and Organization Index
4 Discussion
5 Conclusion
References
9 Evaluation of the Surface of Dental Implants After the Use of Instruments Used in Biofilm Removal: A Comparative Study of Several Protocols
Abstract
1 Introduction
2 Material and Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
10 Comparison of Hemodynamic Effects During Alveolar Recruitment Maneuvers in Spontaneously Hypertensive Rats Treated and Non-treated with Hydralazine
1 Introduction
2 Materials and Methods
2.1 Animals
2.2 Experimental Protocol
2.3 Data Acquisition
2.4 Data Analysis
3 Results
4 Discussion
5 Conclusion
6 Compliance with Ethical Requirements
6.1 Conflict of Interest
6.2 Statement of Human and Animal Rights
References
11 Experimental Study of Bileaflet Mechanical Heart Valves
Abstract
1 Introduction
2 Materials and Methods
2.1 Experimental Set Up and Data Collection
2.2 Data Analysis
2.3 BMHVs
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
12 Preliminary Results of Structural Optimization of Dental Prosthesis Using Finite Element Method
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusions
Acknowledgements
References
Biomaterials, Tissue Engineering and Artificial Organs
13 Tribological Characterization of the ASTM F138 Austenitic Stainless-Steel Treated with Nanosecond Optical Fiber Ytterbium Laser for Biomedical Applications
Abstract
1 Introduction
2 Experimental Procedure
2.1 Ball-Cratering Equipment
2.2 Materials
2.3 Wear Tests
3 Results and Discussion
3.1 Scanning Electron Micrograph
3.2 Wear Volume Behavior
3.3 Friction Coefficient Behavior
3.4 Wear Resistance Analysis
4 Conclusions
Acknowledgements
References
14 Computational Modeling of Electroporation of Biological Tissues Using the Finite Element Method
1 Introduction
2 Materials and Methods
2.1 Electropermeabilization in Biological Tissues
2.2 Finite Element Method
3 Results
4 Conclusion
References
15 The Heterologous Fibrin Sealant and Aquatic Exercise Treatment of Tendon Injury in Rats
Abstract
1 Introduction
2 Methods
2.1 Surgical Procedure
2.2 Aquatic Exercise Protocol
2.3 The Evolution of Edema
2.4 Euthanasia
2.5 Collagen Quantification
2.6 Statistical Analyses
3 Results and Discussions
4 Conclusions
Acknowledgements
References
16 Myxomatous Mitral Valve Mechanical Characterization
1 Introduction
2 Methodology
3 Results
4 Discussion
References
17 Qualitative Aspects of Three-Dimensional Printing of Biomaterials Containing Devitalized Cartilage and Polycaprolactone
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
3.1 Biomaterials Based on Alginate and Poloxamer 407 (Compositions 1 to 4)
3.2 Biomaterials Based on Alginate (Compositions 5 to 8)
3.3 Biomaterials Based on Alginate and Gelatin (Compositions 9 to 12)
4 Conclusions
Acknowledgements
References
18 Chemical Synthesis Using the B-Complex to Obtain a Similar Polymer to the Polypyrrole to Application in Biomaterials
Abstract
1 Introduction
2 Materials and Methods
2.1 Definition of the Methods and Reagents
2.2 B-complex First Treatment
2.3 B-complex Second Treatment
2.4 B-complex Third Treatment
2.5 B-complex Fourth Treatment
2.6 Conductive Properties Assay
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
19 Evaluation of the Effect of Hydrocortisone in 2D and 3D HEp-2 Cell Culture
Abstract
1 Introduction
2 Materials and Methods
2.1 Cell Culture
2.2 Incubation with Hydrocortisone
2.3 Mitochondrial Metabolic Activity (MTT Assay)
2.4 Crystal Violet Assay
2.5 3D Culture
2.6 Immunostaining
2.7 Statistical Analysis
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
20 Surface Topography Obtained with High Throughput Technology for hiPSC-Derived Cardiomyocyte Conditioning
Abstract
1 Introduction
2 Materials and Methods
2.1 Polyurethane Substrate Preparation
2.2 DLIP on Polyurethane
2.3 DLIP-Modified PU Surface Characterization—Atomic Force Microscopy (AFM) and Scanning Electron Microscopy (SEM)
2.4 hiPSC-CM Cell Culture on DLIP-Modified PU Substrate
2.5 Cell Morphology and Orientation—SEM and Fluorescent Staining
2.6 Statistical Analysis
3 Results
3.1 Surface Characterization of DLIP-Modified PU—AFM and SEM
3.2 Alignment of hiPSC-CM in Response to DLIP-Modified PU
3.3 Differential Morphology of hiPSC-CM Cultured on DLIP-Modified PU—Aspect Ratio and Spreading Area
3.4 Alignment of F-Actin Myofibrils
4 Discussion
5 Conclusions
Acknowledgements
References
21 The Effect of Nitrided Layer on Antibacterial Properties of Biomedical 316L Stainless Steel
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Conclusions
Acknowledgements
References
22 Biomechanical Analysis of Tissue Engineering Construct for Articular Cartilage Restoration—A Pre-clinical Study
Abstract
1 Introduction
2 Materials and Methods
2.1 Cell Culture: Harvesting, Isolation, Expansion and Differentiation of MSCs
2.2 Development of a Tissue Engineering Construct Compound of Cell and Extracellular Matrix (TEC)
2.3 Animal Model
2.4 Mechanical Evaluation
3 Results
4 Discussion
5 Compliance with Ethical Requirements
5.1 Statement of Informed Consent and Human Rights
5.2 Statement of Animal Rights
6 Conclusions
Acknowledgements
References
23 Cytotoxicity Evaluation of Polymeric Biomaterials Containing Nitric Oxide Donors Using the Kidney Epithelial Cell Line (Vero)
Abstract
1 Introduction
2 Methodology
2.1 Biomaterials Synthesis
2.2 Incorporation of RSNOs in Poloxamer/Hyaluronic Acid Hydrogels and Chitosan Nanoparticles
2.3 Cells Viability Assays
3 Results
4 Discussion
5 Conclusions
6 Compliance with Ethical Requirements
6.1 Conflict of Interest
Acknowledgements
References
24 Cellular Interaction with PLA Biomaterial: Scanning Electron Microscopy Analysis
Abstract
1 Introduction
2 Materials and Methods
2.1 Biomaterial Preparation and Sterilization
2.2 Vero Cell Culture
2.3 Cell Interaction—Scanning Electron Microscopy
3 Results and Discussion
4 Conclusion
Acknowledgements
References
25 Development of a Gelatin-Based Hydrogel to be Used as a Fibrous Scaffold in Myocardial Tissue Engineering
Abstract
1 Introduction
2 Materials and Methods
2.1 Gelatin Films
2.2 Nanofibers Through Electrospinning
2.3 Crosslinking
2.4 Characterization
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
26 Experimental and Clinical Performance of Long-Term Cannulas for Mechanical Circulatory Assistance
Abstract
1 Introduction
2 Materials and Methods
2.1 The Cannulas
2.2 “In Vitro” Testing
2.3 “In Vivo” Validation
3 Results
4 Conclusions
Acknowledgements
References
27 Evaluation of Lantana Trifolia Total Extract in Cell Culture: Perspective for Tissue Engineering
Abstract
1 Introduction
2 Objectives
3 Materials and Methods
3.1 Preparation of Lantana Trifolia Total Extracts
3.2 Vero Cell Culture
3.3 Qualitative Analyses: Cell Morphology
3.4 Cell Quantification: MTT Assay
4 Results and Discussions
5 Conclusion
Acknowledgements
References
28 Study on the Disinfection Stability of Bullfrog Skin
Abstract
1 Introduction
2 Materials and Methods
2.1 Sample Preparation
2.2 Experimental Groups
2.3 Ozone Disinfection
2.4 Stability Assessment and Microbiological Analysis
3 Results and Discussion
4 Conclusions
Acknowledgements
References
29 Corrosion Analysis of a Marked Biomaterial
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusions
References
30 Mechanical and Morphological Analysis of Electrospun Poly(ε-Caprolactone) and Reduced Graphene Oxide Scaffolds for Tissue Engineering
Abstract
1 Introduction
2 Materials and Methods
2.1 Electrospinning Solutions
2.2 Preparation of Membranes by Electrospinning
2.3 Mechanical Testing
2.4 Qualitative Morphological Evaluation
2.5 Statistical Analysis
3 Results and Discussion
4 Conclusions
Acknowledgements
References
31 Experimental Apparatus for Evaluation of Calcium Fluctuations in Cardiomyocytes Derived from Human-Induced Pluripotent Stem Cells
Abstract
1 Introduction
2 Materials and Methods
2.1 PET and Micro-Textured PET
2.2 hiPSC-CM Culture and Plating
2.3 Loading of Fluorescent Indicator
3 Results and Discussion
3.1 Cell Chamber
3.2 Module for Registration and Analysis of the Signals
4 Conclusions
Acknowledgements
References
32 Cytotoxicity of Alumina and Calcium Hexaluminate: Test Conditions
Abstract
1 Introduction
2 Materials and Methods
2.1 Biomaterials
2.2 FTIR
2.3 Surface Morphological Analysis
2.4 Vero Cell Culture
2.5 Qualitative Analyses: Cell Morphology
3 Results and Discussions
4 Conclusions
Acknowledgements
References
33 Tendon Phantom Mechanical Properties Assessment by Supersonic Shear Imaging with Three-Dimensional Transducer
Abstract
1 Introduction
2 Materials and Methods
2.1 Phantom Preparation
2.2 Image Acquisition—SSI with 3D Transducer
2.3 Image Processing
2.4 Results
3 Discussion
4 Conclusion
Acknowledgements
References
34 Physiological Control of Pulsatile and Rotary Pediatric Ventricular Assist Devices
1 Introduction
2 Ventricular Assist Devices
2.1 pVAD
2.2 pRBP
3 Physiological Control Strategies
3.1 Pediatric Cardiovascular System Model
3.2 pVAD Control
3.3 pRBP Control
4 Discussion
References
35 Evaluation of Calcium Phosphate-Collagen Bone Cement: A Preliminary Study
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
36 Qualitative Hemolysis Analyses in VAD by Stress Distribution Using Computational Hemodynamics
Abstract
1 Introduction
2 Numerical Methodology
3 Results and Discussion
4 Conclusions and Remarks
Acknowledgements
References
Biomechanics and Rehabilitation
37 Development and Comparison of Different Implementations of Fuzzy Logic for Physical Capability Assessment in Knee Rehabilitation
Abstract
1 Introduction
1.1 Difference Between Mamdani and Sugeno
2 Materials and Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
38 Serious Games and Virtual Reality in the Treatment of Chronic Stroke: Both Sides Rehabilitation
Abstract
1 Introduction
2 Methods
2.1 Harpy Game
2.2 Evaluation Methods
2.3 Experimental Protocol
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
39 Technologies Applied for Elbow Joint Angle Measurements: A Systematic Review
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
40 The Importance of Prior Training for Effective Use of the Motorized Wheelchair
Abstract
1 Introduction
2 Methodology
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
41 Kinematic Approach for 4 DoF Upper Limb Robotic Exoskeleton in Complex Rehabilitation Tasks
Abstract
1 Introduction
2 Materials
2.1 Upper Limb Exoskeleton
3 Methods
3.1 Forward Kinematic
3.2 Inverse Kinematic
3.3 Workspace Analysis
4 Results and Discussion
5 Conclusion
Acknowledgements
References
42 An Integrated Method to Analyze Degenerative Bone Conditions on Transfemoral Amputees
Abstract
1 Introduction
2 Methods
2.1 Subjects
2.2 Gait Analysis
2.3 Densitometry and Radiography
2.4 Mechanical Properties and Apparent Densities
2.5 Finite Element Modeling
3 Results
3.1 Gait Analysis
3.2 Densitometry and X-Rays
3.3 Mechanical Properties and Apparent Densities
3.4 Finite Element Modeling
4 Discussion
5 Conclusions
Acknowledgements
References
43 Forced Oscillations and Functional Analysis in Patients with Idiopathic Scoliosis
Abstract
1 Introduction
2 Materials and Methods
2.1 Volunteers
2.2 Functional Respiratory Examinations
2.3 6-Minute Walk Test
2.4 Processing, Presentation and Statistical Analysis
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
44 Differences in Respiratory Mechanics in Emphysema and Chronic Bronchitis Evaluated by Forced Oscillations
Abstract
1 Introduction
2 Materials and Methods
2.1 Analysis of the Individuals
2.2 Instrumentation
2.3 Statistical Analysis
3 Results
3.1 Resistive FOT Parameters
3.2 Reactive FOT Parameters
4 Discussion
5 Conclusions
References
45 Automated 3D Scanning Device for the Production of Forearm Prostheses and Orthoses
Abstract
1 Introduction
2 Materials and Methods
2.1 Automated Device for 3D Scanning
2.2 Automated 3D Scanning Process
2.3 3D Image Composition
2.4 Performance Evaluation
2.5 Measuring Process
2.6 Manual Image Acquisition Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
46 Low Amplitude Hand Rest Tremor Assessment in Parkinson’s Disease Based on Linear and Nonlinear Methods
Abstract
1 Introduction
2 Methods
2.1 Participants and Data Collection
2.2 Data Acquisition
2.3 Protocol for Data Collection
2.4 Signal Processing
2.5 Statistical Analysis
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
47 Adaptation of Automatic Postural Responses in the Dominant and Non-dominant Lower Limbs
Abstract
1 Introduction
2 Methods
2.1 Participants
2.2 Task and Equipment
2.3 Experimental Design and Procedures
2.4 Data Collection and Analysis
2.5 Statistical Analysis
3 Results
4 Discussion
Acknowledgements
References
48 Influence of the Layout of the Ludic Tables on the Amplitude and Concentration of Upper Limb Movements
Abstract
1 Introduction
2 Materials and Methods
2.1 Flexion and Extension of the Elbow
2.2 Adduction and Abduction of Shoulder
2.3 Flexion and Extension of the Shoulder
3 Results and Discussion
4 Conclusions
References
49 Photobiomodulation Reduces Musculoskeletal Marker Related to Atrophy
Abstract
1 Introduction
2 Methods
2.1 Experimental Design
2.2 PBMT Protocol
2.3 Morphometric Analysis
2.4 Immunohistochemistry Analysis: Atrogin-1 Expression
2.5 Statistics
3 Results
3.1 Muscle Fiber CSA
3.2 Immunohistochemistry: Atrogin-1 Expression
4 Discussion
5 Conclusion
Acknowledgements
References
50 Effectiveness of Different Protocols in Platelet-Rich Plasma Recovery
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
51 Evaluation of the Motor Performance of People with Parkinson’s Disease Through the Autocorrelation Function Estimated from Sinusoidal Drawings
Abstract
1 Introduction
2 Materials and Methods
2.1 Description of the Database
2.2 Task Description, Sensor Positioning and Data Collection
2.3 Data Analysis
2.4 Feature Extraction
2.5 Statistical Analysis
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
52 Shear Modulus of Triceps Surae After Acute Stretching
Abstract
1 Introduction
2 Methods
2.1 Experimental Procedure
2.2 Measurement of Triceps Surae Shear Modulus
2.3 Dorsiflexion ROM
2.4 Stretching Protocol
2.5 Statistics
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
53 Estimation of the Coordination Variability Between Pelvis-Thigh Segments During Gait at Different Walking Speeds in Sedentary Young People and Practitioners of Physical Activities
Abstract
1 Introduction
2 Material and Methods
2.1 Subjects
2.2 Protocol
3 Results
3.1 Classification of Coordination Patterns
3.2 Statistical Analysis
4 Discussion
5 Conclusion
Acknowledgements
References
54 Gait Coordination Quantification of Thigh-Leg Segments in Sedentary and Active Youngs at Different Speeds Using the Modified Vector Coding Technique
Abstract
1 Introduction
2 Materials and Methods
2.1 Subjects
2.2 Protocol
2.3 Statistical Analysis
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
55 Demands at the Knee Joint During Jumps in Classically Trained Ballet Dancers
Abstract
1 Introduction
2 Materials and Methods
2.1 Subjects
2.2 Protocol
2.3 Statistical Analysis
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
56 Modeling and Analysis of Human Jump Dynamics for Future Application in Exoskeletons
Abstract
1 Introduction
2 Methodology
3 Results and Discussion
4 Conclusions
Acknowledgements
References
57 Functional Electrical Stimulation Closed-Loop Strategy Using Agonist-Antagonist Muscles for Controlling Lower Limb Movements
Abstract
1 Introduction
2 Methods
3 Results
3.1 On the Quadriceps Muscle Group
3.2 On the Quadriceps and Hamstrings Muscle Groups
4 Discussion
5 Conclusion
Acknowledgements
References
58 Control System Viability Analysis on Electrical Stimulation Platform
1 Introduction
2 Materials and Methods
2.1 Plant Validation
2.2 Controller Validation
2.3 Stimulator Validation
2.4 FES Applied to Plant and Control Validation
3 Results
3.1 Plant Validation
3.2 Controller Validation
3.3 Stimulator Validation
3.4 FES Applied to Plant and Control Validation
4 Discussion
5 Conclusions
References
59 Influence of the Use of Upper Limbs in the Vertical Jump on the Ground Reaction Force of Female Athletes from the Development Basketball Categories
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
60 Wearable System for Postural Adaptation
Abstract
1 Introduction
2 Materials and Methods
2.1 Wearable System Architecture
2.2 Hardware
2.3 Smartphone's App
2.4 Calibration and Use Cases
2.5 Test Protocols
3 Results
3.1 First Test
3.2 Second Test
3.3 Third Test
4 Discussion
5 Conclusions
Compliance with Ethical Requirements
References
61 On the Use of Wrist Flexion and Extension for the Evaluation of Motor Signs in Parkinson’s Disease
Abstract
1 Introduction
2 Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
62 Motor Performance Improvement in a Technological Age: A Literature Review
Abstract
1 Introduction
2 Material and Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
63 Modulation of EMG Parameters During Ankle Plantarflexor Fatigue in Trained Gymnasts and Healthy Untrained Controls
Abstract
1 Introduction
2 Methods
2.1 Participants
2.2 Procedures
2.3 EMG Acquisition and Processing
2.4 Statistical Analyses
3 Results
4 Discussion
5 Conclusions
Acknowledgements
Appendix
References
64 Amputation Rates in Southeastern Brazil
Abstract
1 Introduction
2 Materials and Methods
3 Results
3.1 Incidence of Amputation
3.2 Published Research on Amputations in Southeastern Brazil
4 Discussion
5 Conclusion
Acknowledgements
References
65 Design and Manufacturing of Flexible Finger Prosthesis Using 3D Printing
Abstract
1 Introduction
2 Methods and Materials
2.1 3D Printing: Equipment, Parameters and Material
2.2 Biomechanics of the Index Finger
2.3 Scanning/Modeling of the Prosthesis
2.4 Finite Element Analysis
3 Results and Discussions
4 Conclusion
5 Conflict of Interest
References
66 Characterization of Sensory Perception Associated with Transcutaneous Electrostimulation Protocols for Tactile Feedback Restoration
Abstract
1 Introduction
2 Materials and Methods
3 Results
3.1 Continuous or Discrete Electrostimulation Perception
3.2 Sensory Intensity Perception
3.3 Sensory Perception of Frequency
4 Discussion
5 Conclusion
Acknowledgements
References
67 Differences in Shear Modulus Among Hamstring Muscles After an Acute Stretching
Abstract
1 Introduction
2 Materials and Methods
2.1 Subjects
2.2 Experimental Procedure
2.3 Stretching Protocol
2.4 Measurement of Shear Modulus
2.5 Measurement of Maximum ROM
2.6 Statistical Analysis
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
68 Adapted Children’s Serious Game Using Dennis Brown Orthotics During the Preparatory Phase for the Gait
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Compliance with Ethical Requirements
6 Conclusion
7 Conflict of Interest
References
69 A Serious Game that Can Aid Physiotherapy Treatment in Children Using Dennis Brown Orthotics
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Compliance with Ethical Requirements
5 Conclusion
References
70 A Neuromodulation Decision Support System: A User-Centered Development Study
Abstract
1 Introduction
2 Materials and Methods
2.1 Design Thinking
2.2 SMDQ App Development
2.3 User Evaluation
2.4 Data Analysis
3 Results
4 Discussion
5 Conclusions
6 Conflict of Interest
References
71 Mechanisms of Shoulder Injury in Wheelchair Users
1 Introduction
2 Discussion
2.1 Epidemiology
2.2 Pain
2.3 Etiology
2.4 Prevention and Rehabilitation
3 Conclusion
References
72 Development and Customization of a Dennis Brown Orthosis Prototype Produced from Anthropometric Measurements by Additive Manufacturing
Abstract
1 Introduction
2 Materials and Methods
2.1 3D Modeling of Dennis Brown Orthosis
2.2 Computational Simulation of the Mechanical Resistance of the Orthosis
2.3 Production of Dennis Brown Orthosis by Additive Manufacture
2.4 Analysis of the Technical and Financial Feasibility of Orthosis Production
3 Results
4 Discussion
5 Compliance with Ethical Requirements
6 Conclusions
7 Conflict of Interest
References
73 Comparison of Kinematic Data Obtained by Inertial Sensors and Video Analysis
Abstract
1 Introduction
2 Materials and Methods
2.1 Processing of Video Analysis Data
2.2 Processing of Accelerometer Data
2.3 Determination of Data Averages
2.4 Statistical Analysis
3 Results
4 Discussion
5 Conclusions
6 Conflict of Interests
References
74 NNMF Analysis to Individual Identification of Fingers Movements Using Force Feedback and HD-EMG
Abstract
1 Introduction
2 Materials and Methods
2.1 Subjects and Protocol Setup
2.2 Data Processing
3 Results and Discussion
4 Conclusions
Acknowledgements
References
75 Nonlinear Closed-Loop Control of an OpenSim Wrist Model: Tuning Using Genetic Algorithm
1 Introduction
2 Methods
3 Results
4 Discussion
5 Conclusions
References
76 Quantification of Gait Coordination Variability of Pelvis-Thigh Segments in Young Sedentary and Practitioners People Walking in Different Slopes Using the Vector Coding Technique
Abstract
1 Introduction
2 Materials and Methods
2.1 Subjects
2.2 Protocol
2.3 Statistical Analysis
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
77 Prediction of the Heel Displacement During the Gait Using Kalman Filter in 2D Image
Abstract
1 Introduction
2 Methods
2.1 Experimental Setup
2.2 Kalman Filter
2.3 Data Processing
3 Results and Discussion
4 Conclusions
Acknowledgements
References
78 Achilles Tendon Tangent Modulus of Runners
Abstract
1 Introduction
2 Methods
2.1 Subjects
2.2 Tangent Modulus Calculation
2.3 Instrumentation and Data Acquisition Procedures
2.4 Statistical Analysis
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
79 Modeling and Simulation of Lower Limbs Exoskeleton for Children with Locomotion Difficulties
Abstract
1 Introduction
2 Materials and Methods
2.1 Children Characteristics
2.2 Materials Selection
2.3 Actuation System Simulation
2.4 Mechanical Structure Modeling
3 Results
4 Discussion
5 Compliance with Ethical Requirements
6 Conflict of Interest
7 Statement of Informed Consent
8 Statement of Human and Animal Rights
9 Conclusions
Acknowledgements
References
80 Remote Control Architecture for Virtual Reality Application for Ankle Therapy
1 Introduction
2 Materials and Methods
2.1 Robotic Device
2.2 Virtual Reality Application
2.3 Graphical User Interface
2.4 Communication
2.5 Test Protocol
3 Results and Discussion
4 Conclusions and Future Works
References
81 Callus Stiffness Evolution During the Healing Process—A Mechanical Approach
Abstract
1 Introduction
2 Analytical Model
2.1 Stiffnesses Calculations
2.2 Sharing Load Calculations
2.3 Analytical Model Data
3 Results
4 Conclusions
Acknowledgements
Appendix
References
82 An Analytical Model for Knee Ligaments
Abstract
1 Introduction
2 Analytical Model
3 Experimental Approach
4 Results
5 Conclusions
Acknowledgements
Appendix
The 1D Model in Coronal Plane
References
83 Anterior–Posterior Ground Reaction Forces During Gait in Children and Elderly Women
Abstract
1 Introduction
2 Methods
2.1 Subjects
2.2 Procedures
2.3 Data Analysis
3 Results and Discussion
4 Conclusions
Acknowledgements
References
84 Gait Speed, Cadence and Vertical Ground Reaction Forces in Children
Abstract
1 Introduction
2 Materials and Methods
2.1 Sample
2.2 Procedures
2.3 Data Analysis
3 Results
3.1 Sample Characterization
3.2 Vertical Forces and Time Parameters
3.3 Speed and Cadence
4 Discussion and Conclusions
Acknowledgements
References
85 May Angular Variation Be a Parameter for Muscular Condition Classification in SCI People Elicited by Neuromuscular Electrical Stimulation?
Abstract
1 Introduction
2 Methods
2.1 Volunteers
2.2 Neuromuscular Electrical Stimulation, Electrodes and Electrogoniometer
2.3 Experimental Protocol
2.4 Angle Variation
2.5 Statistical Analysis
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
86 Kinematics and Synergies Differences Between Horizontal and Vertical Jump Test
Abstract
1 Introduction
2 Materials and Methods
2.1 Sample and Experimental Protocol
2.2 Biomechanical Models and Data Processing
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
87 Modeling, Control Strategies and Design of a Neonatal Respiratory Simulator
Abstract
1 Introduction
2 Methods
2.1 Respiratory System Model
2.2 Simulation Plant
2.3 Control System Design and Sensoring
3 Results
3.1 Spontaneous Mode
3.2 Pressure-Controlled Mode
3.3 Assisted Mode
4 Discussion
5 Conclusions
Acknowledgements
References
88 Application of Recurrence Quantifiers to Kinetic and Kinematic Biomechanical Data
Abstract
1 Introduction
2 RQA Applied to Kinetic and Kinematic Biomechanical Data
2.1 Recurrence Quantifiers
2.2 Recurrence as a Possibility for Kinetic and Kinematic Data Analysis
3 Conclusions
Acknowledgements
References
89 Virtual Environment for Motor and Cognitive Rehabilitation: Towards a Joint Angle and Trajectory Estimation
Abstract
1 Introduction
2 Materials and Methods
3 The Orbbec Sensor
4 The Virtual Environment
5 The Tasks in the Game and the Data Acquisition Protocol
6 Results
7 Discussion
8 Conclusions
Acknowledgements
References
90 Kinematic Model and Position Control of an Active Transtibial Prosthesis
1 Introduction
2 Model Formulation
2.1 Kinematic Model
2.2 Motor Dynamics
2.3 Bionic Feet Model
2.4 Linearization
2.5 Control System
3 Discussions and Conclusion
References
91 Development of a Parallel Robotic Body Weight Support for Human Gait Rehabilitation
1 Introduction
2 Review of Active BWS Structures
3 Modeling of the Structure
4 Experimental Tests
5 Game Integration
6 Conclusion
References
92 Comparison Between the Passé and Coupé Positions in the Single Leg Turn Movement in a Brazilian Zouk Practitioner: A Pilot Study
Abstract
1 Introduction
2 Material and Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
93 Automatic Rowing Kinematic Analysis Using OpenPose and Dynamic Time Warping
1 Introduction
2 Methods
2.1 OpenPose and Filtering for Angle Joint Estimation
2.2 Dynamic Time Warping
2.3 Our Contribution—Applying DTW to the Output Result of OpenPose
3 Results
4 Conclusion
References
94 Analysis of Seat-to-Hand Vibration Transmissibility in Seated Smartphone Users
Abstract
1 Introduction
2 Materials and Methods
2.1 Volunteers
2.2 Experimental Apparatus
2.3 Experimental Procedure
3 Results and Discussion
4 Conclusions
Acknowledgements
References
95 Analysis of Matrix Factorization Techniques for Extraction of Motion Motor Primitives
1 Introduction
2 Motor Primitives Model
2.1 ICA—Independent Component Analysis
2.2 PCA—Principal Component Analysis
2.3 NNMF—Non-negative Matrix Factorization
2.4 SOBI—Second Order Blind Identification
3 Experimental Results
4 Conclusions
References
96 Control Design Inspired by Motors Primitives to Coordinate the Functioning of an Active Knee Orthosis for Robotic Rehabilitation
1 Introduction
2 Experimental Procedure
2.1 Exoskeleton—ExoTao
2.2 Inverse Dynamics
2.3 OpenSim
2.4 Forward Dynamics
3 Control Developed from Primitive Torques
4 Results
5 Conclusion
References
97 Mechanical Design of an Active Hip and Knee Orthosis for Rehabilitation Applications
1 Introduction
2 Methods
2.1 Physical Requirements
2.2 Brazilian Population Spectrum and Critical User
2.3 Anthropometric Proportions of Human Body
2.4 Mechanical Design Requisites
2.5 Exoskeleton's Mechanical Project
3 Results and Discussion
3.1 Design of the Actuation Direct Drive
3.2 Design of Structural Components
3.3 Prototype Construction and Analysis
4 Conclusions
References
98 A Bibliometric Analysis of Lower Limb Exoskeletons for Rehabilitation Applications
Abstract
1 Introduction
2 Methodology
3 Exploratory Analysis
4 Secondary Analysis
5 Bibliometric Analysis
5.1 Co-authorship Analysis (Authors)
5.2 Co-citation Analysis (Cited References)
5.3 Bibliographic Coupling Analysis (Documents)
5.4 Co-occurrence Analysis (Keywords)
6 Conclusions
Acknowledgements
References
99 Biomechatronic Analysis of Lower Limb Exoskeletons for Augmentation and Rehabilitation Applications
Abstract
1 Introduction
2 Exoskeletons in the Literature
2.1 Commercial Products
2.2 Academic Publications
3 Mechanisms
3.1 Metabolic Cost
3.2 Biomechanics of Walking
3.3 Human Average Walking Speed
3.4 Mechanics of Human Movement
3.5 Movements at the Hip Joint
3.6 Movements at the Knee Joint
3.7 Movements at the Ankle and Foot Articulations
4 Actuators and Sensors
5 Control
6 Human–Robot Interaction
7 Conclusions
Acknowledgements
References
100 Numerical Methods Applied to the Kinematic Analysis of Planar Mechanisms and Biomechanisms
Abstract
1 Introduction
1.1 Complex Numbers
1.2 Closed Loop Method
1.3 Numerical Differentiation
2 Methodology
3 Results and Discussion
3.1 Four Bar Mechanism
3.2 Quick Return Mechanism
3.3 Comparison with Analytical Results
4 Conclusion
Acknowledgements
References
101 Project of a Low-Cost Mobile Weight Part Suspension System
Abstract
1 Introduction
2 Methodology
3 Development
4 Results
5 Conclusion
Acknowledgements
References
102 Design of a Robotic Device Based on Human Gait for Dynamical Tests in Orthopedic Prosthesis
Abstract
1 Introduction
2 Methods
2.1 Image Analysis
2.2 Mechanical Device
3 Results and Discussion
4 Conclusions
Acknowledgements
References
103 Analysis of an Application for Fall Risk Screening in the Elderly for Clinical Practice: A Pilot Study
Abstract
1 Introduction
2 Methods
2.1 Participants
2.2 The Assessment Instrument
2.3 Data Collection
2.4 Data Analysis
2.5 Statistical Analysis
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
104 Development of a Prosthesis and Short-Term Evaluation of Its Use in a Left Pelvic Limb in a Dog
Abstract
1 Introduction
2 Methods
2.1 Evaluation
2.2 Negative and Positive Mold
2.3 Prosthesis Construction
3 Results
4 Discussion
5 Conclusion
Conflict of Interest
References
105 Estimation of Muscle Activations in Black Belt Taekwondo Athletes During the Bandal Chagui Kick Through Inverse Dynamics
Abstract
1 Introduction
2 Methods
3 Results
4 Discussion
4.1 Quadriceps Muscles
4.2 Biceps Femoris
4.3 Gluteal Muscles
4.4 Soleus, Adductor Longus and Tensor Fasciae Latae
5 Conclusions
Acknowledgements
References
Biomedical Devices and Instrumentation
106 Study of a Sonothrombolysis Equipment Based on Phased Ultrasound Technique
Abstract
1 Introduction
2 Materials and Methods
2.1 Simulations and Validation
2.2 Implementation of a Prototype
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
107 Development and Characterization of a Transceiver Solenoid RF Coil for MRI Acquisition of Ex Situ Brain Samples at 7 Tesla
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Conclusions
Conflict of Interest
References
108 Analysis of Breast Cancer Detection Based on Software-Defined Radio Technology
1 Introduction
2 Background and Challenges
2.1 Breast Dielectric Properties
2.2 Microwave Interaction and Processing
2.3 MwI System Approach
2.4 Radar—SFCW Waveform
2.5 SDR Technology
2.6 GNU Radio Development
3 Methodology
4 MwI Analysis
4.1 Target Resolution
4.2 EM—Penetration Depth
5 Results
5.1 Overview
5.2 GNU Radio—Setup
5.3 Baseband Signal
5.4 SNR Measured
6 Conclusion
References
109 Prototype of a Peristaltic Pump for Applications in Biological Phantoms
Abstract
1 Introduction
2 Methodology and Development
2.1 Design of the User Interface
2.2 Design of the Control Stage
2.3 Design of the Mechanical Stage
2.4 Blood Mimic
3 Results
3.1 Doppler Tests
4 Conclusions
Conflict of Interest
References
110 Performance Evaluation of an OOK-Based Visible Light Communication System for Transmission of Patient Monitoring Data
1 Introduction
2 Materials and Methods
2.1 Experimental Setup to Prove Concepts
2.2 System Frequency Response
2.3 A Brief Theory About OOK Codification
3 Experimental Results
3.1 Characterization of the Employed White LED
3.2 Transmitted and Received OOK Signals
3.3 Performance Analysis
3.4 Transmitting the Data of a Multi-parametric Monitor
4 Conclusions
References
111 Equipment for the Detection of Acid Residues in Hemodialysis Line
Abstract
1 Introduction
2 Materials and Methods
2.1 Spectrophotometry Study
2.2 Study of an Electrical System
2.3 Development of a Schematic for Each Study
2.4 Comparison of Methods and Analysis of Responses
3 Results and Discussions
4 Conclusion
Acknowledgements
References
112 Use of Fibrin Sealant in the Treatment of Tendon Lesion: An In Vivo Study
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussions
5 Conclusions
Acknowledgements
References
113 Analysis of Respiratory EMG Signals for Cough Prediction
Abstract
1 Introduction
1.1 Background
2 Methodology
2.1 Muscles Analyzed
2.2 Experimental Setup
2.3 Experiment Steps
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
114 Integrating Power Line and Visible Light Communication Technologies for Data Transmission in Hospital Environments
1 Introduction
2 OFDM Theoretical Background
3 Materials and Methods
3.1 Experimental Setup
3.2 OFDM Parametrization in the VLC System
4 Experimental Results
4.1 Performance Analysis of the PLC System
4.2 Optical Characterization of the White LED
4.3 Performance Evaluation of the VLC System
5 Conclusions
References
115 Design of an Alternating Current Field Controller for Electrodes Exposed to Saline Solutions
Abstract
1 Introduction
2 Material and Methods
2.1 AC-Field Control
2.2 Temperature Control
2.3 Control Algorithms
2.4 Experimental Protocol
3 Results and Discussions
4 Conclusions
Conflict of Interest
References
116 Electrooculography: A Proposed Methodology for Sensing Human Eye Movement
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussions
4 Conclusions
Acknowledgements
References
117 Analysis of Inductance in Flexible Square Coil Applied to Biotelemetry
Abstract
1 Introduction
2 Development of Decagonal Solution
3 Methodology
3.1 Validations
4 Results
5 Conclusions
Acknowledgements
References
118 A Method for Illumination of Chorioallantoic Membrane (CAM) of Chicken Embryo in Microscope
Abstract
1 Introduction
2 Tumoral Angiogenesis
2.1 Definition of Tumor Angiogenesis
2.2 Study of Angiogenesis Using CAM
3 Materials and Methods
3.1 Microscopy
3.2 Limitations of the Traditional Lighting System for Capturing CAM Images
3.3 Study of Lighting Sources
4 Results and Discussion
5 Conclusions
References
119 Development and Experimentation of a rTMS Device for Rats
Abstract
1 Introduction
2 Methods
2.1 Instrumentation
2.2 Experimental Design
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
120 Diagnostic and Monitoring of Atrial Fibrillation Using Wearable Devices: A Scoping Review
Abstract
1 Introduction
2 Methodology
3 Results
3.1 Selection of Articles
3.2 ECG Signal
4 Discussion
5 Conclusions
Conflict of Interest
References
121 The Effects of Printing Parameters on Mechanical Properties of a Rapidly Manufactured Mechanical Ventilator
1 Introduction
2 Materials and Methods
2.1 Slicing Parameter
2.2 Mechanical Tests
2.3 Development of an Ambu-based Mechanical Ventilator
2.4 Simulation and Production of Parts for the Prototype of the Mechanical Ventilator
3 Results
3.1 Mechanical Tests and Simulations
3.2 Printing Parameters and their Influences
4 Discussion
5 Conclusion
References
122 Fully Configurable Real-Time Ultrasound Platform for Medical Imaging Research
Abstract
1 Introduction
2 Method
3 Hardware Design
3.1 Use of Commercial FPGA Modules
4 Configuration of the Platform
4.1 Embedded Configuration Subsystem
4.2 Configurable Parameters
5 Data Acquisition and Image Formation
6 Results
7 Conclusions
Acknowledgements
References
123 Proposal of a Low Profile Piezoelectric Based Insole to Measure the Perpendicular Force Applied by a Cyclist
1 Introduction
2 Materials and Methods
2.1 3D-Printed Insole Structure
2.2 Piezoelectric Film Array
2.3 Calibration Procedure
2.4 Trials
3 Results
3.1 Dynamic Calibration
3.2 Waveform During Trials
4 Discussion
5 Conclusions
References
124 Soft Sensor for Hand-Grasping Force by Regression of an sEMG Signal
1 Introduction
2 Protocol and Acquisition System
2.1 Protocol
2.2 Acquisition
3 Method
3.1 Dynamometer Characterization
3.2 Data Processing
3.3 Regression
4 Results
4.1 Dynamometer Characterization
4.2 Data Processing
4.3 Model Regression
4.4 Online Performance
5 Discussion
6 Conclusion
References
125 Development of a Low-Cost, Open-Source Transcranial Direct-Current Stimulation Device (tDCS) for Clinical Trials
Abstract
1 Introduction
2 Materials and Methods
2.1 Hardware Development
2.2 Application Development
2.3 Bench Test
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
126 Velostat-Based Pressure Sensor Matrix for a Low-Cost Monitoring System Applied to Prevent Decubitus Ulcers
1 Introduction
2 Methodology
2.1 Pressure Sensors Matrix
2.2 Microcontroller
2.3 Wireless Interface
2.4 Application
3 Experimental Tests
4 Results
5 Discussions
6 Conclusion
References
127 Analysis and Classification of EEG Signals from Passive Mobilization in ICU Sedated Patients and Non-sedated Volunteers
1 Introduction
2 Methodology
2.1 Experiment Format
2.2 Emotiv Epoc Neuroheadset
2.3 Preprocessing
2.4 Feature Extraction
2.5 Classification
3 Results and Discussion
3.1 Preprocessing and Signal Visualization
3.2 Classifier
4 Conclusions
References
128 Use of Fluorescence in the Diagnosis of Oral Health in Adult Patients Admitted to the Intensive Care Unit of a Public Emergency Hospital
Abstract
1 Introduction
2 Materials and Methods
2.1 Study Location
2.2 Study Design
2.3 Study Population and Data Collection Procedures
2.4 Analysis of Results
2.5 Ethical Considerations
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
129 A Non-invasive Photoemitter for Healing Skin Wounds
1 Introduction
2 Material and Methods
2.1 LEDs
2.2 Embedded System
2.3 Experiments
3 Results and Discussion
3.1 Optimal Distance
3.2 Irradiation Area
3.3 Temperature
4 Conclusion
References
130 Intermittent Suctioning for Patients Under Artificial Ventilation: A Digital Model Study
Abstract
1 Introduction
2 Materials and Methods
2.1 Modeling
2.2 Validation
2.3 Simulation
3 Results
3.1 Validation of Models
3.2 Simulation
4 Discussion
4.1 Validation
4.2 Simulation
4.3 Limitations
5 Conclusions
Acknowledgements
References
131 Prototype for Testing Frames of Sunglasses
Abstract
1 Introduction
2 Literature Review
3 Materials and Methods
4 Results and Discussion
5 Conclusions
Conflict of Interest
References
132 Evaluation of Hyperelastic Constitutive Models Applied to Airway Stents Made by a 3D Printer
Abstract
1 Introduction
2 Materials and Methods
2.1 Finite Element Analysis Software
2.2 Flexible 3D Filament
2.3 Mechanical Tests of Uniaxial Tension and Pure Shear
2.4 Mechanical Loads Investigated
2.5 Manufacture of the Samples
3 Results
3.1 Uniaxial Tension Test
3.2 Pure Shear Test
3.3 Fit of Hyperelastic Constitutive Constants
3.4 Virtual Simulation of the Uniaxial Tension Test
3.5 Mechanical Testing on the 3D Ninjaflex Stent
3.6 Mechanical Comparison Between HCPA-1, 3D Ninjaflex Black Stent and Virtual Simulation
4 Conclusions
References
133 Are PSoC Noise Levels Low Enough for Single-Chip Active EMG Electrodes?
1 Introduction
2 Methodology
2.1 Amplifiers and Signal Conditioning
2.2 Analog-to-Digital Converter
2.3 Digital Filter
2.4 USB Data Transmission
3 Results
4 Discussion
5 Conclusion
References
134 Implementation of an Ultrasound Data Transfer System via Ethernet with FPGA-Based Embedded Processing
Abstract
1 Introduction
2 Methods
3 Results
4 Conclusions
Acknowledgements
References
135 Software for Physiotherapeutic Rehabilitation: A Study with Accelerometry
Abstract
1 Introduction
2 Methods
3 Results and Discussion
4 Conclusions
Conflict of Interest
References
136 B-Mode Ultrasound Imaging System Using Raspberry Pi
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Conclusions
Acknowledgements
References
137 Access Control in Hospitals with RFID and BLE Technologies
Abstract
1 Introduction
2 Materials and Methods
2.1 Topology and Communication Protocol
2.2 Software
2.3 Hardware
2.4 Firmware
2.5 Test Protocol
3 Results
4 Discussion
5 Conclusions
Conflict of Interest
References
138 Piezoelectric Heart Monitor
Abstract
1 Introduction
2 Methods
3 Hardware
3.1 Power Module
3.2 Sensor Module
3.3 Digitization Module (PIC)
3.4 Human–Machine Interface—HMI
4 Software
4.1 Main Thread
4.2 Acquisition Thread
5 Seismocardiogram Fiducial Points
6 Results
7 Discussions
8 Conclusions
Conflict of Interest
References
139 Transducer for the Strengthening of the Pelvic Floor Through Electromyographic Biofeedback
Abstract
1 Introduction
2 Materials and Methods
3 Electromyographic Feedback
4 Results and Discussion
5 Conclusions
Conflict of Interest
References
140 Monitoring Hemodynamic Parameters in the Terrestrial and Aquatic Environment: An Application in a 6-min Walk Test
Abstract
1 Introduction
2 Materials and Methods
2.1 Prototype Development
2.2 Validation and Application
2.3 Data Analysis
3 Results and Discussion
4 Conclusions
Acknowledgements
References
141 Scinax tymbamirim Amphibian Advertisement Sound Emulator Based on Arduino
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
142 Microphone and Receiver Calibration System for Otoacoustic Emission Probes
Abstract
1 Introduction
2 Materials and Methods
2.1 Adjustment Procedures
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
143 Pneumotachograph Calibration: Influence of Regularization Methods on Parameter Estimation and the Use of Alternative Calibration Models
Abstract
1 Introduction
2 Materials and Methods
2.1 Mathematical Description of Flow Calibration
2.2 Regularization of Parameter Estimation
2.3 Calibration Function
2.4 Experimental Setup
3 Results
4 Conclusions
Acknowledgements
References
144 Geriatric Physiotherapy: A Telerehabilitation System for Identifying Therapeutic Exercises
1 Introduction
2 Materials and Methods
2.1 Methodology
2.2 Data Analysis
3 Results and Discussion
3.1 Volunteer 1
3.2 Volunteer 2
4 Conclusions
References
145 UV Equipment for Food Safety
Abstract
1 Introduction
2 Material and Methods
2.1 Equipment
2.2 Microbiological Test
3 Results
3.1 Equipment Construction
3.2 Microbiological Test
4 Conclusions
Acknowledgements
References
146 Method to Estimate Doses in Real-Time for the Eyes
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
147 An IoT-Cloud Enabled Real Time and Energy Efficient ECG Reader Architecture
1 Introduction
2 Related Work
3 ECG Data Acquisition Biosensor and ESP32 Embedded Systems
3.1 ESP32 Embedded System
3.2 EKG-EMG Bio-sensor for Data Acquisition
4 System Architecture
4.1 Acquisition Module
4.2 Processing Module
4.3 Data Visualization—Client Module
4.4 Cloud Storage
5 Methodology
5.1 Comparison with Related Works
5.2 Test Bench
6 Results
6.1 Device Parameters and Budget
6.2 Power Parameters
6.3 Cloud Server Test
6.4 Comparison with Related Work and Real ECG Reading Test
7 Conclusion
References
148 Development of a Hydraulic Model of the Microcontrolled Human Circulatory System
Abstract
1 Introduction
2 Materials and Methods
2.1 Reference Model
2.2 Reservoir and Compliance Chamber
2.3 Restriction Valve
2.4 Piston Pump
2.5 Sensing and Actuators System
2.6 Control System
2.7 Bench Test Parameter Control
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
149 A Design Strategy to Control an Electrosurgery Unit Output
Abstract
1 Introduction
1.1 ESU Output Circuit
2 Materials and Methods
2.1 PSPICE Model
2.2 Sweep Parameters Method
3 Results and Discussion
4 Conclusions
Conflict of Interest
References
150 Near Field Radar System Modeling for Microwave Imaging and Breast Cancer Detection Applications
1 Introduction
2 Microwave Imaging and Breast Cancer Detection Review
2.1 Radar Topologies
2.2 Frequency Spectrum and Signal Propagation Considerations
2.3 System Modeling
3 Methodology
4 Proposed System Modeling
5 Results and Discussions
6 Conclusions
References
Biomedical Optics and Systems and Technologies for Therapy and Diagnosis
151 Development and Validation of a New Hardware for a Somatosensorial Electrical Stimulator Based on Howland Current-Source Topology
1 Introduction
2 Materials and Methods
2.1 EELS
2.2 Current-Source Design and Calibration Scheme
2.3 Current-Source Testing
3 Results
4 Discussion
5 Conclusion
References
152 Autism Spectrum Disorder: Smart Child Stimulation Center for Integrating Therapies
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussions
5 Conclusions
Acknowledgements
References
153 Fuzzy System for Identifying Pregnancy with a Risk of Maternal Death
1 Introduction
2 Identifying Pregnancy with a Risk of Maternal Death Based on Fuzzy Logic
2.1 Fuzzy Input-Output Inference Mapping
2.2 Input and Output Fuzzy Sets
2.3 Input Linguistic Variables and Linguistic Terms
2.4 Fuzzy Assessment for Maternal Death Risk Rules
3 Results and Discussion
4 Conclusion
References
154 Evaluation of Temperature Changes Promoted in Dental Enamel, Dentin and Pulp During the Tooth Whitening with Different Light Sources
Abstract
1 Introduction
2 Material and Method
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
155 Effects of a Low-Cost LED Photobiomodulation Therapy Equipment on the Tissue Repair Process
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
156 Occupational Dose in Pediatric Barium Meal Examinations
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusions
Acknowledgements
References
157 Synthesis of N-substituted Maleimides Potential Bactericide
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusion
Conflict of Interest
References
158 A Review About the Main Technologies to Fight COVID-19
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
159 Evaluation of Cortisol Levels in Artificial Saliva by Paper Spray Mass Spectrometry
Abstract
1 Introduction
2 Paper Spray Mass Spectrometry (PS-MS)
3 Material and Methods
4 Results and Discussion
5 Conclusions
Acknowledgements
References
160 Tinnitus Relief Using Fractal Sound Without Sound Amplification
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusion
Conflict of Interest
References
161 Analysis of the Heat Propagation During Cardiac Ablation with Cooling of the Esophageal Wall: A Bidimensional Computational Modeling
Abstract
1 Introduction
2 Materials and Methods
3 Results
3.1 Simulation for Te = 60 °C
3.2 Simulation for Te = 70 °C
3.3 Simulation for Te = 80 °C
4 Discussion
5 Conclusion
Acknowledgements
References
162 Development of a Rapid Test for Determining the ABO and Rh-Blood Typing Systems
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
Conflict of Interest
References
163 In Silico Study on Electric Current Density in the Brain During Electrochemotherapy Treatment Planning of a Dog’s Head Osteosarcoma
Abstract
1 Introduction
2 Materials and Methods
2.1 Case Study
2.2 In Silico Study
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
164 Evaluation of Engorged Puerperal Breast by Thermographic Imaging: A Pilot Study
Abstract
1 Introduction
2 Methods
3 Results and Discussion
4 Conclusions
References
165 Study of the Photo Oxidative Action of Brosimum gaudichaudii Extract
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
166 Electrochemotherapy Effectiveness Loss Due to Electrode Bending: An In Silico and In Vitro Study
Abstract
1 Introduction
2 Materials and Methods
2.1 In Silico Study
2.2 In Vitro Study
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
167 Conductive Gels as a Tool for Electric Field Homogenization and Electroporation in Discontinuous Regions: In Vitro and In Silico Study
1 Introduction
2 Materials and Methods
2.1 In Silico Study
2.2 In Vitro Study
2.3 Statistical Analysis
3 Results
4 Discussion
5 Conclusion
References
168 Line Shape Analysis of Cortisol Infrared Spectra for Salivary Sensors: Theoretical and Experimental Observations
Abstract
1 Introduction
2 Material and Methods
2.1 Sample Preparation
2.2 Infrared Conditions
2.3 Theoretical Calculation
3 Results and Discussion
4 Conclusions
Acknowledgements
References
169 Differential Diagnosis of Glycosuria Using Raman Spectroscopy
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
170 Development of a Moderate Therapeutic Hypothermia Induction and Maintenance System for the Treatment of Traumatic Brain Injury
Abstract
1 Introduction
2 Materials and Methods
2.1 Operation
2.2 System Hardware
2.3 Developed Temperature Sensors
2.4 Control Software
3 Results
4 Discussion
5 Conclusions
Conflict of Interest
References
171 Temperature Generation and Transmission in Root Dentin During Nd:YAG Laser Irradiation for Preventive Purposes
Abstract
1 Introduction
2 Material and Method
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
172 Photobleaching of Methylene Blue in Biological Tissue Model (Hydrolyzed Collagen) Using Red (635 nm) Radiation
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusion
Acknowledgements
References
173 Effect of Photodynamic Inactivation of Propionibacterium Acnes Biofilms by Hypericin (Hypericum perforatum)
Abstract
1 Introduction
2 Materials and Methods
2.1 Growth of Strain and Biofilm Formation
2.2 Photosensitizer
2.3 Light Source
2.4 Experimental Groups
2.5 Disaggregation of Biofilm and CFU Counting
2.6 Statistical Analysis
3 Results
4 Discussion
5 Conclusions
Conflict of Interest
References
174 Evaluating Acupuncture in Vascular Disorders of the Lower Limb Through Infrared Thermography
Abstract
1 Introduction
2 Materials and Methods
2.1 The Sample
2.2 Acupuncture Protocol
2.3 Acquisition of Thermographic Images
2.4 Equipment
2.5 Thermal Image Processing
3 Results
3.1 Patient 1
3.2 Patient 2
4 Discussion
5 Conclusions
Acknowledgements
References
175 Photodynamic Inactivation in Vitro of the Pathogenic Fungus Paracoccidioides brasiliensis
Abstract
1 Introduction
2 Materials and Methods
2.1 Fungal Lineage and Culture Media
2.2 Cultivation of Paracoccidioides brasiliensis
2.3 Preparation of Paracoccidioides brasiliensis for in Vitro Tests
2.4 Photosensitizer
2.5 Light Source
2.6 Photodynamic Action
2.7 Statistical Analysis
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
176 In Vitro Study of the Microstructural Effects of Photodynamic Therapy in Medical Supplies When Used for Disinfection
Abstract
1 Introduction
2 Material and Method
3 Results and Discussion
4 Conclusion
Acknowledgements
References
177 Technological Development of a Multipurpose Molecular Point-of-Care Device for Sars-Cov-2 Detection
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusions
Acknowledgements
References
178 Chemical Effects of Nanosecond High-Intensity IR and UV Lasers on Biosilicate® When Used for Treating Dentin Incipient Caries Lesions
Abstract
1 Introduction
2 Material and Method
3 Results and Discussion
4 Conclusion
Acknowledgements
References
179 Antioxidant Activity of the Ethanol Extract of Salpichlaena Volubilis and Its Correlation with Alopecia Areata
Abstract
1 Introduction
2 Material and Methods
3 Results
3.1 Extract Characterization
4 Discussion
5 Conclusions
Acknowledgements
References
180 Evaluation of the Heart Rate Variability with Laser Speckle Imaging
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusion
Acknowledgements
References
181 Photobiomodulation and Laserpuncture Evaluation for Knee Osteoarthritis Treatment: A Literature Review
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
182 Methodology for the Classification of an Intraocular Lens with an Orthogonal Bidimensional Refractive Sinusoidal Profile
Abstract
1 Introduction
2 Concepts
2.1 Types of Intraocular Lenses
2.2 Lens Design
2.3 Merit Functions
3 Methodology
3.1 Algorithm
3.2 Thresholds and Ranking
3.3 Score
4 Results
4.1 3D Bar Chart
4.2 Discussion
4.3 Future Improvements
5 Conclusions
Acknowledgements
References
183 Discrimination Between Artisanal and Industrial Cassava by Raman Spectroscopy
Abstract
1 Introduction
2 Objectives
3 Material and Methods
4 Results
5 Discussion
6 Conclusion
Conflict of Interest
References
184 Effect of Light Emitted by Diode as Treatment of Radiodermatitis
Abstract
1 Introduction
2 Materials and Methods
2.1 Development of Photoemitter Equipment
2.2 Irradiation of the Panniculus Carnosus
2.3 LED or Ambient Light Exposure
2.4 Macroscopic Analysis
2.5 Microscopic Analysis
2.6 Euthanasia of Animals
2.7 Statistical Analysis
3 Results
3.1 Macroscopic Analysis
3.2 Microscopic Analysis
4 Discussion
5 Conclusion
Acknowledgements
References
185 Effect of Photobiomodulation on Osteoblast-like Cells Cultured on Lithium Disilicate Glass-Ceramic
Abstract
1 Introduction
2 Materials and Methods
2.1 Cell Culture
2.2 Sample Preparation
2.3 LED Irradiation
2.4 Cell Viability
2.5 Functional Analysis
3 Results
3.1 Cellular Viability
3.2 Functional Analysis
4 Discussion
5 Conclusions
Acknowledgements
References
186 Reference Values of Current Perception Threshold in Adult Brazilian Cohort
Abstract
1 Introduction
2 Materials and Methods
2.1 Subjects
2.2 CPT Protocol
2.3 Data Analysis
3 Results
4 Compliance with Ethical Requirements
4.1 Conflict of Interest
4.2 Statement of Human and Animal Rights
5 Conclusions
References
187 Analysis of the Quality of Sunglasses in the Brazilian Market in Terms of Ultraviolet Protection
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusions
Acknowledgements
References
188 Do Sunglasses on Brazilian Market Have Blue-Light Protection?
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
189 Thermography and Semmes-Weinstein Monofilaments in the Sensitivity Evaluation of Diabetes Mellitus Type 2 Patients
Abstract
1 Introduction
2 Materials and Methods
2.1 Evaluation Protocol
2.2 Thermographic Images
2.3 Sensitivity Test
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
Biomedical Robotics, Assistive Technologies and Health Informatics
190 Design and Performance Evaluation of a Custom 3D Printed Thumb Orthosis to Reduce Occupational Risk in an Automotive Assembly Line
1 Introduction
2 Materials and Methods
2.1 Thumb Orthosis Development
2.2 sEMG Data Acquisition and Signal Processing
2.3 Statistical Analysis
3 Results
3.1 Amplitude Analysis
3.2 Median Analysis
3.3 Qualitative Material Printability and Comfort Analysis
4 Discussion
5 Conclusion
References
191 Handy Orthotics: Considerations on User-Centered Methodology During Development Stages of Myoelectric Hand Orthosis for Daily Assistance
1 Introduction
2 Methodology
2.1 Bibliometric Analysis
3 Considerations on Methodologies and Outcomes
3.1 Regarding User-Centered Technologies
3.2 Regarding Conceptual Investigation
3.3 Regarding Interaction Design
3.4 Regarding Usability
3.5 Regarding User Experience
4 Discussion
5 Conclusion
References
192 Human Activity Recognition System Using Artificial Neural Networks
1 Introduction
2 Materials and Methods
2.1 Application
2.2 Data Acquisition
2.3 Signal Preprocessing
2.4 Segmentation
2.5 Feature Extraction
2.6 Model Selection
2.7 Classification
3 Results and Discussion
3.1 Dataset Development
3.2 Model Selection
3.3 Classification Results
3.4 Comparison with Related Works
4 Conclusion
References
193 Modeling and Simulation of a Fuzzy-Based Human Control Using an Interaction Model Between Human and Active Knee Orthosis
1 Introduction
2 Methodology
2.1 Human-AKO Interaction Model
2.2 Simulation Algorithm
2.3 Experimental Procedure
3 Results and Discussion
4 Conclusion
References
194 Wearable Devices in Healthcare: Challenges, Current Trends and a Proposition of Affordable Low Cost and Scalable Computational Environment of Internet of Things
Abstract
1 Introduction
2 Literature Review
2.1 Use of Wearable Devices in Healthcare
2.2 Use of Wearable Devices for Cardiovascular Diseases
3 Trends in the Use of Wearable Devices in Healthcare
3.1 Use of the Gold Standard Holter Monitor
3.2 Energy Consumption and Operating Frequency
3.3 Use of Fog Computing
4 Computational Environment and Prototype
4.1 Hardware Prototype
4.2 Software Components
5 Final Remarks
References
195 Use of RGB-D Camera for Analysis of Compensatory Trunk Movements in Upper Limbs Rehabilitation
1 Introduction
2 Methodology
2.1 Experimental Protocols
2.2 Kinect V2 Acquisition Systems
2.3 Rehabilitation Training Tasks
2.4 Experimental Sessions
3 Tests and Results
3.1 Range of Motion Capture Tests
3.2 Trunk Compensation Tests
4 Conclusion
References
196 Repetitive Control Applied to a Social Robot for Interaction with Autistic Children
Abstract
1 Introduction
2 Materials and Methods
3 Tests and Results
4 Conclusions
Acknowledgements
References
197 Analysis of Sensors in the Classification of the Brazilian Sign Language
1 Introduction
2 Materials and Methods
2.1 Instrumentation
2.2 Experiment Procedure
2.3 Segmentation
2.4 Feature Extraction
2.5 Classifier
3 Results and Discussion
3.1 Sensor Analysis by Type
3.2 Analysis of the Sensors in Sets
4 Conclusion
References
198 A Smart Wearable System for Firefighters for Monitoring Gas Sensors and Vital Signals
1 Introduction
2 Related Works
3 Materials and Methods
3.1 Measurement of Gas Concentration
3.2 Measurement of Vital Signs
4 Data Processing and Monitoring
5 Conclusions
References
199 Importance of Sequencing the SARS-CoV-2 Genome Using the Nanopore Technique to Understand Its Origin, Evolution and Development of Possible Cures
Abstract
1 Introduction
2 SARS-CoV-2
3 Mutations
4 Sequencing
5 Conclusions
Acknowledgements
References
200 Vidi: Artificial Intelligence and Vision Device for the Visually Impaired
1 Introduction
2 Materials and Methods
2.1 Hardware
2.2 Software
2.3 Physical Structure
3 Results
4 Discussion
5 Conclusion
References
201 Mobile Application for Aid in Identifying Fall Risk in Elderly: App Fisioberg
Abstract
1 Introduction
2 Methods
3 Results
4 Discussion
5 Conclusion
Conflict of Interest
References
202 Real-Time Slip Detection and Control Using Machine Learning
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussions
5 Conclusions
Acknowledgements
References
203 Programmable Multichannel Neuromuscular Electrostimulation System: A Universal Platform for Functional Electrical Stimulation
1 Introduction
2 Materials and Methods
2.1 Electrical Stimulator System Architecture
2.2 Firmware
2.3 Tests Description
3 Results and Discussion
4 Conclusion
References
204 Absence from Work in Pregnancy Related to Racial Factors: A Bayesian Analysis in the State of Bahia—Brazil
Abstract
1 Introduction
2 Methods
3 Results
4 Discussion
5 Conclusion
References
205 Perspectives on EMG-Controlled Prosthetic Robotic Hands: Trends and Challenges
1 Introduction
2 Prosthetic Hand
3 EMG Acquisition and Processing
4 Biofeedback
5 Mechanical Performance
6 User Experience
7 Discussion and Conclusion
References
206 Use of Workspaces and Proxemics to Control Interaction Between Robot and Children with ASD
1 Introduction
2 Materials and Methods
2.1 State Machine
2.2 Control Law
2.3 Child's Path Simulation
2.4 ROS Packages and Messages
3 Results
3.1 Displaying Emotions
4 Conclusion
References
207 Proposal of a New Socially Assistive Robot with Embedded Serious Games for Therapy with Children with Autism Spectrum Disorder and Down Syndrome
1 Introduction
2 Theoretical Background
2.1 Down Syndrome
2.2 Autism Spectrum Disorder
3 Previous Works Developed in NTA-UFES/Brazil
3.1 Serious Games
3.2 Socially Assistive Robots
3.3 Robot MARIA T-21
3.4 Assessment
4 Conclusion
References
208 Performance Assessment of Wheelchair Driving in a Virtual Environment Using Head Movements
Abstract
1 Introduction
2 Materials and Methods
2.1 Wheelchair and Motion Tracker Integration
2.2 Integration with Virtual Reality Simulator
2.3 User Performance Assessment
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
209 Use of Artificial Intelligence in Brazil Mortality Data Analysis
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
210 Dual Neural Network Approach for Virtual Sensor at Indoor Positioning System
Abstract
1 Introduction
2 Related Works
3 Methodology
3.1 Database Construction
3.2 System Architecture and Experiments Setup
4 Results
5 Discussion
6 Conclusions
Acknowledgements
References
211 Development of Simulation Platform for Human-Robot-Environment Interface in the UFES CloudWalker
1 Introduction
2 Materials and Methods
2.1 UFES CloudWalker Simulation Model
2.2 Simulation Environment
2.3 Map Data Extraction and Robot Navigation Algorithms
2.4 Experimental Procedures
3 Results and Discussion
3.1 Mapping and Localization
3.2 Navigation and Obstacles Avoidance
4 Conclusions and Future Work
References
212 Assisted Navigation System for the Visually Impaired
Abstract
1 Introduction
2 Material and Methods
2.1 System Details
2.2 Data Collection
3 Results and Discussion
4 Conclusions
Acknowledgements
References
213 Development of Bionic Hand Using Myoelectric Control for Transradial Amputees
1 Introduction
2 Materials and Methods
2.1 Data Acquisition and Processing
2.2 Development of Prosthetic Hand
2.3 Control
3 Results
4 Conclusion
References
214 Communication in Hospital Environment Using Power Line Communications
Abstract
1 Introduction
2 Materials and Methods
2.1 Prototype
2.2 Firmware
2.3 Test Protocols
3 Results and Discussion
4 Conclusion
Conflict of Interest
References
215 Influence of Visual Clue in the Motor Adaptation Process
Abstract
1 Introduction
2 Materials and Methods
2.1 Equipment
2.2 Experimental Protocol
2.3 Data Analysis
2.4 Endpoint Errors
2.5 Volunteers
3 Results
3.1 Impact of Visual Clue on Movement Adaptation
3.2 Effect of Obstruction of Vision Under the Manipulator
3.3 Effects of Concentric and Eccentric Force Fields
4 Discussion
4.1 Visual Clues Lead to Slower Adaptation
4.2 Greater Adaptability in Concentric Fields
5 Conclusion
Acknowledgements
References
216 Application of MQTT Network for Communication in Healthcare Establishment
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusions
Conflict of Interest
References
217 Artificial Neural Network-Based Shared Control for Smart Wheelchairs: A Fully-Manual Driving for the User
1 Introduction
2 Literature Review
3 Methodology
3.1 Inputs
3.2 ANN Training
4 Results
5 Conclusion
References
218 Identifying Deficient Cognitive Functions Using Computer Games: A Pilot Study
Abstract
1 Introduction
2 Related Work
3 Methodology
3.1 The Game
3.2 The Experiment
3.3 Cognitive Functions
3.4 Measurement
4 Results
5 Discussion
6 Conclusion
Conflict of Interest
References
219 A State of the Art About Instrumentation and Control Systems from Body Motion for Electric-Powered Wheelchairs
Abstract
1 Introduction
2 Materials and Methods
3 Search in Specialized Databases
4 Selection and Inclusion Criteria
5 Data Extraction and Evaluation Criteria
6 Synthesis, Analysis and Presentation of Results
7 Results
8 Controller Design from Head Motion Capture
9 Controller Design from Hand Motion Capture
10 Discussion
11 Conclusion
Acknowledgements
References
220 Socket Material and Coefficient of Friction Influence on Residuum-Prosthesis Interface Stresses for a Transfemoral Amputee: A Finite Element Analysis
Abstract
1 Introduction
2 Materials and Methods
2.1 Numerical Model Development
2.2 Socket Modifications
3 Results
4 Discussion
4.1 Parallels with Literature-Based Experimental Pressure Measurements in Humans and Numerical Studies
4.2 Evaluation of Different Materials and COFs
5 Conclusions
Acknowledgements
References
221 Subject Specific Lower Limb Joint Mechanical Assessment for Indicative Range Operation of Active Aid Device on Abnormal Gait
Abstract
1 Introduction
2 Materials and Methods
2.1 Gait Aid Devices Features
2.2 Trial Tests
2.3 Inverse Kinematics and Dynamics
2.4 Signal Analysis
3 Results and Discussion
3.1 Kinematics
3.2 Frequency Spectrum
3.3 Dynamics and Energy
3.4 Dynamic Stiffness
4 Conclusions
Conflict of Interest
References
222 Web/Mobile Technology as a Facilitator in the Cardiac Rehabilitation Process: Review Study
Abstract
1 Introduction
2 Methodology
2.1 Inclusion Criteria
2.2 Exclusion Criteria
2.3 Databases and Research Strategies
3 Results
4 Discussion
5 Conclusions
Conflict of Interest
References
Biomedical Signal and Image Processing
223 Regression Approach for Cranioplasty Modeling
1 Introduction
2 Materials
3 Methodology
3.1 Information Extraction from DICOM Images
3.2 Phantom Generation
3.3 Mirroring of Contour Points
3.4 Determination of Contour Points in the Hemisphere of Interest
3.5 Determination of the Polynomial Regression Region in X-Axis
3.6 Dataset for Polynomial Regression
4 Results and Discussion
5 Conclusion and Further Work
References
224 Principal Component Analysis in Digital Image Processing for Automated Glaucoma Diagnosis
1 Introduction
2 Texture Feature Extraction
3 Dimensionality Reduction
3.1 Principal Component Analysis
3.2 PCA of a Digital Image
4 Materials and Methods
4.1 Imagery Database
4.2 Texture Features
4.3 Dimensionality Reduction Stage
4.4 Classification
5 Results
6 Conclusions
References
225 Pupillometric System for Cognitive Load Estimation in Noisy-Speech Intelligibility Psychoacoustic Experiments: Preliminary Results
Abstract
1 Introduction
2 Material and Methods
2.1 Hardware
2.2 Software
2.3 Robustness of the CL Index
2.4 Psychoacoustic Experiments
3 Results and Discussion
4 Conclusion
Acknowledgements
References
226 VGG FACE Fine-Tuning for Classification of Facial Expression Images of Emotion
1 Introduction
2 Related Work
3 Methodology
3.1 Datasets and Data Pre-processing
3.2 Feature Extraction and Classification
4 Results and Conclusion
References
227 Estimation of Magnitude-Squared Coherence Using Least Square Method and Phase Compensation: A New Objective Response Detector
Abstract
1 Introduction
2 Methods
2.1 Magnitude-Squared Coherence (MSC)
2.2 New Detector
2.3 Windowing
3 Material and Methods
3.1 Stimuli
3.2 EEG Data
3.3 Epoch Length
3.4 Performance Measurement
4 Results and Discussion
5 Conclusions
Acknowledgements
References
228 Image Processing as an Auxiliary Methodology for Analysis of Thermograms
Abstract
1 Introduction
2 Methodology
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
229 Performance Comparison of Different Classifiers Applied to Gesture Recognition from sEMG Signals
Abstract
1 Introduction
2 Materials and Methods
2.1 Subjects
2.2 Acquisition Setup and Protocol
2.3 Pre-processing
3 Classification Schemes
3.1 Support Vector Machines
3.2 Convolutional Neural Networks
3.3 Hyperdimensional Computing
4 Results and Discussion
4.1 Impact of Number of Gestures
4.2 Impact of Number of Training Examples
4.3 Adaptation Capabilities
4.4 Discussion
5 Conclusions
Acknowledgements
References
230 Modelling of Inverse Problem Applied to Image Reconstruction in Tomography Systems
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussions
5 Conclusion
Acknowledgements
References
231 Towards a Remote Vital Sign Monitoring in Accidents
1 Introduction
2 Materials and Methods
2.1 Experiment Protocol
2.2 Image Acquisition
2.3 Skin Segmentation
2.4 Signal Extraction
2.5 Frequency Analysis
3 Results and Discussion
4 Conclusion
References
232 Analysis About SSVEP Response to 5.5–86.0 Hz Flicker Stimulation
1 Introduction
2 Methods
2.1 Volunteers and EEG Signal Acquisition
2.2 Experimental Procedure
2.3 Applied Questionnaire
2.4 Signal Processing
3 Results and Discussion
3.1 Questionnaire Results
3.2 SSVEP Amplitude Response and SNR
4 Conclusion
References
233 Correlations Between Anthropometric Measurements and Skin Temperature, at Rest and After a CrossFit® Training Workout
Abstract
1 Introduction
2 Methods
2.1 Sample
2.2 Procedures
2.3 Data Collection
2.4 Instruments
2.5 Thermal Images Processing Method
2.6 Statistical Analysis
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
234 Channel Influence in Armband Approach for Gesture Recognition by sEMG Signals
1 Introduction
2 Materials and Methods
2.1 Data Acquisition and Segmentation
2.2 Data Processing and Classification
3 Results and Discussion
4 Conclusion
References
235 Human Activity Recognition from Accelerometer Data with Convolutional Neural Networks
Abstract
1 Introduction
2 Materials and Methods
2.1 Dataset
2.2 CNNs
2.3 CNN Architecture and Training Parameters
3 Results and Discussions
4 Conclusions
Acknowledgements
References
236 Multimodal Biometric System Based on Autoencoders and Learning Vector Quantization
Abstract
1 Introduction
2 Methodology
2.1 Database
2.2 Architecture of the Multimodal Biometric System
2.3 Face Trait
2.4 Voice Trait
2.5 Fusion and Matching
3 Results
4 Analysis of the Results
5 Conclusion
Acknowledgements
References
237 Evaluating the Performance of Convolutional Neural Networks with Direct and Sequential Acyclic Graph Architectures in Automatic Segmentation of Breast Lesions in Ultrasound Images
Abstract
1 Introduction
2 Materials and Methods
3 Building the Database
4 CNN Architectures Choice and Implementation
5 Experiments
6 Evaluation Metrics
7 Results and Discussions
8 Conclusions
Acknowledgements
References
238 Evaluation and Systematization of the Transfer Function Method for Cerebral Autoregulation Assessment
1 Introduction
2 Methods
2.1 Data Collecting
2.2 Pre-processing Signals in the Time Domain—CAAos Platform
2.3 Evaluating the Cerebral Autoregulation Parameters in the Frequency Domain—CAAos Platform
2.4 Statistical Analysis
3 Results
4 Discussion
5 Conclusion
References
239 Anomaly Detection Using Autoencoders for Movement Prediction
1 Introduction
1.1 Anomaly Detection
2 Methodology
2.1 The VAE Model
2.2 sEMG Database
2.3 Feature Extraction
2.4 VAE Algorithm
3 Results
4 Conclusion
References
240 A Classifier Ensemble Method for Breast Tumor Classification Based on the BI-RADS Lexicon for Masses in Mammography
1 Introduction
1.1 Motivation of the Proposed Scheme
2 Materials and Methods
2.1 Mammography Dataset
2.2 Feature Extraction
2.3 Proposed Approach
2.4 Experimental Setup
3 Results
4 Discussion and Conclusions
References
241 A Comparative Study of Neural Computing Approaches for Semantic Segmentation of Breast Tumors on Ultrasound Images
1 Introduction
2 Materials and Methods
2.1 BUS Image Dataset
2.2 Conventional Approach
2.3 Convolutional Approach
2.4 Segmentation Performance Assessment
3 Results
4 Discussion and Conclusions
References
242 A Preliminary Approach to Identify Arousal and Valence Using Remote Photoplethysmography
1 Introduction
2 Methods
2.1 Database
2.2 Remote Photoplethysmography Estimation
2.3 Feature Extraction
2.4 Classification
3 Results and Discussion
3.1 Personalized Emotion Classification
3.2 Cross-Subjects Emotion Classification
3.3 Heart Rate Variation for Emotion Classification in the Literature
4 Conclusion
References
243 Multi-label EMG Classification of Isotonic Hand Movements: A Suitable Method for Robotic Prosthesis Control
1 Introduction
2 Materials and Methods
2.1 Data Acquisition
2.2 Multi-label Approach
2.3 Feature Extraction and Classification
2.4 Evaluation Metrics
3 Results and Discussion
4 Conclusion
References
244 Evaluation of Vectorcardiography Parameters Matrixed Synthesized
Abstract
1 Introduction
2 Materials and Methods
2.1 Experimental Protocol
2.2 Data Collection
2.3 Vectorcardiogram of the Exams
2.4 Comparison of the Vectorcardiograms
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
245 Microstate Graphs: A Node-Link Approach to Identify Patients with Schizophrenia
1 Introduction
2 Methods
2.1 Database Description
2.2 Microstate Analysis
2.3 Graph Theory Applied to Microstate Analysis
2.4 Global and Local Properties
2.5 Classification
3 Results and Discussion
4 Conclusion
References
246 Eigenspace Beamformer Combined with Generalized Sidelobe Canceler and Filters for Generating Plane Wave Ultrasound Images
Abstract
1 Introduction
2 Background
2.1 Minimum Variance (MV) Beamformer
2.2 Eigenspace-Based Minimum Variance (EBMV) Beamformer
3 Method
3.1 Image Data Acquisition
3.2 Generalized Sidelobe Canceler (GSC)
3.3 Wiener and Kuan Filters
4 Results and Discussion
5 Conclusions
Acknowledgements
References
247 Anatomical Atlas of the Human Head for Electrical Impedance Tomography
1 Introduction
2 Materials and Methods
2.1 MR Images
2.2 Spatial Normalization
2.3 Segmentation
2.4 Resistivity Images
2.5 Statistical Atlas
2.6 Mesh Development and Forward Problem
2.7 EIT Inverse Problem
3 Results
4 Discussion and Conclusion
References
248 Sparse Arrays Method with Generalized Sidelobe Canceler Beamformer for Improved Contrast and Resolution in Ultrasound Ultrafast Imaging
Abstract
1 Introduction
2 Sidelobe Canceler Beamformer
3 Sparse Arrays Method
4 Results and Discussion
5 Conclusions
Acknowledgements
References
249 Center of Mass Estimation Using Kinect and Postural Sway
Abstract
1 Introduction
2 Materials and Methods
2.1 Device—Kinect
2.2 Software—Python Code and Kinect SDK
2.3 Hardware—Machine
2.4 Kinematic Method
2.5 Procedures
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
250 Estimation of Directed Functional Connectivity in Neurofeedback Training Focusing on the State of Attention
1 Introduction
2 Materials and Methods
2.1 Participants
2.2 Procedures
2.3 Data Acquisition
2.4 Pre-processing of Data
2.5 sLORETA
2.6 Effective Connectivity Analysis
3 Results
4 Discussion
5 Conclusion
References
251 Characterization of Electroencephalogram Obtained During the Resolution of Mathematical Operations Using Recurrence Quantification Analysis
1 Introduction
2 Theoretical Framework
2.1 Electroencephalogram
2.2 Recurrence Plot and Recurrence Quantification Analysis RQA
3 Materials and Methods
3.1 Database
3.2 Obtaining RQA Measurements
3.3 Statistical Analysis
4 Results
4.1 Analysis Using the RQA Measures
5 Discussion
6 Conclusion
References
252 Performance Evaluation of Machine Learning Techniques Applied to Magnetic Resonance Imaging of Individuals with Autism Spectrum Disorder
1 Introduction
2 Materials and Methods
2.1 Database
2.2 Feature Extraction Techniques
2.3 Predictive Models of Machine Learning
2.4 Training
2.5 Metrics
3 Results
4 Discussion
5 Conclusion
References
253 Biomedical Signal Data Features Dimension Reduction Using Linear Discriminant Analysis and Threshold Classifier in Case of Two Multidimensional Classes
1 Introduction
2 LDA Calculation
3 The Specific Case of Two Classes
4 Performance Calculation
4.1 ROC Analysis
5 Methodology
6 Results
7 Conclusion
References
254 Facial Thermal Behavior Pre, Post and 24 h Post-CrossFit® Training Workout: A Pilot Study
Abstract
1 Introduction
2 Methods
2.1 Volunteers
2.2 Anthropometry of the Volunteers
2.3 Evaluation by Infrared Thermography
2.4 Training Session
2.5 Data Analysis
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
255 Implementation of One and Two Dimensional Analytical Solution of the Wave Equation: Dirichlet Boundary Conditions
1 Introduction
2 Methodology
2.1 Analytical Solution
3 Results
3.1 1D Problem
3.2 2D Problem
4 Discussion
5 Conclusion
References
256 Application of the Neumann Boundary Conditions to One and Two Dimensional Analytical Solution of Wave Equation
1 Introduction
2 Methodology
2.1 One-Dimensional Wave Equation
2.2 Two-Dimensional Wave Equation
3 Results
3.1 1D Problem
3.2 2D Problem
4 Discussion
5 Conclusion
References
257 Comparative Analysis of Parameters of Magnetic Resonance Examinations for the Accreditation Process of a Diagnostic Image Center in Curitiba/Brazil
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
258 Information Theory Applied to Classifying Skin Lesions in Supporting the Medical Diagnosis of Melanomas
1 Introduction
2 Problem Identification
3 Correntropy
4 Methodology
4.1 Formation of the Image Database
4.2 Digital Image Processing
4.3 Approach Proposed for Features Extraction
4.4 Proposed Classification
5 Results and Discussion
6 Conclusions and Future Works
References
259 Alpha Development of Software for 3D Segmentation and Reconstruction of Medical Images for Use in Pre-treatment Simulations for Electrochemotherapy: Implementation and Case Study
Abstract
1 Introduction
2 Software Development
2.1 Backend Development
2.2 Frontend Development
3 Case Study
3.1 Numerical Simulations
4 Results
5 Discussion
6 Conclusions
Acknowledgements
References
260 A Systematic Review on Image Registration in Interventionist Procedures: Ultrasound and Magnetic Resonance
1 Introduction
2 Methodology
2.1 Target Question Design
2.2 Search Strategy
2.3 Exclusion and Inclusion Criteria
3 Results
4 Discussion
5 Conclusion
References
261 Application of Autoencoders for Feature Extraction in BCI-SSVEP
1 Introduction
2 BCI Signal Processing Module
2.1 Pre-processing
2.2 Feature Extraction
2.3 Classification
3 Autoencoders
4 EEG Data Set Information
5 Methodology
5.1 Pre-processing
5.2 Hyperparameters of Classifiers
6 Experiments
7 Results and Discussion
7.1 Time-Domain Autoencoder
7.2 Frequency-Domain Autoencoder
7.3 Comparative Analysis
8 Conclusions
References
262 An LPC-Based Approach to Heart Rhythm Estimation
1 Introduction
2 Theoretical Development
2.1 ECG Signal
2.2 LPC—Linear Prediction Coefficients
2.3 LPC: Heart Rhythm Estimation
3 Results and Discussions
4 Conclusions
References
263 Spectral Variation Based Method for Electrocardiographic Signals Compression
Abstract
1 Introduction
2 Materials and Methods
2.1 Proposed Compression System
2.1.1 Frequency Estimator
2.1.2 Interpolation/Subsampling
2.1.3 FFT Per Cycle
2.1.4 Proposed Compression Method
2.1.5 Lossless Compression
2.2 Signal Reconstruction
2.3 Error Versus Compression Rate Analysis
2.4 Visual Inspection
3 Results and Discussion
3.1 Error Versus Compression Rate Results
3.2 Visual Inspection Results
4 Conclusions
Acknowledgements
References
264 Centre of Pressure Displacements in Transtibial Amputees
Abstract
1 Introduction
2 Methods
2.1 Subjects
2.2 Experimental Design
2.3 Analysis Plan
3 Results
4 Discussion
5 Conclusions
Conflict of Interest
References
265 Lower Limb Frequency Response Function on Standard Maximum Vertical Jump
1 Introduction
2 Materials and Methods
2.1 Trial Tests
2.2 Input-Output Signals
2.3 System Dynamic
2.4 Signal Analysis
3 Results and Discussion
4 Conclusion
References
266 Time-Difference Electrical Impedance Tomography with a Blood Flow Model as Prior Information for Stroke Monitoring
1 Introduction
2 Materials and Methods
2.1 Three-Layer Head Model Mesh Generation
2.2 Inclusion of Arteries in the Meshes
2.3 Dynamic Model of Brain Blood Flow
2.4 Forward Problem Solution
2.5 Blood Flow Model as Prior Information
2.6 Time-Difference Imaging: Solution of the Inverse Problem
3 Results
3.1 Monitoring of Healthy Patient
3.2 Detection of Ischemia with Reference Measurements
3.3 Detection of Pre-existing Ischemia
4 Discussion
5 Conclusions
References
267 Development of a Matlab-Based Graphical User Interface for Analysis of High-Density Surface Electromyography Signals
1 Introduction
2 Materials and Methods
2.1 HD-sEMG and Force Signals Upload and Preprocessing
2.2 Force Graphical Representation
2.3 HD-sEMG Signals Graphical Representation
2.4 Spatial Filters
2.5 Feature Extraction
2.6 Amplitude Estimators
2.7 Frequency Estimators
2.8 Modified Entropy
2.9 Coefficient of Variation
2.10 Gravity Center
2.11 Topographic Maps
2.12 Software Evaluation
3 Results
3.1 Software Evaluation
4 Discussion and Conclusion
4.1 Availability of the Software
References
268 Development of an Automatic Antibiogram Reader System Using Circular Hough Transform and Radial Profile Analysis
1 Introduction
2 Materials and Methods
2.1 Developed Device
2.2 Strains and Antibiotics
2.3 Proposed Software
3 Results
4 Conclusion
References
269 Gene Expression Analyses of Ketogenic Diet
1 Introduction
2 Materials and Methods
2.1 Publicly Available Microarray Datasets from GEO-NCBI
2.2 Differentially Expressed Gene Analyses by GEO2R from GEO Datasets
2.3 Gene Ontology Over or Under Representation: Cytoscape and BINGO
2.4 gProfiler with KEGG Orthology
2.5 Consistent Genes and Categories
3 Results
3.1 Differentially Expressed Genes
3.2 The Union of Intersections
3.3 BINGO and Gene Ontology Terms for the Union of Intersections
3.4 gProfiler with KEGG Orthology for the Union of Intersections
4 Discussion
5 Conclusion
References
270 Comparison Between J48 and MLP on QRS Classification Through Complexity Measures
1 Introduction
2 Methods
2.1 The BIH-Arrhythmia Database
2.2 Process
2.3 Complexity Measures
2.4 Evaluation
3 Results
4 Discussion
5 Conclusion
References
271 Low Processing Power Algorithm to Segment Tumors in Mammograms
Abstract
1 Introduction
2 Materials and Methods
2.1 Database
2.2 Preprocessing
2.3 Tumor Segmentation
3 Validation Method
4 Processing Speed Evaluation
5 Results and Discussion
6 Conclusions
Acknowledgements
References
272 Electromyography Classification Techniques Analysis for Upper Limb Prostheses Control
Abstract
1 Introduction
2 Methodology
2.1 Bibliographical Study
2.2 Experiments Dataset
2.3 Software Tools
2.4 Dataset Shaping
2.5 Feature Extraction
2.6 Training and Testing Data Samples
2.7 Classifiers Fitting
3 Results and Discussion
4 Conclusions
References
273 EEG-Based Motor Imagery Classification Using Multilayer Perceptron Neural Network
1 Introduction
2 Materials and Methods
2.1 Feature Extraction
2.2 Experimental Design
2.3 Topology, Training, and Evaluation of MLP
3 Results and Discussion
4 Conclusion
References
274 Real-Time Detection of Myoelectric Hand Patterns for an Incomplete Spinal Cord Injured Subject
Abstract
1 Introduction
2 Methodology
2.1 Subjects
2.2 Data Acquisition and Hardware
2.3 Experimental Protocol
2.4 Data Processing and Analysis
2.4.1 Pre-processing
2.4.2 Feature Extraction
2.4.3 Classification Methods
2.5 Evaluation of Performance
3 Results
3.1 Off-Line Classification
3.2 Real-Time Classification
4 Discussion
5 Conclusions
Acknowledgements
References
275 Single-Trial Functional Connectivity Dynamics of Event-Related Desynchronization for Motor Imagery EEG-Based Brain-Computer Interfaces
Abstract
1 Introduction
2 Materials and Methods
2.1 EEG Dataset
2.2 Data Processing
2.3 ERD/S Single-Trial
2.4 Dynamic Functional Connectivity
2.5 Classification and Subject Inclusion Criterion
3 Results
4 Discussion and Conclusions
Acknowledgements
References
276 A Lightweight Model for Human Activity Recognition Based on Two-Level Classifier and Compact CNN Model
1 Introduction
2 Deep Learning Models for HAR and Complexity Analysis
3 Optimization of Deep Learning Models for Edge Computing
3.1 General Approach Techniques
3.2 HAR-Specific Techniques
4 Proposed Lightweight HAR Classifier
4.1 Input Data
4.2 Preprocessing
4.3 Feature Extraction
4.4 Static-Dynamic Classifier
4.5 Static Activities Classifier
4.6 Dynamic Activities Classifier
5 System Evaluation and Discussion
5.1 SVM Classifier
5.2 Decision Tree for Static Activities
5.3 Compact CNN for Dynamic Activities
5.4 Model Results
6 Conclusion
References
277 Applicability of Neurometry in Assessing Anxiety Levels in Students
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
278 Using PPG and Machine Learning to Measure Blood Pressure
1 Introduction
2 Methodology
2.1 Database
2.2 Extracted Features
2.3 Dimensionality Reduction
2.4 Blood Pressure Ranges
2.5 Regression Algorithms
3 Experiment and Outcomes
4 Conclusions
References
279 Resting-State Brain in Cognitive Decline: Analysis of Brain Network Architecture Using Graph Theory
Abstract
1 Introduction
2 Methodology
2.1 Data
2.2 Pre-Processing
2.3 Network Construction
2.4 Graph Theory
2.5 Statistical Analysis
3 Results
3.1 Demographic Data
3.2 Graph Theory
4 Discussions and Conclusion
Conflict of Interest
References
280 Exploring Different Convolutional Neural Networks Architectures to Identify Cells in Spheroids
1 Introduction
2 Methodology
2.1 Spheroid Images
2.2 Dataset Preparation
2.3 CNN Implementation
2.4 Training and Testing Process
3 Results
4 Discussions
5 Conclusions
References
281 A Dynamic Artificial Neural Network for EEG Patterns Recognition
Abstract
1 Introduction
2 Materials and Methods
2.1 Materials
2.2 Methods
2.2.1 The ANN
2.2.2 The Daubechies Discrete Wavelet Transform (D-DWT)
2.2.3 The Dynamic Strategy
2.2.4 The Activation Function
2.2.5 The ANN Initial Setup
2.2.6 The ANN Training
2.2.7 The ANN Operation
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
282 Pelvic Region Panoramic Ultrasonography
Abstract
1 Introduction
2 Materials and Methods
2.1 Pre-Processing
2.2 Segmentation
2.3 Reconstruction
3 Results
4 Discussion
5 Conclusion
Conflict of Interest
Acknowledgements
References
283 Functional Connectivity During Hand Tasks
Abstract
1 Introduction
2 Materials and Methods
2.1 Participants
2.2 Experimental Setup
2.3 Task
2.4 Acquisition and Pre-Processing of EMG Signals
2.5 PDC Analysis
3 Results
3.1 PDC Color Map
3.2 Statistical Analysis
4 Discussions and Conclusions
Acknowledgements
References
284 Estimation of Alveolar Recruitment Potential Using Electrical Impedance Tomography Based on an Exponential Model of the Pressure-Volume Curve
1 Introduction
2 Materials
3 Methodology
4 Results
5 Discussion
6 Conclusion
References
285 Assessment of Respiratory Mechanics in Mice Using Estimated Input Impedance Without Redundancy
Abstract
1 Introduction
2 Materials and Methods
2.1 Animals
2.2 Animal Preparation
2.3 Respiratory Impedance
2.4 Signal Processing
2.5 Respiratory Impedance in Epochs
2.6 Data Acquisition
2.7 Constant Phase Model
2.8 Data Analysis
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
286 Dispersive Raman Spectroscopy of Peanut Oil—Ozone and Ultrasound Effects
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
287 Comparison Between Linear and Nonlinear Haar Wavelet for the Detection of the R-peak in the Electrocardiogram of Small Animals
Abstract
1 Introduction
2 Materials and Methods
2.1 First Generation—Linear—Haar Wavelet Transform
2.2 Second Generation—Nonlinear—Haar Wavelet
2.3 R-Peak Detection
2.4 Verification and Validation
3 Results
4 Discussion
5 Conclusions
Conflict of Interest
References
288 Data Extraction Method Combined with Machine Learning Techniques for the Detection of Premature Ventricular Contractions in Real-Time
Abstract
1 Introduction
2 Methodology
2.1 Data
2.2 Pre-Processing
2.3 Data Extraction
2.4 Classifiers
2.5 Performance Appraisers
3 Results
4 Discussion
5 Conclusion
Conflict of Interest
References
289 Extraction of Pendelluft Features from Electrical Impedance Tomography Images
1 Introduction
2 Materials
3 Methodology
4 Results
5 Discussion
6 Conclusion
References
290 Classification of Raw Electroencephalogram Signals for Diagnosis of Epilepsy Using Functional Connectivity
Abstract
1 Introduction
2 Material and Methods
2.1 Data Acquisition and Pre-processing
2.2 Feature Extraction and Selection
3 Classification and Performance Evaluation
3.1 Support Vector Machines
3.2 Artificial Neural Network
3.3 Algorithms Performance Evaluation
4 Results
4.1 Functional Connectivity
4.2 Performance of the Classifiers
4.3 Diagnostic Method Efficiency
5 Discussion
6 Conclusion
Conflict of Interest
References
291 On Convolutional Neural Networks and Transfer Learning for Classifying Breast Cancer on Histopathological Images Using GPU
1 Introduction
2 Related Works
3 Convolutional Neural Networks and Transfer Learning
3.1 ResNet-18
3.2 ResNet-152
3.3 GoogLeNet
4 Experiments
4.1 Database
4.2 Environment and Setup
4.3 Metrics
4.4 Reducing the Overfitting
4.5 Results
5 Conclusions
References
292 Acquisition and Comparison of Classification Algorithms in Electrooculogram Signals
Abstract
1 Introduction
2 Materials and Methods
2.1 Acquisition System
2.2 Dataset Creation
2.3 Pre-processing and Feature Extraction
2.4 Dimensionality Reduction
2.5 Classification Algorithms
3 Results
4 Discussion
5 Conclusions
References
293 Muscle Synergies Estimation with PCA from Lower Limb sEMG at Different Stretch-Shortening Cycle
Abstract
1 Introduction
2 Materials and Methods
2.1 Trial Tests
2.2 Data Processing
3 Results
3.1 EMG Linear Envelopes
3.2 Linear and Cross-Correlation
3.3 Principal Components Analysis
3.4 Comparison of Results
4 Discussion and Conclusions
References
294 Measurement Techniques of Hounsfield Unit Values for Assessment of Bone Quality Following Decompressive Craniectomy (DC): A Preliminary Report
Abstract
1 Introduction
2 Materials and Method
3 Results and Discussion
4 Conclusions
References
295 Method for Improved Image Reconstruction in Computed Tomography and Positron Emission Tomography, Based on Compressive Sensing with Prefiltering in the Frequency Domain
1 Introduction
2 Tomography and Sinograms
3 Proposed Method
3.1 Estimation of Measurements in the Frequency Domain
3.2 Prefiltering
3.3 Reconstruction of the Filtered Images Using CS
3.4 Spectral Image Composition
3.5 Performance Evaluation Experiments
4 Results and Discussions
5 Conclusion
References
296 Non-invasive Arterial Pressure Signal Estimation from Electrocardiographic Signals
1 Introduction
2 Materials and Methods
2.1 Signal Processing
2.2 Kalman Filter
2.3 System Modeling
2.4 Estimation Performance Analysis
3 Results
4 Discussion
5 Conclusion
References
297 Comparison Among Microvolt T-wave Alternans Detection Methods and the Effect of T-wave Delimitation Approaches
Abstract
1 Introduction
2 Materials and Methods
2.1 T-wave Delimitation Approaches
2.2 MTWA Quantifying Methods
2.3 Statistical Analysis
3 Results
4 Discussions and Conclusions
Acknowledgements
References
298 Detection of Schizophrenia Based on Brain Structural Analysis, Using Machine Learning over Different Combinations of Multi-slice Magnetic Resonance Images
1 Introduction
2 Background
2.1 Schizophrenia
2.2 Magnetic Resonance Imaging—MRI
2.3 Deep Learning and Convolutional Neural Networks
3 Materials and Methods
3.1 Overview
3.2 Data Description
3.3 Neural Network Architecture
3.4 Evaluation of Network Models
4 Results and Discussion
5 Conclusion
References
299 Simulation of Lung Ultrasonography Phantom for Acquisition of A-lines and B-lines Artifacts
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
300 Recognition of Facial Patterns Using Surface Electromyography—A Preliminary Study
1 Introduction
2 Materials and Methods
2.1 Data Acquisition and Experimental Procedure
2.2 Signal Processing and Classification
3 Results and Discussion
4 Conclusion
References
301 Classification of Red Blood Cell Shapes Using a Sequential Learning Algorithm
Abstract
1 Introduction
2 Materials and Methods
2.1 Image Segmentation
2.2 Calculation of the Tangential Angle Function at Each Point of the Contour
2.3 Angular Variation Between each Pair of Points
2.4 Object Classification Using HMM
2.5 Sample Preparation and Image Acquisition
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
302 Possible Caveats of Ultra-short Heart Rate Variability Reliability: Insights from Recurrence Quantification Analysis
Abstract
1 Introduction
2 Materials and Methods
2.1 Participants
2.2 Physiologic Recording and HRV Analysis
2.3 Poincaré Map
2.4 Recurrence Quantification Analysis
2.5 Statistical Analysis
3 Results and Discussions
4 Conclusions
Acknowledgements
References
Clinical Engineering and Health Technology Assessment
303 Unidose Process Automation: Financial Feasibility Analysis for a Public Hospital
Abstract
1 Introduction
2 Materials and Methods
2.1 Unidose Process in the Institution
2.2 Equipment Costs
2.3 Production Inputs
2.4 Staff
2.5 Return Over Investment
3 Results
4 Discussion
5 Conclusions
References
304 Identifying Monitoring Parameters Using HFMEA Data in Primary Health Care Ubiquitous Technology Management
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussions
4 Conclusion
References
305 Rapid Review of the Application of Usability Techniques in Medical Equipment
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussions
5 Conclusion
References
306 Changes in Respiratory Mechanics Associated with Different Degrees of Parkinson’s Disease
Abstract
1 Introduction
2 Materials and Methods
2.1 Study Population
2.2 Respiratory Oscillometry
2.3 Presentation of Results and Statistical Analysis
3 Results
3.1 Reactive Parameters
3.2 Resistive Parameters
4 Discussion
5 Conclusion
Acknowledgements
References
307 Machine Learning Platform for Remote Analysis of Primary Health Care Technology to Support Ubiquitous Management in Clinical Engineering
1 Introduction
2 Materials and Methods
2.1 Data Collection
2.2 Data Preprocessing
2.3 Data Transformation
2.4 Model Training
2.5 Model Testing
2.6 Execution
3 Results
4 Discussion
5 Conclusion
References
308 Evaluation of the Efficacy of Chloroquine and Hydroxychloroquine in the Treatment of Individuals with COVID-19: A Systematic Review
Abstract
1 Introduction
2 Materials and Methods
3 Results
3.1 Selection Process of Studies Included
3.2 Description of Study Results
3.3 Qualitative Evaluation
4 Discussion
5 Conclusion
Acknowledgements
References
309 Embracement with Risk Classification: Lead Time Assessment of the Patient in a Tocogynecology Emergency Service
Abstract
1 Introduction
2 Materials and Methods
3 Study Results
4 Discussion
5 Conclusions
Acknowledgements
References
310 Health Technology Management Using GETS in Times of Health Crisis
Abstract
1 Introduction
2 Methodology
3 Results
3.1 How Many Functionally Active Ventilators Are There in the Brazilian HCI?
3.2 What is the Age Distribution of the Available Park of Pulmonary Ventilators?
3.3 How Much is Spent Annually with Pulmonary Ventilators?
3.4 What Are the Most Frequently Used Ventilator Parts?
3.5 How Are Ventilators Distributed with Respect to the Manufacturer?
3.6 Can the Number of Pulmonary Ventilators Under Corrective Maintenance be Estimated?
3.7 What Percent of Ventilators Were Subjected to Preventive Maintenance?
4 Discussion
5 Conclusions
Acknowledgements
References
311 Life-Sustaining Equipment: A Demographic Geospace Analysis in National Territory
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
312 Geospatial Analysis of Diagnostic Imaging Equipment in Brazil
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
313 A Review About Equipment for Mechanical Ventilation in Intensive Care to Combat COVID-19 and the Role of Clinical Engineers
Abstract
1 Introduction
2 Materials and Methods
3 Mechanical Ventilation in COVID-19 Treatment
3.1 Criteria and Clinical Parameters
3.2 Medical Equipment in Intensive Care Unit
3.3 Pulmonary Mechanical Ventilator
4 Discussion
4.1 The Use of Mechanical Ventilators in the Current Situation
4.2 The Role and Influence of Engineers in Combating COVID-19
5 Conclusions
6 Conflict of Interest
Acknowledgements
References
314 Evaluation of Adverse Events Recorded in FDA/USA and ANVISA/Brazil Databases for the Medical Equipment: Pulmonary Ventilators, Defibrillators, Infusion Pumps, Physiological Monitors and Ultrasonic Scalpels
Abstract
1 Introduction
2 Methods
3 Results
3.1 FDA Adverse Events
3.2 ANVISA Adverse Events
3.3 Top Ten Health Hazards of the ECRI
4 Discussion
5 Conclusions
Acknowledgements
References
315 Quality Assessment of Emergency Corrective Maintenance of Critical Care Ventilators Within the Context of COVID-19 in São Paulo, Brazil
1 Introduction
2 Materials and Methods
2.1 Infrastructure
2.2 Leakage Current Measurement
2.3 Resistance of Protective Earth
2.4 Accuracy of Control and Instruments: Volume Control Inflation Type and Pressure Control Inflation Type
2.5 Delivered Oxygen Test
2.6 Alarm Verification
2.7 Calibration and Verification
2.8 Uncertainty of Measurement (U)
2.9 Assessed Critical Care Ventilators
3 Results
4 Discussion
5 Conclusion
References
316 Comparison of the Sensitivity and Specificity Between Mammography and Thermography in Breast Cancer Detection
1 Introduction
2 Materials and Methods
2.1 Data Acquisition
2.2 Sample and Thermogram Analysis
3 Results
3.1 Heat Distribution
4 Discussion
5 Conclusion
References
317 COVID-19: Analysis of Personal Protective Equipment Costs in the First Quarter of 2020 at a Philanthropic Hospital in Southern Bahia- Brazil
Abstract
1 Introduction
2 Material and Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
318 Evaluation of the Power Generated by Ultrasonic Shears Used in Laparoscopic Surgeries
Abstract
1 Introduction
2 Materials and Methods
2.1 Materials Used
2.2 Methodology Used in the Tests
3 Results and Discussions
4 Conclusions
Acknowledgements
References
Neuroengineering
319 Influence of Visual Feedback Removal on the Neural Control Strategies During Isometric Force Production
Abstract
1 Introduction
2 Methods
2.1 Participants
2.2 Force Measurements and EMG Recording
2.3 Experimental Protocol
2.4 Data Analysis and Statistics
3 Results
4 Discussion
4.1 Effects of Visual Feedback on Force Control
4.2 Signal-Dependent Noise During Force Production
5 Conclusions
6 Conflict of Interest
Acknowledgements
References
320 Neurofeedback Training for Regulation of Sensorimotor Rhythm in Individuals with Refractory Epilepsy
Abstract
1 Introduction
2 Materials and Methods
2.1 Subjects
2.2 EEG Recordings
2.3 Data Processing
3 Results
4 Discussion
5 Conclusion
Conflict of Interest
References
321 Gelotophobia in the Academic Environment: A Preliminary Study
Abstract
1 Introduction
2 Materials and Methods
2.1 Recruitment of Participants
2.2 Application of the Questionnaire
3 Results
4 Discussion
5 Conclusion
Acknowledgements
Reference
322 A Single Administration of GBR 12909 Alters Basal Mesocorticolimbic Activity
1 Introduction
2 Methods
2.1 Animals
2.2 Microelectrodes Implant Surgery
2.3 Local Field Potential Acquisition and Analysis
3 Results
4 Discussion
5 Conclusion
References
323 Fuzzy Assessment for Autism Spectrum Disorders
1 Introduction
2 Fuzzy System for Autistic Spectrum Disorders Identification
2.1 Biomarkers for Autistic Spectrum Disorder Assessment
2.2 Fuzzy Input-Output Inference Mapping
2.3 Input and Output Fuzzy Sets
2.4 Input Linguistic Variables and Linguistic Terms
2.5 Fuzzy Rules for Assessing the Severity of Autistic Spectrum Disorder
3 Results and Discussion
4 Conclusion
References
324 Subthalamic Field Potentials in Parkinson’s Disease Encode Motor Symptoms Severity and Asymmetry
Abstract
1 Introduction
2 Materials and Methods
2.1 Patients and Data Acquisition
2.2 Signal Processing
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
325 Subthalamic Beta Burst Dynamics Differs for Parkinson’s Disease Phenotypes
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
6 Conflict of Interest
Acknowledgements
References
326 Development of a Low-Cost Software to Obtain Quantitative Parameters in the Open Field Test for Application in Neuroscience Research
Abstract
1 Introduction
2 Materials and Methods
2.1 Materials
2.2 Tracking Method and Calculation of Total Distance and Average Speed
2.3 Graphical Interface and Executable Creation
3 Results
4 Discussion
5 Conclusions
Conflict of Interest
References
327 Depolarizing Effect of Chloride Influx Through KCC and NKCC During Nonsynaptic Epileptiform Activity
Abstract
1 Introduction
2 Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
328 Increase of Lactate Concentration During Spreading Depression
Abstract
1 Introduction
2 Methods and Materials
3 Results
4 Discussion
5 Conclusions
6 Conflict of Interest
Acknowledgment
References
329 Microglial Response After Chronic Implantation of Epidural Spinal Cord Electrode
1 Introduction
2 Materials and Methods
2.1 Animals
2.2 Surgery Procedures
2.3 Perfusion and Tissue Processing
2.4 Histological and Immunohistochemistry Analysis
2.5 Cell Counting
2.6 Statistical Analysis
3 Results
3.1 Total Microglia
3.2 Activated Microglia at Different Spinal Cord Levels
3.3 Distribution of Activated Microglia at the Implant Level
4 Discussion
5 Conclusion
References
330 Immediate Cortical and Spinal C-Fos Immunoreactivity After ICMS of the Primary Somatosensory Cortex in Rats
1 Introduction
2 Materials and Methods
2.1 Experimental Animals
2.2 Microelectrode Implantation and Habituation Procedures
2.3 ICMS Protocol
2.4 Perfusion and Tissue Processing
2.5 Histochemistry and c-Fos Immunohistochemistry
2.6 Data Acquisition and Analysis
3 Results
3.1 Distribution and Extent of the Cellular Activation in the Cortex
3.2 C-Fos Immunoreactivity in the Spinal Cord
4 Discussion
5 Conclusion
References
331 Identical Auditory Stimuli Render Distinct Cortical Responses Across Subjects—An Issue for Auditory Oddball-Based BMIs
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusion
5 Conflict of Interest
Acknowledgment
References
332 Proposal of a Novel Neuromorphic Optical Tactile Sensor for Applications in Prosthetic Hands
Abstract
1 Introduction
2 Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgment
References
333 An Object Tracking Using a Neuromorphic System Based on Standard RGB Cameras
1 Introduction
2 Materials and Methods
2.1 Standard RGB to Neuromorphic
2.2 Dataset
2.3 Watershed Segmentation Method
2.4 Quantitative Analysis
3 Results and Discussion
4 Conclusion
References
334 Classification of Objects Using Neuromorphic Camera and Convolutional Neural Networks
1 Introduction
2 Materials and Methods
2.1 Dataset
2.2 Convolution Neural Network
3 Results and Discussion
4 Conclusion
References
335 Acoustic and Volume Rate Deposition Simulation for the Focused Ultrasound Neuromodulation Technique
Abstract
1 Introduction
2 Objectives
3 Materials and Methods
4 Results
5 Conclusions
Acknowledgements
References
336 Neuromorphic Vision-aided Semi-autonomous System for Prosthesis Control
1 Introduction
2 Methods
2.1 Event-Based Cameras
2.2 Dataset Collection
2.3 Grasping Detection
2.4 Wrist Orientation
3 Results and Discussion
4 Conclusion
References
337 Finding Discriminant Lower-Limb Motor Imagery Features Highly Linked to Real Movements for a BCI Based on Riemannian Geometry and CSP
1 Introduction
2 Materials and Methods
2.1 Data Description
2.2 Proposed BCI
2.3 Statistical Evaluation
3 Results and Discussion
3.1 Further Discussion
4 Conclusion
References
338 Computational Model of the Effects of Transcranial Magnetic Stimulation on Cortical Networks with Subject-Specific Neuroanatomy
1 Introduction
2 Methods
2.1 Neuron Models
2.2 Electric Field in Cortical Mesh
2.3 Coupling Electric Field to Neuron Biophysics
2.4 Determination of Excitation Thresholds and Action Potential Initiation Sites
2.5 Neurons in a Feedforward Network
3 Results
3.1 Stimulated Neural Elements
3.2 Excitation Thresholds
3.3 Membrane Potential of Neurons in Network
4 Discussion
4.1 Synaptically Isolated Neurons
4.2 Neurons in a Feedforward Network
5 Conclusion
References
Special Topics in Biomedical Engineering
339 Strategy to Computationally Model and Resolve Radioactive Decay Chain in Engineering Education by Using the Runge-Kutta Numerical Method
1 Introduction
2 Theoretical Background
2.1 Physical Model
2.2 Mathematical Model
3 Methods
3.1 Computational Model
4 Results
5 Discussion
6 Conclusion
References
340 Effects of LED Photobiomodulation Therapy on the Proliferation of Chondrocytes
Abstract
1 Introduction
2 Materials and Methods
2.1 Cell Culture
2.2 Photobiomodulation Therapy
2.3 Chondrocyte Characterization
2.4 Cellular Metabolic Activity
2.5 Cell Proliferation
2.6 Statistical Analysis
3 Results
3.1 Chondrocyte Characterization
3.2 Cell Metabolic Activity
3.3 Cell Proliferation
4 Discussion
5 Conclusions
Acknowledgements
References
341 Characterization of Cultured Cardiomyocytes Derived from Human Induced Pluripotent Stem Cell for Quantitative Studies of Ca2+ Transport
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
342 Electrochemical Characterization of Redox Electrocatalytic Film for Application in Biosensors
Abstract
1 Introduction
2 Materials and Methods
2.1 Making the Film of Polyaniline and Fullerene
3 TCNQ Film Deposition
4 Optimization of Experimental Parameters
5 Results and Discussion
6 Conclusion
Acknowledgements
References
343 DXA and Bioelectrical Impedance: Evaluative Comparison in Obese Patients in the City of Cáceres
Abstract
1 Introduction
1.1 New Methods of Body Assessment
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
344 On the COVID-19 Temporal Evolution in Brazil
1 Introduction
2 Mathematical Modeling
2.1 The Discrete-Time SIR Model
2.2 Parameters Estimation
3 Simulation and Validation
4 Monte Carlo Simulator
5 Conclusions
References
345 Use of Apicectomy in the Treatment of Refractory Injury
Abstract
1 Introduction
2 Case Report
2.1 Endodontic and Surgical Treatment
2.2 Imagological Preservation
3 Discussion
4 Conclusion
Acknowledgements
References
346 Electronics Laboratory Practices: A Didactic ECG Signal Generator
1 Introduction
2 Theory
2.1 Cardiac Signal Origin
2.2 Biopotential Amplifiers
3 Description of the Proposed Project
3.1 Laboratory Experiments
4 Results and Discussion
5 Conclusions
References
347 Feature Analysis for Speech Emotion Classification
1 Introduction
2 Materials and Methods
2.1 Database
2.2 Feature Analysis
2.3 Feature Selection
2.4 Results
3 Conclusion
References
348 Muscle Evaluation by Ultrasonography in the Diagnosis of Muscular Weakness Acquired in the Intensive Care Unit
Abstract
1 Introduction
2 Methodology
3 Results
4 Discussion
5 Conclusions
6 Conflict of Interest
Acknowledgements
References
349 Development of an Application to Assess Quality of Life
Abstract
1 Introduction
2 Material and Methods
2.1 Study Design
2.2 Software Prototyping
2.3 SF-36 Quality of Life Questionnaire
2.4 Software Validation
2.5 Inclusion and Exclusion Criteria
2.6 Ethical Considerations
3 Results
4 Discussion
5 Conclusions
Acknowledgements
References
350 Tetra-Nucleotide Histogram-Based Analysis of Metagenomic Data for Investigating Antibiotic-Resistant Bacteria
1 Introduction
2 Concepts and Problem Definition
2.1 Metagenomic Analysis
2.2 Signal Processing Based on Histograms of k-mers
2.3 The Comprehensive Antibiotic Resistance Database
2.4 Freshwater Metagenome
3 Proposed Method
3.1 Finding Markers
3.2 Relating the Metagenome to Specific Genes Via 4-mers
4 Results
5 Conclusion
References
351 Use of Ultrasound in the Emergence and Initial Growth of Copaifera Reticulata Ducke (Fabaceae)
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusions
Acknowledgements
References
352 Proposal for a Low-Cost Personal Protective Equipment (PPE) to Protect Health Care Professionals in the Fight Against Coronavirus
Abstract
1 Introduction
2 Materials and Methods
3 Results
4 Discussion
5 Conclusions
Conflict of Interest
References
353 Cutaneous Manifestations Related to COVID-19: Caused by SARS-CoV2 and Use of Personal Protective Equipment
Abstract
1 Introduction
2 Material and Methods
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
354 Low-Cost Modified Swab Graphite Electrode Development as a Point-of-Care Biosensor
Abstract
1 Introduction
2 Materials and Methods
2.1 Preparation of the EPG
2.2 Characterization of CHIT/CNT-NH2 Film
3 Results and Discussion
4 Conclusions
Acknowledgements
References
355 Thermographic Evaluation Before and After the Use of Therapeutic Ultrasound in Breast Engorgement
Abstract
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusions
Acknowledgements
References
356 Thermal Effect of Therapeutic Ultrasound on Muscle-Bone Interface of Swine Tissue (Sus Scrofa Domesticus) with Metallic Implant
Abstract
1 Introduction
2 Material and Methods
3 Results and Discussion
4 Conclusion
Acknowledgements
References
357 Evaluation of Dynamic Thermograms Using Semiautomatic Segmentation Software: Applied to the Diagnosis of Thyroid Cancer
Abstract
1 Introduction
2 Material and Methods
3 Results—Presentation of the Developed Software
3.1 Creation of the Project
3.2 Visualization of Thermograms
3.3 Segmentation of Regions
3.4 Data Extraction
3.5 Examples of Use to Aid in the Diagnosis of Thyroid Cancer
4 Conclusions
Acknowledgements
References
358 Assessment of Dose with CaF2 OSL Detectors for Individual Monitoring in Radiodiagnostic Services Using a Developed Algorithm Based on OSL Decay Curve
Abstract
1 Introduction
2 Materials and Methods
2.1 Detector CaF2 Production and Preparation of Pellets
2.2 Radiation Sources, Energy Characterization, OSL Readouts and Dose Evaluation
2.3 Identification Algorithm
2.4 Estimated Dose Analyses
2.5 Blind Test
3 Results
4 Discussion
5 Conclusion
Acknowledgements
References
Author Index
IFMBE Proceedings Teodiano Freire Bastos-Filho · Eliete Maria de Oliveira Caldeira · Anselmo Frizera-Neto Editors

Volume 83

XXVII Brazilian Congress on Biomedical Engineering Proceedings of CBEB 2020, October 26–30, 2020, Vitória, Brazil

IFMBE Proceedings, Volume 83

Series Editor
Ratko Magjarevic, Faculty of Electrical Engineering and Computing, ZESOI, University of Zagreb, Zagreb, Croatia

Associate Editors
Piotr Ładyżyński, Warsaw, Poland
Fatimah Ibrahim, Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
Igor Lackovic, Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
Emilio Sacristan Rock, Mexico DF, Mexico

The IFMBE Proceedings Book Series is an official publication of the International Federation for Medical and Biological Engineering (IFMBE). The series gathers the proceedings of various international conferences, which are either organized or endorsed by the Federation. Books published in this series report on cutting-edge findings and provide an informative survey on the most challenging topics and advances in the fields of medicine, biology, clinical engineering, and biophysics. The series aims at disseminating high quality scientific information, encouraging both basic and applied research, and promoting world-wide collaboration between researchers and practitioners in the field of Medical and Biological Engineering. Topics include, but are not limited to:

• Diagnostic Imaging, Image Processing, Biomedical Signal Processing
• Modeling and Simulation, Biomechanics
• Biomaterials, Cellular and Tissue Engineering
• Information and Communication in Medicine, Telemedicine and e-Health
• Instrumentation and Clinical Engineering
• Surgery, Minimal Invasive Interventions, Endoscopy and Image Guided Therapy
• Audiology, Ophthalmology, Emergency and Dental Medicine Applications
• Radiology, Radiation Oncology and Biological Effects of Radiation

IFMBE proceedings are indexed by SCOPUS, EI Compendex, Japanese Science and Technology Agency (JST), SCImago. Proposals can be submitted by contacting the Springer responsible editor shown on the series webpage (see “Contacts”), or by getting in touch with the series editor Ratko Magjarevic.

More information about this series at https://link.springer.com/bookseries/7403

Teodiano Freire Bastos-Filho  Eliete Maria de Oliveira Caldeira  Anselmo Frizera-Neto Editors

XXVII Brazilian Congress on Biomedical Engineering Proceedings of CBEB 2020, October 26–30, 2020, Vitória, Brazil


Editors Teodiano Freire Bastos-Filho Department of Electrical Engineering Universidade Federal do Espírito Santo Vitória, Brazil

Eliete Maria de Oliveira Caldeira Department of Electrical Engineering Universidade Federal do Espírito Santo Vitória, Brazil

Anselmo Frizera-Neto Department of Electrical Engineering Universidade Federal do Espírito Santo Vitória, Brazil

ISSN 1680-0737 ISSN 1433-9277 (electronic)
IFMBE Proceedings
ISBN 978-3-030-70600-5 ISBN 978-3-030-70601-2 (eBook)
https://doi.org/10.1007/978-3-030-70601-2

© Springer Nature Switzerland AG 2022

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Organization

Steering Committee

General Chair
Teodiano Bastos-Filho, Federal University of Espirito Santo, Brazil

Members
Eliete Maria Caldeira, Federal University of Espirito Santo, Brazil
Anselmo Frizera-Neto, Federal University of Espirito Santo, Brazil
Francisco Santos, Federal University of Espirito Santo, Brazil

Program Committee
Adriano Andrade, Universidade Federal de Uberlândia (Brazil)
Adson Ferreira, Universidade de Brasília (Brazil)
Alcimar Soares, Universidade Federal de Uberlândia (Brazil)
Ana Maria Marques da Silva, Pontifícia Universidade Católica—RS (Brazil)
Ana Carolina de Oliveira, Universidade Federal do Triângulo Mineiro (Brazil)
Ana Cecilia Villa Parra, Universidad Politécnica Salesiana—Cuenca (Ecuador)
André Fábio Kohn, Universidade de São Paulo (Brazil)
Anibal Cotrina, Universidade Federal do Espírito Santo (Brazil)
Antônio Padilha Bó, University of Queensland (Australia)
Antonio Carlos Almeida, Universidade de São Paulo (Brazil)
Aparecido Augusto Carvalho, Universidade Estadual Paulista (Brazil)
Daniel Cavalieri, Instituto Federal de Ensino Superior—Vitória-ES (Brazil)
Denis Delisle, Universidade Federal do Espírito Santo (Brazil)
Elisa Berenguer, Universidad Nacional de San Juan (Argentina)
Frederico Tavares, Universidade Federal do Rio de Janeiro (Brazil)
Henrique Takachi Moriya, Universidade de São Paulo (Brazil)
Idagene Cestari, Instituto do Coração—USP (Brazil)
Jes Cerqueira, Universidade Federal da Bahia (Brazil)
João Salinet, Universidade Federal do ABC (Brazil)
John Jairo Villarejo Mayor, Universidade Federal do Paraná (Brazil)
José Wilson Bassani, Universidade Estadual de Campinas (Brazil)
Karin Komati, Instituto Federal de Ensino Superior—Serra-ES (Brazil)
Leandro Bueno, Instituto Federal de Ensino Superior—Vitória-ES (Brazil)
Leonardo Felix, Universidade Federal de Viçosa (Brazil)


Liliam Oliveira, Universidade Federal do Rio de Janeiro (Brazil)
Marcus Vieira, Universidade Federal de Goiás (Brazil)
Mauro Callejas Cuervo, Universidad Pedagógica y Tecnológica de Colombia (Colombia)
Mario R. Gongora-Rubio, Instituto de Pesquisas Tecnológicas (Brazil)
Pablo Diez, Universidad Nacional de San Juan (Argentina)
Percy Nohama, Pontifícia Universidade Católica—PR (Brazil)
Renato Zangaro, Universidade Anhembi Morumbi (Brazil)
Richard Tello, Instituto Federal de Ensino Superior—Serra-ES (Brazil)
Rodrigo Andreao, Instituto Federal de Ensino Superior—Vitória-ES (Brazil)
Rosana Bassani, Universidade Estadual de Campinas (Brazil)
Sandra Muller, Instituto Federal de Ensino Superior—Vitória-ES (Brazil)
Sônia Maria Malmonge, Universidade Federal do ABC (Brazil)
Sridhar Krishnan, Ryerson University (Canada)
Roberto Sagaro Zamora, Universidad de Oriente (Cuba)
Vicente Lucena, Universidade Federal do Amazonas (Brazil)


Preface

This volume contains selected papers presented at the XXVII Brazilian Congress on Biomedical Engineering (Congresso Brasileiro de Engenharia Biomédica—CBEB), held virtually in Vitória, Brazil, on October 26–30, 2020. The conference was organized by the Federal University of Espirito Santo (Universidade Federal do Espírito Santo—UFES), Brazil.

CBEB 2020 is the 27th edition of this conference, promoted every two years by the Brazilian Society of Biomedical Engineering (Sociedade Brasileira de Engenharia Biomédica—SBEB) and organized by researchers of a local research institution, with the collaboration of the entire scientific community linked to the area of Biomedical Engineering in Brazil. The conference has a tradition of bringing together academic communities, researchers, scientists from various fields, undergraduate and postgraduate students, as well as representatives from industry, commerce and governments, so that everyone can discuss and present their ideas about the main Biomedical Engineering problems in the country.

The book is organized into eleven parts, corresponding to the main topics of the conference: (1) Biomedical Signal and Image Processing, (2) Bioengineering, (3) Biomaterials, Tissue Engineering and Artificial Organs, (4) Biomechanics and Rehabilitation, (5) Biomedical Devices and Instrumentation, (6) Clinical Engineering and Evaluation of Technology in Health, (7) Neuroengineering, (8) Medical Robotics, Assistive Technology and Informatics in Health, (9) Biomedical Optics and Systems and Technologies for Therapy and Diagnosis, (10) Basic Industrial Technology in Health and (11) Special Topics in Biomedical Engineering.

CBEB 2020 received 665 contributions from authors in 19 countries around the world. After a thorough peer-review process, the Program Committee accepted 358 papers for publication in this book, an acceptance rate of about 53%. We thank all authors for their contributions.

We would like to take this opportunity to thank the members of the Program Committee and the invited external reviewers for their effort and expertise, without which it would be impossible to maintain the high standard of the peer-reviewed papers. In total, 32 Program Committee members and 547 invited external reviewers, representing 19 countries, devoted their time and energy to peer-reviewing the manuscripts.

We also thank the keynote speakers of CBEB 2020: Nitish Thakor (USA), Sridhar Krishnan (Canada), André Fabio Kohn (Brazil), Vanderlei Salvador Bagnato (Brazil), Edgard Morya (Brazil), João Machado (Brazil) and Idagene Cestari (Brazil) for sharing their knowledge and experience.


We appreciate the partnership with Springer, EasyChair, SBEB, FEST and Softaliza, as well as our sponsors (CNPq, FAPES and Prolife) for their essential support during the preparation of CBEB 2020. Many thanks to the CBEB 2020 team, whose involvement and hard work were crucial to the success of the conference.

Vitória, Brazil
February 2021

Teodiano Freire Bastos-Filho Anselmo Frizera-Neto Eliete Maria de Oliveira Caldeira

Contents

Basic Industrial Technology in Health

Analysis of the Acoustic Power Emitted by a Physiotherapeutic Ultrasound Equipment Used at the Brazilian Air Force Academy
J. G. S. N. Cavalcanti, W. C. A. Pereira, and J. F. S. Costa-Júnior

Synthesis and Antibacterial Activity of Maleimides
E. Conrado, C. J. Francisco, R. H. Piccoli, and A. F. Uchoa

Bioengineering

Study of the Effect of Bioceramic Compressive Socks on Leg Edema
A. A. S. Sakugawa, L. A. L. Conrado, A. Balbin Villaverde, and E. Munin

Analysis of the Electric Field Behavior on a Stimulation Chamber Applying the Superposition Principle
L. A. Sá, E. M. S. Bortolazzo, J. A. Costa JR, and P. X. de Oliveira

Development of a Non-rigid Model Representing the Venous System of a Specific Patient
M. C. B. Costa, S. D. F. Gonçalves, T. C. Lucas, M. L. F. Silva, C. M. P. Junior, J. Haniel, and R. Huebner

Comparative Study of Rheological Models for Pulsatile Blood Flow in Realistic Aortic Arch Aneurysm Geometry by Numerical Computer Simulation
M. L. F. Silva, S. D. F. Gonçalves, M. C. B. Costa, and R. Huebner

Total Lung Capacity Maneuver as a Tool to Screen the Relative Lung Volume in Balb/c Mice
A. E. Lino-Alvarado, J. L. Santana, R. L. Vitorasso, M. A. Oliveira, W. Tavares-Lima, and H. T. Moriya

The Influence of Cardiac Ablation on the Electrophysiological Characterization of Rat Isolated Atrium: Preliminary Analysis
J. G. S. Paredes, S. Pollnow, I. Uzelac, O. Dössel, and J. Salinet

Evaluation of the Surface of Dental Implants After the Use of Instruments Used in Biofilm Removal: A Comparative Study of Several Protocols
D. P. V. Leite, G. E. Pires, F. V. Bastos, A. L. Sant’ana, L. Frigo, A. Martins e Silva, J. E. P. Nunes, M. H. B. Machado, A. Baptista, R. S. Navarro, and A. T. Araki

Comparison of Hemodynamic Effects During Alveolar Recruitment Maneuvers in Spontaneously Hypertensive Rats Treated and Non-treated with Hydralazine
L. C. Ferreira, R. L. Vitorasso, M. H. G. Lopes, R. S. Augusto, F. G. Aoki, M. A. Oliveira, W. Tavares-Lima, and H. T. Moriya

Experimental Study of Bileaflet Mechanical Heart Valves
Eraldo Sales, M. Mazzetto, S. Bacht, and I. A. Cestari

Preliminary Results of Structural Optimization of Dental Prosthesis Using Finite Element Method
M. M. Togashi, M. P. Andrade, F. J. dos Santos, B. A. Hernandez, and E. A. Capello Sousa

Biomaterials, Tissue Engineering and Artificial Organs

Tribological Characterization of the ASTM F138 Austenitic Stainless-Steel Treated with Nanosecond Optical Fiber Ytterbium Laser for Biomedical Applications
Marcelo de Matos Macedo, Giovanna Vitória Rodrigues Bernardes, Jorge Humberto Luna-Domínguez, Vikas Verma, and Ronaldo Câmara Cozza

Computational Modeling of Electroporation of Biological Tissues Using the Finite Element Method
M. A. Knabben, R. L. Weinert, and A. Ramos

The Heterologous Fibrin Sealant and Aquatic Exercise Treatment of Tendon Injury in Rats . . . 91
S. M. C. M. Hidd, E. F. Dutra Jr, C. R. Tim, A. L. M. M. Filho, L. Assis, R. S. Ferreira Jr, B. Barraviera, and M. M. Amaral

Myxomatous Mitral Valve Mechanical Characterization . . . 97
A. G. Santiago, S. M. Malmonge, P. M. A. Pomerantzeff, J. I. Figueiredo, and M. A. Gutierrez

Qualitative Aspects of Three-Dimensional Printing of Biomaterials Containing Devitalized Cartilage and Polycaprolactone . . . 101
I. M. Poley, R. Silveira, J. A. Dernowsek, and E. B. Las Casas

Chemical Synthesis Using the B-Complex to Obtain a Similar Polymer to the Polypyrrole to Application in Biomaterials . . . 107
Lavínia Maria Domingos Pinto, Mariane de Cássia Rodrigues dos Santos, Mirela Eduarda Custódio, F. E. C. Costa, Larissa Mayra Silva Ribeiro, and Filipe Loyola Lopes

Evaluation of the Effect of Hydrocortisone in 2D and 3D HEp-2 Cell Culture . . . 113
M. O. Fonseca, B. H. Godoi, N. S. Da Silva, and C. Pacheco-Soares

Surface Topography Obtained with High Throughput Technology for hiPSC-Derived Cardiomyocyte Conditioning . . . 119
Lucas R. X. Cortella, I. A. Cestari, M. S., M. Mazzetto, A. F. Lasagni, and Ismar N. Cestari

The Effect of Nitrided Layer on Antibacterial Properties of Biomedical 316L Stainless Steel . . . 127
M. Benegra, G. H. Couto, and E. A. Bernardelli

Biomechanical Analysis of Tissue Engineering Construct for Articular Cartilage Restoration—A Pre-clinical Study . . . 133
R. R. de Faria, M. J. S. Maizato, I. A. Cestari, A. J. Hernandez, D. F. Bueno, R. Bortolussi, C. Albuquerque, and T. L. Fernandes


Cytotoxicity Evaluation of Polymeric Biomaterials Containing Nitric Oxide Donors Using the Kidney Epithelial Cell Line (Vero) . . . 139
V. C. P. Luz, F. N. Ambrosio, C. B. Lombello, A. B. Seabra, and M. H. M. Nascimento

Cellular Interaction with PLA Biomaterial: Scanning Electron Microscopy Analysis . . . 145
L. H. S. Mazzaron and C. B. Lombello

Development of a Gelatin-Based Hydrogel to be Used as a Fibrous Scaffold in Myocardial Tissue Engineering . . . 153
C. Parente and S. M. Malmonge

Experimental and Clinical Performance of Long-Term Cannulas for Mechanical Circulatory Assistance . . . 161
H. T. T. Oyama, S. Bacht, M. Mazzetto, L. F. Caneo, M. B. Jatene, F. B. Jatene, and I. A. Cestari

Evaluation of Lantana Trifolia Total Extract in Cell Culture: Perspective for Tissue Engineering . . . 167
C. F. L. Silva, L. R. Rezende, F. N. Ambrosio, J. Badanai, R. A. Lombello, and C. B. Lombello

Study on the Disinfection Stability of Bullfrog Skin . . . 173
D. N. de Moraes, D. I. Kozusny-Andreani, C. R. Tim, L. Assis, A. P. Da Costa, and M. M. Amaral

Corrosion Analysis of a Marked Biomaterial . . . 179
Eurico Felix Pieretti, Maurício David Martins das Neves, and Renato Altobelli Antunes

Mechanical and Morphological Analysis of Electrospun Poly(ε-Caprolactone) and Reduced Graphene Oxide Scaffolds for Tissue Engineering . . . 185
M. J. S. Maizato, H. T. T. Oyama, A. A. Y. Kakoi, and I. A. Cestari

Experimental Apparatus for Evaluation of Calcium Fluctuations in Cardiomyocytes Derived from Human-Induced Pluripotent Stem Cells . . . 191
M. C. Araña, R. D. Lahuerta, L. R. X. Cortella, M. Mazzetto, M. Soldera, A. F. Lasagni, I. N. Cestari, and I. A. Cestari

Cytotoxicity of Alumina and Calcium Hexaluminate: Test Conditions . . . 197
R. Arbex, L. R. Rezende, F. N. Ambrosio, L. M. M. Costa, and C. B. Lombello

Tendon Phantom Mechanical Properties Assessment by Supersonic Shear Imaging with Three-Dimensional Transducer . . . 207
V. C. Martins, G. B. G. Rolando, L. L. De Matheo, W. C. A. Pereira, and L. F. Oliveira

Physiological Control of Pulsatile and Rotary Pediatric Ventricular Assist Devices . . . 213
T. R. Melo, T. D. Cordeiro, I. A. Cestari, J. S. da Rocha Neto, and A. M. N. Lima

Evaluation of Calcium Phosphate-Collagen Bone Cement: A Preliminary Study . . . 221
C. Ribeiro, V. A. D. Lima, and L. F. G. Setz

Qualitative Hemolysis Analyses in VAD by Stress Distribution Using Computational Hemodynamics . . . 227
G. B. Lopes Jr, E. G. P. Bock, and L. Cabezas-Gómez


Biomechanics and Rehabilitation

Development and Comparison of Different Implementations of Fuzzy Logic for Physical Capability Assessment in Knee Rehabilitation . . . 235
Thiago B. Susin, R. R. Baptista, Henrique S. Dias, and Fabian L. Vargas

Serious Games and Virtual Reality in the Treatment of Chronic Stroke: Both Sides Rehabilitation . . . 239
C. M. Alves, A. R. Rezende, I. A. Marques, D. C. Silva, T. S. Paiva, and E. L. M. Naves

Technologies Applied for Elbow Joint Angle Measurements: A Systematic Review . . . 245
A. R. Rezende, C. M. Alves, I. A. Marques, D. C. Silva, T. S. Paiva, and E. L. M. Naves

The Importance of Prior Training for Effective Use of the Motorized Wheelchair . . . 251
D. C. Silva, C. M. Alves, A. R. Rezende, I. A. Marques, T. S. Paiva, E. L. M. Naves, and E. A. Lamounier Junior

Kinematic Approach for 4 DoF Upper Limb Robotic Exoskeleton in Complex Rehabilitation Tasks . . . 257
Daily Milanés-Hermosilla, Roberto Sagaró-Zamora, Rafael Trujillo-Codorniú, Mauricio Torres-Quezada, D. Delisle-Rodriguez, and T. Bastos-Filho

An Integrated Method to Analyze Degenerative Bone Conditions on Transfemoral Amputees . . . 267
Leonardo Broche Vázquez, Claudia Ochoa-Diaz, Roberto Sagaró Zamora, and Antônio Padilha L. Bó

Forced Oscillations and Functional Analysis in Patients with Idiopathic Scoliosis . . . 277
Cíntia Moraes de Sá Sousa, Luis Eduardo Carelli, André L. C. Pessoa, Agnaldo José Lopes, and P. L. Melo

Differences in Respiratory Mechanics in Emphysema and Chronic Bronchitis Evaluated by Forced Oscillations . . . 285
E. M. Teixeira, A. J. Lopes, and P. L. Melo

Automated 3D Scanning Device for the Production of Forearm Prostheses and Orthoses . . . 293
M. C. de Oliveira, M. C. de Araújo, and M. G. N. M. da Silva

Low Amplitude Hand Rest Tremor Assessment in Parkinson’s Disease Based on Linear and Nonlinear Methods . . . 301
Amanda Rabelo, João Paulo Folador, Ana Paula Bittar, Luiza Maire, Samila Costa, Alice Rueda, S. Krishnan, Viviane Lima, Rodrigo M. A. Almeida, and Adriano O. Andrade

Adaptation of Automatic Postural Responses in the Dominant and Non-dominant Lower Limbs . . . 307
C. D. P. Rinaldin, J. A. De Oliveira, C. Ribeiro de Souza, E. M. Scheeren, E. F. Manffra, D. B. Coelho, and L. A. Teixeira


Influence of the Layout of the Ludic Tables on the Amplitude and Concentration of Upper Limb Movements . . . 313
E. J. Alberti, A. Brawerman, and S. F. Pichorim

Photobiomodulation Reduces Musculoskeletal Marker Related to Atrophy . . . 319
S. R. Gonçalves, C. R. Tim, C. Martignago, A. Renno, R. B. Silva, and L. Assis

Effectiveness of Different Protocols in Platelet-Rich Plasma Recovery . . . 325
C. Dall’Orto, R. Ramsdorf, L. Assis, and C. Tim

Evaluation of the Motor Performance of People with Parkinson’s Disease Through the Autocorrelation Function Estimated from Sinusoidal Drawings . . . 329
V. C. Lima, M. F. Vieira, A. A. Pereira, and A. O. Andrade

Shear Modulus of Triceps Surae After Acute Stretching . . . 337
M. C. A. Brandão, G. C. Teixeira, and L. F. Oliveira

Estimation of the Coordination Variability Between Pelvis-Thigh Segments During Gait at Different Walking Speeds in Sedentary Young People and Practitioners of Physical Activities . . . 343
G. A. G. De Villa, A. Abbasi, A. O. Andrade, and M. F. Vieira

Gait Coordination Quantification of Thigh-Leg Segments in Sedentary and Active Youngs at Different Speeds Using the Modified Vector Coding Technique . . . 349
G. A. G. De Villa, F. B. Rodrigues, A. Abbasi, A. O. Andrade, and M. F. Vieira

Demands at the Knee Joint During Jumps in Classically Trained Ballet Dancers . . . 353
T. S. Lemes, G. A. G. De Villa, A. P. Rodrigues, J. M. A. Galvão, R. M. Magnani, R. S. Gomide, M. B. Millan, E. M. Mesquita, L. C. C. Borges, and M. F. Vieira

Modeling and Analysis of Human Jump Dynamics for Future Application in Exoskeletons . . . 357
Mouhamed Zorkot, Wander Gustavo Rocha Vieira, and Fabricio Lima Brasil

Functional Electrical Stimulation Closed-Loop Strategy Using Agonist-Antagonist Muscles for Controlling Lower Limb Movements . . . 365
D. C. Souza, J. C. Palma, R. A. Starke, G. N. Nogueira-Neto, and P. Nohama

Control System Viability Analysis on Electrical Stimulation Platform . . . 373
D. G. S. Ribeiro and R. F. Kozan

Influence of the Use of Upper Limbs in the Vertical Jump on the Ground Reaction Force of Female Athletes from the Development Basketball Categories . . . 381
A. P. Xavier, S. C. Corrêa, E. R. S. Viana, C. S. Oliveira, and C. P. Guimarães

Wearable System for Postural Adaptation . . . 387
B. C. Bispo, N. A. Cunha, E. L. Cavalcante, G. R. P. Esteves, K. R. C. Ferreira, A. C. Chaves, and M. A. B. Rodrigues

On the Use of Wrist Flexion and Extension for the Evaluation of Motor Signs in Parkinson’s Disease . . . 395
N. R. Lígia, A. O. Adriano, and P. A. Adriano

Motor Performance Improvement in a Technological Age: A Literature Review . . . 401
J. F. Vilela, L. Basso, D. S. F. Magalhães, and A. F. Frade-Barros

Modulation of EMG Parameters During Ankle Plantarflexor Fatigue in Trained Gymnasts and Healthy Untrained Controls . . . 405
M. C. da Silva, C. R. da Silva, F. F. de Lima, J. R. Lara, J. P. Gustavson, and F. H. Magalhães

Amputation Rates in Southeastern Brazil . . . 413
A. R. F. Jorge, A. M. de Freitas, and A. B. Soares

Design and Manufacturing of Flexible Finger Prosthesis Using 3D Printing . . . 419
V. V. P. Pinto, J. E. Alvarez-Jácobo, and A. E. F. Da Gama

Characterization of Sensory Perception Associated with Transcutaneous Electrostimulation Protocols for Tactile Feedback Restoration . . . 425
A. C. P. R. Costa, F. A. C. Oliveira, S. R. J. Oliveira, and A. B. Soares

Differences in Shear Modulus Among Hamstring Muscles After an Acute Stretching . . . 433
G. C. Teixeira, M. C. A. Brandão, and L. F. Oliveira

Adapted Children’s Serious Game Using Dennis Brown Orthotics During the Preparatory Phase for the Gait . . . 439
A. T. D. Silva, B. C. Bispo, A. M. Conceição, M. B. C. Silva, G. R. P. Esteves, and M. A. B. Rodrigues

A Serious Game that Can Aid Physiotherapy Treatment in Children Using Dennis Brown Orthotics . . . 445
A. T. D. Silva, B. C. Bispo, G. R. P. Esteves, M. B. C. Silva, and M. A. B. Rodrigues

A Neuromodulation Decision Support System: A User-Centered Development Study . . . 451
G. B. A. Maranhão, T. M. De Santana, D. M. De Oliveira, and A. E. F. Da Gama

Mechanisms of Shoulder Injury in Wheelchair Users . . . 459
H. O. Rodrigues and O. L. Silva

Development and Customization of a Dennis Brown Orthosis Prototype Produced from Anthropometric Measurements by Additive Manufacturing . . . 465
A. T. D. Silva, A. S. Lages, J. V. B. D. Silveira, G. R. P. Esteves, B. C. Bispo, N. A. Santos, M. E. Kunkel, and M. A. B. Rodrigues

Comparison of Kinematic Data Obtained by Inertial Sensors and Video Analysis . . . 471
J. C. Silva, J. S. Oliveira, A. B. F. Luz, J. L. Pinheiro, and I. M. Miziara

NNMF Analysis to Individual Identification of Fingers Movements Using Force Feedback and HD-EMG . . . 477
V. C. Ecard, L. L. Menegaldo, and L. F. Oliveira

Nonlinear Closed-Loop Control of an OpenSim Wrist Model: Tuning Using Genetic Algorithm . . . 485
W. C. Pinheiro, P. B. Furlan, and L. L. Menegaldo

Quantification of Gait Coordination Variability of Pelvis-Thigh Segments in Young Sedentary and Practitioners People Walking in Different Slopes Using the Vector Coding Technique . . . 491
G. A. G. De Villa, L. Mochizuki, A. Abbasi, A. O. Andrade, and M. F. Vieira


Prediction of the Heel Displacement During the Gait Using Kalman Filter in 2D Image . . . 497
E. M. Mesquita, R. S. Gomide, R. P. Lemos, and M. F. Vieira

Achilles Tendon Tangent Modulus of Runners . . . 505
M. S. Pinto, C. A. R. Sánchez, M. C. A. Brandão, L. L. Menegaldo, and L. F. Oliveira

Modeling and Simulation of Lower Limbs Exoskeleton for Children with Locomotion Difficulties . . . 511
N. R. Wahbe, V. N. O. Louro, R. C. Martins, J. P. H. Lima, and R. R. de Faria

Remote Control Architecture for Virtual Reality Application for Ankle Therapy . . . 517
J. Y. M. Villamizar, I. Ostan, D. A. E. Ortega, and A. A. G. Siqueira

Callus Stiffness Evolution During the Healing Process—A Mechanical Approach . . . 523
J. R. O. S. Neto, R. R. P. Rodarte, and P. P. Kenedi

An Analytical Model for Knee Ligaments . . . 531
J. E. Silva, R. R. P. Rodarte, L. S. P. P. Segmiller, S. A. S. Barros, and P. P. Kenedi

Anterior–Posterior Ground Reaction Forces During Gait in Children and Elderly Women . . . 537
V. L. Vargas, A. G. Aires, G. S. Heidner, M. F. Vieira, and R. R. Baptista

Gait Speed, Cadence and Vertical Ground Reaction Forces in Children . . . 541
A. M. B. S. das Neves, A. G. Aires, V. L. Vargas, L. D. de Souza, T. B. de Souza, T. B. Villar, M. F. Vieira, and R. R. Baptista

May Angular Variation Be a Parameter for Muscular Condition Classification in SCI People Elicited by Neuromuscular Electrical Stimulation? . . . 547
C. D. P. Rinaldin, C. Papcke, E. Krueger, G. N. Nogueira-Neto, P. Nohama, and E. M. Scheeren

Kinematics and Synergies Differences Between Horizontal and Vertical Jump Test . . . 553
H. L. C. Oliveira, P. V. S. Moreira, and L. L. Menegaldo

Modeling, Control Strategies and Design of a Neonatal Respiratory Simulator . . . 563
A. B. A. Campos and A. T. Fleury

Application of Recurrence Quantifiers to Kinetic and Kinematic Biomechanical Data . . . 573
A. O. Assis, A. O. Andrade, and M. F. Vieira

Virtual Environment for Motor and Cognitive Rehabilitation: Towards a Joint Angle and Trajectory Estimation . . . 579
D. Soprani, T. Botelho, C. Tavares, G. Cruz, R. Zanoni, J. Lagass, S. Bino, and P. Garcez

Kinematic Model and Position Control of an Active Transtibial Prosthesis . . . 587
V. Biazi Neto, G. D. Peroni, A. Bento Filho, A. G. Leal Junior, and R. M. Andrade

Development of a Parallel Robotic Body Weight Support for Human Gait Rehabilitation . . . 593
L. A. O. Rodrigues and R. S. Gonçalves


Comparison Between the Passé and Coupé Positions in the Single Leg Turn Movement in a Brazilian Zouk Practitioner: A Pilot Study . . . 599
A. C. Navarro, A. P. Xavier, J. C. Albarello, C. P. Guimarães, and L. L. Menegaldo

Automatic Rowing Kinematic Analysis Using OpenPose and Dynamic Time Warping . . . 605
V. Macedo, J. Santos, and R. S. Baptista

Analysis of Seat-to-Hand Vibration Transmissibility in Seated Smartphone Users . . . 613
R. M. A. Dutra, G. C. Melo, and M. L. M. Duarte

Analysis of Matrix Factorization Techniques for Extraction of Motion Motor Primitives . . . 621
P. F. Nunes, I. Ostan, W. M. dos Santos, and A. A. G. Siqueira

Control Design Inspired by Motors Primitives to Coordinate the Functioning of an Active Knee Orthosis for Robotic Rehabilitation . . . 629
P. F. Nunes, D. Mosconi, I. Ostan, and A. A. G. Siqueira

Mechanical Design of an Active Hip and Knee Orthosis for Rehabilitation Applications . . . 637
J. P. C. D. Freire, N. A. Marafa, R. C. Sampaio, Y. L. Sumihara, J. B. de Barros, W. B. Vidal Filho, and C. H. Llanos

A Bibliometric Analysis of Lower Limb Exoskeletons for Rehabilitation Applications . . . 645
N. A. Marafa, C. H. Llanos, and P. W. G. Taco

Biomechatronic Analysis of Lower Limb Exoskeletons for Augmentation and Rehabilitation Applications . . . 653
N. A. Marafa, R. C. Sampaio, and C. H. Llanos

Numerical Methods Applied to the Kinematic Analysis of Planar Mechanisms and Biomechanisms . . . 661
H. N. Rosa, M. A. Bazani, and F. R. Chavarette

Project of a Low-Cost Mobile Weight Part Suspension System . . . 667
L. C. L. Dias, C. B. S. Vimieiro, R. L. L. Dias, and D. M. C. F. Camargos

Design of a Robotic Device Based on Human Gait for Dynamical Tests in Orthopedic Prosthesis . . . 673
D. Rosa, T. M. Barroso, L. M. Lessa, J. P. Gouvêa, and L. H. Duque

Analysis of an Application for Fall Risk Screening in the Elderly for Clinical Practice: A Pilot Study . . . 681
P. V. S. Moreira, L. H. C. Shinoda, A. Benedetti, M. A. M. R. Staroste, E. V. N. Martins, J. P. P. Beolchi, and F. M. Almeida

Development of Prosthesis and Evaluation of the Use in Short Term in a Left Pelvic Limb in a Dog . . . 689
G. C. Silveira, A. E. M. Pertence, A. R. Lamounier, E. B. Las Casas, M. H. H. Lage, M. X. Santos, and S. V. D. M. El-Awar

Estimation of Muscle Activations in Black Belt Taekwondo Athletes During the Bandal Chagui Kick Through Inverse Dynamics . . . 695
P. V. S. Moreira, K. A. Godoy Jaimes, and L. L. Menegaldo


Biomedical Devices and Instrumentation

Study of a Sonothrombolysis Equipment Based on Phased Ultrasound Technique . . . 705
A. T. Andrade and S. S. Furuie

Development and Characterization of a Transceiver Solenoid RF Coil for MRI Acquisition of Ex Situ Brain Samples at 7 Teslas . . . 711
L. G. C. Santos, K. T. Chaim, and D. Papoti

Analysis of Breast Cancer Detection Based on Software-Defined Radio Technology . . . 717
D. Carvalho, A. J. Aragão, F. A. Brito-Filho, H. D. Hernandez, and W. A. M. V. Noije

Prototype of a Peristaltic Pump for Applications in Biological Phantoms . . . 725
I. Sánchez-Domínguez, I. E. Pérez-Ruiz, J. Chan Pérez, and E. Perez-Rueda

Performance Evaluation of an OOK-Based Visible Light Communication System for Transmission of Patient Monitoring Data . . . 731
K. vd Zwaag, R. Lazaro, M. Marinho, G. Acioli, A. Santos, L. Rimolo, W. Costa, F. Santos, T. Bastos, M. Segatto, H. Rocha, and J. Silva

Equipment for the Detection of Acid Residues in Hemodialysis Line . . . 737
Angélica Aparecida Braga, F. B. Vilela, Elisa Rennó Carneiro Déster, and F. E. C. Costa

Using of Fibrin Sealant on Treatment for Tendon Lesion: Study in Vivo . . . 741
Enéas de Freitas Dutra Junior, S. M. C. M. Hidd, M. M. Amaral, A. L. M. Maia Filho, L. Assis, R. S. Ferreira, B. Barraviera, and C. R. Tim

Analysis of Respiratory EMG Signals for Cough Prediction . . . 745
T. D. Costa, T. Z. Zanella, C. S. Cristino, C. Druzgalski, Guilherme Nunes Nogueira-Neto, and P. Nohama

Integrating Power Line and Visible Light Communication Technologies for Data Transmission in Hospital Environments . . . 751
R. Lazaro, Klaas Minne Van Der Zwaag, Wesley Costa, G. Acioli, M. Marinho, Mariana Khouri, G. C. Vivas, F. Santos, T. Bastos-Filho, Marcelo Vieira Segatto, H. Rocha, and Jair Adriano Lima Silva

Design of an Alternating Current Field Controller for Electrodes Exposed to Saline Solutions . . . 757
J. L. dos Santos, S. A. Mendanha, and Sílvio Leão Vieira

Electrooculography: A Proposed Methodology for Sensing Human Eye Movement . . . 763
G. de Melo and Sílvio Leão Vieira

Analysis of Inductance in Flexible Square Coil Applied to Biotelemetry . . . 771
S. F. Pichorim

A Method for Illumination of Chorioallantoic Membrane (CAM) of Chicken Embryo in Microscope . . . 777
M. K. Silva, I. R. Stoltz, L. T. Rocha, L. F. Pereira, M. A. de Souza, and G. B. Borba

Development and Experimentation of a rTMS Device for Rats . . . 785
P. R. S. Sanches, D. P. Silva, A. F. Müller, P. R. O. Thomé, A. C. Rossi, B. R. Tondin, R. Ströher, L. Santos, and I. L. S. Torres

Diagnostic and Monitoring of Atrial Fibrillation Using Wearable Devices: A Scoping Review . . . 791
Renata S. Santos, M. D. C. McInnis, and J. Salinet

The Effects of Printing Parameters on Mechanical Properties of a Rapidly Manufactured Mechanical Ventilator . . . 799
T. R. Santos, M. A. Pastrana, W. Britto, D. M. Muñoz, and M. N. D. Barcelos

Fully Configurable Real-Time Ultrasound Platform for Medical Imaging Research . . . 807
S. Rodriguez, A. F. Osorio, R. O. Silva, L. R. Domingues, H. J. Onisto, G. C. Fonseca, J. E. Bertuzzo, A. A. Assef, J. M. Maixa, A. A. O. Carneiro, and E. T. Costa

Proposal of a Low Profile Piezoelectric Based Insole to Measure the Perpendicular Force Applied by a Cyclist . . . 813
M. O. Araújo and A. Balbinot

Soft Sensor for Hand-Grasping Force by Regression of an sEMG Signal . . . 821
E. E. Neumann and A. Balbinot

Development of a Low-Cost, Open-Source Transcranial Direct-Current Stimulation Device (tDCS) for Clinical Trials . . . 827
N. C. Teixeira-Neto, R. T. Azevedo-Cavalcanti, M. G. N. Monte-da-Silva, and A. E. F. Da-Gama

Velostat-Based Pressure Sensor Matrix for a Low-Cost Monitoring System Applied to Prevent Decubitus Ulcers . . . 835
T. P. De A. Barros, J. M. X. N. Teixeira, W. F. M. Correia, and A. E. F. Da Gama

Analysis and Classification of EEG Signals from Passive Mobilization in ICU Sedated Patients and Non-sedated Volunteers . . . 843
G. C. Florisbal, J. Machado, L. B. Bagesteiro, and A. Balbinot

Use of Fluorescence in the Diagnosis of Oral Health in Adult Patients Admitted to the Intensive Care Unit of a Public Emergency Hospital . . . 849
M. D. L. P. Matos, E. N. Santos, M. D. C. R. M. Silva, A. Pavinatto, and M. M. Costa

A Non-invasive Photoemitter for Healing Skin Wounds . . . 857
F. J. Santos, D. G. Gomes, J. P. V. Madeiro, A. C. Magalhães, and M. Sousa

Intermittent Suctioning for Patients Under Artificial Ventilation: A Digital Model Study . . . 865
F. V. O. C. Médice, M. C. B. Pereira, H. R. Martins, and A. C. Jardim-Neto

Prototype for Testing Frames of Sunglasses . . . 873
Larissa Vieira Musetti and Liliane Ventura

Evaluation of Hyperelastic Constitutive Models Applied to Airway Stents Made by a 3D Printer . . . 881
A. F. Müller, Danton Pereira da Silva Junior, P. R. S. Sanches, P. R. O. Thomé, B. R. Tondin, A. C. Rossi, Alessandro Nakoneczny Schildt, and Luís Alberto Loureiro dos Santos


Are PSoC Noise Levels Low Enough for Single-Chip Active EMG Electrodes? . . . 891
W. S. Araujo, I. Q. Moura, and A. Siqueira-Junior

Implementation of an Ultrasound Data Transfer System via Ethernet with FPGA-Based Embedded Processing . . . 897
J. de Oliveira, A. A. Assef, R. A. C. Medeiros, J. M. Maia, and E. T. Costa

Software for Physiotherapeutic Rehabilitation: A Study with Accelerometry . . . 905
A. M. Conceição, M. C. P. Souza, L. P. G. Macedo, R. J. R. S. Lucena, A. V. M. Inocêncio, G. S. Marques, P. H. O. Silva, and M. A. B. Rodrigues

B-Mode Ultrasound Imaging System Using Raspberry Pi . . . 909
R. A. C. Medeiros, A. A. Assef, J. de Oliveira, J. M. Maia, and E. T. Costa

Access Control in Hospitals with RFID and BLE Technologies . . . 917
B. C. Bispo, E. L. Cavalcante, G. R. P. Esteves, M. B. C. Silva, G. J. Alves, and M. A. B. Rodrigues

Piezoelectric Heart Monitor . . . 925
A. de S. Morangueira Filho, G. V. B. Magalhães, and F. L. Lopes

Transducer for the Strengthening of the Pelvic Floor Through Electromyographic Biofeedback . . . 935
C. M. Silva, B. C. Bispo, G. R. P. Esteves, E. L. Cavalcante, A. L. B. Oliveira, M. B. C. Silva, N. A. Cunha, and M. A. B. Rodrigues

Monitoring Hemodynamic Parameters in the Terrestrial and Aquatic Environment: An Application in a 6-min Walk Test . . . 941
K. R. C. Ferreira, A. V. M. Inocêncio, A. C. Chaves Filho, R. P. N. Lira, P. S. Lessa, and M. A. B. Rodrigues

Scinax tymbamirim Amphibian Advertisement Sound Emulator Based on Arduino . . . 947
K. C. Grande, V. H. H. Bezerra, J. G. V. Crespim, R. V. N. da Silva, and B. Schneider Jr.

Microphone and Receiver Calibration System for Otoacoustic Emission Probes . . . 953
M. C. Tavares, A. B. Pizzetta, and M. H. Costa

Pneumotachograph Calibration: Influence of Regularization Methods on Parameter Estimation and the Use of Alternative Calibration Models . . . 961
A. D. Quelhas, G. C. Motta-Ribeiro, A. V. Pino, A. Giannella-Neto, and F. C. Jandre

Geriatric Physiotherapy: A Telerehabilitation System for Identifying Therapeutic Exercises . . . 969
A. C. Chaves Filho, A. V. M. Inocêncio, K. R. C. Ferreira, E. L. Cavalcante, B. C. Bispo, C. M. B. Rodrigues, P. S. Lessa, and M. A. B. Rodrigues

UV Equipment for Food Safety . . . 975
F. B. Menechelli, V. B. Santana, R. S. Navarro, D. I. Kozusny-Andreani, A. Baptista, and S. C. Nunez

Method to Estimate Doses in Real-Time for the Eyes . . . . . . . . . . . . . . . . . . . . . J. C. de C. Lourenco, S. A. Paschuk, and H. R. Schelin

981


An IoT-Cloud Enabled Real Time and Energy Efficient ECG Reader Architecture . . . 989
M. C. B. Pereira, J. P. V. Madeiro, Adriel O. Freitas, and D. G. Gomes

Development of a Hydraulic Model of the Microcontrolled Human Circulatory System . . . 997
Andrew Guimarães Silva, B. S. Santos, M. N. Oliveira, L. J. Oliveira, D. G. Goroso, J. Nagai, and R. R. Silva

A Design Strategy to Control an Electrosurgery Unit Output . . . 1003
Paulo Henrique Duarte Camilo and I. A. Cestari

Near Field Radar System Modeling for Microwave Imaging and Breast Cancer Detection Applications . . . 1009
F. A. Brito-Filho, D. Carvalho, and W. A. M. V. Noije

Biomedical Optics and Systems and Technologies for Therapy and Diagnosis

Development and Validation of a New Hardware for a Somatosensorial Electrical Stimulator Based on Howland Current-Source Topology . . . 1019
L. V. Almeida, W. A. de Paula, R. Zanetti, A. Beda, and H. R. Martins

Autism Spectrum Disorder: Smart Child Stimulation Center for Integrating Therapies . . . 1027
R. O. B. Ana Letícia, C. M. Lívia, A. P. Phellype, B. V. Filipe, L. F. A. Anahid, and S. A. Rani

Fuzzy System for Identifying Pregnancy with a Risk of Maternal Death . . . 1035
C. M. D. Xesquevixos and E. Araujo

Evaluation of Temperature Changes Promoted in Dental Enamel, Dentin and Pulp During the Tooth Whitening with Different Light Sources . . . 1041
Fabrizio Manoel Rodrigues, Larissa Azevedo de Moura, and P. A. Ana

Effects of a Low-Cost LED Photobiomodulation Therapy Equipment on the Tissue Repair Process . . . 1047
F. E. D. de Alexandria, N. C. Silva, A. L. M. M. Filho, D. C. L. Ferreira, K. dos S. Silva, L. R. da Silva, L. Assis, N. A. Parizotto, and C. R. Tim

Occupational Dose in Pediatric Barium Meal Examinations . . . 1051
G. S. Nunes, R. B. Doro, R. R. Jakubiak, J. A. P. Setti, F. S. Barros, and V. Denyak

Synthesis of N-substituted Maleimides Potential Bactericide . . . 1057
A. C. Trindade and A. F. Uchoa

A Review About the Main Technologies to Fight COVID-19 . . . 1063
P. A. Cardoso, D. L. Tótola, E. V. S. Freitas, M. A. P. Arteaga, D. Delisle-Rodríguez, F. A. Santos, and T. F. Bastos-Filho

Evaluation of Cortisol Levels in Artificial Saliva by Paper Spray Mass Spectrometry . . . 1067
A. R. E. Dias, B. L. S. Porto, B. V. M. Rodrigues, and T. O. Mendes

Tinnitus Relief Using Fractal Sound Without Sound Amplification . . . 1073
A. G. Tosin and F. S. Barros

Analysis of the Heat Propagation During Cardiac Ablation with Cooling of the Esophageal Wall: A Bidimensional Computational Modeling . . . 1079
S. de S. Faria, P. C. de Souza, C. F. da Justa, S. de S. R. F. Rosa, and A. F. da Rocha


Development of a Rapid Test for Determining the ABO and Rh-Blood Typing Systems . . . 1087
E. B. Santiago and R. J. Ferreira

In Silico Study on Electric Current Density in the Brain During Electrochemotherapy Treatment Planning of a Dog’s Head Osteosarcoma . . . 1095
R. Guedert, M. M. Taques, I. B. Paro, M. M. M. Rangel, and D. O. H. Suzuki

Evaluation of Engorged Puerperal Breast by Thermographic Imaging: A Pilot Study . . . 1101
L. B. da Silva, A. C. G. Lima, J. L. Soares, L. dos Santos, and M. M. Amaral

Study of the Photo Oxidative Action of Brosimum gaudichaudii Extract . . . 1105
V. M. de S. Antunes, C. L. de L. Sena, and A. F. Uchoa

Electrochemotherapy Effectiveness Loss Due to Electrode Bending: An In Silico and In Vitro Study . . . 1109
D. L. L. S. Andrade, J. R. da Silva, R. Guedert, G. B. Pintarelli, J. A. Berkenbrock, S. Achenbach, and D. O. H. Suzuki

Conductive Gels as a Tool for Electric Field Homogenization and Electroporation in Discontinuous Regions: In Vitro and In Silico Study . . . 1115
L. B. Lopes, G. B. Pintarelli, and D. O. H. Suzuki

Line Shape Analysis of Cortisol Infrared Spectra for Salivary Sensors: Theoretical and Experimental Observations . . . 1121
C. M. A. Carvalho, B. L. S. Porto, B. V. M. Rodrigues, and T. O. Mendes

Differential Diagnosis of Glycosuria Using Raman Spectroscopy . . . 1129
E. E. Sousa Vieira, L. Silveira Junior, and A. Barrinha Fernandes

Development of a Moderate Therapeutic Hypothermia Induction and Maintenance System for the Treatment of Traumatic Brain Injury . . . 1135
Reynaldo Tronco Gasparini, Antonio Luis Eiras Falcão, and José Antonio Siqueira Dias

Temperature Generation and Transmission in Root Dentin During Nd:YAG Laser Irradiation for Preventive Purposes . . . 1141
Claudio Ricardo Hehl Forjaz, Denise Maria Zezell, and P. A. Ana

Photobleaching of Methylene Blue in Biological Tissue Model (Hydrolyzed Collagen) Using Red (635 nm) Radiation . . . 1147
G. Lepore, P. S. Souza, P. A. Ana, and N. A. Daghastanli

Effect of Photodynamic Inactivation of Propionibacterium Acnes Biofilms by Hypericin (Hypericum perforatum) . . . 1153
R. A. Barroso, R. Navarro, C. R. Tim, L. P. Ramos, L. D. de Oliveira, A. T. Araki, D. B. Macedo, K. G. Camara Fernandes, and L. Assis

Evaluating Acupuncture in Vascular Disorders of the Lower Limb Through Infrared Thermography . . . 1157
Wally auf der Strasse, A. Pinto, M. F. F. Vara, E. L. Santos, M. Ranciaro, P. Nohama, and J. Mendes

Photodynamic Inactivation in Vitro of the Pathogenic Fungus Paracoccidioides brasiliensis . . . 1165
José Alexandre da Silva Júnior, R. S. Navarro, A. U. Fernandes, D. I. Kozusny-Andreani, and L. S. Feitosa


In Vitro Study of the Microstructural Effects of Photodynamic Therapy in Medical Supplies When Used for Disinfection . . . 1173
A. F. Namba, M. Del-Valle, N. A. Daghastanli, and P. A. Ana

Technological Development of a Multipurpose Molecular Point-of-Care Device for Sars-Cov-2 Detection . . . 1183
L. R. Nascimento, V. K. Oliveira, B. D. Camargo, G. T. Mendonça, M. C. Stracke, M. N. Aoki, L. Blanes, L. G. Morello, and S. L. Stebel

Chemical Effects of Nanosecond High-Intensity IR and UV Lasers on Biosilicate® When Used for Treating Dentin Incipient Caries Lesions . . . 1189
M. Rodrigues, J. M. F. B. Daguano, and P. A. Ana

Antioxidant Activity of the Ethanol Extract of Salpichlaena Volubilis and Its Correlation with Alopecia Areata . . . 1197
A. B. Souza, T. K. S. Medeiros, D. Severino, C. J. Francisco, and A. F. Uchoa

Evaluation of the Heart Rate Variability with Laser Speckle Imaging . . . 1205
C. M. S. Carvalho, A. G. F. Marineli, L. dos Santos, A. Z. de Freitas, and M. M. Amaral

Photobiomodulation and Laserpuncture Evaluation for Knee Osteoarthritis Treatment: A Literature Review . . . 1211
L. G. C. Corrêa, D. S. F. Magalhães, A. Baptista, and A. F. Frade-Barros

Methodology for the Classification of an Intraocular Lens with an Orthogonal Bidimensional Refractive Sinusoidal Profile . . . 1217
Diogo Ferraz Costa and D. W. d. L. Monteiro

Discrimination Between Artisanal and Industrial Cassava by Raman Spectroscopy . . . 1225
Estela Doria, S. C. Nunez, R. S. Navarro, J. C. Cogo, T. O. Mendes, and A. F. Frade-Barros

Effect of Light Emitted by Diode as Treatment of Radiodermatitis . . . 1231
Cristina Pires Camargo, H. A. Carvalho, R. Gemperli, Cindy Lie Tabuse, Pedro Henrique Gianjoppe dos Santos, Lara Andressa Ordonhe Gonçales, Carolina Lopo Rego, B. M. Silva, M. H. A. S. Teixeira, Y. O. Feitosa, F. H. P. Videira, and G. A. Campello

Effect of Photobiomodulation on Osteoblast-like Cells Cultured on Lithium Disilicate Glass-Ceramic . . . 1237
L. T. Fabretti, A. C. D. Rodas, V. P. Ribas, J. K. M. B. Daguano, and I. T. Kato

Reference Values of Current Perception Threshold in Adult Brazilian Cohort . . . 1243
Diogo Correia e Silva, A. P. Fontana, M. K. Gomes, and C. J. Tierra-Criollo

Analysis of the Quality of Sunglasses in the Brazilian Market in Terms of Ultraviolet Protection . . . 1249
L. M. Gomes, A. D. Loureiro, M. Masili, and Liliane Ventura

Do Sunglasses on Brazilian Market Have Blue-Light Protection? . . . 1253
A. D. Loureiro, L. M. Gomes, and Liliane Ventura


Thermography and Semmes-Weinstein Monofilaments in the Sensitivity Evaluation of Diabetes Mellitus Type 2 Patients . . . 1259
G. C. Mendes, F. S. Barros, and P. Nohama

Biomedical Robotics, Assistive Technologies and Health Informatics

Design and Performance Evaluation of a Custom 3D Printed Thumb Orthosis to Reduce Occupational Risk in an Automotive Assembly Line . . . 1269
H. Toso, D. P. Campos, H. V. P. Martins, R. Wenke, M. Salatiel, J. A. P. Setti, and G. Balbinotti

Handy Orthotics: Considerations on User-Centered Methodology During Development Stages of Myoelectric Hand Orthosis for Daily Assistance . . . 1277
H. V. P. Martins, J. A. P. Setti, and C. Guimarães

Human Activity Recognition System Using Artificial Neural Networks . . . 1285
Vinícius Ferreira De Almeida and Rodrigo Varejão Andreão

Modeling and Simulation of a Fuzzy-Based Human Control Using an Interaction Model Between Human and Active Knee Orthosis . . . 1293
D. Mosconi, P. F. Nunes, and Siqueira

Wearable Devices in Healthcare: Challenges, Current Trends and a Proposition of Affordable Low Cost and Scalable Computational Environment of Internet of Things . . . 1301
Fabrício Martins Mendonça, Mário A. R. Dantas, Wallace T. Fortunato, Juan F. S. Oliveira, Breno C. Souza, and Marcelo Q. Filgueiras

Use of RGB-D Camera for Analysis of Compensatory Trunk Movements in Upper Limbs Rehabilitation . . . 1309
Alice Tissot Garcia, L. L. da C. Guimarães, S. A. V. e Silva, and V. M. de Oliveira

Repetitive Control Applied to a Social Robot for Interaction with Autistic Children . . . 1319
R. P. A. Pereira, C. T. Valadao, E. M. O. Caldeira, J. L. F. Salles, and T. F. Bastos-Filho

Analysis of Sensors in the Classification of the Brazilian Sign Language . . . 1325
Thiago Simões Dias, J. J. A. Mendes Júnior, and S. F. Pichorim

A Smart Wearable System for Firefighters for Monitoring Gas Sensors and Vital Signals . . . 1333
Letica Teixeira Nascimento, M. E. M. Araujo, M. W. A. Santos, P. D. Boina, J. V. F. Gomes, M. K. Rosa, T. F. Bastos-Filho, K. S. Komati, and R. J. M. G. Tello

Importance of Sequencing the SARS-CoV-2 Genome Using the Nanopore Technique to Understand Its Origin, Evolution and Development of Possible Cures . . . 1341
A. M. Corredor-Vargas, R. Torezani, G. Paneto, and T. F. Bastos-Filho

Vidi: Artificial Intelligence and Vision Device for the Visually Impaired . . . 1345
R. L. A. Pinheiro and F. B. Vilela

Mobile Application for Aid in Identifying Fall Risk in Elderly: App Fisioberg . . . 1355
D. C. Gonçalves, F. P. Pinto, and D. S. F. Magalhães


Real-Time Slip Detection and Control Using Machine Learning . . . 1363
Alexandre Henrique Pereira Tavares and S. R. J. Oliveira

Programmable Multichannel Neuromuscular Electrostimulation System: A Universal Platform for Functional Electrical Stimulation . . . 1371
T. Coelho-Magalhães, A. F. Vilaça-Martins, P. A. Araújo, and H. Resende-Martins

Absence from Work in Pregnancy Related to Racial Factors: A Bayesian Analysis in the State of Bahia—Brazil . . . 1379
A. A. A. R. Monteiro, M. S. Guimarães, E. F. Cruz, and D. S. F. Magalhães

Perspectives on EMG-Controlled Prosthetic Robotic Hands: Trends and Challenges . . . 1387
C. E. Pontim, Arturo Vaine, H. V. P. Martins, Kevin Christlieb Deessuy, Eduardo Felipe Ardigo Braga, José Jair Alves Mendes Júnior, and D. P. Campos

Use of Workspaces and Proxemics to Control Interaction Between Robot and Children with ASD . . . 1393
Giancarlo Pedroni Del Piero, E. M. O. Caldeira, and T. F. Bastos-Filho

Proposal of a New Socially Assistive Robot with Embedded Serious Games for Therapy with Children with Autistic Spectrum Disorder and Down Syndrome . . . 1399
João Antonio Campos Panceri, E. V. S. Freitas, S. L. Schreider, E. Caldeira, and T. F. Bastos-Filho

Performance Assessment of Wheelchair Driving in a Virtual Environment Using Head Movements . . . 1407
E. F. dos Santos Junior, J. T. de Souza, F. R. Martins, D. de Cassia Silva, and E. L. M. Naves

Use of Artificial Intelligence in Brazil Mortality Data Analysis . . . 1415
Sérgio de Vasconcelos Filho and Cristine Martins Gomes de Gusmão

Dual Neural Network Approach for Virtual Sensor at Indoor Positioning System . . . 1423
Guilherme Rodrigues Pedrollo and A. Balbinot

Development of Simulation Platform for Human-Robot-Environment Interface in the UFES CloudWalker . . . 1431
J. C. Rocha-Júnior, R. C. Mello, T. F. Bastos-Filho, and A. Frizera-Neto

Assisted Navigation System for the Visually Impaired . . . 1439
Malki-çedheq Benjamim C. Silva, B. C. Bispo, C. M. Silva, N. A. Cunha, E. A. B. Santos, and M. A. B. Rodrigues

Development of Bionic Hand Using Myoelectric Control for Transradial Amputees . . . 1445
C. E. Pontim, M. G. Alves, J. J. A. Mendes Júnior, D. P. Campos, and J. A. P. Setti

Communication in Hospital Environment Using Power Line Communications . . . 1451
N. A. Cunha, B. C. Bispo, K. R. C. Ferreira, G. J. Alves, G. R. P. Esteves, E. A. B. Santos, and M. A. B. Rodrigues


Influence of Visual Clue in the Motor Adaptation Process . . . 1457
V. T. Costa, S. R. J. Oliveira, and A. B. Soares

Application of MQTT Network for Communication in Healthcare Establishment . . . 1465
N. A. Cunha, B. C. Bispo, E. L. Cavalcante, A. V. M. Inocêncio, G. J. Alves, and M. A. B. Rodrigues

Artificial Neural Network-Based Shared Control for Smart Wheelchairs: A Fully-Manual Driving for the User . . . 1471
J. V. A. e Souza, L. R. Olivi, and E. Rohmer

Identifying Deficient Cognitive Functions Using Computer Games: A Pilot Study . . . 1479
Luciana Rita Guedes, Larissa Schueda, Marcelo da Silva Hounsell, and A. S. Paterno

A State of the Art About Instrumentation and Control Systems from Body Motion for Electric-Powered Wheelchairs . . . 1487
A. X. González-Cely, M. Callejas-Cuervo, and T. Bastos-Filho

Socket Material and Coefficient of Friction Influence on Residuum-Prosthesis Interface Stresses for a Transfemoral Amputee: A Finite Element Analysis . . . 1495
Alina de Souza Leão Rodrigues and A. E. F. Da Gama

Subject Specific Lower Limb Joint Mechanical Assessment for Indicative Range Operation of Active Aid Device on Abnormal Gait . . . 1503
Carlos Rodrigues, M. Correia, J. Abrantes, M. A. B. Rodrigues, and J. Nadal

Web/Mobile Technology as a Facilitator in the Cardiac Rehabilitation Process: Review Study . . . 1511
Hildete de Almeida Galvão and F. S. Barros

Biomedical Signal and Image Processing

Regression Approach for Cranioplasty Modeling . . . 1519
M. G. M. Garcia and S. S. Furuie

Principal Component Analysis in Digital Image Processing for Automated Glaucoma Diagnosis . . . 1527
C. N. Neves, D. S. da Encarnação, Y. C. Souza, A. O. R. da Silva, F. B. S. Oliveira, and P. E. Ambrósio

Pupillometric System for Cognitive Load Estimation in Noisy-Speech Intelligibility Psychoacoustic Experiments: Preliminary Results . . . 1533
A. L. Furlani, M. H. Costa, and M. C. Tavares

VGG FACE Fine-Tuning for Classification of Facial Expression Images of Emotion . . . 1539
P. F. Jaquetti, Valfredo Pilla Jr, G. B. Borba, and H. R. Gamba

Estimation of Magnitude-Squared Coherence Using Least Square Method and Phase Compensation: A New Objective Response Detector . . . 1547
F. Antunes and L. B. Felix

Image Processing as an Auxiliary Methodology for Analysis of Thermograms . . . 1553
C. A. Schadeck, F. Ganacim, L. Ulbricht, and Cezar Schadeck

Performance Comparison of Different Classifiers Applied to Gesture Recognition from sEMG Signals . . . 1561
B. G. Sgambato and G. Castellano


Modelling of Inverse Problem Applied to Image Reconstruction in Tomography Systems . . . 1569
J. G. B. Wolff, G. Gueler Dalvi, and P. Bertemes-Filho

Towards a Remote Vital Sign Monitoring in Accidents . . . 1575
A. Floriano, R. S. Rosa, L. C. Lampier, E. Caldeira, and T. F. Bastos-Filho

Analysis About SSVEP Response to 5.5–86.0 Hz Flicker Stimulation . . . 1581
G. S. Ferreira, P. F. Diez, and S. M. T. Müller

Correlations Between Anthropometric Measurements and Skin Temperature, at Rest and After a CrossFit® Training Workout . . . 1589
E. B. Neves, A. C. C. Salamunes, F. De Meneck, E. C. Martinez, and V. M. Reis

Channel Influence in Armband Approach for Gesture Recognition by sEMG Signals . . . 1597
J. J. A. Mendes Jr., M. L. B. Freitas, D. P. Campos, C. E. Pontim, S. L. Stevan Jr., and S. F. Pichorim

Human Activity Recognition from Accelerometer Data with Convolutional Neural Networks . . . 1603
Gustavo de Aquino e Aquino, M. K. Serrão, M. G. F. Costa, and C. F. F. Costa-Filho

Multimodal Biometric System Based on Autoencoders and Learning Vector Quantization . . . 1611
C. F. F. Costa-Filho, J. V. Negreiro, and M. G. F. Costa

Evaluating the Performance of Convolutional Neural Networks with Direct and Sequential Acyclic Graph Architectures in Automatic Segmentation of Breast Lesions in Ultrasound Images . . . 1619
Gustavo de Aquino e Aquino, M. K. Serrão, M. G. F. Costa, and C. F. F. Costa-Filho

Evaluation and Systematization of the Transfer Function Method for Cerebral Autoregulation Assessment . . . 1627
A. M. Duarte, R. R. Costa, F. S. Moura, A. S. M. Salinet, and J. Salinet

Anomaly Detection Using Autoencoders for Movement Prediction . . . 1635
L. J. L. Barbosa, A. L. Delis, P. V. P. Cotta, V. O. Silva, M. D. C. Araujo, and A. Rocha

A Classifier Ensemble Method for Breast Tumor Classification Based on the BI-RADS Lexicon for Masses in Mammography . . . 1641
Juanita Hernández-López and Wilfrido Gómez-Flores

A Comparative Study of Neural Computing Approaches for Semantic Segmentation of Breast Tumors on Ultrasound Images . . . 1649
Luis Eduardo Aguilar-Camacho, Wilfrido Gómez-Flores, and Juan Humberto Sossa-Azuela

A Preliminary Approach to Identify Arousal and Valence Using Remote Photoplethysmography . . . 1659
L. C. Lampier, E. Caldeira, D. Delisle-Rodriguez, A. Floriano, and T. F. Bastos-Filho


Multi-label EMG Classification of Isotonic Hand Movements: A Suitable Method for Robotic Prosthesis Control . . . 1665
José Jair Alves Mendes Junior, C. E. Pontim, and D. P. Campos

Evaluation of Vectorcardiography Parameters Matrixed Synthesized . . . 1673
Amanda Nunes Barros, V. B. S. Luz, F. M. C. Bezerra, R. M. Tavares, F. Albrecht, A. C. Murta, R. Viana, M. C. Lambauer, E. B. Correia, R. Hortegal, and H. T. Moriya

Microstate Graphs: A Node-Link Approach to Identify Patients with Schizophrenia . . . 1679
Lorraine Marques Alves, Klaus Fabian Côco, Mariane Lima de Souza, and Patrick Marques Ciarelli

Eigenspace Beamformer Combined with Generalized Sidelobe Canceler and Filters for Generating Plane Wave Ultrasound Images . . . 1687
L. C. Neves, J. M. Maia, A. J. Zimbico, D. F. Gomes, A. A. Assef, and E. T. Costa

Anatomical Atlas of the Human Head for Electrical Impedance Tomography . . . 1693
L. A. Ferreira, R. G. Beraldo, E. D. L. B. Camargo, and F. S. Moura

Sparse Arrays Method with Generalized Sidelobe Canceler Beamformer for Improved Contrast and Resolution in Ultrasound Ultrafast Imaging . . . 1701
D. F. Gomes, J. M. Maia, A. J. Zimbico, A. A. Assef, L. C. Neves, F. K. Schneider, and E. T. Costa

Center of Mass Estimation Using Kinect and Postural Sway . . . 1707
G. S. Oliveira, Marcos R. P. Menuchi, and P. E. Ambrósio

Estimation of Directed Functional Connectivity in Neurofeedback Training Focusing on the State of Attention . . . 1713
W. D. Casagrande, E. M. Nakamura-Palacios, and A. Frizera-Neto

Characterization of Electroencephalogram Obtained During the Resolution of Mathematical Operations Using Recurrence Quantification Analysis . . . 1719
A. P. Mendes, G. M. Jarola, L. M. A. Oliveira, G. J. L. Gerhardt, J. L. Rybarczyk-Filho, and L. dos Santos

Performance Evaluation of Machine Learning Techniques Applied to Magnetic Resonance Imaging of Individuals with Autism Spectrum Disorder . . . 1727
V. F. Carvalho, G. F. Valadão, S. T. Faceroli, F. S. Amaral, and M. Rodrigues

Biomedical Signal Data Features Dimension Reduction Using Linear Discriminant Analysis and Threshold Classifier in Case of Two Multidimensional Classes . . . 1733
E. R. Pacola and V. I. Quandt

Facial Thermal Behavior Pre, Post and 24 h Post-Crossfit® Training Workout: A Pilot Study . . . 1739
D. B. Castillo, V. A. A. Bento, E. B. Neves, E. C. Martinez, F. De Merneck, V. M. Reis, M. L. Brioschi, and D. S. Haddad

Implementation of One and Two Dimensional Analytical Solution of the Wave Equation: Dirichlet Boundary Conditions . . . 1747
S. G. Mello, C. Benetti, and A. G. Santiago


Application of the Neumann Boundary Conditions to One and Two Dimensional Analytical Solution of Wave Equation . . . 1755
S. G. Mello, C. Benetti, and A. G. Santiago

Comparative Analysis of Parameters of Magnetic Resonance Examinations for the Accreditation Process of a Diagnostic Image Center in Curitiba/Brazil . . . 1761
M. V. C. Souza, R. Z. V. Costa, and K. E. P. Pinho

Information Theory Applied to Classifying Skin Lesions in Supporting the Medical Diagnosis of Melanomas . . . 1767
L. G. de Q. Silveira-Júnior, B. Beserra, and Y. K. R. de Freitas

Alpha Development of Software for 3D Segmentation and Reconstruction of Medical Images for Use in Pre-treatment Simulations for Electrochemotherapy: Implementation and Case Study . . . 1773
J. F. Rodrigues, Daniella L. L. S. Andrade, R. Guedert, and D. O. H. Suzuki

A Systematic Review on Image Registration in Interventionist Procedures: Ultrasound and Magnetic Resonance . . . 1781
G. F. Carniel, A. C. D. Rodas, and A. G. Santiago

Application of Autoencoders for Feature Extraction in BCI-SSVEP . . . 1787
R. Granzotti, G. V. Vargas, and L. Boccato

An LPC-Based Approach to Heart Rhythm Estimation . . . 1795
J. S. Lima, F. G. S. Silva, and J. M. Araujo

Spectral Variation Based Method for Electrocardiographic Signals Compression . . . 1801
V. V. de Morais, P. X. de Oliveira, E. B. Kapisch, and A. J. Ferreira

Centre of Pressure Displacements in Transtibial Amputees . . . 1809
D. C. Toloza and L. A. Luengas

Lower Limb Frequency Response Function on Standard Maximum Vertical Jump . . . 1815
C. Rodrigues, M. Correia, J. Abrantes, M. A. B. Rodrigues, and J. Nadal

Time-Difference Electrical Impedance Tomography with a Blood Flow Model as Prior Information for Stroke Monitoring . . . 1823
R. G. Beraldo and F. S. Moura

Development of a Matlab-Based Graphical User Interface for Analysis of High-Density Surface Electromyography Signals . . . 1829
I. S. Oliveira, M. A. Favretto, S. Cossul, and J. L. B. Marques

Development of an Automatic Antibiogram Reader System Using Circular Hough Transform and Radial Profile Analysis . . . 1837
B. R. Tondin, A. L. Barth, P. R. S. Sanches, D. P. S. Júnior, A. F. Müller, P. R. O. Thomé, P. L. Wink, A. S. Martins, and A. A. Susin

Gene Expression Analyses of Ketogenic Diet . . . 1843
A. A. Ferreira and A. C. Q. Simões

Comparison Between J48 and MLP on QRS Classification Through Complexity Measures . . . 1851
L. G. Hübner and A. T. Kauati

Low Processing Power Algorithm to Segment Tumors in Mammograms . . . 1857
R. E. Q. Vieira, C. M. G. de Godoy, and R. C. Coelho


Electromyography Classification Techniques Analysis for Upper Limb Prostheses Control . . . 1865
F. A. Boris, R. T. Xavier, J. P. Codinhoto, J. E. Blanco, M. A. A. Sanches, C. A. Alves, and A. A. Carvalho

EEG-Based Motor Imagery Classification Using Multilayer Perceptron Neural Network . . . 1873
S. K. S. Ferreira, A. S. Silveira, and A. Pereira

Real-Time Detection of Myoelectric Hand Patterns for an Incomplete Spinal Cord Injured Subject . . . 1879
W. A. Rodriguez, J. A. Morales, L. A. Bermeo, D. M. Quiguanas, E. F. Arcos, A. F. Rodacki, and J. J. Villarejo-Mayor

Single-Trial Functional Connectivity Dynamics of Event-Related Desynchronization for Motor Imagery EEG-Based Brain-Computer Interfaces . . . 1887
P. G. Rodrigues, A. Fim-Neto, J. R. Sato, D. C. Soriano, and S. J. Nasuto

A Lightweight Model for Human Activity Recognition Based on Two-Level Classifier and Compact CNN Model . . . 1895
Y. L. Coelho, B. Nguyen, F. A. Santos, S. Krishnan, and T. F. Bastos-Filho

Applicability of Neurometry in Assessing Anxiety Levels in Students . . . 1903
Aline M. D. Meneses, Marcello M. Amaral, and Laurita dos Santos

Using PPG and Machine Learning to Measure Blood Pressure . . . 1909
G. S. Cardoso, M. G. Lucas, S. S. Cardoso, J. C. M. Ruzicki, and A. A. S. Junior

Resting-State Brain in Cognitive Decline: Analysis of Brain Network Architecture Using Graph Theory . . . 1917
C. M. Maulaz, D. B. A. Mantovani, and A. M. Marques da Silva

Exploring Different Convolutional Neural Networks Architectures to Identify Cells in Spheroids . . . 1925
A. G. Santiago, C. C. Santos, M. M. G. Macedo, J. K. M. B. Daguano, J. A. Dernowsek, and A. C. D. Rodas

A Dynamic Artificial Neural Network for EEG Patterns Recognition . . . 1931
G. J. Alves, Diogo R. Freitas, A. V. M. Inocêncio, E. L. Cavalcante, M. A. B. Rodrigues, and Renato Evangelista de Araujo

Pelvic Region Panoramic Ultrasonography . . . 1937
J. S. Oliveira, G. S. Almeida, J. E. C. Magno, V. R. da Luz, M. C. P. Fonseca, and I. M. Miziara

Functional Connectivity During Hand Tasks . . . 1943
T. S. Nunes, G. A. Limeira, I. B. Souto, A. P. Fontana, and C. J. Tierra-Criollo

Estimation of Alveolar Recruitment Potential Using Electrical Impedance Tomography Based on an Exponential Model of the Pressure-Volume Curve . . . 1949
G. E. Turco, F. S. Moura, and E. D. L. B. Camargo

Assessment of Respiratory Mechanics in Mice Using Estimated Input Impedance Without Redundancy . . . 1953
V. A. Takeuchi, V. Mori, R. Vitorasso, A. E. Lino-Alvarado, M. A. Oliveira, W. T. Lima, and H. T. Moriya


Dispersive Raman Spectroscopy of Peanut Oil—Ozone and Ultrasound Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1961 P. A. L. I. Marrafa, H. C. Carvalho, C. J. Lima, A. B. F. Moretti, and L. Silveira Jr. Comparison Between Linear and Nonlinear Haar Wavelet for the Detection of the R-peak in the Electrocardiogram of Small Animals . . . . . . . . . . . . . . . . . 1967 P. Y. Yamada and E. V. Garcia Data Extraction Method Combined with Machine Learning Techniques for the Detection of Premature Ventricular Contractions in Real-Time . . . . . . . 1973 L. C. Sodré, B. G. Dutra, A. S. Silveira, and I. M. Mizara Extraction of Pendelluft Features from Electrical Impedance Tomography Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1979 V. J. Fidelis, E. D. L. B. Camargo, M. B. P. Amato, and J. A. Sims Classification of Raw Electroencephalogram Signals for Diagnosis of Epilepsy Using Functional Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1985 T. T. Ribeiro, J. S. Fiel, E. M. Melo, R. E. S. Navegantes, F. Gomes, and A. Pereira Junior On Convolutional Neural Networks and Transfer Learning for Classifying Breast Cancer on Histopathological Images Using GPU . . . . . . . . . . . . . . . . . . . 1993 D. C. S. e Silva and O. A. C. Cortes Acquisition and Comparison of Classification Algorithms in Electrooculogram Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1999 A. R. Borchardt, L. S. Schiavon, L. G. L. Silva, A. A. Souza Junior, and M. G. Lucas Muscle Synergies Estimation with PCA from Lower Limb sEMG at Different Stretch-Shortening Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2005 C. Rodrigues, M. Correia, J. Abrantes, M. A. B. Rodrigues, and J. 
Nadal Measurement Techniques of Hounsfield Unit Values for Assessment of Bone Quality Following Decompressive Craniectomy (DC): A Preliminary Report . . . 2013 Silvio Tacara, Rubens Alexandre de Faria, J. C. Coninck, H. R. Schelin, and Irene T. Nakano Method for Improved Image Reconstruction in Computed Tomography and Positron Emission Tomography, Based on Compressive Sensing with Prefiltering in the Frequency Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2019 Y. Garcia, C. Franco, and C. J. Miosso Non-invasive Arterial Pressure Signal Estimation from Electrocardiographic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2027 J. C. Silva, J. S. de Oliveira, S. E. Silva, M. H. de Carvalho, and A. S. Silveira Comparison Among Microvolt T-wave Alternans Detection Methods and the Effect of T-wave Delimitation Approaches . . . . . . . . . . . . . . . . . . . . . . . 2033 T. Winkert and J. Nadal Detection of Schizophrenia Based on Brain Structural Analysis, Using Machine Learning over Different Combinations of Multi-slice Magnetic Resonance Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2039 J. S. Avelar Filho, N. Silva, and C. J. Miosso Simulation of Lung Ultrasonography Phantom for Acquisition of A-lines and B-lines Artifacts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2045 F. A. M. Silva, M. Pastrana-Chalco, C. A. Teixeira, and W. C. A. Pereira


Recognition of Facial Patterns Using Surface Electromyography—A Preliminary Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2051 M. R. Lima, J. J. A. Mendes Júnior, and D. P. Campos Classification of Red Blood Cell Shapes Using a Sequential Learning Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2059 W. D. Font, S. H. Garcia, M. E. Nicot, M. G. Hidalgo, A. Jaume-i-Capó, A. Mir, and L. F. Gomes Possible Caveats of Ultra-short Heart Rate Variability Reliability: Insights from Recurrence Quantification Analysis . . . . . . . . . . . . . . . . . . . . . . . . 2067 Hiago Murilo Melo, Mariana Cardoso Melo, Roger Walz, Emílio Takase, and Jean Faber Clinical Engineering and Health Technology Assessment Unidose Process Automation: Financial Feasibility Analysis for a Public Hospital . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2079 L. H. V. de Santana and M. G. N. M. da Silva Identifying Monitoring Parameters Using HFMEA Data in Primary Health Care Ubiquitous Technology Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2085 J. Martins, R. D. Soares Filho, and R. Garcia Rapid Review of the Application of Usability Techniques in Medical Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2091 M. R. Brandão and R. Garcia Changes in Respiratory Mechanics Associated with Different Degrees of Parkinson’s Disease . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2099 B. T. Caldas, F. C. V. Ribeiro, J. S. Pereira, W. C. Souza, A. J. Lopes, and P. L. Melo Machine Learning Platform for Remote Analysis of Primary Health Care Technology to Support Ubiquitous Management in Clinical Engineering . . . . . . 2105 Rafael Peixoto, R. Soares Filho, J. Martins, and R. 
Garcia Evaluation of the Efficacy of Chloroquine and Hydroxychloroquine in the Treatment of Individuals with COVID-19: A Systematic Review . . . . . . . 2111 L. C. Mendes, J. Ávila, and A. A. Pereira Embracement with Risk Classification: Lead Time Assessment of the Patient in a Tocogynecology Emergency Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2117 G. R. da Costa, R. C. G. Berardi, E. S. de Oliveira, and A. M. W. Stadnik Health Technology Management Using GETS in Times of Health Crisis . . . . . . 2123 Jose W. M. Bassani, Rosana A. Bassani, and Ana C. B. Eboli Life-Sustaining Equipment: A Demographic Geospace Analysis in National Territory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2131 E. Cruz, I. H. Y. Noma, A. C. Dultra, and M. Negri Geospatial Analysis of Diagnostic Imaging Equipment in Brazil . . . . . . . . . . . . . 2137 I. H. Y. Noma, E. Cruz, A. C. Dultra, and M. Negri A Review About Equipment for Mechanical Ventilation in Intensive Care to Combat COVID-19 and the Role of Clinical Engineers . . . . . . . . . . . . . . . . . . 2143 E. V. S. Freitas, M. A. P. Arteaga, P. A. Cardoso, D. L. Tótola, Y. L. Coelho, G. C. Vivas, D. Delisle-Rodríguez, F. A. Santos, and T. F. Bastos-Filho


Evaluation of Adverse Events Recorded in FDA/USA and ANVISA/Brazil Databases for the Medical Equipment: Pulmonary Ventilators, Defibrillators, Infusion Pumps, Physiological Monitors and Ultrasonic Scalpels . . . . . . . . . . . . 2149 Josiany Carlos de Souza, Sheida Mehrpour, Matheus Modolo Ferreira, Y. L. Coelho, G. C. Vivas, D. Delisle-Rodriguez, Francisco de Assis Santos, and T. F. Bastos-Filho Quality Assessment of Emergency Corrective Maintenance of Critical Care Ventilators Within the Context of COVID-19 in São Paulo, Brazil . . . . . . . . . . . 2157 A. E. Lino-Alvarado, S. G. Mello, D. A. O. Rosa, M. S. Dias, M. F. Barbosa, K. N. Barros, E. Silva Filho, B. A. Lemos, J. C. T. B. Moraes, A. F. G. Ferreira Junior, and H. T. Moriya Comparison of the Sensitivity and Specificity Between Mammography and Thermography in Breast Cancer Detection . . . . . . . . . . . . . . . . . . . . . . . . . 2163 T. G. R. Da Luz, J. C. Coninck, and L. Ulbricht COVID-19: Analysis of Personal Protective Equipment Costs in the First Quarter of 2020 at a Philanthropic Hospital in Southern Bahia- Brazil . . . . . . . 2169 L. O. de Brito, S. C. Nunez, R. S. Navarro, and J. C. Cogo Evaluation of the Power Generated by Ultrasonic Shears Used in Laparoscopic Surgeries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2173 I. P. H. Rosario and J. M. Maia Neuroengineering Influence of Visual Feedback Removal on the Neural Control Strategies During Isometric Force Production . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2183 C. M. Germer, E. P. Zambalde, and L. A. Elias Neurofeedback Training for Regulation of Sensorimotor Rhythm in Individuals with Refractory Epilepsy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2189 S. C. Souza, R. E. S. Navegantes, D. S. Miranda, J. S. Fiel, and A. Pereira Gelotophobia in the Academic Environment: A Preliminary Study . . . . . . . . . . . 2193 T. S. Rêgo, D. E. S. 
Pires, T. T. Ribeiro, R. E. S. Navegantes, D. S. Miranda, E. M. Melo, and A. Pereira-Junior A Single Administration of GBR 12909 Alters Basal Mesocorticolimbic Activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2199 L. Galdino, A. C. Kunicki, L. C. N. Filho, R. C. Moioli, and M. F. P. Araújo Fuzzy Assessment for Autism Spectrum Disorders . . . . . . . . . . . . . . . . . . . . . . . 2205 M. M. Costa and E. Araujo Subthalamic Field Potentials in Parkinson’s Disease Encodes Motor Symptoms Severity and Asymmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2211 J. B. de Luccas, B. L. Bianqueti, A. Fim Neto, M. S. Rocha, A. K. Takahata, D. C. Soriano, and F. Godinho Subthalamic Beta Burst Dynamics Differs for Parkinson’s Disease Phenotypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2219 A. Fim Neto, J. B. de Luccas, B. L. Bianqueti, M. S. Rocha, S. J. Nasuto, F. Godinho, and D. C. Soriano Development of a Low-Cost Software to Obtain Quantitative Parameters in the Open Field Test for Application in Neuroscience Research . . . . . . . . . . . . 2225 T. R. M. Costalat, I. P. R. Negrão, and W. Gomes-Leal


Depolarizing Effect of Chloride Influx Through KCC and NKCC During Nonsynaptic Epileptiform Activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2233 D. M. Soares, S. G. Cecílio, L. E. C. Santos, A. M. Rodrigues, and A. C. G. Almeida Increase of Lactate Concentration During Spreading Depression . . . . . . . . . . . . 2239 Silas Moreira de Lima, B. C. Rodrigues, J. N. Lara, G. S. Nogueira, A. C. G. Almeida, and A. M. Rodrigues Microglial Response After Chronic Implantation of Epidural Spinal Cord Electrode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2245 A. O. B. Suassuna, J. R. Oliveira, V. S. Costa, C. C. M. Castro, M. S. L. Nascimento, and M. F. P. Araújo Immediate Cortical and Spinal C-Fos Immunoreactivity After ICMS of the Primary Somatosensory Cortex in Rats . . . . . . . . . . . . . . . . . . . . . . . . . . . 2251 V. S. Costa, A. O. B Suassuna, L. Galdino, and A. C. Kunicki Identical Auditory Stimuli Render Distinct Cortical Responses Across Subjects —An Issue for Auditory Oddball-Based BMIs . . . . . . . . . . . . . . . . . . . . . . . . . . 2257 J. N. Mello, A. F. Spirandeli, H. C. Neto, C. B Amorim, and A. B. Soares Proposal of a Novel Neuromorphic Optical Tactile Sensor for Applications in Prosthetic Hands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2265 M. G. Pereira, A. Nakagawa-Silva, and A. B. Soares An Object Tracking Using a Neuromorphic System Based on Standard RGB Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2271 E. B. Gouveia, L. M. Vasconcelos, E. L. S. Gouveia, V. T. Costa, A. Nakagawa-Silva, and A. B. Soares Classification of Objects Using Neuromorphic Camera and Convolutional Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2277 E. B. Gouveia, E. L. S. Gouveia, V. T. Costa, A. 
Nakagawa-Silva, and A. B. Soares Acoustic and Volume Rate Deposition Simulation for the Focused Ultrasound Neuromodulation Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2283 Patrícia Cardoso de Andrade and E. T. Costa Neuromorphic Vision-aided Semi-autonomous System for Prosthesis Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2289 E. L. Gouveia, E. B. Gouveia, A. N. Silva, and A. B. Soares Finding Discriminant Lower-Limb Motor Imagery Features Highly Linked to Real Movements for a BCI Based on Riemannian Geometry and CSP . . . . . . 2295 L. A. Silva, D. Delisle-Rodriguez, and T. Bastos-Filho Computational Model of the Effects of Transcranial Magnetic Stimulation on Cortical Networks with Subject-Specific Neuroanatomy . . . . . . . . . . . . . . . . . 2301 V. V. Cuziol and L. O. Murta Jr. Special Topics in Biomedical Engineering Strategy to Computationally Model and Resolve Radioactive Decay Chain in Engineering Education by Using the Runge-Kutta Numerical Method . . . . . . 2311 F. T. C. S. Balbina, F. J. H. Moraes, E. Munin, and L. P. Alves


Effects of LED Photobiomodulation Therapy on the Proliferation of Chondrocytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2319 Soraia Salman, Cintia Cristina Santi Martignago, L. Assis, Eduardo Santos Trevisan, Ana Laura Andrade, Julia Parisi, Genoveva Luna, Richard Liebano, and C. R. Tim Characterization of Cultured Cardiomyocytes Derived from Human Induced Pluripotent Stem Cell for Quantitative Studies of Ca2+ Transport . . . . . . . . . . . 2325 Fernanda B. de Gouveia, Talita M. Marin, José W. M. Bassani, and Rosana A. Bassani Electrochemical Characterization of Redox Electrocatalytic Film for Application in Biosensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2331 R. M. Gomes, R. A. Lima, and R. A. F. Dutra DXA and Bioelectrical Impedance: Evaluative Comparison in Obese Patients in City of Cáceres . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2337 Cassiano A. G. Dalbem, C. M. M. G. Dalbem, J. E. P. Nunes, T. C. Macedo, B. O. Alves, and Laurita dos Santos On the COVID-19 Temporal Evolution in Brazil . . . . . . . . . . . . . . . . . . . . . . . . 2341 J. A. Costa Jr., A. C. Martinez, and J. C. Geromel Use of Apicectomy in the Treatment of Refractory Injury . . . . . . . . . . . . . . . . . 2347 L. C. T. Moreti, K. G. C. Fernandes, L. Assis, and C. R. Tim Electronics Laboratory Practices: A Didactic ECG Signal Generator . . . . . . . . . 2353 A. C. Martinez, M. S. Costa, L. T. Manera, and E. T. Costa Feature Analysis for Speech Emotion Classification . . . . . . . . . . . . . . . . . . . . . . 2359 R. Kingeski, L. A. P. Schueda, and A. S. Paterno Muscle Evaluation by Ultrasonography in the Diagnosis of Muscular Weakness Acquired in the Intensive Care Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2367 Thayse Zerger Gonçalves Dias, A. M. W. Stadnik, F. S. Barros, and L. 
Ulbricht Development of an Application to Assess Quality of Life . . . . . . . . . . . . . . . . . . 2373 D. S. Oliveira, W. S. Santos, L. M. Ribeiro, G. S. Oliveira, D. S. F. Magalhães, and A. F. Frade-Barros Tetra-Nucleotide Histogram-Based Analysis of Metagenomic Data for Investigating Antibiotic-Resistant Bacteria . . . . . . . . . . . . . . . . . . . . . . . . . . . 2379 S. P. Klautau, S. L. Pinheiro, A. M. Nascimento, P. A. Castro, R. Ramos, and A. Klautau Use of Ultrasound in the Emergency and Initial Growth of Copaifera Reticulata Ducke (Fabaceae) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2385 I. O. Santos, I. F. N. Ribeiro, N. L. Silva, J. R. Rocha, L. B. Andrade, A. F. R. Rodriguez, W. C. A. Pereira, and L. E. Maggi Proposal for a Low-Cost Personal Protective Equipment (PPE) to Protect Health Care Professionals in the Fight Against Coronavirus . . . . . . . . . . . . . . . . 2391 J. A. F. M. Serra, D. A. C. Filho, J. S. Oliveira, M. C. Teixeira, and I. M. Miziara Cutaneous Manifestations Related to COVID-19: Caused by SARS-CoV2 and Use of Personal Protective Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2397 A. Almeida, R. S. Navarro, S. Campos, R. D. B. Soares, B. C. Hubbard, A. Baptista, and S. C. Nunez


Low-Cost Modified Swab Graphite Electrode Development as a Point-of-Care Biosensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2405 A. K. A. Lucas, V. M. Bezerra, R. A. Lima, P. D. Mendonça, and R. A. F. Dutra Thermographic Evaluation Before and After the Use of Therapeutic Ultrasound in Breast Engorgement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2409 L. E. Maggi, M. P. F. Pereira, S. O. Moura, and W. C. A. Pereira Thermal Effect of Therapeutic Ultrasound on Muscle-Bone Interface of Swine Tissue (Sus Scrofa Domesticus) with Metallic Implant . . . . . . . . . . . . . . . . . . . . . 2413 L. E. Maggi, V. L. Souza, S. O. Moura, C. K. B. F. Nogueira, F. S. C. Esteves, D. C. C. Barros, F. G. A. Santos, K. A. Coelho, and W. C. A. Pereira Evaluation of Dynamic Thermograms Using Semiautomatic Segmentation Software: Applied to the Diagnosis of Thyroid Cancer . . . . . . . . . . . . . . . . . . . . 2417 H. Salles, V. Magas, F. Ganacim, H. R. Gamba, and L. Ulbricht Assessment of Dose with CaF2 OSL Detectors for Individual Monitoring in Radiodiagnostic Services Using a Developed Algorithm Based on OSL Decay Curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2425 Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2431

Basic Industrial Technology in Health

Analysis of the Acoustic Power Emitted by a Physiotherapeutic Ultrasound Equipment Used at the Brazilian Air Force Academy J. G. S. N. Cavalcanti, W. C. A. Pereira, and J. F. S. Costa-Júnior

Abstract

Therapeutic ultrasound (TUS) is one of the most widely used methods for treating musculoskeletal injuries, given the benefits this treatment provides, such as speeding up the tissue repair process. However, when TUS devices operate outside the range recommended by International Standard IEC 61689:2013, they can render treatments ineffective or even aggravate the injury. This study analyzes the acoustic power emitted by a TUS device from the Esquadrão de Saúde de Pirassununga and verifies whether the values obtained lie within the range established by IEC 61689:2013. An acoustic radiation force balance was used to measure the power emitted by the equipment. The results showed that, on all test days, the divergence between the nominal and the emitted power exceeded the tolerance recommended by the IEC.

Keywords: Therapeutic ultrasound · Injuries · Acoustic power · Repeatability · IEC 61689:2013

1 Introduction

J. G. S. N. Cavalcanti · J. F. S. Costa-Júnior (corresponding author), Brazilian Air Force Academy, Estrada de Aguaí, s/nº, Pirassununga, SP, Brazil; e-mail: [email protected]
W. C. A. Pereira · J. F. S. Costa-Júnior, Biomedical Engineering Program, Federal University of Rio de Janeiro, Rio de Janeiro, RJ, Brazil

The use of therapeutic ultrasound devices (TUSD) is among the most widely used options for treating musculoskeletal injuries by diathermy. Acoustic energy absorption by the treated tissue promotes repair, accelerating the phases of

inflammation and reducing the healing time [1]. Thermal effects include increased blood flow in the treated region, reduced muscle spasm, increased extensibility of collagen fibers and a reduced inflammatory process [2]. For these effects to be achieved, the temperature of the treated region must be kept between 40 and 45 °C for at least five minutes [1]. If the tissue temperature is below the recommended range, the treatment may be ineffective. On the other hand, if the tissue temperature exceeds 45 °C, the lesion may worsen and protein denaturation may occur, resulting in cell death [3]. The literature shows that, since 1972, researchers have been concerned with the output parameters of TUSD, such as the rated output power (ROP) [4]. Despite these concerns, many users of TUSD lack the skills, competences and adequate instruments to analyze the functioning (acoustic parameters) of these devices, so they merely perform a test known as the “cavitation method” when they believe a device is malfunctioning [5]. This test only indicates, subjectively, whether ultrasound is being emitted. In general, physiotherapists are unaware of the importance of testing and preventive maintenance of this equipment, or do not request these services because of their cost. Some professionals send their equipment to companies to carry out these services, but do not receive pre- and post-maintenance and/or testing reports with which to monitor the proper functioning of the TUSD. Shaw and Hodnett [6] mentioned that the difference between the output (real) and nominal (shown on the equipment display) values of power, intensity and ERA (effective radiation area) can interfere with the treatment and may even compromise patient safety, which can occur when the acoustic output power is very high or the ERA is much lower than the nominal value.
Some studies have shown that many of the analyzed devices had ROP, ERA and/or intensity values beyond the limits recommended by International Standard IEC 61689:2013 [5, 7–9]. IEC 61689:2013 establishes the safety requirements for TUSD [10]. It determines that the ROP, ERA and intensity

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_1



can have a maximum variation of ±20%, ±20% and ±30%, respectively, in relation to the nominal values. In this way, the temperature in the treated region is likely to remain within the desired range (40–45 °C). According to the Physiotherapy Section of the Esquadrão de Saúde de Pirassununga (ES-YS), 6437 ultrasound treatments were performed on military personnel and their dependents in 2018. In addition, many cadets (students) were treated with therapeutic ultrasound (TUS) after joining the Brazilian Air Force Academy (AFA), owing to their participation in the Military Adaptation Stage (MAS) and to the cadets’ routine after the MAS. In the same year, 2912 ultrasound treatments were performed on cadets, representing 45.2% of the TUS treatments performed in the Physiotherapy Section. The objective of this study was to perform accuracy tests of the acoustic power emitted by a physiotherapy device manufactured in Brazil and used by the Physiotherapy Section of ES-YS, over 5 consecutive Sundays, and to analyze whether the device emitted acoustic power values within the range determined by the International Standard. In addition, the repeatability of the measurements was evaluated. It is worth mentioning that this TUSD was tested in May 2019, with its next testing scheduled for May 2020. Additionally, preventive maintenance and an electrical safety inspection were carried out in September 2019.

2 Materials and Methods

The materials used in this study were: a TUSD, model Sonopulse (Amparo, SP, Brazil), configured to operate in continuous mode; an acoustic radiation force balance (ARFB) (UPMDT 1; Ohmic Instruments, Easton, MD, USA), used to measure the rated output power directly on the 5 test days; and a digital thermometer (MT-455A; Minipa do Brasil Ltda, São Paulo, SP, Brazil), used to measure the ambient temperature and the temperature of the water in the balance reservoir. The TUSD has an ergonomic soundhead applicator with dual function, which allows the user to select a transducer with a nominal ERA of 3.5 cm2 (T1) or 1.0 cm2 (T2). When T1 is used, the TUSD can operate at a nominal frequency of 1 MHz or 3 MHz with a maximum nominal output power of 7 W; T2 allows a nominal frequency of 1 MHz and a maximum nominal output power of 2 W. The soundhead applicator was fixed on a support of the ARFB itself, so that the radiation-emitting surface was approximately 0.5 cm below the water level of the balance reservoir, which contained 950 ml of still commercial mineral water (“Qualitá”). The central axis of the transducer was visually aligned with the center of the metal cone of the balance and the submerged face of the soundhead

Fig. 1 Illustration of the experimental arrangement used to measure the values of the acoustic power emitted by the TUSD: the soundhead applicator of the TUSD immersed in the water tank above the ARFB

applicator was positioned parallel to the cone. The system was assembled at the same location in the Physics and Chemistry Laboratory of the AFA, on a stable surface, isolated from airflow and at an ambient temperature of 24.5 ± 1.3 °C over the 5 days, in compliance with the operating conditions established by the manufacturer of the TUSD (between 5 and 45 °C). During the experiments, the absence of bubbles between the balance cone and the transducer was verified. After the assembly of the experimental apparatus shown in Fig. 1, the balance was activated and, 5 min later, the test to verify the calibration and programming of the equipment began; it was performed using an object with a mass of 1 g, equivalent to a rated output power of 14.650 W with a tolerance of ±1% (value furnished by the manufacturer). The method used to measure the ROP of the TUSD is described in Sects. 2.1 and 2.2, which also describe the test used to assess whether the ROP values were within the range recommended by IEC 61689:2013 and the evaluation of the repeatability of the ROP measurements obtained on 5 consecutive Sundays.
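For reference, the correspondence between the 1 g check mass and the stated 14.650 W can be reproduced from the radiation force relation P = F·c, where F = m·g is the force read by the balance and c is the speed of sound in water. The sketch below is illustrative only: the value c = 1494 m/s is our assumption, chosen to be consistent with the manufacturer's figure for water near room temperature; it is not a value given in the paper.

```python
# Radiation-force-balance conversion: a fully intercepted plane wave of
# power P exerts a force F = P / c on the target, so a check mass m on
# the balance corresponds to P = m * g * c.  (c = 1494 m/s is an assumed
# speed of sound in water near 25 degrees C.)
G = 9.80665       # standard gravity, m/s^2
C_WATER = 1494.0  # assumed speed of sound in water, m/s

def power_from_check_mass(mass_kg: float, c: float = C_WATER) -> float:
    """Acoustic power (W) equivalent to a check mass placed on the ARFB."""
    return mass_kg * G * c

p = power_from_check_mass(0.001)  # the 1 g calibration object
print(f"{p:.3f} W")               # close to the manufacturer's 14.650 W
```

This also makes clear why the check value carries a tolerance: the speed of sound, and hence the mass-to-power conversion, drifts with water temperature.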

2.1 Evaluation of the Relative Error of the ROP

The acoustic power emitted by the TUSD was measured over the entire nominal range of the equipment. When T1 was used at the frequency of 1.0 MHz or 3.0 MHz, 10 ROP measurements were made for each nominal power value, NP (0.3, 0.7, 1.0, 1.4, 1.7, 2.1, 2.4, 2.8, 3.1, 3.5, 3.8, 4.2, 4.5, 4.9, 5.2, 5.6, 5.9, 6.3, 6.6 and 7.0 W). In addition, 10 ROP measurements were performed for each NP value (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9 and 2.0 W) with a frequency of 1.0 MHz and an ERA of 1.0 cm2, on 5 consecutive Sundays starting from the first Sunday of November 2019. The accuracy test was used to calculate the relative error, RE (%), between the nominal power and the ROP obtained with the radiation force balance, using Eq. 1:

RE = [(NP − ROP) / NP] × 100    (1)


According to the IEC 61689:2013 [10], the acoustic powers emitted by TUSD with a RE above ±20% must be considered outside the tolerance limit and the equipment must be sent for maintenance and/or testing.

2.2 Repeatability Analysis of ROP Measurement

According to JCGM 200:2012 [11], repeatability consists of making measurements using the same measurement system, the same experimental protocol, the same operator and the same location, with repetitions over a short period of time. In this study, measurement repeatability was assessed using statistical tests to compare the measurements made on 5 consecutive Sundays by the same operator, using the same measurement system, at the same location in the Physics and Chemistry Laboratory and following the same experimental protocol. Initially, the Shapiro–Wilk test was used to assess the normality of the distribution of each subgroup (10 measurements on 5 consecutive Sundays, for each NP value; 300 subgroups in total) considering the use of T1 (1 and 3 MHz) and T2 (1 MHz). This test was chosen because it is the most powerful when compared to the Kolmogorov–Smirnov, Lilliefors and Anderson–Darling tests [12]. Then, the Bartlett test was applied to the data that showed a normal distribution, as it performs better than the alternatives [13]. The aim was to evaluate the homogeneity of the variances of the normally distributed data, since normality and homogeneity of variances are the necessary conditions for the use of one-way analysis of variance (ANOVA) for repeated measures. When ANOVA was not possible, the Kruskal–Wallis test was employed. Statistical analyses were performed with the Action Stat 3.7 software (ESTATCAMP, Campinas, SP, Brazil), and a significance level of 5% was adopted.

2.3 Estimation of the ROP Variation Coefficient

The coefficient of variation (CV) was utilized to indicate the precision in estimating the rated output power values for each nominal power, using the equation below:

CV (%) = [STD_ROP / M_ROP] × 100    (2)

where STD_ROP and M_ROP represent the standard deviation and the mean of the ROP values obtained in the experiments.

3 Results

3.1 Evaluation of Rated Output Power (ROP)

The mean (column) and standard deviation (vertical bar) of the ROP values obtained on 5 consecutive Sundays (A, B, C, D and E) using T1 and T2 can be seen in Figs. 2 and 3, respectively. Figures 2aa and ab show the ROP values obtained at the frequency of 3 MHz, and Figs. 2ba and bb present the results obtained at 1 MHz. Table 1 shows the mean and standard deviation of the RE values obtained over the five test days (A, B, C, D and E), using T2 and the entire NP range. The mean and standard deviation of the relative error obtained with T1 at the frequencies of 1.0 and 3.0 MHz can be seen in Table 2. The highlighted (bold) data indicate the day and nominal power value for which the measured relative error (%) is outside the range established by International Standard IEC 61689:2013 (±20%). Relative error values with a negative sign indicate that the emitted acoustic power is higher than the nominal power, which may represent a risk of tissue damage.

Fig. 2 Mean (column) and standard deviation (vertical bar) of the ROP obtained with T1 at the frequencies of 3 MHz (aa and ab) and 1 MHz (ba and bb), for test days A–E (x-axis: nominal power, W; y-axis: rated output power, W)

3.2 Repeatability Analysis of ROP Measurement

The Shapiro–Wilk test indicated that 33.0% and 31.0% of the 100 T1 subgroups at frequencies of 1 and 3 MHz,


Fig. 3 Mean (column) and standard deviation (vertical bar) of the ROP obtained with T2 at the frequency of 1 MHz, for test days A–E (x-axis: nominal power, W; y-axis: rated output power, W)


Table 1 Mean and standard deviation of the RE (%) values obtained over the five test days (A, B, C, D and E) using T2 and the entire range of NP (entries set in bold in the original, marked here with *, lie outside the ±20% tolerance of IEC 61689:2013)

| Nominal power (W) | A              | B               | C               | D               | E               |
|-------------------|----------------|-----------------|-----------------|-----------------|-----------------|
| 0.1               | −87.80 ± 5.61* | −130.20 ± 1.48* | −120.60 ± 6.47* | −120.80 ± 3.68* | −107.60 ± 3.24* |
| 0.2               | −31.20 ± 1.81* | −54.90 ± 0.74*  | −49.30 ± 1.95*  | −50.70 ± 1.16*  | −40.20 ± 1.40*  |
| 0.3               | −12.53 ± 2.39  | −31.00 ± 0.57*  | −28.93 ± 1.18*  | −27.93 ± 0.58*  | −20.07 ± 0.58*  |
| 0.4               | −5.05 ± 0.93   | −20.55 ± 0.37*  | −18.65 ± 0.24   | −17.80 ± 0.42   | −12.15 ± 1.58   |
| 0.5               | −2.80 ± 0.82   | −14.72 ± 0.41   | −14.60 ± 1.28   | −13.80 ± 0.34   | −9.20 ± 0.33    |
| 0.6               | −0.37 ± 0.43   | −12.50 ± 0.18   | −12.90 ± 0.50   | −11.00 ± 0.22   | −7.17 ± 0.18    |
| 0.7               | 3.40 ± 1.55    | −11.37 ± 0.30   | −11.94 ± 0.38   | −9.89 ± 0.24    | −5.89 ± 0.54    |
| 0.8               | 4.85 ± 1.12    | −11.20 ± 0.26   | −12.38 ± 0.18   | −9.45 ± 0.23    | −5.42 ± 0.21    |
| 0.9               | 2.73 ± 0.32    | −11.69 ± 0.28   | −12.71 ± 0.65   | −9.18 ± 0.74    | −5.80 ± 0.27    |
| 1.0               | 9.16 ± 1.07    | −0.42 ± 0.15    | −1.18 ± 0.18    | 1.38 ± 0.45     | 4.40 ± 0.16     |
| 1.1               | 9.20 ± 0.38    | 1.36 ± 0.10     | 0.05 ± 0.32     | 3.44 ± 0.42     | 6.84 ± 0.39     |
| 1.2               | 10.68 ± 0.18   | 2.98 ± 0.12     | 2.32 ± 0.45     | 6.90 ± 0.36     | 8.82 ± 0.38     |
| 1.3               | 12.17 ± 0.38   | 4.32 ± 0.23     | 5.32 ± 0.90     | 7.42 ± 0.14     | 9.29 ± 0.19     |
| 1.4               | 12.90 ± 0.58   | 5.47 ± 0.12     | 5.20 ± 0.96     | 8.10 ± 0.25     | 10.47 ± 0.34    |
| 1.5               | 13.48 ± 0.25   | 6.01 ± 0.10     | 5.16 ± 0.18     | 7.31 ± 4.21     | 11.77 ± 0.33    |
| 1.6               | 13.59 ± 0.19   | 6.84 ± 0.28     | 6.10 ± 1.73     | 8.85 ± 0.10     | 12.59 ± 0.57    |
| 1.7               | 13.61 ± 0.10   | 7.92 ± 0.25     | 5.51 ± 0.16     | 9.09 ± 0.16     | 13.28 ± 0.24    |
| 1.8               | 13.81 ± 0.15   | 7.99 ± 0.15     | 5.73 ± 0.20     | 9.18 ± 0.22     | 13.34 ± 0.26    |
| 1.9               | 13.76 ± 0.15   | 7.67 ± 0.16     | 6.32 ± 0.16     | 9.63 ± 0.17     | 14.56 ± 0.38    |
| 2.0               | 9.59 ± 0.20    | 3.90 ± 0.45     | 2.31 ± 0.20     | 6.28 ± 0.17     | 12.22 ± 0.13    |

respectively, did not present a normal distribution. In addition, 42% of the 100 T2 subgroups did not have a normal distribution. In view of these results, it was only possible to apply the Bartlett test to assess the homogeneity of the variances of the ROP values obtained over the 5 days for NP of 2.8 W, 3.1 W and 6.3 W with T1 at the frequency of 1 MHz (p-value < 0.05 in all cases), for 0.3 W, 0.7 W and 2.8 W with T1 at the frequency of 3 MHz (p-value = 0.24, 0.96 and 0.31, respectively), and for 0.7 W with T2 (p-value < 0.05). This test indicated that only the data obtained with T1 at the frequency of 3 MHz (0.3, 0.7 and 2.8 W) were not significantly different.
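The normality and variance-homogeneity checks described above can be sketched with SciPy; the five arrays below are hypothetical daily ROP readings, not the study's data.

```python
# Sketch of the repeatability checks described above (hypothetical data):
# Shapiro-Wilk per day for normality, then Bartlett across the days for
# homogeneity of variances (Bartlett assumes normally distributed subgroups).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Five hypothetical days (A-E) of 20 ROP readings each, in watts
days = [rng.normal(loc=0.65, scale=0.02, size=20) for _ in range(5)]

normal = [stats.shapiro(d).pvalue > 0.05 for d in days]
print("days passing Shapiro-Wilk:", sum(normal), "of 5")

if all(normal):
    stat, p = stats.bartlett(*days)
    print(f"Bartlett p-value: {p:.3f}")  # p < 0.05 -> variances differ
```

Bartlett's test is only meaningful when the subgroups are normal, which is why the paper could apply it to so few NP settings.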

Analysis of the Acoustic Power Emitted by a ...


Table 2 Mean and standard deviation of the RE (%) values obtained over the five test days (A, B, C, D and E) using T1 and the entire range of NP

Frequency: 1 MHz

| NP (W) | A | B | C | D | E |
| 0.3 | −5.73 ± 3.00 | −7.67 ± 4.02 | −4.33 ± 11.36 | −13.27 ± 1.11 | −2.13 ± 4.72 |
| 0.7 | 19.80 ± 0.62 | 21.14 ± 0.54 | 31.49 ± 5.46 | 16.17 ± 0.41 | 21.94 ± 0.76 |
| 1.0 | 15.30 ± 1.08 | 17.54 ± 0.37 | 26.16 ± 0.62 | 13.34 ± 0.58 | 18.96 ± 0.71 |
| 1.4 | 17.47 ± 0.79 | 19.70 ± 0.30 | 28.13 ± 0.33 | 17.84 ± 0.42 | 20.60 ± 0.51 |
| 1.7 | 11.61 ± 0.54 | 13.87 ± 1.59 | 23.46 ± 0.28 | 12.2 ± 0.19 | 15.44 ± 0.17 |
| 2.1 | 13.70 ± 0.26 | 13.19 ± 0.27 | 22.92 ± 0.22 | 11.23 ± 0.17 | 14.41 ± 0.50 |
| 2.4 | 12.23 ± 0.51 | 7.21 ± 0.42 | 18.53 ± 0.39 | 6.09 ± 0.13 | 8.76 ± 0.38 |
| 2.8 | 11.84 ± 0.81 | 4.20 ± 0.44 | 17.27 ± 0.29 | 4.36 ± 0.11 | 6.14 ± 0.18 |
| 3.1 | 7.21 ± 0.77 | −2.61 ± 0.39 | 13.15 ± 0.23 | −1.33 ± 0.05 | 1.35 ± 0.35 |
| 3.5 | 13.11 ± 0.52 | 2.90 ± 0.97 | 17.01 ± 0.29 | 2.82 ± 0.12 | 4.49 ± 0.07 |
| 3.8 | 15.15 ± 2.62 | 0.46 ± 0.51 | 16.93 ± 0.31 | 2.07 ± 0.19 | 4.07 ± 0.16 |
| 4.2 | 34.80 ± 14.18 | 0.29 ± 0.29 | 18.88 ± 0.35 | 3.60 ± 0.11 | 6.13 ± 1.01 |
| 4.5 | 41.22 ± 0.60 | −2.55 ± 0.36 | 19.63 ± 1.26 | 2.78 ± 0.71 | 6.37 ± 0.34 |
| 4.9 | 53.87 ± 8.01 | −2.51 ± 0.93 | 21.84 ± 0.26 | 3.44 ± 0.20 | 9.51 ± 4.27 |
| 5.2 | 59.82 ± 1.82 | −0.95 ± 6.01 | 21.43 ± 0.43 | 2.35 ± 0.18 | 6.28 ± 2.76 |
| 5.6 | 57.93 ± 1.80 | −2.15 ± 0.69 | 22.37 ± 0.30 | 4.24 ± 5.72 | 7.35 ± 0.20 |
| 5.9 | 45.87 ± 2.63 | −3.07 ± 0.26 | 22.59 ± 3.03 | 1.05 ± 0.26 | 5.71 ± 0.10 |
| 6.3 | 47.63 ± 3.55 | −2.12 ± 0.48 | 21.94 ± 0.14 | 1.54 ± 0.18 | 5.52 ± 0.11 |
| 6.6 | 16.03 ± 5.47 | −10.96 ± 1.12 | 20.89 ± 0.19 | 1.23 ± 3.32 | 3.30 ± 0.45 |
| 7.0 | 23.08 ± 8.76 | −1.05 ± 0.61 | 25.92 ± 0.12 | 13.54 ± 0.38 | 9.75 ± 0.72 |

Frequency: 3 MHz

| NP (W) | A | B | C | D | E |
| 0.3 | −1.67 ± 2.23 | 9.53 ± 2.09 | 2.80 ± 1.50 | 2.53 ± 1.12 | 1.07 ± 1.38 |
| 0.7 | 28.23 ± 0.57 | 36.97 ± 0.51 | 32.26 ± 0.46 | 34.49 ± 0.57 | 33.40 ± 0.51 |
| 1.0 | 34.10 ± 0.38 | 36.20 ± 2.57 | 29.74 ± 0.53 | 34.32 ± 0.14 | 34.04 ± 0.23 |
| 1.4 | 37.89 ± 0.44 | 39.00 ± 0.21 | 33.10 ± 0.35 | 36.49 ± 0.12 | 36.71 ± 0.10 |
| 1.7 | 31.99 ± 0.57 | 35.40 ± 0.61 | 28.98 ± 0.08 | 32.53 ± 0.41 | 33.00 ± 0.58 |
| 2.1 | 30.41 ± 0.26 | 34.70 ± 0.19 | 28.91 ± 0.54 | 32.02 ± 0.19 | 31.70 ± 0.41 |
| 2.4 | 26.56 ± 0.35 | 30.42 ± 0.29 | 24.05 ± 0.44 | 27.29 ± 0.22 | 27.88 ± 0.47 |
| 2.8 | 23.42 ± 0.28 | 28.41 ± 0.24 | 22.56 ± 0.32 | 25.64 ± 0.35 | 26.43 ± 0.48 |
| 3.1 | 18.03 ± 0.31 | 22.49 ± 0.64 | 19.03 ± 3.56 | 20.90 ± 0.33 | 20.88 ± 0.31 |
| 3.5 | 21.82 ± 0.38 | 26.29 ± 0.67 | 22.75 ± 0.16 | 26.55 ± 0.18 | 26.26 ± 0.20 |
| 3.8 | 20.25 ± 0.53 | 27.18 ± 0.79 | 22.41 ± 0.32 | 25.21 ± 0.79 | 26.19 ± 0.18 |
| 4.2 | 21.53 ± 0.23 | 26.91 ± 0.58 | 23.46 ± 0.47 | 26.27 ± 0.10 | 27.74 ± 0.17 |
| 4.5 | 24.01 ± 0.24 | 25.22 ± 0.23 | 21.52 ± 0.24 | 25.19 ± 0.20 | 26.90 ± 0.17 |
| 4.9 | 24.31 ± 0.33 | 24.98 ± 0.33 | 21.32 ± 0.22 | 25.42 ± 0.22 | 26.79 ± 0.40 |
| 5.2 | 18.29 ± 0.37 | 23.90 ± 0.16 | 19.74 ± 0.10 | 27.09 ± 8.78 | 26.10 ± 0.41 |
| 5.6 | 22.53 ± 0.63 | 24.26 ± 0.43 | 20.34 ± 1.90 | 25.11 ± 0.05 | 26.33 ± 0.52 |
| 5.9 | 23.49 ± 0.43 | 22.29 ± 0.17 | 20.89 ± 0.52 | 23.88 ± 0.17 | 24.22 ± 0.51 |
| 6.3 | 20.87 ± 0.42 | 22.76 ± 0.41 | 20.73 ± 0.28 | 23.98 ± 0.15 | 23.19 ± 0.18 |
| 6.6 | 19.41 ± 0.15 | 21.86 ± 0.57 | 19.25 ± 0.61 | 22.31 ± 0.27 | 24.12 ± 4.81 |
| 7.0 | 22.77 ± 0.61 | 26.16 ± 0.22 | 24.69 ± 0.35 | 27.60 ± 0.86 | 27.43 ± 0.33 |


The Kruskal–Wallis test indicated that there was a significant difference between the power values measured on 5 consecutive Sundays, regardless of the nominal power and the equipment configuration. ANOVA also showed that there was a significant difference between the measurements made in the 5 days when T1 with a frequency of 3 MHz was used with NP of 0.3 W, 0.7 W or 2.8 W.
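The between-day comparison described above can be sketched with SciPy; the data below are hypothetical daily readings, not the study's measurements.

```python
# Sketch of the between-day comparison described above (hypothetical data):
# Kruskal-Wallis for non-normal subgroups, one-way ANOVA for the normal ones.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical ROP readings (W) for days A-E; day C drifts slightly upward
days = [rng.normal(0.65 + 0.01 * (i == 2), 0.02, size=20) for i in range(5)]

h, p_kw = stats.kruskal(*days)
f, p_anova = stats.f_oneway(*days)
print(f"Kruskal-Wallis p = {p_kw:.3f}, ANOVA p = {p_anova:.3f}")
```

A p-value below 0.05 in either test would indicate, as in the paper, a significant difference between the five measurement days.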

3.3 Estimation of the ROP Variation Coefficient

Considering the data obtained in the 5 days of experiments, the use of the 3.5 cm²-ERA at a frequency of 1 MHz resulted in a CV in the range from 5.09% to 7.66% for ROP values related to NP in the range of 0.3 W to 3.8 W, except for 1.4 and 2.1 W. Precision was worse in the range of 4.2 to 7.0 W, as the CV varied from 12.23 to 27.72%. When only the transducer frequency was changed to 3 MHz, the CV ranged from 1.67% to 4.41% for NP from 0.3 W to 7.0 W, except for the nominal power of 5.2 W (6.68%). The highest precision obtained with the other face of the soundhead applicator, with an ERA of 1.0 cm² and a frequency of 1 MHz, corresponded to CVs from 3.12% to 4.35% for the NP range from 0.5 W to 2.0 W, except for NP of 0.7 W, 0.8 W and 0.9 W (5.26% to 5.93%). The CV ranged from 5.01% to 7.23% for NP from 0.1 to 0.4 W.
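The coefficient of variation used above is the sample standard deviation expressed as a percentage of the mean; a minimal sketch with hypothetical readings:

```python
# Sketch of the coefficient-of-variation estimate used above:
# CV (%) = 100 * sample standard deviation / mean, over the repeated readings.
import numpy as np

# Hypothetical ROP readings (W) for one NP setting over the five days
rop = np.array([0.62, 0.63, 0.61, 0.64, 0.62])

cv = 100.0 * rop.std(ddof=1) / rop.mean()
print(f"CV = {cv:.2f}%")
```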

4 Discussion

The calibration and programming check of the radiation force balance carried out on the 5 days of experiments resulted in an average measured power of 14.651 ± 0.001 W (5 measurements per day), so it was observed that the balance was working properly, because the power equivalent to the mass of the 1 g disk is 14.650 W and the acceptable range recommended by the manufacturer is 14.504 to 14.796 W. The mass calibration of a radiation force balance can only assure that the balance measures mass correctly, so it is important that the user choose the “custom mode” (watts) option on the front panel to check the calibration and programming of the ARFB. If the reading on the ARFB display is not stable, the user can employ the “grams” mode and multiply the readings by 14.650 to obtain watts. The ultrasonic energy emitted by the transducer is reflected by the cone-shaped metallic target and is then absorbed by the reservoir rubber lining (thus avoiding reflections and reverberations inside the reservoir). The radiant power is directly proportional to the total downward force (weight) on the target. The weight on the target is transferred to an electromechanical load cell inside the balance. This cell is part of a dedicated electronic system

to perform a digital reading in watts of power (custom units) or grams of force. Although the device had preventive maintenance, electrical safety inspection and calibration testing up to date, there were discrepancies between the ROP values obtained on the 5 consecutive Sundays, regardless of the nominal power used. It can be seen in Table 1 that 93% of the RE values of the ROP over the 5 days were outside the limit established by IEC 61689:2013 for nominal powers of 0.1, 0.2 and 0.3 W. In addition, the RE values obtained are negative, which implies that the rated output power is greater than the nominal power, which could potentially damage the biological tissue. Despite that, the Physiotherapy Section mentioned that these values are not used in the treatment of musculoskeletal injuries. The use of T1 at a frequency of 3 MHz showed that this equipment must be tested, since 89% of the ROP values are above the limit established by IEC 61689:2013, regardless of the NP values used. When T1 was used at a frequency of 1 MHz, only 23% of the 100 RE values were above the ±20% limit (see Table 2). In general, the equipment showed good precision, as the variation coefficient was less than 7.7%, except when T1 at a frequency of 1 MHz was used with the following NP values: 4.2, 4.5, 4.9, 5.2, 5.6, 5.9, 6.3, 6.6 and 7.0 W. The results obtained in this study are not surprising, since several studies in the literature show that many therapy ultrasound devices operate outside the range recommended by the International Standard [5, 8, 14]. Pye and Milford [14], for example, analyzed 85 TUSD, of which 59 devices (about 69.4%) had at least one acoustic power value exceeding the ±30% margin, whereas IEC 61689:2013 recommends a maximum variation of ±20%.
A difference between the work of these researchers and the present study is the quality of the services provided by the company that carried out the preventive and corrective maintenance and the calibration of the equipment: they observed that, after repair and calibration, all measurements performed with 76 devices were within ±30% of the nominal power, and 95% of the measurements were within the ±20% range. The present results showed that 42.6% of the 3000 measurements were outside the latter range and 13.5% of the ROP measurements were outside the ±30% range. It is also worth mentioning that 88.9% of the 1000 measurements made with T1 at the frequency of 3 MHz were outside the range recommended by IEC 61689:2013. Martins et al. [7] evaluated the ROP of two nationally manufactured devices that had just undergone corrective maintenance and calibration and observed that some ROP values (16.3% of 80 measurements) were higher than recommended by IEC 61689.
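The checks discussed above reduce to simple arithmetic; a sketch follows. The 14.650 W-per-gram conversion and the ±20% limit come from the text, while the example balance reading is hypothetical.

```python
# Sketch of the balance-reading conversion and the IEC 61689 compliance
# check described in the text. The 14.650 W-per-gram factor and the
# +/-20% tolerance are quoted from the paper; the reading is hypothetical.

WATTS_PER_GRAM = 14.650  # power equivalent of the 1 g calibration disk

def grams_to_watts(grams: float) -> float:
    """Convert a radiation force balance reading in grams to watts."""
    return grams * WATTS_PER_GRAM

def relative_error(nominal_w: float, measured_w: float) -> float:
    """RE (%); negative means the emitted power exceeds the nominal power."""
    return 100.0 * (nominal_w - measured_w) / nominal_w

def within_iec_61689(re_percent: float, limit: float = 20.0) -> bool:
    """True when the relative error is inside the +/-20% tolerance."""
    return abs(re_percent) <= limit

measured = grams_to_watts(0.0128)  # hypothetical 0.0128 g reading
re = relative_error(0.1, measured)
print(f"measured = {measured:.4f} W, RE = {re:.1f}%, "
      f"compliant = {within_iec_61689(re)}")
```

With this sign convention, the large negative RE values in Table 1 for low nominal powers correspond to emitted powers roughly double the panel setting.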


Guirro and Santos [15] analyzed 8 TUSD produced in Brazil and found that 5 models (Sonamed I, Sonacel, Sonacel Plus, Sonacel III and Avatar I) had relative errors above ±30% of the intensity selected on the device panel at more than one measured intensity. This result was surprising, since the devices with a calibration and/or maintenance problem were new. The acoustic intensity (I) is related to the rated output power through the equation I = ROP/ERA. The cited authors observed that the majority of the TUSD analyzed had some discrepancy between the nominal and measured powers, even when using new devices or after returning from maintenance and/or calibration.
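The intensity relation quoted above can be checked numerically; the two ERA values are the transducer faces used in this paper, while the ROP value is hypothetical.

```python
# Worked example of I = ROP / ERA for the two transducer faces mentioned
# in the text (3.5 cm^2 and 1.0 cm^2); the ROP value is hypothetical.

def intensity_w_per_cm2(rop_w: float, era_cm2: float) -> float:
    """Acoustic intensity I = ROP / ERA, in W/cm^2."""
    return rop_w / era_cm2

rop = 2.0  # hypothetical rated output power, W
for era in (3.5, 1.0):
    print(f"ERA = {era} cm^2 -> I = {intensity_w_per_cm2(rop, era):.2f} W/cm^2")
```

Because intensity scales inversely with ERA, the same power error on the small 1.0 cm² face produces a proportionally larger intensity error at the patient.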

5 Conclusion

The accuracy test of the acoustic power emitted by a physiotherapy device manufactured in Brazil and used by the Physiotherapy Section of ES-YS over 5 consecutive Sundays showed that the device requires corrective maintenance and/or calibration, especially when the ERA of 3.5 cm² is used at a frequency of 3 MHz, as many measured values are outside the range recommended by IEC 61689:2013, even though the device is within its maintenance deadlines. The rated output power values obtained showed good precision, because, in general, the coefficient of variation was less than 7.7%, but the device analyzed showed a deficiency in terms of repeatability. This demonstrates a deficiency in relation to the TUSD manufactured in Brazil and/or the maintenance of the device, considering that the equipment was analyzed under temperature conditions consistent with those required by the manufacturer (between 5 and 45 °C) and in accordance with IEC 61689:2013. As a future study, it is intended to evaluate other therapeutic ultrasound devices of the Physiotherapy Section, to investigate a possible failure of the company that performs the maintenance of the equipment.

Acknowledgements The authors would like to thank the LUS-PEB-COPPE-UFRJ and the Physiotherapy Section of ES-YS for having provided the equipment used in this study.

Conflict of Interest The authors declare that they have no conflict of interest.

References
1. Prentice WE (2011) Therapeutic modalities in rehabilitation. McGraw-Hill, New York
2. Kumar V, Abbas AK, Aster JC (2013) Patologia Básica. Elsevier, Rio de Janeiro
3. Habash RWY, Bansal R, Krewski D et al (2006) Thermal therapy, part 1: an introduction to thermal therapy. Crit Rev Biomed Eng 34:459–489
4. Stewart HF, Harris GR, Herman BA et al (1974) Survey of the use and performance of ultrasonic therapy equipment in Pinellas County. Phys Ther 54:707–715
5. Lima LS, Costa Júnior JFS, Costa RM et al (2012) Exatidão da potência acústica de equipamentos comerciais de ultrassom fisioterapêutico. In: XXIII Congresso Brasileiro de Engenharia Biomédica (CBEB 2012), Porto de Galinhas, Brazil, pp 2443–2447
6. Shaw A, Hodnett M (2008) Calibration and measurement issues for therapeutic ultrasound. Ultrasonics 48:234–252
7. Martins ARA, AAS, Omena TP et al (2016) Aferição da intensidade de dois aparelhos de ultrassom fisioterapêutico pós-manutenção. In: XXV Congresso Brasileiro de Engenharia Biomédica (CBEB 2016), Foz do Iguaçu, Brazil, pp 1989–1992
8. Ferrari CB, Andrade MAB, Adamowski JC et al (2010) Evaluation of therapeutic ultrasound equipments performance. Ultrasonics 50:704–709
9. Johns LD, Straub SJ, Howard SM (2007) Variability in effective radiating area and output power of new ultrasound transducers at 3 MHz. J Athl Train 42:22–28
10. IEC 61689:2013 Ultrasonics—Physiotherapy systems—Field specifications and methods of measurement in the frequency range 0.5 MHz to 5 MHz, 3rd edn (2013)
11. JCGM 200:2012 (2012) International vocabulary of metrology: basic and general concepts and associated terms, 3rd edn
12. Razali NM, Wah YB (2011) Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. J Stat Model 2:21–33
13. Riboldi J, Barbian MH, Kolowski ABS et al (2014) Accuracy and power of parametric and non-parametric homoscedasticity tests assessed by simulation. Rev Bras Biom 32:334–344
14. Pye SD, Milford C (1992) The performance of ultrasound physiotherapy machines in Lothian Region. Ultrasound Med Biol 20:347–359
15. Guirro R, Santos SCB (2002) Evaluation of the acoustic intensity of new ultrasound therapy equipment. Ultrasonics 39:553–557

Synthesis and Antibacterial Activity of Maleimides

E. Conrado, C. J. Francisco, R. H. Piccoli, and A. F. Uchoa

Abstract

Previous studies have shown that cyclic imides and their subclasses give positive results in biological activity assays, showing antimicrobial, analgesic and antifungal therapeutic potential, arousing the interest of the scientific community and the pharmaceutical industry and, in some cases, surpassing drugs already on the market. In this work we perform the synthesis of one of the subclasses of cyclic imides, the maleimides, as they are easy to obtain in good yields and are versatile for conjugation with other molecules. The synthesized compounds were identified by ¹H and ¹³C nuclear magnetic resonance and mass spectrometry, and their antibacterial activity was analyzed. A maleimide platform with different functional groups was obtained. These molecules showed good synthesis yields (90%) and, for some compounds, activity superior to the antibiotic streptomycin.

Keywords: Maleimides · Drugs · Antibacterial activity

1 Introduction

E. Conrado · A. F. Uchoa: Universidade Anhembi Morumbi, Instituto de Engenharia Biomédica, São José dos Campos, SP, Brazil
C. J. Francisco (corresponding author): Universidade Nilton Lins, Parque das Laranjeiras, Av. Prof. Nilton Lins, 3259 - Flores, Manaus, AM 69058-030, Brazil
R. H. Piccoli: Universidade Federal de Lavras, Lavras, MG, Brazil

With the development of humanity, new diseases have emerged, which may be of bacteriological, fungal or viral nature and are often complex, such as AIDS and Covid-19. The emergence of new diseases requires ingenious studies aimed at designing, synthesizing and determining the biological activity of new molecules. In this sense, organic synthesis has been a great ally of the scientific community, favoring the significant growth in new drugs, which can show promising results in biological assays and advance to pre-clinical and clinical studies. Among these compounds, cyclic imides stand out [1]. In order to enhance biological activity, chemical processes are used that introduce molecular changes, depending on the pharmacological and pharmacophoric groups present in the molecule. In this way, we seek to insert groups that can alter physicochemical properties, such as the hydrophobicity of the substance, or alter the potential of pharmacological groups. This change occurs through the insertion of electron-donor and/or electron-acceptor groups, which allows the application of qualitative and quantitative methods that correlate chemical structure with biological activity [2]. These methods make it possible to verify the interaction between chemical structure and biological activity, or between chemical structure and some physicochemical properties. Additionally, it is possible to verify the effects caused in a substance (ligand) during the interaction with the biological receptor, identifying the main factors of this interaction [3]. The introduction of substituents produces changes in the physicochemical properties of the molecule (hydrophobicity, electronic density and structural conformation) and may lead to new syntheses [4]. Several methods have been developed to obtain a better understanding of the different physicochemical parameters in a small series of substances or test group, among which we mention the methods of Hansch [5], Craig [6], Topliss [7], and the modified Topliss method [7]. In this sense, cyclic imides are compounds that have great therapeutic potential.
This family of compounds has the characteristic imide functional group –CO–N(R)–CO– (Fig. 1), where R can be a hydrogen atom or an alkyl or aryl group linked to a carbon chain. Among the cyclic imides we can highlight maleimides, succinimides, glutarimides,

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_2


Fig. 1 General structure of cyclic imides

phthalimides, naphthalimides and their derivatives [1]. Maleimides are of particular relevance for being inhibitors of Prostaglandin Endoperoxide Synthase (PGHS); this process occurs by enzymatic attack on olefinic and carbonyl carbons [8], which shows the importance of studying the biological activity of maleimides, as they are closely related to antimicrobial processes. Cyclic imides and their subclasses have biological effects of great importance, with maleimide as the main representative in antifungal, antibacterial and insecticidal activities [1]. The main objective of this work was to obtain a maleimide platform with different functional groups (R) and to determine the antibacterial activity of the compounds obtained against Escherichia coli ATCC 055, Pseudomonas aeruginosa ATCC 27853, Staphylococcus aureus ATCC 25923 and Listeria monocytogenes ATCC 15313.

2 Material and Methods

2.1 Synthesis

Anilines and maleic anhydride, high-purity analytical reagents, came from Sigma-Aldrich; the solvents ethyl ether, acetic anhydride, chloroform, methanol and sodium acetate came from Sinthya. Maleimides were synthesized in two stages: (1) addition of the anilines to maleic anhydride to form the corresponding acid, and (2) reflux in acetic anhydride for cyclization of the imide ring. This procedure was performed according to the process described in the literature [8], with yields between 32 and 78%. The characterization was performed by nuclear magnetic resonance at 500 MHz, using deuterated chloroform (CDCl3) as solvent and TMS as reference.

2.2 Antimicrobial Activity

The radial diffusion method on nutrient agar was used, and the bacteria were grown in Brain Heart Infusion. After activation, the turbidity was adjusted to the McFarland 0.5 scale. Sterile paper discs were immersed in a solution of the compounds dissolved in DMSO at a concentration of 250 mMol/mL. The plates were incubated at 37 °C for 24 hours and the inhibition halos (mm) were read. Controls were performed with the solvent (DMSO) and an antibiotic (streptomycin). The tests were carried out with the following bacteria: Pseudomonas aeruginosa ATCC 27853 (gram-negative), enteropathogenic Escherichia coli ATCC 055 (gram-negative), Staphylococcus aureus ATCC 25923 (gram-positive) and Listeria monocytogenes ATCC 15313 (gram-positive).

3 Results

3.1 Synthesis and Structural Characterization

In this research, 11 compounds were obtained; compound 1 has recognized antibacterial activity. Compounds 2, 3 and 4 had already been synthesized, but never had their biological activities determined. The remaining compounds (5–11) are new. Figure 2 shows the synthetic scheme for obtaining these compounds. The 11 compounds were purified on a silica gel column and characterized by high-resolution mass spectrometry and by hydrogen (¹H NMR) and carbon-13 (¹³C NMR) nuclear magnetic resonance. Only the spectra referring to methyl 4-(2,5-dioxo-2,5-dihydro-1H-pyrrol-1-yl) benzoate (compound 9) are presented, in Figs. 3, 4 and 5, respectively.

3.2 Biological Activity

Antibacterial activity was determined against four strains of bacteria: two gram-positive (Staphylococcus aureus ATCC 25923 and Listeria monocytogenes ATCC 15313) and two gram-negative (Pseudomonas aeruginosa ATCC 27853 and Escherichia coli ATCC 055) (Table 1). Since it is known that the biological activity of maleimides is related to structural factors and their physicochemical properties, in this work a platform of maleimides with different functional groups was synthesized, which allowed a study of the chemical structure related to the biological activity in P. aeruginosa ATCC 27853, E. coli ATCC 055, S. aureus ATCC 25923 and L. monocytogenes ATCC 15313. Figure 2 presents a scheme of the synthesis strategy of the compounds that were used to study the biological activity. Within the concentrations used, no activity was observed in the DMSO control experiments. The antibiotic streptomycin did not show selectivity between gram-positive and gram-negative bacteria. For gram-negative bacteria, compounds 11 and 1 showed a potential close to streptomycin in E. coli. The same was observed for compounds 10, 5, 1 and 3 in P. aeruginosa. On the other hand, the gram-positive results showed an inhibitory potential of the compounds in the following order

Fig. 2 Compound platform structure: (1) 1-phenyl-1H-pyrrole-2,5-dione; (2) 1-octyl-1H-pyrrole-2,5-dione; (3) 1-(4-nitrophenyl)-1H-pyrrole-2,5-dione; (4) 1-(4-mercaptophenyl)-1H-pyrrole-2,5-dione; (5) 1-(4-butylphenyl)-1H-pyrrole-2,5-dione; (6) 1-(4-dodecylphenyl)-1H-pyrrole-2,5-dione; (7) 1-(4-octadecylphenyl)-1H-pyrrole-2,5-dione; (8) 1-(4-henicosylphenyl)-1H-pyrrole-2,5-dione

Fig. 3 Mass spectrum of methyl 4-(2,5-dioxo-2,5-dihydro-1H-pyrrol-1-yl) benzoate

1, 11, and 5, higher than streptomycin when applied to S. aureus. For L. monocytogenes, compounds 1, 5, and 10 also showed an inhibitory potential greater than streptomycin. Considering that these compounds have the maleimide ring as a pharmacological group, that the mechanism of action occurs by enzymatic attack on the vinyl group conjugated to the carbonyls and/or on the carbonyl group, and that these groups are present in all compounds, the differences between the activities seem to be related to the hydro/lipophilic balance. The solvent (DMSO) had no inhibitory effect. Streptomycin (control) showed a regular inhibition pattern within


Fig. 4 ¹H NMR spectrum, in CDCl3 at 500 MHz, of methyl 4-(2,5-dioxo-2,5-dihydro-1H-pyrrol-1-yl) benzoate

Fig. 5 ¹³C NMR spectrum, in CDCl3 at 125 MHz, of methyl 4-(2,5-dioxo-2,5-dihydro-1H-pyrrol-1-yl) benzoate

the group of bacteria analyzed, with a zone of inhibition of 15.2–16.4 mm for gram-negative and 13.5–13.6 mm for gram-positive bacteria. For gram-negative bacteria, E. coli was most affected by compounds 1 and 11 (15.0–15.8 mm) and 5 (13.0 mm). P. aeruginosa was most affected by compounds 1, 3, 5 and 10 (13.47–13.98 mm). Among the gram-positive bacteria, S. aureus was most affected by 5, 11 and 1 (16.7–18.4 mm), and L. monocytogenes by 10, 5 and 1 (17.0–20.3 mm).
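The mean ± error values reported in Table 1 summarize three independent replicates; a minimal sketch of that computation, with hypothetical halo readings:

```python
# Sketch of the mean +/- standard error computation behind Table 1.
# The three halo diameters (mm) below are hypothetical replicates.
import math

halos = [15.5, 15.8, 16.1]  # hypothetical triplicate zone-of-inhibition readings

n = len(halos)
mean = sum(halos) / n
variance = sum((x - mean) ** 2 for x in halos) / (n - 1)  # sample variance
sem = math.sqrt(variance) / math.sqrt(n)  # standard error of the mean
print(f"{mean:.2f} +/- {sem:.2f} mm")
```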

4 Discussion

The 1-octyl-1H-pyrrole-2,5-dione (compound 2) was obtained from octan-1-amine with 38% yield. On the other hand, the compounds obtained from anilines showed much higher yields, between 78 and 92%. This reaction yield is related to the nucleophilicity of the amines. The higher yields were obtained for the alkylated anilines of compounds 4, 5, 6, 7

and 8, all with yields greater than 85%. The higher yield of these compounds is related to the greater reactivity of these anilines, which is increased by the inductive electron-donor effect of the alkyl chain. These compounds were readily purified by chromatography on silica gel, since the polarity of the products differs significantly from that of the reaction reagents and by-products. The characterization was performed unequivocally by mass spectrometry and by ¹H NMR and ¹³C NMR at 500 MHz and 125 MHz, respectively. The ¹H NMR spectra show a characteristic singlet at a chemical shift (δ) of 6.8 ppm for the two vinyl hydrogens, characteristic of the maleimide ring. In the ¹³C NMR, the carbonyl carbons 1 and 4 appear at δ ≈ 170 ppm.

The relationship between activity and the hydro/lipophilic balance seems to be maximized for compounds 1, 5, 10 and 11 in gram-positive bacteria.

Table 1 Antibacterial activity of maleimides by the agar diffusion method; data are presented as mean and standard error of three independent experiments

| Compound | Escherichia coli 250 mMol/mL | Pseudomonas aeruginosa 250 mMol/mL | Staphylococcus aureus 250 mMol/mL | Listeria monocytogenes 250 mMol/mL |
| Streptomycin | 16.4 ± 0.00 | 15.23 ± 0.00 | 13.51 ± 0.00 | 13.6 ± 0.00 |
| DMSO | 0 ± 0.00 | 0 ± 0.00 | 0 ± 0.00 | 0 ± 0.00 |
| 1 | 15.8 ± 0.24 | 13.73 ± 0.84 | 18.4 ± 1.09 | 20.3 ± 0.57 |
| 2 | 8.43 ± 2.27 | 4.60 ± 1.88 | 0 ± 0.00 | 6.6 ± 0.13 |
| 3 | 8.4 ± 0.28 | 13.93 ± 0.67 | 10.5 ± 1.03 | 14.0 ± 0.75 |
| 4 | 9.0 ± 1.24 | 10.05 ± 1.44 | 8.1 ± 0.69 | 7.8 ± 0.60 |
| 5 | 13.3 ± 0.38 | 13.98 ± 1.05 | 16.7 ± 0.17 | 17.8 ± 0.09 |
| 6 | 4.2 ± 1.70 | 2.1 ± 1.74 | 8.1 ± 0.36 | 0 ± 0.00 |
| 7 | 5.4 ± 2.31 | 7.25 ± 0.18 | 0 ± 0.00 | 7.5 ± 0.14 |
| 8 | 8.8 ± 0.61 | 12.44 ± 0.95 | 11.2 ± 0.16 | 9.6 ± 1.01 |
| 9 | 4.7 ± 1.94 | 6.4 ± 0.14 | 0.0 ± 0.00 | 9.6 ± 0.46 |
| 10 | 0 ± 0.00 | 13.47 ± 0.82 | 13.6 ± 0.31 | 17.0 ± 0.46 |
| 11 | 15.0 ± 0.71 | 12.04 ± 0.59 | 16.8 ± 0.54 | 12.5 ± 1.25 |

5 Conclusions

For the synthetic process, it was found that the compounds obtained from anilines present a much higher yield than those obtained from aliphatic amines. Compounds 1, 5, 10 and 11 were the ones with the greatest antibacterial potential. Since the pharmacological group is the maleimide ring, and the mechanism of enzymatic attack on this ring depends on internalization into the microorganisms and on specific structures, it is concluded that this biological activity depends on the functionalization group, because it allows adequate permeability, which in turn depends on the hydro/lipophilic balance.

Acknowledgements The authors wish to thank Fapesp, Process 2013/07937-8, CEPID REDOXOMA - Research Center on Redox Processes in Biomedicine.

Conflict of Interest The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References
1. Cechinel Filho V, Corrêa R, Yunes RA, Nunes RJ (2003) Aspectos químicos e potencial terapêutico de imidas cíclicas: uma revisão da literatura. Química Nova 26:230–241
2. Kalgutkar AS, Crews BC, Marnett LJ (1996) J Med Chem 39:8
3. Hansch C, Fujita T (1964) ρ-σ-π Analysis. A method for the correlation of biological activity and chemical structure. J Am Chem Soc 86:1616–1626
4. Craig PN (1971) Interdependence between physical parameters and selection of substituent groups for correlation studies. J Med Chem 14(8):680–684
5. Topliss JG (1972) Utilization of operational schemes for analog synthesis in drug design. J Med Chem 15:1006
6. Ferreira RS, Oliva G, Andricopulo A (2011) Integração das técnicas de triagem virtual e triagem biológica automatizada em alta escala: oportunidades e desafios em P&D de fármacos. Química Nova 34(10):1770–1778
7. Huisgen R, Granskey R, Saver J (1964) The chemistry of alkenes. In: Patai S (ed) Interscience Publishers, London, p 739
8. Cava MP, Deana AA, Muth K et al (1973) N-phenylmaleimide. In: Baumgarten HE (ed) Org Synth Collective, vol 5. Wiley, New York, pp 944–946

Bioengineering

Study of the Effect of Bioceramic Compressive Socks on Leg Edema

A. A. S. Sakugawa, L. A. L. Conrado, A. Balbin Villaverde, and E. Munin

Abstract

The aim of the present study is to investigate the therapeutic effect of wearing compressive socks composed of synthetic fibers with IR-emitting ceramic particulates on patients with edema of the lower extremities. Thirty patients of both genders, with ages ranging from 30 to 70 years (54.9 ± 13.3 years), were enrolled in the study and separated into two groups: C+ and placebo. The C+ group wore the compressive socks with embedded ceramic powder for at least 8 h a day for four weeks. The placebo group wore compressive socks made of the same fabric but without the ceramic particulate. The evolution of the treatment was assessed through plethysmographic measurements. The statistical analysis was done using the Kolmogorov–Smirnov normality test and a parametric two-tailed t-test with Welch correction at the significance level of α = 0.05. The Prism 8.0 software (GraphPad Software Inc., La Jolla, CA, USA) was used for the analysis. The experimental data showed a statistically significant reduction in edema volume for the ceramic-active group C+ as compared to the placebo group. The treatment of leg edema using compressive socks containing ceramic particulate in their fabric seemed to be more effective when compared with placebo socks.

Keywords: Leg edema · Biostimulation · Infrared therapy · Black body emission · Bioceramic socks

A. A. S. Sakugawa · A. Balbin Villaverde · E. Munin (corresponding author): Anhembi Morumbi University (UAM), Biomedical Engineering Center, Estrada Dr. Altino Bondensan, 500, Distrito de Eugênio de Melo, CEP: 12.247-016, São José dos Campos, SP, Brazil. e-mail: [email protected]
L. A. L. Conrado · A. Balbin Villaverde · E. Munin: Center for Innovation, Technology and Education (CITE), Estrada Dr. Altino Bondensan, 500, Distrito de Eugênio de Melo, CEP: 12.247-016, São José dos Campos, SP, Brazil

1 Introduction

Biological effects resulting from the occlusion of body parts by pieces of clothing and devices containing ceramic particulates have been reported [1–7]. Those published results indicate that the presence of inorganic particulates in the composition of occluding devices and fabrics modulates the radiation exchange between the human body and the surrounding medium. An increase in local tissue temperature and vasodilatation seem to be the primary effects attributed to creams and garments containing inorganic particulates, as well as to other ceramic devices. In the last decades, interest in studying IR ceramic garment therapies has increased, and reports of successful treatments for a variety of diseases are numerous. Among the diseases treated with garments embedded with ceramic particulates are Raynaud's syndrome (thermo-flow gloves) [5]; chronic foot pain (polyethylene terephthalate fiber socks) [6]; primary dysmenorrhea (far infrared-emitting sericite belt) [8, 9]; cellulite reduction (bioceramic-coated neoprene shorts) [1, 2]; impairment of limb movements in post-polio syndrome (infrared MIG3 bioceramic fabrics) [7]; and postoperative effusion after total knee arthroplasty (knee pad of Nexus-ES fiber) [10]. Likewise, studies on the reduction of body measurements of individuals wearing bioceramic high-waist undershorts were reported [3, 4]. It was also found that some bioceramic appliances improve athletes' performance, such as a reduction in the recovery time of futsal players using bioceramic pants [11], control of bacterial load in runners wearing socks embedded with ceramic particulates [12], and performance improvement of gymnasts when wearing

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_3

19

20

A. A. S. Sakugawa et al.

modified suits with ceramic materials [13]. The effects of bioceramic textiles used in sports activities are disclosed in a recently published review [14]. All materials in the nature emit electromagnetic radiation that depends on the temperature and composition of the material. If the material were perfectly absorbing (black body) the spectrum of the emitted radiation presents a maximum wavelength (kmax) given by Wien’s law and a total power per unit area (P/A) that satisfy the Stefan-Boltzmann Law: lmax ¼ ½2:897=TÞ  103 m P=A ¼ sT4 where T is the absolute temperature in degrees Kelvin (K) (273 + °C) and r = 5.67040  10−8 W/m2K4 is the Stefan’ constant. For real materials in the nature (not black body), the kmax does not depend on the material properties, being only a function of its temperature. On the other hand, the total power per unit area depends also on the material properties. P=A ¼ erT4 where e is the emissivity of the material and ranges from zero (non-absorbing material) to 1 (black body). For bioceramic particulates the emissivity is 0.9 or even higher. At room temperature that emission falls into the infrared range (IR) of the spectrum. Figure 1 displays the value of kmax as a function of the temperature T. It can be observed that for a human skin temperature of 33 °C the ceramic particulates emit radiation with a maximum at 9,47 lm and receives from the environment, at an average temperature of 23 °C, radiation with a maximum wavelength around 9.8 lm. The aim of the present study is to elucidate if wearing socks with embedded ceramic particulates is effective for reducing edema of the legs in persons with chronic

Fig. 1 Maximum wavelength of the black body radiation as a function of the body temperature (Wien’s Law)

inflammation of vascular origin. The edema reduction will be assessed using a plethysmographic method.
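The two radiation laws quoted in the Introduction can be checked with a short numerical sketch (Python here purely for illustration; the constants and temperatures are those quoted in the text):

```python
import math

SIGMA = 5.67040e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
WIEN_B = 2.897e-3    # Wien displacement constant, m*K

def peak_wavelength_um(temp_c):
    """Wien's law: wavelength of maximum emission, in micrometres."""
    return WIEN_B / (temp_c + 273.0) * 1e6

def emitted_power_per_area(temp_c, emissivity=1.0):
    """Stefan-Boltzmann law: P/A = eps * sigma * T^4, in W/m^2."""
    return emissivity * SIGMA * (temp_c + 273.0) ** 4

print(peak_wavelength_um(33.0))                      # ~9.47 um, skin at 33 C
print(peak_wavelength_um(23.0))                      # ~9.79 um, 23 C environment
print(emitted_power_per_area(33.0, emissivity=0.9))  # bioceramic, eps ~ 0.9
```

The two peak wavelengths match the 9.47 μm and ~9.8 μm values cited in the text.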

2 Materials and Methods

The study protocol was in accordance with the Declaration of Helsinki and was approved by the University Ethics Committee (Universidade do Vale do Paraíba, Ethics Committee approval number H42/CEP2010). All subjects provided written informed consent before data collection. The subjects of this research were 30 patients of both sexes, aged 30 to 70 years, suffering from edema of vascular or lymphatic origin in the lower limb extremities, with symptoms diagnosed for more than one year. The 30 patients were equally distributed into two groups: the C+ group wore socks with particulate ceramic embedded in the fabric, and the placebo group wore socks of the same fabric without ceramic material. Since some patients presented edema in both legs, a total of 48 lower limbs were diagnosed with edema, such that each group contained 24 edematous limbs. The patients were advised not to apply cosmetics on the edematous site, nor to undergo massage or lymphatic drainage. Exclusion criteria were: pregnant or lactating women, endocrine disease, disorders of diabetic origin, venous ulcers and arterial disorders, and treatment with anti-inflammatory drugs. Each volunteer in both groups received a set of two socks with a compression factor of 20 mmHg, with or without the ceramic particulates. The socks were manufactured with Emana™ polyamide yarns with embedded ceramic particulates containing oxides of Al, Si, Zn, and Ca, and were provided by BIOS (São José dos Campos, Brazil). The volunteers were requested to wear the socks daily for eight consecutive daytime hours, for four weeks, and were evaluated weekly. The edema reduction from before to after treatment was assessed by a homemade plethysmograph, which consisted of a rigid fiberglass boot containing 2.5 L of water.
The edema volume was determined by measuring the volume of water displaced when the patients inserted their leg into the boot with water. The research subjects did not move or shake their legs, to reduce measurement errors. The evolution of the edema over the course of treatment was assessed by weekly plethysmographic measurements of the leg volume for four weeks. A quantitative evaluation parameter was defined as ΔV = V − V₀, where V is the leg volume at each week and V₀ is the leg volume before treatment. A negative value of ΔV indicates a reduction of the edema.
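The evaluation parameter ΔV can be sketched as follows; the weekly volumes below are hypothetical values for illustration only, not measured data:

```python
def edema_change(v_week, v_initial):
    """Delta-V = V - V0; a negative value indicates edema reduction."""
    return v_week - v_initial

v0 = 1200.0                                 # leg volume before treatment (mL), hypothetical
weekly = [1180.0, 1165.0, 1150.0, 1140.0]   # weekly plethysmographic readings, hypothetical
dv = [edema_change(v, v0) for v in weekly]
print(dv)  # -> [-20.0, -35.0, -50.0, -60.0], i.e. steady edema reduction
```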


2.1 Statistical Analysis

The statistical analysis was performed using the Kolmogorov–Smirnov test for normality of the data and a parametric two-tailed t-test with Welch's correction to compare the means of the C+ and placebo groups, at a significance level of α = 0.05. Prism 8.0 (GraphPad Software Inc., La Jolla, CA, USA) was used for the analysis. Data are expressed as mean ± SEM.
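The Welch-corrected unpaired t statistic used here can be sketched as follows. This is a hand-rolled illustration rather than the Prism implementation, and the input samples are illustrative:

```python
import math

def welch_t(a, b):
    """Welch's unpaired t statistic and degrees of freedom (unequal variances)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

t, df = welch_t([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
print(t, df)  # t ~ -3.674, df = 4.0
```

The two-tailed p-value then follows from the t distribution with df degrees of freedom (e.g. via `scipy.stats`).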

3 Results

Fig. 3 Frequency histogram chart for the C+ and placebo groups

The mean age of the thirty volunteers who participated in the study was 54.9 ± 13.3 years (mean ± SD). The age dispersion of the patients in the two groups was statistically homogeneous (mean ± SD): 50.7 ± 12.0 years (placebo group) versus 59.1 ± 13.7 years (C+ group), p = 0.831. Figure 2 depicts the results of the edema volumetric change (ΔV) obtained by plethysmographic measurements from before to after treatment. The initial volume V₀ ranged from 838 to 1388 mL in the placebo group and from 898 to 1520 mL in the C+ group. The statistical box plot of Fig. 2 gives a perception of the data dispersion and shows that the mean volumetric reduction for the group that wore socks containing ceramic particulates in the fabric was (−78.9 ± 10.7) mL, while for the placebo group the mean volumetric reduction was only (−41.3 ± 5.6) mL. Intragroup analysis using the Kolmogorov–Smirnov normality test indicates that the data of each group pass the normality test. A statistical comparison between the experimental data of the two study groups with a parametric unpaired t-test returned a two-tailed p value of 0.003, indicating that the difference between the group means is very significant. Figure 3 displays the frequency as a function of the edema volumetric change at the end of the treatment for the two groups. The histograms of Fig. 3 show that, for the group

Fig. 2 Edema reduction from before to after treatment, as quantified by plethysmographic measurements and displayed as a statistical box plot

that wore socks containing ceramic particulates, the distribution of measured values of the edema volumetric change is clearly shifted toward higher volume reductions, as compared to the distribution achieved for the placebo group. The time evolution of the edema volumetric change (ΔV) over the treatment is important information, since it indicates whether the reduction is consistent. Table 1 displays the edema change for the placebo and C+ groups at each week of treatment, together with the corresponding p-value obtained from the intergroup statistical analysis. The edema volume undergoes a systematic reduction along the treatment in both groups, although the reduction is larger for the group that wore socks with ceramic particles. The values of p consistently decrease from the 1st to the 4th week, reaching a statistically significant difference at the 3rd and 4th weeks. The results indicate that treatment by wearing socks containing ceramic particulates becomes more efficient with continued use when compared to wearing socks without ceramics.

4 Discussion

The better clinical results promoted by the ceramic-containing socks in the present study may be attributed to stimulation of blood perfusion and lymphatic drainage, as a consequence of an increment in local tissue temperature produced by the topical occluding garment containing ceramic particulates. This hypothesis is strengthened by the work of Ko and Berbrayer [5], who treated Raynaud's syndrome patients with ceramic-containing gloves. A mean temperature increase of 1 °C in the finger dorsum of the patients who wore the ceramic gloves was reported, suggesting that the gloves, made active by the incorporated ceramic, were beneficial in the management of Raynaud's symptoms. Another study on the thermal effects of wearable ceramic materials was reported

Table 1 Edema volumetric change (ΔV, in mL) for each week of treatment, expressed as mean ± SEM for the placebo and C+ groups

Week        Placebo        C+             p value
1st week    −20.2 ± 5.8    −34.6 ± 11.4   0.260
2nd week    −30.2 ± 6.7    −52.5 ± 11.2   0.080
3rd week    −38.3 ± 7.1    −66.1 ± 11.5   0.040ᵃ
4th week    −41.3 ± 5.6    −78.9 ± 10.7   0.003ᵃ

The value of p corresponds to the weekly intergroup statistical analysis of group means
ᵃ Mean difference statistically significant

recently by Papacharalambous et al. [15]. They immersed the hands of healthy volunteers first in cold water and then in pouches, one bioceramic-active and the other a placebo. A temperature increase of 6.34% (dorsal side) and 4.74% (palm side) was found when the active pouch was compared to the placebo one [15]. The data presented in this work show that compressive socks are beneficial in the treatment of edema of the lower limb extremities, and that the addition of particulate ceramics to the fabric from which the socks are manufactured enhances the clinical results even further. The experimental data for the time evolution of edema volume reduction over the four weeks of treatment reveal an increasing edema-reduction capacity of the socks containing bioceramic particulates, as compared with the placebo ones: 14.4 mL at the 1st week, 22.3 mL at the 2nd, 27.8 mL at the 3rd, and 37.6 mL at the 4th week. The difference between the mean values of the two groups increases monotonically over time, and at the 3rd and 4th weeks it becomes statistically significant (p = 0.04 and p = 0.003, respectively). The frequency histograms of Fig. 3 show that, for the group that wore socks containing ceramic particulates, the distribution of measured values of the edema volumetric change is clearly shifted toward higher absolute volume reductions, as compared to the distribution achieved for the placebo group. Anderson et al. [16] disclosed a thorough study of the radiative interactions between the body, garments and ambient, and of the energy flux formed by the reflected, transmitted and emitted IR radiation. They discussed how IR energy is transferred back and forth between the three elements: body, fabric and surrounding medium. More recently, Washington et al.
[17] presented a careful investigation of a possible mechanism for phototherapy using IR radiation produced by bioceramic materials embedded in fabrics, such as those used for wearable garments like the compressive socks of the present study. Ceramic particulates embedded in the garments absorb energy coming from the body by radiative processes and by conductive/convective heat transfer, and then re-emit it back to the body. The radiation emitted by the body is centered at about 9 μm, assuming a skin temperature of 33 °C (Fig. 1);

however, the ceramic particles emit at a slightly longer wavelength, because the garment temperature is lower than that of the skin. Another point to be considered is whether the clothing changes the body's boundary conditions and interferes with the radiant heat exchange between the body and the surrounding medium [3]. The authors of [3] considered that the garments might work as a radiation trap that sends reflected IR radiation back to the body. An interesting question was raised by the work of Vatansever and Hamblin [18] about the correlation between non-heating IR phototherapy and visible-NIR low-power phototherapy. They asserted that the effect of both therapies may be due to vasodilation caused by NO release from cytochrome c oxidase, or from NO bound to hemoglobin, among other possible mechanisms. Further studies will be necessary for a better understanding of the IR phototherapy processes responsible for the results presented herein.

5 Conclusion

The present study seems to indicate the effectiveness of wearing compressive socks with embedded ceramic material for edema volume reduction in individuals suffering from edema of vascular or lymphatic origin in the lower limb extremities.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Rao J, Paabo KE, Goldman MP (2004) A double-blinded randomized trial testing the tolerability and efficacy of a novel topical agent with and without occlusion for the treatment of cellulite: a study and review of the literature. J Drugs Dermatol 3:417–425
2. Rao J, Gold MH, Goldman MP (2005) A two-center, double-blinded, randomized trial testing the tolerability and efficacy of a novel therapeutic agent for cellulite reduction. J Cosmet Dermatol 4:93–102. https://doi.org/10.1111/j.1473-2165.2005.40208.x
3. Conrado LAL, Munin E (2011) Reduction in body measurements after use of a garment made with synthetic fibers embedded with ceramic nanoparticles. J Cosmet Dermatol 10:30–35. https://doi.org/10.1111/j.1473-2165.2010.00537.x
4. Conrado LAL, Munin E (2013) Reductions in body measurements promoted by a garment containing ceramic nanoparticles: a 4-month follow-up study. J Cosmet Dermatol 12:18–24. https://doi.org/10.1111/jocd.12027
5. Ko GD, Berbrayer D (2002) Effect of ceramic-impregnated thermo flow gloves on patients with Raynaud's syndrome: randomized, placebo-controlled study. Altern Med Rev 7:328–334
6. York RMB, Gordon IL (2009) Effect of optically modified polyethylene terephthalate fiber socks on chronic foot pain. BMC Complement Altern Med 9:10. https://doi.org/10.1186/1472-6882-9-10
7. Silva TM, Moreira GA, Quadros AAJ et al (2009) Effects of the use of MIG3 bioceramics fabrics use—long infrared emitter—in pain, intolerance to cold and periodic limb movements in post-polio syndrome. Arq Neuropsiquiatr 67:1049–1053. https://doi.org/10.1590/s0004-282x2009000600016
8. Lee CH, Roh JW, Lim CY et al (2011) A multicenter, randomized, double-blind, placebo-controlled trial evaluating the efficacy and safety of a far infrared-emitting sericite belt in patients with primary dysmenorrhea. Complement Ther Med 19:187–193. https://doi.org/10.1016/j.ctim.2011.06.004
9. Liau BY, Leung TK, Ou MC et al (2012) Infrared ray-emitting belts on primary dysmenorrhea. Int J Photoenergy, Article ID 238468. https://doi.org/10.1155/2012/238468
10. Innocenti M, Mancini M, Faccio M et al (2017) The use of a high-tech knee pad for reduction of the postoperative effusion after total knee arthroplasty. Joints 5:7–11. https://doi.org/10.1055/s-0037-1601406
11. Nunes RFH, Cidral-Filho FJ, Flores LJF et al (2020) Effects of far-infrared emitting ceramic materials on recovery during 2-week preseason of elite futsal players. J Strength Cond Res 34:235–248. https://doi.org/10.1519/JSC.0000000000002733
12. Nova AM, Marcos-Tejedor F, Martín BG et al (2018) Bioceramic-fiber socks have more benefits than cotton-made socks in controlling bacterial load and the increase of sweat in runners. Text Res J 88:696–703. https://doi.org/10.1177/0040517516688631
13. Cian C, Gianocca V, Barraud PA et al (2015) Bioceramic fabrics improve quiet standing posture and handstand stability in expert gymnasts. Gait Posture 42:419–423. https://doi.org/10.1016/j.gaitpost.2015.07.008
14. Pérez-Soriano P, Sanchis-Sanchis R, Aparicio I (2018) Effects of bioceramic textiles used in physical activity or sport: a systematic review. Int J Cloth Sci Technol 30:854–863. https://doi.org/10.1108/IJCST-05-2018-0066
15. Papacharalambous M, Karvounis G, Kenanakis G et al (2019) The effect of textiles impregnated with particles with high emissivity in the far infrared, on the temperature of the cold hand. J Biomech Eng 141:034502. https://doi.org/10.1115/1.4042044
16. Anderson DM, Fessler JR, Pooley MA et al (2017) Infrared radiative properties and thermal modeling of ceramic-embedded textile fabrics. Biomed Opt Express 8(3):1698–1711. https://doi.org/10.1364/BOE.8.001698
17. Washington K, Wason J, Thein MS et al (2018) Randomized controlled trial comparing the effects of far-infrared emitting ceramic fabric shirts and control polyester shirts on transcutaneous PO2. J Textile Sci Eng 8:349. https://doi.org/10.4172/2165-8064.1000349
18. Vatansever F, Hamblin MR (2012) Far infrared radiation (FIR): its biological effects and medical applications. Photon Lasers Med 1:255–266. https://doi.org/10.1515/plm-2012-0034

Analysis of the Electric Field Behavior on a Stimulation Chamber Applying the Superposition Principle L. A. Sá, E. M. S. Bortolazzo, J. A. Costa JR and P. X. de Oliveira

Abstract

This paper investigates the possibility of applying electric fields in different directions in a circular chamber using two insulated pairs of electrodes positioned in orthogonal directions. By controlling the current on each axis with two independent stimulator channels, we would be able to stimulate the central area in different directions according to the superposition principle. This physical hypothesis is reasonable, and the work focused on investigating whether the resulting electric fields would be homogeneous in a small work area of a circular chamber. The electric field magnitude and orientation were determined from the measured electric potential data, and the homogeneity was then tested with monofactorial ANOVA. A computational model based on the experimental setup was developed, using the finite element method to solve the electric field distribution within the chamber, in order to compare with the experimental results. We tested the resulting electric field for theoretical orientations of 0° and 31.7°; our results show an electric field magnitude of 6.31 ± 0.11 V/m with p = 0.07 and 6.74 ± 0.40 V/m with p = 0.7032, and orientations of 0° with p = 0.9997 and 31.7° with p = 0.9999, respectively. Comparing the results with the analytical solution and the computational model, we observed no statistical difference between them, which allows future biological experiments. Keywords

Multidirectional stimulation • Electric field • Superposition principle

L. A. Sá (B) · E. M. S. Bortolazzo · J. A. Costa JR · P. X. de Oliveira School of Electrical and Computer Engineering, University of Campinas, Av. Albert Einstein, No. 400, Campinas, Brazil

1 Introduction

Among cardiac arrhythmias, ventricular arrhythmias pose the greatest risk to life, because they interfere directly with the ejection of blood into the systemic circulation [1]. The only existing therapy for ventricular fibrillation (VF) is defibrillation, which stimulates a critical mass (between 75 and 90%) of the heart tissue by applying electric fields (E) to the patient's chest in order to stop fibrillation [2]. However, high-intensity defibrillatory shocks can injure, or even be lethal to [3], heart cells, which makes it necessary to research ways to stimulate the heart using a lower E. One way to minimize the intensity of the E applied to the heart would be to vary the direction of the electric field vector (E)1 through sequential application of stimuli within the refractory period, using several pairs of electrodes in different orientations, a technique known as multidirectional stimulation [4]. A previous study observed that, in a cell population, 80% of cardiomyocytes were recruited using multidirectional stimuli with three pairs of electrodes [4]. Also, there was a 30% decrease in the energy required to reverse 50% of induced VF episodes in a pig population through multidirectional stimuli with three pairs of electrodes [5], when compared with unidirectional protocols. However, the use of three or more pairs of electrodes is not always clinically viable, since time and space allocation are critical in an emergency protocol. We hypothesize that it is possible to stimulate and defibrillate a heart using multidirectional stimuli with only two pairs of electrically isolated electrodes, using the superposition principle. The superposition principle is illustrated in Fig. 1 for the case of a cylindrical chamber with two orthogonal pairs of electrodes. Stimuli are applied concomitantly to both pairs. The resulting electric field vector (ER) is equal to the vector

1 The bold font notation denotes a vector element. © Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_4


sum of the two E that would be separately generated by each pair of electrodes. Previous studies employed a cylindrical chamber to analyse electrical stimulation of isolated cells with a single pair of electrodes along the x-axis, and E at any point of the chamber can be calculated as in [6]. In that study, a concentric work area with 10% of the chamber radius was delimited, and it was verified that E varied less than 1% in magnitude and 1° in phase within it. Letting xWA and yWA be the coordinates inside the work area, E can be considered constant and given by

E(xWA, yWA) = E(0, 0) = (2·I / (b·π·σ·h)) âx    (1)

where E(0, 0) is the electric field given by [6] for a single pair of stimulation electrodes, I is the stimulation current, b is the chamber inner radius, h is the height of the solution, σ is the conductivity of the aqueous solution in the chamber, and âx is the unit vector indicating the stimulation axis. E(0, 0) has the same direction as the stimulation axis defined by the electrode line. We propose that, when two electrically insulated pairs of electrodes are used, as seen in Fig. 1, E1 is given exactly by the solution determined in [6], and E2 by the same equation applied orthogonally to the first. Assuming that the uniformity of E within the work area is not affected by the second electrode pair, and that the system is linear [6], the superposition principle states that ER within the work area can be determined by Eqs. (2) and (3):

|ER|(xWA, yWA) = √(E1(xWA, yWA)² + E2(xWA, yWA)²)    (2)

α(xWA, yWA) = arctan(E2(xWA, yWA) / E1(xWA, yWA))    (3)

where |ER|(xWA, yWA) is the magnitude of ER within the work area, E1(xWA, yWA) and E2(xWA, yWA) are the linear contributions of the electric field intensity of channel 1 and channel 2 in the work area, respectively, and α(xWA, yWA) is the angle of ER with respect to the channel 1 axis within the work area. By varying the amplitude of the current stimulus in the two insulated electrode pairs positioned in orthogonal directions, we hypothesized that it is possible to apply multidirectional stimuli with any number of stimuli, any intensity, and in any direction. Our aim in this work was to investigate whether the resulting electric fields would be homogeneous in a small work area of a circular chamber, and to compare them with a computational model developed in this work and with the analytical solution determined in [6].
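As a numerical check, Eqs. (1)–(3) can be combined in a short sketch. Feeding it the currents and solution heights later reported in Table 1 reproduces the theoretical field values quoted in the abstract (a sketch for illustration, not the acquisition code):

```python
import math

B = 0.09      # chamber inner radius (m)
SIGMA = 1.46  # Tyrode's solution conductivity (S/m)

def field_one_pair(i_amp, height):
    """Eq. (1): |E| at the chamber centre for one electrode pair."""
    return 2.0 * i_amp / (B * math.pi * SIGMA * height)

def superposed_field(i1, i2, height):
    """Eqs. (2)-(3): resultant magnitude (V/m) and angle (deg) for two
    orthogonal, electrically insulated electrode pairs."""
    e1 = field_one_pair(i1, height)
    e2 = field_one_pair(i2, height)
    return math.hypot(e1, e2), math.degrees(math.atan2(e2, e1))

# Case 1: single-channel stimulation (I2 = 0), h = 6.0 mm
print(superposed_field(7.82e-3, 0.0, 6.0e-3))     # ~ (6.31 V/m, 0 deg)
# Case 2: both channels active, h = 8.0 mm
print(superposed_field(9.46e-3, 5.86e-3, 8.0e-3))  # ~ (6.74 V/m, 31.7 deg)
```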

2 Materials and Methods

This section is divided into two complementary parts to test the superposition principle of E in a cylindrical chamber with two orthogonal electrode pairs. First, an experimental study is presented to test our hypothesis and the homogeneity of E inside a delimited area of the chamber; a special case with a single electrode pair being stimulated is also analysed. Second, a computational model is developed and simulated to validate the study under the same conditions as the experimental study. In both studies we took advantage of the relationship

E = −∇Φ    (4)

where E is the electric field in V/m, ∇ is the gradient operator, and Φ is the electric potential in volts.

Fig. 1 Top view of a cylindrical chamber. E1 and E2 (red) are the electric fields generated considering that just one pair of electrodes (small filled circles) is active. When stimuli are applied concomitantly to both pairs, ER (blue) is the resultant electric field and equals the vector sum of E1 and E2. α is the angle of ER with respect to the channel 1 axis. The work area, where ER is considered uniform, is represented (out of scale) as the shaded area

2.1 Experimental Study

The experimental setup is presented as a block diagram in Fig. 2. A cylindrical chamber made of polylactic acid, with a 9 cm radius and 1 cm height, was built on a 3D printer. The first pair of stainless steel electrodes, 1 mm in diameter and 2 cm long, was positioned at diametrically opposite points of the chamber, reaching the bottom of the chamber perpendicularly. A second pair of identical electrodes was also positioned at diametrically opposite points, but perpendicularly to the line determined by the first pair. Next, the chamber was


Fig. 2 Experimental setup for validation of the superposition principle. Two insulated voltage stimulators are connected to two pairs of orthogonally placed electrodes. Oscilloscopes 1 and 2 are used for current sensing through series resistors, and oscilloscope 3 is used for electric field mapping. The scanning points, presented as black bullets at the chamber center (out of scale), were scanned with the scanning electrode (red). A reference electrode (blue) and an auxiliary electrode for differential acquisition (purple) are also represented

partially filled with Tyrode's solution [6] at 23 °C, and the height of the solution was measured, since it is a parameter of the chamber conductivity and of ER. A low-intensity electrical voltage stimulator with two insulated channels, developed for this work, was connected to the pairs of electrodes. The stimulator channels were adjusted to provide symmetrical rectangular bipolar pulses with 25 ms duration in each phase at a frequency of 0.3 Hz, in order to avoid electrode polarization. Simultaneously, the current in each channel was continuously measured with electrically isolated oscilloscopes through 22.2 Ω series resistors. We delimited a concentric, square-shaped work area with a 10 mm edge for the experimental mapping of ER, and we assume that ER is constant inside this area and determined by Eqs. (1), (2) and (3). For the first experiment set, the current in the second pair of electrodes was adjusted to zero, so that the theoretical phase α(xWA, yWA) had a value of 0° according to Eq. (3), reducing to the single-channel stimulation case. For the second experiment set, the voltage amplitude was adjusted to obtain a measured current at each channel such that the phase α(xWA, yWA) had a theoretical value of approximately 33°, according to Eqs. (1) and (3). Next, we performed the actual E mapping. This can be achieved indirectly by mapping the electric potential (Φ), as indicated by Eq. (4). Hence, a third electrically insulated oscilloscope and three Ag-AgCl electrodes, 0.2 mm in diameter, were used to measure Φ in a 36-point matrix (black bullet points in Fig. 2). The edge of the data-collection region coincided with the 10 mm edge of the square-shaped concentric work area, and the measurement points were spaced 2 mm apart in each direction. The points were then sequentially scanned by the first Ag-AgCl

electrode, a scanning electrode coupled to a microscope stage (Zeiss, Jena, Germany) with a resolution of 0.05 mm. The second electrode was used as a reference for the oscilloscope and positioned close to the chamber edge, at a point where we observed the maximum signal amplitude. A third electrode was used to obtain a differential signal and reduce noise, and was positioned close to the reference electrode. After completing the data collection for Φ, we derived ER. Since our data are not continuous, we did not apply Eq. (4) directly; instead, we derived ER within each smallest square of scanning points from the potential measurements at its corners. First, we calculated the E component along the main diagonal by taking the negative difference of the potentials at the two main-diagonal points, divided by the distance between them. We then applied the same approach to the E component along the secondary diagonal. This operation, applied over the 36-point matrix, results in a 25-point vector matrix that can be decomposed into intensity |ER|(x,y) and phase α(x,y). The scanning process was repeated 10 times (N = 10), and each of the 25 points was considered a single group for statistical analysis. Since we expected |ER|(x,y) and α(x,y) to be uniform within the measured area, each group was analyzed using three normality tests: Kolmogorov-Smirnov, D'Agostino & Pearson, and Shapiro-Wilk. The distribution of a group was considered normal if at least two tests had p > 0.05. Under the null hypothesis, the samples of each group were compared with the expected value, given by Eqs. (1), (2) and (3), using monofactorial ANOVA. A p-value lower than 0.05 was indicative of a statistical difference. The absence of any statistical difference between the groups was indicative of uniformity of ER(x,y).
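The diagonal-difference estimation described above can be sketched as follows. It is exercised on a synthetic uniform field, so the recovered vector is known in advance; the 6 × 6 grid and 2 mm spacing match the 36-point mesh of the experiment:

```python
import math

def field_from_potentials(phi, d):
    """Estimate E in each smallest square of a potential grid phi (V), with
    spacing d (m): take differences along the two diagonals of each square,
    then decompose back into x and y components. Returns (|E|, angle in deg)
    per square."""
    n = len(phi)
    diag = d * math.sqrt(2.0)
    field = []
    for i in range(n - 1):
        row = []
        for j in range(n - 1):
            # E component along the main diagonal (towards +x, +y)
            e_d1 = -(phi[i + 1][j + 1] - phi[i][j]) / diag
            # E component along the secondary diagonal (towards +x, -y)
            e_d2 = -(phi[i + 1][j] - phi[i][j + 1]) / diag
            ex = (e_d1 + e_d2) / math.sqrt(2.0)
            ey = (e_d1 - e_d2) / math.sqrt(2.0)
            row.append((math.hypot(ex, ey), math.degrees(math.atan2(ey, ex))))
        field.append(row)
    return field

# Synthetic uniform field Ex = 5.73 V/m, Ey = 3.55 V/m on a 6x6 grid, 2 mm spacing
d, ex0, ey0 = 2e-3, 5.73, 3.55
phi = [[-(ex0 * i * d + ey0 * j * d) for j in range(6)] for i in range(6)]
mag, ang = field_from_potentials(phi, d)[0][0]
print(mag, ang)  # ~6.74 V/m at ~31.8 deg, uniform over all 25 squares
```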

2.2 Computational Study

The computational model was built using COMSOL Multiphysics 5.3 (COMSOL, Burlington, MA, USA), software for solving partial differential equations (PDE) based on the finite element method (FEM). As in our experimental setup, the model can be divided into two different electrical systems: an electrical circuit, used for electrode stimulation and current sensing, and the electric field distribution in the chamber. The major concern in modeling the proposed system is to ensure that the two stimulation channels are isolated from each other. The electrical circuit was built using a 0D space-dimension geometry. Two Electrical Circuit physics interfaces were added, one per channel, to solve the circuits using Kirchhoff's laws, each including a voltage source Vs, a sensing resistor Rsens, and an External I vs. U block, used to connect the electrical circuit to a circuit terminal in another space dimension. Each circuit has its own ground signal, due to channel isolation, and the external block is connected to the electrodes. We assume that our conductive solution is isotropic and that only the inner chamber geometry influences the electric field. Thus, we used a 2D space-dimension geometry to represent the inner stimulation chamber, just as in our experimental method. The electric field was calculated by Eq. (4) using two Electric Currents physics interfaces, one per channel, of the AC/DC module in a stationary study. In addition, the outer circle, which represents the chamber boundary, was modeled as an electric insulator, and the electrodes as terminals connected to the External I vs. U blocks of the Electrical Circuit interfaces. The mesh was generated automatically using the physics-controlled mesh builder (available in COMSOL) with the extra fine element size, which resulted in approximately 6800 triangular finite elements.
The parameter values used in the simulation were the chamber radius R = 9 cm and Tyrode's solution electrical conductivity σ = 1.46 S/m [6]; the height of the solution and the current in each channel were obtained from the experimental results.

3 Results

3.1 Experimental Results

We performed the mapping first for 0° and then for 33°. Due to the limited precision of the electric stimulator, we were unable to fine-tune the phase to 33°, resulting in a phase of 31.7°. For each case, the measured and adjusted current at each channel, the measured height of the solution, the p-values from the statistical analysis of the |ER|(x,y) and α(x,y) distributions, and the theoretical values of |ER| and α are presented in Table 1. The mean ± standard error of the mean (SEM) of the electric field magnitude and phase at each point of our mesh, derived from the mapping explained in the section Experimental Study, are presented in Fig. 3.

3.2 Computational Results

We measured the sensing current Isens on the sensing resistor of each channel in order to determine the voltage sources Vs1 and Vs2. The voltage sources were calculated by:

Vs = (Rsol + Rsens) Isens    (5)

where Rsol is the solution resistance and Rsens is the sensing resistor. The current measurement was done with a very small sensing resistance, Rsens = 100 μΩ, so that the voltage drop on Rsens is negligible and Eq. (5) can be approximated by Vs = Isens Rsol. Thus, a first simulation for each case was run to determine Rsol for the measured height of solution, and then Rsol and Isens were used to estimate the voltage source. As mentioned in the section Computational Study, two Electric Currents physics interfaces were added to isolate the channels, and the resulting electric field |ER| and phase α were calculated as in Eqs. (2) and (3), respectively, where E1 and E2 are the electric fields due to channel 1 and channel 2, respectively. The computational results are summarized in Table 2.
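The voltage-source estimation of Eq. (5) can be sketched as follows. The function name is ours, and the example values are taken from Tables 1 and 2; this is an illustrative sketch, not the authors' code.

```python
# Sketch of the voltage-source estimation, Eq. (5): Vs = (Rsol + Rsens) * Isens.
# With Rsens = 100 uOhm the drop across the sensing resistor is negligible,
# so Vs ~= Isens * Rsol.
def estimate_vs(i_sens, r_sol, r_sens=100e-6):
    """Return the voltage source Vs (V) for one channel."""
    return (r_sol + r_sens) * i_sens

# Channel x at alpha = 0 deg: Isens = 7.82 mA, Rsol = 441.42 Ohm (Tables 1, 2)
vs_x = estimate_vs(7.82e-3, 441.42)  # ~3.45 V, matching Table 2
```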

Table 1 Experimental results

Parameter (Unit)                     α = 0°         α = 31.7°
Height of solution h (mm)^a          6.0 ± 0.1      8.0 ± 0.1
Current in x-axis Isens_x (mA)       7.82           9.46
Current in y-axis Isens_y (mA)       0              5.86
ANOVA p-value for magnitude          0.069335       0.9997
ANOVA p-value for phase              0.7032         0.9999
Electric field |ER| (V/m)^a,b        6.31 ± 0.11    6.74 ± 0.40

^a Measured value ± accuracy of the measuring carriage
^b Theoretical values given by Eqs. (1) and (3) ± accuracy derived from the height accuracy

4 Analysis of the Electric Field Behavior on a Stimulation …


Fig. 3 Experimental results. The red dashed line represents the theoretical value. a and b show the electric field magnitude and phase for α = 0°, respectively; c and d show the electric field magnitude and phase for α = 31.7°, respectively

Table 2 Computational results

Parameter (Unit)                            α = 0°    α = 31.7°
Chamber resistance in x-axis Rch_x (Ω)      441.42    331.07
Chamber resistance in y-axis Rch_y (Ω)      438.40    328.79
Voltage source in x-axis Vs_x (V)           3.45      3.13
Voltage source in y-axis Vs_y (V)           0         1.92
Electric field |ER| (V/m)^a                 6.31      6.74
Phase α (°)^a                               0         31.7

^a Numerically calculated by Eq. (4)

4 Discussion

To validate the superposition principle in a circular chamber with two insulated pairs of electrodes positioned in orthogonal directions, we mapped E in the work area and compared the result with the analytical solution obtained in [6]. Since the work area we defined is smaller than in [6], E should vary by less than 1% in magnitude and 1° in phase. This was confirmed in our computational study, where E varied by less than 0.31%. Further, the simulation outcomes in the work area were identical to the analytical solution for electric field intensity and phase. Moreover, we measured 36 points inside the chamber for stimulation at 0° and 31.7°. The expected values in the work area were 6.31 V/m for α = 0° and 6.74 V/m for α = 31.7°. The one-way ANOVA showed no statistical difference between expected and measured values, which indicates that the superposition principle holds when stimuli are applied on two pairs of electrodes. A significant dispersion in the experimental values was noticed, and some local means diverged by more than 1% in magnitude or 1° in phase. These errors were noticed in both the 0° and the 31.7° setups. However, previous work validated E within a work area of 10% of the chamber radius and showed no differences in E greater than 1% or 1° for a single stimulation pair of electrodes [6]. Since the errors were also noticed in the 0° setup, they should not be attributed to the superposition principle. We obtained a p-value very close to 0.05 in the 0° setup, which is the case where the stimulus is applied with just one pair of electrodes. The maximum variation in the 0° direction, approximately 14% in magnitude, and the fact that more points varied by more than 1% in magnitude, may explain why the p-value is so close to 0.05, which did not occur in the 31.7° direction. Despite this, the statistical analysis indicated no significant difference between the data. The errors can be attributed to several factors, such as electrode positioning in the reruns of the experiment, electrode polarization during the experiment, error in determining the height of the solution, the assumption of an isotropic solution, as well as the inherent errors of the instrumentation. Therefore, the absence of statistical difference shows that the applied model is acceptable, with the caveat that the setup developed for this experiment requires improvement to reduce the standard error.

L. A. Sá et al.

5 Conclusions

From this work, one can observe that it is possible to apply multidirectional stimuli with only two pairs of electrodes. The electric field was mapped analytically, numerically and experimentally, applying simultaneous stimuli in two different directions, and we can conclude that the superposition principle was validated inside the work area for 0° and 31.7°. We believe that these results support the hypothesis that it is possible to apply a stimulus of any magnitude in any direction with only two pairs of electrodes. Although the experimental setup needs improvement, we can conclude that our hypothesis is valid, and we can further investigate and develop a setup that allows analyzing the electric field at any angle within the four quadrants for future biological experiments.

Acknowledgements The authors would like to thank CNPq (Proc. N 141289/2018-0) and CAPES (Proc. N 88887.470307/2019-00 and 88887.497724/2020-00) for their financial support.

Conflict of Interest The authors declare that they have no conflict of interest.

References

[1] Guyton AC (2006) Tratado de fisiologia médica. Elsevier Brasil
[2] Li Y, Bihua C (2014) Determinants for more efficient defibrillation waveforms. In: Antonino G (ed) Anaesthesia, pharmacology, intensive care and emergency A.P.I.C.E. Springer Milan, Milano, pp 203–218
[3] Oliveira PX, Bassani RA, Bassani JWM (2008) Lethal effect of electric fields on isolated ventricular myocytes. IEEE Trans Biomed Eng 55:2635–2642
[4] Fonseca AVS, Bassani RA, Oliveira PX, Bassani JWM (2012) Greater cardiac cell excitation efficiency with rapidly switching multidirectional electrical stimulation. IEEE Trans Biomed Eng 60:28–34
[5] Viana MA, Bassani RA, Petrucci O, Marques DA, Bassani JWM (2016) System for open-chest, multidirectional electrical defibrillation. Res Biomed Eng 32:74–84
[6] Bassani RA, Lima KA, Gomes PAP, Oliveira PX, Bassani JWM (2006) Combining stimulus direction and waveform for optimization of threshold stimulation of isolated ventricular myocytes. Physiol Meas 27:851

Development of a Non-rigid Model Representing the Venous System of a Specific Patient M. C. B. Costa, S. D. F. Gonçalves, T. C. Lucas, M. L. F. Silva, C. M. P. Junior, J. Haniel, and R. Huebner

Abstract


Regarding the cardiovascular system, in-vitro studies appear as an alternative for the experimental assessment of blood flow parameters, in order to validate numerical simulations through particle image velocimetry (PIV) or laser Doppler velocimetry (LDV) and to assist health professionals in clinical procedures. The aim of this study is to develop a methodology for manufacturing life-size optical silicone models of blood vessels for the purposes described. A process similar to casting was used: a bipartite mold and a core were manufactured by 3D printing after the geometry was acquired by computed tomography, and optical silicone was used to manufacture the model. After the silicone cured, it was observed that the model showed great transparency and compliance. This procedure proved to be a simple and fast way to manufacture optical silicone specimens of blood vessels. The results obtained were suitable for the purpose of the study; however, the final model still lacks mechanical characterization for specific applications.

Keywords

Non-rigid anatomical model • Optical silicone • Life size • Cardiovascular system • 3D printing

M. C. B. Costa (✉) · S. D. F. Gonçalves · M. L. F. Silva · C. M. P. Junior · J. Haniel · R. Huebner
Departamento de Engenharia Mecânica, Laboratório de Bioengenharia, Universidade Federal de Minas Gerais, Avenida Antônio Carlos, 6627, Belo Horizonte, Brazil
T. C. Lucas
Departamento de Enfermagem, Universidade Federal dos Vales do Jequitinhonha e Mucuri, Diamantina, Brazil



1 Introduction

The standard hemodynamics in blood vessels can be altered by features of the flow itself, for example local vortex generation and turbulence. The flow can also be disturbed by geometric changes of the lumen or by pathological conditions, such as aneurysm formation, among other factors. Moreover, the insertion of instruments and clinical devices, such as central venous catheters (CVC) and endoprostheses, can also cause these disturbances. Furthermore, there may be a variation in the shear stress to which blood cells are subjected, changing the hemolytic and thrombogenic potential, which can cause more severe adverse events, such as thrombosis [1, 2]. Numerical models based on computational fluid dynamics (CFD) can be used to predict these changes [2]. This tool has the advantage of being non-invasive and of relatively low cost. The use of CFD has provided useful information to help understand atherosclerosis formation by computing the velocity field and the associated wall shear stress in the cardiovascular system [3]. However, it is necessary to validate the numerical models by experimental techniques. In-vivo experiments have some disadvantages, for example issues involving patient integrity, in the case of invasive techniques, and, in magnetic resonance imaging (MRI) procedures, the requirement that the patient remain immobile during the measurement. Besides, it is not always possible to measure the necessary physical quantities [3, 4]. Thus, in-vitro studies, which use materials that mimic the vascular tissue behavior, appear as an alternative [5, 6]. These studies can be performed using a pulsatile flow pump, to better simulate the flow, and a fluid with the same or similar properties as blood (such as a solution of water and glycerin) [7]. This is done in order to compare the results with the numerical ones for a better understanding of the hemodynamic behavior [8]. Recent studies have used rigid models to perform in-vitro experiments with several objectives.
These models have the advantage of being transparent, which

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_5


allows the visualization and assessment of clinical device prototypes inserted in the specimen. Furthermore, the use of optical techniques to assess the blood flow field is facilitated. Nevertheless, from the engineering point of view, rigid models are only suitable for fluid dynamics studies, because they do not simulate the behavior of the vessel wall under blood flow [9–11]. There are many different experimental techniques for measuring flow quantities; however, the most systematically used are optical ones, such as particle image velocimetry (PIV) and laser Doppler velocimetry (LDV), and acoustic ones, such as ultrasound imaging velocimetry (UIV) [12]. Optical experiments require optical access to the geometry, that is, the manufacturing material must be transparent and must not exhibit optical distortion, like optical silicone or polyurethane [3]. On the other hand, acoustic tests can employ latex and polyvinyl alcohol, since they do not require optical access [5, 13]. Moreover, these specimens are being used by healthcare professionals as well. Studies on these experimental models have been carried out with the aim of simulating surgical procedures, such as aortic aneurysm repair, and testing clinical devices, like a robotic catheter for aortic valve implantation [14, 15]. The physical replica of a patient-specific anatomy can be used for the training of novice surgeons in robotic surgery. It is fundamental that the model correctly resembles the morphological properties of the vessel [16]. The present study aims to develop a protocol to manufacture simplified models in optical silicone, motivated by the increasing need for anatomical geometric models of the circulatory system. The model used was composed of the right internal jugular vein, left internal jugular vein and superior vena cava.

Fig. 1 Image acquisition methodology from CT scan


2 Materials and Methods

The construction of the blood vessel replica followed the steps described below.

2.1 Geometric Model

The geometry used in this work was obtained by the methodology proposed in [17]. Initially, the geometry was acquired by computed tomography (CT) of a healthy male patient, 74 years old. The procedure was approved by the Comitê de Ética em Pesquisa/Universidade Federal de Minas Gerais (CEP-UFMG) under process number CAAE 02405712.5.1001.5149. The DICOM (Digital Imaging and Communications in Medicine) files were imported into InVesalius 3®, a medical image processing software, and the file data were processed to extract only the region of interest. Figure 1 illustrates the procedure, with the region of interest highlighted in green. The next step was to edit the geometry in Stereolithography (STL) format in Autodesk Meshmixer®. This step consisted of smoothing and removing some imperfections on the geometry surface, to improve the surface quality. The subclavian and external jugular veins were removed to reduce the complexity of the model, since they would be difficult to reproduce on an experimental bench. The treated geometry is shown in Fig. 2a. With the aim of facilitating the manufacturing process by the silicone casting method (a process similar to casting, but using silicone), the edited geometry of the venous system was simplified in order to make it symmetrical in relation to

Development of a Non-rigid Model Representing …

the XZ plane, represented by Fig. 2b. In this figure, the original geometry is indicated in orange and the simplified geometry in green. The ANSYS SpaceClaim software was used to perform this procedure.

2.2 Bipartite Molds Manufacturing

The geometry acquired by the process described above was used to generate the cavity of the bipartite mold, that is, the external surface of the specimen. In ANSYS SpaceClaim, cylindrical connections were inserted at the inlets and outlet, with standard sizes of ½″ at the ends of the internal jugular veins and 1″ at the end of the superior vena cava, in order to fix the model on experimental benches. In addition, fittings were added to keep the core fixed and aligned in the mold during the silicone injection. Figure 2c shows the final mold. For mold manufacturing, a three-dimensional printer based on Fused Deposition Modeling (FDM) technology, model P350 (3D Factor), was used. The maximum print size is 350 × 350 × 350 mm and the resolution in the vertical direction is 0.01 mm; the printing tolerance is 0.25 mm in all directions. The final molds, in STL format, were converted to G-code by the Slic3r software, and the molds were printed in polylactic acid (PLA). After printing, the cavities to be filled with silicone were sanded and polished to reduce surface roughness, giving the specimen better transparency and avoiding optical distortion in optical tests or flow field visualization techniques. Figure 2d shows the printed mold.

2.3 Core Manufacturing

To manufacture the core, an inward offset of 3.0 mm (the mean wall thickness of the analyzed system) was applied to the geometry acquired in Sect. 2.1. The surface generated by this offset corresponds to the internal surface of the specimen, that is, the external surface of the core. Cylindrical connections, 3.0 mm smaller in diameter than those used to create the molds, and fittings were also added to the core. The core was manufactured similarly to the mold: the same printer was used and the STL files were converted to G-code. However, polyvinyl alcohol (PVA), a water-soluble material, was employed, chosen for the ease of removing it from the model after the silicone cured. After printing, the core was also sanded and polished to ensure that the internal surface of the model has low roughness. Figure 2d shows the printed core.

2.4 Silicone Preparation

In this study, the silicone used for the casting procedure was polydimethylsiloxane (PDMS) from the Sylgard 184 kit (Dow Corning). This material was chosen due to its high degree of transparency. The curing agent provided with the kit was added in a 10:1 proportion to promote the curing reaction. The products were mixed at room temperature. After preparation, the mixture was placed in a vacuum chamber to eliminate the bubbles formed during the process. Figure 2e shows the silicone before and after the vacuum chamber.

2.5 Silicone Injection

First, the core was correctly positioned in the mold, which was then closed and properly sealed. The silicone was poured into the injection channel of the mold, filling the cavity solely by gravity. The mold was left to stand for approximately 24 h, the silicone curing time.

3 Results

After the curing process and the removal of the PVA core by immersing the body in water, the specimen in optical silicone was obtained. Figure 3 shows the results. The model showed an excellent degree of transparency and a great surface finish; the body also showed good elasticity and compliance.

4 Discussion

The manufacturing of life-size specimens is a useful tool in the field of medicine, because they can be used for trauma-correction training and surgical simulations. These models help professionals to better understand the patient's anatomy [12]. Furthermore, the specimens can be an alternative for in-vitro experiments, as a way of validating numerical computer simulations. The transparency and surface finish achieved in the model (Fig. 3) allow medical training to take place in a more didactic way. Visualization of the inside of the vessel facilitates the positioning of clinical devices; in addition, it becomes possible to monitor the behavior of a device inserted in the model. Moreover, the transparency is suitable for experimental methods based on optical principles, such as PIV or LDV, to obtain flow field parameters in the study region. Tests that use only qualitative analysis, such as the use of dyes, can also be performed.

Fig. 2 a Edited geometry after CT scan acquisition. b Simplification of the geometry. c Three-dimensional design of the bipartite mold. d Bipartite mold and core already printed. e Removing bubbles in the preparation of the optical silicone


With respect to elasticity and compliance, the model gave satisfactory results. The specimen showed technical features appropriate for the practices described above. Thus, the model obtained allows professionals who work with numerical simulations to validate their studies from the fluid dynamics point of view. Nevertheless, there is a need for mechanical characterization to determine a relationship with the real blood vessel wall, because the material used in the manufacture of the model may not have the actual hyperelastic characteristics of vascular tissue. During surgical training, incisions in the specimen can also be properly simulated. In the manufacturing procedure there is a possibility of leakage during the injection of the silicone into the mold and of bubble formation during the mixing of the silicone with the curing agent. The first problem can be corrected by making the seal more efficient and improving the mold fitting; the second can be addressed by careful preparation of the silicone and use of the vacuum chamber. Finally, the methodology described presents a limitation: it requires the geometry to be symmetrical in relation to a plane to manufacture the bipartite molds, that is, there cannot be three-dimensional anatomical curvatures.

5 Conclusion

Computational fluid dynamics analysis and in-vitro tests appear as alternative tools to assess changes in the physical parameters of blood flow due to the insertion of clinical devices, like CVCs and endoprostheses, and to pathological conditions, such as aneurysm formation. In this context, it is necessary to manufacture models that represent the studied blood vessel and mimic the vascular tissue behavior. Throughout this research, several tests were carried out to develop a methodology capable of manufacturing a specimen in optical silicone, in order to achieve the proposed objective. Using the silicone casting technique together with 3D printing, the manufacture of an optical silicone replica of a blood vessel proved to be viable and inexpensive. The developed model provides the fidelity needed to validate numerical fluid dynamics studies. In addition, it allows the improvement of surgical procedures, since such models are used to assist surgeons during their training. Finally, tests can be carried out to check whether the presented methodology can be applied to other materials, such as polyurethane, and to other vessels of the circulatory system.

Fig. 3 a Result of the process after curing the silicone. b Dissolution of the PVA core. c Final model of the optical silicone

Acknowledgements The authors would like to thank CNPq (401217/2016-7) and FAPEMIG (APQ-02735-17) for their support in this project.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Ha H, Ziegler M, Welander M et al (2018) Age-related vascular changes affect turbulence in aortic blood flow. Front Physiol 9:36. https://doi.org/10.3389/fphys.2018.00036
2. Lucas T, Tessarolo F, Jakitsch V et al (2014) Blood flow in hemodialysis catheters: a numerical simulation and microscopic analysis of in vivo-formed fibrin. Artif Organs 38(7):556–565. https://doi.org/10.1111/aor.12243
3. Lermusiaux P, Leroux C, Tasse J et al (2001) Aortic aneurysm: construction of a life-size model by rapid prototyping. Ann Vasc Surg. https://doi.org/10.1007/s100160010054
4. Arcaute K, Palafox G, Medina F et al (2003) Complex silicone aorta models manufactured using a dip-spin coating technique and water-soluble molds. El Paso, Texas
5. Chee A, Ho Chung, Yiu B et al (2016) Walled carotid bifurcation phantoms for imaging investigations of vessel wall motion and blood flow dynamics. IEEE Trans Ultrason Ferroelectr Freq Control 63(11). https://doi.org/10.1109/TUFFC.2016.2591946
6. Le T, Troolin D, Amatya D et al (2013) Vortex phenomena in sidewall aneurysm hemodynamics: experiment and numerical simulation. Ann Biomed Eng. https://doi.org/10.1007/s10439-013-0811-9
7. Liepsch D (2002) An introduction to biofluid mechanics: basic models and applications. J Biomech 35:415–435
8. Arcaute K, Wicker R (2008) Patient-specific compliant vessel manufacturing using dip-spin coating of rapid prototyped molds. El Paso, Texas

9. Amili O, Golzarian J, Coletti F (2019) In vitro study of particle transport in successively bifurcating vessels. Ann Biomed Eng. https://doi.org/10.1007/s10439-019-02293-2
10. Taher F, Falkensammer J, McCarte J (2017) The influence of prototype testing in three-dimensional aortic models on fenestrated endograft design. Soc Vascular Surgery. https://doi.org/10.1016/j.jvs.2016.10.108
11. Geoghegan P, Jermy M (2017) A PIV comparison of the flow field and wall shear stress in rigid and compliant models of healthy carotid arteries. J Mech Med Biol 17(4). https://doi.org/10.1142/S0219519417500415
12. Drost S, Alam N, Houston J et al (2017) Review of experimental modelling in vascular access for hemodialysis. Cardiovasc Eng Technol 8(3). https://doi.org/10.1007/s13239-017-0311-4
13. Chaudhuri A, Ansdell L, Richards R et al (2003) Non-axisymmetrical (life-like) abdominal aortic aneurysm models: a do-it-yourself approach. J Endovasc Ther 10:1097–1100
14. Yuan D, Luo H, Yang H et al (2017) Precise treatment of aortic aneurysm by three-dimensional printing and simulation before endovascular intervention. Sci Rep. https://doi.org/10.1038/s41598-017-00644-4
15. Kvasnytsia M, Famaey N, Böhm M et al (2016) Patient specific vascular benchtop models for development and validation of medical devices for minimally invasive procedures. J Med Rob Res 1(3). https://doi.org/10.1142/S2424905X16400080
16. Marconi S, Negrello E, Mauri V et al (2019) Toward the improvement of 3D-printed vessels' anatomical models for robotic surgery training. Int J Artif Organs. https://doi.org/10.1177/0391398819852957
17. Machado B, Haniel J, Junior C et al (2018) Modelo Complexo Não-Rígido de Tamanho Natural de Grandes Vasos Usando Método de Casting e Impressão em 3D. Águas de Lindóia, São Paulo

Comparative Study of Rheological Models for Pulsatile Blood Flow in Realistic Aortic Arch Aneurysm Geometry by Numerical Computer Simulation M. L. F. Silva, S. D. F. Gonçalves, M. C. B. Costa and R. Huebner

Abstract

In numerical simulations of blood flow in the aorta, shear rates tend to be higher than 100 s−1, making blood behave as a Newtonian fluid with constant dynamic viscosity. This study proposes a comparative analysis, via computational fluid dynamics (CFD), of the Newtonian, Carreau-Yasuda, Power-Law and Casson rheological models for a realistic geometry of the aortic arch obtained by computed tomography (CT) and adapted with an aneurysm. It is shown that there are instants of time throughout the cardiac cycle at which the blood exhibits non-Newtonian behavior. This behavior leads to a considerable variation in the dynamic viscosity that can influence the flow hemodynamics. It was also possible to detect that the effective viscosity varies over the cardiac cycle, including in the Newtonian model, which suggests that the turbulent viscosity is variable. In general, the Carreau-Yasuda and Power-Law models show similar behavior, whereas the Casson model tends to be closer to the Newtonian model.

Keywords

Rheology • Blood flow • Aneurysm • Non-newtonian models • Aortic arch

1 Introduction

From a rheological point of view, blood has non-Newtonian behavior of the viscoplastic type. It has the characteristics of pseudoplastic fluids, whose apparent viscosity decreases with increasing deformation rate, and, in addition, it requires an initial shear stress (yield stress) to start flowing [1]. However, the value of the yield stress is so small that blood throughout the circulation behaves as a fluid and not a solid [2].

M. L. F. Silva (✉) · S. D. F. Gonçalves · M. C. B. Costa · R. Huebner
Department of Mechanical Engineering, Universidade Federal de Minas Gerais, Avenida Antônio Carlos, 6627, Belo Horizonte, Brazil

At low shear rates, blood viscosity increases due to the "rouleaux" effect, which consists of the aggregation of red cells. On the other hand, above approximately 100 s−1, the aggregates are broken up by the velocity gradient and blood assumes Newtonian fluid behavior [3]. Although the shear rate in large arteries is generally high, certain aortic conditions induce flow disturbances. These generate regions of recirculation and stagnation accompanied by a significant reduction in the shear rate, which may induce non-Newtonian behavior [4]. The shear stress at the artery wall also behaves differently according to the rheological model [5]. This variation in shear stress modifies the dynamics of the arterial wall in aortic aneurysms. Bilgi and Atalık [6] demonstrated by fluid-structure interaction that assuming blood to be a Newtonian fluid leads to significant differences in the hemodynamic behavior in aneurysms when compared to the Carreau model. These differences were observed in all the structural models of the arterial wall analyzed by Bilgi and Atalık [6]: rigid, linearly elastic and hyperelastic. Although there are not many studies on blood flow rheology in vivo [7], different investigations through numerical simulations have reached contradictory conclusions; therefore, the difference between Newtonian and non-Newtonian models has sometimes been considered subjective [4,8–11]. The research developed here aims to compare blood flow modeled numerically via computational fluid dynamics (CFD) in a realistic geometry of an aortic arch aneurysm, from the perspective of a Newtonian fluid and different non-Newtonian models.

2 Materials and Methods

The geometric model was created from image reconstruction of a computed tomography (CT) scan of a healthy patient. The procedure was approved by the following ethics committee: Comitê de Ética em Pesquisa/Universidade Federal de Minas Gerais (CEP-UFMG) process number CAAE

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_6


02405712.5.1001.5149. After obtaining the solid, a fusiform aneurysm was created so as to exceed the diameter of the aortic arch by at least 50%. The geometry, as well as a three-dimensional reconstruction step by means of digital image processing, can be seen in Fig. 1a, b. Boundary conditions adapted from Alastruey et al. [12] were adopted. In the ascending aorta (AA), the pulse wave velocity was imposed. In the brachiocephalic trunk (BT), left common carotid artery (LCCA), left subclavian artery (LSA) and thoracic aorta (TA), pulse pressures were imposed. The cardiac cycle considered has a duration of 0.8 s, which represents a heart rate of approximately 72 bpm; 320 time steps of 0.0025 s were used. The boundary conditions are shown in Fig. 1c. Two cardiac cycles were simulated and the first one was discarded. The mesh convergence test was carried out according to the ASME V&V 20 [13] standard. The mesh selected for the study has 1,085,072 elements and 360,988 nodes, refined in the aortic arch in the region of the aneurysm and branches. Four rheological models were selected for the comparative study, namely Newtonian, Power-Law, Casson and Carreau-Yasuda. The rheological properties were the same as used by Gonçalves et al. [14] and Shibeshi and Collins [3]. Those


properties are presented in Table 1, where μ is the dynamic viscosity (Pa·s), μ∞ the maximum viscosity (Pa·s), μ0 the minimum viscosity (Pa·s), n the Power-Law index, λ the time constant (s), k the consistency index (Pa·s^n), τ0 the yield stress (Pa) and η Casson's rheological constant. ANSYS Fluent® 2019 R2 software was used to solve the time-dependent continuity and momentum equations. The turbulence model adopted was the k-ω Shear Stress Transport (SST). For the spatial discretization of the momentum and continuity equations, the second-order upwind method was used. For the temporal discretization, the implicit second-order algorithm was adopted. The pressure-based coupled algorithm was adopted to solve the pressure-velocity coupling, and the second-order scheme was used to discretize the pressure equation. Finally, the least-squares cell-based method was used for the gradients. As the convergence condition, a residual value of 10−4 was used for the velocity components, continuity, turbulence kinetic energy (k) and specific dissipation rate (ω). The maximum value of the Courant-Friedrichs-Lewy (CFL) number found in the numerical simulation was 175.93 and the average was 14.31. Although the CFL was high, numerical instabilities were not found, because an implicit temporal discretization, which is unconditionally stable, was used.
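The Courant number quoted above follows the standard convective definition, CFL = u·Δt/Δx. The sketch below is only illustrative: Δt = 0.0025 s is the paper's time step, but the velocity and cell size are hypothetical values, since per-cell data are not reported.

```python
# Convective Courant-Friedrichs-Lewy number: CFL = u * dt / dx.
# dt = 0.0025 s is the paper's time step; u and dx below are
# hypothetical illustrative values, not taken from the mesh.
def cfl(u, dt, dx):
    return u * dt / dx

example = cfl(1.0, 0.0025, 0.001)  # 2.5 for u = 1 m/s, dx = 1 mm
```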

3 Results

The results for the rheological behavior of the simulated models in different planes can be seen in Fig. 2. The planes were numbered (P1, P2, ..., P14) in ascending order from the top of the geometry. To allow a better comparison of the images, the structural similarity index (SSIM) [15], an index that quantifies the similarity between two images, is also shown. The comparison was made in relation to the Newtonian model. SSIM is calculated by Eq. (1):

$$\mathrm{SSIM}(x, y) = \frac{\left(2\mu_x \mu_y + C_1\right)\left(2\sigma_{xy} + C_2\right)}{\left(\mu_x^2 + \mu_y^2 + C_1\right)\left(\sigma_x^2 + \sigma_y^2 + C_2\right)} \quad (1)$$

where μx, μy, σx, σy and σxy are the local means, standard deviations and cross-covariance for images x and y, and C1 and C2 are constants included to avoid instabilities [15]. Figure 2a shows the shear rate of the Newtonian model at the instants of time (t) of 0.10, 0.29, 0.40 and 0.80 s. These instants correspond, respectively, to the maximum velocity, which occurs at systole close to the point of maximum pressure; the moment of closure of the aortic valve; approximately half of the diastolic phase; and the end of the cardiac cycle.

Fig. 1 a 3D reconstruction stage; b Geometric model; c Velocity and pressure pulses
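Equation (1) can be sketched in code as follows (a simplified single-window version; the reference method of Wang et al. [15] evaluates SSIM over local sliding windows and averages the resulting map, and the constants C1 and C2 below follow their suggested defaults):

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM of Eq. (1) between two images x and y."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants, per [15]
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den
```

Identical images yield an SSIM of 1; any difference lowers the index.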

6 Comparative Study of Rheological Models for Pulsatile Blood …


Table 1 Rheological properties of the models selected for the study

Model            μ∞      μ0      n       λ       k       μ       τ0      η
Newtonian        –       –       –       –       –       0.0035  –       –
Power-Law        0.0560  0.0035  0.7080  –       0.0170  –       –       –
Casson           –       –       –       –       –       0.0035  0.0050  0.0035
Carreau-Yasuda   0.0560  0.0035  0.3568  3.3130  –       –       –       –
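Using the Table 1 parameters, the four viscosity laws can be sketched as functions of the shear rate γ̇ (a sketch under two assumptions not stated in Table 1: the Yasuda exponent is taken as a = 2, the plain Carreau form, and the two listed viscosity limits are used to clamp the Power-Law viscosity, as solver implementations commonly do):

```python
import numpy as np

MU_MAX, MU_MIN = 0.0560, 0.0035  # Pa*s, viscosity limits from Table 1

def newtonian(gdot):
    """Constant dynamic viscosity, independent of shear rate."""
    return np.full_like(np.asarray(gdot, dtype=float), 0.0035)

def power_law(gdot, k=0.0170, n=0.7080):
    """mu = k * gdot**(n-1), clamped to the Table 1 viscosity limits."""
    g = np.asarray(gdot, dtype=float)
    return np.clip(k * g ** (n - 1.0), MU_MIN, MU_MAX)

def carreau_yasuda(gdot, lam=3.3130, n=0.3568, a=2.0):
    """mu = mu_min + (mu_max - mu_min) * (1 + (lam*gdot)**a)**((n-1)/a)."""
    g = np.asarray(gdot, dtype=float)
    return MU_MIN + (MU_MAX - MU_MIN) * (1.0 + (lam * g) ** a) ** ((n - 1.0) / a)

def casson(gdot, tau0=0.0050, eta=0.0035):
    """mu = (sqrt(eta) + sqrt(tau0/gdot))**2; valid for gdot > 0."""
    g = np.asarray(gdot, dtype=float)
    return (np.sqrt(eta) + np.sqrt(tau0 / g)) ** 2
```

All three non-Newtonian laws are shear-thinning: their viscosity falls toward the Newtonian value of 0.0035 Pa·s as the shear rate grows past roughly 100 s⁻¹.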

Figure 2b shows how the dynamic viscosity varies in each model in the aortic arch, in the P5, P6, P7 and P8 planes, at the instants of time of 0.1 and 0.8 s, which are the instant of peak systolic velocity and the end of the cycle, respectively. These two instants differ from each other in their high and low shear rates, respectively. Figure 2c shows the effective viscosity, which is the sum of the dynamic viscosity and the turbulence viscosity, in the same planes at the same instants. Finally, Fig. 2d shows the velocity field at low shear rates (t = 0.8 s). To further demonstrate the behavior of the different rheological models, Fig. 3 shows how the shear rate, dynamic viscosity, effective viscosity, shear stress, velocity and pressure vary in the median plane of the aortic arch (P7). The values correspond to the mass-flow-weighted average. Finally, Table 2 shows the mean absolute percentage deviation (MAPD) over time of each model in relation to the Newtonian model for the curves in Fig. 3. Table 2 also shows the maximum absolute difference (MAD) and the instant of time (tmad) of that difference. MAPD is calculated by Eq. (2):

$$\mathrm{MAPD} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{P_{nN_i} - P_{N_i}}{P_{nN_i}} \right| \quad (2)$$

where PnNi and PNi are any property of the non-Newtonian and the Newtonian model at instant of time i, respectively, and n is the number of time steps.
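Equation (2), together with the MAD and tmad reported in Table 2, can be sketched as (hypothetical helper; the property series would be sampled from the simulation at each time step):

```python
import numpy as np

def mapd_mad(p_nn, p_newt, t):
    """Eq. (2) plus the maximum absolute difference (MAD) and its time.

    p_nn:   property series of a non-Newtonian model over the cycle
    p_newt: the same property for the Newtonian model
    t:      the sampled instants of time
    """
    p_nn, p_newt, t = (np.asarray(a, dtype=float) for a in (p_nn, p_newt, t))
    mapd = 100.0 * np.mean(np.abs((p_nn - p_newt) / p_nn))  # in percent
    diff = np.abs(p_nn - p_newt)
    i = int(np.argmax(diff))
    return mapd, diff[i], t[i]
```
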

4 Discussion

It is observed that when the velocity increases, the shear rate also tends to increase. At these instants of time, part of the aorta shows values greater than 100 s−1, as in the systolic phase. On the other hand, when the velocity is low, or even in certain regions during systole, the shear rate is less than 100 s−1, which suggests non-Newtonian behavior.

Figure 2b shows that at low velocities the dynamic viscosity tends to vary more than in regions with higher velocities, a behavior that cannot be captured by the Newtonian model. This behavior is also seen in Gonçalves et al. [14] for blood flow in venous catheters that modify the shear rate. It is also shown in Table 2, where the maximum absolute difference between the Newtonian and the non-Newtonian models occurs at points of low shear rate. It is important to note that at high shear rates there were also local variations in viscosity, although smaller. However, in regions where the viscosity is predominantly constant, its value is higher than in the Newtonian model. This trend is seen in the three non-Newtonian models analyzed.

At high shear rates, the effective viscosity also tends to vary less, even if at different values for the different models. At low shear rates, the effective viscosity increases and varies locally, which shows that in addition to the increase in dynamic viscosity there is an increase in turbulence viscosity. From the point of view of effective viscosity, Casson's model is the closest to the Newtonian one, while the Carreau-Yasuda and Power-Law models agree more closely with each other.

Analyzing the hemodynamic behavior of the flow, Fig. 2d shows that in regions of low shear rate and velocity the velocity field itself can be slightly altered by the rheological model; in other words, blood rheology can modify the fluid dynamics of the flow. The results for the velocity field are consistent with those found by Karimi et al. [5]. Table 2 shows that the maximum absolute difference coincides with low shear rates; at other instants of time, especially at high shear rates, the velocity difference is small. In average terms, as shown in Fig. 3, the shear rate in the selected plane is always less than 100 s−1, which suggests that the phenomenon cannot be modeled as Newtonian.
In fact, the dynamic viscosity varies throughout the cardiac cycle, at values different from the Newtonian one. This suggests that, for greater agreement with the non-Newtonian models, the viscosity must differ from the usual value of 0.0035 Pa·s.


M. L. F. Silva et al.

Fig. 2 a Strain rate in aorta; b Dynamic viscosity at t = 0.1 s and t = 0.8 s; c Effective viscosity at t = 0.1 s and t = 0.8 s; d Velocity field at t = 0.8 s. NT: Newtonian; CY: Carreau-Yasuda; PL: Power-Law; CS: Casson. BT: brachiocephalic trunk; LCCA: left common carotid artery; LSA: left subclavian artery; TA: thoracic aorta; AA: ascending aorta


Fig. 3 Average change in the P7 plane of the shear rate, dynamic viscosity, effective viscosity, shear stress, velocity, and pressure over the cardiac cycle

It is also interesting to note that the highest magnitude of the shear rate occurs at the point of maximum velocity in the plane during systole, as expected. From the hemodynamic point of view, however, the flow behavior does not undergo major changes under the Newtonian assumption, as shown in the velocity and pressure curves. On average, the biggest differences do not occur at the same instants of time at which the viscosity differences are greatest. Figure 3 also shows that the increase in shear rate leads to a reduction in viscosity, which brings the non-Newtonian models closer to the Newtonian case. The shear stress peaks occur at these same instants of peak shear rate, despite the reduction in viscosity. All models show that the effective viscosity varies over the cycle, as well as from model to model; in this case, however, the Newtonian model follows the trend of the other models. This makes clear that the turbulent viscosity also changes throughout the cycle and tends to vary more in the Newtonian model, showing that the turbulence behavior can differ from one model to another. The highest value of effective viscosity occurs during systole, at the same time as the maximum peak shear rate. It is important to note that the analysis of Fig. 3 is performed on average values, so that occasionally the

differences detected can be even greater, as seen in Fig. 2, where there are local variations. In addition, although the peak velocity at the outlet of the heart (aortic inlet) occurs at 0.1 s, the maximum velocity and the peaks of the other properties in the plane occur at around 0.2 s. Finally, in relation to Table 2, the maximum differences found occur at specific instants of time, especially in regions with low shear rates. On average, the differences between the models are small, except for the effective viscosity and shear stress, for which the greatest differences occurred at points of higher velocities and shear rates.

5 Conclusion

Although the blood flow in the aorta is systematically treated as Newtonian, when the entire cardiac cycle is considered such a strategy may not represent the behavior of the actual flow. At instants of time when the velocity field has lower values, the shear rate also tends to be lower, leading to a variation in dynamic viscosity that is not predicted by the Newtonian model but can be seen in other rheological models, as demonstrated in this study for a realistic geometry of an aortic arch aneurysm.


Table 2 Difference of the non-Newtonian models in relation to the Newtonian model for the mass-flow-weighted average variables in the P7 plane

Variable              Metric       CY        PL        CS
Dynamic viscosity     MAPD (%)     42.285    42.484    21.107
                      MAD (Pa·s)   0.00442   0.00517   0.00184
                      tmad (s)     0.300     0.800     0.800
Effective viscosity   MAPD (%)     6.864     5.337     3.268
                      MAD (Pa·s)   0.00612   0.00543   0.00225
                      tmad (s)     0.170     0.170     0.010
Strain rate           MAPD (%)     3.969     4.112     1.991
                      MAD (s−1)    5.312     5.537     3.802
                      tmad (s)     0.450     0.450     0.440
Shear stress          MAPD (%)     6.027     5.915     4.094
                      MAD (Pa)     0.224     0.255     0.0937
                      tmad (s)     0.160     0.160     0.430
Velocity              MAPD (%)     3.031     3.882     2.353
                      MAD (m/s)    0.0169    0.00961   0.0113
                      tmad (s)     0.140     0.140     0.010
Pressure              MAPD (%)     0.017     0.008     0.009
                      MAD (Pa)     12.739    5.072     5.582
                      tmad (s)     0.030     0.030     0.030

CY Carreau-Yasuda; PL Power-Law; CS Casson; MAPD mean absolute percentage deviation; MAD maximum absolute difference; tmad instant of time of the maximum absolute difference

In general, the Carreau-Yasuda and Power-Law models show very similar results, while the Casson model tends to get closer to the Newtonian model. From the rheological point of view, the non-Newtonian models show considerable differences in relation to the Newtonian one. On the other hand, the differences in the hemodynamic field are negligible for regions with higher shear rates, becoming more significant at lower shear rates. Thus, it is advisable to adopt a non-Newtonian model when it is necessary to model regions of low shear rate in aortic blood flow. Acknowledgements The authors would like to thank Conselho Nacional de Desenvolvimento Científico e Tecnológico—CNPq (312982/2017-8) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—CAPES (88882.381154/2019-01) for their support in this study. Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Irgens F (2014) Rheology and non-Newtonian fluids. Springer
2. Chandran KB, Rittgers SE, Yoganathan AP (2012) Biofluid mechanics: the human circulation. CRC Press
3. Shibeshi SS, Collins WE (2005) The rheology of blood flow in a branched arterial system. Appl Rheol 15:398–405
4. Arzani A (2018) Accounting for residence-time in blood rheology models: do we really need non-Newtonian blood flow modelling in large arteries? J Royal Soc Interface 15:20180486
5. Karimi S, Dabagh M, Vasava P, Dadvar M, Dabir B, Jalali P (2014) Effect of rheological models on the hemodynamics within human aorta: CFD study on CT image-based geometry. J Non-Newtonian Fluid Mech 207:42–52
6. Bilgi C, Atalık K (2019) Numerical investigation of the effects of blood rheology and wall elasticity in abdominal aortic aneurysm under pulsatile flow conditions. Biorheology 56:51–71
7. Ionescu CM (2017) A memory-based model for blood viscosity. Commun Nonlinear Sci Numer Simul 45:29–34
8. Razavi A, Shirani E, Sadeghi MR (2011) Numerical simulation of blood pulsatile flow in a stenosed carotid artery using different rheological models. J Biomech 44:2021–2030
9. Steinman DA (2012) Assumptions in modelling of large artery hemodynamics. In: Modeling of physiological flows. Springer, pp 1–18
10. Molla MM, Paul MC (2012) LES of non-Newtonian physiological blood flow in a model of arterial stenosis. Med Eng Phys 34:1079–1087
11. Morbiducci U, Gallo D, Massai D et al (2011) On the importance of blood rheology for bulk flow in hemodynamic models of the carotid bifurcation. J Biomech 44:2427–2438
12. Alastruey J, Xiao N, Fok H, Schaeffter T, Alberto FC (2016) On the impact of modelling assumptions in multi-scale, subject-specific models of aortic haemodynamics. J Royal Soc Interface 13:20160073
13. McHale MP, Friedman JR, Karian J (2009) Standard for verification and validation in computational fluid dynamics and heat transfer. ASME V&V 20-2009. American Society of Mechanical Engineers
14. Gonçalves SF, Lucas TC, Haniel J, Silva MLF, Huebner R, Viana EMF (2019) Comparison of different rheological models for computational simulation of blood flow in central venous access. In: 25th ABCM international congress of mechanical engineering, Uberlândia, Brasil, pp 1–9
15. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Proc 13:600–612

Total Lung Capacity Maneuver as a Tool to Screen the Relative Lung Volume in Balb/c Mice

A. E. Lino-Alvarado, J. L. Santana, R. L. Vitorasso, M. A. Oliveira, W. Tavares-Lima and H. T. Moriya

Abstract

Assessment of mechanical ventilation in rodents is widely performed using a mechanical ventilator for small animals (SAV). One of the main adjustable parameters in a SAV is the tidal volume, typically 10 mL/kg, which is configured in relation to the animal's body weight. Traditionally, the preset TLC maneuver is used for alveolar recruitment; this study explores the data from the TLC maneuver to screen the relative lung volume in Balb/c mice and to analyze its relationship to body weight. The TLC maneuver allowed us to measure the relative delivered lung volume from PEEP (positive end-expiratory pressure) to 30 cm H2O. In total, 124 Balb/c mice aged 13–20 weeks were used; the animals were split into two groups using the mean body weight (24 g) as the cutoff point. Group H (n = 51) comprised animals with body weight higher than the mean value, while mice with lower body weight belonged to group L (n = 73). A significant positive correlation was found within group H (r = 0.6137); conversely, group L presented no correlation between relative volume and body weight (r ≈ 0). Additionally, an analysis of static compliance (Cstat) was conducted for each group using an unpaired t-test (p < 0.05), which indicated that mice with lower body weight presented lower static compliance than those with greater body weight. The results suggest that the tidal volume apparently depends on recruitment volume or compliance rather than on the mouse body weight.

Keywords

Respiratory mechanics • Total lung capacity • Balb/c mice • Static compliance • Ventilator for small animals

A. E. Lino-Alvarado (B) · J. L. Santana · R. L. Vitorasso · H. T. Moriya Laboratory of Biomedical Engineering, Escola Politécnica, University of São Paulo, Av. Prof. Luciano Gualberto 158, São Paulo, Brazil M. A. Oliveira · W. Tavares-Lima Laboratory of Physiopathology of Experimental Inflammation, Department of Pharmacology, Institute of Biomedical Sciences, University of São Paulo, São Paulo, Brazil

1 Introduction

Great advances in the study of the respiratory system of mammals have occurred since small animals, such as mice, rats and guinea pigs, became widely used in biomedical research [1–3]. Among these small rodents, mice are currently the main mammals used for in vivo experiments due to several advantages over larger mammals: a well-understood immunologic system, a short reproductive cycle, easy availability in large numbers and economic factors [1,4]. To study the respiratory system, one usually needs the ability to measure lung function and to understand how it changes under conditions likely to be encountered in lung diseases. However, to work with these small animals and successfully assess lung function, some practical issues must be overcome, such as the difficulty of measuring the necessary respiratory signals of flow, volume and transpulmonary pressure, a difficulty mainly attributed to the small magnitude of the airflow involved [4]. To achieve precision in these measurements, an invasive approach must inevitably be implemented, a situation well discussed in [5], where the authors coined "the phenotyping uncertainty principle", which highlights the compromise between the achievement of natural conditions and measurement precision, i.e., the less invasive the method, the less precise the measurements, and vice versa. A method capable of assessing lung function with high precision is one in which breathing frequency, tidal volume, mean lung volume and volume history are all under precise experimental control and the influence of the upper airways

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_7


has been eliminated. However, to conduct an experiment with all these variables controlled, the animal must be anesthetized, paralyzed and tracheostomized, which puts it far from its "normal" condition due to surgical and pharmacological stresses [4,5]. This method is widely implemented through the use of a mechanical ventilator for small animals (SAV), which uses a computer-controlled piston pump to deliver airflow to the animal [6]. The computer receives input signals from transducers that measure tracheal pressure and piston displacement and, after correction for gas compression in the piston chamber, can precisely control the frequency and amplitude of the airflow delivered to the animal attached to the ventilator [5]. With a SAV, many applications can be performed, such as the assessment of respiratory mechanics by positive-pressure ventilation in the time and frequency domains [7,8], analyses of the pressure-volume (P-V) curve, a classical method to investigate lung elasticity [9], or the study of the total lung capacity (TLC) of mice, which is the aim of this work.

The assessment of the maximum lung volume, or TLC, of mice has been reported in the literature with no standard experimental procedure [10]. This happens mainly because the term TLC is derived from humans, where the volume attributed to TLC is that of the maximum voluntary inspiration of an individual, which cannot be accomplished by animals [9,10]. For non-human animals, the term TLC as a capacity simply indicates the maximal lung volume [10,11]. Thereby, the TLC of mice is ideally determined by a limit pressure arbitrarily set by the investigator, at which the inflation limb of the P-V curve starts to plateau (constant lung volume with increasing lung pressure), usually at pressures >20 cm H2O. Nevertheless, it has already been shown that for much greater pressures, such as 90 cm H2O, in a few cases the inflation limb may not reach a plateau, with no lung damage observed [10]. So, currently, in the absence of a standard, a common practice is to choose a pressure within 25–35 cm H2O, at which, in most cases, the inflation curve of mice starts to plateau [9]. Working with a specific pressure for the TLC also makes it possible to investigate another important physiological parameter, one that relates pressure and volume in the absence of flow (i.e., constant pressure and volume): the static compliance (Cst) of the respiratory system [12]. The Cst is a very useful measurement of lung distensibility, which several factors might affect, such as lung size, age, respiratory diseases (e.g. emphysema and asthma), bronchodilator drugs and cardiovascular disease (e.g. mitral stenosis and left ventricular failure) [13], so that it is used as an important tool to investigate different abnormalities involving the respiratory system.

A. E. Lino-Alvarado et al.

These physiological parameters (TLC and Cst) are also often the subject of allometric studies investigating how they are associated with different body sizes. However, although these studies show strong correlations of these parameters with body size, most of them are performed with less invasive (i.e., less precise) methods of measuring and controlling airflow and transpulmonary pressure, besides being based on interspecies allometric scaling [5,14–16]. Thus, this work aimed to analyze the relationship of the TLC with the body weight of mice of the same strain, using a more invasive method to achieve greater precision in the assessment of the TLC. For this purpose, a mechanical ventilator for small animals (flexiVent, SCIREQ, Canada) [6] was used to estimate the TLC at a limit pressure of 30 cm H2O.

2 Materials and Methods

2.1 Animals

All animals were raised under similar conditions in the vivarium, with free access to food and water. The experimental procedures conformed to standards of animal welfare and were approved by the Ethics Committee on the Use of Animals of ICB/USP (No. 9782280518, 528816021, 95/2017 and 015/2014). One hundred twenty-four (124) Balb/c mice 13–20 weeks old were used in this study. To investigate the relationship of the TLC with body weight, the animals were divided into two groups, group H (26.6 ± 2.1 g; mean ± SD) and group L (18.4 ± 4.2 g), according to whether their body weight was higher (group H) or lower (group L) than the cutoff value of 24 g, the average value of all 124 animals.

2.2 Assessment of Total Lung Capacity

To perform the experiments, a mechanical ventilator for small animals (SAV, flexiVent, SCIREQ, Canada) was used to accomplish two tasks: supply the animals with the airflow necessary to keep them alive, and execute specific maneuvers, by controlling pressure and flow, to acquire the TLC of each animal. Before the experiment was conducted, a two-stage calibration was performed on the SAV. Firstly, the SAV generated a perturbation while the tracheal cannula was closed; this stage permitted estimation of the gas elastance within the cylinder-tubing assembly. Secondly, the same perturbation was applied while the cannula was totally open to the atmosphere; at this stage, the flow resistance and gas inertance were


estimated. Finally, the SAV compensated the airway pressure of the animal by subtracting the pressures related to resistance and inertance [17]. The animals were anesthetized with a solution of ketamine (12 mg/kg) and xylazine (12 mg/kg) through an intraperitoneal (i.p.) injection. Afterwards, a tracheostomy was performed with an 18G metal cannula (BD Company, USA) and the animal was connected to the SAV to be ventilated with a tidal volume of 10 mL/kg [8,18,19], a positive end-expiratory pressure (PEEP) of 3 cm H2O and a breathing frequency of 150 breaths/minute. Next, the right jugular vein was exposed and a needle attached to a flexible PVC tube (Critchley Electrical Products PTY, Australia) was inserted, to be used as a pathway for the phosphate-buffered saline (PBS) or bronchoconstrictor drugs used in a concomitant study held by our group. It is worth noting that that study did not interfere with the present work, because the TLC assessments were executed prior to any PBS or bronchoconstrictor drug administration. Finally, the respiratory muscles were blocked with an injection of 1 mg/kg of pancuronium bromide (i.p.). This eliminates possible interference of the animal's breathing with the respiratory system assessment, which makes the method more precise [5,7].

2.2.1 Alveolar recruitment maneuvers
To estimate the TLC, we used a preset maneuver available on the SAV, known as the alveolar recruitment maneuver (ARM), which is widely implemented in the assessment of respiratory mechanics. It consists of a pressure-controlled inflation maneuver starting from a preset PEEP of 3 cm H2O, with a total duration of 6 s (Fig. 1). Initially, the tracheal pressure (Ptr) is increased linearly from the PEEP value until it reaches the preset value of 30 cm H2O (approximately 3 s), and this pressure is then maintained for a further 3 s, creating the plateau observed in Fig. 1a. Figure 1b shows the amount of air delivered to the lungs over time while the tracheal pressure was increasing and after the plateau condition was reached. The SAV was adjusted to take as the TLC the average value of the volume of air delivered to the lungs (Vtr) in the last 100 ms of the inflation maneuver (Fig. 1b), and these were the values used herein for the analyses. The standard protocol for assessing respiratory mechanics in our laboratory performs two ARMs for each animal; to work with as many alveolar units open as possible, the second maneuver was chosen for this study.

Fig. 1 Inflation maneuver performed in a Balb/c mouse by the SAV in the laboratory. In a the tracheal pressure is increased and monitored by the SAV until 30 cmH2O, which takes approximately 3 s; this pressure is then maintained for a further 3 s. In b the corresponding volume of air delivered to the lungs during this maneuver is shown
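The ramp-and-plateau pressure profile of the ARM described above can be sketched as an idealized setpoint (a sketch with the stated parameters; the actual SAV presumably closes a feedback loop around the measured tracheal pressure rather than following an open-loop profile):

```python
import numpy as np

def arm_setpoint(t, peep=3.0, p_max=30.0, t_ramp=3.0, t_total=6.0):
    """Idealized ARM tracheal-pressure setpoint (cmH2O): a linear ramp
    from PEEP to p_max over t_ramp seconds, then a plateau until t_total."""
    t = np.asarray(t, dtype=float)
    p = peep + (p_max - peep) * np.clip(t / t_ramp, 0.0, 1.0)
    return np.where(t <= t_total, p, np.nan)
```
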

2.2.2 Static compliance
Cstat was calculated as the measured volume divided by the controlled pressure (30 cm H2O) during the plateau of the TLC maneuver:

$$C_{stat} = \frac{V_{tr}}{P_{tr}} \quad (1)$$
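Equation (1), combined with the last-100 ms averaging described in Sect. 2.2.1, can be sketched as (a hypothetical helper, not the SAV's internal routine; t in s, v_tr in mL, pressure in cmH2O):

```python
import numpy as np

def tlc_and_cstat(t, v_tr, p_plateau=30.0, window=0.1):
    """TLC taken as the mean delivered volume over the final `window`
    seconds of the maneuver, then Cstat = Vtr / Ptr as in Eq. (1)."""
    t = np.asarray(t, dtype=float)
    v_tr = np.asarray(v_tr, dtype=float)
    tlc = v_tr[t >= t[-1] - window].mean()  # average of the last 100 ms
    return tlc, tlc / p_plateau
```

For a plateau volume of 0.9 mL (as in Fig. 1b), this yields a Cstat of 0.03 mL/cmH2O.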

2.3 Data Analysis

Before the statistical analysis was performed, the mice were divided into the following two groups: group H (26.6 ± 2.1 g), composed of 51 animals with body weight higher than the mean value (24 g), and group L (18.4 ± 4.2 g), with 73 animals with body weight below the mean value. To exclude possible outliers, the Tukey method was applied [20]. A Shapiro-Wilk test was used to assess the normality of the data in each group. To evaluate whether a significant correlation exists between animal body weight and the TLC (Vtr) in groups H and L, dispersion graphs using the TLC


estimated along with the Pearson coefficient (r) were used. In addition, an unpaired t-test was conducted on the values of Cstat in each group.
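The outlier screening and correlation steps above can be sketched with NumPy (a sketch using the conventional Tukey fence factor k = 1.5; the Shapiro-Wilk test and the unpaired t-test are available as scipy.stats.shapiro and scipy.stats.ttest_ind):

```python
import numpy as np

def tukey_keep(x, k=1.5):
    """Keep values inside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return x[(x >= q1 - k * iqr) & (x <= q3 + k * iqr)]

def pearson_r(x, y):
    """Pearson correlation coefficient, e.g. body weight vs. Vtr."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])
```
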

3 Results

Figure 2 shows that there is no correlation between the relative volume and body weight in group L (r ≈ 0). In contrast, as presented in Fig. 3, a positive correlation was observed in group H (r = 0.6137, p < 0.0001).

Figure 4 illustrates the mean value ± SD of the Cstat calculated for groups H and L.

Fig. 2 No correlation was found between the volume during TLC (Vtr) and animal weight in group L

Fig. 3 A significant correlation was observed in group H between the volume during TLC (Vtr) and animal weight

Fig. 4 Mean ± SD of the Cstat for the studied groups H and L. Animals with lower body weight (group L) presented lower static compliance, while animals in group H had higher static compliance

4 Discussion

ARMs are used, e.g., in the Intensive Care Unit (ICU) with the main purpose of recruiting collapsed alveoli, increasing gas exchange and improving arterial oxygenation [21]. Albeit important in clinical practice, an ARM must be applied under controlled circumstances, such as proper sedation. Beyond clinical practice, ARMs are also present in respiratory mechanics studies with animal models [22,23]. The purpose of the ARM is, once more, to recruit collapsed alveoli before the respiratory mechanics evaluation. Additionally, an ARM may be used as a perturbation applied to the respiratory system in order to assess respiratory mechanics with physiological meaning. Herein, the ARM selected from the commercial small animal ventilator was the Total Lung Capacity (TLC) maneuver, which aims to achieve the homonymous lung capacity. In a newer version of the ventilator used in this work, the maneuver has a different name (deep inflation) with the same pressure setting. The TLC as a lung capacity is, for humans, the lung volume at the end of a maximal voluntary inspiration; for non-human animals, the term refers to the maximal lung volume. The TLC maneuver at 30 cm H2O may not inflate the animal lung to its maximal capacity [10]; yet, it can bring information regarding the animal's respiratory mechanics. The relative volume refers to the total volume of air delivered into the mouse lungs while the tracheal pressure increases from PEEP to 30 cm H2O.


Figure 4 shows that animals with lower body weight had, statistically, lower static compliance. This compliance was modeled using the TLC maneuver. In addition, it was found that animals with higher body weight and, consequently, greater compliance presented an association between body weight and volume during TLC (Fig. 3). This association was not observed among the animals with lower body weight. Therefore, we posit two possibilities: there is, indeed, no association between body weight and volume at the end of the TLC maneuver for the animals with lower body weight; or the 30 cm H2O pressure is not sufficient to inflate lower-compliance animal lungs up to a point that discriminates animals with lower or higher body weight. A pressure screening could fully address this issue. The implications of these findings are related to the animals' life-supporting mechanical ventilation and respiratory mechanics. Both the tidal volume and the perturbation volume are set using body weight. For example, it is common to set the tidal volume as 10 mL/kg, and the perturbation's volume amplitude is related numerically to the tidal volume. Thus, the volume amplitude during a perturbation of a larger animal is greater than that of a lower-body-weight animal. This is intuitive: larger animals would require greater volume amplitude during a perturbation. However, since there was no correlation between body weight and volume during the ARM for smaller animals, the volume set in the perturbations should not be linearly predicated on animal body weight. The respiratory impedance and its further modeling are volume dependent [1]. Hence, if the animal's tidal volume is not properly adjusted along with an adequate perturbation volume amplitude, the modeled parameters and impedance calculations may be inaccurate or unreliable. The impedance calculation and the application of linear models rely on the linear behavior of the respiratory system [8].
This behavior holds only under certain conditions, one of which is a low perturbation amplitude, and clearly depends on the species and its morphological characteristics. In this work, the body weight cutoff was 24 g, based on the group mean. However, another study, with a wider body weight range and a different design, could provide a proper cutoff for classifying the groups. Based on the findings of this work, the animals under 24 g belonged to the group with lower compliance. This group showed no linear relationship between body weight and volume at the 30 cm H2O recruitment, which could indicate that the tidal volume and, consequently, the perturbation volume amplitudes could be based on recruitment volume or compliance instead of body weight.


5 Conclusion

Animals with lower body weight presented lower compliance than those with greater body weight. Additionally, it was found that animals with higher body weight and, consequently, greater compliance presented an association between body weight and volume during recruitment, whereas this association was not observed among the animals with lower body weight. These findings suggest, especially for the lower-compliance group, that the perturbation volume amplitude could be based on recruitment volume or compliance instead of body weight.

Acknowledgements This study was financed in part by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Brasil (grant numbers 308298/2016-0, 408006/2016-1 and 133814/2019-0), and the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Brasil, Finance Code 001 (grant numbers 88882.333327/2019-01 and 88882.333348/2019-01).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Sly PD, Collins RA, Thamrin C, Turner D, Hantos Z (2015) Volume dependence of airway and tissue impedances in mice. J Appl Physiol 94(4):1460–1466
2. Aun M, Bonamichi-Santos R, Arantes-Costa F, Kalil J, Giavina-Bianchi P (2017) Animal models of asthma: utility and limitations. J Asthma Allergy 10:293–301
3. Lambrecht BN, Hammad H (2015) The immunology of asthma. Nat Immunol 16:45–56
4. Irvin CG, Bates JHT (2003) Measuring the lung function in the mouse: the challenge of size. Respir Res 4:1–9
5. Bates JHT, Irvin CG (2003) Measuring lung function in mice: the phenotyping uncertainty principle. J Appl Physiol 94:1297–1306
6. Schuessler TF, Bates JHT (1995) A computer-controlled research ventilator for small animals: design and evaluation. IEEE Trans Biomed Eng 42:860–866
7. Bates JHT (2009) Lung mechanics: an inverse modeling approach. Cambridge University Press, New York
8. Moriya HT, Moraes JCTB, Bates JHT (2003) Nonlinear and frequency-dependent mechanical behavior of the mouse respiratory system. Ann Biomed Eng 31:318–326
9. Limjunyawong N, Fallica J, Horton MR, Mitzner W (2015) Measurement of the pressure-volume curve in mouse lungs. J Visualized Exp
10. Soutiere SE, Mitzner W (2004) On defining total lung capacity in the mouse. J Appl Physiol 96:1658–1664
11. Leith DE (1976) Comparative mammalian respiratory mechanics. Physiologist 19:485–510
12. Leith DE (1983) Comparative mammalian respiratory mechanics. Am Rev Respir Disease 128:S77–S82
13. Cotes JE, Chinn DJ, Miller MR (2006) Lung function: physiology, measurement and application in medicine, 6th edn. Wiley-Blackwell, Birmingham
14. Lindstedt SL (1987) Allometry: body size constraints in animal design. In: Pharmacokinet. Risk Assess. Drink. Water Heal, 8th edn. National Academies Press, Washington, D.C.
15. Stahl WR (1967) Scaling of respiratory variables in mammals. J Appl Physiol 22:453–460
16. Bennett FM, Tenney SM (1982) Comparative mechanics of mammalian respiratory system. Respir Physiol 49:131–140
17. Bates JHT, Schuessler TF, Dolman C, Eidelman DH (1997) Temporal dynamics of acute isovolume bronchoconstriction in the rat. J Appl Physiol 82:55–62
18. Walder B, Fontao F, Tötsch M, Morel DR (2005) Time and tidal volume-dependent ventilator-induced lung injury in healthy rats. Eur J Anaesthesiol 22:786–794
19. Guivarch E, Voiriot G, Rouzé A et al (2018) Pulmonary effects of adjusting tidal volume to actual or ideal body weight in ventilated obese mice. Sci Rep 8:6439
20. Mosteller F, Tukey JW (1977) Data analysis and regression: a second course in statistics
21. Hartland BL, Newell TJ, Damico N (2015) Alveolar recruitment maneuvers under general anesthesia: a systematic review of the literature. Respir Care 60
22. Mori V, Oliveira MA, Vargas MHM et al (2017) Input respiratory impedance in mice: comparison between the flow-based and the wavetube method to perform the forced oscillation technique. Physiol Meas 38:992–1005
23. Vitorasso RL, Oliveira MA, Lima W, Moriya HT (2020) Respiratory mechanics evaluation of mice submitted to intravenous methacholine: bolus versus continuous infusion. Exp Biol Med 245:680–689

The Influence of Cardiac Ablation on the Electrophysiological Characterization of Rat Isolated Atrium: Preliminary Analysis J. G. S. Paredes, S. Pollnow, I. Uzelac, O. Dössel and J. Salinet

Abstract

Atrial fibrillation (AF) is the most common cardiac arrhythmia seen in clinical practice, and treatments with antiarrhythmic drugs are of limited effectiveness. Radiofrequency catheter ablation (RFA) has been widely accepted as a strategy to treat AF. In this study, we analyzed the electrophysiological impact of different RFA strategies by varying the duration of ablation in a controlled protocol. The electrical activity of the isolated right atrium of rats, under different RFA time strategies, was acquired on the epicardium by electrical mapping (EM) and simultaneously on the endocardium by optical mapping (OM). Analyses were performed in both the time and frequency domains, through analysis of signal morphology, local activation time (LAT), conduction velocity (CV), dominant frequency (DF), and organization index (OI). The morphology of the optical and electrical signals was progressively altered as the ablation time increased. DF and OI decreased as ablation time increased, and ablation resulted in fragmented electrograms. Through the characterization of traditional metrics applied to the electrical and optical data, it was possible to identify the important changes induced by the ablated area.

Keywords

Cardiac electrophysiology • Electrical mapping • Optical mapping • Signal processing • Radiofrequency ablation

J. G. S. Paredes (B) · J. Salinet
Biomedical Engineering, Modelling and Applied Social Sciences Centre, Federal University of ABC, Alameda da Universidade, s/n, Av. Anchieta, São Bernardo do Campo 09606-045, Brazil
S. Pollnow · O. Dössel
Institute of Biomedical Engineering, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
I. Uzelac
School of Physics, Georgia Institute of Technology, Georgia, USA

1 Introduction

Atrial fibrillation (AF) is the most common cardiac arrhythmia, affecting 1–2% of the worldwide population [1]. It is characterized by the collapse of the synchronized wavefront depolarization of the atria, which experience disorganized and self-sustained electrical activation patterns [1]. During AF, the atria do not empty completely between contractions, leading to the formation of blood clots and thromboembolic events, increasing the risk of stroke five-fold and doubling mortality [1]. Three theories have been postulated to explain the mechanism of AF: multiple and continuous intra-atrial reentries, ectopic foci, and the existence of self-sustained rotors [2,3]. These mechanisms vary among patients and can coexist simultaneously, or alternate with each other in the same patient, favoring a complex atrial activation pattern [2]. The treatment of AF by antiarrhythmic drugs has limited effectiveness, and radiofrequency catheter ablation (RFA) has been widely adopted as a strategy to terminate AF and prevent its recurrence [4]. The primary ablation strategy for AF involves electrical isolation of the pulmonary veins (PVs), aiming to restore and maintain sinus rhythm [4]. Recurrence of AF after RFA occurs in 20–40% of patients, when non-conductive cauterized tissue becomes conductive again [5]. Permanent PV isolation may not be achieved after a single RFA treatment, and a repeated ablation procedure is necessary in such instances [5]. Studies of the electrophysiological characteristics of the ablated tissue allow a better understanding of the ablated area in terms of its excitability and conductivity, in order to devise more optimized RFA strategies. This study is based on the analysis of the electrophysiological impact of

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_8



J. G. S. Paredes et al.

different RFA durations, through a controlled protocol on isolated rat atria, mimicking the clinical practice of treating AF by RFA.

2 Methods

2.1 Experimental Protocol

All animal experiments were carried out in agreement with the European Council guidelines for the care and use of laboratory animals and were approved by the local Committee for Animal Welfare (Regierungspräsidium Karlsruhe, 35-9185.81/G-104/17). For this study, two adult Fisher rats were used (cases A and B). The right atrium was dissected by cutting along the tricuspid valve to the superior vena cava (10 × 10 mm) and bathed in a Krebs-Henseleit solution at a controlled temperature of 36.7 ± 0.5 °C. The sinoatrial node was ablated, and the epicardium was stimulated at 6.7 Hz. RFA was performed with a tungsten microdissection electrode connected to an electrosurgical unit (MD1, Micromed) at the center of the atrial epicardium. The RFA strategy was performed as follows: the same area of tissue was ablated for 0.5, 1, 1.5, 2, 2.5, 3, and 4 s. Before and after each ablation, epicardial unipolar electrograms (EGMs) and optical action potentials (OAPs) were recorded simultaneously, with two-minute intervals between successive ablations. By the end of the ablation protocol, the same area of the epicardium had received a cumulative ablation of 14.5 s [6].

2.2 Optical and Electrical Mapping

For optical mapping, the transmembrane voltage-sensitive dye Di-4-ANEPPS (Sigma Aldrich) was used. Two LEDs (525 nm center wavelength), each supplied with 4 A and coupled with a narrow band-pass filter of 530 nm, were used as excitation light sources. The excitation light was directed to a glass diffuser and then focused by two plano-convex lenses, propagating to a dichroic mirror and a Makro-Planar lens before reaching the endocardium. The emitted fluorescence is directed back through the Makro-Planar lens and dichroic mirror, followed by a long-pass filter (610 nm), and then to a high-speed lens attached to a camera. Sequences of fluorescence images were acquired at a resolution of 82 × 82 pixels at 868 Hz with a binning factor of 2. The spatial field of view of the camera is approximately 10.5 × 10.5 mm, or 128 × 128 µm per pixel. Due to noise around the image border, the images were cropped to a resolution of 64 × 64 pixels. Simultaneously with optical mapping, EGMs were recorded with a circular multielectrode array (MEA) containing eight

silver/silver chloride electrodes (3.5 mm diameter) at a sampling frequency of 100 kHz [6]. Both experiments were performed at the Karlsruhe Institute of Technology [6], and all the analyses below were computed from the collected data.

2.3 Electrophysiological Characterization

All analyses were performed off-line in MATLAB (R2018b).

Preprocessing: OAPs were filtered with an adaptive multidimensional Gaussian low-pass filter [7]. The baseline fluorescence drift was removed with a low-pass Kaiser-window FIR filter on a per-pixel basis, subtracting the baseline and dividing the obtained difference by the baseline [8]. To further reduce noise, the signal was smoothed with a 6 × 6 Gaussian kernel [9,10]. Finally, the signal of each pixel was normalized from 0 to 1. EGMs were downsampled to 5 kHz. Pacing artifacts present in the EGMs were removed as follows: since the spectrum of the EGMs lies below 1 kHz, the EGMs were high-pass filtered with a 3rd-order Butterworth filter (fc = 1 kHz) to detect only the pacing peak. After peak detection, a segment covering 2 ms before and after the detected pacing peak was removed and replaced by a cubic spline interpolation. Afterwards, the EGMs were band-pass filtered with 3rd-order low-pass (fc = 743 Hz) and high-pass (fc = 0.6 Hz) Butterworth filters, and a 50 ± 1 Hz notch filter was used to remove power-line noise.

Signal Morphology: 10 OAPs were selected (eight near the locations of the MEA's electrodes, one far away from the ablated area, and one at the center of the RFA area). The morphology of OAPs recorded at baseline (BL) and after each RFA was compared with the Pearson correlation (PC). The analysis was also extended to the EGMs' morphology.

Time analysis: The activation and repolarization times were taken as the times at which the upstroke reaches 50% and recovers to 90% of its maximum amplitude, respectively [11]. LATs were calculated for the optical signals, with a duration of 4 s, underneath the MEA's electrodes. For the eight pixels, the differences between maximal and minimal LATs were calculated, and the mean LAT dispersion was calculated as the mean value of these differences.
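The 50%-upstroke activation time and the LAT dispersion just described can be sketched as follows. This is an illustrative Python sketch (the study's analyses were done in MATLAB); signals are assumed already normalized to the range 0 to 1, and the function names are placeholders.

```python
import numpy as np

def local_activation_time(signal, fs):
    """LAT: instant at which the upstroke first reaches 50% of the
    normalized amplitude (signal assumed normalized to [0, 1])."""
    idx = np.argmax(np.asarray(signal) >= 0.5)  # first sample crossing 50%
    return idx / fs                              # seconds

def lat_dispersion(signals, fs):
    """Difference between maximal and minimal LATs over a set of pixels."""
    lats = [local_activation_time(s, fs) for s in signals]
    return max(lats) - min(lats)
```

With the 868 Hz optical sampling rate from the text, `lat_dispersion` applied to the eight pixels under the MEA electrodes would yield one dispersion value per recording.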
From the LATs, isochronous maps were generated to distinguish patterns of wavefront propagation and provide information on the longitudinal conduction velocity (CV), calculated through the single-vector method [12], selecting a point at approximately 80% of the distance between the stimulation site and the border in the longitudinal direction [13]. The mean OAP for each pixel under the MEA was upsampled [14] and used to find the action potential duration (APD),


defined as the time shift from the LAT to the time at which the amplitude decays by 30, 50, and 90% from the highest positive OAP peak [15]. The LATs from the EGMs were obtained with the minimum downstroke derivative (−dV/dt) [6], after which the LAT dispersion values were calculated.

Frequency analysis: the Fast Fourier Transform (FFT) was applied to characterize the frequency spectrum. A Hamming window was used to reduce spectral leakage. The FFT was performed with a zero-padding factor of 5, resulting in a frequency step of 0.05 Hz. The dominant frequency (DF) was identified as the frequency peak of the power spectrum within the range of 4–12 Hz. The organization index (OI) [16] allows us to characterize the spread of the DF peak and its harmonics due to the dispersion of CV and action potential upstroke caused by ablation. For the OI calculation, the area within 0.5 Hz on either side of the DF and its harmonics (up to 20 Hz) is divided by the total area under the power spectrum within the range of 2–20 Hz.
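A minimal sketch of the DF and OI computation described above (illustrative Python; the Hamming window, zero-padding factor, DF band, OI band, and ±0.5 Hz harmonic width follow the text, while the function name and everything else are assumptions):

```python
import numpy as np

def df_and_oi(x, fs, zero_pad=5, df_band=(4.0, 12.0),
              oi_band=(2.0, 20.0), half_width=0.5):
    """Dominant frequency (DF) and organization index (OI) of one signal.

    DF: peak of the zero-padded, Hamming-windowed power spectrum in df_band.
    OI: power within +/- half_width Hz of the DF and its harmonics (up to
    oi_band[1]) divided by the total power within oi_band.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    nfft = int(n * zero_pad)                       # zero-padding factor of 5
    spec = np.abs(np.fft.rfft(x * np.hamming(n), nfft)) ** 2
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)

    in_df = (freqs >= df_band[0]) & (freqs <= df_band[1])
    df = freqs[in_df][np.argmax(spec[in_df])]

    in_oi = (freqs >= oi_band[0]) & (freqs <= oi_band[1])
    total = spec[in_oi].sum()
    organized, h = 0.0, df
    while h <= oi_band[1]:                         # DF and its harmonics
        organized += spec[in_oi & (np.abs(freqs - h) <= half_width)].sum()
        h += df
    return df, organized / total
```

A well-organized (nearly periodic) signal concentrates its power around the DF and its harmonics, giving an OI close to 1; fragmented electrograms spread power across the band and lower the OI.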

3 Results

Figure 1a shows two consecutive OAPs at the center of the ablated area at BL and after 2.5 s and 4 s of RFA, with LATs marked on the signals. As RFA time increases, the morphology of the OAPs deteriorates with increased fractionation, leading to dispersion of LATs. Figure 1b shows OAPs at the location underneath the 3rd MEA electrode, located near the center of the ablated area and characterized by lower fractionation than inside the ablated area. Figure 1c shows the 3rd electrode's EGMs, with a noticeable absence of the negative segment of the S wave (downward deflection). This feature indicates tissue damage due to ablation, with complete wave-propagation block after 2.5 s of ablation; the absence of the negative S-wave segment indicates that the wavefront is not passing through the ablated region. Table 1 shows PC values obtained from the 8 OAPs underneath the MEA's electrodes (mean ± SD columns), one OAP at the center of the ablated area, and one far away from it ("in" and "out" columns), for the different RFA time strategies. The corresponding PC values from the EGMs are shown in the lower part of the table. Inside the ablated area and at the MEA's electrodes, the PC decreases as RFA time increases.
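The PC comparison behind Table 1 can be sketched as follows (illustrative Python; `morphology_pc` is a hypothetical helper name, and the two inputs are assumed to be beats sampled on the same time grid):

```python
import numpy as np

def morphology_pc(baseline_beat, post_rfa_beat):
    """Pearson correlation between a baseline beat and a post-RFA beat,
    used as a morphology-similarity marker (values near 1: unchanged
    morphology; values near 0: heavily fractionated signal)."""
    return float(np.corrcoef(baseline_beat, post_rfa_beat)[0, 1])
```

Applied per pixel (OAPs) or per electrode (EGMs) against the BL recording, this yields the per-case mean ± SD entries reported in Table 1.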

3.1 Time Analysis

The mean LAT dispersion across the OAPs recorded underneath the MEA's electrodes increased (in ms) from 4.3 ± 0.5 (baseline) to 8.4 ± 2.2 (2.5 s) and 9.8 ± 4.2 (4 s). The EGMs presented a similar trend, increasing from 6.0 ± 1.2 to 11.7 ± 2.5 and 12.6 ± 2.7, respectively. The ablation affected APD values, specifically for case A, with decreased APD90 towards the end of


the RFA protocol, indicating absence of repolarization due to wavefront block (Fig. 2a). The isochronous maps show a wavefront propagation pattern from the bottom right to the upper left for case A (Fig. 2b). After 4 s of RFA, compressed isochrone lines around the ablation area indicate CV reduction, where the ablated area acts as a functional block. The CV was 87 cm/s at BL and 65 cm/s after 4 s of RFA, with an estimated CV of 48 cm/s at the center of the ablation area. For case B, the propagation is from left to right. The CV was 69 cm/s at BL, 37 cm/s after 4 s of RFA, and estimated to be 0.8 cm/s at the center of the ablation area (Fig. 2c).
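The CV estimates above amount to distance travelled over LAT difference along the propagation direction. A minimal sketch of that single-vector idea (illustrative Python; the numbers in the comment are made up for the example, not measurements from this study):

```python
def conduction_velocity_cm_s(dist_mm, lat1_ms, lat2_ms):
    """Longitudinal CV between two points along the propagation direction:
    distance travelled divided by the difference of their LATs."""
    dt_s = (lat2_ms - lat1_ms) / 1000.0   # ms -> s
    return (dist_mm / 10.0) / dt_s        # mm -> cm, then cm/s

# e.g. two points 8.7 mm apart activated 10 ms apart -> 87 cm/s
```

Compressed isochrones correspond to a larger LAT difference over the same distance, i.e. a lower CV, as observed over the ablated area.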

3.2 Dominant Frequency and Organization Index

The DF analysis for case A shows large areas at 6.7 Hz until 2.5 s, decreasing in the ablated area to 5.4 Hz after 4 s of RFA. The DF obtained from the EGMs went down from 6.7 to 4.7 Hz. The respective OI maps are displayed in Fig. 3a. The overall OI values for the OAPs in the ablated area are reduced, in contrast to the EGMs. For case B, the DF decreases from 6.7 Hz to 5.7 Hz after 2.5 s of RFA. Figure 3b shows that the OI maps have patterns similar to case A, with lower OI values in the middle of the ablated area after 2.5 s. The EGMs show OI values only partially correlated with the OI map.

4 Discussion

Similar to our observations, a recent study also showed that changes in EGM and OAP morphology, with increasing levels of fractionation, are associated with prolonged RFA time [17]. Specifically, EGMs are useful to detect wave block in the ablated region, marking successful ablation, through the absence of the negative S-wave segment [17]. Several authors use the first derivative of the AP to find the activation time [14,18], but this method presented problems in the ablated area; alternatively, the activation time was determined as the time point at which the upstroke reaches 50% [11]. In optical mapping, due to inherent spatial averaging, wave blocks are characterized by the dispersion of LATs around the ablated area and the randomness of LATs corresponding to successive wavefronts in the ablated region. According to Chorro et al. [19], the distal tissue has its activation delayed by the ablated lesions. With both the OAP and EGM methods, PC values are highly correlated and consistent, and can be used as markers to characterize successful or failed ablation [17]. For more detailed ablation analysis, OAPs are advantageous due to their higher resolution.



Fig. 1 a OAPs at BL, 2.5 s and 4 s after RFA obtained in the ablated area. b OAP acquired underneath MEA electrode 3. c EGM for electrode 3

Table 1 Pearson correlation of OAPs and EGMs against baseline

                  A case                            B case
Time       Mean ± SD      In      Out        Mean ± SD      In      Out
OAP
0.5 s      0.94 ± 0.02    0.91    0.96       0.88 ± 0.05    0.87    0.95
2.5 s      0.91 ± 0.03    0.60    0.97       0.87 ± 0.04    0.70    0.96
4 s        0.01 ± 0.03    0.01    −0.01      0.77 ± 0.09    0.53    0.96
EGM
0.5 s      0.95 ± 0.04                       0.52 ± 0.31
2.5 s      0.05 ± 0.15                       0.10 ± 0.31
4 s        0.06 ± 0.03                       0.03 ± 0.28

In the border zone surrounding the lesion, the RF energy resulted in a significant APD shortening [20], presented at the end of the RFA strategy (Fig. 2a). Some authors found no significant modifications of atrial CV in areas proximal to the ablation site [19]. APD markers (to study repolarization) and CV can be calculated across the entire tissue to study the effects of RFA not only on the ablated area but also on the surrounding tissue, so as to limit RFA damage where it is not needed. OI is the marker of choice to identify focal sources during persistent atrial fibrillation. However, in this study, the randomness of the OIs from the EGMs and their poor correlation with the respective OI values from the OAPs render OIs from EGMs poor markers for ablation characterization. OIs from OAPs are much better suited [21]; however, optical mapping cannot be used in clinical settings.

5 Conclusion

This study aimed to identify the effect of the ablation time duration on the atrial substrate. The effect on cardiac electrophysiology at the ablation site and surrounding tissue was quantified by simultaneously acquired optical and electrical signals. Many factors can influence the observed difference between cases A and B during the experimental phase, such as wall thickness, force, time, temperature, electrode size, and pacing location. Statistically, the most important limitation was the number of cases; a larger number of samples would also help to explain the differences among experiments. Future studies will include more experimental data, including different animal species and different RFA strategies, to identify the important physiological changes in the ablated area.

Fig. 2 a APD30, APD50 and APD90 for the RFA strategy for cases A and B inside the ablated area. b LAT maps generated at BL, after 2.5 s, and 4 s of RFA for case A. c LAT maps for case B

Fig. 3 a OI maps with the OI values from the EGMs for case A. b OI maps for case B


Acknowledgements Financial support: Program of Alliances for Education and Training (Scholarship Brazil - PAEC OEA-GCUB-2017).

Conflict of Interest The authors declare no conflict of interest.

References

1. January CT, Wann L, Calkins H et al (2019) Guideline for the management of patients with atrial fibrillation: a report of the American College of Cardiology. J Am Coll Cardiol 74:104–132
2. Konings KT, Kirchhof CJ, Smeets JR, Wellens HJ, Penn OC, Allessie MA (1994) High-density mapping of electrically induced atrial fibrillation in humans. Circulation
3. Zipes DP, Jalife J (2004) Cardiac electrophysiology: from cell to bedside. Saunders, Philadelphia
4. Kirchhof P, Eckardt L (2013) Ablation of atrial fibrillation: for whom and how? Heart 96:1325–1330
5. Darby AE (2016) Recurrent atrial fibrillation after catheter ablation: considerations for repeat ablation and strategies to optimize success. J Atr Fibrillation 9
6. Pollnow S (2018) Characterizing cardiac electrophysiology during radiofrequency ablation. Karlsruhe Transactions on Biomedical Engineering, vol 24, Karlsruhe, Germany
7. Pollnow S, Pilia N, Schwaderlapp G et al (2019) An adaptive spatio-temporal Gaussian filter for processing cardiac optical mapping data. Comput Biol Med 102:267–277
8. Uzelac I, Iravanian S, Fenton FH (2019) Parallel acceleration on removal of optical mapping baseline wandering. Comput Cardiol 46
9. Sung D, Omens JH, McCulloch AD (2000) Model-based analysis of optically mapped epicardial activation patterns and conduction velocity. Ann Biomed Eng 28:1085–1092
10. Gurevich DR, Herndon C, Uzelac I, Fenton FH, Grigoriev RO (2017) Level-set method for robust analysis of optical mapping recordings of fibrillation. Comput Cardiol 44
11. Efimov IR, Nikolski VP, Salama G (2004) Optical imaging of the heart. Circ Res 95:21–33
12. Doshi AN, Walton RD, Krul SP et al (2015) Feasibility of a semi-automated method for cardiac conduction velocity analysis of high-resolution activation maps. Comput Biol Med 177–183
13. Linnenbank AC, de Bakker JMT, Coronel R (2014) How to measure propagation velocity in cardiac tissue: a simulation study. Front Physiol
14. O'Shea C, Holmes AP, Yu TY et al (2019) ElectroMap: high-throughput open-source software for analysis and mapping of cardiac electrophysiology. Sci Rep 9:1389
15. Yu TY et al (2014) An automated system using spatial oversampling for optical mapping in murine atria. Development and validation with monophasic and transmembrane action potentials. Prog Biophys Mol Biol 1–9
16. Salinet JL (2013) High density frequency mapping of human intracardiac persistent atrial fibrillation electrograms. University of Leicester
17. Pollnow S, Schwaderlapp G, Dössel O et al (2019) Monitoring the dynamics of acute radiofrequency ablation lesion formation in thin-walled atria: a simultaneous optical and electrical mapping study. Biomed Eng/Biomed Tech 65
18. Efimov IR, Huang DT, Rendt JM, Salama G (1994) Optical mapping of repolarization and refractoriness from intact hearts. Circulation 90
19. Chorro FJ et al (1998) Acute effects of radiofrequency ablation upon atrial conduction in proximity to the lesion site. PACE 21:1659–1668
20. Wu CC et al (1999) Sequential change in action potential of rabbit epicardium during and following radiofrequency ablation. J Cardiovasc Electrophysiol 10:1252–1261
21. Jarman JWE et al (2014) Organizational index mapping to identify focal sources during persistent atrial fibrillation. J Cardiovasc Electrophysiol 25:355–363

Evaluation of the Surface of Dental Implants After the Use of Instruments Used in Biofilm Removal: A Comparative Study of Several Protocols

D. P. V. Leite, G. E. Pires, F. V. Bastos, A. L. Sant'ana, L. Frigo, A. Martins e Silva, J. E. P. Nunes, M. H. B. Machado, A. Baptista, R. S. Navarro, and A. T. Araki

Abstract

It is extremely important to check the morphological changes on the surfaces of implants, which have an impact on osseointegration and on the clinical longevity of the implant. For this reason, the purpose of this in vitro study was to evaluate the damage caused to the surface of dental implants under different conditions simulating mechanical removal of biofilm and scraping, using several different protocols. No biofilm was actually formed or removed. Twenty-five implants of the Singular Implants® brand (Parnamirim, Brazil) were divided into 5 groups: G1 control, C (n = 5); G2 ultrasound, US (n = 5); G3 stainless steel curette, INX (n = 5); G4 Teflon® curette, TF (n = 5); G5 laser, L (n = 5). Scraping was performed on the first three turns of the implants in G2, G3 and G4. G5-L received irradiation with an Er:YAG laser (50 J, 1.5 W, 30 Hz). After the procedures, the implants were evaluated by SEM (1500–3000×). In G4-TF there was no change in surface morphology and roughness, but TF residues were deposited on the surface; in G3-INX important morphological changes were observed, with imprinting on the titanium of parallel striations typical of the use of curettes; in G5-L, flattening of the roughness peaks, but not of the valleys, was observed; in G2-US several morphological changes were observed: total kneading of the roughness in some areas and fine scratches in others. It is concluded that, of the protocols used, the least harmful to roughness was the laser, followed by the stainless steel curette and ultrasound. Teflon® curettes did not change the surface roughness but added material residues to the surface.

D. P. V. Leite (&) · G. E. Pires · A. T. Araki
Dentistry Post Graduation Program, Universidade Cruzeiro do Sul, Galvão Bueno Street, 868, Liberdade, São Paulo, 01506-000, Brazil
M. H. B. Machado · A. Baptista · R. S. Navarro
Bioengineering Post Graduation Program, Universidade Brasil, São Paulo, Brazil
A. L. Sant'ana · A. Martins e Silva · J. E. P. Nunes · R. S. Navarro
Biomedical Engineering Post Graduation Program, Universidade Brasil, São Paulo, Brazil
F. V. Bastos
Dentistry Post Graduation Program, Universidade Anhanguera, São Paulo, Brazil
L. Frigo
Dentistry Post Graduation Program, Universidade Guarulhos, Guarulhos, Brazil

Keywords

Dental implants • Curettes • Peri-implantitis • Ultrasound • Laser

1 Introduction

In 1954, Brånemark indirectly discovered, through his classic study of blood circulation in rabbit tibias, what he called osseointegration. This allowed further research on animal models, leading to results that enabled the treatment of edentulism using prostheses connected to dental implants [1]. The osseointegration process is decisive for the success of prosthetic rehabilitation of totally or partially edentulous ridges using dental implants; direct and stable contact between the implant and the surrounding bone determines this success [2]. Effective osseointegration depends on characteristics such as implant shape (macroscopic and microscopic), titanium surface quality, and its chemical-biological interaction with bone tissue [3, 4]. Characteristics such as topography, wettability, surface charge, and surface chemical composition, in contact with bone tissue, define the speed and quality of osseointegration. These properties provide bone-implant interactions, such as ionic adsorption, protein absorption, and communication between cells and the implant surface, in addition to signaling for the differentiation of these cells, leading to the union of the biomaterial with the bone [5, 6]. Surface treatment

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_9



D. P. V. Leite et al.

techniques have been proposed in order to create a biochemical union capable of accelerating the initial phases of bone neoformation on the implant [4, 7]. The purpose of this in vitro study was to evaluate the damage caused to the surface of dental implants under different conditions simulating mechanical removal of biofilm and scraping, using several different protocols.

2 Material and Methods

This is a comparative in vitro study to evaluate the damage caused to the surface of dental implants under conditions simulating mechanical removal of biofilm and scraping, using several different protocols. No biofilm was actually formed or removed. Twenty-five sterile implants from the Singular Implants® brand (Parnamirim-RN, Brazil) were divided into 5 groups: G1 control (n = 5), G2 ultrasound (n = 5), G3 stainless steel curette (n = 5), G4 Teflon® curette (n = 5), G5 Er:YAG laser (n = 5). For the laboratory study, the dental implants were attached to implant support braces to simulate the clinical scraping and cleaning procedure and assess damage to the implant surface. Treatments were performed on the first three turns of the implants (Fig. 1). In the control group (G1) no treatment was performed; in Group 2, scraping with ultrasound was performed in P mode (Periodontics) at intensity 3; in Groups 3 and 4, scraping was performed with the different types of curettes; Group 5 received irradiation with a high-power Er:YAG laser with the parameters 50 J, 1.5 W, 30 Hz. After the procedures, the implants were evaluated by SEM (1500× and 3000×) at the LCT-POLI/USP-SP laboratory.

3 Results

It was observed that in G4 Teflon® there was no morphological change in the roughness, but Teflon® residues were deposited on the surface; in G3 Inox, important morphological changes were noted, with imprinting on the titanium of parallel striations typical of the use of curettes; in G5 Laser, flattening of the roughness peaks, but not of the valleys, was observed; in G2 Ultrasound, more varied morphological changes were observed: total kneading of the roughness in some areas and fine scratches in others (Fig. 2).

Fig. 1 Illustrative image of laboratory performance of the treatment on the implant surface

Fig. 2 Representative SEM photomicrographs of the surface morphology of the implants after the different treatments. A: G1 control, B: G2 Ultrasound, C: G3 Stainless Curette, D: G4 Teflon® curette, E: G5 Laser (original magnification at 1500×). F: G1 control, G: G2 Ultrasound, H: G3 Stainless Curette, I: G4 Teflon® curette, J: G5 Laser (original magnification at 3000×)


4 Discussion

Effective osseointegration depends on characteristics such as implant shape, titanium quality, and its chemical-biological interaction with bone tissue. This analysis guides appropriate clinical choices, providing better quality and speed of osseointegration [3, 4]. These properties provide bone-implant interactions with tissues, leading to the union of the biomaterial with the bone [5, 6]. Surface treatment techniques have been proposed in order to create a biochemical union capable of accelerating the initial phases of bone neoformation on the implant [4, 7]. The purpose of this in vitro study was to evaluate the damage caused to the surface of dental implants under different conditions simulating mechanical removal of biofilm and scraping, using several different protocols. According to Louropoulou et al. [8], dental implants must integrate with three different types of tissue: epithelial, connective, and bone, so that they can be predictably durable. For Davies [9], several factors influence the success of the dental implant: primary stability, bone quality, and bone quantity [9]. Sangata [10] states that primary stability is essential for osseointegration; an implant with high primary stability will be successful, and the opposite will lead to the failure of the implant. For Goiato et al. [11], the factors that influence osseointegration are bone density, the location of the implant in the maxilla and/or mandible, and the length of the implant, so that primary stability was not considered the main requirement for osseointegration. For Thakral et al. [11], texturing techniques in dental implants can influence the establishment of osseointegration, both through cell differentiation after implant insertion and in the calcified bone matrix. According to Wennerberg and Albrektsson [12], treated surfaces result in greater bone/implant contact (BIC) than smooth implants.
Thus, implants with textured surfaces are indicated for locations with lower BIC at the end of surgery. In contrast, Att et al. [13] state that bone is deposited indistinctly on porous or smooth surfaces; porosity, then, would not be a necessary condition for bone apposition to occur. Park et al. [14] and Yan et al. [15] showed that Ti implants covered with plasma-sprayed hydroxyapatite had a greater amount of bone at the bone-implant interface when compared to implants with a smooth surface. According to De Groot et al. [16], plasma-sprayed hydroxyapatite implants have been extensively studied and are considered to have high potential for osseointegration. The force required to remove HA-coated implants from the bed is 55 MPa at 3 months and 62 MPa at 6 months, which suggests intense bone remodeling. With the aid of scanning microscopy, it was identified that each surface accumulated a different amount of organic and inorganic matter in the formation of bone matrix. This suggests that cellular responses occur regardless of the physicochemical properties of surface conditioning [17].

In this line of reasoning, Rupp et al. [16] confirm that the SLActive and SLA surfaces did not show any apparent differences when both had the same topography; however, statistically significant differences in BIC (bone-implant contact) were observed at two and four weeks of repair. This indicates that the changes were not the result of topography, but probably occurred due to changes in chemical structure [18]. Corroborating the study by Rupp et al. [18], a study carried out in Sweden by Oates et al. [19] demonstrated that an increase in the speed of bone formation can influence the stability of the implant. The SLA and SLActive surfaces were analyzed by resonance frequency analysis of osseointegration. The authors noted that the transition time from primary to secondary stability was two weeks for the SLActive surface and four weeks for SLA. These results demonstrate that the speed of bone formation directly influences the stability of the implant [18]. Huang et al. [19] studied the effect of chemical and nanotopographic changes in the early stages of osseointegration. Surface modifications by sandblasting with Ti oxide, fluoride treatment and nanohydroxyapatite were investigated by means of removal torque and histological analyses after four weeks. Analyses of SEM images indicated the presence of nanostructures on the chemically modified implants, confirming the presence of Ti, O2, C and N in all studied groups. The removal torque was higher for implants with chemical nanotopographic modifications. It was concluded that nanotopographically modified surfaces produced a differentiated surface with greater bone apposition, which explains the higher removal torque found for that surface [19]. In the present study, Fig. 2 showed that in group 4 (Teflon®) there was no morphological change in the roughness, but Teflon residues remained on the surface; in group 3 (Inox) important morphological changes were noted, with parallel streaks typical of curette use imprinted on the titanium; in group 5 (laser) there was a flattening of the roughness peaks, but not of the valleys; and in group 2 (ultrasound) the most varied morphological changes were observed: complete flattening of the roughness in some areas and fine scratches in others.

5

Conclusion

Based on the findings, it can be concluded that all tested protocols for scraping and simulated biofilm removal promoted changes in the implant surface. Treatment with the Er:YAG laser caused the least roughness and morphological damage, followed by the stainless steel curettes and ultrasound.

Acknowledgements The authors thank the Universidade Cruzeiro do Sul.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Albrektsson T, Brånemark PI, Hansson HA, Lindström J (1981) Osseointegrated titanium implants. Requirements for ensuring a long-lasting, direct bone-to-implant anchorage in man. Acta Orthop Scand 52:155–170
2. Bernardes SR, Claudino M, Sartori IAM (2006) Análise fotoelástica da união de pilar a implantes de hexágono externo e interno. Implant News 3:355–359
3. Brånemark PI (1983) Osseointegration and its experimental background. J Prosthet Dent 50:399–410
4. Skalak R (1988) Stress transfer at the implant interface. J Oral Implantol 13:581–593
5. Misch CE (1999) Implant design considerations for the posterior regions of the mouth. Implant Dent 8:376–385
6. Kasemo B, Lausmaa J (1988) Biomaterial and implant surfaces: on the role of cleanliness, contamination, and preparation procedures. J Biomed Mater Res 22:145–158
7. Buser D, Nydegger T, Oxland T, Cochran DL, Schenk RK, Hirt HP et al (1999) Interface shear strength of titanium implants with sandblasted and acid-etched surface: a biomechanical study in the maxilla of miniature pigs. J Biomed Mater Res 45:75–83
8. Louropoulou A, Van der Weijden F (2015) Influence of mechanical instruments on the biocompatibility of titanium dental implant surfaces: a systematic review. Clin Oral Implants Res 26:841–850
9. Davies JE (1998) Mechanisms of endosseous integration. Int J Prosthodont 11:391–401

D. P. V. Leite et al.

10. Sangata M (2010) Immediate loading of dental implant after sinus floor elevation with osteotome technique: a clinical report and preliminary radiographic results. J Oral Implantol 36:485–489
11. Thakral G, Thakral R, Sharma N, Seth J, Vashisht P (2014) Nanosurface: the future of implants. J Clin Diagn Res 8:07–10
12. Att W, Yamada M, Ogawa T (2009) Effect of titanium surface characteristics on the behavior and function of oral fibroblasts. Int J Oral Maxillofac Implants 24:419–431
13. Park EJ, Song YH, Hwang MJ, Song HJ, Park YJ (2015) Surface characterization and osteoconductivity evaluation of micro/nano surface formed on titanium using anodic oxidation combined with H2O2 etching and hydrothermal treatment. J Nanosci Nanotechnol 15:6133–6136
14. Yan J, Sun JF, Chu PK, Han Y, Zhang YM (2013) Bone integration capability of a series of strontium-containing hydroxyapatite coatings formed by micro-arc oxidation. J Biomed Mater Res 101:2465–2480
15. De Groot K, Geesink R, Klein CPAT (1987) Plasma sprayed coatings of hydroxylapatite. J Biomed Mat Res 21:1375–1381
16. Rupp F, Gittens RA, Scheideler L, Marmur A, Boyan BD, Schwartz Z et al (2014) A review on the wettability of dental implant surfaces I: theoretical and experimental aspects. Acta Biomater 10:2894–2906
17. Oates TW, Valderrama P, Bischof M, Nedir R, Jones A, Simpson J et al (2007) Enhanced implant stability with a chemically modified SLA surface: a randomized pilot study. Int J Oral Maxillofac Implants 22:755–760
18. Huang Y, He J, Gan L, Liu X, Wu Y, Wu F, Gu ZW (2014) Osteoconductivity and osteoinductivity of porous hydroxyapatite coatings deposited by liquid precursor plasma spraying: in vivo biological response study. Biomed Mater 9:065007
19. Healy KE, Ducheyne P (1992) Hydration and preferential molecular adsorption on titanium in vitro. Biomaterials 13:553–647
20. Wennerberg A, Albrektsson T (2009) Effects of titanium surface topography on bone integration: a systematic review. Clin Oral Implants Res 20:172–184

Comparison of Hemodynamic Effects During Alveolar Recruitment Maneuvers in Spontaneously Hypertensive Rats Treated and Non-treated with Hydralazine

L. C. Ferreira, R. L. Vitorasso, M. H. G. Lopes, R. S. Augusto, F. G. Aoki, M. A. Oliveira, W. Tavares-Lima and H. T. Moriya

Abstract

This article aims to evaluate the behavior of the arterial pressure response during alveolar recruitment maneuvers in spontaneously hypertensive rats (SHR). Such a study can lead to a better understanding of arterial pressure behavior, which is relevant in the context of hypertension as a health condition. The animals were separated into groups with and without hydralazine treatment (20 mg/kg dissolved in drinking water) and two ages (17 weeks, 2 weeks of treatment, and 21 weeks, 6 weeks of treatment). For the experiment, the animals were anesthetized, tracheostomized and connected to a small animal ventilator (flexiVent legacy, SCIREQ, Canada). The animals were also connected to a custom-built arterial pressure monitoring device. Arterial pressure and respiratory system volume were analyzed during two consecutive alveolar recruitment maneuvers (ARMs). The results indicated no statistically significant difference in arterial pressure during alveolar recruitment maneuvers, although respiratory system volume showed a significant difference ( p < 0.0001, ANOVA) among groups. Regarding the percentage comparison between the period before the ARM and the second ARM, there was a statistical difference ( p < 0.0001, ANOVA) in the 17 weeks treated versus 21 weeks non-treated and 21 weeks treated versus 21 weeks non-treated comparisons, while the remaining comparisons showed no significant difference ( p > 0.05). These findings could indicate that the drug, with the present dose and administration time, was not sufficient to decrease the basal pressure; however, it influenced the cardiovascular response during alveolar recruitment. Since there was no difference in the total air volume displacement in alveolar recruitment maneuvers among groups, the explanation for the influence on the cardiovascular response during alveolar recruitment is solely related to the cardiovascular system.

L. C. Ferreira (B) · R. L. Vitorasso · M. H. G. Lopes · R. S. Augusto · H. T. Moriya
Biomedical Engineering Laboratory, Escola Politécnica of the University of São Paulo, Av. Prof. Luciano Gualberto, 380-Cidade Universitária, São Paulo/SP, Brazil

F. G. Aoki
Institute of Science and Technology, Federal University of São Paulo, São José dos Campos/SP, Brazil

Keywords

Arterial pressure • Alveolar recruitment maneuver • Hydralazine • Hypertension • Rats

1

Introduction

Arterial Hypertension (AH) is a cardiovascular disease that affects the health system globally [1,2]. In the 2000s, 26.4% of the world's adult population had hypertension. This number becomes even higher in the forecast for the year 2025, when 29.2% of people are expected to be affected by the condition [3]. Currently, in Brazil, about 35% of the population over 40 years old suffers from heart problems related to increased arterial pressure. To reduce this number, it is possible to use antihypertensive drugs to prevent or treat the condition, since their efficiency is approximately 5 times greater than that of non-pharmacological treatments [2,4]. Animal model studies of AH have been very useful in understanding this pathology. Such studies aim to better understand the pathophysiology of AH in humans by reproducing the condition in animal models in order to observe their hemodynamic response. Data obtained from these studies can later be useful in translational medicine [5,6]. In addition, hypertensive behavior and its hemodynamic parameters have occasionally been studied in association with mechanical ventilation [7,8] and in comparison studies involving normotensive individuals [9].

M. A. Oliveira · W. Tavares-Lima
Department of Pharmacology, Institute of Biomedical Sciences, University of São Paulo, São Paulo/SP, Brazil

© Springer Nature Switzerland AG 2022
T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_10



Alveolar Recruitment Maneuver (ARM) is normally used in mechanical ventilation as a short-term increase in airway pressure in order to recruit collapsed alveoli [10,11]. This technique is used in clinical practice and also in studies of ventilatory mechanics [12]. Several recruitment maneuver techniques are discussed in the literature [13], but it is known that this technique can prevent the collapse of alveolar units, increasing the available lung area for effective gas exchange and better arterial oxygenation, through an insufflation pressure corresponding to total lung capacity applied repeatedly for a period of time [13–16]. Since one of the characteristics of antihypertensive drugs is to decrease the effects of AH through vasodilation [3,4], it is expected that their use could be beneficial when associated with lung ventilation through ARM, that is, they should produce a significant decrease in arterial pressure, although the ARM promotes an increase in lung volume that can affect the arterial pressure measurements due to lung expansion. The present study aimed to evaluate arterial pressure behavior during the performance of ARMs in two groups of Spontaneously Hypertensive Rats (SHR), in order to understand the consequences of treatment with an antihypertensive substance and how it affects the hemodynamic response of the animals. These groups were divided into two categories: rats treated with the antihypertensive drug hydralazine and those that were not treated.

2

Materials and Methods

2.1

Animals

SHR (n = 36) were used for this research. Hydralazine-treated (n = 20, 20 mg/kg dissolved in drinking water) and non-treated (n = 16, naïve) animals were divided into two groups according to their age: 17 weeks old (hydralazine-treated, n = 12, and non-treated, n = 10) received a 2-week treatment, and 21 weeks old (hydralazine-treated, n = 8, and non-treated, n = 6) received a 6-week treatment (Table 1). During the procedure, the animals were anesthetized with an intravenous (i.v.) injection of ketamine (110 mg/kg) and xylazine (10 mg/kg).

Table 1 Groups of animals

Group                      Age (weeks)   n    Weight (g)
Treated with hydralazine   17            12   308.5 ± 28.3
Treated with hydralazine   21            8    333.8 ± 42.1
Non-treated                17            10   331.9 ± 40.9
Non-treated                21            6    347.0 ± 39.2

2.2

Experimental Protocol

A 2.5-cm-long 14G cannula (Höppner, Brazil) was inserted into the animal's trachea and connected to a small animal ventilator (SAV) (flexiVent legacy, SCIREQ, Canada), which ventilated the animal with a tidal volume of 10 mL/kg, a respiratory rate of 120 cycles per minute and a PEEP (positive end-expiratory pressure) of 3 cmH2O. Although smaller tidal volumes are used for protective mechanical ventilation, higher tidal volumes are also applied when evaluating physiological parameters [17–19]. The respiratory system volume was measured through the SAV. The jugular vein was dissected and a flexible PVC tube (Critchley Electrical Products PTY, Australia) was inserted during ventilation. Furthermore, the right carotid artery was cannulated with a polyethylene catheter to measure the animal's arterial pressure. The respiratory musculature was blocked with pancuronium bromide (1 mg/kg i.v.). After that, the standard SAV protocol for ARM was performed [18], which consists of two alveolar recruitment maneuvers up to 30 cmH2O (the pressure was increased as a ramp for 3 seconds until it reached 30 cmH2O and then maintained at this pressure for another 3 seconds) with an interval of 5 seconds between the two ARMs.
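The timing of the ARM protocol described above (ramp, hold, pause, second maneuver) can be sketched as a pressure set-point waveform. This is a minimal reconstruction of the protocol timing only, assuming a baseline at the PEEP level; it is not the flexiVent's internal control waveform, and all parameter names are illustrative.

```python
import numpy as np

def arm_pressure_profile(fs=256.0, peep=3.0, p_max=30.0,
                         ramp_s=3.0, hold_s=3.0, pause_s=5.0,
                         n_maneuvers=2):
    """Airway pressure set-point for the ARM protocol: a 3 s linear
    ramp from PEEP to 30 cmH2O, a 3 s hold, and a 5 s pause at PEEP
    between the two maneuvers (sampled at the SAV's 256 Hz)."""
    ramp = np.linspace(peep, p_max, int(fs * ramp_s), endpoint=False)
    hold = np.full(int(fs * hold_s), p_max)
    pause = np.full(int(fs * pause_s), peep)
    maneuver = np.concatenate([ramp, hold])
    parts = []
    for k in range(n_maneuvers):
        parts.append(maneuver)
        if k < n_maneuvers - 1:
            parts.append(pause)  # rest at PEEP between maneuvers
    return np.concatenate(parts)

profile = arm_pressure_profile()
```

The two-maneuver profile lasts 17 s (3 + 3 + 5 + 3 + 3), matching the protocol timing stated in the text.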

2.3

Data Acquisition

An analog-to-digital 24-bit converter module (HX711, Avia Semiconductor, China) was used to acquire arterial pressure data from an invasive arterial pressure sensor (New NPC 100 Sensor, Amphenol, USA). A custom-made software routine was implemented to calibrate the device using reference pressures of 0 mmHg and 200 mmHg, to which the device was subjected through a mercury column. The data were exported at a rate of 50 samples per second with a resolution of 0.01 mmHg. The data output from the SAV provided the airway pressure and tracheal volume signals at a sampling frequency of 256 Hz.
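A two-point calibration like the one described (0 and 200 mmHg references through a mercury column) reduces to fitting a line through the two recorded readings. The sketch below assumes hypothetical raw ADC counts (`raw_lo`, `raw_hi`); the authors' actual routine and values are not given in the text.

```python
def make_calibration(raw_lo, raw_hi, p_lo=0.0, p_hi=200.0):
    """Two-point linear calibration: returns a function mapping raw
    ADC counts from the 24-bit converter to pressure in mmHg, using
    the readings taken at the two reference pressures."""
    gain = (p_hi - p_lo) / (raw_hi - raw_lo)   # mmHg per count
    offset = p_lo - gain * raw_lo              # mmHg at zero counts
    return lambda raw: gain * raw + offset

# Hypothetical counts recorded at 0 and 200 mmHg
to_mmhg = make_calibration(raw_lo=120_000, raw_hi=910_000)
```

Any reading between the references is then interpolated linearly, which is adequate for a sensor that is linear over the calibrated range.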

2.4

Data Analysis

Arterial pressure data were treated using the Tukey criterion for outlier detection [20]. Then, in order to facilitate the visualization of the stabilization segments, the signal was filtered with a moving average of length 21 samples (N = 21 in Eq. (1)):

$$\overline{PA}_i = \frac{1}{N}\sum_{j=1}^{N} PA_{i+j} \quad (1)$$
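The two preprocessing steps above (Tukey's fences for outliers, then a 21-sample moving average) can be sketched as follows; this is an illustrative NumPy implementation, not the authors' code, and the fence multiplier k = 1.5 is the conventional default.

```python
import numpy as np

def tukey_clean(x, k=1.5):
    """Flag outliers with Tukey's fences (Q1 - k*IQR, Q3 + k*IQR)
    and replace them with NaN."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    x = np.asarray(x, dtype=float).copy()
    x[(x < q1 - k * iqr) | (x > q3 + k * iqr)] = np.nan
    return x

def moving_average(x, n=21):
    """Moving average of Eq. (1): each output sample is the mean of
    n consecutive input samples (convolution, 'valid' part only)."""
    return np.convolve(x, np.ones(n) / n, mode="valid")
```

With `mode="valid"` the filtered signal is shorter than the input by n - 1 samples, which avoids edge artifacts at the cost of trimming the ends.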


After filtering the arterial pressure signal, certain sections were selected for analysis, before the ARM and during the ARM (in order to adopt a standard, 1 s was selected in each mentioned section), since the objective was to evaluate whether hydralazine treatment promotes a significant impact on arterial pressure decrease and, if so, when this impact is more critical (before or during the recruitment). The absolute arterial pressures of the animals before and during the ARM were then compared in terms of percentages. A statistical package (Prism 8, GraphPad Software, USA) was used. For the analysis of tracheal volumes in the different groups studied during the ARM, we used two-way ANOVA (Analysis of Variance) with Bonferroni post-hoc tests. The arterial pressure data were submitted to a normality test (Shapiro-Wilk); in case of adherence to the normal distribution, parametric analysis (one-way ANOVA) was used, otherwise a non-parametric test (Kruskal-Wallis) was applied. For all analyses, statistical significance of p < 0.05 was adopted.
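To make the parametric branch of that analysis concrete, the one-way ANOVA F statistic can be computed from scratch; this is an illustrative NumPy re-implementation (the authors used Prism 8), and the Shapiro-Wilk and Kruskal-Wallis steps would in practice come from a statistics library such as SciPy.

```python
import numpy as np

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: ratio of between-group to
    within-group mean squares, for a list of 1-D sample arrays."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    # Between-group sum of squares and degrees of freedom
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    df_between = len(groups) - 1
    # Within-group sum of squares and degrees of freedom
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_within = len(all_data) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F (relative to the F distribution with the two degrees of freedom above) indicates that group means differ more than the within-group scatter would explain.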

3

Results

Figure 1 shows the profile of a representative arterial pressure measurement during data collection. It is possible to notice the absence of stabilization (plateau) during the first ARM. Regarding the measured respiratory system volumes (Fig. 2), a significant difference between the first and second ARMs was verified ( p < 0.0001). In the post-hoc analysis, a significant difference between ARMs was found in the following groups: 17 weeks treated, 21 weeks treated, and 21 weeks non-treated (* p < 0.05). The graphs in Fig. 3 represent the absolute values of arterial pressure before the second ARM (a), the absolute values of arterial pressure during the second ARM (b) and the percentage of arterial pressure decrease during the ARM (c). In terms of absolute values of arterial pressure before the ARM, there was no significant difference among the groups, both in the analysis of variance and in the post-hoc analysis. For absolute values of arterial pressure during the ARM, the analysis of variance showed a significant difference among the groups ( p < 0.05), while the post-hoc analysis showed no significant difference among groups ( p > 0.05). Regarding the percentage comparison between the period before the ARM and the second ARM, a significant difference was observed among the groups in the analysis of variance ( p < 0.05). A statistical difference was found in the 17 weeks treated versus 21 weeks non-treated and 21 weeks treated versus 21 weeks non-treated comparisons, while the remaining comparisons showed no significant difference ( p > 0.05).

Fig. 1 Arterial pressure behavior during the alveolar recruitment maneuvers. Graph a is the signal extracted from the custom-built arterial pressure measurement device and b is the same signal after being filtered by a moving average of 21 samples

4

Discussion

In our study, we aimed to verify a possible influence of the ARM on arterial blood pressure using the SHR strain, commonly used in arterial hypertension research [21–23]. The ARM is often used in animal models to homogenize lung volume history prior to the assessment of respiratory mechanics. Clinically, this technique is used to improve oxygenation by opening previously collapsed airways [24]. Since arterial pressure is a vital signal and alveolar recruitment may have cardiovascular repercussions [12,25] such as hypotension, the main objective of this work was


Fig. 2 Comparison between ARM volumes (expressed as means and standard deviations). A significant difference between the first and second ARMs was found in the analysis of variance ( p < 0.0001, two-way ANOVA). In the post-hoc analysis, there was a significant difference between ARMs in the following groups: 17 weeks treated, 21 weeks treated, and 21 weeks non-treated (* p < 0.05)

to study the arterial blood pressure in SHR during alveolar recruitment. In order to collect the data in a standardized way, it was decided to use the second ARM instead of the first one to which the animals were submitted. This is because, during the first ARM, not all animals showed arterial pressure stabilization, making it impossible to collect physiologically relevant information in this context (Fig. 1). It was found that, statistically, the volume displacement of the respiratory system during the second ARM was lower than during the first ARM (Fig. 2). The hypothesis is that this happens because, between the first and the second ARM, there is not enough time to return to Functional Residual Capacity (FRC); thus, the volume remaining above FRC after the first ARM influences the volume measured in the second ARM. This phenomenon may influence the assessed compliance/elastance, since the compliance may be obtained simply by dividing the volume displacement by the difference in pressure. Besides the volume displacement repercussion, this work studied the arterial blood pressure during those maneuvers. There was no difference in the basal arterial pressure between treated and non-treated animals in the different age groups (Fig. 3).

Fig. 3 Arterial pressure comparisons. a Absolute arterial pressure before the ARMs ( p > 0.05, ANOVA). b Absolute arterial pressure during the second ARM ( p < 0.05, ANOVA); in the post-hoc analysis, no significant differences were observed. c Pressure decrease percentage comparing the period before the two ARMs with the pressure during the second ARM ( p < 0.05, ANOVA). * p < 0.05


This finding was not expected. However, one possible explanation is related to the hydralazine administration time and dose. Notwithstanding, the pressure during recruitment was not different among groups in the post-hoc analysis, only in the analysis of variance, despite being visually lower in the non-treated groups (Fig. 3). However, the treated groups presented a smaller decrease during the recruitment, which is a desirable physiological response. Thus, the cardiovascular system of the treated groups was less impacted by the intrinsic transitory hypotension during the ARM than that of the non-treated groups. The arterial blood pressure is directly influenced by the recruitment, since the heart is mechanically compressed by the lung expansion, impacting the amount of blood entering the atrium and the output blood resistance [26]. These findings could indicate that the drug, with the present dose and administration time, was not sufficient to decrease the basal pressure; however, it influenced the cardiovascular response during alveolar recruitment. Since there was no difference in the total air volume displacement in ARMs among groups (Fig. 2), the explanation for the influence on the cardiovascular response during alveolar recruitment is solely related to the cardiovascular system. Therefore, cardiovascular compliance or resistance, rather than respiratory compliance, played the major role in the results found herein.


5

Conclusion

There was no difference in basal arterial blood pressure between treated and non-treated animals with the present hydralazine doses and treatment time. However, the cardiovascular system of the treated groups was less impacted by the intrinsic transitory hypotension during the ARM than that of the non-treated groups, which is a desirable physiological response. Furthermore, since there was no difference in the total air volume displacement in ARMs among groups, the explanation for the influence on the cardiovascular response during alveolar recruitment is solely related to the cardiovascular system.

6

Compliance with Ethical Requirements

6.1

Conflict of Interest

The authors declare that they have no conflict of interest.

6.2

Statement of Human and Animal Rights

All experiments involving laboratory animals were evaluated and approved by the "Ethics Committee for Animal Use" of the Institute of Biomedical Sciences of the University of São Paulo (CEUA protocol number 1936211117). The procedures comply with Law number 11794 (08/10/2008), which regulates all research activities involving the use of animals in Brazil.

Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES—Brazilian Coordination for the Improvement of Higher Education Personnel)—Brazil—Finance Code 001 (88882.333348/2019-01) and the Conselho Nacional de Pesquisa e Desenvolvimento Científico e Tecnológico (CNPq—Brazilian National Council of Scientific and Technological Development)—Brazil (308298/2016-0 and 408006/2016-1).

References

1. Castrillón-Spitia JD, Franco-Hurtado A, Garrido-Hernández C et al (2018) Utilización de fármacos antihipertensivos, efectividad e inercia clínica en pacientes. Rev Colomb Cardiol 25(4):249–256
2. Souza AC, Borges JWP, Moreira TMM (2016) Qualidade de vida e adesão ao tratamento em hipertensão: revisão sistemática com metanálise. Rev Saúde Pública 2016:50–71
3. Kearney P, Whelton M, Reynolds K et al (2005) Global burden of hypertension: analysis of worldwide data. The Lancet 365(9455):217–223
4. Limas C, Westrum B, Limas CJ (1984) Comparative effects of hydralazine and captopril on the cardiovascular changes in spontaneously hypertensive rats. Am J Pathol 117:360–371
5. Doggrell S (1998) Rat models of hypertension, cardiac hypertrophy and failure. Cardiovasc Res 39(1):89–105
6. Garutti I, Martinez G, Cruz P et al (2009) The impact of lung recruitment on hemodynamics during one-lung ventilation. J Cardiothorac Vasc Anesth 23(4):506–508
7. Bento AM, Cardoso LF, Tarasoutchi F et al (2014) Hemodynamic effects of noninvasive ventilation in patients with venocapillary pulmonary hypertension. Arq Bras Cardiol 103(4):410–417
8. Zamanian R, Haddad F, Doyle R et al (2007) Management strategies for patients with pulmonary hypertension in the intensive care unit. Crit Care Med 35(9):2037–2050
9. Ferreira L, Vanderlei L, Valenti V (2014) Efeitos da ventilação mecânica não invasiva sobre a modulação autonômica cardíaca. Revista Brasileira de Cardiologia 27:53–58
10. Rosa RG, Rutzen W, Madeira L et al (2015) Uso da tomografia por impedância elétrica torácica como ferramenta de auxílio às manobras de recrutamento alveolar na síndrome do desconforto respiratório agudo: relato de caso e breve revisão da literatura. Rev Bras Ter Intensiva 27(4):406–411
11. Costa DC, Rocha E, Ribeiro TF (2009) Associação das manobras de recrutamento alveolar e posição prona na síndrome do desconforto respiratório agudo. Rev Bras Ter Intensiva 21(2):197–203
12. Hartland B, Newell T, Damico N (2015) Alveolar recruitment maneuvers under general anesthesia: a systematic review of the literature. Respir Care 60(4):609–620
13. Doras C, Guen ML, Ferenc P et al (2015) Cardiorespiratory effects of recruitment maneuvers and positive end expiratory pressure in an experimental context of acute lung injury and pulmonary hypertension. BMC Pulm Med 15:82
14. Pinto A, Reis M, Teixeira C et al (2015) Alveolar recruitment: who needs? how? when? Revista Médica de Minas Gerais 2015:25
15. Barbas CSV, Bueno MAS, Amato MBP et al (1998) Interação cardiopulmonar durante a ventilação mecânica. Rev Soc Cardiol Estado de São Paulo 3:28–41
16. Rodrigues YCSJ, Studart RMB, Andrade IRC et al (2012) Ventilação mecânica: evidências para o cuidado de enfermagem. Esc Anna Nery 16(4):789–795
17. Severgnini P, Selmo G, Lanza C et al (2013) Protective mechanical ventilation during general anesthesia for open abdominal surgery improves postoperative pulmonary function. Anesthesiology 118:6
18. Barros A, Takeuchi V, Fava F et al (2020) Effects of arterial and tracheal pressures during a respiratory mechanics protocol in spontaneously hypertensive rats. MEDICON 76:551–558
19. Santos C, Moraes L, Santos R et al (2012) Effects of different tidal volumes in pulmonary and extrapulmonary lung injury with or without intraabdominal hypertension. Intensive Care Med 38:499–508
20. Tukey JW (1970) Exploratory data analysis. Addison-Wesley, Reading, Massachusetts
21. Doris P (2017) Genetics of hypertension: an assessment of progress in the spontaneously hypertensive rat. Physiol Genom 49(11):601–617
22. Jennings DB, Lockett HJ (2000) Angiotensin stimulates respiration in spontaneously hypertensive rats. Regul Integr Phys 287(5):R1125–R1133
23. Iriuchijima J (1973) Cardiac output and total peripheral resistance in spontaneously hypertensive rats. Jpn Heart J 14(3):267–272
24. Reiss LK, Kowallik A, Uhlig S (2011) Recurrent recruitment manoeuvres improve lung mechanics and minimize lung injury during mechanical ventilation of healthy mice. PLoS One 6(9):1–15
25. Hess DR (2015) Recruitment maneuvers and PEEP titration. Respir Care 60(11):1688–1704
26. Lovas A, Szakmány T (2015) Haemodynamic effects of lung recruitment manoeuvres. BioMed Res Int 2015

Experimental Study of Bileaflet Mechanical Heart Valves

Eraldo Sales, M. Mazzetto, S. Bacht, and I. A. Cestari

Abstract

Biological or mechanical heart valve prostheses are used as a treatment to replace failing native heart valves. The goal of this study is to investigate the hydrodynamic performance of small-size bileaflet mechanical heart valves (BMHVs) considering their use to control flow direction in a pulsatile pediatric ventricular assist device (VAD) [1]. Small-size BMHVs of 17 and 19 mm were tested in vitro and compared to a 23 mm prosthesis. Each prosthesis was placed in a pulse duplicator (40–90 bpm, 2.0–4.0 L/min flow range) and pressure and flow signals were recorded to determine the pressure gradient, flow regurgitation and effective orifice area (EOA). Pressure gradients (maximum/minimum; mmHg) were 12.2/8.5, 8.4/6.3 and 8.2/5.8 for the 17, 19 and 23 mm sizes, respectively. Valve effective orifice areas (maximum/minimum; cm²) were 1.05/1.02, 1.37/1.24 and 1.55/1.35 for the 17, 19 and 23 mm sizes, respectively. The regurgitation fractions (maximum/minimum; %) obtained were 6.91/5.29, 10.47/6.08 and 14.63/10.04 for the 17, 19 and 23 mm sizes, respectively. The results suggest that the mechanical valves have adequate performance according to the requirements of the ISO 5840:2015 standard and that they can be used to effectively control flow direction in a pediatric pulsatile ventricular assist device.

Keywords

Hydrodynamic performance • Cardiac simulator • Artificial heart valves • Mechanical prosthesis

E. Sales (B) · M. Mazzetto · S. Bacht · I. A. Cestari
Laboratorio de Bioengenharia, Instituto do Coracao, Hospital das Clinicas HCFMUSP, Faculdade de Medicina, Universidade de Sao Paulo, Sao Paulo, SP, Brazil
e-mail: [email protected]

1

Introduction

The function of heart valves is to ensure that blood flow occurs unidirectionally through the cardiac chambers. When these valves fail, they need to be replaced by a cardiac prosthesis. The currently available prostheses fall into two main categories: mechanical valves and bioprostheses [2]. In vitro characterization of the dynamic performance of these prostheses is necessary to predict their hemodynamic behavior prior to implantation [3]. In this study, in vitro tests of BMHVs of sizes #17 and #19 were performed and compared to a regular-size #23 valve. The minimum requirements for the hydrodynamic performance of cardiac prostheses as determined by ISO 5840:2015 are shown in Table 1 for valves of sizes 17–25 mm.

2

Materials and Methods

2.1 Experimental Setup and Data Collection

The characterization of the hydrodynamic performance of cardiac prostheses is performed by measuring the pressure gradient and the flow through the prosthesis. We utilized a pulse duplicator ("The Shelhigh Pulse Duplicator®", Shelhigh Inc, New Jersey, USA) to simulate physiological conditions, as shown in Fig. 1. BMHVs were positioned in the aortic position of the pulse duplicator, which was adjusted to generate flows in the range of 2.0–4.0 L/min with a stroke volume of 65 ml and pulse rates varying from 40 to 90 cycles per minute. The pulse duplicator was filled with a blood analog fluid (33.33% glycerin and 66.67% physiological saline solution). A 25 mm BMHV (St. Jude Medical® Regent®, USA) was positioned in the mitral position of the pulse duplicator in all tests. Flow and pressure signals were recorded using a DataQ WinDaq DI-220 acquisition system (DataQ Instruments, USA) at a sampling rate of 500 samples per second per channel.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_11


Table 1  Minimum requirements for an aortic prosthesis hydrodynamic performance according to ISO 5840:2015

Diameter (mm)                  17       19       21       23       25
EOA (cm²)                      ≥ 0.70   ≥ 0.85   ≥ 1.05   ≥ 1.25   ≥ 1.45
Regurgitation fraction (%)     ≤ 10     ≤ 10     ≤ 10     ≤ 10     ≤ 15

Fig. 1 Circuit for in vitro characterization of the hydrodynamic performance of mechanical valve prostheses

The pressure gradient was obtained by a differential transducer with input and output pressure ports positioned before and after the valve, respectively. The pressure gradient was calculated as the average value of the positive differential pressure signal recorded in each pulse, that is, when the ventricular pressure (before the aortic prosthesis, following the flow direction) is greater than the aortic pressure (after the prosthesis). The difference between ventricular and aortic pressure was measured with a Validyne DP15-TL differential pressure transducer (Validyne Engineering Corporation, USA), and an ultrasonic flow meter was used to measure instantaneous flow (Transonic Systems Inc., Transonic, USA). Pressures in the so-called ventricular and aortic compartments of the pulse duplicator were measured with Edwards PX12N absolute pressure transducers (Edwards Lifesciences, USA).

2.2 Data Analysis

To obtain the effective orifice area (EOA), the equation proposed by Gorlin and Gorlin was used [4, 5]:

EOA = q_VRMS / (51.6 √(Δp/ρ))    (1)

where Δp is the mean differential pressure (positive differential pressure, in mmHg), ρ is the density of the test fluid (g/cm³) and q_VRMS is the root mean square of the anterograde flow (ml/s). Pressure and flow measurements made in ten cycles were used to determine the pressure gradient, regurgitation fraction and effective orifice area of each BMHV studied. Uncertainties were determined as the standard deviation of the measurements. The root mean square of the anterograde flow along a pulse is given by:

q_VRMS = √( (1/(t₂ − t₁)) ∫[t₁,t₂] q_V(t)² dt )    (2)

where q_V(t) is the pulsatile flow (ml/s), and t₁ and t₂ are the initial and final instants of the positive differential pressure period, respectively.
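As a sketch of how Eqs. (1) and (2) are applied to the sampled signals, the snippet below computes q_VRMS and the EOA from a discretized flow pulse. The half-sine pulse shape, 0.3 s ejection period, pressure and density values are illustrative assumptions, not measured data from this study; only the 500 samples/s rate and the equations themselves come from the text.

```python
import numpy as np

# Hypothetical anterograde flow signal q_V(t) over the positive
# differential-pressure period [t1, t2], sampled at 500 samples/s
# as in the acquisition setup described above.
fs = 500.0
t = np.arange(0.0, 0.3, 1.0 / fs)        # assumed 0.3 s ejection period
q_v = 200.0 * np.sin(np.pi * t / 0.3)    # assumed half-sine pulse, peak 200 ml/s

# Eq. (2): root mean square of the anterograde flow (uniform sampling,
# so the time integral reduces to a mean over samples)
q_vrms = float(np.sqrt(np.mean(q_v ** 2)))

# Eq. (1): effective orifice area (cm^2)
dp = 7.0     # mean positive differential pressure (mmHg), order of Table 3
rho = 1.09   # assumed density of the glycerin/saline analog (g/cm^3)
eoa = q_vrms / (51.6 * np.sqrt(dp / rho))
print(round(q_vrms, 1), round(eoa, 2))
```

With these assumed inputs the EOA comes out near 1 cm², the same order as the values reported for the #17–#23 valves below.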

Experimental Study of Bileaflet Mechanical Heart Valves


Table 2  Characteristics of the BMHV prostheses

Model                              17 AHPJ-505   19 AHPJ-505   23 AHPJ-505
Type                               Aortic        Aortic        Aortic
Tissue annulus diameter (mm)       17            19            23
Geometric orifice area (cm²)       1.6           2.1           3.1
Implant height open (mm)           4.8           5.2           6.7
Overall height open (mm)           9.9           10.6          13.3
Cuff OD (standard/expanded) (mm)   23/24         25/26         29/31

2.3 BMHVs

Nominal characteristics of the BMHVs (St. Jude Medical® Regent®, USA) of sizes #17, #19 and #23 are shown in Table 2. An image of a BMHV is shown in Fig. 2 in the fully open (a) and closed (b) positions.

Fig. 2 BMHV in open position (a) and closed position (b)

The regurgitant fraction (RF), as a function of the closing volume (CV), leakage volume (LV) and total forward volume (FV), is given by [5]:

RF = (CV + LV) / FV × 100    (3)

Fig. 3 Representative hydrodynamic curves for BMHV

3 Results

Figure 3 shows representative signals recorded for different flow rates (2.0–4.0 L/min). Pressure gradient values for the three evaluated prostheses are presented in Table 3. Effective orifice areas obtained (mean ± SD) for the BMHV studied are shown in Fig. 4. The regurgitation fractions versus flow are shown in Fig. 5 for prostheses #17, #19 and #23.


Table 3  Pressure gradients (ΔP, mmHg) for size #17, #19 and #23 BMHVs

Flow rate (L/min)   ΔP #17 (mmHg)   ΔP #19 (mmHg)   ΔP #23 (mmHg)
2.0                 12.2 ± 0.2      7.0 ± 0.2       5.8 ± 0.1
2.5                 11.9 ± 0.1      8.4 ± 0.4       6.2 ± 0.3
3.0                 9.4 ± 0.4       8.3 ± 0.3       6.6 ± 0.3
3.5                 8.5 ± 0.3       6.3 ± 0.3       8.2 ± 0.5
4.0                 9.4 ± 0.3       6.6 ± 0.3       7.0 ± 0.4

Fig. 4 Effective orifice area for #17 (1.04 ± 0.01 cm2), #19 (1.30 ± 0.05 cm2) and #23 (1.44 ± 0.08 cm2) bileaflet mechanical prostheses

4 Discussion

There are several studies in the literature reporting the characterization of BMHVs [6, 7]; however, the performance of smaller-size BMHVs has not been reported. Lee et al. [8] investigated the application of a 21 mm cardiac BMHV scaled to a pediatric VAD, but the tests reported did not take into account the metrics of interest for the specific VAD application, including the pressure gradient, effective orifice area and regurgitant fraction. Dasi et al. [6] studied flow through a 25 mm BMHV, which is suitable only for ventricular assist devices of larger volumes. Yun et al. [7] reported a model of a BMHV for assessment of blood damage which was scaled in size for pediatric patients. To the best of our knowledge, there are no reports of 17 and 19 mm BMHVs compatible with pediatric VADs that consider all the parameters of interest for this specific application. According to the requirements of ISO 5840:2015 for sizes #17, #19 and #23, the EOA (cm²) must be greater than or equal to 0.70, 0.85 and 1.25, respectively. The average values obtained and their uncertainties were 1.04 ± 0.01 cm²,

Fig. 5 Regurgitation fraction for BMHVs sizes 17 (6.4 ± 0.8%), 19 (8 ± 1%) and 23 (12 ± 2%) versus flow (L/min)

1.30 ± 0.05 cm² and 1.44 ± 0.08 cm², respectively. According to the same standard, the regurgitation fraction must be equal to or less than 10%. The average regurgitation fractions and uncertainties obtained in our measurements for the #17, #19 and #23 BMHVs were 6.4 ± 0.8%, 8 ± 1% and 12 ± 2%, respectively. Both the EOA and the regurgitation fraction obtained demonstrate that the mechanical valves tested have adequate performance according to the minimum requirements established by ISO 5840:2015.

5 Conclusions

Three bileaflet mechanical prostheses used as replacements for heart valves were evaluated. The results showed that the hydrodynamic performance of these devices is in accordance with the requirements established by the regulatory test standard for the characterization of these prostheses (ISO 5840:2015). Our results suggest that the BMHVs tested can effectively control flow direction, as required for a pediatric pulsatile ventricular assist device.

Acknowledgements We thank FAPESP (2012/5083-6), FINEP (1253/13) and CNPq (467270/2014-7; 311191/2017-7) for the financial support.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Cestari IA, Mazzetto M, Oyama HT, Bacht S, Jatene MB, Cestari IN, Jatene FB (2019) Design and hydrodynamic performance of a pediatric pulsatile pump. In: Costa-Felix R, Machado J, Alvarenga A (eds) Proceedings of the 26th Brazilian congress on biomedical engineering, vol 70/1. IFMBE. Springer, Singapore, pp 85–88
2. Burriesci G, Marincola FC, Zervides C (2010) Design of a novel polymeric heart valve. J Med Eng Technol 34(1):7–22
3. Rahmani B et al (2012) Manufacturing and hydrodynamic assessment of a novel aortic valve made of a new nanocomposite polymer. J Biomech 45:1205–1211
4. Gorlin R, Gorlin SG (1951) Hydraulic formula for calculation of area of the stenotic mitral valve, other cardiac valves, and central circulatory shunts. Am Heart J 41:1–29
5. Claiborne TE et al (2013) In vitro evaluation of a novel hemodynamically optimized trileaflet polymeric prosthetic heart valve. J Biomech Eng 135:021021-1–021021-8
6. Dasi LP et al (2008) Passive flow control of bileaflet mechanical heart valve leakage flow. J Biomech 41(6):1166–1173
7. Yun BM et al (2014) Computational simulations of flow dynamics and blood damage through a bileaflet mechanical heart valve scaled to pediatric size and flow. J Biomech 47(12):3169–3177
8. Lee H et al (2007) Hydrodynamic characteristics of bileaflet mechanical heart valves in an artificial heart: cavitation and closing velocity. Artif Organs 31:532–537

Preliminary Results of Structural Optimization of Dental Prosthesis Using Finite Element Method M. M. Togashi, M. P. Andrade, F. J. dos Santos, B. A. Hernandez, and E. A. Capello Sousa

Abstract

Despite the high success rate of dental prostheses in treating patients, mechanical failures still occur. Studies of the biomechanical behaviour of dental prostheses are thus important to avoid such failures and to ensure patients' well-being. This study proposed a parametric and optimization analysis of a dental implant to investigate which of the implant's structural parameters are most relevant to prevent failure and improve osseointegration. A mathematical function called a response surface was obtained based on von Mises stresses in the cortical bone, RSM (Response Surface Methodology), DOE (Design of Experiments), and finite element models. A structural optimization analysis was conducted with the objective of minimizing stresses in the cortical bone. In addition to the sensitivity analysis of the parameters, a more agile process to estimate critical stress through equations was presented, providing a faster way of identifying potential failure causes.

Keywords: Biomechanics · Dental prosthesis · Finite element method · Parametric analysis · Optimization

1 Introduction

Dental prostheses are biomechanical structures responsible for restoring the mechanical function of damaged or lost natural teeth. The use of osseointegrated implants has

M. M. Togashi  M. P. Andrade (&)  F. J. dos Santos  B. A. Hernandez  E. A. Capello Sousa Department of Mechanical Engineering, Engineering College of Bauru, São Paulo State University - UNESP, Bauru, Brazil e-mail: [email protected]

become a common practice in dentistry due to countless success reports. However, dental implants still present long-term problems, mostly because they are mechanical components, which are subject to failures [1]. External loads are constantly applied to the implant system, generating stresses and strains in the implant and its surrounding bone. Within certain levels, these are beneficial and contribute to osseointegration; when they are excessive or insufficient, they can cause system instability and bone loss, affecting implant stability [2]. Studies in biomechanics present complexities, such as inhomogeneous mechanical properties and non-linear geometries, which make an analytical solution unfeasible. As a result, numerical methods, especially the Finite Element Method (FEM), are widely applied in this area [3–6]. For example, a study reported that it is possible to evaluate, with good precision, the mechanical behaviour of dental prostheses using the Finite Element Method [7]. In order to develop new and better implants, it is necessary to investigate the influence of the different parameters that affect the structural behaviour of the prosthesis, such as the mechanical properties of the bone and the implant's design. However, if such an analysis is conducted manually, by individually comparing each parameter, it will certainly be a time-consuming process. When a structural analysis becomes too complex, with different parameters and constraints, a manual analysis limits the possibility of a wider exploration of the modelling options; an alternative methodology must then be explored [8]. One way of evaluating several parameters is through the Response Surface Methodology (RSM). RSM is a set of statistical and mathematical techniques applied in the development, improvement, and optimization of parameters [9].
The purpose of this methodology is to find a mathematical equation, via response equations or functions and based on the initial variables, so that, given a set of new independent variables, the approximate answer is

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_12


instantaneously found. RSM uses the Design of Experiments (DOE) technique to create the response surface. The DOE is based on experimenting several combinations of parameters and measuring the responses provided. Each experiment, with a specific combination of parameters, is then translated to the finite element models and the output results are used to create the response curve [10]. This study presents the preliminary results of an optimization analysis of a dental prosthesis and an analysis of the influence of the main structural parameters in the implant structural behaviour using the Finite Element Method and the Response Surface Methodology.
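The DOE idea described above can be made concrete with a few lines of code. The sketch below generates the coded points of a Box-Behnken design (the design used later in this study): all (±1, ±1) combinations on each pair of factors, i.e. the midpoints of a cube's edges, plus a centre point. This is a generic illustration, not the authors' code.

```python
from itertools import combinations

def box_behnken(k):
    """Coded Box-Behnken design for k factors: every (+1/-1, +1/-1)
    combination on each pair of factors (edge midpoints of the
    hypercube, remaining factors at 0), plus a single centre point."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                point = [0] * k
                point[i], point[j] = a, b
                runs.append(tuple(point))
    runs.append((0,) * k)  # centre point
    return runs

design = box_behnken(3)  # three factors, e.g. H, h and E
print(len(design))       # 13 runs: 3 factor pairs x 4 sign combinations + centre
```

For three factors this yields exactly the 13 experiments used in this study, one finite element model per run.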

2 Materials and Methods

The finite element model developed in this study is similar to the prosthesis used elsewhere [11]. The prosthesis consists of a Brånemark single implant (Nobel Biocare, Göteborg, Sweden), into which a 5 mm abutment (Nobel Biocare, Göteborg, Sweden) was screwed with a titanium screw. A prosthetic crown, composed of a cobalt–chromium (Co–Cr) alloy and coated with a feldspathic ceramic (CNG Solutions Prosthetics, São Paulo, SP, Brazil), was placed on top of the abutment using another titanium screw. A previously developed finite element (FE) model of the mandible, containing both cortical and cancellous bone, was used to insert the implant and its components [12]. The mandibular bone was imported into the finite element software Ansys (Ansys 15.0, Swanson Analysis System, Houston, PA, USA). A hole was made in the bone through a Boolean operation: a cylinder with the same diameter as the implant was generated in the centre of the bone and this volume was subtracted from the bone, creating a hole into which the implant could be inserted. The internal components of the prosthesis (implant, screws and abutment) were modelled directly in the finite element software using the programming language APDL (Ansys Parametric Design Language). The crown geometry was obtained through 3D scanning and CAD manipulation [12]. The outer layer (feldspathic porcelain) and the inner layer (Co–Cr alloy) of the crown were built based on micro-CT images. The central hole in the crown (to place the screws) was also obtained by a Boolean operation. Figure 1 illustrates the final volume of the prosthesis inserted into the bone. After modelling the geometry, homogeneous, isotropic and linearly elastic material properties were applied to each component, as shown in Table 1.

Fig. 1 Final structure composed of the prosthesis incorporated into the bone

Table 1  Properties of the components used in the model

Material          Young's modulus (GPa)   Poisson's coefficient   Reference
Cortical bone     14                      0.3                     [13]
Cancellous bone   1                       0.3                     [13]
Implant           110                     0.34                    [14]
Abutment          110                     0.34                    [14]
Fixation screw    100                     0.34                    [14]
Abutment screw    100                     0.34                    [14]
Metallic crown    218                     0.33                    [15]
Ceramic crown     68.9                    0.28                    [16]

The element SOLID187 was used to discretize the components in the FE software. It is a 3D quadratic solid element with ten nodes and three degrees of freedom per node. After generating the mesh, a load of 75 N was applied in the negative y-direction (vertical) on top of the crown. Movement restrictions were applied in all degrees of freedom of the mandibular bone in order to simulate its real fixation. This configuration is illustrated in Fig. 2. Contact elements were inserted at the shared surfaces between implant and abutment and between abutment and crown. Preloads of 100 N·mm and 200 N·mm were applied to the implant and crown screws, respectively, to simulate the tightening torque.

Figure 3 illustrates the parameters assessed in this study: implant length (H), abutment height (h), and Young's modulus of the cortical bone (E). Figure 4 presents the geometrical representation of the coded values for each variable: (+1) for maximum values, (0) for average values, and (−1) for minimum values. Table 2 depicts the real and coded values for each parameter. Each analysis shown in Table 2 was carried out by adapting the finite element model to that combination of parameters. By means of this parametric analysis, a function called the response surface was obtained; it describes the von Mises stress on the cortical bone, acquired from the finite element models, as a function of the selected parameters.

Fig. 2 Discretized volume and loading application

Fig. 3 Parametrized parameters

The parametric and optimization analyses were conducted using Response Surface Methodology (RSM), Design of Experiments (DOE), and the Box-Behnken model. The Box-Behnken approach uses an independent cubic model in which the possible combinations are represented at the midpoints of a cube's edges. The model is rotatable and requires at least three levels, or options, for each parameter.

3 Results and Discussion

After running 13 different model combinations, von Mises stresses around the implant hole and on the cortical bone were obtained, as illustrated in Fig. 5 and listed in Table 3. Based on the obtained stress values, three different regression equations were created using Microsoft Office Excel to determine which response curve best represented the model: (i) a linear regression without interaction, (ii) a linear regression with interaction, and (iii) a non-linear regression. The best results, in terms of adjusted R², were obtained from the non-linear regression as a function of the coded parameters (H, implant height; h, abutment height; E, Young's modulus of the cortical bone). The response surface obtained, evaluated from a statistical point of view, is shown in Eq. 1:

S = 28.726 − 3.153H − 3.043h + 6.319E + 6.058H² + 4.847h²    (1)

In order to assess the precision of the response curve, von Mises stress values were estimated using the response surface and compared to the calculated stress values obtained from the finite element models. A correlation of R² = 0.93 was found for a relationship of y = 0.98x + 0.13 (Fig. 6).
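Once the response surface of Eq. 1 is available, it can be evaluated directly instead of running new finite element models. The sketch below checks the centre point against experiment 13 of Table 3 and performs a coarse grid search over the coded cube for the stress-minimising parameter combination; the 0.1 grid resolution is an arbitrary choice for illustration, not the authors' optimization procedure.

```python
def S(H, h, E):
    """Eq. (1): von Mises stress (MPa) in the cortical bone as a
    function of the coded parameters H, h and E."""
    return 28.726 - 3.153*H - 3.043*h + 6.319*E + 6.058*H**2 + 4.847*h**2

# The centre point (0, 0, 0) reproduces experiment 13 of Table 3 (28.726 MPa)
print(S(0, 0, 0))

# Coarse grid search over the coded cube [-1, 1]^3 for the combination
# of parameters that minimises the cortical bone stress
grid = [x / 10 for x in range(-10, 11)]
s_min, args = min((S(H, h, E), (H, h, E))
                  for H in grid for h in grid for E in grid)
print(round(s_min, 2), args)
```

On this grid the minimum sits near the interior of the H and h ranges but at the lower bound of E, reflecting the positive linear E term in Eq. 1.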


Fig. 4 Representation of Box-Behnken model

Table 2  Coded and real values for each parameter

Experiment/model   Hcod   hcod   Ecod   Hreal (mm)   hreal (mm)   Ereal (MPa)
1                  0      1      1      11           5            20,000
2                  0      −1     1      11           1            20,000
3                  0      1      −1     11           5            8000
4                  0      −1     −1     11           1            8000
5                  1      0      1      15           3            20,000
6                  −1     0      1      7            3            20,000
7                  −1     0      −1     7            3            8000
8                  1      0      −1     15           3            8000
9                  1      1      0      15           5            14,000
10                 −1     1      0      7            5            14,000
11                 1      −1     0      15           1            14,000
12                 −1     −1     0      7            1            14,000
13                 0      0      0      11           3            14,000

Hcod is the coded value for the implant's height, Hreal is the real value for the implant's height in mm, hcod is the coded value for the abutment's height, hreal is the real value for the abutment's height in mm, Ecod is the coded value for the Young's modulus of the cortical bone, and Ereal is the real value for the Young's modulus of the cortical bone in MPa.


Fig. 5 Von Mises stress on the cortical bone

Table 3  Von Mises stress in the cortical bone obtained in the 13 analyses

Experiment/model   Stress (MPa)
1                  39.174
2                  44.307
3                  22.457
4                  32.107
5                  37.100
6                  45.163
7                  31.883
8                  28.746
9                  34.304
10                 40.177
11                 37.944
12                 46.098
13                 28.726

Fig. 6 Estimated stress and real stress correlation plot. Dashed lines represent a deviation of 10%

4 Conclusions

A response surface representing the von Mises stress distribution on the cortical bone as a function of the structural parameters h, H, and E was created, and a good correlation between calculated and estimated von Mises stresses was found. Among the analysed parameters, the most sensitive, i.e. the one in which a small change significantly affects the cortical bone stress distribution, is the height of the implant. The use of response curves (or surfaces) in the structural optimization of dental implants sped up the analysis process, as the best parameter, which minimises the stress on the bone, is found directly from a polynomial, not requiring intense and time-consuming iterative processes, which would include the generation of new geometries, meshes and finite element calculations. It is worth mentioning that, for simplification purposes, some other conditions were not assessed in this study, such as the stress on the prosthesis and the stress on the cancellous bone. However, this preliminary study adds confidence in the use of optimization techniques and finite element models in the design of new dental implants.

Acknowledgements The authors gratefully acknowledge the support of the Brazilian Government and CAPES for a Master of Science Scholarship and CNPq for an Undergraduate Research Scholarship.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Franciosa P, Martorelli M (2012) Stress-based performance comparison of dental implants by finite element analysis. IJIDeM 6(2):123–129. https://doi.org/10.1007/s12008-012-0155-y
2. Frost HM (1994) Wolff's Law and bone's structural adaptations to mechanical usage: an overview for clinicians. Angle Orthod 64(3):175–188
3. Chen L-J, Hao HE, Li Y-M et al (2011) Finite element analysis of stress at implant-bone interface of dental implants with different structures. Trans Nonferrous Met Soc China 21(7):1602–1610. https://doi.org/10.1016/s1003-6326(11)60903-5
4. Winter W, Möhrle S, Holst S et al (2010) Bone loading caused by different types of misfits of implant-supported fixed dental prostheses: a three-dimensional finite element analysis based on experimental results. Int J Oral Maxillofac Implants 25(5):947–952
5. Perez M (2012) Life prediction of different commercial dental implants as influenced by uncertainties in their fatigue material properties and loading conditions. Comp Meth Prog Bio 108(3):1277–1286. https://doi.org/10.1016/j.cmpb.2012.04.013
6. Tian K, Chen J, Han L et al (2012) Angled abutments result in increased or decreased stress on surrounding bone of single-unit dental implants: a finite element analysis. Med Eng Phy 34(10):1526–1531. https://doi.org/10.1016/j.medengphy.2012.10.003
7. Baiamonte T, Abbate MF, Pizzarello F et al (1996) The experimental verification of the efficacy of finite element modelling to dental implant systems. J Oral Implantol 22(2):104–110
8. Szajek K (2013) Optimization of a two-component implantology system using genetic algorithm. Thesis (Ph.D. in Structural Engineering), Poznan University of Technology, Faculty of Civil and Environmental Engineering
9. Myers R, Montgomery D, Anderson-Cook C (2009) Response surface methodology: process and product optimization using designed experiments. Wiley, New Jersey
10. NIST/SEMATECH e-Handbook at https://www.itl.nist.gov/div898/handbook
11. Albarracín ML (2011) Evaluation of the deformation of the abutment and perimplantar region as a function of the load applied to screw-retained implant-supported single crowns. Master of Science dissertation (Master of Science in Dentistry), College of Dentistry of Bauru, University of São Paulo – USP, Bauru, SP
12. Hernandez BA, Capello Sousa E (2010) Analysis of mechanical contact problems using finite element method in two-dimensional and three-dimensional models of dental prostheses. Final Research Report to PIBIC. Bauru, SP
13. Juodzbalys G, Kubilius R, Eidukynas V et al (2005) Stress distribution in bone: single-unit implant prostheses veneered with porcelain or a new composite material. Implant Dent 14(2):166–175. https://doi.org/10.1097/01.id.0000165030.59555.2c
14. Aoki T, Okafor ICI, Watanabe I et al (2004) Mechanical properties of cast Ti-6Al-4V-XCu alloys. J Oral Rehabil 31(11):1109–1114. https://doi.org/10.1111/j.1365-2842.2004.01347.x
15. Craig RG, Powers JM (1989) Restorative dental materials. Mosby, St. Louis
16. Geng JP, Tan KB, Liu GR (2001) Application of finite element analysis in implant dentistry: a review of the literature. J Prost Dent 85(6):585–598. https://doi.org/10.1067/mpr.2001.115251

Biomaterials, Tissue Engineering and Artificial Organs

Tribological Characterization of the ASTM F138 Austenitic Stainless-Steel Treated with Nanosecond Optical Fiber Ytterbium Laser for Biomedical Applications

Marcelo de Matos Macedo, Giovanna Vitória Rodrigues Bernardes, Jorge Humberto Luna-Domínguez, Vikas Verma, and Ronaldo Câmara Cozza

Abstract

This study investigated the tribological behavior of the ASTM F138 austenitic stainless-steel, which is generally used in biomedical applications, treated with laser. Metallic biomaterial surfaces were treated under different nanosecond optical fiber ytterbium laser pulse frequencies with the purpose of increasing their surface hardness. Ball-cratering wear tests were then conducted to analyze the tribological behavior on the basis of wear volume and coefficient of friction. The results showed that the laser pulse frequency influenced the surface hardness of each specimen and, consequently, the wear resistance of the ASTM F138 austenitic stainless-steel biomaterial. With an increase in laser pulse frequency, a decrease in the wear volume of the worn biomaterial was observed, wear volume being the main tribological parameter for studying the wear resistance of a metallic biomaterial. In contrast, the coefficient of friction was found to be independent of the laser pulse frequency, surface hardness and wear volume of the specimen.

Keywords: Biomaterial · Austenitic stainless-steel · Laser treatment · Wear resistance · Wear volume · Coefficient of friction

M. de Matos Macedo, Department of Materials Science, UFABC – Federal University of ABC, Santo André, Brazil
G. V. R. Bernardes · R. C. Cozza (corresponding author), Department of Mechanical Engineering, University Center FEI – Educational Foundation of Ignatius "Priest Sabóia de Medeiros", São Bernardo do Campo, Brazil; e-mail: [email protected]; [email protected]
J. H. Luna-Domínguez, Facultad de Odontología, Universidad Autónoma de Tamaulipas, Ciudad Victoria, Mexico
V. Verma, Thermochemistry of Materials SRC, National University of Science and Technology, Moscow, Russia
R. C. Cozza, FATEC-Mauá – Department of Mechanical Manufacturing, CEETEPS – State Center of Technological Education "Paula Souza", Mauá, Brazil

1 Introduction

Orthopedic devices, because of friction against mobile implants, bones, or other body parts, release particles into contact with body fluids; these particles can settle in locations far from their source, causing complications for patients [1]. Metallic particles released by the wear process may move passively through tissue and/or the circulatory system, or can be actively transported [2], compromising the biomaterial's biofunctionality. ASTM F138 austenitic stainless-steel is one of the metallic materials used for the manufacturing of orthopedic implants because of its mechanical properties and low cost. Additionally, a laser surface treatment can be adopted to increase the wear resistance of a metallic implant manufactured with this biomaterial. The ball-cratering wear test is a practical tribological method used by various researchers to analyze the wear resistance of different materials [3–10]; Fig. 1 presents a schematic representation of its principle. In this mechanical configuration, a rotating ball is forced against the specimen being tested and a liquid solution is supplied between the ball and the specimen during the wear experiments. The purpose of the ball-cratering wear test is to generate wear craters on the surface of the specimen; the wear volume (V) may be determined using Eq. 1 [5], where "d" is the diameter of the wear crater and "R" is the radius of the ball.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_13


Fig. 1 Micro-abrasive wear testing by rotating ball: schematic representation of its principle (the diagram indicates the liquid solution, specimen, ball, and the normal and tangential forces)

V = πd⁴ / (64R)    (1)

The coefficient of friction (μ) acting on the tribological system "specimen + ball" can be calculated using Eq. 2, where "N" is the normal force applied on the specimen and "T" is the tangential force measured during the wear tests:

μ = T / N    (2)
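A minimal sketch of how Eqs. (1) and (2) turn the measured quantities into the two tribological outputs. The crater diameter and tangential force below are hypothetical readings for illustration; the ball diameter and normal force match the test conditions reported later in this paper.

```python
import math

R = 25.4 / 2   # ball radius (mm), for the D = 25.4 mm ball used in this work
d = 1.2        # measured wear-crater diameter (mm) -- hypothetical value

# Eq. (1): wear volume of the spherical crater (valid for d << R)
V = math.pi * d**4 / (64 * R)

# Eq. (2): coefficient of friction from the two load-cell signals
N = 0.25       # normal force (N), as in the test conditions below
T = 0.05       # tangential force (N) -- hypothetical reading
mu = T / N

print(round(V, 4), mu)
```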

Observing the importance of biomaterials in terms of the social factor, and considering the acceptance of ball-cratering wear tests for studying the wear behavior of materials, the present work aimed to evaluate, by means of the ball-cratering wear test method, the tribological behavior of ASTM F138 austenitic stainless-steel treated with different laser processes to increase its hardness.

2 Experimental Procedure

2.1 Ball-Cratering Equipment

An equipment with a free-ball mechanical configuration (Fig. 2) was used for the wear tests; its design, construction and technical functionality were detailed in a previous work [11]. This ball-cratering equipment has also been evaluated by other researchers [11–15], who selected different test conditions and reported excellent functionality of the apparatus during their experiments. Two load cells were used: one to control the "normal force – N" and the other to measure the "tangential force – T" developed during the experiments. Both load cells had a maximum capacity of 50 N and an accuracy of 0.001 N, and the values of "N" and "T" were read continuously by a readout system during the tests.

Fig. 2 Ball-cratering wear test equipment with free-ball mechanical configuration used for the wear tests

2.2 Materials

The tested material was an ASTM F138 austenitic stainless-steel biomaterial with the chemical composition given in Table 1. It was treated with a nanosecond optical fiber ytterbium laser under four different pulse frequencies, with the purpose of increasing its hardness, as presented in Table 2. One ball made of AISI 316L austenitic stainless-steel, with a diameter of D = 25.4 mm (D = 1" – standard size), was used as the counter-body. Table 3 shows the hardness (H) of the materials used in this work (specimen and ball). For the sake of comparison, the surface of the biomaterial without laser treatment – in the "as received" condition – was also evaluated, totaling five specimens tested.

2.3 Wear Tests

Table 4 presents the test conditions defined for the ball-cratering wear experiments. A single normal force of N = 0.25 N was used. The ball rotational speed was n = 50 rpm, which, with the diameter D = 25.4 mm, gives a tangential sliding velocity of v = 0.066 m/s. The wear experiments were conducted for a test time of t = 2 min which, at v = 0.066 m/s, corresponds to a sliding distance between the specimen and the ball of S = 8 m. All experiments were conducted without interruption, and a Phosphate Buffered Saline (PBS) chemical solution –

Table 1  Chemical composition of the ASTM F138 austenitic stainless-steel biomaterial

Chemical element   % (in weight)
C                  0.023
Si                 0.78
Mn                 2.09
P                  0.026
S                  0.0003
Cr                 18.32
Mo                 2.59
Ni                 14.33
Fe                 Balance

Table 2  Nanosecond optical fiber ytterbium laser pulse frequencies used for surface treatments of the specimens

Specimen   Laser frequency – f
1          f1 = 80 kHz
2          f2 = 188 kHz
3          f3 = 296 kHz
4          f4 = 350 kHz

Table 3  Hardness of the materials used in this research – specimen and ball

Specimen/counter-body                         Condition               Hardness – H
Specimen – "as received"                      –                       199.3 HV (H0)
Specimen 1                                    Treated, f1 = 80 kHz    204.3 HV (H1)
Specimen 2                                    Treated, f2 = 188 kHz   215.4 HV (H2)
Specimen 3                                    Treated, f3 = 296 kHz   226.1 HV (H3)
Specimen 4                                    Treated, f4 = 350 kHz   239.9 HV (H4)
Ball – AISI 316L austenitic stainless-steel   –                       380 HV (HB)

Table 4 Test conditions selected for the ball-cratering wear experiments

Experimental parameter

Value

Normal force – N

0.25 N

Ball rotational speed – n

50 rpm

Tangential sliding velocity – v

0.066 m/s

Test time – t

2 min

Sliding distance – S

8m

which simulates body fluid – was continuously fed between the specimen and the ball during the wear experiments, at a frequency of 1 drop/2 s. Additionally, both the normal force (N) and the tangential force (T) were monitored and registered constantly during the wear tests. Then, after the tests, the diameters (d) of the wear craters were measured by optical microscopy. Finalizing the step of data acquisition, the wear volume and the coefficient of friction were calculated from Eqs. (1) and (2), respectively and the maximum standard-deviation reported was informed on the plots of V and l.
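Equations (1) and (2) themselves are not reproduced in this excerpt. As a hedged sketch of the data-reduction step, the snippet below assumes the spherical-cap crater-volume expression V ≈ πd⁴/(64R) (valid for d ≪ R) and µ = T/N, the forms commonly used in ball-cratering work; the function names are illustrative, not taken from the paper:

```python
import math

BALL_DIAMETER = 25.4e-3  # ball diameter D (m), from Table 4 and the text

def wear_volume(crater_diameter, ball_radius=BALL_DIAMETER / 2):
    """Wear volume of a spherical-cap crater of diameter d (d << R):
    V ~ pi * d^4 / (64 * R), the expression usually behind Eq. (1)."""
    return math.pi * crater_diameter ** 4 / (64 * ball_radius)

def friction_coefficient(tangential_force, normal_force):
    """mu = T / N, the usual form of Eq. (2)."""
    return tangential_force / normal_force

# Example: a 1 mm crater and T = 0.025 N under N = 0.25 N
v = wear_volume(1e-3)                       # ~3.87e-12 m^3
mu = friction_coefficient(0.025, 0.25)      # 0.1
```

In practice d is measured by optical microscopy, as described above, and T and N come from the two load cells.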

3 Results and Discussion

3.1 Scanning Electron Micrograph

Figure 3 presents a scanning electron micrograph of the surface of a wear crater generated during the ball-cratering wear experiments. In Fig. 3, the presence of grooves is observed, caused by the sliding movement between the ball and the specimen during the wear test. In fact, the occurrence of grooves along the surface of a worn material is a characteristic tribological behavior of two metallic materials under relative movement.

Fig. 3 Example of the surface of a wear crater obtained during the ball-cratering wear tests

M. de Matos Macedo et al.

3.2 Wear Volume Behavior

Figure 4 shows the results obtained for the behavior of the wear volume (V) as a function of the nanosecond optical fiber ytterbium laser pulse frequency (f) and specimen hardness (H) − V = f(f, H), along with the result obtained for the "as received" specimen. We observed that with an increase in the nanosecond optical fiber ytterbium laser pulse frequency (f) and, consequently, an increase in the surface hardness of the ASTM F138 austenitic stainless-steel, the wear resistance of the specimen increased, as characterized by a decrease in the wear volume (V). The tribological behavior reported in this work is in qualitative agreement with Archard's law, in which the material hardness (H) is inversely proportional to the wear volume (V), as expressed by Eq. (3):

V = K·S·N/H    (3)

where K is a dimensionless constant that depends on the type of material.

Fig. 4 Wear volume (V) as a function of the nanosecond optical fiber ytterbium laser pulse frequency (f) and specimen hardness (H) − V = f(f, H). Maximum standard deviation reported: 0.001 mm³

3.3 Friction Coefficient Behavior

Figure 5 shows the behavior of the coefficient of friction (µ) for the "as received" specimen and the specimens treated by nanosecond optical fiber ytterbium laser pulse frequency (f) − µ = f(f). Analyzing the behavior of the coefficient of friction (µ) as a function of the laser pulse frequency (f) and specimen hardness (H) − µ = f(f, H), it is possible to observe that the coefficient of friction did not present a direct relationship with f, i.e., the coefficient of friction was independent of the laser pulse frequency and specimen hardness.

Fig. 5 Friction coefficient (µ) as a function of the nanosecond optical fiber ytterbium laser pulse frequency (f) and specimen hardness (H) − µ = f(f, H). Maximum standard deviation reported: 0.02

3.4 Wear Resistance Analysis

No direct relationship between the wear volume (V) and the coefficient of friction (µ) was observed, i.e., the highest value of the wear volume was not related to the highest value of the coefficient of friction. However, considering the wear resistance of the ASTM F138 austenitic stainless-steel, we observed that the important tribological behavior was the decrease in the wear volume (V), which is directly related to a longer service life of the orthopedic implant. In fact, a longer service life of an orthopedic implant is important to improve the quality of life of patients, avoiding unnecessary surgical operations.
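As a quick consistency check of Eq. (3), the sketch below evaluates Archard's law with the hardness values of Table 3 and the test conditions of Table 4. The constant K is not given in the excerpt, so an arbitrary illustrative value is used and only the relative trend is meaningful:

```python
def archard_volume(sliding_distance, normal_force, hardness, k=1.0):
    """Eq. (3): V = K*S*N/H. K is a dimensionless, material-dependent
    constant; the value used here is illustrative, not from the paper."""
    return k * sliding_distance * normal_force / hardness

# Hardness values H0..H4 from Table 3 (HV); S = 8 m and N = 0.25 N from Table 4.
hardness_hv = [199.3, 204.3, 215.4, 226.1, 239.9]
volumes = [archard_volume(8.0, 0.25, h) for h in hardness_hv]

# Higher hardness -> lower predicted wear volume, matching the trend of Fig. 4.
assert all(v1 > v2 for v1, v2 in zip(volumes, volumes[1:]))
```

This reproduces only the qualitative inverse V–H relationship; the absolute wear volumes depend on K, which must be fitted to the measurements.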

4 Conclusions

The following are the major conclusions drawn from the results obtained in this work, regarding the tribological characterization of the ASTM F138 austenitic stainless-steel biomaterial: (1) the tribological behavior was influenced by the frequency of the nanosecond optical fiber ytterbium laser, in terms of wear volume; (2) the laser treatment increased the wear resistance of the biomaterial, quantified by the decrease in wear volume; (3) the coefficient of friction was independent of the nanosecond optical fiber ytterbium laser pulse frequency; (4) no direct relationship was found between the wear volume and the coefficient of friction.

Acknowledgements The authors acknowledge University Center FEI – Educational Foundation of Ignatius "Priest Sabóia de Medeiros" – and CEETEPS – State Center of Technological Education "Paula Souza" – for financial support to conduct this work.

Conflict of Interest The authors of this work declare that they have no conflict of interest.

References

1. Black J (1984) Systemic effects of biomaterials. Biomaterials 5(1):11–18. https://doi.org/10.1016/0142-9612(84)90061-9
2. Okazaki Y (2002) Effect of friction on anodic polarization properties of metallic biomaterials. Biomaterials 23(9):2071–2077. https://doi.org/10.1016/S0142-9612(01)00337-4
3. Adachi K, Hutchings IM (2003) Wear-mode mapping for the micro-scale abrasion test. Wear 255(1–6):23–29. https://doi.org/10.1016/S0043-1648(03)00073-5
4. Allsopp DN, Hutchings IM (2001) Micro-scale abrasion and scratch response of PVD coatings at elevated temperatures. Wear 251(1–12):1308–1314. https://doi.org/10.1016/S0043-1648(01)00755-4
5. Rutherford KL, Hutchings IM (1997) Theory and application of a micro-scale abrasive wear test. J Test Eval 25(2):250–260. https://doi.org/10.1520/JTE11487J
6. Cozza RC, Tanaka DK, Souza RM (2009) Friction coefficient and abrasive wear modes in ball-cratering tests conducted at constant normal force and constant pressure – preliminary results. Wear 267:61–70. https://doi.org/10.1016/j.wear.2009.01.055
7. Cozza RC (2014) Influence of the normal force, abrasive slurry concentration and abrasive wear modes on the coefficient of friction in ball-cratering wear tests. Tribol Int 70:52–62. https://doi.org/10.1016/j.triboint.2013.09.010
8. Umemura MT, Varela LB, Pinedo CE, Cozza RC, Tschiptschin AP (2019) Assessment of tribological properties of plasma nitrided 410S ferritic-martensitic stainless steels. Wear 426–427:49–58. https://doi.org/10.1016/j.wear.2018.12.092
9. Cozza RC, de Mello JDB, Tanaka DK, Souza RM (2007) Relationship between test severity and wear mode transition in micro-abrasive wear tests. Wear 263:111–116. https://doi.org/10.1016/j.wear.2007.01.099
10. Gee MG, Wicks MJ (2000) Ball crater testing for the measurement of the unlubricated sliding wear of wear-resistant coatings. Surf Coat Technol 133–134:376–382. https://doi.org/10.1016/S0257-8972(00)00966-X
11. Cozza RC, Suzuki RS, Schön CG (2014) Design, building and validation of a ball-cratering wear test equipment by free-ball to measure the coefficient of friction. Tecnologia em Metalurgia Materiais e Mineração 11(2):117–124. https://doi.org/10.4322/tmm.2014.018
12. Wilcken JTSL, Cozza RC (2014) Influence of the abrasive slurry concentration on the coefficient of friction of different thin films submitted to micro-abrasive wear. In: 2nd International conference on abrasive processes – ICAP 2014, 8–10 Sept 2014, University of Cambridge, UK
13. Cozza RC, Rodrigues LC, Schön CG (2015) Analysis of the micro-abrasive wear behavior of an iron aluminide alloy under ambient and high-temperature conditions. Wear 330–331:250–260. https://doi.org/10.1016/j.wear.2015.02.021
14. Macedo MM, Cozza RC (2019) Tribological behavior analysis of the ISO 5832-1 austenitic stainless-steel treated by optical fiber laser used for biomedical applications. J Mater Appl 8(2):91–96. https://doi.org/10.32732/jma.2019.8.2.91
15. Macedo MM, Verma V, Luna-Domínguez JH, Cozza RC (2020) Biotribology of mechanically and laser marked biomaterial. In: Corrosion, 1st edn. IntechOpen, London, pp 1–11. https://doi.org/10.5772/intechopen.92564

Computational Modeling of Electroporation of Biological Tissues Using the Finite Element Method M. A. Knabben, R. L. Weinert and A. Ramos

Abstract

This article presents experimental and computational results of electroporation in rabbit liver. An empirical electroporation model, able to describe the dynamic behavior of electroporation, was used. The experiments were performed with a cylindrical electrode system, where voltage pulses with three different levels were applied. For the numerical simulation of electroporation, finite element software was developed in MatLab®, intended for academic research. The electroporation model combined with the Finite Element Method proved to be an appropriate simulation tool for the study of biological electropermeabilization.

Keywords

Electropermeabilization of biological tissues • Dynamic modeling of electroporation • Rabbit liver • Computational electromagnetic field calculation • Finite element method

1 Introduction

Biological electropermeabilization is a non-linear physical phenomenon that causes the opening of pores in the plasma membrane of isolated cells, cell suspensions or biological tissue when these are subjected to high-intensity electric fields. The electric field applied to the tissue causes the movement of ions near the cell membrane, increasing the potential difference between the intracellular and extracellular environments, known as the transmembrane potential Vm [1]. This electrical polarization mechanism creates pores in the cell membrane, facilitating the flux of ions from the intracellular environment to the extracellular environment, resulting in an increase of tissue conductivity. Biological electropermeabilization occurs if Vm reaches values between 200 mV and 1 V. If Vm exceeds 1 V, irreversible electropermeabilization occurs, causing cell death [2].

Although electroporation is a phenomenon that has been studied for several decades and is already used in important applications in medicine, biology and biochemistry, there are still fundamental questions to be investigated. Due to the complex mathematical process involved, field calculation by computational methods is often the only available resource to evaluate the effects of electroporation on biological material. However, this requires models able to describe the temporal variation in tissue conductivity as a function of the local applied electric field. Electroporation models can be characterized as static or dynamic. In static models, the simulation is performed in stages, where the conductivity is updated in the analysis domain according to the distribution of the electric field obtained in the previous stage, following a specific mathematical relation. The process is repeated until the conductivity stabilizes in the entire domain according to a convergence criterion. In dynamic models, the case of interest in this article, as the program calculates the local electric field, the conductivity value in each region is increased at each time step of the simulation, according to the electroporation model [3,4].

Currently, some published papers have dealt with the inclusion of electroporation models in finite element programs. The preference for the Finite Element Method (FEM) is justified because it allows a better discretization of the objects involved in the analysis domain, when compared to other numerical methods [4,5]. In this article, the field distribution and current density were obtained using software based on the Finite Element Method, built in MatLab® for academic research, which allows the inclusion of the dynamic electroporation model obtained from [3]. The model parameters are adjusted by comparing the simulated results with the experimental results, obtained on rabbit liver.

M. A. Knabben · R. L. Weinert · A. Ramos
Department of Electrical Engineering, University of Santa Catarina State, Joinville, SC, Brazil

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_14


2 Materials and Methods

2.1 Electropermeabilization in Biological Tissues

The electroporation equipment consists of a switched-mode power supply to generate the pulsed voltage waveform of up to 800 V and 10 A, and an instrumentation circuit for measuring the applied voltage and the current that flows through the sample of biological material. Rabbit liver samples were used, from the Leporidae family, Oryctolagus cuniculus, between 4 and 6 months of age. Rabbit liver was chosen in this work to verify whether the dynamic electroporation model in [3] is able to represent biological electropermeabilization in the same organ (liver) but in a different animal species (rabbit). The tests were performed about 30 min after the organs were removed. Three rabbits were used in the experiments. The liver of each animal was divided into standard formats (20 mm wide and 10 mm high), where each organ resulted in five samples, giving five experiments for each voltage level applied. For the application of the electric field, a system of cylindrical steel electrodes with 0.6 mm diameter and 15 mm height was used, separated by a distance of 5.4 mm center to center. The electroporation protocol used in the experiments consists of the application of 10 consecutive voltage pulses. The tests were carried out for three different voltage levels: 200 V (case A), 500 V (case B) and 800 V (case C). The duration of each pulse is 93 µs at high level and 95 µs at low level. The rise and fall times are 3 µs and 100 ns, respectively. The room temperature was controlled at 22 °C.

The dynamic electroporation model used in the computer simulations is an empirical model proposed in [3], where it was validated for rat liver. Equations (1) and (2) combined give the rate of time variation of the electroporation conductivity:

dσp/dt = (σeq − σp) / { τo + τ·exp[ −(E/Ep2)²·(σt/(σt + σp))² ] }    (1)

σeq = σo + σt·exp[ −(Ep1/E)²·(σt/(σt + σp))² ]    (2)

This model can be understood as a relaxation equation, where σp is the electroporation conductivity, σo is the initial value of the electroporation conductivity, σeq is the final conductivity of the tissue, whose value depends on the intensity of the applied field, and σt is the conductivity of the region near the cell membranes. τo and τo + τ are, respectively, the minimum and maximum relaxation times of the electroporation process. E is the local electric field and Ep1 and Ep2 are the electric field thresholds. The initial values of the parameters were taken from [3]. Then, successive simulations were performed using a trial-and-error method, minimizing the pulse-to-pulse percentage error between the experimental and simulated curves. In this study's modeling, the parameters σt, τ and Ep2 were adjusted for each of the applied voltage amplitudes. The remaining parameters of the model were kept unchanged.

2.2 Finite Element Method

The analysis domain consists of the plane that intercepts the electrodes at half height. Due to the cylindrical geometric shape of the electrodes, the electric field distribution on this surface is non-uniform. However, along the normal direction of the surface (direction of the tissue thickness) the field distribution is approximately uniform. Therefore, the FEM in two dimensions is appropriate for the computational calculations with this type of geometry. The geometric model explored in this article is shown in Fig. 1.

Fig. 1 Geometric model analyzed: two electrodes of diameter ϕ, separated by a distance d, in a biological tissue domain of dimensions Lx × Ly

The construction of the finite element software requires the following steps, in sequence:
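Before turning to the field solver, the dynamic model can be exercised in isolation. The sketch below is a minimal single-element integration, assuming one particular reconstruction of the model equations (exponent −(E/Ep2)² in the relaxation time and −(Ep1/E)² in σeq, each multiplied by the memory factor (σt/(σt + σp))²) and the case-B parameters from Table 1. A fixed field of V/d ≈ 92.6 kV/m is applied during the pulses, whereas the full simulation couples this update to the local FEM field:

```python
import math

# Case-B (500 V) parameters from Table 1 of this chapter.
SIGMA_O = 1e-8   # initial electroporation conductivity, sigma_o (S/m)
SIGMA_T = 0.286  # maximum conductivity increment, sigma_tB (S/m)
TAU_O = 1e-6     # minimum relaxation time, tau_o (s)
TAU = 2.5e-3     # relaxation-time increment, tau_B (s)
E_P1 = 2.3e3     # electric field threshold E_p1 (V/m)
E_P2 = 27e3      # electric field threshold E_p2B (V/m)
DT = 382.2e-9    # time step (s)

def memory_factor(sigma_p):
    # (sigma_t / (sigma_t + sigma_p))^2 term shared by Eqs. (1) and (2)
    return (SIGMA_T / (SIGMA_T + sigma_p)) ** 2

def sigma_eq(e_field, sigma_p):
    # Eq. (2): equilibrium conductivity grows with the local field E
    if e_field <= 0.0:
        return SIGMA_O
    return SIGMA_O + SIGMA_T * math.exp(-(E_P1 / e_field) ** 2
                                        * memory_factor(sigma_p))

def tau_eff(e_field, sigma_p):
    # Eq. (1) denominator: relaxation time shrinks toward tau_o at high field
    return TAU_O + TAU * math.exp(-(e_field / E_P2) ** 2
                                  * memory_factor(sigma_p))

def simulate(n_pulses=10, t_high=93e-6, t_low=95e-6, e_high=500.0 / 5.4e-3):
    """Explicit-Euler update (Eq. (6)) of sigma_p for one tissue element,
    sampled at the end of each pulse period."""
    sigma_p = SIGMA_O
    history = []
    for _ in range(n_pulses):
        for t_phase, e in ((t_high, e_high), (t_low, 0.0)):
            for _ in range(int(t_phase / DT)):
                sigma_p += DT * (sigma_eq(e, sigma_p) - sigma_p) \
                           / tau_eff(e, sigma_p)
        history.append(sigma_p)
    return history
```

With these parameters the conductivity rises quickly during the early pulses and then settles, reproducing qualitatively the pulse-to-pulse accumulation described in the Results section.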


1. Discretize the analysis domain into a finite number of triangular elements, forming the discretization mesh;
2. Organize the set of equations that allow the calculation of the electrical potential within each element as a linear function of the spatial coordinates;
3. Perform the superposition of all elements of the mesh to obtain an expression for the electrical potential in the entire analysis domain;
4. Create a loop for the simulation dynamics where, at each iteration k, the following steps are performed in order:
4.1. Update the potential at the electrode nodes;
4.2. Solve the system of linear equations to obtain the potential of the nodes in the analysis domain;
4.3. Obtain the electric field in each element from the potential of the nodes;
4.4. Update the conductivity in the elements using the electroporation model, with the electric field as input.

The FEM allows writing the potential as a linear function of the spatial coordinates. Thus, the potential inside each element of the mesh is written as a system of linear equations, which depends on the potentials of its vertices (local nodes). With the superposition of all elements, the potential is then obtained in the entire analysis domain [6]. It is then necessary to identify the local nodes of each element according to a global numbering. Thus, Eq. (3) is obtained:

Ṽ(x, y) = Σ_{n=1}^{Np} fn·Vn    (3)

where Ṽ(x, y) is the potential at any position in the analysis domain, Vn are the potentials of the nodes and fn are functions resulting from the combination of the linear equations of the local nodes, non-zero only on elements that share the considered node [6]. Equation (3) shows that the solution for the electrical potential Ṽ(x, y) may be written as an expansion in the fn function base, where the coefficients of this series are the potentials Vn of the nodes. The problem considered in this article consists in solving the continuity equation; therefore, the divergence of the current density is zero, as shown in (4):

−∇·[σ(x, y)·∇Ṽ(x, y)] = 0    (4)

where σ (S/m) is the conductivity of the medium. In this study, it was assumed that the conduction current in the tissue is much greater than the displacement current. In addition,

the tissue was considered homogeneous and the dielectric dispersion was considered negligible in the frequency range used. These are approximations commonly used in the analysis of electroporation in biological tissues [1,4,5]. The potential is calculated using the weighted residuals method between the expected response of Eq. (3) and the numerical response, where weighting functions are used to minimize the average error (residual). The Galerkin method was adopted, in which the weighting functions are the fn functions of the potential expansion base [6]. The Neumann boundary condition was adopted, which establishes that the normal electric field on the boundary surfaces of the analysis domain is null. Thus, a homogeneous system of equations appears which, when solved, allows obtaining the potentials Vn of the nodes. This system was solved by LU factorization, available in MatLab® through the command linsolve. However, it is necessary to simplify the system by moving the electrode nodes to the independent term (source of the system), since their potentials are imposed by the voltage source and therefore should not be calculated. The potential in the analysis domain is then obtained by (3). The electric field is calculated for each element as the negative of the gradient of the potential, as written in (5):

E = −Σ_{n=1}^{Np} Vn·∇fn    (5)
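The assembly and solution steps described in this section can be sketched as follows. This is a simplified Python/NumPy stand-in for the MatLab® implementation; the mesh construction, node numbering and helper names are illustrative, not taken from the paper:

```python
import numpy as np

def tri_gradients(xy):
    """Constant gradients of the three linear shape functions f_n on a
    triangle. xy: (3, 2) vertex coordinates. Returns (grads (3, 2), area)."""
    x, y = xy[:, 0], xy[:, 1]
    area2 = (x[1] - x[0]) * (y[2] - y[0]) - (x[2] - x[0]) * (y[1] - y[0])
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])
    return np.column_stack([b, c]) / area2, abs(area2) / 2.0

def solve_potential(nodes, tris, sigma, fixed):
    """Assemble and solve -div(sigma grad V) = 0 (Eq. (4)) with Dirichlet
    values imposed at the electrode nodes given in the dict `fixed`."""
    n = len(nodes)
    K = np.zeros((n, n))
    for e, tri in enumerate(tris):
        grads, area = tri_gradients(nodes[tri])
        # element stiffness: sigma_e * area_e * (grad f_i . grad f_j)
        K[np.ix_(tri, tri)] += sigma[e] * area * grads @ grads.T
    V = np.zeros(n)
    fixed_idx = np.array(sorted(fixed))
    V[fixed_idx] = [fixed[i] for i in fixed_idx]
    free = np.setdiff1d(np.arange(n), fixed_idx)
    # electrode nodes move to the right-hand side (source of the system)
    rhs = -K[np.ix_(free, fixed_idx)] @ V[fixed_idx]
    V[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
    return V

def element_fields(nodes, tris, V):
    """Eq. (5): E = -sum_n V_n grad f_n, constant within each element."""
    return np.array([-(V[tri] @ tri_gradients(nodes[tri])[0]) for tri in tris])
```

On a uniform mesh of a unit square with the left edge held at 0 V and the right edge at 1 V, this solver reproduces the exact linear potential V = x and a uniform field E = (−1, 0) in every element, a standard sanity check for linear triangular elements.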

The next step is to use Eqs. (1) and (2) to calculate the increment in the electroporation conductivity σP(k) in the mesh elements, at each time step Δt, as shown in (6):

σP(k) = σP(k − 1) + (dσp(k)/dt)·Δt    (6)

Thus, Eq. (7) provides the conductivity value σ(k) that must be updated in each element for the new calculation of the electric field in the next iteration:

σ(k) = σS + σP(k)    (7)

where σS is the conductivity of the intact tissue. The current can be obtained by integrating the current density J(k) = σ·E(k) along the median line between the electrodes, as shown in (8):

I(k) = h·∫₀^{Ly} σ(Lx/2, y, k)·E(Lx/2, y, k) dy    (8)

where h is the height of the sample and k is the iteration index.
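After discretization, Eq. (8) reduces to a simple quadrature along the median line x = Lx/2. A hedged sketch, with illustrative function and variable names, using the composite trapezoidal rule:

```python
import numpy as np

def current_along_midline(sigma_mid, e_mid, y, h):
    """Eq. (8): I(k) = h * integral of sigma*E along the median line.
    sigma_mid, e_mid: conductivity and field sampled at positions y."""
    j = sigma_mid * e_mid  # current density J = sigma * E
    return h * float(np.sum(0.5 * (j[1:] + j[:-1]) * np.diff(y)))

# Uniform example: sigma = 0.1 S/m, E = 1 kV/m over Ly = 10 mm, h = 10 mm.
y = np.linspace(0.0, 0.01, 11)
i_k = current_along_midline(np.full_like(y, 0.1), np.full_like(y, 1e3),
                            y, 0.01)
# i_k = 0.1 * 1e3 * 0.01 * 0.01 = 0.01 A for this uniform case
```

In the actual program, sigma_mid and e_mid would be sampled from the element values crossed by the median line at iteration k.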

3 Results

Simulations were performed on regular meshes with increasing numbers of elements and nodes. It was verified that, as finer meshes were used, the differences between the resulting curves became irrelevant, which led to the use of a uniform mesh with 24336 elements and 12325 nodes. The software was run on a personal computer (Intel® Core i7-9700, 3 GHz, 16 GB RAM) with the Windows 10 operating system. The simulation for each of the three voltage levels took about 80 h, with 5001 time steps each. Table 1 presents the numerical parameters used in the computer simulation and the parameters of the electroporation model that provided the best results.

Figure 2 shows the experimental and computational results obtained for the electric current that circulates in the tissue for the three cases treated in this article. The electric current has a well-characterized behavior, increasing during the intervals corresponding to the application of the voltage pulses. This happens because the conductivity of the tissue is also increasing, due to electroporation. Note that the dynamic electroporation behavior changes significantly in the initial pulses, but acquires a stable and repetitive behavior in the final pulses. The proposed model is able to reproduce this behavior very closely to the experimental reality, especially for the pulses of greater amplitude [7,8].

Table 1 Simulation parameters

Parameter | Symbol | Value
Length of x edge | Lx | 10 mm
Length of y edge | Ly | 10 mm
Distance between electrodes | d | 5.4 mm
Electrode diameter | ϕ | 0.6 mm
Applied voltage levels | Vo | 200/500/800 V
Total simulation time | T | 1.94 ms
Number of pulses | Np | 10
Time step | Δt | 382.2 ns
Initial conductivity | σo | 1 × 10⁻⁸ S/m
Conductivity of intact tissue | σS | 47 × 10⁻³ S/m
Maximum conductivity | σtA | 0.101 S/m
 | σtB | 0.286 S/m
 | σtC | 0.387 S/m
Minimum relaxation time | τo | 1 µs
Maximum relaxation time | τA | 7 ms
 | τB | 2.5 ms
 | τC | 3.5 ms
Electric field threshold | Ep1 | 2.3 kV/m
Electric field threshold | Ep2A | 8.5 kV/m
 | Ep2B | 27 kV/m
 | Ep2C | 40 kV/m

Fig. 2 Comparison between experimental (- -) and simulated (-) currents, with the percentage of relative error for each pulse, for applied voltage levels of: a 200 V; b 500 V; c 800 V

Figure 2 also shows the distribution of the percentage error for each pulse, relating the experimental and simulated currents, for the three voltage levels applied. The experimental current peaks present at the beginning of each new pulse occur mainly because of the displacement current and the dielectric dispersion in the biological tissue sample. Note that the simulated current curves do not show such peaks, since the displacement current and the dispersion were not considered in the computational modeling. Therefore, for the calculation of the error distribution, the current peaks were neglected. Figure 3 shows the distribution of the conductivity and of the electric field module in the analysis domain for the 500 V applied voltage (case B), computed on a grid of 100 × 100 nodes distributed uniformly throughout the domain, at the time instant of 1.815 ms, which corresponds to the last instant at high level of the last pulse.


Figure 3a shows a uniform region with an approximately elliptical shape containing the electrodes, where the conductivity has its maximum value of 0.33 S/m, which indicates the occurrence of electroporation in this region. Figure 3b shows some values of the electric field module in the analysis domain, with the maximum value of 286 kV/m near the surface of the electrode on the left side, and the minimum of 22 kV/m near the domain's contour. This indicates that the electric field module tends to be more intense in the region next to the electrodes, which justifies the greater values of conductivity in this region as well.

Fig. 3 a Distribution of the conductivity in the analysis domain for Vo = 500 V (values between 0.17 and 0.33 S/m); b distribution of the electric field module in the analysis domain for Vo = 500 V (values between 22 and 286 kV/m)

Comparing the results of this work with the results of [4], where the FEM was also applied to the geometry of cylindrical electrodes, we note that both dynamic models of electroporation are able to represent the experimental current curves and resulted in similar conductivity distributions. However, we must consider that the model proposed in [4] needs the adjustment of 14 parameters, while the model used in this work needs the adjustment of only six.

4 Conclusion

The software developed based on the Finite Element Method allowed the inclusion of a dynamic model of electroporation for the analysis of the current and of the conductivity and electric field distributions in rabbit liver samples. The electroporation model proved to be appropriate, as it involves only six parameters, of which three remained the same for all three voltage levels applied. Studies still need to be performed in the future to include the displacement current and the dielectric dispersion in the computational software, to faithfully model the physical phenomena that occur in biological tissues. The electroporation model combined with the Finite Element Method proved to be an appropriate simulation tool for the study of biological electropermeabilization.

Conflict of interest The authors declared no conflicts of interest for this article.

Ethics All animal procedures were conducted only after the project was approved by the Committee of Ethics in Animal Use of the UNIVILLE—Universidade da Região de Joinville [University of the Joinville Region]—CEUA/UNIVILLE (UNIVILLE process 02/2019). The works analyzed by CEUA/UNIVILLE comply with the Brazilian Law 11.794 (2008), which establishes the procedures for the scientific use of animals according to the guidelines of the National Council for Animal Experimentation Control (CONCEA).

References

[1] Suzuki D, Anselmo J, Oliveira K et al (2014) Numerical model of dog mast cell tumor treated by electrochemotherapy. Artif Organs 39:192–197
[2] Chen C et al (2006) Membrane electroporation theories: a review. Med Biol Eng Comput 44:5–14
[3] Weinert R, Pereira E, Ramos A (2018) Inclusion of memory effects in a dynamic model of electroporation in biological tissues. Artif Organs 1–6
[4] Langus J, Kranjc M, Kos B, Šuštar T, Miklavčič D (2016) Dynamic finite-element model for efficient modelling of electric currents in electroporated tissue. Sci Rep 6
[5] Voyer D, Silve A, Mir L, Scorretti R, Poignard C (2018) Dynamical modeling of tissue electroporation. Bioelectrochemistry 119:98–110
[6] Bondeson A, Rylander T, Ingelström P (2005) Computational electromagnetics. Springer
[7] Teissié J, Golzio M, Rols MP (2005) Mechanisms of cell membrane electropermeabilization: a minireview of our present (lack of ?) knowledge. Biochimica et Biophysica Acta 1724:270–280
[8] Chang DC, Chassy BM, Saunders JA, Sowers AE (eds) (1992) Guide to electroporation and electrofusion. Academic Press

The Heterologous Fibrin Sealant and Aquatic Exercise Treatment of Tendon Injury in Rats S. M. C. M. Hidd, E. F. Dutra Jr, C. R. Tim, A. L. M. M. Filho, L. Assis, R. S. Ferreira Jr, B. Barraviera, and M. M. Amaral

animals’ paws from the seventh day on in all treatments (p < 0.002), when compared to the control group. After 7 and 14 days of treatment, LE showed a greater reduction in the volume of edema (p = 0.03041) compared to the control. After 21 days, the (LS) showed a greater reduction in edema compared to the control group. It was possible to verify a higher collagen to LSE ratio in the evaluated period after 21 days of treatment. Thus, the heterologous fibrin sealant associated or not with aquatic exercise has a beneficial influence on tendon repair, becoming a propitious technique for future clinical applications.

Abstract

Acute rupture of the calcaneus tendon is relatively common, usually related to sports practice. In recent years, the number of researches in search of more efficient techniques, which induce the healing process, has been growing. The Fibrin Sealant Derived from Snake Venom (FSDSV) or Heterologous Sealant has been standing out in animal and human application for accelerating the repair of lesions, reducing the likelihood of hemorrhage and infectious diseases and having low production cost. Aquatic exercise also presents itself as an efficient strategy for rehabilitation, reducing pain and edemas, improving muscle properties and enhancing the repair process due to the numerous beneficial effects provided by the liquid medium. The aim of this research is to evaluate the use of fibrin sealant derived from snake venom associated with aquatic exercise in tendon repair. We used 84 rats of the Wistar strain, weighing between 170 and 250 g of weight who underwent surgery to induce partial rupture of the calcaneus tendon. The animals were randomly separated into four experimental groups. The technique used was the application of fibrin sealant and aquatic exercises according to the studied group. There was a greater reduction in the edema of the S. M. C. M. Hidd  E. F. Dutra Jr  C. R. Tim  L. Assis  M. M. Amaral Universidade Brasil, São Paulo, Brazil A. L. M. M.Filho Núcleo de Pesquisa em Biotecnologia e Biodiversidade, UESPI, Teresina, Brazil R. S. Ferreira Jr Centro de Estudos de Venenos e Animais Peçonhentos CEVAP-UNESP, Botucatu, Brazil B. Barraviera Faculdade de Medicina de Botucatu, UNESP, Botucatu, Brazil S. M. C. M. Hidd (&) Biomedical Engineering, Universidade Brasil, Rua Carolina Fonseca 235, São Paulo, Brazil e-mail: [email protected]

Keywords

Edema • Calcaneus tendon • Fibrin sealant • Aquatic exercises

1 Introduction

The calcaneus tendon is the largest tendon in the human body, formed by compact connective tissue composed of fibroblasts and extracellular matrix, which connects the calf muscles to the heel bone, and has spontaneous regeneration capacity [1]. Despite its high biomechanical resistance, it ruptures frequently, usually due to the practice of repetitive sports and intense mechanical loads, predominantly in men and in the age group between 30 and 50 years [2]. After a tendon injury, the healing tissue formed during regeneration can hinder habitual mobility, heals slowly and hardly maintains the structural integrity and mechanical strength of a healthy tissue, which often results in challenges in choosing the best form of treatment. The most used treatments for tendon injuries have been the surgical and conservative (non-surgical) methods. However, these methods can cause a series of complications to the patient, such as

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_15


tissue necrosis, injury recurrence, and infections, preventing the integral restructuring of the tendon [3]. In the search for a replacement for traditional treatments, FSDSV is a promising alternative due to its hemostatic and adhesive potential. This sealant is a biological product obtained through the combination of concentrated fibrinogen and bovine thrombin reconstituted with calcium chloride, which assists in tissue regeneration, acting as a substrate for cell growth, with total biocompatibility in studies carried out in both animals and humans [4]. Exercise performed in a liquid medium is another alternative to improve the prognosis of the lesion, due to the benefits that the physical properties of water provide to the practitioner. It helps with range of motion and flexibility, and consequently provides muscle relaxation, avoiding complications such as edema, pain, and muscle spasms, in addition to reducing the recovery time of the tendon injury [5]. Thus, the present study proposes to analyze the reduction of edema, the scar response, and possible changes in the amounts of total collagen in the tendons of the different groups, with the use of FSDSV in association with aquatic exercise.

2 Methods

The study was approved by the ethics committee (CEUA/UESPI), no. 6,899, of July 15, 2019. To carry out this study, 84 female Rattus norvegicus of the Wistar strain were used, obtained from the biotherium of UESPI. Rats weighing between 170 and 250 g underwent partial injury of the right calcaneus tendon. The rats were kept in a 12-h light–dark photoperiod, at a temperature of 24 ± 1 °C, with free access to water and food. The animals were randomly separated into four experimental groups of 21 rats each: Control Group (L), in which the rats underwent partial tendon injury without receiving any type of treatment; Fibrin Sealant Derived from Snake Venom Group (LS), in which the rats underwent partial tendon injury and were treated with the sealant; Aquatic Exercise Group (LE), in which the rats underwent partial tendon injury and were treated with aquatic exercise; and Fibrin Sealant Derived from Snake Venom plus Aquatic Exercises Group (LSE), in which the rats underwent partial tendon injury and were treated with the sealant and aquatic exercises. Each group was subdivided into 3 subgroups of 7 rats each, related to the evaluation times: 7, 14, and 21 days [6]. The treatment was performed with the application of fibrin sealant and aquatic exercises, according to the experimental group described.

2.1 Surgical Procedure

Prior to the surgical procedure, trichotomy and asepsis (70% alcohol) were performed on the right lower paw of each rat. Then, a 3 cm longitudinal incision was made in the animal's skin to expose the calcaneus tendon. A partial rupture of the calcaneus tendon was then made surgically, and the fibrin sealant was applied to the injured area [1]. The fibrin sealant, composed of the thrombin-like fraction of the venom of Crotalus durissus terrificus, cryoprecipitate of buffalo blood, and calcium chloride, was provided by the Center for the Study of Venoms and Venomous Animals (CEVAP) of UNESP (São Paulo State University, Brazil). The heterologous fibrin compound was made available in three microtubes stored at −20 °C. At the time of application, they were thawed, reconstituted, mixed, and applied (9 µL in each transected tendon) to form a stable clot with a dense fibrin network [6].

2.2 Aquatic Exercise Protocol

All animals went through a period of adaptation to the liquid medium for 15 days preceding the surgical procedure. The exercises lasted 10 min a day, 5 times a week, with a load equivalent to 10% of body weight [7]. Only groups LE and LSE went through aquatic exercise training, after the third postoperative day [8]. The exercises were done in an oval tank 50 cm deep with a capacity of 100 L, with a water level of 40 cm and the water temperature controlled at 32 °C.

2.3 The Evolution of Edema

The edema volumes were measured at five time points: before the tendon injury and 24 h, 7, 14, and 21 days after the tendon injury. The right paw of the animal was introduced into a plethysmometer, which measures the volume of the paw. A pencil mark was made on the animal's paw to standardize the measurement position. The edema volume was calculated as the difference between the measurements before and after the tendon injury [10].
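The calculation above is a simple difference of plethysmometer readings; a minimal sketch, in which the baseline and readings are hypothetical values, not data from the study:

```python
# Edema volume, as described: paw volume measured by plethysmometer
# after injury minus the baseline measured before injury.
def edema_volume(before_ml: float, after_ml: float) -> float:
    """Edema volume (mL) = post-injury paw volume - pre-injury baseline."""
    return after_ml - before_ml

baseline = 1.20  # hypothetical pre-injury paw volume (mL)
readings = {"24h": 1.55, "7d": 1.42, "14d": 1.33, "21d": 1.28}  # hypothetical
edema = {t: round(edema_volume(baseline, v), 2) for t, v in readings.items()}
print(edema)  # {'24h': 0.35, '7d': 0.22, '14d': 0.13, '21d': 0.08}
```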

2.4 Euthanasia

All animals were humanely killed on the scheduled dates by a lethal dose of the anesthetic sodium thiopental (100 mg/kg), administered by intraperitoneal injection [9].

The Heterologous Fibrin Sealant and Aquatic Exercise Treatment …

2.5 Collagen Quantification

Collagen quantification was performed on the slides stained with picrosirius red. After the random choice of one section per slide, it was photographed in three distinct fields using an optical microscope (Olympus Optical Co. Ltd, Tokyo, Japan) equipped with filters to provide polarized lighting. The images were obtained at 400× magnification with a digital camera coupled to the microscope (Sony DSC-S75, Tokyo, Japan) and analyzed using MATLAB R2019b image analysis software. The analysis methodology was carried out in accordance with Castro et al. [13].
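The study used a MATLAB routine following Castro et al. [13]; as a hedged sketch of one common approach to this kind of quantification (not the authors' pipeline), the fraction of red-dominant pixels in a polarized-light image can serve as a crude collagen proxy:

```python
def collagen_fraction(pixels, red_thresh=100):
    """Fraction of pixels whose red channel dominates: a crude proxy for
    picrosirius-red birefringent collagen under polarized light.
    `pixels` is a flat list of (r, g, b) tuples."""
    hits = sum(1 for r, g, b in pixels if r > red_thresh and r > g and r > b)
    return hits / len(pixels)

# synthetic 4x4 "image": top half strongly red (collagen), bottom half dark
image = [(200, 40, 30)] * 8 + [(10, 10, 10)] * 8
print(collagen_fraction(image))  # 0.5
```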

2.6 Statistical Analyses

The statistical analyses were performed using the Minitab 18 statistical software. To analyze the distribution of the groups, the Anderson–Darling normality test was used. The quantitative variables were presented as mean, standard deviation, minimum, maximum, median, and quartiles. The Kruskal–Wallis nonparametric test was used for comparisons between the groups. For all analyses, a significance level of 5% was considered. Analyses of the animal weight, the volume of edema 24 h after surgery, the final edema volume, and the total collagen were performed.
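The comparisons were run in Minitab 18; as a hedged, standard-library-only sketch of the Kruskal–Wallis statistic for two groups (no tied values, so df = 1; the edema values below are hypothetical):

```python
import math
from itertools import chain

def kruskal_wallis(*groups):
    """Kruskal-Wallis H (no tie correction) and, for two groups (df = 1),
    the chi-square p-value via the complementary error function."""
    data = sorted(chain.from_iterable(groups))
    rank = {v: i + 1 for i, v in enumerate(data)}  # assumes no tied values
    n = len(data)
    h = sum(len(g) * (sum(rank[v] for v in g) / len(g) - (n + 1) / 2) ** 2
            for g in groups)
    h *= 12.0 / (n * (n + 1))
    p = math.erfc(math.sqrt(h / 2))  # P(chi2_df1 > h); valid for df = 1 only
    return h, p

control = [0.42, 0.38, 0.45, 0.40, 0.44, 0.39, 0.41]  # hypothetical volumes (mL)
treated = [0.21, 0.25, 0.19, 0.23, 0.22, 0.20, 0.24]
h, p = kruskal_wallis(control, treated)
print(f"H = {h:.2f}, p = {p:.4f}")  # H = 9.80, p = 0.0017 -> significant at 5%
```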

3 Results and Discussion

The mean and standard deviation of the weight of the animals was 206 ± 24 g; there was no statistical difference between the groups studied.

Fig. 1 Edema volume


In the analysis of the volume of edema 24 h after surgery, the statistical comparison between the groups showed no difference between their values (p-value = 0.023). Thus, the surgical injury procedure induced the inflammatory process equally in the paws of the animals of all groups. Figure 1 shows the final edema volume of each of the groups studied. It was observed that after 7 days of treatment, the treatment groups (LE7, LS7, and LSE7) presented statistically lower edema volume (p-value < 0.002) compared to the group without treatment. In the intragroup comparison, it was observed that the control group (without treatment) presented a volume of edema at 21 days statistically greater than at 14 days (p-value = 0.04), showing that the lesion without treatment does not progress satisfactorily. The group treated only with aquatic exercise showed statistically significant differences between LE7 and LE21 (p-value = 0.0090) and between LE14 and LE21 (p-value = 0.0096), showing that there was an increase in the volume of edema for this treatment. Among the groups treated only with fibrin sealant, there were statistically significant differences between the LS14 and LS21 groups (p-value = 0.0014). Finally, for the groups treated with fibrin sealant and aquatic exercise, statistically significant differences were observed between LSE7 and LSE21 (p-value = 0.0048). Figure 2 presents representative images of collagen stained by picrosirius red for each studied group. The results of this analysis for each group are shown in Fig. 3. When comparing the intragroup results of the total collagen ratio, it is observed that the proportion of collagen for the L7 control group is significantly higher than in the L21 group (p-value = 0.013).


Fig. 2 Representative images of picrosirius red collagen stained

For the groups treated with aquatic exercise (LE), no statistical difference was observed in the amount of collagen (p-value > 0.4). In the groups treated with sealant (LS), a statistical difference was observed between groups LS7 and LS21 (p-value = 0.012) and between LS14 and LS21 (p-value = 0.007). The collagen content of the LS21 group is statistically higher compared to the LS7 and LS14 groups. However, no statistical difference was observed between groups LS7 and LS14. For the groups treated with sealant plus aquatic exercise, no statistical difference was observed in the amount of collagen between the groups (p-value > 0.27). In the intergroup comparison by period, it was observed that, after 7 days of injury induction, the group treated with

Fig. 3 Comparison p-value intergroups and intragroup


aquatic exercise (LE7) showed a statistically higher value than the control group L7 (p-value = 0.04). After 14 days of injury induction, the LE14 group showed a statistically lower value than the control group L14 (p-value = 0.0006). At 21 days after the injury, the LE21 group had a statistically lower value than the L21 group (p-value = 0.0002). This study tested the effects of FSDSV associated with aquatic exercise on the partial tendon repair process in rats, aiming to analyze the reduction of edema, the scar response, and possible changes in the amounts of total collagen in the tendons of the different groups. A statistically significant reduction in edema was observed for the group treated with fibrin sealant associated with aquatic exercise (LSE21) compared to the control group. On the first day after the induced partial injury, the rats showed a greater volume of edema, resulting from the greater accumulation of macrophages at the injury site [10]. Rats that underwent aquatic exercise for 7, 14, or 21 days showed progress in reducing the volume of edema. It is known that physical exercise acts as a stressful stimulus that can promote changes and reorganize the responses of the neuroendocrine system [11]. Corroborating these results, Antunes et al. (2012) evaluated the effect of resistance exercise in an aquatic environment in experiments with 18 Wistar rats and found that physical exercise was somewhat beneficial in reducing edema. In the present study, the association of fibrin sealant with aquatic exercise in the acute phase of tendinopathy recovery was an effective therapeutic modality for improving edema from the inflammatory process, although treatment with aquatic exercise or sealant alone was also effective in reducing the volume of edema, contributing to the regenerative process and aiming at the return of functionality to the patient.


However, these same authors found that, when comparing the intragroup edema results, it was not possible to observe a statistically significant difference, unlike this study, where there was a significant difference in the volume of edema in the groups that underwent more days of exercise. Regarding the amounts of collagen in the intragroup comparison, only the group treated with fibrin sealant for 21 days (LS21) presented a significant amount when compared to groups LS7 and LS14. The data in the intergroup comparison suggest that FSDSV and aquatic exercise may be an excellent support for treatment during tendon repair (LSE14), followed by LE7 and finally LS21, because they presented satisfactory results in relation to the recovery of tendon organization for the times and types of treatments applied.

4 Conclusions

To the best of our knowledge, this work is the first experimental model of tendinopathy associating fibrin sealant derived from snake venom with aquatic exercise, thus providing innovative options for the treatment of tendon lesions. In this work, the groups treated with aquatic exercise, with fibrin sealant, and with sealant associated with aquatic exercise presented statistically lower edema volume than the control group. The group treated with fibrin sealant associated with physical exercise obtained a greater reduction of edema when compared with the other treatment groups. The association of sealant with aquatic exercise promoted the regeneration of the calcaneus tendon of the animals, as well as stimulating the early organization of collagen fibers.

Acknowledgements This work was supported by FAPESP 2017/21851-0.

Conflict of Interest The authors declare that they have no conflict of interest.


References

1. Holm C, Kjaer M, Eliasson P (2014) Achilles tendon rupture: treatment and complications—a systematic review. Scand J Med Sci Sports 25(1):e1–e10
2. Frauz K et al (2019) Transected tendon treated with a new fibrin sealant alone or associated with adipose-derived stem cells. Cells 8:56
3. Rosa LFPBC, Vaisberg MW (2002) Influências do exercício na resposta imune. Rev Bras Med Esporte 8(4):167–172
4. Walden G et al (2017) A clinical, biological, and biomaterials perspective into tendon injuries and regeneration. Tissue Eng Part B Rev 23(1):44–58
5. Grinsell D, Keating CP (2014) Peripheral nerve reconstruction after injury: a review of clinical and experimental therapies. Biomed Res Int 2014:13
6. Dietrich F (2012) Comparação do efeito do plasma rico em plaquetas e fibrina rica em plaquetas no reparo do tendão de Aquiles em ratos. Dissertation, Faculdade de Medicina, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre, RS
7. Ferreira JR (2017) Selante de fibrina heterólogo derivado do veneno de cobra: do banco ao leito-uma visão geral. J Venom Anim Toxins Incl Trop Dis
8. Peres VS (2013) Efeito do selante de fibrina derivado de peçonha de serpente associado a células-tronco mesenquimais na cicatrização de feridas cirúrgicas em ratos. Dissertation, Faculdade de Medicina de Botucatu, Universidade Estadual Paulista, Botucatu
9. Rajabi H et al (2015) Os efeitos curativos das atividades aquáticas e injeção alogênica do plasma rico em plaquetas (PRP) nas lesões do tendão de Aquiles em ratos experimentais. World J Plast Surg 4(1):66–73
10. Karvat J, Antunes JS, Bernardino GR, Kakihata CMM, Bertolini GRF (2014) Effect of low-level laser and neural mobilization on nociceptive threshold in experimental sciatica. Revista Dor 15(3):207–210
11. Pestana PRD, Alves AN, Fernandes KPS, Silva JA Jr, França CM, Martins MD, Bussadori SK, Mesquita-Ferrari RA (2012) Efeito da natação na expressão de fatores regulatórios miogênicos durante o reparo do musculoesquelético de rato. Rev Bras Med Esporte 18(6):419–422
12. Silva ASD, Lima AP (2011) Os benefícios da reabilitação aquática para grupos especiais. Lecturas Educación Física y Deportes, Buenos Aires 16:159
13. Castro TNS, Martignago CCS, Assis L, de Alexandria FED, Rocha JCT, Parizotto NA, Tim CR (2020) Effects of photobiomodulation therapy in the integration of skin graft in rats. Lasers Med Sci 35:939–947

Myxomatous Mitral Valve Mechanical Characterization

A. G. Santiago, S. M. Malmonge, P. M. A. Pomerantzeff, J. I. Figueiredo and M. A. Gutierrez

Abstract

The mitral valve (MV), along with the tricuspid valve, is part of the atriovalvar complex and, when compromised by myxomatous disease, suffers tissue degeneration with severe changes in its mechanical properties, consequently losing its coaptation capability, which results in the well-known mitral regurgitation (MR). This paper presents the results of stress × strain tests of 19 mitral valve posterior leaflets compromised by myxomatous disease, extracted from patients undergoing MV repair surgery. Due to their dimensions, only uniaxial tests were performed, i.e., only the radial direction was considered. The Young's modulus and the yield and linearity limits are presented. The Young's modulus obtained for myxomatous MVs is compared to normal values found in the literature.

Keywords

Mitral valve • Myxomatous disease • Mechanical properties

1 Introduction

The mitral valve (MV) is responsible for maintaining unidirectional blood flow inside the cardiac cavity. The MV possesses a complex set of structures, such as the chordae tendineae, papillary muscles, and the mitral annulus. The papillary muscles are attached to the leaflets via several chordae tendineae, controlling their opening and closing movements, while

A. G. Santiago (corresponding author) · S. M. Malmonge, Engineering and Social Sciences, Federal University of ABC / Center for Modeling, Alameda da Universidade s/n, São Bernardo do Campo, Brazil. e-mail: [email protected]
P. M. A. Pomerantzeff · J. I. Figueiredo · M. A. Gutierrez, Medical School, University of São Paulo / Heart Institute, São Paulo, Brazil

the mitral annulus is responsible for maintaining the circumferential tension of the MV. The MV's proper behavior allows unidirectional blood flow, from the left atrium to the left ventricle, with no considerable reflux. However, its behavior may be compromised by myxomatous disease, which causes a degeneration of the chordae tendineae and leaflet tissues and, consequently, changes their mechanical properties [1], implying a coaptation failure and causing the well-known mitral regurgitation (MR). Valve replacement and valve repair are the two techniques employed to solve the MR problem, with the latter being the most used, with better post-surgical results. This technique consists of removing the most damaged portion of the leaflet, reducing its area. The literature presents several numerical models for MV simulations [2–5], which are crucial for better surgical planning and understanding of the MV's post-surgical behavior. Such simulations depend upon knowledge of the mechanical properties of the structure being analyzed and thus rely on the proper characterization of myxomatous mitral valves. The MV's internal structure is composed of connective tissue and circumferentially oriented collagen fibers [6]; hence, the material can be classified as transversely isotropic, having distinct characteristics in the circumferential and radial directions. Several studies have been made in order to characterize its different structures, such as the papers presented by Ritchie et al. [7] and Chen et al. [8], which aim to determine the mechanical properties of the chordae tendineae; Liao et al. [9] performed a study in which the authors determined the influence of the MV anterior leaflet collagen fibers on the MV movements. A main issue concerning the mentioned papers is that they did not use human specimens for testing, which may lead to considerable divergences in numerical simulations. Barber et al. [10] present an extensive study considering normal and myxomatous MVs.
This paper presents a first characterization study, consisting of 19 myxomatous mitral valve samples (posterior leaflets) from a Brazilian population, which can lead to important insights concerning its demographic particularities. The Young's modulus, stress at yield, peak stress, stress at break, and strain at break are presented, since these parameters are needed for plotting the stress–strain (σ × ε) material characteristic curve. All 19 samples were extracted from patients undergoing MV repair surgery at the Heart Institute of the University of São Paulo (InCor-FMUSP) and, due to their size and shape, only the radial tensile test was performed. This research was approved by InCor's Institutional Review Board (IRB) under number 4704/18/056. This is the first study known so far that aims at the characterization of myxomatous MVs specific to the Brazilian population and also the first to use samples from surgical procedures.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_16

2 Methodology

All 19 myxomatous mitral valve samples (posterior leaflets) were extracted from patients undergoing mitral valve repair surgery, with ages ranging from 44 to 83 years (mean age 63.31 ± 9.35 years). Transesophageal echocardiography ultrasound (TEE-US) was used to evaluate the patients' condition before the surgery. The samples were sectioned from the mitral annulus to the free edge, and each one was placed in a plastic container filled with glycerin in order to preserve its integrity and taken to the tensile test. To proceed with the tension tests, each sample was immersed in a saline solution for 10–15 min before being placed in the tension machine. All tension tests were performed using the MTS Tryton 250 located at the Laboratory of Biomaterials of the Federal University of ABC. To guarantee the stability of the samples, four adapters were used, each one designed to prevent the samples from slipping over the machine's grips. Figure 1 shows one sample placed on the machine. The horizontal gap between the grips was carefully adjusted according to the length of each sample, avoiding any pre-tension.

Fig. 1 Mitral Valve sample placed on tension machine for testing

The TestWorks proprietary software was used to perform the tests, and the load × displacement results were recorded and used to determine the Young's modulus (E), stress at yield (σe), peak stress (σM), stress at break (σR), and strain at break (εR). A Python 3.7 application programming interface (API) was developed to read and convert the data from each generated file into the pandas library, which possesses statistical and graphical resources. It must be mentioned that all stress-related values are considered as engineering stress, i.e., σ = F/A [Pa], with A the cross-section area of the specimen, considered to be constant.
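As a hedged sketch of the post-processing described (pure Python rather than the authors' pandas-based API; the load-displacement record, cross-section area, and gauge length below are hypothetical):

```python
def engineering_curves(load_n, disp_mm, area_mm2, gauge_mm):
    """Engineering stress (kPa) and strain from a load-displacement record,
    assuming a constant cross-section area (sigma = F/A)."""
    stress = [1e3 * f / area_mm2 for f in load_n]  # N/mm^2 = MPa -> kPa
    strain = [d / gauge_mm for d in disp_mm]
    return stress, strain

def young_modulus(stress, strain, n_linear=3):
    """Least-squares slope of the initial (assumed linear) region."""
    xs, ys = strain[:n_linear], stress[:n_linear]
    mx, my = sum(xs) / n_linear, sum(ys) / n_linear
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# hypothetical record; 13.45 mm^2 is the mean area reported in Table 1
load = [0.0, 0.9, 1.8, 2.7, 3.2, 3.4]  # N
disp = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]  # mm
s, e = engineering_curves(load, disp, area_mm2=13.45, gauge_mm=10.0)
E, peak = young_modulus(s, e), max(s)
print(f"E = {E:.1f} kPa, peak stress = {peak:.1f} kPa")
```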

3 Results

Figure 2a shows the boxplot for the samples' Young's modulus; considering Young's modulus values over 1.84 MPa as outliers, they can be excluded from the parameter statistics. Figure 2b, c present the histogram and the Gaussian distribution for the Young's modulus. It must be noted that Fig. 2c was evaluated considering only those values below 1.8 MPa. Table 1 presents the mean values along with the standard deviation for the analyzed properties. Figure 3 presents the Young's modulus (E), stress at yield (σe), stress at break (σR), and strain at break (εR) parameterized by the patients' age. The p-values and Pearson coefficients (r²) are presented in Table 2.
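The paper does not state which outlier rule produced the 1.84 MPa cutoff; a minimal sketch of the usual boxplot convention (values beyond Q3 + 1.5·IQR flagged before computing the mean), with hypothetical moduli:

```python
import statistics

def iqr_upper_bound(values):
    """Upper whisker of a standard boxplot: Q3 + 1.5 * (Q3 - Q1)."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # default exclusive method
    return q3 + 1.5 * (q3 - q1)

moduli = [0.35, 0.42, 0.55, 0.61, 0.66, 0.70, 0.81, 0.95, 1.10, 2.40]  # MPa
bound = iqr_upper_bound(moduli)
kept = [m for m in moduli if m <= bound]
print(f"upper bound = {bound:.2f} MPa, kept {len(kept)} of {len(moduli)}")
```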

4 Discussion

This paper presented a study of 19 myxomatous mitral valve samples extracted from patients undergoing MV repair surgery. An analysis concerning the Young's modulus values is presented, along with other important characteristics such as strain at break, stress at break, peak stress, and stress at yield, as well as an attempt to correlate the variation of the mentioned parameters with the patients' age. As shown in Fig. 2a, b, Young's modulus values over 1.84 MPa can be considered outliers; hence, they were not considered for estimating the mean value of the myxomatous MV Young's modulus (Fig. 2c). The results presented in Fig. 2c and Table 1 show that the Young's modulus for myxomatous MVs is higher than that for healthier MVs, as presented by Pham et al. (2017) [11], with E = 180.05 ± 71.50 kPa. From the parametric properties shown in Fig. 3 and the results in Table 2, a low correlation can be observed between the given properties and the patients' age for the considered age range, meaning that no significant change in the myxoid MV properties can be noticed in the age span analysed.


Fig. 2 Analysis of the myxomatous Young's modulus: a boxplot of Young's modulus values in MPa; b Young's modulus histogram; c normal distribution of the Young's modulus

Table 1 Tension test results: mean value and standard deviation

Property                 | Mean value | Std. deviation
Thickness (mm)           | 2.084      | 0.671
Width (mm)               | 6.474      | 1.144
Area (mm²)               | 13.45      | 4.739
Modulus E (kPa)          | 666.18     | 361.60
Stress at yield σe (kPa) | 822.71     | 558.00
Peak stress σM (kPa)     | 983.77     | 579.90
Break stress σR (kPa)    | 632.00     | 549.60
Strain at break εR (%)   | 230.50     | 158.90

Fig. 3 Properties parameterized by the patients' age: a Young's modulus; b stress at yield; c stress at break; d strain at break


Table 2 Tension test results: p-value and Pearson coefficient

Property             | p-value        | r²
Modulus (E)          | 943.97 × 10⁻³  | 428.98 × 10⁻⁶
Stress at yield (σe) | 826.90 × 10⁻³  | 2.89 × 10⁻³
Stress at break (σR) | 465.43 × 10⁻³  | 31.75 × 10⁻³
Strain at break (εR) | 275.03 × 10⁻³  | 69.62 × 10⁻³
Conflict of Interest The authors declare that they have no conflict of interest.

References

[1] Lazar F, Marques LC, Aiello VD (2018) Myxomatous degeneration of the mitral valve. Autopsy Case Rep 8
[2] Gao H, Qi N, Feng L et al (2017) Modelling mitral valvular dynamics–current trend and future directions. Int J Numerical Methods Biomed Eng 33
[3] Khodaei S, Fatouraee N, Nabaei M (2017) Numerical simulation of mitral valve prolapse considering the effect of left ventricle. Math Biosci 285:75–80
[4] Sturla F, Onorati F, Votta E et al (2014) Is it possible to assess the best mitral valve repair in the individual patient? Preliminary results of a finite element study from magnetic resonance imaging data. J Thoracic Cardiovascular Surg 148:1025–1034
[5] Voigt I, Ionasec RI, Georgescu B et al (2009) Model-driven physiological assessment of the mitral valve from 4D TEE. Medical Imaging 2009: Visualization, Image-Guided Procedures, Modeling 7261:72610R
[6] Kunzelman KS, Cochran RP (1992) Stress/strain characteristics of porcine mitral valve tissue: parallel versus perpendicular collagen orientation. J Cardiac Surg 7:1–20
[7] Ritchie J, Jimenez J, He Z, Sacks MS, Yoganathan AP (2006) The material properties of the native porcine mitral valve chordae tendineae: an in vitro investigation. J Biomech 39:1129–1135
[8] Chen L, Yin FCP, May-Newman K (2004) The structure and mechanical properties of the mitral valve leaflet-strut chordae transition zone. J Biomech Eng 126:244–251
[9] Liao J, Yang L, Grashow J, Sacks MS (2007) The relation between collagen fibril kinematics and mechanical properties in the mitral valve anterior leaflet. J Biomech Eng 129:78–87
[10] Barber JE, Kasper FK, Ratliff NB, Cosgrove DM, Griffin BP, Vesely I (2001) Mechanical properties of myxomatous mitral valves. J Thoracic Cardiovascular Surg 122:955–962
[11] Pham T, Sulejmani F, Shin E, Wang D, Sun W (2017) Quantification and comparison of the mechanical properties of four human cardiac valves. Acta Biomaterialia 54:345–355

Qualitative Aspects of Three-Dimensional Printing of Biomaterials Containing Devitalized Cartilage and Polycaprolactone

I. M. Poley, R. Silveira, J. A. Dernowsek, and E. B. Las Casas

Abstract

The aim of this study was to investigate the printability of different compositions of biomaterials to be potentially used in the bioprinting of cartilaginous substitutes. Part of the compositions contained pulverized devitalized cartilage (DVC), which confers the necessary biochemical complexity on bioprinted scaffolds, and some compositions contained granulated polycaprolactone (PCL), which provides greater mechanical resistance to the scaffolds. Additive manufacturing equipment specially customized for bioprinting was used in the printing tests. In addition to bringing biochemical advantages, DVC increases the consistency of the scaffolds. PCL, on the other hand, has to be reduced to smaller granulometries for better results, since it obstructed the printing needles. Modifications to the printer design have been suggested to make printing viable using high-viscosity biomaterials and 0.41 mm or thinner needles, which may provide greater resolution and shorter distances for the diffusion of nutrients and oxygen inside the scaffolds.

Keywords

Bioprinting • Bioink • DVC • PCL • Printability

I. M. Poley (corresponding author), Laboratório de Bioengenharia / Departamento de Engenharia Mecânica, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil. e-mail: [email protected]
R. Silveira, 3DLopes, Belo Horizonte, Brazil
J. A. Dernowsek, CTI Renato Archer, Campinas, Brazil
E. B. Las Casas, Departamento de Engenharia de Estruturas, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil

1 Introduction

Bioprinting is a promising area of tissue engineering. It aims to biofabricate complete, rejection-free organs and tissues by using computer-controlled three-dimensional printing devices to accurately deposit cells and biomaterials, for the purpose of creating anatomically correct structures. Cell-embedded biomaterials used in bioprinting comprise the so-called bioinks, whereas those without cells are considered biomaterial inks. Extracellular matrices have already been used as biomaterials in the biofabrication of substitutes for cartilaginous tissues due to their chondroinductive potential [1]. The biochemical complexity of those structures helps to mimic a natural cellular microenvironment in the scaffolds, promoting cell growth, multiplication, and correct differentiation. According to Kiyotake et al. [1], extracellular cartilage matrices are used in two different presentations, namely DCC (decellularized cartilage) and DVC (devitalized cartilage). Decellularization is done to avoid immunogenicity of the material; that process, however, damages the biochemical content and consequently the chondroinductivity of the scaffolds. Devitalization, on the other hand, disrupts the chondrocyte membranes through ice crystal formation due to freezing and thawing processes; in this case, the cellular debris remains in the extracellular matrix. In addition to greater chondroinductive potential, DVC has a mechanical advantage over DCC due to its greater biochemical complexity. Clinical evidence suggests that cartilage presents a low risk of immune response, and therefore cell debris need not be removed [2]. Polycaprolactone (PCL) is a biocompatible polymer that melts at about 60 °C. Under physiological conditions, as in implants in the human body, it is degraded by hydrolysis of its esters, making room for collagen produced by chondrocytes that can be added to the scaffolds. In bone and cartilage substitutes, it provides the necessary structural reinforcement to support cells [3, 4].

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_17


In this study, different biomaterials with the potential to be used in the bioprinting of cartilaginous substitutes, and their variations containing DVC and/or PCL, were tested on a 3D printer for their printability and for the consistency of the resulting printed scaffolds.

2 Materials and Methods

Twelve different biomaterials were formulated to be tested for their potential use in bioprinting. The formulations started from compositions previously tested by Paxton et al. [5]. Pulverized DVC and/or granulated PCL were added to some formulations. The exogenous pulverized DVC added to the compositions is commercially available and produced from shark cartilage by the Tiaraju™ Laboratory (Santo Ângelo, RS, Brazil) under the name "Cartilagem de Tubarão". Its maximum particle size is 0.25 mm. At first, preference was given to the use of porcine DVC due to its greater physiological similarity with human cartilage, but difficulties related to grinding that material and obtaining it in sufficient quantities for the tests justified its replacement by shark DVC for the mechanical tests. In any case, the use of exogenous material is justified by clinical evidence that cartilaginous tissues are immunoprivileged [1]. Future immunohistochemical evaluation of the printed scaffolds is suggested to verify biocompatibility. PCL (average Mn 80,000, 440,744-250G), poloxamer 407 (Pluronic F-127, P2443-250G), sodium alginate (W201502-1KG), gelatin (48,723-500G-F), and Dulbecco's Modified Eagle's Medium (D5921-500ML) were purchased from Sigma-Aldrich (St. Louis, Missouri, USA). Calcium chloride (102,392) was purchased from Merck Millipore (Burlington, Massachusetts, USA).

Table 1 Compositions of biomaterials tested on the 3D printer

Granulated PCL was obtained by grating, achieving an average particle diameter of 1 mm. Finer comminution is suggested for future tests, making it possible to use printing needles with diameters of 0.41 mm or smaller; 0.41 mm is the largest recommended diameter for bioprinted filaments, since the diffusion of oxygen and nutrients through the material requires a maximum radius of 0.2 mm.

The formulations by Paxton et al. [5] reproduced in this study were:

• Poloxamer 407, 30% wt in aqueous solution
• Sodium alginate 8% w/v in PBS (phosphate-buffered saline), with pre-crosslinking promoted by 1% w/v CaCl2 solution mixed into the alginate gel at a 7:3 volume mixing ratio (that composition is hereafter referred to as 8%/1% alginate, as in Paxton et al. [5])
• 2%–10% alginate-gelatin, formulated from a mixture of 4% w/v sodium alginate in PBS and 20% w/v gelatin in PBS (that composition was first tested by Wüst et al. [6] and then reproduced by Paxton et al. [5])

The resulting compositions are listed in Table 1. The biomaterials were tested using a customized 3D printer by 3DLopes™. The equipment uses the FDM (fused deposition modeling) technique and was adapted to work with a syringe filled with biomaterial; the piston is pressed by a screw system whose stepper motor (Nema 17, 3 W, 1.33 A) is controlled by a processing board. Needles of various diameters can be fitted to the tip of the syringe; 1.54 mm needles were used in this study. Scaffolds with overlapping layers of parallel filaments, rotated by 90° to form a checkerboard pattern, were printed to assess the uniformity and consistency of the filaments. The scaffolds made with biomaterials 1 to 8 received calcium chloride 2% w/v solution to have the

Biomaterial   Composition
1             8%/1% alginate mixed with poloxamer 407 30% wt in a 1:1 volume mixing ratio
2             Same composition of biomaterial ink 1 with additional DVC 3% wt
3             Same composition of biomaterial ink 1 with additional PCL 3% wt
4             Same composition of biomaterial ink 1 with additional DVC 1.5% wt and PCL 1.5% wt
5             Pure alginate 8%/1%
6             Same composition of biomaterial ink 5 with additional DVC 3% wt
7             Same composition of biomaterial ink 5 with additional PCL 3% wt
8             Same composition of biomaterial ink 5 with additional DVC 1.5% wt and PCL 1.5% wt
9             2%–10% alginate-gelatin
10            Same composition of biomaterial ink 9 with additional DVC 3% wt
11            Same composition of biomaterial ink 9 with additional PCL 3% wt
12            Same composition of biomaterial ink 9 with additional DVC 1.5% wt and PCL 1.5% wt

Qualitative Aspects of Three-Dimensional Printing …

alginate crosslinking completed after deposition. Biomaterials 9 to 12, which are thermoresponsive, were transferred from the printhead, heated to 37 °C, to a cooled plate (0–4 °C) upon deposition, following the same procedure as Paxton et al. [5].
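Two of the numeric constraints above can be checked with a few lines of arithmetic. The sketch below (Python; the 0.2 mm diffusion radius and the 7:3 mixing ratio come from the text, while the function names are illustrative, not from the paper) verifies a needle diameter against the diffusion limit and computes the component concentrations after the alginate/CaCl2 pre-crosslinking mix:

```python
# Sanity checks on two parameters quoted above. The 0.2 mm diffusion
# radius and the 7:3 mixing ratio come from the text; the function
# names are illustrative, not from the paper.

MAX_RADIUS_MM = 0.2  # maximum filament radius for oxygen/nutrient diffusion

def within_diffusion_limit(needle_diameter_mm: float) -> bool:
    """The extruded filament diameter roughly matches the needle bore,
    so the bore radius must stay within the diffusion limit."""
    return needle_diameter_mm / 2.0 <= MAX_RADIUS_MM

def mixed_concentration(pct_wv: float, parts: int, total_parts: int) -> float:
    """Final % w/v of a component after volumetric mixing."""
    return pct_wv * parts / total_parts

print(within_diffusion_limit(1.54))     # False: the 1.54 mm needles used here exceed it
print(within_diffusion_limit(0.40))     # True: at the recommended upper bound
print(mixed_concentration(8.0, 7, 10))  # 5.6 -> alginate % w/v after 7:3 mixing
print(mixed_concentration(1.0, 3, 10))  # 0.3 -> CaCl2 % w/v after 7:3 mixing
```

Note that the recommended 0.41 mm needle sits right at this boundary, since 2 × 0.2 mm = 0.40 mm.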

3 Results and Discussion

3.1 Biomaterials Based on Alginate and Poloxamer 407 (Compositions 1 to 4)

Biomaterial 2, which contains DVC, presented filaments with greater consistency and, in general, fewer irregularities than biomaterial 1, as shown in Fig. 1. When printing with biomaterial 3, which contains the largest amount of particulate PCL among compositions 1 to 4, clogging occurred due to the gradual accumulation of PCL particles at the syringe outlet. As shown in Fig. 1, the


resulting scaffold became very irregular, and it was not possible to print overlapping layers. Biomaterial 4, with a lower amount of PCL than biomaterial 3, allowed a complete scaffold to be printed, shown in Fig. 1, but with more irregular filaments than biomaterials 1 and 2; partial obstruction of the syringe outlet by PCL particles contributed to the irregularities. Due to the presence of poloxamer 407 in those compositions, the biomaterials liquefied and decanted during refrigerated storage before printing, decreasing the presence of air bubbles that remained mixed in after preparation. Centrifugation procedures for removing air bubbles were not successful due to the unwanted settling of the particulate additives of the compositions.

3.2 Biomaterials Based on Alginate (Compositions 5 to 8)

Biomaterial 5, which is pure 8%/1% alginate, presented a more irregular printed pattern, shown in Fig. 2, than biomaterial 1, the mixture of 8%/1% alginate and poloxamer 407 30% wt in equal parts. This is because biomaterial 5 does not liquefy when stored in the refrigerator and therefore does not release air bubbles by decantation. For comparison purposes, none of the biomaterials was centrifuged prior to printing, but biomaterial 5, which was also prepared and tested by Paxton et al. [5], does require centrifugation for better results. Similarly, biomaterial 6 showed a more irregular printed scaffold than its equivalent with poloxamer 407

Fig. 1 Printing results for biomaterials 1 to 4

Fig. 2 Printing results for biomaterials 5 and 7


(biomaterial 2), due to the presence of air bubbles. However, centrifuging biomaterial 6 would separate its particulate DVC from the mixture. Biomaterial 7, like its equivalent with poloxamer 407 (biomaterial 3), caused clogging by PCL particles at the syringe outlet. Dripping was observed when the printhead temperature was increased: the biomaterial became less viscous, as seen in Fig. 2, and passed between the PCL particles instead of carrying them. The printing results of biomaterial 8 were very similar to those of its equivalent with poloxamer 407, biomaterial 4.

3.3 Biomaterials Based on Alginate and Gelatin (Compositions 9 to 12)

Biomaterials 9 to 12, which are thermoresponsive, were deposited on cooled plates to acquire consistency. Figure 3 shows the advantage of the presence of DVC in biomaterial 10 over biomaterial 9, which has no DVC: the filaments of biomaterial 10 maintain their tubular shape to a greater degree, as they are more consistent. Biomaterial 11, shown in Fig. 3, presented a problem similar to that shown in Fig. 2 for biomaterial 7. There was also clogging by PCL particles at the syringe outlet, but the low viscosity of the gelatin-containing gel at 37 °C allowed the gel to pass through the interstices between the particles and through the needle; the result was a scaffold poor in PCL particles. The scaffold of biomaterial 12, shown in Fig. 3, also presented some irregularities due to occasional clogging, but the filament consistency was improved by the presence of both DVC and PCL particles.

Fig. 3 Printing results for biomaterials 9–12

4 Conclusions

The high viscosities of the biomaterials make it necessary to replace the printer's central-axis motor: the current motor is not powerful enough to keep the screw turning when printing with thinner needles. More powerful motors are larger, so their use would require redesigning the entire central axis. With a more powerful motor, printing needles with a diameter of 0.41 mm can be used; thinner needles will also require PCL with smaller particle sizes. The high viscosities of the tested biomaterials favor the gradual carrying, without accumulation, of their particulate components throughout printing; lower viscosities would give PCL particles greater freedom to settle to the bottom of the syringe and accumulate. In any case, the particulate material must be sufficiently fine to avoid clogging.

The addition of DVC to the compositions, besides conferring biochemical advantages, also improved the mechanical consistency of the scaffolds. The effect of adding PCL, on the other hand, will be better evaluated when PCL with smaller particle sizes is tested. The use of poloxamer 407 in conjunction with alginate favors the escape of bubbles by decantation during refrigerated storage; the decantation of compositions containing gelatin at temperatures just above room temperature has the same effect. In such cases, centrifugation may not be necessary when preparing the biomaterials, but it is recommended to gently stir these compositions before use to redistribute the particulate material with minimal mixing of air. Rheometry of the twelve tested biomaterials, in conjunction with computational fluid dynamics simulations, will be useful to predict the shear stresses to which cells may


be subjected when added to biomaterials for bioprinting. Shear stresses must be low enough to minimize disruption of cell membranes.

Acknowledgements The authors would like to thank the Brazilian agencies CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) and FAPEMIG (Fundação de Amparo à Pesquisa do Estado de Minas Gerais) for their support.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Kiyotake E, Beck E, Detamore M (2016) Cartilage extracellular matrix as a biomaterial for cartilage regeneration. Ann NY Acad Sci 2016:139–159
2. Farr J, Yao J (2011) Chondral defect repair with particulated juvenile cartilage allograft. Cartilage 2:346–353
3. Kang H, Lee S, Ko I et al (2016) A 3D bioprinting system to produce human-scale tissue constructs with structural integrity. Nat Biotechnol, 1–11
4. Garrigues N, Little D, Adams J et al (2014) Electrospun cartilage-derived matrix scaffolds for cartilage tissue engineering. J Biomed Mater Res 102A:3998–4008
5. Paxton N, Smolan W, Böck T et al (2017) Proposal to assess printability of bioinks for extrusion-based bioprinting and evaluation of rheological properties governing bioprintability. Biofabrication 9. https://doi.org/10.1088/1758-5090/aa8dd8
6. Wüst S, Müller R, Hofmann S (2015) 3D bioprinting of complex channels-effects of material, orientation, geometry, and cell embedding. J Biomed Mater Res A 103:2558–2570

Chemical Synthesis Using the B-Complex to Obtain a Similar Polymer to the Polypyrrole to Application in Biomaterials Lavínia Maria Domingos Pinto, Mariane de Cássia Rodrigues dos Santos, Mirela Eduarda Custódio, F. E. C. Costa, Larissa Mayra Silva Ribeiro, and Filipe Loyola Lopes

Abstract


This research aimed to perform a chemical synthesis using the B-complex to obtain conductive and magnetic characteristics similar to those of the polymer polypyrrole, which has biocompatible properties. The technique, characterized as low-cost, can be used in many applications, from biomaterials in the biomedical area to the telecommunications area. To obtain the polypyrrole-like material, two methods and four treatments were chosen to produce the powder. The conductive properties of the resulting powders were verified using the four-probe technique to measure the voltage; the conductivity was measured with a conductivity meter, and an HD magnet was used to check the magnetic properties of the synthesized powder. The conductivity obtained was about 1.68 µS/cm, and the voltage was about 1.31 mV. The use of the B-complex as a substitute for polypyrrole shows results similar to the polymer's conductive characteristics, being a viable alternative for biomaterial applications.

Keywords: Biosensors · B-complex · Conductive

L. M. D. Pinto · M. de Cássia R. dos Santos · M. E. Custódio · F. E. C. Costa · F. L. Lopes (&) National Institute of Telecommunication, Santa Rita do Sapucaí, Brazil e-mail: fi[email protected] L. M. D. Pinto e-mail: [email protected] M. E. Custódio e-mail: [email protected] F. E. C. Costa e-mail: [email protected] L. M. S. Ribeiro Federal University of Itajubá, Itajubá, Brazil e-mail: [email protected]

1 Introduction

Polypyrrole (PPy) is an organic polymer formed by the polymerization of pyrrole, discovered by Dall'Olio in 1968. Dall'Olio obtained a black powder adhered to the electrode surface through the electrolysis of a solution of pyrrole in sulfuric acid. A few years later, Díaz and colleagues obtained a black PPy film from the electrolysis of pyrrole in acetonitrile and tetramethylammonium tetrafluoroborate. This process resulted in a polymer of high conductivity (100 S cm−1), caused by the oxidation–reduction between the polymeric chain and the doping agent [1]. Since then, PPy synthesis has been widely used due to the polymer's high electrical conductivity, adhesion, mechanical strength, and chemical activity; it is used as a coating for metals, protecting them from corrosion, and is easily synthesized [2]. Furthermore, this polymer is often used in biosensor manufacturing, showing good sensitivity, lifetime, fast response time, high conductivity, and biocompatibility [3, 4]. However, it is necessary to search for substances with new physicochemical properties, low environmental impact, and low-cost fabrication for innovative materials. The choice of a new polymeric material requires selecting the chemical synthesis route that best suits the process [5]. The most revolutionary applications of conductive polymers are in the biomedical field, where they are applied in tissue engineering, artificial nerves, and biosensors [5]. PPy-electrode biosensors can be used to receive biological and electrical signals, and also in the manufacturing of neural prostheses for nerve regeneration [6–9].



To obtain a conductive material through PPy synthesis, some researchers have used oxidizing agents, surfactants, and conventional PPy. These works present good conductivity results, but the high cost may prevent their use [1]: a cost analysis shows that 250 g of doped PPy costs R$ 2.1800,00, while non-doped general-purpose PPy costs about R$ 640,00 [10]. The doped polymer presents higher conductivity than the non-doped polymer because the oxidizing or reducing agents used in the doping process lead to deformations in the polymeric chain, known as polarons and bipolarons, which improve the conductive properties [5]. The oxidation or reduction of PPy also gives rise to magnetic characteristics, and the higher the doping level, the greater the magnetic sensitivity [11]. For these reasons, this research aims to develop and test two methods of chemical synthesis with a conductive effect similar to that of conventional PPy, using the same oxidizing and surfactant agents but replacing the pyrrole with the B-complex (a group of vitamins that contains pyrrolic rings in its chemical structure).

2 Materials and Methods

This study was carried out in the biochemistry laboratory of the National Institute of Telecommunications from June to November 2019. The research was divided into two methods, developed in the following steps: (A) definition of Methods I and II to obtain the pyrrole monomer from natural substrates that present pyrrolic chemical structures; (B) B-complex first treatment; (C) B-complex second treatment; (D) B-complex third treatment; (E) B-complex fourth treatment; and (F) conductive properties assay.

2.1 Definition of the Methods and Reagents

Method I consists of the following reagents: sodium lauryl sulfate (SLS) (0.08 mol/L), ammonium sulfate (0.05 mol/L), acetone, methanol, distilled water, and the B-complex. Method II consists of the following reagents: iron (III) sulfate (0.05 mol/L), iron perchloride (0.1 mol/L), acetone, methanol, distilled water, and the B-complex. The B-complex is a group of eight vitamins composed of 16 mg of niacin (B3), 5.0 mg of pantothenic acid (B5), 1.3 mg of pyridoxine (B6), 1.3 mg of riboflavin (B2), 1.2 mg of thiamine (B1), 2.4 mg of cyanocobalamin (B12), 240 µg of folic acid (B9), and 30 µg of biotin (B7) [12]. In the B-complex it is possible to find the union of four pyrrole groups (C4H5N), present in vitamin B12 [13]. The oxidizing agents (ammonium sulfate, iron (III) sulfate, and iron perchloride) and the surfactant (SLS) were used in the synthesis to perform the polymer doping and consequently increase its conductivity.

2.2 B-complex First Treatment

For the first treatment, the reagents of Method I were used. An aqueous solution was prepared by dissolving the SLS reagent in 1 L of distilled water under stirring for 30 min. This solution was divided in two: Solution 1 (S1) and Solution 2 (S2), with 500 mL each. To S1, 5.04 g of B-complex was added, and to S2, 3.30 g of ammonium sulfate. S1 and S2 were stirred for 15 min, after which S1 was added to S2 and the mixture was stirred for 1 h 05 min, resulting in Solution 3 (S3). To 40 mL of S3, 16 mL of acetone and 16 mL of methanol were added; this new solution was stirred for 30 min and then centrifuged for 10 min. Figure 1 shows the S3 solution with the black powder resulting from the centrifugation. After this process, the liquid was discarded and the first washing was done using 16 mL of ethanol, 16 mL of acetone, and 20 mL of distilled water distributed among 5 test tubes, all centrifuged for 10 min. For the second washing, 16 mL of acetone and 32 mL of distilled water were used; the solution was centrifuged for 10 more minutes, and a clear liquid was obtained. The liquid in the tubes was filtered through quantitative filter paper. The powder obtained was removed from the filter paper, placed in a porcelain vessel, and dried in an oven for 72 h at 75 °C. After drying, the powder was weighed.

Fig. 1 Black powder obtained after the first centrifugation process

2.3 B-complex Second Treatment

The second treatment with the B-complex was carried out using the reagents of Method II. An aqueous solution was prepared by dissolving 1.999 g of iron (III) sulfate in 50 mL of distilled water, forming Solution 1 (S1). Another solution was made by dissolving 1.059 g of B-complex in 50 mL of distilled water, creating


the Solution 2 (S2). The S1 and S2 solutions were combined using a pipette and stirred for 3 h 30 min, resulting in Solution 3 (S3). S3 was centrifuged for 10 min. Three washings with distilled water and four centrifugation cycles were performed until a clear liquid was obtained. After the fourth centrifugation, the solution was filtered and taken to an oven for 16 h at 75 °C to obtain a dry powder. To verify the magnetic properties, an HD magnet was used to attract the black powder, as seen in Fig. 2.

2.4 B-complex Third Treatment

The third treatment using the B-complex was made with the reagents of Method II. An aqueous solution was prepared by dissolving 1.622 g of iron perchloride in 50 mL of distilled water, forming Solution 1 (S1). Solution 2 (S2) was obtained by dissolving 1.059 g of B-complex in 50 mL of distilled water. The S1 and S2 solutions were combined using a pipette and stirred for 3 h 30 min, resulting in Solution 3 (S3). The centrifugation, washing, drying, and magnetic-property verification processes were the same as in the second treatment with the B-complex.

2.5 B-complex Fourth Treatment

The B-complex fourth treatment was carried out using the reagents of Method I. In this treatment, the amounts of the SLS and ammonium sulfate reagents were increased to 46.137 g and 6.606 g, respectively. An aqueous solution was prepared by dissolving the SLS reagent in 500 mL of distilled water and stirring for 30 min. This solution was divided in two, Solution 1 (S1) and Solution 2 (S2), with 250 mL each. To S1, 5.0428 g of B-complex was added, and the ammonium sulfate was added to S2. Both solutions were

Fig. 2 Black powder attracted by the HD magnet


stirred for 15 min. After this process, S1 and S2 were combined using a pipette and stirred for 1 h, resulting in Solution 3 (S3). To 40 mL of S3, 32 mL of methanol was added, and the mixture was stirred for 30 min. Subsequently, S3 was dispensed into 5 test tubes and centrifuged for 10 min. To increase the powder concentration in the tubes, the liquid in each tube was removed and more S3 solution was added; the tubes were then centrifuged for 10 more minutes before the first washing. The washing was performed using 50 mL of methanol in 30 mL of distilled water distributed among test tubes centrifuged for 10 min. After the last centrifugation, a considerable quantity of powder could be seen at the bottom of the tube, which was refrigerated for seven days. The centrifugation, washing, drying, and magnetic-property verification processes were the same as in the second treatment with the B-complex.
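The reagent masses quoted for Method I follow from the stated molar concentrations. A sketch (Python; the molar masses are from standard tables, not from the paper, and the helper name is ours):

```python
# Recovering Method I reagent masses from the stated molarities.
# Molar masses are from standard tables (not from the paper).

MOLAR_MASS = {
    "sodium lauryl sulfate": 288.38,  # g/mol, NaC12H25SO4
    "ammonium sulfate": 132.14,       # g/mol, (NH4)2SO4
}

def grams_needed(reagent: str, molarity: float, volume_l: float) -> float:
    """Mass (g) to dissolve for a target molar concentration in a given volume."""
    return MOLAR_MASS[reagent] * molarity * volume_l

# Ammonium sulfate at 0.05 mol/L in the 500 mL portion used in the first treatment:
print(round(grams_needed("ammonium sulfate", 0.05, 0.5), 2))       # 3.3 (matches the 3.30 g used)
# SLS at 0.08 mol/L in 1 L of distilled water:
print(round(grams_needed("sodium lauryl sulfate", 0.08, 1.0), 2))  # 23.07
```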

2.6 Conductive Properties Assay

The first technique used to assess the conductive characteristics of the products of Methods I and II was the four-probe method. This technique consists of four electrically conductive probes arranged in a line and placed in contact with the sample surface. The probes are equally spaced at a known distance (S). An electrical current (i) is supplied to the sample through the external probes, and the potential difference (V) is measured across the internal probes [9, 14]. To measure the voltage, a colorless nail base coat was used, because it is an electrical insulator and does not interfere with the conductivity of the material. The obtained powders were therefore mixed with the nail base and deposited separately on 5 cm lines drawn on paper, as seen in Fig. 3a. For each sample, four reference points were used, with a distance of 1.5 cm between them. A 5 V DC (1 A) source powered the peripheral region, and the multimeter test probes (Minipa ET-2042D) were placed at the central points of each sample. The second technique to check the conductive characteristics of the powders obtained in Methods I and II used a Siemens conductivity meter with a range of 40 mΩ to 1 kΩ, as shown in Fig. 3b. For the third technique, a conductivity meter (HANNA Instruments Brasil) was used; this device measures the conductivity of the hemodialysis water of the Teaching Hospital of Itajubá, MG. In this test, due to the amount of powder available, measurements were performed only with the powders obtained from the second and fourth treatments. To


Fig. 3 a Colorless nail base mixed with the powders obtained in Methods I and II. b Conductivity measurement of the powders mixed with the nail base using the Siemens conductivity meter

measure the conductivity, three beakers with 100 mL of hemodialysis water were used: one with 0.05 g of graphite as the control sample, and the other two with 0.05 g of the powder obtained in the second and fourth treatments, respectively.
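The paper reports raw voltages rather than resistivities, but the textbook relation behind the four-probe technique, for a sample much thicker than the probe spacing, is ρ = 2πS·V/i, with σ = 1/ρ. A sketch under that assumption (the example reading is hypothetical, not a measurement from this study):

```python
import math

# Textbook four-point-probe relation for a semi-infinite (bulk) sample:
# resistivity rho = 2*pi*S * V/i, conductivity sigma = 1/rho.
# Illustrative only: the study reports voltages, not resistivities.

def bulk_resistivity(spacing_cm: float, volts: float, amps: float) -> float:
    """Resistivity in ohm*cm from probe spacing S, voltage V, and current i."""
    return 2.0 * math.pi * spacing_cm * volts / amps

def conductivity(spacing_cm: float, volts: float, amps: float) -> float:
    """Conductivity in S/cm, the reciprocal of the resistivity."""
    return 1.0 / bulk_resistivity(spacing_cm, volts, amps)

# Hypothetical reading with the spacing used here: S = 1.5 cm, V = 1.7 mV, i = 1 A.
rho = bulk_resistivity(1.5, 1.7e-3, 1.0)
print(round(rho, 4))  # 0.016 ohm*cm
```

Thin-film or finite-thickness samples require geometric correction factors, which is one reason pressed pellets are normally prepared for such comparisons.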

3 Results

To verify the conductive properties of the powders of each treatment, it was necessary to produce adequate amounts of powder. Table 1 presents the amount of powder obtained in each treatment. As Table 1 shows, the first and third treatments yielded only a small amount of powder, so they could not be used to verify the conductive characteristics in the conductivity

meter. However, four-probe measurements were performed on each sample in triplicate, and the results are presented in Table 2. The resistance of the samples was not measured because the resistance values were outside the multimeter's reading scale. Tables 3, 4, and 5 present the values obtained from measurements performed in quintuplicate. After confirming the conductivity of the powders obtained from the synthesis, the arithmetic average was calculated for the control sample and the second and fourth treatments, as shown in Table 6. The results obtained with the tested methods were satisfactory for the second and fourth treatments, since their conductive properties could be verified with all measurement techniques, and their magnetic properties with the HD magnet.

Table 1 Amount of powder obtained

Treatment          Amount of powder (g)
First treatment    0.0436
Second treatment   0.7139
Third treatment    0.0682
Fourth treatment   0.1996

4 Discussion

Comparing the results in Table 6 with the literature values shown in Table 7 makes it possible to analyze the conductivity of PPy and its syntheses. The product obtained in the present study presents several characteristics of PPy that

Table 2 Potential difference obtained for each sample (triplicate, mV)

Sample                    1      2      3
Nail base                 0      0      0
Base + graphite           6.4    4.2    3.9
Base + first treatment    3.7    5.8    3.3
Base + second treatment   1.7    1.2    2.2
Base + third treatment    10.6   12.4   9.8
Base + fourth treatment   1.2    0.9    1.5

Table 3 Graphite conductivity measurement (quintuplicate, µS/cm)

Sample                         1     2     3     4     5
Hemodialysis water             2.8   2.9   3.0   3.0   3.0
Hemodialysis water + powder    3.5   3.4   3.4   3.4   3.5

Table 4 Second treatment conductivity measurement (quintuplicate, µS/cm)

Sample                         1     2     3     4     5
Hemodialysis water             2.8   2.9   2.9   2.8   2.9
Hemodialysis water + powder    4.1   4.8   4.7   4.8   4.8

Table 5 Fourth treatment conductivity measurement (quintuplicate, µS/cm)

Sample                         1     2     3     4     5
Hemodialysis water             2.8   2.7   2.9   2.8   2.9
Hemodialysis water + powder    4.2   4.3   4.2   4.5   4.4

Table 6 Average of the potential difference and conductivity

Sample             Potential difference (mV)   Conductivity (µS/cm)
Graphite           4.83                        0.50
Second treatment   1.7                         1.79
Fourth treatment   0.93                        1.57

Table 7 Conductivity according to other authors

Authors                Conductivity (mS/cm)
Campos et al. [1]      12600 ± 2330
Santim, 2011 [15]      0.0334
Sowmiya et al. [16]    8.21
Kim et al. [17]        280
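The graphite row of Table 6 can be recovered from Tables 2 and 3: the potential difference is the mean of the triplicate readings, and the conductivity is the mean sample reading net of the mean water control. A sketch (the net-of-control subtraction is our reading of the data; the treatment rows are not all exactly reproduced this way, so the original averaging may have differed slightly):

```python
from statistics import mean

# Averaging implied by Tables 2, 3, and 6 for the graphite control:
# potential difference = mean of the triplicate four-probe readings;
# net conductivity = mean sample reading minus the mean water control.

graphite_mv = [6.4, 4.2, 3.9]                    # Table 2, base + graphite
water = [2.8, 2.9, 3.0, 3.0, 3.0]                # Table 3, hemodialysis water
water_plus_graphite = [3.5, 3.4, 3.4, 3.4, 3.5]  # Table 3, water + graphite

pd_avg = mean(graphite_mv)
net_cond = mean(water_plus_graphite) - mean(water)

print(round(pd_avg, 2))    # 4.83 -> matches the graphite row of Table 6
print(round(net_cond, 2))  # 0.5  -> matches the graphite row of Table 6
```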

differ from those of the substrates used in the chemical reaction. However, the results cannot be directly compared, because the measurements in this research were made in an aqueous medium; for such a comparison, hydraulic and manual presses would be needed to prepare the samples, according to those authors.

The use of the B-complex as a substitute for PPy is viable due to its low cost and its conductive and magnetic properties similar to those of the polymer. For this reason, the synthesis obtained in this work can be used in applications such as biosensors for diagnosing diseases such as cancer, in which ill cells present electrical responses different from those of


healthy cells. It can also be used in controlled drug release, synthetic membranes, artificial muscles, tissue engineering, and neural probes [5, 15, 18]. Furthermore, PPy can serve as electromagnetic shielding for electromedical equipment that needs this protection for good performance. As future research, the synthesis can be performed with baker's yeast, which presents pyrrolic rings in its chain and is low-cost.

5 Conclusion

The chemical syntheses of the second and fourth treatments showed good results, presenting conductive and magnetic properties similar to those of PPy. From this research, future work will continue with more tests, including biocompatibility assays and the physicochemical characterization of the B-complex synthesis.

Acknowledgements The authors would like to thank FAPEMIG (Foundation for Research Support of the State of Minas Gerais) and all other agencies of the Brazilian government.

Conflict of Interest The authors declare no potential conflicts of interest with respect to the research, authorship, or publication of this article.

References

1. Campos RM, Rezende MC, Faez R (2009) Chemical synthesis of polypyrrole: influence of anionic surfactants on thermal and conductive properties. In: Proceedings of the 10th Brazilian Polymer Congress, Foz do Iguaçu
2. Ramôa SDAS, Merlini C, Barra GMO (2014) Obtaining montmorillonite/polypyrrole conducting nanocomposites: effect of incorporating surfactant on structure and properties. Polímeros 24(spe):57–62. https://doi.org/10.4322/polimeros.2014.051
3. Júnior LR, Neto GO, Kubota LT (1997) Conductive polymer-based potentiometric transducers: analytical applications. New Chem 1997:519–527
4. Fahlgren A, Bratengeier C, Gelmi A, Semeins CM, Klein-Nulend J, Jager EWH et al (2015) Biocompatibility of polypyrrole with human primary osteoblasts and the effect of dopants. PLoS One, 1–18
5. Nogueira FAR (2010) Synthesis and characterization of polypyrrole derivatives for application in electrochemical devices. Master's dissertation, Federal University of Alagoas, Maceió, AL
6. Gomez N, Schmidt CE (2007) Nerve growth factor-immobilized polypyrrole: bioactive electrically conducting polymer for enhanced neurite extension. J Biomed Mater Res 81A:135–149
7. Fonner JM, Forciniti L, Nguyen H, Byrne JD, Kou Y, Syeda-Nawaz J et al (2008) Biocompatibility implications of polypyrrole synthesis techniques. Biomed Mater 3:1–12
8. Ferreira CL (2017) Synthesis of polymeric nanocomposites PCL/PLGA/polypyrrole nanofibers for application in biocompatible conduits for nerve regeneration. Master's dissertation, Pontifical Catholic University of Rio Grande do Sul, Porto Alegre, RS
9. Noeli S (1998) Photochemical synthesis, electrical and morphological characterization of PPy/PVDF composites. Master's dissertation, Faculty of Chemical Engineering of Campinas, SP
10. Polypyrrole price at https://www.sigmaaldrich.com/catalog/product/aldrich/482552?lang=pt&region=BR&gclid=Cj0KCQjwgezoBRDNARIsAGzEfe6O5DxbAli8qLqJ_LRr_q0QYYRQqt1XEQR5wwE73G_Vb6fW_EDUA8waApTCEALw_wcB
11. Marchesi LFQP (2010) Study of the electrochemical and magnetic properties of polypyrrole. PhD thesis, Federal University of São Carlos, SP
12. VITGOLD at https://www.vitgold.com.br/b-complex
13. Silva MT (2019) Concentrations of vitamins B6, folate and B12 and their association with serum lipids in pregnancy: a prospective study. Master's dissertation, Federal University of Pelotas, Pelotas, RS
14. Okoa MM (2000) Four point measure at https://www.lsi.usp.br/~dmi/manuais/QuatroPontas.pdf
15. Santim RH (2011) Synthesis and characterization of polypyrrole (PPy) obtained by the conventional chemical process and microemulsion. Master's dissertation, Universidade Estadual Paulista, Faculty of Engineering of Ilha Solteira, Ilha Solteira, SP
16. Sowmiya G, Anbarasan PM, Velraj G (2017) Synthesis, characterization and electrical conductivity study of conductive polypyrrole doped with nano tin composite for antibacterial application. IRJET 4:127–131
17. Kim S et al (2016) Electrochemical deposition of conductive and adhesive polypyrrole-dopamine films. Sci Rep 6:30475
18. Hocevar MA (2011) Development of amperometric enzyme biosensors using polypyrrole nanoparticles. Master's dissertation, Federal University of Rio Grande do Sul, Porto Alegre, RS

Evaluation of the Effect of Hydrocortisone in 2D and 3D HEp-2 Cell Culture M. O. Fonseca, B. H. Godoi, N. S. Da Silva, and C. Pacheco-Soares

Abstract


Cancer is one of the diseases with the highest incidence in the world, and the patient's emotional state can act positively or negatively on the treatment. Cortisol is described as the primary stress hormone in the human body, and studies show a positive correlation between elevated cortisol levels and cancer progression. Corticoids can increase cell proliferation and reactive oxygen species that contribute to DNA damage, and prolonged exposure to stress can contribute to tissues becoming insensitive to cortisol. This study explores the influence of cortisol, an important hormone involved in stress, on tumor cell development, particularly in human laryngeal carcinoma cells (HEp-2). HEp-2 cells were exposed to increasing cortisol (hydrocortisone) concentrations for 24 or 48 h, and cytotoxicity (MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] assay), proliferation (crystal violet assay), and immunolabeling of 3D cultures for fibronectin and FAK were investigated. Corticosteroids and stress in cancer patients may interfere with cancer treatments, because, depending on the cell type, they may cause tumor cells to progress instead of regress. Although some cases favor corticosteroid use in cancer patients, a more detailed analysis is necessary before prescribing them, and it is also important to assess the patient's cortisol level before and after treatment.

Stress

M. O. Fonseca  B. H. Godoi  C. Pacheco-Soares (&) Laboratório de Dinâmica de Compartimentos Celulares, Universidade do Vale do Paraíba, Av Shishima Hifumi 2911, São Jose dos Campos, Brazil e-mail: [email protected]

1



Carcinoma



3D culture



Hydrocortisone

1 Introduction

Cancer was one of the most feared diseases of the twentieth century and continues to spread with increasing incidence in the twenty-first [1]. Several contributing factors have been identified, including genetic predisposition, exposure to environmental risk factors, infection by certain viruses, cigarette use, and the ingestion of carcinogens [2]. Psychological factors can also contribute to the development of cancer, given the effects of emotional states on hormonal modification and on the immune system [3]. Accordingly, a growing number of studies seek to relate or measure possible influences of psychological and social aspects on the development and possible aggravation of oncological pathologies. The disease's symptoms and the characteristics of the different forms of treatment used to fight cancer significantly interfere with patients' routines and quality of life, constituting significant stressors in many cases [4]. Studies relate stress-associated factors to tumor biology; those that relate the effects of glucocorticoids to the biology of tumor cells stand out [5–7]. The dysregulation of cortisol levels, a symptom associated with stress, also contributes to factors related to the morbidity, severity, and mortality of the disease process, including tumor progression in numerous oncological diseases such as breast cancer [8, 9].

N. S. Da Silva Laboratório de Biologia Celular e Tecidual, Universidade do Vale do Paraíba, São Jose dos Campos, Brazil © Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_19

2 Materials and Methods

2.1 Cell Culture HEp-2 cells (human laryngeal carcinoma) were obtained from the Rio de Janeiro cell bank and cultured at 37 °C under 5% CO2 in Dulbecco's Modified Eagle Medium (DMEM, Gibco BRL, Grand Island, NY, USA) supplemented with 10% fetal bovine serum (FBS) (Gibco BRL, Grand Island, NY, USA) and 1% penicillin and streptomycin (Invitrogen Life Technologies, Carlsbad, CA, USA).

2.2 Incubation with Hydrocortisone HEp-2 cells were plated (1 × 10⁵ cells/mL) in 24-well microplates with MEM culture medium supplemented with 10% fetal bovine serum (FBS) for cell adhesion, in a 5% CO2 incubator at 37 °C, and incubated overnight. The next day, the cells were treated with hydrocortisone 500 mg, diluted in PBS, for periods of 24 and 48 h at the following concentrations: 0.5, 1.0, 1.5, 2, and 2.5 µM [10].

2.3 Mitochondrial Metabolic Activity (MTT Assay) HEp-2 cells treated with hydrocortisone for 24 or 48 h were washed with PBS three times and incubated with MTT (0.5 mg/mL) for 1 h at 37 °C in an atmosphere of 5% CO2. The organic solvent DMSO (50 µL) was added over the formazan precipitates in each well. The plate was kept shaking for 10 min to solubilize the formazan crystals, and the absorbance reading was performed on a Packard SpectraCount at a wavelength of 570 nm. The data obtained were plotted using GraphPad 6.0.
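Absorbance readings from an MTT assay are typically normalized to the untreated control to express relative mitochondrial activity. A minimal sketch of that calculation, using made-up absorbance values rather than data from this study:

```python
import numpy as np

def mitochondrial_activity(abs_treated, abs_control, abs_blank=0.0):
    """Relative mitochondrial activity (%) from MTT absorbance at 570 nm.

    abs_treated: absorbances of treated wells (replicates)
    abs_control: absorbances of untreated control wells
    abs_blank:   absorbance of a medium-only well, if measured
    """
    treated = np.asarray(abs_treated, dtype=float) - abs_blank
    control = np.mean(np.asarray(abs_control, dtype=float)) - abs_blank
    return 100.0 * treated / control

# Hypothetical wells: treated absorbance slightly above control
activity = mitochondrial_activity([0.52, 0.55, 0.50], [0.50, 0.50, 0.50])
```

Values above 100% indicate increased activity relative to the control, as reported for several hydrocortisone concentrations here.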

2.4 Crystal Violet Assay Crystal violet staining is used to infer population density, a clonogenic-type test. It is a colorimetric method for quantifying adherent cells, as described by Fernandes et al. Crystal violet crosses cell and nuclear membranes, binding to DNA, RNA, and proteins, which allows the identification of viable cells. The spectrophotometric quantification of adherent cells thus indirectly indicates the number of viable cells and was used in this study to measure the population variation of the strains.

After treatment with cortisol for the times mentioned, the HEp-2 cells had the culture medium removed and were incubated with 100 µL of crystal violet for 3 min at room temperature. Afterward, the plate was washed with running water and incubated for 1 h with 100 µL of DMSO, before reading on the spectrophotometer at 570 nm.

2.5 3D Culture Cultures were performed using the Bio-Assembler™ kit designed for 24-well plates (n3D-Biosciences Inc, Houston, TX, USA) [11]. In summary, NanoShuttles™ were added to a T-25 culture flask at a ratio of 1 µL of NanoShuttles™ per 20,000 cells and incubated at 37 °C and 5% CO2 overnight. Then, the cells were detached with 5 mL of trypsin for 5 min and washed by centrifugation (600 × g for 5 min) with a balanced saline solution (PBS). Cell viability was determined by the trypan blue exclusion method (1% w/v in PBS), and the density was adjusted to 10⁶ cells/mL in supplemented RPMI-1640 medium. HEp-2 cells conjugated to NanoShuttles™ were seeded on a 24-well ultralow-attachment plate (ULA, Cellstar® Greiner Bio-one, Kremsmünster, Austria) at 10⁵ cells in a final volume of 400 µL/well. The 3D culture was achieved by incubating the plates (at 37 °C and 5% CO2) under the magnetic field, first using a bioprinting unit for 3 h, followed by the levitation unit during the entire culture period. This procedure allows for the growth of the cellular spheroid. The 3D culture plate was replenished with fresh medium every two days until the spheroid was used.

2.6 Immunostaining After seven days of culture, HEp-2 cell spheroids were divided into two groups: (i) a cells-only control group and (ii) a treatment group, in which the cells were incubated with cortisol (2.5 µM) for 48 h at 37 °C in a 5% CO2 atmosphere. After this, the spheroids were fixed with 4% paraformaldehyde in PBS for 15 min at room temperature. The cells were then permeabilized with 0.2% Triton X-100 in PBS for 10 min and blocked with 1% bovine serum albumin (BSA) in PBS for 30 min. Thereafter, the spheroids were incubated with a mouse monoclonal anti-human fibronectin antibody (1:500, 1 h) and a rabbit monoclonal anti-human FAK antibody (1:500, 1 h), and then with secondary antibodies: an anti-mouse polyclonal secondary antibody conjugated to FITC and a goat anti-rabbit polyclonal secondary antibody conjugated to TRITC (all antibodies from Sigma Aldrich, Co). The nuclei were marked with DAPI (4′,6-diamidino-2-phenylindole). Samples were visualized using a fluorescence microscope (DMIL, Leica).

2.7 Statistical Analysis Data are presented as mean and standard deviation, compared by two-way ANOVA and confirmed by the Tukey test. Statistical significance was accepted at P < 0.05, with *P < 0.05, **P < 0.01, ***P < 0.001, and ****P < 0.0001 considered significant. Experiments were performed in three independent replications with n = 8. GraphPad Prism 6® software was used to perform statistical and graphical analyses.
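The analysis here was run in GraphPad Prism; for a balanced design with two factors (e.g. concentration and incubation time), the same two-way ANOVA F statistics can be computed directly, as in this illustrative sketch (the Tukey post hoc step is omitted, and the numbers are synthetic):

```python
import numpy as np
from scipy import stats

def two_way_anova(data):
    """Balanced two-way ANOVA.

    data: array of shape (a, b, n) -- a levels of factor A,
    b levels of factor B, n replicates per cell.
    Returns {"A": (F, p), "B": (F, p), "AxB": (F, p)}.
    """
    data = np.asarray(data, dtype=float)
    a, b, n = data.shape
    grand = data.mean()
    mean_a = data.mean(axis=(1, 2))   # factor A level means
    mean_b = data.mean(axis=(0, 2))   # factor B level means
    mean_ab = data.mean(axis=2)       # cell means

    ss_a = b * n * np.sum((mean_a - grand) ** 2)
    ss_b = a * n * np.sum((mean_b - grand) ** 2)
    ss_ab = n * np.sum((mean_ab - mean_a[:, None]
                        - mean_b[None, :] + grand) ** 2)
    ss_err = np.sum((data - mean_ab[:, :, None]) ** 2)

    df_a, df_b = a - 1, b - 1
    df_ab, df_err = df_a * df_b, a * b * (n - 1)
    ms_err = ss_err / df_err

    out = {}
    for name, ss, df in [("A", ss_a, df_a), ("B", ss_b, df_b),
                         ("AxB", ss_ab, df_ab)]:
        f = (ss / df) / ms_err
        out[name] = (f, stats.f.sf(f, df, df_err))
    return out

# Synthetic data: factor A has a strong effect, factor B none
data = [[[1, 2, 3], [1, 2, 3]],
        [[5, 6, 7], [5, 6, 7]]]
res = two_way_anova(data)
```

With this toy data, factor A is highly significant while factor B and the interaction are null by construction.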

3 Results

The mitochondrial evaluation by MTT aimed to measure possible influences of incubation with hydrocortisone, at concentrations of 0.5, 1.0, 1.5, 2.0, and 2.5 µM, on HEp-2 cell culture, relating possible findings to the population density results, which were then analyzed. Figure 1 presents the results of the mitochondrial activity. Compared with the control group, the groups with increasing concentrations of hydrocortisone do not show a statistical difference within 24 h. The comparative analysis of the control group with the other groups in the 24 h period shows a statistically significant increase in mitochondrial activity, mainly relative to the control, at concentrations of 0.5 to 2.0 µM (p < 0.0001), while the comparison between different concentrations of hydrocortisone reveals a significant increase in mitochondrial activity between 0.5 versus 2.5 µM (p < 0.0001), 1.0 versus 2.0 µM (p = 0.0003), 1.0 versus 2.5 µM (p < 0.0001), 1.5 versus 2.5 µM (p < 0.0001), and 2.0 versus 2.5 µM (p < 0.0001). Within 48 h, a significant increase was observed at all concentrations compared to the control. Population growth was assessed using the crystal violet test. The data in Fig. 2 reveal that even under exposure to increasing hydrocortisone concentrations there was no population reduction in the two periods, except for the group incubated at 2.5 µM for 24 h, where a slight but not significant population reduction was observed. The 3D growth assessment of HEp-2 cell culture under the action of hydrocortisone (2.5 µM) after 24 and 48 h of incubation is shown in Figs. 3 and 4. At 24 h, the interaction with hydrocortisone interfered with the cells' spheroid formation, showing a very significant dispersion of cells marked with DAPI (p = 0.0004). Compared to the control, the fibronectin immunolabeling exhibited a significant reduction (p = 0.0192) in staining intensity (Fig. 3b). Within 48 h, there was no significant change in fluorescence intensity among the immunolabelings (Fig. 4b).

Fig. 1 Mitochondrial activity (MTT) assay of HEp-2 cells at 24 and 48 h after incubation with increasing concentrations of hydrocortisone, showing an increase in cell numbers with time. All values are expressed as mean ± standard error of the mean (SEM) from three different samples

Fig. 2 Crystal violet assay of HEp-2 cells after 24 and 48 h of incubation with increasing concentrations of hydrocortisone. All values are expressed as mean ± standard error of the mean (SEM) from three different samples

4 Discussion

The disclosure of cancer diagnosis and treatment is usually a traumatic experience for the patient, causing the release of cortisol in response to stress associated with physical and mental comorbidities.

Fig. 3 Tumor spheroids immunolabeled for fibronectin and FAK. Cell proliferation and morphology of tumor spheroids at day 7 of culture and after 24 h of incubation with 2.5 µM hydrocortisone. a Photomicrographs of spheroids immunolabeled with anti-fibronectin and anti-FAK; nuclei labeled with DAPI; bar 50 µm. b Fluorescence intensity for DAPI, fibronectin and FAK

Fig. 4 Tumor spheroids immunolabeled for fibronectin and FAK. Cell proliferation and morphology of tumor spheroids at day 7 of culture and after 48 h of incubation with 2.5 µM hydrocortisone. a Photomicrographs of spheroids immunolabeled with anti-fibronectin and anti-FAK; nuclei labeled with DAPI; bar 50 µm. b Fluorescence intensity for DAPI, fibronectin and FAK

To investigate the effects of cortisol on mitochondrial activity and cell proliferation, we exposed HEp-2 cells to 0.5–2.5 µM cortisol for 24 and 48 h and subjected them to MTT and crystal violet assays. The 3D culture was also tested to evaluate the extracellular matrix. The evaluation of mitochondrial activity shows that after 24 h of exposure to increasing concentrations of hydrocortisone, the cells presented a significant increase at 0.5 and 1.0 µM and a reduction at 2.5 µM. There is a significant increase in mitochondrial activity within 48 h, indicating intense cellular activity, as verified by Bomfim et al. and Peterson et al. [12, 13]. The behavior of HEp-2 cells (laryngeal carcinoma) was different from that of K562 cells (chronic myeloid leukemia) [14]: within 24 h, the latter showed reduced mitochondrial activity. However, both strains showed similar behavior after 48 h. The crystal violet assay reflects DNA duplication. The decrease observed within 24 h is insignificant, and the duplication of the genetic material and, consequently, the cell proliferation within 48 h remain similar to the control. These results are corroborated by Peterson et al. and Dong et al. [13, 15]. The evaluation of 3D growth of HEp-2 cells exposed to 2.5 µM demonstrates that this concentration interferes with the interaction between cells after 24 h of incubation. There was no significant change in the interaction and spheroid formation after 48 h. Similar results were obtained by Chen et al. and Al-natsheh [16].

5 Conclusion

Corticosteroids and stress in cancer patients may interfere with cancer treatments because, depending on the cell type, they may cause tumor cells to progress rather than regress. Although some cases favor corticosteroid use in cancer patients, a more detailed analysis is necessary before prescribing them. Moreover, it is important to assess the patient's cortisol level before and after treatment.

Acknowledgements This study had financial support from the Fundação de Amparo à Pesquisa de São Paulo (FAPESP) under Partnership Grant Number 16/17984-1.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Roy PS, Saikia BJ (2016) Cancer and cure: a critical analysis. Indian J Cancer 53. Available from https://www.indianjcancer.com/text.asp?2016/53/3/441/200658
2. Blackadar CB (2016) Historical review of the causes of cancer. Accessed 17 May 2020. Available from https://dx.doi.
3. Chae J, Lee CJ (2019) The psychological mechanism underlying communication effects on behavioral intention: focusing on affect and cognition in the cancer context. Commun Res 46(5):597–618. https://doi.org/10.1177/0093650216644021
4. Bedillion MF, Ansell EB, Thomas GA (2019) Cancer treatment effects on cognition and depression: the moderating role of physical activity. Breast 44:73–80. https://doi.org/10.1016/j.breast.2019.01.004
5. Reiche EMV, Nunes SOV, Morimoto HK (2004) Stress, depression, the immune system, and cancer. Lancet Oncol 5:617–625. https://doi.org/10.1016/S1470-2045(04)01597-9
6. Spiegel D, Giese-Davis J (2003) Depression and cancer: mechanisms and disease progression. Biol Psychiatry 54:269–282. https://doi.org/10.1016/S0006-3223(03)00566-3
7. Vitale I, Manic G, Galassi C, Galluzzi L (2019) Stress responses in stromal cells and tumor homeostasis. Pharmacol Therapeut 200:55–68. https://doi.org/10.1016/j.pharmthera.2019.04.004
8. Lillberg K, Verkasalo PK, Kaprio J, Teppo L, Helenius H, Koskenvuo M (2003) Stressful life events and risk of breast cancer in 10,808 women: a cohort study. Am J Epidemiol 157:415–423. https://doi.org/10.1093/aje/kwg002
9. Sephton SE, Sapolsky RM, Kraemer HC, Spiegel D (2000) Diurnal cortisol rhythm as a predictor of breast cancer survival. J Natl Cancer Inst 92(12):994–1000. https://doi.org/10.1093/jnci/92.12.994
10. Abdanipour A, Tiraihi T, Noori-Zadeh A, Majdi A, Gosaili R (2014) Evaluation of lovastatin effects on expression of anti-apoptotic Nrf2 and PGC-1α genes in neural stem cells treated with hydrogen peroxide. Mol Neurobiol 49(3):1364–1372. https://doi.org/10.1007/s12035-013-8613-5
11. Natânia de Souza-Araújo C, Rodrigues Tonetti C, Cardoso MR, Lucci de Angelo Andrade LA, Fernandes da Silva R, Romani Fernandes LG et al (2020) Three-dimensional cell culture based on magnetic fields to assemble low-grade ovarian carcinoma cell aggregates containing lymphocytes. Cells 9(3):635. https://doi.org/10.3390/cells9030635
12. Bomfim G, Merighe G, de Oliveira S, Negrao J (2018) Effect of acute stressors, adrenocorticotropic hormone administration, and cortisol release on milk yield, the expression of key genes, proliferation, and apoptosis in goat mammary epithelial cells. J Dairy Sci 101:6486–6496. https://doi.org/10.3168/jds.2017-14123
13. Petersen A, Carlsson T, Karlsson J-O, Jonhede S, Zetterberg M (2008) Effects of dexamethasone on human lens epithelial cells in culture
14. Fonseca MO, Da Silva NS, Soares CP (2019) Effect of cortisol on K562 leukemia cells. O Mundo Da Saúde, São Paulo 43(4):854–869. https://doi.org/10.15343/0104-7809.20194304854869
15. Dong J, Li J, Li J, Cui L, Meng X, Qu Y et al (2019) The proliferative effect of cortisol on bovine endometrial epithelial cells. Reprod Biol Endocrinol 17(1):97. Accessed 27 May 2020. https://doi.org/10.1186/s12958-019-0544-1
16. Chen YX, Wang Y, Fu CC, Diao F, Song LN, Li Z et al (2010) Dexamethasone enhances cell resistance to chemotherapy by increasing adhesion to extracellular matrix in human ovarian cancer cells. Endocr Relat Cancer 17(1):39–50. https://doi.org/10.1677/ERC-08-0296

Surface Topography Obtained with High Throughput Technology for hiPSC-Derived Cardiomyocyte Conditioning Lucas R. X. Cortella, I. A. Cestari, M. S., M. Mazzetto, A. F. Lasagni, and Ismar N. Cestari

Abstract

The use of human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CM) to replace myocardial tissue after an infarct holds great promise. However, hiPSC-CM are phenotypically immature compared to cells in the adult heart, hampering their clinical application. We aimed to develop and test a surface structuring technique that would improve hiPSC-CM structural maturation. Laser ablation was used to fabricate a micron-scale pattern on a polyurethane surface, and cell morphology, orientation, and F-actin assemblage were evaluated to detect phenotypic changes in response to the microtopography. The topography positively influenced cell morphology with regard to spreading area and elongation, as well as hiPSC-CM orientation, improving their structural maturation. The methodology has relatively low cost and is easily scalable, making it relevant for high-throughput applications such as drug screening for the pharmaceutical industry.

Keywords: hiPSC-CM · Cardiomyocytes · Surface topography · Direct laser interference patterning · Polyurethane

L. R. X. Cortella (✉) · I. A. Cestari · M. Mazzetto · I. N. Cestari Bioengineering Department, Heart Institute (InCor), University of São Paulo Medical School, Av. Dr. Enéas de Carvalho Aguiar, 44, 05403-900 São Paulo, Brazil. e-mail: [email protected]
M. S. · A. F. Lasagni Institute for Manufacturing Technology, Technische Universität Dresden, George-Baehr-Str. 3c, 01069 Dresden, Germany
M. S. PROBIEN-CONICET, Dpto. de Electrotecnia, Universidad Nacional del Comahue, Buenos Aires 1400, 8300 Neuquén, Argentina

1 Introduction

The use of human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CM) holds great promise for the development of physiologically relevant in vitro models for drug screening and mechanistic studies of cardiac development [1, 2]. Furthermore, tissue-engineering strategies using hiPSC-CM are actively sought for therapy after myocardial ischemia and infarction. These strategies seek to create viable tissue constructs to repair, replace or augment the function of injured or diseased myocardial tissue [3]. However, hiPSC-derived cardiac constructs have technical limitations that hamper their application in cardiac regeneration. These stem cell-derived cardiomyocytes are phenotypically immature compared to those present in the adult heart [4]. When hiPSC-CM are cultured in vitro, they present a random spatial distribution, circular morphology and isotropic actin organization, which results in an immature contractility pattern [5]. In contrast, in the native myocardium, cardiomyocytes are longitudinally aligned, rod-shaped cells with anisotropic actin distribution that communicate and contract in a specific directional manner [6]. Proper alignment of cardiomyocytes provides optimal coupling for electrical signal propagation and the synchronous cell contractions required for good cardiac function [7]. Consequently, achieving proper orientation of elongated hiPSC-CM is a fundamental goal in cardiac tissue engineering. One way to achieve similar physiological cellular organization in vitro is to use topographical cues to induce cell elongation and orientation [8]. There are many techniques available to create microtopographies with the potential to direct cell behavior. We previously demonstrated that grooves and ridges produced by Direct Laser Interference Patterning (DLIP) were able to promote elongation and alignment of endothelial cells [8]. DLIP permits the direct fabrication of periodic structures in

A. F. Lasagni Fraunhofer-Institut für Werkstoff-und Strahltechnik IWS, Winterbergstr. 28, 01277 Dresden, Germany © Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_20


the low micrometer to sub-micrometer scale on different types of solid materials such as polymers, metals and ceramics [9]. DLIP has significant advantages over other methods, since it is possible to process large areas with reproducibility and accuracy in a single step, with throughputs approaching 1 m²/min [10]. In this study, we hypothesized that a microtopography produced by DLIP would positively influence the morphology and orientation of hiPSC-CM statically cultured in vitro. We demonstrated that it is possible to use DLIP to pattern a polymeric polyurethane (PU) surface in a way useful for conditioning hiPSC-CM. Using this methodology, we showed that hiPSC-CM cultured onto aligned grooves were more elongated and oriented themselves according to the underlying topography when compared to cells cultured on non-patterned surfaces.

2 Materials and Methods

2.1 Polyurethane Substrate Preparation Polyurethane was prepared following the instructions from the manufacturer (Cequil™, Araraquara, Brazil) by mixing the pre-polymer and polyol components and casting over an inert mold. Equal parts of the two liquid components were weighed, placed in a heating oven (Quimis, Brazil) at 75 °C for 30 min to eliminate humidity, immediately mixed in equal stoichiometric ratio, and poured on the mold at 30 °C under vacuum (8 kPa) to remove air bubbles during the polymerization and curing process (approximately 30 min). The reaction between pre-polymer and polyol components resulted in PU films of 1.3 mm (±0.2 mm) thickness. The pre-polymer was industrially synthesized from 4,4′-diphenylmethane diisocyanate in molar excess with the polyol, keeping a percentage of free isocyanate for later reaction. The polyol is a polyester derived from castor oil, a natural oil with 90% of its fatty acid content consisting of ricinoleic acid.

2.2 DLIP on Polyurethane Line-like structures with a spatial period of 3 µm were produced using a high-power Nd:YAG laser (Quanta Ray, Spectra Physics) with a pulse duration of 10 ns and a fundamental wavelength of 1064 nm. The repetition rate of the laser system was 10 Hz. Due to the high absorption of PU in the UV spectral region, a wavelength of 266 nm was selected, which corresponds to the 4th harmonic of 1064 nm. The fluence, defined as the average optical energy per unit area, was set to 0.7 J cm⁻². A detailed description of the experimental setup can be found elsewhere [11].
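Fluence, as defined above, is simply pulse energy divided by irradiated area; a trivial check with hypothetical beam parameters (the actual pulse energy and spot size are not given in the text, only the resulting 0.7 J cm⁻²):

```python
def fluence_j_per_cm2(pulse_energy_j, spot_area_cm2):
    """Average optical energy per unit area delivered by one laser pulse."""
    return pulse_energy_j / spot_area_cm2

# Hypothetical parameters: a 14 mJ pulse over a 0.02 cm^2 spot
f = fluence_j_per_cm2(14e-3, 0.02)
```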

2.3 DLIP-Modified PU Surface Characterization—Atomic Force Microscopy (AFM) and Scanning Electron Microscopy (SEM) The surface micropattern produced on PU was characterized by AFM with a Nanoscope IIIA (Digital Instruments, USA), and the measurements were conducted using tapping mode. The measured features were: spatial period (Λ), defined as the distance between two consecutive ridges; groove depth, measured from ridge top to groove bottom; and ridge width, determined as the full width at half maximum of the groove depth. The topographies were further inspected with a TM 3000 scanning electron microscope (Hitachi, Japan) at an acceleration voltage of 5 kV.
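Given a 1-D height profile exported from such AFM scans, the spatial period and ridge-to-groove depth can be estimated numerically. A sketch under the assumption of a clean periodic line pattern, demonstrated on synthetic data (not the measured profiles):

```python
import numpy as np

def profile_features(z, dx):
    """Estimate spatial period and groove depth from a 1-D AFM height
    profile z, sampled every dx (same length unit as the result).

    Assumes ridges are the regions above the mean height; each rising
    edge of that mask marks one ridge.
    """
    z = np.asarray(z, dtype=float)
    above = z > z.mean()
    # indices where the profile rises above the mean (one per ridge)
    edges = np.flatnonzero(above[1:] & ~above[:-1])
    period = dx * np.mean(np.diff(edges)) if len(edges) > 1 else np.nan
    depth = z.max() - z.min()  # ridge top to groove bottom
    return period, depth

# Synthetic profile: 3 um period, 0.7 um peak-to-valley, 30 um scan
x = np.arange(0, 30, 0.01)
z = 0.35 * np.sin(2 * np.pi * x / 3.0)
period, depth = profile_features(z, dx=0.01)
```

For this synthetic profile the routine recovers the 3 µm period and 0.7 µm depth reported in Table 1 by construction.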

2.4 hiPSC-CM Cell Culture on DLIP-Modified PU Substrate Human induced pluripotent stem cell-derived cardiomyocytes were purchased from Pluricell Biotech (São Paulo, Brazil) and cultivated following the manufacturer's instructions. Cells were first plated at the 15th day of differentiation on a 24-well plate (Sarstedt), previously coated with extracellular matrix (Geltrex™, Thermo Fisher), at a density of 137 × 10³ cells/cm² and cultured at 37 °C in a 5% CO2 atmosphere with RPMI (Thermo Fisher) supplemented with plating medium supplement (PluriCardio™ PMS). Culture medium was replaced by RPMI containing maintenance medium supplement (PluriCardio™ MMS) every 24 h. To perform the experiments, sterile DLIP-modified and unmodified PU samples were placed inside a 96-well microplate (Sarstedt) and allowed to equilibrate with RPMI containing Geltrex™ for two hours prior to cell seeding. At the 19th day of differentiation, cells were harvested from the 24-well plate with 0.35% trypsin/EDTA solution (Gibco), centrifuged at 250 × g for 5 min, suspended in RPMI with PMS and seeded on PU at a density of 17 × 10³ cells/cm² for morphology and orientation analysis and at a density of 34 × 10³ cells/cm² for the immunocytochemistry assay, under the same culturing conditions. Culture medium was replaced with RPMI containing MMS every 24 h. Experimental groups were defined as follows: (a) unmodified PU (control) and (b) PU-L3 (line-like pattern with a spatial period of 3 µm).

2.5 Cell Morphology and Orientation—SEM and Fluorescent Staining The influence of microtopography on hiPSC-CM morphology and alignment was investigated. Cells at the 19th day of

Surface Topography Obtained with High Throughput Technology …

differentiation were cultured on DLIP-modified PU for 48 h and fixed with 4% paraformaldehyde (PFA) (Sigma) for one hour at 4 °C. They were then washed with phosphate buffered saline (PBS) (Gibco) and dried at room temperature before being placed in the scanning electron microscope vacuum chamber. F-actin and cell nuclei were visualized by fluorescent staining using a fluorescence microscope (TM300, Nikon, Japan) equipped with an AxioCam MRC camera (Zeiss, Germany). Cells were fixed with 4% PFA in PBS for two hours at 4 °C, permeabilized with 0.1% Igepal (Sigma, Brazil) at 37 °C for 30 min and blocked with 2% bovine serum albumin (BSA) (Sigma, Brazil) in PBS for one hour. F-actin fibers were stained with Alexa-488-phalloidin (A12379, Life Tech., USA) at 1:100 and nuclei were stained with Hoechst 33342 (B2261, Sigma, Brazil) at 1:50. PU samples were maintained in PBS/glycerol (1/1) solution and protected from light prior to viewing under the microscope. SEM and fluorescent images were used to estimate cell morphology with ZEN 2012 software (Zeiss, Germany) and cell orientation with ImageJ (NIH, USA). A total of six samples from each experimental group were examined and at least 150 cells per sample were imaged. Based on the outline of isolated cells, we estimated values of spreading area (the region filled by the projected cell boundary) and aspect ratio, which gives an indication of cellular elongation. Aspect ratio is defined as the ratio between the breadth (minimum Feret diameter) and length (maximum Feret diameter) of each cell and varies from zero to one: a value of 1 approximates the shape of a circle and a value near zero depicts a straight line. Cell orientation, defined as the angle between the major axis of the cell and the axis of the grooves, was estimated; cells were considered aligned when this angle was lower than 10°. A minimum angle of 0° indicated alignment parallel to the groove axis, a maximum angle of 90° indicated perpendicular alignment, and an average angle of 45° was expected for random orientation. For quantification of cell orientation on the control, an arbitrary axis was selected.

Fig. 1 a AFM characterization of the surface micropattern produced on PU, represented by a 3D projection obtained with NanoScope Analysis software. The X and Y axes indicate the width and length of the sections, respectively. Note that the Z-axis is expanded for better visualization (aspect ratio = 0.4). b Schematic representation of the topographical features measured by AFM
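The orientation and shape metrics described above (mean angle, percentage of cells aligned below 10°, and breadth/length aspect ratio) can be sketched as follows, with hypothetical measured angles rather than the study's data:

```python
import numpy as np

def orientation_stats(angles_deg, aligned_below=10.0):
    """Summarize orientation angles (0-90 deg, relative to groove axis)."""
    a = np.asarray(angles_deg, dtype=float)
    return {"mean_angle": float(a.mean()),
            "pct_aligned": 100.0 * float(np.mean(a < aligned_below))}

def aspect_ratio(min_feret, max_feret):
    """Breadth/length ratio: 1 ~ circle, values near 0 ~ straight line."""
    return min_feret / max_feret

angles = [2, 5, 8, 12, 30, 70]       # hypothetical angles in degrees
summary = orientation_stats(angles)   # 3 of 6 cells below 10 degrees
ar = aspect_ratio(5.0, 10.0)          # an elongated cell
```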

2.6 Statistical Analysis All values were obtained from datasets of three independent experiments performed in duplicate. Statistical comparison between the DLIP-modified PU and unmodified PU groups was carried out by the Mann–Whitney rank-sum test using SigmaStat statistical software (Jandel Scientific). Statistical significance was accepted at P < 0.05. Data are expressed as mean ± standard deviation (S.D.).
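The authors used SigmaStat; the same Mann–Whitney rank-sum comparison is available in SciPy. A sketch with hypothetical per-cell spreading areas (not data from this study):

```python
from scipy.stats import mannwhitneyu

# Hypothetical spreading areas (um^2) on patterned vs. flat PU
patterned = [410, 395, 430, 405, 388, 420]
control = [520, 540, 515, 560, 530, 525]

stat, p = mannwhitneyu(patterned, control, alternative="two-sided")
significant = p < 0.05
```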

3 Results

3.1 Surface Characterization of DLIP-Modified PU—AFM and SEM A two-beam configuration of the DLIP system was used to fabricate well-defined, homogeneous line-like micropatterns consisting of periodic, parallel ridges and grooves on the PU substrate. Figure 1a displays an AFM image of a 3D projection of the micropattern on structured PU, and Fig. 1b schematically represents the topographical features measured by AFM. Average values of spatial period, ridge width and groove depth are summarized in Table 1. The SEM images of Fig. 2 show a laser-irradiated PU with

122

L. R. X. Cortella et al.

Table 1 AFM measurements of spatial period, ridge width and groove depth. Mean values ± S.D. are shown in µm

AFM measurement    PU-L3
Spatial period     3.0 ± 0.01
Groove depth       0.7 ± 0.03
Ridge width        1.7 ± 0.01

no interference pattern that was used as the control group (Fig. 2a), as well as the DLIP-patterned PU substrate with a spatial period of 3.0 µm (Fig. 2b).

3.2 Alignment of hiPSC-CM in Response to DLIP-Modified PU We investigated hiPSC-CM orientation as a measure of the degree to which hiPSC-CM responded to the underlying topography. Figure 3 shows the distribution of orientation angles of cells cultured on DLIP-modified PU and control. Table 2 summarizes the values of average orientation angle and percentage of aligned cells. hiPSC-CM cultured on non-patterned PU were randomly oriented, resulting in an expected average angle of 47° ± 26°. In contrast, the interaction of hiPSC-CM with the topography narrowed the distribution of orientation angles, with 50% of cells aligned at angles lower than 10°. Figure 4 shows aligned hiPSC-CM following the groove direction.

3.3 Differential Morphology of hiPSC-CM Cultured on DLIP-Modified PU—Aspect Ratio and Spreading Area To evaluate the development of structural anisotropy, aspect ratio and cell area were determined. Contact of hiPSC-CM with the microtopography resulted in a more elongated shape than that of cells in contact with non-patterned PU, with a 29% difference in elongation (P < 0.001) (Fig. 5) and a 21% reduction in spreading area (Fig. 6) (P < 0.001).

3.4 Alignment of F-Actin Myofibrils hiPSC-CM cultured on DLIP-modified PU exhibited a parallel assemblage of F-actin filaments to groove direction (Fig. 7b), whereas cells on control surface displayed a random disposition of actin fibers across the cytoplasm (Fig. 7a).

Fig. 2 SEM visualization of the non-patterned control PU (a) and micropatterned PU-L3 (b). Scale bar: 100 µm

4 Discussion

In this work, we showed that a microtopography produced by the DLIP technique on the surface of a polymeric polyurethane substrate is capable of inducing a distinct morphology and orientation of hiPSC-CM in vitro. Using this methodology, we demonstrated that hiPSC-CM cultured onto aligned ridges and grooves were more elongated, had a smaller spreading area, and oriented themselves according to the underlying topography when compared to cells cultured on a non-patterned surface. These phenotypic changes may be associated with maturation aspects that are clinically relevant to the eventual production of myocardial tissue implants. The methodology is easily scalable and can be implemented at relatively low cost and high throughput. Our results indicate the feasibility of using DLIP to create precise and relevant topographical patterns to induce phenotypic characteristics of interest for the large-scale production of hiPSC-CM, important for obtaining functional cardiac tissue substitutes.

Surface Topography Obtained with High Throughput Technology …


Fig. 3 Histograms reporting the distribution of orientation angles in degrees (x axis) and their relative frequency (%) (y axis) for hiPSC-CM cultured for 48 h on control (top) and DLIP-modified PU (bottom)

Table 2 Values of mean orientation angles and percentage (%) of aligned cells on micropatterned PU and control. Cells oriented at an angle lower than 10° in relation to the groove direction were considered aligned. (*) indicates statistical significance (p < 0.001) between groups. Columns: mean orientation angle (°); % aligned cells (<10°).

Normality of the data was verified (p > 0.05) [23]. The differences between groups were tested by the t-test (p < 0.05) and the effect size was estimated by Cohen's d.
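The group comparison reported here (Student's t-test with Cohen's d as effect size) can be sketched with a minimal numpy implementation; the data below are illustrative, not the study's measurements:

```python
import numpy as np

def independent_t_and_cohens_d(a, b):
    """Equal-variance t statistic and Cohen's d for two groups.

    Cohen's d uses the pooled standard deviation; the independent-samples
    t statistic is then d / sqrt(1/na + 1/nb).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    sp = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                 / (na + nb - 2))
    d = (a.mean() - b.mean()) / sp
    t = d / np.sqrt(1.0 / na + 1.0 / nb)
    return float(t), float(d)

# Illustrative groups only.
t, d = independent_t_and_cohens_d([1, 2, 3, 4], [3, 4, 5, 6])
```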

3 Results


The second was the mean of the differences between corresponding peaks and valleys of the reference and estimated correlograms, vmm (4). Figure 2 illustrates the reference correlogram estimated from a sinusoidal signal, where Lag is the discrete time delay. The AuC of the reference correlogram is 0.32. The reference correlogram vector was defined as Vr = [0.87, 0.75, 0.62, 0.50, 0.37, 0.25, 0.12]; these are the peaks and valleys shown in Fig. 2. vmm was defined as in (4), in which Vc is the vector of length P specified by the peaks and valleys of the estimated correlogram.

The estimated correlograms for the volunteers of the CG and PD groups are shown in Figs. 3 and 4, respectively. Table 1 shows the mean and standard deviation of the estimated features for each group. The overall mean and standard deviation of the features were vmm = 0.154 ± 0.063 and AuC = 0.204 ± 0.040 for the CG, and vmm = 0.239 ± 0.085 and AuC = 0.179 ± 0.033 for the PD group. The Shapiro-Wilk test [23] confirmed the normality of both variables for both groups (p > 0.05). The results of the t-test and the analysis of the effect size are shown in Table 2.
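A sketch of the correlogram features is given below. The exact definitions of AuC (Eq. 3) and vmm (Eq. 4) are not recoverable from this excerpt, so the area computation and the extrema detection here are plausible readings rather than the authors' implementation; only the reference vector V_ref is taken from the text:

```python
import numpy as np

def autocorrelogram(x, max_lag):
    """Normalized autocorrelation function (ACF) of a 1-D signal."""
    x = np.asarray(x, float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf[: max_lag + 1] / acf[0]

def auc(acf):
    """Trapezoidal area under the absolute correlogram, normalized by
    the lag span (one plausible reading of the AuC feature)."""
    s = np.abs(acf)
    return float((s[0] / 2 + s[1:-1].sum() + s[-1] / 2) / (len(s) - 1))

def peaks_and_valleys(acf):
    """Absolute ACF values at interior local extrema, in lag order."""
    s = np.abs(acf)
    idx = [i for i in range(1, len(s) - 1)
           if (s[i] >= s[i - 1] and s[i] >= s[i + 1])
           or (s[i] <= s[i - 1] and s[i] <= s[i + 1])]
    return s[idx]

# Reference peak/valley magnitudes given in the text for the
# sinusoidal reference correlogram.
V_ref = np.array([0.87, 0.75, 0.62, 0.50, 0.37, 0.25, 0.12])

def vmm(V_est, V_ref=V_ref):
    """Mean absolute difference between corresponding peaks/valleys."""
    n = min(len(V_est), len(V_ref))
    return float(np.mean(np.abs(np.asarray(V_est)[:n] - V_ref[:n])))
```

With these definitions, a drawing that tracks the sinusoid closely yields an estimated correlogram near the reference, and therefore a small vmm.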


Fig. 3 Estimated correlograms for the control group

V. C. Lima et al.

Evaluation of the Motor Performance of People …

Fig. 4 Estimated correlograms for the PD group. Lag is the discrete time delay


Table 1 Mean and standard deviation of vmm and AuC for each participant. VC and VP are labels identifying individuals from the CG and PD groups, respectively

Participant | vmm (mean ± std) | AuC (mean ± std)
VC01 | 0.065 ± 0.046 | 0.262 ± 0.041
VC02 | 0.063 ± 0.045 | 0.270 ± 0.035
VC03 | 0.217 ± 0.153 | 0.196 ± 0.088
VC04 | 0.070 ± 0.050 | 0.127 ± 0.136
VC05 | 0.211 ± 0.149 | 0.174 ± 0.104
VC06 | 0.154 ± 0.109 | 0.209 ± 0.079
VC07 | 0.174 ± 0.123 | 0.203 ± 0.082
VC08 | 0.135 ± 0.096 | 0.220 ± 0.071
VC09 | 0.239 ± 0.169 | 0.167 ± 0.108
VC10 | 0.179 ± 0.127 | 0.219 ± 0.072
VC11 | 0.184 ± 0.130 | 0.200 ± 0.085
VC12 | 0.156 ± 0.110 | 0.221 ± 0.070
VP01 | 0.116 ± 0.082 | 0.214 ± 0.068
VP02 | 0.336 ± 0.238 | 0.139 ± 0.128
VP03 | 0.221 ± 0.156 | 0.180 ± 0.099
VP04 | 0.313 ± 0.222 | 0.152 ± 0.119
VP05 | 0.099 ± 0.070 | 0.246 ± 0.046
VP06 | 0.177 ± 0.125 | 0.203 ± 0.083
VP07 | 0.266 ± 0.188 | 0.188 ± 0.093
VP08 | 0.328 ± 0.231 | 0.149 ± 0.121
VP09 | 0.341 ± 0.241 | 0.143 ± 0.125
VP10 | 0.297 ± 0.210 | 0.163 ± 0.111
VP11 | 0.188 ± 0.133 | 0.207 ± 0.080
VP12 | 0.271 ± 0.192 | 0.167 ± 0.108

Table 2 Results of the t-test and effect size for each parameter

Parameter | t-test statistic (p-value) | Effect size (Cohen's d) | Interpretation of effect size
AuC | 1.788 (p = 0.044) | 0.730 | Medium
vmm | 3.066 (p = 0.003) | 1.252 | Large

4 Discussion

In this study, a novel method was proposed to quantify the fine motor performance of people with PD. Motor performance was analyzed based on two features, AuC and vmm, estimated from the ACF of acceleration signals acquired while the subjects followed a sinusoidal pattern. Most studies in the literature assess motor performance in PD through the analysis of spiral drawings [24, 25]; however, recent results suggest the need for sinusoidal patterns in fine movement evaluation [26].

The statistical differences between groups are believed to have occurred due to the influence of motor symptoms caused by the disease. However, the correlograms shown in Fig. 4 and the results shown in Table 1 highlight that some volunteers with PD had good motor performance, which can be justified by the efficacy of drug therapy [5]. The large effect size suggests that the feature vmm is more suitable for the evaluation of motor performance of people with PD. Because vmm is computed at the peaks and valleys, it can contribute to detecting the difficulty these individuals have in changing direction while following sinusoidal patterns.


The main limitation of this study was not evaluating the relationship between the obtained results and the scores of a clinical scale such as the UPDRS. Such an analysis could be useful to measure the severity of the disorder and its impact on fine motor performance. Future work can include the interpretation of the estimated features in the context of experimental tasks in which the individual draws sinusoidal patterns without visual cues.

5 Conclusions

This study presented a simple and practical method to assess motor performance based on the analysis of features extracted from the correlogram of acceleration signals. The features allowed for the discrimination between the healthy control and PD groups. The evaluation of sinusoidal drawings can be introduced into clinical practice for the follow-up of PD, and as an objective instrument for evaluating the efficacy of drug- and surgery-based therapies.

Acknowledgements The present work was carried out with the support of the National Council for Scientific and Technological Development (CNPq), the Coordination for the Improvement of Higher Education Personnel (CAPES – Program CAPES/DFATD-88887.159028/2017-00, Program CAPES/COFECUB-88881.370894/2019-01) and the Foundation for Research Support of the State of Minas Gerais (FAPEMIG-APQ-00942-17). V. C. Lima is a scientific initiation fellow of CNPq (Edital Nº 02/2019 PIBIC CNPq/UFU) supervised by A. O. Andrade. M. F. Vieira, A. A. Pereira, and A. O. Andrade are fellows of CNPq, Brazil (306205/2017-3, 310911/2017-6, and 304818/2018-6, respectively).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Dhall R, Kreitzman DL (2016) Advances in levodopa therapy for Parkinson disease. Neurology 86:S13–S24. https://doi.org/10.1212/WNL.0000000000002510
2. Patel S, Lorincz K, Hughes R et al (2009) Monitoring motor fluctuations in patients with Parkinson's disease using wearable sensors. IEEE Trans Inf Technol Biomed 13:864–873. https://doi.org/10.1109/TITB.2009.2033471
3. Tiago MSF, Almeida FO, Santos LS, Veronezi RJB (2001) Instrumentos de avaliação de qualidade de vida na doença de Parkinson. Rev Neurociências 18:538–543. https://doi.org/10.34024/rnc.2010.v18.8437
4. Antes D, Katzer J, Corazza S (2009) Coordenação motora fina e propriocepção de idosas praticantes de hidroginástica. Rev Bras Ciências do Envelhec Hum 5. https://doi.org/10.5335/rbceh.2012.109
5. Memedi M, Sadikov A, Groznik V et al (2015) Automatic spiral analysis for objective assessment of motor symptoms in Parkinson's disease. Sensors 15:23727–23744. https://doi.org/10.3390/s150923727
6. Meinel K (1984) Motricidade I: teoria da motricidade esportiva sob o aspecto pedagógico. Ao Livro T, Rio de Janeiro
7. Saavedra Moreno JS, Millán PA, Buriticá Henao OF (2019) Introducción, epidemiología y diagnóstico de la enfermedad de Parkinson. Acta Neurológica Colomb 35:2–10. https://doi.org/10.22379/24224022244
8. Lees AJ, Hardy J, Revesz T (2009) Parkinson's disease. Lancet 373:2055–2066. https://doi.org/10.1016/S0140-6736(09)60492-X
9. Machado ARP, Zaidan HC, Paixão APS et al (2016) Feature visualization and classification for the discrimination between individuals with Parkinson's disease under levodopa and DBS treatments. Biomed Eng Online 15:169. https://doi.org/10.1186/s12938-016-0290-y
10. Hely MA, Chey T, Wilson A et al (1993) Reliability of the Columbia scale for assessing signs of Parkinson's disease. Mov Disord 8:466–472. https://doi.org/10.1002/mds.870080409
11. Rigas G, Tzallas AT, Tsipouras MG et al (2012) Assessment of tremor activity in the Parkinson's disease using a set of wearable sensors. IEEE Trans Inf Technol Biomed 16:478–487. https://doi.org/10.1109/TITB.2011.2182616
12. Heldman DA, Espay AJ, LeWitt PA, Giuffrida JP (2014) Clinician versus machine: reliability and responsiveness of motor endpoints in Parkinson's disease. Parkinsonism Relat Disord 20:590–595. https://doi.org/10.1016/j.parkreldis.2014.02.022
13. Mellone S, Palmerini L, Cappello A, Chiari L (2011) Hilbert–Huang-based tremor removal to assess postural properties from accelerometers. IEEE Trans Biomed Eng 58:1752–1761. https://doi.org/10.1109/TBME.2011.2116017
14. Djuric-Jovicic MD, Jovicic NS, Radovanovic SM et al (2014) Automatic identification and classification of freezing of gait episodes in Parkinson's disease patients. IEEE Trans Neural Syst Rehabil Eng 22:685–694. https://doi.org/10.1109/TNSRE.2013.2287241
15. Salarian A, Russmann H, Wider C et al (2007) Quantification of tremor and bradykinesia in Parkinson's disease using a novel ambulatory monitoring system. IEEE Trans Biomed Eng 54:313–322. https://doi.org/10.1109/TBME.2006.886670
16. Silva APSPB da (2018) O uso de sensores inerciais para caracterização e classificação do tremor de punho em indivíduos com a doença de Parkinson e correlação com a escala de avaliação subjetiva: UPDRS. Universidade Federal de Uberlândia
17. RStudio Team (2019) RStudio: integrated development environment for R
18. R Core Team (2020) R: a language and environment for statistical computing
19. de Oliveira Andrade A (2019) TREMSEN-Toolbox. https://doi.org/10.5281/zenodo.3583452
20. Andrade AO, Ferreira LCV, Rabelo AG et al (2017) Pelvic movement variability of healthy and unilateral hip joint involvement individuals. Biomed Signal Process Control 32:10–19. https://doi.org/10.1016/j.bspc.2016.10.008
21. Nasir Husain Q, Bakri Adam M, Shitan M, Fitrianto A (2016) Extension of Tukey's smoothing techniques. Indian J Sci Technol 9. https://doi.org/10.17485/ijst/2016/v9i28/97354
22. Wise J (1955) The autocorrelation function and the spectral density function. Biometrika 42:151. https://doi.org/10.2307/2333432
23. Razali NM, Wah YB (2011) Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. J Stat Model Anal 2:21–33
24. Almeida MFS, Cavalheiro GL, Furtado DA et al (2012) Quantification of physiological kinetic tremor and its correlation with aging. In: 2012 Annual international conference of the IEEE engineering in medicine and biology society. IEEE, pp 2631–2634
25. Gupta JD, Chanda B (2019) Novel features for diagnosis of Parkinson's disease from off-line Archimedean spiral images. In: 2019 IEEE 10th international conference on awareness science and technology (iCAST). IEEE, pp 1–6
26. Folador JP, Rosebrock A, Pereira AA et al (2020) Classification of handwritten drawings of people with Parkinson's disease by using histograms of oriented gradients and the random forest classifier. In: IFMBE Proceedings, pp 334–343

Shear Modulus of Triceps Surae After Acute Stretching M. C. A. Brandão, G. C. Teixeira, and L. F. Oliveira

Abstract

Stretching programs are frequently used in order to reduce or avoid the risk of injury, in addition to promoting a gain in flexibility. The dynamic elastography technique Supersonic ShearWave Imaging (SSI) was used to analyze the changes in the shear modulus (µ) of the triceps surae tissue, in vivo, after an acute stretching maneuver. Some studies have analyzed the µ of the medial gastrocnemius (MG) and lateral gastrocnemius (LG) after acute stretching, but data for the Achilles tendon (AT) seem to be absent. Therefore, the aim of this study is to analyze the µ of the MG, LG and AT immediately after an acute stretch of the plantar flexors. Eight healthy young adults performed two stretching exercises for the triceps surae, and 3 sets of one minute were performed for each exercise. Range of motion (ROM) was measured pre and post stretching, as well as the elastographic images, for the MG, LG and AT. After acute stretching, there was a significant increase in ROM (p = 0.033; mean pre: 34.75°; mean post: 38.50°); however, no significant difference in µ was observed for the muscle and tendon structures (MG mean pre: 5.81 ± 2.85 kPa, mean post: 7.25 ± 2.37 kPa, p = 0.200; LG mean pre: 4.89 ± 1.74 kPa, mean post: 6.49 ± 3.18 kPa, p = 0.255; AT mean pre: 128.12 ± 26.66 kPa, mean post: 108.06 ± 41.79 kPa, p = 0.168). We conclude that the stretching protocol applied in this study resulted in a significant increase in the range of motion; however, this seems not to be due to changes in the mechanical properties of the triceps surae muscle and tendon structures.

Keywords

Elastography · Supersonic shearwave imaging · SSI · Achilles tendon · Gastrocnemius muscles · Stretching

M. C. A. Brandão (✉) · G. C. Teixeira · L. F. Oliveira
Biomedical Engineering Program, UFRJ, 2030 Av. Horácio Macedo, Bloco I, Subsolo 044-C, Rio de Janeiro, Brazil
e-mail: [email protected]

1 Introduction

Stretching programs are frequently used in order to reduce or avoid the risk of injury, in addition to promoting flexibility gains, and thus may improve sports performance [1–3]. One of the variables related to the efficiency of a stretching program is the maximum range of motion (ROM), which can be measured by different techniques, using goniometry, dynamometry and/or inertial sensors. However, there are still gaps in the understanding of the possible structural and mechanical adaptations of the tissue after a stretching program [4–6]. The triceps surae is formed by the medial gastrocnemius (MG), lateral gastrocnemius (LG) and soleus muscles and the Achilles tendon (AT). It is a well-investigated structure, since the AT has high rates of injury in runners [7]. In order to reduce the risk of injury, stretching programs are applied to the triceps surae, although there is no scientific evidence of their benefit. A method for investigating possible mechanical adaptations in vivo is dynamic elastography of the Supersonic Shearwave Imaging (SSI) type, an imaging technique that quantifies the tissue shear modulus (µ) in vivo, non-invasively and in real time (30 ms) [8, 9]. This technique is based on a pushing mode, consisting of the emission of high-intensity acoustic radiation forces focused at different depths of the tissue, generating shear waves transverse to the beam [10, 11]. Simultaneously with the pushing mode, other piezoelectric elements of the transducer operate in imaging mode, calculating in an ultra-fast way (2 ms, at a frequency of 30 kHz) the propagation speed of these shear waves (cs) [8, 10, 11]. Assuming the tissue to be isotropic and purely elastic, and considering

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_52


the density of the biological tissue (ρ) to be 1010 kg/m³, the shear modulus is estimated as:

µ = ρ · cs²    (1)
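Equation (1) is straightforward to apply; the helper below is a sketch (the function name is ours), using the density value assumed in the text:

```python
RHO = 1010.0  # assumed tissue density, kg/m^3 (from the text)

def shear_modulus_kpa(cs_m_per_s):
    """Shear modulus mu = rho * cs**2 (Eq. 1), returned in kPa."""
    return RHO * cs_m_per_s ** 2 / 1000.0
```

A shear wave speed of 2.4 m/s, for instance, maps to roughly 5.8 kPa, the order of magnitude reported for the gastrocnemius muscles in this study.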

Based on the literature, elastography is a technique with good reliability, validated in vitro for muscles and tendons [12, 13], and an adequate tool for analyzing tissue changes after clinical, rehabilitation, strength training and stretching interventions. Studies using SSI have investigated the effect on the µ of the MG and/or LG after acute stretching [14–16]. Nakamura et al. [15] observed significant reductions in the µ of the MG after acute stretching in healthy young adults; however, the authors did not measure the LG and AT. On the other hand, Akagi and Takahashi [14] and Taniguchi et al. [16] found no significant changes in muscle µ after acute stretching. Only one study using SSI analyzed the µ of the AT after acute stretching [17] and verified a significant increase only for the non-dominant limb. An important point to consider is that these studies present very different methodological approaches, including the stretching programs and imaging acquisition protocols. Thus, there are still scientific gaps regarding the mechanical responses of the three structures (MG, LG and AT) after acute stretching, and it is not known whether the gain in joint amplitude occurs due to a structural or a neural change. Therefore, the aim of this study is to analyze the µ of the medial and lateral gastrocnemius muscles and the Achilles tendon immediately after acute stretching.

2 Methods

2.1 Experimental Procedure

This study was approved by the Ethics Committee of the Clementino Fraga Filho University Hospital (HUCFF/UFRJ), nº 3.672.989. The sample consisted of 8 healthy adults, 4 men and 4 women, with mean age 26.75 ± 6.60 years, weight 73.63 ± 13.68 kg and height 1.71 ± 0.04 m. For data acquisition, one visit to the laboratory was required. During the visit, the steps were: (a) acquisition of elastographic images in the dominant limb for the AT, MG and LG structures; (b) measurement of the maximal ankle dorsiflexion ROM of the non-dominant limb with an isokinetic dynamometer; (c) stretching exercises for the triceps surae of the non-dominant limb; (d) acquisition of elastographic images in the non-dominant limb for the AT, MG and LG structures, and acquisition of the ankle joint ROM.

2.2 Measurement of Triceps Surae Shear Modulus

Elastographic images were acquired with an Aixplorer ultrasound scanner (v.11, Supersonic Imagine, Aix-en-Provence, France); a 60 mm linear-array transducer at 4–15 MHz was used for the AT, and a 40 mm linear-array transducer at 2–10 MHz was used for the MG and LG. Gel (Ultrex gel; Farmativa Indústria e Comércio Ltda, Rio de Janeiro, Brazil) was used for acoustic coupling on the skin surface. The volunteers were positioned prone on the stretcher, with their feet hanging and relaxed. Reference markings were made at 30% of the leg length (from the popliteal crease to the lateral malleolus) to acquire elastographic images of the muscles (MG and LG), according to the method used by Lima et al. [18]. With the transducer positioned longitudinally at the mark, the elastographic mode was activated, showing a mapping area between the superficial and deep aponeuroses. After 5 s, in order to achieve color map stabilization, the elastographic image was acquired. The skeletal muscle (MSK) preset, adapted with a variable scale of 0–300 kPa, was used. With the volunteer in the same position, elastographic images of the AT were acquired. The transducer was positioned longitudinally 2 cm away from the distal insertion of the tendon, which was observed in B-mode, according to the method used by Lima et al. [18]. The adapted MSK preset with a variable scale of 0–800 kPa was used. For all structures, three elastographic images were acquired. For data analysis, the images were exported in DICOM format and the µ values were calculated with a Matlab R2015a routine (MathWorks, Natick, MA, USA) developed in our lab. A circular region of interest (ROI, 1 cm diameter) in the center of the mapping area of the MG and LG was used to measure the muscle µ (Fig. 1a). For the AT, a rectangle was selected in the free tendon (Fig. 1b). The value of µ was taken as the mean of the three images.
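The ROI averaging step can be sketched as below; this is a generic circular-mask mean over a 2-D elasticity map, not the authors' Matlab routine, and the pixel-based interface is an assumption:

```python
import numpy as np

def circular_roi_mean(elasticity_map, center_rc, radius_px):
    """Mean shear modulus inside a circular ROI of an elasticity map.

    elasticity_map: 2-D array of mu values (kPa); center_rc: (row, col)
    of the ROI center; radius_px: ROI radius in pixels (a 1 cm ROI
    would be converted to pixels using the image resolution).
    """
    rows, cols = np.indices(elasticity_map.shape)
    r0, c0 = center_rc
    mask = (rows - r0) ** 2 + (cols - c0) ** 2 <= radius_px ** 2
    return float(elasticity_map[mask].mean())
```

The final µ per structure would then be the mean of this value over the three acquired images.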

2.3 Dorsiflexion ROM

To assess the maximum dorsiflexion ROM, the volunteer was positioned on the isokinetic dynamometer (Biodex System 4 Pro, Biodex Medical Systems Inc, New York, USA) with the knee fully extended, the hip flexed and the trunk inclined at 85°. The lateral malleolus was aligned with the Biodex's axis of rotation. The test started at 30° of plantar flexion, with a constant speed of 5°/s, until the maximum ROM determined by the volunteer upon reporting discomfort. One ROM measurement was made pre and one post stretching.

Fig. 1 Elastographic images. a Example of the circular ROI in the LG. b Example of the rectangular ROI in the AT

2.4 Stretching Protocol

The volunteers performed two exercises; 3 sets of 1 min were executed for each exercise, with an interval of 30 s. The first exercise consisted of maximum dorsiflexion on a step, with the knee extended (Fig. 2a). The second exercise was executed with the aid of a wall, with the stretching leg behind the other, hip and knee extended, without removing the rear foot from the floor, until maximum dorsiflexion (Fig. 2b). For both exercises, the volunteers were instructed to maintain maximal joint dorsiflexion until the end of the exercise.

2.5 Statistics

For statistical analysis, pre and post differences were tested by the paired Student's t test, with a significance level of 5% (p < 0.05) (Statistica 10, StatSoft Inc, Tulsa, OK, USA). The reliability of the elastographic images was determined by the intraclass correlation coefficient (ICC) (SPSS 20, IBM SPSS Statistics, Armonk, NY, USA). Based on the 95% confidence interval, the ICC is interpreted as follows: below 0.5 as poor, 0.5 to 0.75 as moderate, 0.75 to 0.90 as good, and 0.90 to 1.00 as excellent reliability [19].
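A minimal sketch of these two analyses follows (paired t statistic and the ICC interpretation bands of Koo and Li [19]); the functions are illustrative, not the Statistica/SPSS procedures actually used:

```python
import numpy as np

def paired_t_statistic(pre, post):
    """t statistic of the paired Student's t test (df = n - 1)."""
    diff = np.asarray(post, float) - np.asarray(pre, float)
    return float(diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff))))

def icc_label(icc):
    """Reliability bands used in the text (Koo and Li [19])."""
    if icc < 0.50:
        return "poor"
    if icc < 0.75:
        return "moderate"
    if icc < 0.90:
        return "good"
    return "excellent"
```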

Fig. 2 Stretching exercises. a First exercise, on the step. b Second exercise, with the aid of the wall

3 Results

The ICC values ranged from 0.911 to 0.996 (Table 1), indicating excellent reliability [19]. A significant increase in ROM was obtained after acute stretching (p = 0.033; mean pre-stretching: 34.75° ± 7.26°; mean post-stretching: 38.50° ± 7.11°). However, no significant change was found in the µ of the three structures, as can be seen in Table 1. No significant difference was found between the sexes in the stiffness values of the 3 structures.

4 Discussion

As expected, the stretching protocol in this study generated a significant increase in ROM, corroborating other studies [14–17], although the stretching protocols have some differences. Conversely, the applied stretching protocol was not enough to produce acute changes in the shear modulus of the muscles and tendon. Only one study was found [17] analyzing the behavior of the AT after acute stretching by SSI, and it did not include the MG and LG muscles. Chiu et al. [17] compared the µ of the AT before and immediately after a static stretching session, which consisted of a position maintained for 5 min on a 30° inclined platform, for both limbs. Before stretching, the µ of the dominant legs was significantly greater than that of the non-dominant legs. After stretching, there was a significant increase in µ for the non-dominant legs, but no significant difference for the dominant legs. The authors hypothesized that the dominant leg must exceed a particular threshold in order to trigger an adaptation of the shear modulus. It is known that the AT is a long tendon and that different adaptations can occur along it. Studies [20, 21] indicate that the tensile strain behavior of the AT can vary between the proximal and distal parts and with the loading conditions imposed on it, whether passive or active [22, 23]. In the present study and in the study by Chiu et al. [17], µ was analyzed near the tendon insertion. However, the studies differ in the acquisition of the elastographic image: Chiu et al. used a customized ankle fixer, altering the length of the tendon and adding a pre-tension to the structure. Another methodological difference is that the stimulus performed by Chiu et al. [17] was constant, unlike our protocol, which had intervals between the sets. There are no other results

available in the literature using the SSI technique in the AT after active stretching, so the methodological differences presented may have influenced the comparison for the tendinous structure. For the MG and LG muscles, three articles using the SSI technique for analysis after acute stretching were found [14–16]. Akagi and Takahashi [14] compared the µ of the MG and LG muscles after the volunteers stood on a stretching board for 3 sets of two minutes, with one minute of interval. Immediately after stretching, no significant difference was observed. Although the exercise and protocol time are similar to those of this study, the imaging acquisition methodology is different: the authors acquired the images at 30° of plantar flexion and with the transducer positioned transversally to the muscles. It is known that elastographic images made in transverse mode have lower reliability than longitudinal ones [23]. Nakamura et al. [15] compared MG µ values before and after two minutes of static stretching passively imposed by the dynamometer. The stretching was composed of 4 sets of 30 s, in which the platform started at 30° of plantar flexion and moved at a constant speed of 5°/s until the highest degree of dorsiflexion determined by the volunteer; after this, the 30 s were counted. The authors found a significant decrease in the µ of the MG after 2 min of stretching. It should be noted that that stretching was done passively, while in the present study it was performed actively, which may introduce small variations in intensity over time. Another methodological difference is that the image acquisition was made 2 min after the sets. Because of such differences, the results of this study do not corroborate those of Nakamura et al. [15]. Finally, Taniguchi et al. [16] did not observe differences in MG and LG µ values after 5 sets of 1-min static stretching with the aid of a wall. Elastographic images were acquired with the ankle joint set at 0° of plantar-dorsiflexion, with AT pre-tension; in the present study, the volunteers' feet were relaxed. The change in the ankle joint angle for image acquisition prevents direct comparison of the results. Some methodological limitations can be pointed out: the µ of the AT was analyzed next to the insertion; since the AT is long and may present heterogeneous alterations, we suggest that elastographic images be acquired along the tendon. Moreover, thin structures such as the AT may favor the appearance of guided waves, which could have influenced the µ [24].

Table 1 Reliability and mean of µ for muscles and tendon

Structure | µ pre stretching (mean ± std) | µ post stretching (mean ± std) | p-value | ICC pre stretching | ICC post stretching
AT | 128.12 ± 26.66 kPa | 108.06 ± 41.79 kPa | 0.168 | 0.934 | 0.985
MG | 5.81 ± 2.85 kPa | 7.25 ± 2.37 kPa | 0.200 | 0.991 | 0.966
LG | 4.89 ± 1.74 kPa | 6.49 ± 3.18 kPa | 0.255 | 0.911 | 0.996

5 Conclusions

The stretching protocol applied in this study resulted in a significant increase in maximal ankle dorsiflexion. However, this increase did not result from changes in the shear modulus of the lateral and medial gastrocnemius or the Achilles tendon, suggesting that the significant increase in ROM may be related to neural mechanisms.

Acknowledgements This study was supported by CNPq, CAPES, FAPERJ and FINEP.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Folpp F, Deall S, Harvey LA et al (2006) Can apparent increases in muscle extensibility with regular stretch be explained by changes in tolerance to stretch? Aust J Physiother 52:45–50
2. Magnusson S, Simonsen EB, Aagaard P et al (1996) A mechanism for altered flexibility in human skeletal muscle. J Physiol 497(1):291–298
3. Magnusson S (1998) Passive properties of human skeletal muscle during stretch maneuver. Scand J Med Sci Sports 8(2):65–77. https://doi.org/10.1111/J.1600-0838.1998.tb00171.x
4. Nakamura M, Ikezoe T, Takeno Y et al (2011) Acute and prolonged effect of static stretching on the passive stiffness of the human gastrocnemius muscle tendon unit in vivo. J Orthop Res 29:1759–1763. https://doi.org/10.1002/jor.21445
5. Morse CI, Degens H, Seynnes OR et al (2008) The acute effect of stretching on the passive stiffness of the human gastrocnemius muscle tendon unit. J Physiol 586(1):97–106. https://doi.org/10.1113/jphysiol.2007.140434
6. Herbert RD, Moseley AM, Butler JE et al (2002) Change in length of relaxed muscle fascicles and tendons with knee and ankle movement in humans. J Physiol 539(2):637–645. https://doi.org/10.1113/jphysiol.2001.012756
7. Ooi CC, Schneider ME, Malliaras P et al (2015) Prevalence of morphological and mechanical stiffness alterations of mid Achilles tendons in asymptomatic marathon runners before and after a competition. Skeletal Radiol 44:1119–1127. https://doi.org/10.1007/s00256-015-2132-6
8. Bercoff J, Tanter M, Fink M et al (2004) Supersonic shear imaging: a new technique. IEEE Trans Ultrason Ferroelectr Freq Control 51(4):396–409
9. Gennisson J, Deffieux T, Fink M et al (2013) Ultrasound elastography: principles and techniques. Diagn Interv Imaging 94(5):487–495. https://doi.org/10.1016/j.diii.2013.01.022
10. Bamber J, Cosgrove D, Dietrich C (2013) EFSUMB guidelines and recommendations on the clinical use of ultrasound elastography. Part 1: basic principles and technology. Ultraschall Med 34:169–184. https://doi.org/10.1055/S-0033-1335205
11. Shiina T (2013) JSUM ultrasound elastography practice guidelines: basics and terminology. J Med Ultrason 40(4):309–323. https://doi.org/10.1007/s10396-013-0490-z
12. Rosskopf AB, Bachmann E, Snedeker JG et al (2016) Comparison of shear wave velocity measurements assessed with two different ultrasound systems in an ex-vivo tendon strain phantom. Skeletal Radiol 45(11):1541–1551. https://doi.org/10.1007/s00256-016-2470-z
13. Liu J, Zhihui Q, Wang K et al (2018) Non-invasive quantitative assessment of muscle force based on ultrasonic shear wave elastography. Ultrasound Med Biol. https://doi.org/10.1016/j.ultrasmedbio.2018.07.009
14. Akagi R, Takahashi H (2013) Acute effect of static stretching on hardness of the gastrocnemius muscle. Med Sci Sports Exerc 45(7):1348–1354. https://doi.org/10.1249/MSS.0b013e3182850e17
15. Nakamura M, Ikezoe T, Kobayashi T et al (2014) Acute effects of static stretching on muscle hardness of the medial gastrocnemius muscle belly in humans: an ultrasonic shear-wave elastography study. Ultrasound Med Biol 40(9):1991–1997. https://doi.org/10.1016/j.ultrasmedbio.2014.03.024
16. Taniguchi K, Shinohara M, Nozaki S et al (2015) Acute decrease in the stiffness of resting muscle belly due to static stretching. Scand J Med Sci Sports 25(1):32–40. https://doi.org/10.1111/sms.12146
17. Chiu TCR, Ngo HC, Lau LW et al (2016) An investigation of the immediate effect of static stretching on the morphology and stiffness of Achilles tendon in dominant and non-dominant legs. PLoS ONE 11(4). https://doi.org/10.1371/journal.pone.0154443
18. Lima K, Martins N, Pereira W et al (2017) Triceps surae elasticity modulus measured by shear wave elastography is not correlated to the plantar flexion torque. Muscles Ligaments Tendons J 2:347–352
19. Koo TK, Li MY (2016) A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med 15(2):155–163
20. Magnusson SP, Hansen P, Aagaard P et al (2003) Differential strain patterns of the human gastrocnemius aponeurosis and free tendon, in vivo. Acta Physiol Scand 177:185–195. https://doi.org/10.1046/J.1365-201x.2003.01048.x
21. Gerus P, Rao G, Berton E (2011) A method to characterize in vivo tendon force-strain relationship by combining ultrasonography, motion capture and loading rates. J Biomech 44:2333–2336. https://doi.org/10.1016/j.jbiomech.2011.05.021
22. Sugisaki N, Kanehisa H, Kawakami Y et al (2005) Behavior of aponeurosis and external tendon of the gastrocnemius muscle during dynamic plantar flexion exercise. Int J Sports Health Sci 3:235–244
23. Dorado Cortez C, Hermitte L, Ramain A et al (2016) Ultrasound shear wave velocity in skeletal muscle: a reproducibility study. Diagn Interv Imaging 97(1):71–79. https://doi.org/10.1016/j.diii.2015.05.010
24. Brum J, Bernal M, Gennisson JL, Tanter M (2014) In vivo evaluation of the elastic anisotropy of the human Achilles tendon using shear wave dispersion analysis. Phys Med Biol 59:505–523

Estimation of the Coordination Variability Between Pelvis-Thigh Segments During Gait at Different Walking Speeds in Sedentary Young People and Practitioners of Physical Activities G. A. G. De Villa, A. Abbasi, A. O. Andrade, and M. F. Vieira

Abstract

The aim of this study was to analyze the variability of coordination during gait of sedentary young people and practitioners of physical activities at different speeds (preferred walking speed (PWS), 120% of PWS, and 80% of PWS) using the Vector Coding (VC) technique. Thirty young people participated in this study, of which 15 were sedentary and 15 exercised regularly at least three times a week. For data collection, they performed a protocol of walking for 1 min at each speed on a treadmill, in a randomized order. For the Pelvis-Thigh segment pair, the angles were computed during four phases of the gait (first double support, single support, second double support, and swing) in the sagittal plane. The data were analyzed with a customized MatLab code. Significant differences were observed between 120% and 80% of the PWS for both groups, with greater variability at 80% of the PWS, suggesting that walking at slower speeds is a greater challenge for the neuromuscular system when compared to higher speeds.

Keywords: Variability · Coordination · Pelvis-Thigh · Practitioners · Sedentary

G. A. G. De Villa (&) · M. F. Vieira
Bioengineering and Biomechanics Laboratory, Federal University of Goiás, Avenue Esperança, 74690-900 Goiânia, Brazil

A. Abbasi
Department of Biomechanics and Sports Injuries, Faculty of Physical Education and Sports Sciences, Kharazmi University, Tehran, Iran



1 Introduction

Regular physical activity during childhood and adolescence contributes to better bone and muscle condition, helps control body mass, prevents or delays hypertension, and reduces levels of depression and anxiety [1]. However, how exercise practice affects pelvis-thigh segmental coordination in young people is still unclear. Coordination can be defined as a process in which movement components are sequentially organized over time, and their relative magnitude determined, to produce a functional or synergistic movement pattern [2]; it can be evaluated using nonlinear techniques such as Vector Coding. Coordination between movements of body parts is essential for gait and is adjusted, often in a subtle way, to accommodate variations required by the task, such as speed [3], curves in the path [4], or an obstacle in the middle of the path [5]. In the Vector Coding technique, the phase angles represent the segment coordination pattern, while the standard deviation of the phase angle at each point of the gait cycle represents the variability of the coordination of this segment [6]. Coordination between lower limb segments can be influenced by an individual's level of activity. Thus, the examination of coordination and its variability can contribute to understanding how an active lifestyle influences joint coupling biomechanics during gait, with respect to individual skills, speed, and injury risk. Therefore, the objective of the present study was to analyze the variability of Pelvis-Thigh segment coordination during gait of sedentary young people and regular practitioners of physical activity at different speeds (preferred walking speed (PWS), 120% of PWS, and 80% of PWS) using the Vector Coding (VC) technique.

A. O. Andrade
Center for Innovation and Technology Assessment in Health, Federal University of Uberlândia, Uberlândia, Brazil

© Springer Nature Switzerland AG 2022
T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_53

2 Material and Methods

2.1 Subjects

Thirty young adults, 15 sedentary (age: 21.64 ± 3.65 years; body mass: 61.9 ± 8.7 kg; height: 1.71 ± 0.11 m) and 15 practitioners (age: 21.91 ± 2.32 years; body mass: 60.1 ± 7.2 kg; height: 1.73 ± 0.09 m), participated in the study. Young adults were classified as practitioners if they practiced physical activity at least three times a week, one hour a day.

2.2 Protocol

All participants agreed to sign an informed consent form. Next, they submitted to protocols previously approved by the Local Research Ethics Committee (protocol 1.003.935). For data collection, 16 retro-reflective markers were fixed at specific anatomical points according to Vicon's lower limb plug-in-gait model (Vicon, Oxford Metrics, Oxford, UK). PWS was evaluated based on the method described by Dingwell and Marin (2006) [7]. All participants performed a protocol with three 1-min trials on a treadmill, one at each data collection speed, in a randomized order. Data were collected using a 3D motion capture system containing 10 infrared cameras operating at 100 Hz, and were filtered using a low-pass, zero-lag, fourth-order Butterworth filter with a cut-off frequency of 8 Hz.

The number of gait cycles required to quantify coordinative variability, and how the gait cycle is divided to obtain and compare results, are two variables that need to be highlighted. First, it is still unclear exactly how many gait cycles are needed to safely estimate the variability of coordination: recent literature estimates that five [8], ten [9, 10], or fifteen gait cycles [11] are necessary. As most studies used ten strides per participant, we followed this convention, because it is already a consolidated number in the recent literature and a way of optimizing computational time.

The gait cycle was divided into four phases: first double support, single support, second double support, and swing. For each phase, each coordination range was determined, and the segmental coordination was classified according to Table 1. For the pelvis and right thigh segmental pair, the angles were calculated during these four phases, using the whole waveform of the specific joint angle. The kinematic data were exported as c3d files and analyzed with a customized MatLab code (R2018a, MathWorks, Natick, MA).

Table 1 Ranges of the coupling angles (γ) used to classify the coordination patterns (Anti-Phase, In-Phase, Proximal-Phase, and Distal-Phase)

Coordination pattern   Coupling angle definitions
In-Phase               22.5° ≤ γ < 67.5°; 202.5° ≤ γ < 247.5°
Anti-Phase             112.5° ≤ γ < 157.5°; 292.5° ≤ γ < 337.5°
Distal-Phase           67.5° ≤ γ < 112.5°; 247.5° ≤ γ < 292.5°
Proximal-Phase         0° ≤ γ < 22.5°; 157.5° ≤ γ < 202.5°; 337.5° ≤ γ < 360°

3 Results

3.1 Classification of Coordination Patterns

Regarding the classification of coordination patterns, for each phase of the gait cycle, the range of each coordination pattern was determined and classified according to Table 1.

3.2 Statistical Analysis

The results were tested by repeated measures analysis of variance (ANOVA) with mixed design, comparing the main effect of group (practitioners and sedentary), the main effect of speed (100%, 120%, and 80% of PWS), and the interaction effect between groups and speeds, followed by a post-hoc test with Bonferroni correction in the cases where the main or interaction effect was significant. Statistical analysis was performed using SPSS software, version 23 (SPSS Inc., Chicago, IL, USA), with a significance level set at α < 0.05.

There was no significant main effect of group and no significant interaction effect between groups and speeds. However, a significant main effect of speed was observed during the second double support and swing phases, as shown in Table 2. Post-hoc tests revealed significant differences for the second double support phase between 120% and 80% of the PWS (p = 0.009), with greater coordinative variability at 80% of the PWS. Likewise, there were significant differences for the swing phase between 120% and 80% of the PWS (p = 0.002), with greater variability at 80% of the PWS. For all phases, the pelvis and thigh segments were in phase. Figures 1 and 2 show the behavior of the coupling angle and the frequency of the observed coordination patterns between the Pelvis-Thigh segments, respectively.
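For illustration, the coupling angle computation and the pattern ranges of Table 1 can be sketched in Python (a hypothetical re-implementation with assumed function names; the study itself used a customized MatLab code):

```python
import numpy as np

def coupling_angles(theta_prox, theta_dist):
    """Coupling angle gamma (degrees, 0-360) between consecutive samples
    of a proximal and a distal segment angle time series, following the
    vector coding approach."""
    d_prox = np.diff(np.asarray(theta_prox, dtype=float))
    d_dist = np.diff(np.asarray(theta_dist, dtype=float))
    return np.degrees(np.arctan2(d_dist, d_prox)) % 360.0

def classify_pattern(gamma):
    """Map one coupling angle to the four patterns defined in Table 1."""
    g = gamma % 360.0
    if 22.5 <= g < 67.5 or 202.5 <= g < 247.5:
        return "In-Phase"        # both segments rotate in the same direction
    if 112.5 <= g < 157.5 or 292.5 <= g < 337.5:
        return "Anti-Phase"      # segments rotate in opposite directions
    if 67.5 <= g < 112.5 or 247.5 <= g < 292.5:
        return "Distal-Phase"    # motion dominated by the distal segment
    return "Proximal-Phase"      # motion dominated by the proximal segment
```

With equal increments of both segment angles, γ = 45°, which falls in the In-Phase range.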

Table 2 Coordinative variability for the Pelvis-Thigh pair (repeated measures ANOVA). F tests the overall fit of the model; p is the significance of the test; η² is a measure of effect size; NS = not significant

Effect         Phase of gait          F      p      η²
Group          First double support   0.176  NS     0.060
Group          Single support         0.520  NS     0.018
Group          Second double support  0.714  NS     0.050
Group          Swing                  0.620  NS     0.022
Speed          First double support   5.906  0.006  0.174
Speed          Single support         2.153  NS     0.071
Speed          Second double support  6.017  0.018  0.177
Speed          Swing                  8.518  0.043  0.233
Group × Speed  First double support   0.049  NS     0.952
Group × Speed  Single support         2.153  NS     0.071
Group × Speed  Second double support  1.304  NS     0.054
Group × Speed  Swing                  3.055  NS     0.098

Fig. 1 Coupling angle (γ) for the Pelvis-Thigh segment pair

4 Discussion

In the present study, the variability of coordination of the Pelvis-Thigh segments in the sagittal plane during gait was quantified by the vector coding technique in two groups of young people, sedentary and physically active. The participants walked on a treadmill at speeds varying by 20% around the PWS (100%, 120%, and 80% of PWS). The objective was to analyze whether there are significant differences in the variability of coordination between these groups for the Pelvis-Thigh segment during four phases of gait (first double support, single support, second double support, and swing). For that, statistical comparisons between the evaluated groups were used. It seems that physical activity practiced at the intensity of the participants in this study does not cause detectable changes in Pelvis-Thigh coordination. On the other hand, the variability of coordination for the Pelvis-Thigh segment was significantly greater at 80% of PWS during the second double support and swing phases. The second double support phase corresponds to the push-off phase of the analyzed side, suggesting that the coordination between Pelvis and Thigh is compromised at lower speeds, presenting greater variability. Similar results were found in [12], suggesting that slower walking speeds are more challenging for neuromuscular control, especially in more proximal segments.

Fig. 2 Frequency of the observed coordination patterns between Pelvis-Thigh segments

5 Conclusion

There were significant differences in coordination variability with respect to gait speed. For the Pelvis-Thigh segment, the greatest differences were observed at 80% of the PWS, with greater variability during the second double support and swing phases. These results suggest that walking at slower speeds is a greater challenge for the neuromuscular system when compared to higher speeds. Understanding these characteristics of young people's gait may contribute to the development of effective physical exercise protocols for sedentary people.

Acknowledgements This study was partially supported by the National Council for Scientific and Technological Development (CNPq), the Coordination for the Improvement of Higher Education Personnel (CAPES), and the Foundation for Research Support of the State of Minas Gerais (FAPEMIG). Adriano O. Andrade and Marcus F. Vieira are CNPq fellows, Brazil (304818/2018-6 and 306205/2017-3, respectively).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Burgeson SCR, Wechsler H et al (2001) Physical education and activity: results from the School Health Policies and Programs Study. J Sch Health 71:279–293. https://doi.org/10.1111/j.1746-1561.2001.tb03505.x
2. Scholtz JP (1990) Dynamic pattern theory: some implications for therapeutics. Phys Ther 70:827–843. https://doi.org/10.1093/ptj/70.12.827
3. Donker PJ, Daffertshofer A et al (2005) Effects of velocity and limb loading on the coordination between limb movements during walking. J Mot Behav 37:217–230. https://doi.org/10.3200/JMBR.37.3.217-230
4. Courtine G, Schieppati M (2004) Tuning of a basic coordination pattern constructs straight-ahead and curved walking in humans. J Neurophysiol 91:1524–1533. https://doi.org/10.1152/jn.00817.2003
5. Moraes AE, Lewis R, Patla MA (2004) Strategies and determinants for selection of alternate foot placement during human locomotion: influence of spatial and temporal constraints. Exp Brain Res 159:1–13. https://doi.org/10.1007/s00221-004-1888-z
6. Hafer JF, Boyer KA (2018) Age related differences in segment coordination and its variability during gait. Gait Posture 62:92–98. https://doi.org/10.1016/j.gaitpost.2018.02.021


7. Dingwell JB, Marin LC (2006) Kinematic variability and local dynamic stability of upper body motions when walking at different speeds. J Biomech 39:444–452. https://doi.org/10.1016/j.jbiomech.2004.12.014
8. Hafer JF, Freedman J et al (2016) Changes in coordination and its variability with an increase in running cadence. J Sports Sci 34:1388–1395. https://doi.org/10.1080/02640414.2015.1112021
9. Floría P, Sánchez-Sixto A et al (2019) The effect of running speed on joint coupling coordination and its variability in recreational runners. Hum Mov Sci 66:449–458. https://doi.org/10.1016/j.humov.2019.05.020
10. Miller RH, Chang R et al (2010) Variability in kinematic coupling assessed by vector coding and continuous relative phase. J Biomech 43:2554–2560. https://doi.org/10.1016/j.jbiomech.2010.05.014
11. Heiderscheit BC, Hamill J, Van Emmerik REA (2002) Variability of stride characteristics and joint coordination among individuals with unilateral patellofemoral pain. J Appl Biomech 18:110–121. https://doi.org/10.1123/jab.18.2.110
12. Chiu S-L, Chou L-S (2012) Effect of walking speed on inter-joint coordination differs between young and elderly adults. J Biomech 45:275–280. https://doi.org/10.1016/j.jbiomech.2011.10.028

Gait Coordination Quantification of Thigh-Leg Segments in Sedentary and Active Young People at Different Speeds Using the Modified Vector Coding Technique G. A. G. De Villa, F. B. Rodrigues, A. Abbasi, A. O. Andrade, and M. F. Vieira

Abstract

The aim of the present study was to quantify the coordination variability of the Thigh-Leg segments during gait of sedentary and active young people at different speeds (preferred walking speed (PWS), 120% of PWS, and 80% of PWS), using the previously reported modified Vector Coding technique to record the segmental angles. Thirty young people participated in this study, of which 15 practiced physical activities at least an hour a day and three times a week, and 15 were sedentary. For data collection, they executed a protocol of one-minute walking on a treadmill at each speed, in a randomized order. For the Thigh-Leg segments, the angles were computed during four phases of the gait (first double support, single support, second double support, and swing), in the sagittal plane (flexion/extension angles). The data were analyzed using a customized MatLab code. There were statistical differences for the Thigh-Leg segment pair, with the greatest differences observed between 120% and 80% of PWS for both groups.

Keywords: Variability · Coordination · Vector coding

G. A. G. De Villa (&) · F. B. Rodrigues · M. F. Vieira
Bioengineering and Biomechanics Laboratory, Federal University of Goiás, Avenue Esperança, 74690-900 Goiânia, Brazil

F. B. Rodrigues
State University of Goiás, Goiânia, Brazil

A. Abbasi
Department of Biomechanics and Sports Injuries, Faculty of Physical Education and Sports Sciences, Kharazmi University, Tehran, Iran

1 Introduction

The practice of physical activities in adolescence is related to greater social interaction, lower risk of diseases with aging, and better musculoskeletal development, among other benefits [1, 2]. However, how exercise affects thigh-leg segmental coordination in young people remains unclear. Walking is a cyclical movement that is repeated in several different patterns for each individual, in which coordination between different segments is important. Recent literature estimates that five [3], ten [4], or fifteen gait cycles [5] are the minimum number needed to calculate a reliable coordinative variability. However, it is commonly agreed that fewer than five cycles is a small number, and the reported values may not be representative of the true variability of an individual or group. For more reliable results, this work used the entire time series collected, a total of 25 gait cycles for each participant. According to previous studies, the ideal analysis is in the sagittal plane, because this is the most demanding plane, presenting expressive extension and flexion excursions in the joints that connect the lower limb segments; analysis in the sagittal plane can clearly show the in-phase and anti-phase relationships between segments [6–8]. The aim of this study was to estimate the coordination and coordination variability between the Thigh-Leg segments of two groups of young people (sedentary and active) while walking on a treadmill at different speeds, using the previously reported modified Vector Coding (VC) technique [7]. As the practice of physical activities can contribute to a lower risk of injuries, we hypothesized that (1) active young people have greater coordinative variability compared to the sedentary group, and (2) the instants of support would be of greater concern, with smaller values for the sedentary group.

A. O. Andrade Center of Innovation and Technology Assessment in Health, Federal University of Uberlândia, Uberlândia, Brazil © Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_54

2 Materials and Methods

2.1 Subjects

Thirty young adults, 15 sedentary and 15 active, participated in the study. Young adults were classified as active if they practiced physical activity at least three times a week, one hour a day.

2.2 Protocol

For data collection, 16 retro-reflective markers were fixed at specific anatomical points according to Vicon's lower limb plug-in-gait model (Vicon, Oxford Metrics, Oxford, UK). A 3D capture system containing 10 infrared cameras operating at 100 Hz was used. Data were filtered using a low-pass, zero-lag, fourth-order Butterworth filter with a cut-off frequency of 8 Hz. Kinematic data were exported as text files and analyzed with a custom MatLab code (R2018a, MathWorks, Natick, MA). The preferred walking speed (PWS) on the treadmill was determined according to a previously reported protocol [9]. A four-minute walk on the treadmill was allowed for familiarization, immediately followed by a two-minute rest. After the rest period, participants performed three walks of 1 min each, at the PWS, 120% of the PWS, and 80% of the PWS, in randomized order. As stated above, the Thigh-Leg segment pair was analyzed over 25 strides, normalized to 100 points each, for each one-minute walking period. The segmental angles were calculated in relation to the global coordinate system of the laboratory. Then the coupling angles were calculated using the previously reported modified Vector Coding technique [7], in four phases of the gait cycle: first double support, single support, second double support, and swing. The coupling angles represent the coordination patterns, and the standard deviation of the coupling angle at each instant of the gait cycle represents the coordination variability.
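The 100-point time normalization and the stride-to-stride coupling angle variability described above can be sketched as follows (an illustrative Python version with assumed names; coupling angles are directional data, so a circular standard deviation is one common choice for quantifying their variability):

```python
import numpy as np

def normalize_stride(signal, n_points=100):
    """Time-normalize one stride to n_points samples by linear interpolation."""
    signal = np.asarray(signal, dtype=float)
    x_old = np.linspace(0.0, 1.0, len(signal))
    x_new = np.linspace(0.0, 1.0, n_points)
    return np.interp(x_new, x_old, signal)

def coupling_angle_variability(gammas_deg):
    """Coupling angle variability (CAV) at each point of the normalized
    gait cycle, computed with circular statistics.
    gammas_deg: (n_strides, n_points) array of coupling angles in degrees."""
    g = np.radians(np.asarray(gammas_deg, dtype=float))
    x = np.mean(np.cos(g), axis=0)
    y = np.mean(np.sin(g), axis=0)
    r = np.minimum(np.sqrt(x ** 2 + y ** 2), 1.0)  # mean resultant length
    return np.degrees(np.sqrt(2.0 * (1.0 - r)))    # circular standard deviation
```

Identical coupling angles across strides give a CAV of zero; widely dispersed angles give a large CAV.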

2.3 Statistical Analysis

The repeated measures analysis of variance (ANOVA) with mixed design was used to compare the two groups, the main effect of speed, and the interaction effect between groups and speed, followed by a post-hoc test with Bonferroni correction in the cases where the main or interaction effect was significant. Statistical analysis was performed using SPSS software, version 23 (SPSS Inc., Chicago, IL, USA), with a significance level set at α < 0.05.

3 Results

The Thigh-Leg segments rotated in the same direction and were, therefore, in phase. The statistical results for each phase are shown in Table 1. Comparing the two groups, there were no significant differences for Group or for the Group versus Speed interaction. However, for Speed there were differences between 120% of PWS and 80% of PWS (p = 0.0016; bold in the table), with greater variability at 120% of PWS. Figure 1 shows a typical example of the coupling angle (γmean) and the coupling angle variability (CAV) for the Thigh-Leg segment, for the two conditions that showed differences. Even though it is not the focus of the article, Fig. 1 also represents the coordination values of this segment, as a way of seeing how its variability behaves in relation to its coordination.

4 Discussion

The groups presented similar results, showing that the level of physical activity of the active group was not enough to produce significant changes in Thigh-Leg coordination during walking. The first hypothesis was therefore discarded; the results only partially support the second hypothesis: significant differences were found in the single support phase and were sensitive to speed. With respect to speed, similar results were previously reported for young and older adults [10, 11]; besides, people tend to have more difficulty walking at a lower speed than at a higher speed relative to the PWS itself [12]. Furthermore, all differences were observed in the stance phase. This suggests that Thigh-Leg segment coordination is organized during foot contact and the subsequent loading of body weight on the unilateral lower limb. This may lead to different patterns of overuse in the stance phase at different walking speeds, as it has been reported that altered segment coordination is indicative of overuse injury [13], through the shift of stress to tissues not adapted for repetitive loading [3]. The different segment coordination between walking speeds can result from a change in the amplitude or relative timing of adjacent segments.

5 Conclusions

There was no significant main effect of activity, and no significant interaction effect between groups and speeds. However, a significant main effect of speed was observed


Fig. 1 Coupling angle variability (CAV) and coupling angle (γmean) for the Thigh-Leg segments at 120% and 80% of preferred walking speed (PWS)

Table 1 Coordination variability of the Thigh-Leg pair

Effect         Phase of gait  F      p      η²
Group          FDS            0.174  0.480  0.033
Group          SS             1.498  0.241  0.097
Group          SDS            0.303  0.303  0.002
Group          SG             0.359  0.559  0.025
Speed          FDS            0.945  0.373  0.063
Speed          SS             2.994  0.046  0.176
Speed          SDS            1.722  0.203  0.110
Speed          SG             1.180  0.299  0.078
Group × Speed  FDS            0.003  0.979  0.000
Group × Speed  SS             0.800  0.431  0.054
Group × Speed  SDS            1.728  0.200  0.110
Group × Speed  SG             0.398  0.538  0.028

Repeated measures ANOVA. FDS = first double support; SS = single support; SDS = second double support; SG = swing

during the single support phase. These preliminary results suggest that Thigh and Leg coordination variability during the single support phase is speed dependent, showing that, although these segments are in phase irrespective of walking speed, their coordination variability differs. Changes in walking speed produce changes in the motion amplitude or relative timing of the analyzed segments that, in turn, alter the coordination variability during the single support phase. However, this result was not sensitive to the level of activity, probably because balance control during the single support phase, where the human body behaves as an inverted pendulum, is not affected by the level of activity [14]. Future studies can investigate this link between the level of physical activity and speed for this segment more closely, in order to develop physical exercise practices that contribute more to the health of young people as well as improving the quality of gait.

Acknowledgements The authors would like to thank the financial support of the National Council for Scientific and Technological Development (CNPq), the Coordination for the Improvement of Higher Education Personnel (CAPES), the Foundation for Research Support of the State of Goiás (FAPEG), and the Foundation for Research Support of the State of Minas Gerais (FAPEMIG). A. O. Andrade and M. F. Vieira are CNPq fellows, Brazil (304818/2018-6 and 306205/2017-3, respectively).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Aaron DJ, Dearwater SR et al (1995) Physical activity and the initiation of high-risk health behaviors in adolescents. Med Sci Sports Exerc 27:1639–1645. https://doi.org/10.1249/00005768-199512000-00010
2. Fairclough S (2004) Physical education makes you fit and healthy. Physical education's contribution to young people's physical activity levels. Health Educ Res 20:14–23. https://doi.org/10.1093/her/cyg101
3. Hafer JF, Silvernail JF et al (2016) Changes in coordination and its variability with an increase in running cadence. J Sports Sci 34:1388–1395. https://doi.org/10.1080/02640414.2015.1112021
4. Miller RH, Chang RB et al (2010) Variability in kinematic coupling assessed by vector coding and continuous relative phase. J Biomech 43:2554–2560. https://doi.org/10.1016/j.jbiomech.2010.05.014
5. Heiderscheit BC, Hamill J, Van Emmerik RE (2002) Variability of stride characteristics and joint coordination among individuals with unilateral patellofemoral pain. J Appl Biomech 18:110–121
6. Chang R, Van Emmerik R, Hamill J (2008) Quantifying rearfoot-forefoot coordination in human walking. J Biomech 41:3101–3105. https://doi.org/10.1016/j.jbiomech.2008.07.024
7. Needham R, Naemi R, Chockalingam N (2014) Quantifying lumbar-pelvis coordination during gait using a modified vector coding technique. J Biomech 47:1020–1026. https://doi.org/10.1016/j.jbiomech.2013.12.032
8. Harrison K, Kwon YU et al (2019) Inter-joint coordination patterns differ between younger and older runners. Hum Mov Sci 64:164–170. https://doi.org/10.1016/j.humov.2019.01.014
9. Dingwell JB, Marin LC (2006) Kinematic variability and local dynamic stability of upper body motions when walking at different speeds. J Biomech 39:444–452. https://doi.org/10.1016/j.jbiomech.2004.12.014
10. Ghanavati T, Salavati M et al (2014) Intra-limb coordination while walking is affected by cognitive load and walking speed. J Biomech 47:2300–2305. https://doi.org/10.1016/j.jbiomech.2014.04.038
11. Chiu S-L, Chou L-S (2012) Effect of walking speed on inter-joint coordination differs between young and elderly adults. J Biomech 45:275–280. https://doi.org/10.1016/j.jbiomech.2011.10.028
12. Silvernail JF, Bradley S, Wiegand K (2018) Differences in the coordination of gait based on body mass index in sedentary young adults. Osteoarthr Cartil 26:S382. https://doi.org/10.1016/j.joca.2018.02.747
13. DeLeo AT, Dierks TA, Ferber R, Davis IS (2004) Lower extremity joint coupling during running: a current update. Clin Biomech 19:983–991. https://doi.org/10.1016/j.clinbiomech.2004.07.005
14. Winter DA, Prince F, Patla A (1997) Validity of the inverted pendulum model of balance in quiet standing. Gait Posture 5:153–154. https://doi.org/10.1016/S0966-6362(97)83376-0

Demands at the Knee Joint During Jumps in Classically Trained Ballet Dancers T. S. Lemes, G. A. G. De Villa, A. P. Rodrigues, J. M. A. Galvão, R. M. Magnani, R. S. Gomide, M. B. Millan, E. M. Mesquita, L. C. C. Borges, and M. F. Vieira

Abstract

The aim of this study was to analyze the differences in ground reaction force and knee joint mechanics in four jumps usually trained in ballet: Changement, Echappé Sauté 1 (fifth position to second position), Echappé Sauté 2 (second position to fifth position), and Sauté. Fifteen professional dancers, with more than 15 h of classes weekly, participated in this study. The participants performed three trials of each jump in a randomized order on a force platform. The Sauté jump produced the greatest peak knee moment in both the propulsion (p < 0.001) and landing (p < 0.001) phases, but the lowest rate of force development in the propulsion phase (p = 0.023). These results indicate that the Sauté is performed with a deeper plié in both the propulsion and landing phases, with a smaller ground reaction force peak and knee peak force. This pattern of jumping may be less harmful and should be adopted in the other jumps by classical dancers, who perform such exercises several times daily.

Keywords: Demand · Jumps · Knee

1 Introduction

T. S. Lemes · G. A. G. De Villa (&) · A. P. Rodrigues · J. M. A. Galvão · R. M. Magnani · R. S. Gomide · M. B. Millan · E. M. Mesquita · L. C. C. Borges · M. F. Vieira
Bioengineering and Biomechanics Laboratory, Federal University of Goiás, Avenue Esperança, 74690-900 Goiânia, Brazil

R. M. Magnani
State University of Goiás, Goiânia, Brazil

The specificities of classical ballet require a lot of dexterity and training to perform the only form of dance that encompasses such a high level of athleticism and unique visual

aesthetics [1]. The practice of dance, including classical ballet at high levels of preparation, can be considered a sport, due to the amount of intense rehearsals and classes performed by its practitioners [2]. Around 73% of severe injuries are caused by trauma when performing jumps and lifts [3]. One of the most demanding aspects of ballet practice is the jumping movements [3], which require rapid, high-intensity muscular effort in the lower extremity and are associated with joint injuries. There is an alarming number of injuries caused by the frequent practice of ballet, and some studies [4, 5] report that injuries to the feet, ankles, knees, and spine occur constantly, so that these segments are susceptible to chronic and acute injuries. The most frequent knee injuries in dancers are related to patellar alignment, inflamed plica, or torn menisci or cruciate ligaments [6]. The ground reaction force (GRF) is a variable of interest due to its potential correlation with high injury rates. Greater ground reaction force can have harmful effects on the body and can result from an inadequate ground surface, poor technique, or the footwear used [7]. Professional classical dancers perform more than 200 jumping and/or landing actions in daily training sessions. Vertical jumps have been used in studies [8–10] as tests to evaluate the performance and other characteristics of the lower limbs. Some of the daily jumps of classical ballet have characteristics similar to those of vertical jumps, as they do not have anterior-posterior displacement and have propulsion, flight, and landing phases. Thus, variables and calculations similar to those of studies involving vertical jumps can be used to evaluate these ballet jumps [11].

Therefore, the aim of this study was to analyze the differences in knee demand in four jumps usually trained in ballet: Changement, Echappé Sauté 1 (fifth position to second position), Echappé Sauté 2 (second position to fifth position), and Sauté, to verify which jump has the greatest potential for injuries. We hypothesized that the landing phase

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_55


produces more deleterious results than the propulsion phase, and that the jumps performed in fifth position would carry greater risk.

was conducted when ANOVA was significant. All statistical analysis was performed using SPSS software (SPSS Inc. Chicago, IL, USA), with a significance level set at a < 0.05.

2 Materials and Methods

2.1 Subjects

Fifteen individuals (6 males and 9 females; age: 21.4 ± 3.1 years; body mass: 57.2 ± 8.6 kg; height: 1.66 ± 0.08 m) participated in the study, with ballet experience of 10.6 ± 5.9 years and more than 15 h of classes per week. All participants were tested in individual sessions. The subjects performed three trials of each jump in randomized order on a force platform. The execution of all trials was validated by a dance specialist, who also applied a questionnaire to collect demographic characteristics and possible injuries.

2.2 Protocol

The volunteers were instructed to perform the jumps keeping their hands on their waist, to exclude the contribution of the upper limbs, and to perform the jumps as they do during classes. A force platform (AMTI, model OR6-7) positioned on a flat and stable surface was used to collect GRF. Both kinetic and kinematic data were captured using a full-body Plug-in-Gait model at a sampling frequency of 100 Hz with the Vicon system, which includes 10 infrared cameras (Vicon Nexus, Oxford Metrics, Oxford, UK). The following kinetic variables were extracted: sagittal-plane net maximum knee load (KFmax), maximum knee moment (KMmax), maximum knee power (KPmax), rising time (RT), peak vertical GRF, and mean rate of force development (RFD) [12], for the propulsion and landing phases of the classical ballet jumps. Next, the average of the three trials was calculated. The propulsion phase was defined from the deepest squat, called plié in classical ballet, until the loss of contact with the force platform. The landing phase was defined from the instant of contact with the force platform to the deepest plié.
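As an illustration of how the landing-phase variables above can be derived from a sampled GRF signal: the sketch below assumes the first sample marks ground contact and uses one common definition of mean RFD (peak force divided by time to peak). The authors' exact definitions follow [12] and may differ.

```python
import numpy as np


def landing_variables(grf_bw, fs=100.0):
    """Peak vertical GRF, rising time (s) from contact to peak, and mean RFD
    (peak force / rising time) for a landing-phase GRF segment.

    `grf_bw` is vertical GRF (e.g. normalized to body weight) where the
    first sample corresponds to ground contact; `fs` is the sampling rate.
    """
    grf = np.asarray(grf_bw, dtype=float)
    i_peak = int(np.argmax(grf))          # index of peak vertical force
    peak = grf[i_peak]
    rising_time = i_peak / fs             # contact-to-peak duration
    mean_rfd = peak / rising_time if rising_time > 0 else float("nan")
    return peak, rising_time, mean_rfd
```

A shorter rising time with the same peak force directly yields a larger mean RFD, which is the relationship discussed in the Results and Discussion.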

2.3 Statistical Analysis

As all data had a normal distribution, a one-way ANOVA test was used for each variable to determine differences between jumps. A post hoc test with Bonferroni correction

3 Results

Tables 1 and 2 show the results for the four jumps in the propulsion and landing phases, respectively. Overall, KFmax, peak GRF, and mean RFD were greater for the landing phase than for the propulsion phase, whereas KPmax and rising time were smaller for the landing phase. During the propulsion phase, significant differences were found for KMmax (p < 0.001), RT (p = 0.010) and RFD (p = 0.023): Sauté produced greater KMmax than the other jumps and greater RT than Echappé 2 (post hoc p = 0.007), and Echappé 2 produced greater RFD than Sauté (post hoc p = 0.017). During the landing phase, significant differences were found only for KMmax (p < 0.001): Sauté produced greater KMmax than the other three jumps (p = 0.001, p = 0.001, and p < 0.001 for Changement, Echappé 1, and Echappé 2, respectively).

4 Discussion

The aim of this study was to evaluate the differences in demand on the knee joint during the propulsion and landing phases of four common dance jumps: Changement, Echappé Sauté 1 (fifth position to second position), Echappé Sauté 2 (second position to fifth position) and Sauté. The hypothesis was that the landing phase produces greater demands on the knee joint than the propulsion phase, and that the jumps performed in fifth position would be of greater risk. The results support the first hypothesis, but not the second. They confirm that the landing phase is more demanding than the propulsion phase; the shorter rising time is in line with the greater KFmax and peak GRF values. These results are in accordance with Kulig et al. [13], who found similar results for the same joint, although with a different jump, Saut de Chat. A component that should be highlighted in the vertical attenuation of the ground reaction force is leg stiffness, composed of tissue compressibility and the angular stiffness of the individual joints. The greater the physical demand of the activity, the greater the stiffness that a leg presents [14]. The knee seems to be the main articulation among the lower-limb joints, the

Demands at the Knee Joint During Jumps …

Table 1 Variables analyzed for the Changement, Echappé 1, Echappé 2 and Sauté in the propulsion phase

Jump         KFmax (N/kg)   KMmax (N mm/kg)
Changement   3.1 ± 0.8      672.2 ± 216.2
Echappé 1    3.1 ± 0.2      534.1 ± 168.6
Echappé 2    2.1 ± 0.9      332.8 ± 179.5
Sauté        2.6 ± 0.9      1233.5 ± 267.4
p            0.152

Table 2 Variables analyzed for the Changement, Echappé Sauté 1, Echappé Sauté 2 and Sauté in the landing phase

0.05), one-way ANOVA followed by Tukey's post hoc test was applied; for the parameters that did not present a normal distribution, the nonparametric Kruskal–Wallis test followed by Dunn's multiple comparisons post hoc test was applied. GraphPad Prism for Windows, version 8.2.1 (GraphPad Software, San Diego, California, USA), with a 5% significance level, was used for all statistical and graphical analyses.

Table 1 Sample demographics for all participants and classified by age

                   All participants   G1            G2            G3
Age (years)a,b,c   6.4 (1.8)          4.4 (0.51)    6.3 (0.64)    8.5 (0.64)
Height (m)a        1.21 (0.13)        1.08 (0.066)  1.23 (0.098)  1.33 (0.073)
Mass (kg)a,b,d     26.4 (9.1)         19.2 (3.4)    26.8 (7.2)    33.2 (9.5)
N                  51                 17            17            17

Data are shown as Mean (Standard Deviation). a(p

$$
\begin{aligned}
\mathrm{Re}_{total} &= R_C + \frac{T_L C_L + T_R C_R + \omega^2 T_R T_L \left(T_R C_L + T_L C_R\right)}{\omega^2 C_R^2 C_L^2 \left(R_R + R_L\right)^2 + \left(C_R + C_L\right)^2} \\
X_{total} &= -\frac{1}{\omega C_{CW}} - \frac{C_R + C_L + \omega^2 \left(T_R^2 C_L + T_L^2 C_R\right)}{\omega \left[\omega^2 C_R^2 C_L^2 \left(R_R + R_L\right)^2 + \left(C_R + C_L\right)^2\right]} \\
\omega &= 2\pi f, \qquad |Z_m| = \sqrt{\mathrm{Re}_{total}^2 + X_{total}^2}, \qquad T_R = R_R C_R, \qquad T_L = R_L C_L
\end{aligned}
\tag{1}
$$

Fig. 1 Two-compartment respiratory model

$$
\dot{V}_{total} = \dot{V}_R + \dot{V}_L, \qquad P_{pleural} = \frac{V_{total}}{C_{CW}}
\tag{2}
$$

where Z_RS is the respiratory system's impedance; Re, the real part of the respiratory system's impedance; X, the imaginary part of the respiratory system's impedance; R_C, R_L, R_R are, respectively, the central, left-lung and right-lung resistances; C_L, C_R, C_CW are, respectively, the left-lung, right-lung and chest-wall compliances; f is the frequency of breathing; V̇_total, V̇_R, V̇_L are, respectively, the total, right-lung and left-lung airflows; P_diaphragm, P_mouth, P_pleural are the diaphragmatic, mouth and intrapleural pressures; and V_i is the volume associated with compartment i (left, right, or total).
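The magnitude |Zm| of Eq. (1) can equivalently be obtained by evaluating the model's complex impedance branch by branch, which is a useful numerical cross-check. The sketch below assumes the standard series/parallel topology of Fig. 1; the parameter values are those of Table 1 for a full-term newborn (compliances converted from mL/kPa to L/kPa), and the breathing frequency of 50 cycles per minute is the one mentioned in the Discussion.

```python
import math


def respiratory_impedance(f, Rc, RL, RR, CL, CR, Ccw):
    """Complex input impedance of the two-compartment model:
    Z = Rc + 1/(jw*Ccw) + (ZR*ZL)/(ZR+ZL), with ZX = RX + 1/(jw*CX).
    Resistances in kPa.s/L, compliances in L/kPa, f in Hz; abs(Z) is |Zm|.
    """
    w = 2 * math.pi * f
    jw = 1j * w
    ZL_ = RL + 1 / (jw * CL)   # left-lung branch
    ZR_ = RR + 1 / (jw * CR)   # right-lung branch
    return Rc + 1 / (jw * Ccw) + (ZR_ * ZL_) / (ZR_ + ZL_)


# Table 1 values for a full-term newborn, at 50 cycles/min:
Z = respiratory_impedance(f=50 / 60, Rc=6.8, RL=3.4, RR=3.4,
                          CL=0.0205, CR=0.0205, Ccw=0.124)
```

With two identical lung branches, the real part reduces to Rc + RL/2, which matches term by term what Eq. (1) gives for this symmetric case.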

Modeling, Control Strategies and Design of a Neonatal …

It is important to mention that the ventilation mode impacts the respiratory model, specifically its driving (voltage) source. In spontaneous mode, the source emulates the diaphragmatic pressure; in controlled or assisted mode, the diaphragmatic pressure is null and the driving pressure is delivered by the mechanical ventilator. In this work, the mechanical ventilator operates in pressure-controlled mode.
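The mode-dependent choice of driving-pressure source described above can be expressed as a small selector. The function is illustrative (it is not part of the authors' Simulink model), and the 0.3 kPa default trigger is the value used later in the assisted-mode simulations.

```python
def ventilation_source(mode, patient_effort_kpa, trigger_kpa=0.3):
    """Select the driving-pressure source for the respiratory model.

    Returns 'patient' when the patient's diaphragmatic pressure drives the
    breath and 'ventilator' when the machine does, per the three modes.
    """
    if mode == "spontaneous":
        return "patient"
    if mode == "controlled":
        return "ventilator"
    if mode == "assisted":
        # The ventilator takes over once inspiratory effort crosses the trigger.
        return "ventilator" if patient_effort_kpa >= trigger_kpa else "patient"
    raise ValueError(f"unknown mode: {mode}")
```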

2.2 Simulation Plant

As stated earlier, commercial respiratory simulators tend to be expensive and lack accuracy at small volumes and pressures. Therefore, a simulation plant based on a syringe pump mechanism (Fig. 2) is proposed in this paper. This type of mechanism is generally used for small volumes, flows, and pressures, and it tends to be low-cost [8]. It is relevant to note that, although the respiratory model is a two-compartment system, the simulation plant is composed of a single compartment: by interacting with a multicompartment model, the user can emulate more scenarios and pathologies, making the training experience more realistic and helpful, while the simulation plant itself is kept as simple as possible without losing important simulation features. The equipment is composed of a syringe-like container whose piston is connected to a lead screw mechanism that converts angular velocity into linear velocity and thereby controls the piston position. To execute the piston movement, a DC motor receives signals from the controller, calculated through the respiratory model and the control system. The syringe outlet is connected to the mechanical ventilator by hoses, so that the simulator is subject to a chosen mechanical

Fig. 2 Syringe pump mechanism [6]


ventilation mode. Mathematically, the syringe pump system can be described by Eq. (3):

$$\dot{q}_{syringe}(t) = A_{syringe}\, T\, \omega(t) \tag{3}$$

where q̇_syringe is the flow out of the syringe; A_syringe is the area of the syringe chamber; T is the lead screw pitch; and ω is the lead screw angular velocity.

2.3 Control System Design and Sensing

A state-variable approach was chosen for the control system. The differential equations may therefore be written in state-variable form:

$$\dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t) \tag{4}$$

Applying the syringe model (Eq. 3) to the system above yields ẋ(t) = q̇_syringe, x(t) = q_syringe, A = 0, B = A_syringe·T, u(t) = ω(t), and C = 1. Thus, it is necessary to track the volume of air, which can be achieved by integrating the flow over time. Because this work studies the problem in the Matlab/Simulink environment, there are no physical sensors; consequently, C = 1 in Eq. 4. A linear-quadratic regulator (LQR), a special case of the optimal control problem, was implemented, since the simulation plant equation (Eq. 3) is linear. Its parameters (Q, R) were determined through several simulations until the results were consistent; the values used were Q = 10^6 and R = 10^7. From an overall perspective, the respiratory model provides a reference volume signal, which is translated into an input to the DC motor, ultimately responsible for pushing the piston forward or backward through the syringe. This


A. B. A. Campos and A. T. Fleury

movement will generate airflow which, when integrated, may be compared to the reference model value through the following relation:

$$e(t) = V(t) - x(t) \tag{5}$$

The result is a feedback error to be introduced in the control block (Fig. 3). As stated above, the respiratory simulator can be integrated with a mechanical ventilator, so it must work according to the main ventilation modes: spontaneous, controlled and assisted. In the first mode, the breathing cycles are initiated by the patient; it is a process that allows total or partial independence of the person to breathe. In contrast, in a controlled cycle the respiration is initiated, controlled and terminated by the machine. Lastly, the assisted mode combines features of both previous modes: cycles are initiated by the person, but when a respiratory effort above the threshold is detected, the ventilator is triggered to assist the patient and assumes total control of the breath until it is finished.
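For the scalar plant of Eq. (4) (a pure integrator, A = 0), the LQR gain can be computed numerically with SciPy as a sketch of the design described in Sect. 2.3. The syringe area and lead-screw pitch below are hypothetical placeholders; only the weights Q = 10^6 and R = 10^7 come from the text.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Scalar plant from Eq. (4): x_dot = B*u with x = syringe volume and
# B = A_syringe * T. The physical values here are illustrative only.
A_syringe = 5e-4   # m^2, hypothetical chamber area
T_pitch = 2e-3     # m/rad, hypothetical lead-screw pitch
A = np.array([[0.0]])
B = np.array([[A_syringe * T_pitch]])
Q = np.array([[1e6]])   # state weight reported in the text
R = np.array([[1e7]])   # control weight reported in the text

P = solve_continuous_are(A, B, Q, R)       # Riccati solution
K = np.linalg.solve(R, B.T @ P)            # feedback gain, u = -K*(x - x_ref)
```

For this integrator plant the algebraic Riccati equation reduces to P² = QR/B², so the gain is K = √(Q/R) regardless of B, which makes the tuning of the (Q, R) ratio the only design freedom.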

3 Results

The simulations' results are presented in three segments: spontaneous, controlled and assisted. In this paper, only the full-term newborn scenario is displayed. An important observation is that, although the alveolar pressure is negative, it is displayed here as a positive value in order to appear in the same manner as in mechanical ventilators, making it more readable for the professionals in training. The respiratory parameters used in these simulations (Table 1) were compiled by Stankiewicz [4].

3.1 Spontaneous Mode Spontaneous breathing is a process that initiates with the contraction of the muscles that cause lung expansion, such as diaphragmatic and intercostal muscles. In order to replicate

Fig. 3 Block diagram of the control system’s algorithm

these muscle efforts, a quarter of a sine function (Fig. 4) introduced by Kaye [9] was used. As stated previously, Kaye's model was used as the input pressure in the respiratory model (Eqs. 1–2) to calculate the volume reference signal, which is processed through the control system, thus driving the simulation plant. Results are presented below (Figs. 5 and 6): the first illustrates respiratory parameters, such as volume, flow, pulmonary pressure and muscular pressure; the latter (Fig. 6) is associated with control parameters, such as the input u(t), the errors and the piston position.
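One plausible reading of the quarter-sine effort profile is a pressure that rises along the first quarter of a sine cycle during inspiration and is zero for the rest of the breathing cycle. The sketch below is an assumption for illustration, not Kaye's exact formulation, and all parameter values are hypothetical.

```python
import math


def diaphragmatic_pressure(t, p_max, t_insp, period):
    """Quarter-sine inspiratory effort: rises like sin(theta) from 0 to its
    peak over the inspiratory time, then stays at zero until the next cycle.
    All arguments (peak pressure, inspiratory time, cycle period) are
    illustrative placeholders.
    """
    tc = t % period                    # time within the current cycle
    if tc < t_insp:
        return p_max * math.sin(math.pi * tc / (2 * t_insp))
    return 0.0
```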

3.2 Pressure-Controlled Mode

As opposed to the previous mode, the mechanical ventilator initiates, controls and terminates the breathing cycle. It is worth mentioning, once again, that the simulations were implemented in pressure-controlled mode. Results are presented in Figs. 7 and 8.

3.3 Assisted Mode

For this mode, the trigger level was set at 0.3 kPa; after this value is reached, the ventilator assumes control of the breathing cycles. The simulation is presented below (Figs. 9 and 10). From a control system design perspective, to assess whether the simulation plant outputs track the reference signals properly, one must monitor the error variable (Eq. 5), the difference between the syringe volume and the respiratory model volume (Figs. 6b, 8b and 10b). The airflow error can be tracked as well, to check the consistency of the results (Figs. 6c, 8c and 10c). A more readily comprehensible way to present these errors is as percentages (Table 2):

$$e(\%) = \frac{e(t)}{x(t)} \tag{6}$$


Table 1 Respiratory parameters for a full-term newborn [4]

            Rc (kPa s/L)   RL (kPa s/L)   RR (kPa s/L)   CL (mL/kPa)   CR (mL/kPa)   Ccw (mL/kPa)
Full-term   6.8            3.4            3.4            20.5          20.5          124

Fig. 4 Diaphragmatic pressure modeled by Kaye [9]

Fig. 5 Respiratory parameters. a comparison between reference volume and the plant volume; b comparison between reference airflow and the plant flow; c alveolar pressure model; d spontaneous muscular pressure model

4 Discussion

The results may be analyzed from a biological and from a control system perspective. The first relates directly to the respiratory model and whether it fits real neonatal respiration. This evaluation may be done by comparing major features of the breathing cycles, such as frequency, tidal volume and alveolar airflow, with values found in the literature (Table 3). During spontaneous breathing, the tidal volume was approximately 20 mL and the peak flow was near 4.45 L/min, at a breathing frequency of 50 cycles per minute. Available reference data indicate that the tidal volume sits in the range of 5.5–8 mL/kg and that the alveolar airflow is approximately between 1 and 1.5 L/min per kg. The respiratory rate varies across different


Fig. 6 Control parameters. a Input u(t); b Volume error; c Airflow error; d Piston relative position

Fig. 7 Respiratory parameters. a comparison between reference volume and the plant volume; b comparison between reference airflow and the plant flow; c Alveolar pressure model; d spontaneous muscular pressure model

references from 30.6 to 51 cycles per minute. It is worth mentioning that a healthy full-term newborn weighs between 2.5 and 4.5 kg. Therefore, it is fair to conclude that the respiratory model is acceptable, since it can replicate

some of the most important characteristics of neonatal respiration. On the control system point of view, maximum errors relative to the control variables were compiled, as seen


Fig. 8 Control parameters. a Input u(t); b Volume error; c Airflow error; d Piston relative position

Fig. 9 Respiratory parameters. a comparison between reference volume and the plant volume; b comparison between reference airflow and the plant flow; c Alveolar pressure model; d spontaneous muscular pressure model

previously (Table 2). In general, the assisted mode had the highest error levels, 3.29% on airflow and 2.77% on volume. One may justify this as a result of the change in the reference signal source, which switches from the mathematical model to the mechanical ventilator. Although there are peak errors, the system noticeably adjusts quickly to these variations. Conversely, the smallest errors were found in the controlled mode, 1.55% on airflow and 2.62% on volume, because the reference signal source did not change and the pressure emitted by the ventilator did not


Fig. 10 Control parameters. a Input u(t); b Volume error; c Airflow error; d Piston relative position

Table 2 Maximum errors relative to the control variables for each scenario

Mode          eflow (L/s)   eflow (%)   evolume (mL)   evolume (%)
Spontaneous   0.16          2.84        0.67           3.07
Controlled    0.08          1.55        0.59           2.62
Assisted      0.17          3.29        0.60           2.77

Table 3 Reference data on frequency, tidal volume and alveolar flow

                Data from literature       Units                References
Frequency       49 ± 51; 30.6–47.8         Breaths per minute   [5, 6]
Tidal Volume    6.00–8.00; 4.51–6.63       mL/kg                [5, 8]
Alveolar Flow   1.00–1.50; 1.00–1.50       mL/min kg            [5, 8]

presented large variations. In this context, it is reasonable to infer that the control system was able to track the reference signal from the respiratory model.

5 Conclusions

This study developed a neonatal respiratory simulator capable of interacting with a mechanical ventilator in its main ventilation modes; however, the study focused solely on pressure-controlled procedures. In general, the paper presented a way to use a syringe-pump-like mechanism as a simulation plant. As seen

earlier, the results were more than satisfactory: error margins were not above 3.5% for any mode. Further studies are underway to propose new simulation plant mechanisms with lower friction levels. In addition, future work may design a control system that includes the flow-controlled feature of the mechanical ventilator.

Acknowledgements The authors would like to thank Dr. Milton Harumi Miyoshi and his team from the Neonatal Laboratory at the Universidade Federal de São Paulo (UNIFESP) and Dr. Jorge Bonassa for their incentive and collaboration.

Conflict of Interest The authors declare that they have no conflict of interest.


References

1. United Nations Inter-agency Group for Child Mortality Estimation (2019) Levels & trends in child mortality: report 2019. United Nations Children's Fund, New York
2. Baldoli I, Cuttano A, Scaramuzzo RT, Tognarelli S, Ciantelli M, Cecchi F, Gentile M, Sigali E, Laschi C, Ghirri P, Menciassi A, Dario P, Boldrini A (2015) A novel simulator for mechanical ventilation in newborns: mechatronic respiratory system simulator for neonatal applications. Proc Inst Mech Eng 229(8):581–591
3. Murphy AA, Halamek LP (2005) Educational perspectives: simulation-based training in neonatal resuscitation. J Am Acad Pediatrics e489
4. Stankiewicz B, Palko KJ, Darowski M, Zielinski K, Kozarski M (2017) A new infant hybrid respiratory simulator: preliminary evaluation based on clinical data. International Federation for Medical and Biological Engineering
5. Coté CJ, Lerman J, Todres ID (2018) A practice of anesthesia for infants and children. Elsevier Health Sciences
6. Schmalisch G, Wilitzki S, Wauer RR (2005) Differences in tidal breathing between infants with chronic lung diseases and healthy controls. BMC Pediatrics 5(1):36
7. Otis AB, McKerrow CB, Bartlett RA, Mead J, McIlroy MB, Selverstone NJ, Radford EP Jr (1956) Mechanical factors in distribution of pulmonary ventilation. J Appl Physiol 427–443
8. Chan AY (2016) Biomedical device technology: principles and design. Charles C Thomas Publisher
9. Kaye JM (1997) Traumap: the design of a 3D virtual environment for modeling cardiopulmonary interactions. Dissertation in Computer and Information Science, University of Pennsylvania

Application of Recurrence Quantifiers to Kinetic and Kinematic Biomechanical Data A. O. Assis, A. O. Andrade, and M. F. Vieira

Abstract

A brief review of the literature on the use and clinical significance of recurrence quantification analyses for kinetic and kinematic data is presented. The recurrence quantifiers commonly used in biomechanics, as well as their theoretical significance, are described. Next, an overview of the studies in which recurrence quantifiers are used is presented, showing their association with other nonlinear quantifiers. The influence of parameters such as the sampling frequency, the definition of the minimum length of the diagonal lines, and the length of the time series is investigated. Although there are different approaches for parameterizing and applying recurrence quantification analysis to biomechanical data, recurrence quantifiers can extract information from data with low time resolution while maintaining the reliability of results. More importantly, the use of recurrence quantifiers in clinical settings allows for discrimination between experimental and control groups.

Keywords

RQA · Recurrence quantifiers · Recurrence threshold

These tools are related to the nonlinear dynamic analysis of human movement, in order to determine stability, adaptability and loss of complexity, and are useful for measuring characteristics that allow the researcher to distinguish different populations and to evaluate the efficacy of rehabilitation methods [5–7]. The main nonlinear quantifiers commonly used in biomechanics are Detrended Fluctuation Analysis (DFA) [8], the Lyapunov Exponent (sLE) [9, 10], Multiscale Entropy (MSE) [11], Sample Entropy (SampEn) [8, 12], the Correlation Dimension [13], the Higuchi dimension and Recurrence Quantification Analysis (RQA) [7, 8]. These quantifiers, when estimated from a single time series, may be related to underlying processes of the body [1, 14, 15], and it may be inferred whether the subjects present any loss of characteristics such as stability and adaptability. There are a number of studies using RQA on biomechanical data, correlating it with diagnosis [15–17] and testing its applicability in different situations, using force platforms [5], walking overground [16] or on a treadmill [10], and during concurrent cognitive tasks [18]. Thus, the present review intends to give an overview of the application of RQA quantifiers, as well as their common parameterization in biomechanics.

1 Introduction

The analysis of human gait stability has used several tools to quantify characteristics that make it possible to infer the behavior and interaction between musculoskeletal structures and the nervous system [1–4].

2

A. O. Assis (corresponding author)
Federal Technological Institute of Goiás, Goiânia, Brazil
e-mail: [email protected]

Recurrence analysis is based on the recurrence plot (RP), initially proposed by Eckmann et al. [19], which is constructed from the graphical representation of the recurrence matrix RP(i, j) [19]. In this sense, after reconstructing the state space of a system, the recurrence function presented in Eq. 1 is used:

A. O. Andrade
Centre for Innovation and Technology Assessment in Health, Federal University of Uberlândia, Uberlândia, Brazil

M. F. Vieira
Bioengineering and Biomechanics Laboratory, Federal University of Goiás, Goiânia, Brazil


2.1 Recurrence Quantifiers

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_88


A. O. Assis et al.

$$RP(i, j) = \Theta\!\left(r - \lVert x_i - x_j \rVert\right), \qquad \{x_i\}_{i=1}^{N} \tag{1}$$

where Θ is the Heaviside function, ‖·‖ is the Euclidean norm, N is the time series length, and r is the recurrence threshold that limits the condition of a recurring state. The main recurrence quantifiers used in biomechanical studies are the recurrence rate (RR), determinism (DET), the average length of diagonal lines (AvgD), the maximum length of diagonal lines (MaxD), and the Shannon entropy of the frequency distribution of diagonal line lengths (EntD); Webber and Marwan [20] define these quantifiers. Among the classical quantifiers, RR estimates the probability of a state repeating itself, which makes it very sensitive to the recurrence threshold r (Eq. 1); RR is a measure of nonlinear correlation. The remaining quantifiers estimate specific dynamical recurrences and are based on the diagonal lines of the RP. DET calculates the proportion of recurrence points that form diagonal lines, providing the probability that two states close to each other remain close for at least dmin future states; the determinism measured by DET is thus associated with how closely orbits tend to remain on the same trajectory within the recurrence threshold. AvgD measures the average time that one state remains close to another and can be interpreted as the mean prediction time. EntD reflects the complexity of the deterministic structure of the system and calculates the amount of information of the RP with respect to the diagonal lines: the higher the EntD, the more complex the RP structure.
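A minimal numerical sketch of Eq. (1) and of two of the quantifiers defined above (RR and DET) follows. The implementation details (a brute-force distance matrix and diagonal-line counting with a sentinel) are illustrative choices of ours, not taken from the cited toolboxes.

```python
import numpy as np


def recurrence_quantifiers(states, r, dmin=2):
    """RP(i,j) = Heaviside(r - ||x_i - x_j||), plus RR (fraction of recurrent
    points) and DET (fraction of recurrent points lying on diagonal lines of
    length >= dmin)."""
    X = np.atleast_2d(np.asarray(states, dtype=float))
    if X.shape[0] == 1:               # allow plain 1-D time series as input
        X = X.T
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    RP = (D <= r).astype(int)
    n = RP.shape[0]
    rr = RP.sum() / n**2              # recurrence rate

    on_lines = 0                      # recurrent points on diagonals >= dmin
    for k in range(-(n - 1), n):
        run = 0
        for v in list(np.diagonal(RP, offset=k)) + [0]:  # sentinel flushes run
            if v:
                run += 1
            else:
                if run >= dmin:
                    on_lines += run
                run = 0
    det = on_lines / RP.sum() if RP.sum() else 0.0
    return RP, rr, det
```

For a strictly periodic series, every recurrent point lies on a long diagonal, so DET evaluates to 1, which matches the interpretation of DET given above.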

2.2 Recurrence as a Possibility for Kinetic and Kinematic Data Analysis

Kinetic Data

Among the first studies on postural control in which recurrence quantifiers were extracted is the work of Riley et al. [21], motivated by the nonlinear and non-stationary behavior of the center of pressure (COP) [22, 23]. They analyzed the deterministic characteristics and non-stationarity properties of postural sway on a force platform during quasi-quiet upright stance, concluding that with eyes closed the postural behavior becomes more deterministic, and that when the face is directed to the side of the body the COP shows increasing drift (the signal stabilizes in different places, revealing non-stationary characteristics) when the eyes are open. This suggests that the system, when obtaining parallax movement information, tends to stabilize by avoiding the "deterministic" balance strategy, so that in the

condition of open eyes postural control is strongly influenced by the gain of visual information. Further, the same authors investigated the medial–lateral (ML) and anteroposterior (AP) COP displacement while performing the following tasks: (1) aiming at a fixed target near the platform in the AP direction; (2) aiming at a fixed target far from the platform in the AP direction; (3) aiming at a fixed target near the platform in the ML direction; (4) aiming at a fixed target far from the platform in the ML direction [23]. The recurrence analysis techniques applied to COP sway have changed over time in data acquisition (sampling frequency), reconstruction of the state space, parameterization of the RQA for the calculation of quantifiers, and the choice of quantifiers. These changes can be seen in the sampling frequency used to acquire kinetic data for the extraction of RQA quantifiers, which ranges between 40 and 200 Hz across studies. The sampling frequency is a delicate topic, as an inappropriately sampled signal can fail to capture the dynamic characteristics needed by the recurrence quantifiers [6, 24–27]. By Nyquist's theorem, the sampling frequency must be at least twice the highest frequency present in the signal. For COP signals, representative of postural control in quasi-quiet upright posture, which have 90% of the total power below 2 Hz [28], the sampling frequency should be much higher, as the recurrence quantifiers are sensitive to characteristics that are not detected by spectral analysis [29]. Although spectral analysis does not detect transient variations of the signal, a wider acquisition band influences the values of the recurrence quantifiers: Rhea et al. examined the behavior of Shannon entropy applied to human center of pressure (COP) data while varying the sampling frequency, and found that the recurrence quantifiers change when the sampling frequency ranges between 166 and 500 Hz [26].
For the extraction of recurrence quantifiers, it is necessary to reconstruct the system's state space. This approach has become common due to the need to extract characteristics from the attractor that represents the behavior of the dynamic system. However, there are several approaches to applying the method of delays (embedding the time series with delayed copies of itself). The most common methodologies fall into two groups: (1) the delay is an average across all participants, obtained by the mean mutual information algorithm [5, 9, 21, 30]; and (2) the delay obtained by the mean mutual information algorithm is calculated and applied to each individual separately [6, 25, 31]. These delay values range from 2 to 55 samples [25, 27], and the embedding dimension, found with the global false nearest neighbors algorithm, varies between 3 and 20 [32, 33]. The first case is


found mainly in studies prior to 2010, and the second predominantly in studies after 2010. The next step to extract recurrence quantifiers is the parameterization of the RQA, which is related to the characteristics of the state space. First, it is necessary to choose a proximity limit for the states, the recurrence threshold, which is given in the literature relative to the distances between states. Three main criteria for choosing the recurrence threshold can be found, each a proportion of the state distances. The first is a function of the mean distance between states, varying between 10 and 35% of the mean distance. The second relates to the maximum distance between states, usually adopted as 10% of the maximum distance. The third is related to the standard deviation of the distances, usually adopted as 20% of that standard deviation [25]. Finally, for the value of dmin used to calculate the DET quantifier, it is advisable to choose a value that avoids saturation of the quantifier, for example DET values very close to 100%. The choice of this parameter is somewhat subjective, and literature values lie between 2 and 5 [24, 30, 34–36]. These different parameters for acquisition and processing of the signal, and for applying the quantifiers to the data, can generate values that make different studies difficult to compare. It can be seen, for example, that EntD values for populations with similar characteristics differ considerably across studies [24, 35]. Some contrasting results can be observed in other cases, as in Haddad et al. and Seigle et al. [24, 35], in which the COP of populations with some disability or pathology presents more deterministic and less stable behavior than that of healthy populations. On the other hand, in the study by Ramdani et al.
[34], elderly fallers presented a higher EntD than healthy elderly.

Kinematic Data

By kinematic data we mean signals acquired by motion capture systems, gyroscopes, accelerometers, and magnetometers. In these experiments, markers or inertial sensors are placed on anatomical points of the body that best represent the characteristics of interest. In gait studies applying RQA, data are predominantly sampled from markers on the thoracic and lumbar regions at sampling rates ranging from 100 to 128 Hz [7, 15, 37–39]. For the reconstruction of the state space, velocity or acceleration data are used due to their stationary characteristics. The mean mutual information and global false nearest neighbors algorithms are used to choose the delay and the embedding dimension, respectively. These values usually


found in the literature range from 6 to 10 samples for the delay and 5–6 for the embedded dimension [7, 17, 37]. For the parameterization of the RQA, the adopted value for the recurrence threshold is usually 40% of the maximum variation of the states, and the minimum value used as a diagonal line is dmin = 4, which is empirically determined [7, 15, 16, 40]. The main quantifiers directly associated with the gait phenomenon are RR, EntD, AvgD and MaxD. A quantifier difficult to interpret in a gait context is EntD, which presents a behavior related to diagonal lines, being sensitive to the AvgD parameter, as reported by Riley et al. [21]. On the other hand, Bizovska et al. [17] report that EntD, despite presenting a different behavior from other entropies, was the only quantifier to present a significant difference between faller and non-faller older adults. Recurrence quantifiers have also presented correlations with some clinical tests in [41]: the quantifiers DET, AvgD and MaxD present correlations with the Timed Up and Go (TUG) test. Other studies also show the correlation of quantifiers with clinical tests in people with unilateral vestibular hypofunction [16], questionnaires for detection of faller older adults, and studies of multiple sclerosis staging [32]. In these studies, the data were not collected with the same sampling frequency, ranging between 50 and 296.3 Hz, and with the recurrence threshold r varying between two conditions: 10% of the average distance between points in the state space and 40% of the maximum distance in the state space. The reliability of recurrence quantifiers is another point to be considered. Recent studies have reported that the collected time series does not need to be long for the extraction of RR, DET, AvgD, and EntD, unlike other nonlinear descriptors that require long time series [31, 36].
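As an illustration of the parameter choices discussed above, the following sketch computes RR and DET from a scalar time series. The embedding values and the mean-distance threshold criterion are assumptions chosen within the ranges reported in the text, not the settings of any particular cited study.

```python
import numpy as np

def recurrence_quantifiers(signal, delay=8, dim=5, r_frac=0.2, dmin=2):
    """Compute RR and DET from a 1-D time series via a recurrence plot.

    delay/dim are illustrative embedding values within the ranges
    reported in the text (ideally chosen by mutual information and
    false nearest neighbors); r_frac applies the mean-distance
    threshold criterion.
    """
    signal = np.asarray(signal, dtype=float)
    n = len(signal) - (dim - 1) * delay
    # Time-delay embedding: each row is one reconstructed state
    states = np.column_stack(
        [signal[i * delay : i * delay + n] for i in range(dim)]
    )
    # Pairwise Euclidean distances between states
    d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=2)
    r = r_frac * d.mean()           # threshold: fraction of mean distance
    rec = d <= r                    # recurrence matrix
    np.fill_diagonal(rec, False)    # exclude the line of identity
    n_pts = rec.sum()
    rr = n_pts / (n * n - n)        # recurrence rate (RR)
    # DET: fraction of recurrent points on diagonals of length >= dmin
    det_pts = 0
    for k in range(-(n - 1), n):
        run = 0
        for v in list(np.diagonal(rec, k)) + [False]:
            if v:
                run += 1
            else:
                if run >= dmin:
                    det_pts += run
                run = 0
    det = det_pts / n_pts if n_pts else 0.0
    return rr, det
```

A periodic signal produces long diagonal structures (high DET), while white noise produces mostly isolated recurrence points, which is the contrast the quantifier is meant to capture.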

3

Conclusions

The use of RQA on biomechanical data is relatively recent, and there is no consensus on the choice of quantifier parameters and data acquisition settings. However, the results are encouraging, since quantifiers can be extracted from data with a lower sampling rate and a lower computational cost while maintaining reliable results. RQA is a promising tool for the analysis of biomechanical data, as its quantifiers can distinguish between different populations, have a functional interpretation, and can be correlated with clinical tests. Acknowledgements The authors would like to thank the financial support of the National Council for Scientific and Technological Development (CNPq), the Coordination for the Improvement of Higher Education Personnel (CAPES), the Foundation for Research

Support of State of Goiás (FAPEG) and the Foundation for Research Support of State of Minas Gerais (FAPEMIG). A. O. Andrade and M. F. Vieira are fellows of CNPq Brazil (304818/2018-6 and 306205/2017-3, respectively). Conflict of Interest The authors declare that they have no conflict of interest.

References 1. Vieira MF, Rodrigues FB, Souza GS de S e, Magnani RM, Lehnen GC, Campos NG et al (2017) Gait stability, variability and complexity on inclined surfaces. J Biomech 54:73–79 2. Matsuo Y, Asai Y, Nomura T, Sato S, Inoue S, Mizukura I et al (2005) Intralimb and interlimb incoordination: comparative study between patients with parkinsonism and with cerebellar ataxia. J Japanese Phys Ther Assoc 8:47–52 3. Farmer SF (1999) Pulsatile central nervous control of human movement. J Physiol 517(1):3 4. Rosenblatt NJ, Grabiner MD (2010) Measures of frontal plane stability during treadmill and overground walking. Gait Posture 31(3):380–384 5. Schmit JM, Regis DI, Riley MA (2005) Dynamic patterns of postural sway in ballet dancers and track athletes. Exp Brain Res 163(3):370–378 6. Negahban H, Sanjari MA, Karimi M, Parnianpour M (2016) Complexity and variability of the center of pressure time series during quiet standing in patients with knee osteoarthritis. Clin Biomech 32:280–285 7. Bisi M, Riva F, Stagni R (2014) Measures of gait stability: performance on adults and toddlers at the beginning of independent walking. J Neuroeng Rehabil 11(1):131 8. Rhea CK, Kiefer AW, Wright WG, Raisbeck LD, Haran FJ (2015) Interpretation of postural control may change due to data processing techniques. Gait Posture 41(2):731–735 9. Balasubramaniam R, Riley MA, Turvey MT (2000) Specificity of postural sway to the demands of a precision task. Gait Posture 11:12–24 10. Josiński H, Michalczuk A, Mucha R, Świtoński A, Szczȩsna A, Wojciechowski K (2016) Analysis of human motion data using recurrence plots and recurrence quantification measures. AIP Publ 180014:397–406 11. Thuraisingham RA, Gottwald GA (2006) On multiscale entropy analysis for physiological data. Phys A Stat Mech its Appl 366:323–332 12. Vieira MF, Rodrigues FB, de Sá e Souza GS, Magnani RM, Lehnen GC, Andrade AO (2017) Linear and nonlinear gait features in older adults walking on inclined surfaces at different speeds.
Ann Biomed Eng 45(6):1560–1571 13. Dingwell JB, Cusumano JP (2000) Nonlinear time series analysis of normal and pathological human walking. Chaos 10(4):848–863 14. Bruijn SM, Meijer OG, Beek PJ, van Dieen JH (2010) The effects of arm swing on human gait stability. J Exp Biol 213(23):3945–3952 15. Riva F, Toebes MJP, Pijnappels M, Stagni R, van Dieën JH (2013) Estimating fall risk with inertial sensors using gait stability measures that do not require step detection. Gait Posture 38(2):170–174

16. Sylos Labini F, Meli A, Ivanenko YP, Tufarelli D (2012) Recurrence quantification analysis of gait in normal and hypovestibular subjects. Gait Posture 35(1):48–55 17. Bizovska L, Svoboda Z, Vuillerme N, Janura M (2017) Multiscale and Shannon entropies during gait as fall risk predictors—a prospective study. Gait Posture 52:5–10 18. Pellecchia GL, Shockley K, Turvey MT (2005) Concurrent cognitive task modulates coordination dynamics. Cogn Sci Soc 29:531–557 19. Eckmann J-P, Kamphorst SO, Ruelle D (1987) Recurrence plots of dynamical systems. Europhys Lett 4(9):973–977 20. Webber CL, Marwan N (2015) Recurrence quantification analysis: theory and best practices. Springer Complexity, 421 p 21. Riley MA, Balasubramaniam R, Turvey MT (1999) Recurrence quantification analysis of postural fluctuations. Gait Posture 9(1):65–78 22. Eckmann JP, Ruelle D (1992) Fundamental limitations for estimating dimensions and Lyapunov exponents in dynamical systems. Phys D Nonlinear Phenom 56(2–3):185–187 23. Schumann T, Redfern MS, Furman JM, El-Jaroudi A, Chaparro LF (1995) Time-frequency analysis of postural sway. J Biomech 28(5):603–607 24. Seigle B, Ramdani S, Bernard PL (2009) Dynamical structure of center of pressure fluctuations in elderly people. Gait Posture 30(2):223–226 25. Negahban H, Salavati M, Mazaheri M, Sanjari MA, Hadian MR, Parnianpour M (2010) Non-linear dynamical features of center of pressure extracted by recurrence quantification analysis in people with unilateral anterior cruciate ligament injury. Gait Posture 31(4):450–455 26. Rhea CK, Silver TA, Hong SL, Ryu JH, Studenka BE, Hughes CML et al (2011) Noise and complexity in human postural control: interpreting the different estimations of entropy. PLoS One 6(3):1–9 27. Coubard OA, Ferrufino L, Nonaka T, Zelada O, Bril B, Dietrich G (2014) One month of contemporary dance modulates fractal posture in aging. Front Aging Neurosci 6(Feb):1–12 28.
Hayes KC (1982) Biomechanics of postural control. Exerc Sport Sci Rev 10(1):363–391 29. Webber CL, Schmidt MA, Walsh JM (1995) Influence of isometric loading on biceps EMG dynamics as assessed by linear and nonlinear tools. J Appl Physiol 78(3):814–822 30. Schmit JM, Riley MA, Dalvi A, Sahay A, Shear PK, Shockley KD et al (2006) Deterministic center of pressure patterns characterize postural instability in Parkinson’s disease. Exp Brain Res 168 (3):357–367 31. Mazaheri M, Negahban H, Salavati M, Sanjari MA, Parnianpour M (2010) Reliability of recurrence quantification analysis measures of the center of pressure during standing in individuals with musculoskeletal disorders. Med Eng Phys 32(7):808–812 32. Cao H, Peyrodie L, Agnani O, Cavillon F, Hautecoeur P, Donzé C (2015) Evaluation of an Expanded Disability Status Scale (EDSS) modeling strategy in multiple sclerosis. Med Biol Eng Comput 53 (11):1141–1151 33. Hasson CJ, Van Emmerik REA, Caldwell GE, Haddad JM, Gagnon JL, Hamill J (2008) Influence of embedding parameters and noise in center of pressure recurrence quantification analysis. Gait Posture 27(3):416–422 34. Ramdani S, Tallon G, Bernard PL, Blain H (2013) Recurrence quantification analysis of human postural fluctuations in older fallers and non-fallers. Ann Biomed Eng 41(8):1713–1725 35. Haddad JM, van Emmerik REA, Wheat JS, Hamill J (2008) Developmental changes in the dynamical structure of postural sway during a precision fitting task. Exp Brain Res 190(4):431–441

36. Assis AO, Rodrigues FB, Carafini A, Lemes TS, de Villa GA, Andrade AO et al (2020) Influence of sampling frequency and number of strides on recurrence quantifiers extracted from gait data. Comput Biol Med 119 37. Riva F, Bisi MC, Stagni R (2014) Gait variability and stability measures: minimum number of strides and within-session reliability. Comput Biol Med 50:9–13 38. Tamburini P, Storm F, Buckley C, Bisi MC, Stagni R, Mazzà C (2018) Moving from laboratory to real life conditions: influence on the assessment of variability and stability of gait. Gait Posture 59:248–252

39. Bisi MC, Stagni R (2016) Development of gait motor control: What happens after a sudden increase in height during adolescence? Biomed Eng Online 15(1):1–12 40. Riva F, Grimpampi E, Mazzà C, Stagni R (2014) Are gait variability and stability measures influenced by directional changes? Biomed Eng Online 13(1):1–11 41. Riva F, Tamburini P, Mazzoli D, Stagni R, Participants A (2015) Is there a relationship between clinical rating scales and instrumental gait stability measures? Springer Int Publ Switz 45

Virtual Environment for Motor and Cognitive Rehabilitation: Towards a Joint Angle and Trajectory Estimation D. Soprani, T. Botelho, C. Tavares, G. Cruz, R. Zanoni, J. Lagass, S. Bino, and P. Garcez

Abstract

Cognitive and motor development are important issues in patients with special needs, such as Down Syndrome (DS) or Autism Spectrum Disorder (ASD). The difficulty that individuals with DS and ASD present in the acquisition of motor and cognitive skills has motivated scientists to study and develop means of intervention. The application of technology has improved the traditional way of dealing with therapies, especially with the development of virtual environments. In this sense, serious games can create an immersive environment from recreational resources to assist in rehabilitation and physical and motor training. Motion analysis can be made in a virtual environment through the estimation of joint angles and trajectories. This can help health professionals in a quantitative way to analyze motor and cognitive rehabilitation. This paper aims to show a proposal of a virtual environment to be applied in the rehabilitation of motor and cognitive functions of people with special needs. The system consists of a depth camera (RGB-D) and a projection interface for serious games. A protocol of data acquisition based on the tasks performed according to the game is proposed. Preliminary tests with a healthy subject were made. The results are the establishment of the virtual environment: a game was developed that involves motor tasks but also includes cognitive development. Joint angle and trajectory estimation are shown. A comparison which involves the movement and the time of response of the subject performing the task between the trials in the protocol is also made. The proposed technique for joint angle and trajectory estimation demonstrated to be a feasible and straightforward option to obtain kinematic analysis from playful tasks performed in a markerless system.

D. Soprani (✉) · T. Botelho · C. Tavares · G. Cruz · R. Zanoni · J. Lagass · S. Bino · P. Garcez Federal Institute of Education, Science and Technology of Espírito Santo, Vitória, Brazil e-mail: [email protected]

Keywords

Virtual environment · Serious games · RGB-D cameras · Rehabilitation · Joint angle estimation

1

Introduction

Cognitive and motor development are important issues in patients with special needs, such as Down Syndrome (DS) or Autism. Dynamic motor dysfunction is widespread among individuals with DS. This includes extended time for movements and reactions, balance and postural deficits, and hypotonia, which reduces postural control and proprioception, influencing the sensory and motor experiences. It leads to a neuropsychomotor development delay and late gait acquisition, affecting fine and gross motor skills performance [1]. Most of the impairments associated with DS are thought to originate from a sensory dysfunction, i.e., the fact that the sensory stimuli are poorly processed and integrated [2]. The result of this incomplete or distorted process is the creation of an abnormal mental representation of the external world. This, in turn, may produce motor impairments and deficits in cognitive skills, like spatial awareness, language usage, and social behavior, and induces distress and discomfort, frequent concentration losses, and disengagement from the proposed activities [3]. Mobility is also an issue in people who have autism. Autism, or Autism Spectrum Disorder (ASD), is a neurological disorder characterized by impaired social interaction, impaired verbal and non-verbal communication, and restricted and repetitive behavior. Individuals with autism exhibit many

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_89




forms of repetitive or restricted behavior, which the Repetitive Behavior Scale-Revised (RBS-R) categorizes as stereotypy: repetitive movements such as hand flapping, turning the head from side to side, or rocking the body [4]. They also tend to have motor disorders associated with gait and manipulation skills [5]. The difficulty that individuals with DS and ASD present in the acquisition of motor and cognitive skills has motivated scientists to study and develop means of intervention. Many therapeutic interventions have the goal of teaching some basic skills so that the patient can acquire autonomy in his/her daily life, e.g., through practices that promote gross and fine motor coordination, attention, and social interaction [6]. The application of technology has improved the traditional way of dealing with therapies, especially with the development of virtual environments. In this sense, serious games can create an immersive environment from recreational resources to assist in rehabilitation and physical and motor training [7–9]. Motion analysis can be made in a virtual environment through the estimation of joint angles and trajectories. This can help health professionals in a quantitative way to analyze motor and cognitive rehabilitation. Commonly, analysis of joint angles is conducted through portable wearable sensors, which include electrogoniometers and potentiometers mounted on a single axis. However, they are bulky and limit natural patterns of movement [10]. This paper aims to show a proposal of a virtual environment to be applied in the rehabilitation of motor and cognitive functions of people with special needs. Joint angle and trajectory estimations are shown as preliminary tests. A comparison which involves the movement and the time of response of a healthy subject performing the task between the trials in the protocol is also made. The paper is organized as follows.
In the Materials and Methods Section, the system, the equipment used, and the proposed test protocol are described. In the Results Section, the virtual environment, the tasks involving the proposed serious game, and the joint angle and trajectory estimations are shown. In the Discussion Section, the results cited above are analyzed and compared, and in the Conclusion Section, a conclusion is made pointing out the contribution of this work and future works.

2

Materials and Methods

The virtual environment proposed consists of a depth camera (RGB-D) and a projection interface for serious games, as well as a computer that serves as the central point of data processing. This project uses the AstraPro® RGB-D camera from Orbbec (USA).

3

The Orbbec Sensor

The Orbbec sensor is composed of an RGB camera, a depth camera, an IR projector, and 2 microphones. It can be used with the Astra SDK (version 2.0.9 Beta3 provides 19 skeletal joints; developed by the same company that manufactures the sensor), the OpenNI framework (compatible with the open-source OpenNI SDK for 3D natural interaction sensors), or other third-party SDKs. It has a 640 × 480 resolution for the depth camera and a 1280 × 960 resolution for the RGB camera, both at 30 frames per second, using USB 2.0 as the data interface. The field of view is 60° horizontally, 49.5° vertically and 73° diagonally. As operating systems, it supports Windows 7/8/10, Linux, OS X and Android. The depth range is 0.6–8.0 m (optimal between 0.6 and 5.0 m) for the version of Orbbec Astra that we are using [12]. It is a versatile alternative to the discontinued Microsoft Kinect® (USA). Figure 1 illustrates this sensor.

4

The Virtual Environment

The system developed to compose the virtual environment works so that the user, through an avatar, interacts with the game environment. Due to stimuli caused by the game, the user performs a playful task which can be used to discern laterality, limb extension, proprioception etc. The environment also provides feedback to the user, usually visual and acoustic, to motivate him/her to perform the task when it is successfully accomplished. Figure 2 illustrates the components of the system and the proposed setup. Figure 2 shows the operator, who configures the serious game in the computer where the system is running; the screen with the game and an avatar displayed, which is the user's visual feedback; and the user in the field of view of the RGB-D camera. The games were developed on the Unity3d® platform. In this project, the Astra SDK was used to obtain the spatial coordinates of the joints. The joint angles were obtained

Fig. 1 Orbbec 3D camera sensor. Source [11]



Fig. 4 Game screen

5

The Tasks in the Game and the Data Acquisition Protocol

Fig. 2 Components of the systems

Fig. 3 Parameters to calculate the articulation of the joints (Adapted from [13])

through these spatial coordinates, as described by [13] and other authors. Equation (1) relates the joint angle to the forearm length d1, the upper arm length d2, and the shoulder–wrist distance d3, as shown in Fig. 3. The trajectories of the joints were obtained directly from the coordinates.

θ = cos⁻¹((d1² + d2² − d3²) / (2 d2 d1))   (1)

Figure 4 illustrates the game screen. As can be seen, the game exhibits a friendly scenario in which an avatar must make movements to achieve a task, generally touching balls. The screen also shows data, the score, and the segmentation of the user.
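Equation (1) is the law of cosines applied to the segment lengths. A minimal sketch of the computation from three 3-D joint coordinates could look like the following; the function and variable names are illustrative, not taken from the Astra SDK.

```python
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Elbow angle (degrees) from 3-D joint coordinates via the law of cosines.

    d1: forearm length (elbow-wrist), d2: upper arm length
    (shoulder-elbow), d3: shoulder-wrist distance, as in Eq. (1).
    """
    d1 = np.linalg.norm(np.subtract(wrist, elbow))
    d2 = np.linalg.norm(np.subtract(elbow, shoulder))
    d3 = np.linalg.norm(np.subtract(wrist, shoulder))
    cos_theta = (d1**2 + d2**2 - d3**2) / (2 * d2 * d1)
    # Clip to guard against rounding noise in the camera coordinates
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```

A fully extended arm gives an angle close to 180°, consistent with the relaxed-arm behavior described in the Discussion.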

The project was developed in partnership with the Association of Parents and Friends of Exceptional People (APAE-ES) in the municipality of São Mateus/ES. Physiotherapy and psychology professionals assisted with the proposed tasks. The tasks in the game make up a test protocol. The objective of the game is to make the user's avatar touch the balls with the hands or feet, according to the side and height at which they appear. The location, the number of balls and the time at which they appear can be programmed in advance. Each ball remains in the environment for a certain time, also adjustable, and disappears when it is touched, that is, when the task is performed correctly, or when the time expires. Such adjustments can be made between sessions, usually by a health professional assisted by the system's developers. Each success is associated with a score that is shown on the game screen as feedback. At the end of each testing session, the system generates files and graphics related to the data of the user. The protocol of data acquisition was based on the tasks in the game. Ten balls were set to appear at the same position: one hundred and thirty-five degrees with respect to the trunk (represented by a2), above the user on the right side, at his/her arm's length distance, as can be seen in Fig. 5. Each ball appeared for three seconds or until being reached by the user; three seconds after the previous ball disappeared, the next one appeared. Each movement to touch the ball composes a trial, or a repetition, in the experiment. In this initial stage of the project, to check the feasibility of the system, the task was performed by one healthy subject. The user does not move forward or backward, nor sideways. Just



reach the balls with the arm. This research was submitted to the Ethical Committee of IFES under project number 30540020.8.0000.5072. Due to the current pandemic, tests with more subjects could not be carried out until the date of this paper. The elbow and shoulder angles, represented in Fig. 5 by a1 and a2, respectively, were calculated and analyzed. The shoulder angle increases as the forearm is raised. The elbow angle increases as the wrist moves further away from the shoulder. The wrist, elbow, and shoulder joint trajectories are also analyzed. The data were not filtered after the acquisition and the sampling time is 30 ms.

Fig. 5 Illustration of the protocol of tests and the angles and joints analyzed

Fig. 6 Angles of the left and right arm during the test

6

Results

The joint angles and trajectories are shown in this section. All angles and trajectories shown are with respect to the frontal plane. Figure 6 shows the angles of the left and the right arm during the whole test. The elbow and the shoulder angles are shown, as well as a signal that represents the presence of the ball. As can be seen, ten trials, or ten movements, are shown. Figure 7 shows the elbow and the shoulder angles of the right arm during one trial of the test. It shows in more detail the second trial, or movement repetition, of Fig. 6. The representation of the ball is also shown. The value of this last signal is generic and only denotes the presence or the absence of the ball. Figure 8 shows the overlapping shoulder (top) and elbow (bottom) angles of the right arm during the whole test. As can be seen, there are ten signals, one for each repetition. The trials are synchronized considering that time zero is when the ball appears for each repetition.
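The trial synchronization described here (time zero at ball appearance) can be sketched as follows; the function and signal names are hypothetical, with only the 30 ms sampling period taken from the text.

```python
import numpy as np

def align_trials(angle, ball_present, dt=0.03, window=3.0):
    """Cut one angle segment per trial, time zero at ball appearance.

    angle: 1-D array of joint angles (degrees), one sample per frame.
    ball_present: 1-D binary array, 1 while a ball is on screen.
    dt: sampling period in seconds (30 ms in the text).
    window: segment length in seconds after each onset.
    """
    ball = np.asarray(ball_present).astype(int)
    # Onsets: samples where ball_present rises from 0 to 1
    onsets = np.flatnonzero(np.diff(ball, prepend=0) == 1)
    n = int(round(window / dt))
    trials = [angle[i:i + n] for i in onsets if i + n <= len(angle)]
    return np.vstack(trials)  # one row per repetition
```

Overlaying the rows of the returned array reproduces plots of the kind shown in Fig. 8, and per-row onset detection on the angle signal would give the reaction times discussed later.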



Fig. 7 Angles of the right arm during one repetition

Figure 9 shows the trajectories of the right wrist (blue), elbow (red) and shoulder (black) joints in one repetition. The second repetition of Fig. 6 is shown. As can be seen, the movement was divided into raising (continuous line) and lowering (dashed line) phases, to provide a better understanding of the movement. Figure 10 shows the overlapping trajectories of the right wrist during the whole test. The trajectories shown in Figs. 9 and 10 are with respect to the frontal plane, considering an observer facing the game screen, behind the user. Thus, the vertical axis refers to height or vertical movement and the horizontal axis refers to horizontal movement.

Fig. 8 Overlapping angles of the right arm during the test

7

Discussion

The system proved to be stable in capturing frames for tasks around 0.45–0.5 m between the camera and the user and for movements that do not overlap body members. Thus, under these conditions there were no occlusion or discontinuity problems. As can be seen in Fig. 6, the left shoulder and elbow angles are virtually constant, at approximately twenty and one hundred and sixty degrees, respectively. This is related to the fact that the task involves only the right arm; therefore, the angles are consistent with the application. With a relaxed arm, the angle of the elbow is close to one hundred and eighty degrees, since the forearm is extended. In the same position, the shoulder angle has a small value, since the arm segment is close to the trunk segment, making a small angle. From the same figure, it can be seen that the right arm presents a pattern along the movements. The two angles shown have related variations, since the movement made to touch the ball always has the same target position. Figure 7 shows the second movement of Fig. 6. It is possible to notice that about seven hundred milliseconds after the appearance of the ball, the user begins to flex the forearm, and the angle of the elbow begins to decrease. Almost at the same time, the angle of the shoulder begins to increase, which denotes the lifting of the arm. At a certain point in the movement, at about 5.7 s, the angle of the elbow starts to increase again because the forearm is extended to


Fig. 9 Trajectories of the wrist, elbow, and shoulder joints of the right arm during one movement

Fig. 10 Overlapping trajectories of the right wrist during the whole test

reach the ball, which occurs at approximately 6.2 s. Then, the ball disappears. After that, when the arm is moving down, the angles of the elbow and shoulder begin to decrease, because in this stage the forearm is initially extended. Around 6.9 s, the angle of the elbow increases again to reach the final resting position, similar to the initial position, after a little damping at about 7.25 s. The angle of the shoulder also stabilizes at a value close to the one at the beginning of the movement. Figure 8 shows the similarities and differences of the movements shown in Fig. 6. Although the movement pattern is similar, differences can be observed, mainly in the reaction time of each repetition. The fastest time was around 0.5 s and the slowest was around 0.9 s. There were also small differences in the range of motion. The angle of the elbow varied from approximately ten degrees to one hundred and fifty degrees in all repetitions.


The angle of the shoulder ranged from approximately one hundred and fifty to approximately fifty degrees. The variation of the angles shown is compatible with the results obtained by [13], where the elbow angles obtained from flexion and extension movements are shown. Although [13] works to correct errors in these angles, which are acquired using the RGB-D Kinect® system, the data can be considered correlated before this correction. Figure 9 shows the trajectory, in the frontal plane, of the wrist, elbow and shoulder joints during the second repetition of Fig. 6. It is possible to observe that the displacement of the shoulder is not large. This is because the user only raises and lowers the arm during the test, and the shoulder is the axis of the movement. It can be observed that as the wrist rises, the elbow moves away from the trunk, due to the flexion of the elbow joint. When the wrist is at approximately the same height as the elbow and shoulder, the elbow joint extends again until the ball is reached at the top. The lowering movement is analogous, with a little variation in the trajectory. The whole trajectory corroborates the results shown in Figs. 6 and 7. In Fig. 10, the wrist trajectory during the different repetitions maintains a pattern, although there are small variations in vertical and horizontal amplitude. The trajectories shown are compatible with those obtained by [14, 15]. Although those works have different purposes and the joint trajectories were obtained from other sensors, a preliminary comparison can be made; it indicates correspondence and corroborates the results obtained in this project. The validation of the angle data was made preliminarily with a manual medical goniometer. Maximum and minimum values in flexion and extension movements of the elbow and shoulder were analyzed. The data obtained with the goniometer were compatible with the calculated values.
However, this comparison needs further investigation, including other angle and trajectory estimators, as described in the future works.
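The preliminary validation described above can be sketched as a comparison of the extremes of the estimated angle series with manual goniometer readings; the readings and the tolerance value below are hypothetical, not reported in the study.

```python
import numpy as np

def rom_agreement(estimated_angles, gonio_max, gonio_min, tol_deg=5.0):
    """Compare the estimated range of motion with goniometer readings.

    estimated_angles: angle time series from the RGB-D system (degrees).
    gonio_max, gonio_min: goniometer readings at the movement extremes
    (hypothetical inputs; tol_deg is an illustrative tolerance).
    Returns (within_tolerance, (error_at_max, error_at_min)).
    """
    err_max = abs(float(np.max(estimated_angles)) - gonio_max)
    err_min = abs(float(np.min(estimated_angles)) - gonio_min)
    return err_max <= tol_deg and err_min <= tol_deg, (err_max, err_min)
```

A more thorough validation, as the authors note, would compare full trajectories against other estimators (e.g., an IMU system) rather than only the extremes.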

8

Conclusions

This paper presented a proposal of a virtual environment to be applied in the rehabilitation of motor and cognitive functions of people with special needs. Joint angle and trajectory estimations were shown. Tests were performed with a healthy subject. A comparison between movements of the subject performing tasks in a protocol of data acquisition was also made. The proposed technique for joint angle estimation demonstrated to be a feasible and straightforward option to obtain kinematic analyses from playful tasks in a markerless system. This can be used to discern laterality, to analyze limb


extension/flexion, proprioception etc. The main contribution of this Research Project, even in its preliminary stage, is the possibility of carrying out these tasks together with an analysis of the movements performed by the user. It is also possible to make preliminary analyses and comparisons of the users' interaction with the tasks and their level of commitment between sessions. Another contribution is the use of the AstraPro® RGB-D camera, a versatile alternative to the discontinued Microsoft Kinect® (USA). In the future, to provide more information about the feasibility of the system, tests with more subjects and a comparison of the data with an IMU system will be made. Future work also includes the investigation of this technique in other motion planes (sagittal and transverse) and in 3D movement scenarios. The integration of this system with other depth cameras will be carried out through the Robot Operating System (ROS) for data synchronization. This will increase the space for movements and tests. When this is structured, tests with patients will be performed.

Acknowledgements The developers of this project would like to thank the Federal Institute of Education, Science and Technology of Espírito Santo for their support.

Conflict of Interest The authors declare that they have no conflict of interest.

References
1. Beqaj S, Tërshnjaku EET, Qorolli M, Zivkovic V (2018) Contribution of physical and motor characteristics to functional performance in children and adolescents with down syndrome: a preliminary study. Med Sci Monit Basic Res
2. Ketcheson L, Andrew Pitchford E, Kwon HJ, Ulrich DA (2017) Physical activity patterns in infants with and without down syndrome. Pediatr Phys Ther 29(3):200–206
3. Valencia-Jimenez N, da Luz S, Santos D, Souza M, Bastos T, Frizera A (2020) The effect of smart mirror environment on proprioception factors of children with down syndrome. Res Biomed Eng ISSN 2446-4732
4. Lam KS, Aman MG (2007) The repetitive behavior scale-revised: independent validation in individuals with autism spectrum disorders. J Autism Dev Disord 37(5):855–866
5. Warrick A (2014) Interactive rehabilitation system for improvement of balance therapies in people with cerebral palsy. IEEE Trans Neural Syst Rehab Eng 22(2):1534–4320
6. Guerra DR, Sylvia M, Martin-Gutierrez J, Acevedo R, Salinas S (2019) Hand gestures in virtual and augmented 3D environments for down syndrome users. Appl Sci (Switzerland) 9(13):1–16
7. Abellard A, Abellard PAH (2017) Applications: serious games adapted to children with profound intellectual and multiple disabilities. In: 9th international conference on virtual worlds and games for serious applications (VS-Games), pp 183–184
8. Menezes RC, Batista PKA, Ramos AQ, Medeiros AFC (2014) Development of a complete game-based system for physical therapy with kinect. In: 2014 IEEE 3rd international conference on Serious Games and Applications for Health (SeGAH), Rio de Janeiro
9. Konstantinidis EI, Billis AS, Paraskevopoulos IT, Bamidis PD (2017) The interplay between IoT and serious games towards personalized healthcare. In: 9th international conference on virtual worlds and games for serious applications (VS-Games), Athens, pp 249–252
10. Hawkins D (2000) A new instrumentation system for training rowers. J Biomech 33:241–245
11. Maolanon P, Sukvichai K, Chayopitak N, Takahashi A (2019) Indoor room identify and mapping with virtual based SLAM using furnitures and household objects relationship based on CNNs. In: 10th international conference of Information and Communication Technology for Embedded Systems (IC-ICTES), Bangkok, Thailand, pp 1–6
12. Orbbec 3D at https://orbbec3d.com/product-astra/. Accessed 9 June 2020
13. Valencia-Jimenez N, Leal-Junior A, Avellar L, Vargas-Valencia L, Caicedo-Rodríguez P, Ramírez-Duque AA, Lyra M, Marques C, Bastos T, Frizera A (2019) A comparative study of markerless systems based on color-depth cameras, polymer optical fiber curvature sensors, and inertial measurement units: towards increasing the accuracy in joint angle estimation. Electronics 8:173
14. Papaleo E, Zollo L, Garcia-Aracil N et al (2015) Upper-limb kinematic reconstruction during stroke robot-aided therapy. Med Biol Eng Comput 53(9):815–828. https://doi.org/10.1007/s11517-015-1276-9
15. Yang Y, Liu Y, Wang M et al (2014) Objective evaluation method of steering comfort based on movement quality evaluation of driver steering maneuver. Chin J Mech Eng 27:1027–1037

Kinematic Model and Position Control of an Active Transtibial Prosthesis V. Biazi Neto, G. D. Peroni, A. Bento Filho, A. G. Leal Junior and R. M. Andrade

Abstract

Transtibial prostheses replace the amputated limb, and their design characteristics are extremely important for the mobility of the user. Active prostheses are characterized by adding and dissipating energy during walking in a controlled manner. Over the years many research efforts have been dedicated to developing prostheses more suitable for the user; the main challenge is to achieve a device with power and speed performance suitable for the function in a light and compact structure. This work aims to develop a kinematic model of a bionic foot for transtibial amputees and to tune a controller for the ankle angle in order to enable the proper selection of the system components. Keywords

Control • Transtibial prosthesis • Model • Bionic foot

1 Introduction

The number of amputation victims has grown at a worrying rate, with vascular diseases, diabetes mellitus, smoking, hypertension, trauma and congenital malformations being the main risk factors [1,2]. The situation becomes more worrying, and has a greater socioeconomic impact, when the injuries suffered cause the loss of work capacity, socialization and quality of life [3].

V. B. Neto (B) · A. G. Leal Junior Graduate Program in Electrical Engineering, Department of Mechanical Engineering, Federal University of Espírito Santo, Vitoria, Brazil e-mail: [email protected] G. D. Peroni · A. B. Filho · R. M. Andrade Department of Mechanical Engineering, Federal University of Espírito Santo, Vitoria, Brazil

Over the years researchers have been working to mitigate the side effects of amputation by developing more suitable prostheses for the users [4]. In a general way, a prosthesis can be classified as passive, semi-active or active [5]. Passive devices do not allow damping level control and do not require power to operate. Semi-active devices, on the other hand, provide damping level control and present better performance than passive devices [6]. Active prostheses are able to supply and dissipate energy in a controlled way, but usually present high power consumption [7]. Semi-active and active prostheses are also classified as microprocessor-controlled devices. Regarding ankle prostheses, series elastic actuators (SEAs) are the most commonly used for the function [8]. This configuration has some advantages, such as greater impact tolerance, low mechanical output impedance and passive storage of mechanical energy [9,10], and is used to mimic the triceps surae behavior [11]. Figure 1, based on the book of Neumann [12], shows the values for ankle angle, torque and power in the sagittal plane during the gait cycle of a healthy person. During gait, the ground reaction forces applied under the foot generate an external torque in the joints of the lower limbs. The ankle torque can reach approximately −1.8 Nm/kg at the end of the support phase, which may require a power of approximately 4 W/kg. This injection of power is necessary to maintain the progression of the body and prepare the leg for the flight phase. This work aims to develop a mathematical model that approximates the behavior of a bionic foot in operation in order to select its components optimally, seeking to reduce costs and improve the performance of a physical prototype to be manufactured in future works. In fact, mathematical modeling is an efficient way to simulate and control orthoses/exoskeletons [13] and prostheses [14] before fabrication, reducing costs.
As a basis for this work, it was decided to analyze existing prototypes that would guarantee the desired design characteristics. In view of a simple component arrangement favorable to obtaining dimensions close to those of the human foot, the Walk-Run Ankle (WRA) prosthesis designed by Grimmer [8] was defined as the base model.

Fig. 1 Variation of angle, torque and power at the ankle while walking [12], modified

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_90

2 Model Formulation

A model segmented into several components was developed and kinematic analyses were performed on each body separately. The kinematics of the bodies were then coupled in order to obtain the final behavior of the ankle according to the components selected for the prototype. The mechanical components present in the equipment and their arrangement are identified in Fig. 2.

Fig. 2 Scheme of the system [15], modified

The DC motor-reducer (1) is the component responsible for the initial actuation of the set. The pulley and belt assembly (2) changes the direction of movement between axles and increases the transmission ratio. The ball screw (3) converts the angular movement coming from the motor into a linear movement in the sagittal plane through the displacement of the nut (7). Finally, the connecting triangle (4) transfers this movement from the screw to the spring (5) and to the foot base (6), allowing the prosthesis to be displaced in relation to the user's leg.

2.1 Kinematic Model

In order to obtain the kinematic responses that allow a human gait to be replicated as closely as possible, a transmission system that relates the rotation of the ankle to that of the selected motor is necessary. The kinematic relations are applied to 4 different rigid bodies: frame, connecting triangle, foot base and spring. The equations proposed by Meriam and Kraige [16] were applied to the geometry of the prototype. Assuming that it is necessary to use a speed reducer at the motor output, the relation of Eq. (1) is obtained.

ωm · (1/i) = ω1 (1)

where ωm is the angular speed of the motor, ω1 is the angular speed of the pulley at the output of the reducer and i is the gear ratio of the reducer. For the system of pulleys and synchronous belt, used to change the position of the rotary axis, relating the speeds and radii of the pulleys gives the reduction of the set.


Equating the linear speeds of the different pulleys gives the relation between their angular speeds, represented in Eq. (2).

ω2 = ω1 · (r1 / r2) (2)

where ω2 is the angular speed of the pulley at the entrance of the ball screw, r1 is the radius of the pulley at the output of the reducer and r2 is the radius of the pulley at the entrance of the ball screw. A ball screw was needed to transform the rotary movement of the motor into a translational movement. The relation between the linear displacement of the nut and the angular displacement of the screw is given by Eq. (3), since for each complete turn of the screw, the nut moves a distance equal to the pitch.

x3 = (P / (2 · π)) · θ2 (3)

where P is the ball screw pitch, θ2 is the angular displacement of the pulley at the ball screw entrance and x3 is the linear displacement of the nut. Deriving Eq. (3) and substituting Eqs. (1) and (2), it is possible to obtain the linear speed of the nut as a function of the motor rotation.

v3(ωm(t)) = (P · ωm · r1) / (2 · π · i · r2) (4)
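Equation (4) can be checked numerically; the pitch, pulley radii and reduction below are placeholder values, not the components selected for the prototype.

```python
import math

def nut_linear_speed(omega_m, P=0.002, r1=0.015, r2=0.030, i=4.0):
    """Eq. (4): linear speed of the ball-screw nut [m/s] from the motor
    angular speed omega_m [rad/s]. P is the screw pitch [m/rev], r1/r2 the
    pulley radii [m] and i the reducer ratio (all illustrative values)."""
    return P * omega_m * r1 / (2.0 * math.pi * i * r2)
```

With unit pulleys and unit reduction, one motor revolution per second (2π rad/s) advances the nut exactly one pitch per second, as expected.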

A general diagram of the problem at the point that relates the movement of the screw to the base of the foot can be seen in Fig. 3. Figure 3a shows relevant dimensions of the ball screw, while Fig. 3b, c shows the dimensions of the connecting triangle, where L1 is the distance between points 3 and 4 of the triangle, L2 is the distance between points 4 and 5 of the triangle, L3 is the distance between points 4 and 5 of the triangle, D1 is the distance between the frame bearing and the ball screw axis, D2 is the distance from the frame upper surface to the screw tip, x3 is the linear displacement of the screw nut, α1 is the angle between L1 and the horizontal direction in the support phase, α2 is the angle between L2 and the vertical direction in the support phase, and α3 is the angle between D2 and the vertical direction in the support phase.

To determine γ̇ and β̇, the angular velocities of the frame (point 1) and ankle (point 4), respectively, we match the linear velocity of point 3, obtained from the triangle, with the linear velocity of point 3*, obtained from the screw, plus the relative sliding speed of the nut along the screw, as shown in Eq. (5).

v⃗3 = v⃗3* + v⃗3/3* (5)

From these relationships, Eqs. (6) and (7) provide the angular velocities of the frame and ankle coupled together.

γ̇ = [−L1 · β̇ · sin(α1 − β) + v3 · sin(α3 − γ)] / [D1 · sin(α3 − γ) + (D2 + x3) · cos(α3 − γ)] (6)

β̇ = [v3 · cos(α3 − γ) − γ̇ · (−D1 · cos(α3 − γ) + (D2 + x3) · sin(α3 − γ))] / [L1 · cos(α1 − β)] (7)
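Equations (6) and (7) are linear in γ̇ and β̇, so the coupled pair can be solved simultaneously as a 2×2 system. A minimal sketch follows; the geometry values are illustrative placeholders, not the prototype's dimensions.

```python
import numpy as np

def ankle_frame_rates(v3, x3, beta, gamma, L1=0.05, D1=0.03, D2=0.12,
                      alpha1=0.3, alpha3=0.1):
    """Solve Eqs. (6)-(7) for (gamma_dot, beta_dot) as a linear system.
    Rearranged:
      gamma_dot*[D1*s3 + (D2+x3)*c3] + beta_dot*L1*s1            = v3*s3
      gamma_dot*[-D1*c3 + (D2+x3)*s3] + beta_dot*L1*c1           = v3*c3
    with s1 = sin(alpha1-beta), c1 = cos(alpha1-beta),
         s3 = sin(alpha3-gamma), c3 = cos(alpha3-gamma).
    Lengths/angles are illustrative placeholders."""
    s1, c1 = np.sin(alpha1 - beta), np.cos(alpha1 - beta)
    s3, c3 = np.sin(alpha3 - gamma), np.cos(alpha3 - gamma)
    A = np.array([[D1 * s3 + (D2 + x3) * c3, L1 * s1],
                  [-D1 * c3 + (D2 + x3) * s3, L1 * c1]])
    b = np.array([v3 * s3, v3 * c3])
    gamma_dot, beta_dot = np.linalg.solve(A, b)
    return gamma_dot, beta_dot
```

This avoids the fixed-point iteration a literal back-and-forth evaluation of Eqs. (6) and (7) would imply.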

A similar analysis can be made by relating the speeds of points 5 and 6, common to the connecting triangle, foot base and spring. Figure 4 shows the variables to be applied in the kinematic relations for the base of the foot and for the spring, where lp is the distance between points 4 and 6 of the foot, lm is the spring length, α4 is the angle between lp and the horizontal direction in the support phase, α5 is the angle between lm and the horizontal direction in the support phase, ψ is the angular displacement of the foot base and φ is the angular displacement of the spring. The linear velocity of point 6, obtained from the foot base, is matched with the linear velocity of point 5, obtained from the triangle, plus the relative velocity between points 5 and 6 in the spring, as shown in Eq. (8).

v⃗6 = v⃗5 + v⃗6/5 (8)

Assuming the spring behaves as a rigid body, due to the need for it to have a high stiffness, Eqs. (9) and (10) give the values of ψ̇ and φ̇, representing, respectively, the angular velocities of the foot base and of the spring.

ψ̇ = [L2 · β̇ · cos(α2 − β) + lm · φ̇ · sin(α5 − φ)] / [lp · sin(α4 − ψ)] (9)

φ̇ = [lp · ψ̇ · cos(α4 − ψ) + L2 · β̇ · sin(α2 − β)] / [lm · cos(α5 − φ)] (10)

Fig. 3 a Ball screw, b Connection triangle: starting position, c Connection triangle: end position

Fig. 4 a Base of the foot, b Spring

2.2 Motor Dynamics

In order to obtain an input curve for the system, the model proposed by Nise [17] was adopted, which describes the dynamics of the armature-controlled DC motor. The generalization of a motor's electrical circuit is shown in Fig. 5. Applying Kirchhoff's laws to the generic circuit yields the equation that governs the dynamics of the electrical part of the motor; the Newton-Euler equations do the same for the mechanical part. Relating the common variables of both equations, it is possible to arrive at the transfer function whose output is the motor rotation and whose input is the armature voltage, shown in Eq. (11).

Fig. 5 General representation of a motor armature electrical circuit

Ωm(s) / Ea(s) = Kt / [La · J · s² + (La · c + J · Ra) · s + (Ra · c + Kf · Kt)] (11)

where Ωm(s) is the angular speed of the motor, Ea(s) is the armature voltage, J is the moment of inertia about the motor axis, c is the damping coefficient of the motor, Kt is the torque constant, Kf is the electromotive force constant, Ra is the armature resistance and La is the armature inductance.

2.3 Bionic Feet Model

In order to reproduce the timing and trajectory of the human stride with the prosthesis, it is necessary to design a control system that ensures this trajectory is followed. Combining the dynamic model obtained for the armature-controlled DC motor with the kinematic model of the prosthesis yields the nonlinear model used for tuning the control loop. The model was initially represented in Mathworks Simulink so that it could then be linearized. For the DC motor block, the input is a voltage value while the outputs are the speed and position of the ball screw nut. For the bionic foot kinematics block, the inputs are the position and speed of the screw nut and the output is the ankle angle, as shown in Fig. 6.

Fig. 6 Linearization of the plant with DC motor and foot kinematics

2.4 Linearization

The linearization was necessary to tune a controller, and the point at which it was done is the support phase, where speeds and positions are zero. In order to apply linear time-invariant control theory to tune the plant controllers, a linearized model of the system was obtained using the Simulink Linear Analysis toolbox, shown in Fig. 7. Equation (12) shows the transfer function of the plant after selecting a set of mechanical components for the prototype. This selection was carried out through an iterative process seeking to avoid motor saturation.

Fig. 7 Scheme of the Linear Analysis toolbox

G(s) = 18940 / (s³ + 1001 · s² + 131400 · s) (12)

Table 1 Controller gains

Gain | Symbol | Value
Proportional | kp | 2320
Derivative | kd | 11.5

2.5 Control System

The controller chosen was a series PD. This type of controller was chosen due to the need for a quick response to the large angle variations that occur in short intervals during the gait cycle. Derivative action, when combined with proportional action, anticipates the control action, so the process reacts faster. Since the open-loop plant has a pole at zero, there is no steady-state error and integral action is not necessary. To obtain the controller for the plant, the MATLAB pidtune function was used, which tunes a controller of a given type for a linear time-invariant plant. This function automatically tunes the PID gains to balance performance (response time) and robustness (stability margins). pidtune chooses a controller design that balances two measures of performance, reference tracking and disturbance rejection, both of which are important for human walking with prostheses. The gains for the controller are shown in Table 1. After designing the control system, the controller was coupled to the plant within Simulink to verify that the set point curve, a polynomial that interpolates the ankle angle curve shown in Fig. 1, was reached by the process variable. Figure 8a shows the ability of the process variable to follow the set point for a disturbance applied to the plant, while Fig. 8b shows the values of the manipulated variable (motor armature voltage) during the stride.
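Since pidtune is MATLAB-specific, the resulting loop can be reproduced and sanity-checked from the gains of Table 1 and the plant of Eq. (12); the sketch below assumes a unity-feedback loop with the PD in series, which matches the description but is not taken from the authors' Simulink model.

```python
import numpy as np
from scipy import signal

# Plant of Eq. (12): G(s) = 18940 / (s^3 + 1001 s^2 + 131400 s)
num_g = [18940.0]
den_g = [1.0, 1001.0, 131400.0, 0.0]

# Series PD controller with the gains of Table 1: C(s) = kd*s + kp
kp, kd = 2320.0, 11.5
num_c = [kd, kp]

# Unity-feedback closed loop: T(s) = C*G / (1 + C*G)
num_t = np.convolve(num_c, num_g)
den_t = np.polyadd(den_g, num_t)

t = np.linspace(0.0, 0.1, 2000)
_, y = signal.step((num_t, den_t), T=t)
# The integrator in G(s) makes the loop type 1: zero steady-state step error.
print(f"step response after 100 ms: {y[-1]:.3f}")
```

A quick check that all closed-loop poles have negative real parts confirms the gains of Table 1 stabilize the linearized plant.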

Fig. 8 a Rejection of disturbances from the controller, b Actuator response curve during control

3 Discussions and Conclusion

Considering that the selected DC motor supports a voltage of up to 24 V, it can be noted that in the interval between approximately 0.55 s and 0.7 s the motor supply voltage curve reaches its saturation value, since a large set point variation in a short time requires a fast motor response. The presence of the spring, not considered during the controller simulation, can also contribute to reducing the saturation, since the energy stored by it is released at the moment the voltage peak occurs. The operating speed of the designed controller allowed the ankle angle of the prosthesis to reach the values expected for the gait of a healthy individual during the gait cycle. In addition, the proposed controller was efficient in rejecting disturbances and therefore supports the selection of components for the initial prototype; however, an identification of the plant after the construction of this prototype can contribute to the implementation of a more efficient controller in future projects. So that the controller can distinguish the different phases of the stride and respond in an appropriate way to each situation, the implementation of a detailed sensing system is proposed.

Acknowledgements This work was partially funded by FAPES (Espírito Santo Research and Innovation Foundation), TO 207/2018, Project: 83276262.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Seidel AC, Nagata AK, Almeida HC, Bonomo M (2007) Epistemologia sobre amputações e desbridamentos de membros inferiores realizados no Hospital Universitário de Maringá. J Vasc Bras 7:308–315
2. Carvalho J (2003) Amputações de membros inferiores: em busca da plena reabilitação, 2
3. Spichler D, Miranda F, Spichler SE, Franco LJ (2003) Amputações maiores de membros inferiores por doença arterial periférica e diabetes melito no município do Rio de Janeiro. J Vasc Bras 3:111–122
4. Andrade RM, Bento-Filho A, Vimieiro CBS, Pinotti M (2018) Optimal design and torque control of an active magnetorheological prosthetic knee. Smart Mater Struct 27
5. Martinez-Villalpando EC, Herr H (2019) Agonist-antagonist active knee prosthesis: a preliminary study in level-ground walking. J Rehabil Res Dev
6. Xu L, Wang DH, Fu Q, Yuan G, Hu LZ (2016) A novel four-bar linkage prosthetic knee based on magnetorheological effect: principle, structure, simulation and control. Smart Mater Struct
7. Andrade RM, Bento-Filho A, Vimieiro CBS, Pinotti M (2019) Evaluating energy consumption of an active magnetorheological knee prosthesis. In: 2019 19th International Conference on Advanced Robotics (ICAR), pp 75–80
8. Grimmer M (2015) Powered lower limb prostheses. Doctoral Thesis, Technische Universität
9. Leal-Junior AG, Andrade RM, Bento-Filho A (2015) Linear serial elastic hydraulic actuator: digital prototyping and force control. IFAC-PapersOnLine 48:279–285
10. Leal-Junior AG, Andrade RM, Bento-Filho A (2016) Series elastic actuator: design, analysis and comparison. Recent Adv Robot Syst 1:203–234
11. Zhu J, Wang Q, Wang L (2014) On the design of a powered transtibial prosthesis with stiffness adaptable ankle and toe joints. IEEE Trans Industr Electron 61:4797–4807
12. Neumann DA (2002) Kinesiology of the musculoskeletal system: foundations for physical rehabilitation. Mosby
13. Andrade RM, Sapienza S, Bonato P (2019) Development of a "transparent operation mode" for a lower-limb exoskeleton designed for children with cerebral palsy. In: 2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR), pp 512–517
14. Thatte N, Geyer H (2016) Toward balance recovery with leg prostheses using neuromuscular model control. IEEE Trans Biomed Eng 63:904–913
15. Grimmer M, Holgate H, Ward J, Boehler A, Seyfarth A (2017) Feasibility study of transtibial amputee walking using a powered prosthetic foot. In: IEEE Int Conf Rehabil Robot, pp 1118–1123
16. Meriam JL, Kraige LG (2012) Engineering mechanics: dynamics. Wiley
17. Nise NS (2007) Control systems engineering. Wiley, New York

Development of a Parallel Robotic Body Weight Support for Human Gait Rehabilitation L. A. O. Rodrigues and R. S. Gonçalves

Abstract

Stroke is the leading cause of impairment and residual mobility problems in the world, but most victims are able to recover partially after rehabilitation treatments. This paper concerns the development of a 5 degrees-of-freedom parallel robotic structure designed to assist the human gait rehabilitation process, working as an active body weight support for treadmill or overground exercise sessions. The structure applies an assist-as-needed design to stimulate balance control, motor coordination and posture correction during the gait. The system is also integrated with an electronic game developed to enhance the experience of the patients while performing the training sessions. The mathematical model of the structure and the first control tests are presented, while the ongoing research is performing the first experimental tests and clinical trials. Keywords

Active body weight support • Human gait rehabilitation • Parallel manipulator • Stroke

1 Introduction

Stroke is defined as the loss of one or more neurological functions due to the interruption of blood flow in a certain region of the brain [1]. It is a leading cause of serious long-term disability in the world [2,3]. Studies have shown that stroke is the most frequent cause of death in the Brazilian population, responsible for around 10% of deaths and hospitalizations, and ranked fourth in Latin America. In the US, a person

L. A. O. Rodrigues (B) · R. S. Gonçalves Faculty of Mechanical Engineering, Federal University of Uberlândia, Joao Naves de Avila, 2121, Uberlândia, Brazil e-mail: [email protected]

suffers a stroke every 40 s, and one out of every 20 reported cases results in death [2,4,5]. In Physiotherapy and Medicine, the rehabilitation process concerns the application of protocols designed for the treatment of chronic diseases, impairments and residual symptoms of neurological causes. This science is dedicated to designing a dynamic process of specific training sessions in order to promote global functional gains for the patients [6]. The rehabilitation process is applied through protocols developed by health professionals, physiotherapists and occupational therapists, and is potentially enhanced by integration with robotic tools [7,8]. Robotics is applied in this area in a wide variety of processes, including active-passive motion training, assisted microsurgery, and compensation of residual capabilities, concerning functional gains in post-stroke victims [9]. Results obtained with the rehabilitation protocols have a significant impact on the quality of life of the patients [10]. The treatments often provide the recovery of functions that promote functional independence, which may lead to improvements in self-esteem, joy of living, and overcoming difficulties. Clinical evidence shows that robot-assisted rehabilitation can reduce labor costs, expand the range of training exercises and help chronic patients to maintain the mobility of the impaired limbs [11]. However, robotic rehabilitation tools still need further research, especially concerning human gait training. For example, the application of robotic exoskeletons over treadmills with passive body weight support (BWS) has a systematic influence on the pelvis and thorax movements during the gait, which can influence the patient's balance and gait stability [12,13]. Another study also shows that the perspective on these rehabilitation structures is being reviewed in order to consider their influence on gait stability and neuromechanical aspects [14].
Based on the data presented above, this paper presents the most recent results of the research on the development of a 5 degrees-of-freedom (DOF) parallel robotic manipulator designed as an

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_91


active BWS to assist treadmill and overground exercises of post-stroke patients during gait training rehabilitation sessions. A picture of the built prototype is presented in Fig. 1. The main goal of this structure is to provide both body weight suppression and active gait balance assistance when needed. We begin this study by presenting a brief review of the existing active BWS. Then we define the mathematical model and the CAD/CAE model applied for computational simulations, followed by the control methodology selected for the actuators. Finally, the first prototype is presented.

2 Review of Active BWS Structures

We started the development of the novel structure by consulting the state of the art of active body weight support devices in the literature. Some of the main contributions are highlighted below. The pelvic movements and their influence on the gait were the main object of the research of reference [15]. In this study, a robotic structure designed with two parallel robotic arms is connected to the patient, monitoring and assisting its movements during gait exercises over a treadmill. The importance of actuating on 5 of the 6 DOF of the pelvic joint is also presented in this study, where only the active actuation on the sagittal rotation showed no significant evidence of influencing the gait rehabilitation process. The MIT-Skywalker [16] is a rehabilitation system based on two basculating treadmills actuated by linear cams under the patient's feet, while supporting the body weight with a passive suspension system combined with a bicycle seat. This system applies balance training through rotations in the frontal plane of the treadmills. The treadmill movements are controlled by an image tracking system based on a high-speed camera and infrared markers [17]. The novel structure proposed in this study was originally designed to replace the treadmill rotations and reconsider the constraint of the pelvic joint in a similar rehabilitation system called HOPE-G [18,19]. The exoskeleton named LOPES [20] is another example of a robotic structure with an active BWS module. It actuates on the pelvic joint with a 2 DOF structure and on the lower limbs with two serial arms, featuring two actuators for the hip and one for each knee. Impedance control is applied in all actuators, allowing a bidirectional interaction between the structure and the patient. Thus, the trajectory to be followed can be commanded by the robot or by the patient. This system also applies integration with electronic games to stimulate the patients during the training sessions.

Another structure is the "Robotic Gait Rehabilitation" (RGR) Trainer [21], which consists of a rehabilitation system over a treadmill with an active BWS, designed for the treatment of secondary gait deviations in post-stroke patients. The structure is connected to the pelvic joint of the patient via an orthopedic brace, and promotes gait corrections with electromagnetic actuators whenever needed during the gait training sessions. Therefore, based on these and other studies, we obtained the required contextualization for the development of the proposed structure, deciding on a 5 DOF parallel manipulator as recommended in reference [15], integrating an assist-as-needed (AAN) control methodology and an electronic game to stimulate the patients during the execution of the training sessions.

3 Modeling of the Structure

After the contextualization with the existing structures presented in the previous section, we defined the mathematical model of the proposed device. We opted for a parallel robotic manipulator in order to keep the system feasible for low-power actuators and avoid high costs in the project. The main challenge in dealing with parallel structures is the high vulnerability to mobility coupling and singularities in the design, which leads to workspace limitations for platforms with specific DOFs [22]. Thus, we opted to work with a mechanism of the "multipteron" family, which consists of a collection of configurations for parallel manipulators with a low coupling level, facilitating the modeling process [23]. We defined that the structure should be composed of a mobile platform, to which the patient is fixed by means of a saddle and a seat belt, and a fixed platform connected to the mobile one through a set of bars and moved by linear actuators. This scheme is illustrated in Fig. 1. In order to simplify the design process of the structure, a method for parallel structures that minimizes joint coordinate coupling is required. Thus, the body weight support was first idealized as a 4-PRRU+PRRS parallel manipulator. We obtained the inverse geometric model of the module using equipollent coordinates [24] with the reference frames, where X0Y0Z0 represents the inertial reference. The frames were placed in order to align each joint coordinate qi with one of the unit vectors of each XiYiZi frame, see Fig. 1. The obtained inverse geometric model for the structure is described in Eqs. (1)–(5), where the vectors, angles and parameters are represented in Fig. 1.

q1 = ⁰rpx + dx/2 + rA · cos φ (1)

q2 = ⁰rpz + rB · cos θ (2)

q3 = ⁰rpy + dy/2 + rC · cos(θ + ψ) (3)

Fig. 1 a First prototype of the parallel manipulator, built at the Laboratory of Automation and Robotics (LAR) of the Federal University of Uberlândia; b Schematic drawing of the reference adopted for the structure modeling; c Reference adopted for the mobile platform

q4 = ⁰rpz + rD · cos(θ + φ) (4)

q5 = ⁰rpx − rE · cos(θ + ϕ) (5)

The complete procedure to obtain the equations of the mathematical model can be found in [19].
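Equations (1)–(5) can be evaluated directly once the platform pose is known. In the sketch below, the plate offsets dx, dy and the lever arms rA…rE are illustrative placeholders, not the built structure's dimensions.

```python
import numpy as np

def inverse_geometric_model(r_p, theta, phi, psi, varphi,
                            dx=0.40, dy=0.30,
                            rA=0.10, rB=0.10, rC=0.10, rD=0.10, rE=0.10):
    """Eqs. (1)-(5): joint coordinates q1..q5 from the mobile-platform pose.
    r_p = (r_px, r_py, r_pz) is the platform position in the inertial frame 0;
    theta, phi, psi, varphi are the orientation angles of Fig. 1.
    All lengths are illustrative placeholders."""
    r_px, r_py, r_pz = r_p
    q1 = r_px + dx / 2 + rA * np.cos(phi)
    q2 = r_pz + rB * np.cos(theta)
    q3 = r_py + dy / 2 + rC * np.cos(theta + psi)
    q4 = r_pz + rD * np.cos(theta + phi)
    q5 = r_px - rE * np.cos(theta + varphi)
    return np.array([q1, q2, q3, q4, q5])
```

Because the model is already in inverse form, no iterative solver is needed: each actuator coordinate follows from the desired pose in closed form, which is the low-coupling advantage of the multipteron configuration noted above.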

With the obtained model, we established the limit amplitudes for each actuator, and the mobility requirements based on the recommendations of [15]. The designed amplitudes are presented in Table 1.

Table 1 Amplitudes for each degree-of-freedom of the structure

Degree-of-freedom | Direction | Amplitude
Translation | Frontal | ±200 mm
Translation | Lateral | ±100 mm
Translation | Vertical | ±150 mm
Rotation | Transversal | ±25°
Rotation | Coronal | ±5°

Next, we obtained the dynamic model of the structure, utilizing the geometric model presented previously and considering a maximum load of 150 kg from the patient body weight [25]. The transversal sections of each bar of the set were calculated applying a differential evolution algorithm, obtaining values already optimized for the rehabilitation purposes [19,26]. Thus, with all dimensions obtained, we constructed the CAD/CAE model of the structure and performed computational simulations in order to test the platform mobility. A representation of the five basic movements obtained in the kinematic simulations is presented in Fig. 2. With the simulation of the CAD/CAE model, the feasibility of the structure was verified in accordance with the previously defined requirements, letting us advance to the construction of the prototype and the experimental phase of the research.
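Differential evolution is used in this project both for the bar sizing above and, later, for tuning the actuator PID gains under the stated control requirements. The sketch below illustrates the latter style of constrained tuning with SciPy's implementation; the plant, gain bounds and penalty weights are illustrative stand-ins, not the identified bench model.

```python
import numpy as np
from scipy import signal
from scipy.optimize import differential_evolution

# Illustrative position plant standing in for one bench actuator (NOT the
# identified model): G(s) = 1 / (s^2 + 2 s).
PLANT_NUM, PLANT_DEN = [1.0], [1.0, 2.0, 0.0]

def cost(gains):
    """Penalize overshoot, 2% settling time and steady-state error of the
    unity-feedback loop with a series PD controller C(s) = kd*s + kp,
    mirroring the requirements stated for the bench (no overshoot,
    settling under 400 ms, error below 5%)."""
    kp, kd = gains
    num = np.convolve([kd, kp], PLANT_NUM)       # closed-loop numerator
    den = np.polyadd(PLANT_DEN, num)             # closed-loop denominator
    t = np.linspace(0.0, 1.0, 2001)
    _, y = signal.step((num, den), T=t)
    overshoot = max(float(y.max()) - 1.0, 0.0)
    steady_err = abs(float(y[-1]) - 1.0)
    outside = np.where(np.abs(y - 1.0) > 0.02)[0]  # samples outside 2% band
    t_settle = t[outside[-1]] if outside.size else 0.0
    return t_settle + 100.0 * overshoot + (10.0 if steady_err > 0.05 else 0.0)

result = differential_evolution(cost, bounds=[(1.0, 200.0), (0.1, 50.0)],
                                seed=1, maxiter=40, tol=1e-4)
kp_opt, kd_opt = result.x
print(f"kp = {kp_opt:.1f}, kd = {kd_opt:.2f}, cost = {result.fun:.3f}")
```

Encoding the requirements as penalties lets the same optimizer absorb unmodelled factors: any candidate violating a constraint is simply dominated by feasible ones.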

4 Experimental Tests

The first experimental procedure performed in the project was the construction of an experimental bench to design a control system for the actuators. The bench was composed of a linear trail, a crank-rocker oscillating bar linkage, and a 24 V DC motor integrated with a 500-pulse rotary encoder. This system was controlled using an Arduino Mega 2560 microcontroller and a VNH2SP30 high-current motor driver shield. The scheme of the projected actuator is presented in Fig. 3, and the mathematical model relating the angular input θi of the motor to the linear displacement output qi is presented in Eq. (6).

θi = cos⁻¹[(r3i² − r2i² − qi²) / (−2 · r2i · qi)] (6)

A position control design was implemented on the experimental bench based on a PID network. The parameters were tuned applying an iterative method running another version of the differential evolution algorithm [19,26], where the control requirements are set as the optimization constraints. We defined these requirements so as to allow no overshoot, to keep the settling time under 400 ms and the error below 5% of the maximum displacement. This tuning method was useful to overcome unmodelled factors and construction deviations in the experimental bench.
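Equation (6) follows from the law of cosines in the crank-rocker triangle (r3² = r2² + q² − 2·r2·q·cos θ). A direct evaluation is sketched below; the link lengths are placeholder values, not the bench's.

```python
import math

def crank_angle(q_i, r2=0.05, r3=0.12):
    """Eq. (6): motor angle theta_i [rad] for a cursor displacement q_i [m],
    obtained from the law of cosines of the crank-rocker triangle.
    Link lengths r2 (crank) and r3 (rocker) are illustrative, not the bench's.
    Valid for |r3 - r2| < q_i < r2 + r3."""
    return math.acos((r3**2 - r2**2 - q_i**2) / (-2.0 * r2 * q_i))
```

When q_i² = r3² − r2² the crank is perpendicular to the trail (θ = π/2), a convenient check of the sign convention in Eq. (6).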

Fig. 2 Sequence of images extracted from the kinematic simulation of the CAD/CAE model of the structure, demonstrating the basic movements of the mobile platform

91 Development of a Parallel Robotic Body Weight Support for Human Gait Rehabilitation


Fig. 3 a CAD/CAE model of the actuator designed for the structure, b schematic drawing of the actuator applied for reference, c experimental bench built at the Robotics and Automation Laboratory of the Federal University of Uberlândia

After defining the PID, the control design also included an AAN function [27] to allow the actuator to assist the execution of the movements only when necessary, stimulating the patients to maintain gait balance on their own. In this control technique, a filter is applied between the PID and the physical system. This filter creates an adjustable "dead-zone" that inhibits the action of the actuator while the calculated error is low. Thus, the system is passive when the patient is able to keep the trajectory alone, and active whenever the error reaches a defined threshold. The transition between the passive and active states of the actuator is smooth and occurs in real time. The AAN function applied on the experimental bench is presented in Eq. 7, where u is the control signal output of the PID network, uc is the compensated AAN signal output, Kp is the proportional gain of the PID network and γ is the correction factor that adjusts the controller dead-zone:

uc = Kp tanh( γ u³ / Kp )    (7)
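Eq. 7 can be sketched in a few lines. The cubic term flattens the response around zero, creating the adjustable dead-zone: small PID outputs u are suppressed (passive system), while large ones saturate smoothly toward ±Kp (active assistance). The Kp and γ values here are illustrative, not the tuned bench parameters.

```python
import math

# Sketch of the AAN compensation (Eq. 7): uc = Kp * tanh(gamma * u^3 / Kp).
# kp and gamma are assumed example values.
def aan_output(u, kp=2.0, gamma=50.0):
    return kp * math.tanh(gamma * u**3 / kp)
```

Increasing γ narrows the dead-zone (assistance engages at smaller errors); decreasing it widens the passive region, which is exactly the knob the game later adjusts.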

With the implementation of this control technique, the cursor follows the trajectory directly when unloaded, but allows deviations from the final position if forced forward or backward, slowly increasing the force feedback towards the trajectory. Thus, the system allows the patient to override the trajectory, providing assistance only when he/she is not able to maintain gait stability. After the control system was tested on the experimental bench, the complete prototype was built and the same procedure was applied to all actuators. The structure is therefore ready for the first mobility tests, followed by a first clinical trial.

5 Game Integration

The game designed to work with the BWS structure is being developed in Unity® 3D, following the main guidelines of serious games applied to human gait rehabilitation [28]. The game was named “SpaceWalker” and is based on famous mobile endless-running titles such as “Temple Run®” and “Subway Surfers®”. All inputs of the game are generated by the BWS micro-controller and sent to the PC running the game through USB serial communication. The structure will also receive commands from the game, such as start/stop commands, position corrections and specific platform movements. The main goal of the game is to control a robot character on a three-lane platform, avoiding collisions with obstacles along the way. The character moves forward continuously in the same direction as the treadmill, and changes lanes based on the position of the player as measured by the encoders on the BWS (θi). The main position of the character is determined by the position of actuator 3 (θ3), while the other DOFs are used to track and assist the gait motion patterns. Therefore, as the player displaces horizontally during the gait exercises, the main character replicates his/her position. The game will also generate commands for controlled roll and yaw displacements of the mobile platform in order to train the player’s balance. These movements are designed to stimulate the patients to compensate for balance changes in a controlled environment. The AAN algorithm mentioned in Sect. 4 will also be controlled by the game. Initially, the player will start the game with the AAN disabled; then, based on the player’s collisions with the obstacles on the path, the game will send commands to the micro-controller to gradually reduce the “dead-zone” of the actuators, as long as the player keeps the correct configuration to bypass the next obstacles.
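The bidirectional serial exchange could be framed as simple line-based messages. The paper does not specify the actual protocol; the command names and the 'name:value' format below are assumptions for illustration only.

```python
# Hypothetical framing for the game <-> micro-controller USB serial link.
# Command names ('START', 'STOP', 'DEADZONE') and the encoder packet
# format ('T3:512') are illustrative, not the project's actual protocol.
def encode_command(cmd, value=0):
    """Frame a game command as one ASCII line for the micro-controller."""
    return f"{cmd}:{value}\n".encode("ascii")

def decode_encoder_packet(line):
    """Parse a hypothetical encoder packet such as b'T3:512' into (name, pulses)."""
    name, _, count = line.decode("ascii").strip().partition(":")
    return name, int(count)
```

A line-oriented ASCII framing like this keeps both sides trivial to debug over a serial monitor, at the cost of a few bytes per message.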

6 Conclusion

This study presented the progress towards the development of a novel parallel robotic structure, designed to work as an active BWS for human gait rehabilitation training sessions of post-stroke patients. The structure will allow assistance to the movements of the pelvic articulation, in order to reduce the influence of the robotic structure on the balance and gait stability of the patients. The device was modeled following a recent methodology for parallel manipulators, applying an architecture with low coupling between the DOFs, allowing a better load distribution between the actuators and simplifying the operation and construction of the system. The system was simulated with a CAD/CAE model and, with the obtained data, we were able to build an experimental bench and start the experimental tests. The control of the actuators applied a heuristic method for optimization and parameter tuning, and also featured a recent control technique that allows the robotic system to minimize its influence over the natural recovery process of the patients, applying active assistance only when needed. Based on the obtained results, the complete prototype was built; it is currently undergoing movement tests of the mobile platform and being integrated with an electronic game developed specifically to enhance the experience during the rehabilitation training sessions. After concluding this phase, the structure will be submitted to the first clinical trials.

Acknowledgements The authors thank UFU, CAPES (grant No. 001), FAPEMIG and CNPq for the partial financial support to this work.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Baggi RPA, Rodrigues EP, Caramêz R (2014) Acidente Vascular Encefálico. Revista Unilus Ensino e Pesquisa 11:88–89
2. Benjamin EJ, Muntner P, Alonso A et al (2019) Heart disease and stroke statistics—2019 update: a report from the American Heart Association. Circulation 139:e56–e528
3. Lavados PM, Hennis AJ, Fernandes JG et al (2007) Stroke epidemiology, prevention, and management strategies at a regional level: Latin America and the Caribbean. Lancet Neurol 6:362–372
4. Lackland D (2017) Heart disease and stroke statistics—2017 update: a report from the American Heart Association. Tech rep
5. Brasil (2017) Acidente Vascular Cerebral—AVC
6. Barbosa AM, Carvalho JCM, Gonçalves RS (2018) Cable-driven lower limb rehabilitation robot. J Braz Soc Mech Sci Eng 40
7. Diaz I, Gil JJ, Sanchez E (2011) Lower-limb robotic rehabilitation: literature review and challenges. J Robot 2011
8. Claflin ES, Krishnan C, Khot SP (2015) Emerging treatments for motor rehabilitation after stroke. The Neurohospitalist 5:77–88
9. Alves T, Gonçalves RS (2020) Predictive equation for a circular trajectory period in a cable-driven robot for rehabilitation. J Braz Soc Mech Sci Eng 42:279
10. Camargos ACR, Lacerda TB, Taise VB, Silva GC, Parreiras JT, Vidal THJ (2012) Relationship between functional independence and quality of life in cerebral palsy. Fisioterapia em Movimento 25:83–92
11. Lum P, Reinkensmeyer D, Mahoney R, Rymer WZ, Burgar C (2002) Robotic devices for movement therapy after stroke: current status and challenges to clinical acceptance. Top Stroke Rehabil 8:40–53
12. Swinnen E, Baeyens JP, Knaepen K et al (2015) Walking with robot assistance: the influence of body weight support on the trunk and pelvis kinematics. Disabil Rehabil Assist Technol 10:252–257
13. Swinnen E, Baeyens JP, Hens G et al (2015) Body weight support during robot-assisted walking: influence on the trunk and pelvis kinematics. NeuroRehabilitation
14. Morone G, Paolucci S, Cherubini A et al (2017) Robot-assisted gait training for stroke patients: current state of the art and perspectives of robotics. Neuropsychiatr Dis Treat 13:1303–1311
15. Roberts M (2004) A robot for gait rehabilitation
16. Susko T, Swaminathan K, Krebs HI (2016) MIT-Skywalker: a novel gait neurorehabilitation robot for stroke and cerebral palsy. IEEE Trans Neural Syst Rehabil Eng 24:1089–1099
17. Gonçalves RS, Hamilton T, Krebs HI (2017) MIT-Skywalker: on the use of a markerless system. In: IEEE international conference on rehabilitation robotics, pp 205–210
18. Gonçalves RS, Hamilton T, Daher AR, Hirai H, Hermano I (2017) MIT-Skywalker: considerations on the design of a body weight support system. J NeuroEng Rehabil 14:1–12
19. Gonçalves RS, Rodrigues LAO (2019) Development of a novel parallel structure for gait rehabilitation. In: Handbook of research on advanced mechatronic systems and intelligent robotics, vol 4, pp 42–81
20. Veneman JF, Kruidhof R, Hekman EEG, Ekkelenkamp R, Van Asseldonk EHF, Van Der Kooij H (2007) Design and evaluation of the LOPES exoskeleton robot for interactive gait rehabilitation. IEEE Trans Neural Syst Rehabil Eng 15:379–386
21. Pietrusinski M, Cajigas I, Mizikacioglu Y, Goldsmith M, Bonato P, Mavroidis C (2010) Gait rehabilitation therapy using robot generated force fields applied at the pelvis. In: 2010 IEEE Haptics symposium, HAPTICS 2010, pp 401–407
22. Motevalli B, Zohoor H, Sohrabpour S (2010) Structural synthesis of 5 DoFs 3T2R parallel manipulators with prismatic actuators on the base. Robot Auton Syst 58:307–321
23. Gosselin CM, Masouleh MT, Duchaine V, Richard PL, Foucault S, Kong X (2007) Parallel mechanisms of the multipteron family: kinematic architectures and benchmarking. In: Proceedings 2007 IEEE international conference on robotics and automation. IEEE, pp 555–560
24. Selig JM (2000) Geometrical foundations of robotics. World Scientific
25. IBGE—Instituto Brasileiro de Geografia e Estatística (2010) Antropometria e Estado Nutricional no Brasil 2008–2009
26. Gonçalves RS, Carvalho JCM, Lobato FS (2016) Design of a robotic device actuated by cables for human lower limb rehabilitation using self-adaptive differential evolution and robust optimization. Biosci J 1689–1702
27. Asl HJ, Narikiyo T, Kawanishi M (2019) An assist-as-needed control scheme for robot-assisted rehabilitation. In: Proceedings of the American control conference, pp 198–203
28. Ma M, Bechkoum K (2008) Serious games for movement therapy after stroke. In: 2008 IEEE international conference on systems, man and cybernetics. IEEE, pp 1872–1877

Comparison Between the Passé and Coupé Positions in the Single Leg Turn Movement in a Brazilian Zouk Practitioner: A Pilot Study A. C. Navarro, A. P. Xavier, J. C. Albarello, C. P. Guimarães, and L. L. Menegaldo

Abstract

There are different types of dance turns, such as pirouettes, fouettés, and single leg turns. Moreover, these turns can be performed with different body positions and combinations. This study therefore aimed to compare the displacement of the center of mass (CoM) of the body, head and thigh, and the knee flexion of the gesture leg, between the coupé and passé leg positions during the Single Leg Turn of the Brazilian Zouk, in which the follower dancer performs a rotation along her longitudinal axis conducted by the leader dancer. The Single Leg Turn is an important movement in competitions, and the location of the CoM is essential for maintaining the dynamic balance of the turn. The movement of a follower dancer was recorded by 18 cameras (Prime 13, 240 Hz), OptiTrack system, with Motive software (version 1.10.0 Beta 1) for acquiring and digitizing the trajectories of reflective markers. Data were processed using Visual 3D software (version 5.02.11) and the average of the attempts was analyzed. The evidence found suggests the occurrence of adaptive changes to maintain dynamic balance in the Single Leg Turn, in relation to the displacement of the CoM of the entire body in the mediolateral axis and the flexion of the right knee in the coupé and passé positions, which can be linked to the pattern of the dancer herself or to the conduction by the partner.

Keywords

Dance • Brazilian Zouk • Biomechanics • Single leg turn • Kinematics

A. C. Navarro (&) • L. L. Menegaldo
Federal University of Rio de Janeiro/Biomedical Engineering Program, Rio de Janeiro, Brazil
e-mail: [email protected]
A. P. Xavier • C. P. Guimarães
National Institute of Technology, Rio de Janeiro, Brazil
J. C. Albarello
Muscular Biomechanics Laboratory, EEFD/UFRJ, Rio de Janeiro, Brazil

1 Introduction

Dance modalities have techniques that require different physical capacities from the dancer, such as flexibility, agility, strength, balance, and coordination. In general, the specificity of the movements is linked to what Helenita Sá Earp, a pioneer in conducting dance studies within Brazilian universities, called Dance Parameters, in the Theory of Principles and Open Connections in Dance. They are: Movement, Space, Form, Dynamics, and Time [1]. These elements can be observed in Brazilian Zouk. Originating from Lambada, Brazilian Zouk is a ballroom dance that has been growing for the past 20 years. This growth is evidenced by the creation of the Brazilian Zouk Dance Council (BZDC), responsible for regulating and monitoring congresses and competitions, in addition to ranking dance participants internationally. In competitions, there is an increasing demand for dancers to undergo intense training, so that they are able to perform all elements with high precision. Besides expressive technique, the intensity and complexity of the combinations of movements, linked to the different conditions of support, balance and postural control, must be mastered for better results. The literature demonstrates that the motor coordination of dancers is more efficient in tasks related to their own modality [2].

Currently, in ballroom dancing, the terms “lady” and “gentleman” have been replaced by “follower” and “leader”, respectively, since dancing in each of these positions has ceased to be a gender-related issue. In the Single Leg Turn (SLT), the follower performs a rotation movement along her longitudinal axis, generally supported on the metatarsal of one of the feet (demi pointé) to reduce friction [3]. This movement is propelled by the leader, who holds one of the follower’s hands. The leader makes a circle around the head of the follower, who in turn must keep the arm and trunk muscles active so that the force is transmitted to her body and the rotation takes place (Fig. 1). It is an important movement from the competitive point of view, and it is present in different types of ballroom dancing. The body position assumed by the follower during the movement varies according to the desired aesthetic; thus, the positions of the lower limbs may vary (e.g. attitude, passé, coupé, tendu), as well as the position of the head (e.g. looking up, diagonal low left, “spotting” on the leader or the audience) and the spine (e.g. lateral flexion, hyperextension). There is thus no single position to be maintained by the person being conducted. The focus of the study on Brazilian Zouk practitioners stems from the context in which the research is carried out, since Rio de Janeiro is the main stage for the creation of this dance modality. In this way, the research is intertwined with the context of the city.

In sports, digital technology has acted together with biomechanics, contributing to the improvement of training [4]. However, much of the knowledge in the field of dance is empirical. Thus, the scientific literature requires studies on this problem, so that the teaching and training of dance movements are not based exclusively on personal experiences. Hence, this research is an experimental biomechanical study of the movement known as the SLT, applying such methodologies to Dance. The displacement of the center of mass (CoM) is an important variable for maintaining dynamic balance in the SLT. Because of this, it is relevant to investigate whether the positioning of the gesture leg can influence the displacement of the center of mass. Therefore, the objective of this study was to verify the influence of the position of the gesture leg on the displacement of the CoM of the follower during the SLT movement. In this sense, the displacements of the CoM of the whole body, head and thigh were compared, as well as the flexion of the knee of the gesture leg, between the coupé and passé positions in a Brazilian Zouk follower. It is hoped that this study will contribute to a better understanding of the movement, helping in the training of professional dancers, dance technicians and practitioners, as well as in the improvement of teaching methodologies for its realization.

Fig. 1 Single Leg Turn in passé position. Source Image acquired at the Movement Studies Center (CEMOV)

© Springer Nature Switzerland AG 2022
T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_92

2 Material and Methods

The pilot project (CAAE: 31957420.6.0000.5257) was carried out with a professional Brazilian Zouk follower (32 years old, 1.65 m tall, 56.4 kg total body mass) with 9 years of dance practice and 5 years of practicing Brazilian Zouk with her current dance partner. According to the BZDC, the dancer was considered a professional for having won more than 40 points in the advanced category in competitions [5]. Data collection was carried out at the Movement Studies Center (CEMOV), located in Vila Olímpica da Mangueira, in partnership with the National Institute of Technology and the Mangueira do Futuro Institute. Participants signed an informed consent form prior to participating in the study; this was prepared in accordance with the Declaration of Helsinki and resolution 466/12 of the National Health Council (2012).

The follower was instrumented with 57 reflective markers placed at specific anatomical points, following the Biomech marker set. She was accompanied by her dance partner (leader) and, after the specific warm-up used by the pair in their rehearsal routine, the SLT movement was performed in two lower limb positions, namely coupé and passé in parallel rotation (Fig. 2). In both positions, the neck was rotated towards the left shoulder and slightly flexed, so that the follower’s gaze was projected to the ground. The leader propelled the movement counterclockwise. In relation to the body of the follower, the movement happened with unipodal support on the left limb, rotating on the vertical axis from right to left, characterizing this turn as an en dedans movement. The knee of the follower’s support leg was slightly flexed.

The turning movements were recorded by 18 cameras (Prime 13, 240 Hz), using the OptiTrack system and Motive software (version 1.10.0 Beta 1) for data acquisition and digitization. After digitization, the data were processed using Visual 3D software (version 5.02.11). The x, y and z coordinates were considered for the anteroposterior, vertical, and mediolateral axes, respectively. The dancers performed the turns in the most natural way possible, without controlling the number of turns or the speed of movement over a specific area demarcated for the movement to take place. For analysis, an SLT movement was selected, consisting of six complete turns (2,160°) performed in sequence. However, it is common that during the first complete lap (first 360°) adjustment movements occur in the amplitude of the upper and lower limbs of the follower and the leader (initial phase of the turn); thus, this first lap was discarded and the next five laps (swing maintenance phase) were analyzed, since they have different characteristics. The attempt selected for analysis was chosen based on its execution during the completion phase of the movement. In this sense, attempts in which the dancers were interrupted by loss of balance were not considered, since an important criterion of the competitions is that they correctly perform each phase of the movement. The average of the 5 laps was calculated for the values related to the duration of the movements, the greatest and smallest displacement of the CoM of the body, head and right leg, and the angles of the right knee (gesture leg).

Fig. 2 Turning with the head marking downwards, in the different positions of the legs, coupé (a) and passé (b). Source Image acquired at the Movement Studies Center (CEMOV)
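The per-lap averaging described above can be sketched numerically. The data layout below (an N×3 CoM trajectory with x/y/z columns, and frame indices delimiting the five laps) is an assumption for illustration, and the trajectory is synthetic, not the study's data.

```python
import numpy as np

# Sketch of the per-lap statistics behind Table 1: for each lap, take the
# greatest and smallest CoM displacement per axis, then report
# mean +/- SD across the five laps.
def lap_extremes(com, bounds):
    maxs = np.array([com[s:e].max(axis=0) for s, e in zip(bounds[:-1], bounds[1:])])
    mins = np.array([com[s:e].min(axis=0) for s, e in zip(bounds[:-1], bounds[1:])])
    return (maxs.mean(axis=0), maxs.std(axis=0)), (mins.mean(axis=0), mins.std(axis=0))

rng = np.random.default_rng(0)
com = np.cumsum(rng.normal(0.0, 1e-3, size=(600, 3)), axis=0)  # synthetic trajectory
bounds = [0, 120, 240, 360, 480, 600]                          # five laps (assumed)
(max_mean, max_sd), (min_mean, min_sd) = lap_extremes(com, bounds)
```

The same function applies unchanged to the head and right-thigh CoM trajectories and, with a 1-column input, to the knee-angle series.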

3 Results

The average displacement values for each variable can be seen in Table 1. Negative values on the anteroposterior axis indicate that the displacement occurred behind the demarcated location, while positive values indicate that it occurred in front of the demarcated location. In the same sense, on the mediolateral axis, negative values indicate that the follower shifted to the left and positive values to the right. Regarding the mean of the greatest displacement of the CoM of the body, values of −1.45 ± 0.02 m in the coupé position and −1.45 ± 0.01 m in the passé position were observed on the x axis. On the y axis, the values observed for this variable were 1.05 ± 0.01 m and 1.07 ± 0.01 m for the coupé and passé positions, respectively, while on the z axis 0.09 ± 0.02 m was observed for coupé and −0.02 ± 0.02 m for passé. The mean of the smallest displacement of the CoM of the body in coupé was −1.49 ± 0.02 m on the x axis, 1.03 ± 0.01 m on the y axis and 0.06 ± 0.02 m on the z axis, while for passé the values observed were −1.50 ± 0.02 m, 1.05 ± 0.01 m and −0.07 ± 0.02 m for the x, y and z axes, respectively. The greatest displacement of the CoM of the head in the coupé position was −0.85 ± 1.28 m on the x axis, 1.55 ± 0.01 m on the y axis and 0.11 ± 0.01 m on the z axis, while in the passé position the values observed for the x, y and z axes were, respectively, −1.44 ± 0.02 m, 1.54 ± 0.01 m and 0.01 ± 0.02 m. The smallest displacements of the CoM of the head were −1.57 ± 0.03 m on the x axis, 1.53 ± 0.02 m on the y axis and −0.02 ± 0.03 m on the z axis for the coupé position. In passé, the values found were −1.56 ± 0.01 m on the x axis, 1.53 ± 0.01 m on the y axis and −0.11 ± 0.03 m on the z axis.

The greatest displacements of the CoM of the right thigh (gesture leg) in coupé were −1.33 ± 0.03 m on the x axis, 1.54 ± 0.01 m on the y axis and 0.28 ± 0.01 m on the z axis, while for the passé position the averages found were −1.36 ± 0.02 m on the x axis, 1.52 ± 0.01 m on the y axis and 0.15 ± 0.02 m on the z axis. Finally, the smallest displacement of the CoM of the right thigh was −1.71 ± 0.03 m on the x axis, 1.43 ± 0.03 m on the y axis and −0.15 ± 0.03 m on the z axis in coupé, and −1.69 ± 0.02 m, 1.44 ± 0.01 m and −0.20 ± 0.03 m on the x, y and z axes, respectively, in passé.


Table 1 Mean and standard deviation of the displacement of the center of mass of the body, head and right thigh for the Single Leg Turn in the coupé and passé positions

Variable                                             Axis                  Coupé (m)      Passé (m)
Greater displacement of the CoM of the body          Anteroposterior (x)   −1.45 ± 0.02   −1.45 ± 0.01
                                                     Vertical (y)           1.05 ± 0.01    1.07 ± 0.01
                                                     Mediolateral (z)       0.09 ± 0.02   −0.02 ± 0.02
Smallest displacement of the CoM of the body         Anteroposterior (x)   −1.49 ± 0.02   −1.50 ± 0.02
                                                     Vertical (y)           1.03 ± 0.01    1.05 ± 0.01
                                                     Mediolateral (z)       0.06 ± 0.02   −0.07 ± 0.02
Greater displacement of the CoM of the head          Anteroposterior (x)   −0.85 ± 1.28   −1.44 ± 0.02
                                                     Vertical (y)           1.55 ± 0.01    1.54 ± 0.01
                                                     Mediolateral (z)       0.11 ± 0.01    0.01 ± 0.02
Smallest displacement of the CoM of the head         Anteroposterior (x)   −1.57 ± 0.03   −1.56 ± 0.01
                                                     Vertical (y)           1.53 ± 0.02    1.53 ± 0.01
                                                     Mediolateral (z)      −0.02 ± 0.03   −0.11 ± 0.03
Greater displacement of the CoM of the right thigh   Anteroposterior (x)   −1.33 ± 0.03   −1.36 ± 0.02
                                                     Vertical (y)           1.54 ± 0.01    1.52 ± 0.01
                                                     Mediolateral (z)       0.28 ± 0.01    0.15 ± 0.02
Smallest displacement of the CoM of the right thigh  Anteroposterior (x)   −1.71 ± 0.03   −1.69 ± 0.02
                                                     Vertical (y)           1.43 ± 0.03    1.44 ± 0.01
                                                     Mediolateral (z)      −0.15 ± 0.03   −0.20 ± 0.03
Table 2 shows the averages of the highest angular values for flexion and extension of the right knee (gesture leg) and the average duration of each turn of the movement. Regarding the highest degree of knee flexion, a value of 74.22 ± 3.80° was found in the coupé position and 107.46 ± 0.60° in the passé position. The highest degree of knee extension of the gesture leg was 50.67 ± 8.20° and 110.07 ± 0.88° for the coupé and passé positions, respectively. The average duration in seconds of each lap of the movement was 0.63 ± 0.05 in coupé and 0.64 ± 0.04 in passé.

Table 2 Average angular amplitude of the right knee and average duration of movement in the coupé and passé positions

Variables                                        Coupé          Passé
Greater angular flexion of the right knee (°)    74.22 ± 3.80   107.46 ± 0.60
Greater angular extension of the right knee (°)  50.67 ± 8.20   110.07 ± 0.88
Average duration of each movement (s)            0.63 ± 0.05    0.64 ± 0.04

4 Discussion

Evidence of differences between the coupé and passé positions in the SLT movement was found in relation to the displacement of the body’s CoM in the mediolateral axis and the flexion of the right knee (gesture leg) of the follower. Regarding the greater displacement of the CoM of the body, it is possible to verify that, in the passé position, the follower performed the movement to the left of the place demarcated for its performance on the mediolateral axis (z), justifying the negative value found. The displacement variation on the mediolateral axis to the left in the passé position may have been influenced by the greater degree of flexion of the right knee (gesture leg) of the follower. It is also possible that this happened due to the force imposed by the leader to make the turn happen. Still regarding this variable, the values described for the vertical (y) and anteroposterior (x) axes are similar between positions. A smaller displacement of the CoM of the head on the mediolateral axis (z) was identified for the two leg positions; this can be explained by the initial position of the head, rotated to the left side in anterior flexion. In pirouettes of Classical Ballet, it is common for the dancer to mark a fixed point of gaze in space with the objective of minimizing oscillations and vertigo [6–8]. This technique is known as “spotting” and consists of keeping the eyes on a fixed point in space for as long as possible when starting the rotation [9]. However, for the position chosen for this study, the head followed the movement of the trunk, that is, it remained “in block” with the rest of the body throughout the execution of the movement [6].


In relation to the right thigh (gesture leg), the displacement values of its CoM were similar in both positions (passé and coupé). Although the follower tilted her body to the left during the passé movement, the CoM of the right thigh did not follow the trend of the CoM of the body as a whole. This suggests the occurrence of a compensatory mechanism for maintaining balance. Although the displacements of the CoM occur on the scale of centimeters, it is important to emphasize that the area of the follower’s support base is reduced to the metatarsals of one of her feet; therefore, a minimal alteration may be sufficient to disturb the dynamic balance of the movement, as Hopper and Golomer highlighted in their studies [10, 11]. Both positions require the follower to flex the knee of the gesture leg. In passé, a reduced angular amplitude was observed in this joint in relation to the coupé position. This difference may have been caused by the follower’s constant attempts to adjust her body position to maintain the speed and stability of the movement.

5 Conclusions

This pilot study analyzed only the displacement of the centers of mass and the knee flexion angles of the gesture leg of a Brazilian Zouk dancer. The evidence found suggests the occurrence of adaptive changes to maintain dynamic balance in the Single Leg Turn, in relation to the displacement of the CoM of the entire body in the mediolateral axis and the flexion of the right knee in the coupé and passé positions. Such variations may be linked to the pattern of the follower herself, as well as to the conduction by the leader. Thus, investigating other variables, such as the ground reaction force, the joint amplitude of the upper limbs and the influence of the leader on the Single Leg Turn, in a larger number of participants, will allow a deeper understanding of the movement. From this, technical improvements can be established to assist the teaching-learning process of the Single Leg Turn.

Acknowledgements The authors would like to thank FAPERJ, CNPq and FINEP for the financial support for carrying out this work. The present work was carried out with the support of the Coordination for the Improvement of Higher Education Personnel—Brazil (CAPES)—Finance Code 001. The authors would also like to thank the Mangueira do Futuro Institute for athletic support.

Conflict of Interest The authors declare that there is no conflict of interest.

References

1. Motta M (2006) Teoria Fundamentos da Dança: uma abordagem epistemológica à luz da Teoria das Estranhezas. UFF, Niterói
2. Bronner S, Ojofeitimi S (2006) Gender and limb differences in healthy elite dancers: passé kinematics. J Mot Behav 38:71–79
3. Lott M (2019) Translating the base of support: a mechanism for balance maintenance during rotations in dance. J Dance Med Sci 23(1):17–25
4. Bandeiras C (2019) Technology in biomechanics sports. IEEE Potentials 38(3):8–10. https://doi.org/10.1109/MPOT.2019.2897276
5. BZDC at https://www.brazilianzoukcouncil.com
6. Lin C et al (2014) Differences of ballet turns (pirouette) performance between experienced and novice ballet dancers. Res Q Exerc Sport 85(3):330–340
7. Amblard B et al (2001) Voluntary head stabilization in space during oscillatory trunk movements in the frontal plane performed before, during and after a prolonged period of weightlessness. Exp Brain Res 137:170–179
8. Chatfield S et al (2007) A descriptive analysis of kinematic and electromyographic relationships of the core during forward stepping in beginning and expert dancers. J Dance Med Sci 11:76–84
9. Grant G (1967) Technical manual and dictionary of classical ballet, 3rd edn. Dover Publications
10. Hopper D (2014) The effects of vestibular stimulation and fatigue on postural control in classical ballet dancers. J Dance Med Sci 18(2):67–73
11. Golomer E (2008) Influence of vision and motor imagery styles on equilibrium control during whole-body rotations. Somatosens Mot Res 26(4):105–110

Automatic Rowing Kinematic Analysis Using OpenPose and Dynamic Time Warping V. Macedo, J. Santos and R. S. Baptista

Abstract

In this study, we describe a system to automatically analyze rowing kinematic parameters using video capture and processing with a single RGB camera. Useful rowing 2D joint angles in the sagittal plane are estimated using the OpenPose API combined with an offline filter to overcome frame loss and oscillations. Rowing key moments are identified through a time-series comparison between the joint angle curves of the performed movement and manually labeled reference joint angle curves representing a desired rowing stroke. The comparison is realized using the Dynamic Time Warping method. All the obtained data are displayed in a user-friendly interface to monitor the movement and provide offline feedback. The proposed approach enables automatic analysis of video-recorded training sessions.

Keywords

Rowing • Sport • Motion analysis • Training feedback

1 Introduction

Rowing is a full-body endurance activity that requires coordinated body movement to efficiently move the boat. Each stroke can be divided into phases, and their correctly sequenced execution is central to correct technique and impacts the athlete’s performance [13]. One of the main challenges encountered in elite rowing is to consistently maintain the technique across different paces and throughout the whole competition [7]. The proliferation of rowing machines in fitness clubs has led to the increased practice of this sport, not necessarily with correct technique. Rowing with incorrect technique can also lead to injuries; in particular, there is a high incidence of back injury [16]. In rowing clubs there is an initiation where the proper technique is presented, and usually there is regular coach supervision with verbal feedback. In competitive training, it is common to use video analysis to detect and present specific deviations from the correct form.

V. Macedo (B) · J. Santos · R. S. Baptista
Faculty of Gama, Universidade de Brasília, Asa Norte, 70910-900, Brasília, Brazil
e-mail: [email protected]

In the rowing scenario, many works on automatic kinematic analysis focus on wearable sensors (such as inertial units [4,12,18]) or on instrumentation of the ergometer machine [10,11]. While those systems tend to give accurate results, they are cumbersome for long-term movement analysis due to long setup times. Automatic video movement analysis requires two steps: movement capture and data analysis. Movement capture includes not only video capture, but also feature extraction, such as point trajectories or joint angles. The gold standard is marker-based MOCAP systems such as Vicon or Qualisys. Several applications using those systems for rowing analysis have been published [3,6,9,16,17]. However, these systems are expensive and have a long setup time, due to the need for marker placement and calibration procedures. With the advances in artificial intelligence and deep learning, many markerless video processing techniques to automatically extract features, i.e. skeleton drawing, have been proposed, such as OpenPose and others [1,5]. These techniques are usually applied to movement recognition, but rarely to biomechanical analysis, because of low precision and lost frames. Once the kinematic data is extracted, a finer analysis can be performed to provide useful movement feedback for rowers, which is rarely explored.

In this paper we present two contributions. First, we overcome OpenPose’s limitations regarding precision and loss of frames by applying a linear estimator. Second, we propose the use of the Dynamic Time Warping (DTW) algorithm to detect key moments in the rowing stroke and segment each rowing stroke into phases. Although automatic movement segmentation based on kinematic data has been proposed [8], to the best of our knowledge it has not been applied to rowing.
Second, we propose the use of the Dynamic Time Warping (DTW) algorithm to detect key moments in the rowing stroke and to segment each rowing stroke into phases. Although automatic movement segmentation based on kinematic data has been proposed [8], to the best of our knowledge it has not been applied to rowing.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_93


2 Methods

In this section we first present how we used OpenPose to obtain the relevant joint angles. Next, we introduce the Dynamic Time Warping algorithm, and finally we show how we combined both to create an automatic procedure for kinematic data analysis. All algorithms described in this section were developed in the Python programming language.

2.1 OpenPose and Filtering for Joint Angle Estimation

The OpenPose library is an open-source project providing state-of-the-art multi-person 2D pose estimation. The estimation algorithm is based on a convolutional neural network (CNN) trained on large datasets, such as COCO Keypoints [14]. In their framework, each image is fed through the network in a feedforward manner, outputting all the keypoints identified for each person in vector form. This approach has the advantage of giving accurate real-time 2D pose estimation for single-image applications but, in its current version, still lacks functionalities for video processing. One of the main issues is its frame-to-frame estimation, which does not consider temporal dependencies and introduces high-frequency transitions, or oscillations, in the keypoint trajectories. This problem was addressed in their later work [15] by using a recurrent neural network (RNN), though that is not yet part of the open-source API. Nonetheless, considering the higher accuracy and robustness of the OpenPose system in comparison with the other open-source libraries encountered, our work used some of the keypoints provided by the OpenPose BODY_25 model to estimate the relevant joint angles. More specifically, we chose six joint angles in the sagittal plane, forming the pose illustrated in Fig. 1. Although our goal is not to provide multi-person kinematic analysis, it is helpful to be able to deal with other persons passing through the video, especially in public environments. For the OpenPose system, however, this introduces another difficulty: choosing amongst all the persons and false positives detected in any frame, considering that the order of the keypoint vectors can vary between estimations. We solved this by applying a "biggest person" algorithm, which calculates the minimal rectangular area occupied by each person and selects the biggest, only requiring the target rower to be placed centrally in the video.
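The "biggest person" selection described above can be sketched as follows. This is a minimal illustration: the (x, y, score) keypoint layout mirrors OpenPose's per-person output (score 0 marks undetected keypoints), but the function itself is our sketch, not the authors' code.

```python
def biggest_person(people):
    """Select the person whose detected keypoints span the largest
    axis-aligned bounding box.

    `people` is a list of keypoint lists, each keypoint an (x, y, score)
    tuple; keypoints with score 0 were not detected and are ignored.
    """
    def area(keypoints):
        pts = [(x, y) for x, y, score in keypoints if score > 0]
        if not pts:
            return 0.0
        xs, ys = zip(*pts)
        return (max(xs) - min(xs)) * (max(ys) - min(ys))

    return max(people, key=area)
```

Because the area is computed only from detected keypoints, spurious low-confidence detections of bystanders do not inflate their bounding boxes.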
After separating the target rower keypoints, we still need to deal with the oscillations mentioned and with missing data due to false negatives, which are common in CNNs. This was done in two offline steps: interpolation and filtering. For the first step we used a cubic interpolation, implemented individually for each joint and for each missing xy coordinate. The next step consisted of implementing a linear Kalman filter modeled for a generic particle moving in a bi-dimensional space. The filter was applied to each joint trajectory separately.

Fig. 1 Rowing pose and joint angles selected in the sagittal plane. The initial R stands for Right. The letters A, K, H, T, S, E and W represent Ankle, Knee, Hip, Trunk, Shoulder, Elbow and Wrist, respectively. The symbol θ serves as an angle variable

We then consider the joint coordinates as a vector \(\vec{P}^k\) with two components, one for the line coordinate and the other for the column:

\[ \vec{P}^{k} = (P_j^k, P_i^k) \qquad (1) \]

where i is the line and j is the column of the corresponding pixel. To calculate each angle depicted in Fig. 1, three points represented as vectors are specified: (i) the joint coordinate \(\vec{P}^k\) for joint k; (ii) the first adjacent point \(\vec{P}^{k-}\); (iii) the second adjacent point \(\vec{P}^{k+}\). For the knee, shoulder and elbow angles, the adjacent points are the next and previous joint coordinates in the kinematic model. For the ankle angle, one of the adjacent points is placed horizontally to the right of the ankle joint; for the trunk angle, the hip joint is the central joint, one adjacent point is the elbow joint and the other is placed vertically upward from the hip joint. To calculate the joint angles, we formulate the problem as the angle between two vectors, where the adjacent vectors are centered on the joint coordinate:

\[ \vec{P}_0^{\,k\pm} = \vec{P}^{k\pm} - \vec{P}^{k} \qquad (2) \]

Finally, the angle is calculated with the arctangent function with shifted domain (atan2), deduced from the definitions of the cross product (here the scalar z-component for plane vectors) and the dot product:

\[ \theta^{k} = \frac{180}{\pi}\,\operatorname{atan2}\!\left( \vec{P}_0^{\,k-} \times \vec{P}_0^{\,k+},\; \vec{P}_0^{\,k-} \cdot \vec{P}_0^{\,k+} \right) \qquad (3) \]


where θ^k is given in degrees. For the knee and elbow joints, because of the reference chosen, we take the supplementary angle of the one computed. The advantage of using the atan2 function is that it produces negative angles when the elbow crosses the body backwards (posterior side), when the trunk extends past the vertical line, or when either the elbow or knee hyperextends. It is worth noting that the estimated angles are restricted to the two-dimensional sagittal plane and therefore do not represent motions such as hip and shoulder abduction or any other motion that steps out of the plane. Although the ankle, knee and trunk angles are mostly contained in that plane, the elbow angle is not. The proposed method does not claim to estimate the real elbow angle, but it at least approximates the elbow profile curve, which can be used for classification purposes.
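The angle computation of Eqs. 2-3 can be sketched as follows. This is a minimal illustration (the function name and point layout are our own): using the signed 2-D cross product in the atan2 numerator is what yields the negative angles for hyperextension described above.

```python
import math

def joint_angle_deg(p_prev, p_joint, p_next):
    """Angle at p_joint between the vectors to p_prev and p_next.

    Points are (column, row) pixel coordinates. The signed scalar
    cross product (z-component) makes atan2 return negative angles
    when the second vector falls on the clockwise side of the first.
    """
    v1 = (p_prev[0] - p_joint[0], p_prev[1] - p_joint[1])   # Eq. 2, k-
    v2 = (p_next[0] - p_joint[0], p_next[1] - p_joint[1])   # Eq. 2, k+
    cross = v1[0] * v2[1] - v1[1] * v2[0]   # z-component of 3-D cross
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.atan2(cross, dot))             # Eq. 3
```

For a right angle, e.g. vectors along the positive x and y axes, the function returns 90°; collinear opposite vectors give 180°.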

2.2 Dynamic Time Warping

Dynamic Time Warping (DTW) is a computational technique used to find the optimal alignment, or similarity, between two time series. This alignment is a correspondence between points of the two series, which may vary in speed or time. To find this match, DTW calculates a minimum distance between points of the series, which can be identical except for a shift along the x-axis. The distance between the two signals results in a warped representation that provides a matching between regions [2]. Each line connecting a point of one sequence to a point of the other represents the distance between them; a single point may connect to multiple points, producing the warped representation [2], so the two sequences may have different lengths. If the sequences were identical, the representation would not be warped, since each pair of points would be connected vertically at 90°.

2.2.1 DTW Algorithm

Suppose two time series A and B of lengths n and m, respectively, and an n × m matrix that is filled as the indexes (i, j), corresponding to row and column, increase up to the maximum size:

\[ A = a_1, a_2, \ldots, a_i, \ldots, a_n \qquad (4) \]

\[ B = b_1, b_2, \ldots, b_j, \ldots, b_m \qquad (5) \]

To fill the resulting matrix, the algorithm calculates the accumulated distance:

\[ D(i, j) = d(i, j) + \min[D(i-1, j-1),\; D(i-1, j),\; D(i, j-1)] \qquad (6) \]

where the min term compares the three previously computed neighboring cells (diagonal, row and column) and d(i, j) is the squared Euclidean distance between the i-th element of A and the j-th element of B:

\[ d(i, j) = (a_i - b_j)^2 \qquad (7) \]

When the algorithm reaches the last element of the matrix, backtracking starts from the last element towards the first. The principle is the same as when filling the matrix, but applied in reverse to find and store the optimal path, which represents the diagonal corresponding to an optimal alignment. After the distances are measured, the dynamic time warping is defined as the warping path with the minimum cumulative sum of distances.

2.3 Our Contribution—Applying DTW to the Output Result of OpenPose

In order to apply the DTW algorithm, we first need to choose a reference, or desired, rowing cycle. This reference was acquired by capturing and processing a video of a single rowing stroke performed by a former elite rower and Brazilian national team member. Note that we do not assume this is the correct or ideal technique, only one selected to serve as a reference. The two main moments in a rowing cycle are the Catch (Fig. 2a) and the Finish (Fig. 2c), which separate the Drive (from Catch to Finish) and the Recovery (from Finish back to Catch). However, to show that the DTW method is not limited to identifying only well-defined moments, we selected two other moments in the rowing cycle to analyze (Fig. 2). The first is the moment in the Drive phase where the knee first reaches full extension (Leg Extended, Fig. 2b) and the second is the moment in the Recovery phase where the elbow fully extends in order to return to the Catch (Arm Returned, Fig. 2d). Then, we segment each cycle of the target rowing movement (i.e. the rowing joint angles of the movement to be evaluated). This segmentation was performed using a threshold-based peak detection algorithm applied to the knee angle trajectory. Next, the DTW algorithm is run individually for each segmented cycle. Since the algorithm is applied to a single time series, we needed to select the most representative joint angle trajectory for each rowing moment. Based on the given definitions, we assigned the knee angle to the Leg Extended moment and the elbow angle to the Finish and Arm Returned moments. Note that the Catch moment is found using the peak detection algorithm.
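The per-cycle DTW comparison described above rests on the accumulated-distance recurrence of Eq. 6 with backtracking. A minimal sketch in Python follows; the function name and the convention of returning the path as 0-based index pairs are our own, not the authors' implementation.

```python
import math

def dtw(a, b):
    """Accumulated-cost DTW (Eqs. 6-7) with path backtracking."""
    n, m = len(a), len(b)
    INF = math.inf
    # (n+1) x (m+1) matrix with an infinite border for the boundary cases
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2              # Eq. 7
            D[i][j] = d + min(D[i - 1][j - 1],          # Eq. 6
                              D[i - 1][j],
                              D[i][j - 1])
    # backtrack from (n, m) to (1, 1), always moving to the cheapest
    # of the three neighboring cells, to recover the optimal path
    path, i, j = [], n, m
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        moves = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min(moves, key=lambda c: D[c[0]][c[1]])
    path.append((0, 0))
    return D[n][m], path[::-1]
```

The returned path pairs each frame of the target cycle with a frame of the reference cycle, which is what allows the manually labeled reference moments to be transferred onto the target video.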


Fig. 2 All four rowing moments selected for analysis with the following temporal order: catch, leg extended, finish and arm returned

The last step resulted in two path vectors, one for the knee angle and the other for the elbow. These vectors connect frames, or moments, of the reference angle curve to the target. With that information we search for the frames in the target angle curve that best associate with each manually labeled rowing moment in the reference cycle. This results in an automatic labeling of all the chosen moments for each cycle in the target rowing video. Knowing the rowing key moments, several metrics can be provided for the target rower's movement. First, from the cycle length we calculate the pace and period of the cycle. Then, with the Catch and Finish moments we estimate the drive and recovery percentages of the cycle. Lastly, we use the reference angles for each moment to analyze movement consistency and error; for example, we compare the trunk angle at the Finish moment in all cycles with the reference trunk angle for the same moment in the single desired cycle. To test the algorithm, two five-minute videos with 120 rowing cycles were used as target movements and compared with the reference desired cycle performed by the professional rower. The first target video was performed by a novice rower with no experience and the second by the same experienced professional rower who performed the reference cycle. Therefore, aside from comparing the novice technique with the desired cycle, we also compare the experienced rower's five-minute training session with his own reference to check how consistent he was.
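The per-cycle metrics above (period, pace, and drive/recovery split) follow directly from the labeled key frames. The sketch below is illustrative: the function name, frame arguments and `fps` parameter are our own, not the authors' interface.

```python
def cycle_metrics(catch_frame, finish_frame, next_catch_frame, fps):
    """Period, pace and drive/recovery percentages of one rowing cycle.

    Frames are the automatically labeled Catch and Finish key moments;
    the cycle spans catch_frame .. next_catch_frame.
    """
    cycle_frames = next_catch_frame - catch_frame
    period = cycle_frames / fps                     # seconds per stroke
    pace = 60.0 / period                            # strokes per minute
    drive = (finish_frame - catch_frame) / cycle_frames
    return period, pace, drive * 100.0, (1.0 - drive) * 100.0
```

For instance, at 30 fps a cycle of 90 frames with the Finish at frame 30 gives a 3 s period, a pace of 20 strokes/min and a 33/67 drive/recovery split.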

3 Results

The proposed biomechanical analysis provided the evaluation of each rowing moment in accordance with the reference movement. One of the evaluated cycles is shown in Fig. 3, where it is possible to compare the techniques based on the knee and elbow joint angles. The first difference observed is in the knee angle behavior between the Leg Extended and Arm Returned moments. Whereas the knee angle maintains a similar value during this interval in the reference technique, the target knee angle reaches its peak value only in the middle of the interval and starts decreasing before the end. To further analyze each rowing cycle, as is normally done by coaches, we developed a simple interface (Fig. 4) that allows switching between each cycle and each rowing moment while displaying specific calculated metrics. Those metrics include the absolute time of the moment and the time relative to the cycle, the period and pace of the cycle, the drive and recovery percentages, and the error of each joint angle in comparison with the reference angles for that moment.


Fig. 3 Comparison between the reference rower movement (upper images) with the target rower movement (lower images). The images to the left represent the Catch moment for each rower and at the right a plot of the knee and elbow joint angles with the rowing moments marked

Fig. 4 Rowing movement analysis interface developed to demonstrate the metric calculated automatically for the rowing movement. The upper dropdown bars allow the selection of both the cycle number and the cycle moment, while the lower bars provide a fine selection of the desired frame and player functionalities for the video

In addition to the cycle-by-cycle analysis, we also provide a whole-training analysis, where the user can select the moment and angle of interest and observe it throughout the cycles while comparing it to the reference angle. This is important feedback on consistency, which according to [7] is one of the main differences between elite and non-elite rowers. An example of the interface is depicted in Fig. 5, showing the progress of the trunk angle at the Finish moment during the training. From this data we can observe two main things. First, independently of the technique used, the experienced rower is far more consistent throughout the cycles, as shown by the smaller variations in the amplitude of the trunk angle. Second, the experienced rower, in both the reference and the training cycles, extends the trunk further at the Finish moment than the novice rower, which leads to a greater movement amplitude.

4 Conclusion

In this paper we proposed a framework for automatic video analysis of the rowing motion by extracting kinematic data from the video and using the DTW algorithm with peak detection to automatically classify each rowing cycle into distinct moments. The framework uses an exemplary video from an experienced rower with manually defined moments of interest and can segment and analyze a recording of another rower. The proposed approach can automatically provide kinematic data and compare it to a desired stroke technique, highlighting deviations in the execution. This could be used as a training tool, since traditionally posture correction and feedback on execution are provided exclusively by the coach overseeing the training session. It is worth restating that the system does not provide or judge any reference stroke as being ideal or correct. In a real application the user would be responsible for providing the reference video to be used as comparison, acknowledging the differences in anthropometric characteristics between the rowers compared and how they could influence which technique is more appropriate. The study focuses on proposing a preliminary system design, but the kinematic parameter acquisition is not yet validated; a separate study is being conducted to assess the validity of the image-based algorithm using OpenPose. Another limitation is that the system is offline only, due to both the interpolation and the DTW algorithms. Future work will include the investigation of online estimation techniques.

Fig. 5 Comparison of the trunk angle at the Finish moment in all cycles analyzed, using the interface. The dashed line represents the desired angle according to the reference movement for the Finish moment; the blue and orange continuous lines show the trunk angle at each Finish moment in the training session for the novice and experienced rower, respectively

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Güler RA, Neverova N, Kokkinos I (2018) DensePose: dense human pose estimation in the wild. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7297–7306
2. Berndt D, Clifford J (1994) Using dynamic time warping to find patterns in time series. In: Workshop on knowledge discovery in databases, pp 359–370
3. Bingul BM et al (2014) Two-dimensional kinematic analysis of catch and finish positions during a 2000 m rowing ergometer time trial. S Afr J Res Sport Phys Educ Recreat 36(3):1–10
4. Bosch S et al (2015) Analysis of indoor rowing motion using wearable inertial sensors. In: Proceedings of the 10th EAI international conference on body area networks. ICST, pp 233–239. https://doi.org/10.4108/eai.28-9-2015.2261465
5. Cao Z et al (2018) OpenPose: realtime multi-person 2D pose estimation using part affinity fields. http://arxiv.org/abs/1812.08008
6. Cerne T, Kamnik R, Munih M (2011) The measurement setup for real-time biomechanical analysis of rowing on an ergometer. Measurement 44(10):1819–1827
7. Cerne T et al (2013) Differences between elite, junior and non-rowers in kinematic and kinetic parameters during ergometer rowing. Hum Movem Sci 32(4):691–707. https://doi.org/10.1016/j.humov.2012.11.006
8. de Souza Baptista R, Bo AP, Hayashibe M (2017) Automatic human movement assessment with switching linear dynamic system: motion segmentation and motor performance. IEEE Trans Neural Syst Rehabil Eng 25(6):628–640. https://doi.org/10.1109/TNSRE.2016.2591783
9. Fothergill S, Harle R, Holden S (2008) Modeling the model athlete: automatic coaching of rowing technique. In: Lecture notes in computer science, vol 5342, pp 372–381. https://doi.org/10.1007/978-3-540-89689-0_41
10. Franke T, Pieringer C, Lukowicz P (2011) How should a wearable rowing trainer look like? A user study. In: Proceedings of the international symposium on wearable computers, ISWC, pp 15–18. https://doi.org/10.1109/ISWC.2011.15
11. Gravenhorst F et al (2012) Sonicseat: a seat position tracker based on ultrasonic sound measurements for rowing technique analysis. In: BODYNETS 2012, 7th international conference on body area networks. https://doi.org/10.4108/icst.bodynets.2012.249917
12. Gravenhorst F et al (2014) Strap and row: rowing technique analysis based on inertial measurement units implemented in mobile phones. In: IEEE ISSNIP 2014, 9th international conference on intelligent sensors, sensor networks and information processing, pp 21–24. https://doi.org/10.1109/ISSNIP.2014.6827677
13. Ishiko T (2015) Biomechanics of rowing, pp 249–252. https://doi.org/10.1159/000392181
14. Lin T-Y et al (2014) Microsoft COCO: common objects in context. http://arxiv.org/abs/1405.0312
15. Raaj Y et al (2019) Efficient online multi-person 2D pose tracking with recurrent spatio-temporal affinity fields. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4620–4628
16. Sforza C (2012) A three-dimensional study of body motion during ergometer rowing. Open Sports Med J 6(1):22–28. https://doi.org/10.2174/1874387001206010022
17. Skublewska-Paszkowska M et al (2016) Motion capture as a modern technology for analysing ergometer rowing. Adv Sci Technol Res J 10(29):132–140. https://doi.org/10.12913/22998624/61941
18. Tessendorf B et al (2011) An IMU-based sensor network to continuously monitor rowing technique on the water. In: Proceedings of the 2011 7th international conference on intelligent sensors, sensor networks and information processing, ISSNIP 2011, pp 253–258. https://doi.org/10.1109/ISSNIP.2011.6146535

Analysis of Seat-to-Hand Vibration Transmissibility in Seated Smartphone Users R. M. A. Dutra, G. C. Melo, and M. L. M. Duarte

Abstract

The use of smartphones by Brazilians in vehicles subject to vibration is increasing. Consequently, the analysis of seat-to-hand vibration transmissibility becomes necessary. Tests on a vibrating platform were conducted to assess the influence of frequency, amplitude, gender and smartphone type on this transmissibility. Six different stimuli (three frequencies and two amplitudes) were applied to volunteers who used two types of smartphones. The results showed that transmissibility increased with decreasing frequency and amplitude. In addition, changes in frequency or amplitude did not cause proportional changes in transmissibility. Regarding gender, men amplified the vibration compared to women, indicating a lower capacity to attenuate it. As for the smartphones, they had no influence on transmissibility.

Keywords

Transmissibility • Whole-body vibration • Smartphone

R. M. A. Dutra (&) Centre for Innovation, Research and Teaching in Mechatronics, UFSJ/NIPEM, Department of Telecommunications and Mechatronics Engineering, Universidade Federal de São João del-Rei, KM 7 MG 443, Ouro Branco, Brazil e-mail: [email protected] R. M. A. Dutra · M. L. M. Duarte UFMG/PPGMEC, Postgraduate Program in Mechanical Engineering, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil G. C. Melo DEMEC/UFMG, Mechanical Engineering Department, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil

1 Introduction

The Brazilian population spends on average 32 days a year in traffic, with 46% using buses as the main means of transportation [1]. One way to compensate for this idle time is using smartphones, which contributes to the country being the fifth largest user of mobile phones in the world [2]. Public transport is the preferred place for 68% of Brazilians to use their smartphones [3]. During the journey, the user, exposed to mechanical vibrations, may present biomechanical, physiological and psychological reactions that affect comfort, health and stability, and consequently the performance of tasks accomplished on these smartphones [4, 5]. A common methodology for characterizing dynamic performance in tasks under mechanical vibration is the measurement of transmissibility, defined as the human body's ability to filter and transmit vibrations [6, 7]. Its amplitude can thus be used as a global factor to assess the dynamic performance of the seat [8]. The main objective of the present article is therefore to verify whether the vibration developed at the transport seat is passed on to smartphone users' hands. The specific objectives are to analyze the influence of frequency, amplitude, gender and smartphone type on this transmissibility. It is expected that these results will make it possible to analyze, in future works, the performance and comfort of activities performed on smartphones.

2 Materials and Methods

2.1 Volunteers

Sixteen university students, eight male and eight female, were assessed with the Anamnese capability questionnaire for participation in the whole-body vibration (WBV) tests, as recommended by Michael [8] and ISO 13090-1 [9]. All volunteers presented no neuromuscular disorders, vertigo, dizziness, balance disorders or vision problems, fulfilling the inclusion criteria [10]. The sixteen students were familiar with the types of smartphones used: the Galaxy S4 by Samsung® and the iPhone 5 by Apple®. These smartphones run, respectively, the two most widespread operating systems: Android and iOS [11, 12]. Four men and four women used the Galaxy S4, while the rest used the iPhone 5. An Informed Consent Form was obtained from the volunteers after a clear explanation of the objectives and procedures of the experiment. In addition, the study was approved by the Research Ethics Committee of Universidade Federal de Minas Gerais under number CAAE 01936713.4.0000.5149. The characteristics of the sample are summarized in Table 1.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_94

2.2 Experimental Apparatus

The support platform (simulation bench) used for the WBV tests is illustrated in Fig. 1. The vibration is generated by an electrodynamic exciter (shaker) and transmitted to the platform where the volunteer is positioned. The accelerometer, located below the seat, reads the weighted acceleration, and the obtained value is filtered and corrected by a signal conditioner. Monitoring and control are performed by a computer-board system that corrects this value. After correction, the signal is amplified and drives the electrodynamic exciter to impose the desired acceleration level on the platform. The analysis of this acceleration was carried out only on the vertical axis (z axis), assuming that the vibrations on the transverse axes are much lower, as occurs in vehicles [13]. The vibration stabilization time is less than 30 s [14]. To read the vibration generated in the volunteer's hand, an accelerometer located near the third metacarpal bone of the right hand was used (Fig. 2). The reading is triaxial, and the anatomical coordinate system was used [16].

Fig. 1 Simulation workbench and data acquisition, monitoring and control system equipment (adapted from [15])

2.3 Experimental Procedure

Each volunteer was subjected to six test conditions, consisting of a combination of three vertical frequencies (5, 10 and 15 Hz) and two RMS amplitude values (0.6 and 1.2 m/s²). The frequencies and amplitudes were applied in random order, to avoid residual effects and influence between tests. According to Michael [8], vehicle seats generally transmit vibration levels below 20 Hz; therefore, the three frequency values in this study belong to the acceptable transmission range. The frequency of 5 Hz is in the range of greatest vibration transmissibility between spine and seat and between head and platform [17, 18]. The frequencies of 10 Hz (double) and 15 Hz (triple) were chosen to investigate whether the transmissibility behavior would be linear when compared to the linear change in frequency. The amplitude of 0.6 m/s² is equivalent to the average value of vertical acceleration (z axis) in passenger vehicles [19, 20]. The choice of the 1.2 m/s² acceleration is consistent with the maximum values found in the literature for RMS acceleration measured in passenger vehicles, mainly associated with a speed of 60 km/h [21, 22]. In addition, it allows investigating what happens when the 0.6 m/s² acceleration is doubled, and whether the effect is linear. The volunteers were positioned on the support platform and instructed to keep their legs bent at a 90° angle throughout the test and their backs resting against the seat back. Only this posture was adopted, although there are no consistent results in the literature on the effect of posture on WBV exposure [10]. Only the hands and arms could move freely. In all tests, data collection started after the control system stabilized. All volunteers were instructed to type pre-established phrases averaging 65 characters per test, simulating the use of the smartphone in messaging applications. For the analysis of transmissibility, three ratios were calculated, each relating a weighted RMS acceleration of the hand to the weighted RMS acceleration measured at the seat.

Fig. 2 Location of the accelerometer on the hand and the coordinate system (adapted from [16])

Table 1 Experimental anthropometric characteristics of the subjects

| Group             | Mean age (years) | Mean height (m) | Mean body mass (kg) |
|-------------------|------------------|-----------------|---------------------|
| iPhone 5 (N = 8)  | 22.40 ± 1.49     | 1.70 ± 0.10     | 65.80 ± 10.53       |
| Galaxy S4 (N = 8) | 24.10 ± 2.89     | 1.70 ± 0.11     | 67.10 ± 12.50       |
| Female (N = 8)    | 22.80 ± 3.03     | 1.61 ± 0.07     | 56.40 ± 5.24        |
| Male (N = 8)      | 23.80 ± 1.56     | 1.79 ± 0.03     | 76.50 ± 6.18        |
| Overall (N = 16)  | 23.25 ± 2.54     | 1.69 ± 0.11     | 66.43 ± 11.96       |
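The transmissibility ratios described above can be sketched as follows. This is a minimal illustration with unweighted RMS for brevity; in the actual experiment the accelerations are frequency-weighted before the ratio is taken, and the function names are ours.

```python
import math

def rms(signal):
    """Root-mean-square value of a sampled acceleration signal."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def transmissibility(hand_axes, seat_z):
    """Ratio of the RMS acceleration of each hand axis to the seat z axis.

    `hand_axes` maps an axis label ('x', 'y' or 'z') to its sampled
    acceleration; `seat_z` is the seat acceleration on the z axis.
    """
    seat_rms = rms(seat_z)
    return {axis: rms(sig) / seat_rms for axis, sig in hand_axes.items()}
```

A ratio above 1 means the hand acceleration exceeded the seat acceleration (amplification); below 1, the vibration was attenuated.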


3 Results and Discussion

The average values of the RMS accelerations at the seat (z axis) and at the hand (x, y and z axes) for each test are shown in Table 2. The results confirmed the efficiency of the control system, since the average weighted RMS acceleration measured on the z axis of the seat (desired value) is very close to the excitation amplitude of the test (measured average value); the maximum error was approximately 3.33%. Transmissibility was calculated between each of the three axes of the hand (x, y and z) and the z axis of the seat, as shown in Fig. 3. The 5 Hz vibration frequency showed the highest transmissibilities from the seat to the hand, especially between the z axis of the seat and the x axis of the hand. In fact, at this frequency there is a large displacement of the hand-arm system during the task [8]. It was observed that an increase in frequency implies a reduction in transmissibility, which is explained by the distancing from the natural frequency of the body. In only three situations did the average transmissibility show a value greater than 1, indicating that the weighted RMS acceleration of the hand was greater than that of the seat, i.e. the acceleration was amplified. All these amplifications took place at 5 Hz, confirming again that this frequency causes greater transmissibility relative to the seat. Moreover, no attenuation was observed between the z/z axes at 5 Hz. It was also observed that at 10 Hz and 15 Hz the z/z transmissibilities were very close for the same amplitude. In relation to the vibration amplitude, the 0.6 m/s² excitation caused greater transmissibility than the 1.2 m/s² excitation. This behavior shows that the average amplitude found in passenger vehicles (0.6 m/s²) is the one that provides greater transmissibility. In the remaining cases, hand vibration relative to the seat was reduced.

Table 2 Test results

| Test | Magnitude (m/s2) | Frequency (Hz) | Seat weighted RMS, z axis (m/s2) | Difference from set amplitude (%) | Hand weighted RMS, z axis (m/s2) | Hand weighted RMS, y axis (m/s2) | Hand weighted RMS, x axis (m/s2) |
|------|------|------|------|-------|------|------|------|
| T1 | 0.6 | 10 | 0.62 | +3.33 | 0.34 | 0.44 | 0.46 |
| T2 | 1.2 | 5  | 1.19 | −0.01 | 0.81 | 1.13 | 1.81 |
| T3 | 1.2 | 15 | 1.22 | +1.66 | 0.46 | 0.40 | 0.41 |
| T4 | 0.6 | 5  | 0.61 | +1.66 | 0.54 | 0.65 | 0.99 |
| T5 | 1.2 | 10 | 1.24 | +3.33 | 0.50 | 0.52 | 0.70 |
| T6 | 0.6 | 15 | 0.62 | +3.33 | 0.33 | 0.30 | 0.34 |
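The per-axis transmissibilities discussed in the text can be recomputed directly from the values in Table 2. The snippet below is illustrative; `table2` and `transmissibilities` are hypothetical names, and the dictionary simply transcribes the table rows.

```python
# Transcription of Table 2 (weighted RMS accelerations, m/s^2)
table2 = {
    "T1": {"seat_z": 0.62, "hand": {"z": 0.34, "y": 0.44, "x": 0.46}},
    "T2": {"seat_z": 1.19, "hand": {"z": 0.81, "y": 1.13, "x": 1.81}},
    "T3": {"seat_z": 1.22, "hand": {"z": 0.46, "y": 0.40, "x": 0.41}},
    "T4": {"seat_z": 0.61, "hand": {"z": 0.54, "y": 0.65, "x": 0.99}},
    "T5": {"seat_z": 1.24, "hand": {"z": 0.50, "y": 0.52, "x": 0.70}},
    "T6": {"seat_z": 0.62, "hand": {"z": 0.33, "y": 0.30, "x": 0.34}},
}

def transmissibilities(test):
    """Hand-to-seat transmissibility for each hand axis of one test."""
    row = table2[test]
    return {axis: round(a / row["seat_z"], 2) for axis, a in row["hand"].items()}
```

For example, `transmissibilities("T2")` gives an x/z ratio of 1.52, one of the three amplifications (ratio > 1) observed at 5 Hz, while every z/z ratio in the table stays below 1.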


R. M. A. Dutra et al.

Fig. 3 Average transmissibility between each of the hand axes (x, y and z axes) and the seat axis (z axis) of each test in ascending order of frequency (Hz) and magnitude (m/s2), respectively

Fig. 4 Average transmissibility between each of the hand axes (x, y and z axes) and the seat axis (z axis) by: a frequency, b magnitude

Analysis of Seat-to-Hand Vibration Transmissibility in Seated …

The behavior of the average x/z, y/z and z/z transmissibilities as a function of frequency and as a function of amplitude is shown in Fig. 4. According to Mitsunori et al. [23], resonant frequencies cause significant responses in the human body between 2 and 11 Hz, which justifies the fact that the frequency of 15 Hz (outside the resonance range) shows the lowest transmissibilities and that they are similar to each other. No linear proportional behavior of the transmissibility with the variation of frequency or amplitude was observed. Therefore, increasing the frequency (doubling or tripling it) and/or increasing the amplitude (doubling it) does not cause an increase or decrease of the same intensity in the transmissibility value. The absence of linearity was also observed for variation of axes in the same frequency or in the same amplitude. This indicates that a change in transmissibility between a hand axis (x, y or z) and the seat axis (z) does not generate a proportional variation (of order 1) in the transmissibilities of the other axes.

Fig. 5 Average transmissibility between each of the hand axes (x, y and z axes) and the seat axis (z axis) by frequency (Hz) and by: a gender, b smartphone


The data obtained in the frequency analysis by gender (a) and by smartphone (b) are presented in Fig. 5. For the frequencies of 5 Hz and 10 Hz, the average transmissibility of vibration in all situations was higher for men than for women. This result contradicts the idea that the higher the mass, the lower the transmissibility, since the men had a higher average mass; it suggests instead that higher weight and BMI indicate less capacity of the human body to attenuate hand movement. This behavior is not noticed at the frequency of 15 Hz, from which it can be inferred that gender, and consequently weight and BMI, do not influence transmissibility at higher frequencies. In the smartphone analysis, the average transmissibilities at the frequency of 5 Hz were higher for the Galaxy model, while at the frequency of 10 Hz they were higher for the iPhone model. At 15 Hz, the transmissibilities were similar for both smartphones. Given this inconsistent variation, it was concluded that the type of smartphone is not a factor of influence on the transmissibility between the seat and the hand.



Fig. 6 Average transmissibility between each of the hand axes (x, y and z axes) and the seat axis (z axis) by magnitude (m/s2) and by: a gender, b smartphone

The behavior of the transmissibility by amplitude in the analyses by gender (a) and by smartphone (b) is shown in Fig. 6. In the data by gender, at all amplitudes the average transmissibility was higher for men. Again, the results show that the male body has greater difficulty in attenuating hand vibration. In the smartphone analysis, there is no consistent behavior between the amplitudes: at the amplitude of 0.6 m/s2, the transmissibility was higher for the Galaxy model between the x/z and z/z axes, while at the amplitude of 1.2 m/s2 it was higher between the y/z and z/z axes. Therefore, it is reaffirmed that the type of smartphone used has no impact on the transmissibility of vibration from the seat to the hand.

4 Conclusions

The main objective of this work was to carry out tests with volunteers seated on a vibrating platform in order to evaluate the transmissibility of vibration from seat to hand during smartphone use. The results showed that transmissibility is highest precisely at the frequency of 5 Hz, which is in the resonance range of the torso-arm system, and at the amplitude of 0.6 m/s2, which is the most common in passenger vehicles. In addition, the data indicated an absence of linearity in the behavior of the transmissibility in relation to the variation of amplitude or frequency. In relation to gender, it was observed that men have greater transmissibility at lower frequencies, the male body having greater difficulty in mitigating vibration. Regarding smartphones, the absence of a trend showed that the device type does not interfere with transmissibility. In view of the above, it is important to expand the test conditions in order to map the nonlinear behavior of transmissibility in relation to frequency and amplitude. In addition, the vibrating platform must be improved to make its backrest more like those found in vehicles, since the backrest effect can be an influencing factor in the results [10]. Furthermore, it is necessary to evaluate the performance


of activities developed on the smartphone, such as the number of mistakes and speed during a writing process, also measuring comfort and difficulty due to transmissibility.

Acknowledgements The authors thank FAPEMIG for the financial support of the TEC448-13 project.

Conflict of Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References
1. IPSOS Instituto (2019) Mobilidade urbana, Summit Mobilidade Urbana 2019, São Paulo, Brasil
2. App Annie (2019) The state of mobile 2019, Brasília
3. TechTudo at https://www.techtudo.com.br/noticias/noticia/2012/10/brasileiros-usam-smartphones-mais-no-onibus-e-no-metro-dizestudo.html
4. Marilu AMF, Luiz FS, Tiago LB (2016) Transporte coletivo: vibração de corpo-inteiro e conforto de passageiros, motoristas e cobradores. J Transp Lit 10:35–39
5. Dong-jun P, Moon GC, Jong-tak S et al (2019) Attention decrease of drivers exposed to vibration from military vehicles when driving in terrain conditions. Int J Ind Ergon 72:363–371
6. Romain B, Thomas W, Raphael D et al (2019) Assessment of the impact of sub-components on the dynamic response of a coupled human body/automotive seat system. J Sound Vib 459:114846
7. Rui-chum D, Lei H, Wei D et al (2019) Effect of sitting posture and seat on biodynamic responses of internal human body simulated by finite element modeling of body-seat system. J Sound Vib 438:543–554
8. Michael JG (1996) Handbook of human vibration. Elsevier Science, Southampton
9. ISO 13090-1 (1998) Guidance on safety aspects of tests and experiments with people. Part 1: Exposure to whole-body mechanical vibration and repeated shock, p 23
10. Shih-Yi L, Chun-Ching L, Cheng-Lung L et al (2019) Vertical vibration frequency and sitting posture effects on muscular loads and body balance. Int J Ind Ergon 74:102860
11. Gencay D, Pinar OD (2019) A comparison of mobile form controls for different tasks. Comput Stand Interfaces 61:97–106
12. Junghun K, Hyunjoo L, Jongsu L (2020) Smartphone preferences and brand loyalty: a discrete choice model reflecting the reference point and peer effect. J Retail Consum Serv 52:101907
13. Xiaoke Z, Aaron MK, Muhammad IK et al (2017) Whole body vibration exposure patterns in Canadian prairie farmers. Ergonomics 60(8):1064–1073
14. Lázaro VD, Maria LMD, José MG (2012) Development of an active control system for a whole body vibration platform. In: ABCM symposium series in mechatronics, v. 5, Section II—Control systems, 21st international congress of mechanical engineering, Natal, Brasil, 2011, pp 298–305
15. Maria LMD, Priscila AA, Frederico CV et al (2018) Correlation between weighted acceleration, vibration dose value and exposure time on whole body vibration comfort levels evaluation. Saf Sci 103:218–224
16. ISO 8727 (1997) Mechanical vibration and shock—human exposure—biodynamic coordinate systems, p 14
17. Manohar MP, Gunnar BJA, Lars J et al (1984) In vivo measurement of spinal column vibrations. J Biomech 17(11):876
18. Colin C, Michael JG (1991) Effects of vertical vibration on passenger activities: writing and drinking. Ergonomics 34:1313–1332
19. Peter LP, William T, Donald EW (1992) Hand-arm vibration: a comprehensive guide for occupational health professionals. Van Nostrand Reinhold, Ann Arbor
20. Gurmail SP, Michael JG (2002) Evaluation of whole-body vibration in vehicles. J Sound Vib 253(1):195–213
21. Alexandre B (2001) Caracterização dos níveis de vibração em motoristas de ônibus: um enfoque no conforto e na saúde. Tese (Doutorado em Engenharia), Faculdade de Engenharia, UFRGS, Porto Alegre
22. Zulquernain M (2007) Investigating data entry task performance on a laptop under the impact of vibration: the effect of color. Int J Occup Saf Ergon 13:291–303
23. Mitsunori K, Fumio T, Hiroyuki A et al (2001) An investigation into a synthetic vibration model for humans: an investigation into a mechanical vibration human model constructed according to the relations between the physical, psychological and physiological reactions of humans exposed to vibration. Int J Ind Ergon 27(4):219–232

Analysis of Matrix Factorization Techniques for Extraction of Motion Motor Primitives P. F. Nunes, I. Ostan, W. M. dos Santos and A. A. G. Siqueira

Abstract

Through the study and analysis of motor primitives, it is possible not only to perform motor control to aid impaired people, but also to develop techniques for neuromuscular rehabilitation. Human motor control is executed through a basic set of signals that govern motor behavior; this basic set of signals is called the motor primitives of movement. From the knowledge of an individual's motor primitives, it is possible to develop control strategies capable of assisting individuals with some sort of mobility impairment. In order to extract these motor primitives precisely for the development of assistive devices, factorization techniques are fundamental, and many methods exist to perform this task. The present work analyzes the results yielded by four of the most used matrix factorization techniques (PCA, ICA, NNMF, SOBI) using electromyography (EMG) signals. The results suggest that PCA is the technique that best managed to reconstruct the EMG signals after their factorization, with a virtually zero relative error. The SOBI technique also yielded satisfactory results, followed by NNMF and finally ICA, which presented a reconstructed signal quite different from the original.

Keywords
Motor primitives • PCA • ICA • NNMF • SOBI

1 Introduction

P. F. Nunes (B) · I. Ostan · W. M. dos Santos · A. A. G. Siqueira
Department of Mechanical Engineering, Rehabilitation Robotics Laboratory, São Carlos School of Engineering, University of São Paulo, Av. Trabalhador São Carlense, 400, São Carlos, Brazil
e-mail: [email protected]

Human motor behavior is highly goal-oriented and this requires the Central Nervous System (CNS) to coordinate different aspects of action generation to achieve coordinated

mobility goals [1]. The CNS is able to easily plan and execute well-coordinated and refined motor behaviors despite facing multiple levels of challenges such as noisy sensory input, repetitive movements of the human musculoskeletal system, and complexity (that is, nonlinearity and time dependence) of the biomechanics of the body. Understanding how the CNS overcomes the computational burden of motor control has been the focus of studies in the field of human motor control and motor learning [2]. The main component of the CNS is the neuron, a type of highly specialized cell and essential element in its formation. Complex behaviors are built by the CNS through basic building blocks called Motion Motor Primitives [3]. Some studies define primitives as time-invariant spatial synergies, such as muscle co-contractions that happen simultaneously in the muscles of animals [4] and in humans [5]. The use of space-time synergies can significantly reduce the computational load during motor control over the CNS since, with a basic set of synergies, it is possible to reconstruct different conditions and tasks related to muscle movement, by simply defining when and how much to recruit from each synergy within the individual handling segment [6]. The storage by the CNS of motor primitives in the form of space-time muscle synergies is still a subject to be defined. In this work, the assignment of motor primitives in the form of space-time synergies will be analyzed with regards to different factorization techniques. Primitives can be defined as a basic set of signals that govern learned motor behaviors [7]. These primitives combine with different weights to form a minimum set of components, which is capable of reconstructing all possible muscle activations or position profiles [8]. Primitive extraction techniques aim for the reduction of dimensionality and elimination of unnecessary characteristics in order to facilitate the data analysis [9]. 
© Springer Nature Switzerland AG 2022
T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_95

Dimensionality reduction is one of the advantages when it comes to motor control, since a reduced number of control signals is used to trigger them with the appropriate combination. In addition, this combination is done in a linear manner, excluding nonlinear control of movements. Studies carried out by Tresch et al. [10] and Lambert-Shirzad and Van der Loos [11] compare several factorization methods for the extraction of muscle motor primitives. In [12], the most commonly used matrix factorization techniques (Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NNMF)) were employed, together with Second Order Blind Identification (SOBI), a method that had not previously been applied to extract synergies. Still in [12], the techniques were evaluated through the comparison between real data and synthetic signals generated from different configurations and known synergies; from the synthetic data, the extraction performance of each factorization technique was studied. In [13], the evaluation of the influence of the structure of an exoskeleton on the kinetic characteristics during overground walking was proposed using motor primitives. Inertial Measurement Units (IMUs) were used to obtain the kinematic variables that, together with the ground reaction forces, were applied to a biomechanical model to calculate the torques via OpenSim's Inverse Dynamics tool. By simulating these torques, a control strategy based on motor primitives could be proposed, as in [14]. This strategy is capable of assisting the recovery of post-stroke patients with hemiparetic gait by providing assistance only when needed. Therefore, it is possible to help individuals with motor impairment to move by themselves. Other works present similar strategies, as denoted in [15] and [16]. The works mentioned so far emphasize the relevance of motion motor primitives in both the study and analysis of motion, as well as a tool which can serve as inspiration for the development of new control strategies for assistive robotic devices.
The first step in working with motion motor primitives is to extract them from measured signals by means of a factorization technique. This article compares the performance of the most used matrix dimensionality reduction techniques (i.e., PCA, ICA, NNMF and SOBI). The original signals to be factorized were surface electromyography (EMG) signals. The quality of the factorization is important for an accurate estimate of the motor primitives of hemiparetic individuals and for the development of a more efficient control.

2 Motor Primitives Model

Although motion primitives were both proposed as an invariant, or synchronous, model in time [17] and also as a variant, or asynchronous, model in time [18], in this study the first approach is employed. This is due to the fact that recent research applied motor primitives in muscle analysis with

EMG and showed evidence that they are indeed synchronized with time [19]. Motor motion primitives are usually calculated in the literature as the sum of the products between the primitives p_i and their respective weights w_i, with i varying from 1 to N, the number of primitives, according to Eq. 1 [20]. This calculation extracts a linear combination between the primitives and the corresponding weights that minimizes the difference between the original and the reconstructed signals. As the primitives contribute to each pattern of muscle activity with the same weighting function w_i(t), the primitive model p_i is synchronous, without any variation in time. The calculation is performed using matrix factorization techniques on the multichannel EMG activity to estimate the muscle primitives and their weighting functions.

X(t) = \sum_{i=1}^{N} p_i \cdot w_i(t)    (1)

The factorization techniques (PCA, ICA, NNMF and SOBI) decompose the matrix X(t) into a set of motor primitives p_i (of dimension t × N, where N is the number of primitives) and a set of time-varying weighting functions w_i(t). Each primitive set will be related to a mode of activation or use of motion effectors during gait.
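Under the synchronous model of Eq. 1, reconstructing one channel's activation from its primitives and weighting functions reduces to a sum of products. The sketch below is illustrative (names and data layout are assumptions, with one scalar loading p_i and one weight time series per primitive):

```python
def reconstruct(primitives, weights):
    """Eq. 1: x(t) = sum_i p_i * w_i(t), synchronous (time-invariant) primitives.

    primitives: list of N scalars p_i (loading of each primitive on this channel)
    weights:    list of N time series w_i(t), each a list of samples
    Returns the reconstructed activation time series.
    """
    n_samples = len(weights[0])
    return [sum(p * w[t] for p, w in zip(primitives, weights))
            for t in range(n_samples)]
```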

2.1 ICA—Independent Component Analysis

Independent component analysis (ICA) is one of the most used techniques to separate received signals without any prior knowledge about their sources (also denoted blind sources). This technique aims to recover only the independent sources. ICA applications are numerous: it is employed in the medical field as a tool for brain imaging techniques [21] and in bioinformatics [22]. ICA is generally used as an extraction technique to find time-series characteristics [23], dealing with the non-Gaussianity of the data using higher-order statistics (kurtosis). Kurtosis indicates the non-Gaussianity of data and is a measure of the probability distribution of a real-valued random variable, generating base vectors that are statistically independent. In this work, FastICA, which uses signal kurtosis, was employed. According to [24], finding the local extrema of kurtosis is equivalent to estimating the independent non-Gaussian components. Based on this theory, [25] developed an algorithm based on a fixed-point iteration scheme that finds the local extrema of kurtosis from a linear combination of the observable variables. At the end of the ICA calculation, the linear combination v(t) = v_i · w^T attains the minimum and maximum kurtosis, where w is the weight vector and v_i is the whitened vector.
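The kurtosis measure that FastICA exploits can be estimated from samples as the normalized fourth central moment minus 3, which is zero for Gaussian data. A minimal sketch (the function name is illustrative):

```python
def excess_kurtosis(samples):
    """Sample excess kurtosis: m4 / var^2 - 3.

    Zero for Gaussian data, negative for sub-Gaussian sources,
    positive for super-Gaussian sources.
    """
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    m4 = sum((s - mean) ** 4 for s in samples) / n
    return m4 / var ** 2 - 3.0
```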

95 Analysis of Matrix Factorization Techniques for Extraction of Motion Motor Primitives

2.2 PCA—Principal Component Analysis

Principal Component Analysis (PCA) was proposed by [9] as a procedure that uses an orthogonal transformation to convert a set of observations of correlated variables into a set of uncorrelated variables, called principal components. PCA relies strongly on orthogonality: the first component is the one with the greatest variance, followed by the second component, orthogonal to the first, which has the second greatest variance. Therefore, the variance decreases with each component [26].
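As an illustration of the variance-maximizing idea, the first principal component of a 2-D point cloud can be obtained by power iteration on the sample covariance matrix. This is a sketch under assumed names, not the routine the authors used on the EMG data:

```python
def principal_component(data, iters=200):
    """First principal component (unit vector) of 2-D points via power
    iteration on the 2x2 sample covariance matrix."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    # Entries of the 2x2 covariance matrix
    cxx = sum((x - mx) ** 2 for x, _ in data) / n
    cyy = sum((y - my) ** 2 for _, y in data) / n
    cxy = sum((x - mx) * (y - my) for x, y in data) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        # Multiply by the covariance matrix, then renormalize
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v
```

For points lying along the line y = x, the iteration converges to the direction (1, 1)/√2, the axis of greatest variance.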

2.3 NNMF—Non-negative Matrix Factorization

Non-Negative Matrix Factorization (NNMF) is a statistical tool that imposes a non-negativity restriction on the extracted factors, using second-order statistics to find the vectors that best represent the data variance. The NNMF requires the matrices p_i and w_i(t) to be non-negative and iteratively chooses the sets of vectors under this constraint. Since the resulting optimization problem is not convex, the iterative updates are guaranteed to converge only to a local minimum, not necessarily the global one. NNMF was used by [27] to analyze the sets of muscle primitives with different weights, whereas [28] analyzed the impact that the number and selection of muscles have on the primitive analysis of a musculoskeletal model. The non-negativity restriction is well suited to EMG data, because muscle activation signals processed from EMG always have positive values.
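A minimal sketch of NNMF using the classic Lee–Seung multiplicative updates, which keep both factors non-negative by construction. This pure-Python version is purely illustrative (deterministic initialization, Frobenius objective); a real analysis would use a library implementation:

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def nnmf(X, k, iters=100, eps=1e-12):
    """Approximate X (m x n, non-negative) as W (m x k) times H (k x n)
    via Lee-Seung multiplicative updates minimizing ||X - W H||_F."""
    m, n = len(X), len(X[0])
    # Deterministic positive initialization (illustrative choice)
    W = [[1.0 + 0.1 * (i + j) for j in range(k)] for i in range(m)]
    H = [[1.0 + 0.1 * (i + j) for j in range(n)] for i in range(k)]
    for _ in range(iters):
        Wt = [list(r) for r in zip(*W)]
        num = matmul(Wt, X)
        den = matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(k)]
        Ht = [list(r) for r in zip(*H)]
        num = matmul(X, Ht)
        den = matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H
```

On an exactly rank-1 non-negative matrix, the updates recover a factorization whose product reproduces the input closely.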

2.4 SOBI—Second Order Blind Identification

Second Order Blind Identification (SOBI) uses the joint diagonalization of time-lagged covariance matrices to estimate the unknown components. Therefore, it can reveal more information about the temporal profile of electromyography activities. SOBI leads to uncorrelated components at these lags and is therefore sometimes considered an alternative to ICA, which is based on higher-order statistics [29]. In [12], SOBI was used for the first time for the extraction of motion motor primitives. Without reducing the dimensionality, it presented better results than the other factorization methods, proving to be an alternative for cases in which a small number of electrodes is employed [12].
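The time-lagged covariances that SOBI jointly diagonalizes can be estimated per pair of channels as below. This is a sketch with assumed names; in practice the lag τ ranges over several values and the resulting matrices are diagonalized jointly:

```python
def lagged_cov(x, y, lag):
    """Sample time-lagged covariance C_xy(tau) = E[(x(t)-mean_x)(y(t+tau)-mean_y)],
    the second-order statistic SOBI diagonalizes over several lags tau."""
    n = len(x) - lag
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    return sum((x[t] - mx) * (y[t + lag] - my) for t in range(n)) / n
```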

3 Experimental Results

To compare the matrix dimensionality reduction techniques most used in the extraction of motion primitives, the electromyography signals of a healthy individual who walked on the treadmill for 2 min at a speed of 3.3 km/h were used. The first and last steps were excluded from the analysis and 40 steps were considered for each condition. The signals of five muscles of the individual’s right leg were measured: rectus femoris (RF), vastus medialis (VM), tibialis anterior (TA), biceps femoris (BF), and gastrocnemius lateralis (GL). Figure 1 illustrates the process. The original signals of the five muscles were factored using the four techniques and the primitives along with their respective weights were obtained. Later, the signals were reconstructed from the combination of weights and primitives.

Fig. 1 Diagram of factorization and signal reconstruction steps



Fig. 2 Original (dashed line) and reconstructed signal (solid line) after factorization with the following methods: a PCA, b ICA, c NNMF, d SOBI


The number of motor primitives was chosen according to the reconstruction of the responses, in which the variance in all sets was greater than 95%. For example, Fig. 2 illustrates the original RF signals (dashed line) and the reconstructed signal (continuous line) with regards to each factorization technique. After the reconstruction of the signals, factorization was performed again in order to obtain the weights and primitives of the reconstructed signal. The primitives of the original and the reconstructed signal were combined to calculate Pearson's correlation coefficients (ρ), in order to assess the linear relationship between the two signals, as shown in Fig. 3. The relative error between the first motor primitive (regarding all five muscles) and the reconstructed one (by each factorization technique) was also calculated, as depicted in Fig. 4. The mean relative error was also analyzed regarding all four primitives covered by the factorization, as it can be seen in Fig. 5.
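The Pearson coefficient used to compare the original and reconstructed primitives follows the standard sample formula; a minimal, illustrative implementation:

```python
def pearson(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)
```

A value of +1 indicates a perfect positive linear relationship (as reported for PCA below), and values near +1 a strong but imperfect one.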

4 Conclusions

In this article, the most common matrix factorization techniques (PCA, ICA, NNMF and SOBI) were compared to extract motion motor primitives. There are numerous studies that use these techniques to extract motor primitives as a way to control myoelectric activities by means of biomechanical inspired control strategies. However, only two studies ([10]

and [30]) compared several factorization methods (excluding SOBI) to estimate the synergies without investigating the factors that affect the quality of the factorization employed. Only in [12] was SOBI used for the first time as a factorization method. To perform the analyses and comparisons in this work, EMG signals (5 channels) of an individual walking on a treadmill were decomposed using the four matrix factorization techniques mentioned before, to generate a set of primitives and weights. Thereafter, the extracted primitives were combined with these weights in order to reconstruct the signals and compare them with the original EMG ones. Figure 2 illustrates the envelopes of one channel (RF) with respect to the original and reconstructed signal after each factorization technique. It could be noted that PCA showed the greatest similarity between the original signal and the reconstructed one after the factorization. To confirm this, the two signals were compared through Pearson's correlation coefficient (ρ), in order to assess the linear relationship between them, as shown in Fig. 3. In the case of PCA, the correlation was ρ = +1, i.e. there was a perfect positive correlation between the two signals; in the ICA, the correlation was ρ = −0.5066, indicating only a moderate negative linear relationship; in NNMF, the correlation was ρ = +0.9882, i.e. there was a strong but not perfect correlation between the signals; the same happened with SOBI, which showed a positive correlation of ρ = +0.9956. By using the first motor primitive from each of the 5 channels obtained by the factorization techniques, the relative reconstruction errors (dashed line) of the original signal (dark blue) and reconstructed signal (light blue) weights were evaluated, as in Fig. 4. The NNMF was able to reconstruct the signals after factorization with an error of less than 50% in the first two channels and almost zero in the other three, while the signals reconstructed


Fig. 3 Correlation between the original signal and reconstructed signal after factorization with the following methods: a PCA, b ICA, c NNMF, d SOBI

Fig. 4 Relative error in percentage (dashed line) between the original signal (dark blue) and the reconstructed signal (light blue) after factorization with the following methods: a PCA, b ICA, c NNMF, d SOBI



Fig. 5 Mean relative error in percentage between original and reconstructed primitives as regards to each factorization method

by the ICA presented the largest reconstruction errors, which can be observed by analyzing the mean value of the relative errors of each factorization technique for the 4 primitives, as shown in Fig. 5. Note that the reconstructed signal after factorization by PCA and SOBI presents more reliable results, with a practically zero relative reconstruction error, making them viable matrix factorization techniques, although the SOBI algorithm presents better results for the cases in which one deals with a limited number of channels [12]. This work was approved by the Ethics Committee of the Federal University of São Carlos (Number 26054813.1.0000.5504).

Acknowledgements This work was supported by the Coordination for the Improvement of Higher Education Personnel (CAPES), Support Program for Graduate Studies and Scientific and Technological Research for Assistive Technology in Brazil (PGPTA), process no. 3457/2014, and Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), process no. 2019/05937-7.

Conflict of Interest The authors declare that they have no conflict of interest.

References
1. Turvey MT (1990) Coordination. Am Psychol 45:938
2. Guigon E (2011) Models and architectures for motor control: simple or complex. Motor Control 20:478–502
3. Giszter SF (2015) Motor primitives new data and future questions. Curr Opin Neurobiol 33:156–165
4. Overduin SA, d'Avella A, Carmena JM, Bizzi E (2012) Microstimulation activates a handful of muscle synergies. Neuron 76:1071–1077
5. Berger DJ, Gentner R, Edmunds T, Pai DK, d'Avella A (2013) Differences in adaptation rates after virtual surgeries provide direct evidence for modularity. J Neurosci 33:12384–12394
6. Ruckert E, d'Avella A (2013) Learned parametrized dynamic movement primitives with shared synergies for controlling robotic and musculoskeletal systems. Front Comput Neurosci 7:138
7. Degallier S, Ijspeert A (2010) Modeling discrete and rhythmic movements through motor primitives: a review. Biol Cybern 103:319–338
8. Nunes PF, Nogueira SL, Siqueira AAG (2018) Analyzing motor primitives of healthy subjects wearing a lower limb exoskeleton, pp 1–6
9. Pearson K (1901) On lines and planes of closest fit to systems of points in space. Philos Mag 2:559–572
10. Tresch MC, Cheung VC, d'Avella A (2006) Matrix factorization algorithms for the identification of muscle synergies: evaluation on simulated and experimental data sets. J Neurophysiol 95:2199–2212
11. Lambert-Shirzad N, Van der Loos HM (2016) On identifying kinematic and muscle synergies: a comparison of matrix factorization methods using experimental data from the healthy population. J Neurophysiol 117:290–302
12. Ebied A, Kinney-Lang E, Spyrou L, Escudero J (2018) Evaluation of matrix factorisation approaches for muscle synergy extraction. Med Eng Phys 57:51–60
13. Nunes PF, Santos WM, Siqueira AAG (2018) Influence of an exoskeleton on kinetic characteristics and muscles during the march using motion primitives, pp 1–7
14. Nunes PF, Santos WM, Siqueira AAG (2018) Control strategy based on kinetic motor primitives for lower limbs exoskeletons. IFAC-PapersOnLine 51:402–406
15. Garate VR, Parri A, Yan T et al (2016) A novel bioinspired framework using motor primitives for locomotion assistance through a wearable cooperative exoskeleton. IEEE Robot Autom Mag 1070:83–95
16. Ruiz Garate V, Parri A, Yan T et al (2016) Motor primitive-based control for lower-limb exoskeletons, pp 655–661
17. Saltiel P, Wyler-Duda K, D'Avella A, Tresch MC, Bizzi E (2001) Muscle synergies encoded within the spinal cord: evidence from focal intraspinal NMDA iontophoresis in the frog. J Neurophysiol 85:605–619
18. d'Avella A, Saltiel P, Bizzi E (2003) Combinations of muscle synergies in the construction of a natural motor behavior. Nat Neurosci 6:300
19. Hart CB, Giszter S (2013) Distinguishing synchronous and time-varying synergies using point process interval statistics: motor primitives in frog and rat. Front Comput Neurosci 7:52
20. Roh J, Rymer WZ, Beer RF (2015) Evidence for altered upper extremity muscle synergies in chronic stroke survivors with mild and moderate impairment. Front Hum Neurosci 9:6
21. Vigario R, Sarela J, Jousmaki V, Hamalainen M, Oja E (2000) Independent component approach to the analysis of EEG and MEG recordings. IEEE Trans Biomed Eng 47:589–593
22. Liebermeister W (2002) Linear modes of gene expression determined by independent component analysis. Bioinformatics 18:51–60
23. Levine E, Domany E (2001) Resampling method for unsupervised estimation of cluster validity. Neural Comput 13:2573–2593
24. Delfosse N, Loubaton P (1995) Adaptive blind separation of independent sources: a deflation approach. Signal Process 45:59–83
25. Hyvarinen A, Oja E (1997) A fast fixed-point algorithm for independent component analysis. Neural Comput 9:1483–1492
26. Abdi H, Williams LJ (2010) Principal component analysis. Wiley Interdiscip Rev Comput Stat 2:433–459
27. An Q, Ishikawa Y, Nakagawa J et al (2013) Muscle synergy analysis of human standing-up motion with different chair heights and different motion speeds, pp 3579–3584
28. Steele KM, Tresch MC, Perreault EJ (2013) The number and choice of muscles impact the results of muscle synergy analyses. Front Comput Neurosci 7:105
29. Belouchrani A, Abed-Meraim K, Cardoso JF, Moulines E (1997) A blind source separation technique using second-order statistics. IEEE Trans Signal Process 45:434–444
30. Lambert-Shirzad N, Van der Loos HM (2017) Data sample size needed for analysis of kinematic and muscle synergies in healthy and stroke populations. In: 2017 International conference on rehabilitation robotics (ICORR). IEEE, pp 777–782

Control Design Inspired by Motors Primitives to Coordinate the Functioning of an Active Knee Orthosis for Robotic Rehabilitation

P. F. Nunes, D. Mosconi, I. Ostan and A. A. G. Siqueira

Abstract

In order to assist physiotherapists during the rehabilitation of individuals, especially those with neuromusculoskeletal impairments caused by diseases such as stroke or by injuries, different types of lower-limb orthoses, and controllers for these orthoses, have been developed. This work aims to develop a robotic control strategy based on kinetic motor primitives, capable of assisting the recovery of patients with compromised movements. The primitives are calculated from the torques obtained by OpenSim's Inverse Dynamics, which takes as input the scaled model of the subject who used the orthosis, along with the positions provided by the encoders of the orthosis fixed to the knee joint of the subject, who performed the extension/flexion movement. The proposed strategy was evaluated using a Forward Dynamics algorithm, through which new knee position data were obtained.

Keywords

Rehabilitation robotics • Motor primitives • Exoskeleton • Lower limbs • Biomechatronics

1 Introduction

Deficiencies or anomalies in the neuromusculoskeletal (NME) system caused by diseases such as stroke, or by injuries provoked by trauma to the brain or spinal cord (or that in any other way directly affect the muscles), reduce the quality of life of patients. The number of stroke cases has increased significantly due to the growth of the elderly population, which by 2050 will reach 2 billion people aged 60 and over [1], when the elderly will represent 20% of the world's population [2]. Globally, stroke is the second leading cause of death and the third leading cause of long-term disability. There are approximately 33 million stroke survivors, and stroke is a disorder that more commonly affects older people [3,4]. Among the several anomalies it causes, stroke can lead to severe sequelae in the NME system, including damage to the neural areas that control the movements of the upper and lower limbs, caused by the suspension of the blood supply to the brain due to a clot (ischemic stroke) or a bleed (hemorrhagic stroke). Motor impairment affects about 50% of stroke survivors, who are referred to as hemiparetic patients [5]. Hemiparetic impairment is the tendency of the body to remain in an asymmetric postural position, distributing the lower body weight over the non-paretic side. This asymmetry and the difficulty in transferring weight to the affected side interfere with the ability to maintain postural control, preventing the orientation and stability needed to perform trunk and limb movements [6]. The hemiparetic patient does not present the swing and support phases that define normal gait, and this increases the risk of falls, since movements are uncontrolled and balance and proprioception are impaired [7]. Therefore, it is important that people physically compromised not only by stroke, but also by other deficiencies or anomalies of the NME system, be able to take care of themselves and perform simple everyday tasks, such as walking. This goal has been pursued by many research groups in the field of rehabilitation robotics [8–10], and several studies have been published with the intention of rehabilitating and improving the quality of life of patients with some type of disability.

These studies suggest that the intensity of physiotherapeutic practice, expressed by the number of repetitions, and the specific training of the task are the main drivers of motor rehabilitation after neural damage [11–14]. The improvement in the quality of life of these patients happens because the cerebral cortex is a set of highly interconnected neuronal cells with the ability to adapt in response to changing environments. This ability to adapt is a fundamental property of nervous tissues and forms the basis for learning; it is called neuroplasticity. Neuroplasticity is the ability to adapt and learn in an experience-dependent way, through repetition [15]. It is the basic mechanism underlying the improvement of functional outcome after stroke, and the recovery of these patients can be promoted through strengthening exercises of the affected limbs and task training [16]. Therefore, the development of new technologies is crucial to rehabilitate the NME system of post-stroke patients and to promote a society with active and sustainable aging. New technologies to rehabilitate the neuromusculoskeletal system have been proposed and developed by research groups in the field of rehabilitation robotics since the beginning of the 21st century. Recently, a large number of lower limb exoskeletons and active orthoses for care and rehabilitation have been developed and reported in the literature [17]. Exoskeletons can be used for assistance, with the main objective of supporting patients who have suffered complete damage to the spinal cord and cannot recover their movements. In this paper, the modular exoskeleton for lower limbs presented in [18] was used to evaluate its influence on torques and motion primitives. The device is designed to provide partial support for knee flexion and extension by exerting precise torques at this joint. One of the most important tasks in biomechanics is to accurately determine the torques that operate on human joints during movement. Therefore, the objective of this work is to develop an adaptive control strategy based on motor primitives and on knowledge of the kinetic characteristics of the user, in order to produce an assistive torque that aids in achieving the desired movement.

P. F. Nunes (B) · D. Mosconi · I. Ostan · A. A. G. Siqueira
Department of Mechanical Engineering, Rehabilitation Robotics Laboratory, São Carlos School of Engineering, University of São Paulo, Av. Trabalhador São Carlense, 400, São Carlos, Brazil
e-mail: [email protected]

© Springer Nature Switzerland AG 2022
T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_96
The concept of assistance based on motor primitives using an assistive exoskeleton was explored by Garate et al. [19] through two distinct strategies. In the first, the primitives are identified and combined by weights to produce the desired assistive torque profiles; in the second, the motor primitives are identified to be used as neural stimuli in a virtual model of the musculoskeletal system, which produces the desired assistive torque profiles. In [20], the influence of the exoskeleton structure on the kinetic characteristics is evaluated through the relations between motor primitives and their respective weights. Inertial Measurement Units (IMUs) were used to obtain the kinematic variables that, together with the ground reaction forces, were applied to a biomechanical model to calculate the torques using OpenSim's Inverse Dynamics. The present work aims to develop a rehabilitation control strategy based on kinetic motor primitives. The robot torque, calculated through these primitives, was applied again in a computational interaction model based on a neuromusculoskeletal model from OpenSim, together with position vectors obtained through the encoders of the orthosis used by the subject, in order to obtain new angular positions of the knee joint.

2 Experimental Procedure

To evaluate the influence of the orthosis on the profiles of kinetic activity during flexion/extension of the knee joint, a set of experiments was performed. First, the user wore the orthosis and sat on a high chair, so that the feet did not touch the floor. The orthosis is fixed to the chair so that the user does not feel the weight of the actuator, which is positioned in the part of the orthosis fixed to the thigh. The user then performed knee flexion and extension movements, following a sinusoidal trajectory ranging from −90° to 0° with a period of 8 s. The orthosis position data are sent to the Inverse Dynamics Tool of OpenSim, as described in Sect. 2.2. During the execution of the experimental procedure, the subject received visual feedback on a computer screen about the movement to be performed. The OpenSim Gait2392 model, scaled with the anthropometric data and mass of the individual using the orthosis, was used as input to the Inverse Dynamics Tool, along with the subject's joint positions. The torques were calculated for two cases, with and without the orthosis. The desired robot torque was calculated based on the ratio between the weights of the motor primitives in the two cases, as described in Sect. 3. The desired robot torque was then sent to a Forward Dynamics algorithm developed in MATLAB®, along with the orthosis position data, in order to calculate the new position vectors.
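The sinusoidal reference described above (−90° to 0°, period of 8 s) can be sketched as follows; the 100 Hz sampling rate and the cosine phasing (motion starting at −90°) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Knee flexion/extension reference: -90 deg to 0 deg with an 8 s period.
# Sampling rate and phase are assumptions for illustration only.
T = 8.0                                   # movement period, s
t = np.arange(0.0, 2 * T, 0.01)           # two cycles sampled at 100 Hz
theta_ref = -45.0 - 45.0 * np.cos(2 * np.pi * t / T)  # degrees, stays in [-90, 0]
```

A reference of this shape is what the subject tracks through the on-screen visual feedback, and what the encoders of the orthosis record as θr.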

2.1 Exoskeleton—ExoTao

In this study, only the knee joint of the right leg during the flexion/extension movement is considered. The orthosis is fixed to a chair so that the user does not feel the weight of the actuator, which is positioned in the part of the orthosis attached to the thigh. This configuration decouples the inertia of the motor/reduction gear and other output nonlinearities, besides isolating the shock introduced by the load. Another important property of this type of actuator is that the elastic element can be used as a torque sensor, given the linear relationship between spring deflection and torque. Figure 1 shows the orthosis with encoders and the rotary series elastic actuator (rSEA). The rSEA control hardware consists of an EPOS Positioning Controller, manufactured by the Swiss company Maxon Motor, and a CAN board (NI PCI-8513 CAN/XS), manufactured by the North American multinational National Instruments, installed in a conventional computer. The interface between the devices was made over a Controller Area Network (CAN) through the CANopen communication protocol, with the data transmission rate between the devices configured to 500 kbit/s. Data acquisition from the hip and ankle encoders was done through the Serial Peripheral Interface (SPI) protocol.

Fig. 1 Position of the orthosis. A user wearing the active knee orthosis employed in this work

2.2 Inverse Dynamics

The computational neuromusculoskeletal model used in the development of this study corresponds to the Gait2392 model from OpenSim. The dimensions of the muscle model can be modified; the standard, unscaled model corresponds to a subject 1.80 m tall with a mass of 80 kg. A scaled model was generated according to the anthropometric measures and body mass of the individual who used the orthosis. This model is one of the inputs of the Inverse Dynamics tool for the calculation of torques with and without the orthosis. The Inverse Dynamics (ID) tool determines the set of joint torques responsible for a particular movement, making use of the kinematics that describe it (position, velocity and acceleration) and of the external forces applied to the model. In this work, for the calculation of ID, the angular position data θr of the joints were obtained from the orthosis [21]. The classical equation of motion used to find the joint torques can be written as:

τ = M(θ)θ̈ + C(θ, θ̇) + G(θ),  (1)

where M(θ) is the inertia matrix, C(θ, θ̇) is the vector of Coriolis and centripetal forces, G(θ) is the vector of gravitational forces, and τ is the vector of knee joint torques, computed with (τw) and without (τwo) the orthosis.

2.3 OpenSim

OpenSim (http://opensim.stanford.edu) is free software that allows the construction, exchange and analysis of neuromusculoskeletal system models and dynamic movement simulations. It was introduced in 2007 at the conference of the American Society of Biomechanics and has since been used in a wide variety of applications, including biomechanical research, medical device design, rehabilitation and orthopedic sciences, neuroscience, ergonomic analysis, robotics, biology and education. Several devices and tools have been employed in the search for parameters that provide better results, such as motion capture technology, dynamometers, force platforms and medical images which, used in conjunction with neuromusculoskeletal models, provide a solid basis for biomechanical analysis [22]. In this sense, the use of models makes it possible to know analytically each musculoskeletal and neurophysiological component, as well as the functional relations among its variables, allowing one to abstract how the input variables are processed by each component to produce an output [23]. The model used in the development of this study corresponds to the OpenSim Gait2392 model, a computational model of the three-dimensional human musculoskeletal system with 23 degrees of freedom and 92 muscle-tendon actuators, representing 76 muscles of the lower limbs and trunk. This model was scaled with the anthropometric data and weight of the individual who used the active knee orthosis, as seen in Fig. 2. The OpenSim software was chosen to model the human limbs because it permits the anthropometric parameters of each patient to be adjusted in an intuitive manner whenever a new simulation is performed. Furthermore, besides having already been employed in other works from the same laboratory [24], the models available for OpenSim are adequate for working with muscle activation, which is the aim of future work by the group.
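Equation (1), restricted to the single knee joint considered in this work, can be illustrated numerically. The sketch below is a minimal stand-in for the scaled Gait2392 model: a single pendulum-like rigid segment whose mass, centre-of-mass distance and inertia are assumed round numbers, not the subject's values.

```python
import numpy as np

# Scalar inverse dynamics, tau = M(theta)*theta_dd + C(theta, theta_d) + G(theta),
# for one pendulum-like segment; all parameters are illustrative assumptions.
m, l, I0 = 4.0, 0.25, 0.35   # segment mass (kg), CoM distance (m), inertia (kg m^2)

def inverse_dynamics(theta, theta_d, theta_dd):
    """Joint torque required to produce the given kinematics (angles in rad)."""
    M = I0                            # inertia term (constant for a single segment)
    C = np.zeros_like(theta)          # Coriolis/centripetal terms vanish for one joint
    G = m * 9.81 * l * np.sin(theta)  # gravitational term
    return M * theta_dd + C + G

# Kinematics of the -90 deg..0 deg sinusoidal movement with an 8 s period
t = np.linspace(0.0, 8.0, 801)
w = 2.0 * np.pi / 8.0
theta = np.deg2rad(-45.0 - 45.0 * np.cos(w * t))
theta_d = np.deg2rad(45.0) * w * np.sin(w * t)
theta_dd = np.deg2rad(45.0) * w**2 * np.cos(w * t)
tau = inverse_dynamics(theta, theta_d, theta_dd)
```

With these assumed parameters the gravitational term dominates the torque profile, which mirrors why accurate joint-torque estimation is so central in biomechanics.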

2.4 Forward Dynamics

Forward Dynamics (FD) determines the set of kinematic data obtained from the integration of the differential equations that define the neuromusculoskeletal model dynamics when a torque is applied to the model. In this work, a Forward Dynamics routine was developed in MATLAB® in order to obtain the new angular position of the knee when the torque calculated from the motor primitives is applied. A block diagram representing the algorithm is depicted in Fig. 3. The virtual model of the active orthosis presented in Fig. 3 is a coordinate actuator from OpenSim, coupled to the right knee joint of the model, whose transfer function is described by Eq. (2):

τ = u(t) · τoptm,  (2)

where τoptm is the maximum (optimal) torque that the actuator can apply (in this case, 250 N m). The variable τr is the torque yielded by the motor primitives, part of the feedforward loop, and θr is the angular position collected experimentally. A feedback PID loop estimates the torque exerted by the patient in order to, together with the motor-primitive torque, compose the total torque required to perform the movement. θFD is the angular position calculated by the Forward Dynamics algorithm. To calibrate the PID gains, an experimental procedure (as described above) was conducted: the torques and positions measured from the robot were used in the Forward Dynamics algorithm, and the gains were adjusted empirically until θFD converged to θr. After calibration, a second experimental procedure was conducted to verify that the gains found were appropriate (which was confirmed). The PID gains are Kp = 6.25, Ki = 5.75 and Kd = 5.25. The simulations were performed on a computer with an Intel® Core™ i7-5500 2.40 GHz processor, 8.00 GB of RAM, a 2.00 GB dedicated video card, and Windows 10 Home Single Language 64 bits. OpenSim version 3.3 and MATLAB® R2017b were the platforms on which the simulations were run.

Fig. 2 The neuromusculoskeletal model used in this work presenting the minimum (θr = −90°) and maximum (θr = 0°) angular position of the sinusoidal movement developed by the user

Fig. 3 Block diagram depicting the Forward Dynamics routine developed in MATLAB®
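The feedforward/feedback structure of Fig. 3 can be sketched with a simplified single-joint plant standing in for the OpenSim model. The plant parameters (I, B, K) and the feedforward term are assumptions for illustration; only the PID gains are the calibrated values reported above.

```python
import numpy as np

# Simplified single-joint stand-in for the OpenSim knee model. The plant
# parameters I, B, K and the feedforward term are illustrative assumptions;
# the PID gains are the calibrated values reported in the text.
I, B, K = 0.35, 0.05, 2.0          # inertia, damping, gravity-like stiffness
KP, KI, KD = 6.25, 5.75, 5.25      # PID gains from the paper

dt = 0.005
t = np.arange(0.0, 8.0, dt)
theta_r = np.deg2rad(-45.0 - 45.0 * np.cos(2.0 * np.pi * t / 8.0))  # reference, rad

theta, omega = theta_r[0], 0.0
e_int, e_prev = 0.0, 0.0
theta_fd = np.empty_like(t)
for k in range(t.size):
    e = theta_r[k] - theta
    e_int += e * dt
    tau_fb = KP * e + KI * e_int + KD * (e - e_prev) / dt  # feedback ("patient") torque
    e_prev = e
    tau_ff = K * theta_r[k]          # placeholder for the motor-primitive feedforward torque
    # Forward dynamics step: solve for the acceleration and integrate
    alpha = (tau_fb + tau_ff - B * omega - K * theta) / I
    omega += alpha * dt
    theta += omega * dt
    theta_fd[k] = theta

rms_deg = float(np.rad2deg(np.sqrt(np.mean((theta_fd - theta_r) ** 2))))
```

As in the paper's calibration, the quality of the gains can be judged by how closely θFD converges to θr, here summarized by the RMS tracking error in degrees.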

3 Control Developed from Primitive Torques

Motor primitives consist of the sum over time of the product between primitive curves p_i(t) and their respective weights w_i, i = 1, …, M, where M is the number of primitives. First, the motor primitives for the torques of the subject not wearing the exoskeleton were extracted through Principal Component Analysis (PCA). This technique extracts a linear combination of primitives and corresponding weights that minimizes the difference between the original and reconstructed signals. Here, only the motor primitives of the knee joint will be analyzed, which are reconstructed by Eq. (3):

τ^wo(t) = Σ_{i=1}^{M} p_i^wo(t) · w_i^wo,  (3)

where p_i^wo(t) and w_i^wo are, respectively, the primitives and weights of the subject not wearing the exoskeleton (the superscript wo denotes "without exoskeleton"). The desired robot torque is computed based on the ratio (p_i) between the weights of both cases, wearing the exoskeleton (superscript w) and not wearing the exoskeleton (superscript wo), as in Eq. (4):

p_i = w_i^w / w_i^wo.  (4)

Hence, the robot torque is given by Eq. (5):

τ_R(t) = Σ_{i=1}^{M} p_i^wo(t) (1 − p_i) w_i^wo.  (5)

The advantage of using the same primitive curves to calculate the weights for the two cases lies in the fact that the robot torque will assist the user only where the weight of the motor primitive is smaller (revealing the deficiency of the individual in a certain movement).
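The pipeline of Eqs. (3)–(5) can be sketched end to end with synthetic torque cycles. The data, the number of repetitions, and the SVD-based PCA implementation below are illustrative assumptions, not the paper's recordings or exact algorithm.

```python
import numpy as np

# Synthetic torque cycles standing in for the recorded knee torques
# (rows = movement repetitions, columns = time samples); purely illustrative.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.0, 200)
shapes = np.c_[np.sin(2 * np.pi * t / 8), np.cos(2 * np.pi * t / 8)]  # latent curves
tau_wo = rng.normal([1.0, 0.5], 0.05, (10, 2)) @ shapes.T   # without exoskeleton
tau_w = rng.normal([1.4, 0.8], 0.05, (10, 2)) @ shapes.T    # with exoskeleton

# PCA via SVD: rows of vt are the primitive curves p_i(t) of Eq. (3)
u, s, vt = np.linalg.svd(tau_wo - tau_wo.mean(axis=0), full_matrices=False)
M = 2
p_wo = vt[:M]                                   # primitives, shape (M, time)

# Mean weights on the SAME primitive curves for both conditions
w_wo = (tau_wo @ p_wo.T).mean(axis=0)
w_w = (tau_w @ p_wo.T).mean(axis=0)

ratio = w_w / w_wo                              # Eq. (4): ratio of weights
tau_R = (p_wo.T * ((1.0 - ratio) * w_wo)).sum(axis=1)  # Eq. (5): robot torque over time
```

Projecting both conditions onto the same primitive basis is what makes the ratio of Eq. (4), and hence the selective assistance of Eq. (5), meaningful.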

4 Results

In this section we present the results obtained from the set of experiments performed to evaluate the influence of the orthosis on the kinematic and kinetic quantities during the knee extension/flexion movement. Figure 4 illustrates the motor primitives and weights when the user wore and did not wear the orthosis. The primitives were calculated so as to represent at least 95% of the variance with less than 4% reconstruction error. This led to two components, which represent 98% of the variance. Note that the weights for the case without the orthosis are smaller than the weights for the case with it, resulting in a lower torque level (in absolute values).

Fig. 4 (Left) Motor primitive torques and (right) motor primitive weights without (green) and with (red) the exoskeleton

Fig. 5 Angular positions of the knee

Figure 5 shows the angular positions of the knee measured by the orthosis (position reference), in red, and the actual position of the knee, aided by the robot torque during flexion/extension, in blue. It can be noted that the amplitudes of the two movements were equivalent. The position RMS error was 1.26°. The largest positive position error was 2.55° and the largest negative error was −1.54°. The knee reference position (red line) was used as input to the OpenSim Inverse Dynamics tool to calculate the knee joint torques. Figure 6 shows the results, in which the black line illustrates the patient torque obtained through the feedback loop, the green line represents the robot torque obtained from the motor primitives, and the pink line illustrates the total torque, i.e., the sum of the robot torque and the patient torque. The simulation took 4.42 min to complete.


Based on these results, and with the aim of acting on the "weaker" primitive, the control strategy was proposed to help the knee flexion/extension of patients with some type of motor impairment in the lower limbs. The results showed that robot assistance based on motor primitives was effective in recovering the position of the knee joint, even while the subject was using the exoskeleton.

Fig. 6 Total torque, robot torque and user torque

Fig. 7 Position error

Figure 7 illustrates the position error, calculated as the difference between the desired position θd and the new position θn.

5 Conclusion

In this work, a control strategy based on motion motor primitives was used for a knee orthosis. Torques with and without the orthosis were obtained through the OpenSim Inverse Dynamics tool, using as input the scaled model of the subject who used the orthosis along with the position vectors from the encoders. From these torques, the robot torque was calculated through the motor primitives and inserted back into OpenSim for the calculation of new position vectors through Forward Dynamics. The primitives were calculated using Principal Component Analysis (PCA). The PCA algorithm was able to reconstruct the responses with a high degree of fidelity: with only two motor primitives, the explained variance in all sets of signals was greater than 98%. When analyzing the weights of the motor primitives, it was noticed that when the subject was wearing the orthosis the force exerted by him was greater than without it.

Acknowledgements This work was supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES), the Support Program for Graduate Studies and Scientific and Technological Research for Assistive Technology in Brazil (PGPTA) under grant 3457/2014, and the São Paulo Research Foundation (FAPESP) under grant no. 2019/05937-7.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Organization WH (2015) World health statistics 2015. World Health Organization
2. Castles S, De Haas H, Miller MJ (2013) The age of migration: international population movements in the modern world. Palgrave Macmillan
3. Lozano R, Naghavi M, Foreman K et al (2012) Global and regional mortality from 235 causes of death for 20 age groups in 1990 and 2010: a systematic analysis for the Global Burden of Disease Study 2010. The Lancet 380:2095–2128
4. Murray CJ, Barber RM, Foreman KJ et al (2015) Global, regional, and national disability-adjusted life years (DALYs) for 306 diseases and injuries and healthy life expectancy (HALE) for 188 countries, 1990–2013: quantifying the epidemiological transition. The Lancet 386:2145–2191
5. Mackay J, Mensah GA, Greenlund K (2004) The atlas of heart disease and stroke. World Health Organization
6. Ikai T, Kamikubo T, Takehara I, Nishi M, Miyano S (2003) Dynamic postural control in patients with hemiparesis. Am J Phys Med Rehabil 82:463–469
7. Sommerfeld DK, Eek EUB, Svensson AK, Holmqvist LW, von Arbin MH (2004) Spasticity after stroke: its occurrence and association with motor impairments and activity limitations. Stroke 35:134–139
8. Lum P, Burgar C, Shor P, Majmundar M, Loos M (2002) Robot-assisted movement training compared with conventional therapy techniques for the rehabilitation of upper-limb motor function after stroke. Arch Phys Med Rehabil 83:952–959
9. Krebs HI, Dipietro L, Levy-Tzedek S et al (2008) A paradigm shift for rehabilitation robotics. IEEE Eng Med Biol Mag 27:61–70
10. Contreras-Vidal JL, Bhagat NA, Brantley J et al (2016) Powered exoskeletons for bipedal locomotion after spinal cord injury. J Neural Eng 13
11. Kwakkel G, Wagenaar RC, Twisk JW, Lankhorst GJ, Koetsier JC (1999) Intensity of leg and arm training after primary middle-cerebral-artery stroke: a randomised trial. The Lancet 354:191–196
12. Committee ESOEE, Committee EW et al (2008) Guidelines for management of ischaemic stroke and transient ischaemic attack 2008. Cerebrovasc Dis 25:457–507
13. Langhorne P, Bernhardt J, Kwakkel G (2011) Stroke rehabilitation. The Lancet 377:1693–1702
14. Brkic L, Shaw L, van Wijck F et al (2016) Repetitive arm functional tasks after stroke (RAFTAS): a pilot randomised controlled trial. Pilot Feasibil Stud 2:50
15. Wieloch T, Nikolich K (2006) Mechanisms of neural plasticity following brain injury. Curr Opin Neurobiol 16:258–264
16. Pekna M, Pekny M, Nilsson M (2012) Modulation of neural plasticity as a basis for stroke rehabilitation. Stroke 43:2819–2828
17. Chen B, Ma H, Qin LY et al (2016) Recent developments and challenges of lower extremity exoskeletons. J Orthop Transl 5:26–37
18. dos Santos WM, Nogueira SL, de Oliveira GC, Peña GG, Siqueira AAG (2017) Design and evaluation of a modular lower limb exoskeleton for rehabilitation. In: IEEE-RAS-EMBS international conference on rehabilitation robotics, London, UK, pp 447–451
19. Garate VR, Parri A, Yan T et al (2016) A novel bioinspired framework using motor primitives for locomotion assistance through a wearable cooperative exoskeleton. IEEE Robot Autom Mag 1070:83–95
20. Nunes PF, Nogueira SL, Siqueira AAG (2018) Analyzing motor primitives of healthy subjects wearing a lower limb exoskeleton, pp 1–6
21. Peña GG, Consoni LJ, dos Santos WM, Siqueira AA (2019) Feasibility of an optimal EMG-driven adaptive impedance control applied to an active knee orthosis. Robot Auton Syst 112:98–108
22. Lloyd DG, Besier TF (2003) An EMG-driven musculoskeletal model to estimate muscle forces and knee joint moments in vivo. J Biomech 36:765–776
23. Sartori M, Lloyd DG, Farina D (2016) Neural data-driven musculoskeletal modeling for personalized neurorehabilitation technologies. IEEE Trans Biomed Eng 63:879–893
24. Mosconi D, Nunes PF, Siqueira AAG (2018) Modeling and control of an active knee orthosis using a computational model of the musculoskeletal system. J Mechatron Eng 1:12–19

Mechanical Design of an Active Hip and Knee Orthosis for Rehabilitation Applications

J. P. C. D. Freire, N. A. Marafa, R. C. Sampaio, Y. L. Sumihara, J. B. de Barros, W. B. Vidal Filho and C. H. Llanos

Abstract

This paper presents the mechanical design of an active hip and knee orthosis for rehabilitation applications. The exoskeleton device consists of two motorized joints providing 2 DOF per leg and can support a non-standard critical user (1.90 m in height and 100 kg in weight) in rehabilitation gait conditions (gait speed ≈ 0.3 m/s). The work's methodology starts with a literature review to explore relevant orthotic projects already developed. Then, project requirements are defined, including critical user conditions, joint restrictions, and rehabilitation gait torques and angular motion. The exoskeleton's structure is modelled under critical static and dynamic conditions and solved analytically for static failure and stiffness criteria. The actuation drive components are designed based on numerical modelling and are also solved for static failure and stiffness criteria. The project's mechanical components are then designed following these results once they reach acceptable safety levels. The mechanical components can be subdivided into three main groups: lumbar support, limb structural links and actuation drives. These groups are integrated to construct a prototype of the orthotic device. The assembled prototype presented the aimed robustness when tested for basic motion control associated with an equivalent rehabilitation gait pattern with artificial loads. The test trials showed low levels of induced deflection, and the actuation drive was able to replicate the required torques, enabling the orthosis to successfully meet the intended mechanical prerequisites.

J. P. C. D. Freire (B) · N. A. Marafa · Y. L. Sumihara · J. B. de Barros · W. B. Vidal Filho · C. H. Llanos
Mechanical Engineering Department, Universidade de Brasília, Campus Darcy Ribeiro, Asa Norte, Brasília, Brazil
R. C. Sampaio
Faculty of Gama, Universidade de Brasília, Brasília, Brazil

Keywords

Mechanical design • Active orthosis • Hip-knee orthosis • Rehabilitation

1 Introduction

For many decades, orthoses have played an important role in the field of rehabilitation, where many paraplegic, hemiplegic and spinal-cord-injured patients have benefited from orthotic devices. Since the first US patent in 1935 [1], there have been many academic publications dealing with their development and control for various applications. In [2], an active bilateral orthosis intended for paraplegic assistance was developed. In [3], the goal was to design a simple, affordable and efficient semi-active hybrid orthotic system for the support and facilitation of unilateral pathological human walking; such a solution supports the gait function of neurologically injured patients towards safer and more physiological patterns. In [4], a wearable active knee orthosis for walking assistance was developed, consisting of an active knee joint and a double-tendon-sheath transmission system aimed at reducing the weight of the orthosis and achieving better maneuverability. In [5], the main focus was the minimization of the problems of speed, noise and weight associated with orthoses; to that end, an externally powered model was developed using a bi-articular muscle mechanism with a bilateral-servo actuator. The number of paraplegic patients is significant. For instance, in the United States alone, there are approximately 5.6 million paraplegic people [6]. In Brazil, according to the Instituto Brasileiro de Geografia e Estatística (IBGE), in 2015 about 6.7% of the Brazilian population showed some type of disability [7]. It can be inferred that a considerable part of this population consists of paraplegic patients. Due


to this limitation, many people lose their quality of life and even professional opportunities, a condition which may lead to social exclusion. Addressing this problem, we propose a mechanical design and an early-stage prototype of an active hip and knee orthosis. In Brazil, lower limb exoskeleton models have already been developed by some projects, such as [8], among others. However, we believe there is still room for new projects to emerge. Since this is the first work in the field of rehabilitation robotics from the research team of the Laboratory of Embedded Systems and Integrated Circuit Applications (LEIA) of the Universidade de Brasília, Brazil, the early stages of a mechanical design project are developed. First, the project's physical requirements are established using the Brazilian population's anthropometric proportions and physical conditions to determine the parameters of a non-standard critical user. Additional data retrieved from a standard gait pattern are also taken into account to define the torques and motion pattern that the actuation drive should provide for a rehabilitation gait. With the boundary conditions defined, the system's static and dynamic models are solved using static failure and stiffness criteria. The resulting mechanical conditions are used to design the mechanical components and select their respective materials. A prototype of the assembled robotic exoskeleton was constructed and is presented in the results section. The conclusion section brings the final view of the orthosis' mechanical components and of the overall project.

2

Methods

2.1

Physical Requirements

Aiming at the design of a rehabilitation exoskeleton for use with spinal cord injury (SCI) individuals in a rehabilitation gait protocol, which operates at low speeds (≤ 0.3 m/s), there is no need for motion drives actuating in the frontal and transverse planes. Therefore, it is possible to neglect the gaitinduced torques in the non-sagittal planes. That simplification is based on two considerations: First, the exoskeleton only provides motion in the sagittal plane. Second, the user does not apply any extra loads or torques to the structure, except the ones caused by the inertia and weight of the leg. Also, due the low gait speed, around 0.3 m/s, the inertial effects are expected to be minimal.

2.2

Brazilian Population Spectrum and Critical User

The data which provided the information about the average height and weight of the Brazilian population was obtained

from the Brazilian Institute of Geography and Statistics IBGE based on a survey carried out in 2008. According to this survey, almost half of the Brazilian population (49%) aged 20 or over is overweight. This data is part of the study of anthropometry and nutritional status of children, adolescents and adults in Brazil released in 2009 and which was part of the 2008/2009 family budget survey [9]. The survey showed that, from 20 to 24 years of age, the medians of height and weight of the Brazilian men were 1.73m and 69.4kg while that of women were 1.61m and 57.8kg respectively. However, in order to assess most of the users dimension spectrum, a non-standard critical user was defined. In such a user, due to body dimensions and weight, the internal stresses and deflections are higher due also forces, so if the orthosis can withstand the critical user, it can also sustain sub-critical users. Therefore, the critical user was set standing 1.90m tall and weighing 100 kg.

2.3

Anthropometric Proportions of Human Body

Anthropometry is the basis of exoskeleton and orthosis design since the device is aimed to corporate with human body parts. In this work’s case, the lower limb sections (hip and knee) that are considered. As presented in many exoskeletons projects development, such as in [10], the technique of anthropometry was also used for dimensioning the structural components such as the: (a) waist length of the anthropomorphic hip, and (b) limb-line links lengths which include the hipknee and knee-ankle anthropomorphic links. While the links lengths are easily defined for a standard population due its direct relation with the user’s height, the waist length is more variable because its additional relation with the user’s weight and physiological proportions. Based on the standard properties of human body provided from [11], the structural components dimensions of the exoskeleton were calculated, which include the dimensions of: (a) waist length, (b) Hip-Knee link length, (c) Knee-Ankle link length. For the calculation of these parameters, the critical user was used. The dimensions of the anthropomorphic links are shown in Table 1.

2.4

Mechanical Design Requisites

For the remaining gait-related characteristics, the actuation torques required for the motion drives of the hip and the knee joints at the sagittal plane in a rehabilitation gait condition are obtained from [12]. There, the critical torques for the hip and knee joints are normalized by the used body mass and can be further divided into the absolute minimum and maximum torques for each joint. For both hip and knee joints, the

97 Mechanical Design of an Active Hip and Knee Orthosis for Rehabilitation Applications

639

Table 1 Mechanical Design Requisites Component

Requisite established

User’s max. height User’s max. weight Rehabilitation gait speed Waist length Hip-knee link length Knee-Ankle link length Number of DOF hip joint Number of DOF knee joint Hip joint angular range

1900 mm 100 kg 0.3 m/s 435 mm 445 mm 435 mm 1 DOF 1 DOF −20◦ (Flexion) 120◦ (Extension) 0◦ (Flexion) 135◦ (Extension) 0.07 Nm/kg (Flexion) −0.28 Nm/kg (Extension) 0.14 Nm/kg (Flexion) −0.25 Nm/kg (Extension) 28 Nm 25 Nm

Knee joint angular range Hip joint torque range Knee joint torque range Absolute max. torque Hip Absolute max. torque Knee

absolute maximum torque is found during extension. Table 1 shows the summary of the requirements for the mechanical design in this project.

2.5

Exoskeleton’s Mechanical Project

With the exoskeleton’s physical requirements defined, there next step was to design its mechanical components. Static and dynamic modeling of the exoskeleton’s mechanical structure are developed considering separate individual sections. In this approach, each one of the individual sections represents an anatomical link of the lower body, ranging from the lumbar column to the ankle joint. The ankle itself is not modeled as a joint in this work, since the orthosis is aimed for hip and knee actuation. Therefore, the system has five different individual sections, but can also be divided in two main groups: (1) leg section and (2) lumbar and hip section, being both connected through the hip joint. The main group systems are represented as free-body diagrams in Fig. 1, where, in order to present a reduced ad clearer view of the whole model, the screw-joint variables were omitted. In the free-body diagrams, the letters P and R are force variables, M and T are moment/torque variables, l and w are dimensional variables and θh and θk are the joint rotation angles. The association between the mechanical model and the physical requirements, where the anatomical dimensions and

gait torques correspond to the system’s boundary conditions, enables solving it for the ultimate safety factors of the overall project. Such process is done using an analytical approach associated with an iterative algorithm coded in MATLAB environment. The first outputs of such process are the reaction forces/moments in all individual mechanical components. Once defined those intermediate variables, material and structure’s geometric profile are taken into account to generate the second set of outputs, which are the stresses and deflections in the different sections of the system. With those variables defined, static failure and stiffness criteria are applied to ensure the safety of all the system’s mechanical components.

3

Results and Discussion

From the methods, concepts and assumptions presented in section II, the orthosis mechanical components were designed. They can be divided in two major classes: (1) Actuation direct drive (ADD) components and (2) Structural components. The first class includes all the components that are responsible for the actuation or transmission of the joint torques provided by the actuator. The second class contains the anthropomorphic-based components that surround and provide support to the user’s body. It is important to state two remarks relative to this first prototype. First, one of the aims was to validate the actuator drive construct and its capabilities to replicate the motion and

640

J. P. C. D. Freire et al.

torques of an rehabilitation gait. Secondly, to evaluate the mechanical design of the whole structure to support the gaitrelated loads. Therefore, user-related specs, such as links with variable lengths or adjustable connections with the purpose to adapt to different users were not developed in this current work.

3.1

(a)

(b) Fig. 1 Free-body diagrams of a Leg section and b Lumbar and Hip section

Design of the Actuation Direct Drive

From literature [13], many options regarding the actuation drive used in different exoskeleton models were analysed. From those, the actuation units that present a good power generated/used ratio and absolute torques provided are electrical motors with harmonic or planetary gearhead drives. Therefore, for this project, a MAXON set that combines a EC90 motor drive and a GP52 planetary gearhead is chosen as driving unit. Aiming at direct actuation in the exoskeleton’s anthropomorphic joints, there was the need to design a coupler to hold the actuation unit in its respective joint position and allow the torque transmission to the next anthropomorphic link. Therefore, the full actuation set consists of (1) Actuation unit, result of the combination of the MAXON motor-gearbox set and (2) coupling components, which includes a roller bearing, a bearing block, and a shaft coupler. The full actuation drive is shown in Fig. 2a. The Outer Arm and Inner Arm components, which were designed to connect the actuation unit to the link bars are show in Fig. 2b. The Inner Arm component connects the motor shaft and the distal link bar, acting as the effective rotation unit. The Outer Arm connects the gearhead coupler to the proximal link bar and was also designed as a mechanical limiter that restricts the rotation angle of the actuation mechanically. The analysis of shear force, torsion, compression of the actuation unit were all performed numerically and the data obtained were used to design these parts. The use of Outer Arm as a mechanical limiter helps to avoid any possible damage to the user, thus, making the project safer. The materials for the actuation unit mechanical components were chosen considering factors such as strength/density ratio and ductility. Therefore, aluminum alloys were preferred for most components. The Inner Arm and Outer Arm were built with the 7075-T6 Aeronautical alloy, prizing its superior strength, while the 6061-T4 alloy

97 Mechanical Design of an Active Hip and Knee Orthosis for Rehabilitation Applications

(a)

(b)

641

orthotic device’s structure can be divided in four component categories: Knee-Ankle Link, Hip-Knee Link, Hip and Lumbar Links and Lumbar Support. The Knee-Ankle and HipKnee links are simple longitudinal structures that resemble their anatomical counterparts. In the other hand, the Hip and Lumbar structure does not present an exact anatomical match, since the hip part of the structure is designed to surround the user’s body in a rectangular manner, and the lumbar column is positioned at the hip’s back, and not in its center. The Lumbar Support does also have a unusual design when compared with other orthesis models, such as some shown in [1]. This model’s support resembles a backpack (or a carapace) and is based in the work [14]. Following the literature review regarding exoskeleton’s anthropomorphic structure, a similar trend of use of tubular/columnar links as structural components was found. Therefore, the geometric profile chosen for the structural links is a rectangular tube in aluminum 6063-T5 alloy. The rectangular tube’s dimensions, such as wall width, were defined based in the system’s analytical solution to ensure standard safety factors. These links will be padded to prevent damage to the user in the future. The lumbar support was designed to be fastened around the user’s back using adjustable straps. The lumbar support is built on a rigid frame associated with medium density foam that fit the user’s back. The adjustable straps are made from A-grade polyester automotive belts. Since the Knee-Ankle and Hip-Knee Links have indeed simple designs, they are not displayed in detailed manner in this work. However the Hip and Lumbar linkage set and the Lumbar Support have more complex assemblies, being shown, respectively, in the lower and upper parts of Fig. 2c.

(c)

Fig. 2 Mechanical components, a actuation drive, b connectors for anthropometric links and c Lumbar support

is used in the bearing block. The shaft coupler, in the other hand, is built in brass and act as the safety piece aimed to deform in case of overload. The bearing used to hold the shaft coupler is a SKF 6008-2Z model.

3.2

Design of Structural Components

The structural components class includes the anthropomorphic rigid links (lumbar, hip, upper leg and lower leg) and the lumbar support that is fixed to the user. The final design of the

3.3

Prototype Construction and Analysis

With the mechanical project developed, the system components were built and the complete prototype was assembled at the laboratory of automation and control (Grupo de Automação e Controle—GRACO) of the University of Brasilia (http://graco.unb.br/). Focusing in the mechanical validation of the project design, only one side of the prototype was built and tested, as presented in Fig. 3. This first prototype, consisting of the combination of two of the three main groups (1) lumbar and hip set and (2) right leg set, once assembled weights approximately 13.1 kg. The third group set, which is the unassembled left leg would weight additional 5.6 kg, leading to a total weight of 18.7 kg for the complete orthosis model. However, unlike other lower body

642

J. P. C. D. Freire et al.

Despite this prototype’s results, questions that might arise are if the model will still have the same results when assembled with both legs or in a eventual gait trial. For the first situation, we do believe that the deflections would only be higher in the Hip and Lumbar linkage, since a induced moment would appear due the dynamical variation of the antagonist leg. However, to counter an excessive deflection, the lumbar column structure is doubly reinforced, as shown in Fig. 2c. For the second situation, however, deflections in all links can be expected to be even higher due conditions associated with user interface or control. For example, a user could, intentionally or not, increase its muscular stiffness beyond the limits for which this exoskeleton was designed, reaching critical deflection or stresses. To counter that problem, the orthosis mechanical structure should be reinforced in future works.

Fig. 3 The active orthesis prototype with one mounted leg

exoskeleton models (usually HKAFO class), this one currently does not have an anthropomorphic ankle. Such limitation brings the impossibility of performing an completely assisted gait, as well making direct comparisons with other models specifications, such as weight or costs. After the complete assembly of the prototype, the testing protocol was divided into three parts. Firstly, the actuation drives in both hip and knee joints were controlled separately to reach the respective joint’s maximum acceleration. Such test aimed to asset the experimental setup built for the exoskeleton as well test a simple motion control strategy. The second experimental test was to load the orthosis with the equivalent weight of the leg to replicate the maximum torques reached in rehabilitation gait conditions (see Table 1). Again, this test was performed in each active joint separately and was able check: (1) the effective power of the actuation unit after the losses of the coupling components, and (2) the deflection level of the anthropomorphic links with a critical level of external loads. The final experimental trial was to create a motion control in which both joints would move simultaneously to replicate a rehabilitation gait pattern. Such test indicated if the exoskeleton’s structure could stand the combined multiarticular movement loads and induced moments without significant deflections. The experimental trials presented satisfactory results. The control strategy, despite being simple, was able to replicate a rehabilitation gait pattern under the desired range of the coupling components. In the second and third testing phases the orthosis structure showed mechanical robustness and low levels of load-induced deflection, while it could move the critical user’s equivalent load without reaching the actuator’s peak torque.

4

Conclusions

This work has the main objective of designing the mechanical structure of a active lower limb orthosis for the hip and knee joints. Due to its focus on robustness, the Exoskeleton project was made aiming at its capacity to support the conditions associated to a non-standard critical user. At the same time, the project presents fair safety factors and is capable of providing mechanical security to the structure. The Exoskeleton prototype met the prerequisites successfully, and proved capable of acting in the area of rehabilitation. However, as of future works, the authors are considering implementing a suitable control strategy using instrumentation or Brain-Machine Interface (BMI) with the help of Electromyography (EMG) or Electroencephalography (EEG) signals for operational control. The mechanical structure can also be redesigned in such a that, the links and the waist structures can be adjustable, thereby increasing the range of users size and making the structure compatible to more users with different sizes. Additionally, the structure can also be reinforced to ensure additional safety for the user. With respect to the number of DOF, the mechanical structure developed in this work’s prototype considers that, since it would not completely restrain the user’s lumbar and hip, the user could develop some low-level adaption for core and hip movement to assist the gait. Therefore, for the time being and aiming at the hip and knee movements, 2 DOF per leg are sufficient for movement testing in the prototype. However, in future works the number of DOFs should be increased, even if some of them are designed as passive, to be able to replicate de DOFs of a more natural gait. Acknowledgements The Authors thank CAPES and Laboratory of Embedded Systems and Integrated Circuits Applications (LEIA) for proving the necessary tools and finance that made this project successful.

97 Mechanical Design of an Active Hip and Knee Orthosis for Rehabilitation Applications Conflict of Interest The authors declare that they have no conflict of interest and are the sole responsibles for the information presented in this work.

6.

7.

References 1.

2.

3.

4.

5.

Dollar AM, Herr H (2008) Lower extremity exoskeletons and active orthoses: challenges and state-of-the-art. IEEE Trans Robot 24:144– 158 Cano BD, Cardiel E, Dominguéz G et al (2013) Design and simulation of an active bilateral orthosis for paraplegics. In: Proceedings of world congress on nature and biologically inspired Computing, pp 47–51 Gil J, Sánchez-Villamañan MC, Gómez J et al (2018) Design and implementation of a novel semi-active hybrid unilateral stance control knee ankle foot orthosis. In: IEEE Proceedings of international conference on intelligent robots and systems, pp 5163–5168 Shan H, Jiang C, Mao Y et al (2016) Design and control of a wearable active knee orthosis for walking assistance. In: IEEE Proceedings of international workshop on advanced motion control, pp 51– 56 Saito Y, Kikuchi K, Negoto H et al, Development of externally powered lower limb orthosis with bilateral-servo actuator. In: IEEE Proceedings of international conference on rehabilitation robotics, pp 394–399

8.

9. 10. 11. 12.

13. 14.

643

Christopher, Foundation Dana Reeve (2019) Stats about paralysis: prevalence of paralysis in the United States. Available on: https://www.christopherreeve.org/living-with-paralysis/statsabout-paralysis. Accessed on 10 De 2019 Villela F (2019) IBGE Brazilian Agency 2015 Survey at http:// agenciabrasil.ebc.com.br/geral/noticia/2015-08/ibge-62-dapopulacao-tem-algum-tipo-de-deficiencia. Accessed on 10 Dec 2019 Santos WM, Oliveira GC, Siqueira AAG (2016) Desenvolvimento de um exoesqueleto modular para membros inferiores. In: SBA Proceedings of Congresso Brasileiro de Automática, pp 1680–1685 IBGE Brazilian Agency 2009 Survey (2009) Antropometria e estado nutricional de crianças, adolescentes e adultos no Brasil Aguilar-Sierra H,Yu W, Salazar S et al (2015) Design and control of hybrid actuation lower limb exoskeleton. Adv Mech Eng 7:1–13 Winter DA (2009) Biomechanics and motor control of human movement. Willey, Hoboken Robbi DB (2018) Análise dinâmica de um exoesqueleto de membros inferiores utilizado no contexto de reabilitação de indivíduos com lesão medular, Monografia. Universidade de Bras ília, Bras ília, Brazil Aliman N, Ramli R, Harris SM (2017) Design and development of lower limb exoskeletons: a survey. Robot Auton Syst 95:102–116 Freire JP, Bo APL, Rocha T (2018) Exosuit for alternative hip actuation: a proof of concept. In: Proceedings of 6o Encontro Nacional de Engenharia Biomecânica ENEBI, pp 79–85

A Bibliometric Analysis of Lower Limb Exoskeletons for Rehabilitation Applications N. A. Marafa, C. H. Llanos, and P. W. G. Taco

Abstract

1

This paper presents a systematic literature review method and a bibliometric analysis for the topic “Lower limb exoskeletons for rehabilitation applications”. The method can also be applied to other topics. Five databases were chosen for documents search, which are: Web of Science, Scopus, IEEE Xplore, Catálogo de Teses e Dissertações (CTD) and Biblioteca Digital de Teses e Dissertações (BDTD) of CAPES. All the results presented in this paper were obtained on December 31, 2019, as they might change with time. To make the review reproducible, updatable and continuous, a review protocol was created. Results from the secondary analysis were further analyzed, presenting the best “top 10”: most prominent research areas, most highlighted research countries, sources with the highest register of documents and, most cited documents from Web of Science and Scopus respectively. However, in the bibliometric analysis, VOSviewer tool was used for the co-authorship, co-citation and bibliographic coupling analyses, while TagCrowd was used for keywords co-occurrence analysis. The work is concluded with some important considerations observed during this analysis. Keywords

 

Systematic literature review Bibliometric analysis Lower limb exoskeletons Rehabilitation

N. A. Marafa (&)  C. H. Llanos Department of Mechanical Engineering, University of Brasilia, Brasilia, Brazil P. W. G.Taco Department of Civil Engineering, University of Brasilia, Asa Norte, Brasilia, Brazil



Introduction

One of the most important steps in scientific research is a literature review, where the search for documents that are most relevant to a subject is raised. A systematic literature review is defined as a scientific research method that brings together relevant studies on a formulated question, using the literature database that deals with that question as a source and method for identification, selection and systematic analysis, in order to perform a critical and comprehensive literature review. A systematic literature review identifies, selects, and critically appraises researches in order to answer a clearly formulated question [1, 2]. In this paper, a methodology is introduced to carry out the search process which includes the choice of databases and the use of tools such as VOSviewer and TagCrowd to perform further analyses of the documents obtained during the search. The method was applied to the formulated question “Lower limb exoskeletons for rehabilitation applications”. Exoskeletons are wearable robots that integrate human body with robotic machines into one system, combining human intelligence with robotic strength and endurance. Over 100 years ago, many research papers in the field of rehabilitation were published, and most of them only focused on prostheses and orthoses. In recent years, exoskeletons have conquered a huge space in the literature and have led to many innovations and scientific publications. Exoskeletons are used for various applications, such as for (a) human force augmentation, (b) rehabilitation or (c) therapeutic assistance. There are basically: (a) upper limb exoskeletons which are designed to supplement upper body member(s), and (b) lower limb exoskeletons designed to supplement lower body member(s). For this review, only the lower limb exoskeletons are considered. 
In 1965, a full-body powered exoskeleton prototype, dubbed “Hardiman” and intended for human force augmentation was developed by General Electric Research, Schenectady, United States [3]. In 1969, the first exoskeleton

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_98

645

646

N. A. Marafa et al.

for rehabilitation application was developed at Mihailo Pupin Institute, Belgrade, Serbia, and also at the University of Wisconsin-Madison, United States, in the early 1970s as described in [4]. Since then, many different projects of robotic exoskeletons were developed by many researchers such as [5–7], among others. In addition to academic researches, there are several companies around the world that are developing the assistive devices for commercial gains, such as: Wandercraft of France, which developed ATLANTE (https://www.wandercraft.eu/), Cyberdyne of Japan, which developed HAL (https://www.cyberdyne.jp/ english/), Ekso Bionic of the United States, which developed EksoGT (https://eksobionics.com/), Hyundai of South Korea, which developed H-MEX, among others. However, there are many similarities and differences from one project to another, such as in: actuation design, sensors used, control strategies and type of human–robot interactions used in their development. In an attempt to develop a lower limb exoskeleton for rehabilitation application by the Laboratory of Embedded Systems and Integrated Circuits Applications (LEIA) located at Grupo de Automação e Controle (GRACO) of the Faculty of Technology (FT) of the University of Brasilia, (https:// www.graco.unb.br/), this systematic literature review and bibliometric analysis was conducted. The method presented consists of five steps: (a) Define the Search Topic, (b) Keywords Selection, (c) Database Selection, (d) String Construction, and (e) Search Methods.

analyses were performed, Exploratory Analysis and Secondary Analysis. In Exploratory Analysis, all the selected databases were considered for the analysis. However, in Secondary Analysis, all the selected databases were also considered, but only Web of Science’s and Scopus’s documents were further analyzed. In this case, the analysis was done by presenting the best “top 10”: (a) most prominent research areas, (b) most highlighted research countries, (c) sources with the highest register of documents, and (d) most cited documents. VOSviewer tool [11] was used for the analysis of Co-authorship, Co-citation and Bibliographic Coupling, while for the analysis of the Co-occurrence of keywords, TagCrowd (https://tagcrowd.com/) was used.

3

In Exploratory Analysis, String (1) and String (2) were used for documents search. String (1) aims at exploring the general idea of the total number of documents available in the literature related to the field of rehabilitation, and which include the use of exoskeletons, prostheses and orthoses. String (2) explored the general idea of the total number of publications that only talked about exoskeletons, which in this case include exoskeletons for both the upper and the lower limbs (see Table 1).

4 2

Exploratory Analysis

Secondary Analysis

Methodology

It is very important to make it clear at this point that, all the results presented in this paper were collected on December 31, 2019, as they might change (increase) with time as the number of publications continue to increase in the selected databases. There are many methods for the systematic literature review as shown in [8–10], among others. However, our method in this work is to strictly follow these steps: (a) Define the Search Topic: the topic was defined as “lower limb exoskeletons for rehabilitation applications”. (b) Keywords Selection: here, three keywords were selected, which include “exoskeleton, lower limb, and rehabilitation”. (c) Database Selection: five databases were selected “Web of Science, Scopus, IEEE Xplore, Catálogo de Teses e Dissertações (CTD) and Biblioteca Digital de Teses e Dissertações (BDTD) of CAPES”. (d) String Construction: three strings were constructed “String (1) is “Exoskeleton” OR “Prosthesis” OR “Orthosis”; String (2) is “Exoskeleton”; and String (3) is “Exoskeleton” AND “Lower Limb” AND (“Rehabilitation” OR “Paraplegic” OR “Paralysis” OR “Deficiency”)”. (e) Search Methods: two methods of

In Secondary Analysis, results from Exploratory Analysis were further refined. This process aims at eliminating documents that are not relevant to the topic in question. To achieve that, the objective String (3) and some inclusion and exclusion criteria were applied. Specifically, for including documents, we have opted for: (a) relevant documents published or accepted for publication including, books, articles, review articles, dissertations and theses; (b) documents that contain the chosen keywords in their titles; (c) documents dealing with exoskeletons for lower limbs only; (d) documents that are easily accessible, (e) all publications up to October 31, 2019, and (f) all documents in English and Portuguese. More additional excluding criteria comprises: (a) documents that focus only on Prosthesis; (b) documents dealing only with upper limb exoskeletons; (c) documents that are not easily accessible, and (d) documents in other languages other than English and Portuguese. In the secondary analysis, Web of science and Scopus databases presented 474 and 723 documents respectively and the analyses of these documents are presented in (Tables 2, 3, 4, 5 and 6).

A Bibliometric Analysis of Lower Limb Exoskeletons …

647

Table 1 Search results for the three constructed strings obtained from the selected databases Database

Search type

String (1)

String (2)

Res

1° Year Pub

Res

String (3) 1° Year Pub

Res

1° Year Pub

Web of Science

Topic

71.284

1945 (4)

6,753

1947 (1)

474

2006 (2)

Scopus

Title, Abstract, Keywords

400.291

1903 (1)

9,972

1912 (1)

723

2004 (1)

IEEE

All

7480

1969 (2)

2,545

1981 (1)

304

2006 (1)

CTD

All

4751

1989 (2)

184

1990 (1)

8

2005 (1)

BDTD

All

2737

1964 (4)

90

1994 (1)

8

2009 (2)

Res. = Results obtained in a single search 1° year Pub. = First year of publications in a database (number of publications in the year)

Table 2 The most prominent research areas from Web of Science and Scopus S. No.

Web of Science

Scopus

Subject areas

Res.

% of 474

Subject area

Res.

% of 723

1

Engineering

263

55.48

Engineering

486

67.21

2

Robotics

175

36.92

Computer Science

359

49.65

3

Computer Science

113

23.84

Medicine

185

25.58

4

Automation Control Systems

80

16.88

Mathematics

117

16.18

5

Rehabilitation

74

15.61

Biochemistry, Genetics and Molecular Biology

74

10.24

6

Neurosciences Neurology

50

10.55

7

Instruments Instrumentation

16

3.37

8

Material Science

16

3.37

Chemical Engineering

34

4.70

9

Chemistry

14

2.95

Material Science

32

4.42

10

Medical Informatics

10

2.11

Health Professions

27

3.73

Neuroscience

48

6.64

Physics and Astronomy

47

6.50

Table 3 The most highlighted research countries from Web-of-Science and Scopus S. No.

Web of Science

Scopus

Country

Res.

% of 474

Country

Res.

% of 723

1

Peoples R China

106

22.36

Peoples R China

173

23.92

2

United States

81

17.08

United States

111

15.35

3

Spain

43

9.07

Italy

61

8.44

4

Italy

31

6.54

Spain

60

8.30

5

Japan

28

5.91

Japan

43

5.95

6

France

27

5.69

Switzerland

33

4.56

7

Switzerland

26

5.49

Brazil

30

4.15

8

South Korea

25

5.27

France

30

4.15

9

Mexico

19

4.01

South Korea

29

4.01

10

India

18

3.79

United Kingdom

23

3.18

648

N. A. Marafa et al.

Table 4 Sources with the highest register of documents from Web of Science and Scopus

| S. No. | Web of Science source title (2-year Cites/Doc.) | Reg. | Scopus source title (2-year Cites/Doc.) | Reg. |
|---|---|---|---|---|
| 1 | International Conference on Rehabilitation Robotics ICORR (1.29) | 23 | IEEE International Conference on Rehabilitation Robotics ICORR | 37 |
| 2 | Journal of NeuroEngineering and Rehabilitation (4.112) | 17 | Journal of NeuroEngineering and Rehabilitation | 25 |
| 3 | IEEE Transactions on Neural Systems and Rehabilitation Engineering (4.112) | 13 | IEEE Transactions on Neural Systems and Rehabilitation Engineering | 20 |
| 4 | 2017 International Conference on Rehabilitation Robotics ICORR (1.29) | 9 | Biosystems and Biorobotics_Switzerland | 17 |
| 5 | Sensors_Switzerland (3.514) | 9 | Lecture Notes in Computer Science Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics | 11 |
| 6 | Biosystems and Biorobotics_Switzerland (7.261) | 8 | Sensors_Switzerland | 11 |
| 7 | IEEE Engineering in Medicine and Biology Society Conference Proceedings (3 yr., 0.961) | 8 | IEEE International Conference on Intelligent Robots and Systems_Spain | 10 |
| 8 | IEEE International Conference on Intelligent Robots and Systems_Spain (2.026) | 8 | Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society EMBS_Canada (3 yr., 0.836) | 10 |
| 9 | Proceedings of the IEEE RAS EMBS International Conference on Biomedical Robotics and Biomechatronics_United States (3 yr., 1.236) | 8 | Journal of Physics: Conference Series_United Kingdom (0.544) | 9 |
| 10 | International Journal of Advanced Robotic Systems_United States (1.601) | 7 | Proceedings of the IEEE RAS EMBS International Conference on Biomedical Robotics and Biomechatronics_United States | 9 |

Table 5 The most cited documents from Web of Science

| S. No. | Title | Author | Year | Citations | Comment/status |
|---|---|---|---|---|---|
| 1 | Lower extremity exoskeletons and active orthoses: challenges and state-of-the-art [3] | Dollar A. M. | 2008 | 619 | R/I |
| 2 | Review of assistive strategies in powered lower-limb orthoses and exoskeletons [21] | Yan T. | 2015 | 183 | R/I |
| 3 | Control strategies for active lower extremity prosthetics and orthotics: a review [20] | Tucker M. R. | 2015 | 166 | R/I |
| 4 | Preliminary evaluation of a powered lower limb orthosis to aid walking in paraplegic individuals [15] | Farris R. J. | 2011 | 136 | R/I |
| 5 | EMG and EPP-integrated human–machine interface between the paralyzed and rehabilitation exoskeleton [22] | Yin Y. H. | 2012 | 109 | R/I |
| 6 | Recent development of mechanisms and control strategies for robot-assisted lower limb rehabilitation [18] | Meng W. | 2015 | 113 | R/I |
| 7 | State of the art and future directions for lower limb robotic exoskeletons [23] | Young A. J. | 2017 | 126 | R/I |
| 8 | Long-term training with a brain-machine interface-based gait protocol induces partial neurological recovery in paraplegic patients [14] | Donati A. R. C. | 2016 | 108 | R/I |
| 9 | The H2 robotic exoskeleton for gait rehabilitation after stroke: early findings from a clinical study [13] | Bortole M. | 2015 | 98 | R/I |
| 10 | The design and control of a therapeutic exercise robot for lower limb rehabilitation: Physiotherabot [12] | Akdoğan E. | 2011 | 96 | R/I |

R = Relevant/I = Included

A Bibliometric Analysis of Lower Limb Exoskeletons …

649

Table 6 The most cited documents from Scopus

| S. No. | Title | Author | Year | Citations | Comment/status |
|---|---|---|---|---|---|
| 1 | Lower extremity exoskeletons and active orthoses: challenges and state-of-the-art [3] | Dollar A. M. | 2008 | 770 | R/I |
| 2 | Wearable robots: biomechatronic exoskeletons (book) [19] | Pons J. L. | 2008 | 323 | R/I |
| 3 | Current hand exoskeleton technologies for rehabilitation and assistive engineering [16] | Heo P. | 2012 | 226 | IR/E |
| 4 | Control strategies for active lower extremity prosthetics and orthotics: a review [20] | Tucker M. R. | 2015 | 202 | R/I |
| 5 | Preliminary evaluation of a powered lower limb orthosis to aid walking in paraplegic individuals [15] | Farris R. J. | 2011 | 172 | R/I |
| 6 | Review of control algorithms for robotic ankle systems in lower-limb orthoses, prostheses, and exoskeletons [17] | Jimenez-Fabian R. | 2012 | 163 | R/I |
| 7 | Analytical evaluation of the Abbott Architect® STAT Troponin-I immunoassay [24] | Farris D. P. | 2005 | 156 | IR/E |
| 8 | EMG and EPP-integrated human–machine interface between the paralyzed and rehabilitation exoskeleton [22] | Yin Y. H. | 2012 | 133 | R/I |
| 9 | State of the art and future directions for lower limb robotic exoskeletons [23] | Young A. J. | 2017 | 149 | R/I |
| 10 | Long-term training with a brain-machine interface-based gait protocol induces partial neurological recovery in paraplegic patients [14] | Donati A. R. C. | 2016 | 125 | R/I |

R = Relevant/I = Included, IR = Irrelevant/E = Excluded

5 Bibliometric Analysis

5.1 Co-authorship Analysis (Authors)

In the co-authorship analysis using VOSviewer, the closer two authors are located to each other, the stronger their relatedness. For Web of Science, the minimum number of documents of an author was set to 5 and the minimum number of citations of an author to 10; of the 1556 authors, 41 meet these thresholds (Fig. 1). For Scopus, the minimum number of documents of an author was set to 5 and the minimum number of citations of an author to 20; in this case, of the 2062 authors, 66 meet these thresholds (Fig. 2).

5.2 Co-citation Analysis (Cited References)

In the co-citation analysis using VOSviewer, the closer two documents are located to each other, the stronger their relatedness. The strongest co-citation links between documents are also represented by lines [11]. For Web of Science, the minimum number of citations of a cited reference was set to 20; of the 9320 cited references, 41 meet this threshold (Fig. 3). For Scopus, the minimum number of citations of a cited reference was set to 4; in this case, of the 19,592 cited references, 67 meet this threshold (Fig. 4).

5.3 Bibliographic Coupling Analysis (Documents)

The bibliographic coupling analysis using VOSviewer relates the number of cited references that two publications have in common. In this case, the bigger the circle of a document, the stronger its citation link. For Web of Science, the minimum number of citations of a document was set to 40; of the 474 documents, 27 meet this threshold (Fig. 5). For Scopus, the minimum number of citations of a document was set to 40; of the 723 documents, 48 meet this threshold (Fig. 6). Dollar (2008) [3] was observed to be the most cited document in both Web of Science and Scopus.

5.4 Co-occurrence Analysis (Keywords)

The keyword co-occurrence analysis using TagCrowd relates the occurrence of a keyword in two or more documents. In this case, the larger a keyword appears, the higher the number of documents in which it is repeated. In Figs. 7 and 8, the TagCrowd tool was used to analyze this relationship using the Web of Science and Scopus data collected from the secondary analysis. The keyword "exoskeleton" appears in more documents than any other keyword in both the Web of Science and Scopus documents; therefore, it appears bigger and more visible.
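The document-frequency idea behind the keyword cloud can be sketched in a few lines. This is only an illustration of the counting principle; the keyword lists below are invented and are not taken from the actual Web of Science or Scopus exports.

```python
from collections import Counter

# Each record holds the keywords of one (hypothetical) document.
records = [
    ["exoskeleton", "rehabilitation", "gait"],
    ["exoskeleton", "control"],
    ["exoskeleton", "rehabilitation"],
]

# Document frequency: in how many documents does each keyword appear?
# set() guards against counting a keyword twice within one document.
doc_freq = Counter(kw for keywords in records for kw in set(keywords))

# The keyword with the highest document frequency is rendered largest
# in the cloud (here, "exoskeleton").
top_keyword, count = doc_freq.most_common(1)[0]
```

The cloud-rendering step in TagCrowd then simply maps each keyword's frequency to a font size.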


Fig. 1 Co-authorship analysis, Web of Science

Fig. 5 Bibliographic coupling analysis, Web of Science

Fig. 2 Co-authorship analysis, Scopus

Fig. 6 Bibliographic coupling analysis, Scopus

Fig. 3 Co-citation analysis, Web of Science

Fig. 7 Keywords co-occurrence analysis, Web of Science

Fig. 4 Co-citation analysis, Scopus

Fig. 8 Keywords co-occurrence analysis, Scopus


6 Conclusions

The review method presented started by providing a general idea of the total number of publications available in the literature in the field of interest. This was achieved by the use of String (1), which gave a general picture of when and what was published in the field of rehabilitation, from the beginning up to December 31, 2019, across all the selected databases. String (2) and String (3) were used to further refine these results. However, the choice of the right keywords and databases and the construction of good strings are very important steps to guarantee high-quality results. Heo, P. (2012) was excluded because it addresses only hand (upper limb) exoskeletons, making it irrelevant under exclusion criterion (b). Farris, D. P. (2005) was excluded because it was not easily accessible, making it irrelevant under exclusion criterion (c), and also because it does not address lower limb exoskeletons. Of the top 10 most cited documents in Web of Science and Scopus, six were found to be common to both databases, and only two documents in all were rejected, leaving twelve relevant documents related to our formulated question, listed in Tables 5 and 6, respectively. In Figs. 1, 2, 3, 4, 5, 6, 7 and 8, some of the names or words may appear blurred; this is expected, because they are dominated by the clearest ones. Examples of this dominance are clearly seen for "Dollar (2008)" in Figs. 5 and 6 and for "exoskeleton" in Figs. 7 and 8. It is also important to note that there was no restriction or criterion limiting the number of selected databases to five; this was only the authors' choice, and the only consideration in this selection was the relationship between the database and the formulated question. Moreover, the number of selected databases can be increased or decreased, but it is important to know that some databases are indexed by others.
Furthermore, the inclusion of the Catálogo de Teses e Dissertações (CTD) and the Biblioteca Digital de Teses e Dissertações (BDTD) of CAPES aimed at exploring the dissertations and theses produced by master's and doctoral students at Brazilian institutions. Finally, considering the quality of the results obtained and the number of rejected documents, it is fair to say that the objective of this review has been achieved.

Acknowledgements The authors thank the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for providing free access to all the databases consulted in elaborating this work.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Dewey A, Drahota A (2016) Introduction to systematic reviews: online learning module. Cochrane Training. https://training.cochrane.org/interactivelearning/module-1-introductionconducting-systematic-reviews
2. Marino AM, Rocha MS (2017) Revisão da Literatura: Apresentação de uma Abordagem Integradora. In: 2017 AEDEM international conference on economy, business and uncertainty: ideas for a European and Mediterranean industrial policy?, Reggio Calabria, Italia, pp 427–443
3. Dollar AM, Herr H (2008) Lower extremity exoskeletons and active orthoses: challenges and state-of-the-art. IEEE Trans Rob 24:144–158
4. Huo W, Mohammed S, Moreno JC, Amirat Y (2016) Lower limb wearable robots for assistance and rehabilitation: a state of the art. IEEE Syst J 10:1068–1081
5. Zoss A, Chu A, Kazerooni H (2006) Biomechanical design of the Berkeley Lower Extremity Exoskeleton (BLEEX). IEEE/ASME Trans Mechatron 11:128–138
6. Mcdaid AJ, Xing S, Xie SQ (2013) Brain controlled robotic exoskeleton for neurorehabilitation. In: 2013 IEEE/ASME international conference on advanced intelligent mechatronics, Wollongong, NSW, Australia, pp 1039–1044
7. Vinoj PG, Jacob S, Menon VG (2019) Brain-controlled adaptive lower limb exoskeleton for rehabilitation of post-stroke paralyzed. IEEE Access 7:132628–132648
8. Atallah AN, Castro AA (1998) Revisão Sistemática da Literatura e Metanálise: a melhor forma de evidência para tomada de decisão em saúde e a maneira mais rápida de atualização terapêutica. https://www.centrocochranedobrasil.com.br/cms/apl/artigos/artigo_530.pdf
9. Martins LFL, Thuler LCS, Valente JG (2005) Cobertura do exame de Papanicolaou no Brasil e seus fatores determinantes: uma revisão sistemática da literatura. Revista Brasileira de Ginecologia e Obstetrícia 27:485–492
10. Pereira ÂL, Bachion MM (2008) Atualidades em revisão sistemática de literatura, critérios de força e grau de recomendação de evidência. Revista Gaúcha de Enfermagem 27:491–498
11. Van Eck NJ, Waltman L (2018) Manual of the VOSviewer tool. https://www.vosviewer.com/documentation/Manual_VOSviewer_1.6.8.pdf
12. Akdoğan E, Adli MA (2011) The design and control of a therapeutic exercise robot for lower limb rehabilitation: Physiotherabot. Mechatronics 21:509–522
13. Bortole M, Venkatakrishnan A, Zhu F et al (2015) The H2 robotic exoskeleton for gait rehabilitation after stroke: early findings from a clinical study. J NeuroEng Rehabil 12:1–14
14. Donati ARC, Shokur S et al (2016) Long-term training with a brain-machine interface-based gait protocol induces partial neurological recovery in paraplegic patients. Sci Rep 6:1–16
15. Farris RJ, Quintero HA, Goldfarb M (2011) Preliminary evaluation of a powered lower limb orthosis to aid walking in paraplegic individuals. IEEE Trans Neural Syst Rehabil Eng 19:652–659
16. Heo P, Gu GM, Lee SJ et al (2012) Current hand exoskeleton technologies for rehabilitation and assistive engineering. Int J Precis Eng Manuf 13:807–824
17. Jiménez-Fabián R, Verlinden O (2012) Review of control algorithms for robotic ankle systems in lower-limb orthoses, prostheses, and exoskeletons. Med Eng Phys 34:397–408
18. Meng W, Liu Q, Zhou Z et al (2015) Recent development of mechanisms and control strategies for robot-assisted lower limb rehabilitation. Mechatronics 31:132–145
19. Pons JL (2008) Wearable robots: biomechatronic exoskeletons. Wiley, Madrid
20. Tucker MR, Olivier J, Pagel A et al (2015) Control strategies for active lower extremity prosthetics and orthotics: a review. J NeuroEng Rehabil 12:1–29
21. Yan T, Cempini M, Oddo CM et al (2015) Review of assistive strategies in powered lower-limb orthoses and exoskeletons. Robot Auton Syst 64:120–136
22. Yin YH, Fan YJ, Xu LD (2012) EMG and EPP-integrated human-machine interface between the paralyzed and rehabilitation exoskeleton. IEEE Trans Inf Technol Biomed 16:542–549
23. Young AJ, Ferris DP (2017) State of the art and future directions for lower limb robotic exoskeletons. IEEE Trans Neural Syst Rehabil Eng 25:171–182
24. Farris DP, Murakami MA, Apple F (2005) Analytical evaluation of the Abbott Architect® STAT Troponin-I immunoassay. Clin Chem 51:28–28

Biomechatronic Analysis of Lower Limb Exoskeletons for Augmentation and Rehabilitation Applications N. A. Marafa, R. C. Sampaio, and C. H. Llanos

Abstract

Lower limb exoskeletons are wearable robots worn by human operators for various purposes. Their design, control and biomechanical aspects have been discussed in many publications in the literature. However, there is a gap in the analysis of their robotic nature, and few documents have discussed the biomechatronic aspects of these robotic devices. In this scenario, this paper presents a brief analysis of the biomechatronic system components of lower limb exoskeletons for augmentation and rehabilitation applications. In this case, the biomechatronic system is considered to have five components: Mechanisms, Actuators, Sensors, Control, and Human–Robot Interaction. A literature review was initially conducted to explore the documents with the highest relevance to the topic. In Mechanisms, the metabolic cost, the biomechanics of walking, the average human walking speed, the mechanics of human movement, and movements at the hip, knee and ankle joints are addressed. In Actuators, the different types of actuators used by different projects in the literature, such as electric motors, series elastic actuators (SEAs), pneumatic, hydraulic, and pneumatic muscle actuators, are analyzed. In Sensors and Control, the different types of sensors and control strategies adopted by different projects are also analyzed. In Human–Robot Interaction, cognitive human–robot interaction and physical human–robot interaction are discussed. Finally, the work concludes with some important considerations from this analysis.

Keywords

Biomechatronic system · Lower limb exoskeletons · Augmentation · Rehabilitation

N. A. Marafa (✉) · C. H. Llanos
Department of Mechanical Engineering, University of Brasilia, Brasilia, Brazil
e-mail: [email protected]

R. C. Sampaio
Faculty of Gama, University of Brasilia, Asa Norte, Brasilia, Brazil



1 Introduction

Research on exoskeletons began more than seven decades ago and is now growing rapidly all over the world, resulting in many commercial products and academic publications. Robotic lower limb exoskeletons are wearable robots for the lower limbs, worn by human operators for various purposes, such as: (a) human force augmentation, (b) rehabilitation, and (c) therapeutic assistance, among others. Their design, control and biomechanical analysis have been discussed in many documents in the literature by many researchers, such as [1–3], among others. However, very few documents found in the literature discuss their biomechatronic aspect. Therefore, in this paper, the biomechatronic system components of lower limb exoskeletons for augmentation and rehabilitation applications are briefly analyzed, since knowing and understanding these components provides a more complete and comprehensive knowledge of the robotic nature of their design, powering, and control. The work started with a literature review centered on two main axes: (a) commercial products, which aims to search for lower limb exoskeletons manufactured by different companies around the world for commercial gain, and (b) academic publications, which aims to search for all the relevant documents (published or accepted for publication) dealing with lower limb exoskeletons for augmentation and rehabilitation applications, available in accredited scientific databases related to the topic, such as Web of Science, Scopus, and IEEE Xplore, among others.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_99


As described in [4], "Biomechatronics may in a sense be viewed as a scientific and engineering discipline whose goal is to explain behavior by means of artificial models, e.g. the system's components". In this sense, the biomechatronic system components of wearable exoskeletons are (a) Mechanisms, (b) Actuators, (c) Sensors, (d) Control, and (e) Human–Robot Interaction. In this work, the analysis of these components is focused only on lower limb exoskeletons for augmentation and rehabilitation applications. However, exoskeletons for other applications, such as for the upper limbs, can be analyzed in a similar way, because of their similarities in design, powering and control (i.e., similar system components). In (a) Mechanisms, the kinematics and dynamics of lower limb exoskeletons are analyzed: the concepts of the metabolic cost of walking, the biomechanics of walking, human walking speed, the mechanics of human movement, and movements at the hip, knee and ankle joints are discussed. In (b) Actuators, electric motors, series elastic actuators (SEAs), pneumatic, hydraulic, and pneumatic muscle actuators are analyzed. In (c) Sensors, force sensors, position sensors, electromyography (EMG) signal sensors and electroencephalogram (EEG) signal sensors are presented. In (d) Control, force-based and position-based control strategies are analyzed. In (e) Human–Robot Interaction, physical human–robot interaction and cognitive human–robot interaction are discussed, analyzing the human–robot interfaces that support them. Finally, the work ends with some final considerations.

2 Exoskeletons in the Literature

2.1 Commercial Products

In recent years, many different companies around the world have been investing in exoskeleton projects for commercial gain, such as ReWalk, Cyberdyne, Hyundai, and RB3D, among others. Most of the commercial products analyzed are intended only for rehabilitation applications for patients with spinal cord injuries (SCI). However, a few of the commercial products found are designed for augmentation applications, such as RB3D's Hercule [5]. ReWalk Personal 6.0 (Fig. 1d) [6], developed by ReWalk, Israel, was the first exoskeleton to receive FDA clearance for personal and rehabilitation use in the United States, in 2011, and it is considered the most customizable exoskeleton designed to be used at home and in the community. It provides powered hip and knee motion to enable individuals with SCI to stand upright, walk, turn, and climb and descend stairs. Some of its characteristics are: actuated with motors at the hip and knee joints; controlled by the user through subtle changes in his/her center of gravity. A forward tilt of the upper body is sensed by the system, which initiates the first step; repeated body shifting generates a sequence of steps that mimics a functional natural gait. HAL, or Hybrid Assistive Limb (Fig. 1e) [7], developed by Tsukuba University and the robotics company Cyberdyne in Japan, is intended to support and expand the physical capabilities of its users, particularly people with physical disabilities. There are two primary versions of the system: HAL 3, which only provides leg function, and HAL 5, which is a full-body exoskeleton for the arms, legs, and torso. HAL 3 is motorized with 7 DOF; it uses its sensors to capture the bioelectric signals (BES) transmitted by the brain to the muscles and uses these signals to realize the intended movements of the wearer. A crutch or walker is necessary. H-MEX, or Hyundai Medical Exoskeleton (Fig. 1f) [8], is an 18 kg exoskeleton with an adjustable aluminum frame that straps to the user's feet, legs, and back, with hinges at the knee and waist, capable of supporting up to 40 kg of the wearer's total weight. Electrically motorized, H-MEX gives the user the ability to sit, stand, move around, and navigate stairs. The user needs crutches to maintain balance and to trigger the next action to step, sit, or climb. Hercule (Fig. 1g) [9], developed by the French company RB3D, is an augmentation exoskeleton designed to unburden the wearer from heavy loads. Depending on the task, it can be used by the operator either in front or behind. Some of its characteristics are: weight of 30 kg, peak power of 600 W, height of 1100 mm (size L), width of 650 mm, depth of 400 mm, 14 degrees of freedom of which 4 are motorized, a dorsal harness and shoe straps.

2.2 Academic Publications

In 1965, a full-body powered exoskeleton prototype was developed by General Electric Research, Schenectady, United States. This exoskeleton was dubbed "Hardiman" and was mainly intended for human force augmentation (Fig. 1b) [10]. In 1969, the first exoskeleton for rehabilitation applications was developed at the Mihailo Pupin Institute in Serbia [11], followed by another at the University of Wisconsin-Madison in the United States in the early 1970s. Since then, many projects in the assistive devices category have been developed, such as those in [12–14], among others. It should also be noted that, much earlier, in 1890, Nicholas Yagn (a Russian) proposed an apparatus resembling an exoskeleton (Fig. 1a) [10]. This apparatus was developed to facilitate walking, running and jumping. The earlier version was designed with a bow spring interconnecting the hip and ankle, which stores energy when compressed, while the final version used a compressed-gas bag to store the energy.

Biomechatronic Analysis of Lower Limb Exoskeletons …

655

Fig. 1 a Yagn's apparatus, b Hardiman, c BLEEX, d ReWalk, e HAL, f H-MEX and g Hercule

In [1], human biomechanical considerations for the hip, knee, and ankle joints in the development of exoskeletons were presented. In [15], the advantages of using two different types of actuators (a DC motor with a harmonic drive unit and a pneumatic artificial muscle) at the hip and knee joints are shown. In this case, the harmonic drive actuator has high torque, high positioning precision and relatively small dimensions, which are ideal properties for gait rehabilitation. It is also possible to measure the center of pressure (CoP) or zero moment point (ZMP) of exoskeletons using force sensors, as shown in [16]. This measurement technique can be used to check stability while the exoskeleton is being used by a patient. In [17], a hybrid training mode for unilateral paraplegics is presented; the main idea is to train the leg with movement problems based on the healthy leg's behavior. In [18], critical design criteria to be considered in the mechanical development and the selection of actuators are presented.

3 Mechanisms

3.1 Metabolic Cost

One of the main objectives of designing lower limb exoskeletons, especially for augmentation, is to significantly reduce the metabolic cost incurred by the intended user. Therefore, to determine whether the device is significantly assistive, the human metabolic cost of performing a task with the exoskeleton must be compared with the cost of performing it without the exoskeleton. This can be done by measuring and comparing the rates of oxygen consumption, carbon dioxide production and urinary nitrogen excretion while performing the task with and without the exoskeleton [11]. There are many other methods for these calculations, as shown in [19–21]. However, changes in the user's walking speed may complicate this comparison. In [22], a method for measuring the metabolic energy cost of walking when changing speed is presented; it was also estimated that the cost of changing speed represents 4–8% of the daily walking energy budget. Initial metabolic cost tests performed on some exoskeletons showed varied results: an increase in some cases, such as the MIT exoskeleton, and a decrease in others.
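As a minimal sketch of this with/without comparison, the gas-exchange rates can be converted to metabolic power using the widely cited Brockway coefficients; note that this formulation, the neglect of the urinary-nitrogen term, and the numbers in the comments are illustrative assumptions, not values from the papers cited above.

```python
def metabolic_power(vo2_ml_s, vco2_ml_s):
    """Gross metabolic power in watts from oxygen consumption and CO2
    production rates (mL/s), using the commonly cited Brockway
    coefficients; the urinary-nitrogen correction is neglected here,
    as is common in gait studies."""
    return 16.58 * vo2_ml_s + 4.51 * vco2_ml_s


def net_change_pct(vo2_exo, vco2_exo, vo2_free, vco2_free):
    """Percentage change in metabolic power when wearing the device.
    A negative value means the exoskeleton reduces metabolic cost."""
    p_exo = metabolic_power(vo2_exo, vco2_exo)
    p_free = metabolic_power(vo2_free, vco2_free)
    return 100.0 * (p_exo - p_free) / p_free
```

In practice the subject walks the same protocol twice (with and without the device), the steady-state gas-exchange rates are averaged, and the sign of `net_change_pct` decides whether the device is assistive.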

3.2 Biomechanics of Walking

Human walking is accomplished with a strategy called the double pendulum. During forward motion, the leg that leaves the ground swings forward from the hip; this sweep is considered the first pendulum. The leg then strikes the ground with the heel and rolls through to the toe in a motion described as an inverted pendulum. The motion of the two legs is coordinated so that one foot or the other is always in contact with the ground. The process of walking recovers approximately sixty percent (60%) of the energy used, due to pendulum dynamics and the ground reaction force. It has also been observed that, for proper and comfortable motion, the human leg normally has 7 DOF (3 at the hip, 1 at the knee and 3 at the ankle). Therefore, in the design and control of lower limb exoskeletons, understanding the human walking gait is fundamental. The human gait cycle is represented by the periodic repetition of two phases, the stance phase and the swing phase. The stance phase (when the foot is on the ground) occupies around 60% of the gait cycle and occurs between two events: the heel strike and the toe-off (of the same foot). The swing phase (when the foot is in the air) occupies only around 40% of the gait cycle; it starts with the toe-off event and ends with the next heel strike. The human walking gait through one cycle, beginning at heel strike and ending at terminal swing, is presented in Fig. 2, adapted from [23]; percentages are given at their approximate locations.
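The approximate 60/40 stance–swing split described above can be expressed as a trivial phase classifier. This is only a sketch of the timing convention (0% = heel strike, ~60% = toe-off, 100% = next heel strike); real exoskeleton controllers detect heel strike and toe-off from foot-pressure or inertial sensors rather than from a percentage.

```python
def gait_phase(pct_cycle):
    """Classify the gait phase from the percentage of the gait cycle,
    using the approximate 60% stance / 40% swing split.
    The modulo wraps successive cycles (e.g. 175% -> 75% of cycle 2)."""
    pct = pct_cycle % 100.0
    return "stance" if pct < 60.0 else "swing"
```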


Fig. 2 Human walking gait. Adapted from [23]

3.3 Average Human Walking Speed

Human walking speed is another important aspect to be considered when designing and controlling lower limb exoskeletons. People walk at different speeds depending on factors such as height, weight, age, gender, terrain, surface, load, effort, fitness, and cultural beliefs. A study carried out at Portland State University in 2005, and later reviewed in 2009, showed that the average human walking speed at crosswalks is about 1.4 m/s or 5 km/h [24]. In Brazil, Novaes et al. [25] studied gait speed among Brazilians aged over 40 and showed that the average speed of the Brazilian population was around 1.26 m/s (4.54 km/h) among men and 1.16 m/s (4.18 km/h) among women. It was also observed in the literature that gait speed and step length were very important considerations in many projects when designing and controlling lower limb exoskeletons.

3.4 Mechanics of Human Movement

The anatomical planes of human motion, as described in medicine, are the coronal or frontal plane, which divides the body into anterior and posterior parts; the transverse or axial plane, which divides the body into upper and lower parts; and the sagittal or lateral plane, which divides the body into right and left parts. Movement in the sagittal plane is called flexion–extension, in the coronal plane abduction–adduction, and in the transverse plane medial–lateral rotation. Almost all the projects analyzed in the literature have active joints that allow movement in the sagittal plane (flexion–extension) at both the hip and knee joints, because of its importance for the forward and backward movement of the body and body segments (Fig. 3a). When the number of DOF is high, a Denavit–Hartenberg (D-H) model can be used to derive the kinematic model of a lower limb exoskeleton, as shown in [16]. More explanation on how to develop D-H models is presented in [4].
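To illustrate the D-H approach mentioned above, the sketch below builds the standard D-H homogeneous transform and chains two of them for a planar thigh–shank pair (hip and knee flexion in the sagittal plane). The 0.4 m segment lengths and the joint angles are illustrative values, not parameters from any of the cited projects.

```python
import math


def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform as a 4x4
    nested list: rotate theta about z, translate d along z,
    translate a along x, rotate alpha about x."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]


def matmul(A, B):
    """4x4 matrix product, used to chain link transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]


# Planar two-link chain: 30 deg hip flexion, 20 deg knee flexion
# (negative in this convention), 0.4 m thigh and shank.
T = matmul(dh_transform(math.radians(30), 0.0, 0.4, 0.0),
           dh_transform(math.radians(-20), 0.0, 0.4, 0.0))
ankle_x, ankle_y = T[0][3], T[1][3]  # ankle position in the hip frame
```

Chaining one such transform per joint gives the forward kinematics of the whole leg; a 7-DOF leg model simply multiplies seven of these matrices.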

3.5 Movements at the Hip Joint

The hip joint is a multiaxial ball-and-socket synovial joint which behaves as a spherical joint. It moves in different cardinal reference planes that pass through the joint center, allowing three possible degrees of freedom (3 DOF). The hip movements are flexion–extension (Fig. 3b), abduction–adduction (Fig. 3c), and medial–lateral rotation (Fig. 3d). Flexion is the rotating motion of the hip joint that brings the thigh forward and upward, while extension is just the opposite. The range of hip flexion is up to 120° and that of extension is up to 20°. Abduction is the movement of the lower limb away from the mid-line of the body, while adduction is just the opposite. The range of abduction is up to 40° and that of adduction is between 30° and 35°. Medial–lateral rotation is the rotation around the long axis of the femur. The range of medial rotation is only from 15° to 30°; the range of lateral rotation is larger and can be up to 60° [4].

3.6 Movements at the Knee Joint

The knee joint is a synovial hinge joint that behaves like an ellipsoidal joint. It has two parts: the femoro-patellar joint and the femoro-tibial joint. It moves in different cardinal reference planes that pass through the joint center, allowing two possible degrees of freedom (2 DOF): flexion–extension and medial–lateral rotation. However, just as at the hip joint, flexion–extension is the most important joint movement in the sagittal plane, allowing stable locomotion. The range of movement in flexion is up to 120° when the hip is extended, 140° when the hip is flexed, and 160° when the knee is flexed passively. Medial (internal) rotation occurs during the final stage of extension. It helps bring the knee to the locked position, which provides maximum stability. It is limited to 10° at 30° of flexion and to 15° when the knee is fully flexed. Lateral rotation occurs in the early stage of flexion. It is needed to unlock the joint, and its range of motion is limited to 30° at 30° of flexion and to 50° at 120° of flexion [4].

Fig. 3 a Anatomical human body planes [26], b flexion-extension [27], c abduction-adduction [28] and d medial-lateral rotations [28]

3.7 Movements at the Ankle and Foot Articulations

The ankle is a hinge-type synovial joint involved in lower limb stability. The ankle and foot contain 26 bones connected by 33 joints, and more than 100 muscles, tendons and ligaments. The ankle joint is located between the lower ends of the tibia and fibula and the upper part of the talus. Three movements occur around a transverse axis: (a) dorsal and plantar flexion (Fig. 4a), (b) inversion and eversion (Fig. 4b), and (c) pronation and supination. Dorsal flexion is the movement that brings the foot dorsally towards the anterior surface of the leg, with a range of motion of up to 20°, while plantar flexion is just the opposite, with a range of motion from 40° to 50°. Inversion involves moving the heel and forefoot towards the mid-line, bringing the sole of the foot towards the median plane, while eversion consists of moving the heel and forefoot laterally, placing the sole of the foot away from the median plane. Inversion has a range of motion between 30° and 35°, while eversion has a range of motion between 15° and 20°. Pronation is a combination of forefoot eversion and abduction, while supination is a combination of forefoot inversion and adduction [4].

Fig. 4 a Dorsal-plantar flexion and b eversion and inversion [28]
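One practical use of the anatomical ranges of motion quoted in Sects. 3.5–3.7 is as joint limits in an exoskeleton controller. The sketch below shows a simple sagittal-plane joint-limit guard; the joint names, sign convention (negative = extension/plantar flexion) and the exact limit values are illustrative assumptions distilled from the ranges above, not values from any cited design.

```python
# Approximate sagittal-plane ranges of motion, in degrees.
# Convention (assumed): positive = flexion/dorsiflexion,
# negative = extension/plantar flexion.
ROM_DEG = {
    "hip_flexion": (-20.0, 120.0),        # extension 20, flexion 120
    "knee_flexion": (0.0, 120.0),         # flexion 120 with hip extended
    "ankle_dorsiflexion": (-50.0, 20.0),  # plantar flexion ~50, dorsal 20
}


def clamp_to_rom(joint, angle_deg):
    """Clamp a commanded joint angle to the anatomical range, so an
    actuator is never driven past the joint's safe limit."""
    lo, hi = ROM_DEG[joint]
    return max(lo, min(hi, angle_deg))
```

A real controller would additionally rate-limit the command and soften the clamp near the limits, but the table-plus-clamp pattern is the core safety check.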

4 Actuators and Sensors

For lower limb exoskeletons, actuators are analogous to human leg joints: they provide and regulate the joint angles needed to move the exoskeleton and the wearer from one place to another. Many types of actuators are used in the literature for exoskeleton actuation, including electric motors, pneumatic actuators, hydraulic actuators, pneumatic muscle actuators, and series elastic actuators (SEAs). Each of these actuators can be chosen based on the project's requirements and application. In [15], two actuators (a DC motor with a harmonic drive unit and a pneumatic artificial muscle) were used for the hip and knee joints; the choice was made to take advantage of the harmonic drive actuator's high torque and high-precision positioning in relatively small dimensions, and of the electric motor's high torque-to-weight ratio, reduced noise and reliability. Different projects also use different types of actuators: Sankai [29] used electric motors, Beyl [30] used linear pneumatic actuators, Zoss et al. [12] used linear hydraulic actuators

N. A. Marafa et al.

Table 1 Characteristics of exoskeleton designs

| References | Actuator | Sensor | DOF per leg |
|---|---|---|---|
| [12] | Linear hydraulic actuator | 16 accelerometers, 10 encoders, 8 foot sensors, 6 force sensors, 1 load cell, 1 inclinometer, 6 servo hydraulic valves | 7 |
| [14] | DC motors | EEG sensor, angular and pressure sensors | 3 |
| [15] | DC motors and pneumatic artificial muscles | EMG sensor, pressure and linear motion sensors | 7 |
| [17] | DC motor with gear pair transmission and ball screw with slide nut | Gyroscope, position, force and posture sensors | 7 |
| [18] | DC servomotor | Motor encoders, force sensors at the soles of the feet | 3 |
| [29] | DC motor | EMG, encoders (knee, hip, arms and trunk), pressure and force sensors on the soles of the feet | – |
| [30] | Linear pneumatic actuator | Pressure sensors, foot force sensors, knee and hip encoders | 2 |
| [32] | DC servomotor | EMG, force sensors in the feet | 8 |
| [33] | DC motor | Joint encoders, knee and hip load cells, pressure sensors in the soles of the feet | 3 |
| [34] | DC motor | EMG, encoders (knee and hip), pressure and force sensors on the soles of the feet | 2 |

and Aguilar-Sierra et al. [15] used a combination of two types of actuators. Stepper motors and servo motors are capable of very precise position-based control, which is much more difficult to achieve with pneumatic or hydraulic systems. Pneumatic and hydraulic actuators have a high ratio of actuator power to actuator weight. Series elastic actuators have special characteristics such as high force fidelity, extremely low impedance and low friction. The first walking active exoskeleton, developed by Prof. M. Vukobratovic and his team at the Mihailo Pupin Institute, was pneumatically actuated. General Electric Research's full-body powered exoskeleton prototype was actuated by a hydraulic unit. In 1974, the first known active exoskeleton actuated by electric motors was designed. In recent years, however, DC electric motors and linear pneumatic and hydraulic actuators have been the most used. Since the early 1990s, researchers have been developing SEAs, which now play an important role in actuation design. Sensors play a significant role in controlling lower limb exoskeletons. They are the main sources of information in the cognitive human–robot interaction and are mostly used to regulate the force and position of the assistive devices. The sensors most used to control exoskeletons are force sensors, pressure sensors, encoders, Hall effect sensors, pose sensors, electromyography (EMG) sensors and electroencephalography (EEG) sensors, among others. In Table 1, a

list of the actuators and sensors used in the most relevant projects found in the literature is presented. Some projects used more than one type of sensor to control the lower limb exoskeleton, as in [18, 29], while others used only one type, as in [14, 15].

5 Control

The control strategies vary widely from one project to another. The two common forms applied to lower limb exoskeletons are force-based control and position-based control. The force-based strategy is commonly applied to exoskeletons for human performance augmentation; it involves applying a force/torque value based on the assumed portion of the gait cycle. The position-based strategy is usually applied when the user has little ability to interact with the exoskeleton [31]; it is commonly applied to exoskeletons for rehabilitation. In [9, 13, 14], the exoskeletons are controlled by the user's brain waves using an electroencephalography (EEG) headset, based on the human intention to move. In this case, the headset worn by the wearer captures brain signals and sends them to a microcontroller for motion execution. In the automatic control strategy, the exoskeleton is set to mimic normal human walking in a repetitive way.
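As an illustration of the position-based strategy described above, the sketch below implements a simple PD loop driving a joint angle toward a reference; the gains, the reference angle and the unit-inertia joint model are illustrative assumptions, not taken from any cited project.

```python
# Minimal sketch of position-based control for one exoskeleton joint:
# a PD loop drives the measured joint angle toward a reference taken
# from a pre-recorded gait trajectory. All values are illustrative.

KP, KD = 40.0, 2.0   # hypothetical proportional and derivative gains
DT = 0.01            # control period [s]

def pd_torque(theta_ref, theta, theta_dot):
    """Torque command from position error and joint velocity."""
    return KP * (theta_ref - theta) - KD * theta_dot

# Toy simulation of a unit-inertia joint tracking a fixed reference.
theta, theta_dot = 0.0, 0.0
theta_ref = 0.5  # target joint angle [rad]
for _ in range(2000):  # 20 s of simulated time
    tau = pd_torque(theta_ref, theta, theta_dot)
    theta_dot += tau * DT          # unit inertia, gravity neglected
    theta += theta_dot * DT

print(round(theta, 3))  # settles at the 0.5 rad reference
```

In a real exoskeleton the reference would come from a stored gait trajectory indexed by gait phase, and the torque command would be sent to the joint actuator.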


6 Human–Robot Interaction

Human–robot interaction is the interaction between the wearer and the robotic exoskeleton. It can be physical or cognitive. "The interaction between the wearable robot and the human is a critical factor for ensuring smooth and efficient control strategies that will be based on the estimation of the wearer's motion intention" [11]. In cognitive human–robot interaction (cHRI), the cognitive process involves reasoning, planning and execution of a previously identified goal. The main objective is to create a cognitive system that reflects the wearer's intent to move. In this case, the human–robot interaction is designed so that there is some direct communication between the robot and human biological signals such as EMG and EEG. These signals are measured from the human body and reflect the human motion intention directly, so that it can be estimated without information loss or delay. Based on these two types of signals, corresponding control strategies have been developed to assist users in daily living activities and rehabilitation exercises. On the other hand, one of the crucial roles of a cHRI is to make the human aware of the capabilities of the robot while allowing the wearer to maintain control of it at all times. In [15], the human–machine interface (HMI) uses EMG sensors placed on several muscles of the human lower limb to capture their activity, generate a base of gait patterns and send the EMG signals to a computer via a wireless connection. In [14], the EEG sensor has 16 electrodes incorporated in its structure, two of which act as measurement references; it uses a non-invasive method to collect brain signals from the scalp, and the signal is then converted into digital data given as input to a microcontroller. Cognitive interaction is applied in [14, 15, 29, 32, 34].
In physical human–robot interaction (pHRI), the interaction between the human body and the mechanical structure of the lower limb exoskeleton is considered. The key role of a robotic exoskeleton in pHRI is the generation of supplementary forces to empower the wearer and overcome human physical limits. These limits can be natural or the result of a disease or trauma. There is therefore a direct flow of power between both actors (exoskeleton and user), and the pHRI is designed according to the interaction forces between them. In pHRI, the anthropometry of the human body, which is the basis of an exoskeleton design, is considered. This is done during the mechanical design phase, by properly establishing the structural requirements such as the material to be used, number of degrees of freedom, number of actuated joints,


joint restrictions, type of actuator to be used, weight of the robot, height of the links, waist diameter, among others.

7 Conclusions

It was observed during the analysis that many choices of materials, actuators, sensors and control strategies must be made in the design and development of lower limb exoskeletons. However, the comfort of the mechanical structure depends not only on the right choice of these components, but also on the number of DOF used per leg. For a normal and comfortable movement, it is important that the exoskeleton have at least 1 DOF in each of the three anatomical planes at the hip, 1 DOF at the knee and 1 DOF at the ankle (sagittal plane). It was also observed that intention-based control has evolved in recent years: many projects adopt a cognitive control strategy in which the human intention to move, captured through EEG or EMG signals, is used to control the exoskeleton. However, considering the transitional nature of the gait cycle, it can be advantageous to break the controller into multiple control states depending on the phase of the human gait cycle.

Acknowledgements The authors thank CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) for providing free access to all the databases consulted in elaborating this work.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Cenciarini M, Dollar AM (2011) Biomechanical considerations in the design of lower limb exoskeletons. In: IEEE international conference on rehabilitation robotics, pp 10–14
2. Jung JY, Park H, Yang HD, Chae M (2013) Brief biomechanical analysis on the walking of spinal cord injury patients with a lower limb exoskeleton robot. In: IEEE international conference on rehabilitation robotics, pp 1–4
3. Cardona M, Garcia Cena CE (2019) Biomechanical analysis of the lower limb: a full-body musculoskeletal model for muscle-driven simulation. IEEE Access 7:92709–92723
4. Pons JL (2008) Wearable robots: biomechatronic exoskeletons. CSIC, Madrid
5. Hercule, https://www.rb3d.com/en/exoskeletons/exo/
6. Rewalk, https://rewalk.com/rewalk-personal-3/
7. Hal, https://www.cyberdyne.jp/english/
8. H-Mex, https://exoskeletonreport.com/product/h-mex/
9. Yin YH, Fan YJ, Xu LD (2012) EMG and EPP-integrated human–machine interface between the paralyzed and rehabilitation exoskeleton. IEEE Trans Inf Technol Biomed 16:542–549
10. Dollar AM, Herr H (2008) Lower extremity exoskeletons and active orthoses: challenges and state-of-the-art. IEEE Trans Rob 24:144–158
11. Huo W, Mohammed S, Moreno JC, Amirat Y (2016) Lower limb wearable robots for assistance and rehabilitation: a state of the art. IEEE Syst J 10:1068–1081
12. Zoss A, Chu A, Kazerooni H (2006) Biomechanical design of the Berkeley Lower Extremity Exoskeleton (BLEEX). IEEE/ASME Trans Mechatron 11:128–138
13. Mcdaid AJ, Xing S, Xie SQ (2013) Brain controlled robotic exoskeleton for neurorehabilitation. In: 2013 IEEE/ASME international conference on advanced intelligent mechatronics, pp 1039–1044
14. Vinoj PG, Jacob S, Menon VG (2019) Brain-controlled adaptive lower limb exoskeleton for rehabilitation of post-stroke paralyzed. IEEE Access 7:132628–132648
15. Aguilar-Sierra H, Yu W, Salazar S, Lopez R (2015) Design and control of hybrid actuation lower limb exoskeleton. Adv Mech Eng 7:1–13
16. Kim J, Han JW, Kim DY et al (2013) Design of a walking assistance lower limb exoskeleton for paraplegic patients and hardware validation using CoP, vol 10
17. Long Y, Du ZJ, Wang W, Dong W (2016) Development of a wearable exoskeleton rehabilitation system based on hybrid control mode. Int J Adv Rob Syst 13:1–10
18. Onen U, Botsali FM, Kalyoncu M et al (2014) Design and actuator selection of a lower extremity exoskeleton. IEEE/ASME Trans Mechatron 19:623–632
19. Griffin TM, Roberts TJ, Kram R (2003) Metabolic cost of generating muscular force in human walking: insights from load-carrying and speed experiments. J Appl Physiol 95:172–183
20. Winter A (2005) Gait data, International Society of Biomechanics, biomechanical data resources, https://guardian.curtin.edu.au/org/data
21. Donelan JM, Kram R, Kuo AD (2002) Mechanical work for step-to-step transitions is a major determinant of the metabolic cost of human walking. J Exp Biol 205:3717–3727
22. Seethapathi N, Srinivasan M (2015) The metabolic cost of changing walking speeds is significant, implies lower optimal speeds for shorter distances, and increases daily energy estimates. Biol Lett 11(9):20150486. https://doi.org/10.1098/rsbl.2015.0486
23. Luca RS (2016) Lower limbs robot motion based on the probabilistic estimation of the joint angles starting from EMG data of an injured subject. Thesis (Master's degree in Bioengineering), College of Engineering, University of Padua, Italy
24. Carey N, Aspelin K (2005) Establishing pedestrian walking speeds. Portland, https://www.westernite.org/datacollectionfund/2005/psu_ped_summary.pdf
25. Novaes DR, Miranda AS, Dourado VZ (2011) Velocidade usual da marcha em brasileiros de meia idade e idosos. Revista Brasileira de Fisioterapia, São Carlos 15:117–122
26. National Cancer Institute: SEER training modules, https://training.seer.cancer.gov/anatomy/body/terminology.html#planes
27. Clinical Gait, https://clinicalgate.com/the-musculoskeletal-system3/
28. Hall SJ (2015) Basic biomechanics. McGraw Hill, New York
29. Sankai Y (2006) Leading edge of cybernics: robot suit HAL. In: 2006 SICE-ICASE international joint conference, vol 10, pp 1–2
30. Beyl P (2010) Design and control of a knee exoskeleton powered by pleated pneumatic artificial muscles for robot assisted gait rehabilitation. Ph.D. thesis, Vrije Universiteit Brussel
31. Young AJ, Ferris DP (2017) Analysis of state of the art and future directions for robotic exoskeletons. IEEE Trans Neural Syst Rehabil Eng 25:171–182
32. Van Der Kooij H, Veneman J, Ekkelenkamp R (2006) Design of a compliantly actuated exoskeleton for an impedance controlled gait trainer robot. In: Annual international conference of the IEEE engineering in medicine and biology, pp 189–193
33. Winfree KN, Stegall P, Agrawal SK (2011) Design of a minimally constraining, passively supported gait training exoskeleton. In: IEEE international conference on rehabilitation robotics, p 5975499
34. Kawamoto H, Lee SLS, Kanbe S, Sankai Y (2003) Power assist method for HAL-3 using EMG-based feedback controller. In: SMC'03 conference proceedings. 2003 IEEE international conference on systems, man and cybernetics, vol 2, pp 1648–1653

Numerical Methods Applied to the Kinematic Analysis of Planar Mechanisms and Biomechanisms H. N. Rosa, M. A. Bazani, and F. R. Chavarette

Abstract

This article presents a numerical methodology for the kinematic analysis of planar mechanisms based on the closed-loop method. Initially, the loop is defined with the aid of complex notation and, through iterative processes, the configuration of the mechanism is determined as a function of time. For the calculation of speeds and accelerations, a 4th-order numerical derivative was applied. After applying the procedure, satisfactory results were obtained for a certain range of step sizes, showing the procedure to be a useful tool for kinematic analysis.

Keywords: Closed loop · Mechanisms · Kinematics · Numerical derivation · Numerical methods

1 Introduction

According to Norton [1], kinematics aims to create (design) the desired movements of mechanical elements and then to calculate the positions, speeds and accelerations that these elements will generate in their respective components. Kinematic analysis is essential to verify the behavior of a mechanism and is intrinsic to biomechanics, which aims to manufacture prostheses that simulate human movements [2]. In his article on sports injuries in the lower extremities, Nigg [3] points out that combining external force measurements with kinematic descriptions of the geometry obtained by optical methods enables an estimate of the internal forces acting in the human body. Dillman [4] conducted a quantitative three-dimensional description of upper extremity kinematics to assess the effects of external load upon the internal structures and to better understand injuries to the throwing athlete. Long [5] quantified the changes in gait kinematics and kinetics caused by the use of bilateral double rocker sole shoes, a sole used in the treatment and care of patients. Currently, several analytical processes are available to describe the kinematics of a mechanism. Although they guarantee accurate results, the analytical methodology depends on the technical knowledge of the designer and is more time-consuming; in addition, many mechanisms, mainly those with multiple degrees of freedom, have no analytical solution. As a result, many designers use numerical methods for kinematic analysis [6, 7], which we discuss in this article.

1.1 Complex Numbers

To perform the kinematic analysis of a mechanism, the application of vectors is essential. There are several ways to represent a vector, including polar, Cartesian and complex forms. The complex form uses Euler's identity to represent the vector, as shown in Eq. (1), in which the real part corresponds to the X direction and the imaginary part, denoted by j, corresponds to the Y direction:

R e^{jθ} = R cos(θ) + jR sin(θ)    (1)

where R is the magnitude and θ the orientation of the vector. The advantage of complex notation for describing vectors is its compact presentation of data; it is also computationally more efficient for storing values and facilitates mathematical operations such as numerical derivation.
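Equation (1) maps directly onto Python's built-in complex type; a minimal sketch (the link length and angle below are arbitrary):

```python
import cmath
import math

def link_vector(R, theta_deg):
    """Return R*e^{j*theta} as a Python complex number (Eq. 1)."""
    return R * cmath.exp(1j * math.radians(theta_deg))

v = link_vector(5.0, 30.0)
# Real part = R*cos(theta), imaginary part = R*sin(theta)
print(round(v.real, 3), round(v.imag, 3))  # 4.33 2.5
```

Adding link vectors then reduces to complex addition, which is what makes the loop equations below compact.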

H. N. Rosa (✉) · M. A. Bazani · F. R. Chavarette, Department of Mechanical Engineering, Universidade Estadual Paulista, UNESP, Avenida Brasil, 56, Centro, CEP 15385-000 Ilha Solteira, Brazil. e-mail: [email protected] © Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_100



1.2 Closed Loop Method

A closed loop is composed of three elements: known values, which are fixed values independent of the mechanism configuration; unknowns, which are variable values that depend on the mechanism configuration; and degrees of freedom, which are the independent variables required to define the mechanism configuration. The elements can be either angles or links. Based on these elements, a vector loop is drawn through the links of the mechanism with their respective angular orientations. The sum of the vectors must equal zero, defining a closed loop. For a better understanding, consider the example (four-bar knee-joint mechanism) in Fig. 1, in which the known values are shown in black, the unknowns in blue, and the degrees of freedom in red. The sum of vectors C1 and C2 equals the sum of vectors C3 and C4, thus defining a closed loop. The angular orientations of the links C1, C2, C3 and C4 are, respectively, 360° − q1, θ2, θ3 and 0°. Note that the orientation of C1 must be taken relative to the reference directions. Thus, applying the algebra of complex numbers, the mechanism can be described by Eq. (2):

C1 e^{j(360° − q1)} + C2 e^{jθ2} + C3 e^{jθ3} − C4 e^{j0°} = 0    (2)

With Euler's identity, we obtain Eq. (3):

C1 cos(360° − q1) + C2 cos(θ2) + C3 cos(θ3) − C4 = 0
C1 sin(360° − q1) + C2 sin(θ2) + C3 sin(θ3) = 0    (3)

Equation (3) is a system of non-linear equations in two unknowns, which can be solved by iterative methods with carefully chosen initial values. Once the unknowns are determined, any point of the system can be described by applying Euler's identity and summing the vectors, according to Eq. (4):

P = Σ_i R_i e^{jθ_i}    (4)

Using point P in Fig. 1 as an example, its position is given by Eq. (5):

P = C1 e^{j(360° − q1)} + C2 e^{jθ2}    (5)

The method can be applied to mechanisms with more than one loop and more than one degree of freedom. However, the number of unknowns must equal twice the number of loops, since the method accounts for the vector sums in both the X and Y directions.

Fig. 1 Four-bar mechanism and its respective closed loop. Adapted from [8]

Fig. 2 Flow chart of the applied methodology

1.3 Numerical Differentiation

In order to facilitate the kinematic analysis, Mabie [9] proposes applying numerical derivatives to the position data to obtain the velocities, and then to the velocity data to obtain the accelerations. It is a simple and

efficient method of obtaining the entire kinematic description. However, the numerical differentiation depends on the spacing of the position data in time. If the points are too far apart, the speeds and accelerations are inaccurate due to truncation error; if they are too close, the computational effort grows and rounding errors increase. A more effective approach is to increase the order of the numerical differentiation, ensuring more accurate results for larger steps [10]. Equation (6) presents the 4th-order progressive (forward) numerical differentiation:

df/dx = [−25f(x) + 48f(x + h) − 36f(x + 2h) + 16f(x + 3h) − 3f(x + 4h)] / (12h)    (6)

Equation (7) presents the regressive (backward) numerical differentiation of the same order:

df/dx = [3f(x − 4h) − 16f(x − 3h) + 36f(x − 2h) − 48f(x − h) + 25f(x)] / (12h)    (7)

It is important to use both equations to cover all the data: if only Eq. (6) is applied, the last data points, which lack enough forward samples, are not covered, so Eq. (7) must be applied to them.

2 Methodology

The methodology developed, implemented in the Python programming language, follows the sequence described below:

1. Define the closed-loop equations, unknowns, degrees of freedom and known values.
2. Enter the degrees of freedom as a function of time and the known values.
3. Start the iterative method by choosing reasonable initial values. In this work, the Newton-Raphson method for systems of non-linear equations was used, the standard method of the fsolve function of the scipy library.
4. Apply the results obtained as initial values for the subsequent iteration, ensuring rapid convergence. Repeat steps 3 and 4 until all the mechanism configurations defined by the degrees of freedom are covered.
5. Apply the 4th-order numerical differentiation to the position values to calculate the speeds.
6. Apply the 4th-order numerical differentiation to the velocity values to calculate the accelerations.
7. If necessary, add the link vectors to describe a point in the system, applying the respective numerical differentiations.

Figure 2 shows a flow chart of the steps to be followed. In this paper, two mechanisms are analysed: the four-bar knee joint, already presented in Fig. 1, and a quick-return mechanism, presented in Fig. 3. For the knee joint, q1 varies from 0° to 90° with a constant angular velocity of 0.5 rad/s, simulating a knee flexion. For the quick-return mechanism, q1 varies from 0° to 360° with a constant angular velocity of 1.0 rad/s. The information used for the analysis of both mechanisms is shown in Tables 1 and 2. For the unknowns, the listed value is not the real value but the initial estimate.

Fig. 3 Quick return mechanism and its respective closed loop. Adapted from [6]
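The iterative solution of the loop equations (steps 3 and 4 of the methodology) can be sketched as follows. This is a stand-alone version that hand-rolls a damped Newton-Raphson iteration instead of calling scipy's fsolve, using the four-bar link lengths and initial estimates from Table 1:

```python
import math

# Four-bar knee-joint loop, Eq. (3), with link lengths from Table 1 (inches).
C1, C2, C3, C4 = 5.0, 9.0, 7.0, 10.0

def loop_eqs(t2, t3, q1_deg):
    """Residuals of the closed-loop equations (X and Y projections)."""
    a = math.radians(360.0 - q1_deg)
    return (C1 * math.cos(a) + C2 * math.cos(t2) + C3 * math.cos(t3) - C4,
            C1 * math.sin(a) + C2 * math.sin(t2) + C3 * math.sin(t3))

def solve_loop(q1_deg, t2, t3, tol=1e-11):
    """Damped Newton-Raphson using the analytic 2x2 Jacobian of Eq. (3)."""
    for _ in range(200):
        f1, f2 = loop_eqs(t2, t3, q1_deg)
        norm = math.hypot(f1, f2)
        if norm < tol:
            break
        j11, j12 = -C2 * math.sin(t2), -C3 * math.sin(t3)
        j21, j22 = C2 * math.cos(t2), C3 * math.cos(t3)
        det = j11 * j22 - j12 * j21
        d2 = (f1 * j22 - f2 * j12) / det   # Cramer's rule for J d = f
        d3 = (j11 * f2 - j21 * f1) / det
        step = 1.0
        while step > 1e-6:  # backtrack until the residual decreases
            g1, g2 = loop_eqs(t2 - step * d2, t3 - step * d3, q1_deg)
            if math.hypot(g1, g2) < norm:
                break
            step *= 0.5
        t2, t3 = t2 - step * d2, t3 - step * d3
    return t2, t3

# Initial estimates from Table 1: theta2 = 45 deg, theta3 = 90 deg.
t2, t3 = solve_loop(0.0, math.radians(45.0), math.radians(90.0))
res = loop_eqs(t2, t3, 0.0)
print(max(abs(res[0]), abs(res[1])) < 1e-9)  # True: the loop closes
```

Sweeping q1 from 0° to 90° and reusing each solution as the initial guess for the next configuration reproduces steps 3 and 4 of the methodology.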

Table 1 Information regarding the four bar mechanism

| Elements | Values |
|---|---|
| C1 | 5 in. |
| C2 | 9 in. |
| C3 | 7 in. |
| C4 | 10 in. |
| q1 | 0°–90° |
| θ2 | 45° |
| θ3 | 90° |
| θ4 | – |


Table 2 Information regarding the quick return mechanism

| Elements | Values |
|---|---|
| C | 0.4 m |
| D | 0.75 m |
| L1 | 0.4 m |
| L2 | 0.4 m |
| X | 0.4 m |
| θ3 | 45° |
| R | 0.085 m |
| q1 | 0°–360° |

3 Results and Discussion

3.1 Four Bar Mechanism

Figure 4 shows the results obtained for the position, speed and acceleration of the unknowns θ2 and θ3. As can be seen, the angular accelerations of θ2 and θ3 decrease as the flexion movement occurs, to the extent that the velocity is almost constant near the end of the movement. The kinematics of point P were also analyzed, as shown in Fig. 5. Based on the results, it is possible to state that the internal efforts in joint P are greater at the beginning of the movement, due to the higher absolute acceleration values.

3.2 Quick Return Mechanism

Figure 6 shows the results obtained for the position, speed and acceleration of the unknowns X and θ3. From 4 s onward, the acceleration changes abruptly, indicating the quick return of the mechanism. This can be observed in the position graph, where the mechanism takes more time to reach its maximum value and then returns quickly to its initial position.

3.3 Comparison with Analytical Results

After analyzing the mechanisms using the proposed methodology, the results were compared with the analytical methodology proposed by Doughty [6]. The position data were not compared, since their error depends on the limit value of the iterative process, set at 10^−11. The absolute error as a function of the number of steps chosen is shown in Fig. 7. The number of steps ranged from 60 to 360 (h = 0.105 s to h = 0.017 s).

Fig. 4 Values of unknowns for the four bar mechanism as a function of time

As can be seen, the application of a 4th-order numerical derivative guarantees satisfactory results for the speeds and for the accelerations, the latter being less accurate since the data pass through the numerical procedure twice. In the first example, the error decreases continually, with a significant reduction in its rate of change, meaning that reducing the step further will not ensure greater accuracy. In the second example, the error starts to increase at 300 steps (h = 0.021 s), reflecting the contribution of the rounding error to the absolute error as the step decreases [10].
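The truncation behavior discussed above can be checked directly by applying the 4th-order forward and backward formulas (Eqs. 6 and 7) to a function with a known derivative; the test function sin(x) and the step size below are arbitrary choices:

```python
import math

def d4_forward(f, x, h):
    """4th-order forward (progressive) difference, Eq. (6)."""
    return (-25 * f(x) + 48 * f(x + h) - 36 * f(x + 2 * h)
            + 16 * f(x + 3 * h) - 3 * f(x + 4 * h)) / (12 * h)

def d4_backward(f, x, h):
    """4th-order backward (regressive) difference, Eq. (7)."""
    return (3 * f(x - 4 * h) - 16 * f(x - 3 * h) + 36 * f(x - 2 * h)
            - 48 * f(x - h) + 25 * f(x)) / (12 * h)

# The exact derivative of sin at x = 0.3 is cos(0.3); with h = 0.01 both
# formulas agree with it to roughly h^4 accuracy.
h = 0.01
err_f = abs(d4_forward(math.sin, 0.3, h) - math.cos(0.3))
err_b = abs(d4_backward(math.sin, 0.3, h) - math.cos(0.3))
print(err_f < 1e-7, err_b < 1e-7)  # True True
```

Halving h shrinks the truncation error roughly 16-fold until rounding error takes over, which is the behavior seen in Fig. 7.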


Fig. 5 Point P values for the four-bar mechanism as a function of time

Fig. 6 Values of unknowns for the quick return mechanism as a function of time


Fig. 7 Absolute error of the numerical results in relation to the analytical results

4 Conclusion

The closed loop methodology, integrated with complex notation and numerical differentiation, proved to be adequate for the kinematic analysis of planar mechanisms and biomechanisms. With such a tool, it is possible to evaluate the position, speed and acceleration of the points of a mechanism without the need to apply analytical procedures. Its disadvantage, besides not being applicable to spatial mechanisms, is the need to define the initial estimates for the first iterative process, which can be obtained with the aid of graphic tools by drawing the mechanism from the degrees of freedom and the fixed values. The application of a 4th-order numerical derivative proved to be accurate for a range of step sizes in both mechanisms. As noted, a very small step may not be ideal, since rounding errors begin to contribute more than truncation errors. Due to the limited space available, it was not possible to present the influence of the order of the numerical derivative on the absolute error, nor the convergence of the initial estimate, which can be addressed in future works.

Acknowledgements The first author dedicates this work to the Sisplexos Group and the Zebra Aerodesign Team.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Norton RL (2009) Kinematics and dynamics of machinery. McGraw-Hill Higher Education
2. De Vries J (1995) Conventional 4-bar linkage knee mechanisms: a strength-weakness analysis. J Rehabil Res Dev 32(1):36
3. Nigg BM (1985) Biomechanics, load analysis and sports injuries in the lower extremities. Sports Med 2(5):367–379
4. Dillman CJ, Fleisig GS, Andrews JR (1993) Biomechanics of pitching with emphasis upon shoulder kinematics. J Orthop Sports Phys Ther 18(2):402–408
5. Long JT, Klein JP, Sirota NM, Wertsch JJ, Janisse D, Harris GF (2007) Biomechanics of the double rocker sole shoe: gait kinematics and kinetics. J Biomech 40(13):2882–2890
6. Doughty S (1988) Mechanics of machines
7. Uicker JJ, Pennock GR, Shigley JE, McCarthy JM (2003) Theory of machines and mechanisms, vol 3. Oxford University Press, New York
8. Greene MP (1983) Four bar linkage knee analysis. Orthot Prosthet 37(1):15–24
9. Mabie HH, Reinholtz CF (1987) Mechanisms and dynamics of machinery. Wiley, New York
10. Chapra SC, Canale RP (2010) Numerical methods for engineers. McGraw-Hill Higher Education, Boston

Project of a Low-Cost Mobile Weight Part Suspension System L. C. L. Dias, C. B. S. Vimieiro, R. L. L. Dias, and D. M. C. F. Camargos

Abstract

The present work deals with the development of a low-cost mobile partial weight suspension (PWS) device. PWS is a technique used to treat people with motor disorders. There are PWS systems on the market that perform similar functions, but they are not financially viable for a large public and do not allow adaptations. Initially, research was carried out at a university clinic to collect information on the problem. The project was then developed to meet the established requirements. The device is based on a structure made of circular steel tubes fixed by pins, a ratchet system and pulleys for adjusting the height of the suspension belt. To validate the structure, a static finite element simulation was performed. Finally, a low-cost device was designed that satisfactorily complies with the requirements imposed by the clinic's physiotherapy team.

Keywords: Project · Physiotherapy · Rehabilitation · Partial weight suspension

L. C. L. Dias (✉) · C. B. S. Vimieiro, Mechanical Engineering Graduate Program, Federal University of Minas Gerais, Antônio Carlos, Belo Horizonte, Brazil. C. B. S. Vimieiro, Mechanical Engineering Graduate Program, PUC Minas, Belo Horizonte, Brazil. R. L. L. Dias, Clinical Physiotherapy Center, PUC Minas, Belo Horizonte, Brazil. D. M. C. F. Camargos, Control and Automation Engineering, Federal University of Minas Gerais, Belo Horizonte, Brazil

1 Introduction

The physiotherapy process has a fundamental role in the rehabilitation of people with motor disorders, whether adults or children. The Brazilian physiotherapy market has few low-cost technologies to assist professionals in clinical practice, especially equipment that reduces the physical effort they make. A very common and widespread type of system for minimizing effort and improving the treatment of patients with motor disorders is partial weight suspension (PWS). Also known as body weight-supported treadmill training (BWSTT), supported treadmill ambulation training (STAT), or Laufband therapy, PWS is a suspension system that reduces the load on the musculoskeletal system during treadmill training: the force on the patient is the resultant of the gravitational force and the suspension force [1]. According to [2], these devices are increasingly used for gait recovery in different types of patients. This mechanism is of great help to physiotherapists because it allows greater freedom in the treatment of patients compared to treatment without it, or using only stretchers. It also allows a greater range of exercises, demands less effort from the professional and gives the patient greater security. A negative factor of these devices is that they are fixed in a certain place. The most common are formed by a crane system in which a beam is fixed to the structure of the treatment environment; another common type is fixed on tracks, as reported in the experiments of [3]. Several studies point to a significant improvement in the gait of patients with the use of partial suspension devices. Prudente [3] evaluated the effects of treadmill gait training with partial weight suspension (PWS) and tangible reinforcement in children with cerebral palsy and achieved an improvement of up to 9.22%.
Jaques [4] shows that better gait symmetry is obtained with PWS treatment, with better weight transfer to the affected side during activity. Segura [5] concludes from his data that weight suspension is beneficial in the treatment of people after cerebrovascular accidents (CVA). Yoneyama et al. [6] present the efficiency of training with a partial weight suspension device for the balance of hemiparetic patients. As can be seen, the benefits of using PWS are undeniable and extend to all age groups, as shown in the studies mentioned above. Therefore, the main objective of this work is to present the design of a low-cost, manually operated, dimensionally adjustable, and completely dismountable partial weight suspension device for neuro-pediatric treatments. This system is innovative compared to the solutions available on the market, which are generally fixed and do not allow disassembly or adjustment of dimensions.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_101

2 Methodology

The project was based on the methodology presented in [7], with changes to better meet the project's needs. The methodology used is presented as a flow chart in Fig. 1. The first step was to verify the real need for the project; for that, a visit was made to the university clinic where the demand arose, and the need for the development of this project was understood. Next, market research was carried out to identify products that perform similar functions, after which the main objective was outlined, as previously described. In a second step, another visit was made to the institution to learn about the limitations and requirements to be met by the project; in this same phase, a brainstorming session was held to generate ideas about the


possible layout of the equipment. Finally, a summary was made of everything discussed, and the model that best suited the problem was chosen. After that, the design work started: a structural calculation was carried out and then validated via FEM (Finite Element Method), and the selection of the other components concluded the process.

3 Development

For the development of the project it was necessary to know the limitations of the place where the device will be used, so that there are no difficulties or problems in its use. Simultaneously, the requirements that the device must meet were gathered from information collected from the clinic professionals. The design requirements are presented in Table 1. During the visit it was noted that the clinic has several physical limitations: the gypsum ceiling makes any ceiling mounting impossible; access through the doors is limited to 2200 mm in height and 900 mm in width; the corridors are narrow; the treadmills and ramps are not suitable for patients with reduced motor capacity; and there is a small number of rehabilitation rooms. These restrictions are illustrated in Fig. 2. In Fig. 2a, an overview of the clinic where the professionals treat patients can be seen. Figures 2b, c show, respectively, a ramp and a treadmill for gait training. Figure 2d shows one of the physical limitations, a narrow door. Afterwards, market research was carried out for similar products, studies, and patents on the subject. The purpose of this research was to gather information and identify the limitations of commercially available devices. It was possible to notice that, in general, commercial equipment has an average cost ranging from U$1,000.00 to U$4,000.00, which makes it unviable for many clinics. It is worth mentioning that the project was carried out in Brazil, so the budget was prepared in Reais (R$); for easier understanding, values were converted to Dollars (U$) using a conversion rate of U$1.00 = R$5.00.

Fig. 1 Methodology used in the project (Need identification → Support search → Goal setting → List of requirements and limitations → Problem analysis → Synthesis of ideas → Conceptual design → Structural calculation → Virtual analysis)

Table 1 Project requirements

Description                        Values
Weight range of patients (kg)      10–70
Height variation of patients (m)   0.7–1.7
Fully manual drive


Project of a Low-Cost Mobile Partial Weight Suspension System


Fig. 2 Physical space and equipment limitations. a General service location. b Ramp for gait training. c Treadmill for training. d Access to parts of the clinic

Based on the data above, a conceptual design was elaborated with the following aspects: • Structure with 4 supports for greater stability; • Adjustment system for the height of the partial weight suspension belt; • Adjustment system for the overall height; • Use of a ratchet (self-locking) and pulleys for regulation and redistribution of force; • Locking wheels to allow the equipment to move; • Fitting system, to make it completely dismountable. With the above definitions, the design of the mobile device was started. The first step was to define the range of variation of the device's dimensions. A 1900 mm high device was chosen, in which the height adjustment of the belt is done by means of a ratchet. The width of the structure ranges from 1050 to 1450 mm, and this variation is achieved through the displacement of a locking pin. The depth ranges from 1300 to 1900 mm, in steps of 100 mm, also obtained through holes. A schematic of the structure is shown in Fig. 3. The dimensions and materials of the tubes are given below: • Tube 1, AISI 1020 steel, Ø63.5 mm × 1.5 mm; • Tube 2, AISI 1020 steel, Ø60.2 mm × 1.5 mm; • Tube 4, AISI 1020 steel, Ø57.1 mm × 1.5 mm;

• Plate 3, AISI 1020 steel, 100 mm × 300 mm × 5 mm.

This article presents only the structural calculation of the connecting beam between the support feet (Fig. 3), because it is the critical region of the structure. In a simplified way, the analysis region can be modeled as a beam fixed at both ends, subjected to bending by a point load acting at the center of the part. The length considered in the calculations was the sum of the length of part 1 and the length of the portion of part 2 that is concentric with part 1. According to [8], the maximum deflection of a beam fixed at both ends is given by Eq. (1):

y = Pl³ / (192EI)    (1)

in which P is the applied force (1500 N), l is the beam length (1400 mm), E is the elastic modulus of the material, and I is the moment of inertia of the cross section, which for a tube is given by Eq. (2):

I = π(D⁴ − d⁴) / 64    (2)

where D is the external diameter and d is the internal diameter. Applying the above values, a maximum deflection of 0.7444 mm is found. However, this value is known to be overestimated, since neither the variation in tube diameter nor the tube overlap, which stiffens the structure, was taken into account.


Fig. 3 Structure parts

As it is a complex problem, a finite element analysis was performed to validate the calculation and show the efforts to which the whole structure is submitted. For the simulation, bar elements were used, with a 50 mm element mesh, movement restrictions at the feet, and a point load at the center of part 1. A static simulation was chosen because the load is not applied suddenly, so the problem can be simplified. The displacement results are shown in Fig. 4 and the equivalent stress values in Fig. 5.

Fig. 4 Deformation in static simulation

The results in Fig. 4 show a maximum deflection smaller than the one calculated theoretically. This is because the simulation considers the juxtaposition of the tubes, which, as mentioned previously, tends to stiffen the structure, and it also captures the influence of the sides of the structure on the flexing of the system. Figure 5 shows the results of the stress combination, with a maximum value of 11.975 MPa. However, this value must be multiplied by a stress concentration coefficient, because of the presence of holes that were not considered in the simulations. To perform the correction, the chart provided by [9] for a cylindrical tube with a transverse hole in bending was used, which gives a kt (stress concentration coefficient) of approximately 2.1. Equation (3) shows the correction applied:

σmax = kt · σnominal    (3)

Therefore, from Eq. (3), a stress of 25.1475 MPa is obtained. This value is below the steel yield limit (330 MPa) and below the allowable stress (165 MPa, half the yield limit), showing that the structure is robust enough for the efforts of a 1500 N load. The pins that connect the parts of the structure are subjected to pure shear (τ). For this condition, the design follows Eq. (4):

τ = V / A    (4)

in which τ is the shear stress, V is the shear force, and A is the cross-sectional area of the pin. The simulation shows that the axial force in the structure is 50.146 N, and this force is equivalent to the shear force that the pin is


Fig. 5 Maximum combined stress

subjected to. The selected pin is 12.7 mm in diameter, so using Eq. (4) a shear stress of 0.39 MPa is obtained, well below the shear strength of the material. To adjust the height of the belt, a 4636 Bremen model ratchet is used, with a load capacity of 815 kg horizontal, 260 kg vertical, and 10 m of cable.
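Both hand checks above can be reproduced numerically; a minimal sketch using the values quoted in the text (kt = 2.1 from the chart, 11.975 MPa from the FEM, 50.146 N on the Ø12.7 mm pin):

```python
import math

# Stress concentration correction, Eq. (3)
kt = 2.1                        # stress concentration coefficient from [9]
sigma_nominal = 11.975          # MPa, maximum combined stress from the FEM
sigma_max = kt * sigma_nominal  # 25.1475 MPa, as reported in the text

# Pin shear check, Eq. (4)
V = 50.146                      # N, axial force from the simulation
A = math.pi * 12.7**2 / 4       # pin cross-sectional area, mm^2
tau = V / A                     # MPa; about 0.4 (the text rounds down to 0.39)
print(round(sigma_max, 4), round(tau, 3))
```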

4 Results

Once all the components of the device had been defined, assembly in a virtual environment was carried out to check for possible interference and to avoid setbacks during physical testing. To guide the cables and change the direction of the force applied by the ratchet, a pair of pulleys was used; in addition, to provide the desired mobility, casters with locking feet were used. The complete device is shown in Fig. 6. To avoid oxidation and corrosion, the AISI 1020 steel was painted as a protective treatment, which also makes the device more aesthetically pleasing to professionals and patients. A budget was prepared to verify that the cost of the device is below that of the commercial products found on the market; it is shown in Table 2. It is worth mentioning that the project was carried out in Brazil, so the budget was prepared in Reais (R$); for easier understanding, values were converted to Dollars (U$) using a conversion rate of U$1.00 = R$5.00. Clinical tests to gather evidence of its effectiveness have not been performed yet. The next step is to build a prototype to perform a mechanical validation and obtain its approval from

Fig. 6 Final model of the partial weight suspension device


Table 2 Project budget

Description                   Quantity   Unitary value   Total value
Ratchet                       1          U$ 44.00        U$ 44.00
Pulley                        2          U$ 6.00         U$ 12.00
Tube Ø63.5 mm × 1.5 mm        1          U$ 20.00        U$ 20.00
Tube Ø60.3 mm × 1.5 mm        1          U$ 19.00        U$ 19.00
Tube Ø57.3 mm × 1.5 mm        2          U$ 18.00        U$ 36.00
Lockable caster               4          U$ 3.00         U$ 12.00
Pin Ø12.7 mm × 150 mm         10         U$ 0.60         U$ 6.00
Product finish                           U$ 12.00        U$ 12.00
Execution service                        U$ 60.00        U$ 60.00
Total                                                    U$ 221.00

the ethics committee so that it can be used on patients. However, it is worth mentioning that similar systems are widely used and have proven effective in the treatment of people with motor disorders.

5 Conclusion

The present work presented a robust device that meets the demands raised and has a reduced price compared to other products found on the market. In addition, the mechanism has the advantage of being completely dismountable, which favors its transport, interchangeability, and displacement. Therefore, it is concluded that the objectives of the work were achieved satisfactorily, as the work presents the development stages in detail. The work is still in the design phase, the next step being the construction of a prototype for physical tests and then its implementation. In addition, as future work, system automation is being considered and further cost reductions are sought.

Acknowledgements This work was carried out with the support of the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES), the Mechanical Engineering Graduate Studies Program (PPGMEC), the Clinical Physiotherapy Center of PUC Minas, and the LABBIO Bioengineering Laboratory at UFMG.

Conflict of Interest The authors declare that they have no conflict of interest.


References
1. Haupenthal A, Schutz GR, de Souza PV, Roesler H (2008) Análise do suporte de peso corporal para o treino de marcha. Fisioter Mov 21(2):85–92
2. Patiño M, Gonçalves A, Monteiro B, Santos I, Barela A, Barela J (2007) Características cinemáticas, cinéticas e eletromiográficas do andar de adultos jovens com e sem suporte parcial de peso corporal. Revista Brasileira de Fisioterapia 11:19–25
3. Prudente COM (2006) Comportamento motor em crianças com paralisia cerebral: efeitos do treino de marcha em esteira com suspensão de peso e conceito neuroevolutivo Bobath associado ou não ao reforço tangível. Universidade Católica de Goiás, Goiânia, vol 44, no 2, pp 1–47
4. Jaques EdS, Capra R, Schmidt D, Schaan CW, Rossato D (1999) Treino de marcha com suspensão parcial de peso. Porto Alegre, vol 37, p 2356
5. Segura MSP (2005) O andar de pacientes hemiplégicos no solo e na esteira com suporte total e parcial de peso. Universidade Estadual Paulista, Instituto de Biociências, Rio Claro, pp 1–151
6. Yoneyama SM, da Silva TLN, Baptista JDS, Mayer WP, Paganotti MT, Costa PF (2009) Eficiência do treino de marcha em suporte parcial de peso no equilíbrio de pacientes hemiparéticos. Rev Med 88(2):80–86. https://doi.org/10.11606/issn.1679-9836.v88i2p80-86
7. Norton RL (2013) Projeto de Máquinas: uma abordagem integrada. Porto Alegre, p 1028
8. Pinheiro LM, Catoia B, Catoia T (2010) Tabela de vigas: deslocamentos e momentos de engastamento perfeito. São Carlos, p 10
9. Pilkey WD (2011) Stress concentration factors, 2nd edn. Wiley, New York

Design of a Robotic Device Based on Human Gait for Dynamical Tests in Orthopedic Prosthesis D. Rosa, T. M. Barroso, L. M. Lessa, J. P. Gouvêa, and L. H. Duque

Abstract


In this work, the mechanical design of a machine for testing transtibial prostheses is proposed. The system is developed as a robotic mechanism that executes movements above the prosthesis to reproduce the human gait. However, instead of the usual devices, which implement a single-axis movement model, this work employs a gait similar to the user's to test the prosthetic device. This approach is more realistic, as it allows a prosthesis to be tested for different types of gaits and different global movements, such as climbing a stair. To accomplish this objective, three steps are proposed. First, image analysis extracts angle data from the human gait. Then, the movements are converted to the mechanism's joint angles. Finally, the robot performs the gait based on these angles. Although it is an initial approach, open-loop simulations demonstrate that it is useful for studying the dynamics of an orthopedic prosthesis, as well as for performing fatigue analysis of this kind of equipment and even for verifying the integrity of the prosthesis to guarantee the user's safety. Keywords





Orthopedic prosthesis · Mechanical project · Dynamical modeling · Robotic mechanism

D. Rosa (corresponding author), Pontifical Catholic University of Rio de Janeiro, Marques de Sao Vicente, 225, 22451-900, Rio de Janeiro, Brazil. e-mail: [email protected]
T. M. Barroso, Pontifical Catholic University of Paraná, Curitiba, Brazil
L. M. Lessa · J. P. Gouvêa · L. H. Duque, Fluminense Federal University, Volta Redonda, Brazil

1 Introduction

Human gait is an essential subject of study in biomechanics. One important application is the development of prosthetic devices, whose main objective is to imitate the function of the human body [1]. Many developments in recent decades have improved the quality of orthopedic prostheses and promoted the wellness of amputees. Some are related to the manufacture of low-cost devices, an initiative that can popularize orthopedic prostheses for deprived people [2, 3]. Others are related to the manufacture of high-performance devices, for athletic use, for instance [4]. Some research addresses the testing of prostheses, to guarantee the quality of the device and avoid risks during daily use [5]. The present work also aims to develop a device for prosthesis testing; the novelty is that this device takes the real human gait into account. Although orthopedic prosthesis testing is a long-standing procedure, the technologies, models, and implementations have changed little over time, remaining similar to the one proposed in [6]: a dynamic loading machine with single-direction actuation and a load cell for force feedback. In this case, the movement executed on the prosthesis is very similar to that of traditional traction and fatigue test machines. Some proposals adopt a rotational joint at the base of the test machine to simulate the interaction between the ground and the human foot [5]. This approach is useful for precise foot-prosthesis analysis and, according to the authors, has the potential to be used with different and advanced models. However, these analyses still use a one-directional movement, assuming a knee that is unable to flex. That is very different from the real human gait, which can be modeled as an inverted pendulum with several degrees of freedom [7].
The implementation of a machine for full directional prosthesis analysis is complex and potentially expensive. Consequently, to obtain a more detailed analysis than

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_102


is actually performed, the double pendulum approach is useful. Moreover, most works in the literature adopt a model for the human body and calculate inverse dynamics to investigate human gait. For humanoid robots or other artificial organisms this methodology is adequate, as they are capable of reproducing the ideal implementation. For humans, conversely, it is an approximation that generates imprecision, as each person has their own physiology and gait. In the current era of additive manufacturing and computational intelligence, better and easier solutions can be developed to address this issue. Here, instead of numerical models to solve the inverse kinematics and dynamics problem, image processing is used to acquire data from a video of a walking person. Once the data are processed, the robotic device can execute the movements to reproduce the gait. This work is divided into three parts. First, the image analysis procedure is described. Then, the machine and its dynamics are explained. Finally, open-loop simulations show the characteristics of the system, such as the torques and velocities required of the robotic tester. These results are essential for the future assembly of a real machine, which is one of the motivations for the research presented in this paper. This work was made in collaboration with the LBRSC (Laboratory of Biomechanics, Robotics, and Computational Simulation) at the Industrial Metallurgical Engineering School of Volta Redonda (Federal Fluminense University, Brazil).

2 Methods

2.1 Image Analysis

The first step in designing the robotic device for prosthesis tests is to determine the trajectories it must perform. In [8], a formulation for a 3-degree-of-freedom above-knee mechanism is considered: a prismatic joint reproduces the vertical displacement, a revolute joint controls the thigh angle, and a passive joint represents the knee. This mechanism requires the estimation of a set of parameters related to mechanical friction, while some biological characteristics cannot be modeled. Therefore, this solution inserts many uncertainties into the tests. Moreover, the mathematical models are not capable of representing different types of gaits, which is a major restriction of this strategy. An alternative is shown in [9], where the authors used video frames to analyze the real movement of a person. That methodology is adapted and applied here. Considering the implementation of a below-knee prosthesis, two angles are sufficient to describe the gait. Figure 1 shows that these angles can be obtained from a video in which markers (small foam balls) are spread along the leg of a person. The angle m1 is obtained between the horizontal axis and the vector described by the line that connects the hip with the knee. The angle m2 is obtained between the m1 support vector and the line that connects the knee and the ankle. The angle related to foot rotation is not considered in this case, as it corresponds to a passive joint in the usual prosthesis. The extraction of angles is performed using the software Kinovea (version 0.8.15), and the measurements are manually exported to MATLAB (version R2019a). The sequence of images is shown in Fig. 2 and can be obtained at https://youtu.be/NgKKwHtmZ0I. For better estimation, the maximum number of frames from the video can be used; however, to demonstrate that the technique is robust, a small number of frames is used in this work. A polynomial approximation can be used to relate the angles to the joint variables. This procedure is analogous to obtaining the inverse kinematics in numerical models. For this analysis, the hip is considered the origin of the reference frame, and the ankle is the end-point of the double pendulum on which the model is based. Usually, the hip has vertical displacements during walking; however, these are small and do not provoke significant deviations in the recorded angles. Table 1 shows the measured angles for eleven frames of the video used in this work. Equations 1 and 2 show the

Fig. 1 Angles obtained from one video frame


Fig. 2 Images used for data acquisition during the human gait

5th-degree polynomials y1 and y2, obtained from the m1 and m2 sets of angles. These equations are obtained in the MATLAB environment, using interpolation. All algorithms used here were implemented by the authors. Figures 3 and 4 show the final adjustment against the original discrete data. As expected, the interpolation error is minimal: it reaches 2° in the worst case, but the mean error is less than 0.1°. This result can be improved if more frames of the video are employed. However, even in this simplified case, the methodology proves applicable to the movements studied here.

y1 = 543x⁵ + 1569x⁴ − 1594x³ + 667x² − 62x + 245    (1)


Table 1 Angles obtained from the video file

Frame   m1 (°)   m2 (°)
1       245      15
2       244      21
3       252      22
4       260      21
5       260      21
6       266      20
7       268      24
8       275      24
9       282      31
10      282      41
11      263      60

Fig. 4 Adjustment of curves for the second joint, m2

Fig. 5 Design of the proposed solution

Fig. 3 Adjustment of curves for the first joint, m1

y2 = 146x⁵ − 455x⁴ + 557x³ − 322x² + 80x + 15    (2)
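The fitting step above can be sketched with NumPy's polynomial tools, using the Table 1 angles. The frame index is mapped here to x in [0, 1], an assumed normalization since the paper does not state the abscissa scaling:

```python
import numpy as np

# Angles from Table 1 (eleven frames of the gait video)
m1 = [245, 244, 252, 260, 260, 266, 268, 275, 282, 282, 263]
m2 = [15, 21, 22, 21, 21, 20, 24, 24, 31, 41, 60]
x = np.linspace(0.0, 1.0, len(m1))   # assumed normalized abscissa

p1 = np.polyfit(x, m1, 5)            # 5th-degree fit, highest power first
p2 = np.polyfit(x, m2, 5)

# Worst-case residual of the fit at the sampled frames
err1 = float(np.max(np.abs(np.polyval(p1, x) - np.array(m1))))

# Velocities and accelerations then follow by direct differentiation
v1 = np.polyder(p1)                  # angular velocity polynomial
a1 = np.polyder(v1)                  # angular acceleration polynomial
print(round(err1, 2))
```

Note that `polyfit` performs a least-squares fit; the residual it leaves at the sampled frames is the interpolation error the authors report in Figs. 3 and 4.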

2.2 Mechanical Device

The robotic tester machine proposed in this work has two degrees of freedom in its actuation. Two revolute joints are adopted, and electric motors attached to reducers are responsible for the drive system. One motor guides the angle m1, a rotational displacement about the center of the rotational axis. This crank-like mechanism is an interesting solution, as it allows the system to behave as a treadmill device. The second motor is responsible for the orientation of the lower leg, as already stated. This is only possible because the foot is considered a passive joint; in this case, the movement has no intrinsic restrictions and the foot can even leave the machine ground. Figure 5 shows the design of the robotic device for prosthetic dynamical tests. To operate the device, the user must first assemble the prosthesis in the leg assembly (4), shown in Fig. 6. When the machine is in operation, motor 1 (1) rotates the disk (2) and emulates the thigh movement, while motor 2 (3) moves the leg assembly (4), moving the prosthesis. The prosthesis walks over the treadmill composed of the rollers (6), belt (7), and base (8), which moves it during the gait procedure. However, before manufacturing and control, kinematic and dynamical models of the device should be proposed and simulated, to observe the system's behavior and its requirements. The kinematic model is simply implemented, as it can be obtained by direct differentiation of Eqs. 1 and 2: the velocities are described by the first derivatives and the accelerations by the second derivatives of y1 and y2. For the dynamics, the model of the actuated double pendulum, which is analogous to the two-link planar robotic manipulator, is implemented. The general formulation,


Fig. 6 Exploded view of the system

shown in [10], is given by Eq. 3, in which B is the term with the inertial parameters, C represents the centripetal and Coriolis force terms, q is the vector of positions in joint space, g(q) collects the gravitational terms, τ is the vector of torques actuating the joints, J is the Jacobian, and h is the vector of forces acting on the foot extremity:

B(q)q̈ + C(q, q̇)q̇ + g(q) = τ − Jᵀ(q)h    (3)

Equations 4 and 5 are the scalar forms of Eq. 3, already expressed as the torques on joints 1 and 2, respectively. The terms a1 and a2 are the lengths of the links of the analogous pendulum model used to represent the robotic device. The mass of each link is represented by ml1 and ml2, and the moments of inertia by Il1 and Il2. The masses of the motors are mm1 and mm2, their moments of inertia are Im1 and Im2, and the reduction ratios are k1 and k2.

τ1 = (Il1 + 0.25 ml1 a1² + k1² Im1 + Il2 + ml2 (a1² + 0.25 a2² + a1 a2 cos ϑ2) + Im2 + mm2 a1²) q̈1 + (Il2 + 0.25 ml2 (a2² + 2 a1 a2 cos ϑ2) + k2 Im2) q̈2 − ml2 a1 a2 sin(ϑ2) q̇1 q̇2 − 0.5 ml2 a1 a2 sin(ϑ2) q̇2² + a1 g (0.5 ml1 + mm2 + ml2) cos ϑ1 + 0.5 ml2 a2 g cos(ϑ1 + ϑ2) + f1    (4)

τ2 = (Il2 + 0.25 ml2 (a2² + 2 a1 a2 cos ϑ2) + k2 Im2) q̈1 + (Il2 + 0.25 ml2 a2² + k2² Im2) q̈2 + 0.5 ml2 a1 a2 sin(ϑ2) q̇1² + 0.5 ml2 a2 g cos(ϑ1 + ϑ2) + f2    (5)

Optimal parameters for different situations can be obtained from a set of simulations. Here, the initial considerations are for a representative man of 1.8 m height and 70 kg mass. The forces on the foot have a horizontal component of 50 N and a vertical component of 500 N. The lengths of the analogous pendulum links are 0.42 m for the first link and 0.15 m for the second. The mass and moment of inertia of the first link are 7.35 kg and 0.8 kg.m²; for the second link, 3.3 kg and 0.2 kg.m². The reduction ratio is fixed at 1:10 for both motors. The masses of the first and second motors are 3 and 2 kg, and their moments of inertia are 0.01 and 0.005 kg.m², respectively. The mass is considered to be applied at the center of mass of each link. The simulations have a duration of 1.4 s and the update frequency is set to 1 kHz. Moreover, it should be noted that the described model is for the robotic tester. The objective of this work is to provide a machine design to test orthopedic prostheses; the elements of the proposed dynamical model do not consider the characteristics of the prosthesis, such as its moment of inertia. Consequently, for future implementation and analysis of the prosthesis, the model should be improved with a third link. If a complete analysis is necessary, such as the evaluation of supination/pronation in gait, a three-dimensional device can be developed, which would also be useful in the rehabilitation area for biomechanical analysis.
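As a sanity check of Eqs. (4) and (5), the gravity terms alone give the static holding torques. A minimal sketch with the parameters above, joint rates and accelerations set to zero, and the foot forces f1, f2 omitted (a simplification for this check only):

```python
import math

g = 9.81                       # gravitational acceleration, m/s^2
a1, a2 = 0.42, 0.15            # link lengths, m
ml1, ml2 = 7.35, 3.3           # link masses, kg
mm2 = 2.0                      # second motor mass, kg
th1, th2 = 0.0, 0.0            # fully stretched horizontal configuration

# Gravity terms of Eqs. (4) and (5); every velocity, acceleration and
# external-force term vanishes in this static case
tau1 = a1 * g * (0.5 * ml1 + mm2 + ml2) * math.cos(th1) \
       + 0.5 * ml2 * a2 * g * math.cos(th1 + th2)
tau2 = 0.5 * ml2 * a2 * g * math.cos(th1 + th2)
print(round(tau1, 2), round(tau2, 2))   # static holding torques, N.m
```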

3 Results and Discussion

The dynamic model is implemented in the MATLAB (R2019a) environment. Open-loop simulations are performed to estimate the behavior of the system before its construction. The positions achieved by the robot joints are evaluated to verify the existence of singularities; the same is done for joint velocities, accelerations, and torques. It is also necessary to verify whether the trajectories are physically achievable by real motors and gear systems. Figures 7, 8, 9, 10, 11, 12, 13 and 14 show the curves obtained in the simulations. In Figs. 7 and 8, a continuous movement is observed and the device does not encounter any singularity along its trajectory. This is expected, as the angles come from the polynomial approach shown in Figs. 3 and 4,


Fig. 7 Angular position simulated for the robotic device joint 1

Fig. 9 Angular velocity simulated for the robotic device joint 1

Fig. 8 Angular position simulated for the robotic device joint 2

Fig. 10 Angular velocity simulated for the robotic device joint 2

which is well fitted to the data from the processed images. The angular velocities, obtained through direct derivation of the joint angles, are shown in Figs. 9 and 10. These velocities are low, as they correspond to the real gait shown in Fig. 2. In contrast, at the end of the trajectory, the first joint presents an abrupt variation, related to the moment when the foot loses contact with the ground. This condition generates the high accelerations seen in Figs. 11 and 12. Thus, high-torque motors are necessary to accurately reproduce the gait, as shown in Figs. 13 and 14. The trajectories obtained correspond to data from the literature [11, 12], though slightly smoothed due to the polynomial approximation used. This smoothing is

interesting because it generates a simplified input for the control system to be developed in the future. Moreover, the analysis performed in this work is focused on the robotic device; thus, the torque, related to the electric current applied to the motor, is the variable to be controlled. This approach is not usual in biomechanics [6], where the reaction forces measured at the foot are normally adopted as system feedback. To relate these forces to the torques at the motors, signals from a load cell installed on the treadmill base can be used. This relationship, combined with the evaluation of the dynamic model at each moment of the movement, also allows the estimation of the energy dissipated in the prosthesis during the dynamic tests.
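The force-to-torque mapping mentioned here is the Jᵀ(q)h term of Eq. 3. A minimal sketch for the two-link planar model, with the link lengths and foot forces from the text and illustrative joint angles (not data from the paper):

```python
import math

a1, a2 = 0.42, 0.15                 # link lengths, m (from the text)
hx, hy = 50.0, 500.0                # foot reaction forces, N (from the text)
q1, q2 = math.radians(-80.0), math.radians(20.0)  # illustrative configuration

s1, c1 = math.sin(q1), math.cos(q1)
s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)

# Geometric Jacobian of the planar two-link arm (end-point position)
J = [[-a1 * s1 - a2 * s12, -a2 * s12],
     [ a1 * c1 + a2 * c12,  a2 * c12]]

# tau_ext = J^T h: equivalent joint torques produced by the foot forces
tau1 = J[0][0] * hx + J[1][0] * hy
tau2 = J[0][1] * hx + J[1][1] * hy
print(round(tau1, 2), round(tau2, 2))   # N.m
```

Inverting this relation at each instant is what allows load-cell readings at the foot to serve as feedback for a torque-controlled drive.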


Fig. 11 Angular acceleration simulated for the robotic device joint 1

Fig. 12 Angular acceleration simulated for the robotic device joint 2

4 Conclusions

In this work, the design of a prosthesis testing device was proposed. From recorded video frames, image processing was used to estimate the angles to be performed by a mechanical system. Simulations were carried out to verify whether a robotic device is capable of reproducing the human gait. From open-loop simulations, it was possible to observe that high-performance motors can accomplish realistic dynamical tests with an orthopedic prosthesis. To improve the physical system, a lighter structure can reduce the torques required for the movement. Changes in the reduction factor may be necessary to support the choice


Fig. 13 Torque simulated for the robotic device joint 1

Fig. 14 Torque simulated for the robotic device joint 2

of motors to be adopted. Closed-loop simulations must be carried out to verify which control algorithms can guarantee the execution of the desired trajectories. Torque control can be implemented to ensure that the forces required for testing the prosthesis are properly applied. Finally, algorithms for automatic image processing can be developed, guaranteeing agility and reliability for the analysis shown in this work.

Acknowledgements The authors would like to thank the Brazilian institutions Capes (Coordination for the Improvement of Higher Education Personnel), CNPq (National Council for Scientific and Technological Development), LabRob PUC-Rio (Laboratory of Robotics of the Pontifical Catholic University of Rio de Janeiro), PPGEM-PUCPR (Pontifical Catholic University of Paraná Mechanical

680 Engineering Postgraduate Program), and LBRSC EEIMVR–UFF (Laboratory of Biomechanics, Robotics, and Computational Simulation–Industrial Metallurgical Engineering School of Volta Redonda— Federal Fluminense University) for their support for this work. Conflict of Interest The authors declare that they have no conflict of interest.

D. Rosa et al.

References
1. Jwegg MJ, Hassan SS, Hamid AS (2015) Impact testing of new athletic prosthetic foot. Int J Curr Eng Technol 5:1
2. O'neill C (2014) An advanced, low cost prosthetic arm. Sensors, Valencia, Spain. https://doi.org/10.1109/ICSENS.2014.6985043
3. Rouse EJ, Mooney LM, Hargrove LJ (2016) The design of a lightweight, low cost robotic knee prosthesis with selectable series elasticity. IEEE International Conference on Biomedical Robotics and Biomechatronics, Singapore
4. Hobara H (2014) Running-specific prostheses: the history, mechanics, and controversy. J Soc Biomechan 38:2
5. Staker F, Blab F, Dennerlein F et al (2014) Method for sports shoe machinery endurance testing: modification of ISO 22675 prosthetic foot test machine for heel-to-toe running movement. Procedia Eng 72:405–410
6. Wevers HW, Durance JP (1987) Dynamic testing of below-knee prosthesis: assembly and components. Prosthet Orthot Int 11:117–123
7. Ferreira ARS, Gois JAM (2018) Análise da cinemática e dinâmica da marcha humana. Revista Militar De Ciência E Tecnologia 35:3
8. Richter H, Simon D, Smith WA et al (2015) Dynamic modeling, parameter estimation and control of a leg prosthesis test robot. Appl Math Model 39:559–573
9. Lessa LM, Gouvêa JP (2018) Análise biomecânica da marcha humana durante o subir e descer escadas. Cadernos UniFOA 38:21–36
10. Sciavicco L, Siciliano B (2000) Modelling and control of robot manipulators. Springer, London
11. Quintero S, Reznick E, Lambert DJ et al (2018) Intuitive clinician control interface for a powered knee-ankle prosthesis: a case study. IEEE J Transl Eng Health Med 6:1–9. https://doi.org/10.1109/JTEHM.2018.2880199
12. Kondratenko Y, Khademi G, Azimi V et al (2017) Robotics and prosthetics at Cleveland State University: modern information, communication, and modeling technologies. Commun Comput Inform Sci, Springer, Cham. https://doi.org/10.1007/978-3-319-69965-3_8

Analysis of an Application for Fall Risk Screening in the Elderly for Clinical Practice: A Pilot Study P. V. S. Moreira, L. H. C. Shinoda, A. Benedetti, M. A. M. R. Staroste, E. V. N. Martins, J. P. P. Beolchi, and F. M. Almeida

Abstract

To test the accuracy of an App in determining the risk of falls in elderly women, 24 elderly women admitted to a hospital were evaluated with the TechOne App regarding self-report of falls, with the clinical questionnaire ("TFQ") and with subjective motor evaluations using the questionnaire "TMPSE". Then, in one faller and one non-faller volunteer, stabilometric variables were measured in the static bipedal posture by the inertial sensing algorithm "TBM". The scores of the questionnaires were compared between fallers and non-fallers by the ANOVA test for independent measures and by their ability to discriminate fallers, using the Area Under the Curve (AUC) of the ROC curve and traditional classification measures. The TechOne App proved technically viable for obtaining stabilometric variables, showing a clear difference between one faller and one non-faller volunteer. The TFQ showed excellent metrics (Sensitivity: 90%; Specificity: 71.4%; Accuracy: 79.17%; AUC: 0.875) and a difference between groups (p < 0.05). The TMPSE score showed no significant differences and presented lower classification metrics (Specificity: 64.3%; AUC: 0.654). It is concluded that the TBM is potentially viable and the TFQ is accurate in predicting falls. Keywords

Fall prevention · Balance · Clinical evaluation · Inertial sensing

P. V. S. Moreira (corresponding author): Programa de Engenharia Biomédica (PEB/COPPE), Federal University of Rio de Janeiro, Av. Horácio Macedo, 2030, Block H, no 327, Rio de Janeiro, Brazil. e-mail: [email protected]
L. H. C. Shinoda, A. Benedetti, F. M. Almeida: Techbalance, São Paulo, Brazil
M. A. M. R. Staroste, E. V. N. Martins, J. P. P. Beolchi: Amil Seguros, São Paulo, Brazil



1 Introduction

The elderly population in the world is growing rapidly, concomitantly with the increase in the number of deaths and morbidities due to falls in the elderly each year [1]. Falls are the main cause of death-related injuries in the elderly aged 65 and over [2]. Approximately 30% of the elderly fall annually [3], making falls responsible for 95% of femoral neck fractures and one of the main causes of traumatic brain injury [4]. In Brazil, the prevalence of falls in elderly people living in urban areas is 25%. Of these, 1.8% resulted in hip or femur fractures and, among them, 32% required surgery with prosthesis placement [5]. In 2012, the mortality rate in Brazil due to falls was 375 per 100,000 elderly people [6]. Given the frequency and severity of falls, there is a need to develop easy-to-use tools that are accurate in predicting falls.

Falls cause physical injuries and lead to the shortening of daily activities, physiological deconditioning and reduced quality of life [7]. Due to the severity of the injuries resulting from falls, injured patients frequently need, in addition to emergency hospital care, surgeries and long periods of hospitalization [6]. Deciding when patients can be discharged is a critical and complex need in hospital clinical practice that depends on an interdisciplinary and multifactorial approach involving the control of comorbidities, assessment of the patients' psychological and physical condition, family support and home safety, in addition to considering the length of the hospital stay as a possible risk factor for the probability of falling [4]. Problems with mobility, balance and loss of muscle strength, chronic conditions such as cardiovascular disease, diabetes mellitus, depression and arthritis, as well as many of the medications used to treat them, can increase the risk of falling [4] and complicate hospitalization and discharge. All of these factors can be measured using specific psychometric constructs [4, 8, 9], functioning as risk meters to support preventive technical decisions.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_103



Currently, to predict falls in hospitalized patients, only two questionnaires are validated for the Brazilian population: the Morse Fall Scale (MFS) [10] and the Johns Hopkins Fall Risk Assessment Tool (JHFRAT) [8]. The MFS presents good validation metrics (Sensitivity: 95.2%; Specificity: 64%) for Brazilian hospital patients [10], while the JHFRAT has shown excellent sensitivity (97%) in detecting true fallers. However, the specificity of the classification given by the score of this questionnaire is very low (6%) [8]. In addition, in both studies the sample of volunteers was not restricted to the elderly, which makes it necessary to create, and to study the classification metrics of, an algorithm that accurately predicts falls in the elderly population and that, while prioritizing sensitivity, also achieves acceptable specificity.

Another relevant aspect for clinical practice, beyond the classification metrics for fall prediction, is the need for agility, precision and objectivity of measurement, reduction of inter-evaluator bias, and the possibility of gathering patient information in databases for epidemiological analysis. These aspects are not fully covered by the traditional validated methodologies. Smartphone applications have the advantage over traditional questionnaires of being able to evaluate both psychometric data and biomechanical balance data based on inertial sensing, resulting from the integrated mathematical analysis of data obtained by motion sensors (accelerometer, gyroscope and magnetometer) [11]. These biomechanical analyses generate data on the estimated linear displacement of the center of gravity (CG) and a stabilometric CG curve over the horizontal plane (statokinesiogram). Recently, this kind of biomechanical variable, obtained with smartphones, proved effective at discriminating fallers from non-fallers [11, 12].
For example, Hsieh et al. [12] demonstrated that some biomechanical variables obtained by smartphones, such as the anteroposterior and vertical RMS of the linear acceleration, are able to discriminate elderly people regarding the risk of falls in a way comparable to force plate measures. However, due to methodological limitations, these authors did not determine the data resulting from the mathematical integration of the acceleration data (linear velocity and linear position, respectively the first and second time integrals of the acceleration). From this step it is possible to determine a number of variables associated with balance, such as the root mean square (RMS), range and velocity of the CG in the anteroposterior and mediolateral directions, total displacement and ellipse area [13]. These variables are traditionally obtained from force plate data, through the displacement of the center of pressure (COP) [13], since it has been demonstrated that they are good discriminators of elderly fallers and non-fallers [14, 15].

P. V. S. Moreira et al.

The startup TechBalance® (São Paulo, SP, Brazil) developed the TechOne, an application for measuring fall risk composed of two questionnaires and a biomechanical analysis of balance based on inertial sensing. One is an interactive questionnaire on performance in motor tasks and the other assesses patients' frailty. TechOne generates a rating score for the risk of falling, based on the frailty and motor performance indexes, as well as various numerical results for the biomechanical variables. Studies with the biomechanical algorithm are preliminary and are being carried out so that the numerical results of its variables can be translated (using statistical and artificial intelligence methods) into fall-risk category scores, to be mathematically integrated in the future with the TFP score, thus generating a hybrid (psychometric and biomechanical) score that improves the ability to predict falls and to understand their potential causes. Thus, the aim of the present study is to test the accuracy of the TechOne for screening the risk of falls in hospitalized elderly people. The secondary objective is to demonstrate the feasibility of carrying out biomechanical analyses of balance through an inertial sensing algorithm embedded in the TechOne application. Our hypotheses are: (1) the questionnaires embedded in the TechOne will present better accuracy, with much higher specificity than reported in the literature; (2) the algorithm embedded in the TechOne is technically viable for obtaining stabilometric variables through inertial sensor analysis.

2 Methods

2.1 Participants
Twenty-four elderly women, aged between 60 and 85 years, admitted to a high-complexity private hospital located in the city of São Paulo, were recruited for this study. Their descriptive data are shown in Table 1. The volunteers were automatically divided, by registration in the application (filled in by two trained nurses), into fallers (who reported at least one fall event in the last 12 months) and non-fallers (who did not report a fall). The criteria for inclusion in the study were: (a) being between 60 and 85 years old; (b) not having a history of severe comorbidity related to the vestibular, motor, visual or cognitive system that contraindicated the motor and postural balance assessment or compromised the ability to provide informed consent. The study was approved by the Ethical Committee of the Clinical Hospital of the University of São Paulo under protocol number CEP: 3.972.825.

Table 1 General descriptive characteristics of the participants

Item                 Fallers          Non-fallers
Frequency [n (%)]    10 (41.7%)       14 (58.3%)
Age (years)          73.9 ± 4.33      65.57 ± 10.08
Body mass (kg)       70.5 ± 13.23     65.85 ± 8.76
Height (cm)          153 ± 5.54       160 ± 8.68

2.2 The Assessment Instrument
The assessment instrument used was the TechOne application (TechBalance®, São Paulo, SP, Brazil), composed of a psychometric assessment algorithm (the TFP: "TechB-Fall-Prediction"), consisting of a clinical questionnaire (the TFQ: "TechB-Fragility-Questionnaire") and a subjective motor performance evaluation (the TMPSE: "TechB Motor-Performance-Subjective-Evaluation"), and of a biomechanical balance evaluation algorithm (the TBM: "TechB-Balance-Measurement"), based on the processing of inertial data from motion sensors embedded in smartphones (accelerometer, gyroscope and magnetometer).

The TFQ is composed of 39 dichotomous questions divided into 6 constructs: context, autonomy, pathologies, routine, mobility and complaints. The TMPSE is an instrument for the subjective categorical assessment of motor performance, in which evaluators select one of three options representing the level of autonomy in performing each of 14 motor tests (weight 0 for "unable to perform the posture", weight 1 for "performs the posture with some help", weight 2 for "performs the posture independently"). The motor performance tests performed by the patients are postures that challenge balance, such as standing on tiptoe, one-legged posture, sitting to standing, walking, and turning the head to the right and left.

The biomechanical balance evaluation algorithm (TBM) processes inertial data from the smartphone's motion sensors, using the three sensors (accelerometer, gyroscope and magnetometer) to obtain quaternions that convert the three-dimensional linear accelerations from the local reference frame of the mobile device to a global reference frame whose origin is the geographic location at the start of data collection.
The anteroposterior (AP) axis is the one that, at the start of data collection, points to the front of the patient; the mediolateral (ML) axis points to the patient's right side; and the vertical axis points upward, at a 90-degree angle to the ground plane. The linear accelerations are mathematically integrated, once and twice, to obtain the three-dimensional linear velocity and displacement of the device, while possible progressive errors (drifts) are removed by specific methods developed for this purpose. From this processing, the following biomechanical variables are calculated: total displacement, maximum AP and ML velocity, root mean square (RMS) of the AP and ML linear displacement, AP and ML range of motion, and the 95% confidence ellipse area.
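As a rough illustration of the processing described above, the sketch below double-integrates AP/ML accelerations and computes the listed variables. The function names, the sampling rate, and the use of simple linear detrending as the drift-removal step are assumptions made for illustration; the actual TBM drift-removal methods are proprietary and not described here.

```python
import numpy as np

CHI2_95_2DOF = 5.991  # chi-square critical value, 2 DOF, 95% confidence

def integrate_detrend(signal, fs):
    """Cumulative trapezoidal integration followed by linear detrending,
    a simple stand-in for the paper's (unspecified) drift-removal methods."""
    dt = 1.0 / fs
    out = np.concatenate(([0.0], np.cumsum((signal[1:] + signal[:-1]) * dt / 2)))
    t = np.arange(len(out))
    coef = np.polyfit(t, out, 1)  # estimate and remove linear drift
    return out - np.polyval(coef, t)

def stabilometric_variables(acc_ap, acc_ml, fs=100.0):
    """Compute the stabilometric variables from AP/ML accelerations (m/s^2).
    Returns displacements in mm, velocities in mm/s, area in mm^2."""
    vel_ap = integrate_detrend(acc_ap, fs)          # first integral: velocity
    vel_ml = integrate_detrend(acc_ml, fs)
    pos_ap = integrate_detrend(vel_ap, fs) * 1000.0  # second integral, m -> mm
    pos_ml = integrate_detrend(vel_ml, fs) * 1000.0
    step = np.hypot(np.diff(pos_ap), np.diff(pos_ml))  # path increments
    cov = np.cov(np.vstack((pos_ap, pos_ml)))          # 2x2 covariance
    return {
        "rms_ap": float(np.sqrt(np.mean(pos_ap ** 2))),
        "rms_ml": float(np.sqrt(np.mean(pos_ml ** 2))),
        "range_ap": float(np.ptp(pos_ap)),
        "range_ml": float(np.ptp(pos_ml)),
        "max_vel_ap": float(np.max(np.abs(vel_ap)) * 1000.0),
        "max_vel_ml": float(np.max(np.abs(vel_ml)) * 1000.0),
        "total_displacement": float(np.sum(step)),
        "ellipse_area_95": float(np.pi * CHI2_95_2DOF * np.sqrt(np.linalg.det(cov))),
    }
```

The 95% confidence ellipse area here follows the common covariance-based formula (area = π · χ²₀.₉₅ · √det Σ); the paper does not state which variant the TBM uses.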

2.3 Data Collection
Data were collected over 2 months by 2 nurses trained in the protocol. Data collection was preceded by a brief presentation of the questionnaire objectives and the registration of general data for each patient. Then, the 39 questions of the clinical interview were asked sequentially, while the evaluators marked in the application the alternative chosen for each question. In the following step, patients were instructed to perform the motor performance tests; each posture was performed with the smartphone attached to the waist, capturing inertial data (video 1: https://doi.org/10.6084/m9.figshare.11886963.v1). At the end of each collection, the evaluators marked in the TMPSE algorithm the alternative that best represented the patient's performance. At the end of the protocol, the evaluators clicked on the option to send the results to the TechBalance® database. Because this is a preliminary study, only 2 patients (one faller and one non-faller) had their inertial data processed by the TBM algorithm.

2.4 Data Analysis
For each item of the TFQ clinical questionnaire, a score of 0 was assigned for the absence of the condition and 1 for its presence. For example, in the question "Does the patient have high blood pressure?", a value of 0 was attributed to normotensive individuals and 1 to hypertensive individuals. The same was done for performance in the 14 motor tests contained in the TMPSE, whose scores vary from 0 to 2 according to the level of proficiency and independence in carrying out the tests. Once the forms were completed, the application algorithm applied different weights (defined in an internal study) to each question, according to its importance for the fall-risk classification, thus generating a clinical score for the TFQ and a subjective motor performance score for the TMPSE. Finally, the "Final Score" of the application was generated, consisting of the weighted sum of each questionnaire result, whose weights were previously defined based on an internal study.

The linear positions on the transverse plane, generated from the processing of the inertial balance data (available in the TBM) for one elderly faller and one non-faller performing the bipedal posture (wearing comfortable sneakers) with eyes open in the narrow stance (heels and toes touching), with hands on the hips, for 30 s, were presented in a stabilometric plot and used to generate the following two-dimensional variables: the 95% confidence ellipse area and the total displacement; and, for the mediolateral and anteroposterior axes: the range of displacement, the root mean square (RMS) of the displacement, and the maximum absolute velocity. The rationale for this procedure was primarily to avoid the fatigue related to long trial durations in quiet standing, particularly in pathological elderly. Furthermore, the optimal test–retest reliability for our protocol was assumed to be obtained at 20–30 s trial durations [16], and we wanted a test that is feasible in a clinical setting where time constraints play an important role. Additionally, the patients were evaluated only in the narrow stance because in this stance Melzer et al. [14] found differences between elderly fallers and non-fallers, while the wide stance could not detect differences in postural stability between groups.
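The weighted-sum scoring described above can be sketched as follows. The item weights and questionnaire weights are illustrative placeholders: the real values were defined in TechBalance's internal study and are not public.

```python
# Illustrative weighted scoring for the TFQ/TMPSE questionnaires.
# All weights below are placeholders, NOT the application's real values.

def questionnaire_score(answers, weights):
    """answers: per-item values (0/1 for the TFQ, 0-2 for the TMPSE);
    weights: per-item importance weights of the same length."""
    if len(answers) != len(weights):
        raise ValueError("one weight per item is required")
    return sum(a * w for a, w in zip(answers, weights))

def final_score(tfq_score, tmpse_score, w_tfq=1.0, w_tmpse=1.0):
    """Weighted sum of the two partial scores (weights are placeholders)."""
    return w_tfq * tfq_score + w_tmpse * tmpse_score
```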

2.5 Statistical Analysis
The partial scores of each questionnaire (TFQ and TMPSE), as well as the Final Score, were assessed for normality and homogeneity of variance by the Shapiro–Wilk and Levene tests, respectively. The values of each individual item were considered non-normal due to their dichotomous or categorical (3-option) nature. Only data that could be considered normal or homogeneous were treated as parametric and therefore compared between fallers and non-fallers by analysis of variance (ANOVA) for independent measures, while non-parametric partial scores were assessed using the Mann–Whitney test (since there are only two independent groups). Then, Receiver Operating Characteristic (ROC) curve analysis was performed with the TFQ, TMPSE and Final Score scores. In the ROC analysis, the area under the curve (AUC) of each score was evaluated, and the curve was also used to find the cut-off value of each score: the point of each curve where the sensitivity is ≥ 90% and tends to stabilize, while the specificity is maximized, was taken as the ideal cut-off value.

A "Classification Matrix" was generated for each score, containing the number of patients correctly and incorrectly classified as fallers or non-fallers. The main classification performance metrics were calculated from four possible classifications: "True Faller" (TF, faller classified as faller), "False Faller" (FF, non-faller classified as faller), "True Non-Faller" (TN, non-faller classified as non-faller) and "False Non-Faller" (FN, faller classified as non-faller). The following performance metrics were computed:

(1) Sensitivity: the percentage of fallers correctly classified as fallers:

Sensitivity = 100 × TF / (TF + FN)    (1)

(2) Specificity: the percentage of non-fallers correctly classified as non-fallers:

Specificity = 100 × TN / (TN + FF)    (2)

(3) Accuracy: the percentage of patients correctly classified in the total data set:

Accuracy = 100 × (TN + TF) / (TN + TF + FN + FF)    (3)

For the comparisons performed with the ANOVA and Mann–Whitney tests, the effect size (ES) of type η2 was calculated. Except for the Mann–Whitney η2 and the ROC curve (calculated in Matlab), all statistical tests were calculated using the SPSS 18.0 software (SPSS Inc., Chicago, IL), considering a significance level of p < 0.05. The magnitude of the ES was interpreted as trivial (ES ≤ 0.1), small (0.1 < ES ≤ 0.3), moderate (0.3 < ES ≤ 0.5), and large (ES > 0.5). Statistical analysis was not applied to the biomechanical variables due to the restricted number of participants.
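Under the definitions of Eqs. (1)–(3), the classification metrics and the cut-off selection rule (sensitivity ≥ 90% with maximal specificity) can be sketched as below. Function names are illustrative, and degenerate cases (empty groups, ties) are not handled.

```python
import numpy as np

def classification_metrics(tf, ff, tn, fn):
    """Eqs. (1)-(3): percentages from the classification-matrix counts."""
    sensitivity = 100.0 * tf / (tf + fn)
    specificity = 100.0 * tn / (tn + ff)
    accuracy = 100.0 * (tn + tf) / (tn + tf + fn + ff)
    return sensitivity, specificity, accuracy

def ideal_cutoff(scores, is_faller, min_sensitivity=90.0):
    """Pick the threshold with sensitivity >= min_sensitivity and maximal
    specificity; score >= threshold means 'classified as faller'."""
    scores = np.asarray(scores, dtype=float)
    is_faller = np.asarray(is_faller, dtype=bool)
    best = None
    for thr in np.unique(scores):
        pred = scores >= thr
        tf = np.sum(pred & is_faller); fn = np.sum(~pred & is_faller)
        tn = np.sum(~pred & ~is_faller); ff = np.sum(pred & ~is_faller)
        sens, spec, acc = classification_metrics(tf, ff, tn, fn)
        if sens >= min_sensitivity and (best is None or spec > best[1]):
            best = (float(thr), spec, sens, acc)
    return best  # (threshold, specificity, sensitivity, accuracy)
```

With the paper's TFQ counts (10 fallers, 14 non-fallers; TF = 9, FN = 1, TN = 10, FF = 4), `classification_metrics` reproduces the reported 90% sensitivity, 71.4% specificity and 79.17% accuracy.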

3 Results

Figure 1 shows the stabilometric curves, obtained by inertial sensing, of a faller and a non-faller elderly woman performing the static bipedal posture for 30 s. Table 2 contains the data resulting from the stabilometric analysis of these two patients. The means comparison tests on the scores generated by the questionnaires showed that the TFQ score was the one that differentiated fallers [M (95% CI) = 33.0 (26.3; 39.7)] from non-fallers [M (95% CI) = 17.1 (11.8; 22.5)] with the largest effect size [p = 0.0005; η2 = 0.435]. The TMPSE score did not significantly differentiate (U:


Fig. 1 Stabilometric curves of the estimated trajectory of the center of gravity in a faller and a non-faller

Table 2 Stabilometric results for a faller and a non-faller in the BP for 30 s

Variables                        Faller    Non-faller
Ellipse area (mm²)               226       188
Range of ML displacement (mm)    8         9
Range of AP displacement (mm)    24.72     17.10
RMS of ML displacement (mm)      1.79      2.22
RMS of AP displacement (mm)      5.96      4.22
Total displacement (mm)          98.5      77.93
Maximal ML velocity (mm/s)       12.17     6.06
Maximal AP velocity (mm/s)       21.25     12.50

BP bipedal posture, ML mediolateral axis, AP anteroposterior axis

38.5; p = 0.07; η2 = 0.14) the patients. The Final Score differentiated fallers [median (IQR) = 24.6 (12.9)] from non-fallers [median (IQR) = 12.4 (11.3)]; however, the effect size was small (U = 33.5; p = 0.033; η2 = 0.19). Figure 2 contains the ROC curves for the TFQ, TMPSE and Final Score. The results show that the ideal threshold value is 22 for the TFQ score, while for both the TMPSE and the Final Score it is 14. The classification metrics of the scores at their ideal thresholds are shown in Table 3.

4 Discussion

Most of the biomechanical variables analyzed obtained higher values in the faller than in the non-faller, demonstrating the potential of embedded inertial balance sensing to screen for the risk of falls. The few variables that did not show such clear differences were the range of motion and the RMS, both on the mediolateral axis. These preliminary results are consistent with those obtained by Hsieh et al. [12], who demonstrated that, in the horizontal plane, only the anteroposterior acceleration RMS was a good classifier

Fig. 2 ROC curve with area (AUC) of the classification scores

regarding the risk of falls, as measured by the area under the ROC curve (AUC = 0.761–0.837). The TFQ score was the one with the largest effect size in the comparison of means, the highest AUC, and the best specificity and accuracy for patient classification. This indicates that the TFQ can prove to be a potent ally for screening the risk of falls for

Table 3 Comparative results of the algorithms embedded in the TechBalance-Fall-Prevent application

Variables      Sensitivity (%)   Specificity (%)   Accuracy (%)
TFQ            90.0              71.4              79.2
TMPSE          90.0              64.3              75.0
Final Score    90.0              64.3              75.0

the clinical environment. The sensitivity levels obtained by the TFQ were similar to those of the already validated questionnaires (the JHFRAT and the MFS). However, the TFQ showed specificity values 1.1 to 11 times higher than those of the traditional scales [8, 10]. This means that the accuracy of the TFQ tends to be greater than that of the other available scales. For example, the JHFRAT has an accuracy of only 51.5% [8], while that of the TFQ was 79.17%.

The lack of significance and the small effect size obtained in the mean comparisons and in the distribution tests of the TMPSE motor assessment items indicate that the subjective assessment of the postures does not contribute to the quality of the classifications. Eight of the 14 motor tests used are composed of postures considered static. However, the human body is unable to balance itself in a totally static manner, as the vestibular and neuromuscular systems constantly rebalance the body through continuous changes in the muscular forces produced. This means that balance has a "quasi-static" nature [13]. The micro-oscillations of these postures are very difficult to evaluate subjectively and demand more objective measurement methods. The high cost and complex data processing of laboratory equipment for measuring balance, together with the inefficiency of subjective assessments, point to the need to measure balance with sensitive, accurate, reliable and easily accessible clinical equipment [11]. This is possible through inertial sensing carried out by smartphone applications [12], and the TBM has shown promise for this purpose.
The lower efficiency of the TMPSE for fall-risk screening, coupled with the fact that its values feed the final score of the application, leads us to recommend that the weighted values of the TMPSE be removed from the calculation of the final score, and that studies be done to statistically integrate the results of the TFQ with those of the TBM, in order to generate a hybrid (psychometric and biomechanical) method of patient classification that is even more accurate than the TFQ.

The limitations of the study are that the sample for the biomechanical balance tests is exploratory (n = 2), the sample of patients evaluated by the psychometric tools is small (n = 24), and the number of TFQ items is too high (39) for quick screenings. This prevented the traditional validation process, consisting of content analysis, construct analysis, dimensionality, reliability and validity. What was intended

to be demonstrated in the present study was the potential validity of the results. Given these limitations, it is recommended to carry out studies with an adequate sample size and to perform the entire technical process of psychometric validation, including inter-rater and intra-subject reproducibility analyses. Similarly, it is recommended that studies be carried out to validate the measures of the TBM algorithm, with an adequate sample size and agreement analysis against measures obtained by gold-standard methods (force platform, baropodometry or kinematics).

5 Conclusions

It is concluded that: (a) the TFQ clinical questionnaire showed excellent discriminatory capacity for screening the risk of falls in female hospital patients, although new and systematic steps to improve the instruments contained in the analyzed application are necessary; (b) the TBM inertial sensing algorithm is feasible for objectively measuring balance.

Acknowledgements We thank the AMIL hospital and health plan network for funding the research and providing nurses for data collection. We also thank the Coordination for the Improvement of Higher Education Personnel (CAPES), which partially supported this work through a PNPD postdoctoral fellowship.

Conflict of Interest Scientists and consultants of the Techbalance® enterprise are involved in the study; however, they were instructed to conduct the research objectively and ethically and to present positive or negative results.

References
1. Ambrose AF, Paul G, Hausdorff JM (2013) Risk factors for falls among older adults: a review of the literature. Maturitas 75(1):51–61
2. Heinrich S, Rapp K, Rissmann U et al (2010) Cost of falls in old age: a systematic review. Osteoporos Int 21(6):891–902
3. Bergen G, Stevens MR, Burns ER (2016) Falls and fall injuries among adults aged over 65 years. MMWR 65(37):993–998
4. Florence CS, Bergen G, Atherly A et al (2018) Medical costs of fatal and nonfatal falls in older adults. J Am Geriatr Soc 66(4):693–698
5. Pimentel WRT, Pagotto V, Stopa SR et al (2018) Quedas entre idosos brasileiros residentes em áreas urbanas: ELSI-Brasil. Rev Saude Publica 52:12
6. Abreu DRDOM, Novaes ES, Oliveira RRD et al (2018) Fall-related admission and mortality in older adults in Brazil: trend analysis. Ciencia E Saude Coletiva 23(4):1131–1141
7. National Center for Injury Prevention and Control (CDC) (2013) National estimates of the 10 leading causes of nonfatal injuries treated in hospital emergency departments, United States
8. Martinez MC, Iwamoto VE, Latorre MDRDDO et al (2019) Validade e confiabilidade da versão brasileira da Johns Hopkins Fall Risk Assessment Tool para avaliação do risco de quedas. Rev Bras Epidemiol 22:190037
9. Oliver D, Britton M, Seed P et al (1997) Development and evaluation of evidence based risk assessment tool (STRATIFY) to predict which elderly inpatients will fall: case-control and cohort studies. BMJ 315(7115):1049–1053
10. Urbanetto JDS, Pasa TS, Bittencout HR et al (2016) Analysis of risk prediction capability and validity of Morse Fall Scale Brazilian version. Rev Gaucha Enferm 37(4):62200
11. Roeing KL, Hsieh KL, Sosnoff JJ (2017) A systematic review of balance and fall risk assessments with mobile phone technology. Arch Gerontol Geriatr 73:222–226
12. Hsieh KL, Roach KL, Wajda DA et al (2019) Smartphone technology can measure postural stability and discriminate fall risk in older adults. Gait Posture 67:160–165
13. Duarte M, Freitas SMSF (2010) Revisão sobre posturografia baseada em plataforma de força para avaliação do equilíbrio. Rev Bras Fisioter 14(3):183–192
14. Melzer I, Benjuya N, Kaplanski J (2004) Postural stability in the elderly: a comparison between fallers and non-fallers. Age Ageing 33(6):602–607
15. Swanenburg J, de Bruin ED, Favero K et al (2008) The reliability of postural balance measures in single and dual tasking in elderly fallers and non-fallers. BMC Musculoskelet Disord 9(1):162
16. Le Clair K, Riach C (1996) Postural stability measures: what to measure and for how long. Clin Biomech 11(3):176–178

Development of Prosthesis and Evaluation of the Use in Short Term in a Left Pelvic Limb in a Dog G. C. Silveira, A. E. M. Pertence, A. R. Lamounier, E. B. Las Casas, M. H. H. Lage, M. X. Santos, and S. V. D. M. El-Awar

Abstract

The growing appreciation of pets has driven improvements in the technologies used to make prosthetics for them. Amputations in dogs can be performed due to trauma or neoplasms, and the animals may develop compensatory disturbances in the short or long term that can lead to chronic pain. Prostheses are devices that replace a missing limb or other body part. Their main objectives are to align the spine, decrease the weight overload on the remaining limbs, and support muscle control, acting as an external mechanical system that can lead to the functional independence of the patient. The goal of this paper is to develop a metatarsophalangeal prosthesis for the left pelvic limb of a dog. The steps were: medical evaluation for the use of a prosthesis by a veterinarian, fabrication of the negative and positive molds, fabrication of the device, the adaptation process, and evaluation of the results. Keywords



Prosthesis · Dogs · Rehabilitation · Amputation · Biomechanics

1 Introduction

G. C. Silveira (&), M. X. Santos, S. V. D. M. El-Awar: Escola de Medicina Veterinária, PUC Minas, R. Rosário, 1081, Betim, Brazil
A. E. M. Pertence, A. R. Lamounier, E. B. Las Casas, M. H. H. Lage: Escola de Engenharia Mecânica, UFMG, Belo Horizonte, Brazil

In veterinary medicine, amputations can occur due to the presence of neoplasms (osteosarcoma being the most common), severe arthritis that can no longer be managed medically, irreversible neurological disorders, severe trauma and congenital anomalies [1–3]. Limb amputations in pets are generally performed so high that it becomes impossible to fit a prosthesis. Although the animal can adapt to walking on three supports, the lack of a limb can cause changes in gait and overload the muscles and the joints of the spine and limbs [4–10]. It is therefore necessary to use devices to avoid or minimize compensatory effects and joint diseases, and to help the animal's gait and weight distribution. This requires producing a prosthesis with the right size, angulation, padding and fit, according to biomechanical knowledge, with the objective of improving the animal's quality of life. In human medicine, amputations are performed with the intention of fitting a prosthesis afterward; such amputation techniques, however, are not usual in veterinary medicine. A prosthesis is a device designed to replace a lost limb or body part. Orthoses, in contrast, are developed to support or protect an injured limb [2]. Prostheses are divided into exoprostheses and endoprostheses. Exoprostheses are composed of a cartridge, responsible for receiving the body weight through the stump and transferring it to a device in contact with the ground, like an "artificial foot" [4]. Endoprostheses are composed of an internal part located inside the bone marrow, with a small external part passing through the transcutaneous layer, into which the ground-contact device fits. The goal of this paper is to develop a prosthesis for a left pelvic limb, to minimize compensatory effects caused by the lack of the limb, reestablish posture, and improve gait and quality of life. Human and veterinary patients who go through limb amputation develop compensations to maintain balance and locomotion. These compensatory movements, however, are not necessarily efficient and often lead to short-, medium- and long-term complications [12].
The ultimate goal of biomechanical treatment is to restore the ability to walk safely, efficiently and functionally, in addition to performing routine life activities [13]. Walking efficiency increases as gait parameters approach normal. In

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_104



addition, the use of the device provides a better life and helps prevent deformation and degeneration of the remaining joints. The device reduces discrepancies in limb length, increases levels of exercise and activity and provides a means to participate in rehabilitation therapy [14]. The so-called elective-level or subtotal amputation is the preferable alternative to traditional limb amputation. However, to use a prosthesis on the thoracic limb, it is necessary to preserve 40% of the radius and ulna; for the pelvic limb, the medial and lateral malleoli must be preserved [12]. The structural part of the prosthesis, or cartridge, must have certain characteristics, such as lightness, mechanical resistance, comfort, durability, aesthetics and suitability for modeling, and can be made of various materials [4]. For the comfort and protection of the stump, the fitting includes an interface between the rigid external wall and the stump, made of soft material with anti-allergenic properties [15]. The group of polymeric materials has properties that are useful in the construction of structures and devices because they are light, flexible and have good resistance to corrosion, rubber being a characteristic example of this group [16, 17]. In addition to absorbing shocks, rubber is an insulating and protective element and can assist in the movement and positioning of some joints [16]. In this research, the prosthesis was inspired by the model created by Lage et al. [18].

observed, probably due to disuse. In the biomechanical evaluation, both while the patient was moving and when it stopped, an incorrect spine angulation was noticed, in both the lateral and the caudal-dorsal views; the humeroradioulnar joints were turned laterally and the right pelvic limb was hyperextended. No sensory or motor deficits were found during the neurological evaluation.

2.2 Negative and Positive Molds

Making the negative mold and taking measurements is the stage in which a negative copy of the stump is produced, together with the support measurements for making the positive model. Initially, the amputated limb was protected by a thin stocking to isolate the stump; then a 100 mm wide plaster bandage, moistened with water, was applied around the limb. Finally, with the help of a utility knife, the negative mold was removed from the patient, as shown in Fig. 1B. To obtain the positive mold, which becomes an anatomical copy of the stump, the negative mold was filled with a mixture of powdered plaster and water in a volumetric proportion of 50% of each component. Once the positive mold was completely rigid, the negative mold was removed.

2.3 Prosthesis Construction

2 Methods

A case study was carried out at the Veterinary Hospital of PUC Minas in Belo Horizonte, Brazil, with the support of the Biomechanical Engineering Laboratory (MECBIO) of the Structural Engineering Department at UFMG, Brazil, on a canine patient: male, adult, 4 years old, mixed breed, large-sized, with a traumatic amputation at the left tarsal joint. The patient had been walking on 3 limbs for approximately four years, resulting in a change in body weight distribution. The animal's guardian authorized the entire study for the making of a prosthesis by means of a consent form. All the procedures used in this study are in accordance with those used in Brazil.

2.1 Evaluation

A general medical evaluation of the patient was performed, including general well-being, orthopedic, myofascial, biomechanical and neurological examinations. The patient was healthy, with a good body score and functional joints on orthopedic evaluation, without pain on palpation of the spine and other joints. In the myofascial evaluation, atrophy of the musculature of the amputated left pelvic limb was

From the positive mold and the distance from the stump to the ground, construction of the prosthesis was started. The positive mold was surrounded by a 4 mm thick layer of siliconized EVA, and a 10 mm × 5 mm profile aluminum support bar was fixed on the back, with three 4 mm diameter holes so that it could be joined to the cartridge through M5 × 12 mm rivets, transferring the load on the cartridge to the anatomically shaped ground-contact device, for later fixation of a 4 mm thick Microduro rubber base that would be in contact with the ground (Fig. 2). With the first stage of the cartridge ready, the structural part of the prosthesis was constructed, consisting of three layers of Hygiacast® polyester synthetic bandage number 2. In the final stage, the device was painted with black spray paint and the edges were sanded. Two velcro straps 20 mm wide were fixed with M5 × 12 mm rivets, in order to fit the limb of the animal. Figure 3 shows the prosthesis after the construction of the rigid wall and after final finishing. The adaptation period was 3 weeks, consisting of the progressive use of the prosthesis together with exercises. In the first week, the patient used the prosthesis for 2 h, with 4-h intervals without use. At the end of 4 h the process was repeated. In the second week, the patient used the prosthesis for 2 h and remained without it for 2 h. After 2 h of rest, the


Fig. 1 a Distance between stump and ground, b Removing the negative mold

Fig. 2 Positive mold wrapped in a 4 mm siliconized EVA layer, with the 10 mm × 5 mm profile aluminum support bar, with three 4 mm holes, fixed on the back. a Right lateral face. b Front face. c Posterior face. d Left lateral face

process was repeated. In the third week the dog used the device for 4 h and was without it for 2 h. At the end of 2 h the process was repeated.

3 Results

After the device was made, the patient's adaptation began, during which it was necessary to increase the height of the cartridge by 30 mm to improve the prosthesis fixation area. After this modification, a second test was performed, verifying that the prosthesis needed an increase

of 4 mm in the thickness of the surface in contact with the stump. A better adjustment of the device to the animal's body was also needed. In addition, the height of the prosthesis was adjusted with a 10 mm reduction relative to the ground, and the original angle was changed by 10 degrees. In the third test, the prosthesis presented a satisfactory fit to the stump, shown in Fig. 4, allowing the patient to have external mechanical control and lever movement, and the adaptation phase at home was started. The exercises with the animal were performed at the first and last use of the prosthesis at each stage. Because they are

Fig. 3 Prosthesis after construction of the structural part and final finishing

Fig. 4 Third test, showing a patient wearing the prosthesis with a satisfactory fit to the stump

Fig. 5 Evaluation of the use of the prosthesis after the adaptation period. a Animal in the first evaluation before using the prosthesis. b Animal after the fourth test


exercises that can be uncomfortable, they were performed after a period of wearing the device (15–30 min). This was to avoid the animal associating the use of the prosthesis with something negative and refusing to cooperate the next time. The following exercise protocol was proposed: with the animal wearing the prosthesis and standing supported on all fours, the researchers pushed lightly with the fingers on the side of the limb using the prosthesis while the remaining limb was raised, so that the animal had only the support of the limb using the device. The patient remained this way for 30 s, and as his balance improved, the time was increased up to 1–5 min. This exercise can be done on the floor or with the aid of a balance disc, the balance disc being intended to increase the challenge. At the end of the exercises, the owner would hold the limb with the prosthesis and move it, imitating a walk. At first, the patient was uncomfortable when using the device, but with daily use he became well adapted and started to support the limb while standing and intermittently when walking. With the routine use of the device, the desired keratinization of the skin was noted in the stump region, which had thin skin that initially caused discomfort for the animal when using the prosthesis. However, the device failed in the distal region after a few days of use, requiring a new structural analysis. The prosthesis was rebuilt using more support points to improve the rigidity and the stress distribution between the parts. A fourth test was carried out in which it was possible to observe better adaptation to the device and greater recruitment of the muscles of the amputated limb, and the animal was able to perform tasks such as going up and down stairs more easily.
With the use of the prosthesis, it was possible to observe a reduction in compensatory changes such as better spine alignment, body positioning and walking, in addition, the patient demonstrated comfort when using the device, as shown in Fig. 5.


4

Discussion

Kirpensteijn et al. [19], using a force mat, determined that animals with a low amputation of a pelvic limb shift their body's center of mass, coming to carry 73% of the body weight on the thoracic limbs and 26% on the remaining pelvic limb. Normal loading is described as around 60% on the thoracic limbs and 40% on the pelvic limbs. During the qualitative assessment of the animal's gait, great effort was noticed when walking, since it uses the trunk as a lever to assist the impulse of the remaining pelvic limb, associated with knee hyperextension at the moment of the impulse. This overload on the thoracic limbs may be the reason for the slight varus-type angular deviation that the animal presented. Goldner et al. [6] and Fuchs et al. [20] describe that the thoracic limbs are more affected, with the contralateral limb more severely affected than the ipsilateral limb. Joint changes are caused indirectly by muscle remodeling and directly by weight overload, hyperflexion or hyperextension [21]. By observing the gait of the animal under study, it was possible to visualize the tarsal and knee hyperextension at the moment of the impulse. According to Hogy et al. [21] and Goldner et al. [6], dogs lacking a pelvic limb tend to have the knee and tarsus hyperflexed during the support phase and hyperextended during the impulse phase, while their elbows remain hyperextended when standing on all fours and hyperflexed when in motion. Although Galindo-Zamora et al. [7] found the same results, they did not find ligament or cartilaginous lesions in MRI scans of the knees of dogs 120 days after amputation. Despite the good padding of the prosthesis, a small lesion was noticed at the tip of the stump, probably caused by the bony prominence present in the stump.
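The load redistribution reported by Kirpensteijn et al. [19] amounts to simple arithmetic; the sketch below only illustrates those percentages, and the 30 kg body mass is a hypothetical example, not a value from the study.

```python
# Illustrative only: per-limb vertical load (kg) from the body-weight
# distributions cited in the text. The 30 kg body mass is hypothetical.

def per_limb_load(body_mass_kg, thoracic_pct, pelvic_pct, n_thoracic, n_pelvic):
    """Split body mass between thoracic and pelvic limbs."""
    thoracic_each = body_mass_kg * thoracic_pct / 100 / n_thoracic
    pelvic_each = body_mass_kg * pelvic_pct / 100 / n_pelvic
    return thoracic_each, pelvic_each

# Sound dog: ~60% thoracic / ~40% pelvic, over four limbs
sound = per_limb_load(30.0, 60, 40, 2, 2)    # ≈ (9.0, 6.0) kg per limb

# Pelvic-limb amputee: ~73% thoracic, ~26% on the single remaining pelvic limb
amputee = per_limb_load(30.0, 73, 26, 2, 1)  # ≈ (10.95, 7.8) kg
```

The single remaining pelvic limb thus carries more than either pelvic limb did before amputation, consistent with the overload described above.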
Despite this lesion, the owner was asked to keep the animal using the prosthesis, with a little cotton at the site of the lesion, in order to induce keratosis of the very thin skin and thus increase comfort during use. This approach has already been described as beneficial in the literature [22]. The hypotrophic state of the limb that received the prosthesis may have made adaptation to the device more difficult, as it requires the animal to build muscle strength and endurance to support its own weight. This is in line with a medical study described by Chen et al. [23], in which less educated patients, who possibly did heavy leg work and had better physical condition, probably adapted more quickly to the prosthesis compared to more educated patients. In addition, the period between amputation and use of the prosthesis was 4 years, while Chen et al. [23] and Robinson et al. [24] show that the best period for fitting a prosthesis is 6–8 weeks after limb amputation.


In the region where this research was developed, prostheses are made with materials that are more expensive than those used to make the prosthesis of the present study. This difference in production cost allows the device to be offered to the client at a lower price, which is an advantage when patients belong to owners who cannot afford the other prostheses available in the region. Although this study did not include instrumented gait analysis, the placement of the prosthesis brought new support to the body, tending to bring gait and posture as close as possible to normality. An improvement in the alignment of the spine was noticed, and the use of the prosthesis for the impulse when walking reduced the extreme effort made by the spine and contralateral limb seen in the first visual gait assessment. Moreover, the owner reported that her dog performed daily activities more easily after starting to use the prosthesis. This improvement could have been greater if the adaptation had been associated with physical therapy, which was indicated but not performed. At the end of the adaptation period, the need for physical therapy for better results was reinforced.

5 Conclusion

The prosthesis developed is considered to have promoted qualitative benefits, such as spine alignment and better weight distribution while walking, qualitatively evaluated by a veterinarian. The improvements observed by the authors provided short-term quality of life, helping the patient to walk using all four limbs and assisting in the prevention of joint diseases in the short, medium and long term, thus fulfilling the objective of the present work. New studies using gait analysis should be carried out, with the goal of improving the analysis of patients who wear a prosthesis.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Phillips A et al (2017) Clinical outcome and complications of thoracic and pelvic limb stump and socket prostheses. Vet Comp Orthop Traumatol 30:265–271
2. Adamson C et al (2015) Assistive devices, orthotics, and prosthetics. Vet Clin North Am Small Anim Pract 35:1441–1451
3. Dickerson VM et al (2015) Outcomes of dogs undergoing limb amputation, owner satisfaction with limb amputation procedures, and owner perceptions regarding postsurgical adaptation: 64 cases (2005–2012). JAVMA 247:786–792
4. Lage MHH et al (2018) Aplicação de Conceitos de Biomecânica na Confecção de Próteses para Cães. In: Encontro Nacional de Engenharia Biomédica, 6. Águas de Lindóia: Enebi, v. 1, p 1–7

5. Raske M et al (2015) Short-term wound complications and predictive variables for complication after limb amputation in dogs and cats. J Small Anim Pract 56:247–252
6. Goldner B et al (2015) Kinematic adaptations to tripedal locomotion in dogs. Vet J 204:192–200
7. Galindo-Zamora V et al (2016) Kinetic, kinematic, magnetic resonance and owner evaluation of dogs before and after the amputation of a hind limb. BMC Vet Res 12:14
8. Fischer S (2013) Adaptations in muscle activity to induced, short-term hindlimb lameness in trotting dogs. PLoS ONE 13:10
9. Fitzpatrick N et al (2011) Intraosseous transcutaneous amputation prosthesis (ITAP) for limb salvage in 4 dogs. Vet Surg 40:909–925
10. Fuchs A et al (2015) Limb and back muscle activity adaptations to tripedal locomotion in dogs. J Exp Zool A Ecol Genet Physiol 323:506–515
11. Wustefeld-Janssens BG, Lafferty M, Séguin B (2018) Modification of the metal endoprosthesis limb-salvage procedure for excision of a large distal radial osteosarcoma in a dog: a case report. Vet Surg 47:802–808
12. Borghese I, Fair L, Kaufmann M (2013) Canine sports medicine and rehabilitation. Wiley, Hoboken
13. Bedotto RA (2006) Biomechanical assessment and treatment in lower extremity prosthetics and orthotics: a clinical perspective. Phys Med Rehabil Clin North Am 17:203–243
14. Carr BJ (2018) Retrospective study on external canine limb prosthesis used in 24 patients. Vet Evid 3:1–13
15. Brasil. Ministério da Saúde. Secretaria de Gestão do Trabalho e da Educação na Saúde (2013) Confecção e manutenção de órteses, próteses e meios auxiliares de locomoção: confecção e manutenção de próteses de membros inferiores, órteses suropodálicas e adequação postural em cadeira de rodas. Brasília: Ministério da Saúde
16. Agnelli LB, Toyoda CY (2003) Estudo de materiais para a confecção de órteses e sua utilização prática por terapeutas ocupacionais no Brasil. Cadernos De Terapia Ocupacional Da UFSCar 11:82–94
17. Padilha AF (1997) Materiais de Engenharia. Hemus Editora Ltda, São Paulo
18. Lage MHH, Lamounier AR, Pertence A, Eustáquio M (2016) Desenvolvimento de uma metodologia de fabricação de próteses e órteses para cães. In: XXV Congresso Brasileiro de Engenharia Biomédica, 25, Foz do Iguaçu. Anais. Foz do Iguaçu: CBEB 1:1–5
19. Kirpensteijn et al (2000) Ground reaction force analysis of large-breed dogs when walking after the amputation of a limb. Vet Rec 146:155–159
20. Fuchs A, Goldner B, Nolte I et al (2014) Ground reaction force adaptations to tripedal locomotion in dogs. Vet J 201(3):307–315
21. Hogy SM et al (2013) Kinematic and kinetic analysis of dogs during trotting after amputation of a pelvic limb. AJVR 74:1164–1661
22. Salawu A et al (2006) Stump ulcers and continued prosthetic limb use. Prosthet Orthot Int 30:279–285
23. Chen MC, Less SS, Hsieh YL et al (2018) Influencing factors of outcome after lower-limb amputation: a five-year review in a plastic surgical department. Ann Plast Surg 61:314–318
24. Robinson V, Sansam K, Hirst L et al (2010) Major lower limb amputation—what, why and how to achieve the best results. Orthop Trauma 24:276–285

Estimation of Muscle Activations in Black Belt Taekwondo Athletes During the Bandal Chagui Kick Through Inverse Dynamics P. V. S. Moreira, K. A. Godoy Jaimes, and L. L. Menegaldo

Abstract

The purpose of this study was to analyze the agreement between muscle activations calculated using Static Optimization (SO) and experimental measurements from surface electromyography (EMG), applying different exponents to the cost function, during the Bandal Chagui kick. The three-dimensional kinematics of three kicks from five black belt athletes were analyzed. The fastest kick of each subject was processed using the freely available software OpenSim. The tools used were inverse kinematics, the residual reduction algorithm (RRA) and SO. The waveforms estimated with SO for eight muscles of the dominant lower limb were compared with surface EMG measurements in phase (P), magnitude (M) and global agreement (C), using equations in which perfect agreement results in zero. A global agreement close to zero was found for the gluteus medius and maximus (C ≤ 0.12) using 4 as the exponent, and for the biceps femoris, with 2 as the exponent, the global agreement was good (C = 0.19).

Keywords

Inverse dynamics · Electromyography · Static optimization · Ballistic movement · Martial art

1 Introduction

P. V. S. Moreira (&), K. A. Godoy Jaimes, L. L. Menegaldo: Biomedical Engineering Program, COPPE-UFRJ, Av. Horácio Macedo 2030, Rio de Janeiro, Brazil; e-mail: [email protected]

Kicks are responsible for more than 90% of the strikes used in Taekwondo, and the Bandal Chagui is the most frequently applied technique, accounting for between 63.3 and 68.6% of all the techniques seen in a combat [1]. An efficient kick must be fast, powerful, carry high kinetic energy and be accurate [2, 3]; to accomplish these objectives, vigorous and coordinated muscle contractions are required [2, 4]. Thus, determining the muscle forces responsible for an efficient kick could represent significant progress for the biomechanics of combat sports. Direct measurement of muscle forces would traditionally require invasive methods, such as introducing transducers into tendons by a surgical process [5]. On the other hand, the estimation of these forces by non-invasive methods is usually done with mathematical methods based on direct or inverse dynamics (for example, the computed muscle control, CMC, method of OpenSim) [5, 6]. Among these, the method that has estimated forces most acceptably in comparative studies is Static Optimization (SO) [6, 7]. SO determines a possible set of muscle activations that would generate the joint moments calculated by inverse dynamics, such that a cost function is minimized. In general, the cost function assumes that the central nervous system acts economically in terms of energy. The choice of the cost function depends on the main intention of the movement: for example, to save energy [5], to provide greater joint stability through muscle contractions [2, 8], to reduce the individual forces of the long muscles [5], or even to reduce co-contractions and thereby the force applied in the direction opposite to the movement [9] and the articular contact forces [6]. OpenSim uses as a cost function the sum of muscle activations raised to a positive exponent, whose variation can result in different patterns of distribution of muscle forces [5, 8]. Traditionally, the muscle forces estimated using SO are validated through the agreement of the force (or muscle activation) with the muscle excitation curves measured experimentally by electromyography (EMG) [7, 8, 10].
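The idea behind static optimization can be sketched as a small constrained minimization, not OpenSim's actual implementation: minimize the sum of activations raised to an exponent p, subject to the muscles reproducing the joint moment from inverse dynamics. The moment arms, maximal forces and target moment below are invented for illustration only.

```python
# Toy static optimization at a single joint: minimize sum(a_i**p) subject to
# sum(r_i * Fmax_i * a_i) == tau, with 0 <= a_i <= 1.
# r (moment arms, m), Fmax (N) and tau (N*m) are hypothetical values.
import numpy as np
from scipy.optimize import minimize

r = np.array([0.05, 0.03, 0.02])          # moment arms of three muscles
Fmax = np.array([1500.0, 900.0, 600.0])   # maximal isometric forces
tau = 60.0                                # joint moment from inverse dynamics

def solve_so(p):
    cost = lambda a: np.sum(a ** p)
    cons = {"type": "eq", "fun": lambda a: r @ (Fmax * a) - tau}
    res = minimize(cost, x0=np.full(3, 0.5),
                   bounds=[(0.0, 1.0)] * 3, constraints=cons)
    return res.x

a2 = solve_so(2)   # activation pattern with p = 2
a4 = solve_so(4)   # a different distribution of effort with p = 4
```

Changing p redistributes the load among synergists, which is precisely why the choice of exponent matters for the comparison with EMG made in this study.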
In specific cases, the forces are validated by their ability to correctly estimate joint contact forces measured directly using pressure sensors implanted during joint reconstruction surgery [6, 8]. EMG by itself measures when and



how much a muscle is activated, but determining the actual force from it is not straightforward, for several reasons, including noise and crosstalk. Through SO, force, and not only activation, is calculated. The SO activation of each individual muscle is determined by minimizing the sum of muscle activations (around the involved joints), based on the SO force and general muscle-tendon properties (architecture, tendon stiffness, force-length and force-velocity relationships, activation and deactivation delays, etc.). However, to non-invasively validate the muscle force estimations determined by SO calculations, agreement with EMG excitations is recommended [7, 8, 10]. Most studies on the validation of methods for estimating muscle forces based on inverse dynamics and SO have been carried out with daily functional movements such as gait, climbing stairs, and sitting down and standing up from a chair [5–8]. A slightly more complex and faster movement was studied by Alvim et al. [10], who compared the activations calculated by SO and CMC with activations measured by EMG during the single-leg triple hop test in healthy individuals. These authors found, for some muscles, good agreement between the estimated activations and the experimental measurements. However, no study on the validity of cost functions has been carried out with ballistic movements that are more complex and have greater articular amplitude, such as martial arts kicks. This, in addition to the multifactorial demands (high impact forces and speed, with energy savings, joint stability, etc.) that athletes need to meet in order to perform efficient strikes, makes it hard to know in advance which cost function would be more appropriate. Therefore, the objective of this study is to evaluate the quantitative and qualitative agreement between the activations estimated by the SO tool of the freely available software OpenSim and experimental EMG measurements during the Bandal Chagui.
A second objective is to find the exponent of the cost function that best estimates muscle activations and, consequently, muscular forces of the kicking lower limb.

2 Methods

Five black belts with a history of participation in national or international competitions took part in the study: four male (27.8 ± 5.4 years old, 68.2 ± 4.1 kg, 174 ± 2 cm and BMI = 22.53 ± 1.56) and one female, a Brazilian champion, Brazilian military champion and 3rd place in the Military World Games (30 years old, 57.5 kg, 161 cm and BMI = 22.2). All volunteers signed a free and informed consent form. This study was approved by the Research Ethics Committee (approval number: 2.421.495) of University Hospital Clementino Fraga Filho, UFRJ, Rio de Janeiro.
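The BMI values above follow the standard formula, mass in kilograms divided by the square of height in meters; the check below uses the group means quoted in the text.

```python
# BMI = mass (kg) / height (m)**2, cross-checked against the reported values.

def bmi(mass_kg, height_m):
    return mass_kg / height_m ** 2

assert round(bmi(68.2, 1.74), 2) == 22.53   # male group means
assert round(bmi(57.5, 1.61), 1) == 22.2    # female athlete
```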

Bipolar Ag/AgCl electrodes, with 20 mm between poles, 44 mm in length and 21 mm in width, were placed on 8 muscles of the dominant lower limb (tensor fasciae latae, rectus femoris, vastus lateralis, biceps femoris, gluteus medius, gluteus maximus, adductor longus and soleus) according to SENIAM recommendations, except for the adductor longus, which was placed according to Weis et al. [11]. Volunteers performed a general warm-up (3 min of comfortable running followed by 2 min of free combat moves, known as "steps"), followed by a specific warm-up consisting of 10 kicks to a focus pad with 15 s intervals; the first 8 kicks were asked to be submaximal and the last 2 maximal. Then, in approximately 5 min, 18 kinematic markers were placed on specific anatomical points according to the simplified Helen Hayes protocol. A static trial was performed for 10 s in the anatomical position, with open arms at approximately 45 degrees of shoulder abduction. The volunteers were familiarized with the main test with three attempts (2 submaximal and 1 maximal); then a 2-min rest was given, followed by the main assessment. The main assessment was composed of three Bandal Chagui kicks; volunteers were asked to perform them at maximum speed and impact force, reacting to light stimuli with the shortest possible reaction time. The light stimuli consisted of the lighting of 1 LED located in the upper front part of the vertical striking tower (Boomboxe®). The LED was controlled by a MATLAB script that communicated with an Arduino Uno microcontroller connected to the laptop via USB. The stimuli occurred as follows: the researcher gave the command "prepare!", the LED blinked randomly and rapidly 5 times, using the "rand" function of the algorithm, and then the LED lit up for 700 ms, with the beginning of this event at a random time within an interval of 5–10 s after the first flash of the LED.
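The stimulus logic, implemented by the authors in MATLAB with an Arduino, can be sketched in Python; `set_led` below is a hypothetical stand-in for the serial command to the microcontroller, and the flash durations are assumed, since only the 5–10 s window and the 700 ms "go" light are stated.

```python
# Sketch of the stimulus sequence: 5 rapid random flashes, then, at a random
# instant within 5-10 s of the first flash, the LED stays on for 700 ms.
# set_led() is a placeholder for the actual Arduino serial write.
import random
import time

def set_led(on: bool):
    pass  # placeholder: would write to the Arduino over USB serial

def run_stimulus(rng=random.random, sleep=time.sleep):
    for _ in range(5):                      # rapid warning flashes
        set_led(True);  sleep(0.05 + 0.05 * rng())
        set_led(False); sleep(0.05 + 0.05 * rng())
    delay = 5.0 + 5.0 * rng()               # uniform in [5, 10) s
    sleep(delay)
    set_led(True); sleep(0.7)               # 700 ms "go" stimulus
    set_led(False)
    return delay
```

Injecting `rng` and `sleep` as parameters keeps the timing logic testable without real hardware or real delays.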
The trajectories of the reflective markers during the static and dynamic trials (kicks) were recorded by the Smart Capture (BTS®) motion capture system at 250 Hz, and the EMG signal at 1000 Hz using the FREEEMG 300 (BTS®) system, with eight analog inputs, an A/D converter with 16-bit resolution, a common-mode rejection ratio of >100 dB at 50–60 Hz, and a 20–500 Hz band-pass filter. The kicks started from the static combat position (without jumping), with each foot on a force platform (model B600, BTS®) sampled at 1000 Hz and synchronized with the kinematic system. The kinematic markers and force vectors were associated with the Helen Hayes model in the Smart Tracker software (BTS®), then interpolated and filtered with a 4th-order zero-lag Butterworth low-pass filter, with a cutoff frequency of 50 Hz, in the Smart Analyzer (BTS®) software. The static trial files containing the 3D positions of the markers were imported into OpenSim [12] and the musculoskeletal model was scaled: the process by which the dimensions of the


Gait2392 rigid-body model, available in the software's model library, are scaled to correspond to the anatomical dimensions of each volunteer. This model consists of 12 rigid segments: the pelvis, the trunk and, for each side, the thigh, leg, talus, calcaneus and toes. The model is controlled by a set of 76 muscles, represented by 92 actuators (Fig. 1). The Bandal Chagui is a semi-circular kick that starts with a hip extension to propel the pelvis forward while the foot is still touching the ground, followed by hip flexion with medial rotation, hip abduction and knee extension. The technique is completed with the dorsum of the foot impacting the target (Figs. 2 and 3). For the three kicks of each athlete, the linear position of the metatarsal marker (between the 2nd and 3rd metatarsals) was differentiated numerically with respect to time to obtain the peak speed, and the fastest kick was then used in the subsequent analyses. In these analyses, the file containing the pre-processed linear paths of the markers and the ground reaction forces was imported into OpenSim, and the inverse kinematics (IK), residual reduction


algorithm (RRA) and SO tools were applied to obtain the dynamics of the rigid bodies of the musculoskeletal representation (Fig. 1) and their estimated forces and activations (a) during the execution of the kick. The SO of the kicks was analyzed using two exponents p (a^p) in the cost function, p = 2 and p = 4. To minimize the sum of a^p, SO needs to calculate the muscle fiber forces (f) and then normalize the neural effort a by the relative force generated, taking into consideration, simultaneously: (1) the hypothetical maximal force (fmax) if a were maximal (i.e., a = 1); (2) the relationship among muscle-tendon (MT) force, length and velocity according to Hill's model. Muscle architecture and histochemical properties (percentage of fiber types) influence both the force-length-velocity curve and the force-per-activation ratio (f/a). Muscle fiber force is calculated by dividing muscle force (F) by its estimated cross-sectional area (CSA), while F is calculated as the quotient of the muscle-tendon force along the tendon axis (FMT) by the cosine of the pennation angle (α); in other words, FMT = F·cos(α). The EMG waveforms were rectified and filtered with a 4th-order zero-lag Butterworth low-pass filter with a cutoff frequency of 10 Hz, using MATLAB. The filtered EMG signal of each muscle during the chosen kick was normalized to the maximum EMG peak value (of the same muscle) over the filtered EMG curves of the three kicks. Next, the measured and estimated activations of all evaluated athletes were averaged to generate a single averaged curve (with limits of agreement of ±1 standard deviation) (Figs. 2 and 3). In these waveforms, for the qualitative analysis (visual analysis of the curves), two phases were identified: (1) the impulse phase, from the onset of the resultant ground reaction force (GRFres) to the offset of this force.
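The EMG conditioning just described (full-wave rectification, zero-lag Butterworth low-pass at 10 Hz, normalization to the peak across trials) can be reproduced with SciPy. One ambiguity is hedged here: "4th-order zero-lag" may refer either to the single-pass design or to the combined forward-backward response; the sketch below applies a 4th-order design through `filtfilt`.

```python
# Linear envelope of an EMG trace: rectify, zero-lag Butterworth low-pass
# at 10 Hz (forward-backward filtfilt), then normalize to a peak value.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # EMG sampling frequency (Hz), as in the acquisition setup

def envelope(emg, fs=FS, fc=10.0, order=4):
    b, a = butter(order, fc / (fs / 2), btype="low")
    return filtfilt(b, a, np.abs(emg))  # filtfilt gives the zero-lag response

def normalize(env, peak_across_trials):
    # e.g. peak_across_trials = max of the three kicks' envelopes per muscle
    return env / peak_across_trials
```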
The onset and offset thresholds were, respectively, GRFres > 102.5% of the baseline value (baseline: average over the 50 ms before the LED onset) for more than 25 ms, and GRFres < 1% of the baseline value; (2) Aerial kicking phase, from GRFres offset to contact with the target. The aerial phase was subdivided into two sub-phases, (a) knee flexion and (b) knee extension, separated by the peak of the knee flexion angle.

M = \sqrt{ \frac{ \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} c(t)^2\,dt }{ \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} m(t)^2\,dt } } - 1 \qquad (1)

P = \frac{1}{\pi}\cos^{-1}\!\left( \frac{ \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} m(t)\,c(t)\,dt }{ \sqrt{ \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} m(t)^2\,dt \cdot \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} c(t)^2\,dt } } \right) \qquad (2)

Fig. 1 Musculoskeletal model Gait2392, composed of 12 rigid bodies (bone representations) and controlled by a set of 76 muscles represented by 92 musculotendon (MT) actuators. The left image shows the markers and rigid bodies; the right image shows the model with the MT actuators



For the full kick analysis (the entire movement), the quality of the agreement between the estimated muscle activations and the EMG curves (Figs. 2 and 3) was assessed


Fig. 2 Activation waveforms measured by electromyography (black line) and estimated with Static Optimization (blue line), using p = 2. The first vertical line (brown) marks the average offset of the impulse phase and the second vertical line (red) the knee flexion peak. The shaded areas of the activation curves (and of the vertical lines) and dashed lines represent the agreement limits of ±1 standard deviation. TFL: tensor fasciae latae, RF: rectus femoris, VL: vastus lateralis, AL: adductor longus, GMed: gluteus medius, GM: gluteus maximus, BF: biceps femoris, Sol: soleus; GRF: ground reaction force; Flex: flexion. All curves are averaged over the full sample of volunteers (n = 5)

P. V. S. Moreira et al.

Estimation of Muscle Activations in Black Belt …

Fig. 3 Activation waveforms measured by electromyography (black line) and estimated with Static Optimization (blue line), using p = 4. The first vertical line (brown) marks the average offset of the impulse phase and the second vertical line (red) the knee flexion peak. The shaded areas of the activation curves (and of the vertical lines) and dashed lines represent the agreement limits of ±1 standard deviation. TFL: tensor fasciae latae, RF: rectus femoris, VL: vastus lateralis, AL: adductor longus, GMed: gluteus medius, GM: gluteus maximus, BF: biceps femoris, Sol: soleus; GRF: ground reaction force; Flex: flexion. All curves are averaged over the full sample of volunteers (n = 5)



qualitatively (through visual analysis of the vertical differences in activation between methods, at every instant, on the curves of Figs. 2 and 3), and the estimation errors were determined by the method proposed by Geers [13]. This method is based on the calculation of the magnitude error M (Eq. 1) and the phase error P (Eq. 2) between two waveforms, where m(t) is the measured EMG, c(t) is the estimated activation, and t1 to t2 is the time interval of interest.

C = \sqrt{M^2 + P^2} \qquad (3)

Equation (3) combines the magnitude and phase errors into a single value for a global comparison. M and P approach zero when there is no difference in magnitude and phase between the waveforms.
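Geers' magnitude, phase, and combined errors (Eqs. 1-3) are straightforward to compute for uniformly sampled waveforms; in the sketch below the discrete means stand in for the time-averaged integrals, and the function name is ours.

```python
import numpy as np

def geers_errors(m, c):
    """Geers (1984) error measures between a measured waveform m(t)
    and a computed waveform c(t) sampled on the same uniform grid."""
    m = np.asarray(m, dtype=float)
    c = np.asarray(c, dtype=float)
    # Discrete equivalents of (1/(t2-t1)) * integral over [t1, t2]
    psi_mm = np.mean(m * m)
    psi_cc = np.mean(c * c)
    psi_mc = np.mean(m * c)
    M = np.sqrt(psi_cc / psi_mm) - 1.0                  # magnitude error, Eq. (1)
    ratio = np.clip(psi_mc / np.sqrt(psi_mm * psi_cc), -1.0, 1.0)
    P = np.arccos(ratio) / np.pi                        # phase error, Eq. (2)
    C = np.sqrt(M ** 2 + P ** 2)                        # combined error, Eq. (3)
    return M, P, C

t = np.linspace(0.0, 1.0, 1000)
m = np.sin(2 * np.pi * t)
# A waveform with twice the amplitude and no phase shift: M = 1, P = 0, C = 1
M, P, C = geers_errors(m, 2 * m)
```

Note that M isolates amplitude discrepancies while P is insensitive to them, which is why the paper reports both before combining them in C.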

3 Results

Figures 2 and 3 show the waveforms of the muscle activation measured (EMG) and estimated using SO with exponents 2 and 4, respectively. Table 1 contains the error values (M, P, C) of agreement between corresponding waveforms.

4 Discussion

4.1 Quadriceps Muscles

The quantitative analysis of the curves showed that the estimated activations of the quadriceps muscles (VL and RF) had excellent phase agreement (P < 0.15) with the EMG excitations for both exponents p (2 and 4). For the rectus femoris, the magnitude of the activations was overestimated by 100% or more for both

Table 1 Agreement error between the measured activation and the estimated using static optimization

exponents (M ≥ 1.00), which took the global error to similarly high values (C ≥ 1.00), especially for exponent 4. The values calculated by SO also overestimated the magnitude (M) of activation for the vastus lateralis, although with much more reasonable errors, close to 40%. For the VL, the global error was basically determined by the magnitude, with a better overall metric for exponent 2. Alvim et al. [10] obtained with SO (p = 2), for the vastus lateralis in a type of horizontal jump, phase errors higher than those found in the present study (P = 0.19 versus P = 0.11). However, those authors found practically zero magnitude errors (M = 0.01), which made their overall metric (C = 0.01) much more acceptable than ours. Our modeling showed an activation curve containing two peaks, and it was at these moments that the greatest overestimations occurred, although within the limits of agreement of the standard deviation of the EMG curves. Our 38% overestimation probably occurred due to possible differences in muscle architecture (cross-sectional area, pennation angle, etc.), percentage of type II fibers, and the ratio of force (f) produced per activation (a) generated, (f/a), between the athletes of the present study and the constants used in the SO algorithm. These constants are based on analyses of ordinary individuals, whereas an important adaptation to the training of explosive movements is the increase in neuromuscular recruitment capacity and in the frequency of activation.

4.2 Biceps Femoris

Among athletes' hamstrings, the biceps femoris is the stiffest muscle [14], but the one that undergoes the most

          p = 2                  p = 4
          M       P      C       M       P      C
TFL       1.02    0.16   1.03    1.28    0.09   1.28
RF        1.00    0.11   1.00    1.15    0.14   1.16
VL        0.38    0.11   0.39    0.43    0.09   0.44
AL        1.91    0.24   1.93    2.03    0.19   2.04
GMed      −0.34   0.20   0.39    0.05    0.09   0.10
GMax      −0.37   0.13   0.39    −0.09   0.08   0.12
BF        0.11    0.16   0.19    0.56    0.15   0.58
Soleus    1.19    0.27   1.22    1.38    0.15   1.38
Mean      0.61    0.17   0.82    0.85    0.12   0.89
SD        0.75    0.05   0.55    0.68    0.04   0.64

p optimization exponent of the cost function, M Magnitude error, P Phase error, C global agreement error, SD standard deviation, TFL Tensor Fasciae Lata, RF Rectus Femoris, VL Vastus Lateralis, AL Adductor Longus, GMed Gluteus Medius, GMax Gluteus Maximus, BF Biceps Femoris


deformation [15] and eccentric power in ballistic movements [16]. Consequently, it is the most prone to injury [17]. Covarrubias et al. [18] demonstrated that, in taekwondo, muscle strain is the third largest cause of injuries (13.9% of all injuries), and among all combat actions the most damaging are precisely the semicircular kicks (Bandal Chagui and Dolho Chagui). According to them, 88% of injurious movements are kicks and, of these, 82% are semicircular. This makes finding the cost function that best estimates muscle force an important object of study for martial sciences. The analysis of the curves showed that the estimated activations of the biceps femoris presented excellent global agreement with the EMG excitations for exponent 2, but not for p = 4. This difference stems from the smaller magnitude error M obtained with exponent 2, which led to less overestimation of activation during the impulse phase. There was excellent phase agreement (P ≤ 0.11) for both exponents. However, when the foot is almost touching the target (time > 95% of the movement duration) there is an apparent overestimation of the calculated activations. This probably occurred because the model parameters do not consider that taekwondo athletes have stiffer biceps femoris and semitendinosus tendons than non-athletes [14], which would increase passive forces and elicit less neural effort to stabilize the knee and prevent stretch injuries. It may also be associated with the fact that most of the volunteers recruited were of sub-elite level, as it has been shown that sub-elite athletes produce less antagonistic BF contraction than elite athletes during the knee extension of Bandal Chagui [2].

4.3 Gluteal Muscles

In the present study, the calculated gluteus maximus and gluteus medius activations achieved excellent agreement metrics (M and P < 0.1) with the measured excitations, and visual analysis of the curves confirms this. However, this only occurred with exponent 4. This means that a greater exponent, by allowing greater distributions of forces around the pelvis and hips, does not underestimate the forces of the gluteal muscles. In fact, the values of M reveal greater co-contraction between hip extensors and flexors (GM relative to TFL and RF) as well as between hip abductors and adductors (GMed relative to AL). This seems to indicate that, for the athletes of the present study, stabilizing the hip is a plausible goal. Indeed, Moreira et al. [9] demonstrated that in sub-elite athletes the co-contraction between hip flexors and extensors was greater than in elite athletes when performing Dolho Chagui (a semicircular kick delivered at head height). These


authors credited the high antagonistic activations of the GM (in the knee extension phase) to a reflex effect protecting the hip joint and muscles from the high angular speeds to which the athletes are subjected. This effect would be minimized in elite athletes due to greater coordination and flexibility, which would reduce eccentric mechanical stress and therefore allow greater angular velocities of hip flexion. The values of M, P and C obtained in the present study were lower than those obtained by Alvim et al. [10], indicating that with exponent 4 the modeling can estimate the forces of the gluteal region more adequately for the kick than for horizontal triple jumps. Abduction, flexion and extension of the hip are very important movements for performing Bandal Chagui correctly, so obtaining good agreement metrics is promising for the use of SO in future studies.

4.4 Soleus, Adductor Longus and Tensor Fasciae Lata

The soleus is a mono-articular muscle and consequently involves a smaller number of minimizing equations, leading to the expectation of good agreement between the activation and EMG curves. This clearly did not occur, due to the high magnitude errors. The adductor longus is an important muscle for force estimation due to its high injury potential; however, the agreement (a vs EMG) was low. The tensor fasciae lata is a multifunctional muscle (hip flexor and abductor and knee extensor) whose activation was overestimated. A possible cause for this inconsistency is the active tension index (f/a), which is probably higher in taekwondo athletes than in the individuals used to generate this constant. A second possibility relates to the elastic properties of the tendons: as a chronic training effect, taekwondo athletes may have tendons with greater stiffness and hysteresis than non-athletes, allowing a greater ability to transmit forces between muscle and bone with a minimum of neural excitation.

5 Conclusions

It is concluded that exponent 4 is ideal for estimating the forces of the gluteal muscles, while exponent 2 is ideal for estimating the force of the BF and reasonable for the VL. Finally, only the shape (phase P), but not the magnitude (M), of the TFL and RF time curves could be estimated correctly, regardless of the exponent.

Acknowledgements This work was supported by CAPES with a PNPD postdoctoral scholarship.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Kwok HHM (2012) Discrepancies in fighting strategies between Taekwondo medalists and non-medalists. J Hum Sport Exerc 7(4):806–814. https://doi.org/10.4100/jhse.2012.74.08
2. Moreira PVS, Goethel MF, Gonçalves M (2016) Neuromuscular performance of Bandal Chagui: comparison of subelite and elite taekwondo athletes. J Electromyogr Kinesiol 30:55–65. https://doi.org/10.1016/j.jelekin.2016.06.001
3. Kim TW, Lee SC, Kil SK et al (2017) Kicking modality during erratic-dynamic and static condition effects the muscular co-activation of attacker. J Sports Sci 35(9):835–841. https://doi.org/10.1080/02640414.2016.1192672
4. Quinzi F, Camomilla V, Di Mario A et al (2016) Repeated kicking actions in karate: effect on technical execution in elite practitioners. Int J Sports Physiol Perform 11(3):363–369. https://doi.org/10.1123/ijspp.2015-0162
5. Crowninshield RD, Brand RA (1981) A physiologically based criterion of muscle force prediction in locomotion. J Biomech 14(11):793–801. https://doi.org/10.1016/0021-9290(81)90035-X
6. Wesseling M, Derikx LC, de Groote F et al (2015) Muscle optimization techniques impact the magnitude of calculated hip joint contact forces. J Orthop Res 33(3):430–438. https://doi.org/10.1002/jor.22769
7. Żuk M, Syczewska M, Pezowicz C (2018) Use of the surface electromyography for a quantitative trend validation of estimated muscle forces. Biocybern Biomed Eng 38(2):243–250. https://doi.org/10.1016/j.bbe.2018.02.001
8. Modenese L, Phillips ATM, Bull AMJ (2011) An open source lower limb model: hip joint validation. J Biomech 44(12):2185–2193. https://doi.org/10.1016/j.jbiomech.2011.06.019
9. Moreira PVS, Franchini E, Ervilha UF et al (2018) The effect of the expertise level of taekwondo athletes on electromyographic, kinematic and ground reaction force performance parameters during the Dolho Chagui kick. Archives of Budo 14:59–69
10. Alvim FC, Lucareli PRG, Menegaldo LL (2018) Predicting muscle forces during the propulsion phase of single leg triple hop test. Gait & Posture 59:298–303. https://doi.org/10.1016/j.gaitpost.2017.07.038
11. Weiss LD, Silver JK, Weiss J (2004) Easy EMG: a guide to performing nerve conduction studies and electromyography. Butterworth-Heinemann, Edinburgh
12. Delp SL, Anderson FC, Arnold AS et al (2007) OpenSim: open-source software to create and analyze dynamic simulations of movement. IEEE Trans Biomed Eng 55:1940–1950. https://doi.org/10.1109/tbme.2007.901024
13. Geers TL (1984) An objective error measure for the comparison of calculated and measured transient response histories. Shock Vib Bull 54:99
14. Avrillon S, Lacourpaille L, Hug F et al (2020) Hamstring muscle elasticity differs in specialized high performance athletes. Scand J Med Sci Sports 30(1):83–91. https://doi.org/10.1111/sms.13564
15. Schache AG, Dorn TW, Blanch PD et al (2012) Mechanics of the human hamstring muscles during sprinting. Med Sci Sports Exerc 44(4):647–658. https://doi.org/10.1249/MSS.0b013e318236a3d2
16. Chumanov ES, Heiderscheit BC, Thelen DG (2007) The effect of speed and influence of individual muscles on hamstring mechanics during the swing phase of sprinting. J Biomech 40(16):3555–3562. https://doi.org/10.1016/j.jbiomech.2007.05.026
17. De Smet AA, Best TM (2000) MR imaging of the distribution and location of acute hamstring injuries in athletes. AJR Am J Roentgenol 174(2):393–399. https://doi.org/10.2214/ajr.174.2.1740393
18. Covarrubias N, Bhatia S, Campos LF et al (2015) The relationship between Taekwondo training habits and injury: a survey of a collegiate Taekwondo population. Open Access J Sports Med 6:121–127. https://doi.org/10.2147/OAJSM.S80974

Biomedical Devices and Instrumentation

Study of a Sonothrombolysis Equipment Based on Phased Ultrasound Technique A. T. Andrade and S. S. Furuie

Abstract

Sonothrombolysis is a new ultrasound-based treatment that helps to unblock arteries occluded by a thrombus, offering a noninvasive aid to infarct treatment. Although recent, the method has shown great progress in recent years. Among the applications of this technology, the Phased Array Ultrasound Technique (PAUT) can achieve a higher pressure with the same power by using the focused-beam concept. In this paper, this technique is analyzed as a tool for sonothrombolysis treatment and compared with other conventional techniques. The advantages of the phased array ultrasound technique are shown with wave-simulation programs. After that, a project for an ultrasound device prototype is introduced; its implementation and characteristics are also explored herein.

Keywords: Sonothrombolysis · Ultrasound · Electronics transducer · Transducer matrix

A. T. Andrade (&) Postgraduate Program in Electrical Engineering, Escola Politécnica da Universidade de São Paulo, Av. Prof. Luciano Gualberto, São Paulo, Brazil
S. S. Furuie Departamento de Telecomunicações e Controle, Escola Politécnica da Universidade de São Paulo, São Paulo, Brazil

1 Introduction

Medical ultrasound enables the diagnosis of several diseases without harming the patient. Beyond this common use, it can also serve as a resource for treatments such as sonothrombolysis. Sonothrombolysis uses ultrasound waves to cause cavitation in microbubbles spread in the patient's blood flow. The movement of these microbubbles can help to unblock arteries occluded by a thrombus, aiding infarct treatment. Recent studies show the progress of this technique [1, 2]. Different ultrasound techniques can be used for sonothrombolysis; one of them is the phased array ultrasound technique, also known as PAUT [3], already applied in other ultrasound applications such as imaging [4], elastography [5] and brain treatments [6]. By summing independent ultrasound waves, this technique can generate a more powerful ultrasound vibration at a specific point or area. The objective of this paper is to study the phased array ultrasound technique and compare it with other ultrasound techniques in order to design a sonothrombolysis application using ultrasound simulation software. In addition, a new electronic hardware design will support this application and enable us to build a sonothrombolysis equipment prototype. Other desired characteristics for the device are portability, simplicity and low cost, so that it could be operated by technicians in ambulances. Therefore, the equipment will not use ultrasound images; instead, the intention is to irradiate adequate ultrasound power over the whole cardiac volume and provoke thrombolysis wherever needed.

2 Materials and Methods

The methodology used is two-fold: (a) simulations and validation; and (b) implementation. In the first part, different ultrasound techniques are applied to the sonothrombolysis scenario using ultrasound simulation software, studying the advantages of the focused-beam method and comparing different techniques for choosing the focal points and calculating their delays. In the second part, the hardware is designed, from delay discretization to the activation circuit. Once connected to the transducer matrix elements, they allow the

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_106



sonothrombolysis of the patient. In conjunction with microbubble injection, there may be partial or total unblocking of the patient's arteries.

2.1 Simulations and Validation

Conventional ultrasound applications activate the acoustic beam simultaneously along the whole transducer, whereas the PAUT technique applies time delays to each transducer element to create constructive wave interference at a chosen focal point. This interference increases the energy applied at that point, creating a high-pressure region around it and a lower-pressure region far from it. For sonothrombolysis, we need to apply high pressure at the patient's heart, since some arteries may present a thrombus. This region, called the region of interest (ROI) herein, is defined as a prism with a square base of 8 cm edge and 6 cm depth, located 4 cm away from the patient's skin, as shown in Fig. 1. This volume comprises a regular-size heart and its arteries. By means of simulations, this section compares ultrasound applications with and without focusing, pointing out the advantages of the proposed method. For this, the software FOCUS, created by Michigan State University, is used [9]. FOCUS is a continuous- and transient-wave ultrasound simulator designed to work in a variety of media. The comparison uses a transducer matrix of 64 elements measuring 9 mm × 9 mm each, with 1-mm edge-to-edge spacing, as shown in Fig. 2. For therapeutic purposes, ultrasound frequencies on the order of kilohertz are commonly used due to their efficiency in producing cavitation [2]; therefore, the simulations presented in this paper were carried out at 500 kHz. The relative pressure is expressed in dB relative to the pressure PMI = 1.3435 MPa, defined as the pressure needed to reach a Mechanical Index (MI) of 1.9 at the transducer frequency. This value, the maximum MI allowed for diagnostic equipment [10], is used as reference even though the equipment is intended for therapeutic purposes. A simulation with simultaneous activation on FOCUS produces the pressure field of Fig. 3, reaching a relative pressure of Ppp = 0.9956 dB (1.5067 MPa).

Fig. 1 Region of interest (ROI) [7, 8]—modified


Therefore, using phased array activation with properly calculated delays can raise the pressure at any point of the application area compared with the conventional method. Another advantage is that the phased array ultrasound technique can focus the transducer energy at certain points more efficiently, avoiding high-pressure spots in undesirable areas such as the patient's skin or a fragile organ. To achieve high pressure over the whole desirable area instead of a single point, we propose several focal points, generating a more homogeneous pressure field. For this, the phased array technique is applied multiple times, each time focusing on a different point with different time delays. To evaluate this strategy, 162 focal points spaced 1 cm apart were distributed on planes 4 and 7 cm away from the transducer plane, 81 on each plane (Fig. 2), and for each of them a simulation was carried out by activating all 64 transducer elements with the corresponding calculated delays. After the 162 simulations, a combined pressure field was generated from the maximum pressure of all simulations at each point. As expected, this pressure field (Fig. 3) presented a higher pressure than the simultaneous-activation simulation and a better pressure distribution, with a maximum relative pressure of Ppp = 7.9730 dB (3.3643 MPa).

2.2 Implementation of a Prototype

Once a transducer matrix with individual elements is designed, such as the one shown in Fig. 2, and the focal points are chosen and validated by simulations covering the desirable area, the sonothrombolysis application is ready to be implemented. Beyond the pressure field, the simulation programs calculate the time delays for each transducer element and focal point. Therefore, the sonothrombolysis equipment prototype must activate each transducer element according to its delay and repeat this for all the focal points. For each focal point, all the T transducer elements in the matrix are swept during a fixed time called the sweeping time.



Fig. 2 Transducer matrix with 64 elements simulated with FOCUS with focal points equally distributed at the plane z = 4 cm and z = 7 cm

Fig. 3 Pressure Field for the simulation of: a Simultaneous activation. b PAUT activation focused on the focal points shown in Fig. 2

Therefore, at every sweeping time ts, one focal point is focused. The delays have to be calculated with software such as FOCUS [9], k-Wave [11] and others, since ensuring a reasonable homogeneity of the pressure peak in the ROI requires some computer processing. Once the time delays for each transducer element and focal point are determined, the hardware just needs to activate the transducer at the right moment for each focal point, a task that requires much less processing than the calculation itself. Activating the transducer only requires reading the delay information and applying it to the transducer repeatedly, which can be done by a microcontroller, a micro PC or a digital circuit. In any case, a memory capable of storing the time-delay information is necessary.
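The geometric delay law behind this step is simple: each element is delayed so that its wavefront reaches the focal point together with all the others (the farthest element fires first), and the delays are then rounded to the reading-clock grid. A sketch follows; the 1540 m/s tissue sound speed, the 10 mm element pitch, the 100 MHz reading clock, and the function names are illustrative assumptions rather than values from the paper.

```python
import numpy as np

C_TISSUE = 1540.0  # m/s, assumed soft-tissue speed of sound

def focal_delays(elements, focus, c=C_TISSUE):
    """Time delays (s) so that all element wavefronts reach `focus`
    simultaneously. The farthest element gets zero delay."""
    d = np.linalg.norm(np.asarray(elements) - np.asarray(focus), axis=1)
    return (d.max() - d) / c

def discretize(delays, f_clk):
    """Round delays to integer reading-clock ticks (the stored values)."""
    return np.round(np.asarray(delays) * f_clk).astype(int)

# Illustrative 8x8 matrix of elements on the z = 0 plane, focused
# 4 cm in front of the array center (mirroring the paper's geometry)
pitch = 0.010
xs = (np.arange(8) - 3.5) * pitch
elements = np.array([(x, y, 0.0) for x in xs for y in xs])
focus = np.array([0.0, 0.0, 0.04])

tau = focal_delays(elements, focus)
ticks = discretize(tau, f_clk=100e6)  # assumed 100 MHz reading clock
```

Rounding to clock ticks bounds the per-element timing error by half a clock period, which is the discretization error the following paragraphs discuss.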

There are many options and types of memory for data storage. For this article, the memory has to keep its data even without power, giving the device autonomy for transportation and lower energy consumption. Some microcontrollers and micro PCs already have an embedded non-volatile memory, but one can also be added separately. The memory read rate is determined by an internal frequency defined by the device's reading clock. At each reading-clock period, the low-processing hardware reads the stored data and generates the corresponding digital signal, which controls the activation or deactivation of the transducer elements at each step. Since the activation signal is discrete in time, the time delays need to be discretized before being stored in the non-volatile memory. The discretization is carried out



according to the reading-clock frequency. This discretization alters the original time delays, which can introduce error in the ultrasound focalization. The error can be mitigated by choosing a higher reading-clock frequency: the higher it is, the smaller the rounding in the discretization and, therefore, the closer the ultrasound focalization is to the simulation. However, some constraints limit the reading-clock frequency. First, the hardware clock limits it, which depends on the device's processing power and the complexity of its algorithm; a simpler algorithm runs faster and requires less processing. Second, the memory has a frequency limitation determined by its data access time; this value is defined by the memory specifications and limits how long a data query lasts. Therefore, it is important to study the implications of discretization for the final goal and, considering the accuracy needed, to determine the hardware and memory that fit the product specifications. Once the activation signals for each transducer are stored in memory, they can be read by a microcontroller, micro PC or digital circuit, which emits a signal according to the transducer frequency. Since transducer elements need voltages on the order of a hundred volts to be activated, a step-up circuit is necessary to elevate the circuit voltage.

3 Results

Table 1 Data comparison between simultaneous activation and the phased array ultrasound technique

                                     Simultaneous   PAUT, 162 focal points
Simulation                           activation     at 7 and 4 cm away
Maximum relative pressure Ppp (dB)   0.9956         7.9730
ROI homogeneity 3 dB (%)             0              54.67
ROI homogeneity 6 dB (%)             0              66.70


We first analyzed different scenarios for activating a transducer matrix to cover the ROI homogeneously and at high intensity. High pressure is needed over the area containing the patient's heart, since any artery near it may present a thrombus; this region is the region of interest (ROI). The purpose of applying the PAUT technique is to increase the pressure in the ROI while keeping it rather uniform along the region, avoiding low-pressure areas that may cause inefficient cavitation. Table 1 shows the maximum relative pressure Ppp and the ROI homogeneity, defined as the percentage of the ROI volume presenting a relative pressure Ppp between zero and 3 dB (or 6 dB). The objective is to improve the homogeneity, generating more uniform cavitation along the ROI due to the absence of low-pressure regions. All the phased array ultrasound simulations present a higher maximum pressure and better coverage, and Table 1 also shows that a careful design of focal points can achieve high homogeneity. Figure 4 shows histograms of maximum pressures for the simultaneous activation and for the PAUT activation with 162 focal points, illustrating the capability of the focusing technique to increase the pressure level and homogeneity.
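The reference and relative pressures in Table 1 can be reproduced directly: PMI is the peak pressure giving MI = 1.9 at 500 kHz (MI = p[MPa] / sqrt(f[MHz])), and Ppp is the ratio to PMI expressed in dB. A quick numerical check (function names are ours):

```python
import math

def p_from_mi(mi, f_mhz):
    """Peak rarefactional pressure (MPa) for a given Mechanical Index."""
    return mi * math.sqrt(f_mhz)

def relative_db(p_mpa, p_ref_mpa):
    """Pressure relative to the reference, in dB (20*log10 of the ratio)."""
    return 20.0 * math.log10(p_mpa / p_ref_mpa)

p_mi = p_from_mi(1.9, 0.5)                   # -> 1.3435 MPa, the paper's reference
db_simultaneous = relative_db(1.5067, p_mi)  # ~0.996 dB (simultaneous activation)
db_paut = relative_db(3.3643, p_mi)          # ~7.973 dB (PAUT, 162 focal points)
```

The ~7 dB gap between the two activation schemes corresponds to roughly a 2.2-fold pressure increase for the same emitted power.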

Fig. 4 Pressure field histogram for the simultaneous activation simulation and the Phased array ultrasound technique simulation at the focal points shown in Fig. 2 at 4 and 7 cm

4 Discussion

This paper compared the ultrasound application using the simultaneous activation method and the PAUT method. Based on the results, the PAUT method reaches a higher relative pressure and a more homogeneous pressure field. All simulations considered 500-kHz ultrasound waves and a region of interest (ROI) covering the size of a normal person's heart. The simulations can be altered to analyze different scenarios.

5 Conclusion

We presented an approach to study an electronic ultrasound device for sonothrombolysis based on the Phased Array Ultrasound Technique (PAUT). This technique was compared with the simultaneous activation ultrasound technique by analyzing characteristics such as the pressure field


and the pressure-field histogram. Examples of performance for both techniques were presented using software simulations. We also proposed a simple design by separating the data calculation from the ultrasound driver. In addition, an electronic transducer-activation design based on transistors was presented, enabling the construction of a fully functional device.

Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES), Finance Code 001, and by FAPESP grant number 2018/21435-9.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Porter TR, Mathias W (2019) Cardiovascular sonothrombolysis. Curr Cardiol Rep 21
2. Azevedo LF, Porter TR, Ramires JAF, Mathias W (2019) Cardiovascular sonothrombolysis. Ther Appl Ultrasound Biomed J Sci Tech Res 21
3. Poguet J, Vazquez J, Marguet J, Pichonnat F (2002) Phased array technology: concepts, probes and applications. In: 8th European congress on non destructive testing, vol 8. Barcelona, Spain
4. Satyanarayan L, Krishnamurthy CV, Balasubramaniam K (2007) Simulation of ultrasonic phased array technique for imaging and sizing of defects using longitudinal waves. Int J Press Vess Pip 84
5. Jeong WK, Lim HK, Lee H, Jo JM, Kim Y (2014) Principles and clinical application of ultrasound elastography for diffuse liver disease. Ultrasonography 33:149–160
6. Hynynen K, Clement G (2007) Clinical applications of focused ultrasound: the brain. Int J Hyperth 23:193–202
7. Studio Active. Human heart with body lateral view. Endocardium, cava. https://www.dreamstime.com/stock-illustration-humanheartbody-lateral-viewmuscular-organ-size-closed-fist-functionsasbody%C3%A2%E2%82%AC%E2%84%A2s-image96248062
8. Visco K. How a deadly disease may also contribute to greater fertility. https://www.eivf.org/post/how-a-deadly-disease-mayalso-contribute-to-greaterfertility
9. Michigan State University. Fast object-oriented C++ ultrasound simulator. https://www.egr.msu.edu/fultras-web/
10. Szabo T (2013) Diagnostic ultrasound imaging: inside out, 2nd edn. Academic Press, Cambridge, Massachusetts
11. Treeby B, Jaros J. k-Wave: a MATLAB toolbox for the time-domain simulation of acoustic wave fields

Development and Characterization of a Transceiver Solenoid RF Coil for MRI Acquisition of Ex Situ Brain Samples at 7 Teslas L. G. C. Santos, K. T. Chaim, and D. Papoti

Abstract


This paper describes the development and characterization of a solenoid radiofrequency (RF) coil specifically designed for MRI acquisition of ex situ brain samples at 7 teslas. The coil was designed to maximize the filling factor for small brain samples, such as the hippocampus and brainstem, providing optimum signal-to-noise ratio (SNR) when compared to the available 32-channel volume head coil. SNR measurements comparing the performance of the constructed solenoid and the available head coil showed a gain of 4.8 times. As expected, the field homogeneity achieved by the solenoid was considerably worse than that of the volume head coil. Nevertheless, the region of high homogeneity provided by the solenoid proved large enough to cover the whole sample and generate images free of RF inhomogeneity artifacts. The solenoid coil allowed the acquisition of high spatial resolution images of ex situ brainstem samples, making possible the discrimination between gray and white matter in the brainstem anatomy.

Keywords

Magnetic resonance imaging (MRI) · RF coils · Postmortem

L. G. C. Santos (corresponding author) · D. Papoti
Centro de Engenharia, Modelagem e Ciências Sociais Aplicadas, Universidade Federal do ABC, Alameda Universidade s/n, Sao Bernardo do Campo, SP, Brazil
e-mail: [email protected]

K. T. Chaim
Faculdade de Medicina FMUSP, Universidade de Sao Paulo, Sao Paulo, SP, Brazil

1 Introduction

Magnetic Resonance Imaging (MRI) is a medical diagnostic imaging modality that combines static and dynamic magnetic fields with radiofrequency (RF) fields to generate images of the human body [1]. MRI has become one of the most important imaging techniques due to its ability to provide different types of contrast from soft tissues with high spatial resolution, and because it does not involve ionizing radiation, being considered a completely noninvasive technique. Besides providing anatomical structural images for clinical diagnosis, it allows functional and metabolic studies of the brain through functional MRI (fMRI), Magnetic Resonance Spectroscopy (MRS), and perfusion and diffusion imaging techniques [2]. Among the main components of an MRI scanner, the RF coils can be considered one of the fundamental parts, since they represent the first transducer element in the whole signal reception chain [3, 4]. Basically, an RF coil works as a radio antenna, tuned to a resonant frequency (known as the Larmor frequency) to transmit RF power during the excitation of the nuclear spins of the hydrogen atoms in the sample. During reception, the RF coils should be able to receive the signal induced by the relaxation processes affecting the nuclear spins. RF coils can be divided into three categories: (1) transmit-only coils, whose main function is delivering power to excite the spins of the sample through a homogeneous RF field (also called the B1 field); (2) receive-only coils, whose main function is receiving the signal with maximum sensitivity, increasing the signal-to-noise ratio (SNR) of the images; (3) transceivers, which should both transmit power with a homogeneous RF field and receive the induced signal with high sensitivity.
Early in 2015, the Institute of Radiology (InRad) of the School of Medicine at the University of Sao Paulo (FMUSP) acquired the first ultra-high field MRI scanner (MAGNETON 7 T, SIEMENS) in Latin America in a project entitled Imaging Platform in the Autopsy Room (PISA, from the

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_107

711

712

L. G. C. Santos et al.

Portuguese-derived acronym for Plataforma de Imagem na Sala de Autopsia [5]). In addition to research in the field of neurology, this equipment will be used for many other projects with patients/volunteers, also in the field of pathology, including in situ (postmortem) and ex situ (histology) MRI to study neurodegenerative diseases, such as Alzheimer's, Parkinson's and Multiple Sclerosis [6–8]. However, one of the main current limitations of 7-T scanners is the unavailability of commercial, dedicated RF coils, representing a great challenge for protocols designed for postmortem imaging and ex situ brain samples. Currently, there is only a volume head coil available for the 7-T scanner located at FMUSP, which represents a limitation for imaging small brain samples with high spatial resolution due to the very low SNR provided by this coil when loaded with small samples. Furthermore, one of the current challenges in ultra-high field (i.e. ≥ 7 T) MRI scanners is B1 field inhomogeneity, which is the main reason why 7-T scanners do not have transmit-only body coils available. For this reason, all RF coils operating at 7 T must be designed to operate as transceivers. The goal of the work reported here was to develop a dedicated transceiver RF coil specifically designed for MRI acquisitions of histological brain samples, such as brainstem and hippocampus. The chosen geometry was the typical solenoid, due to its capability to produce a highly homogeneous B1 field along its axial axis and its efficiency in generating field intensity, which translates into high SNR when receiving the MRI signal, according to the principle of reciprocity [9]. Since this coil will operate inside a human MRI scanner and there are no space constraints, the coil axis can be oriented perpendicular to the main magnetic field (also called the B0 field).
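As a sanity check on the operating frequency used throughout this work, the proton Larmor frequency at 7 T follows directly from the hydrogen gyromagnetic ratio. The numbers below are textbook constants, not values taken from this paper:

```python
# Proton Larmor frequency: f0 = gamma_bar * B0,
# with gamma_bar ~ 42.577 MHz/T for hydrogen (1H).
GAMMA_BAR_MHZ_PER_T = 42.577
B0_TESLA = 7.0

f0_mhz = GAMMA_BAR_MHZ_PER_T * B0_TESLA
print(f"1H Larmor frequency at {B0_TESLA} T: {f0_mhz:.1f} MHz")
```

This lands at roughly 298 MHz, consistent with the coil tuning frequency described in the next section.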

2 Materials and Methods

Since the brain tissue samples are usually stored in conical 50 ml Falcon tubes, the mechanical support of the coil was designed around the typical dimensions of Falcon tubes in order to maximize the so-called filling factor of the coil [9], consequently increasing its SNR. All mechanical parts were designed using the CAD software Solidworks 2008 (Dassault Systemes), as shown in Fig. 1a. The design consisted of a solenoid-type coil with 5 turns of equally spaced copper tape (3M), 5 mm wide and 100 µm thick. To cover the whole sample, the support was designed with 28 mm inner diameter, 34 mm outer diameter and 130 mm length. Considering that the conductive parts and circuits of the coil are distant from the gradient coil and other metallic parts of the scanner, there was no need for an external RF shield. To provide additional protection for the operator and for the coil during its handling, a cylindrical structure made of PVC was added to the design, as shown in Fig. 1b. All mechanical parts (except the external PVC tube) were 3D printed in ABS plastic on a fused deposition modeling (FDM) 3D printer (Flashforge Dreamer). Figure 1c shows the 32-channel volume head coil commercially available for SIEMENS 7-T scanners. The tuning of the coil at 297 MHz was performed by inserting 2.2 pF capacitors (non-magnetic High Q 0.111″ series, Passive Plus) in series with each turn of the solenoid. A variable capacitor (non-magnetic high voltage Teflon, Polyflon) was inserted in parallel with the fixed capacitor of the central loop, allowing fine tuning adjustments for changing load conditions. A capacitive balanced impedance transformation network [9] was used to match the coil impedance to 50 Ω, providing maximum RF power transfer during transmission and high sensitivity during signal reception (optimum SNR). The workbench characterization of the coil was performed using a vector impedance analyzer (model 8712ET, Agilent), as shown in Fig. 2a. The tuning of the coil to 298 MHz was verified by measuring the reflection coefficient in logarithmic scale (S11), as shown in Fig. 2b, achieving a value of −17 dB when the coil was loaded with a brainstem sample. The measured impedance after adjusting the tuning and matching capacitors was Z = (50.21 + j5.18) Ω. The coil was tested in the 7-T scanner by acquiring B1 maps, SNR maps and high resolution T1-weighted (T1W) images of brainstem samples. B1 maps were acquired using a sequence known as SA2RAGE [10], while the SNR maps were obtained using a FLASH3D sequence. For all measurements, a conical Falcon tube filled with a 5 mM CuSO4 water solution was used as phantom. For comparison of SNR and B1 field homogeneity, the exact same measurements were also performed with the commercial volume head coil (Fig. 1c). To quantify B1 field homogeneity, the parameter known as Non-Uniformity (NU [11]) was used.
It is defined as the ratio between the standard deviation of the signal intensity inside a region of interest (ROI) and the average signal intensity of the same ROI. SNR values were calculated by defining ROIs inside the sample signal and in the background noise [12]. The signal is defined as the average pixel intensity inside the ROI, while the noise is defined as the standard deviation of the background noise. NU and SNR can be calculated according to Eqs. (1) and (2) below:

NU = (Standard Deviation of Signal Intensity in the ROI) / (Average of the Signal Intensity in the ROI)   (1)

SNR = (Average of the Signal) / (Standard Deviation of Noise × 0.66)   (2)
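As a minimal sketch (not the authors' code), Eqs. (1) and (2) map directly onto a few lines of NumPy; the ROI arrays and the 0.66 background-noise correction factor come from the definitions above:

```python
import numpy as np

def non_uniformity(roi):
    """Eq. (1): std / mean of the signal intensity inside the ROI."""
    roi = np.asarray(roi, dtype=float)
    return roi.std() / roi.mean()

def snr(signal_roi, noise_roi):
    """Eq. (2): mean signal over (background-noise std * 0.66)."""
    s = np.asarray(signal_roi, dtype=float).mean()
    sigma = np.asarray(noise_roi, dtype=float).std()
    return s / (sigma * 0.66)
```

In practice the two ROIs would be slices of the magnitude image, e.g. `snr(img[60:80, 60:80], img[:10, :10])` (indices hypothetical).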


Fig. 1 a Mechanical design of the coil support. b Coil assembly including the 3D printed parts and the external PVC tube for protection. c 32-channel head coil currently available at the 7-T scanner

Fig. 2 a Characterization on the workbench setup using a vector impedance analyzer. b S11 measurements in logarithmic scale (top) and impedance measurement using the Smith chart (bottom)

High spatial resolution T1W images were acquired from a brainstem sample using a FLASH3D sequence with TR/TE = 40/17 ms and 4 averages. Axial slices were acquired with FOVphase/FOVread = 27/40 mm and matrix size Nphase × Nread = 660 × 960, resulting in 40 µm in-plane resolution with 100 µm slice thickness and a total acquisition time of 5 h 46 min 38 s. Coronal slices were acquired with FOVphase/FOVread = 39/60 mm and matrix size Nphase × Nread = 630 × 960, resulting in 70 µm isotropic resolution, with a total acquisition time of 5 h 30 min 27 s.

Fig. 3 B1 field maps obtained using the SA2RAGE sequence for a Solenoid coil. b Head coil

3 Results

Figure 3 shows normalized B1 maps obtained with the solenoid coil (Fig. 3a) and with the head coil (Fig. 3b) using the phantom described before. The values are normalized by the field intensity at the isocenter of each coil. As expected, due to its size compared to the solenoid, the volume head coil is more homogeneous along the longer dimension of the phantom, as confirmed by the values of NU displayed in

Table 1 Comparison between SNR and NU values obtained for both coils

            Signal (a.u.)   Noise (a.u.)   SNR      NU (%)
Solenoid    3309.05         7.30           453.36   48
Head coil   1043.83         11.02          94.67    19

Fig. 4 SNR maps obtained from the phantom using both RF coils. a Head coil. b Solenoid coil

Fig. 5 Images acquired from a brainstem sample prepared using Fomblin® in a Falcon tube and using the solenoid coil. a Head MRI in axial orientation illustrating the anatomical position of the brainstem. b, c T1W axial slices obtained from the brainstem sample with 40 µm in-plane resolution and 100 µm slice thickness. d Head MRI in coronal orientation illustrating the anatomical position of the brainstem. e, f T1W coronal slices obtained from the brainstem sample with 70 µm isotropic resolution

Table 1. Nevertheless, the solenoid is capable of producing an RF field with high homogeneity over a volume matching the Falcon tube inner diameter and approximately 7.2 cm in length. The SNR maps were generated by normalizing each pixel intensity of the phantom image by the standard deviation of the pixel intensities located in the background noise, as can be seen in Fig. 4. The solenoid coil showed an improvement of 4.8 times in SNR compared to the head coil, which is confirmed by the signal, noise and SNR values shown in Table 1. Figure 5 shows T1W images acquired with high spatial resolution using the developed solenoid coil. As a reference


for anatomical localization, Fig. 5a illustrates the brainstem structure in the head viewed in axial orientation. Figure 5b, c show axial slices acquired from an ex situ brainstem sample with 40 µm in-plane resolution and 100 µm slice thickness. Figure 5d shows the position of the brainstem inside the head viewed in coronal orientation. Figure 5e, f show coronal slices acquired from the same ex situ sample with 70 µm isotropic resolution. All slices show a clear discrimination between gray and white matter in the brainstem, easily allowing the identification of fine structures of the brainstem anatomy [13], such as the pons, medulla oblongata, pyramid and anterior median fissure.

4 Conclusions

This work described the development and characterization of a transceiver RF coil with solenoid geometry designed specifically for MRI acquisition of histological brainstem samples at 7 T. The coil geometry was optimized to maximize the filling factor for samples stored in Falcon tubes, improving the final SNR. Since this coil will be used in human 7-T scanners, its small dimensions compared to the magnet bore make it possible to use a solenoid geometry, provided the B1 field direction is perpendicular to the main magnetic field. B1 field maps showed that the volume head coil generates a more homogeneous field distribution than the solenoid coil. However, for sample sizes up to 7 cm in length, the solenoid coil is still able to acquire high quality images free of B1 field non-uniformity artifacts. SNR maps and measured SNR values for the solenoid showed a gain of 4.8 times compared to the head coil, allowing MRI acquisition with 70 µm isotropic spatial resolution from an ex situ brainstem sample. This made possible a clear discrimination between gray and white matter in the brainstem and the identification of fine structures of the brainstem anatomy.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Plewes DB, Kucharczyk W (2012) Physics of MRI: a primer. J Magn Reson Imag 35:1038–1054
2. Holdsworth SJ, Bammer R (2008) Magnetic resonance imaging techniques: fMRI, DWI, and PWI. Semin Neurol 28:395–406
3. Vaughan JT, Griffiths JR (2012) RF coils for MRI. Wiley, Hoboken
4. Gruber B, Froeling M, Leiner T, Klomp DWJ (2018) RF coils: a practical guide for nonphysicists. J Magn Reson Imag 48:590–604
5. Marques F (2015) A morte explica a vida. Pesquisa Fapesp 229
6. Kolasinski J, Stagg CJ, Chance SA et al (2012) A combined post-mortem magnetic resonance imaging and quantitative histological study of multiple sclerosis pathology. Brain 135:2938–2951
7. De Barros A, Arribarat G, Combis J, Chaynes P, Péran P (2019) Matching ex vivo MRI with iron histology: pearls and pitfalls. Front Neuroanat 13:68
8. Tuzzi E, Balla DZ, Loureiro JRA et al (2019) Ultra-high field MRI in Alzheimer's disease: effective transverse relaxation rate and quantitative susceptibility mapping of human brain in vivo and ex vivo compared to histology. J Alzheimer's Dis 1–19
9. Mispelter J, Lupu M, Briguet A (2006) NMR probe heads for biophysical and biomedical experiments: theoretical principles & practical guidelines. Imperial College Press
10. Eggenschwiler F, Kober T, Magill AW, Gruetter R, Marques JP (2012) SA2RAGE: a new sequence for fast B1+-mapping. Magn Reson Med 67:1609–1619
11. Salmon CEG, Vidoto ELG, Martins MJ, Tannús A (2006) Optimization of saddle coils for magnetic resonance imaging. Braz J Phys 36:4–8
12. National Electrical Manufacturers Association et al (2001) Determination of signal-to-noise ratio (SNR) in diagnostic magnetic resonance imaging. NEMA Standards Publication MS 1-2001
13. Gil MAF, Bote RP, Barahona ML, Encinas JPM (2010) Anatomy of the brainstem: a gaze into the stem of life. Semin Ultrasound CT MRI 31:196–219

Analysis of Breast Cancer Detection Based on Software-Defined Radio Technology D. Carvalho, A. J. Aragão, F. A. Brito-Filho, H. D. Hernandez and W. A. M. V. Noije

Abstract

This paper evaluates a potential system for performing medical imaging with electromagnetic waves in the microwave frequency range. It aims to detect breast cancer early using a portable, low-cost solution instead of expensive and bulky equipment. An explanation of Microwave Imaging and Software-Defined Radio is presented. Employing the GNU Radio toolkit, a radio frequency communication was performed in which an SNR close to 80 dB was achieved, showing that the proposed system is suitable for our project and paving the way for further work.

Keywords

Microwave imaging • Software-defined radio • Breast cancer detection

1 Introduction

Medical imaging techniques are widely used to assist the diagnosis of inner structures of tissues, organs or body parts, aiming to detect any abnormality. In general, they employ well-established methods such as Ultrasound, X-rays, Magnetic Resonance Imaging and Positron Emission Tomography. Microwave Imaging (MwI) has emerged as a potential technique, offering a low-cost, low-health-risk option for illness screening. MwI is based on the interaction of non-ionizing electromagnetic (EM) waves at microwave frequencies with tissue. The waves are radiated over a body part and the scattered signal is acquired and processed. Depending on the dielectric properties of the illuminated part, a tumor may be revealed. Several studies report a significant contrast between the dielectric properties of tumors and healthy tissues at microwave frequencies [1,2]. This work focuses on the application of MwI to breast cancer detection, since it is the most common cancer type among women, having affected over 2 million women worldwide in 2018. In Brazil, around 16 thousand deaths were reported in 2016, and around 120 thousand new cases were estimated between 2019 and 2020 [3]. Despite these numbers, early-stage detection contributes to a reduction in mortality rates, as reported by the American Cancer Society [4]: if a tumor is detected while still localized, the 5-year survival rate is 99%, against 27% if it has metastasized into distant tissues. Breast cancer diagnosis must follow protocols of physical and clinical examination, imaging and histological analysis of the pathology. Ultrasound and mammography are the most widely used techniques due to their reasonable precision, low cost and high availability in the medical network. However, ultrasound precision is highly dependent on the operator's experience, and mammography is uncomfortable, besides using ionizing radiation. MwI systems have emerged as a new alternative, with several systems reported over the past years, such as [5,6] as non-portable solutions and [7] as a portable system. All the aforementioned systems were implemented using a Vector Network Analyzer (VNA), which is large and expensive.

D. Carvalho (corresponding author) · A. J. Aragão · W. A. M. V. Noije
Department of Electronic Systems Engineering of the School of Engineering, DMPSV–Poli USP, Av. Prof. Luciano Gualberto-380, São Paulo, Brazil
e-mail: [email protected]

A. J. Aragão
Department of Electrical Engineering of the Federal Institute of Education, Science and Technology (IFSP), São Paulo, Brazil

F. A. Brito-Filho
Federal University of the Semiarid Region (UFERSA), Caraubas, Brazil

H. D. Hernandez
Federal University of Minas Gerais (UFMG), Belo Horizonte, Brazil
Besides, [8] reported a handheld impulse-radar detector system using only CMOS circuits, but still not low-cost. Concerning portable and low-cost MwI systems, Software-Defined Radio (SDR) has been seen as a technological breakthrough. SDR comprises a Radio Frequency (RF) section coupled with analog-to-digital (ADC) and digital-to-analog (DAC) converters, all controlled by glue logic that interfaces with a computer through a USB connection. The use of SDR for MwI was first indicated by [9]. For breast imaging with high contrast, a challenge that needs to be addressed is the system's operating frequency. The higher the chosen frequency, the higher the imaging resolution, allowing smaller tumors to be detected. However, at high frequencies, microwave signals propagating through tissue undergo more attenuation, which means that a deep tumor could go undetected. To cope with this trade-off, an SDR-based MwI system is proposed. This work investigates the implementation issues and assesses the BladeRF SDR platform, which proves versatile while meeting the requirements for finding a breast tumor using EM waves. This paper is organized as follows: Sect. 2 presents background on breast cancer, microwave imaging and SDR; Sect. 3 shows the methodology; Sect. 4 explains the two main concerns about the MwI system; Sect. 5 discusses and highlights the achievements; conclusions and perspectives are drawn in Sect. 6.

Note that the gap between the breast and the antennas (light blue) has a harmful impact on the system measurements due to the large difference between the electrical properties of living tissue and air. To reduce reflections and increase the power coupled into the tissue, coupling liquids are used to fill this gap.

2 Background and Challenges

2.1 Breast Dielectric Properties

During a woman's lifetime, the breast undergoes different stages according to her age; the fibroglandular tissues of a younger woman are replaced by fat during the menopause period. Therefore, a younger woman's breast tends to be firmer and denser than the breasts of older women. The microwave breast imaging modality is possible due to the difference in dielectric properties between malignant and healthy breast tissue. Fatty tissues are expected to have low permittivity and conductivity, while a tumor's properties are around 10 times larger. However, higher permittivity and conductivity are also expected for fibroglandular tissues, resulting in reflections of microwave signals and making tumor identification harder, since the dielectric difference to a tumor is close to 10%, as mentioned by [1]. To make matters worse, the breast skin acts as a shield, reflecting a strong signal back to the receiver, as depicted in Fig. 1, which shows a heterogeneous breast model composed of glandular tissues (red) and a tumor (purple), with a signal transmitted and collected from different perspectives. This artifact, reflected by the skin, is typically several orders of magnitude greater than the reflections from any tumors present within the breast. If it is not removed effectively, it can easily mask the presence of tumors.

Fig. 1 A breast coupled with BladeRF—Microwave signals representation

2.2 Microwave Interaction and Processing

The interaction of EM fields (e.g. microwaves) with tissues follows three basic mechanisms, as explained by [10]:

• the displacement of conduction electrons and ions in tissue as a result of the force exerted on them by the applied EM field;
• polarization of atoms and molecules to produce dipoles;
• orientation of permanent dipole molecules in the direction of the applied field.

In response to an applied field, a current is created within the tissue based on its intrinsic electrical conductivity. The current is proportional to the number of free electrons and ions (ionized molecules), which is higher for invasive cancer than for the other tissues in a breast, due to the increased volume of water within cancerous tissue. The degree to which a tissue can be polarized, either by creating new dipoles or by the co-orientation of permanent dipole molecules, is a measure of its permittivity. Thus, whenever a breast is illuminated by a microwave signal, a tumor's presence is identified as a higher-energy signal response, which is processed to compose an image. Imaging reconstruction algorithms are evaluated by [11], which shows that the Delay-Multiply-and-Sum (DMAS) algorithm is tailored to MwI.
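The frequency-dependent permittivity and conductivity behavior described above is commonly captured by a single-pole Debye model. The sketch below is illustrative only: the parameter values are assumptions loosely inspired by published tissue data, not measurements from this paper:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def debye_permittivity(f_hz, eps_inf, d_eps, tau, sigma_s):
    """Single-pole Debye model of complex relative permittivity:
    eps(w) = eps_inf + d_eps/(1 + j*w*tau) - j*sigma_s/(w*eps0)."""
    w = 2 * np.pi * f_hz
    return eps_inf + d_eps / (1 + 1j * w * tau) - 1j * sigma_s / (w * EPS0)

# Assumed, fat-like vs tumor-like parameter sets, evaluated at 3 GHz:
f = 3e9
eps_fat = debye_permittivity(f, eps_inf=3.0, d_eps=1.7, tau=13e-12, sigma_s=0.03)
eps_tum = debye_permittivity(f, eps_inf=7.0, d_eps=47.0, tau=10e-12, sigma_s=0.7)
contrast = eps_tum.real / eps_fat.real
print(f"real-permittivity contrast: {contrast:.1f}x")
```

With these assumed parameters the real-permittivity contrast comes out around the order-of-magnitude difference cited above for fat versus tumor.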


2.3 MwI System Approach

There are two main approaches to performing MwI: tomography and radar-based. Both rely on the application of an electromagnetic wave and on the dielectric properties of the breast and of the tumor. MwI tomography aims to recover the dielectric profile of the illuminated organ using an inverse scattering method; it is a quantitative reconstruction of the breast dielectric profile. In turn, radar-based MwI depends on the application of an Ultra-Wideband (UWB) pulse and aims at reconstructing the target "energy map", i.e. the position of the strong reflection or backscattered signal from the illuminated organ. Based on the non-uniformity of the dielectric properties, a region with increased backscattering can be identified as abnormal tissue or a tumor. Most current systems use the radar-based approach, probably due to advances in microelectronic technologies allowing Integrated Circuits (ICs) to run at high speed. Also, in an application such as breast cancer detection, the aim is to determine the existence and location of a tumor (qualitative reconstruction), rather than the breast dielectric profile.

2.3.1 System Parameters

Regarding MwI system performance parameters, especially from the receiver's point of view, sensitivity and dynamic range are the main concerns. First, the Signal-to-Noise Ratio (SNR) is the ratio of the desired signal level to the level of background noise; the higher the SNR, the better the system. That said, the weakest signal that can be successfully detected against the background of measurement noise and uncertainties is known as the sensitivity. MwI requires high sensitivity values, at least 60 dB according to Nikolova [12]. This value means that two signals whose voltages differ by a factor of one thousand must both be successfully detectable. Since the RF signal loses energy in the lossy tissues of the breast, this is a hard requirement for an MwI system. The dynamic range (DR), measured in dB, is the ratio between the largest input signal tolerated by the receiver and the sensitivity. In other words, it corresponds to the range of input signal levels for which the receiver provides linear amplification. The large difference between the skin reflection and a tumor's backscatter calls for a DR of at least 40 dB, and possibly up to 100 dB, as seen in [13]. Thus, a high DR is desirable to achieve better microwave signal penetration depth.
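The sensitivity and DR figures above are easy to sanity-check, since a level of X dB corresponds to a voltage (amplitude) ratio of 10^(X/20); a minimal sketch:

```python
def db_to_voltage_ratio(db):
    """Voltage (amplitude) ratio corresponding to a level expressed in dB."""
    return 10 ** (db / 20)

# 60 dB of sensitivity spans a 1000x voltage range, as stated above;
# a 40 dB dynamic range spans a 100x voltage range.
print(db_to_voltage_ratio(60))  # 1000.0
print(db_to_voltage_ratio(40))  # 100.0
```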

2.4 Radar—SFCW Waveform

Time-domain radar is implemented to deal with EM pulses covering large bandwidths (e.g. 5 GHz). However, it is not indicated when a large DR is concurrently required, as in MwI systems. A way out of this issue is provided by radar operating in the frequency domain with a Stepped Frequency Continuous Wave (SFCW) waveform. SFCW radar is based on a single-frequency tone that is transmitted, backscattered by the target and then acquired, as drawn in Fig. 1. The procedure is repeated while increasing the tone frequency by Δf (a narrow baseband signal) until reaching the desired BW.
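An SFCW sweep is simply a list of tone frequencies; a minimal sketch (the function name is ours, not from the paper), using the 1–6 GHz, 25 MHz-step sweep reported later in this paper:

```python
def sfcw_frequencies(f_start_hz, bw_hz, step_hz):
    """Tone frequencies of a stepped-frequency continuous-wave sweep."""
    n_tones = int(bw_hz // step_hz) + 1
    return [f_start_hz + k * step_hz for k in range(n_tones)]

# 1 GHz start, 5 GHz bandwidth, 25 MHz steps -> 201 tones up to 6 GHz.
tones = sfcw_frequencies(1e9, 5e9, 25e6)
print(len(tones), tones[0], tones[-1])
```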

2.5 SDR Technology

An SDR device can be seen as a programmable IC transceiver coupled with ADCs, DACs, digital filters, etc. Moreover, most signal processing tasks are performed by an FPGA (Field-Programmable Gate Array). SDR is capable of transmitting an arbitrary waveform, such as an SFCW signal, sweeping its operational frequency from a few hertz up to a few GHz in software. Not only the frequency, but also the power gain and bandwidth can be software-defined. Taking radar-based MwI into account, SDR devices are capable of performing the whole chain: SFCW waveform generation; RF signal transmission (TX) and reception (RX) through antennas; and processing of the RX signals. As mentioned in Sect. 1, a VNA is commonly used for MwI system implementation, since it is able to synthetically generate the UWB pulse and to collect the backscatter over a wide range of frequencies. Nonetheless, SDR can carry out the same functionality with the advantage of being low-cost.

2.5.1 BladeRF

Developed by Nuand™, BladeRF is a versatile USB 3.0 SDR device that provides full duplex operation up to 40 MSPS with an instantaneous bandwidth of 56 MHz. As depicted in Fig. 2, the Cyclone V FPGA is at its heart, interfacing with the Cypress FX3 USB controller on one side and the Analog Devices AD9361 RF transceiver on the other. The latter comprises ADCs and DACs featuring 12-bit resolution, providing a native DR of 74 dB. Additional DR can be obtained by using averaging techniques, the same used in most VNAs. BladeRF also provides 2 TX and 2 RX SMA interfaces, allowing multiple-input multiple-output operation.


Fig. 2 BladeRF board

Moreover, the BladeRF TX band ranges from 47 MHz to 6.0 GHz, while its RX band ranges from 70 MHz to 6.0 GHz, covering most of the operational frequencies for MwI. The AD9361 also provides self-calibration and automatic gain control (AGC) to maintain a high performance level under varying temperatures and input signal conditions.
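The quoted 74 dB native dynamic range follows directly from the ideal quantization-limited formula for an N-bit converter, 6.02·N + 1.76 dB; a one-line check:

```python
def adc_dynamic_range_db(bits):
    """Ideal quantization-limited dynamic range of an N-bit converter."""
    return 6.02 * bits + 1.76

# The AD9361's 12-bit converters -> 74 dB, matching the native DR above.
print(adc_dynamic_range_db(12))
```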

2.6 GNU Radio Development

GNU Radio [14] is a free and open-source software development toolkit chosen to implement the design of the RF, analog and digital parts of this MwI system project. It provides several signal processing blocks, such as filters and signal sources, and, most importantly, it supports the BladeRF platform.

3 Methodology

This work begins by formulating some assumptions, such as the target resolution (i.e. the minimum detectable tumor) and the EM wave penetration requirements for breast cancer detection employing microwaves. After that, those requirements are compared with the BladeRF specifications. Next, an RF communication between two UWB antennas using the SFCW approach is performed, aiming to explore the BladeRF features and thus first demonstrating its ability to realize a radar-based MwI system.

4 MwI Analysis

4.1 Target Resolution

According to Pereira [15], a breast tumor can be diagnosed as localized (e.g. Stage II) if its size is less than 5 cm. However, the tumor size is only one of the factors considered when staging a person's breast cancer. Therefore, a more reasonable target is a tumor as small as 1 cm, which defines the system spatial resolution, or slant range (rs) resolution as it is called in radar systems. Assuming the radar-based MwI system, the distance (D) of a target, i.e. a tumor, from the system can be simplified as

D = (ν · τ) / 2   (1)

where D corresponds to the scattering point, obtained from the time of flight τ of the microwave signal with wave velocity ν. However, locating a target in space requires more than a single radar measurement, as the target can lie anywhere on a spherical surface at distance D, with thickness rs. Better spatial precision is achieved by repeating the radar measurement with the antennas (TX and RX) in different positions relative to the breast, and then combining the results, as sketched in Fig. 3. The rs value of the MwI technique is directly related to the bandwidth (BW) of the transmitted pulse, as expressed in Eq. (2):

rs ≈ ν / (2 · BW)   (2)

Fig. 3 Slant Range concept (Modified from [13])

Taking into account that the microwave velocity inside the human body is at least three times slower than in air (3 × 10^8 m/s) [13], a BW of around 5 GHz is needed in order to achieve close to 1 cm resolution. This means that, for radar-based MwI, a pulse with that BW must be covered across the operational frequency range.
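Eqs. (1) and (2) can be checked numerically; the sketch below (helper names are ours) reproduces the ~1 cm resolution figure under the stated assumption of ν ≈ 1 × 10^8 m/s in tissue:

```python
def scatter_distance_m(tau_s, v_mps):
    """Eq. (1): D = v * tau / 2 (two-way time of flight)."""
    return v_mps * tau_s / 2

def slant_range_resolution_m(bw_hz, v_mps):
    """Eq. (2): rs ~ v / (2 * BW)."""
    return v_mps / (2 * bw_hz)

V_TISSUE = 1e8  # m/s: ~3x slower than free space (3e8 m/s), per the text
print(slant_range_resolution_m(5e9, V_TISSUE))  # 0.01 m, i.e. the 1 cm target
```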

4.2 EM—Penetration Depth

The penetration depth of an EM pulse from a MwI system depends on its operating frequency and TX power as well as on the dielectric properties of the illuminated object. The TX power is limited by thermal effects due to the microwave radiation absorbed in a living organism, which may cause tissue

108 Analysis of Breast Cancer Detection Based on Software-Defined Radio Technology


damage. Concerning the operation frequency, the attenuation rises proportionally to the square of the frequency [16], as described by the Friis equation:

P(f, d) = (Gtx ∗ Grx ∗ λ²) / ((4π)² ∗ d²)  (3)

where P(f, d) is the attenuation as a function of the frequency f and of the distance d travelled by the electromagnetic wave. Note that the attenuation also depends on the antenna gains Gtx and Grx, not discussed at this moment. Now considering the dielectric properties of an object, the EM pulse attenuation can be expressed as [17], where σ is the conductivity and ε the permittivity:

P(σ, ε) = (1.69 ∗ 10³ ∗ σ) / √ε  (4)

Fig. 4 GNU Radio Blocks

From Eqs. 3 and 4, and considering a tumor as deep as 15 cm in the breast, it is possible to say that at least 60 dB of attenuation needs to be taken into account.
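For illustration, Eq. 3 can be evaluated in decibels; a hedged sketch assuming unity antenna gains (the actual values of Gtx and Grx, and the full tissue link budget, are not specified in the text):

```python
import math

C = 3e8  # propagation speed in free space (m/s)

def friis_attenuation_db(freq_hz, distance_m, g_tx=1.0, g_rx=1.0):
    """Free-space attenuation from Eq. 3, returned as a positive dB figure."""
    wavelength = C / freq_hz
    p = (g_tx * g_rx * wavelength ** 2) / ((4 * math.pi) ** 2 * distance_m ** 2)
    return -10.0 * math.log10(p)

# Doubling the distance adds ~6 dB, consistent with the 1/d^2 dependence.
```

This captures only the free-space term; the tissue term of Eq. 4 adds further, frequency- and conductivity-dependent loss on top of it.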

5 Results

5.1 Overview

An MwI system is predictably feasible, as seen in Table 1, which compares the BladeRF specifications against the previously calculated MwI requirements. Moreover, taking the spatial resolution and penetration depth into account, the BladeRF is suitable for generating and collecting the microwave signals: even though it cannot generate a pulse with a BW as high as 5 GHz, this can be achieved through the SFCW waveform implementation.

5.2 GNU Radio—Setup

Firstly, a system performing an RF communication was implemented in the GNU Radio toolkit, the main blocks of which are depicted in Fig. 4.

It is composed of the signal source, providing a single-frequency tone waveform; the Osmocom sink and source, which drive the BladeRF TX and RX respectively; and a GUI interface. The Osmocom sink's first stage modulates the baseband signal onto the RF carrier, while the Osmocom source's first stage extracts the baseband from the RF signal. This first experiment consists of a 100 kHz baseband signal modulated onto an intermediate frequency (IF), which was swept from 1 GHz up to 6 GHz in 25 MHz steps, following the SFCW approach. The TX signal is straightforwardly collected by the RX antenna, with both antennas positioned 15 cm from each other. The IF value corresponds to the Ch0 frequency of the Osmocom blocks in Fig. 4.
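The SFCW sweep can be summarized by its frequency plan; a minimal sketch of the stepped IF values (plain Python for illustration, not GNU Radio API calls):

```python
# SFCW frequency plan used in the experiment: the IF steps from 1 GHz to
# 6 GHz in 25 MHz increments, each step carrying the 100 kHz baseband tone.

def sfcw_frequencies(start_hz=1e9, stop_hz=6e9, step_hz=25e6):
    """IF values visited by the SFCW sweep, endpoints included."""
    n_intervals = int(round((stop_hz - start_hz) / step_hz))
    return [start_hz + i * step_hz for i in range(n_intervals + 1)]

freqs = sfcw_frequencies()
# (6 GHz - 1 GHz) / 25 MHz = 200 intervals, i.e. 201 IF points
```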

5.3 Baseband Signal

Figure 5 shows the complex time domain of the baseband signal at both stages, where the TX signals are solid lines and the RX signals are dotted. These signals use quadrature modulation, that is, they have both a real (in-phase) and an imaginary (quadrature) part. The RX attenuation is related to the system loss, which can be attributed to the antennas and also to the transceiver response.

Table 1 Summary comparing MwI requirements and SDR performance

Performance          MwI required / SDR provided
Bandwidth (GHz)      5.0
Sensitivity (dB)     60
Dynamic range (dB)   70
Gain (dBm)           −20
Noise floor          ≤ −157 dBm/Hz


D. Carvalho et al.

Fig. 5 Baseband signal—time Domain

Fig. 6 SNR of RF signal at: a 1 GHz and b 5 GHz

It is worth mentioning that all the Osmocom gains were set to 0 dB. Moreover, a power amplifier on the TX side and a low-noise amplifier on the RX side could be incorporated into the BladeRF, providing additional flexibility and better performance for the SDR parameters.

5.4 SNR Measured

Figure 6 shows the frequency domain of the received RF signal, with the IF at (a) 1 GHz and (b) 5 GHz. It is possible to notice that the SNR decreases from ≈ 80 dB at 1 GHz to ≈ 63 dB at 5 GHz. Lastly, the SNR over the whole range from 1 to 6 GHz is plotted in Fig. 7, with a variation of ≈ 17 dB. As shown, higher noise is expected at higher frequencies.

6 Conclusion

The investigation and analysis of SDR technology for early breast cancer detection has been presented together with radar-based MwI, an emerging medical imaging technique. The evaluated BladeRF, a portable and low-cost platform, meets the MwI requirements for early breast cancer detection, with a native dynamic range of 74 dB and an SNR of up to 80 dB. The SFCW waveform was proposed to deal with the challenge of the system operational frequency, since it provides flexibility and allows the 1 cm target resolution to be reached with the BladeRF. The next steps include scattering-parameter measurements and tests with a breast phantom, both taking advantage of the BladeRF platform. The results are expected to identify a tumor-like target buried inside the phantom.


Fig. 7 SNR plot related to the IF varying from 1 to 6 GHz

Acknowledgements This work was supported by CAPES, which provided a scholarship through the PROEX program.

Conflict of Interest The authors declare that they have no conflict of interest.

References 1. Lazebnik M, Popovic D, McCartney L et al (2007) A large-scale study of the ultrawideband microwave dielectric properties of normal, benign and malignant breast tissues obtained from cancer surgeries. Phys Med Biol 52:6093–6115 2. Sugitani T, Kubota S, Kuroki S et al (2014) Complex permittivities of breast tumor tissues obtained from cancer surgeries. Appl Phys Lett 104:253702 3. INCA (2019) Estimativa 2020: incidência de câncer no Brasil 4. American Cancer Society at https://www.cancer.org/ Accessed Mar 2020 5. Preece AW, Craddock I, Shere M, Jones L, Winton Helen L (2016) MARIA M4: clinical evaluation of a prototype UWB radar scanner for breast cancer detection. J Med Imaging 6. Angie F, Luc D, Julio C, Gil D, Peter L, Guillaume R (2018) Onsite validation of a microwave breast imaging system, before first patient study. Diagnostics 8 7. Islam MT, Tarikul Islam M, Kibria S, Samsuzzaman M (2019) A low cost and portable microwave imaging system for breast tumor detection using UWB directional antenna array. Sci Rep 9

8. Song H, Sasada S, Kadoya T et al (2017) Detectability of breast tumor by a hand-held impulse-radar detector: performance evaluation and pilot clinical study. Sci Rep 7 9. Jayaseelan M, Bialkowski Konstanty S, Abbosh Amin M (2016) Software-defined radar for medical imaging. IEEE Trans Microw Theory Tech 64 10. Paulsen Keith D, Meaney Paul M, Larry G (2005) Alternative breast imaging: four model-based approaches. Springer, Berlin 11. Elahi MA, O’Loughlin D, Lavoie Benjamin R et al (2018) Evaluation of image reconstruction algorithms for confocal microwave imaging: application to patient data. Sensors 18 12. Nikolova Natalia K (2017) Introduction to microwave imaging. Cambridge University Press, Cambridge 13. Oloumi D, Bevilacqua A, Bassi M (2019) UWB radar for high resolution breast cancer scanning: system, architectures, and challenges. IEEE Int Conf Microw Anten Commun Electron Syst 14. GNU Radio at https://www.gnuradio.org/. Accessed Mar 2020 15. Pereira AW (2014) Performance do PET/CT pré-operatório na predição de resposta patológica após tratamento com quimioterapia neoadjuvante para pacientes com câncer de mama. PhD thesis. Fundação Antônio Prudente 16. Kurup D, Vermeeren G, Tanghe E (2015) In-to-out body antenna-independent path loss model for multilayered tissues and heterogeneous medium. Sensors 17. Hayt WH Jr, Buck John A (2006) Engineering electromagnetics, 7th edn. McGraw Hill, New York

Prototype of a Peristaltic Pump for Applications in Biological Phantoms I. Sánchez-Domínguez, I. E. Pérez-Ruiz, J. Chan Pérez, and E. Perez-Rueda

Abstract

In this work we present the design, construction and characterization of a peristaltic pump for biological applications. The design considers blood flow, with the future possibility of simulating cardiac pathologies. The system is based on a peristaltic pump (which emulates how the heart works), Teflon hoses and a blood mimic [2] with rheological properties similar to real blood. The phantom is able to simulate blood flow and to show variations in the flow according to the selected mode. The blood flow was validated with commercial Doppler equipment. This is the first version of the prototype, and after the experimental evaluations it can be modified and adjusted for a correct functioning of the phantom.

Keywords





Biological phantom • Peristaltic pump system • Blood mimic • Cardiovascular

1 Introduction

The study of the cardiovascular system and blood flow is fundamental in medicine, since cardiac pathologies are one of the main causes of death in the world. These pathologies can have different origins: for example, atherosclerosis is a disease characterized by the accumulation of lipids and fibrous elements in the large arteries, while high glucose in diabetes can induce vascular endothelial cell dysfunction and affect blood viscosity and arterial wall tension [3]. This makes of central interest the simulation of different

I. Sánchez-Domínguez, I. E. Pérez-Ruiz, J. Chan Pérez, E. Perez-Rueda: UNAM, Unidad Academica del IIMAS en el Estado de Yucatan, Parque Científico, Merida, Yucatán, Mexico. e-mail: [email protected]

pathologies for their understanding. In general, the normal functioning of biological systems is simulated first, and the associated pathologies are subsequently incorporated, so that the simulation is as reliable as possible and less invasive, allowing the heart rates of different laboratory animals to be reproduced. This premise motivates the creation of a flow emulator, or blood phantom, which can generate not only a flow similar to that of a human being, but also that of a mouse and an opossum, since these are typical models in research. This phantom is based on a controlled peristaltic pump, a mimicking fluid with rheological characteristics similar to blood, and Teflon hoses, and the type of flow can be controlled according to the interest of the study. A biological phantom can therefore be defined as a device that attempts to emulate the functional characteristics of biological tissues [4].

2 Methodology and Development

As mentioned previously, the heart is the center of the circulatory system, since it provides the oxygen and nutrients necessary to the organs and tissues. Therefore, we propose the design of a peristaltic pump capable of providing a flow with a regime very similar to that of blood; in addition, it also provides a distensibility effect on the blood vessels. The pump is multi-programmable, i.e., it contains configurations for three different species (human, mouse and opossum). It can be programmed with the average heart rate values of each species, and it represents a versatile tool designed for laboratory work in biomedical applications. The pump has two modes of operation: a preset mode, which contains the configurations of the species mentioned above, and a manual mode, in which a user looking to simulate the heart rate of some other species can enter the required value. The peristaltic pump is made up of an electromechanical system which, by means of a control by electronic

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_109


components, regulates the rotation speed of the rotor, which displaces the blood through the system. The pump is therefore composed of a control system, through which the user can interact and program the heart rate (manual or preset), and a mechanical system responsible for the rotation. Rotation occurs thanks to four signals sent from four different microcontroller ports to the motor. These signals are shifted in time by D from each other, which sets the rotation speed of the motor. The signals that the microcontroller sends to the motor are only for control, since they are low current (500 mA) while the motor operates at a higher current (3 A). A power stage was therefore implemented to condition the signals and deliver a higher current to the motor so that it works. In general, for the phantom design, all the elements that emulate the blood flow were analyzed, using the formula for the mimic [5]. The electronic part was important, and finally the software stage performs the control and displays the information.
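The four time-shifted phase signals described above can be sketched as a full-step drive sequence. This is an illustrative reconstruction, not the authors' firmware: the one-hot phase patterns, the `steps_per_beat` figure and the function names are all assumptions.

```python
import itertools

# One-hot full-step sequence: exactly one of the four microcontroller
# outputs is high at a time; consecutive patterns are separated by the
# delay D, which sets the motor speed and hence the emulated heart rate.
PHASES = [
    (1, 0, 0, 0),
    (0, 1, 0, 0),
    (0, 0, 1, 0),
    (0, 0, 0, 1),
]

def step_delay_s(bpm, steps_per_beat=200):
    """Delay D between phase updates for a target heart rate.
    steps_per_beat is an illustrative figure, not taken from the paper."""
    return 60.0 / (bpm * steps_per_beat)

def phase_stream(n):
    """First n phase patterns of the repeating sequence."""
    return list(itertools.islice(itertools.cycle(PHASES), n))
```

Shrinking D speeds the rotor up; in the real pump the power stage then amplifies these low-current control signals to drive the 3 A motor.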

2.1 Design of the User Interface

The user interface consists of a 64 × 128 dot LCD graphical display, on which it is possible to show important control data, such as the manual heart rate or the preset that is currently activated. The user can select a preset heartbeat setting or manually enter the desired heartbeat value in a range of 0 to 250 bpm. An Android application was designed and programmed on a mobile platform to configure the pump through a Bluetooth connection [1].

2.2 Design of the Control Stage

The digital stage of the pump takes the entered data and converts it into instructions for the mechanical stage of the pump; it is also in charge of controlling the peripherals of the pump, such as the LCD screen, the LED module, the Bluetooth module and the cooling fan.

I. Sánchez-Domínguez et al.

Fig. 1 Schematic of a peristaltic pump

2.3 Design of the Mechanical Stage

The purpose of this section is to provide an element that rotates under the action of a motor and drives the flow of the blood emulator along the phantom (Fig. 1). This part is integrated by a system of two discs and one axis, with three rotating rollers which are directly responsible (by crushing the hoses) for making the fluid circulate through the system. The shaft was coupled to the motor. The volumes displaced per minute for each species and the diameter of the hoses used to simulate the blood vessels of the different species were also considered. The final design comprises the two discs and shaft holding the three rotating rollers, which are responsible (through their movement) for the crushing of the hoses, and a C-shaped structure, which provides a surface against which the hoses can be crushed by the rollers (Fig. 2).

2.4 Blood Mimic

Another essential component of the phantom is the blood mimic. The main goal is to simulate the rheological characteristics of blood, i.e. the red blood cells. To this end, the following materials and substances were selected, which, due to their liquid nature, provide the viscosity and flow characteristics necessary for this phantom: distilled water, glycerin, medical-grade liquid soap without foam and liquid sweetener. It should be clarified that dye is applied to simulate the red color of the blood. The concentrations for the preparation of the blood mimic are shown in Table 1 [5].

3 Results

The results presented show the operation of each part of the device, demonstrating that a functional and repeatable phantom was achieved and that the tests carried out validate it. A mobile application for Android, designed with the App Inventor 2 program, allows connection from compatible mobile devices, such as cell phones or tablets, so that the user can control the peristaltic pump remotely, since it is connected via Bluetooth®. In Fig. 3 the screens generated by the application are presented.

Prototype of a Peristaltic Pump for Applications …


Fig. 2 Final design of the peristaltic pump, scheme in explosion

Table 1 Formula for making the mimic blood emulator according to Samavat

Material                   Quantity (grams)
Glycerin                   100.6
Distilled water            828.8
Liquid soap without foam   9.6
Natural sweetener          33.6
PVC powder                 5.0
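From the quantities in Table 1, the composition of the mimic can be expressed as mass fractions; a small sketch:

```python
# Quantities from Table 1 (grams), per Samavat's blood-mimic recipe.
RECIPE_G = {
    "glycerin": 100.6,
    "distilled water": 828.8,
    "liquid soap without foam": 9.6,
    "natural sweetener": 33.6,
    "PVC powder": 5.0,
}

total_g = sum(RECIPE_G.values())  # 977.6 g per batch
fractions = {name: g / total_g for name, g in RECIPE_G.items()}
# distilled water dominates, at roughly 85 % of the total mass
```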

Fig. 3 a Main screen of the application. b Possum mode screen

The electronic part is displayed in Fig. 4: the circuit board, with some peripherals such as the display, and the pump control part are presented.

Figure 5 shows the phantom already assembled, with two beakers containing the blood mimic and the hoses of the peristaltic pump that emulate the arteries.


As mentioned previously, because the mimic was dyed red, it was necessary to generate all the possible contrasts: to obtain a photograph clear enough to determine the flow velocity profile, the particles needed to contrast sufficiently with the background of the image. The images were acquired with a Canon Rebel XS camera. Figure 6 displays the images of the hose taken with different camera filters, demonstrating the presence of suspended PVC particles in the mimic, which allows the flow velocity to be calculated.

Fig. 4 Printed circuit board with some peripherals

3.1 Doppler Tests

Fig. 5 Phantom built, and ready for tested in situ

Fig. 6 a Photography in landscape mode. b Photography in portrait mode

Finally, a commercial Sonomedics® Doppler ultrasound was used to evaluate the device. This last test consisted of verifying whether the Doppler ultrasound was able to detect the flow produced by the pump and to determine a heart rate similar or equal to that programmed in the peristaltic pump, as shown in Fig. 7 [6]. Success in this test would determine whether the pump is capable of performing the peristalsis movement effectively and of emulating the heartbeat, the latter being detectable by commercial measuring instruments. The results of the measurements are described in Table 2.


Fig. 7 Measurements with the US transducer with emulation of a Mouse, b Opossum

Table 2 Emulated frequency measured with the transducer

Setting                          Theoretical value   Actual value measured with Doppler
Human                            80                  78–81
Opossum                          150                 147–153
Mouse                            200                 195–199
Manually programmed frequency    90                  88–90

4 Conclusions

The prototype of a biological phantom was developed, constructed and characterized around a multi-programmable peristaltic pump capable of emulating the heartbeat of three different species: human, mouse and opossum. It also offers a manual mode programmable between 30 and 250 bpm. The pump emulates the peristalsis that normally occurs in the blood vessels and causes the blood flow. The peristaltic pump has an interface that makes the interaction with the user as comfortable and practical as possible. The beats generated by the pump could be detected by a commercial Doppler device; therefore the validity of the phantom could be verified. In addition, it is of particular interest to continue testing with phantoms that better approximate biological tissues, so certain designs and materials appropriate for this purpose are currently being considered: the use of agar-agar to better emulate the tissue or organs, the use of hoses with thinner walls for a better compliance effect, and particles in suspension in the mimic. In the power part it is necessary to use motors with higher voltage (V), to improve the control and operation of the phantom.

Conflict of Interest The authors declare that they have no conflict of interest.

References 1. Bluetooth. https://developer.android.com/guide/topics/connectivity/bluetooth?hl=es-419 2. Martins F, von Krüger M, Albuquerque W (2014) Continuous flow phantom for the calibration of an ultrasonic transit-time flowmeter. Brazilian J Biomed Eng 3–10 3. Rask-Madsen C, King GL (2013) Vascular complications of diabetes: mechanisms of injury and protective factors. Cell Metab 17(1):20–33 4. Rivera M, Reiszel F, Pereira W, Machado JC (2002) Phantoms para ultrasonido con variación continua de la velocidad de propagación de la onda. Revista Mexicana de Ingeniería Biomédica, 5–10 5. Samavat H, Evans JA (2006) An ideal blood mimicking fluid for Doppler ultrasound phantoms. J Med Phys 257–278 6. Zhou X, Kenwright D, Wang S, Hossak J, Hoskins P (2017) Fabrication of two flow phantoms for Doppler ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 53–65

Performance Evaluation of an OOK-Based Visible Light Communication System for Transmission of Patient Monitoring Data K. vd Zwaag, R. Lazaro, M. Marinho, G. Acioli, A. Santos, L. Rimolo, W. Costa, F. Santos, T. Bastos, M. Segatto, H. Rocha and J. Silva

Abstract

The absence of interference with radio waves, due to the use of the unlicensed light spectrum provided by light-emitting diode (LED) sources, is one of the main advantages of visible light communication (VLC) systems in health classified areas. The robustness of a stable and reliable VLC system, for application in intensive care medical environments, is experimentally demonstrated in this paper. A proof-of-concept experimental setup is presented and the results are described to confirm the performance of an on-off keying (OOK) scheme based on Manchester coding. The experimental results show that the transmission of real data, provided by patient monitoring equipment, over a VLC link of 1.5 m was successfully achieved over a wide range of LED bias currents, between 10 and ≈ 700 mA.

Keywords

Patient monitoring • Visible light communications (VLC) • On-off keying (OOK) • Manchester coding

1 Introduction

The limited availability of radio spectrum has led to an increasing contention between numerous systems, especially in license-free bands [1]. Wireless-fidelity (Wi-Fi) networks are currently experiencing data rate collapses and latency explosions because of the traffic load and, in consequence,

K. vd Zwaag · R. Lazaro · M. Marinho · G. Acioli · A. Santos · L. Rimolo · W. Costa · M. Segatto · H. Rocha · J. Silva: LabTel, Federal University of Espirito Santo (UFES), Avenida Fernando Ferrari, 514, Vitória, Brazil. e-mail: [email protected] F. Santos: Federal University of Espirito Santo (UFES), São Mateus, Brazil

interfering traffic. The use of light as a carrier avoids these complications due to the several terahertz available in the unlicensed spectrum, and also due to the fact that interference from other cells is locally confined and can easily be blocked by physical separators or beam adjustment [1,2]. Optical wireless communication (OWC) is a wireless communication technology that uses the unlicensed light spectrum with light sources providing enough modulation bandwidth for high data rates. Therefore, in the specific light-fidelity (LiFi) application, each luminaire, employed as light source, can be used as a wireless access point [3]. Visible light communications (VLC) is an optical wireless communication system that transports information through the modulation of a visible light source [4]. According to Rajagopal et al. [5], the visible light spectrum used for communication signals can be determined between 400 THz (780 nm) and 800 THz (380 nm) using light-emitting diodes (LEDs), which are generally applied for illumination. Due to the growth of high power LEDs, interest in VLC has rapidly increased to meet the green technology paradigm. The security in data communication, energy saving, high bandwidth and the possibility of using existing lighting infrastructures are distinct motivations using light to carry information [4]. New technologies have been developed to fulfill the demand of the exponential increase of devices requiring wireless connectivity. Due to limitations of RF capacity, the unlicensed spectrum used by VLC emerges as a complementary technology to this demand. Through its numerous advantages, VLC is subject to research in classified areas like health [6–8]. According to Khan [9], VLC will not interfere or be affected by the radio waves of other machines, which minimizes risks to human health. 
The already many installed LEDs, and the expectation that almost all lighting devices will be LED-based, make VLC a feasible option for optical wireless communication in healthcare environments. Monitoring of vital parameters such as temperature, pulse, blood pressure, respiration and electrocardiogram, among others, in hospital intensive care (IC) units is extremely important for rapid and effective interventions, aimed at returning patients to baseline conditions [10,11]. Thus, VLC can provide access to vital parameters for a doctor and/or nurse on personal devices such as smartphones and/or tablets [12,13]. Visible light communication is often investigated for wearable patient monitoring devices. In some cases, a VLC downlink with an infrared (IR) uplink is proposed, as in [14], or an entirely IR mobility scheme is used, as described in [15]. The downside of these schemes is the possible interference with some medical equipment like surgical robots, where IR optical tracking systems are used [16]. The respective authors also performed experiments with VLC in a hospital surgery room, using different Li-Fi schemes with multi-user links to create a robust transmission. To avoid the aforementioned IR and electromagnetic interference, the authors of [17] deployed VLC for patient monitoring systems. They achieved a 30 cm optical link to transmit electrocardiogram signals, using on-off keying (OOK) codification. In this work, we experimentally evaluate the robustness of a stable and reliable VLC transmission system for application in intensive care medical environments. A proof-of-concept experimental demonstration is presented and the results are described to confirm the performance of the Manchester-based OOK codification scheme. The experimental results show that the transmission of real data, provided by multi-parametric equipment, over a VLC link of 1.5 m was successfully achieved in a wide range of light intensities.

T. Bastos: LRTA, Federal University of Espirito Santo (UFES), Vitória, Brazil © Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_110

2 Materials and Methods

2.1 Experimental Setup to Prove Concepts

Figure 1 depicts a block diagram of the setup prepared to experimentally demonstrate the robustness of the proposed VLC system. The OOK signals, generated using the software Matlab, were loaded into a 250 MSamples/s arbitrary function generator (AFG), utilized as digital-to-analog converter (DAC). The analog signals available at the AFG output were amplified and superimposed onto a LED bias current, aiming to provide non-negative amplitudes. The output of the bias-tee was directly supplied to a commercial white LED. After propagation through the optical wireless channel, supported by bi-convex optical lenses, the signals were detected by a purposely manufactured photoreceiver, before analog-to-digital conversion (ADC) by a 2.5 GSamples/s mixed domain oscilloscope. The system performance was measured offline with the eye opening penalty metric.

Fig. 1 Block-diagram of the implemented VLC system. EA electrical amplifier, DC direct current, PD photodiode

2.2 System Frequency Response

Normally, the frequency response of a white LED is modeled as a low-pass filter with a cutoff frequency limited to a few MHz [18]. Because this limitation is clearly perceived in a frequency response measurement, the apparatus depicted in Fig. 2a was prepared. An Anritsu MS2038C network analyzer (NA) was utilized to obtain the frequency response of the VLC system. It should be stressed that the parameter S21 was captured, which represents the amplitude characteristic of the VLC transfer function, respecting the light intensity that can saturate the photodiode. The low-pass characteristic, clearly shown in Fig. 2b, demonstrates that the LED is the main system limitation in terms of bandwidth. Nevertheless, it should be noticed that this is not the major issue in the application addressed in this work. It can be seen in Fig. 2b that the 3 dB bandwidth of the VLC system is limited to 2.4 MHz. However, if pre-emphasis and/or post-equalization techniques are implemented in the system, at the cost of increased system complexity, a bandwidth of around 8 or even 10 MHz can be achieved.

2.3 A Brief Theory About OOK Codification

On-off keying is a type of line coding in which the generated information bits are substituted by pulses [19]. Commonly, the information bit 1 is mapped, during the signaling interval, to a positive pulse, whereas the bit 0 is replaced by the absence of a pulse in a unipolar signal formulation, or by a negative pulse if bipolar signal streams are required. The simplicity of this mapping process makes it attractive for low-cost applications. Several formation laws can be used during the design of the pulses. Manchester coding is one of the most employed due to the facilities it provides for clock recovery, while it avoids long sequences of both ones and zeros [19]. The rule that stipulates a negative pulse transition in the middle of the duration of bit 1, and the opposite for bit 0, was adopted in this work.
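The Manchester mapping described in this section can be sketched in a few lines; a minimal illustration (the ±1 sample levels are an assumption, and 4 samples per half-bit mirrors the 8-samples-per-symbol setting used in the experiments):

```python
def manchester_encode(bits, samples_per_half=4):
    """Map bits to ±1 samples: bit 1 -> high-to-low mid-bit transition,
    bit 0 -> low-to-high, following the rule adopted in this work."""
    out = []
    for b in bits:
        first, second = (1, -1) if b == 1 else (-1, 1)
        out.extend([first] * samples_per_half)
        out.extend([second] * samples_per_half)
    return out

# e.g. manchester_encode([1, 0], samples_per_half=1) -> [1, -1, -1, 1]
```

Because every bit period contains one transition, the encoded stream is DC-balanced and never produces long runs of a single level, which is what eases clock recovery at the receiver.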


110 Performance Evaluation of an OOK-Based Visible Light . . .


Fig. 3 I–V curves provided by the datasheet and by the measurements

Fig. 2 a Experimental configuration used to characterize the VLC system. b The measured system frequency response

3 Experimental Results

Prior to the performance evaluation, the equipment and devices were calibrated and configured for proper use and stabilization. The performance was measured in terms of the eye opening penalty (EOP), calculated offline as

EOP = 10 × log10 (EOref / EOrec)  [dB],  (1)

where EOref is the difference between the positive and negative levels of a reference signal, and EOrec is the eye opening amplitude of the received signal [20]. These parameters are obtained from eye-diagrams built by superposition of several received signals over a predefined number of symbol durations.

3.1 Characterization of the Employed White LED

To survey the current–voltage (I–V) curve of the LED, the DC voltage of a voltage source was manually adjusted so as not to exceed 3.2 V, a value that, according to the datasheet, can damage the LED [21]. The voltage was then varied and the current observed in the source display and in a Fluke 87-V multimeter. Figure 3 shows a comparison between the curve provided by the datasheet and the measured curve. It can be observed from Fig. 3 that the measurement obtained on the LED terminals is close to the curve provided in the datasheet; the slight differences are due to the different measurement conditions, which provide different levels of electronic noise.

3.2 Transmitted and Received OOK Signals

The preliminary experimental results shown in Fig. 4a illustrate the Manchester line coding generated at a frequency of 2 MHz, with an upsampling of 8 samples per symbol. The sequence shown in Fig. 4b, measured in a back-to-back (B2B) configuration of the oscilloscope, is almost identical to the signals generated in Matlab and depicted in Fig. 4a, confirming the transmitted signal quality. The smoothing observed in the pulses at the VLC receiver, depicted in Fig. 4c, is due to the low-pass characteristic of the system. It can be seen from Fig. 4 that one symbol duration is 500 ns and that long sequences of zeros and ones are avoided through the Manchester coding. This is important because it facilitates data synchronization at the receiver.

3.3 Performance Analysis

Fig. 4 a Part of the generated 2 MHz Manchester Matlab signals. b B2B measurement of the oscilloscope. c Pulses detected at the VLC receiver

Fig. 5 Measured EOP as a function of the current at 2 MHz and 1.5 m distance (insets: limited by noise, optimum region, limited by nonlinearity)

Figure 5 shows the EOP as a function of the LED bias (polarization) current, measured at a distance of 1.5 m and a frequency of 2 MHz. Taking an EOP of 15 dB as a reference penalty, it is possible to observe from Fig. 5 that a good performance is reached over a wide range of currents, between 10 and ≈ 700 mA. Nevertheless, as expected, three performance regions can be established in the measured curve. In the first, comprising the bias currents between 0 and 100 mA, the system performance is limited by electronic and signal clipping noise [22]. In this region the intensity of the signal is reduced, as well as the signal-to-noise ratio, thereby increasing the EOP. This is substantiated by the first (left) eye diagram inset in Fig. 5. An optimum performance region is verified for the currents adjusted from 100 to 500 mA; the wide open eye-diagram shown in the middle inset of Fig. 5 corroborates this definition. EOP values from 1.4 dB up to 30.39 dB, obtained at 500 and 800 mA respectively, mark the area where the performance is affected by the system nonlinearities, which are raised mainly by the LED saturation [23]. The eye-diagram reproduced in the nonlinear region illustrates the impossibility of recovering the bits.

110 Performance Evaluation of an OOK-Based Visible Light . . .

The pictures shown in Fig. 7 confirm the robustness of the evaluated VLC system. The measured data depicted in Fig. 7a, consisting of the heart rate, oxygen saturation, pulse rate, respiration rate, temperature and non-intrusive blood pressure measurements, was successfully recovered after propagation over the 1.5 m VLC link, as illustrated in Fig. 7b.

Fig. 6 Photo of the experimental VLC system with multi-parametric monitor (labeled components: MDO/AFG, EA 1, bias-tee, LED, Lens 1, Lens 2, PD+EA 1). EA electrical amplifier, PD photodiode

4 Conclusions

The transmission of data generated by multi-parametric sensors via a visible light communication system, with on-off keying line codification, was successfully demonstrated in this work. A stable and reliable experimental setup was intentionally prepared to confirm the ability of light-emitting diodes (LEDs) to also serve as information access points in classified areas such as intensive care medical environments. The experimental results show that the transmission of real data, provided by a patient monitoring equipment and subsequently codified in Manchester pulses, over a VLC link of 1.5 m was successfully accomplished over a wide range (from 10 to ≈700 mA) of the white LED bias current. Evaluations at higher bit rates, as well as over longer links, are part of our future work.

3.4 Transmitting the Data of a Multi-parametric Monitor

Figure 6 shows a picture of the experimental setup used to transmit patient data measured by the sensors of a multi-parametric monitor. Apart from this important piece of health equipment, it reflects the diagram shown in Fig. 1.

Fig. 7 Generated and received vital parameters. a Data generated in the multi-parametric monitor PRO12. b Corresponding parameters received in Matlab

Acknowledgements The authors acknowledge the support from the FAPES 80599230/17, 538/2018, 84343338, 601/2018, and CNPq 307757/2016-1, 304564/2016-8, 309823/2018-8 research projects.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Pathak P, Feng X, Hu P, Mohapatra P (2015) Visible light communication, networking, and sensing: a survey, potential and challenges. IEEE Commun Surv Tutor 17:2047–2077
2. Hussain B, Li X, Che F, Patrick C, Wu L (2015) Visible light communication system design and link budget analysis. J Lightw Technol 33:5201–5209
3. Schulz D, Jungnickel V, Alexakis C et al (2016) Robust optical wireless link for the backhaul and fronthaul of small radio cells. J Lightw Technol 34:1523–1532
4. Arnon S (2015) Visible light communication. Cambridge University Press, Cambridge
5. Rajagopal S, Roberts R, Lim S (2012) IEEE 802.15.7 visible light communication: modulation schemes and dimming support. IEEE Commun Mag 50
6. Berenguer P, Schulz D, Hilt J et al (2017) Optical wireless MIMO experiments in an industrial environment. IEEE J Select Areas Commun 36:185–193
7. Mahmood Z (2019) The internet of things in the industrial sector. Springer, Berlin
8. Retamal J, Oubei H, Janjua B et al (2015) 4-Gbit/s visible light communication link based on 16-QAM OFDM transmission over remote phosphor-film converted white light by using blue laser diode. Opt Exp 23:33656–33666
9. Khan L (2017) Visible light communication: applications, architecture, standardization and research challenges. Digit Commun Netw 3:78–88
10. Dhatchayeny D, Sewaiwar A, Tiwari SV, Chung Y (2015) EEG biomedical signal transmission using visible light communication. In: 2015 International Conference on Industrial Instrumentation and Control (ICIC), pp 243–246
11. Cheong Y, Ng X, Chung W (2013) Hazardless biomedical sensing data transmission using VLC. IEEE Sens J 13:3347–3348
12. Rachim V, An J, Quan P, Chung W (2017) A novel smartphone camera-LED communication for clinical signal transmission in mHealth-rehabilitation system. In: 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp 3437–3440
13. Tan Y, Chung W (2014) Mobile health-monitoring system through visible light communication. Bio-med Mater Eng 24:3529–3538
14. Adiono T, Armansyah R, Nolika S, Ikram F, Putra R, Salman A (2016) Visible light communication system for wearable patient monitoring device. In: 2016 IEEE Region 10 Conference (TENCON), pp 1969–1972
15. Torkestani S, Sahuguede S, Julien-Vergonjanne A, Cances J (2012) Indoor optical wireless system dedicated to healthcare application in a hospital. IET Commun 6:541–547
16. Mana S, Hellwig P, Hilt J et al (2020) LiFi experiments in a hospital. In: Optical Fiber Communication Conference, paper M3I.2. Optical Society of America
17. Mayuri M, Vijayalakshmi B, Sindhubala K (2015) Biomedical data transmission using visible light communication. Int J Appl Eng Res 10
18. Silva F, Martins W (2017) A computational platform for visible light communications. In: XXXV Simpósio Brasileiro de Telecomunicações e Processamento de Sinais (SBrT 2017), pp 891–895
19. Proakis J, Salehi M (2001) Digital communications, 4th edn. McGraw-Hill, New York
20. Binh LN (2014) Optical fiber communication systems with Matlab and Simulink models. CRC Press, Boca Raton
21. LumiLeds. LUXEON Rebel ES datasheet. https://www.lumileds.com/uploads/17/DS61-pdf. Accessed 4 Apr 2020
22. Zwaag K, Neves J, Rocha H, Segatto M, Silva J (2019) Adaptation to the LEDs flicker requirement in visible light communication systems through CE-OFDM signals. Opt Commun 441:14–20
23. Zwaag K, Neves J, Rocha H, Segatto M, Silva J (2018) Increasing VLC nonlinearity tolerance by CE-OFDM. In: Latin America Optics and Photonics Conference, paper W3D.3. Optical Society of America

Equipment for the Detection of Acid Residues in Hemodialysis Line Angélica Aparecida Braga, F. B. Vilela, Elisa Rennó Carneiro Déster, and F. E. C. Costa

Abstract

Hemodialysis is a procedure that performs the blood-filtering function of a diseased kidney. During the hemodialysis process, the dialyzer responsible for filtering the blood is reused after disinfecting its membrane, a process that might leave residues that may be harmful to health. Disinfection of hemodialysis machines is performed with an acidic solution. This paper focuses on the development of a device that can detect harmful and unwanted substances present in the patient circuit. The equipment proposed in this research performs an automated dialyzer and patient-line acid detection process that increases the safety of the healthcare professional and prevents the discomfort or damage that peracetic acid residues may cause to the patient.

Keywords

Hemodialysis · Renal insufficiency · Spectrophotometry

1 Introduction

Hemodialysis is a procedure that seeks to remove excess fluids and substances accumulated in the blood when the kidneys do not perform their function properly. The procedure removes substances such as urea, potassium, sodium and water from the blood of the patient [1]. The blood is driven to the hemodialysis machine by a pump, being collected from a biocompatible tube implanted within a vessel of the patient, called a fistula, as shown in Fig. 1 [2]. In the dialyzer there is a membrane that allows the exchange of solutes and liquids. This membrane is composed of a bundle of hollow fibers: the blood circulates inside the fibers and the dialysate circulates outside, in the direction opposite to the blood flow [1]. The dialysate is a mixture of electrolytes, bicarbonate and glucose dissolved in pure water. Due to the high cost of hemodialysis, the reuse of dialyzers is a practice adopted worldwide, and it must be performed correctly; otherwise, patients might suffer different kinds of discomfort, such as nausea, pressure drops, headaches and even infections [2]. After each procedure, the patient circuit is washed to remove blood residues from the line. After washing, the patient-line circuit and the dialyzer are stored in containers filled with a solution composed of peracetic acid (C2H4O3) and, depending on the solution, other chemicals, which vary from hospital to hospital. The removal of peracetic acid from the circuit and from the dialyzer is performed by injecting a saline solution into the patient's line and into the dialyzer to remove the chemical waste. Confirmation of the removal is commonly carried out visually by the professional responsible for disinfection. In this procedure, a colorless reagent is used that, when in contact with peracetic acid, shows a slightly yellowish color change. As this procedure depends entirely on the responsibility of this person, a small color change might not be noticed, causing acid residues to come into contact with the biological system of the patient [3].

A. A. Braga (&) · F. B. Vilela · E. R. C. Déster · F. E. C. Costa
Department of Biomedical Engineering, Instituto Nacional de Telecomunicações—Inatel, Santa Rita Do Sapucaí, Brazil
e-mail: [email protected]
F. B. Vilela e-mail: fi[email protected]
E. R. C. Déster e-mail: [email protected]
As provided for in the Resolution of the Collegiate Board (RDC) no 11 of March 13, 2014 of the National Health Surveillance Agency (ANVISA) in Brazil, it is mandatory, at the end of each session, to clean and disinfect the machine and the surfaces that come into contact with the user [4].

F. E. C. Costa e-mail: [email protected] © Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_111


Fig. 1 Outline of a hemodialysis procedure [3]

Thus, the present study focuses on evaluating the requirements for the development of a new method for detecting any change in the concentration of peracetic acid in the saline solution used in the patient circuit of hemodialysis. This can enable the development of a device that minimizes the possible discomfort generated by the contact of peracetic acid with the biological system of the patient.

2 Materials and Methods

The research was divided into four main stages: study of spectrophotometry; study of an electrical system applicable to the purpose; development of a schematic for each study; and comparison of methods and analysis of responses.

2.1 Spectrophotometry Study

The initial research proposal was to develop a device that would detect the presence of acid by collecting a sample from the patient-line circuit and measuring the conductivity of the solution. However, the specialists in hemodialysis at the Samuel Libânio Hospital, Pouso Alegre, MG, informed us that this would not be of interest to them, as they want a device that offers an immediate response. Another reason is that not all hospitals use a pure peracetic acid solution. At the Samuel Libânio Hospital, for example, as in most hospitals, the healthcare team uses a solution composed of different concentrations of peracetic acid (C2H4O3), acetic acid (CH3COOH) and hydrogen peroxide (H2O2). For this reason, the methodology migrated to spectrophotometry, a technique that uses an optical signal to measure how much a chemical substance absorbs light, through the interaction of light with the solution. This method required the use of a light source and a sensor to detect the absorption of light by the substances present in the solution [5]. Moreover, it was necessary to carry out a study of the absorption wavelength of each substance present in the solution used to disinfect the patient circuit and the dialyzer. As mentioned above, each hospital uses a different solution with different concentrations for disinfection; to carry out the tests, the solution used by the Samuel Libânio Hospital was applied. Therefore, it was possible to collect data that could bring a satisfactory result for the hospital and for research purposes. In order to use this methodology, it was necessary to verify the light absorption wavelengths of each component present in the solution. Because each component absorbs at a specific wavelength, it is possible to check the presence or absence of each one that causes discomfort to the patient [6]. The wavelengths studied are 180, 205 and 280 nm, for peracetic acid, acetic acid and hydrogen peroxide, respectively [5].
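The detection logic behind this approach rests on the Beer–Lambert relation, A = log10(I0/I): each component absorbs at its own wavelength, so absorbance above a baseline at 180, 205 or 280 nm indicates its presence. A minimal Python sketch of that decision rule; the intensity values and the 0.05 threshold are hypothetical illustrations, not calibrated values from this study:

```python
import math

def absorbance(intensity_ref, intensity_sample):
    """A = log10(I0 / I): light absorbed by the solution at one wavelength."""
    return math.log10(intensity_ref / intensity_sample)

def acid_present(readings, threshold=0.05):
    """Flag residues if absorbance at any monitored wavelength exceeds
    a threshold.

    `readings` maps wavelength (nm) -> (I0, I), where I0 is the intensity
    through pure saline and I through the rinse sample. The threshold is
    an illustrative placeholder to be set by calibration.
    """
    return any(absorbance(i0, i) > threshold for i0, i in readings.values())

# Hypothetical sensor values at the three wavelengths of interest
readings = {180: (1.00, 0.80), 205: (1.00, 0.97), 280: (1.00, 0.99)}
print(acid_present(readings))  # True: strong absorption at 180 nm
```

In a real prototype the three pairs would come from the photodetector, with I0 recorded once against pure saline.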

2.2 Study of an Electrical System

Furthermore, the possibility of using a pH sensor available in the chemistry laboratory of Inatel was analyzed. For the detection of acid in the physiological solution, a very simple method, easy to use for the person responsible for disinfection, was adopted: verifying the presence of acid through the pH of the solution, because the higher the acid concentration, the lower the pH of the solution. For this, several tests were carried out with different levels of acid in the saline solution, to check the resolution and sensitivity of the sensor. The sensor used was the pH meter AK90. This model has a feature that automatically compensates for temperature changes; since a temperature variation in the disinfection solution can change its chemical properties, it could interfere with the detection result.
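The pH-based criterion reduces to a threshold comparison against the pure-saline baseline. A sketch in Python; the baseline pH and tolerance below are illustrative placeholders, not values calibrated in this study:

```python
def acid_residue_detected(measured_ph, saline_ph=7.0, tolerance=0.3):
    """Flag residual acid when the rinse solution is measurably more
    acidic than the pure saline baseline.

    saline_ph and tolerance are hypothetical: in practice the baseline
    would be measured with the AK90 meter after buffer calibration, and
    the tolerance chosen from the sensor's resolution tests.
    """
    return measured_ph < saline_ph - tolerance

print(acid_residue_detected(5.2))  # True: clearly acidic rinse
print(acid_residue_detected(6.9))  # False: within tolerance of saline
```

The tolerance guards against flagging normal meter noise as acid, which is why the sensor's resolution was tested at acid volumes down to 1 µL.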

2.3 Development of a Schematic for Each Study

For spectrophotometry, a prototype was developed with a light source, capable of encompassing the wavelengths of all


components present in the solution; a sensor, capable of detecting the absorption of light by the components; and a display that presents a message when acid is still present in the patient circuit. To perform the tests, it was necessary to have an environment without any light interference, so that there would be no interference in the sensor reading. The tests were carried out with a pure saline solution, with saline containing a small amount, around 1 µL, of the acid solution, and with saline containing a larger amount, around 50 µL, of the acid solution. In these tests, the different voltage levels produced by each solution were analyzed. For the electrical system, tests were carried out only with different concentrations of acid and saline using the pH meter. It is important to emphasize that before each measurement the sensor was neutralized with a buffer solution, so that one solution would not interfere with the reading of the other. All readings were recorded, and from them a graph was generated, which is found in a posterior section.

A system was designed to detect any level of acid concentration, since it was confirmed that there is no minimum acceptable concentration: however small the acid concentration that enters the patient's body, it will already cause the patient to experience discomfort during the procedure. During further research on the functioning of the hemodialysis machine and the disinfection procedure, it was concluded that the acid solution is removed through the patient circuit, which has a specific coupling for removing the solution. With this, it would be possible to develop a device coupled at the end of the circuit, making it possible to capture the solution that leaves the circuit. Tests were performed with different concentrations of the peracetic acid solution, which was supplied by the Samuel Libânio Hospital. The tests with the peracetic acid solution were carried out in the following steps: calibration of the pH meter in a buffer solution; measurement of the pH of the pure saline solution; measurement of the pH of the pure peracetic acid solution; placement of 20 mL of saline in beakers; and addition of an acid concentration to each beaker, initially 50 µL, gradually reduced until it reached 1 µL, as shown in Figs. 2 and 3. It was analyzed that the flow and pressure of the patient circuit depend on the machine [5]. Thus, to have a general idea, information on two machines was consulted, making it possible to compare whether there is any difference in

2.4 Comparison of Methods and Analysis of Responses

After the tests were performed, it was necessary to study which method would be most interesting for the hospital and for those responsible for disinfecting the patient circuit. For the hospital, a high-cost solution would not be interesting; for those responsible for disinfection, a detection that takes a long time would not be interesting, since the verification of the presence of the acid is carried out when there is an exchange of hemodialysis users.

3 Results and Discussions

The automated detection of acid in the patient circuit and in the dialyzer is of extreme importance for both the operator and the patient, since automating the acid detection procedure makes it possible to decrease user errors and, consequently, to improve the well-being of the patient during the hemodialysis procedure. Serious hypersensitivity reactions to peracetic acid have already been described after reprocessing of the dialyzers and lines of hemodialysis patients, manifesting themselves as dizziness, headache, nausea and bronchospasm [1, 7]. This study was carried out to determine the best light source for the project, as it was necessary to cover all wavelengths from 180 to 280 nm. That was the biggest obstacle, since one of the objectives of the project was to create a low-cost device. Therefore, the detection of acid through the pH of the solution was more suited to this goal.

Fig. 2 Saline and peracetic acid test


Fig. 3 Change in pH at different acid concentrations

values. No difference was found in the pressure of the machines, since all hemodialysis machines work with a pressure close to that of the blood in the patient circuit. The possibility of bacteria detection by pH was also verified; some scientific articles on this topic were surveyed to complement the project, as it is an indication for future research.

4 Conclusion

The present study aimed to investigate and develop a system for detecting peracetic acid in the dialyzer, as well as in the hemodialysis line of the patient. The results verified that the pH meter has the capability to detect peracetic acid mixed into saline solutions, by measuring the pH of the mixture. With the tests carried out, it was possible to conclude that spectrophotometry would be a reliable method, but with a high cost, which is not interesting for a hospital, whereas the pH method is an authentic, simple and low-cost method.

Acknowledgements I would like to acknowledge the Center of Development and Transfer of Assistive Technology (CDTTA) and all the professors who helped this work.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Governo do Brasil. Rede pública de saúde recebe mais recursos para serviços de nefrologia. Available at https://www.brasil.gov.br/saude/2013/03/rede-publica-de-saude-recebe-mais-recursos-para-servicos-de-nefrologia. Accessed 6 Jul 2019
2. Ministério da Saúde (2002) Equipamentos médico-hospitalares e o gerenciamento da manutenção: capacitação a distância
3. Oliveira M (2011) Um novo método espectrofotométrico para detectar níveis residuais de peróxido após o processamento de filtros de hemodiálise. Einstein 9:1–3
4. Higashi N (2005) Direct determination of peracetic acid, hydrogen peroxide, and acetic acid in disinfectant solutions by far-ultraviolet absorption spectroscopy. Anal Chem 77:2272–2277
5. Agência Nacional de Vigilância Sanitária (ANVISA). Resolução da Diretoria Colegiada - RDC No. 154, de 15 de junho de 2004. Estabelece o Regulamento Técnico para o Funcionamento de Serviços de Diálise. May 31, 2006
6. Higashi B, Iouri G, Pablo J, Scháberle FA (2016) Fundamentos da espectroscopia de transferência óptica:90–93
7. Robinson BM, Feldman HI (2005) Dialyzer reuse and patient outcomes: what do we know now? Semin Dial 18:175–179

Using of Fibrin Sealant on Treatment for Tendon Lesion: Study in Vivo Enéas de Freitas Dutra Junior, S. M. C. M. Hidd, M. M. Amaral, A. L. M. Maia Filho, L. Assis, R. S. Ferreira, B. Barraviera, and C. R. Tim

Abstract

Tendon injuries are among the most common orthopaedic problems, with long-term disability as a frequent consequence due to their prolonged healing time. Thus, this study investigated the effect of a new fibrin sealant (FS), derived from the venom of Crotalus durissus terrificus, on tendon repair. Twelve animals of Wistar lineage (average weight 185.8 g) received partial tendon transections and were then randomly separated into two groups: lesion group (GL) and fibrin sealant group (GS). Immediately after the tendon injury, 9 µL of fibrin sealant was applied to each transected tendon, in order to form a stable clot with a dense fibrin network. The edema analysis was made in three stages: before the partial tendon transection, 24 h after injury induction, and after 21 days of fibrin sealant treatment. Collagen was quantified on slides stained with picrosirius red. The Kolmogorov–Smirnov test was applied to verify normality between the groups. For comparisons between groups, Student's t test was applied for parametric samples and the Mann–Whitney test for non-parametric samples, with a significance level of p < 0.05. The results showed no significant difference between the groups in the first 24 h after the lesion, but the fibrin sealant treatment promoted a reduction of edema after 21 days. The fibrin sealant group showed collagen deposition similar to the injury group, but with more organized collagen fibers. The results suggest that the fibrin sealant is effective for treating tendon injuries.

E. de Freitas Dutra Junior (&) · S. M. C. M. Hidd · M. M. Amaral · L. Assis · C. R. Tim
Departamento de Engenharia Biomédica, Universidade Brasil, Rua Carolina Fonseca nº 584, Bairro Itaquera, São Paulo, Brazil
A. L. M. Maia Filho
Núcleo de Pesquisa em Biotecnologia e Biodiversidade, UESPI, Teresina, Piauí, Brazil
R. S. Ferreira
Centro de Estudos de Venenos e Animais Peçonhentos CEVAP-UNESP, São Paulo, Brazil
B. Barraviera
Faculdade de Medicina de Botucatu, UNESP, São Paulo, Brazil

Keywords
Edema · Collagen · Tendon injury · Biopolymer · Heterologous fibrin sealant

1 Introduction

The tendon is an anatomic structure situated between muscles and bones whose main function is to transmit the force generated by the muscles to the bones, making articular movement possible. It is constituted especially of extracellular matrix and fibroblasts, which are responsible for producing the collagen fibers. There are many types of collagen; type I collagen, which is found in tendon, is the most resistant to stress and demonstrates a great capacity for supporting tensile forces [1]. The heel tendon absorbs force during flexion movements on the plantar and dorsal surfaces of the foot. During inversion and eversion movements, this predominantly tensile force stresses different parts of the tendon. Repetitive micro-traumas, arising from unnatural friction against bone surfaces and from tissue stretching, result in micro-ruptures of the collagen fibers [2]. The tendon can be affected by different pathologies arising from blunt trauma or repetitive effort, besides infections, which might induce partial or total injury, resulting in these cases in absence from work and/or athletic activities because of the inflammation, in other words, the edema [3]. The use of biocompatible materials for the purpose of regenerating living tissue is being widely studied and tested in


laboratories, especially those implanted temporarily or permanently in the human body. The current progress in healthcare combined with engineering, in the search for a better quality of life, is making possible the restoration of the functions of organs and tissues injured or stricken by illness [4]. On this premise, research emerged investigating adhesive agents and fibrin sealants using fibrinogen extracted from human blood. However, some years later this was no longer viable, because of the possibility of transmission of infectious diseases. Thus, Brazilian researchers proposed producing a sealant from cattle blood associated with an enzyme derived from snake venom; according to the results of the research already carried out, it has positive effects, besides not stimulating the inflammatory process and being cheaper [5, 6]. The heterologous fibrin sealant is composed of a cryoprecipitate rich in fibrinogen extracted from buffalo blood (Bubalus bubalis) associated with a thrombin-like enzyme (a serine protease) extracted from snake venom (Crotalus durissus terrificus). The active principle of this sealant is similar to the final pathway of the coagulation cascade: the thrombin-like enzyme acts on the fibrinogen molecule, converting it into fibrin monomers that polymerize in the presence of calcium to form a stable clot with adhesive, hemostatic and sealant effects [6]. The present study aimed to evaluate the heterologous fibrin sealant in the treatment of partial tendon lesions.

2 Materials and Methods

This is a quantitative experimental study conducted according to Law nº 11.794 of October 2008, Decree nº 6.899 of 15 July 2009, and the rules edited by the Conselho Nacional de Controle de Experimentação Animal (CONCEA), and it was authorized by the animal use ethics committee (CEUA/UESPI) of the State University of Piauí under protocol nº 0326/2019. The sample was composed of 12 Wistar rats obtained from the animal facility of UESPI. The animals presented an average weight of 185 ± 14.8 g. They were randomly divided into two experimental groups: 6 in the lesion group (GL) and 6 in the fibrin sealant group (GS). The GL animals were submitted to partial tendon transection and received a simulated treatment, while the GS animals were submitted to partial tendon transection and received treatment with the fibrin sealant derived from snake venom. The animals were anesthetized with an intraperitoneal injection of ketamine (80 mg/kg) and xylazine (10 mg/kg)

and the right lower paws were submitted to antisepsis and trichotomy. Then, under a surgical microscope, a partial transection was made on the tendon; immediately afterwards, the sealant was applied [7] in the GS. For the application of the fibrin sealant to the tendon incision, three ampoules were used, which were kept at −20 °C and subsequently thawed, mixed and applied, 9 µL to each transected tendon, in order to form a stable clot with a dense fibrin network [8]. The animals of the GL group underwent the same procedures, however without fibrin sealant application; this group received saline solution as treatment. Edema analysis was made at three different moments: the first evaluation (AV1) on the day before the tendon lesion; the second evaluation (AV2) 24 h after the tendon lesion; and the third evaluation (AV3) 21 days after the tendon lesion. The edema evaluation was made on the right paw of all animals, right above the incision, where a mark was also made as reference. The paw was then immersed up to the mark in a plethysmometer, which quantified the edema as the volumetric displacement of liquid (in mL). The animals were humanely killed on the 21st postoperative day with a lethal dose of ketamine (240 mg/kg) and xylazine (30 mg/kg). Then, the right tendon was collected and immediately fixed in 10% formaldehyde (Merck, Darmstadt, Germany) for 24 h. Next, the samples were dehydrated in a graded series of ethanol and embedded in paraffin. Thin sections (5 µm) were prepared in the longitudinal plane using a microtome (Leica RM-2145, Germany). The samples were prepared histologically for quantification of collagen. The picrosirius-red-stained slides were photographed in 3 different fields of the lesion area on the tendon, using a trinocular optical microscope (Olympus CX31, model YS 100) equipped with filters to provide polarized illumination.
The images were obtained at 400× magnification with a digital camera (Olympus SC20; Miami, USA) coupled to the microscope and analyzed using the image-analysis software ImageJ. The analysis methodology followed Castro et al. [9]: the images were loaded into the software and converted to black and white, so that black represents the collagen (red in the original image) and white the background. The number of black pixels in each image was then used to calculate the percentage of the image area corresponding to collagen, expressed as the total percentage of collagen. The data were collected and statistically analyzed using GraphPad Prism software. The edema volume and collagen values were submitted to the Kolmogorov–Smirnov normality test. The comparison between the groups was made using Student's t test or the Mann–Whitney test, as appropriate, with a confidence interval of 95% (p < 0.05).
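The pixel-counting step of the collagen quantification can be sketched as follows: a Python stand-in for the ImageJ black/white procedure, where the toy image and the binarization threshold are illustrative assumptions, not values from the study:

```python
def collagen_percentage(image, threshold=128):
    """Percentage of pixels darker than `threshold` in a grayscale image.

    `image` is a list of rows of 0-255 gray values. After binarization,
    dark pixels stand in for the picrosirius-red-stained collagen and
    light pixels for the background, mirroring the ImageJ procedure of
    counting black pixels and dividing by the total image area.
    """
    pixels = [p for row in image for p in row]
    dark = sum(1 for p in pixels if p < threshold)
    return 100.0 * dark / len(pixels)

# Toy 2x4 "micrograph": 3 of the 8 pixels fall below the threshold
image = [[10, 200, 50, 220],
         [255, 90, 180, 240]]
print(collagen_percentage(image))  # 37.5
```

Averaging this percentage over the three photographed fields per tendon gives one collagen value per animal for the group comparison.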

3 Results

The initial volume did not show statistical difference between the groups, so the measured volumes were compatible between the experimental groups (p = 0.27) (Fig. 1a). Likewise, 24 h after lesion induction, both groups presented statistically equal values (p = 1) (Fig. 1b). However, when the measurement was repeated after 21 days, the edema volume indicated that GS showed improved performance (p = 0.003) compared with the lesion group (Fig. 1c). Figure 2 shows that the group treated with fibrin sealant presented a quantity of collagen fibers similar to the control group. However, the fibrin sealant group presented more organized collagen fibers when compared to the control group.

4 Discussion

The current study observed the effect of fibrin sealant on the reduction of calcaneal tendon edema. The tendon repair process can be divided into inflammation, proliferation, and remodeling phases. The first phase starts immediately after the lesion and extends to about 7 days. In the first 24 h, angiogenesis occurs, forming the vascular network that permits tissue survival. Then collagen deposition starts, mainly of type III. In the third and last phase, collagen type III is replaced by collagen type I [10–12]. Our results demonstrated that, in both groups, the edema was similar shortly after the tendon lesion (24 h). Edema is one of the cardinal signs of inflammation, and the inflammatory process normally runs for about 7 days after the lesion, which could explain the results in the GS [13]. De Barros et al. [5] indicate that the anti-inflammatory effect of fibrin may help to stop the accumulation of synovial fluid. Giordano et al. [14] detected a significant decrease in edema in patients treated with fibrin sealant to reduce complications arising from face-lifting. In vivo studies by Barbon et al. [15] indicated that platelet-rich fibrin promotes less inflammation and fibrosis. After the initial inflammatory phase, collagen deposition starts in the lesion area. The results of the current study showed no statistical difference in the quantity of collagen fibers between the GL and GS groups. On the other hand, it was possible to observe the better

Fig. 1 Edema volume. a Initial volume, before the lesion; b edema volume after 24 h; c edema volume after 21 days. Lesion group (GL): animals submitted to partial tendon transection that received sham treatment; fibrin sealant group (GS): animals submitted to partial tendon transection that received fibrin sealant

organization of collagen fibers in the fibrin sealant group when compared to the control group. From another perspective, Peres [16] showed an increase in fibroplasia and in the organization of collagen fibers by the end of 21 days after applying thin layers of fibrin sealant to wounds; however, that study treated skin wounds in rats, a tissue that is more vascularized than tendon. Thus, we suggest that the quantity of collagen found here may be related to the experimental period evaluated: at 21 days after the lesion the repair phase is near its end, which makes differences in collagen quantity between the groups hard to observe. For future studies, we suggest evaluating the groups at 7, 14, and 21 days after the lesion, in order to obtain more precise results. Moreover, the current study used healthy animals, without any condition that could influence the repair process, such as Diabetes Mellitus. It is therefore necessary to investigate the effects of the fibrin sealant in situations where the tendon repair process may be compromised.

E. de Freitas Dutra Junior et al.

Fig. 2 Collagen analysis. Lesion group (GL): animals submitted to partial tendon transection that received sham treatment; fibrin sealant group (GS): animals submitted to partial tendon transection that received fibrin sealant

5 Conclusions

The results of the study demonstrated that, after 21 days of treatment with the sealant, there was a significant reduction in edema compared to the control group. We conclude that the use of heterologous fibrin sealant to repair tendons in rats may stimulate the healing and repair of tendon injuries, and that this treatment is promising for clinical application.

Acknowledgements We would like to acknowledge the funding agency CAPES for the financial support of this research.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Thorpe CT, Peffers MJ, Simpson D et al (2016) Anatomical heterogeneity of tendon: fascicular and interfascicular tendon compartments have distinct proteomic composition. Sci Rep 6:20455
2. Bogaerts S, Desmet H, Slagmolen P et al (2016) Strain mapping in the Achilles tendon—a systematic review. J Biomech 49(9):1411–1419
3. Egger AC, Berkowitz MJ (2017) Achilles tendon injuries. Curr Rev Musculoskeletal Med 10(1):72–80
4. Hendow EK, Guhmann P, Wright B et al (2016) Biomaterials for hollow organ tissue engineering. Fibrogenesis Tissue Repair 9:3
5. De Barros CN, Yamada ALM, Ferreira RS Jr et al (2015) A new heterologous fibrin sealant as a scaffold to cartilage repair—experimental study and preliminary results. Exp Biol Med (Maywood) 241(13):1410–1415
6. Ferreira RS Jr, Barros LC, Abbade LPF et al (2017) Heterologous fibrin sealant derived from snake venom: from bench to bedside—an overview. J Venomous Animals Toxins Including Trop Dis 20(23):2–12
7. Dietrich F (2012) Comparação do efeito do plasma rico em plaquetas e fibrina rica em plaquetas no reparo do tendão de Aquiles em ratos. Dissertação (Mestrado em Medicina e Ciências da Saúde), Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre, RS
8. Frauz K, Teodoro LFR, Carneiro GD et al (2019) Transected tendon treated with a new fibrin sealant alone or associated with adipose-derived stem cells. Cells 8(1):56
9. Castro AA, Skare TL, Nassif PAN et al (2019) Tendinopatia e obesidade. ABCD Arq Bras Cir Dig 26:107–110
10. Sharma P, Maffulli N (2005) Current concepts review: tendon injury and tendinopathy: healing and repair. J Bone Joint Surg 87(1):187–202
11. Docheva D, Müller SA, Majewski M et al (2014) Biologics for tendon repair. Adv Drug Deliv Rev 84:222–239
12. Fakunle ES, Lane JG (2017) Cell culture approaches for articular cartilage: repair and regeneration. Bio-orthopaedics:161–172
13. Mattos LHL, Álvarez LEC, Yamada ALM et al (2015) Effect of phototherapy with light-emitting diodes (890 nm) on tendon repair: an experimental model in sheep. Lasers Med Sci 30(1):193–201
14. Giordano S, Koskivuo I, Suominen E et al (2017) Tissue sealants may reduce haematoma and complications in face-lifts: a meta-analysis of comparative studies. J Plast Reconstr Aesthetic Surg 70(3):297–306
15. Barbon S, Stocco E, Macchi V et al (2019) Platelet-rich fibrin scaffolds for cartilage and tendon regenerative medicine: from bench to bedside. Int J Mol Sci 20(7):1701
16. Peres VS (2014) Efeito do selante de fibrina derivado de peçonha de serpente associado a células-tronco mesenquimais na cicatrização de ferida cirúrgica em ratos. Dissertação (mestrado), Universidade Estadual Paulista Júlio de Mesquita Filho, Faculdade de Medicina de Botucatu, Botucatu, SP

Analysis of Respiratory EMG Signals for Cough Prediction

T. D. Costa, T. Z. Zanella, C. S. Cristino, C. Druzgalski, Guilherme Nunes Nogueira-Neto, and P. Nohama

Abstract

Respiratory muscle weakness can be one of the aftereffects of cervical or high thoracic spinal cord injury (SCI). This problem affects the body's capacity to maintain correct oxygen levels, and cough events become difficult to perform. Studies have demonstrated that applying synchronized transcutaneous functional electrical stimulation (TFES) to the respiratory muscles (diaphragm and abdomen) enhances their endurance and strength. Thus, this study aims to evaluate whether electromyography (EMG) sensors positioned on the sternocleidomastoid (SCM), omohyoid (OM), and external oblique (EO) muscles can be used to help in automatic cough prediction for a future TFES synchronization system. Voluntary coughs of nineteen volunteers were recorded using three EMG sensors, and their potential use in cough prediction was evaluated. Results show that the EO muscle presents better results (55 ± 31.37 V) in amplitude variations compared to the other muscles. However, it is still not possible to standardize one single muscle for prediction, because muscle activation varies from person to person. Even so, EMG sensors can contribute significantly to cough prediction systems, since the muscle response can be perceived well before the sound.

Keywords

Cough detection • EMG • Omohyoid • Oblique • Sternocleidomastoid

T. D. Costa · P. Nohama
Universidade Tecnológica Federal do Paraná, Curitiba, Paraná, Brazil
T. Z. Zanella · C. S. Cristino · G. N. Nogueira-Neto (corresponding author) · P. Nohama
Pontifícia Universidade Católica do Paraná, Rua Imaculada Conceição, 1155, Prado Velho, ZIP 80215-901, Curitiba, Paraná, Brazil
e-mail: [email protected]
T. D. Costa · C. Druzgalski
California State University Long Beach, Long Beach, CA, USA

1 Introduction

1.1 Background

One of the main causes of death in people with spinal cord injury (SCI) is respiratory system complications [1–4]. Two examples are respiratory failure [3, 5] and abdominal weakness, which leads to poor performance during cough events [6]. Sometimes, respiratory events related to SCI can lead to the need for mechanical ventilator assistance or, when appropriate, a pacemaker [7]. Some people retain partial movement of the diaphragm and/or abdominal muscles and can breathe and cough by themselves, but not as healthy people do [2]. This may not be enough to maintain appropriate oxygen levels or to adequately support cough events, which are very important for mucus removal [8–10]. To improve quality of life, health professionals use procedures such as mucus removal through simulated cough (manual abdominal compression) [11], mucolytics [12], or positive-pressure devices to improve oxygen levels [5]. However, the results of these procedures can be enhanced with another technique, transcutaneous functional electrical stimulation (TFES) synchronized with the patient's respiratory activity [4, 13]. Synchronized TFES has mostly been applied to the diaphragmatic [9, 14] and abdominal muscles [4, 12], because the diaphragm is the main muscle responsible for inhalation [15], whereas the abdomen is important for cough events and forced exhalation [4]. During quiet breathing, abdominal TFES should be applied at the start of exhalation [4] and diaphragmatic TFES

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_113


should be applied at the beginning of inhalation. During cough, abdominal TFES should be applied during glottal closure (between the end of inhalation and the beginning of the cough exhalation) with a higher amplitude than during normal-breathing TFES [4]. There are two ways to synchronize TFES with the patient's respiratory activity: (1) manually, by a physiotherapist's observation, in which case adequate timing is not guaranteed; or (2) with an electronic system capable of detecting the respiratory phases and predicting them in real time to trigger the TFES at the optimum moment. Electronic sensors associated with algorithms have been used for cough activity tracking: an elastic strap around the chest and abdomen [4], EMG on the pectoralis major and deltoid muscles [11], a flow sensor with a face mask [4], and so on. This study aims to evaluate whether electromyography (EMG) sensors positioned on the sternocleidomastoid (SCM), omohyoid (OM), and external oblique (EO) muscles can be used to help in automatic cough prediction for a future TFES synchronization system. The main idea is to extract relevant information useful for creating algorithms for this purpose.
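As a rough sketch of the timing rules above (the paper does not specify such a controller; the phase labels and command fields here are invented for illustration only):

```python
# Illustrative mapping from a detected respiratory phase to a TFES command,
# following the timing rules described above. Phase labels and amplitude
# levels are assumptions made for this sketch, not values from the study.

def tfes_command(phase: str) -> dict:
    if phase == "inhalation_start":
        # Diaphragmatic TFES at the beginning of inhalation
        return {"target": "diaphragm", "amplitude": "normal"}
    if phase == "exhalation_start":
        # Abdominal TFES at the start of quiet exhalation
        return {"target": "abdomen", "amplitude": "normal"}
    if phase == "glottal_closure":
        # During cough: abdominal TFES with higher amplitude
        return {"target": "abdomen", "amplitude": "high"}
    return {"target": None, "amplitude": None}  # no stimulation

print(tfes_command("glottal_closure"))  # {'target': 'abdomen', 'amplitude': 'high'}
```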

2 Methodology

2.1 Muscles Analyzed

In healthy people, the cough event has three stages: inhalation, compression, and forced quick exhalation. It starts with inhalation, proceeds to glottal closure (compression), and ends with the forced exhalation [16]. The sternocleidomastoid, omohyoid, and external oblique muscles were chosen for EMG analysis for the following reasons:

(1) The sternocleidomastoid (SCM) is one of the main flexors of the neck and one of the most important accessory muscles of inhalation, with greater activation in the sitting position and during moderate and maximum efforts. Contraction of the inspiratory muscles decreases the pleural pressure, causing air to flow into the lung and expanding the lung and rib cage [17, 18]. The intent of this sensor is therefore to capture the initial time of the first phase of the cough, the inhalation;

(2) The function of the omohyoid (OM) muscle is to lower the hyoid bone, participating in swallowing and phonation. During long breaths, it helps to reduce the pressure in the upper part of the lungs and the internal jugular vein. This muscle acts in all movements of the tongue but not during head movements [19]. Its electromyographic signal could therefore be free of movement artifacts, which is not the case for the SCM, making it a potentially innovative muscle for detecting the inspiratory phase;

(3) The abdominal muscles show greater activation during the expiratory phase of breathing. Studies confirm that all abdominal muscles are activated during cough compression and exhalation [20]. Their primary role is to pull the abdominal wall inward and increase the abdominal pressure, inducing a cranial displacement of the diaphragm into the chest cavity, which increases the pleural pressure and decreases the lung volume [21]. The external oblique (EO) muscle is efficient during the expulsion phase of the cough, as it assists forced expiration, providing the most efficient voluntary cough [22, 23].

2.2 Experimental Setup

Nineteen healthy volunteers (Table 1) provided written consent, approved by the Ethics Committee on Research in Humans of the Pontifícia Universidade Católica do Paraná (n. 97910718.8.0000.0020), to participate in this study. The experimental setup comprised:

(1) a pneumotachometer with heat controller (model 3830, Hans Rudolph, Inc., USA), used as the gold-standard sensor for identifying the onset of cough events and calibrated for each volunteer before the experiment. The pneumotachometer has a precision screen assembly made of a fine-mesh stainless steel screen that generates a linear differential pressure signal proportional to the flow rate; for this reason, an air pressure sensor (MPXV7002) is needed to convert this pressure into an electrical signal. The heater eliminates moisture buildup on the screens, which could affect the flow measurement;

(2) three wearable EMG sensors (MyoWare muscle sensor AT-04-001, Advancer Technologies, USA) with adjustable gain, CMRR = 110, 5 V and 9 mA supply, and 110 GΩ input impedance. Since the output signals were centered at Vcc/2, offset-cancelling and analog inverting amplifier (gain 10) circuits were developed for each EMG channel;

(3) a 16-bit, 500 kS/s data acquisition board (USB-6341, National Instruments, USA), which registered sensor data at a 1 kHz sample rate and ±10 V dynamic range;


Table 1 Volunteers' data

ID  Gender  Height (m)  Weight (kg)  Abdominal circumference (cm)  Waist (cm)  Physical activity frequency  Age (years)
1   M       1.68        62           92                            75          Sedentary                    23
2   F       1.51        62.9         93.4                          89          Sedentary                    47
3   F       1.67        66.8         76                            69.3        Athlete                      26
4   F       1.54        54.1         75.5                          66.5        Sedentary                    21
5   M       1.73        62.4         72.5                          71.5        Athlete                      21
6   M       1.90        101.3        104.5                         99          Sedentary                    40
7   F       1.56        59.6         85                            75          Sedentary                    38
8   F       1.51        50.5         70                            61.8        Athlete                      21
9   F       1.67        66           85                            75.5        Athlete                      20
10  F       1.70        52.2         70.5                          66          Sedentary                    20
11  M       1.73        90.6         106                           102.5       Sedentary                    44
12  F       1.52        48.9         75                            66.5        Regular                      46
13  F       1.65        70.4         90                            77          Regular                      24
14  M       1.87        85.3         93                            89          -                            28
15  F       1.66        62.9         79                            70          Athlete                      26
16  F       1.70        81           92                            85          Sedentary                    30
17  F       1.65        95.2         108                           100         Regular                      35
18  F       1.62        63.6         76                            69.8        -                            26
19  M       1.69        83           97                            96          Regular                      51

(4) data analysis software developed in MATLAB (v2018, MathWorks, USA).

2.3 Experiment Steps

(1) The electrodes were positioned on the participants according to Fig. 1. The points were measured with a tape and marked with a pen. The anatomical points were the upper belly of the omohyoid muscle, with a reference point on the sternal clavicular end; the central portion of the sternocleidomastoid, with the reference electrode on the clavicular sternal end (right); and the external oblique, with the reference electrode on the upper anterior part of the iliac crest;
(2) Each volunteer was asked to perform twenty spontaneous coughs while sitting;
(3) Data were recorded and, afterwards, all cough onsets (initial time in seconds) were identified by human observation of the pneumotachometer signals, and their values were registered;

(4) The signals were processed with algorithms for offset removal, amplification, and band-pass filtering (20–90 Hz). After processing, they look like the ones in Fig. 2;
(5) After the processing step, features were extracted to compare which muscle had more activity (changes in voltage levels) in the pre-cough event. In each signal, as shown in Fig. 3, the exact point (T) before the cough explosion was selected. A time window (TW) of 1 s, ending 100 ms before this point, was used in the feature calculations. The features extracted for analysis were:

• Root mean square (RMS): the square root of the arithmetic mean of the squares of each value x of an N-sized TW, as shown in Eq. (1):

RMS = \sqrt{\frac{1}{N}\left(x_1^2 + x_2^2 + \cdots + x_N^2\right)}    (1)

• Zero-crossing rate (ZCR): the rate of sign changes along TW, as shown in Eq. (2), where 1_{\mathbb{R}_{<0}} is an indicator function;


ZC = \frac{1}{N-1} \sum_{n=1}^{N-1} 1_{\mathbb{R}_{<0}}(x_n x_{n-1})    (2)

• Sum: the sum of the absolute values of the samples in TW, as shown in Eq. (3):

Sum = \sum_{n=1}^{N} |x_n|    (3)

• Maximum signal variation (MSV): the difference between the maximum and minimum values along TW, as shown in Eq. (4):

MSV = \max(x[n]) - \min(x[n])    (4)

Fig. 1 Placement of EMG sensors

Fig. 2 Example of the EMG signals of a 46-year-old woman for the SCM, OM, and EO muscles, together with the gold-standard pneumotachometer. The point highlighted in red is the onset of the exhalation phase of the cough

3 Results

The EMG signals recorded during the pre-cough phase differ from person to person in amplitude and time response, as shown in Fig. 4. The results of the calculations are shown in Table 2: 394 cough events were analyzed, and the mean and standard deviation of each feature are presented. The EO muscle had the best response when the Sum and the MSV were analyzed, meaning that its TW amplitudes were higher and its signals more visible, while the OM had the second-best response, followed by the SCM. The ZCR and RMS results were not considered for evaluation because their values were very similar across muscles.
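A minimal NumPy sketch of the window selection and of the four features defined in Eqs. (1)–(4) (the test signal and indices are illustrative; this is not the authors' MATLAB code):

```python
import numpy as np

FS = 1000  # Hz, acquisition sample rate used in the study

def pre_cough_window(x, t_idx, fs=FS):
    """1 s window ending 100 ms before the forced-exhalation point T."""
    end = t_idx - int(0.1 * fs)
    return x[end - fs:end]

def features(w):
    w = np.asarray(w, float)
    n = w.size
    rms = np.sqrt(np.mean(w ** 2))                        # Eq. (1)
    zcr = np.count_nonzero(w[1:] * w[:-1] < 0) / (n - 1)  # Eq. (2)
    s = np.sum(np.abs(w))                                 # Eq. (3)
    msv = w.max() - w.min()                               # Eq. (4)
    return {"RMS": rms, "ZCR": zcr, "Sum": s, "MSV": msv}

# Illustrative signal: a 50 Hz sine, inside the 20-90 Hz band-pass range
t = np.arange(0, 2, 1 / FS)
x = 0.1 * np.sin(2 * np.pi * 50 * t)
w = pre_cough_window(x, t_idx=1500)
print(features(w))
```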

4 Discussion

Table 2 Mean (±standard deviation) values of the features analyzed for the 394 time series acquired during pre-cough

Features   EO               OM               SCM
Sum (V)    55.01 (±31.37)   38.66 (±42.47)   33.91 (±28.75)
RMS (V)    0.07 (±0.04)     0.06 (±0.06)     0.05 (±0.04)
ZCR (%)    0.20 (±0.02)     0.22 (±0.02)     0.21 (±0.02)
MSV (V)    0.60 (±0.36)     0.47 (±0.56)     0.35 (±0.36)

Fig. 3 Example of a cough event, where TW is the time window used for the calculations (the pre-cough time) and T the exact forced exhalation moment

The best results using the EMG sensor on the EO muscle can be explained by its participation at the beginning of the forced expiration phase [6]. Conceivably, the amplitude of the acquired signals for this muscle was higher due to the easier access to the sensor position, compared to the OM and SCM. There are cases where the patient skips the inhalation phase because there is already enough air in the lungs to perform the cough; in these cases the SCM is not recruited, so its response is not a good option for cough prediction. Even though the EO provided the best results, it is still not possible to standardize a method using just one specific muscle for cough prediction, because some people have a better response in one muscle than in another. For example, in the study of Spivak and colleagues [11], abdominal TFES was synchronized automatically using EMG signals from different muscles, depending on the individual: in some, the pectoralis major had a better response; in others, the deltoid produced a more intense signal. Most research on cough identification and prediction uses microphones and audio data. In Vizel et al. [24], coughing sounds were identified by specialists in recordings, and a regression algorithm was then used to identify and count coughing events. Drugman et al. [25] used a combination of different sensors (ECG, thermistor, chest belt, accelerometer, contact microphone, and audio microphone); the sensors that presented the most relevant information were the audio microphone, followed by the ECG and the accelerometer. For prediction, a neural network was trained for each sensor and for derived combinations, and the best result was obtained by the network that used only the audio microphone. Amoh et al. [26] used a piezoelectric transducer and additional signal-conditioning electronics to amplify respiratory sounds and sense the vibrational energy produced by the cough event.

Fig. 4 Examples of different behaviors of EMG signals for the SCM, OM, and EO muscles. Muscle activity can vary from person to person, and in some cases there is no apparent activity, resulting in signals such as those shown in this figure. a and b SCM, OM, and EO responses; c only OM and EO apparent responses; d only EO apparent response

5 Conclusion

For cough prediction, it is important to find the best muscle response for each person. In this experiment, the EO muscle response was better than those of the SCM and OM.


The common use of microphones for the prediction and detection of cough is possible thanks to the characteristic sound produced by this event in its explosion phase, which facilitates research using this method. Although it is not possible to standardize a single muscle, EMG sensors are still little explored for cough detection. They can nevertheless be useful for making predictions earlier, since the muscle response can often be perceived before the sound. In future studies, muscle response features combined with machine learning algorithms may be a promising technique for cough prediction.

Acknowledgements The authors thank CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) and CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico), Brazil, for scholarships and financial support. The author TDC thanks CAPES for the scholarship received through the PDSE program (process number: 88881.131558/2016-01).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Stanic U, Kandare F, Jaeger R et al (2000) Functional electrical stimulation of abdominal muscles to augment tidal volume in spinal cord injury. IEEE Trans Rehabil Eng 8:30–34
2. Costa V (2005) Efeito do uso da cinta abdominal elástica na função respiratória de indivíduos lesados medulares na posição ortostática. Universidade de São Paulo, Brazil
3. Cheng PT, Chen CL, Wang CM et al (2006) Effect of neuromuscular electrical stimulation on cough capacity and pulmonary function in patients with acute cervical cord injury. J Rehabil Med 38:32–36
4. Gollee H, Hunt K, Allan D et al (2007) A control system for automatic electrical stimulation of abdominal muscles to assist respiratory function in tetraplegia. Med Eng Phys 29:799–807
5. Wong S, Shem K, Crew J (2012) Specialized respiratory management for acute cervical spinal cord injury: a retrospective analysis. Top Spinal Cord Inj Rehabil 18(4):283–290
6. Gollee H, Hunt K, Fraser M et al (2008) Abdominal stimulation for respiratory support in tetraplegia: a tutorial review. J Autom Control 18(2):85–92
7. Pavlovic D, Wendt M (2003) Diaphragm pacing during prolonged mechanical ventilation of the lungs could prevent from respiratory muscle fatigue. Med Hypotheses 60(3):398–403
8. Cuello A, Aquim E, Cuello G (2013) Músculos ventilatórios – biomotores da bomba respiratória, avaliação e tratamento. Editora Andreoli, São Paulo, Brazil
9. Ávila J (2001) O emprego da estimulação elétrica transcutânea no tratamento da disfunção diafragmática. Centro Federal de Educação Tecnológica do Paraná, Curitiba, Brazil
10. McCaughey E (2014) Abdominal functional electrical stimulation to improve respiratory function in acute and sub-acute tetraplegia. University of Glasgow, UK
11. Spivak E, Keren O, Niv D et al (2007) Electromyographic signal-activated functional electrical stimulation of abdominal muscles: the effect on pulmonary function in patients with tetraplegia. Spinal Cord 45(7):491–495
12. McCaughey E, McLean A, Allan D et al (2016) Abdominal functional electrical stimulation to enhance mechanical insufflation-exsufflation. J Spinal Cord Med 39(6):720–725
13. Nohama P, Jorge R, Valenga M (2012) Effects of transcutaneous diaphragmatic synchronized pacing in patients with chronic obstructive pulmonary disease (COPD). Rev Bras Eng Bioméd 28(2):103–115
14. Jorge R (2009) Effects of the transcutaneous synchronous diaphragmatic pacing in moderate and severe chronic obstructive pulmonary disease (COPD). Pontifícia Universidade Católica do Paraná, Curitiba, Brazil
15. Silverthorn D (2010) Mecânica da respiração. In: Fisiologia humana: uma abordagem integrada, 5th edn. Artmed, Porto Alegre, Brazil, pp 569–591
16. McCool F (2006) Global physiology and pathophysiology of cough: ACCP evidence-based clinical practice guidelines. Chest 129:48–53
17. Kohan E, Wirth G (2014) Anatomy of the neck. Clin Plast Surg 41:1–6
18. LoMauro A, Aliverti A (2016) Physiology of respiratory disturbances in muscular dystrophies. Breathe 12(4):318–327
19. Castro H, Resende L, Bérzin F et al (1999) Electromyographic analysis of the superior belly of the omohyoid muscle and anterior belly of the digastric muscle in tongue and head movements. J Electromyogr Kinesiol 9(3):229–232
20. Fontana G, Lavorini F (2006) Cough motor mechanisms. Respir Physiol Neurobiol 152(3):266–281
21. De Troyer A, Kirkwood P, Wilson T (2005) Respiratory action of the intercostal muscles. Physiol Rev 85(2):717–756
22. Ratnovsky A, Elad D, Halpern P (2008) Mechanics of respiratory muscles. Respir Physiol Neurobiol 163(1–3):82–89
23. LoMauro A, Aliverti A (2019) Respiratory muscle activation and action during voluntary cough in healthy humans. J Electromyogr Kinesiol 49:102359
24. Vizel E, Yigla M, Goryachev Y et al (2010) Validation of an ambulatory cough detection and counting application using voluntary cough under different conditions. Cough 6:3
25. Drugman T, Urbain J, Bauwens N et al (2013) Objective study of sensor relevance for automatic cough detection. IEEE J Biomed Health Inform 17(3):699–707
26. Amoh J, Odame K (2015) DeepCough: a deep convolutional neural network in a wearable cough detection system. In: IEEE Biomedical Circuits and Systems Conference (BioCAS), Atlanta, GA, pp 1–4

Integrating Power Line and Visible Light Communication Technologies for Data Transmission in Hospital Environments

R. Lazaro, Klaas Minne Van Der Zwaag, Wesley Costa, G. Acioli, M. Marinho, Mariana Khouri, G. C. Vivas, F. Santos, T. Bastos-Filho, Marcelo Vieira Segatto, H. Rocha and Jair Adriano Lima Silva

Abstract

An integration of power line communication (PLC) and visible light communication (VLC) technologies for data transmission in hospitals is experimentally investigated in this work. This combination represents a promising solution in classified areas, where radio-frequency wireless transmission is prohibited because of interference with the machinery used in certain medical services. It is also motivated by the physical-layer transparency provided by the adoption of orthogonal frequency division multiplexing (OFDM) in both technologies. Bit rates around 10 and 230 Mb/s, measured after propagation through 21.7 m in the downlink and uplink, respectively, show the suitability of the PLC technology as an alternative backbone to a central data-monitoring station. Moreover, a bit rate around 4.8 Mb/s can be achieved with the VLC system over a 2.0 m link, with an error vector magnitude (EVM) of −24 dB, confirming the robustness of this integration.

Keywords

Communication in hospitals • Visible light communication (VLC) • Power line communication (PLC) • Orthogonal frequency division multiplexing (OFDM)

R. Lazaro (corresponding author) · K. M. Van Der Zwaag · W. Costa · G. Acioli · M. Marinho · M. Khouri · M. V. Segatto · H. Rocha · J. A. L. Silva
LabTel, Federal University of Espirito Santo (UFES), Av. Fernando Ferrari, Vitória, ES, Brazil
G. C. Vivas
HUCAM, Federal University of Espirito Santo (UFES), Vitória, ES, Brazil
F. Santos · T. Bastos-Filho
NTA, Federal University of Espirito Santo (UFES), Vitória, ES, Brazil

1 Introduction

The continuous supervision of patients is an important issue in medical care. Since parameters such as temperature, respiration, and heart rate are necessary to follow a patient's health, it is fundamental to guarantee a fast and reliable intercommunication system, taking into account the technical aspects of the data type and the spectrum usage [1]. Considering that the monitoring of patient vital data requires low transmission rates (approximately 20 kb/s), solutions based on wireless technologies such as Wi-Fi and ZigBee become available [2,3]. However, the well-known spectrum congestion affects the adoption of these communication technologies for such monitoring [3]. Indeed, proposing modern communication schemes in hospitals raises some particular technical issues. One of the main issues is that hospitals are considered classified areas in terms of spectrum usage, especially when medical equipment operates in the industrial, scientific and medical (ISM) band [3]. Medical services such as wireless body area networks, medical telemetry, and medical implant communications can be disturbed by interference from the common indoor wireless technologies that use the ISM band [4]. It is worth noting that most of these wireless services operate at low or even ultra-low power, a fact that amplifies the interference [5]. Therefore, solutions based on visible light communication (VLC), as well as a combination of VLC and power line communication (PLC), seem attractive in hospital environments. According to [6], the use of VLC in short-range optical wireless systems provides, among other benefits, hundreds of THz of bandwidth in the unlicensed spectrum, immunity to electromagnetic interference, and range widening by complementing the already deployed visible light infrastructure. On the other hand, PLC is a communication system that uses the existing power distribution infrastructure to transmit data [7]. In addition to the low-cost advantage, PLC allows communication through obstacles that usually destroy wireless signals [8]. As a result, PLC has become attractive in smart grids, telemetry, and indoor information access [9–12]. Therefore, a hybrid solution that integrates VLC and PLC, combining the advantages of both systems, is desirable for the reliable applications demanded in health care [13–15]. In this work, we experimentally evaluate the robustness of a hybrid PLC-VLC system in transmitting patient data, measured with the sensors of multi-parametric monitors, to a medical team located in a monitoring central or in a VLC-illuminated cell. Figure 1 briefly presents some application scenarios of the investigated technological integration in different hospital environments. The data obtained by the monitor sensors are transmitted via PLC to the light-emitting diodes (LEDs) used as optical sources; hence, the PLC system is employed as a backbone for the VLC technology. After the LED modulation, the signals propagate through the VLC channel to a smartphone or tablet in the possession of a team member, to the computers located in a meeting room, or even to the monitoring center by PLC, VLC, or Ethernet connections. The 10 Mb/s (230 Mb/s) power line communication achieved in the downlink (uplink) of a 21.7 m channel proves the suitability of PLC as an alternative backhaul technology for central data monitoring in hospitals. Moreover, the bit rate Rb ≈ 4.8 Mb/s achieved with the VLC system, over a 2.0 m link with an EVM of −24 dB, confirms the robustness of the proposed PLC-VLC integration.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_114

Fig. 1 Illustration of hospital environments with the PLC-VLC integration. The PRO12 data arrive at a personal smartphone or tablet in infirmaries, offices, and monitoring centrals

2

OFDM Theoretical Background

Another key aspect that motivates this work is that orthogonal frequency division multiplexing (OFDM) is the modulation format implemented in most practical PLC systems and is also recommended for broadband access in indoor VLC applications. Hence, if the OFDM signals are properly designed in the VLC system, the desired transparency in the physical (PHY) layer of the aforementioned integration is achieved. OFDM is a spectrally efficient multicarrier modulation scheme that effectively deals with the delay spread of broadband wireless channels [16]. Bandwidth granularity is another advantage that advocates its choice for multi-user communication systems. In its conception, the bitstream is

divided into N subsequences that, after symbol mapping, modulate $N_{SC} = N$ subcarriers before an orthogonal frequency-domain multiplexing [16]. In a discrete-time representation, this multiplexing is performed by an inverse fast Fourier transform (IFFT), which means that, in the receiver, the demultiplexing is processed by an FFT. Therefore, an analog OFDM signal can be represented as

$$x(t) = \sum_{i=-\infty}^{\infty} \sum_{k=1}^{N_{SC}} X_{k,i}\,\Pi(t - iT_s)\,e^{j2\pi(k\Delta f)t}, \qquad (1)$$

where $X_{k,i}$ represents the mapped symbols, $\Pi(t)$ represents an ideal rectangular pulse shaping, and $T_s$ is the useful symbol duration (see the payload shown in Fig. 4). In the frequency domain, the subcarriers are spaced by $\Delta f = (T_s)^{-1}$ Hz. Defining $T_c$ as a sample or chip period, then $\Delta f = 1/(N_{SC} T_c)$ and $T_s = (N_{SC} + N_{CP}) T_c$. Thus, the discrete-time version of the signal described in Eq. (1) can be written as

$$x(nT_c) = \sum_{i=-\infty}^{+\infty} \sum_{k=1}^{N_{SC}} X_{k,i}\, e^{j\left(\frac{2\pi k n}{N_{SC}}\right)}, \qquad (2)$$

where, from Eq. (1), $\Pi(t) = 1$ during $T_s$ and $\Pi(t) = 0$ otherwise. In order to virtually eliminate the intersymbol interference (ISI) produced by channel distortions, a cyclic prefix (CP) is inserted in the OFDM signals by extending each symbol by $N_{CP}$ samples [17]. The bandwidth of the confined spectrum of an OFDM signal is approximately

$$B_w \approx 2\pi\,\frac{(N_{SC} + 1)}{N_{SC}}\,\frac{(N_{SC} + N_{CP})}{N_{SC}}\,R, \qquad (3)$$

where $R = R_b/\log_2(M)$ is the symbol rate, $R_b$ the bit rate, and $M$ the modulation level used in the subcarrier mapping [18].
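As an illustration of the IFFT/FFT multiplexing and cyclic-prefix mechanism described above, the sketch below builds one OFDM symbol, passes it through a toy multipath channel whose delay spread is shorter than the CP, and recovers the subcarrier symbols with an FFT and one-tap equalization. This is a generic Python/NumPy sketch with arbitrary parameters, not the paper's Matlab implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N_SC, N_CP = 64, 16   # illustrative sizes, not the paper's values

# Random 4-QAM (QPSK) symbols X_{k,i} for a single OFDM symbol (fixed i)
bits = rng.integers(0, 2, size=2 * N_SC)
X = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

# Eq. (2): multiplexing via an IFFT, followed by cyclic-prefix insertion
x = np.fft.ifft(X)
tx = np.concatenate([x[-N_CP:], x])

# Toy 3-tap multipath channel; its memory (2 samples) is shorter than the CP
h = np.array([1.0, 0.4, 0.2])
rx = np.convolve(tx, h)[: len(tx)]

# Receiver: drop the CP, demultiplex with an FFT, equalize each subcarrier
Y = np.fft.fft(rx[N_CP:])
H = np.fft.fft(h, N_SC)
X_hat = Y / H
```

Because the CP turns the linear channel convolution into a circular one, each subcarrier sees only a scalar gain H[k], so the one-tap division recovers the transmitted symbols exactly.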

3

Materials and Methods

3.1

Experimental Setup

114 Integrating Power Line and Visible Light Communication …

Fig. 2 Experimental setup used to prove the PLC-VLC integration

Figure 2 shows the experimental setup prepared to investigate the robustness of the PLC-VLC integration. It can be verified in Fig. 2 that the data generated by the PRO12 monitor sensors are coupled to the electrical grid by a commercial XAV5201 PLC transceiver through an Ethernet connection between the two devices. After propagation through the PLC channel, the signals were captured by another PLC transceiver and transmitted via the Ethernet protocol to a computer. In the computer, the received signals are processed in Matlab and encapsulated in OFDM signals designed to deal with the interference introduced by the VLC channel. After propagation through the VLC channel, supported by bi-convex lenses, the optical signals were detected by an S10784 photodiode before analog-to-digital conversion by an MDO3012 oscilloscope. The OFDM demodulation is performed offline in Matlab, and the data generated by the multiparametric monitor at the transmitter is displayed on a computer through a purpose-built graphical interface.

3.2

OFDM Parametrization in the VLC System

In the OFDM signal generation, pseudorandom binary sequences of PRBS = 4 × (2⁹ − 1) bits are multiplexed using a 64-IFFT, after Hermitian symmetry (HS) and subcarrier mapping in 4-QAM. It is important to note that the HS was used to force the generation of real-coefficient signals. Thirty of the subcarriers were zeroed to avoid aliasing, which means that, of the N = N_FFT/2 = 64/2 = 32 multiplexed subcarriers, only N_SC = 16 carry information data. Considering the VLC channel estimation with a maximum delay spread of τ = 15 ns, as described in [19], a cyclic extension of T_g = 10 × τ = 150 ns was designed. A total signal duration T = T_s + T_g = 7.65 μs was conceived, in which T_s = 50 × T_g = 7.5 μs is the useful signal duration. The subcarrier spacing is then Δf = T_s⁻¹ = 133.3 kHz. The central frequency chosen for the analog carrier was f_c = 7.5 MHz, and a bandwidth B_w = 5 MHz was available. Hence, with 4-QAM (M = 4) mapping, the bit rate of the VLC system was calculated as

$$R_b = \frac{B_w}{N+2} \times \frac{N_{SC}\,\log_2(M)}{1+g} \approx 4.8\ \text{Mb/s}.^1$$

Further details about the signal generation and OFDM-based VLC system parametrization can be found in [19].

¹ The factor g = T_g/T represents a penalty introduced by the cyclic prefix, an extension also used to facilitate the equalization process implemented in the receiver to compensate for linear distortions.
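A minimal reconstruction of this parametrization — a 64-point IFFT with Hermitian symmetry so the time-domain signal is real-valued (as required to intensity-modulate an LED), N_SC = 16 data subcarriers in 4-QAM, and a short cyclic prefix — can be sketched as follows. This is an illustrative Python/NumPy sketch, not the authors' Matlab code.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FFT = 64    # IFFT size used in the parametrization above
N_SC = 16     # data-carrying subcarriers (the remaining bins are zeroed)
N_CP = 1      # cyclic prefix; T_g/T_s = 150 ns / 7.5 us is about 2% of a symbol

# 4-QAM mapping of 2*N_SC random bits onto N_SC complex symbols
bits = rng.integers(0, 2, size=2 * N_SC)
symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

# Hermitian symmetry: mirror the conjugated symbols onto the negative-frequency
# bins so the IFFT output is purely real.
X = np.zeros(N_FFT, dtype=complex)
X[1:N_SC + 1] = symbols
X[-N_SC:] = np.conj(symbols[::-1])

x = np.fft.ifft(X)                       # real-valued time-domain OFDM symbol
x_cp = np.concatenate([x[-N_CP:], x])    # prepend the cyclic prefix

# Receiver side: drop the CP, FFT, and read back the data subcarriers
recovered = np.fft.fft(x_cp[N_CP:])[1:N_SC + 1]
```

The Hermitian placement (positive-frequency bins carry the data, negative-frequency bins carry their conjugate mirror) is what makes the IFFT output real, at the cost of halving the number of usable subcarriers.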

4

Experimental Results

4.1

Performance Analysis of the PLC System

Figure 3 shows the downlink and uplink bit rates measured at different PLC link distances. It can be observed from Fig. 3 that, in the downlink, the bit rate decreases at distances d ≥ 10 m. The reduced bit rates registered at link lengths below 10 m are explained by the interference induced by the multipath nature that characterizes PLC systems [11, 20]. Channel recognition and equalization methods, ensured by the preamble and control headers depicted in Fig. 4, are extremely important to the success of the signal transmission. Nevertheless, the Rb = 10 Mb/s measured at 21.7 m demonstrates the suitability of this technology as a backhaul for the application of VLC in hospitals. Figure 3 also shows that, despite the performance reduction caused by multipath at d ≤ 10 m, the uplink bit rates remain close to 230 Mb/s, as a result of the large reserved bandwidth Bw ≈ 28 MHz highlighted in Fig. 4. It should be stressed that the uplink payload was not utilized during the data transmission.

4.2

Optical Characterization of the White LED

Different levels of light intensity must be considered for the application of VLC in intensive care or general hospital environments. This illuminance diversity naturally affects the link ranges. Thus, before a performance analysis of the overall system, we conducted illuminance measurements for several values of LED bias current (Ibias). Figure 5 shows the light intensity as a function of Ibias, at 0.3 and 2.0 m. We chose these distances to evaluate the impact of photodetection saturation at short distances and the effect of attenuation at longer link lengths. As expected, the illuminance decreases with the link length. A nearly linear behavior at both distances is clear from Fig. 5. The intensities provided by Ibias between 100 and 200 mA should be taken into account in scenarios where visual comfort is required. Another important observation is the maintenance of the VLC link with Ibias ≈ 10 mA, even when an illuminance below 10 lux is required. The high light intensities measured with currents above 800 mA must be avoided, due to LED heating that can damage the optical device.

Fig. 3 Bit rates measured in uplink and downlink at different distances

Fig. 4 PLC signals measured in time (a) and frequency (b) domains

Fig. 5 Measurements of light intensity with several values of LED bias current (Ibias), at VLC links of 0.3 and 2.0 m

Fig. 6 EVM versus LED bias current, measured at VLC links of 0.3 and 2.0 m

4.3

Performance Evaluation of the VLC System

Figure 6 shows the EVM performance as a function of the LED bias current, measured at VLC link lengths of 0.3 and 2.0 m. It can be seen from Fig. 6 that, at 0.3 m, the system performance increases with the bias current. The plateau registered at Ibias ≥ 200 mA is due to the inherent electronic noise of the overall system. The lower performance at currents below 200 mA is due to the low levels of LED luminance (see Fig. 5). Despite the performance penalties, the system behavior at 2.0 m is similar when Ibias ≤ 200 mA. As expected, in this case, the performance decreases with the bias current due to LED saturation. Nevertheless, the system remains robust, as depicted by the constellation diagram shown in the inset of Fig. 6. This observation is also corroborated by the pictures shown in Fig. 7. It can be observed from Fig. 7 that the data generated by the sensors of the multiparametric monitor (depicted in Fig. 7a) successfully propagated through the integrated channels to the computer located at the receiver, as illustrated in Fig. 7b. It is worth noting that data from several multi-parametric monitors can be transmitted at the same time in the integration evaluated in this work. In this context, the multiple access scheme exploited in the experimental works described in [20] should be employed in the data layer implemented in the PLC equipment.
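The EVM figures reported in this section can be computed from the received and reference constellations. Below is a generic sketch (Python/NumPy, not the paper's Matlab processing), assuming the common definition of EVM as the RMS error vector normalized by the RMS reference amplitude, expressed in dB; the noise level in the example is arbitrary.

```python
import numpy as np

def evm_db(received, reference):
    """EVM in dB: RMS error vector normalized by the RMS reference amplitude."""
    err = np.sqrt(np.mean(np.abs(received - reference) ** 2))
    ref = np.sqrt(np.mean(np.abs(reference) ** 2))
    return 20.0 * np.log10(err / ref)

# Example: unit-energy QPSK reference corrupted by small additive noise
rng = np.random.default_rng(0)
ref = (rng.choice([-1, 1], 1000) + 1j * rng.choice([-1, 1], 1000)) / np.sqrt(2)
rx = ref + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
result = evm_db(rx, ref)   # about -23 dB for this noise level
```

Lower (more negative) EVM values indicate a received constellation closer to the ideal one, which is why the plateau in Fig. 6 corresponds to the noise floor of the overall system.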

5

Conclusions

A successful integration between power line and visible light communication systems was experimentally demonstrated in

this work. We chose the VLC technology for the transmission of patient data, obtained by the sensors of multi-parametric monitors, due to its immunity to interference from radio frequency wireless signals and to the prohibition of RF propagation in some classified areas of hospitals, such as intensive care units. The low-cost advantage promised by the PLC technology justified its choice as a VLC backhaul technology. The integration was also motivated by a common OFDM format in the PHY layer of both technologies. The real data was successfully (EVM = −24 dB) transmitted in a ≈ 4.8 Mb/s VLC system over a link of 2.0 m. On the other hand, a bit rate of around 10 Mb/s, achieved in a downlink PLC channel of 21.7 m, demonstrates the suitability of this technology as a promising backbone, especially when the integration is applied to central data monitoring. As future work, we plan to evaluate the effects of real-time transmission drawbacks. To this end, we will consider the possibility of manufacturing prototypes for communications in both downlink and uplink.

Fig. 7 a Data generated in the multi-parametric monitor. b The same data received and displayed in a graphical interface

Acknowledgements The authors acknowledge the support from the FAPES 80599230/17, 538/2018, 84343338, 601/2018, and CNPq 307757/2016-1, 304564/2016-8, 309823/2018-8 research projects.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Banaee H, Ahmed M, Loutfi A (2013) Data mining for wearable sensors in health monitoring systems: a review of recent trends and challenges. Sensors 13:17472–17500
2. Pramanik P, Nayyar A, Pareek G (2019) WBAN: driving e-healthcare beyond telemedicine to remote health monitoring: architecture and protocols. In: Telemedicine technologies. Elsevier, pp 89–119
3. Krishnamoorthy S, Reed J, Anderson C, Robert P, Srikanteswara S (2003) Characterization of the 2.4 GHz ISM band electromagnetic interference in a hospital environment. In: Proceedings of the 25th annual international conference of the IEEE engineering in medicine and biology society (IEEE Cat. No. 03CH37439), vol 4. IEEE, pp 3245–3248
4. Al Kalaa M, Butron G, Balid W, Refai H, LaSorte N (2016) Long term spectrum survey of the 2.4 GHz ISM band in multiple hospital environments. In: 2016 IEEE wireless communications and networking conference workshops (WCNCW). IEEE, pp 246–251
5. Rao H, Saxena D, Kumar S et al (2014) Low power remote neonatal temperature monitoring device. In: Biodevices, pp 28–38
6. IEEE Standards Association et al (2011) IEEE standard for local and metropolitan area networks—Part 15.7: short-range wireless optical communication using visible light. IEEE Computer Society
7. Huo Y, Prasad G, Atanackovic L, Lampe L, Leung V (2018) Grid surveillance and diagnostics using power line communications. In: 2018 IEEE international symposium on power line communications and its applications (ISPLC). IEEE, pp 1–6
8. Cano C, Pittolo A, Malone D, Lampe L, Tonello A, Dabak A (2016) State of the art in power line communications: from the applications to the medium. IEEE J Sel Areas Commun 34:1935–1952
9. Pinomaa A, Ahola J, Kosonen A, Nuutinen P (2014) Applicability of narrowband power line communication in an LVDC distribution network. In: 18th IEEE international symposium on power line communications and its applications. IEEE, pp 232–237
10. Mlynek P, Koutny M, Misurec J, Kolka Z (2014) Measurements and evaluation of PLC modem with G3 and PRIME standards for street lighting control. In: 18th IEEE international symposium on power line communications and its applications. IEEE, pp 238–243
11. Castor L, Natale R, Silva J, Segatto M (2014) Experimental investigation of broadband power line communication modems for onshore oil & gas industry: a preliminary analysis. In: 18th IEEE international symposium on power line communications and its applications. IEEE, pp 244–248
12. Tonello A, Siohan P, Zeddam A, Mongaboure X (2008) Challenges for 1 Gbps power line communications in home networks. In: 2008 IEEE 19th international symposium on personal, indoor and mobile radio communications. IEEE, pp 1–6
13. Ndjiongue A, Ferreira H, Shongwe T (2016) Inter-building PLC-VLC integration based on PSK and CSK techniques. In: 2016 international symposium on power line communications and its applications (ISPLC). IEEE, pp 31–36
14. Komine T, Nakagawa M (2003) Integrated system of white LED visible-light communication and power-line communication. IEEE Trans Consum Electron 49:71–79
15. Ding W, Yang F, Yang H et al (2015) A hybrid power line and visible light communication system for indoor hospital applications. Comput Ind 68:170–178
16. Cho YS, Kim J, Yang WY, Kang CG (2010) MIMO-OFDM wireless communications with MATLAB. Wiley
17. Barros D, Kahn J (2008) Optimized dispersion compensation using orthogonal frequency-division multiplexing. J Lightwave Technol 26:2889–2898
18. Monteiro F, Costa W, Neves J et al (2020) Experimental evaluation of pulse shaping based 5G multicarrier modulation formats in visible light communication systems. Opt Commun 457:124693
19. Zwaag K, Neves J, Rocha H, Segatto M, Silva J (2019) Adaptation to the LEDs flicker requirement in visible light communication systems through CE-OFDM signals. Opt Commun 441:14–20
20. Castor L, Natale R, Favero J, Silva J, Segatto M (2016) The smart grid concept in oil & gas industries by a field trial of data communication in MV power lines. J Microw Optoelectron Electromagn Appl 15:81–92

Design of an Alternating Current Field Controller for Electrodes Exposed to Saline Solutions J. L. dos Santos, S. A. Mendanha, and Sílvio Leão Vieira

Abstract

1

The production of giant unilamellar vesicles (GUVs) has been the subject of many studies due to its simplicity and its ability to mimic essential complex functions of biological membranes. Precise control of the medium temperature and of the electric field amplitude in saline solutions is crucial for the formation of GUVs to be used in analysis training protocols for biomedical applications. Herein, we propose automatic methods for controlling the temperature and stabilizing the electric field amplitude in saline solution. The control system is based on a microcontroller that uses pulse width modulation to drive the actuation devices. The field amplitude stability over time was assessed by applying an electric field with a frequency of 500 Hz and a voltage amplitude of 600 mVrms. Two algorithms for controlling the field amplitude were analyzed: bang-bang and proportional-integral-derivative (PID). Their performance was evaluated in saline solutions kept at room temperature and at 60 °C. The mean values of the field in the absence of control, with bang-bang control, and with PID control differed from the initial value by 17%, 1%, and 0.3%, respectively. Based on these findings, the PID protocol proved to be the most effective in maintaining the field values over time.

Keywords: Proportional-integral-derivative (PID) controller · Bang-bang controller · Alternating electric field · Temperature · Giant unilamellar vesicles (GUVs)
J. L. dos Santos  S. A. Mendanha  S. L. Vieira (&) Federal University of Goiás, Institute of Physics, Avenida Esperança s/n, Câmpus Samambaia, Goiânia, Brazil e-mail: [email protected]

Introduction

There are several technological applications in which precise control of applied electric fields must be guaranteed [1–6]. The control of field fluctuations induced by ionic charges on electrodes exposed to saline solutions is essential in biophysics experimentation. For instance, the AC (alternating current) amplitude plays a fundamental role during giant unilamellar vesicle (GUV) electroformation [7–9]. Moreover, even though the exact mechanism of GUV electroformation is not yet fully understood, several authors have reported that the generation of vesicles in the presence of electric fields is also strongly dependent on the temperature and the frequency of the applied fields [7–9]. However, as far as we know, there is no such equipment available commercially, and the design and development of these electroforming chambers is the responsibility of each research group interested in this type of system. Several models have been described in the literature over the years [7–10], mostly using an electric field produced by a low-frequency AC source, 0.5–1.0 kHz, with a voltage amplitude between 2.0 and 3.0 V applied to electrodes immersed in pure aqueous or saline solutions. Besides, the currently reported electroforming systems use general-purpose laboratory instruments, requiring manual tuning and, consequently, making it difficult to fully control these physical quantities in the presence of ionic charges. This work presents a proposal for automated AC-field and temperature controllers incorporated into GUV electroforming chambers to improve the vesicle yield, although general biological applications are possible. The goal is to minimize the user's intervention during the electroformation procedure and, mainly, to enhance the control over its physical parameters. This configuration introduces innovative advances over pre-existing electroforming chambers, especially for processes involving saline solutions.

J. L. dos Santos  S. L. Vieira Department of Electrical and Computer Engineering, Federal University of Goiás, Goiânia, Brazil © Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_115


2


Material and Methods

The electronic automation of the electroforming chambers is based on an ATmega328p® microcontroller (Atmel, San José, California, USA), which is low cost and easy to apply. It operates with a clock frequency of 16 MHz, and its 10-bit analog-to-digital converter (ADC) uses an external reference voltage of 4.092 V. For electrostimulation, a sine waveform generator is formed by a Wien bridge oscillator and an amplitude stabilization circuit, with a fixed 500 Hz output frequency (0.1% stability), a voltage amplitude of 3.0 V, and a maximum distortion of around 1%. The electroformation chamber operates at a sampling frequency of 1 Hz for data acquisition. The stimulation voltage applied to the electrodes ranges from 50 to 600 mVrms at 500 Hz. The electrodes are formed by two thin platinum (Pt) wires, (6.0 × 1.0 ± 0.2) mm, arranged parallel to each other; the distance between the platinum electrodes is 3 mm. Consequently, the electric field between the two conducting electrodes can be estimated: based on these quantities, the field amplitude ranges from 16.7 mV/mm to 200 mV/mm. Figure 1 illustrates the functional description of the control electrodes, including the primary functional steps and an image description of the chambers and electrodes.
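The field range quoted above follows from the uniform-field approximation E ≈ V/d between the parallel electrodes; a quick numeric check (fringing fields around the thin wires are ignored, so this is only an estimate):

```python
# Uniform-field (parallel-plate) approximation between the two electrodes
d_mm = 3.0                       # electrode separation quoted above
v_range_mv = (50.0, 600.0)       # stimulation amplitude range, mVrms

e_range = tuple(v / d_mm for v in v_range_mv)   # field amplitude, mV/mm
# e_range[0] is about 16.7 mV/mm and e_range[1] is 200.0 mV/mm, matching the text
```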

2.1 AC-Field Control The amplitude of the AC electric field applied to the electrodes was controlled with a PWM-controlled attenuator. This attenuator uses a resistance ratio to modify the gain of an operational amplifier, thereby attenuating the amplitude of the AC field. The voltage amplitude applied to the electrodes was read using an alternating-current (AC) to direct-current (DC) converter (true RMS-DC converter). This module is based on four operational amplifiers that provide the analog RMS-DC signal to be digitized by the ADC, thus avoiding the use of accessory instruments such as function generators and oscilloscopes. The signal digitized by the microcontroller is used as an input variable for the control comparisons. This signal is also collected directly from the electrodes and is used to correct the previously read voltage amplitude value.

Fig. 1 a Diagram representation of the main components of the AC-field and temperature PID controllers. b Chambers/electrodes illustration: 1 aluminium heat sink; 2 platinum electrodes (6.0 × 1.0 ± 0.2) mm separated by 3 mm; 3 TPBPE; 4 NTC sensor

shape, (70 × 22 × 18 ± 0.2) mm, of aluminum that covers a block of polytetrafluoroethylene (PTFE), with a pair of TPBPE of dimensions (15.0 × 15.0 × 3.2 ± 0.2) mm fixed on its surface. The PWM (pulse width modulation) method was responsible for the power control. A hot and cold source inversion feature of the thermoelectric plates was used to obtain heating or cooling by controlling the direction of the current flow. The current flowing through the TPBPE was designed not to exceed the maximum value of 3.34 A. Moreover, an NTC (negative temperature coefficient) sensor was connected to one end of the electroforming chamber's PTFE structure to monitor the system temperature. As the NTC sensor is not directly attached to the electrodes, a calibration is performed to obtain a correct temperature measurement.

2.2 Temperature Control 2.3 Control Algorithms Thermoelectric plates using the Peltier effect (TPBPE) control the electroforming chambers and, consequently, the electrodes’ temperature. The electroforming chamber has a rectangular

To minimize AC-field fluctuations, two different control algorithms were implemented. The first was a bang-bang


controller (two-step or on-off controller), also known as a hysteresis controller. According to the desired setpoint, the control signal could assume values between two reference signals, a lower and an upper limit. This approach is commonly known as hysteresis. Although the hysteresis protocol minimizes constant control triggering, it lets the controlled variable oscillate around the operating point or setpoint. The second protocol was the PID (proportional-integral-derivative) control method, characterized by a combination of proportional, integral, and derivative control techniques. PID control increases the accuracy and efficiency of the process using a feedback loop mechanism. Nevertheless, the PID algorithm requires fine-tuning of the proportional (Kp), integral (Ki), and derivative (Kd) gains. This tuning depends on the variables that we intend to control, so a self-tuning algorithm based on the accumulation of the gains is helpful to minimize errors. The temperature tuning process was performed with an aqueous solution containing 150 mM of NaCl and a setpoint of 60 °C, providing the values of Kp, Ki, and Kd. Additionally, we verified the PID control algorithm in aqueous solutions containing 0 and 600 mM of NaCl using the same setpoint. A similar procedure was performed for the AC-field PID control using a setpoint of 400 mVrms.

2.4 Experimental Protocol The data were collected using platinum electrodes supported by a PTFE base (as illustrated in Fig. 1). In each measurement, 525 µL of the desired aqueous solution was applied to the electrodes. The ionic solutions (phosphate-buffered saline, PBS) containing 150 and 600 mM of NaCl were prepared as described in reference [11]. Data acquisition was performed for 120 min, and the voltage and temperature values were collected every second.
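The two control strategies compared in this paper can be sketched as follows. This is an illustrative Python model, not the authors' microcontroller firmware: the gains and the drifting "plant" (which loses 1% of its amplitude per sample, loosely mimicking the ionic-charge decay) are invented for the demonstration.

```python
def bang_bang(measured, setpoint, hysteresis=5.0):
    """On-off control: full drive below the band, no drive above it."""
    if measured < setpoint - hysteresis:
        return 1.0
    if measured > setpoint + hysteresis:
        return 0.0
    return None  # inside the hysteresis band: leave the actuator unchanged


class PID:
    """Textbook discrete PID; the gains used below are invented for the demo."""

    def __init__(self, kp, ki, kd, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, setpoint):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy plant: the amplitude drifts down by 1% per sample unless the
# controller injects corrective drive.
pid = PID(kp=0.5, ki=0.2, kd=0.05)
amplitude = 600.0              # mVrms, starting at the setpoint
for _ in range(200):           # 1 Hz sampling, as in the acquisition
    amplitude += -0.01 * amplitude + pid.update(amplitude, 600.0)
```

The integral term is what lets the PID loop cancel the steady drift entirely, whereas the bang-bang controller, by construction, keeps bouncing the variable between the two hysteresis limits — consistent with the instabilities reported in the Results section.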

3

Results and Discussions

Figure 2 shows the behavior of the AC-voltage values measured in Pt-electrodes immersed in aqueous solutions containing 0, 150, and 600 mM of NaCl, respectively, at 60 °C. The response of the system for the three aqueous solutions shows an exponential decrease. Compared with the reference value, the difference can reach up to 17% in 120 min. This denotes the necessity of a control algorithm to maintain the setpoint value. Different time constants characterize the exponential decays, so, by fitting the AC-field data, we can obtain the time constants for the three samples. The time constant obtained for the pure aqueous solution was approximately 27.0 min, while, for the two saline solutions, the time constants were about 13.0 min. Notably, the time constants for the saline solutions do not match that of the aqueous solution, which seems to indicate that free ions drastically contribute to the inaccuracy in the application of the AC-field. The effects related to the evanescent field are due to the ionic charges present in the solutions, which cause the abrupt response after the stimulation is turned off. This is related to ionic polarization, which involves the relative displacement of positive and negative ions. This type of polarization is slower than electronic polarization and involves larger masses. At high frequencies, the ions cannot respond to the rapidly changing field, and the coupling between the field and the ions is negligible. Thus, when the field is turned off, the ionic solution maintains its dipole moments aligned with the original field for a certain time. This phenomenon causes a voltage spike during depolarization. As depicted in Fig. 2a–c, the transient field after depolarization is a function of the ionic concentration of the solutions. Pure water has negligible conductivity; although it is polarized by the electric field, it showed a smaller transient response after the field was turned off.

Figure 3a shows the result of the bang-bang algorithm's execution during data acquisition of the AC-field for Pt-electrodes immersed in a PBS solution containing 150 mM of NaCl for different setpoint values. As can be seen, the control system response was unstable, with the voltage values exceeding the desired setpoints, mostly for the higher ones. These instabilities probably occur due to the hysteresis presented by the system close to the setpoint values. To minimize the AC-field instabilities during the data acquisition, we executed a PID algorithm. Consequently, the system response provides new useful values of the AC-voltage for Pt-electrodes immersed in a PBS solution containing 150 mM of NaCl, as presented in Fig. 3b.
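For readers wishing to reproduce the time-constant extraction, a log-linear fit suffices when the asymptotic level is known (or estimated from the tail of the record). The sketch below uses synthetic data with invented values (τ = 13 min, mirroring the saline result); it is not the authors' fitting procedure.

```python
import numpy as np

# Synthetic decay resembling the measured drift: V(t) = V_inf + A * exp(-t/tau)
t = np.linspace(0.0, 120.0, 241)     # minutes, matching the 120-min acquisition
tau_true, v_inf, a = 13.0, 500.0, 100.0
v = v_inf + a * np.exp(-t / tau_true)

# With the asymptote V_inf subtracted, log(V - V_inf) is linear in t with
# slope -1/tau, so an ordinary least-squares line recovers the time constant.
slope, intercept = np.polyfit(t, np.log(v - v_inf), 1)
tau_est = -1.0 / slope
```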
As can be seen, the PID control performed by the ATmega328p® microcontroller using the parameters described in the Methods section proved to be very useful. The PID control can guarantee a standard deviation of less than 2.0 mVrms for the different aqueous solutions. Note that it completely suppressed the oscillations induced by the absence/presence of ionic charges, compared to the bang-bang protocol. Table 1 compares the control stability of the two algorithms used in this work. The statistical analysis was performed at the setpoint of 600 mVrms, selected as the most unstable voltage level, as shown in Fig. 3a. Temperature stability is an important parameter required for the electroforming process [4, 7]. As described in the Methods section, an independent PID control algorithm was also executed to maintain the electrodes at a constant temperature during the data acquisition. As verified for the AC-field implementation, this kind of control proved to be very efficient in guaranteeing setpoint stability. The PID control was also applied to the temperature, and its


Fig. 2 Behavior of the AC-voltage values (without any control algorithm) measured in Pt-electrodes immersed in a aqueous solution; b PBS-NaCl 150 mM; and c PBS-NaCl 600 mM. The blue lines indicate the setpoint values. Data acquisition was performed at a controlled temperature of 60 °C


Fig. 3 a, b Behavior of the AC-voltage measured in Pt electrodes immersed in a solution containing 150 mM of NaCl for different setpoint values. c Temperature control of Pt-electrodes immersed in a saline solution. The red lines indicate the setpoint values

Table 1 AC-voltage parameters obtained for Pt-electrodes immersed in different aqueous solutions at 60 °C at the setpoint of 600 mVrms

Bang-bang algorithm
Sample          N      Mean     SD     Min      Max
Aqu. solution   1489   599.09   4.85   588.80   614.56
NaCl 150 mM     1484   576.87   7.19   561.20   609.04
NaCl 600 mM     1485   590.40   6.66   575.92   616.40

PID algorithm
Sample          N      Mean     SD     Min      Max
Aqu. solution   1471   599.99   1.47   595.13   605.82
NaCl 150 mM     1472   599.29   1.54   593.57   603.58
NaCl 600 mM     1467   599.45   1.37   595.01   603.06

Note N represents the number of points at the voltage level; Mean the arithmetic mean value; SD the standard deviation; Min the minimum value; and Max the maximum value, in mV

Table 2 Temperature parameters obtained from Pt-electrodes immersed in a PBS-NaCl 600 mM solution at 60 °C

Sample          N      Mean    SD     Min     Max
NaCl 600 mM     7648   59.45   0.36   59.01   61.15

Note N represents the number of iterations; Mean the arithmetic mean value; SD the standard deviation; Min the minimum value; and Max the maximum value, in °C

response is depicted in Fig. 3c. A standard deviation of less than 0.5 °C was obtained for the mean value, whereas the maximum temperature did not exceed 62 °C. Table 2 presents a compilation of the statistical analysis associated with the PID control for a temperature setpoint of 60 °C. Finally, as reported by Sooksood and co-authors [12], excess charge accumulation (in our case, ionic charges) could lead to electrolysis with electrode dissolution. Passive and active (closed-loop) charge balancing techniques have been reported, including electrode discharging, blocking capacitors, quasi-static electrode potential expression, pulse insertion, and offset regulation methodologies [12]. Our approach resembles the offset regulation techniques, but we chose to implement a PID controller to minimize the current spikes and timing conflicts due to subsequent stimulations. A PID control technique was already used by our group to maintain the desired AC-field on Pt electrodes used for the electroformation of GUVs [13].

4

Conclusions

The presence of ionic charges in the aqueous solutions produced local electric fields that induced oscillations in the AC-field applied to the electrodes. However, the PID protocol proved to be very useful in maintaining the setpoint values for both the AC-field and the temperature of electrodes immersed in aqueous solutions containing 150 and 600 mM of NaCl.

Conflict of Interest The authors have no conflicts of interest.

References 1. Ogata K (2010) Transient and steady-state response analyses. Modern control engineering, 5th edn. Pearson Education, Inc., publishing as Prentice Hall, New Jersey, USA, Chapter 5, pp 159– 268 2. Ang KH, Chong G, Li Y (2005) PID control system analysis, design, and technology. IEEE Trans Contr Sys Tech 13:559 3. Levine WS (2011) The control handbook control system applications, 2nd edn. Section VII 4. Cogan SF (2008) Neural stimulation and recording electrodes. Annu Rev Biomed Eng 10:275–309 5. Donaldson NN, Donaldson PEK (1986) When are actively balanced biphasic (‘Lilly’) stimulating pulses necessary in a neurological prosthesis? I. historical background; Pt testing potential; Q studies. Med Biol Eng Comput 24:41–49 6. Donaldson NN, Donaldson PEK (1986) When are actively balanced biphasic (‘Lilly’) stimulating pulses necessary in a neurological prosthesis? II—pH changes; noxious products; electrode corrosion; discussion. Med Biol Eng Comput 24:50–56 7. Li Q, Wang X, Ma S, Zhang Y, Han X (2016) Electroformation of giant unilamellar vesicles in saline solution. Colloids Surf, B 147:368–375 8. Politano TJ, Froude VE, Jing B, Zhu Y (2010) AC-electric field-dependent electroformation of giant lipid vesicles. Colloids Surf B 79(1):75–82 9. Bagatolli LA, Needham D (2014) Quantitative optical microscopy and micromanipulation studies on the lipid bilayer membranes of giant unilamellar vesicles. Chem Phys Lipid 181:99–120 10. Drabik D, Doskocz J, Przybyło M (2018) Effects of electroformation protocol parameters on quality of homogeneous GUV populations. Chem Phys Lipid 212:88–95

11. Mendanha SA, Alonso A (2015) Effects of terpenes on fluidity and lipid extraction in phospholipid membranes. Biophys Chem 198:45–54
12. Sooksood K, Stieglitz T, Ortmanns M (2009) Recent advances in charge balancing for functional electrical stimulation. Ann Int Conf IEEE EMBS 31:5518–5521

13. dos Santos JL, Mendanha AS, Vieira SL, Gonçalves C (2019) Portable proportional-integral-derivative controlled chambers for giant unilamellar vesicle electroformation. Biomed Phys Eng Express 5:047002

Electrooculography: A Proposed Methodology for Sensing Human Eye Movement

G. de Melo and Sílvio Leão Vieira

Abstract


Bioelectric signals are emanations from living biological systems and originate in the various electrical potentials of cells. In this context, electrooculography (EOG) is defined as the measurement of a specific biopotential generated by the movement of the eye and eyelid muscles. Here, we are dedicated to developing a methodology and an appropriate environment to detect human eye movement by acquiring bioelectric signals. The study was conducted on 40 healthy individuals using the 4/5 configuration, consisting of four main electrodes and an additional fifth reference electrode. The methodology follows standards and guidelines for the data acquisition of EOG signals. During acquisition, the volunteer remains seated with the head comfortably supported on a tailored Ganzfeld box. On the back wall of the box, five light-emitting diodes (LEDs) were fixed in the shape of a plus sign, four at the extremities and one at the center. The LEDs were turned on or off according to a pre-established sequence protocol synchronized with data acquisition while the volunteer performed eye movements. As a result, 18 EOG signal samples of 60 s each were obtained per subject. A dedicated algorithm was used to separate the data into groups according to the established methodology.





Keywords: Electrooculography (EOG) · Eye movement · Bioelectric signals · Ganzfeld



G. de Melo · S. L. Vieira (corresponding author)
Department of Electrical and Computer Engineering, Federal University of Goiás, Avenida Esperança s/n, Câmpus Samambaia, Goiânia, Brazil
e-mail: [email protected]

S. L. Vieira
Institute of Physics, Federal University of Goiás, Goiânia, Brazil

1 Introduction

Electrooculography (EOG) biosignals are electrical signals generated by the eye's biopotentials, which arise from a constant retina–cornea voltage source [1, 2]. EOG biosignals are eye-angle dependent, which is the central property that makes voltage measurement possible. EOG signals can be studied and applied to the diagnosis of eye diseases and to interaction aids for people with limited movement. Indeed, in the last decade a plethora of new applications have been proposed in areas such as robotics [3–5]; human–computer interaction (HCI) [6–13]; noise reduction by digital signal processing techniques [14, 15]; healthcare [1, 16]; and machine learning [2, 17, 18]. EOG is classified as a bioelectric signal that measures and records the resting potential of the retina. The human eye functions as a dipole in which the frontal cornea represents the anode and the rear retina the cathode [19]. Such signals are acquired through electrodes positioned at the outer corners of the eyes, both vertically and horizontally. As the eyeball moves, the cornea approaches one electrode, whose potential increases, while the potential of the opposite electrode decreases. According to [7], many experiments consider the cornea a positive pole and the retina a negative pole. These two poles generate a small electric field, which can be detected by electrodes placed on the forehead, temple, and upper part of the cheek. Typically, the EOG signal has a differential potential ranging from 50 to 3500 µV in amplitude and a frequency ranging from 0.1 to 20 Hz [7]. Other authors report different values for both amplitude and frequency: 30–50 µV and 7–20 Hz [19]; 10–100 µV and 0.1–15 Hz [20]; 0.4–1.0 mV and 0.5–30 Hz [21]; and 10–100 µV and 0.1–38 Hz [15], respectively. In the field of diagnosis and treatment of vision, the most crucial step in the study of EOG is signal acquisition.
To accomplish this, it is necessary to develop an enabling

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_116


environment, quality devices and materials, and very objective methodologies, so that the acquisition can be carried out within the standards. There are several methodologies for acquiring EOG signals, and the acquisition equipment is, in most cases, expensive [7, 15, 19–21]. Herein, we propose a cost-effective environment and a new methodology in accordance with standard guidelines for the data acquisition of EOG signals.


2 Materials and Methods

This section describes the methodology for carrying out the acquisition of EOG signals. First, the acquisition device is configured to capture EOG signals. After this configuration, the procedures to accomplish the data acquisition were defined. The data acquisition used the bioelectric instrument Miotool (200/400 USB, Miotec Equipamentos Biomédicos Ltda, RS, Brazil). It allows data acquisition of surface electromyography signals with the support of the Miograph USB software. According to the equipment manual, the system meets all safety standard records for electromedical equipment: NBR IEC 601.1/1994 and EMENDA (1997), NBR IEC 60601.1.2:2006, and Particular Standard NBR IEC 60601.2.40/1998 [22]. The device has the following specifications [22]:

• 14 bits of resolution;
• channel acquisition rate of 2000 samples per second;
• low noise level;
• common-mode rejection of 110 dB;
• safety isolation of 3000 Vrms.

The connection between the electrodes and the Miotool 200/400 USB used a Differential Surface Sensor (SDS500) with a claw connection. This type of connection is suitable for fixing electrodes in places that are not flat and are difficult to fix, such as the face. Thus, the claw-type SDS500 allows adjustments to the electrodes' placement, facilitating signal acquisition [22]. The Miograph USB software was previously configured to acquire EOG signals. For the type of acquisition, the sEMG S option was chosen, referring to EMG signals from the skin surface. Two acquisition channels were used to measure the horizontal and vertical eye movements, with two sensing electrodes connected to each channel. In addition, a fifth reference electrode was connected to a specific input of the equipment. The main settings for both channels are:

• 0.1 Hz high-pass filter (HPF) of fourth order;
• 40 Hz low-pass filter (LPF) of fourth order;

• 60 Hz notch filter of fourth order;
• Maximum Voluntary Contraction (MVC) set to a maximum of 900 mV.

With the device configuration completed, the methodological procedures and the data acquisition of EOG signals were initiated. The measurements were accomplished following the clinical standard methods for analysis of the electrooculogram recommended by ISCEV [23]. In the literature, there are many descriptions of electrode placement for acquiring EOG signals [15, 19–21]. Bharadwaj and colleagues [24] explain that the number of electrodes for acquiring EOG signals can be 2/3, 3/4, 4/5, or 7/8, where the first number denotes the number of active electrodes and the second the total number of electrodes, including the reference electrode. The choice depends on the application involved in the research. This work used the 4/5 configuration, based on the characteristics of the device employed. Figure 1a shows the electrode arrangement in the configuration described. The electrodes used were disposable ECG electrodes (2223BRQ, 3M, Sumaré, SP, Brazil). Each electrode consists of a polyethylene foam backing covered with hypoallergenic acrylic adhesive on one side and laminated with printed polypropylene tape on the other; conductive adhesive gel; and a stainless steel pin and polymer counter-pin covered with a silver/silver chloride treatment, with a size of 4.5 × 3.8 cm [25]. The acquisition of EOG signals was carried out in two phases: the first with the volunteer exposed to light and the second in the absence of light. According to the ISCEV standard [23], a recording lasting from 7 to 12 min increases the eye's bioelectric potential until it reaches the Light Peak (LP). In the second phase, between 10 and 15 min in the absence of light, there is a drop in potential until it reaches what can be called the Dark Valley (DV). Thus, according to the presented standard, the acquisition test was performed over 10 to 12 min for each phase.
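The per-channel filter chain above (fourth-order 0.1 Hz high-pass, fourth-order 40 Hz low-pass, and a 60 Hz notch, at 2000 samples per second) can be reproduced offline. The following is a minimal sketch assuming SciPy is available; the filter design mirrors the listed cutoffs, not the vendor's exact implementation, and the notch quality factor Q is our assumption:

```python
import numpy as np
from scipy import signal

FS = 2000  # sampling rate of the acquisition device (samples/s)

# Fourth-order Butterworth band-pass (0.1-40 Hz) approximating the
# cascaded HPF + LPF described in the text.
sos_band = signal.butter(4, [0.1, 40], btype="bandpass", fs=FS, output="sos")

# 60 Hz notch for power-line interference (Q = 30 chosen arbitrarily here).
b_notch, a_notch = signal.iirnotch(60, Q=30, fs=FS)

def condition_eog(raw):
    """Apply zero-phase band-pass then notch filtering to a raw channel."""
    x = signal.sosfiltfilt(sos_band, raw)
    return signal.filtfilt(b_notch, a_notch, x)

# Example: a 5 Hz "EOG-like" component buried in 60 Hz interference.
t = np.arange(0, 10, 1 / FS)
raw = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
clean = condition_eog(raw)
```

Zero-phase (`filtfilt`) filtering is used so the slow EOG waveform shapes are not distorted by filter delay.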
To control the luminosity and the stimulus environment described in the International Society for Clinical Electrophysiology of Vision (ISCEV) standard [23], it was necessary to use a closed environment insulated from external light during signal acquisition. Specifically, this environment is a place with internal control of luminosity (the standard specifies 100 cd/m², equivalent to 100 lx) and stimulation of the volunteer's eye movement (LEDs at the bottom of the container). This environment is known as a Ganzfeld, as illustrated in Fig. 1b. The environment was designed following the guidelines and essential concerns established by the standard. The Ganzfeld was based on a 79 × 56 × 47 cm cardboard box with a square window opening on its front surface


Fig. 1 a Electrode arrangement on the face for acquiring EOG signals (4/5 configuration). b Illustration of the internal LEDs for illumination (yellow bands) and the support for the volunteer's head (blue)

of 27 × 27 cm. These measures were chosen based on the box material available at the time, since there is no standardization of Ganzfeld box dimensions. However, specific rules do govern the internal luminosity and the layout of the light-emitting diodes (LEDs). Inside the Ganzfeld, an LED strip was fitted to control the lighting in accordance with the ISCEV standard. At the edge of the square opening, soft foam support was placed to support the volunteer's head. The previous figure illustrates the spatial dimensions of the Ganzfeld prototype used in this research, the disposition of the installed LED strip, the support bracket, and the LEDs on the Ganzfeld's bottom.

Five LEDs were installed on the rear surface inside the box, in the shape of a plus sign. They were configured to light up according to three predefined sequences to stimulate the volunteer's eye movement. Four light emitters are arranged at the extremities of the plus sign and one at the center of the cross. The central LED is 25 cm from the bottom edge of the box and 39 cm from the side edges. Each of the other four LEDs is 10 cm in a straight line from the central LED. The central LED was arranged so that it was approximately in line with the volunteer's eyes. Moving the eyes from the central LED towards one of the other LEDs subtends an angle, α, of about 11.31°.
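The quoted angle follows from simple trigonometry. The sketch below assumes an eye-to-rear-wall distance of about 50 cm, which is our assumption: the text gives the 10 cm LED offset but does not state the viewing distance explicitly.

```python
import math

led_offset = 10.0    # cm, distance from the central LED to each outer LED
eye_distance = 50.0  # cm, ASSUMED eye-to-rear-wall distance (hypothetical)

# Angle subtended when the gaze moves from the central LED to an outer LED.
alpha = math.degrees(math.atan(led_offset / eye_distance))
print(round(alpha, 2))  # 11.31
```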


Fig. 2 Expected bioelectric signal from a single blink and a double blink obtained on channel 2 (Vertical)

A microcontroller ATmega328 (Arduino Uno R3, Ivrea, Italy) is used to control the LED on–off sequences, as it is an easy-to-program and low-cost microcontroller. The Ganzfeld box's internal lighting is controlled by the LED strip's intensity, powered by an adjustable DC voltage source (1502DD, Yihua, China). For the internal lighting to reach 100 lx, a voltage of 10.1 V and an electrical current of 0.33 A were set. The luminous intensity inside the box was measured using a lux meter (MLM-1332, Minipa, Joinville, Brazil). Three different LED activation sequences were elaborated to stimulate the volunteer's eye movement. The sequences were configured so that the volunteer could not predict which eye movements would be performed next during data acquisition. Thus, these sequences served as guidance and induced the direction in which the individual's eyes would have to move. The sequences are based on the following eye movements: Blink, Double blink, Right-Center, Left-Center, Up-Center, Down-Center, Right-Left-Center, Left-Right-Center, Up-Down-Center, and Down-Up-Center. According to the established sequences, the signal acquisition procedure stimulated the eye to look horizontally, look vertically, or blink. The three sequences were first performed with the indoor box light turned on. In this procedure, three samples were obtained from each sequence, in the following order: Sequence 1, Sequence 2,

Sequence 3. Then, three more samples were obtained from each of the sequences with the interior light off. The microcontroller driving the LEDs and the bioelectric monitoring instrument were synchronized for data acquisition. The LEDs were turned on or off according to a pre-established sequence protocol while the volunteer performed eye movements. Signals were acquired from 40 healthy individuals, men and women, providing 720 samples. They were then arranged in two groups of 360 samples each, obtained in the lighted environment and in the dark. The time to take all raw data was about 12 h, 18 min for each person.


3 Results and Discussions

The EOG signal results from many factors, including eyeball rotation and movement, eyelid movement, electrode placement, head movements, and the influence of luminance. EOG systems are easily contaminated with drift in long-term measurements. The drift can be reduced by applying Ag/AgCl electrodes and filling them bubble-free with electrode gel. The separation and extraction of the movements of interest from the EOG signals were done through a custom algorithm. The aggregate data from one volunteer has 120,000 sampled points. The raw data was windowed with a duration of 60 s,


Fig. 3 Evoked potential recorded from channels 1 (horizontal) and 2 (vertical) in response to eyeball movement

which corresponds to 2,000 sampled points. The methodology used to detect eye movement is based on extracting the mean amplitude for each second of the analyzed sample: if the average amplitude in a given second is greater than the global signal's average amplitude, the algorithm identifies it as an eye movement signal. Finally, the algorithm generates a graph of the eye movement signals taken from the initial raw signal. As illustrated in


Fig. 2, we can see the signal produced by a single blink (Fig. 2a), depicted on the left, and a double blink (Fig. 2b), shown on the right of the picture. These two movements typically have a higher amplitude response than the other recorded movements, such as looking in the horizontal and vertical directions. The eye-blink amplitude is almost four times greater than that representing the right-to-center or up-to-center movement, as illustrated in Fig. 3. This figure illustrates a sequence of bioelectric responses of the eyes. All movement sequences take the centered gaze as reference. Figure 3a shows the signals from a horizontal (right-center) and a vertical (up-center) movement; Fig. 3b illustrates the movement described by a horizontal (left-center) and a vertical (down-center) displacement; Fig. 3c presents a sequence of three movements, horizontal (right-left-center) and vertical (up-down-center); and finally, Fig. 3d shows a horizontal (left-right-center) and vertical (down-up-center) movement of the eyeball. It is interesting to note that some movements have similar bioelectric behavior. For example, as can be seen in Fig. 3a, the response pattern of the right-to-center movement is equivalent to that of the up-to-center movement. In other words, different movements present signals with similar characteristics, alternating only the channel from which they are acquired.
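The per-second amplitude test described in the methodology above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and the 1 s window at 2,000 samples per second follow the description in the text:

```python
import numpy as np

FS = 2000  # samples per second reported for the acquisition device

def detect_movement_seconds(eog, fs=FS):
    """Return indices of the 1 s windows whose mean absolute amplitude
    exceeds the global mean absolute amplitude of the whole record."""
    n_seconds = len(eog) // fs
    x = np.abs(eog[: n_seconds * fs]).reshape(n_seconds, fs)
    per_second = x.mean(axis=1)      # mean amplitude of each second
    threshold = np.abs(eog).mean()   # global average amplitude
    return [i for i, amp in enumerate(per_second) if amp > threshold]

# Example: 10 s of low-level noise with a large deflection during second 4.
rng = np.random.default_rng(0)
sig = 0.01 * rng.standard_normal(10 * FS)
sig[4 * FS : 5 * FS] += 1.0  # simulated blink-like excursion
print(detect_movement_seconds(sig))  # the window containing the event
```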


4 Conclusions

In this report, we have proposed an environment and a methodology for EOG signal acquisition. The environment developed to obtain the data, called the Ganzfeld box, was easy to build. The project presented a relatively low cost, allowing it to be used anywhere. The methodology presented was shown to successfully obtain the bioelectric response of a group of volunteers' eye movements. The specifications of the proposed setup make it suitable for biomedical applications. The acquired EOG signal reflects different eye-related activities. Accordingly, many new systems can be designed to perform different tasks in the real world involving, e.g., Human–Computer Interaction (HCI) and Artificial Intelligence (AI). As a promising tool, efforts have been made in this direction to evaluate EOG signal processing using digital filters and feature-extraction algorithms.

Acknowledgements The authors are grateful to the volunteers for their prompt willingness to participate in the screening and survey, to Prof. Wesley Pacheco Calixto, from IFG, for experimental support, and to the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for financing the project in part.

Conflict of Interest The authors have no conflicts of interest.

Ethical Approval The study was approved by the Research Ethics Committee (CEP) of the Federal University of Goiás (UFG) under the Certificate of Presentation for Ethical Appreciation (CAAE) number 15226219.2.0000.5083 and Opinion Number 3.521.755.

References
1. Picot A, Charbonnier S, Caplier A (2011) EOG-based drowsiness detection: comparison between a fuzzy system and two supervised learning classifiers. IFAC Proc Vol 44(1):14283–14288. https://doi.org/10.3182/20110828-6-IT-1002.00706
2. Banerjee A, Datta S, Pal M, Konar A, Tibarewala DN, Janarthanan R (2013) Classifying electrooculogram to detect directional eye movements. Proc Technol 10:67–75. https://doi.org/10.1016/j.protcy.2013.12.338
3. Bozinovski S (1990) Mobile robot trajectory control: from fixed rails to direct bioelectric control. Proc IEEE Int Works Intell Motion Control 2:463–467. https://doi.org/10.1109/IMC.1990.687362
4. Chen Y, Newman WS (2004) A human-robot interface based on electrooculography. IEEE Int Conf Rob Autom Proc ICRA'04 1:243–248. https://doi.org/10.1109/ROBOT.2004.1307158
5. Rusydi MI, Okamoto T, Ito S, Sasaki M (2018) Controlling 3-D movement of robot manipulator using electrooculography. Int J Electr Eng Inform 10(1):170–185
6. Kim Y, Doh NL, Youm Y, Chung WK (2007) Robust discrimination method of the electrooculogram signals for human-computer interaction controlling mobile robot. Intell Autom Soft Comput 13(3):319–336. https://doi.org/10.1080/10798587.2007.10642967
7. Hossain Z, Shuvo MMH, Sarker P (2017) Hardware and software implementation of real time electrooculogram (EOG) acquisition system to control computer cursor with eyeball movement. In: 2017 4th international conference on advances in electrical engineering (ICAEE). IEEE, pp 132–137. https://doi.org/10.1109/ICAEE.2017.8255341
8. Nathan DS, Vinod AP, Thomas KP (2012) An electrooculogram based assistive communication system with improved speed and accuracy using multi-directional eye movements. In: 2012 35th international conference on telecommunications and signal processing (TSP). IEEE, pp 554–558. https://doi.org/10.1109/TSP.2012.6256356
9. He S, Li Y (2017) A single-channel EOG-based speller. IEEE Trans Neural Syst Rehabil Eng 25(11):1978–1987. https://doi.org/10.1109/TNSRE.2017.2716109
10. Yu Y, Liu Y, Yin E, Jiang J, Zhou Z, Hu D (2019) An asynchronous hybrid spelling approach based on EEG–EOG signals for Chinese character input. IEEE Trans Neural Syst Rehabil Eng 27(6):1292–1302. https://doi.org/10.1109/TNSRE.2019.2914916
11. Ogai S, Tanaka T (2017) A drag-and-drop type human computer interaction technique based on electrooculogram. In: 2017 Asia-Pacific signal and information processing association annual summit and conference (APSIPA ASC). IEEE, pp 716–720. https://doi.org/10.1109/APSIPA.2017.8282126
12. Heo J, Yoon H, Park KS (2017) A novel wearable forehead EOG measurement system for human computer interfaces. Sensors 17(7):1485. https://doi.org/10.3390/s17071485
13. Djeha M, Sbargoud F, Guiatni M, Fellah K, Ababou N (2017) A combined EEG and EOG signals based wheelchair control in virtual environment. In: 2017 5th international conference on electrical engineering—Boumerdes (ICEE-B). IEEE, pp 1–6. https://doi.org/10.1109/ICEE-B.2017.8192087
14. Mala S, Latha K (2016) Electrooculography de-noising: wavelet based approach to reduce noise. Int J Adv Eng Tech VII(II):482–487
15. Choudhury SR, Venkataramanan S, Nemade HB, Sahambi JS (2005) Design and development of a novel EOG biopotential amplifier. IJBEM 7(1):271–274
16. Venkataramanan S, Prabhat P, Choudhury SR, Nemade HB, Sahambi JS (2005) Biomedical instrumentation based on electrooculogram (EOG) signal processing and application to a hospital alarm system. In: Proceedings of 2005 international conference on intelligent sensing and information processing. IEEE, pp 535–540. https://doi.org/10.1109/ICISIP.2005.1529512
17. Aungsakun S, Phinyomark A, Phukpattaranont P, Limsakul C (2011) Robust eye movement recognition using EOG signal for human-computer interface. In: International conference on software engineering and computer systems. Springer, Berlin, Heidelberg, pp 714–723. https://doi.org/10.1007/978-3-642-22191-0_63
18. Gürkan G, Gürkan S, Uşakli AB (2012) Comparison of classification algorithms for EOG signals. In: 2012 20th signal processing and communications applications conference (SIU). IEEE, pp 1–4. https://doi.org/10.1109/SIU.2012.6204469
19. Yang JJ, Gang GW, Kim TS (2018) Development of EOG-based human computer interface (HCI) system using piecewise linear approximation (PLA) and support vector regression (SVR). Electronics 7(3):38. https://doi.org/10.3390/electronics7030038
20. Kuno Y, Yagi T, Uchikawa Y (1997) Biological interaction between man and machine. Proc IEEE/RSJ Int Conf Intell Rob Syst IROS'97 1:318–323. https://doi.org/10.1109/IROS.1997.649072
21. D'Souza S, Sriraam N (2014) Design of EOG signal acquisition system using virtual instrumentation: a cost effective approach. Int J Meas Tech Instr Eng (IJMTIE) 4(1):1–16. https://doi.org/10.4018/ijmtie.2014010101
22. Miotec Equipamentos Biomédicos (2010) Miotool 200/400: Manuais do Usuário, Rev. D
23. Constable PA, Bach M, Frishman LJ, Jeffrey BG, Robson AG (2017) International Society for Clinical Electrophysiology of Vision. ISCEV standard for clinical electro-oculography. Documenta Ophthalmol 134(1):1–9. https://doi.org/10.1007/s10633-017-9573-2
24. Bharadwaj S, Kumari B (2017) Electrooculography: analysis on device control by signal processing. Int J Adv Res Comput Sci 8(3)
25. 3M Produtos Hospitalares (2011) Eletrodo Monitorização Cardíaca

Analysis of Inductance in Flexible Square Coil Applied to Biotelemetry

S. F. Pichorim

Abstract

Flexible square coils are widely applied in wearable devices, especially in wireless power transfer systems. Wearable biomedical sensors are solutions in which the electronic circuits are embedded into clothing for long-term patient monitoring in homecare health. The determination of a coil's inductance in a situation of flexion is not trivial and may involve complex equations. In this paper, an analysis of the inductance of a flexible square coil, using the classical Neumann equation, is proposed. An approximation with decagonal coils was used to simplify the calculation. Two sigmoid curves were fitted with high correlation. The simplification presented an average error of less than 4.1%. Finally, comparisons with practical data are made. The proposed procedure can help in the design of flexible coils applied in wearable biotelemetric devices.

Keywords: Flexible coil · Square coil · Self-inductance · Wireless power · Biotelemetry

1 Introduction

S. F. Pichorim (corresponding author)
UTFPR, Federal University of Technology—Paraná, CPGEI/DAELN, Av. Sete de Setembro, 3165, Curitiba, PR, Brazil
e-mail: [email protected]

Wearable systems are a topic of interest in homecare health monitoring. Since the patient can carry a biomedical device on his/her own clothing, physiological parameters can be monitored, such as temperature, heart and respiratory rates, and pressure [1]. This kind of device is constructed on textiles or flexible polymers and is generally powered by inductive-coupling wireless power transfer [2]. Therefore,

flexible and flat coils are an important element in the design of passive implantable devices, where there is no battery and the energy is transferred transcutaneously (through the skin). For example, flexible coils can be applied in intraocular retinal prostheses [3], angle sensing [4], radio-frequency identification (RFID) [5, 6], wood humidity measurement [7], ECG, biofluid analysis, continuous glucose monitoring [2], and many other biomedical areas [1]. Inductive coils have many shapes, but they are mainly square or rectangular [2, 4, 5, 7–9]. A wearable WPT (Wireless Power Transfer) system is presented by [8] for smart cycling applications (an energy harvester on a bicycle). In that work, flexible and textile coils for wearable WPT are placed on the bicycle handle and the cyclist's glove. The coils have a rectangular geometry with a size of 30 mm. Although the bicycle handle is a cylindrical structure, an architecture with planar coils was considered. Efficiencies as a function of load impedance, coil separation, tissue thickness, and misalignment were presented, but always supposing that the coils are planar. Biswas et al. analyzed the electrical parameters (inductance, capacitance, quality factor, etc.) of square coils (10–20 mm diameter) as a function of scaling in a WPT system [2]. Although the secondary coils were flexible, only the case of planar coils was validated. The power transfer to implantable medical devices in UHF was discussed by [9]. Biosensors can be implanted at various body locations, and tissue models of several implantation sites are presented: arm, thigh, abdomen, and head [9]. Although these models assume the body surfaces to be cylinders or spheres, the coils (square loops up to 50 mm) are always supposed to be planar [9]. Few works have developed models for flexed coils [5, 6]. Leung et al.
presented an analysis of inductance, resistance, and capacitance as a function of curvature, where flexible printed RFID tags are affixed onto the surfaces of cylindrical containers [6]. Octagonal planar coils (external diameter from 20 to 55 mm) were used. The authors assumed that

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_117


the self-inductance has a proportional relation with the projected coil area, which changes with different surface curvatures. However, that paper only presents the solution for large radii of curvature (angle of flexion less than 180°). In the same direction, a relation between self-inductance and the projected area of a flexed coil was proposed by [5], with some empirical equations involving a cubic polynomial in the coil's area, length, and width. This paper presents an investigation of the behavior of square-coil inductance when the coil is flexed on a cylindrical surface. An equation for the self-inductance of a flexible square coil, using the classical Neumann method, is proposed in Sect. 2, where an approximation with decagonal coils is used to simplify the calculation. In Sect. 3, two sigmoid curves are used for fitting, and the validation of the proposed method is presented. Results and conclusions are detailed in the following sections.


2 Development of Decagonal Solution

In this analysis, the square coil (side length l) is supposed to be bent on a cylindrical surface with radius R, with two sides aligned parallel to the z axis, as seen in Fig. 1a. Thus, an angle of flexion α is formed in both flexed sides (α = l/R). In this situation, the total self-inductance of the flexed square coil is composed of four inductances: Lf, Lp, Mf, and Mp (Fig. 1b), where Lf and Lp are the self-inductances of the flexed and plane sides, respectively, and Mf and Mp are the mutual inductances between the parallel flexed and plane sides, respectively. Because the flexed and plane sides are perpendicular to each other, the mutual inductances between them are null. Since the total inductance is the sum of the magnetic contribution of each segment on all other segments, the square coil inductance is

$$L_{SQUARE} = 2\left(L_p + L_f - M_p - M_f\right). \quad (1)$$

The negative sign of the mutual inductances in (1) is because the currents flow in opposite directions in the two parallel sides. Observing Fig. 1b, Lp and Mp are inductances of straight segments, which are easy to calculate, since the equations are classical and can be found in the literature. However, Lf and Mf are inductances of curved segments, which are not easy to solve. Practical equations found in the literature are always for a complete turn (i.e., circular ring coils), but not for a partial circle with angle α. Intuitively, Lp is not sensitive to flexion, and Mp increases with the flexion angle α, because the distance between the two parallel segments decreases. The mutual inductance (Mp) between two equal parallel straight filaments (length l, separated by distance d) [10] can be determined by

" # rffiffiffiffiffiffiffiffiffiffiffiffiffi! rffiffiffiffiffiffiffiffiffiffiffiffiffi l0 l l2 d2 d Mp ¼ þ 1þ 2  1þ 2 þ l ln d l 2p d l

ð2Þ

where l0 is the magnetic permeability of the medium (vacuum or air, i.e. 4.p.10–7 H/m), and d is the cord formed in the arc of bent side, and can be calculated by d¼

2l a sin : a 2

ð3Þ
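As a quick numerical illustration of Eqs. (2) and (3), the sketch below (our code, with the 30 mm side length used later in the paper; function names are ours) evaluates the chord and the parallel-filament mutual inductance for several flexion angles:

```python
import math

MU0 = 4 * math.pi * 1e-7  # magnetic permeability of vacuum (H/m)

def chord(l, alpha):
    """Chord d of an arc of length l bent by angle alpha (rad), Eq. (3)."""
    if alpha == 0:
        return l  # flat side: chord equals the side length
    return 2 * l / alpha * math.sin(alpha / 2)

def mutual_parallel(l, d):
    """Mutual inductance (H) of two equal parallel filaments, Eq. (2)."""
    return (MU0 * l / (2 * math.pi)) * (
        math.log(l / d + math.sqrt(1 + l**2 / d**2))
        - math.sqrt(1 + d**2 / l**2)
        + d / l
    )

l = 0.030  # 30 mm side, as in Sect. 3
for alpha_deg in (0, 90, 180, 270):
    a = math.radians(alpha_deg)
    d = chord(l, a)
    print(alpha_deg, d, mutual_parallel(l, d))
```

As the text anticipates, the chord d shrinks with increasing α, so Mp grows with flexion (at α = 360° the chord vanishes and Eq. (2) diverges, which is why the closed ring needs the separate treatment of Sect. 3.1).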

The self-inductance (Lp) of a straight filament of length l and wire radius ρ [10] can be obtained by

$$L_p = \frac{\mu_0}{2\pi}\, l \left[\ln\!\left(\frac{2l}{\rho}\right) - \frac{3}{4}\right]. \quad (4)$$

As previously presented, the determination of the inductances of the curved parts (Lf and Mf) requires a new solution. It will therefore be supposed that the circular arc of the flexed side l can be approximated by a decagon (ten-sided polygon), as shown in Fig. 1c. Grover's Table 8 [10] shows that the inductance difference between circular and decagonal plane figures (with the same perimeter l) is less than 1.7%, which can be considered a satisfactory approximation in practice. A classic solution for the inductance calculation of a generic circuit is the Neumann equation, which can be solved by a numerical method that approximates the circuit curves by linear segments (Δs), making Δs as small as possible [11]. Supposing a circuit 1 with N segments and a circuit 2 with H segments, the mutual inductance between the two circuits is

$$M = \frac{\mu_0}{4\pi} \sum_{j=1}^{N} \sum_{k=1}^{H} \frac{\Delta \vec{s}_j \cdot \Delta \vec{s}_k}{r_{jk}} \quad (5)$$

where

$$\Delta \vec{s}_j \cdot \Delta \vec{s}_k = \Delta x_j\,\Delta x_k + \Delta y_j\,\Delta y_k + \Delta z_j\,\Delta z_k \quad (6)$$

and

$$r_{jk} = \sqrt{\left(x_j - x_k\right)^2 + \left(y_j - y_k\right)^2 + \left(z_j - z_k\right)^2}. \quad (7)$$

The denominator rjk is the distance between segments j and k, with orthogonal coordinates x, y, and z. In this work, both circuits have 10 segments, as shown in Fig. 1d. For the self-inductance calculation, the same Neumann Eqs. (5)–(7) can be used. In this case, circuits 1 and 2 must be


Fig. 1 A square coil bent in the z-axis direction. In a, the sides l, angle α, and radius R of flexion. In b, the self-inductances Lf and Lp and mutual inductances Mf and Mp for the flexed and plane parts, respectively. In c and d, the decagonal model: segmentation of the curved parts used by the Neumann solution for self- and mutual inductance calculation, respectively

overlapped (Fig. 1c). However, to avoid a mathematical indeterminate form (division by zero), rjk must be set equal to the wire radius q when j = k [11]. With this procedure, the total inductance of a square coil can be obtained. Although not entirely simple, the method cannot be called complex, because it uses simple equations and a repetition loop of 100 iterations (i.e., 10 polygon sides in two summations), which requires a very low computational cost.

3 Methodology

Aiming at the evaluation of this solution, a square coil (side length l of 30 mm and wire radius q of 0.25 mm) was flexed on a cylindrical surface (as seen in Fig. 1a) in steps of 30° in a, ranging from 0° to 360°, and the inductances were calculated by (1)–(7). The dimensions of the square coil were chosen because they are similar to those found in the literature [6–9]. However, l and q are not relevant in this analysis, since they can easily be changed in the equations. Nevertheless, the results are presented in normalized form (Lf/Lp and Mf/Mp), in other words, as dimensionless quantities. The behavior of the inductances Lf and Mf as a function of the flexion angle a was analyzed, and fitting curves were sought. As shown in the next section, these curves have sigmoid shapes. Thus, the normalized self- and mutual inductances can be written as

$$\frac{L_f}{L_p} \;\text{or}\; \frac{M_f}{M_p} = A + \frac{B}{1 + e^{(Cx + D)}} \qquad (8)$$

where x is the normalized angle of flexion, i.e. x = a/360°, and the constants A, B, C, and D are determined by the fitting process.
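The fitting of Eq. (8) can be reproduced with a standard nonlinear least-squares routine. The sketch below is an illustration, not the paper's code; since the computed inductance points are not tabulated here, it exercises the fit on stand-in data generated from assumed coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, A, B, C, D):
    # Normalized inductance model of Eq. (8): A + B / (1 + exp(C*x + D))
    return A + B / (1.0 + np.exp(C * x + D))

# x = a/360: flexion angle normalized, in 30-degree steps as in the paper
x = np.linspace(0.0, 1.0, 13)

# Stand-in data from assumed coefficients; in the paper, y would be the
# normalized inductances (Lf/Lp or Mf/Mp) computed by the Neumann method
y = sigmoid(x, 1.02, -0.38, -5.3, 3.0)

popt, _ = curve_fit(sigmoid, x, y, p0=[1.0, -0.5, -5.0, 3.0])
A, B, C, D = popt
```

With exact synthetic data and a reasonable initial guess, the fit recovers the assumed coefficients to machine precision.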

3.1 Validations

In parallel to the previous process, comparisons and validation were done with the results obtained by the numerical


S. F. Pichorim

approximation of the Neumann method (the decagon proposed in this work). The inductances Lf and Mf were also calculated by classic equations in two situations: for a = 0° (flat coil) and for a = 360° (circular ring). These two situations were chosen because they are easily found in the literature. For instance, the self-inductance of a flat circular ring (radius R and wire radius q) [10] can be determined by

$$L = \mu_0 R \left[\ln\!\left(\frac{8R}{q}\right) - \frac{7}{4}\right]. \qquad (9)$$

And the mutual inductance between two equal coaxial circular rings (radii R, a distance d apart) [11] can be obtained by

$$M = \mu_0 R \left[\left(\frac{2}{k} - k\right) K(k) - \frac{2}{k}\,E(k)\right] \qquad (10)$$

where

$$k^2 = \frac{4R^2}{d^2 + 4R^2} \qquad (11)$$

and K(k) and E(k) are the complete elliptic integrals of the first and second kind, respectively [12]. Equations (9)–(11) are used for comparison in the case of a = 360° (circular ring), and (2)–(4) in the case of a = 0° (flat coil). Another possibility for validation is to compare the results for a = 0° with the equation for the self-inductance of a flat square coil found in the literature [10], that is,

$$L = \frac{2\mu_0}{\pi}\, l \left[\ln\!\left(\frac{l}{q}\right) - 0.52401\right]. \qquad (12)$$

Finally, the total inductance of a flexed square coil, obtained by the decagon method, can be compared with practical results and with two simplistic suppositions: (1) the square coil inductance is proportional to the cross-section area of the bent coil [5, 6]; and (2) the square coil inductance is similar to that of a rectangular coil formed in the cross-section plane. For this second approach, the self-inductance of a flat rectangular coil (wire radius q) [10], with sides a and b, can be determined by

$$L = \frac{\mu_0}{\pi}\left[a \ln\!\left(\frac{2a}{q}\right) + b \ln\!\left(\frac{2b}{q}\right) + 2\sqrt{a^2 + b^2} - a\,\operatorname{arcsinh}\!\left(\frac{a}{b}\right) - b\,\operatorname{arcsinh}\!\left(\frac{b}{a}\right) - 1.75\,(a + b)\right]. \qquad (13)$$

Practical results were obtained using a precision impedance analyzer (Agilent 4294A), which measured the inductance of a spiral flexible square coil with 8.5 turns, outermost side length of 38 mm, and track and space widths of 0.5 mm [7]. This coil was bent up to a = 290°.

4 Results

Figure 2 presents the results of the self- and mutual inductances of the flexed segments as a function of the angle a. The dots are values obtained by the Neumann Eq. (5) and the curves are fitted by the sigmoid Eq. (8). The coefficients of correlation are 0.99983 and 0.99987 for the self- and mutual inductance, respectively. Table 1 gives the coefficients A, B, C, and D of the sigmoid curves (8) obtained in the fitting process for both the self- and mutual inductances. Since high correlations were obtained, these coefficients may help in the calculation procedure, avoiding the numerical solution of the Neumann method. As said in the previous section, the first and last points (a = 0° and 360°) in Fig. 2 can be calculated and compared with the classic Eqs. (2)–(4) and (9)–(11). This procedure was used to estimate the errors caused by substituting the flexed coil's circular segment with a decagon in the Neumann equation. For the flat situation (plane, or a = 0°), the Neumann method calculated self- and mutual inductances of Lp = 29.49 nH and Mp = 2.804 nH, which represent errors of 3.90% and 0.04% in comparison with the theoretical values of 28.384 nH and 2.803 nH, respectively. For the totally flexed situation (ring, or a = 360°), the Neumann method calculated self- and mutual inductances of Lf = 19.92 nH and Mf = 34.17 pH, representing errors of 9.57% and −4.71% in comparison with theoretical values of 18.18
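As a consistency check (a sketch, not the author's code): for a = b = l, Eq. (13) must reduce to the flat-square formula (12), and for l = 30 mm and q = 0.25 mm both give the 102.3 nH quoted in the Results.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def L_rectangle(a, b, q):
    """Self-inductance of a flat rectangular loop with sides a, b, Eq. (13)."""
    return MU0 / np.pi * (a * np.log(2 * a / q) + b * np.log(2 * b / q)
                          + 2 * np.sqrt(a**2 + b**2)
                          - a * np.arcsinh(a / b) - b * np.arcsinh(b / a)
                          - 1.75 * (a + b))

def L_square(l, q):
    """Self-inductance of a flat square coil, Eq. (12)."""
    return 2 * MU0 / np.pi * l * (np.log(l / q) - 0.52401)
```

The agreement follows algebraically: with a = b = l, the bracket of (13) collapses to 2l[ln(l/q) + ln 2 + √2 − arcsinh(1) − 1.75] = 2l[ln(l/q) − 0.52401].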

Fig. 2 Normalized self- and mutual inductances as a function of the angle of flexion. Dots are obtained by Neumann solution and curves are fitted. Coefficients of correlation are 0.99983 and 0.99987 for self- and mutual inductance, respectively

Table 1 Coefficients of the sigmoid curves (8) fitted for self- and mutual inductances, and their correlations

                Lf/Lp       Mf/Mp
A               1.02105     −0.02953
B              −0.38339     −1.10001
C              −5.30663      6.27631
D               3.00437     −2.83541
Correlation     0.99983      0.99987
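As a quick use of Table 1 (a sketch, not from the paper): evaluating Eq. (8) with the fitted Lf/Lp coefficients gives about 1.003 at x = 0 (a = 0°) and about 0.673 at x = 1 (a = 360°), consistent with the ratio Lf/Lp = 19.92/29.49 ≈ 0.675 from the Neumann results.

```python
import math

def normalized_inductance(x, A, B, C, D):
    """Sigmoid model of Eq. (8); x = a/360 is the normalized flexion angle."""
    return A + B / (1.0 + math.exp(C * x + D))

# Fitted coefficients for Lf/Lp from Table 1
A, B, C, D = 1.02105, -0.38339, -5.30663, 3.00437

flat = normalized_inductance(0.0, A, B, C, D)  # a = 0 deg (flat coil)
ring = normalized_inductance(1.0, A, B, C, D)  # a = 360 deg (full ring)
```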

Fig. 3 Normalized self-inductances as a function of the angle of flexion (a) applied to square coils. Dots are measured values and the blue curve is obtained by the proposed decagonal model with the Neumann method. For comparison, the orange and gray curves correspond, respectively, to a flat rectangle model with dimensions l × d and to a model of linear relationship with the area

nH and 35.86 pH, respectively. These errors exist because a decagon is not exactly equal to a circular ring. Having shown that the proposed method presents reliable results, the total inductance of a flexible square coil can be analyzed. Applying the results of Fig. 2 in (1), Fig. 3 was drawn. The blue curve is obtained by the technique proposed in this paper. For comparison, practical measurements were plotted as dots. Although there is a small difference between them, the behavior of the curve is very similar to that of the dots. A reason for this difference is that the model is based on a simple square (with four equal sides), whereas the practical coil had a spiral shape, in which each turn has a distinct length. However, since the inductances are normalized, these results demonstrate the robustness of the proposed technique. At the first point of Fig. 3 (a = 0°), the square coil has a self-inductance of 104.5 nH, determined by the Neumann equation. If the classic solution for a flat square coil is used [Eq. (12)], a value of 102.3 nH is obtained, i.e. an error of only 2.16%. As said previously, [5, 6] assumed a linear or polynomial relation between the square coil inductance and the

cross-section area of the bent coil. This is a simplistic supposition and is valid only for very small angles of flexion. In Fig. 3, the gray curve is obtained using this idea. As can be observed, the gray curve is completely apart from the proposed method and, therefore, does not agree with reality (the practical values). A last simplistic attempt is to calculate the self-inductance of a flat rectangle, i.e. the projection of the flexed square on a plane. The rectangle is thus formed by the straight side l and the chord d of the arc of the flexed square's side. Applying this idea in (13), the orange curve in Fig. 3 was drawn. One can note that this supposition also does not match the practical results.

5 Conclusions

In this paper, an analysis of the inductance of a flexible square coil, using the classical Neumann equation, was proposed and presented in detail. An approximation with decagonal coils was used to simplify the calculation. Two sigmoid curves


were calculated for fitting, with high correlation (greater than 0.9998). The simplified method presented an average error of less than 4.1% when compared with classic procedures for the cases of flat and totally curved coils (a = 0° and 360°, respectively). Of course, the errors can be reduced by using polygons with a greater number of sides, at the cost of higher computational effort. Finally, comparisons with practical data and with two simplistic solutions were made, showing that the proposed method has better agreement with practice. The author believes that this procedure can help in the design of flexible coils applied in wearable biotelemetric devices.

Acknowledgements The author thanks CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) for financial support.

Conflict of Interest The author declares that he has no conflict of interest.

References

1. Nag A, Mukhopadhyay SC, Kosel J (2017) Wearable flexible sensors: a review. IEEE Sensors J 17:3949–3960
2. Biswas DK, Sinclair M, Le T, Pullano SA, Fiorillo AS, Mahbub I (2020) Modeling and characterization of scaling factor of flexible spiral coils for wirelessly powered wearable sensors. Sensors 20:2282. https://doi.org/10.3390/s20082282
3. Li W, Rodger DC, Meng E, Weiland JD, Humayun MS, Tai Y (2006) Flexible parylene packaged intraocular coil for retinal prostheses. In: Int Conf Microtechnol Med Biol, Okinawa, pp 105–108. https://doi.org/10.1109/MMB.2006.251502
4. Anandan N, Varma Muppala A, George B (2018) A flexible, planar-coil-based sensor for through-shaft angle sensing. IEEE Sens J 18:10217–10224. https://doi.org/10.1109/JSEN.2018.2874065
5. Fotheringham F, Ohnimus I, Ndip S, Guttowski H, Reichl (2009) Parameterization of bent coils on curved flexible surface substrates for RFID applications. In: Electronic Computing Technology Conference, San Diego, USA, pp 502–507
6. Leung SYY, Tiu PK, Lam DCC (2006) Printed polymer-based RFID antenna on curvilinear surface. In: Int Conf Electron Mat Pack, Kowloon, pp 1–6. https://doi.org/10.1109/EMAP.2006.4430606
7. Reis DD, Castaldo FC, Pichorim SF (2019) Flexible circuits for moisture measurement in cylindrical timber of wood. In: IEEE Int Conf Flexible Printable Sens Syst, Glasgow, United Kingdom, pp 1–3. https://doi.org/10.1109/FLEPS.2019.8792322
8. Wagih M, Komolafe A, Zaghari B (2020) Dual-receiver wearable 6.78 MHz resonant inductive wireless power transfer glove using embroidered textile coils. IEEE Access 8:24630–24642. https://doi.org/10.1109/ACCESS.2020.2971086
9. Bocan KN, Mickle MH, Sejdić E (2017) Tissue variability and antennas for power transfer to wireless implantable medical devices. IEEE J Transl Eng Health Med 5:1–11
10. Grover FW (1946) Inductance calculations. Van Nostrand, New York
11. Silvester P (1968) Modern electromagnetic fields. Prentice-Hall, Englewood Cliffs
12. Pichorim SF, Abatti PJ (2004) Design of coils for millimeter- and submillimeter-sized biotelemetry. IEEE Trans Biomed Eng 51:1487–1489. https://doi.org/10.1109/TBME.2004.827542

A Method for Illumination of Chorioallantoic Membrane (CAM) of Chicken Embryo in Microscope M. K. Silva, I. R. Stoltz, L. T. Rocha, L. F. Pereira, M. A. de Souza, and G. B. Borba

Abstract

This article presents the study and recommendation of a method to improve the observation, in laboratory experiments, of chicken embryo chorioallantoic membranes (CAM) with a stereoscopic microscope (stereomicroscope). In these experiments, tumor cells are implanted in the CAM so that they form a tumor during the embryo's development period. The number of neovessels that develop around the tumors is an indicator of results in tests with antiangiogenic drugs. These studies require quantification of vascularization to compare the stimulating or inhibitory effects caused by different agents. The proposed method to improve the visualization of the vessels consists of a lighting chamber, composed of three lighting rings, which accommodates the egg for observation in a microscope. Each ring is composed of sets of 8–16 LEDs, with a color temperature between 3000 and 5500 K, aligned and focused on the regions above, at the center of, and below the egg. The method improved the results, as more details were observed in the images when compared to those obtained with the original illumination of the adopted stereomicroscope, model LEICA ZOOM™ 2000. Also, the proposed LED lighting avoids the heat irradiated by the original illumination, which may cause discomfort, dehydration and unnecessary movements of the embryo, besides reflection or glare on the exposed membranes and liquids of the egg, which may impair visualization. Qualitative evaluation of sample images demonstrated that subtle vessels stand out, thus allowing better results in further quantitative analysis of the CAM. The quantitative analysis of the images, using the grid-based vessel counting method, resulted in a mean percentage difference of +56% more vessels counted in relation to the counting with the original illumination of the microscope.

M. K. Silva (✉) · M. A. de Souza · G. B. Borba
Universidade Tecnológica Federal do Paraná/Programa de Pós-Graduação em Engenharia Biomédica, Curitiba, Brazil
e-mail: [email protected]

I. R. Stoltz · L. T. Rocha · L. F. Pereira
Pontifícia Universidade Católica do Paraná/Laboratório de Biologia Experimental, Curitiba, Brazil

Keywords

Angiogenesis control · Chorioallantoic membrane · Luminosity

1 Introduction

Angiogenic phenomena belong to a large area of scientific research, that of blood vessels. Numerous in vivo assays are carried out [1], in both human and non-human tissues. In general, these tests consume time and resources, in the preparation and execution stages as well as in the analysis and interpretation of results. In vivo angiogenic assays have allowed important progress in elucidating the mechanism of action of many angiogenic and antiangiogenic phenomena, and it is reasonable to reserve the term "angiogenic factor" for substances that produce new vessel growth in in vivo assays [2]. The main factors that determine the choice of a method are cost, ease of use, reproducibility and reliability. The objective of this project is to provide a more favorable lighting method for collecting images with a microscope (magnifying glass), for later visual analysis of the tumors and vessels generated inside chicken eggs, in the CAM. The proposed method is based on LED lighting at strategic points of the sample and allows the isolation of the sample from external interference, as well as the adjustment of the lighting specifically for each observed membrane. The CAM images obtained using the proposed lighting method, in comparison to those obtained with the original lighting of the microscope, allow better visualization,

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_118


M. K. Silva et al.

both of the most prominent vessels and of the most subtle and deepest ones.

2 Tumoral Angiogenesis

2.1 Definition of Tumor Angiogenesis

Angiogenesis is a physiological process in which new blood vessels form from pre-existing vessels. Vascular endothelial growth factor (VEGF) can induce the growth of tumor-feeding vessels, supplying the tumor's nutritional needs and allowing its expansion. Therefore, angiogenesis is a necessary step in the transition from a tiny focus of neoplastic cells to a large malignant tumor. Angiogenesis is also related to metastasis, in which cancer cells can break through the tumor limits, enter the network of vessels and be carried by the blood to other distant regions, implanting and growing as another tumor [3].

2.2 Study of Angiogenesis Using CAM

Tests for the study of angiogenesis in vivo include research on the iris or the vascularized cornea of rodent eyes; subcutaneous implants in rodents that allow testing with three-dimensional substrates including, for example, polyester sponge [4]; and research with zebrafish embryos, which may represent new modalities for studying mechanisms of angiogenesis and angiosuppression during their development [5]. Many of these techniques are complex, difficult to reproduce, require many specimens and can make procedures expensive. Many of these limitations decrease significantly in tests with chicken embryos, more specifically with the chicken embryo chorioallantoic membrane (CAM) [2]. Some of the advantages and disadvantages of using CAM [6] are described in Table 1. Research involving the use of CAM requires the quantification of vascularization for comparing the stimulating or inhibitory effects caused by different agents. This quantification can be obtained through image analysis of characteristics such as the diameters and lengths of the vessels [3], junctions [7], points per section [2], vessel color [8], or other features that demonstrate the evolution of the studied area.

3 Materials and Methods

The lighting system developed for capturing CAM images aims to improve the quality of the images, in relation to those obtained with the original lighting system of

microscopes traditionally used for this type of application. First, the main limitations of the microscope's lighting system for the acquisition of CAM images were observed. Subsequently, based on studies of the characteristics of the original light sources of the microscope and of alternative light sources, the most suitable one was adopted. Finally, an apparatus was developed to support and control the intensity of the adopted lighting source. The project was submitted to and approved by the Committee on Animal Research and Ethics of Pontifícia Universidade Católica do Paraná under registration number CEUA-PUCPR (01657/2019).

3.1 Microscopy

The acquisition of CAM images was performed with a LEICA ZOOM™ 2000 stereomicroscope from Leica Microsystems [9]. According to the manufacturer's manual, the zoom ranges from 10.5 to 45×. An eyepiece camera (model DM800) is attached to one of the eyepieces of this stereoscopic magnifier, with autofocus, an eight-megapixel CMOS HD sensor and a USB output. The original illumination of this magnifier, traditionally incorporated in other similar magnifiers as well, is composed of two sources: (i) a reflected light source, positioned on the arm of the equipment, composed of a 15 W halogen lamp focused at an angle of 60° over the object to be observed, whose height is changed to match the focus of the stereomicroscope; and (ii) a transmitted light source, positioned at the base of the equipment, composed of a 15 W tungsten lamp focused on the bottom of the object to be analyzed. The diameter of the area of this transmitted light is 40 mm.

3.2 Limitations of the Traditional Lighting System for Capturing CAM Images

Traditional lighting setups of microscopes such as the one mentioned above, used in the present work, present limitations when applied to CAM observations. The following paragraphs cite and comment on these limitations.

Internal observation of the egg: as the fixed lighting of the stereomicroscope is intended mainly for the external observation of objects, it becomes necessary to look for a better angle so that the light enters through the opening of about 2 × 2 cm (made on the side of the air chamber for observation). Often this is not satisfactory for viewing the deepest vessels, as they are located in a shaded area.

Color spectrum: the halogen lamp has characteristics that, when capturing CAM images, result in the highlighting of the yellow and red components in the membrane images. As shown later, this tends to reduce the visibility of the more subtle vessels.

Table 1 Advantages and disadvantages of using CAM

Advantages:
• Easy to use
• Rapid vascular growth
• Visualization of tests in real time
• Complete accessibility to the circulatory system (for delivery of intravascular molecules)
• Environment in vivo
• Excellent modeling of more complex systems
• Accessibility to vessels of different sizes and types
• Reproducibility and reliability

Disadvantages:
• The origin of the chicken test limits the availability of reagents
• Proliferation during the course of embryonic development
• Immunodeficiency
• Contains a well-developed vascular network, which makes it difficult to discriminate between new and existing vessels
• Sensitive to environmental factors
• Differences in drug metabolism compared with mammals
• The shells (or other irritants) can induce angiogenesis
• Administration of oral drugs cannot be tested

Lamp heating: the heat irradiated by the stereomicroscope's set of lamps must be carefully considered because, as a living organism is being handled, it may cause discomfort, dehydration and, consequently, unnecessary movements of the embryo, which is surrounded by the membrane to be observed.

Excessive brightness: causes reflections or glare on the exposed membranes and liquids of the egg, making it difficult to see both the regions where the reflection takes place and the inner vessels and tissues.

Lack of adjustment in luminosity: each egg has individual characteristics, such as internal and external color, porosity, and embryos of different colors and at different stages of development, in addition to the different tumor cell lines, drugs and other types of samples that can be inserted in the experiment. Fixed-intensity lighting is not ideal in all cases: it can sometimes be excessive, reinforcing the problems of brightness, heating, dehydration and glare; or it may be too dim, generating shadows and giving the stereomicroscope objective little penetration to observe the vessels, especially the deeper ones.

External influence: reflections from lamps, windows and other equipment inside the laboratory are difficult to control and predict, so it is important to provide a controlled environment for the area under observation.

3.3 Study of Lighting Sources

For a lamp to have a good color rendering index (CRI), its capability to reproduce colors must be above 80% [10]. Luminous efficiency is a parameter that indicates how efficiently a light source converts the energy it receives into light, evaluated as the ratio between the total luminous flux emitted, given in lumens (lm), and the energy consumed, given in watts (W). Table 2 shows the CRI and luminous efficiency of incandescent and halogen lamps and of light-emitting diodes (LEDs).

Table 2 Comparison between the lamps used in the stereomicroscope and LEDs

Light source   CRI (%)   Luminous efficiency (lm/W)   Average life (h)
Incandescent   100       10–15                        750–1,000
Halogen        100       15–35                        1,500–2,000
LED            70–95     35–130                       25,000–100,000

Another factor to be taken into account when comparing halogen and LED lamps is the colorimetric behavior induced by these two lamps. Halogen has a color spectrum that tends to yellow and red, and this tendency consequently influences the sharpness and reproduction of the other colors of the observed material, sometimes requiring a specific physical color filter for each type of dye or material. The LED lamp, on the other hand, has a color spectrum centered on white, which has no direct influence on the chromatic rendering of the observed material and therefore dispenses with the use of a specific color filter, thus having a major influence on the final observed resolution [11]. The spectral composition shows why optical microscopes with incandescent and/or halogen illumination sometimes generate chromatic aberrations that tend to red and yellow, because these lamps have their peaks at wavelengths close to the mentioned colors, which does not occur with the LED, making the contrast and brightness characteristics of LED lamps superior across the whole colorimetric band (except for red contrast). The existence of a heated filament in halogen and incandescent lamps generates infrared radiation, both in the arm and in the base of the equipment,


which, because of this characteristic, emits heat towards both the object under study and the observer, unlike the LED lamp, which does not generate heat, making the observer's work more comfortable [11]. Another important point is to use LEDs with a neutral or slightly warm white light, with a color temperature not exceeding 5500 K. Many commercial LEDs emit a considerable amount of blue light, making it more difficult to correct the obtained images, even in post-processing [12]. A single LED is not sufficient to provide satisfactory lighting, so they are used in sets. Some additional features make LED lamps more suitable for building a lighting chamber, among them:

• High luminous power: in the range of 10–150 mW, which corresponds to about five times the radiation of a 40 W tungsten lamp [13].
• More accurate: they are precisely directional in their emission, without the need for accessories or refractors [14].
• Variety of colors: a wide range of color temperatures [14].
• Dimming: possibility of dimming (darkening) without changing the color temperature [14].

Fig. 1 a Brightness points in the embryonic attachments; b top and sides; c bottom; d use in the stereomicroscope


• Possibility of adjustments: light color adjustable with the use of multicolored LEDs, allowing dynamic color control and high color saturation [14].

Another feature is that the diffuse lighting emitted by LEDs helps to reveal shapes, minimizing the accentuated shadows and specular highlight effects that tend to overemphasize specific textural elements or obscure them completely. Although most LEDs have built-in reflective structures to focus light to varying degrees, LED light is naturally emitted in all directions with low spatial coherence. This means that LED light is suitable for providing diffused lighting and is especially effective in situations where less reflected light is desired [12].

4 Results and Discussion

For proper lighting, a lighting chamber with a height of 9.5 cm and a diameter of 6 cm was designed, made of dark plastic material and composed of three lighting rings, each with sets of 8 to 16 LEDs, with a color temperature between 3000 and 5500 K, aligned and focused on the regions above, at the center of, and below the egg


that must be properly positioned to be observed in the stereomicroscope, as shown in Fig. 1. By separating the lighting chamber into three sectors, the level of illumination is increased, making the details (which would be little highlighted with conventional lighting) more visible.

Several integrated circuits available on the market, such as the LM555, UC3525 and others, can generate pulse-width modulation (PWM) to control the illumination intensity of the LEDs [15]. However, for this project the Arduino Uno was chosen, due to its well-known and programmable technical characteristics, which make it easier to monitor the variables and possibly collect these data for further processing [16]. The Arduino has several PWM ports (pins 3, 5, 6, 9, 10 and 11), capable of varying the pulse width of a digital signal [17]. With PWM it is possible to control the brightness intensity of the LEDs or to control RGB LEDs, even making it possible to obtain different colors through the possible combinations. The current capacity of an Arduino Uno port is around 40 mA and, as the current consumption of a single LED amounts to 20 or 30 mA, a current gain proportional to the number of LEDs used in the project is required. To meet this need, a power circuit was used, with BC548 transistors, one for each lighting sector; each transistor can supply up to 800 mA of current.

For visualization of the results, shown in Fig. 2, image samples were collected using the original microscope illumination, based on halogen and tungsten lamps (left), and using the proposed method, based on LED illumination (right). It is possible to observe that the images correspond to slightly different regions of the same CAM. This occurred because, when switching between the original and proposed lighting systems, the embryo moved, also changing the position of the CAM inside the egg.

Thus, in order to facilitate the observation of the movement that occurred between acquisitions of the same CAM and to allow a more faithful visual comparison between the images, outlines in yellow were drawn, which act as references for the identification of the same regions in the images. In all samples, the proposed lighting method allowed the visualization of structures and vessels that were not captured in the image with the original microscope lighting, in addition to making the vessels more prominent and more visually evident. This demonstrates the effectiveness of the proposed method for improving the capture of CAM images for later visual inspection and analysis. The brightness of the image has also become more controllable, because with three buttons the amount of light can be adjusted according to the physiological characteristics of the egg.


Fig. 2 Samples of CAM images captured using the original microscope illumination (left), and with the proposed illumination method (right). Yellow outlines identify the same regions in each pair of images

As the lighting chamber is closed, problems with external interference have decreased considerably, requiring attention only to the light that enters through the eyepiece of the stereomicroscope, making it necessary to cover the eyepiece that is not being used by the camera.


With the light direction separated into three different positions, the shading problem caused by the angle of the halogen lamp located above the sample in the stereomicroscope was greatly reduced. This softening of shadows is also aided by the more diffuse lighting of the LEDs.

Table 3 Counting of the vessels in Fig. 3, for each pair of images, from top (Img. 1) to bottom (Img. 4). The mean percentage difference is relative to the original lighting of the microscope

Img.   Original   LED    Dif.   Dif. (%)
1      2.89       4.00   1.11   38.46
2      2.89       3.89   1.00   34.62
3      3.89       6.22   2.33   60.00
4      3.44       6.67   3.22   93.55
Mean                            56.65
A quantitative evaluation was also performed, using the grid-based vessel counting method [18]. In this method, a 3 × 3 grid with a total area of 1 cm² is overlaid on the CAM image and the vessels' crossings over the four sides of each of the nine cells of the grid are counted. The final count is the average of the nine counts. Figure 3 shows the results of this procedure, executed by a specialist familiar with this kind of visual analysis of CAM images, for the images presented in Fig. 2. In Fig. 3, note that the grids were positioned at the same position in both images (with the original microscope illumination and with the proposed illumination), thus providing a suitable comparison. Table 3 presents the obtained results. As mentioned before and shown in Fig. 2, the proposed lighting provided the visualization of vessels that were not captured with the default microscope lighting, resulting in a more faithful analysis of the CAM. The mean percentage difference of the counted vessels was +56%.
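The counting and comparison procedure can be summarized in a short sketch (an illustration of the grid method [18] and of Table 3's percentage difference, not the authors' code; the function names are assumptions):

```python
def grid_count(crossings_per_cell):
    """Final count of the grid method: the average of the crossing counts
    over the four sides of each of the nine cells of the 3x3 grid."""
    assert len(crossings_per_cell) == 9
    return sum(crossings_per_cell) / 9.0

def percent_difference(original, led):
    """Percentage difference of the LED count relative to the original lighting."""
    return 100.0 * (led - original) / original

# Mean counts reported in Table 3 (original vs. LED illumination)
original = [2.89, 2.89, 3.89, 3.44]
led = [4.00, 3.89, 6.22, 6.67]
diffs = [percent_difference(o, l) for o, l in zip(original, led)]
mean_diff = sum(diffs) / len(diffs)  # about +56%, as reported
```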

Fig. 3 Results of the quantitative evaluation using the grid-based vessel counting method [18], for the sample images in Fig. 2. The grid's area is 1 cm² (1 × 1 cm). The yellow ticks identify the crossings of the vessels with each of the four sides of every one of the nine cells of the grid. Original microscope illumination on the left and proposed illumination method on the right. Table 3 presents the obtained values

5 Conclusions

Many of the difficulties observed in the experiments were overcome by replacing the standard lighting of the Leica stereomicroscope with the lighting chamber. The deeper vessels around the embryonic attachments were better highlighted, improving the observation of their growth in all dimensions and making the experimental results more reliable. External influences are no longer a concern thanks to the insulation the chamber provides. The chamber also made it easier to obtain an appropriate observation angle, since the LEDs are positioned around the analyzed object, and the brightness adjustment made it possible to correct brightness and saturation, avoiding reflections and shadows on the surface. Quantitative analysis using the grid-based vessel counting method [18] confirmed that a larger number of vessels could be visualized: with the proposed lighting system, vessel counts were on average 56% higher.

A Method for Illumination of Chorioallantoic Membrane (CAM) …

For future work, we intend to develop new tools, such as image processing algorithms for the automatic analysis of images and of parameters important for quantifying the vascularization of the CAM, such as the number of vessels, the number of bifurcations, the vessel count estimated by intersection with an overlaid grid, and the vessel density index (vdi%).
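One of the planned parameters, the vessel density index, admits a particularly simple definition; a minimal sketch, assuming a pre-segmented binary vessel mask (the segmentation step itself is the hard part and is not shown):

```python
# Hypothetical vdi% computation: percentage of pixels classified as
# vessel in a binarized CAM image (1 = vessel, 0 = background).
def vessel_density_index(binary_mask):
    total = sum(len(row) for row in binary_mask)
    vessel_pixels = sum(sum(row) for row in binary_mask)
    return 100.0 * vessel_pixels / total
```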

References

1. Jain RK, Schlenger K, Hockel M et al (1997) Quantitative angiogenesis assays: progress and problems. Nat Med 3:1203–1208. https://doi.org/10.1038/nm1197-1203
2. Ribatti D, Nico B, Vacca A et al (2006) The gelatin sponge–chorioallantoic membrane assay. Nat Protoc 1:85–92. https://doi.org/10.1038/nprot.2006.13
3. Doukas CN, Maglogiannis I, Chatziioannou AA (2008) Computer-supported angiogenesis quantification using image analysis and statistical averaging. IEEE Trans Inf Technol Biomed 12:650–657. https://doi.org/10.1109/TITB.2008.926463
4. Andrade SP, Fan TP, Lewis GP (1987) Quantitative in vivo studies on angiogenesis in a rat sponge model. Br J Exp Pathol 68:755–766
5. Serbedzija GN, Flynn E, Willet CE (1999) Zebrafish angiogenesis: a new model for drug screening. Angiogenesis 3:353–359. https://doi.org/10.1023/a:1026598300052
6. Nowak-Sliwinska P, Segura T, Iruela-Arispe ML (2014) The chicken chorioallantoic membrane model in biology, medicine and bioengineering. Angiogenesis 17:779–804. https://doi.org/10.1007/s10456-014-9440-7
7. Bo Z, Liya A, Shao L et al (2009) AngioIQ: a novel automated analysis approach for angiogenesis image quantification. In: 2nd international conference on biomedical engineering and informatics, Tianjin, 2009, pp 1–5. https://doi.org/10.1109/BMEI.2009.5304911
8. Ribatti D, Tamma R (2018) The chick embryo chorioallantoic membrane as an in vivo experimental model to study human neuroblastoma. J Cell Physiol 2018:1–6. https://doi.org/10.1002/jcp.26773
9. Leica Microsystems at https://www.leica-microsystems.com
10. Sales RP (2011) LED, o novo paradigma da iluminação pública. LACTEC/IEP
11. Oliveira CHL, Costa MAD, Costa GH (2015) Comparação entre a Lâmpada Halógena e o LED como fontes de Iluminação na Microscopia Óptica. In: 8º Congresso Brasileiro de Metrologia, Bento Gonçalves, 2015, pp 1–4
12. Kerr PH, Fisher EM, Buffington ML (2008) Dome lighting for insect imaging under a microscope. Am Entomol 54:198–200. https://doi.org/10.1093/ae/54.4.198
13. UNICAMP at https://tinyurl.com/y6xydxj9
14. Waide P (2010) Phase out of incandescent lamps: implications for international supply and demand for regulatory compliant lamps. IEA (International Energy Agency), p 86
15. Zhang Y, Xie W, Li Z et al (2013) Model predictive direct power control of a PWM rectifier with duty cycle optimization. IEEE Trans Power Electron 28:5343–5351. https://doi.org/10.1109/TPEL.2013.2243846
16. Jo S-A, Lee C-H, Kim M-J et al (2019) Effect of pulse-width-modulated LED light on the temperature change of composite in tooth cavities. Dent Mater 35:554–563. https://doi.org/10.1016/j.dental.2019.01.009
17. ARDUINO at https://www.arduino.cc/
18. Yang S-H, Lin J-K, Huang C-J et al (2005) Silibinin inhibits angiogenesis via Flt-1, but not KDR, receptor up-regulation. J Surg Res 128:140–146. https://doi.org/10.1016/j.jss.2005.04.04

Development and Experimentation of a rTMS Device for Rats P. R. S. Sanches, D. P. Silva, A. F. Müller, P. R. O. Thomé, A. C. Rossi, B. R. Tondin, R. Ströher, L. Santos, and I. L. S. Torres

Abstract

Neuropathic pain (NP) is related to the presence of hyperalgesia, allodynia and spontaneous pain, affecting 7–10% of the general population. Repetitive transcranial magnetic stimulation (rTMS) is being applied for NP relief, especially in patients with refractory pain. Objective: As the NP response to existing treatments is often insufficient, we aimed to develop a magnetic stimulator with customized coils and to evaluate rTMS treatment of the nociceptive response of rats submitted to an NP model. Methods: The magnetic stimulator and the butterfly coils were developed in the Biomedical Engineering lab of Hospital de Clínicas de Porto Alegre. The device generated pulses with a 1 ms pulse width at a 1 Hz frequency. A total of 106 adult (60-day-old) male Wistar rats were divided into 9 experimental groups: control (C), control plus sham rTMS (C + s.rTMS), control plus rTMS (C + rTMS), sham neuropathic pain (s.NP), sham neuropathic pain plus sham rTMS (s.NP + s.rTMS), sham neuropathic pain plus rTMS (s.NP + rTMS), neuropathic pain (NP), neuropathic pain plus sham rTMS (NP + s.rTMS) and neuropathic pain plus rTMS (NP + rTMS). NP establishment was achieved 14 days after the surgery for chronic constriction injury (CCI) of the sciatic nerve, and rats were treated with daily 5-min sessions of rTMS for 8 consecutive days. Nociceptive behavior was assessed by von Frey and Hot Plate tests at baseline, after NP establishment and post-treatment. Results: The magnetic field intensity measured 2.5 mm and 5.0 mm from the coil center on the 90° axis was 160 mT and 125 mT, respectively. rTMS treatment promoted a partial reversal of the mechanical allodynia and a total reversal of the thermal hyperalgesia induced by CCI. Conclusions: We presume that low-frequency rTMS is a potential tool for NP treatment, possibly due to the modulation of plasticity and promotion of an analgesic effect.

P. R. S. Sanches (✉) · D. P. Silva · A. F. Müller · P. R. O. Thomé · A. C. Rossi · B. R. Tondin
Serviço de Pesquisa e Desenvolvimento em Engenharia Biomédica, Hospital de Clínicas de Porto Alegre, Rua Ramiro Barcelos, 2350, Porto Alegre, Brazil
e-mail: [email protected]

R. Ströher · L. Santos · I. L. S. Torres
Laboratório de Farmacologia da Dor e Neuromodulação: Ensaios Pré-Clínicos, Centro de Pesquisa Experimental, Hospital de Clínicas de Porto Alegre, Porto Alegre, Brazil

R. Ströher · L. Santos · I. L. S. Torres
Programa de Pós-Graduação em Ciências Biológicas: Farmacologia e Terapêutica, Instituto de Ciências Básicas da Saúde, Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS, Brazil

Keywords

Transcranial magnetic stimulation · Neuropathic pain · Neuromodulation · Brain stimulation

1 Introduction

Neuropathic pain (NP) is a somatosensory nervous system dysfunction characterized by the presence of hyperalgesia, allodynia and spontaneous pain [1]. NP affects 7–10% of the general population and its incidence has increased due to population aging, the incidence of diabetes mellitus and increased survival after cancer chemotherapy [2]. Non-invasive therapies such as repetitive transcranial magnetic stimulation (rTMS) and transcranial direct current stimulation (tDCS) are being applied for NP relief [3], especially in patients with refractory pain [4]. Both techniques promote the modulation of brain function, inducing neuroplasticity in the central nervous system (CNS) by changing the resting membrane potential and modifying neuronal activity [5].

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_119


The primary motor cortex (M1) and/or the dorsolateral prefrontal cortex (DLPFC) are the regions most studied with rTMS for pain. Both are related to decreased pain perception in humans, where rTMS has been applied to treat NP, fibromyalgia and regional pain syndrome [6, 7]. Analgesia induced by neuromodulatory techniques seems to be associated with the re-establishment of normal cortical excitability and depends largely on the remodeled endogenous opioid system [8]. Considering that NP management is a complex issue and that the response to current treatments is often unsatisfactory, the present study aimed to evaluate the effects of rTMS treatment on the nociceptive response of rats submitted to an NP model.

2 Methods

This research had two phases: the development of the rTMS device and customized coils, and the experiments in animals.

2.1 Instrumentation

An rTMS device and customized coils were developed for applying magnetic stimuli to the rats' scalps. The prototype of the rTMS device and coils is shown in Fig. 1. The customized butterfly coil comprises two circular coils (22 mm × 5 mm) made of 30 AWG copper wire and connected to each other at an angle of 120° (Fig. 2). The magnetic field intensity at 2.5 and 5.0 mm from the coil center on the 90° axis was measured with a Koshava 5 magnetometer (Wuntronic, Munich). The rTMS device is an electronic circuit that generates a high DC voltage by rectifying and filtering the AC mains. The circuit diagram is shown in Fig. 3.

Fig. 1 Butterfly magnetic coils (a) and rTMS device (b)

Fig. 2 Butterfly magnetic coil

The operation has two phases that repeat at a 1 Hz rate: (1) the controller activates the solid-state relay to charge the capacitor; (2) the controller applies a 1 ms pulse to the IGBT gate and current flows through the coil, generating a magnetic field. The number of pulses can be set in the controller. The rTMS device can be set up as active or sham: in active mode the device actually applies the magnetic pulse, while in sham mode it only generates the noise of the magnetic pulse, without magnetic stimulus. The two options were necessary in order to run the experimental design shown in Fig. 4.
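The two-phase cycle can be summarized by its timing alone; the sketch below (ours, not the authors' controller firmware) generates the gate-on/gate-off schedule implied by the 1 Hz rate and 1 ms pulse width:

```python
# Timing sketch of the rTMS controller: each 1 s period starts with
# the relay charging the capacitor; the IGBT gate then receives a
# 1 ms pulse that discharges the capacitor through the coil.
PULSE_WIDTH_S = 1e-3   # 1 ms pulse width (from the paper)
PERIOD_S = 1.0         # 1 Hz repetition rate (from the paper)

def pulse_schedule(n_pulses):
    """Return (gate_on, gate_off) times in seconds for each pulse."""
    return [(k * PERIOD_S, k * PERIOD_S + PULSE_WIDTH_S)
            for k in range(n_pulses)]
```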

2.2 Experimental Design

A total of 106 adult male Wistar rats (60 days old; weight about 250 g) were used in this experiment. The animals were randomized into nine experimental groups: control (C), control plus sham rTMS (C + s.rTMS), control plus rTMS (C + rTMS), sham neuropathic pain (s.NP), sham neuropathic pain plus sham rTMS (s.NP + s.rTMS), sham neuropathic pain plus rTMS (s.NP + rTMS), neuropathic pain (NP), neuropathic pain plus sham rTMS (NP + s.rTMS), and neuropathic pain plus


Fig. 3 rTMS circuit diagram

Fig. 4 Experimental design

rTMS (NP + rTMS). Afterwards, rats from the s.NP and NP groups were submitted to sham surgery and to chronic constriction injury (CCI) of the sciatic nerve, respectively. Then, 14 days after CCI surgery, the establishment of NP was assessed by von Frey and Hot Plate tests. From day 15 after CCI surgery, animals underwent daily rTMS sessions for 8 consecutive days. Nociception was evaluated by von Frey and Hot Plate tests at baseline, 14 days after CCI surgery and 24 h after the last rTMS session. The experimental design is shown in Fig. 4.

Neuropathic Pain Model

Neuropathic pain was induced by CCI of the left sciatic nerve [9]. Animals from the s.NP groups had the sciatic

nerve exposed similarly to the NP groups, but without any nerve ligature. Control animals did not undergo any surgical procedure.

rTMS

Fourteen days after CCI surgery and the establishment of NP, the animals from the treated groups underwent one daily 5-min session of rTMS for 8 consecutive days (always starting at 8:30 AM). During stimulation the animals were restrained, wrapped in a cloth, and the coil was fixed to the head using adhesive tape (Micropore™) (Fig. 5). For sham stimulation, animals were also restrained and the coil was placed and fixed in the same position as in active stimulation.


Hot Plate Test

Thermal hyperalgesia was evaluated by the Hot Plate (HP) test. The temperature of the plate was set to 55 ± 0.1 °C. The time (in seconds) between placing the rat onto the plate and the first response (foot licking, jumping, or rapid removal of the paws) was recorded as the latency of the nociceptive response.

Statement of Human and Animal Rights

All procedures were approved by the Institutional Committee for Animal Care and Use (GPPG/HCPA protocol #2017-0438) and conducted in compliance with Brazilian law [11, 12].

Fig. 5 rTMS treatment (rat restrained during the stimulation)

Von Frey Test

Mechanical hyperalgesia was assessed with an automatic von Frey anesthesiometer (Insight, São Paulo, Brazil). Von Frey stimuli were applied to the mid-plantar surface of the operated hind paw through the mesh floor. The withdrawal threshold of the operated left hind paw was expressed in grams (g) [10].

3 Results

Fig. 6 1 Hz burst of magnetic pulses (top) and the shape of one individual pulse (bottom)

The measurement of magnetic field intensity 2.5 mm and 5.0 mm from the coil center on the 90° axis showed 160 mT and 125 mT, respectively. Figure 6 shows the 1 Hz burst of magnetic pulses and the shape of one individual pulse. The results of the Hot Plate and von Frey tests for the nine groups of rats are shown in Fig. 7. The proposed rTMS


Fig. 7 Nociceptive response in the Hot Plate test (top) and von Frey test (bottom)

treatment was able to partially reverse mechanical allodynia and totally reverse thermal hyperalgesia in animals from the NP groups, as shown in Fig. 7.

4 Discussion

According to the nociceptive tests performed 14 days after CCI surgery, NP was established, as in other studies from our group [10, 13]. Animals from the NP groups presented lower mechanical and thermal thresholds than the other groups, including the sham-NP group. At this point, the proposed rTMS treatment was applied, being able to partially reverse mechanical

allodynia and totally reverse thermal hyperalgesia in animals from the NP groups. In the current study, TMS was applied at a low frequency. Low-frequency rTMS has been reported to have inhibitory effects on the brain, which contributes to its analgesic effect [14, 15]. TMS at or below 1 Hz causes neuronal inhibition, whereas higher frequencies produce neuronal facilitation [16]. We highlight that the von Frey test involves Aδ-fibers, while the Hot Plate test assesses tonic pain mediated mainly by C-fibers [17]. The application of noninvasive brain stimulation in small animals is always a challenge in preclinical studies. We consider the inability to restrict the stimulation area a limitation of the current study. This may limit the


translational value of our results. It has been shown in humans that the DLPFC is a brain region involved in the top-down modulation of pain [18], and that the M1 region is a valuable stimulation target in chronic and NP treatments [3, 8, 19]. Therefore, restricting stimulation to these regions may be more effective in the treatment of pain. Additionally, our TMS device was designed only for treatment; unlike TMS devices for humans, it cannot be used for diagnosis. Altogether, the results of the present study may point toward a new treatment option for pain.

5 Conclusion

We presume that low-frequency rTMS is a potential tool for NP treatment, possibly due to the modulation of plasticity and promotion of an analgesic effect.

Acknowledgements This research was supported by the following Brazilian funding agencies: CNPq, CAPES, GPPG/HCPA (FIPE Grant 2017-0438); FAPERGS (Grant PRONEM 16/2551-0000249-5).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Jensen TS, Finnerup NB (2014) Allodynia and hyperalgesia in neuropathic pain: clinical manifestations and mechanisms. Lancet Neurol 13(9):924–935. https://doi.org/10.1016/S1474-4422(14)70102-4
2. Colloca L, Ludman T, Bouhassira D et al (2017) Neuropathic pain. Nat Rev Dis Primers 3:17002. https://doi.org/10.1038/nrdp.2017.2
3. Lefaucheur JP (2016) Cortical neurostimulation for neuropathic pain: state of the art and perspectives. Pain 157(Suppl 1):S81–S89. https://doi.org/10.1097/j.pain.0000000000000401
4. Moore NZ, Lempka SF, Machado A (2014) Central neuromodulation for refractory pain. Neurosurg Clin N Am 25(1):77–83. https://doi.org/10.1016/j.nec.2013.08.011
5. Antal A, Paulus W, Rohde V (2017) New results on brain stimulation in chronic pain. Neurol Int Open 01(04):E312–E315. https://doi.org/10.1055/s-0043-119865
6. Lefaucheur JP, Drouot X, Ménard-Lefaucheur et al (2006) Motor cortex rTMS restores defective intracortical inhibition in chronic neuropathic pain. Neurology 67(9):1568–1574. https://doi.org/10.1212/01.wnl.0000242731.10074.3c
7. Mhalla A, Baudic S, Ciampi de Andrade D et al (2011) Long-term maintenance of the analgesic effects of transcranial magnetic stimulation in fibromyalgia. Pain 152(7):1478–1485. https://doi.org/10.1016/j.pain.2011.01.034
8. Moisset X, de Andrade DC, Bouhassira D (2016) From pulses to pain relief: an update on the mechanisms of rTMS-induced analgesic effects. Eur J Pain 20(5):689–700. https://doi.org/10.1002/ejp.811
9. Bennett GJ, Xie YK (1988) A peripheral mononeuropathy in rat that produces disorders of pain sensation like those seen in man. Pain 33(1):87–107. https://doi.org/10.1016/0304-3959(88)90209-6
10. Cioato SG, Medeiros LF, Marques-Filho PR et al (2016) Long-lasting effect of transcranial direct current stimulation in the reversal of hyperalgesia and cytokine alterations induced by the neuropathic pain model. Brain Stimul 9(2):209–217. https://doi.org/10.1016/j.brs.2015.12.001
11. Brazil (2008) Lei 11794: Procedimentos para o uso científico de animais. https://www.planalto.gov.br/ccivil_03/_ato2007-2010/2008/lei/l11794.htm
12. Ministério da Ciência, Tecnologia e Inovação, CONCEA (2013) Diretriz brasileira para o cuidado e a utilização de animais para fins científicos e didáticos (DBCA). Portaria n. 465, de 23 de maio de 2013. Brasília-DF, Brasil
13. Callai EMM, Scarabelot VL, Fernandes Medeiros L et al (2019) Transcranial direct current stimulation (tDCS) and trigeminal pain: a preclinical study. Oral Dis 25(3):888–897. https://doi.org/10.1111/odi.13038
14. Fregni F, Potvin K, Da Silva D et al (2011) Clinical effects and brain metabolic correlates in non-invasive cortical neuromodulation for visceral pain. Eur J Pain 15(1):53–60. https://doi.org/10.1016/j.ejpain.2010.08.002
15. Sampson SM, Kung S, McAlpine DE et al (2011) The use of slow-frequency prefrontal repetitive transcranial magnetic stimulation in refractory neuropathic pain. J ECT 27(1):33–37. https://doi.org/10.1097/YCT.0b013e31820c6270
16. Weissman-Fogel I, Granovsky Y (2019) The "virtual lesion" approach to transcranial magnetic stimulation: studying the brain–behavioral relationships in experimental pain. Pain Rep 4(4):e760. https://doi.org/10.1097/PR9.0000000000000760
17. Sydney PBH, Conti PCR (2011) Guidelines for somatosensory evaluation of temporomandibular dysfunction and orofacial pain patients. Rev Dor 12(4):349–353. https://doi.org/10.1590/S1806-00132011000400012
18. Seminowicz DA, Moayedi M (2017) The dorsolateral prefrontal cortex in acute and chronic pain. J Pain 18(9):1027–1035. https://doi.org/10.1016/j.jpain.2017.03.008
19. Moisset X, Lefaucheur JP (2019) Non-pharmacological treatment for neuropathic pain: invasive and non-invasive cortical stimulation. Rev Neurol (Paris) 175(1–2):51–58. https://doi.org/10.1016/j.neurol.2018.09.014

Diagnostic and Monitoring of Atrial Fibrillation Using Wearable Devices: A Scoping Review Renata S. Santos, M. D. C. McInnis, and J. Salinet

Abstract

Atrial Fibrillation (AF) is a supraventricular arrhythmia in which an irregularity in atrial electrical activity causes the atria to lose their ability to contract efficiently. This impairs cardiac function and is an increasingly recognized risk factor for palpitations and for clot formation that can cause stroke. Patients suffering from this cardiac pathophysiology require constant monitoring of their cardiac activity, demanding regular visits to the cardiologist or the use of continuous cardiac monitors. To increase the quality of life of these patients, wearable devices can be used to remotely monitor and detect cardiac arrhythmias in real time. This work presents a scoping review on the use of wearable devices for the detection and monitoring of AF. To implement this process, a computational tool, StArt (State of the Art by Systematic Review), was used to assist the researchers. A total of 1979 articles were selected at first; in the end, 10 articles were kept for further evaluation. The results of this review showed that not only are new technologies being made to detect AF, but existing devices are being tested for their validity and feasibility in clinical settings. To test device validity, a 12-lead ECG is the gold standard against which these existing or novel devices' readings are compared, being cited in more than half of the final ten articles as the reference standard. All in all, the studies showed great accuracy, with reported sensitivities all above 90% on average, and all articles concluded that the studied devices had various clinical benefits and were viable options for AF detection in the future.

Keywords

Atrial fibrillation · AF detection · Smartwatch · Wearable devices

R. S. Santos (✉) · M. D. C. McInnis · J. Salinet
Biomedical Engineering, Modeling and Applied Social Sciences Centre, Federal University of ABC/Engineering, Alameda da Universidade, s/n, Anchieta, São Bernardo do Campo, São Paulo, 09606-045, Brazil
e-mail: [email protected]

1 Introduction

Atrial fibrillation (AF) is a supraventricular arrhythmia in which an irregularity in atrial electrical activity causes the atria to lose their ability to contract efficiently [1]. It is considered the most common cardiac arrhythmia in clinical practice; its prevalence is positively correlated with age, progressing from 0.5% in people up to 60 years old to 10% in people over 80 years old [2]. The likelihood of developing AF is about 50% higher in men than in women [2]. AF is correlated with a significant increase in mortality, resulting in higher medical and hospital costs [2]. Several health conditions are associated with an increased risk of AF, including, but not limited to, hypertension, diabetes, valve disease, myocardial infarction and heart failure [1]. Moreover, AF is one of the main causes of stroke, for which early detection is imperative for effective treatment. AF, however, is in many cases asymptomatic and difficult to detect. The diagnosis of AF can be hampered by the episodic nature of events, which are often sudden. These circumstances make AF one of the most relevant public health problems nowadays [3, 4]. Its treatment is based on the use of antiarrhythmic drugs (AA) to control heart rate or rhythm, depending on the patient. However, evidence shows that ablation is preferable to AA therapy to control heart rate [5]. The evolution of medicine in recent decades, combined with medical technologies that implement small, flexible electronic circuits for real-time acquisition of physiological data, has made it possible for countless people who suffer from cardiac arrhythmias to have

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_120


a better quality of life [6]. However, many of these patients still lack continuous monitoring of their cardiac activity, requiring frequent visits to the cardiologist [7]. In this sense, there is great potential for wearable devices to acquire patients' physiological information in real time. In recent years, with the emergence of these devices, countless applications of this type of technology have appeared, particularly in healthcare [8]. Wearable devices, by definition, can be easily attached to our bodies or are suitable for wearing, such as, but not limited to, bracelets, smartwatches, glasses, contact lenses and clothing. Some of these gadgets have great potential for monitoring and detecting AF in patients through the physiological tracking offered by mobile and portable devices [9]. The use of smartphones and smartwatches for AF detection has recently gained attention as a means of low-cost mass screening; this includes technologies such as photoplethysmography (PPG), artificial intelligence and electrodes to acquire electrocardiograms (ECGs), which allow the cardiac frequency and rhythm to be confirmed with reasonable accuracy [10]. Combined with other features, such as ease of use and connectivity, smartwatches can potentially be used for large-scale AF screening and, eventually, replace current detection methods [11]. While studies are evolving to qualify the accuracy of these devices, common approaches to the diagnosis of AF include pulse palpation, PPG, oscillometric blood pressure and ECG, the last of which is commonly used as the gold standard for AF diagnosis. Manual pulse checking is done by the individuals themselves, who count the beats felt on either wrist over a given interval. PPG is a simple and low-cost optical technique that can be used to detect changes in blood volume in the microvascular bed of tissues [12, 13].
Oscillometry is a non-invasive method which measures the amplitude of fluctuations in pressure that are created by the expansion of artery walls each time blood passes through them. To measure the fluctuations in pressure, a cuff is placed on the subject’s forearm and gradually inflated and deflated to detect systolic and diastolic pressure [14, 15]. Recently, a review presented the benefits of AF screening through wearable devices based on PPG [16]. This technology limits identification of AF episodes by only analyzing one parameter: the dynamics of the heart rate. The use of ECG allows direct measurement of the heartbeat, rhythm and electrical activity of the heart, making it possible to detect AF. Furthermore, it is possible to obtain information about the origin and propagation of the action potential throughout the heart. As different zones are activated sequentially, it is possible to associate them with temporal variations of the ECG complexes [17]. Clearly, wearable


devices that make use of ECG acquisition may be of higher clinical relevance. In this study, we aim to identify the current technologies developed for AF screening, focusing on wearable devices with embedded ECG acquisition systems.
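As a toy illustration of the rate-dynamics principle that PPG-based screening relies on (not a clinical algorithm; the 0.12 threshold is an arbitrary example value), possible AF can be flagged from the irregularity of successive RR intervals:

```python
# Flag possible AF from RR-interval irregularity: AF produces an
# "irregularly irregular" rhythm, i.e. a high coefficient of
# variation of the beat-to-beat intervals.
def rr_irregularity(rr_ms):
    mean = sum(rr_ms) / len(rr_ms)
    var = sum((x - mean) ** 2 for x in rr_ms) / len(rr_ms)
    return (var ** 0.5) / mean  # coefficient of variation

def flag_possible_af(rr_ms, threshold=0.12):
    return rr_irregularity(rr_ms) > threshold
```

A perfectly regular rhythm yields an irregularity of zero and is not flagged, while a highly variable interval series exceeds the threshold.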

2 Methodology

In this work, a scoping review of scientific articles was conducted through a search in the following databases: PubMed, IEEE (Institute of Electrical and Electronics Engineers), Web of Science and ScienceDirect. Articles were selected using English keywords such as AF, Smartwatch, Wearable Devices and AF Detection. To carry out the scoping review, a computational tool, StArt (State of the Art by Systematic Review), was used to assist the researchers. During the search, a review plan was used to identify the sources, and two search strings were used: "AF AND wearable devices" and "AF AND smartwatch". These were used to recognize the most relevant studies by applying inclusion and exclusion criteria based on the title and abstract [18]. The research was carried out from November 2019 to April 2020, using articles published online between 2009 and 2020. The review process adhered to the following inclusion criteria: (1) the publication is available in the above databases; (2) the publication contains words from the search strings; (3) the full text is accessible; (4) the publication is in English; (5) the publication dates from 2009 to 2020. Articles that did not meet these requirements were rejected under the exclusion criteria: (6) studies that mention wearable devices but do not mention AF; (7) studies that mention AF but do not mention wearable devices; (8) studies that mention neither AF nor wearable devices; (9) studies without mention of AF detection; (10) studies without mention of AF monitoring; (11) lack of access to the full text; (12) publications that do not contain the words in the search strings. These criteria were defined to refine the search for the most relevant articles for this review.
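The inclusion criteria above are mechanical enough to express as a filter; the sketch below mirrors the screening step (the record fields and structure are our assumptions, not StArt's data model):

```python
# Screening step of the review: a record passes if it matches one of
# the two search strings and satisfies the inclusion criteria (1)-(5)
# that can be checked per record.
SEARCH_STRINGS = [("af", "wearable devices"), ("af", "smartwatch")]

def include(record):
    text = (record["title"] + " " + record["abstract"]).lower()
    matches = any(a in text and b in text for a, b in SEARCH_STRINGS)
    return (matches
            and record["language"] == "en"
            and record["full_text_available"]
            and 2009 <= record["year"] <= 2020)
```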

3 Results

3.1 Selection of Articles

A total of 1979 papers were initially retrieved. After review of title and abstract, 143 articles were accepted, 1627 were rejected and 209 were duplicates. Of these, only the works


that mentioned wearable devices capable of generating ECG signals for the detection of AF were selected. A total of ten articles met this criterion. Figure 1 illustrates the refinement process in the selection of articles used for this scoping review. The articles were ranked according to score values computed with StArt. Tables 1 and 2 present the score calculation method and the articles included in the scoping review, respectively.
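The scoring rule of Table 1 can be written down directly (a sketch using naive substring counting; the function and field names are illustrative, and StArt's actual matching may differ, e.g. whole-word matching):

```python
# Score from Table 1: 10 points per keyword occurrence in the title,
# 7 in the abstract and 5 in the keyword list.
def article_score(title, abstract, keywords, terms):
    t, a = title.lower(), abstract.lower()
    kw = " ".join(keywords).lower()
    score = 0
    for term in terms:
        term = term.lower()
        score += 10 * t.count(term)   # substring count; see caveat above
        score += 7 * a.count(term)
        score += 5 * kw.count(term)
    return score
```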

3.2 ECG Signal The authors of articles [19, 20] claim that general public interest is driving the rapid expansion of portable health Fig. 1 Diagram of the steps for articles selection

793 Table 1 Method for calculating the score value Keywords

Score value

In title

10 points per occurrence

In abstract

7 points per occurrence

In keywords

5 points per occurrence

care. The United States Food and Drug Administration (FDA) has approved several technologies, one being the AliveCor KardiaBand: a bracelet for the Apple watch (Apple, Cupertino, California, USA) with a built-in electrode which generates an intelligent electrocardiogram (iECG) that is transmitted through an audio signal processed by an application on the user’s smartphone. Signal processing provides an AF diagnosis and creates a report that can be

R. S. Santos et al.

Table 2 Articles included in the scoping review

Title | Author(s) | Source | Year | Score
Accuracy of a smartwatch based single-lead ECG device in detection of AF [19] | Rajakariar et al. | Heart | 2020 | 58
ECG WATCH: a real time wireless wearable ECG [20] | Randazzo et al. | IEEE Int Symp Med Meas Appl (MeMeA) | 2019 | 36
Patient directed recording of a bipolar three-lead ECG using a smartwatch with ECG function [21] | Bischof et al. | JoVE | 2019 | 31
Development of a wearable mobile electrocardiogram monitoring system by using novel dry foam electrodes [22] | Tseng et al. | IEEE Systems Journal | 2014 | 29
Utilization and clinical feasibility of a handheld remote electrocardiography recording device in cardiac arrhythmias and AF: a pilot study [23] | Chang et al. | International Journal of Gerontology | 2015 | 29
Diagnostic accuracy of a novel mobile phone application for the detection and monitoring of atrial fibrillation [24] | Rozen et al. | The American Journal of Cardiology | 2018 | 24
Smartwatch performance for the detection and quantification of AF [25] | Wasserlauf et al. | Circulation: Arrhythmia and Electrophysiology | 2019 | 22
Screening for AF using a mobile, single-lead ECG in Canadian primary care clinics [26] | Godin et al. | Canadian Journal of Cardiology | 2019 | 17
An intelligent telecardiology system using a wearable and wireless ECG to detect AF [27] | Lin et al. | IEEE Transactions on Information Technology in Biomedicine | 2010 | 10
A wearable mobile electrocardiogram measurement device with novel dry polymer-based electrodes [28] | Wang et al. | TENCON 2010—2010 IEEE Region 10 Conference | 2010 | 5

shared via email. This device has the potential to be used on a large scale for AF screening. In Brazil, Pellanda et al. [29] highlighted the importance of using wearable devices for preventing the development of cardiac disorders. Among the devices mentioned, the Apple Watch, which uses KardiaBand technology, is able to generate a large quantity of information, which allows not only the monitoring of the individual patient but also the planning of collective prevention strategies. The results of this scoping review showed that not only are new technologies being developed to detect AF, but existing devices are also being tested for their validity and feasibility in the clinical setting. To test device validity, the 12-lead ECG is the gold standard against which the readings of these existing or novel devices are compared, being cited in more than half of the final ten articles as the reference standard. All the articles were prospective studies, with the only prior information collected about patients being their medical history (for evaluation and control purposes). In Samol et al. [21], for example, 100 healthy patients were studied. This means that the validity of the device being tested would need to be sampled again using a

population of patients with a history of cardiac pathology. The maximum number of participants studied across these ten articles was 339; further studies with larger samples would help generate more compelling results for the validation of these instruments. The studies showed high accuracy, with all reported sensitivities above 90%, and all articles concluded that the studied devices had various clinical benefits and were viable options for AF detection in the near future. Randazzo and colleagues [20] highlighted that the latest generation of smartwatches produces a single-lead ECG recording comparable to that of the Einthoven I bipolar lead of a standard 12-channel ECG, using the back of the watch as the positive electrode and the screen as the negative electrode. The ECG recording is controlled by the patient and activated if symptoms occur. The system creates a PDF document for further analysis by a health professional. Tables 3 and 4 present the studies that introduced a new device and those that performed a clinical validation, respectively. Table 5 summarizes the statistical metrics and main benefits reported by the articles in this scoping review.

Diagnostic and Monitoring of Atrial Fibrillation …

Table 3 Works about new devices

New devices | Works
Einthoven 3-lead ECG with smartwatch | [21]
AliveCor KardiaBand ECG watch | [26]
Wearable mobile ECG monitoring system | [22, 28]
Wireless ECG with warning expert system | [27]

Table 4 Works about clinical validation

Clinical validation | Works
AliveCor KardiaBand ECG watch | [19]
ECG watch | [20]
Smartwatch with ECG function | [25]
Wearable mobile ECG monitoring system (WMEMS) | [24]
Handheld ECG device | [23]

Table 5 Results of the metrics and main benefits obtained

Evaluation

Benefits

Using the AliveCor KardiaBand ECG watch, the metrics were: SE 94.4%, SP 81.9%, PPV 54.8%, NPV 98.4%. Agreement between the 12-lead ECG and this device was moderate when unclassified tracings were included (κ = 0.60, 95% CI 0.47–0.72) [19]

Combining the automated device diagnosis (AliveCor KardiaBand ECG watch) with electrophysiologist interpretation of unclassified tracings improved overall accuracy. However, the authors suggest that physician involvement will likely remain an essential component when exploring the utility of these devices for arrhythmia screening [19]

Recordings of the ECG watch and the 12-lead ECG from healthy and AF patients were compared. The ECG watch heart-rate error remains within about 5% of the maximum. Cross-correlation of the two heart-rate estimation systems resulted in an agreement of 90.5% [20]

Performs an on-request single-lead ECG recording in 10-s segments, anytime, anywhere, without the need to physically go to a hospital. It can record any of the peripheral leads and share the results with physicians by the tap of a button in a smartphone or computer application. The ECG watch needs half of the recording time of other analogous commercial solutions and yields a numerical output signal, which can then be used for further inspection, analysis, and filtering. The ECG WATCH also embeds an algorithm to automatically detect AF [20]

100 healthy subjects underwent ECG recording with the proposed smartwatch and a 12-lead ECG. Of the 300 recordings, 93% of the smartwatch recordings were assigned to the corresponding Einthoven leads I–III by blinded cardiologists. Fleiss kappa analysis showed moderate interrater reliability (κ = 0.437; p < 0.001). The intraclass correlation coefficient was 0.703 [21]

Wider use of a smartwatch for patient-directed three-lead ECG recording may be feasible in the population at large after a short tutorial. Morphological and quantitative ECG parameters such as the P wave, QRS complex, and T wave of the smartwatch recordings were highly comparable to the corresponding standard ECG leads [21]

Lead II ECG recordings from the wearable mobile electrocardiogram monitoring system with dry foam electrodes agreed about 98.21% with those acquired with traditional wet electrodes. The average accuracy and sensitivity of the first-derivative approach for R-wave detection embedded in the proposed device are 98.14% and 97.32%, respectively (the ECGs used were from the MIT-BIH database) [22]

The wearable mobile electrocardiogram monitoring system was tested for detecting AF in the hospital. It provides good performance and is a promising prototype for ECG telemedicine applications [22]

339 participants were enrolled to investigate the incidence of abnormal ventricular beats with a symptom-driven portable remote electrocardiography (ECG) device. Participants underwent 24-h ECG monitoring transferred using a handheld portable electrocardiograph with embedded automatic ECG wavelet data-extraction software. Of the 1152 data transfers, 98.4% were successful. 32.5% of the participants presented evidence of AF, and 50.9% of the detected arrhythmias were AF [23]

The handheld ECG device shows clinical feasibility with a high AF detection rate, suggesting that this tool can be used remotely to support the clinical diagnosis of patients with cardiac arrhythmias [23]


Evaluation of the accuracy of the Cardiio Rhythm mobile application for AF detection. 98 patients scheduled for elective cardioversion for AF were enrolled. Cardiio Rhythm correctly identified 93.1% of the recordings as AF and 90.1% as non-AF. SE = 93.1%, SP = 90.9%, PPV = 92.2%, and NPV = 92.0% [24]

The Cardiio Rhythm mobile application holds promising potential for the accurate discrimination of AF from sinus rhythm in patients with a prior history of AF, but clinical trials in the outpatient setting are necessary to evaluate the utility of this technology in monitoring AF recurrence in patients undergoing cardioversion or AF ablation [24]

This study compares the accuracy of an AF-sensing watch (KardiaBand) with simultaneous recordings from an insertable cardiac monitor (ICM). A convolutional neural network (SmartRhythm 2.0) was trained on ECGs from 7500 KardiaBand users. The machine-learning technique was then validated in 24 AF patients with an ICM who simultaneously wore the watch. 31,348.9 h of simultaneous AF-sensing watch and ICM recordings were analyzed. SmartRhythm 2.0 outcomes were: episode SE of 97.5%, duration SE of 97.7%, patient SE of 83.3%, and PPV of 39.9% [25]

The AF-sensing watch with SmartRhythm 2.0 is highly sensitive for the detection of AF and the assessment of AF duration in an ambulatory population when compared with invasive cardiac monitoring. Such devices may represent an inexpensive, noninvasive approach to long-term AF surveillance and management [25]

This study investigated the feasibility of a mobile ECG device for AF screening. 184 physicians were provided with a KardiaMobile ECG device. Evaluation of the Kardia device was done using a Likert scale. 72% of the physicians completed the survey. 7585 patients were screened (42% of eligible patients), and AF was detected in 471 patients [26]

Physicians had a favorable impression of the device. Moreover, physicians generally reported a high perceived clinical value (94%) and ease of integration (89%) of the device [26]

The system identifies sinus tachycardia, sinus bradycardia, wide QRS complex, AF, and cardiac asystole. The average accuracy, SE, and PPV were 94%, 94.5%, and 99.4%, respectively, for 10 healthy subjects and 20 AF patients [27]

A novel wireless telecardiology ECG system equipped with a built-in automatic warning expert system is presented. This study demonstrates that the proposed device is capable of accurately detecting AF episodes and instantaneously alerting both the user and the healthcare personnel, facilitating early intervention [27]

20 healthy subjects and 25 AF patients were studied. Diagnosis was based on 15-s ECG recording segments over a total of 5 min, resulting in 400 and 1500 detection trials for the control and AF groups, respectively. SE and PPV were 100% for the control group and 94.56% and 99.22% for the AF group [28]

In the proposed study, the authors developed and validated a wearable mobile electrocardiogram monitoring system for ECG acquisition with dry polymer-based electrodes. From the results, the authors concluded that the wearable ECG acquisition device is suitable for long-term ECG monitoring in daily life (>33 h), contributing to ECG telemedicine applications [28]

SE sensitivity; SP specificity; PPV positive predictive value; NPV negative predictive value
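The four metrics summarized in the footnote above derive directly from a binary confusion matrix. As an illustrative sketch only (the counts below are hypothetical and not taken from any of the reviewed studies):

```python
def diagnostic_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Compute SE, SP, PPV and NPV from confusion-matrix counts."""
    return {
        "SE":  tp / (tp + fn),   # sensitivity: AF cases correctly flagged
        "SP":  tn / (tn + fp),   # specificity: non-AF cases correctly cleared
        "PPV": tp / (tp + fp),   # positive predictive value
        "NPV": tn / (tn + fn),   # negative predictive value
    }

# Hypothetical device-vs-12-lead-ECG counts, for illustration only
m = diagnostic_metrics(tp=90, fn=7, tn=100, fp=10)
print({k: round(v, 3) for k, v in m.items()})
```

Note how PPV and NPV, unlike SE and SP, depend on how common AF is in the tested population, which is why screening studies in low-prevalence settings can report high SE together with a modest PPV (as in [25]).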

4 Discussion

AF is one of the most common cardiac arrhythmias in adults, associated with increased mortality and increased health care costs, and devices available on the market can help with monitoring and detection of AF [25]. Wearable devices for cardiac arrhythmia detection are becoming increasingly popular. According to Ponciano and colleagues [30], the wearable technology market, which is made up of devices that monitor physiological parameters (heart rate, ECG and sleep pattern, for example), is expected to grow to 929 million connected devices in 2021. By obtaining physiological data using a simple user interface,

these devices can provide a diagnosis and transmit information to a cardiologist for immediate review. Among the available technologies, KardiaBand (AliveCor, Mountain View, CA) is a smartwatch accessory approved by the FDA that allows continuous assessment of the user's heart rate. The device could function as a continuous, wearable AF monitor with real-time notification to the patient [25]. According to Lin and colleagues [27], this telecardiology system can be applied as a health monitor not only to normal individuals but also to hospitalized patients. Moreover, wearable devices can also replace the 24-h monitor for longer monitoring and real-time detection. In recent years, several wearable pulse-rate meters using PPG technology have been developed and have become


widely available. PPG is a simple and low-cost method that takes measurements at the skin's surface in order to detect volumetric changes in blood in the peripheral circulation [31]. However, such devices might limit the clinician's diagnosis, since they do not allow analysis of the cardiac rhythm through the ECG signal [10]. In this work we present a scoping review that elucidates the technologies currently developed and validated in the literature, and highlights their accuracy, reliability, strengths, and limitations in daily clinical practice. Most of the articles are prospective observational studies that analyze data collected over a time period, which facilitated the evaluation of multiple outcomes.
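Whether the input is an ECG R-R series or a PPG pulse-interval series, most of the AF detectors discussed above exploit the same underlying cue: beat-to-beat interval irregularity. The sketch below illustrates the general idea with a simple coefficient-of-variation rule; the 0.1 threshold and the interval values are illustrative assumptions, not parameters of any reviewed device:

```python
import statistics

def rr_irregularity(rr_intervals: list[float], threshold: float = 0.1) -> bool:
    """Flag a possibly irregular rhythm when the coefficient of variation
    (stdev / mean) of successive RR intervals exceeds the threshold."""
    cv = statistics.stdev(rr_intervals) / statistics.mean(rr_intervals)
    return cv > threshold

regular   = [0.80, 0.82, 0.79, 0.81, 0.80, 0.82]  # seconds, steady sinus-like rhythm
irregular = [0.62, 0.95, 0.71, 1.10, 0.58, 0.88]  # seconds, AF-like variability

print(rr_irregularity(regular), rr_irregularity(irregular))
```

Production algorithms such as SmartRhythm 2.0 [25] replace this hand-tuned rule with trained models, but the interval series remains the core feature.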

5 Conclusions

In this article, a scoping review on the use of wearable devices for monitoring and detecting AF is presented. The review showed that the topic has been studied for the past 10 years, with greater emphasis in the most recent years. It should be noted that almost half of the selected publications appeared in 2019, which shows a growing interest in this area of study. These technologies have seen an increase in the number of users [31]. They facilitate the outpatient, non-invasive assessment of various cardiac indices for the identification and monitoring of cardiac arrhythmias, with the potential to generate large amounts of biomedical data. Although this technology has promising results, this review highlights the relevance of a topic that still needs to be widely discussed. Wearable devices have the potential to offer a variety of uses in the health sector. This supports and strengthens the prospect of long-distance communication between doctor and patient, as the studies showed high accuracy, with all reported sensitivities above 90%. All the selected articles in this review concluded that the studied devices had various clinical benefits and were viable options for AF detection in the near future.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Magalhães LP, Figueiredo MJ, Cintra FD et al (2016) II Diretrizes Brasileiras de fibrilação atrial. Sociedade Brasileira de Cardiologia 106(4) Supl 2:1–35
2. Kannel WB, Wolf PA, Benjamin EJ et al (1998) Prevalence, incidence, prognosis, and predisposing conditions for AF: population-based estimates. Am J Cardiol 82(8A):2N–9N
3. Gladstone DJ, Spring M, Dorian P et al (2014) AF in patients with cryptogenic stroke. N Engl J Med 370(26):2467–2477
4. Marsili IA, Biasiolli L, Masè M et al (2020) Implementation and validation of real-time algorithms for AF detection on a wearable ECG device. Comput Biol Med 116:103540
5. Shi H, Wang H, Qin C et al (2020) An incremental learning system for AF detection based on transfer learning and active learning. Comput Methods Programs Biomed 187:105219
6. Yetisen AK, Martinez-Hurtado JL, Unal B et al (2018) Wearables in medicine. Adv Mater 30(33):1706910
7. Ramos GEO (2016) Desenvolvimento de um protótipo para aquisição e processamento de sinais cardíacos. Dissertação de Mestrado, Instituto de Ciências Exatas e Aplicadas, Campus João Monlevade, Universidade Federal de Ouro Preto
8. Moreira SPFV (2017) Sistemas de informação wearable aplicados à área da saúde. Mestrado em Engenharia Informática, Instituto Superior de Engenharia do Porto
9. McConnell MV, Turakhia MP, Harrington RA et al (2018) Mobile health advances in physical activity, fitness, and AF: moving hearts. J Am Coll Cardiol 71:2691–2701
10. Lee J, Reyes BA, McManus DD et al (2013) AF detection using an iPhone 4S. IEEE Trans Biomed Eng 60(1):203–206
11. Tajrishi FZ, Chitsazan M, Chitsazan M et al (2019) Smartwatch for the detection of AF. Crit Pathw Cardiol 18(4):176–184
12. Cai W, Chen Y, Guo J et al (2020) Accurate detection of AF from 12-lead ECG using deep neural network. Comput Biol Med 116:103378
13. Freedman B, Camm J, Calkins H et al (2017) Screening for atrial fibrillation: a report of the AF-SCREEN International Collaboration. Circulation 135(19):1851–1867
14. Muzny M, Henriksen A, Giordanengo A et al (2020) Wearable sensors with possibilities for data exchange: analyzing status and needs of different actors in mobile health monitoring systems. Int J Med Inform 133:104017
15. Forouzanfar M, Dajani HR, Groza VZ et al (2015) Oscillometric blood pressure estimation: past, present, and future. IEEE Rev Biomed Eng 8:44–63
16. Foster KR, Torous J (2019) The opportunity and obstacles for smartwatches and wearable sensors. IEEE Pulse 10(1):22–25
17. Georgiou K, Larentzakis AV, Khamis N et al (2018) Can wearable devices accurately measure heart rate variability? A systematic review. Folia Med (Plovdiv) 60(1):7–20
18. Rajakariar K, Koshy AN, Sajeev J et al (2020) Accuracy of a smartwatch based single-lead electrocardiogram device in detection of atrial fibrillation. Heart 106:665–670
19. Randazzo V, Ferretti J, Pasero E (2019) ECG WATCH: a real time wireless wearable ECG. In: IEEE Int Symp Med Meas Appl (MeMeA), pp 1–6
20. Samol A, Bischof K, Luani B et al (2019) Patient directed recording of a bipolar three-lead electrocardiogram using a smartwatch with ECG function. J Vis Exp 154
21. Tseng KC, Lin B-S, Liao L-D et al (2014) Development of a wearable mobile ECG monitoring system by using novel dry foam electrodes. IEEE Syst J 8(3):900–906
22. Chang W-L, Hou CJY, Wei S-P et al (2015) Utilization and clinical feasibility of a handheld remote electrocardiography recording device in cardiac arrhythmias and AF: a pilot study. Int J Gerontol 9(4):206–210
23. Rozen G, Vaid J, Hosseini SM et al (2018) Diagnostic accuracy of a novel mobile phone application for the detection and monitoring of AF. Am J Cardiol 121(10):1187–1191
24. Wasserlauf J, You C, Patel R et al (2019) Smartwatch performance for the detection and quantification of atrial fibrillation. Circ Arrhythm Electrophysiol 12(6):e006834
25. Godin R, Yeung C, Baranchuk A et al (2019) Screening for AF using a mobile, single-lead electrocardiogram in Canadian primary care clinics. Can J Cardiol 35(7):840–845
26. Lin C-T, Chang K-C, Lin C-L et al (2010) An intelligent telecardiology system using a wearable and wireless ECG to detect AF. IEEE Trans Inf Technol Biomed 14(3):726–733
27. Wang I-J, Liao L-D et al (2010) A wearable mobile ECG measurement device with novel dry polymer-based electrodes. In: TENCON 2010—2010 IEEE Region 10 Conference, pp 379–384
28. Pellanda EC, Pellanda LC (2016) A prevenção primordial e a "saúde de vestir": os wearables na Cardiologia. Arq Bras Cardiol 106(6):455–456
29. Ponciano V, Pires IM, Ribeiro FR et al (2020) Detection of diseases based on electrocardiography and electroencephalography signals embedded in different devices: an exploratory study. Braz J Dev 6(5)
30. Sajeev JK, Koshy AN, Teh AW (2019) Wearable devices for cardiac arrhythmia detection: a new contender? Intern Med J 49(5):570–573

The Effects of Printing Parameters on Mechanical Properties of a Rapidly Manufactured Mechanical Ventilator T. R. Santos, M. A. Pastrana, W. Britto, D. M. Muñoz and M. N. D. Barcelos

For the production of these specimens, we employed the Simplify3D software. Optimized settings were suggested for the deposition of the thermoplastic material, considering a reduction in speed to 30 mm/s, a layer height of 0.100 mm, 100% infill, and a line overlap of 60% to avoid voids at the edges of the pieces, in order to increase the mechanical resistance. The observed results are satisfactory and in agreement with the analyzed bibliography, indicating that adjustments in the parameters and configurations of material deposition influence the mechanical strength of the parts.

Abstract

With the advent of SARS-COV-2, industry and academia have been mobilizing to find technical solutions to satisfy the high demand for hospital supplies. This work aims to study the manufacturing process of melting and depositing thermoplastic material (three-dimensional printing) to build a mechanical ventilator. We center the methodology of this work on the observation of good practices for printing plastic parts for medical and hospital use, aiming to guarantee mechanical resistance and satisfy sanitary restrictions. At first, we studied materials for manufacturing parts for application in the medical-hospital environment. In a second stage, a study of the mechanical resistance of tensile-test specimens was developed, based on the ASTM D638 standard, printed with different directions of material deposition and using different types of thermoplastics with potential use in medical systems, such as PLA, ABS, and PETG. In a third stage, the mechanism of a mechanical ventilator was created in SolidWorks; this mechanism automates an Ambu bag using a rack-and-gear system. The properties of the PLA material (from the second stage) were applied to the gear (the most critical part of the mechanism) and the effects of the stepper-motor torque on the gear were simulated. Finally, the mechanism was 3D printed.

T. R. Santos (B) Faculty of Gama, Aerospace Engineering Undergraduate Course, University of Brasilia, Brasilia, Brazil
M. A. Pastrana · W. Britto Department of Mechanical Engineering, University of Brasilia, Brasilia, Brazil
D. M. Muñoz Faculty of Gama, Electronics Engineering Undergraduate Course, University of Brasilia, Brasilia, Brazil
M. N. D. Barcelos Faculty of Gama, Aerospace and Energy Undergraduate Course, University of Brasilia, Brasilia, Brazil

Keywords

3D printing • SARS-COV-2 • Covid-19 • Hospital supply • PLA • Mechanical ventilator

1 Introduction

With the advent of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-COV-2) and Coronavirus Disease 2019 (Covid-19), industry and academia have been pushing to seek new technological solutions to meet the growing and increasingly urgent worldwide demand for supplies [1,2]. In the race for solutions to mitigate the consequences of the pandemic, the 3D Fused Deposition Modeling (FDM) printer emerges as an ally for the prototyping and manufacturing of hospital supplies [3,4]. 3D FDM printing, or rapid prototyping, is an additive manufacturing technology in which three-dimensional parts are produced through the sequential deposition of molten material in layers, based on a Computer-Aided Design (CAD) model [5,6]. Rapid prototyping differs from traditional manufacturing technologies, as it is a tool capable of producing parts with complex geometry and unique characteristics in just one manufacturing process [5]. Besides, material waste is much lower when compared to production techniques such as subtractive manufacturing, in which components are constructed by removing material [7].

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_121


The manufacturing process through three-dimensional printing can be divided into three stages: the design of parts through computational modeling, the processing of the CAD model using 3D slicing software, and finally the manufacturing and finishing of the part [8,9]. The slicing process is one of the most significant for additive manufacturing, since at this stage the printing parameters are defined. Parameters such as layer height, nozzle temperature, deposition speed, infill, layer overlap, extrusion speed of the thermoplastic, and number of finishing layers of the piece, among others, are analyzed [10,11]. This fact is evidenced in the works of [9,10], which demonstrate that parameters such as extrusion speed, layer height, and layer orientation are essential, as they directly influence the mechanical characteristics of the material. According to [10], the effects of neglecting them are often not noticed because some of the parameters, when isolated, have little influence on the properties; together, however, they considerably influence the final properties of the produced parts [12,13]. According to [12], the finish of the piece and its structural resistance are directly related to the internal infill parameter. This parameter can cause severe deformation in the parts, affecting the external finish and the mechanical strength. In addition, the slicing software is also of significant importance for production: slicers can be imprecise in estimating printing time and the amount of material used, and such parameters directly influence the final quality of the product [14,15]. In this way, the main contribution of this paper is to study how 3D printing parameters influence the quality and properties of parts produced to meet the growing demand for hospital supplies caused by the SARS-COV-2 pandemic. A case study was conducted to produce the main parts of a mechanical ventilator that have no direct contact with patients. We used tensile tests to collect data on the mechanical properties of the materials, simulations of maximum motor torque to validate the mechanical parts, and, finally, a bibliographic study to generate a set of standard printing parameters.

2 Materials and Methods

For the development of this research, an open-source Graber i3 FDM printer with a single extruder was used to produce the structures and samples. For slicing and processing the CAD models, we used the Simplify3D software. The materials were 3DLAB filaments, 1.75 mm in diameter, made of the following thermoplastics: polylactic acid (PLA), acrylonitrile butadiene styrene (ABS), and poly(ethylene glycol terephthalate) (PETG), all in natural color. The parts were designed and simulated using the SolidWorks computer modeling software.

The research was divided into three main stages. First, the slicing parameters and good prototyping practices were analyzed. Next, several samples intended for mechanical characterization of the materials were produced. Finally, we performed simulations of critical parts and manufactured components of a mechanical ventilator prototype.

2.1 Slicing Parameters

In rapid prototyping, the 3D model must be processed by slicing software to determine parameters such as layer orientation, extruder nozzle temperature, deposition speed, layer height, and the coordinates of the three-dimensional object. This reflects the findings of [16], which confirmed that the orientation of the part and the layer height have a direct and pronounced influence on the mechanical properties of the produced parts. In addition, the layer height, together with the deposition speed, strongly influences the existence of voids in the piece. Geng et al. [10] demonstrated that extreme speeds, both slow and fast, are detrimental to mechanical properties. Therefore, in this work the 3D-printed pieces were produced with deposition speeds similar to those reported in [9] and with the raster angle that yielded satisfactory mechanical properties in [16–18].

The main parameter addressed in this work was the effect of the printing orientation on the mechanical properties. Three sets of 5 units were manufactured, each set produced with one print orientation: horizontal, vertical, or flat. Parameters such as edge overlap, printing angle, bed height, and types of supports were also studied in this research, but only their finishing effects were considered in the production of the pieces for the mechanical ventilator. Primarily, we produced the pieces using traditional parameters: layer height of 0.200 mm, printing speed of 60 mm/s, raster angle of −45°/+45°, 100% infill, and a 0.4 mm diameter nozzle. Subsequently, we employed optimized parameters: layer height of 0.100 mm, print speed from 20 to 30 mm/s, raster angle of 0°/90°, 100% infill, and edge overlap of 60%.

2.2 Mechanical Tests

In this research, several tensile tests were conducted to characterize the properties of the materials and thus analyze the influence of the printing parameters. For this, the ASTM D638 technical standard was used, which standardizes mechanical tensile tests of plastic materials and applies to reinforced and non-reinforced plastics [18]. The tensile test was applied to printed samples under specific conditions of room temperature, humidity, and speed, using an Instron® 8801 machine. This machine is a compact servo-hydraulic fatigue testing device that meets the requirements of static and dynamic tests.
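The two slicing profiles compared in this work can be summarized as plain parameter sets. The layout below is only an illustrative sketch (the key names are descriptive, not Simplify3D settings identifiers); the values are the ones reported in the text:

```python
# Traditional vs. optimized slicing profiles used in this work.
# Key names are illustrative; values follow the text above.
traditional = {
    "layer_height_mm": 0.200,
    "print_speed_mm_s": 60,
    "raster_angle_deg": (-45, 45),
    "infill_percent": 100,
    "nozzle_diameter_mm": 0.4,
}
optimized = {
    "layer_height_mm": 0.100,
    "print_speed_mm_s": (20, 30),
    "raster_angle_deg": (0, 90),
    "infill_percent": 100,
    "edge_overlap_percent": 60,
}

# Parameters present in both profiles whose values changed
changed = {k for k in traditional if k in optimized and traditional[k] != optimized[k]}
print(sorted(changed))
```

Note that the infill is held at 100% in both profiles, so the comparison isolates layer height, speed, and raster angle as the varied deposition parameters.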

2.3 Development of an Ambu-based Mechanical Ventilator

In this work, we adopted the specifications of the minimally clinically acceptable ventilator used during the Covid-19 pandemic. Table 1 summarizes the set of clinical requirements based on the consensus of what is minimally acceptable in the opinion of anesthesia and intensive care medical professionals and of the regulatory agencies, taking the emergency context into consideration. Details regarding monitoring and alarms, gas and electricity supply, and biological and software safety can be found in [19,20]. Figure 1 shows the CAD models of the proposed Ambu-based ventilator, which aims to be an open-source, low-cost, rapidly manufactured piece of equipment that provides basic functions, reserving the most sophisticated ventilators for critical patients.

A static analysis of the system was performed, allowing the motor to be selected. For the analysis, the normal and tangential forces produced by the Ambu were considered (ideally at 30° as shown in Fig. 2).

Fa = PEP × A    (1)

f = sin(30°) × Fa/2 = sin(30°) × PEP × A/2    (2)

F = (Fa + 2f) × SF    (3)

The force Fa is 0.888 kgf and f is 0.2222 kgf, giving F = 1.5984 kgf, the force required at the rack with a safety factor of SF = 1.2. The required torque at the gear is expressed by (4):

Tg = F × Gr × SF    (4)

The estimated minimum motor torque is about 1.6313 kgf·cm with a safety factor SF = 1.2. Based on the estimated torque, a Nema 23 stepper motor with a maximum torque of 5.61 kgf·cm (0.55 N·m) at 3600 PPS (pulses per second) was selected.
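Equations (1)–(4) with the values from Fig. 2 can be checked numerically. The snippet below is only a sketch of the sizing calculation; variable names mirror the symbols in the text (any small difference from the 1.6313 kgf·cm quoted above comes from intermediate rounding):

```python
import math

# Inputs from Fig. 2
PEP = 0.037   # peak pressure, kgf/cm^2 (37 cm H2O)
A   = 24.0    # contact area with the Ambu, cm^2
Gr  = 0.85    # gear radius, cm
SF  = 1.2     # safety factor

Fa = PEP * A                               # (1) normal force, kgf
f  = math.sin(math.radians(30)) * Fa / 2   # (2) tangential force, kgf
F  = (Fa + 2 * f) * SF                     # (3) required rack force, kgf
Tg = F * Gr * SF                           # (4) required gear torque, kgf*cm

print(round(Fa, 3), round(f, 3), round(F, 4), round(Tg, 2))
```

The computed requirement of about 1.63 kgf·cm sits well below the 5.61 kgf·cm maximum of the selected Nema 23 motor, confirming the sizing margin.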

Table 1 Minimal requirements of a mechanical ventilator

Requirement | Mandatory | Optional | Observation
CMV | VCV | PCV | –
PAP | 35 cm H2O | 70 cm H2O | –
PEP | 37 cm H2O | – | –
IAPL | 15–40 cm H2O | – | Steps of 5 cm H2O
PEEP | 5–20 cm H2O | – | Steps of 5 cm H2O
I:E | 1:2 | 1:1 and 1:3 | –
RR | 10–30 breaths per minute | – | Steps of 2
Vt | 400 ml | 350 and 450 ml | –

CMV (Continuous Mandatory Ventilation), VCV (Volume Controlled Ventilation), PCV (Pressure Controlled Ventilation), PAP (Plateau Pressure), PEP (Peak Pressure), IAPL (Inspiratory Airway Pressure Limit), PEEP (Positive End Expiratory Pressure), I:E (Inspiratory:Expiratory ratio), RR (Respiratory Rate), Vt (Tidal Volume)

Fig. 1 Schematic of the mechanical respirator


Fig. 2 Pressure analysis: peak pressure PEP = 37 cm H2O (0.037 kgf/cm²), gear radius Gr = 0.85 cm, contact area with the Ambu A = 24 cm², safety factor SF = 1.2

2.4 Simulation and Production of Parts for the Prototype of the Mechanical Ventilator

The results obtained in the mechanical characterization of the materials manufactured through rapid prototyping were the basis for the numerical simulations in SolidWorks. In particular, it is important to analyze the torsion effect on the gear responsible for transferring the torque of the Nema 23 motor to the rack of the mechanical ventilator. The simulation procedure was separated into two parts. Initially, an isotropic model of the gear was used, with the lowest properties found in the tensile tests for PLA. Subsequently, we employed an orthotropic model, using the property values obtained in the horizontal direction for the X and Y directions and those of the vertical direction for the Z direction of the simulation. For both models, the von Mises criterion was applied. For the simulation, it was necessary to conduct a convergence analysis of the finite element mesh. Since the results at 12 thousand nodes showed a very small difference of 0.45% compared to 25 thousand nodes, one can conclude that it would not be necessary to use a tetrahedral mesh more refined than about 13 thousand nodes.

We manufactured gears for the Nema 23 motor, supports, spacers, the fastener assembly, the rack, and fittings for hoses for the prototype of the ventilator. Those parts were produced with 1.75 mm diameter filament of the 3DLab brand, made of natural PLA. Different values of deposition speed, contour overlap, and first-layer nozzle temperature were used, allowing a qualitative comparison to be performed.

3 Results

3.1 Mechanical Tests and Simulations

Table 2 shows the main mechanical properties for ABS, PLA, and PETG obtained in the tensile tests. It was possible to notice the effect of the printing orientation on the mechanical properties. The samples printed in the horizontal orientation (H) had their properties maximized, while the samples produced in the vertical orientation (V) had the worst results. Only for PETG did the flat orientation (F) present the worst results. In the mechanical ventilator, the applied torque was 0.55 N·m. Table 3 shows the simulation results for isotropic and orthotropic models of the gear made of PLA. Notice

Table 2 Mechanical properties of materials

Material / orientation | Shear modulus (GPa) | Poisson's ratio (ν) | Young's modulus (GPa) | Yield strength (MPa)
ABS H | 0.68 ± 0.039 | 0.31 ± 0.082 | 2.20 ± 0.097 | 34.12 ± 0.203
ABS V | 0.56 ± 0.042 | 0.27 ± 0.083 | 2.09 ± 0.102 | 27.40 ± 0.203
ABS F | 0.58 ± 0.042 | 0.22 ± 0.083 | 2.19 ± 0.102 | 32.43 ± 0.203
PLA H | 1.21 ± 0.075 | 0.34 ± 0.094 | 3.58 ± 0.161 | 59.88 ± 0.101
PLA V | 1.18 ± 0.064 | 0.25 ± 0.090 | 3.34 ± 0.143 | 53.23 ± 0.101
PLA F | 1.13 ± 0.064 | 0.23 ± 0.087 | 3.39 ± 0.148 | 46.58 ± 0.101
PETG H | 0.64 ± 0.040 | 0.36 ± 0.080 | 1.80 ± 0.105 | 40.32 ± 0.100
PETG V | 0.65 ± 0.040 | 0.29 ± 0.077 | 1.67 ± 0.105 | 34.16 ± 0.100
PETG F | 0.63 ± 0.040 | 0.26 ± 0.080 | 1.61 ± 0.105 | 36.18 ± 0.100

H is horizontal printing (axis X), V is vertical printing (axis Z), F is flat printing (axis Y)
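The orientation effect reported in Table 2 can be quantified directly from the mean yield strengths; the sketch below uses only the table's values:

```python
# Mean yield strengths (MPa) from Table 2; H = horizontal, V = vertical, F = flat.
yield_strength = {
    "ABS":  {"H": 34.12, "V": 27.40, "F": 32.43},
    "PLA":  {"H": 59.88, "V": 53.23, "F": 46.58},
    "PETG": {"H": 40.32, "V": 34.16, "F": 36.18},
}

def drop_vs_horizontal(material: str, orientation: str) -> float:
    """Percentage reduction in yield strength relative to horizontal printing."""
    h = yield_strength[material]["H"]
    return 100.0 * (h - yield_strength[material][orientation]) / h

for mat in yield_strength:
    print(mat, round(drop_vs_horizontal(mat, "V"), 1), "% weaker when printed vertically")
```

For PLA, for instance, vertical printing costs about 11% of the yield strength relative to horizontal printing, consistent with the H-best/V-worst trend stated above.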

121 The Effects of Printing Parameters on Mechanical Properties …


Table 3 Simulation of mechanical properties of the gear made of PLA

Material / orientation | Simulation type | Safety factor | Von Mises stress (MPa) | Node count
PLA H | Isotropic | 9.284 | 6.439 | 12837
PLA V | Isotropic | 8.254 | 6.436 | 12837
PLA | Orthotropic | 9.266 | 6.452 | 12837

that for the isotropic model (horizontal orientation) the safety coefficient was 9.284, whereas for the orthotropic model it was 9.266. One can conclude that the prototyped gear can withstand the mechanical efforts foreseen in the design of the mechanical ventilator (maximum motor torque of 0.55 N·m).
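Assuming the usual definition of the safety factor as yield strength over maximum von Mises stress, the reported coefficients can be checked against Tables 2 and 3 (the small residual difference would come from the exact stress field of the simulation):

```python
# Safety-factor sanity check, assuming SF = yield strength / max von Mises stress.
# Input values taken from Tables 2 and 3 (PLA, horizontal printing).
pla_yield_h = 59.88          # MPa (Table 2)
von_mises_isotropic = 6.439  # MPa (Table 3, isotropic model)
von_mises_orthotropic = 6.452

sf_iso = pla_yield_h / von_mises_isotropic
sf_ortho = pla_yield_h / von_mises_orthotropic
print(round(sf_iso, 2), round(sf_ortho, 2))  # both close to the reported ~9.3
```

Both values sit far above the design safety factor of 1.2 quoted in Fig. 2, which supports the conclusion that the PLA gear withstands the design loads.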

3.2 Printing Parameters and Their Influences

Using the standard printing parameters, such as 0.200 mm layer height, 60 mm/s printing speed, and 100% infill, a qualitative analysis of the prototyped parts was performed. It was noticeable that, in parts of greater complexity, such as the mechanical respirator connections, there was lamination of some layers and distortion of the geometry. Additionally, in parts with small details, such as the gear, filling voids were observed, so that the teeth became brittle (see Fig. 3).

Fig. 3 Defective parts: the upper left quadrant shows layer lamination in the center of the part; the upper right quadrant shows local voids and detachment from the table; the lower left quadrant shows loss of detail in the gear teeth; and the lower right quadrant shows voids at the gear edges, caused by incomplete filling of the image generated by the Simplify3D slicing tool

After analyzing the results obtained with the traditional printing parameters, a bibliographic search was conducted to determine which parameters best apply to the various parts necessary for the production of the proposed mechanical ventilator prototype. This led to the following parameters: (a) deposition speed of 20–30 mm/s, depending on the geometry complexity; it was noticeable that a speed below 20 mm/s caused the PLA parts to have displacement between the layers, and a speed above 30 mm/s produced voids in their construction; (b) raster angle



Fig. 4 Enhancement of the same parts after using the new parameters

of 0°/90°; (c) layer height of 0.100 mm; (d) contour overlap of 60%. On the one hand, values higher than 60% contour overlap generated defects in the part finishing and loss of depth in the gear-tooth recesses; in larger pieces, we observed burrs, which would be extremely harmful for tubes and connectors that need a smooth interior. On the other hand, values below 55% generated voids at the edges; (e) the nozzle temperature in the first layer was set 5 °C higher than for the remaining layers. Variations greater than 5 °C in the nozzle or print-bed temperature generate thermal stresses in the parts, causing them to warp and detach from the printing table [9,10,12,17]. The above-mentioned parameters were used to produce all parts of the ventilator, from those with extremely complex geometries to those with very little detail, without problems of bed displacement, deformation of the parts, or presence of voids, as can be seen in Fig. 4. In Fig. 5, it is possible to see the effects of the printing orientation of the rack. Figure 6 shows details of the produced parts assembled on a prototype of the mechanical ventilator.
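The tuned parameter set above can be collected into a single profile, sketched here as an illustrative dictionary (the key names are ours, not Simplify3D's actual settings keys):

```python
# Illustrative print profile collecting the tuned parameters from the text.
# Key names are invented for this sketch; only the values come from the study.
TUNED_PROFILE = {
    "deposition_speed_mm_s": (20, 30),  # chosen within range by geometry complexity
    "raster_angle_deg": (0, 90),
    "layer_height_mm": 0.100,
    "contour_overlap_pct": 60,          # below 55% -> edge voids; above 60% -> burrs
    "first_layer_extra_temp_c": 5,      # first-layer nozzle temperature offset
}

def overlap_ok(pct: float) -> bool:
    """Contour-overlap window reported in the text: 55-60%."""
    return 55 <= pct <= 60

print(overlap_ok(TUNED_PROFILE["contour_overlap_pct"]))  # -> True
```

Encoding the admissible windows this way makes it easy to validate a slicer profile before committing a long print of a critical part.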

4 Discussion

In this study on the effect of printing parameters on the production of parts for a mechanical ventilator, we observed the importance of establishing a set of printing parameters that maximizes mechanical properties and facilitates the prototyping of these structures. Besides, there is a need to create a standard for the production of structures made by rapid prototyping, which would make validation by health agencies feasible. The main result of this study was the definition of a set of printing parameters that proved efficient for structures of different geometric complexities and specificities, allowing the prototyping of all the parts necessary for the construction of a mechanical ventilator prototype. This set of parameters proved to be efficient not only qualitatively but also quantitatively, through the results obtained in the mechanical tests. Furthermore, the strength of the prototyped parts was demonstrated through simulations, whose safety coefficients show that PLA can be used for the production of the structures.



Fig. 5 Print orientation: a vertical (V), b horizontal (H), and c flat (F)

Fig. 6 Prototype of the mechanical ventilator. A video of the mechanical ventilator can be seen at the following link: https://youtu.be/uHftWasdNIs

The results obtained corroborate the works of [4,9,10,12], which demonstrated how the parameters directly influence the structures produced, both qualitatively and quantitatively. They also demonstrated the effect of the horizontal print orientation (X): the maximization of the parts' mechanical properties. This can be seen clearly in Fig. 3, where the use of inadequate parameters generated parts with severe defects, as in the case of the gear that lost its shape completely, or of the connection that suffered damage in the middle of the part, making its use unfeasible. In Fig. 4, it is possible to see the beneficial effects of using a correct set of parameters. PETG's elasticity modulus was inferior to those found in the bibliography; this difference is due to the anisotropy of the material, as in the literature its elasticity modulus should be around 2.1 GPa. There is a visible similarity of behavior in the mechanical test results of PETG and ABS, because both materials are amorphous, whereas PLA is a semicrystalline material, as seen in [8].



Despite the positive results, which corroborate several authors, deeper studies on the topic are necessary, so that one can make a complete analysis of the effects of each parameter on the mechanical and chemical properties of the parts, and of their influence in hospital-environment applications. An analysis of the effects of heat treatments on parts produced by rapid prototyping would also be of interest, considering that studies such as Avila et al. (2019) suggest that heat treatment carried out at 90% of the glass transition temperature maximizes the mechanical properties of such parts, in some cases by up to 10%. Eventually, a study is needed on whether combining those results with the ones obtained through the improvement of printing parameters would generate any harmful effect for use in hospital environments.

5 Conclusion

The maximization of the mechanical properties of structures prototyped through three-dimensional printing in the horizontal printing orientation (X), together with the better finish of the parts produced with the improved printing parameters, demonstrates the importance of always considering the influence of these parameters when producing parts through rapid prototyping. In summary, the results obtained were significant, as they open the possibility of using three-dimensional printing inside hospitals, by the technicians responsible for the maintenance of hospital equipment, who could, in a matter of hours, produce replacement parts for defective structures, speeding up the maintenance of hospital supplies and increasing independence. However, one must always consider that the pieces cannot be in direct contact with the patient, as that would require a more in-depth study on the subject. Furthermore, it would be necessary to create a manual of good practices for the production of hospital supplies through rapid prototyping, so that it could easily be applied at different production sites without losing the standards of production excellence. Despite the possibilities that arise from these studies and the potential applications of 3D printing in biomedical engineering projects, it is necessary to carry out more in-depth research on the effects of printing parameters on the production of hospital supplies through rapid prototyping, including an analysis of which thermoplastics are most suitable for such applications. It is also necessary to carry out several other mechanical tests, destructive or nondestructive, so that the effects of these parameter sets on the properties of the structures produced through three-dimensional printing become more evident and conclusive.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Pourhossein B, Dabbag A, Fazeli M (2020) Insights into the SARS-CoV2 outbreak, the great global challenge. N Engl J Med 965:325–329
2. Yin S, Huang M, Li D et al (2020) Difference of coagulation features between severe pneumonia induced by SARS-CoV2 and non-SARS-CoV2. J Thromb Thrombolysis
3. Souza J, Gomez Malagon LA (2016) Fabricação de Modelos Anatômicos Usando Prototipagem Rápida. Engenharia e Pesquisa Aplicada
4. Silva JRC (2014) Método de Concepção de Articulações Flexíveis em Impressora 3D. Universidade de Brasília
5. Ambrosi A, Pumera M (2016) 3D-printing technologies for electrochemical applications. Chem Soc Rev 45:2740–2755
6. Besko MA, Bilyk CB, Sieben PG (2017) Aspectos técnicos e nocivos dos principais filamentos usados em impressão 3D. Eletrônica dos Cursos de Engenharia 1:3
7. Canessa E, Fonda C, Zennaro M (2016) Low-cost 3D printing. 202
8. Medeiros CBS (2018) Avaliação de peças de poli (ácido lático) (PLA) impressas para aplicações biomédicas. Mestrado em Ciência e Engenharia de Materiais, Natal, Rio Grande do Norte
9. Wang P, Zou B, Xia H, Ding S, Huang C (2019) Effects of printing parameters of fused deposition modeling on mechanical properties, surface quality, and microstructure of PEEK. Mater Process Technol 271:62–74
10. Geng P, Wu JZW, Wang WY, Wang S, Zhang S (2019) Effects of extrusion speed and printing speed on the 3D printing stability of extruded PEEK filament. Manuf Process 37:266–273
11. Song C, Lin F, Ba Z et al (2016) My smartphone knows what you print: exploring smartphone-based side-channel attacks against 3D printers. 895–907
12. Martinez ACP, Souza DL, Santos DM, Pedroti LG, Carlo JC, Martins (2019) Avaliação do Comportamento Mecânico dos Polímeros ABS e PLA em Impressão 3D Visando Simulação de Desempenho Estrutural
13. Domingues LG (2018) Estudo e Caracterização da Impressão
14. Paoli MA (2008) Degradação e Estabilização de Polímeros. Chemkeys
15. Auras RA, Lim LT, Selke SEM, Tsuji H (2010) Poly(lactic acid): synthesis, structures, properties, processing, and applications. Wiley, Hoboken, New Jersey, United States
16. Wu W, Geng P, Li G, Zhao D, Zhang H, Zhao J (2015) Influence of layer thickness and raster angle on the mechanical properties of 3D-printed PEEK and a comparative mechanical study between PEEK and ABS. Materials
17. Liu Z, Wang G, Huo Y, Zhao W. Research on precise control of 3D print nozzle temperature in PEEK material. In: AIP conference proceedings
18. ASTM (2002) Standard test method for tensile properties of plastics
19. Medicines & Healthcare products Regulatory Agency—MHRA. Rapidly manufactured ventilator system
20. Agência Nacional de Vigilância Sanitária—ANVISA (2020) Resolução da Diretoria Colegiada – RDC no 386, Maio

Fully Configurable Real-Time Ultrasound Platform for Medical Imaging Research S. Rodriguez, A. F. Osorio, R. O. Silva, L. R. Domingues, H. J. Onisto, G. C. Fonseca, J. E. Bertuzzo, A. A. Assef, J. M. Maixa, A. A. O. Carneiro, and E. T. Costa

Abstract

The aim of this paper is to present a new ultrasound platform for medical imaging research, in which all the image formation, capture and emission parameters are user-defined. The requirement to be fully configurable was necessary to enable the evaluation of new signal processing and beamforming algorithms developed by researchers. Since it is a new equipment, different subsystems (synchronism, human–machine interface, platform configuration, image formation, hardware and mechanics) were designed and developed. The degree of flexibility required resulted in a large amount of configuration parameters to be described by the user. The JSON file format provided a well-structured and clear way to describe the different parameters. The hardware design is multi-board: the various functions of the platform are performed by the Synchronism, Multiplex, Tx and Rx boards, mounted in a rack that communicates with a PC. The transmitted beam and the receiver beamforming are generated and implemented in programmable hardware, using FPGAs. Some design techniques, like the use of FPGAs in commercial modules, reduced the cost and development time. The RF signal is processed in a GPU using four pipelines that run sequentially to produce the images. First results using different configurations have proven the efficiency of the solution.

S. Rodriguez (&) · A. F. Osorio · R. O. Silva · L. R. Domingues · H. J. Onisto · G. C. Fonseca · J. E. Bertuzzo
Eldorado Research Institute, Campinas, Brazil
e-mail: [email protected]

A. A. Assef · J. M. Maixa
Federal University of Technology—Paraná (UTFPR), Curitiba, Brazil

A. A. O. Carneiro
Department of Physics, Faculty of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo (DF/FFCLRP/USP), Ribeirão Preto, Brazil

E. T. Costa
School of Electrical and Computer Engineering (FEEC) and Center for Biomedical Engineering (CEB), University of Campinas (UNICAMP), Campinas, Brazil

Keywords

Ultrasound real-time platform · Flexible configuration · Multi-board system · GPU-based medical imaging

1 Introduction

Research groups constantly seek to improve techniques and methods in order to expand the frontiers of knowledge so as to achieve unprecedented results and bring benefits to society. Using appropriate tools (or developing them, if they are unavailable) is indeed a fundamental step in this effort. With the aim of offering to Brazilian Universities and Research Centers a versatile tool with multiple levels of interaction, a fully configurable ultrasound platform has been developed [1]. In order to guarantee extreme autonomy to the researcher, the platform allows the execution of unprecedented scenarios for research, being able to derive equally unprecedented results both for the academy and for society in general. The developed platform prototype presented in this article is the result of a Project funded by Brazilian FINEP (Conv. FINEP-UNICAMP-FUNCAMP 01–10-0538–00) and The National Health Fund of The Ministry of Health (Proc. MS/FNS/UNICAMP nº 2210/2008). The Project involved several Brazilian Institutions to design and construct a medical ultrasound equipment operating in Modes B, M and Doppler. This national effort was coordinated by CEB/UNICAMP with researchers involved with ultrasound research from the following institutions: School of Electrical and Computer Engineering (FEEC/UNICAMP) and Center for Biomedical Engineering (CEB/UNICAMP), The Biomedical Engineering Program of the Federal University

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_122



S. Rodriguez et al.

of Rio de Janeiro (PEB/COPPE/UFRJ), The Ultrasound Laboratory of the Graduate Program in Electrical and Computer Engineering (CPGEI) of the Federal University of Technology—Parana (UTFPR), GIIMUS—Research and Innovation Group in Medical Instrumentation and Ultrasound of the Physics Department of the Faculty of Philosophy, Sciences and Letters of Ribeirão Preto of the University of São Paulo (DF/FFCLRP/USP), The Medical Informatics Service of The Heart Institute of the Faculty of Medicine of The University of São Paulo (InCor/FMUSP), The Department of Mechatronics and Mechanical Systems of the Polytechnic School of the University of São Paulo (USP) and The Physics Department of The Federal University of São Carlos (DF/UFSCar), with the technological support of the Eldorado Research Institute.

2 Method

Based on a set of input data, existing platforms perform a series of calculations that aim to optimize, according to a given technique, the generation of the ultrasonic transmission beam. The same applies to the processing of the received signals, in which adaptive algorithms execute internal calculations to apply to the echo signal [2]. The proposed platform does not perform any type of internal calculation to determine the parameters to generate the ultrasonic transmission beam, nor for the reception chain; the only exceptions are the apodizations performed in the FPGA and the delay-and-sum beamforming. This characteristic implies that all parameters necessary to configure the transmission and reception chains must be calculated in advance and loaded into the platform. This "zero calculation" approach offers the researcher total control of all known parameters of a pulse-echo ultrasound system, making the researcher responsible for the experiment's integrity. With this approach, it is possible, for example, to run a study consisting of a shot with an aperture of eight elements in which the first four elements are configured for a 10 cm focus and the subsequent four elements are configured for a 15 cm focus. The JSON (JavaScript Object Notation) file format [3] was chosen to provide a well-structured and clear way for the user to describe the large amount of configuration parameters. The configuration subsystem is responsible for parsing the configuration file to set the corresponding parameters of the different functions placed on the different boards of the platform. The JSON file is loaded in the PC application, which creates an intermediary JSON and sends it to the embedded configuration subsystem, which runs in the dual-core ARM installed on the Synchronization board and is responsible for configuring the hardware. The platform sends the RF signal to the PC, which processes the signal to perform the image formation function. The hardware design took into account solutions to lower the total cost of the platform. For example, the choice of FPGAs in commercial modules reduced the number of layers needed for each board. The first experiments proved the effectiveness of the design and the flexibility of the proposed configuration method. The manuscript is arranged as follows: Sect. 3, Hardware Design, presents the different boards that compose the platform and their functions. The solution to meet the requirement of flexible configuration is described in Sect. 4, Configuration of the Platform. Section 5, Data Acquisition and Image Formation, describes the software structure developed to perform the image formation. The paper ends with Results and Conclusions.

3 Hardware Design

The platform is composed of a set of boards mounted in a rack, which communicates with a PC through USB 3.0 and Ethernet interfaces. The platform generates the ultrasound pulses that stimulate the transducer and packs the RF signals to be processed, while the PC runs both the image formation algorithm and the configuration software. The hardware has a modular architecture, composed of Tx, Rx, Mux and Synchronism boards, in addition to a power supply module, connected to each other by a backplane topology mounted on a rack. An Rx board can be mounted as a daughterboard on a Tx board to form a 64-channel Tx/Rx module. The backplane board is designed with 5 slots (1 Mux, 2 Tx/Rx modules, 1 Synch and 1 power module). The Mux board contains the physical connector for the transducer and implements the multiplexing needed to drive a transducer with more channels than the other boards support. The Mux board can be easily changed to accommodate other transducer connector pinouts. The Synch board is responsible for the embedded portion of the configuration subsystem, which extracts the configuration parameters from the intermediate JSON file and sends them to each respective board. More details about the configuration procedures can be found in Sect. 4 of this article. The Synch board also generates all the synchronization signals that are distributed to the rest of the platform, so that every board receives the same clock and trigger signals to correctly capture the ultrasound data. The Tx board generates the pulse beams and the Rx board processes the echoes. Each Tx board has two commercial FPGA (Xilinx Artix-7) modules integrated to perform the transmitted beam, while each Rx board has two commercial FPGA (Xilinx Kintex-7) modules to implement the receiver

Fully Configurable Real-Time Ultrasound Platform for Medical …

beamforming. The current platform is mounted with one Tx/Rx module, totaling 64 physical channels. The platform is capable of generating images by stimulating a 128-element transducer using a multiplexer on the Mux board. The rack contains additional slots for future expansion. The RF signal sent by the Rx boards to the PC is processed in a GPU to generate the image. This approach is less complex in terms of hardware design than other solutions, such as implementing the image generation in a DSP, which would require a specific hardware design to incorporate this DSP in one of the boards. Each Rx board is connected to the PC through a dedicated USB 3.0 line. This choice derived from the fact that the calculation of the communication throughput showed a satisfactory frame update rate for research purposes. In addition, the image formation algorithm was optimized to execute faster than the USB data throughput.
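Under the platform's "zero calculation" approach, the per-channel transmit focusing delays must be computed offline by the user before being loaded into the configuration. A minimal sketch of such a precomputation is shown below; the array pitch, focus depth and sound speed are illustrative assumptions, not the platform's values:

```python
import math

# Illustrative offline computation of transmit focusing delays for a
# linear array: each element is delayed so that all wavefronts arrive
# at the focal point simultaneously. Pitch, focus and sound speed are
# example values chosen for this sketch.

def focus_delays(n_elements: int, pitch_m: float, focus_m: float,
                 c: float = 1540.0) -> list[float]:
    """Delays (s) per element for an on-axis focus at depth focus_m."""
    center = (n_elements - 1) / 2.0
    paths = [math.hypot((i - center) * pitch_m, focus_m) for i in range(n_elements)]
    longest = max(paths)
    return [(longest - p) / c for p in paths]  # outer elements fire first

delays = focus_delays(n_elements=8, pitch_m=0.3e-3, focus_m=0.10)
# Edge elements have the longest path, hence zero delay; central
# elements are delayed the most.
```

Tables like this, one per scanline, are what the JSON configuration described in Sect. 4 carries to the Tx boards.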


3.1 Use of Commercial FPGA Modules

High-performance FPGA models have a very large number of pins. As a consequence, it is not uncommon for systems with more than 128 channels to need boards with 14 or 16 layers. The manufacturing of such complex boards accounts for a large part of the total cost of a platform. To circumvent this challenge, the proposed platform uses commercial FPGA modules, which comprise an FPGA integrated with peripherals, like USB 3.0 and Ethernet interfaces, in a single SODIMM-pinout module. In this approach, all the complexity of floor planning, layout, routing, fan-out, etc. of the FPGA is already solved. As a result, the number of layers of the custom platform boards is as low as 6 or 8. In addition to the benefit of strongly reducing cost and development effort, future upgrades become easier, since a number of FPGA models with different processing and memory capacities are offered on the market while maintaining the same pinout.

4 Configuration of the Platform

The configuration of the platform is performed by software that is divided into a non-embedded part (which runs on the PC) and an embedded part (which runs on the ARM). The configuration parameters are described by the user and loaded in the PC application. When the user loads a configuration file in the application, the file is parsed to check against invalid values. This validation is necessary to prevent settings that may be harmful to the hardware, such as pulsing for too long or overstraining the power supply unit. However, there are no checks to make sure that the configuration of the scanlines is consistent and that the profile will produce a valid image; this checking must be done beforehand. After the user loads the file and selects a profile, the application converts the configuration file to a new, simplified internal JSON, which closely mimics the memory arrays used by the hardware. During the conversion, the application removes from the JSON the settings related to image formation, which are pertinent only to the PC, as well as the profiles that were not selected. This conversion is done so that the ARM processor has to do a minimal amount of processing on the received configuration file, focusing instead on copying data arrays to the correct memory locations, an operation that must be performed in real time. The conversion also reduces the size of the configuration file, which can be quite significant, as it contains settings for each scanline. The resulting configuration file is then sent to the Synch board through a REST API. After the Start command is sent to the platform, the application starts listening for packets sent by the platform through the USB 3.0 connections.

4.1 Embedded Configuration Subsystem

The SoC (System on a Chip) placed on the Synch board is an integrated circuit divided into two distinct areas. The Programmable Logic (PL) contains the FPGA, which was used to implement the DPRs (dual-port RAMs) that store the configuration for each function: Tx, Rx, Mux and Trigger Controller. The Processing System (PS) area, in turn, contains a dual-core ARM A9, equipped with peripherals like Ethernet, USB, UART and SPI ports. The function of the software that runs in the ARM is twofold. The non-real-time part implements the embedded portion of the configuration subsystem of the platform. By means of a pre-defined REST interface, it connects to the application that runs on the PC and implements commands to:

• Receive and parse the intermediate JSON configuration file;
• Start and stop the ultrasound machine;
• Shut down the platform;
• Return the log file;
• Respond to a heartbeat signal.

The real-time portion of the ARM software, in turn, reads the configuration parameters that were parsed from the JSON file and writes them to each specific DPR. The software runs on a tiny, customized Linux distribution. Since the parameters are used to configure the production of the ultrasound pulses and the reception of the echoes (see next section), the writes to the DPR memories must execute in a synchronous way. The rising edge of an external signal called LOAD_CONFIG triggers the write to all the DPR addresses. The falling edge of LOAD_CONFIG triggers the transmission, via the SPI protocol, of these parameters from the DPRs to the respective FPGAs of each board of the platform.

4.2 Configurable Parameters

The need for an open, real-time ultrasound platform for research in medical imaging, in which all parameters of emission, capture and image formation should be defined by the user, implies a large amount of information to be specified by the user in order to configure the platform. An important requirement is that the user should be capable of describing the configuration parameters in a clear, organized and standardized way. The popular JSON file format specifies a flexible and hierarchical structure, composed of objects and arrays, which was demonstrated to be appropriate to describe all the configuration parameters of the platform with a good level of organization. The JSON configuration file allows the definition of profiles, depending on the set of variables specified, such as pulse format, depth and frequency. The number of profiles in a file is defined by the user. The parameters that can be configured are the following:

• Number of scanlines in a frame.
• Order of scanlines (e.g. considering a B frame + Doppler, the order of scanlines could be B,B,B,B,D,D,D,B,B,B,…).
• Aperture: the number of channels that will be excited to form one scanline.
• Apodization: weights from 0 to 100% for each received channel.
• Pulse delay: individual delay for each channel in transmission, for focusing.
• Echo delay: individual delay for each channel in reception, to align the wave.
• Pulse repetition period for each frame; can be modified on a frame-by-frame basis.
• Period of time in which the platform is pulsing.
• Period of time in which the platform is receiving echoes.
• Pulse width: used to arbitrarily form the pulse.
• Pulse voltage: used to arbitrarily form the pulse.
• Mux mapping: used to map a channel to an element of the transducer (e.g. Channel 1 to Element 1, Channel 1 to Element 65).
• AFE LNA gain: access to the Analog Front End registers that configure the LNA gain.
• AFE VCAT gain: access to the Analog Front End registers that configure the VCAT gain versus time.
• AFE PGA gain: access to the Analog Front End registers that configure the PGA gain.
• AFE LPF: access to the Analog Front End registers that configure the cutoff frequency of the LPF.
• AFE HPF: access to the Analog Front End registers that configure the ADC HPF.
• Number of samples per scanline.
• Size of the ensemble for a Doppler acquisition.
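To illustrate how such parameters could be organized, the sketch below builds and validates a minimal, hypothetical profile; the field names are our own invention, as the platform's actual JSON schema is not reproduced in this paper:

```python
import json

# A minimal, hypothetical configuration profile illustrating how the
# parameters listed above could be organized in JSON. Field names are
# illustrative only, not the platform's published schema.
profile_json = """
{
  "profiles": [
    {
      "name": "b_mode_example",
      "scanlines": 64,
      "scanline_order": ["B", "B", "B", "B", "D", "D", "D", "B"],
      "aperture": 8,
      "apodization_pct": [100, 100, 80, 80, 80, 80, 100, 100],
      "pulse_delay_ns": [0, 12, 21, 26, 26, 21, 12, 0],
      "pulse_voltage_v": 50,
      "afe": {"lna_gain_db": 18, "pga_gain_db": 24}
    }
  ]
}
"""

config = json.loads(profile_json)
# A consistency check of the kind the user must perform: the platform
# itself does not validate scanline consistency.
assert len(config["profiles"][0]["apodization_pct"]) == config["profiles"][0]["aperture"]
print(config["profiles"][0]["name"])  # -> b_mode_example
```

The hierarchical object/array structure maps naturally onto the per-channel arrays (apodization, delays) that the embedded subsystem copies into the DPRs.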

5 Data Acquisition and Image Formation

The RF signal is sent to the PC using two USB 3.0 connections, one for each Rx board. Each USB packet carries data corresponding to a scanline, after the delay-and-sum algorithm has been applied to the entire set of transducer channels. Upon arrival, the USB packets are classified according to the type of data they carry, which can be M mode, B mode or Doppler. The data is then buffered in local memory, along with the data received from other packets of the same type, until a complete frame has been received. All the RF signal processing is done on graphics processing units (GPUs). There is a direct correlation between the frame data points and the pixels that compose the final image. The same calculations have to be applied independently to each frame data point to produce an image. This characteristic makes image formation a highly parallelizable task. This fact, along with the need to quickly process large amounts of data in order to achieve real-time performance, makes this task a natural fit for a GPU [4]. After a frame is completely received, its data is copied from the CPU memory to the GPU memory. There are four pipelines that run sequentially on the GPU to produce the images: B Mode, M Mode, Color Doppler and Power Doppler. Not all four pipelines are required to be active at the same time. The processing steps for each pipeline are implemented in either the OpenGL Shading Language (GLSL), a language used to program the graphics pipeline of modern GPUs, or CUDA, a parallel computing API created by Nvidia to enable general-purpose processing on their GPU architecture. The choice of technology for each step depended on the data being processed: in general, steps where both the input and output are images are implemented in GLSL, while steps where either the input or the output are numeric data arrays are implemented in CUDA.
This GLSL/CUDA separation takes advantage of the GPU graphics pipeline when dealing with images, which easily enables optimizations such as fast access to neighboring pixels and hardware linear interpolation, as well as making the coding process easier. The pipelines are represented in Fig. 1. Each arrow corresponds to an image formation pipeline: gray is for B mode, yellow for M mode, blue for Color Doppler and orange for Power Doppler. The pipeline steps are represented in

Fully Configurable Real-Time Ultrasound Platform for Medical …


Fig. 1 The four image formation pipelines. Gray is for B mode, yellow for M mode, blue for color Doppler and orange for power Doppler

boxes with two different colors: the red ones are implemented in GLSL and the green ones in CUDA. The RF data are not the same for all pipelines: B mode and M mode each receive their own data, while Power and Color Doppler share the same input RF data. Once the data are processed and an image is formed on the GPU, it can be rendered directly to the screen.
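As an illustration of the per-point parallelism described above, the envelope-detection and log-compression steps of a generic B-mode pipeline can be sketched on the CPU with NumPy. The real platform runs these steps in GLSL/CUDA; the function names and the 60 dB dynamic range below are illustrative assumptions, not the platform's actual parameters:

```python
import numpy as np

def envelope(rf):
    """Envelope of RF scanlines via the analytic signal (FFT-based
    Hilbert transform), computed along the depth axis (axis 0)."""
    n = rf.shape[0]
    spectrum = np.fft.fft(rf, axis=0)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spectrum * h[:, None], axis=0)
    return np.abs(analytic)

def bmode(rf, dynamic_range_db=60.0):
    """Log-compress the envelope to an 8-bit grayscale image.
    Each output pixel depends only on its own input sample, which is
    why this maps naturally onto a GPU."""
    env = envelope(rf)
    env = env / env.max()
    floor = 10.0 ** (-dynamic_range_db / 20.0)
    db = 20.0 * np.log10(np.maximum(env, floor))
    return ((db + dynamic_range_db) / dynamic_range_db * 255.0).astype(np.uint8)
```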

6

Results

The many different subsystems of the platform (synchronism, human–machine interface, configuration, image formation, hardware and mechanics) were developed

separately and subjected to unit tests. The integration tests used different phantoms and transducers, as well as different configuration files. Figure 2 shows the Rx board and a measurement example of one piezoelectric element when driven by a bipolar square pulse generated by the Tx board. Figure 3 shows the first results of B- and Doppler-mode images generated with two different configurations, one JSON file for each configuration. The figure is a screenshot showing part of the human–machine interface that was developed. The user can select a specific scanline to show the pulse and its echo (Fig. 3b, bottom). An experiment conducted with 64 scanlines and a depth of 15 cm resulted in a frame rate of about 30 fps.
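The reported frame rate is consistent with the acoustic round-trip time. Assuming a speed of sound of about 1540 m/s (an assumption; the platform's configured value is not stated in this passage), each 15 cm scanline takes roughly 195 µs, bounding a 64-scanline frame at about 80 fps, so the measured 30 fps leaves headroom for processing and transfer:

```python
# Acquisition-limited frame rate estimate (assumed c = 1540 m/s)
c = 1540.0          # speed of sound in soft tissue, m/s (assumption)
depth = 0.15        # imaging depth, m
n_scanlines = 64

t_line = 2 * depth / c                # round-trip time per scanline, s
fps_max = 1.0 / (n_scanlines * t_line)
print(round(t_line * 1e6), "us per line;", round(fps_max), "fps upper bound")
```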

Fig. 2 a Rx board, b Example of an ultrasound pulse generated for a 2.5 MHz transducer driven by a bipolar square pulse


S. Rodriguez et al.

Fig. 3 Images generated with two different configurations of the platform. a B- and Doppler-mode configuration (Doppler string phantom with a linear transducer; the flow is moving upward). b B-mode configuration (fetal phantom with a convex probe). The bottom part of the figure shows the pulse and its echo for a specific scanline

7

Conclusions

The tests and experiments that were run using various configurations demonstrated the success of the proposed hardware and software design in providing a fully configurable ultrasound platform. The level of flexibility achieved will be very useful in research on new signal processing and beamforming algorithms. The platform allows, for example, each scanline to be individually configured. The modular hardware architecture facilitates the expansion of the number of channels.

Acknowledgements The authors would like to acknowledge the financial support of FINEP and of The National Health Fund of The Brazilian Ministry of Health for this research. We also acknowledge the fruitful discussions and help of the following researchers: José Antonio Eiras (UFSCar), Julio Cesar Adamowski, Flávio Buiochi and Pai Chi Nan (USP-SP), Fábio Kurt Schneider (UTFPR), Wagner Coelho de Albuquerque Pereira and Marco Antônio von Kruger (UFRJ), Marco Antônio Gutierrez and Marina de Fátima de Sá Rebelo (Incor-SP). Those researchers, the co-authors and their institutions are part of the RBTU (Brazilian Network of Ultrasound Techniques).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Boni E, Yu ACH, Freear S, Jensen JA, Tortoli P (2018) Ultrasound open platforms for next-generation imaging technique development. IEEE Trans Ultrason Ferroelectr Freq Control 65(7):1078–1092
2. Saniie J, Oruklu E (2012) Introduction to the special issue on novel embedded systems for ultrasonic imaging and signal processing. IEEE Trans Ultrason Ferroelectr Freq Control 59(7):1329–1331
3. Shin S (2010) Introduction to JSON (JavaScript Object Notation). Available at www.javapassion.com
4. So H, Chen J, Yiu B, Yu A (2011) Medical ultrasound imaging: to GPU or not to GPU? IEEE Micro 31(5):54–65

Proposal of a Low Profile Piezoelectric Based Insole to Measure the Perpendicular Force Applied by a Cyclist M. O. Araújo and A. Balbinot

Abstract

Keywords

This paper presents the development of a one-dimensional force platform for pedaling analysis on a bicycle using piezoelectric films. A 3D-printed insole was designed to accommodate an array of polyvinylidene fluoride (PVDF) films without changing the pedaling characteristics. The sensors' positioning sought to cover the point of contact between the shoe and the pedal. The conditioning circuit comprised an instrumentation amplifier, a charge amplifier and an anti-aliasing filter with a cutoff frequency of 20 Hz. The dynamic calibration of the system was performed by applying mechanical impulses to the sensors' surface using a Brüel & Kjær Type 8206 impact hammer, with the output signal acquired by an NI SCXI-1600 chassis. Hence, experimental transfer functions were defined for each of the 20 channels of the system. The maximum linearity error was 5.98% for channel #4 of the right insole and 5.81% for channel #7 of the left insole. An NI USB-6289 board acquired the data from the trials with a bicycle. In the analysis of the collected data, it was possible to define the pedaling phases by observing the sum of all channels of each insole. The average maximum force applied was 235.8 N on the right insole and 223.2 N on the left insole. Through single-channel analysis for each insole, it was possible to map the zones of greatest and least activation during the movement: the regions of greatest activation are located at the top of the medial forefoot region (right foot) and at the bottom of the lateral forefoot region (left foot). The regions of least activation are at the bottom of the medial forefoot region (at the end of the medial longitudinal arch) on both feet.

force platform • one-dimensional • pedaling force • piezoelectric films • insole

1

Introduction

Cycling is, beyond any doubt, one of the most widespread and celebrated sports around the world. Whether for recreation or performance purposes, the bicycle tends to become part of our lives due to its eco-friendly character, replacing traditional motor vehicles. As a result, a large number of publications, such as [1], have reported new experimental findings on the biomechanics of cycling, seeking to establish with scientific rigor the relations of the forces applied in the act of pedaling [2]. The growing demands of performance sports drive the need for measurements ever closer to the point of contact between athlete and equipment. The efficiency of the pedaling force in cycling is usually measured by the relationship between the force perpendicular to the crank and the total force applied to the pedal [3]. To carry out such force measurements, the precise characterization of the mechanical loads imposed on the pedal is a fundamental element [4]. The challenge is to implement the measurements in the least invasive way, preserving the ergonomics and the original geometry of the structure as much as possible. This objective is achieved, for example, with the use of pressure-sensitive membranes or films; in the case presented here, the choice is piezoelectric polymeric films, specifically PVDF (polyvinylidene fluoride) films [5]. Historically, force measurements on pedals are performed using strain gauges [1,6,7]. However, the distribution of strain gauges reportedly presents significant cross-sensitivity [4]: applying a moment about one of the axes generates strain in the remainder of the axes, which makes the system calibration complex and expensive. To eliminate cross-sensitivity issues, the use of multi-directional piezoelectric sensors was studied. Works such as Ericson and Nisell [8] and Broker and Gregor [4] have pursued the use of commercially available sensors in pedal instrumentation. Hence, the present work sought to measure the force directly at its point of application; generally, this measurement occurs via pedal or crank arm instrumentation. In this context, the main objective is to build a one-dimensional force platform in the shape of an insole instrumented with PVDF films, with the purpose of measuring the force exerted during the act of pedaling. With an acquisition and storage system, data analysis took place to prove its usefulness in the area. Throughout the development, we pursued the miniaturization of the system and the non-invasiveness of the method, in order to preserve the naturalness of the movements. Therefore, we sought to expand the concept of an instrumented pedal, minimizing the profile and valuing the point of contact between foot and pedal. To that end, the proposed instrumented insole replaced the structure already present in the shoe without adding volume, which preserves comfort and maximizes movement fidelity. With a competitive appeal, the SPD (Shimano Pedaling Dynamics) system was used, one of the most widespread in high-performance environments. It is worth noting that this is a one-dimensional analysis of the applied effort, which means that it only covers the vertical force.

M. O. Araújo (B) · A. Balbinot UFRGS/PPGEE, Laboratory of Electro-Electronic Instrumentation (IEE), Federal University of Rio Grande do Sul, Av. Osvaldo Aranha, 103 - 206D, Porto Alegre, Brazil
© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_123

2

Materials and Methods

2.1

3D-Printed Insole Structure

The commercial cycling shoes used (SH-M065L, from Shimano) have the sole and midsole presented in one piece. However, unlike ordinary shoes, in which the midsole is a

Fig. 1 Insole (left foot) with solid extrusion and passageways for wiring and sensor positioning

smooth structure, this shoe has recesses that prevent the direct cementation of sensors in the region where the force is applied. At the same time, the original insole is very soft, favoring the deflection of the sensors, and it does not allow routing wires without physical interference in the activation of the sensors. Thus, the design of a new insole proved to be necessary. The development began by identifying the desirable requirements for the new insole. To address the limitations of the original structure, the solution considered ideal has the following characteristics:

• dimensions compatible with the original insole;
• uniform and solid surface in the cementation region;
• flexibility of the structure as a whole;
• passageways for wiring.

Using the SolidWorks 2012 software, the design considered the highest possible level of ergonomics, i.e., it followed the original insole's organic lines. The material of choice was TPU (thermoplastic polyurethane), whose main characteristic is elasticity [9]; in a multi-layer arrangement, however, the structure becomes stiff under well-distributed compression. A solid extrusion serves as a platform at the point of application of force to provide full support for the sensors, preventing their deflection, an action that could affect the readings. Also, a hexagonal pattern applied to the remainder of the insole provides mechanical strength and levels the rest of the structure with the cementation region. The structure is shown in Fig. 1 along with its dimensions and the sensor positioning. As a result, the insoles offer rigidity for the sensors and flexibility for handling. It is worth mentioning that deflection of the sensors remains undesirable; however, as the shape of the piece follows the outline of the midsole, no movement is expected during pedaling.

2.2

Piezoelectric Film Array

The role of the piezoelectric film array is to translate mechanical stress into an electrical signal. To obtain the largest measurable area of interest for the insole, the sensor of choice was the smallest commercially available film from the manufacturer MSI (Measurement Specialties, Inc.): the LDT0-028K [10]. Its active surface corresponds to 153 mm², which allows the placement of 10 distinct films in the area of interest. This made it possible to execute a more in-depth study of the pedaling cycle. Piezoelectric films have a set of mechanical and electrical characteristics that shape their signal output. For forces applied perpendicularly to the sensor, the key parameter is the piezo strain constant d33 [11], defined as the ratio between charge (Q [C]) and applied force (F [N]). Equation (1) describes the relationship between these variables:

d33 = Q / F [C/N]  (1)

The conditioning circuit of choice is composed of: an instrumentation amplifier with a gain of approximately 32, mainly to reduce common-mode noise and to apply gain without adding offset; a charge amplifier with a gain of approximately 10⁷, treating the piezoelectric film as a charge source; and an anti-aliasing filter, a low-pass Sallen-Key with a cutoff frequency of 20 Hz. Hence, the output signal, proportional to the force applied to a single sensor, is given by Equation (2):

V_output = 4.488 · 10⁻³ F [V]  (2)

2.3

Calibration Procedure

The dynamic calibration procedure consisted of applying a series of mechanical impulses to the piezoelectric films, acquiring, in a synchronous manner, the responses of the impact hammer and of each one of the 20 channels. According to [12], the Type 8206 impact hammer has a known sensitivity of 23.91 mV/N. Thus, considering Eq. (1), it was possible to determine the transfer function for every placed sensor. The calibration of piezoelectric films is a challenge due to the equipment involved: the best way to do it would be with a high-speed hydraulic press or a customized device to apply even pressure on the sensor, which unfortunately was not available to the project. That said, the use of an impact hammer is acceptable, but not optimal. The calibration setup used to gather data was composed of an NI SCXI-1600 [13] chassis with an SCXI-1530 [14] accelerometry module for the impact hammer, and an NI USB-6289 DAQ [15] for the 20 channels from the insoles. All channels were acquired at a 1 kHz rate. The integration was made using LabVIEW™ 2013, in which a routine took care of acquiring and synchronizing the data from the sources.

2.4

Trials

Volunteers were asked to wear the system (Fig. 2), placing the conditioning circuits around their ankles and carefully putting on the shoes. Information on the volunteers may be found in Table 1. All 5 volunteers have the same shoe size (41 BR). The tests occurred as follows: each volunteer had ten rounds of acquisitions of 60 s each, aiming at a medium-cadence movement (around 1 Hz) starting from rest. A software interface was created in LabVIEW™ 2013, with a well-defined sequence of steps:

1. Start of the program (15 s of no action, to position the volunteer);
2. Sequence of audible signals indicating the beginning of the acquisitions;
3. Start of the medium-cadence movement from rest.
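To see how the overall sensitivity quoted in Eq. (2) can arise from the conditioning chain of Sect. 2.2, multiply the film's charge output by the charge-amplifier and instrumentation-amplifier gains. The d33 value below is an illustrative assumption for PVDF; the paper does not state the exact constant used:

```python
# Sensitivity of the conditioning chain: film -> charge amp -> in-amp
d33 = 14.0e-12       # piezo strain constant, C/N (illustrative PVDF value)
g_charge = 1.0e7     # charge amplifier gain, V/C (approximate, from the text)
g_inamp = 32.0       # instrumentation amplifier gain (approximate, from the text)

sensitivity = d33 * g_charge * g_inamp   # V/N
print(sensitivity)   # ~4.5e-3 V/N, close to the quoted 4.488e-3 V/N
```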

Fig. 2 Cycling shoes with instrumented insoles and conditioning circuits

Table 1 Information on volunteers

Volunteer | Body mass [kg] | Height [m] | Dominant leg
1         | 81             | 1.87       | Right
2         | 70             | 1.77       | Right
3         | 71             | 1.89       | Right
4         | 66             | 1.81       | Right
5         | 67             | 1.74       | Right


It is worth remembering that, in the case of volunteers inexperienced with clipless pedals, preliminary tests made sure that the volunteer could become acquainted with the clipping movement. The protocol is in accordance with the Declaration of Helsinki of the World Medical Association, and the subjects declared consent to participate. The Institutional Review Board of the Federal University of Rio Grande do Sul approved this study under Certificate of Presentation for Ethical Appreciation (CAAE) number 11253312.8.0000.5347.

3

Results

3.1

Dynamic Calibration

Following the guidelines described in Sect. 2.3, the transfer functions for each one of the piezoelectric films were determined (Tables 2 and 3).

Table 2 Transfer functions (TF), sensitivity (S) and linearity error (εlin) for the right foot

Channel | TF [V]                 | S [V/N]     | εlin (%)
0       | −4.39·10⁻³ F + 0.0057  | −4.39·10⁻³  | 4.48
1       | −4.56·10⁻³ F + 0.0344  | −4.56·10⁻³  | 5.78
2       | −4.82·10⁻³ F + 0.0008  | −4.82·10⁻³  | 3.98
3       | −4.51·10⁻³ F + 0.0302  | −4.51·10⁻³  | 4.83
4       | +4.55·10⁻³ F + 0.0076  | +4.55·10⁻³  | 5.98
5       | −4.11·10⁻³ F + 0.0262  | −4.11·10⁻³  | 5.41
6       | −4.29·10⁻³ F + 0.0019  | −4.29·10⁻³  | 3.87
7       | −4.38·10⁻³ F + 0.0898  | −4.38·10⁻³  | 5.89
8       | −4.41·10⁻³ F + 0.0452  | −4.41·10⁻³  | 5.67
9       | +4.78·10⁻³ F + 0.0068  | +4.78·10⁻³  | 4.91

Table 3 Transfer functions (TF), sensitivity (S) and linearity error (εlin) for the left foot

Channel | TF [V]                 | S [V/N]     | εlin (%)
0       | −5.08·10⁻³ F + 0.0110  | −5.08·10⁻³  | 2.69
1       | +4.31·10⁻³ F + 0.0138  | +4.31·10⁻³  | 4.84
2       | −4.85·10⁻³ F + 0.0441  | −4.85·10⁻³  | 2.98
3       | −4.70·10⁻³ F + 0.0794  | −4.70·10⁻³  | 5.16
4       | +4.90·10⁻³ F + 0.0419  | +4.90·10⁻³  | 4.89
5       | +4.34·10⁻³ F + 0.0321  | +4.34·10⁻³  | 4.11
6       | +4.44·10⁻³ F + 0.0546  | +4.44·10⁻³  | 4.13
7       | +4.34·10⁻³ F + 0.0723  | +4.34·10⁻³  | 5.81
8       | −4.13·10⁻³ F + 0.0720  | −4.13·10⁻³  | 4.43
9       | −4.24·10⁻³ F + 0.0319  | −4.24·10⁻³  | 5.47

3.2

Waveform During Trials

The group of volunteers performed a set of trials as described in Sect. 2.4. Figure 3 shows a 5 s extract of an acquisition for both feet of individual #1. All the channels of a foot were combined into one signal, representing the total force exerted by the volunteer. The option for this type of visualization avoids information clutter due to showing 10 channels at once for each foot. The data from this graph will be unwrapped in Sect. 4. Results considering the average value of the peak forces applied by the volunteers may be seen in Table 4. Concerning these data, the mean value of the applied peak force was 235.8 N with a standard deviation of 25.7 N on the right foot, and 223.16 N with a standard deviation of 25.0 N on the left foot. Table 5 shows the average value of the peak forces during the trials considering individual channels; values in bold represent the maximum value for a volunteer, while values in italics represent the minimum. Figure 4 shows a heatmap representing the average value of the peak forces for volunteer #2.
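The per-foot summary statistics quoted above can be reproduced from the Table 4 peaks; the population standard deviation matches the reported 25.7 N and 25.0 N:

```python
import numpy as np

# Peak forces (sum of all channels) per volunteer, from Table 4 (N)
right = np.array([212.8, 252.4, 275.8, 206.2, 231.8])
left = np.array([192.7, 245.3, 242.9, 192.5, 242.4])

print(right.mean(), right.std())  # about 235.8 and 25.7
print(left.mean(), left.std())    # about 223.16 and 25.0
```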

4

Discussion

It is noticeable that the linearity error of the calibration procedure is below (or around) 5%. The average sensitivity is 4.51·10⁻³ V/N with a standard deviation of 0.27·10⁻³ V/N. Considering the standard uncertainty calculated for the system's output sensitivity (±3.67·10⁻⁴ V/N), channel 5 of the right insole and channels 0 and 4 of the left insole are outside the stipulated limit range ([4.121·10⁻³ V/N; 4.855·10⁻³ V/N]). The hypothesis is that this occurs because the response characteristic undergoes significant changes due to the curves of the insole, since the films are in a permanent state of flexion. Also, the insole surface is not entirely smooth: the 3D construction presents a step of 0.1 mm between one layer and the next, contributing to incomplete cementation of the film and a small deflection in the region during the application of effort (which can also modify the response of the film).

Fig. 3 Extract of an acquisition by volunteer #1 (sum of all channels for each foot). a and b Points of maximum power of the cycle (right and left sides, respectively); c and d Second point of application of force (right and left sides, respectively)

Table 4 Peak forces (sum) applied by each volunteer

Side | I1 (N) | I2 (N) | I3 (N) | I4 (N) | I5 (N)
R    | 212.8  | 252.4  | 275.8  | 206.2  | 231.8
L    | 192.7  | 245.3  | 242.9  | 192.5  | 242.4

Observing Fig. 3, the anti-symmetry of the acquired signals is clear, which is a coherent result given that each leg exerts effort, in counter-phase, during approximately half of the pedaling cycle [2]. According to the data, it is noticeable that the volunteers have difficulty completely removing the applied force during the recovery phase, especially in the non-dominant leg. This behavior shows up as several retakes of force application during this phase, prominent in the negative part of the acquired signals. The phenomenon increases the demand on the limb that is in the propulsion phase, culminating in an eventual waste of energy. Likewise, although Table 5 shows that the dominant limb presented greater activation in most cases, Fig. 3 shows that subject #2 presented greater force in the non-dominant limb (at least in the excerpt). One of the factors to which the phenomenon can be attributed is the adaptation of the subjects to the bicycle as a whole: variation due to fit inconsistencies ends up creating this behavior. Note that the application of force is initiated immediately after the top dead center. After reaching the peak of force application (at approximately 90°), there is still a second point of force application before reaching the bottom dead center, where the recovery phase begins and the application of force
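The out-of-range channels quoted above can be checked directly against the sensitivity magnitudes of Tables 2 and 3 and the stated limit range; this is a quick consistency check, not part of the original analysis:

```python
import numpy as np

# Sensitivity magnitudes (V/N) from Tables 2 (right foot) and 3 (left foot)
right = np.array([4.39, 4.56, 4.82, 4.51, 4.55, 4.11, 4.29, 4.38, 4.41, 4.78]) * 1e-3
left = np.array([5.08, 4.31, 4.85, 4.70, 4.90, 4.34, 4.44, 4.34, 4.13, 4.24]) * 1e-3

lo, hi = 4.121e-3, 4.855e-3  # stated limit range, V/N
outside = [("R", ch) for ch, s in enumerate(right) if not lo <= s <= hi]
outside += [("L", ch) for ch, s in enumerate(left) if not lo <= s <= hi]
print(outside)  # [('R', 5), ('L', 0), ('L', 4)], matching the text

sens = np.concatenate([right, left])
print(round(sens.mean() * 1e3, 2))  # 4.51, the quoted mean sensitivity in mV/N
```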

is practically stopped, with an opposite signal peak indicating the withdrawal of the force applied to the sensor. Concerning the sensor activation (see Table 5), it is noted that the regions of greatest activation are located near sensor #0 (ball of the foot) on the right side, and close to sensor #9 (meeting of the anterior transverse arch and the lateral longitudinal arch) on the left side. The regions with the least activation are located near sensors #5 and #6 (medial longitudinal arch) on both sides. This hints that individuals tend to use the outer part of the non-dominant limb (supination) while pedaling. Also, it is worth remembering that, despite the foot retention system, the shoe cleat allows a limited degree of freedom (6°). Furthermore, an additional degree of freedom for movement inside the shoe may cause effort deviation in favor of lateral forces. Unfortunately, this is not a measurable variable, since the developed insole takes only the perpendicular force into account. To address this issue, a new 6-degree-of-freedom system is being developed, rendering it possible to quantify the effect of the aforementioned variables. When Table 5 is observed again, it is noticed that the activation pattern is slightly different for each individual. This behavior is attributed to the bicycle assembly, which can play a considerable role in the activation pattern, since the adjustment of the parts contributes to the biomechanics of the movement and, consequently, to the effectiveness and better use of the efforts (saddle and seatpost [16], for instance). Besides, each subject has a different plantar structure, so the pressure distribution is not the same for everyone.


Table 5 Average force (maximum value) applied for each channel and each individual

Ch. | Side | I1 (N) | I2 (N) | I3 (N) | I4 (N) | I5 (N)
0   | R    | 35.3   | 48.6   | 40.0   | 24.16  | 53.4
0   | L    | 30.5   | 33.9   | 30.0   | 32.31  | 44.4
1   | R    | 31.3   | 37.3   | 31.19  | 41.3   | 24.1
1   | L    | 26.7   | 29.4   | 23.8   | 22.8   | 33.6
2   | R    | 24.9   | 23.1   | 26.3   | 21.2   | 21.2
2   | L    | 23.5   | 34.2   | 31.9   | 29.5   | 29.1
3   | R    | 24.5   | 24.5   | 27.6   | 15.9   | 24.9
3   | L    | 21.3   | 29.4   | 27.3   | 25.5   | 25.0
4   | R    | 38.7   | 45.6   | 39.8   | 30.4   | 41.6
4   | L    | 25.2   | 35.7   | 27.5   | 33.1   | 30.6
5   | R    | 21.4   | 31.9   | 28.2   | 41.7   | 21.0
5   | L    | 15.7   | 16.9   | 12.4   | 8.5    | 28.3
6   | R    | 13.1   | 14.7   | 30.7   | 25.7   | 13.1
6   | L    | 9.5    | 12.6   | 13.8   | 15.5   | 16.1
8   | R    | 19.5   | 17.2   | 16.3   | 23.6   | 31.8
8   | L    | 20.4   | 26.5   | 29.2   | 18.5   | 39.7
9   | R    | 37.0   | 43.9   | 24.9   | 37.4   | 46.8
9   | L    | 38.0   | 43.0   | 30.9   | 38.3   | 49.8

Fig. 4 Heatmap of the average value of the peak forces during the trials (volunteer #2)

5

Conclusions

In this project, we sought to develop a non-invasive force platform capable of measuring the pedaling force without changing the movement ergonomics. Thus, a study was conducted on the behavior of the chosen piezoelectric films for the movement of interest, and the entire conditioning chain was developed to maximize the response for the best possible signal acquisition. All stages of the project were validated with the aid of calibrated instruments, with a maximum linearity error of 5.98%. The proposed set of trials revealed that piezoelectric films are capable of transcribing the nuances of the pedaling movement, making it possible to study its state in different phases. Also, such results prove that, despite the dynamic nature of piezoelectric films, it is possible to use them to measure low-frequency phenomena and to map the pedaling movement. Therefore, the present work demonstrates the viability of a system composed of piezoelectric films for the measurement and mapping of the pedaling force. However, for the system to evolve further, it is necessary to carry out more tests (and with more volunteers). Also, modifying the calibration procedure and the insole construction to avoid the reported issues is crucial to take the concept to a higher level. That being the case, it will be possible to consolidate the system as a tool for pedal force analysis, to be used, for instance, to determine the profile needed for a padded insole to enhance effort distribution.

Acknowledgements The present work was financially supported by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), grant no. 136036/2019-8, and was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Pigatto AV, Moura KOA, Favieiro GW, Balbinot A (2016) A new crank arm based load cell, with built-in conditioning circuit and strain gages, to measure the components of the force applied by a cyclist. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society, EMBS
2. Bini RR, Carpes FP (2014) Biomechanics of cycling. Springer International Publishing
3. Bini RR, Hume PA, Croft J, Kilding AR (2013) Pedal force effectiveness in cycling: a review of constraints and training effects. J Sci Cycling 2:11–24
4. Broker JP, Gregor RJ (1990) A dual piezoelectric element force pedal for kinetic analysis of cycling. Int J Sport Biomech 6. https://doi.org/10.1123/ijsb.6.4.394
5. Kawai H (1969) The piezoelectricity of poly(vinylidene fluoride). Japanese J Appl Phys 8. https://doi.org/10.1143/jjap.8.975
6. Davis RR, Hull ML (1981) Measurement of pedal loading in bicycling. J Biomech 14. https://doi.org/10.1016/0021-9290(81)90145-7
7. Lazzari CD, Balbinot A (2011) Wireless crankarm dynamometer for cycling. Sensors Transducers 128:39–54
8. Ericson MO, Nisell R (1988) Efficiency of pedal forces during ergometer cycling. Int J Sports Med 9. https://doi.org/10.1055/s-2007-1024991
9. ACC (2012) Introduction to polyurethanes: thermoplastic polyurethane. Available at https://polyurethane.americanchemistry.com/polyurethanes/Introduction-to-Polyurethanes/Applications/Thermoplastic-Polyurethane/
10. Measurement Specialties Inc (2017) LDT with crimps vibration sensor/switch. Available at https://cdn.sparkfun.com/datasheets/Sensors/ForceFlex/LDT_Series.pdf
11. Measurement Specialties Inc (1999) Piezo film sensors. Available at https://mma.pages.tufts.edu/emid/piezo.pdf
12. Brüel & Kjær (2016) Type 8206 product data. Available at https://www.bksv.com/-/media/literature/ProductData/bp2078.ashx
13. National Instruments (2004) SCXI-1600 user manual. Available at https://www.ni.com/pdf/manuals/373364c.pdf
14. National Instruments (2000) SCXI-1530/1531 user manual. Available at https://www.ni.com/pdf/manuals/322642a.pdf
15. National Instruments (2016) NI 6289. Available at https://www.ni.com/pdf/manuals/375222c.pdf
16. Bini RR, Hume PA, Croft JL (2011) Effects of saddle height on pedal force effectiveness. Procedia Eng 13. https://doi.org/10.1016/j.proeng.2011.05.050

Soft Sensor for Hand-Grasping Force by Regression of an sEMG Signal E. E. Neumann and A. Balbinot

Abstract

This paper presents the implementation of a soft sensor for hand-grasping force based on sEMG (surface electromyography) signals collected from 6 different muscles in the ventral region of the forearm. The work operates on the envelope of the sEMG signal, obtained by a low-pass filter with a cutoff frequency of 3 Hz, which retains the information about the energy of the signal. An Artificial Neural Network (ANN) was applied for the regression of the force, and an online application of the model as a soft sensor was performed, with the 6 rectified and filtered sEMG channels as inputs. Four volunteers were tested to assess the viability of the regression; all of them showed a high R² for the fitted regression model (0.99, 0.98, 0.98 and 0.97, respectively), proving the capability of the application. The online performance showed a root mean square error of 16.66 N, approximately 3.14% of the MVC (Maximal Voluntary Contraction) threshold of volunteer 01.

Keywords

Soft sensor • ANN • low-pass filter • hand-grasping force • sEMG

1

Introduction

Surface electromyography (sEMG) is widely used in prosthesis control. Its adoption to control prostheses, for instance, increases quality of life and gives greater autonomy to the user [1–4]. As the signal is collected from the skin, some influence from other groups of muscles is captured even though the electrodes are placed in specific places, such as the central region of the targeted muscle [5]. The force applied

E. E. Neumann (B) · A. Balbinot Laboratory of Electro-Electronic Instrumentation (IEE), UFRGS/PPGEE, Av. Osvaldo Aranha, 103, Porto Alegre, Brazil

by a prosthetic hand or by an exoskeleton is extremely delicate, and some everyday tasks, such as holding a glass, need force control to avoid harm to the user [1,2,6–8]. During the use of prostheses, the response to the user must be fast, with a maximum delay of 300 ms, so that the user does not perceive the delay [9]. The sEMG signal contains useful information about body movements, and the amplitude of the signal is directly linked to the force applied by the muscle [10]. The signal has a stochastic nature, an amplitude in the range of 1–10 mV, and a frequency content in the range of 15–500 Hz [1]. The goal of this study is the implementation of a soft sensor of hand-grip force in an online application; as in [9], the model should not introduce a delay that is perceivable by the user. To estimate the force exerted by the user, the sEMG signal is acquired from specific superficial muscles located on the forearm. The signal is then processed, rectified and enveloped, and the data are stored in a database. The data then feed an Artificial Neural Network (ANN) to regress the force applied on a dynamometer. In previous related work it is possible to see the implementation of force regression from the sEMG signal applied offline [11], where the regression was performed from 6 sEMG channels; of the ELM (Extreme Learning Machine) and SVM (Support Vector Machine) algorithms, SVM had the lowest RMSE (Root Mean Square Error). In [12] it is possible to see an application of hand-force estimation using an ANN (Artificial Neural Network), where the estimated force was tested online, with a different approach to the movement of the hand. In [13], it is possible to see the application of a hand orthosis for an individual with Duchenne muscular dystrophy, to increase the participant's maximum grasping force, improving it from 2.8 to 8 N, controlled by sEMG.
In other previous works with a different approach [14] in the lower limb, the application is similar, without the use of an accelerometer to support the network.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_124


E. E. Neumann and A. Balbinot

Fig. 1 Differential electrode position on forearm

2 Protocol and Acquisition System

2.1 Protocol

All procedures performed in these studies involving human participants were in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This study was approved by the Institutional Review Board of the Federal University of Rio Grande do Sul under Certificate of Presentation for Ethical Appreciation number 11253312.8.0000.5347. As examined in [11], the combination of the 6 channels used yields the best regression of the force. Following previous guidelines [11], 6 muscles were chosen to perform the regression. They are superficial and can be captured by sEMG [15]: brachioradialis (BR), flexor carpi radialis (FCR), flexor carpi ulnaris (FCU), extensor carpi radialis (ECR), extensor carpi ulnaris (ECU), and extensor digitorum (ED). Their positions are shown in Fig. 1, with the surface electrodes fixed on the skin over the ventral region of each muscle. Before signal acquisition starts, the volunteer is instructed to adopt the standard position: sitting comfortably and upright, with the arm pointing down and the forearm resting on the chair support, projected 90° forward. Due to the need for synchronization, a stimulus was shown to the volunteers 5 s before the analog gauge started to climb from zero to the Maximal Voluntary Contraction (MVC). During the force application cycle, the user is induced to control his or her force, ramping up to the MVC; right after that, a slow release of the dynamometer is induced. The entire cycle lasts 8 s, divided into 4 s of force application up to the MVC and 4 s of slow release down to the relaxed state.

The interval between presses is 30 s, and the movement is repeated 10 times in each test. The database comprises 4 volunteers, each of whom performed 8 tests; therefore, each volunteer contributed 80 press movements. The interval between tests is longer than 5 min.
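The timing of one 8 s press cycle described above can be sketched as a reference force profile (a minimal illustration only; the 2 kHz rate is taken from the acquisition setup, and the 200 N MVC is the value later reported for volunteer 01):

```python
import numpy as np

def force_reference(mvc, fs=2000):
    """Reference force profile for one 8 s press cycle:
    4 s linear ramp from 0 to MVC, then 4 s linear release back to 0."""
    ramp_up = np.linspace(0.0, mvc, 4 * fs, endpoint=False)
    ramp_down = np.linspace(mvc, 0.0, 4 * fs)
    return np.concatenate([ramp_up, ramp_down])

profile = force_reference(mvc=200.0)  # one cycle at 2 kHz: 16000 samples
```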

2.2 Acquisition

The sEMG acquisition was done by a National Instruments USB-6289 Data Acquisition (DAQ) board (18-bit A/D) at a sampling rate of 2 kHz, combined with an SAS1000 V8 acquisition system from EMG System do Brasil. The force signal was collected by a dynamometer from EMG System do Brasil and acquired by a National Instruments 6009 DAQ, also at a 2 kHz sampling rate; this channel was later characterized. The signal from an accelerometer was captured together for future implementations. All signals were acquired by a LabVIEW routine running on a computer with 8 GB of RAM and an i5 processor.

3 Method

Due to the nondeterministic, stochastic nature of the sEMG signal, it is hard to regress directly. It is therefore common in this area to work with features of the signal, such as the RMS value and the mean frequency. However, since this work is intended to run online, no feature that requires frequency-domain processing is appropriate for efficiency; the goal is to use as little processing as possible.

124 Soft Sensor for Hand-Grasping Force by Regression of an sEMG …

Table 1 Data collected from the dynamometer for characterization

Weight [kgf]   Output [V]
0.0000         0.105622
0.2582         0.112208
0.4582         0.123458
0.6582         0.129756
1.2582         0.157926
1.4582         0.167134
1.6582         0.177304

3.1 Dynamometer Characterization

The dynamometer used to capture the force signal was characterized by applying standard weights at the input and measuring the output voltage, as shown in Table 1; the system's input–output function was then obtained by iterative curve fitting.
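The characterization amounts to an ordinary least-squares linear fit of the Table 1 data, which can be reproduced with a short script (a sketch; `numpy` is assumed):

```python
import numpy as np

# Table 1: standard weights [kgf] and measured dynamometer output [V]
weight = np.array([0.0000, 0.2582, 0.4582, 0.6582, 1.2582, 1.4582, 1.6582])
volts = np.array([0.105622, 0.112208, 0.123458, 0.129756,
                  0.157926, 0.167134, 0.177304])

# First-degree polynomial fit: f(x) = a*x + b
a, b = np.polyfit(weight, volts, deg=1)

# Coefficient of determination of the fit
pred = a * weight + b
r_sq = 1 - np.sum((volts - pred) ** 2) / np.sum((volts - volts.mean()) ** 2)
```

The fit recovers the coefficients of Eq. (1), a ≈ 0.04409 V/kgf and b ≈ 0.1028 V, with R² ≈ 0.9961.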

3.2 Data Processing

The first treatment of the signal was rectification, since the important feature for this work is the signal's energy, which is linearly proportional to the applied force [11]. Studies of envelope techniques for the sEMG signal show that the RMS of the signal is commonly used as an envelope [10,11]; however, considering an online application that requires low processing demand, a low-pass filter that also allows a hardware implementation, using fewer computational resources, was studied instead. The filter applied in this work was a 4th-order low-pass Butterworth with a cutoff frequency of 3 Hz, which still retains the energy of the signal.
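The rectify-and-filter envelope can be sketched as follows (an illustration using `scipy`; the 2 kHz sampling rate comes from the acquisition setup, and the causal `lfilter` mimics online operation):

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 2000  # sampling rate [Hz]

# 4th-order low-pass Butterworth, 3 Hz cutoff
b, a = butter(N=4, Wn=3, btype="low", fs=FS)

def envelope(semg):
    """Full-wave rectification followed by causal low-pass filtering."""
    return lfilter(b, a, np.abs(semg))

# Example: a 100 Hz test tone stands in for an sEMG burst
t = np.arange(0, 4, 1 / FS)
env = envelope(np.sin(2 * np.pi * 100 * t))
```

After the filter settles, the envelope of a rectified unit sine approaches its mean value, 2/π ≈ 0.637, while all harmonics above the 3 Hz cutoff are strongly attenuated.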

3.3 Regression

The data of each volunteer were separated into 6 of the 8 trials for training and 2 of the 8 for testing the regression model. The model used in this work is an ANN whose input layer has 6 neurons, one for each sEMG channel processed with the implemented filter. The hidden layer has 100 neurons, a number defined by a sweep from 40 to 100, each with a hyperbolic tangent sigmoid activation function; error back-propagation was tested with both the Levenberg–Marquardt and the Bayesian regularization methods. Lastly, the output neuron has a linear activation function.
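The architecture (6 inputs, 100 tanh hidden units, linear output) can be approximated with scikit-learn's `MLPRegressor` as a stand-in; note that scikit-learn trains with gradient-based optimizers such as L-BFGS rather than the Levenberg–Marquardt or Bayesian regularization methods used in the paper, so this is only a structural sketch on synthetic placeholder data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Structural stand-in: 6 envelope channels in, 100 tanh hidden units, linear output
model = MLPRegressor(hidden_layer_sizes=(100,), activation="tanh",
                     solver="lbfgs", max_iter=2000, random_state=0)

# Synthetic placeholder data: 6-channel envelopes and a force target
rng = np.random.default_rng(0)
X = rng.random((500, 6))       # enveloped sEMG, one column per muscle
y = X @ rng.random(6)          # stand-in linear force relation

model.fit(X[:375], y[:375])            # 6/8 of the data for training
score = model.score(X[375:], y[375:])  # R^2 on the held-out 2/8
```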


Then, for validation, a network pre-trained with the volunteer's data using the Bayesian back-propagation method was implemented in LabVIEW through the MATLAB block, letting the whole process run in LabVIEW with MATLAB in the background.

4 Results

4.1 Dynamometer Characterization

The curve fitting of the data collected in Table 1 to a linear equation results in the transfer function shown in Eq. (1), where x is the weight in kgf and f(x) is the voltage, with an R² of 0.9961.

f(x) = 0.04409x + 0.1028    (1)

4.2 Data Processing

The result of the data processing to envelope the signal while keeping its energy proved promising, and the rectifier and filter can be implemented in hardware to consume less of the machine's processing power. The resulting signal is shown in Fig. 2.

4.3 Model Regression

The 6 channels of the filtered signal from Fig. 2 fed the ANN regression model, built with 6 neurons in the input layer and 100 neurons in the hidden layer; the back-propagation method used was Bayesian regularization. The set of trials 1, 2, 3, 4, 5 and 7 was used to train the model, and trials 6 and 8 were used to validate it. The signal was segmented into 4 s before the movement, the movement itself, and 4 s after the movement. As the Mean Square Error (MSE) results in Table 2 show, the regression of force from the sEMG signal of the 6 selected muscles works; as an example, Fig. 3a shows the response of the regression. The results of volunteer 01 have the best fit and the lowest MSE.

4.4 Online Performance

Fig. 2 a (Left) Raw sEMG signal / b (Right) Rectified and filtered signal

Fig. 3 a (Left) Model regression of one movement of volunteer 02 / b (Right) Response of the soft sensor in validation tests of volunteer 01

As shown with the 6 filtered channels of Fig. 2, the force can be regressed by an ANN; this pre-trained ANN was implemented in LabVIEW through a MATLAB block. For the model validation, 6 trials of 10 movements each were performed, by volunteer 01 only. The response of the soft sensor in Fig. 3b shows that it is possible to regress the sEMG signal into force. Thus, the soft sensor demonstrates the capability of measuring the force applied to an object without actually measuring force. The maximum point-wise absolute error was 0.3163 V which, applying the sensitivity of Eq. (1), corresponds to a force of approximately 70.3 N; it occurs at the point of transition from the MVC to the release of the dynamometer. The Root Mean Square Error (RMSE) over all 6 trials of 10 movements each is 0.0283 V, equivalent to 6.28 N, approximately 3.14% of the MVC. Beyond its use as a soft sensor, given a force threshold value the system is capable of identifying the MVC, although each volunteer has a different threshold [16]; for volunteer 01, an MVC of 200 N was determined by visual inspection.
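The voltage-to-force conversions behind these figures follow directly from the sensitivity of Eq. (1) (a small check script; the standard gravity constant used to convert kgf to N is an assumption of the conversion):

```python
G = 9.80665      # standard gravity [m/s^2], converts kgf to N
SENS = 0.04409   # dynamometer sensitivity from Eq. (1) [V/kgf]

def volts_to_newtons(v):
    """Convert a dynamometer voltage deviation to force in newtons."""
    return v / SENS * G

peak_err = volts_to_newtons(0.3163)  # maximum point-wise error
rmse = volts_to_newtons(0.0283)      # RMSE over the validation trials
```

This reproduces the reported ≈70.3 N peak error and ≈6.28 N RMSE, i.e. about 3.14% of volunteer 01's 200 N MVC.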

Table 2 Mean Square Error (MSE) of the test set and the regression R of each volunteer for the trained network model

Volunteer      Test set MSE   Regression R
Volunteer 01   4.99 × 10⁻⁴    R = 0.992627
Volunteer 02   2.23 × 10⁻³    R = 0.989760
Volunteer 03   3.10 × 10⁻³    R = 0.989372
Volunteer 04   3.26 × 10⁻³    R = 0.97977

5 Discussion

The database is limited to the specific positioning from which the data were collected. For greater robustness and for use with prostheses, the database should be expanded with variations of posture and of grip position on the dynamometer, for better generalization of the model used in the soft sensor. In future work, it would be interesting to extend the database by introducing volunteers who lack forearm muscles or have a partial forearm amputation, studying the electrode positions and displacing or adding electrodes on the ventral regions of the volunteer's chest muscles, since humans can also adapt to and learn with such systems. This kind of envelope is capable of being implemented in hardware, which was the intent behind choosing it: it demands less software performance and can scale down to a portable computer such as a Raspberry Pi, as in [14]. Since the force regression does not work with classes, the transition of the output signal was smooth.

6 Conclusion

The biological signal was captured by electrodes as in Fig. 1 and, as pre-processing, was rectified and then passed through a low-pass filter with a cutoff frequency of 3 Hz, as displayed in Fig. 2; the energy of the signal, carrying the information of the movement, was retained in the pre-processed signal by the envelope technique applied. As the results show, the regression of force for a soft-sensor implementation is possible; Fig. 3b shows the response of the soft sensor in use. Even a subtle change in the electrode placement pattern or in the grip position on the dynamometer can influence the system [16]; to prevent that instability, it is recommended to re-fix the electrodes in each trial, training the network with different positions while always targeting the ventral position of the muscle, as demonstrated in [15]. The demonstrated RMSE is approximately 3.14% of the MVC value of volunteer 01 and clearly occurs with greater intensity at the point of the volunteer's greatest effort, not least because several factors influence it, such as small tremors while the volunteer executes maximum strength. The database did not include trials with forces below the MVC. For future work, we recommend using haptic technologies to provide force feedback to the user, in conjunction with the soft sensor implemented in this work. The soft sensor shows a promising response to the sEMG signal, preserving the morphology of the force applied to the dynamometer and exhibiting its largest error only at the force peak at the MVC. We also recommend a hardware implementation of the envelope filter. This envelope method is viable for online application in orthoses or prostheses and can be combined with a movement classification system, as done in [1].

Acknowledgements The present work was financially supported by the National Council for Scientific and Technological Development (CNPq), grant no. 132343/2019-3.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Atzori M, Muller H (2015) The Ninapro database: a resource for sEMG naturally controlled robotic hand prosthetics. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society, EMBS. https://doi.org/10.1109/EMBC.2015.7320041
2. Castellini C, Van Der Smagt P, Sandini G, Hirzinger G (2008) Surface EMG for force control of mechanical hands. In: Proceedings—IEEE international conference on robotics and automation. https://doi.org/10.1109/ROBOT.2008.4543291
3. Zonnino A, Sergi F (2019) Model-based estimation of individual muscle force based on measurements of muscle activity in forearm muscles during isometric tasks. IEEE Trans Biomed Eng 67. https://doi.org/10.1109/tbme.2019.2909171
4. Dillon A, Blanchard S, Stikeleather L (1996) Introduction to biomedical engineering, 3rd edn
5. Tortora G, Derrickson B (2014) Principles of anatomy & physiology, 14th edn
6. Huang H, Jiang L, Zhao DW et al (2006) The development of a new biomechatronic prosthetic hand based on under-actuated mechanism. In: IEEE international conference on intelligent robots and systems. https://doi.org/10.1109/IROS.2006.281765
7. D'Anna E, Valle G, Mazzoni A et al (2019) A closed-loop hand prosthesis with simultaneous intraneural tactile and position feedback. Sci Robot. https://doi.org/10.1126/scirobotics.aau8892
8. Valle G, Mazzoni A, Iberite F et al (2018) Biomimetic intraneural sensory feedback enhances sensation naturalness, tactile sensitivity, and manual dexterity in a bidirectional prosthesis. Neuron 100. https://doi.org/10.1016/j.neuron.2018.08.033
9. Englehart K, Hudgins B (2003) A robust, real-time control scheme for multifunction myoelectric control. IEEE Trans Biomed Eng 50. https://doi.org/10.1109/TBME.2003.813539
10. D'Alessio T, Conforto S (2001) Extraction of the envelope from surface EMG signals. IEEE Eng Med Biol Mag 20. https://doi.org/10.1109/51.982276
11. Cao H, Sun S, Zhang K (2017) Modified EMG-based handgrip force prediction using extreme learning machine. Soft Comput 21. https://doi.org/10.1007/s00500-015-1800-8
12. Srinivasan H, Gupta S, Sheng W, Chen H (2012) Estimation of hand force from surface electromyography signals using artificial neural network. In: Proceedings of the world congress on intelligent control and automation (WCICA). https://doi.org/10.1109/WCICA.2012.6357947
13. Bos R, Nizamis K, Koopman B, Herder J, Sartori M, Plettenburg D (2019) A case study with SymbiHand: an sEMG-controlled electrohydraulic hand orthosis for individuals with Duchenne muscular dystrophy. IEEE Trans Neural Syst Rehabilit Eng. https://doi.org/10.1109/TNSRE.2019.2952470
14. Spanias JA, Simon AM, Finucane SB, Perreault EJ, Hargrove LJ (2018) Online adaptive neural control of a robotic lower limb prosthesis. J Neural Eng. https://doi.org/10.1088/1741-2552/aa92a8
15. Leis AA, Trapani VC (2000) Atlas of electromyography
16. Hallbeck MS, McMullin DL (1993) Maximal power grasp and three-jaw chuck pinch force as a function of wrist position, age, and glove type. Int J Indust Ergon 11. https://doi.org/10.1016/0169-8141(93)90108-P

Development of a Low-Cost, Open-Source Transcranial Direct-Current Stimulation Device (tDCS) for Clinical Trials N. C. Teixeira-Neto , R. T. Azevedo-Cavalcanti, M. G. N. Monte-da-Silva, and A. E. F. Da-Gama

Abstract

Transcranial Direct Current Stimulation (tDCS) has been the target of research seeking to understand its therapeutic effects on human health and the mechanisms by which these effects are mediated. The lack of standardization and the high cost of tDCS devices for clinical trials are some of the bottlenecks in the progress of tDCS research. The development of low-cost, open-source devices, adaptable to research questions and encouraging innovation in the area, is therefore important: such devices can contribute to the understanding of tDCS mechanisms and effectiveness. Thus, this project presents the development of a tDCS prototype for clinical trials, consisting of hardware for electrical stimulation and a mobile app as a human–machine interface. The hardware is based on a direct current source and a microcontroller (ESP32), which communicates via Bluetooth® with the app. The mobile app was developed collaboratively with researchers. The built prototype had its performance evaluated through bench testing, showing a current production accuracy of 96.53%. It is expected that this project will facilitate access to tDCS devices for research groups that want to explore their effectiveness in various health conditions, following the methodological rigor of clinical trials.

Keywords tDCS · Open-source · Low-cost · ESP32 · Clinical trial · Double-blind

N. C. Teixeira-Neto · R. T. Azevedo-Cavalcanti · M. G. N. Monte-da-Silva · A. E. F. Da-Gama (&)
Department of Biomedical Engineering, Rehabilitation Engineering Research Group, Universidade Federal de Pernambuco, Recife, Brazil
e-mail: [email protected]
N. C. Teixeira-Neto · M. G. N. Monte-da-Silva
Department of Biomedical Engineering, Biomedical Instrumentation Research Group, Universidade Federal de Pernambuco, Recife, Brazil

1 Introduction

Conventional transcranial direct current stimulation (tDCS) involves weak direct currents (0.2–2 mA) applied to the scalp via sponge-based rectangular pads (nominally 25–35 cm²) [1, 2]. The study by Nitsche and Paulus showed, through motor evoked potentials, that direct current applied to the scalp induces changes in motor-cortical excitability. Thus, the therapeutic effect of tDCS may lie in its ability to modulate the firing properties and excitability of neurons [1, 3, 4]. There are clinical trials using tDCS in different conditions such as pain, Parkinson's disease, motor stroke, post-stroke aphasia, multiple sclerosis, epilepsy, consciousness disorders, Alzheimer's disease, tinnitus, depression, schizophrenia, and craving/addiction [5]. Such studies generally explore the electrode assembly, current intensity, and stimulus duration that would be most appropriate for each condition [6]. To ensure adequate understanding of the tDCS effects observed in humans, researchers need to rely on valid and approved placebo control and blinding, fundamental requirements of randomized controlled trials [7]. Failure of blinding could compromise objective evaluations, resulting in biased assessment of intervention effects [7]. Blinding can be single-blind, if either the subject or the experimenter does not know the group assignment (real tDCS or sham), or double-blind, if neither the subject nor the experimenter knows the group assignment. The traditional placebo protocols in tDCS for subject blinding consist of an initial current ramp, followed by a short period of stimulation (usually 5–60 s) and a final deceleration ramp [8]. This approach aims to cause sensory stimulation similar to real tDCS without affecting cortico-spinal excitability.
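The active and sham waveforms described above can be sketched as simple piecewise-linear profiles (an illustration only; the 30 s ramps, the sample rate, and the example durations are assumptions, with the 30 s sham hold falling inside the 5–60 s range cited):

```python
import numpy as np

def tdcs_waveform(amplitude_ma, hold_s, ramp_s=30, fs=10):
    """Piecewise-linear tDCS current profile: ramp up, hold, ramp down."""
    up = np.linspace(0.0, amplitude_ma, int(ramp_s * fs), endpoint=False)
    hold = np.full(int(hold_s * fs), amplitude_ma)
    down = np.linspace(amplitude_ma, 0.0, int(ramp_s * fs))
    return np.concatenate([up, hold, down])

active = tdcs_waveform(2.0, hold_s=20 * 60)  # e.g. 2 mA held for 20 min
sham = tdcs_waveform(2.0, hold_s=30)         # same ramps, brief 30 s hold
```

Both profiles start and end at zero and share identical ramps, so the initial skin sensation is similar; only the hold duration differs.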

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_125


N. C. Teixeira-Neto et al.

Fig. 1 System block diagram

In order to help researchers deliver adequate sham interventions, there are commercial stimulators for tDCS clinical trials that include a 'double-blind study (DBS) mode', which provides a built-in sham mode [9]. In stimulators with DBS mode, allocation concealment is achieved by entering numeric codes assigned to waveform arms (sham or active) or by a toggle (A/B mode) [10]. In addition, some devices provide an impedance display on the device screen that mimics the impedance, current, and voltage changes expected during 'active' stimulation, and detect loss of electrode contact [10]. However, tDCS stimulators dedicated to neuroscience research with DBS mode are usually relatively expensive (around $5000–$8000). This high price makes access to these devices difficult for small research groups, especially in developing countries, and particularly if they need more than one device to facilitate data collection. In addition, some clinical trials use custom-made stimulators with variations of the placebo protocol, making it difficult for other researchers to reproduce the study [9]. Given this research problem, this work aims to develop an open-source tDCS stimulator for clinical trials. The idea of the open-source model is to boost access to, and the conduction of, studies with adequate methodological quality, in order to find the most effective technique and new applications of tDCS.

2 Materials and Methods

The development of the proposed system was divided into two stages: (1) tDCS hardware to control the stimulation current applied to the patient; and (2) a mobile application (app) that communicates via Bluetooth® with the prototype and serves as a human–machine interface for the system. A general diagram of the system can be seen in Fig. 1. Bench tests were then performed to assess the accuracy of the current produced by the prototype. All hardware and software files of this project will be available on the GitHub platform, open to developers in order to encourage collaborative development of the device.

2.1 Hardware Development

The tDCS prototype proposed in this work was inspired by the study of Kouzani and collaborators [11], who developed a mini tDCS device. However, the prototype used here has an ESP32 as the microcontroller and an LM334 integrated circuit (IC) as the current source. The ESP32 is a powerful microcontroller with built-in Wi-Fi and Bluetooth®, designed as a solution for Internet of Things devices [12]; the board used in this project was the ESP-WROOM-32 version. This platform has a small size and high processing power compared to other traditional microcontrollers such as the Arduino. The ESP32 has two integrated 8-bit digital-to-analog converters (DAC), which were useful in developing the proposed work. Besides, it is one of the most cost-effective microcontrollers with Bluetooth®, costing around $9, thus fulfilling one of the requirements of this work. The Arduino IDE (Integrated Development Environment) was used to program the ESP32. The microcontroller's analog-to-digital converters (ADC), with 12-bit resolution, were used to: (1) monitor the battery charge level; (2) measure the voltage and current levels on the patient during stimulation; and (3) measure the resistance variation of the patient's stimulated region. The operation flow diagram of the current control by the ESP32 can be seen in Fig. 2. The stimulus can only start or persist at adequate values of resistance on the participant and of battery level (see Fig. 2).

Fig. 2 Operation flowchart for the program that operates the tDCS device. Rp, Vp and Ip represent respectively the resistance, voltage and current on the participant during the stimulation

In this work, the prototype was powered by a 14.4 V battery (a pack of 4 lithium 2200 mAh cells). A 7805C voltage regulator with fixed 5 V output was used to power the ESP32. The current intensity is controlled using the LM334, a three-terminal adjustable current source IC. The proposal was the development of a DC source controlled by voltage variation, with the control voltage provided by the ESP32 DAC output (see Fig. 3).

Fig. 3 Circuit diagram of the tDCS device

This form of current control does not require digital potentiometers, reducing the component count, battery consumption and stimulator size.

2.2 Application Development

The application interface was developed collaboratively with researchers from the Laboratory of Applied Neuroscience at the Universidade Federal de Pernambuco, who have extensive experience with tDCS clinical trials. Meetings were held with the researchers to gather requirements and, after they tested the mobile app prototype, subsequent user feedback. These data were turned into solutions to make the prototype of this project easier to use and to adapt to the researchers' needs. The Dart 2.0 programming language with the Flutter SDK version 1.17.1 was used in the Visual Studio Code IDE to develop the native app for the Android and iOS platforms. Google defines Flutter as a portable user interface toolkit for building beautiful natively compiled applications for mobile, web and desktop from a single codebase [13], which is why this software development kit (SDK) was chosen for this project. The developed application communicates via Bluetooth® 4.0 with the ESP32. The user can choose values of duration and current intensity in the app (see Fig. 4b). The current intensity can be varied up to a maximum of 2 mA, in increments of up to 0.2 mA. The maximum stimulation time is 40 min, in 1 min increments. To assist researchers in double blinding, the app features the DBS mode. In this mode, the user defines the duration and current intensity of the active and sham stimuli (see Fig. 4b). The stimulus current (active or placebo) presents a waveform composed of a start ramp, a hold current and an end ramp. After the waveform values of the placebo and active stimulation are configured, the app generates two stimulus options in DBS mode ('A' and 'B') (see Fig. 4a). When set to placebo stimulation, the app simulates realistic values of stimulation resistance, voltage and current. The app randomly assigns, without the user's knowledge, the stimulus type (active or sham) to options A and B of the DBS mode and saves this setting for future use. To reveal the assignment of the stimulus type (active or placebo) to options A or B, the user needs to press the 'reveal' button and confirm visualization on the participants' data screen (see Fig. 4c). In addition, the user has the option to create a new DBS mode with a new assignment of the stimulus type to options 'A' and 'B'. All DBS modes created are saved and named with their creation date, to be chosen in future applications.
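The blinded A/B assignment logic can be sketched in a language-agnostic way (a Python illustration of the concept only; the app itself is written in Dart/Flutter, and the function names here are hypothetical):

```python
import random

def create_dbs_mode(seed=None):
    """Randomly conceal which of options 'A'/'B' is the active stimulus.
    The mapping stays hidden until explicitly revealed."""
    rng = random.Random(seed)
    labels = ["active", "sham"]
    rng.shuffle(labels)
    mapping = {"A": labels[0], "B": labels[1]}

    def reveal():
        # In the app, this step requires pressing 'reveal' and
        # confirming on the participant data screen.
        return dict(mapping)

    return reveal

reveal = create_dbs_mode(seed=42)
assignment = reveal()
```

Because the assignment is made by the software rather than by a person, the same researcher can configure and apply the stimulus without learning which option is active until the reveal step.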

2.3 Bench Test

For testing the prototype, the electrode–skin impedance of the patient's head was simulated with a 5 kΩ resistor [11, 14]. The prototype was tested on 14.4 V battery power. During stimulus operation, the current through the resistor representing the patient was monitored every minute using a digital multimeter (Agilent Technologies U3401A). As the test multimeter does not have a data-logger option, the current values were recorded with the help of a smartphone camera and tabulated in a Microsoft Excel spreadsheet. The 'Infinite Shot' application, available on Google Play, was used to capture a photo of the multimeter's display every minute for an hour.


Fig. 4 Mobile app screens: a stimulation screen, b parameter setup screen and c participant data screen

The current intensities tested were 0.2 mA, 1 mA and 2 mA, each over a period of one hour. The choice of these values is in line with those most commonly used in tDCS protocols in the literature [15]. The current values were stored for the accuracy analysis. This analysis is important since, as the battery charge is consumed over time, the stimulus current needs to remain effective.

3 Results

A prototype of a tDCS device for clinical trials was developed, consisting of open-source hardware and software, with a production cost of approximately R$ 30,000. The prototype is made of low-cost components that proved efficient in current control. The device board measures 5 × 6 cm. The accuracy of the developed tDCS device, for each output current level (0.2 mA, 1.0 mA and 2 mA), is shown in Table 1. The results show high accuracy (96.534%) and a total percentage current error of 3.542%.

To start the stimulus, resistance values of up to 30 kΩ were considered adequate. Laboratory tests verified that the resistance on the patient cannot exceed 15 kΩ during stimuli of up to 1.0 mA, or 6 kΩ after the initial ramp for set intensities of 2 mA. The circuit has a total consumption of 220 mA, giving a battery autonomy of approximately 10 h. A voltage variation of 0.14 V was verified after one hour of stimulation in our tests. The application has been finalized and is in the testing phase with researchers, whose initial feedback enabled the creation of the DBS mode functionality. The DBS mode settings can be seen in Fig. 4b, and the participant data screen with disclosure of the stimulus type (placebo or active) of the DBS mode in Fig. 4c.
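The resistance limits above can be summarized in a small guard function mirroring the flowchart logic (a sketch; the function name, the interpretation that stimulation starts only below 30 kΩ, and the 15 kΩ pre-ramp limit at 2 mA are assumptions):

```python
def stimulation_allowed(r_kohm, i_ma, started=False, ramp_done=False):
    """Guard mirroring the reported resistance limits: start only below
    30 kOhm; during stimulation stay below 15 kOhm for currents up to
    1.0 mA, or below 6 kOhm after the initial ramp when 2 mA is set."""
    if not started:
        return r_kohm < 30
    if i_ma <= 1.0:
        return r_kohm < 15
    return r_kohm < 6 if ramp_done else r_kohm < 15

ok_start = stimulation_allowed(12, 2.0)  # low enough resistance to start
ok_hold = stimulation_allowed(5, 2.0, started=True, ramp_done=True)
```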

Table 1 Accuracy of the output current

Step     Ideal value (mA)   Average value (mA)   STD of value   Mean of error (%)   STD of error (%)
Step A   0.2                0.192                0.003          4.052               1.735
Step B   1                  1.064                0.035          6.351               3.45
Step C   2                  1.965                0.003          1.733               1.735

Standard deviation (STD)

4 Discussion

An open-source tDCS stimulator for clinical trials, with the potential to facilitate the blinding process and innovation in tDCS research, was developed. The improvements in the mobile app interface and the adoption of the open-source model both aim to facilitate access to tDCS stimulators and to support research with adequate methodological rigor. Bench tests were carried out at the protoboard level, which may have contributed to the variation of the mean current error across current intensities, between 1 and 6% (see Table 1). However, the overall accuracy of 96.534% in the bench tests is comparable to that of Kouzani and collaborators, who obtained an accuracy of 97% and validated in humans the ability of their tDCS stimulator to alter cortical excitability [11]. This accuracy allows the researcher to use conventional tDCS electrodes of 25–35 cm², producing a maximum current density of 0.08 mA/cm², within the safety values established in the literature [15–17]. The ESP32 is one of the few low-cost microcontrollers that features Bluetooth® communication built into the development board. This contributed to the communication with the app, the prototype interface and, to a certain extent, to the guarantee of the double-blind procedure, since it allows the tDCS operator to keep a distance from the patient during the stimulation process [18]. This distance avoids possible unwanted interactions between the researcher and the subject, such as observable vasodilation and skin redness after stimulation (typically over the right orbit) [18, 19]. Probably due to the high prices of tDCS devices dedicated to research, many researchers end up selecting stimulators without the DBS mode [5, 20]. With these devices it is not possible to blind the tDCS operator, given the lack of the study mode; however, blinding the tDCS operator helps prevent the delivery of supplemental care or co-intervention to subjects in the experimental arm [21].
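The headline figures in this discussion follow from simple arithmetic over Table 1 and the electrode geometry (a check script; note that the paper averages per-minute measurements, so its per-step error means differ slightly from those recomputed here from the averaged currents):

```python
# Per-step percentage error recomputed from the averaged currents in Table 1
ideal = [0.2, 1.0, 2.0]           # set current [mA]
measured = [0.192, 1.064, 1.965]  # averaged measured current [mA]
errors = [abs(i - m) / i * 100 for i, m in zip(ideal, measured)]

# Maximum current density: smallest conventional pad (25 cm^2) at 2 mA
density = 2.0 / 25  # mA/cm^2
```

The recomputed per-step errors (about 4.0%, 6.4% and 1.75%) track the reported means of 4.052%, 6.351% and 1.733%, and the density check reproduces the 0.08 mA/cm² safety figure.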
Because the app developed in this study randomly assigns the type of stimulus (active or sham) in DBS mode, the app allows the same researcher to both create and apply the stimulus without breaking double-blinding. This is intended to facilitate the blinding process, given the use of only one researcher for this. The open-source model facilitates the acquisition of the stimulator developed for its low production cost. Medical devices available on the market are subject to high prices due to regulatory expenditures or simply due to their business model [22]. Since the product here aims the research applications it follows different regulation process from many commercially available stimulators. Moreover, the fact that it is open-source, i.e., aiming that the researchers themselves build their device in a multidisciplinary work, ends up cheapening the acquisition cost. The open-source approach also contributes to the device’s greater ability to adapt and develop collaboratively [23]. The open-source model allows the searcher to build a tDCS equipment more adaptable to their search question by
making small changes to the code, for example. Changes to the app code can turn this clinical-research tDCS device into a home-based one [24, 25], allowing remote control by the researcher. The open-source model thus stimulates the creative process of innovation in tDCS by adapting the device to the objective of the clinical trial and not the other way around [23]. In addition, the device's adaptability allows researchers to reproduce tDCS study protocols more easily, in order to obtain more reliable information about its effectiveness. Among the limitations of this study is the guarantee of a 2 mA stimulus only for resistances below 6 kΩ. However, a transcranial impedance of around 5 kΩ is recommended during most stimulation [14], and this is a common value in the tDCS literature [26].
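The safety and impedance figures above follow from simple arithmetic, sketched below; the 12 V compliance figure is our inference from Ohm's law, not a value stated by the authors.

```python
# Arithmetic check of the safety figures discussed above.
current_ma = 2.0                       # maximum stimulation current (mA)

# Current density with the smallest conventional electrode (25 cm^2).
density = current_ma / 25.0
print(f"max current density: {density:.2f} mA/cm^2")

# Ohm's law: sustaining 2 mA through the 6 kOhm upper limit implies the
# output stage must supply about 12 V (our inference, not a stated spec).
compliance_v = (current_ma / 1000.0) * 6_000.0
print(f"implied compliance voltage: {compliance_v:.0f} V")
```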

5 Conclusion

A low-cost, open-source device was developed that meets recommendations for clinical trials with tDCS. Researchers working in interdisciplinary teams are expected to build devices better adapted to the objectives of their tDCS studies, without the need for large financial resources, and to drive more reproducible studies. The next steps of this research will be the validation of the device in humans and the dissemination of the open-source project in the scientific community.

Acknowledgements The authors thank Coordenação de Aperfeiçoamento de Pessoal do Ensino Superior (CAPES) for the award of fellowships during the period of this work.

Conflict of Interest The authors declare that there is no conflict of interest regarding the publication of this article.

References

1. Nitsche MA, Paulus W (2000) Excitability changes induced in the human motor cortex by weak transcranial direct current stimulation. J Physiol 633–639. https://doi.org/10.1111/j.1469-7793.2000.t01-1-00633.x
2. Priori A (2003) Brain polarization in humans: a reappraisal of an old tool for prolonged non-invasive modulation of brain excitability. Clin Neurophysiol 114(4):589–595. https://doi.org/10.1016/S1388-2457(02)00437-6
3. Bikson M, Inoue M, Akiyama H, Deans JK, Fox JE, Miyakawa H, Jefferys JGR (2004) Effect of uniform extracellular DC electric fields on excitability in rat hippocampal slices in vitro. J Physiol 557(1):175–190. https://doi.org/10.1113/jphysiol.2003.055772
4. Radman T, Su Y, Je HA, Parra LC, Bikson M (2007) Spike timing amplifies the effect of electric fields on neurons: implications for endogenous field effects. J Neurosci 27(11):3030–3036. https://doi.org/10.1523/JNEUROSCI.0095-07.2007
5. Lefaucheur JP, Antal A, Ayache SS, Benninger DH, Brunelin J, Cogiamanian F, Cotelli M, De Ridder D, Ferrucci R, Langguth B, Marangolo P, Mylius V, Nitsche MA, Padberg F, Palm U, Poulet E, Priori A, Rossi S, Schecklmann M, Vanneste S, Ziemann U, Garcia-Larrea L, Paulus W (2017) Evidence-based guidelines on the therapeutic use of transcranial direct current stimulation (tDCS). Clin Neurophysiol 128(1). https://doi.org/10.1016/j.clinph.2016.10.087
6. Chase HW, Boudewyn MA, Carter CS, Phillips ML (2020) Transcranial direct current stimulation: a roadmap for research, from mechanism of action to clinical implementation. Mol Psychiatry 25(2):397–407. https://doi.org/10.1038/s41380-019-0499-9
7. Spieth PM, Kubasch AS, Isabel Penzlin A, Min-Woo Illigens B, Barlinn K, Siepmann T (2016) Randomized controlled trials—a matter of design. Neuropsychiatr Dis Treat 12:1341–1349. https://doi.org/10.2147/NDT.S101938
8. Gandiga PC, Hummel FC, Cohen LG (2006) Transcranial DC stimulation (tDCS): a tool for double-blind sham-controlled clinical studies in brain stimulation. Clin Neurophysiol 117(4):845–850. https://doi.org/10.1016/j.clinph.2005.12.003
9. Fonteneau C, Mondino M, Arns M, Baeken C, Bikson M, Brunoni AR, Burke MJ, Neuvonen T, Padberg F, Pascual-Leone A, Poulet E, Ruffini G, Santarnecchi E, Sauvaget A, Schellhorn K, Suaud-Chagny MF, Palm U, Brunelin J (2019) Sham tDCS: a hidden source of variability? Reflections for further blinded, controlled trials. Brain Stimul 12(3):668–673. https://doi.org/10.1016/j.brs.2018.12.977
10. Alonzo A, Aaronson S, Bikson M, Husain M, Lisanby S, Martin D, McClintock SM, McDonald WM, O'Reardon J, Esmailpoor Z, Loo C (2016) Study design and methodology for a multicentre, randomised controlled trial of transcranial direct current stimulation as a treatment for unipolar and bipolar depression. Contemp Clin Trials 51:65–71. https://doi.org/10.1016/j.cct.2016.10.002
11. Kouzani AZ, Jaberzadeh S, Zoghi M, Usma C, Parastarfeizabadi M (2016) Development and validation of a miniature programmable tDCS device. IEEE Trans Neural Syst Rehabil Eng 24(1):192–198. https://doi.org/10.1109/TNSRE.2015.2468579
12. Maier A, Sharp A, Vagapov Y (2017) Comparative analysis and practical implementation of the ESP32 microcontroller module for the internet of things. Internet Technol Appl (ITA), Wrexham 4(2):143–148. https://doi.org/10.1109/ITECHA.2017.8101926
13. Flutter. flutter.dev
14. DaSilva AF, Volz MS, Bikson M, Fregni F (2011) Electrode positioning and montage in transcranial direct current stimulation. J Vis Exp (51). https://doi.org/10.3791/2744
15. Nitsche MA, Cohen LG, Wassermann EM, Priori A, Lang N, Antal A, Paulus W, Hummel F, Boggio PS, Fregni F, Pascual-Leone A (2008) Transcranial direct current stimulation: state of the art 2008. Brain Stimul 1(3):206–223. https://doi.org/10.1016/j.brs.2008.06.004
16. Bikson M, Datta A, Elwassif M (2009) Establishing safety limits for transcranial direct current stimulation. Clin Neurophysiol 120(6):1033–1034. https://doi.org/10.1016/j.clinph.2009.03.018
17. van Dun K, Bodranghien FCAA, Mariën P, Manto MU (2016) tDCS of the cerebellum: where do we stand in 2016? Technical issues and critical review of the literature. Front Hum Neurosci. https://doi.org/10.3389/fnhum.2016.00199
18. Palm U, Reisinger E, Keeser D, Kuo MF, Pogarell O, Leicht G, Mulert C, Nitsche MA, Padberg F (2013) Evaluation of sham transcranial direct current stimulation for randomized, placebo-controlled clinical trials. Brain Stimul 6(4):690–695. https://doi.org/10.1016/j.brs.2013.01.005
19. Ezquerro F, Moffa AH, Bikson M, Khadka N, Aparicio LVM, de Sampaio-Junior B, Fregni F, Bensenor IM, Lotufo PA, Pereira AC, Brunoni AR (2017) The influence of skin redness on blinding in transcranial direct current stimulation studies: a crossover trial. Neuromodulation 20(3):248–255. https://doi.org/10.1111/ner.12527
20. Lefaucheur JP (2016) A comprehensive database of published tDCS clinical trials (2005–2016). Neurophysiol Clin 46(6). https://doi.org/10.1016/j.neucli.2016.10.002
21. Renjith V (2017) Blinding in randomized controlled trials: what researchers need to know? Manipal J Nursing Health Sci 3(1):45–50
22. Niezen G, Eslambolchilar P, Thimbleby H (2016) Open-source hardware for medical devices. BMJ Innov 2(2):78–83. https://doi.org/10.1136/bmjinnov-2015-000080
23. White SR, Amarante LM, Kravitz AV, Laubach M (2019) The future is open: open-source tools for behavioral neuroscience research. eNeuro 6(4):1–5. https://doi.org/10.1523/ENEURO.0223-19.2019
24. Cucca A, Sharma K, Agarwal S, Feigin AS, Biagioni MC (2019) Tele-monitored tDCS rehabilitation: feasibility, challenges and future perspectives in Parkinson's disease. J Neuroeng Rehabil 16(1):1–10. https://doi.org/10.1186/s12984-019-0481-4
25. Carvalho F, Brietzke AP, Gasparin A, Dos SFP, Vercelino R, Ballester RF, Sanches PRS, da Silva DP, Torres ILS, Fregni F, Caumo W (2018) Home-based transcranial direct current stimulation device development: an updated protocol used at home in healthy subjects and fibromyalgia patients. J Vis Exp 137:1–9. https://doi.org/10.3791/57614
26. Hahn C, Rice J, Macuff S, Minhas P, Rahman A, Bikson M (2013) Methods for extra-low voltage transcranial direct current stimulation: current and time dependent impedance decreases. Clin Neurophysiol 124(3):551–556. https://doi.org/10.1016/j.clinph.2012.07.028

Velostat-Based Pressure Sensor Matrix for a Low-Cost Monitoring System Applied to Prevent Decubitus Ulcers

T. P. De A. Barros, J. M. X. N. Teixeira, W. F. M. Correia and A. E. F. Da Gama

Abstract

Decubitus ulcers generate costs of about 11.5 billion dollars per year in the USA and cause many injuries such as pain, infections and deaths, totaling about 60 thousand deaths per year. Based on that, this paper presents the development of a bed pressure monitoring system, suited for everyday use to prevent pressure ulcers, that uses alarms to support caregivers and health professionals in improving health conditions for patients. Instead of using individual sensor units to build a sheet of sensors, the proposed solution was made with Velostat, a conductive carbon-impregnated polyolefin; this choice reduces the cost of the system, enabling access for patients in critical conditions in their homes or even in public hospitals. In this way, a reduced-dimensions prototype was built using this pressure-sensitive conductive sheet and, as a result, the tests demonstrate the functionality of the system, indicating peak pressure on the patient's skin in real time on a mobile app.

Keywords

Pressure ulcer • Decubitus ulcers • Monitoring system • Pressure monitoring • Velostat

T. P. De A. Barros (B) · J. M. X. N. Teixeira
Departamento de Eletrônica e Sistemas, Universidade Federal de Pernambuco, Recife, Brazil

W. F. M. Correia
Laboratório de Concepção e Análise de Artefatos Inteligentes, Universidade Federal de Pernambuco, Recife, Brazil

A. E. F. Da Gama
Grupo de Pesquisa em Engenharia de Reabilitação, Universidade Federal de Pernambuco, Recife, Brazil

1 Introduction

Bedridden and reduced-mobility patients demand constant attention: changing the decubitus position to relieve sites of high skin pressure. Due to reduced blood circulation, the body parts under pressure suffer from the appearance of ulcers, which can reach advanced stages, causing pain from the necrosis process and infection from the exposure of internal layers of the skin. The ulcers occur mostly in regions of bone prominence, but their severity can advance according to other factors such as poor nutrition and local humidity [1]. As this injury is difficult to heal and can lead to death, the treatment can be costly because of the need for dressings to protect the ulcers, medication, keeping the patient admitted to the hospital and, in some cases, surgery [2]. In addition to the impact on the patient's health, according to a study carried out between 2008 and 2012 in the United States, the financial impact is significant: 2.5 million people suffered from pressure ulcers per year, direct and indirect hospital costs were $11.5 billion, plus $4.2 billion due to fines and litigation, and 60 thousand people died from complications per year [3]. This study was conducted by M.A.P. Wellsense, which commercializes a pressure monitoring system that provides real-time pressure feedback as caregivers change the patient's position [4]. There are few studies estimating the incidence of pressure ulcers in Brazil, but local studies provide a partial panorama. According to a study conducted at a Brazilian university hospital, the incidence of pressure ulcers over three consecutive months of observation was 39.8%, and 41.0% of these cases were from the intensive care unit [5]. Besides the epidemiological studies, some works propose systems to help prevent pressure ulcers, which may include bed monitoring, alarms and automated bed adaptation so that skin pressure is minimized [6].
In spite of the products already available to improve the prevention effort, the price is a barrier for undeveloped and

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_126



developing countries to adopt monitoring systems. For this reason, there are proposals to build low-cost monitoring systems that support the prevention routines of caregivers and nurses, safeguarding the patient's health through remote real-time visualization, tracking how long the same region of the body remains under pressure, or providing feedback on skin pressure [7]. VU, by Wellsense, is a system that provides a real-time skin pressure map of the patient and creates reports to improve professionals' productivity and to engage the patient in reducing pressure injuries [8]. This product may have difficulty entering undeveloped and developing countries because of the costs of an imported product, which is a barrier to adoption by private hospitals and insurance companies. These examples and the solution proposed in this paper share the common objective of preventing the appearance of pressure ulcers, but they differ in the chosen sensor. We therefore propose a sensor that does not need to be attached directly to the skin and that consumes less energy. These differentials can also be found in proposals from other researchers using carbon-based composites, such as Velostat [9] and MWCNT-PDMS [10]. The proposed prototype differs from these references in its intended use, as the objective is to prevent decubitus ulcers and not only plantar ulcers. In addition, the tests carried out went beyond tests with known weights, as experimental tests were also performed with people (who are not patients) in the decubitus position. Besides the commercialized products, researchers keep trying to create cheaper solutions to democratize access to the benefits of a pressure ulcer prevention system.
Related to the goal of reducing pressure injuries, there are other developments: for example, a flexible sensor used directly on the skin, based on a force-sensing resistor attached to the skin to measure pressure at specific spots of the body, unlike VU, which is laid like a sheet over the bed [11]; and a solution that does not require direct contact with the skin, a rubber-based flexible sensor sheet that uses a capacitive soft sensor for plantar pressure monitoring [12]. In this paper, the pressure sensor matrix was developed using a conductive sheet proposed as a replacement for traditional piezoresistive sensors, an element that significantly influences the price. It is estimated that, per sensor, there is an 86% cost reduction, without considering other elements of the prototype. The proposed solution consists of a monitoring and alarm system to help caregivers visualize the pressure distribution on the bed and assist in repositioning the patient. The development of a reduced-dimensions prototype demonstrates an end-to-end low-cost solution to prevent pressure ulcers.


2 Methodology

The first step was to identify the cause of the incidence of decubitus ulcers in hospitals, because even with health professionals responsible for applying preventive procedures, the problem is recurrent. The next step was to identify flaws in the prevention procedures currently used in hospitals that explain the persistence of a problem so old that it affects the patient, the family and the hospital. From this study, the solution led to the design of an alarm and monitoring system with the potential to support the health professional's routine, improving the performance of health professionals and the patients' quality of life, even in a high-demand scenario. To this end, the prototype was built through staged development, so that the selected elements of the solution yield a low-cost final product and allow faster, cheaper prototyping. The proposed system consists of the reading module, which includes a sensor matrix mat and a microcontroller used to digitize the readings. In addition, a wireless interface was developed so that the monitoring results can be made available remotely, and a graphical interface for mobile applications was developed to test the user flow. To support the functionality tests of the system, a visualization module was also created, although it is not part of the final product.

2.1 Pressure Sensor Matrix

The starting point in creating a monitoring system was to define sensors that fit the objective of measuring the pressure on the bed while offering a reliable and affordable solution. The piezoresistive sensor was selected because it is a passive type, which brings simplicity to the development, with no need to implement many electronic modules [13]. From a literature review covering papers, products and other development proposals, a sheet with piezoresistive properties showed greater potential for implementing a matrix of sensors than separate piezoresistive sensor units, mainly because of the price. The sheet of carbon-impregnated polyolefin, commercially named Velostat or Linqstat depending on the manufacturer, is a pressure-sensitive material whose resistance varies with the applied pressure. Since the pressure depends on the mass and the area over which it is applied, the reading range is between 0 and 1000 g; for different areas, the same weight applies a different pressure, so the material presents a different resistance. Thus, if the area of all sensors in the matrix is fixed, the resistance behavior of each sensor will be roughly the same [14], and the material does not depend significantly on temperature within its operating limits of −45 °C to 65 °C [15].


Fig. 1 a–c Sensor mat fabrication and assembly process; d mobile app screenshot

However, Velostat alone does not replace the pressure sensors; conductive parts are needed on both sides of the sheet, one side to feed the circuit and the other to connect a pull-down resistor, creating a circuit made up of a source, a variable resistance, a fixed resistance and ground. These two resistors form a voltage divider in which the voltage across the fixed resistance depends on the variable resistance and is read by the ADC of the microcontroller. Copper tapes were fixed to both sides of the Velostat as the conductive parts. Given the dimensions of the Velostat and the tapes, "virtual" sensor units were created by the intersection of horizontal copper tapes on one side of the Velostat and vertical copper tapes on the other. To assemble these parts, common double-sided tape was fixed to two non-conductive sheets; then the Velostat with copper tapes and the two non-conductive sheets were put together as a sandwich, with the copper tapes and the double-sided tapes alternating side by side, thus constructing the matrix of sensors, as illustrated in Fig. 1a–c.
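The voltage-divider readout described above can be sketched in a few lines; the component values below are hypothetical, since the paper does not state the pull-down resistance or the Velostat resistance range.

```python
def adc_voltage(r_velostat: float, r_fixed: float = 10_000.0, vcc: float = 5.0) -> float:
    """Voltage across the fixed pull-down resistor in the divider
    Vcc -- R_velostat -- node -- R_fixed -- GND, as seen by the ADC."""
    return vcc * r_fixed / (r_velostat + r_fixed)

# Pressing the sheet lowers the Velostat resistance, so the read voltage rises.
light = adc_voltage(r_velostat=50_000.0)   # light pressure, high resistance
heavy = adc_voltage(r_velostat=5_000.0)    # heavy pressure, low resistance
print(f"{light:.2f} V -> {heavy:.2f} V")
```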

2.2 Microcontroller

To complete the system, the matrix described above delivers its readings to a microcontroller, which processes the information and then sends the data over a wireless link to an application. The microcontroller selected was the ATmega2560, on the Arduino Mega development board, which has 16 analog pins, 54 digital pins and operates at a clock frequency of 16 MHz [16]. As the system was designed to use all the available Velostat, the matrix holds 169 sensors spaced 1.2 cm apart, corresponding to the width of the double-sided tape, arranged in a 13 × 13 grid; the ends of 13 parallel copper tapes are connected to 13 digital pins and the other 13 to 13 analog pins, to simplify the matrix-scanning logic. To scan all sensors, the software drives each digital pin in sequence and reads each analog pin in sequence while a single pin is driven; in other words, the first pin receives 5 V and the Arduino board



Fig. 2 Experimental test for the lateral decubitus position, foot region, with the right foot on the mat: a right foot on the kinetic sand, b the footprint left in the sand, c decubitus position reproduced on the mat, d result obtained from scanning the mat

reads 13 analog pins separately but in sequence; after that, the second digital pin receives the voltage and again all analog pins are read, and so on. In a large system, with millions of sensors, this logic should be implemented with shift registers and multiplexers, which can be integrated with the Arduino Mega; this indicates that the developed solution needs some improvements to become a final product, but the Arduino Mega's pin count made it possible to simplify the development of this prototype.
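The row-by-row scan described above can be simulated in plain Python; the hardware drive/read calls are replaced by a lookup function, and the pin behavior and values are illustrative only.

```python
def scan_matrix(read_adc, rows: int = 13, cols: int = 13):
    """Drive one 'digital pin' (row) at a time and, while it is high,
    read every 'analog pin' (column), mimicking the Arduino scan loop."""
    frame = []
    for r in range(rows):                      # row r receives 5 V (simulated)
        frame.append([read_adc(r, c) for c in range(cols)])
    return frame                               # row r goes low, next row is driven

# Toy readout in place of the hardware: only the sensor at (2, 5) is pressed.
fake_adc = lambda r, c: 700 if (r, c) == (2, 5) else 10
frame = scan_matrix(fake_adc)
print(len(frame), len(frame[0]), frame[2][5])
```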

2.3 Wireless Interface

After all sensors are read and the data made available at the analog pins, the data can be sent to an application. To enable remote communication, as a proof of concept, a Bluetooth 2.0 HC-06 module was used to implement the wireless interface. The Bluetooth module, connected to the sensor matrix and the Arduino, remains available for mobile devices to pair with. Before the Bluetooth module transfers the information to the mobile device, it must receive data from the Arduino through the UART communication protocol. Data exchange between the Bluetooth module and the Arduino takes place through asynchronous

serial communication, from the Bluetooth module's RX/TX pins to the RX/TX pins of the Arduino, and depends heavily on a matching baud rate to establish communication between these devices [17].
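As a rough illustration of why the baud rate matters, the time to ship one full 13 × 13 frame over the serial link can be estimated; the 9600 baud rate and one byte per sensor are assumptions for this sketch, not values given in the paper.

```python
SENSORS = 13 * 13        # 169 readings per frame
BITS_PER_BYTE = 10       # 8 data bits + start + stop bit on a UART
BAUD = 9600              # assumed link speed

frame_time_s = SENSORS * BITS_PER_BYTE / BAUD
print(f"~{frame_time_s * 1000:.0f} ms to transmit one frame")
```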

2.4 Application

In order to have a complete system that indicates to the caregiver which parts of the patient's body need attention in the pressure ulcer prevention routine, an interface was created for users to monitor the patient's situation, using MIT App Inventor [18]. The user flow consists of choosing an available Bluetooth device and setting the alarm, which will suggest changing the patient's position at the scheduled time. A black screen then appears representing the sensor matrix, in other words, the mattress; on this same screen, green and red squares indicate the pressure on the skin, as illustrated in Fig. 1d. In the application code, the data received from the classification defined in the Arduino code is converted into three different colors, indicating three levels of pressure; the pressure values are therefore handled in code, and the representative values of each color are part of the system specification and are not
exhibited on the screen. The three classification levels reflect the implementation limitations of this version and what was observed during the experimental tests, since in this way the readings obtained are more stable and easier to interpret. For the testing phase, the Processing software [19] was used to create a visualization module that combines the real image of the sensor mat, with a person in the decubitus position, with the result of scanning the matrix, indicating the skin pressure regions, as illustrated in Fig. 2c, d. The same image of the sensor matrix scan is also shown by the application, in order to perform prototype functionality tests.
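The three-level classification can be sketched directly from the thresholds the authors report in Table 1; the function name and structure below are ours.

```python
def classify(adc_value: int) -> str:
    """Map a 10-bit ADC reading to the app's color code (thresholds from Table 1)."""
    if adc_value <= 79:
        return "black"   # no significant pressure
    if adc_value <= 299:
        return "green"   # lighter pressure
    return "red"         # heavier pressure (readings saturate near 950)

print(classify(50), classify(150), classify(358))
```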

3 Experimental Tests

The acquisition of readings at different spots of the sensor mat was carried out to map the behavior of the Velostat and the system. At first, the focus was on the weight above an isolated sensor, without considering the blanks where there are no sensors. In this way, two tests were performed: one to obtain the minimum and maximum weight captured by the sensor, and another to obtain the relationship between weight (or pressure) and the voltage read. In these tests, blocks of known weight with a base area equal to the sensor area, 1 cm², were used together with the Arduino console to record results on a scale of 0 to 1023, corresponding to the voltage, since the board uses a 10-bit ADC. To obtain the operating range of the system, shown in Table 1, the limits of weight for which a significant variation could be seen on the Arduino's serial monitor were observed during the scan. Then, to find the relationship mentioned above, 4 different sensors were randomly selected and tested with 4 different weights—73.05 g, 314.1 g, 541.91 g and 1191.11 g—in order to obtain the average of the analog pin readings for each weight, which can be seen in Table 2. The weights were converted to pressure using Eqs. (1) and (2), given that P is pressure, F is force, A is area and m is mass, and the readings were converted to voltage using Eq. (3), given that T is voltage and L is the result from the analog pin.

P = F / A    (1)

P = (m × 9.81) / A    (2)

T = (5 / 1024) × L    (3)

Knowing that the system does not show the silhouette of the body but indicates points of pressure, the experimental test was designed to observe the greatest pressure areas of different sections of the body by simulating the decubitus positions. To observe the results in the application, the same Arduino code from the system was used to send the data through serial communication to the visualization module, whose code was created to closely mirror the application, receiving a classification and displaying the results in a matrix. In addition, the visualization module, through the functionalities of the Processing software, captures video from a camera to show the real image of the person lying on the sensor mat. To complement the experimental test, kinetic sand was used to define the expectation for the sensor mat: by lying on the sand, one can visualize the imprint left in the sand and thus see the greatest pressure areas, where the sand was most deformed. In this way, the test has a reference that makes it possible to visually compare the imprint on the sand with the real image and with the results of the sensor mat scan.
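The conversions in Eqs. (1)–(3) can be checked numerically against the first row of Table 2; this sketch uses only the relations given in the text.

```python
G = 9.81  # gravitational acceleration, m/s^2

def pressure_kpa(mass_g: float, area_cm2: float = 1.0) -> float:
    """Eqs. (1)-(2): P = F/A = m*g/A, returned in kPa for an area given in cm^2."""
    force_n = (mass_g / 1000.0) * G
    area_m2 = area_cm2 * 1e-4
    return force_n / area_m2 / 1000.0

def adc_to_volts(reading: int) -> float:
    """Eq. (3): T = (5/1024) * L for the 10-bit ADC."""
    return 5.0 * reading / 1024.0

# First row of Table 2: 73.05 g on a 1 cm^2 sensor, ADC reading 358.
print(round(pressure_kpa(73.05), 2), "kPa")   # 7.17
print(round(adc_to_volts(358), 2), "V")       # 1.75
```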

Table 1 Weight and voltage ranges according to the classification

Classification   Result (0–1023)   Voltage (V)   Weight (g)
Black            0–79              0–0.38        0–31
Green            80–299            0.39–1.45     32–134
Red              300–950           1.46–4.88     135–1000

Table 2 Relationship between the pressure applied on the mat and the result obtained from the sensors

Pressure (kPa)   Weight (g)   Result (0–1023)   Voltage (V)
7.17             73.05        358               1.75
30.81            314.1        570               2.78
53.16            541.91       704               3.44
93.2             950.06       781.25            3.81
116.85           1191.11      856.75            4.18


4 Results

The minimum and maximum voltage values that can be represented in the matrix were verified during the initial tests. Values below 80, on the 0–1023 scale representing the 0–5 V range, gave unstable results, while for values greater than 1000, or weights greater than 1 kg, there is no longer any variation in voltage as the weight applied to the mat increases. From the range defined in code for each color classification, a test was carried out to obtain the minimum and maximum, in grams, for each interval. In this test, the weights were applied in increasing order, with variations of a few grams, to find the weight at which the color changes and thus obtain the representative weight limits for each color displayed on the screen. The results for the range in grams for a single sensor are shown in Table 1, which indicates that a sensor must carry at least 32 g for the application screen to indicate pressure in that area, and that from 135 g onwards the classification of greater intensity is shown. Regarding the relationship between pressure and weight, the average readings obtained for each sensor under the same weight make it possible to identify, in Table 2, that the voltage reading increases with the applied pressure in a nonlinear way, which reveals one of the properties of Velostat: the reduction of resistance with applied pressure, since, in the voltage divider, the measured voltage rises as the Velostat resistance falls. The experimental test seen in Fig. 2 shows that the bone prominences imprinted on the kinetic sand are highlighted on the screen, indicating exactly these pressure sites. In more detail, Fig. 2a shows the position of the foot on the sand, analogous to a patient lying in bed in right lateral decubitus, and Fig.
2b reveals, through the darkest parts, that the deformed points correspond to the prominences of the fifth metatarsal, the calcaneus and the lateral malleolus, with only a small deformation visible for the rest of the foot. Figure 2c, d shows the foot on the sensor mat and the result of the scan, respectively, in real time. Observe that, comparing the reference given in Fig. 2b with the result obtained in Fig. 2d, the greatest pressure site should be the lateral malleolus, but the results highlight the calcaneus: this is a consequence of the calcaneus load being concentrated on a single sensor, which is shown in red, whereas the lateral malleolus load is distributed between a blank region and a sensor, so it is shown in green.

The mat does not capture the body silhouette, due to the resolution given by the number of sensors on the sheet, but by identifying the main pressure sites, health professionals are able to pay attention and act when the same sites remain under pressure for a long time. The alarm time, defined in seconds, is the same for both classifications; the pressure values are therefore not used to trigger the alarm but to help caregivers in the visual evaluation. In addition, the red and green squares had better acceptance, because the system does not have enough distinct classifications to be represented by a seamless color gradient. People who tried the application understood the meaning of the screen when green was used to represent lighter pressure and red to represent heavier pressure. The mobile app has a response time of 2 s and, during this time, the matrix is scanned 38 times. This choice was made for the sake of reliability in this prototype and follows from the way it was implemented: in software, there is a loop of 38 readings of all sensors that finally yields a single, stable reading, obtained from the average of all these readings. Besides the results directly related to the system's functionality, the price was considered throughout the design and construction process. As a price reference, a commercial piezoresistive sensor of 0.78 cm² was selected, which costs R$ 8.33 per unit [20], against R$ 1.17 for a Velostat-based sensor. This price was calculated from the cost of the Velostat sheet and the copper conductor used to build the conductive parts on the sheet, which totaled R$ 198.00; dividing by the number of sensors distributed on the sheet, 169 sensors of 1 cm², gives a unit price of R$ 1.17.
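The per-sensor cost figures can be reproduced from the numbers stated above with a simple arithmetic check:

```python
SHEET_PLUS_COPPER = 198.00   # R$, Velostat sheet + copper conductor
SENSORS = 169                # 13 x 13 matrix of 1 cm^2 sensors
COMMERCIAL_UNIT = 8.33       # R$, commercial piezoresistive sensor [20]

per_sensor = SHEET_PLUS_COPPER / SENSORS
reduction = 1.0 - per_sensor / COMMERCIAL_UNIT
print(f"R$ {per_sensor:.2f} per sensor, about {reduction:.0%} cheaper")
```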

5 Discussion

The sensor mat could be used in other situations that require pressure monitoring, because variations can be implemented through the application, guiding how the data is used. For example, pressure monitoring as developed in this paper could provide information to evaluate the comfort of orthoses, since these accessories can cause the appearance of ulcers when they hurt, or could support gait assessment in physiotherapy sessions. The monitoring system is also open to several other features, for example, an automatic bed adaptation system based on monitoring skin pressure [21], or the monitoring of other factors that promote the appearance of ulcers, such as skin temperature [22]. However, although these functionalities add technology to the system and deliver benefits to users, returning to the main point of the problem, it is believed that clinics and hospitals will more easily adopt the proposed alarm and monitoring system, because it has the essential functionality needed for prevention: the monitoring of the main factor, pressure, and the alarm to support the activities of the health professional at a lower cost. In order to develop a final product with better resolution, some points need more detailed attention. The first is calibration: in the test that produced Table 2, the average of the results of four sensors was presented; although the behavior was the same for each sensor, the numerical values were not identical, so a calibration is needed to equalize all sensors in terms of the readings presented to the user [22]. In addition, there may be crosstalk among sensors, which form a network of resistors in series along the same column of the matrix and in parallel between different columns, so improvements in the electronic circuit are needed to cancel this effect and make the readings even more reliable [22]. Even with the need for some upgrades to improve the resolution and the final product, the main point is that replacing the commercial sensor with a carbon-based pressure-sensitive sheet offers a significant price difference, revealing the potential to become an affordable product. Considering the price comparison mentioned in the Results and the reasons presented in the Methodology for using Velostat, the price of a sensor in the proposed system is about 86% lower than that of a commercial piezoresistive sensor.

6 Conclusion

This project proposes a low-cost alternative to implementations with commercial sensor units for pressure monitoring, demonstrating the functionalities of the system through a prototype. The tests confirmed the feasibility of using Velostat for the application discussed in this work, as well as for future opportunities. In addition, the panorama presented in the Introduction demonstrates the benefits and impacts of making this solution more accessible: improving the quality of life of patients and the working conditions of caregivers, and positively impacting the quality of services provided by the hospital and the costs attributed to the treatment of ulcers.


To reach these benefits and evolve the proposed system into a prototype that can be tested on a real-size mattress with patients, the next steps are to replace the copper tapes with a printed flexible circuit and to implement the mat's scanning logic using shift registers and multiplexers. It is important, however, that each improvement keep the product affordable.
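The scanning logic proposed as future work can be illustrated with a minimal simulation. This is a hedged sketch, not the authors' firmware: `read_adc` is a hypothetical stand-in for the microcontroller's analog read, and the column/row selection would be done in hardware by the shift register and multiplexer:

```python
def read_adc(row, col, mat):
    """Placeholder for the analog reading of the sensor at (row, col)."""
    return mat[row][col]

def scan_matrix(n_rows, n_cols, mat):
    """Scan the full sensor mat, driving one column at a time."""
    frame = []
    for col in range(n_cols):          # select column via shift register
        row_values = []
        for row in range(n_rows):      # select row via multiplexer
            row_values.append(read_adc(row, col, mat))
        frame.append(row_values)
    return frame  # frame[col][row] = pressure reading

# Simulated 3x3 mat with a load over the centre sensor.
mat = [[0, 0, 0],
       [0, 87, 0],
       [0, 0, 0]]
frame = scan_matrix(3, 3, mat)
print(frame)
```

Driving a single column at a time is also one common way to mitigate the series/parallel crosstalk discussed above, since only one current path per row is energized during each reading.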

References

1. Campos S, Chagas A, Costa A, França R, Jansen AK (2010) Fatores associados ao desenvolvimento de úlceras de pressão: o impacto da nutrição. Revista de Nutrição 23
2. Torquato T, Rosa S, Rosa M, Tahmasebi R (2015) Úlceras por pressão: proposta de um colchão inteligente derivado do látex natural (Hevea brasiliensis). Ciência & Engenharia (Sci Eng J) 24:47–57
3. Bauer K, Rock K, Nazzal M, Jones O, Qu W (2016) Pressure ulcers in the United States' inpatient population from 2008 to 2012: results of a retrospective nationwide study. Ostomy Wound Manage 62:30–38
4. Wellsense Study of the MAP TM System Shows Improvement in Care of Hospital Acquired Pressure Ulcers at https://www.globenewswire.com/news-release/2013/02/13/1159059/0/en/Wellsense-Announces-Study-of-the-MAP-TM-System-Shows-Improvement-in-Care-of-Hospital-Acquired-Pressure-Ulcers.html
5. Gomes F, Bastos M, Matozinhos F, Temponi H, V-Meléndez G (2010) Fatores associados à úlcera por pressão em pacientes internados nos Centros de Terapia Intensiva de Adultos. Revista da Escola de Enfermagem da USP 44
6. Carvalho M (2014) Úlcera por pressão: proposta de prevenção por meio de um colchão de látex natural (Hevea brasiliensis) sensorizado. Universidade de Brasília
7. Manohar A, Bhatia D (2008) Pressure detection and wireless interface for patient bed. IEEE, 389–392
8. VU is an advanced pressure visualization system TM (APVS) at https://shape-products.com/portfolio/wellsense-vu/
9. Serra P (2011) Aquisição de valores de pressão plantar com um sensor flexível. Universidade da Beira Interior
10. Ramalingame R, Hu Z, Gerlach C et al (2019) Flexible piezoresistive sensor matrix based on a carbon nanotube PDMS composite for dynamic pressure distribution measurement. JSSS 8:1–7
11. Gerlach C, Krumm D, Illing M, Lange J, Odenwald S, Hübler A (2015) Printed MWCNT-PDMS-composite pressure sensor system for plantar pressure monitoring in ulcer prevention. IEEE 15:3647–3656
12. Crivello M De (2016) Flexible sensor for measurement of skin pressure and temperature for the prevention of pressure ulcers. Worcester Polytechnic Institute
13. Sensores de Pressão at https://www.smar.com/brasil/artigotecnico/sensores-de-pressao
14. Pressure Sensor Matrix Mat Project at https://reps.cc/?p=50
15. Pressure-sensitive conductive sheet (Velostat/Linqstat) at http://www.farnell.com/datasheets/1815591.pdf
16. Manohar A, Bhatia D (2020) Patient health monitoring system using Arduino mega 2560 and thingsboard server. Int J Sci Technol Res 9:5020–5026
17. Comparação entre protocolos de comunicação serial at https://www.robocore.net/tutoriais/comparacao-entre-protocolos-de-comunicacao-serial.html
18. MIT App Inventor at https://appinventor.mit.edu/
19. Welcome to Processing at https://processing.org/
20. Película de sensores piezoresistivos at https://aliexpress.ru/item/32839062526.html
21. Yousefi R, Ostadabbas S, Faezipour M et al (2011) A smart bed platform for monitoring & ulcer prevention. IEEE 4:1362–1366
22. Santos CLA Dos (2009) Sistema Automático de Prevenção de Úlceras por Pressão. Universidade da Madeira

Analysis and Classification of EEG Signals from Passive Mobilization in ICU Sedated Patients and Non-sedated Volunteers G. C. Florisbal, J. Machado, L. B. Bagesteiro and A. Balbinot

Abstract

Electroencephalography (EEG) has been the focus of research and advances for many years, yet there are still several tasks to be explored and methods to be tested to improve analysis and classification. The Event-Related Potential (ERP) is one of the brain responses measured with EEG; ERPs resulting from motor tasks are usually related to motor imagery or real movement. This study aims to analyze and classify the event-related desynchronization (ERD) and event-related synchronization (ERS) occurring in tasks involving passive mobilization in Intensive Care Unit (ICU) sedated patients and non-sedated volunteers. Our main goal is to provide a preliminary analysis and comparison between the sedated and non-sedated groups based on signal visualization and a classifier. Common Spatial Pattern (CSP) filtering and visual inspection of the best band and time window were used to verify the signal and the phenomena. From that, specific features (i.e., root mean square, standard deviation, mean of the Welch periodogram and differential entropy) were extracted in time and frequency to feed a Linear Discriminant Analysis (LDA) classifier. Once the two ICU sedated patients and the two volunteers were analyzed, it was possible to observe the proposed phenomena. Mean accuracy in the best scenario and best person for each group (two people in each group) was higher than 80% and 77% for sedated and non-sedated participants, respectively. Preliminary results, based on four participants (i.e., two sedated and two non-sedated), suggested lateralization in tasks performed with passive mobilization and provided accuracy comparable to previous studies involving motor tasks.

G. C. Florisbal (B) · J. Machado · A. Balbinot
Electro-Electronic Instrumentation Laboratory (IEE), Electric Engineering Department, Federal University of Rio Grande do Sul—UFRGS, Osvaldo Aranha Ave, 103, 206-D, Porto Alegre, RS, Brazil
e-mail: [email protected]

L. B. Bagesteiro
NeuroTech Lab, Kinesiology Department, San Francisco State University, San Francisco, CA 94132, USA

Keywords

EEG • Passive Mobilization • Event-Related Potential

1 Introduction

New processing methods and advances in the knowledge of brain functionality obtained with electroencephalography (EEG) and Brain-Computer Interfaces (BCI) have evolved toward more complex tasks in biomedical engineering and clinical applications [1]. It is now possible to control prostheses [2] and wheelchairs [3], assess consciousness levels [4] and support disease diagnostics [5] using EEG signals. Expected alterations in the EEG signal, such as the P300, the somatosensory evoked potential (SSEP) and event-related desynchronization (ERD)/event-related synchronization (ERS), can therefore be used to control or activate a device. The analysis of a specific sensory, cognitive or motor event allows the observation of an Event-Related Potential (ERP) in different parts of the brain [6]. Executing or imagining a hand movement causes ERD on the contralateral side before the movement and ERS on the ipsilateral side after execution [7]. These phenomena are observable in the α and β rhythms [1,6]. Here, ERD and ERS are analyzed to evaluate the difference between left- and right-hand movements during passive mobilization in both sedated and non-sedated participants. The definition of the relative energy used to identify ERD and ERS was presented in [8]. This study presents a preliminary analysis of EEG signals from ICU sedated patients and non-sedated volunteers during passive mobilization of both hands, used to compare the phenomena and classify different individuals and groups. Such analysis can be applied to observe motor responses in ICU-based patients during passive mobilization, enabling, in the future, communication and analysis of alterations in the state of consciousness.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_127

EEG signals were collected with an EPOC electrode cap by EMOTIV with 14 electrodes based on the 10-20 system [6]; this neuroheadset has already been used for ERD/ERS phenomena in [9]. The signal is pre-processed using digital and CSP filtering, and features are extracted in time and frequency: the RMS value, standard deviation, mean of the Welch periodogram and differential entropy are calculated. Finally, the features are classified into two classes by a Linear Discriminant Analysis (LDA) classifier.
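The relative-energy definition of ERD/ERS attributed to [8] above is not reproduced in the text; in its commonly cited form (with R the mean band power in the reference interval and A the band power during the event of interest) it reads:

```latex
% Relative band-power change as commonly stated after
% Pfurtscheller and Lopes da Silva [8]:
%   R = mean band power in the reference (pre-stimulus) interval,
%   A = band power during the event of interest.
\[
  \mathrm{ERD/ERS}\,(\%) = \frac{A - R}{R} \times 100
\]
% Negative values indicate ERD (power decrease);
% positive values indicate ERS (power increase).
```
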

2 Methodology

2.1 Experiment Format

Signal acquisition was performed so as to configure a synchronous BCI system; passive movements were therefore carried out by the physiotherapist at specific times. The first two seconds were defined as the pre-stimulus interval, the next two seconds as the movement interval performed by the physiotherapist, and the last three seconds as the post-stimulus period (time intervals presented in Table 1). EEG signals were collected throughout the experiment. Each trial consists of a randomly defined movement representing the flexion of the left or right arm. Each section consists of 10 trials, and n sections were performed per participant (10 sections for the two non-sedated participants and for 'Sedated 2', and 5 for 'Sedated 1'). Table 2 indicates the number of movements performed by each subject; the total number of trials is balanced between right and left movements. Signal acquisition was done using an Emotiv Epoc cap via an interface with Labview 13, running on a laptop with Windows 10, at a rate of 128 Hz. The time intervals and a random sequence of movements were generated in Labview. All data, channel-acquired signals and performed stimuli (value '1' for a left-arm stimulus and '2' for a right-arm stimulus) were saved in .lvm files. All remaining signal processing was done in Matlab 2012b running on Windows 10. Sedated patients admitted to the ICU had to comply with the following inclusion criteria:

• Adult patients of both genders;
• Patients over 18 years old;
• Patients using continuous sedoanalgesia with Richmond Agitation Sedation Scale (RASS) -3 to -5;
• Patients on invasive mechanical ventilation between 48 and 72 h.

For the control group, the inclusion criterion was adult volunteers without previous neuromuscular disorders. The procedure was performed to simulate the environment of an ICU; therefore, the volunteer kept their eyes closed while lying down with neck support. A physiotherapist performed the movements for both groups. All procedures performed in the acquisition of this dataset involving human participants followed the ethical standards of the institutional review board and the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. All procedures were approved by the institutional research committee under the Certificate of Presentation for Ethical Appreciation (number 11253312.8.0000.5347). Comparable trial stimulus formats can be found in [7,10–13].
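The trial timing of Table 1 (reference 0–2 s, stimulus 2–4 s, post-stimulus 4–7 s, sampled at 128 Hz) can be sketched as a simple epoching step. This is an illustrative sketch with synthetic data, not the authors' Labview/Matlab pipeline:

```python
import numpy as np

FS = 128            # Emotiv Epoc sampling rate (Hz)
TRIAL_S = 7         # trial length: 2 s reference + 2 s stimulus + 3 s post

def split_trial(trial):
    """Split one 7 s trial (n_channels x 7*FS samples) into the three
    intervals of Table 1: reference (0-2 s), stimulus (2-4 s) and
    post-stimulus (4-7 s)."""
    reference = trial[:, 0 * FS:2 * FS]
    stimulus = trial[:, 2 * FS:4 * FS]
    post = trial[:, 4 * FS:7 * FS]
    return reference, stimulus, post

# Hypothetical recording: 14 channels, one 7 s trial of noise.
trial = np.random.randn(14, TRIAL_S * FS)
ref, stim, post = split_trial(trial)
print(ref.shape, stim.shape, post.shape)  # (14, 256) (14, 256) (14, 384)
```
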

2.2 Emotiv Epoc Neuroheadset

An Emotiv Epoc neuroheadset was employed in the present study. This is a commercially available portable EEG cap based on the 10–20 system, with 14 signal electrodes and two reference electrodes and 14-bit (1 LSB) resolution. Positioning was based on the 75-electrode standard, and the acquisition rate was 128 Hz. The signal was digitally filtered from 0.2 to 45 Hz, notch filtered at 50 and 60 Hz, with a dynamic range of 8400 µV.

2.3 Preprocessing

First, the signal was filtered at the frequencies relative to the analyzed phenomena. A fourth-order digital Butterworth filter in the 8–30 Hz range was implemented, applying the filtfilt function. The filtfilt function does not contribute to phase

Table 1 Timing in a trial

Events              Reference   Stimulus   Post-stimulus
Interval time (s)   0–2         2–4        4–7

Table 2 Subjects and respective quantity of movements

Subject     Non-sedated 1   Non-sedated 2   Sedated 1   Sedated 2
Movements   100             100             50          100


changes in the data; it only attenuates the signal in amplitude. Second, the filter was centered on a frequency specific to each subject, selected by visual inspection of the lateralization index in the frequency domain using a Welch periodogram on channels FC5 and FC6 (channels next to the motor cortex). The time window was set based on the lateralization in these channels: a two-second window positioned between second 2 and second 5, i.e., between movement start and one second after movement end (see Table 3). Lateralization analysis in the frequency and time domains was implemented in [11,13]. The next step was to apply the CSP filter to all left- and right-hand trials on all electrodes. CSP filters are commonly used in EEG experiments to minimize the effect of unwanted signals captured from regions of the scalp that are not of interest; the filter is applied to maximize discrimination through a sample-space transformation, facilitating the differentiation between two classes. The filter was implemented as a Matlab function based on [14]. After filtering, the two CSP channels with the largest discrimination between classes were chosen and defined as CSP1 and CSP2. These two channels were used to compare results with the analysis of channels FC5 and FC6 without the CSP filter.
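This preprocessing chain (zero-phase band-pass filtering followed by CSP) can be sketched in a few lines. This is a hedged illustration on synthetic data: scipy stands in for Matlab's filtfilt, and the CSP step uses the common generalized-eigendecomposition formulation rather than the authors' exact function based on [14]:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt

FS = 128

def bandpass(x, lo=8.0, hi=30.0, order=4):
    """Zero-phase Butterworth band-pass, analogous to Matlab's filtfilt."""
    b, a = butter(order, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def csp(trials_left, trials_right):
    """CSP by joint diagonalization of the two class covariances.
    trials_*: lists of (n_channels x n_samples) arrays.
    Returns the spatial filter matrix W with rows sorted by eigenvalue,
    so the first and last rows are the most discriminative CSP channels."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)
    c1, c2 = mean_cov(trials_left), mean_cov(trials_right)
    eigvals, eigvecs = eigh(c1, c1 + c2)   # generalized eigenproblem
    idx = np.argsort(eigvals)[::-1]
    return eigvecs[:, idx].T

# Hypothetical data: 20 trials per class, 14 channels, 2 s windows.
rng = np.random.default_rng(0)
left = [rng.standard_normal((14, 2 * FS)) for _ in range(20)]
right = [rng.standard_normal((14, 2 * FS)) for _ in range(20)]
W = csp([bandpass(t) for t in left], [bandpass(t) for t in right])
csp1, csp2 = W[0] @ left[0], W[-1] @ left[0]   # two most discriminative channels
```

Projecting each trial through the first and last rows of W yields the CSP1 and CSP2 channels used in the comparison with FC5 and FC6.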

2.4 Feature Extraction

After preprocessing, features were extracted for use in the classifier. The signal classification features were based on previous work [15], which applied these features in a non-combined way and reported high hit rates in BCI systems: the RMS value, standard deviation, power spectral density (PSD) and differential entropy (DE). These features were calculated on the signal preprocessed with the digital filters and on the selected channels processed with the CSP filter. RMS was calculated for each trial using the rms function, and the standard deviation using the std statistics function. Standard deviation and RMS values are of interest for ERD/ERS detection, since these phenomena alter the amplitudes more sharply on one side of the brain. PSD was estimated using the Welch periodogram, a Fourier-transform-based method. It uses overlapping windows and is commonly applied to EEG signals because, although the EEG is not stationary in time, it can be considered stationary over short periods (i.e., 1–2 s); with windows overlapping up to 50%, the random error is decreased [16,17]. We applied the pwelch function available in Matlab, with a Hamming window and a one-sided frequency range [0, fs/2] (cycles/unit time), and took the average magnitude over frequency. The differential entropy method measures the complexity of a continuous variable with stochastic character. It is shown in [18] that the EEG signal can be related to a Gaussian distribution, for which DE can be approximated by the expression in (1). DE is used to extract characteristics that are harder to capture with statistical metrics such as mean and standard deviation.

DE = (1/2) log(2π e σ_i²)    (1)

2.5 Classification

An LDA classifier with 10-fold cross-validation was used for signal classification. Cross-validation was applied over the feature set, randomly selecting a subset for training, and the average test error over all folds was calculated. For each subject the classifier was trained 100 times, and the mean and standard deviation over all runs were obtained. An LDA classifier is commonly used for this type of EEG data and presents good results [19].
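The feature extraction of Sect. 2.4 and the LDA classification step can be sketched together as follows. This is an illustrative sketch on synthetic single-channel windows: scipy and scikit-learn stand in for the Matlab functions (rms, std, pwelch, classify), and the two classes are artificially separable by scale:

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 128

def extract_features(x):
    """Features named in Sect. 2.4 for one single-channel window `x`:
    RMS, standard deviation, mean of the Welch periodogram, and the
    Gaussian differential entropy of Eq. (1)."""
    rms = np.sqrt(np.mean(x ** 2))
    sd = np.std(x)
    _, psd = welch(x, fs=FS, window="hamming", nperseg=min(len(x), 128))
    de = 0.5 * np.log(2 * np.pi * np.e * np.var(x))
    return np.array([rms, sd, psd.mean(), de])

# Hypothetical feature matrix: 100 windows per class; class 1 has
# twice the amplitude, mimicking a lateralized power difference.
rng = np.random.default_rng(1)
X = np.vstack([
    np.array([extract_features(rng.standard_normal(256)) for _ in range(100)]),
    np.array([extract_features(2 * rng.standard_normal(256)) for _ in range(100)]),
])
y = np.array([0] * 100 + [1] * 100)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=10)   # 10-fold cross-validation
print(scores.mean())
```

Repeating the cross-validation over several random shufflings, as the authors do 100 times per subject, yields the mean and standard deviation reported in Table 4.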

3 Results and Discussion

3.1 Preprocessing and Signal Visualization

For the proposed experiment and dataset, when compared with other datasets using the same preprocessing and visualization [10,11,13], ERD and ERS were not so clearly observable in the α rhythms for all participants before CSP filtering. Fig. 1 shows the lateralization in channels (the Emotiv electrodes FC5 and FC6 and the CSP-generated channels CSP1 and CSP2) for each movement and subject. It is evident that the channels generated by the CSP filter better reveal the phenomena; the CSP filter therefore works well to maximize the difference between these two signal classes. When analyzing lateralization in the frequency and time domains, the CSP result shows evident occurrence of the phenomena. The electrode positioning in the Emotiv cap may be one reason for the difficulty in finding ERD/ERS characteristics in the FC5 and FC6 channels. Analyzing the signal after all preprocessing, it is possible to observe the phenomena in both groups (sedated and non-sedated subjects). Table 3 displays the mean and standard deviation of the central frequency and time window for each group, defined after visual inspection and signal analysis. Statistical analysis showed no significant difference between non-sedated and sedated participants.


Fig. 1 Mean relative energy over all trials for each movement using channels FC5, FC6, CSP1 (the CSP channel with the largest discrimination for the left class) and CSP2 (the CSP channel with the largest discrimination for the right class) for: a Non-Sedated Subject 1 (50 movements per hand), b Non-Sedated Subject 2 (50 movements per hand), c Sedated Subject 1 (25 movements per hand) and d Sedated Subject 2 (50 movements per hand). Movements start at second 2 and end at second 4

3.2 Classifier

Classification accuracy results are presented in Table 4. For each subject, the mean over all 100 runs using different random training groups was calculated. The method was tested both with and without frequency- and time-window selection. Sedated Subject 2 showed the best result, with 80.1±0.95% average accuracy using the selected frequency and time windows. All means were higher than 60% without the specific time and frequency windows, and higher than 71% with them. Comparing the two groups, all average accuracies were comparable to experiments employing motor imagery or real movements, and comparable to each other.

Visualization of ERD and ERS, as well as classification accuracy, improved when the signal was analyzed in the best time and frequency windows specified for each subject, suggesting that setting these parameters is necessary. The frequency and time windows were significant parameters for classifying movements across all subjects.

4 Conclusions

The proposed method, using the CSP filter, frequency- and time-window analysis, RMS, standard deviation, PSD and DE as features, and LDA as classifier, allowed ERD/ERS visual analysis and classification of passive movements in the studied groups. The signal analysis and classification results are comparable to previous research [11–13], more specifically when comparing accuracy rates with previous studies [7] using the Emotiv headset and motor imagery. There, the better results were around 85% for BCI Competition Dataset II (a motor imagery dataset with 280 trials and one volunteer) and 79% with two volunteers (a motor imagery experiment with 140 trials per volunteer), applying CSP filters to select two CSP channels and a Naive Bayes classifier. Given the characteristics of the signal and the difficulty of analysis and classification, this kind of dataset and method can be used in many other studies to improve the classification or visualization of the phenomena, for example monitoring EEG signals in ICU or surgery settings, and physiotherapy protocol analysis (i.e., passive mobilization in sedated and non-sedated patients).

Table 3 Mean frequency and time window defined for each group

Group         Initial frequency (Hz)   Final frequency (Hz)   Initial time (s)   Final time (s)
Non-sedated   11.5±2.12                16.5±2.12              2.25±0.35          4.25±0.35
Sedated       10.5±3.54                15.5±3.54              2.5±0.35           4.5±0.35

Table 4 Accuracy rate for each subject

Subject         Accuracy without frequency and   Accuracy with frequency and
                time windows selected (%)        time windows selected (%)
Non-sedated 1   67.7±1.1                         77.8±0.6
Non-sedated 2   60.5±1.0                         71.6±1.0
Sedated 1       64.9±2.7                         74.9±1.5
Sedated 2       69.5±1.6                         80.1±0.95

Acknowledgements We thank Alexandre Simões Dias (Hospital de Clínicas de Porto Alegre - HCPA), Luiz Alberto Forgiarini Junior and Rodrigo Noguera for help and assistance in ICU data acquisition, passive mobilization discussions and contact with hospitals.

References

1. Dornhege G (2007) Toward brain-computer interfacing. MIT, London
2. Muller-Putz GR, Pfurtscheller G (2007) Control of an electrical prosthesis with an SSVEP-based BCI. IEEE Trans Biomed Eng 55:361–364
3. Carra M (2012) Desenvolvimento de uma interface cérebro-computador baseada em ritmos sensório-motores para controle de dispositivos. Master's thesis, Universidade Federal do Rio Grande do Sul, Porto Alegre
4. Czyżewski A, Kurowski A, Odya P, Szczuko P (2020) Multifactor consciousness level assessment of participants with acquired brain injuries employing human-computer interfaces. Biomed Eng Online 19:1–26
5. Niedermeyer E, Silva FHL (2005) Electroencephalography: basic principles, clinical applications, and related fields. Lippincott Williams & Wilkins
6. Sanei S, Chambers J (2013) EEG signal processing. Wiley, New York
7. Nam C, Jeon Y, Kim YJ et al. Movement imagery-related lateralization of event-related (de)synchronization (ERD/ERS): motor-imagery duration effects. Clin Neurophysiol 122:567–577
8. Pfurtscheller G, Silva FH (1999) Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin Neurophysiol 110:1842–1857
9. Stock VN, Balbinot A (2016) Movement imagery classification in EMOTIV cap based system by Naïve Bayes. In: 2016 38th annual international conference of the IEEE engineering in medicine and biology society (EMBC). IEEE, 4435–4438
10. BCI Competition III at http://www.bbci.de/competition/ii/
11. Carra M (2012) Development of brain computer interface based on sensory motor rhythms for device control. Master's thesis, Federal University of Rio Grande do Sul, Porto Alegre
12. Doyle LMF, Yarrow K, Brown P (2005) Lateralization of event-related beta desynchronization in the EEG during pre-cued reaction time tasks. Clin Neurophysiol 116:1879–1888
13. Machado J, Balbinot A (2014) Executed movement using EEG signals through a Naive Bayes classifier. Micromachines 5:1082–1105
14. Blankertz B, Tomioka R, Lemm S et al (2007) Optimizing spatial filters for robust EEG single-trial analysis. IEEE Signal Process Mag 25:41–56
15. Alzahab NA, Alimam H, Alnahhas MHD et al (2019) Determining the optimal feature for two classes Motor-Imagery Brain-Computer Interface (L/R-MI-BCI) systems in different binary classifiers. Int J Mech Mechatron Eng IJMME-IJENS 19:132–150
16. Stoica P, Moses R (2005) Spectral analysis of signals
17. Welch P (1967) The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms. IEEE Trans Audio Electroacoust 15:70–73
18. Shi LC, Jiao YY, Lu BL (2013) Differential entropy feature for EEG-based vigilance estimation. In: 2013 35th annual international conference of the IEEE engineering in medicine and biology society (EMBC). IEEE, 6627–6630
19. Bashashati H, Ward R, Birch G et al (2015) Comparing different classifiers in sensory motor brain computer interfaces. PLoS ONE 10:e0129435

Use of Fluorescence in the Diagnosis of Oral Health in Adult Patients Admitted to the Intensive Care Unit of a Public Emergency Hospital M. D. L. P. Matos, E. N. Santos, M. D. C. R. M. Silva, A. Pavinatto, and M. M. Costa

Abstract

Patients admitted to Intensive Care Units (ICUs) may develop oral health problems, causing accumulation of dental biofilm, tongue coating and inflammation in periodontal tissues, considered microbial reservoirs of bacteria. Such problems are associated with hospital infections, in particular the increased risk of a respiratory infection known as mechanical ventilator-associated pneumonia (VAP). This specific oral health condition may be related to the lack of professional training in the ICUs, the absence of a participatory dental team in the clinical and educational context and the lack of specific and/or effective protocols. In this context, the present study aimed to verify the oral health condition of adult patients admitted to the intensive care unit of a public emergency hospital in Teresina-PI, as well as compliance with the recommendations of the standard operating procedure established by the Brazilian Association of Intensive Care Medicine (AMIB) regarding the use of the fluorescence technique for oral hygiene. The research was a descriptive, quantitative and observational study, and the research sample was drawn randomly among adult patients admitted to the ICU. Data collection occurred in two steps. The first involved a diagnosis of the presence of bacterial plaque in the oral cavity of patients hospitalized in the ICU using a fluorescence system, while the second consisted of a diagnosis by fluorescence made before and after oral hygiene with the application of the protocols recommended by AMIB. The results showed that the oral hygiene care performed on patients hospitalized in the ICU is deficient, with the absence of a standard hygiene protocol as well as of a professional dental surgeon in the multidisciplinary team of the unit. Therefore, it is important to implement an oral hygiene protocol, according to AMIB, to prevent oral diseases, possible infections and worsening of the patients' systemic condition.

M. D. L. P. Matos · E. N. Santos · A. Pavinatto · M. M. Costa
Department of Biomedical Engineering, Scientific and Technological Institute, University of Brazil, São Paulo, SP, Brazil
e-mail: [email protected]

M. D. C. R. M. Silva
Department of Public Health, Institute Oswaldo Cruz Foundation—Fiocruz, Rio de Janeiro, RJ, Brazil

Keywords

Oral health • Intensive care units (ICU) • Public emergency hospital • Dental surgeon • Fluorescence

1 Introduction

The condition of oral hygiene is closely related to the number and species of microorganisms present in the mouth [1], and a significant increase in the microbiota is observed in the absence of adequate oral hygiene measures. Many authors consider the oral cavity an important reservoir of respiratory pathogens, especially in patients under intensive care [2, 3]. Patients admitted to intensive care units (ICU) often present deficient conditions of oral hygiene, as they remain with their mouths open for a long time due to tracheal intubation, which results in mucosa dehydration and decreased salivary flow, allowing greater bacterial colonization and predisposition to periodontal diseases and other possible foci of infection [4]. The presence of oral bacteria associated with poor oral hygiene and periodontitis can trigger the onset and development of lung infections, since the oral cavity is anatomically close to the trachea, facilitating the entry of respiratory pathogens. The dental biofilm can be colonized by respiratory pathogenic microorganisms, which can be aspirated from the oropharynx to the upper airway and then to the lower airway and adhere to the pulmonary epithelium [5]. Patients with a precarious oral situation admitted to the ICUs are more likely to have unfavorable results as they are

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_128


more vulnerable to respiratory diseases, especially those under mechanical ventilation, since the cough reflex, sputum clearance and immune barriers are deficient [6, 7]. It is likely that in nosocomial (hospital) pneumonia the contamination of the airways by oral pathogens occurs by aspiration and/or inhalation of saliva containing oral bacteria, lipopolysaccharides and bacterial enzymes, even though in this pathology the main port of entry of bacteria into the lower respiratory tract is the aspiration of secretions from the oropharynx [8]. Oral hygiene methods in hospitals are usually not employed with the necessary knowledge and skill: it is possible to identify the lack or poor execution of some prophylaxis techniques, insufficient hygiene frequency and the use of materials/substances that do not offer the antimicrobial action necessary for the reduction of oral biofilm. Such facts affect the quality of hygiene, generating a greater possibility of infectious complications that may compromise the morbidity and mortality of patients hospitalized in the ICUs [9]. As interdisciplinary and multidisciplinary teams are already part of the daily routine of the ICUs, it is necessary to understand that oral health is not isolated from the general health of individuals [10]. The definition of oral hygiene protocols encompasses the treatment and control of oral pathologies in an agile and correct way, the reduction of morbidity and mortality, the improvement in quality of life and the reduction in the length of hospital stay [11]. The standard operating procedures (SOP) published by the Department of Dentistry and the Nursing Department of the Brazilian Association of Intensive Medicine (AMIB) serve as guidance for the person responsible for the oral hygiene of the critical patient and aim to standardize the routine procedures and materials/solutions used in critically ill patients [12].
In recent years, the use of modern methods that help the dental surgeon in the detection of oral diseases has increased, specifically for changes in the early stages related to both hard and soft tissues of the oral cavity. Among the existing techniques, we can highlight more complex methods, such as digital X-ray and tomography. However, simpler diagnostic methods, such as optical fluorescence imaging systems, have been gaining prominence in the health field, as they are considered tools that can be used in dentistry, helping dentists in the diagnosis of oral diseases [13]. Optical fluorescence is a physical phenomenon that occurs in certain molecules, called fluorophores. Fluorophores have the characteristic of absorbing light at certain wavelengths and re-emitting it at another wavelength. This absorption–emission process is always the same for each


fluorophore, thus representing an optical signature of this component [14]. A healthy tissue displays a characteristic natural fluorescence; however, when it undergoes a transformation, its natural fluorescence changes. Hence, during the development of a lesion, fluorophores have their concentration and distribution altered, or change chemically, resulting in modifications of tissue fluorescence [14–16]. The optical fluorescence spectroscopy technique is used to aid in the diagnosis of neoplastic lesions in tissues, the detection of dental caries and the diagnosis of the presence of plaque, among other applications [17]. In all of these applications, the method has shown sensitivity to differentiate tissue variations. Fluorescence imaging is another technique that adds to the capacity of tissue differentiation, representing an important tool for the diagnosis of lesions such as oral cancer [15]. Furthermore, it is a technique that is easy to perform, highly sensitive and fast in obtaining results [18]. Biological tissue has biochemical and structural compositions that influence its interaction with light, causing healthy tissue and neoplastic lesions to show different optical characteristics. Cellular and tissue changes resulting from malignant development modify these optical phenomena, making the monitoring of fluorescence, reflectance and absorbance an important diagnostic tool [19]. In dentistry, fluorescence imaging was developed to assist the dental surgeon in the diagnosis of pathologies in the hard and soft tissues of the oral cavity, such as incipient caries, the presence of bacterial plaque, demineralization of the dental enamel and neoplasms, among others [4, 20]. In this context, fluorescence can be used by dental surgeons during routine procedures, as it enables the detection of possible oral changes through the visual contrast of altered tissues that would not be identified under conventional lighting.
In addition to the diagnosis of premalignant and malignant lesions, this technique not only allows the dentist to more accurately detect bacterial plaque, tartar, incipient lesions, demineralization of dental enamel, microcracks and marginal infiltrations, but also contributes to the distinction of aesthetic restorative materials, such as composite resin and ceramic [21]. Therefore, the objective of this work is to verify, using the fluorescence imaging technique, the oral health conditions and the effectiveness of oral hygiene procedures performed on adult patients admitted to the intensive care unit of a public emergency hospital in Teresina-PI. It is worth mentioning that this research shows for the first time the use of fluorescence for the diagnosis of oral health in patients admitted to ICUs. Here, we take advantage of the accuracy of this technique to verify the efficiency of the prophylaxis method currently used in that hospital.

Use of Fluorescence in the Diagnosis of Oral Health in Adult …

2 Materials and Methods

2.1 Study Location

The study took place in the city of Teresina, capital of the State of Piauí, Brazil, with an estimated 2020 population of 864,000 inhabitants. The specific study location was the Hospital de Urgência de Teresina Zenon Rocha, which offers urgent and emergency care of medium and high complexity 24 h a day. The research was carried out after authorization by the hospital and approval by the Research Ethics Committee of the University of Brazil (REC opinion No. 15663119.3.0000.5494).

2.2 Study Design

This research was a descriptive, quantitative and observational study carried out in two steps. In the first, conventional (white-light) images of the patients' mouths (teeth and tongue) were captured, and a fluorescence system was used, before and after performing oral hygiene in patients admitted to the ICU following the current hospital procedure. In the second, the same conventional (white-light) images (teeth) were captured and the same fluorescence system was used, before and after the application of the oral hygiene protocol recommended by AMIB.

2.3 Study Population and Data Collection Procedures

The study population consisted of 10 adult patients admitted to the ICU of the Hospital de Urgências de Teresina. The data collection procedure was performed in 2 steps, which are described below. (1) Acquisition of conventional and fluorescence images before and after oral hygiene following the current hospital procedure. The first step involved the detection of the presence of bacterial plaque in the oral cavity of patients hospitalized in the ICU through image formation with the aid of a fluorescence system. The photographs were taken of patients whose family members or guardians authorized the research by signing the Informed Consent Form (ICF). The system used to capture the images of the patients' mouths is composed of a fluorescence imaging prototype coupled to a SONY H50 digital photographic camera with 16.2 megapixels. The prototype used for the fluorescence imaging diagnosis was developed in partnership between the


University of Brazil (Laboratory of Biomedical Instrumentation) and the company Biopdi (São Carlos, Brazil). The equipment consists of a light source with emission centered at 405 nm and an intensity of approximately 50 mW/cm2, and an array of optical filters optimized for viewing fluorescence in the green (500–565 nm) and red (625–740 nm) emission regions. The images were obtained using the camera's 6× optical zoom at a fixed distance of 10 cm between the camera and the patient's mouth, keeping the image in best focus. The distance was defined using a ruler and maintained by the researcher in charge of capturing the photographs. To better image the oral tissues, the camera settings were adjusted manually, following the ISO 160 parameter for teeth and the ISO 320 parameter for tongue, gingiva and oral mucosa; no flash was used. To capture the photographs, ultraviolet (UV) light was directed at the oral tissue, where the fluorophores emit visible light through fluorescence after energy absorption [18]. (2) Acquisition of fluorescence images before and after oral hygiene with the application of the SOP (standard operating procedure) recommended by AMIB. In the second stage, the researcher (who is a dentist) randomly selected a patient from the ICU and performed the oral hygiene procedures according to the protocol developed by AMIB, as follows:
• The patient's headboard was positioned at 45 degrees before the oral hygiene to prevent aspiration of the solution used in the procedure;
• The mechanical control of the dental biofilm was carried out with a manual toothbrush soaked in a 0.12% chlorhexidine solution for chemical control. The patient's oral hygiene was then performed with a sweeping movement from the gingiva towards the edge of the tooth;
• After aspiration, gauze humidified with 0.12% chlorhexidine was applied to the patient's oral cavity (teeth, tongue, mucosa and fixed prostheses or orthodontic appliances, among other devices present), always from the posterior region towards the anterior one;
• Images of the patient's mouth/teeth were captured using the fluorescence prototype.

2.4 Analysis of Results

The acquired fluorescence images were processed using a fluorescence quantification algorithm written in Matlab® 7.5 (The MathWorks, USA). The algorithm separates the RGB


matrices (Red, Green, Blue) from the fluorescence image by isolating the R matrix (matrix containing only red data); after separation, the algorithm averages the red pixels contained in the matrix. This value is then normalized by the first image, that is, before the procedure, and compared with the second one, i.e., after the procedure. With these values, it was possible to quantify whether the procedure performed resulted in an increase or decrease of the fluorescence intensity, indicating whether or not there was a change/disorganization of the bacterial biofilm, suggesting how effective the hygiene procedure was.
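The quantification described above can be sketched as follows. This is a minimal pure-Python analogue of the Matlab routine (the function names are ours, not from the paper; an image is represented here as rows of (R, G, B) tuples):

```python
def red_channel_mean(rgb_image):
    # Isolate the R matrix (red channel) and average its pixel values.
    # rgb_image: list of rows, each row a list of (r, g, b) tuples.
    reds = [pixel[0] for row in rgb_image for pixel in row]
    return sum(reds) / len(reds)

def normalized_red_intensity(before_img, after_img):
    # Red fluorescence of the post-procedure image normalized by the
    # pre-procedure one. A ratio well below 1.0 suggests disorganization
    # of the bacterial biofilm, i.e., an effective hygiene procedure.
    return red_channel_mean(after_img) / red_channel_mean(before_img)
```

A ratio near 1.0 would indicate that the red (protoporphyrin IX) fluorescence, and hence the biofilm, was essentially unchanged by the procedure.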

2.5 Ethical Considerations

The project complied with Resolution 466/12 of the National Health Council, which deals with ethical issues in research involving human beings.

3 Results

Figure 1 shows the images taken with and without the use of fluorescence before and after performing oral hygiene in three different patients. The images show that practically the same amount (area) covered by biofilm (red region) can be seen on the teeth and tongue before and after the cleaning procedure, suggesting that the oral cleaning performed by the professionals is deficient. In Fig. 2, it is possible to observe the fluorescence images of patient 2's anterior and lower teeth. Figures 2a and b (I) are the fluorescence images before and after the cleaning procedure, and Figs. 2a and b (II) are the same images after the image processing performed in the Matlab software, where only the R matrix of the fluorescence image is shown, highlighting the presence of the biofilm, i.e., the fluorescence of protoporphyrin IX produced by bacteria. Figure 2c shows the graph of normalized fluorescence intensity before and after the current hygiene procedure performed at the hospital in all 9 patients who participated in the first step of the research. The error bars indicate the standard deviation calculated for the measurements. Figure 3a and b display fluorescence images taken before and after the cleaning procedure recommended by AMIB, (I) without and (II) with the image processing performed in the Matlab software, where only the R matrix of the fluorescence image is shown, highlighting the biofilm fluorescence. Figure 3c shows the normalized fluorescence intensity quantification graph before and after the cleaning procedure recommended by AMIB. The error bars indicate the standard deviation calculated for the measurements.

Fig. 1 Photographs taken of a patient 1's lateral and upper teeth, b patient 2's anterior and lower teeth, and c the back of patient 3's tongue. For all patients, the photos were taken I with normal light, II with the fluorescence equipment before the hygiene procedure, and III with the fluorescence equipment after the hygiene procedure

4 Discussion

The fluorescence technique proved to be efficient in the study of the oral cavity of patients admitted to the ICU of a public emergency hospital in Teresina-PI, especially in relation to the presence of biofilm (bacterial plaque) before and after the patients’ oral hygiene.


Fig. 2 Fluorescence images of patient 2’s anterior and lower teeth a before cleaning and b after cleaning, I without and II with color decomposition processing performed by the Matlab software. c Fluorescence intensity before and after performing the current cleaning procedure obtained through fluorescence images

From Figs. 1, 2a and b and 3a and b, we could observe healthy teeth in fluorescent green, mainly due to hydroxyapatite fluorescence, and teeth with the presence of biofilm (bacterial plaque) in fluorescent red, mostly associated with the fluorescence of protoporphyrin IX produced by bacteria [18]. In soft tissues such as the tongue, the presence of biofilm was also confirmed, according to the reddish areas presented in the fluorescence images. In Fig. 2c, it is possible to observe that no significant variation in the fluorescence intensity was detected in any patient, suggesting that the current protocol used as oral cleaning procedure was not effective. Such result indicates that the biofilm remains organized after the hygiene procedure. On the other hand, the images displayed in Fig. 3a and b indicate the occurrence of a significant reduction in the fluorescence intensity after the cleaning procedure recommended by AMIB, suggesting disorganization of the biofilm both on the teeth and on the surface of the orthodontic appliance used by patient 10. As we can see from the graph in Fig. 3c, there was a reduction in the fluorescence intensity associated with the biofilm, calculated at approximately 60%. From the results regarding the oral health of patients admitted to the ICU obtained from the fluorescence system, we could note the presence of biofilm (bacterial plaque) on both the teeth and the tongue of such patients before and even after cleaning. Visually, the area affected by the biofilm (red fluorescent region) appears to have the same size, implying poor oral hygiene and little or no change in the organization of the biofilm. Such fact was confirmed by standard deviation measurements. In fact, the quantification of the fluorescence intensity before and after the current cleaning procedure confirmed that no reduction was observed in any patient. This result corroborates the study by Araújo et al. [20], who reported the efficiency of optical


fluorescence to visualize the presence of biofilm on the teeth and at the back of the tongue of patients. Therefore, the great relevance of this technique in dentistry is related to the fact that it is a complementary method to the conventional clinical examination performed in the dental office, helping in the diagnosis of lesions of the oral cavity, either in hard or soft tissues [21]. Regarding the application of the oral cleaning procedure in patients hospitalized in the ICU following the protocol by AMIB, the images in Fig. 3a and b indicate a visual reduction of the dental plaque. The quantification calculations in Fig. 3c confirm that the reduction of the fluorescence intensity was approximately 60% between before and after the cleaning procedure. Such decrease is due to the disorganization of the oral biofilm caused by the hygiene/cleaning process, suggesting a lower chance of the patient's condition evolving to VAP. In the study developed by Nasiriani et al. [22], a significant reduction in the incidence of VAP was also observed when mechanical and chemical removal of dental biofilm was performed twice a day, using a children's toothbrush for teeth and tongue cleaning and distilled water, associated with the application of 12 mL of chlorhexidine every 12 h.

Fig. 3 Fluorescence images of patient 10's anterior teeth a before and b after cleaning, following the AMIB protocol, I without and II with color decomposition processing performed by Matlab. The arrows highlight the decrease in fluorescence intensity, suggesting biofilm disorganization on the dental surface through comparison of both images. c Fluorescence intensity before and after the cleaning procedure recommended by AMIB, obtained through fluorescence images

5 Conclusion

It is concluded that the use of the fluorescence system was effective in the diagnosis of the oral biofilm of the studied patients. Based on the results, the oral cleaning/hygiene procedure recommended by AMIB proved to be more effective in removing bacteria when compared to the current procedure adopted by the hospital.


Therefore, it is important to implement a standard oral hygiene protocol, such as that recommended by AMIB, to prevent oral diseases, and consequently possible infections and worsening of the patient's systemic condition.

Acknowledgements This study was partly financed by the Coordination for the Improvement of Higher Education Personnel—CAPES (Finance Code 001) and the São Paulo Research Foundation—FAPESP (Grants No. 17/19470-8 and 18/22628-5). E. N. Santos wishes to thank FAPESP for the scholarship (Grant No. 19/04279-6). The authors would also like to thank the company Biopdi (www.biopdi.com) for the technical help.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Lindhe J, Karring T, Lang NP (2005) Tratado de periodontia clínica e implantodonta oral. Guanabara Koogan, Rio de Janeiro
2. Gomes FIS, Passos JS, Seixas DACS (2010) Respiratory disease and the role of oral bacteria. J Oral Microbiol 2:5811–5817. https://doi.org/10.3402/jom.v2i0.5811
3. Ewan V, Perry JD, Mawson T et al (2010) Detecting potential respiratory pathogens in the mouths of older people in hospital. Age Ageing 39:122–1225. https://doi.org/10.1093/ageing/afp166
4. Santos PSS, Mello WR, Wakim RCS et al (2008) Uso de Solução Bucal com Sistema Enzimático em Pacientes Totalmente Dependentes de Cuidados em Unidade de Terapia Intensiva. Rev Bras Ter Intensiva 20:154–159
5. Igari K, Kudo T, Toyofuku T et al (2014) Association between periodontitis and the development of systemic diseases. Oral Biol Dent 2:1–7. https://doi.org/10.7243/2053-5775-2-4
6. Gomes SF, Esteves MCL (2012) Atuação do cirurgião-dentista na UTI: um novo paradigma. Rev bras odontol 69:67–72
7. Laurence B, Mould-Millman NK, Scannapieco FA et al (2015) Hospital admissions for pneumonia more likely with concomitant dental infections. Clin Oral Investigations 19:1261–1268. https://doi.org/10.1007/s00784-014-1342-y
8. Santi SS, Santos RB (2016) A Prevalência da pneumonia nasocomial e sua relação com a doença periodontal: revisão de literatura. RFO 21:260–266. https://doi.org/10.5335/rfo.v21i2.5799
9. Caldeira PM (2011) Higiene oral de pacientes em entubação orotraqueal em uma Unidade de Terapia Intensiva. Rev Enferm Integrada 4:731–741
10. Douglass CW (2006) Declaração de Consenso sobre Saúde Bucal e Sistêmica. Inside Dentistry 2:12
11. Pasetti LA, Leão MTC, Araki LT et al (2013) Odontologia hospitalar a importância do cirurgião dentista na unidade de terapia intensiva. Rev Odontologia (ATO) 13:211–226
12. Associação de Medicina Intensiva Brasileira—AMIB (2019) Departamento de Odontologia e Departamento de Enfermagem. Recomendações de higiene bucal (hb) em pacientes internados em UTI adulto ou pediátrica. Available at: https://www.amib.org.br/noticia/nid/procedimento-operacional-padrao-pop-de-higiene-bucal-amib-2019/
13. Carvalho MT, Fernandes IQ, Pizelli HE et al (2012) Tecnologias emergentes para laserterapia, terapia fotodinâmica e fotodiagnósticos aplicados à Odontologia. Rev ImplantNews 9:68–74
14. Pratavieira S, Andrade C, Cosci A et al (2012) Diagnóstico óptico em Odontologia. Rev ImplantNews 9:20–23
15. Roblyer DM, Richards-Kortum RR, Sokolov K et al (2008) Multispectral optical imaging device for in vivo detection of oral neoplasia. J Biomed Opt 13:024019. https://doi.org/10.1117/1.2904658
16. Betz CS, Mahlmann M, Rick K et al (1999) Autofluorescence imaging and spectroscopy of normal and malignant mucosa in patients with head and neck cancer. Laser Surg Med 25:323–334. https://doi.org/10.1002/(sici)1096-9101
17. Meller C, Heyduck C, Tranaeus S et al (2006) A new in vivo method for measuring caries activity using quantitative light-induced fluorescence. Caries Res 40:90–96. https://doi.org/10.1159/000091053
18. Costa MM, Andrade CT, Inada NM et al (2010) Desenvolvimento e aplicação de equipamento para diagnóstico por fluorescência. J Bras Laser 2:8–12
19. Inada N, Costa MM, Guimarães OCC et al (2012) Photodiagnosis and treatment of condyloma acuminatum using 5-aminolevulinic acid and homemade devices. Photodiag Photodyn Ther 1:1–9. https://doi.org/10.1016/j.pdpdt.2011.09.001
20. Araújo GS, Costa MM, Pereira LPC et al (2011) Diagnóstico bucal pelo sistema de imagem por fluorescência óptica. Dental Sci 5:46–52
21. Ricci HA, Pratavieira S, Júnior AB et al (2013) Ampliando a visão bucal com fluorescência óptica. Rev Assoc Paul Cir Dent 67:129–135
22. Nasiriani K, Torki F, Jarahzadeh MH et al (2016) The effect of brushing with a soft toothbrush and distilled water on the incidence of ventilator-associated pneumonia in the intensive care unit. Tanaffos 15:101–107

A Non-invasive Photoemitter for Healing Skin Wounds

F. J. Santos, D. G. Gomes, J. P. V. Madeiro, A. C. Magalhães and M. Sousa

Abstract

Photobiomodulation is the emission of low intensity light to locally induce cells to equilibrium. One of its possible applications is for skin wound healing. However, any such solution must be non-contact, since contact can cause pain and make the treatment unfeasible. Here we propose a device capable of emitting red light through LEDs to treat this type of injury in a non-contact way. In addition to assessing the optimal irradiation distance and estimating a maximum area of homogeneous coverage, we compared seven different configurations to assess the temperature stability during the treatment application. We tried two values of power density for the emission (50 mW/cm2 and 60 mW/cm2) and two distances (5 and 7 cm) between the LED boards and a temperature sensor, over a 10-min period. As a result of the optimal distance experiment, the 5 cm distance was used in the following two experiments; at this distance it is possible to carry out applications covering a homogeneous area of up to 323.95 cm2. In the temperature fluctuation experiment, we found that, although some configurations are less stable than others, none of them compromises the efficiency of the treatment.

Keywords

Photobiomodulation therapy • LED • Skin • Wounds

F. J. Santos (B) · D. G. Gomes Programa de Pós-Graduação em Engenharia de Teleinformática, Centro de Tecnologia, Universidade Federal do Ceará (UFC), Fortaleza-CE, Brazil e-mail: [email protected] J. P. V. Madeiro Departamento de Computação, Centro de Ciências, Universidade Federal do Ceará (UFC), Fortaleza-CE, Brazil A. C. Magalhães Instituto de Ensino e Pesquisa (INSPER), São Paulo-SP, Brazil A. C. Magalhães · M. Sousa Tergos Pesquisa e Ensino LTDA, São Paulo-SP, Brazil

1 Introduction

Cutaneous wound healing is a complex process that involves the interaction of several factors, which work together to restore the injured skin. When the healing of an injury does not follow the expected recovery pattern, it can result in a chronic wound, which represents an overload for both the patient and the medical system [1]. In the United States, the annual cost for the medical system to treat chronic wounds is about $25 billion [2]. Also in the United States, approximately 6.5 million patients are affected, a number that grows due to the increase of chronic diseases such as diabetes, which can affect the wound healing process [1]. Therefore, when analyzing economic and clinical factors, we realize the importance of new alternatives for the treatment of wounds. Among the alternatives, there is Photobiomodulation therapy (PBMT), also known as low-level laser therapy (LLLT), which uses light at a certain wavelength and power density to stimulate cells to homeostasis [3]. This same technique has already been used to treat, for example, chronic pain [4]. PBMT is a simple, efficient and cost-effective method to treat acute and chronic pain in a non-invasive way [4]. Based on local light irradiation, PBMT can be used to (i) stimulate healing [5,6], (ii) increase tissue regeneration [6] and (iii) reduce pain and inflammation [7], without known side effects. PBMT can be performed through LASER (light amplification by stimulated emission of radiation) or using LEDs (light emitting diodes) [8]. Among the products used in PBMT, the Light Aid (commercial name) console from Bright Photomedicine is a device currently used for the treatment of chronic pain [9]. Light Aid

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_129


F. J. Santos et al.

connects to transducers in a flexible wrap, which are positioned (in contact with the skin) on the region to be treated [9]. This console controls several parameters of the treatment and can connect to other light-emitting devices for the use of PBMT. Here we propose a prototype of a device capable of emitting red light (620–630 nm) at different power densities, portable, stable, able to cover a wide range of wounds and requiring no contact with the skin. This device must connect to Light Aid; thus, we can list a set of requirements for the development of a photoemitter applied to PBMT for chronic wound healing:
• Radiate light with the greatest possible coverage of the region;
• Radiate light without spatial gaps (homogeneous radiation);
• No skin contact;
• No significant temperature increase or temperature fluctuations that could impact treatment efficiency;
• No need for sterilization, but disinfection should be possible.

2 Material and Methods

A model that could meet the initial requirements was designed, as shown in Fig. 1. This model features three couplers, one for each LED board: one central and two on side flaps. These side flaps are movable and can be adjusted at angles from 90° to 180° in relation to the central coupler.

2.1 LEDs

One crucial characteristic of our proposal is the behavior of the LEDs. LEDs emit light with wavelengths ranging between 405 nm (blue) and 940 nm (infrared) [8]. The photostimulation produced by LEDs acts on cell permeability, on the stimulation of mitochondria, on ATP (adenosine triphosphate) synthesis and on proteins such as collagen and elastin, in addition to acting as an antimicrobial and anti-inflammatory agent, depending on the wavelength [8]. Table 1 shows the efficiency of some wavelengths (experiments carried out on rats). The emitter has three red LED boards, with 100 high-intensity LEDs each, totaling 300 LEDs. The equipment can operate with 100, 200 or 300 LEDs at a time, as shown in Fig. 2.

2.2 Embedded System

To guarantee the efficiency and safety of the treatment, it is necessary that the boards are positioned at an optimal range

Fig. 1 Holder design for three LED boards

of distance. This distance cannot be so large as to reduce the efficiency of irradiation, nor so small as to cause discomfort (increasing the temperature in the vicinity of the boards) and changes in power density, which can also result in unwanted effects. As it is an extension of an existing product (Light Aid®), the light emitter needs to adapt to this device. For this purpose, an Arduino Pro Mini was chosen as the prototyping platform, facilitating interaction with the hardware present in Light Aid® (based on the Arduino platform) [10]. Here, the Arduino Pro Mini controls 3 infrared (IR) distance sensors (model SHARP GP2Y0AF15) and 3 red LEDs. Each board has an IR distance sensor and a red alert LED; the purpose of the sensors is to monitor the distance between the 3 boards and the region to be irradiated: when the measured distance is less than the optimal distance, the Arduino Pro Mini activates the respective LED, indicating that the board is closer than ideal.
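The per-board alert logic can be sketched as below. This is a hypothetical Python analogue of the firmware loop (the 5 cm threshold is an assumption based on the optimal-distance experiment; names are ours):

```python
OPTIMAL_DISTANCE_CM = 5.0  # assumed threshold, from the optimal-distance experiment

def alert_leds(distances_cm, threshold=OPTIMAL_DISTANCE_CM):
    # One alert LED per board: lit (True) when that board is
    # closer to the skin than the optimal distance
    return [d < threshold for d in distances_cm]
```

For example, `alert_leds([4.0, 5.5, 6.0])` would light only the first board's alert LED.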

2.3 Experiments

2.3.1 Optimal Distance

In general, phototherapy-based treatments use exposure of biological tissue to low light intensity (power densities ranging from 1 to 100 mW/cm2) for a few minutes [3]. Therefore,


Table 1 Efficiency in wavelength range applications (λ) [8]

λ            Color   Efficiency
515–525 nm   Green   Radiate fibroblasts in circumstances of hyperglycemia
620–630 nm   Red     Skin healing and rejuvenation
450 nm       Blue    Stimulating bile flow and excretion of bilirubin in neonates

Fig. 2 Current version of the prototype: (a) right, center and left LED boards embedded in the holders of Fig. 1; (b) power off; (c) power on

based on the technical specifications of the Light Aid device hardware, a range of power density was estimated for the performance of the LEDs. As the current Light Aid console provides a maximum current of 100 mA to the LED boards, their maximum operating power density is limited to 60 mW/cm2, varying between 10 and 60 mW/cm2 according to factors that influence the difference in absorption in human skin tissues, enabling a customized dosimetry. This power density is important to determine an optimal distance, so that the region to be treated is irradiated with the recommended power density at each dosage. To define an ideal application distance, we programmed the Light Aid for a power density of 60 mW/cm2 and used a power meter attached to a robotic arm, which approached the boards in steps of 5 mm. The power read on the meter was recorded and the test was repeated until the value of 60 mW/cm2 was reached.
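The procedure amounts to a linear search over distance. A sketch under the assumption of a generic measurement callback (all names and the mock calibration below are hypothetical, not from the paper):

```python
def find_optimal_distance(measure_mw_cm2, start_cm=20.0, step_cm=0.5, target=60.0):
    # Move the power meter toward the boards in 5 mm (0.5 cm) steps
    # until the target power density is read; returns the distance in cm.
    d = start_cm
    while d > 0:
        if measure_mw_cm2(d) >= target:
            return d
        d -= step_cm
    return None  # target never reached within the travel range
```

With a mock meter calibrated so that 60 mW/cm2 is read at 5 cm (e.g. an inverse-square falloff), the search returns 5.0 cm, matching the distance adopted in the paper's later experiments.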

2.3.2 Irradiation Area

As the device is intended to be versatile, it is of great importance that the irradiation area covers all, or a good part, of the wound region. Light Aid can be used by activating one, two or three LED boards simultaneously, so its area can vary to treat from a small wound to a larger one. Thus, it is necessary to assess whether the power radiated throughout the illuminated region is the same, because if the power decreases, the effectiveness of the treatment may be compromised. To evaluate the power of the irradiated area, the following experiment was performed. We positioned the three boards at 180° from each other, directed them to a surface where a lux meter was placed at a distance of 5 cm from the LED boards, and configured the console for a power density of 60 mW/cm2. We then started the application and moved the lux meter manually, in the vertical and horizontal directions (in steps of 0.5 cm), towards the ends of the boards, defined as the points where the luminous intensity started to decline. By defining the extremities, it was possible to estimate the application coverage area. The same experiment was done using one, two and three boards.

2.3.3 Temperature

One of the factors requiring attention in the use of LEDs for PBMT is the temperature of the light irradiation region. Small increases in temperature can cause discomfort and ineffectiveness of the treatment; furthermore, they may cause an effect contrary to the desired one. Because of that, it is necessary to evaluate whether the device raises the temperature to the point of impairing the treatment. The temperature must show the minimum possible elevation in the region where the light irradiates. The measurements were made in order to analyze the performance of the boards both individually and combined. For this, we used a temperature sensor (LM35, Texas Instruments) positioned according to the board or formation of boards under analysis. As the three boards can act combined or individually, we carried out measurements with all operating options, in two formations: (i) Open, with the three boards forming an angle of 180° with each other, and (ii) Closed, with the side boards inclined at an angle of approximately 135° to the central board, as shown in Fig. 3.


T(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} x(t)\, \psi^{*}\!\left(\frac{t - b}{a}\right) dt, \qquad (1)

(a) Open Format   (b) Closed Format

where ψ*(t) is the complex conjugate of the mother wavelet ψ(t), which is shifted by a time b and dilated or contracted by a factor a prior to computing its correlation with the signal x(t). The discrete wavelet transform (DWT) consists of choosing scales and positions based on powers of two, known as dyadic scales and positions, which provides efficiency and accuracy for denoising and for enhancing specific physiological content. The DWT employs a dyadic grid (integer powers of two in the scaling of a and b) and orthonormal wavelet basis functions, exhibiting zero redundancy [11]. An intuitive way to sample the parameters a and b is to use a logarithmic discretization of the factor a and link this to the size of the steps taken between b locations. This kind of discretization has the form

\psi_{m,n}(t) = \frac{1}{\sqrt{a_0^m}}\, \psi\!\left(\frac{t - n b_0 a_0^m}{a_0^m}\right), \qquad (2)

Fig. 3 Schematic for boards formation

For the Open formation, the distances between the sensor and the center of each board were 5 and 7 cm, to analyze the effect of a variation in distance between the boards and the irradiation region, taking into account that the patient may move slightly during therapy. Still in the Open formation, data from the boards operating both individually and combined were analyzed, to verify the influence of one board on the others. For the Closed formation, the intention was to verify the temperature variation with the three boards operating close together, simulating an application in a curved region, such as around an arm. In this formation, the sensor was positioned 5 and 7 cm from the center of the central board. All tests lasted 11 min and 10 s, as shown in Fig. 3. The first 10 s were used to take the initial temperature as a reference. After that, the light treatment was applied for 10 min. Finally, the temperature was measured for 1 more minute, to analyze the cooling process after the application.
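The test timeline (10 s baseline, 10 min application, 1 min cooling) maps naturally onto a segmentation of the recorded temperature series. A minimal sketch assuming a known sampling rate (function and variable names are ours):

```python
def segment_temperature(series, fs_hz=1.0):
    # Split an 11 min 10 s recording into its three phases:
    # 10 s baseline, 10 min (600 s) treatment, 1 min (60 s) cooling.
    b = int(10 * fs_hz)        # end of baseline
    t = b + int(600 * fs_hz)   # end of treatment
    c = t + int(60 * fs_hz)    # end of cooling
    return series[:b], series[b:t], series[t:c]
```

At 1 Hz this yields segments of 10, 600 and 60 samples, covering the full 670 s recording.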

2.3.4 Extracting Features from Collected Time Series: Wavelet Analysis

Aiming to analyze the temperature fluctuations within the collected time series, we applied the wavelet transform in order to quantify the contributions of specific frequency bands. The continuous wavelet transform (CWT) of a continuous-time signal x(t) is defined in Eq. (1),

where the integers m and n control the wavelet dilation and translation, respectively; a_0 is a specified fixed dilation step parameter, and b_0 is the location parameter. A common choice is a_0 = 2 and b_0 = 1 [11]. Substituting a_0 = 2 and b_0 = 1 into Eq. (2), the dyadic grid wavelet can be written as

\psi_{m,n}(t) = 2^{-m/2}\, \psi(2^{-m} t - n). \qquad (3)

Using the dyadic grid wavelet of Eq. (3), the discrete wavelet transform (DWT) is given by

T_{m,n} = \int_{-\infty}^{\infty} x(t)\, \psi_{m,n}(t)\, dt, \qquad (4)

where T_m,n is known as the detail coefficient at scale index m and location index n. The authors in [12] developed an efficient and reliable algorithm to compute DWT decompositions using consecutive filters and decimators, and also the inverse DWT, i.e. the reconstruction of modified wavelet coefficients, using consecutive filters and upsamplers, as illustrated in Fig. 4. At the different stages, the high-pass filters (H(z)) and low-pass filters (G(z)) determine the corresponding detail d_k[n] and approximation a_k[n] coefficients, respectively [13]. Using a filter bank to decompose a signal allows one to selectively examine or modify its content within chosen bands for compression, filtering, or signal classification, and then to reconstruct the signal from the filtered and/or enhanced wavelet coefficients.
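As an illustration of the filter-bank scheme of Fig. 4, the sketch below implements one analysis/synthesis stage in Python. It uses the Haar (db1) filter pair instead of the db10 pair employed later in the study (an assumption made only to keep the filters two taps long); the split-decimate and upsample-merge structure is the same.

```python
import math

# Haar (db1) filter normalization constant; keeps the transform orthonormal.
S = 1 / math.sqrt(2)

def analyze(x):
    """One decomposition stage: low-pass/high-pass filtering, then decimation by 2."""
    approx = [S * (x[2*i] + x[2*i + 1]) for i in range(len(x) // 2)]
    detail = [S * (x[2*i] - x[2*i + 1]) for i in range(len(x) // 2)]
    return approx, detail

def synthesize(approx, detail):
    """One reconstruction stage: upsample and apply the synthesis filters."""
    x = []
    for a, d in zip(approx, detail):
        x.append(S * (a + d))
        x.append(S * (a - d))
    return x

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a1, d1 = analyze(signal)              # coarse trend and local differences
reconstructed = synthesize(a1, d1)    # recovers the input up to rounding
```

Because the filters are orthonormal and no coefficient is modified, the synthesis stage reproduces the input exactly, which is the zero-redundancy property cited from [11].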

129 A Non-invasive Photoemitter for Healing Skin Wounds


Fig. 4 A synthesized schema for wavelet decomposition and reconstruction through filter banks

Table 2 Coverage area by number of boards

Experiment    1 Board   2 Boards   3 Boards
Height (cm)   9.5       9.5        9.5
Width (cm)    9.5       21.9       34.1
Area (cm2)    90.25     208.05     323.95

In general, the overall waveform of a signal is primarily contained in the approximation coefficients, while short-term transients and high-frequency activity are contained in the detail coefficients. Therefore, depending on the sampling frequency of the original signal, as we eliminate the detail coefficients at the various stages and reconstruct the signal using only approximation coefficients, we recover the major morphological component and progressively isolate lower and lower frequency components. Considering the Daubechies family of mother wavelets, denoted dbk, where k refers to the number of vanishing moments of the low-pass and high-pass filters, we applied the decomposition low-pass and high-pass filters of db10. Altogether, we applied ten levels of decomposition to assess fluctuations in the temperature time series.

3 Results and Discussion

3.1 Optimal Distance

After the tests, we found that the optimum distance for effective application is 5 cm, regardless of the power density to be applied; that is, within the operating range of the Light-Aid (power densities from 10 to 60 mW/cm2), the ideal distance remains the same. As the application involves no contact with the skin, this distance may vary slightly due to movements by the patient, which does not compromise the effectiveness of the treatment. It is important to emphasize that this variation should be as small as possible and that the patient should be comfortably accommodated close to the photoemitter, to avoid unnecessary movements.


3.2 Irradiation Area

The dimensions of the irradiation area are shown in Table 2. As in the optimal-distance experiment, regardless of the power density configured on the console, the coverage area remains the same at the distance of 5 cm.

3.3 Temperature

Using the data collected according to the methodology explained in Sect. 2.3.3, we obtained raw results such as the example shown in Fig. 5. After analyzing the raw data and applying the wavelet transform to the obtained time series, we can analyze and quantify different levels of fluctuation. For this, we compute the power of the wavelet detail coefficients (the sum of squared coefficient amplitudes), in dB, within the frequency bands listed in Column 1 of Table 3. Then, for each frequency band, we identify the configuration and corresponding board providing the lowest energy level, associated with a stable behavior of the temperature fluctuations, as presented in Table 3. We also identify the configuration and board providing the lowest overall energy level, considering the sum of the detail-coefficient powers across all frequency bands, which is associated with the most stable temperature behavior. This analysis identifies the right board with the 5 cm, 60 mW/cm2 configuration as the scheme with the most stable temperature evolution. To illustrate the measured time series of temperature evolution, Fig. 6 presents the experiment in which the sensor is located 5 cm from the LED emitter and the emission power density is 50 mW/cm2.
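A minimal sketch of this band-power computation follows; a Haar filter stands in for db10 (an assumption made for brevity), and the series is illustrative rather than the measured data. Each level's power is 10·log10 of the sum of squared detail coefficients.

```python
import math

def haar_step(x):
    """One analysis stage: low-pass/high-pass filtering plus decimation by 2."""
    s = 1 / math.sqrt(2)
    n = len(x) // 2
    approx = [s * (x[2*i] + x[2*i + 1]) for i in range(n)]
    detail = [s * (x[2*i] - x[2*i + 1]) for i in range(n)]
    return approx, detail

def detail_powers_db(x, levels):
    """Power (dB) of the detail coefficients at each decomposition level.
    Level k covers roughly the band fs/2**(k+1) .. fs/2**k.
    len(x) must be divisible by 2**levels."""
    powers = []
    approx = list(x)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        energy = sum(d * d for d in detail) or 1e-30  # guard against log10(0)
        powers.append(10 * math.log10(energy))
    return powers

# Illustrative "temperature" series: a slow drift plus a small fast oscillation.
series = [25.0 + 0.001 * k + 0.05 * math.sin(2 * math.pi * k / 4)
          for k in range(1024)]
band_powers = detail_powers_db(series, 5)
```

Ranking configurations by the sum of these per-band powers, as done in the study, then amounts to summing each list and picking the minimum.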


F. J. Santos et al.

Fig. 5 Temperature fluctuation over time—right sensor—raw data

Table 3 Power of the detail coefficients for several frequency bands, considering the most stable behavior in each frequency band

Band (Hz)        Board   Configuration      Power (dB)
1/4–1/2          Right   7 cm, 50 mW/cm2    58.29
1/8–1/4          Right   5 cm, 60 mW/cm2    62.14
1/16–1/8         Right   5 cm, 60 mW/cm2    66.42
1/32–1/16        Right   5 cm, 60 mW/cm2    71.21
1/64–1/32        Right   5 cm, 60 mW/cm2    76.49
1/128–1/64       Right   5 cm, 60 mW/cm2    82.08
1/256–1/128      Left    7 cm, 60 mW/cm2    87.83
1/512–1/256      Left    7 cm, 60 mW/cm2    93.71
1/1024–1/512     Left    7 cm, 60 mW/cm2    99.65
1/2048–1/1024    Left    7 cm, 60 mW/cm2    105.61

Fig. 6 Trend curves for sensor settings in front of the right board

For comparison purposes, Fig. 7 presents a bar chart comparing the contribution of each configuration to the overall sum of the power of the detail coefficients from the wavelet transform of the temperature evolution trends. Considering that the lowest contribution corresponds to the most stable temperature, we identify that some configurations are more stable than others.

4 Conclusion

Controlled by the Light-Aid console, the LEDs operate at different power densities, ranging from 10 to 60 mW/cm2, and can cover large wounds thanks to the possibility of operating the three boards together. The emitter is portable, easily transported, and requires no contact with the skin; it was calibrated for use at a distance of 5 cm between the boards and the wound region, using a distance sensor to prevent the distance from becoming shorter than the ideal one. The study of temperature was the main contribution of this work. From a refined analysis of the data collected by a temperature sensor, it was possible to estimate which configurations were more stable with respect to the fluctuation of the measured values. It is worth mentioning that this behavior is based on tests using analog sensors, which do not take into account the characteristics of biological tissue. As future work, we propose conducting tests on biological tissue, in vivo or ex vivo, to analyze the behavior of temperature in biological tissue and the effectiveness of the proposed treatment. In addition, we propose improving safety redundancy by adding temperature sensors that can assess the heating of the boards during the application of the therapy. Finally, we also suggest, after the clinical tests, the development of a single printed circuit board to control both the sensors and the dosage and application settings.

Fig. 7 Contribution from 4 configurations (described on the X axis) to the overall sum of the power of the detail coefficients from the wavelet transform of the temperature evolution trends

Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. The authors also acknowledge the financial support of FUNCAP (Fundação Cearense de Apoio à Pesquisa).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Han G, Ceilley R (2017) Chronic wound healing: a review of current management and treatments. Adv Therapy 34:599–610
2. Brem H, Stojadinovic O, Diegelmann RF et al (2007) Molecular markers in patients with chronic wounds to guide surgical debridement. Molecular Med 13:30–39
3. Sousa MVP, Kawakubo M, Ferraresi C, Kaippert B, Yoshimura EM, Hamblin MR (2018) Pain management using photobiomodulation: mechanisms, location, and repeatability quantified by pain threshold and neural biomarkers in mice. J Biophot 11(7):201700370
4. Chow RT, Johnson MI, Lopes-Martins RAB, Bjordal JM (2009) Efficacy of low-level laser therapy in the management of neck pain: a systematic review and meta-analysis of randomised placebo or active-treatment controlled trials. The Lancet 374:1897–1908
5. Kuffler DP (2016) Photobiomodulation in promoting wound healing: a review. Regenerat Med 11:107–122
6. Mesquita-Ferrari RA, Martins MD, Silva JA et al (2016) Effects of low-level laser therapy on expression of TNF-α and TGF-β in skeletal muscle during the repair process. Lasers Med Sci 3:335–340
7. Hamblin M, Agrawal T, Sousa MVP (eds) (2016) Handbook of low-level laser therapy. Jenny Stanford Publishing, New York
8. Froes P, Tatum B, Fernandes ICAG et al (2010) Avaliação dos efeitos do LED na cicatrização de feridas cutâneas em ratos Wistar. Fisioterapia Brasil 11:428–432
9. Sousa MVP (2014) Fonte de luz multifuncional, portátil e flexível para tratamentos e terapias com luz. WO 2014/10776 A1
10. Dib F, Valente CMO, Soares LP, Magalhães AC, Miagava J, Sousa MVP (2019) Desenvolvimento de melhorias em ponteira para uso em pacientes com ferimentos superficiais. INSPER 41
11. Addison PS (2005) Wavelet transforms and the ECG: a review. Physiol Measur 26:R155
12. Mallat SG (1989) A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans Pattern Anal Mach Intell 11:674–693
13. AlMahamdy M, Riley HB (2014) Performance study of different denoising methods for ECG signals. Procedia Comput Sci 37:325–332

Intermittent Suctioning for Patients Under Artificial Ventilation: A Digital Model Study

F. V. O. C. Médice, M. C. B. Pereira, H. R. Martins, and A. C. Jardim-Neto

Abstract

Endotracheal suctioning is a clinical resource commonly used for the bronchial hygiene of patients under artificial ventilation. This work proposes a new procedure to avoid damage to the patient during this process. Considering the importance of assessing the impact of the mechanical ventilation-suctioning interaction on the respiratory system, one objective of this work is to validate a computational model of a closed tracheal suctioning system. To that end, the pressures obtained with the model were compared to those resulting from aspiration in a physical lung model. Combinations of catheter and orotracheal tube sizes, as well as ventilatory and suctioning pressures, were used. The difference between the alveolar pressures obtained with the computational and physical models was not greater than 0.3 cmH2O, and the correlation between the signals was higher than 0.999. Therefore, the computational model was considered adequate for representing alveolar mechanics in the proposed condition. With the validated digital model, and aiming at a smaller loss of volume and alveolar pressure in the patient model, intermittent suctioning was simulated with the parameters of four patients (one healthy, one with a restrictive disease, one with an obstructive disease, and one critical patient with both disorders). When comparing normal and intermittent suctioning, there was an improvement in the mean alveolar pressure and a decrease in the maximum pressure drop during suctioning.

Keywords

Mechanical ventilation · Endotracheal suctioning · Mathematical lung model · Airtight rigid container lung model

F. V. O. C. Médice (✉) · M. C. B. Pereira · H. R. Martins · A. C. Jardim-Neto
Departamento de Engenharia Elétrica, Universidade Federal de Minas Gerais, Av. Antônio Carlos 6627, Belo Horizonte, MG 31270-901, Brazil

A. C. Jardim-Neto
Department of Biomedical Engineering, Stevens Institute of Technology, Castle Point Terrace, Hoboken, NJ 07030, USA

1 Introduction

Patients undergoing mechanical ventilation require periodic airway maintenance [1]. According to the American Association for Respiratory Care (AARC), endotracheal aspiration is a common procedure for these patients, part of the bronchial hygiene therapy that minimizes the accumulation of mucous secretions in the patients' airways [2]. According to Avena et al. [1] and Ortis [3], the aspiration procedure can cause several complications, such as hypoxemia, bradycardia, atelectasis, mucosal trauma, increased intracranial pressure, bacteremia, pneumothorax and even cardiac arrest and death. During the suctioning procedure, the internal pressure of the trachea is mostly sub-atmospheric, increasing the chance of complications such as pulmonary derecruitment [4]. Aiming at a new strategy to minimize collapse during closed-system suction, this work estimates the effect of an intermittent aspiration system on the alveolar pressure. Through digital modeling of a patient's lung, mechanical ventilator and aspirator, the impact of this intermittent aspiration on healthy and sick patients was assessed.

2 Materials and Methods

2.1 Modeling

To create the mathematical model, we used the respiratory system equation of motion for airway opening pressure [5] as a

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_130




reference. It considers the airway opening pressure (Pao), the elastance (Ers) and resistance (Rrs) of the whole respiratory system, and the flow (V̇_Lung) and volume (V_Lung) of the lungs. It was adapted to include the residual volume (RV), as seen in Eq. 1.

Pao = Ers (V_Lung − RV) + Rrs V̇_Lung  (1)

Using the law of conservation of mass [6], we considered that all gas flow into the lungs comes from the external environment, with no alveolar gas exchange. Equation 2 gives the sum of the flows from the ventilator (V̇_Ventilator) and suction (V̇_Suction), and Eqs. 3 and 4 apply Poiseuille's law [6-9] to each flow, considering the ventilator (P_Ventilator) and suction (P_Suction) pressures together with the orotracheal tube (R_TOT) and suction catheter (R_Suction) resistances.

V̇_Lung = V̇_Ventilator + V̇_Suction  (2)

V̇_Ventilator = (P_Ventilator − Pao) / R_TOT  (3)

V̇_Suction = (P_Suction − Pao) / R_Suction  (4)

Using Pao as the pressure reference, it is possible to predict the behavior of the lung pressure, since the alveolar pressure is a portion of Eq. 1, as shown in Eq. 5.

P_Lung = Ers (V_Lung − RV)  (5)

The reference values for R_TOT and R_Suction were obtained from orotracheal tubes (TOT) of 7.5 and 8.5 mm; three suction catheters (SC) of 06 FR, 08 FR and 10 FR; and six suction catheters of 12 FR. With a 3 L spirometer calibration syringe (Alpharad, Brazil), a 3 min sequence of cyclic air flows (V̇) was generated to obtain a signal within the expected range of variation of positive and negative pressures (P) for identification and characterization of the resistances. As the SC sits inside the TOT when in use, combinations of TOTs and SCs were measured according to the AARC recommendations [2]. Since the negative pressures were too noisy, only the positive pressures were used for the identification. A curve-fit function V̇ = aP^b of the data was used to obtain the resistances. For the physical lung model, a 32.5 L aluminum stock pot number 36 (Alumínio ABC, Brazil) was used to build an airtight rigid-container lung model [10, 11]. A 28.5 mm and a 4 mm hole were made in the lid. In the 28.5 mm hole, a silicone tube of 30 mm diameter and 25 cm length was inserted to simulate the trachea. In the 4 mm hole, a three-way stopcock was added to measure the alveolar pressure. Both holes were sealed with epoxy resin glue (Durepoxi, Loctite, Brazil). A double-sided acrylic tape (Tectape 690, 19 mm × 2 m, Brazil) was used to fix the lid to the pot and eliminate leaks.
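The V̇ = aP^b curve fit can be reproduced by a linear least-squares fit in log-log coordinates, since log V̇ = log a + b log P. The sketch below uses synthetic data with illustrative constants, not the measured values of Table 1:

```python
import math

def fit_power_law(pressures, flows):
    """Fit flow = a * pressure**b by linear least squares on log10 values."""
    xs = [math.log10(p) for p in pressures]
    ys = [math.log10(v) for v in flows]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = 10 ** (my - b * mx)
    return a, b

# Synthetic positive-pressure data generated from known (illustrative) constants.
a_true, b_true = 0.32, 0.57
pressures = [5.0, 10.0, 20.0, 40.0, 80.0]          # cmH2O
flows = [a_true * p ** b_true for p in pressures]  # L/s
a_est, b_est = fit_power_law(pressures, flows)
```

On noise-free power-law data the fit recovers the generating constants, which is a quick sanity check before applying it to measured positive-pressure samples.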

For the digital model, we used Simulink as the tool. As the resistance changes for each combination of TOT and SC, two MATLAB Functions were written to calculate the suction flow and the ventilator flow. Using Eq. 2, all flows were integrated. To simulate the behavior of a real ventilator, the comparison between the respirator pressure and the ventilation pressure has a constant delay and a trigger pressure. To set the initial values for the simulation, it is necessary to define the patient constants Ers, RV and Rrs. As the simulation assumes a patient on Continuous Positive Airway Pressure (CPAP) mode before aspiration starts, the initial lung pressure is always the initial ventilation pressure. For intermittent suctioning, the MATLAB Function for the suction flow contains code that, when enabled, forces the flow to zero at a frequency configured by the user. At the output of the MATLAB Function for the ventilator flow, a Saturation block configured at 0.5 L/s limits the maximum ventilator flow.
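The same lumped model can be cross-checked outside Simulink. The sketch below steps Eqs. 1-5 with explicit Euler integration, using constant (linearized) resistances; this linearization, the resistance values and the episode timing are assumptions of the sketch, while the study itself uses the fitted pressure-flow curves. Solving Eqs. 1-4 for the lung flow gives V̇_Lung = (P_Ventilator/R_TOT + P_Suction/R_Suction − G·Ers·(V_Lung − RV)) / (1 + G·Rrs), where G is the sum of the branch conductances.

```python
def lung_flow(v, p_vent, p_suc, suction_on, ers, rv, r_tot, r_sc, rrs):
    """Lung flow from Eqs. 1-4: V'_Lung = V'_Ventilator + V'_Suction, with
    Pao = Ers*(V - RV) + Rrs*V'. The suction branch is dropped when gated off."""
    g = 1.0 / r_tot + (1.0 / r_sc if suction_on else 0.0)
    drive = p_vent / r_tot + (p_suc / r_sc if suction_on else 0.0)
    return (drive - g * ers * (v - rv)) / (1.0 + g * rrs)

def simulate(crs_ml, rrs, intermittent, t_end=20.0, dt=0.001):
    """Explicit-Euler run of a 15 s suction episode starting at t = 2 s.
    Returns the alveolar pressure trace, P_Lung = Ers*(V - RV) (Eq. 5)."""
    ers = 1000.0 / crs_ml         # elastance (cmH2O/L) from compliance (mL/cmH2O)
    rv = 1.2                      # residual volume (L), value used in the paper
    r_tot, r_sc = 15.0, 300.0     # linearized resistances, illustrative values
    p_vent, p_suc = 10.0, -200.0  # CPAP level and suction pressure (cmH2O)
    v = rv + p_vent / ers         # start at equilibrium on CPAP
    trace = []
    for k in range(int(t_end / dt)):
        t = k * dt
        on = 2.0 <= t < 17.0
        if intermittent and on and (t % 1.0) < 0.5:
            on = False            # 1 Hz gating forces the suction flow to zero
        v += lung_flow(v, p_vent, p_suc, on, ers, rv, r_tot, r_sc, rrs) * dt
        trace.append(ers * (v - rv))
    return trace

continuous = simulate(120, 5, intermittent=False)   # "healthy patient" constants
gated = simulate(120, 5, intermittent=True)
```

With these illustrative numbers, the gated run keeps the minimum alveolar pressure several cmH2O above the continuous-suction minimum, mirroring the trend reported in Sect. 3.2.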

2.2 Validation

All signals were acquired using a differential pressure sensor SSCDRRN001PD2A5 (±1 psi range, Honeywell, USA) for the alveolar pressure, and a differential pressure sensor SSCDRRN005PD2A3 (±5 psi range, Honeywell, USA) together with a mass flow meter SFM3000 (200 slm range, Sensirion AG, Switzerland) for the ventilator and suction circuits. Data acquisition was performed at a sampling rate of 100 Hz by an STM32F765 microcontroller, and the data were sent to the computer via Bluetooth. 3D-printed connectors were made to fit the sensors between the TOT/Y-piece and the suction catheter/delivery tube. To set the simulated patient parameters (Ers and Rrs), it is necessary to measure the parameters of the lung model. For that, the 8.5 mm TOT was used and the lung model was connected to the Massimus ventilator (Cmos Drake do Nordeste Ltda, Brazil), previously calibrated (data not shown). Using assisted mandatory ventilation (ACMV mode) with a volume control of 500 mL, a pressure trigger of 0.5 cmH2O, a positive end-expiratory pressure (PEEP) of 0 cmH2O, an inspiratory frequency of 10 cycles/min and an inspiratory time of 2 s, a 2 min signal was acquired. When processing the data, a Butterworth low-pass filter with a cutoff frequency of 1 Hz and zero phase was applied first. The volume was calculated as the sum of the flow over a period, so the measurement error grows and accumulates; therefore, each inspiratory volume was forced to equal the corresponding expiratory volume, removing this accumulated error. Then, the least squares method was used to find the best parameters for these signals according to Eq. 6.
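The least-squares identification against Eq. 6 is linear in the parameters (Ers, Rrs, PVR), so it reduces to solving 3×3 normal equations. A self-contained sketch with synthetic data (the constants below are illustrative, chosen near the values reported in Sect. 3.1):

```python
import math

def fit_eq6(p_ao, volume, flow):
    """Least-squares fit of Pao = Ers*V + Rrs*V' - PVR (Eq. 6).
    Solves the 3x3 normal equations by Gauss-Jordan elimination."""
    rows = [[v, f, -1.0] for v, f in zip(volume, flow)]
    # Normal equations: (A^T A) x = A^T b
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * p for r, p in zip(rows, p_ao)) for i in range(3)]
    m = [ata[i] + [atb[i]] for i in range(3)]
    for c in range(3):                      # elimination with partial pivoting
        piv = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(3):
            if r != c:
                k = m[r][c] / m[c][c]
                m[r] = [a - k * b for a, b in zip(m[r], m[c])]
    return [m[i][3] / m[i][i] for i in range(3)]  # Ers, Rrs, PVR

# Synthetic ventilation cycle generated from known (illustrative) constants.
ers, rrs, pvr = 38.2, 3.35, 0.16
t = [k * 0.01 for k in range(628)]
vol = [0.25 * (1 - math.cos(x)) for x in t]       # volume (L)
flo = [0.25 * math.sin(x) for x in t]             # flow (L/s)
pao = [ers * v + rrs * f - pvr for v, f in zip(vol, flo)]
est = fit_eq6(pao, vol, flo)
```

On noise-free data the fit recovers the generating constants exactly, which makes it easy to verify the estimator before feeding it the filtered measured signals.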

Pao = Ers V_Lung + Rrs V̇_Lung − PVR  (6)

A 1.3 L portable surgical aspirator (Aspiramax, Brazil), the Inter 5 Plus ventilator (Intermed Equipamento Médico Hospitalar Ltda, Brazil) and the lung model were used for the validation experiment. Following the AARC recommendations [2], each suction lasted 15 s. Initially, a signal was acquired with only the aspirator turned on, using the minimum, average and maximum pressure for each combination of TOT and SC with 20 s intervals between each, in order to determine the suction pressure for each combination. After that first minute, the ventilator was turned on in CPAP of 5 cmH2O with a sensitivity of −0.5. The ventilator was expected to reach the configured CPAP value, keeping the patient's pressure constant. Then, a suction was started using the minimum, average and maximum pressure for each combination, with 20 s intervals between each, to measure the patient's alveolar pressure. This process was repeated with the CPAP modified to 10 and 15 cmH2O, always waiting 20 s after the last aspiration before changing this parameter. When processing the data, a Butterworth low-pass filter with a cutoff frequency of 1 Hz and zero phase was applied first. Then, the measured ventilation and suction pressures were used as inputs for the Simulink model, together with all the other parameters of the lung model and the combination of TOT and SC. Finally, the measured and simulated alveolar pressures were compared.
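The comparison between measured and simulated alveolar pressures reduces to two numbers per trial, the Pearson correlation and the mean simulated-minus-measured difference (cf. Table 3). A sketch with illustrative data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def compare(simulated, measured):
    """Correlation and mean offset between simulated and measured pressures."""
    diff = [s - m for s, m in zip(simulated, measured)]
    return pearson(simulated, measured), sum(diff) / len(diff)

measured = [5.0, 4.2, 1.1, -0.8, 0.4, 3.9, 5.0]   # illustrative cmH2O trace
simulated = [m + 0.1 for m in measured]           # constant positive offset
r, offset = compare(simulated, measured)
```

A correlation near 1 with a small, always-positive offset is exactly the pattern reported for the validated model.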

2.3 Simulation

Using combinations of compliance (Crs) of 20 or 120 mL/cmH2O and Rrs of 5 or 25 cmH2O/L/s, four types of patients were simulated: Crs = 120/Rrs = 5 as a healthy patient; Crs = 20/Rrs = 5 as a patient with restrictive lung disease; Crs = 120/Rrs = 25 as a patient with obstructive lung disease; and Crs = 20/Rrs = 25 as a patient with both conditions. For all four patient cases, all seven combinations of TOTs and SCs were used in order to see in which cases the patient's lung pressure would undergo a significant pressure drop. We set P_Ventilator = 10 cmH2O, P_Suction = −200 cmH2O and RV = 1.2 L. Each combination of patient, TOT and SC was simulated for delay values of 50 and 100 ms with P_trigger of 0.5 and 1.0 cmH2O. Coimbra and Silverio [12] and Maggiore et al. [13] report that a PEEP above 5 cmH2O is a protective ventilatory strategy. So, for all 112 simulations, we assumed that a pressure drop of up to 2 cmH2O was not enough to require intermittent aspiration. Thus, for values greater than 2 cmH2O, intermittent aspiration was simulated at a frequency of 1 Hz.

3 Results

3.1 Validation of Models

The resistance measurements were fitted to a second-order polynomial with pressure as output and flow as input. But as pressures are the inputs for the Simulink model, an inverse function was needed. The result is in Table 1. For the lung model, using the least squares method, the parameters were Ers = 38.1970 cmH2O/L, Rrs = 3.3464 cmH2O/L/s and PVR = 0.1597 cmH2O, with R² = 0.9973. This means the compliance is 26.1801 mL/cmH2O, the resistance is 3.3464 cmH2O/L/s and the residual pressure of the model is 0.16 cmH2O, i.e. a negligible residual pressure. Using the Simulink model and initially evaluating the values measured in the experiment, the aspiration pressures for each combination of TOT and SC are shown in Table 2. As can be observed, the smaller the diameter of the tube, the greater the resistance predicted by the model and the greater the suction pressure. Following the suction values by CPAP level, it can be seen that, with the system completely sealed, the ventilator also pressurizes the suction line, creating a small offset in the suction pressure. This can be seen in the values becoming more positive as the CPAP level rises. When using the same pressure signals as input to the digital system, the values in Table 3 were obtained. Figure 1 shows the differences between the ventilator pressures and the measured alveolar pressures. Observing the trends of the curves in Fig. 1 and the values in Table 3, the combinations with TOT 7.5 had lower suction pressures than the same SC combined with TOT 8.5. The difference between the ventilator pressure and the alveolar pressure was higher for the SCs with larger diameter.

3.2 Simulation

Figure 2 shows the maximum alveolar pressure drop during aspiration. The pressure drop exceeds the 2 cmH2O threshold in all patient cases with Rrs = 25 cmH2O/L/s; these cases therefore require intermittent aspiration. In these simulations, P_trigger did not affect the value of the pressure drop. For the delay, the pressure drop is greater when the delay is larger, so the upper points in Fig. 2 are those with a value of 100 ms and the lower points are those with a value of 50 ms. Figure 3 shows the mean alveolar pressure drop during aspiration. At the mean pressure, few cases exceed the protective PEEP threshold and, of these, all have Rrs = 25 cmH2O/L/s, except the combination TOT 8.5 and SC 12. Again, P_trigger did not affect the value of the pressure drop. For the delay, the

Table 1 Resistance of orotracheal tubes (TOT) and suction catheters (SC), with the constants a and b of the curve-fit equation V̇[k] = a·P[k]^b

Fit                    TOT    SC    a          b
V̇[k] SC 06            —      06    0.001339   0.7939
V̇[k] SC 08            —      08    0.01081    0.6338
V̇[k] SC 10            —      10    0.01501    0.6046
V̇[k] SC 12            —      12    0.03048    0.5689
V̇[k] TOT 7.5          7.5    —     0.3833     0.5872
V̇[k] TOT 7.5 SC 06    7.5    06    0.3223     0.5745
V̇[k] TOT 7.5 SC 08    7.5    08    0.2745     0.5687
V̇[k] TOT 7.5 SC 10    7.5    10    0.2682     0.5726
V̇[k] TOT 8.5          8.5    —     0.548      0.5688
V̇[k] TOT 8.5 SC 06    8.5    06    0.4413     0.5807
V̇[k] TOT 8.5 SC 08    8.5    08    0.3795     0.5819
V̇[k] TOT 8.5 SC 10    8.5    10    0.3808     0.5843
V̇[k] TOT 8.5 SC 12    8.5    12    0.321      0.5871

Table 2 Average aspiration pressures per combination in cmH2O

CPAP   Combination     Pmin       Paver      Pmax
5      TOT 7.5 SC 06   −30.5935   −60.1557   −127.2874
5      TOT 7.5 SC 08   −17.9167   −31.6061   −58.1773
5      TOT 7.5 SC 10   −15.4048   −24.6764   −42.0320
5      TOT 8.5 SC 06   −28.8369   −54.3058   −121.0695
5      TOT 8.5 SC 08   −15.4561   −25.9326   −43.3267
5      TOT 8.5 SC 10   −9.2302    −13.8758   −19.2028
5      TOT 8.5 SC 12   −5.8885    −8.2867    −11.0030
10     TOT 7.5 SC 06   −30.8199   −61.5563   −127.0411
10     TOT 7.5 SC 08   −15.9677   −28.1604   −55.0304
10     TOT 7.5 SC 10   −13.4784   −22.8815   −39.1822
10     TOT 8.5 SC 06   −26.0172   −50.4840   −106.2791
10     TOT 8.5 SC 08   −14.3728   −23.9385   −41.0245
10     TOT 8.5 SC 10   −8.0579    −12.4343   −17.2151
10     TOT 8.5 SC 12   −4.0084    −6.0977    −8.4564
15     TOT 7.5 SC 06   −29.6542   −60.5314   −125.5421
15     TOT 7.5 SC 08   −14.4892   −27.1108   −52.1786
15     TOT 7.5 SC 10   −11.5001   −20.5609   −36.1781
15     TOT 8.5 SC 06   −26.4807   −49.9332   −105.4312
15     TOT 8.5 SC 08   −13.7283   −24.3806   −41.9824
15     TOT 8.5 SC 10   −6.9226    −10.7593   −15.4311
15     TOT 8.5 SC 12   −2.2343    −4.1045    −6.1579

difference was visible only in the blue case, where the upper point in Fig. 3 is the one with a value of 50 ms. For the values that exceeded the thresholds in Fig. 2, intermittent aspiration was used and the results are shown in Figs. 4 and 5. Again, P_trigger did not interfere; the upper points in Fig. 4 are those with a delay of 100 ms, and in Fig. 5 those with a delay of 50 ms.

Table 3 Average values of the comparison results per combination

Combination     Correlation             Psimulated − Pmeasured (cmH2O)
TOT 7.5 SC 06   0.9995 ± 2.6458e−04     0.0636 ± 0.1279
TOT 7.5 SC 08   0.9995 ± 1.0000e−04     0.1960 ± 0.1313
TOT 7.5 SC 10   0.9995 ± 1.0000e−04     0.0991 ± 0.1216
TOT 8.5 SC 06   0.9996 ± 1.5275e−04     0.1376 ± 0.1556
TOT 8.5 SC 08   0.9995 ± 1.7321e−04     0.1725 ± 0.1332
TOT 8.5 SC 10   0.9996 ± 0              0.2175 ± 0.1335
TOT 8.5 SC 12   0.9995 ± 1.5166e−04     0.2600 ± 0.1606

Fig. 1 Measured pressure difference curve

Fig. 2 Maximum values of alveolar pressure drop for each combination of patient with TOT and SC for continuous suction. The red dotted line is the protective PEEP value for patients and the black dotted line is the estimated pressure-drop threshold below which intermittent aspiration is not necessary

Fig. 3 Mean values of alveolar pressure drop for each combination of patient with TOT and SC for continuous suction. The black dotted line is the PEEP protective value for patients and the red dotted line is the zero alveolar pressure threshold

Fig. 4 Maximum values of alveolar pressure drop for each combination of patient with TOT and SC for intermittent suction of 1 Hz. The red dotted line and the black dotted line are the same as Fig. 2


Fig. 5 Mean values of alveolar pressure drop for each combination of patient with TOT and SC for intermittent suction of 1 Hz. The red dotted line and the black dotted line are the same as Fig. 3

4 Discussion

4.1 Validation

The SCs and TOTs were identified with a calibration syringe (Alpharad, Brazil), the physical model was identified using a mandatory ventilation mode of one mechanical ventilator (Cmos Drake do Nordeste Ltda, Brazil), and the validation experiment was performed with a different mechanical ventilator in a spontaneous ventilation mode (Intermed Equipamentos Médico Hospitalar Ltda, Brazil). This supports the use of the data, since the identification and validation data sets are independent. Thus, given an always-positive difference of less than 0.3 cmH2O and a correlation greater than 0.999 between the simulated alveolar pressure signal and the pressure signal measured in the physical model, as shown in Table 3, the data support the validation of the mathematical model. In addition, the difference between the ventilator pressure and the alveolar pressure was greater for the SCs with larger diameter, as shown in Fig. 1. This is consistent with a pressure drop proportional to the diameter of the catheter, as reported by the AARC [2].

4.2 Simulation

When analyzing the results in Fig. 2, it is possible to observe that restrictive lung diseases show a greater pressure drop than obstructive pulmonary diseases. As such a lung cannot store a large amount of air at low pressures, the pressure varies abruptly, thus presenting higher pressure drops. Together with the results in Fig. 3, it is possible to observe that the cases with the simulated obstructive disease showed greater maximum pressure drops; however, their average pressures do not follow this trend. Since the airway resistance is higher in those cases, the flow-related loss in energy, i.e. the pressure drop, is also higher (Eq. 1). In contrast, for the "healthy patient" case, both the maximum drop and the mean pressure presented the best results, showing that the maximum pressure drop and the average pressure value are not far apart, except for the combination TOT 8.5 and SC 12. Analyzing the maximum pressure drop values in Fig. 4, there was a significant improvement in the initial pressure drop compared to Fig. 2, with the combination TOT 8.5 and SC 12 showing the largest difference for all patients. Comparing the mean pressures in Fig. 5 with the mean pressures of continuous aspiration in Fig. 3, there was an improvement in all cases. The model inserts the aspiration pressure just before the SC, disregarding all the resistance of the aspiration equipment, so all simulated cases work with a much higher pressure than the one usually used. Since the aspirator works with vacuum pressures from 0 to 795 cmH2O, the maximum pressures measured in Table 2 represent the most feasible values to be used in patients. Thus, it is necessary to know where to measure the pressure of the AARC recommendation [2] so as not to work with pressures above 200 cmH2O. It is expected that, when suctioning the mix of air and secretions from real patients, the SC resistance will increase, since the fluid is more viscous [6]. Because of that, the suctioning flow rate would be lower, potentially causing a smaller lung derecruitment effect compared to suctioning air alone. As the presented simulation considered suctioning air only, it covers the worst-case scenario for the suctioning-related decrease in alveolar pressure. Therefore, for the "healthy patient" case, the intermittent aspiration procedure is not recommended, since, even in this worst case, standard aspiration does not potentially cause major derecruitment. Regarding intermittent aspiration, the procedure shows that it is capable of maintaining the same aspiration pressure and flow while drawing less air from the patient's lungs. However, it is necessary to carry out new measurements on other suction equipment to determine which actual suction pressure must be used, checking whether the pressures in Table 2 are the actual pressures reaching the SC, whether the aspirator is unable to maintain a vacuum as good as a clinical vacuum line in hospitals, or whether its resistance is very high. Nevertheless, for the aspiration pressure of 200 cmH2O, Fig. 5 shows an improvement in the average alveolar pressure in the patients, except for the critical patient with the worst condition, as


only for TOT 8.5 and SC 12 is there an average alveolar pressure below 0 cmH2O. Considering, then, that the worst case presented is the combination TOT 8.5 and SC 12 Fr, the use of this combination with an aspiration pressure of 200 cmH2O is not recommended, especially if the ventilator cannot deliver flow rates higher than 0.5 L/s.
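The resistance argument made in this discussion can be sketched numerically. This is a minimal illustration, assuming Eq. 1 is the linear resistive relation between pressure drop and flow (delta_P = R * Q), as the text suggests; the resistance and flow values below are hypothetical, chosen only to show the trend.

```python
# Minimal sketch of the resistance argument: for a fixed suction flow,
# a higher airway resistance produces a proportionally higher pressure
# drop (assuming the linear relation delta_P = R * Q of Eq. 1).
# All numeric values are hypothetical, for illustration only.

def pressure_drop(resistance_cmH2O_s_per_L, flow_L_per_s):
    """Linear resistive model: delta_P = R * Q."""
    return resistance_cmH2O_s_per_L * flow_L_per_s

flow = 0.5           # L/s, suction flow (hypothetical)
r_healthy = 5.0      # cmH2O.s/L, "healthy patient" resistance (hypothetical)
r_obstructed = 20.0  # cmH2O.s/L, simulated obstructive disease (hypothetical)

print(pressure_drop(r_healthy, flow))     # 2.5 cmH2O
print(pressure_drop(r_obstructed, flow))  # 10.0 cmH2O
```

A more viscous air–secretion mix would raise the SC resistance and, for the same driving pressure, reduce the flow, consistent with the discussion above.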

4.3 Limitations

This work did not use a model with airway secretions, so further experiments are required to assess whether intermittent aspiration can remove secretions efficiently.

5 Conclusions

After successful validation, the proposed models predicted that a commercial ventilator could compensate for the pressure drop caused by bronchial aspiration in the human lungs. The simulations also predicted that intermittent aspiration could potentially promote less lung derecruitment while reaching the same suction pressures as traditional continuous aspiration.

Acknowledgements We thank FAPEMIG, CAPES and CNPq for the financial support. We thank UFMG for all laboratories that were used in this work. We thank Dr. Alessandro Beda (UFMG) and Dr. Eutálio Pimenta (UFMG) for all the equipment and support provided.

Conflict of Interest The authors declare that they have no conflict of interest.


References

1. Avena MJ, De Carvalho WB, Beppu OS (2003) Avaliação da mecânica respiratória e da oxigenação pré e pós-aspiração de secreção em crianças submetidas à ventilação pulmonar mecânica. Rev Assoc Méd Bras 49(2):156–161
2. AARC (2010) Endotracheal suctioning of mechanically ventilated patients with artificial airways 2010. Respir Care 55(6):758–764
3. Ortis MDDC (2015) Simulação in vitro da aspiração endotraqueal correlacionando pressão de vácuo, diâmetro do cateter e diferentes propriedades viscoelásticas para o muco respiratório. Dissertação (Mestrado em Engenharia Mecânica), Departamento de Engenharia Mecânica, Universidade Federal de Minas Gerais, Belo Horizonte/MG, 92 f
4. Vanner R, Bick E (2008) Tracheal pressures during open suctioning. Anaesthesia 63:313–315
5. Bates JH (2009) Lung mechanics: an inverse modeling approach. Cambridge University Press, Cambridge
6. Fox RW, McDonald AT, Pritchard PJ (2006) Introdução à Mecânica dos Fluidos. LTC, Rio de Janeiro
7. Berne RM, Koeppen BM, Stanton BA (2009) Berne & Levy Fisiologia. Elsevier Brasil, Rio de Janeiro/RJ
8. Silverthorn DU (2010) Fisiologia Humana: Uma Abordagem Integrada. Artmed, Porto Alegre/RS
9. West JB (2012) Respiratory physiology: the essentials, 9th edn. Lippincott Williams & Wilkins, Philadelphia/PA
10. Shiba N et al (2012) Humidification of base flow gas during adult high-frequency oscillatory ventilation: an experimental study using a lung model. Acta Med Okayama 66(4):335–341
11. Hirayama T et al (2014) Mean lung pressure during adult high-frequency oscillatory ventilation: an experimental study using a lung model. Acta Med Okayama 68(6):323–329
12. Coimbra R, Silverio CC (2001) Novas estratégias de ventilação mecânica na lesão pulmonar aguda e na Síndrome da Angústia Respiratória Aguda. Rev Assoc Méd Bras 47(4):358–364
13. Maggiore SM et al (2003) Prevention of endotracheal suctioning-induced alveolar derecruitment in acute lung injury. Am J Respir Crit Care Med 167(9):1215–1224

Prototype for Testing Frames of Sunglasses

Larissa Vieira Musetti and Liliane Ventura

Abstract

A first prototype version was developed for testing the resistance of sunglasses frames according to NBR ISO 12311:2018; it can also be used for prescription glasses. It tests the durability of the frame under the act of placing the glasses on and removing them from the face, helping to ensure the quality and safety of glasses sold to consumers. A mechanical system was designed, consisting of a nose simulator and fixing edges for one of the glasses' temples, simulating the side that is fixed behind the ear, and an electronic rotation system (40 rpm) for testing the other temple, simulating the side that is removed from the face; frames have been tested with it. Tests were performed on 30 samples, of which 4 were non-compliant with the ISO standard, and for only one of them the motor's torque was not adequate to perform the test. In addition, 5 samples moved away from the "nose" during the test, compromising the test but suggesting a more efficient design. The prototype will be revised to properly test frames with dimensions larger and smaller than the standard size, and the motor will be redesigned to test samples with less flexible frames. International equipment on the market is also discussed, pointing out inadequacies in relation to the requirements of the standard. This research was financially supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), process number 2018/16275-2.

Keywords

Prescription glasses and sunglasses · Increased resistance test of sunglasses frames · Prototype for testing eyeglass frames · NBR ISO 12311 · Sunglasses standards

L. V. Musetti (✉) · L. Ventura
Department of Electrical Engineering, EESC/USP, Av. Trabalhador São Carlense, 400, Parque Arnold Schimidt, São Carlos, Brazil
e-mail: [email protected]



1 Introduction

When choosing a pair of sunglasses or prescription glasses, consumers are concerned about the lenses and their UV protection. In both cases, frames are usually a fashion choice, not being recognized as an important item for the product's lifetime. However, choosing the correct frame greatly improves the quality of a pair of glasses. Currently, frames are made of thermoset and thermoplastic materials. Thermoplastics can be melted back into a liquid, whereas thermoset plastics always remain in a permanent solid state; thermoset plastics contain polymers that cross-link during the curing process to form an irreversible chemical bond [1]. The thermoset used for frames is nitrocellulose. The thermoplastics used for frames are polyamides (nylons), polycarbonates, cellulose acetate, cellulose acetate propionate, cellulose acetate butyrate and acrylonitrile butadiene styrene (ABS). International standards for sunglasses frames should be valued by manufacturers, since such standards guarantee safety and quality for the consumer. These standards are not compulsory in Brazil. The Brazilian standards NBR ISO 12312-1:2015 [2] and NBR ISO 12311:2018 [3] mirror the international standards. The research projects conducted at the Ophthalmic Instrumentation Laboratory (LIO) EESC-USP over the last decade have been dedicated to assuring the eye safety of the Brazilian population, as previously mentioned. Although our team has made progressive efforts toward an eye-safe standard for sunglasses for our

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_131


tropical country [4–6], as we did in 2013 [7], when we were part of the committee that revised the Brazilian standard (NBR 15111:2013), that standard was repealed in 2015. One of the many reasons is that no laboratory in Brazil is able to perform every test required by the sunglasses standards, which falls back to the issue of not having testing equipment at affordable prices. This work is therefore one of the many systems that our lab is developing [8–11] to bridge this gap. This system is for the test required by NBR ISO 12312-1:2015 and ISO 12311:2018 of increased resistance of the glasses' frame, which is specifically intended for eyewear for sun protection. Both establish the same tests and procedures and have exactly the same requirements for the items concerning the test method for increased resistance of glasses, which is intended to guarantee the quality of the hinge thermo-fusion process and the characteristics of the molecular memory of the materials used in the manufacture of frames, ensuring the longevity of frames and glasses in general. For this reason, this work will always refer to NBR ISO 12311:2018 [3], since both are identical. The material that eyeglass frames are made of is an important issue. Frames made of materials without good molecular memory will no longer fit perfectly with time and may even break; moreover, a bad-quality frame can cause the lenses to come loose, chip, crack or even break with use. It is the frame that guarantees the ideal position of the lenses over the eyes; if it presents gaps or ruptures, this positioning becomes incorrect with time. Positioning the lenses in the correct place for the user is a very important factor, especially for prescription lenses, since all the correction and focus calculations are based on matching the correct refractive power with the correct lens positioning relative to the eyes. According to Roehe et al. [13], if the optical center of the lenses is not aligned with the visual axis, a prismatic effect will be induced, which will be greater the greater the dioptric power of the lens, and which may result in visual discomfort or asthenopia. In addition, with progressive lenses, if the interpupillary distance (IPD) is incorrect, the eyes will not enter the progressive corridor, causing image distortion. From this information, we can conclude how important frames are: if the frame undergoes any deformation, the visual axis will no longer be aligned with the face, making the glasses no longer functional. Taking into account the requirements of the standards, glasses must be designed and manufactured in such a way that, when used under the intended conditions and for the intended purposes, they do not compromise the safety and health of the user. In this context, this paper presents the development of a prototype for testing eyeglass frames, as required by the sunglasses standards, to guarantee that the movement of the


act of placing glasses on and removing them from the user's face does not damage them (deforming or breaking the frame), ensuring that the lenses do not come loose. There is no system in the national market for this type of test, and among the few models available in the international market, some are not in accordance with the requirements of the standard: frames must rest freely on the bridge of the nose during the test, as they do on the user's face, whereas these models have a "guide" to keep the eyeglasses on the bridge of the nose during rotation.
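The prismatic effect mentioned above in connection with Roehe et al. [13] is commonly estimated by Prentice's rule, P = c * F, where c is the decentration in centimetres and F the lens power in diopters. The sketch below is an illustrative aside, not part of this paper's method, with hypothetical values.

```python
# Illustrative aside: Prentice's rule estimates the prism (in prism
# diopters) induced when a lens of power F (diopters) is decentred by
# c (centimetres) from the visual axis: P = c * F.
# Example values are hypothetical.

def induced_prism(decentration_cm, lens_power_D):
    """Prentice's rule: prism diopters induced by lens decentration."""
    return abs(decentration_cm * lens_power_D)

# A 2 mm (0.2 cm) frame-deformation misalignment on a -4.00 D lens:
print(induced_prism(0.2, -4.0))  # 0.8 prism diopters
# The same misalignment on a -8.00 D lens induces twice the prism,
# matching the observation that stronger lenses are more sensitive:
print(induced_prism(0.2, -8.0))  # 1.6 prism diopters
```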

2 Literature Review

The test required by NBR ISO 12312-1:2018 simulates the stress on the frame of glasses for sun protection, especially in the joints, when putting the glasses into the open or closed condition. When tested, sunglasses cannot fracture at any point, present cracks or present permanent deformations; the original distance between the sides at the measurement fixation point cannot be altered by more than 5 mm after 500 cycles, ensuring that these glasses do not suffer long-term deformation. Figure 1 illustrates the apparatus suggested by the NBR ISO 12311:2018 [3] standard. A mechanical system to perform this test consists of a nose simulator with fixation of one of the glasses' temples (simulating the side that is fixed behind the ear) and an electronic rotation system for the other temple (simulating the side that is removed from the face); this is called the increased resistance test of sunglasses frames. The end of one temple is fixed to restrict lateral movement, besides rotation, while the end of the other temple is rotated through a 60 mm diameter circle. The bridge is supported, but not secured, by an artificial nose that restricts the movement of the frame. The standard specifies some parameters for the construction of the test equipment regarding nose simulator dimensions, bridge and fixation dimensions, and the number of cycles to be tested, and suggests minimum adjustments, such as a degree of adjustment relative to the bridge support of at least 40 mm horizontally and vertically.
The equipment needs to be capable of transmitting a smooth, continuous cyclic movement to one of the temples, 30 mm up and down and 60 mm to the sides, at a rate of 40 revolutions/min. It is optional whether the test is performed on the right temple, the left temple or both. Because the majority of the population is right-handed, the devices on the international market usually test only the right temple. There is no national equipment for this test available on the market. Prior to the start of the test, the distance between the temples is measured at pre-established points and recorded as distance 1 (d1). Then, the sample is positioned in the equipment, holding the two temples at their respective fixation points, and the rotation runs for up to 500 cycles. When the sample is removed from the equipment, the distance between the temples is measured at the same pre-established points and recorded as distance 2 (d2). Any fractures, cracks or changes in the movement of the clamps should also be recorded. The difference between the measured values d1 and d2 cannot be greater than 5 mm. Systems for this kind of test are difficult to find on the world market; there are fewer than half a dozen models worldwide, and in Brazil there is no registered model.

Fig. 1 Diagram of the sunglasses frame testing system suggested by NBR ISO 12311:2018
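The acceptance criterion just described (complete 500 cycles with no fracture or cracks, and a temple-distance change of at most 5 mm) can be captured in a short sketch; the function and parameter names are ours, not taken from the standard.

```python
# Sketch of the acceptance criterion of NBR ISO 12311:2018 as described
# above: a sample passes if it completed the 500 cycles without fracture
# or cracks and the distance between the temples changed by at most 5 mm.
# Names are illustrative, not taken from the standard.

MAX_DELTA_MM = 5.0
REQUIRED_CYCLES = 500

def is_compliant(d1_mm, d2_mm, cycles_done, fractured=False, cracked=False):
    if fractured or cracked:
        return False
    if cycles_done < REQUIRED_CYCLES:
        return False
    return abs(d2_mm - d1_mm) <= MAX_DELTA_MM

print(is_compliant(136, 138, 500))  # True  (delta = 2 mm)
print(is_compliant(137, 147, 500))  # False (delta = 10 mm)
print(is_compliant(143, 143, 260))  # False (test interrupted at 260 cycles)
```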

The NBR ISO 12311:2018 [3] standard states that the eyeglass frame bridge must rest freely on the bridge support; however, the models currently on the market have a "guide" that holds the eyeglass bridge so that it does not move during rotation, as shown in Fig. 2, which is not a faithful simulation of the real movement performed by the user when putting on or

Fig. 2 Example of a "guide" for the bridge of sunglasses used by international manufacturers


taking off the sunglasses. Considering all the requirements of the standard, in this work we present a first version of a prototype developed in accordance with the requirements of NBR ISO 12311:2018 [3] for the mechanical, electronic and rotation systems. We have also tested some samples to obtain preliminary results, which have led us to make important changes for the development of the second version of the prototype.

3 Materials and Methods

The system was designed in SOLIDWORKS for the development of the prototype, as shown in Fig. 3. The pieces were carefully designed and machined in aluminium and brass in the Mechanical Workshop of the Electrical Engineering Department of EESC-USP. For the nose simulator, the bridge support is required to have a triangular cross section with an angle of 30 ± 2° and a thickness at the top of 12 ± 1 mm, with the upper edge approximately rounded. All the mechanics were developed in accordance with the detailed description of ISO 12311:2018. The electronics consist of a NEMA 17 stepper motor, with a torque of 3.5 kgf cm and a supply voltage of 12 V, so that the temples can be cyclically moved. This motor has precise positioning adjustment, making it possible to count the cycles by counting the steps. To display the cycle counter to the user, we used a 20 × 4 Liquid Crystal Display (LCD) with an I2C serial bus. The entire system is controlled by an ATmega328 microcontroller, programmed in C++ with ARDUINO as the development platform. A board was developed with an integrated circuit (IC) containing a Darlington transistor array, FREESCALE's ULN2803, for the control module circuit that drives the stepper motor. The number of cycles is counted via firmware by counting the number of motor steps. The stepper motor

completes one cycle every 200 steps, so it is possible to read the number of steps, store it and convert it into a cycle count in the ARDUINO. The system has push buttons: one to start the motor and an additional one to stop the rotation, in case it is necessary to interrupt the cyclic movement before 500 cycles; as a routine, after the start button is pressed, the system stops automatically when 500 cycles are done. The assembled and functional prototype is shown in Fig. 4. Tests are being carried out to improve the system. Figure 4a shows a photo of the prototype; Fig. 4b is the QR code of a video of the working prototype.
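The firmware's cycle counting described above (one cycle per 200 motor steps, automatic stop at 500 cycles) can be modelled in a few lines. This is a plain-Python sketch of the logic, not the actual ARDUINO firmware, and the names are ours.

```python
# Plain-Python model of the cycle-counting logic described above: the
# stepper completes one cycle (one full revolution) every 200 steps,
# and the routine stops automatically at 500 cycles unless the operator
# presses the stop button first.  At 40 rpm with 200 steps/rev, one
# step would be issued every 60 / (40 * 200) = 7.5 ms.
# This is an illustrative sketch, not the actual Arduino firmware.

STEPS_PER_CYCLE = 200  # full steps per revolution of the NEMA 17 motor
TARGET_CYCLES = 500    # cycles required by NBR ISO 12311:2018

def run_test(stop_requested_at_cycle=None):
    steps = 0
    while True:
        steps += 1                         # one motor step issued
        cycles = steps // STEPS_PER_CYCLE  # firmware converts steps to cycles
        if stop_requested_at_cycle is not None and cycles >= stop_requested_at_cycle:
            return cycles                  # operator pressed the stop button
        if cycles >= TARGET_CYCLES:
            return cycles                  # automatic stop after 500 cycles

print(run_test())     # 500
print(run_test(120))  # 120 (interrupted early)
```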

4 Results and Discussion

Once the prototype was assembled, the test procedures were developed using the procedure established in item 9.7.3 of the NBR ISO 12311:2018 [3] standard, adding some steps to obtain more information for analysis at the end of the experiments. Steps such as filling out a report and taking videos and photos of the samples were added for subsequent analysis. The entire procedure adopted can be seen in the flowchart in Fig. 5. Thirty sunglasses samples were tested in the prototype, as shown in Table 1. Four were non-compliant, and for one it was not possible to perform the test because the motor did not have enough torque to rotate it. Of the non-compliant ones, three had changes in the temple distance greater than 5 mm after 500 cycles, and in one the lens came off during the test. Of the 30 tested samples, 25 were in compliance, and 12 of them moved off the center of the bridge of the nose when the cyclic movement was initiated. After careful experimental analysis, it was observed that if these samples left the bridge of the nose at the moment of rotation, the glasses would progressively leave the center and the right temple would end up being rotated with a smaller

Fig. 3 CAD design of the sunglasses frame testing system

Fig. 4 Prototype for the increased resistance test of sunglasses: a photo of the prototype; b QR code for the video of the working prototype

Fig. 5 Flowchart of the test procedure

diameter than specified, interfering with the standard's requirement. As previously mentioned, the models of equipment for this type of test available on the international market (two Chinese, one Italian and one German) all have a "guide" for the bridge of the nose. This "guide" solves the problem of frames leaving the center as the test runs, but it is not faithful to the standards' requirement


that frames must be loose, as if they were resting on the user's nose. The movement of removing and putting on the glasses, performed by the user, is what should be exhaustively reproduced in this test to guarantee to the consumer the durability of the frame under this movement. When placing the samples in the mechanical system, a difficulty was identified in attaching glasses with frames much larger or much smaller than the standard size, which exceed the adjustments of around 40 mm. As the prototype was made for standard-size glasses, an adjustment for testing frames larger and smaller than 40 mm is being designed and integrated into the system for the second version of the prototype. Subsequent analyses of the adjustments needed to accommodate the frame sizes available on the market will be made at a later stage, to check whether there is an optimal adjustment so that these frames, even being loose, can still be kept at the center of the nose simulator. Furthermore, as a next step, a more suitable motor is being implemented, and a study of its behaviour during rotation and its operating limits will be addressed. Referring to the standard, we have raised some contradictory points, or points that can lead to misinterpretation. In 9.7.3.2 of ISO 12311:2018, the standard presents a typical diagram of the test apparatus, as can be seen in Fig. 1. For item 2 in Fig. 1, an adjustment along the 'x' axis is shown, which is meant to ensure that the rotating clamp (5) is in the same plane as the fixed clamp (4). However, for items (4) and (5) to be in the same plane, item (5) would also need a similar adjustment. Additionally, this adjustment does not seem necessary, since adjustment (6) in Fig. 1 already moves the sample back and forth by at least 40 mm. For item (6), one more adjustment, along the 'y' axis, would be needed, as these systems must test numerous models of sunglasses that do not always have the same dimensions. As the dimensions change, the center of the frames also changes, and without this adjustment one temple will be more tensioned than the other. We need to ensure that the sunglasses' frame can be mounted with the temples fully open, but not under tension, and with the bridge support halfway between the clamps; this leads us to conclude that the bridge support always has to be halfway between the temple fixations when the glasses do not have exactly the 40 mm distance for which the system was designed. Alternatively, if adjustment 2 along the 'x' axis, as proposed by NBR ISO 12311:2018 [3], is applied, the center of the sunglasses will no longer be halfway between the clamp fixations, impairing the test. The model suggested as a typical diagram is shown in Fig. 6, with the necessary improvements.


Table 1 Results of tested samples for sunglasses frame resistance

| Sample number | d1 (mm) | d2 (mm) | Number of cycles | Visual aspects | Compliant/non compliant |
|---------------|---------|---------|------------------|----------------|-------------------------|
| HB 02 | 140 | 140 | 500 | OK | Compliant |
| HB 01 | 140 | 140 | 500 | OK | Compliant |
| L:13 N:172 | 136 | 136 | 500 | OK | Compliant |
| L:04 N:264 | 138 | 138 | 500 | OK | Compliant |
| L:25 N:83 | 133 | 134 | 500 | OK | Compliant |
| L:04 N:294 | 134 | 135 | 500 | OK | Compliant |
| L:04 N:05 | 136 | 139 | 500 | OK | Compliant |
| L:04 N:70 | 131 | 134 | 500 | OK | Compliant |
| L:04 N:96 | 134 | 134 | 500 | OK | Compliant |
| L:13 N:140 | 136 | 138 | 500 | OK | Compliant |
| L:04 N:09 | 132 | 134 | 500 | OK | Compliant |
| L:04 N:78 | 136 | 138 | 500 | OK | Compliant |
| L:23 N:04 | 140 | 143 | 500 | OK | Compliant |
| L:04 N:107 | 142 | 143 | 500 | OK | Compliant |
| L:13 N:50 | 141 | 141 | 500 | OK | Compliant |
| L:02 N:130 | 140 | 142 | 500 | OK | Compliant |
| L:13 N:141 | 133 | 134 | 500 | OK | Compliant |
| L:15 N:282 | 128 | 132 | 500 | OK | Compliant |
| L:13 N:48 | 138 | 138 | 500 | OK | Compliant |
| L:23 N:46 | 148 | 150 | 500 | OK | Compliant |
| L:15 N:201 | 140 | 140 | 500 | OK | Compliant |
| L:04 N:61 | 135 | 135 | 500 | OK | Compliant |
| L:04 N:106 | 126 | 128 | 500 | OK | Compliant |
| L:13 N:226 | 134 | 135 | 500 | OK | Compliant |
| L:04 N:110 | 135 | 135 | 500 | OK | Compliant |
| L:13 N:145 | 135 | 141 | 500 | OK | Non compliant |
| L:13 N:42 | 127 | 133 | 500 | OK | Non compliant |
| L:13 N:347 | 137 | 147 | 500 | OK | Non compliant |
| L:04 N:137 | 143 |  | 260 | X | Non compliant |
| L:22 N:313 | 130 |  | 5 | X | Non compliant |
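As a cross-check on Table 1, the sketch below recomputes the temple-distance deviation d2 - d1 for the three samples that completed 500 cycles but were non-compliant; the (d1, d2) pairs are copied from the table, and the helper name is ours.

```python
# Cross-check of Table 1: among samples that completed 500 cycles,
# non-compliance means the temple distance changed by more than 5 mm.
# The (d1, d2) pairs below are copied from the three failing rows.

def deviation(d1_mm, d2_mm):
    return d2_mm - d1_mm

failing = {
    "L:13 N:145": (135, 141),
    "L:13 N:42": (127, 133),
    "L:13 N:347": (137, 147),
}

for sample, (d1, d2) in failing.items():
    print(sample, deviation(d1, d2), "mm")  # 6, 6 and 10 mm, all above 5 mm
```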
In the diagram in Fig. 6, changes to items 2 and 6 are proposed.

5 Conclusions

As we preliminarily tested 30 samples in the first runs of the equipment, we noticed some changes to be made to the typical diagram required by the standard, regarding the movement of the nose bridge during the tests for some types of frames (size related). The models available on the international market already have this adjustment along the 'y' axis on the nose bridge, as shown in item 2 of Fig. 6. In practical tests it proved to be entirely necessary and relevant; otherwise it would not be possible to test different sizes of frames. When the standard states that the frame bridge must be supported freely, it follows that there can be no accessory preventing it from being free during the rotation test. It is therefore questionable whether the equipment on the international market is in accordance with the standard's test requirement. The preliminary results using this first version of the prototype showed that 13% of the 30 tested samples were non-compliant. Additionally, further criticisms can be made of this frame resistance test; for example, the standards should have a paragraph on frames with pad arms, which is not mentioned in ISO 12312-1:2018: the test is exactly the same for frames with or without regular pad arms (without extra extension). Apart from the suggestions we intend to make for the standards at the end of this research, the present testing method is the one required by the standards, so the equipment was developed as required. Regarding the hardware, an interface will be developed for communication with a computer to store data on the tested samples, generating test reports in a database and allowing the activation buttons and the motor speed to be controlled directly by the computer or by a microcontroller with a display. For the mechanics, a new design will allow adjustment of the degree of amplitude so that additional models of sunglasses, of different sizes, may be tested in the system. In short, it was possible to develop the first version of a first national prototype for the increased resistance test of sunglasses, enabling the improvement and suitability of the test equipment, which may lead to a final version that can be used in sunglasses certification laboratories or by eyeglasses producers and traders.

Fig. 6 Diagram presented as a suggestion for the test apparatus

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Miles DC, Briston JH (1996) Polymer technology, 3rd edn. Chemical Publishing Co., Inc., New York, pp 305–389
2. Associação Brasileira de Normas Técnicas. NBR ISO 12312-1:2018 (2018) Proteção dos olhos e do rosto—Óculos para proteção solar e óculos relacionados. Parte 1: Óculos para proteção solar para uso geral. Rio de Janeiro
3. Associação Brasileira de Normas Técnicas. NBR ISO 12311 (2018) Proteção dos olhos e do rosto—Métodos de ensaio para óculos para proteção solar e óculos relacionados. Rio de Janeiro
4. Masili M, Ventura L (2016) Equivalence between solar irradiance and solar simulators in aging tests of sunglasses. Biomed Eng Online 15:86–98
5. Masili M, Schiabel H, Ventura L (2015) Contribution to the radiation protection for sunglasses standards. Radiat Prot Dosimetry 164:435–443
6. Ventura L et al (2013) ABNT NBR 15111:2013—Óculos para proteção solar, filtros para proteção solar para uso geral e filtros para observação direta do sol. ABNT 1:1–47
7. Loureiro AD, Gomes L, Ventura L (2016) Transmittance variations analysis in sunglasses lenses post sun exposure. J Phys Conf Ser (Print) 733:012028
8. Masili M, Ventura L (2017) Building a resistance to ignition testing device for sunglasses and analysing data: a continuing study for sunglasses standards. Biomed Eng Online 16:1–10
9. Mello MM, Ventura L (2013) Cálculo da transmitância visível e semafórica com função de ponderação alternativa. Rev Bras Fís Méd (Online) 7:99–104
10. Lincoln VAC, Ventura L, Faria e Sousa SJ (2010) Ultraviolet analysis of donated corneas: a portable prototype. Appl Opt 49:4890–4897
11. Masili M, Duarte FO, White CC, Ventura L (2019) Degradation of sunglasses filters after long-term irradiation within solar simulation. Eng Fail Anal 1:1
12. Mello MM, Lincoln VAC, Ventura L (2014) Self-service kiosk for testing sunglasses. Biomed Eng Online 13:45
13. Roehe D et al (2008) Estudo comparativo entre dois métodos de medida da distância interpupilar. Rev Bras Oftalmol 67(2):63–68

Evaluation of Hyperelastic Constitutive Models Applied to Airway Stents Made by a 3D Printer

A. F. Müller, Danton Pereira da Silva Junior, P. R. S. Sanches, P. R. O. Thomé, B. R. Tondin, A. C. Rossi, Alessandro Nakoneczny Schildt, and Luís Alberto Loureiro dos Santos

Abstract

The present work is a proof of concept comparing mechanical tests on a commercial model of airway stent (HCPA-1) with experimental and virtual finite-element mechanical tests on a dimensionally similar stent made by a 3D printer in a flexible, non-biocompatible material. The material of the 3D printed stent was hyperelastically characterized from experimental mechanical tests on samples, which allowed calculating the constitutive constants for the hyperelastic models. Virtual simulations of mechanical tests using finite element analysis in the ANSYS software with the available hyperelastic models were compared with the experimental tests, yielding the precision of the hyperelastic models for the material used to print the stent. A cough-like mechanical test was performed experimentally on the HCPA-1 and 3D printed stents, simulating a critical condition of use. The results were compared with a virtual simulation using the hyperelastic models, obtaining a precision that varied from 2.5 to 280% depending on the hyperelastic model used. The work shows that it is possible to obtain the mechanical response of an airway stent virtually using finite elements, making it possible to minimize the time and financial investment needed in the research and development of new materials, geometries or customized models based on medical images using the 3D printing technique. Flexible and biocompatible materials for 3D printers should emerge in the future, allowing 3D printing of airway stents implantable in patients.

Keywords

Airway stent · Hyperelastic · 3D printer · Finite elements

A. F. Müller (✉) · D. P. da Silva Junior · P. R. S. Sanches · P. R. O. Thomé · B. R. Tondin · A. C. Rossi · A. N. Schildt
Hospital de Clínicas de Porto Alegre (HCPA), GPPG, Serviço de Pesquisa e Desenvolvimento em Engenharia Biomédica, Porto Alegre, Brazil
e-mail: [email protected]

L. A. L. dos Santos
Laboratório de Biomateriais (LABIOMAT), Departamento de Materiais (PPGE3M), Universidade Federal do Rio Grande do Sul (UFRGS), Escola de Engenharia, Porto Alegre, Brazil

1 Introduction

Patients who need treatment for obstructive lesions (strictures) of the lower airways, both benign and malignant, can use a stent (orthosis) as a reliable alternative in cases where surgery is not viable [1]. Several types of airway stents are used in patients; the main types are metallic, polymeric and mixed, each with its advantages and disadvantages. The "ideal" stent has not yet been developed. It must be stable; strong enough to withstand the external compressive forces that would compromise the airway lumen; biocompatible; available in all necessary sizes; resistant to migration; easily implanted and removed; and flexible enough to conform to different luminal irregularities. The Hospital de Clínicas de Porto Alegre developed and patented (Letter of Patent number MU7902500-5) a model of silicone stent called HCPA-1 (Fig. 1). This stent was developed to improve anchoring, provided by its anatomical shape, which is based on the structure of the airways. The technology transfer process was carried out with the company Medicone Ltda, which obtained authorization from the regulatory agencies (ANVISA in Brazil) and commercializes it. Current advances in 3D printing techniques present a very promising scenario for the manufacture of customized tracheal stents. The advantage of a stent designed for the airways and produced for an individual patient is that perfect alignment with the mucosa can be achieved, decreasing biofilm formation, mucus retention


Fig. 1 Left: HCPA-1 stent. Right: 3D printing of an HCPA-1 stent in flexible material

between the airways and the stent, formation of granulation tissue and stent migration. Schweiger et al. [2] report in their work the first 2 clinical applications made in Europe using silicone Y stents made with the aid of 3D printing techniques. Recent advances in 3D printing may revolutionize the use of stents, but medical-grade materials are under development [3]. 3D printing is a type of additive manufacturing which is a process that creates a three-dimensional object by building successive layers of raw material. Each new layer is attached to the previous one until the object is completed. It allows to create devices compatible with the patient’s anatomy even with very complex internal structures. There are several types of additive manufacturing [4]. Elastomers are widely used in the manufacture of airway stents, as they meet technical requirements. They are easily manufactured using standard techniques for elastomers and sterilized by conventional methods. Elastomers typically exhibit a strongly nonlinear stress– strain ratio characteristic of hyperelastic materials. In this case, Hooke’s Law is not applicable. As it is a material with inherently non-linear behavior, understanding the constitutive relationships of these materials becomes essential for the correct use in simulation programs using finite element analysis. In general, elastomeric materials are represented as a strain energy density function W, which is based on three strain invariants: I1, I2 e I3. W ¼ f ðI1 ; I2 ; I3 Þ

ð1Þ

The three invariants are defined in terms of the main extension rates k1, k2 e k3. They are given by: I1 ¼ k21 þ k22 þ k23

ð2Þ

I2 ¼ k21 k22 þ k22 k23 þ k23 k21

ð3Þ

I3 ¼ k21 k22 k23

ð4Þ

where λ is the ratio of the final length to the original length in the specified direction. Constitutive models are the mathematical representation of a material's response to an applied load, and a wide variety of numerical models is available in the literature for fitting experimental data to them.

Experimental studies analyzing the modeling of the mechanical behavior of airway stents using finite element analysis in medical applications are unusual and limited. The aim of this work is to test and evaluate a methodology to virtually simulate, using finite elements, mechanical tests in the development of new polymeric airway stents produced with 3D printing technology. This minimizes time and avoids the cost of injection molds and prototypes during development. Justifying this work, of the 27,270 new cases of lung cancer diagnosed in Brazil in 2008, the majority were advanced unresectable cancers at the time of diagnosis, with 30% presenting central airway obstruction manifested by respiratory distress, bleeding or infection [5]. This shows the importance of developing and evaluating a method to design new polymeric airway stents.
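As a quick numerical check of Eqs. (2)–(4), the sketch below (Python, with an illustrative stretch value, not data from this work) computes the invariants for an incompressible uniaxial state, where λ2 = λ3 = λ1^(−1/2) and therefore I3 = 1:

```python
import numpy as np

def invariants(l1, l2, l3):
    """Strain invariants I1, I2, I3 from the principal stretch ratios (Eqs. 2-4)."""
    I1 = l1**2 + l2**2 + l3**2
    I2 = l1**2 * l2**2 + l2**2 * l3**2 + l3**2 * l1**2
    I3 = (l1 * l2 * l3)**2
    return I1, I2, I3

# Incompressible uniaxial tension: l2 = l3 = 1/sqrt(l1), so I3 = 1
lam = 1.5                                   # illustrative stretch ratio
I1, I2, I3 = invariants(lam, lam**-0.5, lam**-0.5)
```

For this state I1 reduces to λ² + 2/λ, which is the expression commonly substituted into uniaxial hyperelastic stress relations.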

2 Materials and Methods

2.1 Finite Element Analysis Software

In this work, the ANSYS 2020 R1 finite element analysis software was used to calculate the parameters of the constitutive models and to perform the virtual simulations. In ANSYS, the hyperelastic constitutive models are classified as phenomenological, based on observation of the material's behavior during experimental tests. The elastomeric material is therefore defined in this application as isotropic and nearly incompressible.

To characterize the material used to make the samples and stents, the constants of the hyperelastic models must be calculated from one or more mechanical tests. The main tests performed on elastomers are compression and tension; compression is always uniaxial, while tension can be applied in a uniaxial, planar or biaxial way. The ANSYS software can characterize hyperelastic materials from experimental tests such as uniaxial tension, biaxial tension, pure shear and volumetric uniaxial compression. We decided to perform only the mechanical stress–strain tests of uniaxial tension, with the standard ASTM D412C specimen (Fig. 2b), and pure shear, using a rectangular sample (Fig. 2c) with dimensions of 60 × 6 × 1 mm (b0 × l0 × e0), respecting the minimum proportion of a width at least 10 times greater than the height.

2.2 Flexible 3D Filament

A flexible elastomeric filament with mechanical characteristics similar to the HCPA-1 stent, which is manufactured from a blend of barium sulfate and Nusil medical-grade silicone with a hardness between 60 and 65 Shore A, was used to print the specimens and stents. We chose a Ninjaflex polyurethane filament manufactured in the USA by Fenner Inc. This material is flexible, has a hardness of 85 Shore A, and is designed for 3D printers with Fused Deposition Modeling (FDM) technology. The material is non-biocompatible and cannot be implanted in patients, and due to the manufacturing process its surface finish has many imperfections, as shown in Fig. 3. It is therefore suitable only as a proof of concept to evaluate the proposed method.

3D printer materials are always evolving. Some flexible commercial materials use Stereolithography (SLA) technology with a layer resolution of 50 µm or less. For example, the UV Silicone 60A MG and CarbonResin SIL 30 resins have hardnesses of 60 and 30 Shore A, respectively, with medical grade USP Class VI, which allows direct contact with the skin for a maximum of 29 days and short-term contact with mucous membranes of up to 24 h.

Fig. 3 Zoom on the surface of the 3D Ninjaflex-black stent

2.3 Mechanical Tests of Uniaxial Tension and Pure Shear

An Instron model 4750 universal testing machine was used to perform the uniaxial tension and pure shear mechanical tests. The influence of the strain rate on the behavior of elastomers can be verified by submitting a sample to a sequence of loading–unloading tensile tests at different strain rates; it has been observed that at a displacement rate of 18 mm/min the loading and unloading curves do not present significant differences [6]. The acquisition rate was set to 1000 samples/s.

ANSYS specifies that engineering stress and strain data should be used as the input data. The experimental data obtained from the testing machine are the tensile force F, in newtons, and the displacement, in mm. The initial cross-sectional area A0 of the specimen in the region of interest was measured as recommended by the ASTM D638-14 standard, with the aid of a Mitutoyo digital caliper (resolution of 0.01 mm and accuracy of ±0.02 mm).
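The conversion from machine readings to the engineering quantities ANSYS expects can be sketched as below (all numeric values are hypothetical placeholders, not the measurements of this work; the gauge length L0 is assumed):

```python
import numpy as np

# Hypothetical readings from the testing machine
F = np.array([0.0, 2.1, 4.0, 5.6])   # tensile force (N)
d = np.array([0.0, 3.0, 6.0, 9.0])   # crosshead displacement (mm)

A0 = 6.0 * 1.0                       # initial cross-section b0 * e0 (mm^2), illustrative
L0 = 33.0                            # initial gauge length (mm), assumed

stress = F / A0                      # engineering stress (N/mm^2 = MPa)
strain = d / L0                      # engineering strain (mm/mm)
```

These stress–strain pairs are what would be loaded into the ANSYS hyperelastic curve-fitting tool.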

2.4 Mechanical Loads Investigated

To analyze the mechanical behavior, the specimens were tested with homogeneous and heterogeneous loads, considering the material as incompressible. Uniaxial tension tests were performed on 5 dumbbell-shaped samples following the ASTM D412C standard. The load cell measures the axial force used to calculate the stress, while the strain in the region of interest was obtained using Digital Image Correlation (DIC) [7–9].

Fig. 2 3D printed parts made with the flexible Ninjaflex black filament: a 3D Ninjaflex stents (with dimensions identical to HCPA-1), b standard ASTM D412C specimen, c rectangular sample


The state of pure shear stress was obtained through plane stress tests on 5 samples with initial height l0, constant width b0 and thickness e0, using the same equipment as the uniaxial tension test. As in the previous test, the load cell provides the axial force used to calculate the stress.

In the uniaxial tension and pure shear tests, images of the strain of the samples and their reference patterns were acquired and recorded with a Basler daA2500-14uc digital camera (resolution of 2592 × 1944 pixels) fitted with a 16 mm focal length lens, IR cut filter and fixed F1.2 iris. The images were stored using the pylon6 Viewer 64-bit software, version 6.0.13.7126, and processed by the Strain Measurement application [10] in Matlab R2016b, obtaining the local strain field of the samples as well as the displacements.

2.5 Manufacture of the Samples

The samples for the uniaxial tension tests and the pure shear test, and the 3D Ninjaflex stents (with dimensions identical to HCPA-1), are shown in Fig. 2a. A Zmorph 2.0SX 3D printer with FDM technology was used to produce the parts. The Voxelizer 2.0 software was used to convert the mechanical design files from the STL standard to the GCODE standard of the 3D printer. The printer was fed with black Ninjaflex filament, adjusted for strong durability, and the print quality was set to the maximum value. Using a 0.4 mm extrusion nozzle, a layer height of 0.14 mm and a path width of 0.3 mm were obtained. Special attention was paid during 3D printing to obtaining samples with reproducible mechanical properties.

Fig. 4 Stress versus strain graph obtained from the average of the uniaxial tension tests with the ASTM D412C specimen 3D printed with black Ninjaflex

3 Results

3.1 Uniaxial Tension Test

Figure 4 shows the mean stress versus strain curve obtained from the uniaxial tension tests. These data were loaded into ANSYS.

3.2 Pure Shear Test

Figure 5 shows the average stress versus strain curve obtained from the pure shear tests. These data were loaded into ANSYS.

Fig. 5 Stress versus strain graph obtained from the average of the pure shear tests with the specimen 3D printed with black Ninjaflex

3.3 Fit of Hyperelastic Constitutive Constants

The experimental data were loaded into ANSYS, which calculated and fit the hyperelastic constitutive constants for the models shown in Table 1. A refinement was then performed by manually adjusting the constants, using the stress versus strain fit curves calculated by ANSYS.

Table 1 Hyperelastic constitutive models and fitted constants

Neo-Hookean (normalized error norm for fit)
  ANSYS fit: Mu = 1.368604E+6 Pa; D1 = 0 Pa−1
  Manual fit: Mu = 1.1E+6 Pa; D1 = 0 Pa−1

Arruda–Boyce (normalized error norm for fit)
  ANSYS fit: Mu = 1.131162E+6 Pa; limiting network stretch = 5.9779744E+11; D1 = 0 Pa−1
  Manual fit: Mu = 1.07E+6 Pa; limiting network stretch = 7.2053E+8; D1 = 0 Pa−1

Gent (normalized error norm for fit)
  ANSYS fit: Mu = 1.368604E+6 Pa; limiting value = 8.28802878294E+11; D1 = 0 Pa−1
  Manual fit: Mu = 1.07E+6 Pa; limiting value = 8.28802878294E+11; D1 = 0 Pa−1

Blatz–Ko (absolute error norm for fit)
  ANSYS fit: initial shear modulus Mu = 2.674761E+6 Pa
  Manual fit: initial shear modulus Mu = 2.90E+6 Pa

Mooney–Rivlin, 2 parameters (absolute error norm for fit)
  ANSYS fit: C10 = −1.09224E+5 Pa; C01 = 1.139816E+6 Pa; D1 = 0 Pa−1
  Manual fit: C10 = −4.0E+5 Pa; C01 = 2.000849E+6 Pa; D1 = 3E−6 Pa−1

Mooney–Rivlin, 3 parameters (normalized error norm for fit)
  ANSYS fit: C10 = −3.766860E+6 Pa; C01 = 5.43669E+6 Pa; C11 = 7.54466E+5 Pa; D1 = 0 Pa−1
  Manual fit: C10 = −3.766860E+6 Pa; C01 = 5.436869E+6 Pa; C11 = 7.54466E+5 Pa; D1 = 2E−7 Pa−1

Mooney–Rivlin, 5 parameters (normalized error norm for fit)
  ANSYS fit: C10 = −1.0988845E+7 Pa; C01 = 1.2759561E+7 Pa; C20 = 5.097485E+6 Pa; C11 = −8.544896E+7 Pa; C02 = 2.1205375E+7 Pa; D1 = 0 Pa−1
  Manual fit: C10 = −1.098884E+7 Pa; C01 = 1.2759561E+7 Pa; C20 = 5.097485E+6 Pa; C11 = −8.544896E+7 Pa; C02 = 2.1205375E+7 Pa; D1 = 9E−8 Pa−1

Polynomial, 1st order (absolute error norm for fit)
  ANSYS fit: C10 = −1.09224E+5 Pa; C01 = 1.139816E+6 Pa; D1 = 0 Pa−1
  Manual fit: C10 = −3.8E+5 Pa; C01 = 2.0E+6 Pa; D1 = 3E−6 Pa−1

Polynomial, 2nd order (normalized error norm for fit)
  ANSYS fit: C10 = −1.0988845E+7 Pa; C01 = 1.2759561E+7 Pa; C20 = 5.097485E+6 Pa; C11 = −1.854489E+7 Pa; C02 = 2.1205375E+7 Pa; D1 = 0 Pa−1; D2 = 0 Pa−1
  Manual fit: C10 = −1.0988845E+7 Pa; C01 = 1.2759561E+7 Pa; C20 = 5.097485E+6 Pa; C11 = −1.854489E+7 Pa; C02 = 2.1205375E+7 Pa; D1 = 4E−8 Pa−1; D2 = 4E−8 Pa−1

Yeoh, 1st order (normalized error norm for fit)
  ANSYS fit: C10 = 5.65581E+5 Pa; D1 = 0 Pa−1
  Manual fit: C10 = 5.65581E+5 Pa; D1 = 9E−8 Pa−1

Yeoh, 2nd order (absolute error norm for fit)
  ANSYS fit: C10 = 7.02224E+5 Pa; C20 = −56,353.78 Pa; D1 = 0 Pa−1; D2 = 0 Pa−1
  Manual fit: identical to the ANSYS fit

Yeoh, 3rd order (absolute error norm for fit)
  ANSYS fit: C10 = 9.36167E+5 Pa; C20 = −3.39255E+5 Pa; C30 = 84,256.5353096 Pa; D1 = 0 Pa−1; D2 = 0 Pa−1; D3 = 0 Pa−1
  Manual fit: identical to the ANSYS fit

Ogden, 1st order (absolute error norm for fit)
  ANSYS fit: MU1 = 1.2006E+10 Pa; A1 = 0.000274922750430; D1 = 0 Pa−1
  Manual fit: identical to the ANSYS fit
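For intuition on what such a fit does, the sketch below (Python; synthetic data, not the measurements of this work) estimates the single Neo-Hookean constant μ from uniaxial engineering stress data, using the standard incompressible nominal-stress relation P(λ) = μ(λ − λ⁻²):

```python
import numpy as np

def neo_hookean_nominal(lam, mu):
    """Engineering (nominal) stress for incompressible Neo-Hookean uniaxial tension."""
    return mu * (lam - lam**-2)

# Synthetic "experimental" curve: a true mu of 1.1 MPa with a small perturbation
lam = np.linspace(1.0, 2.0, 50)
P_exp = neo_hookean_nominal(lam, 1.1e6) * (1 + 0.02 * np.sin(5 * lam))

# The model is linear in mu, so least squares has a closed form
x = lam - lam**-2
mu_fit = np.sum(P_exp * x) / np.sum(x**2)   # Pa
```

Multi-constant models (Mooney–Rivlin, Yeoh, etc.) are fit the same way in principle, but with several basis terms, which is what the ANSYS curve-fitting tool automates.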

3.4 Virtual Simulation of the Uniaxial Tension Test

The accuracy of the fitted hyperelastic constants in Table 1 was verified through virtual simulations using the same mechanical parameters as the uniaxial tension test on the ASTM D412C specimen, as shown in Fig. 6. The root mean square (RMS) relative error was used to measure the accuracy of the fits.

886

A. F. Müller et al.

Fig. 6 Virtual simulation using finite elements in ANSYS. Configuration of the uniaxial tension test on the ASTM D412C sample using the hyperelastic constants in Table 1

The RMS relative error of the virtual simulations was calculated relative to the experimental data, yielding the accuracy comparison of the fitted constants of the hyperelastic models shown in Fig. 7.
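The RMS relative error between a simulated and an experimental curve can be computed as in the sketch below (the force values are illustrative placeholders, not data from this work):

```python
import numpy as np

def rms_relative_error(sim, exp):
    """RMS of the pointwise relative error between simulated and experimental curves."""
    sim = np.asarray(sim, dtype=float)
    exp = np.asarray(exp, dtype=float)
    rel = (sim - exp) / exp          # assumes exp has no zero entries
    return np.sqrt(np.mean(rel**2))

# Illustrative force samples along matching displacement points
exp = np.array([1.0, 2.0, 3.0, 4.0])
sim = np.array([1.05, 1.9, 3.1, 4.2])
err = rms_relative_error(sim, exp)   # fraction; multiply by 100 for percent
```

With this definition, a value of 0.025 corresponds to the 2.5% figure reported for the best model and 2.8 to the 280% of the worst.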

3.5 Mechanical Testing on the 3D Ninjaflex Stent

The mechanical deformation produced in the trachea when the patient coughs is represented in Fig. 8. Considered a special case, it can be used as a reference to simulate the mechanical deformation suffered by the stent when anatomically placed in the patient. This is a critical condition due to the mechanical effort applied to the stent, which can cause migration or even expulsion [11].

A cough-like mechanical test was performed on a 3D Ninjaflex black stent, as shown in Fig. 9a.

Fig. 7 RMS relative error of the virtual simulations in relation to the experimental data (force × displacement). Uniaxial tension test on the ASTM D412C sample 3D printed in black Ninjaflex

The RMS relative error of the virtual simulations of the 3D Ninjaflex black stent (Fig. 9d) was calculated in relation to the experimental data (deformation force versus displacement), obtaining the comparison between the hyperelastic models shown in Fig. 10.

3.6 Mechanical Comparison Between HCPA-1, 3D Ninjaflex Black Stent and Virtual Simulation

The cough-like mechanical test was performed on the HCPA-1 (Fig. 9c) and the 3D Ninjaflex black stent (Fig. 9a). The comparison of these mechanical tests and the virtual simulation is shown in Fig. 11.

Fig. 8 Deformed shape of a tracheal section during a coughing event. Adapted from [11]

Fig. 9 Cough-like mechanical test applied (5 mm displacement): a real strain on the 3D Ninjaflex black stent, b virtual simulation: strain on the HCPA-1 stent, c real strain on the HCPA-1 stent, d virtual simulation: strain on the 3D Ninjaflex black stent

4 Conclusions

The flexible Ninjaflex material used for 3D printing the stent is non-biocompatible and has an irregular surface finish that is unsuitable for airway stents, which need a smooth surface finish with the least possible roughness. It is expected that, with the advent of new flexible and biocompatible materials, 3D printing with SLA technology will make it possible to print custom implantable stents.

The pure shear test presented operational problems such as excess slip and/or rupture of samples, so we chose to use its data limited to a maximum strain of 0.07 mm/mm.

In the cough-like mechanical tests, the virtual simulations of the 3D Ninjaflex black stent showed an RMS relative error ranging from 2.5% (Yeoh 1st order) to 280% (Polynomial 2nd order). In the virtual simulations of the uniaxial tension tests using the ASTM D412C samples, the variations of the RMS relative error were smaller; the stent geometry is more complex than the ASTM D412C sample. The small excitation of the I2 invariant, due to the absence of a biaxial tension test, resulted in an inaccurate fit of the hyperelastic constitutive constants [12].

Fig. 10 RMS relative error of the virtual simulations in ANSYS of a cough-like event compared to the experimental data on the 3D Ninjaflex black stent

Fig. 11 Experimental and virtual-simulation data obtained from the cough-like deformation test applied to the HCPA-1 and 3D Ninjaflex black stents

The HCPA-1 stent is approved for use in patients and can be used as a gold standard. The cough-like mechanical test performed on the stents shows dynamic behavior; other types of mechanical tests can be performed and used as reference standards in the development of new stents. There is no technical standard for airway stents, although for endovascular stents complete FDA guidance exists [13].

Using the information in Fig. 11, a typical development application can be carried out: for example, changing the original design of the stent by increasing the wall thickness and simulating in ANSYS until a mechanical response similar to the HCPA-1 stent is achieved. Using the modified design, a stent can then be printed and compared with the HCPA-1.

Evaluating the proposed method, it is observed that an airway stent can be virtually simulated using finite elements. Despite the inhomogeneity of the 3D printing process in the production of the stent samples using FDM

technology, and the absence of a biaxial tension test, it was possible to obtain accurate and consistent results. Based on the experimental results, this technique is considered adequate to assist new developments of airway stents.

References

1. Barros Casas D, Fernandez-Bussy S, Folch E (2014) Non-malignant central airway obstruction. Arch Bronconeumol 50:345–354
2. Schweiger T, Gildea TR, Prosch H, Lang G, Klepetko W, Hoetzenecker SO, Fleck N, Radford D (2006) The uniaxial stress versus strain response of pig skin and silicone rubber at low and high strain rates. Int J Impact Eng 32:1384–1402
3. Cheng GZ, San Jose Estepar R, Folch E (2016) Three-dimensional printing and 3D Slicer: powerful tools in understanding and treating structural lung disease. Chest 149:1136–1142
4. FDA. 3D printing of medical devices. https://www.fda.gov/medical-devices/products-and-medical-procedures/3d-printing-medical-devices. Accessed 10 Apr 2020
5. INCA (2018) Incidência de Câncer no Brasil para 2018. Instituto Nacional do Câncer
6. Meunier L, Chagnon G, Favier D, Orgéas L, Vacher P (2008) Mechanical experimental characterisation and numerical modelling of an unfilled silicone rubber. Polym Test 27:765–777
7. Vacher P, Dumoulin S, Morestin F, Mguil-Touchal S (1999) Bidimensional strain measurement using digital images. Proc Inst Mech Eng Part C 213:811–817
8. Favier D, Louche H, Schlosser P, Orgéas L, Vacher P, Debove L (2007) Homogeneous and heterogeneous deformation mechanisms in an austenitic polycrystalline Ti–50.8 at.% Ni thin tube under tension: investigation via temperature and strain fields measurements. Acta Mater 55:5310–5322
9. Giton M, Caro-Bretelle A-S, Ienny P (2006) Hyperelastic behaviour identification by forward problem resolution: application to a tear test of silicone-rubber. Strain 42:291–297
10. Frank S (2008) Optical strain measurement by digital image analysis. Matlab Central. https://www.mathworks.com/matlabcentral/fileexchange/20438-optical-strain-measurement-by-digital-image-analysis. Accessed 21 Nov 2019
11. Malvè M, del Palomar AP, López-Villalobos JL et al (2010) FSI analysis of the coughing mechanism in a human trachea. Ann Biomed Eng 38:1556–1565
12. Marczak RJ, Iturrioz I (2006) Caracterização do Comportamento de Materiais Hiperelásticos [Characterization of the behavior of hyperelastic materials]
13. FDA (2010) Guidance for industry and FDA staff—non-clinical engineering tests and recommended labeling for intravascular stents and associated delivery systems. Document issued 18 Apr 2010

Are PSoC Noise Levels Low Enough for Single-Chip Active EMG Electrodes? W. S. Araujo, I. Q. Moura and A. Siqueira-Junior

Abstract

Electromyography (EMG) is the study of muscular function through the detection of the electrical signal produced by the muscles. It is used to investigate several motor disorders such as Parkinson's disease, multiple sclerosis, and spinal cord injuries. Electromyographic signals can also be used to develop assistive technology devices, improving the quality of life of certain patients. Such devices require a small, wearable, active electromyographic electrode. Programmable System on Chip microcontrollers include analog and digital elements that allow most of the electromyographic signal acquisition chain to be integrated in a single integrated circuit. However, we must verify whether the internal noise levels of these devices are compatible with this application. This work describes a surface electromyographic acquisition system implemented as an active electrode in a single Programmable System on Chip, and the experiments conducted to evaluate the internal noise of the developed system and verify whether it is compatible with electromyographic signal requirements.

Keywords: Electromyography • Programmable system on chip • Noise analysis

1 Introduction

Electromyographic (EMG) signals are used for physiological investigations, biofeedback, prosthesis control, ergonomics, sports and movement analysis, and allow an assessment of muscle function [1,2]. The EMG signal is generated by skeletal muscle during muscle contraction. To generate a muscle contraction, a signal is transmitted from the brain along the nerves to the muscle fibers, acting on elementary muscles through complex biochemical events [1].

W. S. Araujo · I. Q. Moura · A. Siqueira-Junior (B): Instituto Federal do Triângulo Mineiro, Campus Ituiutaba, Av. Belarmino Vilela Junqueira, S/N, Ituiutaba, MG, Brazil. e-mail: [email protected]

Usually, the EMG signal is acquired by an EMG recording system attached to a computer. The signal captured from the human body is analog and must be converted to a digital signal. Several factors influence the acquisition of these signals, for instance the type of motor unit, the level of muscle contraction, the size of the electrodes, the distance between wires, and the position of the electrodes on the muscle surface [1,3]. These factors should be observed in the development of an EMG capture system, which requires specific elements such as instrumentation amplifiers, filters, and analog-to-digital converters [4].

EMG signals are affected by different noise sources. Noise is any undesirable disturbance, not coming from or related to the input, that overlaps the applied signal [5]. Noise directly interferes with the information in the signal, impairing its quality. In EMG signals, noise depends on several factors, such as the electrode dimensions, the preparation of the skin-electrode contact, the quality of the EMG amplifier, direct current (DC) and noise voltages generated at the skin-electrode interface, and other electrical phenomena not related to EMG [1,3]. These noises sometimes mask the signal completely.

The noise present in EMG signal acquisition emanates from various sources. Instrumentation noise is inherent to the electronic components of the acquisition system. It consists of virtually all frequencies in the spectrum and cannot be eliminated; it can only be reduced through the use of quality components and better circuit design and construction techniques [1–3,6]. Ambient noise includes electromagnetic radiation interference from radio transmissions, electrical power wires, light bulbs, etc. This noise is composed of different components according to its origin.
High-frequency noise, such as that transmitted by radio, is eliminated by filters in the acquisition system, but the 50/60 Hz electrical network noise is more difficult to deal with [1,6]. Methods commonly used to mitigate the 50/60 Hz interference are detailed in the methodology of this work.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_133

Movement artifact noise occurs due to the movement of the electrode-skin interface and of the cables that connect the electrode to the amplifier. The bandwidth of this type of noise is between 0 and 20 Hz, and it can be mitigated by better circuit design and by reducing the cable length. Instabilities inherent to the EMG signal in the 0–20 Hz range must also be removed through filters; these instabilities occur due to metabolic activity itself and the quasi-random nature of motor unit firing [1,6].

Surface EMG background activity can reach up to 5 μVRMS [7]. Therefore, instrumentation noise must be lower than this value, and in research applications it must be less than 2 μVRMS [3].

The Programmable System on Chip (PSoC) [8] is an integrated circuit that includes a microcontroller and several programmable analog and digital elements that can be used to build a classic EMG acquisition system [9–12]. This makes it possible to implement almost the entire design of an active EMG electrode on a single chip. However, since EMG systems usually have high precision requirements and are subjected to several noise sources, it is necessary to verify whether the noise levels of the developed instrumentation are compatible. This work aims to evaluate whether the PSoC noise levels are low enough for a single-chip active EMG electrode. It describes an EMG acquisition system implemented in a single PSoC chip and the experiments to characterize the instrumentation noise of the developed EMG electrode.

2 Methodology

The raw surface EMG signal has amplitudes of up to 10 mV and a bandwidth of 20–500 Hz [1,3,7]. The digital acquisition of this signal requires amplifiers compatible with such amplitudes, a band-pass filter adjusted for this frequency band, and a sampling rate greater than 1 kHz to meet the Nyquist criterion.

A single differential surface EMG electrode [13] and acquisition system was implemented in PSoC. The electrode is composed of 3 inputs (non-inverting, inverting, and reference) connected to a differential amplifier. Figure 1 presents a block diagram of the proposed active EMG electrode with its essential components. The EMG signal is captured through surface silver/silver chloride (Ag/AgCl) electrodes attached to the skin. This voltage is sent through the signal conditioning circuit to an analog-to-digital converter (ADC) to obtain the signal samples. Finally, the samples are digitally filtered and transmitted to a computer using a digital communication channel, in this case a Universal Serial Bus (USB) interface.

In this work, our goal was to implement as many of these elements as possible in PSoC. Every element described in the paper was implemented in a single CY8C5888LTI-LP097 PSoC, using the CY8CKIT-059 prototype board and PSoC Creator 4.3 (Cypress Semiconductor Corporation). The next sections detail the design proposed in this work.

Fig. 1 Block diagram of the EMG electrode (electrode → instrumentation amplifier → programmable gain amplifier → ADC → digital filter, inside the PSoC 5LP, connected to a computer over USB)

2.1 Amplifiers and Signal Conditioning

The EMG signals captured by the electrodes have low amplitude and are polluted by environmental noise, especially power line interference (PLI) [1–3]. The instrumentation amplifier is the first stage of EMG signal conditioning, expanding the voltage range of the signal to match the input of the analog-to-digital converter. By performing a differential amplification, it attenuates signals common to both inputs, which serves as a mechanism to mitigate electromagnetic interference from the power source [4].

For this work, a two-op-amp instrumentation amplifier was built using the Programmable Gain Amplifier (PGA) analog components included in the PSoC [14]. In addition to performing the amplification, they allow the gain to be configured from commands sent by the host computer to the microcontroller. The bandwidth of the PGA configured with unit gain in high power mode is 8 MHz, decreasing to 160 kHz at a gain of 50. Table 1 shows the possible gain values and the respective bandwidths of these amplifiers. These frequencies are much higher than the maximum bandwidth of the EMG signal, so all gain levels can be used in our design.

Table 1 List of possible PGA gain values. This element is a configurable non-inverting amplifier; Ra and Rb correspond to feedback resistors that can be configured by the microcontroller, and FMax defines the amplifier bandwidth

Gain  Rb    Ra   FMax
1     40k   40k  8 MHz
2     120k  40k  4 MHz
4     280k  40k  2 MHz
8     600k  40k  1 MHz
16    460k  20k  500 kHz
24    620k  20k  333 kHz
48    470k  10k  166 kHz
50    490k  10k  160 kHz

Figure 2 presents the signal conditioning circuit implemented in this work. The first two amplifiers on the left correspond to a differential amplifier implemented using the switched capacitor and continuous time (SCCT) blocks [8]. The gain of the instrumentation amplifier was defined as 2, since it is the only combination in which the ratio between the available resistors is allowed by this topology [14]. The signal then goes through an amplification stage (gain of −49) with a high-pass filter (16 Hz).

There is great variance in the maximum amplitude of the EMG signal, related to the anatomy, the placement of the electrodes, and specific issues of the body's physiology [3]. Because of this, a gain adjustment is necessary during EMG signal acquisition. The last stage is composed of a user-programmable amplifier followed by a low-pass anti-aliasing filter. Mechanisms were implemented in the microcontroller to adjust the gain of this last amplifier; the gain values can be adjusted according to the same parameters presented in Table 1.
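The gain chain described above (instrumentation amplifier × fixed stage × programmable stage) can be checked against the EMG band with a small sketch; the constant names below are ours, with values taken from the text and Table 1:

```python
# Sketch of the gain chain from Sect. 2.1 (assumed constant names, values from the text)
INAMP_GAIN = 2      # two-op-amp instrumentation amplifier
FIXED_GAIN = 49     # inverting amplification stage with 16 Hz high-pass
PGA_GAINS = {       # PGA gain -> bandwidth in Hz (Table 1)
    1: 8e6, 2: 4e6, 4: 2e6, 8: 1e6,
    16: 500e3, 24: 333e3, 48: 166e3, 50: 160e3,
}
EMG_BANDWIDTH = 500  # Hz, upper limit of the surface EMG band

for g, fmax in sorted(PGA_GAINS.items()):
    total = INAMP_GAIN * FIXED_GAIN * g
    # every PGA setting has far more bandwidth than the EMG band needs
    assert fmax > EMG_BANDWIDTH
    print(f"PGA gain {g:2d}: total gain {total:5d}, bandwidth {fmax/1e3:.0f} kHz")
```

The maximum total gain works out to 2 × 49 × 50 = 4900, the figure used later in the noise analysis.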

2.2

893

between 20 and 250 Hz. Scientific recommendations determine the use of 20 and 500 Hz bandwidth for surface EMG signals [1,3,7,16]. For this project, a sampling rate of 4 kHz was used, much higher than the Nyquist requirement for surface EMG signals. The ADC was configured to use the internal reference (5 V) and has an input between 0 and 5 V. Finally, the samples are then sent to the Digital Filter through a microcontroller interruption routine (Fig. 3).

2.3

Digital Filter

The surface EMG signal bandwidth is set between 20 and 500 Hz. To reduce noise, it is necessary to filter the EMG signal within this range. A possible solution is to use analog filters before the ADC. For this work, it was decided to use a digital filter block as in Fig. 4. The component implements a digital filter in hardware without using the microcontroller central processing unit (CPU). A band-pass Butterworth filter (from 20 to 500 Hz—16th order) was configured in the component. The system doesn’t include a notch filter since it’s use is discouraged for EMG signals. EMG signals have large signal contributions at 50/60 Hz frequencies. The introduction of a notch filtering lead to loss of important EMG signal information and introduces phase rotation that extends to frequencies below and above central frequency thereby dramatically changing the waveform [1,6]. Figure 4 presents the frequency response designed for the filter in PSoC Creator [17].

Analog-to-Digital Converter 2.4

The analog-to-digital converter is connected to the output of the last amplifier. A 12-bit successive approximation ADC [15] was configured in the PSoC. The ADC resolution is within the limits required by international electromyography standards [1,16]. Almost all of the energy of EMG signals lies

USB Data Transmission

Finally, the EMG signal is transferred to a computer for later analysis. For this project, we chose to send the data via a USB communication channel [18]. The USB interface was chosen for the noise analysis due to its robustness and simplicity. PSoC Creator has a communication device class (CDC) component called Universal Serial Bus Universal Asynchronous Receiver Transmitter (USBUART). This module offers an RS-232-compliant data format, data rates up to 6 Mbps, and parity, overrun, and framing error detection, among other features. The Full-Speed USB component (USBFS) has all the settings needed to implement a CDC interface with another device with minimal configuration. This allows constant communication between the components, simplifying the data acquisition process. In future work, this communication will be replaced by a ZigBee or Bluetooth module for wireless transfers.

Fig. 2 Analog signal conditioning stage circuit implemented in PSoC Creator. IA_POS and IA_NEG are the instrumentation amplifier inputs, and REF_IN is the reference

W. S. Araujo et al.

Fig. 3 Analog-to-digital converter and digital filter blocks implemented in the PSoC

Fig. 4 Frequency and step response of the digital Butterworth filter implemented in the PSoC

3

Results

To evaluate the system noise, the instrumentation amplifier inputs were short-circuited and connected to a constant voltage level of 2.5 V, in the middle of the amplifiers' input range. A 5-second acquisition was performed at 4 kHz, totaling 20,000 samples. From them, histograms of the data samples were created (Fig. 5) for the different possible amplification gains. These histograms reveal the amplitude characteristics of the instrumentation noise and allow the total system noise to be explored. An isolated ADC noise analysis was also carried out, in which the 2.5 V continuous signal was applied directly to the converter input (Fig. 3). The results show that the PSoC's embedded ADC noise peaks at up to 8 mV. Although this value is quite high compared to the acceptable noise level in surface EMG signals (about 2 μV RMS), it is equivalent to thirteen least significant bits (LSBs) from peak to peak. As the gain increases, the ADC noise interferes less in the overall signal-to-noise ratio (SNR). For example, with a gain of 4900, this noise is equivalent to an input voltage variation of 8 mV/4900 = 1.63 μV, which is satisfactory for capturing EMG signals. The noise measured from the instrumentation amplifier was 10 mV, and the common-mode rejection ratio (CMRR) was −29.13 dB. The total noise of the system was about 20 μV for all gains. This measured noise and the low gain of the instrumentation amplifier make it impossible to use this design to capture EMG signals. The noise's power spectral density (PSD) was also estimated to verify the presence of components related to power-line interference (Fig. 6). In the graph, the harmonics of the local power line (60 Hz and its multiples) are marked on the frequency axis. The filled area corresponds to the EMG signal's bandwidth. The results show that power-line noise is present and some harmonics can be detected, but their power is not much greater than that of the remaining noise.
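The input-referred noise arithmetic above (the measured output noise divided by the gain) can be reproduced with a short sketch:

```python
def input_referred_noise_uV(output_noise_v, gain):
    """Refer an output noise amplitude back to the amplifier input, in microvolts."""
    return output_noise_v / gain * 1e6

# The 8 mV peak ADC noise divided by the maximum gain of 4900, as in the text
noise = input_referred_noise_uV(8e-3, 4900)
print(round(noise, 2))  # 1.63
```

The same calculation applied to the 20 μV total system noise shows why the design misses the roughly 2 μV RMS target quoted in the text.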

133 Are PSoC Noise Levels Low Enough for Single-Chip Active EMG Electrodes?


Fig. 5 Histograms of the measured instrumentation noise. In the first graph, the noise is exclusively related to the ADC and digital filter (Fig. 3). The next shows the instrumentation amplifier noise. In the remaining histograms, the noise corresponds to the complete circuit, involving the ADC, filter, and amplifiers (Fig. 2)

Fig. 6 Estimation of the power spectral density of the collected samples. The filled area corresponds to the EMG signal bandwidth


4


Discussion

This work described an implementation of a complete EMG signal acquisition system in a single PSoC device. It was possible to implement all the necessary elements, including the amplifiers, filters, and ADC, in a single device, but the gain and noise characteristics do not meet the requirements for EMG signal acquisition. From the results, the main bottleneck is the instrumentation amplifier: it features limited gain, low CMRR, and high noise. The PSoC documentation notes that the internal connections have significant resistance, which can interfere with the symmetry of the resistors required by the instrumentation amplifier [19]. An external amplifier would be necessary for this type of application, while the remaining system elements could be implemented in the PSoC. Despite not achieving a single-chip design, the use of the PSoC still favors greater integration of the system. The spectral analysis also shows that the system has high immunity to induced power-line noise: power-line harmonics can be noticed in the spectrogram, but their amplitude is comparable to that of the remaining noise. No isolation system was implemented between the user and the computer, which can eventually be connected to the power line, posing risks to the user. Since the initial research focus was to measure the internal instrumentation noise, a USB channel was used due to its fast implementation; for this reason, no experiment involving human beings was carried out. In the future, the USB data transmission will be replaced by a wireless mechanism, removing the need for an isolation circuit. Even so, the system in its current state can be used with a specific USB isolation system [20].

5

Conclusion

This work described the implementation of an EMG signal acquisition system in a single electronic device and characterized its instrumentation noise. The results demonstrate that the system noise is incompatible with most electromyography applications. The main bottleneck is the instrumentation amplifier implemented using PSoC components: this design was not able to achieve the desired CMRR and noise levels. However, the remaining parts of the system could be implemented on the device, improving system integration. In this sense, the PSoC proved very flexible during the development of the project.

Acknowledgements We would like to thank Leonardo Leal Queiroz Marrega for his collaboration in this work. This project is supported by IFTM, CAPES, FAPEMIG and CNPq. The work of W. Silva and I. Moura was supported by the IFTM PIBIC program.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Merletti R, Parker P (2004) Electromyography: physiology, engineering, and noninvasive applications. Wiley-Blackwell
2. Chowdhury R, Reaz M, Ali M et al (2013) Surface electromyography signal processing and classification techniques. Sensors 13:12431–12466
3. De Luca C (1997) The use of surface electromyography in biomechanics. J Appl Biomech 13:135–163
4. Webster J (2009) Medical instrumentation—application and design. Wiley, New York
5. Malvino A, Bates D (2015) Electronic principles. McGraw-Hill Education, New York
6. Surface electromyography: detection and recording. https://www.delsys.com/downloads/TUTORIAL/semg-detection-and-recording.pdf
7. De Luca C, Donald Gilmore L, Kuznetsov M et al (2010) Filtering the surface EMG signal: movement artifact and baseline noise contamination. J Biomech 43:1573–1579
8. 32-bit Arm Cortex-M3 PSoC 5LP. https://www.cypress.com/products/32-bit-arm-cortex-m3-psoc-5lp
9. Kobayashi H (2011) A ZigBee based wireless EMG/ECG streaming system for the universal interface. In: IECON proceedings of the industrial electronics conference, Melbourne, Australia, pp 2094–2099
10. Mahaphonchaikul K, Sueaseenak D, Pintavirooj C et al (2010) EMG signal feature extraction based on wavelet transform. In: ECTI-CON 2010: the 2010 ECTI international conference on electrical engineering/electronics, computer, telecommunications and information technology, pp 327–331
11. Almeida J, Rodríguez Q, Rubiano L (2015) PSoC-based embedded system for the acquisition of EMG signals with Android mobile device display. Tecciencia 10:14–19
12. Rojas W, Santa F, Porras J (2014) Biomedical instrumentation applications using PSoC microcontrollers. Visión Electrónica 8:70–81
13. De Luca C, Kuznetsov M, Gilmore L et al (2012) Inter-electrode spacing of surface EMG sensors: reduction of crosstalk contamination during voluntary contractions. J Biomech 45:555–561
14. AN60319—Instrumentation amplifier using PSoC 3/PSoC 5. https://www.cypress.com/file/53376/download
15. Sequencing successive approximation ADC. https://www.cypress.com/documentation/component-datasheets/sequencing-successive-approximation-adc-adcsarseq
16. SENIAM. http://www.seniam.org
17. Filter. https://www.cypress.com/documentation/component-datasheets/filter
18. AN56377—PSoC 3 and PSoC 5LP—introduction to implementing USB data transfers. https://www.cypress.com/documentation/application-notes/an56377-psoc-3-and-psoc-5lp-introduction-implementing-usb-data
19. AN58827—PSoC 3 and 5LP internal analog routing considerations. https://www.cypress.com/file/121046/download
20. Gonzalez-Briceno G, Medow J, Ortega-Cisneros S et al (2019) Design and evaluation of a USB isolator for medical instrumentation. IEEE Trans Instrum Meas 68:797–806

Implementation of an Ultrasound Data Transfer System via Ethernet with FPGA-Based Embedded Processing J. de Oliveira, A. A. Assef, R. A. C. Medeiros, J. M. Maia, and E. T. Costa

Abstract

In this paper, we present the implementation of a reconfigurable FPGA-based system for transferring raw ultrasound data over a Gigabit Ethernet interface, to perform and evaluate the digital signal processing steps of the Delay-and-Sum (DAS) beamforming method for B-mode imaging, with the goal of reducing the transfer and computing time. The proposed system consists of two Terasic FPGA development boards and a computer with the Matlab software. The first FPGA board assembles and transmits data packets with ultrasonic information via the Ethernet protocol. The second FPGA board receives the data packets and performs the back-end digital signal processing. The resulting data are transferred to the PC through a USB port for scan conversion and image display. The experiments were performed using RF data from an ultrasound phantom for active apertures of 8 and 32 elements. The performance of the proposed algorithm was evaluated by computing the normalized root mean square error (NRMSE), contrast ratio (CR) and contrast-to-noise ratio (CNR) of the reconstructed B-mode images. The analysis of the reconstructed images was performed by comparing the results from the reference Matlab script with the results from the Simulink model and the FPGA experimental architecture. The overall processing time was reduced significantly, to less than 10 s. Both the qualitative and quantitative analysis results of the generated images indicate that the Simulink and FPGA responses are in excellent agreement with the reference Matlab model.

Keywords

Ultrasound imaging · Ethernet · FPGA · Delay-and-Sum beamforming

J. de Oliveira (✉) · A. A. Assef · R. A. C. Medeiros · J. M. Maia
Federal University of Technology—Paraná (UTFPR)/CPGEI/PPGSE, Av. Sete de Setembro 3165, Rebouças, Curitiba, Brazil
E. T. Costa
Biomedical Engineering Department (DEB/FEEC), State University of Campinas (UNICAMP), Campinas, Brazil

1

Introduction

Ultrasound (US) imaging has been widely used in medical diagnosis to visualize internal organs and tissue structures. Sophisticated US systems are employed to display real-time, high-quality moving images of soft-tissue contrast together with blood flow patterns [1, 2]. To meet the high computational and throughput requirements, such medical US machines typically use Application-Specific ICs (ASICs), Digital Signal Processors (DSPs), Field-Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs) [3]. However, despite the high technology involved and the ability to generate images in real time, the majority of commercial US imaging systems have a “closed” architecture that does not always provide flexibility and access to the radiofrequency (RF) data required to develop new US techniques [4, 5]. Any quantitative frequency information from the RF echo data at the various stages of signal processing is lost, because the signals are subjected to filtering, envelope detection, scan conversion, and log compression steps, among others [6]. On the other hand, “open” US research platforms, generally developed by international companies, such as the Vantage Research System (Verasonics Inc., United States of America) [7], are cost-prohibitive and subject to import taxes, making them financially inaccessible to most Brazilian students and researchers. In an effort to address these problems, our group has developed an open system for research and teaching applications, which is capable of simultaneous transmission/reception on up to 128 channels [5]. The system consists of an

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_134



J. de Oliveira et al.

FPGA-based US front-end hardware segment and a PC-based software processing segment. Thus, after the data capture, the front-end system sends the acquired raw data to the PC, where all back-end processing steps are performed by Matlab. However, the transmission throughput is limited and becomes the main bottleneck of this approach: the data transmission time plus the computation time can reach approximately 30 min per generated image. In this paper, we present the implementation of an FPGA-based system for transferring raw US data over a Gigabit Ethernet interface, to perform and evaluate the digital signal processing according to the Delay-and-Sum (DAS) beamforming method for B-mode imaging, in order to reduce both the transfer and computing time.

2

Methods

The block diagram of the proposed system is shown in Fig. 1. The system consists of two Terasic FPGA development boards (DE2-115 and DE4-230) and a PC with Matlab software. The first FPGA board (DE2-115) is used for assembling and transmitting data packets with ultrasonic information in Gigabit Ethernet protocol, using the Intel Triple Speed Ethernet (TSE) Megacore function [8]. The second FPGA board (DE4-230) receives the data packets and performs the back-end digital signal processing for B-mode imaging by using the DAS beamforming method. The resulting data are transferred to the PC, through a USB port for scan conversion and image display. The DE2-115 and the DE4-230 FPGA boards are equipped with Intel Cyclone IV EP4CE115 and Stratix IV EP4SGX230 FPGAs, respectively. The process for generating multiple acoustic waves is called beamforming. In order to obtain an optimal lateral resolution, the US beam is focused through an active aperture, which comprises a group of elements of an array transducer that transmits ultrasonic signals and receives echoes from a region of interest [1]. The digital signal processing steps are presented in Fig. 2. The DAS reconstruction algorithm was applied to the B-mode imaging, which is based on a coherent summation of RF lines after digital filtering, focusing delay for phase alignment and apodization to control the sidelobe level of the beam pattern. Then, demodulation using a Hilbert Transform-based FIR filter strategy [9] and envelope detection is carried out to extract the magnitude information of the US signal. It is followed by a logarithmic compression, which reduces the dynamic range of the envelope. Finally, scan conversion is applied to map the acquired Polar coordinate data to Cartesian coordinates of the specified display window size [1, 2]. Two FPGA designs were built and evaluated for quantitative and qualitative comparisons of the generated images.
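The delay-and-sum step described above (align each channel by its focusing delay, apodize, and sum coherently) can be illustrated with a toy sketch. The array data and delay values here are synthetic, not the paper's RF data or actual focusing geometry.

```python
def das_beamform(rf_lines, delays, apodization):
    """Delay-and-Sum: shift each channel by its focusing delay (in samples),
    weight it by the apodization window, and sum the channels coherently."""
    n = len(rf_lines[0])
    out = [0.0] * n
    for line, d, w in zip(rf_lines, delays, apodization):
        for i in range(n):
            j = i + d  # compensate a later-arriving echo by reading ahead
            if 0 <= j < len(line):
                out[i] += w * line[j]
    return out

# Toy example: the same echo reaches channel 1 one sample later than channel 0;
# DAS realigns the two channels so the echoes add coherently.
ch0 = [0.0, 1.0, 0.0, 0.0]
ch1 = [0.0, 0.0, 1.0, 0.0]
print(das_beamform([ch0, ch1], delays=[0, 1], apodization=[0.5, 0.5]))  # [0.0, 1.0, 0.0, 0.0]
```

With wrong delays the echoes sum incoherently and the peak is smeared, which is the lateral-resolution argument behind focused apertures.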

The first design had a synthetic aperture of 8 active elements and the second an aperture of 32 active elements. The experiments were performed using RF data from a multipurpose US phantom (Fluke Biomedical, model 84-317), acquired by a US system developed by our group [5]. The digital data were sampled at 40 MHz with 12-bit resolution using a 3.2 MHz 128-element convex transducer (Broadsound Corp., AT3C52B) for the active apertures of 8 and 32 elements, and stored in the DE2-115 board. The performance of the proposed algorithm was evaluated by computing the square root of the normalized mean square error (NRMSE) cost function in comparison with a reference Matlab script validated in previous studies. The NRMSE can be represented as in (1) and is considered excellent for values lower than 10%, good for 11–20%, fair for 21–30% and poor for values above 30%:

\mathrm{NRMSE} = \sqrt{\dfrac{\sum_{n=0}^{M-1} \left| Y(n) - hy(n) \right|^2}{\sum_{n=0}^{M-1} \left| hy(n) - \overline{hy} \right|^2}} \times 100 \qquad (1)

where n is the sample index, M is the number of samples, Y(n) is the information obtained through the reference Matlab script, hy(n) is the implemented Simulink model or FPGA response, and \overline{hy} is the average value of hy(n) [9]. Additionally, we used the contrast ratio (CR) and the contrast-to-noise ratio (CNR) to compare the contrast resolution of the reconstructed B-mode images, described by (2) and (3), respectively:

\mathrm{CR} = 1 - \dfrac{\mu_t}{\mu_b} \qquad (2)

\mathrm{CNR} = \dfrac{\left| \mu_t - \mu_b \right|}{\sigma_b} \qquad (3)

where \mu_t is the mean intensity of the target in gray scale, \mu_b is the mean intensity of the background region, both in dB, and \sigma_b is the standard deviation of the background intensity [9].
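The three metrics can be sketched in plain Python; toy arrays stand in for the image data, and CNR follows the standard |μt − μb|/σb form, which is consistent with the values reported in Table 1.

```python
import math

def nrmse_percent(y_ref, hy):
    """Eq. (1): residual against the reference script output Y(n), normalized
    by the spread of the implemented response hy(n), in percent."""
    hy_mean = sum(hy) / len(hy)
    num = sum((y - h) ** 2 for y, h in zip(y_ref, hy))
    den = sum((h - hy_mean) ** 2 for h in hy)
    return math.sqrt(num / den) * 100

def contrast_ratio(mu_t, mu_b):
    """Eq. (2): CR = 1 - mu_t / mu_b (mean intensities in dB)."""
    return 1 - mu_t / mu_b

def contrast_to_noise(mu_t, mu_b, sigma_b):
    """Eq. (3): CNR = |mu_t - mu_b| / sigma_b."""
    return abs(mu_t - mu_b) / sigma_b

y_ref = [0.0, 1.0, 2.0, 3.0]   # toy reference samples
hy = [0.0, 1.1, 2.0, 2.9]      # toy Simulink/FPGA response
print(round(nrmse_percent(y_ref, hy), 2))  # 6.58 -> "excellent" (< 10%)
```

In the paper these functions would be evaluated over the reconstructed scanlines and over the regions A–D of Fig. 4, respectively.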

3

Results

To verify the results of the data transfer system via Ethernet with the FPGA-based embedded processing method, the analysis of the images was performed comparing the results from the reference Matlab script to the results from the Simulink model and the FPGA experimental architecture, processed by the DE4-230 development board. The generated images are shown in Fig. 3 with a dynamic range of 30 dB. Figure 3a, c, e and b, d, f show the reconstructed B-mode US images generated by simulations from the Matlab script and Simulink model, and experimentally by

Implementation of an Ultrasound Data Transfer System …


Fig. 1 Block diagram of the proposed transfer and processing system

Fig. 2 Block diagram of the proposed processing system

Stratix IV FPGA with active apertures of 8 and 32 elements, respectively. By comparing the reconstructed images with an active aperture of 8 elements, the NRMSE from the Simulink model (Fig. 3c) was 8.4%, while for FPGA (Fig. 3e) it was 20.31%. However, making the same comparison, but now

eliminating the first 100 samples from the scanlines, the NRMSE result for the FPGA decreases to 8.31%. Now considering the active aperture of 32 elements, the NRMSE calculated for the Simulink model (Fig. 3d) was 10.16%, while for the experimental data of the FPGA design (Fig. 3f) it was 21.25%. As in the previous



Fig. 3 Reconstructed B-mode ultrasound images using the DAS method for contrast analysis generated by simulations from the Matlab script and Simulink model, and experimentally by FPGA Stratix with active aperture of 8 (a, c, e) and 32 elements (b, d, f), respectively

comparison, the first 100 samples were not computed, resulting in an NRMSE of 9.61%. This represents an excellent agreement between the reference model and experimental method. It implies that the comparison is more

relevant for the region of interest, that is, for values farther than approximately 1.9 mm from the face of the transducer. Both FPGA designs were also analyzed quantitatively through a contrast analysis. Figure 4 shows the regions A, B, C and


D used for the comparison of the CR and CNR values. Table 1 shows the calculated CR and CNR values of regions A, B, C and D for the three methods and their respective error percentages, with the active apertures of 8 and 32 elements.

Fig. 4 Regions A, B, C and D used for CR and CNR analysis

Table 1 CR and CNR results for regions A, B, C and D

Region  Aperture  Method    CR    % Error  CNR   % Error
A       8         Script    0.18  –        1.29  –
                  Simulink  0.17  3.09     1.28  1.20
                  FPGA      0.17  3.07     1.28  1.18
        32        Script    0.18  –        1.4   –
                  Simulink  0.17  5.61     1.39  0.61
                  FPGA      0.17  5.6      1.39  0.58
B       8         Script    0.14  –        1.12  –
                  Simulink  0.13  3.99     1.11  1.13
                  FPGA      0.13  4        1.11  1.14
        32        Script    0.24  –        1.99  –
                  Simulink  0.22  4.56     2.01  0.95
                  FPGA      0.22  4.55     2.01  0.96
C       8         Script    0.10  –        0.75  –
                  Simulink  0.09  4.54     0.73  1.54
                  FPGA      0.09  4.57     0.73  1.56
        32        Script    0.30  –        1.76  –
                  Simulink  0.29  4.15     1.77  0.45
                  FPGA      0.29  4.12     1.77  0.48
D       8         Script    0.16  –        1.91  –
                  Simulink  0.16  3.13     1.89  0.95
                  FPGA      0.16  3.10     1.89  0.92
        32        Script    0.21  –        4.41  –
                  Simulink  0.20  4.70     4.74  7.55
                  FPGA      0.20  4.69     4.71  6.76

The qualitative results presented in Fig. 3 show an excellent agreement among the three evaluated methods. The quantitative analyses of CR and CNR indicate a low percentage of error between the simulated and experimental results: the percentage error between the models is practically the same, varying from 3.07 to 5.61% for CR and from 0.45 to 7.55% for CNR. As expected, the CR and CNR values are higher for the images with an active aperture of 32 elements, indicating better contrast. However, the 32-element model also has larger errors, reaching 7.55% in region D. This behavior can be explained by the amount of information processed in parallel: the coherent sum is performed over more channels, so the individual resolution errors are also added, whereas for the 8-element aperture the number of channels, and consequently the accumulated error, is smaller. In this work, the Matlab algorithm uses standard double-precision arithmetic, while the Simulink and FPGA implementations use 17 bits in the fractional part. The overall time from the beginning of the transfer until the last storage by the FPGA is 4 s for the 8-element aperture and 9 s for the 32-element aperture. Compared with the previous strategy, in which the acquired raw RF data are transferred to the PC and the B-mode back-end processing steps are performed by Matlab, this represents a reduction from approximately 30 min to less than 10 s. Table 2 presents the Cyclone IV and Stratix IV FPGA resource utilization summary for the active apertures of 8 and 32 elements.

Table 2 Intel EP4CE115F29C7 and EP4SGX230KF40C2 FPGA utilization summary

Device                           EP4CE115F29C7 Cyclone IV E   EP4SGX230KF40C2 Stratix IV
Aperture                         8/32                         8                          32
Logic elements (%)               13                           14                         19
Comb. ALUT                       –                            15,858/182,400 (9%)        23,662/182,400 (13%)
Memory ALUT                      –                            335/91,200 (<1%)           299/91,200 (<1%)
Dedicated logical register       10,871/114,480 (9%)          19,577/182,400 (11%)       25,263/182,400 (14%)
Total bits of memory blocks      2,668,345/3,981,312 (67%)    968,992/14,625,792 (7%)    2,167,396/14,625,792 (15%)
DSP blocks with 18-bit elements  –                            166/1288 (13%)             598/1288 (46%)
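The error-accumulation argument (a coherent sum over more channels adds more per-channel rounding error) can be illustrated with a toy fixed-point sketch. The 17 fractional bits come from the text; the sample values are arbitrary.

```python
def quantize(x, frac_bits=17):
    """Round x to a fixed-point grid with the given number of fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def coherent_sum_error(value, channels, frac_bits=17):
    """Error between a coherent sum of quantized channel samples and the true sum."""
    true_sum = value * channels
    q_sum = sum(quantize(value, frac_bits) for _ in range(channels))
    return abs(q_sum - true_sum)

# Summing over 32 channels accumulates more rounding error than over 8
err_8, err_32 = coherent_sum_error(0.1, 8), coherent_sum_error(0.1, 32)
print(err_32 > err_8)  # True
```

The same effect explains why the 32-element FPGA design shows slightly larger NRMSE, CR, and CNR deviations than the 8-element one.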

4

Conclusions

This work investigated the implementation of a reconfigurable system for US data transfer, via an Ethernet network, with FPGA-based embedded processing to reduce the B-mode imaging computation time. This objective was achieved by using two FPGA boards and a PC for image display. The proposed system can also be used to investigate other I/O strategies to increase the throughput capacity, such as PCI Express, which is available on the DE4-230 board. Both the qualitative and quantitative analyses of the NRMSE, CNR and CR results of the generated images make it possible to conclude that the Simulink and FPGA responses are in excellent agreement with the reference Matlab model. In future studies, we plan to develop a dedicated FPGA board that connects a US research platform directly to a DE4-230 FPGA board, allowing the development of novel transmission and reception US strategies that can improve the quality of the reconstructed images.

Acknowledgements The authors would like to thank the following Brazilian organizations for the financial support that made our research possible: CNPq, FINEP, the Araucária Foundation, CAPES, UTFPR, and the Ministry of Health. We also thank the Intel FPGA University Program for donating the DE4-230 FPGA board.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Shung KK (2006) Diagnostic ultrasound: imaging and blood flow measurements. CRC Press, Boca Raton
2. Hedrick WR, Hykes DL, Starchman DE (2004) Ultrasound physics and instrumentation. CV Mosby, St. Louis, London
3. Kruizinga P et al (2017) Towards 3D ultrasound imaging of the carotid artery using a programmable and tileable matrix array. In: 2017 IEEE international ultrasonics symposium (IUS). IEEE, pp 1–3
4. Jensen JA (1996) Field: a program for simulating ultrasound systems. Paper presented at the 10th Nordic-Baltic conference on biomedical imaging. Med Biol Eng Comput 34(1):351–353
5. Assef AA, Maia JM, Costa ET (2015) A flexible multichannel FPGA and PC-based ultrasound system for medical imaging research: initial phantom experiments. Res Biomed Eng 31(3):277–281
6. Wilson T et al (2006) The Ultrasonix 500RP: a commercial ultrasound research interface. IEEE Trans Ultrason Ferroelectr Freq Control 53(10):1772–1782
7. Verasonics Inc. https://verasonics.com/vantage-systems/
8. Triple-Speed Ethernet Intel FPGA IP Core. https://www.intel.com/content/www/us/en/programmable/products/intellectual-property/ip/interface-protocols/m-alt-ethernet-mac.html
9. Assef AA et al (2019) FPGA implementation and evaluation of an approximate Hilbert transform-based envelope detector for ultrasound imaging using the DSP builder development tool. In: 2019 41st annual international conference of the IEEE engineering in medicine and biology society (EMBC). IEEE, pp 2813–2816

Software for Physiotherapeutic Rehabilitation: A Study with Accelerometry A. M. Conceição, M. C. P. Souza, L. P. G. Macedo, R. J. R. S. Lucena, A. V. M. Inocêncio, G. S. Marques, P. H. O. Silva, and M. A. B. Rodrigues

Abstract

Objective: The present study aimed to analyze, using accelerometry, movement patterns of activities performed during physical therapy rehabilitation, in order to develop software for physical therapy monitoring. The system is able to receive the accelerometry data through the Bluetooth protocol, process them to generate graphs in real time, and store them in text files to provide data for further analysis. Methods: This is an experimental, descriptive, cross-sectional work with a convenience sample. The study was approved by the Research Ethics Committee (CEP) for research involving human beings of the Health Sciences Center of the Federal University of Pernambuco (CCS/UFPE), under Approval Number 3540474. Ten healthy subjects aged between 20 and 40 years participated in the study. Processing, an open-source programming language based on Java, was used for the development of the system. Processing contains computational tools ideal for the development of user interfaces, as well as tools for processing and manipulating data from computer input and output devices. For the testing phase, sit/stand and walking activities were performed, with the accelerometer positioned on the lumbar spine of the participants. Results and discussion: According to the tests conducted, the X and Z axes of the accelerometer were selected for analysis, these two being satisfactory for identifying the sit/stand and walking movements of the volunteers. In this way, a limit value was determined that differentiates the sitting state from the standing state. Thus, the Z axis was considered to correspond to the sit/stand activity, whereas the X axis corresponds to walking. Conclusion: It was concluded that the developed software is able to detect and monitor patterns of body movement through the oscillations captured by the accelerometers, suggesting that it can assist health professionals both in assessment and in physical therapy rehabilitation.

Keywords

Processing · Software · Accelerometer · Physiotherapy

A. M. Conceição (✉) · M. C. P. Souza · L. P. G. Macedo
Biomedical Engineering, Federal University of Pernambuco, Recife, Brazil
R. J. R. S. Lucena · A. V. M. Inocêncio
Electrical Engineering, Federal University of Pernambuco, Recife, Brazil
G. S. Marques · P. H. O. Silva
Informatics Center, Federal University of Pernambuco, Recife, Brazil
M. A. B. Rodrigues
Electronic Engineering, Federal University of Pernambuco, Recife, Brazil

1

Introduction

The study of human movement is a complex and challenging task and the subject of multidisciplinary research. Motion-monitoring systems are in growing development and demand, both because of advances in sensor technologies and because they offer personalized services in different fields, including health. This monitoring can occur in several ways, ranging from simple microelectromechanical sensors to video systems [1, 2]. Accelerometers are microelectromechanical devices that measure the acceleration of a body relative to the earth's gravity in one, two or three axes, making it possible to quantify the frequency and intensity of movements [3]. Current accelerometers are small and easy to use, can be discreetly attached to the body, and can be used to assess human movement in different environments [4, 5]. A study using accelerometers to monitor posture showed that the sensors are useful for detecting differences in

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_135



A. M. Conceição et al.

tasks related to body balance, suggesting their clinical applicability [6]. Processing is a programming language created to generate and modify images while achieving a balance between clarity and advanced features, serving a wide audience in the most diverse circumstances [7]. Because it is open source, it has several communities on the Internet interested in programming images, animations and sounds, and it can be used by students and professionals for learning, prototyping and production. One of Processing's conveniences is its libraries, which easily expand the capacity to generate sound, send/receive data in various formats, and import/export 2D and 3D files [8, 9]. The present work aims to develop software for analyzing movement patterns, using accelerometry, through activities performed during physical therapy rehabilitation.

2

Methods

This was a technological development study, with a convenience sample consisting of 10 volunteers of both sexes, aged between 20 and 40 years. Table 1 describes the characterization of the sample. The study was approved by the Research Ethics Committee (CEP) under Approval Number 3540474. The programming language used was Processing, which is open source, based on Java, and includes an integrated development environment (IDE). Processing contains computational tools ideal for the development of user interfaces, as well as tools for processing and manipulating data from computer input and output devices. The developed software is composed of three subsystems: a reception system using the Bluetooth protocol,

Table 1 Characterization of the research sample

Volunteers          Age    Sex
V1                  32     Female
V2                  32     Female
V3                  27     Female
V4                  23     Female
V5                  29     Female
V6                  30     Male
V7                  34     Male
V8                  25     Male
V9                  31     Male
V10                 26     Male
Mean                28.9
Standard deviation  ±3.54181

Fig. 1 Overview of system processes (transmission, processing and storage)

an interpretation and data analysis system, and an interface for viewing the data in graphic format. The operation can be described as follows: the signal measured by the sensor is sent to the software through the Bluetooth protocol; the signal reception process then forwards these data to a buffer-type vector, in addition to storing them. The data in the buffer vector are interpreted by the analysis algorithm and used to generate the graphs, as shown in Fig. 1. To test the operation of the software, a 3-m walk and a sit/stand task were performed. During the testing phase, the accelerometer was positioned on the L5/S1 lumbar spine of the participants.

The Bluetooth reception system was developed based on an abstraction of the operating system, in which input and output devices connected to the computer through the Bluetooth protocol are perceived as serial ports by the system's applications. Therefore, the software on the computer receives data from the sensors serially. After reception, the data are stored in a buffer before being passed on to the analysis and interpretation system and to the interface; this avoids data loss caused by possible delays on the part of the interface. For the standardization of messages, a protocol was developed that follows a specific message format executed by the receiving system. In the message format, the first two bytes are start flags, which mark the start of a measurement made by the sensors at a certain point in time. The start flags are used by the receiver to organize and restart communication, so that the receiving sequence defined by the message format is obeyed and the data are correctly stored. After the flags, the sensor readings are sent. The information provided by each sensor axis consists of 2 bytes (16 bits): the X axis provides 2 bytes, the Y axis another 2 bytes, and the Z axis a final 2 bytes. The X axis corresponds to movements in the anteroposterior direction, the Y axis to laterolateral movements, and the Z axis to craniocaudal movements.

The analysis and interpretation system consists of two algorithms that are always running. These algorithms detect whether the individual is standing or sitting and whether a step has been taken. For the development of
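The message format above (two start-flag bytes followed by three 16-bit axis readings) can be sketched as a small parser. The original software was written in Processing; this Python sketch is for illustration only, and the flag values and big-endian signed encoding are assumptions, since the paper does not specify them:

```python
import struct

START_FLAGS = b"\xAA\x55"  # assumed flag values; the paper does not give them

def parse_message(frame: bytes) -> dict:
    """Parse one sensor message: two start-flag bytes followed by
    three signed 16-bit readings for the X, Y and Z axes
    (anteroposterior, laterolateral and craniocaudal directions)."""
    if len(frame) != 8 or frame[:2] != START_FLAGS:
        raise ValueError("malformed frame")
    x, y, z = struct.unpack(">hhh", frame[2:])  # big-endian signed, assumed
    return {"x": x, "y": y, "z": z}
```

Validating the flags before unpacking is what lets the receiver re-synchronize after a dropped byte, as described in the text.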

Software for Physiotherapeutic Rehabilitation: A Study …

Fig. 2 Flowchart of the data analysis and interpretation system

the algorithms and the choice of parameters for the interpretation of each type of detection, graphs generated from usability tests were analyzed; each graph was labeled with the type of movement being performed, in order to find a pattern in the data. With these patterns, the algorithms identify when a pattern occurs and report the occurrence of one of the actions, automating the detection of actions taken by the patient/user. The system operates on variations produced by the user's actions and captured by the sensors. When there is a variation in the signal captured by the sensors and delivered to the reception system through the Bluetooth protocol, an analysis is performed to classify the variation as: the user took a step, stood up, remains standing, sat down, or remains sitting. This analysis is done in the order and manner described by the flowchart shown in Fig. 2.

3 Results and Discussion

From the analysis of the X, Y and Z axis graphs generated while the activities were performed, a greater variation was observed in the behavior pattern of the


accelerometer X and Z axes. The Z axis was selected for the detection of the sit/stand movement: when the user is sitting, its value is always lower than when the user is standing. In this way, a transition value that differentiates the sitting state from the standing state is determined. This value can vary from patient to patient, so it must be calibrated initially. Moreover, to improve the stability of this method, a low-pass filter with a 45 Hz cutoff frequency was applied to the Z-axis signal in order to avoid false positives due to the sensitivity of the sensors.

For the detection of a step, a greater variation of values was observed in the X axis relative to the others, so this axis was taken as the reference. It was noted that when a step is taken, an increase followed by a decrease in the values occurs, in the form of "hills" and "valleys", respectively. A hill is simply a peak in values of a certain intensity that is concave down, while a valley is a peak in values of a certain intensity that is concave up. Analyzing the graphs, it was noticed that the average of the X-axis values always tends toward a base number, which was called the reference. To detect the peaks, two limit values are used: one identifies the hill (if it is greater than the reference value) and the other identifies the valley (if it is less than the reference value). Therefore, the limit value that identifies the hill must always be greater than the reference, and the limit value that identifies the valley must always be less than the reference. For a simplified initial calibration of the method, the midpoint between the limit values is fixed, so what varies in the calibration is only the difference between the limit values. Thus, the greater the difference, the less sensitive the detection, as it becomes harder to detect peaks of smaller amplitude; the smaller the difference, the more sensitive the detection, as peaks are detected more easily.
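The hill/valley detection described above amounts to threshold crossing with hysteresis. The authors' implementation is in Processing; the following Python sketch illustrates the idea, with the function name and the symmetric-limit calibration taken as assumptions:

```python
def detect_steps(x_values, reference, delta):
    """Count steps from X-axis accelerometer samples.

    A 'hill' is detected when the signal rises above reference + delta,
    a 'valley' when it falls below reference - delta; one step is
    counted for each hill followed by a valley, which gives the
    detection the hysteresis described in the text."""
    upper = reference + delta
    lower = reference - delta
    steps = 0
    seen_hill = False
    for v in x_values:
        if v > upper:
            seen_hill = True
        elif v < lower and seen_hill:
            steps += 1
            seen_hill = False
    return steps
```

Increasing `delta` widens the dead band and makes detection less sensitive, exactly as the calibration discussion above describes.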
Figure 3 shows an example of detecting the behavior of the accelerometer axes when performing the proposed activities. On the left side of Fig. 3, it is possible to observe the response of the accelerometer on the X, Y and Z axes related to the standing movement that is shown on the right side of the figure. The X and Y axes have little variation between

Fig. 3 Detection of the behavior of the accelerometer axes when performing the proposed activities


the sit and stand positions, while the Z axis has a greater variation between the two positions.

4 Conclusions

From the results obtained with the developed software, it is concluded that the analysis of movements performed during the activities proposed in this study, using one accelerometer axis for each type of activity, can be used to detect simple patterns of body movement. The developed system is therefore capable of detecting sitting and standing movements, as well as walking parameters such as the number of steps taken by the patient, by analyzing the individual behavior of the accelerometer axes. The software also stores data in text-file format for possible further analysis of the clinical parameters evaluated in this study, such as gait and balance. The system developed in this research can be applied in the first evaluation of the patient, during the rehabilitation process, and in the re-evaluation, providing a measure with which to compare physiotherapeutic progress, in addition to being viable for motor and neurofunctional rehabilitation applications. When compared to some scales and tests already in common use, the software offers more precise measures of the postural changes that occur during the execution of activities performed by patients during the rehabilitation period, and it can be used in association with existing rating scales or on its own. Thus, the health professional gains another important quantitative metric to assist in clinical decision making, supporting a treatment plan better suited to the individual needs of each patient.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Silva FG (2013) Reconhecimento de movimentos humanos utilizando um acelerômetro e inteligência computacional. Dissertação de mestrado, Escola Politécnica da Universidade de São Paulo, Programa de Pós-Graduação em Engenharia Elétrica, 105 p
2. Zago M, Sforza C, Pacífico I, Cimolin V, Cameroto F, Celletti C, Condoluci C, Pandis MF, Galli M (2018) Gait evaluation using inertial measurement units in subjects with Parkinson's disease. J Electromyogr Kinesiol 42:44–48. https://doi.org/10.1016/j.jelekin.2018.06.009
3. Gondim ITGO, Souza CCB, Rodrigues MB, Azevedo IM, Coriolano MGWS, Lins OG (2020) Portable accelerometers for the evaluation of spatio-temporal gait parameters in people with Parkinson's disease: an integrative review. Arch Gerontol Geriatr 90:1–11. https://doi.org/10.1016/j.archger.2020.104097
4. Neville C, Ludlow C, Rieger B (2015) Measuring postural stability with an inertial sensor: validity and sensitivity. Med Dev (Auckl) 8:447–455. https://doi.org/10.2147/MDER.S91719
5. Weiss A, Herman T, Gilad N, Hausdorff JM (2015) New evidence for gait abnormalities among Parkinson's disease patients who suffer from freezing of gait: insights using a body-fixed sensor worn for 3 days. J Neural Transm 122(1):403–410. https://doi.org/10.1007/s00702-014-1279-y
6. Sasaki J, Coutinho A, Santos C, Bertuol C, Minatto CG, Berria J, Tonosaki L, Lima L, Marchesan M, Silveira P, Krug R, Benedetti T (2017) Orientações para utilização de acelerômetros no Brasil. Rev Bras Ativ Fís Saúde 22(2):110–126. https://doi.org/10.12820/rbafs.v.22n2p110-126
7. Mello POB (2015) Arte e programação da linguagem Processing. Dissertação de mestrado, Universidade Católica de São Paulo—PUC-SP, Programa de Pós-Graduação em Tecnologia e Design Digital, 137 p
8. Souza EC (2016) Programação no ensino de matemática utilizando Processing 2: Um estudo das relações formalizadas por alunos do ensino fundamental com baixo rendimento em matemática. Dissertação de mestrado, Faculdade de Ciências da Universidade Estadual Paulista—UNESP, Programa de Pós-Graduação em Educação para a Ciência, 189 p
9. Processing (2015) Processing. https://www.processing.org/. Accessed 03 Mar 2015

B-Mode Ultrasound Imaging System Using Raspberry Pi R. A. C. Medeiros, A. A. Assef, J. de Oliveira, J. M. Maia, and E. T. Costa

Abstract


B-mode ultrasound imaging is one of the main imaging methods in medical diagnosis. To improve the quality of the generated images, new approaches and techniques for digital signal processing based on hardware and software platforms are being introduced. This article presents the implementation and evaluation of digital signal processing algorithms on the Raspberry Pi, using the Python programming language, for B-mode image reconstruction. The proposed steps include digital filtering, focusing delay, coherent summation, demodulation with envelope detection, and logarithmic compression. To validate the implemented algorithm, 12-bit data sampled at a frequency of 40 MHz were used. Qualitative and quantitative analyses using the Normalized Root Mean Squared Error (NRMSE) and the Normalized Residual Sum of Squares (NRSS) cost functions show results compatible with the reference model in Matlab, validated in previous studies. All NRMSE results were less than 10% and all NRSS results were close to zero, indicating excellent agreement with the reference Matlab model.

Keywords

Ultrasound · Digital signal processing · B-mode imaging · Raspberry Pi · Python

R. A. C. Medeiros, A. A. Assef, J. de Oliveira, J. M. Maia: CPGEI/PPGSE/DAELN/DAELT, Federal University of Technology—Paraná (UTFPR), Av. Sete de Setembro, 3165, Rebouças, Curitiba, Brazil
E. T. Costa: School of Electrical and Computing Engineering (FEEC), Center for Biomedical Engineering (CEB), University of Campinas (UNICAMP), Campinas, Brazil



1 Introduction

Ultrasound (US) systems are essential in medical diagnostic imaging. Due to the electronic complexity involved, such systems use multiple printed circuit boards, typically with reconfigurable logic devices such as Field-Programmable Gate Arrays (FPGAs) and Application-Specific Integrated Circuits (ASICs), multicore Digital Signal Processors (DSPs), Central Processing Units (CPUs) and, more recently, Graphics Processing Units (GPUs) [1, 2]. However, in addition to high costs, available commercial systems do not always meet research needs for alternative and innovative digital signal and image processing applications, as they typically have closed, fixed architectures [2]. New approaches and research methods based on hardware and software platforms are being developed, especially in the academic sector, to provide US images that lead to more reliable medical diagnoses [1]. Thus, the introduction of low-cost, low-power single-board computers the size of a credit card may, in the future, foster the development of embedded systems for processing and generating images for testing and diagnosis in personalized medicine. Point-of-Care (PoC) devices [3], which can accelerate decision making in health care units through diagnostic imaging, can be cited as an example of such applications. Development boards such as the Intel Galileo, Raspberry Pi and BeagleBone Black have been explored with special emphasis on digital image processing and Internet of Things (IoT) applications [4]. In the healthcare sector, most research focuses on monitoring patient health parameters such as heart rate and temperature [3]. However, there is an absence of studies that look at these platforms as an alternative for digital signal processing in the reconstruction of US images. In this paper, the implementation and evaluation of US signal processing algorithms for B-mode image generation

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_136


R. A. C. Medeiros et al.

are presented using the beamforming Delay-and-Sum (DAS) method [5] with the Raspberry Pi.

2 Materials and Methods

The proposed processing system was implemented on a Raspberry Pi (RPi) 3 Model B board running the Raspbian operating system. The image processing algorithms were developed in the Python (version 3.0) programming language. The digital signal processing chain for B-mode US image generation using the DAS method, shown in Fig. 1, consists of: digital filtering, a variable delay block for efficient reception focusing, an apodization window, coherent summation, demodulation using an approximate Hilbert Transform strategy with a FIR filter [6], envelope detection, and logarithmic compression with a custom 12-bit Look-up Table (LUT). The final steps of scan conversion and image display were performed with the Matlab software. The experiments were performed offline using radiofrequency (RF) data from a multi-purpose biological tissue-mimicking phantom (Fluke Biomedical, model 84-317), acquired by a US system developed by our research group [2]. The RF data were sampled at 40 MSPS with 12-bit resolution from the individual elements of a 128-element convex array transducer with a 3.2 MHz center frequency (Broadsound Inc., AT3C53B). In this research, an active aperture of 8 elements was adopted for the DAS beamforming method, with the focus at a depth of 2.5 cm. The computation results were then stored on the RPi's SD card. The resulting 968 RF lines were stored to compose 121 scanlines, each with a length of 2046 samples, totaling 1,980,528 words. This step, related to the capture of ultrasonic data, is represented by the first dashed block in Fig. 1.

Fig. 1 Block diagram of the digital signal processing for B-mode ultrasound imaging (capture of data, digital filtering, delay insertion focusing, apodization window, coherent sum, demodulation, logarithmic compression, scan conversion, B-mode image)

In digital processing for generating reconstructed US images, digital filtering is the first and one of the most significant techniques for improving image quality. It is used to eliminate the noise in the echo signals generated by the non-homogeneity of the biological media, which decreases image quality, mainly degrading the contrast [2]. In this work, a low-pass Finite Impulse Response (FIR) filter was chosen over Infinite Impulse Response (IIR) filters, as FIR filters offer inherent stability, phase linearity, and simpler implementation [7]. The digital filtering stage was designed with the Filter Design and Analysis Tool (FDATool) in the Matlab software (The MathWorks Inc.), as specified in Table 1. From the tool, we obtained an Equiripple filter with 16 coefficients. The magnitude and phase responses, and the impulse response, of the low-pass FIR filter are shown in Figs. 2 and 3, respectively. The symmetry of the filter coefficients was exploited to reduce the computational cost.

Following the processing steps, the signal delays were corrected: in the DAS beamforming focusing technique, the piezoelectric elements are excited with different delay times, and consequently the received signals carry corresponding characteristic delays. The correction was carried out by applying the time delay coefficients to the reflected echo samples. Based on the geometry of the convex transducer, the Field II simulation program [8] was used to calculate the delay times, shown in Fig. 4a. These times were then converted into 40 MHz clock cycles (Fig. 4b) and applied to the RF signals.

The apodization technique was applied to mitigate the effects of the elements farthest from the center of the active aperture, resulting in an acoustic beam with reduced sidelobes and, thus, a higher-quality image [9]. This window reduces the intensity of the beam irradiated around the focal point, improving the contrast of the image [5, 10]. In the experimental evaluation, only the rectangular window was used, although other apodization profiles can be applied. After alignment and application of the weighting window, a coherent sum of the resulting signals was performed to compose the scanlines S(t), according to Eq. (1):

$S(t) = \sum_{i=0}^{N} w_i\, S_i(t - t_i)$,  (1)

where $w_i$ is the weighting window coefficient corresponding to the signal $S_i$, with delay $t_i$ of element i [11]. After formation of the scanline, the demodulation operation was performed, followed by envelope detection to extract the envelope of the resulting scanlines. Due to its ability to compute the envelope efficiently and close to the ideal [6], the Hilbert Transform (HT) implemented with a FIR filter was chosen as the method, shown in the block diagram of Fig. 5. This technique generates a complex analytic signal in which the imaginary part (the quadrature signal Q(n)) represents the HT of the original signal, produced by a FIR filter with an HT-type response, while the real part (the phase signal

Table 1 Digital filter specifications

Parameter              Specification
Sampling frequency     40 MHz
Project method         FIR—generalized equiripple
Response type          Low-pass
Passband frequency     3.2 MHz
Stopband frequency     8.0 MHz
Passband attenuation   −1 dB
Stopband attenuation   −50 dB

Fig. 2 Magnitude and phase responses of the low-pass FIR filter

Fig. 3 Low-pass FIR filter impulse response

I(n)) represents the original signal, delayed to compensate for the phase delay introduced in computing the quadrature component. After processing the complex components, the envelope information E(n) is obtained as the square root of the sum of the squares of I(n) and Q(n) [6], where n is the sample index.
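The I/Q envelope computation just described can be sketched as follows; an ideal FFT-based Hilbert transform stands in here for the paper's FIR approximation:

```python
import numpy as np

def envelope(signal):
    """Envelope detection as in Fig. 5: Q(n) is the Hilbert transform
    of the input, I(n) the (delay-compensated) original, and
    E(n) = sqrt(I(n)**2 + Q(n)**2). The Hilbert transform is computed
    here via the FFT-based analytic signal, an idealized stand-in for
    the paper's FIR implementation."""
    n = len(signal)
    spectrum = np.fft.fft(signal)
    h = np.zeros(n)           # spectral weights of the analytic signal
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spectrum * h)  # I(n) + j*Q(n)
    return np.abs(analytic)               # E(n)
```

For a pure sinusoid the detected envelope is the constant carrier amplitude, which is a convenient sanity check for either implementation.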

The main parameters of the FIR filter with coefficients from the HT response are shown in Table 2. Figures 6 and 7 show the magnitude and phase responses and the impulse response of the designed filter, respectively. The negative-symmetry and null-interval characteristics of the coefficients were exploited to simplify the system design.


Fig. 4 Focusing delay profile for active aperture of 8 elements. a Temporal profile. b Temporal profile converted to 40 MHz clock cycles
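The conversion of focusing delays into 40 MHz clock cycles shown in Fig. 4b amounts to scaling each delay by the clock frequency and rounding; a sketch (nearest-integer rounding is an assumption, as the paper does not state the mode):

```python
def delays_to_clock_cycles(delays_s, f_clk_hz=40e6):
    """Convert per-element focusing delays (in seconds) into integer
    counts of 40 MHz clock cycles, as in Fig. 4b."""
    return [round(t * f_clk_hz) for t in delays_s]
```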

Fig. 5 Block diagram of the envelope detector

Finally, the logarithmic compression step is applied to adapt the dynamic range of the scanline to the gray-scale levels required for B-mode imaging, improving the image contrast [5]. In logarithmic compression, US data values are mapped nonlinearly by a logarithmic function, using Eq. (2):

$E_c(n) = 20 \log_{10}(E(n))$,  (2)

where $E_c(n)$ is the compressed envelope signal, given in decibels (dB). To facilitate the computation of this step, a LUT of pre-calculated values, 12 bits in length and with the dynamic range limited to −30 dB, was used. During the implementation of the processing algorithm, the coefficient values calculated for the various steps were exported from the Matlab workspace and imported into the code developed on the Raspberry Pi in Python. In addition to the qualitative assessment, quantitative analyses of the coherent summation, demodulation with envelope detection, and logarithmic compression were

Table 2 Specifications of the FIR filter with HT-type response

Parameter        Specification
Project method   FIR—equiripple
Response type    Hilbert transformer
Filter order     32

Fig. 6 Magnitude and phase responses of the FIR filter with HT-type response

Fig. 7 Impulse response of the FIR filter with HT-type response

performed in comparison with a reference model implemented via a Matlab script and validated in previous studies [2, 5]. The quantitative analysis of the processing was performed using the Normalized Root Mean Squared Error (NRMSE) and the Normalized Residual Sum of Squares (NRSS) cost functions, represented mathematically by Eqs. (3) and (4) [6], respectively:

$\mathrm{NRMSE} = \sqrt{\dfrac{\sum_{n=1}^{M} |R(n)-S(n)|^{2}}{\sum_{n=1}^{M} \left|S(n)-\bar{S}\right|^{2}}} \times 100$,  (3)

$\mathrm{NRSS} = \dfrac{\sum_{n=1}^{M} |R(n)-S(n)|^{2}}{\sum_{n=1}^{M} |S(n)|^{2}}$,  (4)


Fig. 8 Evaluation of the digital filtering algorithm. a US signal with an added 8 MHz sinusoid at 10% relative amplitude and b its frequency spectrum. c Filtered US signal and d its frequency spectrum

where M is the total number of samples, R(n) is a sample of the reference model, S(n) is a sample of the model being compared, and $\bar{S}$ is the mean of the samples of the model being compared.
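Equations (3) and (4) translate directly into code; a sketch of the two metrics (function names are ours):

```python
import numpy as np

def nrmse(r, s):
    """Normalized Root Mean Squared Error, Eq. (3), in percent.
    r: reference-model samples R(n); s: samples S(n) being compared."""
    r = np.asarray(r, dtype=float)
    s = np.asarray(s, dtype=float)
    return 100.0 * np.sqrt(np.sum(np.abs(r - s) ** 2)
                           / np.sum(np.abs(s - s.mean()) ** 2))

def nrss(r, s):
    """Normalized Residual Sum of Squares, Eq. (4)."""
    r = np.asarray(r, dtype=float)
    s = np.asarray(s, dtype=float)
    return np.sum(np.abs(r - s) ** 2) / np.sum(np.abs(s) ** 2)
```

Both metrics are zero when the compared signals are identical, which is why values near zero in Table 3 indicate agreement with the reference.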

3 Results

Initially, the signal processing steps of digital filtering (Fig. 8) and envelope detection (Fig. 9) were verified. Figure 8a, c present an example of a US signal (2046 samples) with an added 8 MHz sinusoid at 10% relative amplitude, before and after digital filtering, respectively. Figure 8b, d show the respective frequency analysis results. It can be seen that the 8 MHz component was attenuated by 50 dB. Figure 9 presents an example of the envelope detection result for a scanline, where the HT and the envelope

computed by the digital signal processing on the Raspberry Pi are shown superimposed. After the raw data were completely processed on the Raspberry Pi, the resulting signals were exported to Matlab. Subsequently, the scan conversion step was performed, followed by the generation and presentation of the reconstructed image. Figure 10 shows a photo of the Raspberry Pi board and the computer's LCD screen showing the comparison between the simulated model and the experimental result. For better visualization, Fig. 11a, b show the final B-mode images obtained from the processing in Matlab and on the Raspberry Pi, respectively. The overall time to complete the proposed processing steps on the Raspberry Pi was approximately 27 min. The results of the cost functions, with their respective average values and standard deviations, considering the 121 scanlines computed by the Raspberry Pi in comparison with the reference Matlab script, are presented in Table 3.


Fig. 9 Example of envelope detection result of a scanline obtained from the Raspberry Pi processing

Fig. 10 Photo of the Raspberry Pi board and LCD screen showing the comparison between the simulated model (Matlab response) and the experimental result (RPi response)

Fig. 11 Comparison of the final US image obtained from a processing in Matlab and b in Raspberry Pi

Table 3 Result of the cost functions of the 121 scanlines processed by the Raspberry Pi in comparison with the reference Matlab script

Stage                     Metric   Average value   Standard deviation
Coherent sum              NRMSE    3.43e−10%       6.99e−11%
Coherent sum              NRSS     1.23e−23        5.01e−24
Envelope detection        NRMSE    3.58%           0.75%
Envelope detection        NRSS     12e−04          5.02e−04
Logarithmic compression   NRMSE    4.35%           2.31%
Logarithmic compression   NRSS     2.22e−04        2.57e−04

The quantitative analysis of NRMSE and NRSS shows that the Python algorithm, implemented on the Raspberry Pi, presents results compatible with the adopted reference model. All NRMSE results were less than 10%, and all NRSS results were close to zero, indicating excellent agreement with the Matlab reference model.

4 Conclusions

It can be concluded that the main objective of this work, to implement and evaluate the digital processing of US signals for the reconstruction of B-mode images using the Raspberry Pi, was successfully achieved. Despite the excellent results, corroborated by the cost functions in comparison with the reference model, it was found that it is not possible to generate US images in real time due to the limitations of the Raspberry Pi 3 Model B, specifically its operating frequency and memory capacity. Nevertheless, this work opens up the possibility of processing US signals on low-cost single-board computers using the Python language, given the shortage of information sources and programming code for this purpose. The results presented here can be explored by the scientific community to expand future investigations with more modern technologies.

Acknowledgements The authors would like to thank the following Brazilian organizations: Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Financiadora de Estudos e Projetos (FINEP), Araucária Foundation, Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Federal University of Technology—Paraná (UTFPR), and the Ministry of Health for the financial support that made this research possible.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Boni E et al (2018) Ultrasound open platforms for next-generation imaging technique development. IEEE Trans Ultrason Ferroelectr Freq Control 65(7):1078–1092
2. Assef AA, Maia JM, Costa ET (2016) Initial experiments of a 128-channel FPGA and PC-based ultrasound imaging system for teaching and research activities. In: 2016 38th annual international conference of the IEEE engineering in medicine and biology society (EMBC). IEEE, pp 5172–5175
3. Dietrich CF et al (2017) Point of care ultrasound: a WFUMB position paper. Ultrasound Med Biol 43(1):49–58
4. Ray PP (2017) A survey on visual programming languages in internet of things. Sci Program
5. Assef AA, Maia JM, Costa ET (2015) A flexible multichannel FPGA and PC-based ultrasound system for medical imaging research: initial phantom experiments. Res Biomed Eng 31(3):277–281
6. Assef AA et al (2019) FPGA implementation and evaluation of an approximate Hilbert transform-based envelope detector for ultrasound imaging using the DSP builder development tool. In: 2019 41st annual international conference of the IEEE engineering in medicine and biology society (EMBC). IEEE, pp 2813–2816
7. Dwivedi AK, Ghosh S, Londhe ND (2018) Review and analysis of evolutionary optimization-based techniques for FIR filter design. Circuits Syst Signal Process 37(10):4409–4430
8. Jensen JA (1996) Field: a program for simulating ultrasound systems. Paper presented at the 10th Nordic-Baltic conference on biomedical imaging. Med Biol Eng Comput 34(1):351–353
9. Jensen JA (1999) Linear description of ultrasound imaging systems. https://server.oersted.dtu.dk/personal/jaj/field/documents/ref_jaj_1999.pdf
10. Cincotti G et al (1999) Efficient transmit beamforming in pulse-echo ultrasonic imaging. IEEE Trans Ultrason Ferroelectr Freq Control 46(6):1450–1458
11. Frazier CH, O'Brien WD (1998) Synthetic aperture techniques with a virtual source element. IEEE Trans Ultrason Ferroelectr Freq Control 45(1):196–207

Access Control in Hospitals with RFID and BLE Technologies B. C. Bispo, E. L. Cavalcante, G. R. P. Esteves, M. B. C. Silva, G. J. Alves, and M. A. B. Rodrigues

Abstract

The absence of systems that allow greater security in access control in healthcare environments is a serious problem faced by health institutions. It results in losses of both an economic and an operational nature, since the activities involved in the provision of medical services by professionals are significantly impacted, directly or indirectly. It is therefore important to manage the access control of people and equipment in healthcare environments. For this to happen efficiently, systems that implement such management with redundant security routines are good alternatives. In this sense, this paper proposes a system that performs access control in healthcare environments, capable of managing the movement of people and equipment and allowing an approximate indoor location of each piece of equipment or person to be known at any time. Several consolidated technologies are used for this type of communication, such as RFID (Radio-Frequency Identification) and Beacons (Bluetooth). The purpose of this paper is to discuss the hardware and software modules and to present a graphical interface with connectivity options for remote management through IoT (Internet of Things) resources.

Keywords

RFID · BLE · Access control · Hospital · MQTT

1 Introduction

B. C. Bispo, E. L. Cavalcante, G. R. P. Esteves, M. B. C. Silva, G. J. Alves, M. A. B. Rodrigues: DES, PPGEE, Federal University of Pernambuco, Av. Prof. Moraes Rego, 1235, Recife, Brazil

According to Al-Fuqaha in 2015 [1], the economic sector that will be most affected by the contributions of IoT services will be the healthcare sector. One of the essential

services to be implemented in the health area is security, of which the access control of people and equipment is a key part. Investments in the access control of people and equipment aim to improve hospital management; for example, systems for locating medical equipment have been proposed to increase the effectiveness of patient care time [2]. Among the types of wireless communication used by IoT services in a healthcare environment, precautions must be taken so that there is no impact on the operation of medical equipment and no health risk for people in the hospital. In this context, several IoT devices use frequency bands called ISM (Industrial, Scientific and Medical) bands, already regulated by the International Telecommunication Union [3]. Typical and commonly used examples are wireless communication protocols around 2.4 GHz, such as Bluetooth, WiFi and ZigBee, which are among the most attractive communication technologies for IoT devices in M2M (Machine-to-Machine) applications [1].

Based on studies of the frequency bands used in hospital environments and the need for a system to locate equipment, staff and patients, the project exposed in this paper presents the development of a system capable of storing access records of people and equipment in real time. The developed system uses two technologies well known in the access control market, RFID (Radio-Frequency Identification) and BLE (Bluetooth Low Energy) Beacons, one of the most widely used technologies for indoor positioning applications [4], to aid in tracking and controlling the access of people and equipment within a hospital environment. The synergy of these two technologies can bring significant advantages in healthcare applications concerning the safety of patients, professionals and equipment in restricted sectors of healthcare environments.
The use of Bluetooth technology in access control adds to the conventional access control systems, based on RFID technology, security redundancies and make the access control of people and equipment as reliable as possible.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_137

2 Materials and Methods

This section presents the entities that make up the proposed system, its communication architecture, the hardware used, and the management and registration of the collected data by the developed software.

2.1 Topology and Communication Protocol

The implemented topology uses bidirectional communication between a Server-Raspberry Pi and a module, named BLID, through a communication protocol commonly used in IoT devices: MQTT (Message Queuing Telemetry Transport) [5], which runs over TCP/IP (the suite of communication protocols used to interconnect network devices on the Internet). MQTT simplifies implementation in embedded systems and is used in several IoT applications [6, 7]. One of the main features of the MQTT protocol for IoT devices is asynchronous operation, unlike the conventional HTTP (Hypertext Transfer Protocol), which is based on the Client–Server paradigm, in which the Server awaits the Client's requests and responds synchronously with the requested services. In MQTT, the network is based on a Client–Client relationship: a control instance called the MQTT Broker mediates the exchange of information, passing messages between the various clients through asynchronous publications and subscriptions to topics. In other words, a device on the MQTT network can at any time receive messages from, or send messages to, other devices on the network, without issuing service requests. Many advantages of the MQTT protocol over HTTP are explored in the literature [6]. The implemented topology consists of four entities, illustrated in Fig. 1:

• BLID Module: MCU (Microcontroller Unit) responsible for authorizing access to the desired sector and sending all collected records (made by the MCU through the RFID readers or the Beacon detector) to the Server-Raspberry Pi.
• iTAG: Responsible for the unique identification of registered individuals and/or equipment. In Fig. 1, "iTAG - Equipment" refers to equipment identification, while "iTAG - Nurse" and "iTAG - Doctor" refer to the identification of hospital professionals, for example. Two technologies are used for unique identification: an RFID (Radio-Frequency Identification) Tag and a BLE (Bluetooth Low Energy) Beacon. The iTAGs fall into two types:
  – Authorized iTAG: The RFID Tag and BLE Beacon UUIDs (Universally Unique Identifiers) are registered in the system and granted access by the module, i.e., the employee or equipment is authorized to move between sectors of the health establishment.
  – Unauthorized iTAG: The RFID Tag and BLE Beacon UUIDs are registered in the system but are not authorized to move between sectors of the health establishment.
• Router: Responsible for creating a local network for system communication based on the TCP/IP protocol.
• Server-Raspberry Pi: Responsible for receiving the records made by the BLID Module through the MQTT protocol, registering them in a local SQLite database, and providing a GUI (Graphical User Interface) built with the NODE-RED software, as shown in Fig. 1. In addition, the MQTT broker (the central entity of the MQTT protocol) is located in this entity; its function is to route all MQTT messages, published by MQTT clients into topics, to the other clients that are connected and subscribed to those topics [5].
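The publish/subscribe routing performed by the MQTT Broker can be sketched with a minimal, hypothetical broker model in pure Python. This is only an illustration of the routing idea described above — it is not the Mosquitto implementation nor the MQTT wire protocol, and the topic name and payload are invented for the example:

```python
from collections import defaultdict

class TinyBroker:
    """Toy model of an MQTT-style broker: routes each published
    message to every callback subscribed to its topic."""
    def __init__(self):
        self.subscriptions = defaultdict(list)  # topic -> callbacks

    def subscribe(self, topic, callback):
        self.subscriptions[topic].append(callback)

    def publish(self, topic, payload):
        # Asynchronous in spirit: subscribers receive the message
        # without having issued any request for it.
        for callback in self.subscriptions[topic]:
            callback(topic, payload)

broker = TinyBroker()
server_log = []

# The Server-Raspberry Pi subscribes to a (hypothetical) records topic.
broker.subscribe("blid/records", lambda t, p: server_log.append(p))

# The BLID Module publishes an access record at any time.
broker.publish("blid/records", {"tag": "08576179", "event": "access granted"})

print(server_log)  # [{'tag': '08576179', 'event': 'access granted'}]
```

The key point mirrored here is that the publisher and subscriber never address each other directly; the broker decouples them, which is what allows the BLID Module and the server to exchange messages at any moment.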

2.2 Software

For iTAG record management and communication between the Server and the BLID Module, the software implemented on the Raspberry Pi board provides the following functions:

• SQLite Database: The SQLite database [8] was used due to its simple compatibility, usability and low processing consumption on a Raspberry Pi.
• NODE-RED: NODE-RED [9] is a graphical programming tool based on JavaScript. It was responsible for processing the data arriving from the Mosquitto Broker, building a GUI for displaying the records, sending commands to the BLID Module, registering new users (RFID Tags and BLE Beacons) and interfacing with the SQLite database. In summary, this tool made it possible, in a simple way, to manage and store all records made by the BLID Module, make them accessible to an administrator, and remotely control the BLID Module through the MQTT protocol in real time.
• MQTT Broker: To allow network communication between the module and the Server, the Mosquitto software [10] was defined as the broker of the MQTT

Access Control in Hospitals with RFID and BLE Technologies


Fig. 1 System topology implemented in the laboratory on a private local network, consisting of a Server-Raspberry Pi (hosting the SQLite database, the NODE-RED software and the Mosquitto MQTT Broker), a Router, the BLID Module and three iTAGs (each consisting of an RFID Tag and a BLE Beacon). Source: Author's collection, 2020

system. It is an open-source MQTT broker that uses few hardware processing resources and is easy to handle; this broker was allocated on the Server-Raspberry Pi together with SQLite and NODE-RED.
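A minimal sketch of how access records might be stored in and retrieved from SQLite, as done on the Server-Raspberry Pi. The table schema and column names are hypothetical — the paper does not specify them — and an in-memory database is used here for illustration:

```python
import sqlite3

# In-memory database for illustration; the real system persists to disk.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE access_records (
        id        INTEGER PRIMARY KEY AUTOINCREMENT,
        rfid_tag  TEXT,
        ble_id    TEXT,
        event     TEXT,
        timestamp TEXT
    )
""")

# A record as the BLID Module might report it over MQTT.
conn.execute(
    "INSERT INTO access_records (rfid_tag, ble_id, event, timestamp)"
    " VALUES (?, ?, ?, ?)",
    ("08576179", "FF:FF:C0:20:BF:26", "access granted",
     "2020-10-26 10:15:00"),
)
conn.commit()

# A query in the spirit of the Supervisory's search: records for one tag.
rows = conn.execute(
    "SELECT rfid_tag, event FROM access_records WHERE rfid_tag = ?",
    ("08576179",),
).fetchall()
print(rows)  # [('08576179', 'access granted')]
```

SQLite's single-file, serverless design is what keeps the processing cost low enough for a Raspberry Pi, as the text notes.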

2.3 Hardware

Based on a cost–benefit study carried out during system design, several devices and manufacturers were analyzed. It was thus possible to develop the system (illustrated in Fig. 1) with the following hardware specifications:

1. BLID Module: This module consists of an OLIMEX ESP32 evaluation board, two PN532 RFID readers, a DS3231 Real Time Clock, a 3D-printed case, a buzzer and a magnetic door lock. The development platform used was the OLIMEX ESP32-EVB [11], which carries the ESP32 microcontroller, manufactured by Espressif [12]. All features are orchestrated according to the developed firmware. The ESP32 has an attached PCB antenna for 2.4 GHz wireless communication, making it possible to establish communication between the BLID Module and nearby iTAGs via Bluetooth. This development board


meets all the needs of the proposed system, as it has all the necessary peripherals, such as:

• Relay: used for switching the magnetic door lock that keeps the door locked or unlocked;
• Ethernet port: used for connection to the local network created by the Router;
• SD card reader: used for configuring the connection parameters of the BLID Module, i.e., setting the IP address, the BLID Module's ID in the MQTT network, the account and password for accessing the MQTT network in secure mode, etc.

In addition to the peripherals embedded in the development board, there are two PN532 RFID readers (manufactured by NXP) [13], one located inside the environment and the other outside, responsible for reading the 13.56 MHz RFID tags on each side. An RTC (Real Time Clock) DS3231 [14] is used to record the day and time of each event registered by the BLID Module. Finally, there are a buzzer for audible signaling and a magnetic door lock to keep the door locked or unlocked.

2. iTAG: A BLE Beacon powered by a lithium CR2032 coin cell, coupled with a passive RFID Tag operating at 13.56 MHz. The adoption of RFID technology as a second means of access control in restricted areas of a hospital environment is a complementary safety measure, since it remains in operation even if the battery that powers the Bluetooth chip runs out.

3. Server-Raspberry Pi: An SBC (Single Board Computer) Raspberry Pi 3 Model B+ [15] was used as the server hosting the SQLite database, the MQTT broker and the NODE-RED software. The database system and the MQTT broker require very little processing power, which is why all tests could be performed on this simple, low-cost hardware platform with a Linux-based operating system.

2.4 Firmware

The OLIMEX ESP32-EVB development platform was programmed with the Arduino IDE (Integrated Development Environment). Figure 2 illustrates a flowchart of the main tasks performed by the microcontroller.

1. Module configuration: In this step, all communication parameters (IP address, registration tags for new users, passwords for accessing the MQTT network) are defined by reading a text file (.txt) from the micro-SD card. This makes changing module parameters more flexible and avoids the delay of reprogramming the microcontroller at each change of system parameters.


2. Verifying the presence of RFID tags: First, the microcontroller checks whether an RFID tag is present at either reader. If so, it checks whether the tag has access to the system. If the RFID tag is valid, the door is unlocked and the routine proceeds to step 3; if not, access is blocked and the routine likewise proceeds to step 3.
3. Verifying the presence of BLE Beacons: The presence of BLE Beacons near the BLID Module is verified through the Bluetooth signal they emit. For every iTAG within about 2 m of the BLID Module (received power of approximately −70 dBm or stronger), the module checks whether the iTAG is authorized to access the sector. Among all iTAGs near the BLID Module, priority is always given to iTAGs that do not have permission to enter or exit the sector: if iTAGs of both types are detected, the module keeps the door locked; only when authorized iTAGs are present and no unauthorized iTAG is detected does the module unlock the door. In addition, the proximity of authorized and unauthorized iTAGs is recorded and sent to the server, registering when each iTAG passed near the BLID Module.
4. Connection to the MQTT network: First, the connection to the MQTT Broker on the local network is verified. If the module is not connected to the MQTT Broker, a new connection attempt is made after 3 s and the routine returns to step 2. If it is connected, the routine proceeds to step 5.
5. Sending the records to the server: All records made by the BLID Module are sent through a specific topic to which the Server-Raspberry Pi is subscribed on the MQTT network.
Each record sent by the BLID Module receives an acknowledgment from the Server-Raspberry Pi; if the BLID Module does not receive this confirmation within 2 s, it returns to step 2. This check for packets to be sent to the broker is carried out every 3 s. If no record has been made by the module, the program returns to step 2. Since MQTT communication is completely asynchronous — the BLID Module or the Server-Raspberry Pi can receive messages from each other at any time — the firmware developed for the BLID Module has routines, triggered through predetermined topics, to handle commands or messages coming from the Server-Raspberry Pi at any moment, such as registration of new users, confirmation of a received data packet, a command to unlock the magnetic door lock, an RTC update, etc.
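The unlock decision in step 3 — unauthorized iTAGs take priority over authorized ones — can be sketched as a pure function. The data shapes and function name are hypothetical; the −70 dBm threshold (roughly 2 m) is taken from the text, and the beacon address of the third, unauthorized tag is invented:

```python
RSSI_THRESHOLD_DBM = -70  # ~2 m from the BLID Module, per the text

def door_should_unlock(detected, authorized_ids):
    """detected: list of (beacon_id, rssi_dbm) pairs seen by the scanner.
    Unauthorized beacons in range take priority: the door stays locked
    if any nearby beacon is not on the authorized list."""
    nearby = [bid for bid, rssi in detected if rssi >= RSSI_THRESHOLD_DBM]
    if not nearby:
        return False  # nobody close enough to the module
    return all(bid in authorized_ids for bid in nearby)

authorized = {"FF:FF:C0:20:BF:26", "FF:FF:C0:20:34:DD"}

# Authorized beacon alone, in range: unlock.
print(door_should_unlock([("FF:FF:C0:20:34:DD", -60)], authorized))
# Authorized plus unauthorized beacon both in range: stay locked.
print(door_should_unlock([("FF:FF:C0:20:34:DD", -60),
                          ("AA:BB:CC:DD:EE:FF", -65)], authorized))
```

The `all(...)` over nearby beacons encodes the rule that a single unauthorized iTAG in range is enough to keep the door locked.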


Fig. 2 Flowchart of the BLID Module operating routine. As soon as the BLID Module is activated, the microcontroller defines all the constants used in the program after reading the SD card. The system then enters a loop in which the presence of RFID tags is checked and the appropriate actions are performed. Subsequently, the presence of BLE Beacons is verified and the necessary actions are taken. Finally, the data packets stored in the buffer are managed as soon as the device is connected to the MQTT Broker. Source: Author's collection, 2020
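The configuration file read from the micro-SD card in step 1 of the flowchart might be parsed along these lines. The key/value format and the key names below are purely illustrative assumptions — the paper does not specify the layout of the .txt file:

```python
def parse_config(text):
    """Parse 'key=value' lines, ignoring blanks and '#' comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

sample = """
# BLID Module network parameters (illustrative keys)
ip=192.168.0.50
module_id=U01
mqtt_user=blid
mqtt_pass=secret
"""

cfg = parse_config(sample)
print(cfg["ip"], cfg["module_id"])  # 192.168.0.50 U01
```

Keeping the parameters in a plain text file is what lets an operator reconfigure a module by editing the SD card instead of reflashing the firmware, as the text explains.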


2.5 Test Protocol

Tests were carried out in the laboratory to validate the proposed system. A total of three iTAGs were used, two of which have free access, with the following identifications:

• RFID Tag/BLID Tag: 08576179/FF:FF:C0:20:BF:26
• RFID Tag/BLID Tag: E445A379/FF:FF:C0:20:34:DD

A third tag was used to simulate equipment or a person without authorized access. After defining the respective tags, the system, constituted by the Server-Raspberry Pi and the BLID Module, was connected to the same local network and tested.

3 Results

The system was implemented in the Human–Machine Interface Laboratory (LIHOM), where all entities of the topology illustrated in Fig. 1 were present. Both the Server-Raspberry Pi and the BLID Module were connected to the local network via Ethernet cable. The results acquired by the system were obtained through a GUI developed with the NODE-RED software allocated on the Server-Raspberry Pi, acting as a Supervisory of the records collected through the MQTT protocol and stored in the local database. The Supervisory can be accessed as a web page by any computer or smartphone within the private local network created by the Router (see Fig. 1). The URL for accessing the Supervisory is the server's IP address on TCP port 1880, in this case the Raspberry Pi's IP (for example, https://192.168.0.100:1880). Figure 3a displays the developed GUI, where all the records made by the BLID Module can be viewed; the system administrator can, for example, observe all movements of equipment, employees or patients between sectors of a hospital. The Supervisory offers the following fields for the administrator's interaction with the records system and the BLID Module:

• U01: Displays all iTAGs registered in the module that have free access and the last movements of those iTAGs registered by the module, illustrated in Fig. 3b.
• Backup_U01: Displays the last 20 records with their respective indexes, RFID Tags, BLIDs (Bluetooth Identifications) and record date/time. Thus, the registered iTAGs with free access shown in field U01 of Fig. 3b have their records presented in the Backup_U01 field with the respective date/time. There is also an access attempt by an unauthorized iTAG, shown as "NON-AUTHORIZED". Some time later, when the unauthorized iTAG moves away from the BLID Module and the authorized iTAG with address FF:FF:C0:20:34:DD approaches it, access is released and registered in the system.
• Config U01: Command button that allows the supervisor to open the door remotely, for example in an emergency.
• Status: Current status of the door (Locked or Unlocked).
• Update RTC: Synchronizes the RTC of the BLID Module with the current time of the Server-Raspberry Pi.
• Reset Module: Command to restart the BLID Module.
• Search U01: Searches the database for the selected RFID Tag from a given date and time (mandatory field) until another date and time (if left blank, the system searches from the starting date and time until the current date and time). If the RFID Tag field is left blank, the system selects all tags in the chosen period.
• Update/Register: Inserts or updates an RFID Tag or BL (Bluetooth) Tag.
• Remove: Removes access of the selected RFID Tag/BL Tag from the database and the BLID Module.

4 Discussion

The proposed system, whose topology is illustrated in Fig. 1 and which consists of a BLID Module, a Server-Raspberry Pi, a Router and three iTAGs, was successfully implemented at the Human–Machine Interface Laboratory (LIHOM). The tests were carried out with members of the research group using the three iTAGs to simulate and record movements in and out of a room, while the records made by the system were monitored in real time through the developed GUI. The system is easy to install, provides a low-cost hardware solution and can serve either as an upgrade to existing security and access control services in hospitals or as a newly deployed system. The developed Supervisory has a self-explanatory and very useful interface, favoring the control of entries and exits in hospital environments. In addition, keeping access records in a hospital can help in later situations where it is necessary to verify which individuals had access to a certain environment, to locate equipment indoors, or to trace the movement history of hospital equipment. The tests carried out in this project aimed at the implementation of an access control system on a private local network where only the unit that holds all the system's records (Server-Raspberry Pi) communicates in a dedicated


Fig. 3 a Developed GUI, showing all the records collected by the BLID Module as well as the remote functionalities between the Server-Raspberry Pi and the BLID Module. b Fields U01 and BACKUP_U01, showing all the authorized iTAGs and the last 20 records of iTAGs detected by the BLID Module, respectively. Source: Author's collection, 2020


way with a BLID Module. However, the assembly, installation and configuration of the network and protocols used can be expanded to several BLID Modules spread across the health establishment, each with its own UUIDs and unique IP address, while a single unit holds all the records made by the BLID Modules; the supervisor then has access to all records through the GUI and the database system. A study by Park et al. in 2018 [16] demonstrated the efficiency of combining RFID and Bluetooth technologies for access control and indoor location, focusing on hardware and physical aspects to explain how the electromagnetic characteristics of these technologies can provide accurate indoor positioning. The present manuscript, in turn, provides a methodological way to merge the two technologies in hardware together with an access control system in software, presenting promising topologies, network protocols and software tools to implement a reliable security system in a healthcare environment.

5 Conclusions

This study evaluated the combination of two technologies, RFID and BLE, which offers great versatility when the MQTT network protocol is used as the main vehicle for communication between smart devices, providing efficient control and security of people and medical equipment. The characteristics of these technologies are essential for use in healthcare establishments: owing to the reduced radio-frequency transmission power used in this research and operation within the ISM band [17], they neither interfere with the operation of other medical devices nor pose risks to people's health. This work covered several areas of knowledge, from the selection of electronic components and hardware assembly, through the implementation of firmware algorithms, to the study of use cases and network protocols that meet the needs of the project. Within these needs, the communication system must support intense bidirectional traffic between the smart devices connected to the network, ensure security and guaranteed reception of data, and minimize the time spent processing and managing the data collected by the sensors coupled to the microcontroller. In addition, the hardware must be adaptable to non-ideal events in its environment, such as oscillations in the communication between the system's entities, while keeping the data traffic secure.

The usefulness of access control is clear in healthcare environments with restricted locations: it helps to prevent equipment theft, protects patients and employees, and maintains a history of access to these locations, allowing better management of people and medical equipment.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Al-Fuqaha A, Guizani M, Mohammadi M et al (2015) Internet of Things: a survey on enabling technologies, protocols and applications. IEEE Commun Surv Tutor 17:2347–2376. https://doi.org/10.1109/COMST.2015.2444095
2. Balbi P, Ribeiro PC, Gomes CF et al (2019) Análise do uso da RFID nas operações em dois hospitais no Brasil. In: XXXIX Encontro Nacional de Engenharia de Produção. https://doi.org/10.14488/ENEGEP2019_TN_STO_295_1669_36931
3. Sandoval R, Garcia-Sanchez AJ, Garcia-Sanchez F et al (2016) Evaluating the more suitable ISM frequency band for IoT-based smart grids: a quantitative study of 915 MHz vs. 2400 MHz. Sensors 17(1):76. https://doi.org/10.3390/s17010076
4. Ferreira J, Resende R, Martinho S (2018) Beacons and BIM models for indoor guidance and location. Sensors 18(12):4374. https://doi.org/10.3390/s18124374
5. Hillar GC (2017) MQTT essentials—a lightweight IoT protocol. Packt, Birmingham
6. Yassein MB, Shatnawi MQ, Aljwarneh S et al (2017) Internet of Things: survey and open issues of MQTT protocol. In: 2017 international conference on engineering MIS (ICEMIS), pp 1–6. https://doi.org/10.1109/ICEMIS.2017.8273112
7. Soni D, Makwana A (2017) A survey on MQTT: a protocol of Internet of Things (IoT)
8. SQLite. https://www.sqlite.org/index.html
9. NODE-RED. https://nodered.org/
10. Mosquitto Broker. https://mosquitto.org
11. ESP32-EVB development board. https://www.olimex.com/Products/IoT/ESP32/ESP32-EVB/open-source-hardware
12. ESP32 series datasheet. https://www.espressif.com/sites/default/files/documentation/esp32_datasheet_en.pdf
13. PN532/C1 NFC controller. https://www.nxp.com/docs/en/nxp/data-sheets/PN532C1.pdf
14. DS3231 extremely accurate I2C-integrated RTC/TCXO/crystal. https://datasheets.maximintegrated.com/en/ds/DS3231.pdf
15. Raspberry Pi 3 B+. https://static.raspberrypi.org/files/product-briefs/Raspberry-Pi-Model-Bplus-Product-Brief.pdf
16. Park JK, Kim J, Kang SJ (2018) A situation-aware indoor localization (SAIL) system using a LF and RF hybrid approach. Sensors (Basel) 18(11):3864. https://doi.org/10.3390/s18113864
17. Ghamari M, Janko B, Sherratt R et al (2016) A survey on wireless body area networks for eHealthcare systems in residential environments. Sensors 16(6):831. https://doi.org/10.3390/s16060831

Piezoelectric Heart Monitor

A. de S. Morangueira Filho, G. V. B. Magalhães, and F. L. Lopes

Abstract

Cardiovascular diseases are the leading cause of death globally. The increase in chronic diseases such as diabetes mellitus and hypertensive heart disease, added to the ageing of the population, brings risk factors that worsen these statistics. In this scenario, studies related to these diseases are fundamental for the prevention of deaths and for improvements in quality of life. For this purpose, a cardiac monitor was constructed using a piezoelectric transducer as the sensor. The signal was conditioned using rail-to-rail amplifiers and digitized by a PIC microcontroller. A computer controls and receives data from the microcontroller through an interface developed in Qt. When data are received, the application presents the seismocardiogram (SCG) waveform on a real-time chart, allowing the user to save data to a file and estimate the heart rate. The developed software also applies a digital filter to reduce interference at frequencies above 55 Hz. Recorded data were verified using MatLab®. The characteristic points of the SCG were identified in the graphs presented on the interface.

Keywords
Heart monitor · Piezoelectric · Seismocardiogram · SCG · PIC

A. de S. Morangueira Filho (✉) · G. V. B. Magalhães · F. L. Lopes
National Institute of Telecommunications—INATEL, Av. Joao de Camargo, 510, Santa Rita do Sapucai, MG, Brazil
e-mail: [email protected]
G. V. B. Magalhães
e-mail: [email protected]
F. L. Lopes
e-mail: fi[email protected]

1 Introduction

According to the World Health Organization (WHO), of the 56.9 million deaths occurring in the world in 2016, 24% were related to cardiovascular diseases, with ischemic heart disease and stroke together being the leading causes of death worldwide that year [1]. Data from Hyperdia, a program for the registration and monitoring of patients with Systemic Arterial Hypertension (SAH) and/or diabetes mellitus in the Brazilian government healthcare system (Sistema Único de Saúde, SUS), indicate that the main cause associated with the increase in cardiovascular disease in Brazil is SAH, and it is estimated that 22% of the Brazilian population is hypertensive. Studies related to cardiovascular diseases are fundamental to prevent deaths from this cause and to increase the quality of life of those who carry these diseases [2, 3]. Continuous and effective monitoring of biosignals, together with the use of accessible and reusable devices, is becoming increasingly necessary in biomedical engineering. In this context, the monitoring and early detection of discrepancies in the cardiac cycle are of great importance for the prevention and treatment of heart diseases [3]. A method that has been studied to increase the reliability and accuracy of cardiological examinations, the seismocardiogram (SCG) records body vibrations induced by the heartbeat. This signal contains information about cardiac mechanics, particularly heart sounds and cardiac output [3, 4]. Some studies in this direction have presented wearable devices to obtain SCG signals during long-term monitoring, in order to perform remote surveillance of patients with symptoms of heart congestion. Although there are alternative approaches to telemonitoring, some papers propose the use of a wearable system as a non-invasive and low-cost alternative with higher efficacy [3].

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_138


It is worth noting that mechanical cardiac vibrations are generated by valvular movement, blood flow, and myocardial contraction and relaxation, so the waveform of the SCG signal depends on the location on the body where the sensor is placed and its distance from the heart [5, 6]. In this context, the present work presents the development of a low-cost, single-channel cardiac monitor using a piezoelectric transducer, capable of providing the waveform of an SCG signal and thus making the potential of seismocardiography accessible.

2 Methods

In the development of the project, the main materials composing the hardware are a sensor element (piezoelectric diaphragm), voltage regulators, operational amplifiers, a microcontroller and a computer. The software elements are subdivided into two categories. The first deals with the processing and interface steps, developed using the Qt Framework®. The second refers to the firmware (software embedded in the microcontroller), implemented using the MPLAB X® IDE and the XC8® compiler. All the above-mentioned elements are described in the following sections. Except for the bench and computer equipment, the approximate total cost of the components used for prototyping the equipment was about US$ 20.00 in Brazil. Regarding acquisition, the signal was acquired from one of the authors, who underwent the experiment himself. Tests were performed positioning the sensor in two different specific locations, identified in Fig. 1 [6]. In the first region, the captured vibrations originate from the movement of the mitral valve, while in the second it is the aortic valve that causes the vibrations that stimulate the transducer. These tests served as the basis for the evaluation of the SCG signals, both in real-time tests during the implementation of the application and in the analysis of the recorded data using MatLab® software tools. The analysis consisted of the subjective comparison of the waveform of the digitized signals with those found in other studies, identifying the points that characterize the SCG signal.

Fig. 1 Location of measurement points [6]

3 Hardware

The hardware project consists of all the devices necessary for the vibration of the rib cage, originating from cardiac movements, to be converted into relevant information presented in the graphical interface of the Human–Machine Interface (HMI). The components were arranged in modules according to specific functions, described below:

3.1 Power Module

Responsible for supplying the voltages needed by the amplifier and microcontroller circuits, this module consists of two voltage regulators, one for each required voltage level, namely +5 V and −5 V. The module can be powered by two 9 V batteries or by a symmetrical source.

3.2 Sensor Module

This module consists of the sensor element itself and the signal conditioning circuit. The electronic circuit receives the signal from the piezoelectric transducer, then performs conditioning and transmission to the digitization module. The sensor element is a 27 mm diameter model 7BB-27-4 metallic piezoelectric diaphragm. In simple terms, when the piezoelectric component is subjected to pressure waves, the balance of positive and negative charges is disturbed, generating electrical charges on the terminals proportional to the amplitude of the applied force. Due to the wide range of possible values of that electrical signal (from µV to V), it is essential to condition the signal before digitization [7, 8]. To condition the signal, a pre-amplification and an amplification circuit were designed using the charge amplifier configuration. After this stage, the signal is submitted to a fine

Piezoelectric Heart Monitor

adjustment, shifting it from the operating range of −2.5 to +2.5 V to the range between 0 and 5 V, which is the input range of the microcontroller's Analog-to-Digital Converter (ADC). With a well-defined excursion, the signal is made available to the microcontroller for sampling [9].
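The level shift just described — mapping the amplifier's −2.5 V to +2.5 V excursion onto the ADC's 0–5 V input window — amounts to adding a 2.5 V offset. A minimal sketch (the function name and clamping behavior are illustrative, not taken from the paper's circuit):

```python
def shift_to_adc_range(v_signal, v_offset=2.5, v_min=0.0, v_max=5.0):
    """Offset the conditioned signal into the ADC input window,
    clamping anything outside the allowed excursion."""
    v = v_signal + v_offset
    return min(max(v, v_min), v_max)

print(shift_to_adc_range(-2.5))  # 0.0 (bottom of the ADC range)
print(shift_to_adc_range(0.0))   # 2.5 (mid-scale)
print(shift_to_adc_range(2.5))   # 5.0 (top of the ADC range)
```

In the hardware this offset is produced by the analog conditioning stage; the point of the sketch is only that a symmetric signal around 0 V lands entirely within the ADC's unipolar input range.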

3.3 Digitization Module (PIC)

Consisting of a Microchip PIC16F877A microcontroller, this module receives the conditioned signal from the sensor module and, when commanded, performs the analog-to-digital conversion, generating a digital word that represents the amplitude of the sample. Configured to operate at a sampling rate of 1428 Hz, it sends the digital samples to the HMI through an asynchronous serial link. The PIC16F877A was chosen mainly for its low cost, approximately US$ 5.00 in Brazil, and for providing 8 ADC channels with 10-bit resolution and a serial communication interface; its versatility allows various operating configurations. Using the ADC, it is possible to obtain, roughly, the representation of an analog magnitude in digital format. In this project, the range from 0 to 5 V is mapped onto 1024 levels, since 2¹⁰ = 1024, so each level of the digital value indicates an increase of approximately 0.00489 V in the signal voltage. Figure 2 presents the signal conditioning circuit, which prepares the signal for the digitizing process in the microcontroller. The firmware was developed with the MPLAB® IDE and the XC8® compiler, both from Microchip®. This code comprises the control and operation instructions for the PIC

Fig. 2 Signal conditioning circuit

927

microcontroller. The objectives, in addition to the startup and configuration of the device, are three basic operations: receive commands through the serial link, sample ADC channel and send data through the serial link. The firmware operation is described by the block diagram illustrated in Fig. 3. Once energized, the microcontroller starts up, and enters operation mode, in which, the system stays in loop checking the asynchronous serial link reception flag continuously. A green LED indicates the operation mode. If the device receives a byte through the serial receive port, an interrupt is generated, and the received byte flag is set up. Then, the contents of the received byte are checked against the accepted commands: 0x85h or 0x86h, corresponding to start sampling or terminate sampling, respectively. From the initial state, upon receiving the command to start sampling, the microcontroller enters the acquisition mode, activating an indicative blue LED. In this mode, it performs another loop, in which sends a 16-bit digital word every 700 µs. In this word, only the less significant 10 bits are useful, containing the sample value scanned for the serial link. Thus, the microcontroller makes the samples available for the serial link at a speed of 1428 Hz, generating a transmission rate of 2857 bytes/s. The microcontroller should wait other delays caused by the acquisition and processing steps of the HMI software, resulting a sampling rate of 1 kHz for the entire system. This rate was evaluated during software development by use of debugging tools. In each cycle of the acquisition loop, the received byte flag is also checked. The loop will only terminate, returning the microcontroller to the initial operation mode, when a byte corresponding to the terminate sampling command word, 086h is received.
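The quantization and command handling described above can be sketched as follows. This is a simplified Python model, not the actual XC8 firmware; the command values 0x85/0x86, the 0–5 V 10-bit mapping and the 700 µs sample period are taken from the text.

```python
# Simplified model (not the actual XC8 firmware) of the digitization
# module: 10-bit quantization of a 0-5 V input plus the start/stop
# command handling over the serial link.
V_REF, N_BITS = 5.0, 10
LEVELS = 2 ** N_BITS                 # 1024 levels
STEP = V_REF / LEVELS                # ~0.00489 V per level

def adc_code(voltage):
    """Map an input voltage to its 10-bit code, clamped to the ADC range."""
    voltage = min(max(voltage, 0.0), V_REF)
    return min(int(voltage / STEP), LEVELS - 1)

START_SAMPLING, STOP_SAMPLING = 0x85, 0x86

def handle_byte(byte, sampling):
    """Return the new sampling state after a command byte arrives."""
    if byte == START_SAMPLING:
        return True
    if byte == STOP_SAMPLING:
        return False
    return sampling                  # unrecognized bytes are ignored

# Serial throughput implied by the 700 microsecond sample period:
SAMPLE_PERIOD_US = 700
samples_per_s = 1_000_000 / SAMPLE_PERIOD_US   # ~1428 samples/s
bytes_per_s = 2 * samples_per_s                # 16-bit words -> ~2857 bytes/s
```

For instance, a 2.5 V input maps to code 512, the midpoint of the 10-bit range.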


A. de S. Morangueira Filho et al.

Fig. 3 Microcontroller program description

3.4 Human–Machine Interface—HMI

The HMI is a computer on which the monitor operation program resides. It is through this program that the equipment is controlled, and it is also where the data, processed in real time, are presented. The software architecture is described in the following section.

4 Software

In this project, two categories of software were developed. The first, the firmware already described in the previous section, relates to the programming of the microcontroller. This section presents the second category of software developed in this work, comprising the processing of the digitized data and the graphical interface for operating the system, which resides in the HMI. The processing and interface software were developed using the Qt® Framework. One motivation for choosing this platform is the ease of porting the system to most operating systems (Windows, Linux, Android); it is only necessary to compile the code for the desired architecture. In addition, Qt® provides several kinds of development tools (threads, networking, graphical interfaces, etc.) that facilitate the development of the program [10]. A very useful feature of Qt® is its extremely flexible message-exchange mechanism based on signals, slots and connections. Since a detailed description of these features is beyond the scope of this article, it suffices to understand that a signal is a message emitted by a given object, and a slot is a function, which may or may not belong to the same object, that is executed when the signal connected to it is emitted. In short, a function (slot) is executed whenever a trigger (signal) attached to it (via connect) fires [10]. The program architecture can be analyzed by subdividing it according to its specific functions, explained in the following subsections.
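As a rough illustration of the signal/slot idea, the pattern can be mimicked in a few lines of plain Python (an analogy only, not Qt's actual implementation):

```python
# Plain-Python analogy of Qt's signal/slot mechanism (not Qt itself):
# a Signal stores the slots connected to it and invokes each one,
# with the emitted arguments, when emit() is called.
class Signal:
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)

received = []
sample_received = Signal()                # plays the role of a Qt signal
sample_received.connect(received.append)  # attach a slot (here, a list method)
sample_received.emit(512)                 # the connected slot runs with 512
```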

4.1 Main Thread

This thread performs the construction, control and updating of the graphical interface screens (currently available only in Portuguese), which can be viewed in Fig. 4. It receives and manages the interactions with the operator through the buttons and other input elements, including system activation. As soon as the system is activated, the main thread starts the acquisition thread, sends the message for the microcontroller to enter acquisition mode and, if the operator has so selected, opens a file to save the data. Samples are received from the acquisition thread whenever it emits a signal indicating such an occurrence, which makes the Amostra Recebida slot execute. Each sample received is then added to a vector (buffer) and, if the operator so desires, it is also stored in the recording file. When the signal associated with it is emitted, the Plotar slot is executed. It immediately checks whether the operator has activated the digital filter and, if so, the convolution of the signal with the FIR filter is performed, resulting in the filtered signal.

Fig. 4 Graphical user interface—HMI


The filtered waveform is then drawn in the chart area of the graphical interface; if the filter is disabled, the waveform of the original signal is drawn in this area instead. The aforementioned digital filter was implemented for the processing of the digitized signal in the HMI program, in order to allow the filtering of spurious signals from the power grid. This filter is configured to eliminate signals with frequencies above 55 Hz and can be disabled or enabled at run time, that is, even during monitoring. It is a low-pass filter of the FIR type, with a Hamming window, cutoff frequency at 55 Hz and a transition band of 10 Hz. It was implemented following the expression:

h[n] = K · (sin(2πf_c(n − M/2)) / (n − M/2)) · w[n]   (1)

where w is the Hamming window, M is the length of the filter and K is the factor that normalizes the filter coefficients. The purpose of the window is to decrease the filter ripple in the frequency domain [11]. The algorithm that calculates the heart rate analyzes the signal whose waveform is currently displayed on the graph, be it original or filtered. The software scans the signal and identifies the parts corresponding to the instants when the aortic valve is open, the AO points. This scan observes a given time window, longer than 3 s, defined by the system operator. Knowing the instants at which the AO points occurred, the algorithm calculates the average time interval between these points. From the average period, it is a straightforward task to calculate the heart rate in beats per minute, which is then displayed in the graphical interface.
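A sketch of the filter design in Eq. (1) and of the AO-interval heart-rate estimate is given below. This is an illustrative Python version, not the authors' C++ code: the cutoff is assumed to be expressed as a fraction of the 1 kHz sampling rate (i.e. 55/1000), and the detection of the AO points themselves is omitted.

```python
import math

def hamming(n, M):
    """Hamming window w[n] for a filter of length M + 1."""
    return 0.54 - 0.46 * math.cos(2 * math.pi * n / M)

def lowpass_fir(fc, M):
    """Windowed-sinc low-pass kernel following Eq. (1); fc is the cutoff
    as a fraction of the sampling rate (55/1000 for this system)."""
    h = []
    for n in range(M + 1):
        m = n - M / 2
        if m == 0:
            h.append(2 * math.pi * fc * hamming(n, M))   # sin(x)/x -> x limit
        else:
            h.append(math.sin(2 * math.pi * fc * m) / m * hamming(n, M))
    k = 1.0 / sum(h)              # K: normalize the coefficients (unity DC gain)
    return [k * v for v in h]

def heart_rate_bpm(ao_times_s):
    """Mean heart rate from the instants (in seconds) of the AO points."""
    intervals = [b - a for a, b in zip(ao_times_s, ao_times_s[1:])]
    return 60.0 / (sum(intervals) / len(intervals))
```

With AO points at 0.0, 0.8, 1.6 and 2.4 s, for example, the mean period is 0.8 s, giving 75 bpm.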



The functionality of recording data to a file for later analysis allows, for example, the file to be sent to a geographically distant specialist, who can perform the analysis and issue an opinion. In addition to clinical use, the file can also be used for scientific purposes, such as the study of SCG signal patterns using software like MatLab®. It is also possible to generate an audio file from these data, translating the analysis to the sound domain.

6 Results

There are some points in the SCG waveform that characterize specific mechanical activities of cardiac dynamics, called fiducial points. According to Sahoo et al. [5], nine main points can be highlighted: peak of Atrial Systole (AS), closing of the Mitral Valve (MC), Isovolumic Movement (IM), opening of the Aortic Valve (AO), Isovolumic Contraction (CI), peak of Rapid systolic Ejection (RE), closing of the Aortic Valve (AC), opening of the Mitral Valve (OM) and peak of Rapid diastolic Filling (RF) (Fig. 5) [5]. The locations of the points of interest in the SCG for the mitral valve are identified in Fig. 5, and Fig. 6 illustrates the characteristic points in the aortic valve SCG.

Once the signals are digitized, while the waveform is delineated in the HMI, the system also saves the data to the file. Figure 7 reproduces the image generated in the chart area of the HMI while the system captures the signals in the aortic valve region. In the highlight, the location of the fiducial points in the SCG for this region can be seen; they were inserted manually through a subjective comparative analysis between Figs. 6 and 7. Figure 8 shows the representation of a signal captured in the same region, but at different moments in time, to demonstrate the operation of the implemented digital filter. Turning to the analysis of the saved data, a recorded data file was loaded into the MatLab® software. It is important to emphasize that data are recorded in raw (unfiltered) form, even if the digital filter is activated. Figure 9 illustrates the difference between the raw signal and the filtered signal, both plotted in MatLab® from the data of the same recorded file; the filtered plot was processed by a filter similar to that of the operation software, but implemented in MatLab®. In the top, red plot of Fig. 8, the acquisition is done with the filter deactivated, evidencing the high level of noise energy that sometimes distorts the signal. In the blue plot, the filter is activated, which causes signals above 55 Hz to be attenuated, improving the representation of the signal. Regarding the acquisition of signals in the mitral valve region, the representation of an SCG waveform by the HMI is presented in Fig. 10. Note that the characteristic points in the waveform of the mitral SCG were also identified by a subjective analysis, comparing the patterns of the waveforms in Figs. 5 and 10. In the same HMI representation, Fig. 11a shows the mitral SCG with the digital filter disabled (red plot), while the waveform displayed on the HMI with the digital filter enabled is represented by the blue plot in (b).
Considering the analysis of the recorded data of the mitral SCG, when evaluated in MatLab®, the result

Fig. 5 Waveforms SCG characteristic points—mitral valve [5]

Fig. 6 Waveforms SCG characteristic points—aortic valve [5]

4.2 Acquisition Thread

This thread is intended to receive each digitized sample and provide the data to the main thread, which then processes the data and presents the results on screen. When enabled, the thread enters a loop: every time a sample coming from the digitizer module arrives, it is made available in a buffer and a signal is emitted so that the main thread can run the Amostra Recebida slot. In this same loop there is a received-sample counter, and every time this counter reaches a threshold value, set by the operator, a signal is issued for the main thread to run the Plotar slot. The length of the processing window is based on this counter threshold, which indirectly controls the chart refresh rate in the HMI.
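The buffering and notification scheme just described can be modeled in a few lines (plain Python, with callbacks standing in for the Amostra Recebida and Plotar signals; the threshold value and windowing behavior follow the text, the rest is our simplification):

```python
# Simplified model of the acquisition thread: each sample is buffered
# and announced; every `threshold` samples a plot notification fires,
# which indirectly sets the chart refresh rate.
def acquisition_loop(samples, on_sample, on_plot, threshold):
    buffer, count = [], 0
    for s in samples:
        buffer.append(s)
        on_sample(s)            # stands in for emitting Amostra Recebida
        count += 1
        if count >= threshold:  # stands in for emitting Plotar
            on_plot(list(buffer))
            buffer.clear()
            count = 0

plots = []
acquisition_loop(range(7), on_sample=lambda s: None,
                 on_plot=plots.append, threshold=3)
# two full windows of 3 samples are plotted; the 7th sample stays buffered
```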

5 Seismocardiogram Fiducial Points



Fig. 7 Aortic characteristic SCG points—HMI

Fig. 8 Raw (a) and filtered (b) aortic seismocardiogram—Human–Machine Interface

presented in Fig. 12 shows an overlap of the raw SCG with the filtered curve. Using the digitized data in the MatLab® software, audio files were also generated, making it possible to examine the signal from the sound perspective.

7 Discussion

Although built with easily accessible and relatively low-cost components (about US$20.00 in Brazil), the system achieved good approximations of the



Fig. 9 Recorded aortic seismocardiogram—Human–Machine Interface and filtered—MatLab®

Fig. 10 Mitral characteristic points—HMI

waveforms, even when subject to several influences from the environment. This can be observed by comparing Fig. 6 (template) and Fig. 7 (sample), where it was possible to manually identify the aortic SCG fiducial points by a simple subjective comparative analysis. It was also possible to identify the mitral valve SCG characteristic points, as corroborated by a comparison between Fig. 5 (template) and Fig. 10 (sample). It was not possible to guarantee the neutrality of the electromagnetic environment in the performed tests; the noise perceived in the acquired signals, judging by its frequency and shape, most certainly originated in the power grid. Through the recording functionality developed, the stored data allow the study of cardiac mechanics from the point of view of the SCG, which may lead to new understandings about cardiovascular behavior and the SCG itself. Furthermore, the possibility of converting recorded data to sound form provides a differentiated analysis of the signal. This enables alternative approaches in the analysis of the SCG, either for research or for possible clinical applications, since audio can be used as an auxiliary tool in a possible specialized

analysis, providing greater support in the investigation of these data. Previous studies that used signal processing and fluid mechanics modelling to synthesize heart sounds suggest great potential for obtaining information for monitoring and diagnosis. In addition, the detection and processing of heart sounds of very low frequency, below the detection limit of the human ear, can increase the diagnostic power of auscultation [12]. Although the SCG may seem only a complementary alternative to other types of exams, as early as 1990 Salerno and Zanetti suggested that the seismocardiogram can be a useful tool for detecting and evaluating left ventricular diseases [13]. Moreover, heart rate estimation using the SCG has already proved more accurate than estimates obtained from phonocardiography (PCG) and photoplethysmography (PPG) signals, provided that appropriate algorithms are used. Another recent study has also shown the SCG to be an appropriate alternative to the PCG in the estimation of a biomarker used in the diagnosis of various cardiac pathologies [12, 14].


Fig. 11 Raw (a) and filtered (b) mitral seismocardiogram—Human–Machine Interface

Fig. 12 Recorded mitral seismocardiogram—raw and filtered—MatLab®




As previously described, the SCG shape may be affected by the location of the sensor on the body, in addition to respiratory movements and other factors. Investigating the effect of these interactions on the estimation of cardiac time intervals from SCG signals may reveal clinically useful information or simply be used to monitor breathing [15].

8 Conclusions

The data present in the SCGs obtained with the proposed device demonstrated that it is possible to monitor cardiac activity for prevention and diagnostic purposes, since they allowed the identification of the fiducial points that characterize the signal. In addition, the option to record the signals opens further analysis possibilities which, in turn, can yield information that corroborates the results of other exams, such as the conversion of the SCG to audio. Furthermore, the result is a useful, simple and non-invasive tool that, with some enhancements, can offer even remote monitoring in a wearable format, all at an affordable cost.

Conflict of Interest The authors declare no potential conflicts of interest with respect to the research, authorship, or publication of this article.

References

1. World Health Organization (2018) Global health estimates 2016: deaths by cause, age, sex, by country and by region, 2000–2016. Geneva
2. Almeida-Santos MA, Prado BS, Santos DMS (2018) Análise espacial e tendências de mortalidade associada a doenças hipertensivas nos estados e regiões do Brasil entre 2010 e 2014. Int J Cardiovasc Sci 31(3):250–257. https://doi.org/10.5935/2359-4802.20180017
3. Inan OT, Pouyan MB, Javaid AQ et al (2018) Novel wearable seismography and machine learning algorithms can assess clinical status of heart failure patients. Circ Heart Fail 11(1). https://doi.org/10.1161/CIRCHEARTFAILURE.117.004313
4. Al Ahmad M (2016) Piezoelectric extractions of ECG signal. Nat Sci Rep 6, Article number 37093. https://doi.org/10.1038/srep37093
5. Sahoo PK, Thakkar HK, Lee MY (2017) A cardiac early warning system with multi channel SCG and ECG monitoring for mobile health. Sensors (Basel) 17(4):711. https://doi.org/10.3390/s17040711
6. Lin WY, Chou WC, Chang PC et al (2018) Identification of location specific feature points in a cardiac cycle using a novel seismocardiogram spectrum system. IEEE J Biomed Health Inform 22(2):442–449. https://doi.org/10.1109/JBHI.2016.2620496
7. Texas Instruments. https://www.ti.com/lit/an/sloa033a/sloa033a.pdf
8. ST Electronics. https://www.st.com/content/ccc/resource/technical/document/application_note/03/c3/82/1e/a3/57/44/50/DM00188713.pdf/files/DM00188713.pdf/jcr:content/translations/en.DM00188713.pdf
9. Boylestad RL, Nashelsky L (2013) Dispositivos Eletrônicos e Teoria de Circuitos, 11th edn. Pearson, São Paulo, p 518
10. Lazar G, Penea R (2016) Mastering Qt 5. Packt, Birmingham, pp 18–19
11. Smith SW (1997) The scientist and engineer's guide to digital signal processing. California Technical, San Diego
12. Taebi A, Solar BE, Bomar AJ, Sandler RH, Mansy HA (2019) Recent advances in seismocardiography. Vibration 2(1):64–86. https://doi.org/10.3390/vibration2010005
13. Salerno DM, Zanetti J (1990) Seismocardiography: a new technique for recording cardiac vibrations. Concept, method, and initial observations. J Cardiovasc Technol 9:111–118. https://doi.org/10.1378/chest.100.4.991
14. Cosoli G, Casacanditella L, Tomasini EP, Scalise L (2017) Heart rate assessment by means of a novel approach applied to signals of different nature. J Phys Conf Ser 778. https://doi.org/10.1088/1742-6596/778/1/012001
15. Di Rienzo M, Vaini E, Lombardi P (2017) An algorithm for the beat-to-beat assessment of cardiac mechanics during sleep on earth and in microgravity from the seismocardiogram. Nat Sci Rep 7:15634. https://doi.org/10.1038/s41598-017-15829-0

Transducer for the Strengthening of the Pelvic Floor Through Electromyographic Biofeedback

C. M. Silva, B. C. Bispo, G. R. P. Esteves, E. L. Cavalcante, A. L. B. Oliveira, M. B. C. Silva, N. A. Cunha, and M. A. B. Rodrigues

Abstract

Pelvic Floor Muscle (PFM) dysfunction affects about 50% of women, and strengthening the PFM prevents and minimizes disorders such as hypotonia. Facing the need to strengthen the PFM, an intravaginal transducer was developed to capture the PFM electrical activity and drive a smartphone game, using the Bluetooth protocol, allowing the user to have easy interaction and visual feedback of the PFM contraction. The developed transducer was positioned in the intracavitary region of the volunteer's vagina, in order to acquire the PFM contraction signal, identified by the analysis of the electromyography (EMG) record. The transducer has its own EMG amplification system and a wireless communication system in a flexible casing. The volunteers were instructed to insert the transducer and perform the contractions and relaxations proposed during the game, which has a scoring system for each stage and measures the EMG amplitude in real time. The use of the transducer was assessed at the end of the tests using a form, in which the volunteers could also give suggestions for improvements in terms of use and comfort. The evaluation reported that the transducer is easy to use and handle. Regarding the EMG analysis, the tracing was considered normal, without any interference. The transducer had similar responses for both volunteers with the same amplifier gain. However, the transducer is an invasive instrument for PFM rehabilitation, so it is necessary to characterize the patient according to the parameters weight, height and age, and to ensure the proper positioning of the transducer in the vaginal canal, to improve the EMG signal capture in the training of muscle fibers for treatment in urogynecological physiotherapy.

C. M. Silva (&) · B. C. Bispo · G. R. P. Esteves · E. L. Cavalcante · M. B. C. Silva · N. A. Cunha · M. A. B. Rodrigues
Department of Electronics and Systems, Federal University of Pernambuco (UFPE), Prof. Moraes Rego Ave, Recife, PE, Brazil

A. L. B. Oliveira
Department of Physiotherapy, Federal University of Pernambuco (UFPE), Recife, PE, Brazil

Keywords

Pelvic floor muscle · EMG · Invasive · Women

1 Introduction

The dysfunction of the Pelvic Floor Muscle (PFM) affects about 25–50% of women, who report muscle hypotonia and dissatisfaction with sexual function. Strengthening the PFM improves this dysfunction [1]. Electromyography (EMG) has been used by physiotherapists and specialists in urogynecology as a method of measuring muscle electrical activity, in order to train muscle function (contraction and relaxation) using biofeedback, obtaining a response through signals and images [2]. The contraction of the pelvic floor muscles can be measured through EMG, as can the pelvic floor muscle tone, which provides real-time guidance to women who do not contract the PFM effectively [3]. EMG measurement of the bioelectric activity of muscle fibers is based on the depolarization and repolarization of the muscle fiber membrane; these changes generate electrical potentials which, together, make up the EMG record. The EMG can thus be used as a method of assessing the pelvic floor muscles [4]. The EMG has been used in research to assess PFM contraction, associated with electromyographic biofeedback in Virtual Reality (VR) games; this is achieved by correlating the contractions of the motor units in the perineal region with the EMG signal [5]. The prototype of an intravaginal transducer developed in this work consists of equipment to perform the acquisition of the EMG signal in the intracavitary region, wireless data transmission and visual analysis (graphs, signals and

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_139



C. M. Silva et al.

images) through a game, using a smartphone device. The device communicates with the smartphone through the Bluetooth protocol, allowing the user to have an easy interaction with the device and thus obtain PFM feedback.

2 Materials and Methods

The research was carried out with presentation of the Free and Informed Consent Term, after approval by the Ethics and Research Committee for Studies with Human Beings at the Federal University of Pernambuco (UFPE), Brazil, under number 97248818.7.0000.5208. The volunteers were then instructed on the benefits of the intracavitary electrode, its functions and the purpose of the work. After agreeing with the terms, the volunteers were invited to start the collection procedure. Initially, the mechanical part of the transducer was modeled on the OnShape online CAD platform and, later, the flexible casing was made on a 3D printer. These parts were developed specifically for this function, in conjunction with the amplification and wireless communication system for the EMG signal, all inserted inside the flexible casing, composing the transducer used in this study. Each volunteer used a single transducer. In this work, 10 healthy women participated in the tests to validate the equipment and the electromyographic biofeedback system; the more the volunteers' PFM EMG signals differed, the better the equipment acceptability test in this study. The transducer has a cylindrical shape (see Fig. 1a), with two stainless steel electrodes affixed to its external surface, on opposite sides of the transducer body. The total height of the transducer is 8 cm, of which only 5 cm is inserted into the vaginal cavity, adjusting to the pelvic floor, without the need for the woman to hold the transducer while performing PFM contractions. The transducer also has a cavity with an internal diameter of 19 mm and an external diameter of 22 mm, allowing the insertion of electronic circuit boards that can be removed for cleaning, adjustments and reuse in another sensor. The transducer has a conductive wire covered with a sterile number 10 urethral probe (diameter 3.3 mm, length 400 mm, type Embramed), to avoid contact with the intimate region.
During the tests, it was decided to use a 3.7 V LiPo battery with 400 mAh and dimensions of 25 × 35 × 3 mm (responsible for powering all electronic components). This battery is located outside the transducer, together with the external contact electrode, which must be located on the volunteer's anterior iliac spine, with the system connecting to the smartphone and the pre-existing game through Bluetooth (see Fig. 1a). The developed firmware has the function of configuring the MSP430 microcontroller, capturing the EMG signals and sending the information to the game. The configuration

Fig. 1 a Electrode mounted with built-in circuit, battery box and external contact electrode coated with probe; b diagram of acquisition, conditioning and transmission of the EMG signal. Source Author’s own collection, 2020


determines which ports will be inputs or outputs and what type of communication will be used. The EMG signals are read through an analog port of the microcontroller. After the signal's acquisition, a moving average filter with N equal to 20 is applied to eliminate high-frequency noise. Finally, the data packet is assembled and sent to the smartphone device through the Bluetooth protocol. The hardware thus includes the microcontroller, operational amplifiers, a connector for the Bluetooth module and voltage regulators, in addition to passive components such as resistors and capacitors. The architecture implemented in the developed system, from the acquisition of the electromyographic signals to the Bluetooth communication, is illustrated in Fig. 1b. The entire circuit, with the exception of the battery and the external electrode, was installed inside the flexible intravaginal casing. In this transducer, custom steel electrodes were made to acquire the electrical signals from the PFM. These electrodes were coupled to a flexible casing made of PLA, manufactured by 3D printing; inside this casing are all the electronic components for the acquisition, conditioning and sending of the electromyographic signals, that is, the microcontroller, the analog circuit and the Bluetooth module. The protection of the electronic circuit inside the enclosure was achieved with layers of plastic film, in order to protect the circuit from fluids that could enter the enclosure and damage it. The terminals coming from the electrode pair are soldered to the input of a filter called the "Fluid Filter" (Fig. 1b), which is simply a high-pass filter whose function is to eliminate the low-frequency noise caused by the vaginal moisture present in the environment where the system operates.
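The N = 20 moving average mentioned above can be sketched as a simple causal running mean (the exact firmware implementation is not given in the text, so this is an assumed form):

```python
from collections import deque

# Causal moving-average filter, N = 20 by default, as applied to the
# sampled EMG values to attenuate high-frequency noise (sketch; the
# exact firmware implementation is not specified in the text).
def moving_average(samples, n=20):
    window, out = deque(maxlen=n), []
    for s in samples:
        window.append(s)                 # oldest sample drops out at length n
        out.append(sum(window) / len(window))
    return out
```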
These components are embedded in the casing and sealed with a screw base so that there is no direct contact with vaginal moisture; in use, however, the prototype is in direct contact with the vaginal canal. The transducer is for individual use, easy to clean with neutral soap and water, and stored separately. The instrumentation amplifier used was the INA129 from Texas Instruments [6], amplifying the PFM bioelectric signals to voltage levels on the order of hundreds of millivolts for later analog-to-digital conversion. In addition, a band-pass filter was coupled to the output of the INA129, through the combination of a passive high-pass filter with a cutoff frequency of 7.2 Hz and, in cascade, an active low-pass filter with a cutoff frequency of 200 Hz, in order to attenuate the noise from the environment where the transducer is located, in line with [7, 8]. After filtering the signals from the INA129, an adder circuit (illustrated in Fig. 1b, where its symbol is a "+") was implemented with the purpose of adding an offset voltage of approximately 1.3 V, avoiding negative voltages in


the analog port of the microcontroller, since the presence of negative voltages on microcontroller pins can cause irreversible damage to the chip. The microcontroller used belongs to a family of low-power Texas Instruments microcontrollers [9]. It met all the needs of the proposed system: it receives the analog signal from the EMG signal processing circuit, performs the analog-to-digital conversion at a sampling rate of 1000 samples per second, and sends the digitized EMG data to the Bluetooth module. The Bluetooth module is responsible for sending the EMG signals digitized by the microcontroller to the smartphone device. Through the data received, it is possible to interact with the game in real time, providing visual and playful feedback to the volunteer.
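As a back-of-the-envelope check of the offset stage (the ~1.3 V offset and the hundreds-of-millivolts signal amplitude are taken from the text; the example excursion value is ours):

```python
# The amplified EMG swings a few hundred millivolts around zero; adding
# the ~1.3 V offset keeps the voltage at the microcontroller's analog
# pin non-negative (negative pin voltages could damage the chip).
OFFSET_V = 1.3

def adc_pin_voltage(amplified_emg_v):
    return amplified_emg_v + OFFSET_V
```

Even a −0.4 V excursion of the amplified signal remains positive at the pin after the offset is added.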

3 Electromyographic Feedback

A smartphone game was used to interact with the developed transducer through Bluetooth communication. The main objective of the application is to promote visual and interactive feedback to the user, in order to encourage her, in a playful way, to perform the therapeutic activities. The application was inspired by mountain biking (uphill/downhill) and consists of a 2D platform game in which the user controls a cyclist who needs to travel as far as possible on a route up and down mountains; PFM contractions are needed so that the avatar (see Fig. 2) can overcome the obstacles represented by the valleys. The game features two modes, fast fibers and slow fibers, and the user can select which mode to use as soon as the application is started. In both modes, the contraction must be maintained from the arrival of the first flag, before the mountain (see Fig. 2a), until the cyclist reaches the top of the valley, where the next flag is (see Fig. 2b); the volunteer must then relax the PFM so that the cyclist can go down the valley during the game. In addition, commands are shown in text form to facilitate the synchronization of PFM contraction or relaxation times [5]. The developed transducer was adapted so that it is able to acquire EMG signals specific to the PFM and enable transmission via Bluetooth to the smartphone running the MyoPelvic application. When starting the application, the volunteer is asked to insert the transducer into the vaginal canal. Then, at the beginning of the game, the transducer is paired with the smartphone via Bluetooth. Soon after, the user can select the slow or fast fiber mode and, finally, the game starts. In fast fibers mode, the game allows a contraction period of less than 4 s, and the relaxation should last twice as long as the contraction, respecting the contraction period for phasic fibers (1:2). In slow fibers mode this period extends between 4



Fig. 2 a The game screen of the MyoPelvic application shows the moment of PFM contraction; b MyoPelvic application screen shows the moment to relax the PFM after the cyclist reaches the top of the mountain. Source Author’s own collection, 2020

and 10 s, and the relaxation should last twice as long (2:4), for tonic fibers. The game has 12 matches (each match with 12 contractions); after the end of a match, the woman needs a 2-min rest. At this moment, the PFM relaxes before the next match starts, and a countdown timer is presented to assist with the waiting time [5].
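The contraction/relaxation ratios described above can be expressed as a small helper. This is a hypothetical sketch; the game's actual timing code is not shown in the paper.

```python
# Hypothetical timing helper for the two game modes: fast fibers allow
# contractions under 4 s, slow fibers 4-10 s; in both cases the
# relaxation lasts twice the contraction (ratios 1:2 and 2:4).
def relaxation_time_s(contraction_s, mode):
    if mode == "fast" and not 0 < contraction_s < 4:
        raise ValueError("fast-fiber contraction must be under 4 s")
    if mode == "slow" and not 4 <= contraction_s <= 10:
        raise ValueError("slow-fiber contraction must be between 4 and 10 s")
    return 2 * contraction_s
```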

4 Results and Discussion

The prototype was initially tested with a digital oscilloscope to adjust the gains. The main researcher used the internal intracavitary electrode; contractions were verified with a peak-to-peak voltage amplitude of approximately 2 V and an average value around 1.3 V (see Fig. 3a), while in relaxation (see Fig. 3b) peak-to-peak amplitudes of 500 mV (due to noise) and an average voltage of 1.3 V were measured. Based on this average value, the system's hardware gain was configured so that the volunteers could use it; a prior calibration via software was still necessary, since the amplitudes and the average value vary from one individual to another. During the autonomy tests, an autonomy of approximately 2 h was verified with the prototype operating continuously.

Fig. 3 a Contraction of the pelvic floor; b relaxation of the pelvic floor. Source Author’s own collection, 2020

The prototype was tested on healthy women; women with symptoms of urinary incontinence would nevertheless show little variation in use, since an initial calibration of the maximum and minimum EMG values is performed. This calibration is done individually each time the game is started on the smartphone. From the first tests performed, the frequency range of the analog filters was adjusted, as well as the gain of the amplifiers, so that the system could later be tested with the 10 volunteers. This research included women over 18 years of age with different biotypes and body masses; a uniform body mass was not required, since the goal was to analyze the responsiveness of the transducer in different anatomies of the vaginal canal. In the study by Enck and Vodusek [10], it was observed that the adipose tissue of the cavity can influence the contraction of the PFM. According to Moretti et al. [5], some women use accessory musculature to assist at the moment of PFM contraction and relaxation, so good awareness of the proposed activity is necessary during the testing and validation of the equipment. Thus, volunteers with different biotypes and weights were needed to validate the transducer. Although volunteers may have different biotypes and contraction levels, the previously performed calibration guarantees the detection of EMG signals, limiting

Transducer for the Strengthening of the Pelvic Floor …


them between the maximum and minimum values of the instrumentation amplifier's operating range. In the lithotomy (gynecological) position, the volunteers were instructed not to move the pelvis and to perform contractions following the smartphone application as visual feedback. Data analysis was performed in a Microsoft Excel 2013 spreadsheet and later exported to Microsoft Word 2013. The data were obtained through the online QuestionPro platform, where volunteers were invited to answer pre-test questions covering personal information such as age, marital status, sexual frequency, and use of devices for sexual satisfaction. Right after the test, the usability of the product was assessed, including the size and weight of the transducer, discomfort or difficulty with the game, and the women's acceptance of the transducer. The dimensions of the prototype can be considered well accepted by the volunteers and suitable for capturing the EMG signals; in comparison, the study by Enck and Vodusek [10] observed that electrodes with larger diameters can influence PFM contractility. However, muscle contraction changes according to the anatomical variation of each woman, a fact that corroborated the need for system calibration for each user. In this study, an 8.2-cm-long transducer was developed, with only 5 cm inserted into the vaginal cavity, thus avoiding inconvenience to the volunteers [10]. In the pilot tests performed on participants, similar EMG responses were observed. In future research, patients suffering from urinary incontinence are expected to present EMG signals of lower amplitude than healthy participants; however, the previously performed calibration guarantees the detection of EMG signals within the amplifiers' operating ranges.
Thus, if adjustments are needed in the gain of the amplifiers for patients with urinary incontinence, analog or digital corrections can be made for a better capture of the EMG signals.

5 Conclusions

The developed transducer is easy to handle and, with it, it was possible to find the standard gain for the equipment. The pilot test comprised ten volunteers, and the average gain of the amplifier was around 120. The electrodes positioned on the lateral wall of the vagina also showed more reliable contractility readings during the proposed activity; the vertical positioning of the electrodes on the lateral vaginal wall is therefore considered a correct and accurate way to evaluate this muscle group. It was also possible to verify that an insertion of 5 cm into the vaginal canal is enough to capture the contractility of the PFM through the bioelectric signal, corroborating the study by Enck and Vodusek [10], which reported that large invasive electrodes can present greater difficulty during PFM training contractions. It is therefore important that the electrode is positioned at the 3 and 9 o'clock positions of the perineal clock, with the steel plates adjusted to the sides of the PFM muscles, so that the EMG signal is captured more reliably. The wireless equipment had good acceptance in terms of usability in its dynamic form through the smartphone game, with satisfactory results regarding acceptance, practicality and hygiene. The proposed objective was achieved with the development of the 3D casing, designed with the aid of the OnShape CAD platform and dedicated to providing comfort to the user, given that this is an invasive method. The casing accommodates the entire amplifier circuit and is easy to handle for removal and cleaning of the transducer. The developed transducer was thus well accepted during the tests; associating it with the game improved the ludic aspect of the activity, making the PFM contraction and relaxation commands easy to follow through the game's visual features and enhancing the applicability of invasive PFM EMG. The transducer with the game was able to provide entertainment to the patient, but further studies with a larger number of women are needed to establish that the intracavitary invasive transducer can be considered a generic instrument usable by women with variations in weight, height, age, etc. With the tests, it was possible to identify the potential of the system for physiotherapy rehabilitation of pelvic floor disorders. The transducer can also be used for training and rehabilitation in public or private settings, as it is a small, portable, wireless and easy-to-handle device; however, it is desirable that therapy always be guided and accompanied by a specialist physiotherapist.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Mohktar MS, Ibrahim F, Rozi NFM et al (2013) A quantitative approach to measure women's sexual function using electromyography: a preliminary study of the Kegel exercise. Med Sci Monit 19:1159–1166
2. Keshwani N, Mclean L (2013) A differential suction electrode for recording electromyographic activity from the pelvic floor muscles: crosstalk evaluation. J Electromyogr Kinesiol 23:311–318
3. Koenig I, Luginbuehl H, Radlinger L (2017) Reliability of pelvic floor muscle electromyography tested on healthy women and women with pelvic floor muscle dysfunction. Elsevier
4. Chmielewska D, Stania M, Sobota G et al (2015) Impact of different body positions on bioelectrical activity of the pelvic floor muscles in nulliparous continent women. BioMed Res Int

5. Moretti E, Filho AGM, Almeida JC et al (2017) Electromyographic assessment of women's pelvic floor: what is the best place for a superficial sensor? Neurourol Urodyn 9999:1–7
6. Texas Instruments. INA12x precision, low-power instrumentation amplifiers. SBOS051E, Oct 1995, revised Apr 2019. https://www.ti.com/lit/ds/symlink/ina128.pdf
7. Souza PVE (2015) Sistema de aquisição de sinais de EMG e ECG para plataforma Android™. Master's dissertation in Electrical Engineering, Universidade Federal de Pernambuco (UFPE)

8. Flury N, Koenig I, Radlinger L (2017) Crosstalk considerations in studies evaluating pelvic floor muscles using surface electromyography in women: a scoping review. Arch Gynecol Obstet 295:799–809
9. Texas Instruments. MSP430G2x53 mixed signal microcontroller. https://www.ti.com/lit/ds/symlink/msp430g2553.pdf
10. Enck P, Vodusek DB (2006) Electromyography of pelvic floor muscles. J Electromyogr Kinesiol 16:568–577

Monitoring Hemodynamic Parameters in the Terrestrial and Aquatic Environment: An Application in a 6-min Walk Test K. R. C. Ferreira, A. V. M. Inocêncio, A. C. Chaves Filho, R. P. N. Lira, P. S. Lessa, and M. A. B. Rodrigues

Abstract

Walking tests are widely used to evaluate therapeutic options for several kinds of patients, including pulmonary and cardiac patients, as they demonstrate functional capacity and possible clinical prognosis. Among the therapeutic options, exercises can be performed in terrestrial and aquatic environments for the prevention and promotion of health. The objective was to develop an integrated device for the acquisition and monitoring of hemodynamic parameters in terrestrial and aquatic environments during a walk test. This is a pilot study in three stages: device development; application and validation; and data analysis. A device for the acquisition and monitoring of hemodynamic parameters was developed, with the sensor variables displayed simultaneously on an LCD display. The second stage consisted of the validation and application of the instrument by the author herself, in which a 6-min walk test was carried out on the ground and in the water. The last stage was the analysis and interpretation of the data. With the equipment, it was possible to record hemodynamic variables in both environments during a walk test, in addition to real-time data visualization and transmission via the Bluetooth protocol. The device proved easy to apply for monitoring hemodynamic parameters in terrestrial and aquatic environments, offering good visibility to the health professional and assisting in decision making about the continuity of the intervention. Being portable and easy to handle, the device is applicable in several environments.

Keywords

Hemodynamic monitoring · Hydrotherapy · Walk test

K. R. C. Ferreira (corresponding author), A. C. Chaves Filho, R. P. N. Lira: Biomedical Engineering Department, Post-Graduation Program in Biomedical Engineering, Federal University of Pernambuco, Av. da Arquitetura, Bloco B, 4º Andar, Sala 412, Recife, Brazil. A. V. M. Inocêncio, P. S. Lessa, M. A. B. Rodrigues: Electronic and Systems Department, Post-Graduation Program in Electrical Engineering, Federal University of Pernambuco, Recife, Brazil

1 Introduction

When performing physical activity, changes in the functional, cardiovascular and respiratory capacity of individuals may occur [1]. Changes such as activation of blood circulation, improved venous return and ventilatory capacity, and increased metabolic demand (oxygen consumption, cardiac output and ejection volume) are expected [1, 2]. However, some non-physiological and even pathological changes can be detected from changes in heart rate (HR) and oxygen saturation (SpO2) during physical activity. Against reference values, the monitoring of these parameters serves as a criterion for continuing or interrupting the activity [1–3]. Walking tests are effective tools for evaluating and monitoring an individual's functional capacity during physical activity [4–6], and they can also indicate a possible clinical prognosis [5]. Among these tests, the 6-min Walk Test (6MWT) has good reproducibility and is considered a submaximal test: since individuals choose their own exercise intensity and do not reach maximum capacity, it better reflects the functional capacity for activities of daily living (ADLs) [5, 6]. The walk test is commonly applied in a terrestrial environment as part of the evaluation process, especially for individuals with comorbidities [6, 7]. Although the 6MWT is simple, it requires attention and agility from the therapist, because it needs several pieces of equipment that are not integrated into a single device, which makes it difficult to perform the test in other environments [7, 8]. Among the therapeutic options to prevent disease and promote health, exercises can be performed in terrestrial or

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_140




aquatic environments [9–11]. Aquatic physiotherapy has an advantage in the therapeutic approach due to the physical properties of water and the physiological effects of immersion [9, 12, 13]. These effects promote greater mobility, redistribution of blood flow and reduction of joint overload, in addition to decreased sensitivity in nerve endings, promoting muscle relaxation and pain relief [11, 12]. This approach is performed in a thermal pool, with a temperature between 33 and 35 °C, which leads to an increase in body temperature and favors an increase in metabolism [12–14]. Thus, it is necessary to understand the demands and effects that may or may not favor individuals, given the activities and therapeutic offers aimed at a public with systemic, biomechanical and/or musculoskeletal changes. These factors justify the development of an integrated device for the acquisition and real-time monitoring of hemodynamic parameters in terrestrial and aquatic environments.

2 Materials and Methods

The present research is a pilot study on the non-invasive monitoring of hemodynamic parameters, oxygen saturation and heart rate, in terrestrial and aquatic environments. The study consisted of the following steps: development of the instrument; validation and application; and data analysis. The research was approved by the Research Ethics Committee of the Federal University of Pernambuco under agreement number 3.207182 and CAAE 03413118.0.0000.5208.

2.1 Prototype Development

The development board used to connect the parameter-capture hardware to the Bluetooth module was a Freedom Board KL25Z, whose microcontroller combines robust processing with low power consumption. The board has I2C, SPI and UART ports, ideal for communicating with the peripherals used in the project. For data acquisition, an oximetry sensor integrated with an I2C communication chip was used; once the initialization and data-acquisition values are set in its registers, the oximetry parameters can be read. The HR is obtained by digital processing of the oximetry signal provided by the sensor, using a second-order Butterworth digital low-pass filter with cut-off frequency Wn = 6 Hz designed in Matlab. To capture the hemodynamic variables, the sensor was positioned in the frontal and anteroposterior region of the volunteer's head and fixed with an elastic band (Fig. 1a).
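The second-order Butterworth low-pass design can be reproduced outside Matlab. The sketch below derives the biquad coefficients via the standard bilinear transform; the 100 Hz sampling rate is an illustrative assumption, since the paper does not state the oximetry sampling frequency.

```python
import math

def butter2_lowpass(fc_hz, fs_hz):
    """Second-order Butterworth low-pass biquad via the bilinear transform.

    Returns (b, a): feedforward and feedback coefficients, with a[0] == 1.
    """
    k = math.tan(math.pi * fc_hz / fs_hz)   # pre-warped analog frequency
    q = math.sqrt(2.0)                      # Butterworth damping term (1/Q)
    norm = 1.0 / (1.0 + q * k + k * k)
    b0 = k * k * norm
    b = [b0, 2.0 * b0, b0]
    a = [1.0, 2.0 * (k * k - 1.0) * norm, (1.0 - q * k + k * k) * norm]
    return b, a

def filt(b, a, x):
    """Direct-form I filtering of a sample sequence."""
    y = []
    for n in range(len(x)):
        acc = sum(b[i] * x[n - i] for i in range(3) if n - i >= 0)
        acc -= sum(a[i] * y[n - i] for i in range(1, 3) if n - i >= 0)
        y.append(acc)
    return y

# Example: a 6 Hz cut-off at an assumed 100 Hz sampling rate.
b, a = butter2_lowpass(6.0, 100.0)
```

As a sanity check, the DC gain sum(b)/sum(a) of this design equals 1, so a constant input passes through unchanged while components above 6 Hz are attenuated at 40 dB/decade.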

Fig. 1 Visualization of sensor positioning (a) and visualization of heart rate and peripheral oxygen saturation through the LCD display in the laboratory testing phase (b). Source Authors, 2020

A 2.4-inch LCD display was used to view the oxygen saturation and heart rate parameters, as shown in Fig. 1b. Regarding the communication between the components, an activation and operation code was simulated and implemented using the MBED development environment. After the LCD display configuration, data was sent via SPI communication for viewing on the display and via Bluetooth protocol to a computer. The prototype firmware flow is shown in Fig. 2. Initially, the variables used in the program were defined, then the I2C, SPI and UART communications were started. After configuring the display, the reading cycle started.

Monitoring Hemodynamic Parameters in the Terrestrial …

Fig. 2 Firmware flow for activation and operation of the prototype: definition of variables, communication configuration, display configuration, SpO2 and HR reading, sending data via Bluetooth, showing data on the display, and presentation of data on the computer (SpO2: oxygen saturation; HR: heart rate). Source Authors, 2020

The data is then sent to the computer at each reading; however, the display is only updated with the mean value of five acquisitions.

2.2 Validation and Application

After the development of the instrument for acquiring oxygen saturation and heart rate data, tests evaluating data transmission and recording were performed in the laboratory and later in a thermal pool, with the authors of this project. After approval in these tests, the 6MWT was carried out in terrestrial and aquatic environments to verify the functionality of the prototype. With the sensor positioned and fixed with an elastic band, the instrument with the LCD display was placed on the chest, inside a transparent bag made for this purpose. Through the application of the developed instrument, effective real-time monitoring was verified during the walk in both environments, as well as good visibility through the display and communication via the Bluetooth protocol.

2.3 Data Analysis

Oximetry detection used the sensor data, and the calculation was performed according to the literature [15]. For HR detection, the data obtained by the oximetry sensor were also used: a peak detector was


made for the light signal obtained by the sensor, in order to identify the time between peaks and, consequently, the HR. All heart rate and oximetry data, inside and outside the water, were sent via Bluetooth in real time to the computer. After reading five samples of both signals, an average value is sent to the display. After capture, the data were analysed offline using the Matlab software. To remove external interference, such as that arising from movement of the sensor, the signals were subjected to a five-point moving-average filter. The signals were then plotted and analyzed.

3 Results and Discussion

With this study, an instrument was developed for the acquisition and monitoring of hemodynamic parameters. Through tests and application of the 6MWT, validation in both the terrestrial and the aquatic environment was possible. It was observed that the expected changes in the hemodynamic variables also occur due to immersion in a heated environment. In Fig. 3a, a greater variation in HR can be observed in the aquatic environment. This variability can be attributed to the physical properties of the water and the resistance it imposes on walking. For oxygen saturation (Fig. 3b), values in the aquatic environment were higher than those observed in the terrestrial environment; however, this variation was not significant, probably because the volunteer has no comorbidity. Hemodynamic changes occur with any physical exercise and in some individuals can become a health risk, so monitoring such parameters in an aquatic environment is relevant. When comparing the monitoring inside and outside the water, the standard deviation of the heart rate (Table 1) is slightly higher in the terrestrial environment, possibly due to a greater exercise effort. The lower values in the aquatic environment can also be explained by the adaptability of the body when immersed and by the direct action of hydrostatic pressure and buoyancy, which reduce effective body weight and facilitate movement [16–19]. Activity in a thermal pool can cause variations in cardiac output and, consequently, in heart rate. Such variation is related to immersion and the physiological responses it elicits, and is correlated with the exercise, the posture adopted, and the depth and temperature of the pool [2, 20, 21]. Immersion in heated water also suggests that muscle relaxation, associated with vasodilation, favors adaptation and decreases heart rate [22].



Fig. 3 Monitoring of heart rate (a) and oxygen saturation (b) during the 6MWT in terrestrial and aquatic environments. Source Authors, 2020

Table 1 Mean and standard deviation of heart rate and oxygen saturation parameters in terrestrial and aquatic environments

Hemodynamic parameter | Terrestrial environment | Aquatic environment
Heart rate (bpm) | 107.35 ± 15.58 | 89.42 ± 19.35
Oxygen saturation (%) | 95.43 ± 0.55 | 95.44 ± 1.20

Source Authors, 2020

Due to its direct relationship with oxygen saturation, heart rate is usually used as a monitoring criterion; both are important predictors for clinical decision making. Regarding oxygen saturation during immersion, the results of this study demonstrate that this parameter remained within normal predictive indexes and, according to Table 1, the variability between the terrestrial and aquatic environments was not significant, suggesting that the activity is safe in an aquatic environment [23]. The increase in saturation in the aquatic environment is caused by changes in the cardiovascular system due to the action of hydrostatic pressure on the tissues, which compresses the blood vessels. This improves venous return and circulation, favoring an increase in blood flow and influencing gas exchange, which in turn increases blood oxygenation [24]. The application of the instrument by the authors occurred without complications, and it was possible to view the parameters on the display and transmit the data via the Bluetooth protocol inside the thermal pool

because the equipment was not immersed. Thus, with the prototype, it is possible to acquire and transmit data in real time in an effective way.

4 Conclusions

With the developed prototype for measurement, visualization and monitoring, a health professional can monitor a patient in real time in an aquatic environment. This is relevant in view of the lack of devices suitable for use in the aquatic environment and of the importance of monitoring hemodynamic variables during physical activities. Regarding heart rate and oxygen saturation, the monitoring performed with the prototype occurred without complications and the values remained within the normal range, as verified through laboratory tests and in accordance with the literature. The device proved easy to apply for monitoring hemodynamic parameters in terrestrial and aquatic environments, presenting good visibility for health professionals.


It is an unprecedented device for aquatic physiotherapy, as it consists of a single piece of equipment with visualization of the parameters, which can assist in decision making regarding the continuity of the intervention. Because it is a portable and easy-to-handle tool, it is applicable in several environments, opening new research areas for the use of the prototype. New studies are being developed to support health professionals in clinical decision making on whether to continue the intervention in the aquatic environment.

Acknowledgements To Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) and to Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Powers SK, Howley ET (2005) Fisiologia do exercício: teoria e aplicação ao condicionamento e ao desempenho, 5th edn. Manole. ISBN 85-204-1673-X
2. Graef FI, Kruel LFM (2006) Heart rate and perceived exertion at aquatic environment: differences in relation to land environment and applications for exercise prescription: a review. Rev Bras Med Esporte 12(4):221–228
3. Rodrigues ESR et al (2013) Importância da aferição dos dados vitais em indivíduos submetidos à ginástica laboral. Rev Amazônia 1(3):12–19
4. Soares MR, Pereira CAC (2011) Six-minute walk test: reference values for healthy adults in Brazil. J Bras Pneumol 37(5):576–583
5. Dias CMCC et al (2017) Desempenho no teste de caminhada de seis minutos e fatores associados em adultos jovens saudáveis. Rev Pesqui Fisioter 7(3):408–417. https://doi.org/10.17267/2238-2704rpf.v7i3.1555
6. Holland AE, Spruit MA, Troosters T, Puhan MA, Pepin V, Saey D et al (2014) An official European Respiratory Society/American Thoracic Society technical standard: field walking tests in chronic respiratory disease. Eur Respir J 44(6):1428–1446
7. Resqueti VR et al (2009) Confiabilidade do teste da caminhada de seis minutos em pacientes com miastenia gravis generalizada. Fisioter Pesqui 16(3):223–228
8. Morales-Blanhir JE et al (2011) Teste de caminhada de seis minutos: uma ferramenta valiosa na avaliação do comprometimento pulmonar. J Bras Pneumol 37(1):110–117

9. Lamezon AC, Patriota ALVF (2005) Eficácia da fisioterapia aquática aplicada a gestantes para prevenção e tratamento da lombalgia: revisão sistemática. Rev Terra Cult 21(41):127–132
10. Cunha MCB, Alonço AC, Silva TM, Raphael ACB, Mota CF (2010) Ai Chi: efeitos do relaxamento aquático no desempenho funcional e qualidade de vida em idosos. Fisioter Mov Curitiba 23(3):409–417
11. Gosling AP (2013) Physical therapy action mechanisms and effects on pain management. Rev Dor São Paulo 13(1):65–70
12. Antunes MD, Vertuan MP, Miquilin A, Leme DEC, Morales RC, Oliveira DV (2016) Efeitos do Watsu na qualidade de vida e quadro doloroso de idosas com fibromialgia. ConScientiae Saúde 15(4):636–641
13. Oliveira TR et al (2019) Análise da funcionalidade em idosos após a prática de exercícios resistidos em ambiente aquático. Fisioter Bras 20(6):704. https://doi.org/10.33233/fb.v20i6.2153
14. Schlemmer GBV, Ferreira ADM, Vendrusculo AP (2019) Efeito da fisioterapia aquática na qualidade de vida e na funcionalidade do membro superior de mulheres mastectomizadas. Rev Saúde (Sta. Maria) 45(3)
15. Nitzan M, Romem A, Koppel R (2014) Pulse oximetry: fundamentals and technology update. Med Dev (Auckl) 7:231–239. https://doi.org/10.2147/MDER.S47319
16. Ide MR, Caromano FA (2003) Movimento na água. Fisioter Bras 4(2):1–4
17. Ribas DIR, Israel VL, Manfra EF, Araujo CC (2007) Estudo comparativo dos parâmetros angulares da marcha humana em ambiente aquático e terrestre em indivíduos hígidos adultos jovens. Rev Bras Med Esporte 13(6)
18. Carregaro RL, De Toledo AM (2008) Efeitos fisiológicos e evidências científicas da eficácia da fisioterapia aquática. Rev Movimenta 1(1):6
19. Pinto C, Salazar AP, Marchese RR, Stein C, Pagnussat AS (2018) Is hydrotherapy effective to improve balance, functional mobility, motor status, and quality of life in subjects with Parkinson's disease? A systematic review and meta-analysis. PM&R
20. Ovando AC et al (2009) Efeito da temperatura da água nas respostas cardiovasculares durante a caminhada aquática. Rev Bras Med Esporte 15(6):415–419
21. Ferreira RAP (2010) O comportamento da frequência cardíaca em atividades aquáticas. p 31
22. Heathcote SL et al (2019) How does a delay between temperate running exercise and hot-water immersion alter the acute thermoregulatory response and heat-load? Front Physiol 10:1381
23. Almeida C et al (2016) Efeitos da imersão nos parâmetros ventilatórios de pacientes com distrofia muscular de Duchenne. Acta Fisiátrica 19(1):21–25
24. Braga HV et al (2019) Efeito da fisioterapia aquática na força muscular respiratória de crianças e adolescentes com síndrome de Down. Arq Ciênc Saúde Unipar 23(1):9–13

Scinax tymbamirim Amphibian Advertisement Sound Emulator Based on Arduino K. C. Grande, V. H. H. Bezerra, J. G. V. Crespim, R. V. N. da Silva, and B. Schneider Jr.

Abstract

Sound communication is the way animals exchange information, attack and defend themselves, and it is also one of the ways scientists can locate them. A passive sonar uses sounds from the environment to count things, locate things or trace trajectories; it can be used to find a gun shooter in a crowd, count the number of penguins on a beach, or track anything that makes noise in a bright or dark place. This paper presents a simple circuit, based on an Arduino board and a sound previously recorded in the animal's habitat, capable of emulating a territorial sound of the amphibian Scinax tymbamirim, together with an analysis showing that the sound produced is very similar to the recorded sound of the studied species.

Keywords

Scinax tymbamirim · Amphibian · Arduino · Sound emulation · Passive sonar

1 Introduction

Communication is the way animals exchange information and make decisions. There are several forms of communication in nature: chemical, physical, physiological, anatomical. Animals have evolved and adapted in the most diverse ways. The transmission of this communication is called a signal, and this signal can be a sound (called vocalization in biology), an aroma, a

K. C. Grande (corresponding author), J. G. V. Crespim, R. V. N. da Silva, B. Schneider Jr.: UTFPR/CPGEI, Federal University of Technology-Paraná, Av. 7 de setembro, 3165, Curitiba, Brazil. V. H. H. Bezerra: UTFPR/DAELN/PETEE, Curitiba, Brazil

movement, an electrical discharge, a smell, or even a combination of these signals. Signals are defined as behavioral, physiological or morphological characteristics created or maintained by natural selection because they transmit information to other organisms [1]. The animal that emits the signal is called the sender (or emitter), and the one that receives it is known as the receiver; the information received by the receiver is used for decision making [2]. In nature, the most important and common way of exchanging signals is through sounds, or vocalization. The most diverse groups of animals have adapted and evolved to communicate through songs, grunts, barks, laughter, or frequencies sometimes so low or so high that human ears cannot hear them. We know a lot about the complexity of human speech, but little about the complexity of various animal languages [3, 4]. Currently, amphibian communication systems are studied in several ways; their communication has even been used to improve the processes of data communication networks. Amphibians, which are the focus of this study, have also evolved and diversified. Each species has different vocalizations within basically the same frequency range, and these frequencies are combined in different ways, forming calls. Amphibians have more than one type of call, such as the advertisement call, the courtship call, the territorial call, the distress call and the release cry, among others [5]. The species studied in this work is Scinax tymbamirim (Fig. 1) [6], an amphibian of the family Hylidae. It is a small animal, measuring approximately 2 cm, that lives in the leaves of cattails ("taboas", Typha domingensis) growing in freshwater pools on the coast of the state of Paraná, Brazil, and it is generally confused with other anurans of the same genus [7], which has generated many discussions among scientists over the years and motivated this study.
Its vocalization is one of the main characteristics that differentiate it from other species, as it is unique, as are the details of its upper lateral bands (Fig. 1). One of the great problems for biologists and researchers of these animals is finding them in the field. They inhabit

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_141


Fig. 1 Scinax tymbamirim

flooded regions and are usually found amid foliage that is generally the same color as their skin, which makes the process even more difficult. Moreover, their vocalization occurs at night and they are cold-blooded animals, so visual and heat-sensing technologies are not viable. A viable solution for finding these animals is the sound they emit. The sound is unique, a form of identity, and each individual vocalizes separately, since they respect the vocalization time and territory of others. The amphibian singing network has even been a focus of internet-of-things studies [8]. The final goal of the project is to create a system that localizes and counts all animals in a certain area through their vocalization, but this requires building all the supporting technology. Aiming to develop solutions for the study of animals that vocalize and are difficult to count in nature, the UTFPR-Curitiba biotelemetry group has been carrying out a series of research and development projects over the years [9–13], especially a passive sonar that is in the final testing phase. The subject of this paper is an important part of that acoustic passive sonar. A passive sonar does not emit a ping; it only captures and processes sounds from the environment to determine the location, type and number of individuals of one or more species. It can be used to find a gun shooter in a crowd, count the number of penguins on a beach, or track anything that makes noise in a bright or dark place. This research has contributed to the development of equipment for monitoring endangered species and bioindicator species of environmental quality, as is the case of these frogs. It is precisely for the laboratory testing phase of the passive sonar system that the device presented in this article is needed. For the development of a Scinax tymbamirim sound simulator, it was necessary to know the sound of the studied species, which meant going to the field to record it in its natural habitat. The Scinax tymbamirim sound was recorded on an island with several protected areas, the Ilha do Mel, and was processed at UTFPR's BIOTA laboratory. After processing, the frequency behavior was determined and recreated on the Arduino.

2 Materials and Methods

The field research took place in a known Scinax tymbamirim habitat at Mar de Fora beach, on Ilha do Mel, located 19 km from the city of Paranaguá, state of Paraná, Brazil (latitude 25° 34′ 14.5″ S, longitude 48° 18′ 34.1″ W), in April 2019. The vocalizations were recorded at night, between 7 and 9 p.m., when these amphibians vocalize, using a Sony ICD-PX333 digital audio recorder. This species inhabits a kind of cattail, Typha domingensis, which grows in freshwater flooded ground up to 50 cm deep; since the habitat is permanently flooded, special clothing is required to enter it. The recorded audio was processed in MathWorks Inc. Matlab®, Audacity®, Spectrum Analyzer® and MP3DirectCut®. Once the most important frequencies for the species were identified, various functions were tested until something similar to the required sound was obtained. For the sake of simplicity and final price, since several units must be assembled for the final test of the larger system, it was decided that the hardware would be a simple Arduino UNO driving a multifrequency buzzer. Testing started with simple sums of sinusoids, and then several other mathematical functions were used (sums of square, triangle, and sawtooth signals at different frequencies, and frequency-modulated signals) until a result acceptable to human ears was obtained. The next step was to assemble the hardware and write the Arduino UNO® firmware, and then determine its spectrum for comparison with that of the real amphibian.
A further field step, testing the device next to the amphibian habitat to see whether it would induce any response, was scheduled for the second half of March this year, with the participation of other IC (Scientific Initiation) and PET (Tutorial Education Program) students; due to the suspension of in-person academic activities and the social distancing imposed by the COVID-19 quarantine, this stage had to be postponed.

3

Results

The sound sample of the Scinax tymbamirim amphibian was recorded at an air temperature of 21 °C. The water temperature was not measured because this species vocalizes while clinging to the cattail's stalk, above the water level.

Scinax tymbamirim Amphibian Advertisement Sound Emulator …


Fig. 2 Frequencies obtained by recording the song of the Scinax tymbamirim

The spectrum analysis was generated by applying the FFT with 512-point Hanning windows and a 128-point overlap. After preliminary analyses, the FFT spectrum of the recording made on the island can be summarized as in Fig. 2. The frequencies 4.125, 2.63, and 3.325 kHz stand out in Fig. 2, in this order of predominance. In the same image, at approximately 5.6 kHz, a continuous pulse train is observed, which probably indicates the presence of an orthopteran, such as a cricket, at the site. The sound file of Fig. 2 was then treated with noise reduction in the Audacity software, using background noise samples (including wind, steps in the water, the cricket, voices, and the sound of the recorder itself) as the suppression profile. The result was processed in the same way as Fig. 2 to obtain the spectrum shown in Fig. 3, which can be considered the pure signal of the species in question. Noise near 7 kHz on the right of Fig. 3 may come from the recorder itself. Some vocalizations are clearly more intense and carry higher-frequency components, but it is not possible to know whether they carry substantially different information for the animals. There is also a characteristic distribution and agglutination of frequencies that can be associated with the species. Figure 4 compares the range used by the species with familiar references: the human voice and the guitar. The amphibian advertisement sound emulator device was built from an Arduino UNO, a multifrequency buzzer, and a push button. The firmware was written in the Wiring language using the Arduino IDE. The function that worked well in imitating Scinax tymbamirim was a stepped ramp train (sawtooth wave) with the following characteristics, as shown in Fig. 5: a sequence of twenty-five (25) sawtooth steps, each lasting 100 ms, with the excitation signal rising in 250 Hz steps, starting at 1 kHz and stopping at 5 kHz, followed by a single 100 ms step of a 100 Hz excitation signal for sound damping purposes. The figure represents only three teeth for clarity.
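The stepped-ramp excitation can be sketched as a frequency/duration schedule (an illustration in Python, not the actual Wiring firmware; since twenty-five 250 Hz steps starting at 1 kHz would overshoot 5 kHz, the frequency description is taken as authoritative here and the tooth count follows from it):

```python
# Sketch of the buzzer excitation schedule described in the text:
# tones rising in 250 Hz steps from 1 kHz to 5 kHz, 100 ms each,
# followed by a single 100 ms step at 100 Hz for damping.
# Each pair is (frequency_Hz, duration_ms), as an Arduino loop
# would feed them to tone().
def excitation_schedule(start_hz=1000, stop_hz=5000, step_hz=250,
                        step_ms=100, damp_hz=100, damp_ms=100):
    ramp = [(f, step_ms) for f in range(start_hz, stop_hz + 1, step_hz)]
    return ramp + [(damp_hz, damp_ms)]

schedule = excitation_schedule()
# schedule[0] is (1000, 100); schedule[-2] is (5000, 100);
# schedule[-1] is the (100, 100) damping step
```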

Although Fig. 5 presents the excitation signal in a form resembling a frequency spectrum, the real frequency response of the device is quite different. The emitted sound was therefore recorded and analyzed in the same way as the first signal. The response is shown in Fig. 6, where the similarities in the distribution and agglutination of frequencies can be noted. Figure 7 shows the fast Fourier transform of the sound emulator device; the horizontal axis presents the real frequencies in Hz.
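The re-analysis pipeline (512-point Hanning windows with a 128-point overlap, as used for Fig. 2) can be reproduced along these lines; this is an illustrative sketch, not the authors' Matlab code, and the sampling rate and test tone are assumptions:

```python
# Average magnitude spectrum from windowed FFT segments,
# mirroring the 512-point Hanning / 128-point overlap analysis.
import numpy as np

def avg_spectrum(x, fs, nfft=512, overlap=128):
    hop = nfft - overlap                      # 384-sample advance
    win = np.hanning(nfft)
    segs = [x[i:i + nfft] * win
            for i in range(0, len(x) - nfft + 1, hop)]
    mags = [np.abs(np.fft.rfft(s)) for s in segs]
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    return freqs, np.mean(mags, axis=0)

# Check with a synthetic tone at 4.125 kHz (the dominant component
# of the call); the peak lands on the nearest FFT bin (~86 Hz wide).
fs = 44100
t = np.arange(fs) / fs
freqs, mag = avg_spectrum(np.sin(2 * np.pi * 4125 * t), fs)
peak = float(freqs[np.argmax(mag)])
```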

4

Discussion

It must be considered that exciting a sound transducer such as a multifrequency buzzer can produce an effect different from that predicted mathematically, because the analog response of the device is not a linear function. The first sensor was the human ear, to which the produced sound was quite acceptable as a Scinax tymbamirim call; the human ear, however, is not a reliable judge. The emitted sound was therefore recorded and analyzed, and it presented a behavior quite similar to that of the original amphibian. Higher amplitudes can be noted in Fig. 7 near 1, 2, 3, 4, and 6 kHz, which is compatible with the agglutinations in Fig. 3 (4.125, 2.63, and 3.325 kHz). The spectra in Figs. 2, 3 and 7 are compatible with spectra of this kind of amphibian in the literature [5, 6]. No quantitative comparison between the real and emulated sound signals was performed; this would become necessary if perfect replication of the sound were the goal. The emulated sound can be used in different ways: for population studies using playback, for behavioral analysis, or in the passive sonar system under development. The application of instruments to the study of Brazilian biodiversity is one of the fundamental principles of biomedical engineering, as it combines technological innovation with environmental sustainability, both so important and necessary for the preservation of nature.
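Should such a quantitative comparison become necessary, one simple candidate metric (an assumption, not a method used in the paper) is the cosine similarity between the real and emulated magnitude spectra sampled on the same frequency bins:

```python
# Cosine similarity between two magnitude spectra: 1.0 means the
# spectral shapes are identical up to an overall gain, 0.0 means
# they share no common structure.
import math

def cosine_similarity(spec_a, spec_b):
    dot = sum(a * b for a, b in zip(spec_a, spec_b))
    norm_a = math.sqrt(sum(a * a for a in spec_a))
    norm_b = math.sqrt(sum(b * b for b in spec_b))
    return dot / (norm_a * norm_b)

# Toy check: a spectrum compared with itself scores 1.0
s = [0.1, 0.8, 0.3, 0.05]
print(round(cosine_similarity(s, s), 6))  # 1.0
```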


K. C. Grande et al.

Fig. 3 Frequency spectrum of the Scinax tymbamirim, after noise suppression

Fig. 4 Normal ranges of the Scinax tymbamirim, the human voice, and the guitar, compared on a base-two (musical) logarithmic frequency scale from 27.5 Hz to 14,080 Hz

Fig. 5 Representation of three ramps of the buzzer frequency excitation signal (excitation frequency in Hz, 0–6000 Hz, versus time in ms, 0–5000 ms)

Fig. 6 Spectrogram of the amphibian advertisement sound emulator output

Fig. 7 FFT of the sound emulator device (relative amplitude versus real frequency f, in Hz)

5

Conclusions

The Scinax tymbamirim amphibian advertisement sound emulator device, based on an Arduino, worked properly, reproducing the characteristics found by our group and by other groups in Brazil. This makes it possible to develop simulators for other animals and further devices to assist biologists.

Acknowledgements We would like to thank CPGEI, PPGEB, UTFPR Curitiba, the entire Biotelemetry team at Biota, and PETEE (Tutorial Education Program of Electronic Engineering). This study was financed in part by CAPES, the Coordination for the Improvement of Higher Education Personnel.

Conflict of Interest The authors declare that they have no conflict of interest.

References
1. Otte D (1974) Effects and functions in the evolution of signaling systems. Annu Rev Ecol Syst 385–417
2. Searcy WA, Nowicki S (2005) The evolution of animal communication: reliability and deception in signaling systems. Princeton University Press, Oxford. https://doi.org/10.2307/j.ctt7s9pr
3. Starnberger I, Preininger D, Hödl W (2014) From uni- to multimodality: towards an integrative view on anuran communication. J Comp Physiol A 200(9):777–787. https://doi.org/10.1007/s00359-014-0923-1
4. Grafe TU (2005) Anuran choruses as communication networks. Anim Commun Netw 277–299
5. Toledo LF, Martins IA, Bruschi DP, Passos MA, Alexandre C, Haddad CF (2015) The anuran calling repertoire in the light of social context. Acta Ethol 87–99. https://doi.org/10.1007/s10211-014-0194-4
6. Nunes I, Kwet A, Pombal JP Jr (2012) Taxonomic revision of the Scinax alter species complex (Anura: Hylidae). Copeia 554–569. https://doi.org/10.1643/CH-11-088
7. Pombal JP Jr, Haddad CFB, Kasahara S (1995) A new species of Scinax (Anura: Hylidae) from southeastern Brazil, with comments on the genus. J Herpetol 1–6
8. Aihara I, Kominami D, Hirano Y, Murata M (2019) Mathematical modelling and application of frog choruses as an autonomous distributed communication system. R Soc Open Sci 6(1):181117. https://doi.org/10.1098/rsos.181117
9. Grande KC, Schneider NB, Sato GY, Schneider Jr B (2019) Passive acoustic localization based on time of arrival trilateration. In: IFMBE proceedings. Springer, Singapore, pp 519–524
10. Gomes FH, Grande KC, Schneider Jr B (2019) Development of a long-range, low-power device for tracking endangered animals. In: IX Simpósio de Instrumentação e Imagens Médicas, Uberlândia
11. Grande KC, Gomes FH, Santiago E, Gewehr PM, Bergossi VHD, Schneider Jr B (2018) LORA based biotelemetry system for large land mammals. In: European test and telemetry conference, Chapter 5: Time-space position technologies. AMA Science, Berlin, pp 101–105
12. Grande KC, Schneider Jr B (2016) Use of global technologies for determining habitats of bioindicator species and territorial transformations. In: XXV Congresso Brasileiro de Engenharia Biomédica, Foz do Iguaçu. Anais, UTFPR, Curitiba, pp 2161–2164
13. Grande KC, Trento NS, Schneider Jr B, Faria RA (2014) Animal biotelemetry by internet monitoring system with a portable GPS and GSM/GPRS device. In: XXIV Congresso Brasileiro de Engenharia Biomédica. Anais, UFU, Uberlândia, pp 2252–2255

Microphone and Receiver Calibration System for Otoacoustic Emission Probes M. C. Tavares, A. B. Pizzetta, and M. H. Costa

Abstract

The Brazilian Universal Neonatal Hearing Screening program (TANU) aims to detect hearing loss in newborns. TANU has been mandatory in Brazil since Federal Act 12,303, published in 2010. Otoacoustic Emissions (OAE) testing is an objective and non-invasive screening method recommended by international organizations. Despite its well-known significance and strong local demand, there are still no Brazilian manufacturers of this type of equipment. This work describes the initial efforts toward the development and production of OAE devices: a system for generating calibrated sound signals, such as clicks, pure tones, and white noise, covering the entire range of frequencies and intensities observed in TEOAE and DPOAE exams. The proposed system aims to be a reference for the adjustment and calibration of probe microphones. Keywords

Otoacoustic emissions · Calibration · OAE probe · Sound generation

1

Introduction

M. C. Tavares (corresponding author): Contronic Sistemas Automáticos Ltda/PDI, Rua Rudi Bonow, 275, Pelotas, Brazil. e-mail: [email protected]
A. B. Pizzetta, M. H. Costa: Department of Electrical and Electronic Engineering, Federal University of Santa Catarina, Florianópolis, Brazil

The Universal Neonatal Hearing Screening (TANU in Portuguese) aims to detect hearing loss in the first days of life [1]. It has been mandatory in Brazil since Federal Act 12,303, published on August 2, 2010 [2, 3]. Following the

international models, screening is performed by objective, non-invasive, and quantifiable methods [4]. The preferred methods for this screening are the evoked otoacoustic emission (OAE) test, for its practicality, and brainstem audiometry, for its precision and discrimination capacity. Despite the importance of the OAE test, there are still no Brazilian manufacturers, which has a significant impact on the Brazilian trade balance [5] and causes associated difficulties for user training, maintenance, and periodic calibration of these devices [6]. Evoked otoacoustic emissions are small-amplitude sound signals produced by the cochlea in response to an auditory stimulus. They were first reported by Kemp [7], who associated them with the product of an active tuning mechanism based on the motor action of the outer hair cells during normal hearing. As these cells are often affected in hearing loss cases, the measurement of otoacoustic emissions is an objective method to support hearing loss diagnosis. The OAEs are obtained by applying an acoustic stimulus through a receiver to evoke the response, which is captured by a small microphone; both transducers are positioned at the entrance of the auditory canal by means of an ear probe [8]. There is a broad field for otoacoustic emission diagnosis, since it is a low-cost test; it is also easy to perform, as it does not require attaching electrodes. The two OAE methods most applied in TANU, in both hospitals and medical clinics, are Transient Evoked Otoacoustic Emissions (TEOAE) and Distortion Product Otoacoustic Emissions (DPOAE) [9]. Screening tests using OAE cannot discriminate the degree of hearing loss or provide a precise indication of the frequencies at which the losses occur, as audiometry tests do. Results provided by screening are of the pass/fail (PASS/REFER) type, considering five frequency bands.
Some devices show these results automatically, from parameters such as signal to noise ratio and response reproducibility.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_142


TEOAE sound stimulation is performed with clicks of 80 µs duration, presented at a rate of approximately 50 Hz. The sound pressure produced by the receiver must be adjusted beforehand, during the calibration procedure, and automatically during the examination, using the sound intensity measured by the microphone embedded in the probe, which does not eliminate the need for the prior calibration step. A peak sound pressure of 0.3 Pa, or 83.5 dB peSPL (ILO Otodynamics standard), should be maintained and, in some cases, reduced to 76 dB peSPL. In neonates, the sound pressure reaches up to 86 dB peSPL [10]. In DPOAE, the stimulus is presented at 70 dB SPL, in the form of two tones f2 and f1, with f2 > f1. These tones are generated simultaneously and continuously throughout the examination. In modern equipment, the f2 frequency is adjusted in steps from 500 Hz to 12 kHz. Latency is measured by phase difference, which has a systematic relationship with the f2/f1 ratio, increasing as this ratio decreases [11]. There is a variety of probes on the market, containing transducers with different characteristics; each manufacturer tends to develop its own probe. In [12], a commercial TEOAE probe (BS-5 model) was employed in adults. The calibration for that probe was performed in a closed cavity of 1 cm³, a dimension compatible with the adult auditory canal; the author pointed out that the calibration followed the IEC-711 standard, since updated to IEC 60318-4 [13]. Recently, our research group developed a prototype for clinical otoacoustic emission analysis using an ER-10D (Etymotic Research) probe. This probe model was approved by the FDA and has been employed in some commercial OAE equipment [14]. In this work, we present a method for the adjustment and calibration of the transducers embedded in TEOAE probes, with particular attention to the microphone.
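The quoted click level can be checked by converting peak sound pressure to dB peSPL against the standard 20 µPa reference; a back-of-the-envelope sketch (not device code):

```python
# Peak pressure in Pa -> dB peSPL relative to 20 µPa.
import math

P_REF = 20e-6  # standard reference sound pressure, Pa

def pe_spl(peak_pa):
    return 20 * math.log10(peak_pa / P_REF)

print(round(pe_spl(0.3), 1))  # 83.5, matching the 0.3 Pa figure above
```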

2

Materials and Methods

Sound intensities generated (by the receivers) and acquired (through the microphones) by OAE probes need to be adjusted to represent accurate values in dB peSPL, for clicks, and dB SPL, for pure tones or white noise. For OAE probe calibration, the system shown in Fig. 1 was designed. The stimulation source comprises an FRDM-K22F (NXP) board [15], an SGTL5000 CODEC [16], and some peripherals, as shown in Fig. 2. Sounds are digitally generated via the I2S (Inter-IC Sound) port under DMA (Direct Memory Access) control [17], at a 96 kS/s rate with 24-bit resolution. The

minimum impedance of the SGTL5000 headphone output is 16 Ω, while the nominal impedance of the ER-2 (Etymotic Research) insert earphone is 10 Ω. When the earphone is connected directly to the CODEC output, the maximum voltage obtained is 490 mVRMS, which is insufficient to generate the required sound intensity of 90 dB SPL. It was therefore necessary to add audio amplification, and the LM386 (Texas Instruments) class A-B amplifier was selected. In addition to matching the impedance between the sound source and the earphone, this circuit allows the output voltage to be adjusted through a precision potentiometer, providing fine adjustment of 0.1 dB SPL. The ER-2 earphone has an approximately flat response from 100 Hz to 16 kHz, with a sensitivity of 100 ± 3 dB SPL for a 1 VRMS sine signal at 1 kHz. In the assembly shown in Fig. 3, the earphone output tube is coupled to the sound level meter through an adapter machined in polyoxymethylene (POM), also known as polyacetal, to which a transparent plastic tube 50 mm long with a 9 mm internal diameter was connected. POM was chosen for its dimensional and thermal stability and easy machining, which make it widely used in health products [18]. Coupling between the ER1-04 adapter (sound delivery tube adapter, Etymotic Research) and the plastic tube was made with a disposable foam ear tip, model ER1-14A. The type of sound to be generated, its frequency, and its intensity are configured through Windows© software created in the C++ Builder 2010 IDE, shown in Fig. 4. Sine tones can be generated at discrete audiometric values between 250 Hz and 8 kHz, and clicks, lasting 80 µs, with compression or rarefaction polarity. White noise can be generated as well. The intensities can be adjusted from 0 to 90 dB SPL, with a minimum step of 0.5 dB SPL. Communication between the computer and the FRDM-K22F board is performed through a CDC driver over a USB 2.0 port. The same port/cable powers the system from the VBUS voltage (4.5 to 5.5 V). The setup shown in Fig. 5 was implemented to adjust the sound intensities generated by the ear probe receivers. Another adapter was also machined in POM; it receives a central rubber part, 10 mm long, with a centered 5 mm hole, which holds the probe and ensures that the microphone's distance does not vary significantly from one measurement to another.
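The headroom reasoning above can be sketched from the ER-2's nominal sensitivity of 100 dB SPL per 1 VRMS at 1 kHz (an illustrative calculation, not part of the authors' procedure; actual levels also depend on the ±3 dB sensitivity tolerance, the tube coupling, and the loading of the CODEC output by the 10 Ω earphone, which is why amplification and a fine-adjust potentiometer remain necessary):

```python
# Nominal VRMS needed for a target level, given a sensitivity
# expressed as dB SPL produced per 1 VRMS of drive.
def vrms_for_level(level_db_spl, sens_db_per_vrms=100.0):
    return 10 ** ((level_db_spl - sens_db_per_vrms) / 20)

print(round(vrms_for_level(90), 3))  # ~0.316 VRMS nominal for 90 dB SPL
```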

2.1 Adjustment Procedures

The first procedure is to adjust the sound level meter reading using as reference a calibrated sound source of 94 dB SPL at 1 kHz (CAL-3000, Instrutherm Ltda., Brazil). The second procedure consists of adjusting the OAE device microphone and associated circuits by applying sine


Fig. 1 Calibration system block diagram

Fig. 2 System components: a FRDM-K22 card with SGTL5000 CODEC; b voltage amplifier with hand-made shield; c ER-2 insert earphone

signals through the setup shown in Fig. 3. The sine wave intensity is varied from 90 to 40 dB SPL in steps of 5 dB SPL, combined with frequency variation from 250 Hz to 8 kHz in 7 steps. The click adjustment is performed by the peak-equivalent method, using the analog output of the sound level meter and an oscilloscope [19, 20]. The value read on the sound level meter display in each condition is taken as the actual value and registered in a table relating frequencies to intensities. Once the table is complete, the adapter (Fig. 3, item c) is removed from the sound level meter, and the ER-10D probe is coupled using an ear tip (olive) of appropriate diameter, taking care to maintain the same distance between the sound source and the microphone location. The same intensities and frequencies are generated, noting the point-to-point differences. An adjustment table is then generated and stored in a flash memory in the OAE device; this table is used during TEOAE or DPOAE acquisition.

Fig. 3 Setup for OAE device probe microphone adjustment: a sound generation boards—FRDMK22F and amplifier shield, b ER-2 probe, c coupler, d sound level meter

The third procedure consists of adjusting the intensity generated by each receiver, with the aid of the modified setup shown in Fig. 5. For each receiver, a pure tone table, a compression click table, a rarefaction click table, and a white noise table are created. The tone table includes all combinations of intensity, from 90 to 40 dB SPL in −5 dB steps, and the frequencies 250 Hz, 500 Hz, 1 kHz, 2 kHz, 4 kHz, 6 kHz, and 8 kHz. As for the microphone, the tables are filled with the differences between the values read on the sound level meter and the nominal values programmed in the Windows© software. These tables are also stored in the flash memory of the OAE device and are used during exam configuration.
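The tables described in this section can be sketched as a mapping from (frequency, nominal level) to the deviation read on the sound level meter (an assumption about the data layout, not the actual flash format of the device):

```python
# Build a correction table over the tone grid used in the text
# (250 Hz-8 kHz, 90 down to 40 dB SPL in 5 dB steps) and apply it.
FREQS_HZ = (250, 500, 1000, 2000, 4000, 6000, 8000)
LEVELS_DB = range(90, 35, -5)  # 90, 85, ..., 40 dB SPL

def build_table(measure):
    """measure(freq, level) -> value read on the sound level meter."""
    return {(f, l): measure(f, l) - l for f in FREQS_HZ for l in LEVELS_DB}

def corrected_level(table, freq, level):
    """Level to program so the acoustic output actually hits `level`."""
    return level - table[(freq, level)]

# Toy meter that always reads 0.7 dB high:
table = build_table(lambda f, l: l + 0.7)
print(round(corrected_level(table, 1000, 60), 1))  # 59.3
```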


3

Results

The results shown in Figs. 6 and 7 were obtained using the hardware and software described in the previous section. Note that each 5 dB attenuation step in the software is not matched by a 5 dB attenuation in the tuning system. Calculating the arithmetic average of the attenuation error at each test point yields errors of 0.7, 1.3, 2.1, 2.8, and 3.4 dB SPL, indicating that each nominal 5 dB step of the SGTL5000 attenuator deviates by about 0.7 dB in the tested range. Figure 8 shows the frequency spectrum produced by a 1 kHz tone generated by the system. The power at the harmonic frequencies is relevant in the range from 2 to 6 kHz (see the yellow bar in Fig. 8). Table 1 shows the values measured with a Tektronix TBS 1062 oscilloscope, calibrated with traceability to the Brazilian Calibration Network (RBC), for the 1 kHz fundamental and its harmonics, allowing calculation of the total harmonic distortion (THD) according to IEC 61000-2-2, defined as

THD% = 100 · sqrt(G_2kHz² + G_3kHz² + G_4kHz² + G_5kHz² + G_6kHz²) / G_1kHz.  (1)

Recalculating the THD as defined in the IEEE 519 standard gives an identical value to two decimal places. The click calibration is performed by the peak-equivalent method, already mentioned.

Fig. 4 Windows© software for sound generation via the receivers

Fig. 5 Setup to adjust the sound intensities generated by the OAE probe receivers. Left: a device for OAE under development, b ER-10D probe, c coupler, d sound level meter, e voltage output. Right: Windows© software for generating sounds via the receivers

4

Discussion

After reviewing the calibration needs of OAE devices with respect to current technical standards, the necessity of developing a specific system for this task became clear, requiring additional hardware, firmware, and software. Few studies address this issue, as manufacturers do not usually disclose their calibration methods.


Fig. 6 SGTL5000 attenuator performance prior to the setting described in Sect. 2.1

Fig. 7 Differences between the sound pressure values programmed in the software and obtained in the sound level meter for the setup

Fig. 8 Spectrum of a 1 kHz tone at 90 dB SPL in the setup, obtained by the oscilloscope FFT using a Hanning window. Note the harmonics with relevant power between 2 and 6 kHz (shown along the yellow bar interval)

Attenuator linearity errors (shown in Fig. 7) are compensated in practical use by a correction table, as previously described. This strategy has long been used successfully in many other audiological diagnostic products, such as auditory evoked potential analyzers. The discrepant point observed at 1 kHz @ 65 dB SPL presents a lower error (−1 dB SPL) than the adjacent points and may result from acoustic interference during the measurement, as the preliminary tests were performed in an office environment. The 0.98% THD obtained in this work does not meet the IEC 60645-6 requirements, so this issue must be reworked in the future. The LM386 THD is 0.2% under operating conditions analogous to those of the adjustment system; the most significant contributor to the measured THD is therefore the SGTL5000 CODEC. THD depends not only on the electronic circuit but also on the firmware that drives the DMA process over I2S, requiring extra attention. According to the IEC 60645-6 standard, the tones emitted by the OAE device should not differ by more than 1.5 dB SPL from the actual value. Comparing the combined accuracy of the reference instruments with the IEC 60645-6 requirements "Maximum permitted expanded uncertainty of measurements Umax" and "Values of Umax for basic measurements" [19, Table 3], the measurements may have slightly higher uncertainty than allowed. This condition does not invalidate the method proposed in


Table 1 Developed system total harmonic distortion (THD)

Frequency (Hz)   Measured (dB)   Gain
1000               0.00          1.00
2000             −40.60          9.33 × 10⁻³
3000             −57.80          1.29 × 10⁻³
4000             −53.00          2.24 × 10⁻³
5000             −63.00          7.08 × 10⁻⁴
6000             −60.60          9.33 × 10⁻⁴
THD: 0.98%
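The THD in Table 1 can be re-derived from the measured harmonic levels; the sketch below converts each level to a linear gain via 10^(dB/20) and applies the IEC 61000-2-2 definition of Eq. (1):

```python
# Harmonic levels from Table 1, in dB relative to the 1 kHz
# fundamental (whose gain is therefore 1.0).
import math

levels_db = {2000: -40.60, 3000: -57.80, 4000: -53.00,
             5000: -63.00, 6000: -60.60}

gains = {f: 10 ** (db / 20) for f, db in levels_db.items()}
thd_percent = 100 * math.sqrt(sum(g * g for g in gains.values()))
print(round(thd_percent, 2))  # 0.98, matching Table 1
```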

the present study but could be challenged in an industrial environment during regulatory auditing, requiring the use of more accurate Class 1 equipment. Among the best Class 1 instruments are those manufactured by Brüel & Kjær [21] and Larson Davis [22]. The reference instrument must be properly calibrated by a laboratory accredited by an official metrology agency, such as INMETRO in Brazil or NIST in the United States. Another critical point is to have an acoustically protected environment in which to perform adjustments and calibrations, to avoid noise interference.

5

Conclusion

In this work, we proposed a method for the adjustment and calibration of the microphones and receivers embedded in otoacoustic emission ear probes. A hybrid digital/analog electronic system comprising an ARM Cortex-M4 microcontroller, a 24-bit CODEC, and an amplifier was implemented, together with specific firmware and software for command from a personal computer via USB 2.0. Its properties and limitations were discussed, showing the capability of the proposed method.

Acknowledgements Thanks to the National Council for Scientific and Technological Development (CNPq) for the Senior Postdoctoral Fellowship (164594/2017-5 grant) and the scholarship in Productivity in Technological Development and Innovative Extension (305309/2017-0 grant). Thanks to the engineer Gustavo Ott for contributing to the literature review and to the development of the Windows©-based TEOAE software, and to André Wille Lemke for his contribution to the development of the SGTL5000 DMA firmware library.

Conflict of Interest The authors declare that they have no conflict of interest.

References
1. Wroblewska-Seniuk K, Dabrowski P, Szyfter W et al (2017) Universal newborn hearing screening: methods and results, obstacles, and benefits. Pediatr Res 81:415–422. https://doi.org/10.1038/pr.2016.250
2. Triagem Auditiva Neonatal Universal (2018) Implantação com ética, técnica e responsabilidade [Implementation with ethics, technique, and responsibility]. Available at https://www.fonoaudiologia.org.br/cffa/wp-content/uploads/2016/08/folder-tanu.pdf. Accessed 15 June 2020
3. BRASIL. Lei n. 12.303 (2010) [Provides for the mandatory performance of the Evoked Otoacoustic Emissions exam]. Brasília, DF. Available at https://www.planalto.gov.br/ccivil_03/_ato2007-2010/2010/lei/l12303.htm. Accessed 15 June 2020
4. Konradsson K, Kjaerboel E, Boerch K (2007) Introducing universal newborn hearing screening in Denmark: preliminary results from the city of Copenhagen. Audiol Med 5(3):176–181
5. ABIMO (2018) Comércio Exterior da Saúde [Foreign trade in health]. Available at https://abimo.org.br/docs/BalancaComercial_out15.pdf. Accessed 15 June 2020
6. Calil S, Gomide E (2002) Equipamentos Médico-Hospitalares e o Gerenciamento da Manutenção [Medical-hospital equipment and maintenance management]. Projeto REFORSUS. Ministério da Saúde, Brasília, DF, 709 p
7. Kemp D (1978) Stimulated acoustic emissions from within the human auditory system. J Acoust Soc Am 64(5):1386–1391
8. Maico (2009) Pediatricians guide to otoacoustic emissions (OAEs) and tympanometry. Available at https://www.schoolhealth.com/media/pdf/51057_Pediatricians_Guide_to_OAEs.pdf. Accessed 15 June 2020
9. McPherson B, Li S, Shi B, Tang J, Wong B (2006) Neonatal hearing screening: evaluation of tone-burst and click-evoked otoacoustic emission test criteria. Ear Hear 27(3):256–262
10. AAA (2011) Childhood hearing screening guidelines. Available at https://successforkidswithhearingloss.com/wp-content/uploads/2011/08/Childhood-Hearing-Screening-Guidelines-FINAL-approved-by-AAA-Board.pdf. Accessed 15 June 2020
11. Dhar S, Hall J III (2018) Otoacoustic emissions—principles, procedures and protocols, 2nd edn. Plural Publishing, San Diego
12. Tujal P (2004) Redução de Artefato de Estímulo em Emissões Otoacústicas Evocadas por Cliques [Stimulus artifact reduction in click-evoked otoacoustic emissions]. PhD Thesis, COPPE/UFRJ, Rio de Janeiro
13. Wulf-Andersen P, Wille M (2016) Two modified IEC 60318-4 ear simulators for extended dynamic range. G.R.A.S. Sound & Vibration whitepaper, Aug 2016. Available at https://www.gras.dk/files/MiscFiles/WhitePapers/43BB_Whitepaper.pdf. Accessed 15 June 2020
14. FDA (2001) Etymotic Research ER-10D OAE probe clearance. Available at https://510k.directory/clearances/K011114. Accessed 15 June 2020
15. NXP (2014) Freedom board for Kinetis K22F hardware—user's guide. Available at https://www.nxp.com/docs/en/user-guide/FRDMK22FUG.pdf. Accessed 15 June 2020
16. NXP (2014) SGTL5000 initialization and programming. Freescale Semiconductor Inc. Application Note AN3663, Rev. 3.0. Available at https://www.nxp.com/docs/en/application-note/AN3663.pdf. Accessed 15 June 2020
17. Galda M (2012) Audio output options for Kinetis—using DMA and PWM, DAC, or I2S audio bus. Freescale Semiconductor Application Note AN4369, Rev. 0. Available at https://www.nxp.com/docs/en/application-note/AN4369.pdf. Accessed 15 June 2020
18. Penick K, Solchaga L, Berilla J, Welter J (2005) Performance of polyoxymethylene plastic (POM) as a component of a tissue engineering bioreactor. J Biomed Mater Res A 75(1):168–174
19. IEC (2007) IEC 60645-3. Electroacoustics—audiometric equipment—part 3: test signals of short duration
20. Fedtke T, Hensel J (2010) Specification and calibration of acoustic short-term stimuli for objective audiometry. In: Proceedings of the 20th international congress on acoustics, pp 1–5. Available at https://www.acoustics.asn.au/conference_proceedings/ICA2010/cdrom-ICA2010/papers/p534.pdf. Accessed 15 June 2020
21. Brüel & Kjær (2015) Hand-held analyzer types 2250, 2250-L and 2270 with microphone type 4189. Available at https://www.bksv.com/downloads/2250/be1712.pdf. Accessed 15 June 2020
22. Larson Davis (2020) SoundAdvisor™ model 831C sound level meter & noise monitoring kit. Available at https://www.larsondavis.com/contentstore/MktgContent/LinkedDocuments/LarsonDavis/831C_lowres.pdf. Accessed 15 June 2020

Pneumotachograph Calibration: Influence of Regularization Methods on Parameter Estimation and the Use of Alternative Calibration Models A. D. Quelhas, G. C. Motta-Ribeiro, A. V. Pino, A. Giannella-Neto, and F. C. Jandre

Abstract

New ventilator designs in response to the pandemic would benefit from flow and volume measurements. This work compares approaches to calibrate a pneumotachograph (PTC) with either a polynomial or a non-polynomial mapping function. Bidirectional 3-L manual strokes with a syringe provided fixed-volume digitized segments of signal spanning the range of flow measurement of a 4-orifice PTC. Voltage-to-flow calibration functions, either polynomial or based on an additive square root term, were optimized in the sense of least squared errors between estimated and known volume per stroke, for each direction, inspiratory and expiratory. Calibrations with a regularizing penalization in the objective function, aiming at removing local extrema (maxima/minima) of the calibration curves within the usage range, were compared to the non-regularized versions. Regularized polynomials (3rd to 5th degree) yielded relative errors on volume from 3.56 to 7.17%, whereas for standard (non-regularized) models the errors ranged from 0.81 to 1.68%; errors with square-root-based curves were an order of magnitude smaller. Although regularization worsened error performance, it avoided implausible curve shapes. Robustness, convergence properties, computational and practical matters and requirements for field operation are issues to be probed further.

Keywords

Flowmeter • Pneumotachograph • Pulmonary engineering • Calibration

A. D. Quelhas (corresponding author), PETROBRAS/GIA-E&P/EAEP/ACADUP-OSS, Republica do Chile, 65, Rio de Janeiro, Brazil; e-mail: [email protected]
G. C. Motta-Ribeiro · A. V. Pino · A. Giannella-Neto · F. C. Jandre, PEB/COPPE/UFRJ, Rio de Janeiro, Brazil

1 Introduction

In the year 2020, the world faced the Covid-19 pandemic. This disease has respiratory consequences and, for some patients, artificial ventilation is required to maintain life. The characteristics of the contagion raised concerns about whether the available mechanical ventilators would suffice for the predicted number of patients simultaneously in need of them; in fact, the health systems of some countries were overwhelmed [1]. This circumstance gave rise to an urgent demand for mechanical pulmonary ventilators. Responses included accelerating the production of previously existing models and designing new ones, aimed in the latter case at being as rapidly reproducible as possible to meet the emergency whilst remaining safe and reliable. Entities such as the Medicines and Healthcare products Regulatory Agency (UK) issued guidelines for such designs [2]. Among the features pointed out as desirable is the ability to monitor quantities such as pressures, flow rates and respiratory volumes. Pneumotachographs (PTCs) are a usual choice as the instrument in a ventilator design that measures flows and volumes; among them, PTCs that give readings based on the differential pressure that develops between taps of a pneumatic resistor through which flow passes. In current practical embodiments, a PTC of this type is attached to a differential pressure transducer, whose electrical output signal is digitized and mapped to the respective flow value by a calibration curve. One of the techniques to estimate this curve begins by digitizing the analog signal from the transducer while repeatedly pumping known fluid volumes through the device. Then, assuming that the function that maps the digitized quantities into flow is well represented by a polynomial of sufficiently high degree, its coefficients are computed, for instance by linear regression, in the sense of minimizing the errors (usually their summed squares) between the time integral of the mapped signals and the respective known volume during one repetition [3]. However, this approach may pose problems: high-degree polynomials may be required to keep calibration errors low along the operating range, and these may present local extrema (minima/maxima), breaking the one-to-one correspondence between the input, transducer voltage, and the calibrated quantity, flow. Non-polynomial alternatives have been tried in the literature; see for instance the use of power-law models in [4]. Regularization techniques have also been employed in the field of pulmonary engineering as a tool to coerce parameter estimates into a plausible value space [5]. In this work, two strategies are tested to enhance calibration whilst keeping the number of required parameters low: the use of a first-principles-oriented mathematical model for a 4-orifice PTC using the square root of the transducer voltage; and a regularization procedure that penalizes the presence of extrema of the polynomial within the working range. In the following sections, the Methods, including experimental procedures, mathematical modeling and statistical analysis, are presented, followed by the Results and a Discussion with conclusions and observations on future work.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_143

2 Materials and Methods

2.1 Mathematical Description of Flow Calibration

Calibration of a pneumotachograph is a parameter estimation problem, in which the estimated parameter vector, $\theta$, is supposed to be part of some known function $f$ relating the gas flow through the pneumotachograph, $F$, and some electrical signal, $Z$, from a differential pressure transducer, as shown in Eq. (1):

$$F(t) = f(Z(t), \theta) \quad (1)$$

Since upstream and downstream flow paths are different in the pneumotachograph, a common practice [6] is to estimate a different set of parameters ($\theta_{insp}$, $\theta_{exp}$) for each flow direction:

$$F_{insp}(t) = f\left(Z_{insp}(t), \theta_{insp}\right) \quad (2)$$

$$F_{exp}(t) = f\left(Z_{exp}(t), \theta_{exp}\right) \quad (3)$$

where $insp$ and $exp$ stand for inspiration and expiration, respectively. Because it is usually easier to have a gas volume reference than a gas flow reference, flow calibration is indirectly achieved using the fact that, during a ventilatory cycle, the volume, $V$, corresponds to the numerical integration of the gas flow:

$$V = \int_{cycle} f(Z(t), \theta)\, dt \quad (4)$$

Thus, the calibration problem can be expressed in an optimization framework (5), where the parameter set, $\theta$, is chosen according to the minimization of the objective function shown in Eq. (6):

$$\theta_k = \underset{\theta}{\arg\min}\; g(\theta) \quad (5)$$

$$g(\theta) = \sum_{i=1}^{N_k} \left| V_{ref} - \int_{t_{0i}}^{t_{fi}} f\left(Z_{i,k}(t), \theta\right) dt \right| \quad (6)$$

where $V_{ref}$ is the volume reference; $k \in \{\text{inspiration}, \text{expiration}\}$ is the ventilatory direction; $i$ is the cycle number; $t_{0i}$ and $t_{fi}$ are, respectively, the beginning and ending times of the $i$th ventilatory cycle; and $N_k$ is the number of cycles for ventilatory direction $k$.

2.2 Regularization of Parameter Estimation

The gas flow, $F$, is calculated by the function $f(Z, \theta)$, which is related to the pressure drop across the pneumotachograph ($\Delta P_{ptc}$). The inner geometry of commercially available pneumotachographs ranges from a simple orifice to more complex shapes, all with an expected monotonic shape of the $F \times \Delta P_{ptc}$ curve. If $\theta$ is estimated from the unconstrained optimization problem shown in Eq. (5), it is possible that the optimization algorithm finds a parameter set that yields a non-phenomenological flow versus pressure drop curve, although it represents a minimum of the objective function. In order to avoid such curve shapes, a regularization procedure is applied in the form of a constraint added to the optimization problem, as shown in Eqs. (7) and (8). The intention of this modification is to avoid the presence of extrema (maxima or minima) within the nominal range of use. This is achieved by penalizing analytical derivatives of the calibration function that have roots inside that range:

$$\theta^{*} = \underset{\theta}{\arg\min}\; g(\theta), \quad \text{subject to } r_i \notin \text{range}(Z), \; i = 1, 2, \ldots, n \quad (7)$$

where $r_i$ is the $i$th root of Eq. (8), $n$ is the total number of roots, and range($Z$) is the range of the electrical signal of the differential pressure transducer:

$$\frac{d f(Z, \theta)}{dZ} = 0 \quad (8)$$
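As an illustration of how this constraint can be checked numerically, the sketch below locates the real roots of the derivative of a pure polynomial calibration curve and penalizes those falling inside the usage range. This is a minimal sketch assuming NumPy; the penalty weight is an arbitrary illustrative value, not one used in this work.

```python
import numpy as np

def extrema_in_range(theta, z_range):
    """Real roots of d/dZ of the polynomial f(Z) = theta_1*Z + ... + theta_n*Z^n
    that fall inside the transducer usage range [z_range[0], z_range[1]].
    theta is ordered [theta_1, ..., theta_n] as in Eq. (9)."""
    # np.polyder/np.roots expect highest-degree-first coefficients;
    # the calibration polynomial has no constant term, hence the trailing 0.
    coeffs = np.concatenate((np.asarray(theta)[::-1], [0.0]))
    roots = np.roots(np.polyder(coeffs))
    real = roots[np.isreal(roots)].real
    return real[(real >= z_range[0]) & (real <= z_range[1])]

def regularized_objective(theta, g, z_range, weight=1e3):
    """Volume-error objective g(theta), as in Eq. (6), plus a penalty for each
    extremum of the calibration curve inside the usage range (illustrative
    penalty form; the paper states the constraint, not its implementation)."""
    return g(theta) + weight * len(extrema_in_range(theta, z_range))
```

A derivative-free or penalty-tolerant optimizer would then minimize `regularized_objective` instead of `g` alone.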

2.3 Calibration Function

It is usual for the calibration function (1) to take the form of a polynomial [3], as shown in Eq. (9), with $n$ commonly ranging from the 3rd up to the 5th degree:

$$f(Z(t), \theta) = \underbrace{\left[\, Z(t) \;\; Z(t)^2 \;\cdots\; Z(t)^n \,\right]}_{P_n(Z)} \left[\, \theta_1 \;\; \theta_2 \;\cdots\; \theta_n \,\right]^{T} \quad (9)$$

In this work we propose alternative calibration functions with the structure shown in Eq. (10), where degree $n = 0$ means dependence on the square root term only:

$$f(Z(t), \theta) = \left[\, \sqrt{Z(t)} \;\; P_n(Z) \,\right] \left[\, \theta_1 \;\; \theta_2 \;\cdots\; \theta_{n+1} \,\right]^{T} \quad (10)$$

Experimental data were fit to three pure polynomial models (n = 3, 4, 5), according to Eq. (9), and to mixed models (Eq. (10)) with n ranging from 0 to 2.
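Because both Eq. (9) and Eq. (10) are linear in $\theta$, the volume least-squares problem of Eqs. (5) and (6), without the regularization constraint, reduces to a linear regression in which each stroke contributes one row: the time integral of each basis column. The sketch below illustrates this on synthetic data with NumPy; the function names and the signed square-root treatment are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def basis(z, n_poly, sqrt_term):
    """Columns of the calibration model: an optional signed sqrt(|Z|) term
    followed by Z, Z^2, ..., Z^n, in the spirit of Eqs. (9) and (10)."""
    cols = []
    if sqrt_term:
        cols.append(np.sign(z) * np.sqrt(np.abs(z)))
    cols += [z ** k for k in range(1, n_poly + 1)]
    return np.column_stack(cols)

def fit_calibration(strokes, v_ref, dt, n_poly=3, sqrt_term=False):
    """Least-squares fit of theta so that the time integral of f(Z, theta)
    over each fixed-volume stroke matches the known syringe volume v_ref.
    Since f is linear in theta, each stroke yields one regression row:
    the integral of each basis column over that stroke."""
    rows = [basis(z, n_poly, sqrt_term).sum(axis=0) * dt for z in strokes]
    A = np.vstack(rows)
    theta, *_ = np.linalg.lstsq(A, np.full(len(strokes), v_ref), rcond=None)
    return theta
```

With real data, one such fit would be performed per ventilatory direction, as in Eqs. (2) and (3).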

2.4 Experimental Setup

The apparatus includes a 3-L manual syringe, a Honeywell DC001NDC4 pressure transducer and a National Instruments NI-USB-6008 AD converter with the sampling frequency set at 5 kHz. In the scope of the SARS-CoV-2 ventilator project, four pneumotachographs were tested. This work reports the results from the Magnamed Adult PTC for the Fleximag ventilator, shown in Fig. 1. Inspiratory and expiratory cycles were simulated by pumping the syringe in opposite directions. Each experiment contains 32 cycles. A pre-processing algorithm removes cycles where the signal surpasses the nominal range of the pressure transducer. Syringe strokes are carefully controlled along the cycles in order to elicit a flow distribution spanning the whole range of the differential pressure transducer. The baseline is removed by subtracting the median of the first 10 s of signal.

3 Results

The calibration signal generated with the manual strokes of the syringe covered the voltage range of the differential pressure transducer (±2 V), with a fairly homogeneous distribution between ±1.5 V (Fig. 2). Figure 3 shows the individual distribution of the pressure transducer signal for each inspiratory cycle. Figures 4 and 5 show the different shapes of the fitted calibration curves resulting from the use of the regularization method of Eqs. (7) and (8) in the parameter estimation procedure. The influence of the regularization on the mean absolute (%) relative error on volume can be seen in Table 1. Polynomial models resulted in larger errors for the lowest flows, as high as 40% of the reference volume (Fig. 6). While the simplest, square-root-only, model also had its worst errors at low flows, the overall error magnitudes were lower than those of the polynomials; the largest error was about 4% (Fig. 7). Figure 8 shows the boxplot distribution of the (%) relative error on volume for polynomial models from 3rd to 5th degree (P3, P4, P5), for the pure square root model (SR) and for mixed models (P1 + SR, P2 + SR) as described in Eq. (10). The mean absolute (%) relative error on volume is presented for these models in Table 2. Estimated polynomial coefficients are shown in Tables 3, 4 and 5.

Fig. 1 Pneumotachograph used in this study (Magnamed, Brazil)

Fig. 2 Histogram showing the approximate probability distribution of the signal from the differential pressure transducer in all 32 cycles of the flow signal stroke

Fig. 3 Box plot showing the distribution of the signal from the differential pressure transducer along each of the 32 inspiratory cycles

Fig. 4 Non-regularized polynomial models. Left: 5th degree, right: 3rd degree. All experimental data is inside black circles interval


Fig. 5 Regularized 5th degree polynomial models. All experimental data is inside black circles interval

Table 1 Mean absolute (%) relative error on volume

        Regularized           Non-regularized
        Insp       Exp        Insp       Exp
P3      7.17       6.64       1.68       1.47
P4      5.22       3.58       1.68       0.94
P5      5.22       3.56       1.14       0.81

Fig. 6 Calibration using 5th degree polynomial model. Top: inspiratory cycles; bottom: expiratory cycles. Red dashed line: mean volume error

Fig. 7 Calibration using the F = √Z model. Top: inspiratory cycles; bottom: expiratory cycles. Red dashed line: mean volume error

Fig. 8 Boxplot of (%) relative error on volume for different calibration models. Pn—polynomial of nth degree; SR—square root dependence. Top: inspiratory cycles; bottom: expiratory cycles


Table 2 Mean absolute (%) relative error on volume for different calibration models. Pn—polynomial of nth degree; SR—square root dependence

          Insp       Exp
P3        10.59      3.78
P4        10.37      3.33
P5        9.62       3.31
SR        0.85       0.47
P1 + SR   0.32       0.34
P2 + SR   0.24       0.33

Table 3 Estimated coefficients (3rd degree polynomial)

                      θ3           θ2           θ1
n = 3
  insp                1.235E+00    −3.141E+00   2.962E+00
  exp                 6.635E−01    1.878E+00    2.098E+00
n = 3 (regularized)
  insp                1.242E−01    −7.454E−01   1.884E+00
  exp                 8.616E−02    5.170E−01    1.367E+00

Table 4 Estimated coefficients (4th degree polynomial)

                      θ4           θ3           θ2           θ1
n = 4
  insp                −3.623E−03   1.231E+00    −3.128E+00   2.959E+00
  exp                 4.147E−01    1.844E+00    2.894E+00    2.347E+00
n = 4 (regularized)
  insp                5.840E−03    1.401E−01    −9.808E−01   2.055E+00
  exp                 −5.544E−03   1.018E−01    7.437E−01    1.576E+00

Table 5 Estimated coefficients (5th degree polynomial)

                      θ5           θ4           θ3           θ2           θ1
n = 5
  insp                −2.006E−01   −7.668E−01   4.473E+00    −6.077E+00   3.664E+00
  exp                 −1.054E−02   4.148E−01    1.873E+00    2.905E+00    2.347E+00
n = 5 (regularized)
  insp                4.916E−07    5.841E−03    1.401E−01    −9.808E−01   2.055E+00
  exp                 −5.204E−05   −5.553E−03   1.025E−01    7.438E−01    1.573E+00

4 Conclusions

This study evaluated a proposal for a regularization method for parameter estimation of pneumotachograph calibration functions. It also evaluated the use of a calibration function based on the square root of the output of the pressure transducer, instead of a pure polynomial model.

Due to its mathematical form, any constraint included in the optimization problem will worsen or, in the best case, leave unchanged the main component of the objective function, the composition of estimation errors, since the constrained optimum may not correspond to the unconstrained minimum. This can be seen in Table 1: for polynomials P3–P5, regularized models yield mean absolute relative errors on volume from 3.56 to 7.17%, whereas standard (non-regularized) models range from 0.81 to 1.68%. These errors are in the accepted range (ABNT NBR ISO 80601-2-12:2014). Differences between inspiratory and expiratory results are probably due to asymmetry of the airflow circuit.

In many other cases, the poorer performance of regularized models could be the main reason to avoid their use. However, in the case of pneumotachograph calibration, empirical and theoretical knowledge about pressure transducer curves and fluid mechanics allows one to seek the avoidance of calibration curves with shapes such as those seen in Fig. 4. Parameter estimation in this field of study should be cautious with optimization algorithms that blindly drive the parameter set towards the very minimum of the main objective function when this leads to shapes that impair use over the whole operating range. This study also brings further practical evidence that pure polynomial models need not be the only class of functions used for pneumotachograph calibration; some pneumotachograph models may be much better served by much simpler square root calibration functions. In the present case, it can be seen in Figs. 6 and 7 that even relatively high-degree polynomial functions are not able to deal with the low range of flow values. The narrower error distribution (Fig. 8) and the smaller mean absolute volume error show that a single-parameter model can perform better if the model structure is well chosen. The likely consequence of using simpler models is a more robust calibration (to be probed in future studies) and an easier recalibration procedure in intensive care use. Numerical and computational issues, such as convergence, choice of initial guesses and implementational adequacy for a gamut of devices, as well as practical issues such as the number and nature of strokes, the feasibility of field calibration procedures and errors under a larger range of conditions, may also be targets of further research.

Acknowledgements We would like to thank Petrobras for allowing the author Andre D. Quelhas to join the research group of COPPE/UFRJ in its efforts to build an emergency ventilator for COVID-19. GCMR, AVP, AGN and FCJ wish to thank FAPERJ, CNPq and CAPES for partial support of their research work.

Conflict of Interest The authors declare that they have no conflict of interest.

References 1. Gattinoni L, Coppola S, Cressoni M, Busana M, Rossi S, Chiumello D (2020) COVID-19 does not lead to a “typical” acute respiratory distress syndrome. Am J Respir Crit Care Med 201 (10):1299–1300 2. Medicines and Health Care Products Regulatory Agency (2020) Specification for rapidly manufactured ventilator system (RMVS). https://assets.publishing.service.gov.uk/government/uploads/ system/uploads/attachment_data/file/879382/RMVS001_v4.pdf. Accessed 15 June 2020 3. Giannella-Neto A, Bellido C, Barbosa RB, Melo MF (1998) Design and calibration of unicapillary pneumotachographs. J Appl Physiol 84(1):335–343 4. Biselli PJC, Nóbrega RS, Soriano FG (2018) Nonlinear flow sensor calibration with an accurate syringe. Sensors (Basel) 18(7):2163 5. Motta-Ribeiro GC, Jandre FC, Wrigge H, Giannella-Neto A (2018) Generalized estimation of the ventilatory distribution from the multiple-breath washout: a bench evaluation study. Biomed Eng Online 17(1):3 6. Pino AV, Kagami LT, Jandre FC, Giannella-Neto A (2004) DAS— um programa de aquisição e processamento de sinais para engenharia pulmonar. In: Annals of the III Congresso Latinoamericano de Engenharia Biomédica, pp 765–768

Geriatric Physiotherapy: A Telerehabilitation System for Identifying Therapeutic Exercises A. C. Chaves Filho, A. V. M. Inocêncio, K. R. C. Ferreira, E. L. Cavalcante, B. C. Bispo, C. M. B. Rodrigues, P. S. Lessa and M. A. B. Rodrigues

Abstract

Population aging is a global reality, characterized by morphological, psychological and functional changes. With the limitations resulting from aging, changes can occur in the performance of activities of daily living, such as decreased muscle strength, reduced bone mass, loss of flexibility and decreased capacity of the sensory system. Physiotherapy actively assists in health promotion and prevention, reducing the effects of aging and improving quality of life. This paper has the objective of developing a movement monitoring device to assist in the rehabilitation of elderly people. Accelerometers, or motion sensors, store data for long periods of time, providing information about the movement activities of the subjects over a desired period. These sensors are a useful tool in the assessment of physical activity; through accelerometer data, it is possible to evaluate all types of physical activity. This is a pilot study composed of three steps: development, validation and application of the instrument. For the application and validation of this study, a protocol of exercises was used in order to capture the variations of the X, Y and Z axes of the accelerometers in each exercise. In the study, it was possible to identify the movements during the performance of the proposed protocol. Through the accelerometers' positions and the variation of their axes, it was possible to classify the exercises according to the number of repetitions and the time spent performing each exercise. In addition, remote real-time monitoring by a physiotherapist was possible. It is concluded that this device can assist in the management of elderly patients, assisting the health professional during the rehabilitation process.

A. C. Chaves Filho (corresponding author) · A. V. M. Inocêncio · K. R. C. Ferreira · E. L. Cavalcante · B. C. Bispo · P. S. Lessa · M. A. B. Rodrigues, Federal University of Pernambuco, Av. Prof. Moraes Rego, 1235, Recife, PE, Brazil
C. M. B. Rodrigues, University of Porto, Porto, Portugal

Keywords

Monitoring • Telerehabilitation • Accelerometry • Elderly • Physiotherapy

1 Introduction

The elderly population has grown considerably. With advancing age, there is an increasing incidence of degenerative diseases and dependence. Functionality has been proposed as the notion of health for seniors; it is considered one of the most important characteristics of the human aging process [1]. The aging process may cause biopsychosocial changes in the individual, associated with fragility, and may result in increased vulnerability. The health of the older population must be viewed as a general state of health and not just as the absence of disease, always observing their level of functional independence [2]. Among the effects of aging, some diseases can appear and may cause limitations. In this scenario, health professionals must act to promote the well-being of the elderly and ensure that aging occurs with quality, as supported by public health policies [3]. Regular physical activity, as a form of therapeutic exercise, is important in a rehabilitation program for the aged: it can improve the cardiovascular system and neuromuscular aspects from the perspective of motor control and cortical plasticity [4]. However, the displacement to the therapy environment may be an impacting factor for the accomplishment and maintenance of the proposed rehabilitation program; the problem of displacement can be affected by factors such as individual needs and socioeconomic conditions [5]. In these cases, telerehabilitation appears as a tool to maintain the practice of therapeutic exercises, bringing the care of health professionals to the patient [6]. Meanwhile, the lack of familiarity with technology of some elderly people reflects the need to develop a simple, practical, small and non-invasive tool, with adequate sensitivity for the measurement of physical activity, to be used in telerehabilitation [7]. In this perspective, an instrument integrated with an accelerometer sensor can evaluate a greater diversity of movements and also distinguish the period of use, frequency and duration of physical activity. Portable and easy to handle, such an instrument enables the acquisition of these measures in most conditions [8,9]. This research was motivated by the scarcity of specific devices that identify movements with effective acquisition. In addition, it makes possible the recognition of the movements and can serve as a basis for future research.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_144

Fig. 1 Schematic of the system composed of four accelerometers. Source: Authors (2020)

2 Materials and Methods

This is a pilot study on the identification of movements in the elderly. The research was approved by the Research Ethics Committee of the Federal University of Pernambuco under opinion number 3.219.408 and CAAE 06276819.6.0000.5208. The developed equipment is composed of four accelerometers integrated with a microcontroller (MCU) and a Bluetooth module. For the construction of the equipment, the MCU MSP430G2553 was used, with 16 KB of flash memory and 512 B of RAM. This MCU, developed by Texas Instruments, can work at 16 MHz. Among its main features, the low power consumption can be mentioned; it allows the use of batteries for long periods, a feature needed in this project. The equipment sends the data for recording on a personal computer through the Bluetooth protocol.

Inertial sensors are used in situations where it is necessary to perceive the effects of forces that cause any change in the inertial state of a system. In rehabilitation protocols, accelerometer data can be used to identify patterns of prescribed exercises. The inertial sensor module chosen was the LSM6DS3, because it presents low power consumption. The MCU MSP430G2553 allows the integration of several LSM6DS3 modules through the SPI interface; in this project, four modules were used [10]. Figure 1 shows the scheme of the project. The four accelerometer modules capture and send the signal to the microcontroller. After the acquisition, the MSP430G2553 creates a package with the data and sends the information via the Bluetooth protocol to a personal computer. Following the reception of the data package, the Reability software [11] performs its validation. After verifying the information, the software displays the graphics in real time and generates a file containing the exercise protocol data. From this file, the signals are imported into the Matlab software. This analysis allows identifying the exercises, the number of repetitions and the duration of each exercise.
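The paper does not specify the packet layout used over the Bluetooth link. Purely as an illustration of the validation step performed on the PC side, a hypothetical frame with a sync byte, four 3-axis samples as little-endian signed 16-bit integers, and an XOR checksum could be parsed as follows (all framing details here, including the sync byte and frame length, are assumptions, not the device's actual protocol):

```python
import struct

HEADER = 0xAA  # hypothetical sync byte, not from the paper

def xor_checksum(data):
    """XOR of all bytes; a common lightweight integrity check."""
    c = 0
    for b in data:
        c ^= b
    return c

def parse_frame(frame):
    """Parse one hypothetical 26-byte frame: sync byte, 4 sensors x 3 axes
    as little-endian int16, and a final XOR checksum byte.
    Returns None when the frame fails validation."""
    if len(frame) != 26 or frame[0] != HEADER:
        return None
    if frame[25] != xor_checksum(frame[:25]):
        return None  # corrupted over the Bluetooth link
    values = struct.unpack('<12h', frame[1:25])
    # group as (x, y, z) triples, one per LSM6DS3 module
    return [values[i:i + 3] for i in range(0, 12, 3)]
```

Rejecting malformed frames before plotting mirrors the validation performed by the Reability software.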

2.1 Methodology

To validate the instrument, a protocol with 10 exercises developed by the authors was applied; the exercises were adapted for the volunteers according to their needs in order to verify the feasibility of the equipment. However, only four of these exercises will be presented in this article. Two female volunteers between 65 and 75 years old participated in this stage; the sample was obtained by convenience. The volunteers were informed about the exercises and how to perform them.

The accelerometers were fixed distally on the upper limbs, on the posterior aspect of the forearm between the styloid processes of the radius and ulna, and distally on the lower limbs, equidistant from the medial malleoli of each volunteer. The protocol consisted of exercises for the upper limbs (UL), exercises for the lower limbs (LL) and combined exercises (UL and LL). In the acquisition of the signals, an entire session of exercises was performed. After importing the data into Matlab, movement patterns were identified, also allowing parameters such as the number of repetitions per minute and the total exercise time to be found. In the study, the variations of the axes were analyzed during the performance of each proposed exercise.

2.2 Data Analysis

With the files containing the data captured during the exercise protocols, it was possible to use processing techniques to improve the signal through filtering. In this step, the high-frequency noise and the DC level of the signal were removed.

After processing the acquired information, some parameters were extracted, such as time intervals and displacements. In addition, a graphical user interface was made to plot graphs of the extracted data. The data were analyzed using the Matlab software: the number of repetitions was identified through the variations of the axes, and the total exercise time and the time of each repetition were also computed.
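A sketch of this processing chain is shown below, assuming SciPy; the 5 Hz cutoff, filter order and peak-detection thresholds are illustrative choices for slow therapeutic movements, not values reported in this work (which used Matlab):

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def preprocess(signal, fs, cutoff=5.0):
    """Remove the DC level, then low-pass filter high-frequency noise.
    fs is the sampling frequency in Hz; cutoff is an illustrative choice."""
    b, a = butter(4, cutoff / (fs / 2), btype='low')
    return filtfilt(b, a, signal - np.mean(signal))

def count_repetitions(axis_signal, fs, min_period=1.0):
    """Count repetitions as prominent peaks of one accelerometer axis;
    min_period (s) is the assumed minimum time between repetitions."""
    peaks, _ = find_peaks(axis_signal,
                          distance=int(min_period * fs),
                          prominence=0.5 * np.std(axis_signal))
    return len(peaks)
```

With the peak indices, the time of each repetition and the total exercise time follow directly from the sampling frequency.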

3 Results and Discussion

To identify movement through accelerometry, the protocol consisted of 10 exercises. For this article, the exercises that showed the greatest variation in the axes during the performance of the protocol by the selected volunteers were chosen.

3.1 Volunteer 1

Figure 2a shows the variation in the Z axis during the exercise with the LL. The execution of static gait in a closed kinetic chain consisted of alternating movement of the lower limbs simulating a walk while the upper limbs were supported; there was a great variation in the X and Z axes. The Z axis had a greater variation due to hip and knee flexion, and it can be seen that, while one leg is in contact with the ground, the other is making the movement. During the tests, a small variation of the Y axis was observed, which can be related to the volunteer's leg being out of alignment when the movement is made, or may be caused by imbalance during the execution. For the combined exercise, the volunteer remained seated and performed exercises for the UL and LL simultaneously. The volunteer was asked to perform a 90° abduction of the shoulder joint with 90° flexion of the elbow complex joint and pronation of the radioulnar joint for the UL. For the LL, as a priority, an extension of the knee joint was requested. It can be seen in Fig. 2b that, in the combined exercise of UL and LL, there is a greater variation of the X axis for the upper limbs and of the Z axis for the lower limbs. Also, there is a synchrony between the extension of the lower limbs and the extension of the arm, as shown in Fig. 2b.

Fig. 2 Exercise of static gait (a) and lower limb flexion and extension with bench press (b), collected with four accelerometers. Source: Authors (2020)

3.2 Volunteer 2

In Fig. 3a, b, it can be seen that, during the combined exercise of abduction and adduction of the vertical shoulder joint and abduction and adduction of the hip joint, there was variation in the three axes, X, Y and Z. Likewise in the execution of the 90° abduction of the glenohumeral joint, associated with a 90° flexion of the elbow complex joint and pronation of the radioulnar joint for the UL. In this example, the movement occurred in the right upper limb and left lower limb simultaneously. During the exercise for the UL, the right upper limb showed a greater variation in the X and Y axes, while for the left lower limb the axes that showed the greatest variation were Y and Z. It was noted, during the combined exercise of UL and LL, that the volunteer performed more repetitions with the leg than with the arm. This can happen due to a lack of motor coordination, interfering with the quality of the movement's execution. During the movement for the upper limbs, variation was observed in the X, Y and Z axes, but with a greater variation in the X axis of both arms. Figure 3c illustrates the X-axis behavior during this exercise. For the purpose of identifying variability, this study describes the exercises that presented the greatest variation in each participant, although all volunteers performed the same exercises.

Table 1 shows the comparison between the two users according to the number of repetitions per minute. It is observed that among the volunteers, in the combined exercise of upper limbs and lower limbs, and the combined exercise of right upper limb and left lower limb, volunteer 1, managed to perform a greater number of repetitions than volunteer 2. This can happen due to the age difference between the two volunteers and the practice of physical activity performed by volunteer 1, which can interfere with the performance of the activity, the degree of strength, balance and fatigue. Table 2 summarizes all movements of the accelerometers, highlighting only the axes in which there were the greatest variations. Note that the variation pattern is different for all exercises contemplated, showing that it is possible to differentiate the type of exercise only through the accelerometry signal. In a study performed by Bourke et. al [12], elderly volunteers were asked to perform a series of activities with a triaxial accelerometer located at the waist, so that the activity classification algorithm could be validated in real time. Participants performed a semi-structured protocol and a freeliving activity protocol. For the semi-structured protocol, the subjects received instructions to perform tasks designed to generate specific activities by a study investigator. All tasks were repeated sequentially 3 times. The study presented by Jatesiktat [13], illustrates a method that uses a 3-axis accelerometer and a barometer, both are coupled to a device used on the wrist to detect falls. Simulations were performed on a single individual, where 30 simulated falls were recorded, containing 6 styles of fall. Soaz and Diepold [14], in their work, used a triaxial accelerometer for the detection of stair climbing in elderly who walk at low speed and, therefore, with risk of functional decline. 
A second objective of that study was to investigate the potential of the device to assess the gait capacity of the elderly based on differences between frail and healthy walking patterns. An algorithm for step detection, developed and validated using data from 10 healthy adults and 21 institutionalized elderly subjects, was used; each individual performed a 10 m walk test.
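The per-axis variation analysis used throughout this section can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation; the function names and sample data are ours, and each sensor channel is assumed to be a list of (x, y, z) samples:

```python
import math

def axis_variation(samples):
    """Standard deviation of each axis of a tri-axial accelerometer signal.

    samples: list of (x, y, z) tuples from one sensor.
    Returns a dict mapping axis name to its standard deviation.
    """
    stds = {}
    for i, axis in enumerate("xyz"):
        vals = [s[i] for s in samples]
        mean = sum(vals) / len(vals)
        stds[axis] = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
    return stds

def dominant_axes(samples, k=2):
    """Axes with the greatest variation, used here to characterize which
    exercise was performed (e.g. X and Y for the upper-limb movement)."""
    stds = axis_variation(samples)
    return tuple(sorted(stds, key=stds.get, reverse=True)[:k])
```

Comparing the dominant axes of each limb against the pattern summarized in Table 2 is one simple way to tell the protocol exercises apart from the accelerometry signal alone.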

4 Conclusions

This research shows that therapeutic exercise in the elderly can be registered through the accelerometry signal. Through the exercise protocols, it was observed that it is possible to define which exercise was performed with accelerometers, the number of repetitions executed and the time spent performing the exercises, achieving

144 Geriatric Physiotherapy: A Telerehabilitation System …

Fig. 3 Exercise of diagonal between right arm (a) and left leg (b). Bench press exercise in upper limb (c). Source Authors (2020)


A. C. Chaves Filho et al.

Table 1 Number of repetitions per minute

Exercise                                        Volunteer 1   Volunteer 2
Bench press                                     40            22
Static gait                                     21            15
Lower limb flexion and extension + Bench press  34            22
Diagonal (left arm + right leg)                 32            19

Source Authors (2020)

Table 2 Major variations in the accelerometer patterns of the protocol exercises. Rows: Bench press; Static gait; Lower limb flexion and extension + Bench press; Diagonal (left arm + right leg). Columns: Right Arm, Left Arm, Right Leg and Left Leg, each with X, Y and Z axes; check marks in the original table indicate the axes with the greatest variation for each exercise. Source Authors (2020)

more reliable results through the proposed protocol for elderly patients undergoing physical therapy treatment, helping to develop rehabilitation protocols and improving the quality of life of the elderly. The use of accelerometry as a tool in therapeutic conduct proves important in encouraging future studies, supporting the use of motion sensors during the rehabilitation process. Therefore, accelerometry data can quantify and classify movements in the execution of a rehabilitation protocol.

Acknowledgements The authors thank the Coordination for the Improvement of Higher Education Personnel (CAPES) and the National Council for Scientific and Technological Development (CNPq) for their support.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Siqueira FV, Facchini LA, Piccini RX et al (2007) Prevalência de quedas em idosos e fatores associados. Revista de Saúde Pública 41:749–756
2. WHO (2005) Envelhecimento ativo: uma política de saúde
3. Mallmann DG, Galindo Neto NM, Sousa JC et al (2015) Health education as the main alternative to promote the health of the elderly. Ciência & Saúde Coletiva 20:1763–1772
4. BRASIL (2013) Ministério da Saúde. Diretrizes de atenção à reabilitação da pessoa com acidente vascular cerebral. Ministério da Saúde, Brasília (DF). http://bvsms.saude.gov.br
5. Cruz PKR, Vieira MA, Carneiro JA et al (2020) Difficulties of access to health services among non-institutionalized older adults: prevalence and associated factors. Revista Brasileira de Geriatria e Gerontologia 23
6. Marques MR, Ribeiro ECC, Santana CS (2014) Aplicações e benefícios dos programas de Telessaúde e Telerreabilitação: uma revisão da literatura. Revista Eletrônica de Comunicação, Informação e Inovação em Saúde 8
7. Peel N, Russell T, Gray L (2011) Feasibility of using an in-home video conferencing system in geriatric rehabilitation. J Rehabil Med 43:364–366
8. Item-Glatthorn JF, Casartelli NC, Petrich-Munzinger J et al (2012) Validity of the intelligent device for energy expenditure and activity accelerometry system for quantitative gait analysis in patients with hip osteoarthritis. Arch Phys Med Rehabil 93:2090–2093
9. Knuth AG, Assunção MCF, Gonçalves H et al (2013) Methodological description of accelerometry for measuring physical activity in the 1993 and 2004 Pelotas (Brazil) birth cohorts. Cadernos de Saúde Pública 29:557–565
10. LSM6DS3 datasheet, STMicroelectronics. https://www.st.com/resource/en/datasheet/lsm6ds3.pdf
11. Cavalcante EL (2015) Plataforma Dinâmica de Avaliação Fisioterápica
12. Bourke AK, Ihlen EAF, Van de Ven P et al (2016) Video analysis validation of a real-time physical activity detection algorithm based on a single waist mounted tri-axial accelerometer sensor. In: 2016 38th annual international conference of the IEEE engineering in medicine and biology society (EMBC). IEEE, Orlando, FL, USA, pp 4881–4884
13. Jatesiktat P, Ang WT (2017) An elderly fall detection using a wrist-worn accelerometer and barometer. In: 2017 39th annual international conference of the IEEE engineering in medicine and biology society (EMBC). ISSN: 1558-4615, pp 125–130
14. Soaz C, Diepold K (2016) Step detection and parameterization for gait assessment using a single waist-worn accelerometer. IEEE Trans Biomed Eng 63:933–942

UV Equipment for Food Safety

F. B. Menechelli, V. B. Santana, R. S. Navarro, D. I. Kozusny-Andreani, A. Baptista, and S. C. Nunez

Abstract

The application of ultraviolet (UV) light, mostly UVC, to food safety has gained attention in recent years as a potential alternative to chemical and thermal disinfection methods, largely due to the worldwide movement towards the consumption of organic food from small producers. The objective of this work was to present a methodology to disinfect eggs using ultraviolet radiation in equipment built with a germicidal lamp for aquaria and commercially available materials at low cost. To disinfect the eggs, a box of galvanized material (40 × 20 × 25 cm) was built, with two ultraviolet lamps (253.7 nm; P = 8 W), 28.7 cm long, positioned above and below a grid made of nylon wire for positioning the eggs without blocking the light. The lamps were connected in a parallel circuit, which allows the two lamps to be switched on synchronously by a single switch. To evaluate microbial reduction, the Salmonella sp. survival fractions of 80 eggs collected from small rural producers in the city of Votuporanga were quantified before and after exposure to UV light for 1 min, with an energy density of 0.208 J/cm² and a power density of 3.4 mW/cm². Under the tested parameters, there was a reduction of approximately 1 log of microorganisms. Therefore, we can conclude that ultraviolet light is a feasible technology for decontaminating eggs contaminated with Salmonella sp., which may contribute to the reduction of foodborne diseases.

Keywords

Ultraviolet applications · Microorganisms · Food safety · Eggs

F. B. Menechelli, R. S. Navarro, S. C. Nunez (corresponding author)
Biomedical Engineering Postgraduation Program, Universidade Brasil, Rua Carolina Fonseca, 235, São Paulo, Brazil
e-mail: [email protected]
V. B. Santana, R. S. Navarro, A. Baptista, S. C. Nunez
Bioengineering Postgraduation Program, Universidade Brasil, Rua Carolina Fonseca, 235, São Paulo, Brazil
D. I. Kozusny-Andreani
Environmental Sciences Postgraduation Program, Universidade Brasil, Fernandópolis, Brazil

1 Introduction

According to the World Health Organization (WHO), food contamination with dangerous bacteria, viruses, parasites or chemical products causes more than 200 diseases, with severity ranging from diarrhoea to cancer [1]. WHO estimates that 600 million people get sick after eating contaminated food each year, and as many as 420,000 deaths can be linked to food contamination [1]. Several chemical products can be used to clean food, but besides the residues that can cause allergies and other complications, the environmental toll of using chemical products is also a danger to a sustainable food supply chain. Ultraviolet radiation was discovered in 1801 by the German scientist Johann Ritter, who researched, analyzed, and discovered another source of light beyond violet. This new light was not visible to the human eye and was capable of oxidizing silver halides [2]. The name ultraviolet light was given at the end of the nineteenth century. This radiation has been used for disinfection since 1910, when the first drinking water treatment device was inaugurated in Marseille, France. By 1955 the method was being applied in Europe and the USA, and it was used in the 1940s and 1950s to combat tuberculosis in hospitals in the United Kingdom and the USA [2]. Currently, UV-C is still in use, but in another context: to purify air, clean surfaces, and disinfect food packaging, with its greatest applicability in scientific research [3, 4]. It is well known that prolonged UV irradiation can damage human tissues, particularly the skin. The effects of UVC on human tissues are less studied than those of UVA and UVB, but their impacts seem to be closely

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_145


related. UVB has been particularly well studied and is accepted as one of the causes of skin cancer. With the advent of the COVID-19 pandemic, the use of UVC radiation gained attention from the scientific community and society, and food and packaging safety became a concern for the sanitary authorities and the public. Conventional chemical decontamination methods, involving quaternary ammonium disinfectant, sodium hydroxide, phenol, hydrogen peroxide or formaldehyde, leave residues on eggshells that may damage the cuticle layer [5]. The physical methods currently employed to decontaminate eggshells include gamma irradiation, freeze drying, hot air, hot water, infrared, atmospheric steam, microwave heating and radiofrequency heating [6]. UV has been suggested for decontaminating eggshells and seems to present good results [7]. Therefore, the objective of this work is to build and test a UV irradiation box prototype to decontaminate eggshells that could be used by small producers.

2 Material and Methods

The methodology of this work is divided into two stages. The first consists in the design and execution of a project to develop the equipment that uses UV light to decontaminate eggshells; the second consists of a microbiological test using CFU reduction counting to validate the developed equipment.

2.1 Equipment

The equipment uses a UV light source as a radiation source to obtain an antimicrobial effect: two 8 W OSRAM germicidal lamps (λ = 253.7 nm, 28.7 cm in length each), centrally arranged in the upper and lower parts of the equipment, as shown in Fig. 1. The box was made with galvanized sheets and has dimensions of 40 × 20 × 25 cm, designed to ensure that the light has a uniform distribution inside it (Fig. 1), without radiation leaks. The cover is 5 cm high, 20 cm wide and 40 cm long. Inside the cover we designed and inserted two reactors, R1 and R2, the on/off switch and the wire that connects the circuit to power the equipment. The elements that connect the lamps to the circuit were installed in a parallelepiped device measuring 3 cm in height, 1.0 cm in width and 0.5 cm in depth. This device, in addition to fixing the lamps, also protects the wires to prevent electrical damage. A handle has also been designed for easy transport. When opened, this entire structure moves, facilitating the insertion of the eggs into the equipment, which are suspended on a grid, which is

Fig. 1 Project design of the egg disinfection box

a structure that allows the positioning of the eggs to receive the radiation (Fig. 2). The grid was made of nylon wire and positioned at a height of 8 cm in relation to lamps L1 and L2, so that the eggs were suspended during illumination. Figure 3a, b shows the dimensions of the grid: 5 cm away from the side faces and 3 cm from the sides, with the eggs spaced 1.5 cm from each other, totaling 24 eggs arranged in an area 30 cm long by 18 cm wide. The lamps (top and bottom) are activated through a device forming a parallel circuit (represented in Fig. 4 in red), operated by a conventional light switch.

2.2 Microbiological Test

To evaluate the effectiveness of eggshell disinfection, microbiological tests were carried out at the Central Microbiology Laboratory of Universidade Brasil, Campus de Fernandópolis/SP, using Salmonella sp. as the microorganism and plate counting of CFU/ml. The eggs were removed from the nest and inserted directly into a sterile container; they were not cleaned and were not refrigerated. For the initial microbiological analysis, 80 microbiological samples of eggshells coming from small rural producers from the city of Votuporanga-SP were collected with the aid of a sterile swab. To standardize the microbial collection, an area of 2 cm² was chosen for each egg, and the swab was performed over two areas of 1 cm². After the initial collection of microorganisms, the eggs were placed in the disinfection box with a distance of 1.5 cm between them, as shown in Fig. 5; the box was closed and the samples were exposed to UV light for 1 min.
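As a quick consistency check on the exposure parameters, the delivered energy density is simply the irradiance multiplied by the exposure time. A small sketch (the function name is ours, not from the paper):

```python
def uv_fluence_j_per_cm2(irradiance_mw_per_cm2, seconds):
    """Energy density in J/cm² from a constant irradiance in mW/cm².

    With the 3.4 mW/cm² power density and 60 s exposure used here this
    gives about 0.20 J/cm², the same order as the reported 0.208 J/cm².
    """
    return irradiance_mw_per_cm2 * 1e-3 * seconds
```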


Fig. 2 Cover design and positioning of resistors and lamps

Fig. 3 Grid design and eggs net wire placement

Fig. 4 Irradiation area and circuit position

New microbiological samples were then collected with the aid of a sterile swab. For microbial recovery, all samples were stored in Falcon tubes containing 1 mL of sterile 0.5% saline solution, and the tubes were vortexed for 30 s.

Thereafter, cell suspensions were serially diluted tenfold in PBS on 96-well plates to generate dilutions of 10⁻¹ to 10⁻³ times the original concentrations, and a 100 µL aliquot was distributed onto Salmonella Shigella (SS) agar plates in triplicate. Plates were incubated for 48 h at 35 °C to allow bacterial colony formation. Bacterial colonies were counted and converted into colony-forming units per gram (CFU g⁻¹). The survival fraction values were obtained by dividing the counts after treatment by the mean bacterial load from eggs not submitted to UV light (before treatment). Student's t-test was used to verify the difference between moments, with a significance level of p < 0.05.

3 Results

3.1 Equipment Construction

The equipment was built exactly as the project presented in the methodology. Figure 6 shows the position of the UV lamps, above and below the grid, both centralized.

Fig. 6 Box with UV light and the nylon grid for decontamination of eggs

The mesh of the net where the eggs are deposited was placed centrally in the box and was made of nylon threads, allowing the distribution of light to reach even the areas where the eggs are supported (Fig. 7).

Fig. 7 Illustrative image of the grid for positioning the eggs

Fig. 5 Distribution of the samples (eggs) inside the UV box

3.2 Microbiological Test

Bacterial load before treatment showed a significantly greater average bacterial count compared to its value after irradiation with UV light (p < 0.05); a 1 log reduction was obtained after 1 min of irradiation (Fig. 8). Since the microbial reduction reached 1 log in 1 min of irradiation, one can assume that a more pronounced microbial reduction could be achieved by increasing the dose. The organoleptic characteristics of the egg must be evaluated, and the impact of UV irradiation on egg characteristics must be further investigated. Interestingly, Thilini et al. [6], in a review evaluating eggshell decontamination, did not mention UV as a possibility; the authors cited gamma irradiation, freeze drying, hot air, hot water, infrared, atmospheric steam, microwave heating and radiofrequency heating for the production of microbiologically safe eggs.

Fig. 8 Effect of UV light on Salmonella sp. Data are mean values. Bars are standard error
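The survival-fraction and log-reduction arithmetic behind Fig. 8 can be sketched as follows. This is a generic illustration with hypothetical counts, not the authors' data pipeline:

```python
import math

def survival_fraction(cfu_after, mean_cfu_before):
    """Counts after UV exposure divided by the mean load before
    treatment, as described in the methods."""
    return cfu_after / mean_cfu_before

def log_reduction(mean_cfu_before, cfu_after):
    """log10 reduction; 1.0 corresponds to a 90% kill, the level
    reported here after 1 min of irradiation."""
    return math.log10(mean_cfu_before / cfu_after)

# Hypothetical example: 1e5 CFU before and 1e4 CFU after is a 1-log reduction.
```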


Our preliminary results demonstrated that decontamination of eggshells with a low dose of UVC light is possible, and further studies should be encouraged in this area; the prototype must also undergo several metrological tests to improve its performance.

4 Conclusions

A simple, inexpensive device was built in this study. The equipment cost under US$ 10,000 and was able to clean 30 eggs per minute, therefore being viable even for small farmers and organic producers. UVC is a simple and low-cost technology which has been shown to be effective in decontaminating the eggshell surface, contributing to the reduction of foodborne diseases and increasing the sustainability of production by diminishing the use of chemical products to clean farm produce.

Acknowledgements We thank Universidade Brasil for the possibility of carrying out the study.

Conflict of Interest The authors declare that they have no conflict of interest.


References

1. World Health Organization (2015) WHO estimates of the global burden of foodborne diseases: foodborne disease burden epidemiology reference group 2007–2015. World Health Organization. https://apps.who.int/iris/handle/10665/199350
2. Bagnato VS, Kurachi C, Menezes PFC, Chianfrone DJ, Pires L (2016) Dispositivo para desinfecção de superfícies. Universidade de São Paulo, USP, SP
3. Barroso LB, Wolf DB (2009) Radiação ultravioleta para desinfecção de água. Disc. Scientia. Série: Ciências Naturais e Tecnológicas, S. Maria 10(1):1–13
4. BRASIL (2010) Ministério da Saúde. Secretaria de Vigilância em Saúde. Departamento de Vigilância Epidemiológica. Manual integrado de vigilância, prevenção e controle de doenças transmitidas por alimentos. Editora do Ministério da Saúde, Brasília
5. Sobotka J (1993) The efficiency of water treatment and disinfection by means of ultraviolet radiation. Water Science Technology 27
6. Thilini PK, Ross K, Fallowfield H, Whiley H (2017) Reducing risk of salmonellosis through egg decontamination processes. Int J Environ Res Public Health 14(3):335
7. Turtoi M, Borda D (2014) Decontamination of eggshells using ultraviolet light treatment. World's Poultry Sci J 70(2):265–278

Method to Estimate Doses in Real-Time for the Eyes

J. C. de C. Lourenco, S. A. Paschuk, and H. R. Schelin

Abstract

In interventional medical procedures, the medical team is subject to high exposure to scattered radiation compared to other occupational groups that work with X-rays. The dose limits accumulated in the lens deserve attention, to avoid a relative risk of developing a cataract after the dose is absorbed by the eye lens. The objective of this work is to develop a system that allows estimating, in real-time, the dose accumulated in the lens of the eye received by the physician during an interventional procedure, and to indicate means of protection against ionizing radiation. Dose rates were collected by an ionization chamber at 166 cm above ground level, at distances from the main beam of 19, 38, 57, 76, and 152 cm around the fluoroscopy table. The estimated doses were visualized in real-time through a display showing the accumulated dose intensities. To test the performance of the system, we used an adult phantom with a size of 30 × 30 × 22 cm to measure the real-time dose estimate in the region of the lens of the eyes, simulating an interventional procedure; we irradiated for 20 min, and the dose estimates for those X-ray beam distances were 220.6, 159.3, 110.3, 78.4 and 22.1 µSv per procedure, unshielded, respectively. The validation of the measurements was consistent (0.025 < p-value < 0.05). For the radiologist, the doses accumulated annually over 110 procedures in the usual position may be reduced from 17.5 mSv/year at 38 cm to 12.1 mSv/year at 57 cm from the beam, well below the ICRP recommendations. The estimated dose in real-time helps the interventional physician optimize their exposure to scattered radiation during the procedure, allowing them to enhance personal protection and reduce the occupational dose in the lens of the eyes.

J. C. de C. Lourenco (corresponding author)
State University of Londrina, Londrina, Brazil
e-mail: [email protected]
S. A. Paschuk
Federal University of Technology-Paraná, Curitiba, Brazil
H. R. Schelin
Pele Pequeno Principe Research Institute, Curitiba, Brazil

Keywords

Eye dose · Real-time measurement · Phantom simulation · Interventional radiology

1 Introduction

The daily routine of an interventional physician involves intensive use of X-ray images to visualize anatomical structures during medical procedures; for this reason, this professional is at risk from the effects of ionizing radiation, requiring adequate radiological protection given the proximity of the primary X-ray beam [1–3]. Good protection against ionizing radiation will prevent acute risks such as cataracts and skin lesions. Protection against ionizing radiation is also the best approach to minimize late radiation-induced effects, especially cancer. Cutaneous lesions are an acute side effect for both the patient and the interventional medical staff and occur due to extremely high radiation doses during the intervention. In radiology activities, medical professionals should follow the As Low As Reasonably Achievable (ALARA) principle, which dictates that radiation protection should be optimized and all exposure to radiation should be kept as low as possible, taking social and economic factors into account [4]. The Brazilian Institute for Radiation Protection and Dosimetry (IRD) has determined that, when working with ionizing radiation during medical diagnosis, the occupational radiation doses to which medical personnel are exposed should be controlled. The IRD also recommends

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_146


that records be kept so that clinicians can obtain a report of their activities regarding interventional radiology. It also suggests that routines for measuring occupational staff doses be improved through frequent measurements of radiation doses to the lens, as well as through assessment of recorded personal doses. In hospitals that use radiology, thermoluminescent dosimeters (TLDs) are generally used by employees working in areas with ionizing radiation. Another reason why staff are exposed to high doses of radiation during interventional work is the short distance from the radiation source. Often, when working near the patient, it is difficult to use protective screens or even glasses, devices that should be used when working near the radiation source. Unfortunately, the interventional physician does not always use these tools properly, and many find them awkward and uncomfortable during long procedures [5]. The topic of ionizing radiation is especially relevant for the interventional physician, since the collective dose of the medical staff represents more than 90% of the collective dose of all radiology personnel. The dosimeters used by medical staff have the disadvantage of not immediately showing the cumulative dose, which is usually read one month after use; so an effective dose rate measurement system is important for the safety of the medical staff. This system must show radiation doses in real-time, as this can make staff aware of their protection against ionizing radiation and therefore decrease their doses [6]. Through the real-time system, an interventional physician can visualize the equivalent dose in real-time, and thus maintain a reasonable distance from the radiation source and adjust to a suitable position with less radiation exposure, without altering their performance in the interventional procedure.
To obtain a real-time equivalent cumulative dose system [6], it is necessary to know the dose rate levels around the fluoroscopy table, represented by the isodose map, and the conversion factors relative to the reference point of the ionization chamber. The construction of the isodose map results from dose rate measurements at all points around the fluoroscopy table; measured points with the same dose rates were adjusted using an interpolation technique to construct the isodoses. Dose rates are recorded in real-time with an ionization chamber fixed at a distance from the main beam. The measured values are corrected in real-time through the conversion factors in the isodose map, showing dose rates and cumulative dose. These values are displayed numerically in an image viewer to the medical staff [3], associated with their position relative to the main beam. The visualization system allows verifying the level of radiation exposure in real-time to which the


interventional physician and his team are subject. With knowledge of the effects of ionizing radiation, the medical staff can adapt to a position with a lower received dose, without interfering in the performance of the medical procedure. Recently, the International Commission on Radiological Protection [7] recommended reducing the occupational dose limit for the lens from 150 to 20 mSv annually, indicating that limiting the cumulative dose in the lens prevents a relative risk of developing a cataract [7]. The main objective of this project was to develop a system to estimate the equivalent dose in real-time in the eye lens during interventional medical procedures using fluoroscopy.

2 Materials and Methods

Dose rate measurements were performed with fluoroscopy equipment (Axiom Artis FC, Siemens AG 2001, C-arm). To simulate the scattered radiation, a polymethylmethacrylate (PMMA) adult phantom with dimensions of 30 × 30 × 22 cm³ was used as the scattering object, and dose rates were measured using a Ludlum Model 9DP-1 ionization chamber. For the acquisition of dose rates, the ionization chamber was placed in a fixed position, 76 cm away from the main beam and at a height of 166 cm relative to the fluoroscopy table. Data for these measurements were obtained using the 9DP Logging Spreadsheet (Ludlum). The development of the real-time system was divided into two parts (Fig. 1). The first part was the linearity test of the system for dose rate measurements, considering voltage, current and exposure time: the current remained constant at 30 mA with the voltage ranging from 50 to 90 kVp, then the voltage remained constant at 50 kV with the current ranging from 5 to 30 mA, irradiating for 10 s each time. The second part was data acquisition and processing. These measurements were defined in an X and Y plane, with the height Z at 166 cm from ground level; measurements were made at the positions of 19, 38, 57, 76 and 152 cm in the XY plane, at 45° intervals. All measurements were performed with a current of 30 mA, a voltage of 50 kV, an exposure time of 10 s, and a distance from the X-ray tube to the table of 100 cm. The ionization chamber was configured to obtain continuous data at 1-s intervals, aligned towards the center of the scattering object at all points of the planes. After acquiring all points, the data processing algorithm was developed to (1) eliminate data below 10 µSv/h, identifying the data of each point; (2) eliminate the


Fig. 1 System development diagram

effect of the dead time of the ionization chamber, using the data above the average of each point; and (3) eliminate discrepant data by applying the Chauvenet criterion, thus obtaining the mean and standard deviation of the dose rate at each point. The measured data of the radiation scattered in the region of the professional's eyes were used to construct the isodose curves using the interpolation technique. For each isodose on the map, a conversion factor was applied. This conversion factor was calculated from the isodose rate and the value of the dose rate at the reference point, located at a height of 166 cm and 76 cm from the X-ray beam (DoseRateReferencePoint = 443 µSv/h), by Eq. (1):

Factor = RangeDosesRatesInMap / DoseRateReferencePoint (1)

The second part corresponded to the real-time system, where the ionization chamber was positioned at the fixed reference point and set to collect data at intervals and electronically transmit the data to the 9DP Logging Spreadsheet software. Estimates of real-time isodose rates referred to the dose rate acquisitions obtained in real-time at the reference point, associated with the conversion factor, given by Eq. (2):

Estimate = Factor × DoseRateReferencePoint (2)

The isodose rates, EstimateDoseRate, were initialized every time the X-ray system was triggered; otherwise, the recording of dose rates referred to background radiation. The estimated cumulative dose, EstimateAcc, in each isodose was calculated by Eq. (3):

EstimateAcc = EstimateAcc + Estimate / 3600 (3)

Thus, in all interventional procedures, the accumulated dose readings of each isodose map started at zero; while the system was connected, the accumulated dose result was constantly updated in an image viewing system. The algorithms were developed and implemented in the 9DP Logging Spreadsheet Pkg software (Ludlum) to read the dose rate acquisitions at the reference point in real-time, to calculate the estimated isodose rates and the estimated equivalent cumulative dose of each isodose, and to display the estimated equivalent cumulative dose to the medical staff in an image viewer.

3 Results

The validation of the real-time system was carried out using Student's t-test for independent samples [8]. This test addressed whether there was a statistically significant difference between the measured values and the values estimated around the fluoroscopy table. The hypothesis test presupposed that there was no difference between the measured dose rates and the estimated dose rates (H0); if the significance value obtained was below 5%, this hypothesis was rejected. The tests were performed from the position of the interventional physician.

For the linearity test of the system, the dose rate measurements were performed at the position of the interventional physician, 76 cm from the center of the table. In one test the current remained constant at 30 mA with the voltage ranging from 50 to 90 kV, and in the other the voltage remained constant at 50 kV with the current ranging from 5 to 25 mA, with a 10 s exposure time; the correlation values R² were 0.986 and 0.996, respectively (Fig. 2). Dose rate acquisitions were measured for the 30 positions around the fluoroscopy table at 45° intervals in the X and Y plane, in the eye region, at a height of 166 cm. For all


Fig. 2 Distribution of rate dose depending on voltage (kV) and electric current (mA)

positions, the dose rate measurements were performed at a voltage of 50 kV, a current of 30 mA and a 10 s exposure time. The isodose curves were constructed from the measurements at the positions of 19, 38, 57 and 76 cm from the main beam, respectively for the angles of 76°, 63°, 53° and 45°, at a height of 166 cm (Fig. 3). The results of the measurements of the 30 positions around the fluoroscopy table are shown in Table 1 as the distribution of the average dose rates for the eye region. The measured values were used to fit the curves, using the interpolation technique, in the construction of the isodose curves. The set of isodose curves for the eye region is shown in Fig. 4, with the dose rate distributions for each distance from the main beam representing the isodose map.
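The interpolation step can be illustrated with a simple piecewise-linear fit over the reported isodose rates. This is a simplified stand-in for the actual curve construction, not the authors' code; the table values are the isodose rates quoted in the text for each distance:

```python
# Isodose rates (µSv/h) at each measured distance (cm) from the main beam.
MEASURED = {19: 900.0, 38: 650.0, 57: 450.0, 76: 320.0, 152: 90.0}

def interp_dose_rate(distance_cm, table=MEASURED):
    """Piecewise-linear interpolation between measured (distance, rate)
    pairs; a simplified stand-in for the isodose curve fitting."""
    pts = sorted(table.items())
    for (d0, r0), (d1, r1) in zip(pts, pts[1:]):
        if d0 <= distance_cm <= d1:
            t = (distance_cm - d0) / (d1 - d0)
            return r0 + t * (r1 - r0)
    raise ValueError("distance outside measured range")
```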

Fig. 3 Configuration for the acquisition of dose rates in relation to the distance in cm from the main beam at the level of the doctor’s eyes

With knowledge of the isodose-to-primary-beam distances, the dose rate at the reference point was found to be 443 ± 1.4 µSv/h, which was necessary to obtain the conversion factors. The conversion factors obtained with Eq. (1) were: 2.0 for the isodose of 900 µSv/h; 1.5 for 650 µSv/h; 1.0 for 450 µSv/h; 0.7 for 320 µSv/h; and 0.2 for 90 µSv/h, corresponding to main beam distances of 19, 38, 57, 76 and 152 cm in the region of the eyes. To facilitate the visualization of the dose rate distributions on the isodose map, the dose ranges were assigned colors. In the representation of the color bands on the map, Fig. 4, blue corresponds to rates below 320 µSv/h; green to rates between 320 and 450 µSv/h; yellow to 450–650 µSv/h; orange to 650–900 µSv/h; and red to rates above 900 µSv/h. The second part of the development was the calculation of dose estimates in real-time through the implementation of the computational code shown in Fig. 5. The dose rate read each second by the acquisition system at the reference point is multiplied by the conversion factor to obtain the real-time estimate, Eq. (2). Cumulative dose calculations are performed in real-time, second by second, and the numerical values of the dose rate ranges on the map are displayed on the monitor [3], Eq. (3). During the intervention procedure, the fluoroscopy system is activated. The computer program reads dose rates in real-time using the fixed ionization chamber at the reference point; readings are processed second by second. Dose rate estimates are calculated in relation to the distances from the main beam around the fluoroscopy table at 45° intervals. The sum of all real-time dose rate estimates gives the numerical values of the cumulative doses, which are displayed on the monitor second by second until the system is turned off, Fig. 5.
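The chain of Eqs. (1)–(3) can be sketched as follows (a simplified illustration using the values reported above; the variable names are ours, not those of the 9DP software):

```python
REFERENCE_RATE = 443.0  # microSv/h, dose rate measured at the fixed reference point
MAP_RATES = [900.0, 650.0, 450.0, 320.0, 90.0]  # isodose map values, microSv/h

# Eq. (1): conversion factor of each isodose relative to the reference point
FACTORS = [round(r / REFERENCE_RATE, 1) for r in MAP_RATES]  # 2.0, 1.5, 1.0, 0.7, 0.2

def estimated_rates(ref_rate):
    """Eq. (2): real-time dose-rate estimate for each isodose."""
    return [f * ref_rate for f in FACTORS]

def accumulate(ref_readings_per_second):
    """Eq. (3)-style accumulation: each 1-s reading adds rate/3600 microSv."""
    totals = [0.0] * len(FACTORS)
    for ref_rate in ref_readings_per_second:
        for i, est in enumerate(estimated_rates(ref_rate)):
            totals[i] += est / 3600.0
    return totals
```

For example, one hour of 1-s readings at the reference rate itself accumulates about 443 µSv on the factor-1.0 isodose.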

Method to Estimate Doses in Real-Time for the Eyes

Table 1 Values of measured dose rates (µSv/h) of the scattered radiation distributions around the fluoroscopy table for the respective distances in cm, associated with the angles 0°, 45°, 90°, 135°, 225° and 315°, at the height of the eye region

Distance (cm)   0°           45°          90°          135°         225°         315°
19              910 ± 0.2    866 ± 1.1    985 ± 1.1    940 ± 0.4    777 ± 0.4    820 ± 1.1
38              696 ± 0.8    662 ± 3.4    735 ± 0.3    727 ± 0.9    514 ± 2.6    495 ± 0.7
57              403 ± 2.4    451 ± 0.1    483 ± 0.3    537 ± 2.1    348 ± 0.2    365 ± 0.5
76              331 ± 1.5    321 ± 1.6    312 ± 1.2    366 ± 1.4    253 ± 2.4    241 ± 1.5
152             92 ± 1.2     95 ± 1.4     101 ± 0.7    101 ± 1.1    94 ± 7.5     60 ± 1.5

For a 30 × 30 × 22 cm³ phantom, the doses obtained were 220.6, 159.3, 110.3, 78.4 and 22.1 µSv/h (Fig. 6) at the respective positions, measured in real-time at a height of 166 cm in the region of the lens of the eyes [9].


Fig. 4 Map of the Isodose curves of the dose rate distributions in lSv/h in relation to their distances from the main beam to the height of the eyes

The validation of the real-time dose system was conducted from the position of the interventionist physician, with measurements taken at the physician's location at a given distance from the main beam in the region of the eyes. The tests comparing the measured and estimated dose rates at the physician's positions did not refute the hypothesis that the measured and calculated values were equal (0.025 < p < 0.05). Thus, the hypothesis test indicated that the system was accepted, suggesting its validity. We simulated the measurements using an adult phantom. When the real-time fluoroscopy system was switched on, the system started reading the dose rate at one-second intervals and displayed the estimated cumulative doses in real-time, Fig. 6. The real-time results of the estimated cumulative dose, shown second by second on the display system in the fluoroscopy room, were obtained by irradiating the phantom for 20 min in automatic mode, obtaining the doses for the phantom reported above.

4 Discussion

Recently, the International Commission on Radiological Protection (ICRP) [9] reinforced the need for protection against the effects of the ionizing radiation used during medical procedures in the fluoroscopy room. The level of scattered radiation is extremely high in the region near the x-ray equipment, where the medical staff performing interventional procedures are subject to health risks from such radiation. Therefore, the evaluation of individual doses in radiation examinations is essential, and the physician should be able to assess the doses received in real-time during the medical procedure [10]. This study simulated, with an adult phantom, the calculation of accumulated dose estimates in real-time, which are displayed in a visualization system during the intervention procedure. Measurements were performed with the x-ray tube above the fluoroscopy table, with the beam incident in the anteroposterior projection, considering the proximity of the region of the lens of the physician's eyes, where the doses received are most intense [11]. The comparisons between the measured and estimated values yielded p-values in the range 0.0125 < p < 0.025. Thus, the hypothesis test indicated that the system was accepted, suggesting its validity. Moreover, the prototype would not cause any obstruction during medical procedures. Our study comparatively simulated, with an adult phantom, the annual dose for an interventional cardiologist performing 110 catheter-based cardiovascular intervention procedures per year [12], without shielding: in the eye region, 17.5 mSv corresponding to 46 cm from the main beam; if that same physician moves farther from the main beam, to 57 cm, the received dose decreases to 12.1 mSv [3, 9, 10]. The ICRP-103 recommendation [2, 7, 13] reduces the occupational dose limit for the lens of the eye from 150 to 20 mSv annually, indicating the accumulated dose limit for the eye.


Fig. 5 Part of the processing code for accumulated dose estimates in real-time


Fig. 6 Display of the estimated cumulative doses in real-time during the intervention procedure

To evaluate the dose received according to the [3, 10] recommendation, that is, 20 mSv annually, the system calculates the annual dose rate for the interventional worker.

5 Conclusions

The real-time system can help the interventionist physician and staff to optimize their position around the fluoroscopy table during fluoroscopy-guided procedures. To evaluate the dose received according to the recommendation of [7], that is, 20 mSv annually, the system calculates the annual dose rate for the interventional radiologist. The image viewer system presents, second by second, the estimated dose rates and the estimated cumulative dose for each position relative to the main beam. At any given time, the interventionist physician may change position to reduce exposure to scattered radiation without compromising the medical procedure. This improves personal protection and reduces the occupational dose to the eyes.

Acknowledgements The authors thank the University Hospital of the State University of Londrina for its generous help.

Conflict of Interest The authors declare no conflict of interest.

References

1. Vano E, Fernandez JM, Sanchez R (2011) Occupational dosimetry in real-time: benefits for interventional radiology. Radiat Meas 46:1262–1265. https://doi.org/10.1016/j.radmeas.2011.04.030
2. Stewart FA, Akleyev AV, Hauer-Jensen M, Hendry JH, Kleiman NJ, MacVittie TJ, Aleman BM, Edgar AB, Mabuchi K, Muirhead CR, Shore RE, Wallace WH (2012) ICRP Publication 118: ICRP statement on tissue reactions and early and late effects of radiation in normal tissues and organs—threshold doses for tissue reactions in a radiation protection context. Ann ICRP 41:1–322. https://doi.org/10.1016/j.icrp.2012.02.001
3. Sandblom V, Mai T, Almén A, Rystedt H, Cederblad A, Båth M, Lundh C (2013) Evaluation of the impact of a system for real-time visualization of occupational radiation dose rate during fluoroscopically guided procedures. J Radiol Prot 33:693–702. https://doi.org/10.1088/0952-4746/33/3/693
4. Valentin J (2000) Avoidance of radiation injuries from medical interventional procedures, ICRP Publication 85. Ann ICRP 30:7–7. https://doi.org/10.1016/S0146-6453(01)00004-5
5. Martin CJ (2011) Personal dosimetry for interventional operators: when and how should monitoring be done? Br J Radiol 84:639–648. https://doi.org/10.1259/bjr/24828606
6. Vano E (2011) ICRP and radiation protection of medical staff. Radiat Meas 46:1200–1202. https://doi.org/10.1016/j.radmeas.2011.05.031
7. ICRP (2013) Radiological protection in cardiology, ICRP Publication 120. Ann ICRP 42:68–81
8. Leon-Garcia A (2009) Probability, statistics, and random processes for engineers. Technometrics. https://doi.org/10.1198/004017004000000400
9. Vano E, Fernández JM, Sánchez RM, Dauer LT (2013) Realistic approach to estimate lens doses and cataract radiation risk in cardiology when personal dosimeters have not been regularly used. Health Phys 105:330–339
10. Efstathopoulos EP, Pantos I, Andreou M, Gkatzis A, Carinou E, Koukorava C, Kelekis NL, Brountzos E (2011) Occupational radiation doses to the extremities and the eyes in interventional radiology and cardiology procedures. Br J Radiol 84:70–77. https://doi.org/10.1259/bjr/83222759
11. Sanchez R, Vano E, Fernandez JM, Gallego JJ (2010) Staff radiation doses in a real-time display inside the angiography room. Cardiovasc Intervent Radiol 33:1210–1214
12. Bacchim Neto FA, Alves AFF, Mascarenhas YM, Nicolucci P, de Pina DR (2016) Occupational radiation exposure in vascular interventional radiology: a complete evaluation of different body regions. Phys Med 32:1019–1024. https://doi.org/10.1016/j.ejmp.2016.06.014
13. Vetter RJ (2008) ICRP Publication 103, the recommendations of the International Commission on Radiological Protection. Health Phys 95:445–446. https://doi.org/10.1097/01.hp.0000324200.73903.5b

An IoT-Cloud Enabled Real Time and Energy Efficient ECG Reader Architecture Marciel B. Pereira, J. P. V. Madeiro, Adriel O. Freitas and D. G. Gomes

Abstract

In recent years, the Internet of Things (IoT) has emerged as a major computing paradigm leading the development of new architectures for collecting, processing and storing data. Among the several concepts associated with IoT, we highlight wearable computing, which allows users to interact with devices attached to the body. In this context, a large number of human body signals can be acquired using sensors connected to an embedded system, such as the Electrocardiogram (ECG) signal, which is widely used in the medical diagnosis of heart diseases. Moreover, there is great interest in maintaining ECG samples to accomplish disease recognition using different techniques that require a large database. Here we propose an architecture for ECG data acquisition, storage and visualization with a low-cost and energy-efficient embedded device. The proposed ECG reader with IoT architecture, a portable device with at least 40 h of autonomy, is able to collect data at sampling frequencies from 125 Hz to 1 kHz and store up to 60 s of data samples in a cloud server.

Keywords

Electrocardiogram • Internet of Things • Telemedicine

1 Introduction

The concept of the Internet of Things (IoT), which is the ability to gather data from, control and interact with electronic devices using the Internet [1], has received growing interest due to many

M. B. Pereira (B) · J. P. V. Madeiro · D. G. Gomes, Universidade Federal do Ceará (UFC), Ceará, Brazil, e-mail: [email protected]; A. O. Freitas, Universidade da Integração Internacional da Lusofonia Afro-brasileira (UNILAB), Ceará, Brazil

improvements in recent years, such as the increase in Internet speed worldwide, advances in Integrated Circuit (IC) design and the development of many easily programmable embedded systems. Hence, IoT has much to offer in the fields of personal healthcare and telemedicine [2] as a result of the integration of smartphones and health data acquisition devices. The acquisition of personal biosignals, such as the representation of heartbeat activity known as the Electrocardiogram (ECG), can be used to detect or predict diseases in many patients. Thus, health data such as biosignals are considered crucial to identify, model and predict illness, which makes research on novel and robust data acquisition architectures greatly worthwhile. By physiological principles, before a normal heartbeat completes, a wave of electrical current passes through the heart, which triggers myocardial contraction [3]. The pattern of electrical propagation produces a measurable signal known as the ECG wave; hence, an architecture for collecting, analysing and storing these signals is labeled an ECG reader. As described in [4], a typical ECG reader is constituted by a set of modules, of which Sensing, Processor and Storage are the preeminent ones. To build a simple IoT-enabled platform, a project needs to incorporate at least a Communication module to connect to a cloud service or a remote server. Despite the increased complexity, the inclusion of remote storage facilitates building a database while allowing near-real-time patient monitoring, a very desirable feature in healthcare. Lately, many inexpensive heart rate measurement systems have become available for personal health monitoring; however, it remains a challenge to dramatically reduce the power consumption of sensor devices while maintaining high signal quality [5].
A real-time ECG monitoring system must be able to store the heart wave with good quality, because many heart diseases require a large number of ECG signal samples to be identified. In this context, an architecture able to collect signals and store them in a cloud database can be crucial to aid the remote monitoring of heart diseases.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_147


M. B. Pereira et al.

We propose in this work the design of a very affordable, low-power-consumption IoT architecture using a precise ECG biosensor combined with the Embedded Signal Processor 32 bit (ESP32) to collect heart wave signals and to perform their cloud storage and data visualization over a Wireless Local Area Network (WLAN). This work is organized as follows: we present related work on ECG signal reading embedded systems in Sect. 2. We describe the ESP32 embedded system and the ECG biosensor in Sect. 3. The architecture description is presented in Sect. 4. The evaluation methodology is described in Sect. 5. We present the results of our proposed model in Sect. 6. Finally, we draw conclusions in Sect. 7.

2 Related Work

Recently, many programmable devices have been proposed to perform ECG signal collection. Considering a non-network-based approach, the authors of [6] proposed an ATmega32 microcontroller (Arduino) architecture with embedded digital Lowpass Filter (LPF), Notch Filter (NF) and Bandpass Filter (BPF), with a cable connection between the device and a computer to collect ECG samples. Considering network-enabled devices, the work of [7] designed a web-based ECG reader and recorder using a Raspberry Pi. Satija et al. [8] designed an IoT-enabled ECG monitoring framework using ECG sensors, Arduino, an Android phone, Bluetooth and a cloud server, and also proposed metrics for signal quality evaluation. The authors in [9] proposed a monitoring system using multi-platform hardware composed of a Raspberry Pi and an Arduino. Resource management in an ECG reader device is an important challenge; for instance, the authors in [10] implemented a compression method to decrease the memory usage of an ECG logger while preserving fiducial point detection integrity. The use of the programmable embedded system ESP32 was found in [11], where the authors were able to send data over a wireless network, although only 6 s of ECG signal could be stored.

Regarding the previously described works, many projects used Arduino and Raspberry Pi due to their easy programming development environments and good performance as embedded systems. In spite of similarities in ease of use, there is a massive difference in computational power and capabilities from one to another. An infrequently applied approach to produce an IoT ECG reader is the ESP32, an intermediate device in terms of computing power and network capabilities compared with Arduino and Raspberry Pi, but one that offers low cost and small power consumption.

3 ECG Data Acquisition Biosensor and ESP32 Embedded Systems

3.1 ESP32 Embedded System

The ESP32 system is composed of a single-core 32-bit microprocessor running at up to 240 MHz, with 4 MB of flash storage, 128 KB of ROM, 320 KB of Random Access Memory (RAM) and Wi-Fi 802.11 b/g/n connectivity [12], which represents approximately 100 times more memory resources than, for instance, the ATmega328 microprocessor used in the Arduino UNO board. Hence, the ESP32 can be used to develop IoT data acquisition hardware more effortlessly than low-end Arduino-based boards. The programming of the ESP32 can also be performed with the Arduino Integrated Development Environment (IDE) using the appropriate ESP32 compiler. Furthermore, MicroPython [13] is a clean and efficient implementation of the Python 3 programming language optimised to run on microcontrollers and in constrained environments such as the ESP32. MicroPython provides a simple on-board file system, which allows the creation of data files while running pre-flashed software, such as configuration files that can be used to add new wireless networks to the known list without recompiling the software. Also, because Python is an interpreted language, the debugging process is easier, allowing fast development of new features in the embedded code.
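The configuration-file idea can be sketched as follows (the file name and JSON layout are hypothetical, chosen by us for illustration; the paper does not specify a format):

```python
import json

# Hypothetical layout for a small on-board config file (e.g. "wifi.json")
# holding the known-network list, editable without re-flashing the firmware.
RAW = '{"networks": [{"ssid": "lab-wlan", "psk": "secret"}]}'

def known_networks(raw):
    """Parse the config text and return (ssid, psk) pairs to try on boot."""
    return [(n["ssid"], n["psk"]) for n in json.loads(raw)["networks"]]
```

On a real device the string would be read from the MicroPython file system; appending a new network object to the JSON array is enough to register an extra WLAN.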

3.2 EKG-EMG Bio-sensor for Data Acquisition

The EKG-EMG board designed by [14] is intended to acquire ECG signals using three biosensors attached to the right arm (RA), left arm (LA) and right leg (RL). These biosensors are combined with a set of filters and amplifiers in order to obtain a clean analog ECG signal. Although this board is designed to fit an Arduino board, its analog output can be fed into many microcontrollers, such as the ESP32. The EKG-EMG board raises the signal amplitude by ≈ 2800 times through its array of filters and operational amplifiers. Thus, the bio-signal is increased from less than 1 mV to a maximum value of 3.0 V. The analog signal provided by the EKG-EMG is then sent to an ESP32 analog input pin with 12-bit resolution.

4 System Architecture

In this work, we propose an ECG IoT local visualization and cloud storage architecture using the ESP32 with the MicroPython environment. The proposed architecture can be divided into four modules: acquisition, processing, cloud storage and visualization, as shown in Fig. 1. We present a detailed explanation of each component in the following subsections.

4.1 Acquisition Module

The acquisition module is represented by the EKG-EMG biosensor described in Sect. 3.2, which consists of a three-lead ECG biosensor connected to the board of [14] that produces an analog signal of up to 3.0 V. The source input and the collected analog signal output are transferred between the ESP32 and the EKG-EMG through a serial cable. Since the EKG-EMG is designed using analog amplifiers, it does not require any kind of synchronization.

4.2 Processing Module

This module is constituted by the ESP32, as detailed in Sect. 3.1, programmed using the MicroPython environment. When the ESP32 is powered on, it starts a processing thread of the main Python function file that performs network and socket management, analog signal reading, basic data compression, cloud service communication and visible information feedback on an OLED display. The processing module must handle memory shortages and power issues in order to make the device suitable for collecting data streams over long periods without failing. The ESP32 reads the analog signal input as an integer with up to 12-bit resolution, which means the input signal is stored within the range (0, 4095). Furthermore, the processing module has the flexibility of choosing the sample frequency. As described in [15], a minimum sample frequency of 250 Hz is acceptable for heart rate variability analysis. Nevertheless, the device down-samples the amplitude from 12-bit to 8-bit resolution in order to simplify data transmission to the cloud server, producing samples in a new range of (0, 255).

Fig. 1 ECG data acquisition: proposed architecture (acquisition with the SHIELD EKG-EMG, processing on the ESP-32, cloud storage, and analysis/visualization)

4.3 Data Visualization - Client Module

Once the ESP32 connects to a local WLAN and shows the acquired Internet Protocol (IP) address on the OLED display, any client connected to the same WLAN as the ECG reader is immediately able to connect to the device's IP address via a Web browser; thereupon the ECG reader works as a Hypertext Transfer Protocol (HTTP) Web server. Through GET requests, the client is able to select the measurement time in seconds and the desired sample frequency. The use of the ESP32 ECG reader in a client-server model allows any network-capable device to control measurements and obtain the heart signal.

4.4 Cloud Storage

As described in Subsect. 4.2, our ECG reader collects the signal using a 12-bit analog-to-digital conversion, which is reduced to 8-bit resolution. Afterwards, the signal is encoded using base64 coding, which simplifies data transmission to the cloud server. Base64 converts groups of 3 bytes into a sequence of 4 URL-friendly ASCII characters. We decided to send the base64-coded ECG signal via the HTTP protocol to a Google web service, which handles and stores the received base64 string in a Google Spreadsheet. The service also stores the sample frequency in hertz, the reading time in seconds, the input voltage (measured on an analog port of the ESP32 for power consumption measurements) and the free memory. The web server handles a POST method whose payload is the base64-encoded output signal and updates the parameters in the Google Spreadsheet right away.
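The down-sampling and encoding steps of the processing and cloud storage modules can be sketched in standard Python as follows (the helper names are ours; the on-device MicroPython code is not reproduced in the paper):

```python
import base64

def to_8bit(sample_12bit):
    """Map an ADC reading in (0, 4095) to (0, 255) by dropping the 4 LSBs."""
    return sample_12bit >> 4

def encode_payload(samples_12bit):
    """Down-sample and base64-encode: every 3 bytes become 4 ASCII chars."""
    packed = bytes(to_8bit(s) for s in samples_12bit)
    return base64.b64encode(packed).decode("ascii")

payload = encode_payload([0, 2048, 4095])  # one base64 group: 3 bytes -> 4 chars
```

Shifting right by 4 bits halves the payload size relative to sending 16-bit words, at the cost of amplitude resolution; base64 then adds back a 4/3 overhead in exchange for a text-safe HTTP body.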


5 Methodology

To evaluate our proposed design and compare it with related works, we selected a set of criteria to be evaluated while performing a series of performance test benches. These criteria are described as follows:

• Network capability is the ability to connect to a Wireless Local Area Network (WLAN) and complete data communication from the ECG reader node to a client or cloud server using several protocols, as well as to handle requests;
• Energy efficiency relates to the efficiency of the ECG reader device according to the power consumption (current and voltage) needed to complete a data read and cloud storage per time unit;
• Cost concerns the capital expenditure required to build the ECG reader solution, considering hardware and software service costs;
• Computing power is how much hardware is available to execute applications. Higher computing power implies the capability to run more complex algorithms, but might also result in low resource usage.

5.1 Comparison with Related Works

Our proposed ECG reader is designed to present high network capability, high energy efficiency, low cost and mid-range computing power. We selected a group of research papers and graded them according to our suggested criteria. In most related work, the higher the network capability, the higher the cost and computing power; hence the Raspberry Pi is the most used embedded system because of its generous computing and network resources.

5.2 Test Bench

To evaluate the ECG reader criteria, we adopted the following test bench:

1. Cost evaluation: we specified a budget for the ECG reader considering the retail price of components. We also collected prices of other hardware devices, such as Arduino and Raspberry Pi, to compare total cost.
2. Power consumption test: we measured voltage and current consumption and then estimated the total power consumption and lifetime based on battery capacity.
3. Cloud server test: evaluation of the number of messages stored in the cloud server database and of the percentage of total received data per time considering network delay, the duty cycle. Let the elapsed time of a signal be Ts, the cloud server handshake and data transfer time Tdb, and the local client handshake and data transfer time Tcli. The duty cycle dC is given by

   dC = Ts / (Ts + Tcli + Tdb) × 100%   (1)

4. Real ECG reading test: gathering of ECG signals in different environments, analyzed by signal shape and power spectrum comparison.

6 Results

In this section we present the major results of our proposed architecture obtained by performing the test bench described in Subsect. 5.2. We show in Fig. 2 the implemented system architecture of our ECG reader.

Fig. 2 System architecture: ESP32 ECG reader, EKG-EMG shield and biosensors

6.1 Device Parameters and Budget

We show our proposed device budget in Table 1, considering costs in Brazilian Real (BRL) and United States Dollar (USD). As a result of using a free cloud server, we do not consider

Table 1 Device budget (in June 2020)

Component                Cost (BRL/USD¹)
ESP-WROOM-32 DevKit      R$79.90 / US$15.98
5000 mAh battery         R$29.90 / US$5.98
OLIMEX EKG-EMG shield    R$99.90 / US$19.98
Biosensors               R$39.90 / US$7.98
Charger module           R$9.90 / US$1.98
Other accessories        R$30.00 / US$6.00
Total cost               R$289.50 / US$57.90
¹ 1 USD = 5 BRL


the cost of the network cloud service. Furthermore, collecting many samples per day may require a paid cloud storage service.

6.2 Power Parameters

In order to measure power consumption, we set our device to collect data samples at a sample frequency of 250 Hz. We measured a 10-s signal and refreshed the measurement every 15 s, which means the ECG reader performed 4 measurements per minute. Regarding current consumption, the ECG reader presented a mean input current of 96 milliamperes (mA). Considering the average current and voltage, the estimated power consumption of our device is 0.39 W, 3-4 times lower than a Raspberry Pi (1.4 W) but 50% greater than an Arduino Uno (0.25 W). Notice that our device was measured with the sensor and OLED display attached to the ESP32. With an average power consumption of 0.39 W, a 5000 mAh battery takes about 50 h to discharge.
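A back-of-envelope check of these figures, using the reported 96 mA mean current and the 5000 mAh battery (the implied supply voltage is our inference, not a stated measurement):

```python
mean_current_a = 0.096  # reported mean input current (96 mA)
battery_ah = 5.0        # 5000 mAh battery capacity

# Runtime at constant mean current: capacity / current
runtime_h = battery_ah / mean_current_a       # about 52 h, consistent with ~50 h

# Supply voltage implied by P = V * I at the reported 0.39 W
implied_supply_v = 0.39 / mean_current_a      # about 4.1 V (single-cell Li-ion range)
```

The simple capacity/current model ignores converter losses and battery-voltage sag, so the reported 50 h figure is plausibly conservative.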

6.3 Cloud Server Test

Considering the cloud storage architecture presented in Subsect. 4.4, we provided data to our database varying the sample rate and measurement time in order to verify cloud storage performance and the duty cycle, as described in Subsect. 5.2. Table 2 exhibits stored parameters retrieved from the cloud database; we use the timestamp column to estimate the duty cycle. In Fig. 3 we show the measured duty cycle according to Eq. 1. Considering a sample frequency of 125 Hz, our device was able to read, exhibit and store in the cloud a 60-s sample in 67.5 s, resulting in an average duty cycle of 89.2%. Regarding the lowest sample rate that preserves quality, 250 Hz, our device achieved a mean duty cycle of 74.3%, which means one hour of reading can store up to 45 min in the cloud.

Fig. 3 Measured duty cycle
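Eq. (1) applied to the 125 Hz figures reported above (the split of the 7.5 s overhead between Tcli and Tdb is illustrative):

```python
def duty_cycle(ts, tcli, tdb):
    """Eq. (1): percentage of wall-clock time spent acquiring signal."""
    return ts / (ts + tcli + tdb) * 100.0

# 60 s of signal handled in 67.5 s of wall-clock time for this run
dc = duty_cycle(60.0, 4.0, 3.5)  # about 88.9%

# At the 250 Hz mean duty cycle of 74.3%, one hour of operation stores
# roughly 0.743 * 60 = 44.6 min of signal in the cloud.
```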

6.4 Comparison with Related Work and Real ECG Reading Test

We present a more detailed comparison of our work with the selected papers in Table 3. Except for the work of [11], none of the related solutions is designed for mobility; thus, they require a DC power supply. Some works did not present signal collection parameters, such as the sample frequency. We assert that our solution achieved a great cost-benefit ratio and displays the major features of a mobile IoT architecture. Also, in Fig. 4 we show ECG plots produced in the related research papers. In order to compare data samples with our work, we conducted a set of ECG signal readings varying the sample frequency and measurement time. In Fig. 5 we show samples collected at 125, 250, 500 Hz and 1 kHz, where a good quality signal can be noticed by inspection. We were also able to retrieve the signal from the web server.

Table 2 Data storage example

Timestamp            Free mem. (bytes)  Voltage (V)  Sample freq. (Hz)  Time (s)  Raw data (base64)
2020-04-17_14:31:45  62960              3.98         250                10        W1tfY2R...
2020-04-17_14:33:03  59632              3.97         250                10        hX2BhYW...
2020-04-17_14:34:40  43808              4.00         250                20        FhYmJgX...
2020-04-17_14:35:11  49568              3.99         250                20        V1hY2Nj...

Table 3 Comparison of features in selected works

Work      Network capability  Max. sample freq.  Max. signal length  Power cons.  Cost (USD)
[7]       Wi-Fi / HTTPS       N/A                N/A                 3.0 W        170.00
[9]       Wi-Fi / HTTPS       N/A                N/A                 3.25 W       71.00
[11]      Wi-Fi / HTTPS       250 Hz             6 s                 0.39 W       40.00
[16]      Wi-Fi / HTTPS       N/A                N/A                 3.25 W       80.00
Our work  Wi-Fi / HTTPS       1 kHz              30 s                0.39 W       57.90


Fig. 4 ECG sample plots from selected research papers: (a) [7]; (b) [9]; (c) [11]; (d) [16]

Fig. 5 Examples of collected signal from the proposed ECG reader: (a) sample frequency 125 Hz; (b) 250 Hz; (c) 500 Hz; (d) 1 kHz

7 Conclusion

Here we proposed an IoT ECG reader architecture using the ESP32 embedded device running MicroPython. The major strengths of the proposed architecture are its low power consumption combined with great network capabilities and low cost, as well as the ability to collect data at up to a 1 kHz sample frequency. The proposed test bench can also lead to a better comparison between different IoT architectures for collecting bio-signals. As future work, we propose the design of a complete monitoring system using the cloud-based approach and the use of robust compression algorithms to provide a greater duty cycle.

Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior-Brasil (CAPES) - Finance Code 001. Danielo G. Gomes greatly appreciates the financial support of CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico-Brasil), grants 440092/2020-5 and 310317/2019-3. João P. V. Madeiro acknowledges the support of the Brazilian Research Council, CNPq (Grant n. 426002/2016-4).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Biru CA, Roberto M, Domenico R (2014) Towards a definition of the Internet of Things (IoT). PhD thesis
2. Bahar F, Farshad F, Victor C, Mustafa B, Constant N, Mankodiya K (2018) Towards fog-driven IoT eHealth: promises and challenges of IoT in medicine and healthcare. Future Gener Comput Syst 78:659–676
3. Clifford GD, Azuaje F, McSharry P (2006) Advanced methods and tools for ECG data analysis. Artech House, Inc., USA
4. Bansal M, Gandhi B (2018) IoT based development boards for smart healthcare applications. In: 2018 4th international conference on computing communication and automation (ICCCA), pp 1–7
5. Nguyen Gia T, Jiang M, Sarker VK et al (2017) Low-cost fog-assisted health-care IoT system with energy-efficient sensor nodes. In: 2017 13th international wireless communications and mobile computing conference (IWCMC), pp 1765–1770
6. Al-Busaidi A, Khriji L (2013) Digitally filtered ECG signal using low-cost microcontroller. In: 2013 international conference on control, decision and information technologies (CoDIT), pp 258–263
7. Yakut O, Solak S, Bolat E (2015) Implementation of a web-based wireless ECG measuring and recording system. In: 17th international conference on medical physics and medical sciences (ICMPMS)
8. Satija U, Ramkumar B, Sabarimalai MM (2017) Real-time signal quality-aware ECG telemetry system for IoT-based health care monitoring. IEEE Internet Things J 4:815–823
9. Singh P, Jasuja A (2017) IoT based low-cost distant patient ECG monitoring system. In: 2017 international conference on computing, communication and automation (ICCCA), pp 1330–1334
10. Liu C, Zhang X, Zhao L et al (2018) Signal quality assessment and lightweight QRS detection for wearable ECG SmartVest system. IEEE Internet Things J:1
11. Cen P, DeLong W, Amatanon V, Iamsamang J, Naiyanetr P (2019) Intelligence ECG monitoring system: wireless platform and arrhythmia classification using residual neural network. In: 2019 12th biomedical engineering international conference (BMEiCON), pp 1–5
12. Espressif Systems (2020) ESP32-S2-WROOM & ESP32-S2-WROOM-I datasheet, technical report
13. MicroPython (2020) MicroPython project. https://micropython.org/
14. OLIMEX (2014) SHIELD-EKG-EMG bio-feedback shield user's manual, technical report
15. Kwon O, Jeong J, Kim Hyung B et al (2018) Electrocardiogram sampling frequency range acceptable for heart rate variability analysis. Healthc Inform Res 24:198–206
16. Rahman A, Rahman T, Ghani NH, Hossain S, Uddin J (2019) IoT based patient monitoring system using ECG sensor. In: 2019 international conference on robotics, electrical and signal processing techniques (ICREST), pp 378–382

Development of a Hydraulic Model of the Microcontrolled Human Circulatory System

Andrew Guimarães Silva, B. S. Santos, M. N. Oliveira, L. J. Oliveira, D. G. Goroso, J. Nagai, and R. R. Silva

Abstract

Cardiovascular diseases are the leading cause of death worldwide, accounting for more than 15.2 million deaths in 2016 alone. The need for intervention is therefore clear if these worrying numbers are to be reversed, demanding a collective effort in research and in the development of treatments for these cardiopathies. The purpose of this work is to develop a microcontrolled didactic bench that reproduces the behavior of the human circulatory system (HCS). The prototype is able to elucidate the concepts involved in the flow and blood pressure dynamics of the systemic circulation. The basic elements of the bench are a reservoir, a compliance chamber, a hydraulic piston pump driven by a direct-current motor, and a gate valve. All actuators and sensors are interconnected by a control system that can be accessed from a computer. The hydraulic circuit is based on the Windkessel model, which explains the transformation of the pulsatile flow of the heart into a virtually constant flow. The bench is capable of simulating scenarios of this phenomenon with parametric pressure variations between 40 and 210 mmHg and heart rates from 70 to 100 bpm. After development, the bench was subjected to tests with fixed ejection volume and motor rotation for hypotension, normotension and hypertension, according to the ISO 5840-3:2013 standard. The experimental data obtained from the bench were compared with the systolic and diastolic pressure ranges reported in the literature.

Keywords

HCS mathematical model · HCS hydraulic model · Windkessel effect · Didactic bench · Ventricular assist devices

A. G. Silva, B. S. Santos, M. N. Oliveira, L. J. Oliveira, D. G. Goroso, J. Nagai, R. R. Silva
Technological Research Center, University of Mogi das Cruzes, Av. Dr. Cândido X. de Almeida e Souza, Mogi das Cruzes, Brazil

R. R. Silva
Center for Biomedical Engineering, State University of Campinas, Campinas, Brazil



1 Introduction

Obtaining a better understanding of how the HCS works is of vital importance for the advancement of research aimed at developing treatments for the many existing heart diseases. To assist these studies, computer simulations, also known as "in silico" experiments, capable of mimicking physiological systems are among the most promising tools, allowing, for example, the behavior of the cardiovascular system to be simulated by changing several parameters [1]. However, the use of computational tools does not eliminate "in vitro" or "in vivo" experiments. Thus, physical models composed of hydraulic circuits capable of representing the HCS are essential for the development and testing of Ventricular Assist Devices (VADs). These hydraulic models can also be used in the manufacture of artificial valves, hearts and lungs, allowing several tests before the "in vivo" experiments. Another important function of these systems is their use as teaching tools, because they allow students to visualize the behavior of pressure and blood flow in real time, helping them learn cardiac physiology [2].

In this context, the objective of the present work is to implement the concept of the Windkessel model in a didactic bench composed of a hydraulic circuit with computer-controlled adjustments and measurements. The work thus continues a line of research on HCS models started with the first version of the HCSSim simulator [3], an HCS didactic simulator based on Windkessel models, and its second version, HCSSim Web [4], a simulator based on a more robust model capable of representing

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_148


the systemic circulation, the pulmonary circulation and certain heart diseases.

2 Materials and Methods

2.1 Reference Model

The system that serves as the basis for the implementation of the didactic bench (Fig. 1) is based on the two-element Windkessel model [5]. It consists of pipes, a reservoir, a piston pump, a compliance chamber and a restriction valve.
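The two-element Windkessel relation that the bench reproduces can be sketched numerically: the snippet below integrates C·dP/dt = Q_in(t) − P/R with a pulsatile half-sine inflow. All parameter values (R, C, ejection timing) are illustrative, textbook-order assumptions, not the bench's measured characteristics.

```python
import math

# Numerical sketch of the two-element Windkessel model: C*dP/dt = Q_in(t) - P/R.
# The values below are illustrative assumptions, NOT the bench's measured data.
R = 1.0       # peripheral resistance [mmHg*s/mL]
C = 1.5       # arterial compliance [mL/mmHg]
HR = 72.0     # heart rate [bpm], within the bench's 70-100 bpm range
SV = 70.0     # stroke volume [mL], the bench's approximate ejection volume
T = 60.0 / HR                       # cardiac period [s]
Ts = 0.3                            # systolic ejection time [s] (assumed)
Qmax = SV * math.pi / (2.0 * Ts)    # half-sine peak so each beat ejects SV

def q_in(t):
    """Pulsatile inflow: half-sine during systole, zero during diastole."""
    phase = t % T
    return Qmax * math.sin(math.pi * phase / Ts) if phase < Ts else 0.0

dt = 1e-4
P = 80.0                            # initial arterial pressure [mmHg]
history = []
for step in range(int(20 * T / dt)):    # 20 beats, enough to reach steady state
    t = step * dt
    P += dt * (q_in(t) - P / R) / C     # forward-Euler step
    if t > 15 * T:                      # keep only the last beats
        history.append(P)

systolic, diastolic = max(history), min(history)
print(f"systolic = {systolic:.0f} mmHg, diastolic = {diastolic:.0f} mmHg")
```

Raising R or lowering C raises and widens the simulated pressure waveform, mirroring the effect of closing the gate valve or raising the fluid level in the compliance chamber.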

Fig. 2 a Reservoir. b Compliance chamber

2.2 Reservoir and Compliance Chamber

For the reservoir (Fig. 2a) and the compliance chamber (Fig. 2b), polymethyl methacrylate (acrylic) was chosen: it has high transmittance (90–92%) and good impact resistance, is 50% lighter than glass, is totally waterproof, and withstands an internal working pressure of up to 3 kgf/cm² (2207 mmHg) at 3 mm thickness when produced by the centrifugation process (seamless). The bases of the reservoir and of the compliance chamber, and the chamber cover, were produced in polyamide (nylon), which is low-cost and easy to machine.

2.3 Restriction Valve

The restriction valve was chosen considering cost, the possibility of adjusting the head loss for different opening angles, the ability to increase or decrease the system pressure, and the invariability of the system flow. All things considered, a gate valve driven by a stepper motor was chosen, as shown in Fig. 3.

Fig. 3 Gate valve drive assembly

2.4 Piston Pump

The pump was chosen considering the constant ejection volume in each cycle, flow and pressure curves similar to those found inside the aorta [6], and the ability to simulate the ejection pattern of the left ventricle based on the Windkessel model. To meet these requirements, a reciprocating piston pump driven by a crank-rod system was chosen, as shown in Fig. 4.

2.5 Sensing and Actuation System

Fig. 1 Base model system

To read the two pressure signals in the hydraulic circuit, the MPX5050DP pressure transducer was used, with a measurement range of 0 to 375 mmHg. To measure the water level of the compliance chamber, the HC-SR04 ultrasonic sensor was adopted, with a measurement range of 2 to 400 cm and ±3 mm precision. To obtain the rotation speed of the drive disk, the ECF422-1024 series 648 encoder was used, with a resolution of 1024 ppr. The fluid and air temperatures in the chamber were measured with an NTC 10 kΩ 3950 thermistor, which has a measurement range of −20 to 105 °C. The actuators in the system are: the S0520AV-D bidirectional solenoid valve, used to let air in and out of the compliance chamber; the Skoocom® SC3704PM air pump, responsible for injecting air into the chamber and thus controlling the height of the fluid level; a NEMA 17 stepper motor, model 17HD34008-22B, which controls the opening angle of the gate valve; and an AK555/306PL12S6500C 6500 RPM DC motor coupled to a planetary gearbox with a 64:1 reduction ratio in 3 stages.

Fig. 4 Piston pump

2.6 Control System

The Arduino MEGA module, with the Atmel ATmega2560 processor, was chosen as the control board. The microcontroller communicates with software developed on the MATLAB® platform, sending data over a serial connection through a USB cable that links the control board to the computer.

To control the motor speed and keep it fixed at the reference value, a PID (proportional-integral-derivative) control system was implemented with the Arduino PID Library (version 1.2.1). The tuning parameters, Kp = 1, Ki = 2 and Kd = 0, were obtained experimentally. The motor control works in closed loop: the actual motor rotation is measured by the encoder and compared with the setpoint defined by the user in the software.

The valve opening uses the same PID control scheme as the motor, with tuning parameters Kp = 1, Ki = 50 and Kd = 0, also obtained experimentally. This control also works in closed loop, measuring the angular displacement of the valve with a multiturn potentiometer coupled to the valve drive shaft. The measured value is compared with the user-defined setpoint, generating a control effort that drives the stepper motor acting on the valve. All other actuators use simple on/off control, as needed.

2.7 Bench Test Parameter Control

After the construction of the didactic bench, experiments were performed with constant ejection volume (approximately 70 ml) and constant motor rotation (72 RPM). The compliance of the chamber (by varying the fluid height) and the opening of the gate valve were adjusted, seeking to approximate the results to the theoretical values in hypotensive, normotensive and hypertensive conditions. In this way, the systolic and diastolic pressure ranges measured at the point between the compliance chamber and the gate valve were obtained. The reference values follow the ISO 5840-3:2013 [7] standard, shown in Table 1.
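The closed-loop motor speed regulation described above can be sketched in a few lines. The PID gains are the ones reported in the text (Kp = 1, Ki = 2, Kd = 0), but the first-order motor model and its time constant are assumptions made only to illustrate the loop, not identified bench parameters.

```python
# Minimal sketch of the closed-loop motor speed control. Gains are the ones
# reported in the text; the motor model itself is an illustrative assumption.
SETPOINT = 72.0            # target rotation [RPM], as used in the experiments
Kp, Ki, Kd = 1.0, 2.0, 0.0
tau = 0.5                  # assumed motor time constant [s]
dt = 0.01                  # assumed controller update period [s]
omega = 0.0                # rotation fed back by the encoder [RPM]
integral, prev_err = 0.0, 0.0
for _ in range(int(20.0 / dt)):                 # 20 s of simulated time
    err = SETPOINT - omega                      # encoder reading vs. setpoint
    integral += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    u = Kp * err + Ki * integral + Kd * deriv   # control effort
    omega += dt * (u - omega) / tau             # first-order motor response
print(f"steady-state rotation = {omega:.1f} RPM")
```

The integral term is what drives the steady-state error to zero, which is why the bench can hold a fixed rotation despite load changes in the hydraulic circuit.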

3 Results

After assembling the hydraulic circuit and integrating all sensors and actuators, the experimental bench was obtained, as shown in Fig. 5. The control interface created for user interaction can be seen in Fig. 6. In this interface the user can select the graphical display of the following profiles: pressure, flow rate, valve opening, and rotation of the pump drive system. The operation of the didactic bench can be watched at: https://bit.ly/30lj96p. By adjusting the bench parameters according to the protocol, the values presented in Table 2 were obtained.

Table 1 Reference pressures for experiments (mmHg)

Condition              Systolic pressure   Diastolic pressure
Hypotensive            60                  40
Normotensive           120                 80
Mildly hypertensive    140                 90


Fig. 5 Built didactic bench

With the gate valve opening and chamber fluid height adjusted according to Table 2, blood pressure profiles were obtained for the hypotensive (blue), normotensive (red) and mildly hypertensive (black) conditions; the signals are shown in Fig. 7. Table 3 presents the quantitative systolic and diastolic pressure data for the signals in Fig. 7.

4 Discussion

After collecting data from the experiments performed on the didactic bench, the experimental results were compared. In all tests carried out, the didactic bench produced values very close to the reference values in the ISO 5840-3:2013 standard [7]. These results demonstrate the prototype's capacity to reproduce the characteristics of the systemic circulation, representing the elements of compliance and peripheral resistance. Despite the satisfactory results, it should be emphasized that the bench control is not based only on the parameterization of peripheral resistance and compliance, as would be expected for a system representing the Windkessel model [5]. To obtain a system completely controlled by physiological variables, it is necessary to perform a direct conversion between the hydraulic parameters of the system and the physiological parameters of the systemic circulation, with heart rate directly related to motor speed and compliance related to the hydraulic characteristics of the chamber. Among the three characteristics, the only parameter without a direct relation is the resistance generated by the

Fig. 6 Interface overview

Table 2 Parameters obtained in the experiment

Condition              Valve opening angle [°]   Fluid height in the chamber [cm]
Hypotensive            1390                      27.5
Normotensive           1460                      31.1
Mildly hypertensive    1470                      32.4


Fig. 7 Pressure signals obtained

Table 3 Experimental values obtained (mmHg)

Condition              Systolic pressure   Diastolic pressure
Hypotensive            61                  39
Normotensive           123                 80
Mildly hypertensive    139                 86

gate valve, due to constructive aspects of the bench such as the length and cross-sectional diameter of the piping, and due to the fact that the flow regime in the didactic bench is turbulent. This prevents a direct conversion to physiological units by Poiseuille's law [8], which assumes the laminar regime as an ideal condition. However, the results obtained with the turbulent regime proved satisfactory, and a model for converting the real system into peripheral resistance units may be developed in future work. The developed prototype proved to be quite robust, showing no leakage during operation, a very important aspect both for the reliability of the data presented and for safe, functional use. The acrylic reservoir and chamber, together with the transparent PVC piping, make it possible to visualize the fluid dynamics inside the hydraulic circuit. This reinforces the didactic focus of the bench, since important aspects of the Windkessel model, such as the direction of fluid flow and the change in liquid level in the compliance chamber, can be observed in real time during the simulations.
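The agreement discussed above can be quantified directly from the values reported in Tables 1 and 3:

```python
# Direct comparison of the bench measurements (Table 3) with the
# ISO 5840-3:2013 reference values (Table 1), all in mmHg.
reference = {   # condition: (systolic, diastolic)
    "hypotensive": (60, 40),
    "normotensive": (120, 80),
    "mildly hypertensive": (140, 90),
}
measured = {
    "hypotensive": (61, 39),
    "normotensive": (123, 80),
    "mildly hypertensive": (139, 86),
}
deviations = {
    cond: tuple(abs(m - r) for m, r in zip(measured[cond], reference[cond]))
    for cond in reference
}
worst = max(max(d) for d in deviations.values())
print(deviations)
print(f"worst-case deviation: {worst} mmHg")   # diastolic, mildly hypertensive
```

The largest deviation across all six readings is 4 mmHg, which supports the qualitative claim that the bench tracks the reference ranges closely.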

5 Conclusions

This study presented a new systemic circulation simulator based on the Windkessel model, combining a hydraulic circuit, a microcontrolled system and a graphical interface that allows the user to change system parameters and monitor the signals obtained in real time. The microcontrolled system of the bench proved to be effective in controlling all

sensor elements and actuators, and was able to reproduce various blood pressure conditions simply by adjusting hydraulic characteristics through the developed software interface. The integration of the automation set with the hydraulic circuit and the base structure makes the system a versatile teaching and training tool that can be transported and operated using only a computer. The system can still be improved by changing the hydraulic circuit to seek a laminar flow regime at the gate valve. This would allow all hydraulic elements to be converted into physiological characteristics, making the system fully automated from the point of view of simulating the characteristics of the systemic circulation. Another aspect that can be improved in future work is the insertion of pathological conditions into the system, such as changes in heart valves, which would make the didactic function of the bench more comprehensive.

Acknowledgements The authors are grateful for the financial support of the University of Mogi das Cruzes (OMEC/UMC) and the Foundation for Research Support of the State of São Paulo (FAPESP), Grants #2013/20220-5 and #2016/18422-7.

Conflict of Interest The authors state that there are no undisclosed financial or personal relationships that might result in a conflict of interest with respect to this study.

References

1. Naik KB, Bhathawala DPH (2014) Mathematical modelling and simulation of human systemic arterial system 4:1–7
2. Gregory SD. Simulation and development of a mock circulation loop with variable compliance, p 174
3. Silva AG, Goroso DG, Silva RR (2019) HCSSim: a simulator of elastic arterial vessels using Windkessel models. In: González Díaz CA, Chapa González C, Laciar Leber E, Vélez HA, Puente NP, Flores D-L et al (eds) VIII Latin American conference on biomedical engineering and XLII national conference on biomedical engineering. Springer International Publishing, Cham, pp 709–717
4. Silva AG, Goroso DG (2019) HCSSim: Um Simulador Do Sistema Circulatório Humano Utilizando Circuitos Elétricos Equivalentes 4
5. Westerhof N, Lankhaar J-W, Westerhof BE (2009) The arterial Windkessel. Med Biol Eng Comput 47(2):131–141
6. Oliveira BRF (2011) Circuito hidráulico mimetizador de ejeção do ventrículo esquerdo e de pressão no interior da aorta. Universidade Federal do Rio de Janeiro
7. ISO 5840-3:2013(en), Cardiovascular implants—Cardiac valve prostheses—Part 3: Heart valve substitutes implanted by transcatheter techniques. Available at https://www.iso.org/obp/ui/#iso:std:iso:5840:-3:ed1:v1:en
8. Okuno E, Caldas IL, Chow C (1986) Física para ciências biológicas e biomédicas. Harbra, 490 p

A Design Strategy to Control an Electrosurgery Unit Output

Paulo Henrique Duarte Camilo and I. A. Cestari

Abstract

Ideally, the power output of electrosurgery units should be kept constant during use despite the wide variation in tissue resistance and in the anatomical features of the patients being operated on. The aim of this paper is to present a method for designing the control stage of a high-frequency electrosurgery unit, considering the adequate choice of passive components to reduce the variation of the power source voltage. The design focused on the automatic regulation of the average power output. The results demonstrate that a simulation approach makes it possible to predict the response of the electrosurgery unit output and thereby better regulate it.

Keywords

Electrosurgery unit · Medical equipment · Medical instrumentation · Control electronics

1 Introduction

Electrosurgery units (ESU) are needed to perform cutting, coagulation, devitalization and thermofusion of tissues according to the specific needs of a surgery. For that end ESU use high frequency (HF) alternating currents combined with variations in the exposure time, the amount of energy

P. H. D. Camilo
Programa de Pós-Graduação em Engenharia Biomédica, Departamento de Engenharia de Telecomunicações e Controle, Escola Politécnica da USP, Professor Luciano Gualberto, Travessa 3, Número 158, Prédio de Engenharia Elétrica, São Paulo, 05508-010, Brazil
e-mail: [email protected]

I. A. Cestari
Divisão de Bioengenharia, Instituto do Coração (InCor) HC-FMUSP, São Paulo, Brazil

delivered, the HF waveform, and the size and shape of the electrode in contact with or near the target tissue [1]. In electrosurgery techniques, the patient's body and the tissues placed between two electrodes can be considered a conduction path for the circulating current, which causes a thermal effect. This thermal effect is not produced by passive transfer of heat from a hot active electrode to the target tissue. It is, instead, an endothermic process by which the current conducted through the body produces a temperature rise inside the cells, most evident in the electrode vicinity, where the current density is higher [1]. The use of high alternating frequencies in ESU, usually above 300 kHz, causes no neuromuscular stimulation, since muscle and neural cells cannot respond to excitation above 100 kHz [1, 2]. In ESU, the mean output power should be kept constant independently of the type of tissue, for safety reasons: large power fluctuations can burn the target tissue and consequently delay post-surgery recovery. ESU performance is also affected by these fluctuations [2], and non-homogeneous tissue resistance can affect cut quality or hemostasis, causing dragging or bleeding. The thermal tissue effect depends on the tissue temperature resulting from the application of the energy, as shown in Table 1 [3].

1.1 ESU Output Circuit

According to international standard safety guidelines [4], an electrosurgery unit must be electrically insulated from the AC supply. The best way to meet this requirement in power applications is the use of magnetically coupled components such as transformers [5]. Considering power efficiency, the design of high-frequency surgical equipment can use concepts similar to those used in switched power supplies [6]. The topology most used commercially for this application is the class C

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_149


Table 1 Tissue thermal effect by temperature

Tissue effect                      Approximate temperature
Hyperthermia                       42–50 °C
Devitalization (cellular death)    60–90 °C
Desiccation/coagulation            100–150 °C
Carbonization/necrosis             150–250 °C
Vaporization                       300 °C and above

Fig. 1 ESU output schematic

amplifier in a resonant configuration [3], as illustrated in Fig. 1. In the schematic, "R" represents the patient, whose electrical characteristics are similar to a resistance in the frequency range in which the ESU works. In monopolar applications the target tissue impedance can range from 100 to 3000 Ω [7]. The alternating current is generated when the MOSFET transistor "Q" is driven by the input signal "v2(t)" in switched-mode operation. "v1(t)" is a direct-current power supply, with the power output dependent on the "v1(t)" voltage value, which is thus an input control variable. Considering initial conditions of zero voltage across the capacitor and zero current through the primary inductor, there are two situations: (a) when "Q" is in the conduction state, the source current flows through the primary inductor and increases in a ramp proportional to the primary inductance "L1"; and (b) when "Q" is turned off, the current flows through the capacitor and decreases in a damped sinusoid pattern. The waveforms depend on the passive component values: the inductance, capacitance, coupling factor, winding ratio and load resistance [8]. Given the influence of component choice on the operation of the unit, it is important to define a method to identify the most suitable components.
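The two operating phases just described (a linear current ramp while "Q" conducts, then a damped ring of the parallel L1-C-load network after turn-off) can be reproduced with a small numerical sketch. L1 and C match the best-fit values reported in Sect. 3 of this paper; the reflected load, supply voltage and conduction time are assumptions chosen only to illustrate the waveform shapes.

```python
import math

# Sketch of the two phases of the class C output stage.
L1 = 5e-6      # primary inductance [H]
Cp = 10e-9     # capacitance across the transistor [F]
Rp = 31.0      # load resistance reflected to the primary [ohm] (assumed)
V1 = 80.0      # supply voltage [V] (assumed, within the 70-85 V range found)
Ton = 1e-6     # conduction time of "Q" [s] (assumed)

# Phase (a): "Q" conducting, current ramps linearly through L1.
I0 = V1 * Ton / L1                     # inductor current at turn-off [A]

# Phase (b): "Q" off, the parallel R-L-C network rings down (damped sinusoid).
dt = 1e-9
iL, v = I0, 0.0
trace = []
for _ in range(10000):                 # 10 us of ring-down
    dv = (iL - v / Rp) / Cp            # C*dv/dt = iL - v/R
    diL = -v / L1                      # L*diL/dt = -v
    v += dv * dt
    iL += diL * dt
    trace.append(v)

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L1 * Cp))
print(f"resonant frequency = {f0 / 1e3:.0f} kHz")
```

With these values the ring frequency is about 712 kHz, comfortably above the 300 kHz constraint stated in Sect. 2.2, and the oscillation decays within a few microseconds of turn-off.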

2 Materials and Methods

2.1 PSPICE Model

The method proposed in this work is based on studying the circuit behavior by computing the theoretical class C amplifier circuit with a PSPICE model. In this amplifier the active component does not need to conduct during the entire input signal cycle, and the energy-storage components constitute the output signal. A time-domain transient analysis was performed over a 3 ms interval using ideal component models [9]. Programmed switches were used to simulate load variations in 5 steps, ranging from 100 to 3000 Ω, with each switch opening sequentially every 0.5 ms. Using a purely resistive load without reactive parts, the power was determined by the product of the load voltage and current, and the mean power was obtained with a first-order low-pass filter. This output variable, the averaged power, was used as the feedback signal to simulate closed-loop control. Only the averaged output is considered in this application; a slow proportional-integral controller in a negative-feedback loop, compared with a reference value, was used to reach the expected outcome [8]. A low-pass filter was also applied to the reference to promote a soft start. With the model established, it was possible to apply the parameter evaluation method. Figure 2 displays the PSPICE circuit. In the schematic, the association of R1 to R6 represents the load resistance, L1 and L2 represent the transformer, C1 is the capacitor in parallel with transistor Q1, and V2 is the input signal, a square wave with fixed frequency and duty cycle. V1 is a dependent voltage source; its value is adjusted by the PI controller comparing the output power with the 300 W reference.
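The closed loop described above can be mimicked with a toy discrete-time sketch: a PI controller adjusts the source voltage so that a DC surrogate of the average power, P = V²/R, tracks the 300 W reference while the load steps through the 100 to 3000 Ω range every 0.5 ms. The intermediate load values and the controller gains are illustrative assumptions; this stands in for the PSPICE model rather than reproducing it.

```python
# Toy discrete-time surrogate of the simulated closed loop: PI control of
# P = V^2/R against a 300 W reference under a stepped load sweep.
TARGET_W = 300.0
loads = [100.0, 300.0, 700.0, 1500.0, 3000.0]   # assumed 5-step sweep [ohm]
dt = 1e-6                  # 1 us controller update (assumed)
Kp, Ki = 0.05, 2e4         # assumed PI gains (integral-dominant, "slow" PI)
integ, V = 0.0, 0.0
results = []
for R in loads:
    for _ in range(500):               # 0.5 ms per load step
        P = V * V / R                  # measured average power (surrogate)
        err = TARGET_W - P
        integ += err * dt
        V = max(0.0, Kp * err + Ki * integ)
    results.append(P)                  # power at the end of each load step
print([round(p, 1) for p in results])
```

Even in this crude surrogate, the integral action recovers the 300 W target within each 0.5 ms load step, which is the behavior the PSPICE closed loop is meant to exhibit.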

2.2 Sweep Parameters Method

The method consists of simulating the circuit with different values for each circuit component and analyzing the responses. Project constraints must be taken into account to narrow the scan interval, such as the output frequency, which in this electrosurgery application must be above 300 kHz. The parameters swept were the capacitance C, the inductance L1, and the transformer turns ratio N. For better comparison, the PSPICE SWEEP PARAMETER analysis option was used [9].
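The sweep itself is an exhaustive enumeration; a skeleton of the procedure, with 4 candidate values per component (4³ = 64 combinations, as in the paper), might look like the sketch below. The candidate lists and the scoring function are hypothetical placeholders: in practice each combination would be scored by a PSPICE run measuring the power-source voltage spread over the load sweep.

```python
import itertools

# Skeleton of the sweep-parameters method: enumerate all combinations and
# keep the one with the smallest power-source voltage variation.
L_values = [2e-6, 5e-6, 10e-6, 20e-6]      # assumed inductance candidates [H]
C_values = [4.7e-9, 10e-9, 22e-9, 47e-9]   # assumed capacitance candidates [F]
N_values = [2, 3, 4, 6]                    # assumed turns-ratio candidates

def voltage_variation(L, C, N):
    """Hypothetical stand-in for a circuit-simulator call. It is deliberately
    minimized at the best-fit values the paper reports (5 uH, 10 nF, N = 4),
    purely so the skeleton runs end to end."""
    return abs(L - 5e-6) * 1e6 + abs(C - 10e-9) * 1e9 + abs(N - 4)

combos = list(itertools.product(L_values, C_values, N_values))
best = min(combos, key=lambda p: voltage_variation(*p))
print(f"evaluated {len(combos)} combinations; best (L1, C, N) = {best}")
```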

Fig. 2 PSPICE circuit for simulation

With this simulation method, the expected result is to identify the component parameters that minimize the voltage variation, reducing the control effort needed to keep the average power output constant.

3 Results and Discussion

Figures 3, 4, 5 and 6 show time-domain simulation results, with the selected curves highlighted. Figures 4, 5 and 6 present the component values with the smallest variation of the power source voltage as a function of the load sweep.

Fig. 3 Voltage across Q1 for different values of inductance in time

The results show that if these values are chosen for the high-frequency surgery generator design, less controller effort is necessary to correct the power output properly, since only small voltage steps are required. As seen in Fig. 3, an advantage of selecting the proper parameters is a low voltage level stored in the capacitor at the moment a new conduction state starts. The higher the stored energy, the higher the current spike through the transistor, so a low voltage level is desired; this property is known as zero voltage switching (ZVS) [5, 6]. Each figure presents 4 values for each component, representing 64 variations in total. In our simulation the best fit was obtained with the components:

• L1 = 5 µH;
• C = 10 nF;
• N = 4.

Fig. 4 Power source voltage variations as a function of the load resistance in time for different inductance values


Fig. 5 Power source voltage variations as a function of the load resistance in time for different capacitance values

Fig. 6 Power source voltage variations as a function of the load resistance in time for different turns ratio values

4 Conclusions

The method proved to be a good strategy for designing an electrosurgery unit output stage. Using the described method, it was possible to identify the best-fit configuration of components, with smaller variation than the other values tested. The power voltage level ranged from 70 to 85 V when the initial transient states are not considered. The same result is observed for different power set points, allowing a single set of components to be selected for a wide range of applications. ESUs with controlled output are safer and

more efficient. Future work will consider applying system identification techniques to improve the circuit analysis and comparing the simulation results with experimental tests.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Feldman L, Fuchshuber P, Jones DB (2012) The SAGES manual on the fundamental use of surgical energy (FUSE), 1st edn. Springer, New York
2. Tokar JL, Barth BA, Banerjee S et al (2013) Electrosurgical generators—technology status evaluation report. Gastrointest Endosc 78:197–208. https://doi.org/10.1016/j.gie.2013.04.164
3. Carr-Locke DL, Day J (2011) Principles of electrosurgery, pp 125–134
4. International Electrotechnical Commission (2017) IEC 60601-2-2:2017—Medical electrical equipment—Part 2-2: particular requirements for the basic safety and essential performance of high frequency surgical equipment and high frequency surgical accessories
5. Pressman AI, Billings K, Morey T (2009) Switching power supply design, 3rd edn. McGraw-Hill
6. Maniktala S (2012) Switching power supplies A–Z
7. Bronzino JD (2006) The biomedical engineering handbook: medical devices and systems, 3rd edn. CRC Press, Hartford, Connecticut, USA
8. Salam MA, Rahman QM (2018) Fundamentals of electrical circuit analysis. Springer, Singapore
9. Sandler SM (2006) Switch-mode power supply simulation: designing with SPICE 3, 1st edn. McGraw-Hill, New York

Near Field Radar System Modeling for Microwave Imaging and Breast Cancer Detection Applications

F. A. Brito-Filho, D. Carvalho and W. A. M. V. Noije

Abstract

This paper presents a system modeling for radar application in breast cancer detection. Near-field considerations are included in a high-level model to perform radio frequency system simulations. The proposed system is evaluated in order to provide the propagation losses due to attenuation in different breast tissues for a given set of transceiver parameters. Simulation results are presented to validate the system. A dynamic range higher than 120 dB is indicated for the worst-case scenario, considering the breast with the innermost tumor at a 10 GHz frequency.

Keywords

Radar • System modeling • Microwave imaging • Breast cancer • Detection

1 Introduction

Breast cancer is the most common type of cancer among women around the world, regardless of country, and leads the number of cancer deaths [1]. Early detection and diagnosis are the best way to prevent mortality: when a tumor is detected in early stages, survival rates of 95% have been reported [2]. Nowadays, mammography is the most common exam used to diagnose breast cancer [3]. Since mammography uses X-ray technology, it is not suitable for continuously monitoring the patient's evolution because of exposure to ionizing radiation, and it also suffers from high levels of false negatives [4]. Other imaging methods used to help in the diagnosis of breast

F. A. Brito-Filho · D. Carvalho · W. A. M. V. Noije
Department of Electronic Systems Engineering of the Polytechnic School, Sao Paulo University (USP), Sao Paulo, Brazil

F. A. Brito-Filho
Engineering Department, Federal University of the Semiarid Region (UFERSA), RN-233, Km 01, Caraubas, Brazil
e-mail: [email protected]

cancer include ultrasound, which lacks precision and requires skilled operators, and magnetic resonance imaging (MRI), which is expensive and therefore has limited accessibility for the population [5]. Microwave systems have been reported for imaging and detection of breast cancer [6]. Some of them use a tomographic approach that aims to reconstruct the scattered dielectric profile of the breast [7]. Since that approach demands high computational effort to solve differential Maxwell's equations to recover the dielectric profile, other proposed Microwave Imaging (MWI) systems use high-resolution radar methods to image the breast through time-of-flight calculations [8]. Regarding radar imaging, different algorithms such as beamforming [9] or confocal [10] are explored to deal with the high-resolution image reconstruction problem. For the detection of the breast tumor, combined solutions using imaging and/or machine learning algorithms have also been proposed [11]. Some implementations using radar topologies adapt commercial Vector Network Analyzers (VNA) [12–14], but that solution is not suitable for mass adoption since it is bulky and expensive. More affordable custom solutions include the Impulse Radio Ultra Wideband (IR-UWB) [8] and Stepped Frequency Continuous Wave (SFCW) architectures [15, 16]. Each has advantages and drawbacks [17]. To evaluate overall radar performance, good system-level modeling is needed, including the transmitter, the receiver and the medium between them (the breast model) [18]. This work performs an overall system modeling using RF simulations to provide a better understanding of how important parameters correlate and how they can affect the application of microwave breast cancer detection and imaging. A high-level hierarchical system model is proposed and implemented using the Advanced Design System software.
A Continuous Wave radar topology is used as the core transceiver to evaluate the model. Section 2 presents a prior-art review of microwave imaging and breast cancer detection. Section 3 describes the methodology used in this work, and Sect. 4 presents the proposed system. Results and discussion are provided in Sect. 5. Finally, Sect. 6 draws some conclusions.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_150

Fig. 1 Frequency-domain versus time-domain radar approaches, typical waveforms: a SFCW, b pulsed

2 Microwave Imaging and Breast Cancer Detection Review

This section presents some considerations on microwave imaging for breast cancer detection, including commonly used methods and topologies in the state of the art, frequency-spectrum and power considerations, and a brief review of previously reported system-modeling work.

2.1 Radar Topologies

Common radar topologies used to implement MWI and breast cancer detection systems can be divided into two approaches: time-domain and frequency-domain. The approaches differ in the way the object or target (in this case, the breast tumor) is detected. The first transmits a narrow pulse and samples the reflected or transmitted signal after electromagnetic scattering at the object. The second uses continuous-wave signals with narrow bandwidths, stepped in time to cover a wider bandwidth, and recovers the scattered signals using frequency-conversion techniques such

as low-IF or direct conversion. Figure 1 illustrates the differences between the two approaches. A time-domain approach widely used for breast cancer detection is the Impulse Radio Ultra-Wideband (IR-UWB) radar, which consists of a transmitter generating a Gaussian-type pulse and a receiver using equivalent-time sampling techniques [8]. That approach tends to relax the transmitter specifications but places demanding specifications on the receiver, since the signal to be sampled is a scattered narrow pulse that requires signal-recovery synchronization at the picosecond level or below [8]. Also, because the scattered signal contains a large skin-reflected component together with a small tumor-reflected component, a high dynamic range is needed, which is difficult to achieve over a wide bandwidth. The frequency-domain approach tends to use direct-conversion radio topologies due to its stringent phase-coherence specification [19]. Continuous-wave signals with narrow bandwidths are transmitted over time, changing the central frequency in steps to cover an entire frequency range. The receiver then processes the scattered components in a narrow bandwidth to achieve lower noise and consequently a higher dynamic range. Since this approach has DC-translated components, second-order linearity must also be analyzed to maintain the high dynamic range [19].
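The SFCW principle can be illustrated with a small numeric sketch (plain Python, not the ADS model used later in this paper): sampling an ideal point-scatterer channel at stepped frequencies and taking an inverse DFT yields a delay profile whose peak gives the time-of-flight range. All parameter values here are illustrative.

```python
import numpy as np

# Sample the channel response of a single point scatterer at N stepped
# frequencies; the inverse DFT of those samples is a coarse delay profile.
c = 3e8                      # wave velocity (free space assumed here)
f0, df, N = 2e9, 1e7, 201    # start frequency, frequency step, step count
freqs = f0 + np.arange(N) * df

R = 0.30                     # one-way distance to the scatterer (m)
tau = 2 * R / c              # round-trip delay of the echo
s = np.exp(-1j * 2 * np.pi * freqs * tau)   # ideal received samples

profile = np.abs(np.fft.ifft(s))            # delay (range) profile
k = int(np.argmax(profile))                 # peak bin index
R_est = k / (N * df) * c / 2                # bin -> delay -> distance
print(f"estimated range: {R_est * 100:.1f} cm")   # about 30 cm
```

The grid spacing of the recovered delay axis is 1/(N·df), so a wider swept bandwidth directly sharpens the range estimate, which is the resolution argument formalized in Sect. 2.2.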

150 Near Field Radar System Modeling for Microwave …


Since each architecture has advantages and drawbacks, a mixed solution can also be analyzed to combine the strengths of each. To find viable specifications for time-domain or frequency-domain approaches, or to propose optimal solutions, high-level system models need to be taken into account in order to explore the design space, from the architecture down to the specifications, while also considering the dielectric characteristics of the medium.

2.2 Frequency Spectrum and Signal Propagation Considerations

Radar topologies presented in Sect. 2.1 use the high-resolution principle to detect small targets [20]. The resolution in the two planes is given by the slant-range and cross-range parameters, which relate to Eqs. 1 and 2, respectively:

rs = v/(2B)  (1)

rc = (R·v)/(l·B)  (2)

where v is the wave velocity, B is the transmitted-signal bandwidth, R is the distance from the target to the radar, and l is the aperture length of the antenna. From Eqs. 1 and 2 it is possible to conclude that a higher central frequency with a wider bandwidth is better for detecting small targets. Since the resolution of imaging systems depends on the chosen frequency and bandwidth, spectral masks and government regulations need to be taken into account. Frequency bands that can be used for medical or UWB applications without licensing are summarized in Table 1. According to [21], resolving tumors below 2 cm is required for early-diagnosis purposes. Considering Eqs. 1 and 2, frequency bands above the 2.4 GHz ISM (Industrial, Scientific and Medical) band are necessary. Note that the bandwidth requirement is not met by any single ISM band, although their combined use can be explored. The UWB band could meet the early-diagnosis requirement but suffers from spectrum power limitations.
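Plugging numbers into Eq. 1 shows why Table 1 matters: with an assumed relative permittivity of 9 for fatty breast tissue (an illustrative round value, not a fitted tissue parameter), only UWB-scale bandwidths reach the sub-2 cm slant-range resolution needed for early diagnosis.

```python
import math

c = 3e8                   # free-space speed of light (m/s)
eps_r = 9.0               # assumed permittivity of fatty breast tissue (illustrative)
v = c / math.sqrt(eps_r)  # wave velocity in the medium, the v of Eq. 1

def slant_range_resolution(bandwidth_hz):
    """Eq. 1: r_s = v / (2 B), in metres."""
    return v / (2 * bandwidth_hz)

for name, bw in [("ISM 2.4 GHz", 0.1e9), ("ISM 24 GHz", 0.25e9), ("UWB", 7.5e9)]:
    print(f"{name:12s} r_s = {slant_range_resolution(bw) * 100:.2f} cm")
```

The single ISM bandwidths give resolutions of tens of centimetres, while the 7.5 GHz UWB bandwidth resolves well below 1 cm, consistent with the argument above.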

Another consideration is the nature of electromagnetic (EM) wave propagation in dielectrics. Since the breast can be modeled as a dispersive medium for EM waves, the signal is highly attenuated at high frequencies [22], which imposes a trade-off between penetration depth and radar resolution. Propagation and dispersive models have been used to solve Maxwell's equations for EM waves propagating in the breast. Equation 3 is one of them, giving the attenuation in a lossy medium as a function of the dielectric constant and conductivity:

α = ω · sqrt( (με/2) · ( sqrt(1 + (σ/(ωε))²) − 1 ) )  (3)

where α is the attenuation constant, ω is the angular frequency, μ is the permeability, ε is the dielectric constant, and σ is the conductivity of the medium. Cole-Cole and Debye relaxation models are also used to model the dielectric characteristics of the breast over frequency, and are summarized by Eqs. 4 and 5, respectively:

ε∗(ω) = ε∞ + (εs − ε∞) / (1 + (iωτ)^(1−α))  (4)

ε∗(ω) = ε∞ + (εs − ε∞) / (1 + iωτ)  (5)

where ε∗ is the complex dielectric constant, εs and ε∞ are the static and infinite-frequency dielectric constants, and τ is the relaxation time constant. Note that the Cole-Cole equation reduces to the Debye equation when α tends to zero. Numerical electromagnetic methods such as finite-difference time-domain (FDTD) are also used to solve this problem with numerical phantoms [23]. The trade-off between high resolution and penetration depth is likewise a motivation for high-level modeling at the system level in order to find optimal solutions. The next subsection presents this approach in more detail.
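Equations 3–5 can be checked numerically. The sketch below implements the attenuation constant and the Cole-Cole model (which reduces to Debye for α = 0); the relaxation parameters and conductivity used are illustrative round numbers, not the fitted tissue values of [23,24].

```python
import numpy as np

EPS0 = 8.854e-12             # vacuum permittivity (F/m)
MU0 = 4e-7 * np.pi           # vacuum permeability (H/m)

def cole_cole(omega, eps_inf, eps_s, tau, alpha):
    """Eq. 4; with alpha = 0 this reduces exactly to the Debye model (Eq. 5)."""
    return eps_inf + (eps_s - eps_inf) / (1 + (1j * omega * tau) ** (1 - alpha))

def attenuation_constant(omega, eps_r, sigma):
    """Eq. 3: attenuation constant of a lossy medium, in Np/m."""
    eps = eps_r * EPS0
    loss = np.sqrt(1 + (sigma / (omega * eps)) ** 2) - 1
    return omega * np.sqrt(MU0 * eps / 2 * loss)

omega = 2 * np.pi * 3e9
# illustrative round parameters, NOT fitted tissue values from [23,24]
eps_c = cole_cole(omega, eps_inf=7.0, eps_s=36.0, tau=13e-12, alpha=0.1)
alpha_np = attenuation_constant(omega, eps_c.real, sigma=1.0)
print(f"attenuation at 3 GHz: {8.686 * alpha_np:.0f} dB/m")   # 1 Np = 8.686 dB
```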

2.3 System Modeling

A typical MWI system is illustrated in Fig. 2. The breast model, or a real breast, is the target illuminated by a transmitter (TX), and its scattered signal is received (by the receiver, RX) through a radiating system implemented in a mono-static or multi-static antenna configuration. TX and RX comprise the conversion system, which translates the scattered microwave energy to a low-frequency signal to be processed by a computer or similar hardware responsible for the processing system (image reconstruction or tumor detection).

Table 1 Comparison of microwave ISM bands and the UWB band

                      ISM 2.4   ISM 5.8   ISM 24    UWB
Max. EIRP(a) (dBm)    36        36        36        0(b)
F_central (GHz)       2.45      5.8       24.165    6.85
Bandwidth (GHz)       0.1       0.15      0.250     7.5

(a) Effective Isotropic Radiated Power
(b) Peak ERP measured with a 50 MHz bandwidth


Fig. 2 Typical microwave imaging system

Most high-level designs or system models applied to MWI use simple equations based on radar or propagation theory. Even though these are well-developed theories, useful for approximating Maxwell's equations as mentioned in the previous subsection, they are not well connected when the complexity of multiple models can influence the final result. An example is transceiver design, which uses different RF circuit and system models and simulations that need to be connected with propagation models and also with complex dielectric relaxation models, in order to deal with breast models that have different tumor patterns, depths, lengths, and widths, and tissues with different gradients and water concentrations. System modeling aims to characterize or specify an overall application by performing interconnected simulations that consider individual specifications. For MWI applied to breast cancer detection, some works have been reported with this goal. In [19], analytic equations together with EM models of the breast are used to derive specifications for a particular transceiver architecture based on an SFCW direct-conversion receiver. Differently, [18] used a mixed-signal approach not to derive specifications but to check whether particular IR-UWB transceiver specifications work with a previously reported breast cancer database. Both approaches share the goal defined at the beginning of this subsection: to know how the system will behave in the application given a set of specifications. They differ in abstraction level, in the particular goal pursued by simulation, in the way the analyses are performed, and in considering specific system architectures.

This work proposes a hierarchical system model with the same goals, but implemented so that abstraction levels are interchangeable and simulations and analyses can be performed in a more general way, not only for a specific topology.

3 Methodology

A high-level system model is proposed based on the hierarchical entities depicted in Fig. 3.

Fig. 3 High-level system modeling based on hierarchical entities

The top of the hierarchy consists of the testbenches, which provide stimulus and control signals to the lower-level blocks (TX, model, and RX) and evaluate the results. The TX block is responsible for generating the electromagnetic waves, i.e., the input signals of the system; in this particular case it is a CW transmitter with re-configurable parameters such as frequency, bandwidth, and output power. The model block is a breast model, as illustrated in Fig. 4.

Fig. 4 Breast models based on layer tissues

As seen in Fig. 4, the breast can be modeled as healthy (without a tumor) or with a tumor. Anatomical parameters such as tumor depth or breast-tissue composition can be changed. Since the breast is modeled with different tissue compositions organized in layers, different shapes can be configured for each layer in order to analyze propagation and detection issues in the system. The RX block is the receiver, responsible for detecting the electromagnetic wave scattered by the model. The RX block is also configurable in terms of dynamic range, noise, linearity, and other important specifications. The next section presents the system in more detail, as well as the simulations performed using this methodology.

4 Proposed System Modeling

To evaluate an MWI system with the hierarchical methodology described in the previous section, a system library was created using the Advanced Design System (ADS) software, which supports hierarchical organization through libraries and different views. The organization of the system components by hierarchical level is illustrated in Fig. 5.


Fig. 5 System components organization based on hierarchical levels

BCD_lib (acronym for Breast Cancer Detection library) is the top library, with directives to all components in the hierarchies below. TX_lib, at the second hierarchical level, is the top library for transmitter blocks and is divided into two hierarchical views: TX_high, a transmitter model with high-level specifications, and TX_low, an architectural-level model. RX_lib is the top receiver library and is likewise divided into two lower-level views: RX_high, a model with receiver specifications, and RX_low, a more detailed model that can consider specific topology details such as homodyne or heterodyne conversion. Model_lib comprises the breast models: Layer_model, based on Fig. 4 and on Eqs. 3 and 5, and FDTD_model, which uses numerical equivalent phantoms provided by FDTD electromagnetic modeling. Finally, TB_lib is the testbench library, also at the second hierarchical level. The testbench library can manage all third-level views and blocks from the other libraries in order to control and evaluate simulations. The next section uses the proposed hierarchical system to perform simulations of propagation losses due to attenuation in different breast tissues, considering the TX_high, RX_high, and Layer_model views.

5 Results and Discussions

To evaluate the proposed hierarchical modeling system, a testbench with all the components was built to analyze an important system feature: penetration depth, related to the propagation loss (attenuation) of electromagnetic waves in the medium. Figure 6 presents the testbench implemented in ADS for this task, using the library components.

Fig. 6 Testbench for simulations to extract propagation loss and penetration depth

The goal of this testbench (BCD_lib → TB_lib → Atten setup in Fig. 5) is to simulate the propagation of the transmitted electromagnetic wave in a layered breast model and thus to extract the attenuation for a given set of transceiver parameters. Figure 7 presents the simulation results for different breast-tissue compositions using Layer_model, considering Eqs. 3 and 5 with parameters from [23], for a 1 cm distance and a frequency range from 1 to 10 GHz. The received power (P_RX) is therefore directly related to the attenuation in a 1 cm layer of breast tissue. As expected, a predominantly glandular breast attenuates more than a fatty one, imposing a critical specification on the receiver dynamic range. In this particular simulation, the TX power was kept at 0 dBm over the frequency range, and a high RX dynamic range was set so as not to affect the attenuation measured by the testbench; due to the flexibility of the proposed approach, however, all TX and RX specifications can be changed to evaluate other practical situations.
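As a rough cross-check of the simulated trend, Eq. 3 alone already predicts that glandular-like tissue attenuates far more per centimetre than fatty tissue. The single-frequency dielectric values below are coarse, literature-inspired guesses, not the parameters used in the ADS testbench.

```python
import math

EPS0, MU0 = 8.854e-12, 4e-7 * math.pi   # vacuum permittivity and permeability

def loss_db_per_cm(freq_hz, eps_r, sigma):
    """One-way attenuation over 1 cm from Eq. 3 (1 Np = 8.686 dB)."""
    w = 2 * math.pi * freq_hz
    eps = eps_r * EPS0
    alpha = w * math.sqrt(MU0 * eps / 2 * (math.sqrt(1 + (sigma / (w * eps)) ** 2) - 1))
    return 8.686 * alpha * 0.01

# coarse single-frequency guesses, NOT the fitted parameters of [23]
for name, eps_r, sigma in [("fat", 9.0, 0.3), ("glandular", 45.0, 2.5)]:
    print(f"{name:9s} @ 5 GHz: {loss_db_per_cm(5e9, eps_r, sigma):.2f} dB/cm")
```

With these values the glandular case loses several dB per centimetre while fat loses well under 2 dB/cm, matching the qualitative ordering of the curves in Fig. 7.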


Fig. 7 Attenuation of transmitted signal from TX to RX over frequency for different breast tissue compositions

(Curves in Fig. 7: P_TX and P_RX for glandular-high, glandular-median, glandular-low, fat-high, fat-median, and fat-low tissue; power in dBm versus frequency from 1 to 10 GHz.)

Fig. 8 Transmitted and received power considering a 10 cm deep breast with an innermost 1.5 cm tumor

(Curves in Fig. 8: P_TX and P_RX in dBm versus frequency from 1 to 10 GHz.)

Figure 8 presents the simulation result for a worst-case scenario: a 10 cm deep breast (almost 20 cm of wave round-trip attenuation in the back-scattering configuration) with an innermost 1.5 cm tumor, modeled with Cole-Cole parameters as in [24]. These system results directly affect the requirements of the transceiver used for the microwave imaging application. Different breast-tissue combinations in the model library and different transmitter-receiver specifications can be simulated to explore the design space, optimize the final device, and consequently provide a more precise tool to help physicians in the diagnosis of breast cancer.
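For intuition on what a dynamic-range requirement of this magnitude implies for the digitizing back-end, the ideal-quantization relation DR ≈ 6.02·N + 1.76 dB links dynamic range to converter bits; in practice, narrowband filtering and averaging relax the per-sample converter requirement, so this is only a back-of-the-envelope bound.

```python
import math

def bits_for_dynamic_range(dr_db):
    """Bits N such that the ideal-ADC relation 6.02*N + 1.76 dB covers dr_db."""
    return math.ceil((dr_db - 1.76) / 6.02)

print(bits_for_dynamic_range(120))   # 120 dB corresponds to a 20-bit ideal converter
```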

6 Conclusions

This paper presented a high-level system modeling proposal that can be used to explore the design space of microwave imaging for breast cancer detection. The system was implemented as hierarchical libraries in the Advanced Design System software. A testbench to extract the attenuation of transmitted electromagnetic waves in different breast tissues, and in a breast cancer detection scenario, was built for validation purposes. Simulation results show the received power over frequency. A worst-case scenario considering a breast with an innermost tumor at 10 GHz points to the need for a receiver dynamic range larger than 120 dB. The results can be used to optimize the final microwave device in order to provide a more precise tool to help in the diagnosis of breast cancer.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. American Cancer Society. https://www.cancer.org/
2. Ries L, Reichman M, Lewis D et al (2003) Cancer survival and incidence from the surveillance, epidemiology, and end results (SEER) program. Oncologist 8:541–552
3. Drukteinis J, Mooney B, Flowers C et al (2013) Beyond mammography: new frontiers in breast cancer screening. Am J Med 126:472–479
4. Huynh P, Jarolimek A, Daye S (1998) The false-negative mammogram. Radiographics 18:1137–1154
5. Lehman C, Isaacs C, Schnall M et al (2007) Cancer yield of mammography, MR, and US in high-risk women: prospective multi-institution breast cancer screening study. Radiology 244:381–388
6. O'Loughlin D, O'Halloran M, Moloney B et al (2018) Microwave breast imaging: clinical advances and remaining challenges. IEEE Trans Biomed Eng 65:2580–2590
7. Zeng X, Fhager A, He Z et al (2014) Development of a time domain microwave system for medical diagnostics. IEEE Trans Instrum Meas 63:2931–2939
8. Song H, Sasada S, Kadoya T et al (2017) Detectability of breast tumor by a hand-held impulse-radar detector: performance evaluation and pilot clinical study. Sci Rep 7:1–11
9. Bond E, Li X, Hagness S et al (2003) Microwave imaging via space-time beamforming for early detection of breast cancer. IEEE Trans Antennas Propag 51:1690–1705
10. Fear E, Li X, Hagness S et al (2002) Confocal microwave imaging for breast cancer detection: localization of tumors in three dimensions. IEEE Trans Biomed Eng 49:812–822
11. Song H, Li Y, Men A (2018) Microwave breast cancer detection using time-frequency representations. Med Biol Eng Comput 56:571–582
12. Fear E, Bourqui J, Curtis C et al (2013) Microwave breast imaging with a monostatic radar-based system: a study of application to patients. IEEE Trans Microw Theory Tech 61:2119–2128
13. Fasoula A, Duchesne L, Cano J et al (2018) On-site validation of a microwave breast imaging system, before first patient study. Diagnostics 8:53
14. Mahmud M, Islam M, Islam M et al (2019) A low cost and portable microwave imaging system for breast tumor detection using UWB directional antenna array. Sci Rep 9:1–13
15. Bassi M, Caruso M, Khan M et al (2013) An integrated microwave imaging radar with planar antennas for breast cancer detection. IEEE Trans Microw Theory Tech 61:2108–2118
16. Casu M, Vacca M, Tobon J et al (2017) A COTS-based microwave imaging system for breast-cancer detection. IEEE Trans Biomed Circuits Syst 11:804–814
17. Oloumi D, Bevilacqua A, Bassi M (2019) UWB radar for high resolution breast cancer scanning: system, architectures, and challenges. In: 2019 IEEE international conference on microwaves, antennas, communications and electronic systems (COMCAS). IEEE, pp 1–4
18. Guo X, Casu M, Graziano M et al (2014) Simulation and design of an UWB imaging system for breast cancer detection. Integration 47:548–559
19. Bassi M, Bevilacqua A, Gerosa A et al (2012) Integrated SFCW transceivers for UWB breast cancer imaging: architectures and circuit constraints. IEEE Trans Circuits Syst I Regul Pap 59:1228–1241
20. Wehner D (1987) High resolution radar. Artech House
21. Berg W, Hendrick E, Kopans D et al (2009) Frequently asked questions about mammography and the USPSTF recommendations: a guide for practitioners. Society of Breast Imaging, Reston
22. Hippel A (1954) Dielectrics and waves. Wiley
23. Zastrow E, Davis S, Lazebnik M et al (2008) Development of anatomically realistic numerical breast phantoms with accurate dielectric properties for modeling microwave interactions with the human breast. IEEE Trans Biomed Eng 55:2792–2800
24. Lazebnik M, Popovic D, McCartney L et al (2007) A large-scale study of the ultrawideband microwave dielectric properties of normal, benign and malignant breast tissues obtained from cancer surgeries. Phys Med Biol 52:6093

Biomedical Optics and Systems and Technologies for Therapy and Diagnosis

Development and Validation of a New Hardware for a Somatosensorial Electrical Stimulator Based on Howland Current-Source Topology

L. V. Almeida, W. A. de Paula, R. Zanetti, A. Beda and H. R. Martins

Abstract

Several neuropathies of the peripheral nervous system (PNS) cause sensory dysfunction, leading to information distortion or even sensory loss. For instance, the nerve-damaging process of diabetes leads to progressive loss of sensation, indicating that early intervention could reduce its effects. Among the available exams, electrical stimulation can provide quantitative metrics such as the current perception threshold (CPT) and reaction time (RT), which can serve as screening tools for assessing the disease's evolution and differentiating the affected nerve fibers. We aim at developing new multichannel hardware for the electrical stimulation system EELS, allowing it to overcome limitations of the Neurometer and NeuroStim systems. We employ the Howland current-source topology combined with a bootstrapping power-supply scheme to simplify the circuit design, reducing power stages and the number of required power rails. The new hardware presents high output linearity and stable load regulation when tested at frequencies ranging from 1 Hz to 3 kHz. It can generate current stimuli of up to 8.63 mA with 4.51 µA resolution, showing total harmonic distortion (THD) below 1%. Compared with NeuroStim and Neurometer, EELS presented similar or superior characteristics in terms of stimulus generation (amplitude, frequency, bandwidth, linearity, and THD). Moreover, it presents two stimulation channels to support new exam protocols such as a two-point discrimination test. Therefore, EELS can be used for all the exams performed by its competitors (Neurometer and NeuroStim), while adding new features to perform different exams.

L. V. Almeida · W. A. de Paula · A. Beda · H. R. Martins — Graduate Program in Electrical Engineering, Universidade Federal de Minas Gerais, 6627 Presidente Antonio Carlos Avenue, Belo Horizonte, MG, Brazil. L. V. Almeida (B) · R. Zanetti — Departamento de Eletrônica e Biomédica, Centro Federal de Educação Tecnológica de Minas Gerais, Belo Horizonte, MG, Brazil. e-mail: [email protected]

Keywords

Neuropathy • Somatosensorial assessment • Sinusoidal electrical stimulation • Howland current-source topology • Embedded system

1 Introduction

The skin is widely innervated by primary afferent nerve fibers, which are part of the peripheral nervous system (PNS). These neurons carry information from somatic sensory receptors to sites in the central nervous system (CNS). The conduction velocity of such nerve fibers is intrinsically related to the presence of myelin and, therefore, to the nerve-fiber diameter [1]. According to their diameter and connected receptors, nerve fibers can be classified into three groups: C, Aδ, and Aβ [1]. Each group correlates with a conduction velocity and with the type of sensation conveyed to the CNS [1,2]. For instance, C-fibers are unmyelinated and have the smallest diameter of the three groups (less than 1.5 µm), as well as the lowest conduction velocity (between 0.5 and 2 m/s); they are related to the transmission of pain, temperature, and itching sensations [1]. Aδ-fibers, on the other hand, are myelinated but thin compared with Aβ-fibers (thick myelinated axons). The former are related to sensations of pain and temperature; the latter, with conduction velocities of up to 75 m/s, are related to tactile sensations [1]. Several diseases of the PNS cause sensory dysfunction, leading to information distortion or even sensory loss. For example, some neuropathies, such as diabetes, act on nerve fibers selectively, while others, such as carpal tunnel syndrome and Hansen's disease, gradually impair the PNS. In both cases, early interventions could reduce the effect of such diseases and improve patients' quality of life.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_151

To this end, we can use, e.g., monofilaments [2], two-point discrimination [3], and sinusoidal electrical stimulation [4]. These tools and procedures can be used to evaluate sensory loss and, for some of them, also to assess conduction velocity; thus the assessment results can be related to the affected nerve group as a screening tool. Moreover, among the exams cited above, electrical stimulation can provide quantitative metrics for disease assessment. For instance, measuring the current perception threshold (CPT) and reaction time (RT) [5] permits tracking the disease's evolution and evaluating the subject's response to treatment. The electrical stimulation exam that measures CPT uses a sinusoidal electrical stimulus to determine the weakest current intensity capable of evoking a perception in the patient [2,4]. Furthermore, these measurements are taken at various frequencies, as previous studies suggest that stimuli of different frequencies trigger different somatosensory nerve fibers; hence, perceived sensations vary according to the group of nerve fibers most stimulated. The accessed studies indicate that: (1) a low-frequency sinusoidal stimulus at 5 Hz excites mostly C-fibers; (2) 250 Hz affects mostly Aδ-fibers; and (3) 2000 Hz, Aβ-fibers [6,7]. A later study shows that frequencies of 1, 250, and 3000 Hz are more neuroselective than the previous set [5]. To the best of our knowledge, the Neurometer was the first equipment available for determining CPT [8]. It features a current-source topology capable of generating sinusoidal stimuli with peak intensities of up to 9.99 mA at a fixed set of frequencies (5 Hz, 250 Hz, and 2 kHz). A second electrical stimulation device found in the literature is the NeuroStim [9].
The NeuroStim current-source topology is based on the design proposed in [10], adding resources (hardware, firmware, and software) to provide stimuli with arbitrary waveform and frequency. It also adds the possibility of measuring the subject's reaction time (RT) to the perception of a stimulus (the time elapsed between stimulation and the pressing of a hand button to indicate its perception). The current-source topology used in both of the above-cited devices has a complex circuit design. The circuit is based on two class-AB power amplifiers connected in cascade by a precision rectifier. The first-stage amplifier and the rectifiers generate the positive and negative parts of a stimulus separately, and the second stage joins them at the output [9]. This approach can create signal distortions as well as DC-offset problems, requiring fine-tuning with potentiometers during calibration. Moreover, this design requires several power supplies (±15, ±120, ±135, ±150 V) to operate properly, increasing the complexity and cost of the circuit. Seeking to minimize the limitations of the Neurometer and NeuroStim, we propose new multichannel hardware to compose the electrical stimulation system


EELS (an acronym of the Portuguese words estimulador elétrico somatossensorial, which translates to somatosensorial electrical stimulator). In this new hardware, we use the Howland current-source topology combined with a bootstrapping power-supply scheme [11]. This approach allows simple hardware circuitry, reducing power-supply complexity and production costs. Moreover, we extend the number of stimulation channels to two independent outputs, allowing EELS to perform exam protocols that are not available in other commercial equipment. Finally, we test this hardware and compare it with NeuroStim in terms of operation.

2 Materials and Methods

In this section, we present the EELS electrical stimulation system, describing its features and the proposed current-source topology. We also introduce the benchmarks considered in the current-source evaluation.

2.1 EELS

We developed hardware, firmware, an Android-based app as the user interface, and server software to compose the EELS electrical stimulation system. The EELS design accommodates the requirements of a CPT exam, extending the feature set to also include RT measurement and a two-point discrimination test [12]. EELS operation is quite simple: using the app, an operator can register a new patient, retrieve/send information from/to the server database, and control the execution of an exam. All information exchange between app and equipment occurs over a Bluetooth 4.0 communication interface. The EELS firmware works in a master/slave fashion, waiting for commands from the app to execute each exam protocol [13]. Figure 1 shows the EELS hardware block diagram.

Fig. 1 EELS new hardware block diagram

The hardware is divided into a control unit, Bluetooth interface, current-control voltage-conditioning circuit, Howland current source, calibration-feedback conditioning circuit, and power supply. The EELS control unit is based on the STM32F446 microcontroller, which features a 32-bit RISC processor operating at frequencies of up to 180 MHz. This microcontroller provides 512 KB of flash memory, 128 KB of RAM, and several other peripherals, including two internal 12-bit digital-to-analog converters (DAC). An HC-05 Bluetooth module handles communication between the equipment and the mobile app, speeding up exam setup and reducing possible problems arising from cable-connection failures [13]. EELS can generate stimuli with a resolution of 806 µV/bit and frequencies of up to 25 kHz using a 360-point sinusoidal cycle. The current-source control signal (Vctrl) is generated by a DAC output and adjusted to meet the current-source requirements by a conditioning circuit. The latter transforms the asymmetric signal generated by the DAC (0–3.3 V) into a symmetric signal with amplitude of up to ±9.24 V. This signal is the control reference for the voltage-controlled Howland current source [11]. A better description of this circuit and of the Howland current source is given in Sect. 2.2. EELS is powered by a medical-grade power source (AC/DC converter, ±15 V, 45 W), complying with the NBR 60601-1 ABNT Brazilian general requirements for basic safety. A 3.3 V DC/DC converter powers the digital circuitry, and an M30-series 30 W adjustable DC/DC converter (maximum output voltage of ±150 V) powers the current-source circuitry, permitting it to drive high-impedance loads [14].
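The stimulus-generation figures quoted above (12-bit DAC, 3.3 V range, hence 3.3 V/4096 ≈ 806 µV per bit, 360 samples per sine cycle) can be sanity-checked with a short sketch. The lookup-table function below is our illustration, not the EELS firmware; note also that 360 points at 25 kHz would imply a 9 MSa/s DAC update rate, so we assume fewer points per cycle are used at the highest frequencies.

```python
import math

DAC_BITS = 12
VREF = 3.3
N_POINTS = 360                  # one sinusoidal cycle, as stated for EELS
LSB = VREF / (1 << DAC_BITS)    # 3.3 / 4096, about 806 uV per bit

def sine_table(amplitude_v):
    """DAC codes for one sine cycle centred at mid-scale (1.65 V).

    Illustrative helper, not the EELS firmware.
    """
    assert 0 < amplitude_v <= VREF / 2
    full_scale = (1 << DAC_BITS) - 1
    mid = (1 << DAC_BITS) // 2
    table = []
    for k in range(N_POINTS):
        code = mid + round(amplitude_v * math.sin(2 * math.pi * k / N_POINTS) / LSB)
        table.append(max(0, min(full_scale, code)))  # clamp to DAC range
    return table

table = sine_table(1.0)         # 1 V amplitude around the 1.65 V mid-rail
print(len(table), min(table), max(table))   # 360 807 3289
```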

2.2 Current-Source Design and Calibration Scheme

The Vctrl generated by each DAC channel is the reference for the Howland current-source output, enabling firmware control of the stimulus amplitude and waveform. The signal conditioning circuit (Fig. 2a) has two stages. The first stage transforms the asymmetric DAC output voltage into a symmetric signal according to Eq. 1; thus the input voltage range of 0–3.3 V leads to a first-stage output of ±3.3 V. The second stage amplifies the signal with a 2.8 V/V voltage gain, allowing the current-source control signal to reach up to ±9.24 V.

Fig. 2 a Current control-signal conditioning circuit, b Howland current source with bootstrap scheme, and c current conditioning circuit for automatic calibration


As shown in Fig. 2b, we implement a modified Howland topology by replacing the common operational amplifier with a difference amplifier (the AD629). This modification is needed to account for a possible asymmetrical ramp of the power supply voltage, which would violate the specification of input common-mode voltage (V_ICM) [11]. A second modification aims at expanding the circuit power supply rails to drive high output-impedance loads. By bootstrapping U2 and U3, we can modify their supply range (±VCC) and supply the circuit with very high voltages (V_DD above ±100 V). The only limitation on the power rail is the collector-emitter maximum voltage (V_CEOmax) of Q1 and Q2. In the proposed design, we use the MJD340 and MJD350 transistors (V_CEOmax = 300 V) and adjust the power supply output to ±130 V. The maximum allowed output current (I_max) depends on the maximum U2 output voltage (V_outmax). To simplify the analysis, we ignore the transistor base current draw and obtain Eq. 2. In this equation, V_CCmin is the minimum required U2 supply voltage, which is 2.5 V for the AD629 [15]. Solving Eq. 2, we obtain V_outmax = 102 V. In turn, I_max is obtained from Eq. 3 as 9.27 mA for an output load of 10 kΩ and R_ref = 1 kΩ.

V1 = 2 × (V_DAC − 1.65)  (1)

V_outmax = V_DD − V_CCmin × (R7 + R8)/R7  (2)

I_max = V_loadmax / R_load = [V_outmax × R_load/(R_load + R_ref)] / R_load  (3)

I_OUT = Vctrl / R_REF  (4)
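Plugging the stated component values into Eqs. 3 and 4 reproduces the figures quoted in the text. A numeric sketch (V_outmax = 102 V is taken as given, since R7 and R8 are not listed):

```python
V_OUT_MAX = 102.0   # from Eq. 2 with VDD = +/-130 V, VCCmin = 2.5 V (V)
R_LOAD = 10_000.0   # standard output load (ohm)
R_REF = 1_000.0     # reference resistor (ohm)

# Eq. 3: the load sees V_outmax through the R_load / (R_load + R_ref) divider
v_load_max = V_OUT_MAX * R_LOAD / (R_LOAD + R_REF)
i_max = v_load_max / R_LOAD   # equivalent to V_OUT_MAX / (R_LOAD + R_REF)
print(f"I_max = {i_max * 1e3:.2f} mA")   # 9.27 mA

# Eq. 4: I_out = Vctrl / R_ref; one DAC step (3.3 V / 4096) through the
# conditioning gain (2 x 2.8) gives the output current resolution
dac_lsb = 3.3 / 4096
i_res = dac_lsb * 2 * 2.8 / R_REF
print(f"current resolution = {i_res * 1e6:.2f} uA/bit")   # 4.51 uA/bit
```

Note how the 4.51 µA resolution mentioned later follows directly from the DAC LSB and the 5.6 V/V total conditioning gain divided by R_ref.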

The feedback circuit with the operational amplifier U3 in Fig. 2b is responsible for maintaining a fixed voltage over the reference resistor (R_REF), ensuring a constant stimulus current even with variations in the electrode-skin contact impedance. A direct consequence of using such a feedback circuit along with a difference amplifier is that the output current can be calculated by Eq. 4. Considering the maximum control-reference signal of 9.24 V, the expected maximum output current is 9.24 mA (peak). This value is within the maximum allowed output current (I_max). Moreover, considering the DAC voltage resolution and the conditioning circuit gain, we achieve a current resolution of 4.51 µA.

The output current offset level is an important parameter in this type of equipment, as it can mislead patient perception and provoke pain. To guarantee a reduced output offset voltage, we implement a self-calibration scheme by measuring the output current in real time and adjusting Vctrl to counteract offset variations. We continuously measure the output current using a calibration-feedback conditioning circuit and an analog-to-digital converter (ADC), as shown in Fig. 2c. This circuit uses an instrumentation amplifier (U4) and the potentiometer RV1 to set the voltage gain to 50 V/V. We add an offset of 1.65 VDC to fit the output signal within the microcontroller's voltage rails. Equation 5 provides the feedback-signal amplitude. We calculate the average output voltage and adjust the stimulus offset once per stimulus cycle.

V_loadCurrent = (I_load × 1.5) × gain ± offset_calibration  (5)

2.3

Current-Source Testing

In order to evaluate the proposed EE L S current source, we performed several benchmark tests using an Agilent Technologies DSO-X 2002A digital oscilloscope and a 6½-digit Agilent 34401A multimeter (true RMS, root mean square). We used a 10 kΩ resistor as the standard output load for all tests except load regulation, as it is considered the maximum expected skin impedance [16]. The following tests were performed at 1 Hz, 5 Hz, 250 Hz, 2 kHz, and 3 kHz: (a) Maximum current amplitude: measurement of the maximum output current by increasing the stimulus amplitude until a distortion of the waveform (saturation) is observed. (b) Linearity: assessment of the current-source output linearity by fitting a second-order polynomial regression over ten output measurements between 1 mA and I_max, using the Matlab® polyfit function. (c) Load regulation: we measure the output current while varying the output impedance between 100 Ω and 15 kΩ. Before any measurement, we adjust the output to the maximum current with an output load of 10 kΩ. The load regulation is then calculated according to Eq. 6.

LR(%) = (I_max − I_measured)/I_max × 100  (6)

(d) Total harmonic distortion (THD): to estimate the degree of nonlinearity of the proposed current-source topology, we estimate the harmonic distortion of the control-reference signal (input) and of the output voltage waveform. The THD is estimated using at least ten cycles of each signal for output currents of 1 mA, 5 mA, and the maximum amplitude.
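The THD estimation in test (d) amounts to comparing harmonic magnitudes against the fundamental. A pure-Python sketch of that computation (the paper averages over at least ten cycles; a single 360-sample cycle is used here for brevity):

```python
import math

def thd(samples, harmonics=10):
    """Estimate total harmonic distortion of an integer number of cycles.

    THD = sqrt(sum of harmonic powers) / fundamental amplitude, assuming
    `samples` holds exactly one period of the fundamental (bin 1 of the DFT).
    """
    n = len(samples)

    def magnitude(k):  # DFT magnitude at bin k
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        return math.hypot(re, im)

    fund = magnitude(1)
    harm = math.sqrt(sum(magnitude(k) ** 2 for k in range(2, harmonics + 2)))
    return harm / fund

# A sine with a 1% third harmonic should report THD of about 1%
n = 360
wave = [math.sin(2 * math.pi * i / n) + 0.01 * math.sin(6 * math.pi * i / n)
        for i in range(n)]
print(f"THD = {100 * thd(wave):.2f} %")
```

In practice the oscilloscope capture would be resampled so that an integer number of cycles fits the analysis window, avoiding spectral leakage into the harmonic bins.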


3

Results

Figure 3 shows a measured EE L S output: a 3 kHz stimulus at the maximum current amplitude for this frequency. Table 1 presents the measured maximum current amplitude per evaluated frequency. The maximum current amplitude at 3 kHz presents a 5.5% drop compared to the one at 1 Hz. Table 2 contains the coefficients obtained by polynomial regression over the linearity test measurements. It shows that the first-order coefficient (a1) is almost five hundred thousand times greater than the second-order one (in the worst-case scenario, at 3 kHz) and, therefore, that the current-source output presents high linearity. The load regulation test (Table 3) shows that the output current varies less than 3% at 250 Hz when the output impedance varies between 100 Ω and 10 kΩ, reaching 14.30% for 15 kΩ. At the higher frequencies, 2 and 3 kHz, the load regulation is stable up to 5.6 kΩ (1.87% and 3.94%, respectively). However, the regulation degrades to 5.55% and 10.52% for a 10 kΩ output load at 2 kHz and 3 kHz, respectively. This effect is further observed at 15 kΩ, for which we obtain 18.10% and 21.80% drops in load regulation at the same frequencies. We did not evaluate load regulation at 1 and 5 Hz because the multimeter used is not specified to measure RMS current at frequencies below 20 Hz. In general, the output THD was below 1% at all evaluated frequencies (Table 4), and the maximum difference between input and output harmonic distortion was 0.42%, occurring at 5 Hz for a 5 mA output. EE L S presented higher harmonic distortion at lower stimulus amplitudes, but no specific trend could be observed with respect to the stimulus frequency.
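The linearity claim can be checked directly from the Table 2 coefficients by taking the ratio of the first- and second-order terms:

```python
# Second-order regression coefficients from Table 2 (y = a2*x^2 + a1*x + a0)
coeffs = {
    250:  (-6.5231e-07, 1.0032e+00),
    2000: (-1.4818e-06, 9.6245e-01),
    3000: (-1.8693e-06, 9.1764e-01),
}

for freq, (a2, a1) in coeffs.items():
    ratio = abs(a1 / a2)
    print(f"{freq} Hz: |a1/a2| = {ratio:,.0f}")
# Worst case (3 kHz): |a1/a2| is roughly 4.9e5, so the quadratic
# term is negligible over the 1 mA .. I_max output range
```
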

4

Discussion

EE L S, like other electrical stimulation systems found in the literature (Neurometer and NeuroStim), allows the execution of the CPT exam and also the RT measurement (only available on NeuroStim). These functionalities are supported by the new hardware, designed to fulfill the requirements of such a device. Moreover, the new hardware presents two stimulation channels, supporting new exam protocols, e.g., the two-point discrimination test [12], that can be used to assess intrasubject differences.

Fig. 3 Measured EE L S maximum output stimulus at 3 kHz

Table 1 Maximum output current amplitude per frequency

Freq (Hz)    1     5     250   2000  3000
Imax (mA)    8.41  8.63  8.63  8.38  7.94


Table 2 Regression coefficients obtained for the linearity test, considering the second-order equation y = a2·x² + a1·x + a0

Freq (Hz)   a2            a1           a0
250         −6.5231e−07   1.0032e+00   2.1549e+00
2000        −1.4818e−06   9.6245e−01   2.5436e+00
3000        −1.8693e−06   9.1764e−01   3.5899e+00

Table 3 Load regulation: current amplitude per output impedance per evaluated frequency (in mA RMS)

Load (Ω)   250 Hz   2000 Hz   3000 Hz
100        6.162    6.267     6.233
180        6.164    6.271     6.234
330        6.165    6.271     6.234
560        6.164    6.269     6.232
1000       6.162    6.265     6.225
1800       6.158    6.254     6.207
3300       6.150    6.224     6.152
5600       6.137    6.154     6.024
10000      6.106    5.923     5.611
15000      5.374    5.136     4.904
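Eq. 6 applied to the Table 3 currents reproduces the load-regulation percentages quoted in the text when the reference I_max is taken as the largest measured RMS current, 6.271 mA; this reference value is inferred here from the reported percentages, since the paper does not state it explicitly:

```python
I_MAX_RMS = 6.271  # reference current (mA RMS), inferred from the quoted LR values

def load_regulation(i_measured: float) -> float:
    """Eq. 6: LR(%) = (Imax - Imeasured) / Imax * 100."""
    return (I_MAX_RMS - i_measured) / I_MAX_RMS * 100

# Measured currents at 15 kohm from Table 3 (mA RMS)
print(round(load_regulation(5.374), 2))  # 250 Hz  -> 14.3
print(round(load_regulation(5.136), 2))  # 2 kHz   -> 18.1
print(round(load_regulation(4.904), 2))  # 3 kHz   -> 21.8
```
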

Moreover, the new EE L S hardware complies with the ABNT NBR 60601-1 standard and features Bluetooth 4.0 communication to reduce equipment setup time. The new features are possible due to the current-source topology used in EE L S. By using a Howland current-source topology, modified with a bootstrapping scheme to extend its output voltage range, we also simplify the hardware design and reduce its cost. Tests demonstrate the capacity of the new hardware to generate sinusoidal stimuli (Fig. 3), reaching a slightly lower maximum current (8.63 mA at 250 Hz) than specified. This difference may be explained by the transistor behavior when the output current is at its maximum value and the voltage is close to the rail limit: each transistor may stop operating in the linear region, which would reduce, to a certain extent, the maximum allowed output current. We also observe a lower maximum output current at 3 kHz, explained in turn by the circuit frequency response, which is limited by its bandwidth (designed to be DC–10 kHz). Regarding the other tests, EE L S presented high output linearity at all evaluated frequencies and low nonlinear distortion, as all estimated THD values are below 0.88%. THD values were higher for lower output currents, which is likely caused by a small (amplitude below 500 mV) high-frequency (approximately 50 kHz) noise superimposed on the output signal, probably produced by the high-gain feedback loop. Such noise is expected to be filtered out by the resistive-capacitive (RC) low-pass behavior of the skin [17]. EE L S load regulation is stable at all tested frequencies for output impedances up to 5.6 kΩ (less than 4% variation). At higher frequencies, we observe a deterioration of this parameter for 10 and 15 kΩ impedances (10.52% and 21.80%, respectively, at 3 kHz). This deterioration can be explained by the bandwidth response of the circuit, as the maximum current output is used during the load regulation test. However, as we expect a skin impedance below 1 kΩ at frequencies higher than 1 kHz [16], the proposed current source would still have adequate load regulation for the proposed application. In a direct comparison with NeuroStim [9], EE L S: (1) generates similar stimuli in terms of waveform, frequency, and amplitude, but with a wider bandwidth (NeuroStim 2–5 kHz, EE L S DC–10 kHz); (2) presents better current resolution (NeuroStim 8 µA, EE L S 4.51 µA); (3) presents similar output linearity and THD; (4) presents a real-time calibration scheme; (5) has simpler circuitry, which may lead to a lower production cost; and (6) presents two stimulation channels, allowing more types of exam protocols. In general, the results indicate that EE L S may be a more suitable option than NeuroStim and Neurometer for assessing neuropathies.

5

Conclusion

The EE L S electrical stimulation system is designed for somatosensory assessment in patients with neuropathies, such as those associated with diabetes mellitus and Hansen's disease. The EE L S hardware has relatively simple circuitry, which allowed the inclusion of a second stimulation channel. Compared with NeuroStim, the EE L S hardware presented similar or superior characteristics in terms of stimulus generation (amplitude, frequency, bandwidth, linearity, and THD) and a simpler circuit.


Table 4 THD estimations per frequency per current amplitude

Freq (Hz)   I (mA)   THD_cond (%)   THD_load (%)   THD_source (%)
1           1        1.18           0.88           0
1           5        0.35           0.50           0.15
1           Imax     0.34           0.41           0.07
5           1        0.68           0.79           0.11
5           5        0.22           0.64           0.42
5           Imax     0.28           0.45           0.17
250         1        0.76           0.85           0.09
250         5        0.30           0.47           0.17
250         Imax     0.26           0.40           0.14
2000        1        0.50           0.62           0.13
2000        5        0.25           0.35           0.10
2000        Imax     0.33           0.31           0
3000        1        1.24           0.59           0
3000        5        0.31           0.42           0.11
3000        Imax     0.20           0.30           0.10

It allows real-time offset adjustment, keeping the stimulus parameters within the conditions required for performing the exam. Therefore, EE L S can be used for all the exams performed by its competitors (Neurometer and NeuroStim) and also for exams that make use of two stimulation channels, e.g., the two-point discrimination test.

Acknowledgements The work was supported by CNPq, CAPES, FAPEMIG and FAPDF, Brazil.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Bear MF, Connors BW, Paradiso MA (2016) Neuroscience: exploring the brain. Wolters Kluwer
2. Pimentel JM, Petrillo R, Vieira MMF et al (2006) Perceptions and electric sinusoidal current stimulation. Arquivos de Neuropsiquiatria 6:10–13
3. Guclu-Gunduz A et al (2012) Upper extremity function and its relation with hand sensation and upper extremity strength in patients with multiple sclerosis. NeuroRehabilitation 30:369–374
4. Katims JJ et al (1987) Constant current sine wave transcutaneous nerve stimulation for evaluation of peripheral neuropathy. Arch Phys Med Rehabil 68:210–213
5. Martins HR, Zanetti R, Santos CC et al (2013) Current perception threshold and reaction time in the assessment of sensory peripheral nerve fibers through sinusoidal electrical stimulation at different frequencies. J Biomed Eng 29:278–285
6. Koga K, Furue H, Rashid MH et al (2005) Selective activation of primary afferent fibers evaluated by sine-wave electrical stimulation. Mol Pain 1:13
7. Matsutomo R, Takebayashi K, Aso Y (2005) Assessment of peripheral neuropathy using measurement of the current perception threshold with the Neurometer® in patients with type 2 diabetes mellitus. J Int Med Res 33:442
8. Neurometer CPT—Painless electrodiagnostic clinical and laboratory sensory nerve testing equipment. Neurotron Inc. https://www.neurotron.com/
9. Martins HR (2008) Sistema para o estudo do limiar de percepção de corrente elétrica com forma de onda arbitrária. MSc dissertation in Biomedical Engineering, Universidade Federal de Minas Gerais, Belo Horizonte
10. Katims JJ (1998) Digital automated current perception threshold (CPT) determination device and method. US Patent 5,806,522. Filed 15 Aug 1995, granted 15 Sep 1998, United States
11. Caldwell J (2013) A high-voltage bidirectional current source. Analog Applications, Texas Instruments
12. Catley MJ, Tabor A, Wand BM, Moseley GL (2013) Assessing tactile acuity in rheumatology and musculoskeletal medicine—how reliable are two-point discrimination tests at the neck, hand, back and foot? Rheumatology 52:1454–1461
13. Paula WA, Zanetti R, Almeida LV et al (2019) Somatosensory electrical stimulator for assessment of current perception threshold at different frequencies. In: 5th World congress on electrical engineering and computer systems and sciences, ICBES 112, Lisbon, Portugal
14. American Power Design, Inc. M30 series 30 Watt programmable DC/DC converters. http://www.apowerdesign.com/pdf/m30.pdf
15. Analog Devices. High common-mode voltage difference amplifier, AD629 datasheet. https://www.analog.com/media/en/technical-documentation/data-sheets/AD629.pdf
16. Medina LE, Grill WM (2015) Phantom model of transcutaneous electrical stimulation with kilohertz signals. In: 2015 7th international IEEE/EMBS conference on neural engineering (NER). IEEE, pp 430–433
17. Vargas Luna JL, Krenn M, Cortés Ramírez JA, Mayr W (2015) Dynamic impedance model of the skin-electrode interface for transcutaneous electrical stimulation. PLOS ONE 10

Autism Spectrum Disorder: Smart Child Stimulation Center for Integrating Therapies R. O. B. Ana Letícia, C. M. Lívia, A. P. Phellype, B. V. Filipe, L. F. A. Anahid, and S. A. Rani

Abstract

Keywords

Autism is a neurodevelopmental disorder that affects one in every 160 children in the world, according to the Pan American Health Organization. Interventions in multi-sensory environments are able to stimulate communication and social behavior. Thus, skills training programs and behavioral treatment performed in this environment reduce the psychosocial impact of the disorder. Moreover, such environments give specialists tools for exploring personalized analyses for different groups of individuals and their levels of autism. Therefore, this study proposes the development of a multi-sensory environment integrated with multimedia software that deepens the experience of patients in different thematic scenarios configured by a specialist. It aims to optimize the time spent in clinical sessions used to evaluate various aspects of the treatment, provide resources for meeting the needs of patients, and reduce costs during medical supervision.

Autism · Technology · Neurodevelopmental · Multisensorial · Software

R. O. B. A. Letícia (✉) · C. M. Lívia · A. P. Phellype · B. V. Filipe · S. A. Rani
Center of Development and Transference of Assistive Technology, National Institute of Telecommunications, Avenida João de Camargo, Santa Rita do Sapucaí, Brazil
e-mail: [email protected]; ana.leticia.[email protected]
A. P. Phellype e-mail: [email protected]
B. V. Filipe e-mail: fi[email protected]
S. A. Rani e-mail: [email protected]
L. F. A. Anahid
Neurobrinq, Avenida João de Camargo, São Paulo, Brazil
e-mail: [email protected]


1





Introduction

In a world so immersed in the technological context, sending information and communicating have become extremely fast and important for social interaction. However, according to Fernandes [1], the number of children arriving at clinics or care centers for treatment of learning difficulties and problems processing information has increased, and one of the most prominent neurodevelopmental disorders is childhood autism [1]. Autism, or Autism Spectrum Disorder (ASD), includes different syndromes marked by neurodevelopmental disorders with three fundamental characteristics, which may manifest together or alone: difficulty in communication, caused by disability in the field of language and in the use of imagination to deal with symbolic games; difficulty in socialization; and a pattern of restrictive and repetitive behavior. It is estimated that autism affects 1% of the population, or about seventy million people worldwide, according to the World Health Organization [2]. In the last few years, research and clinical practice have advanced considerably, contributing to the improvement of autistic people's quality of life and to the early diagnosis of the disorder. Autism is classified according to its degree of progression into three levels: mild, moderate, and severe. As autistic people see, hear, and feel differently, different levels of support are needed for their development in each situation, in order to extract their maximum intellectual, psychological, and cognitive potential, thus providing a dignified life and, in most cases, autonomy and independence. For this, an autistic person needs an average of 40 h of clinical care a week [3].

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_152


R. O. B. A. Letícia et al.

Data from the United States Centers for Disease Control and Prevention (CDC) show that today 1 in 59 children is diagnosed with autism. According to the researchers, this 15% increase over the previous survey, issued two years earlier, points to factors such as increased awareness of the existence of autism; improvements in screening, diagnostic services, treatment, and intervention services; and better documentation of ASD behaviors. In addition, the report indicates changes in the criteria used to identify the disorder early, that is, the increase is not entirely due to new diagnosed cases. In the report, the CDC used data collected from the following states: Arizona, Arkansas, Colorado, Georgia, Maryland, Minnesota, Missouri, New Jersey, North Carolina, Tennessee, and Wisconsin [4]. In this scenario, children come into contact ever earlier with technological immersion and the speed of information, developing from the outset a relationship with the whole technological panorama, with influence on diverse social contexts such as health, education, and leisure. In view of this, it is possible to take advantage of technology for the stimulation of children, with strategies that are appropriate and innovative complements to treatment and a differential that lets them learn in a playful way. In order to produce technological innovations aimed at the sensory, cognitive, and motor stimulation of children in early infancy, the authors developed the Multi Sense program and the sensory stimulation environment, which together promote the development of the six learning channels: hearing, touch, palate, smell, vision, and balance and movement. The environment is a child stimulation center whose objective is to integrate and optimize the different types of therapies and treatments through Multi Sense, a software interface that enables health professionals to perform actions in real time, as well as to plan and execute activities.
The program is responsible for integrating solutions, devices, and applications based on the Internet of Things (IoT) into the treatment of children with neurodevelopmental and learning disorders in the sensory stimulation environment. Therefore, the objective of this article is to demonstrate the development and functioning of Multi Sense, offering an important tool for carrying out the activities proposed by health professionals to children with neurodevelopmental disorders, since all communication, drives, and IoT resources interact with the environment through it.

2

Materials and Methods

The child stimulation center designed contains a minimum structure with bubble column, sensory panel, ball pool, fiber cascade, essence diffusers, red, green and blue (RGB) perimetral lighting, wind simulator, set of seven reflectors, human piano, and the interactive bubble machine [5]. Figure 1 shows the disposition of the equipment in the multi-sensory environment. The sensory panel is a wooden wall with three modules, divided by areas. The first module offers games and activities for fine motor coordination, with all types of locks and a set of parallel objects, as well as a course game that works on fine motor coordination, the representation and indication of numbers, and the notion of quantity. The second module integrates various musical instruments and objects related to letters and numbers, such as an abacus. The third module is composed of animals with distinct textures. The environment allows working with audio and video editing and interacting with all the equipment via IoT for fairy-tale stories or didactic teaching. The didactic approach involves visual activities, stimulating vision, smell, hearing, touch, thermoception, proprioception, and the vestibular system, besides the logical reasoning developed with the specific activities applied. Using a Makey Makey®, a board made up of microcontrollers with holes through which cables are connected to components or objects, the systems were integrated into the project. Such objects are chosen according to the activity, so they can be various fruits or any electrically conductive object. In this way, with the integration, the Multi Sense system provides eight interaction options, that is, eight children playing: seven for responses or interactions with the activities and one for the user who is performing the activity. With this, it becomes possible to develop the activities that form the multi-sensory environment, with all the stimulation directed to the user [6]. In addition, with Makey Makey®,
it is possible to use the "Piano" option to work with musical notes from different instruments, stimulating logical reasoning and helping the child develop skills such as focus, operational memory, and sustained attention. The library of sounds and images of the instruments is adapted to the needs of each training session or therapy, for each patient. After the efficiency of this methodology was demonstrated in the work of Fernandes [1], the Multi Sense program was created: a software system capable of joining neuroscience with engineering [1]. With this software, it is possible to control all the pieces of equipment used in the children's therapy, simplifying the work of therapists and teachers in the application of the activities. The software was developed following the Applied Behavior Analysis (ABA) methodology, an approach in psychology used for understanding behavior that has been widely adopted in the care of people with autism [7].


Fig. 1 Example of the minimum structure of the child stimulation center

The multi-sensory environment has several devices adapted to be triggered through an IoT interface and controlled by a computer connected to a router via Wi-Fi. Each device can be triggered by the therapists alone or together, producing the expected effect in real time. In the same way, it is possible to plan more specific training sessions and activities using the resources offered by the system. The developed materials can be saved in a library for later use and editing, or shared with other therapists and the multidisciplinary team. The professional can use images from the available standard library (about twenty) or add new libraries. In Fig. 2 there are four options: "Studio" (to set up the movie), "Efeitos" (to trigger the effects individually), "Piano" (to play), and "Atividades" (to create a new activity or choose a ready one). It is possible to create activities that involve learning colors, animals, and fruits, as well as other concepts or contents. As an example, up to seven colors can be used in the area of the screen where the answers are found, together with a written or visual tip as a question, so that the user can relate the question, written statement, or visual symbols to the answer. The system provides a control interface to trigger an isolated piece of equipment for specific activities, including those involving the palate and the taste system. To achieve this, the professional can replace the plates used for Makey Makey® activities with real fruits and vegetables, in addition to the motor and balance system (Fig. 3).
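The device-triggering scheme above can be sketched as a small command serializer. The actual Multi Sense IoT protocol is not documented in the text, so the device names, JSON fields, and the delivery endpoint below are all assumptions for illustration:

```python
import json

def build_command(device: str, action: str, rgb=None, duration_s=None):
    """Serialize a hypothetical trigger command for one multi-sensory device."""
    cmd = {"device": device, "action": action}
    if rgb is not None:
        cmd["rgb"] = list(rgb)          # e.g. perimetral lighting color
    if duration_s is not None:
        cmd["duration_s"] = duration_s  # e.g. wind simulator burst length
    return json.dumps(cmd)

payload = build_command("bubble_column", "on", rgb=(0, 128, 255), duration_s=30)
print(payload)
# The controlling computer would then send this over Wi-Fi to the device's
# controller, e.g. with urllib.request.urlopen("http://<device-ip>/cmd", ...)
# (endpoint shown only as a placeholder).
```
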

The interface also allows the user, even without any programming knowledge, to make sensory videos. Any video can be used as long as it is in a format that can be played on the computer. It is recommended that videos be played at a resolution of 720 pixels for better visualization, although other resolutions are accepted; the higher the video resolution, the higher the required capacity of the computer that runs the program. The activities are customized according to the professional's application to the child. At the end of each activity, a report is generated in Excel® format for the professional, with data on errors, correct answers, activity time, date, and other information used to demonstrate the effectiveness of the program, which was developed with the objective of measuring the evolution of each patient within the treatment proposed in the multi-sensory environment.
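The end-of-activity report described above can be sketched with the standard library. The field names are illustrative (the actual Multi Sense report layout is not documented here), and CSV is used in place of a true Excel file, which would require a third-party library such as openpyxl:

```python
import csv
import io
from datetime import date

def write_report(results, out):
    """Write per-activity metrics (list of dicts) as a CSV session report."""
    fields = ["date", "activity", "correct", "errors", "duration_s"]
    writer = csv.DictWriter(out, fieldnames=fields)
    writer.writeheader()
    writer.writerows(results)

# Example session: one "colors" activity with hypothetical metrics
buf = io.StringIO()
write_report([{"date": date(2020, 10, 26).isoformat(), "activity": "colors",
               "correct": 6, "errors": 1, "duration_s": 185}], buf)
print(buf.getvalue())
```

Accumulating these rows per patient over time is what would let the professional track evolution across sessions, as the text proposes.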

3

Results

The 6D multi-sensory environment was first implemented at the School of Talent, located in São Paulo, Brazil, where the first space was installed. Two more sensory spaces were later installed in the Northeast of Brazil, and they have become a working tool for psychologists, physicians, and therapists who work with the development of patients with ASD. In general, these


Fig. 2 Screen of options for the user

Fig. 3 Example of activity screen related with palate and color

environments, shown in Fig. 4, account for approximately one hundred and fifty activities provided by the interaction of the multi-sensory program features. The program has a simple and easy-to-understand interface for customizing effects based on the video presentation, as shown in Fig. 5. Its work area is composed of three divisions: area 1 is where the video is attached; area 2 contains the options play/pause, full screen, stop, new, open, save, and export; and area 3 is for editing video effects. In Fig. 6, the video has already been selected and edited. The gray menu on the left side of the image is a list of all the equipment in the multi-sensory environment. The colored bars in front of each item indicate its activation time, and the color is the same reproduced by the equipment that uses RGB. The system allows the professional to edit the library so that it is fully adapted to the level of development of the


Fig. 4 One of the environments already installed in Brazil

patient, proposing challenges that stimulate the areas in which the patient presents a greater deficit. At the end of each activity, the software generates a report containing data on errors, correct answers, activity time, date, and other information for the professional.

4

Discussion

The efficacy of a sensory and motor stimulation program on cognitive abilities in preschool children has been proven. In addition, results from four months of intervention showed that the neuroscience-based program on brain development and learning significantly improved the development and performance of infants in early childhood without any specific diagnosis. Furthermore, behavioral modification techniques have been shown to be very effective in treatment, especially in more severe cases of autism [7]. Therefore, the challenge for this proposal is to prove its efficacy in the development of atypical children

through reports from therapists and from the Multi Sense software itself. For this purpose, the report generated by the software, with data on errors, correct answers, activity time, date, and other information, should be used by the professional at the end of each activity to measure the evolution of each patient within the treatment proposed in the multi-sensory 6D environment [1]. The multi-sensory environment, which offers technological resources tuned to the advances of neuroscience, proposes the integration of therapies with the professional team, promoting multidisciplinary clinical care. From this follows the financial viability of the treatment of children with ASD, considering that there is a growing demand for quality services capable of formulating diagnoses and providing the necessary support, according to the Protocol of the State of São Paulo for Diagnosis, Treatment and Referral of Patients with Autism Spectrum Disorder [8]. The technology associated with neuroscience aims to offer children the possibility of having contact with all the


Fig. 5 Display panel demo, without the chosen video

Fig. 6 Display panel demo, with the chosen video

sensory information simultaneously, providing their integration and rehabilitation. In this way, new neural connections are created, favoring neuroplasticity in early childhood, the main stage of brain development, and increasing the chances of making these individuals independent and autonomous in managing their own lives [9].


Clinical care is proposed through activities and games that promote sensory development and cognitive abilities. The environment is designed to transform a common therapy space into an interactive medium that reacts according to a schedule and the therapist's needs, providing simultaneous and contextualized technological resources. Therefore, the software is a support tool that makes it possible to increase, decrease, change, and monitor the amount of stimuli offered to the children during the sessions in real time. With the bubble column it is possible to create and modify environments where colors, vibration, reflections, and interactivity open up countless possibilities for working on visual, tactile, and motor skills. Thus, it promotes the development of attention, concentration, and the cause/effect relationship, as well as relaxation and touch, allowing control of the variation and intensity of colors and bubbles in varied sequences and providing smooth changes of movement through the effervescent water. This item provides an automatic visual stimulus, which can accelerate or decelerate the rhythm and variety of the proposed exercises, and it can be combined with several resources, objects, and sounds, increasing the repertoire of activities and sensory stimuli of the environment. The sensory panel is a wall with toys designed for children in early childhood that aims to stimulate vision, touch, hearing, fine motor coordination, and cognitive skills such as concentration, strategy, and logical reasoning. This tool offers therapists innumerable possibilities for intervention and integration of stimuli, expanding the range of activities during the therapies. The transparent ball pool with LED (light-emitting diode) lighting is recreational equipment developed for more relaxing therapies that involve bodily and tactile perception.
In the same way, they can be oriented to direct activities if combined with other materials and toys, because they allow the identification of textures, sound objects, among other stimuli proposed during the therapeutic procedure. It works with and without lighting, interacts with the other equipment of the park and can be used individually, in pairs or small groups, of three to four children. The fiber optic cascade is used for tactile desensitization during the proposed therapies in the ball pool. Its RGB system allows the variation of colors and also the programming and variation between them, extending its functionality. The cascade can also be used in isolation for other visual activities, allowing the therapist to program specific actions and commands during training and therapy. The diffuser of aromatic essences is an equipment that was developed to work with different types of smells and aromas during the trainings and therapies. It uses microcapsulated essences based on nanotechnology and dissipates the essence only when activated, allowing the retention of the aroma by the time of activation of the fans, being able to be fired a second or third aroma after a few seconds of


interruption of the first one. It is ideal for developing olfactory memory of fruits, spices, foods, flowers, and other odors. It can be associated with food recognition activities with and without images, musical exercises, and stories, as well as supporting gustatory stimuli for children who have hypersensitivity and food selectivity.

The RGB perimetral lighting is an RGB LED lighting system that runs along the entire ceiling of the environment and is used to simulate dawn, dusk, lightning, or other situations involving colored lights. Colors are obtained by combining red, green, and blue, which enables a large number of color variations as well as their alternation. The wind simulator is made up of a high-power fan used to simulate wind, storms, cold, and the sensation of blowing, and it can be combined with other effects. The artificial rain is formed by a set of three or four clouds illuminated with LEDs and fixed to the ceiling, in the center of the environment. This system fires controlled splashes of artificial rain, producing the effect in real time in activities involving rain or the sensation of wetness. The set of seven reflectors is fixed to the ceiling around the clouds and is used to direct light to certain locations in the room according to the needs of the scheduled activities. It can be used together and in synchrony with the other equipment and resources offered in the space, as well as in isolation, in activities involving questions and answers, sequences, dance, and body expression. In addition, the laser ceiling is a laser projection system for starry-sky simulation, also used for moments marking the achievement of steps and rewards. The "Human Piano" is a game that allows the substitution of keys or common actions in games with different parts of the human body. The keys can also be replaced by vegetables, fruits, chips, and materials of different shapes, colors, and textures.
The piano offers a library of images, musical notes, and varied sounds that can be used in various therapeutic activities. The interactive bubble machine produces soap bubbles by means of wireless commands to simulate activities involving a seabed, lake, or river environment. It can also provide moments of celebration at the end of training, or when the child reaches an established goal, for motivation purposes. All the devices in the environment stimulate sensory integration. According to Ayres (2005), sensory integration is a neurological process by which the human being becomes able to perceive, learn, and organize the sensations received from the environment and from one's own body in order to create adaptive responses. It is the organization of information coming from different sensory channels and the ability to relate stimuli from one channel to another in order to produce an adaptive response. Therefore, by filtering all information and selecting what should be the focus of attention, meaning is attributed to experiences and forms the


R. O. B. A. Letícia et al.

basis for learning, social behavior, and emotional development [10]. According to Luria [11], brain dysfunctions and lesions interfere with the processing of learning information. These dysfunctions can involve reception (causing perceptual problems), integration (generating difficulties in memory retention and elaboration), and expression (leading to disturbances in ordering, sequencing, planning, and execution) [11]. Thus, a program of sensory and motor stimulation based on neuroscience, offered in a multi-sensory space developed especially for children in early childhood, when they learn to use and integrate their senses, and rich in equipment, materials, interactive games, and toys, will promote better brain development and performance [1]. According to Calvert, Spence, and Stein (2004), multi-sensory processing of stimuli is a principle of brain structure and function, and even experiences that seem specific to only one sense are modulated by the activity of the other senses, showing that we use information from all the sensory organs for many of the brain activities we perform [12]. The subsequent goals for finalizing the project are to obtain scientific evidence of the effectiveness of the method for atypical children, through submission to the Research Ethics Committee and data collection with the clinical team that has already interacted with the multi-sensory environment.

5

Conclusions

Based on the above, it can be concluded that the creation and development of the multi-sensory 6D environment and, with it, the interface of the Multi Sense program play a fundamental role in transforming a conventional therapy space into an interactive environment that is highly conducive to learning and able to react according to the professional's proposal using technological resources. The technology of the environment is intended to integrate the therapies, so that professionals can work together, reducing the time dedicated and, consequently, the costs. The multi-sensory environment was designed to assist professionals in the areas of education and health in the application of therapies and educational activities, so that the individual can become autonomous in daily tasks and well prepared for the job market. Using technological progress in favor of therapy is a necessary and extremely important task, since it is capable of providing sensory stimuli and controlled environments, allowing the use of games for the development of learning and the treatment of neurodevelopmental disorders.

This integration immerses children with such disorders in a highly qualified environment with the presence of specialized health professionals. That is, the combination of these professionals optimizes the therapy in such a way as to reduce costs and time and, especially, to meet the needs of the child.

Acknowledgements This work was supported by the Center of Development and Transference of Assistive Technology CDTTA; the National Institute of Telecommunications—INATEL; and the Foundation for Research Support of the State of Minas Gerais—FAPEMIG.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Fernandes A (2017) Programa de estimulação sensorial e motora em habilidades cognitivas em crianças pré-escolares. Faculdade de Ciências Médicas da Santa Casa de São Paulo
2. NeuroConecta. O que é o Transtorno do Espectro do Autismo (TEA). Available at https://neuroconecta.com.br/o-que-e-otranstorno-do-espectrodo-autismo-tea/. Accessed 25 Mar 2020
3. NeuroConecta. Deficiência intelectual e autismo: compreender para desenvolver. Available at https://neuroconecta.com.br/deficiencia-intelectuale-autismo-compreender-para-desenvolver. Accessed 10 Apr 2020
4. Schuchat R, Dauphin AL et al (2019) Prevalence of autism spectrum disorder among children aged 8 years. MMWR Surveill Summ, pp 1–23
5. Neurobrinq. Produtos e equipamentos - Ferramentas Tecnológicas para Clínicas. Available at www.neurobrinq.com/produtoseequipamentos/. Accessed 15 Jan 2020
6. Makey Makey. Available at https://makeymakey.com/. Accessed 23 Jan 2020
7. Autism Speaks. Applied Behavior Analysis (ABA). Available at www.autismspeaks.org/applied-behavior-analysis-aba-0. Accessed 12 Jan 2020
8. Governo SP. Protocolo do Estado de São Paulo de Diagnóstico, Tratamento e Encaminhamento de Pacientes com Transtorno do Espectro Autista. Available at www.saude.sp.gov.br/resources/ses/perfil/profissional-dasaude/homepage//protocoloteasp2014.pdf. Accessed 06 Dec 2019
9. Serra S (2012) O multissensorial no caso português: uma abordagem possível? Dissertação de mestrado (Ensino do Português como Língua Segunda e Estrangeira), Faculdade de Ciências Sociais e Humanas, Universidade Nova de Lisboa, Lisboa, 102 pp
10. Ayres A (2005) Sensory Integration and the Child. Western Psychological Services, Los Angeles
11. Luria A (1981) Fundamentos de Neuropsicologia. Tradução de Juarez Aranha Ricardo. Livros Técnicos e Científicos, Rio de Janeiro
12. Calvert G, Spence C, Stein B (2004) The Handbook of Multisensory Processes. The MIT Press, Cambridge, MA

Fuzzy System for Identifying Pregnancy with a Risk of Maternal Death

C. M. D. Xesquevixos and E. Araujo

Abstract

A fuzzy system for identifying pregnancy with a risk of maternal death is proposed in this paper. The system aims at identifying a high risk of maternal death, be it during pregnancy or within 42 days after childbirth. The maternal age, the number of prenatal appointments, and the number of previous children/childbirths correspond to the input linguistic variables used to compute the risk of maternal death. The proposed approach employs the Mamdani inference system to represent the uncertainty and imprecision inherent to these diagnostic variables. Results demonstrate that the non-invasive system can be used by different health professionals, including in a screening process for pregnancy, thus helping to avoid maternal deaths.

Keywords

Fuzzy logic • Maternal death • Risk • Pregnancy

1

Introduction

Maternal mortality is used worldwide as an indicator of a country's health: it attests to the social and economic status of women and points out flaws in public health policies and strategies

C. M. D. Xesquevixos (B) · E. Araujo PPG Stricto Sensu em Engenharia Biomédica, Universidade Anhembi Morumbi (UAM), São José dos Campos, SP, Brazil E. Araujo Centro de Inovação, Tecnologia e Educação (CITE), Parque Tecnológico, São José dos Campos, Brazil Inteligência Artificial em Medicina e Saúde (IAMED), São José dos Campos, Brazil

[1]. Such an indicator refers to deaths of pregnant women, or of those who gave birth within the previous 42 days, due to or worsened by the pregnancy, regardless of the time and locality in which the pregnancy occurred [2]. The higher the number of deaths, the more precarious the socioeconomic conditions and the more difficult the access to health services. Due to its importance, reducing maternal deaths is a goal of the Sustainable Development Goals [2]. The target is to reduce global maternal mortality to 70 per 100,000 live births by 2030, although estimates suggest it may not be achieved in some countries [3]. Preventing maternal death hence pertains to a policy of reproductive and human rights of women [4]. The reduction of maternal mortality represents a challenge for society, health professionals, and governments. In clinical practice, early disease detection, multidisciplinary assessment, and appropriate treatment influence the outcome of a risky pregnancy. Identifying high-risk groups is therefore important for determining public policies and efficient strategies [5,6]. The pregnancy and the different risk factors for the mother's death are inherently uncertain or imprecise. The use of fuzzy set theory to identify the risk of maternal death comes about as an alternative to deal with risk factors that are inherently multivalued, allowing a rigorous and systematic use of these subjective factors. For instance, a fuzzy predictive model estimates the risk of maternal mortality in Bangladesh with an area under the curve (AUC) of 88.95% [7]. The advantage of using fuzzy logic is the inherent flexibility of this system when compared to traditional, Aristotelian logic. While the latter accepts only discrete true or false values, fuzzy logic is closer to reality, as it is able to handle information that requires infinite degrees of truth.
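The contrast between bivalent and fuzzy classification described above can be made concrete with a small sketch. The threshold and ramp parameters below are illustrative assumptions only, not the membership functions defined later in this paper:

```python
# Crisp (Aristotelian) classification: a hard threshold yields only True/False.
def crisp_advanced_age(age, threshold=35):
    return age >= threshold

# Fuzzy classification: a ramp assigns a degree of truth in [0, 1].
# The 30-40 ramp is an illustrative assumption, not a partition from the paper.
def fuzzy_advanced_age(age, start=30.0, end=40.0):
    if age <= start:
        return 0.0
    if age >= end:
        return 1.0
    return (age - start) / (end - start)

print(crisp_advanced_age(34), crisp_advanced_age(36))  # False True
print(fuzzy_advanced_age(34), fuzzy_advanced_age(36))  # 0.4 0.6
```

A crisp rule treats a 34- and a 36-year-old mother as categorically different, while the fuzzy membership assigns nearby degrees of truth, which is the behavior the Mamdani system of Sect. 2 exploits.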
This paper aims at designing a fuzzy system for identifying pregnancy with a risk of maternal death. The proposed approach addresses a forecasting system to assist in identifying the risk of death, improving the quality of prenatal and obstetric care and reducing maternal mortality.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_153



2


Identifying Pregnancy with a Risk of Maternal Death Based on Fuzzy Logic

The risk of maternal death is mostly associated with occupation, age, education, obstetric characteristics, and preexisting health problems [8,9]. When monitoring pregnant women, prenatal care supports the identification of risk factors for maternal morbidity and death, allowing interventions to be carried out in time to avoid death [1]. The mother's age at childbirth is a relevant factor for low-birth-weight newborns, and mothers under 20 or over 40 years old are more likely to die [10]. Prenatal care is a further protective factor against maternal and neonatal death and is internationally recommended, although the recommended number of appointments varies from country to country [10]. Childbirth causes physiological stress; in addition, a higher number of previous children is indicative of a decreased demand for assisted care in subsequent pregnancies, which leads to an increase in diseases and problems for the mother in the current pregnancy. The number of previous children/childbirths is therefore an important risk factor for death, with multiparous women being, in general, associated with a higher risk of death and comorbidities [11]. In this sense, the age of the mother, the number of appointments attended during prenatal care, and the number of previous children/childbirths compose the input diagnosing premise space mapped into the maternal death risk assessment by a fuzzy inference system.

2.1

Fuzzy Input-Output Inference Mapping

The fuzzy modeling employed in this paper uses Mamdani-type inference [12], suited to imitating and representing knowledge drawn from the experience of healthcare professionals. It is characterized as a set of IF-THEN rules,

Rj : IF x1 is Mj1(x1) AND … AND xn is Mjn(xn) THEN y is Nj,    (1)

where the antecedent part, the IF proposition, defines the premise, while the consequent part, the THEN proposition, refers to the conclusion, both described by linguistic expressions in the propositional form P = "x is M". The index j = 1, 2, …, m identifies the j-th rule, m being the number of rules. The elements xi and y refer, respectively, to the i-th input and to the output, objects belonging to distinct classes (sets) named universes of discourse, xi ∈ Xi and y ∈ Y, to which linguistic variables are also assigned. The input vector, x = [x1, …, xn]T, is related to the premises, while the output, y, is associated with the conclusion. The connective "AND" corresponds to a T-norm, t(x, y); when using the Mamdani fuzzy system, the T-norm is carried out by the minimum operation. The defuzzification operation is herein carried out by employing the center of area. The elements Mi ⊂ Xi and N ⊂ Y are fuzzy sets, also assigned linguistic terms, partitioning the respective universes of discourse.

2.2

Input and Output Fuzzy Sets

The input fuzzy sets MjMaternal-Age, MjAppointment-Prenatal and MjNumber-Children-Childbirth, for jMaternal-Age = 1, …, 5, jAppointment-Prenatal = 1, …, 4 and jNumber-Children-Childbirth = 1, 2, 3, and the output fuzzy sets NjRisk-Maternal-Death, for jRisk-Maternal-Death = 1, …, 4, have their membership functions defined according to the following general description. Consider a membership function, µM : Xi → [0, 1], defined upon a universe of discourse, Xi, to which is associated a set of terms T = {M1, M2, …, Mk}; for a linguistic term Mj ∈ T, c(Mj) = {x0 ∈ Xi | µMj(x0) = 1} and s(Mj) = {x0 ∈ Xi | µMj(x0) > 0}, respectively, denote the core and support of Mj. In this paper, each linguistic term Mj ∈ T is shaped according to a trapezoidal membership function, µMij(xi; a, b, c, d) = max(min((xi − a)/(b − a), 1, (d − xi)/(d − c)), 0), represented by a 4-tuple ⟨s1, c1, c2, s2⟩, with s(M) = [s1, s2] being the support and c(M) = [c1, c2] the core parameters. The system is designed by employing Ruspini partitions.
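A minimal Python rendering of this trapezoidal membership function is sketched below. The function name trapmf and the explicit handling of the a = b and c = d shoulder cases (to avoid division by zero) are implementation choices, not part of the paper:

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership: 4-tuple (a, b, c, d), support [a, d], core [b, c]."""
    if x < a or x > d:
        return 0.0
    if b <= x <= c:           # inside the core (also covers a == b and c == d shoulders)
        return 1.0
    if x < b:                 # rising edge, a < x < b
        return (x - a) / (b - a)
    return (d - x) / (d - c)  # falling edge, c < x < d

# Low Age term of the Maternal-Age variable, <11, 13, 19, 22> (Sect. 2.3.1):
print(trapmf(12, 11, 13, 19, 22))  # 0.5, halfway up the rising edge
print(trapmf(15, 11, 13, 19, 22))  # 1.0, inside the core
print(trapmf(25, 11, 13, 19, 22))  # 0.0, outside the support
```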

2.3

Input Linguistic Variables and Linguistic Terms

2.3.1 Maternal Age
Maternal age during pregnancy is represented by the set of linguistic terms TMaternal-Age = {Extreme Low Age (ELA), Low Age (LA), Average Age (AA), Average High Age (AHA), High Age (HA)}, which partition the input variable such that their associated membership functions are distributed over the universe of discourse XMaternal-Age = [8, 65], as depicted in Fig. 1a. The terms for XMaternal-Age are given by MExtreme Low Age = ⟨8, 8, 11, 13⟩, MLow Age = ⟨11, 13, 19, 22⟩, MAverage Age = ⟨19, 22, 29, 33⟩, MAverage High Age = ⟨29, 33, 40, 45⟩ and MHigh Age = ⟨40, 45, 65, 65⟩.

2.3.2 Number of Prenatal Appointments
Pregnancy must be followed up and monitored through prenatal care. This input variable presents four partitions, determined by the set of linguistic terms TAppointment-Prenatal = {Extreme Low Prenatal (ELP), Low Prenatal (LP), Average Prenatal (AP), High Prenatal (HP)}. The associated membership functions are distributed over the universe of discourse XAppointment-Prenatal = [0, 10], as depicted in Fig. 1b: MExtreme Low = ⟨0, 0, 1, 2⟩, MLow = ⟨1, 2, 3, 4⟩, MAverage = ⟨3, 4, 6, 7⟩ and MHigh = ⟨6, 7, 10, 10⟩.


(a) Maternal Age


(b) Appointment Prenatal

(c) Number of previous children-childbirth

Fig. 1 Fuzzy partition of the input variables

2.3.3 Number of Children and Childbirth
The input variable corresponding to the number of children and childbirths is partitioned by the linguistic terms TNumber-Children-Childbirth = {Few (FC), Average (AC), Many (MC)}, representing the number of previous children of the pregnant woman. The associated membership functions are distributed over the universe of discourse XNumber-Children-Childbirth = [0, 10], as depicted in Fig. 1c: MFew = ⟨0, 0, 1, 2⟩, MAverage = ⟨1, 2, 3, 4⟩ and MMany = ⟨3, 4, 10, 10⟩.

2.3.4 Output Diagnosing Variable
The output linguistic variable concerning the risk of maternal death is partitioned into four trapezoidal fuzzy sets, NjRisk-Maternal-Death, for jRisk-Maternal-Death = 1, …, 4, as illustrated in Fig. 2. The linguistic terms TRisk-Maternal-Death = {Low (LR), Average (AR), High (HR), Very High (VHR)} are distributed over the range XRisk-Maternal-Death = [0, 100]. The membership functions partition the universe of discourse as NLow = ⟨0, 0, 7, 10⟩, NAverage = ⟨7, 10, 22, 28⟩, NHigh = ⟨22, 28, 45, 55⟩ and NVery High = ⟨45, 55, 100, 100⟩.
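The Ruspini property of these partitions (memberships of adjacent terms summing to one at every point of the universe of discourse) can be checked numerically. The sketch below encodes the Maternal-Age partition of Sect. 2.3.1 with the trapezoidal function of Sect. 2.2; the sampling step of 0.1 is an arbitrary choice:

```python
def trapmf(x, a, b, c, d):
    # Trapezoidal membership with support [a, d] and core [b, c].
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Maternal-Age partition (Sect. 2.3.1): five trapezoids over X = [8, 65].
MATERNAL_AGE = {
    "ELA": (8, 8, 11, 13),
    "LA":  (11, 13, 19, 22),
    "AA":  (19, 22, 29, 33),
    "AHA": (29, 33, 40, 45),
    "HA":  (40, 45, 65, 65),
}

# Ruspini partition: memberships sum to 1 everywhere in [8, 65].
for i in range(0, 571):
    x = 8 + i / 10
    total = sum(trapmf(x, *params) for params in MATERNAL_AGE.values())
    assert abs(total - 1.0) < 1e-9, (x, total)
print("Ruspini property holds over [8, 65]")
```

The same check passes for the prenatal-appointment, children, and output partitions of Sects. 2.3.2-2.3.4, since each core boundary coincides with the support boundary of the adjacent term.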

2.4

Fuzzy Assessment for Maternal Death Risk Rules

The resulting Mamdani-based fuzzy system for identifying the risk of maternal death¹ is given as:

¹ Disclaimer: The fuzzy rules listed here should not be used in clinical diagnosis without consulting experienced physicians.

Fig. 2 Output variable

R1 : IF Maternal-Age is Extreme Low AND Appointment-Prenatal is Extreme Low AND Number-Children-Childbirth is Many THEN Risk-Maternal-Death is Very High
R2 : IF Maternal-Age is Extreme Low AND Appointment-Prenatal is Extreme Low AND Number-Children-Childbirth is Average THEN Risk-Maternal-Death is Very High
…
R59 : IF Maternal-Age is High AND Appointment-Prenatal is High AND Number-Children-Childbirth is Average THEN Risk-Maternal-Death is Average
R60 : IF Maternal-Age is High AND Appointment-Prenatal is High AND Number-Children-Childbirth is Many THEN Risk-Maternal-Death is Average

(2)



The proposed fuzzy system for maternal death risk assessment yields a set of 60 valid fuzzy regions in a three-dimensional input premise space, xMAN = [xMaternal-Age, xAppointment-Prenatal, xNumber-Children-Childbirth]T.
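The inference pipeline of Eqs. (1) and (2), i.e. fuzzification, minimum T-norm, Mamdani (min) implication, maximum aggregation, and center-of-area defuzzification, can be sketched as follows. Only rules R1 and R60 (the two printed in full in Eq. (2)) are encoded, since the complete 60-rule base is not reproduced in the chapter; this is therefore a structural outline under that assumption, not the authors' actual system:

```python
import numpy as np

def trapmf(x, a, b, c, d):
    # Trapezoidal membership with support [a, d] and core [b, c] (vectorized).
    x = np.asarray(x, dtype=float)
    rise = np.ones_like(x) if a == b else (x - a) / (b - a)
    fall = np.ones_like(x) if c == d else (d - x) / (d - c)
    return np.clip(np.minimum(rise, fall), 0.0, 1.0)

# Subsets of the partitions of Sect. 2.3 (only the terms used by R1 and R60).
AGE = {"ELA": (8, 8, 11, 13), "HA": (40, 45, 65, 65)}
PRE = {"ELP": (0, 0, 1, 2), "HP": (6, 7, 10, 10)}
CHI = {"MC": (3, 4, 10, 10)}
RISK = {"VHR": (45, 55, 100, 100), "AR": (7, 10, 22, 28)}

# Two of the 60 rules of Eq. (2): R1 and R60.
RULES = [
    (("ELA", "ELP", "MC"), "VHR"),   # R1
    (("HA", "HP", "MC"), "AR"),      # R60
]

def mamdani(age, pre, chi):
    ys = np.linspace(0, 100, 2001)                 # sampled output universe
    agg = np.zeros_like(ys)
    for (ta, tp, tc), tr in RULES:
        # Firing strength: minimum T-norm over the three antecedents.
        w = min(float(trapmf(age, *AGE[ta])),
                float(trapmf(pre, *PRE[tp])),
                float(trapmf(chi, *CHI[tc])))
        # Mamdani implication (min) and aggregation (max).
        agg = np.maximum(agg, np.minimum(w, trapmf(ys, *RISK[tr])))
    if agg.sum() == 0.0:
        return None                                # no rule fired
    return float((agg * ys).sum() / agg.sum())     # center of area

print(mamdani(9, 0, 6), mamdani(46, 8, 6))  # roughly 74.9 and 16.8
```

For a 9-year-old with no prenatal care and six previous children, R1 fires with full strength and the defuzzified score lands near 75; for a 46-year-old with eight appointments and five children, R60 yields a score near 16.8, both comparable to the Very High and Average scores reported in Table 1.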

3

Results and Discussion

The proposed fuzzy system for assessing the risk of maternal death yields the decision-making surfaces in Fig. 3. Given the human visual limitation to objects of no more than three dimensions, three surfaces are obtained, corresponding to the risk analysis of maternal death. At a glance, the resulting surfaces show the effect of the number of prenatal appointments being modulated by maternal age (Fig. 3a). Young mothers and mothers over the age of 35 are at high risk of dying. Further, the maternal risk surface shows that, at the extremes of childbearing age, women over 35 in their first pregnancy have a higher risk of death, in accordance with the literature [13], as illustrated in Fig. 3b. Nevertheless, even at an advanced age, attending more than 7 prenatal appointments decreases the risk of death, acting as a protective effect for the mother (Fig. 3a). Pregnancy in adolescence is mostly related to low schooling, poverty, or inadequate social and/or economic conditions, characteristics that are, in general, sources of health complications in pregnancy [14]. According to the maternal death risk surface, the number of previous children/childbirths directly influences maternal death in the 20-30-year age group, such that the higher the number of previous children, the higher the risk of maternal death (Fig. 3b). Early pregnancies influence both the health of mothers and that of their newborns. Pregnancy and childbirth complications are the leading cause of death among girls aged 15-19 years worldwide, and 99% of maternal deaths occur in low- and middle-income countries [15]. For instance, adolescent mothers aged 10-19 years face higher risks of eclampsia and systemic infections than women aged 20-24 years [16,17]. Further, women over 35 years old with an increasing number of previous children are subjected to a high risk of death. Age over 30 is also identified as a factor strongly associated with maternal mortality for multiparous women.

The resulting fuzzy model also points out the beneficial effect of the number of prenatal appointments. For women who attend 7-10 appointments, even in the presence of 4 previous children, the risk of death is reduced. Conversely, this risk worsens as the number of previous children increases when there are only 0-4 appointments. The number of children does not have a significant influence on maternal death when the woman attends 4-6 appointments (Fig. 3c). In turn, the number of appointments has limited influence on the risk of death for women between 25 and 35 years old; nevertheless, low or very low ages increase the risk of death when the number of appointments is lower than 6 (Fig. 3a). Evidence demonstrates that the evaluation of obstetric risk in the first prenatal appointment, preferably by the 16th week of pregnancy, and the timely referral of at-risk pregnant women to specialized care contribute to the reduction of maternal mortality, and this action must be urgently implemented [18].

In order to illustrate the application of the proposed fuzzy system for maternal death risk, consider the set of examples in Table 1. Take, first, two adolescent examples. The adolescent P1 is 11 years old, xMaternal-Age = 11, firing the linguistic terms Extreme Low and Low age. This individual did not attend prenatal care, xAppointment-Prenatal = 0, being classified as Extreme Low appointment. This adolescent has the advantage of having no previous children, xNumber-Children-Childbirth = 0, thus concerning the linguistic term Few children. The risk of maternal death is classified as Very High, scoring xRisk-Maternal-Death = 75.2. The second example, P2, is a 13-year-old adolescent, xMaternal-Age = 13.
Likewise, since fuzzy sets enable overlapping classes, this adolescent is simultaneously assigned the two linguistic terms Extreme Low and Low age, and also presents no previous children, xNumber-Children-Childbirth = 0. Contrary to the first example, this adolescent attends eight prenatal

Fig. 3 Surfaces of the fuzzy system for identifying the risk of maternal death: a XMaternal-Age × XAppointment-Prenatal, b XMaternal-Age × XNumber-Children-Childbirth, and c XAppointment-Prenatal × XNumber-Children-Childbirth



Table 1 Illustrative examples of the fuzzy system for identifying the risk of maternal death

Case | Maternal age (scale / stratification) | Prenatal appointments (scale / stratification) | Previous children (scale / stratification) | Risk of maternal death (stratification / score)
P1   | 11 / Extreme low, low                 | 0 / Extreme low                                | 0 / Few                                    | Very high risk / 75.2
P2   | 13 / Extreme low, low                 | 8 / High                                       | 0 / Few                                    | Average risk / 16.8
P3   | 18 / Low                              | 4 / Low, average                               | 1 / Few, average                           | High risk / 37.6
P4   | 22 / Low, average                     | 7 / Average, high                              | 1 / Few, average                           | Low risk / 4.0
P5   | 25 / Average                          | 9 / High                                       | 3 / Average, many                          | Low risk / 4.0
P6   | 29 / Average, average high            | 5 / Average                                    | 4 / Average, many                          | Average risk / 16.8
P7   | 36 / Average high                     | 1 / Extreme low, low                           | 1 / Few, average                           | Very high risk / 75.2
P8   | 41 / Average high, high               | 7 / Average, high                              | 0 / Few                                    | Low risk, average risk / 8.7
P9   | 48 / High                             | 8 / High                                       | 5 / Many                                   | Average risk / 16.8

exams, xAppointment-Prenatal = 8, corresponding to High appointments. In this case, the risk of maternal death decreases considerably, xRisk-Maternal-Death = 16.8, related to an Average risk, highlighting the importance of prenatal care. Further, P3 is an 18-year-old youth, xMaternal-Age = 18, now classified as Low age, who attended four prenatal appointments, xAppointment-Prenatal = 4, related to Low and Average appointments. This example highlights the influence of Low maternal age along with one previous child, xNumber-Children-Childbirth = 1, expressing a High risk of death, xRisk-Maternal-Death = 37.6. Representing the ideal age range for pregnancy [16], consider a woman, P4, aged 22 years, and, at a slightly different age of 25 years, another young woman, P5. While the first is classified as Low and Average age, the second refers to Average age. They attend 7 and 9 prenatal appointments, xAppointment-Prenatal(P4) = 7 and xAppointment-Prenatal(P5) = 9, respectively, corresponding to Average and High appointments. Their numbers of previous children are xNumber-Children-Childbirth(P4) = 1 and xNumber-Children-Childbirth(P5) = 3. Despite the differences in the input diagnosing variables, they present an equivalent Low risk of death, xRisk-Maternal-Death = 4, exemplifying that the proposed fuzzy system is able to deal with the variability of the measures. Likewise, the risks of maternal death for P6 and P9 present the same stratification and score, i.e., Average, xRisk-Maternal-Death = 16.8. The patient P6 is 29 years old, xMaternal-Age = 29, being classified as Average and Average High age, and attended five prenatal appointments, xAppointment-Prenatal = 5, triggering the linguistic term Average. In this example, there are four previous children, xNumber-Children-Childbirth = 4, corresponding to Average and Many children. Meanwhile, P9 is


48 years old, xMaternal-Age = 48, firing the High linguistic term, with a High number of prenatal appointments, xAppointment-Prenatal = 8, and Many previous children, xNumber-Children-Childbirth = 5. In these examples, the effect of prenatal care stands out: when the number of prenatal appointments is five or more, there is a decrease in the risk of maternal death, regardless of the number of previous children (Fig. 3c). The next example, P7, represents an Average High, 36-year-old woman, xMaternal-Age = 36, who attended a single prenatal appointment, xAppointment-Prenatal = 1, considered Extreme Low and Low prenatal care. Further, this woman had one previous child, xNumber-Children-Childbirth = 1, classified as Few and Average children. The risk of maternal death is classified as Very High, scoring xRisk-Maternal-Death = 75.2. This result is similar to that of patient P1 and demonstrates that the risk worsens as the number of prenatal appointments falls below 4. Conversely, P8 shows that, even at an advanced age, the risk outcome can be modulated by a high number of prenatal appointments: a 41-year-old woman, xMaternal-Age = 41, simultaneously classified as Average High and High age, with no previous childbirth, xNumber-Children-Childbirth = 0, attends seven prenatal appointments, xAppointment-Prenatal = 7, considered Average and High prenatal care. Herein, the risk of maternal death decreases to Low and Average, scoring xRisk-Maternal-Death = 8.7. As can be noted, these examples capture the differences in maternal health conditions, even in the presence of the subjectiveness, uncertainty, and imprecision related to the complexity of maternal death risk assessment. In



this sense, the proposed approach can be an alternative to support healthcare professionals in the hard task of identifying and preventing maternal deaths.


4

Conclusion

The fuzzy system proposed in this paper identifies a pregnancy with a risk of maternal death based on maternal age, number of prenatal appointments, and number of previous children. The system is non-invasive and can be applied before childbirth by different health professionals, including in a screening process for pregnancy. In this way, it can alert health professionals when there is a risk of maternal death, in order to avoid it.


Acknowledgements The author Caroline M. D. Xesquevixos thanks the Biomedical Engineering Center of Universidade Anhembi Morumbi for sponsoring her doctoral degree.


Conflict of Interest The authors declare that they have no conflict of interest.


References

1. Alencar CA Jr (2006) Os elevados índices de mortalidade materna no Brasil: razões para sua permanência. Revista Brasileira de Ginecologia e Obstetrícia 28:377–379
2. World Health Organization, UNICEF et al (2012) Trends in maternal mortality: 1990 to 2010. WHO, UNICEF, UNFPA and The World Bank estimates
3. ONU Brasil (2016) Transformando Nosso Mundo: a Agenda 2030 para o Desenvolvimento Sustentável
4. Brasil (2009) Manual dos Comitês de Mortalidade Materna
5. Kassebaum NJ, Bertozzi-Villa A, Coggeshall MS et al (2014) Global, regional, and national levels and causes of maternal mortality during 1990–2013: a systematic analysis for the Global Burden of Disease Study 2013. Lancet 384:980–1004
6. Victora CG, Aquino EML, Leal MC, Monteiro CA, Barros FC, Szwarcwald CL (2011) Maternal and child health in Brazil: progress and challenges. Lancet 377:1863–1876
7. Sumon SA, Rahman RM (2018) Fuzzy predictive model for estimating the risk level of maternal mortality while childbirth. In: 2018 International Conference on Intelligent Systems (IS). IEEE, pp 73–79
8. Say L, Chou D, Gemmill A et al (2014) Global causes of maternal death: a WHO systematic analysis. Lancet Glob Health 2:e323–e333
9. Legese T, Abdulahi M, Dirar A (2016) Risk factors of maternal death in Jimma University Specialized Hospital: a matched case-control study. Am J Public Health Res 4:120–127
10. Brasil (2012) Gestação de alto risco: manual técnico
11. Tabish SA (2020) Global maternal, newborn, and child health: so near and yet so far. Int J Sci Res 9
12. Mamdani EH, Assilian S (1993) An experiment in linguistic synthesis with a fuzzy logic controller. In: Readings in Fuzzy Sets for Intelligent Systems, pp 283–289
13. Viana RC, Novaes MRCG, Calderon IMP (2011) Mortalidade materna: uma abordagem atualizada. Comunicação em Ciências da Saúde, pp 141–152
14. Sayem AM, Nury ATMS (2011) Factors associated with teenage marital pregnancy among Bangladeshi women. Reproductive Health 8:16
15. Neal S, Matthews Z, Frost M, Fogstad H, Camacho A, Laski L (2012) Childbearing in adolescents aged 12–15 years in low resource countries: a neglected issue. New estimates from demographic and household surveys in 42 countries. Acta Obstetricia et Gynecologica Scandinavica 91:1114–1118
16. WHO (2016) Global health estimates 2015: deaths by cause, age, sex, by country and by region, 2000–2015. WHO, Geneva
17. Darroch JE, Woog V, Bankole A, Ashford LS (2016) Costs and benefits of meeting the contraceptive needs of adolescents. Guttmacher Institute
18. Calderon IMP, Cecatti JG, Vega CEP (2006) Intervenções benéficas no pré-natal para prevenção da mortalidade materna. Revista Brasileira de Ginecologia e Obstetrícia 28:310–315

Evaluation of Temperature Changes Promoted in Dental Enamel, Dentin and Pulp During the Tooth Whitening with Different Light Sources

Fabrizio Manoel Rodrigues, Larissa Azevedo de Moura, and P. A. Ana

Abstract

Considering the great demand for aesthetic treatments of the dental hard tissues, the wide availability of whitening gels and light sources, as well as the concern with the harmful effects that an increase in temperature can have on biological tissues, this study verified the temperature changes of dental enamel, root dentin and pulp chamber during the in-office tooth whitening procedure with different light sources and whitening gels. Sixty recently-extracted lower human incisor teeth were prepared and randomly distributed among six experimental groups for treatment: G1, red whitening gel without light; G2, green gel without light; G3, red gel exposed to blue LED light; G4, red gel exposed to green LED light; G5, green gel exposed to blue LED light; G6, green gel exposed to red laser light. During the treatments, as well as one minute after finishing them, the pulp and surface temperature variations of the enamel and root dentin were monitored using four thermocouples. The pulp temperature changes were 0.3 °C (G1), 0.8 °C (G2), 1.8 ± 0.5 °C (G3), 1.9 ± 0.5 °C (G4), 3.35 ± 0.6 °C (G5) and 0.34 ± 0.1 °C (G6). The temperature differences at the interface between gel and enamel were 0.3 °C (G1), 0.9 °C (G2), 4.42 ± 0.9 °C (G3), 4.9 ± 1.1 °C (G4), 4.7 ± 1.1 °C (G5) and 1.5 ± 0.2 °C (G6). The root surface temperature differences were 0.3 °C (G1), 0.6 °C (G2), 3.05 ± 0.87 °C (G3), 2.6 ± 0.96 °C (G4), 3.8 ± 0.27 °C (G5) and 0.30 ± 0.1 °C (G6), and the temperature differences on the enamel side were 0.25 °C (G1), 0.46 °C (G2), 3.2 ± 0.7 °C (G3), 2.97 ± 0.6 °C (G4), 5.13 ± 1.2 °C (G5) and 0.70 ± 0.25 °C (G6). None of the configurations of the experimental groups causes permanent damage to the dental tissues; however, the combination of green gel + blue LED should be avoided, since it is the one that promotes greater temperature increases in these tissues.

F. M. Rodrigues, L. A. de Moura, P. A. Ana (&) Federal University of ABC/Center for Engineering, Modeling and Applied Social Sciences, São Bernardo do Campo, Brazil. e-mail: [email protected]

Keywords

Laser • LED • Heat • Root • Enamel • Pulp

1

Introduction

Tooth darkening can be caused by extrinsic agents (eating habits, smoking and use of medications, among others) or intrinsic agents (such as hypomineralization). Extrinsic pigments can bond to the dental surface and change the color of the teeth. Intrinsic factors are related to the formation of enamel and dentin and are the most difficult to remove [1].

A whitening gel is basically composed of hydrogen peroxide or carbamide peroxide, one of the precursors of hydrogen peroxide. The whitening action of the gel occurs through its contact with the tooth, promoting an oxidation reaction by decomposition into water and highly reactive oxygen [2]. For the activation of whitening gels, light sources are not indispensable, since the gels are activated by small temperature changes; however, they are widely used to shorten treatment times.

It is known that an increase in temperature of 5.5 °C is capable of causing damage to the pulp tissue, while an increase of 16 °C can lead to its complete necrosis [3]. For the periodontal ligament, a temperature increase of 10 °C is not acceptable [4]. Despite the protective function of the gel, there is still a probability of a temperature increase above the 5.5 °C limit in the pulp chamber, making studies on the interaction between light sources and gels indispensable [5].

Considering the variety of gels and light sources available for in-office tooth whitening, this study aimed to assess the surface temperature of the coronary enamel and root dentin, as well as the pulp temperature, of human teeth subjected to

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_154



F. M. Rodrigues et al.

the in-office tooth whitening process, using different light sources and whitening gels.
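The pulp-damage thresholds cited in the introduction (5.5 °C for pulp damage, 16 °C for complete necrosis, plus the 3.3 °C reversible-change limit discussed later in the paper) can be collected in a small classifier; the function name and labels are ours, for illustration only.

```python
def pulp_risk(delta_t):
    """Classify risk from an intrapulpal temperature rise (°C),
    using the thresholds reported in the cited literature [3]."""
    if delta_t > 16.0:
        return "total necrosis"
    if delta_t > 5.5:
        return "possible loss of vitality"
    if delta_t > 3.3:
        return "reversible pulp changes"
    return "no expected damage"

# A mean pulp rise of 3.35 degrees C already crosses the reversible-damage line
print(pulp_risk(3.35))
```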

2

Material and Method

After approval of the present study by the Research Ethics Committee of the Federal University of ABC (CEP-UFABC, CAAE 49,456,215.7.0000.5594), a blind in vitro study was conducted in which 60 recently-extracted lower human incisor teeth were used. After cleaning, disinfection and removal of periodontal and pulp tissues, the pulp chambers were filled with a thermo-conductive paste and the samples were randomly distributed into 6 experimental groups (n = 10) for in-office dental bleaching treatments (Table 1).

In groups 1, 3 and 4, a bleaching gel based on 35% hydrogen peroxide with red thickener (Whiteness HP, FGM, Brazil) was used; in groups 2, 5 and 6, a bleaching gel based on 35% hydrogen peroxide with green thickener (Total Laser, Clean Line, Brazil) was used. Both gels were manipulated using 1 drop of thickener to 3 drops of hydrogen peroxide, according to the manufacturers' instructions.

Three light sources were used in the treatments. In groups 3 and 5, the Whitening Lase II device (DMC Equipamentos, Brazil) was used, which has 6 LEDs emitting blue light (λ = 470 ± 10 nm); during the experiments, only the central LED was used, and the other LEDs were covered with black tape. In group 4, a non-commercial green LED was used (λ = 515 nm). In group 6, the Brite Laser Max device (Clean Line, Brazil) was used (λ = 654 nm).

In the bleaching process, a uniform layer of approximately 1 mm of bleaching gel was applied to the vestibular region of the enamel of each sample. The samples were irradiated by the light source at a distance of 5 mm, 30 s after the application of the gel. The whitening procedure consisted of alternating 1 min with the light source on and 1 min with it off, until a total of 2 min of irradiation; that is, in minutes 0–1 and 2–3 the light source remained on, and from 1 to 2 min the light was off. After the treatment, the gel was removed from the surfaces by drying with absorbent paper.
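The on/off photoactivation cycle described above can be written out explicitly; the schedule below simply restates the protocol (1 min on, 1 min off, 1 min on).

```python
# (state, seconds) pairs for the 3-minute whitening window
schedule = [("on", 60), ("off", 60), ("on", 60)]

irradiation_s = sum(t for state, t in schedule if state == "on")
total_s = sum(t for _, t in schedule)
print(irradiation_s, total_s)  # 120 s of irradiation in a 180 s window
```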
During the treatments, as well as one minute after the end of them, temperature monitoring was performed using K-type thermocouples (chromel–alumel, 0.05 mm diameter, resolution of 0.2 °C and sensitivity from 0.1 to 100 °C) and an NI USB-9162 acquisition board (National Instruments, USA). Four channels were used to perform temperature measurements at four different points in the sample (Fig. 1). The bleaching gels and light sources were also characterized regarding their absorption and emission wavelengths, respectively, using a spectrophotometer (Biospectro SP-220, Brazil) and a USB-650 Red Tide spectrometer (Ocean Optics, USA). The optical power of the light sources was measured using a powermeter (FieldMax II TOP, Coherent, USA). The results obtained were statistically evaluated using the Kruskal–Wallis and Student–Newman–Keuls tests, at a significance level of 5%.

Table 1 Experimental groups of this study

Experimental group    Thickener color    Light source
G1                    Red                —
G2                    Green              —
G3                    Red                Blue LED
G4                    Red                Green LED
G5                    Green              Blue LED
G6                    Green              Red laser

Fig. 1 Positioning of the four type-K thermocouples in a sample
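The group comparison described above (Kruskal–Wallis followed by Student–Newman–Keuls at the 5% level) can be sketched with SciPy for the omnibus step; the ΔT samples below are made-up stand-ins, not the study's raw data, and SciPy does not ship an SNK post hoc, so only the Kruskal–Wallis part is shown.

```python
from scipy import stats

# Illustrative temperature-variation samples (degrees C) for three
# hypothetical groups; values are stand-ins, not experimental data.
g3 = [1.5, 1.9, 2.1, 1.7, 2.3]
g5 = [3.0, 3.5, 3.2, 3.8, 3.4]
g6 = [0.3, 0.4, 0.2, 0.5, 0.3]

h, p = stats.kruskal(g3, g5, g6)
# At the 5% significance level, p < 0.05 means at least one group differs
print(f"H = {h:.2f}, p = {p:.4f}")
```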

3

Results

Figure 2 shows the comparisons between the emission wavelengths of the blue and green light sources and the absorbance of the red thickener, and between the emission wavelengths of the blue and red light sources and the absorbance of the green thickener. The optical power values found were 44 mW for the blue LED, 27.8 mW for the green LED and 51.7 mW for the red laser. Table 2 summarizes the characteristics of the light sources used in the present study.

Immediately after the application of the gel on the tooth surface, a drop in temperature was observed on the surface in contact with the gel (channel 1), followed by a drop in temperature in the other channels (Fig. 3). In groups 1 and 2, the temperature drop occurred throughout the acquisition period. In the other experimental groups, channel 1 showed the highest temperature rise. The second largest temperature variation was observed in channel 3, while channels 0 and 2 showed the smallest variations.


Fig. 2 Absorbance of thickeners and emission wavelength of the three different light sources used in this study

Table 2 Emission range and optical power of the light sources

Light source    λ nominal (nm)    λ measured (nm)    FWHM (nm)    Optical power (mW)
Blue LED        470 ± 10          454                26.3         44
Green LED       525 ± 10          518                72.8         27.8
Red laser       654–662           657                2.2          51.7

The means of the temperature variations (ΔT) during irradiation for groups 1 to 6, with their respective standard deviations, are shown in Table 3. The temperature variations observed in groups 1 and 2 (without a light source) were negative, resulting from the application of the bleaching gel. There was no significant difference between the groups treated with the red-thickener gel, regardless of the light source used, for all regions evaluated; thus, the light source does not interfere with the pulp, enamel and periodontal temperatures when used with a red-thickener gel. However, group 5, which used the green-thickener gel and the blue LED, showed the highest temperature values compared to the other experimental groups, except for channel 1. The group that used the green-thickener gel and the red laser was the one that presented the lowest temperature values compared to all other experimental groups. Thus, for the green-thickener gel, the light source influences the temperatures generated during the whitening procedure, which are statistically lower when the red laser is used.

4

Discussion

In-office dental whitening with photoactivation leads to an increase in the surface temperature of the enamel and, consequently, to an increase in the temperature of adjacent structures, such as the root and the pulp chamber. This phenomenon can be dangerous, as it can lead to inflammation of dental and periodontal tissues, in addition to causing post-operative pain and sensitivity. Thus, the study of the temperature changes caused during tooth bleaching is important to prevent tissue damage due to the procedure [5].

It is known that reversible changes can happen to the pulp tissue when it is subjected to temperature variations greater than 3.3 °C, while variations above 5.5 °C can cause loss of vitality and changes greater than 16 °C cause total necrosis of the pulp. Temperature variations above 10 °C in the periodontium can also damage these tissues (bone and periodontal ligament) [3, 4].

In-office dental whitening is a very common and widespread aesthetic procedure, with studies evaluating its effects on the temperature of dental tissues dating back more than 20 years [6]. The literature is therefore vast in reporting temperature variations during tooth whitening with different light sources (halogen lamps, high- and low-power lasers, LEDs, plasma arc lamps, non-thermal atmospheric-pressure plasmas, UV lamps) and distinct wavelengths, as well as diverse treatment protocols and gels of dissimilar colors and compositions. Moreover, different evaluation methodologies are used, such as thermocouples and thermographic cameras. Because of this, the reported results are conflicting and there are still doubts about the real effects of light absorption on the temperature generated during the procedure. Another point to be emphasized is that the dentist often has different products for teeth whitening in the office, but not


Fig. 3 Temperature variation observed in all channels of thermocouples, for all experimental groups

always the most adequate light source for a specific gel. Thus, clinical use of the same gel with different light sources, or vice versa, is frequent. Previously published works that tested activation of the same gel with different light sources [7, 8] often used sources with the same central emission wavelength but different radiant powers, or compared LEDs with halogen lamps [9] or distinct laser systems [10, 11], which have very different collimation characteristics. Although there are some studies comparing different gels with light sources of distinct emission wavelengths [12, 13], some of

Table 3 Average (±SD) temperatures (°C) obtained during treatments for each experimental group

Channel    G1       G2       G3              G4              G5              G6
0          −0.39    −0.85    1.87 ± 0.5(a)   1.93 ± 0.5(a)   3.35 ± 0.6(b)   0.34 ± 0.1
1          −0.30    −0.98    4.42 ± 0.9(a)   4.95 ± 1.1(a)   4.78 ± 1.1(a)   1.5 ± 0.2
2          −0.33    −0.63    3.05 ± 0.8      2.66 ± 0.9      3.89 ± 0.2(b)   0.30 ± 0.1
3          −0.25    −0.46    3.24 ± 0.7(a)   2.97 ± 0.6(a)   5.13 ± 1.2(b)   0.70 ± 0.25

Superscript letters indicate statistically different means at the 5% level according to the Student–Newman–Keuls test

them used sources with characteristics very dissimilar from each other (such as the comparison between a broadband LED with emission between 400 and 760 nm and a femtosecond laser with emission at 770 nm but with a power density 4 times higher than the compared LED) [13]. Such equipment is difficult to access clinically and quite different from each other; therefore, studies with more accessible sources are necessary.

The justification for the use of different light sources, mainly related to the use of high-power lasers that heat the irradiated surface [12], refers to the attempt to increase the effectiveness of diverse types of gels, which present distinct pH values; also, it is necessary to eliminate different types of pigments from the tooth surface, mainly those related to tetracycline, which are difficult to remove in a clinical situation. Thus, the present study sought to assess whether the activation of commercially colored gels by economically accessible LEDs or low-power lasers of different emission wavelengths could alter the temperature generated during a standard clinical procedure, which could guide professionals regarding the clinical safety of the procedure even when the light source used is not the one most absorbed by the gel in question.

During the clinical procedure, the temperature variation is influenced by several factors, such as the type of pigment contained in the gel, the characteristics of the light source, the irradiation time, the amount of gel and the thickness of the enamel and dentin. It is for this reason that human lower incisor teeth were used in the present study: these are the teeth with the smallest enamel and dentin thickness and, thus, the conduction of heat to the pulp tissue is facilitated [4]. Moreover, a thermally conductive paste was used inside the pulp chamber in order to detect the highest possible temperature rise. Such detection serves as a safety parameter for future clinical extrapolation, considering that the pulp blood flow, as well as the presence of saliva and gingival fluid, facilitates heat loss; the temperatures detected in an in vivo study are therefore certainly lower than those of an in vitro model.

In this study, the groups not exposed to any light source (G1 and G2) showed a sudden drop in temperature in the region in contact with the gel (channel 1) immediately after application, with no significant temperature increase during the acquisition period. This was expected, considering that the gel was stored in a closed environment protected from light, preventing its photodecomposition. Thus, it is likely that the gel was at a lower temperature than the dental element and, therefore, due to heat transfer, a slight cooling of the tooth surface occurred on contact with the whitening gel.

Comparing the results obtained for G3 and G4, both with the red thickener, it is noted that, despite the lower power of the green LED (G4), the temperature increase on the gel surface was greater when irradiated by the green LED than by the blue LED. The temperatures on the enamel side and root dentin, however, were lower and, although the pulp temperature was slightly higher, the difference was not significant (about 0.05 °C). The greater heating of the gel in the group exposed to the green LED can be justified by the absorption spectrum of the thickener: the red thickener has its absorption peak at 516 nm, with an absorption about 50% lower near 450 nm, which consequently causes less heating when light sources with wavelengths close to blue are used [14].

Group 5 showed the highest temperature differences in the pulp, enamel side and root surface among all groups; this group also presented a slightly higher gel-surface temperature compared to G3 (red thickener irradiated with the blue LED). The average pulp temperature difference obtained in G5 is sufficient to cause reversible damage to the pulp and, therefore, this association should not be used in a clinical procedure. The discrepancy between the pulp, periodontal and enamel-side temperature variations observed in the comparison between groups 3 and 5 suggests that efficient absorption of light by the thickener (as occurred in G3) resulted in less heat propagation to the adjacent tissues, whereas in G5 the photons were not absorbed by the thickener and, consequently, were transmitted to other regions, generating greater temperature increases there. Such data reinforce, once again, that the association between green thickener and blue light should not be used clinically, as there is a risk of overheating regions distant from the irradiation site.

Group 6 (treated with the green-thickener gel and the red laser) showed the lowest temperature variation among all irradiated groups. The green gel shows high absorption in the red region [5], with an absorption peak close to 630 nm; however, the absorption at 657 nm is lower, which justifies the



temperature variation of only 1.5 °C. There was also no significant heating of the adjacent tissues in the samples, which remained with variations below 1 °C. The use of the green-thickener gel with 660 nm light proved to be an efficient method in a previous study by Pleffken et al. [15], which also reported a low pulp temperature variation. The low risk of injury from increased pulp temperature, together with its good performance, makes treatment with the green gel followed by red light exposure a good option for tooth whitening.

In this study, among all groups, only group 5 had an average pulp temperature difference above 3.3 °C. No sample reached a pulp temperature variation above 5.5 °C; however, a single sample from group 3 and a significant number of samples from group 5 showed variations above 3.3 °C. The greatest temperature variation among all channels was approximately 5.1 °C, which also rules out possible lesions in the periodontal tissues, as these occur only with temperature variations above 10 °C.

The temperature differences obtained in this study can be considered low; moreover, in an in vivo study these values may be even lower due to the arrangement of the teeth in the dental arch and the protection provided by the gingiva, which prevent direct irradiation of the enamel side and root surface. The presence of blood circulation and gingival fluid also contributes to temperature control [5].

5

Conclusions

It was possible to conclude that one of the experimental groups showed pulp temperature variations that present a potential risk to the vitality of the pulp and periodontal tissues. The treatment with green whitening gel associated with blue LED lighting showed temperature rises in the pulp chamber that can cause reversible damage to this tissue and, in this way, it is not recommended for clinical application. Among all configurations evaluated, the use of the green-thickener gel associated with the red laser was the safest for clinical use. Based on the results achieved, in-office whitening is harmless as long as the proposed application times are respected.

Acknowledgements To PROCAD-CAPES (88881.068505/2014-01), National Institute of Photonics (CNPq/INCT 465763/2014-6), FAPESP (2017-21887-4), Multiuser Central Facilities (CEM-UFABC) and IPEN-CNEN/SP.

Conflict of Interest The authors declare that they have no conflict of interest.

References
1. Coutinho DS et al (2009) Comparison of temperature increase in in vitro human tooth pulp by different light sources in the dental whitening process. Lasers Med Sci 24:179–185
2. Nakamura T, Saito O, Ko T, Maruyama T (2001) The effects of polishing and bleaching on the colour of discoloured teeth in vivo. J Oral Rehabil 28:1080–1084
3. Zach L, Cohen G (1965) Pulp response to externally applied heat. Oral Surg Oral Med Oral Pathol 19:515–530
4. Eriksson A, Albrektsson T, Grane B, McQueen D (1982) Thermal injury to bone. A vital-microscopic description of heat effects. Int J Oral Surg 11:115–121
5. Kabbach W, Zezell DM, Bandeca MC, Pereira TM, Andrade MF (2010) An in vitro thermal analysis during different light-activated hydrogen peroxide bleaching. Laser Phys 20:1833–1837
6. Reyto R (1998) Laser tooth whitening. Dent Clin North Am 42:755–762
7. Coutinho DS, Silveira L, Nicolau RA, Zanin F, Brugnera A (2009) Comparison of temperature increase in in vitro human tooth pulp by different light sources in the dental whitening process. Lasers Med Sci 24:179–185
8. Mondelli RFL, Soares AF, Pangrazio EGK, Wang L, Ishikiriama SK, Bombonatti JFS (2016) Evaluation of temperature increase during in-office bleaching. J Appl Oral Sci 24:136–141
9. Carrasco TG, Carrasco-Guerisoli LD, Fröner IC (2008) In vitro study of the pulp chamber temperature rise during light-activated bleaching. J Appl Oral Sci 16:355–359
10. Sulieman M, Addy M, Rees JS (2005) Surface and intra-pulpal temperature rises during tooth bleaching: an in vitro study. Br Dent J 199:37–40
11. Sari T, Celik G, Usumez A (2015) Temperature rise in pulp and gel during laser-activated bleaching: in vitro. Lasers Med Sci 30:577–582
12. Gao Y, Zhao Z, Li L, Zhang K, Liu Q (2020) In vitro evaluation of the effectiveness of bleaching agents activated by KTP and Nd:YAG laser. Photodiagnosis Photodyn Ther 31:101900. https://doi.org/10.1016/j.pdpdt.2020.101900
13. Klaric E, Rakic M, Sever I, Tarle Z (2015) Temperature rise during experimental light-activated bleaching. Lasers Med Sci 30:567–576
14. Coutinho DS, Silveira L Jr (2006) Comparação dos Coeficientes de Absorção da Luz Emitida por um LED Verde e um LED Azul em um Espessante na Cor Vermelha. Rev Assoc Bras Odontol 2:12–13
15. Pleffken PR et al (2012) The effectiveness of low-intensity red laser for activating a bleaching gel and its effect in temperature of the bleaching gel and the dental pulp. J Esthet Rest Dent 24:126–132

Effects of a Low-Cost LED Photobiomodulation Therapy Equipment on the Tissue Repair Process

F. E. D. de Alexandria, N. C. Silva, A. L. M. M. Filho, D. C. L. Ferreira, K. dos S. Silva, L. R. da Silva, L. Assis, N. A. Parizotto, and C. R. Tim

Abstract

Wound is the interruption of tissue continuity, caused by physical, chemical or mechanical trauma or triggered by a clinical condition, which activates the fronts of organic defense. The body reacts to the installation of wounds by initiating the healing process; however, these events may fail because comorbidities may impair the healing response, resulting in a chronic wound. LEDs (light-emitting diodes) have been successfully used as adjunctive therapies for stimulating skin wound healing; however, the parameters used in low-cost equipment have not yet been studied. This study aims to evaluate the efficiency of a red LED light therapy equipment on wound healing in rats. The project was approved at CEUA/UESPI under protocol 0298/2019. Two experimental groups were delimited, one treated with LED and one control group. The animals were anesthetized and an area of 2 cm² was cut with a skin punch. The treatment was performed over a period of 14 days, during which the regression of the wound was evaluated using the ImageJ program. When comparing the 14-day regression percentage between the LED-treated group and the untreated group, a significant difference was observed (p < 0.05), with the LED-treated group presenting a higher regression percentage than the untreated group. LED photobiomodulation therapy has been widely used, demonstrating success in the regeneration of lesions by promoting photobiological effects that stimulate this process. This study showed that the low-cost red LED light therapy equipment resulted in a faster healing process of the wounds, favoring reepithelization and recovery of skin integrity.

F. E. D. de Alexandria, N. C. Silva, L. Assis, N. A. Parizotto, C. R. Tim (&) Department of Biomedical Engineering, University Brazil, São Paulo, Brazil. e-mail: [email protected]
F. E. D. de Alexandria, N. C. Silva: Facid, Teresina, Brazil
N. C. Silva, A. L. M. M. Filho, D. C. L. Ferreira, K. dos S. Silva, L. R. da Silva: UESPI, Teresina, Brazil

Keywords

Healing • LED • Wounds • Wistar rats • Photobiomodulation

1

Introduction

Wound is the interruption of the continuity of body tissue to a greater or lesser extent, caused by physical, chemical or mechanical trauma, or triggered by a clinical condition that activates the fronts of organic defense, directly interfering in its functions and causing damage to the organism [1, 2].

After a wound is established, the body is expected to react by initiating the complex healing process, which involves physiological, biochemical, cellular and molecular events that interact for tissue reconstruction in order to restore its integrity. In many cases, however, these events fail, giving rise to wounds that can take years to complete tissue reconstruction; sometimes this process may never finish [3].

It is known that the health costs of treating wounds that are difficult to heal, in Brazil and worldwide, are high and have been increasing rapidly, making it difficult to maintain the sustainability of health systems. The constant search for alternatives that reduce the costs of treating skin lesions is therefore an option for balancing health expenses: treatments that reduce costs while restoring the integrity of patients' skin represent an alternative for the sustainability of health systems [4].

The LED (Light Emitting Diode) has been studied for decades; the evidence for wound treatments is based on two main stimulating effects, namely the stimulation of the differentiation and proliferation

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_155



F. E. D. de Alexandria et al.

of fibroblasts and of collagen synthesis. It is known that LEDs are generally inexpensive and easy to handle, can be combined with other wavelengths to form an applicator tip that covers a larger lighting area, and can also be associated with other therapeutic resources. This treatment allows professionals in the field to carry out an advanced and non-invasive procedure that does not damage the skin, has no restriction on skin type and can be used at any time and in other pathologies [5, 6].

Despite the positive effects of LED interventions on wound healing, the parameters used in low-cost equipment have not been studied before. In this context, it was hypothesized that a low-cost red LED light therapy equipment may optimize the healing process, contribute to mitigating socio-economic spending and improve the quality of life of affected individuals. Thus, the aim of the present study was to evaluate the efficiency of low-cost red LED light therapy on cutaneous wound healing in rats. For this purpose, the macroscopic aspect and wound regression were evaluated.

Fig. 1 Schematic representation of the experimental procedure

2

Materials and Methods

For the development of this work, 14 adult male Wistar rats (Rattus norvegicus albinus), over 60 days old and with an average body weight of 250 g, were used, randomly selected from the vivarium of the Health Sciences Center of the State University of Piauí. The animals were born and raised on site, not requiring an acclimatization period. The study was authorized by the animal-use ethics committee (CEUA/UESPI) of the State University of Piauí under protocol no. 0312/2019.

During the experimental phase, all animals received standardized rodent food and water ad libitum. They were kept in autoclavable polypropylene boxes in a ventilated environment. The temperature was maintained at 25 °C and the humidity was controlled, with a 12-h light–dark cycle.

After weighing, the rats were anesthetized with an intramuscular (IM) injection of ketamine hydrochloride, at a dose of 60–80 mg/kg, and chlorpromazine hydrochloride, at a dose of 1.6–2.0 mg/kg, following the ethical standards and guidelines for animal research of Resolution 879 of February 15, 2008, of the Federal Council of Veterinary Medicine [7].

After trichotomy of the dorsal and cervical region, the rats were placed in prone position, asepsis was performed with 70% alcohol and, with a skin punch, an area of 2 cm² was marked on the back of the animals, corresponding to the skin area to be removed (Fig. 1). The skin fragments were resected with a steel blade until the muscular fascia was exposed. Hemostasis was performed by digital compression, using sterile gauze.

In the first two postoperative days, paracetamol 200 mg/mL was administered as analgesic medication at a dose of 1 mL per 20 mL of water every 8 h. The animals were randomly divided into 2 groups of 7 animals each: a negative control group, in which the wound on the animals' backs was induced and treated with 0.9% saline; and a LED-treated group, in which the wound was induced and the LED was applied, followed by 0.9% saline. The LED used was the Tendlite® medicinal device, model 204, with a red wavelength of 660 nm and a nominal output power of 160 mW, applied for 60 s. Immediately after irradiation, 0.9% saline (SF) was applied to the wounds, as in the control group. After the 14-day experimental period, the animals of both groups were humanely killed with a lethal dose of the anesthetic sodium thiopental, 100 mg/kg, intraperitoneally. For the assessment of wound regression, a digital camera was used in basic mode, without flash, without zoom, and with a resolution of 8.0 megapixels. To standardize the distance from the camera to the wound, an aluminum support 20 cm away and perpendicular to the wound was used. A ruler placed next to the animals and the wound was used to standardize the lesion area unit in mm2. The images were analyzed using the ImageJ 1.45 software (Research Services Branch, National Institutes of Health (NIH), Bethesda, Maryland, USA). The residual area of the lesion was calculated from the images, obtaining the following parameters:

Effects of a Low-Cost LED Photobiomodulation Therapy Equipment …

• lesion size (mm2)
• residual area (Ar) of the lesion (%), where Aday represents the area measured daily and Ainitial is the initial area measured immediately after the punch:

Ar = (Aday × 100) / Ainitial
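As a minimal sketch, the residual-area computation follows directly from the formula above (the example areas are hypothetical, not measurements from the study):

```python
def residual_area_percent(a_day_mm2: float, a_initial_mm2: float) -> float:
    """Residual area Ar (%) = (Aday * 100) / Ainitial."""
    return a_day_mm2 * 100.0 / a_initial_mm2

# A wound that shrank from 200 mm2 to 50 mm2 has a residual area of 25%,
# i.e. it regressed by 75% relative to day 0.
ar = residual_area_percent(a_day_mm2=50.0, a_initial_mm2=200.0)
print(ar)  # 25.0
```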

Statistical analyses were performed in the GraphPad Prism statistical software. The level of significance adopted in the study was α = 0.05. The data were processed and analyzed using the Statistical Package for the Social Sciences (SPSS), version 20.0. The normal distribution of all variables was verified by the Shapiro–Wilk W test. Comparisons between groups with parametric distribution were performed using the unpaired t test.
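The between-group comparison can be sketched as a pooled-variance (unpaired) t statistic. The regression percentages below are hypothetical stand-ins, since the study's raw data are not reproduced here:

```python
import math

def unpaired_t(a, b):
    """Two-sample pooled-variance t statistic and its degrees of freedom."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)  # sample variances
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    t = (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical wound-regression percentages for 7 animals per group
control = [62.1, 58.4, 65.0, 60.2, 59.7, 63.3, 61.8]
led = [81.5, 78.9, 84.2, 80.1, 79.6, 83.0, 82.4]
t, df = unpaired_t(control, led)  # strongly negative t: LED group regressed more
```

The t statistic is then compared against the t distribution with `df` degrees of freedom at α = 0.05 to obtain the p-value.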

3 Results

In the macroscopic findings obtained from the qualitative analysis of the experimental groups, it was possible to observe the absence of necrosis, odor, fibrosis, or liquid in the lesion cavity in both experimental groups. In the control group, there was slight local inflammation, evident between the 5th and 6th day after the lesion was made, which did not appear in the LED group (Fig. 2). At the end of the 14th day, the morphometric analysis of the lesions of both groups, LED-treated and negative control, was performed with the ImageJ analytical software, in which the initial wound area (day 0) was measured and compared with the area on the last day. The program generates a percentage of regression from this comparison. The obtained values were processed by the parametric unpaired t test, which showed a significant difference between the two groups tested, the LED-treated and the negative control (Fig. 3).

Fig. 2 Macroscopic analysis

Fig. 3 Record of the percentage of regression of the injured area. *** = p < 0.05 when compared to the negative control

4 Discussion

Photobiomodulation has been widely used with demonstrated success in the regeneration of injuries, as it promotes photobiological effects that stimulate this process [8, 9]. The use of LEDs in the healing process is already a reality, and there are case reports demonstrating the effectiveness of phototherapy in the treatment of wounds. A case study was carried out with a patient from the HC/UEL clinic who had ulcers in the lower limbs. The application was made once a week, using LEDs with a wavelength of 628 nm on the ulcer of the left lower limb, with the right one used as a control. Evolution was measured by means of photographic records, area measurement, and pain measurement. After eighteen sessions, the results showed changes in the clinical characteristics of the lesion, and the healed area was 30% larger compared to the control ulcer. As for pain, the visual analog scale varied from eight to zero in the irradiated limb and from nine to two in the control. That study concluded that LED therapy is a resource of choice in the treatment of venous ulcers, in both healing and pain-reduction aspects [10]. Another case report described a pediatric patient with oral mucositis due to chemotherapy used to treat acute lymphoblastic leukemia, in which the lesions were treated daily with a light-emitting diode (LED). The lesions remitted after 10 days of treatment. The article concluded that the LED was effective in the treatment of mucositis, as it decreased pain symptoms and accelerated the tissue repair process [11]. Currently, the University of Brasília (UnB) is developing a project for the use of LEDs in the treatment of diabetic foot and other types of injuries. Equipment consisting of a mobile tissue-neoformation system was developed based on the principles of phototherapy to assist in wound healing. Its light-emitting circuit is formed by a control module and a high-brightness LED module. Tests were carried out with


F. E. D. de Alexandria et al.

nine patients in the Federal District—six, with 11 different ulcers, at the Regional Hospital of Taguatinga (HRT), in 2013, and three at the Regional Hospital of Ceilândia (HRC), in 2016—who used the LED and presented total healing of the ulcers by the end of the study [12]. Research focused on developing strategies to accelerate and improve the healing process is increasingly necessary and valued. Today, one of the biggest drivers of research involving stomatherapy is the search for effectiveness with cost reduction. A study carried out in the wound sector at UFMG compared traditional dressings with new technologies, such as new standardized coverings for treatment and the introduction of photobiomodulation, and demonstrated a 5.4% reduction in costs, with the observed average total cost reduced by up to 60.7% of the initial value of conventional dressings. Thus, technological advances in the use of new products and procedures in dressing care have been showing benefits to the population with wounds, improving people's quality of life.

5 Conclusion

The low-cost red LED light therapy equipment was efficient in accelerating the healing process of the wounds, favoring reepithelialization and the recovery of skin integrity.

Acknowledgements We would like to acknowledge the contribution of the funding agency CAPES for the financial support of this research.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Cesaretti IUR (1998) Processo fisiológico de cicatrização da ferida. Rev Pelle Sana 2:10–12
2. Blanck M (2009) Curso de Feridas: Anatomia, histologia, fisiologia, imunologia, microbiologia e o processo cicatricial. Enferm Atual 9(49):6–12
3. Garcia SJ et al (2018) Protocolo de Tratamento de feridas para o Sistema penitenciário do Estado de São Paulo. Universidade Federal de São Paulo, São Paulo—SP
4. Silva DRA et al (2017) Pressure ulcer dressings in critical patients: a cost analysis. Rev Esc Enferm USP 51:1–12
5. Schubert EF (2006) Light emitting diodes, 2nd edn. Cambridge University Press, New York
6. Silva EF, Moraes DEPF, Silva PM (2018) A terapia combinada de LED associada com ácidos no tratamento de acne. Fisioterapia Brasil 19(5):63–69
7. Brasil, Conselho Federal de Medicina Veterinária (CFMV) Resolução Nº 879, de 15 de fevereiro de 2008. Manual de Legislação do Sistema CFMV/CRMVs
8. Parizotto NA, Baranauskas V (1998) Structural analysis of collagen fibrils after He-Ne laser photostimulated regenerating rat tendon. In: Second World Congress, Kansas City. Annals. Kansas City (USA), 66–67
9. Martignago CCS et al (2015) Proliferação de células fibroblásticas submetidas a terapia a laser de baixa potência de 904 nm. Blucher Biochem Proc 1(2):362
10. Siqueira CPCM et al (2009) Efeitos biológicos da luz: aplicação de terapia de baixa potência empregando LEDs (Light Emitting Diode) na cicatrização da úlcera venosa: relato de caso. Semina: Ciências Biológicas e da Saúde 30(1):37–46
11. Rimulo AL, Ferreira MC, Abreu MH, Aguirre-Neto JC, Paiva SM (2011) Chemotherapy-induced oral mucositis in a patient with acute lymphoblastic leukaemia. Eur Arch Paediatr Dent 12(2):124–127
12. Caxito C (2017) UnB desenvolve tratamento com LED para cicatrização em diabéticos. Jornal Brasília Capital, Brasília, Brasil

Occupational Dose in Pediatric Barium Meal Examinations G. S. Nunes, R. B. Doro, R. R. Jakubiak, J. A. P. Setti, F. S. Barros, and V. Denyak

Abstract

Multipurpose fluoroscopy systems can be used as remotely controlled systems or as systems for performing simple interventional procedures. The barium meal, or upper gastrointestinal study, using fluoroscopy is widely used for gastroesophageal reflux disease diagnosis in children and requires professionals to stay inside the examination room for pediatric patient positioning and immobilization during the procedure. In such a situation, the radiation exposure of professionals is an issue. The main goal of the present study was to estimate the dose received by the professionals involved in pediatric upper gastrointestinal series. Polyethylene mannequins filled with water were used to simulate the patient's body and an assistant. The equipment used to measure the dose rate was a parallel plate ionization chamber calibrated with an accuracy of 1.8%. The dose values in one examination are somewhat smaller than, but comparable with, typical patient doses in chest radiography. The annual effective dose may exceed the limit in the case of hundreds of procedures. The contribution of the radiographic images may reach 30%.

Keywords: Pediatric fluoroscopy · Upper gastrointestinal tract · Occupational dosimetry

G. S. Nunes, R. R. Jakubiak, J. A. P. Setti, F. S. Barros, V. Denyak: Federal University of Technology, Paraná, Av. Sete de Setembro 3165, Rebouças, Curitiba, Brazil. R. B. Doro: BrasilRad—Innovation and Quality in Medical Physics, Florianópolis, Brazil. V. Denyak: National Science Center 'Kharkov Institute of Physics and Technology', Akademicheskaya 1, Kharkiv, Ukraine.

1 Introduction
Multipurpose fluoroscopy systems can be used as remotely controlled systems or as systems for performing simple interventional procedures. One of the tests performed on this equipment is the barium meal examination. This test allows evaluating the anatomy and physiology of the esophagus, stomach, and first part of the small intestine (duodenum) with the use of barium sulfate (BaSO4) as a contrast medium. The main source of exposure to the professionals in procedures involving fluoroscopy is the scattered radiation, which comes from the interaction between the primary beam and the patient. For every 1000 photons reaching the patient, 100–200 are scattered, about 20 arrive at the detector, and the remainder is absorbed by the patient [1]. Since the occupational dose is directly related to the patient's dose [1], optimization of the latter should also lead to a decrease in the occupational dose. The European Communities guidelines [2] present basic requirements for fluoroscopic equipment and technical parameters as an aid to fulfilment of the quality criteria for pediatric imaging, such as: minimum tube voltage of 70 kV; additional filtration of 0.2 mm of copper or more; use of pulsed exposure; application of the Last Image Hold (LIH) concept; nominal focal spot value of 0.6; and usage of automatic brightness control. In addition to the factors already mentioned, according to ICRP Publication 121 [3], proper attention must also be given to patient positioning before exposure to radiation. This publication also warns that pediatric interventional procedures should be performed by expert interventional pediatricians. Concern about reducing the risks of radiation exposure is constantly reported. The United Nations Scientific Committee on the Effects of Atomic Radiation states the need to evaluate occupational doses according to the procedures that cause them [4]. In order to reduce the risk of radiation, fluoroscopy should always be performed with the least possible exposure and within the shortest time required [5].

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_156




There are few studies [6–10] about occupational doses in pediatric diagnostic fluoroscopy, especially in barium meal procedures, and there is a significant difference between the results obtained in the first work [6] and in the later ones. The main goal of the present study was to estimate the dose received by the professionals involved in pediatric barium meal examinations in a hospital not specializing in pediatrics, applying a method different from that used in the previous studies. This method allowed estimating the contribution of radiographic images to the total dose, testing the LIH concept for occupational dosimetry, which was not done in the previous investigations.

2 Materials and Methods

The study was carried out in the Department of Radiology of a University Hospital in the city of Curitiba (Brazil) in June 2018. The research was not submitted to the Institutional Review Board because it did not involve human beings. The fluoroscopy equipment used was the remote-controlled system AXIOM ICONOS MD (Siemens, 2006) with the X-ray tube set over the articulated table. The technical parameters used in the simulation of the pediatric UGI series are presented in Table 1. The equipment used to measure ionizing radiation was a parallel plate ionization chamber (Radcal Corporation, 10X5-6) with a sensitive volume of 180 cm3, calibrated with an accuracy of 1.8%. Polyethylene mannequins filled with water were used to simulate the patient's body and an assistant (Fig. 1). The positions and dimensions of the simulators are presented in Table 2. Figure 2 shows the experimental layout. Three measurements of the dose rate in fluoroscopic mode were performed. In addition, the dose in radiographic image production (one radiograph for each of the five exam positions) was measured. The mean values and the errors of

Table 1 Technical parameters used in the simulation of the pediatric UGI series

Focus–detector distance: 115 cm
Focus–isocenter distance: 95 cm
Field size on the table: 28 × 28 cm2
Additional filtration: 0.3 mm (Cu)
Nominal focal spot value: 0.6
Total fluoroscopy time: 2.30 min
Fluoroscopy technique: 83 kVp, 2 mAs
Radiography technique: 68 kVp, 28 mAs

Fig. 1 Polyethylene mannequins filled with water were used to simulate the patient’s body (a) and another assistant (b)

the mean of dose rate in fluoroscopy and dose in radiography were calculated. The dose in fluoroscopy was calculated as the product of the dose rate and the total fluoroscopy time. The dose in radiography was calculated as a sum of the doses measured in the production of all five radiographic images. The Dose Equivalent Hp(10) was calculated by using a conversion factor. Brazilian regulation [11] establishes the



Table 2 Mannequins' positions and dimensions

Distance from the assistant to the isocenter: 50 cm
Distance from the ionization chamber to the isocenter: 50 cm
Distance from the ionization chamber to the floor: 115 cm
Dimensions of the pediatric patient simulator (length × width × height): 20 cm × 18 cm × 44 cm
Dimensions of the assistant simulator (length × width × height): 20 cm × 19 cm × 50 cm
Thoracic thickness of the patient simulator (age: 1 year): 16 cm

Fig. 2 Experimental layout. 1 is the patient simulator. 2 is the ionization chamber. 3 is another assistant simulator. 4 is the X-ray equipment

value of 1.14 Sv/Gy as the only conversion coefficient between air kerma and Hp(10), disregarding the coefficient's dependence on the beam quality under consideration:

Hp(10) = 1.14 Ka (Sv).    (1)

In order to estimate the effective dose that the professionals receive annually during pediatric barium meal procedures, the Hp(10) value of the test was multiplied by the number of exams performed during 2017 that required professionals inside the exam room. To achieve this goal, a survey of the number of pediatric upper gastrointestinal tract series was performed by checking the Hospital Information System between January 1 and December 31, 2017.
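Under the stated assumptions (a single conversion coefficient of 1.14 Sv/Gy and an identical Hp(10) per procedure), the annual estimate reduces to a one-line calculation; the numbers below are the paper's own values:

```python
AIR_KERMA_TO_HP10 = 1.14  # Sv/Gy, single conversion coefficient from [11]

def hp10(kerma_gy: float) -> float:
    """Personal dose equivalent Hp(10) in Sv from air kerma in Gy, per Eq. (1)."""
    return AIR_KERMA_TO_HP10 * kerma_gy

def annual_effective_dose(kerma_per_exam_gy: float, exams_per_year: int) -> float:
    """Annual effective dose in Sv, assuming identical exposure per procedure."""
    return hp10(kerma_per_exam_gy) * exams_per_year

dose_per_exam = 54.15e-6  # Gy, measured total dose per examination
annual = annual_effective_dose(dose_per_exam, 32)  # 32 procedures in 2017
print(f"{annual * 1e3:.2f} mSv/year")  # about 1.98 mSv/year, well below 20 mSv/year
```

This reproduces the 1.98 mSv/year estimate reported in the Results.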

3 Results and Discussion

The obtained total dose per examination (54.15 ± 0.43 µGy) is somewhat smaller than, but comparable with, the typical patient dose in chest radiography. The total fluoroscopy time is a variable that influences both the occupational and the patient's dose, and according to the IAEA (2018) it is usually between 3 and 6 min per patient. For a total fluoroscopy time of 6 min, the

total dose per examination is 116.21 ± 0.45 µGy. Our result confirms the results obtained earlier in the works [7–10] and is up to ten times larger than the ones from [6]. In the first study [6] performed in this area, the radiation doses received by the hands and thyroid during 66 pediatric barium meal examinations were analyzed. Calcium sulfate thermoluminescent dosimeters (TLDs) indicated average weekly doses ranging from 40 to 210 µSv for the professionals' hands and from 20 to 50 µSv for the thyroid. Taking into account the number of examinations, the dose value per exam does not exceed 13 µSv, which is significantly less than the result of the present study. Further investigations indicated higher doses. In the work [7], when estimating the radiation dose received by the staff's hands in 25 pediatric exams using calcium fluoride TLDs, the measured average dose was 47 µGy per procedure. The average thyroid equivalent dose in 48 pediatric barium meal studies in [8], measured with lithium fluoride TLDs, was in the range of 20–54 µSv per procedure. In [9], the same group investigated the occupational dose on the hands, lens, and thyroid of professionals in 41 pediatric barium meal exams. Lithium fluoride TLDs measured average doses of 85, 49, and 32 µSv per procedure, respectively. Further, they reported [10] the influence of procedure optimization on the occupational exposure of hands, lens, and thyroid during pediatric barium meal examinations. The study was carried out in two stages: before (49 patients) and after (44 patients) the implementation of the optimization. Lithium fluoride TLDs indicated average doses before the optimization of 83, 48, and 52 µSv per procedure, respectively. After optimization, the results were 43, 25, and 28 µSv per procedure, respectively. There are two factors that might cause such a difference between the first work [6] and the subsequent ones. Firstly, the barium meal procedures in [6] were performed without an antiscatter grid (the grid is known to increase the patient dose). Secondly, the X-ray tube in [6] was positioned under the articulated table, unlike in the other works. If part of the radiation comes from the X-ray tube collimator or exit window, the table acts as a shield. The contribution of radiographic image production to the total dose per examination is almost 30% for a fluoroscopy time of 2.3 min and decreases to about 15% for a fluoroscopy time of 6 min (Fig. 3). Of course, this value depends not only on the total fluoroscopy time but also on the number of radiographs, which varies from one examination to another. However, it is clear that the contribution of the radiographic images is on the order of tens of percent, which emphasizes the importance of the LIH concept also for occupational exposure.
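The quoted contribution can be cross-checked with a back-of-the-envelope calculation. The per-mode split is not given in this excerpt, so it is back-calculated from the two reported totals, assuming the fluoroscopy dose scales linearly with time while the radiography dose stays fixed:

```python
total_at_2p3_min = 54.15e-6  # Gy, total dose at 2.30 min of fluoroscopy
total_at_6_min = 116.21e-6   # Gy, total dose at 6 min of fluoroscopy

# Solve total(t) = rate * t + radiography for the two unknowns
fluoro_rate = (total_at_6_min - total_at_2p3_min) / (6.0 - 2.3)  # Gy per minute
radiography = total_at_2p3_min - fluoro_rate * 2.3               # fixed part, Gy

frac_short = radiography / total_at_2p3_min  # close to 0.30 at 2.3 min
frac_long = radiography / total_at_6_min     # close to 0.15 at 6 min
```

The resulting fractions agree with the roughly 30% and 15% figures quoted in the text.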



The annual effective dose may exceed the limit in the case of hundreds of procedures, a typical quantity for a pediatric hospital. The contribution of radiographic image production to the total dose per examination may reach 30%, which emphasizes the importance of the Last Image Hold concept also for occupational exposure.

Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES), Finance Code 001. The authors also want to express their gratitude to Dra. Danielle Filipov for criticisms and helpful suggestions.

Conflict of Interest The authors declare that they have no conflict of interest.

References

Fig. 3 Average dose received by a professional in one examination during fluoroscopy and radiographic image production

The estimated annual effective dose (1.98 ± 0.04 mSv) is significantly lower than the limit of 20 mSv/year [12], mainly due to the small number of procedures performed during 2017 (32). However, the annual effective dose would exceed the limit in the case of 720 procedures, a typical value for the pediatric hospital in the same city [8].

4 Conclusions

The performed estimation showed that the total dose received by professionals in one pediatric barium meal procedure is somewhat smaller than, but comparable with, typical patient doses in chest radiography. The discrepancy between the dose values existing in the literature may be caused by the usage of an antiscatter grid and by the position of the X-ray tube.

1. International Atomic Energy Agency. Diagnostic and Interventional Radiology. (cited 2018 Nov 10) Available from: https://www.iaea.org/file/2017/training-radiologyalllectureszip
2. European Communities. European guidelines on quality criteria for diagnostic radiographic images in pediatrics. Office for Official Publications of the European Communities, Luxembourg 1996. (cited 2018 Nov 10) Available from: https://www.sprmn.pt/pdf/EuropeanGuidelinesEur16261.pdf
3. ICRP. Radiological protection in pediatric diagnostic and interventional radiology. ICRP Publication 121 Ann. 42(2) 2013 (cited 2018 Nov 1) Available from: https://radon-and-life.narod.ru/pub/ICRP_121.pdf
4. UNSCEAR. Sources and effects of ionizing radiation, Report Vol. I. United Nations Scientific Committee on the Effects of Atomic Radiation 2008. (cited 2018 Nov 1) Available from: https://www.unscear.org/docs/publications/2008/UNSCEAR_2008_Report_Vol.I.pdf
5. Mohd Ridzwan SF, Selvarajah SE, Abdul HH (2018) Radiation dose management in fluoroscopy procedures: less is more? Jurnal Sains Kesihatan Malaysia 14(02):103–109. https://doi.org/10.17576/JSKM-2016-1402-12
6. Coakley KS, Ratcliffe J, Masel J (1997) Measurement of radiation dose received by the hands and thyroid of staff performing gridless fluoroscopic procedures in children. Br J Radiol 70:933–936. https://doi.org/10.1259/bjr.70.837.9486070
7. Damilakis J, Stratakis J, Raissaki M, Perisinakis K, Kourbetis N, Gourtsoyiannis N (2006) Normalized dose data for upper gastrointestinal tract contrast studies performed to infants. Med Phys 33(4):1033–1040. https://doi.org/10.1118/1.2181297
8. Filipov D, Sauzen J, Schelin HR, Paschuk SA, Denyak V, Legnani A (2015) Dose Equivalente na Tireoide dos Profissionais que Utilizam o Protetor Plumbífero nos Exames de SEED Pediátrico. Rev Bras Física Médica 9(2):23–26
9. Filipov D, Schelin HR, Denyak V, Paschuk SA, Porto LE, Ledesma JA, Nascimento EX, Legnani A, Andrade MEA,

Occupational Dose in Pediatric Barium Meal Examinations Khoury HJ (2015) Pediatric patient and staff dose measurements in barium meal fluoroscopic procedures. Rad Phys Chem 116:267– 272. https://doi.org/10.1016/j.radphyschem.2015.05.036 10. Filipov D, Schelin HR, Denyak V, Paschuk SA, Ledesma JA, Legnani A, Bunick AP, Sauzen J, Yagui A, Vosiak P (2017) Medical and occupational dose reduction in pediatric barium meal

1055 procedures. Rad Phys Chem 140:271–274. https://doi.org/10.1016/ j.radphyschem.2017.01.034 11. Agência Nacional de Vigilância Sanitária, Ministério da Saúde (1998) Portaria n. 453, de 1 de junho de 1998. Brasil 12. ICRP (2007) Recommendations of the international commission on radiological protection. ICRP Publication 103 Ann 37:1–332

Synthesis of N-substituted Maleimides Potential Bactericide A. C. Trindade and A. F. Uchoa

Abstract

In view of the growing spread of antibiotic-resistant bacteria causing high morbidity and mortality, it is essential to develop compounds that are more efficient than those currently on the market. Maleimide derivatives, due to their chemical structure, have biological activity capable of deactivating enzymes and inhibiting the metabolic pathways of bacteria and fungi. They have high selectivity and are excellent candidates as new antimicrobials. In this scenario, the objective of this work was to synthesize, characterize, and verify the bactericidal activity of two maleimide derivatives, 4-vinyl-phenyl-maleimide and 4-Cl-phenyl-maleimide. The structures of the two compounds were characterized by High Resolution Mass Spectrometry (HRMS), Infrared (IR) spectroscopy, and 1H Nuclear Magnetic Resonance (NMR). Both compounds showed the double bond in the maleimide ring that is essential for the inhibition of enzymes present in bacteria. The disk diffusion test to determine the bactericidal activity of the compounds was carried out against the microorganisms Staphylococcus aureus (Gram-positive), Listeria monocytogenes (Gram-positive), enteropathogenic Escherichia coli (EPEC, Gram-negative), and Pseudomonas aeruginosa (Gram-negative). The maleimide-derived compounds showed greater biological activity against Gram-positive microorganisms, as evidenced by the inhibition halos in the diffusion test. The compounds proved to be active in the disk diffusion test, motivating new permeability, selectivity, and mechanistic studies, as well as possible applications as bactericidal agents.

A. C. Trindade, A. F. Uchoa: Biomedical Engineering Center, Anhembi Morumbi University (UAM), Estrada Dr. Altino Bondensan 500, Distrito de Eugênio de Melo, CEP 247-016, São José dos Campos, SP, Brazil. A. F. Uchoa: Center for Innovation, Technology and Education (CITE), Estrada Dr. Altino Bondensan 500, Distrito de Eugênio de Melo, CEP 247-016, São José dos Campos, SP, Brazil.

Keywords: Synthesis of maleimides · N-substituted maleimides · Bactericides · Bacteria · Biological activity · Chemical compounds

1 Introduction

It is known that the spread of antibiotic-resistant bacteria poses a considerable threat to humanity, with high morbidity and mortality as its main consequence. Currently, studies are focused on the bacterial and fungal enzymes in charge of catalyzing the main biochemical reactions in microbial cells. Compounds with biological activity capable of deactivating these enzymes and inhibiting the metabolic pathways of these bacteria and fungi, with high selectivity in the human body, are promising antimicrobial agents [1–3]. Maleimide derivatives have in their structure the group –CO–N(R)–CO–, with R being a hydrogen, alkyl, or aryl group. The double bond present in the maleimide ring is prone to attack by nucleophilic species such as the thiol group present in the amino acid cysteine; thus, the enzymatic activity of the cysteine protease is altered and microbial growth is inhibited [4–6]. The maleimide group binds efficiently and specifically to biomolecules that contain a terminal thiol group, forming a stable and irreversible covalent bond, as shown in Fig. 1 [7]. Maleimides have achieved relevant biological activities, including antimicrobial activity [8] and enzyme inhibition [9]. The antimicrobial mechanism has been studied, and it was found that maleimides could preferentially interact with the hydrophobic domains of the target enzymes, resulting in the

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_157




Fig. 1 Reaction between maleimide and the thiol group of biomolecules

inactivation of sulfhydryl groups essential for their catalytic activities [9]. The activities were greatly affected by the structure of the C=C double bond present in the maleimide ring [10]. The biological activity of each maleimide varies depending on the group bound to the nitrogen. In the case of the compounds proposed in this study, the presence of chlorine, as well as of the vinyl group, can potentiate the attack of the thiol group, in addition to the attack already described on the double bond in the maleimide ring [11]. In this work, the synthesis of the N-substituted maleimides 4-vinyl-phenyl-maleimide (M1) and 4-Cl-phenyl-maleimide (M2) was carried out. The structures of the synthesized compounds were characterized by means of Nuclear Magnetic Resonance (NMR), High Resolution Mass Spectrometry (HRMS), and Infrared (IR) analysis. The bactericidal activity of the compounds was analyzed using the disk diffusion test.

2 Materials and Methods

For the synthesis of compounds M1 and M2, the anilines 4-vinylaniline and 4-chloroaniline and maleic anhydride were dissolved separately in ethyl ether, and after filtration they were combined into a single solution (50 mL), which was stirred for 15 min at room temperature. The precipitate was filtered and taken to the oven. After drying, the precipitate was heated in an oil bath at 72 °C for 4 h. After this period, the reaction medium was poured into a beaker with an ammonia and ice solution. After neutralization (pH ≈ 7), dichloromethane was added and the mixture was partitioned. The organic phase was removed on a rotary evaporator, and the solid residues were purified by column chromatography using silica gel as the support and 50:2 dichloromethane/methanol as the mobile phase.

3 Results and Discussion

The structures of the two synthesized maleimides are shown in Table 1.

Table 1 Structure of the N-substituted maleimides (structure drawings not reproduced here)

Compound — Yield (%)
M1 — 81.43
M2 — 40.84

A. Characterization—M1 and M2

Compound M1 was characterized by an HRMS of 222.0527 g/mol. The theoretical mass was determined as 222.0531 g/mol, a difference of 0.00018%, lower than the 0.04% tolerance. The IR spectrum (Fig. 2a) showed the following results for M1: 3069.94 cm−1 (CH2=C), 2353.28 cm−1 (C–N), 1703.25 cm−1 (C=O), 1606.27 cm−1 (ArC), and 1405.39 cm−1 (C–N). Compound M2 showed an HRMS of 208.0154 g/mol. The theoretical mass was determined as 208.0159 g/mol, a difference of 0.00024%, lower than the 0.04% tolerance. The IR spectrum (Fig. 2b) showed the following results for M2: 2517.46 cm−1 (C–N), 1712.8 cm−1 (C=O), 1596.62 cm−1 (ArC), and 1309.79 cm−1 and 708.77 cm−1 (C–Cl). The NMR analysis of compound M1 (Fig. 3a) showed the following results: 1H NMR (500 MHz, CDCl3): δH 7.51–7.35 (4H, ArH), δH 6.88 (2H, CH2), δH 6.71 (2H, CH2). The NMR of compound M2 (Fig. 3b) showed the following results: 1H NMR (500 MHz, CDCl3): δH 7.43–7.33 (4H, ArH), δH 6.86 (2H, CH2).

B. Disk diffusion test

Compounds M1 and M2 were tested against four different bacterial strains, with the antibiotic streptomycin as a control. The compounds were dissolved in dimethylsulfoxide (DMSO) at a concentration of 0.25 mol/L, so DMSO was also tested to confirm that it had no significant biological activity of its own. The test was performed using the disk diffusion method. The results are shown in Table 2 [12].
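The mass-accuracy check quoted above is just a relative difference between the measured and theoretical masses, compared with the stated 0.04% tolerance:

```python
def mass_error_percent(measured: float, theoretical: float) -> float:
    """Relative mass difference, in percent, between measured and theoretical mass."""
    return abs(measured - theoretical) / theoretical * 100.0

TOLERANCE = 0.04  # percent, as stated in the text

m1_error = mass_error_percent(222.0527, 222.0531)  # M1: about 0.00018%
m2_error = mass_error_percent(208.0154, 208.0159)  # M2: about 0.00024%
# Both are far below the 0.04% tolerance, consistent with the assigned structures.
```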

Fig. 2 IR spectra of compounds M1 (a) and M2 (b)

The DMSO solvent produced an inhibition zone only for the bacterium S. aureus (Staphylococcus aureus)—Gram positive, of 4.63 mm. To ensure that DMSO had no influence on the biological activity of the compounds, a statistical test was carried out. The t test returned p < 0.05 when comparing the inhibition zones of compounds M1 and M2 with that of DMSO, confirming a significant difference between the biological activity of the compounds and the inhibition caused by DMSO. According to Table 2, compounds M1 and M2 were active against all four bacteria tested. Both showed the greatest activity against the bacterium Listeria monocytogenes—Gram positive, with inhibition zones of 17.78 mm and 20.38 mm, respectively. The antibiotic streptomycin achieved greater biological activity than compounds M1 and M2 only for the enteropathogenic Escherichia coli (EPEC)—Gram negative, with an inhibition zone of 16.37 mm, and for Pseudomonas aeruginosa—Gram negative, with an inhibition zone of 15.23 mm. This is an indication that the synthesized compounds have greater biological activity against Gram-positive bacteria.
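The significance check described above can be sketched as a paired t test on the Table 2 inhibition zones. This is an illustrative reconstruction only (the paper does not state which t-test variant was used), shown here for M1 versus the DMSO control; the critical value 3.182 is the two-tailed t(0.05, df=3):

```python
# Assumed sketch: paired t-test of M1 inhibition zones vs. the DMSO
# control across the four strains, using the values from Table 2.
from statistics import mean, stdev
from math import sqrt

m1   = [15.01, 17.78, 9.84, 10.57]   # M1 inhibition zones (mm)
dmso = [4.63, 0.00, 0.00, 0.00]      # DMSO control zones (mm)

# Per-strain differences and the paired t statistic
diffs = [a - b for a, b in zip(m1, dmso)]
t_stat = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# |t| > 3.182 (t_crit for alpha = 0.05, df = 3) reproduces p < 0.05
print(f"t = {t_stat:.2f}, significant: {abs(t_stat) > 3.182}")
```

With these values t is well above the critical threshold, consistent with the paper's p < 0.05 conclusion.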

4 Conclusion

The results of the characterization studies of compounds M1 and M2 show that the structures shown in Table 1 were successfully synthesized. Both structures present the double bond that tends to be attacked by nucleophilic species, such as the thiol group present in bacteria. The biological activities of compounds M1 and M2 were proven by the disk diffusion test, making the two N-substituted maleimide derivatives M1 and M2 potential bactericide candidates. However, the two compounds showed a greater


Fig. 3 NMR spectra of compounds M1 (a) and M2 (b)

A. C. Trindade and A. F. Uchoa

Table 2 Biological activity of the compounds against the four bacteria

Bacteria                                                  Inhibition zone (mm)*
                                                          M1     M2     DMSO   Streptomycin
S. aureus (Staphylococcus aureus)—Gram positive           15.01  16.65  4.63   13.51
Listeria monocytogenes—Gram positive                      17.78  20.38  0.00   13.63
Escherichia coli, enteropathogenic (EPEC)—Gram negative   9.84   11.19  0.00   16.37
Pseudomonas aeruginosa—Gram negative                      10.57  12.83  0.00   15.23

zone of inhibition for Gram-positive bacteria. For further investigation, a more detailed study on Gram-positive and Gram-negative bacteria should be carried out to assess the differences between the bactericidal activities of these synthesized compounds against the two types of bacteria.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Tacconelli E, Carrara E, Savoldi A et al (2018) Discovery, research, and development of new antibiotics: the WHO priority list of antibiotic-resistant bacteria and tuberculosis. Lancet Infect Dis 18(3):318–327. https://doi.org/10.1016/s1473-3099(17)30753-3
2. Butler M, Blaskovich M, Cooper M (2016) Antibiotics in the clinical pipeline at the end of 2015. J Antibiot 70(1):3–24. https://doi.org/10.1038/ja.2016.72
3. Salewska N, Boros-Majewska J, Lcka I et al (2012) Chemical reactivity and antimicrobial activity of N-substituted maleimides. J Enzyme Inhib Med Chem 70(1):117–124
4. Bansode T, Shelke J, Dongre G (2009) Synthesis and antimicrobial activity of some new N-acyl substituted phenothiazines. Eur J Med Chem 44(12):5094–5098. https://doi.org/10.1016/j.ejmech.2009.07.006
5. Prado S, Cechinel-Filho V, Campos-Buzzi F et al (2004) Biological evaluation of some selected cyclic imides: mitochondrial effects and in vitro cytotoxicity. Zeitschrift Fur Naturforschung Sect C J Biosci 59(9–10):663–672
6. Ye X, Li X, Yuan L et al (2007) Interaction of houttuyfonate homologues with the cell membrane of Gram-positive and Gram-negative bacteria. Colloids Surf Physicochem Eng Aspects 301(1–3):412–418. https://doi.org/10.1016/j.colsurfa.2007.01.012
7. Xu H, Baidoo K, Wong K et al (2008) A novel bifunctional maleimido CHX-A″ chelator for conjugation to thiol-containing biomolecules. Bioorg Med Chem Lett 18(8):2679–2683. https://doi.org/10.1016/j.bmcl.2008.03.022
8. Chen X, Zhang L, Li F et al (2015) Synthesis and antifungal evaluation of a series of maleimides. Pest Manag Sci 71(3):433–440. https://doi.org/10.1002/ps.3824
9. Silvia N, Maria V (2005) In vitro antifungal properties, structure-activity relationships and studies on the mode of action of N-phenyl, N-aryl, N-phenylalkyl maleimides and related compounds. Arzneim-Forsch 123–132
10. Fan Y, Lu Y, Chen X et al (2018) Anti-leishmanial and cytotoxic activities of a series of maleimides: synthesis, biological evaluation and structure-activity relationship. Molecules 23(11):2878–2887. https://doi.org/10.3390/molecules23112878
11. Hassan N (2016) Synthesis and antimicrobial activities of eleven N-substituted maleimides. Malaysian J Anal Sci 20:741–750. https://doi.org/10.17576/mjas-2016-2004-06
12. Weinstein M, Patel J, Burnham C et al (2018) Performance standards for antimicrobial disk tests, 9th edn. CLSI approved standard M2-A9, Pennsylvania, USA, pp 15–39

A Review About the Main Technologies to Fight COVID-19 P. A. Cardoso, D. L. Tótola, E. V. S. Freitas, M. A. P. Arteaga, D. Delisle-Rodríguez, F. A. Santos, and T. F. Bastos-Filho

Abstract

This work presents a review of research published in the last five months (January–May 2020) on technologies used during the pandemic to fight the disease caused by the SARS-CoV-2 virus, termed COVID-19. Through an analysis of these studies, Telemedicine was considered a viable option to decrease the dissemination of the virus and identify infected people. However, to implement Telemedicine worldwide, a fast and reliable way to transfer large amounts of data is necessary, such as the new fifth generation of mobile communications (5G). Thus, new concepts such as the Internet of Medical Things (IoMT) and Electronic Health (e-Health) can be used, which provide the necessary structures and tools for fast, high-resolution communication between patients and health professionals.

Keywords: COVID-19 · Technologies · Telemedicine · Review

1 Introduction

COVID-19 is a disease caused by the SARS-CoV-2 virus that attacks the respiratory system, causing symptoms such as cough, fever, fatigue and shortness of breath [1]. Its transmission occurs among people in close contact, through droplets and aerosols. The first confirmed case occurred in Wuhan, China, in December 2019; the virus then spread rapidly worldwide, leading the World Health Organization (WHO) to declare a pandemic on March 11, 2020 [2]. Due to its high rate of contamination, social isolation has been adopted globally as the best strategy for reducing its dissemination, since there are currently no drugs, vaccines or cross-protection licensed for the treatment or prevention of the disease. Although COVID-19 has a mortality rate lower than other coronavirus diseases such as SARS or MERS, it is more transmissible, and so far most people have no immunity. The medical environment is filled with worries about possible contaminations, since a high number of simultaneous cases could lead the healthcare systems of most countries to collapse. Under these circumstances, while research on drugs and vaccines is being developed, other solutions based on biomedical and advanced technologies have been implemented to mitigate transmission and support people. Amid the chaos, technology has become an ally in fighting this pandemic, since it brings solutions that maintain the distance between people, protecting healthcare professionals working on the frontline as well as people classified as at-risk groups (with cardiac disease, cancer, etc.). In this context, e-Health, Telemedicine and the Internet of Things (IoT) are viable options. The purpose of this article is to present the main technologies that have been implemented during the COVID-19 crisis.

P. A. Cardoso (&) · D. L. Tótola · E. V. S. Freitas · M. A. P. Arteaga · D. Delisle-Rodríguez · F. A. Santos · T. F. Bastos-Filho
Universidade Federal do Espírito Santo, Avenida Fernando Ferrari 514, Vitoria, 29075-910, ES, Brazil

2 Materials and Methods

The present review was based on the systematic literature review method proposed by [3]. The first step was the selection of keywords to be used in the search for already published works. Then, a logical analysis of the combinations of these words was conducted, together with an assessment of the importance of their presence in the titles of the publications. The databases chosen for the research were Web of Science (WoS) and IEEE Xplore, since the relevant information for this study is related to engineering

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_158


P. A. Cardoso et al.

Table 1 Distribution of publications from the WoS database

Area                 COVID-19   Coronavirus
e-Health             2          0
Telemedicine         17         3
IoT                  2          1
3D-printing          1          0
Biomedical signals   0          0

and technology. This review covered publications between 2019 and 2020, since the contribution of technology is analyzed for the pandemic that started in December 2019. No restrictions were placed on the type of publication, since the theme is recent and no information should be discarded. The database search was carried out on May 18, 2020, and Table 1 presents the number of publications found on the Web of Science platform when combining the selected keywords, totaling 26 publications. The "AND" logic was used in all searches, and only one publication was found in the IEEE Xplore database, from the combination of COVID-19 and IoT. The second step of the review was eliminating publications according to the following criteria:

1. Duplicates;
2. Publications dedicated to a medical specialty;
3. Publications unavailable for download;
4. Publications not written in English.

3 Results

Figure 1 synthesizes the elimination process, showing the number of publications remaining after the application of each criterion. The total number of publications meeting all the chosen requirements is eleven. Table 2 shows the distribution of publications after the elimination step, according to the chosen keywords.
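The elimination step above can be sketched as a sequence of filters applied in order. The records and field names below are hypothetical mock-ups; only the four criteria themselves come from the text:

```python
# Illustrative sketch (mock data): applying the review's four elimination
# criteria sequentially and reporting how many records survive each step,
# mirroring the funnel shown in Fig. 1.

def eliminate(pubs, criteria):
    """Apply each keep-predicate in order, printing the running count."""
    for name, keep in criteria:
        pubs = [p for p in pubs if keep(p)]
        print(f"after removing '{name}': {len(pubs)} publications")
    return pubs

# Minimal mock records; only the fields used by the criteria are modeled.
pubs = [
    {"title": "Telemedicine for COVID-19", "dup": False,
     "specialty": None, "available": True, "lang": "en"},
    {"title": "Telemedicine for COVID-19", "dup": True,
     "specialty": None, "available": True, "lang": "en"},
    {"title": "COVID-19 in dermatology", "dup": False,
     "specialty": "dermatology", "available": True, "lang": "en"},
    {"title": "IoT et COVID-19", "dup": False,
     "specialty": None, "available": True, "lang": "fr"},
]

criteria = [
    ("duplicates",             lambda p: not p["dup"]),
    ("medical specialty",      lambda p: p["specialty"] is None),
    ("unavailable download",   lambda p: p["available"]),
    ("not written in English", lambda p: p["lang"] == "en"),
]

kept = eliminate(pubs, criteria)
```

In the review itself, the 27 records retrieved (26 from WoS plus 1 from IEEE Xplore) were reduced to 11 by this kind of sequential filtering.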

Fig. 1 Representation of the process of elimination and its result

After reviewing the selected publications, it was noticed that even though the public health systems of the countries affected by the new coronavirus suffer from a lack of trained people, devices and beds in Intensive Care Units (ICUs), the digital infrastructure of the hospitals remains intact [4]. Nonetheless, the need to continue monitoring chronic patients and the elderly population, who fear a possible infection, raises interest in investing in technological and digital solutions that enhance the relationship between doctors and patients during the pandemic [2]. To prevent feelings such as loneliness, stress, depression and anxiety in patients who already have COVID-19 symptoms, the authors of [5] discuss the value of e-Health tools that enable healthcare professionals to monitor their patients remotely, creating new communication channels that allow the screening and verification of patients' health status. The advent of 5G technology (the fifth generation of wireless communication) made it possible to employ virtual reality in Telemedicine [6], while facilitating real-time diagnosis due to its fast response, low latency and greater range [7]. A group of hospitals in the United States adopted Telemedicine to help respond to the current crisis. It was identified that hospital emergency departments remain busy despite the introduction of screening via telephone and drive-thru [8]. The strategy of creating isolated rooms equipped with a high-resolution audio and video platform helped minimize viral transmission to healthcare professionals. Among the operational demands for the ideal functioning of the platform is the need for a robust network supported by an Information Technology (IT) team. Naturally, nurses and clinicians should be trained in handling digital peripherals, such as stethoscopes, otoscopes, ophthalmoscopes and dermoscopes, when a clinical examination is necessary.

Table 2 Distribution of publications after the elimination step

Area                         Publications
e-Health                     1
e-Health and Telemedicine    1
Telemedicine                 6
Telemedicine and IoT         1
IoT                          2

The association of Telemedicine and IoT would facilitate a worldwide approach against COVID-19. In this context, big data and real-time data analysis can assist emerging countries with quick responses for remote assessment of the current situation. Studies carried out by [9] argue for the opportunity to create an international support network for countries with an insufficient number of clinicians and healthcare professionals specialized in intensive care. Thus, the use of Telemedicine would allow radiologists from developed countries, who already have greater expertise in the clinical picture of the new coronavirus, to diagnose suspected cases. For instance, a platform created by [10], initially for national purposes, offers online consultation for clinicians who need support in the analysis of radiography and computed tomography images of patients with symptoms linked to COVID-19. This platform became accessible to countries that need support during this crisis. The Internet of Medical Things (IoMT) is an agglomeration of medical devices and software that support health services through connection with IT systems [6]. Its uses include remote monitoring of the patient's location, tracking of medical prescriptions and the implementation of wearable devices that report the user's vital signs to their doctor. The use of Artificial Intelligence (AI) combined with IoMT also brought positive impacts during the pandemic [11]. Using AI, digital image processing allowed the remote diagnosis of COVID-19, and the concept of big data was applied to the logistics of distributing medical supplies, also assisting in tracking production demands across the country. AI can also help predict risks, identify fake news, create medicines, and analyze and model the virus [6]. Wearable devices are another kind of technology used during the pandemic. This equipment is worn in contact with the body and has an internet connection [6]. During the pandemic, companies producing this type of device have modified their functions in order to track and identify possible symptoms of the coronavirus. Among the signals captured and analyzed are temperature, respiratory and heart rates, blood oxygen saturation and the heart's electrical activity (electrocardiogram—ECG). The stored information can be displayed and sent to healthcare professionals through the database if there is an indication of the disease.


4 Discussion

Telemedicine is the most discussed topic in this literature review, as shown in Table 2. To allow social distancing and reduce the risks to frontline professionals, it is seen as an option to be used at different stages of the consultation, both for those still at home and for those seeking hospital care. Although this review focused on technologies used to combat COVID-19, authors such as [2, 5, 7] emphasize the importance of monitoring the patient's mental health. However, it is noteworthy that studies on the mental health of healthcare professionals were not found. As this is a highly relevant perspective, a search on this topic with specific keywords is highly recommended. In [12], the authors report the relevance of technology in science, considering that new electrical and computerized devices make it possible to deliver appropriate care to patients, including better treatment with less impact and more efficiency. However, this moment is also seen as a learning opportunity to analyze how to use technology after the pandemic, since the health community is experiencing scenarios that will help to develop an optimized health system, with guaranteed access and quality care for users. In this context, the Brazilian Ministry of Health published the regulations and operational requirements for the implementation of Telemedicine as a measure to fight the virus [13]. Since then, pre-clinical care, consultation, monitoring and diagnosis have been temporarily allowed in private and public care, for as long as the measures to confront COVID-19 last. While no publication was found relating 3D printing to the new coronavirus, it is known that this technology is being widely used to produce pieces of personal protective equipment, such as face shields, and to replace or build parts for medical devices, such as ventilators. Likewise, the keyword biomedical signals did not return publications, but the wearable devices presented by [6] capture several signals of this kind in order to determine the health status of their users. In fact, their study presents a recent and broad review of the technologies that have been implemented to mitigate the impact of the pandemic. Among them, they discuss robots and drones that disinfect public and hospital areas, monitor possible crowds, make public announcements and deliver essential medical materials.

5 Conclusion

The present study includes publications from the last five months (January to May 2020) that refer to the technologies that have been adopted during the pandemic, such as


Telemedicine, IoMT, e-Health, AI and wearable devices. For each one, their characteristics and implementations were presented. Telemedicine has proven to be a viable option to decrease the dissemination of the virus and identify infected people. Moreover, 5G technology, IoMT and e-Health can provide the necessary structures and tools for faster, high-resolution communication between patients and health professionals.

Acknowledgements The authors would like to thank CAPES and FAPES for research grants and financial support, and UFES for technical and scientific support.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Ministério da Saúde. "Sobre a doença." coronavirus.saude.gov.br. https://coronavirus.saude.gov.br/sobre-a-doenca. Accessed 21 May 2020
2. Hau YS, Kim JK et al (2020) How about actively using telemedicine during the COVID-19 pandemic? J Med Syst 44:108. https://doi.org/10.1007/s10916-020-01580-z
3. Thome AMT, Scavarda LF et al (2016) Conducting systematic literature review in operations management. Prod Plan Control 27(5):408–420. https://doi.org/10.1080/09537287.2015.1129464
4. Omboni S (2020) Telemedicine during the COVID-19 in Italy: a missed opportunity. Telemed e-Health. https://doi.org/10.1089/tmj.2020.0106
5. Pappot N, Taarnhøj GA et al (2020) Telemedicine and e-health solutions for COVID-19: patients' perspective. Telemed e-Health. https://doi.org/10.1089/tmj.2020.0099
6. Chamola V, Hassija V et al (2020) A comprehensive review of the COVID-19 pandemic and the role of IoT, drones, AI, blockchain and 5G in managing its impact. IEEE Access. https://doi.org/10.1109/ACCESS.2020.2992341
7. Zhang J, Wang H et al (2020) Family member visits to critically ill patients during COVID-19: a new pathway. Telemed e-Health. https://doi.org/10.1089/tmj.2020.0097
8. Doshi A, Yonatan P et al (2020) Keep calm and log on: telemedicine for COVID-19 pandemic response. J Hosp Med 15(5):301–304. https://doi.org/10.12788/jhm.3419
9. Azizy A, Fayaz M et al (2020) Do not forget Afghanistan in times of COVID-19 and the internet of things to strengthen planetary health systems. OMICS J Integrat Biol. https://doi.org/10.1089/omi.2020.0053
10. Taheri MS, Farahnaz F et al (2020) Role of social media and telemedicine in diagnosis & management of COVID-19; an experience of the Iranian society of radiology. Arch Iran Med 23(4):285–286. https://doi.org/10.34172/aim.2020.15
11. Lin B, Wu S (2020) COVID-19 (Coronavirus Disease 2019): opportunities and challenges for digital health and the internet of medical things in China. OMICS J Integrat Biol 24(5):231–232. https://doi.org/10.1089/omi.2020.0047
12. Bashshur R, Doarn CR et al (2020) Telemedicine and the COVID-19 pandemic, lessons for the future. Telemed e-Health 26(5):571–573. https://doi.org/10.1089/tmj.2020.29040.rb
13. Ministério da Saúde. "Diário Oficial da União Portaria No 467, de 20 de março de 2020." Imprensa Nacional. https://www.in.gov.br/en/web/dou/-/portaria-n-467-de-20-de-marco-de-2020-249312996. Accessed 21 May 2020

Evaluation of Cortisol Levels in Artificial Saliva by Paper Spray Mass Spectrometry A. R. E. Dias, B. L. S. Porto, B. V. M. Rodrigues, and T. O. Mendes

Abstract

Cortisol is a steroid hormone that has an important function in the regulation of many physiological and pathological processes, and its quantification in biological fluids may contribute to the diagnosis of numerous diseases. The objective of this study is to verify the use of paper spray mass spectrometry (PS-MS) to evaluate the levels of cortisol in artificial saliva. Mass spectra in the range of 100–800 m/z were acquired using a Thermo Fisher Scientific LCQ FLEET instrument equipped with a low-resolution ion trap mass analyzer. The presence of cortisol in artificial saliva samples was evaluated through the ions at 363, 385, 725 and 747 m/z, which are related to the cortisol molecule. The spectra of saliva with cortisol addition show ions attributed to cortisol at different m/z ratios that are not present in the spectra of pure saliva, pointing out the potential of these ions for monitoring cortisol levels. Principal component analysis (PCA) revealed groups of samples with and without cortisol addition. The loading plot explains the groups formed by the differences between the m/z values associated with the cortisol molecule, confirming PCA as a powerful qualitative method to assess cortisol levels in saliva. Finally, the ions selected to discriminate the groups formed in the PCA, namely 363, 385 and 747 m/z, were suggested as markers for determining cortisol levels in saliva samples. PS-MS proved to be an efficient and promising technique for the determination of real-time salivary cortisol levels without any sample preparation step. From a volume of only 5.0 µL of sample, placed directly on the equipment's sampler, it was possible to identify cortisol from the mass spectrum in approximately 45 s.

A. R. E. Dias  B. V. M. Rodrigues  T. O. Mendes (&) Universidade Brasil/Instituto Científico e Tecnológico, São Paulo, SP 08230-030, Brazil B. L. S. Porto Departamento de Química, Universidade Federal de Minas Gerais, Belo Horizonte, MG 31270-901, Brazil

Keywords: Diagnosis · Rapid method · Biofluids · Cortisol

1 Introduction

Cortisol is a steroid hormone that plays an important role in regulating a wide range of physiological and pathological processes involving immune response, electrolyte balance, blood pressure and metabolism, among others [1]. Cortisol levels increase in the presence of physical and/or psychological stress. A high level of cortisol can trigger psychophysiological reactions that may result in hyperfunction of the sympathetic nervous system and the endocrine system, specifically the adrenal glands [2]. The increased production of cortisol is related to immunological changes [3]. It decreases the proliferation of lymphocytes, interferes with the communication between them and inhibits the production of antibodies. This suppression increases the individual's susceptibility to acquiring or aggravating allergic diseases. An example of these immunological changes is psoriasis, considered an immunopathological response to stress against the body itself, in which an increase in local or systemic inflammatory mediators contributes to the mechanism of this condition [4]. In healthy adult individuals, morning salivary cortisol levels are above 12 nmol L−1 (4.35 µg L−1), while nighttime concentrations vary between 1.0 and 8.3 nmol L−1 (0.36 and 3.01 µg L−1) [5]. The measurement of the level of cortisol in biological fluids, usually plasma, is used in the diagnosis of diseases related to adrenal, pituitary and hypothalamic function, including Cushing's syndrome and

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_159


Addison's disease [1]. Another possibility is the measurement of cortisol in saliva or parotid fluid, since the free fraction is the main component, and its determination in this form shows a high correlation with plasma analyses [6]. The traditional techniques routinely used to quantify cortisol are based on radioimmunoassay. However, this technique presents some drawbacks due to the structural similarity of steroid hormones and the difficulty in obtaining specific antibodies, associated with the wide concentration range in which they can be found [7]. Other methods for quantifying cortisol have been described using electrochemical [8] and spectroscopic techniques, such as infrared spectroscopy [9], in addition to liquid chromatography methods [6]. Nevertheless, chromatographic and electrochemical methods, despite their excellent selectivity and quantification limits, involve laborious sample preparation steps and long analysis times. Conversely, while spectroscopic methods are fast and cost-effective, they suffer from overlapping spectral bands, leading to low selectivity and difficulties in standardizing detection and quantification limits for routine diagnostic laboratory practice. Recently, mass spectrometry (MS) has been used for several types of analysis, such as food, pesticides, biofluids and drugs [10]. With the introduction of new ionization sources, MS has experienced extraordinary development in recent years. In this context, with direct and fast ionization methods and portable mass spectrometers, MS has been able to act in practically all fields [11]. MS provides information on the elemental composition of samples, the structures of inorganic, organic and biological molecules, the qualitative and quantitative composition of complex mixtures, the structure and composition of solid surfaces, and the isotopic ratios of atoms in samples [12]. Taking into account the clinical importance of measuring cortisol levels and the use of alternative methods to perform biofluid analyses aiming at rapid and accurate diagnoses, the purpose of the present work is to use paper spray mass spectrometry (PS-MS) as an alternative method for cortisol quantification, enabling real-time results without any sample preparation steps. For this, the identification of the cortisol molecule in artificial saliva was analyzed, simulating the determination of salivary cortisol levels. It is important to emphasize that reports in the literature on the use of this technique to determine salivary cortisol levels, as well as in other biofluid matrices, are still incipient. This justifies the interest and importance of developing research in this segment, enabling the implementation of new diagnostic methodologies for biological samples.

A. R. E. Dias et al.

2 Paper Spray Mass Spectrometry (PS-MS)

PS-MS has made possible the analysis of samples without pre-treatment, or with only minimal preparation. Numerous studies have demonstrated the applicability of PS-MS, mainly in clinical settings, including neonatal screening, monitoring of therapeutic drugs, personalized medicine and the analysis of biological fluids such as blood, urine and saliva [13–15]. The PS-MS ionization source was developed in 2009 by Cooks et al. [12]. The setup consists of a wire welded to an alligator clip attached to a support. A high voltage is applied to a triangular piece of paper moistened with a solution containing the analyte, which spreads by capillarity through the paper to its tip. The formation of a spray in the shape of a Taylor cone is induced by the electric field arising from the difference between the high voltage (3–5 kV) applied to the paper and the capillary voltage of the mass spectrometer [16]. PS-MS was designed for the analysis of complex mixtures, mainly because of the large number of hydroxyl groups in cellulose, the main component of the paper. These hydroxyls are of great importance, since they interact with the macromolecules of complex mixtures, e.g., proteins and blood hemoglobin, retaining them and preventing their passage to the mass spectrometer [17].

3 Material and Methods

Analytical standard of cortisol was purchased from Sigma-Aldrich (St. Louis, MO, USA). Alpha-amylase, potassium chloride (KCl), calcium chloride dihydrate (CaCl2·2H2O), magnesium chloride hexahydrate (MgCl2·6H2O), methylparaben (HOC6H4COOCH3), dipotassium phosphate (K2HPO4), monopotassium phosphate (KH2PO4) and sodium carboxymethylcellulose were purchased from Vetec (Rio de Janeiro, RJ, Brazil). Deionized water was obtained using a Milli-Q purification system. Artificial saliva was prepared according to the methodology described by Arain and collaborators (2014). To obtain its mass spectrum, 5.0 µL of the artificial saliva sample was placed directly on the paper and then 15.0 µL of methanol was added as an eluent solution. For the analysis of the artificial saliva sample containing cortisol, an aliquot of 5.0 µL of saliva solution with cortisol (3.62 µg L−1) was placed on the paper. This concentration was selected based on the levels of salivary cortisol reported in the literature [5]. After that, 15.0 µL of methanol was added as an eluent solution. All the aforementioned analyses were performed in triplicate.


4 Results and Discussion

The mass spectrum in positive mode for the cortisol analytical standard is shown in Fig. 1. In this figure, the signal at 363 m/z represents the cortisol molecule (molecular mass M = 362 a.m.u.) after association with a hydrogen atom (atomic mass = 1 a.m.u.); this peak is therefore denoted [M + H]+ = 363 m/z. The most abundant peak is due to the association of two cortisol molecules with a hydrogen atom, [2M + H]+ = 725 m/z, while the third peak represents a sodium adduct of two cortisol molecules, [2M + Na]+ = 747 m/z (atomic mass of Na = 23 a.m.u.). Figure 2 shows the mass spectrum of the artificial saliva with cortisol addition. In this figure, the characteristic ions of the cortisol molecule, highlighted by arrows, include a sodium adduct [M + Na]+ = 385 m/z, in addition to the ions already mentioned in the analysis of Fig. 1. The ions at 347, 509 and 671 m/z are attributed to saliva compounds.
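The adduct assignments above follow from simple nominal-mass arithmetic; a short sketch reproducing the four cortisol-related peaks quoted in the text (unit-resolution masses, as used by the paper's low-resolution analyzer):

```python
# Sketch of the adduct arithmetic behind the assigned peaks: for cortisol
# (M = 362 u) the protonated and sodiated species reproduce the observed
# m/z values 363, 385, 725 and 747.
M_CORTISOL = 362  # nominal molecular mass of cortisol, u
H, NA = 1, 23     # nominal masses of H and Na, u

def adduct_mz(n_molecules: int, cation_mass: int, charge: int = 1) -> int:
    """m/z of an [nM + cation]^charge+ adduct at unit resolution."""
    return (n_molecules * M_CORTISOL + cation_mass) // charge

peaks = {
    "[M+H]+":   adduct_mz(1, H),    # 363
    "[M+Na]+":  adduct_mz(1, NA),   # 385
    "[2M+H]+":  adduct_mz(2, H),    # 725
    "[2M+Na]+": adduct_mz(2, NA),   # 747
}
print(peaks)
```

The same helper extends directly to other cations (e.g., K at 39 u) if further adducts need to be checked against a spectrum.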

Fig. 1 Mass spectrum of the cortisol analytical standard (peaks at 363, 725 and 747 m/z)


Mass spectra were acquired with a Thermo Fisher Scientific LCQ FLEET model equipped with a low-resolution ion trap mass analyzer. The experimental conditions of the analyzer were: capillary tube at 275 °C, voltage of 50 V on the capillary and 100 V on the tube lens. The range of masses analyzed was 100–800 m/z. The paper spray ionization source consisted of an alligator connector connected to a +5.0 kV source (detection in positive mode). A triangular piece of chromatographic paper with 1.0 cm sides was attached to the alligator connector and used as a sampler. The paper was positioned 0.5 cm from the entrance of the mass spectrometer. Principal Component Analysis (PCA) was performed using MATLAB software, version 7.9.0.529 (MathWorks, Natick, USA), and PLS Toolbox, version 5.2.2 (Eigenvector Research Inc., Manson, USA).

Fig. 2 Mass spectrum of artificial saliva with cortisol addition (cortisol ions at 363, 385 and 747 m/z; saliva ions at 347, 509 and 671 m/z)

In mass spectrometry there is no coelution of compounds, as occurs in chromatographic analysis, nor overlapping of bands, as in spectroscopic analysis; it is the most selective of the analytical techniques, which makes the analysis essentially unambiguous. The selectivity achievable with different mass analyzers depends on their resolution, ranging from low-resolution instruments with precision on the order of one Dalton, such as the one used in this work, up to high-precision analyzers. Sensitivity depends on the concentration of the compound and on its capacity for ionization and fragmentation. Typical working concentrations are on the order of ppm or ppb, but they can vary greatly with the target compound and the matrix in which it is found. In this study, we did not explore the limits of detection and quantification of salivary cortisol by PS-MS. We have demonstrated, however, that cortisol is a compound with great potential to be monitored by the PS-MS technique and that its detection can be performed in the presence of the matrix formed by the constituents of the artificial saliva, which simulates the analysis of salivary cortisol in a real sample. Thus, this method can be extended to studies with different purposes, with no need for sample preparation.

The PCA results are shown in Fig. 3. The purpose of this analysis was to verify whether unsupervised multivariate methods are able to identify and group saliva samples according to the cortisol concentration present in them. For this, a data matrix containing mass spectra of artificial saliva samples with and without cortisol addition was used. The spectra were obtained from three analyses of artificial saliva samples without cortisol addition and three analyses of saliva with cortisol at a concentration of 3.62 µg L−1. After obtaining the spectra, PCA was performed on the mean-centered data matrix.
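To make the procedure concrete, the PCA step described above (mean-centering a samples × m/z data matrix, then inspecting scores and loadings) can be sketched as follows. This is an illustrative sketch only: the data are synthetic stand-ins, not the study's spectra, and channel indices 363 and 385 are chosen merely to mimic the cortisol peaks.

```python
import numpy as np

# Illustrative sketch (synthetic data, not the study's spectra):
# PCA via SVD of a mean-centered matrix of mass spectra.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 500))        # 6 samples x 500 m/z channels (noise)
X[3:, [363, 385]] += 50.0            # last 3 "spiked" samples get intense
                                     # peaks at channels mimicking 363/385 m/z

Xc = X - X.mean(axis=0)              # mean-centering, as in the paper
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = U * s                       # sample coordinates on each PC
loadings = Vt                        # channel weights per PC
explained = s**2 / np.sum(s**2)      # fraction of variance per PC
```

Plotting `scores[:, 0]` against `scores[:, 1]` separates the spiked and blank samples along PC1, and the largest absolute PC1 loadings point back at the spiked m/z channels, which is exactly how Fig. 3b is read.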


A. R. E. Dias et al.

Fig. 3 PCA of saliva samples by mass spectrometry

Figure 3a shows the PCA scores, with explained variance equal to 95.3% for PC1 and 2.6% for PC2. The two groups formed are highlighted and are separated along the first principal component (PC1): on the left are the saliva samples without cortisol addition, and on the right the samples with cortisol addition. The dispersion along the second principal component (PC2), together with the small percentage of variance explained in that direction, suggests only a minor disturbance of the data due to variations in the experimental technique. The separation of the groups with and without cortisol along PC1 can be explained through the loadings of the first principal component (Fig. 3b). The loadings graph highlights three mass-to-charge ratios, 363, 385 and 747 m/z, as the peaks most responsible for the group separation evidenced in the PCA. These ions correspond to protonated cortisol (363 m/z), a sodium adduct of cortisol (385 m/z) and a sodium adduct of two cortisol molecules (747 m/z), i.e., mass-to-charge ratios characteristic of the cortisol molecule. In addition, since the highlighted loadings are positive, the saliva samples located on the right of the scores graph have higher abundances at these peaks. This corroborates the use of PCA for a qualitative assessment of cortisol levels in saliva samples, because high abundance values at the peaks corresponding to the cortisol molecule reflect a higher concentration of cortisol in the samples. Therefore, samples located to the right in the PCA scores graph have higher cortisol concentrations, while samples located to the left have lower ones.

5 Conclusions

The fragmentation profile of the cortisol molecule was obtained by mass spectrometry using paper spray ionization. The ions 363, 385, 725 and 747 m/z were identified and assigned, allowing the recognition of the presence of cortisol in an artificial saliva sample. The PS-MS technique proved efficient and promising for the determination of salivary cortisol without any sample preparation step. Starting from a volume of only 5.0 µL of sample, placed directly on the equipment's sampler, it was possible to identify the presence of cortisol by analyzing the mass spectrum in approximately 45 s. Principal component analysis showed that mass spectrometry using paper spray ionization can be used for the qualitative assessment of salivary cortisol levels, grouping samples with respect to their cortisol concentration range. The analysis of the PCA loadings suggested three mass-to-charge ratios as candidate markers for quantifying the concentration of cortisol in saliva samples. Finally, the set of explanations and observations presented in this work can be used to develop methods of analysis in biofluids with minimal or even no sample preparation. We emphasize the unambiguous character of this analysis due to its selectivity. Furthermore, it can be used together with multivariate analysis methods that make it possible to qualitatively and quantitatively assess the concentration of analytes of interest present in the analyzed samples.

Evaluation of Cortisol Levels in Artificial Saliva by Paper …

Acknowledgements The authors thank Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq, Project number 437516/2018-0) for their financial support.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Alvi SN, Hammami MM (2019) A simple ultraperformance liquid chromatography-tandem mass spectrometry method for measurement of cortisol level in human saliva. Int J Anal Chem 2019:4909352. https://doi.org/10.1155/2019/4909352
2. Luz Neto LMd, Vasconcelos FMNd, Silva JEd, Pinto TCC, Sougey ÉB, Ximenes RCC (2019) Differences in cortisol concentrations in adolescents with eating disorders: a systematic review. J Pediatr 95:18–26
3. Burla R, Matos M, Rocha T, Correa F, Silva C (2018) Anatomofisiologia do estresse e o processo de adoecimento. R Vértices 20:281–289. https://doi.org/10.19180/1809-2667.v20n22018p281-289
4. Evers AW et al (2010) How stress gets under the skin: cortisol and stress reactivity in psoriasis. Br J Dermatol 163:986–991. https://doi.org/10.1111/j.1365-2133.2010.09984.x
5. Vogeser M, Durner J, Seliger E, Auernhammer C (2006) Measurement of late-night salivary cortisol with an automated immunoassay system. Clin Chem Lab Med 44:1441–1445. https://doi.org/10.1515/cclm.2006.244
6. Vieira JGH, Nakamura OH, Carvalho VM (2014) Determination of cortisol and cortisone in human saliva by a liquid chromatography-tandem mass spectrometry method. Arq Bras Endocrinol Metabol 58:844–850
7. Castagnola M et al (2017) Salivary biomarkers and proteomics: future diagnostic and clinical utilities. Acta Otorhinolaryngol Ital 37:94–101. https://doi.org/10.14639/0392-100X-1598
8. Montenegro ACP, Assunção VRD, Luna MGB, Raposo PVN, Bandeira F (2010) Evaluation of levels of cortisol in saliva using electro-chemical luminescence in low-risk and high-risk pregnancies. Rev Bras Saude Mater Infant 10:69–74
9. Lemes LC, Caetano Júnior PC, Strixino JF, Aguiar J, Raniero L (2016) Analysis of serum cortisol levels by Fourier transform infrared spectroscopy for diagnosis of stress in athletes. Res Biomed Eng 32:293–300. https://doi.org/10.1590/2446-4740.01216
10. Maciel LÍL, Carvalho TC, Pereira I, Vaz BG (2019) Determinação de designer drugs em saliva por paper spray mass spectrometry. Quím Nova 42:676–682. https://doi.org/10.21577/0100-4042.20170379
11. Guo Q, Gao L, Zhai Y, Xu W (2018) Recent developments of miniature ion trap mass spectrometers. Chin Chem Lett 29:1578–1584. https://doi.org/10.1016/j.cclet.2017.12.009
12. Liu J, Wang H, Manicke N, Lin J-M, Cooks R, Ouyang Z (2010) Development, characterization, and application of paper spray ionization. Anal Chem 82:2463–2471. https://doi.org/10.1021/ac902854g
13. Espy RD et al (2014) Paper spray and extraction spray mass spectrometry for the direct and simultaneous quantification of eight drugs of abuse in whole blood. Anal Chem 86:7712–7718. https://doi.org/10.1021/ac5016408
14. Manicke NE, Bills BJ, Zhang C (2016) Analysis of biofluids by paper spray MS: advances and challenges. Bioanalysis 8:589–606. https://doi.org/10.4155/bio-2015-0018
15. Shi RZ, El Gierari TM, Manicke NE, Faix JD (2015) Rapid measurement of tacrolimus in whole blood by paper spray-tandem mass spectrometry (PS-MS/MS). Clin Chim Acta 441:99–104. https://doi.org/10.1016/j.cca.2014.12.022
16. Yang Q et al (2012) Paper spray ionization devices for direct, biomedical analysis using mass spectrometry. Int J Mass Spectrom 312:201–207. https://doi.org/10.1016/j.ijms.2011.05.013
17. Taverna D, Di Donna L, Bartella L, Napoli A, Sindona G, Mazzotti F (2016) Fast analysis of caffeine in beverages and drugs by paper spray tandem mass spectrometry. Anal Bioanal Chem 408:3783–3787. https://doi.org/10.1007/s00216-016-9468-1

Tinnitus Relief Using Fractal Sound Without Sound Amplification

A. G. Tosin and F. S. Barros

Abstract

Tinnitus is an irritating sensation of hearing a sound when no external sound is present, and it can interfere with patients' quality of life. Its high and increasing prevalence has reached millions of people. Fractal sounds are musical nuances with combined, semi-predictable connections, generated by recursive procedures. This study presents a possible alternative treatment, the use of fractal sound without sound amplification to relieve tinnitus, aimed at a medium- and low-income population that does not have access to individual sound amplification devices (hearing aids). The study included 32 individuals of both genders and different ages, complaining of tinnitus, with sensorineural hearing loss of up to 60 dB. The project was approved by CEP-UTFPR. Each research participant underwent anamnesis, meatoscopy, pure tone audiometry, speech audiometry (logoaudiometry), immittanciometry and acuphenometry, and answered the Tinnitus Handicap Inventory (THI) questionnaire and a numerical scale at the beginning and at the end of the study. The fractal sound was heard through the participants' own cell phones, with the headphones provided by the cell phone manufacturer, for one hour daily over three months. The type of tinnitus found was high-pitched, with an average of 5673.5 Hz. According to the THI, the predominant severity of tinnitus was the moderate degree at the beginning of the study and the mild degree at the end of it. The THI showed a significant difference in the functional, emotional and catastrophic scales and also in the final total score of the questionnaire. There was a statistically significant difference between the initial and final averages of the numerical scale, in the level of awareness and in the level of discomfort with tinnitus.

Keywords

Tinnitus · Sound Therapy · Fractal Music

A. G. Tosin (&)
UTFPR/PPGEB, UTFPR-Curitiba, Curitiba, Brazil
e-mail: [email protected]

F. S. Barros
UTFPR/PPGEB-DAFIS, UTFPR-Curitiba, Curitiba, Brazil

1 Introduction

Tinnitus is an irritating sensation of hearing a sound when no external sound is present; in 1990, Jastreboff called it a phantom auditory perception [1, 2]. Tinnitus is a common problem and a symptom that is difficult to treat [1–4]. The impairments in the daily routine of patients with tinnitus range from difficulties in concentration, memory loss and impaired reasoning to problems with insomnia, depression, irritation, anxiety, fatigue and stress [5]. In 90% of cases, tinnitus is accompanied by auditory changes, and most cases of tinnitus are linked to problems in the cochlea [1, 4]. Emotional issues interfere with the feeling of improvement or worsening of tinnitus [3, 6]. Studies carried out with twins have evaluated the genetic and non-genetic influences on tinnitus and concluded that hereditary factors may interfere to some degree in bilateral tinnitus [7, 8]. According to its place of origin, tinnitus can be classified as auditory or para-auditory; this classification has contributed to the improvement of diagnoses, treatments and their results. Auditory tinnitus is generated by the sensorineural auditory system, through changes in the external, middle and internal ear. Para-auditory tinnitus is caused by muscular and vascular structures whose activity is perceived by the cochlea [9, 10]. From neurophysiology, it is known that tinnitus is the product of the relationship between the activities of several centers of the central nervous system. Within this process, auditory pathways, non-auditory pathways and the limbic system are stimulated, causing an imbalance in neuronal activity that ends up being perceived as tinnitus [2, 4, 10]. Tinnitus is an

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_160



erroneous activity in the auditory pathways, and this activity is interpreted as a sound [1, 4]. Since 1980, sound therapy has been used for the rehabilitation of patients with tinnitus [11]. "Sound therapy" designates the application of external noise to reduce the perception of tinnitus. The therapy sound can promote total or partial masking. In partial masking, there is a change in the perception of tinnitus that reduces the relevance of the tinnitus symptom, and the intensity of the masking sound is not strong [10–13]. In this study, the Tinnitus Retraining Therapy (TRT) approach was used. The TRT technique is supported by the neurophysiological model of tinnitus and by sound enrichment [1, 2]. The standard guidance given to research participants was based on TRT directive counseling [1, 2, 14]. Fractal sound is one application of the fractal principle: an algorithm is applied repeatedly to create a composition, and the resulting sounds become harmonious and attractive to human hearing [15]. With the evolution of computer speed, it became possible to generate compositions based on fractal algorithms in real time [16]. Fractal sound resembles wind chimes or organ music and follows a series of small patterns [17]. It is a melodic, predictable but not monotonous song, with a slower tempo [18]. Predictable without being tiring, fractal music echoes what is already known, yet does not exactly reproduce music that is fixed in the mind, making it the ideal sound therapy for this study [15]. Fractal music was previously studied in individuals with tinnitus, with effective responses [15, 17]. This type of treatment is available only in high-cost hearing aids, leaving the medium- and low-income Brazilian population without this possibility of improvement.
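As an illustration of the recursive idea only (this is not the composition used in the study; the scale and the mapping are arbitrary choices), a self-similar sequence such as Thue-Morse can be mapped onto pitches to obtain a patterned, semi-predictable melody:

```python
def thue_morse(n):
    """First n terms of the self-similar Thue-Morse sequence."""
    seq = [0]
    while len(seq) < n:
        seq += [1 - b for b in seq]   # append the complement of the whole prefix
    return seq[:n]

# Fragment of a C-major scale, in Hz (arbitrary illustrative choice)
SCALE = [261.63, 293.66, 329.63, 349.23, 392.00]

def fractal_melody(n):
    """Walk over the scale, driven by the self-similar sequence."""
    idx, notes = 0, []
    for b in thue_morse(n):
        idx = (idx + (1 if b else -1)) % len(SCALE)
        notes.append(SCALE[idx])
    return notes
```

Because the driving sequence repeats its own structure at every scale, the resulting melody is recognizable without being an exact repetition, which is the property the text attributes to fractal music.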
The Tinnitus Handicap Inventory (THI) questionnaire was created in 1996, in the United States, by Newman et al., with the intention of measuring and quantifying tinnitus in a standardized way [19]. The questions in the questionnaire are not affected by the age, sex, or hearing loss of the tinnitus sufferer [19]. The THI measures serve as outcome standards and are applied pre- and post-treatment to quantify the improvement in tinnitus: a decrease in the score, i.e., a reduction in self-perception, means that the individual feels less tinnitus, the desired positive result [19, 20]. The interpretation of the scores according to Newman et al. [19] is: 0–16, slight handicap; 18–36, mild handicap; 38–56, moderate handicap; and 58–100, severe handicap [20]. The THI is subdivided into three scales. The Functional scale (F), with 11 items, reflects the individual's performance in the areas of mental functioning. The Emotional scale (E), with 9 items, represents a wide range of affective responses to tinnitus, such as anger, anxiety, frustration, irritability and depression. The Catastrophic scale (C) reflects


the agony, anguish, and despair of patients in the presence of tinnitus, and comprises 5 items [19, 20]. As the objective of the present study was to provide tinnitus patients with a low-cost intervention, the research participants' own cell phones were used as sound generators.
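The severity bands above can be captured in a small helper. This function is a hypothetical illustration of the Newman et al. [19] interpretation, not software used in the study; the bands assume the usual THI total of 0–100 points.

```python
def thi_severity(total):
    """Classify a THI total score (0-100) into the handicap bands of
    Newman et al.: 0-16 slight, 18-36 mild, 38-56 moderate, 58-100 severe."""
    if not 0 <= total <= 100:
        raise ValueError("THI total must lie between 0 and 100")
    if total <= 16:
        return "slight"
    if total <= 36:
        return "mild"
    if total <= 56:
        return "moderate"
    return "severe"
```

Applied to the group means reported later in this paper, the initial average of 36.31 falls just past the mild band while the final average of 21.06 falls inside it, matching the reported shift toward lower severity.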

2 Materials and Methods

This study was approved by the Ethics Committee on Research involving Human Beings (CEP-UTFPR), under Consubstantiated Opinion No. 3,500,008 of August 11, 2019. The research participants were attended at the Dante Castagnollie Health Unit and the São Camilo Private Clinic, located in the city of Campo Largo, in the state of Paraná. This is a before-and-after clinical trial with tinnitus patients. It is a study involving minimal risk of discomfort to the participants: they could feel embarrassed when answering the THI questionnaire or when performing the tonal and vocal audiometry, acuphenometry and immittanciometry exams. All clarifications and guidance were provided to the participants throughout the intervention. Both the questionnaire and all the examinations required for the research were applied and performed by the researcher Aurea Maria Gil Tosin, a speech therapist duly registered with the Regional Council of Speech Therapy under number 5676 and qualified to perform such procedures. In cases where irritation and discomfort occurred when experiencing the fractal sound, the participant was instructed to readjust the sound volume as needed. The privacy and anonymity of the participants, whose data were collected, gathered and stored in medical records and databases, were preserved. When necessary, compensation for transportation expenses was made available and the right to indemnification was guaranteed, as recommended by Resolution 466/12. All the resources applied in this research were provided by the researcher. The intervention consisted of applying non-amplified fractal sound for one hour a day for three months.
For the sample, previously selected adult participants of both sexes were included in the research if they: had a constant tinnitus complaint, unilateral or bilateral, for more than 6 months; had no claustrophobia, given the need to enter an acoustic booth for the tonal and vocal audiometry and acuphenometry exams; were over 18 years old, with an unobstructed external auditory canal; had normal audiometry or sensorineural hearing loss of at most 60 dB per frequency; had normal immittanciometry; had no medication treatment to improve tinnitus before or during the research; committed to performing the exercise daily, listening to fractal music for one hour a day, for three months, whether asleep or awake; owned a working cell phone and headset; and consented to their


participation in the research by signing the free and informed consent form. The exclusion criteria were: irritation and discomfort when experiencing the fractal sound, and ear infections during the research. The following clinical outcomes were considered: acuphenometry, the numerical scale, awareness of and discomfort with tinnitus, responses to the Tinnitus Handicap Inventory (THI) questionnaire, and patient relief and satisfaction. The exams and questions were applied to each research participant before starting the training with the fractal sound and at the end of it. Of the 43 research participants who started the test with fractal music, nine did not complete the research, one did not follow the procedure guidelines and was excluded at the end of the study, and one did not adapt to fractal music and gave up participating in the research; this was the only patient who indicated worsening of tinnitus. Therefore, 32 participants actually performed the daily exercises and completed the survey. The sound enrichment used was the Fractal Sound [21], with the participant's cell phone as the sound generator. The fractal sound was adjusted to a comfortable and audible intensity, sufficient to promote tinnitus relief [1, 2]. The participant listened to the fractal music through his or her own cell phone, using the headphones provided by the manufacturer of the equipment. The fractal sound used was the same for all participants and lasted exactly one hour, after which all noise ended [21]. It was verified that the fractal sound was audible and comfortable for the patient, using the Sound Meter application to measure the sound volume. Thereafter, the position of the volume bar and the volume/intensity number were visually defined on the participant's cell phone screen. After this control step, the sound volume was always kept at the same value or lower, never above it.
Meatoscopy was performed with an MD otoscope, model Xenon. The pure tone audiometry, vocal audiometry and acuphenometry exams were performed with a Madsen audiometer, model Itera II, calibrated on April 9, 2019. The immittance test was performed with an Interacoustics immittanciometer, model AT235, calibrated on the same date. The test results were stored in the Audireport software, suitable for the audiology service. All research data and results were entered into the IBM SPSS Statistics software (free version).

3 Results

The group of participants in this research consisted of 32 individuals with tinnitus, 23 females (71.9%) and 9 males (28.1%), with a mean age of 49.3 ± 10.3 years, with


no difference between genders (p = 0.85). Regarding education, primary education I was observed in 5 cases (15.6%), primary education II in 3 (9.4%), secondary education in 11 (34.4%) and higher education in 13 (40.6%). The types of tinnitus most commonly found in the patients were cicada, whistle and wheezing. Most of them did not change after the use of the fractal sound, remaining a whistle in 100% of the cases of that type and a cicada in 87.5% of the cicada-type cases; the other, less frequent types also remained equal (p > 0.05). The same was observed for the location of the tinnitus, which remained in the left ear (45.4%), in the right ear (66.7%), in both ears (83.3%) and in the head (87.5%), without significant change (p > 0.05). A sense of relief was observed in 28 cases after the intervention with fractal music (87.5%), and 29 participants (90.7%) felt satisfied or very satisfied after the intervention. Regarding the numerical scale, a significant decrease was observed after the intervention (p < 0.001). There was also a significant decrease in the level of tinnitus awareness (p < 0.001) and in the level of discomfort with tinnitus (p < 0.001). In the audiometry exam, there was no significant difference in the laterality (p = 0.24) of the sensorineural hearing loss between the moments. The degree of hearing loss was, for the most part, mild on both sides (p = 0.54). The distribution of the hearing loss configuration in patients with sensorineural hearing loss was similar in both ears (p = 0.80). In the acuphenometry exam, there was no statistically significant difference in the results before and after the intervention with fractal music, with respect to frequency, intensity or minimum masking level. In the THI questionnaire, a significant decrease was observed after the intervention in all scales: Functional, Emotional and Catastrophic (p < 0.001).
In the final result of the THI, a significant decrease was also observed after the intervention with fractal music, as shown in Fig. 1. Regarding the severity of tinnitus, among the participants who started the study in the "severe" classification, 75% completed the study in the "mild and slight" handicap degrees and 25% in the "moderate" degree. Of the participants with a "moderate" handicap, 55.5% completed the study in the "mild and slight" category. Among the individuals who started the study in the "light" grade, 50% were classified as "mild" handicap at the end of the study, 38.8% remained in the "light" grade and 11.1% moved to a more severe category; these differences are not statistically significant (p = 0.31). The data are shown in Table 1. The measures of central tendency and dispersion are expressed as mean and standard deviation (mean ± SD) for continuous variables with symmetrical distribution and as median, minimum and maximum (median, minimum–maximum) for those with asymmetric distribution. The estimation of the



Fig. 1 Final result of the Tinnitus Handicap Inventory before and after the intervention with fractal music

difference of continuous variables with normal distribution was performed with parametric tests (Student's t test, one-way ANOVA and factorial ANOVA for repeated measures with Duncan's post hoc test), whereas for variables with asymmetric distribution the non-parametric Wilcoxon test was used. Differences between categorical variables were estimated using Pearson's chi-square test and McNemar's test. For all tests, a significance level of 5% was considered, and the sample studied gives a minimum test power of 90%.
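For illustration, the paired comparisons described above can be sketched with recent SciPy on simulated data. The numbers below are invented, not the study's measurements, and the exact McNemar test is computed here as a binomial test on the discordant pairs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
before = rng.normal(6.1, 1.5, size=32)           # e.g. numerical-scale ratings
after = before - rng.normal(1.2, 0.8, size=32)   # simulated post-intervention drop

# Parametric paired comparison (symmetric distributions): Student's t test
t_stat, p_t = stats.ttest_rel(before, after)

# Non-parametric paired comparison (asymmetric distributions): Wilcoxon
w_stat, p_w = stats.wilcoxon(before - after)

# Exact McNemar test on paired yes/no outcomes: binomial test on the
# discordant pairs (hypothetical counts: 14 improved vs 2 worsened)
p_mcnemar = stats.binomtest(14, 14 + 2, 0.5).pvalue
```

With 32 pairs and a clear simulated shift, all three p-values come out well below the 5% significance level used in the paper.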

4 Discussion

The focus of this study was on participants with tinnitus that is bothersome and persistent and that negatively affects the individual's everyday quality of life. In view of the difficulty most participants encounter when they need a hearing aid, either in SUS, the Brazilian public health system (due to waits of years), or in the private system (due to high costs), it was necessary to expand the possibilities of improving and relieving tinnitus while waiting for the solution of the problem. It was from this dilemma faced by tinnitus sufferers that the idea of a simple, low-cost aid emerged: making the patient's own cell phone the sound generator for listening to fractal music. Following the recommendations of the scientific community, in this study tinnitus was measured by the THI questionnaire, by a numerical scale and by psychoacoustic measures. The group analyzed was composed of 32 people with tinnitus, 23 of whom were female (71.9%) and 9 male (28.1%). All participants reported chronic tinnitus, with at least six months of recurrence. The predominant hearing loss was mild sensorineural loss, with a descending configuration. To classify hearing loss by degree, the mean values at 500, 1000 and 2000 Hz were used. The Guide for Basic Audiological Evaluation, produced by the Federal Council of Speech Therapy, was used for all results and classifications of this study [22]. The elderly are the main complainers of tinnitus; in this study, however, the volunteer participants were of different ages, with a mean age of 49.3 ± 10.3 years, a minimum of 24 years and a maximum of 75 years. The sample profile and the average age were not compatible with several previous studies, in which older individuals predominate [5, 15, 23]. The types of tinnitus did not change with the use of fractal music; what the participants presented was a reduction in the perceived intensity of the tinnitus, reporting that they continued to hear the same tinnitus, but with less

Table 1 Severity of tinnitus before and after the intervention with fractal music

Before    | After: Mild  | After: Light | After: Moderate | Total
Slight    | 1 (3.10%)    | 0 (0.00%)    | 0 (0.00%)       | 1 (3.10%)
Mild      | 9 (28.10%)   | 7 (21.90%)   | 2 (6.30%)       | 18 (56.30%)
Moderate  | 3 (9.40%)    | 2 (6.30%)    | 4 (12.50%)      | 9 (28.10%)
Severe    | 2 (6.30%)    | 1 (3.10%)    | 1 (3.10%)       | 4 (12.50%)
Total     | 15 (46.90%)  | 10 (31.30%)  | 7 (21.90%)      | 32 (100.00%)

Each cell shows the number of participants (score) and the percentage of the total sample.


discomfort and with perceived relief. Most participants were satisfied with the study. Regarding the type of tinnitus, the most mentioned was the one whose characteristics resemble the sound of a cicada. By psychoacoustic measurement, most of the tinnitus was found at high frequencies, the most frequently matched frequency being 8000 Hz, with a generated average of 5673.5 Hz. Acuphenometry did not show significant differences between the evaluations performed before and after the intervention: no differences were found in the frequency, intensity or minimum masking level of the exam, nor between the right and left ears. Even so, significant differences were observed in the numerical scale, in the level of awareness and in the level of discomfort with tinnitus before and after the use of fractal music without sound amplification. The numerical scale showed an initial average of 6.1 points and a final average of 4.9 points, a reduction with a statistically significant difference (p < 0.001), which suggests that the therapeutic approach was effective for the patient's habituation to the tinnitus. In this study, we sought the lowest sound intensity capable of improving tinnitus. Most therapeutic approaches suggest that, for tinnitus habituation to occur, an amplified sound generator coupled to hearing aids is needed; in this research, however, the patients' cell phones were used as the sound generator, without sound amplification, using fractal music as the stimulus [21]. According to the THI questionnaire, the predominant degree in the survey was "mild", a degree of discomfort with enough tinnitus to cause participants to seek help.
The data obtained from the THI questionnaire at the beginning and at the end of the research differed in a statistically significant way on the functional, emotional and catastrophic scales and in the total score of the questionnaire, demonstrating that there was indeed a decrease in tinnitus and increasing the credibility of the use of fractal music, without sound amplification, to relieve tinnitus. Important improvements were noted in the THI scores: the group average dropped from 36.31 points in the initial response to 21.06 points after 3 months of listening to the fractal sound, a statistically significant difference. The results of the present study were effective in demonstrating that three months of listening to fractal music, using the cell phone as a sound generator at minimum intensity, can provide relief from the discomfort caused by tinnitus; however, further studies should be performed to verify and confirm these data. After three months of hearing the fractal sound without sound amplification, most participants reported satisfaction with the research methodology. Regarding the question about tinnitus relief and improvement, 87.5% of the


participants answered "yes", meaning that they perceived relief of the tinnitus, indicating that sound therapy using sound generators without sound amplification may be useful for people with constant tinnitus. The direct and expected benefit to the participants of this research was the reduction and relief of the sensation of constant tinnitus. For society, it was a scientific demonstration that the use of fractal sounds relieves tinnitus, promoting an improvement in the provision of services in the area of audiology, in the public and private health sectors, by qualifying and introducing a simple, low-cost method, with the individual's own cell phone as the sound generator, that increases the quality of life of tinnitus patients.

5 Conclusion

The answers obtained in this study indicate that listening to fractal music through the tinnitus sufferer's own cell phone can be a good alternative for those who have constant tinnitus. The sound stimulus of fractal music without amplification, used as a usual sound therapy, proved effective in mitigating the harmful effects of tinnitus. The conclusions of this study demonstrate that sound therapy with fractal music as the stimulus can provide a significant reduction in tinnitus. Future studies with fractal sounds may confirm its effects in relieving tinnitus, with fractal music without sound amplification helping to reduce the awareness of, and the annoyance caused by, tinnitus. In the future, this simple and low-cost method may be introduced in the provision of audiology services in the public and private health sectors, helping patients with tinnitus.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Jastreboff P (1990) Phantom auditory perception (tinnitus): mechanisms of generation and perception. ncbi.nlm.nih.gov/pubmed/217585. https://doi.org/10.1016/0168-0102(90)90031-9
2. Jastreboff P, Hazell J (2004) Tinnitus retraining therapy: implementing the neurophysiological model. Cambridge University Press. ISBN 9780521088374
3. Santos G (2013) The influence of sound generator associated with conventional amplification for tinnitus control: randomized blind clinical trial. Trends Hear 18:2331216514542657. https://doi.org/10.1177/2331216514542657
4. Sanchez TG, Mak MP, Pedalini MEB, Levy CPD, Bento RF (2005) Zumbido e evolução auditiva em pacientes com audição normal. Int Arch Otorhinolaryngol 9(3):220–227
5. Reavis KM, Rothholtz VS, Tang O, Carroll JA, Djalilian H, Zeng FG (2012) Temporary suppression of tinnitus by modulated sounds. J Assoc Res Otolaryngol 13:561–571. https://doi.org/10.1007/s10162-012-0331-6
6. Bertuol B et al (2018) Tinnitus, quality of life and emotional issues of hearing aids users. Rev Distúrbios da Comunicação 30(1):80–89
7. Bogo R, Farah A, Karlsson KK, Pedersen NL, Svartengren M, Skjönsberg Å (2017) Prevalence, incidence proportion, and heritability for tinnitus: a longitudinal twin study. Ear Hear 38(3):292–300. https://doi.org/10.1097/AUD.0000000000000397
8. McFerran DJ, Stockdale D, Holme R, Large CH, Baguley DM (2019) Why is there no cure for tinnitus? Front Neurosci 13:802. https://doi.org/10.3389/fnins.2019.00802
9. Figueiredo R, Azevedo A (2013) Zumbido, 1st edn. Revinter, Rio de Janeiro, pp 6–29. ISBN 9788537205099
10. Suzuki FA, Suzuki FA, Onishi ET, Penido NO (2018) Psychoacoustic classification of persistent tinnitus. Braz J Otorhinolaryngol 84:583–590. https://doi.org/10.1016/j.bjorl.2017.07.005
11. Hoare DJ, Searchfield GD, El Refaie A, Henry JA (2014) Sound therapy for tinnitus management: practicable options. J Am Acad Audiol 25(1):62–75. https://doi.org/10.3766/jaaa.25.1.5
12. Ibarra D, Tavira-Sanchez F, Recuero-Lopez M, Anthony BW (2017) In-ear medical devices for acoustic therapies in tinnitus treatments, state of the art. Auris Nasus Larynx. https://doi.org/10.1016/j.anl.2017.03.020
13. Searchfield GD, Durai M, Linford T (2017) A state-of-the-art review: personalization of tinnitus sound therapy. Front Psychol 8:1599. https://doi.org/10.3389/fpsyg.2017.01599
14. Jastreboff PJ, Hazell JW (1993) A neurophysiological approach to tinnitus: clinical implications. Br J Audiol 27(1):7–17. https://doi.org/10.3109/03005369309077884
15. Sweetow R, Kuk F, Caporali S (2015) A controlled study on the effectiveness of fractal tones on subjects with minimal need for amplification. Hearing Rev 22:9–30
16. Su Z, Wu T (2007) Music walk, fractal geometry in music. Physica A 380:418–428. https://doi.org/10.1016/j.physa.2007.02.079
17. Sekiya Y, Takahashi M, Kabaya K, Murakami S, Yoshioka M (2013) Using fractal music as sound therapy in TRT treatment. Br J Audiol, Article 11623
18. Hsu K, Hsu A (1990) Fractal geometry of music. Proc Natl Acad Sci USA 87:938–941
19. Newman C, Jacobson G, Spitzer J (1996) Development of the Tinnitus Handicap Inventory. https://doi.org/10.1001/archotol.1996.01890140029007
20. McCormack A et al (2016) A systematic review of the reporting of tinnitus prevalence and severity. sciencedirect.com/science/article/pii/S0378595516300272
21. Segovia J. Musica Fractal—Duo Guitarras 07/14. Available at: https://www.youtube.com/channel/UCcWwWdwODdwvb2EGtVbbS2g. Accessed 2 July 2019
22. Conselho Federal de Fonoaudiologia (2017) Guia de Orientações na Avaliação Audiológica Básica. fonoaudiologia.org.br/cffa/wp-content/uploads
23. Lee K, Makino K, Yamahara K (2017) Evaluation of tinnitus retraining therapy for patients with normal audiograms versus patients with hearing loss. Auris Nasus Larynx 45(2):215–221. https://doi.org/10.1016/j.anl.2017.03.009

Analysis of the Heat Propagation During Cardiac Ablation with Cooling of the Esophageal Wall: A Bidimensional Computational Modeling S. de S. Faria , P. C. de Souza , C. F. da Justa , S. de S. R. F. Rosa , and A. F. da Rocha

Abstract

Atrial fibrillation (AF) is a cardiac arrhythmia that affects around 33 million people worldwide. A standard form of treatment for AF is cardiac ablation with the radiofrequency catheter (RFCA). RFCA generates heat through the ablation electrode, and this process can cause severe lesions in the atrial and esophageal tissues. This work presents a two-dimensional computational model that uses geometry and boundary conditions that approximate cardiac ablation conditions with a non-irrigated catheter. The paper’s objective is to simulate the RFCA and analyze the heat propagation during cardiac ablation when the esophageal wall is cooled down. The esophagus, the connective tissue, and the heart wall were simulated, assuming laminar blood flow in the heart wall. The simulated electrode temperatures were 60, 70, and 80 °C for 60 seconds with constant peak voltage. The cooling temperature was 0 °C. The results showed that cooling decreases the temperature between tissues. The temperature in connective-cardiac tissue dropped by approximately 6.51%. In the esophageal-connective tissue, the temperature decreased by about 28.22%. In all cases, there was also a slowing in temperature increase, which can help prevent tissue damage. The results suggest that the method has significant potential for improving the safety of RFCA.

Keywords

Cardiac ablation · Catheter ablation · Computer modeling · Thermal injury · Esophageal cooling

S. de S. Faria (✉) · A. F. da Rocha
Electrical Engineering Graduate Program, University of Brasilia, Brasilia, Brazil

P. C. de Souza
Mechatronic Systems Graduate Program, University of Brasilia, Brasilia, Brazil

C. F. da Justa · S. de S. R. F. Rosa
Electronics Engineering, Faculty of Gama, University of Brasilia, Brasilia, Brazil

S. de S. R. F. Rosa · A. F. da Rocha
Biomedical Engineering Graduate Program, University of Brasilia, Brasilia, Brazil

1 Introduction

Atrial fibrillation (AF) is a cardiac arrhythmia that affects around 33 million people worldwide [1]. The disease is the most common chronic non-communicable disease in Brazil, and cardiovascular diseases are responsible for about 30% of the deaths registered in the country [2, 3]. AF increases the risk of stroke by up to five times [1]. According to SOBRAC (Brazilian Society of Cardiac Arrhythmias), an estimated 5–10% of Brazilians will have this cardiac arrhythmia [4]. AF occurs in the left atrium (LA), which is anatomically aligned with the aortic artery and the esophagus [5]. The aorta presses the esophagus against the LA wall [6]. The esophagus sits below the aortic arch and constitutes the primary posterior relationship of the heart base [6]. The thermal lesions propagated to the esophagus are proportional to the surface area of contact between the esophagus and the LA [7]. RFCA is considered the gold standard for the treatment of AF. The procedure isolates the electrical signals that originate the AF so that the electrical cardiac function can return to normal [8]. As the technique is performed in the LA and generates a large amount of heat, one of its risks is the development of thermal lesions at different levels: among them, ulcer, reflux, gastritis, and atrial-esophageal fistula (AEF) [9–11]. As pharmacological strategies for the treatment of AF, anticoagulants and drugs for heart rate control are used [5]. Clinical procedures are recommended for all patients, while rhythm control is recommended for symptomatic patients [5, 12]. However, in addition to being an expensive treatment,

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_161

1079


S. de S. Faria et al.

drugs do not cure AF, and some cases are recurrent. In this scenario, cardiac ablation (CA) by radiofrequency emerged as the most effective treatment for controlling heart rate [13]. Esophageal cooling can be a complementary technique to RFCA in the future because it could decrease the excessive heating of the tissues adjacent to those ablated [14]. Previous studies [14–17] have suggested that cooling can provide better boundary conditions for the operation of the RFCA. This better temperature distribution may help to decrease injuries and to reduce or avoid the risk of developing AEF. Esophageal cooling reduces the severity of the lesions that may result from RF ablation [17]. A mathematical model of the process is useful because it provides a means of further investigating this approach to preventing esophageal injury from RFCA. Moreover, it can serve to guide the clinical investigations that are currently underway [15]. Esophageal cooling may provide a significant protective effect against thermal injuries, as shown in recent studies [11, 16, 17]. The goal of the present study was to determine the probable temperature distribution in the cardiac, connective, and esophageal tissues resulting from the cooling of the esophageal wall (EW), in order to understand how heat spreads when the esophagus is cooled.

2 Materials and Methods

The simulation developed in this study consists of two-dimensional modeling with thermal and electrical coupling and includes cardiac, connective, and esophageal tissues. The two-dimensional computational model consists of all elements coupled in the COMSOL software: laminar flow, heat transfer in solids and fluids, and Joule heating. The thicknesses of the simulated tissues are the same used in studies [16, 18]: 2.5 mm for heart tissue (HT), 2.5 mm for connective tissue (CT), and 3 mm for the esophagus. According to [19], this scenario is close to reality. In the present study, the electrode is inserted 0.5 mm into the HT and positioned at 90° [9]. The electrode has a radius of 1.5 mm, the same used commercially. The sample size was 20 mm. As in articles [8, 18], this study uses a laminar blood velocity of 8.5 cm/s. The single electrode is non-irrigated. In the model, the blood is in contact with the electrode and the HT. The blood works as a convective cooler of the tissue [11]. In the model, the flow occurs from left to right, as shown in Fig. 1. The flow is non-symmetrical. The simulated temperatures at the electrode are 60, 70, and 80 °C. The study has a duration of 60 seconds. Points A and B, in Fig. 1, represent the intersections between the simulated tissues. At point A, there is contact between the HT and the CT. At point B, there is contact between the CT and the esophagus.

Fig. 1 A two-dimensional model with the distribution of the different simulated thermoelectric conditions

For the implementation of the thermal interaction (Fig. 1), the lateral and upper contours of the model are set to a temperature of 37 °C, the average body temperature, as chosen in other studies [8, 11]. This condition means that, at a limit distant from the electrode, the temperature is 37 °C. The lower thermal boundary condition, the control temperature (Tc), is simulated in two situations: 0 °C when the EW is cooled, and 37 °C when the EW is not cooled. The central equation used to solve the computer simulation is Pennes's bioheat equation (Eq. 1) [8, 20–22], which describes the spatial distribution of temperature in the tissues:

$$ \rho c \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + q - Q_p + Q_m, \qquad (1) $$

where ρ is the mass density in kg/m³, c is the specific heat in J/(kg·K), T is the temperature in °C, k is the thermal conductivity of the material in W/(m·K), q is the heat source in W/m³, Qp is the perfusion heat loss in W/m³, and Qm is the metabolic heat generation, also in W/m³. At the RFCA frequency, around 500 kHz, electrical energy is deposited in a small radius around the active electrode. The small area of interest used in the ablation, with low displacement currents, makes the tissues behave as a purely resistive biological environment [11, 18]. Studies [8, 9, 11] also take a quasi-static approach and, therefore, use the Joule effect (Eq. 2) to describe the heating of the tissue. Thus, the heat source is

$$ q = \mathbf{J} \cdot \mathbf{E}, \qquad (2) $$

where J is the current density in A/m², and E is the electric field strength in V/m. For resistive heating, Pennes's bioheat equation must be coupled with the Laplace equation (Eq. 3).
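In the quasi-static approximation, both field quantities in Eq. (2) derive from the electric potential: with E = −∇V and J = σE, the heat source can be written directly in terms of the solution of the potential problem,

$$ q = \mathbf{J} \cdot \mathbf{E} = \sigma \, \mathbf{E} \cdot \mathbf{E} = \sigma \, \lvert \nabla V \rvert^{2}, $$

which is the form in which the resistive term enters Eq. (1).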


The solution provides accurate distributions of temperature and heat propagation in tissues [8]. The Laplace equation allows the calculation of the electric potential and the current distribution:

$$ \nabla \cdot (\sigma \nabla V) = 0, \qquad (3) $$

where V is the voltage, and σ is the electrical conductivity (S/m). The simulations use the standard protocols of RF energy supply: constant peak voltage and constant temperature at the electrode, as in studies [19, 22, 23]. Table 1 shows the electrical and thermal properties of the simulated elements [11]. The RFCA uses two electrodes in the procedure: an active electrode and a dispersive one [5, 20]. The active electrode is in contact with the HT and the blood. In this study, the peak voltage is constant at 20 V. The dispersive electrode has a voltage of 0 V and is placed around the model, as shown in Fig. 1. The idea is to mimic a monopolar configuration, in which the RF current flows between the active and the dispersive electrodes, as in studies [8, 11, 19–21].
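The thermal side of this setup can be illustrated with a deliberately simplified sketch: a single uniform slab with the heart-tissue properties of Table 1, pure conduction (no blood flow, no Joule term, no layered tissues — so this is not the authors' COMSOL model), the constant-temperature electrode protocol, and the lower boundary held at Tc. Grid size, time step, and probe depth are illustrative choices.

```python
import numpy as np

# Simplified conduction-only sketch of the cooling comparison.
# Hypothetical single-tissue model; heart-tissue properties from Table 1.
RHO, C, K = 1200.0, 3200.0, 0.700      # kg/m^3, J/(kg K), W/(m K)
ALPHA = K / (RHO * C)                  # thermal diffusivity, m^2/s
H = 0.5e-3                             # 0.5 mm grid spacing
NZ, NX = 17, 41                        # 8 mm deep x 20 mm wide domain
DT = 0.25                              # s, below the H^2/(4*ALPHA) stability limit

def simulate(t_cool, te=60.0, t_end=60.0):
    """Return the temperature (deg C) at ~5 mm depth (point B) after t_end s."""
    T = np.full((NZ, NX), 37.0)
    for _ in range(int(t_end / DT)):
        # explicit finite-difference step of the diffusion part of Eq. (1)
        lap = (T[2:, 1:-1] + T[:-2, 1:-1] +
               T[1:-1, 2:] + T[1:-1, :-2] - 4 * T[1:-1, 1:-1]) / H**2
        T[1:-1, 1:-1] += DT * ALPHA * lap
        T[0, :] = T[:, 0] = T[:, -1] = 37.0   # far boundaries at body temperature
        T[0, 18:23] = te                      # electrode patch held at Te
        T[-1, :] = t_cool                     # lower boundary: EW temperature Tc
    return T[10, NX // 2]                     # ~5 mm deep, under the electrode

print(simulate(37.0), simulate(0.0))          # without vs. with EW cooling
```

Even this crude sketch reproduces the qualitative finding of the paper: holding the lower boundary at 0 °C lowers the deep-tissue temperature relative to the uncooled case, while the region near the electrode stays hot.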

3 Results

The computer simulation used coupled thermal and electrical systems. The results present a two-dimensional simulation with the temperature values at points A and B (Fig. 1) at the instant of 60 s. The active electrode (Te) and cooling (Tc) temperatures were pre-defined values used to simulate different heating situations. The lateral boundaries of each simulation were set to 37 °C. The thermal injuries generated during the RFCA procedure involve the temperature and the time during which the tissue is exposed to the heat generated at the electrode. The study related these variables and obtained significant improvements with the cooling of the EW, reducing possible esophageal lesions created with different temperatures at the electrode.

Table 1 Thermoelectric characteristics of the materials [11]

                     σ (S/m)    ρ (kg/m³)   c (J/kg·K)   k (W/m·K)
Heart tissue         0.610      1200        3200         0.700
Connective tissue    0.020      900         2222         0.200
Esophagus            0.127      1000        3700         0.400
Blood                0.667      1000        4180         0.541
Electrode            4.6×10⁶    21500       132          71

σ: electrical conductivity; ρ: mass density; c: specific heat; k: thermal conductivity
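As a quick sanity check on Table 1, the thermal diffusivity α = k/(ρc) of each tissue can be computed; it indicates how fast each layer conducts heat. The observation that the connective tissue has the lowest diffusivity of the simulated tissues is a reading of the table, not a claim from the paper.

```python
# Thermal diffusivity alpha = k / (rho * c) for the Table 1 materials.
materials = {                 # (sigma S/m, rho kg/m^3, c J/(kg K), k W/(m K))
    "heart tissue":      (0.610, 1200, 3200, 0.700),
    "connective tissue": (0.020,  900, 2222, 0.200),
    "esophagus":         (0.127, 1000, 3700, 0.400),
    "blood":             (0.667, 1000, 4180, 0.541),
}
for name, (_, rho, c, k) in materials.items():
    alpha = k / (rho * c)     # m^2/s
    print(f"{name}: {alpha:.2e} m^2/s")
```

The connective layer (α ≈ 1.0 × 10⁻⁷ m²/s) diffuses heat more slowly than the heart tissue (α ≈ 1.8 × 10⁻⁷ m²/s), consistent with its role as a thermal buffer between the heart and the esophagus.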


The studies [11, 19, 24] showed that cell death occurs at temperatures above 50 °C. Thus, the present study considered 50 °C as the temperature at which thermal injuries could occur. The temperature curves with and without EW cooling are also presented in the results. In all simulations, the curve with cooling was less pronounced than the one without cooling.

3.1 Simulation for Te = 60 °C

The simulation uses two different boundary conditions on the esophageal wall (EW): 37 and 0 °C (lower boundary in Fig. 1). The temperature distribution without cooling (Tc = 37 °C) is shown in Fig. 2a. The temperature at point A (TA) was 64.729 °C and, at point B (TB), it was 51.940 °C. Fig. 2b shows the temperature distribution when cooling occurs (Tc = 0 °C); in this case, TA = 60.212 °C and TB = 37.353 °C. Therefore, the temperature at point A decreased by 4.517 °C and, at point B, by 14.587 °C. The boundary around the model maintained a temperature of 37 °C, which corresponds to the average body temperature, as expected. With or without cooling applied to the EW, the temperature at point A reached 50 °C in 14 s, as shown in Fig. 3; the cooling of the EW therefore had minimal effect on the temperature behavior at this point. At point B, on the other hand, the temperature reached 50 °C in 50 s when no cooling was applied, whereas it always stayed below 50 °C when cooling was applied.

3.2 Simulation for Te = 70 °C

In the simulation for Te = 70 °C, the temperature distribution without cooling (Tc = 37 °C) is shown in Fig. 4a. The temperature at point A reached 68.731 °C and, at point B, 53.172 °C. The temperature distribution when cooling occurs (Tc = 0 °C) is shown in Fig. 4b; in this case, TA = 64.259 °C and TB = 37.958 °C. Therefore, the temperature at point A decreased by 4.472 °C and, at point B, by 15.214 °C. The evolution of the temperatures at points A and B with and without cooling is shown in Fig. 5. At point A, the temperature without cooling reached 50 °C in 12 s; with cooling, the curve reached 50 °C at the same time. From 17 s onwards, the temperature curves start to diverge, and the effect of cooling becomes evident at point A. At point B, without cooling, the temperature reached 50 °C in 45 s. With cooling, the temperature remained close to 37 °C during the whole 60 s.


Fig. 2 A two-dimensional simulation with Te = 60 °C. a Without cooling in the EW, Tc = 37 °C. b With cooling in the EW, with Tc = 0 °C

Fig. 3 Temperature (°C) × time (s) curves at points A and B with and without cooling in the EW at Te = 60 °C

Fig. 4 A two-dimensional simulation with Te = 70 °C. a Without cooling in EW, Tc = 37 °C. b With cooling in the EW, with Tc = 0 °C


Fig. 5 Temperature (°C) × time (s) curves at points A and B with and without cooling in the EW at Te = 70 °C

3.3 Simulation for Te = 80 °C

In the simulation for Te = 80 °C, the temperature distribution without cooling (Tc = 37 °C) is shown in Fig. 6a. The temperature at point A reached 72.559 °C and, at point B, 54.263 °C. Fig. 6b shows the temperature distribution when cooling occurs (Tc = 0 °C); in this case, TA = 68.154 °C and TB = 39.096 °C. Therefore, the temperature at point A decreased by 4.405 °C and, at point B, by 15.167 °C. The evolution of the temperatures at points A and B with and without cooling is shown in Fig. 7. At point A, the temperature without cooling reached 50 °C in 10 s; with cooling, the curve reached 50 °C at almost the same time. From 17 s onwards, the temperature curves start to diverge, and the effect of cooling becomes evident at point A. At point B, the temperature without cooling reached 50 °C in 42 s, while the temperature with cooling stayed below 40 °C throughout the ablation.
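The three simulations can be cross-checked against the average reductions quoted in the abstract (about 6.51% at point A and 28.22% at point B): averaging the per-case relative drops computed from the values reported above reproduces those figures to within rounding.

```python
# Cross-check: average relative temperature drops at points A and B,
# from the three simulations (Te = 60, 70, 80 deg C) reported above.
no_cool_a = [64.729, 68.731, 72.559]   # TA without cooling, deg C
cooled_a  = [60.212, 64.259, 68.154]   # TA with cooling, deg C
no_cool_b = [51.940, 53.172, 54.263]   # TB without cooling, deg C
cooled_b  = [37.353, 37.958, 39.096]   # TB with cooling, deg C

def mean_drop_pct(before, after):
    drops = [100 * (b - a) / b for b, a in zip(before, after)]
    return sum(drops) / len(drops)

print(round(mean_drop_pct(no_cool_a, cooled_a), 2))  # ~6.52 (abstract: 6.51, from unrounded data)
print(round(mean_drop_pct(no_cool_b, cooled_b), 2))  # ~28.22, matching the abstract
```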

4 Discussion

The data obtained in the simulations with and without cooling suggest that EW cooling may make the RFCA procedure safer. The improvement occurs in situations in which the temperature field in regions close to the EW would otherwise reach temperatures above 50 °C, at which the cells undergo denaturation [9, 20]. With the EW cooling provided, the temperature at the interface between the connective tissue and the EW never

Fig. 6 A two-dimensional simulation with Te = 80 °C. a Without cooling in the EW, Tc = 37 °C. b With cooling in the EW, Tc = 0 °C


Fig. 7 Temperature (°C) × time (s) curves at points A and B with and without cooling in the EW at Te = 80 °C

reached 50 °C. This feature may improve the safety of the procedure. The results obtained in the present study suggest that cooling the EW generates a better temperature distribution for the ablation process, and that the cooling does not interfere with the ablation itself. Study [12] states that, at temperatures below 47.9 °C, the tissue may recover and the ablation may become ineffective. In Figs. 2, 4, and 6, it can be seen that the electrode maintained a constant temperature over a considerable ablation area and that, even with cooling, the temperature at point A does not fall below 50 °C. This means that ablation will occur as expected in the procedure. Moreover, comparing the temperature curves in Figs. 3, 5, and 7, the curve for the situation with EW cooling is less pronounced than that without cooling. The reduction of the temperature allows the RFCA to occur in a more controlled way that is safer for the patient. The blood flow at the heart wall was an essential parameter in the simulation: as can be seen in Figs. 2, 4, and 6, the heat spread horizontally with the movement of the blood, as in studies [11, 21]. The blood flow maintained the desired temperature at the top boundary, showing that the convection coefficient provided by the blood should keep the temperature at the HT constant and uniform. It is important to observe that the blood

impedance is relatively low; as the current conducts through the tissue, there is rapid resistive heating. The simulation used a non-irrigated catheter. Consequently, the energy of this kind of catheter passes through the tissue; resistive heating occurs and generates substantial heat, which is simultaneously transferred to deeper tissues by conduction while the passing blood convectively cools the tissue. The heat conduction can be observed in Figs. 2, 4, and 6. The thermal boundary conditions preserved the boundary temperature, which remained at 37 °C during the EW cooling; this behavior can be observed near the edges in Figs. 2, 4, and 6, and it is essential for system validation. Studies [8, 20] also used this thermal condition in their simulations. For the electrode temperature Te = 60 °C, the temperature at point A stayed above 60 °C throughout the procedure, regardless of cooling. At point B, for Te = 60 °C, the temperature after cooling was close to the average body temperature, and no injury should occur. For Te = 70 °C, with or without cooling, there would be lesions in the CT at point A within 12 s. With cooling, the temperature dropped to 64.259 °C, which failed to prevent tissue damage; cooling decreased the temperature by just over 4 °C. However, at point B, the temperature without cooling reached 50 °C in 45 s. With cooling, the


temperature was below 40 °C. This condition showed that ablation could occur without injury to the EW. For Te = 80 °C, at point A, the CT cells would be carbonized due to the high temperature; the cooling is not strong enough to decrease the temperature at point A within 60 s, producing a decrease of just over 4 °C. At point B, the temperature without cooling reached 50 °C in 42 s. The results suggest, for Te = 80 °C, the possibility of treatment in this condition while avoiding, through the cooling action, the appearance of esophageal lesions. Control of the variables is possible because the temperature with cooling stayed below 40 °C from the beginning of the ablation. However, for temperatures like this, it is crucial to control the duration of the RFCA. Studies [19, 20] claim that keeping the electrode in the same position and applying energy for longer than 60 s can cause a deeper injury due to thermal conduction in the tissues. The present study showed that cooling can delay the spread of heat to the tissues adjacent to the HT; however, this hypothesis should undergo further study. Studies [18, 20] showed that the propagation of heat continues by thermal conduction after the electrode is turned off, a phenomenon they call thermal latency. The results obtained in this study suggest that cooling can be a tool to circumvent this problem, or at least create a more favorable condition, considering how much the temperature in the other tissues decreased with the cooling of the EW. In some scenarios, overheating could still happen if the cooling does not reach the tissue in time to avoid injury, or if the peak voltage is too high and generates too much heat; further studies would be needed to improve the model. If ablation proceeds in such cases, deeper lesions between the HT and the EW may occur. Thus, cooling brings the possibility of flattening the temperature curve and, therefore, potentially avoiding thermal injuries of any depth.

5 Conclusion

The results presented suggest that the cooling of the EW allows an effective ablation to occur, with the death of the abnormally functioning cells while avoiding lesions in regions with healthy cells. The study showed a decrease in the internal temperature between the cardiac, connective, and esophageal tissues when esophageal cooling was used, and also showed that cooling slows down the spread of heat. With these motivating results, the group intends, in future studies, to continue developing the theme with more complex analyses, more elaborate graphic modeling, and the realization of physical experiments.

Acknowledgements This work had the support of the Coordination for the Improvement of Higher Education Personnel—Brazil (CAPES)—Financing Code 001, and the support of the Brazilian National Council for Scientific and Technological Development (CNPq).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Salmasi S, Loewen PS, Tandun R, Andrade JG, De Vera MA (2020) Adherence to oral anticoagulants among patients with atrial fibrillation: a systematic review and meta-analysis of observational studies. BMJ Open 10(4):1–14
2. Costa Martinez A, Walter Dafico Pfrimer F, Scroccaro Costa M, Yoshihiro Nakano A (2018) How to develop a single channel electrocardiograph with a low budget. IEEE Lat Am Trans 16(4):1057–1063
3. SOBRAC—Sociedade Brasileira de Arritmias Cardíacas, at https://sobrac.org/home/
4. Martins A, Baruco E. SOBRAC—Sociedade Brasileira de Arritmias Cardíacas, at https://sobrac.org/home/release-dia-mundial-docoracao-e-a-necessidade-de-atencao-paras-as-arritmias-cardiacas/
5. de Oliveira DB (2015) Estudo sobre o efeito de técnicas preventivas na incidência de lesões esofageanas após ablação do átrio esquerdo para tratamento de fibrilação atrial. Universidade de São Paulo
6. Hall JE, Guyton AC (2011) Tratado de Fisiologia Médica, 12th edn. Elsevier Editora Ltda, Rio de Janeiro
7. Lemola K et al (2004) Computed tomographic analysis of the anatomy of the left atrium and the esophagus: implications for left atrial catheter ablation. Circulation 110(24):3655–3660
8. Berkmortel C, Avari H, Savory E (2018) Computational modelling of radiofrequency cardiac ablation to study the effect of cooling on lesion parameters. In: Proceedings of the Canadian Society for Mechanical Engineering International Congress, Canada
9. Olson M, Nantsupawat T, Sathnur N, Roukoz H. Cardiac ablation technologies. Elsevier Inc
10. Knopp H et al (2014) Incidental and ablation-induced findings during upper gastrointestinal endoscopy in patients after ablation of atrial fibrillation: a retrospective study of 425 patients. Heart Rhythm 11(4):574–578
11. González-Suárez A, Berjano E (2016) Comparative analysis of different methods of modeling the thermal effect of circulating blood flow during RF cardiac ablation. IEEE Trans Biomed Eng 63(2):250–259
12. Thanavaro JL (2019) Catheter ablation for atrial fibrillation. J Nurse Pract 15(1):19–25
13. Scanavacca M (2016) Current atrial fibrillation ablation: an alert for the prevention and treatment of esophageal lesions. Arq Bras Cardiol 106(5):354–357
14. Sohara H, Satake S, Takeda H, Yamaguchi Y, Nagasu N (2014) Prevalence of esophageal ulceration after atrial fibrillation ablation with the hot balloon ablation catheter: what is the value of esophageal cooling? J Cardiovasc Electrophysiol 25(7):686–692
15. Montoya MM et al (2019) Protecting the esophagus from thermal injury during radiofrequency ablation with an esophageal cooling device. J Atr Fibrillation 11(5):1–7
16. de S. Faria S, de Souza PC, Souza GP, de S. Rosa SRF, da Rocha AF (2018) Estudo sobre a propagação do calor durante o procedimento de ablação cardíaca com e sem resfriamento da parede esofágica. In: Anais do XII Simpósio de Engenharia Biomédica—IX Simpósio de Instrumentação e Imagens Médicas
17. Leung L et al (2019) Esophageal cooling for protection during left atrial ablation: a systematic review and meta-analysis. medRxiv, p 19003228
18. Berjano EJ, Hornero F (2004) Thermal-electrical modeling for epicardial atrial radiofrequency ablation. IEEE Trans Biomed Eng 51(8):1348–1357
19. Pérez JJ, D'Avila A, Berjano E (2015) Electrical and thermal effects of esophageal temperature probes on radiofrequency catheter ablation of atrial fibrillation: results from a computational modeling study. J Cardiovasc Electrophysiol 26(5):556–564
20. Irastorza RM, D'Avila A, Berjano E (2017) Thermal latency adds to lesion depth after application of high-power short-duration radiofrequency energy: results of a computer-modeling study. J Cardiovasc Electrophysiol 1(1):1–6
21. González-Suárez A, Berjano E, Guerra JM, Gerardo-Giorda L (2016) Computational model for prediction of the occurrence of steam pops during irrigated radiofrequency catheter ablation. IEEE Comput Cardiol Conf 43:1117–1120
22. Berjano EJ, Hornero F (2005) A cooled intraesophageal balloon to prevent thermal injury during endocardial surgical radiofrequency ablation of the left atrium: a finite element study. Phys Med Biol 50(20)
23. Berjano EJ (2006) Theoretical modeling for radiofrequency ablation: state-of-the-art and challenges for the future. Biomed Eng Online 5:1–17
24. Berjano EJ, Hornero F (2005) What affects esophageal injury during radiofrequency ablation of the left atrium? An engineering study based on finite-element analysis. Physiol Meas 26:837–848

Development of a Rapid Test for Determining the ABO and Rh-Blood Typing Systems E. B. Santiago and R. J. Ferreira

Abstract

Laboratory tests conventionally used to determine blood typing could represent a barrier to patients with reduced accessibility in emergency situations. Aiming to expand access to a simple and quick laboratory test to identify the ABO and Rh systems, we propose the development of a colorimetric test in a Vertical Flow format. The test was developed based on the treatment of chromatographic paper membranes, in order to filter the red blood cells after the reaction with the antibodies impregnated inside the device. When the blood reacts with the membranes, the plasma passes through the device, forming a visible bluish complex on the last membrane. The prototype showed satisfactory results when tested on real samples, visibly showing the presence or absence of reaction in the samples added to the device, enabling the determination of the ABO and Rh systems with accuracy.

Keywords

Blood typing · POC · ABO-system · Rh-system · Rapid tests

1 Introduction

Although more than 600 erythrocyte antigens are known today, the most important in clinical situations are the ABO and Rh antigens because of their role in blood transfusion compatibility. These systems are therefore the most sought-after markers in blood typing tests. In the last decades, the use of sophisticated and high-performance equipment has made routine laboratory tests increasingly automated and efficient. However, the high cost and the delay in delivering

E. B. Santiago (✉) · R. J. Ferreira
Programa de Pós-Graduação Em Engenharia Biomédica, Universidade Tecnológica Federal do Paraná, Curitiba, Brazil

the results still hamper access to these tests, creating the opportunity for the development of new solutions in laboratory diagnosis [1, 2]. Rapid tests, called Point of Care (POC) tests, are simple laboratory tests performed close to the patient, with faster results than conventional tests, in which the samples are sent to a laboratory [3]. These tests offer advantages over conventional laboratory tests, such as simple sample handling, execution facilitated by not requiring additives such as anticoagulants, the possibility of collecting capillary blood, low cost, and short response time (5–15 min) [4]. The development of simple, fast, and reliable tests for the determination of blood typing is of great value for compatibility checks in emergency situations, when there is a need for blood transfusion, and in situations where there is no access to laboratory facilities [5]. This work aimed to expand access to the ABO and Rh blood typing test by developing a rapid colorimetric test, based on the ability of the bromocresol green dye (BCG) to bind to serum albumin (HSA). In this test, the agglutinated blood is "filtered", not passing through the pores of the chromatographic paper, which allows the HSA to bind to the BCG dye, forming the blue-colored BCG-HSA complex. Non-agglutinated blood passes through the pores of the chromatographic paper, assuming a brown color when mixed with the dye [6]. This POC model can accurately distinguish the presence of blood agglutination, determining the ABO and Rh systems with accuracy and accessibility. The objective of this project was to broaden access to examinations for determining blood typing, providing faster diagnosis in emergency situations.

2 Materials and Methods

The prototypes were developed by assembling layers of chromatographic papers treated with the reagents in a 3D-printed cartridge with 4 openings (Fig. 5). Three of these

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_162


openings received membranes treated with antibodies (anti-A, anti-B, and anti-RhD), and the reading was performed individually. The openings of the cartridge were arranged to receive the sample on one side and display the result on the other, so the blood passes through the layers by capillarity until it reaches the other side of the cartridge. The following structure was initially determined for the cartridge:

• Sample pad: polyester membrane to receive the sample and transfer it to the next layer;
• Antibody pad: layer of Whatman filter paper membranes treated with antibodies;
• Hydrophobic pad: filter paper to receive the sample from the Antibody pad and distribute it to the detection zone;
• Detection zone: membranes treated with bromocresol green dye, assembled directly after the Hydrophobic pad, which determine the passage of whole blood or plasma from the Antibody pad.

After analyzing the literature, the following membrane treatments were performed:

1. Hydrophobic pad treatment: Whatman filter paper No. 1 treated with BSA 1% solution for 10 min and dried at 25 °C for 2 h.
2. Antibody pad treatment: Whatman filter papers No. 1 and No. 4 treated with 0.1 M carbonate buffer for 10 min and dried at room temperature for 2 h. After drying, they were submerged in antibody solution for 10 min and dried at room temperature for 2 h.
3. Detection zone treatment: Whatman filter paper No. 1 submerged in BCG solution for 10 min and dried at 25 °C for 2 h.
4. SEFAR PETEX® membrane: produced with polyester material.

Whatman filter papers No. 1 and No. 4 and the SEFAR PETEX® membrane, 180 µm, 210 µm, and 75 µm thick, respectively, were tested for sample transfer capacity between membranes. For the two Whatman models, the retention capacities of the agglutinated samples were analyzed. To this end, three initial tests were organized to determine the ideal conformation of the membranes for the prototype:

1. Whatman filter paper No. 1 versus No. 4 as Antibody pad: the two membranes were treated with antibodies to test their ability to retain agglutinated cells. For this analysis, both membranes were treated with anti-A antibodies and assembled with the other layers of membranes. A sample of A+ typing was used for the analysis.
2. Hydrophobic pad layer: to test the need for this layer in the prototype, two models were first assembled, with and without the Hydrophobic pad layer. Two prototypes were assembled from both models, one with the Antibody pad treated with anti-A antibody and the other treated with anti-B. The porosity of the Whatman and SEFAR PETEX® membranes was also tested, with both analyzed acting as the Hydrophobic pad layer. For these analyses, an AB+ typing sample was used.
3. Sample volume for testing: 20, 30, and 40 µL of whole blood were added to the models to analyze the performance of the membranes. For this, the Antibody pad membranes were treated with anti-A antibodies. An A+ sample was used for testing.

After determining the optimal test configuration, the prototype was assembled and tested on real samples from a sample bank. These were tested in direct typing using anti-immunoglobulin antibodies in whole blood. The reactions were tested in the models for individual verification of the possible reaction responses for the ABO and Rh systems. After that, the cartridges were assembled, simulating commercial manufacture, and were tested with varied samples from the sample bank. All membrane treatment and testing procedures were carried out under controlled temperature and humidity conditions, at 25 °C and relative humidity below 40%.

3 Results

A. Whatman No. 1 versus No. 4 as Antibody pad

The result of this test can be seen in Fig. 1, in which the prototype with Whatman No. 1 as Antibody pad is on the left and shows better red cell agglutination than the model with Whatman No. 4 (on the right).

B. Whatman No. 1 and SEFAR PETEX® as Hydrophobic pad

The results of the aforementioned tests on the Hydrophobic pad layer are shown in Figs. 2 and 3, in which the reaction in prototypes with and without the Hydrophobic pad can be observed, as well as the comparison of the performance of the Whatman and SEFAR PETEX® membranes, respectively.

C. Sample volume

Tests with different sample volumes can be seen in Fig. 4.

D. Assembling the cartridge

The cartridges used for this project were printed in polylactic acid (PLA) on a 3D printer (Ultimaker), with dimensions of 4.5 × 8 cm. These cartridges are shown in Figs. 5 and 6.

Development of a Rapid Test for Determining the ABO …

Fig. 1 Comparison with Antibody pads using Whatman paper No. 1 and No. 4. On the left, the Antibody pad is made up of Whatman paper No. 1 and on the right with Whatman paper No. 4. Lighter colored spots indicate formation of the bluish colored HSA-BCG complex, showing the presence of blood agglutination and the passage of plasma to the Detection zone


Fig. 2 a and b Antibody pad treated with anti-B antibodies, without (a) and with (b) Hydrophobic pad; c and d Antibody pad treated with anti-A antibodies, without (c) and with (d) Hydrophobic pad

E. Analysis of the colorimetric reactions of the ABO and Rh systems

The reactions for each response of the ABO and Rh systems are shown in Fig. 7.

F. Cartridge analysis in the commercial model

Different samples of whole blood from the sample bank were tested in the final prototypes. The results are shown in Fig. 8.

4 Discussion

For the creation of the proposed test, analyses were performed with different membranes and sample volumes to provide the best visualization of the result in the detection zone. First, Whatman filter papers No. 1 and No. 4 were tested as Antibody pad. Due to the greater thickness of the Whatman No. 4 membrane, the prototypes assembled from this material needed a larger sample volume for the result to be observed. However, as can be seen in Fig. 1, the larger sample volume overloaded the membranes, causing red blood cells to overflow into the detection zone. When using the Whatman No. 1 membrane, a smaller sample volume was needed for passage between the steps, and it was possible to visualize the expected result in the detection zone.

Fig. 3 Hydrophobic pad composed of Whatman membrane No. 1 (on the left) and SEFAR PETEX® membrane (on the right). For these tests, the Antibody pad was treated with anti-A antibody, and the sample used was blood type A+


Fig. 4 Performance of prototypes with increasing sample volume. All prototypes had anti-A antibodies in the Antibody pad and were tested with A+ typing samples

Fig. 5 Cartridge model manufactured by additive manufacturing in PLA. The model has 4 independent openings for sample entry, arranged in two plates

Fig. 6 Arrangement of the membranes inside the cartridge. Each opening is intended for a set of membranes differentiated by their Antibody pads, starting from top to bottom with the anti-A region, followed by anti-B and anti-RhD

To analyze the viability of the Hydrophobic pad layer, a test was performed comparing the response of a sample with and without Whatman membrane No. 1 treated with 1% BSA. In the proposed model, the Hydrophobic pad is a stage composed of a membrane positioned between the Antibody pad and the detection zone. Its function is essentially to forward the sample resulting from the interaction of the blood with the antibodies fixed in the Antibody pad to the last membrane of the prototype, where the appearance of a bluish or reddish-brown color indicates the presence or absence of agglutination in the sample. The membrane used for this step must be inert, not interacting with the sample with which it comes into contact. Thus, a blocking solution composed of 1% BSA was chosen for the treatment of the membrane used as the Hydrophobic pad, preventing the sample from binding to nonspecific locations on the membrane [7]. To validate the action of this step in the system, the response in the Detection zone of the prototype was analyzed with and without the Whatman membrane No. 1 treated with the blocking solution. It can be seen from Fig. 2 that the presence of Whatman membrane No. 1 as Hydrophobic pad improved the visualization of the result. The treatment with 1% BSA, in addition to acting as a second filter for the samples retained in the Antibody pad, prevented nonspecific connections from occurring in this phase without interfering with the result. In addition, this membrane prevented excess agglutinated red cells in the Antibody pad from impairing the visualization of the result in the detection zone, improving the sensitivity of the test.

After proving the need for a membrane in the Hydrophobic pad layer, the feasibility of using SEFAR PETEX® membranes for this step was tested. This membrane was chosen as the Sample pad due to its ability to distribute the sample homogeneously to the next layer. Because of this capability, a test was also carried out to replace the Whatman membrane No. 1 treated with 1% BSA with the SEFAR PETEX® membrane acting as Hydrophobic pad, directing the whole blood or plasma from the Antibody pad to the detection zone. The comparison of results between the Whatman No. 1 membrane treated with 1% BSA and SEFAR PETEX® can be seen in Fig. 3. When using the SEFAR PETEX® membrane, the response signal in the Detection zone had more interference than with the Whatman membrane No. 1 treated with 1% BSA solution. This demonstrated the need for a thicker membrane after the Antibody pad, capable of helping to filter red blood cells that may escape from the previous step and interfere with the Detection zone.

After defining the composition of the prototype, the sample volume giving the best results in the Detection zone was analyzed. The prototypes were tested with sample volumes of 20, 30 and 40 µL. When adding 20 µL of blood to the Sample pad, there was not enough sample flow between the membranes, making reading in the detection zone impossible. With 40 µL, the sample was retained in the Antibody pad with red blood cells that could interfere with the reading. When the prototype was tested with 30 µL of sample, the results obtained were clearer, with less interference from red blood cells improperly reaching the Detection zone due to Antibody pad saturation. With the ideal prototype model defined, the cartridges were printed on a 3D printer (Ultimaker) with polylactic acid (PLA) filament. These were produced with dimensions of 4.5 × 8 cm, having four front and rear openings, as can be seen in Fig. 5.

Fig. 7 Visualization of the reactions of the ABO and Rh systems in the cartridges. The bluish reaction of the HSA-BCG complex can be observed in cases where the blood is agglutinated on contact with the Antibody pad, and the reddish color when the red blood cells are not captured by this step

Fig. 8 Cartridges tested with samples of different blood types
The cartridges were designed with internal and lateral fittings that, in addition to keeping the membranes immobile inside, produce pressure between the membranes and assist the passage of the sample from side to side. The proposed model contains four openings because, in addition to the antibodies of the ABO and Rh systems (anti-A, anti-B, anti-RhD), a control portion containing natural anti-erythrocyte antibodies reactive to all red blood cells, such as anti-CD36, is expected to be added in the future; this was not feasible at this stage of the project. The membranes were arranged in the four openings of the cartridge and separated according to the antibody present in the Antibody pad. The composition of the membranes and the size of the cut were identical to those of the previous tests. The membranes were positioned in the following order inside the cartridge: anti-A, anti-B, anti-RhD. Figure 6 shows the arrangement of the membranes inside the cartridge.

In Fig. 7, the difference in the detection zone reactions can be observed in the presence or absence of the antigen to be identified. Despite the overflow of red blood cells in positive reactions, it is possible to distinguish the presence or absence of reaction by the intensity of the brown color when the antigen is not present. Figure 8 shows the test using samples selected from the sample bank with the prototype in its final configuration. Adding 30 µL to the Sample pad of each opening of the prototypes, the Antibody pads showed satisfactory results in the Detection zone, agglutinating samples containing the target antigen and making it possible to determine the ABO and Rh systems successfully for each of the six tested samples.

5 Conclusions

The results obtained from the experiments with the prototypes allowed the successful differentiation of reactive and non-reactive samples through the visualization of the formation of the bluish HSA-BCG complex in the detection zone. The treatment of the Hydrophobic pad with 1% BSA blocking solution improved the reading of the results in the prototypes, acting as a second filter after the Antibody pad and avoiding nonspecific binding. The BCG dye used for the treatment of the Detection zone demonstrated the ability to show a blue color in contact with blood plasma. However, factors such as hemolysis of the collected sample and the use of fresh or EDTA samples can interfere with the process and should be investigated. In the next stage of prototype development, it is necessary to use a control region containing a specific antibody for the detection of ubiquitous red blood cell antigens, such as anti-CD36; this region must react with all healthy red blood cells. The combination of the prototype with a remote laboratory device (point of care) is being developed to make it possible to send the patient a report with information about the performed test. In this way, an alternative to laboratory diagnosis will be available to patients with reduced accessibility, through a quick and easy-to-handle blood typing test.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Zago MA, Falcão RP, Pasquini R (2004) Hematologia: fundamentos e prática. Atheneu, Rio de Janeiro
2. Abel G (2015) Current status and future prospects of point-of-care testing around the globe. Expert Rev Mol Diagn 15(7):853–855
3. Tanna S, Lawson G (2016) Emerging issues in analytical chemistry. RTI International, 1st edn, pp 51–86

4. Vashist SK et al (2015) Emerging technologies for next-generation point-of-care testing. Trends Biotechnol 33(11):692–705
5. Al-Tamimi M et al (2012) Validation of paper-based assay for rapid blood typing. Anal Chem 84(3):1661–1668
6. Zhang H et al (2017) A dye-assisted paper-based point-of-care assay for fast and reliable blood grouping. Sci Transl Med 9(381):eaaf9209

7. Postigo PJ (2017) Desenvolvimento de Testes Rápidos Imunocromatográficos para Detecção de Cinomose Canina. Dissertação (Mestrado em Ciências), Universidade de São Paulo, São Carlos

In Silico Study on Electric Current Density in the Brain During Electrochemotherapy Treatment Planning of a Dog’s Head Osteosarcoma R. Guedert, M. M. Taques, I. B. Paro, M. M. M. Rangel, and D. O. H. Suzuki

Abstract

Cancer is the most frequent cause of death in dogs, and osteosarcoma is a bone cancer common in dogs. Traditional treatments of osteosarcoma involve surgery and chemotherapy. Electrochemotherapy emerges as a possibility to treat the safety margin after debulking, especially near vital organs like the brain. The use of electrochemotherapy near tissues sensitive to electric stimulation can be an issue, and preserving social aspects (recognition and temperament) is essential when treating a pet. Noxious brain stimulation is related to high electric current density (25 mA/cm²) and high electric charge density (5.24 C/cm²). This paper presents a series of in silico studies evaluating the electric current density in a dog's head osteosarcoma treatment at different levels of bone destruction, with both needle and plate electrodes. The results show that all simulated cases induce electric currents in the brain. The worst-case scenario occurs when the plate electrode is directly in touch with soft tissues (all bone layers compromised by tumor infiltration), where the maximum electric current density was 3376 mA/cm². In the discussion, we adjust the transcranial direct current stimulation threshold to the electrochemotherapy exposure time, obtaining an electrochemotherapy threshold of 6.55 kA/cm². Even in the worst-case scenario, the calculated maximum electric current density is lower than the safety limit. We consider the application of electrochemotherapy safe near the brain, even with direct contact of the electrodes.

Keywords

Bone infiltration · Brain stimulation · Electroporation · Safe electric current threshold · Electric charge density

R. Guedert (✉) · M. M. Taques · I. B. Paro · D. O. H. Suzuki, Institute of Biomedical Engineering, Federal University of Santa Catarina (UFSC), Florianopolis, Brazil
M. M. Taques, Federal Institute of Santa Catarina (IFSC), Joinville, Brazil
M. M. M. Rangel, Oncology Veterinary, Vet Cancer, Sao Paulo, Brazil

1 Introduction

Cancer is the most frequent cause of death in dogs, and osteosarcoma is the most common primary bone cancer in dogs [1]. Traditional treatments of osteosarcoma involve surgery and chemotherapy [1, 2]. When osteosarcoma is located in the cranium, surgery becomes complex due to the proximity of brain tissue. Electrochemotherapy (ECT) emerges as a possibility to treat the surgical safety margin without damaging healthy tissues [3]. ECT is a technique that combines electroporation (EP) with the administration of a cytotoxic agent (chemotherapeutic drug) to improve drug diffusion into tumor cells. EP is a physical phenomenon that occurs when a biological tissue is exposed to sufficiently high electric fields during short periods [4]. The exposure to electric pulses opens pores in the cell membrane, increasing its permeability momentarily or permanently. Despite the benefits, ECT also raises concerns about electrical safety, mainly when applied near sensitive tissues [5]. The use of ECT in the treatment of brain tumors has already been studied. Reports demonstrate the removal of 88% of rat brain tumors when ECT was applied with acupuncture needles and associated with bleomycin [6]. Brain tumors have also been treated by nonthermal irreversible electroporation (NTIRE) [7], a technique that uses EP irreversibly to kill cells directly. However, even though the post-treatment evaluation considered that neurological capacity was preserved (such as the ability to walk and eat) and no convulsions were observed, social aspects (such as recognition or temperament) after the brain exposure were not analyzed.


R. Guedert et al.

When treating a pet, social aspects are essential in the post-treatment period, as people share emotions with their animals. The electric current induced in the brain is directly associated with possible tissue damage. The safety threshold is recommended by studies on transcranial direct current stimulation (tDCS) [8], a non-invasive brain stimulation technique on the rise in recent years. The tDCS protocol defines a safe threshold of 0.04 mA/cm² for 20 min of stimulation [8]. Harmful effects on the brain were observed with electric current densities from 14 to 29 mA/cm² over the same time. Other reports also indicate a safe threshold of 25 mA/cm² [9, 10]. However, timing is also important when defining a safe threshold, since increasing the exposure time also increases the potential for tissue damage. Nitsche et al. [11] suggest a dual safety criterion for thresholds of brain stimulation: the electric current density and the electric charge density. The maximum electric charge density reported to cause brain damage is 5.24 C/cm² [12, 13]. This paper aims to evaluate the safety of treating a dog's head osteosarcoma with minimal surgical removal (debulking) followed by the application of ECT to guarantee the safety margin. As the x-ray exam of our case study did not allow identification of the tumor infiltration into the cranium layers, we simulated nine different geometries to evaluate the possibility of brain damage. We also calculated the safe electric current density threshold when using the most common ECT protocol.

2 Materials and Methods

2.1 Case Study

A female Poodle dog, ten years old, was histopathologically diagnosed with osteosarcoma located at the top of the head, over the parietal bone. Figure 1 shows the x-ray exam of the case study. The tumor had an estimated dimension of 3 × 3 cm. Due to the tumor dimension, the suggested treatment was removal of the primary tumor mass with no margin (debulking), followed by ECT to guarantee the surgical safety margin. The x-ray exam did not allow evaluation of the tumor macro-infiltration (bone destruction). Concerns about the electric current density in the brain and its associated damage came to light during treatment planning, so in silico studies were performed to investigate possible electric current density issues. Unfortunately, the dog died after a convulsion before the procedure. Despite that, the performed simulations can help in other similar procedures.

Fig. 1 X-ray exam of a female dog with osteosarcoma located at the top of the head over the parietal bone. E—Left. D—Right

2.2 In Silico Study

Simulations of both needle and plate electrodes were performed. The needle electrode was modeled as three needle pairs of 0.7 mm diameter, with a 3 mm gap between needles of the same polarity and a 5 mm gap between each pair (needles of different polarity). The plate electrode was modeled as two parallel plates 20 mm high, 5 mm deep, and 1 mm thick, separated by 5 mm from one another. We used COMSOL Multiphysics® (COMSOL AB, Stockholm, Sweden), a Finite Element Method (FEM) solver, for our in silico studies. To simulate and calculate the electric current density, we used the AC/DC module in a stationary regime, which solves the conservation of charge principle, as shown in Eq. (1).


∇ · (σ∇V) = 0    (1)

where σ is the electrical conductivity (S/m) and V the applied voltage (V). The electrical conductivity of tissues changes during EP due to the pore-opening effect. We performed the in silico studies with conductivity models that follow the European Standard Operating Procedures on Electrochemotherapy (ESOPE) [14]. ESOPE establishes a protocol of eight square-wave pulses of 100 µs length repeated at 1 Hz to 5 kHz [15]. Miklavcic et al. [16] created a sigmoidal function, shown in Eq. (2), that is widely used in tissue conductivity modeling:

σ(E) = σ0 + (σmax − σ0) / (1 + D · exp(−(E − A)/B))    (2)

with A = (Eire + Erev)/2 and B = (Eire − Erev)/C

where σ(E) is the tissue electric conductivity as a function of the electric field, σ0 and σmax are respectively the initial and maximum values of tissue conductivity, and Erev and Eire are the electric field thresholds of reversible and irreversible EP, respectively. C = 8 and D = 10 are model constants. Table 1 presents the values used for each tissue in our experiments.

The geometric model was built using COMSOL Geometric Tools. The thicknesses of the tumoral tissue, dura mater, cerebrospinal fluid (CSF), and brain were set in the simulations to 1, 0.252, 0.2, and 10 mm, respectively [22, 23]. The internal cortical bone (CI) and trabecular bone (TR) thicknesses of 1 mm, and the external cortical bone (CE) thickness of 1.5 mm, were obtained from tomography measurements of a healthy dog. Figure 2 shows the simulation model. The geometric mesh was generated by the COMSOL Mesh Creation Tool at the Finer resolution, resulting in 2.6 million tetrahedral elements for each geometric problem. Calculations were run on a cluster server (Intel Xeon Gold 6126 @ 2.60 GHz, 20 cores, 300 GB RAM) with the Ubuntu Linux (64-bit, Canonical Ltd., London, United Kingdom) operating system.

Table 1 Electric properties of tissues used in the in silico study

Tissue                 σ0 (S/m)   σmax (S/m)   Erev (kV/m)   Eire (kV/m)   Ref.
Tumor                  0.300      0.750        40            80            [17, 18]
Cortical bone          0.070      0.203        40            80            [17]
Trabecular bone        0.020      0.060        40            80            [17]
Dura mater             0.100      –            –             –             [19]
Cerebrospinal fluid    1.790      –            –             –             [20]
Brain                  0.256      0.767        50            70            [21]

Fig. 2 Geometric reconstruction of cranium layers. a X–Y plane cross-section with detail of all layers. b Mesh created by the COMSOL Mesh Creation Tool
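The sigmoidal conductivity model of Eq. (2) is straightforward to sketch numerically. A minimal Python implementation (an illustration, not the COMSOL model itself), evaluated with the brain-tissue values listed in Table 1:

```python
import math

def sigma(E, s0, smax, E_rev, E_ire, C=8.0, D=10.0):
    """Sigmoidal tissue conductivity model (Eq. 2).

    E, E_rev and E_ire are in kV/m; conductivities in S/m.
    A is the sigmoid midpoint and B its steepness, per Eq. (2).
    """
    A = (E_ire + E_rev) / 2.0
    B = (E_ire - E_rev) / C
    return s0 + (smax - s0) / (1.0 + D * math.exp(-(E - A) / B))

# Brain tissue values from Table 1
brain = dict(s0=0.256, smax=0.767, E_rev=50.0, E_ire=70.0)

low = sigma(0.0, **brain)     # ≈ 0.256 S/m: no electroporation well below Erev
high = sigma(120.0, **brain)  # ≈ 0.767 S/m: fully electroporated above Eire
```

The model stays at σ0 below the reversible threshold and saturates at σmax above the irreversible threshold, which is what makes the FEM problem nonlinear in the electroporated regions.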

Note to Table 1: dura mater and cerebrospinal fluid do not have a developed model for electroporation, so both conductivities were considered constant.

3 Results

Figures 3 and 4 show the electric current distribution when using needle and plate electrodes, respectively. The thicknesses of CE, TR, and CI were modified to simulate bone destruction as a result of tumor growth in both electrode simulations. The applied voltage was 500 V (100 kV/m) with the needle electrode and 650 V (130 kV/m) with the plate electrode, as recommended by ESOPE. To evaluate possible damage, we calculated the maximum electric current density in the brain tissue. Table 2 presents the electric current density in the brain tissue for each simulated case.

4 Discussion

We calculated the maximum allowable electric current density when applying the ESOPE protocol with Eqs. (3) and (4):

Q = I · Δt    (3)


Fig. 3 Electric current density distribution with the needle electrode. The thickness of each bone layer is a CE = 0.5 mm, TR = 1 mm, CI = 1 mm; b CE = 0 mm, TR = 0.5 mm, CI = 1 mm; c CE and TR = 0 mm, CI = 0.2 mm; d CE and TR = 0 mm, CI = 0.02 mm


Fig. 4 Electric current density distribution with the plate electrode. The thickness of each bone layer is a CE = 0.5 mm, TR = 1 mm, CI = 1 mm; b CE = 0 mm, TR = 0.5 mm, CI = 1 mm; c CE and TR = 0 mm, CI = 0.2 mm; d CE and TR = 0 mm, CI = 0.02 mm

Table 2 Electric current density in the brain for each simulated case

Case   CE (mm)   TR (mm)   CI (mm)   Needle electrode (mA/cm²)   Plate electrode (mA/cm²)
a      1.00      1.00      1.00      93                          168
b      0.50      1.00      1.00      173                         308
c      0.00      1.00      1.00      289                         485
d      0.00      0.50      1.00      358                         571
e      0.00      0.00      1.00      415                         651
f      0.00      0.00      0.20      930                         2399
g      0.00      0.00      0.05      1464                        3003
h      0.00      0.00      0.02      1788                        3210
i      0.00      0.00      0.00      1986                        3379

CE, TR, and CI are cortical external bone, trabecular bone, and cortical internal bone, respectively. Cases simulate bone destruction due to tumor mass growth.
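The trend in Table 2, current density rising as bone layers are destroyed, can be checked qualitatively with a far cruder model than the study's 3D FEM: treating the layers as resistances in series, J = V / Σ(dᵢ/σᵢ). A sketch using the initial (non-electroporated) conductivities from Table 1 and the layer thicknesses given in the methods (this 1D estimate is our own simplification, not part of the study):

```python
def current_density(V, layers):
    """1D series-layer estimate: J = V / sum(d_i / sigma_i).

    layers: list of (thickness in m, conductivity in S/m).
    Returns J in mA/cm^2 (1 A/m^2 = 0.1 mA/cm^2).
    """
    R_area = sum(d / s for d, s in layers)  # area-specific resistance (ohm·m^2)
    return V / R_area * 0.1

soft = [(0.252e-3, 0.100),   # dura mater
        (0.2e-3,   1.790),   # cerebrospinal fluid
        (10e-3,    0.256)]   # brain

bone = [(1.5e-3, 0.070),     # external cortical bone
        (1.0e-3, 0.020),     # trabecular bone
        (1.0e-3, 0.070)]     # internal cortical bone

V = 650.0  # plate-electrode voltage recommended by ESOPE

J_intact    = current_density(V, bone + soft)  # all bone layers present
J_destroyed = current_density(V, soft)         # all bone destroyed
```

The absolute values differ from the FEM results, since the real field is three-dimensional and conductivities increase during electroporation, but the ordering matches Table 2: removing the bone layers sharply increases the current density reaching the brain.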

J = I / A    (4)

where Q is the electrical charge (C), I is the electric current (A), Δt is the exposure time (s), J is the electric current density (A/m²), and A is the tissue area (m²). Combining Eqs. (3) and (4), we get Eq. (5):

J = Q / (A · Δt)    (5)

The ESOPE protocol consists of eight pulses of 100 µs each; thus, the total exposure time is 800 µs. Using Q/A = 5.24 C/cm² as reported before, we found a maximum allowable electric current density of 6.55 kA/cm² when using the ESOPE protocol. The timing between pulses and possible relaxation effects were not considered in our calculations. The application of ECT pulses on the cranium induces electric currents in the brain, and the electric current density increases with cranial bone destruction as a result of tumor growth, as shown in Table 2. The maximum electric current density was 3379 mA/cm², reached when the plate electrode was directly in contact with soft tissues, i.e., without bone tissue (Table 2, case i). Even in this worst-case scenario, the electric current density is lower than the safe threshold of 6.55 kA/cm². The maximum energy density, calculated for the ESOPE protocol (eight pulses of 100 µs) assuming constant electric current during each pulse, was 1000 J/kg, which is also lower than the reported harmful threshold of 3500 J/kg [24]. Figures 3 and 4 show that the high electric current density is mostly confined to the external portion of the brain, sparing deep regions.
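The 6.55 kA/cm² figure follows directly from Eq. (5), using the reported charge-density limit and the total ESOPE exposure time:

```python
Q_over_A = 5.24          # maximum safe electric charge density (C/cm^2) [12, 13]
dt = 8 * 100e-6          # ESOPE: eight 100 us pulses -> 800 us total exposure (s)

J_limit = Q_over_A / dt  # safe current density limit, ≈ 6550 A/cm^2 = 6.55 kA/cm^2

J_worst = 3379e-3        # worst simulated case (Table 2, case i) in A/cm^2
safe = J_worst < J_limit # the worst case stays well below the adjusted threshold
```

Scaling the tDCS charge limit to the much shorter ECT exposure is what yields a threshold roughly six orders of magnitude above the 20-minute tDCS current limits quoted in the introduction.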

5 Conclusions

We found that the electric current density resulting from the application of electric fields with needle and plate electrodes is lower than the safety thresholds. The simulated results indicate that ECT procedures on cranial tumors with both electrodes are safe with respect to possible damage from the induced electric current density in the brain tissue. In vivo studies are recommended to validate the in silico experiments.

Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES). The authors would like to thank the Brazilian research funding agencies CAPES and CNPq for the scholarships granted to the post-graduate students participating in the study.

Conflict of Interest The authors declare that they have no conflict of interest.

References 1. Simpson S, Dunning MD, de Brot S et al (2017) Comparative review of human and canine osteosarcoma: morphology, epidemiology, prognosis, treatment and genetics. Acta Vet Scand 59:71. https://doi.org/10.1186/s13028-017-0341-9 2. Mueller F, Fuchs B, Kaser-Hotz B (2007) Comparative biology of human and canine osteosarcoma. Anticancer Res 27:155–164 3. Fini M, Salamanna F, Parrilli A et al (2013) Electrochemotherapy is effective in the treatment of rat bone metastases. Clin Exp Metastasis 30:1033–1045. https://doi.org/10.1007/s10585-0139601-x

1100 4. Suzuki DOH, Berkenbrock JA, Frederico MJS et al (2018) Oral mucosa model for electrochemotherapy treatment of dog mouth cancer: ex vivo, in silico, and in vivo experiments. Artif Organs 42:297–304. https://doi.org/10.1111/aor.13003 5. Miklavčič D, Serša G, Brecelj E et al (2012) Electrochemotherapy: technological advancements for efficient electroporation-based treatment of internal tumors. Med. Biol. Eng, Comput 6. Linnert M, Iversen HK, Gehl J (2012) Multiple brain metastases— current management and perspectives for treatment with electrochemotherapy. Radiol, Oncol 7. Ellis TL, Garcia PA, Rossmeisl JH et al (2011) Nonthermal irreversible electroporation for intracranial surgical applications: laboratory investigation. J Neurosurg. https://doi.org/10.3171/ 2010.5.JNS091448 8. Russo C, Souza Carneiro MI, Bolognini N, Fregni F (2017) Safety review of transcranial direct current stimulation in Stroke. Neuromodulation Technol Neural Interface 20:215–222. https:// doi.org/10.1111/ner.12574 9. McCreery DB, Agnew WF, Yuen TGH, Bullara L (1990) Charge density and charge per phase as cofactors in neural injury induced by electrical stimulation. IEEE Trans Biomed Eng. 37(2):996– 1001. https://doi.org/10.1109/10.102812 10. Hsu TY, Tseng LY, Yu JX et al (2011) Modulating inhibitory control with direct current stimulation of the superior medial frontal cortex. Neuroimage 56(4):2249–2257. https://doi.org/10. 1016/j.neuroimage.2011.03.059 11. Nitsche MA, Fricke K, Henschke U et al (2003) Pharmacological modulation of cortical excitability shifts induced by transcranial direct current stimulation in humans. J Physiol 553(1):293–301. https://doi.org/10.1113/jphysiol.2003.049916 12. Liebetanz D, Koch R, Mayenfels S et al (2009) Safety limits of cathodal transcranial direct current stimulation in rats. Clin Neurophysiol 120(6):1161–1167. https://doi.org/10.1016/j.clinph. 2009.01.022 13. 
Chhatbar PY, Chen R, Deardorff R et al (2017) Safety and tolerability of transcranial direct current stimulation to stroke patients—a phase I current escalation study. Brain Stimul 10 (3):553–559. https://doi.org/10.1016/j.brs.2017.02.007 14. Marty M, Sersa G, Garbay JR et al (2006) Electrochemotherapy— an easy, highly effective and safe treatment of cutaneous and subcutaneous metastases: Results of ESOPE (European Standard Operating Procedures of Electrochemotherapy) study. Eur J Cancer, Suppl 4:3–13. https://doi.org/10.1016/j.ejcsup.2006.08. 002

R. Guedert et al. 15. Lacković I, Magjarević R, Miklavčič D (2010) Incorporating Electroporation-related Conductivity Changes into Models for the Calculation of the Electric Field Distribution in Tissue. In: Bamidis PD, Pallikarakis N (eds) XII Mediterranean Conference on Medical and Biological Engineering and Computing 2010. Springer, Berlin Heidelberg, Berlin, Heidelberg, pp 695–698 16. Sel D, Cukjati D, Batiuskaite D et al (2005) Sequential Finite Element Model of Tissue Electropermeabilization. IEEE Trans Biomed Eng 52:816–827. https://doi.org/10.1109/TBME.2005. 845212 17. Cindrič H, Kos B, Tedesco G et al (2018) Electrochemotherapy of Spinal Metastases Using Transpedicular Approach—A Numerical Feasibility Study. Technol Cancer Res Treat 17:153303461877025. https://doi.org/10.1177/ 1533034618770253 18. Sel D, Lebar AM, Miklavcic D (2007) Feasibility of Employing Model-Based Optimization of Pulse Amplitude and Electrode Distance for Effective Tumor Electropermeabilization. IEEE Trans Biomed Eng 54:773–781. https://doi.org/10.1109/TBME.2006. 889196 19. Gabriel S, Lau RW, Gabriel C (1996) The dielectric properties of biological tissues: III. Parametric models for the dielectric spectrum of tissues. Phys Med Biol 41:2271–2293. https://doi. org/10.1088/0031-9155/41/11/003 20. Baumann SB, Wozny DR, Kelly SK, Meno FM (1997) The electrical conductivity of human cerebrospinal fluid at body temperature. IEEE Trans Biomed Eng 44:220–223. https://doi.org/ 10.1109/10.554770 21. Garcia PA, Rossmeisl JH, Neal RE et al (2011) A Parametric Study Delineating Irreversible Electroporation from Thermal Damage Based on a Minimally Invasive Intracranial Procedure. Biomed Eng Online 10:34. https://doi.org/10.1186/1475-925X-1034 22. Kuchiwaki H, Inao S, Ishii N et al (1995) Changes in dural thickness reflect changes in intracranial pressure in dogs. Neurosci Lett 198:68–70. https://doi.org/10.1016/0304-3940(95)11949-W 23. 
Mao B, Zhang H, Zhao K et al (2010) Cerebrospinal fluid absorption disorder of arachnoid villi in a canine model of hydrocephalus. Neurol India 58:371. https://doi.org/10.4103/0028-3886.65601 24. Fini M, Tschon M, Ronchetti M et al (2010) Ablation of bone cells by electroporation. J Bone Joint Surg Br 92-B:1614–1620. https://doi.org/10.1302/0301-620X.92B11.24664

Evaluation of Engorged Puerperal Breast by Thermographic Imaging: A Pilot Study L. B. da Silva, A. C. G. Lima, J. L. Soares, L. dos Santos, and M. M. Amaral

Abstract

The aim of this work was to conduct a pilot study on the use of thermography to evaluate engorged breasts. Ten lactating volunteers, five normal and five engorged, were evaluated by clinical examination and thermographic imaging. Two regions of interest per breast were selected in the thermographic images. The average temperatures in these regions were statistically compared at a 95% confidence level. The average normal breast temperature was 34.13 ± 0.71 °C and that of the abnormal breast 35.03 ± 0.50 °C. These temperatures differed significantly between groups (p-value < 0.05). The data obtained made it possible to analyze the thermographic changes of the puerperal breast, thus contributing to more accurate diagnoses. The results indicated that, with clinical examination and infrared thermography, it was possible to delineate a differential pattern among the various events that affect the breast during lactation.
Keywords

 



Puerperium · Breast · Engorged breast · Inflammation · Thermography imaging · Breastfeeding

1 Introduction

L. B. da Silva (&) · L. dos Santos · M. M. Amaral: Instituto Científico e Tecnológico, Universidade Brasil, São Paulo, Brasil. J. L. Soares: Faculdade Adelmar Rosado, Teresina, Brasil. A. C. G. Lima: Universidade Estadual do Piauí—UESPI, Teresina, Brasil.

Infrared medical thermography is a non-invasive analysis technique that does not use ionizing radiation, capable of analyzing physiological functions related to skin temperature control. It detects infrared radiation emitted by the body to measure changes in body temperature related to changes in blood flow. It is not a method that shows anatomical abnormalities, but it can show physiological changes [9]. Thermographic imaging is applied in several medical fields, such as neurology, rheumatology, muscular disorders, vascular diseases, urology, gynecology, and orthopedic and sports medicine disorders. It is a non-invasive and objective method, as well as safe and harmless [7]. The human breast is an annexed gland of the skin that constantly undergoes changes in its biological manifestations, such as microcirculation, perfusion, and inflammatory and vascular activity, which can be quantified by high-resolution thermal imaging. However, there are no internationally standardized temperature values or normal reference breasts for thermographic imaging diagnosis of a disease, owing to individual metabolic variation and room-temperature factors [8]. Microorganisms with a high potential for contamination can affect the mammary gland, triggering inflammatory reactions that present heat, flushing and edema. These signs can appear in the gland itself or in the overlying skin. Temperature changes are one of the main symptoms of inflammatory processes [2]. In the first days after giving birth, the mother's body begins to adapt to the milk amount necessary for the child's feeding. This process is called full breast [6]. During this adaptation period, if the baby's feeding is effective, there will be frequent emptying of milk from the breasts, which will make the mother more comfortable. However, if this balance between production and consumption does not occur, the breast will remain full, which can lead to an engorgement process. It usually occurs between the third and tenth days after birth [5]. Owing to changes in society, breastfeeding has been decreasing.
The Brazilian Ministry of Health recommends breastfeeding because of its biological value: human breast milk is the only food that provides important nutrients for human brain development, fights infections, protects children against bacteria and viruses, and prevents diarrhea. Its consumption is indicated for any child, who can and

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_164


should only feed on breast milk in the first six months of life, without needing to eat or drink anything else, not even water or teas, because it contains everything the baby needs to be nourished, grow and develop healthily [1]. Most women experience discomfort in the breasts right from the start of breastfeeding, and this pain is considered normal if it is a slight discomfort. When the pain is persistent and becomes increasingly uncomfortable, it is necessary to pay attention to the appearance of the breasts, which can become increasingly painful and present swelling, indicating the beginning of breast engorgement [4]. Breast engorgement can occur in two forms, physiological or pathological. The first form does not require intervention, because despite the pain the milk keeps flowing. In the second form, there is great tissue distension, which in addition to causing enormous discomfort in the breast may be accompanied by fever and malaise [1]. The breast engorgement process occurs in three stages: (1) increased vascularization and accumulation of milk; (2) milk retention in the alveoli; and (3) edema due to congestion and obstruction of lymphatic drainage, with signs such as pain, interstitial edema, increased breast volume, shiny skin and flattened nipples, whether or not accompanied by diffuse and reddened areas, and elevated body temperature including a feverish state [4]. The consequences of this process can hamper or even prevent milk from escaping from the alveoli. The accumulation of milk generates pressure on the breast, and the milk becomes more viscous, popularly called “cobbled milk” [9]. One of the best treatments for engorgement is massage, followed by milking. High fluid intake, rest and the application of heat and/or cold to the breasts are also indicated.
Massage contributes to the breakdown of intermolecular interactions through kinetic energy, thus allowing the fluidization of milk. The technique also stimulates oxytocin synthesis, necessary for the milk ejection reflex. It may be necessary to use analgesics, antipyretics and antibiotics (penicillins being the most indicated) to assist in the treatment [4]. The aim of this work was to conduct a pilot study on the use of thermography to evaluate engorged breasts. We sought to statistically analyze thermographic exams, which may indicate possible pathologies in the breasts of patients; to establish a thermographic profile of the physiological and the pathological breast in the immediate postpartum period; and to identify the main breast complications that may occur during the puerperal period and the signs of the inflammatory process of the breasts resulting from lactation.


2 Methods

This work was carried out after approval by the ethics committee (CEUA/UESPI), no. 3.815.031. In this pilot study, ten volunteer women were recruited in the immediate postpartum period (from the 1st to the 10th day after parturition), aged 18 years or older, hospitalized in the ward, with a good level of awareness, with no history of stillbirth in the recent childbirth, not transferred to the ICU due to clinical complications, and without HIV, hepatitis B or C, or cancer. The participants were between 17 and 37 years old. Several factors can trigger the pathological breast engorgement process, which can affect any milk-producing breast, breastfeeding or not, in women of any age. The participants were evaluated by a clinical examination of the breasts through inspection and palpation for a clinical diagnosis. Redness, heat, edema, induration and pain were evaluated to assess the signs of inflammation. Next, the volunteers were directed to the thermographic imaging examination [3]. The volunteers were asked to bare their breasts for 15 min in an 18 m³ environment with a controlled temperature between 23 and 24 °C, to achieve thermal stabilization. After this acclimatization time, the thermographic examination was performed with the patient seated, hands on her head, in a frontal position, including both breasts in a single image. The images were acquired only of the breast region, preserving the patient's identity. The thermographic images were acquired using the T430sc (FLIR Systems Inc.) thermographic camera with 320 × 240 pixels per image, thermal sensitivity < 30 mK, accuracy of ±2 °C and an 18 mm objective lens attached. The emissivity for recording by the thermosensor was adjusted to 0.98, the value indicated for human skin thermograms, and the reflected irradiance was set equivalent to the ambient temperature recorded at the time of the examination.
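To illustrate how the emissivity setting enters a thermographic temperature reading, the sketch below inverts a simplified gray-body radiance model (skin emission plus reflected ambient radiation). This ignores atmospheric attenuation and the camera's actual calibration pipeline, so it is only a conceptual sketch; all numbers are illustrative.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def sensed_radiance(t_obj_k, t_amb_k, emissivity=0.98):
    """Radiant flux reaching the sensor: gray-body emission from the skin
    plus ambient radiation reflected off it (simplified model)."""
    return emissivity * SIGMA * t_obj_k ** 4 + (1.0 - emissivity) * SIGMA * t_amb_k ** 4

def object_temperature(radiance, t_amb_k, emissivity=0.98):
    """Invert the model above to recover the surface temperature (K)."""
    t4 = (radiance / SIGMA - (1.0 - emissivity) * t_amb_k ** 4) / emissivity
    return t4 ** 0.25

# Round trip: a 34.13 degC surface in a 23.5 degC room
t_amb = 23.5 + 273.15
t_true = 34.13 + 273.15
recovered = object_temperature(sensed_radiance(t_true, t_amb), t_amb)
```

With emissivity set close to 1, as for human skin, the reflected ambient term is small, which is why small errors in the assumed room temperature barely affect the reading.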
The equipment was positioned on a tripod 100 cm away from the patient to obtain an image of both breasts simultaneously. Thermograms were analyzed using the FLIR Tools and FLIR Researcher software to determine the absolute thermal values and the comparative temperature gradients. Two points per breast were selected and the mean temperature values were recorded. The mean temperature data were analyzed with the Minitab 18 software at a 95% confidence level.
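The group comparison performed in Minitab can be reproduced with a standard two-sample t-test. The sketch below uses made-up per-volunteer mean temperatures chosen only to match the reported group means, since the individual data are not given in the paper.

```python
import math
import statistics

def pooled_t(sample_a, sample_b):
    """Two-sample t statistic with pooled variance (equal-variance assumption)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (statistics.mean(sample_b) - statistics.mean(sample_a)) / se

# Hypothetical per-volunteer mean breast temperatures (degC), n = 5 per group
normal   = [33.50, 34.80, 33.90, 34.40, 34.05]   # group mean 34.13
engorged = [35.20, 34.70, 35.50, 34.90, 34.85]   # group mean 35.03

t = pooled_t(normal, engorged)
# For alpha = 0.05, two-sided, df = 8, the critical value is about 2.306;
# |t| above that indicates a statistically significant difference.
```

A |t| above the critical value corresponds to the p < 0.05 result reported in the paper.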


3 Results and Discussion

Figure 1 presents representative images of a normal (Fig. 1a) and an engorged breast (Fig. 1b). The images were de-identified for ethical reasons. In the initial stages of engorgement, it is not possible to differentiate the normal from the engorged breast in every case. In more advanced stages the engorged breast is enlarged, painful and has diffuse, swollen and shiny red areas (Fig. 2). Engorgement may be restricted to the areola (areolar), only to the breast body (peripheral), or both; Fig. 2 presents a breast with peripheral engorgement. Figure 3 shows the thermographic images obtained from a normal (a) and an engorged breast (b); the colors in the images represent the temperature values associated with the breasts. In the thermographic image, the temperature-sampling points around the patient's areola can be seen (E1–E2, E3–E4). Due to the geometry of the breast and the positioning of the camera, the points were chosen so that they would be less influenced by the individual's anatomical variations. The breast areola was used as a reference, positioning one upper point and one lower point in each breast thermographic image. After selecting and recording the temperature of each point on the breast of each volunteer, the average temperature of each group was evaluated. Figure 4 presents the boxplot of the average temperature of the normal and abnormal breasts. Most participants had both breasts engorged, allowing for contralateral comparison.

Fig. 2 Breast with peripheral engorgement

Fig. 3 Representative image of the procedure

Fig. 1 Normal (a) and Engorged (b) breast

The analysis of the average temperature at the evaluated points showed that the group of patients with normal breasts had an average temperature of 34.13 ± 0.71 °C, and the abnormal breasts, 35.03 ± 0.50 °C. The average temperature of the normal breast is thus 0.9 °C lower than that of the engorged breast. The temperature difference between the normal and abnormal groups was statistically significant (p = 0.001) (Fig. 4).


Fig. 4 Mean temperature of the normal and engorged breast

Thus, according to this pilot study, it is possible to identify the breast engorgement process by analyzing breast temperature using thermography. The results indicate that, based on the tests performed, it is possible to determine patterns of engorgement, pain or heat, favoring better specialized health assistance for the patient.

4 Conclusions

In this work, the infrared medical thermography technique was used to assess the breast temperature of 10 patients, 5 normal and 5 with an indication of engorgement. The analysis of this pilot study indicated that the breasts of the normal volunteers have an average temperature below that of the breasts with some type of abnormality, and the difference was statistically significant. The next steps of this work will be to increase the panel of volunteers to improve the representativeness of the data, as well as to develop alternative ways of assessing breast temperature, considering other factors that can influence the patient's diagnosis, such as her body temperature.

References
1. Brasil, Ministério da Saúde, Secretaria de Atenção à Saúde, Departamento de Atenção Básica (2015) Saúde da criança: aleitamento materno e alimentação complementar, 2nd edn. Ministério da Saúde, Brasília
2. Frasson AL, Millen EC, Novita G et al (2011) Doenças da mama: guia prático baseado em evidências. Atheneu, Porto Alegre
3. Heberle ABS, Ishisato SMT, Nohama P (2015) Avaliação da mama na lactação por termografia e presença de dor. Acta Paul Enferm 28(3):256–263
4. Heberle ABS, Moura MAM, Souza MA, Nohama P (2014) Avaliação das técnicas de massagem e ordenha no tratamento do ingurgitamento mamário por termografia. Rev Latino-Am Enfermagem 22(2):277–285
5. Jesus ALBC (2013) Influência dos fatores maternos e práticas de aleitamento materno no ingurgitamento mamário. Curso de Mestrado em Enfermagem de Saúde Materna e Obstétrica, Coimbra
6. Mangesi L, Dowswell T (2016) Treatments for breast engorgement during lactation. Cochrane Database Syst Rev 6:CD0069
7. Reis EBN (2014) Fundamentos da termografia para o acompanhamento do treinamento desportivo. Rev Uniandrade 15(2):79–86
8. Souza GAGR et al (2015) Temperatura de referência das mamas: proposta de uma equação. Rev Einstein 13(4):518–524
9. Termografia Brasil at https://www.termografiabrasil.blogspot.com

Study of the Photo Oxidative Action of Brosimum gaudichaudii Extract V. M. de S. Antunes, C. L. de L. Sena, and A. F. Uchoa

Abstract


Brosimum gaudichaudii (BG) is a medicinal plant native to Brazil, popularly known as mamica-de-cadela or mamacadela, among other names. Its main metabolites are coumarins and furanocoumarins, recognized photoactive agents, and several studies indicate the photodynamic action of Brosimum gaudichaudii extracts. The purpose of this study was to perform photophysical studies of extracts of the aerial parts of the plant for application in photodynamic therapy. The extracts were characterized by their absorption spectra in the ultraviolet (UV) and visible regions, as well as by their emission spectra when excited at different wavelengths. In the region of 345 nm the stem extracts stand out, since this is where bergaptenes and other furanocoumarins absorb light. At 667 nm, where chlorophyll absorbs light, all the extracts absorb, mainly those of the leaves, which shows the contribution of chlorophyll to the process. The potential photodynamic action of the extracts under solar irradiation was attributed to synergism between the action in the UV and that of chlorophyll and its derivatives in photodynamic therapy.
Keywords

Furanocoumarins · Photosensitizer · Photodynamic therapy



V. M. de S. Antunes (&): Universidade Anhembi Morumbi/CITÉ, São José dos Campos, SP, Brazil. C. L. de L. Sena: Universidade Camilo Castelo Branco (now Universidade Brasil), Fernandópolis, Brazil. A. F. Uchoa: Universidade Anhembi Morumbi/CITÉ, São José dos Campos, SP, Brazil.

1 Introduction

Since ancient times, man has used medicinal plants as a therapeutic resource to preserve health and cure diseases, in addition to their use in cosmetics. It was through the observation of animals that man developed knowledge about plants, medicinal or not. This knowledge, passed down through time, identified the species that were relevant to health and beauty. This precious legacy about nature tells the history of the continent and of the different peoples that inhabited the planet [1]. Our ancestors used concentrated mixtures and extracts of plants to promote the desired effects, without thinking of isolating and understanding what, inside the plant, was effectively the main component or which had the healing power [2, 3]. For decades, the World Health Organization (WHO) has recommended the use of phytotherapy in primary health care programs [4]. Similarly, the Pan American Health Organization (PAHO) realized the importance of the use of medicinal plants for public health in the 1970s, relying on social movements of the time that challenged the valorization of technology and the devaluation of nature [5]. In Brazil, new legislation for the use of phytomedicines and phytocosmetics has appeared [6]. The Moraceae family has about 50 genera and 1500 species. In Brazil, there are 27 genera with approximately 250 species. Among the genera of this family, Brosimum gaudichaudii Trecul, popularly known as mamacadela or mamica-de-cadela, a plant of the Cerrado biome of Brazil, stands out [7]. The chemical constitution of plant species results from numerous anabolic and catabolic reactions that make up plant metabolism, which is divided into primary and secondary. Secondary metabolites are widely exploited due to their structural variation and consequently show many biological activities. Coumarins have fluorescent properties [8, 9] and can be classified, according to the substitutions on the benzene ring and the pyranone ring, into simple coumarins,

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_165


2 Materials and Methods

The sample of the plant Brosimum gaudichaudii was collected near the Regional Airport of Dourados-MS, Francisco de Matos Pereira, located at Av. Guaicurus (beginning of the Rodovia Dourados-Itaum, km 12), at 22° 11′ 52″ S latitude and 54° 55′ 21″ W longitude, 458 m above sea level, time zone UTC-4 (−3 DT). The characterization of the samples as Brosimum gaudichaudii Trecul (Moraceae) was performed by Prof. Dr. Paulo T. Sano of the Department of Botany, Institute of Biosciences of the University of São Paulo. The hydroalcoholic extracts were obtained at the Institute of Chemistry—USP, where the characterization of the extract was performed. The samples were separated into stem, leaf and fruit, then individually crushed in a four-knife mill and their mass determined; 92.8% alcohol was added and the mixture was sonicated in an ultrasound bath for 30 min, to enable a better extraction in a reduced period. After 28 days, filtration was performed with a Büchner funnel. The extracts were concentrated in a rotary evaporator to reduce the solvent. The spectra were obtained in a UV-240 1PC UV–visible spectrophotometer (Shimadzu Co, USA) with a quartz cuvette (Hellma, Brazil) of 1.0 cm optical path. Measurements were taken over the range 300–800 nm every 0.5 nm. A Micronal B582 spectrophotometer (Micronal, Brazil) was also used with a quartz cuvette (Hellma, Brazil) of 1.0 cm optical path. Solvents used: anhydrous ethanol, hydrated ethanol, acetone and chloroform.

3 Results

The extracts were obtained from the aerial parts of Brosimum gaudichaudii: leaves, stems and fruits. The plant material was dried at room temperature and away from light. The material was then crushed and its mass determined: leaf 65.7 g, stem 77.2 g and fruit 10.3 g. The material was submitted to hydroalcoholic extraction for 28 days. After this period, the material was filtered and the solvent eliminated by rotary evaporation at reduced pressure. The material was resuspended in hydrated ethanol and characterized by the UV–vis spectrum. Figure 1 shows the spectral curves for the three extracts obtained. It is observed that all spectral curves have maxima at 400 and 667 nm. These maxima, as well as the overall spectral curves in the visible region, show the chlorophylls. For comparative analysis, the extracts were resuspended in chloroform and their UV–vis spectra determined again. In this study, a mathematical treatment was performed in which the spectral curves were normalized at 667 nm, the absorption maximum of chlorophyll, and at 309 nm, the absorption maximum of bergaptenes and other furanocoumarins [15]. These spectral curves, with and without mathematical treatment, are presented in Fig. 2a, b and c. Figure 2a shows a similar spectral profile in the visible region. Figure 2b shows these curves normalized at 667 nm, the region where chlorophylls have maximum absorption. In Fig. 2c the normalization was performed at 309 nm, where the maximum absorption of bergaptene, considered the main furanocoumarin of Brosimum gaudichaudii, occurs.
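The normalization described above, scaling each spectrum so that its absorbance at a chosen reference wavelength becomes equal across extracts, can be sketched as follows. The tiny arrays here are made-up toy spectra, not the measured data.

```python
def normalize_at(wavelengths, absorbance, ref_nm):
    """Scale a spectrum so that its absorbance at the wavelength closest
    to ref_nm becomes 1.0, enabling shape comparison across extracts."""
    idx = min(range(len(wavelengths)), key=lambda i: abs(wavelengths[i] - ref_nm))
    ref = absorbance[idx]
    return [a / ref for a in absorbance]

# Toy spectra sampled at a few wavelengths (nm)
wl = [309, 400, 500, 667]
leaf = [1.2, 2.8, 0.6, 3.0]   # chlorophyll-rich: strong 667 nm band
stem = [2.5, 1.0, 0.3, 0.8]   # furanocoumarin-rich: strong 309 nm band

leaf_667 = normalize_at(wl, leaf, 667)
stem_667 = normalize_at(wl, stem, 667)
# Once equalized at 667 nm, the stem's relatively strong 309 nm absorbance
# stands out, which is the kind of comparison made with Fig. 2b.
```

The same function applied with `ref_nm=309` reproduces the comparison in the other direction, showing which extract is proportionally richer in chlorophyll.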


furanocoumarins and pyranocoumarins. Depending on the position of the furan ring, they are classified as angular or linear [9]. Linear furanocoumarins are also called psoralens. Psoralens have photobiological and phototherapeutic activity and absorb energy between 300 and 400 nm (UVA). The linear ones can be phototoxic; when excited by UVA radiation they react with nuclear DNA, cell membranes or proteins, transferring electrons or hydrogen atoms [10, 11]. The photosensitizing action of linear furanocoumarins normalizes the differentiation of altered keratinocytes and decreases the expression and secretion of cytokines [11]. In addition to their therapeutic properties, they also exhibit antioxidant activity and high fluorescence [12]; this can be exploited in Photodynamic Therapy (PDT), the use of UV radiation or visible light in the treatment of various skin conditions [12, 13]. The efficacy of the technique depends on the dose of light applied, the concentration of the photosensitizer and the availability of oxygen, generating reactive oxygen species [14]. Thus, the aim of this work was to perform photophysical studies of the aerial parts of Brosimum gaudichaudii and to characterize the extract as a photosensitizer, showing the contribution of chlorophyll to the process.


Fig. 1 Absorption spectra of Brosimum gaudichaudii extracts (leaf, stem, fruit) resuspended in ethanol



Fig. 2 a Absorption spectra of Brosimum gaudichaudii extracts resuspended in chloroform; b spectra normalized at 667 nm; c spectra normalized at 309 nm

4 Discussion

It is observed that all spectral curves have maxima at 400 and 667 nm. These maxima, as well as the overall spectral profiles, show chlorophylls as the main chromophores absorbing in the visible region. Such pigments are responsible for the conversion of radiant energy into chemical energy in the process of photosynthesis. They also participate in the redox balance and can be used as therapeutic agents. Besides chlorophyll and its derivatives, Brosimum gaudichaudii is known to present furanocoumarins as secondary metabolites, compounds that present high photodynamic activity when irradiated in the ultraviolet. This can be confirmed by inspection of the bergaptene and chlorophyll spectra [14, 15]. It can be observed in the spectral curves of Fig. 2a, b and c that the spectra of all the extracts are well defined both in the visible and ultraviolet regions. The mathematical treatment made possible a relative quantification of the possible chromophores present in each extract. For comparative


analysis, the spectra were normalized at 667 nm, where only chlorophyll absorbs, and at 309 nm, where the maximum absorption of bergaptene and other possible furanocoumarins [15, 16], constituents of these extracts, occurs. In Fig. 2a, the spectral profile in the visible region is consistent with that expected for chlorophylls and their derivatives. In Fig. 2b the curves have been normalized at 667 nm, the region of maximum absorption of the last and most intense Q-band. When these curves are artificially equalized, it can be observed that they are quite differentiated in the ultraviolet region, specifically at 309 nm, where bergaptene absorbs. In this region a greater absorption is observed for the stem, followed by the fruit and then the leaf. Thus, it is inferred that the extract of the stem is richer in these chromophores, which belong to the furanocoumarin family; in the case of Brosimum gaudichaudii, the furanocoumarin that stands out the most is bergaptene. In Fig. 2c, normalization was performed at 309 nm, where the maximum absorption of bergaptene, considered the main furanocoumarin of Brosimum gaudichaudii, occurs. When the spectra were artificially equalized at the maximum absorption of bergaptene (309 nm), it was observed that the leaves are proportionally richer in chlorophyll, followed by the “green” fruits and finally the stem, which is poor in chlorophyll and rich in bergaptene. The bands in the absorption spectrum at 400 and 665 nm, as well as the entire spectral profile, confirm the presence of chlorophyll. Chlorophylls are green pigments found in plants, capable of absorbing visible light and converting radiant energy into chemical energy.
These pigments have been considered excellent photosensitizers, antioxidants and therapeutic agents in the combat of several diseases, chlorophylls a and b being the most common and almost always found together, with chlorophyll a more abundant in most cases. According to the literature, the absorption spectrum of chlorophyll a shows high absorption in the region of 660 to 670 nm, an extremely favorable fact for application in PDT, since in this region there is a greater degree of light penetration into the tissue.

5 Conclusion

In these studies it was demonstrated that the Brosimum gaudichaudii hydroalcoholic extract is composed of two important families of photosensitizers, furanocoumarins and chlorins, represented in the extract by bergaptene and chlorophyll, respectively. It was also evidenced that furanocoumarins are in higher concentration in the stem, while chlorophyll is mostly found in the leaves. Thus, it was also evidenced that the Brosimum gaudichaudii extract can be used


for dermatological treatment: irradiated in the ultraviolet region if the objective is superficial treatment, or in the visible region if greater depth is desired, and/or with solar irradiation, which allows excitation of both chromophores.

Acknowledgements To Fapesp, REDOXOME project—Center for Research on Redox Processes in Biomedicine, Processo 2013/07937-8.

Conflict of Interest The authors declare that they have no conflict of interest.

References
1. Pozzetti GL (2005) Brosimum gaudichaudii Trecul (Moraceae): da planta ao medicamento. Rev Ciênc Farma Básica Apl 26(3):159–166
2. Thomford NE, Senthebane DA, Rowe A et al (2018) Natural products for drug discovery in the 21st century: innovations for novel drug discovery. Int J Mol Sci 19:1578. https://doi.org/10.3390/ijms19061578
3. Tomazzoni MI, Negrelle RRB, Centa ML (2006) Fitoterapia popular: a busca instrumental enquanto prática terapêutica. Texto E Contexto-Enfermagem 1:115–121
4. Ministério da Saúde (BR) A fitoterapia no SUS e o Programa de Pesquisa de Plantas Medicinais da Central de Medicamentos: Série B. Textos Básicos de Saúde. At https://bvsms.saude.gov.br/bvs/publicacoes/fitoterapia_no_sus.pdf
5. Alho C Jr (2012) Importância da biodiversidade para a saúde humana: uma perspectiva ecológica. Estud Av 26:151–166. https://doi.org/10.1590/S0103-40142012000100011
6. Ministério da Saúde (BR) Medicamentos e insumos—fitoterápicos. At https://www.saude.gov.br/acoes-e-programas/programa-nacional-de-plantas-medicinais-e-fitoterapicos-ppnpmf/plantas-medicinais-e-fitoterapicos-no-sus

7. Borges JDC, Perim MC, de Castro RO et al (2017) Evaluation of antibacterial activity of the bark and leaf extracts of Brosimum gaudichaudii Trécul against multidrug resistant strains. Nat Prod Res 31:2931–2935. https://doi.org/10.1080/14786419.2017.1305379
8. O'Connor SE (2015) Engineering of secondary metabolism. Annu Rev Genet 49:71–94. https://doi.org/10.1146/annurev-genet-120213-092053
9. Menezes FG, Gallardo H, Zucco C (2010) Recentes aplicações sintéticas de compostos orgânicos tricloro(bromo)metila substituídos. Química Nova 33:2233–2244. https://doi.org/10.1590/s0100-40422010001000037
10. Cestari TF, Pessato S, Correa GP (2007) Fototerapia: aplicações clínicas. An Bras Dermatol 82:7–21. https://doi.org/10.1590/S0365-05962007000100002
11. Carbone A, Montalbano A, Spanò V, Musante I, Galietta LJV, Barraja P (2019) Furocoumarins as multi-target agents in the treatment of cystic fibrosis. Eur J Med Chem 180:283–290. https://doi.org/10.1016/j.ejmech.2019.07.025
12. Lajos K, Emese V, Zoltan N (2019) Advances in phototherapy for psoriasis and atopic dermatitis. Expert Rev Clin Immunol 15:1205–1214. https://doi.org/10.1080/1744666X.2020.1672537
13. Uchoa AF, Oliveira CS, Baptista MS (2010) Relationship between structure and photoactivity of porphyrins derived from protoporphyrin IX. J Porphyrins Phthalocyanines 14:832–845. https://doi.org/10.1142/S108842461000263X
14. Ramos RR, Kozusny-Andreani DI, Fernandes AU, Baptista MS (2016) Photodynamic action of protoporphyrin IX derivatives on Trichophyton rubrum. An Bras Dermatol 91:135–140. https://doi.org/10.1590/abd1806-4841.20163643
15. Maestrin APJ, Neri CR, Oliveira KT (2009) Extração e purificação de clorofila a, da alga Spirulina maxima: um experimento para os cursos de química. Química Nova 32:1670–1672. https://doi.org/10.1590/S0100-40422009000600054
16. Pedriali CA, Uchoa AF, Santos SMM, Severino D, Baptista MS (2010) Antioxidant activity, cito- and phototoxicity of pomegranate (Punica granatum L.) seed pulp extract. Ciênc Tecnol Aliment 30:1017–1021

Electrochemotherapy Effectiveness Loss Due to Electrode Bending: An In Silico and In Vitro Study D. L. L. S. Andrade, J. R. da Silva, R. Guedert, G. B. Pintarelli, J. A. Berkenbrock, S. Achenbach, and D. O. H. Suzuki

Abstract

In electrochemotherapy, needle electrodes can be used to apply electric fields to biological tissues to catalyze the chemotherapy drug effect. Electrode bending, which is not entirely avoidable, may compromise the coverage of the entire tumor mass by sufficiently high electric fields. This paper aims to study how electrode needle misplacement affects the effectiveness of ECT, using in silico and in vitro experiments to analyze inwards and outwards bending. We observed an electric current increase with inwards bending; on the other hand, the electric field distribution is disturbed when the needles bend outwards. Both cases may induce treatment effectiveness loss. Based on those results, it is recommended to avoid the use of electrodes with mechanical deformations during electrochemotherapy treatments.
Keywords

 

Electrochemotherapy · Biological modeling · Electric field distribution · Mechanical fault

1 Introduction

Electrochemotherapy (ECT) is a cancer treatment technique that uses electroporation (EP) to catalyze the entrance of the chemotherapeutic drug (e.g. bleomycin) into the intracellular environment [1]. ECT is a well-established technique in the treatment of cutaneous and subcutaneous tumors, with veterinary [2–4] and human [5–7] applications. During the treatment, the cytotoxic agent can be administered intratumorally or intravenously [5]. Then, the tumor volume is submitted to high electric field protocols, inducing higher cell permeability and consequent drug transportation. The European Standard Operating Procedures of ECT (ESOPE) were released in 2006 [8] and updated in 2018 [9]. These are guidelines for ECT treatment, indicating, for instance, the electrode type, necessary voltage and exposure times. The main factor correlated with ECT effectiveness is the coverage of the entire tumor mass by sufficiently high electric fields to guarantee EP occurrence [10–12]. As the electric field distribution changes with the electrical properties of the tissue, and the EP effect changes tissue properties [13], this becomes a complex interaction system. Therefore, numeric simulations (e.g. based on the Finite Element Method (FEM)) are used to study the electric field distribution during the EP process [14–16]. These simulations are often validated using vegetal models, such as potato tubers, because dark areas appear a few hours after electroporation [17, 18]. Both lack and excess of electric field intensity on the tumor mass can be a problem. Cancer recurrence may occur if regions are not affected by the electroporation process and the cytotoxic agent is unable to enter the cells [8]. On the other hand, excessive electric fields may injure healthy tissue by irreversible EP or cause thermal effects due to Joule heating [19, 20]. ESOPE guidelines recommend electric field magnitudes from 100 to 120 kV/m for ECT [8, 9]. One reason for unintentional alteration of the electric field distribution is that the ESOPE parallel electrode needles bend due to continuous use in ECT treatments. The mechanical effort caused by insertion and removal of the electrode in non-uniform tissue surfaces may also cause bending. This paper aims to evaluate whether those electrode disturbances could affect ECT treatment effectiveness.

D. L. L. S. Andrade (&) · J. R. da Silva · R. Guedert · G. B. Pintarelli · D. O. H. Suzuki: Institute of Biomedical Engineering, Federal University of Santa Catarina, Florianopolis, Brazil. e-mail: [email protected]; [email protected]. J. A. Berkenbrock · S. Achenbach: Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada.
The mechanical effort caused by insertion and removal of the electrode in non-uniform tissue surfaces may also configure the bent. This paper aims to evaluate whether those electrode disturbances could affect ECT treatment effectiveness.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_166


2 Materials and Methods

Electrode bending was studied at two levels: outwards (B+) and inwards (B−). A control group (B0) was used to compare changes regarding electric field indentation loss and electric current. The study groups are shown in Fig. 1. The evaluation used both in silico and in vitro analyses.

2.1 In Silico Study

The computational model was built with the FEM software COMSOL Multiphysics v5.1 (COMSOL Inc., Stockholm, Sweden) running on a cluster server (Intel Xeon Gold 6126 @ 2.60 GHz, 20 cores, 300 GB RAM) with the Ubuntu Linux (64-bit, Canonical Ltd., London, United Kingdom) operating system. COMSOL solves the equation of continuity for a steady-state regime, shown in Eq. (1):

∇ · (σ∇V) = 0    (1)

where σ is the tissue electrical conductivity (S/m) and V is the electric potential (V). The boundary conditions were insulating on all external surfaces (Neumann boundary condition). The contact between electrode and tissue was modelled as a Dirichlet boundary condition.

EP changes the tissue properties, including its electrical conductivity. Ivorra et al. [17] modelled the tissue conductivity changes in potatoes with a sigmoidal function of the applied electric field magnitude, shown in Eq. (2):

σ(E) = 0.03 + 0.35 · e^(−e^(−0.01(|E| − 250)))    (2)

where σ is the potato electrical conductivity (S/m) and E is the applied electric field (V/m).

Figure 1a represents the three-dimensional geometry; it consists of 4 pairs of stainless steel (AISI 304) needles with 1 mm diameter and 20 mm length inserted into a cylindrical potato of 50 mm diameter and 50 mm height. Needle distances and voltage follow the ESOPE type II standard (i.e. 4 mm between opposite columns, 3 mm between needles in the same column and a potential difference of 400 V). In the bending study, the anode column was revolved inwards (until the columns were 1 mm apart, Fig. 1b) and outwards (until the columns were 8 mm apart, Fig. 1c). The choice of distances was based on experimental possibilities and on the visualization of effects. Around these geometries, an air block was placed and considered infinite to avoid boundary condition issues at the surfaces of the potato tuber model. The FEM tool generated a mesh with 1.5 million tetrahedral elements (COMSOL extra-fine preset) for each group. Figure 1d shows the generated mesh for the standard group B0. From the numerical simulation results, the electric current flow and the electric field distribution were examined.
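Equation (2) is a Gompertz-type sigmoid. A short numerical sketch illustrates its behavior; note that it assumes the field argument is expressed in V/cm (so the transition is centred around 250 V/cm = 25 kV/m), which is an interpretation of the constants, not stated explicitly in the text:

```python
import math

def sigma_potato(E_vcm: float) -> float:
    """Ivorra et al. sigmoid for potato conductivity (S/m), Eq. (2).

    Assumes the field E is given in V/cm, so the transition is centred
    around 250 V/cm (25 kV/m) -- an assumption about the units.
    """
    return 0.03 + 0.35 * math.exp(-math.exp(-0.01 * (abs(E_vcm) - 250.0)))

print(sigma_potato(0))     # ~0.03 S/m: baseline, non-electroporated tissue
print(sigma_potato(500))   # conductivity near saturation
print(sigma_potato(1000))  # fully electroporated tissue, approaching 0.38 S/m
```

The double exponential keeps the conductivity flat at the baseline 0.03 S/m for low fields and saturates smoothly once the field exceeds the electroporation threshold, which is why this form is preferred over a simple logistic fit for potato data.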

2.2 In Vitro Study

Experiments were carried out using 21 potato tubers (Solanum tuberosum) bought in a local supermarket. Each group had seven samples. Potatoes were not peeled and received a minimum number of cuts, and their surfaces were dried before and after the pulse application to minimize artifacts unrelated to the electroporation process [21]. The EP protocol followed ESOPE (eight square pulses of 100 µs at 1 Hz and 400 V amplitude) [8]. The electric current was measured using a Tektronix A622 (Tektronix Inc., Oregon, United States) current probe. After application, the samples were kept in Petri dishes for 24 h at 25 °C. Orthogonal slices were then cut precisely where each pair of needles had been inserted. The slices were photographed under a lighting-controlled setup with a 13 MP, f/2.2 LG M250F (LG, Seoul, South Korea) digital camera. A total of 84 slices (28 per group) were studied. The depth loss caused by electric field indentation was calculated using the ImageJ (University of Wisconsin, US) image processing tool. For this purpose, the indentation height h (mm) was measured on group B+, with group B0 as the control, since it represents the ESOPE configuration.

Fig. 1 Experimental groups geometry: a ESOPE standard configuration (parallel needles), group B0; b inwards bent, group B−; c outwards bent, group B+; d generated mesh of the in silico study

3 Results

Figure 2 compiles the results from the in silico model and the in vitro experiments. The electric field distribution can be observed in the in silico model, while the in vitro experiments show four slices containing the dark stains resulting from the EP process.

Table 1 shows the in silico and in vitro electric currents. The in vitro data are represented by means and 95% confidence intervals. B− increased the current by 43.4% in silico and 59.05% (58.40–60.25%) in vitro with respect to the B0 group. The electric field indentation height h in the control group was considered to be zero. The median h at B+ was significantly different from zero (testing the h > 0 hypothesis with the Wilcoxon test yielded p < 0.01), totaling 4.52 mm (first to third quartile, 2.43–7.29 mm).

4 Discussion

Needle bending constitutes a relevant electroporation problem. Mir et al. [8] state that if the needles are misplaced during ECT treatment, the electrical pulses must be reapplied. Also, Miklavčič et al. [22] implemented an analysis of

Fig. 2 Compiled results from the standard protocol (B0), with outwards bending (B+) and with inwards bending (B−). The first column shows the in silico results for the three configurations. Colored lines are electric field contours from 20 to 60 kV/m, from blue to red. Four sample slices are shown for each in vitro group (all data not shown). The indentation height h is marked at B+


Table 1 In silico and in vitro (mean, 95% C.I.) electric current data

Group | In silico | In vitro mean (95% C.I.)
B0    | 12.65 A   | 13.14 (12.62–13.66) A
B+    | 10.24 A   | 9.98 (9.55–10.43) A
B−    | 18.14 A   | 20.9 (19.99–21.89) A

95% C.I. = 95% confidence interval

the local electric field in tissues and determined a minimum electrical potential for electroporation in different cell types, as an attempt to compensate for possible needle displacements during treatment. However, they did not present a quantitative analysis of the ECT effectiveness loss due to bending. Campana et al. [23], using a numerical model and potato tuber experiments, analyzed the distribution of the electric field when the electrode plates approach each other. We observed similar electric field distribution behavior using needle electrodes.

Both Cliniporator® and VITAE® (IGEA S.p.A., Carpi, Italy), the leading commercial devices, support a maximum current of 20 A for the type II electrode geometry [24, 25]. An electric current increase was observed in the B− group with respect to the standard one, in both the in silico and in vitro results. The numerical simulation resulted in 18.14 A, which is 90% of the equipment capacity (supported current). The in vitro study resulted in 20.9 A, which is 4.5% higher than the equipment limit. In this case, the application of ECT to tissues with similar electrical conductivity behavior could be abruptly interrupted by an equipment shutdown. Moreover, undesired tissue ablation and Joule heating may occur with inwards bending [10, 20].

Group B+ in the in vitro study presented an indentation height loss of 22.6%, meaning that the deepest portion of the tissue did not receive an electric field magnitude sufficient to induce electroporation. We observed that this indentation loss corresponds to approximately 50 kV/m in the in silico study, implying that this was the minimum electric field producing potato stains. Similar results are reported in the literature [18, 21].
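The relative changes quoted above follow directly from the currents in Table 1. A short sanity check reproduces them, together with a rough per-pulse Joule energy estimate derived from the 400 V, 100 µs ESOPE pulses (an illustrative back-of-the-envelope figure, not a value reported in the paper):

```python
# Mean currents from Table 1 (A)
i_silico = {"B0": 12.65, "B+": 10.24, "B-": 18.14}
i_vitro = {"B0": 13.14, "B+": 9.98, "B-": 20.9}

# Relative current increase of the inwards-bent group vs. control
inc_silico = (i_silico["B-"] - i_silico["B0"]) / i_silico["B0"]  # paper reports 43.4%
inc_vitro = (i_vitro["B-"] - i_vitro["B0"]) / i_vitro["B0"]      # paper reports 59.05%

# Margin against the 20 A limit of the commercial pulse generators
frac_of_limit = i_silico["B-"] / 20.0        # ~90% of the supported current
over_limit = (i_vitro["B-"] - 20.0) / 20.0   # ~4.5% above the equipment limit

# Illustrative per-pulse energy, W = V * I * t (400 V, 100 us pulses)
energy_b0 = 400 * i_vitro["B0"] * 100e-6     # control group, J per pulse
energy_bm = 400 * i_vitro["B-"] * 100e-6     # inwards-bent group, J per pulse

print(f"{inc_silico:.1%} {inc_vitro:.1%} {frac_of_limit:.1%} {over_limit:.1%}")
print(f"{energy_b0:.3f} J vs {energy_bm:.3f} J per pulse")
```

The energy estimate makes the Joule-heating concern concrete: inwards bending raises the deposited energy per pulse by the same ~59% as the current, concentrated in a smaller tissue volume between the closer needle columns.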

5 Conclusions

This paper evaluated whether electrode bending affects ECT treatment effectiveness. Outwards and inwards electrode bending was studied using in silico and in vitro analyses. The electric current values may approach the limits of commercial equipment when there is inwards bending, while an insufficient electric field distribution may occur when there is outwards bending. Both cases may result in a loss of ECT effectiveness. We recommend avoiding ECT treatment in the presence of electrode deformations.

Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES)—Finance Code 001. The authors would like to thank the Brazilian research funding agencies CAPES and CNPq for the scholarships granted to the post-graduate students participating in the study.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Miklavčič D, Mali B, Kos B et al (2014) Electrochemotherapy: from the drawing board into medical practice. Biomed Eng Online 13:29. https://doi.org/10.1186/1475-925X-13-29
2. Cemazar M, Tamzali Y, Sersa G et al (2008) Electrochemotherapy in veterinary oncology. J Vet Intern Med 22:826–831. https://doi.org/10.1111/j.1939-1676.2008.0117.x
3. Suzuki DOH, Berkenbrock JA, de Oliveira KD et al (2017) Novel application for electrochemotherapy: immersion of nasal cavity in dog. Artif Organs 41:767–773. https://doi.org/10.1111/aor.12858
4. Suzuki DOH, Berkenbrock JA, Frederico MJS et al (2018) Oral mucosa model for electrochemotherapy treatment of dog mouth cancer: ex vivo, in silico, and in vivo experiments. Artif Organs 42:297–304. https://doi.org/10.1111/aor.13003
5. Esmaeili N, Friebe M (2019) Electrochemotherapy: a review of current status, alternative igp approaches, and future perspectives. J Healthc Eng 2019:1–11. https://doi.org/10.1155/2019/2784516
6. Campana LG, Edhemovic I, Soden D et al (2019) Electrochemotherapy—emerging applications, technical advances, new indications, combined approaches, and multi-institutional collaboration. Eur J Surg Oncol 45:92–102. https://doi.org/10.1016/j.ejso.2018.11.023
7. Cemazar M, Sersa G (2019) Recent advances in electrochemotherapy. Bioelectricity 1:204–213. https://doi.org/10.1089/bioe.2019.0028
8. Mir LM, Gehl J, Sersa G et al (2006) Standard operating procedures of the electrochemotherapy: instructions for the use of bleomycin or cisplatin administered either systemically or locally and electric pulses delivered by the Cliniporator™ by means of invasive or non-invasive electrodes. Eur J Cancer Suppl 4:14–25. https://doi.org/10.1016/j.ejcsup.2006.08.003
9. Gehl J, Sersa G, Matthiessen LW et al (2018) Updated standard operating procedures for electrochemotherapy of cutaneous tumours and skin metastases. Acta Oncol 57(3):1–9. https://doi.org/10.1080/0284186X.2018.1454602

10. Yarmush ML, Golberg A, Serša G et al (2014) Electroporation-based technologies for medicine: principles, applications, and challenges. Annu Rev Biomed Eng 16:295–320. https://doi.org/10.1146/annurev-bioeng-071813-104622
11. Berkenbrock JA, Machado RG, Suzuki DOH (2018) Electrochemotherapy effectiveness loss due to electric field indentation between needle electrodes: a numerical study. J Healthc Eng 2018:1–8. https://doi.org/10.1155/2018/6024635
12. Pintarelli GB, Berkenbrock JA, Rassele A et al (2019) Computer simulation of commercial conductive gels and their application to increase the safety of electrochemotherapy treatment. Med Eng Phys 74:99–105. https://doi.org/10.1016/j.medengphy.2019.09.016
13. Corovic S, Lackovic I, Sustaric P et al (2013) Modeling of electric field distribution in tissues during electroporation. Biomed Eng Online 12:16. https://doi.org/10.1186/1475-925X-12-16
14. Zupanic A, Kos B, Miklavcic D (2012) Treatment planning of electroporation-based medical interventions: electrochemotherapy, gene electrotransfer and irreversible electroporation. Phys Med Biol 57:5425–5440. https://doi.org/10.1088/0031-9155/57/17/5425
15. Groselj A, Kos B, Cemazar M et al (2015) Coupling treatment planning with navigation system: a new technological approach in treatment of head and neck tumors by electrochemotherapy. Biomed Eng Online 14:S2. https://doi.org/10.1186/1475-925X-14-S3-S2
16. Vera-Tizatl AL, Kos B, Miklavcic D et al (2017) Investigation of numerical models for planning of electrochemotherapy treatments of invasive ductal carcinoma. In: 2017 Global Medical Engineering Physics Exchanges/Pan American Health Care Exchanges (GMEPE/PAHCE). IEEE, pp 1–6
17. Ivorra A, Mir LM, Rubinsky B (2009) Electric field redistribution due to conductivity changes during tissue electroporation: experiments with a simple vegetal model. IFMBE Proc 25:59–62. https://doi.org/10.1007/978-3-642-03895-2-18
18. Berkenbrock JA, Pintarelli GB, Júnior ADCA, Suzuki DOH (2019) Verification of electroporation models using the potato tuber as in vitro simulation. J Med Biolog Eng 39(2):224–229. https://doi.org/10.1007/s40846-018-0408
19. Garcia PA, Davalos RV, Miklavcic D (2014) A numerical investigation of the electric and thermal cell kill distributions in electroporation-based therapies in tissue. PLoS ONE 9:e103083. https://doi.org/10.1371/journal.pone.0103083
20. van den Bos W, Scheffer HJ, Vogel JA et al (2016) Thermal energy during irreversible electroporation and the influence of different ablation parameters. J Vasc Interv Radiol 27:433–443. https://doi.org/10.1016/j.jvir.2015.10.020
21. Oey I, Faridnia F, Leong SY et al (2016) Determination of pulsed electric fields effects on the structure of potato tubers. In: Miklavcic D (ed) Handbook of electroporation. Springer International Publishing, Cham, pp 1–19
22. Miklavčič D, Pavšelj N, Hart FX (2006) Electric properties of tissues. Wiley Encycl Biomed Eng
23. Campana LG (2019) Non-parallelism of needles in electroporation: 3D computational model and experimental analysis. COMPEL Int J Comput Math Electr Electron Eng 38:348–361. https://doi.org/10.1108/COMPEL-04-2018-0189
24. Staal LG, Gilbert R (2011) Generators and applicators: equipment for electroporation. In: Kee ST, Gehl J, Lee EW (eds) Clinical aspects of electroporation. Springer, New York, NY, pp 45–65
25. Bertacchini C (2017) Cliniporator: medical electroporation of tumors. In: Handbook of electroporation. Springer International Publishing, Cham, pp 1–36

Conductive Gels as a Tool for Electric Field Homogenization and Electroporation in Discontinuous Regions: In Vitro and In Silico Study L. B. Lopes, G. B. Pintarelli and D. O. H. Suzuki

Abstract

Electrochemotherapy (ECT) is a cancer treatment that combines chemotherapy and electroporation (EP), where EP is used to increase cell membrane permeability, facilitating the entrance of drugs into cancer cells. For a successful treatment, the entire tumor region needs to be exposed to an adequate electric field intensity. In silico and in vitro studies are used in a pre-treatment step to analyse the electric field distribution and possible mistakes, especially in irregular and complex tissue structures such as protuberances and holes. Conductive gels can be used to fill irregular tissue structures and make the electric field distribution homogeneous. In this paper, an in silico study and an in vitro vegetal model were used to evaluate the effectiveness of commercial conductive gels in homogenizing the electric field in discontinuity areas. Both studies demonstrate that the conductive gels were effective in homogenizing the electric field in the discontinuity region.

Keywords

Electroporation • electrochemotherapy • electric field • conductive gel

1 Introduction

Electroporation (EP) consists of applying short and intense electric field pulses (usually tens of kV/m and hundreds of microseconds) in order to open pores in the cell membrane and increase cell permeability. The pores allow the passage of ions, molecules and even macromolecules into the intracellular medium [1–3]. There are two levels of EP, mostly controlled by the intensity of the applied electric field. The first is called reversible electroporation (RE), in which the pores close after the application and the membrane is restored. The second is irreversible electroporation (IRE), which provokes membrane destruction and cell death [4–6]. Both techniques are used in biotechnology and medicine. RE has been used in combination with chemotherapy; this treatment is named electrochemotherapy (ECT). ECT improves drug delivery and volume selectivity over chemotherapy. IRE is also used to treat cancer, without the need for a chemotherapeutic agent.

For the success of EP, the entire tumor region should be exposed to an adequate electric field intensity. Disorderly tumor growth can result in irregular shapes, which can cause diffraction of the electric field [7,8]. Regions with an electric field insufficient for EP are named blind spots. Blind spots may cause ECT failure [9–11]. Heyse et al. investigated the impact of protuberance geometries on the diffraction of the electric field in ECT: the conductivity boundary between tumor and air disrupts the homogeneity of the electric field, creating blind spots [12]. It has been reported that conductive gel can improve the electric field homogeneity and mitigate blind spots [13,14]. Ivorra et al. reported a 0.5 S/m gel as a tool for electric field homogenization in animal tissues using plate electrodes [15]. The same principle can also be used with needle electrodes [16].

In silico and in vitro studies are recommended to evaluate blind spots, so that successful procedures can then be further studied in animals and humans. Vegetal models provide fast visual feedback on the effectiveness of EP and can also be used to validate EP equipment, new electrode arrangements and pulse parameters. The potato tuber is an electroporation model that is easily found and prepared, and it agrees with the 3Rs (replacement, reduction and refinement) concept of animal welfare in research [12,17,18].

L. B. Lopes (✉) · G. B. Pintarelli · D. O. H. Suzuki
Instituto de Engenharia Biomédica (IEB-UFSC), Universidade Federal de Santa Catarina, R. Eng. Agrônomo Andrei Cristian Ferreira, s/n-Trindade, Florianópolis, Brazil
e-mail: [email protected]
Moreover, when studying ECT failures due to electric field blind spots, it is recommended to avoid animal usage (avoidance of animal suffering caused by treatment failure).

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_167


Our group recently found that commercial conductive gels available in Brazil are in the 0.1–0.2 S/m range [19]. The reported optimal gel conductivity for ECT is 0.5 S/m; an optimized gel should be within the conductivity range of the tissues. In this paper, we present an in vitro and in silico study investigating commercial conductive gels in discontinuous regions as a way to improve electric field homogenization in ECT with needle electrodes. The potato tuber was used as a vegetal model, and two commercial conductive gels were evaluated.

2 Materials and Methods

2.1 In Silico Study

The software COMSOL Multiphysics (COMSOL AB, Sweden) was used to simulate the three-dimensional models displayed in Fig. 1. The geometry targets a discontinuous region, which causes diffraction of the electric field [12]. The height of the discontinuity is 10 mm. The two needles are entirely inserted into the potato; they are 13.5 mm high, 0.7 mm in diameter and 5 mm apart. The conductive gel was also simulated, as a block with the same height as the discontinuity, 30 mm of width and 10 mm of depth. The mesh was automatically generated by the software, resulting in 1,789,467 tetrahedral elements for the simulation without gel and 1,978,071 for the simulation with gel. A steady-state regime was considered, and Dirichlet and Neumann boundary conditions were applied. The tissue was considered homogeneous, and the Laplace equation (1) was solved by the finite element method:

∇ · (σ · ∇V) = 0    (1)

The model used to describe the potato tissue conductivity (2) was previously described in [20]:

σ(E) = σ0 + (σmax − σ0) / (1 + D · e^(−(E − A)/B))    (2)

A = (Eire + Ere) / 2    (3)

B = (Eire − Ere) / C    (4)

The parameters used to compute the simulation are shown in Table 1 [18].
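As a sanity check, Eqs. (2)–(4) can be evaluated directly with the Table 1 parameters. The sketch below (field values in kV/m, matching the table's units) is illustrative only, not part of the original study:

```python
import math

# Table 1 parameters
SIGMA_0, SIGMA_MAX = 0.03, 0.36   # potato conductivity limits (S/m)
E_RE, E_IRE = 20.0, 80.0          # RE and IRE thresholds (kV/m)
C, D = 8.0, 10.0                  # fitted constants

A = (E_IRE + E_RE) / 2            # Eq. (3): sigmoid centre, 50 kV/m
B = (E_IRE - E_RE) / C            # Eq. (4): transition width, 7.5 kV/m

def sigma(E_kvm: float) -> float:
    """Eq. (2): field-dependent potato conductivity (S/m)."""
    return SIGMA_0 + (SIGMA_MAX - SIGMA_0) / (1 + D * math.exp(-(E_kvm - A) / B))

print(sigma(0))    # near the baseline 0.03 S/m (no electroporation)
print(sigma(50))   # at the sigmoid centre between RE and IRE thresholds
print(sigma(100))  # approaching sigma_max (fully electroporated)
```

This field-dependent conductivity is what makes the FEM problem nonlinear: the solver must iterate, since the field distribution depends on σ(E) and vice versa.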

2.2 In Vitro Study

The potatoes (Solanum tuberosum) were purchased on the day of the experiments. The potatoes were cut using a folded blade to reach the 90° discontinuity in all samples. Three experiments were conducted: without gel and with two different commercial conductive gels. The conductivity of Carbogel ULT is approximately 0.1 S/m, and that of Ultra Gel is approximately 0.2 S/m, at 10 kHz [19]. The EP protocol consisted of eight 100 μs pulses at a 1 Hz repetition rate and an electric field of 100 kV/m, following the ESOPE standard [21]. This procedure was applied to each sample with a custom electroporation system [22]. The stainless-steel needle electrode, with 13.5 mm of height, 0.7 mm of diameter and 5 mm of separation, was completely inserted at the edge of the discontinuity, resulting in one needle fully inside the potato and the other partially inside. In the gel experiments, the gel was applied covering the entire discontinuity cliff and the electrodes (Fig. 2). The samples were stored in a sealed recipient on a paper towel moistened with 3 ml of 4% citric acid for a 24 h period at a room temperature of 25 °C. Electroporation stimulates the interaction between the potato polyphenol oxidase enzyme and phenol substrates, which results in the accumulation of brown stains in the electroporated area; an acid environment reduces the potato tissue's natural oxidation [17,23]. The sample size was 14 (N = 14) for each case. A control group was tested with the same protocol but without any voltage applied. Current measurements of each sample were performed with a Tektronix A621 AC current probe and a Tektronix DPO2012B oscilloscope. Stained areas were measured with ImageJ (National Institutes of Health, USA).

Fig. 1 Simulated geometric model and mesh

Table 1 Simulation parameters

Parameter | Value
Electrode relative permittivity | 1
Electrode electrical conductivity | 1.74 × 10⁹ S/m
Potato relative permittivity | 1
Potato initial electrical conductivity (σ0) | 0.03 S/m
Potato final electrical conductivity (σmax) | 0.36 S/m
Potato RE threshold (Ere) | 20 kV/m
Potato IRE threshold (Eire) | 80 kV/m
Constant C | 8
Constant D | 10

Fig. 2 Experimental setup. a Without gel; b With gel

2.3 Statistical Analysis

Current and area values observed in the experiments were subjected to the Shapiro–Wilk normality test at a 95% confidence level, which indicated adherence of all three groups' data to a normal distribution. The data were then subjected to unpaired t-tests, which indicated significant differences between the groups, except for the areas of the 0.1 and 0.2 S/m gels.

3 Results

The in silico and in vitro results are shown in Fig. 3. Regions where the electric field was not intense enough for electroporation are circled in red. The samples were cut along the needles where the field was applied. In the simulation results, the dark grey area represents the region where the electric field was higher than 80 kV/m (IRE). The grey area represents electric fields between 20 and 80 kV/m (RE). Light grey represents electric fields under 20 kV/m (no known significant effects). Table 2 shows the current results of the simulations and experiments; a 95% confidence interval (CI) was calculated. Table 3 shows the areas resulting from the simulations (RE and IRE) and the total electroporated area observed in the experiments.
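The unpaired t-test used in the statistical analysis can be sketched as follows. The current samples here are synthetic placeholders chosen near the reported group means, not the study's actual data:

```python
import math

def unpaired_t(a, b):
    """Student's unpaired t statistic with pooled (equal-variance) variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance, group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance, group b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Synthetic example: currents (A), no-gel group vs. 0.1 S/m gel group
no_gel = [1.45, 1.52, 1.48, 1.55, 1.43, 1.50, 1.49]
gel_01 = [2.20, 2.35, 2.25, 2.31, 2.22, 2.33, 2.30]

t = unpaired_t(no_gel, gel_01)
# |t| far beyond the 5% two-tailed critical value (~2.18 for 12 d.o.f.)
# -> the group difference would be declared significant
print(t)
```

In practice `scipy.stats.ttest_ind` performs the same computation and also returns the p-value; the manual version is shown only to make the pooled-variance formula explicit.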

4 Discussion

For the best ECT performance, individual treatment planning (also known as pre-treatment) is encouraged. Pre-treatment starts with imaging exams that allow analysis of the characteristics of the region's anatomy, and then proceeds to studies verifying treatment parameters such as the optimum electrode configuration, the electroporation protocol [24] and the use of conductive gels [13,19,25]. All parameters are managed to reach an adequate electric field over the tumor volume. The potato model is an alternative that provides fast and inexpensive visual feedback of electroporation and reduces animal testing, especially for testing new EP parameters and electrodes. The potato's dark stained area is an indirect indicator of EP success and provides evidence about improvements in the electric field distribution.

The in silico measured areas show that increasing the gel conductivity produces an IRE area increase, but not a considerable increase in the total electroporated area. There was no significant area difference between the two gels in the in vitro experiments. The area results show an 8.4% difference between the in silico and in vitro studies in the worst case; the current results show a 9.6% difference in the worst case. Therefore, the in silico study proved to be a good way to predict the electric current. A 16% increase in the average current was observed between the two gels, as expected, since the 0.2 S/m gel is more conductive. In ECT treatments, care should be taken to ensure the equipment is capable of supplying the additional current increase caused by the gels.

Discontinuous regions can diffract the electric field and lead to treatment failure. Conductive gels are used to improve the contact between electrodes and tissues and also to homogenize the electric field in cases of irregular geometry [13]. Previous studies demonstrate that gels with 0.5–1 S/m of conductivity


Fig. 3 In silico (left column) and in vitro (the three right side columns) results. a Without gel; b 0.1 S/m gel; c 0.2 S/m gel

Table 2 Current results

Experiment | Simulated current | Experimental current (mean and 95% CI)
Without gel | 1.36 A | 1.49 A (1.42–1.57 A)
0.1 S/m gel | 2.21 A | 2.28 A (2.15–2.41 A)
0.2 S/m gel | 2.63 A | 2.65 A (2.58–2.77 A)

Table 3 Area results

Experiment | Simulated RE area (mm²) | Simulated IRE area (mm²) | Total simulated area (mm²) | Experimental area (mean and 95% CI)
Without gel | 71.1 | 34.9 | 106.0 | 98.1 mm² (93.6–102.6 mm²)
0.1 S/m gel | 82.9 | 38.9 | 121.8 | 114.4 mm² (111.5–117.2 mm²)
0.2 S/m gel | 82.0 | 42.6 | 124.6 | 114.2 mm² (110.7–117.7 mm²)
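The worst-case in silico vs. in vitro differences quoted in the Discussion can be reproduced from Tables 2 and 3, taking the differences relative to the simulated values; small rounding discrepancies against the paper's 8.4%/9.6% figures are expected:

```python
# (simulated, experimental mean) pairs from Tables 2 and 3
currents = [(1.36, 1.49), (2.21, 2.28), (2.63, 2.65)]    # A
areas = [(106.0, 98.1), (121.8, 114.4), (124.6, 114.2)]  # mm^2

def worst_rel_diff(pairs):
    """Largest |simulated - experimental| / simulated across all rows."""
    return max(abs(sim - exp) / sim for sim, exp in pairs)

worst_current = worst_rel_diff(currents)  # ~9.6%, without-gel case
worst_area = worst_rel_diff(areas)        # ~8.3-8.4%, 0.2 S/m gel case

# Mean in vitro current increase between the two gels: ~16%
gel_increase = (2.65 - 2.28) / 2.28

print(f"{worst_current:.1%} {worst_area:.1%} {gel_increase:.1%}")
```

The agreement within ten percent on both current and area is what supports the paper's claim that the in silico model is a good predictor of the measured electric current.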


can solve most cases [14]. Nevertheless, commercial conductive gels are usually made to improve the contact of electrocardiogram and ultrasonography electrodes; in these cases, conductivity is not a significant parameter and is not regularly reported on labels. Both the in silico and in vitro results show blind spots in the discontinuity region, which can cause tumor recurrence. The 0.1 and 0.2 S/m conductive gels were effective in homogenizing the electric field and eliminating blind spots in the potato tissue discontinuity, which suggests that a conductive gel with its conductivity matched to the tumor tissue conductivity can produce the same effect in ECT treatment. Potato tissue is about ten times less conductive than tumor tissue, and conductivity measurements of commercial gels showed that none reached the values recommended for homogenizing the electric field in ECT [14,19].

8. Pavliha D, Kos B, Županič A et al (2012) Patient-specific treatment planning of electrochemotherapy: procedure design and possible pitfalls. Bioelectrochemistry 87:265–273
9. Corovic S, Lackovic I, Sustaric P et al (2013) Modeling of electric field distribution in tissues during electroporation. Biomed Eng Online 12:16
10. Suzuki DOH, Anselmo J, Oliveira KD et al (2015) Numerical model of dog mast cell tumor treated by electrochemotherapy. Artif Organs 39:192–197
11. Berkenbrock JA, Machado RG, Suzuki DOH (2018) Electrochemotherapy effectiveness loss due to electric field indentation between needle electrodes: a numerical study. J Healthc Eng
12. Heyse AB, Pintarelli GB, Suzuki DOH. Electric field distribution and electroporation in discontinuous regions using vegetal model: in vitro and in silico study. In: XXVI Brazilian Congress on Biomedical Engineering, 465–469
13. Ivorra A, Rubinsky B (2007) Electric field modulation in tissue electroporation with electrolytic and non-electrolytic additives. Bioelectrochemistry 70:551–560
14. Ivorra A, Al-Sakere B, Rubinsky B et al (2008) Use of conductive gels for electric field homogenization increases the antitumor efficacy of electroporation therapies. Phys Med Biol 53:6605
15. Ivorra A, Rubinsky B (2007) Optimum conductivity of gels for electric field homogenization in tissue electroporation therapies. In: IV Latin American Congress on Biomedical Engineering 2007, Bioengineering Solutions for Latin America Health, 619–622
16. Suzuki DOH, Marques CMG, Rangel MMM (2016) Conductive gel increases the small tumor treatment with electrochemotherapy using needle electrodes. Artif Organs 40:705–711
17. Oey I, Faridnia F, Leong SY et al (2016) Determination of pulsed electric fields effect on the structure of potato tubers. In: Handbook of electroporation, 1–19
18. Berkenbrock JA, Pintarelli GB, Antônio AC Jr et al (2019) Verification of electroporation models using the potato tuber as in vitro simulation. J Med Biol Eng 39:224–229
19. Pintarelli GB, Berkenbrock JA, Rassele A et al (2019) Computer simulation of commercial conductive gels and their application to increase the safety of electrochemotherapy treatment. Med Eng Phys 74:99–105
20. Sel D, Cukjati D, Batiuskaite D et al (2005) Sequential finite element model of tissue electropermeabilization. IEEE Trans Biomed Eng 52:816–827
21. Mir LM, Gehl J, Sersa G et al (2006) EJC Suppl 4:14–25
22. Berkenbrock J, Pintarelli G, Antônio A et al (2017) In vitro simulation of electroporation using potato model. In: CMBES Proceedings, 40
23. Ivorra A, Mir LM, Rubinsky B (2009) Electric field redistribution due to conductivity changes during tissue electroporation: experiments with a simple vegetal model. In: World Congress on Medical Physics and Biomedical Engineering, Munich, Germany, 59–62
24. Groselj A, Kos B, Cemazar M et al (2015) Coupling treatment planning with navigation system: a new technological approach in treatment of head and neck tumors by electrochemotherapy. Biomed Eng Online 14:S2
25. Mahna A, Firoozabadi SMP, Shankayi Z (2014) The effect of ELF magnetic field on tumor growth after electrochemotherapy. J Membr Biol 247:9–15

5 Conclusion

The in silico and in vitro studies demonstrated the effectiveness of commercial conductive gels in homogenizing the electric field distribution in discontinuous areas. Vegetal models can be used as a tool for analyzing new electrodes, EP parameters and different approaches in ECT treatment.

Acknowledgements The author thanks the funding agencies CAPES and CNPq.

Conflict of Interest The authors declare that they have no conflict of interest.


Line Shape Analysis of Cortisol Infrared Spectra for Salivary Sensors: Theoretical and Experimental Observations C. M. A. Carvalho, B. L. S. Porto, B. V. M. Rodrigues, and T. O. Mendes

Abstract

Commonly used for the diagnosis and monitoring of different diseases, such as Addison's and Parkinson's, or for the assessment of stress, cortisol is a steroid glucocorticoid of special interest to the medical community. The present work studies the infrared spectrum of cortisol in order to reveal vibrational markers for the quantitative determination of salivary cortisol levels. For this, the infrared spectrum of the cortisol molecule was obtained by computational methods based on Density Functional Theory (DFT), using the Avogadro, Gaussian and VEDA software. The stretching of the C=O double bonds stands out as the most intense region of the infrared spectrum of the cortisol molecule. A set of assignments for the major vibrations is suggested. The experimental spectra of the cortisol analytical standard, artificial saliva and artificial saliva with added cortisol were obtained in reflectance mode using an ATR accessory. The theoretical spectral profile was compared to the experimental values of the vibrational modes. Finally, the vibrational bands at 2912, 1714, 1706, 1642, 1630 and 1610 cm−1 were highlighted as potential vibrational markers for the determination of salivary cortisol concentration, indicating that infrared spectroscopy can be used for qualitative and quantitative analysis of salivary cortisol levels.

Keywords

Biofluids · Diagnosis · Infrared spectroscopy · Cortisol

C. M. A. Carvalho · B. V. M. Rodrigues · T. O. Mendes (corresponding author) Universidade Brasil, Instituto Científico e Tecnológico, São Paulo, SP CEP 08230-030, Brazil; B. L. S. Porto Departamento de Química, Universidade Federal de Minas Gerais, Belo Horizonte, MG CEP 31270-901, Brazil

1 Introduction

Biological fluids, such as blood, saliva and urine, are the main liquid components of living beings, and their production occurs continuously and without dependence on external stimuli. Because of their intimate relationship with the bodily organs, their analysis is a key element for accurate diagnosis and prognosis of various types of pathologies, as well as for monitoring the performance of organic functions [1]. Their availability and ease of collection, as well as the repeatability of sampling, enable closer monitoring of disease progression and an accurate assessment of the effectiveness of treatment [2]. Widely used in studies that propose the measurement of organic fluids, saliva is a biological fluid produced in the parotid, submandibular and sublingual glands, which are located in the oral cavity. Saliva is composed predominantly of water (99.5%), in addition to proteins, such as glycoproteins, immunoglobulins and antimicrobial peptides (0.3%), and inorganic matter (0.2%) [3]. Among the most studied salivary biomarkers, cortisol, salivary immunoglobulin A (IgA-s), α-amylase and testosterone stand out as the most important. Each of them contributes differently, bringing distinct information about the organism. For example, cortisol is produced by the organism in response to different stressors, and its analysis in saliva is possible due to its correlation with the serum free fraction [4]. Several studies correlate the levels of this hormone with the development of a stressful state in humans, which has made cortisol a highly relevant biomarker for the detection of physiological stress [5]. As the most representative glucocorticoid, cortisol has a molar mass of 362.46 g mol−1 and the molecular formula C21H30O5. According to Ramamoorthy [6], cortisol is an endogenous hormone generated from cholesterol through a multi-enzymatic process called steroidogenesis.
After release from the adrenal glands, cortisol enters the bloodstream and is directed to target tissues. In addition to controlling different

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_168


functions related to the cardiovascular and immune systems, the hormone stimulates lipolysis in fat cells, increases protein breakdown and decreases muscle protein synthesis, triggering a greater release of lipids and amino acids into the bloodstream [7]. The normal organic production of cortisol is in the range of 10–20 mg daily. Because it persists in circulation for only 80 to 100 min on average, this glucocorticoid shows quantitative fluctuations throughout the day, which leads to a constant dependence on its production to maintain serum levels. In healthy adult individuals, morning salivary cortisol levels are above 12 nmol L−1 (4.35 µg L−1), while nighttime concentrations vary between 1.0 and 8.3 nmol L−1 (0.36 and 3.01 µg L−1) [8]. The standard organic quantity of cortisol changes most often due to stressful situations, which can be physical and/or psychological. After the stressor ceases, cortisol levels return to baseline [9]. Because its concentration fluctuates at different times of the day, the development of methods for the rapid determination of cortisol is essential. Traditionally, the analysis of cortisol content is carried out by means of blood dosage using immunoassays. However, despite being considered a "gold standard", tests such as the electrochemiluminescence assay (ECL), radioimmunoassay, and enzyme-linked immunosorbent assay (ELISA) are considered invasive methodologies due to the need to take blood samples for analysis. Furthermore, these processes use complex extraction/purification systems, carried out in multiple stages, leading to days or weeks of waiting for the final report [5, 8]. Such characteristics are likely to compromise the use of immunoassays, especially when the urgency of diagnosis is a decisive factor in the choice of therapies whose implementation is directly related to the patient's survival [5].
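The unit conversions quoted above follow directly from the molar mass of cortisol (362.46 g mol−1): 1 nmol L−1 corresponds to 362.46 ng L−1. A minimal sketch (the helper name is ours, not from the paper):

```python
# Convert salivary cortisol concentrations from nmol/L to ug/L using the
# molar mass of cortisol (362.46 g/mol) quoted in the text.

CORTISOL_MOLAR_MASS = 362.46  # g/mol

def nmol_per_l_to_ug_per_l(c_nmol: float) -> float:
    """nmol/L x (g/mol) gives ng/L; divide by 1000 to obtain ug/L."""
    return c_nmol * CORTISOL_MOLAR_MASS / 1000.0

# Reproduce the reference values cited from the literature [8]:
print(round(nmol_per_l_to_ug_per_l(12.0), 2))  # morning threshold -> 4.35
print(round(nmol_per_l_to_ug_per_l(1.0), 2))   # nighttime low     -> 0.36
print(round(nmol_per_l_to_ug_per_l(8.3), 2))   # nighttime high    -> 3.01
```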
The relevance of cortisol as a biomarker of great interest in the biomedical area is notorious, and its salivary analysis stands out by virtue of the several advantages previously mentioned. In this context, in response to the demand of the medical sciences for accessible, efficient, real-time analysis tools, infrared absorption spectroscopy emerges as a promising technique for diverse analyses in biofluids, meeting the requirements of a real-time method capable of performing hormone dosing easily, precisely, and non-invasively. In addition, this technique requires minimal or no sample preparation and does not generate chemical residues [5, 10]. Thus, the purpose of the present study is to gain an in-depth understanding of the infrared spectrum of the cortisol molecule. For that, quantum-chemical computational calculations will be performed to determine and assign the main vibrational modes of cortisol. Next, these results will be correlated with the experimental analysis of the analytical standard of cortisol and, finally, it will be


investigated which spectral regions are candidates for vibrational markers to monitor the cortisol concentration in saliva samples. Thus, we expect to contribute to the improvement of analytical techniques of greater usability, speed, and affordability, capable of analyzing hormones in real time.

2 Material and Methods

2.1 Sample Preparation

The cortisol analytical standard was purchased from Sigma-Aldrich (St. Louis, MO, USA). Alpha-amylase, potassium chloride (KCl), calcium chloride dihydrate (CaCl2·2H2O), magnesium chloride hexahydrate (MgCl2·6H2O), methylparaben (C8H8O3), dipotassium phosphate (K2HPO4), monopotassium phosphate (KH2PO4) and sodium carboxymethylcellulose were purchased from Vetec (Rio de Janeiro, RJ, Brazil). Deionized water used in sample preparation was obtained with a Milli-Q purification system (Bedford, MA, USA). In order to obtain the infrared spectrum of the cortisol standard, an aliquot of cortisol (powder) was placed directly on the ATR crystal of the infrared spectrometer. To obtain the infrared spectrum of artificial saliva, 10.0 µL of artificial saliva, prepared according to the methodology described by Arain and collaborators [11], were placed directly on the equipment's ATR crystal. The artificial saliva solution with added cortisol was prepared from the stock saliva solution with cortisol added at 3.62 µg L−1. This concentration was selected from the levels of salivary cortisol reported in the literature [8]. The infrared spectrum of artificial saliva with added cortisol was obtained by placing 10.0 µL of this solution directly on the equipment's ATR crystal.

2.2 Infrared Conditions

Infrared spectra of cortisol (powder), artificial saliva and artificial saliva with added cortisol (liquid) were obtained in attenuated total reflectance (ATR) mode on a PerkinElmer Frontier Single Range FT-IR spectrometer. The analyses covered the spectral range from 4000 to 600 cm−1 with a resolution of 4 cm−1. Each sample was placed directly on the ATR crystal and each spectrum was obtained from 16 accumulations. Between analyses, the ATR crystal was cleaned with ethanol. In order to correct atmospheric interference, a background was acquired with a clean and empty ATR accessory immediately before each analysis.


2.3 Theoretical Calculation

Theoretical vibrational modes for the cortisol molecule (C21H30O5) were obtained with computational methods based on Density Functional Theory (DFT). A pre-optimization of the molecular structure of cortisol was performed using the Avogadro software. This structure was used as input for the calculations in the Gaussian software with the B3LYP functional [12, 13] and the 6-311G basis set [14]. The vibrational modes were assigned from the results of these calculations, which were visualized in the GaussView software and interpreted together with the assignments suggested by the VEDA software.
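The workflow above (Avogadro pre-optimization, then a B3LYP/6-311G optimization-plus-frequency job in Gaussian) corresponds to a Gaussian input of roughly the following shape. This is an illustrative sketch only: the checkpoint filename and everything beyond the method/basis route keywords are our assumptions, not taken from the paper.

```text
%chk=cortisol.chk
#P B3LYP/6-311G Opt Freq

Cortisol C21H30O5 - geometry optimization and harmonic frequencies

0 1
  [Cartesian coordinates exported from the Avogadro pre-optimization]
```

`Opt Freq` requests a geometry optimization followed by a harmonic frequency calculation; the VEDA software then reads the Gaussian output to propose potential-energy-distribution assignments.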

3 Results and Discussion

The number of vibrational modes of a non-linear molecule is 3N − 6, where N represents the number of atoms. As the cortisol molecule has the formula C21H30O5, we have N = 56, i.e., 162 possible vibrational modes. Accordingly, 162 vibrational modes were obtained through the theoretical calculations performed in the Gaussian software. The optimized structure of the cortisol molecule, obtained in the computer simulation performed here, is shown in Fig. 1. In this figure, oxygen atoms are represented in red, while carbon and hydrogen atoms are represented in dark and light gray, respectively. The infrared spectrum of cortisol obtained via DFT is shown in Fig. 2. In this figure, the 162 possible vibrational modes of the molecule are represented as dots. The spectral profile (black line) was obtained by interpolating a Lorentzian fit over the set of all vibrational modes and their respective theoretical intensities. The main active infrared vibrational modes were at 1776, 1744, 1664, 3824, 3688, 3064, 3096, 1440, 1248, 1120,

Fig. 1 Optimized structure of the cortisol molecule

1123

1296, 928 and 888 cm−1, with the stretches of the C(1)O(12) double bond at 1744 cm−1 and of the C(40)O(43) double bond at 1776 cm−1 being the most intense. Therefore, the vibrational band in this region will be the most intense in the infrared spectrum of the cortisol molecule, attributed to C=O double bonds, which produce a large variation in the electric dipole moment. The vibrational assignments can be separated into C=O (double-bond stretches at 1744 and 1776 cm−1), OH (symmetric stretches at 3688 and 3824 cm−1), CH (symmetric and asymmetric stretches close to 3064 and 3096 cm−1), CH2 (angular deformations at 1440 and 1664 cm−1), and rocking and breathing modes at the other wavenumbers mentioned above. It is worth mentioning that the wavenumbers obtained by the computational method refer to the isolated cortisol molecule (a single molecule in vacuum). Band displacements are likely to be observed in experimental measurements due to the matrix effect, i.e., the interaction of a cortisol molecule with neighboring molecules, and also because of the physical form of the compound and interactions with other constituents present in a complex sample. Figure 3 shows the infrared spectrum of cortisol obtained from the analysis of an analytical standard of cortisol, in powder form, on an instrument with an ATR cell. By comparing Figs. 2 and 3, the vibrational modes of the experimental analyses were assigned. This correspondence yielded a satisfactory correlation (R2 > 0.9) between the major vibrational modes found theoretically and experimentally for the cortisol molecule. Figure 4 shows the correlation between the wavenumbers of the most intense vibrational modes. The standard deviation between the values obtained via DFT and the experimental values was 35 cm−1. The identification and understanding of the characteristic vibrational modes of the cortisol molecule allow for the


Fig. 2 Infrared spectrum of the cortisol molecule obtained by computer simulation
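The theoretical profile in Fig. 2 is essentially a sum of Lorentzians centered at the computed modes. A minimal sketch of the 3N − 6 mode count and the broadening, using placeholder stick positions and intensities (not the actual DFT output):

```python
import numpy as np

def n_vibrational_modes(n_atoms: int, linear: bool = False) -> int:
    """3N - 6 modes for a non-linear molecule (3N - 5 if linear)."""
    return 3 * n_atoms - (5 if linear else 6)

# Cortisol C21H30O5: N = 21 + 30 + 5 = 56 atoms -> 162 modes
assert n_vibrational_modes(56) == 162

def lorentzian_profile(x, centers, intensities, gamma=10.0):
    """Sum of Lorentzians of half-width gamma (cm^-1) over all modes."""
    x = np.asarray(x)[:, None]
    return np.sum(intensities * gamma**2 / ((x - centers)**2 + gamma**2),
                  axis=1)

# Placeholder stick spectrum (positions in cm^-1, arbitrary intensities):
centers = np.array([1744.0, 1776.0, 3688.0])
intensities = np.array([1.0, 0.9, 0.4])
grid = np.linspace(600.0, 4000.0, 3401)  # 1 cm^-1 steps
profile = lorentzian_profile(grid, centers, intensities)
print(grid[np.argmax(profile)])  # -> 1744.0, the strongest stick
```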

Fig. 3 ATR-FTIR spectrum of cortisol

identification of traces of this compound in a complex solution, such as a plasma or saliva sample. From this point, our goal becomes to find evidence of the presence of cortisol in saliva samples. For this, the analysis of the saliva spectrum without cortisol is essential. The spectral profiles of cortisol, artificial saliva and artificial saliva with added cortisol are shown in Fig. 5. For the spectrum of pure saliva, the most relevant vibrational

modes are those centered at 3310 and 1638 cm−1. The first region is due to an overlap of different contributions of CH, CH2 and OH groups present in alpha-amylase, carboxymethylcellulose and water. The second region shows an overlap of OH and CO contributions from the aforementioned organic compounds. Despite the strong overlap, there are still spectral bands in the cortisol spectrum that do not overlap with the


Fig. 4 Correlation between DFT and Experimental vibrational modes for cortisol molecule
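The comparison summarized in Fig. 4 reduces to correlating paired wavenumbers and measuring the spread of their differences. A sketch with made-up pairs (the real values come from the DFT and ATR-FTIR analyses, not from this listing):

```python
import numpy as np

# Hypothetical paired band positions (cm^-1); NOT the paper's values.
theory = np.array([1776.0, 1744.0, 3064.0, 1440.0, 1248.0])
experiment = np.array([1714.0, 1706.0, 2912.0, 1448.0, 1240.0])

# Pearson correlation between theoretical and experimental wavenumbers
r = np.corrcoef(theory, experiment)[0, 1]
print(round(r**2, 3))  # R^2 of the theory-vs-experiment correlation

# Spread of the theory-experiment differences (cf. the 35 cm^-1
# standard deviation reported in the text for the real data)
print(round(np.std(theory - experiment, ddof=1), 1))
```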

Fig. 5 ATR-FTIR spectrum of cortisol, artificial saliva and artificial saliva with cortisol addition

saliva spectrum. At least six visually distinguishable wavenumbers stand out as contributions of the cortisol molecule in the artificial saliva sample, namely 2912, 1714, 1706, 1642, 1630 and 1610 cm−1. These regions can be monitored to determine the concentration of cortisol in a saliva sample. Figure 6a, b highlight these spectral bands.

Figure 6a highlights the region of the saliva-with-cortisol spectrum affected by CH vibrations of the tertiary carbons present in the cortisol molecule. Figure 6b shows the influence of the presence of cortisol in the saliva sample through CO and CH2 vibrations, which are abundant in the cortisol molecule compared to the other constituents of artificial saliva.
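Using the intensities at the six highlighted wavenumbers to predict concentration amounts to a multiple linear regression, as later suggested in the conclusions. The sketch below uses purely synthetic training data (the marker list is from the text; everything else is illustrative):

```python
import numpy as np

# Marker wavenumbers highlighted in the text (cm^-1)
MARKERS = [2912, 1714, 1706, 1642, 1630, 1610]

rng = np.random.default_rng(0)

# Synthetic example: absorbance at the 6 markers scales linearly with
# cortisol concentration plus small noise. Real data would be ATR-FTIR.
true_coef = np.array([0.2, 0.5, 0.4, 0.3, 0.25, 0.15])
conc = rng.uniform(0.5, 5.0, size=30)               # ug/L, hypothetical
X = np.outer(conc, true_coef) + rng.normal(0, 0.01, (30, 6))

# Multiple linear regression with intercept: conc ~ X @ beta + b0
A = np.column_stack([X, np.ones(30)])
beta, *_ = np.linalg.lstsq(A, conc, rcond=None)
pred = A @ beta

r = np.corrcoef(conc, pred)[0, 1]
print(r)  # close to 1 for this nearly noiseless synthetic data
```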


Fig. 6 Spectral differences between saliva and saliva with cortisol addition. Panel A: reflectance (%) versus wavenumber (cm−1) in the 3200–2600 cm−1 region, highlighting the band at 2912 cm−1. Panel B: the 1800–1450 cm−1 region, highlighting the bands at 1714, 1706, 1642, 1630 and 1610 cm−1. Curves: saliva, saliva + cortisol, and pure cortisol

4 Conclusions

The infrared spectrum of the cortisol molecule was obtained through computational methods and compared with experimental measurements of the cortisol analytical standard. The main vibrational modes were highlighted and assignments were proposed. Two spectral regions stood out as candidate vibrational markers for quantifying the concentration of cortisol in saliva samples: CH vibrations of the tertiary carbons of the cortisol molecule near 2900 cm−1 and CO and CH2 vibrations in the region around 1640 cm−1 were suggested as discrete wavenumbers to be used in a multiple linear regression for determining cortisol levels in a saliva sample. Finally, this set of vibrational modes can also be used in systems based on infrared LEDs for real-time determination of salivary cortisol without any sample preparation step.

Acknowledgements The authors thank the Brazilian National Council for Scientific and Technological Development (CNPq Project number 437516/2018-0) for financial support.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Baker MJ et al (2016) Developing and understanding biofluid vibrational spectroscopy: a critical review. Chem Soc Rev 45:1803–1818. https://doi.org/10.1039/C5CS00585J

2. Leal LB, Nogueira MS, Canevari RA, Carvalho L (2018) Vibration spectroscopy and body biofluids: literature review for clinical applications. Photodiagn Photodyn Ther 24:237–244. https://doi.org/10.1016/j.pdpdt.2018.09.008
3. Nunes LAS, Brenzikofer R, Macedo DV (2011) Reference intervals for saliva analytes collected by a standardized method in a physically active population. Clin Biochem 44:1440–1444. https://doi.org/10.1016/j.clinbiochem.2011.09.012
4. Papacosta E, Gleeson M, Nassis GP (2013) Salivary hormones, IgA, and performance during intense training and tapering in judo athletes. J Strength Cond Res 27:2569–2580. https://doi.org/10.1519/JSC.0b013e31827fd85c
5. Sankhala D, Muthukumar S, Prasad S (2018) A four-channel electrical impedance spectroscopy module for cortisol biosensing in sweat-based wearable applications. SLAS Technol 23:529–539. https://doi.org/10.1177/2472630318759257
6. Ramamoorthy S, Cidlowski JA (2016) Corticosteroids: mechanisms of action in health and disease. Rheum Dis Clin North Am 42:15–31. https://doi.org/10.1016/j.rdc.2015.08.002
7. Hill EE, Zack E, Battaglini C, Viru M, Viru A, Hackney AC (2008) Exercise and circulating cortisol levels: the intensity threshold effect. J Endocrinol Invest 31:587–591. https://doi.org/10.1007/bf03345606
8. Vogeser M, Durner J, Seliger E, Auernhammer C (2006) Measurement of late-night salivary cortisol with an automated immunoassay system. Clin Chem Lab Med 44:1441–1445. https://doi.org/10.1515/cclm.2006.244
9. Bueno JR, Gouvêa CMCP (2011) Cortisol and exercise: effects, secretion and metabolism. Rev Bras Fisiologia Do Exercício 10:178–180. https://doi.org/10.33233/rbfe.v10i3.3443
10. Ellis DI, Goodacre R (2006) Metabolic fingerprinting in disease diagnosis: biomedical applications of infrared and Raman spectroscopy. Analyst 131:875–885. https://doi.org/10.1039/b602376m
11. Arain SS, Kazi TG, Arain JB, Afridi HI, Brahman KD (2014) Preconcentration of toxic elements in artificial saliva extract of different smokeless tobacco products by dual-cloud point extraction. Microchem J 112:42–49. https://doi.org/10.1016/j.microc.2013.09.005
12. Becke A (1988) Density-functional exchange-energy approximation with correct asymptotic behavior. Phys Rev A 38:3098–3100. https://doi.org/10.1103/PhysRevA.38.3098

13. Lee C, Yang W, Parr R (1988) Development of the Colle-Salvetti correlation-energy formula into a functional of the electron density. Phys Rev B 37:785–789. https://doi.org/10.1103/PhysRevB.37.785

14. Rassolov VA, Ratner MA, Pople JA, Redfern PC, Curtiss LA (2001) 6-31G basis set for third-row atoms. J Comput Chem 22:976–984. https://doi.org/10.1002/jcc.1058

Differential Diagnosis of Glycosuria Using Raman Spectroscopy E. E. Sousa Vieira, L. Silveira Junior, and A. Barrinha Fernandes

Abstract

The aim of this research was to detect spectral differences in glycemic components. Urine samples were collected from 40 patients divided into a control group and a diabetic and hypertensive group. The samples were obtained in the morning, fasting, and stored in a freezer at −80 °C until spectral analysis. Spectral data collection was performed using a dispersive Raman spectrometer (Dimension P-1 model, Lambda Solutions, Inc., MA, USA). The equipment uses a stabilized multimode diode laser operating at 830 nm with about 300 mW power output, and the integration time to collect the Raman signal was set to 5 s. Glucose peaks were identified at 516 and 1127 cm−1 in the mean Raman spectra of the urine of patients in the study groups (CT and DM & HBP). Comparative analysis of the mean urine spectra, using Student's t-test, showed a significant difference (p < 0.05) between the groups. The comparative analysis of the peak intensities at 516 and 1127 cm−1 in the urine of the control and the diabetic and hypertensive patients revealed higher intensities in the DM & HBP group than in the CT group, however with no significant difference (p > 0.05). To quantify the glucose in urine and discriminate the groups, a model was developed to estimate the concentration using a quantitative regression model based on partial least squares (PLS). According to the data obtained, there was an excellent correlation (r = 0.98) between the concentrations estimated by the model and the concentrations determined by colorimetric analysis. Discriminant analysis (DA) based on the PLS regression model proved promising, as it discriminated the control group without errors, while the correct classification rate in the DM & HBP group was 89.1%. Raman spectroscopy can be a potentially useful tool for testing glucose in urine.

E. E. Sousa Vieira (corresponding author) · L. Silveira Junior · A. Barrinha Fernandes Universidade Anhembi Morumbi (UAM), São Paulo, Brazil; L. Silveira Junior · A. Barrinha Fernandes Centro de Inovação, Tecnologia e Educação (CITÉ), São José dos Campos, Brazil; E. E. Sousa Vieira Centro Universitário da Amazônia (UNAMA), Santarém, Brazil

Keywords

Glucose · Diabetes mellitus · Glycosuria · Urine · Raman spectroscopy

1 Introduction

Diabetes mellitus is a complex metabolic condition, and its classification and diagnosis have been the object of intense research for decades. It is categorized into four types: type 1 and type 2 diabetes (the most common), hyperglycemia in pregnancy (including gestational diabetes), and diabetes with a specific etiology due to genetics, secondary to drugs, pancreatic factors, or other diseases. The International Diabetes Federation (IDF) ranked Brazil as the third country with the highest incidence of new cases of diabetes, behind only the USA and India [1]. The diagnosis of diabetes is based on the concentration of glucose in blood and urine. The traditional invasive methods for blood collection, such as the colorimetric biochemical method [2], have been replaced by non-invasive strategies such as the use of photonics [3–5]. Since 1841, urine has been used as a diagnostic fluid for diabetes and has been extensively studied, as it can be collected easily and non-invasively. Urine is composed of metabolites, such as glucose, proteins, and nitrates, in addition to dissolved salts, such as sodium and potassium. Glucose can be found in urine when it is excreted from blood at elevated levels and, as a result, this fluid has been used for the diagnosis of diabetes [6].

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_169


Glycosuria is the presence of glucose in urine. It can be measured using reagent strips that provide a semiquantitative measurement of glucose in urine, which is easy to perform at low cost. Glycosuria becomes positive when the serum glucose concentration exceeds 180 mg/dl in patients with normal renal function; higher thresholds are found in patients with diabetic nephropathy. Several interferences may occur when measuring glucose in urine, such as urine volume. Despite these limitations, the measurement of glycosuria can be used both for the diagnosis and monitoring of glucose and for patients on insulin treatment who are unable to measure capillary glucose before meals and at bedtime [7]. The concentration of glucose in urine is also an important indicator for many other diseases, which makes the development of non-invasive or minimally invasive methods for frequent monitoring of glucose in urine imperative [8]. Raman spectroscopy is the study of the spectrum obtained from the interaction of electromagnetic radiation with matter. One of its main objectives is to determine the energy levels of atoms or molecules. This technique uses electromagnetic radiation to probe the vibrational behavior of molecules by observing the absorption or scattering of radiation. It has been used successfully in the diagnosis of pathologies in biological materials through the biochemical characterization of the sample. The Raman spectrum provides chemical and structural information on organic and inorganic compounds, thus allowing their identification in a few seconds, with little or no need for sample preparation [9]. Raman spectroscopy (RS) can be performed in real time on in vivo materials to monitor analytes in biological fluids, including the non-invasive analysis of urine, dispensing with the need for chemicals and reagents.
In addition, as an optical technique, it offers more benefits than traditional biochemical techniques due to its molecular specificity and capacity for quantitative analysis [10]. It may be used for in vitro and in vivo urine tests, offering early diagnosis of diseases with the benefit of rapid analysis of individual samples [11]. The main advantages of RS over other techniques, such as mass spectrometry and chromatography, are that the analysis can be performed in real time, at lower cost, without tissue extraction or the use of dyes and other contrast agents [12]. In addition to the above-mentioned aspects, techniques based on the Raman effect require minimal or no sample preparation, produce few artifacts, may be an alternative to techniques that require extensive preparation, provide non-destructive analysis of the sample [13], and allow multiple biochemical components to be analyzed in a single spectral collection. The research conducted in [5] aimed to develop a model for quantifying biomarkers such as creatinine, urea and


glucose. By using selected peaks of these compounds, it was possible to obtain quantitative information on important urinary biomarkers for the assessment of renal failure in patients with diabetes and hypertension. The information obtained could be correlated with clinical criteria for the diagnosis of chronic kidney disease. Based on this information, the glucose peaks in the urine of healthy volunteers (CT) and diabetic and hypertensive patients (DM & HBP) were identified in the present study. The use of RS allowed glucose in urine to be quantified, analyzed, and monitored in real time, and significant differences in the mean spectra of the studied groups to be verified.

2 Materials and Methods

This study was approved by the Research Ethics Committee of Universidade Anhembi Morumbi (report No. 2,717,746), in accordance with the guidelines and regulatory standards for research involving human beings (resolution No. 466/12). A total of 40 patients (24 women and 16 men) were recruited and divided as follows: 20 normoglycemic and normotensive volunteers in a control group (CT) and 20 diabetic and hypertensive individuals (DM & HBP). Urine samples were collected in a sterile vial provided by the team. The volunteers were instructed to collect midstream urine during spontaneous urination, after prior hygiene of the external genitalia; that is, the first jet was discarded and the remaining urine deposited into the sterile vial. The biological samples were obtained in the morning on an empty stomach, stored in 2.0 ml cryogenic tubes and kept frozen at −80 °C until spectral analysis. The samples of the normoglycemic and normotensive volunteers (CT) and of the diabetic and hypertensive patients (DM & HBP) were obtained from the Hiperdia Group organized by the Municipal Secretary of Health (Santarém, Pará). The collection of spectral data was performed using a dispersive Raman spectrometer (Dimension P-1 model, Lambda Solutions, Inc., MA, USA). The equipment uses a stabilized multimode diode laser operating at 830 nm with about 300 mW power output, and the integration time to collect the Raman signal was set to 5 s. A Raman fiber-optic probe was used to deliver radiation to the sample and collect the signal emitted by the sample. The urine samples were placed in an aluminum sample holder with 4-mm diameter holes holding approximately 80 µL in volume. The Raman probe was placed at a distance of 10 mm from the sample holder. Thus, the spectral measurements of the urine samples were acquired via optical fiber with a reproducible excitation and collection geometry. Therefore, it was possible to study the spectral differences related to the


differences in the biochemical constitution of the urine from different individuals. Six spectra were collected randomly from each sample, totaling 232 valid spectra for the 40 samples after exclusions (Fig. 1). The collected Raman spectra were processed in the following order: manual removal of cosmic-ray spikes; removal of the background fluorescence by fitting and subtracting a 5th-order polynomial, providing baseline correction; and normalization by the spectrum of water. The mean normalized urine spectra of the studied groups were compared and statistical analysis was performed using Student's t-test (p < 0.05) in the GraphPad Instat® software, version 3.0. The algorithm used for multivariate data analysis was based on partial least squares (PLS) [14], employing the spectra obtained for each sample and the urinary glucose concentrations obtained by colorimetry. The Chemoface routine (https://www.ufla.br/chemoface) [15] was used to discriminate between CT and DM & HBP (PLS-DA) and to predict the glucose concentration; the leave-one-out cross-validation method was used.
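The preprocessing chain described above (polynomial baseline subtraction followed by normalization) can be sketched as follows. Function names and the synthetic demo spectrum are ours; the normalization here divides by the maximum of a reference spectrum, a simplification of the paper's normalization by the water spectrum.

```python
import numpy as np

def subtract_polynomial_baseline(wavenumbers, spectrum, order=5):
    """Fit a polynomial of the given order to the raw spectrum and
    subtract it, approximating broad fluorescence background removal.
    Polynomial.fit rescales the x-domain, keeping the fit stable."""
    poly = np.polynomial.Polynomial.fit(wavenumbers, spectrum, order)
    return spectrum - poly(wavenumbers)

def normalize_by_reference(spectrum, reference):
    """Scale a baseline-corrected spectrum by a reference spectrum
    (simplified stand-in for normalization by the water spectrum)."""
    return spectrum / np.max(reference)

# Synthetic demo: a Raman band riding on a smooth background
wn = np.linspace(400, 1800, 700)
background = 1e-6 * (wn - 400) ** 2          # smooth "fluorescence"
peak = np.exp(-((wn - 1127) / 8.0) ** 2)     # band near 1127 cm^-1
raw = background + peak

corrected = subtract_polynomial_baseline(wn, raw)
water = 0.5 + 0.001 * wn                     # placeholder reference
final = normalize_by_reference(corrected, water)
print(wn[np.argmax(final)])  # the band position survives correction
```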

3 Results

Raman spectra of urine samples and identification of glucose peaks

Figure 2 shows the mean Raman spectra (displaced for clarity) of the urine of patients in the study groups: Control and DM & HBP. The arrows in Fig. 2 highlight the glucose peaks at 516 and 1127 cm−1 in urine. According to the scientific literature, peaks at 527 and 1129 cm−1 can be attributed in part to glucose overlapping protein bands [16–20]. Student's t-test applied to the mean Raman spectra of the CT and DM & HBP groups showed a significant difference (p < 0.05) between them.

Fig. 1 Illustrative diagram of the number of spectra included in and excluded from the study: CT group (n = 20), 6 spectra collected randomly from each sample (n = 120), 7 spectra excluded (n = 113); DM & HBP group (n = 20), 6 spectra collected randomly from each sample (n = 120), 1 spectrum excluded (n = 119); total number of spectra n = 232


A comparative analysis of the peak intensities at 516 and 1127 cm−1 was also performed in the urine of control patients and hypertensive diabetics. The data in Fig. 3 show that the peak intensities at 516 and 1127 cm−1 were greater in the DM & HBP group than in the CT group; however, the difference in peak intensity was not significant (p > 0.05).

Quantitative model based on partial least squares (PLS) regression analysis

To quantify glucose in urine and discriminate the groups, a quantitative regression model based on partial least squares (PLS) was developed to estimate the concentration (Fig. 4). The leave-one-out cross-validation method was used: one sample is removed, the model (discriminant or quantitative) is built on the remaining samples and then tested on the sample left out, yielding the predicted group or concentration for that sample. This process of model building and testing is repeated for all samples. Table 1 presents the parameters of the PLS-based regression model obtained with the Chemoface routine. According to the data obtained, there was an excellent correlation (r = 0.98) between the concentrations estimated by the model and the concentrations determined by colorimetric analysis. Table 2 shows the number of spectra classified correctly by the PLS discrimination model in comparison with the clinical classification of patients in the Control and DM & HBP groups. The model correctly classified 100% of the Control group and 89.1% of the DM & HBP group.
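The leave-one-out procedure described above is independent of the particular model; a minimal sketch, with an ordinary least-squares fit standing in for the PLS regression (the `fit`/`predict` callables are illustrative placeholders, not the Chemoface implementation):

```python
import numpy as np

def loo_predictions(X, y, fit, predict):
    """Leave-one-out cross-validation: hold each sample out once,
    rebuild the model on the rest and predict the held-out sample."""
    preds = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i        # every sample except the i-th
        model = fit(X[mask], y[mask])        # model built without sample i
        preds[i] = predict(model, X[i])      # tested on the left-out sample
    return preds

def rmsecv(y_true, y_pred):
    """Root-mean-square error of cross-validation (RMSEcv)."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Stand-in linear model: ordinary least squares instead of PLS
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda w, x: float(x @ w)

X = np.column_stack([np.ones(10), np.arange(10.0)])  # intercept + one feature
y = 3.0 * np.arange(10.0) + 2.0                      # exactly linear data
print(rmsecv(y, loo_predictions(X, y, fit, predict)))  # ~0 for a perfect fit
```

With real spectra, `X` would hold the preprocessed spectra (one row per sample) and `y` the colorimetric glucose concentrations.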

4 Discussion

In this research, we investigated the feasibility of RS to identify and quantify glucose in urine based on two study groups: CT and DM & HBP patients. The mean Raman spectra of the urine of patients in the study groups (Fig. 2) showed glucose peaks at 516 and 1127 cm−1. These data are in agreement with [11], who highlighted that the peak at 1128 cm−1 in urine was one of the biomarkers for diabetic and hypertensive patients. Their study pointed out that the Raman technique is a fast and reliable method for the qualitative assessment of urine in patients with diabetes and hypertension, and that it can be useful to diagnose complications associated with these diseases. A study reporting the successful use of Raman spectroscopy for non-invasive (transcutaneous) quantification of blood analytes, using glucose as an example, was conducted by Enejder and colleagues [21]. The authors used a standard glucose tolerance test protocol for the transcutaneous measurement of glucose in 17 healthy individuals


E. E. Sousa Vieira et al.

Fig. 2 Mean spectra of urine of the control group in comparison with the DM & HBP group (intensity in arb. un. versus Raman shift, 400–1800 cm−1); arrows mark the glucose peaks at 516 and 1127 cm−1

Fig. 3 Mean peak intensity (arb. un.) at 516 and 1127 cm−1 in urine of the CT and DM & HBP groups

Fig. 4 Estimated concentration of glucose (mg/dL) in urine according to the PLS regression analysis versus the concentration (mg/dL) determined by spectrophotometry (colorimetric analysis). RMSEcv: root-mean-square error of cross-validation

Table 1 Parameters of the PLS-based regression model obtained by the Chemoface routine

| No. of latent variables | RMSEcv (mg/dL) | R2cv | rcv  | Error (%) | Accuracy (%) |
| 8                       | 92             | 0.97 | 0.98 | 5.6       | 94.4         |

Table 2 Results of the discriminant analysis (DA) based on the PLS regression model, using 8 latent variables (LVs)

| Classification according to clinical criteria | Predicted by PLS-DA: Successes | Unsuccesses | Correct classification (%) |
| Control (n = 113)                             | 113                            | 0           | 100                        |
| DM & HBP (n = 119)                            | 106                            | 13          | 89.1                       |

whose glucose levels in the blood were elevated over a period of 2–3 h. The mean absolute errors for each individual were 7.8 ± 1.8% (mean ± standard deviation), with R2 values of 0.83 ± 0.10. The difference in our study is that we carried out the analysis in a control group and a group with pathologies, expanding the possibilities of applying the technique. In our PLS regression analysis, the model adopted yielded an R2 value of 0.97 (Table 1). Figure 3 shows that, although the peak intensities were higher in the DM & HBP group, the differences between the groups were not significant, probably due to the high standard deviation. These data are in agreement with [5], who evaluated the peak intensity at 1128 cm−1 in the urine of diabetic patients. One justification for this would be that not all patients maintain adequate glycemic control, even when receiving the treatment prescribed by the medical team. Another possibility is related to the number of samples; we intend to conduct further research with a larger number of patients for a more robust analysis. Thus, a model based on partial least squares (PLS) regression was applied to correlate the urinary glucose concentrations obtained by Raman spectroscopy with the biochemical concentrations evaluated by the colorimetric method, taken as the true concentrations of the samples. Table 2 shows that the glucose discrimination capacity was 100% for patients in the CT group and 89.1% in the DM & HBP group, which suggests that Raman spectroscopy may be an efficient, low-cost technique for rapid population screening of glucose. In some countries, diagnostic tests for glucose in urine have been carried out using urinary biochemical analysis. In the work conducted by [22], a screening test using urinary glucose detected asymptomatic DM, with 97.5% of the detected cases classified as DM2.
The technique has been described as a powerful new diagnostic tool compared with routine biochemical tests, as it has many advantages: rapid analysis, little or no sample preparation, use of small sample volumes, detection of a wide variety of parameters in a single spectrum and no need for reagents. These advantages, including rapid processing and precision in the results, make Raman spectroscopy well suited to monitoring and controlling glycemic changes in patients. The data presented in the present study show that RS may be a useful tool in the future for population screening, in order to identify asymptomatic patients who are without

adequate treatment and to monitor patients being treated. In addition, the proposed model was able to discriminate healthy patients from diabetic and hypertensive patients with 100% success.

5 Conclusions

In the present study, we evaluated the performance of Raman spectroscopy as a screening tool for testing glucose in urine. The comparative analysis of the mean urine spectra showed a significant difference between the groups studied. The comparative analysis of the peak intensities at 516 and 1127 cm−1 in the urine of control and hypertensive diabetic patients showed that they were greater in the DM & HBP group than in the CT group, although without a significant difference. The PLS method discriminated 100% of the patients in the control group, and the R2 value was 0.97. The discriminant analysis (DA) based on the PLS regression model proved promising, as it discriminated the control group without errors, and the hit rate in the DM & HBP group was 89.1%. The prediction errors indicate that Raman spectroscopy is a promising technique that can be used to quantify glucose in urine, complementing or replacing conventional biochemical analysis techniques, particularly for population screening.

Acknowledgements

L. Silveira Jr. thanks FAPESP (São Paulo Research Foundation, Brazil) for the grant to obtain the Raman spectrometer (Grant No. 2009/01788-5) and CNPq (Brazilian National Council for Scientific and Technological Development) for the productivity fellowship (Process No. 306344/2017-3). Elzo E. S. Vieira thanks the Coordination for the Improvement of Higher Education Personnel (CAPES) for the scholarship and the Municipal Secretary of Health of Santarém for permission to conduct the research.

References

1. Forouhi NG, Wareham NJ (2018) Epidemiology of diabetes. Medicine
2. Mendel B, Kemp A, Myers D et al (1954) A colorimetric micro-method for the determination of glucose. Biochem J 56:639
3. Luppa PB, Vashist SK, Luong JH et al (2018) Non-invasive analysis. Point-of-care testing. Springer, Berlin
4. Novikov I (2018) Noninvasive determination of blood glucose concentration by comparing the eardrum and head skin temperatures. Biomed Eng 51

5. De Souza VEE, Bispo JAM, Silveira L, Fernandes AB et al (2017) Discrimination model applied to urinalysis of patients with diabetes and hypertension aiming at diagnosis of chronic kidney disease by Raman spectroscopy. Lasers Med Sci 32:1605–1613
6. Bruen D, Delaney C, Floresa L, Diamond D et al (2017) Glucose sensing for diabetes monitoring: recent developments. Sensors 17(8):1866. https://doi.org/10.3390/s17081866
7. Gross JL et al (2002) Diabetes Melito: Diagnóstico, Classificação e Avaliação do Controle Glicêmico. Arquivo Brasileiro de Endocrinologia e Metabolismo [online]
8. Guerrero JL, Flores M, Aaron M, Jimenez FN, Fuente RP, Rasgado ET, Vivanco GR, Viveros NG, Ramos JC et al (2020) Novel assessment of urinary albumin excretion in type 2 diabetes patients by Raman spectroscopy. Diagnostics. https://doi.org/10.3390/diagnostics10030141
9. Shapiro A, Gofrit ON, Pizov G et al (2011) Raman molecular imaging: a novel spectroscopic technique for diagnosis of bladder cancer in urine specimens. Eur Urol 59:106–112. https://doi.org/10.1016/j.eururo.2010.10.027
10. Hanlon EB, Manoharan R, Koo TW, Shafer KE, Motz JT, Fitzmaurice M et al (2000) Prospects for in vivo Raman spectroscopy. Phys Med Biol 45:R1
11. Bispo JAM, Vieira EES, Silveira JRL et al (2013) Correlating the amount of urea, creatinine, and glucose in urine from patients with diabetes mellitus and hypertension with the risk of developing renal lesions by means of Raman spectroscopy and principal component analysis. J Biomed Opt 18(8):087004. https://doi.org/10.1117/1.JBO.18.8.087004
12. Guimarães AE, Pacheco MTT, Silveira Jr L, Barsottini D, Duarte J, Villaverde AB et al (2006) Near infrared Raman spectroscopy (NIRS): a technique for doping control. Spectroscopy Int J 20(4):185–194
13. Beattie JR, Brockbank S, McGarvey JJ, Curry WJ et al (2007) Raman microscopy of porcine inner retinal layers from the area centralis. Mol Vis 13:1106–1113

14. Goitz MJ, Cote GL, Erckens R, March W, Motamed M et al (1995) Application of a multivariate technique to Raman spectra for quantification of body chemicals. IEEE Trans Biomed Eng 42:728–731. https://doi.org/10.1109/10.391172
15. Nunes CA, Freitas MP, Pinheiro ACM, Bastos SC et al (2012) Chemoface: a novel free user-friendly interface for chemometrics. J Braz Chem Soc 23. https://doi.org/10.1590/S0103-50532012005000073
16. Barman I, Dingari NC, Kang JW, Horowitz GL, Dasari RR, Feld MS et al (2012) Raman spectroscopy-based sensitive and specific detection of glycated hemoglobin. Anal Chem 84(5):2474–2482. https://doi.org/10.1021/ac203266a. PMid:22324826
17. Berger AJ, Itzkan I, Feld MS et al (1997) Feasibility of measuring blood glucose concentration by near-infrared Raman spectroscopy. Spectrochim Acta A Mol Biomol Spectrosc 53A(2):287–292. PMid:9097902
18. Dingari NC, Barman I, Singh GP, Kang JW, Dasari RR, Feld MS (2011) Investigation of the specificity of Raman spectroscopy in non-invasive blood glucose measurements. Anal Bioanal Chem 400(9):2871–2880
19. Saade J, Pacheco MTT, Rodrigues MR, Silveira L Jr et al (2008) Identification of hepatitis C in human blood serum by near-infrared Raman spectroscopy. Spectroscopy Int J 22(5):387–395. https://doi.org/10.1155/2008/419783
20. Shao J, Lin M, Li Y, Li X, Liu J, Liang J, Yao H et al (2012) In vivo blood glucose quantification using Raman spectroscopy. PLoS One 7(10):e48127. https://doi.org/10.1371/journal.pone.0048127. PMid:23133555
21. Enejder AM, Scecina TG, Oh J, Hunter M, Shih W, Sasic S, Horowitz GL, Feld MS et al (2005) Raman spectroscopy for noninvasive glucose measurements. J Biomed Opt 10:031114
22. Kim SM, Lee DY (2017) Urinary glucose screening for early detection of asymptomatic type 2 diabetes in Jeonbuk Province Korean schoolchildren. Korean Med

Development of a Moderate Therapeutic Hypothermia Induction and Maintenance System for the Treatment of Traumatic Brain Injury Reynaldo Tronco Gasparini, Antonio Luis Eiras Falcão, and José Antonio Siqueira Dias

Abstract

Therapeutic Hypothermia is a technique that involves intentional and controlled, systemic or selective body cooling for the treatment of conditions such as traumatic brain injury and stroke, and after cardiopulmonary resuscitation. Decreasing body temperature to below normal levels is a proven clinical intervention; it has been used in operating rooms since the 1950s and, in the early twenty-first century, gained recognition as a neuroprotective agent. Several cooling techniques are currently available, invasive or non-invasive, each with its advantages and disadvantages. The ideal method would be one capable of rapidly inducing hypothermia without risk of overcooling, maintaining the target temperature during the maintenance phase with minimal oscillation, providing controlled and slow rewarming, and being minimally invasive. Each case should be evaluated individually to define the best cooling method. This paper presents new low-cost, safe and easy-to-operate equipment for the induction and maintenance of moderate Therapeutic Hypothermia. The system cools the blood directly through extracorporeal circulation, using the same blood lines as hemodialysis procedures, which are fitted into heat exchangers so that blood cooling occurs within the circuit itself, without contact with any other equipment. The central functions of this equipment are to decrease the patient's systemic temperature to reach the pre-set hypothermia target temperature as rapidly as possible (induction phase), to control the hypothermic temperature precisely with minimal fluctuation (maintenance phase) and to rewarm the patient slowly and in a controlled way until normothermia (rewarming phase).

Keywords

Therapeutic hypothermia · Extracorporeal circulation cooling · Body temperature control · Stroke · Thermoelectric

R. T. Gasparini (corresponding author) · J. A. Siqueira Dias
Department of Semiconductors, Instruments and Photonics, UNICAMP, Campinas, Brazil

A. L. E. Falcão
College of Medical Sciences, UNICAMP, Campinas, Brazil

1 Introduction

Central body temperature is a precisely controlled physiological parameter. Human beings are homeotherms and have a thermoregulatory system that allows only small variations, from 0.2 to 0.4 °C around 37 °C, for the maintenance of metabolic functions. This precise control results from aggressive thermoregulatory responses, which are triggered by small deviations from normal physiological temperature [1, 2]. The most important physiological process is the regulation of blood flow to the skin: when the internal temperature rises past a certain point, more blood is directed to the skin. Vasodilation can increase the skin's blood flow 15-fold in extreme heat, transporting internal heat to the skin and then transferring it to the environment. When the body temperature drops below a certain point, the blood flow to the skin is reduced to conserve heat and muscle tension increases to generate additional heat, causing visible shivering [3]. Therapeutic Hypothermia (TH) involves the controlled reduction of body temperature to values below normal, in an attempt to protect an organ at risk of injury. It is a technique with proven effectiveness in clinical conditions such as stroke, TBI (Traumatic Brain Injury), spinal cord injury and in patients after CPR (Cardiopulmonary Resuscitation). Several studies have shown that hypothermia triggers a series of beneficial effects, such as limiting the degree of ischemic damage, decreasing infarct volume, extending the time required for damage to occur ("time window") and limiting reperfusion injury, among others [4–6]. Moderate TH in the treatment of TBI aims to attenuate secondary lesions, i.e. early pathological events including

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_170




intracranial hypertension, cerebral hypoxia/ischemia, energy dysfunction, non-convulsive seizures and systemic insults that can occur immediately after the primary brain injury and may worsen the patient outcome [7]. Hypothermia can be induced by external cooling methods (non-invasive) or by intravascular devices or extracorporeal circulation (invasive). External cooling methods include: (a) cooling blankets; (b) application of ice packs to the groin, armpits and neck; (c) use of wet towels and ventilation; (d) use of special beds and mattresses; (e) cooling helmets; (f) immersion in cold water [8–11]. The advantages of these methods are that they do not require advanced equipment and are inexpensive, but they require hours to reach the target temperature, increasing the risk of complications. Invasive techniques include the infusion of ice-cold fluids (4 °C), such as Ringer's lactate or saline solutions [12, 13], single carotid perfusion with cooled extracorporeal blood, and nasal, peritoneal, pleural and nasogastric lavage with cold water [8, 14]. Catheters covered with antithrombotic agents can be inserted into the venous system; cold or warm water circulates inside the catheter, producing a heat exchange between it and the blood and allowing faster cooling towards the target body temperature [9, 10]. Extracorporeal blood heat exchange can provide an even more efficient cooling system than catheters: blood is pumped out of the patient, cooled in an extracorporeal circuit and then pumped back into the venous system [9]. The aim of this work was to develop safe, low-cost and easy-to-operate equipment for inducing and maintaining moderate systemic Therapeutic Hypothermia, mainly for the treatment of patients with stroke, TBI or after CPR.

2 Materials and Methods

In this work, new equipment was developed to induce and maintain moderate systemic hypothermia by applying direct cooling of the blood through extracorporeal circulation [15].

2.1 Operation

Figure 1 illustrates the operation of the equipment. The blood is pumped out of the patient by a peristaltic pump (3), conducted through disposable arterial and venous blood lines (the same ones used in hemodialysis procedures) (12), cooled inside the extracorporeal circuit and then returned to the venous system.

Fig. 1 Extracorporeal circulation system for Therapeutic Hypothermia

The vascular access can be accomplished with two large venous catheters (7) or a double-lumen catheter in larger-caliber veins, such as the femoral, subclavian or internal jugular. The arterial and venous blood lines fit tightly inside the machined grooves of an aluminum plate (4) and a copper plate (5). Thus, the heat exchange and blood cooling occur inside the lines themselves, and the blood does not need to be moved elsewhere to be cooled, reducing the risk of contamination and of air entering the system thanks to the reduced number of connections: two between the lines and the intravascular catheters (6) and one between the arterial and venous lines themselves (10). The venous line has a bubble chamber (11) to prevent air in the blood returning to the patient. The copper and aluminum plates are placed on Peltier thermoelectric coolers, which are capable of modifying the blood temperature dynamically and in a controlled way. The system has 18 Peltier modules in total, arranged in 9 stacks of 2 modules each, electrically connected in series and mechanically positioned in parallel, one on top of the other. Of the 9 stacks, 6 are under the copper plate (Fig. 2), where about 130 cm of the venous blood line is attached, and 3 are under the aluminum plate (Fig. 3), where about 100 cm of the arterial line is attached. Once the blood lines are attached, thick copper and aluminum plates, covered on top by an elastomeric foam, are placed on the respective plates containing the lines to improve the heat exchange. In turn, the Peltier stacks are individually positioned on aluminum heat sinks connected to fans that assist the dissipation of the heat generated by the hot side of the modules, helping to cool the cold side. The greater the current through the Peltier modules, the greater the temperature difference between the Peltier plates (and vice versa). When the hypothermia function is started, the pump is switched on by a relay. The pump speed is controlled manually by a potentiometer, and the blood flow can be set between 150 and 1000 ml/min. An electronic circuit (2) activates the peristaltic pump (8) and controls the temperature of the aluminum and copper plates (9) by regulating the current through the Peltier modules, according to the temperature acquired by the developed temperature sensors (1).

Fig. 2 Cooling structure with Peltier modules and copper plate

Fig. 3 Cooling structure with Peltier modules and aluminum plate

2.2 System Hardware

The core of the main circuit board is a 16-bit microcontroller manufactured by Texas Instruments©, a low-cost, low-power device with general-purpose input/output ports suited to the project requirements. Data acquisition from the 3 temperature sensors was performed by independent 24-bit sigma-delta A/D converters. The microcontroller timer was configured, via software, as a PWM that drives the 9 H-bridges (one for each Peltier stack), controlling the current through the Peltier modules. The power supply section works at 12 V, powering the H-bridges, the heat-sink fans and the peristaltic pump. This 12 V supply is electronically isolated from the 5 V and 3.3 V digital supplies by a TCMT4100 optocoupler, isolating the noise generated by the switching of the H-bridges from the analog circuits and the A/D converters. MC78L05 and MCP1700-3302 voltage regulators were used for 5 V and 3.3 V, respectively. A 30 A switched-mode power supply feeds the equipment, providing sufficient current for the operation of all the Peltier modules and enabling a future expansion of the system. The time elapsed in the cooling (or rewarming) process, the temperatures acquired by the 3 sensors and the defined target temperature are shown on the LCD display.

2.3 Developed Temperature Sensors

The temperature sensors acquire the tympanic temperature, which faithfully and non-invasively represents the central body temperature. The sensor circuit was designed with a PMP4201G, a REF200 and an AD8500. The PMP4201G comprises two NPN BJTs (bipolar junction transistors) with interconnected emitters in an SOT package. The circuit was covered with a non-toxic, electrically insulating and thermally conductive resin. The REF200 was used as a 100 µA current source. The temperature sensor is connected to the main circuit board by a coaxial cable and a BNC connector. The AD8500 is an operational amplifier configured as a buffer, feeding the signal from the sensors into the microcontroller A/D converter. One temperature sensor is placed in each of the patient's ears. For safety, the software uses the higher of the two acquired temperatures: if one of the sensors is accidentally displaced and fails to measure the temperature, the other keeps measuring the body temperature properly.


The developed sensors were calibrated against an Incoterm LTDA thermometer, manufactured by Uebe Medical GmbH (Germany) and certified by Inmetro (Brazilian Institute of Metrology, Quality and Technology). The maximum error observed between the thermometer and the 3 temperature sensors was 0.07 °C.


2.4 Control Software

The control software developed for the MSP430F6736 microcontroller features the following functionalities:

• Reading of the analog sensors (150 samples per cycle, averaged), conversion to digital values and then to temperature values in °C;
• Setting of the hypothermia target temperature (between 30 and 35 °C);
• Selection of the higher of the two tympanic temperature readings to control the H-bridges;
• A PWM, configured via the microcontroller timer, drives the H-bridges, which control the current through the Peltier modules to increase or decrease the cooling or heating in the heat exchangers;
• Activation of the peristaltic pump and PWM control when the hypothermia function is started. The PWM duty cycle is kept at maximum from normothermia (body temperature TB = 36.5 °C) until 0.06 °C above the target temperature (TT) of hypothermia (TB > TT + 0.06) (induction phase);
• PWM control by a PI (Proportional Integral) controller while the body temperature stays between 0.06 °C above and 0.03 °C below the target temperature (TT − 0.03 <= TB <= TT + 0.06) (maintenance phase), keeping the body temperature at the hypothermia target value with minimal fluctuation;
• Reversal of the current direction through the Peltier modules if the body temperature drops more than 0.03 °C below the hypothermia target temperature (TB < TT − 0.03). In this condition, the PWM duty cycle is kept at 25% until the body temperature reaches the target temperature again (TB = TT), when the current direction is reversed once more and the PWM returns to PI control;
• Slow and controlled rewarming of the body (rewarming phase), raising the target temperature by 0.2 °C per hour up to normothermia (36.5 °C). In this phase, the PWM is also controlled by the PI controller;
• Update of the PI controller error and output every 5 s;
• Display on the LCD of the temperatures from the 3 sensors, the hypothermia target temperature, the current process (hypothermia or rewarming) and the elapsed time;
• System shutdown (Peltier modules and peristaltic pump) when, in the rewarming phase, the patient reaches normothermia (36.5 °C);
• Serial communication to log the temperature sensor and PWM values during the process via LabView®.
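The duty-cycle selection logic implied by the thresholds above can be sketched as a small state machine (Python pseudocode of the firmware behavior; the names and the returned current-direction convention are our own, and `pi_output` stands for the PI controller's duty-cycle output):

```python
def pwm_command(tb, tt, pi_output, reversing):
    """Return (duty_cycle, current_direction, reversing) for body
    temperature tb and target tt; +1 = cooling, -1 = reversed (heating)."""
    if reversing:
        if tb >= tt:                     # back at the target: resume cooling
            return pi_output, +1, False  # PI control takes over again
        return 0.25, -1, True            # keep heating at 25% duty
    if tb > tt + 0.06:
        return 1.0, +1, False            # induction phase: maximum duty
    if tb < tt - 0.03:
        return 0.25, -1, True            # overshoot: reverse the current
    return pi_output, +1, False          # maintenance phase: PI-controlled

# From normothermia toward a 32 °C target:
print(pwm_command(36.5, 32.0, 0.40, False))   # (1.0, 1, False): induction
print(pwm_command(32.02, 32.0, 0.40, False))  # (0.4, 1, False): maintenance
print(pwm_command(31.95, 32.0, 0.40, False))  # (0.25, -1, True): heat back up
```

The actual firmware runs this decision inside the 5 s PI update loop; this sketch only captures the region boundaries.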

3 Results

According to ASHRAE®, the basal metabolic level is defined as the metabolism of a resting adult in quiet conditions, which produces about 100 W of heat. Because most of this heat is transferred to the environment through the skin, it is often convenient to characterize metabolic activity in terms of heat production per unit area of skin. A unit called the "met" was defined in terms of basal metabolism as 1 met = 58.15 W/m² of skin surface. This is based on the average male European, with a skin surface area of about 1.8 m²; for the average female, the area is 1.6 m². A sleeping person has a metabolic rate of 0.7 met and a person lying awake, 0.8 met [3]. Considering that a patient submitted to TH is lying down, and unconscious in most cases, a metabolic rate of 0.7 met, or approximately 75 W of heat produced, was used in the laboratory tests of the developed system. To simulate the body heat, a heater regulated to provide 75 W was used, together with a glass tank containing approximately 5 L of water, simulating the average blood volume of a person, and a small water pump to keep the water moving and the temperature more uniform throughout the tank. To verify the equipment's response in the worst case, tests were also performed considering a metabolic rate of 0.8 met, or approximately 85 W. Figure 4 shows the hypothermia induction phase and the time needed to lower the temperature from 36.5 to 32 °C considering metabolic rates of 0.7 met and 0.8 met. Figure 5 shows the time needed to lower the temperature from 36.5 to 30 °C, which is considered profound hypothermia.
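The heat loads used in the tests follow directly from the met definition; a quick check (the function name is ours, and the rounding to ~75 W and ~85 W matches the values quoted in the text):

```python
MET_W_PER_M2 = 58.15   # 1 met in W per m^2 of skin surface (ASHRAE)
SKIN_AREA_M2 = 1.8     # average male European skin surface area, m^2

def heat_production_w(met, skin_area_m2=SKIN_AREA_M2):
    """Whole-body metabolic heat production in watts."""
    return met * MET_W_PER_M2 * skin_area_m2

print(round(heat_production_w(0.7)))  # 73 W, taken as ~75 W in the tests
print(round(heat_production_w(0.8)))  # 84 W, taken as ~85 W (worst case)
```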

4 Discussion

In the moderate hypothermia induction phase, about 10 and 14 min were necessary to lower the temperature to 35 °C, which is when the neuroprotective effects begin, considering 0.7 met and 0.8 met respectively, and about 27 and 33 min to reach the target temperature of 32 °C at the respective metabolic rates.


Fig. 4 Time to lower the temperature to 32 °C

Fig. 5 Time to lower the temperature to 30 °C




Table 1 TH induction time in relation to the metabolic rate

| Metabolic rate | Time to 35 °C | Time to 32 °C | Time to 30 °C |
| 0.7 met (75 W) | 10 min        | 27 min        | 40 min        |
| 0.8 met (85 W) | 14 min        | 33 min        | 50 min        |

Table 2 Temperature variation during the maintenance phase at 32 °C

| Metabolic rate | Minimum temperature (°C) | Maximum temperature (°C) | Variation (°C) |
| 0.7 met (75 W) | 31.98                    | 32.04                    | 0.07           |
| 0.8 met (85 W) | 31.97                    | 32.04                    | 0.08           |

Simulating profound hypothermia at 30 °C, about 40 and 50 min were needed at metabolic rates of 0.7 met and 0.8 met, respectively. Table 1 presents the results found. During the hypothermia maintenance phase, a maximum temperature variation of 0.08 °C around the target temperatures of 32 and 30 °C was observed for both metabolic rates, as shown in Table 2. During the rewarming phase, a maximum temperature variation of 0.08 °C was also observed around the momentary target temperature, which is increased by 0.2 °C/h. Slow and controlled rewarming is fundamental to a successful treatment: if the temperature rises rapidly, all the beneficial effects of hypothermia may be lost.
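At the stated rate of 0.2 °C/h, the duration of the rewarming phase follows directly from the chosen target temperature; a quick check (function and parameter names are ours):

```python
def rewarming_hours(target_temp_c, rate_c_per_h=0.2, normothermia_c=36.5):
    """Hours to rewarm from the hypothermia target back to normothermia
    at the slow, controlled rate of 0.2 °C per hour."""
    return (normothermia_c - target_temp_c) / rate_c_per_h

print(rewarming_hours(32.0))  # 22.5 h from moderate hypothermia at 32 °C
print(rewarming_hours(30.0))  # 32.5 h from profound hypothermia at 30 °C
```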

5 Conclusions

In this work, new equipment for systemic moderate Therapeutic Hypothermia using extracorporeal circulation was developed. The characteristics of this equipment make it safer than other invasive methods of Therapeutic Hypothermia, as the blood circulates only through disposable blood lines, where it is cooled in a controlled and automatic way, with no contact with any other reservoir, reducing the risk of contamination. Likewise, the small number of connections in the extracorporeal circuit considerably reduces the risk of air entering the blood circuit and hence of embolism. The developed hardware allows simple operation of the equipment: only the desired hypothermia temperature must be set, and the system performs the operations necessary to induce and maintain hypothermia. The system also rewarms the patient automatically, when requested, slowly and gradually (0.2 °C/h). The small observed oscillation, less than 0.1 °C, confirms that the system can maintain a stable temperature during the maintenance phase and promote slow and controlled rewarming during the rewarming phase. The results obtained in the laboratory are compatible with existing equipment and sufficient to meet the therapeutic demands.

Conflict of Interest

The authors declare that they have no conflict of interest.

References

1. Biazzotto CB, Brudniewski M, Schmidt AP, Auler JOC Jr (2006) Hipotermia no Período Peri-Operatório. Rev Bras Anestesiol 56:89–106
2. Sessler DI, Sladen RN (1997) Mild perioperative hypothermia. N Engl J Med 336:1730–1737
3. ASHRAE—American Society of Heating, Refrigerating, and Air Conditioning Engineers (2009) Handbook of fundamentals. ASHRAE Inc., Atlanta
4. Marion D, Bullock R (2009) Current and future role of therapeutic hypothermia. J Neurotraum 26:455–467
5. Ahmed AI, Bullock R, Dietrich D (2016) Hypothermia in traumatic brain injury. Neurosurg Clin N Am 27:489–497
6. Abou-chebl A, Degeorgia MA, Andrefsku JC, Krieger DW (2004) Technical refinements and drawbacks of a surface cooling technique for the treatment of severe acute ischemic stroke. Neurocrit Care 1:131–143
7. Urbano LA, Oddo M (2012) Therapeutic hypothermia for traumatic brain injury. Curr Neurol Neurosci 12:580–591
8. Varon J, Acosta P (2008) Therapeutic hypothermia: past, present and future. Amer Coll Chest Physicians 133:1267–1274
9. Kuhnen G, Jensen NE, Tisherman SA (2005) Cooling methods. In: Therapeutic hypothermia. Springer Science+Business Media Inc., New York
10. Polderman KH, Herold I (2009) Therapeutic hypothermia and controlled normothermia in the intensive care unit: practical considerations, side effects, and cooling methods. Crit Care Med 37:1101–1120
11. Harker J, Gibson P (1995) Heat-stroke: a review of rapid cooling techniques. Intens Crit Care Nur 11:198–202
12. Bernard SA, Buist M, Monteiro O, Smith K (2003) Induced hypothermia using large volume, ice-cold intravenous fluid in comatose survivors of out-of-hospital cardiac arrest: a preliminary report. Resuscitation 56:9–13
13. Virkkunen I, Yli-Hankala A, Silfvast T (2004) Induction of therapeutic hypothermia after cardiac arrest in prehospital patients using ice-cold Ringer's solution: a pilot study. Resuscitation 62:299–302
14. Nolan JP et al (2003) Therapeutic hypothermia after cardiac arrest: an advisory statement by the advanced life support task force of the international liaison committee on resuscitation. Circulation 108:118–121
15. Gasparini RT (2020) Desenvolvimento de um Sistema de Indução e Manutenção de Hipotermia Terapêutica Moderada para o Tratamento de Traumatismo Cranioencefálico. Doctorate Thesis, Dept Semiconductors Instruments Photonics/UNICAMP, Campinas, 86 p

Temperature Generation and Transmission in Root Dentin During Nd:YAG Laser Irradiation for Preventive Purposes Claudio Ricardo Hehl Forjaz, Denise Maria Zezell, and P. A. Ana

Abstract

High-intensity lasers are widely used in dental procedures, and the heating produced on the surface is necessary to ensure protective activity against the development of caries and erosion lesions. However, caution should be exercised regarding the spread of heat to the pulp, periodontal tissue and alveolar bone, which can harm these tissues. This study sought to evaluate the generation and transmission of heat in root dentin and adjacent tissues during irradiation with an Nd:YAG laser for preventive activity. For that, 15 human lower incisor teeth had an area of 9 mm² of root dentin irradiated with an Nd:YAG laser (λ = 1.064 µm, 10 Hz, 60 mJ/pulse, 84.9 J/cm²) for 30 s. During irradiation, pulpal temperature was evaluated by fast-response thermocouples, while surface temperature and heat distribution on surrounding tissues were measured by infrared thermography. A mean surface temperature increase of 293.48 ± 30.6 °C was observed on the root dentin surface, with increases of 15.85 ± 39.6 °C below the irradiated area, 11.72 ± 8.7 °C above the irradiated area, 19.77 ± 4.9 °C at 1 cm laterally and 7.03 ± 2.7 °C at 2 cm laterally to the irradiated area. The mean pulpal temperature increase registered was 6.5 ± 1.4 °C. It can be concluded that Nd:YAG laser irradiation promoted surface temperature rises that suggest chemical changes on dentin; however, the temperature increases generated in the adjacent tissues (region of the periodontal ligament) and in the pulp chamber may be dangerous in future clinical application considering the 30 s irradiation time used in this study. Therefore, this laser protocol can be used as long as the irradiation time is reduced in future studies.

C. R. H. Forjaz · P. A. Ana (✉) Center for Engineering, Modeling and Applied Social Sciences, Federal University of ABC, 03 Arcturus St., São Bernardo do Campo, Brazil e-mail: [email protected]

D. M. Zezell Center for Lasers and Applications/Nuclear and Energy Research Institute (IPEN-CNEN/SP), Sao Paulo, Brazil

Keywords

Laser · Heat · Propagation · Root · Pulp · Ligament

1

Introduction

High-intensity lasers are widely used on tooth tissues for different applications. When choosing an irradiation parameter for any clinical application, an important piece of information to consider is the temperature reached, in order to avoid risks to biological tissues. In addition, determining the temperature rises on the surface of hard tissues predicts the possibility of different chemical and crystallographic effects. For example, carbonate removal begins above 100 °C, while changes in the crystalline phases occur from 800 °C [1]. These changes have already been identified with the use of the Nd:YAG laser on enamel and dentin, depending on the energy density applied [2]. Considering the use of lasers for preventing caries, respecting the temperature limits determined by the classic work of Zach and Cohen [3] is still imperative for establishing laser protocols that are safe for pulpal tissue. The effects of these lasers are mostly photothermal and photoacoustic; thus, it is essential to know how the generated heat is transmitted to the interior of the tooth and to the surrounding tissues. It is also important to respect the temperature limits that can be tolerated by periodontal tissues, such as the ligaments and alveolar bone, so as not to induce inflammation or necrosis in these tissues [4]. The first factors to be considered in determining safe protocols using high-intensity lasers are those related to the tissue to be irradiated, such as the thickness, the total mass of the tooth and the absorption coefficients at each laser wavelength. In this context, dentin is the tissue of greatest concern: it has low thermal conductivity [5] and offers greater risks to the pulp as it is worked in depth, since the area of the dentinal tubules increases with the depth of this tissue. It is worth mentioning that decayed tissue has more water and fewer minerals; therefore, heat spreads through it more easily. Despite the successful application of the Nd:YAG laser to prevent enamel demineralization, little is known about the parameters that can also be used on dentinal tissue for the same purpose. Thus, an accurate assessment of the generation and propagation of heat during irradiation is imperative. This in vitro study evaluated the heat generation in root dentin during the use of the Nd:YAG laser adjusted to preventive parameters, and quantified the spread of heat to the surrounding tissues through the simulation of a clinical application, in order to verify the safety of this protocol for dentin, enamel, pulp and periodontal tissues.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_171

2

Material and Method

In this study, 15 healthy human lower incisor teeth were used. These teeth were provided by volunteers after approval of this study by the UFABC Research Ethics Committee (CAAE 49456715.0.0000.5594). The teeth were cleaned, and the pulp and periodontal tissues were completely removed using endodontic files and manual ultrasonic scrapers, according to the recommended clinical technique. Before use, all teeth were examined under a stereoscopic microscope, and those with cracks or any other defects that could interfere with the analysis were excluded. To simulate standard pulp tissue, a 20% red gelatin solution was inserted into the pulp chamber and root canal through the apical foramen, given the similarity of the collagen in gelatin to the molecule present in pulp tissue; the foramen was subsequently sealed with photopolymerizable dental composite resin. Afterwards, an opening of 1 mm in diameter was made on the gingival third of the lingual side of each tooth, up to the limit of the pulp chamber (Fig. 1), using a high-speed dental spherical diamond drill under air–water cooling, in order to allow the placement of a type K thermocouple (chromel–alumel, NiCr-NiAl, Omega Eng. Inc., USA) in the region of the coronal pulp adjacent to the irradiations. The thermocouple has a diameter of 127 µm, resolution of 0.2 °C and sensitivity from 0.1 to 100 °C, and it was kept in place during irradiations with dental wax. The temperature measurements were performed using an NI USB9162 acquisition board (National Instruments, USA), with temporal resolution of 0.05 s, and the analysis of temperature variations during and after irradiation was

performed using LabVIEW software. Until the irradiations, the teeth were kept individually in a humid environment under refrigeration (in sterile cotton and distilled water with thymol) at +4 °C. During the experiments, the temperature and humidity of the room were kept constant and controlled. The irradiations were performed using an Nd:YAG laser (Pulse Master 1000 ADT, USA), which operates at a wavelength of 1.064 µm, pulse duration of 100 µs, repetition rate of 10 Hz, beam diameter of 300 µm, 60 mJ/pulse and an energy density of 84.9 J/cm² [6]. All irradiations were performed after the application of a 0.5 mm-thick layer of photoabsorber (coal paste, composed of vegetal coal diluted in 50% water and ethanol) [6] on the dentin surface, in the absence of cooling. The irradiations were performed by manually scanning the surface for 30 s, keeping the laser fiber focused at a distance of 1 mm from the surface, which was maintained with the aid of a standardized endodontic file. For the irradiations, an area of 9 mm² (3 × 3 mm) was delimited with a pencil and a jig and was totally irradiated at a constant speed of 6 mm/s by a single calibrated operator. During the experiments, the samples were kept static on optical supports (Newport Corp., Irvine, USA), and the energy emitted was verified before irradiation of each sample with a power and energy meter (Coherent FieldMaster GS + Detector LM45; Coherent, USA). During the irradiations, the surface temperature as well as the heat propagation were measured using an infrared thermographic camera (ThermaCam FLIR SC3000 Systems, Boston, USA).

Fig. 1 Scheme of the positioning of the type-K thermocouple and laser irradiation in a tooth

The camera was positioned laterally to the tooth and to the optical fiber of the laser, 10 cm away (Fig. 2), and the images were kept in focus through the use of appropriate lenses. Figure 2 shows a representative infrared image of the placement of a sample in front of the thermographic camera, in which the lateral face of the tooth can be observed, as well as the head of the laser handpiece and the optical fiber used during irradiations. In addition, it is possible to observe the presence of the photoabsorber paste and the positioning of the thermocouples used to measure the intrapulpal temperature. The experiments were carried out at a controlled temperature of 21.5 °C and 70% relative humidity, considering a tooth emissivity of 0.91. The infrared images were obtained with a resolution of 0.01 °C, using a recording rate of 300 Hz. The temperature monitoring, both by the thermocouple and by the thermographic camera, started 5 s before and ended 3 min after the laser irradiation, so that not only the temperature rise was observed but also the time needed to return to the initial condition. For the analysis of the temperature on the surface of the root dentin, three points on the dentin surface were selected (Fig. 3a): SP01 (inferior border of the irradiated area), SP02 (superior border of the irradiated area) and SP03 (center of the irradiated area). For the analysis of the temperature propagation to the adjacent tissues, four points were analyzed (Fig. 3b): SP01, 1 cm below the irradiated area; SP02, 1 cm above the irradiated area; SP03, 1 cm in depth of the irradiated area; and SP04, 2 cm in depth of the irradiated area. These analyses were performed using the software ThermaCam Researcher 2001 (FLIR Systems, USA). All temperature data obtained, both from the surface and from the pulp, were exported to Microsoft Excel, so that the average curves with their respective standard deviation values could be plotted.
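The energy density used in the protocol follows directly from the pulse energy and beam diameter quoted in the methods. The short check below (a sketch, not part of the original protocol) reproduces the 84.9 J/cm² figure:

```python
import math

pulse_energy_j = 60e-3      # 60 mJ/pulse
beam_diameter_cm = 0.03     # 300 µm beam diameter

# Fluence = pulse energy divided by the focused spot area
spot_area_cm2 = math.pi * (beam_diameter_cm / 2) ** 2
fluence = pulse_energy_j / spot_area_cm2
print(f"{fluence:.1f} J/cm^2")   # -> 84.9 J/cm^2
```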


Fig. 2 Infrared image taken from the positioning of the laser fiber, teeth and thermocouple

3

Results

Figure 4 shows a representative sequence of infrared images obtained during the irradiations. The ejection of the coal paste at the beginning of the irradiation can be noticed, as well as the generation of heat and its spread to the adjacent areas, reaching pulp, enamel and periodontal tissue. The cooling of the tooth after the end of the irradiation can also be observed. The analysis of the superficial points of the root dentin immediately below the irradiated area showed an average temperature increase of 293.48 ± 30.6 °C in the center of the irradiated area and 299.92 ± 106.8 °C at the inferior border (Table 1). These values were calculated by subtracting the initial temperature of each tooth detected by the camera. In these same analyses, temperature peaks of up to 407.7 °C were detected. The temperature variation curves detected during the irradiations (Fig. 5) show a rapid increase at the beginning of the irradiations, with some peaks and valleys characteristic of the interaction of the laser pulses with the tissue. An exponential decay is observed immediately after the end of the irradiations, with the return to the initial temperature taking approximately 200 s. At points adjacent to the irradiated area, an average elevation of 15.85 ± 39.6 °C was found below the irradiated area, 11.72 ± 8.7 °C above the irradiated area, 19.77 ± 4.9 °C at 1 cm laterally and 7.03 ± 2.7 °C at 2 cm laterally to the irradiated area (Table 2). In these regions, temperature peaks of 166.39 °C, 53.33 °C, 47.28 °C and 30.3 °C were observed, respectively. Figure 6 shows the average pulpal temperature variation curve measured by the thermocouple. An average increase of 6.15 ± 2.4 °C was observed; however, a maximum peak of 10.71 °C was detected during the measurements.

4

Discussion

The Nd:YAG laser is widely used in preventive actions on enamel, and the literature shows promising results in dentin with the same irradiation protocol used in this study [2]. As 1.064 µm photons are not absorbed by any chromophore present in dental hard tissues (the absorption coefficient is less than 4 × 10⁻² cm⁻¹) [1], a photoabsorber must be applied, which aims to restrict the generation of heat to the surface and reduce its transmission to deep tissues [6]. Although different photoabsorbers have been tested, the coal paste used in this study still offers the best safety in use, as well as being easily removed afterwards, which prevents aesthetic damage.


Fig. 3 Established points on the root dentin surface (a) and surrounding areas (b) for temperature analysis in infrared images

Fig. 4 Representative sequence of infrared images obtained during irradiations

Table 1 Average (±SD) of maximum and variation of temperature detected on the surface of root dentin during irradiations

Temperature (°C) | Center        | Superior border | Inferior border
Variation        | 293.48 ± 30.6 | 276.8 ± 52.9    | 299.92 ± 106.8
Maximum          | 312.77 ± 30.6 | 296.17 ± 52.9   | 319.21 ± 106.8

Fig. 5 Temperature variation curves detected during the irradiations below, above, 1 and 2 cm laterally to the irradiated region. The shaded areas represent standard deviation

The increase in surface temperature detected in this study (average of 299.92 ± 106.8 °C and peaks up to 407.7 °C) suggests that the laser radiation chemically modifies the dentin structure, which corroborates previous studies [2]. It is known that carbonate removal, water loss and denaturation of the organic matrix begin at temperatures of 100 °C, and such actions are important for increasing dentin resistance to demineralization [1]. It is important to consider that the temporal pulse width of the Nd:YAG laser used in this study (100 µs) is much shorter than the sampling interval of the thermographic camera (recording rate of 300 Hz, i.e., about 3.3 ms per frame). Therefore, it is possible that the highest temperature peaks were not detected during the irradiations and may have been even greater. In fact, this same laser protocol promoted melting and recrystallization of the dentin in a previous study [2], which suggests that temperatures close to 1000 °C were reached. In the present work, even with the use of the coal paste [6], there was an average temperature increase of 15.85 ± 39.6 °C in the region immediately below the irradiated area, in the apical direction, which corresponds to the region where the periodontal ligaments are found. Temperature increases above 10 °C can generate inflammatory changes in periodontal tissues [4]; thus, the protocol used in this study can be considered dangerous for a future clinical application. Regarding the pulp temperature, an average rise of 6.15 ± 2.4 °C was noticed, which exceeds the limit of 5.5 °C that is potentially threatening to pulp vitality [3]. Although the value determined in this study is below the limit for total necrosis of the pulp (16 °C) [3], and although pulpal blood circulation can contribute to the dissipation of heat, the temperature rise detected in the present study may be unsafe for this tissue.
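The mismatch between the laser pulse duration and the camera's sampling can be quantified with a back-of-the-envelope sketch using the figures reported above:

```python
frame_interval_s = 1 / 300   # 300 Hz recording rate -> ~3.3 ms per frame
pulse_width_s = 100e-6       # Nd:YAG temporal pulse width

# Each thermographic frame integrates over an interval ~33x longer than
# a single laser pulse, so short-lived peaks are averaged out.
print(frame_interval_s / pulse_width_s)
```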
For future clinical extrapolation, the differences between this in vitro model and a clinical study must be considered. The model chosen for this study involves a tooth with a small mass, which is more susceptible to heat. The results obtained with this model can be extrapolated to teeth with larger masses, as long as the same irradiation protocol is followed. In teeth with larger masses, less heat is expected to propagate to the pulp; therefore, if a protocol is considered safe in this model, it can be applied to other teeth safely. The lack of blood circulation, gingival fluid and saliva also contributes to the reduction of heat dissipation, given that periodontal tissues have lower thermal conductivity than air [5]. Furthermore, differences in dentin thickness, hydration degree or even composition are reflected in the high standard deviation values found in all the analyses performed, even with a large number of samples (15 teeth). Considering that the energy density employed in the present study has proven benefits in increasing dentin resistance to demineralization and erosion [2, 6], a strategy for future clinical application would be to reduce the irradiation time or to fragment it, in order to allow a longer thermal relaxation time for the dentin. Such an alternative should be evaluated in future studies.

Table 2 Average (±SD) of maximum, minimum and variation of temperature detected in the different regions during irradiations

Temperature (°C)                     | Peak         | Minimum      | Variation
Below the irradiated area            | 36.44 ± 39.2 | 20.59 ± 3.9  | 15.85 ± 39.6
Above the irradiated area            | 30.54 ± 8.7  | 18.82 ± 0.5  | 11.72 ± 8.7
1 cm laterally to the irradiated area | 36.92 ± 4.5  | 17.15 ± 1.0  | 19.77 ± 4.9
2 cm laterally to the irradiated area | 25.42 ± 2.8  | 18.39 ± 0.8  | 7.03 ± 2.7

Fig. 6 Average curve of pulp temperature variations detected by the thermocouple. The shaded area represents the standard deviation

5

Conclusion

Nd:YAG laser irradiation promoted surface temperature rises that suggest chemical changes on dentin; however, the temperature increases generated in the adjacent tissues and in the pulp chamber may be dangerous for future clinical application considering the irradiation time of 30 s.

Acknowledgements To FAPESP (2017-21887-4), PROCAD-CAPES (88881.068505/2014-01), National Institute of Photonics (CNPq/INCT 465763/2014-6) and IPEN-CNEN/SP.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Fowler BO, Kuroda S (1986) Changes in heated and in laser irradiated human tooth enamel and their probable effect on solubility. Calcif Tissue Int 38:197–208
2. Pereira DL, Freitas AZ, Bachmann L et al (2018) Variation on molecular structure, crystallinity, and optical properties of dentin due to Nd:YAG laser and fluoride aimed at tooth erosion prevention. Int J Mol Sci 19:433–447
3. Zach L, Cohen G (1965) Pulp response to externally applied heat. Oral Surg Oral Med Oral Pathol 19:515–530
4. Eriksson A, Albrektsson T, Grane B et al (1982) Thermal injury to bone. A vital-microscopic description of heat effects. Int J Oral Surg 11:115–121
5. Brown WS, Dewey WA, Jacobs HR (1970) Thermal properties of teeth. J Dent Res 49:752–755
6. Zezell DM, Boari HGD, Ana PA et al (2009) Nd:YAG laser in caries prevention: a clinical trial. Lasers Surg Med 41:31–35

Photobleaching of Methylene Blue in Biological Tissue Model (Hydrolyzed Collagen) Using Red (635 nm) Radiation G. Lepore, P. S. Souza, P. A. Ana, and N. A. Daghastanli

Abstract

Methylene blue (MB) is used for Photodynamic Therapy (PDT) in a protocol that is efficient, inexpensive, and safe, and has been used for decades in clinical applications. The quantification of the efficacy of a photosensitizer in aqueous solution is well known in the literature. Photobleaching, a photoinduced degradation or modification of the photosensitizer, can produce a significant loss of efficacy in PDT treatment, since the generation of reactive oxygen species is directly associated with photon absorption by this photosensitizer. In this paper, the photobleaching kinetics of methylene blue in the presence of hydrolyzed collagen was studied. In the investigated conditions, where the collagen matrix simulates the cytoskeleton network, it was demonstrated that the compartmentalization of MB can modulate its photobehavior: the higher the MB concentration in the collagen medium, the lower the photobleaching rate. We associate this behavior with the methylene blue concentration and with the environmental interaction.

Keywords

Photodynamic therapy · Photobleaching rates · Methylene blue

1

Introduction

Methylene blue (MB) is used in photodynamic therapy (PDT) as an efficient, inexpensive, and safe photosensitizer, employed for decades in clinical applications. It is a photosensitizer (PS) of high efficiency, and its properties in aqueous solution are well described in the literature. Figure 1 shows the photoprocesses of methylene blue: MB absorbs light in the red region and, when excited at a suitable wavelength (λmax = 664 nm), generates singlet oxygen and/or superoxide anion [1, 2]. Photobleaching (PhB) is characterized by the loss of color of the photosensitizer and, consequently, of its photon absorption capacity, which reduces the formation of cytotoxic species. If PhB occurs before the cells are affected, no damage to them will occur, which is desirable for healthy cells exposed to the therapy but undesirable for the malignant cells being treated. This shows that PhB is related to the efficiency of photodynamic therapy [1, 2]. The objective of this work is to study the photobleaching kinetics (PhB) of methylene blue in a medium simulating biological tissue, formed by a hydrolyzed collagen matrix.

G. Lepore · P. S. Souza · P. A. Ana · N. A. Daghastanli (✉) Universidade Federal do ABC (Centro de Engenharia Modelagem e Ciências Sociais Aplicadas), Alameda da Universidade, s/nº—Bl. Delta, Sala 382, São Bernardo do Campo, Brazil e-mail: [email protected]

2

Materials and Methods

The hydrolyzed collagen samples were prepared from hydrolyzed collagen powder in Milli-Q water (concentration 0.1 mg/ml), at different concentrations of methylene blue (50, 62.5, 75, and 100 µM). The collagen samples were placed in fluorescence acrylic cuvettes (optical path length 10 mm). Samples were irradiated (7.5 min) with a collimated diode laser beam (Coherent, USA) with initial power (P0) of 7 mW and wavelength λ = 635 nm. The light power (Ptr) transmitted through the collagen samples was measured by a light power meter (FieldMaxII TO—Coherent) as a function of irradiation time. Figure 2 shows the experimental setup. To calculate the photobleaching rates, the experimental data were fitted with a function of the type described by expression (1), where K is the temporal photobleaching kinetic constant for each MB concentration, P0 is the initial laser power, and Ptr is the laser power transmitted through the collagen–MB samples.
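Fitting expression (1) to transmitted-power data can be sketched with SciPy's `curve_fit`. The synthetic data below are illustrative assumptions, not the measured dataset: the noise level and random seed are arbitrary, and the rate constant 7.04 × 10⁻³ s⁻¹ is taken from the 50 µM entry of Table 1 in the results.

```python
import numpy as np
from scipy.optimize import curve_fit

def transmitted_power(t, p0, k):
    # Expression (1): Ptr = P0 * (1 - exp(-K t))
    return p0 * (1.0 - np.exp(-k * t))

# Synthetic 7.5 min irradiation at the nominal 7 mW initial power
t = np.linspace(0.0, 450.0, 91)
rng = np.random.default_rng(0)
ptr = transmitted_power(t, 7.0, 7.04e-3) + rng.normal(0.0, 0.02, t.size)

# Fit P0 and K simultaneously from the noisy transmission curve
(p0_fit, k_fit), _ = curve_fit(transmitted_power, t, ptr, p0=(7.0, 1e-3))
print(f"K = {k_fit:.2e} s^-1")   # recovers a value close to 7.0e-3 s^-1
```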

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_172


Fig. 1 Methylene blue photochemical reactions. Top: methylene blue chemical structure. The insets show the monomer and dimer absorption spectra. Extracted from ref. [3]

Fig. 3 Optical absorption spectrum of 50 µM methylene blue in hydrolyzed collagen matrix, with maximum optical absorption band at λ = 667 nm. The dot marks the diode laser emission (λ = 635 nm)

Fig. 2 Experimental setup: diode laser (a), acrylic cuvette (b), and power meter (c) placed 10 cm from the laser

Ptr = P0 · (1 − e^(−Kt))    (1)

3

Results and Discussion

Methylene blue shows optical absorption in the 550–700 nm range, with a characteristic band at 664 nm. When irradiated with laser light (λ = 635 nm, P0 = 7 mW), methylene blue loses its characteristic blue color; this phenomenon is called photobleaching of methylene blue. Photobleaching is characterized by loss of the optical absorption property, so more light can penetrate deeper into the samples. The temporal loss of blue color due to irradiation is accompanied by an increase in transmitted light power. Figure 3 shows the optical absorption spectrum of methylene blue in the hydrolyzed collagen matrix, measured with a spectrophotometer (Red Tide USB650, Ocean Optics, USA) coupled to an optical fiber (Ø = 400 µm core). Figure 4 shows the comparative spectra of methylene blue in aqueous and collagen media. An optical absorbance redshift was found between the aqueous and collagen media: 664 nm → 667 nm. This redshift is an indication of a different microscopic environment for methylene blue. The absorption band maximum of methylene blue in aqueous solution (664 nm) differs from its maximum in the collagen medium (667 nm). We suppose that this redshift (664 nm → 667 nm) can be explained by the behavior of methylene blue in environments with different dielectric constants, such as water, a polar substance (ε = 80.2), and hydrolyzed collagen, an apolar biological material. The measured absorption band maximum of methylene blue in the collagen medium (667 nm) is very similar to that in glycerol (ε = 46.5) [4]. The environment polarity (water or collagen) causes shifts in the maximum absorption of methylene blue. This effect can modulate the production of reactive oxygen species, such as singlet oxygen, and can play an important role in the clinical application of PDT. The skin and muscles, parts of the body that have a large amount of collagen, have dielectric constants similar to that of glycerol (skin ε = 47.6, muscle ε = 53.8) [5].

Fig. 4 Comparative optical absorbance spectra of methylene blue in aqueous (—) and collagen (-⦁-) media. Inset: redshift of the absorption maximum of methylene blue from the aqueous (664 nm) to the collagen (667 nm) medium

Figure 5a shows the intensity of transmitted light power measured as a function of irradiation time. Data were normalized due to the significant absorption of methylene blue at higher concentrations and the consequent lower transmitted light power (Ptr) monitored with the power meter (Fig. 2). The curves show that methylene blue photobleaching was concentration-dependent: the lower the methylene blue concentration, the higher the photobleaching rate. Table 1 shows the calculated photobleaching rates K of methylene blue.

Fig. 5 a Normalized transmitted light power (Ptr/P0), monitored at 635 nm in hydrolyzed collagen samples with different methylene blue concentrations: dots are the mean ± SD of normalized triplicate measurements. Continuous lines represent fitting curves using the mathematical expression Ptr = P0 · (1 − e^(−Kt)), where K is the rate of MB photobleaching for each MB concentration (◼: 50 µM, ◯: 62.5 µM, ▲: 75 µM, ▽: 100 µM). b Calculated MB photobleaching rates K as a function of MB concentration, from the kinetic curves of transmitted power presented in a

Table 1 Calculated photobleaching kinetic rates (K) of methylene blue

[MB]/µM | K (s⁻¹) × 10⁻³
50.0    | 7.04
62.5    | 5.41
75.0    | 4.31
100.0   | 3.12

Figure 5b shows the temporal photobleaching rates (K), calculated by fitting the transmitted light power for each methylene blue concentration (expression 1). Methylene blue photobleaching at low concentrations is insignificant in aqueous solution. However, when the dye is immobilized in a collagen matrix, we measure a concentration-dependent photobleaching (Table 1). We can suppose that when MB molecules are trapped in the collagen matrix, this trapping limits the molecular freedom to transfer the absorbed photon energy to the environment, causing some molecular structural rearrangement, with loss of the capacity to absorb light, and then it is possible to observe the photobleaching. This supposition is reinforced by our experimental results: methylene blue molecules were in their monomeric form in the hydrolyzed collagen matrix, since the MB dimeric form presents its maximum at 590 nm (Fig. 1, inset) and our results show a maximum at 667 nm (Fig. 3). Figure 6 shows the methylene blue photobleaching frames, obtained with a standard digital camera (10 Hz frame rate), at several irradiation times of the continuous laser (λ = 635 nm).

Fig. 6 Digital images at several irradiation times. Cleared areas represent the photobleached regions of methylene blue (100 µM)

The blank areas show the bleached regions of methylene blue inside the hydrolyzed collagen sample at each irradiation time. Methylene blue presents a singlet oxygen (¹O2) quantum yield of formation (Φ = 0.5), a low reduction potential, and intense light absorption within the phototherapeutic window for Photodynamic Therapy (the 600–800 nm range), with a molar extinction coefficient (ε) in water of 7.4 × 10⁴ M⁻¹ cm⁻¹ (λ = 664 nm) [4, 6]. PDT using the methylene blue photosensitizer can induce cell death in cancer cells in a selective way, while not significantly affecting nonmalignant cells [7, 8]. Methylene blue is efficient in the photodynamic inactivation of microorganisms and viruses, and shows high photodynamic efficiency in causing the death of cancer and microorganism cells [9]. It can be observed that at the beginning of the irradiation (0 s) the light has low penetrating power compared with the end of the irradiation (270 s), when the light has high penetrating power. These frames corroborate the kinetic curves of photobleaching in Fig. 5a: the photobleaching increased the transmitted light power measured by the power meter. When irradiated in the collagen matrix, methylene blue can transition to a bleached form, losing its characteristic blue absorption peak (664 nm) and becoming partially uncolored. Our experimental data show that increasing the MB concentration decreases the photobleaching rate constant: K at 50 µM is 126% higher than at 100 µM (Table 1). We can suppose that the approximation of MB monomers in a structured collagen matrix plays a fundamental role in this photobleaching process. Additionally, the increase of MB concentration has an inner filter effect, which also contributes to the apparent photobleaching rate. Photodynamic therapy (PDT) uses photosensitizers, which are activated by visible light to produce oxidizing species capable of killing living cells, such as cancer cells and microorganisms.
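The concentration dependence can be checked directly against Table 1; the dictionary below simply transcribes the tabulated K values:

```python
# Photobleaching rate constants K (s^-1) from Table 1, keyed by [MB] in µM
k_rates = {50.0: 7.04e-3, 62.5: 5.41e-3, 75.0: 4.31e-3, 100.0: 3.12e-3}

# Relative difference between the lowest and highest concentrations studied
increase = (k_rates[50.0] - k_rates[100.0]) / k_rates[100.0] * 100
print(f"K(50 µM) exceeds K(100 µM) by {increase:.0f}%")  # -> 126%
```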
One of the clinical limitations of PDT is delivering light to the necessary depth, since some tumors and infections are located deep in the organism. In our previous work [10], we demonstrated the efficacy of MB-PDT against massive melanoma tumors in animal trials: tumors were eradicated with a single MB-PDT session, even with a high concentration of photosensitizer, a strongly absorbing solution in dark melanocytic tissue. This result shows that light was able to penetrate a site with very high absorption. The present work showed that Methylene Blue undergoes photobleaching, a behavior that may explain how light can penetrate deep into biological tissues. We therefore consider that Methylene Blue photobleaching played a fundamental role in the effectiveness of the treatment, because photobleaching allowed light to be delivered to the target site, even at depth, as seen in the kinetic curves and in the literature [11, 12].
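The link between photobleaching and increasing light transmission can be illustrated with a minimal sketch. The extinction coefficient and initial concentration below come from the text (ε = 7.4 × 10⁴ M⁻¹ cm⁻¹, 100 µM); the first-order rate constant and the 1 cm optical path are hypothetical values chosen only for illustration, not the measured ones. As the absorber bleaches, c(t) = c₀·e^(−kt), and the transmitted fraction follows the Beer–Lambert law, T = 10^(−ε·c·l):

```python
import math

EPSILON = 7.4e4   # molar extinction coefficient of MB in water, M^-1 cm^-1 at 664 nm [4, 6]
PATH_CM = 1.0     # optical path length (assumed, cm)
C0 = 100e-6       # initial MB concentration: 100 uM, as in Fig. 6
K = 0.02          # hypothetical first-order photobleaching rate constant, s^-1

def transmittance(t_s: float) -> float:
    """Transmitted light fraction after t_s seconds of irradiation."""
    c = C0 * math.exp(-K * t_s)          # first-order decay of the absorber
    absorbance = EPSILON * c * PATH_CM   # Beer-Lambert: A = eps * c * l
    return 10.0 ** (-absorbance)

for t in (0, 90, 180, 270):
    print(f"t = {t:3d} s  T = {transmittance(t):.3e}")
```

Even with these toy numbers, the transmitted fraction rises by several orders of magnitude between 0 s and 270 s, which is the qualitative behavior reported for the powermeter readings.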

G. Lepore et al.

4 Conclusion

Methylene Blue photobleaching can play a significant role in PDT procedures and may affect the efficiency of the treatment, since the generation of ROS is directly associated with efficient photon absorption by the PS (here, the MB molecule), a necessary step toward inflicting irreversible biological damage. In this work, where the collagen matrix simulates biological tissues such as muscle and cartilage, it was demonstrated that the compartmentalization of MB can modulate the photo-behavior of the PS through its aggregation state. We assume that the different dielectric properties (dielectric constants) of water and collagen can explain the differences between the photobleaching times of Methylene Blue, which can affect its optical absorption properties and consequently its efficiency in PDT.

Acknowledgements The authors are grateful to CAPES-UFABC for the graduate grant (master's degree).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Tardivo JP, Giglio A, Oliveira CS et al (2005) Methylene blue in photodynamic therapy: from basic mechanisms to clinical applications. Photodiagnosis Photodyn Ther 2(3):175–191. https://doi.org/10.1016/s1572-1000(05)00097-9
2. Mills A, Wang J (1999) Photobleaching of methylene blue sensitised by TiO2: an ambiguous system? J Photochem Photobiol A Chem 127(1–3):123–134. https://doi.org/10.1016/s1010-6030(99)00143-4
3. Daghastanli NA, Itri R, Baptista MS (2008) Singlet oxygen reacts with 2′,7′-dichlorodihydrofluorescein and contributes to the formation of 2′,7′-dichlorofluorescein. Photochem Photobiol 84(5):1238–1243. https://doi.org/10.1111/j.1751-1097.2008.00345.x
4. Moreira LM, Lyon JP, Lima A et al (2017) The methylene blue self-aggregation in water/organic solvent mixtures: relationship between solvatochromic properties and singlet oxygen production. Orbital Electron J Chem 9(4). https://doi.org/10.17807/orbital.v9i4.996
5. Ibraheem A, Manteghi M (2014) Performance of electrically coupled loop antenna inside human body at different frequency bands. IEEE Antennas Propag Soc Int Symp (APSURSI) 145:195–202. https://doi.org/10.1109/aps.2014.6904815
6. Prahl S (2017) Methylene blue spectra. https://omlc.org/spectra/mb/
7. Khan S, Hussain MAB, Khan AP (2020) Clinical evaluation of smartphone-based fluorescence imaging for guidance and monitoring of ALA-PDT treatment of early oral cancer. J Biomed Opt 25(6). https://doi.org/10.1117/1.jbo.25.6.063813
8. Santos AF, Terra LF, Wailemann RAM et al (2017) Methylene blue photodynamic therapy induces selective and massive cell death in human breast cancer cells. BMC Cancer 17:194. https://doi.org/10.1186/s12885-017-3179-7

9. Vecchio D, Gupta A, Huang L et al (2015) Bacterial photodynamic inactivation mediated by methylene blue and red light is enhanced by synergistic effect of potassium iodide. Antimicrob Agents Chemother 59(9):5203–5212. https://doi.org/10.1128/AAC.00019-15
10. Daghastanli NA, Baptista MS, Itri R (2007) Efetividade da Terapia Fotodinâmica (TFD) contra tumores de melanoma B16: estudos in vivo em camundongos HRS/J hairless. Jornal Brasileiro De Laser 1:24–27

11. Bacellar IOL, Baptista MS (2019) Mechanisms of photosensitized lipid oxidation and membrane permeabilization. ACS Omega 4(26):21636–21646. https://doi.org/10.1021/acsomega.9b03244
12. Tasso TT, Schlothauer JC, Junqueira HC et al (2019) Photobleaching efficiency parallels the enhancement of membrane damage for porphyrazine photosensitizers. J Am Chem Soc 141(39):15547–15556. https://doi.org/10.1021/jacs.9b05991

Effect of Photodynamic Inactivation of Propionibacterium Acnes Biofilms by Hypericin (Hypericum perforatum) R. A. Barroso, R. Navarro, C. R. Tim, L. P. Ramos, L. D. de Oliveira, A. T. Araki, D. B. Macedo, K. G. Camara Fernandes, and L. Assis

Abstract

The colonisation of the pilosebaceous follicle by Propionibacterium acnes (P. acnes) is known as one of the main factors driving acne, by taking part in the inflammatory response of the skin. Antimicrobial Photodynamic Therapy (aPDT) has important applicability in skin diseases; however, its use with the hypericin (Hypericum perforatum) photosensitizer to inhibit P. acnes has not yet been clarified. The aim of this study was to evaluate in vitro the effects of aPDT using the hypericin photosensitizer associated with red low-level laser therapy on P. acnes biofilms. The biofilms were formed in 96-well microplates using standard suspensions (2 × 10⁷ CFU/ml) and grown in BHI broth for 48 h in an anaerobic chamber. Subsequently, the control group received an application of 0.9% sterile saline solution for 3 min (C); the laser group received irradiation with an energy of 3 J (L3J); the hypericin group received the photosensitizer at a concentration of 15 µg/ml for 3 min (H15%); and the aPDT group received 15 µg/ml of hypericin associated with laser energy of 3 J (H15%L3J). Afterwards, the biofilms were broken up and seeded for CFU counting. The results showed a reduction of the P. acnes biofilms in H15% and H15%L3J, and this reduction was greater in H15%L3J. This study showed that hypericin and aPDT using the hypericin photosensitizer have an effective antimicrobial action for inhibiting P. acnes biofilms and may be promising resources in the clinical treatment of acne vulgaris.

R. A. Barroso · R. Navarro · C. R. Tim · A. T. Araki · D. B. Macedo · K. G. Camara Fernandes · L. Assis (&)
Scientific and Technological Institute of Brasil University, Brasil University, Carolina Fonseca, 235, Itaquera, São Paulo, Brasil
e-mail: livinha_fi[email protected]

L. P. Ramos · L. D. de Oliveira
Institute of Science and Technology, Department of Biosciences and Oral Diagnosis, Universidade Estadual Paulista Júlio de Mesquita Filho, São José dos Campos, Brazil

R. A. Barroso · R. Navarro · C. R. Tim · L. P. Ramos · L. D. de Oliveira · A. T. Araki · D. B. Macedo · K. G. Camara Fernandes · L. Assis
Dentistry Graduate Program, Universidade Cruzeiro do Sul, São Paulo, SP, Brasil

Keywords

Acne vulgaris · Hypericin · Photochemotherapy · Photosensitizing agents · Propionibacterium · Antimicrobial photodynamic therapy

1 Introduction

Acne is a chronic disease affecting the pilosebaceous units and is influenced by hormonal action, immune conditions, changes in keratinization and colonization of the sebaceous follicles [1]. Among the species that can colonize the pilosebaceous ducts, three species of the genus Propionibacterium stand out, namely P. avidum, P. acnes and P. granulosum. One of the major characteristics of these species is their ability to produce propionic acid while growing, one of the main activators of the immune system during the development of acne [1, 2]. Despite the treatments available, factors such as the resistance of microorganisms to antibiotics and the slow initial action of topical therapies have intensified the search for new therapeutic options aiming at efficacy, quickness and safety in clinical application. Within this context, Antimicrobial Photodynamic Therapy (aPDT) emerges as a new option for the treatment of acne [3–6]. There is a variety of photosensitizer agents for the photodynamic inactivation of microorganisms, but factors such as safety and low toxicity make the choice more selective. Some studies have demonstrated that hypericin, a red pigment produced by Hypericum perforatum (Hypericaceae), popularly known as Saint John's wort, has antioxidant and analgesic activities and has recently proved to be an effective and useful

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_173



R. A. Barroso et al.

photosensitizer agent found in nature [7–11]. This photosensitizer, when stimulated by low-level laser irradiation at red wavelengths in the presence of oxygen, produces reactive oxygen species (ROS) and other free radicals with destructive and antimicrobial actions [12–18]. Although some authors have already investigated the effects of aPDT on the treatment of acne in experimental and clinical works, the literature lacks a description of the exact mechanism of action of aPDT using the hypericin photosensitizer in the treatment of acne vulgaris. Similarly, there is no consensus in the literature about the ideal laser parameters (wavelength, dose, treatment time) or the most appropriate hypericin concentration for this clinical condition. Therefore, the aim of this study was to evaluate in vitro the effects of aPDT using the hypericin photosensitizer associated with red low-level laser therapy on P. acnes biofilms.

2 Materials and Methods

2.1 Growth of Strain and Biofilm Formation

The strain of Propionibacterium acnes (ATCC 6919) was reactivated on brain heart infusion (BHI) agar (Himedia, Mumbai, India) under incubation at 37 °C for 48 h in an anaerobic chamber (Whitley DG250 Workstation, UK). Biofilm assembly was prepared by using a standard suspension and a spectrophotometer (Micronal AJX-1900) operating at 595 nm, with turbidity adjusted to an optical density of 0.010, which represents a concentration of 2 × 10⁷ CFU/ml [16]. Next, 100 µl of the standard inoculum was distributed in 96-well microplates, for a concentration of 1 × 10⁷ CFU/ml per well, before the addition of 100 µl/well of BHI broth, followed by incubation of the microplates at 37 °C for 48 h in the anaerobic chamber.

2.2 Photosensitizer

The hypericin (Hypericum perforatum) extract photosensitizer (Pratika Pharmacy, São Paulo, SP, Brazil) was solubilised in dimethyl sulfoxide (Synth, São Paulo, SP, Brazil) to obtain a stock solution of 1,000 µg/ml, from which dilutions in phosphate-buffered saline (PBS) were made to obtain the photosensitizer at a concentration of 15 µg/ml, with an absorption band ranging from 562 to 606 nm [15, 16].

2.3 Light Source

A low-intensity laser (MMOptics, São Carlos, SP, Brazil) was used as light source with the following parameters: indium-gallium-aluminium-arsenide (InGaAlAs) diode semiconductor, wavelength of 660 nm, optical output power of 100 mW, energy of 3 J per exposure time of 30 s, dose of 100 J/cm², continuous-mode emission, and focused irradiation over an area of 3 mm² [17].

2.4 Experimental Groups

The experimental design was carried out in triplicate using the following experimental groups (n = 10):

• Control group (C): sterile 0.9% saline solution;
• Laser group 3 J (L3J): red low-level laser irradiation with 3 J for 30 s;
• Hypericin group (H15%): 15 µg/ml of hypericin applied for 3 min (pre-irradiation time, PIT);
• aPDT group H15% + L3J (H15%L3J): 15 µg/ml of hypericin + low-level laser irradiation with 3 J for 30 s.

The groups L3J and H15%L3J received red low-level laser irradiation 1 cm away from the biofilm surface, in focused irradiation mode, leaving an empty well before and after the irradiated well and covering the microtiter plate with black paper during irradiation, to avoid exposing the surrounding wells to light.

2.5 Disaggregation of Biofilm and CFU Counting

After the treatments, the microplates were washed three times with sterile PBS and the biofilms were then disaggregated with an ultrasonic homogenizer operating at 25% power. Next, aliquots were drawn from the microplates for dilutions of 10⁻², 10⁻⁴ and 10⁻⁶ and seeded, in 10 µl volumes, on BHI agar supplemented with 5% sucrose, followed by incubation in the anaerobic chamber for 48 h. After the incubation period, the plates were submitted to CFU counting.

2.6 Statistical Analysis

As the resulting data did not follow a normal distribution, they were statistically analyzed using the Kruskal–Wallis test complemented by Dunn's test at a significance level of 5% (p ≤ 0.05).

3 Results

The application of hypericin at a concentration of 15 µg/ml (H15%) decreased the biofilm by 10.9% (p = 0.0003), whereas the association of the photosensitizer and red


Fig. 1 Reduction of biofilm treated with hypericin (Hypericum perforatum), associated or not with red laser irradiation (3 J for 30 s). Control group (CG): sterile 0.9% saline solution; Group H15%: 15 µg/ml of hypericin applied for 3 min; Group L3J: red laser irradiation with 3 J for 30 s; Group H15% + L3J: 15 µg/ml of hypericin + red laser irradiation with 3 J for 30 s. *p < 0.05 versus CG; #p < 0.05 versus L3J

low-level laser irradiation with an energy of 3 J (H15%L3J) reduced P. acnes by 14.1% (p < 0.0001) in comparison with the control group (CG), as shown in Fig. 1. Moreover, the reduction in H15%L3J was greater than in L3J (p = 0.046).
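Percent reductions like the ones above follow from mean CFU/ml values back-calculated from colony counts on the diluted, plated aliquots. The sketch below shows the bookkeeping only; the colony counts used are hypothetical, not the study's data:

```python
def cfu_per_ml(colonies: int, dilution: float, plated_ml: float) -> float:
    """Back-calculate CFU/ml from a colony count on a diluted, plated aliquot."""
    return colonies / (dilution * plated_ml)

def percent_reduction(control: float, treated: float) -> float:
    """Percentage reduction of the treated group relative to the control group."""
    return (1.0 - treated / control) * 100.0

# Hypothetical example: colonies counted at the 10^-4 dilution,
# plated in 10 ul (0.01 ml) volumes as described in Sect. 2.5
control = cfu_per_ml(120, 1e-4, 0.01)   # 1.2e8 CFU/ml
treated = cfu_per_ml(103, 1e-4, 0.01)
print(f"reduction: {percent_reduction(control, treated):.1f}%")
```

The same `percent_reduction` formula applied to the group means yields the 10.9% and 14.1% figures reported here.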

4 Discussion

There are many studies assessing the application of aPDT for the treatment of P. acnes, such as those using chlorin e6 (Ce6). Jeon et al. [18] conducted a study in which halogen light was used in association with HP at a concentration of 31.25 µg/ml, reporting a minimum inhibitory concentration (MIC) for the P. acnes strain. The authors also emphasized that in vivo application of aPDT reduced the inflammatory effects of acne vulgaris. Antimicrobial action and a decrease in inflammatory factors promoted by chlorin e6 (Ce6) were also observed in the study by Wang et al. [19], who reported immunomodulation of NF-κB and MAPK factors, thus reducing the pathogenicity of acne vulgaris. The use of blue LED light for inactivation of P. acnes has already been shown by Dai et al. [20], who reported that this antimicrobial action is


due to the absorption of the blue wavelengths of the light source by chromophores produced by the microorganisms, leading to intracellular oxidative processes and destructive effects. In the same way, a study by Wang et al. [19] demonstrated that blue light (400–470 nm) promoted intrinsic antimicrobial effects resulting from the presence of endogenous photosensitizing chromophores in pathogenic microbes. Despite the several studies assessing the action of aPDT on P. acnes, the literature is scanty regarding the use of hypericin (Hypericum perforatum) as a photosensitizer for the reduction or elimination of this bacterium. The hypericin photosensitizer alone has been shown to have effective antimicrobial activity, as demonstrated in the studies by Lopez-Bazzocchi et al. [21] and Hudson et al. [22], who reported that the application of hypericin resulted in the elimination of hepatitis B and herpes simplex viruses, including murine cytomegalovirus. This herbal medicine also has excellent antiviral action against the human immunodeficiency virus (HIV) [23]. In addition to its antiviral activity, the hypericin extract also has an antimicrobial action on bacteria living in the oral cavity. Süntar et al. [24] demonstrated that an alcoholic extract of HP reduced biofilms and planktonic cultures of Streptococcus mutans, S. sobrinus, Lactobacillus plantarum and Enterococcus faecalis at concentrations of 32 and 16 µg/ml. Although using another genus of microorganism, the present study demonstrated that the hypericin photosensitizer at a concentration of 15 µg/ml had an antimicrobial action on monotypic biofilms of P. acnes, with a reduction of 10.9%. The use of photosensitizer agents, in this case hypericin (Hypericum perforatum), in association with specific wavelengths from different light sources, such as low-level laser or light-emitting diode (LED) irradiation, generates, in the presence of oxygen, reactive oxygen species (ROS) such as singlet oxygen, hydroxyl radicals and superoxide. These products promote degradation of proteins, cell membranes, organelles and intracellular biochemical processes, injuring or killing the microorganisms and yielding the antimicrobial action [5, 10, 13]. Therefore, the present study makes an innovative contribution to the literature by pioneering the application of hypericin-mediated aPDT for the destruction and elimination of P. acnes biofilms. We conclude that the hypericin (Hypericum perforatum) photosensitizer at a concentration of 15 µg/ml combined with red low-level laser irradiation at 3 J (100 J/cm²) promoted an effective antimicrobial action against P. acnes biofilms. Future studies of acne, in animal models and clinically in humans, should be developed to verify the effectiveness of hypericin (Hypericum perforatum)-mediated aPDT for the destruction and elimination of P. acnes and the treatment of this important and complex dermatological disease.
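The irradiation figures quoted throughout this paper (100 mW output power, 30 s exposure, 3 J energy, 3 mm² spot, 100 J/cm² dose) are mutually consistent, which a two-line check confirms. The sketch below only restates the paper's reported parameters and the standard energy and fluence definitions:

```python
POWER_W = 0.100    # optical output power: 100 mW
TIME_S = 30.0      # exposure time per well, s
SPOT_CM2 = 0.03    # irradiated area: 3 mm^2 = 0.03 cm^2

energy_j = POWER_W * TIME_S            # E = P * t
fluence_j_cm2 = energy_j / SPOT_CM2    # dose = E / area

print(f"energy  = {energy_j:.1f} J")            # matches the reported 3 J
print(f"fluence = {fluence_j_cm2:.0f} J/cm^2")  # matches the reported 100 J/cm^2
```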


5 Conclusions

This study showed that hypericin alone and aPDT using the hypericin photosensitizer exert an effective antimicrobial action against P. acnes biofilms and may be promising resources in the clinical treatment of acne vulgaris.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Corvec S (2018) Clinical and biological features of Cutibacterium (formerly Propionibacterium) avidum, an underrecognized microorganism. Clin Microbiol Rev 31:e00064-17. https://doi.org/10.1128/CMR.00064-17
2. Achermann Y, Liu J, Zbinden R et al (2018) Propionibacterium avidum: a virulent pathogen causing hip periprosthetic joint infection. Clin Infect Dis 66:54–63. https://doi.org/10.1093/cid/cix665
3. Wan MT, Lin JY (2014) Current evidence and applications of photodynamic therapy in dermatology. Clin Cosmet Investig Dermatol 7:145–163. https://doi.org/10.2147/CCID.S35334
4. Fox L, Csongradi C, Aucamp M et al (2016) Treatment modalities for acne. Molecules 21:1063. https://doi.org/10.3390/molecules21081063
5. Nardini EF, Almeida TS, Yoshimura TM et al (2019) The potential of commercially available phytotherapeutic compounds as new photosensitizers for dental antimicrobial PDT: a photochemical and photobiological in vitro study. Photodiagnosis Photodyn Ther 27:248–254. https://doi.org/10.1016/j.pdpdt.2019.05.027
6. Boen M, Brownell J, Patel P et al (2017) The role of photodynamic therapy in acne: an evidence-based review. Am J Clin Dermatol 18:311–321. https://doi.org/10.1007/s40257-017-0255-3
7. Li SS, Zhang LL, Nie S (2018) Severe acne in monozygotic twins treated with photodynamic therapy. Photodiagnosis Photodyn Ther 23:235–236. https://doi.org/10.1016/j.pdpdt.2018.06.016
8. Galeotti N (2017) Hypericum perforatum (St John's wort) beyond depression: a therapeutic perspective for pain conditions. J Ethnopharmacol 200:136–146. https://doi.org/10.1016/j.jep.2017.02.016
9. Napoli E, Siracusa L, Ruberto G et al (2018) Phytochemical profiles, phototoxic and antioxidant properties of eleven Hypericum species: a comparative study. Phytochemistry 152:162–173. https://doi.org/10.1016/j.phytochem.2018.05.003
10. Marrelli M, Menichini G, Provenzano E et al (2014) Applications of natural compounds in the photodynamic therapy of skin cancer. Curr Med Chem 21:1371–1390. https://doi.org/10.2174/092986732112140319094324
11. Villacorta RB, Roque KFJ, Tapang GA et al (2017) Plant extracts as natural photosensitizers in photodynamic therapy: in vitro activity against human mammary adenocarcinoma MCF-7 cells. Asian Pac J Trop Biomed 7:358–366. https://doi.org/10.1016/j.apjtb.2017.01.025
12. Gonçalves MLL, da Mota ACC, Deana AM et al (2018) Photodynamic therapy with Bixa orellana extract and LED for the reduction of halitosis: study protocol for a randomized, microbiological and clinical trial. Trials 19:590. https://doi.org/10.1186/s13063-018-2913-z
13. Garcez AS, Núñez SC, Azambuja N Jr et al (2013) Effects of photodynamic therapy on gram-positive and gram-negative bacterial biofilms by bioluminescence imaging and scanning electron microscopic analysis. Photomed Laser Surg 31:519–525. https://doi.org/10.1089/pho.2012.3341
14. Okuda KI, Nagahori R, Yamada S et al (2018) The composition and structure of biofilms developed by Propionibacterium acnes isolated from cardiac pacemaker devices. Front Microbiol 9:182. https://doi.org/10.3389/fmicb.2018.00182
15. Li ZH, Meng DS, Li YY et al (2014) Hypericin damages the ectatic capillaries in a Roman cockscomb model and inhibits the growth of human endothelial cells more potently than hematoporphyrin does through induction of apoptosis. Photochem Photobiol 90:1368–1375. https://doi.org/10.1111/php.12323
16. Bernal C, Ribeiro AO, Andrade GP et al (2015) Photodynamic efficiency of hypericin compared with chlorin and hematoporphyrin derivatives in HEp-2 and Vero epithelial cell lines. Photodiagnosis Photodyn Ther 12:176–185. https://doi.org/10.1016/j.pdpdt.2015.04.003
17. Yow CM, Tang HM, Chu ES (2012) Hypericin-mediated photodynamic antimicrobial effect on clinically isolated pathogens. Photochem Photobiol 88:626–632. https://doi.org/10.1111/j.1751-1097.2012.01085.x
18. Jeon YM, Lee HS, Jeong D et al (2015) Antimicrobial photodynamic therapy using chlorin e6 with halogen light for acne bacteria-induced inflammation. Life Sci 124:56–63. https://doi.org/10.1016/j.lfs.2014.12.029
19. Wang YY, Ryu AR, Jin S et al (2017) Chlorin e6-mediated photodynamic therapy suppresses P. acnes-induced inflammatory response via NF-κB and MAPKs signaling pathway. PLoS One 12:e0170599. https://doi.org/10.1371/journal.pone.0170599
20. Dai T, Gupta A, Murray CK et al (2012) Blue light for infectious diseases: Propionibacterium acnes, Helicobacter pylori, and beyond? Drug Resist Updat 15:223–236. https://doi.org/10.1016/j.drup.2012.07.001
21. Lopez-Bazzocchi I, Hudson JB, Towers GH (1991) Antiviral activity of the photoactive plant pigment hypericin. Photochem Photobiol 54:95–98. https://doi.org/10.1111/j.1751-1097.1991.tb01990.x
22. Hudson JB, Lopez-Bazzocchi I, Towers GH (1991) Antiviral activities of hypericin. Antiviral Res 15:101–112. https://doi.org/10.1016/0166-3542(91)90028-p
23. Miskovsky P (2002) Hypericin: a new antiviral and antitumor photosensitizer: mechanism of action and interaction with biological macromolecules. Curr Drug Targets 3:55–84. https://doi.org/10.2174/1389450023348091
24. Süntar I, Oyardı O, Akkol EK et al (2016) Antimicrobial effect of the extracts from Hypericum perforatum against oral bacteria and biofilm formation. Pharm Biol 54:1065–1070. https://doi.org/10.3109/13880209.2015.1102948

Evaluating Acupuncture in Vascular Disorders of the Lower Limb Through Infrared Thermography Wally auf der Strasse, A. Pinto, M. F. F. Vara, E. L. Santos, M. Ranciaro, P. Nohama, and J. Mendes

Abstract

Objectives: This study was carried out to evaluate the analgesic effect and the thermal distribution of acupuncture treatment in patients with chronic pain and vascular disorders in the lower limbs. Methods: Two female patients undergoing clinical follow-up at the Hospital de Santo António, Portugal, were evaluated. The skin temperature of the lower extremity regions was measured before and after acupuncture treatment at acupoints B17, BP10, BP6 and IG4, using infrared medical thermography. Thermographic data were analyzed with the ThermaCAM Researcher Pro® 2.10 software, establishing the average temperature in the regions of painful symptoms. Results: When the painful symptoms faded, the average temperature in the regions of interest decreased by 0.5 °C in the Anterior Tibial Muscle, 1.3 °C in the Gastrocnemius Muscle and 1.0 °C in the Soleus Muscle of patient 1, and by 0.6 °C in the Anterior Tibial Muscle of patient 2. Conclusions: Despite the small sample size, this study indicates that infrared thermography may be applicable in the area of acupuncture, showing sensitivity for diagnosing and monitoring the treatment of chronic pain related to vascular disorders in the lower limbs.

W. auf der Strasse (&) · P. Nohama
Universidade Tecnológica Federal do Paraná, Avenida Sete de Setembro, 3165, Rebouças, Curitiba, Brasil

A. Pinto
Serviço de Estomatologia e Cirurgia Maxilo-facial do Centro Hospitalar e Universitário, Hospital de Santo António, Porto, Portugal

M. F. F. Vara · E. L. Santos · M. Ranciaro · P. Nohama
Pontifícia Universidade Católica do Paraná, Programa de Pós-Graduação em Tecnologia em Saúde, Curitiba, Brasil

J. Mendes
Faculdade de Engenharia da Universidade do Porto, Porto, Portugal

J. Mendes
LABIOMEP, Institute of Science and Innovation in Mechanical and Industrial Engineering, Porto, Portugal

Keywords

Infrared medical thermography · Acupuncture · Chronic pain · Vascular disorder

1 Introduction

Acupuncture is a technique of traditional Chinese medicine used since around 3000 BC and, in particular since 1950, it has been used for pain relief in surgical and post-surgical procedures. However, it was only in 1955 that this therapy was officially recognized; in this way, acupuncture was accepted by western scientific medicine and started being used in hospitals [1]. According to the Yin-Yang theory, the insertion of needles at specific anatomical points of the body (acupoints) produces analgesia by acting on the electromagnetic vital energy (Qi). Chinese medicine defines a set of meridians, parallel and symmetrical vessels, which conduct the Qi (flow of vital energy) through the body [2, 3]. Changes in this flow would manifest as a symptom of energy accumulation (Yang: hot, active) or deficiency (Yin: cold, passive). The placement of needles at Yin and Yang points normalizes this energy imbalance and provides analgesic effects in chronic pain [4]. Pain from varicose veins in the lower limbs usually originates from malfunctioning of the walls and valves of the blood vessels, which causes blood stagnation and deformation of the veins, hindering venous return and the correct functioning of the circulatory system. Also, according to Chinese medicine, where there is blood stagnation, there is also stagnation of vital Qi [5]. The acupoints B17, BP6, BP10 and IG4 of the treatment protocol for vascular disorders of the lower limbs [6] favor the

© Springer Nature Switzerland AG 2022 T. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_174



W. auf der Strasse et al.

Yin principle of energy balance by toning the kidney, spleen and pancreas, increasing the production of blood and strengthening the blood vessels, thus allowing the reduction of stasis within them. Following the principles of temperature flow distribution (hot and cold), the points for applying the acupuncture needles are chosen according to the meridian path and the distribution of the nerves that cross the area of painful symptoms, at points proximal and distal to the affected area [7]. Measurements of skin temperature are known to reflect changes in peripheral circulation and can provide useful information on diagnosis and on vascular responses after acupuncture, owing to the differences between cold and hot areas of the body's electromagnetic fields [8], which reflect energy imbalance. For medical use, such changes in the thermal profile make it possible to adjust the points of application of acupuncture and to better understand the effects and adjustments of pain-treatment protocols [9]. Body temperature is a clinical indicator of disease [10], and infrared imaging is a non-invasive, harmless and painless medical imaging technique able to measure very subtle thermal changes on the skin surface. In clinical practice, a skin surface temperature difference greater than 0.3 °C in comparison to the contralateral limb can be considered an indication of clinically significant pathology [11]. Thus, in view of the presented scenario, the goal of this study was to assess the viability of thermography as a method of monitoring acupuncture treatment, by means of a quantitative evaluation of skin temperature changes in the lower limbs of two patients with a clinical diagnosis of chronic pain and vascular disorders.

2

Materials and Methods

2.1 The Sample The research was carried out in accordance with the ethical recommendations of the Portuguese Resolution and was approved by the Research Ethics Committee of the Faculty of Medicine of the University of Porto—FMUP, under the approval letter number 398/19, of November 4, 2019. The images were captured at the Pain Unit of the Department of Anesthesia, at Hospital Geral de Santo António, Centro Hospitalar do Porto, Portugal, during February 2020. The sample consisted of two female patients, with an average age of 53.3 years, diagnosed with painful symptoms in the lower limbs and peripheral circulatory disease, without use of analgesic medication and anti-inflammatory drugs.

2.2 Acupuncture Protocol

The acupuncture protocol for vascular disorders of the lower limbs was composed of points B17, BP6, BP10 and IG4 [12]. The acupoint B17 (Geshu) is located at the level of the depression below the spinous process of the seventh thoracic vertebra (T7), 1.5 cun (Chinese inches) lateral to the dorsal midline. The acupoint BP6 (Sanyinjiao) is located on the medial side of the leg, 3 cun above the medial malleolus, in the fossa posterior to the medial margin of the tibia. The acupoint BP10 (Xuehai) is located 2 cun proximal to the upper edge of the patella, in the belly of the vastus medialis muscle, 2 cun medial to the center of the patella. The acupoint IG4 (Hégu) is located next to the midpoint of the second metacarpal bone, at the apex of the musculature between the 1st and 2nd fingers of the hand. The cun is a traditional length unit of the Chinese system that corresponds to the width of the first phalanx of the thumb, i.e., 3.33 cm, and to a tenth of a chi (Chinese foot).
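Since all acupoint locations above are given in cun, a tiny helper makes the unit conversion explicit. It uses the fixed equivalence stated in the protocol (1 cun = 3.33 cm); note that in practice the cun is proportional to the individual patient's anatomy, so patient-specific values would differ:

```python
CUN_CM = 3.33   # 1 cun, using the fixed value given in the protocol

def cun_to_cm(cun: float) -> float:
    """Convert a distance in cun to centimetres (fixed 3.33 cm/cun)."""
    return cun * CUN_CM

# e.g. BP6 lies 3 cun above the medial malleolus
print(f"3 cun = {cun_to_cm(3):.2f} cm")
```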

2.3 Acquisition of Thermographic Images

The patients were submitted to an acclimatization period of 15 min before the collection of the thermographic images in the doctor's office, in a standing position, so as not to compress the area of interest for clinical investigation. The doors and windows were kept closed and the room temperature was controlled at 21 °C, following the technical guidelines for the acquisition of thermograms [13]. Thermographic images were taken before starting the acupuncture treatment and then again 45 min after removing the therapeutic needles from the insertion points. For patient number one, the thermographic images were acquired in two positions requested by the anesthesiologist. The first sequence of images was obtained at a distance of 1 m between the camera and the patient, standing in the anteroposterior position, in the frontal view, including the knees and feet. The second image was acquired in the anteroposterior position, in the dorsal view, at a distance of 0.50 m between the camera and the patient in the prone position, including the anomalous regions of the knees, the popliteal region and the Achilles tendon. For patient number two, the images were obtained at a distance of 1 m, standing, in the anteroposterior position, in the frontal view. The Regions of Interest (ROIs) were defined so as to include the muscles under study, based on the superficial muscle anatomy and the pain reported by the patients, as shown in Fig. 1.

Evaluating Acupuncture in Vascular Disorders of the Lower …

1159

2.4 Equipment

A thermal camera certified for medical use, model T530 (FLIR® Systems Inc., Wilsonville, Oregon, USA), was used, with a NETD below 40 mK, a temperature range of 20–120 ºC, a focal plane array of 320 × 240 pixels and an accuracy of ±2% of the reading.

2.5 Thermal Image Processing

Fig. 1 a Thermographic image in the anteroposterior position, frontal view; b thermographic image in the anteroposterior position, dorsal view, showing the thermo-anatomical ROIs (anterior tibial, gastrocnemius and soleus muscles)

In the frontal plane, ROI 1 was defined over the Anterior Tibial Muscle (ATM), because it is the most superficial muscle in the anterior compartment of the leg. This muscle extends medially and caudally to the lateral surface of the tibia, serving as a reference for the neurovascular path of the anterior tibial arteries and veins. In the dorsal plane, ROI 2 was defined over the Gastrocnemius Muscle (GM), an equally apparent muscle: it starts 8 cm proximal to the popliteal fold in the posterior region of the knee, extends distally up to 10 cm proximal to the medial malleolus, and has an average length of 15 cm [14]. ROI 3 was defined over the Soleus Muscle (SM), a broad flat muscle located immediately in front of the GM. Its tendon fibers arise from the back of the fibular head, from the upper third of the posterior surface of the body of the fibula, and from the popliteal line and the middle third of the medial edge of the tibia. Some fibers also arise from a tendinous arch placed between the tibial and fibular origins of the muscle, in front of which the popliteal vessels and the tibial nerve pass [15].

The ROIs in the thermal images were analyzed using the software FLIR ThermaCAM Research Pro® 2.10, calculating the maximum, average and minimum temperatures of each region. The geometric shapes and sizes of the ROIs were the same for the images obtained before and after the acupuncture treatment. In the frontal view, a rectangular 12 × 60 pixel ROI was defined to analyze the ATM; in the dorsal view, an ellipse with axes of 70 × 46 pixels was used for the analysis of the GM and, finally, a square area of 30 × 30 pixels for the analysis of the SM. The dimensions of the ROIs were chosen so as to avoid the curvature of the leg, in order not to influence the correct acquisition of the skin temperature of the investigated region, according to the guidelines of medical thermography [13], thus allowing standardization for the different sizes of the lower limbs of the studied patients, as shown in Table 1.
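The per-ROI statistics described above can be sketched in a few lines. This is a minimal illustration, assuming the thermogram is available as a 2-D NumPy array of temperatures in °C; the array contents and ROI positions are hypothetical, and only the ROI shapes and pixel dimensions come from the text:

```python
import numpy as np

def roi_stats(thermogram, mask):
    """Return (min, mean, max) temperature over the masked pixels."""
    vals = thermogram[mask]
    return vals.min(), vals.mean(), vals.max()

def rect_mask(shape, top, left, height, width):
    """Boolean mask for a rectangular ROI."""
    m = np.zeros(shape, dtype=bool)
    m[top:top + height, left:left + width] = True
    return m

def ellipse_mask(shape, cy, cx, ry, rx):
    """Boolean mask for an elliptical ROI with semi-axes ry, rx."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0

# Hypothetical uniform 240x320 thermogram (the T530 detector is 320x240 pixels).
img = np.full((240, 320), 30.0)

# ROI 1: 12x60-pixel rectangle (ATM); the position is illustrative only.
atm = rect_mask(img.shape, top=100, left=150, height=60, width=12)
# ROI 2: ellipse with 70x46-pixel axes (GM), i.e. semi-axes 35 and 23.
gm = ellipse_mask(img.shape, cy=120, cx=160, ry=35, rx=23)

tmin, tmean, tmax = roi_stats(img, atm)
```

Keeping the same masks for the before/after images, as the authors did, guarantees that the compared statistics cover identical pixel sets.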

3 Results

3.1 Patient 1

Patient 1 had a medical diagnosis of peripheral vascular disease (PVD), with severe painful symptoms in the left leg, with emphasis on the posterior region, predominantly in the GM and SM. In the thermographic evaluation before the application of the acupuncture therapeutic protocol, the data demonstrated thermal asymmetry in all the evaluated ROIs, with the ATM presenting a temperature 0.5 °C higher on the right side compared to the contralateral one. In the analysis of the GM and SM, the regions of most intense pain reported by the volunteer, the results were more pronounced: the GM temperature was 1.3 °C higher on the right side, and the SM showed a difference of 2.2 °C, also greater on the right side. However, 45 min after the application of acupuncture, in the second thermal image acquisition, all the investigated ROIs showed a better thermal distribution on the lower limbs, as described in Table 2. The sequence of thermographic images in the frontal and dorsal views of patient 1, obtained before the therapeutic protocol of acupuncture, shows hot and cold


Table 1 Regions of interest, respective shape and size (in pixels)

ROI | Region                       | Shape       | Size (pixels)
1   | Anterior tibial muscle (ATM) | Rectangular | 12 × 60
2   | Gastrocnemius muscle (GM)    | Ellipse     | 46 × 70
3   | Soleus muscle (SM)           | Square      | 30 × 30

Table 2 Mean temperature in the anteroposterior view, in front view and dorsal view, before acupuncture and after 45 min (patient 1)

                         | ROI | Left side (°C) | Right side (°C) | Asymmetry (°C)
Before acupuncture       | 1   | 30.0           | 30.5            | 0.5
                         | 2   | 29.4           | 30.4            | 1.3
                         | 3   | 29.8           | 32.0            | 2.2
45 min after acupuncture | 1   | 29.0           | 28.9            | 0.1
                         | 2   | 30.1           | 30.3            | 0.2
                         | 3   | 28.7           | 29.9            | 1.2

Fig. 2 Patient 1—asymmetry (°C) to the contralateral side in the anterior tibial (ATM), gastrocnemius (GM) and soleus (SM) muscles, before and 45 min after acupuncture

regions in the investigated ROIs, corresponding to the ATM, GM and SM muscles. The comparative analysis before and 45 min after the acupuncture application shows a more stable thermal distribution compared to the contralateral side in all the muscles, as illustrated in Fig. 2. Although the SM asymmetry decreased from 2.2 to 1.2 °C, the region remains asymmetrical, hotter on the right leg; thus more treatment sessions would be necessary to recover the thermal equilibrium. The small thermal differences found in the ATM and GM are considered within the normal range regarding the sensitivity criteria for diagnosis according to the medical thermography manual [11], as shown in Fig. 3.

3.2 Patient 2

Patient 2 was diagnosed with varicose veins in the lower limbs, accompanied by more severe painful symptoms in the frontal region of the right leg, marked fatigue of the legs in

the standing position, bilateral edema and episodes of night cramps. The evaluation of the thermal images before the application of the acupuncture suggests an abnormality in the right leg, with an asymmetry of 0.8 °C, as shown in Table 3. Forty-five minutes after the application of the acupuncture, the investigated ROI demonstrated a better thermal distribution in the lower limbs, with a remaining thermal asymmetry of 0.2 °C (mean value) in the right leg, considered within normal standards [11], according to the thermal images illustrated in Fig. 4. In the contralateral comparison, the analysis of the data showed a better equilibrium in the thermal distribution, according to Fig. 5.

4 Discussion

The acquired thermograms of the patients showed an initial asymmetry in the ROIs, coinciding with the pain regions reported in the clinical medical diagnosis. According to [16], medical thermography helps to assess the local and systemic effects of acupuncture treatment. The results presented in [16] indicate that the stimulation of acupoint R3 (Taixi), located in the leg, causes a decrease in temperature in relation to the unstimulated side; however, needle stimulation can also promote an increase in temperature in the case of previously low vascularization. The results presented by [16] corroborate the images acquired in the protocol of vascular disorder of the lower limbs at the acupoints B17, BP6, BP10 and IG4, where the contralateral temperature difference observed may be due to reduced oxygenation by peripheral vasoconstrictor activity, as a consequence of reduced blood perfusion, resulting in a lower temperature.


Fig. 3 Thermal images of patient 1: before the application of acupuncture (a, b) and 45 min after (c, d). Anterior tibial muscle (a, c), gastrocnemius and soleus muscles (b, d)

Table 3 Average temperature in the anteroposterior view, before acupuncture and 45 min after (patient 2)

                         | ROI | Left side (°C) | Right side (°C) | Asymmetry (°C)
Before acupuncture       | 1   | 32.6           | 31.8            | 0.8
45 min after acupuncture | 1   | 32.3           | 32.5            | 0.2

Fig. 4 Thermal images of patient 2, before the application of the acupuncture (a), and 45 min after application (b)

Such results favor needle therapy in the meridians, according to the Chinese medicine principles of energy accumulation (Yang—hot, active) or deficiency (Yin—cold, passive), under the premise that the mechanical stimuli caused by the needling of acupoints promote the dispersion of stagnant energy, decreasing the temperature [17]. Regarding the asymmetry of the lower limbs, the results reaffirm the data presented by [18], who conducted clinical research on thermographic analysis of the upper limbs. The outcomes of that investigation showed that the perception of temperature differences allowed adjusting the treatment, given the feasibility of correlating the thermal image with the location of the perfusion veins.

According to [9, 19], it is possible to correlate the acupoints with the thermal effects expressed on the skin after stimulation, demonstrating the therapeutic efficacy in reducing pain symptoms and the ability of infrared thermography to monitor the procedure. In the present study, a reduction in painful symptoms was also found, as well as a decrease in the absolute temperature and in the asymmetry, especially in the soleus muscle ROI of patient 1. It is noteworthy that the pain referred by the patient is subjectively evaluated based on reference scales, such as the Visual Analog Scale (VAS) and the Visual Numeric Scale (VNS). The sensation of inflammatory pain involves systemic responses of the


Likewise, another study [22] proposed to evaluate the applicability of thermography in the diagnosis of venous thrombosis. In this severe pathology, the thermal images show a higher temperature at the investigated points, unlike peripheral vascular disease, in which the evaluated points are colder, in accordance with the findings of this investigation.


Fig. 5 Patient 2—asymmetry to the contralateral side in anterior tibial muscle (ATM), before and 45 min after the application of acupuncture

body with changes in blood circulation and differences in body temperature. Thermal imaging evaluation has the potential to generate quantitative information about the thermal variation on the skin, providing detailed information on the location and magnitude of pain in the inflammatory response, according to the investigations carried out in [20]. The effects of acupuncture treatment on the thermal skin response described by [21], in a trial carried out on fifty patients, indicate the possibility of visualizing and quantifying the effects of therapy by means of thermal alterations as a function of needle application, in the majority of the evaluated points. Their results demonstrated a significant increase in skin temperature 2 min after needling at the acupoints, followed by a decrease in temperature in the monitored area after 10 min. Another important investigation on the evaluation of thermography was carried out by Álvarez-Prats et al. [3], who analyzed the thermal mapping of the lower limbs of 144 female volunteers with respect to vein perfusion. The results of that study demonstrated that thermography serves as an important tool for monitoring treatment in patients using acupuncture to help the perfusion of cutaneous vessels at different acupoints on the lower limbs. By overlapping the patient's thermal and photographic images, it was possible to visualize the effects generated by the acupuncture technique from the vascular autonomic point of view. The results showed thermal changes in the perfusion of the vessels due to the insertion of the needle, in response to the treatment protocol. This demonstrates that medical thermography is a useful tool for conducting acupuncture-guided treatments.
The thermal profiles of the painful regions of the two volunteers of the present study show a correlation between pain symptoms and thermal changes before and after treatment at acupoints B17, BP6, BP10 and IG4 of the proposed therapeutic protocol, in accordance with the results presented by [3].

5 Conclusions

From the results obtained in the experimental clinical trial, we conclude that it is possible to evaluate acupuncture analgesic therapy through infrared thermography in people with peripheral vascular disease, in the regions of interest (ATM, GM and SM). The painful symptomatic points related to disorders of the circulatory system were analyzed through the mean temperature values, which proved adequate for the diagnosis, or quantitative monitoring, of the treatment of pain. However, in view of the small investigated sample group, more subjects will be analyzed, searching for data confirmation and generalization, and in different portions of the lower limbs.

Acknowledgements The authors thank the volunteers of Hospital de Santo António, who collaborated in this research, as well as the Araucária Foundation and CNPq for scholarships, and the support from the Coordination for the Improvement of Higher Education Personnel—Brazil (CAPES)—Financing Code 001.

Conflict of Interest The authors do not have conflicts of interest to declare.

References

1. Ammendolia C, Furlan AD, Imamura M, Irvin E, van Tulder M (2008) Evidence-informed management of chronic low back pain with needle acupuncture. Spine J 8:160–172
2. Chen YH, Lin JG (2018) Acupuncture and itch: basic research aspects. In: Experimental acupuncturology. Springer, Singapore, pp 67–80
3. Álvarez-Prats D, Carvajal-Fernández O, Valera Garrido F, Pecos-Martín D, García-Godino A, Santafe MM, Medina-Mirapeix F (2019) Acupuncture points and perforating cutaneous vessels identified using infrared thermography: a cross-sectional pilot study. Evidence-Based Complementary and Alternative Medicine
4. Povolny B (2008) Acupuncture and traditional Chinese medicine: an overview. Techniques Regional Anesthesia Pain Manage 2:109–110
5. Shaw V, Mclennan AK (2016) Was acupuncture developed by Han Dynasty Chinese anatomists? Anatom Record 299:643–659
6. Wei DZ, Yu S, Yongqiang Z (2016) Perforators, the underlying anatomy of acupuncture points. Altern Ther Health Med 22:25
7. Fan Y, Wu Y (2015) Effect of electroacupuncture on muscle state and infrared thermogram changes in patients with acute lumbar muscle sprain. J Trad Chinese Med 35:499–506
8. Norheim AJ, Mercer J (2012) Can medical thermal images predict acupuncture adverse events? A case history. Acupuncture Med 30:51–52
9. Li W, Ahn A (2016) Effect of acupuncture manipulations at LI4 or LI11 on blood flow and skin temperature. J Acupuncture Meridian Stud 9:128–133
10. Lahiri BB, Bagavathiappan S, Raj B, Philip J (2017) Infrared thermography for detection of diabetic neuropathy and vascular disorder. In: Application of infrared to biomedical sciences. Springer, Singapore, pp 217–247
11. Brioschi ML, Teixeira MJ, Yeng LT, Silva FMM (2012) Manual de Termografia Médica, 1st edn. Andreoli, São Paulo
12. Kim DH, Ryu Y, Hahm DH, Sohn BY, Shim I, Kwon OS, Lee BH (2017) Acupuncture points can be identified as cutaneous neurogenic inflammatory spots. Sci Rep 7:1–14
13. Ring EFJ, Ammer K (2012) The technique of infrared imaging in medicine. Thermol Int 10:7–14
14. Moraes FBD, Paranahyba RM, Oliveira ED, Kuwae MY, Rocha VLD (2007) Estudo anatômico do músculo gastrocnêmio medial visando transferência muscular livre funcional. Revista Brasileira de Ortopedia 42:261–265
15. Araujo LB, Braga de DM, Kanashiro MS, Baccaro VM, de Souza CDA, Batista BP, Lourenço MA (2019) Análise eletromiográfica dos músculos tibial anterior e sóleo em pacientes hemiparéticos nos ambientes aquático e terrestre. Revista Brasileira de Ciência e Movimento 27:106–121
16. Ipólito AJ (2010) Efeitos Térmicos da Acupuntura no Ponto Taixi (Rim 3), Avaliados Mediante Teletermografia Infravermelha. Doctoral dissertation, Programa de Pós-Graduação Interunidades em Bioengenharia, São Carlos
17. Huang T, Huang X, Zhang W, Jia S, Cheng X, Litscher G (2013) The influence of different acupuncture manipulations on the skin temperature of an acupoint. In: Evidence-Based Complementary and Alternative Medicine
18. Soares Parreira F, Pereira Barbosa M, Álvarez Prats D, Carvajal Fernández O (2019) Acupuncture points and cutaneous perforating vessels identified with thermography
19. Freire FC, Brioschi ML, Neves EB (2015) Avaliação dos Efeitos da Acupuntura no IG4 (Hégu) por Termografia de Infravermelho. Pan Am J Med Thermol 2:63–69
20. Etehadtavakol M, Ng EYK (2017) Potential of thermography in pain diagnosing and treatment monitoring. In: Application of infrared to biomedical sciences. Springer, Singapore, pp 19–32
21. Agarwal-Kozlowski K, Lange AC, Beck H (2009) Contact-free infrared thermography for assessing effects during acupuncture: a randomized, single-blinded, placebo-controlled crossover clinical trial. Anesthesiology 111:632
22. Shaydakov M, Diaz (2017) Effectiveness of infrared thermography in the diagnosis of deep vein thrombosis: an evidence-based review. J Vasc Diagnostic Interv 5:7–14

Photodynamic Inactivation in Vitro of the Pathogenic Fungus Paracoccidioides brasiliensis José Alexandre da Silva Júnior, R. S. Navarro, A. U. Fernandes, D. I. Kozusny-Andreani, and L. S. Feitosa

Abstract

Paracoccidioidomycosis (PCM) is a disease caused by the fungus Paracoccidioides brasiliensis (Pb), manifesting as a systemic mycosis, which can lead to death. This study evaluated the action of in vitro Photodynamic Inactivation (PDI) on Pb yeast cells using an LED light source (LEC Prime WL—MMOptics: 455 nm, 0.8 W/cm² and 0.2 W with 6 mm tip) associated with the photosensitizer (Fs) curcumin 98%. The photodynamic action was performed with 4 groups and 6 application cycles of the technique, fractionated every 5 min, for a total cycle of 30 min. The tests were performed in quintuplicate and the groups were divided into: group 1—without Fs in its composition and without LED irradiation (L−F−); group 2—without Fs and with exposure to irradiation (L+F−); group 3—with Fs and without exposure to LED irradiation (L−F+); and group 4—composed with Fs and exposed to LED irradiation (L+F+). The results showed that the reduction of Pb, measured by means of CFU log, was significantly higher in group 4 (0.7748 ± 0.1876) compared to group 1 (2.399 ± 0.09470) (p < 0.001), group 2 (2.178 ± 0.08214) (p < 0.001) and group 3 (1.818 ± 0.09987) (p < 0.001). The fungus P. brasiliensis was eliminated from the 20th minute of irradiation in the group exposed to the LED in the presence of Fs (p < 0.001). We conclude that PDI associated with the administration of curcumin 98% is capable of promoting the in vitro elimination of the pathogenic fungus Paracoccidioides brasiliensis.

Keywords

Photodynamic inactivation · Paracoccidioides brasiliensis · LED · Curcumin

J. A. da Silva Júnior (corresponding author) · A. U. Fernandes
Department of Biomedical Engineering, Anhembi Morumbi University, Doutor Altino Bondensan Road, 500, São José dos Campos Technology Park - Eugênio de Mello, São José dos Campos, São Paulo, Brazil

R. S. Navarro · D. I. Kozusny-Andreani · L. S. Feitosa
Bioengineering and Biomedical Engineering Graduate Program, University Brazil, São Paulo, Brazil



1 Introduction

Paracoccidioides brasiliensis is a thermodimorphic fungus that presents itself in the form of mycelium at room temperature, 25 °C, and as yeast at 37 °C [1]. This fungus is the etiologic agent of Paracoccidioidomycosis (PCM), occurring in tropical and subtropical regions [2] and manifesting itself as a deep, systemic mycosis of granulomatous character, which especially compromises the pulmonary and mucocutaneous tissues, the lymphatic system and, by extension, any other organ, and can lead to death [3, 4]. Like other fungal diseases, PCM depends on the interaction between the fungus and the immune response of the host to evolve to spontaneous cure or to spread through the body causing chronic granulomatosis [5]. The confirmatory diagnosis of the pathology depends on the demonstration of the etiological agent in biological materials associated with serological and mycological tests [6–8], and the treatment consists of attack and maintenance phases, through the administration of specific drugs, consolidating itself as a usually long treatment of up to two years of therapy [9]. Faced with the need to implement other therapies for fungal infections, Photodynamic Inactivation (PDI) has been considered a promising alternative. The technique is based on topical or systemic administration of a photosensitizer followed by irradiation in the therapeutic window (600–800 nm) at the appropriate wavelength [10]. The photosensitization process may occur by two different mechanisms: Type I, in which the light energy absorbed by the photosensitizer is transferred to the biomolecule through electron transfer or hydrogen abstraction, and Type II, in which the excitation energy is transferred to the

molecular oxygen, resulting in the formation of singlet oxygen (¹O₂), as shown in Fig. 1 [11]. Both pathways induce cell death and destruction of the diseased tissue [12, 13].

Fig. 1 Type I and Type II photosensitization mechanisms, where … is a biomolecule, FS is a photosensitizer and the targets are biological sites (lipids, proteins, DNA)

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_175

2 Materials and Method

2.1 Fungal Lineage and Culture Media

To determine the antifungal activity of PDI, the strain from the Center Specialized in Medical Mycology at the Federal University of São Paulo—UNIFESP, Diadema campus, was used. Sabouraud-Dextrose Agar (ASD, KASVI brand) and liquid Tryptocasein Soy (TSB, KASVI brand), prepared according to the manufacturer's recommendations, were used for cultivation.

2.2 Cultivation of Paracoccidioides brasiliensis

The Paracoccidioides brasiliensis strain was cultivated in ASD medium and incubated at 28 ºC in a culture incubator for seven days to verify cell viability. Part of the mycelium was removed from the P. brasiliensis colony and cultivated in TSB liquid medium.

2.3 Preparation of Paracoccidioides brasiliensis for in Vitro Tests

After the cultivation of P. brasiliensis in TSB medium, the supernatant was discarded and the precipitated cellular material was resuspended in a sterile NaCl solution (0.5%) and centrifuged again (4000 rpm for 5 min), repeating the procedure five times. After the fifth wash, the conidia were resuspended, and this suspension was diluted in 9 mL of sterile NaCl (0.5%) and homogenized for 1 min in a vortex; then 1 mL was removed, added to another tube containing 9 mL of NaCl and homogenized for 1 min. Three dilutions of the fungus (1 × 10³) were performed and the number of viable cells was then counted using a spectrophotometer at a wavelength of 530 nm, obtaining a concentration of 10³ viable cells/mL of P. brasiliensis.

2.4 Photosensitizer

The photodynamic action of the commercially sourced photosensitizer curcumin 98% (PDT Pharma, Cravinhos, São Paulo, Brazil) was evaluated. The Fs was stored in a dry environment protected from light, avoiding possible decomposition. To determine the photodynamic activity, the Fs was dissolved in dimethyl sulfoxide (DMSO, Dynamic brand, Diadema, São Paulo, Brazil) at a concentration of 1 mg/mL. From this solution, 0.05 mL was removed and added to microtubes containing 1.1 mL of yeast cell suspension of Pb and TSB medium.

2.5 Light Source

A Light Emitting Diode (LED), LEC Prime WL model from MM Optics (São Carlos, São Paulo, Brazil), with a wavelength of 455 nm, power density of 0.8 W/cm² and power of 0.2 W with a 6 mm tip, was used as the light source.
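The three successive 1:10 dilution steps described in Sect. 2.3 can be checked with a short calculation. This is a sketch in which the starting concentration is hypothetical; only the 1 mL-into-9 mL scheme and the 10³ cells/mL target come from the text:

```python
def dilute(conc_per_ml, transfer_ml=1.0, diluent_ml=9.0):
    """One serial-dilution step: transfer_ml of suspension into diluent_ml of sterile NaCl."""
    return conc_per_ml * transfer_ml / (transfer_ml + diluent_ml)  # 1:10 per step

conc = 1e6          # hypothetical starting suspension, cells/mL
for _ in range(3):  # three serial 1:10 dilutions
    conc = dilute(conc)

print(conc)         # 10**3-fold reduction: 1000.0 cells/mL
```

Each step reduces the concentration tenfold, so three steps give the 10³ overall dilution factor mentioned by the authors.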

2.6 Photodynamic Action

The photodynamic action was performed using a completely randomized experimental design with four groups and six cycles, performed in quintuplicate. The groups were divided into group 1 (negative control group), without the presence of Fs in its composition and not exposed to LED irradiation (L−F−); group 2 (LED group), without the presence of Fs and exposed to irradiation (L+F−); group 3 (Fs group), with Fs in its composition and not exposed to LED irradiation (L−F+); and group 4 (treatment group), composed with Fs and exposed to LED irradiation (L+F+). To receive irradiation, the samples were prepared in Eppendorf tubes. The final volume of each sample was 1150 µL: groups 1 and 2 consisted of 1050 µL of TSB and 100 µL of the third dilution (10³) of the Pb fungus, while groups 3 and 4 consisted of 1000 µL of TSB, 100 µL of Pb fungus (10³) and 50 µL of Fs, as described in Table 1. After preparation, the samples were incubated for fifteen minutes, obeying the pre-irradiation time necessary for the Fs to adhere to the cell wall and allow light to enter the fungal cell. Subsequently, the samples were irradiated with the LED light source, LEC Prime WL (MMOptics, São Carlos, Brazil), at 455 nm. The irradiation occurred in periods of five continuous minutes, making six cycles and totaling thirty minutes of irradiation.
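For reference, the radiant exposure implied by the stated LED parameters can be computed directly. This is a sketch assuming the 0.8 W/cm² irradiance from Sect. 2.5 is constant over the exposed spot; the derived fluence values are not stated in the paper:

```python
IRRADIANCE_W_CM2 = 0.8   # LED power density from Sect. 2.5
CYCLE_S = 5 * 60         # one 5-min irradiation cycle, in seconds
N_CYCLES = 6             # six cycles, 30 min total

# Radiant exposure (fluence) = irradiance x time, in J/cm^2.
fluence_per_cycle = IRRADIANCE_W_CM2 * CYCLE_S
fluence_total = fluence_per_cycle * N_CYCLES

print(fluence_per_cycle, fluence_total)  # 240 J/cm^2 per cycle, 1440 J/cm^2 over 30 min
```

Tracking cumulative fluence this way is what allows per-cycle CFU counts (every 5 min) to be read as a dose-response curve.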

Table 1 Composition of evaluated groups

Group    | Composition
1 (L−F−) | 1050 µL of TSB + 100 µL of fungus Pb
2 (L+F−) | 1050 µL of TSB + 100 µL of fungus Pb
3 (L−F+) | 1000 µL of TSB + 100 µL of Pb + 50 µL of Fs curcumin 98%
4 (L+F+) | 1000 µL of TSB + 100 µL of Pb + 50 µL of Fs curcumin 98%

Groups 1 and 3 were submitted to the same treatment, in which 50 µL were removed from each sample and plated on ASD medium (plates 1–5), repeating the procedure five more times every five minutes, totaling six samplings over the final 30 min. Groups 2 and 4 were submitted to the LED irradiation treatment, in which the samples were exposed to irradiation for thirty minutes and, every five minutes of this period, 50 µL were removed from each sample and plated on ASD medium (plates 1–5) with the aid of a Drigalski loop. The seeded Petri dishes were incubated at 28 °C in the BOD incubator for seven days for later counting of Colony Forming Units per milliliter (CFU/mL). The numbers were transformed into logarithms (log) for statistical analysis.
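The conversion from plate counts to the CFU/mL log values analysed in Sect. 3 can be sketched as follows. This is a hedged illustration, since the exact scaling the authors used is not stated: it assumes colonies counted on a plate seeded with a 50 µL aliquot are scaled to 1 mL and log10-transformed, and the colony count is hypothetical:

```python
import math

def cfu_per_ml(colonies, plated_ml=0.05, dilution_factor=1):
    """Scale a plate count back to CFU/mL of the sampled suspension."""
    return colonies * dilution_factor / plated_ml

def log_cfu(colonies, plated_ml=0.05, dilution_factor=1):
    """log10-transformed count, the form used in the statistical analysis."""
    return math.log10(cfu_per_ml(colonies, plated_ml, dilution_factor))

# Hypothetical plate with 17 colonies grown from a 50 uL aliquot:
print(round(log_cfu(17), 2))  # log10(17 / 0.05) = log10(340), about 2.53
```

The log transform turns the multiplicative killing effect of PDI into additive differences, which is what makes the ANOVA on group means appropriate.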

2.7 Statistical Analysis

The results passed the normality test, which identified a parametric distribution of the data. Thus, for comparison between the CFU counts as a function of time, analysis of variance (two-way ANOVA) was used, adopting the Bonferroni test as a complementary test. All analyses were performed using the GraphPad Prism statistical software, version 5.0, and the significance level adopted was p < 0.05 (95% confidence). The mean and standard deviation were adopted to present the results, both appropriate for parametric values.

3 Results

In this study, 455 nm LED photodynamic inactivation associated with the photosensitizer curcumin 98% was applied for the in vitro reduction of the pathogenic fungus Paracoccidioides brasiliensis. The number of CFU counted in the Petri dishes was transformed into logarithms (log) for checking averages and applying statistical tests. Group 1 (L−F−) had a mean CFU/mL of 2.399 and standard deviation of 0.09470; group 2 (L+F−) had a mean of 2.178 with standard deviation of 0.08214; and group 3 (L−F+) had a mean of 1.818 and standard deviation of 0.09987. The three groups were compared with group 4 (L+F+), which obtained a mean CFU equal to 0.7748 with a standard deviation of 0.1876. Table 2 shows that the three experimental groups are statistically different from group 4, with groups 1, 2 and 3 having p < 0.001 in relation to group 4.

Table 2 Comparison between the means (CFU/mL log) of the 3 experimental groups used in the P. brasiliensis fungus trials with respect to group 4

Group    | Average ± standard deviation | ANOVA test
G1: L−F− | 2.399 ± 0.09470              | p < 0.001 versus G4
G2: L+F− | 2.178 ± 0.08214              | p < 0.001 versus G4
G3: L−F+ | 1.818 ± 0.09987              | p < 0.001 versus G4
G4: L+F+ | 0.7748 ± 0.1876              | –

Table 3 Values of CFU/mL log averages per experimental group and time of LED exposure

Time of exposure (min) | Group 1 | Group 2 | Group 3 | Group 4
0                      | 2.54    | 2.54    | 2.54    | 2.54
5                      | 2.54    | 2.37    | 1.93    | 1.82
10                     | 2.43    | 2.29    | 1.85    | 1.49
15                     | 2.39    | 2.20    | 1.82    | 0.57
20                     | 2.37    | 2.13    | 1.80    | 0
25                     | 2.36    | 2.04    | 1.78    | 0
30                     | 2.31    | 2.03    | 1.73    | 0
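As a quick illustrative reanalysis, the per-group means over the six post-baseline time points of Table 3 can be recomputed in a few lines. This sketch uses only the table values and will not exactly reproduce the per-replicate statistics reported in Table 2:

```python
from statistics import mean, stdev

# CFU/mL (log) at 5, 10, 15, 20, 25 and 30 min, transcribed from Table 3.
groups = {
    "G1: L-F-": [2.54, 2.43, 2.39, 2.37, 2.36, 2.31],
    "G2: L+F-": [2.37, 2.29, 2.20, 2.13, 2.04, 2.03],
    "G3: L-F+": [1.93, 1.85, 1.82, 1.80, 1.78, 1.73],
    "G4: L+F+": [1.82, 1.49, 0.57, 0.00, 0.00, 0.00],
}

for name, vals in groups.items():
    print(f"{name}: {mean(vals):.2f} ± {stdev(vals):.2f} log CFU/mL")
```

For groups 1–3 these time-averaged means (2.40, 2.18, 1.82) match the means reported in Table 2 to two decimals; group 4 comes out lower (0.65 versus 0.7748), presumably because Table 2 was computed over the quintuplicate replicates rather than these rounded time-point averages.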


Table 3 shows the reduction in CFU averages as a function of exposure time. The initial time of 0 min was standardized in all groups with the value corresponding to the 5th minute of the negative control group, equivalent to 2.54 CFU/mL. Group 1 was reduced from 2.54 to 2.31 CFU/mL at the end of thirty minutes; group 2 from 2.54 to 2.03 CFU/mL; and group 3 from 2.54 to 1.73 CFU/mL after the period of application of the proposed protocol. In group 4, after 5 min of irradiation the mean count was 1.82 CFU/mL, after 10 min it was 1.49 CFU/mL, and at 15 min it was 0.57 CFU/mL. From the 20th minute of LED irradiation the elimination of CFU was verified, reaching 0 CFU/mL and remaining so until the 30th minute of irradiation.


Figure 2 shows the logarithmic reduction in the four experimental protocols, disregarding the time of PDI application. The reduction is more marked in group 4, which has a statistically significant difference from the other three groups: group 1 (L−F−), group 2 (L+F−) and group 3 (L−F+) presented p < 0.001 in relation to group 4 (L+F+). There was no statistical difference among groups 1, 2 and 3. Figure 3 shows the evolution of the logarithmic reduction as a function of exposure time in the 4 experimental protocols, verifying that the reduction is more accentuated in group 4 from 15 min, reaching 0 at the times of 20, 25 and 30 min. According to Tables 2 and 3 and Figs. 2 and 3, the elimination of CFU was observed from 20 min of LED irradiation associated with the photosensitizer curcumin 98% for inactivation of the fungus Paracoccidioides brasiliensis.

Fig. 2 Distribution of logarithmic CFU/mL results according to experimental group. The asterisks represent the significance when compared to group 4 (L+F+) (***p < 0.001)

4 Discussion

The high growth in the number of fungal infections has increased the interest in alternative and complementary therapies that aim at a fungicidal or fungistatic action. For this reason, Photodynamic Inactivation has attracted scientific interest because it is a non-invasive technique with microbicidal and healing properties [14]. Photodynamic Inactivation is based on the local administration, accumulation and retention of a photosensitizing agent in the tissue, followed by irradiation of the region to be treated with visible light of a wavelength corresponding to the absorption spectrum of the photosensitizer [15]. The

Fig. 3 Distribution of CFU/mL logarithmic results according to experimental group as a function of LED exposure time (p < 0.001)


knowledge about the various characteristics of PDI in the fight against pathologies that cause systemic mycoses provides even more satisfactory treatment results. Among the peculiarities are the mastery of the technique regarding the concentrations of the photosensitizer to be used, the knowledge about the species of the microorganism, and the distinction of pre-irradiation periods, light source and doses employed [16]. The pathogenicity of Paracoccidioides brasiliensis is conditioned by several factors, and thermotolerance is one of the factors most related to the dissemination of the fungus in the host organism. Although the fungus is only infective in the form of hyphae, present in the environment under thermal conditions of 25 ºC, in this study PDI was applied to yeast-form cells, as this is the pathogenic form in living beings, under thermal conditions of 37 ºC, and the one that manifests itself in more severe clinical forms in immunocompromised patients [14, 17, 18]. Photodynamic Inactivation is indicated by several studies as an alternative treatment for various infectious pathologies. The interest starts from the fact that there is a cytotoxic effect on yeast cells resulting from sensitization with a substance, in this case a photosensitizer, which is harmless to host tissues [19]. Based on the principle that describes the action of PDI on several microorganisms, in this study with P. brasiliensis the Fs curcumin 98% was used, because P. brasiliensis is more susceptible to the action of curcumin when compared to other fungi, such as some species of Candida, data also confirmed in our research [20].
A study was conducted to verify the photodynamic action of six photosensitizers derived from protoporphyrin IX, dissolved in DMSO at a concentration of 1 mg/mL, associated with continuous LED light emission in the reduction of the fungus Trichophyton rubrum; most of the Fs caused a reduction in CFU, corroborating the results obtained in this study, since the photosensitizer curcumin 98% was also dissolved in DMSO at a concentration of 1 mg/mL [21]. It was found that PDI inhibited the growth of Candida albicans yeasts in 90% of the samples when the methylene blue photosensitizer was used at concentrations of 0.05 and 0.1 mg/mL [22]. However, there was no total inactivation of the fungus, which differs from the other study [21], which observed a complete reduction of the T. rubrum fungus using photosensitizers at a concentration of 1 mg/mL, in agreement with the results of this study. Curcumin presents a bactericidal effect only if administered at very high concentrations, which explains the use of curcumin 98% at a high dosage in this research [23]. In our study it was found that in group 3, treated with the photosensitizer but without LED irradiation, there was a very discrete reduction of CFU,


which in Log expresses a reduction from 2.54 to 1.73 CFU/mL, showing that the use of Fs alone, without association with LED, does not promote inactivation of the fungus. Similarly, group 2, treated with LED irradiation but without Fs in its composition, also presented a very discrete reduction of CFU, from 2.54 to 2.03 CFU/mL, showing that without Fs the LED irradiation does not cause inactivation of the fungus. These results are similar to those observed in another study [24], which emphasizes that PDI only has an effect when light of adequate wavelength interacts with a photosensitizer that mediates the penetration of light into the cell. The literature describes that the Fs should not present any cytotoxic effect in the absence of light, because the Fs molecules are in their fundamental energetic state. The moment light of a specific wavelength is introduced into the system, the energy contained in the photons is transferred to the Fs molecules, which become excited [25, 26]. However, in our results it was verified that in groups 1 and 2, which did not contain Fs in the composition of the samples, the CFU/mL values expressed in Log are higher than those of group 3, the group containing Fs, showing that curcumin 98% at a concentration of 1 mg/mL offers some cytotoxic effect. This finding may be related to the fact that, as P. brasiliensis is highly susceptible to curcumin 98%, the curcumin may interact with molecules of the medium, causing the formation of free radicals or singlet oxygen and, consequently, the apoptosis of some fungal cells [14, 27]. It was also observed that in group 2, exposed to LED irradiation but not containing Fs, there was some reduction of CFU when compared to group 1, the negative control group.
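The log-scale CFU arithmetic above can be sketched as follows; the function name is illustrative, and the example reconstructs the paper's already log-transformed values (2.54, 1.73 and 2.03 log CFU/mL) rather than raw plate counts:

```python
import math

def log10_reduction(cfu_before: float, cfu_after: float) -> float:
    """Reduction in log10 units between two CFU/mL counts."""
    return math.log10(cfu_before) - math.log10(cfu_after)

# Group 3 (Fs without LED): 2.54 -> 1.73 log CFU/mL, a 0.81-log drop.
print(round(log10_reduction(10**2.54, 10**1.73), 2))  # 0.81
# Group 2 (LED without Fs): 2.54 -> 2.03 log CFU/mL, a 0.51-log drop.
print(round(log10_reduction(10**2.54, 10**2.03), 2))  # 0.51
```

A one-log reduction corresponds to a tenfold decrease in viable cells, which is why the "very discrete" reductions reported here fall well short of inactivation.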
This finding may be linked to the fact that, even in the absence of Fs, the light may have generated free radicals or singlet oxygen, which can lead to cell death [27]. A study was performed to verify the influence of laser PDI associated with the methylene blue Fs on the respiration of the fungus P. brasiliensis; it verified that the therapy associated with the photosensitizer produces an inhibitory effect on the respiration of the fungus, which induces cell death. These data are similar to the findings of the present study, since PDI in the presence of curcumin 98% also resulted in a reduction of P. brasiliensis [28]. In this study it was observed that in group 4, irradiated with LED in the presence of curcumin 98%, there was a total reduction of CFU at the end of seven days of incubation, indicating that the fungus P. brasiliensis was inactivated by the therapy. This result may be related to the transfer of electrons between the photosensitizer activated by LED irradiation and the cell components, promoting the formation of free radicals.


J. A. da Silva Júnior et al.

Another possibility is that there may have been formation of singlet oxygen, which reacts with and attacks cell components such as lipids, proteins and DNA. Both mechanisms are capable of inducing the inactivation of the microorganism [29, 30]. In this study, it was verified that there was fungal growth in the three groups that did not receive PDI, and that the group irradiated with LED in the presence of curcumin 98% showed no fungal proliferation, indicating that only through the interaction of the photosensitizer with visible light is it possible to reduce microbial growth. In view of these findings it is possible to infer that curcumin 98% as a photosensitizer combined with a Light Emitting Diode (LED) is efficient in eliminating the fungus Paracoccidioides brasiliensis in vitro in its pathogenic form. Therefore, this work opens new perspectives for the study of PDI in in vivo trials, seeking alternative therapies for the treatment of patients with Paracoccidioidomycosis.

5 Conclusion

Based on the analysis of the results found in this study, it can be concluded that Photodynamic Inactivation associated with the administration of the photosensitizer curcumin 98% is capable of promoting the in vitro elimination of the pathogenic fungus Paracoccidioides brasiliensis. The isolated application of LED irradiation or of the photosensitizer did not promote inactivation of the fungus P. brasiliensis. It is suggested that further research be carried out on the application of PDI to reduce the fungus P. brasiliensis, including in vivo trials, since this is a microorganism that in humans causes a pathology that can lead to death.

Acknowledgements We thank Universidade Brasil for the support during the Master's postgraduate course, Anhembi Morumbi University, and the Coordination for the Improvement of Higher Education Personnel (CAPES) for financing this research.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Lacaz C, Franco M, Restrepo-Moreno A et al (1994) Historical evolution of the knowledge on paracoccidioidomycosis and its etiologic agent. In: Paracoccidioidomycosis. Boca Raton, London, pp 1–11, 13–25
2. San Blas G, Nino-Vega G, Iturriaga T (2002) Paracoccidioides brasiliensis and paracoccidioidomycosis: molecular approaches to morphogenesis, diagnosis, epidemiology, taxonomy and genetics. Med Mycol 40:225–242
3. Brummer E, Castañeda E, Restrepo A (1993) Paracoccidioidomycosis. Clin Microbiol Rev 6:89–117
4. Negroni R (1993) Paracoccidioidomycosis (South American blastomycosis, Lutz's mycosis). Int J Dermatol 32:847–859
5. Valle A, Costa R (2001) Paracoccidioidomicose. In: Batista RS, Igreja RP, Gomes AD, Huggins DW (eds) Medicina tropical: abordagem atual das doenças infecciosas e parasitárias. Cultura Médica, Rio de Janeiro, pp 943–958
6. Sandhu G, Aleff R, Kline B et al (1997) Molecular detection and identification of Paracoccidioides brasiliensis. J Clin Microbiol 35:1894–1896
7. Goldani L, Sugar A (1998) Short report: use of the polymerase chain reaction to detect Paracoccidioides brasiliensis in murine paracoccidioidomycosis. Am J Trop Med Hyg 58:152–153
8. Gomes G, Cisalpino P, Taborda C et al (2000) PCR for diagnosis of paracoccidioidomycosis. J Clin Microbiol 38:3478–3480
9. Bisinelli J, Ferreira M (2002) Doenças infecciosas: paracoccidioidomicose (blastomicose sul-americana). In: Tommasi AF (ed) Diagnóstico em patologia bucal, 3rd edn. Pancast, São Paulo, pp 202–209
10. Gad F, Zahra T, Hasan T et al (2004) Effects of growth phase and extracellular slime on photodynamic inactivation of gram-positive pathogenic bacteria. Antimicrob Agents Chemother 48:2173–2178
11. Foote C (1968) Science 162:963–970
12. Lambrechts S, Aalders M, Van Marle J (2005) Mechanistic study of the photodynamic inactivation of Candida albicans by a cationic porphyrin. Antimicrob Agents Chemother 49:2026–2034
13. Demidova T, Hamblin M (2005) Effect of cell-photosensitizer binding and cell density on microbial photoinactivation. Antimicrob Agents Chemother 49:2329–2335
14. Perussi J (2007) Inativação fotodinâmica de microorganismos. Química Nova 30(4):988–994
15. Buytaert E, Dewaele M, Agostinis P (2007) Molecular effectors of multiple cell death pathways initiated by photodynamic therapy. Biochim Biophys Acta 1776:86–107
16. Junqueira J, Ribeiro M, Rossoni R et al (2010) Antimicrobial photodynamic therapy: photodynamic antimicrobial effects of malachite green on Staphylococcus, Enterobacteriaceae and Candida. Photomed Laser Surg 28:67–72
17. Rappleye C, Goldman W (2006) Defining virulence genes in the dimorphic fungi. Annu Rev Microbiol 60:281–303
18. Shikanai-Yasuda M, Telles Filho F, Mendes R et al (2006) Consenso em paracoccidioidomicose. Rev Soc Bras Med Trop 39(3):297–310
19. Chan Y, Lai C (2003) Bactericidal effects of different laser wavelengths on periodontopathic germs in photodynamic therapy. Lasers Med Sci 18:51–55
20. Martins C, Silva D, Neres A et al (2009) Curcumin as a promising antifungal of clinical interest. J Antimicrob Chemother 63:337–339
21. Ramos R (2014) Atividade antifúngica de fotossensibilizadores derivados de protoporfirina IX sobre Trichophyton rubrum. Dissertação (Mestrado em Engenharia Biomédica), Programa de Pós-Graduação em Engenharia Biomédica, Universidade Camilo Castelo Branco, São José dos Campos, São Paulo, 50p
22. Sales (2006) Efeito da terapia antimicrobiana, utilizando o azul de metileno como agente fotossensibilizante sobre o crescimento de Candida albicans. Dissertação (Mestrado em Engenharia Biomédica), Instituto de Pesquisa e Desenvolvimento, Universidade Vale do Paraíba, São José dos Campos, São Paulo, 43f
23. Dahl T, Mcgowan W, Shand M et al (1989) Photokilling of bacteria by the natural dye curcumin. Arch Microbiol 151(2):183–185
24. Almeida L, Zanoelo F, Castro K et al (2012) Cell survival and altered gene expression following photodynamic inactivation of Paracoccidioides brasiliensis. Photochem Photobiol 88(4):992–1000
25. Fingar V, Kik P, Haydon P et al (1999) Analysis of acute vascular damage after photodynamic therapy using benzoporphyrin derivative (BPD). Br J Cancer 79:1702–1708
26. Gabor F, Szocs K, Maillard P et al (2001) Photobiological activity of exogenous and endogenous porphyrin derivatives in Escherichia coli and Enterococcus hirae cells. Radiat Environ Biophys 40:145–151
27. Chavantes M (2009) Laser em biomedicina – princípios e prática. Atheneu, São Paulo, pp 60–102
28. Nowotyn J (2011) Influência da terapia fotodinâmica e azul de metileno na respiração do fungo patogênico Paracoccidioides brasiliensis. Dissertação, Programa de Pós-Graduação em Bioengenharia, Instituto de Pesquisa e Desenvolvimento, Universidade do Vale do Paraíba, São José dos Campos, São Paulo, 62p
29. Barreiros A, David JM, David JP (2006) Estresse oxidativo: relação entre geração de espécies reativas e defesa do organismo. Química Nova 29(1):113–123
30. Gandra P, Alves A, Macedo D et al (2004) Determinação eletroquímica da capacidade antioxidante para avaliação do exercício físico. Química Nova 27(6):980–985

In Vitro Study of the Microstructural Effects of Photodynamic Therapy in Medical Supplies When Used for Disinfection

A. F. Namba, M. Del-Valle, N. A. Daghastanli, and P. A. Ana

Abstract

Cleaning and disinfecting surfaces and materials in health services are primary elements in infection control measures. For thermosensitive materials, the chemical agents used have disadvantages such as the odor of the products, which can cause allergic reactions in patients and the nursing staff. Photodynamic therapy (PDT) has been shown to be an effective technique in the treatment of infections caused by different microorganisms; however, nothing is known about the effects of this technique on the microstructure of hospital supplies. This in vitro study aimed to evaluate the effects of 0.2% peracetic acid, 1% sodium hypochlorite and PDT with 0.01% methylene blue on the composition and color changes of hospital masks and extensions. For this purpose, 100 mask samples and 100 extension samples were randomly distributed into 20 experimental groups (n = 10, 10 groups for each material), varying the applied substance (sodium hypochlorite, peracetic acid and PDT) and the number of applications (without application, 1, 2 or 3 applications). The compositional analysis was performed by Fourier transform infrared spectroscopy, while the color changes were evaluated using image analysis by the CIELAB method, evaluating the parameters L*, a*, b* and ΔE. The statistical analysis was performed at the 5% significance level. It was observed that all agents altered the composition of the materials in a similar way. Although all agents promoted changes in the different parameters evaluated, peracetic acid and methylene blue alone altered the final perceived color only in extensions. It was concluded that 0.2% peracetic acid, 1% sodium hypochlorite and PDT alter the chemical composition of both masks and extensions, and that such changes have a positive relationship with the number of treatments performed. These compositional changes may be related to the color changes promoted in both materials by all agents tested.

A. F. Namba, N. A. Daghastanli, P. A. Ana (corresponding author): Center for Engineering, Modeling and Applied Social Sciences, Federal University of ABC, 03 Arcturus St., São Bernardo do Campo, Brazil. e-mail: [email protected]

M. Del-Valle: Center for Lasers and Applications, Nuclear and Energy Research Institute (IPEN-CNEN/SP), Sao Paulo, Brazil

Keywords

Disinfection · Laser · Medical supply · Photodynamic therapy

1 Introduction

Currently, the environment in health services has been the focus of special attention to minimize the spread of microorganisms that can cause serious infections, such as multi-resistant microorganisms. The use of medications by inhalation has led to greater use of nebulizers, and there is a concern that these devices may act as a source of bacterial infection [1]. The cleaning and disinfection of surfaces and materials in health services are primary and effective elements in the control measures to break the epidemiological chain of infections. Nebulizers are hospital medical devices generally used in the treatment of respiratory tract disorders to relieve inflammatory, congestive and obstructive processes. They are devices that come into contact with colonized intact mucous membranes and therefore require at least an intermediate level of disinfection, always after rigorous cleaning. Currently, it is recommended that 0.2% peracetic acid (high-level disinfectant) and 1% sodium hypochlorite (intermediate-level disinfectant) be used as disinfectants for nebulizers. These agents are effective in eliminating bacteria, viruses, fungi and spores [1–4]. However, disinfection by

© Springer Nature Switzerland AG 2022 T. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_176


immersion in chemical solutions is operationally complex, presents occupational risks and leaves the possibility of toxic residues remaining. Thus, there is a need for agents that guarantee the disinfection of medical supplies without such adverse effects, and that are effective and low-cost. Photodynamic therapy (PDT) has been reported as an effective technique for the decontamination of biological tissues, such as periodontal pockets and peri-implant sites in dentistry, as well as skin and mucous membranes. The efficiency of this technique in eliminating bacteria, fungi and parasites has been proven [5]. There are no studies that report the use of PDT for the decontamination of hospital devices. Observing the absence of studies on this topic, we question whether the current disinfection techniques or PDT can physically and chemically modify the structure of medical devices made of polyvinyl chloride. The study of such changes may contribute to the proposal of an alternative decontamination technique for these supplies. Therefore, this study aims to evaluate the effects of 0.2% peracetic acid, 1% sodium hypochlorite and photodynamic therapy with methylene blue (PDT) on the composition and color of masks and extensions used in hospital nebulizers.

2 Material and Method

For this study, 100 specimens of extension and 100 specimens of nebulizer masks were obtained. The material was cleaned with enzymatic detergent (Proaction-AS110-4E, Grow Química, Brazil) and sectioned into pieces of 5 mm² using a sterile circular cutter. Samples were randomly distributed into experimental groups according to Table 1. For groups 1 and 11, the samples were stored in sterile well plates with 1000 µL of distilled water for 10 min. In groups 2, 3, 4, 12, 13 and 14, 0.2% peracetic acid (Proaction Peracetic 0.2%, Grow Química, Brazil) was used. For the treatments, the samples were individually immersed in 1000 µL of peracetic acid for 10 min; then, they were rinsed with distilled water and placed in sterile well plates. In groups with repetition, treatments were repeated in the same way, respecting 10-min intervals between them. In groups 5, 6, 7, 15, 16 and 17, 1% sodium hypochlorite (Proaction 1%, Grow Química, Brazil) was used. For the treatments, the samples were individually submerged in 1000 µL of sodium hypochlorite for 10 min, rinsed with distilled water and placed in sterile well plates. In groups with repetition, treatments were repeated in the same way, respecting 10-min intervals between them.

Table 1 Experimental groups of the present study

Group  Material   Treatment            Repetitions
1      Mask       Distilled water      1
2      Mask       Peracetic acid       1
3      Mask       Peracetic acid       2
4      Mask       Peracetic acid       3
5      Mask       Sodium hypochlorite  1
6      Mask       Sodium hypochlorite  2
7      Mask       Sodium hypochlorite  3
8      Mask       PDT                  1
9      Mask       PDT                  2
10     Mask       PDT                  3
11     Extension  Distilled water      1
12     Extension  Peracetic acid       1
13     Extension  Peracetic acid       2
14     Extension  Peracetic acid       3
15     Extension  Sodium hypochlorite  1
16     Extension  Sodium hypochlorite  2
17     Extension  Sodium hypochlorite  3
18     Extension  PDT                  1
19     Extension  PDT                  2
20     Extension  PDT                  3
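The randomized allocation described above (100 samples of each material into 10 groups of n = 10) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' actual procedure; the seed and the sample-ID scheme are invented for the example:

```python
import random

random.seed(0)  # fixed seed: for a reproducible illustration only

# 100 samples of one material (e.g. masks) into 10 groups of n = 10
sample_ids = list(range(1, 101))
random.shuffle(sample_ids)
groups = {g + 1: sample_ids[g * 10:(g + 1) * 10] for g in range(10)}

# every group has 10 samples and no sample is assigned twice
assert all(len(members) == 10 for members in groups.values())
assert sorted(s for members in groups.values() for s in members) == list(range(1, 101))
print(len(groups))  # 10
```

Shuffling once and slicing guarantees both randomness and a balanced design without replacement.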


In groups 8, 9, 10, 18, 19 and 20, the samples were individually submerged in 1000 µL of 0.01% methylene blue at 10 mM (Sigma Aldrich, USA) for 10 min; afterwards, laser irradiation was performed for 8 min. Then, they were rinsed with distilled water. In groups with repetition, treatments were repeated in the same way, respecting 10-min intervals between them. For the irradiations, a laser (λ = 660 nm, DMC Equipamentos, Brazil) was used, with a fiber diameter of 600 µm, 50 mW of power and 8 J of total energy delivered, in continuous mode, at a distance of 1 cm from the sites to be irradiated, in order to irradiate the entire well of a 96-well culture plate. Considering the well area (0.31 cm²), an irradiance of 161.3 mW/cm² was obtained. The compositional analysis was performed by Fourier transform infrared spectroscopy (FTIR). The spectra in the infrared region (4000–650 cm−1) were obtained with a Frontier spectrometer (Perkin Elmer, USA), using an ATR (attenuated total reflectance) accessory with a diamond crystal. From each sample, spectra were collected in a central area of 1.5 mm². Each spectrum had a background spectrum subtracted during acquisition and was obtained with 64 scans at a resolution of 4 cm−1. A descriptive comparative analysis was performed for all obtained spectra using the Origin 8.0 software. The comparison of the intensities of the absorption bands was performed only between spectra of the same material, after normalization by the highest-intensity peak found for each material. The color analyses were performed by adapting the methodology of Cal et al. [6]. For that, digital photos were obtained using a scientific CCD camera mvBlueFOX120a (Matrix Vision, Germany) and an objective lens model #53-301 (Edmund Optics, USA). The illumination was provided by a lighting system composed of three white LEDs arranged concentrically and at a standardized
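The dosimetry arithmetic above (irradiance from power and well area, fluence from delivered energy) can be checked with a short sketch; the function names are illustrative, not from any library:

```python
def irradiance_mw_per_cm2(power_mw: float, area_cm2: float) -> float:
    """Irradiance = optical power / irradiated area."""
    return power_mw / area_cm2

def fluence_j_per_cm2(energy_j: float, area_cm2: float) -> float:
    """Fluence = delivered energy / irradiated area."""
    return energy_j / area_cm2

# 50 mW spread over the 0.31 cm^2 well, as stated in the text:
print(round(irradiance_mw_per_cm2(50, 0.31), 1))  # 161.3
# 8 J of total energy over the same well:
print(round(fluence_j_per_cm2(8, 0.31), 1))       # 25.8
```

Reporting irradiance and fluence alongside power and time makes PDT protocols comparable across studies with different spot sizes.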

Fig. 1 Average infrared absorption spectrum of extensions before and after treatment with peracetic acid, in the region between 3250 and 2500 cm−1


distance, avoiding shadow areas. The camera, the lighting system and the sample holder were positioned inside a fully closed black box. Before the beginning of image acquisition and after every 10 images taken, the optical power emitted by the LEDs was measured using a power meter (FieldMaxII, Coherent, USA). To enable the comparative study between the samples of the present study, the same reference sample was used for all images (one reference sample for masks and one for extensions), which was kept dry and in the same position. Thus, for all images, the samples were positioned two by two (test sample and reference sample), side by side in a standardized position, on a mark made on the sample support plate, to ensure that the energy received by each of them had the smallest possible variation across all acquisitions. To acquire the images, the wxPropView capture software was used with the LabVIEW® environment. The camera gain and exposure time were adjusted to 10 dB and 106 µs, respectively. The images were saved in bitmap format. For the analysis of the images, a routine was written in the MATLAB® environment to calculate the values of a*, b*, L* and ΔE, which indicate whether there were changes in the color of the samples, as follows: ΔE*ab = (ΔL*² + Δa*² + Δb*²)^(1/2).
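The ΔE*ab computation above (the CIE76 color difference) can be sketched in Python rather than MATLAB; the numeric L*, a*, b* values below are hypothetical, chosen only to exercise the formula:

```python
import math

def delta_e_ab(lab_ref, lab_test):
    """CIE76 color difference: sqrt(dL*^2 + da*^2 + db*^2)."""
    dl, da, db = (r - t for r, t in zip(lab_ref, lab_test))
    return math.sqrt(dl * dl + da * da + db * db)

reference = (70.0, 2.0, 10.0)  # hypothetical L*, a*, b* of the reference sample
treated = (69.0, 2.5, 9.0)     # hypothetical treated sample
# In this study, a total color change greater than 1 is taken as perceptible.
print(round(delta_e_ab(reference, treated), 2))  # 1.5
```

Because each test image contains both the test and the reference sample, the difference is computed against the co-imaged reference, cancelling frame-to-frame illumination drift.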

3 Results and Discussion

Figure 1 shows the average FTIR spectra of the extension samples after treatment with peracetic acid. The application of peracetic acid reduced the intensity of the bands with absorption at 2957–2917 cm−1 and at 2851 cm−1, which correspond to alkane groups (CH stretching bands). This reduction occurred in a similar way for all treatment repetitions; that is, the application of 1, 2 or 3


Fig. 2 Average infrared absorption spectrum of masks before and after treatment with peracetic acid, in the region between 4000 and 650 cm−1

times did not cause significant differences in the reduction of the intensity of these bands. The application of peracetic acid also reduced the intensity of the absorption band located at 1462 cm−1 (CH2). The 873 cm−1 band disappeared completely after all treatments. No new bands appeared, which indicates that there was possibly no adsorption of the agent onto the material. In the mask samples (Fig. 2), it is observed that this agent promoted a reduction in the intensity of the bands located at 2957–2851 cm−1 (CH group), as they have very few absorption bands in the infrared spectrum; but, unlike what happened in the extension samples, peracetic acid did not change the intensity of the 2957 cm−1 band. The 873 cm−1 band also completely disappeared after all treatments with peracetic acid. From this analysis, we can suggest that there was no total degradation of the material, but that the changes produced by peracetic acid were similar in the two materials tested. Peracetic acid is a toxic and corrosive product with the molecular formula C2H4O3, which acts by oxidizing cellular constituents, releasing active oxygen that interacts with the sulfur bonds of proteins, enzymes and other microbial metabolites [3]. This released oxygen can be a degrading agent for certain types of materials. Although the literature shows that this agent does not interfere with the compositional and mechanical properties of certain hospital materials [3], in the present study we show that peracetic acid interferes with the composition of masks and extensions used in hospital nebulizers; however, further studies are needed to clarify whether this chemical modification interferes with the durability of the material.

The application of sodium hypochlorite also promotes compositional changes in the extensions, evidenced by the reduction in the intensity of the absorption bands at 2917 and 2851 cm−1 (Fig. 3) and at 1462–1436–1425 cm−1, which correspond to the CH bond, and by the complete elimination of the 873 cm−1 band. In masks, sodium hypochlorite apparently promoted the same chemical changes as peracetic acid (Fig. 4). Sodium hypochlorite (NaClO) is an agent that dissolves and degrades fatty acids and organic material; it also leads to amino acid degradation and hydrolysis and reduces the surface tension of the remaining solution [4]. This agent is quite toxic to biological tissues, its action depending on the concentration used and, for this reason, a concentration of 1% is recommended for disinfecting surfaces and hospital supplies [1, 2]. In this study, it was found that its action on masks and extensions was very similar to that promoted by peracetic acid, with some chemical modification that nevertheless does not imply complete degradation of the evaluated materials. In Fig. 5, it can be seen that PDT significantly decreases the intensity of the absorption bands at 2917 and 2851 cm−1, in the same way as observed for peracetic acid and sodium hypochlorite. It was also observed that PDT after two applications promoted a displacement of the band from 1735 to 1721 cm−1, as well as a significant increase in the intensity of the bands from 1723 to 744 cm−1. For masks, the effects of PDT were also similar to the effects promoted by the other agents (Fig. 6). A reduction is noted in the intensity of the 2917, 2851, 1577, 1541,

Fig. 3 Average infrared absorption spectrum of extensions before and after treatment with sodium hypochlorite, in the region between 4000 and 650 cm−1

Fig. 4 Average infrared absorption spectrum of masks before and after treatment with sodium hypochlorite, between 3250 and 2500 cm−1

Fig. 5 Average infrared absorption spectrum of extensions before and after treatment with PDT, in the region between 4000 and 650 cm−1



Fig. 6 Average infrared absorption spectrum of masks before and after treatment with PDT, in the region between 4000 and 650 cm−1

1461, 1436, 1425, 1258 and 873 cm−1 bands (with total elimination of the latter) and an increase in the intensity of the 1720 cm−1 band, in a similar way to that promoted by the other agents. However, unlike sodium hypochlorite, PDT does not change the 1016 cm−1 band. It is known that during PDT different reactive oxygen species are formed, such as singlet oxygen and the superoxide anion, in addition to different free radicals [5]. According to the results observed in this study, it is suggested that these products are responsible for promoting the changes in hospital masks and extensions, probably by breaking chemical bonds in functional groups. Thus, even acting through different mechanisms, PDT has effects similar to those promoted by sodium hypochlorite and peracetic acid, which may compromise the structure of the materials tested in the long term; however, from the data presented, it is not possible to say whether such changes interfere with the useful life of the materials. If this interference occurs, it is likely to be of the same magnitude as that promoted by the agents currently employed. However, it is noteworthy that PDT has

advantages over these agents, such as the absence of side effects on biological tissues and the absence of odor, which makes it an attractive technique for use by the hospital team. Bearing in mind that all the agents tested promoted chemical changes in the masks and extensions, it was evaluated whether such changes are visually perceptible, through a color change. For these analyses, the effects of the treatments on the parameters ΔL (luminosity, from white to black), Δa [from red (+a*) to green (−a*)], Δb [from yellow (+b*) to blue (−b*)] and ΔE (total color change, perceptible to the human eye when greater than 1) were compared between untreated samples and samples treated with different numbers of repetitions. It was observed (Fig. 7) that peracetic acid did not promote statistically significant changes in the luminosity (ΔL) of the extensions; however, it increased the parameter Δa (the extensions became more reddish) and decreased the parameter Δb (they became more bluish). In this way, this agent changed the total perceived color (ΔE > 1 after 2 and 3 repetitions).

Fig. 7 Variation of ΔL, Δa, Δb and ΔE in the extensions before and after treatment with peracetic acid in different repetitions. The bars indicate standard error. Different letters indicate statistically different averages at the 5% level according to the Student–Newman–Keuls test



Fig. 8 Variation of ΔL, Δa, Δb and ΔE in the masks before and after treatment with peracetic acid in different repetitions. The bars indicate standard error. Different letters indicate statistically different averages at the 5% level according to the Student–Newman–Keuls test

Fig. 9 Variation of ΔL, Δa, Δb and ΔE in extensions before and after treatment with sodium hypochlorite in different repetitions. The bars indicate standard error. Different letters indicate statistically different averages at the 5% level according to the Student–Newman–Keuls test

Fig. 10 Variation of ΔL, Δa, Δb and ΔE in masks before and after treatment with sodium hypochlorite in different repetitions. The bars indicate standard error. Different letters indicate statistically different averages at the 5% level according to the Student–Newman–Keuls test

In masks (Fig. 8), the application of peracetic acid under different repetitions promoted statistically significant changes in the parameters ΔL, Δa and Δb; however, it did not change the total perceived color. This finding is expected because the extensions are transparent and, for this reason, changes in the shade of that material are more noticeable than changes in the masks, which are green. The treatment of extensions with sodium hypochlorite statistically changed ΔL and Δa, increasing the luminosity and making the extensions more greenish, and decreased Δb (they became more bluish); however, it did not promote significant changes in the final perceived color of the extensions, as can be seen in Fig. 9. In the masks, the treatment with sodium hypochlorite only significantly decreased Δa (made them more greenish), but did not promote changes in the other parameters; that is,

sodium hypochlorite also does not change the color of hospital masks, even after 3 repetitions of treatment (Fig. 10). These results suggest that the masks are composed of a material more resistant to the action of sodium hypochlorite. Although the masks and extensions are composed of the same material (polyvinyl chloride), they have some differences, related mainly to the plasticizer present [7]. The plasticizers widely used in health products are di-isononyl phthalate and trioctyl trimellitate, which have an aromatic ring and ester groups and confer different mechanical and chemical characteristics on the materials. In this way, these materials can be broken down in different ways by the different disinfecting agents used in this work. The effect of methylene blue alone on the color of both the extensions and the hospital masks was also evaluated (Figs. 11 and 12). It was noticed that methylene blue alone promotes


Fig. 11 Variation of ΔL, Δa, Δb and ΔE in extensions before and after treatment with methylene blue in different repetitions. The bars indicate standard error. Different letters indicate statistically different averages at the 5% level according to the Student–Newman–Keuls test

Fig. 12 Variation of ΔL, Δa, Δb and ΔE in the masks before and after treatment with methylene blue in different repetitions. The bars indicate standard error. Different letters indicate statistically different averages at the 5% level according to the Student–Newman–Keuls test

Fig. 13 Variation of DL, Da, Db and DE in the extensions before and after treatment with photodynamic therapy over different numbers of repetitions. The bars indicate standard error. Different letters indicate means that differ statistically at the 5% level according to the Student-Newman-Keuls test

an increase in the parameter DL (increased luminosity) and a significant decrease in the parameters Da and Db (making the extensions greener and more bluish), which caused a significant change in the perceived color (DE > 1). In the masks, methylene blue alone significantly reduced the luminosity, Da and Db (making the masks greener and more bluish); however, although there was a tendency toward a change in the total color of the samples, this trend was not statistically significant (p > 0.05). Methylene blue is a blue dye with the structure C16H18ClN3S, which is responsible for its coloration. From the results of this work, even a few repetitions of treatment with this dye already produce changes

in the color of the PVC materials, more markedly than the changes promoted by peracetic acid and sodium hypochlorite. Finally, the effects of PDT with this same dye on the optical properties of the PVC materials were tested. Analyzing Fig. 13, it is possible to observe that, just like methylene blue alone, PDT also increases the luminosity and makes the extensions more bluish (it significantly increases DL and significantly decreases Db); however, it did not result in a significant change in the perceived color. In the masks (Fig. 14), PDT only promoted a statistically significant decrease in the parameter Da (made them more greenish) and did not significantly change any other color parameter. Thus, it is observed

In Vitro Study of the Microstructural Effects …


Fig. 14 Variation of DL, Da, Db and DE in the masks before and after treatment with photodynamic therapy over different numbers of repetitions. The bars indicate standard error. Different letters indicate means that differ statistically at the 5% level according to the Student-Newman-Keuls test

that PDT has a more pronounced effect on the extensions (as expected, considering that the extensions are originally transparent), but does not change the color of the masks over up to 3 repetitions of the treatment. It was also possible to verify that the color changes were promoted by the dye alone, and not by the photochemical reaction of PDT. In other words, the release of singlet oxygen, superoxide anion or free radicals does not seem to be responsible for the color changes observed in the hospital masks and extensions. Thus, considering the benefits of PDT in microbial reduction reported in the literature for the irradiation parameters used in this study, as well as the compositional changes promoted in the tested materials, it is possible to infer that photodynamic therapy can be used as an alternative for the decontamination of these materials. Still, future microbiological and durability evaluations of the tested materials are essential before clinical use.
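The perceived-color criterion used throughout this discussion can be made concrete with a minimal sketch (not the authors' code) of the CIELAB total color difference and the DE > 1 perceptibility threshold mentioned above:

```python
import math

def delta_e(lab_before, lab_after):
    """CIELAB (1976) total color difference between two (L, a, b) readings."""
    dl, da, db = (after - before for before, after in zip(lab_before, lab_after))
    return math.sqrt(dl ** 2 + da ** 2 + db ** 2)

def perceptible(de, threshold=1.0):
    """The study treats DE > 1 as a visually perceptible color change."""
    return de > threshold
```

For example, a change of (3, 4, 0) in (L, a, b) yields DE = 5, well above the perceptibility threshold.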

4 Conclusion

It can be concluded that 0.2% peracetic acid, 1% sodium hypochlorite and PDT alter the chemical composition of both masks and hospital extensions, and that these changes correlate positively with the number of treatments performed. These compositional changes may be related to the color changes promoted in both materials by all agents tested.

Acknowledgements To FAPESP (2017-21887-4), PROCAD-CAPES (88881.068505/2014-01), National Institute of Photonics (CNPq/INCT 465763/2014-6) and Multiuser Experimental Center of UFABC (CEM-UFABC). The authors also thank Prof. Emery C.C.C. Lins (UFPE) for access to the imaging equipment used in this study. Conflict of Interest The authors declare that they have no conflict of interest.

References

1. BRASIL (2010) Agência Nacional de Vigilância Sanitária. Segurança do paciente em serviços de saúde: limpeza e desinfecção de superfícies. Agência Nacional de Vigilância Sanitária, Brasília
2. Centers for Disease Control and Prevention (2008) Guideline for disinfection and sterilization in healthcare facilities. Atlanta, USA
3. Poletto AP, Hoss J et al (2016) Efeito do ácido peracético sobre a adesão microbiana e as propriedades de tubos endotraqueais. Rev Fac Odontol Passo Fundo 21:196–200
4. Estrela C, Estrela CRA et al (2002) Mechanism of action of sodium hypochlorite. Braz Dent J 13:113–117
5. Tardivo JP, Giglio AD, Oliveira CS et al (2005) Methylene blue in photodynamic therapy: from basic mechanisms to clinical applications. Photodiagnosis Photodyn Ther 2:175–191
6. Cal E, Guneri P, Kose T. Comparison of digital and spectrophotometric measurements of colour shade guides. J Oral Rehab 33:221–228
7. Bernard L, Dévaudin B, Lecoeur M et al (2014) Analytical methods for the determination of DEHP plasticizer alternatives present in medical devices: a review. Talanta 129:39–54

Technological Development of a Multipurpose Molecular Point-of-Care Device for Sars-CoV-2 Detection L. R. Nascimento, V. K. Oliveira, B. D. Camargo, G. T. Mendonça, M. C. Stracke, M. N. Aoki, L. Blanes, L. G. Morello, and S. L. Stebel

Abstract


The present work describes the technological development of a low-cost, miniaturized instrument for the Polymerase Chain Reaction (PCR), aimed at the detection of Sars-CoV-2 ribonucleic acid (RNA) and, potentially, at serving as an open platform for the detection of other microorganisms. Most devices use a large aluminum block coupled to a Peltier element to heat and cool the reaction tubes; however, much energy is wasted in the process. To make better use of this energy and improve the cost–benefit of the device, we used Joule heating in the printed circuit board itself to heat the samples, and a computer fan for cooling. Other improvements, such as a precisely heated sample spot and an LM35 thermal sensor with a PID (proportional-integral-derivative) algorithm to control the circuit temperature, have also been included. The development was driven by cost–benefit and performance, with the goal of bringing to market a robust detection platform for in vitro diagnostic tests.

Keywords: PCR · LAMP · SARS-CoV-2 · COVID-19 · POC

L. R. Nascimento (✉) · V. K. Oliveira
Instituto de Biologia Molecular do Paraná (IBMP)/Business Development Department, IBMP, Algacyr Munhoz Maeder, 3775–CIC, Curitiba-PR, Brazil
e-mail: [email protected]
B. D. Camargo · G. T. Mendonça · M. C. Stracke · M. N. Aoki · L. Blanes · L. G. Morello
Carlos Chagas Institute, Oswaldo Cruz Foundation (FIOCRUZ)/Tech Development, Curitiba, Brazil
S. L. Stebel
Pós-Graduação em Engenharia Biomédica/UTFPR, Curitiba, Brazil

1 Introduction

In most molecular biology assays, RNA and DNA amplification protocols are necessary to obtain accurate results from an in vitro diagnostic test that detects a specific microorganism [1–3]. The amplification phase involves biochemical and enzymatic processes with cyclical variations of temperature [3]. This technique is called the polymerase chain reaction (PCR); it received its name from the chain reaction of the DNA polymerase enzyme, which makes billions of copies of a molecule that can then be detected and quantified by specific devices [1–4]. The present work does not focus in depth on the chemical and biological reactions, but on the functionality of the detection device, with the aim of creating cost-effective alternatives to mainstream devices that can cost more than U$100,000 each. The team researched the production of low-cost PCR devices whose heating elements are based on Joule's first law, which states that the heat generated by an electric current flowing through a conductor is proportional to the square of the current, the resistance, and the time. We tested this approach by developing a miniaturized PCR device based on the Loop-mediated Isothermal Amplification method (LAMP), in which sample amplification takes place at a constant temperature [5]. In a second step, the conventional PCR technique will also be implemented in the device, so that the user can choose the methodology that best suits the purpose of the detection.
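The heating principle just described can be made concrete with a short sketch of Joule's first law; the current and resistance values in the usage note are hypothetical, not measured from the device:

```python
def joule_power(current_a, resistance_ohm):
    """Joule's first law: heat generated per second (W) is P = I**2 * R."""
    return current_a ** 2 * resistance_ohm

def joule_energy(current_a, resistance_ohm, seconds):
    """Total heat (J) delivered over an interval: Q = P * t."""
    return joule_power(current_a, resistance_ohm) * seconds
```

As an illustration, a hypothetical 2 A current through a 5 ohm PCB trace would dissipate 20 W of heat into the plate.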

2 Materials and Methods

In this project, the team will develop a molecular diagnostic device that allows the user to choose between the conventional PCR reaction and/or the LAMP methodology. The device will first be optimized for LAMP, which does not require extreme temperature variations and therefore allows earlier development of the device.
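The two methodologies the device will support differ mainly in their thermal profiles, which can be sketched as data; only the 65 °C LAMP temperature, the 95 °C PCR denaturation temperature and the roughly 40 min LAMP runtime come from this chapter, while cycle counts and annealing/extension values are generic assumptions:

```python
# Illustrative thermal profiles as (step name, temperature in C, hold seconds).
LAMP_PROFILE = [("isothermal amplification", 65.0, 40 * 60)]  # single 65 C hold

PCR_PROFILE = [("initial denaturation", 95.0, 120)] + [
    step
    for _ in range(35)  # assumed cycle count
    for step in (("denaturation", 95.0, 15),
                 ("annealing", 55.0, 30),
                 ("extension", 72.0, 30))
]

def total_runtime(profile):
    """Sum of the hold times (s) over all steps of a profile."""
    return sum(seconds for _name, _temp, seconds in profile)
```

With these assumptions, `total_runtime(LAMP_PROFILE)` gives 2400 s, matching the 40 min per-test time reported for the LAMP assay.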

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_177



Figure 1 illustrates the layout of the device's PCR control board. A PCB (printed circuit board) was developed containing circuits for each function of the device, essentially aimed at controlling the sample temperature during the reaction: one circuit for heating, using a metal board, and another for cooling, using fans. Temperature-sensor circuits with filters, to reduce noise and optimize the sensor readings, are also included. A buzzer provides audible alerts, and an RGB LED circuit and a display are used for visual alerts. For the user interface, a single-button circuit controls the entire instrument, and a fuse provides general protection of the circuit. The first step was to build equipment able to heat eight individual samples in parallel with a variation of less than one degree Celsius across all samples. After that, we improved the circuit, boards and software to increase the device's capacity to sixteen individual samples with differences below 0.5 degrees between them. The base heating-board pattern was designed to be highly symmetrical (Fig. 2A); the base has holes that fit 200 µl PCR tubes, which can carry samples and/or

Fig. 1 Device’s PCR control board schematic

L. R. Nascimento et al.

controls. As the LAMP methodology does not require extremely high temperatures (65 degrees Celsius), there is no vapor formation and consequently no difficulty or interference in the reading process. The tube is heated only at its base, saving energy. A 3 mm aluminum plate was placed above the heating plate to improve heat distribution and to act as a holder. After the components necessary for the LAMP method were manufactured, a second heating plate (Fig. 2B) was developed to heat the tube lid, avoiding the formation of steam, already anticipating the PCR reaction, which uses higher temperatures (95 degrees Celsius). For the molecular reaction, a colorimetric Sars-CoV-2 detection assay is under standardization. The reaction is a miniaturized version of an already marketed product for COVID-19 molecular detection [6]. The test is based on the Chinese Center for Disease Control and Prevention protocol, established on the first Sars-CoV-2 sequences made available by the China CDC and later published by the World Health Organization in a document called Summary of Available In-House Assays Protocols [7]; the sequenced targets are the Open Reading Frame (ORF1ab) and the

Technological Development of a Multipurpose Molecular …


Fig. 2 Heating plate designs for the 16-sample device. a Heating plate for the tube base. b Heating plate for the tube lid

nucleoprotein N (N gene). The test uses the real-time polymerase chain reaction technique, which allows the detection of specific sequences in extracted RNA by reading the intensity of a fluorescent dye during the reaction. The first step of the technique is the reverse transcription process, which generates complementary DNA (cDNA) from the sample RNA, followed by the second step, the real-time polymerase chain reaction (qPCR) itself. As our device does not have a fluorescence detector, the amplification can only be verified using conventional gel electrophoresis. Using the LAMP method, a colorimetric change occurs in the vial if the sample contains the virus: positive reactions turn the liquid from red to yellow, and negative samples remain reddish/pink (Fig. 3). The color change occurs due to a pH indicator dye present in the reaction; if the reaction occurs, the medium becomes more acidic, changing the color.
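The colorimetric readout just described (red/pink negative, yellow positive) could be automated along these lines; the hue thresholds are assumptions for illustration, not values validated by the authors:

```python
import colorsys

def lamp_result(rgb):
    """Classify a colorimetric LAMP tube from an (R, G, B) reading in 0..255.

    The kit's pH indicator turns the mix from red/pink (negative) to yellow
    (positive); the hue cutoffs below are assumed for illustration only.
    """
    r, g, b = (c / 255.0 for c in rgb)
    hue = colorsys.rgb_to_hsv(r, g, b)[0] * 360.0  # hue angle in degrees
    if 40.0 <= hue <= 80.0:
        return "positive"      # yellow region
    if hue <= 20.0 or hue >= 330.0:
        return "negative"      # red/pink region
    return "indeterminate"
```

A yellowish reading such as (230, 200, 30) falls in the positive band, while a pinkish reading such as (220, 60, 90) falls in the negative band.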

Fig. 3 Colorimetric reaction for Covid-19 using the LAMP method. Samples whose color changed to yellow indicate the presence of Sars-CoV-2; the human RNA samples that remained without color change, maintaining the pink/reddish color, do not indicate the presence of the virus

The average reaction time of each test, or test per patient, was 40 min, faster than traditional qPCR techniques, which can take up to 2 h to complete.

3 Results and Discussion

The overall results were satisfactory for both the reactions and the device. In the development of the device, the first major challenge was to design and manufacture a heating plate that would maintain homogeneous temperatures in all samples, since the heat distribution of each tube influences the others, near or far. Many prototypes were designed with different patterns of heating plates and heating filaments, seeking the most homogeneous temperature. All prototypes showed similar temperature variations between samples: the coldest portion of the plates was located at the ends, and the warmest part at the center of the plate (Fig. 4).




Fig. 4 Graphs obtained from temperature homogeneity tests on the heating plates. Each dot on the horizontal axis represents one of the eight tubes. The red line represents an optimized board and the blue line the optimized board with an aluminum plate on top of it
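The homogeneity criterion and the sensor-noise filtering described in this section can be sketched in software as follows; the smoothing constant and example readings are assumptions, and the real device uses an analog filter circuit rather than code:

```python
def ema_filter(samples, alpha=0.2):
    """Exponential moving average, a software analogue of the analog filter
    added to clean the LM35 signal (the alpha value is assumed)."""
    smoothed = []
    acc = samples[0]
    for s in samples:
        acc = alpha * s + (1 - alpha) * acc
        smoothed.append(acc)
    return smoothed

def within_spec(tube_temps, max_spread=0.5):
    """True when the hottest and coldest tubes differ by less than the
    0.5 degree Celsius target reported for the 16-sample plate."""
    return max(tube_temps) - min(tube_temps) < max_spread
```

For example, a plate reading of [64.9, 65.0, 65.1, 65.2] passes the 0.5 °C spread check, while [64.5, 65.2] does not.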

New designs were produced to compensate for the loss of heat at the ends, until the results became more constant, with a heat variation below 0.5 degrees Celsius throughout the plate. An aluminum plate was added to cover the entire heating plate, which provided greater homogeneity along its extension, as can also be seen in Fig. 4. For the standard PCR reaction, the plates reach 95 degrees Celsius without showing signs of overheating or other weaknesses; however, more stress tests are needed. The control circuit board has several SMD (surface-mounted device) components. To reduce the space they occupy, the components were divided into 3 functional sections: power, analogue, and digital control. For temperature measurement, we use the LM35 sensor. To attenuate signal reflection, the printed circuit tracks have rounded corners. During the development of the printed circuit board for device control, we detected some noise in the temperature sensor. The problem was solved by adding a circuit to filter the signals received from the sensor, after which the device was recalibrated. Within the scope of the "Conventional PCR" mode of the device, the cap heating plates were designed and manufactured to be placed close to the tube caps to prevent the formation of water vapor. This process is quite similar to the heating process of conventional PCR devices, but, anticipating future problems, the team plans to insert the heating plate inside the device instead of placing it above the tube caps. Currently, the research and development team has two main tasks: one group is responsible for implementing and validating standard PCR on the device (Fig. 5); the other

Fig. 5 LAMP/PCR device for 16 samples

group is dedicated to validating the Sars-CoV-2 RT-LAMP methodology, which will be the first target of this platform.

4 Conclusions

The promising results for both the device and the assay formulation indicate that the solution has the potential to become a marketed product. After the initial tests, the solution will undergo further performance evaluation and fine adjustments. If the product performs as expected, it can proceed to scale-up, standardized manufacturing processes and regulatory license application, first through the Brazilian Health Regulatory Agency (ANVISA), with potential later submissions to agencies in other countries.


For Intellectual Property (IP) protection, as soon as this technological development reaches the industrial-prototype stage, patent applications will be filed by the associated institutes enrolled in this project. The use of simple, low-cost materials makes it possible to design and produce devices that present the same results, in terms of temperature management, as products already on the market. Large-scale manufacturing prospects were considered very positive in many respects, mainly complexity, viability and price; estimates are that the device will cost around U$100.00. In general, the device is effective, easy to use, and portable, and can be taken to regions with difficult access. In addition, molecular point-of-care devices are the fastest-growing market segment of the in vitro diagnostics (IVD) industry. The development and national production of a low-cost point-of-care solution for molecular IVD is becoming real, and present and future pandemic scenarios can make use of this platform to improve surveillance, control, and intervention as a tool to combat COVID-19. Acknowledgements This work was supported by the Oswaldo Cruz Foundation (Fiocruz), Universidade Tecnológica Federal do Paraná (UTFPR), Instituto Carlos Chagas (ICC-Fiocruz PR), Instituto de Biologia Molecular do Paraná (IBMP), The Brazilian National Council

for Scientific and Technological Development (CNPq), Fundação para o Desenvolvimento Científico e Tecnológico em Saúde (Fiotec), and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior-Brasil (CAPES)-Código de Financiamento 001. Conflict of Interest The authors certify that there is no actual or potential conflict of interest in relation to this article.

References

1. Mullis KB, Faloona FA (1987) Specific synthesis of DNA in vitro via a polymerase-catalyzed chain reaction. Methods Enzymol 155:335–350. https://doi.org/10.1016/0076-6879(87)55023-6
2. Mullis K, Faloona F, Scharf S, Saiki R, Horn G, Erlich H (1986) Specific enzymatic amplification of DNA in vitro: the polymerase chain reaction. Cold Spring Harb Symp Quant Biol 51:263–273. https://doi.org/10.1101/sqb.1986.051.01.032
3. Erlich HA (2015) PCR technology: principles and applications for DNA amplification. https://doi.org/10.1007/978-1-349-20235-5
4. Innis MA, Gelfand DH, Sninsky JJ, White TJ (2012) PCR protocols: a guide to methods and applications. Academic Press
5. Njiru ZK (2012) Loop-mediated isothermal amplification technology: towards point of care diagnostics. PLoS Negl Trop Dis 6(6):e1572
6. Instruction manual of the commercial molecular IVD product at www.ibmp.org.br/pt-br/info/
7. Summary of Available In-House Assays Protocols at https://www.who.int/docs/default-source/coronaviruse/whoinhouseassays.pdf

Chemical Effects of Nanosecond High-Intensity IR and UV Lasers on Biosilicate® When Used for Treating Dentin Incipient Caries Lesions M. Rodrigues, J. M. F. B. Daguano, and P. A. Ana

Abstract

The root caries lesions still represent a health problem due to their rapid progression and, therefore, more efficient remineralization strategies are necessary. High-intensity lasers are useful because they modify the microstructure of the irradiated tissue through heating; however, nothing is known about the effects of these lasers when associated with bioactive materials to remineralize caries lesions. This study evaluated the compositional changes that Q-switched lasers emitting in the infrared (IR, 1064 nm) and ultraviolet (UV, 355 nm) spectral regions produce in Biosilicate® on dentin with incipient caries lesions. Sixty blocks of demineralized root dentin were randomly divided into 6 experimental groups, to be treated with Biosilicate® alone (10% in fetal bovine serum), lasers alone (IR or UV; 5 ns, 10 Hz, 5 pulses/sample, 250 mJ/pulse or 100 mJ/pulse, respectively) or laser irradiation 24 h after Biosilicate® application. After 24 h immersed in artificial saliva, samples were evaluated by Fourier transform infrared spectroscopy between 450 and 4000 cm−1. Laser irradiation alone reduced the organic, carbonate and water contents of the dentin, with greater effects promoted by the IR laser as a result of heating. Biosilicate® alone elevated the content of phosphate and carbonate, which suggests the formation of carbonated hydroxyapatite (HAC) on dentin. Irradiation with the UV laser after Biosilicate® also promoted an increase in the phosphate content; however, there was less conversion of the biomaterial, evidenced by the rise in the intensity of the bands corresponding to the siloxane and amorphous phases of the apatite. Irradiation with the IR laser after Biosilicate®, on the other hand, promoted a significant increase in phosphate content when compared to the group treated with Biosilicate® alone, without the presence of siloxane bands. It was concluded that laser irradiation can augment the bioactivity of Biosilicate®, evidenced by the greater formation of HAC, and that, for this, the wavelength of 1064 nm should be used.

M. Rodrigues · J. M. F. B. Daguano · P. A. Ana (✉)
Center for Engineering, Modeling and Applied Social Sciences, Federal University of ABC, 03 Arcturus St., São Bernardo do Campo, Brazil
e-mail: [email protected]

Keywords: Demineralization · High-power laser · Bioactive material · Root · FTIR

1 Introduction

Caries is still a highly prevalent disease [1] and, among the hard tissues affected, dentin lesions are the ones that progress most rapidly, due to the greater amount of organic matrix in this tissue. Dentin is composed, by weight, of 70% inorganic material (hydroxyapatite), 20% organic material (mostly type I collagen, with small amounts of type III and IV collagen, and inclusions of proteoglycans, lipids and non-collagenous protein matrices) and 10% water. Enamel, in contrast, has a more mineralized structure, with 98% hydroxyapatite and only 2% organic matrix [2]. Considering this rapid progression, mainly in high caries-risk patients, the development of effective and lasting measures for early prevention is essential. Currently, fluoride therapies are highly recommended; however, they require constant professional interventions [3], in addition to patient dedication and cooperation. Thus, prevention strategies with a longer-lasting effect, requiring less attention from patients who may already be weakened, will favor the maintenance of their quality of life. Among the alternatives that can promote a more lasting preventive effect, high-intensity laser irradiation stands out, as it can promote microstructural changes in enamel and

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_178



M. Rodrigues et al.

dentin, making them less soluble and, consequently, suggesting a more lasting effect [4]. Nd:YAG lasers, with a wavelength (λ) of 1064 nm and a pulse width on the order of microseconds, are quite popular for several clinical applications; however, because their photons are weakly absorbed by the main chromophores of dental hard tissue (water and hydroxyapatite), such lasers should be used with caution in some procedures. Lasers with shorter temporal pulse widths (Q-switched) can thus be a promising alternative, considering that they can be adjusted to lower energies per pulse and reach significant surface temperatures in much shorter periods [5]. This favors the occurrence of chemical and structural changes in the irradiated tissue with less harmful thermal effects both on the surface and in depth, with less heat propagation. Lasers emitting in the ultraviolet (UV) region can also be applied to dental hard tissues, considering their absorption by the organic content. The third harmonic of the Q-switched Nd:YAG laser, with λ = 355 nm and a temporal pulse width of 5 ns, has potential use for the treatment of dental hard tissues, changing the morphology and decreasing the dissolution of calcium and phosphate from the substrate [6]. Considering the possibility of remineralization of incipient caries lesions, bioactive materials, i.e., materials able to bind to living tissues through the formation of a biologically active carbonated hydroxyapatite (HAC) layer, have been shown to be a good treatment option. Biosilicate® is a biomaterial with promising therapeutic applications, being able to form a layer of HAC in less than 24 h of exposure to body fluids [7]. The association of Biosilicate® with laser irradiation has been suggested, as it would enable greater integration of these materials with dentin, as well as the maintenance of a more uniform layer of the formed material. However, there are no studies in the literature assessing the changes that irradiation with Q-switched lasers emitting in the infrared (IR) and ultraviolet (UV) can promote in Biosilicate®, which motivated this study.

2 Materials and Methods

For this study, 30 lower bovine incisor teeth were used (CEUA UFABC 9614190917) and, from these, sixty 4 × 4 × 2 mm slabs of root dentin were prepared. After cutting, the samples were flattened and cleaned with distilled water for 5 min. After preparation, all samples were kept identified in a moist environment (sterile cotton wool moistened with deionized water and thymol) under refrigeration at +4 °C until the beginning of the experiments. The induction of incipient caries lesions was performed in vitro according to the method established by Queiroz [8], which creates incipient caries lesions at average depths of 95.4 ± 5.8 µm in dentin. For that, each sample was

submerged in a demineralizing solution (2 mL/mm2) with pH 5.0, composed of 1.4 mM calcium, 0.91 mM phosphate and 0.06 ppm fluoride in 0.05 M acetate buffer, for 32 h at 37 °C. After this period, the samples were washed with deionized water for 1 min each and returned to the humid environment. The samples were randomly assigned to 6 experimental groups of 10 samples each:

• Group 1 (Untreated): untreated samples;
• Group 2 (Biosilicate®): samples treated with Biosilicate®;
• Group 3 (IR laser): samples treated with IR laser;
• Group 4 (UV laser): samples treated with UV laser;
• Group 5 (BS + IR laser): samples treated with Biosilicate® and, after 24 h, irradiated with the IR laser;
• Group 6 (BS + UV laser): samples treated with Biosilicate® and, after 24 h, irradiated with the UV laser.

In group G1, the samples received no treatment, being stored in artificial saliva (1.5 mL/sample) for 48 h at 37 °C until analysis. The artificial saliva contains 1.5 mM Ca and 0.9 mM P in 0.1 M Tris buffer at pH 7.0 [9]. For the samples of group G2, Biosilicate® (kindly provided by the Laboratory of Vitreous Materials of the Federal University of São Carlos, LaMaV) was used, which was crushed and de-agglomerated using an agate mortar and pestle. It was then added to fetal bovine serum (Sigma Aldrich, USA) at a concentration of 10% [10] and kept at 37 °C for 6 h for conditioning. For the treatments, 150 µL of the biomaterial was rubbed on the surface of each sample with a dental micro-applicator for 30 s [10]; the samples were then immediately placed in artificial saliva (1.5 mL/sample [10]) without washing and kept at 37 °C for 48 h. In the G3 group, an Nd:YAG laser (Q-switched, Quantel, USA) was used, with a wavelength of 1064 nm, pulse width of 5 ns, energy per pulse of 250 mJ, energy density of 1.27 J/cm2, beam diameter of 5 mm and repetition rate of 10 Hz [11].
The energy per pulse was monitored before the irradiation of each sample using a power and energy meter (Coherent, USA). The samples were positioned individually in front of the beam and kept static at a standardized distance of 20 cm, so that a single pulse reached the entire sample surface at once. Each sample received 5 pulses after the application of a coal paste used as a photoabsorber [11]. After irradiation, the samples were individually placed in artificial saliva and kept at 37 °C for 48 h. In the G4 group, the Nd:YAG laser emitting in the UV (λ = 355 nm, 5 ns, 10 Hz, 100 mJ/pulse, without photoabsorber) was used, and the procedures for irradiation and storage of the samples were the same as those described for the G3 group.
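As a consistency check, the energy density reported for the G3 group follows directly from the pulse energy and beam diameter; the sketch below reproduces the 1.27 J/cm2 figure from 250 mJ over a 5 mm spot:

```python
import math

def fluence_j_per_cm2(energy_mj, beam_diameter_mm):
    """Energy density of a single pulse over a circular beam spot."""
    area_cm2 = math.pi * (beam_diameter_mm / 10.0 / 2.0) ** 2
    return (energy_mj / 1000.0) / area_cm2
```

`fluence_j_per_cm2(250, 5)` evaluates to about 1.273 J/cm2, matching the value stated for the IR laser.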

Chemical Effects of Nanosecond High-Intensity IR and UV Lasers …


For groups G5 and G6, Biosilicate® was prepared and applied as described for group G2; afterwards, the samples were kept individually in 1.5 mL of artificial saliva for 24 h at 37 °C. After this period, irradiation was performed with the IR laser (group G5) or the UV laser (group G6) as described for groups G3 and G4, respectively, and the samples were then again kept in 1.5 mL of artificial saliva for 24 h at 37 °C until analysis. The compositional analysis of the samples was performed by Fourier transform infrared spectroscopy (FTIR; Frontier, Perkin Elmer, USA), using the diffuse reflectance and attenuated total reflection (ATR, with a diamond crystal) methods. The spectra were collected with a resolution of 4 cm−1 and 60 scans in the region of 4000–450 cm−1. For the comparative semi-quantitative analysis, vectorial normalization of the spectra was performed [12], and the intensities of the main absorption peaks were compared in averaged spectra of each experimental group using the OriginPro 8 software (OriginLab, USA).
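The vectorial normalization used before comparing peak intensities can be sketched as follows; this is a generic unit-norm implementation, not the authors' OriginPro routine:

```python
import math

def vector_normalize(spectrum):
    """Scale a spectrum to unit Euclidean norm so that band intensities
    from different samples become comparable."""
    norm = math.sqrt(sum(v * v for v in spectrum))
    return [v / norm for v in spectrum]

def band_intensity(wavenumbers, spectrum, target_cm1):
    """Intensity at the sampled point closest to a target band position."""
    i = min(range(len(wavenumbers)), key=lambda k: abs(wavenumbers[k] - target_cm1))
    return spectrum[i]
```

After normalization, the intensity of, e.g., the phosphate band near 562 cm−1 can be read off each averaged spectrum and compared across groups.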

3 Results and Discussion

Figure 1 shows the average infrared spectra obtained for the samples of the untreated group (G1) and the group treated with Biosilicate® (G2). In both spectra, it is possible to notice the presence of bands [7, 13, 14] located at 562 cm−1 (corresponding to the ν4 flexion mode of phosphate, band 1 of the triplet), 600 cm−1 (corresponding to the ν4 flexion mode of phosphate, band 2 of the triplet), 807 and 868 cm−1 (corresponding to the ν2 stretching of carbonate, indicative of type A and B substitution in the carbonated apatite and/or overlap with the band referring to the Si–O–Si stretching of the biomaterial), 958 cm−1 (corresponding to the non-degenerate symmetric stretching mode ν1 of phosphate), 1021 cm−1 (corresponding to the ν3 asymmetric vibrations of phosphate), 1196, 1225 and 1274 cm−1 (corresponding to amide III), 1331 cm−1 (corresponding to a collagen band), 1394 cm−1 (corresponding to the symmetric CH3 bending of the methyl groups of dentin proteins), 1442 cm−1 (corresponding to the ν3 stretching of CO3), 1536 cm−1 (corresponding to amide II) and 1630 cm−1 (corresponding to amide I). The bands located in the region between 1350 and 1080 cm−1 can also correspond to phosphate stretching. The application of Biosilicate® increased the intensity of the 475 cm−1 band (corresponding to the Si–O–Si bend) and the 530 cm−1 band (corresponding to the HPO42− vibration, amorphous phase of apatite), in addition to increasing the intensity of the 562, 600, 807, 868, 958, 1021, 1196, 1225, 1274, 1331, 1394, 1536 and 1630 cm−1 bands when compared to the untreated group. However, the application of the biomaterial reduced the intensity of the 676 cm−1 band, which corresponds to the structural OH− (libration) [13] of dentin. These findings suggest the formation of carbonated hydroxyapatite on the dentin surface, as it is possible to perceive the higher intensity of the

Fig. 1 Average infrared absorption spectra of untreated demineralized root dentin (G1) and after treatment with Biosilicate® (G2), in the region between 1690 and 450 cm−1


M. Rodrigues et al.

Fig. 2 Average infrared absorption spectrum of demineralized root dentin untreated (G1) and after treatment with IR laser (G3) and UV laser (G4) in the region between 1150 and 450 cm−1

phosphate bands (530, 562, 600, 958 and 1021 cm−1) and carbonate bands (807 and 868 cm−1) in the samples treated with Biosilicate®, in addition to a coating of the dentin with the biomaterial, evidenced by the decrease in the intensity of the band corresponding to the structural hydroxyl (676 cm−1) of this tissue. However, siloxane (Si–O–Si) bands were still observed (475 and 868 cm−1, and between 710 and 1175 cm−1), which suggests that a portion of the material had not yet been completely converted into carbonated hydroxyapatite, perhaps due to the short time (48 h) in contact with demineralized dentin and artificial saliva, which favors the bioactivity of the material.

Figure 2 shows the average infrared spectra obtained for the samples of the untreated group (G1) compared to the groups treated with the IR (G3) and UV (G4) lasers in the region between 1150 and 450 cm−1. As in the previous analysis, no absorption bands appeared or disappeared, which suggests that there was no tissue degradation, such as carbonization, and no formation of new material resulting from the irradiations. The irradiation of dentin with the IR laser promoted minor changes in dentin composition, with a small reduction in the intensity of the peaks corresponding to the ν4 bending mode and the ν3 asymmetric vibrations of phosphate. When analyzed by the ATR method (Fig. 3), it was detected that laser irradiation decreased the intensity of the bands of amides I, II and III, corresponding to the organic content of the dentin, as well as the water content (absorbed water, antisymmetric

stretch at 3450 cm−1; water or amide A at 3300 cm−1; ν1 + 2ν2 vibration of water at 3212 cm−1; amide B at 3083 cm−1; and adsorbed water, ν2 H–O–H bending, at 1553 cm−1) [13]. This fact is consistent with the literature [5] and confirms that there was a temperature increase above 100 °C, sufficient to promote the removal of carbonate and water, as well as the protein denaturation of dentin. On the other hand, irradiation of dentin with the UV laser promoted a significant increase in the intensity of the 1021 cm−1 peak (ν3 asymmetric vibration of phosphate), which may be due to the increase in local temperature promoting a structural rearrangement in hydroxyapatite. Such rearrangement is evidenced by the significant change in the proportion between bands 1 and 2 of the phosphate triplet (increase of the 562 cm−1 band and decrease of the 600 cm−1 band), as can be seen in Fig. 2. In addition, irradiation with the UV laser reduced the intensity of the bands corresponding to the organic material and water of the dentin (Fig. 4) in the same way as IR laser irradiation, which suggests evaporation and denaturation of this material due to the strong absorption of the 355 nm wavelength by the dentin proteins and lipids [6]. The comparison between the effects promoted by UV and IR laser irradiation of the Biosilicate® on the demineralized dentin can be seen in Fig. 5. Greater intensity is observed in all phosphate bands (562, 600 and 1021 cm−1), and a reduction of the 676 cm−1 band, in the samples treated with Biosilicate®, regardless of laser irradiation, in comparison with the untreated samples (G1),

Chemical Effects of Nanosecond High-Intensity IR and UV Lasers …


Fig. 3 Average infrared absorption spectrum of demineralized root dentin untreated (G1) and after treatment with IR laser (G3) in the region between 4000 and 700 cm−1

Fig. 4 Average infrared absorption spectrum of demineralized root dentin untreated (G1) and after treatment with UV laser (G4) in the region between 4000 and 700 cm−1

which suggests formation and coating of carbonated hydroxyapatite on the dentin surface. The irradiation with the UV laser (G6) did not promote a significant difference in the intensity of the 1021 cm−1 band (ν3 asymmetric vibrations of phosphate) in comparison with the samples treated with Biosilicate® alone (G2); however, there is a slight increase in the intensity of the triplet bands (562 and 600 cm−1, ν4 bending mode of phosphate) after irradiation, which suggests that laser irradiation may interfere in HCA formation on dentin. However, it is also possible to



Fig. 5 Average infrared absorption spectrum of demineralized root dentin untreated (G1) and after treatment with Biosilicate® (G2), Biosilicate® + IR laser (G5) and Biosilicate® + UV laser (G6) in the region between 1690 and 450 cm−1

observe greater intensity of the 475 cm−1 band (Si–O–Si bending), the 530 cm−1 band (HPO42− vibration, amorphous phase of apatite) and the 1175 cm−1 band (tetrahedral vibrational mode of Si–O–Si), which suggests less conversion of the Biosilicate® to HCA after irradiation (G6) when compared to Biosilicate® applied without irradiation (G2). The irradiation with the UV laser may, in this case, have promoted some degradation in the material that decreased its bioactivity over the evaluated period. On the other hand, IR laser irradiation after application of the Biosilicate® (G5) promoted an increase in the intensities of the three phosphate bands (562, 600 and 1021 cm−1), with a significant rise in the last one (ν3 asymmetric vibrations of phosphate) in relation to the group treated with Biosilicate® alone (G2). In addition, there was a reduction in the intensity of the 1175 cm−1 band (tetrahedral vibrational mode of siloxane), and no difference in the intensity of the 475 cm−1 (Si–O–Si bending) and 530 cm−1 (HPO42− vibration, amorphous phase of apatite) bands when compared to the group treated with Biosilicate® alone (Fig. 6). These results suggest that irradiation with the IR laser increases the bioactivity of the Biosilicate®, probably due to the temperature rises promoted during the irradiations. In fact, the literature shows [5] that this laser promotes melting and recrystallization of the dentin surface, which results in temperature rises of the order of 1000 °C. Also, preliminary analyses revealed that irradiation of the Biosilicate® at this wavelength promotes greater coverage of the

dentin surface by the biomaterial [11], as well as the formation of a sodium and calcium phosphate phase [15], which intensifies the formation of HCA by the Biosilicate®. Although the same analysis was not carried out with the laser emitted in the UV, the literature [6] reports that the 355 nm laser has an important effect on proteins and lipids, which can increase the permeability of tooth substrates. In this way, due to the composition of the biomaterial, this study suggests that the effects of the UV laser on the Biosilicate® may have been smaller than those promoted by the IR laser. Also, considering the emission wavelength, absorption by different chromophores may have led to degradation of the material by the UV laser. Future morphological and X-ray diffraction analyses are necessary to confirm this hypothesis. A priori, considering the results observed in this study, it can be inferred that laser irradiation may favor the action of Biosilicate® in the remineralization of dentin with incipient caries lesions, and the IR laser emitted at the 1064 nm wavelength seems to be the most promising for this activity.
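The band-by-band comparisons described above (mean band intensity of a treated group versus an untreated group) can be sketched in code. This is an illustrative sketch only: the function names, the ±8 cm−1 window and the spectra in the example are hypothetical and not taken from the study.

```python
# Illustrative sketch: compare mean FTIR band intensities between two groups
# of spectra at selected band positions (all values here are placeholders).

def band_intensity(wavenumbers, absorbances, center, half_width=8):
    """Mean absorbance inside a window of +/- half_width cm^-1 around center."""
    window = [a for w, a in zip(wavenumbers, absorbances)
              if center - half_width <= w <= center + half_width]
    return sum(window) / len(window)

def compare_groups(wavenumbers, group_a, group_b, bands):
    """Ratio of mean band intensity (group_b / group_a) for each band position."""
    ratios = {}
    for center in bands:
        mean_a = sum(band_intensity(wavenumbers, s, center) for s in group_a) / len(group_a)
        mean_b = sum(band_intensity(wavenumbers, s, center) for s in group_b) / len(group_b)
        ratios[center] = mean_b / mean_a
    return ratios
```

A ratio above 1 at, say, 1021 cm−1 would correspond to the intensity increases reported for the treated groups; in practice a baseline correction step would precede such a comparison.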

4

Conclusion

Under the experimental conditions of the present study, it can be concluded that high-intensity Q-switched Nd:YAG laser irradiation can augment the bioactivity of the Biosilicate®, evidenced by the greater formation of carbonated hydroxyapatite. For this, the wavelength of 1064 nm should be used.



Fig. 6 Average infrared absorption spectrum of demineralized root dentin untreated (G1) and after treatment with Biosilicate® (G2) and Biosilicate® + IR laser (G5) in the region between 1690 and 450 cm−1

Acknowledgements To FAPESP (2017-21887-4), PROCAD-CAPES (88881.068505/2014-01), National Institute of Photonics (CNPq/INCT 465763/2014-6), Multiuser Experimental Center of UFABC (CEM-UFABC) and Vitreous Materials Laboratory (LaMaV UFSCar). The authors are grateful to Prof. Oscar Peitl (UFSCar) for the methodological suggestions given in the course of the study.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Pitts N, Zero D, Marsh P et al (2017) Dental caries. Nat Rev Dis Primers 3(17030):1–16
2. Nanci A (2007) Ten Cate's oral histology: development, structure and function. Mosby, St. Louis, Missouri
3. Tenuta LMA, Cerezetti RV, Del Bel Cury AA, Tabchoury CP, Cury JA (2008) Fluoride release from CaF2 and enamel demineralization. J Dent Res 87:1032–1036
4. Featherstone JD (2000) Caries detection and prevention with laser energy. Dent Clin North Am 44:955–969
5. Antunes A, Vianna SS, Gomes ASL, de Rossi W, Zezell DM (2005) Surface morphology, elemental distribution, and spectroscopic changes subsequent the application of nanosecond pulsed Nd:YAG laser on dental enamel surface. Laser Phys Lett 2:141–147
6. Wheeler CR, Fried D, Featherstone JD, Watanabe LG, Le CQ (2003) Irradiation of dental enamel with Q-switched lambda = 355-nm laser pulses: surface morphology, fluoride adsorption, and adhesion to composite resin. Lasers Surg Med 32:310–317

7. Peitl O, Zanotto ED, Hench LL (2001) Highly bioactive P2O5-Na2O-CaO-SiO2 glass-ceramics. J Non-Cryst Solids 292:115–126
8. Queiroz CS (2004) In vitro study models to evaluate the effect of fluoride on enamel and dentin demineralization and remineralization. Thesis (Doctorate in Cariology). Universidade Estadual de Campinas, Piracicaba
9. Hara AT, Queiroz CS, Giannini M, Cury JA, Serra MC (2004) Influence of the mineral content and morphological pattern of artificial root caries lesion on composite resin bond strength. Eur J Oral Sci 112:67–72
10. Tirapelli C, Panzeri H, Soares RG, Peitl O, Zanotto ED (2010) A novel bioactive glass-ceramic for treating dentin hypersensitivity. Braz Oral Res 24:381–387
11. Ana PA, Pereira DL, Ferreira ES, Figueredo DC, Daguano JKFB, Zezell DM (2019) Advances in the prevention and monitoring of root dentin demineralization using lasers. In: SBFoton proceedings of international optics and photonics conference (SBFoton IOPC). Sao Paulo, Brazil, pp 1–6. https://doi.org/10.1109/SBFoton-IOPC.2019.8910213
12. Baker MJ, Trevisan J, Bassan P et al (2014) Using Fourier transform IR spectroscopy to analyze biological materials. Nat Protoc 9:1771–1791
13. Benetti C, Ana PA, Bachmann L, Zezell DM (2015) Mid-infrared spectroscopy analysis of the effects of erbium, chromium:yttrium-scandium-gallium-garnet (Er,Cr:YSGG) laser irradiation on bone mineral and organic components. Appl Spectrosc 69:1496–1504
14. Movasaghi Z, Rehman S, Rehman I (2008) Fourier transform infrared (FTIR) spectroscopy of biological tissues. Appl Spectrosc Rev 43:174–179
15. Pereira GS (2019) Characterization and evaluation of Biosilicate® associated with Nd:YAG laser for the prevention of root caries. Dissertation (Master's in Biotechnoscience). UFABC, Santo André

Antioxidant Activity of the Ethanol Extract of Salpichlaena Volubilis and Its Correlation with Alopecia Areata A. B. Souza, T. K. S. Medeiros, D. Severino, C. J. Francisco, and A. F. Uchoa

Abstract

Plants are organisms that produce and store a diversity of bioactive substances, including primary and secondary metabolites. Secondary metabolites can have therapeutic properties and are called active ingredients. Salpichlaena volubilis, known in Amazonas as "Rabo de onça", has its leaves and rhizomes used as remedies. In the scientific literature, no records were found about this species in the treatment of diseases, only reports of its use for the treatment of alopecia areata. This work aimed to evaluate the antioxidant activity of the ethanol extract of S. volubilis and its correlation with alopecia areata. Two methods were used in this first technical-scientific essay: the first was the capture of the organic radical DPPH, and the second was the characterization of 1O2. To compare the determination of the antioxidant action, the standard DPPH/TROLOX organic-radical capture model and the extracts of S. volubilis were used. The results are presented as UV/visible absorption spectra and extract kinetics, and were significant compared with the standard model. The leaf/rhizome extract showed the best suppression of DPPH, 80% in five minutes. In the characterization of 1O2, evaluated by the emission at 1270 nm when excited at 532 nm, it was possible to observe that the extracts, compared with the methylene blue/acetonitrile model, present singlet oxygen generation. In the emission spectrum of 1O2 irradiated at 532 nm, the leaf extract/ACN showed a spectral curve characteristic of the generation of 1O2, with maximum emission at 1270 nm, while the extracts of the rhizome and leaf/rhizome had little emission of 1O2.

A. B. Souza (&), A. F. Uchoa: Instituto de Engenharia Biomédica, Universidade Anhembi Morumbi, São José dos Campos, SP, Brazil
D. Severino: Instituto de Química, Universidade de São Paulo, São Paulo, SP, Brazil
T. K. S. Medeiros: Centro Universitário do Norte - UNINORTE, Manaus, AM, Brazil
C. J. Francisco: Universidade Nilton Lins - UNL, Manaus, AM, Brazil

Keywords

Salpichlaena volubilis · TROLOX · DPPH · Alopecia areata

1

Introduction

Many cultivated and wild plants are used for the management of various diseases. The search for relief and cure of diseases through the ingestion of herbs was perhaps one of the first ways of using these products. In recent years, the demand for natural products has increased, in what has been called the "naturalist race" [1]. In Brazil, the use of medicinal plants had a multicultural origin, given that it seemed to be the only source of healing for the natives, who contributed their traditional knowledge to medicine, with examples such as indigenous curare, foxglove (Digitalis purpurea) and jaborandi (Pilocarpus spp.). Other plants were brought by Europeans, like chamomile (Matricaria chamomilla). The Brazilian flora is rich in plants used in folk medicine to treat illnesses, mainly in the northern regions of Brazil. Among the species of the Brazilian flora we can mention Salpichlaena volubilis, popularly called "rabo de onça", native to the Amazon region and belonging to the family Blechnaceae. S. volubilis is characterized by indeterminate growth, reaching several meters in length [2]. In a bibliographic survey, no scientific data were found regarding S. volubilis; however, in the Amazon region the natives use extracts from this plant to treat alopecia areata (AA). This disease affects the anagen phase of hair follicles, with the participation of CD4+CD25+high regulatory T (Treg) cells in the mechanism of development of this disease [3]. AA is not restricted to sex, age, racial criteria or hair type, and has

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_179



A. B. Souza et al.

a prevalence rate of 2% in the population [4]. In this study, with the aim of correlating empirical knowledge with the mechanisms of induction of alopecia, tests were carried out relating the disease to oxidative stress.

2

Material and Methods

The plant S. volubilis was located on the left bank of highway BR-174, km 4, Manaus-Presidente Figueiredo, Amazonas/Brazil, at the geographic location 2°56′54.19″S, 60°02′04.11″W, elevation 45 m (altitude of the viewpoint, 397 m). The identification of the plant was carried out by the botanist Dr. Jefferson Prado (IBt, São Paulo-SP), under voucher no. 465030; a copy was deposited in the scientific herbarium with the following identification: popular name: "Rabo de onça"; family: Blechnaceae; scientific name: Salpichlaena volubilis (Kaulf.) J. Sm.; genus: Salpichlaena; species: volubilis. The plant was dried in a ventilated place protected from light. After drying, the leaves and rhizomes were separated and manually fragmented, producing 100 g of leaves and 100 g of rhizome. Each 100 g sample was subjected to extraction with 600 mL of 92.8% ethanol, remaining in a sealed amber container at room temperature for a period of 5 months. After this period, the material was filtered, the solvent was removed by evaporation, and the residue was resuspended in ethanol for the studies of radical suppression by DPPH. In this assay, 2 mL of the DPPH radical solution at a concentration of 5 × 10−4 mol L−1 were used with 0.5 mL of each ethanol extract. After analyzing the kinetic curves of the extracts, the percentages of the absorbance of the DPPH radical were calculated by comparison with a pre-established calibration curve. As a comparison parameter, solutions of TEAC (Trolox-equivalent antioxidant capacity) were used, with concentrations of 1.49 × 10−5, 2.48 × 10−5, 3.48 × 10−5, 4.97 × 10−5 and 9.94 × 10−5 mol L−1 in ethanol. The absorption spectra were determined using a SHIMADZU® UV-Vis 2401PC recording spectrophotometer.
The generation and suppression of singlet oxygen (1O2) were determined by the direct method: irradiation at 532 nm, with a 0.3 ns pulse, and detection of the emission at 1270 nm in a methylene blue solution in acetonitrile, with increasing fractions of the extract in the same solvent, in a quartz cuvette. A SHIMADZU® UV-2401PC UV-Vis recording spectrophotometer and a Hamamatsu R5509® photomultiplier were used.
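The calibration series described above follows a simple aliquot dilution. As a sketch, assuming a stock concentration of 4.97 × 10−4 mol L−1 (consistent with the approximately 5 × 10−4 mol L−1 stated in the text and with the concentrations listed in Table 1):

```python
# Aliquot dilution behind the DPPH calibration series.
# STOCK is an assumption inferred from the tabulated concentrations.

STOCK = 4.97e-4      # mol/L, DPPH stock in ethanol (assumed)
TOTAL_VOLUME = 10.0  # mL, final volume after completing with ethanol

def diluted_concentration(aliquot_ml, stock=STOCK, total_ml=TOTAL_VOLUME):
    """Concentration after diluting an aliquot of stock to the total volume."""
    return stock * aliquot_ml / total_ml

for v in (0.3, 0.5, 0.7, 1.0, 2.0):
    print(f"{v} mL -> {diluted_concentration(v):.2e} mol/L")
```

With this assumed stock, the five aliquots reproduce the five concentrations of the calibration series (1.49 × 10−5 to 9.94 × 10−5 mol L−1).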

3

Results

3.1 Extract Characterization

The ethanol extract of the species S. volubilis showed a strong absorption band at λmax = 411 nm, the characteristic Soret band, and a band around 667 nm, in the Q-band region, characterizing in both bands the presence of chlorophylls "a" and "b". The spectral curves are shown in Fig. 1.

A. Characterization of DPPH

The DPPH radical was characterized by its ultraviolet-visible spectrum. Figure 2 presents its characteristic spectral curve, with absorption maxima at 208 nm (UV-C), 326 nm (UV-A) and 516 nm, in the green region of the visible spectrum. Table 1 shows the data used to determine the calibration curve (Fig. 3). The absorption data refer to the absorption band at 516 nm.

B. Calibration curve for DPPH radical concentrations

Figure 3 shows the calibration curve for the concentration of the DPPH radical and the equation determined from it, y = 11,436x − 0.0202. The curve shown in Fig. 3 shows linearity, with a regression coefficient (r2) of 0.99.

C. Kinetics of antioxidant activity of TROLOX

(a) Curve A, referring to the DPPH decay kinetics against the suppressive action of TROLOX, presents a decay of approximately 0.86 in 1 min, approximately 11% (Fig. 4). (b) The kinetics of the antioxidant activity of the rhizome extract of S. volubilis showed suppression of the organic free radical DPPH of 76% in 2 min. Compared with the standard DPPH/TROLOX model, the rhizome extract showed significant suppression of the organic radical DPPH. (c) The kinetics of the antioxidant action of the leaf extract of S. volubilis showed suppression of 18% in 3 min. Compared with the standard DPPH/TROLOX model, the leaf extract showed slow suppression of the organic radical DPPH. (d) For the kinetics of the antioxidant action of the leaf and rhizome extract of S. volubilis, the extract quickly reached a steady state in the suppression of the organic radical DPPH, approximately 80% in 5 min, compared to the

Fig. 1 Absorption spectrum of leaf and rhizome extracts of S. volubilis in ethanol

Fig. 2 UV/visible absorption spectrum of the organic radical DPPH, in ethanol solution


Table 1 Absorbance for different concentrations of DPPH in 10 mL ethanol

Sample  DPPH (mL)  Ethanol (mL)  Concentration (mol/L)  Absorbance (516 nm)
1       0.3        9.7           1.49 × 10−5            0.15
2       0.5        9.5           2.48 × 10−5            0.26
3       0.7        9.3           3.48 × 10−5            0.36
4       1.0        9.0           4.97 × 10−5            0.58
5       2.0        8.0           9.94 × 10−5            1.10

Fig. 3 Calibration curve for different concentrations of DPPH (up to 1.0 × 10−4 mol L−1) in ethanol solution versus absorbance
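As a cross-check on the calibration, an ordinary least-squares fit to the Table 1 values yields a line close to the reported y = 11,436x − 0.0202 with r2 above 0.99 (small differences presumably reflect rounding in the tabulated data). A pure-Python sketch:

```python
# Least-squares line through the Table 1 calibration points
# (absorbance at 516 nm versus DPPH concentration).

conc = [1.49e-5, 2.48e-5, 3.48e-5, 4.97e-5, 9.94e-5]  # mol/L
absb = [0.15, 0.26, 0.36, 0.58, 1.10]                 # absorbance at 516 nm

n = len(conc)
mx = sum(conc) / n
my = sum(absb) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, absb))
sxx = sum((x - mx) ** 2 for x in conc)
syy = sum((y - my) ** 2 for y in absb)

slope = sxy / sxx                      # ~1.13e4 L/mol
intercept = my - slope * mx            # small negative offset
r_squared = sxy ** 2 / (sxx * syy)     # > 0.99, confirming linearity

print(f"A = {slope:.0f} * C {intercept:+.4f}, r^2 = {r_squared:.3f}")
```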

standard model DPPH/TROLOX. Note the presence of secondary metabolites with antioxidant action.

D. Kinetics of antioxidant activity of TROLOX/extracts

The components extracted from the rhizome show faster suppression of the organic DPPH radical, 76% in two minutes, better than the standard DPPH/TROLOX model, which suppressed the DPPH radical by 11% in one minute. The leaf and rhizome extract suppressed the organic radical DPPH by 80% within five minutes, while the leaf extract suppressed the DPPH organic radical by 18% in three minutes. From the results obtained, it is observed that the extracts contain substances with antioxidant action (Fig. 5).

E. Characterization of Singlet Oxygen 1O2

Singlet oxygen (1O2) was produced by excitation of methylene blue at 532 nm. Characterization was determined

by the direct method, through the emission at 1270 nm. Figure 6 shows the characteristic emission curves: the upper curve is the emission in the absence of plant extract, the second in the presence of the leaf extract, followed by the rhizome extract and, last, the leaf/rhizome extract, all in acetonitrile. It can be observed that, in the order mentioned, there is a decrease in signal intensity, indicating that suppression of the 1O2 species occurred. In Fig. 7, the same test was carried out for the extracts in the absence of methylene blue, where it was observed that the extracts showed the characteristic emission spectrum of 1O2, with maximum at 1270 nm. The spectral curves showed that the one with the highest emission was the leaf extract, followed by the leaf/rhizome and rhizome extracts. This emission was attributed to the presence of chlorophylls, which are considered excellent photosensitizers and have absorption bands in the irradiation region, 532 nm. For a better interpretation of the results, a study of the emission and stability of 1O2 was carried out. In this


Fig. 4 a Kinetics of antioxidant activity of TEAC (100 µL) with the organic radical DPPH (2 mL) in ethanol solution. b Kinetics of the antioxidant activity of rhizome extract (0.5 mL) of S. volubilis with the organic radical DPPH (2 mL) in ethanol solution. c Kinetics of antioxidant activity of S. volubilis leaf extract (0.5 mL) with the organic radical DPPH (2 mL) in ethanol solvent. d Kinetics of antioxidant activity of leaf and rhizome extract (0.5 mL) with the organic radical DPPH (2 mL) in ethanol solvent

Fig. 5 Kinetics of the comparison of the standard capture model of the organic radical DPPH/TROLOX, Leaf Extract/DPPH, Rhizome Extract/DPPH and Leaf and Rhizome Extract/DPPH, in ethanol solution
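The suppression percentages quoted for the kinetics curves are relative drops of the DPPH absorbance at 516 nm. A minimal sketch (the absorbance readings below are hypothetical, chosen only to reproduce a figure of roughly 76%, as reported for the rhizome extract):

```python
# Percent suppression of the DPPH radical from initial and final absorbance.

def suppression_percent(a_initial, a_final):
    """Percent of the DPPH radical suppressed at 516 nm."""
    return 100.0 * (a_initial - a_final) / a_initial

# Hypothetical readings shaped like the reported rhizome result:
print(round(suppression_percent(1.00, 0.24), 1))  # prints 76.0
```

In practice the absorbances would first be converted to concentrations through the Table 1 calibration curve, as described in the methods.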



Fig. 6 Emission spectrum of 1O2 with methylene blue, acetonitrile, leaf extract, rhizome extract and leaf/rhizome extract

test, it was evidenced that the emission of the leaf extract is much higher than that of the other extracts, proving that this extract has a high photosensitization power. In addition, a time-resolved test was performed for the methylene blue solution against the concentration of the extracts, in which the suppression of 1O2 in the presence of the extracts was observed.

Fig. 7 Emission spectrum of ethanol extracts from S. volubilis, suspended in acetonitrile and irradiated at 532 nm

4

Discussion

In the DPPH suppression tests, it can be seen that all extracts showed suppression, showing that hydroalcoholic extraction recovers metabolites from S. volubilis with the power to suppress radical species. All extracts showed suppression power close to that of TROLOX, which is a reference standard for free-radical-suppressing activity [5, 6]. The suppression of 1O2 was observed for all extracts, being most effective for the leaf/rhizome extract, followed by the rhizome extract and, finally, the leaf extract. Direct analysis of these results can lead to misinterpretation. It is important to mention that all the extracts contain chlorophyll, a potential photosensitizer; however, the leaf extract has a greater amount of this metabolite. Chlorophyll is considered one of the most robust photosensitizers and has an absorption band at 532 nm, where excitation occurred. In this way, co-photosensitization occurs in the solution, where both photosensitizers (chlorophyll and methylene blue) are excited at 532 nm. It was also determined from the ultraviolet-visible spectra that the extracts contain chlorophyll and that the concentration of this metabolite follows the order leaf > leaf/rhizome > rhizome. For a better analysis, suppressive activity with temporal resolution was determined, in which suppressive activity by other metabolites was observed. In the time-resolved curve, it was observed that at the moment of the pulse the transient has greater intensity; however, the lifetime of the species is reduced with increasing extract concentration. These results show that the power of suppression of 1O2 and other oxidative species appears to be superior to that obtained by a direct calculation from the area below the decay curve. This result does not preclude the purposes established here, given the suppressive activity toward DPPH and 1O2; however, for quantification, it is necessary to eliminate the chlorophyll from each extract.

5

Conclusions

Two tests evidenced different suppressions of the reactive species. In the first, free radical suppression was observed through the TROLOX/DPPH assay. In the second, suppression of 1O2 with temporal resolution was observed. If the extracts suppress both the radical and 1O2, and hair loss is related to oxidative stress, then the extracts may fight hair loss caused by the action of oxidative species. However, more in-depth studies are needed to characterize the chemical compounds of the species S. volubilis and to confirm the correlation with alopecia areata.

Acknowledgements The authors wish to acknowledge FAPESP (Process 2013/07937-8) and CEPID REDOXOMA, Center for Research on Redox Processes in Biomedicine, and Professors Dr. Jefferson Prado (IBt-SP), Dr. Leandro Procópio (UAM) and Dr. Henrique Cunha Carvalho (UAM).

Conflict of Interest The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.


References

1. Naveed M, Majeed F, Taleb A, Zubair HM, Shumzaid M, Farooq MA et al (2020) A review of medicinal plants in cardiovascular disorders: benefits and risks. Am J Chin Med 48(2):259–286
2. Prado J (2005) Flora of the Ducke Reserve, Amazonia, Brazil: Pteridophyta-Blechnaceae. Rodriguésia 56(86):33–34
3. Jalili RB, Kilani RT, Li Y, Khosravi-Maharlooie M, Nabai L, Wang EHC et al (2018) Fibroblast cell-based therapy prevents induction of alopecia areata in an experimental model. Cell Transplant 27(6):994–1004
4. Lima N, Cabral A, Furtado F, de Barros LÍ, Macedo R (2008) Urtica dioca: a review of studies of its pharmacological properties. Rev Bras Farm 89(3):199–206
5. Pedriali CA, Fernandes AU, Bernusso LDC, Polakiewicz B (2008) The synthesis of a water-soluble derivative of rutin as an antiradical agent. Quim Nova 31(8):2147–2151
6. Pedriali CA, Fernandes AU, Santos PA, Silva MM, Severino D, Silva MB (2010) Antioxidant activity, cito- and phototoxicity of pomegranate (Punica granatum L.) seed pulp extract. Ciência e Tecnol Aliment 30(4):1017–1021

Evaluation of the Heart Rate Variability with Laser Speckle Imaging C. M. S. Carvalho, A. G. F. Marineli, L. dos Santos, A. Z. de Freitas, and M. M. Amaral

Abstract

The Autonomic Nervous System (ANS) is responsible for regulating various physiological processes in the human body. Heart Rate Variability (HRV) is a measurement used in evaluating the modulation of the ANS in different physiological conditions, such as stress, physical activity, sleep and metabolic alterations, and also in pathological conditions. Physical activity results in important changes in the cardiovascular system, such as an increase in blood flow and a decrease in peripheral vascular resistance. Monitoring the peripheral microcirculation represents an important aid in evaluating the general condition of an individual. Laser Speckle Contrast Analysis (LASCA) is a non-invasive optical technique that uses nonionizing radiation in the infrared region and is important in the diagnosis of problems regarding the peripheral microcirculation. This work aimed to implement a method to obtain the HRV from the peripheral microcirculation using the Laser Speckle Contrast Analysis (LASCA) technique. A commercial LASCA equipment was used to obtain a face video, and custom software was implemented to process the LASCA videos and obtain the HRV. A heart rate monitor (HRM) was also used to measure the HRV, and the values were compared against the ones obtained with LASCA. The method had consistent results in obtaining both the pulsation and the HRV, making it possible for future studies to use this technique.

Keywords

Heart rate variability · Laser speckle contrast analysis · Microcirculation · LASCA

C. M. S. Carvalho (&) Centro Universitário Uninovafapi, Vitorino Orthiges Fernandes, Teresina, 6123, Brazil e-mail: [email protected] C. M. S. Carvalho  A. G. F. Marineli  L. dos Santos  M. M. Amaral Instituto Cientifico E Tecnológico, Universidade Brasil, São Paulo, Brazil A. Z. de Freitas Instituto de Pesquisas Energéticas e Nucleares–IPEN/ CNEN, São Paulo, Brazil

1

Introduction

The autonomic nervous system (ANS) plays an important role in regulating various physiological processes in the human body, whether in normal or pathological situations [1]. The ANS participates in controlling the cardiovascular system through both sympathetic and parasympathetic nerve endings. The former act on the myocardium, while the latter act on the sinus node, atrial myocardium and atrioventricular node [1]. These systems are antagonistic, in such a manner that the sympathetic endings promote an increase in heart rate while the parasympathetic system produces the opposite response. For this reason, changes in heart rate are verified as a physiological response to stimuli such as stress, physical activity, sleep, metabolic alterations, pathological conditions, and others [2]. An increase in heart rate due to the action of the sympathetic system and a reduction in parasympathetic activity can lead to an increase in the risk of morbidity due to cardiovascular problems. This highlights the need for evaluation tools that directly measure heart rate and guide interventions to maintain its proper function [3]. Thus, the development of techniques to analyze heart rate variability (HRV) has contributed to evaluating ANS modulation, both in physiological and pathological situations. HRV has been applied as an indicator of cardiovascular complications, such as high blood pressure, acute myocardial infarction, coronary insufficiency, and arteriosclerosis. An elevated HRV translates into proper cardiac function and characterizes a healthy individual, while a low HRV is a strong indicator of poor heart function [1].

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_180


The HRV is represented by the oscillations of the intervals between consecutive heartbeats. It may be estimated by measuring the difference between consecutive R-R intervals on the electrocardiogram (ECG). Monitoring the HRV through a non-invasive technique may be applied to identify phenomena associated with the ANS in healthy individuals, as well as in those with some abnormal condition [4]. The ECG, obtained with the electrocardiograph, provides a graphic representation of the cardiac impulses. The P wave on the ECG represents the impulse generated in the sinus node, which propagates to the atria, resulting in atrial depolarization. This impulse reaches the ventricles through the atrioventricular node, causing the depolarization of the ventricles, represented on the ECG by the Q, R, and S waves (the QRS complex). The T wave represents ventricular repolarization [5]. Hence, the HRV is measured based on the analysis of the intervals between the R waves registered on the ECG. Besides the electrocardiograph, there are other instruments for the evaluation of the HRV: heart rate monitors (HRM) are cheaper and more practical and can be utilized both during exercise and at rest [6]. However, both the ECG and the HRM are devices that need to be in contact with the skin to perform their measurements. The HRV analysis has also been applied as an efficient method to evaluate the effect of physical activity on heart function. Based on HRV indicators, it is possible to verify that individuals who are physically active show better autonomic cardiac modulation, indicating that physical exercise produces a positive cardiac response that results in a lower risk of cardiovascular diseases [7]. Accordingly, studies have shown that physical activities, within normal patterns, help to regulate and modulate the heart frequency, which results from a better adaptation of the ANS and vagal control to body movements [8].
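As a concrete illustration, the basic time-domain HRV indices (mean R-R interval, SDNN, RMSSD, mean heart rate) can be computed directly from a list of R-R intervals. The sketch below uses invented interval values, not data from this study:

```python
import math

def time_domain_hrv(rr_ms):
    """Compute basic time-domain HRV indices from R-R intervals (in ms)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: standard deviation of all R-R intervals (sample std)
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    # RMSSD: root mean square of successive interval differences
    rmssd = math.sqrt(sum((b - a) ** 2 for a, b in zip(rr_ms, rr_ms[1:])) / (n - 1))
    mean_hr = 60000.0 / mean_rr  # beats per minute
    return mean_rr, sdnn, rmssd, mean_hr

# Illustrative R-R series (ms)
rr = [850, 842, 860, 835, 855, 848]
mean_rr, sdnn, rmssd, hr = time_domain_hrv(rr)
print(round(mean_rr, 1), round(sdnn, 1), round(hr, 1))
```

A higher SDNN corresponds to the "elevated HRV" of a healthy individual mentioned above.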
The stress induced by physical activity generates important alterations in the cardiovascular system, and the analysis of the HRV helps to monitor the cardiac frequency during its execution [7]. Physical activity increases the blood flow, creating more pressure on the endothelium walls. This stimulates the release of vasodilator agents, consequently lowering the peripheral vascular resistance [9]. Monitoring the peripheral microcirculation offers important data for evaluating the general condition of an individual. Recent studies suggest an association between changes in the peripheral blood flow (microcirculation) and the development of organic complications [10, 11]. The microcirculation consists of a system of vessels with diameters smaller than 100 µm, called arterioles, metarterioles, capillaries, and venules, which are responsible for transporting the blood flow, carrying oxygen and nutrients to the cells and tissues of the organism [12].


Utilizing complementary techniques for the evaluation of cardiovascular function may offer an important contribution to the clinical field. Regarding the evaluation of the peripheral microcirculation, the Laser Speckle Contrast Analysis (LASCA) technique represents an optical, non-invasive alternative. LASCA uses non-ionizing radiation, in the infrared region, to obtain images of vascularized areas with temporal and spatial resolution of the blood flow, without contact. Since it is a non-invasive technique that causes no discomfort to the patient, it has been applied to aid the diagnosis of problems related to the peripheral microcirculation, measuring blood perfusion in brain tissue, kidney cortex, liver, and skin [13, 14]. The LASCA technique relies on the interference pattern that arises from randomly distributed scattering particles. Moving particles, such as red blood cells, change this interference pattern, and this change can be used to quantify movement. Hence, when the body surface is illuminated, imaging will show higher contrast in areas where there is an increase in blood flow [15, 16]. Thus, this study aimed to implement a method to obtain the HRV from the peripheral microcirculation utilizing the Laser Speckle Contrast Analysis (LASCA) technique.
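The statistic behind LASCA can be sketched as a spatial speckle-contrast map, K = σ/μ computed over a small sliding window. This is a simplified illustration, not the moorFLPI processing; the 7 × 7 window size and the synthetic test images are assumptions:

```python
import numpy as np

def speckle_contrast(frame, win=7):
    """Spatial speckle contrast K = sigma/mu over a sliding win x win window.
    Motion of scatterers blurs the speckle pattern and alters the local K."""
    pad = win // 2
    f = frame.astype(float)
    h, w = f.shape
    K = np.zeros_like(f)
    for i in range(pad, h - pad):
        for j in range(pad, w - pad):
            patch = f[i - pad:i + pad + 1, j - pad:j + pad + 1]
            mu = patch.mean()
            K[i, j] = patch.std() / mu if mu > 0 else 0.0
    return K

# A frozen, fully developed speckle pattern shows strong local contrast,
# while a perfectly uniform (fully blurred) region shows zero contrast.
rng = np.random.default_rng(0)
static = rng.exponential(scale=100.0, size=(32, 32))
blurred = np.full((32, 32), 100.0)
print(speckle_contrast(static)[16, 16], speckle_contrast(blurred)[16, 16])
```

Mapping the contrast map frame by frame is what yields the perfusion-sensitive images described in the text.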

2

Materials and Methods

The study was approved by the respective ethics committee (CEP Universidade Brasil 2,685,042) and conducted in agreement with the Declaration of Helsinki for medical research involving human subjects. Commercial equipment (moorFLPI, Moor Instruments, Devon, UK) was utilized to obtain the images with the Laser Speckle Contrast Analysis (LASCA) technique. A five-minute video of the frontal region of the face was obtained with the LASCA technique. Software was developed in a MATLAB R2019b (MathWorks, Inc., Natick, Massachusetts, United States) environment for LASCA video processing and for obtaining the HRV. Figure 1 shows the flowchart of the implemented software. The video was initially loaded into the developed application. The initial frame (frame 1 in Fig. 1) was presented to the operator, who selected the area of interest (the red rectangle in the ROI in Fig. 1), where the analysis of the signal variance was performed. The selected ROI was utilized in all the other frames of the video. In the ROI, the average intensity of the signal was computed on each frame, thus obtaining the pulse profile throughout the video. After that, the HRV analysis was computed, allowing us to determine the variation of the intervals between consecutive peaks, which corresponds to the R-R interval.
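The processing chain described here (fixed ROI, mean intensity per frame, peak-to-peak intervals) was implemented in MATLAB; the following is an illustrative re-sketch in Python, run on a synthetic 25 fps "video" (the ROI coordinates, frame size, and the 1.19 Hz pulse are invented for demonstration):

```python
import numpy as np

def pulse_profile(frames, roi):
    """Mean intensity inside the ROI (r0, r1, c0, c1) for each frame."""
    r0, r1, c0, c1 = roi
    return np.array([f[r0:r1, c0:c1].mean() for f in frames])

def rr_intervals(profile, fps):
    """Locate local maxima of the pulse profile and return the
    peak-to-peak (R-R equivalent) intervals in milliseconds."""
    peaks = [i for i in range(1, len(profile) - 1)
             if profile[i] > profile[i - 1] and profile[i] >= profile[i + 1]]
    return [(b - a) * 1000.0 / fps for a, b in zip(peaks, peaks[1:])]

# Synthetic 25 fps video: uniform frames whose brightness pulses at ~1.19 Hz
fps, n = 25, 250
t = np.arange(n) / fps
frames = [np.full((40, 40), 100 + 10 * np.sin(2 * np.pi * 1.19 * ti)) for ti in t]
prof = pulse_profile(frames, (10, 30, 10, 30))
rr = rr_intervals(prof, fps)
print(rr[:3])
```

Note that every interval is forced to a multiple of one frame period (40 ms at 25 fps), which anticipates the quantization effect discussed in the results.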

Evaluation of the Heart Rate Variability with Laser Speckle …


Fig. 2 LASCA image of the microvascularization in the frontal section of the face

Fig. 1 Flowchart of the software algorithm to obtain the HRV from LASCA video analysis

Simultaneously with the acquisition of the LASCA video, a Polar V800 heart rate monitor (HRM), which has been proven reliable for measuring R-R intervals according to recognized standards [17], recorded the HRV. This result was used as a reference for comparison with the one acquired with the LASCA technique. The time series were compared using time-domain, frequency-domain, and nonlinear methods.
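For the frequency-domain comparison, a band peak can be extracted from an R-R series by evenly resampling the tachogram and taking its spectrum. The sketch below is a generic illustration, not the processing actually used in the study; the 4 Hz resampling rate and the synthetic 0.25 Hz (HF band) modulation are assumptions:

```python
import numpy as np

def band_peaks(rr_ms, bands):
    """Peak frequency per band from an evenly resampled R-R series (ms)."""
    t = np.cumsum(rr_ms) / 1000.0          # beat times in seconds
    fs = 4.0                               # common HRV resampling rate
    tt = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = np.interp(tt, t, rr_ms)      # evenly sampled tachogram
    rr_even = rr_even - rr_even.mean()
    spec = np.abs(np.fft.rfft(rr_even)) ** 2
    freqs = np.fft.rfftfreq(len(rr_even), 1.0 / fs)
    peaks = {}
    for name, (lo, hi) in bands.items():
        m = (freqs >= lo) & (freqs < hi)
        peaks[name] = float(freqs[m][np.argmax(spec[m])])
    return peaks

# Synthetic R-R series with a 0.25 Hz (respiratory-like, HF band) modulation
beat_t, rr = 0.0, []
for _ in range(300):
    rri = 850 + 40 * np.sin(2 * np.pi * 0.25 * beat_t)  # ms
    rr.append(rri)
    beat_t += rri / 1000.0
print(band_peaks(rr, {"HF": (0.15, 0.4)}))
```

The same routine applied with the VLF, LF, and HF bands yields the kind of per-band peak frequencies reported in Table 1.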

3

Results and Discussion

A video of the microcirculation in the analyzed area was obtained with the LASCA technique. Figure 2 shows a LASCA video frame revealing the frontal area of the face. The areas with higher microcirculation appear in white. The red rectangle represents the area where the microcirculation variation was calculated (ROI). The video was processed to extract the HRV from the microcirculation in the skin, as previously described. Figure 3a shows the average intensity of the microcirculation signal in the selected ROI. It is possible to identify a pattern consistent with the pulsation.

This signal (Fig. 3a), acquired from the analysis of the microcirculation in the skin, was used to calculate the HRV with the LASCA technique. The HRV was also measured with the HRM as a reference for comparison. The HRV results, both those acquired with the heart rate monitor (HRM) and those obtained with the LASCA technique, are presented in Fig. 3b. The HRV values obtained with the LASCA method present a wider distribution in comparison to the ones obtained with the reference method (heart rate monitor). Moreover, the histogram presents some equally spaced intervals that do not contain any counts. That may be due to the low sampling rate of the video (25 frames per second) in comparison to the heart rate monitor. Using a camera with a higher acquisition rate would be a possible solution to this problem. Table 1 presents the comparison of the time-domain, frequency-domain, and nonlinear results for the HRM and LASCA time series. Despite the low-sampling-rate limitation of the LASCA technique, it is possible to observe a good agreement between both techniques. Both the mean RR and the mean HR did not present a statistically significant difference (p-value < 0.0001).
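The equally spaced empty histogram bins follow directly from frame-rate quantization: at 25 fps a peak position is only resolved to the nearest frame, so every measured interval is a multiple of 1000/25 = 40 ms. A short sketch (the "true" interval values are invented for illustration):

```python
# At 25 fps, every LASCA-derived R-R interval snaps onto a 40 ms grid,
# leaving the histogram bins between grid points empty.
FPS = 25
frame_ms = 1000.0 / FPS  # 40 ms per frame

true_rr = [847.6, 852.3, 861.9, 838.1, 843.0]  # illustrative values (ms)
measured = [round(rr / frame_ms) * frame_ms for rr in true_rr]
print(measured)  # → [840.0, 840.0, 880.0, 840.0, 840.0]
```

A 100 fps camera, for instance, would shrink the grid spacing from 40 ms to 10 ms, narrowing the quantization gaps accordingly.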


Fig. 3 a Average microcirculation intensity in the ROI; b Variability of the heart rate obtained by both the Heart Rate Monitor (HRM) and LASCA

The present method shows consistent results in obtaining both the pulsation and the HRV, which makes it feasible for other studies to utilize this technique.

4

Conclusion

By analyzing the LASCA video, it was possible to obtain the pulsation profile and to calculate the heart rate variability.

In comparison to the results acquired with the reference method (Polar V800 monitor), the variability values obtained with the LASCA method present a wider distribution. The present method showed consistent results in obtaining both the pulsation and the HRV. More research is needed to determine the sensitivity of the technique regarding the differentiation of states of stress.

Table 1 Time-domain, frequency-domain and nonlinear results for HRM and LASCA time series

                                           HRM          LASCA
Time-domain results
  Mean RR (ms)                             847.6        901.7
  STD RR (SDNN) (ms)                       38.6         58.6
  Mean HR (bpm)                            70.93        66.83
  STD HR (bpm)                             3.30         4.49
Frequency-domain results—Peak (Hz)
  VLF (0–0.04 Hz)                          0.0195       0.0039
  LF (0.04–0.15 Hz)                        0.1289       0.1445
  HF (0.15–0.4 Hz)                         0.3203       0.3711
Nonlinear results
 Recurrence plot
  Mean line length (Lmean) (beats)         10.47        10.55
  Max line length (Lmax) (beats)           355          94
  Recurrence rate (REC) (%)                32.30        34.10
  Determinism (DET) (%)                    98.28        97.50
  Shannon entropy (ShanEn)                 3.136        3.151
  Approximate entropy (ApEn)               1.235        1.105
  Sample entropy (SampEn)                  1.702        1.317
  Detrended fluctuations (DFA): a1         1.312        0.922
  Detrended fluctuations (DFA): a2         0.988        0.666
 Other
  Correlation dimension (D2)               2.449        4.128
  Multiscale entropy (MSE)                 1.202–2.495  1.005–2.089

Acknowledgements C.M.S. Carvalho acknowledges CAPES for the financial support. This work was financially supported by CAPES and FAPESP 17/21851-0, 18/03517-8. Conflict of Interest The authors declare that there has been no conflict of interest.

References

1. Vanderlei LCM, Pastre CM, Hosshi RA et al (2009) Basic notions of heart rate variability and its clinical applicability. Rev Bras Cir Cardiovasc 24(2):205–217. https://doi.org/10.1590/S0102-76382009000200018
2. Lopes PFF, Oliveira MIB, André SMS et al (2013) Aplicabilidade clínica da variabilidade da frequência cardíaca. Rev Neurocienc 21(4):600–603. https://doi.org/10.4181/RNC.2013.21.870.4p
3. Marães VRFS (2010) Frequência cardíaca e sua variabilidade: análises e aplicações. Rev Andal Med Deporte 3(1):33–42
4. Aubert AE, Seps B, Beckers F (2003) Heart rate variability in athletes. Sports Med 33(12):889–919. https://doi.org/10.2165/00007256-200333120-00003
5. Guyton AC, Hall JE (2017) Tratado de fisiologia médica. Elsevier, Rio de Janeiro
6. Acharya UR, Joseph KP, Kannathal N et al (2006) Heart rate variability: a review. Med Bio Eng Comput 44:1031–1051. https://doi.org/10.1007/s11517-006-0119-0
7. Yamanaka Y, Hashimoto S, Takasu NN et al (2015) Morning and evening physical exercise differentially regulate the autonomic nervous system during nocturnal sleep in humans. Am J Physiol Regul Integr Comp Physiol 309(9):R1112–R1121. https://doi.org/10.1152/ajpregu.00127.2015
8. Edelhäuser F, Minnerop A, Trapp B et al (2015) Eurythmy therapy increases specific oscillations of heart rate variability. BMC Complement Altern Med 15:167. https://doi.org/10.1186/s12906-015-0684-6
9. Paschoal MA, Siqueira JP, Machado RV et al (2004) Efeitos agudos do exercício dinâmico de baixa intensidade sobre a variabilidade da frequência cardíaca e pressão arterial de indivíduos normotensos e hipertensos leves. Rev Cienc Med 13(3):223–234
10. Backer D, Donadello K, Sakr Y et al (2013) Microcirculatory alterations in patients with severe sepsis. Crit Care Med 41(3):792–799. https://doi.org/10.1097/CCM.0b013e3182742e8b
11. Santos DM, Quintans JSS, Quintans LJ Jr et al (2019) Association between peripheral perfusion, microcirculation and mortality in sepsis: a systematic review. Rev Bras Anestesiol 69(6):605–621. https://doi.org/10.1016/j.bjane.2019.09.005
12. Puskarich MA, Shapiro NI, Massey MJ et al (2016) Lactate clearance in septic shock is not a surrogate for improved microcirculatory flow. Acad Emerg Med 23:690–693. https://doi.org/10.1111/acem.12928
13. Boas DA, Dunn AK (2010) Laser speckle contrast imaging in biomedical optics. J Biomed Opt 15(1):011109. https://doi.org/10.1117/1.3285504
14. Kazmi SMS, Richards LM, Schrandt CJ et al (2015) Expanding applications, accuracy, and interpretation of laser speckle contrast imaging of cerebral blood flow. J Cereb Blood Flow Metab 35(7):1076–1084. https://doi.org/10.1038/jcbfm.2015.84
15. Dunn AK, Bolay H, Moskowitz MA et al (2001) Dynamic imaging of cerebral blood flow using laser speckle. J Cereb Blood Flow Metab 21:195–201. https://doi.org/10.1097/00004647-200103000-00002
16. Cordovil I, Huguenin G, Rosa G et al (2012) Evaluation of systemic microvascular endothelial function using laser speckle contrast imaging. Microvasc Res 83(3):376–379. https://doi.org/10.1016/j.mvr.2012.01.004
17. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology (1996) Heart rate variability: standards of measurement, physiological interpretation, and clinical use. Circulation 93:1043–1065

Photobiomodulation and Laserpuncture Evaluation for Knee Osteoarthritis Treatment: A Literature Review L. G. C. Corrêa, D. S. F. Magalhães, A. Baptista, and A. F. Frade-Barros

Abstract

Osteoarthritis is a degenerative joint disease that commonly affects weight-bearing joints, the knee being the most affected one. Treatment options include invasive therapies and non-pharmacological therapies (pain education, exercise programs, electrotherapy, acupuncture and ozone therapy). Another form of treatment that has been increasingly studied and used in patients with knee osteoarthrosis is photobiomodulation therapy, which, despite conflicting results, has been considered effective in reducing the inflammatory process when used alone or associated with exercise programs or acupuncture. When associated with acupuncture, laser has proven to be effective, despite the few high-quality randomized controlled studies on the subject. Recently, patients have been seeking less invasive and more natural treatments, leading more and more professionals to train in, practice and research the effects of treatments such as acupuncture and laserpuncture. Objectives: To conduct a literature review on photobiomodulation for the treatment of KOA, focusing on the use of low-level laser therapy applied to acupuncture points. Methodology: An integrative literature review was carried out between March and May 2020 on PubMed and the Physiotherapy Evidence Database (PEDro). The selection criteria were English language and articles published in the last 10 years that included the keywords: “knee”, “osteoarthritis”, “photobiomodulation”, “laserpuncture” and “acupuncture”. Results: Out of the 50 articles found, 30 were selected for presenting methodology or applications of replicable techniques. Conclusion: After this review, we could note that there is still a very small number of controlled and randomized articles with relevant data on the effects of photobiomodulation in acupuncture points for the treatment of KOA. Additionally, we could observe that the surveyed articles differ in their forms of tabulation and data analysis, making it difficult to compare them. We suggest that further studies be performed on the subject to produce more relevant data and make the use of these therapies a consensus.

L. G. C. Corrêa · D. S. F. Magalhães · A. Baptista · A. F. Frade-Barros (&) Bioengineering Program, University Brazil, Carolina Fonseca st, 235, São Paulo, Brazil e-mail: [email protected]

Keywords

Osteoarthritis · Photobiomodulation · Laserpuncture · Bioengineering

1

Introduction

According to the World Health Organization (WHO), people are aging at a faster pace and life expectancy is increasing [1]. While the increase in the elderly population and in life expectancy should not be a problem in itself, the complications presented by these people must be carefully analyzed, since this is the population that presents complications resulting from aging, which can lead to disability and chronic systemic diseases [1]. Osteoarthrosis (OA) is a degenerative joint disease that commonly affects weight-bearing joints. The knee is the most commonly affected joint in the body, particularly in older individuals, in addition to other risk factors, such as obesity, previous trauma and female sex [2]. Knee OA (KOA) is clinically associated with pain, restricted range of motion and muscle weakness, resulting in difficulties in daily activities and impairment of quality of life [3]. It represents between 30 and 40% of medical consultations in rheumatology outpatient care, and according to Social Security data, it is the third leading cause of absence from work, behind only low back pain and depression [4, 5]. The WHO estimates that 40% of individuals over 70 years of age suffer from KOA, almost 80% of patients have some degree of

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_181


movement limitation and 25% are unable to perform their main activities of daily life [6]. According to the Brazilian Society of Rheumatology and based on data from the Social Security of Brazil, osteoarthrosis accounts for 7.5% of leaves of absence; it is the second disease among those that justify the initial aid, with 7.5% of the total; it is also the second in relation to sickness benefits (in extension), with 10.5%, and the fourth to determine retirement (6.2%) [7]. KOA treatments include pharmacological and non-pharmacological modalities. Among the pharmacological treatments we can cite the use of hyaluronic acid, which has low success rates, a lack of consensus, and methodological flaws that make it difficult to prove its efficacy and justify its high cost [8, 9]. Other forms of intra-articular treatment, such as the use of corticosteroids, anesthetics and analgesics, only provide relief of the pain symptom, without improving mobility or independence. Besides not resolving the inflammatory process, they do not transform the local catabolic state into an anabolic one. Moreover, these therapeutic modalities may have systemic repercussions, such as systemic arterial hypertension and hyperglycemia. Additionally, they do not produce a long-lasting effect nor act on the local inflammatory process, and they may increase the risk of aggravating pre-existing systemic diseases [10]. Among the non-pharmacological treatments with satisfactory results is the use of intravenous and intra-articular mesenchymal stem cells guided by ultrasound, which has good analgesic and cartilaginous tissue reconstruction effects. On the other hand, this technique has some drawbacks, including the use of anesthetics that cause systemic repercussions and the lack of long-term studies to observe the durability of such a technique in the cartilaginous tissue and to search for possible side effects.
Besides not being able to combat an inflammatory process established in the intra-articular region, the intra-articular procedure itself generates an inflammatory process. Non-pharmacological treatments also include pain education and physiotherapy, using resources such as physical training and electrophysical agents (ultrasound, transcutaneous electrical nerve stimulation (TENS) and interferential current) to relieve pain [9, 10]. Another non-pharmacological resource that has been widely studied and has demonstrated pain control and anti-inflammatory action in KOA is phototherapy [11–13]. Despite the wide clinical application, the results of experimental and clinical studies are still conflicting. Most results show the effectiveness of photobiomodulation associated with exercise programs, with satisfactory results in the reduction of the inflammatory process, whether used alone or associated with exercise programs [12]. Another possibility of non-pharmacological treatment is the traditional Chinese medicine (TCM) approach, which


associates acupuncture with low-level laser, the so-called laserpuncture, which has been producing relevant effects regarding the relief of pain, the improvement of mobility, a longer-lasting analgesic effect and an anti-inflammatory action [13–16]. Besides belonging to the set of non-pharmacological therapies, this type of therapy has further advantages: it is non-invasive, painless and low-cost, which makes it more widely accepted. In spite of the range of conflicting results, the search for non-pharmacological therapies has become a reality in recent years due to their safe use without causing systemic complications to users, apart from some risks, such as small local changes, which are present in some episodes [17]. Considering the increased demand for non-pharmacological therapies, the use of TCM has proven to be effective, with an increasing number of people interested not only in its application but also in its study. Therefore, our aim was to carry out a survey of the articles published in the last 10 years regarding the use of laserpuncture for the treatment of KOA. This therapy is promising, as it has more and more practitioners and users, being a great option for the non-pharmacological treatment of patients with KOA [18–20].

2

Materials and Methods

Between March and May 2020, searches were carried out on PubMed and the Physiotherapy Evidence Database (PEDro) using the keywords “knee”, “osteoarthrosis”, “photobiomodulation”, “laserpuncture” and “acupuncture”, covering the last ten years (between 2010 and 2020). The selection criteria for this integrative literature review were English language and articles published in the last 10 years. Out of the 50 articles found, 30 were selected for presenting a clear and easy-to-understand methodology and/or applications of replicable techniques and protocols, as well as the use of questionnaires, scores and validated methods for evaluating results, and consistent statistical treatment for the analysis of the results. The results obtained are shown in the flowchart (Fig. 1).

3

Results

Out of the 50 articles found using isolated or associated keywords, only 30 showed satisfactory methodology and were selected for this review. We could observe that there is a lack of standardization in data collection and analysis in the articles, and that, among the pharmacological treatments, the vast majority has


Fig. 1 Methodology study flowchart. Identification: literature review carried out between March and July 2020 on PubMed and PEDro; 50 articles identified through database search and no additional articles identified through other sources; inclusion criteria: English language and articles published in the last 10 years, with the keywords “knee”, “osteoarthritis”, “photobiomodulation”, “laserpuncture” and “acupuncture”. Eligibility criteria/screening: 30 full-text articles included for eligibility.

systemic repercussions and the worsening or onset of other systemic pathologies resulting from the use of these drugs, which was already expected [6, 9, 11, 12]. The authors disagree on the results regarding the effects of their therapies. Moreover, the forms of evaluation and tabulation of the results are divergent, which makes it difficult to compare them. Most therapies, used alone or associated with others, are effective in relieving pain and inflammation and in improving function, agility and mobility in the short, medium and long terms [21]. While the use of drugs orally or intramuscularly has satisfactory analgesic and anti-inflammatory effects, these medications may have long-term systemic complications, such as systemic arterial hypertension, renal and liver failure, diabetes, or the worsening of these diseases, if pre-existing [22–24]. The use of most intra-articular therapies is related only to an analgesic effect, having no anti-inflammatory or range-of-motion improvement effect, besides the possibility of increased pain and inflammation resulting from the procedure of needle introduction into the intra-articular region [24]. We could observe that the studies that used the laser alone or associated with physical exercises or acupuncture points showed statistically significant results in pain relief, decrease of the inflammatory process and improved joint mobility, without the need for invasive procedures or the use of any type of drug, consequently avoiding systemic side effects. Most of the selected articles used internationally validated tests, such as VAS, WOMAC and TUG, to quantify and evaluate the protocol results in the volunteers [13, 19, 25].

4

Discussion

The increase in life expectancy is a new world reality. As a result, diseases and their complications become more and more numerous. Among them is KOA, a chronic and degenerative disease. So that the elderly are able to maintain independence, joint mobility and quality of life, several forms of non-pharmacological treatment have been studied for their satisfactory results, since most of them do not have systemic complications, proving to be effective and practically painless [25–27]. There is a lack of consensus in the community of orthopedists regarding the treatment and handling of cases of KOA [28]. According to bodies such as OARSI (Osteoarthritis Research Society International) and the Italian Society of Rheumatology, there is a consensus on the use of acupuncture, alone or associated with photobiomodulation by low-level laser therapy and pharmacological measures, for the treatment of KOA with high levels of evidence [29, 30]. However, none of these guidelines recommend laserpuncture, a well-known and widely applied technique considered a branch of contemporary acupuncture. Therefore, further studies are necessary concerning both laserpuncture and its effects on KOA, as well as studies on the use of this therapy associated with the many other non-pharmacological forms of treatment of KOA. In the studies in which photobiomodulation by low-level laser therapy was used, alone or associated with exercise programs or acupuncture points, positive results were


obtained with respect to pain relief, inflammation reduction and range-of-motion improvement, without the onset or worsening of systemic diseases and with a low risk of sequelae or injuries subsequent to the procedure, except for rare cases of erythema or slight redness of the skin that did not compromise the final result of the therapy nor aggravate the patient's condition [6, 11, 12, 19, 25]. These studies showed that laserpuncture can be a great alternative for non-pharmacological treatment due to its reduced side effects, low cost and non-invasive procedure. Since there was a conflict in the results of the surveyed articles, a standardization of data analysis is needed in order to further establish the effectiveness of this therapy. This article shows that there is a reduced number of articles on laserpuncture and its behavior in KOA, and highlights the importance of carrying out further studies on such a technique so that the authorities of the area can include the use of laserpuncture for the treatment of KOA in their new guidelines.

5

Conclusion

After this review, we could note that the use of photobiomodulation by low-level laser therapy presents great results in the treatment of KOA, alone or associated with exercises or acupuncture points. The studies that used laser alone or associated with physical exercises or acupuncture points showed beneficial effects on pain, inflammation and joint mobility, without the need for pharmacological therapies or invasive procedures, thus avoiding side effects. However, there is a very small number of relevant articles on the effects of photobiomodulation in acupuncture points for the treatment of KOA. Moreover, the surveyed articles diverge in their forms of tabulation and data analysis, making it difficult to compare them. We suggest that further studies be conducted on the subject so that there is more relevant data and the use of these therapies becomes a consensus. Acknowledgements The author would like to thank the research team from the Bioengineering program at Universidade Brasil for all the support in the development of my master's degree and in the development of this review. Conflict of Interest The authors declare that they have no conflict of interest. Ethical Requirements As this work is a literature review, the approval of the Ethics Committee was not required.

References

1. Heidari B (2011) Knee osteoarthrosis prevalence, risk factors, pathogenesis and features: Part I. Caspian J Intern Med 2(2):205–212
2. Mahir L, Belhaj K, Zahi S et al (2016) Impact of knee osteoarthrosis on the quality of life. Ann Phys Rehabil Med 59(Suppl):e155–e159
3. Laires PA, Canhão H, Rodrigues AM et al (2018) The impact of osteoarthrosis on early exit from work: results from a population-based study. BMC Public Health 18(1):472
4. Neogi T (2013) The epidemiology and impact of pain in osteoarthrosis. Osteoarthr Cartil 21(9):1145–1153
5. Rezende UM, de Campos GC, Pailo AF (2013) Current concepts in osteoarthrosis. Acta Ortop Bras 21(2):120–122
6. Arthur K, do Nascimento LC, Figueiredo DAS et al (2012) Efeitos da geoterapia e fitoterapia associados à cinesioterapia na osteoartrite de joelho—estudo randomizado duplo cego. Acta Fisiatr 19(1):11–15
7. Maly MR, Robbins SM (2014) Osteoarthrosis year in review 2014—rehabilitation and outcomes. Osteoarthr Cartilage 22(12):1958–1988
8. Alfredo PP, Bjordal JM, Lopes-Martins RAB Jr et al (2017) Long-term results of a randomized, controlled, double-blind study of low-level laser therapy before exercises in knee osteoarthrosis: laser and exercises in knee osteoarthrosis. Clin Rehabil 32(2):173–178
9. Quintana JM, Arostegui I, Escobar A et al (2008) Prevalence of knee and hip osteoarthrosis and the appropriateness of joint replacement in an older population. Arch Intern Med 168:1576–1584
10. Oliveira NC, Vatri S, Alfieri FM (2016) Comparação dos efeitos de exercícios resistidos versus cinesioterapia na osteoartrite de joelho. Acta Fisiatr 23(1):7–11
11. Cotler HB, Chow RT, Hamblin MR et al (2015) The use of low level laser therapy (LLLT) for musculoskeletal pain. MOJ Orthop Rheumatol 2(5):00068
12. Alfredo PP, Bjordal JM, Dreyer SH et al (2012) Efficacy of low level laser therapy associated with exercises in knee osteoarthrosis: a randomized double-blind study. Clin Rehabil 26(6):523–553
13. US Burden of Disease Collaborators (2013) The state of US health, 1990–2010: burden of diseases, injuries and risk factors. JAMA 310(6):591–608
14. Miranda VS, de Carvalho VBF, Machado LAC et al (2012) Prevalence of chronic musculoskeletal disorders in elderly Brazilians: a systematic review of the literature. BMC Musculoskelet Disord 13:82
15. Wallace IJ, Worthington S, Felson DT et al (2017) Knee osteoarthritis has doubled in prevalence since the mid-20th century. PNAS 114(35):9332–9336
16. Rezende MU, Campos GC, Pailo AF (2013) Conceitos atuais em osteoartrite. Acta Ortop Bras 21(2):120–122
17. Michael JW, Schluter-Brust KU, Eysel P (2010) The epidemiology, etiology, diagnosis, and treatment of osteoarthrosis of the knee. Dtsch Arztebl Int 107(9):152–162
18. Shanks S, Leisman G (2018) Perspective on broad-acting clinical physiological effects of photobiomodulation. Adv Exp Med Biol 1096:1–12
19. Hamblin MR (2017) Mechanisms and applications of the anti-inflammatory effects of photobiomodulation. AIMS Biophys 4(3):337–361
20. Medina-Porquere SI, Cantero-Tellez R (2018) Class IV laser therapy for trapeziometacarpal joint osteoarthrosis: study protocol for a randomized placebo-controlled trial. Physiother Res Int 23(2):e1706
21. World Association of Laser Therapy (WALT) (2006) Consensus agreement on the design and conduct of clinical studies with low-level laser therapy and light therapy for musculoskeletal pain and disorders. Photomed Laser Surg 24(6):761–762
22. Miller LE (2019) Towards reaching consensus on hyaluronic acid efficacy in knee osteoarthrosis. Clin Rheumatol 38:2881–2883
23. Richards MM, Maxwell JS, Weng L et al (2016) Intra-articular treatment of knee osteoarthrosis: from anti-inflammatories to products of regenerative medicine. Phys Sports Med 44(2):101–108
24. Wang AT, Feng Y, Jia HH et al (2019) Application of mesenchymal stem cell therapy for the treatment of osteoarthrosis of the knee: a concise review. World J Stem Cells 11(4):212–235
25. Dima R, Tieppo Francio V, Towery C et al (2017) Review of literature on low-level laser therapy benefits for nonpharmacological pain control in chronic pain and osteoarthrosis. Altern Ther Health Med 24(5):8–10
26. Tomazoni SS, Costa LDCM, Guimarães LS et al (2017) Effects of photobiomodulation therapy in patients with chronic non-specific low back pain: protocol for a randomised placebo-controlled trial. BMJ Open 7(10):e017202
27. de Andrade ALM, Bossini PS, do Canto et al (2017) Effect of photobiomodulation therapy (808 nm) in the control of neuropathic pain in mice. Lasers Med Sci 32(4):865–872
28. Carlson VR, Ong AC, Orozco FR et al (2018) Compliance with the AAOS guidelines for treatment of osteoarthritis of the knee: a survey of the American Association of Hip and Knee Surgeons. J Am Acad Orthopaedic Surg 26(3):103–107
29. Zhang W, Moskowitz RW, Nuki G et al (2008) OARSI recommendations for the management of hip and knee osteoarthritis, Part II: OARSI evidence-based, expert consensus guidelines. Osteoarthr Cartil 16(2):137–162
30. Ariani A, Manara M, Fioravanti A et al (2019) The Italian Society for Rheumatology clinical practice guidelines for the diagnosis and management of knee, hip and hand osteoarthritis. Reumatismo 71(S1):5–21

Methodology for the Classification of an Intraocular Lens with an Orthogonal Bidimensional Refractive Sinusoidal Profile Diogo Ferraz Costa and D. W. d. L. Monteiro

Abstract

The intraocular-lens (IOL) industry revolves mainly around manufacturing different types of lenses to restore the vision quality of patients through ophthalmic surgery. This paper introduces a biconvex intraocular lens with a bidimensional refractive sinusoidal profile distributed over its posterior surface. This type of pattern allows the configuration of different amplitudes and frequencies of the orthogonal sinusoidal functions, leading to different optical performance. By adjusting its parameters, it is possible to set the IOL to behave as Monofocal, Multifocal or Extended Depth of Focus. Therefore, a methodology that can assist in the classification of such lenses is proposed. The intraocular lens under test is modelled and inserted into a modified Liou-Brennan eye model. This methodology can be used as a trusted tool to help lens manufacturers in industry to determine optimal target design parameters.

Keywords: Ophthalmic optics · Lens design · ZEMAX

1 Introduction

In ophthalmic surgery for cataract treatment, the human crystalline lens is replaced by a manufactured transparent intraocular lens, usually made of a rigid material. The goal is to restore vision, since cataract opacifies the crystalline lens, causing partial to total loss of vision.

Intraocular lenses (IOLs) can be classified according to their number of foci. A monofocal lens has a single focus, often chosen to be the one associated with far vision, where the object is usually farther than 6 m and the image plane then coincides with the retina. Users of this lens need spectacles for intermediate and near vision. In contrast, multifocal intraocular lenses (MIOLs) are known for having the main focus designed for objects far away from the eye, and secondary foci for intermediate (0.5–2 m) and/or close (30–50 cm) object distances. Thus, multifocal intraocular lenses try to partly compensate for the loss of natural accommodation previously provided by the crystalline lens. Additionally, MIOLs can be further subdivided into refractive and diffractive. A diffractive MIOL is usually made of concentric rings with a saw-tooth profile (kinoforms), designed on diffractive principles, while a refractive lens does not have any abrupt transitions in its surface elevations and works solely on refractive phenomena. An Extended Depth of Focus (EDoF) IOL has the characteristic of elongating the distance over which the object remains in focus, thus providing an increased range of vision. Therefore, if an EDoF IOL is designed for emmetropia (perfect vision with a relaxed eye) aiming at objects 10 m distant, the object can usually be brought a few meters closer to the eye and still maintain reasonable contrast (staying in focus). This paper is structured as an explanation of intraocular-lens concepts and the proposal of a sinusoidal intraocular lens design, transitioning to the implementation of the classification methodology and ending with the results expressed as a 3D bar chart followed by a discussion.

D. F. Costa (&)
Federal University of Minas Gerais (UFMG), Av. Pres. Antônio Carlos, 6627, Pampulha, Belo Horizonte, MG, Brazil

D. W. d. L. Monteiro
Department of Electrical Engineering, Federal University of Minas Gerais (UFMG), Belo Horizonte, MG, Brazil

2 Concepts

2.1 Types of Intraocular Lenses

An intraocular lens can also be classified according to its asphericity. Since most of them have a curved shape, its cross section can be a sphere, a paraboloid, a hyperboloid, an

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_182


oblate ellipsoid or a prolate ellipsoid. This classification [1] is specified by the value of the conic constant (k) as:

• k < −1: Hyperbola
• k = −1: Parabola
• −1 < k < 0: Prolate ellipse
• k = 0: Sphere
• k > 0: Oblate ellipse.

Therefore, if the curvature (c), which is the reciprocal of the radius of curvature along the axial length direction, and the conic constant are known, it is possible to specify the base geometry of an aspheric surface (Eq. 1):

z_asf = cr² / (1 + √(1 − (1 + k)c²r²))   (1)

where z_asf is the surface elevation in the direction along the main optical axis and r is the radial coordinate perpendicular to that axis, on the plane of the lens [1]. One can also classify an intraocular lens according to its depth of focus. The depth of focus is the longitudinal distance range about the main focal plane, throughout which the image features acceptable sharpness. It is possible to convert the different values of focus shift to the equivalent field range (usually in meters from the eye) if the effective focal length of the system is known. An intraocular lens can be classified as having an Extended Depth of Focus (EDoF) if it can maintain an acceptable retinal image contrast when an object moves within a given field range. For example, if the EDoF IOL is designed to have maximum retinal contrast for distant objects, the focal plane can shift a few millimeters or fractions of a millimeter off the retina and still preserve the image with a reasonable contrast. This focal plane shift can be translated to the object perspective as the object changing its position to one that the IOL had not been originally designed for.
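Equation 1 can be checked numerically. The following minimal sketch (function name is illustrative, not from the paper) evaluates the conic sag and verifies that, for k = 0, it reduces to the exact sag of a sphere:

```python
import math

def aspheric_sag(r, c, k):
    """Sag z_asf of a conic aspheric surface (Eq. 1).

    r: radial coordinate on the lens plane (mm)
    c: curvature, the reciprocal of the radius of curvature (1/mm)
    k: conic constant (k = 0 gives a sphere)
    """
    return c * r ** 2 / (1 + math.sqrt(1 - (1 + k) * c ** 2 * r ** 2))

# Sanity check: for k = 0 the surface is a sphere of radius R = 1/c,
# whose exact sag is R - sqrt(R^2 - r^2).
R = 16.145  # front-IOL radius of curvature from Table 1
assert abs(aspheric_sag(1.0, 1 / R, 0.0) - (R - math.sqrt(R ** 2 - 1.0))) < 1e-12
```

The identity holds algebraically: multiplying numerator and denominator of the spherical sag by its conjugate recovers Eq. 1 with k = 0.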

2.2 Lens Design

The eye model chosen to simulate an optical system similar to the human eye was a modified Liou-Brennan eye model [2]. The first optical element in this model is a lens emulating a cornea with a central thickness of 0.5 mm. It has most of the eye's refractive power (around 40 diopters). After the cornea, the model specifies the anterior chamber, which is made of an aqueous material with a refractive index of 1.336 and a depth of 3.16 mm along the main optical axis. Then comes the pupil, which is emulated by an optically clear aperture of a given diameter; in this study the aperture was set to a diameter of 3.0 mm. After the aperture, the Liou-Brennan eye model emulates the human crystalline lens by

introducing a GRIN (Gradient Index) surface, whose refraction index varies along the main optical axis. In this study, the original Liou-Brennan crystalline was replaced by a custom-designed intraocular lens carrying the remainder of the eye's refractive power (around 20 diopters). One key characteristic of the Liou-Brennan eye model is the axial length of 23.95 mm, which is the distance between the first surface (front cornea) and the image plane (retina). Therefore, the sum of all distances and thicknesses from the front cornea to the retina must equal the specified axial length. To account for this restriction, the distance between the back IOL and the image plane (vitreous cavity) is 18.70 mm.

In this study, the optical system was modelled using ZEMAX OpticStudio [3], since it is a software widely accepted for optics simulations. In ZEMAX, the default surface type is named Standard and has properties compatible with Eq. 1 presented previously. The Standard surface can be specified by a radius of curvature along the main optical axis, a semi-diameter corresponding to the radius on the XY plane perpendicular to the main optical axis, and the conic constant related to the asphericity. There are other parameters, such as the refractive index of the material medium after the surface and the thickness representing the distance to the next surface. The base Standard lens designed for this study is biconvex, with its main focus designed for maximum contrast of infinitely distant objects.

In order to design the intraocular lens to replace the Liou-Brennan crystalline and introduce the sinusoidal surface distributed over the base Standard lens, the posterior IOL surface type was changed to one named Periodic. The equation that describes the Periodic surface type is exactly the same as that of the Standard surface [3], but with an added sinusoidal term (Eq. 2). The base parameters of the previously designed Standard surface were maintained, and only the Periodic parameters were changed before the classification methodology was applied.

z_per = z_asf − A( ¼[1 + cos(2πax)][1 + cos(2πby)] − 1 )   (2)

where z_per is the elevation in the direction along the main optical axis, A is the amplitude of the sinusoidal modulation, a is the frequency of peaks along the X axis and b is the frequency along the Y axis, both in cycles/mm. Its profile is shown in Fig. 1, with an exaggerated amplitude to illustrate the general shape. The modified Liou-Brennan eye model has the parameters listed in Table 1. With a Periodic surface, it is possible to specify a different frequency for each axis, meaning a different number of sinusoidal shaped peaks along the X direction and the Y


Fig. 1 Periodic IOL back view

Table 1 Modified Liou-Brennan eye model parameters

Surface                  Radius (mm)   Thickness (mm)   Asphericity   Refraction index (555 nm)
1. Front cornea          7.70          0.5              −0.18         1.376
2. Back cornea           6.40          3.16             −0.6          1.336
3. Front IOL             16.145        1.59             1.095         1.492
4. Back IOL (Periodic)   −16.145       18.70            −2.993        1.336

direction. In this study, the same frequency was used for both directions. The simulations were made for monochromatic light with a wavelength of 550 nm, corresponding to the green color. The object was maintained infinitely distant from the eye throughout all simulations. The amplitude (A) and frequency (f) were varied in a search space according to Table 2. There was a total of 40 simulated IOLs, in which the minimum and maximum amplitudes and frequencies were chosen so as to maintain an MTF of at least 0.43 at 100 cycles per millimeter, which relates to the contrast requirements for a monofocal IOL in the ISO standard [4].

Table 2 Amplitude versus frequency search space

                        Minimum   Maximum   Step
Amplitude (mm)          0.25E−3   1.5E−3    0.25E−3
Frequency (cycles/mm)   0.25      2         0.25
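Combining Eq. 1 and Eq. 2, the Periodic surface sag can be sketched as below. The sign convention follows the reconstructed form of Eq. 2 (the modulation raises the base sag by up to A), which is an assumption; the function name is illustrative and parameter values come from Tables 1 and 2:

```python
import math

def periodic_sag(x, y, c, k, A, a, b):
    """Sag of the Periodic surface (Eq. 2): the Standard aspheric sag (Eq. 1)
    plus an orthogonal sinusoidal modulation of amplitude A (mm) and
    frequencies a, b (cycles/mm) along X and Y. Sign convention assumed
    from the reconstructed equation."""
    r2 = x * x + y * y
    z_asf = c * r2 / (1 + math.sqrt(1 - (1 + k) * c * c * r2))
    mod = 0.25 * (1 + math.cos(2 * math.pi * a * x)) * (1 + math.cos(2 * math.pi * b * y))
    return z_asf - A * (mod - 1)

# On the optical axis the modulation term vanishes (mod = 1), so the
# Periodic sag equals the base sag there.
assert periodic_sag(0.0, 0.0, -1 / 16.145, -2.993, 1e-3, 1.0, 1.0) == 0.0
```

Between peaks (e.g. x = 1/(2a) for a flat base, c = 0) the modulation contributes its full amplitude A, so the peak-to-valley height of the sinusoidal grid is exactly A.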

2.3 Merit Functions

For the purpose of lens performance evaluation, two types of curves were computed: the Modulation Transfer Function (MTF) and the Through Focus Modulation Transfer Function (TF-MTF).

The MTF represents the Fourier Transform of the Point Spread Function (PSF) [5]. The PSF exhibits the response of an optical imaging system to a point source or point object. In practical systems, even if the source is infinitesimally punctual, the image is not. This is, on the one hand, intrinsically a result of diffraction, due to the effects of a finite aperture. On the other hand, the system aberrations spread the point further. The PSF is known as the impulse response of an optical system. Since the MTF is the Fourier transform of the PSF, it represents the system frequency response, i.e. it converts the PSF information in the spatial domain to the MTF frequency domain. This mathematical operation converts bidimensional spatial information (X and Y positions) into spatial frequency (cycles per millimeter).

The MTF curve is very important in visual acuity measurements because it is not biased towards the object distance. For example, a big object distant from the eye can exhibit the same frequency content as a small object closer to the eye; the frequency content therefore represents both situations at the same time. In ophthalmic optics, the MTF is usually reported up to a frequency of 100 cycles per millimeter, since this is related to a PSF that is compatible with typical dimensions of photoreceptors [6] and is a frequency that the reference human eye, with good visual acuity, can typically still perceive on a usual Snellen chart.

The Through Focus Modulation Transfer Function (TF-MTF) gives information that is complementary to the MTF. It describes how the MTF of an optical system varies as the image plane moves through the focal region, at a chosen spatial frequency. For example, if an object comes closer to the eye, the TF-MTF shows how the contrast decreases. In ophthalmic optics, it is common to evaluate the TF-MTF at a frequency of 50 cycles per millimeter, since it corresponds to the fundamental frequency of the 20/40 line on the Snellen eye chart, which is an acceptable value for visual acuity [6].
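The PSF-to-MTF relation described above can be illustrated with a discrete Fourier transform. This is a generic numerical sketch, not the paper's ZEMAX computation:

```python
import numpy as np

def mtf_from_psf(psf):
    """MTF as the normalized magnitude of the Fourier transform of the PSF.

    psf: 2-D array of intensities sampled on the image plane.
    Returns the modulation transfer, normalized to 1 at zero spatial frequency."""
    otf = np.fft.fft2(psf)   # optical transfer function (complex)
    mtf = np.abs(otf)        # modulation transfer = |OTF|
    return mtf / mtf[0, 0]   # normalize to unit contrast at DC

# A point-like (delta) PSF transfers all spatial frequencies with full
# contrast; any spreading of the PSF lowers the MTF at high frequencies.
delta = np.zeros((8, 8))
delta[0, 0] = 1.0
assert np.allclose(mtf_from_psf(delta), 1.0)
```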

3 Methodology

3.1 Algorithm

A Python algorithm was developed to assist in the design process, with the purpose of iteratively changing the IOL properties within the modified Liou-Brennan eye model previously presented. The communication between the Python programming language and ZEMAX


made the process of evaluating the optical system performance much easier, since it can execute a great number of simulations that would take much longer if done manually (and would also be susceptible to errors). Using the PyZDDE protocol, there are built-in functions that can readily communicate with ZEMAX, changing the eye-model configurations when needed. The implemented algorithm iteratively changes the amplitudes (A) and frequencies (f) of the sinusoidal function on the posterior surface of the proposed IOL and extracts the MTF and TF-MTF curves for each configuration. Then, the TF-MTF curve is processed: its peaks and valleys are counted and compared with predetermined amplitude thresholds for the purposes of classification. For example, a monofocal aspheric IOL will have an elevated but thin main peak, whereas an EDoF IOL will have a wider peak. A multifocal IOL will have multiple peaks that can also contribute to the overall system performance. The following steps summarize the classification and ranking process:

1. Establish thresholds for the peak search: peak and valley thresholds.
2. Find all peaks and valleys of the TF-MTF curve.
3. Classify each IOL as Monofocal, Multifocal or EDoF, considering the thresholds.
4. Calculate the score for each IOL.
5. Plot the 3D bar chart after all IOLs are classified and scored.
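The classification steps above can be sketched as follows. This is a simplified stand-in for the actual implementation (the function name and peak-finding details are illustrative); the default thresholds of 0.2 and 0.05 are the values used in this study:

```python
def classify_iol(tf_mtf, peak_thr=0.2, valley_thr=0.05):
    """Classify a sampled TF-MTF curve as Monofocal, Multifocal or EDoF.

    A local maximum above peak_thr counts as a focal peak. If the curve
    dips below valley_thr between two accepted peaks, the foci are
    separated (Multifocal); otherwise they merge into one broad focus (EDoF)."""
    # Indices of local maxima that reach the peak threshold
    peaks = [i for i in range(1, len(tf_mtf) - 1)
             if tf_mtf[i - 1] < tf_mtf[i] >= tf_mtf[i + 1]
             and tf_mtf[i] >= peak_thr]
    if len(peaks) <= 1:
        return "Monofocal"
    # Inspect the valley between each pair of consecutive accepted peaks
    for lo, hi in zip(peaks, peaks[1:]):
        if min(tf_mtf[lo:hi + 1]) < valley_thr:
            return "Multifocal"
    return "EDoF"

assert classify_iol([0.0, 0.1, 0.5, 0.1, 0.0]) == "Monofocal"
assert classify_iol([0.0, 0.4, 0.02, 0.3, 0.0]) == "Multifocal"
assert classify_iol([0.0, 0.4, 0.15, 0.3, 0.0]) == "EDoF"
```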

3.2 Thresholds and Ranking

There were two types of threshold (Fig. 2) considered, hereby named the peak threshold (green) and the valley threshold (red). They represent the values of the TF-MTF amplitude against which peaks and valleys are assessed. The vertical axis is the normalized amplitude of the MTF and the horizontal axis is the depth of focus in millimeters.

The purpose of the peak threshold is to establish the minimum contrast amplitude that makes it possible to classify an IOL as monofocal or "non-monofocal". It only checks how many peaks have an acceptable contrast: if only one peak is found, the IOL is monofocal; otherwise, the valley threshold is checked. Once the TF-MTF has more than one acceptable peak, the valley threshold helps establish whether the IOL is classified as multifocal or EDoF. If the valleys fall below the valley threshold, the secondary focal peaks are separated from the main focus and the IOL is multifocal; otherwise, the IOL is an EDoF. In this study, the thresholds used have the following values:

• Peak threshold: 0.2
• Valley threshold: 0.05.

These thresholds were set by checking the typical TF-MTF responses of commercial EDoF IOLs [7], and they carry the following intents. The peak threshold was set at 20% to ensure that only peaks with a minimum of contrast are considered significant. The valley threshold was set at 5% so that an IOL is classified as multifocal only if the valleys reach a contrast value very close to zero, indicating a significant separation from the main peak. In other words, a multifocal IOL is distinguished from an EDoF IOL when the valley between peaks falls below 5% of amplitude.

3.3 Score

The merit function used to calculate the influence of the TF-MTF on the overall lens performance was the area under the curve. This area was calculated using the trapezoidal numerical integration technique. Since the integration evaluates the area under the curve as a whole, it does not matter whether the area comes from a high central peak or from multiple smaller peaks: both can contribute to the integral sum and receive a high score. In other words, no IOL behavior is privileged by the nature of the integration process. After the area is computed, the score of each of the 40 lenses (as specified in Table 2) is calculated by weighting the area by the maximum peak value of the MTF curve at 50 cycles/mm (Eq. 3):

SCORE(i) = AREA(i) × PEAK(i)   (3)

4 Results

4.1 3D Bar Chart

The 3D bar chart is the main result from this study. It has all the simulated Amplitudes and Frequencies along the graph XY plane, with the Score represented by the bar height and the IOL class represented by the bar color (Fig. 3 and Table 3).
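The area-times-peak score of Eq. 3 can be sketched with the trapezoidal rule described above (helper name and sampling convention are illustrative assumptions):

```python
def iol_score(tf_mtf, focus_step):
    """Score of one IOL (Eq. 3): area under its TF-MTF curve (trapezoidal
    rule), weighted by the curve's maximum contrast at 50 cycles/mm.

    tf_mtf: TF-MTF samples over the focus range
    focus_step: focus-shift spacing between samples (mm)"""
    # Trapezoidal integration of the TF-MTF over the focus range
    area = sum((lo + hi) / 2 * focus_step for lo, hi in zip(tf_mtf, tf_mtf[1:]))
    return area * max(tf_mtf)

# A triangular TF-MTF peaking at 1.0 over a 1 mm focus range has area 0.5,
# giving a score of 0.5 * 1.0 = 0.5.
assert abs(iol_score([0.0, 1.0, 0.0], 0.5) - 0.5) < 1e-12
```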

4.2 Discussion

A relevant concept to mention is the threshold sensitivity analysis. In this study, it was important to check how the thresholds affect the classification algorithm, so they could be better tuned (manually). If the peak threshold is high, many IOLs are considered Monofocal, since almost none of the secondary peaks reach this threshold. If it is low, the classification tends towards multifocality. In contrast, if the valley threshold is high, most curves that have secondary peaks will be considered Multifocal instead of EDoF, because many secondary peaks are classified as separated from the main peak. If it is low, many of them will be considered EDoF IOLs, since the valleys only need to reach a low value of amplitude to be considered part of the main central peak.

Fig. 2 Classification algorithm

Fig. 3 Main result

Table 3 Classification summary for 40 intraocular lenses with different parameters of the sinusoidal function

               Monofocal   Multifocal   EDoF
IOL quantity   19/40       19/40        2/40

The two EDoF IOLs found during the classification methodology have the characteristics listed in Table 4. According to [7], the values of contrast exhibited in Table 4 are well within the range of the TF-MTFs of commercial EDoF IOLs, such as the TECNIS Symfony ZXR00 (Johnson & Johnson) and the AT LARA 829MP (Carl Zeiss Meditec).

Table 4 Extended depth of focus IOLs

         Amplitude (central peak)   Amplitude (@ 0.25 mm focus shift)
EDoF 1   0.65                       0.35
EDoF 2   0.58                       0.4

It is important to note that these EDoF IOLs have not yet been compared to the best spherical IOLs of each category.

4.3 Future Improvements

One way to improve this methodology is to iteratively find a way to establish acceptable thresholds for a given family of IOLs. Another direction is to introduce other merit functions to improve the score calculations, such as the CSF (Contrast Sensitivity Function) or the SSIM (Structural Similarity Index). It is also possible to refine the distinction between a multifocal IOL and an EDoF IOL by finding new ways to adjust and measure the separation of the secondary peaks from the main one, perhaps using the alignment of the half width of each secondary peak.

5 Conclusions

This study made possible the development of a highly customizable intraocular lens, since it can perform as a monofocal, multifocal or extended-depth-of-focus lens depending on the amplitude and frequency of the orthogonal grid with a refractive sinusoidal profile introduced on its posterior surface. Also, a useful methodology for the classification and ranking of this type of lens in accordance with optical merit functions was successfully implemented. This concept of evaluation can be extended to other types of intraocular lenses, in which parameters can produce different kinds of merit functions that can lead to a myriad of IOL products or families of products.

Acknowledgements The authors thank the Electrical Engineering Graduate Program for enabling the development of this field of study, and also CAPES, CNPq and FAPEMIG for their support. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Saleh BE, Teich MC (2019) Fundamentals of photonics. Wiley
2. Liou HL, Brennan NA (1997) Anatomically accurate, finite model eye for optical modeling. JOSA A 14(8):1684–1695
3. Zemax LLC (2013) OpticStudio user manual. Zemax LLC, Kirkland, WA, USA
4. ISO 11979-2 (2014) Ophthalmic implants - intraocular lenses - Part 2: optical properties and test methods. International Organization for Standardization, Geneva, Switzerland
5. Hecht E (2012) Optics. Pearson Education India
6. Artal P (ed) (2017) Handbook of visual optics, two-volume set. CRC Press
7. Chae SH, Son HS, Khoramnia R, Lee KH, Choi CY (2020) Laboratory evaluation of the optical properties of two extended-depth-of-focus intraocular lenses. BMC Ophthalmol 20(1):1–7

Discrimination Between Artisanal and Industrial Cassava by Raman Spectroscopy Estela Doria, S. C. Nunez, R. S. Navarro, J. C. Cogo, T. O. Mendes, and A. F. Frade-Barros

Abstract

Raman spectroscopy is a high-resolution photonic technique that can provide, in a few seconds, chemical and structural information on almost any organic or inorganic material or compound, thus allowing its identification. This technique has been used in the food industry to detect adulteration in food products and to characterize new chemical compounds. In this work, we used the Raman technique to elucidate the composition of cassava flour, which provides 3.92% of the daily energy intake of Brazilians. Different samples of flour produced industrially and artisanally (in the flour houses in the interior of the country and sold at open markets) were analyzed. The objective of this work was to discriminate the samples by origin and to determine which vibrational modes characterize these samples of artisanal and industrial cassava flour by the Raman spectroscopy technique. From the analysis of the artisanal and industrialized cassava flour spectra, it was found that the principal Raman peaks are located between the 300 and 3000 cm−1 bands. This fact leads us to suggest that cassava flour has in its composition complex carbohydrates such as polysaccharides, unsaturated fatty acids and proteins, characteristics found in foods of plant origin. With the present work we were able to carry out, for the first time, a chemical analysis of artisanal and industrialized cassava flours by the Raman method.

Keywords: Flour · Cassava · Bioengineering · Raman · Spectroscopy

E. Doria (&) · S. C. Nunez · R. S. Navarro · J. C. Cogo · A. F. Frade-Barros
Departamento de Bioengenharia, Bioengineering Program, Brazil University, Campus Itaquera, Rua Carolina da Fonseca, 235, São Paulo, Brazil

R. S. Navarro · J. C. Cogo · T. O. Mendes
Biomedical Engineering, Brazil University, São Paulo, Brazil

1 Introduction

Raman spectroscopy has been used to elucidate the chemical composition of foods and to detect fraud in industrialized products [1]. Raman spectroscopy is an optical technique that uses a laser as the source of excitation energy. When the laser light falls on the sample, part of it is scattered with an energy different from that of the incident light (inelastic scattering of light); there is a difference between the two energies. This difference is called the Raman shift, and this fraction of scattered energy can be detected by a spectrometer. The set of Raman shifts associated with a sample can be summarized in a spectrum, called the Raman spectrum, which is associated with unique characteristics of the target sample. The scattered waves are captured by photodetectors, which may be cameras or conventional optical microscopes, characterizing the Raman band. With the Raman spectroscopy technique, it is possible to obtain information about the chemical composition of the sample, the geometry of molecules, chemical bonds and molecular vibrations, and even to quantify compounds present in a complex sample [2].

According to the household food survey in Brazil in 2008–2009, Brazilian families spent 70–80% of their budget on food. Carbohydrates are responsible for providing 59% of daily energy, with 3.92% of this energy coming from cassava flour. Its contribution to energy acquisition is greater in rural than in urban areas; in the north and northeast its participation in the diet is six times more significant than in other regions of Brazil [3]. Cassava is the fourth most cultivated food in the world; Brazil is in sixth place in the production of cassava in tons, responsible for the production of 23,044,557 tons, with a revenue of US$1,203,6511 dollars [4].
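The Raman shift described above is simply the wavenumber difference between the incident and scattered light. As a worked illustration (the function name and the scattered wavelength are example values, not measurements from this study):

```python
def raman_shift_cm1(excitation_nm, scattered_nm):
    """Raman shift (cm^-1): wavenumber of the incident light minus the
    wavenumber of the inelastically scattered light. 1 nm^-1 = 1e7 cm^-1."""
    return 1e7 / excitation_nm - 1e7 / scattered_nm

# Light scattered at the excitation wavelength has zero shift (elastic,
# Rayleigh scattering); red-shifted scattering gives a positive (Stokes) shift.
assert raman_shift_cm1(1064.0, 1064.0) == 0.0
# Example: 1064 nm excitation (as used in this study) scattered at 1200 nm
assert abs(raman_shift_cm1(1064.0, 1200.0) - 1065.16) < 0.01
```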
Through the processing of cassava, we can obtain several by-products that are widely consumed by the Brazilian population, such as: tapioca flour, sour flour, cassava pasta, starch or sweet powder, cassava flour, tucupi and roasted

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_183


cassava flour [4]. Due to its high relevance as a food consumed by the Brazilian population, this work sought to identify, using the Raman method, the main nutrients present in artisanal and industrialized cassava flours produced and marketed in the municipalities of Oriximiná, Belém, São Paulo and Rio de Janeiro, in order to better know their composition and detect compounds that may be added to them, such as dyes and preservatives harmful to human health. Raman spectroscopy was used because of its advantages: it performs rapid and non-destructive analysis and requires neither sample-preparation steps nor the consumption of chemical reagents. Discrimination between artisanal and industrialized flour can be useful to establish the authenticity and quality of these foods for industries, for the food market and for consumers in general [5].

2 Objectives

Discrimination of artisanal and industrial cassava flour samples based on their chemical composition by the Raman spectroscopy technique.

3 Material and Methods

Twenty samples of cassava flour produced artisanally "in house" and sold in the municipality of Oriximiná, Pará, and 10 samples of industrialized cassava flour sold in the cities of Belém, São Paulo and Rio de Janeiro were used, totaling 30 samples. The analysis of the artisanal and industrialized cassava flour was carried out at the Physics Department of UFMG, Belo Horizonte. The samples were analyzed in an FT-Raman MultiRAM spectrometer manufactured by Bruker (Billerica, USA), equipped with a liquid-nitrogen-cooled germanium detector and an Nd:YAG excitation laser at a wavelength of 1064 nm. Spectrum pre-processing included spike removal, baseline subtraction, smoothing and vector normalization of the spectra. Initially, an exploratory analysis of the entire set of spectra was performed by Principal Component Analysis (PCA) [5]. This analysis was performed with the spectra matrix centered on the mean. Then, the behavior of the samples was evaluated using the supervised Partial Least Squares Discriminant Analysis (PLS-DA) method [6, 7]. Statistical analysis was performed using MATLAB (MathWorks, Natick, USA), including the PLS Toolbox package from Eigenvector Technologies (Manson, USA) [8].
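The pre-processing and exploratory PCA steps can be sketched in a dependency-light form. The function names and the crude per-spectrum baseline step are illustrative assumptions; the study itself used MATLAB with the PLS Toolbox:

```python
import numpy as np

def preprocess(spectra):
    """Illustrative pre-processing: per-spectrum baseline subtraction
    (here, crudely, the spectrum minimum) followed by vector (L2)
    normalization, in the spirit of the steps described above."""
    X = spectra - spectra.min(axis=1, keepdims=True)
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def pca_scores(X, n_components=2):
    """Mean-centered PCA via SVD; returns the sample scores on the first
    principal components, as used for the exploratory analysis."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# 30 synthetic spectra with 500 wavenumber channels, mimicking the data shape
spectra = np.random.default_rng(0).random((30, 500))
scores = pca_scores(preprocess(spectra))
assert scores.shape == (30, 2)
```

Plotting the two score columns against each other would reproduce the kind of PC1/PC2 scatter used in Fig. 3 to separate the two flour classes.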

4 Results

The Raman spectra of the 30 samples of cassava flour, 20 artisanal and 10 industrialized, can be seen in Fig. 1. Figure 1a–d show the effect of the pre-processing steps: in Fig. 1a, the spectra of the 30 samples without treatment; in Fig. 1b, the spectra after the FT filter; in Fig. 1c, after the FT filter and baseline subtraction; and in Fig. 1d, after the FT filter, baseline subtraction and vector normalization. The artisanal cassava flour spectra (in black) and industrialized cassava flour spectra (in red) in Fig. 2 show that the intensity of some vibrational modes differs between the two groups analyzed. From the analysis performed on artisanal and industrialized cassava flour, it was found that some of the main Raman peaks are located between the 300 cm−1 and 3000 cm−1 bands.

The PCA demonstrates the behavior of the samples according to their spectral characteristics. In Fig. 3 it is possible to verify the clear separation of the samples into two groups: on the positive side of PC1, the class of artisanal flours, represented by red crosses; and on the negative side of PC1, the class of industrialized flours, represented by black triangles. To demonstrate that the separation occurs due to the class of the samples, industrialized and artisanal flours, and not due to any uncontrolled parameter, a PLS-DA model (Fig. 4) was built, which represents the prediction of the samples in two discriminating classes. The separation is clear, and the VIP (Variable Importance in the Projection) scores reveal the spectral regions of greatest interest to explain such a separation (data not shown).

5 Discussion

Raman spectroscopy provides fast, non-destructive analysis without sample preparation. Raman spectra also provide spectral fingerprints of the molecule, which can supply a considerable amount of information. Compared to other more common analytical techniques, Raman spectroscopy is much simpler to use, faster and cleaner, and offers greater sensitivity at lower cost [7, 8]. When combined with appropriate multivariate statistical methods, it can be an ideal method for detecting and quantifying chemicals in different types of food [9]. With the physico-chemical characterization of cassava, we can determine that its nutritional composition contains mainly carbohydrates, responsible for energy supply,


Fig. 1 Spectra of artisanal and industrialized cassava flours. a Spectra of the 30 samples without treatment. b Spectra after the FT filter. c Spectra after the FT and baseline filters. d Spectra after the FT, baseline and normalization filters


Fig. 2 Raman spectra of artisanal cassava flour (in black) and industrialized cassava flour (in red)

Fig. 3 Analysis of the principal components (PCA) of artisanal cassava flour (red crosses) and industrialized cassava flour (black triangles)


E. Doria et al.

Fig. 4 PLS-DA model of artisanal (red crosses) and industrialized (black triangles) flour samples

followed by proteins, lipids, calcium, iron, fibers, antinutritional factors (cyanogenic compounds), carcinogens and allergens [10]. As no studies were found in the literature using the Raman method to determine the components of cassava flour, the vibrational modes found here were assigned by comparison with studies of other foods or isolated nutrients. The Raman peaks detected between 300 and 1500 cm−1 were tentatively attributed to the carbohydrates glucose and maltose, following Vasko et al. [9] and Wiercigroch et al. [11], who determined the spectra of simple and complex carbohydrates (glucose, cellobiose, maltose and dextrin) dissolved in water. The following Raman shifts were attributed to these carbohydrates: C–O–H stretching bands characteristic of the glucose solution at 914, 1020, 1071 and 1349 cm−1; of the maltose solution at 913, 1022, 1070, 1079 and 1350 cm−1; of the cellobiose solution at 890, 915, 1020, 1080 and 1350 cm−1; and of the dextrin solution at 917, 1024, 1084 and 1348 cm−1 [12, 13]. Camerlingo et al. [14] investigated the presence of fructose and pectin in apple and pear juices and purees, using the 627 cm−1 band of pure fructose as already established in the literature, and attributed the observed Raman shifts to vibrational modes. For the spectrum of fructose in aqueous solution they report: 333 cm−1, C–O–C band; 424 cm−1, C–C–O band (pyranose); 492 cm−1, C–C–C band (furanose); 517 cm−1, C–C–O band (pyranose); 622 cm−1, C–C–O band; 678 cm−1, C–C–O band (furanose); 815 cm−1, C–C stretch (pyranose); 872 cm−1, C–C stretch (furanose); 940 cm−1, C–C–H band (furanose); 1058 cm−1, C–O stretch;

1095 cm−1, C–O–H band; 1272 and 1340 cm−1, CH2 twist; 1370 and 1412 cm−1, CH2 twist; and 1460 cm−1, CH2 band [14]. In this study of cassava flour we also found a very characteristic Raman peak between 2700 and 3000 cm−1, which can be attributed to the lipids of vegetable origin present in cassava flours. As seen in the study by Czamara et al. [12], which compiled spectra of lipids of plant and animal origin, plant lipids composed of unsaturated fatty acids show a very characteristic feature that rises at 2700 cm−1 and decays at 3000 cm−1: the C–H band stretch of elaidic acid in the 2890 cm−1 region; the =C–H band stretch of elaidic acid in the 3000 cm−1 region; the =C–H band stretch of palmitic acid in the 3005 cm−1 region; the =C–H band stretch of linoleic acid in the 2923 cm−1 region; the =C–H band stretch of linolenic acid in the 2852 cm−1 region; and the =C–H band stretch of arachidonic acid in the 2845 cm−1 region [1]. Silveira et al. [1] detected the presence of saturated and unsaturated fat in processed foods using the Raman method, observing characteristic bands in the 800–1800 cm−1 spectral region corresponding to vegetable oils. The 1750 cm−1 band is assigned to the C=O bond of the short saturated fatty acids of coconut oil; the 1440 cm−1 band to the C–H ring deformation in virgin olive oil, characterizing the presence of unsaturation. Vegetable margarine was observed to be composed mainly of saturated fatty acids, with a very intense band at 1300 cm−1 characteristic of concentrated saturated fat [1]. The analysis also revealed peaks in the 500–1700 cm−1 range in the cassava flour samples. These can be matched to characteristic protein bands according to Rygula et al. [13], who reviewed the main bands of proteins of plant and animal origin and reported that the plant-origin protein lecithin presents as characteristic


bands: amide I at 1667 cm−1; amide III at 1236 cm−1; and, at 540 cm−1, the PO43− band, indicating phosphate ions in its composition. Oliveira [15], studying the Raman spectrum of traditional and light curd cheese, reports the 1652 cm−1 region, corresponding to amide I stretching, as a characteristic band of milk protein. In a study on adulteration of powdered milk, Almeida (2011) [15] reports the 1550 cm−1 region, the amide II vibrational mode (C=N band stretching), as characteristic of milk proteins [15]. The peaks found in our study for determining the chemical composition of cassava flour by the Raman method were located in bands close to those of the studies described above. This leads us to suggest that cassava flour contains complex carbohydrates such as polysaccharides, unsaturated fatty acids, and proteins characteristic of foods of plant origin, as corroborated by the physico-chemical analysis in the Table of Chemical Composition of Food prepared by UNICAMP in 2011 [16]. When submitting the data to the PCA and PLS techniques, we verified differences in intensity between the spectra of artisanal and industrialized flour; the differences among spectra within each group can be attributed to the factors to which the tuber is subjected from planting to harvest. These factors affect the amount of nutrients the tuber can retain for its growth and determine its chemical composition. Factors that influence the chemical composition of the tuber include:

1. Soil: must be deep and friable.
2. Rainfall: 1000–1500 mm distributed throughout the year, without dry seasons.
3. Temperature: between 16 and 38 °C, with 12 h of sunlight.
4. Fertilization: improves productivity.
5. Genetic variability: affects resistance to pests, productivity, adaptability to the soil, and the amount of hydrocyanic acid present in the plant.
6. Planting season: planting should occur at the start of the rainy season, when humidity and heat are greater, stimulating sprouting and rooting [17, 18].

Through Raman spectroscopy it was possible to elucidate the chemical characteristics of the cassava flour samples. To that end, we compared the chemical composition of the samples used in this study and tentatively assigned vibrational modes of the main food nutrients already reported in the literature. According to our bibliographical survey, the 476 cm−1 vibrational mode falls within the carbohydrate spectrum,


more specifically in the spectrum of fructose in aqueous solution; the 2906 cm−1 vibrational mode can be attributed to the spectrum of lipids of plant origin; and the 800–1400 cm−1 range can be attributed to the protein spectrum, mainly the plant-origin lecithin protein. Principal Component Analysis demonstrated the difference between the groups of artisanal and industrialized flours, as well as the differences among samples within each group. We hypothesize that these differences are caused by variation in nutrient percentages, probably due to intrinsic factors in tuber cultivation.
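The comparison-based assignment used throughout this Discussion (matching each detected peak against band positions quoted from the literature) can be sketched as a nearest-match lookup with a tolerance window. The reference values below are the glucose bands quoted from Vasko et al. [9]; the detected peaks are hypothetical illustrations:

```python
# Tentative assignment of detected Raman peaks (cm-1) by matching each one
# to the closest literature band within a tolerance window.
GLUCOSE_BANDS = {
    914: "C-O-H stretch (glucose)",
    1020: "C-O-H stretch (glucose)",
    1071: "C-O-H stretch (glucose)",
    1349: "C-O-H stretch (glucose)",
}

def assign_peaks(detected, reference, tol=10):
    """Return {peak: label or None}; None means no band within tol cm-1."""
    out = {}
    for peak in detected:
        nearest = min(reference, key=lambda band: abs(band - peak))
        out[peak] = reference[nearest] if abs(nearest - peak) <= tol else None
    return out

assignments = assign_peaks([917, 1025, 476], GLUCOSE_BANDS)
# 917 and 1025 fall within 10 cm-1 of glucose bands; 476 stays
# unassigned against this particular reference set
```

A real workflow would merge reference tables for several nutrients (carbohydrates, lipids, proteins) and report all candidates within the tolerance, not just the nearest.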

6 Conclusion

In the present work we performed, for the first time, a chemical analysis of artisanal and industrialized cassava flour by the Raman method. It was possible to determine the chemical composition of artisanal and industrialized cassava flours and to compare and verify the differences between the samples. These differences may be caused by the intrinsic characteristics of the tubers and the factors to which they were subjected during growth. There is still a great lack of scientific work studying the main characteristics of foods consumed by the low-income Brazilian population, such as cassava flour. Improving research on the chemical composition of these foods is essential to better understand their nutritional aspects and to avoid the ingestion of chemicals that could be harmful to the population's health.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Silveira FL (2008) Determination of the concentration of saturated fats in commercial foods by dispersive Raman spectroscopy. Thesis (Master's degree). University of Vale do Paraíba, Research and Development Institute, São José dos Campos, SP
2. Rodrigues ADG, Galzerani JC (2012) Infrared, Raman and photoluminescence spectroscopies: potential and complementarities. Rev Bras Phys Educ 34(4):4309–4319
3. Levy RB, Claro RM, Mondini L, Sichierii R, Monteiro CA (2012) Regional and socioeconomic distribution of household food availability in Brazil in 2008–2009. Rev Saúde Pública 46(1):6–15
4. Corção M (2014) Memories and oblivions in Brazilian cuisine proposed by Câmara Cascudo. Introduction 1(1):77–93
5. Wold S, Esbensen K, Geladi P (1987) Principal component analysis. Chemom Intell Lab Syst 2(1–3):37–52
6. Barker M, Rayens W (2003) Partial least squares for discrimination. J Chemom 17(5):166–173
7. Kennard RW, Stone LA (1969) Computer aided design of experiments. Technometrics 11(1):137–148
8. Shinzawa H, Awa K, Kanematsu W, Ozaki Y (2009) Multivariate data analysis for Raman spectroscopic imaging. J Raman Spectrosc 40(12):1720–1725
9. Vasko PD, Blackwell J, Koenig JL (1971) Infrared and Raman spectroscopy of carbohydrates. Part I: identification of O–H and C–H-related vibrational modes for D-glucose, maltose, cellobiose and dextran by deuterium-substitution methods. Carbohydr Res 19:297–310
10. de Souza JML, de Álvares VS, Leite FMN, Reis FS, Felisberto FÁV (2008) Physico-chemical characterization of cassava flours from the municipality of Cruzeiro do Sul-Acre. UEPG Exact Earth Sci 14(1):43–49
11. Wiercigroch E, Szafraniec E, Czamara K, Pacia MZ, Majzner K, Kochan K et al (2017) Raman and infrared spectroscopy of carbohydrates: a review. Spectrochim Acta Part A Mol Biomol Spectrosc 185:317–335
12. Czamara K, Majzner K, Pacia MZ, Kochan K, Kaczor A, Baranska M (2015) Raman spectroscopy of lipids: a review. J Raman Spectrosc 46(1):4–20
13. Rygula A, Majzner K, Marzec KM, Kaczor A, Pilarczyk M, Baranska M (2013) Raman spectroscopy of proteins: a review. J Raman Spectrosc 44(8):1061–1076
14. Camerlingo C, Portaccio M, Tatè R, Lepore M, Delfino I (2017) Fructose and pectin detection in fruit-based food products by surface-enhanced Raman spectroscopy. Sensors 17(4):1–12
15. De Almeida MR (2011) Evaluation of the quality and variety of powdered milk and condensed milk by Raman spectroscopy and multivariate analysis. Thesis (Master's degree). Juiz de Fora Federal University, Chemistry Department, Juiz de Fora, MG
16. NEPA-UNICAMP (2011) TACO: Brazilian food composition table, 4th edn. Center for Studies and Research in Food, São Paulo
17. de Oliveira LL, Rebouças TNH (2009) Hygienic-sanitary profile of cassava flour (Manihot esculenta Crantz) processing units in the southwest region of Bahia. Alim Nutr, Araraquara 19(4):393–399
18. Nascimento Neto F (2006) Basic recommendations for the application of good agricultural and manufacturing practices in family farming, 1st edn. EMBRAPA, Brasília, DF, 243 p

Effect of Light Emitted by Diode as Treatment of Radiodermatitis Cristina Pires Camargo, H. A. Carvalho, R. Gemperli, Cindy Lie Tabuse, Pedro Henrique Gianjoppe dos Santos, Lara Andressa Ordonhe Gonçales, Carolina Lopo Rego, B. M. Silva, M. H. A. S. Teixeira, Y. O. Feitosa, F. H. P. Videira, and G. A. Campello

Abstract

Radiotherapy can cause radiodermatitis in 85–90% of oncologic patients. There are several therapeutic alternatives to treat radiodermatitis, with variable results. A new option is the use of light-emitting diodes (LED) to treat this condition. We analyzed twenty male Wistar rats weighing 200–250 g. All the animals underwent a radiotherapy session. After 15 days, the animals were divided into four groups: control (no treatment), LED 630 nm, LED 850 nm, and LED 630 + 850 nm. The LED treatment was applied every two days until day 21. We analyzed the macroscopic aspect of radiodermatitis before and after treatment. After this phase, samples were collected for histological analysis (HE). Macro- and microscopic analysis indicated positive effects of light exposure, especially with the association of the 630 and 850 nm wavelengths, resulting in a reduction in the severity of radiodermatitis to grade 2–2.5. In the histological analysis, photobiomodulation increased the division and migration of cells in the basal layer of the epidermis, demonstrating the regenerative potential of this treatment against the effects of radiotherapy by increasing the speed of epithelialization of the lesion. This study suggests that the 630 + 850 nm association improved radiodermatitis regeneration.

C. P. Camargo · R. Gemperli
Division of Plastic Surgery, Hospital das Clínicas, Laboratory of Microsurgery and Plastic Surgery (LIM-04), Medical School, Universidade de São Paulo (USP), São Paulo, SP, Brazil
e-mail: [email protected]

H. A. Carvalho
Service of Radiotherapy, Department of Radiology and Oncology, Medical School, Universidade de São Paulo (USP), São Paulo, SP, Brazil

C. L. Tabuse
Medical School, University of São Paulo (USP), São Paulo, SP, Brazil
e-mail: [email protected]

P. H. G. dos Santos · L. A. O. Gonçales · C. L. Rego · B. M. Silva · M. H. A. S. Teixeira · Y. O. Feitosa · F. H. P. Videira · G. A. Campello
Polytechnic School, University of São Paulo (USP), São Paulo, SP, Brazil
e-mail: [email protected]
L. A. O. Gonçales e-mail: [email protected]
C. L. Rego e-mail: [email protected]

Keywords

Radiotherapy · Photobiomodulation · Radiodermatitis

1 Introduction

Radiotherapy (RDT) is considered an adjuvant treatment to oncological surgery [1, 2]. The ionizing effect of RDT causes breaks in the DNA of both cancer and non-cancer cells. These lesions inhibit cell repair and thus cause apoptosis. Despite developments in radiotherapy techniques, radiodermatitis is still frequent: about 85–90% of patients who undergo radiotherapy present some degree of radiodermatitis. Its occurrence depends on the total irradiation dose, number of sessions, comorbidities (use of chemotherapy), smoking habits, and the patient's skin sensitivity. The signs and symptoms vary from a light local burning sensation to necrosis in the irradiated area. The disease can also be classified by the onset of symptoms as acute (within the first 90 days after irradiation) or chronic (months after irradiation). In clinical practice, patients who present these complications often report poor quality of life, due to pain caused by irradiation as well as recurrent medical visits and extra self-care effort.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_184



C. P. Camargo et al.

Some therapeutic alternatives are applied to avoid and/or mitigate the symptoms of radiodermatitis, such as creams and lotions, topical or systemic corticoids, and antioxidants. However, according to a systematic review, there is no consensus among researchers regarding which of these therapeutic alternatives would be more effective in the treatment of radiodermatitis [3]. Therefore, due to the lack of evidence, and based on clinical experience, this study evaluated the effectiveness of light emitted by diodes on the dorsal region of rats undergoing radiotherapy.

2 Materials and Methods

Twenty male Wistar rats weighing 200–250 g were used. These animals were subjected to a session of radiotherapy. After 15 days, the animals were exposed to light emitted by diodes every two days for 21 days.

On the equipment, it is possible to choose among three distinct irradiance values (denominated "Maximum", "Medium" and "Minimum"), irradiance being the amount of energy per unit time reaching a given area; three options of emitted wavelength (630 nm, 850 nm, or both simultaneously); and the light exposure time. Table 1 shows the correspondence between these choices and the irradiance (in W/cm2) incident on the animal's injury. For the tests performed on the rats, the fluence obtained from the red light was 3.24 J/cm2 and the fluence from the infrared light was 5.76 J/cm2, with 6 min per test. All irradiance measurements were made with a Newport Model 1825-C Power/Energy Meter and an 818-SL/DB silicon (Si) photodetector. The electronic part of the device controls the wavelength emitted by the LEDs and the irradiance (W/cm2) of the light reaching the animal's injury. For this, the circuit schematic in Fig. 1 was used, composed of 5 infrared (850 nm) LEDs, 36 red (630 nm) LEDs, TIP-122 transistors, assorted SMD resistors, a 4.3-inch Nextion NX4827T043 display, and an ATmega328 microcontroller.
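The fluence figures quoted above follow directly from irradiance multiplied by exposure time; a quick check of the Table 1 values:

```python
# Fluence (J/cm^2) = irradiance (W/cm^2) x exposure time (s).
# Values reproduce Table 1 for the 6-minute (360 s) sessions.

def fluence(irradiance_w_cm2, time_s):
    """Energy delivered per unit area over the exposure."""
    return irradiance_w_cm2 * time_s

red = fluence(0.009, 360)       # 630 nm, "Minimum" scale -> 3.24 J/cm^2
infrared = fluence(0.016, 360)  # 850 nm, "Minimum" scale -> 5.76 J/cm^2
combined = fluence(0.025, 360)  # both LED banks on simultaneously
```

Note that the combined-mode irradiance (0.025 W/cm2) is simply the sum of the two individual irradiances, so its fluence is the sum of the red and infrared fluences.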

2.1 Development of Photoemitter Equipment

The LED (light-emitting diode) photoemitter equipment was developed by Grupo ARGO, an interdisciplinary undergraduate team at USP that works with biomedical engineering and health innovation. The device allows sessions from 1 to 30 min with red (630 nm) LEDs, infrared (850 nm) LEDs, or both simultaneously. The user can customize the treatment session, adjusting the irradiance of the light incident on the injury and choosing the wavelength to be used. The choice of the light source and its parameters (wavelength, irradiance, and fluence) was based on studies of photobiomodulation treatment of similar types of wounds and burns [4–7]. It was also taken into account that wavelengths between 600 and 700 nm act on superficial tissue and between 780 and 1000 nm on deeper tissues [8]. The choice of LEDs as the light source involved analyzing physical properties such as the absence of coherence and collimation of LED-emitted light. The coherence of the light must be considered in two aspects: its physical effect and its interaction with molecules and tissues. Physiologically, the absorption of low-intensity light by the biological system is of a purely incoherent nature, like the light emitted by the LED. The energy density of the LED is distributed over a large electromagnetic band (630 ± 35 nm), which allows this light to interact with a wide group of specific photoreceptors. Thus, the wavelengths to be tested were set at 630 nm (red) and 850 nm (infrared).

2.2 Irradiation of the Panniculus Carnosus

All animals underwent general anesthesia by intraperitoneal injection of a mixture of ketamine hydrochloride (100 mg/kg) and xylazine hydrochloride (5 mg/kg). An area of 10 × 5 cm was trichotomized over the dorsal region of the animal. Subsequently, two 3 × 4 cm cesium plates were placed on the proximal and distal parts of the animals' backs (with 5 cm spacing between the two plates) for 13 min. After irradiation, the animals remained under observation for 14 days for the establishment of radiodermatitis. After this 14-day period, the animals were divided into four groups:

• Control (n = 10)
• Light emitted by diode (LED 630 nm) (n = 10)
• Light emitted by diode (LED 850 nm) (n = 10)
• Light emitted by diode (LED 630 nm + 850 nm) (n = 10)

2.3 LED or Ambient Light Exposure

The LED sessions were performed according to the wavelength and energy intensity of the allocated group (630 nm, 850 nm, or the 630 + 850 nm association), lasting 6 min each, on alternating days until day 21.

Table 1 Correspondence between wavelength (nm), light irradiance (W/cm2), light fluence (J/cm2) and time (s)

Fig. 1 Electronic system


Group        Irradiance scale   Irradiance (W/cm2)   Exposure time (s)   Fluence 630 nm (J/cm2)   Fluence 850 nm (J/cm2)
Control      –                  –                    –                   –                        –
630 nm       Minimum            0.009                360                 3.24                     –
850 nm       Minimum            0.016                360                 –                        5.76
630/850 nm   Minimum            0.025                360                 3.24                     5.76


Table 2 Radiodermatitis scale (RTOG)

Grade       Radiodermatitis classification
Grade 1     Normal appearance
Grade 1.5   Minimal erythema
Grade 2     Moderate erythema
Grade 2.5   Erythema associated with dry desquamation
Grade 3     Erythema associated with confluent dry desquamation
Grade 3.5   Confluent dry desquamation, scabs
Grade 4     Moist desquamation, moderate scabs
Grade 4.5   Moist desquamation, small ulcers
Grade 5     Large ulcers
Grade 5.5   Necrosis

2.4 Macroscopic Analysis

The radiodermatitis area was evaluated according to the radiodermatitis scale (RTOG; Table 2) at the beginning of the treatment and after the photobiomodulation treatment [9].

2.5 Microscopic Analysis

At the end of the 21 days of irradiation with the different LED wavelengths (630 nm, 850 nm and 630 + 850 nm), all animals were euthanized with an anesthetic overdose by the intraperitoneal route (ketamine 150 mg/kg + xylazine 10 mg/kg). One sample was fixed in 4% formalin and then stained with HE.

2.6 Euthanasia of Animals

On the 21st day after the LED sessions, the animals were euthanized with an anesthetic overdose by the intraperitoneal route (ketamine 150 mg/kg + xylazine 10 mg/kg). After euthanasia, the carcasses received the measures necessary for their disposal according to the Waste Disposal Guidance Booklet of the FMUSP-HC System, which follows resolution number 306, of December 7, 2004, of the National Health Surveillance Agency and Resolution number 358, of April 29, 2005, of the National Environment Council (CONAMA). All procedures were performed according to the protocol of the Animal Use Ethics Committee (CEUA) 1060/2018.

2.7 Statistical Analysis

The variables obtained in the microscopic analysis were described by median and interquartile range. Comparisons between groups were analyzed using the Kruskal–Wallis test, with the Dunn test for post hoc analysis. A significance level (alpha) of 5% and a study power of 80% were considered. The statistical program Stata v14 was used (StataCorp. 2015. Stata Statistical Software: Release 14. College Station, TX: StataCorp LP).

3 Results

3.1 Macroscopic Analysis

The control group was not exposed to any kind of photobiomodulation. At day zero of the LED treatment, the animals were classified as grade 4.5 on the radiodermatitis scale (Table 2). After treatment with the different wavelengths, all exposed groups (630 nm, 850 nm, 630/850 nm) had the radiodermatitis severity reduced to grade 2–2.5. In the control group, the radiodermatitis kept its previous classification (4–5).

3.2 Microscopic Analysis

The general aspect of the radiodermatitis areas was analyzed at 20× magnification. The control group presented a thin epidermis with few basal cells. The groups exposed to 630 and 850 nm showed a thicker epidermis compared to the control group, with basal cells migrating from the basal layers to the superficial layers. The 630/850 nm group showed a thicker epidermis, with organized formation of the layers and signs of early keratinization. Regarding the quantitative analysis of dermal appendages (hair follicles and pilosebaceous glands), there was a significant difference between the groups (p < 0.001). Post hoc comparison showed differences between the following groups: control versus 630/850 nm (p = 0.003); 630 nm versus 850 nm (p = 0.007); 630 nm versus 630/850 nm (p = 0.01); and 850 nm versus 630/850 nm (p < 0.001) (Table 3, Fig. 2). The quantitative analysis of vascular density (arterioles) also showed a significant difference among the groups (p < 0.001). In the post hoc comparison, differences were found between: control versus 630/850 nm (p = 0.01); 630 nm versus 630/850 nm (p = 0.0002); and 850 nm versus 630/850 nm (p < 0.001) (Table 3 and Fig. 3).
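The omnibus comparison described above can be sketched with SciPy. The counts below are illustrative, not the study's raw data; scipy.stats has no Dunn test (the study's post hoc choice, commonly available via the scikit-posthocs package), so Bonferroni-corrected pairwise Mann–Whitney U tests stand in for it here:

```python
from scipy import stats

# Illustrative dermal-appendage counts per group (NOT the study's raw data;
# chosen to echo the medians reported in Table 3).
control     = [16, 15, 17, 20, 16, 18, 15, 17, 16, 20]
led_630     = [21, 17, 24, 22, 20, 21, 19, 23, 21, 24]
led_850     = [10, 8, 12, 11, 9, 10, 12, 11, 10, 8]
led_630_850 = [34, 30, 42, 35, 33, 36, 31, 40, 34, 38]

# Omnibus nonparametric comparison across the four groups
h_stat, p_value = stats.kruskal(control, led_630, led_850, led_630_850)

# Post hoc stand-in: pairwise Mann-Whitney U with Bonferroni correction
pairs = [(control, led_630_850), (led_630, led_850)]
adjusted = [min(stats.mannwhitneyu(a, b).pvalue * len(pairs), 1.0)
            for a, b in pairs]
```

With clearly separated groups like these, the Kruskal–Wallis p-value falls well below the 5% threshold used in the paper.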

Table 3 Comparison of histological structures according to the allocated group


                    Control (Median-IQR)   630 nm (Median-IQR)   850 nm (Median-IQR)   630/850 nm (Median-IQR)
Dermal appendages   16.5 (15–20)           21 (17–24)            10.5 (8–12)           34.5 (30–42)
Arterioles          4.5 (3–8)              5 (3–6)               4 (3–6)               26 (19–29)

IQR interquartile range

Fig. 2 Count of dermal appendages units by group

Fig. 3 Count of arterioles by group

4 Discussion

The comparison of several wavelengths of light emitted by diodes showed that the 630 + 850 nm association produced the greatest effect on the neoformation of dermal appendages, as well as on vascular density, compared to the control and the other groups. According to several studies, the appropriate wavelength for the treatment of radiodermatitis ranges from 600 to 1000 nm. Wavelengths above this range are known to cause loss of intracellular water, which can lead to more

intense lesions. For this reason, we compared the wavelengths most described in the literature [10–13]. DeLand et al. demonstrated that photobiomodulation with parameters of 590 nm and 0.15 J/cm2, twice a week, reduces the severity of radiodermatitis in patients with breast cancer undergoing radiotherapy [12]. Other studies, such as DERMIS [14] and TRANSDERMIS [15], demonstrated an improvement in the degree of radiodermatitis in patients who underwent mastectomy, with irradiation parameters of 660–850 nm, 0.15 J/cm2, twice a week. Thus, based on these studies, and considering that photobiomodulation involves several parameters to be evaluated, such as wavelength, power, energy intensity, and treatment regimen (exposure time, frequency), we primarily evaluated different wavelengths at constant energy intensity. For exposure time and treatment frequency, parameters already established in clinical practice were used. The mechanism of action of the LED is explained by the absorption of light by the cytochrome c oxidase enzyme in the mitochondria, the final enzyme of the electron transport chain. Studies have shown that this molecule acts as a photoacceptor and transducer of photosignals at wavelengths in the red and infrared light spectrum. The absorbed light increases electron transport and the mitochondrial membrane potential and, consequently, increases the production of adenosine triphosphate (ATP) in the mitochondria [16, 17]. Following these events, signaling pathways activate transcription factors that increase gene expression related to collagen synthesis, cell migration and proliferation, anti-apoptotic proteins and antioxidant enzymes. In vitro and in vivo studies have shown that photobiomodulation increases phagocytosis and angiogenesis, decreases inflammatory mediators, and increases the proliferation and migration of keratinocytes and fibroblasts with collagen production. Our study demonstrated that photobiomodulation with different wavelengths increased cell division and cell migration from the basal layer of the epidermis, demonstrating the regenerative potential of this treatment method against the complications of radiotherapy by increasing the speed of epithelialization of the lesion. The results of this study indicate that photobiomodulation with the 630 + 850 nm LED wavelength association showed the best therapeutic results in the treatment of radiodermatitis.


5 Conclusion

In conclusion, diode-emitted light (LED) therapy was efficient in the treatment of radiodermatitis, and the 630 + 850 nm association was the parameter set that demonstrated the best macroscopic and microscopic results. This experience therefore corroborates previous studies on photobiomodulation, since the microscopic and macroscopic results endorse this method as an adequate treatment that can be indicated for patients with radiodermatitis.

Acknowledgements This research was carried out with financial support and encouragement from Fundo Patrimonial Amigos da Poli, an association that raises funds and applies donations from this endowment to projects at the Polytechnic School of the University of São Paulo. Thanks to Fundo Patrimonial Amigos da Poli for the funding, to Prof. Dr. Arturo Forner-Cordero for the support, and to colleagues of the Grupo Argo (P. S. Scandoleira, V. M. Souza, and C. H. Q. Souza) for technical assistance with the equipment development.

Conflict of Interests The authors declare that there is no conflict of interest.

References

1. Bray FN, Simmons BJ, Wolfson AH, Nouri K (2016) Acute and chronic cutaneous reactions to ionizing radiation therapy. Dermatol Ther 6(2):185–206. https://doi.org/10.1007/s13555-016-0120-y
2. Gianfaldoni S, Gianfaldoni R, Wollina U, Lotti J, Tchernev G, Lotti T (2017) An overview on radiotherapy: from its history to its current applications in dermatology. Open Access Maced J Med Sci 5(4):521–525. https://doi.org/10.3889/oamjms.2017.122
3. Chan RJ, Webster J, Chung B, Marquart L, Ahmed M, Garantziotis S (2014) Prevention and treatment of acute radiation-induced skin reactions: a systematic review and meta-analysis of randomized controlled trials. BMC Cancer 14:53. https://doi.org/10.1186/1471-2407-14-53
4. Corazza AV (2005) Fotobiomodulação comparativa entre o laser e LED de baixa intensidade na angiogênese de feridas cutâneas de ratos. Dissertation (Master's degree in Bioengineering), University of São Paulo, São Carlos. https://doi.org/10.11606/D.82.2005.tde-25072006-095614
5. de Freitas LF, Hamblin MR (2016) Proposed mechanisms of photobiomodulation or low-level light therapy. IEEE J Sel Top Quantum Electron 22(3):7000417. https://doi.org/10.1109/JSTQE.2016.2561201
6. Robijns J, Lodewijckx J, Bensadoun R-J, Mebis J (2020) Photobiomodulation, photomedicine, and laser surgery 332–339. https://doi.org/10.1089/photob.2019.4761
7. Strouthos I, Chatzikonstantinou G, Tselis N et al (2017) Photobiomodulation therapy for the management of radiation-induced dermatitis: a single-institution experience of adjuvant radiotherapy in breast cancer patients after breast conserving surgery. Strahlenther Onkol 193(6):491–498. https://doi.org/10.1007/s00066-017-1117-x
8. Simunovic Z (2020) Lasers in medicine and dentistry: basic and up-to-date clinical application of low energy-level laser therapy LLLT. Vitagraf, Rijeka
9. Cox JD, Stetz J, Pajak TF (1995) Toxicity criteria of the Radiation Therapy Oncology Group (RTOG) and the European Organization for Research and Treatment of Cancer (EORTC). Int J Radiat Oncol Biol Phys 31(5):1341–1346. https://doi.org/10.1016/0360-3016(95)00060-C
10. Antunes HS, Herchenhorn D, Small IA et al (2017) Long-term survival of a randomized phase III trial of head and neck cancer patients receiving concurrent chemoradiation therapy with or without low-level laser therapy (LLLT) to prevent oral mucositis. Oral Oncol 71:11–15. https://doi.org/10.1016/j.oraloncology.2017.05.018
11. Robijns J, Lodewijckx J, Mebis J (2019) Photobiomodulation therapy for acute radiodermatitis. Curr Opin Oncol 31(4):291–298. https://doi.org/10.1097/CCO.0000000000000511
12. DeLand MM, Weiss RA, McDaniel DH, Geronemus RG (2007) Treatment of radiation-induced dermatitis with light-emitting diode (LED) photomodulation. Lasers Surg Med 39(2):164–168. https://doi.org/10.1002/lsm.20455
13. Park J, Byun HJ, Lee JH et al (2019) Feasibility of photobiomodulation therapy for the prevention of radiodermatitis: a single-institution pilot study. Lasers Med Sci. https://doi.org/10.1007/s10103-019-02930-1
14. Censabella S, Claes S, Robijns J, Bulens P, Mebis J (2016) Photobiomodulation for the management of radiation dermatitis: the DERMIS trial, a pilot study of MLS® laser therapy in breast cancer patients. Support Care Cancer 24(9):3925–3933. https://doi.org/10.1007/s00520-016-3232-0
15. Robijns J, Censabella S, Claes S et al (2018) Prevention of acute radiodermatitis by photobiomodulation: a randomized, placebo-controlled trial in breast cancer patients (TRANSDERMIS trial). Lasers Surg Med. https://doi.org/10.1002/lsm.22804
16. Karu TI (1987) Photobiological fundamentals of low power laser therapy. IEEE J Quantum Electron 23:1703. https://doi.org/10.1109/JQE.1987.1073236
17. Zhang X, Li H, Li Q et al (2018) Application of red light phototherapy in the treatment of radioactive dermatitis in patients with head and neck cancer. World J Surg Oncol 16:222. https://doi.org/10.1186/s12957-018-1522-3

Effect of Photobiomodulation on Osteoblast-like Cells Cultured on Lithium Disilicate Glass-Ceramic L. T. Fabretti, A. C. D. Rodas, V. P. Ribas, J. K. M. B. Daguano, and I. T. Kato

Abstract

Biomaterials are employed to aid bone regeneration in localized bone defects, especially when a great loss of tissue compromises the ability to repair. A new generation of lithium disilicate glass-ceramic presents favorable characteristics for bone reconstruction, such as high strength and bioactivity. In turn, photobiomodulation therapy has been studied as a means of stimulating bone metabolism, since it acts as a modulating agent for tissue regeneration. In this study, we evaluated the effect of photobiomodulation therapy on the interaction between osteoblastic cells and a lithium disilicate glass-ceramic. An MG63 cell suspension was seeded on lithium disilicate and glass discs. After 24 h, samples were irradiated with a LED device (λ = 660 nm) at 50 mW/cm², 2 J/cm², for 40 s. Cell viability was assessed 2, 7, and 12 days after irradiation using the MTS assay. Samples previously stained with alizarin red S were observed under a confocal microscope to evaluate calcification of the bone matrix. We first observed an increase in the number of cells adhered to lithium disilicate, compared to glass discs, on day 2. LED irradiation increased the number of cells on day 12 in both irradiated groups, quantitatively higher on disilicate discs. Disilicate discs also showed a higher deposition of calcium on the extracellular matrix compared to glass. Red light anticipated mineral deposition in all irradiated groups, earlier for disilicate samples. In conclusion, photobiomodulation therapy anticipated the positive effects of lithium disilicate glass-ceramic on bone matrix formation and mineralization.

Keywords
Bioactive glass-ceramic · LED · Cell proliferation · Matrix mineralization

L. T. Fabretti · A. C. D. Rodas · V. P. Ribas · J. K. M. B. Daguano · I. T. Kato (✉)
Center for Engineering, Modeling and Applied Social Sciences, Federal University of ABC, Alameda da Universidade, s/n, São Bernardo do Campo, Brazil
e-mail: [email protected]

1 Introduction

Bone tissue can regenerate itself, and this process may be activated by a localized injury. Its regeneration can be limited by the absence of proper blood supply and by structural defects. In situations where tissue defects compromise the regenerative capacity, the use of biomaterial implants has become a common practice that revolutionized the therapeutic modalities of tissue repair [1]. Glass-ceramics based on Li2O–SiO2 are widely used and studied due to properties that are highly favorable for dental reconstruction, such as their crystallization behavior, strength, lack of in vitro cytotoxicity, and inertness to the buccal environment, providing a proper ground for osseointegrated dental implants [2, 3]. However, despite the mechanical characteristics that make them a preferred choice for dental prostheses, their lack of bioactivity limits their usefulness. For this purpose, new lithia-silica glass-ceramics, such as lithium disilicate, have been produced that have proved to be highly bioactive while maintaining durable mechanical properties [4]. Photobiomodulation therapy is a treatment modality that consists of applying low-power light to the surface of the tissue to promote repair. Its mechanisms are based on light absorption by a chromophore at the last enzyme in the mitochondrial respiratory electron transport chain, cytochrome c oxidase, leading to increased mitochondrial function [5]. PBT has been widely studied for bone tissue repair [6], with reported effects on nodule formation [7], bone mineralization [8], and osteoblast proliferation [9].

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_185


Given the known ability of bone cells to proliferate on lithium disilicate glass-ceramics and the likely auxiliary properties of photobiomodulation, this study explored the effects of PBT on the interaction between osteoblastic cells and a lithium disilicate glass-ceramic.

2 Materials and Methods

2.1 Cell Culture
The MG63 cells (human osteosarcoma) were cultured in EMEM medium (Eagle's Minimum Essential Medium) supplemented with 10% fetal bovine serum and 1% antibiotic/antimycotic solution (Gibco cat. no. 15240-062) in tissue culture bottles. They were incubated at 37 °C in a 5% CO2 atmosphere. When the cells reached 85–90% confluence, they were detached from the culture bottle with trypsin (0.05%)–EDTA (0.02%) solution, and a cell suspension was prepared and seeded on the sample discs.

2.2 Sample Preparation
Lithium disilicate discs (D) with 1 cm diameter and 1 mm thickness were previously sterilized (humid heat at 121 °C for 20 min) and placed in a 24-well plate. The wells containing the discs received a cell suspension of 2 × 10⁴ cells/mL, and the plate was transferred to the incubator for 24 h. Two plates were used: one for irradiation with a LED device, while the other was kept in the dark. As a test reference, glass discs (G) with 1.2 cm of diameter were used.

2.3 LED Irradiation
After 24 h of cell seeding, the culture medium was replaced with DPBS for light irradiation. An irradiation device composed of LEDs (λ = 660 nm) was used (LEDbox, Biolambda, Brazil). This equipment irradiated the plate homogeneously with an irradiance of 50 mW/cm². The cells adhered to the disc materials were irradiated for 40 s, corresponding to a radiant exposure of 2 J/cm² [10].

2.4 Cell Viability
The evaluation of cell viability was performed directly on the samples using MTS (CellTiter 96® AQueous Non-Radioactive Cell Proliferation Assay, Promega Corporation), a vital dye soluble in the culture medium [3]. MTS is bioreduced by cells to a formazan product that is soluble in the tissue culture medium. The absorbance of formazan at 490 nm is directly proportional to the number of viable cells in the culture. For cell growth evaluation, samples were assayed in duplicate at 2, 7, and 12 days of culture after irradiation. The same was done for the dark plate, which was not irradiated. At each analysis time, the samples were moved to a new plate without cells and reacted with the culture medium and MTS. The soluble solution was taken from the wells and transferred to another plate to be read in a spectrophotometer. After the evaluation of cell activity, each sample was washed with PBS (1X) and fixed with 4% formaldehyde for 5 min for functional analysis. The areas of the lithium disilicate discs and coverslips were calculated (0.8 cm² and 1.33 cm², respectively), and absorbance was normalized to absorbance/cm².

2.5 Functional Analysis
For the analysis of osteoblast calcification on the D and G discs, the samples were stained with 1% alizarin red S (pH 4.2) for 2 min. After washing with distilled water, the samples were visualized using an Olympus confocal laser microscope (Lext OLS4100) [3, 11].

3 Results

3.1 Cellular Viability
The viability analysis was performed to evaluate cell proliferation on lithium disilicate (D) or glass (G) discs after irradiation (i). The absorbance values are presented in Fig. 1. In all groups, the absorbance value increased with time. Two days after irradiation, the absorbance data indicated a higher number of cells attached to disilicate discs than to glass (p < 0.05). On the other hand, PBM did not alter cell proliferation on either material (p > 0.05). On day 7, no difference was observed between materials or between irradiated and non-irradiated groups (p > 0.05). Finally, on the last day, cell proliferation increased on disilicate compared to glass discs. A comparison between the D and Di groups showed that PBM also increased the number of cells attached to disilicate samples (p < 0.05). In contrast, irradiation did not alter cell proliferation on glass discs (p > 0.05).

Fig. 1 Viability of human osteoblasts assessed by the MTS test. The graph shows the mean ± standard deviation of the absorbance values at 2, 7, and 12 days after irradiation
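As a quick consistency check on the irradiation dose and the per-area normalization reported above, the quantities can be recomputed directly (the numeric values come from the text; the variable names are ours):

```python
import math

# Radiant exposure (dose) delivered by the LED: H = E * t, where E is the
# irradiance in W/cm^2 and t is the exposure time in seconds.
irradiance_w_cm2 = 0.050          # 50 mW/cm^2
exposure_s = 40.0
radiant_exposure_j_cm2 = irradiance_w_cm2 * exposure_s
print(radiant_exposure_j_cm2)     # matches the reported 2 J/cm^2

# Disc area used to normalize absorbance to absorbance/cm^2.
disilicate_area_cm2 = math.pi * (1.0 / 2.0) ** 2   # 1 cm diameter disc
print(round(disilicate_area_cm2, 2))               # ~0.79, reported as 0.8
```

This confirms the reported parameters are mutually consistent: 50 mW/cm² for 40 s yields exactly 2 J/cm², and a 1 cm disc has an area of about 0.8 cm².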

3.2 Functional Analysis
This analysis was conducted using alizarin red S stain to detect bone mineralization. In Fig. 2, cells are colored purple, hydroxyapatite formed from the deposition of calcium and phosphate in the bone matrix is colored yellow, and carbonated hydroxyapatite is colored brown. In the G and Gi groups, the number of cells adhered to the glass increased from day 2 to day 7 (Fig. 2a-b-d-e). Hydroxyapatite and a few carbonated hydroxyapatite spots were visualized in the G group only on day 12 (Fig. 2c). Regarding the Gi discs, hydroxyapatite and carbonated hydroxyapatite were observed on day 7 (Fig. 2e), and the amount of carbonated hydroxyapatite increased on day 12 (Fig. 2f). The analysis of the D groups showed a higher cellular concentration on day 2 (Fig. 2g) compared to the glass groups (Fig. 2a–d). At this time point, the formation of hydroxyapatite was observed in the D and Di groups, evidenced by yellowish deposits on the biomaterial (Fig. 2g–j). Lithium disilicate glass-ceramic also induced early carbonated hydroxyapatite formation: on day 7, several regions were colored brown (Fig. 2h), and on day 12, this mineral was predominant in the D groups (Fig. 2i). In addition, irradiation of cells attached to the disilicate discs induced carbonated hydroxyapatite formation as early as day 2 (Fig. 2j). The mineralization increased on days 7 and 12 (Fig. 2k, l, respectively).

4

Discussion

Bioactive glass-ceramic materials have been developed in recent decades as bone-regenerative materials for orthopedic and dental applications [12, 13]. When exposed to biological fluids, these biomaterials exhibit chemical and topographic modifications that result in hydroxyl-carbonate apatite formation. This mineral is considered responsible for the biomaterial-collagen interaction that results in bone bonding [13]. In this study, we used lithium disilicate, a glass-ceramic developed by Daguano [4]. This biomaterial has a high bending strength (233 MPa) and is bioactive, biocompatible, and capable of promoting cell adhesion and proliferation, inducing MG63 cells to produce a bone-type matrix [4]. Accordingly, we irradiated the osteoblasts attached to the biomaterial with red light to enhance cell proliferation and bone formation.

Photobiomodulation therapy has also become a tool of great importance in the dental and medical fields. Many studies have been carried out to understand the effects of this phototherapy at the cellular level and suggest that it modulates the metabolic processes of various cell types [5, 10]. This phototherapy can therefore stimulate cell adhesion and proliferation, promote vascularization of damaged tissues, and modulate the inflammatory response. These effects justify its use to promote wound healing, tissue repair, and prevention of cell death [5, 14].

We first observed an increase in the number of cells adhered to lithium disilicate, compared to glass discs, on day 2. This result confirmed that this biomaterial stimulates cell adhesion and proliferation [4]. Photobiomodulation increased the number of cells on day 12 in both irradiated groups, quantitatively higher on disilicate discs. Several studies have shown that red and infrared light can induce osteoblast adhesion and proliferation [6, 7, 9]. Besides cell proliferation, we also employed alizarin red S for a qualitative assessment of cell mineralization. Disilicate discs showed a higher deposition of calcium on the extracellular matrix than glass. The application of red light increased mineral deposition in all irradiated groups. For cells adhered to glass discs, photobiomodulation anticipated mineralization from day 12 to day 7. An even earlier calcium phosphate deposition was observed in the disilicate groups, from day 7 for the non-irradiated group to day 2 for the irradiated group. These results are in line with other studies: photobiomodulation therapy with red and infrared light has been shown to accelerate bone matrix mineralization when applied to osteoblasts, and these effects were also improved in the presence of other biomaterials [6]. This preliminary study confirmed our hypothesis that photobiomodulation can improve the interaction between osteoblasts and this bioactive glass-ceramic. Further experiments will be conducted to advance knowledge in this research area.

5 Conclusions

Taken together, our results showed that lithium disilicate promotes cell adhesion and proliferation and induces bone matrix secretion. In addition, photobiomodulation improved the positive effects observed with lithium disilicate: it increased the number of cells adhered to this glass-ceramic and anticipated the process of matrix mineralization.


Fig. 2 Alizarin red S stain of osteoblasts adhered to glass or disilicate discs. The black, red, yellow, and white arrows indicate osteoblast cells, hydroxyapatite, wired hydroxyapatite, and nodules of carbonated hydroxyapatite, respectively. Bars represent 100 µm

Acknowledgements The authors would like to thank CNPq (Grant nº. 430057/2016-4).

Conflict of Interest The authors declare that they have no conflict of interest.

References
1. Benic GI, Hämmerle CH (2014) Horizontal bone augmentation by means of guided bone regeneration. Periodontol 2000 66:13–40. https://doi.org/10.1111/prd.12039
2. Montazerian M, Zanotto ED (2017) Bioactive and inert dental glass-ceramics. J Biomed Mater Res A 105:619–639. https://doi.org/10.1002/jbm.a.35923
3. Willard A, Chu TG (2018) The science and application of IPS e.Max dental ceramic. Kaohsiung J Med Sci 34:238–242. https://doi.org/10.1016/j.kjms.2018.01.012
4. Daguano JKMB, Milesi MTB, Rodas ACD et al (2019) In vitro biocompatibility of new bioactive lithia-silica glass-ceramics. Mater Sci Eng C Mater Biol Appl 94:117–125. https://doi.org/10.1016/j.msec.2018.09.006
5. Hamblin MR (2018) Mechanisms and mitochondrial redox signaling in photobiomodulation. Photochem Photobiol 94:199–212. https://doi.org/10.1111/php.12864
6. Escudero JSB, Perez MGB, de Oliveira Rosso MP et al (2019) Photobiomodulation therapy (PBMT) in bone repair: a systematic review. Injury 50:1853–1867. https://doi.org/10.1016/j.injury.2019.09.031
7. Ozawa Y, Shimizu N, Kariya G, Abiko Y (1998) Low-energy laser irradiation stimulates bone nodule formation at early stages of cell culture in rat calvarial cells. Bone 22:347–354. https://doi.org/10.1016/s8756-3282(97)00294-9
8. Fujimoto K, Kiyosaki T, Mitsui N et al (2010) Low-intensity laser irradiation stimulates mineralization via increased BMPs in MC3T3-E1 cells. Lasers Surg Med 42:519–526. https://doi.org/10.1002/lsm.20880
9. Stein A, Benayahu D, Maltz L, Oron U (2005) Low-level laser irradiation promotes proliferation and differentiation of human osteoblasts in vitro. Photomed Laser Surg 23:161–166. https://doi.org/10.1089/pho.2005.23.161
10. Deana AM, de Souza AM, Teixeira VP et al (2018) The impact of photobiomodulation on osteoblast-like cell: a review. Lasers Med Sci 33:1147–1158. https://doi.org/10.1007/s10103-018-2486-9
11. Wang G, Zheng L, Zhao H et al (2011) In vitro assessment of the differentiation potential of bone marrow-derived mesenchymal stem cells on genipin-chitosan conjugation scaffold with surface hydroxyapatite nanostructure for bone tissue engineering. Tissue Eng Part A 17:1341–1349. https://doi.org/10.1089/ten.TEA.2010.0497
12. Hench LL (2015) The future of bioactive ceramics. J Mater Sci Mater Med 26:86. https://doi.org/10.1007/s10856-015-5425-3
13. Jones JR (2013) Review of bioactive glass: from Hench to hybrids. Acta Biomater 9:4457–4486. https://doi.org/10.1016/j.actbio.2012.08.023
14. Chung H, Dai T, Sharma SK et al (2012) The nuts and bolts of low-level laser (light) therapy. Ann Biomed Eng 40:516–533. https://doi.org/10.1007/s10439-011-0454-7

Reference Values of Current Perception Threshold in Adult Brazilian Cohort Diogo Correia e Silva, A. P. Fontana, M. K. Gomes, and C. J. Tierra-Criollo

Abstract

Semmes-Weinstein monofilaments have been widely used to quantify touch-sense thresholds. However, the instrument has low sensitivity and yields subjective responses. In the 1980s, a procedure was proposed for the psychophysical assessment of the touch sense by sine-wave electric current stimulation. This assessment was based on studies suggesting that sinusoidal stimuli of different frequencies would excite sensory systems related to fibers of different diameters, thus increasing the selectivity of the stimulation. This study aims to evaluate and describe reference CPT values for the upper-limb nerves of healthy Brazilian subjects using sine-wave electric stimulation. Sixty-five healthy subjects were included, of whom 36 were male and 29 female. The mean age was 45 (±20) years. All subjects had the ulnar, median, and radial nerves evaluated with a sine-wave electric stimulator (NEUROSTIM) to quantify the Current Perception Threshold (CPT) at the frequencies of 1, 250, and 3000 Hz. No statistical differences were found when comparing the two limbs of the same subject or when comparing genders. There was a significant difference between age groups only for the 1 Hz frequency on the ulnar nerve and for 250 Hz on the median nerve (p < 0.05). The NEUROSTIM evaluation protocol seems to be effective, objective, and replicable for sensory loss evaluation. Reference CPT values for Brazilian adults can help clinical professionals better understand, manage, and treat diseases that affect the touch sense, such as leprosy and diabetes.

Keywords
Current perception threshold · Touch sense · Sine-wave stimuli

D. C. e Silva
Department of Medical Clinic, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
D. C. e Silva (✉)
Federal University of Rio de Janeiro, R. Prof. Rodolpho Paulo Rocco, 255 - Cidade Universitária, Rio de Janeiro, Brazil
e-mail: [email protected]
A. P. Fontana
Department of Physiotherapy, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
M. K. Gomes
Department of Family and Community Health, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
C. J. Tierra-Criollo
COPPE Department, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil

1 Introduction

In the 1980s, a procedure was proposed for the psychophysical assessment of skin sensitivity by sine-wave electric stimulation [1]. This new method was based on studies suggesting that sinusoidal stimuli of different frequencies would excite sensory systems related to fibers of different diameters, thus increasing the selectivity of the stimulation [1]. Unlike pulsed electric current, such as that used in electromyography, sinusoidal current is able to detect the electrical threshold of sensory perception at a minimal amount of current [2]. Since that time it has been suggested that the 5 Hz frequency stimulates non-myelinated (C) fibers, 250 Hz fine myelinated fibers (Aδ), and 2000 Hz myelinated fibers of medium caliber (Aβ) [3]. Aβ fibers conduct the senses of touch, pressure, and vibration, while both Aδ and C fibers conduct the senses of pain and temperature; C fibers also serve as post-sympathetic fibers [3].

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_186


More recently it has been shown that the correlation between the 5 Hz frequency and clinical neurologic exams is poor, suggesting that the 1 Hz frequency is more selective for evaluating the non-myelinated (C) fibers and 3000 Hz for myelinated fibers of medium caliber (Aβ) [3]. Thus, it would be natural to use the neuroselective characteristic of sine-wave electric stimulation for the early diagnosis of peripheral neuropathies of progressive evolution, such as leprosy. In this case, it might be expected that the Current Perception Threshold (CPT) for low frequencies (associated with thin-caliber neural fibers) could be altered before the CPT for high frequencies (associated with the coarse fibers). Evaluation of sensory nerve fiber function through the touch sense is important in the diagnosis and follow-up of certain diseases involving the sensory conducting pathways, such as diabetes, leprosy, carpal tunnel syndrome, and others. Unlike other examination tools, such as the Semmes-Weinstein monofilaments (SWM) or sensory nerve conduction velocity, the CPT can evaluate the functioning of the sensory nerve fibers through the touch sense in a quantitative form and achieve differential neuro-excitatory effects on the sensory fibers depending on the stimulus frequency [1, 4]. This study aims to evaluate and describe reference CPT values for the upper-limb nerves of healthy Brazilian subjects using sine-wave electric stimulation.

2 Materials and Methods

2.1 Subjects
Healthy subjects with no history or complaints of sensory dysfunction and a normal physical examination were included. The exclusion criteria were sensibility loss on SWM (>0.05 g), low tolerance/adaptation to electrical current, wounds or ulcers on the hands, a metal prosthesis in the hand to be tested, a cardiac pacemaker, any diagnosis of peripheral neuropathy of another cause, central nervous system disorders, other orthopedic injuries of the arms, and use of any centrally acting drugs. All subjects were submitted to the CPT protocol for the ulnar, median, and radial nerves on both upper limbs at the frequencies of 3000, 250, and 1 Hz. The study was approved by the Local Ethics and Research Committee under number 3001499/2018, and participation was voluntary. All subjects provided informed consent through a written document.

2.2 CPT Protocol
The CPT to sine-wave electric current was measured with NEUROSTIM, a device capable of generating electrical stimuli with controlled current and a programmable waveform, with intensity up to 8 mA. The test was divided into two steps. The first step is a rapid assessment, called the RAMP test, in which the amplitude value is continuously incremented until the volunteer presses the Stimulus Perception Button (SPB). At that moment, the amplitude value is stored as the Ramp Threshold (RT) and the first step of the test ends. The RT is used as a reference to systematically determine the values of the parameters "initial amplitude" (iA) and "initial increment" (INC) used in the CPT evaluation, the second stage of the assessment. From the values of iA and INC, the duration of the stimuli (T_ON), the resting time between stimuli (T_OFF), and the stimulation frequency determined by the user, the CPT determination process is started; the CPT is the lowest stimulus intensity felt by the subject. Two gold electrodes with a diameter of 10 mm, separated by a distance of 2 cm between their centers, were used to stimulate the palmar face of the phalanges of the 5th finger to evaluate the ulnar nerve, the palmar face of the phalanges of the 2nd finger for the median nerve, and the anatomical snuffbox for the radial nerve. Following the assessment protocol of Martins et al. [5, 6], three frequencies were tested for each upper-limb nerve (ulnar, median, and radial): 1 Hz, which would evaluate unmyelinated C fibers; 250 Hz, which would evaluate myelinated Aδ fibers; and 3000 Hz (large myelinated Aβ fibers). The frequencies were tested in random order to avoid adaptation to the stimulus.
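The two-step procedure above can be sketched in code. This is an illustrative reconstruction only: the simulated subject `felt`, the ramp step size, and the use of bisection in the second stage are our assumptions, not the NEUROSTIM firmware's actual iA/INC/T_ON/T_OFF rules.

```python
def ramp_threshold(perceived, step=0.05, max_ma=8.0):
    """Step 1 (RAMP test): increment amplitude until the subject presses
    the SPB; that amplitude is stored as the Ramp Threshold (RT)."""
    amp = 0.0
    while amp < max_ma:
        amp = round(amp + step, 6)
        if perceived(amp):
            return amp
    raise RuntimeError("no perception up to the 8 mA device limit")

def cpt_search(perceived, rt, resolution=0.005):
    """Step 2: home in on the lowest felt intensity below RT.
    Bisection stands in for the device's iA/INC-driven search."""
    lo, hi = 0.0, rt                 # the CPT lies between 0 and RT
    while hi - lo > resolution:
        mid = (lo + hi) / 2.0        # next stimulus amplitude
        if perceived(mid):
            hi = mid                 # felt: threshold is at or below mid
        else:
            lo = mid                 # not felt: threshold is above mid
    return hi

# Simulated subject with a true perception threshold of 0.42 mA.
felt = lambda amp: amp >= 0.42
rt = ramp_threshold(felt)
cpt = cpt_search(felt, rt)
print(rt, cpt)
```

The ramp deliberately overshoots (it stops one coarse step past the threshold), which is why the second, finer stage is needed to resolve the CPT.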

2.3 Data Analysis
Before the statistical treatment of the data, the Shapiro-Wilk test was applied; it indicated non-normality for all variables evaluated. Non-parametric tests, namely the Wilcoxon and Mann-Whitney tests, were therefore used to analyze all variables. The level of significance (α) was 0.05.
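The analysis pipeline (normality check, then non-parametric comparisons) can be sketched with SciPy. The data below are synthetic and purely illustrative, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic CPT values (µA) for the right and left ulnar nerve of the
# same 65 subjects -- illustrative only.
right = rng.lognormal(mean=6.0, sigma=0.3, size=65)
left = right * rng.normal(1.0, 0.05, size=65)

# Normality check (Shapiro-Wilk); the study found non-normality throughout.
_, p_norm = stats.shapiro(right)

# Paired comparison (right vs left side of the same subject): Wilcoxon.
_, p_sides = stats.wilcoxon(right, left)

# Independent-group comparison (e.g., one gender vs the other): Mann-Whitney U.
_, p_groups = stats.mannwhitneyu(right[:36], right[36:])

alpha = 0.05
print(p_norm < alpha, p_sides > alpha, p_groups > alpha)
```

The paired Wilcoxon test matches the right-vs-left comparison within subjects, while Mann-Whitney U fits the between-group comparisons (gender, age ranges), mirroring the choices described above.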

3 Results

Sixty-five healthy subjects were included, of whom 36 were male and 29 female. The mean age was 45 (±20) years.


Table 1 CPT values

Frequency   Nerve     Median (µA)   Min–Max (µA)
3000 Hz     Ulnar         897        468–1012
            Median       1099        423–1321
            Radial       1096        484–1456
250 Hz      Ulnar         288        168–322
            Median        368        144–476
            Radial        366        112–533
1 Hz        Ulnar         442        184–627
            Median        276        168–453
            Radial        227        168–347
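As an illustration of how the Table 1 reference ranges might be consumed programmatically in a clinical tool, a small lookup helper (the dictionary layout and function are our own construction, not part of the study):

```python
# Reference CPT ranges from Table 1 (µA), keyed by (frequency_hz, nerve).
REFERENCE = {
    (3000, "ulnar"): (468, 1012), (3000, "median"): (423, 1321),
    (3000, "radial"): (484, 1456),
    (250, "ulnar"): (168, 322), (250, "median"): (144, 476),
    (250, "radial"): (112, 533),
    (1, "ulnar"): (184, 627), (1, "median"): (168, 453),
    (1, "radial"): (168, 347),
}

def within_reference(freq_hz, nerve, cpt_ua):
    """Return True if a measured CPT lies inside the published min-max range."""
    lo, hi = REFERENCE[(freq_hz, nerve)]
    return lo <= cpt_ua <= hi

print(within_reference(250, "median", 390))   # inside 144-476
print(within_reference(1, "ulnar", 700))      # above 627
```

A value outside the range would only flag a measurement for clinical attention; the study reports reference values, not diagnostic cut-offs.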

The CPT values for each frequency and nerve are shown in Table 1. When comparing the right and left sides of the same subject, no statistical difference was observed for any nerve at any frequency (p > 0.05), nor were there statistically significant differences between genders (p > 0.05). We divided the 65 subjects into 5 age ranges to compare differences between groups: group A (18–25 years) had 12 subjects, group B (25–35 years) 15 subjects, group C (35–45 years) 15 subjects, group D (45–55 years) 12 subjects, and group E (55–65 years) 11 subjects. When comparing the age ranges, most frequencies for each nerve showed no statistical difference between groups. There was a significant difference between groups only for the 1 Hz frequency on the ulnar nerve and for 250 Hz on the median nerve (p < 0.05) (Figs. 1, 2 and 3). Even so, the first three groups (A, B, and C) consistently tend to have lower CPT values and dispersions than the last two groups (D and E). Groups A, B, and C seem more alike to each other, as do groups D and E (Figs. 1, 2 and 3). The only significant differences between groups A, B, C and groups D, E were likewise for the 1 Hz frequency on the ulnar nerve and for 250 Hz on the median nerve (p < 0.05).

Fig. 1 CPT (µA) by age group (18–25, 25–35, 35–45, 45–55, and 55–65 years) for the ulnar nerve at 3000, 250, and 1 Hz

Fig. 2 CPT (µA) by age group (18–25, 25–35, 35–45, 45–55, and 55–65 years) for the median nerve at 3000, 250, and 1 Hz

4 Compliance with Ethical Requirements

4.1 Conflict of Interest
All authors disclose any actual or potential conflicts of interest.

Fig. 3 CPT (µA) by age group (18–25, 25–35, 35–45, 45–55, and 55–65 years) for the radial nerve at 3000, 250, and 1 Hz

4.2 Statement of Human and Animal Rights
The study was approved by the Local Ethics and Research Committee of the Clementino Fraga Filho Hospital and is in accordance with the Helsinki Declaration of 1975, as revised in 2000 and 2008. Participation was voluntary, and all subjects of this study provided informed consent through a written document.

5 Conclusions

The CPT evaluation protocol through NEUROSTIM, used with sine-wave current at the frequencies of 3000, 250, and 1 Hz, seems to be a reliable method that can be replicated and used in clinical protocols for touch-sense evaluation. The CPT values found in the present study in a sample of 65 healthy subjects for the radial nerve at the frequencies of 1, 250, and 3000 Hz tend to be lower than the CPT values previously found by Martins et al. and Galvão et al. [5, 6]. Likewise, the median nerve sensitivity threshold values for the same frequencies also tend to be lower than the values found by Neurotron Inc. [7, 8], in which the frequencies tested were 2000, 250, and 5 Hz. Unlike other studies, this study reports CPT results for the ulnar nerve at the frequencies evaluated. It also reports 1 Hz CPT values for the median nerve, a frequency that is more neuroselective for evaluating C fibers, the first to be affected at the onset of neural damage in many neuropathologies [9]. Having reference CPT values for healthy Brazilian adults can help professionals in the clinic to better understand, manage, and treat diseases that affect the touch sense, such as leprosy and diabetes.

This study also calls attention to the relation between age and touch-sense perception, even in healthy subjects, for specific nerve evaluations. Elderly subjects have a raised threshold for the perception of electrical stimuli compared with younger subjects. One explanation is the progressive loss of cutaneous afferent axons and changes to cutaneous receptors [10]. At a given stimulus strength, fewer sensory axons will be stimulated in elderly subjects than in the young, as the available pool of sensory axons is diminished; a greater stimulus strength is therefore required to recruit a critical number of sensory axons [11].

The CPT evaluation protocol through NEUROSTIM is also fast and easy to run wherever testing is needed. Any trained professional can operate both the software and the hardware with simple commands, unlike the EMG protocol, which requires a neurophysiologist and expensive equipment. The CPT protocol also has the advantage of stimulating thin-caliber axons, the fibers first and most involved in touch-sense loss in different neurologic diseases, whereas the EMG protocol preferentially stimulates the large fibers of a nerve, which are normally the last to be affected.

References

1. Katims JJ, Long DM, Ng LKY (1986) Transcutaneous nerve stimulation: frequency and waveform specificity in humans. Appl Neurophysiol 49:86–91
2. Holzner S (2014) Física para Leigos. Alta Books, São Paulo
3. McGlone F, Reilly D (2010) The cutaneous sensory system. Neurosci Biobehav Rev 34:148–159. https://doi.org/10.1016/j.neubiorev.2009.08.004
4. Langille M et al (2008) Analysis of the selective nature of sensory nerve stimulation using different sinusoidal frequencies. Int J Neurosci 118(8):1131–1144. https://doi.org/10.1080/00207450701769323
5. Martins HR et al (2013) Current perception threshold and reaction time in the assessment of sensory peripheral nerve fibers through sinusoidal electrical stimulation at different frequencies. Rev Bras Eng Biom 29(3):1–8
6. Galvão ML, Manzano GM, Braga NIO et al (2005) Determination of electric current perception threshold in a sample of normal volunteers. Arq Neuropsiquiatr 63(2A):289–293 (PMID 16100976)
7. Neurotron Inc. (2012) Normative neuroselective current perception threshold (CPT) values. Accessed 15 May 2012. http://neurotron.com/Normtive_Current_Perception_Threshold_CPT_Values.html
8. Masson EA (1989) Current perception threshold: a new, quick, and reproducible method for the assessment of peripheral neuropathy in diabetes mellitus. Diabetologia 32(10):724–728
9. Villarroel MF, Orsini MB, Lima RC et al (2007) Comparative study of the cutaneous sensation of leprosy-suspected lesions using Semmes-Weinstein monofilaments and quantitative thermal testing. Lepr Rev 78(2):102–109 (PMID 17824480)
10. Nadler MA, Harrison LM, Stephens JA (2002) Changes in cutaneomuscular reflex responses in relation to normal ageing in man. Exp Brain Res 146:48–53
11. Yin H, Liu M, Zhu Y, Cui L (2018) Reference values and influencing factors analysis for current perception threshold testing based on study of 166 healthy Chinese. Front Neurosci 12:14. https://doi.org/10.3389/fnins.2018.00014

Analysis of the Quality of Sunglasses in the Brazilian Market in Terms of Ultraviolet Protection L. M. Gomes, A. D. Loureiro, M. Masili, and Liliane Ventura

Abstract

The Ophthalmic Instrumentation Laboratory (LIO) of the University of Sao Paulo, Brazil, has been involved in research on sunglasses and their standards, and has already contributed to changing parameters in the previous Brazilian sunglasses standard. According to the standard used in Brazil, lenses are classified into categories depending on their luminous transmittance, and each category has a required UV protection level. In this work, we evaluated the UV protection of a representative sample of the sunglasses available in the Brazilian market. Most of the 231 samples were unbranded sunglasses, and all of the samples were in the same condition as if they were sold to customers in the informal market. In our analysis, all the branded samples passed the UV protection test, whereas by buying lighter color sunglasses (categories 1 and 2) the customer is more likely to encounter unbranded sunglasses that do not have UV protection.

Keywords





Sunglasses · UV protection · ABNT NBR ISO 12312 · Sun exposure · Brazilian market

1 Introduction

The Ophthalmic Instrumentation Laboratory (LIO) of the University of Sao Paulo, Brazil, has been involved in research on sunglasses and their standards and has already contributed to changing parameters in the previous Brazilian sunglasses standard, ABNT NBR 15111:2013 [1]. Other works have argued that sunglasses standards should consider geographic characteristics in the calculation of UV transmittances, as well as the importance of the 400 nm upper limit for UVA [2], and have addressed the important question of the equivalence between solar simulators and real sun exposure [3, 4]. The Brazilian standard for sunglasses for general use (ABNT NBR ISO 12312-1:2018) [5] is a translation of the international one (ISO 12312-1:2013) [6]. According to the standard used in Brazil, lenses are classified into categories depending on their luminous transmittance, and each category has a required UV protection level (Table 1). Sunglasses must comply with sunglasses standards with respect to visible and UV transmittances, among other characteristics, including physical robustness and material quality. In this paper, we present an evaluation and analysis of a large sample of sunglasses (branded and from the informal market) representing the sunglasses that might eventually reach customers. All samples were donated to our laboratory by ABIÓPTICA (Brazilian Association of Optical Industry) and were originally confiscated from illegal trade. We measured the optical characteristics of the samples and followed the current standard to evaluate the percentage of sunglasses that would reach customers without complying with the UV protection criteria (Table 1).

L. M. Gomes · A. D. Loureiro · M. Masili · L. Ventura
Ophthalmic Instrumentation Laboratory—LIO/Sao Carlos, School of Engineering—EESC, University of Sao Paulo—USP, Sao Carlos, SP, Brazil
L. Ventura (✉)
University of Sao Paulo—EESC, Av. Trabalhador Sancarlense, 400, Sao Carlos, SP, Brazil
e-mail: [email protected]

2 Materials and Methods

For our evaluation and analysis, we used a sample group of 231 lenses, of which 9 were branded (official lenses) and the remaining 222 were unbranded (illegal trade market). A total of 158 lenses came in pairs, i.e., we had both the right and left lenses for testing (79 complete pairs), and 73 samples were only the right or left lens of an original pair. Some of the lenses were

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_187


Table 1 Transmittance requirements for sunglass lenses for general use [5]

| Lens category | Luminous transmittance τV, from over (%) | To (%) | Maximum solar UVB transmittance τSUVB (280–315 nm) | Maximum solar UVA transmittance τSUVA (315–380 nm) |
|---|---|---|---|---|
| 0 | 80 | 100 | 0.05 τV | τV |
| 1 | 43 | 80 | 0.05 τV | τV |
| 2 | 18 | 43 | 1.0% absolute or 0.05 τV (the greater) | 0.5 τV |
| 3 | 8 | 18 | 1.0% absolute | 0.5 τV |
| 4 | 3 | 8 | 1.0% absolute | 1.0% absolute or 0.25 τV (the greater) |
destroyed in previous tests, for example in the resistance-to-impact or flammability tests of the same standard [7]. The transmittance spectrum of each sample was measured using a Varian Cary 5000 spectrophotometer. Measurements were taken in the 280–2000 nm range with a 5 nm step (as required by standard ABNT NBR ISO 12312-1:2018). To minimize positioning errors in the transmittance measurements, the geometric center of each lens was determined, as well as that of the holders used in the spectroscopy. Determining the center of the lens is important for gradient-colored lenses, since a slight change in lens positioning may induce systematic errors in the spectroscopy. Visible (τV), UV (τSUV), and IR (τSIR) transmittances were calculated for all samples using software developed by our team; the calculations followed the equations presented in ISO 12311:2013 [6], and the criterion for approving or not the UV protection of each lens was applied as represented in Table 1.
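The weighted-mean calculation behind these transmittance values can be sketched as follows. This is an illustrative sketch only: in a real implementation the weighting values W(λ) come from the tables in ISO 12311/ISO 12312-1 (solar spectrum, D65 illuminant, photopic efficiency function), whereas the placeholder numbers below are ours.

```python
# Sketch of the weighted-mean spectral transmittance calculation used
# for tau_V, tau_SUVA, and tau_SUVB. In a real implementation the
# weighting function W(lambda) is taken from the standard's tables;
# the values used here are placeholders.

def weighted_transmittance(transmittance, weights):
    """Weighted mean of spectral transmittance values (%).

    transmittance: tau(lambda) sampled at the standard's wavelength step
    weights: W(lambda) values at the same wavelengths
    """
    if len(transmittance) != len(weights):
        raise ValueError("spectra must be sampled at the same wavelengths")
    total_weight = sum(weights)
    return sum(t * w for t, w in zip(transmittance, weights)) / total_weight

# Illustrative only: a flat weighting reduces to an arithmetic mean.
tau = [4.0, 5.0, 6.0]
print(weighted_transmittance(tau, [1.0, 1.0, 1.0]))  # 5.0
```

With the standard's actual weighting tables substituted for the placeholders, the same function yields τV, τSUVA, and τSUVB from a measured spectrum.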

3 Results and Discussion

An example of the transmittance spectrum for a sample is presented in Fig. 1. The sample LD (right lens) 04 264 was an unbranded sample and had the following values: τSUV = 5.79%, τSUVA = 7.15%, τSUVB = 3.55%, and τV = 36.41% (category 2). Checking these values against the parameters in Table 1, we verify that this sample failed the UV protection test because it exceeded the τSUVB maximum limit, which is the greater of 5% of τV (1.82%) and 1% absolute. We ran the same test for all samples and counted the samples that failed the UV protection test (because of τSUVB, τSUVA, or both). We measured all samples over the interval between UVB (280 nm) and IR (2000 nm), but the analysis of IR transmittance is outside the scope of this work.
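The pass/fail check for this worked example can be sketched as follows, based on our reading of Table 1 (the helper functions and their names are ours, not part of the standard):

```python
# Sketch of the category classification and the category 2 UVB check
# described in the text: a lens with tau_V between 18% and 43% is
# category 2, and its solar UVB transmittance must not exceed the
# greater of 1.0% absolute and 0.05 * tau_V.

def lens_category(tau_v):
    """Classify a lens by luminous transmittance tau_v (%), per Table 1."""
    bounds = [(80, 100, 0), (43, 80, 1), (18, 43, 2), (8, 18, 3), (3, 8, 4)]
    for low, high, cat in bounds:
        if low < tau_v <= high:
            return cat
    return 99  # darker than category 4: outside the standard's scope

def uvb_limit_cat2(tau_v):
    """Category 2 UVB limit: the greater of 1.0% absolute and 0.05 * tau_v."""
    return max(1.0, 0.05 * tau_v)

# The failed sample from the text: tau_V = 36.41% (category 2),
# tau_SUVB = 3.55% against a limit of 1.82%.
tau_v, tau_suvb = 36.41, 3.55
assert lens_category(tau_v) == 2
limit = uvb_limit_cat2(tau_v)
print(round(limit, 2), tau_suvb > limit)  # 1.82 True
```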

Fig. 1 Example of a measured transmittance spectrum for a disapproved sample

Table 2 presents the number of samples that failed the UV protection test (because of τSUVB, τSUVA, or both). Not all lenses that failed τSUVB also failed τSUVA, but a total of 35 samples (15.15%) failed at least one of them. We also verified that, when analyzing lenses of the same pair, we could sometimes find optical differences. Table 3 shows that, in our sample group, 3 pairs had one lens that failed and one that passed the τSUVA test; the same number of pairs (though not necessarily the same ones) showed this for the τSUVB test; and 2 pairs had lenses of different categories. Table 4 presents the average transmittance values for the groups of lenses of the same category. The standard classifies the lenses into five categories, from 0 to 4. Our samples included no category 0 lenses, but we had some samples darker than the darkest category described in the standard. In our analysis, we call them category 99 samples, i.e., their luminous transmittance is smaller than 3% (Table 1), so those lenses are automatically disapproved in any optical test. We repeated the analysis of Table 2, but grouping the samples by category. Table 5 presents the

Table 2 Disapproval rate of the samples using the ABNT NBR ISO 12312:2018 parameters

| Disapproved | τSUVA | τSUVB | τSUV |
|---|---|---|---|
| Total (samples) | 20 | 30 | 35 |
| Total % | 8.65 | 12.98 | 15.15 |

Table 3 Quantity of sunglasses pairs with some different parameter between the right and left lenses

| Different requirement | τSUVA | τSUVB | Category |
|---|---|---|---|
| Total (pairs) | 3 | 3 | 2 |
| Total % | 3.79 | 3.79 | 2.52 |

Table 4 Average transmittance values of the samples using the ABNT NBR ISO 12312:2018, grouped by lens category (darkening level)

| Category | n samples | τSUV | τSUVA | τSUVB |
|---|---|---|---|---|
| 1 | 7 | 1.7 | 2.27 | 0.69 |
| 2 | 76 | 3.02 | 4.11 | 1.12 |
| 3 | 98 | 0.88 | 1.23 | 0.27 |
| 4 | 44 | 0.22 | 0.33 | 0.021 |
| 99 | 6 | 0 | 0 | 0 |

Table 5 Disapproval rate of the samples using the ABNT NBR ISO 12312:2018, grouped by lens category (darkening level)

| Cat | τSUVA (%) | τSUVB (%) | τSUV (%) |
|---|---|---|---|
| 1 | 0 | 14.28 | 14.28 |
| 2 | 10.52 | 23.68 | 25 |
| 3 | 2.04 | 5.1 | 5.1 |
| 4 | 9.09 | 0 | 9.09 |
| 99 | 100 | 100 | 100 |

disapproval rate for any of the UV protection tests by lens category. None of the samples used in this work had previously been subjected to any of the standard tests, nor exposed to any source of UV radiation that might degrade their optical performance [4, 8, 9]. The values in Table 5 represent, better than those in Table 2, the UV protection characteristics that customers can find when they buy a new pair of sunglasses. The samples represent the Brazilian market, especially the unbranded or illegal portion of the products, because of the large share of this type in the sample (96% of the samples). In our analysis, all the branded samples passed the UV protection test, but this might not represent the real condition of branded sunglasses, because the quantity of this type of sample was small in our analysis. It is important to mention that some differences, even if rare, can occur in the UV protection condition of the lenses of the same pair. Without any instrument or means of measuring the lenses, the customer might never know about this [10]. Table 5 also shows that, by buying lighter color sunglasses (categories 1 and 2), the customer is more likely to find unbranded sunglasses that do not have UV protection, according to the guidelines of the current Brazilian standard (and the international ISO standard) for sunglasses protection. By buying darker sunglasses, the risk of being unprotected is reduced, but the customer might acquire a product that is not suitable for driving, for example, according to the same standard.

4 Conclusions

This work presented an analysis of the UV protection of a representative sample of the sunglasses available in the Brazilian market. Most of the 231 samples were unbranded, and all of the samples were in the same condition as if they were sold to customers in the informal market. Some of the samples were branded, and all of these presented satisfactory UV protection according to the current sunglasses standard adopted in Brazil. For the unbranded group, we verified that customers run a higher risk of buying unprotected lighter-color unbranded sunglasses than darker ones. This work is part of a series of works that will extensively analyze the optical characteristics of sunglasses under different conditions and evaluate whether the sunglasses on the Brazilian market protect the ocular health of customers.

Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001 and also by FAPESP (grant number 2014/16938-0, coordinator Liliane Ventura).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Ventura L et al (2013) ABNT NBR 15111:2013 – Óculos para proteção solar, filtros para proteção solar para uso geral e filtros para observação direta do sol. ABNT, v. 1, pp 1–47, Rio de Janeiro, Brazil
2. Masili M, Schiabel H, Ventura L (2015) Contribution to the radiation protection for sunglasses standards. Radiat Prot Dosimetry 164:435–443. https://doi.org/10.1093/rpd/ncu274
3. Masili M, Ventura L (2016) Equivalence between solar irradiance and solar simulators in aging tests of sunglasses. Biomed Eng Online 15:86–98. https://doi.org/10.1186/s12938-016-0209-7
4. Gomes LM, Masili M, Ventura L (2019) Análise do efeito da exposição solar natural e simulada em lentes de óculos de sol: um estudo sobre a degradação dos materiais. Revista Brasileira de Física Médica 13:3. https://doi.org/10.29384/rbfm.2019.v13.n3.p47-52
5. Associação Brasileira de Normas Técnicas (2018) NBR ISO 12312-1: Proteção dos olhos e do rosto – óculos para proteção solar e óculos relacionados – parte 1: óculos para proteção solar para uso geral. Rio de Janeiro, Brazil
6. International Organization for Standardization (2013) ISO 12311: Personal protective equipment—test methods for sunglasses and related eyewear
7. Magri R, Masili M, Duarte FO, Ventura L (2017) Building a resistance to ignition testing device for sunglasses and analysing data: a continuing study for sunglasses standards. BioMed Eng OnLine 16:114. https://doi.org/10.1186/s12938-017-0404-1
8. Masili M, Duarte FO, White CC, Ventura L (2019) Degradation of sunglasses filters after long-term irradiation within solar simulator. Eng Fail Anal 103:505–516. https://doi.org/10.1016/j.engfailanal.2019.04.038
9. Loureiro A, Loureiro A, Gomes LM, Ventura L (2016) Transmittance variations analysis in sunglasses lenses post sun exposure. J Phys Conf Ser 733:012028. https://doi.org/10.1088/1742-6596/733/1/012028
10. Mello MM, Lincoln VAC, Ventura L (2014) Self-service kiosk for testing sunglasses. Biomed Eng Online 13:45. https://doi.org/10.1186/1475-925X-13-45

Do Sunglasses on Brazilian Market Have Blue-Light Protection? A. D. Loureiro, L. M. Gomes, and Liliane Ventura

Abstract


In recent years there has been growing interest among the public and scientists in blue-light protection. We analyzed the blue-light protection of sunglasses available on the Brazilian market, separating them according to their categories. We measured lens transmittance using a VARIAN Cary 5000 spectrophotometer, and we calculated luminous and blue-light transmittance. A sunglasses lens is considered minimally safe against blue light if its blue-light transmittance is less than 1.2 times its luminous transmittance. Of the 222 unbranded lenses, 4.5% fail our blue-light protection test and 2.7% are excessively dark. This study is part of a series that intends to investigate the optical characteristics of sunglasses on the Brazilian market in order to evaluate whether they provide enough safety and protection against harmful solar radiation to the public. We believe that our analyses may improve knowledge about the quality of sunglasses available on the Brazilian market.

Keywords



Sunglasses · Blue light · Brazilian market · ABNT NBR ISO 12312



A. D. Loureiro · L. M. Gomes · L. Ventura
Ophthalmic Instrumentation Laboratory—LIO/Sao Carlos, School of Engineering—EESC, University of Sao Paulo—USP, Sao Carlos, SP, Brazil
L. Ventura (✉)
University of Sao Paulo—EESC, Av. Trabalhador Sancarlense, 400, Sao Carlos, SP, Brazil
e-mail: [email protected]

1 Introduction

In recent years there has been growing interest among the public and scientists in blue-light protection. Many studies have been published on blue light-induced damage to retinal tissues, with special attention to chronic effects [1–4]. Considering the public concern and the harmfulness reported in the literature, it is of great importance to investigate whether sunglasses available on the Brazilian market have a minimum of blue-light protection. The Brazilian standard for sunglasses for general use is ABNT NBR ISO 12312-1 [5]. Despite defining solar blue-light transmittance, τsb, this standard does not set safety transmittance limits for blue light. In fact, no current standard sets such limits. Therefore, we searched for limits in old standards. An old British Standard for sunglasses, BS2724, required that blue-light transmittance not exceed 1.2 times luminous transmittance [6]. As others have highlighted, it is reasonable to set a limit on blue-light transmittance relative to luminous transmittance [7, 8]. Luminous transmittance, τv, is calculated as a weighted mean: the transmittance at each wavelength from 380 to 780 nm, with a 5 nm step, is weighted by both the spectral distribution of radiation of CIE Standard Illuminant D65 and the spectral luminous efficiency function for photopic vision. Blue-light transmittance as defined in BS2724, τsb(BS), and solar blue-light transmittance as defined in ABNT NBR ISO 12312-1, τsb(ISO), are both calculated within the spectral range from 380 to 500 nm. Whereas τsb(BS) is calculated with a 10 nm step as an arithmetic mean (equal weights for all wavelengths), τsb(ISO) is calculated with a 5 nm step and is weighted by both the solar radiation at sea level for air mass 2 and the blue-light hazard function. Figure 1 shows the spectral weighting functions for τsb(ISO), τsb(BS) and τv [5, 6]. The Brazilian standard grades sunglasses in five categories depending on the τv values of their lenses. Sunglasses of each category are recommended based on the sun glare

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_188


Fig. 1 Spectral weighting functions: for solar blue-light transmittance calculation, as defined in ABNT NBR ISO 12312-1 (Wsb(ISO)); for blue-light transmittance calculation, as defined in BS2724 (Wsb(BS)); and for luminous transmittance calculation, as defined in ABNT NBR ISO 12312-1 (Wv) [5, 6]

The aim of this study is to analyze whether the sunglasses available on the Brazilian market safely protect wearers against blue light. A sunglasses lens is considered minimally safe against blue light if its Qsb value is less than 1.2. Although Qsb(ISO) tests are more appropriate than Qsb(BS) ones, we opted to do both in order to compare their results. This study is part of a series that intends to investigate the optical characteristics of sunglasses on the Brazilian market. We are aware that our blue-light protection analysis relied solely on filter transmittance, as does the traditional analysis approach, ignoring other important factors related to attenuating ocular exposure, namely sunglasses size, shape, and wearing position [12].

intensity: darker lenses are advised for environments with higher solar incidence. Since ABNT NBR ISO 12312-1 is a standard for sunglasses for general use, and lenses with τv < 3% are not suitable for general use because of their exaggerated darkness, these lenses fall outside the scope of the standard. Table 1 details the τv range that specifies each category. Lens transmittance can vary after long exposure to the sun; in other words, after years of wearing the same sunglasses, both luminous transmittance and blue-light transmittance can change [9, 10]. For convenience, we defined a new measurement, the relative visual attenuation quotient for solar blue light (Qsb), as τsb over τv. Thus, the above-mentioned BS2724 criterion can be rewritten as Qsb < 1.2. Furthermore, τsb is a measure of how much blue-light protection sunglasses lenses offer. When blue-light protection is mentioned, it refers to protection against blue light-induced photomaculopathy. By using Wsb(BS), one gives the same importance to wavelengths with less photochemical risk to retinal tissues as to those with higher risk. Therefore, it makes more sense to use Wsb(ISO) than Wsb(BS) in the τsb calculation. It is worth noting that this does not mean that wavelengths with little risk of inducing maculopathy are safe for human eyes; in particular, wavelengths up to 400 nm should be used in UV-hazard measurements [11].
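The Qsb quotient and the BS2724-style arithmetic-mean blue-light transmittance defined above can be sketched as follows (τsb(ISO) would additionally require the standard's weighting tables, so only the unweighted BS-style mean is shown; the function names are ours):

```python
# Sketch of the relative visual attenuation quotient for solar blue
# light, Q_sb = tau_sb / tau_v, and the BS2724-style blue-light
# transmittance (arithmetic mean over 380-500 nm with a 10 nm step).

def tau_sb_bs(spectrum):
    """Arithmetic mean of tau(lambda) for lambda = 380, 390, ..., 500 nm.

    spectrum: dict mapping wavelength (nm) -> transmittance (%).
    """
    wavelengths = range(380, 501, 10)
    return sum(spectrum[wl] for wl in wavelengths) / len(wavelengths)

def q_sb(tau_sb, tau_v):
    """Blue-light transmittance relative to luminous transmittance."""
    return tau_sb / tau_v

def passes_blue_light_test(tau_sb, tau_v, limit=1.2):
    """BS2724 criterion: blue-light transmittance < 1.2 * luminous."""
    return q_sb(tau_sb, tau_v) < limit

# A flat 10% filter: tau_sb(BS) = tau_v = 10, so Q_sb = 1.0 and it passes.
flat = {wl: 10.0 for wl in range(380, 501, 10)}
print(tau_sb_bs(flat), passes_blue_light_test(tau_sb_bs(flat), 10.0))  # 10.0 True
```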

Table 1 Range of luminous transmittance, τv, that defines each category [5]

2 Materials and Methods

First, we randomly selected 231 sunglasses lenses (9 branded and 222 unbranded). Among these lenses, there are 79 pairs (158 lenses) and 73 single lenses (a left or right lens only). Since lens transmittance changes after long solar exposure, we decided to use only non-irradiated lenses in this study. Second, we measured lens transmittance using a VARIAN Cary 5000 spectrophotometer, and we calculated τv, τsb(ISO), τsb(BS), Qsb(ISO) and Qsb(BS). In addition, we tested whether the measured lenses block the minimal amount of blue light to be safe, adopting Qsb < 1.2 as the criterion. Tests were performed for both Qsb(ISO) and Qsb(BS). Furthermore, the results were analyzed by separating the samples into groups according to their category. Even though lenses with τv < 3% fall outside the scope of sunglasses standards because they are extremely dark, they are not so difficult to find on the market. Thus, for convenience, we designate these lenses as belonging to a new category—category 99. We also discuss the difference between using Qsb(ISO) and Qsb(BS) in our tests.

3 Results

The average τsb and τv values (mean ± SD) are shown in Table 2. The average Qsb values of all 231 tested lenses are 0.83 ± 0.26 for Qsb(ISO) and 0.77 ± 0.31 for Qsb(BS).

| Lens category | From over (%) | To (%) |
|---|---|---|
| 0 | 80 | 100 |
| 1 | 43 | 80 |
| 2 | 18 | 43 |
| 3 | 8 | 18 |
| 4 | 3 | 8 |

Table 2 Average τsb and τv values for all lenses and for branded and unbranded lenses

| Group (n) | τsb(ISO) [mean ± SD] | τsb(BS) [mean ± SD] | τv [mean ± SD] |
|---|---|---|---|
| Branded (9) | 18.87 ± 11.76 | 14.09 ± 8.69 | 21.43 ± 14.34 |
| Unbranded (222) | 12.93 ± 8.39 | 12.35 ± 8.48 | 16.49 ± 10.63 |
| All (231) | 13.16 ± 8.62 | 12.42 ± 8.50 | 16.68 ± 10.84 |

Table 3 Average Qsb values per category and number of lenses per category

| Cat. | n samples | % samples | Qsb(ISO) [mean ± SD] | Qsb(BS) [mean ± SD] |
|---|---|---|---|---|
| 0 | 0 | 0.0 | – | – |
| 1 | 7 | 3.0 | 0.68 ± 0.10 | 0.58 ± 0.08 |
| 2 | 76 | 32.9 | 0.77 ± 0.29 | 0.75 ± 0.28 |
| 3 | 98 | 42.4 | 0.84 ± 0.25 | 0.77 ± 0.25 |
| 4 | 44 | 19.1 | 0.94 ± 0.23 | 0.86 ± 0.47 |
| 99 | 6 | 2.6 | 0.88 ± 0.09 | 0.69 ± 0.08 |

We divided the lenses according to their category to analyze the influence of the lens darkening degree on Qsb values, as shown in Table 3. According to our criterion for blue-light protection, 10 lenses (4.3%) failed the Qsb(ISO) test (Fig. 2) and 20 lenses (8.7%) failed the Qsb(BS) test (Fig. 3). Among these lenses, 8 failed both tests, and both lenses that failed only the Qsb(ISO) test belong to category 3. Table 4 shows the number of failed lenses for each category. As examples of lenses that failed only the Qsb(ISO) test or only the Qsb(BS) test, we plotted the spectroscopy of lenses LE 23 22 (Qsb(ISO) = 1.32, Qsb(BS) = 0.90) and LE 09 139 (Qsb(ISO) = 1.09, Qsb(BS) = 1.43) in Fig. 4. All 9 branded lenses passed both Qsb tests (Qsb(ISO) < 1.2, Qsb(BS) < 1.2).

4 Discussion

Although both τsb(ISO) and τsb(BS) take into account the sample's transmittance within the same spectral range (380 to 500 nm), their values are often disparate for the same sample because of their different weighting functions (Fig. 1). Despite the fact that, for the measured samples, the average Qsb(ISO) is higher than the average Qsb(BS), the Qsb(BS) test rejected twice as many samples as the Qsb(ISO) test. The two lenses that failed only the Qsb(ISO) test have a peak near 450 nm, like LE 23 22 in Fig. 4. The 12 lenses that failed only the Qsb(BS) test are lighter (categories 2 and 3) than the ones that failed the Qsb(ISO) test. All 20 lenses that failed the Qsb(BS) test (Fig. 3) have high transmittance at 380 nm (τF(380 nm) > 10%). Presumably,

Fig. 2 Spectroscopy of all 10 lenses that failed the Qsb(ISO) test

Fig. 3 Spectroscopy of all 20 lenses that failed the Qsb(BS) test

Fig. 4 Spectroscopy of two lenses: LE 23 22 failed only the Qsb(ISO) test and LE 09 139 failed only the Qsb(BS) test

Table 4 Lenses that failed our Qsb criteria per category

| Cat. | Qsb(ISO) < 1.2, n of fails | % of fails | Qsb(BS) < 1.2, n of fails | % of fails |
|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 0 | 0 |
| 2 | 1 | 10 | 7 | 35 |
| 3 | 4 | 40 | 8 | 40 |
| 4 | 5 | 50 | 5 | 25 |
| 99 | 0 | 0 | 0 | 0 |

they also failed the UV transmittance test. On the one hand, their high transmittance around 390 nm and around 490 nm contributes significantly to their failure in the Qsb(BS) test. On the other hand, these wavelengths are not related to the blue-light hazard. Indeed, the Qsb(BS) test is effective neither in rejecting lenses without blue-light protection nor in approving ones with it. All tested branded sunglasses (9 lenses) safely block blue light (Qsb < 1.2) and none of them are excessively dark (τv < 3%). Since the branded sample size is small, this result is not statistically very significant. Of the 222 unbranded lenses, 4.5% (10 lenses) do not safely block blue light and 2.7% (6 lenses) are excessively dark for general use. Although all tested excessively dark lenses have blue-light protection (Qsb < 1.2), they are not recommended for use because of their exaggerated darkness. The public normally has no means of testing the protection of their own sunglasses and, as a consequence, they wear sunglasses without knowing whether they are protected [13].

5 Conclusions

In this paper we analyzed the blue-light protection of sunglasses available on the Brazilian market, separating them according to their categories. Whereas the sample size of unbranded lenses is significant (222 lenses), the sample size of branded lenses is not (9 lenses). Consequently, even though all branded lenses pass our blue-light protection test and none of them are excessively dark, larger sampling is required for more substantial conclusions about branded lenses on the Brazilian market. Conversely, 4.5% of the unbranded lenses fail our blue-light protection test and 2.7% of them are excessively dark. Although half of the lenses that fail our blue-light test are very dark (category 4), all exaggeratedly dark (category 99) lenses pass it. Our research group is carefully studying the optical characteristics of sunglasses available on the Brazilian market in order to evaluate whether they provide enough safety and protection against harmful solar radiation to the public. We believe that our analyses may improve knowledge about the quality of sunglasses available on the Brazilian market.

Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001 and also by FAPESP (grant number 2014/16938-0, coordinator Liliane Ventura).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Tomany SC, Cruickshanks KJ, Ronald K, Klein BEK, Knudtson MD (2004) Sunlight and the 10-year incidence of age-related maculopathy: the Beaver Dam eye study. Arch Ophthalmol 122:750–757
2. Shang Y-M, Wang G-S, Sliney D, Yang C-H, Lee L-L (2014) White light-emitting diodes (LEDs) at domestic lighting levels and retinal injury in a rat model. Environ Health Perspect 122:269–276
3. Liu X, Zhou Q, Lin H, Wu J, Wu Z, Qu S, Bi Y (2019) The protective effects of blue light-blocking films with different shielding rates: a rat model study. Transl Vis Sci Technol 8:19
4. Vicente-Tejedor J, Marchena M, Ramírez L, García-Ayuso D, Gómez-Vicente V, Sánchez-Ramos C, de la Villa P, Germain F (2018) Removal of the blue component of light significantly decreases retinal damage after high intensity exposure. PLOS ONE 13:1–18
5. Associação Brasileira de Normas Técnicas (2015) ABNT NBR ISO 12312-1:2015 – Eye and face protection – Sunglasses and related eyewear – Part 1: Sunglasses for general use
6. BSI Group (1987) BS2724:1987—Specification for sun glare eye protectors for general use
7. Dain SJ (2003) Sunglasses and sunglass standards. Clin Exp Optom 86:77–90
8. Barker FM (1990) Does the ANSI Z80.3 nonprescription sunglass and fashion eyewear standard go far enough? Optom Vis Sci 67:431–434
9. Masili M, Duarte FO, White CC, Ventura L (2019) Degradation of sunglasses filters after long-term irradiation within solar simulator. Eng Fail Anal 103:505–516
10. Loureiro AD, Gomes LM, Ventura L (2016) Transmittance variations analysis in sunglasses lenses post sun exposure. J Phys Conf Ser 733:012028
11. Masili M, Schiabel H, Ventura L (2015) Contribution to the radiation protection for sunglasses standards. Radiat Prot Dosimetry 164:435–443
12. Rosenthal FS, Bakalian AE, Lou CQ, Taylor HR (1988) The effect of sunglasses on ocular exposure to ultraviolet radiation. Am J Public Health 78:72–74
13. Loureiro AD, Ventura L (2020) Prototype for blue light blocking tests in sunglasses. Ophthalmic Technol 11218:83–85 (XXX, SPIE)

Thermography and Semmes-Weinstein Monofilaments in the Sensitivity Evaluation of Diabetes Mellitus Type 2 Patients G. C. Mendes, F. S. Barros, and P. Nohama

Abstract


Diabetic foot is one of the most common complications of Diabetes Mellitus (DM), arising over the course of the disease. Diabetic neuropathy tends to be a progressive pathology, affecting patients with poor glycemic control more aggressively. Generally, the clinical examination includes a detailed evaluation of the feet to determine areas of diminished sensitivity, using Semmes-Weinstein monofilaments. Infrared thermography is a modern high-resolution technique that can measure thermal alterations related to vasomotor changes, which may indicate neuropathies. The aim of this research was therefore to correlate thermal changes in the feet with tactile sensitivity alterations in type 2 diabetic patients at risk for diabetic peripheral neuropathy. It is a cross-sectional, quali-quantitative study conducted with 10 participants. The two assessment techniques used favor early diagnosis for people with type 2 DM and a better clinical course of the disease, reducing the high costs for public health and improving the quality of life of people with DM.

Keywords



Semmes-Weinstein monofilaments · Diabetes mellitus · Diabetic peripheral neuropathy · Thermography

G. C. Mendes · F. S. Barros (✉) · P. Nohama
UTFPR-Curitiba/PPGEB, Av. Sete de Setembro, 3165, Curitiba, Paraná, Brazil
e-mail: [email protected]



1 Introduction

Diabetes Mellitus (DM) is a chronic disease, considered a public health problem, caused by the lack of insulin and/or its inability to properly perform its functions. The diabetic foot (DF) is one of the most common complications of DM arising over the course of the disease; it is characterized by progressive degeneration of the axons of nerve fibers and by a decrease in the arterial blood supply to the periphery, which can cause necrosis of its anatomical structures. There is a decrease in the magnitude of the sensory and motor responses of the peripheral nerves and a loss of control of blood vessel diameter, which results in skin that is less vascularized and colder. The world population with DM is estimated at approximately 425 million people and is expected to reach 629 million in 2045, which can be considered an epidemic [1, 2]. Brazil occupies the fourth position in the world ranking of countries with the largest number of individuals with diabetes, with approximately 14.3 million diabetics and projections of 23.3 million by 2040. Half of the diagnosed patients are unaware of their clinical condition and have a late diagnosis, hindering prevention, treatment and prognosis [3]. According to the International Diabetes Federation (IDF), 40% of people with diabetes do not know they have the disease, and in 2017 there were 4 million deaths from the disease worldwide, half of which in Brazil. Therefore, conducting a detailed clinical examination of the feet, with assessment of light-touch, vibration, and pain sensitivity, is essential. For this, the method commonly used in the evaluation of diabetic neuropathy is the Semmes-Weinstein Monofilament (SWM) [4]. This test has good specificity, low cost, and high sensitivity, and can detect changes in touch sensation and proprioception. It is used to determine an increased risk of ulceration. When there is an inability to feel the pressure of the 10 g monofilament

P. Nohama PUC-PR/PPGTS, Curitiba, Paraná, Brazil © Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_189

1259

1260

G. C. Mendes et al.

in specific points of the foot, the result is compatible with sensory neuropathy [5, 6]. Infrared Thermography (IT) is a technique that measures the infrared energy emitted by the body, when thermal changes occur during inflammatory processes. A thermal difference of at least 0.3 °C between the region of interest and its correspondent symmetrical region is considered thermal asymmetry [7]. Despite it is not a common exam, due to its sensitivity and resolution, thermography has been gaining interest as an auxiliary diagnosis technique [8]. IT may detect physiological abnormalities. Therefore, it can be useful in the treatment of diabetic foot, since an increase in the temperature of the sole of the foot indicates inflammation [9]. Diabetic peripheral neuropathy (DPN) decreases or causes total loss of protective sensitivity of the feet, being an important risk factor for trauma, which, as a consequence, advances to ulcerations and to partial or total amputations of the lower limbs. Patients with neuropathy in lower limbs do not recognize that their feet are injured, until a wound arises. As these injuries are preceded by inflammation, precocious identifying pre-ulcerative inflammation can prevent foot ulcers [10]. In this research, we seek to correlate thermal changes and tactile sensitivity in type 2 diabetic patients with risks for diabetic peripheral neuropathy.

2

Materials and Methods

This study is a descriptive, experimental study with a qualitative and quantitative approach. It was carried out into three stages: the first one consisted of an evaluation protocol, which includes the identification data of the volunteer, evaluation of vital signs and anamnesis. Still at this stage, the survey volunteer’s feet were inspected. During this stage, he had his lower and upper limbs uncovered in order to acclimate the human body to the environment. In the second stage, thermographic images of the limbs were taken. And in the third, the sensitivity was assessed with the Semmes-Weinstein monofilament. The research essay was approved by the Research Ethics Committees of UTFPR (CAAE: 79583317.9.0000.5547) and of Hospital de Clínicas/Federal University of Paraná (CAAE: 79583317.9.3001.0096). All participants signed the Free and Informed Consent Form at the time before filling out the questionnaire, collecting the thermal images, and measuring the sensitivity. The sample selection was performed at the diabetic foot clinic at the Hospital de Clínicas, located in Curitiba, Brazil. The population consisted of 10 individuals with Diabetes Mellitus. As inclusion criteria, individuals with type 2 DM diagnosed in the laboratory and undergoing regular medical treatment, who did not have necrotic and/or infectious

lesions in the lower limbs, of both genders, aged over 30, able to understand the simple verbal command were selected. The exclusion criteria included individuals with neurological disorders, low cognitive level, type 1 DM and the manifestation of interrupting the research for some reason or the lack of participation in any of the predicted evaluations.

2.1 Evaluation Protocol The volunteers were submitted to an evaluation protocol in which personal identification data were collected, anamnesis, clinical examination and interrogation were carried out to raise complaints about the symptoms of the disease. On palpation of the feet, regions that could be suggestive of infection or ischemia were observed. The feet were analyzed by anatomical division into four parts: forefoot, midfoot, hind foot and forefoot.

2.2 Thermographic Images To acquire thermographic images, the protocol defined by the American Academy of Thermology (AAT) was applied. The volunteers had stayed for 20 min for thermal stabilization in a room without air-conditioned (18.5–25 °C) and with a relative humidity below 60%. This hospital room was not exposed to sunlight and, to avoid thermal losses caused by drafts, the windows and door were kept closed. Infrared images of the sole and dorsum of the feet, as well as the lower leg, were acquired using a camera produced by FLIR Inc.®, model A-325. For the acquisition, storage, processing and analysis of infrared images, the program ThermaCamTM Researcher Pro 2.9 was used. During the thermographic images acquisition, all participants remained dressed, with only the clothes of the lower limb rose to the level of the knees for assessment of the feet and legs, sitting at rest in an upholstered chair with the feet raised at the height of the chair seat, for temperature stabilization. The room temperature and humidity measurements were done with a portable digital hygrometer, Incoterm® and body temperature (in the central part of the patient’s forehead) with the digital forehead thermometer G TECH, model FR1DZ1. Using FLIR Tools/Tools+ software, an analysis of the temperature distribution was made, selecting regions of interest. In this case, we had used anatomical points followed by a statistical measurement of thermal data, as shown in Fig. 1. The even points identified the left foot and the odd points the right foot, with the dorsal region identified through D letter. The regions of interest (feet and lower leg) for temperature analysis were demarcated on the skin surface. For each

Thermography and Semmes-Weinstein Monofilaments …

LEFT FOOT

1261

RIGHT FOOT

Fig. 1 Regions of interest indication using anatomical points

image of the feet (left and right—sole and back, left and right leg), the minimum, mean and maximum estimated temperature were obtained.

The volunteer had remained with his eyes closed and was instructed to answer “yes” when feeling the filament, referring to the location of the stimulus. After the first intervention, the monofilament was applied to the sole and back of each foot with manual pressure until they had bent for about 2 s over the specified points as shown in Fig. 1. The tested points in the plantar region of the feet were: the first, third and fifth digits to plant; first, third and fifth heads of plantar metatarsals; left and right lateral of the plantar medium; calcaneus and one in the dorsal region (between the first and second toes), with seven points of innervation of the tibiae nerve, one of the sural nerve, one of the saphenous nerve and one of the deep fibular nerve (dorsum of the foot), respectively. In response, the participant should feel the monofilament when it bends. The inability to feel this effect refers to a positive response to the lack of sensitivity.

2.3 Sensitivity Test

3 The sensitivity test was carried out using the SemmesWeinstein (SW) Kit made by Sorri Bauru, with a 10 g monofilament, indicated for the evaluation of individuals with DM. Initially, it was explained the test and a demonstration was made on the forearm so that they could know what sensation would occur with the filament contact on the skin.

Table 1 Dermatological analysis

Part.

Ulcer

1

Previous part of leg E

2

2 foot pododactyles E 5 foot pododactyles D

Crack

Results

Ten volunteers were evaluated in the period from April to May 2018. Of these, 7 are male with an mean of 66 ± 7.85 years, and mean time since the diagnosis of DM 2 of 32.42 ± 6.85 years; and 3 females with a mean of 65.33 ± 4.16 years, an mean time since the diagnosis of

Callosity

Dry skin

2 foot pododactyles D

5

Below hallux E and sole on D

6

Scaly

Predominant E

Ankles

Predominant E

E

Heels Hallux E

Both the members

8

10

Pain MMII

D/E

4

9

Edema

Calcaneus

3

7

MMII brownish

Predominant E 1e5 pododactyles both feet

Sole and calcaneus Scaly

Foot

1262

G. C. Mendes et al.

DM 2 of 19.33 ± 7.50 years. Table 1 shows the dermatological analysis of the feet. Among the 10 volunteers, none presented lameness and Tingling Cushioning lower limbs. Only participant 2 presented amputation in two toes of the right foot. As an example, Fig. 2 shows the sensitivity measurement points using the Semmes-Weinstein Monofilament in the Patient 7. The filled dots represent no sensitivity. Thermography is very useful for identifying areas of inflammation related to the diabetic foot before the

Fig. 2 Application points of the Semmes-Weinstein mono filament on Patient’s foot 7. Points D2, 2, 4, 6, 8, 10, 12, 14, 16 and 18 without sensitivity on the left foot. Points D1, 1, 5, 7, 9, 11, 13, 15 and 17 without sensitivity on the right foot

Table 2 Identification of patients in relation to feet sensitivity

Fig. 3 Thermographic images of the sole and dorsum of the feet and leg—lower, left and right portion of a female participant (more compromised)

appearance of visual signs. Therefore, it was important to obtain results on which regions of interest existing in the plantar view of the foot are necessary for the detection of hotspots, as well as the identification of areas with no heat. So, thermographic images of the sole and instep of the feet of the participants were taken. Establishing a comparison with the participants without loss of sensitivity, as shown in Table 2, it is possible to state that after 17 years as diabetics, sensitivity problems started to appear. Figure 3 shows the thermographic images of female participant 9 (most affected). In the 10 participants analyzed, all of them presented some compromise in the temperature of the feet. In turn, the fingertip region, both on the soles and on the backs of patients 1, 6 and 9, proved to be quite involved. When analyzing images of male participants, as in Fig. 4 (more committed), there were no major differences when compared to the female group. The participants who had presented temperature changes have the region of the fingers and toes greatly affected. This was seen in participants 4, 5 and 7. The only difference found is that in the male group there were patients with temperatures close to or above 32 °

Patient

Genre

Age (years)

Diabetes time (years)

Sensitivity

1

Feminine

70

15

Preserved sensitivity

2

Male

65

36

Loss of sensitivity in one of the feet

3

Male

60

41

Lack of sensitivity in both feet

4

Male

64

41

Sensitivity harmed in the sole of the feet

5

Male

83

25

Lack of sensitivity in both feet

6

Feminine

62

15

Preserved sensitivity

7

Male

60

26

Lack of sensitivity in both feet

8

Male

66

30

Sensitivity harmed in the sole of the feet

9

Feminine

64

28

Sensitivity harmed in the sole of the feet

10

Male

74

17

Sensitivity harmed in the sole of the feet

Thermography and Semmes-Weinstein Monofilaments …

1263

Fig. 4 Thermographic images of the sole and dorsum of the feet and leg—lower, left and right portions of male participants (more committed)

C, the back of participant 2 and the back and sole of the participant 10. Table 3 shows the relationship between mean temperature and sensitivity at the points on the soles of the feet of participants 1 to 10. Aiming to segment the results and give greater visibility to the specific aspects of the thermographic evaluation, the measurement results of the mean temperature and sensitivity of the points on the soles of the feet of the 10 participants were obtained, to verify if there is a relationship between the two types of evaluation. The characters “0” and “1” indicate whether there is or no relationship between the two types of assessment.

4

Discussion

Soares et al. stated in his studies that physical examination of the feet is essential to identify risk factors, reducing the risk of ulcerations and amputations [11]. In a literature review developed by Araújo [12], it was found that the main risk factors for the development of ulcers resulting from diabetic feet are: previous history of ulcer, history of previous amputation, duration of DM (older than 10 years), inadequate glycemic control quantifying glycated hemoglobin (HbA1c> 7%), decreased visual acuity, diabetic polyneuropathy,

1264

G. C. Mendes et al.

Table 3 Average temperature and sensitivity at the sole points Average temperature and sensitivity sole points P1

P2

P3

P4

P5

P6

P7

P8

P9

P10

Tm

S

Tm

S

Tm

S

Tm

S

Tm

S

Tm

S

Tm

S

Tm

S

Tm

S

Tm

S

E1D

27.5

1

29.2

1

28.4

1

27.2

1

28.8

0

26.2

1

31.5

0

27.6

0

26.9

1

31.3

1

E2E

27.2

1

30.4

1

27.2

1

27.6

1

30.4

0

27.0

1

26.4

0

27.1

1

26.9

1

33.5

1

E3

27.7

1

29.5

1

26.7

1

27.0

1

28.5

0

26.9

1

29.7

1

27.8

0

26.8

0

29.3

0

E4

28.1

1

31.3

1

27.0

1

26.2

1

29.3

1

27.3

1

29.5

0

26.6

0

27.6

0

30.3

0

E5

29.9

1

29.8

1

27.6

1

29.5

1

28.2

1

29.3

1

30.2

1

29.9

0

28.3

0

30.9

0

E6

29.6

1

30.4

1

30.4

1

29.4

1

30.2

0

29.2

1

30.5

0

29.8

0

29.4

0

32.3

0

E7

26.9

1

29.2

1

26.6

1

25.5

0

27.1

0

28.4

1

27.7

0

28.7

0

28.6

0

33.4

0

E8

27.0

1

30.9

0

27.2

1

28.1

1

28.0

0

26.6

1

26.9

0

29.0

0

29.5

0

32.1

0

E9

26.3

1

29.3

1

26.8

1

25.8

1

27.6

0

27.4

1

27.9

0

29.8

0

27.8

0

32.2

1

E10

26.3

1

31.7

1

27.1

1

26.7

1

28.7

0

26.4

1

26.3

0

29.7

0

27.3

0

31.5

1

E11

26.4

1

29.3

1

26.2

1

25.7

1

27.2

0

26.3

1

28.4

0

29.3

0

26.5

0

30.5

0

E12

26.6

1

32.9

1

26.5

1

26.5

1

28.5

0

25.8

1

27.2

0

29.0

0

26.2

0

30.6

0

E13

25.6

1

27.6

1

25.4

1

23.7

1

25.0

0

28.6

1

25.2

0

29.0

0

26.5

0

33.8

0

E14

25.6

1

28.2

0

26.0

1

24.9

0

26.1

0

25.5

1

25.7

0

29.7

0

25.8

0

32.5

0

E15

25.1

1

26.7

1

25.5

1

23.3

0

25.7

0

25.9

1

25.6

0

29.8

1

26.1

0

31.6

0

E16

25.0

1

31.4

1

25.5

1

24.0

0

26.9

0

24.7

1

24.2

0

30.0

1

24.8

0

31.2

0

E17

25.3

1

27.1

1

24.1

1

24.8

1

26.4

0

25.1

1

26.3

0

28.6

0

25.1

0

29.8

0

E18

24.5

1

27.9

1

26.0

1

24.9

1

27.5

0

24.5

1

25.4

0

28.9

0

24.5

0

29.0

0

peripheral joint disease and poor guidance and/or education about DM and foot care. This study also carried out a detailed physical examination of the feet, which brought us relevant information for analysis. DM time was quantified according to the participants’ reports; we had not obtained glycated hemoglobin values because we had not obtained access to medical records for analysis. In the dermatological evaluation carried out, several alterations were found, such as ulcer, cracks, calluses, dry skin and edema. A result similar to that found by Soares et al. who observed the presence of interdigital mycoses, onychomycosis and onychocriptosis, cracks, dryness, ulcers and calluses [11]. Dry skin was found in 22 patients. Dutra et al. [13] and Schreiber et al. [14] had found that dry skin is an important sign in the clinical inspection of the feet, not only for individuals who had neuropathy, but also for those who had suffered neuropathic pain. Anhydrases and dry skin are related to sensitive neuropathy, associated with impairment of the neurovegetative nervous system. If they are not prevented or treated, they can leave the skin scaly and cracked, which favors ulceration and the entry of microorganisms, in addition to subsequent infection. The presence of ulceration was verified in 5 patients. Silva et al. [15] had identified a high prevalence of risk of foot ulceration among participants (43.7%). The assessment of sensation loss is possible in certain aspects of skin sensitivity, namely touch (pressure and

vibration), temperature and pain. Still on the importance of sensitivity analysis, Maia had found that when these diagnostic methods are used, the number of patients with diabetic neuropathy increases significantly, being a method that can anticipate the diagnosis [16]. From the comparative analysis of studies on Semmes-Weinstein Monofilaments, it was verified in Table 2 that 8 participants had altered sensitivity, with 3 having impairment of both feet. In the present study, 1 participant had tender points on one of the feet, with the left foot being the most injured. According to Lavery et al. [17], clinical studies show that frequent temperature assessment, in cases of temperature difference greater than 2.2 °C between a region of the foot and the same region in the contralateral foot can prevent ulcers in the diabetic foot. On the other hand, the decrease in foot temperature may indicate vascular insufficiency in the foot, according to Netten et al. state that patients with local complications, such as uninfected and non-ischemic foot ulcers or abundant callus, had locally increased temperatures of more than 2 °C compared to the contralateral foot and the average ipsilateral foot temperature [18]. Patients with diffuse complications, such as foot ulcers with osteomyelitis or Charcot’s foot, had shown increase greater than >3 °C when compared to the contralateral foot, whereas patients with critical ischemia had a colder foot if compared to the contralateral foot.

Thermography and Semmes-Weinstein Monofilaments …

Although thermography is a complementary method of assessment and diagnosis in these patients, according to Bandeira et al., the evaluation is carried out and the diagnosis is made by means of image analysis comparatively to the contralateral limb or with standard thermal images obtained from control groups of healthy individuals [19]. Table 3 indicates that in this group of patients there was a strong relationship between the two types of assessment, varying in participants 2, 4, 5, 7, 8 and 10. Participants 2 and 4 presented only two and four unrelated points respectively. However, patients 5, 7, 8, 9 and 10 had several points with no relationship between temperature and sensitivity.

5

Conclusions

Complications in the lower limbs due to DM have been significantly increasing the medical and hospital costs of the public and private health system. Early screening and identification of loss of protective sensitivity in the feet can prevent consequences such as wounds, ulcers and amputations. The two assessment techniques studied in this research use favor early diagnosis for people with type 2 DM and a better clinical course of the disease, reducing the high costs for public health and improving the quality of life of people with DM. Acknowledgements This work was carried out with the support of the Coordination for the Improvement of Higher Education Personnel— Brazil (CAPES)—Financing Code 001.

Conflict of Interest The authors declare that they have no conflict of interest.

References 1. Oliveira JEP, Montenegro Junior RM, Vencio S (2017) Diretrizes da Sociedade Brasileira de Diabetes 2017-2018. Editora Clannad, São Paulo 2. IDF—International Diabetes Federation (2019) Atlas IDF 2017. Diabetes no Brasil. Disponível em: . Acesso em 23.06.2019 3. Tambascaia M (2014) E-book 2.0—Diabetes na Prática Clínica. Publicado no site da Sociedade Brasileira de Diabetes (SBD). Disponível em: https://www.diabetes.org.br/ebook/mainpage#modulo1. Acesso em 09 de agosto de 2018 4. Brasil (2005) Associação Médica Brasileira e Conselho Federal de Medicina e Sociedade Brasileira de Endocrinologia e Metabologia. Diabetes Mellitus: Neuropatia Projeto Diretrizes. Fev. 2005

1265 5. Duarte N, Gonçalves A (2011) Pé diabético. Angiologia e Cirurgia vascular, v 7, n 2, june, 2011 6. Caiafa JS, Castro AA, Fidelis C, Santos, VP, Silva E, Sitrangulo CJ (2011) Atenção integral ao portador de pé diabético. Jornal Vascular Brasileiro, 10(4, Suppl 2):1–32. https://dx.doi.org/10. 1590/S1677-54492011000600001 7. Brioschi ML, Teixeira ML, Silva MT, Colman FM (2010) Medical thermography textbook: principles and applications (ed Andreoli). São Paulo, Brazil, pp 1–280. ISBN: 978-85-60416-15-8 8. Cerdeira F, Vasquez ME, Collazo J, Granada H (2011) Applicability of infrared thermography to the study of the behavior of stone panels as building envelopes. Energy and Buildings, Oxford, vol 43, pp 1845–1851. https://doi.org/10.1016/j.enbuild.2011.03.029 9. Bharara M, Schoess J, Armstrong DG (2012) Coming events cast their shadows before: detecting inflammation in the acute diabetic foot and the foot in remission. Diabetes Metab Res Rev 28(Suppl 1):15–20. https://doi.org/10.1002/dmrr.2231 10. Armstrong DG, Abu-Rumman PL, Nixon BP, Boulton AJ (2001) Continuous activity monitoring in persons at high risk for diabetes-related lower-extremity amputation. J Am Podiatr Med Assoc 91(9):451–455. https://doi.org/10.7547/87507315-91-9-451 11. Soares RL, Ribeiro SMO, Fachin LB, Lima ACTS, Ramos LO, Ferreira LV (2018) Avaliação de rotina do pé diabético em pacientes internados – prevalência de neuropatia e vasculopatia. HU Revista 43(3):205–210 12. Araujo LM (2018) Neuropatia diabética periférica em pacientes atendidos pelo sistema único de saúde no município de Aracaju. Monografia apresentada à Universidade Federal de Sergipe como requisito à conclusão do curso de Medicina do Centro de Ciências Biológicas e da Saúde. Aracaju 13. Dutra LMA, Novais MRCG, Melo MC, Veloso DLC, Faustino DL (2018). Sousa, LMS. Avaliação do risco de ulceração em diabéticos. Revista Brasileira de Enfermagem 71(Supl 2):733– 739. https://doi.org/10.1590/0034-7167-2017-0337 14. 
Schreiber AK, Nones CF, Reis RC, Chichorro JG, Cunha JM (2015) Diabetic neuropathic pain: physiopathology and treatment. World J Diabetes 6(3):432–444. https://doi.org/10.4239/wjd.v6.i3.432 15. Silva CAM, Pereira DS, Almeida DSC, Venâncio MIL (2014) Pé diabético e avaliação do risco de ulceração. Revista de Enfermagem Referência, serIV(1):153–161. https://dx.doi.org/10. 12707/RIII12166 16. Maia EAR (2018) Protótipo para a avaliação sensorial da neuropatia diabética periférica através de testes térmicos. Trabalho de conclusão de curso (graduação) – Universidade Federal de Santa Catarina. Centro Tecnológico. Graduação em engenharia Elétrica. Florianópolis 17. Lavery LA, Higgins KR, Lanctot DR et al (2007) Preventing diabetic foot ulcer recurrence in high-risk patients: use of temperature monitoring as a self-assessment tool. Diabetes Care 30(1):14–20. https://doi.org/10.2337/dc06-1600 18. Van Netten JJ, van Baal JG, Liu C, van der Heijden F, Bus SA (2013) Infrared thermal imaging for automated detection of diabetic foot complications. J Diabetes Sci Technol 7(5):1122–1129. Published 2013 Sep 1. https://doi.org/10.1177/193229681300700504 19. Bandeira F, Moura MAM, Souza MA, Nohama P, Neves EB (2012) Pode a termografia auxiliar no diagnóstico de lesões musculares em atletas de futebol?. Revista Brasileira de Medicina do Esporte 18 (4):246–251. https://doi.org/10.1590/S1517-86922012000400006

Biomedical Robotics, Assistive Technologies and Health Informatics

Design and Performance Evaluation of a Custom 3D Printed Thumb Orthosis to Reduce Occupational Risk in an Automotive Assembly Line H.Toso, D. P. Campos, H. V. P. Martins, R. Wenke, M. Salatiel, J. A. P. Setti and G. Balbinotti

Abstract

The main goal of this research was to develop a custom made 3D printed thumb orthosis model prototype, able to reduce the necessary effort during part assembly in an automotive industry. Several materials and models were designed, 3D printed and evaluated using surface electromiography. Results show a significant reduction of the evaluated sEMG parameters when using the proposed orthosis that can lead to a decrease in occupational risk. Keywords

3D printing • Ergonomics • Musculoskeletal disorders (MSD) • Orthosis • Surface electromyography (sEMG)

1

Introduction

MSDs or Musculoskeletal Disorders are a reality from our modern industrial production systems. In Brazil, where this research was conducted, MSDs represent more than 380 thousand cases—26.4% of the total permanent retirements due to disability. If the 423 thousand people who are temporarily retired from work due to MSDs are considered, the costs add to more than 180 million Euros annually to the government [1]. It is clear that the enhancement of work conditions would benefit not only the financial health of both company and government, but also greatly improve the quality of life of the people involved, which is this work’s primary motivation. H. Toso (B) · D. P. Campos · H. V. P. Martins · R. Wenke · J. A. P. Setti · G. Balbinotti Graduate Program in Biomedical Engineering (UTFPR–PPGEB), Curitiba, Brazil e-mail: [email protected] M. Salatiel Graduate Program in Biomechanics (UFPR), Curitiba, Brazil

During assembly of the body shell (at the conventional assembly line of automotive vehicles), the car is supported by metal cradles that holds the body in place through holes in the vehicle’s frame. However, as the assembly process reaches a certain stage, cradling is no longer necessary, and these production leftover (i.e. holes) are sealed through the assembly of rubber and plastic plugs to fill the frame’s space and prevent water infiltration in the vehicle. Considering that this process is done manually, due to the high complexity of the operational station, and the fact that cycle time in assembly line at an automotive industry is very fast and nonstop, we have a scenario that exposes operators to the risks of MSDs due to the effort (associated with repetition) required to complete these cycles, which is where wearing an orthosis might prove to be useful. Exoskeletons are assistive technologies (AT) that can be defined as “a wearable device supporting the human to generate the physical power required for manual tasks” [2]. They are usually divided between active and passive. The first possesses actuators with an external power source that enhance the human body, and the latter consists in materials, springs or mechanisms, that store energy to be released when needed [3]. Although the model proposed in this study fits this description, it will be addressed as an orthosis, since it does not possess any movable parts and works based on a passive material spring effect. For the hands region (interest of this research), many novel approaches are being researched. Ranging from examining finger kinematics using a fully digital environment and motion capture systems to evaluate joint kinematics (reducing the need for multiple prototypes to adjust comfort and allowing better kinematic simulations) [4], to analyzing finger muscular forces when wearing hand exoskeletons [5]. 
Thus, the goal of this study is to develop a 3D printed functional orthosis capable of reducing the required effort for this industrial operation, contributing to risk-reducing of MSDs

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_190

1269

1270

H.Toso et al.

during vehicle assembly process. The model was validated in live industrial conditions using surface electromyography (sEMG) feature analysis.

2

Materials and Methods

2.1

Thumb Orthosis Development

The thumb orthosis development is composed by the reference model acquisition and iterative prototype creation (prototyping and 3D printing cycles), as shown in Fig. 1. In total, nine different models were designed, each of them printed varying printing parameters, while also optimizing the wall thickness for usage comfort and flexibility. A custom-made approach was adopted in order to validate the idea, which is also the reason as to only one subject was chosen for this study. The subject is a 25 year old average male, 180 cm tall and weighing 70 kg. This allows for quick design iteration and test cycles, very effective when the tests are performed in live situations with limited testing times.

2.1.1 3D Scanning By using this particular technology, it is possible to create a close-to-exact digital copy of the subject’s real hand, which maximizes comfort, but it hinders large-scale production, since there are many different hand sizes and proportions. The reference hand model was acquired using the Handyscan Exascan Scanner (Creaform 3D® ). In order to scan a complex geometry such as the human hand, the laser triangulation range scanner had to be set at a lower resolution, generating a triangle mesh where each triangle had a side of 1 mm. That generated noise on the resulting Standard Triangle Language (STL) model, which was then post-processed using Meshmixer (Autodesk® ). Fig. 1 Reference model creation and iterative prototype creation. Nine different models were designed, each of them printed varying printing parameters, while also optimizing the wall thickness for usage comfort

2.1.2 Prototyping The design followed two separate paths: The first, fully blocking the abduction and limiting extension movements of the thumb, which could lead to a possible relief of the thumbrelated muscles, such as Extensor Pollicis Brevis and Abductor Pollicis Longus (directly linked to thumb abduction), and Extensor Pollicis Longus (responsible for extension movement) [6]. For the other design path, it was sought only to protect the thumb region related to the operation, not blocking any movement, so that the effectiveness of blocking the muscles could also be analyzed. It is expected on both cases that during the operation, when the operator presses the part against the frame, the orthosis will deform while also distributing the tension across its own body, storing the energy. As the pressed part is fit into the frame by pushing it against the pressure point, energy is released, and orthosis return to its original form. It is important to mention that in both cases, a loss of dexterity is expected when compared to a hand without protection, but the procedures required for assembly can be still performed. All of the models were designed on Meshmixer (Autodesk® ). By having a surface reference, an offset surface was made considering the thickness of the gloves, worn by the operator, and the accuracy of the printer. Then, an offset from the created surface was made to generate the thickness of the orthosis. The parameters of thickness, format and offset distance were all iteratively adjusted to maximize comfort for the user, as shown by Table 1. The numerical values for offset varied from 1 to 1.5 mm, and thickness from 1 to 2 mm respectively, in order to adjust comfort and flexibility. During iteration, Mod. 8 (the last one for the abduction block path to be printed), have been evolved to have a bigger surface area, covering also the palmar region on an attempt to improve the energy distribution. Prototyping

Reference Model Creation

3D Scanning

3D Printing

Iteration

3D CAD

Table 1 Model numbers, offsets (mm) and thicknesses (mm) Model number

1

2

3

4

5

6

7

8

9

Thumb offset Thickness

1.5 2

1 1

1 1.5

1 1.5

1 1.5

1.1 1.5

1.1 1.5

1.3 1.8

1.3 1.8

190 Design and Performance Evaluation of a Custom 3D Printed …

1271

2.1.3 3D Printing and Polymer Selection The fabrication of the models was done using a Grabber i3, an open-source 3D printer, with a build volume of 200 × 200 × 200 mm3 . The tip of the thumb was used as the base of the print, as shown in Fig. 2.

PLA (eSun® ) and a test Thermoplastic polyurethane (TPU) filament from a local company, that cannot be named due to a confidentiality agreement. By printing Mod. 1 in several materials, it is possible not only to evaluate the variation of performance and comfort between materials, but also its evolution as the design is refined.

b) Model 8

a) Model 1

2.2

c) Model 4

d) Model 9

Fig. 2 Evolution of the models developed: seeking to block abduction movement (a, b) and to protect the thumb region (c, d). On both variations, comfort was a decisive factor, so they became thinner, and holes have been created to allow better ventilation

Mod. 9

Mod. 8

Mod. 1 (TPU)

Mod. 1 (Flexible PLA)

Mod. 1 (PLA)

As printing parameters and print accuracy may vary not only because of different brands of materials from the same base polymer, but from printer to printer as well (directly impacting on geometry and how prototype might slightly perform), used printing parameters were omitted. In total, there were nine model versions printed, with several prototypes made from each. Only five of them were tested on live assembly line atelier conditions: The latest models for each design variation, and three prototypes of the first model printed on different materials, shown by Fig. 3. The others were not tested as they were inferior in comfort to the latest models, and the total testing time considering all models would be too demanding for live atelier conditions.

Fig. 3 Printed prototypes that were tested. From the left: Mod. 1 PLA, Mod. 1 Flexible PLA, Mod. 1 TPU, Mod. 8 TPU, Mod. 9 TPU

The three materials were selected based on the large differences in tensile yield strength and elongation at break between them.

2.2 sEMG Data Acquisition and Signal Processing

In order to evaluate the thumb orthosis prototype, sEMG data were collected during the experiment. The signal was processed to compare muscle contraction levels during the task with and without the orthosis. The data acquisition and signal processing steps are represented in Fig. 4. The signal is measured by a set of bipolar electrodes on each muscle, connected to an sEMG acquisition module. A reference is provided by attaching a single electrode to a neutral point. The signal is sent from the module to a computer, where it is filtered and normalized. Muscle activity is detected using a threshold method to segment signal windows of interest. From each segment, five features are extracted and analyzed.

2.2.1 Experimental Protocol

The experimental procedures are in agreement with the university's ethics committee on human research (UTFPR CAAE 89638918.0.0000.5547). Data collection was conducted in the atelier of the assembly line, on a real vehicle used to perform tests, after the bodyshell assembly phase and at the painting stage. The subject performed the task reproducing the continuous assembly conditions of a real assembly line. Five different prototypes were evaluated, with three repetitions each. Performance without the orthosis was also evaluated for comparison purposes (control).

Each sample contains the continuous assembly sequence of four parts: two on the frontal panel frame (one elongated and one circular) and two on the floor (one elongated and one circular); the holes were located at the center line of the vehicle, as shown in Fig. 5. The operator first picks up a part, with an approximate diameter of 3 cm, using a regular pinch grip, then presses it against the hole with a downward thumb force until the part clamps appropriately onto the frame. The process is repeated continuously until all parts are assembled.

Myographic data were acquired at 2000 Hz with a resolution of 12 bits. Bipolar electrodes, spaced 2 cm apart and attached to the Extensor Pollicis region (responsible for thumb movement) and the Flexor Digitorum region (responsible for pinch grip movements), were connected to two channels of the measuring board. The bias electrode was applied to a non-related point (near the radial styloid process).


H. Toso et al.


Fig. 4 Data acquisition and signal processing steps. The signal is acquired and sent to a computer where it is processed

Fig. 6 (diagram) Signal segmentation: raw sEMG signal → Teager-Kaiser Energy Operator → rectification and smoothing → two-step threshold (th, Tcrit)

Fig. 5 Assembly sequence in order, starting by the elongated hole on the floor (a) and finishing by the circular hole on the panel (b)

2.2.2 Filtering and Normalization

After acquiring the sEMG data, the signal was filtered by a digital third-order Butterworth filter to select the 5–400 Hz frequency band, which is the EMG frequency range [7]. The power line noise (60 Hz) was removed by a second-order digital notch filter with a quality factor Q = 5. The filtered sEMG signal was then normalized by dividing the signal amplitude vector by the maximum value reached on each channel considering all trials.

2.2.3 Segmentation

Data were segmented using a two-step threshold method [8]. To improve onset detection, Teager-Kaiser Energy Operator (TKEO) conditioning was applied to the signal [9]. Figure 6 shows the signal segmentation process diagram. The energy signal is rectified and smoothed, and then a two-step threshold is applied: if the energy amplitude is greater than a threshold (th), a segment window is detected; if the window period (T) is greater than a critical period (T > Tcrit), the segment is selected. The signal was smoothed using a Savitzky-Golay filter with an 800 ms window, and the critical period was set to 100 ms.
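A minimal sketch of the TKEO-plus-threshold segmentation described above, in pure Python. This is illustrative only: the Butterworth band-pass and 60 Hz notch filtering are omitted, a centered moving average stands in for the Savitzky-Golay smoother, and the threshold, window and burst values are invented examples rather than the calibrated parameters of the paper.

```python
import math

def tkeo(x):
    """Teager-Kaiser Energy Operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def rectify_smooth(x, win):
    """Rectify, then smooth with a centered moving average of `win` samples."""
    r = [abs(v) for v in x]
    out = []
    for n in range(len(r)):
        w = r[max(0, n - win // 2): n + win // 2 + 1]
        out.append(sum(w) / len(w))
    return out

def segment(env, th, t_crit):
    """Two-step threshold: keep windows where env > th lasting > t_crit samples."""
    segments, start = [], None
    for n, v in enumerate(env):
        if v > th:
            if start is None:
                start = n
        elif start is not None:
            if n - start > t_crit:
                segments.append((start, n))
            start = None
    if start is not None and len(env) - start > t_crit:
        segments.append((start, len(env)))
    return segments

# Example: quiet baseline, a 200-sample sine burst (a "contraction"), quiet again.
sig = [0.0] * 200 + [math.sin(0.3 * n) for n in range(200)] + [0.0] * 200
env = rectify_smooth(tkeo(sig), win=21)
bursts = segment(env, th=0.02, t_crit=50)
```

In practice the omitted stages map directly onto `scipy.signal` (`butter`, `iirnotch`, `savgol_filter`); only the two-step thresholding logic is spelled out here.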


Tcrit T1,000,000 sequences per day, with average reading lengths of around 20,000 bases and maximum reading lengths close to 1,000,000 bases [18], however, the speed error is greater than SGS methods [17]. ONT sequencing has been increasingly applied in clinical virology, due to its ability to perform long readings. It is also able to get, in the shortest time possible, sample responses to discover as much information about the pathogen candidate, which is an important feature to deal with public health emergencies. Li et al. [19] noticed that Nanopore is efficient in terms of time, both for detection (of nucleotides) and for data analysis, and has been used in different organisms, from the simplest to the more complex (like humans) because there is no maximum limit on the size of the sequence, as occurs in other techniques [20].


Nanopore sequencing has other advantages, such as tracking biomarkers or genes, requiring a low volume of samples at low concentration, and the fact that it does not require complex steps such as amplification and conversion of genetic material. Thus, Nanopore proves to be a cheaper and more efficient technique than those based on PCR [21]. It can also be used for real-time detection of pathogens in complex clinical samples [22]. In addition, one of the great advantages demonstrated by this technique is direct RNA sequencing. This method is attractive because it eliminates the need for reverse transcription and, therefore, can reduce initialization errors and non-random copies introduced by reverse transcriptase [23]. Its functioning is based on the polarization of the Nanopore membranes, allowing it to perceive changes in the electric current signal as RNA or DNA molecules pass through the pores [24]. Basically, the ionic current depends on the nitrogenous base that is passing at the moment of reading, making it possible to infer the sequence of nucleotides from the changes in ionic current [21].

In Brazil, scientists from the Adolfo Lutz Institute, in collaboration with the University of São Paulo (USP) and the University of Oxford, used the Nanopore methodology to carry out the first genetic sequencing of the new coronavirus (SARS-CoV-2) in Latin America, completed just 48 h after the first confirmed case [25]. The estimated time to sequence SARS-CoV-2 using other techniques is longer, which is why ONT has been used on several occasions. For example, the entire MinION workflow, from sample preparation to DNA extraction, sequencing, bioinformatics and interpretation, was carried out in approximately 2.5 h [26]. There is no doubt about the importance of the MinION sequencer in determining the SARS-CoV-2 sequence; however, its error rate, which ranges from 5 to 20%, must be considered [26].
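The current-to-sequence principle described above can be illustrated with a deliberately naive sketch: each base is assigned an invented reference current level, and each measured level is mapped to the nearest one. Real pores respond to k-mers rather than single bases, and real basecallers use neural networks, so both the levels and the trace below are hypothetical.

```python
# Invented per-base reference current levels (pA); purely illustrative.
REFERENCE_LEVELS = {"A": 80.0, "C": 95.0, "G": 110.0, "T": 125.0}

def call_bases(current_trace):
    """Map each measured current level to the base with the nearest reference level."""
    return "".join(
        min(REFERENCE_LEVELS, key=lambda b: abs(REFERENCE_LEVELS[b] - level))
        for level in current_trace
    )

print(call_bases([81.2, 94.0, 111.5, 126.0, 79.8]))  # -> ACGTA
```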
That is why there are currently different protocols to reduce the error rate, such as nanoCORR, an error-correction tool for nanopore sequence data [27]; NanoOK, software for nanopore data, quality and error profiles [28]; and Nanocorrect, another error-correction tool for nanopore sequence data [29, 30]. The ARTIC network (real-time molecular epidemiology for outbreak response) also provides a protocol for starting a MinION sequencing run using MinKNOW [31]. Without these protocols it is difficult to carry out an analysis and/or trace the phylogeny of this virus. Thus, with such protocols and with sequencing methods based on tandem repeats (concatemers), error rates can be reduced to 1–3% [26]. The use of this NGS technology in Brazil to sequence the new coronavirus shows its benefits, allowing the epidemic to be monitored in real time and the behavior of the virus to be traced locally and globally. Also, through the analysis of genetic variations, it is possible to predict the route of transmission and dispersion


A. M. Corredor-Vargas et al.

of the virus (allowing, in the future, the development of vaccines), verifying the evolution of the effectiveness of drugs against it, and conducting epidemiological research. Thus, as previously mentioned, sequencing SARS-CoV-2 using an efficient and cost-effective technology, such as portable sequencing devices like the Oxford Nanopore MinION, is of great importance for humanity today [23].
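The tandem-repeat (concatemer) error reduction mentioned above comes down to combining several noisy reads of the same sequence by per-position majority vote. The sketch below shows the idea; the reads are invented for illustration.

```python
from collections import Counter

def consensus(reads):
    """Per-position majority vote over equal-length reads of the same region."""
    return "".join(Counter(column).most_common(1)[0][0] for column in zip(*reads))

reads = [
    "ACGTACGT",   # error-free copy
    "ACGAACGT",   # error at position 3
    "ACGTACCT",   # error at position 6
]
print(consensus(reads))  # -> ACGTACGT
```

As an order-of-magnitude intuition: with an independent per-base error rate p and three copies, the residual consensus error is roughly 3p², so a raw 10% error rate drops to around 3%, the same scale of improvement cited for concatemer approaches.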

5

Conclusions

After analyzing the various studies available in the literature so far, it is concluded that sequencing and analyzing the new coronavirus, in order to detect its mutations, is extremely necessary to prevent new outbreaks (of zoonoses). From this kind of analysis, it is possible to trace a phylogenetic relationship between the virus that causes COVID-19 and other coronaviruses present in the wild, to predict its possible damage to human health and its capacity for infection, as well as to know and analyze its mutation rate as it is transmitted from human to human, and to predict future therapeutic targets [9].

Acknowledgements The authors thank CAPES and CNPq for their scholarships.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Wang C, Liu Z, Chen Z et al (2020) The establishment of reference sequence for SARS-CoV-2 and variation analysis. J Med Virol. https://doi.org/10.1002/jmv.25762
2. Fehr AR, Perlman S (2015) Coronaviruses: an overview of their replication and pathogenesis. Methods Mol Biol 1282:1–23
3. Yin C (2020) Genotyping coronavirus SARS-CoV-2. Genomics. https://doi.org/10.1016/j.ygeno.2020.04.016
4. Ji W, Wang W, Zhao X, Zai J, Li X (2020) Cross-species transmission of the newly identified coronavirus 2019-nCoV. J Med Virol 92:433–440
5. GISAID at https://www.gisaid.org/
6. Chen Y et al (2020) Emerging coronaviruses: genome structure, replication and pathogenesis. J Med Virol 92:418–423
7. Dawood AA (2020) Mutated COVID-19 may foretell a great risk for making the future. New Microbes New Infect 35:100673
8. Phan T (2020) Genetic diversity and evolution of SARS-CoV-2. Infect Genet Evol 81:104260
9. Zhang J et al (2020) The continuous evolutions and disseminations of 2019 novel human coronavirus. J Infect 80:671–693
10. Sanjuán R, Domingo-Calap P (2016) Mechanisms of viral mutation. Cell Mol Life Sci 73:4433–4448
11. Rye C et al (2017) Biology. OpenStax, Houston
12. Schadt EE, Turner S, Kasarskis A (2010) A window into third-generation sequencing. Hum Mol Genet 19:R227–R240
13. Lu H et al (2016) Oxford nanopore MinION sequencing and genome assembly. Genomics Proteomics Bioinform 14:265–279
14. Kraft F, Kurth I (2019) Long-read sequencing in human genetics. Medizinische Genetik 31:198–204
15. Kchouk M, Gibrat JF, Elloumi M (2017) Generations of sequencing technologies: from first to next generation. Biol Med 9:395
16. Pillai S et al (2017) Review of sequencing platforms and their applications in phaeochromocytoma and paragangliomas. Crit Rev Oncol/Hematol 116:58–67
17. Laver T, Harrison J, O'Neill PA, Moore K, Farbos A, Paszkiewicz K et al (2015) Assessing the performance of the Oxford Nanopore Technologies MinION. Biomol Detect Quantif 3:1–8
18. Huang J, Liang X, Xuan Y, Geng C, Li Y, Lu H et al (2017) Real-time DNA barcoding in a rainforest using nanopore sequencing: opportunities for rapid biodiversity assessments and local capacity building. Gigascience 6(5):1
19. Li Y, He X, Li M et al (2020) Comparison of third-generation sequencing approaches to identify viral pathogens under public health emergency conditions. Virus Genes 56(3):288–297
20. Roach NP et al (2020) The full-length transcriptome of C. elegans using direct RNA sequencing. Genome Res 30(2):299–312
21. Venkatesan B (2011) Nanopore sensors for nucleic acid analysis. Nat Nanotechnol 6:615–624
22. Greninger AL, Naccache SN, Federman S et al (2015) Rapid metagenomic identification of viral pathogens in clinical samples by real-time nanopore sequencing analysis. Genome Med 7:99
23. Quick J, Grubaugh N, Pullan S et al (2017) Multiplex PCR method for MinION and Illumina sequencing of Zika and other virus genomes directly from clinical samples. Nat Protoc 12:1261–1276
24. Li Y, Han R, Bi C, Li M, Wang S, Gao X (2018) DeepSimulator: a deep simulator for nanopore sequencing. Bioinformatics 34(17):2899–2908
25. Jesus J (2020) Importation and early local transmission of COVID-19 in Brazil, 2020. J São Paulo Inst Trop Med 62:e30
26. Loit K, Adamson K, Bahram M, Puusepp R, Anslan S, Kiiker R et al (2019) Relative performance of MinION (Oxford Nanopore Technologies) versus Sequel (Pacific Biosciences) third-generation sequencing instruments in identification of agricultural and forest fungal pathogens. Appl Environ Microbiol 85(21)
27. nanoCORR at https://github.com/jgurtowski/nanocorr
28. NanoOK at https://documentation.tgac.ac.uk/display/NANOOK/NanoOK
29. Nanocorrect at https://github.com/jts/nanocorrect/
30. Lu H, Giordano F, Ning Z (2016) Oxford nanopore MinION sequencing and genome assembly. Genomics Proteomics Bioinf 14:265–279
31. MinKNOW at https://www.protocols.io/view/starting-a-minion-sequencing-run-using-minknow-7q6hmze

Vidi: Artificial Intelligence and Vision Device for the Visually Impaired

R. L. A. Pinheiro and F. B. Vilela

Abstract

Independence and autonomy represent the capability to make one's own choices and to act as one wants, without needing help. Visually impaired people face several difficulties in their daily lives and often need help. To make users more independent, a prototype was developed that uses artificial intelligence to narrate images from a camera attached to the glasses. Some of its functions are facial, object and color recognition, all through machine learning and image recognition. Practical tests were carried out by the authors, placing different objects in front of the device to verify whether they would be recognized. The results were satisfactory as a proof of concept of the device.

Keywords: Artificial intelligence · Autonomy · Machine learning · Visually impaired

1

Introduction

A person becomes truly independent when they are able to make their own decisions and carry out their basic activities autonomously. Grimley Evans (1997) defined autonomy as "the ability of individuals to live as they want" [1]. The concept of "autonomy" can be represented as "the ability to act for oneself, to be responsible for one's own actions, without depending on others or on conditions of the environment". In addition, "it is necessary to take into account the relativity of this definition and, consequently, to speak of degrees and levels of people's autonomy, since, to a greater or lesser extent, people depend on others in different situations of daily life" [2]. Therefore, autonomy means, for the author, the ability to

R. L. A. Pinheiro (B) · F. B. Vilela
National Institute of Telecommunications, Santa Rita do Sapucaí, Brazil
e-mail: [email protected]

conduct one's behavior independently in certain situations and frequent events of daily life [3]. In general, autonomy is the ability to perform simple everyday tasks, which can sometimes be extremely difficult for people who are blind or have low vision, such as the act of identifying banknotes [4].

In a study carried out at the Federal University of Paraná, blind people were interviewed. They pointed out that the pattern of Brazilian banknotes has changed and that the two-real banknote is smaller, but that using touch alone they cannot identify it [4]. One of the people interviewed, an elderly woman, reported the same difficulty in identifying cash and mentioned her strategy of separating the notes in her wallet with the help of her daughter, memorizing the order of the notes and their respective values [4].

The same study identified some difficulties when a blind person or a person with low vision goes to the supermarket. In a reported case, a person with a visual disability has great difficulty identifying products in supermarkets and stores [4]. Not all products have Braille identification (raised dots on the packaging). So they take the product and try to use touch, but the products are often very similar and easily confused, like canned foods. Thus, they reported often buying the wrong products, highlighting the fact that the help of someone to make purchases in a market is almost indispensable, as the identification of the products is practically impossible [4].

Another important point is the predisposition of people around the disabled to help them. Many have a certain resistance and fear, or help once or twice but soon give up. Araújo observed this in her study: interviewees reported that people often offer help, but not in the right way. For example, they want to hold the blind person's arm themselves instead of letting the blind person hold their shoulder.
At other times, interviewees were abandoned in the middle of the street while crossing with the help of a person, putting their lives at risk [5]. The state of the art points out some solutions that can improve this scenario. Scientific solutions have led to commercial devices that seek to bring more autonomy to the visually impaired. However, the most sophisticated and complete

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_200



R. L. A. Pinheiro and F. B. Vilela

devices, being imported, have a high cost, reaching the range of three thousand dollars. These devices are capable of transcribing static texts, some colors and product labels [6].

There are also smartphone applications with features that assist disabled people in some activities. Pay Voice, for example, is an application that reads aloud the value to be paid shown on a card machine [7]. For cash, there is Cash Reader, an application that identifies banknotes [8]: by pointing the phone towards the money, the application says aloud how much the banknotes are worth. Eye-D converts text to speech [9]. The most used today, however, is Be My Eyes. It works as a support network between fully sighted and visually impaired people, promoting video calls so that a user with perfect vision can describe to the other what appears on the screen and read texts [10].

Based on the information obtained, the present paper aims to create a proof of concept of an artificial intelligence and vision support device that can be attached to regular eyeglasses. It is able to transcribe objects, faces, colors and texts for the user, converting the images captured by the camera into audio through an artificial intelligence system. The images are processed by an embedded algorithm, using OpenCV and developed in Python on a Raspberry Pi 3B+, which would be fixed to the belt or stored in a backpack. The main objective of this project is the development of a light, versatile and efficient device that can give greater autonomy to people with visual disabilities and facilitate non-dependent relationships.

2

Materials and Methods

The project started with a search for existing technologies, covering solutions focused on helping people with any level of visual disability. A bibliographic search was made to identify the main technologies available in computer vision (CV). Similar works were found that propose the use of CV with feedback through 3D sounds, as well as works focused on mobile applications (apps), which also use CV to provide assistance in crossing the street at the crosswalk, or study applications that help in the academic sphere [11–13]. Tests were also carried out on three apps available for download, highlighted in the article "Apps for different visuals" by Mellina, a visually impaired person who uses them frequently and described them as the best: Aipoly Vision, Seeing AI and Tap Tap See [14].

Based on the information obtained, five main functionalities were designed to meet the minimum requirements identified as important:

• Voice recognition: the user interacts with the device through speech, requesting the task to be performed;
• Natural language reproduction: the device can reproduce responses to user requests;
• Object identification: when an object already saved in memory is placed in front of the disabled person, an audio with the name of the object is played;
• Face recognition: when a person already stored in the system is in front of the user, that person's name is reproduced;
• Color detection: the device returns, in audio, the predominant color in front of the disabled person.

To validate these functionalities, some basic tests were developed. Initially, they were performed only by the authors. With the device positioned on the glasses, the head was directed to an initially empty surface, and the device was asked by voice to start the recognition. Then a water bottle was placed on the surface and processing was awaited. The next step was to put an apple next to the bottle, to check the recognition of two objects together. The device was then directed at a person to check facial recognition. After these steps, a sheet of colored paper was placed in front of the user, who requested the color identification. A movement test was also performed: the person walks along a hallway one meter wide, with objects along the way, to verify that the device is able to announce them to the user.

2.1

Hardware

With the functionalities defined, the development of the prototype began. As the main hardware, a Raspberry Pi 3B+ was used, which has a Broadcom BCM2837B0 64-bit ARM Cortex-A53 quad-core processor and 1 GB of RAM. Figure 1a highlights the device, a compact minicomputer that has all the main components of a computer on a card-sized board [15,16]. Other Raspberry Pi models will be tested for comparison purposes only. Together, a Microsoft LifeCam Cinema webcam (H5D-00013), shown in Fig. 1b, was used to capture images in real time at 720p, which exceeds the needs of the project. This camera was selected not only for its high resolution, but also for its built-in microphone, simplifying development and reducing the cost of the prototype [17]. For the audio feedback, a mini 30 mm speaker, with 32 Ω impedance and 100 dB sensitivity, illustrated in Fig. 1c, was used.

200 Vidi: Artificial Intelligence and Vision Device for the Visually Impaired

(a) Photo of the Raspberry Pi 3B+

(b) Microsoft Lifecam Cinema-H5D-00013

(c) Views of the used speaker

Fig. 1 Hardware parts

The negative image is an image that does not contain the item to be identified; in other words, it does not have the same identity. The point is that the anchor and the positive image both belong to the same face or object, while the negative image does not. The neural network computes the 128-d embeddings for each item and then tweaks the weights of the network (via the triplet loss function) so that the 128-d embeddings of the anchor and the positive image lie closer together while, at the same time, moving away from the embeddings of the negative images. In this manner, the network learns to quantify and return highly robust and discriminating embeddings suitable for detection [22].

Some features require an internet connection, because part of the data processing is done remotely. This is the case for the TTS and STT interface features. For this, a Wi-Fi connection was used.
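The triplet objective described above can be sketched as follows. In the real system the embeddings are 128-d vectors produced by a CNN; here 3-d vectors are hard-coded purely to show how the loss behaves, so all numbers are illustrative.

```python
def dist_sq(a, b):
    """Squared Euclidean distance between two embeddings."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Zero once the anchor is closer to the positive than to the negative by `margin`."""
    return max(0.0, dist_sq(anchor, positive) - dist_sq(anchor, negative) + margin)

anchor   = [0.1, 0.9, 0.0]  # embedding of the reference face/object
positive = [0.2, 0.8, 0.1]  # same identity: close to the anchor
negative = [0.9, 0.1, 0.5]  # different identity: far from the anchor

loss = triplet_loss(anchor, positive, negative)
```

Training drives this loss toward zero, which is exactly the geometry the text describes: anchor and positive pulled together, negative pushed away by at least the margin.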

2.2

Software

Development started with speech recognition and natural language processing. Google application programming interfaces (APIs), Text to Speech (TTS) and Speech to Text (STT), were selected, which make it possible to transform audio into text and vice versa [18,19]. With the user interface ready and totally non-visual, image processing was the next step; for this, the Open Source Computer Vision Library (OpenCV) was selected. It is an open-source computer vision and machine learning software library built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. Being a Berkeley Software Distribution (BSD)-licensed product, OpenCV allows the utilization and modification of its code [20].

To expand the possibilities of OpenCV, the TensorFlow (TF) library, from Google, was used. TF is an end-to-end open-source platform for machine learning (ML). It abstracts the pattern recognition needed to classify images, using the Convolutional Neural Network (CNN) architecture. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state of the art in ML and developers easily build and deploy ML-powered applications [21].

Three kinds of images are needed to train such a neural network: the anchor, positive images and negative images. The anchor is the image of the reference face or object to be identified. The positive image is an image that also contains the item.


2.3

Physical Structure

With the electrical part ready and the analysis algorithm completed, the physical supports of the project were developed. The computer program SolidWorks and a 3D printer loaded with acrylonitrile-butadiene-styrene (ABS) were used to print part of the structure. The structure has a support to which the camera is fixed (shown as Part A in Fig. 2a), and a second support to be attached to the glasses rods by means of plastic ties (seen as Part B in Fig. 2b). A very thin and simple structure was chosen, to allow the rods of any glasses to be attached.

Part A: the camera, the activation button and the speaker were coupled to it. It is the part where all the technology and two magnets are attached, allowing users to hold it in their hand if they feel more comfortable, or to take it off to store it or change glasses without losing any functionality. The speaker is positioned to direct the sound to the ear of the user, so as not to obstruct hearing, as a headset would. This is shown in Fig. 2a.

Part B: this part is the cheapest and can be replaced if necessary, allowing the user to have one for each pair of glasses. It contains only two magnets, with the right polarity for fitting Part A in the exact position.

3

Results

The finished support, with the camera and its respective magnets can be seen in Fig. 3. The component connection model can be seen in Fig. 4.



(a) Part A

(b) Part B

Fig. 2 Separate parts

Furthermore, the logical operation is illustrated in Fig. 5. The final prototype, shown in Fig. 6, reached the expected goal at this stage. It was only observed that it is very large, and the part attached to the glasses is very heavy. The tests mentioned in Sect. 2 produced the following results: the device understood all the voice commands and recognized the two objects used, pronouncing their names correctly. The recognition of face and color was also successful: the device was able to identify the authors' faces and speak their names. In the movement test, while the authors

Fig. 5 Simplified flowchart of the system

Fig. 3 Parts A and B together

Fig. 6 Finalized prototype

Fig. 4 Block diagram of the system

were walking, the device repeated the objects in front of them until they left its field of vision. In the tests carried out, it was possible to identify personal objects on a table, some obstacles on the way and people ahead, including their faces when the person had already been registered, as well as to interact with the system in a practical and simple way. The lack of some simple functions was identified, such as knowing what time it is or the current weather. These were implemented in the project, using Google Weather for the weather and Google Time to identify the


hours. Then, when the device is asked for any of these functions, it checks on the internet and speaks the answer.

In the first tests, the Raspberry Pi Zero W was used, which is much smaller than the board mentioned in Materials and Methods (Pi 3B+). However, its Broadcom BCM2835 ARM11 1 GHz single-core processor and 512 MB of RAM were not enough to execute the code properly, overheating and saturating its processing, which caused latency [23]. Figure 7 presents a photo of the internal thermometer of the Raspberry Pi, where the problem can be observed.

Tests with object recognition were also performed, as can be seen in Fig. 8: objects that had already been processed before were recognized by the device, and TensorFlow generates the bounding rectangle automatically. It is possible to add any object; the process is as follows. First, approximately three hundred pictures of the object are taken, in different positions and angles, as in the example with playing cards in Fig. 9. Then, with all the images properly marked, using a TensorFlow tool, the images are analyzed and processed by the neural network, resulting in an XML file with a transcribed image mapping that represents the playing-card pattern. Importing this pattern into the algorithm, a playing card is identified [24].

Fig. 7 Photo of the internal thermometer showing high temperature

Fig. 8 First test performed, the identification of a chair can be observed


Fig. 10 Similarity percentage

When an object is identified by the camera, the software provides a percentage of similarity compared with the object saved in memory (the anchor). If this percentage is greater than the established set point, which in this case was 85%, the software states that the object seen by the camera is actually the saved object, as can be seen in Fig. 10.

For facial recognition, the principle is the same as for object recognition. However, two key steps are applied using OpenCV resources: detecting the presence and location of a face in an image, without yet identifying it, and extracting the 128-d feature vectors (called "embeddings") that quantify each face in an image. Thus, after a face is identified and cut out of the video frame, it is processed in the same way as in object detection: the neural network computes the 128-d embeddings for each face and then tweaks the weights of the network via the triplet loss function [22]. The process is depicted in Fig. 11.

In color recognition, the process is a little different. When the device is asked to make an identification, a photo is taken by the camera, and the color pattern most prevalent in the image is defined as the color to be identified. Three filters, defined by the red, green and blue (RGB) characteristics, are applied to this photo, using the color balance and set points defined for the percentage of each color present. The concepts related to the hue circle in the hue, saturation and value (HSV) system, highlighted in Fig. 12, are used to make the color definition [25]. In Fig. 13 it is possible to observe the filters applied to an image with a "mixed" color, to identify the color percentages.
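The color-identification step can be sketched with the standard-library `colorsys` module: each RGB pixel is converted to HSV and votes for a coarse hue bin, and the bin with the most votes is the predominant color. The bin edges and the saturation/value set points below are illustrative assumptions, not the prototype's calibrated values.

```python
import colorsys

def classify_pixel(r, g, b):
    """Map one RGB pixel (0-255 channels) to a coarse color name via HSV."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if v < 0.2:
        return "black"
    if s < 0.2:
        return "white/gray"
    deg = h * 360  # position on the hue circle
    if deg < 30 or deg >= 330:
        return "red"
    if deg < 90:
        return "yellow"
    if deg < 150:
        return "green"
    if deg < 270:
        return "blue"
    return "magenta"

def dominant_color(pixels):
    """Return the color name with the most pixel votes."""
    votes = {}
    for r, g, b in pixels:
        name = classify_pixel(r, g, b)
        votes[name] = votes.get(name, 0) + 1
    return max(votes, key=votes.get)

# Example "mixed" frame: 60% reddish pixels, 40% bluish pixels.
frame = [(200, 30, 30)] * 60 + [(30, 30, 200)] * 40
```

On the real device the same idea would run over the camera frame's pixel array (e.g. via OpenCV/NumPy) rather than a Python list.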



Fig. 9 Photos taken from playing cards for machine learning [24]

Fig. 11 An overview of the OpenCV face recognition pipeline. The key step is a CNN feature extractor that generates 128-d facial embeddings [22]

4

Discussion

When tests were performed with the prototype by the authors, discomfort was observed after a short interval of use due to its high weight, showing the need to reduce the total mass of the device. The next step is to send a request to the ethics committee to carry out tests on humans with a larger sample group. Despite the satisfactory results after the hardware adjustments, high latency was still observed for the application

of the device, being possible that when the user was warned of an object in the way, it was already too close for some action to be taken. The need to look for a new processor should be analyzed. For the object learning process, it is also important to take images with various other non-desired objects in the pictures and pictures with multiple objects. To be able to detect the objects when they are overlapping, make sure to have it the overlapped in many images. The size of the images cannot be very large (maximum 720 × 1280 pixels). With the Label

200 Vidi: Artificial Intelligence and Vision Device for the Visually Impaired

Fig. 12 Demonstration of Hue, Saturation and Value

Fig. 13 Red filter applied to an image with mixed colors

Fig. 14 Marking the playing cards, image used as anchor [24]

1351

1352

R. L. A. Pinheiro and F. B. Vilela

Pictures software, at least twenty percent of the photos mark the object to be identified, as shown in Fig. 14, even with the example of playing cards [26]. To improve usability, the mobile network can be implemented, to make it independent from a Wi-Fi network. Another point to highlight is the voice recognition. The voice was well recognized and interpreted, but in noisier places, the device can present a little difficulties for processing the audio due to the location of the microphone (built into the camera). Thus, changes in the hardware for the microphone and its position must be considered.

5 Conclusion

The project started from the concern that, despite the different solutions designed for people with visual disability, these individuals still face difficulties in their daily lives; in addition, there is not always someone available, or willing, to help correctly. A review of applicable existing technologies was then carried out, together with a survey of the minimum necessary requirements. Development was initiated using tools that, in addition to simplifying and speeding up the design of the solution itself, were freely available and license-free for developers. During this stage, some hardware and software difficulties were encountered, but they were overcome with good tools, such as code classes, reliable documentation and changes in the development board. During the tests, some points to be reworked were observed, such as the weight and size of the device, as well as the latency, which is a crucial point for the safety of the user. Hardware issues were observed in speech recognition, which failed in noisy places. Future steps for this project are to reformulate it based on the problems already highlighted, developing hardware solutions that meet the mentioned needs. In terms of software, no irregularities or unreliability were observed in this first prototype.

Acknowledgements The authors would like to thank the Center for Development and Transference of Assistive Technology (CDTTA) for the availability of resources, equipment and infrastructure.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Johnstone D (2001) An introduction to disability studies, 2nd edn. David Fulton Publishers
2. Garcia RM (1999) Programa Básico para Favorecer a Autonomia Pessoal e a Vida Diária. Cooperativa de Educação e Reabilitação de Crianças Inadaptadas, 135 p
3. Santos MER, Rodrigues F, Rodrigues D (2006) Serviço Social e Deficiência Mental: a perspectiva subjectiva da qualidade de vida. ISMT
4. Carolina Aoki LA (2014) Análise de acessibilidade para pessoas cegas às embalagens. Trabalho de Conclusão de Curso (Graduação), Universidade Tecnológica Federal do Paraná
5. Simone Moraes AS (2015) O encontro de pessoas cegas e não cegas pelas ruas do Recife. Revista Iluminuras—Publicação Eletrônica do Banco de Imagens e Efeitos Visuais, NUPECS/LAS/PPGAS/IFCH/UFRGS 16:97–117
6. Mais Autonomia—OrCam MyEye 2 at https://maisautonomia.com.br/produto/orcam-myeye-2-0/
7. Pay Voice by ABECS at https://appadvice.com/app/pay-voice/1344943724
8. Cash Reader—a money reading mobile app for blind and visually impaired at https://cashreader.app/en/
9. Eye-D—technology as a caring friend for blind and visually impaired at https://eye-d.in/
10. Be My Eyes—bringing sight to blind and low-vision people at https://www.bemyeyes.com/
11. Dias CM (2018) Alvisku: uso da visão computacional e sons 3D para auxílio a cegos. Engenharia de Computação JMV
12. Aparecida Oliveira SK (2013) Uso de visão computacional em dispositivos móveis para auxílio à travessia de pedestres com deficiência visual. Mestrado Mackenzie, Engenharia Elétrica e Computação
13. Cristina SJ, Jeferson Pezzuto DR, Cristina BJ (2015) Estudo de Aplicativos Móveis para Deficientes Visuais no Âmbito Académico. In: Brazilian symposium on computers in education (Simpósio Brasileiro de Informática na Educação—SBIE), CBIE-LACLO 2015, Santo André, São Paulo, Brasil
14. Quatro Patas Pelo Mundo—Aplicativos para Deficientes Visuais at https://www.4pataspelomundo.com/aplicativos-para-deficientes-visuais
15. Raspberry Pi 3 B+ at https://www.raspberrypi.org/products/raspberry-pi-3-model-b-plus
16. Olhar Digital—Raspberry Pi: o que é, para que serve e como comprar at https://olhardigital.com.br/noticia/raspberry-pi-o-que-e-para-que-serve-e-como-comprar/82921
17. Microsoft—LifeCam Cinema at https://www.microsoft.com/accessories/pt-br/products/webcams/lifecam-cinema/h5d-00013
18. Google—Text to Speech at https://cloud.google.com/text-to-speech
19. Google—Speech to Text at https://cloud.google.com/speech-to-text
20. OpenCV at https://opencv.org
21. TensorFlow at https://www.tensorflow.org/
22. PyImageSearch—OpenCV Face Recognition at https://www.pyimagesearch.com/2018/09/24/opencv-face-recognition
23. Raspberry Pi—Raspberry Pi Zero W at https://www.raspberrypi.org/products/raspberry-pi-zero-w
24. Edje Electronics—TensorFlow Object Detection API Tutorial: Train Multiple Objects, Windows 10 at https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10
25. PyImageSearch—OpenCV Python Color Detection at https://www.pyimagesearch.com/2014/08/04/opencv-python-color-detection
26. Tzutalin—Label Image (labelImg) at https://github.com/tzutalin/labelImg

Mobile Application for Aid in Identifying Fall Risk in Elderly: App Fisioberg

D. C. Gonçalves, F. P. Pinto, and D. S. F. Magalhães

Abstract

The fall of a person can occur at any stage of life, but it is more frequent in the elderly, representing a high social and economic impact on the country and the world. In this scenario, the prevention and measurement of the risk of falls are considered important markers of quality of life among the elderly, because a fall, when it occurs, is a factor that can negatively impact that quality. Today, the public health system broadly encourages scientific studies on the causes of falls in the elderly, so that effective prevention of this episode is achieved, reducing the demand and costs in public health. Thus, in this research, we developed and validated the App FisioBerg, a mobile application based on the Berg Balance Scale and the Visual Analog Scale (VAS), which assists health professionals in assessing the risk of falls and pain intensity in the elderly. This is a technological development study of an application that was created using the App Inventor tool, with usability heuristics for the target audience (physiotherapy professionals and students). Professional physiotherapists (n = 4) and physiotherapy students from the tenth period (n = 10) participated in the usability test. The Questionnaire for User Interaction Satisfaction (QUIS) was applied after the care of 30 elderly people from the Palmas (TO) region. The software was widely accepted by professionals and students in the study. The App FisioBerg allows information and results sharing, presents the VAS scale as a differential, and does not present advertisements. In conclusion, the App FisioBerg has the main instruments to identify the risk of falls in the elderly, is easy to operate, and generates a positive tendency for adherence by health professionals and students in its use.

D. C. Gonçalves, D. S. F. Magalhães: Scientific and Technological Institute, Bioengineering Program, Universidade Brasil, Rua Carolina Fonseca, 235, São Paulo, Brazil; e-mail: [email protected]
F. P. Pinto: Centro Universitário Católica do Tocantins, Palmas, Brazil

Keywords

Elderly · Falls · Applications · App · Visual analog scale (VAS) · Berg balance scale · Bioengineering

1 Introduction

Brazil is already the sixth country in the world in terms of population aging rate [1], which represents a typical scenario of long-lived countries, with very striking characteristics such as the presence of more complex and costly diseases [2–5]. Changes in the Brazilian demographic pattern have been increasing the demand for health services, with greater investments in the search for an improvement in the functional capacity of the elderly, directly influencing their quality of life [2–4]. Studies show that falls are the biggest cause of mobility restriction and social isolation suffered by this population, and the resulting injuries can contribute to death [2, 3, 6–9]. Aging is a dynamic and progressive process that causes functional, morphological and biochemical changes, with a decline in functional capacities resulting, in part, from neuromuscular changes such as muscle denervation, atrophy and selective loss of muscle fibers (especially type II fibers), with a reduction in total muscle mass and a decrease in muscle strength and power; there may also be a decline in other physiological functions, such as vision, hearing and locomotion, or pain symptoms in specific pathologies [9–11]. According to data from the Ministry of Health of Brazil, falls in the elderly over 65 years represent the main cause of mortality, being one of the main clinical and public health problems in the country [2, 3, 8]. Studies with elderly people living in the community indicate that 30% will fall, and this forecast increases to 50% if we consider the institutionalized elderly [8, 10]. Of the elderly people who fall, about 5% are taken to the hospital, and of these, one in three dies within a year [3, 8]. In addition to limiting the elderly's independence and autonomy, falls in this population and their consequences have high costs for the health system. It is estimated that secondary health problems following a fall increase admissions to health services, totaling 20,000 hospitalizations per year, at a cost of more than US $4,500.00 per admission in the country [5, 11]. Changes in balance and gait can lead the elderly to dependence in their daily activities [9], changing not only their autonomy, but also the epidemiological, social and economic structure of their family [9, 11, 12]. In this scenario, the prevention and measurement of the risk of falls are considered important markers of quality of life among the elderly, especially in the hospital environment. The National Health Policy for the Elderly and the Health Pact emphasize the prevention of falls in the elderly population as a research priority for adequate intervention by professionals in the Unified Health System of Brazil (from the acronym in Portuguese: Sistema Único de Saúde, SUS) [13]. The World Health Organization (WHO) recommends the implementation of objective and standardized measures and tests in order to avoid subjectivity in the assessment of human function and dysfunction [3, 9, 13]. Scales of systematized and appropriately defined measures provide subsidies for prevention and for more specific and targeted treatment of the elderly. There are, in the literature, scales and tests widely used and validated for the Portuguese language, which are used to assess the risk of falls and pain in the elderly and thus establish measures that can prevent their occurrence, among them the Berg Balance Scale [14, 15].

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_201
In the current reality of elderly care services, these scales are applied to the patient through printed material, where responses are noted and added up, obtaining a score that leads to a classification of high, medium or low risk of falls. This information is subsequently filed in medical records or stratified for research purposes [14, 15]. We currently live in an era of digitalization of everyday life, and technology is increasingly present in people's lives. Rapid technological advances have increasingly imposed transformations and changes to adapt to this world. Such changes have also occurred in health institutions, reaching great proportions due to the growing pressure of demand and universal access to health [3]. Falls represent a serious public health problem whose dimension has been overlooked by Brazilian society and whose impact has not been adequately discussed in academic circles (with rare exceptions), even less in the sphere of health policies. In addition, falls are not the main concern of most health professionals, who still disregard them; they remain a public health problem unknown to the community and overlooked by many health professionals and Brazilian health authorities [14]. In view of this, developing an application for mobile devices capable of assisting health professionals in assessing the risk of falls not only supports the evaluation and classification of fall risk, but also allows closer management of the files of the evaluated patients, subsidizing the development of health projects aimed at preventing these events more quickly and effectively. This mobile web tool makes the data available anytime and anywhere, and allows sharing to a previously registered e-mail.

2 Methods

The current study was approved by the Ethics Committee under number 20709819.2.0000.5494. It is a development study, based on software engineering, that proposes the development of the application and the verification of its performance and effectiveness by health professionals and students. The conception phase included the stages of identification of needs and requirements gathering, both carried out through an observational survey during consultations and evaluations of elderly patients by health professionals in a school clinic at Guaraí (state of Tocantins, Brazil) and in a specialized physiotherapy clinic. The identification of needs arose from the urgency of transforming this evaluation process into something faster and safer, while keeping a history of the evolution of the elderly. Therefore, the observation focused on the following aspects: the model used to assess the elderly (written); the method of summing the results; the source of stratification of results for the classification of falls; and the monitoring of evaluations. The Massachusetts Institute of Technology (MIT) App Inventor, also known as App Inventor for Android, is an initiative started by Google to help popularize programming. It is an open-source framework currently maintained by MIT, and it uses a programming pattern different from the conventional one, Open Blocks, based on graphic blocks that represent commands equivalent to those used in a traditional programming environment. This is possible through a library called Blocks JAVA, which is distributed by the Scheller Teacher Education Program (STEP) and linked to the MIT license. Functional requirements (FRs) define the functions that the application is able to perform through data entry (system screens) and data output (reports issued by it). Such requirements define the behavior of the system after the user triggers an application function.


FR01—Calculation according to the Berg Scale: the system performs the sum of points according to the answers to the 14 questions in the form filled out by the user. The result of this summation should show the individual's chances of falling.
FR02—Information sharing: the system should be able to share the results after the calculations are performed.
FR03—Visual analog scale (VAS): the system should offer an option to indicate the patient's degree of pain, which should range from 0 to 10.
FR04—Editing the data: the system should provide a way to update the results of the Berg Scale and the Visual Analog Scale.
FR05—Data exclusion: the system must contain an option to exclude data entered in the database.
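Since the App itself is built from App Inventor blocks, the scoring logic of FR01 and FR03 can only be illustrated here in code form. The Python sketch below sums the 14 Berg items (each scored 0–4, total 0–56) and maps the total to a risk class; the cut-offs used (0–20 high, 21–40 medium, 41–56 low) are commonly cited in the Berg literature and are an assumption here, not values stated in this paper:

```python
def berg_total(item_scores):
    """FR01: sum the 14 Berg Balance Scale items, each scored 0-4."""
    if len(item_scores) != 14 or any(not 0 <= s <= 4 for s in item_scores):
        raise ValueError("expected 14 item scores in the range 0-4")
    return sum(item_scores)

def fall_risk(total):
    """Map a Berg total (0-56) to a risk class.

    The cut-offs below are commonly cited in the Berg literature and are
    an illustrative assumption, not values confirmed by this paper.
    """
    if total <= 20:
        return "high"
    if total <= 40:
        return "medium"
    return "low"

def vas_pain(level):
    """FR03: validate a Visual Analog Scale pain score (0-10)."""
    if not 0 <= level <= 10:
        raise ValueError("VAS must be between 0 and 10")
    return level

total = berg_total([4] * 10 + [3] * 4)   # 40 + 12 = 52
print(total, fall_risk(total))           # 52 low
```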


Fig. 2 Use-case diagram

Figure 1 shows the project overview, in which the health professional records the data on the mobile device and synchronizes the collected data with the central server for the automatic generation of reports and monitoring of necessary actions. Figure 2 shows the use-case diagram, and Fig. 3 shows a general model with all classes of the App FisioBerg, with their attributes, methods and relationships. Minimum specifications for the mobile App:

• Processor: 2 GHz, 8 cores
• Android 2.1 (or newer) operating system (OS)
• Required available space: minimum of 4.8 MB

An Asus Zenfone 3 smartphone was used. The App was tested (black-box test) by the author, by physiotherapists and by students to verify the effectiveness of using the application from the home screen to the result.

Fig. 1 Architecture and deployment schema

Fig. 3 Class diagram

The tasks performed were: registering patients, scoring the scale and carrying out evaluations. Access to the system modules should be quick and require as little touching and typing as possible by users. Responsiveness was assessed after each command issued. During the use of the application, the system analyst and the author of the project verified the initialization time, the screen-change time, the time to complete the evaluation in the software, and the time between sending the e-mail and its receipt, taking into account each access screen of the application: registering a new patient, processing the data on the scale, accessing the fall-risk classification screen and sending the assessment by e-mail. For usability testing, the Questionnaire for User Interaction Satisfaction (QUIS) was used [16]. QUIS is a tool developed by a multidisciplinary team of researchers at the Human-Computer Interaction Laboratory (HCIL) of the University of Maryland to estimate the subjective satisfaction of users, focusing on specific aspects of the human-computer interface. It is structured in a modular way and organized hierarchically, and can be configured according to the analysis needs of each interface, including only the sections of interest to the user [16]. Each section specifies some points of interest for the interface. It was designed to measure overall satisfaction with the system, addressing 11 specific interface factors: screen factors, system terminology and feedback, learning factors, system capacity, technical manuals, online tutorials, multimedia, voice recognition, virtual environment, internet access and software installation [16]. For the statistical analysis of the results of the QUIS questionnaire, the Wilcoxon test, a non-parametric test, was used, as the data are based on a Likert scale. Means and medians were used to represent the central values, and standard deviation and interquartile range to represent the dispersion of the measured values.
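The group comparison described above, a Wilcoxon test on two independent Likert-scored samples, corresponds to the Wilcoxon rank-sum (Mann–Whitney) test. Below is a standard-library sketch using a normal approximation; the authors' exact software and any small-sample corrections are not stated in the paper, so the p-values here are only approximate:

```python
from statistics import NormalDist

def rank_sum_test(x, y):
    """Wilcoxon rank-sum (Mann-Whitney) test with a normal approximation.

    Returns (W, p_two_sided), where W is the rank sum of sample `x`.
    Ties receive average ranks; the exact small-sample distribution is
    not used, so p-values are approximate.
    """
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):                 # assign average ranks to ties
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg = (i + j) / 2 + 1              # ranks are 1-based
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[:n1])                    # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2
    sd = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (w - mean) / sd
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return w, p

# Two Likert-style samples with similar distributions yield a large p-value.
w, p = rank_sum_test([9, 8, 9, 7, 8], [8, 9, 7, 9, 8])
print(round(w, 1), round(p, 3))
```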

3 Results

The software was developed, thoroughly tested by the programmer and the author, and then registered with the National Institute of Intellectual Property in Brazil (INPI) [20]. Figure 4 shows the home screen (left) and the Berg Scale test (right); both are shown in Portuguese. Physiotherapists with more than one year of training (n = 4) and students from the 10th period of physiotherapy (n = 10) participated in the tests with users. The sample had a greater participation of females (71%) compared to males (29%). Regarding profession, there was a greater number of students (71%)

Fig. 4 Application home screen and Berg scale evaluation screen

compared to physiotherapists (29%), as the largest number of patients were in the physiotherapy school clinic. To analyze the topic "General impressions of the use of the application", we considered 3 questions. We can observe from the results in Fig. 5 that the general impression of both students and physiotherapists was excellent, with mean and median scores very close to the maximum value (9). To evaluate whether there was a difference in the general impressions of use between students and physiotherapists, we used the Wilcoxon test, finding W = 168 with p = 0.39. This p-value shows that there was no significant difference in the general impressions of use between students and physiotherapists. To analyze the topic "Application screen", we considered 6 questions from the QUIS questionnaire. We can see from the results shown in Fig. 6 that the evaluations of both the students and the physiotherapists were excellent, with mean and median scores very close to the maximum value (9). To evaluate whether there was a difference in the application screen impression between students and physiotherapists, we used the Wilcoxon test, finding W = 642 with p = 0.38. This p-value shows that there was no significant difference between students and physiotherapists. To analyze the topic "Application learning", 3 questions were considered. In Fig. 7, the scores of both students and physiotherapists were excellent, with mean and median scores very close to the maximum value (9). To evaluate whether there was a difference in application learning between students and physiotherapists, we used the Wilcoxon test, finding W = 133 with p = 0.032. This p-value shows that there was a significant difference in learning between students and physiotherapists, probably due to the professionals' greater contact with the use of the Berg scale. It is worth mentioning the greater participation of students compared to physical therapists, a limitation of the study.

Fig. 5 General impressions of the use of the application

Fig. 6 Application screen evaluation

Fig. 7 Impressions of students and physiotherapists about the learning of the app

4 Discussion

The number of smartphone users is growing rapidly around the world, enabling greater use of these devices for purposes beyond messages and calls [17]. Thus, the developed application (App FisioBerg) proposed a dynamic so far not identified in the publications found on the Berg scale, as shown in Table 1.

To compare the capacity and differences of applications that test the degree of balance in the elderly using the Berg Scale, this research found that App1 [18], in addition to being easy to understand, has explanatory audio for each item and can share the results as graphs, in Excel or PDF format, by e-mail or other social networks. However, the amount of advertisements during the test makes it more time-consuming and uncomfortable to use, making the operator lose about 1 min per evaluation. App2 [19] has an unattractive design and is difficult to understand; it has no sharing function, and the results appear only as numbers, without reporting whether there is a risk of falls, making them impossible to understand for people outside the specific area. The App FisioBerg [20] has an easy-to-understand platform, because even without instructions the application itself is informative. Its differential is the Visual Analog Scale (VAS): at the end of the test, the person being evaluated reports the degree of pain they are feeling, making it possible to monitor not only balance rehabilitation but also pain control. The VAS is present in the App, but it is not the core of the App FisioBerg. The results showed that evaluations with the App are fast and pose no difficulty of understanding for either the evaluator or the person evaluated, excluding the need for training. Mobile devices have great capacity as instruments for health professionals; they are used daily in clinical practice, allowing low-cost execution and easy organization. The App FisioBerg [20] was successfully developed using MIT's App Inventor framework, being compatible with Android devices. The usability test performed using the QUIS questionnaire with physiotherapy students from the 10th period and physiotherapists showed wide approval in both groups, with median and mean score values very close to the maximum score. The factors evaluated by the QUIS questionnaire were: general impressions of the use of the application, application screen, and learning how to use the application. Comparing the studied groups, it was possible to notice a statistically significant difference in learning, which showed that physical therapists attributed values indicating greater ease than the group of students. The study had limitations, such as the need for the Android OS to be installed on the device. Another limitation of this work is the greater participation of students compared to physiotherapists and of women compared to men. Nevertheless, it appears that the use of new technologies to assess the risk of falls in the elderly is possible, despite the limitations of the study.

Table 1 Comparison between applications (X—present; NP—not present)

Function                            App1 (Escala de Berg, Sensing Future)   App2 (Escala de Equilíbrio Grátis, Alecrim Tech)   App FisioBerg
Language                            Portuguese BR                           Portuguese BR                                      Portuguese BR
Runs on smartphone/tablet           X                                       X                                                  X
Runs without internet (offline)     X                                       X                                                  X
Total number of questions           14                                      14                                                 15 (14 + VAS)
Features suggestions platform       X                                       NP                                                 NP
Patient registration tab available  X                                       X                                                  X
Features VAS scale                  NP                                      NP                                                 X
Shows statistics in graphs          X                                       NP                                                 NP
Advertisements                      X                                       X                                                  NP
Explanatory audio                   X                                       NP                                                 NP
Explains the origin of the scale    X                                       X                                                  X

5 Conclusion

The application developed proved to be functional after the tests, being a viable tool for use in the process of evaluating and monitoring the risk of falls and pain in the elderly, with flexibility to expand its scope to other functionalities, such as synchronization with electronic medical records. Thus, it is expected that the developed application can be used extensively by health professionals and students, as it is available free of charge to interested users.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Siqueira FV, Facchini LA, Silveira DS, Piccini RX, Tomasi E, Thumé E et al (2011) Prevalence of falls in elderly in Brazil: a countrywide analysis. Cad Saúde Pública 27(9):1819–1826
2. Cruz DT, Ribeiro LC, Vieira MT, Teixeira MTB, Bastos RR, Leite ICG (2012) Prevalência de quedas e fatores associados em idosos. Revista de Saúde Pública 46(1):138–146
3. Antes DL, Schneider IJC, Benedetti TRB, D'Orsi E (2013) Medo de queda recorrente e fatores associados em idosos de Florianópolis, Santa Catarina, Brasil. Caderno de Saúde Pública 29(4):758–768
4. Pereira GN, Bastos GAN, Duca GFD, Bós AJG (2012) Indicadores demográficos e econômicos associados à incapacidade funcional de idosos. Caderno de Saúde Pública 28(11):2035–2042
5. Granacher U, Muehlbauer T, Gollhofer A, Kressing RW, Zahner L (2011) An intergenerational approach in the promotion of balance and strength for fall prevention—a mini-review. Gerontology 57:304–315
6. Veras R (2009) Envelhecimento populacional contemporâneo: demandas, desafios e inovações. Revista de Saúde Pública 43(3):548–554
7. Coutinho ESF, Bloch KV, Coeli CM (2012) One-year mortality among elderly people after hospitalization due to fall-related fractures: comparison with a control group of matched elderly. Cad Saúde Pública 28(4):801–805
8. Pfortmueller CA, Kunz M, Lindner G, Zisakis A, Puig S, Exadaktylos AK (2014) Fall-related emergency department admission: fall environment and settings and related injury patterns in 6357 patients with special emphasis on the elderly. The Scientific World Journal, pp 1–6

9. Merlo A, Zemp D, Zanda E, Rocchi S, Meroni F, Tettamanti M et al (2012) Postural stability and history of falls in cognitively able older adults: the Canton Ticino study. Gait & Posture 36:662–666
10. Melzer I, Kurz I, Oddsson LIE (2010) A retrospective analysis of balance control parameters in elderly fallers and non-fallers. Clin Biomech 25:984–988
11. Guimarães LHCT, Galdino DCA, Martins FLM et al (2004) Comparação da propensão de quedas entre idosos que praticam atividade física e idosos sedentários. Revista Neurociências 12(2):68–72
12. Ribeiro F, Gomes S, Teixeira F et al (2009) Impacto da prática regular de exercício físico no equilíbrio, mobilidade funcional e risco de queda em idosos institucionalizados. Rev Port Cien Desp 9(1):36–42
13. Ministério da Saúde (2006) Política Nacional de Saúde da Pessoa Idosa. Portaria GM n. 2.528, de 19 de outubro de 2006
14. Maciel A (2010) Quedas em idosos: um problema de saúde pública desconhecido pela comunidade e negligenciado por muitos profissionais da saúde e por autoridades sanitárias brasileiras. Rev Med Minas Gerais 20(4):554–557
15. Miyamoto ST, Lombardi Junior I, Berg KO, Ramos LR, Natour J (2004) Brazilian version of the Berg balance scale. Braz J Med Biol Res 37(9):1411–1421. https://doi.org/10.1590/s0100-879x2004000900017
16. Da Silva R, Baptista A, Serra RL, Magalhaes DSF (2020) Mobile application for the evaluation and planning of nursing workload in the intensive care unit. Int J Med Inform 137. https://doi.org/10.1016/j.ijmedinf.2020.104120
17. Costa PHV, Amaral NS, Polese JC, Sabino GS (2018) Validade e confiabilidade de aplicativos de avaliação do movimento para smartphones: revisão integrativa. Revista Interdisciplinar Ciências Médicas 2(2):66–73
18. Sensing Future (2018) Escala de Berg v 4.0. Sensing Future at https://play.google.com/store/apps/details?id=com.sensingfuture.ricar.escala_berg&hl=pt
19. AlecrimTech (2019) Escala de Equilíbrio Grátis v 1.1.0. AlecrimTech at https://play.google.com/store/apps/details?id=alecrim.berg.scale.free&hl=en
20. Magalhães DS, Pinto FP, Gonçalves DC (2020) FisioBerg—Escala de Berg e Escala Visual Analógica (EVA). Brasil/SP, Patente BR512020000241-2

Real-Time Slip Detection and Control Using Machine Learning

Alexandre Henrique Pereira Tavares and S. R. J. Oliveira

Abstract

The handling and gripping of objects by a prosthesis depend on precise control of the applied force and on detecting slip of the grasped object. These two features combined allow the adjustment of the minimum grip force required to prevent slipping. Based on this, a system was developed to control the slip of objects, composed of a grip controller, whose objective is to hold the object, and of slip detection through the signal of a tactile sensor. An artificial neural network was used to identify the slip event for different types of objects. If the response from the classifier is positive, indicating slip, the system sends a signal to the grip controller so that it increases the grip force applied to the object, aiming at minimizing slippage. Finally, the performance of the system for different objects was analyzed; the result was that the system's efficiency is proportional to the mass and rigidity of the grasped object.

Keywords

Grip control · Slip detection · Machine learning

1 Introduction

Tactile feedback is particularly important when it comes to object manipulation, as it allows one to know the applied force when grasping an object. Additionally, tactile feedback allows for the identification of other physical properties

A. H. P. Tavares (corresponding author) · S. R. J. Oliveira
Federal University of Uberlândia, Av. João Naves de Ávila, 2121, 38408-196, Uberlândia, Brazil
e-mail: [email protected]

concerning the object to be manipulated, such as its shape and stiffness, and it also provides information on the interaction forces between the manipulator and the object (not only normal but also shear forces, which characterize the slip [1]). All of this is very useful in the control of prostheses. However, replicating the human tactile sensation to control grip strength still needs major development. With technological advancement, smarter hand prostheses have emerged; the implementation of tactile feedback in these applications is thus important for increasing precision and dexterity in handling objects, as well as for reducing the cognitive load on the user performing this task.

When it comes to object manipulation and maintaining grip stabilization, it is very important to detect the slip between the manipulator and the object, as this information provides feedback to the system, indicating whether the grip strength is sufficient to hold the object without slipping. The slip can be divided into two parts: imminent (or micro) and total (or macro) slipping. The first refers to the relative displacement that occurs in a narrow region of S, where S represents the contact area between the sensor and the object; the total (or macro) slip involves a large number of points of S. These definitions imply that imminent and total slips are temporally contiguous phenomena, with the imminent slip preceding the total slip [2].

A large number of studies are emerging in the area of slip identification applied to the control of grip strength on objects. Different types of tactile sensors can be found in the literature, for example optical [3], piezoelectric [4], acoustic [1, 5], piezoresistive [6] and photoelastic [7] sensors. Although research on slip sensors in hand prostheses started in the 1980s [8], there is still a long way to go before artificial tactile sensors match biological ones.
Although other authors use machine learning techniques to detect slipping [4, 6, 9, 10], they are linked to a more complex sensing system and use a pre-processing technique to refine data before classification regarding the presence of

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_202


slipping. With the aim of contributing to this area, this work proposes a simple system composed of only one acoustic sensor and a slip detection system using an artificial neural network (ANN) that receives the raw data from the sensor in real-time. It is therefore reasonable to place the sensor in direct contact with the surface S so that it can detect the slip. One can say that the larger the area S, the stronger the vibration wave generated by the friction between the sensor and the grasped object during slip. Besides, other physical characteristics influence the acoustic sensor's signal strength when slipping occurs, including the object's mass, stiffness and roughness [11]. Machine learning algorithms have the advantage of generalization, i.e., they can learn the patterns of the signals coming from the sensors and perform equally well on other signals similar to those of the training set [12]. By learning the signal patterns, machine learning algorithms are weakly susceptible to errors caused by noise inherent to the system [9], for example, the noise of the acquisition system.

2

Materials and Methods

In the developed system, an object is pressed against a tactile sensor and the grip strength is progressively reduced while the signal from the acoustic sensor is monitored. This signal is sent to a classifier in order to evaluate the efficiency of slip control in preventing the object from falling. Several experiments were carried out to evaluate slip control, using objects that differ in texture, mass and rigidity. Graphs and tables were then prepared to present the results. Next, the steps that constitute the development of this work are presented.
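As a concrete illustration of the acquisition settings used in this section (1000 Hz sampling of a 12-bit ADC over 0–3.3 V), the count-to-voltage conversion can be sketched as follows; the helper name is ours, not from the paper.

```python
# Illustrative only: mapping raw 12-bit ADC counts (0..4095) from the
# tactile sensor's acquisition chain to volts over the 0-3.3 V range.
ADC_BITS = 12
V_REF = 3.3   # full-scale reference voltage
FS_HZ = 1000  # sampling rate used in the experiments

def counts_to_volts(counts):
    """Convert a sequence of ADC counts to volts."""
    full_scale = (1 << ADC_BITS) - 1  # 4095
    return [c * V_REF / full_scale for c in counts]

print(counts_to_volts([0, 4095]))  # [0.0, 3.3]
```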

Fig. 1 System for controlling the grip force on an object

A. Grip Control
In order to perform grip control, a CNC machine was used, as shown in Fig. 1. This machine allows movement on three axes (X, Y, and Z); however, in this work only one axis was used to perform object gripping. A stepper motor controlled the movement of the manipulator (to which the tactile sensor was attached), and the object was positioned between the tactile sensor and a fixed bulkhead. In this way, the manipulator moves the sensor in the direction of the object, either pushing it away or pressing against it, as required by the experiment in progress.

B. Tactile Sensor
To detect the relative movement between the grasped object and the tactile sensor, an acoustic sensor (electret microphone) was used. Electret materials are those whose charge concentration can be changed by pressure or external deformation. In the electret microphone, the current at the output of the transistor (FET) varies in proportion to the waveform of the incident sound [13]. In addition, the FET amplifies the sound signal, which in this study represents an important advantage. For signal acquisition, the sampling rate (Fs) was set to 1000 Hz; the analog-to-digital converter has a range of 0–3.3 V with 12-bit resolution, and a fourth-order low-pass Butterworth anti-aliasing filter with a cutoff frequency of 500 Hz was used. Figure 2 shows the final appearance of the tactile sensor.

C. Grip Control and Signal Processing
For the grip control of the object and acquisition of the signal captured by the tactile sensor, a processor board based on the Atmel SAM3X8E ARM Cortex-M3 32-bit processor was used. During the experiments, this board was used to read the signal from the tactile sensor and to control the grip when grasping the object. A computer was used to receive the signals from the tactile sensor, sent from the processor

Fig. 2 Prototype of the developed tactile sensor


board, and to train a classifier algorithm to detect the slip. After training, the signals received from the tactile sensor are sent to the classifier on the computer, and its response is used to control the level of grip on the object in order to prevent it from falling.

D. The Classifier
The classifier consists of an algorithm that, based on a set of input data, organizes the data into classes so that, after training, the responses from the classifier can be used in the decision-making process. The classifier used in this study was developed using an Artificial Neural Network (ANN), defined as an interconnected structure of simple processing elements, in this case artificial neurons, with the ability to perform operations for the most diverse applications. In this study, the ANN was used as a classifier for the identification of slip moments; for this reason, the network output is binary (slip and non-slip). After comparing two of the best-known supervised learning classifiers (Support Vector Machine and feedforward neural network) [14], the latter achieved better results in this application and was therefore chosen. The number of input neurons was defined as equal to the number of samples within a data window. The sampling period was 1 ms and the data window was 40 ms, which results in 40 neurons in the input layer, with one neuron for each data sample. The data window was defined experimentally, considering the minimum possible number of neurons in the input layer and the minimum acquisition interval that allows the ANN to efficiently identify the slip. The number of hidden layers and the number of neurons in each of these layers were defined considering the efficiency in the identification of the slip and the computational cost.
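The network shape just described, 40 inputs (one per 1 ms sample in the 40 ms window) and a binary output, can be sketched as a plain feedforward pass. The hidden-layer sizes and random weights below are illustrative placeholders, not the trained network.

```python
import numpy as np

# Illustrative forward pass with the input/output shape described above:
# 40 inputs, a few hidden layers, 2 outputs (slip / non-slip).
rng = np.random.default_rng(0)
layer_sizes = [40, 10, 10, 10, 2]  # hidden sizes are placeholders
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(window):
    """Propagate one 40-sample window through the network."""
    a = np.asarray(window, dtype=float)
    for w in weights:
        a = sigmoid(a @ w)
    return a  # two activations, one per class

out = forward(rng.normal(size=40))
print(out.shape)  # (2,)
```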
Computational cost is an important point; the response time of the network is negligible in relation to the response time for activating the stepper motor when increasing the grip force to avoid slipping and the object falling.

E. Slip Indicator
During the ANN training process, the balance between the number of samples that represent slip and those that represent the absence of slip must be maintained. This balance is necessary so that the network does not exhibit a phenomenon called overfitting, in which the learning algorithm becomes more specialized in the recognition of a certain class (since it has learned more about that class), while the learning of the other class is compromised. To avoid this situation, a sensor was developed to indicate that the object has slipped. The structure of this sensor is shown in Fig. 3.

Fig. 3 Slip indicator sensor

This sensor consists of a base connected to a perforated rod whose holes pass through a photo-interrupter, which produces pulses as the grasped object falls. This means that if the slip indicator sensor is triggered, the sample at the input of the ANN was labeled as slip; if not, it was labeled as non-slip. This sensor allowed the data windows presented to the ANN to be organized with their respective targets for training, thus avoiding overfitting.

F. Classifier Training Parameters
The tactile sensor signal is divided into 40 ms windows, i.e., 40 data samples of 1 ms, already labeled as with or without slip. In the subsequent step, all these samples were presented to the classifier (ANN), so that it could learn one pattern for the samples referring to the slip event and another for the static signal samples (without slip). Thus, after the training phase, the algorithm should be able to determine whether a new sample represents the presence of slip or not. The training method used was supervised, i.e., during the learning process each sample was presented with its respective response (target). Each time a new sample is presented, the mathematical model of the classifier is


adjusted so that it learns that the sample is similar to those presented in class A (with slip), which should result in a class A output; samples similar to those presented in class B (without slip) should result in a class B output. The supervised training algorithm used was the Multi-Layer Perceptron.

G. Experimental Protocol
Before the data from the tactile sensor can be classified as to the presence or absence of slipping, the classifier algorithm must be trained to know which signals represent each state. For this reason, a protocol was established for the sensor data collection to acquire the targets of the classification algorithm. These targets are the classifier responses to each input datum. The protocol was developed as follows. Initially, the object is clamped between the tactile sensor and the fixed vertical bulkhead, located to the right of the object to be held (see Fig. 1). Then, the experiment begins with the stepper motor moving the tactile sensor slowly, at a constant speed of approximately 0.3 mm/s, away from the object (causing the grip between the tactile sensor and the grasped object to decrease) until the object falls. At the initial moment of the fall, the signal from the slip indicator sensor (photo-interrupter) changes, and at this moment the beginning of the slip is identified. Figure 4 shows the signals from the tactile sensor before and during the slip. The figure also shows the slip indicator output at the moment when the object falls. The experiment was carried out with 10 different objects, and each one was submitted to this protocol 30 times, totaling 300 collections. Figure 5 shows the objects used in the experiment, which were: cardboard box, rigid plastic cup,

Fig. 4 Tactile sensor signals and slip indicator

Fig. 5 Objects used in the experiment

golf ball, billiard ball, tennis ball, soft rubber ball, glass bottle, velvety cylinder (collation straw), wooden board and a thick piece of Ethylene-vinyl acetate (EVA) rubber. After training with the objects shown in Fig. 5, graphs and tables are presented to assess the efficiency in detecting slip in order to prevent the object from falling.
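The labeling procedure described above, with windows tagged by the photo-interrupter output, can be sketched as follows; the helper and data are illustrative, not the authors' code.

```python
# Illustrative: split the tactile signal into 40-sample windows and
# label each window 1 ("slip") if the photo-interrupter was active
# anywhere inside it, else 0 ("non-slip").
def label_windows(signal, slip_indicator, win=40):
    pairs = []
    for start in range(0, len(signal) - win + 1, win):
        window = signal[start:start + win]
        label = int(any(slip_indicator[start:start + win]))
        pairs.append((window, label))
    return pairs

sig = list(range(120))            # 120 ms of fake samples
ind = [0] * 80 + [1] * 40         # the object starts falling at 80 ms
print([label for _, label in label_windows(sig, ind)])  # [0, 0, 1]
```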

3

Results

A. ANN Setup
As previously described, the ANN has 40 neurons in the input layer, to receive a 40 ms acquisition window from the tactile sensor, and 2 neurons in the output layer, to represent the presence or absence of slip. To define the number of hidden layers, tests were performed with the number of layers varying from 1 to 5, with a fixed value of 20 neurons in each layer. The results are shown in Fig. 6. This graph shows that a satisfactory level of accuracy is obtained with 3 hidden layers; therefore, this number of hidden layers was chosen. Other reasons for this choice are given in the discussion. Then, the number of neurons in each hidden layer was varied, to verify whether the previously chosen value of 20 neurons really yields the best accuracy for the ANN operating with 3 hidden layers, as defined by the previous experiment. In this experiment, the number of neurons in the 3 hidden layers was varied over the values 5, 10, 20, and 30. The results are shown in Fig. 7. This figure suggests that 10 neurons in the hidden layers is the best choice. Other reasons for this choice are detailed in the discussion.


For each training, a validation test was performed for each of the 10 objects and repeated 20 times. The cross-validation (k-fold, k = 5) efficiency (using the recall metric) in identifying the slip is shown in Table 1.
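A minimal sketch of a k-fold (k = 5) recall evaluation of the kind described above, written out explicitly; `fit` stands in for training the ANN and is a hypothetical interface, not the paper's code.

```python
import numpy as np

def recall(y_true, y_pred):
    """Fraction of actual slip windows that were identified."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return tp / (tp + fn) if (tp + fn) else 0.0

def kfold_recall(X, y, fit, k=5):
    """Mean recall over k folds; 'fit' trains a model and returns a
    predict function (a stand-in for the ANN training step)."""
    folds = np.array_split(np.arange(len(y)), k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        predict = fit(X[train], y[train])
        scores.append(recall(y[test], predict(X[test])))
    return float(np.mean(scores))

# Trivial "model" that always predicts slip, just to exercise the loop:
always_slip = lambda Xtr, ytr: (lambda Xte: np.ones(len(Xte), dtype=int))
X = np.zeros((100, 40))
y = np.tile([0, 1], 50)
print(kfold_recall(X, y, always_slip))  # 1.0 (every slip window found)
```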

Fig. 6 Accuracy representation as a function of the number of hidden layers. The green asterisk symbol shows the average of each distribution

C. Operation Phase
The operation phase consisted of using the ANN weight matrix generated during training to identify slippage and thus control the grip on the object. In this experiment, the object was pressed between the tactile sensor and the fixed bulkhead. The grip force was decreased by the stepper motor, moving the tactile sensor away from the object, while data were collected and sent to the ANN. In the operation phase, the step between consecutive 40 ms windows was kept constant and equal to 10 ms, i.e., although each signal window is 40 ms long (40 data samples of 1 ms), each new window is the previous window shifted forward by 10 data samples, giving an overlap of 30 data samples between consecutive windows. Thus, every 10 ms a new data window was generated and sent to the ANN as input data. When the ANN indicates that there is slip in the input data (the newest 40 ms window), this information is used by the control system to increase the grip strength to prevent the object from falling. Ten tests were performed for each object, and the number of times the control system managed to prevent the object from falling was recorded. Table 2 shows the percentage of tests in which the system was able to prevent the object from falling.
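The sliding-window scheme just described (40-sample windows advanced by 10 samples) can be sketched as:

```python
# Illustrative: 40-sample windows advanced in 10-sample steps, so
# consecutive windows share 30 samples, as described above.
def sliding_windows(signal, win=40, step=10):
    return [signal[i:i + win]
            for i in range(0, len(signal) - win + 1, step)]

sig = list(range(60))
wins = sliding_windows(sig)
print(len(wins))                     # 3 windows (starts at 0, 10, 20)
print(wins[0][10:] == wins[1][:30])  # True: 30-sample overlap
```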

Fig. 7 Accuracy representation as a function of the number of neurons in each hidden layer

B. Training Phase
This procedure aimed at training the ANN so that it can identify object slip from the signal captured by the tactile sensor. The slip indicator sensor shown in Fig. 3 was used as the target for ANN training. In this training phase, two types of data windows were selected for each of the 10 objects shown in Fig. 5: the first containing 40 ms windows within the slip period (the period of high-level response in Fig. 4), and the other containing the same number of 40 ms windows for the absence of slip, preceding the slip occurrence. During training, when presenting each type of window, its respective target (desired output for the ANN) was used to adjust the ANN weights.

4

Discussion

Figure 6 shows that with 3 and 5 hidden layers the accuracy results are similar and the highest among all cases. As the number of hidden layers increases, so does the computational cost of the ANN; thus, it was decided to use 3 hidden layers. Figure 7 shows that with 10 and 20 neurons the difference in accuracy was very small; since fewer neurons mean a lower computational cost, the number of neurons in the hidden layers was set to 10. Analyzing the classification results presented in Table 1, the correct slip identification for the cardboard box and velvety cylinder objects, over the 20 tests, was quite low (10% of the tests). This result shows that the difference between the signals for the presence and absence of slipping was very small, and the classifier did not manage to differentiate them. This may be due to the low grip strength needed to hold these objects, added to the fact that these objects are the

Table 1 Percentage of correct classification in the identification of the slip, for the 20 tests

Objects             Correct classification in slip identification (%)
Cardboard box       10
Rigid plastic cup   100
Golf ball           100
Billiard ball       100
Tennis ball         100
Soft rubber ball    75
Glass bottle        80
Velvety cylinder    10
Wooden board        85
EVA rubber          20

Table 2 Percentage of tests in which the object falling was avoided

Objects             Percentage of tests in which the fall of the object was avoided (%)
Cardboard box       20
Rigid plastic cup   100
Golf ball           100
Billiard ball       90
Tennis ball         100
Soft rubber ball    80
Glass bottle        80
Velvety cylinder    30
Wooden board        80
EVA rubber          40

lightest among the samples; these facts may justify the low ANN accuracy in identifying the slip. As for the EVA rubber, the accuracy of 20% may be due to the same reason as for the cardboard box and the velvety cylinder, since it is also a light object that does not require great grip strength to hold. The low contact force between the object and the sensor therefore reduces the sensor's sensitivity to the sound vibration of the object sliding over its contact surface (S). The objects that showed 100% accuracy have high stiffness, moderate mass (compared to the other objects), and texture with a high coefficient of friction, and therefore yield good classifier responses. These results suggest that the efficiency of the classifier depends on the mass of the object, since the greater the mass, the greater the gripping force needed to hold the object and the greater the frictional force between the object and the sensitive surface of the sensor. The system performance results in Table 2 show that the developed system achieved good results in stabilizing the grip on more rigid objects; however, it did not behave as well for very soft and low-mass objects, as observed in the training phase.

This proved to be a limitation, since the sensor's operating principle is based on capturing the sound vibration generated by sliding between the sensor surface and the grasped object. When a soft or low-mass object slips, the amplitude and frequency of the vibration wave are attenuated and thus not well differentiated by the ANN, which compromises the slip detection and object stabilization needed to prevent it from falling.

5

Conclusions

We can conclude that the implemented system presented good results in controlling the grasp of some kinds of objects. However, for objects with low stiffness and low mass, the results were less satisfactory. This is mainly due to the limitations of the sensor: the signals indicating slip and no-slip were very similar, which compromised the accuracy of the classifier. As future work, analyses will be made to increase the intensity of the sensor signal at the occurrence of slip, and the machine learning algorithm will be improved to detect smaller signal variations.

Acknowledgements The authors would like to thank the School of Electrical Engineering, Federal University of Uberlândia (FEELT-UFU). They are also thankful for the financial support provided to the present research effort by CAPES.

Conflict of Interest The authors declare that they have no conflict of interest.


References

1. Francomano MT, Dino D, Guglielmelli E (2013) Artificial sense of slip—a review. IEEE Sens J 13(7):2489–2498. https://doi.org/10.1109/JSEN.2013.2252890
2. Srinivasan MA, Whitehouse JM, LaMotte RH (1990) Tactile detection of slip: surface microgeometry and peripheral neural codes. J Neurophysiol 63(6):1323–1332. https://doi.org/10.1152/jn.1990.63.6.1323
3. Silva AN, Thakor NT, Cabibihan JJ, Soares AB (2019) Bioinspired slip detection and reflex-like suppression method for robotic manipulators. IEEE Sens J 19(24):12443–12453. https://doi.org/10.1109/JSEN.2019.2939506
4. Fujimoto I, Yamada Y, Morizono T, Umetani Y, Maeno T (2003) Development of artificial finger skin to detect incipient slip for realization of static friction sensation. In: Proceedings of IEEE international conference on multisensor fusion and integration for intelligent systems, MFI2003, pp 15–20. https://doi.org/10.1109/mfi-2003.2003.1232571
5. Francomano MT, Accoto D, Morganti E, Lorenzelli L, Guglielmelli E (2012) A microfabricated flexible slip sensor. In: 4th IEEE RAS & EMBS international conference on biomedical robotics and biomechatronics (BioRob), pp 1919–1924. IEEE. https://doi.org/10.1109/biorob.2012.6290907
6. Dao DV, Sugiyama S, Hirai S et al (2011) Development and analysis of a sliding tactile soft fingertip embedded with a microforce/moment sensor. IEEE Trans Rob 27(3):411–424. https://doi.org/10.1109/TRO.2010.2103470
7. Dubey VN, Crowder RM (2006) A dynamic tactile sensor on photoelastic effect. Sens Actuators A 128(2):217–224. https://doi.org/10.1016/j.sna.2006.01.040
8. Childress DS (1985) Historical aspects of powered limb prostheses. Clin Prosthet Orthot 9(1):2–13
9. Suzuki Y, Miki D, Edamoto M, Honzumi M (2010) A MEMS electret generator with electrostatic levitation for vibration-driven energy-harvesting applications. J Micromech Microeng 20(10). https://doi.org/10.1088/0960-1317/20/10/104002
10. Schweitzer W, Thali MJ, Egger D (2018) Case-study of a user-driven prosthetic arm design: bionic hand versus customized body-powered technology in a highly demanding work environment. J NeuroEngineering Rehabil 15(1):1
11. Hendriks CP, Franklin SE (2010) Influence of surface roughness, material and climate conditions on the friction of human skin. Tribol Lett 37(2):361–373
12. Kotsiantis SB, Zaharakis I, Pintelas P (2007) Supervised machine learning: a review of classification techniques. Emerg Artif Intell Appl Comput Eng 160(1):3–24
13. Chu V et al (2013) Using robotic exploratory procedures to learn the meaning of haptic adjectives. In: 2013 IEEE international conference on robotics and automation. IEEE
14. Veiga F et al (2015) Stabilizing novel objects by learning to predict tactile slip. In: 2015 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE

Programmable Multichannel Neuromuscular Electrostimulation System: A Universal Platform for Functional Electrical Stimulation

T. Coelho-Magalhães, A. F. Vilaça-Martins, P. A. Araújo and H. Resende-Martins

Abstract

This work presents an Electrical Stimulator (ES) system able to connect to different external sensors and to perform different stimulation strategies that meet current research needs in FES-assisted activities, including cycling and walking. An ES with a 3-channel architecture was first developed and later evolved into an 8- and then a 12-channel system. Each of these channels can generate different pulse waveforms, biphasic or not, achieving a pulse amplitude of 100 mA and a pulse width of up to 1000 µs at a maximum frequency of 100 Hz. To enable protocols involving different stimulation arrangements for a specific muscle, an electrode multiplexing circuit was developed. Data obtained from the functional activity can be exchanged over a wireless interface with an external platform for later offline processing. Inertial Measurement Units (IMU) can be used to calculate the joint angles of the moving limbs. This paper presents a brief description of the hardware and the results obtained with the platform configured as a Foot Drop Stimulator. Gait phases were detected through mechanical and inertial sensors. The identification results were compared with data obtained from a gold-standard Qualisys/AMTI system. The developed system evidences its applicability in contexts such as walking and cycling, and it can also be used for clinical rehabilitation purposes.

Keywords: Functional electrical stimulation • FES-walking • Neuromuscular electrical stimulation • Foot drop stimulator

T. Coelho-Magalhães (corresponding author) · A. F. Vilaça-Martins · H. Resende-Martins
Laboratório de Engenharia Biomédica, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
P. A. Araújo
Escola de Educação Física, Fisioterapia e Terapia Ocupacional, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil

1

Introduction

Neuromuscular Electrical Stimulation (NMES) is a technique that employs electrical pulses to stimulate muscles that are weak or paralyzed [1]. Over the last six decades, NMES systems have been developed to help individuals with movement disorders, especially those due to neuropathies, to perform muscle contractions. Their functional application, known as Functional Electrical Stimulation (FES), has been extensively reviewed in the literature [2,3]. Rehabilitation programs assisted by FES have evidenced impact on pain, cardiorespiratory function, body composition and bone metabolism [4–7]. Despite the benefits of FES, the ability to perform any movement depends on numerous muscle physiological variables whose response to electrical stimuli is uncertain, nonlinear, and time-varying [8]. The literature suggests that FES systems should have their parameters (frequency, pulse width and intensity) changed dynamically, especially to reduce rapid muscle fatigue during FES exercises [9]. Furthermore, many functional activities require distinct muscle groups to be activated during the movement. These characteristics emphasize the need for electrostimulation systems to have multiple channels and to allow the adaptation of electrical stimuli to physiological and functional needs [10].

Commonly, subjects affected by stroke have foot drop as a neurological sequela. This deficit prevents them from raising the tip of the foot during the swing phase of walking [11]. Foot drop stimulators should, at a minimum, assist with this difficulty and, eventually, minimize the impact of the load response and assist the thrust through plantar flexor stimulation. Most commercial solutions available, however, perform stimulation only during the swing phase and do not support the excitation of different muscles [2].
Considering the use of these systems in scientific research, the development of a device that allows changing its applicability through specific programs, in addition to the easy

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_203


connectivity to various sensors and modules, seems an effective way to facilitate experiments and studies. In this context, we present a portable, programmable and miniaturized multichannel electrostimulation platform that can be used in several FES applications such as for foot drop correction and FES-assisted cycling. The goal behind the development is to create an electrostimulator architecture that allows research in different contexts.

2

Materials and Methods

2.1

Electrical Stimulator System Architecture

The system was designed to perform various functions and to act as a universal electrical stimulation platform. Different associations between hardware, firmware and software can be developed to meet functional and clinical applications. Initially, the platform was designed to have three independent electrical stimulation channels whose parameters are configured over a wireless interface. Functional applications such as walking, in which the system is supposed to be placed close to the moving limb, are possible due to the reduced dimensions. After validation of the designed topology, the system evolved into an eight-channel architecture and later into 12 channels, intended for use in FES-cycling experiments. Figure 1 shows the architecture of the 3-channel topology. It is structured in four main units: digital control, power supply, sensor interface and stimulation unit.

Digital Control Unit (DCU): responsible for the management of the electrical stimulation system. The processing capacity is based on a 32-bit ARM® Cortex-M7 low-power microcontroller, which features a wide range of peripherals and high processing capacity. These characteristics fit the requirements associated with information processing speed, sensor interfacing, communication and the possibility of battery operation. An external digital-to-analog converter (DAC) provides independent analog control signals which determine the electrostimulation pulse amplitude and waveform for each channel, allowing different muscle groups to be activated independently and simultaneously with different parameters. Pulse width and frequency can also be monitored through a feedback loop; any difference between the desired and measured pulses can be detected, including open and shorted electrodes.
Finally, the DCU features real-time sensor data processing, which enables the system to analyze information from body segments, such as joint angles, which can be used to detect events and trigger electrical stimulation. Data are also sent to an external platform through the Bluetooth communication interface for later analysis.

Power Supply Unit (PSU): provides the power necessary for system operation and the pulse energy required by each electrical stimulation channel. A DC/DC converter transforms a lower input voltage into a 100 VDC output, with a maximum continuous output current of 150 mA.

Stimulation Unit (SU): it has distributed power flow through independent current sources, which allows real parallelization of the stimulation channels. The SU architecture consists of a voltage-current converter, a Wilson current mirror, an H-bridge, and a feedback measuring circuit, featuring a constant-current source topology. This architecture overcomes the variability in pulse current magnitude due to load variations during clinical application or FES. The analog voltage signal generated by the DAC is converted into a reference current for the Wilson current mirror associated with the respective channel. It is possible to vary the pulse magnitude over time and, therefore, create different pulse waveforms. The H-bridge circuitry employs 4 power MOSFETs driven by high-performance half-bridge driver integrated circuits, which allows the current to flow through the load in both directions and, therefore, generate biphasic pulses. A non-commercial solution using a similar H-bridge configuration has already been proposed in the literature [12]. One of the channels can have its electrical stimulation pulses multiplexed along four different electrical paths. This feature allows electrode positioning optimization, in order to study the importance of this technique in reducing the fatigue characteristics of FES-induced muscle contractions.

Sensor Interface Unit (SIU): this unit can exchange data with open-closed mechanical switches, Force Sensitive Resistors (FSR), IMU, EIM and EMG modules. Regarding FES-assisted cycling, IMU modules are an alternative to the use of encoders. They measure and report acceleration, angular velocity, and eventually the surrounding magnetic field.
These mechanical measurements are given by the accelerometer, gyroscope and magnetometer, respectively. Within the context of this work, IMUs can be used to describe a body orientation, varying or not, over a certain interval of time. Thus, it is possible to detect events through joint angle analysis. This feature facilitates installation and eliminates the mechanical modifications required by systems using encoders. For the present work, specifically, IMU and FSR sensors were used to assess the system effectiveness during walking exercises. Hardware for conditioning the FSR signal was developed. Data collected from these modules are processed in real time, which enables the system to analyze information from body segments, such as joint angles, in order to detect events and trigger electrical stimulation. During the clinical session, the information obtained from each subject can be saved for offline processing.

203 Programmable Multichannel Neuromuscular Electrostimulation System …

Fig. 1 Initial 3-channel system architecture

T. Coelho-Magalhães et al.

2.2 Firmware

The firmware architecture was designed to meet the real-time requirements and to make the most of hardware resources. Firmware libraries were created for the NMES pulse management and for the functions of the peripheral units, which can also be used in future applications. For this experiment, a custom firmware was developed.
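As an illustration of the kind of pulse management such libraries handle, the timing of a balanced biphasic pulse train can be sketched as follows. This is a hedged sketch in Python, not the authors' embedded firmware; the function name and the edge-list representation are our assumptions.

```python
def pulse_train(frequency_hz, width_us, amplitude_ma, duration_s):
    """Yield (time_us, current_ma) edges of a balanced biphasic pulse train.

    Each period contains a cathodic phase, a charge-balancing anodic
    phase of equal width, and a rest interval until the next period.
    """
    period_us = 1_000_000 / frequency_hz
    for k in range(int(duration_s * frequency_hz)):
        t0 = k * period_us
        yield (t0, +amplitude_ma)             # cathodic phase starts
        yield (t0 + width_us, -amplitude_ma)  # balanced anodic phase
        yield (t0 + 2 * width_us, 0.0)        # rest until the next pulse
```

With the parameters used in the dorsiflexion test described below (33 Hz, 400 µs, 35 mA, 2 s), such a train contains 66 pulses.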

2.3 Tests Description

In order to validate the system topology, several tests were proposed, such as detecting specific movement patterns and performing functional movements assisted by NMES. Experimental tests were approved by the Local Ethics Committee of Brasilia University (ref. 1.413.934) and were performed on a healthy male volunteer with no history of neurological disorder. A maximum current intensity test aimed to verify the maximum stimulus intensity that can be produced before saturating the output current for a load of 1120 Ω. Pulse parameters were set to 100 Hz frequency, 500 µs width and 100 mA amplitude. A first experiment aimed to measure the range of motion associated with dorsiflexion as a result of the application of balanced biphasic square pulses to the tibialis anterior (TA). This movement, in simple terms, can be defined as the action of raising the foot upwards towards the shin. Square electrodes (5 × 5 cm) were placed at identified motor points. The movement was observed in the sagittal plane and the accelerometer was positioned in the forefoot region. The subject remained seated with both feet on the floor and the knee bent at 90°, and the evaluation started from the neutral position, i.e. at 0° of dorsiflexion. Parameters were set at 33 Hz frequency, 400 µs pulse width and 35 mA current amplitude. Five samples were taken, each considering the application of electrostimulation pulses for 2 s. To demonstrate the applicability of the system, a second test aimed to compare the gait information obtained from the FSR and IMU sensors with each other. Regarding the FSR sensors, customized insoles were developed with enough space to pass the wires and to prevent the pressure on the sensors from being tampered with. Two FSR models with large sensing areas were used in the project: a square one for the forefoot, with an active area of 50.8 mm × 50.8 mm; and a round one for the heel, with a diameter of 25.4 mm.
Since the greatest pressure is exerted on the heel, the metatarsal head and the hallux during the walk, it was necessary to adjust the position of these sensors according to the volunteer’s anatomy. These points are crucial for determining the end and beginning of the swing phase, when the heel touches the ground and the toe is removed from the ground, respectively. As adopted in the FSR method, another way to identify the gait phases was implemented by inertial sensors. The data

from the IMU module, including an accelerometer, were evaluated in real time to identify the specific gait phases, which, in turn, are necessary to trigger the electrical stimuli. A moving average filter was implemented in the processor's own firmware to eliminate angular variations at high frequencies. Static forces acting on a specific point of the body, and its inherent inclination, could therefore be quantified. The FES system, including the chosen accelerometer (MPU-6050, InvenSense), was placed just below the knee to gather joint angle data. The objective included assessing whether it would be possible to identify the initial and final moments of the gait swing phase from the angular variation in the sagittal plane only. Samples were collected from a subject walking on a treadmill at a speed of 2.8 km/h. No electrical stimulation was applied. The gait events considered in this study were: initial contact, representing the end of the swing phase; loading response, mid stance and terminal stance; and initial swing, representing the start of the swing phase. When turning on the device, the subject was advised to remain stationary in the vertical position for 5 s, so that the system could assess the mean static inclination of the module. An orientation offset of the equipment with respect to the ground could therefore be eliminated. In a third experiment, data processed from the IMU and FSR sensors were compared with those obtained from a gold-standard motion capture system with eight cameras (ProReflex MCU 240, Qualisys AB, Gothenburg, Sweden), which collects the positions of the lower limb segments and exports them to dedicated software for subsequent calculation of kinematic gait parameters. This system was integrated with two force plates (AMTI, Massachusetts, USA), which were embedded in the walkway. The force plates allow the detection of two events (heel strike and toe-off), essential to determine the gait cycle (stance and swing phases).
The objective was to validate whether the developed FES system could detect the swing phase correctly. To make the two systems work synchronously, an external trigger from the Qualisys system was used. Data were sampled at 100 Hz.
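The filtering and offset-removal steps described above can be sketched as follows. This is an illustrative Python version only; the window size, sampling rate default and function names are our assumptions, not the (unpublished) firmware implementation.

```python
def moving_average(samples, window=10):
    """Causal moving-average filter to suppress high-frequency angular variation."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out


def remove_static_offset(angles, fs_hz=100, calib_s=5):
    """Subtract the mean static inclination measured during the first
    calib_s seconds, while the subject stands still, to cancel the
    orientation offset of the module with respect to the ground."""
    n = min(len(angles), int(fs_hz * calib_s))
    offset = sum(angles[:n]) / n
    return [a - offset for a in angles]
```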

3 Results and Discussion

Regarding the system topology, the maximum electrical pulse amplitude achieved with the specified power source was 90 mA, which was sufficient to perform any of the planned experiments. It is worth emphasizing that, with a DC/DC converter of higher output voltage, the pulse amplitude could have reached values above those observed. The biphasic rectangular pulse is shown in Fig. 2. It was also found that the specified DC/DC converter supported the demand of the three channels stimulating simultaneously. The activation of the three channels was carried out with different parameters of frequency, current intensity and

Fig. 2 Maximum intensity test with the specified converter for a load of 1120 Ω

pulse width for each one. The 8- and 12-channel topologies have also been validated for a specific FES-cycling application; however, those results will be published in future works, which will explore the effects of FES-assisted cycling in athletes with paraplegia. The dorsiflexion range of motion measured was 16.41° ± 0.86°, which falls within the limits of the theoretical references. It is worth noting, however, that the experiments were carried out on subjects without any neurological conditions. The effects of rising and falling pulse ramps were not studied in this work. Figure 3 illustrates the gait phase detection during the gait of an individual. The first graph shows the angle Y measured in the sagittal plane; the second graph shows the detection of the gait phases through the IMU and FSR sensors, where value 0 corresponds to the end of the swing phase, phase 1 to the support phase, phase 2 to the end of the support phase, and phase 3 to the swing phase; the third graph corresponds to the pressures exerted on the force sensitive resistors placed on the heel and the toe. The area without pressure corresponds to the swing phase, in which the foot is moving in the air. Experiments with the developed system evidenced that gait event identification is possible not only by means of FSR sensors or switches but also by IMUs, which can detect the inclination of the individual's leg segment. Also, these data

were compared with the gold standard, showing that the detection of pressures both on the force platform (Qualisys system) and by the FSR sensors (FES-Walking) presented statistical evidence of simultaneous detection. The Spearman correlation test was performed to statistically compare the data obtained from the proposed system with data from the gold standard. A strong correlation was found (82%) and the results evidenced that the system identifies the swing phase events simultaneously (p < 0.01). The results, however, considered individuals with a normal walking pattern, which suggests future research in individuals with abnormal gait. Even though the proposed system evidences the possibility to identify the phases of gait correctly and effectively, some observations must be considered. Given that, for the forefoot, the greatest pressure is exerted on the metatarsal heads, it was decided to place a larger-area sensor in that location. A better approach would consist of a network of smaller sensors, assessing each point independently and complementarily. However, the information necessary to detect the swing and support phases in this study did not demand such precision, since the objective remains to evidence the capacity of the system to be connected to different sensors and to perform real-time data processing. As already reported in the literature, another limitation of FSR sensors is that they are subject to deformation as a result

Fig. 3 Analysis of gait phases by FSR and IMU

of the pressure applied to their structures, which can change their precision and accuracy over time [13]. There is also a possibility of deformation of the contacts or breakage of the solder connections, which could cause a sensing failure and, possibly, stimulation out of context. In more sophisticated systems for the treatment of drop foot, switches would have an obvious limitation, as they would restrict control architectures of greater complexity, which involve the activation of simultaneous channels. Data processed by the IMU module evidenced a specific pattern in the angular variation in the sagittal plane, which can be used to determine the gait events. The positive and negative angular variations point out the moments that are considered adequate triggers to start and stop the stimulation [14, 15].
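The comparison against the gold standard relied on a Spearman rank correlation. A minimal pure-Python version is sketched below; the function names are ours and the study's actual recordings are not reproduced here.

```python
def rank(xs):
    """Return 1-based ranks of xs, averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```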

4 Conclusion

Although many questions remain regarding the benefits and limitations of FES systems, the need for a device that aggregates different technologies in a single system, expanding research possibilities, is evident. In this context, we presented a device able to be connected to different external sensors and to perform different stimulation strategies that meet current research needs. It was found that the system is capable of producing muscle contraction not only in therapeutic applications but also in association with functional actions, such as walking, considering that the specific moments when the pulses should be fired were detected with statistical evidence. Several solutions for gait event detection have been suggested in the literature and some of them have been successfully tested in this project. However, the present work evaluated only the gait of healthy subjects, since its application to hemiparetic or hemiplegic subjects was not included in the project. It therefore remains for a future study to assess the system behavior in this context, evidencing its efficiency under unpredictable situations, contrary to a standard walking behavior. Future work will assess the system's performance in FES-assisted cycling activities applied to paraplegic subjects. Furthermore, parts of the system aimed at studying muscle fatigue, such as the electrode multiplexing circuit, have not yet been tested. Finally, it is worth noting the numerous possibilities involving hardware, firmware and software that can be developed with this system, even though this study has evaluated its efficiency in a walking context only.

Acknowledgements We would like to thank the Brazilian agencies FAPEMIG, FINEP, CNPq and CAPES for their financial support, the LEB/UFMG and LAM/UFMG laboratories and the Postgraduate programs in Electrical Engineering (PPGEE/UFMG) and in Rehabilitation Sciences (PPGCR/UFMG).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Barss TS et al (2018) Utilizing physiological principles of motor unit recruitment to reduce fatigability of electrically-evoked contractions: a narrative review. Arch Phys Med Rehabil 99:779–791
2. Melo PL et al (2015) Technical developments of functional electrical stimulation to correct drop foot: sensing, actuation and control strategies. Clin Biomech 30:101–113
3. Takeda K et al (2017) Review of devices used in neuromuscular electrical stimulation for stroke rehabilitation. Med Dev Evidence Res 10:207–213
4. Chuang LL et al (2017) Effect of EMG-triggered neuromuscular electrical stimulation with bilateral arm training on hemiplegic shoulder pain and arm function after stroke: a randomized controlled trial. J Neuroeng Rehabil 14:122–133
5. Tafreshi AS et al (2017) Modeling the effect of tilting, passive leg exercise, and functional electrical stimulation on the human cardiovascular system. Med Biol Eng Comput 55:1693–1708
6. Gorgey AS et al (2017) Abundance in proteins expressed after functional electrical stimulation cycling or arm cycling ergometry training in persons with chronic spinal cord injury. J Spinal Cord Med 40:439–448
7. Dolbow DR, Credeur DP (2018) Effects of resistance-guided high intensity interval functional electrical stimulation cycling on an individual with paraplegia: a case report. J Spinal Cord Med 41:248–252
8. Maffiuletti NA (2010) Physiological and methodological considerations for the use of neuromuscular electrical stimulation. Eur J Appl Physiol 110(2):223–234
9. Ibitoye MO, Hamzaid NA, Hasnan N, Abdul Wahab AK, Davis GM (2016) Strategies for rapid muscle fatigue reduction during FES exercise in individuals with spinal cord injury: a systematic review. PLoS One 11(2)
10. Bellman MJ et al (2016) Switched control of cadence during stationary cycling induced by functional electrical stimulation. Trans Neural Syst Rehabil Eng 24:1373–1383
11. Ho C et al (2018) Foot drop stimulators for foot drop: a review of clinical, cost-effectiveness, and guidelines. Canadian Agency for Drugs and Technologies in Health
12. Souza DC, Gaiotto MC, Nogueira GN, Castro MCF, Nohama P (2017) Power amplifier circuits for functional electrical stimulation systems. Res Biomed Eng 33(2):144–155
13. Lyons GM, Sinkjaer T, Burridge JH, Wilcox DJ (2002) A review of portable FES-based neural orthoses for the correction of drop foot. IEEE Trans Neural Syst Rehabil Eng 10(4):260–279
14. Breen PP, Corley GJ, O'Keeffe DT, Conway R, Laighin GO (2007) A programmable and portable NMES device for drop foot correction and blood flow assist applications. In: IEEE Engineering in Medicine and Biology Society, pp 2416–2419
15. Miura N et al (2011) A clinical trial of a prototype of wireless surface FES rehabilitation system in foot drop correction. In: Annual international conference of the IEEE Engineering in Medicine and Biology Society

Absence from Work in Pregnancy Related to Racial Factors: A Bayesian Analysis in the State of Bahia—Brazil A. A. A. R. Monteiro, M. S. Guimarães, E. F. Cruz, and D. S. F. Magalhães

Abstract

Concerning the growing participation of women in the labor market, there are few studies on the influence of work on pregnancy and on absence from work during pregnancy, which makes it difficult to develop public health policies for pregnant workers. This work evaluated the self-reported color/race and the absence from work in 502 puerperal women, aged 19 years or older and non-indigenous, at Manoel Novaes Hospital, in Itabuna, Bahia, Brazil, through the application of a form. A Bayesian network was created using the Bayesian Search (BS) learning algorithm. In the sample, 6 puerperal women declared themselves yellow (1.20%) and of these 2 (33.33%) were absent from work during pregnancy; 49 declared themselves white (9.76%) and of these 24 (48.98%) were absent from work; 322 declared themselves brown (64.14%) and of these 130 (40.38%) were absent from work; and 125 declared themselves black (24.90%) and of these 54 (43.20%) were absent from work. In the Bayesian analysis, the largest inference related to absenteeism was for the black race, with 46%, contrasting with the frequentist model, in which the white race has the highest rate of absenteeism; similarly, the lowest Bayesian inference was for the mixed race (mulatto), with 39%, whereas in the frequentist method the yellow race had the lowest rate. The collected data increase the knowledge of the main causes of absence in pregnancy, and the importance of the topic and the lack of studies justify further research. It was possible to create a Bayesian network from data collected from puerperal women using the BS learning algorithm and to infer about absenteeism considering the self-declared color/race as the input nodes. The Bayesian analysis of the absence from work during pregnancy is an important tool for the study of the topic and makes it possible to develop software for decision making by health professionals during pregnancy.

A. A. A. R. Monteiro, M. S. Guimarães, D. S. F. Magalhães: Scientific and Technological Institute, Universidade Brasil, Rua Carolina Fonseca, 235, São Paulo, Brazil. E-mail: [email protected]
E. F. Cruz: Electric Engineering Department, Universidade de São Paulo, São Paulo, Brazil

Keywords

Absenteeism · Pregnancy · Bioengineering · Self-reported race

1 Introduction

The role of women's work has undergone important changes, some in conjunction with remarkable events in the history of mankind, such as the French Revolution and the English Industrial Revolution in the eighteenth century, moving from a home-based activity to work in factories and other economic activities [1, 2]. According to the Brazilian Institute of Geography and Statistics (IBGE), since the 1970s women have been more effectively present in the labor market in Brazil [3]. Most Brazilian female workers are of reproductive age [4]. A usual-risk pregnancy does not contraindicate the continuity of most work activities, but the risks of the work environment and their repercussions on pregnancy must be known [5]. Knowledge about absence from work during pregnancy is important for planning preventive actions that aim to minimize the negative impact of work on pregnancy, enabling the construction of public policies to assist these women and to help the professionals assisting them. Research on absence from work during pregnancy is almost non-existent outside Scandinavia [6]. In Brazil, there are few studies on the frequency and causes of absence during pregnancy [7]. Socioeconomic differences, in

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_204

A. A. A. R. Monteiro et al.

particular the racial diversity in Brazil, make it difficult to compare with other countries, such as the Scandinavian ones. The relationship between race and absence from work in pregnancy is not addressed in studies in the literature. In some cases, the difference is assessed by comparing native and immigrant pregnant women, showing a higher frequency of absenteeism among immigrants [8]. The Bayesian approach originated in the work of the English cleric Thomas Bayes on conditional probabilities, which became known as Bayes' theorem [9]. Bayesian analysis has been used in studies in the area of gynecology and obstetrics, such as on pre-eclampsia and teenage pregnancy [10, 11]. This work aims to contribute to scientific knowledge about the interactions between work and pregnancy, in particular the relationship between self-reported race and absence from work during pregnancy.
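For reference, Bayes' theorem relates the posterior probability of a hypothesis A given evidence B to the prior:

```latex
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```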

2 Methods

This study is part of the MSc thesis “Absenteeism during pregnancy: Bayesian analysis of the main causes”, which was carried out after approval by the Research Ethics Committee of the Universidade Brasil, #28724620.9.0000.5494. The target population consisted of 502 voluntary mothers who had their childbirth at the Manoel Novaes Hospital of Santa Casa de Misericórdia de Itabuna, in the city of Itabuna, Bahia, Brazil, and who answered a form during their hospital stay. The form consisted of questions about the patients' professional occupation, socioeconomic data, the occurrence of absence from work during pregnancy and the reason for the absence. The socioeconomic variables evaluated were marital status, education level (schooling), self-reported race (race), family income and age. These variables are compatible with those used by IBGE in the last demographic census, conducted in 2010 [12]. As exclusion criteria, the following were NOT included in the research: patients under the age of 19, indigenous patients, and patients who refused to sign the consent form. Puerperal women without paid labor activity were included; this aimed to cover women who work as housewives and to identify the repercussion of pregnancy on these activities. The definition of the socioeconomic parameters, as well as of the input and output nodes and the edges (also called arcs), was carried out based on the evaluation of a specialist in Obstetrics. Five input nodes were defined, represented by the socioeconomic factors, and one output node, which was “Absence from work” (Fig. 1).

After data collection, a table was used to build the Bayesian network (Fig. 1) using the GeNIe® software [13], version 2.0, with the Bayesian Search learning algorithm. This tool has some advantages for this study: it is free software, presents a graphical interface, offers algorithms for learning, forecasting, diagnosis and decision optimization, and allows combining expert knowledge with the knowledge embedded in the data [14]. In the present study, we address part of the results obtained, specifically the relationship between self-reported race and absence from work.

3 Results

Initially, we emphasize that this paper deals with the influence of self-reported race on absenteeism; to carry out this Bayesian analysis, the Bayesian network represented in Fig. 1 was first constructed using GeNIe® 2.0 [13], where all socioeconomic factors were fed according to the data collected. Table 1 shows the profile of the sample collected and Table 2 shows the presence of absenteeism in the different races. After the construction of the Bayesian network, it was possible to run several simulations using the GeNIe® software. The scenarios in which only one of the self-reported races evaluated in this study was admitted were simulated separately, and the inference for absenteeism in each one was calculated. Figure 2 shows the simulation for the yellow race, Fig. 3 for the white race, Fig. 4 for the mulatto race and Fig. 5 for the black race. Thus, the inferences obtained for absenteeism were 42% for the yellow and white races, 39% for the mulatto race and 46% for the black race. As seen in Table 1, in the data collected the absenteeism rate was highest in the white race, with 48.98% of the puerperal women reporting absence during pregnancy, followed by the black race with 43.20%, the mulatto race with 40.38% and, finally, the yellow race with 33.33%. We thus see the difference between the frequentist method, where the data point to the white race as the most susceptible to absence from work and the yellow race as the least susceptible, and the Bayesian method, which points to the black race as the most susceptible and the mulatto race as the least susceptible.
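The frequentist rates quoted above follow directly from the sample counts. A quick check in Python (counts transcribed from the text; the variable names are ours):

```python
# (total respondents, number reporting absence) per self-reported race,
# transcribed from the sample described in the text.
counts = {
    "yellow": (6, 2),
    "white": (49, 24),
    "mulatto": (322, 130),
    "black": (125, 54),
}

# Frequentist absence-from-work rate, in percent, for each race.
rates = {race: 100 * absent / total for race, (total, absent) in counts.items()}
```

For example, `rates["white"]` is about 48.98, the highest frequentist rate in the sample.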

4 Discussion

The present study shows a difference between the results obtained with the frequentist method and with the Bayesian method regarding the influence of race on absence from work

Fig. 1 Bayesian network

during pregnancy. In the frequentist method, the white race was the one with the highest rate of absenteeism, whereas according to the Bayesian analysis the highest inferred absenteeism was in the black race. The Bayesian methodology has been proposed as an alternative to the frequentist method in the health area, where sample sizes are often small [15]. The medical literature on absence from work in pregnancy is quite limited, and no studies were found in which racial factors were examined in relation to their influence on absence from work. Most of the work on absence from work during pregnancy comes from Scandinavian countries, which have a very different racial distribution from Brazil. The present study points out that the black race is the most associated with absenteeism in pregnancy and the mulatto race the least associated when using the Bayesian analysis. However, due to the small number of yellow and black women interviewed, and because this research was conducted in a single center, in only one region of the country, a

straightforward extrapolation for the rest of the country is not recommended. Furthermore, puerperal women with indigenous self-reported race were not included in the research. It is important to note that this study has no interest in pointing out any type of racial supremacy, dealing only with analysis of data extracted from a questionnaire where the individual’s race was self-reported.

5 Conclusion

It was possible to create a Bayesian network from data collected from puerperal women using the “Bayesian Search” learning algorithm and to infer about absenteeism considering the self-declared color/race as the input nodes. The relevance of absence from work in pregnancy for the construction of public health policies for the protection of maternal health challenges the scientific community to expand knowledge about the theme. In a

Table 1 Sample profile from the applied forms

Socioeconomic variables              Puerperal totals

Marital status
  Married                            168 (33.46%)
  Cohabitant                         225 (44.82%)
  Single                             109 (21.71%)

Education level (schooling)
  Literate                           3 (0.60%)
  Incomplete 1st degree              42 (8.37%)
  Complete 1st degree                91 (18.13%)
  Incomplete high school             58 (11.55%)
  Complete high school               188 (37.45%)
  Incomplete college                 38 (7.57%)
  Complete college (Graduated)       73 (14.54%)
  Post-graduate                      9 (1.79%)

Color/race self-reported (race)
  Yellow                             6 (1.20%)
  White                              49 (9.76%)
  Mulatto                            322 (64.14%)
  Black                              125 (24.90%)

Family income

5 (1)

where (x, y) is the position in the space reached by the robot (x and y are given in meters). Figure 3 shows the robot and the child in the Workspace, representing the internal area in white, the observation area in gray, and the external area in dark gray. The robot has special actions according to the child's position in the Workspace. Figure 3 shows three possible positions of the child during the interaction with the robot, and the respective actions associated with them. In the first case, the child is in the external area, in which there will be no actions by the robot aiming to reduce d1. In the second case, the child is in the observation area, so the robot will respect the boundary of the internal area, disregarding the distance the child keeps within the observation area (called, in this work, observation distance do), and will monitor the movement of the child, always keeping the child in its heading direction and reducing the distance d2 to zero. Otherwise, the child is in the internal area, in which the robot will try to maintain an interaction based on the concepts of proxemic zones proposed by [12], using a specific control law to slowly approach the child, i.e. reducing the distance d3 to zero. Regarding social distances, studies conducted by Hall [12] defined four types of distances people keep during communication with others: intimate, personal, social, and public, where these distances change according to culture. In this work, proxemic zones are defined according to Eq. 2, where d is the distance between the robot and the child (values of d for each proxemic zone were adapted from [10]), and

206 Use Of Workspaces and Proxemics to Control Interaction …


Fig. 3 Representation of the Workspace. Observation distance do; desired distance di; and distances d1, d2, and d3 that will be reduced to zero, according to where the child is in the Workspace. The Workspace areas are defined in Eq. 1. The robot is represented as a point, disregarding its structure. The circle around the robot represents the limit of the proxemic zone in which the robot is

prox(d) is the proxemic zone associated with the distance d. Two auxiliary proxemic zones are defined: social/public and personal/social; these are used in the control law, associated with a specific state machine, to provide the robot with a smooth approach movement. The control law and the state machine are explained in the following sections. For safety reasons, the robot always keeps a distance d > 0.5 m from the child.

          ⎧ public            if d ≥ 4 m
          ⎪ social/public     if 3.5 m ≤ d < 4 m
prox(d) = ⎨ social            if 2 m ≤ d < 3.5 m     (2)
          ⎪ personal/social   if 1.5 m ≤ d ≤ 2 m
          ⎪ personal          if 0.5 m ≤ d < 1.5 m
          ⎩ intimate          if d < 0.5 m

2.1 State Machine

Figure 4 presents the state machine used in this work. In the beginning, prox(d) = public, meaning the robot is in a public zone, and the desired distance di is set to 3.5 m, with the intention of moving to the social/public zone.

Fig. 4 State machine used in this work to implement the concepts of proxemic zones. d is the distance that will be reduced to di, given in meters. di is the desired distance, also given in meters. t is the time it takes to confirm a change of state. Each rectangle represents a proxemic zone according to Eq. 2

To ensure an approach that gives the child the opportunity to distance himself/herself, the robot uses the auxiliary social/public zone, in which it waits for 5 s to confirm whether the child accepts its proximity. If the child does not move away from the robot, this is taken as acceptance, and the robot modifies the desired distance (di = 2 m) to reach the social zone. Otherwise, if the child moves away, the logic of the state machine sets prox(d) = public. The same logic is employed in the other proxemic zones, with different times and desired distances, as shown in Fig. 4.
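The zone classification of Eq. 2 can be sketched as a simple function. This is illustrative Python with thresholds taken from the text; the function name is ours, and the boundary d = 2 m (which Eq. 2 assigns to both social and personal/social) is resolved here in favor of personal/social.

```python
def prox(d):
    """Classify the robot-child distance d (meters) into a proxemic zone."""
    if d >= 4.0:
        return "public"
    if d >= 3.5:
        return "social/public"
    if d > 2.0:
        return "social"
    if d >= 1.5:
        return "personal/social"
    if d >= 0.5:
        return "personal"
    return "intimate"
```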

2.2 Control Law

The robotic platform used in MARIA and N-MARIA, and simulated on MobileSim,² is an Omron Adept MobileRobots Pioneer 3-DX. This robotic platform is a differential-drive (unicycle-like) robot (Fig. 5), with two motorized wheels and one free wheel. Instead of controlling the right and left speeds of the drive systems, the unicycle-like model uses v (linear velocity) and ω (angular velocity) as control parameters, as the robotic platform has a low-level controller that converts these velocities into commands for each motor.
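A sketch of one way to generate these velocity commands is shown below, using the law v = Kv d cos α, ω = Kω tanh α + (v sin α)/d, which is consistent with the Lyapunov derivative presented later in this section. The gains Kv and Kω, the function name and the small-d guard are our assumptions, not the paper's stated values.

```python
import math

def control(d, alpha, Kv=0.3, Kw=0.8):
    """Return (v, omega) for a unicycle-like robot, given the distance
    error d (m) and heading error alpha (rad); drives both toward zero."""
    v = Kv * d * math.cos(alpha)
    # tanh bounds the angular correction; the sin term cancels the
    # coupling between v and the heading-error dynamics.
    w = Kw * math.tanh(alpha) + v * math.sin(alpha) / max(d, 1e-6)
    return v, w
```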

2 MobileSim available at https://github.com/srmq/MobileSim.


G. P. D. Piero et al.

Taking V(d, α) = (d² + α²)/2 as a Lyapunov candidate function [13], the first time derivative is given by

V̇(d, α) = d ḋ + α α̇ = d(−v cos α) + α(−ω + (v sin α)/d) = −Kv (d cos α)² − Kω α tanh α
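The last equality holds if the control signals are chosen as v = Kv d cos α and ω = (v sin α)/d + Kω tanh α, which is the choice implied by the two resulting terms. A sketch of this control law follows; the gain values are not reported in this excerpt, so the numbers below are placeholders:

```python
import math

# Position-control law consistent with the Lyapunov derivative above:
# choosing v = Kv*d*cos(a) and w = v*sin(a)/d + Kw*tanh(a) yields
# Vdot = -Kv*(d*cos(a))**2 - Kw*a*tanh(a) <= 0.
KV, KW = 0.3, 0.8  # assumed gains (placeholders, not from the paper)

def control(d: float, alpha: float):
    """Return (v, w) for distance d and heading error alpha.

    The robot keeps d > 0.5 m at all times, so the division by d is safe.
    """
    v = KV * d * math.cos(alpha)
    w = v * math.sin(alpha) / d + KW * math.tanh(alpha)
    return v, w

def lyapunov_rate(d: float, alpha: float) -> float:
    """Vdot = d*(-v*cos a) + a*(-w + v*sin a / d), evaluated in closed loop."""
    v, w = control(d, alpha)
    return d * (-v * math.cos(alpha)) + alpha * (-w + v * math.sin(alpha) / d)

# Vdot is non-positive for any state, consistent with the derivation:
print(lyapunov_rate(2.0, 0.6) <= 0.0)  # True
```

Since α tanh α ≥ 0 for every α, both terms of V̇ are non-positive, which is what makes the candidate function decrease along the closed-loop trajectories.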

[Chart: percentage of participants at least satisfied per question (Q2: 100.00%, Q3: 83.33%, Q4: 83.33%, Q5: 66.67%; remainder not satisfied: 0.0%, 16.67%, 16.67%, 33.33%). Source: Author's collection, 2020]

Assisted Navigation System for the Visually Impaired

(48 × 48 × 15 mm) of the modules. This low level of approval was expected, since technical limitations inhibited further reduction of the 3D casing, given the need for a minimum internal space to accommodate the electronic components, such as the ultrasonic sensor, the wireless communication module, the main board containing the CPU, the vibration motor's drive circuit, and the motor itself. In contrast, the proposed system has dimensions smaller than other prototypes [16], such as a wearable wrist device with dimensions of approximately 100 × 60 × 30 mm. This was also expected regarding weight, since the peripheral modules weigh only about 30 g, while the central module weighs 45 g. A significant weight reduction is noted when compared to a belt-shaped prototype [17], weighing 480 g.

Regarding the intensity of the vibrations provided as vibrotactile feedback, 83.33% of the participants were at least satisfied. Thus, it can be inferred that the range of vibration intensity in relation to the proximity of obstacles was suitable for this application. Regarding the mobility assistance promoted by the prototype, 83.33% of the participants were at least satisfied. This points to the usefulness of the device, since the volunteers used the equipment for only about 10 min and this good approval rate was still obtained. Regarding confidence in mobility when using the device, at least 66.67% of the participants were satisfied. As with any assistive technology device (cane, guide dog, etc.), there is a need for a period of training and adaptation. This time is essential to build confidence in the user; since the period of use of the device by each participant during the experiment was minimal (a few minutes), the approval rate at this point is considered satisfactory.
With a minimal weight of 30 g (peripheral modules) and 40 g (central module), the modules proved comfortable in use, neither interfering with the individuals' movements nor requiring extra force, thus ensuring natural movement, as expected from a wearable device. The satisfaction rate above 80% shows that vibrotactile feedback was successful as a feedback technique for approaching obstacles, providing greater spatial information about the environment. The approval of 83.33% of the volunteers regarding mobility assistance, given the short testing period, indicates that the developed equipment has promising potential for application as an assisted navigation device. The satisfaction rate above 60% in terms of confidence in mobility indicates the possibility of a better evaluation in future studies with a longer period of device use, given that in this experiment more than 80% of the volunteers were satisfied with the mobility assistance provided by the prototype.


4 Conclusions

Through the interviews carried out on the usability of the prototype during the execution of the experimental protocol, it was possible to validate the hypothesis that the proposed system exceeds the minimum characteristics expected by the volunteers. The volunteers were satisfied with the great majority of the usability characteristics evaluated in this work, such as weight, the intensity of the vibrotactile feedback, the mobility assistance provided by the system, and reliability, with satisfaction rates above 65% in the worst case. Among the contributions of this work are the creation of a wearable wireless communication network and a wearable, compact, lightweight, low-power assisted navigation system with vibrotactile and auditory feedback. These characteristics make the developed system viable for the assisted navigation of the visually impaired, guaranteeing naturalness in the actions carried out on a daily basis. The developed system offers a low-cost design and uses devices that are easy to purchase on the market, making it accessible to visually impaired people of different social classes. In addition, it provides freedom of movement, ensures greater spatial perception, and contributes to improving the quality of life of these people.

Acknowledgements This work had the support and general supervision of the GPEB (Research Group in Biomedical Engineering) and the CEI (Center for Inclusive Studies), both from UFPE (Federal University of Pernambuco, Brazil).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. IBGE at https://www.ibge.gov.br/
2. WHO Global data on visual impairment at https://www.who.int/blindness/globaldatafinalforweb.pdf?ua=1
3. WHO World report on vision at https://apps.who.int/iris/bitstream/handle/10665/328717/9789241516570-eng.pdf?
4. Shoval S, Ulrich I, Borenstein J (2003) NavBelt and the guide-cane obstacle-avoidance systems for the blind and visually impaired. IEEE Robot Autom Mag 10:9–20
5. O'Keeffe R, Gnecchi S, Buckley S et al (2018) Long range LiDAR characterisation for obstacle detection for use by the visually impaired and blind. In: IEEE 68th electronic components and technology conference (ECTC), San Diego, CA, pp 533–538
6. Dhod R, Singh G, Kaur M (2017) Low cost GPS and GSM based navigational aid for visually impaired people. Wireless Pers Commun 92:1575–1589
7. Bai J, Lian S, Liu Z, Wang K et al (2018) Virtual-blind-road following-based wearable navigation device for blind people. IEEE Trans Consum Electron 64(1):136–143
8. Cardin S, Thalmann D, Vexo F (2007) A wearable system for mobility improvement of visually impaired people. Vis Comput 23:109–118
9. Patil K, Jawadwala Q, Shu FC (2018) Design and construction of electronic aid for visually impaired people. IEEE Trans Hum Mach Syst 48(2):172–182
10. Xiao J, Joseph SL, Zhang X et al (2015) An assistive navigation framework for the visually impaired. IEEE Trans Hum Mach Syst 45(5):635–640
11. MSP430x2xxx Texas Instruments datasheet at https://www.ti.com/lit/ug/slau144j/slau144j.pdf
12. HC-SR04 Ultrasonic Ranging Module datasheet at https://cdn.sparkfun.com/datasheets/Sensors/Proximity/HCSR04.pdf
13. nRF24L01 Single Chip 2.4 GHz Transceiver, Nordic Semiconductor datasheet at https://infocenter.nordicsemi.com/pdf/nRF24L01P_PS_v1.0.pdf
14. C1027B001D Jinlong Machinery and Electronics datasheet at https://www.vibration-motor.com/products/coin-vibration-motors/coin-vibrations-motors-with-brushes/c1027b001d
15. DFRobot DFPlayer Mini at https://wiki.dfrobot.com/DFPlayer_Mini_SKU_DFR0299
16. Ramadhan AJ (2018) Wearable smart system for visually impaired people. Sensors (Basel) 18(3):843. https://doi.org/10.3390/s18030843
17. Katzschmann RK, Araki B, Rus D (2018) Safe local navigation for visually impaired users with a time-of-flight and haptic feedback device. IEEE Trans Neural Syst Rehabil Eng 26(3):583–593. https://doi.org/10.1109/TNSRE.2018.2800665

Development of Bionic Hand Using Myoelectric Control for Transradial Amputees

C. E. Pontim, M. G. Alves, J. J. A. Mendes Júnior, D. P. Campos and J. A. P. Setti

Abstract

A bionic hand is an artificial device used to replace an amputated limb. Myoelectric prostheses have sensors able to acquire muscle signals through electromyography; the signals are processed, features are extracted and classified, and the result is translated into movements. This work presents the development of an active myoelectric prosthesis, fully designed in a CAD environment and controlled using surface electromyography (sEMG) signals. The design process took technical aspects into account, such as comfort, easy adaptation, and improvement in self-esteem, aiming at low rejection, and the prosthesis was manufactured using 3D printing. 3D manufacturing allows the development of custom projects, making it possible to print anatomically adapted prostheses. The control system used in this work was evaluated in open-loop operation and presented an accuracy on the order of 92.5% in the execution of movements by healthy subjects. As for market analysis, the developed prosthesis presented a low cost compared with commercial prostheses. Regarding the development aspects, its physical features presented high robustness and an innovative design.

Keywords

Bionic hand • Prosthetic hand • Surface electromyography (sEMG) • Assistive technology • Transradial amputees

C. E. Pontim (✉) · D. P. Campos · J. A. P. Setti
Graduate Program in Biomedical Engineering (PPGEB), Federal University of Technology - Paraná (UTFPR), Av. Sete de Setembro, 3165, Rebouças, Curitiba, Brazil
e-mail: [email protected]

J. J. A. Mendes Júnior
Graduate Program in Electrical Engineering and Industrial Informatics (CPGEI), Federal University of Technology - Paraná (UTFPR), Curitiba, Brazil

M. G. Alves
Graduating, Faculty of Medicine, State University of Rio Grande do Norte (UERN), Mossoró, Rio Grande do Norte, Brazil

1 Introduction

The hand is one of the most important parts of the human body, with which it is possible to identify shapes and textures and to perform movements and functions. Its absence affects the autonomy of the subject, limiting the ability to perform work, social, and daily activities [1]. Amputation can have two origins: congenital deficiency (when someone is born without a limb) and traumatic amputation (resulting from a surgical operation to remove a limb, or part of it, due to an accident or a pathology). In the upper limbs, amputation occurs at various levels, such as hand, trans-radial, and transhumeral amputations [2].

A bionic hand is an artificial device used to replace a lost limb. Prostheses can be divided into two types according to their actuation: passive and active. Passive prostheses are static devices without joints or mechanisms, aiming to restore the external aspect of the subject's body. Active prostheses are controlled by the user through some command [3]. It is important to highlight that, among the active prostheses, the myoelectric type consists of a shaft with a socket. The connection between the device and the patient is called the Human-Machine Interface. Typically, surface electromyography (sEMG) electrodes are embedded in the socket and placed over the residual muscles of the stump [4]. These sensors record the electrical activity of the muscle, which the patient can voluntarily activate through muscle contraction. The signals are captured by an electromyographic device and processed (segmented, features extracted, and classified), being translated into movements [5].

The sEMG signals are becoming common in biomedical research. They are used to assist in the recovery of amputees through prosthesis control [6,7], disease identification [8], and pattern recognition [9]. The sEMG frequency band lies between 10 and 500 Hz, depending on the person and how active their muscles are. Several pattern recognition algorithms have been studied, developed, and applied for sEMG processing [10–13]. It is worth mentioning that there are currently commercial prostheses available, e.g. Bebionic (Ottobock) [14], i-Limb (Touch Bionics) [15], and Open Bionics [16]. These models show an increasing similarity with biological limbs in terms of functionality and kinematic capabilities. Despite technological advances, studies carried out in the last two decades indicate high rates of rejection of upper-limb prosthetic devices [17–20] and injuries associated with excessive use of the amputated limb [21]. Other rejections are related to poor hardware functionality, design, resistance, and imprecise control [17].

Moreover, it is important to note that the use of 3D printing in medicine began to gain ground in the late 2000s and early 2010s, with the emergence of biocompatible materials (certified by ISO 10993). Applications of this technology include the printing of biomodels used for surgical planning, orthopedic implants, and prostheses. 3D printers allow the development of custom projects, making it possible to build anatomically adaptable prostheses [3]. 3D printing is part of a concept called rapid prototyping, which allows the use of different types of materials (such as metallic powders, thermoplastic polymers, photopolymers, and casting sand, among others). A part is built on a 3D printer layer by layer, with the deposition table displaced by the thickness of one layer at each step; the process is repeated until the piece is complete [2].

Thus, this work presents the development of a hand prosthesis with myoelectric control, able to perform four movements to assist in the daily life activities of trans-radial amputees. 3D printing techniques for the production of parts, as well as sEMG acquisition and processing using an armband device, were explored in this work.

© Springer Nature Switzerland AG 2022
T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_213

2 Materials and Methods

This section presents the data acquisition, the 3D prototyping, and the prosthesis control strategy.

Fig. 1 Prosthesis movements used in this work: palm out (PO), palm in (PI), finger extension (FA), and finger flexion (FF)

2.1 Data Acquisition and Processing

For data acquisition, the protocol was approved by the Human Research Ethics Committee of the Federal University of Technology—Paraná (CAAE 89638918.0.0000.554). The database consists of 10 healthy subjects, 6 males and 4 females, with ages ranging from 18 to 51 years, heights from 1.67 to 1.87 m, and weights from 52 to 102 kg.

The sEMG signals were obtained with the wearable Myo armband (Thalmic Labs). This device has 8 sEMG channels with 200 samples/s and 8-bit analog-to-digital converters. Before each collection, the device and each subject's forearm were cleaned in the approximate region where the device would be positioned. In the protocol, the armband sensors were identified and positioned in the posterior medial region, with channel 3 of the armband placed on the extensor muscle.

For data acquisition, four movements were evaluated: palm out (PO), palm in (PI), finger extension (FA), and finger flexion (FF), as presented in Fig. 1. Each person had three attempts to perform each movement, in random order. These gestures were executed with a defined time of 10 s of movement and 10 s of rest. To avoid errors in the gesture recognition process, each subject was seated in a chair with feet on the floor, legs apart, spine erect, forearms flexed at 90°, and the relaxed hand in a neutral position, so that synergistic activation of the muscles was avoided. The subjects were then asked to perform the movements naturally, avoiding fast or slow execution.

The proprietary Myo software was used for sEMG processing. Data are segmented by windowing the signal with 50% overlap and a 100-sample length for each of the 8 channels (an 8 × 100 matrix) [22]. The sEMG onset is detected by thresholding the Root Mean Square (RMS) value inside the segmentation window. If the signal surpasses a predetermined value, 15 features are extracted per channel to compose a feature vector, totalling 120 features for each window [22]. The features are the input of a Hidden Markov Model, which predicts the current state based on the previous one and the current feature vector.
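The segmentation scheme described above (200 samples/s, 100-sample windows, 50% overlap, RMS-thresholded onset) can be sketched as follows. The threshold value and function names are illustrative assumptions, not details of the proprietary Myo software:

```python
import math

FS = 200          # Myo armband sampling rate (samples/s)
WIN = 100         # window length per channel (0.5 s)
STEP = WIN // 2   # 50% overlap between consecutive windows

def rms(window):
    """Root Mean Square of one window of samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def segment(channel, threshold=0.05):
    """Yield (start_index, window) pairs whose RMS exceeds the onset threshold.

    `channel` is one sEMG channel as a list of floats; in the armband
    setup, 8 such channels form the 8 x 100 matrix per window. The
    threshold value here is an illustrative assumption.
    """
    for start in range(0, len(channel) - WIN + 1, STEP):
        w = channel[start:start + WIN]
        if rms(w) > threshold:
            yield start, w

# Example: a burst of activity in the middle of an otherwise quiet signal.
signal = [0.0] * 200 + [0.5] * 200 + [0.0] * 200
onsets = [start for start, _ in segment(signal)]
print(onsets[0])  # 150: first window that overlaps the burst
```

In the real pipeline, each window that passes the threshold would then feed the 15-features-per-channel extraction and the Hidden Markov Model described above.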

2.2 Development of Prosthetic Hand

The bionic hand design was inspired by available commercial prostheses [14,15], taking into account aspects of aesthetics and functionality. The prosthesis was fully developed in a Computer-Aided Design (CAD) environment, a process that considered technical aspects providing comfort, ease of adaptation, and improvement in self-esteem, aiming at a low rejection rate. An important characteristic of the human hand is the number of degrees of freedom of its movements. The degrees of freedom can be defined as the number


of independent coordinates needed to describe the position of a system [23]. The bionic hand has six degrees of freedom: one degree of freedom of flexion/extension in each finger (index, middle, ring, and little) and two degrees of freedom for the thumb (flexion/extension and adduction/abduction). For the index, middle, ring, and little fingers, the four-bar coupling method was chosen, widely used in the literature [24,25]. This method consists of four bodies called crank, coupler, rocker, and structure. Each finger has its own motor; when the motor rotates clockwise, the crank pushes the coupler to close the finger towards the palm of the hand, and vice versa. The thumb has a distinct actuation from the others, since it uses two motors to perform its movements of extension, flexion, adduction, and abduction. It is necessary to emphasize that mini DC motors with reduction gears were chosen, with a 300:1 ratio, HPCB 12 V, coupled by a set of worm gears (module = 0.5) and a worm transmission. The worm-and-crown system consists of a helical worm screw coupled to the motor shaft, where the circular movement generated by the screw moves an endless crown (or pinion), perpendicular to the motor shaft and fixed at the proximal end of the prosthesis.

Another point to be highlighted is the prototyping of the prosthesis, for which 3D printing techniques were used. The manufacturing methods of fused deposition modeling (FDM) and stereolithography (SLA) were applied, using polylactic acid (PLA) for FDM and gray liquid resin for SLA. Thus, all the components of the hand body were made in PLA, with a resolution of 150 µm per layer, except for the fingers, which were printed by SLA with a resolution of 50 µm per layer (Fig. 2).

Fig. 2 Bionic hand specifications: gray liquid resin (SLA), worm-screw gears, 5D carbon fiber wrap, PLA (FDM), mini DC motors (12 V, 300 RPM)

2.3 Control

The control system used in this work is an open-loop approach, in which the system provides no sensory feedback to the user (only visual feedback). It consists of applying a control signal at the input of a system with the expectation that the controlled output variable will reach a certain value or exhibit a desired behavior. In this context, the input of the system is the sEMG signal obtained by the Myo armband; signal processing occurs on an ATmega328 microcontroller together with the Myo armband, using the MyoDuino software. This software sends the information about the performed gestures; the microcontroller receives and processes this information through the MyoController library and determines the actuation of the DC motors that drive the prosthesis. The chosen DC motors are driven by Pulse Width Modulation (PWM). A DRV8833 dual motor driver module was used, which consists of a double H-bridge providing active braking, speed control, and direction control of the six motors independently. The power supply consists of a 3.7 V/2 Ah battery, an MT3608 DC step-up voltage regulator, and a TP4056 5 V battery charger. To evaluate the operating time of each motor, the activation and return times were measured, with three repetitions for each motor; the mean and standard deviation of each set of measurements were calculated.
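The open-loop dispatch from a recognized gesture to motor commands can be sketched as follows. Everything here is hypothetical (function names, channel numbers, duty values, and the gesture-to-motor mapping are ours); on the real prototype this logic runs on the ATmega328 driving the DRV8833 H-bridges:

```python
# Hypothetical open-loop mapping from a recognized gesture to per-motor
# PWM commands. Motor indices, duty values, and the mapping itself are
# illustrative assumptions, not the firmware of the prototype.

FINGERS = {"index": 0, "middle": 1, "ring": 2, "little": 3,
           "thumb_flex": 4, "thumb_abd": 5}

def set_pwm(motor: int, duty: float) -> str:
    """Stub for the H-bridge PWM command (duty in [-1, 1]; sign = direction)."""
    return f"motor {motor}: duty {duty:+.2f}"

def actuate(gesture: str):
    """Map a recognized gesture to motor commands (open loop, no feedback)."""
    if gesture == "FF":        # finger flexion: close the four fingers
        motors, duty = [0, 1, 2, 3], +0.8
    elif gesture == "FA":      # finger extension: open the four fingers
        motors, duty = [0, 1, 2, 3], -0.8
    elif gesture == "PO":      # palm out: assumed thumb abduction here
        motors, duty = [5], +0.8
    elif gesture == "PI":      # palm in: assumed thumb adduction here
        motors, duty = [5], -0.8
    else:                      # rest or unknown gesture: no actuation
        motors, duty = [], 0.0
    return [set_pwm(m, duty) for m in motors]

print(actuate("FF"))
```

The point of the sketch is the control flow: the classifier output selects a fixed set of motors and duty cycles, and no sensor closes the loop, matching the open-loop description above.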

3 Results

The physical characteristics of the prosthesis showed robustness and an innovative design, with the hand able to easily pick up a variety of domestic objects. The prosthesis presented the six degrees of freedom corresponding to flexion/extension in the five fingers, with the addition of adduction/abduction for the thumb. It is important to highlight that the choice of the four-bar mechanism, in conjunction with the mini DC motors with 300:1 gear reduction coupled to a worm gear system, was essential for executing suitable and precise movements of the bionic hand. The operating time of each motor was analyzed independently: index finger (0.99 ± 0.01 s), middle finger (1.02 ± 0.03 s), ring finger (0.98 ± 0.01 s), little finger (1.00 ± 0.06 s), extension/flexion of the thumb (0.99 ± 0.01 s), and adduction/abduction of the thumb (1.3 ± 0.15 s). Moreover, considering the data collected from all motors, the global average time was 1.06 s with a standard deviation of ±0.15 s.


It should be noted that the use of 3D printing has demonstrated significant potential for the development of the bionic hand. The choice of FDM and SLA 3D technologies provided a refined finish, mechanical resistance, and a weight of 352 g, considered light when compared to commercial prostheses (Bebionic V3, around 600 g, and i-Limb, between 478 and 528 g) [14,15]. Another important benefit is the low cost and easy availability of spare parts. However, it is not possible to obtain prostheses with a quality level equivalent to available commercial prostheses using 3D printing alone. In this sense, with the development of techniques, 3D technologies, and new materials, this trend may reach or even surpass that level of quality.

Regarding control, people with upper-limb amputations can control the hand using electromyographic pattern recognition. With the intelligent system of the prosthesis, it was possible to perform the movements proposed in this research, which obtained an accuracy of 92.5% in the execution of movements by healthy individuals. Figure 3 shows the confusion matrix, in which the hit rates of the gestures FF, FA, PI, and PO are analyzed; each row corresponds to a target gesture and each column to the output, showing which gesture was executed.

Fig. 3 Confusion matrix for the gestures PO, PI, FA, and FF used in this work. Blank cells mean null misclassified samples. [Recovered values: diagonal PO 30 (25.0%), PI 30 (25.0%), FA 26 (21.7%), FF 25 (20.8%); off-diagonal counts of 4 (3.3%), 4 (3.3%), and 1 (0.8%)]

Moreover, Table 1 presents the metrics obtained for each gesture from the confusion matrix.

Table 1 Metrics from the confusion matrix

Parameter      PO     PI     FA     FF
Accuracy       0.97   0.97   0.96   0.96
Sensibility    1      1      0.87   0.83
Specificity    0.95   0.95   0.99   1
Efficiency     0.98   0.98   0.93   0.92

It is noted that PI and PO are gestures with high sensibility, that is, high selectivity. This is because these gestures recruit antagonistic muscles [26], which may make them easier to recognize. The gestures FA and FF presented some misclassified samples, with sensibility between 83% and 87%. These gestures recruit muscles related to finger movements, which could cause some confusion in pattern recognition. However, for the accuracy metric, all gestures presented high hit rates, above 95%.

It should be noted that wearable devices such as the Myo armband can be easily handled and applied, due to their bracelet form. However, the Myo armband has a low sampling rate and a low bit depth in its A/D acquisition. In addition, for better processing and pattern recognition, more accurate signals would be required, because the sensors are not positioned at the ideal locations over the muscles of interest. Moreover, as the Myo armband is a commercial (although discontinued) device, it is closed (a black box), with no possibility of modifying the firmware. Nevertheless, the hardware may be sufficient for applications that do not need to consider several classes (gestures), or for evaluating prototypes in early development. Although the Myo software is widely used, some details about its processing are unknown (such as the feature vector and aspects of the signal processing), because the details about the device are only available in a patent document in which the authors do not reveal everything required to reproduce it. Therefore, it is important to evaluate the device and try to understand how the system works in order to improve it. Future works may develop the processing pipeline from scratch so that other users can modify and improve the signal processing of this globally spread device.
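The per-gesture metrics of Table 1 follow directly from one-vs-rest counts on the confusion matrix; the reported efficiency matches the mean of sensitivity and specificity. A sketch (the diagonal counts match Fig. 3, but the placement of the 9 misclassified samples is assumed here, since it cannot be fully recovered from the text):

```python
# One-vs-rest metrics from a 4-class confusion matrix (rows = target,
# columns = output). Off-diagonal placement is an assumption for illustration.

LABELS = ["PO", "PI", "FA", "FF"]
CM = [
    [30, 0, 0, 0],   # PO
    [0, 30, 0, 0],   # PI
    [2, 2, 26, 0],   # FA (assumed error placement)
    [2, 2, 1, 25],   # FF (assumed error placement)
]

def metrics(cm, i):
    """Accuracy, sensibility, specificity, and efficiency for class i."""
    n = sum(sum(row) for row in cm)
    tp = cm[i][i]
    fn = sum(cm[i]) - tp                              # missed class-i samples
    fp = sum(cm[r][i] for r in range(len(cm))) - tp   # others predicted as i
    tn = n - tp - fn - fp
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "accuracy": (tp + tn) / n,
        "sensibility": sens,
        "specificity": spec,
        "efficiency": (sens + spec) / 2,  # mean of the two, as in Table 1
    }

overall = sum(CM[i][i] for i in range(4)) / 120
print(f"overall accuracy: {overall:.3f}")  # 0.925, the reported 92.5%
for i, g in enumerate(LABELS):
    print(g, {k: round(v, 2) for k, v in metrics(CM, i).items()})
```

With these counts, the overall accuracy (111/120 = 92.5%) and the FA/FF sensibilities (26/30 = 0.87 and 25/30 = 0.83) reproduce the reported figures, while the remaining cells depend on the assumed error placement.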

4 Conclusion

Aiming to improve the quality of life of trans-radial amputees, this work presented the development of a myoelectric prosthesis capable of executing movements that can assist in daily life activities. As a result of this research, it is possible to infer that, with the use of a 3D printer, customized prostheses can be developed with precision, refined finishes, high mechanical resistance, and a low production cost ($392.00 to purchase the materials) compared with commercially available myoelectric prostheses. The bionic hand demonstrated precision in the movements performed and an aesthetic approximation to the anatomical hand. In this way, this


prosthesis could assist in the process of restoring the quality of life of trans-radial amputees. Future works will focus on the implementation of the intelligent system embedded in specific microcontrollers. Finger force control and additional degrees of freedom should also be considered. A signal processing and pattern recognition pipeline developed from scratch should also be pursued to ensure reproducibility, aiming at system improvement. Moreover, adding a haptic biofeedback system to the prosthesis, bringing tactile sensation to the user, may be a major advantage. In addition, robustness/resistance, motor accuracy, and speed control tests will be performed.

Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES)—Finance Code 001, by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), by Fundação Araucária (FA), and by Financiadora de Estudos e Projetos (FINEP).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Francesca C, Lisa CA, Rinaldo S et al (2016) Literature review on needs of upper limb prosthesis users. Front Neurosci 10
2. Ferreira D, Duarte T, Lino AJ, Isaac F (2018) Development of low-cost customised hand prostheses by additive manufacturing. Plast Rubber Compos 47:25–34
3. Pontim CE, Campos DP, Utida MA, Setti JAP (2019) Prototyping and control of a 3D printed bionic hand operated by myoelectric signals: toward accessibility for amputees. In: Anais do XII Simpósio de Engenharia Biomédica e IX Simpósio de Instrumentação e Imagens Médicas, Uberlândia, Brasil. Zenodo
4. Martin A, Matthias S, Clemens G et al (2019) Bionic hand as artificial organ: current status and future perspectives. Artif Organs
5. Oskoei MA, Hu H (2008) Support vector machine-based classification scheme for myoelectric control applied to upper limb. IEEE Trans Biomed Eng 55(8):1956–1965
6. Manfredo A, Matteo C, Henning M (2016) Deep learning with convolutional neural networks applied to electromyography data: a resource for the classification of movements for prosthetic hands. Front Neurorobot 10
7. Geng Y, Ouyang Y, Samuel OW et al (2018) A robust sparse representation based pattern recognition approach for myoelectric control. IEEE Access 6:38326–38335
8. Maxwell A, Li R, Yang B et al (2017) Deep learning architectures for multi-label classification of intelligent health risk prediction. BMC Bioinform 18:523
9. Olsson AE, Paulina S, Elin A, Anders B, Nebojša M, Christian A (2019) Extraction of multi-labelled movement information from the raw HD-sEMG image with time-domain depth. Sci Rep 9:7244
10. Resnik L, Huang HH, Winslow A, Crouch DL, Zhang F, Wolk N (2018) Evaluation of EMG pattern recognition for upper limb prosthesis control: a case study in comparison with direct myoelectric control, pp 1–13
11. Geethanjali P (2016) Myoelectric control of prosthetic hands: state-of-the-art review, pp 247–255
12. Engineering Procedia, available online at http://www.sciencedirect.com. 38:3547–3551
13. Angkoon P, Khushaba RN, Erik S (2018) Feature extraction and selection for myoelectric control based on wearable EMG sensors. Sensors (Switzerland) 18:1–17
14. Ottobock Bebionic V3. https://www.ottobockus.com/prosthetics/upper-limb-prosthetics. Accessed 22 May 2020
15. i-Limb, Touch Bionics. http://www.touchbionics.com. Accessed 22 May 2020
16. Open Bionics. https://openbionics.com. Accessed 22 May 2020
17. Vujaklija I, Farina D, Aszmann O (2016) New developments in prosthetic arm systems. Orthop Res Rev 8:31–39
18. Pylatiuk C, Schulz S, Döderlein L (2008) Results of an internet survey of myoelectric prosthetic hand users. Prosthet Orthot Int 31:362–370
19. McFarland LV, Hubbard Winkler SL, Heinemann AW, Melissa J, Alberto E et al (2010) Unilateral upper-limb loss: satisfaction and prosthetic-device use in veterans and servicemembers from Vietnam and OIF/OEF conflicts. J Rehabil Res Dev 47:299–316
20. Østlie K, Lesjø I, Franklin R, Garfelt B, Skjeldal O, Magnus P (2011) Prosthesis rejection in acquired major upper-limb amputees: a population-based survey. Disabil Rehabil Assist Technol 7:294–303
21. Elaine B, Chau TT (2007) Upper-limb prosthetics: critical factors in device abandonment. Am J Phys Med Rehabil 86(12):977–987
22. Stephen L, Matthew B, Aaron G (2014) Methods and devices for combining muscle activity sensor signals and inertial sensor signals for gesture-based control. Google Patents
23. James I, Konrad K, Ian H, Daniel W (2008) The statistics of natural hand movements. Exp Brain Res 188:223–236
24. Ahmed B, Richard A (2019) Myoelectric prosthetic hand with a proprioceptive feedback system. J King Saud Univ Eng Sci
25. Choi KY, Akhtar A, Bretl T (2017) A compliant four-bar linkage mechanism that makes the fingers of a prosthetic hand more impact resistant. In: 2017 IEEE international conference on robotics and automation (ICRA), pp 6694–6699
26. Mendes Junior JJA, Freitas MLB, Siqueira HV, Lazzaretti AE, Pichorim SF, Stevan SL (2020) Feature selection and dimensionality reduction: an extensive comparison in hand gesture classification by sEMG in eight channels armband approach. Biomed Signal Process Control 59:101920
Am J Phys Med Rehabil 86(12):977–987 Stephen L, Matthew B, Aaron G (2014) Methods and devices for combining muscle activity sensor signals and inertial sensor signals for gesture-based control 2014. Library Catalog: Google Patents James I, Konrad K, Ian H, Daniel W (2008) The statistics of natural hand movements experimental brain research. Experimentelle Hirnforschung. Expérimentation cérébrale. 188:223–36 Ahmed B, Richard A (2019) Myoelectric prosthetic hand with a proprioceptive feedback system. J King Saud Univ Eng Sci Choi KY, Akhtar A, Bretl T (2017) A compliant four-bar linkage mechanism that makes the fingers of a prosthetic hand more impact resistant. In: 2017 IEEE international conference on robotics and automation (ICRA), pp 6694–6699 Mendes Junior José Jair A, Freitas Melissa LB, Siqueira Hugo V, Lazzaretti André E, Pichorim Sergio F, Stevan Sergio L (2020) Feature selection and dimensionality reduction: An extensive comparison in hand gesture classification by sEMG in eight channels armband approach. Biomedi Signal Process Control 59:101920

Communication in Hospital Environment Using Power Line Communications

N. A. Cunha, B. C. Bispo, K. R. C. Ferreira, G. J. Alves, G. R. P. Esteves, E. A. B. Santos, and M. A. B. Rodrigues

Abstract

Data communication over the electrical network (Power Line Communications—PLC) has become an alternative to other forms of communication, as it offers, among other advantages, the possibility of reusing the electrical infrastructure of environments. In Establishments of Health Assistance (EHA), PLC is also attractive because it does not emit high-frequency radiation, which can affect imaging equipment. However, Brazilian EHA electrical installations are often neglected, and it is not known whether PLC is feasible in them. This work presents a microcontrolled system with PLC for evaluating communication in EHAs, which uses a remote communication system with a data storage platform. The system, built in hardware and software, was able to perform real-time monitoring of packets over the PLC network, verifying that it is possible to send and receive information without loss.

Keywords

Power line communication · Hospital · Network

1 Introduction

Data communication over the power grid, or Power Line Communication (PLC), is a versatile type of communication. It offers several advantages, among which the reuse of the existing energy supply and distribution structure stands out as immediate, since it requires no investment in implementing another structure for the same purpose [1]. Studies have shown that high-frequency radio signals in EHA can interfere with the operation of Medical and Hospital Equipment (MHE), specifically equipment that performs imaging diagnosis [2]. The challenge is to design a communication method that avoids these interferences. The use of PLC for data communication in healthcare units (EHA) is an interesting application because it does not emit high-frequency radio waves, as happens when using wireless network technology, for example [3]. Besides this, PLC avoids the need to install new cables (as Ethernet would require, for example) [4]. However, in Brazil there is a culture of not paying due attention to the maintenance of electrical networks in EHA [5]. In certain cases, the noise level present in these networks is exceedingly high and may make the application of PLC in these environments unfeasible [6]. This work proposes an electronic system prototype to evaluate the electrical network for the implementation of PLC in EHAs. The main objective is to serve as a reference and assist in deciding which technique to choose when implementing data networks in EHA.

N. A. Cunha (corresponding author), B. C. Bispo, G. J. Alves, G. R. P. Esteves, E. A. B. Santos, M. A. B. Rodrigues: Federal University of Pernambuco/DES, PPGEE, Av. Da Arquitetura S/N, Recife, Brazil
K. R. C. Ferreira: Federal University of Pernambuco/CB, PPGBAS, Recife, Brazil

2 Materials and Methods

2.1 Prototype

The prototype consists of a microcontrolled platform connected to PLC hardware, called the Sensor Module, as shown in Fig. 1a. In addition to the prototype, a computer was used as a server. The initial intent was to use a single type of PLC hardware suitable for operating both the Sensor Module and the server.

E. A. B. Santos: Polytechnic School of Pernambuco/POLI-UPE, Recife, Brazil

© Springer Nature Switzerland AG 2022
T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_214


For that, the interface needed to meet both needs. The AR7420 transceiver integrated circuit has an Ethernet interface compatible with the scope of this work [7]. The choice was suitable because every computer has this interface, and a microcontrolled platform could be chosen that either had it natively or offered the flexibility to couple it. Since several manufacturers offer PLC modules based on the AR7420, TP-LINK's TL-PA4010 was chosen for this work due to its ease of acquisition and compatibility with the proposed system. The AR7420 is compatible with the IEEE 802.3 and IEEE 1901 standards. It operates in a band up to around 67.5 MHz and can reach a peak data rate of 600 Mbps, depending on the quality of the environment in which it operates (the power line). The microcontrolled platform used was the ESP32-EVB, from the manufacturer Olimex. Each platform has an ESP32 microcontroller, from Espressif Systems. This development board has an integrated Ethernet interface that handles the network communication and interfaces with the PLC hardware. A photograph of the prototype is shown in Fig. 1b, with the power line plug highlighted in blue, the USB connector in yellow, and the operation indication LEDs in orange. The prototype is 10 cm high, 8 cm wide, and 13 cm long. For connection to the power line, a 3-pin C14 inlet socket was chosen. Three LEDs are connected to the circuit: a power LED, one indicating an active connection to the computer, and one indicating whether the communication is stable. A button was also added to reset the prototype.

2.2 Firmware

Fig. 1 a Block diagram representing the hardware of the prototype, in which the microcontroller is connected to the PLC module through Ethernet and the PLC module is connected to the electrical network; b photograph of the prototype, highlighting the plug for connection to the power line (blue), the USB connector (yellow), and the operation indication LEDs (orange); c flowchart illustrating the algorithm followed by the firmware developed for the prototype. Source Authors, 2020

The firmware was developed to perform the routine followed by the microcontroller, which is illustrated in the flow diagram in Fig. 1c. It starts by including the libraries and initializing the constants and variables. The serial port is also initialized for any debugging needs. After initialization, the procedures for connecting the system to Ethernet are called. With the Ethernet connection established, the connection to the server is established. Finally, the routine for assembling and sending packages begins, incrementing a counter. The packet-sending function uses the MQ Telemetry Transport (MQTT) protocol. The libraries used are related to Ethernet communication and MQTT. One of the constants in the code is a string containing an example of a communication package. It contains the module ID and seventeen samples. One variable present is the package counter, which is incremented with each transmission and used in the assembly of the final package.
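The package-assembly step described above can be sketched as follows. This is a minimal Python stand-in for the ESP32 firmware routine; the field layout (one ID, seventeen samples, one package counter) follows the paper, while the function name and the comma-separated encoding are illustrative assumptions.

```python
def build_package(module_id, samples, counter):
    """Assemble the 19-field package: ID, 17 samples, package number.

    Hypothetical encoding: fields joined by commas, as an illustration of
    the string constant described in the firmware.
    """
    if len(samples) != 17:
        raise ValueError("the package carries exactly 17 samples")
    fields = [module_id] + [str(s) for s in samples] + [str(counter)]
    return ",".join(fields)

# Example: module M1, all samples 512, first package
pkg = build_package("M1", [512] * 17, 1)
print(pkg.count(",") + 1)  # 19 fields
```

In the actual firmware, the resulting string would be published over MQTT after each counter increment.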

2.3 Test Protocols

To perform the tests, an example communication package was created according to Table 1 and sent to the server. It has nineteen (19) fields, a size chosen because it is suitable for IoT (Internet of Things) applications. The first field has character format and the others hold integer values. The first field is the Customer Module Identification (CMI). From the second to the eighteenth field are the samples to be sent. The last field holds the package number. For the communication tests in which the modules sent the signal to the server, two prototypes were built. The chosen sample set represented a triangular wave; this waveform was chosen because its shape makes it easy to detect the possible loss of samples (corruption of the package). An example of a package is found in Table 2, which shows the name of the module, followed by the samples and, finally, the package number. The points representing the triangular wave samples assumed values between 0 and 1024, spaced 128 units from each other, as shown in Fig. 2. The value of 128 was obtained by dividing 1024 by 8, which made the construction of the wave possible. The first sample was 512; the following samples ascended by 128 at a time until reaching 1024, then descended by 128 at a time until reaching 0, and the process then repeated to form the wave. Each prototype was adjusted to send a sample every 16 ms. The server received the packages and stored them in files, also recording the arrival time of each sample.

The first test was carried out at the Department of Electronics and Systems (DES) of the Federal University of Pernambuco (UFPE). The rooms where the system was

tested are illustrated in the diagram in Fig. 3a. The DES was chosen because it is a controlled environment that did not present any relevant noise in the electrical network; for these reasons, it was considered a suitable environment for the PLC test. The server was connected to a point in the electrical network of the laboratory where the prototypes were developed and assembled, the Human Machine Interface Laboratory (LIHOM), and the prototypes were connected to the electrical network of a nearby professor's room (room 410), about 20 m away. The experiment lasted approximately 50 min. The second test was performed in the Bronchoscopy sector of Hospital das Clínicas (HC-UFPE), the intervention environment, whose room layout is illustrated in the diagram in Fig. 3b. In this sector, two types of imaging-based exams are performed: bronchoscopy and transesophageal echocardiogram. Equipment for imaging diagnosis inserts noise into the electrical network, making PLC unstable; for this reason, the sector was chosen for the tests. Each exam was performed in its respective room. The server was connected to the electrical network of the reception room, one prototype was attached to the electrical network of the room where a transesophageal echocardiogram was performed, and another prototype was in the electrical network of the bronchoscopy room. The

Fig. 2 Graph representing the set of samples used to conduct the PLC communication tests. Source Authors, 2020
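The triangular test wave plotted in Fig. 2 can be generated directly from the rule described in the protocol (start at 512, climb by 128 to 1024, fall by 128 to 0, and repeat). A minimal sketch, with illustrative function and parameter names:

```python
def triangular_samples(n, start=512, step=128, lo=0, hi=1024):
    """Return the first n samples of the triangular test wave."""
    out, value, direction = [], start, +1
    for _ in range(n):
        out.append(value)
        if value == hi:
            direction = -1   # reached the peak: start descending
        elif value == lo:
            direction = +1   # reached the bottom: start ascending
        value += direction * step
    return out

# The 17 samples carried by one package (fields 2-18 of Table 1)
print(triangular_samples(17))
# [512, 640, 768, 896, 1024, 896, 768, 640, 512, 384, 256, 128, 0, 128, 256, 384, 512]
```

Because each sample differs from its neighbor by exactly 128 within the 0–1024 range, any corrupted or missing sample breaks this pattern and is easy to detect on the server side.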

Table 1  Sample package model for PLC validation

  Field 1: ID (Customer Module Identification)
  Fields 2–18: Samples (first sample: 512)
  Field 19: No (package number, e.g. 1)

Source: Authors, 2020

Table 2  Representation of the sample package for PLC validation

  ID: M1
  Samples: 512, 640, 768, 896, 1024, 896, 768, 640, 512, 384, 256, 128, 0, 128, 256, 384
  No: package number


3 Results and Discussion

The first test, carried out at DES-UFPE, produced the results shown in Table 3. The average period between received packets approached 16 ms. There was no loss of packages or changes in them. The results show that the DES electrical network can be considered a reference for the same test carried out on the HC-UFPE electrical network. The second test, performed at HC-UFPE, produced the results shown in Table 4. The experiment lasted one hour and forty-eight minutes. As at DES, there was no loss of packages or samples, and there was no delay in the communication. The test showed that the electrical network of the sector is viable for PLC implementation, even with imaging diagnosis MHE, which makes the electrical network hostile to this type of communication. The tests have shown that PLC is possible in a hospital environment considered hostile to it. Just as wireless communication loses speed in the presence of barriers, PLC speed also decreases with the quality of the electrical network [7]. Even so, the use of PLC in EHA is more appropriate, as it does not interfere with diagnostic imaging MHE. Compared with Ethernet, PLC is preferable because it reuses

Fig. 3 a A diagram representing the section of the DES rooms where the first test was carried out, showing in which rooms the server (highlighted in yellow) and the prototypes (P1 and P2) were positioned; b a diagram representing the rooms of the HC Bronchoscopy sector, showing the positioning of the prototypes (P1 and P2) and the server (highlighted in yellow). Source Authors, 2020

Table 3  Results related to the test performed at DES

                                 Client 1    Client 2
  Duration                       51 min      51 min
  Number of received packets     190,849     192,995
  Average packet reception time  16.010 ms   16.017 ms
  Packet losses                  0           0
  Changed samples                0           0

Source: Authors, 2020

distance between the prototype in the transesophageal echocardiogram room and the server was around 10 m, and between the second module in the bronchoscopy room and the server approximately 20 m. During the test, the Medical-Hospital Equipment (MHE) in the examination rooms was switched on, simulating a diagnosis in progress. The test duration was approximately the time it took to perform two exams.

Table 4  Results related to the test performed in the HC Bronchoscopy sector

                                 Client 1     Client 2
  Duration                       1 h 48 min   1 h 48 min
  Number of received packets     406,011      405,122
  Average packet reception time  16.008 ms    16.016 ms
  Packet losses                  0            0
  Changed samples                0            0

Source: Authors, 2020


the infrastructure of the electric network that already exists in the EHA. Moreover, given the chosen example package and the transmission frequency, the tests suggest that PLC is viable for most IoT applications (Table 4).
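The loss counts reported in Tables 3 and 4 follow directly from the package counter carried in field 19: any gap in the received counter sequence marks lost packages. A minimal sketch of that server-side check (function name is illustrative):

```python
def count_losses(packet_numbers):
    """Count packets missing from a monotonically increasing package counter,
    as recorded by the server from field 19 of each received package."""
    losses = 0
    for prev, cur in zip(packet_numbers, packet_numbers[1:]):
        losses += cur - prev - 1  # gap size between consecutive counters
    return losses

print(count_losses([1, 2, 3, 5, 8]))  # 3 missing (packages 4, 6, 7)
```

Applied to the recorded files of both tests, this count was zero for every client.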

4 Conclusion

This work showed how a prototype was developed and assembled for the evaluation of PLC in EHA. The tests carried out with the prototypes showed that, even with the limitations of the electrical network of the chosen EHA sector, it was possible to communicate data using PLC. They also showed that PLC at that EHA worked as stably as in the controlled environment (DES), where the first test was carried out. In other words, the hospital electrical network considered unsuitable for PLC did not present any relevant interference for this type of communication. The results therefore show the viability of PLC instead of a standard Ethernet network when infrastructural changes are not desirable, or even possible, since PLC does not require the installation of a wired structure for communication. Proof of the viability of PLC communication in the EHA sector also implies greater feasibility relative to wireless communication, as it prevents the spread within the hospital environment of the high-frequency electromagnetic radiation characteristic of wireless communication, which can cause noise in diagnostic imaging equipment. In the future, the prototypes can be used to verify the viability of PLC in other EHA sectors also considered hostile to it.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Ahn J, Heo J, Lim S et al (2008) A study of ISO/IEEE 11073 platform based on power line communication. In: 14th Asia-Pacific conference on communications, pp 1–4
2. Qingping W, Shen K (2011) Design of medical ward call system based on power line carrier technology. In: International conference on system science, engineering design and manufacturing informatization, vol 2, pp 101–104
3. Ishida K, Hirose M, Hanada E (2016) Investigation of interference with medical devices by power line communication to promote its safe introduction to the clinical setting. In: International symposium on electromagnetic compatibility, pp 818–822
4. Ding W, Yang F, Yang H et al (2015) A hybrid power line and visible light communication system for indoor hospital applications. Comput Ind 68:170–178
5. Oliveira B, Starling C, Andery P (2015) Gestão do processo de projeto de instalações elétricas em empreendimentos hospitalares: estudo de caso. Gestão & Tecnologia de Projetos 10:47–60
6. Pinto M, Pimenta C, Moreno L (2016) A CMOS power line communication for EEG. In: 28th international conference on microelectronics, pp 41–44
7. AR7420 IEEE 1901 compliant HomePlug AV MAC/PHY transceiver: datasheet. Qualcomm Atheros. https://www.codico.com/fxdata/codico/prod/media/Datenblaetter/AKT/AR7420+AR1540%20Product%20Brief.pdf. Accessed 15 May 2019

Influence of Visual Clue in the Motor Adaptation Process V. T. Costa, S. R. J. Oliveira, and A. B. Soares

Abstract

Understanding the functioning of behavioral mechanisms related to neuromotor adaptation is becoming crucial to designing improved rehabilitation. Motor adaptation is a trial-and-error process of adjusting movement to new demands, modifying the movement through judgment based on feedback. To maximize performance, forces can be generated stochastically to disrupt movement, dependent on speed or position, destabilizing fluid movement and forcing a motor adaptation process. Given this reality, it is essential to explore how motor adaptation processes occur, so that one can learn to predict the sensory consequences of motor commands. The objective of this study was to verify how, and whether, learning occurs when visual cues are applied, in the face of force field disturbances, during predetermined trajectory movements, and whether these factors contribute to the adaptation of sensorimotor behavior. Using a serious game associated with a robotic platform for upper limbs (based on the 'H-man' model), 10 volunteers were evaluated while performing a movement segment containing force field variations and visual cues about the intensity and direction of the field. It was found that the learning process occurs independently and that, for a single movement, there may be more than one adaptation.

Keywords

Adaptation · Learning · Sensorimotor · Robotic platform

V. T. Costa (corresponding author), S. R. J. Oliveira, A. B. Soares: BIOLAB (Biomedical Engineering Laboratory), Federal University of Uberlandia, Uberlandia, Brazil

1 Introduction

Moving and interacting in the world requires rapid processing of the visual environment to identify potential motor objectives, select a movement, and ultimately move promptly [1, 2]. Understanding how we learn movement and how we use tools is one of the central challenges in neuroscience, with applications in both rehabilitation and performance enhancement [3, 4]. There is growing evidence that sensory feedback is rapidly integrated into motor decisions. Sensory feedback is integrated with high-level behavioral goals to make quick decisions about how to move, interact, and adapt in the environment [5–8]. What drives adaptation? Cerebellar damage is known to consistently compromise this process, decreasing improvements during adaptation when a new demand is imposed and decreasing the aftereffect when the demand is removed. The type of error information that drives adaptation is not yet fully clear, although recent work has shown that sensory prediction errors (that is, the difference between observed and predicted motion) are sufficient to direct adaptation and that motor corrections made after errors do not alter the adaptive process. In other words, seeing the error in one trial is sufficient to affect the next trial, and correcting the error does not improve this process [9, 10]. Learning and adaptation processes are still being consolidated in the literature on the perception of action underlying the user's skills. Another important aspect of voluntary motor control is the ability to inhibit a motor action. When instructed, a person can easily reach spatial targets as they appear in the workspace. In contrast, it can be difficult to reach a target when instructed to move in the opposite direction. In this counter-reach condition, subjects may produce initial erroneous motor responses toward the spatial targets and are delayed in moving in the opposite direction. This task requires the voluntary suppression of an automatic response to reach the target and involves many areas of the brain, including the frontal and parietal cortex. This ability to move can be

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_215


impaired in people with stroke, mild cognitive impairment, Alzheimer's disease, and a history of concussion [5, 11–17]. Several paradigms have been created to discover how humans learn the dynamics of manipulating objects [18–20]. Seeking reward and avoiding punishment are powerful motivational factors that shape human behavior. Traditionally, motor adaptation has been seen as an implicit process that is not affected by motivational feedback [21]. Other studies also point out that motor adaptation generally depends on externally imposed disturbances to induce errors in behavior, which seems to be a promising line for better understanding how humans adapt to motor interference [22]. To test how upper limb motor adaptation works, an experiment was created that required volunteers to update their reach control to compensate for, or optimize their movement under, a disturbing resistive force field, together with a visual cue giving information on how the disturbance would operate. Two groups were compared, one of which received visual cues indicating where and how the force field would be applied; it was therefore possible to see whether these visual clues could help or hinder the motor adaptation process.

2 Materials and Methods

2.1 Equipment

To perform the movements, the volunteers used a differential robotic manipulator based on the H-man model [23]. The manipulator supported the dominant arm in the horizontal plane, with the shoulder at 45° of transverse flexion and the elbow flexed at 90° (Fig. 1a). Participants controlled the task by exerting forces with the elbow flexor muscles to move the handle to the indicated region. The participants needed to control the position of the cursor on the X and Y axes; movement impedance was applied by two motors that controlled the axes, with their forces measured by torque sensors. The participants had the objective of reaching the goal and stopping the cursor over the specified region. In all experiments, volunteers controlled a cursor through the robotic platform, which tracked movement through two encoders located on the motor shafts. During the experiment, the participants' view of the manipulator was blocked, to keep their full attention on the monitor (Fig. 1b).

Fig. 1 Experimental setup and procedure. a How the volunteer was positioned at the manipulator. b Procedure when the volunteer's vision of the manipulator was blocked

2.2 Experimental Protocol

The study was conducted in 5 phases (Fig. 2a); each phase contained 2 or 3 blocks (depending on the phase), and each block had 50 reaches, in which participants had to cover a distance of 15 cm between the start and the target. A disturbance was generated in the form of a 7.5 Nm semi-sinusoidal force, and a visual clue was shown on the monitor as a dashed line indicating how the force field was being applied (Fig. 2d). Each block was separated by a 1 min rest. The phases were divided as follows:

Baseline: two blocks, with 50 trials in each series, without disturbance and without a visual clue. This phase let the volunteer learn the movement to be evaluated.

Adaptation: three blocks with force field perturbation. Depending on the group, a visual clue was shown as a dashed line in the shape of the applied force field, to observe how the volunteer adapted to the task.

No vision: three blocks equal to the adaptation phase, but the volunteer's hand was covered on the robotic platform, so visual feedback could come only from the monitor (Fig. 1b). This phase evaluated whether seeing the hand on the manipulator could influence the adaptation.

Washout: three blocks without disturbance or visual clue, performed to observe retention in the readaptation phase.

Readaptation: two perturbed series with no visual clue.

In total, the volunteers were divided into 2 groups: the first group received random force field disturbances with a visual clue, and the second also received random force field disturbances but without visual clues. The monitor in front of the volunteer gave visual feedback of the manipulation of the robotic platform; it also


Fig. 2 Procedure of the experiment. a Division of the phases of the experiment: three blocks of baseline, three blocks of adaptation, three blocks without vision of the manipulator, three blocks of washout, and two blocks of readaptation. b Visualization of the game at the beginning: the gray frames indicate the start and end points that the blue frame (representing the position of the handle) must reach. c Upon reaching the upper gray target, this target turns green to signal the end of the reach, and the cursor returns to the lower gray frame. d The purple line shows how the visual clue was presented to the user; this line can also be mirrored on the right to show a field formed from the right side, depending on randomness

showed two gray targets located at the lower and upper center of the screen, and a blue frame that indicated the position of the handle in space (Fig. 2b). The lower gray target indicated where the participant should start the movement, while the upper target was the final target. When the handle (indicated in blue) reached the final target, the target would turn green, indicating that the participant had completed the reach (Fig. 2c) and should return to the lower target. Upon reaching the lower target, the upper target would turn gray again, indicating that a new reach could be started.
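The target sequencing just described is a small state machine: the cursor must travel lower → upper → lower, with the upper target turning green on arrival and gray again when a new reach may begin. A minimal sketch (state names are illustrative, not from the original software):

```python
def next_state(state, reached):
    """Advance the reach cycle given which target the cursor just reached."""
    if state == "at_lower" and reached == "upper":
        return "upper_green"   # reach completed: upper target turns green
    if state == "upper_green" and reached == "lower":
        return "at_lower"      # upper target turns gray; a new reach may start
    return state               # any other event leaves the state unchanged

print(next_state("at_lower", "upper"))  # upper_green
```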

2.3 Data Analysis

The position of the manipulator was collected through two encoders positioned in the motors, and the resulting force was obtained from torque sensors; the data were collected at a sampling rate of 500 Hz. Differences were calculated between the display of the target and the movement start time, which should not exceed 600 ms; any attempt taking longer than this was excluded [21, 24]. In the assessment of skills, it was first investigated whether visual cues could influence motor adaptation. The volunteers were therefore exposed to semi-sinusoidal perpendicular force fields, randomly alternated during the reaching task, where the goal was to reach the target as accurately as possible (Fig. 2b). The disturbance in one reach could not be predicted for the next, so participants systematically had to adapt to each error experienced. One group had traced visual clues indicating how the field would behave (Fig. 2d), but this group was not told whether this clue would exist or how it worked, leaving the volunteer to interpret the information on the screen. In contrast, the other group simply had to react to the field without any kind of clue. To calculate the user's skill, the area that the volunteer's path forms in the reach was computed with the composite trapezoidal rule, shown in the formula below. A straight line between the lower and upper targets was assumed as the ideal path and taken as the base of the area. Here $A_{\mathrm{error}_n}(f)$ is the error area of each reach, $h$ the height of the trapezoids, $f(a)$ the first base, $f(b)$ the last base, and $f(x_i)$ the intermediate bases along the way:

$$A_{\mathrm{error}_n}(f) = h \left| \frac{f(a) + f(b)}{2} + \sum_{i=1}^{n-1} f(x_i) \right| \quad (1)$$

The absolute value was taken so that the error area is always positive. Data were normalized to each individual's second baseline series, where all participants were considered to have the same skill errors.

The effects between groups were compared using repeated-measures ANOVA followed by t-tests. No statistical methods were used to predetermine sample size, but the sample sizes are similar to those in published studies [21, 25–27]. The significance level was set at p < 0.05.
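The composite-trapezoidal error area of Eq. (1) can be sketched as follows, assuming `deviations` are the perpendicular deviations from the ideal straight-line path sampled at spacing `h` (names are illustrative):

```python
def error_area(deviations, h):
    """Composite trapezoidal estimate of the area between the executed path
    and the ideal straight line, per Eq. (1): h * |(f(a)+f(b))/2 + sum of
    intermediate deviations|, taken in absolute value."""
    inner = sum(deviations[1:-1])                       # intermediate bases
    ends = (deviations[0] + deviations[-1]) / 2         # first and last base
    return abs(h * (ends + inner))

# A symmetric bump of deviations sampled at unit spacing
print(error_area([0, 2, 4, 2, 0], 1.0))  # 8.0
```

The absolute value mirrors the paper's choice of always reporting a positive error area regardless of which side of the ideal line the path deviated to.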


2.4 Endpoint Errors

Errors made when attempting to reach each target were measured as the shortest distance from the cursor endpoint to the outer limit of the target. Only attempts that fell within the target limit were computed. If the participant omitted any target, that reach was discarded. We took this approach because a target-omission error was considered different from attempting to move to the target and making a mistake [28].
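The endpoint-error measure above can be sketched as a simple geometric computation, assuming (as an illustration, not stated in the paper) a circular target of known radius:

```python
import math

def endpoint_error(endpoint, target_center, target_radius):
    """Shortest distance from the cursor endpoint to the target's outer
    boundary; zero if the endpoint lands inside the target."""
    dx = endpoint[0] - target_center[0]
    dy = endpoint[1] - target_center[1]
    return max(0.0, math.hypot(dx, dy) - target_radius)

print(endpoint_error((3.0, 4.0), (0.0, 0.0), 2.0))  # 3.0 (5.0 from center, minus radius 2.0)
```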

2.5 Volunteers

In all, 10 volunteers participated in the experiment; they could have no motor and/or neurological problems or vision-related problems. The average age of the volunteers was 25.1 ± 1.7 years. Each volunteer was informed that the main objective was to reach the target with the shortest path and the highest accuracy and speed possible. Of these participants, 9 were right-handed and 1 left-handed. Two of the 10 candidates were excluded from the results because their movements exceeded the spatial limits of the device, generating errors larger than the equipment could measure, so only 8 candidates were included in the end. The collection protocol was approved by the ethics committee (CNS 466/12) of the Federal University of Uberlandia, and all individuals gave written informed consent in accordance with the Declaration of Helsinki.

3 Results

3.1 Impact of the Visual Clue on Movement Adaptation

The result below (Fig. 3a) shows the average adaptation over the 50 trials of each volunteer. The evolution of the subjects in the group with visual clues is shown in descending shades of pink; for the subjects who did not receive visual clues, descending shades of blue are used. To allow analysis of possible variations (adaptations) during the various reach series, the data were normalized to the mean value of each individual's second baseline series. As can be seen, it is not possible to perceive apparent differences between the groups by purely assessing the individual averages. Statistical analysis shows that the groups started the experiments with similar performance, by ANOVA (F3.94 = 15.29, p = 0.00017). However, both groups showed evolution (learning/adaptation) when we considered the average errors of the last session relative to the first series of the adaptation phase with a visible manipulator. For

the group without the visual cue, t8 = 0.53, p = 0.29 (t-test), and for the group with the visual clue, t8 = 2.31, p = 0.12, which also indicates learning in this group; thus, both groups learned. Moreover, the analysis of the differences between the groups in the final session shows that their performances differ (F3.9 = 0.6, p = 0.41): the group that did not receive the visual cue presented greater evolution, with a lower error in the final session, when compared to the group that received visual clues. As this difference is not obvious in the evolution of the individual averages, it was decided to analyze the evolution of the averages of each group for each trial (Fig. 3b). In this case, one can notice a tendency of separation between the groups, markedly in the last two sessions. This difference can be attributed to the fact that visual cues may have contributed to a greater dispersion of each subject's attention and consequently delayed the learning/motor adaptation process. It is noteworthy that the motivational feedback did not provide any additional directional information; there was no sign that could serve as an additional, potentially misleading, learning input.

3.2 Effect of Obstruction of Vision of the Manipulator

The comparison of mean errors between the adaptation blocks with or without visual obstruction of the manipulator (the no-vision phase) showed no significant difference between the blocks. To specifically assess the possible impact of obstructing the manipulator, we evaluated the statistical difference between the last adaptation series with vision of the manipulator and the first series with the manipulator obstructed (no-vision phase), for each group. The results show that for the group that received visual clues on the computer screen there was no significant difference (t8 = 2.31, p = 0.012), and the same held for the group that did not receive visual clues on the computer screen (t8 = 0.53, p = 0.029).

3.3 Effects of Concentric and Eccentric Force Fields

One possibility is that there could be differences between the fields, so the fields were separated into those acting against the concentric movement of the dominant hand (Fig. 4b, d) and those acting in favor of the concentric movement of the dominant hand (Fig. 4a, c). When comparing the first series of adaptation, we noticed that reaches against the concentric movement of the dominant hand showed no significant difference (F2.47 = 3.94, p = 0.011), whereas reaches in favor of the concentric

Influence of Visual Clue in the Motor Adaptation Process


Fig. 3 Area error normalized by trial. a Mean area errors per series for each volunteer: circles represent individuals who had visualization of the trajectory in the adaptation and no-vision stages, whereas squares indicate individuals who did not receive the trajectory. b Average area errors per group: transparent red represents the average area errors of the group that received the path track and transparent blue those of the group that did not. The red circle marks the series average of the group that received the path track and the blue square that of the group that did not

movement of the dominant hand started out separated (F(3,94) = 0.09, p = 0.09). This suggests prior knowledge of the force field when the field acts against the movement. Finally, after the washout period, the consolidation of learning was evaluated in the readaptation phase. We compared the last series of the no-vision phase with the first series of the readaptation phase, which showed no significant difference for the field either against or in favor of the movement, in either the no-track group or the visual-track group. This shows that in all cases the learning was retained. The adaptation measures used in this article take the speed of movement into account, and such speed differences are highly unlikely to explain a lack of learning [29, 30]. These results show that when a movement is predictive of the direction of the planned field, even if not executed, there is a substantial reduction in interference.

4 Discussion

4.1 Visual Cues Lead to Slower Adaptation

The group with no visual cues had a higher motor-performance gain than the group with visual cues; visual-cue feedback was therefore associated with slower adaptation. There are several possibilities for how the visual cue may have slowed motor adaptation. First, feedback signals may have increased cerebellar sensitivity to sensory prediction errors, that is, a directional mismatch between expected and perceived position [21, 31]. A prediction error could have led to further behavioral exploration and thus increased the time until the correct solution was found. Another possibility is an increase in cortical processing due to the increased parameter processing (from the visual cues), which may lead to increases in learning


V. T. Costa et al.

Fig. 4 Trial-normalized area error separated into fields against and in favor of the concentric movement of the dominant hand. a Mean area errors per series for each volunteer in the field in favor of the concentric movement of the dominant hand: circles represent individuals who had visualization of the trajectory in the adaptation and no-vision phases, whereas squares indicate individuals who did not receive the path. b Mean area errors per series for each volunteer in the field against the concentric movement of the dominant hand. c Mean area errors by group in the field against the concentric movement. The red circle marks the series average of the group that received the path track and the blue square that of the group that did not. d Mean area errors per group in the field in favor of the concentric movement

time during random motor perturbations. Thus, differences must have arisen among the volunteers in the experiment, making them more sensitive to the directional information provided by a sensory prediction error [21]. It was observed that the visual cues caused an increase in reaction time for that group. Because cerebellar function is sensitive to behavioral outcomes and aversive stimuli, it is believed that the visual-cue-induced differences in performance in error-based learning were a direct result of the cerebellum being more sensitive to sensory prediction error [10, 21].

4.2 Greater Adaptability in Concentric Fields

One point noted was that aversion to error when anticipating where the field will come from, which happens with the visual cue, may have caused a greater focus on the difference between the ideal path and the one seen by the user, thus demotivating the adaptation [21, 32]. Since cerebellar function is sensitive to negative behavioral outcomes and adverse stimuli, it can be assumed that the improvements induced by the visual cue were a direct result of the cerebellum being more sensitive to the sensory prediction errors associated with the visual cues. In other words, visual cues directly hamper learning through the cerebellar prediction error, possibly by increasing cerebellar serotonin levels [21, 33, 34]. The results of the group with visual cues show that the volunteers' adaptation/learning lagged behind that of the group without visual cues, and a motivational value of the cues cannot be ruled out. It would be informative, however, to further examine the relationship between learning and the magnitude of the visual cue offered [21]. As shown in the results, there was a greater ability to control the manipulator in movements performed during the application of concentric force fields with respect to the compensatory movement associated with the elbow joint. This greater control may be associated with greater ease of elbow flexion, which in turn is associated with the activity of the biceps brachii, more potent than its antagonist (triceps brachii). As these are antagonistic movements, we can observe in the results that two adaptations were generated, suggesting that


they had two different learnings for the same field. Studies report that adaptation may depend on control points and object locations, which may change during learning [3, 35]. Thus, the motor system forms separate memories for different control points of the same object. Since in this experiment the user was asked to hold the handle in the way most comfortable to him, after each rest the user could have changed his grip, which generates a new motor adaptation. This gives evidence that some volunteers' improvement does not come from better adaptation but from parameter-learning processes that form distinct motor memories, which are retrieved when needed [3, 28]. Separate memories can be formed for different control points on objects. The formation of distinct motor memories for different control points is not mandatory, occurring only if the dynamics of each control point differ, so the allocation of motor memory is flexible and efficient. Results from another study suggest that objects are not represented by the motor system as holistic entities but are split in a task-dependent manner according to control points [3].

5 Conclusion

The effects of motor adaptation on volunteers were compared, and it was observed that cueing of action control may make learning more difficult; this does not invalidate the process, but results in slower learning. It was shown that more than one adaptation can occur for a single movement, and these adaptations led to different learnings. When visual cues are added a priori to the movement, adaptation produces a more accurate estimate of the sensory consequences of motor commands (that is, an accurate model is learned); on the other hand, the brain needs more cortical processing, causing a delay in adaptation. Hence, more detailed investigations on the subject are needed, especially regarding the neurobiological mechanisms associated with motor-memory reconsolidation and how to effectively trigger and control the process.

Acknowledgements To the BIOLAB Research Laboratory of the Federal University of Uberlandia, for providing the space for all research and data collection, and to the project funding agencies, CAPES and FAPEMIG.

References

1. Huinink LHB, Bouwsema H, Plettenburg DH, Van der Sluis CK, Bongers RM (2016) Learning to use a body-powered prosthesis: changes in functionality and kinematics. J Neuroeng Rehabil 13(1):1–12

2. Biddiss E, Chau T (2007) Upper limb prosthesis use and abandonment: a survey of the last 25 years. Prosthet Orthot Int 31(3):236–257 3. Heald JB, Ingram JN, Flanagan JR, Wolpert DM (2018) Multiple motor memories are learned to control different points on a tool. Nat Hum Behav 2(4):300–311 4. Ingram JN, Wolpert DM (2011) Naturalistic approaches to sensorimotor control. Prog Brain Res 191:3–29 5. Bourke TC, Lowrey CR, Dukelow SP, Bagg SD, Norman KE, Scott SH (2016) A robot-based behavioural task to quantify impairments in rapid motor decisions and actions after stroke. J Neuroeng Rehabil 13(1):1–13 6. Cisek P (2011) Cortical mechanisms of action selection: the affordance competition hypothesis. Model Nat Action Sel 208–238 7. Cisek P, Pastor-Bernier A (2014) On the challenges and mechanisms of embodied decisions. Philos Trans R Soc B Biol Sci 369(1655) 8. Scott SH (2016) A functional taxonomy of bottom-up sensory feedback processing for motor actions. Trends Neurosci 39(8):512–526 9. Tseng Y-W, Diedrichsen J, Krakauer JW, Shadmehr R, Bastian AJ (2007) Sensory prediction errors drive cerebellum-dependent adaptation of reaching. J Neurophysiol 98(1):54–62 10. Bastian AJ (2008) Understanding sensorimotor adaptation and learning for rehabilitation. Curr Opin Neurol 21(6):628–633 11. Day BL, Lyon IN (2000) Voluntary modification of automatic arm movements evoked by motion of a visual target. Exp Brain Res 130(2):159–168 12. Guitton D, Buchtel HA, Douglas RM (1985) Frontal lobe lesions in man cause difficulties in suppressing reflexive glances and in generating goal-directed saccades. Exp Brain Res 58(3):455–472 13. Pierrot-Deseilligny C, Müri RM, Ploner CJ, Gaymard B, Demeret S, Rivaud-Pechoux S (2003) Decisional role of the dorsolateral prefrontal cortex in ocular motor behaviour. Brain 126(6):1460–1473 14. Hawkins KM, Sayegh P, Yan X, Crawford JD, Sergio LE (2013) Neural activity in superior parietal cortex during rule-based visual-motor transformations. 
J Cogn Neurosci 25(3):436–454 15. Tippett WJ, Alexander LD, Rizkalla MN, Sergio LE, Black SE (2013) True functional ability of chronic stroke patients. J Neuroeng Rehabil 10:20 16. Salek Y, Anderson ND, Sergio L (2011) Mild cognitive impairment is associated with impaired visual-motor planning when visual stimuli and actions are incongruent. Eur Neurol 66(5):283– 293 17. Tippett WJ, Sergio LE (2006) Visuomotor integration is impaired in early stage Alzheimer’s disease. Brain Res 1102(1):92–102 18. Johansson RS, Westling G (1984) Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects. Exp Brain Res 56 (3):550–564 19. Moulton EA, Elman I, Pendse G, Schmahmann J, Becerra L, Borsook D (2011) Aversion-related circuitry in the cerebellum: responses to noxious heat and unpleasant images. J Neurosci 31 (10):3795–3804 20. Ernst M et al (2002) Decision-making in a risk-taking task: a PET study. Neuropsycho Pharmacol 26(5):682–691 21. Galea JM, Mallia E, Rothwell J, Diedrichsen J (2015) The dissociable effects of punishment and reward on motor learning. Nat Neurosci 18(4):597–602 22. Izawa J, Rane T, Donchin O, Shadmehr R (2008) Motor Adaptation as a Process of Reoptimization. J Neurosci 28 (11):2883–2891

23. Campolo D, Tommasino P, Gamage K, Klein J, Hughes CML, Masia L (2014) H-Man: a planar, H-shape cabled differential robotic manipulandum for experiments on human motor control. J Neurosci Methods 235:285–297 24. Yeo SH, Wolpert DM, Franklin DW (2015) Coordinate representations for interference reduction in motor learning. PLoS ONE 10(6):1–14 25. Dayan P, Daw ND (2008) Decision theory, reinforcement learning, and the brain. Cogn Affect Behav Neurosci 8(4):429–453 26. Huang VS, Haith A, Mazzoni P, Krakauer JW (2011) Rethinking motor learning and savings in adaptation paradigms: model-free memory for successful actions combines with internal models. Neuron 70(4):787–801 27. Galea JM, Vazquez A, Pasricha N, Orban De Xivry JJ, Celnik P (2011) Dissociating the roles of the cerebellum and motor cortex during adaptive learning: the motor cortex retains what the cerebellum learns. Cereb Cortex 21(8):1761–1770 28. Hardwick RM, Rajan VA, Bastian AJ, Krakauer JW, Celnik PA (2017) Motor learning in stroke. Neurorehabil Neural Repair 31(2):178–189

29. Sheahan HR, Franklin DW, Wolpert DM (2016) Motor planning, not execution, separates motor memories. Neuron 92(4):773–779 30. Howard IS, Wolpert DM, Franklin DW (2015) The value of the follow-through derives from motor learning depending on future actions. Curr Biol 25(3):397–401 31. Shadmehr R, Krakauer JW (2008) A computational neuroanatomy for motor control. Exp Brain Res 185(3):359–381 32. De Martino B, Camerer CF, Adolphs R (2010) Amygdala damage eliminates monetary loss aversion. Proc Natl Acad Sci 107(8):3788–3792 33. Wu HG, Miyamoto YR, Castro LNG, Ölveczky BP, Smith MA (2014) Temporal structure of motor variability is dynamically regulated and predicts motor learning ability. Nat Neurosci 17(2):312–321 34. Hester R, Murphy K, Brown FL, Skilleter AJ (2010) Punishing an error improves learning: the influence of punishment magnitude on error-related neural activity and subsequent learning. J Neurosci 30(46):15600–15607 35. Goudarzi A, Teuscher C, Gulbahce N, Rohlf T (2012) Emergent criticality through adaptive information processing in Boolean networks. Phys Rev Lett 108(12):1–5

Application of MQTT Network for Communication in Healthcare Establishment N. A. Cunha, B. C. Bispo, E. L. Cavalcante, A. V. M. Inocêncio, G. J. Alves, and M. A. B. Rodrigues

Abstract

It is common to use TCP/IP (Transmission Control Protocol/Internet Protocol) for data communication through PLC (Power Line Communications). For IoT applications, however, the MQTT (Message Queuing Telemetry Transport) protocol has been used as an alternative to TCP/IP, as it is lighter, can be implemented on constrained hardware, and works well on limited networks. In Brazilian Healthcare Establishments (HE), electrical installations often have maintenance and sizing issues, which can hinder PLC communication, especially over TCP/IP. This work implemented an MQTT network for PLC communication and performed tests in an HE to evaluate its functioning. The network was composed of two sensor modules and a server. The modules send information on the energy consumption of Medical-Hospital Equipment (MHE). The server displays graphs of the information arriving in real time and stores it for later analysis. The tests showed that the MQTT protocol can be used for PLC communication in an HE, even though the power grid is in unfavorable condition for this type of communication.

Keywords

MQTT • PLC • Hospital

1 Introduction

N. A. Cunha (corresponding author), B. C. Bispo, E. L. Cavalcante, A. V. M. Inocêncio, G. J. Alves, M. A. B. Rodrigues: Federal University of Pernambuco/DES, PPGEE, Av. Prof. Moraes Rego, 1235, Recife, Brazil

According to Al-Fuqaha [1], the health-related sector of the economy will be the one most affected by the contributions of IoT (Internet of Things) services. All IoT

devices, which commonly use network protocols based on conventional TCP/IP (Transmission Control Protocol/Internet Protocol), can benefit from the electrical infrastructure already built into Healthcare Establishments (HE) to communicate [2]. This technology is called Power Line Communications (PLC). It is possible to build a closed communications network with PLC [3]. Such communication enables applications such as the monitoring, control, or management of the energy consumption of Medical-Hospital Equipment (MHE) [4]. In the IoT scenario, MQTT (Message Queuing Telemetry Transport) is a Machine-to-Machine (M2M) connectivity protocol that has become popular in services that demand fast and reliable communication between devices [5]. It works with the publish-subscribe mechanism on top of the TCP/IP protocol, in which devices are addressed with unique IPs [6], yet it is a lightweight protocol that allows implementation on highly constrained device hardware and on networks with limited bandwidth and high latency, having been designed to carry less overhead than HTTP (Hypertext Transfer Protocol) [5]. It is also a flexible protocol that supports several application scenarios for IoT devices and services [6]. Research on PLC technology has been carried out for over 10 years as an alternative for communication between devices and for the automation of establishments. The study by Mainardi [7] describes a simple microcontrolled communication architecture that makes communication possible between two stations, one master and one slave, through the residential power grid, using low-cost PLC communication modules for automation. Another application, focused on the current scenario of intelligent systems and IoT services, is the improvement and adaptation of PLC technology to conventional network protocols, such as TCP/IP. 
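In MQTT's publish-subscribe mechanism, the broker routes a published message to every subscriber whose topic filter matches the message's topic. As a broker-independent sketch of the standard matching rules ('+' matches exactly one level, '#' matches the remainder), assuming MQTT 3.1.1 semantics:

```python
def topic_matches(filter_, topic):
    """Return True if an MQTT topic filter matches a concrete topic.

    '+' matches exactly one topic level; '#' (last level only in valid
    filters) matches the remaining levels, including none.
    """
    flevels = filter_.split("/")
    tlevels = topic.split("/")
    for i, f in enumerate(flevels):
        if f == "#":                     # multi-level wildcard: match rest
            return True
        if i >= len(tlevels):            # topic ran out of levels
            return False
        if f != "+" and f != tlevels[i]:  # literal level must match exactly
            return False
    return len(flevels) == len(tlevels)
```

For example, `topic_matches("sensors/+/power", "sensors/mod1/power")` holds, while a filter without wildcards, such as the topics used later in this work, matches only its exact topic string.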
In this scenario, PLC technology becomes a competitive alternative for building a communication network compared to Ethernet, WiFi, ZigBee, and others in IoT applications. A study by Di Zenobio [8]

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_216



N. A. Cunha et al.

demonstrates architectures using common network protocols employed in IoT services for the automation of a residential property using PLC technology, along with advantages in structural cost compared to other network communication technologies. Beyond economic aspects, studies also point out the disadvantages of radio-frequency-based technologies for certain applications, as is the case in hospital environments [9]. Although most studies on electromagnetic interference caused by wireless communications in the ISM (Industrial, Scientific and Medical) radio band claim that these are harmless to health-care applications, Badizadegan [9] presents a literature review of proven cases of electromagnetic interference in clinical laboratory equipment and illustrates in detail the activity of these types of interference across the spectrum of the major wireless communication technologies on the market. Cunha [4] performed communication tests using PLC in an HE and showed the feasibility of using the electrical network for this communication, transmitting data without loss and with low latency for monitoring in a hospital sector where diagnostic imaging exams were carried out, whose equipment injects noise into the electrical network. Therefore, the main objective of this work is to use the MQTT protocol on a PLC network for the real-time monitoring of information on the electrical consumption of MHE, as well as the storage of this information for later analysis; and also to analyse the stability and reliability of the data that transits this network in an environment where, in addition to diagnostic imaging equipment, there is surgical equipment, such as the electric scalpel, which also injects noise into the electrical network and can make PLC unviable.

2 Materials and Methods

This section presents the topology of the network implemented for the communication tests on a PLC network. The system consists of three entities: two sensor devices and a central storage unit running the Windows operating system, all connected to the same private local network created by PLC modules coupled to each entity. Figure 1a shows the implemented topology, in which two sensor modules, Client 1 and Client 2, are connected to the PLC network, as well as the central storage unit, the Windows Server. An MQTT network was implemented as the main communication protocol between the devices connected to the PLC network.

In this way, Client 1 and Client 2 are microcontrolled modules connected to the MQTT network via the PLC, which send messages containing electrical parameters of MHE (voltage, current, power and power factor) and publish them on different topics of the MQTT network. Figure 1b illustrates the architecture implemented in the MQTT network with the respective clients and IPs: Client 1 publishes on the topic “/teste1” and Client 2 on the topic “/teste2”. The Windows computer, as Client 3, subscribes to the topics “/teste1” and “/teste2” and receives the messages from Client 1 and Client 2. The central unit responsible for redistributing messages by topic in an MQTT network, the Broker, was implemented with the NODE-RED software running on the Windows Server. NODE-RED [10] is a JavaScript-based graphical programming tool responsible here for the MQTT Broker (Mosca) and for processing and storing the data contained in the packets coming from the clients through the MQTT protocol. In addition, NODE-RED builds a GUI (Graphical User Interface) for displaying real-time information to the user. NODE-RED made prototyping simpler because a single piece of software could host both the MQTT Broker and an MQTT Client (illustrated in Fig. 1b) on the same development platform. In other words, the Windows Server hosts, in a single software, the MQTT Broker, the processing and storage unit for the MQTT clients' messages, and a GUI server that displays all messages received from the clients in real time. After structuring the MQTT network and the features of each entity in the system, the format of the messages to be sent by the microcontrolled modules Client 1 and Client 2 was defined. 
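The topic layout above (two publishers, one subscriber on both topics) can be modelled without any broker or network. The toy dispatcher below is only an illustration of the publish-subscribe flow, not the Mosca broker or the paho client; class and payload values are ours:

```python
from collections import defaultdict

class MiniBroker:
    """Toy in-memory stand-in for an MQTT broker (exact-match topics only)."""

    def __init__(self):
        self.subs = defaultdict(list)   # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, payload):
        # Deliver the payload to every callback subscribed to this topic
        for cb in self.subs[topic]:
            cb(topic, payload)

broker = MiniBroker()
received = []
# The server-side client subscribes to both sensor topics
broker.subscribe("/teste1", lambda t, p: received.append((t, p)))
broker.subscribe("/teste2", lambda t, p: received.append((t, p)))
# The sensor modules each publish one packet
broker.publish("/teste1", "MOD1;220.00;1.00;176.00;220.00;0.80;1")
broker.publish("/teste2", "MOD2;219.80;0.31;60.00;68.00;0.88;1")
```

In the real system, the `subscribe` side lives inside NODE-RED on the Windows Server and `publish` is performed by the microcontrolled modules over the PLC link.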
The data packet containing the electrical consumption parameters of the measured equipment was then defined as the sequence: ID, voltage, current, active power, apparent power, power factor and packet number. All these parameters are grouped in a single message of type String, that is, a set of characters, in which the respective electrical parameters are separated by a semicolon. Because the variables measured by the module have varying numbers of characters, the String sent by the modules can assume different lengths, though close to 40 characters. For example: “MOD1; 220.00; 1.00; 176.00; 220.00; 0.80; 850”. The packet carries the meter identification; five samples, referring respectively to the effective voltage, effective current, active and apparent powers, and power factor; and the value of the packet counter, which identifies each packet. Packets are sent at a rate of one packet per second per client.
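On the receiving side, a packet string in this format can be split back into its fields. The sketch below is an assumption of how such parsing might look (the function and field names are ours, not the NODE-RED implementation):

```python
def parse_packet(msg):
    """Parse one sensor packet string into named fields.

    Field order (per the text): module ID, RMS voltage, RMS current,
    active power, apparent power, power factor, packet counter.
    """
    parts = [p.strip() for p in msg.split(";")]
    if len(parts) != 7:
        raise ValueError(f"expected 7 fields, got {len(parts)}")
    return {
        "id": parts[0],
        "voltage": float(parts[1]),
        "current": float(parts[2]),
        "active_power": float(parts[3]),
        "apparent_power": float(parts[4]),
        "power_factor": float(parts[5]),
        "counter": int(parts[6]),
    }

pkt = parse_packet("MOD1; 220.00; 1.00; 176.00; 220.00; 0.80; 850")
```

The length check gives a cheap integrity test: a packet corrupted by line noise is unlikely to still contain exactly seven well-formed fields.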



After the server receives the data packets, graphs of the voltage, current, active power, apparent power and power factor data are displayed. NODE-RED then writes the packets to files for further analysis. Storage is organised so that packets coming from different sensor modules are allocated to distinct files, each file corresponding to its respective sensor module and containing all received packets. The tests to validate the communication between the network entities and the consumption analysis were conducted at the Clinicas' Hospital (CH) of the Federal University of Pernambuco, in the Clinical Engineering sector. One sensor module (Client 1) was attached to a ventilator and the other (Client 2) to an electrosurgical scalpel, installed in one of the equipment maintenance rooms of the Clinical Engineering sector. The scalpel is a device that generates considerable noise in the electrical network, which can affect both other equipment connected to the same network and the PLC. For this reason, the tests had the ventilator and the scalpel connected to the same network at the same time. The Server was installed in the adjacent room, in an outlet belonging to the same electrical phase to which the prototypes were connected. With the electrosurgical scalpel and ventilator in operation, the data transmitted by the sensor modules through the PLC network and collected by the Server were verified.

Fig. 1 a Illustration of the implemented topology, with two sensor modules, Client 1 and Client 2, and the central reception and storage unit, the Windows Server, with the three entities connected to the PLC network; b the architecture implemented in the MQTT network and the respective topics, in which MQTT network clients publish messages or subscribe to receive them

3 Results and Discussion

A screenshot of the GUI (Graphical User Interface) generated by NODE-RED on the server is shown in Fig. 2a, giving an overview of the interface, where it is possible to monitor, in real time, the electrical parameters related to the energy consumption of the ventilator connected to the Client 1 module over the last minute. The measured values arrive at the Server through the MQTT protocol. The electrical parameters displayed are effective voltage, effective current, active power, apparent power and power factor, all referring to the module to which the ventilator is connected. Figure 2b shows a magnified image of the active-power graph of Fig. 2a for better viewing. In addition to the graphical display of the equipment's consumption behaviour, the Server stores the values collected by Client 1 and Client 2 simultaneously through the communication established on the PLC network, allowing later analysis. Figure 3 shows the data collected by Client 2 in the test with the electrosurgical scalpel when switching between its various modes of operation. In Fig. 3a, the effective voltage



Fig. 2 a Screenshot of the GUI generated by NODE-RED, displaying real-time graphs of the last minute of the electrical parameters related to the consumption of the ventilator connected to the Client 1 module; b graph of active power extracted from a for better viewing

values in volts measured from the mains are displayed on the vertical axis, while the horizontal axis displays the time in seconds recorded for each measurement. Figure 3b shows the graph of the effective current values (in amperes) over time (in seconds). In this graph, the electrosurgical scalpel was initially in stand-by mode, with a current consumption of around 310 mA. After 57 s, the equipment was turned on, and a small increase in current to 350 mA was observed. The first operating mode was then selected and, in this mode, the

electrosurgical scalpel was activated at 69, 78, 87 and 121 s, when peaks of 390 mA were recorded. The second mode of operation was then selected and the electrosurgical scalpel was activated at 133 s, recording a peak current of 580 mA. Finally, when the last operating mode was selected, the electrosurgical scalpel was activated at 150 s, where a peak of 1.21 A was recorded; in addition, a voltage drop from 217 to 215.7 V was recorded. Then, the changes in the electrosurgical scalpel's operating mode were performed once more in ascending order, to



Fig. 3 a Variation of voltage as a function of time; b variation of current as a function of time, in which the current varies according to the electrical stimuli generated when the electrosurgical scalpel's operating mode was switched

verify again whether the current and voltage variations of the hospital equipment would continue to be correctly registered on the server. The data collected from both the electric scalpel and the ventilator were analysed after the tests and no packet losses were detected; moreover, the interval between one packet and the next averaged 1 s, the same period used for sending the packets. This reveals that the communication was stable and that there were no delays significant for the purposes of energy-consumption analysis [11]. In the test with the electric scalpel, 283 packets were transmitted, and 3749 with the ventilator. In Fig. 3, it is possible to observe that the voltage and current graphs responded adequately when alternating the operating modes of the electrosurgical scalpel. This implies fidelity of the data transmitted over the PLC network through the MQTT protocol, even in abrupt situations

of current and voltage variation. The data arrived at the Server without communication failures. It is also important to note that, during the variations of the electrosurgical scalpel's operating mode in the tests, communication with the ventilator remained stable, with no changes resulting from the noise generated by the electrosurgical scalpel in the power grid.
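The two checks reported above (no packet loss, ~1 s average inter-arrival time) follow directly from the packet counters and arrival timestamps. A minimal sketch of such an analysis, with a hypothetical trace rather than the hospital data:

```python
def missing_packets(counters):
    """Count packets missing from a strictly increasing counter sequence."""
    lost = 0
    for prev, cur in zip(counters, counters[1:]):
        if cur > prev + 1:          # a gap in the counters means lost packets
            lost += cur - prev - 1
    return lost

def mean_interval(timestamps):
    """Average spacing (in seconds) between consecutive packet arrivals."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical trace: no counter gaps, arrivals roughly 1 s apart
counters = list(range(1, 11))
stamps = [0.0, 1.01, 2.0, 3.02, 3.99, 5.0, 6.01, 7.0, 8.0, 9.0]
```

Run over the stored per-module files, `missing_packets` returning 0 and `mean_interval` near the 1 s sending period would reproduce the stability observation made in the text.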

4 Conclusions

This work demonstrated that it is possible to use the MQTT protocol to establish communication on a PLC network. It also enabled the monitoring of the energy consumption of MHE in real time and the storage of the information obtained for later analysis, even though the power grid at the test site was unfavourable for this type of communication due to the electrical noise generated by the electrosurgical scalpel.


In this way, the MQTT protocol was used on a PLC network and fulfilled its basic purpose, enabling communication through an unfavourable medium with acceptable fidelity and latency of the communicated information. A continuation of this work would be to investigate the possibility of remotely monitoring a greater number of Medical-Hospital Equipment (MHE) in a Healthcare Establishment (HE) using the MQTT protocol, starting from the most important equipment according to the establishment's design and potentially reaching all the MHE of that HE. In this process, other means of communication and their advantages (or disadvantages) relative to PLC can be analysed.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Al-Fuqaha A, Guizani M, Mohammadi M et al (2015) Internet of things: a survey on enabling technologies, protocols and applications. IEEE Commun Surv Tutor 17 2. Ahn J, Heo J, Lim S et al (2008) A study of ISO/IEEE11073 platform based on power line communication. In: 14th Asia-Pacific conference on communications, pp 1–4

3. Zhang H (2010) The intelligent device management system is based on the power line communication. In: International conference on E-business and information system security, pp 2 4. Cunha N (2019) Monitoramento do Consumo Energético de Equipamentos Hospitalares Utilizando PLC. Master's dissertation (Electrical Engineering), Universidade Federal de Pernambuco 5. Yassein MB, Shatnawi MQ, Aljwarneh S et al (2017) Internet of things: survey and open issues of MQTT protocol. In: International conference on engineering MIS (ICEMIS), pp 1–6 6. Hillar C (2017) MQTT essentials—a lightweight IoT protocol. Packt Publishing Ltd. 7. Mainardi E, Banzi S, Bonfe M et al (2005) A low-cost home automation system based on power-line communication. In: International symposium on automation and robotics in construction 8. Di Zenobio D, Steenhaut K, Thielemans S et al (2017) An IoT platform integrated into an energy efficient DC lighting grid. In: 2017 wireless telecommunications symposium, pp 1–6 9. Badizadegan N, Greenberg S, Lawrence H et al (2019) Radiofrequency interference in the clinical laboratory: case report and review of the literature. Am J Clin Pathol 151 10. NODE-RED Getting Started at https://nodered.org/docs/gettingstarted/ 11. Brasil (2010) Procedimentos de Distribuição de Energia Elétrica no Sistema Elétrico Nacional: Módulo 8: Qualidade da Energia Elétrica. ANEEL, 62 p

Artificial Neural Network-Based Shared Control for Smart Wheelchairs: A Fully-Manual Driving for the User J. V. A. e Souza, L. R. Olivi and E. Rohmer

Abstract

Wheelchairs play an important role in regaining lost mobility and in the social reintegration of their users. However, users with quadriplegia still have few low-cost solutions that meet their needs. The present work proposes a shared control strategy designed to operate together with discrete human-machine interfaces, in which the user has few commands, and capable of integrating different types of sensors that can be attached to a commercial powered wheelchair, without the need for localization. Simulations show that the proposed strategy provides robust and safe navigation through daily environments and does not take navigation autonomy away from the user, who performs fully-manual driving. Keywords

Smart wheelchair • Assistive technology • Shared control • Artificial neural networks • Robotics

1

Introduction

Physically disabled people, notably those with major mobility restrictions, such as quadriplegic people, face great difficulties not only in getting around but also in their inclusion in society [1]. One of the main objectives of assistive robotics is to develop technologies that assist these people in their daily lives, helping them recover lost autonomy and reintegrate into society, regaining self-esteem and health.

J. V. A. e Souza (B) · E. Rohmer
School of Electrical and Computer Engineering, State University of Campinas (UNICAMP), Campinas, Brazil
e-mail: [email protected]
L. R. Olivi
School of Electrical Engineering, Federal University of Juiz de Fora (UFJF), Juiz de Fora, Brazil

The most recent National Health Survey, conducted in 2013 [2], indicates that 1.4% of the Brazilian population has some kind of physical disability, which means, in absolute terms, a group of 2.6 million Brazilians. Also according to the survey, 1.2 million Brazilians have a high or very high level of physical limitation, which implies that they are unable to perform daily activities. Returning a portion of mobility to quadriplegic people is an objective of assistive robotics. The task becomes even more challenging when the monetary point of view is taken into account, since many of the developed technologies still have a high cost. Hence, there is a great demand for low-cost systems that can be incorporated into the user's daily life.

One kind of technology that has been widely developed in recent years, and is present in many commercial smartphones, is the recognition and classification of facial patterns. Some works, such as [3,4], use this technology as a Human-Machine Interface (HMI) through which a robotic wheelchair can be controlled by the user. However, due to the inherent limitations of HMIs [3], only a small set of commands is available to the user, usually directional commands for navigation. Thus, the commands attributed to the robot are limited. Moreover, some environmental conditions, such as brightness, may influence the sensors, so errors in command classification can occur, which may put the user in a dangerous situation. In order to mitigate the errors arising from HMIs, the present work proposes a novel shared control strategy for robotic wheelchairs that uses artificial neural networks to obtain an adaptive control law for the linear and angular velocity of the robot during navigation tasks.
This strategy is designed to work with different types of distance sensors, including low-cost ones, offering navigation that adapts to the environment's configuration while guaranteeing the user's safety and his/her decision autonomy, since all required commands are applied to the wheelchair in a safe way.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_217


2


Literature Review

A smart wheelchair consists of a powered wheelchair with sensors, processing units and an HMI attached to it [5]. The main difference between a smart wheelchair and a common powered wheelchair lies in the embedded robotics and intelligent techniques. In order to implement robotic techniques in a wheelchair, sensors are required to provide information about the environment, the robot and the user. This information is processed and used by algorithms to map, plan a path, avoid collisions, among other functionalities. Commonly, the navigation of a powered wheelchair is manual, given by commands sent through a joystick. However, controlling a joystick is very complicated or impossible for users with a severe loss of arm and hand mobility. Thus, some HMIs can offer a way to use the remaining movements to control the wheelchair [3]. Although HMIs allow the user to send commands to control the wheelchair, some daily maneuvering tasks are still difficult to execute, due to the effort needed to send a long command sequence and to the discrete nature of HMIs, which usually results in a small set of available commands. For this reason, robotics techniques are used to assist the person in these maneuvers, dividing the control between the user and the machine; this is called Shared Control [6–8]. The authors of [6] classify the existing shared control approaches in two ways: the first category comprises approaches in which navigation mode changes are triggered by the user through an interface; the second category comprises approaches in which navigation mode changes are triggered automatically by the shared control algorithm. Alternatively, the authors of [7] present a third way: a shared control technique in which the user is in charge of controlling the velocities and direction of movement, while the system is responsible for path planning and collision avoidance, with no switching between navigation modes required.
The first works on shared control use the concept of mode switching. The NavChair [9] is a wheelchair that makes use of sonar sensors and wheel odometry to build a Cartesian map centered around the wheelchair. To avoid obstacles, techniques based on Potential Fields are used. There are three operating modes for the wheelchair. The first is designed to keep the wheelchair at a safe distance from obstacles. The second mode is for door passage, moving between two closely spaced obstacles. The last mode is to follow a wall automatically, making the NavChair modify the commands given by the user in order to follow the wall. Concerning the operation mode, the user must select which mode is active, so mode switching is manual.

The authors of [10] present a Bayesian Neural Network to avoid obstacles surrounding a wheelchair. This method combines a laser sensor with wheel odometry to build a 360° map, used to detect accessible spaces for the wheelchair. The inputs of the neural network are the laser data and the goal direction, while the outputs are the velocity and steering of the wheelchair. The goal direction is calculated using a cost function that balances the directional command sent by the user with the free-space directions given by the map. The wheelchair navigates autonomously in the goal direction, which is the closest safe direction to the user's command. The authors of [8] proposed a shared control strategy based on Vector Fields that executes the exact command sent by the user, instead of balancing it with the environmental data as in [10]. Thus, it can be classified as fully manual driving. The obstacles create a repulsive field that regulates the linear and angular velocities. This approach fits the third class of shared control proposed by [7], since the user does not need to switch between operation modes, being responsible only for choosing directions. In [11], the authors presented a hybrid motion planner applied to a brain-actuated wheelchair. The planner has a three-dimensional global path planner that uses the A* algorithm to generate the path, which then goes through a smoothing step. The reactive part of the algorithm is done by a modified version of the dynamic window approach (DWA), which allows the wheelchair to react to dynamic obstacles. In this approach, the user can send high-level commands, composed of a set of target goals, or low-level commands, which are the classical directional commands, similar to a joystick.
The proposed method has a collaborative controller, a decision-making layer that receives the user's and the machine's commands and determines whether the user's command may be executed, based on the environmental information. Considering the related work, this work proposes a novel approach to the shared control problem, aiming to give the user maximum autonomy during navigation, allowing fully manual driving assistance by controlling only the linear and angular velocities of the wheelchair. On this point, this work agrees with what is presented in [8]. However, to overcome the drawback of adjusting potential field parameters for different environments, this work uses a strategy based on artificial neural networks, which learns different examples of basic spatial situations and generalizes them to complex environments, allowing the user safe navigation without the need to switch between navigation modes.


3

Methodology

The development of HMIs based on image processing has become a trending area in recent years, due to the accessible prices of cameras and their popularization in smartphones. Moreover, advances in image classification using convolutional neural networks have contributed to this popularization. The present work uses an HMI based on head movements called Joyface [3]. The Joyface interface uses the movement of the nose to determine which command will be sent to the wheelchair. Figure 1 shows the virtual joystick emulated by the Joyface application. The commands available with Joyface are move forward, move backward, turn left, turn right and stop the wheelchair. Table 1 shows the commands and the movements related to each of them. This work proposes a strategy that shares the control of the wheelchair between the interface's user and a control law for the linear and angular velocities determined by a multilayer perceptron (MLP) neural network. The proposed control strategy architecture is shown in Fig. 2. In this shared control strategy, the user's movements/expressions are classified into moving commands, which are sent to the wheelchair controller. In parallel, the environment's data are acquired by distance sensors attached to the wheelchair, and those readings are classified into a polar occupancy grid, named here influence zones. In this work, the influence zones configurations are used as inputs for the neural network, which, in turn, provides as output a vector of four elements. The first element is the percentage of the wheelchair's maximum linear velocity to be applied to the forward movement. The second element is the percentage of the maximum linear velocity related to the backward movement. The third element is the percentage of the angular velocity related to the left-turn movement, and the last element is the percentage of the angular velocity for the right turn. Directional commands sent by the user via the HMI are applied to the wheelchair with the corresponding velocity from the neural network output.
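As an illustration of the dispatch step described above, the sketch below maps a classified HMI command to wheelchair velocities scaled by the four-element ANN output. The function name and sign conventions are ours, not the paper's implementation; the maximum velocities reuse the values from Table 3.

```python
# Hypothetical command dispatcher: the ANN output y holds four percentages,
# [forward, backward, left turn, right turn], each in [0, 1].

V_MAX = 0.3    # maximum linear velocity, m/s (Table 3)
W_MAX = 20.0   # maximum angular velocity, deg/s (Table 3)

def apply_command(command, y):
    """Map an HMI command to (linear, angular) velocities scaled by y."""
    v_fwd, v_bwd, w_left, w_right = y
    table = {
        "forward":  (V_MAX * v_fwd, 0.0),
        "backward": (-V_MAX * v_bwd, 0.0),
        "left":     (0.0, W_MAX * w_left),    # positive = counter-clockwise
        "right":    (0.0, -W_MAX * w_right),
        "stop":     (0.0, 0.0),
    }
    return table[command]
```

For instance, with an ANN output of `(0.5, 0.2, 1.0, 1.0)` near a frontal obstacle, a "forward" command is applied at half the maximum linear speed while turns remain at full speed.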

3.1

Inputs

The proposed shared control technique is aimed at commercial wheelchairs with few modifications needed to robotize the vehicle, such as installing rangefinder sensors. The only requirement is that the wheelchair controller be able to receive a velocity set point; otherwise, the velocities have to be converted into traveling distances. Therefore, the system is designed to work with environment readings from different types of distance sensors, such as ultrasonic sensors, lasers, depth cameras and others. To handle the data acquired by the sensors, detected obstacles are transformed into polar coordinates, giving the obstacle's distance to the wheelchair and its respective angle in the robot's frame, as shown in Eq. 1.

ρ_O = √((x_O − x_w)² + (y_O − y_w)²),  θ_O = atan2(y_O − y_w, x_O − x_w)  (1)

Fig. 1 Joyface interface

Table 1 Joyface commands

Movement | Command
Nose up | Moves forward
Nose left | Turn left
Nose right | Turn right
Nose down | Moves backward
Smile | Stop the wheelchair
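Eq. 1 maps directly onto the standard library; a minimal sketch:

```python
import math

def to_polar(obstacle, wheelchair):
    """Eq. 1: obstacle position expressed as (distance rho, bearing theta)."""
    xo, yo = obstacle
    xw, yw = wheelchair
    rho = math.hypot(xo - xw, yo - yw)       # Euclidean distance
    theta = math.atan2(yo - yw, xo - xw)     # angle in (-pi, pi]
    return rho, theta
```

Using `atan2` rather than `atan` of the ratio keeps the correct quadrant for obstacles behind the robot.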

All sensor readings are fitted to a polar grid that divides the robot's surroundings into influence zones, shown in Fig. 3. The influence zones therefore form a polar grid delimited by three ranges and ten regions. Five of these regions cover measurements in front of the wheelchair, and the other five cover the rear. In this work, the regions of the influence zones are the arcs of Fig. 3, each delimited by an inferior and a superior angle. Table 2 shows the regions and their respective limiting angles. The presence or absence of obstacles inside each zone is indicated by a binary representation: a zone receives 1 if there is any obstacle inside it, and 0 otherwise. A binary representation is used to simplify the acquisition of the training dataset.
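A possible encoding of the influence zones is sketched below. The ten angular regions follow Table 2 (approximated with half-open intervals at the shared boundaries); the three range rings are not dimensioned in the text (they appear only in Fig. 3), so the radii here are purely illustrative, as is the zone-index ordering.

```python
def region_index(theta):
    """Angular region (Table 2) containing bearing theta, in degrees.
    Shared boundaries are treated as half-open intervals for simplicity."""
    bounds = [(-90, -60), (-60, -25), (-25, 25), (25, 60), (60, 90),
              (-120, -90), (-155, -120), (90, 120), (120, 155)]
    for i, (lo, hi) in enumerate(bounds):
        if lo <= theta < hi:
            return i
    return 9  # rear region: |theta| >= 155 degrees

def zone_state(readings, ranges=(0.5, 1.0, 1.5)):
    """Binary state of the 30 zones (10 regions x 3 rings) from a list of
    (rho, theta) readings; ring radii in metres are assumed, not the paper's."""
    state = [0] * 30
    for rho, theta in readings:
        if rho > ranges[-1]:
            continue  # obstacle outside the outermost ring
        ring = next(i for i, r in enumerate(ranges) if rho <= r)
        state[region_index(theta) * 3 + ring] = 1
    return state
```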


Fig. 2 Shared control overview

3.2

ANN Training

In order to acquire the training data for supervised learning of the neural network, environmental data were recorded using the V-REP simulator [12]. Maps with common daily environment configurations were built to determine which influence zones are active in each situation. For each different influence zones state detected in the maps, four velocity values, given as outputs of the neural network, were empirically associated. The associated velocities are applied to the virtual wheelchair model in order to verify that navigation at those velocities does not result in a collision. Each influence zones state and its corresponding output values are stored in the dataset for further analysis. This process resulted in a dataset composed of 180 samples. This approach is based on the assumption that a complex environment can be subdivided into a simpler set of configurations. The situations used in the training phase of this work are the following: a wheelchair approaching an obstacle in front of it, a wheelchair approaching an obstacle behind it, a wheelchair navigating through a narrow corridor, a wheelchair crossing a door, a wheelchair following a wall on its left side, a wheelchair following a wall on its right side, and a wheelchair navigating in an obstacle-free environment.

Fig. 3 Influence zones configuration

Table 2 Influence zones' regions and their respective angles

Region | Symbol | Angular interval
Right front | RF | [−90°, −60°)
Frontal right | RF | [−60°, −25°)
Frontal | F | [−25°, 25°]
Frontal left | LF | (25°, 60°]
Left front | LF | (60°, 90°]
Right rear | Rr | (−90°, −120°]
Rear right | RR | (−120°, −155°]
Rear | R | (−155°, −180°] ∪ [180°, 155°)
Rear left | LR | [120°, 155°)
Left rear | LR | [90°, 120°)

The dataset is used to train a Multilayer Perceptron (MLP) neural network. The topology is described as follows: the input layer has 30 entries, corresponding to each zone of the polar grid; the intermediate layer has 100 neurons with logistic activation function; the output layer has 4 neurons with logistic activation function, all related to the robot's velocities. The first output corresponds to the forward linear velocity (v_f), the second to the backward linear velocity (v_b), the third to the right


angular velocity (ω_r) and the last to the left angular velocity (ω_l). Equation 2 represents the output of the first hidden layer, where S stands for the influence zones' states, W₁ is the matrix of connection (synaptic) weights, W₀₁ is the bias vector, and b is the logistic growth rate, defined as 1 in this work.

y₁ = 1 / (1 + e^(−b(W₁ × S + W₀₁ × 1)))  (2)

The output of the hidden layer is the input to the output layer. Hence, the MLP output is given by Eq. 3, where y₁ is the hidden layer output, W₂ is the matrix of connection weights and W₀₂ is the bias vector.

y = 1 / (1 + e^(−b(W₂ × y₁ + W₀₂ × 1)))  (3)

The MLP output represents a weight, in the form of a percentage with respect to the wheelchair's maximum velocities, and lies in the interval [0, 1]. Thus, the MLP output is multiplied by a vector composed of the maximum velocities (v_f^max, v_b^max, ω_r^max, ω_l^max) allowed for the wheelchair. Usually, v_f^max = v_b^max and ω_r^max = ω_l^max; both may be set by the user in the wheelchair controller. The resultant velocities are therefore given by Eq. 4.

V = [v_f^max, v_b^max, ω_r^max, ω_l^max] × y  (4)

Since the training dataset is relatively small in comparison with all 2³⁰ theoretically possible combinations of influence zones states, a cross-validation technique known as leave-one-out cross-validation [13] was employed, in which the dataset is partitioned into n folds, where n is the number of samples. In this technique, n different models are fitted: each model is trained on n − 1 samples, and the remaining sample is used for validation. The resultant models are combined by ensemble averaging. The loss function used to train the MLP was the Mean Squared Error (MSE), and the optimizer was Stochastic Gradient Descent (SGD). Once the ANN models were fitted, we generated an ensemble composed of the top 10 models according to the validation-loss criterion. The outputs of the models that compose the ensemble are combined by averaging, as in Eq. 5.

ŷ = (1/10) Σᵢ₌₁¹⁰ yᵢ  (5)

The average training results were fitted with high accuracy, up to 10⁻³, with a very low standard deviation (10⁻⁵), showing high confidence in the results. The minimum distance, measured from the wheelchair frame to all obstacles in the scene, is used to assess the safety provided by the algorithm: greater distances from obstacles imply safer navigation.
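Eqs. 2–5 amount to a two-layer logistic network followed by ensemble averaging and velocity scaling; the sketch below reproduces that pipeline with NumPy. The trained weights are of course not reproduced here, so the zero-weight model in the test below is only a placeholder.

```python
import numpy as np

def logistic(z, b=1.0):
    """Logistic activation of Eqs. 2-3 (b is the growth rate, 1 in the paper)."""
    return 1.0 / (1.0 + np.exp(-b * z))

def mlp_forward(s, W1, w01, W2, w02):
    """Eqs. 2-3: zone state s (30,) -> four velocity percentages in [0, 1]."""
    y1 = logistic(W1 @ s + w01)          # hidden layer, 100 logistic neurons
    return logistic(W2 @ y1 + w02)       # output layer, 4 logistic neurons

def ensemble_output(s, models):
    """Eq. 5: average the outputs of the models in the ensemble."""
    return np.mean([mlp_forward(s, *m) for m in models], axis=0)

def velocities(y, v_max=0.3, w_max=20.0):
    """Eq. 4: scale the percentages by the wheelchair's maximum velocities."""
    return np.array([v_max, v_max, w_max, w_max]) * y
```

Because the output layer is logistic, every component of `y` stays in [0, 1], so the scaled velocities can never exceed the configured maxima.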

4

Results

To evaluate the performance of the controller, a map of our lab was built in the V-REP simulator to represent an environment similar to that observed on a daily basis. The simulated wheelchair has a ring of 12 ultrasonic sensors and two 1D laser sensors at the front to detect steps and unevenness. Table 3 summarizes the parameters used in the simulations. The first scenario is navigation towards an obstacle, shown in Figs. 4 and 5. Figure 4 shows the linear velocity profile during the navigation: as the wheelchair approaches the obstacle, the linear velocity gradually decreases until the obstacle is close to the wheelchair, at which point the wheelchair has a near-zero linear velocity. The horizontal dashed line represents the mean velocity. The mean linear velocity during navigation was 0.12 m/s. The second scenario takes place in our laboratory. The task delegated to the wheelchair is to start from a room, leave

Table 3 Simulation parameters

Parameter | Value
Maximum linear velocity | 0.3 m/s
Maximum angular velocity | 20 deg/s
Sensors sampling frequency | 10 Hz

Fig. 4 Velocity profile towards an obstacle


Fig. 5 Path traveled during navigation towards an obstacle

Fig. 6 Velocity profile during navigation. The green highlighted areas represent the time interval where the user’s command is sent to the wheelchair and the arrows indicate the directional command sent

this room, cross the corridor and reach another room on the other side of the passage. Figure 6 shows the velocity profile during the navigation; the arrows represent the commands sent by the user at each moment, while Fig. 7 shows the navigation itself. Over 97.2 s, the wheelchair traveled a path of 14.15 m, with an average linear velocity of 0.15 m/s. The average minimum distance measured from the wheelchair frame to the obstacles was 0.68 m, reaching a minimum of 0.04 m while aligning with the narrow passage through the second door. The velocity profile in Fig. 6 shows that the linear velocity decreases as the user approaches the door. When the wheelchair starts to pass through the door, the front sensors identify a free zone that allows the linear velocity to increase once the wheelchair is aligned with the door. The angular velocity used to align the wheelchair with the corridor is almost half of the maximum angular velocity, because there is free space but the corridor walls are within the range of the outer zone. The linear velocity through the corridor is almost constant, with some oscillations due to the pillars that narrow the path in one stretch. Finally, when the wheelchair comes close to the second door, it rotates with low velocity, which allows the user to align the wheelchair towards the door. The linear speed is low through the narrow passage in order to give the user time to send a new rotational command. The angular velocity while the wheelchair is passing through the door is very low, allowing the user to align the wheelchair using the HMI commands. No collisions occurred, and the path tends towards the Voronoi subspace, which is the safest possible.
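The run statistics reported above (path length, mean linear velocity, closest approach to an obstacle) can be recomputed from a logged trajectory in a few lines. The function and field names below are ours, and `dt` matches the 10 Hz sensor sampling of Table 3.

```python
import math

def navigation_metrics(poses, min_dists, dt=0.1):
    """Summarize a simulated run.
    poses: (x, y) samples of the wheelchair frame; min_dists: per-sample
    minimum distance to any obstacle; dt: sampling period in seconds."""
    path = sum(math.dist(a, b) for a, b in zip(poses, poses[1:]))
    duration = dt * (len(poses) - 1)
    return {
        "path_m": path,
        "mean_speed_mps": path / duration if duration else 0.0,
        "min_clearance_m": min(min_dists),
    }
```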

5

Conclusion

This work proposes a novel strategy using a multilayer perceptron neural network ensemble to provide a velocity control law that adapts the linear and angular velocity of a wheelchair according to the configuration of obstacles in the environment, sharing control with the user, who is responsible for sending directional commands through an HMI. Many shared control strategies in the literature make use of expensive sensors or need other information such as odometry, a map of the environment or even SLAM. The present strategy does not require those expensive techniques to protect the user of a robotic wheelchair and allow him/her to drive it through a complex environment. With a robust and reliable shared control technique, this strategy minimizes the stochastic errors from the HMI, works with different types of sensors that can be attached to the wheelchair, and is designed to always lead the user along the safest possible way. Influence zones work as the input of the neural network, which provides the desired velocities for each command at its output. The results demonstrated that the shared control was able to perform collision avoidance without removing the user's control; the person can get close to an obstacle, such as a window, with a very reduced velocity, preventing crashes. The results also show that the adaptive velocity provided by the neural network facilitates tasks such as crossing a door and passing through a corridor. Since the velocity adaptation gives the user time to analyze the environment and send a command with his/her decision, it enables the use of this shared control strategy with interfaces that have a low frequency of command acquisition.

Fig. 7 Path traveled during navigation on laboratory

Acknowledgements This work is supported in part by SEW Eurodrive Brazil. The authors would like to thank Unicamp and UFJF for making this work possible.

References

[1] Maciel M (2000) Portadores de deficiência: a questão da inclusão social. São Paulo em Perspectiva 14:51–56
[2] Pesquisa nacional de saúde: 2013: Indicadores de saúde e mercado de trabalho: Brasil e grandes regiões. Rio de Janeiro (RJ), BR: Instituto Brasileiro de Geografia e Estatística (IBGE), 2016
[3] Pereira GM, Mota SV, Andrade DTG, Rohmer E (2017) Comparison of human machine interfaces to control a robotized wheelchair. In: Proceedings of XIII SBAI
[4] Pinheiro P, Pinheiro CG, Cardozo E (2017) The wheelie—a facial expression controlled wheelchair using 3D technology. In: 26th IEEE international symposium on robot and human interactive communication (RO-MAN). IEEE, pp 271–276
[5] Leaman J, La HM (2017) A comprehensive review of smart wheelchairs: past, present, and future. IEEE Trans Human-Machine Syst 47:486–499
[6] Millán JDR, Rupp R, Müller-Putz G et al (2010) Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges. Front Neurosci 4:161
[7] Cowan RE, Fregly BJ, Boninger ML, Chan L, Rodgers MM, Reinkensmeyer DJ (2012) Recent trends in assistive technology for mobility. J Neuroeng Rehabil 9:20
[8] Olivi L, Souza R, Rohmer E, Cardozo E (2013) Shared control for assistive mobile robots based on vector fields. In: 10th international conference on ubiquitous robots and ambient intelligence (URAI). IEEE, pp 96–101
[9] Levine SP, Bell DA, Jaros LA, Simpson RC, Koren Y, Borenstein J (1999) The NavChair assistive wheelchair navigation system. IEEE Trans Rehabil Eng 7:443–451
[10] Trieu HT, Nguyen HT, Willey K (2008) Shared control strategies for obstacle avoidance tasks in an intelligent wheelchair. In: 30th annual international conference of the IEEE Engineering in Medicine and Biology Society. IEEE, pp 4254–4257
[11] Lopes A, Rodrigues J, Perdigao J, Pires G, Nunes U (2016) A new hybrid motion planner: applied in a brain-actuated robotic wheelchair. IEEE Robot Autom Mag 23:82–93
[12] Rohmer E, Singh SP, Freese M (2013) V-REP: a versatile and scalable robot simulation framework. In: IEEE/RSJ international conference on intelligent robots and systems. IEEE, pp 1321–1326
[13] Friedman J, Hastie T, Tibshirani R (2001) The elements of statistical learning, vol 1. Springer Series in Statistics, New York

Identifying Deficient Cognitive Functions Using Computer Games: A Pilot Study Luciana Rita Guedes, Larissa Schueda, Marcelo da Silva Hounsell, and A. S. Paterno

Abstract

This paper presents the evaluation of a classical videogame for assessing people's learning performance by identifying certain deficient cognitive functions during matches. A pilot study was conducted to verify the feasibility of applying metrics that relate players' behavior to deficient cognitive functions. We present a methodology in which game data were collected from matches using the Game Learning Analytics (GLA) approach. Deficient cognitive functions were selected from the Structural Cognitive Modifiability (SCM) theory. The results obtained in this pilot study point to the feasibility of using the established metrics in future work. Keywords

Computer games • Game analytics • Cognitive functions • Neuroscience

1

Introduction

Digital games are used for many applications in biomedical engineering. There are several examples of serious games used for rehabilitation [1–5], remote monitoring [6, 7], and simulation or education [8–11]. Serious games are software developed like games, although they are applied for purposes beyond entertainment [12]. Moreover, games developed only for entertainment can also be used to support other areas such as medicine or education [13, 14]; Prensky called this digital game-based learning [15]. In this context, game analytics can be applied to identify and understand players' behavior, employing both quantitative and qualitative methods [16–18]. Game analytics aims at providing information about users in order to improve their game experience and, in some cases, their learning outcomes. In cognitive neuroscience, mediated learning experience (MLE) is a theory used to address learning disabilities [19]. Reuven Feuerstein proposed a program called instrumental enrichment, which aims at correcting deficient cognitive functions by using MLE principles [20]. An assessment tool, called the learning potential assessment device, was developed to evaluate the results and diagnose the cognitive status of the subject. With this group of supporting theories and assessment devices [21, 22], one is able to guide the development and exploitation of other technological instruments, such as videogames, and explore their application in cognitive diagnosis. In this work, these deficient cognitive functions are used as a theoretical framework and are related to quantitative indicators of the level of cognitive disability in a subject. This work presents how to use a classical video game to detect deficient cognitive functions and improve them, through a methodology for collecting and analyzing data from the game.

L. R. Guedes (B) · L. Schueda · M. S. Hounsell · A. S. Paterno
Department of Electrical Engineering, Santa Catarina State University, Joinville, SC, Brazil
e-mail: [email protected]
L. R. Guedes · M. S. Hounsell
Department of Computer Science, Santa Catarina State University, Joinville, SC, Brazil

2

Related Work

Recently, biomedical research has commonly used digital games as an important tool. A simple search for the expression "game and biomedic*" in IEEE Xplore finds 1242 results in the last 20 years (since 2000), of which more than 50% (632 results) are from the last five years (since 2015). In biomedicine, these

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_218


games have been used for training professionals such as nurses and doctors, supporting disease diagnosis, training and supporting patients, or evaluating people's health conditions, among several other applications. Keshner et al. [23] presented a review of games (also called virtual realities) used for rehabilitation, including cognitive rehabilitation, and found 312 studies between 2006 and 2018 related to cognitive rehabilitation, assessment and intervention using digital games. Das et al. [24] presented a review of scientific discovery games for biomedical research and pointed out that, among other areas, educational applications (especially science education) may drive the next generation of this kind of game. However, many studies relating cognitive functions and games are frequently focused on elderly people [25–27], and the games used are not digital games, only mechanical or board games. On the other hand, classical video games like Pac-Man are commonly used for testing computational intelligence algorithms or machine learning techniques [28–30]. In summary, we did not find any other study using a classical video game (like Pac-Man) to identify deficient cognitive functions, as we propose in this work.

3

Methodology

This work proposes a methodology for the assessment of people learning performance using a classical videogame to identify some deficient cognitive functions during matches. First, we selected a game. Our methodology was is based on game learning analytics (GLA) model [25]. According to that model we executed the initial five phases: 1. 2. 3. 4. 5.

3.1 The Game Pac-Man was the selected game, the classical maze arcade videogame developed by Namco1 in 1980. The version used in this work is a computer version for web environment. It is a single player, not immersive and bidimensional game. Players can only use keyboard arrows to move the character in four directions: up, down, left and right. We choose this game because these restricted features allow us to evaluate each player actions such as pressed key, chosen direction, answer speed among others. The image presented in Fig. 1 shows the appearance of Pac-Man game used in this work. The goal of this game is to conduct the main character, called Pac-Man, around the maze to collect all dots and pellets, while avoiding four ghosts that pursue him. There are 240 dots and 4 pellets on the map. Each collected dot gives 10 points on the scoreboard and each pellet gives 50 points. The player starts the game with three lives. If Pac-Man makes contact with a ghost, he loses a life. The game ends when all lives are lost. When catching a pellet, the ghosts turn blue and go in the opposite direction, while Pac-Man is able to temporarily catch the ghosts. Catching all blue ghosts gives respectively 200, 400, 800 and 1600 points. We adapted the game software to collect data of players behavior patterns and to be possible to replay any match played to observe characters movements as well as all player actions.

1. Game engine: we modified the game program;
2. Collector: we collected data from the matches played;
3. Aggregator: we included player information;
4. Reporter: we presented data in many formats;
5. Evaluator: we evaluated the results.

The sixth phase of the model (called Adapter) was not implemented in this work because it is not applicable to our goal. Phase 2 was implemented in our proposed methodology in six steps: (1) filling in a consent form; (2) filling in a personal data form; (3) playing the first round of matches; (4) answering a self-mediation form; (5) playing the second round of matches; and (6) filling in the final evaluation form. The next sections present this methodology step by step.

¹ Namco is a Japanese developer and publisher of arcade and home console video games.

Fig. 1 Pac-Man game

Identifying Deficient Cognitive Functions Using Computer Games …

3.2 The Experiment

We conducted a pilot study with four adults, two male and two female, each with different game experience, ranging from novice to expert in playing Pac-Man. Their ages were 23, 45, 51 and 53 years. In this preliminary study, the experiment was conducted by the authors themselves. The proposed methodology had six steps: (1) filling in a consent form; (2) filling in a personal data form; (3) playing the first round of matches; (4) answering a self-mediation form; (5) playing the second round of matches; and (6) filling in the final evaluation form. In the first step, the consent form follows the rules of the ethics committee on research with human beings. In the second step, participants fill in a form with personal data such as name, gender and age, as well as experience and ability in playing electronic games. After that, the Pac-Man game instructions are presented on screen. Then, the participant is invited to play the first round (step 3) of at least three matches. Data about all matches are recorded in a game log. The raw data collected are used to generate a secondary data set, called Pac-Man details, which includes the distances between the main character and each ghost during the matches. Personal player data are also stored. All data are recorded in a database; the database model, with four tables, is represented in Fig. 2. After the first round, in step 4, participants answer a self-mediation form in which they are invited to think about their strategies, weaknesses and strengths in playing the game. This approach is based on the theory of structural cognitive modifiability proposed by Feuerstein [31]. Then, in step 5, the second round starts and the participant is invited to play at least three new matches. The new data are also stored in the database. Finally, in step 6, participants answer an evaluation form about their impressions.

1481

3.3 Cognitive Functions

In cognitive neuroscience, the mediated learning experience is a method used for improving learning disabilities [19]. Feuerstein proposed a program called instrumental enrichment that can produce changes in the cognitive structures of an individual [20]. This program achieves the goal of correcting deficient cognitive functions by using a learning potential assessment device. These deficient cognitive functions can be used as parameters to indicate tendencies in the cognitive disabilities of an individual. This project has been submitted to the Ethics Committee under code number CAAE 36030220.7.0000.0118. Cognitive functions are separated into three phases of mental acts: the input phase, in which information is gathered; the elaboration phase, in which the gathered information is processed; and the output phase, in which the products of elaboration are expressed [31]. In this work, we produced a cognitive functions map for the Pac-Man game; a subset of the deficient cognitive functions is shown in Table 1. Deficient cognitive functions are frequently associated with cognitive impairments. However, people without cognitive problems can present some deficient cognitive functions at different levels. We chose only three of them to be evaluated in the experiment:

• impaired spatial and temporal orientation: an input function; in the game, this behavior can be detected when a player frequently uses incorrect keys to move Pac-Man through the maze (e.g. using the up arrow when Pac-Man cannot move upwards);

• lack of or impaired capacity for relating to two or more sources of information simultaneously: also an input function; in the game, the four ghosts can represent four different sources of information that must be managed by the player, since it is necessary to avoid them while catching dots and pellets around the maze; therefore, the distance between the ghosts and Pac-Man can be a clue about a disability in this cognitive function;

• impulsive acting out behavior: an output function; in the game, when a player repeatedly presses unnecessary keys (e.g. the up arrow when Pac-Man is already moving upwards), it can demonstrate an impulsive behavior.

The metrics chosen for each cognitive function are presented in Table 2. These metrics are obtained from the player while playing the game and are stored in the game log.

3.4 Measurement

The game provides basic information about player performance, such as the score, the remaining Pac-Man lives and the final result (win or lose). These data are not enough to identify player behavior during matches, so it was necessary to collect data along the matches to understand player behavior. We changed the game software to store player information in a database. We used the game learning analytics (GLA) approach, as described in Fig. 3 [32]. During the collecting phase, raw data are extracted. Each game cycle captures a timestamp (date and time, with millisecond precision). For each timestamp, the X and Y positions of the characters (Pac-Man and the ghosts) are stored, along with the score, the remaining lives and the pressed key (when a key was pressed on the keyboard).
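As a hedged illustration of this collecting phase, a per-cycle logger might look like the sketch below. The table and column names are our own assumptions (the actual four-table schema of Fig. 2 is not reproduced here), and we log a single ghost for brevity.

```python
import sqlite3
import time

# Hypothetical single-table, single-ghost schema; the paper's model (Fig. 2)
# actually uses four tables.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE game_log (
    ts_ms INTEGER,            -- timestamp with millisecond precision
    pacman_x INTEGER, pacman_y INTEGER,
    ghost_x INTEGER, ghost_y INTEGER,
    score INTEGER, lives INTEGER,
    key TEXT                  -- pressed key, NULL if none was pressed
)""")

def log_cycle(pacman, ghost, score, lives, key=None):
    """Store one game cycle, mirroring the GLA collecting phase."""
    conn.execute(
        "INSERT INTO game_log VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
        (int(time.time() * 1000), pacman[0], pacman[1],
         ghost[0], ghost[1], score, lives, key))

log_cycle(pacman=(30, 134), ghost=(120, 260), score=150, lives=3, key="ArrowUp")
row = conn.execute("SELECT score, lives, key FROM game_log").fetchone()
print(row)  # (150, 3, 'ArrowUp')
```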


L. R. Guedes et al.

Fig. 2 Database model

Table 1 Some cognitive functions used in the Pac-Man game [31]

Mental act phase  Deficient cognitive functions
Input             Unplanned, impulsive, and unsystematic exploratory behavior; impaired spatial and temporal orientation; lack of or impaired capacity for relating to two or more sources of information simultaneously
Elaboration       Inability to select relevant cues; lack of or restricted hypothetical or inferential reasoning; lack of planning behavior; an episodic grasp of reality
Output            Trial and error behavior; impulsive acting out behavior

Items highlighted in bold were chosen to be evaluated in this work

Table 2 Used metrics

Deficient cognitive functions                                      Metrics
Impaired spatial and temporal orientation                          Wrongly pressed keys
Lack of or impaired capacity for relating to two or more sources   Distance between ghosts and Pac-Man
of information simultaneously
Impulsive acting out behavior                                      Number of unnecessarily repeated key presses

From these data it was possible to derive other information, such as the characters' routes, the distance between Pac-Man and the ghosts, the elapsed time without Pac-Man catching dots or pellets, wrongly pressed keys and so on. In GLA, this phase is described as the aggregator phase. In this pilot experiment we intend to use some of this information as parameters for player performance. Then, in the reporting phase, it was possible to generate reports, graphics and tables from the stored data. After that, in the evaluation phase, the results are evaluated by comparing them with the chosen metrics. The last phase of GLA, called the adaptation phase, is optional and was not implemented in this work. The distance between each ghost and Pac-Man was calculated. For each character, the game provides X and Y positions in pixels on the computer screen. First, it was necessary to convert them into a maze coordinate system, line (l) and column (c), using the following functions (the values of d, t and k were obtained from the software code):



Fig. 3 Above: game learning analytics (GLA) model [32]. Below: proposed methodology based on the GLA model

l = (y − d + t) / 18

where
d = 26,  t = 18  for y ≤ 134
d = 150, t = 88  for 134 < y ≤ 258
d = 256, t = 148 for 258 < y ≤ 364
d = 362, t = 208 for 364 < y ≤ 452
d = 452, t = 258 for 452 < y ≤ 506
d = y,   t = 298 for y > 506

c = (x − 30 + k) / k, where k = 19.52

Then, it was possible to apply the Euclidean distance between two Cartesian coordinates P(l1, c1) and Q(l2, c2), using the values of l (line) and c (column) from Pac-Man's and the ghosts' positions:

D = √((l1 − l2)² + (c1 − c2)²)

where (l1, c1) is Pac-Man's position and (l2, c2) is the ghost's position.

We call this result the ghost proximity, as presented in Table 3. The number of wrongly pressed keys was obtained from the matches by counting each pressed key that could not move Pac-Man in the corresponding direction. This happens when the player chooses an invalid direction key, such as trying to move Pac-Man upwards when it is at the top of the maze. In Table 3, this number is presented as the wrong keys rate. We calculated the rate of wrong keys relative to the total number of keys pressed in the match:
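Under our reading of the extracted conversion formulas, l = (y − d + t)/18 and c = (x − 30 + k)/k with k = 19.52, the pixel-to-maze conversion and the ghost distance can be sketched as follows (the function names are ours, not the paper's):

```python
import math

def to_line(y):
    """Convert a Y pixel coordinate to a maze line: l = (y - d + t) / 18,
    with d and t chosen by the Y range (values taken from the software code)."""
    if y <= 134:
        d, t = 26, 18
    elif y <= 258:
        d, t = 150, 88
    elif y <= 364:
        d, t = 256, 148
    elif y <= 452:
        d, t = 362, 208
    elif y <= 506:
        d, t = 452, 258
    else:
        d, t = y, 298
    return (y - d + t) / 18

def to_column(x, k=19.52):
    """Convert an X pixel coordinate to a maze column: c = (x - 30 + k) / k."""
    return (x - 30 + k) / k

def ghost_distance(pacman_xy, ghost_xy):
    """Euclidean distance in maze coordinates between Pac-Man and one ghost."""
    l1, c1 = to_line(pacman_xy[1]), to_column(pacman_xy[0])
    l2, c2 = to_line(ghost_xy[1]), to_column(ghost_xy[0])
    return math.hypot(l1 - l2, c1 - c2)

print(to_line(26), to_column(30))  # 1.0 1.0
```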

W = (n / p) × 100

where n = number of wrongly pressed keys and p = total keys pressed in the match.

The same model was applied to calculate unnecessarily pressed keys, by counting every key pressed by the player to move Pac-Man in a direction in which it is already moving. The game mechanics require the player to press only one arrow key at a time. Thus, if Pac-Man is moving to the left, it will keep moving left until it finds a wall, and it is not necessary to press the left key again. We also calculated this rate relative to the total number of keys pressed in the match; Table 3 presents this metric as the unnecessary keys rate:

U = (v / p) × 100

where v = number of unnecessarily pressed keys and p = total keys pressed in the match.

Finally, a set of graphs and tables was generated to support the analysis for this experiment.
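Both rates follow the same pattern, so they can be computed with one small helper; this is a sketch, and the name `key_rate` is ours:

```python
def key_rate(count, total_pressed):
    """Rate (%) of a key category relative to all keys pressed in a match:
    W = (n / p) * 100 for wrong keys, U = (v / p) * 100 for unnecessary keys.
    Returns 0.0 for an empty match to avoid division by zero."""
    if total_pressed == 0:
        return 0.0
    return 100.0 * count / total_pressed

# Example: 4 wrong keys and 16 unnecessary keys out of 100 presses.
print(key_rate(4, 100), key_rate(16, 100))  # 4.0 16.0
```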

4 Results

According to the GLA model, we used both collected and aggregated data to generate summaries and visual data reports such as graphs and tables. The main goal was to identify deficient cognitive functions. The main results are summarized in Table 3, where the calculated metrics are presented for both the first and second rounds of matches played.

Table 3 Pilot study results

Metrics                      Player 1  Player 2  Player 3  Player 4
First round
  Won matches rate (%)           40        50        20         0
  Wrong keys rate (%)             9         2        14        17
  Unnecessary keys rate (%)      16         5         4        15
  Ghosts proximity              401       359       387       381
Second round
  Won matches rate (%)           50       100        60        17
  Wrong keys rate (%)             4         8        12        25
  Unnecessary keys rate (%)      38         2         5        11
  Ghosts proximity              453       342       364       362

Items highlighted in bold indicate the player with the highest rate

We can identify player 2 as experienced and player 4 as a novice, as denoted by the rate of won matches in both rounds. Player 4 presented the highest rate of wrongly pressed keys. It is possible to observe that all players improved their results in the second round of matches. Player 1 had the highest rate of unnecessary keys and the highest ghost proximity in the second round. Ghost proximity denotes that the ghosts were closer to Pac-Man during most of the match time. According to the data, we could infer an impulsive acting out behavior for Player 1, who had an unnecessary keys rate of 16% and 38% during the first and second rounds, respectively. We could also observe a possible impaired spatial and temporal orientation in Player 4, who had wrong keys rates of 17% and 25% in each round. We also observed that the index used to measure a possible lack of or impaired capacity for relating to two or more sources of information simultaneously was not effective. Ghost proximity was calculated from the distances between Pac-Man and the ghosts. After that, the frequency with which they were close to or far from each other during the matches was included in the result. Nevertheless, these data were not enough to demonstrate that deficient cognitive function. As visual reports, we generated a set of graphs to support player behavior analysis. The graph presented in Fig. 4 (bar graph) shows the game variables at each event along one match played by a player (in chronological order). We can observe that the player chose the lower-right corner first and the upper-right corner last. It is possible to conclude that the player won the match because the game was over before he or she lost a second life. The heatmap, also presented in Fig. 4 (color map), denotes Pac-Man's course. Red areas indicate the most visited places and yellow areas the least visited, according to the scale of colors from yellow to red.

Fig. 4 Examples of graphs generated in the experiment: main events (bar graph) and heatmap (path taken by the character as a color map)


5 Discussion

This pilot study allowed us to assess the feasibility of our proposed methodology. We intend to apply this methodology on a larger scale in order to verify these results. In those future experiments, we will generate data as graphs, maps and tables, as presented here, after each round, to guide researchers in the mediation process. As a pilot study, we had some limitations related to the number of participants. However, our goal of testing the proposed methodology was reached. In future work, we will consider using a gold standard to validate the data.

6 Conclusion

The authors consider that correlating deficient cognitive functions, or any other cognitive deficiency, with game analytics data requires a set of elaborate parameters customized for each behavioral characteristic to be inferred. This is a preliminary development that, to our knowledge, is relatively novel in the medical, psychological and educational fields. The methodology indicated here appears appropriate and may be suggested for extensive use, for diagnostic, therapeutic or medical purposes, after results from larger sets of subjects are obtained in future work.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Noveletto F, Soares A, Mello B, Sevegnani C, Bertemes Filho P (2016) Reabilitação do Equilíbrio de Pacientes Hemiparéticos por AVC Utilizando um Jogo Sério. In: XXV Congresso Brasileiro de Engenharia Biomédica
2. Noveletto F, Soares A, Bertemes Filho P, Hounsell M, Eichinger FL, Domenech S (2016) Jogo Sério Baseado em Sinal de Força para Reabilitação Motora de Hemiparéticos por Acidente Vascular Cerebral. In: XXV Congresso Brasileiro de Engenharia Biomédica
3. Jimenez N, Cardoso V, Lyra J, Longo B, Glasgio G, Frizera Neto A, Bastos T (2016) Jogos Sérios Para Reabilitação Motora de Pacientes Pós-AVE. In: XXV Congresso Brasileiro de Engenharia Biomédica
4. Esteves G, Aragão E, Barbosa M, Rodrigues M (2016) Biofeedback via smartphone com Uso de EMG para Auxílio à Reabilitação por Meio de Jogo Interativo. In: XXV Congresso Brasileiro de Engenharia Biomédica
5. Fernandes F, Cardoso A, Lamounier Júnior E (2016) Aplicação de Jogos Sérios para Reabilitação de Pacientes com Deficiência Física nos Membros Superiores. In: XXV Congresso Brasileiro de Engenharia Biomédica
6. Wilkinson A, Tong T, Zare A, Kanik M, Chignell M (2018) Monitoring health status in long term care through the use of ambient technologies and serious games. IEEE J Biomed Health Inf 22(6):1807–1813
7. Joshi V, Wallace B, Shaddy A, Knoefel F, Goubran R, Lord C (2016) Metrics to monitor performance of patients with mild cognitive impairment using computer based games. In: 2016 IEEE-EMBS international conference on biomedical and health informatics (BHI), pp 521–524
8. Silva E, Ianaguivara E, Godinho T, Martucci H, Scardovelli T, Boschi S, Pereira A (2016) Jogo Computadorizado para Identificar Características de Falta de Atenção e Hiperatividade. In: XXV Congresso Brasileiro de Engenharia Biomédica
9. Lima I, Bissaco M, Lima L (2016) Quinzinho em Torre de Pedra: Jogo para Auxiliar as Crianças com Transtorno de Aprendizagem da Escrita. In: XXV Congresso Brasileiro de Engenharia Biomédica
10. Psaltis A, Apostolakis K, Dimitropoulos K, Daras P (2017) Multimodal student engagement recognition in prosocial games. IEEE Trans Games 10(3):292–303
11. Cid E, Scarpioni A, Rodrigues S, Bissaco M (2016) Serious Game para Auxiliar Jovens com Síndrome de Down a Aprimorar Sua Expressão Escrita. In: XXV Congresso Brasileiro de Engenharia Biomédica
12. Djaouti D, Alvarez J, Jessel J, Rampnoux O (2011) Origins of serious games. In: Serious games and edutainment applications. Springer, London, pp 25–43
13. Teixeira A, Tomé A, Roseiro L, Gomes A (2018) Does music help to be more attentive while performing a task? A brain activity analysis. In: 2018 IEEE international conference on bioinformatics and biomedicine (BIBM), pp 1564–1570
14. Khan Q, Hassan A, Rehman S, Riaz F (2017) Detection and classification of pilots cognitive state using EEG. In: 2017 2nd IEEE international conference on computational intelligence and applications (ICCIA), pp 407–410
15. Prensky M (2003) Digital game-based learning. Comput Entertain (CIE) 1(1):21–21
16. El-Nasr M, Desurvire H, Aghabeigi B, Drachen A (2013) Game analytics for game user research, Part 1: a workshop review and case study. IEEE Comput Graph Appl 33(2):6–11
17. Callaghan M, McShane N, Eguiluz A (2014) Using game analytics to measure student engagement/retention for engineering education. In: 11th international conference on remote engineering and virtual instrumentation, pp 297–302
18. Cano A, García-Tejedor A, Alonso-Fernández C, Fernández-Manjón B (2019) Game analytics evidence-based evaluation of a learning game for intellectual disabled users. IEEE Access 7:123820–123829
19. Feuerstein R, Klein P, Tannenbaum A (1991) Mediated learning experience (MLE): theoretical, psychosocial and learning implications. Freund Publishing House Ltd.
20. Feuerstein R, Jensen M (1980) Instrumental enrichment: theoretical basis, goals, and instruments. In: The educational forum, vol 44, issue 4. Taylor & Francis Group, pp 401–423
21. Haywood C, Lidz C (2006) Dynamic assessment in practice: clinical and educational applications. Cambridge University Press
22. Tzuriel D (2001) Dynamic assessment of young children. In: Dynamic assessment of young children. Springer, pp 63–75
23. Keshner E, Weiss P, Geifman D, Raban D (2019) Tracking the evolution of virtual reality applications to rehabilitation as a field of study. J NeuroEng Rehabil 16(1):76
24. Das R, Keep B, Washington P, Riedel-Kruse IH (2019) Scientific discovery games for biomedical research. Ann Rev Biomed Data Sci 2:253–279
25. Agres K, Lui S, Herremans D (2019) A novel music-based game with motion capture to support cognitive and motor function in the elderly. In: 2019 IEEE conference on games (CoG), pp 1–4
26. Li H, Zhang T, Yu T, Lin C, Wong AM (2012) Combine wireless sensor network and multimedia technologies for cognitive function assessment. In: Third international conference on intelligent control and information processing, pp 717–720
27. Liu K, Huang H, Chen S (2017) Combining narrative and dynamic assessment theory to design a game for the elderly on cognitive function training. In: 2017 international conference on applied system innovation (ICASI), pp 1849–1852
28. Rohlfshagen P, Lucas S (2011) Ms Pac-Man versus ghost team CEC 2011 competition. In: 2011 IEEE congress of evolutionary computation (CEC), pp 70–77
29. Gallagher M, Ryan A (2003) Learning to play Pac-Man: an evolutionary, rule-based approach. In: The 2003 congress on evolutionary computation, CEC'03, vol 4, pp 2462–2469
30. Robles D, Lucas S (2009) A simple tree search method for playing Ms. Pac-Man. In: 2009 IEEE symposium on computational intelligence and games, pp 249–255
31. Feuerstein R, Hoffman M, Rand Y, Jensen M, Tzuriel D, Hoffmann D (1985) Learning to learn: mediated learning experiences and instrumental enrichment. Special Serv Schools 3(1–2):49–82
32. Alonso-Fernandez C, Calvo A, Freire M, Martinez-Ortiz I, Fernandez-Manjon B (2017) Systematizing game learning analytics for serious games. In: 2017 IEEE global engineering education conference (EDUCON), pp 1111–1118

A State of the Art About Instrumentation and Control Systems from Body Motion for Electric-Powered Wheelchairs A. X. González-Cely, M. Callejas-Cuervo, and T. Bastos-Filho

Abstract

Instrumentation and control for automatic wheelchairs have diversified with technological developments that include non-invasive built-in sensors to capture body signals. The use of micro-electromechanical sensors (MEMS), tilt sensors, electroencephalographic (EEG) signals, electromyographic (EMG) signals and cameras to detect movements of the head and hands has been included in the instrumentation of controllers that operate wheelchairs, with the aim of improving the mobility of people with disabilities in the lower or upper limbs, or in both. The goal of this work is to conduct a detailed review of articles about instrumentation and controllers based on body motion capture for automatic wheelchairs. The articles found develop different types of instrumentation for the safety of the user. Applying the inclusion and exclusion criteria, 30 articles published between 2003 and 2019 were analyzed. Controllers that implement head motion capture, as well as their electronic instrumentation, were given special attention with a view to future developments. The designs that appeared in some articles demonstrated limitations due to specific user or location requirements, as well as offering solutions to these constraints. In conclusion, controllers that receive information about body motion are useful for people with the aforementioned disabilities, resulting in greater independence and dexterity in the mobility of the wheelchair.

Keywords

Electric-powered wheelchair · Instrumentation · Control · Head movement · Hand movement

A. X. González-Cely (corresponding author), M. Callejas-Cuervo: Software Research Group, Universidad Pedagógica y Tecnológica de Colombia, Calle 24 # 6-138, Tunja, Colombia. e-mail: [email protected]
T. Bastos-Filho: Universidade Federal do Espírito Santo, Vitoria, Brazil

1 Introduction

Technical assistance or supportive technologies enable actions to be taken by people who present deficiencies in functional performance. A specific case is disability in the upper or lower limbs, or in both. The causes related to these disabilities are: problems during gestation and birth, or those of congenital origin; medullar (spinal cord) injuries and internal body difficulties; also sedentary behaviours [1], cerebral palsy, diplegia, dislocations, contractures and scoliosis [2], and spina bifida [3]. It is also possible to include causes related to neural tube defects, referring to birth defects of the brain, spinal column and spinal cord [4]. Moreover, the type of environment in which the person lives must be considered in order to improve their quality of life. The person is perceived within a different sociocultural context from those who do not suffer from neuromusculoskeletal disorders or those related to movement [5]. Knowing the causes that give rise to deficiencies in functional performance, the number of people who possess a certain type of disability at the global, national and local levels can be established. According to the World Health Organization (WHO), “About 10% of the global population have disabilities…”, that is, approximately 650 million people. Of this number, 10% require the use of a wheelchair because of disabilities in their upper or lower limbs, or in both [6]. Wheelchairs are classified as either manual or electric-powered. They can also be categorised according to type of use, temporary or permanent, and also depending on

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_219



A. X. González-Cely et al.

the need for postural support of the user. Another classification considers the design for the portability of the wheelchair, such as folding, quick-release wheels or detachable wheels [7]. Manual wheelchairs have the disadvantage of requiring an assistant whenever the user cannot exert the physical effort needed to move, and thus the use of automatic wheelchairs has grown in popularity in recent years thanks to electrical and electronic advances in this area. An automatic wheelchair has mechanical components, plus a pair of motors with their respective batteries and an electronic control system. The currently marketed system has a joystick to control the position of the chair. However, it is not suitable for people with tetraplegia, that is, damage to the spinal cord, cervical spine or brain which restricts the movement of both the lower and upper limbs. The different types of controllers include the instrumentation of the wheelchair and/or the user. The advances in this field can be attributed to researchers who have worked on the design and implementation of wheelchair technology and their respective contributions. Therefore, the authors pose the following question, with the purpose of conducting an adequate search for articles: under what operational characteristics do wheelchairs function, and how precise are the control systems implementing instrumentation for body motion capture that were developed between 2003 and 2019? The instrumentation and control systems analysed in this article are based on body motion capture, specifically that of the head and hands, and vary in the instrumentation used. In addition, the analysis of safety systems and low-cost design for the development of autonomous wheelchairs is emphasized.
The objective of this article is to review the data published from 2003 to 2019 related to the design and implementation of navigation controllers using sensor technologies such as MEMS or others that capture body motion and cameras that operate the wheelchair based on those movements. This analysis will provide information regarding parameters that will be implemented in future developments in this field.

2 Materials and Methods

The steps taken to develop this article were: (1) search in specialized databases to obtain information on the different types of controllers implemented in automatic wheelchairs from 2003 to 2019; (2) process of inclusion and/or exclusion of articles; (3) data extraction and information quality; and (4) an analysis and presentation of the results of the revised articles.

3 Search in Specialized Databases

The databases that were used were IEEE Xplore, Scopus, Science Direct, Research Gate and Google Scholar. The terms included in the searches were: TITLE-ABS-KEY (wheelchair AND control AND head AND movement) obtaining 5 results on Science Direct, 13 results on IEEE Xplore, 13 results on Google Scholar, 127 results on Scopus and 48 results on Research Gate. The search parameters for hand motion capture were: TITLE-ABS-KEY (wheelchair AND control AND hand AND movement) obtaining 5 results on Science Direct, 15 results on IEEE Xplore, 27 results on Google Scholar, 212 results on Scopus and 38 results on Research Gate, limiting the search year range from 2003 to 2019.

4 Selection and Inclusion Criteria

The selected articles were included based on the following criteria: (a) publication date: September 2003 to November 2019; (b) approach based on the control of wheelchairs, using non-invasive built-in body motion capture systems. This article excluded other designs of controllers with instrumentation that do not refer to body motion capture. Using the previously mentioned inclusion and exclusion criteria, 32 articles were selected from the total articles found, of which 30 were included for review. The main reason for the exclusion of documents was the presentation of the mathematical model without its respective implementation in a prototype or electric-wheelchair. Autonomous wheelchair control must be specified for open, closed or both environments, and ensure the independence from an assistant. Finally, the implementation should be low-cost, including the mechanical parts of the wheelchair. The articles were selected using the following inclusion criteria: controller design and implementation in a wheelchair or prototype. The 30 articles included in the study mentioned the instrumentation and control of the wheelchair taking into account that 24 of them mentioned a type of control by means of head motion capture and 6 articles mentioned control with hand movements. In these articles different control techniques implementing software and hardware with specific characteristics are highlighted.

5 Data Extraction and Evaluation Criteria

The procedure for organising and classifying the information was carried out on the basis of the results obtained using the search parameters mentioned previously.


6 Synthesis, Analysis and Presentation of Results

The results section is divided into two parts:
• Controllers designed from head motion capture.
• Controllers designed from hand motion capture.

7 Results

The analysed articles have a graphic user interface (GUI) for interaction with the wheelchair user, in addition to collecting input data using MEMS technology, cameras and tilt sensors. Processing systems include microcontrollers, microprocessors and System on Chip (SoC). A flowchart representing the different phases of the state of the art based on PRISMA is shown in Fig. 1.

8 Controller Design from Head Motion Capture

A summary in Table 1 gives the reader an overview of controller design based on head motion capture. Controllers that use sensors for head motion capture allow the detection of changes in position from tilt angles or image processing. Delving into the subject, Chen et al. [8] describe the design of a wheelchair using tilt sensors to change its direction and speed. In [9], a simple method for recognising head movements to control the wheelchair, focused on patients with tetraplegia and using capacitive sensors, is proposed and evaluated. The design was tested on 3 participants with specific characteristics, obtaining an accuracy between 96 and 98%; when choosing a random participant, the accuracy changed to 57–80%. Kaur and Vashisht [10] developed an automated system for controlling the rotation of the wheelchair using head

Fig. 1 Phases of the state of the art according to PRISMA format for motion capture control in wheelchairs


and hand movements, by means of an accelerometer. In [11], a communication system was designed using tilt sensors to operate the wheelchair and wireless modules for data transmission; the wheelchair is designed for quadriplegic people. A classification system was presented in [12] using neural networks implementing a magnified gradient function (MGF) algorithm for the recognition of head movements. The system does not have instrumentation or control, but it classifies the signals to be applied in a basic wheelchair controller. The results presented in [13] show a controller with a visual recognition system implementing the Adaboost algorithm for real-time execution. Furthermore, the investigation proposed in [14] aims to improve mobility using tilt sensors that drive a wheelchair in 4 directions: forward, backward, right and left. The M3S protocol, established by the European Commission, is applied in this system, with the advantage of real-time monitoring. A control system is proposed in [15] using head movements through Linux and artificial intelligence. A system design with a camera for detecting gestures was developed in [16]. It has a GUI, and the information range obtained from the stereo camera allows the wheelchair to be used indoors and outdoors; the article contextualizes the reader about the location of the camera and its most appropriate use. Then, [17] designed a microcontrolled system that allows the manipulation of a wheelchair by head movements. The algorithm is implemented by suppressing the gravitational component, reading and filtering the accelerometer output, setting the thresholds, establishing the commands, making visual signals, turning around and recognizing unpredictable situations. The controller developed in [18] implements accelerometers to distinguish head movements and determine the action to be performed by the controller.
The limitations of this project are the lack of an obstacle detection system, a navigation assistant and navigation control. A method that translates the position of the user's head into a direction to control the wheelchair is proposed in [19], and the proposed algorithm is tested in real time. A Proportional-Integral-Derivative

[Fig. 1 (PRISMA flow): 26 search records for head motion capture and 6 for hand motion capture; 2 records were excluded for not meeting the publication date criterion; 30 full-text articles were screened and included in the qualitative synthesis: 6 implementing hand controllers, 23 implementing head controllers and 1 hybrid (hand and head control).]


A. X. González-Cely et al.

Table 1 Description of the instrumentation and control of head motion capture

Ref | Instrumentation | Control system
[8] | Tilt sensors, microcontroller and wheelchair | The microcontroller receives and processes the signals and determines 5 directions
[9] | Capacitive sensors | The system receives and processes the signals through feature extraction
[10] | ADXL330 sensor and obstacle detection system | The microcontroller's ADC digitizes the accelerometer data to drive the wheelchair
[11] | Tilt sensors, wireless modules and wheelchair | The system determines the distance to obstacles to avoid them and steers the wheelchair with head orientation
[12] | Accelerometer, computer, databases | A classification system is developed with neural networks using a magnified gradient function (MGF)
[13] | Camera, ultrasonic sensors, joystick, DSP | It implements the Adaboost algorithm for face detection, executed in real time
[14] | Tilt sensors, DSP and CAN | The system is developed on a DSP using serial communication
[15] | Tilt sensors, RS-232 module, a GUI | The embedded system has 3 layers: hardware, software and control
[16] | Stereo camera, a GUI and the wheelchair | The system uses an algorithm to detect head orientation; the estimated angle is processed using a state machine
[17] | Accelerometers, microcontroller | The system uses a recognition technique based on data processing
[18] | Accelerometers, microcontroller | The microcontroller executes the process and sends signals to the motors
[19] | Video camera, encoder, touch screen, DAQ | The system implements a PID controller and uses Kalman filters
[20] | MPU6050 and ultrasonic sensors, microcontroller, solar panel | The system translates the head orientation into the direction the wheelchair should take
[21] | Reception and transmission system, ATMega microcontroller | The system maps a range of accelerometer values to commands that drive the wheelchair
[22] | IMU, digital signal conditioning | A neural network processes the data; the controller was tested in simulation
[23] | MEMS sensors, Bluetooth module, GPS and GSM | The system can make a GPS link with text messages, allowing the user to navigate over long distances
[24] | EEG sensors and a GUI | The user selects a control method: voice recognition commands or head movements
[25] | Accelerometers, EMG signals and a GUI | The system is simulated in LabVIEW and the EMG signals are processed
[26] | Infrared sensors | The system is developed as a state machine and implemented on a wheelchair
[27] | Camera | The images are processed using the Adaboost algorithm, including lip movement detection
[28] | Sensors for detecting movements, a GUI | Neural networks are implemented to recognize head commands
[29] | Tilt and orientation sensors, GSM | The sensors have a calibration system and thresholds to generate appropriate movements
[30] | LIDAR, laser and infrared sensors | One control mode operates on head movements and another on speed
[31] | HMU, WiFi module, infrared sensors and microcontroller | The UDP protocol is implemented to send data through the WiFi module

(PID) controller is implemented for its execution, using Kalman filters to read the signals with better accuracy. A system implemented in [20] controls a wheelchair with a seat belt and ultrasonic sensors for detecting obstacles. The system detects head movements, recognizes them and then executes an action using its motors, translating head movements into movements of the wheelchair. Then, [21] describes a system for controlling a wheelchair implementing accelerometers and a radio-frequency module. The system has two fundamental parts: a receiver and a transmitter. The transmitter system has an accelerometer, an RF transmitter and a

transmitting antenna. The system was tested indoors and outdoors, proving more effective indoors. The research in [22] implements an Inertial Measurement Unit (IMU) to capture head movements and uses MATLAB® for data classification. The IMU provides data such as the angular velocity of body motion using an accelerometer and a gyroscope. The data are fed into a neural network in order to obtain better results. Then, Ruzaij et al. [23] present a system combining voice recognition and head motion control for people with quadriplegia, using MEMS sensors to run the controller. It has ultrasonic sensors to ensure the safety of the wheelchair in case of a collision. The article presented in [24] develops a robotic system, which has input commands

A State of the Art About Instrumentation and Control Systems …

through blinking, as well as head and eye movements and EEG. The system is verified and meets the technical assistance requirements for use by people with different levels of disability. In [25], an automatic system is developed to control the rotation of the motors of a wheelchair based on head movements, focused on people with quadriplegia. The information is collected through accelerometers for all four directions; the system does not include obstacle detection. Then Garcia and Christensen [26] present a human-machine interface (HMI) to control a wheelchair using head movements. Head position is determined by infrared sensors. The tests show that the system is functional in real life; the position of the head is analysed by tilt, allowing the wheelchair to be moved right, left, forward and backward, with a reference position called the neutral point. A set of commands is generated in [27] by head movements, which allows the wheelchair to be operated in all four directions. The camera collects information from head gestures, which is processed with the Adaboost algorithm, including lip detection to make the controller more accurate. Neural networks are used in [28] to recognize commands generated by head movements. The use of neural networks makes the system more accurate and faster; information was collected from 7 participants and classified by specific characteristics. Then, Ruzaij et al. [29] developed a controller focused on people with quadriplegia, which collects information from the tilt of the head using an orientation sensor and sends an alarm message in case of sensor failure or any abnormal situation in the user's behaviour. It also has a confirmation command to operate the wheelchair when the user executes an involuntary movement. The system of [30] presents two control modes, the first to operate the wheelchair direction and the second to control the speed.
Control commands are generated through 3D mapping analysis, and the system includes obstacle detection. The system was implemented on a physical wheelchair. Finally, Gomes et al. [31] present an interface that uses inertial sensors placed on the head to direct and control the wheelchair. The UDP (User Datagram Protocol) is used to send data over a WiFi module. The system has a biaxial joystick and an emergency button. The combined inertial and infrared sensor data feed the control system, and each movement corresponds to a two-axis control action.
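Several of the surveyed controllers close the loop with a PID controller combined with filtered sensor readings, as in [19]. A minimal discrete-time PID sketch is given below; the gains, time step and the crude integrator plant standing in for the wheelchair dynamics are all illustrative assumptions.

```python
class PID:
    """Discrete PID: u[k] = Kp*e[k] + Ki*sum(e)*dt + Kd*(e[k] - e[k-1])/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Track a heading of 30 degrees starting from 0 (gains are made up).
pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.02)
heading = 0.0
for _ in range(200):
    # Crude plant: the heading integrates the control output.
    heading += pid.update(30.0, heading) * 0.02
print(round(heading, 1))
```

The integral term removes steady-state error, which is why the surveyed systems favour PID over plain proportional control for closed-loop heading tracking.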

9

Controller Design from Hand Motion Capture

A summary in Table 2 gives the reader an overview of controller designs based on hand motion capture. The design of controllers using hand movements is included in this article because it is important to know the


control strategies for motion capture. Therefore, in [32] a controller is developed using MEMS sensors aimed at people with quadriplegia; the system can steer the wheelchair. The purpose is to implement a control scheme based on the recognition of signals and features from hand gestures. Moreover, [33] developed a system using an accelerometer and ultrasonic and infrared sensors integrated into the wheelchair, which is controlled by a glove that detects hand movements according to inclination; these data are sent to the microcontroller for processing. Its main advantages are low cost and simple use. A control system operated by hand movements using MEMS technology is presented in [34], with experimental values of the "X" and "Y" axes taken for all four directions. Another control system includes MEMS technology [35]. The system was organised to obtain control signals; recognising the gesture with the Bayes method, it sends a control command to the wheelchair. The control is based on the linear and rotational speed of the wheelchair. Also, [36] developed a control system that implements MEMS technology on the user's hand. Finally, the system in [37] implemented a MEMS gyroscope and accelerometer, as well as a neural network that learns and recognizes the user's movement sequences for a specific action with the wheelchair.
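The Bayes-method gesture recognition mentioned above can be illustrated with a minimal Gaussian naive Bayes classifier over two tilt features. The training samples and command labels below are invented for the example and are not data from the cited works.

```python
import math
from collections import defaultdict

def fit(samples):
    """samples: list of (label, (x, y)). Returns per-class means/variances."""
    stats = {}
    by_label = defaultdict(list)
    for label, feats in samples:
        by_label[label].append(feats)
    for label, rows in by_label.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / n + 1e-6
                     for col, m in zip(zip(*rows), means)]
        stats[label] = (means, variances)
    return stats

def predict(stats, feats):
    """Pick the class maximizing the Gaussian log-likelihood (equal priors)."""
    def log_lik(label):
        means, variances = stats[label]
        return sum(-0.5 * math.log(2 * math.pi * var)
                   - (x - m) ** 2 / (2 * var)
                   for x, m, var in zip(feats, means, variances))
    return max(stats, key=log_lik)

# Toy training data: (command, (x_tilt, y_tilt)) in arbitrary units.
data = [("forward", (0.8, 0.0)), ("forward", (0.9, 0.1)),
        ("left", (0.0, -0.8)), ("left", (0.1, -0.9)),
        ("stop", (0.0, 0.0)), ("stop", (0.05, 0.05))]
model = fit(data)
print(predict(model, (0.85, 0.05)))  # forward
```

Naive Bayes is attractive in this context because it trains from a handful of per-user calibration gestures and evaluates cheaply on a microcontroller-class device.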

10

Discussion

The review of the literature focuses on research about the instrumentation and control systems of wheelchairs, analysing the development of control systems from 2003 to 2019. The two categories below classify the instrumentation applied to the wheelchair and the user. The review includes 30 articles about body motion capture to operate a wheelchair.

1. Controllers designed based on head motion capture. These controllers require filtering and amplification stages. The movement of the head is translated into the wheelchair movement, that is, if the movement is executed in a forward direction, the wheelchair will also move forward. These systems implement PID controllers and other types of control to improve the closed-loop stability of the system.

2. Controllers designed based on hand motion capture. These controllers help people with disabilities in their lower limbs, but are not useful for people with tetraplegia.

The review conducted indicates that the authors tend to use tilt sensors, cameras and MEMS technologies for body motion signal acquisition given that they possess ideal



Table 2 Description of the instrumentation and control of hand motion capture

Ref | Instrumentation | Control system
[32] | Transmission and reception system, accelerometers | The specified angle generates a movement in the prototype
[33] | MEMS, ultrasonic and infrared sensors | A glove controls the wheelchair through the user's hand movements
[34] | MEMS sensors, microcontroller and LCD | Gestures are conditioned and recognized by the Bayes method
[35] | MEMS accelerometers, MCU and Bluetooth module | The wheelchair control is based on linear and rotational speed
[36] | MEMS accelerometer, wireless communication module and microcontroller | A PID controller is implemented and simulated in MATLAB® Simulink
[37] | MEMS accelerometer and gyroscope | The system uses neural networks and movement patterns which are recognized to trigger an action with the wheelchair

characteristics in terms of size and resolution for the development of this type of controller. There are multiple control methods, from ON/OFF control to the use of artificial intelligence. Not all systems have safety controls for obstacle detection, which can become a serious flaw regarding the safety of the user. Moreover, several controllers include an HMI to facilitate handling and make the system more user-friendly. Reference [9] indicates that the characterization of the user is vital for the correct operation of the controller, because this type of instrumentation varies depending on the user. Finally, the control systems proposed by the different authors provide insights for new designs that improve on the constraints discussed above.

11

Conclusion

From a technological point of view, the development of body motion capture systems makes independence possible: the systems use different types of instrumentation that allow the user to operate the wheelchair without physical effort or significant body movement, while guaranteeing the precision of the system and adapting to the conditions of the user. From a research perspective, this review shows the constraints that have been highlighted by some of the papers and invites researchers to improve the controllers by resolving these problems. From a health outlook, people suffering from tetraplegia gain the possibility of movement without assistance, which improves the independence of the user and their emotional wellbeing.

Acknowledgements To the Software Research Group (GIS) of the School of Computer Science, Department of Engineering, Universidad Pedagógica y Tecnológica de Colombia (UPTC).

References

1. Ganz F, Hammam N, Pritchard L (2020) Sedentary behavior and children with physical disabilities: a scoping review. Disabil Rehabil 1–13. https://doi.org/10.1080/09638288.2020.1723720
2. Sol M, Verschuren O, de Groot L et al (2017) Development of a wheelchair mobility skills test for children and adolescents: combining evidence with clinical expertise. BMC Pediatr 17:1–51. https://doi.org/10.1186/s12887-017-0809-9
3. Wai E, Owen J, Fehlings D et al (2000) Assessing physical disability in children with spina bifida and scoliosis. J Pediatr Orthop 20:765–770
4. Busby A, Armstrong B, Dolk H et al (2005) Preventing neural tube defects in Europe: a missed opportunity. Reprod Toxicol 20:393–402
5. WHO at https://apps.who.int/iris/bitstream/handle/10665/42407/9241545429.pdf;jsessionid=7E462920F28D36DF0F8CE141DD849D65?sequence=1?
6. WHO at https://apps.who.int/iris/bitstream/handle/10665/205041/B4616.pdf?sequence=1&isAllowed=y
7. WHO at https://www.who.int/publications/i/item/guidelines-on-the-provision-of-manual-wheelchairs-in-less-resourced-settings
8. Chen Y, Chen S, Chen W et al (2003) A head orientated wheelchair for people with disabilities. Disabil Rehabil 25:249–253
9. Dobrea M, Dobrea D, Severin I (2019) A new wearable system for head gesture recognition designed to control an intelligent wheelchair. E-Health Bioeng 7–11. https://doi.org/10.1109/EHB47216.2019.8969993
10. Kaur S, Vashist Ch (2013) Automation of wheelchair using MEMS accelerometer (Adxl330). Adv Electon Electr Eng 3:227–232
11. Kader M, Alam Md, Jahan N et al (2019) Design and implementation of a head motion-controlled semi-autonomous wheelchair for quadriplegic patients based on 3-axis accelerometer. In: ICCIT proceedings, International conference on computer and information technology, Dhaka, Bangladesh, pp 18–20
12. King L, Nguyen H, Taylor P (2005) Hands-free head-movement gesture recognition using artificial neural networks and the magnified gradient function. In: IEEE EMBS proceedings, Annual international conference on engineering in Medicine and Biology Society, Shanghai, China, pp 2063–2066
13. Jia P, Yuan H (2012) Head gesture recognition for hands-free control of an intelligent wheelchair. Indust Robot 34:60–68. https://doi.org/10.1108/01439910710718469
14. Chen S, Chen Y, Chioul Y et al (2003) Head-controlled device with M3S-based for people with disabilities. In: IEEE EMBS proceedings, Annual international conference of the IEEE engineering in Medicine and Biology Society, Cancun, México, vol 4, pp 1587–1589
15. Nguyen H, King L, Knight G (2004) Real-time head movement system and embedded Linux implementation for the control of power wheelchairs. In: IEEE EMBS proceedings, Conference on IEEE engineering in Medicine and Biology Society, San Francisco, USA, vol 2, pp 4892–4895
16. Yoda I, Tanaka J, Raytchev B et al (2006) Stereo camera based non-contact non-constraining head gesture interface for electric wheelchairs. In: ICPR proceedings, Pattern recognition international conference, Hong Kong, China, vol 4, pp 740–745
17. Pajkanović A, Branko D (2013) Wheelchair control by head motion. Serb J Electr Eng 10:135–151. https://doi.org/10.2298/SJEE1301135P
18. Prasad S, Sakpal D, Rawool S (2017) Head-motion controlled wheelchair. In: RTEICT 2017, 2nd IEEE IRTEICT, Proceedings of the international conference on recent trends in electronics, information and communication technology, Bangalore, India, pp 1636–1640
19. Solea R, Margarit A, Cernega D et al (2019) Head movement control of powered wheelchair. In: ICSTCC, Proceedings of the international conference on system theory control and computing, Sinaia, Romania, pp 632–637
20. Dey P, Hasan M, Mostofa S et al (2019) Smart wheelchair integrating head gesture navigation. In: ICREST, Proceedings of the international conference on robotics, electrical and signal processing techniques, Dhaka, Bangladesh, pp 329–334
21. Nasif S, Khan M (2018) Wireless head gesture controlled wheel chair for disable persons. In: R10-THC, Proceedings of IEEE region 10 humanitarian technology conference, Dhaka, Bangladesh, 2017, pp 156–161
22. Marins G, Carvalho D, Marcato A et al (2017) Development of a control system for electric wheelchairs based on head movements. In: IntelliSys, Proceedings of the Intelligent Systems Conference, London, UK, pp 996–1001
23. Ruzaij M, Neubert S, Stoll N et al (2016) Multi-sensor robotic-wheelchair controller for handicap and quadriplegia patients using embedded technologies. In: HIS proceedings, International conference on human system interactions, Portsmouth, UK, pp 103–109
24. Bastos-Filho T, Cheein F, Torres S et al (2014) Towards a new modality-independent interface for a robotic wheelchair. IEEE Trans Neural Syst Rehabil Eng 22:567–584
25. Manogna S, Vaishnavi S, Geethanjali B (2010) Head movement based assist system for physically challenged. In: ICBBE proceedings, International conference on bioinformatics and biomedical engineering, Chengdu, China, pp 6–9
26. Garcia C, Christensen H (2014) Infrared non-contact head sensor, for control of wheelchair movements. In: Proceedings of the European conference for the advancement of assistive technology, Alcalá, Spain, pp 336–340
27. Hu Z, Li L, Luo Y et al (2010) A novel intelligent wheelchair control approach based on head gesture recognition. In: ICCASM proceedings, International conference on computer application and system modelling, Taiyuan, China, vol 6, pp 159–163
28. Taylor P, Nguyen H (2003) Performance of a head-movement interface for wheelchair control. In: IEEE EMBS proceedings, Annual international conference of the IEEE engineering in Medicine and Biology Society, Cancun, Mexico, pp 1590–1593
29. Ruzaij M, Neubert S, Stoll N et al (2017) Complementary functions for intelligent wheelchair head tilts controller. In: SISY IEEE proceedings, International symposium on intelligent systems and informatics, Subotica, Serbia, pp 117–122
30. Manta L, Cojocaru D, Vladu I et al (2019) Wheelchair control by head motion using a noncontact method in relation to the patient. In: ICCC proceedings, International Carpathian control conference, Krakow-Wieliczka, Poland, pp 1–6
31. Gomes D, Fernandes F, Castro E et al (2019) Head-movement interface for wheelchair driving based on inertial sensors. In: ENBENG proceedings, Portuguese meeting on bioengineering, Lisboa, Portugal, pp 1–4
32. Pande V, Ubale N, Darshana P et al (2014) Hand gesture based wheelchair movement control for disabled. Int J Eng Res Appl 4:152–158
33. Anjaneyulu D, Kumar MB (2015) Hand movements based control of an intelligent wheelchair using accelerometer, obstacle avoidance using ultrasonic and IR sensors. Int J Eng Res Dev 11:1–9
34. Kumar V, Ramesh G, Nagesh P (2015) MEMS based hand gesture wheel chair movement control for disable persons. Int J Curr Eng Tech 5:3–6
35. Lu T (2013) A motion control method of intelligent wheelchair based on hand gesture recognition. In: ICIEA IEEE proceedings, Conference on industrial electronics and applications, Melbourne, Australia, pp 957–962
36. Ramessur Sh, Oree V (2019) Hand gesture controller for robotic-wheelchair using microelectromechanical sensor ADXL 345. Smart Sustain Eng Next Gener Appl 561:3–5
37. Ijjina E, Mohan C (2014) Human action recognition based on mocap information using convolution neural networks. In: ICMLA proceedings, International conference on machine learning and applications, Detroit, USA, pp 159–164

Socket Material and Coefficient of Friction Influence on Residuum-Prosthesis Interface Stresses for a Transfemoral Amputee: A Finite Element Analysis

Alina de Souza Leão Rodrigues and A. E. F. Da Gama

Abstract

One of the main difficulties associated with the use of lower limb prostheses is the perception of discomfort and pain in the residual limb, mainly due to poorly fitted sockets. In this context, numerical simulations have played a fundamental role in the search for successful fittings, contributing towards enhancement of the traditional iterative and labor-intensive fabrication process. This work applies finite element modeling to simulate the socket-limb interface stresses for a transfemoral amputee during gait, and investigates the effects of socket material and coefficient of friction on comfort and durability. The developed model was composed of socket, stump and femur geometries, and presented a loading scenario corresponding to the forces acting on the hip joint during a gait cycle. Several simulations were conducted with varying design parameters, such that each configuration was analyzed in terms of prosthetic resistance to cyclic loading and distribution of contact pressures and frictional stresses. Carbon fiber sockets demonstrated the greatest durability among the four tested materials, but also induced a slight increase in the maximum contact pressure. As the coefficient of friction was incremented, contact pressures were reduced and frictional stresses increased, with values between 0.5 and 0.8 showing the best compromise.

Keywords: Socket · Prosthesis · Transfemoral amputation · Finite element analysis · Interface stress



A. de S. L. Rodrigues (✉)
Department of Mechanical Engineering, Universidade Federal de Pernambuco, Av. da Arquitetura, Recife, Brazil

A. de S. L. Rodrigues · A. E. F. Da Gama
Rehabilitation Engineering Research Group, Department of Biomedical Engineering, Universidade Federal de Pernambuco, Av. da Arquitetura, Recife, Brazil

1 Introduction

Bionic science, targeted muscle reinnervation, microprocessor-based controllers and myoelectric technology are some of the advancements towards rehabilitation and quality of life enhancement for people with amputations. However, for millions of amputees, especially in developing countries, prosthetic limbs are an expensive and distant reality [1], whose implementation faces accessibility issues and inefficient public policies [2]. Besides the deficit in the provision of medical devices, the rehabilitation of lower limb amputees for the restoration of functional independence is a growing and substantial socioeconomic challenge [3]. Practitioners face difficulties when prescribing the parts that compose the most appropriate artificial limb for a given patient [4]. Modular components of lower limb prostheses, such as knees and feet, are commercialized and categorized in catalogues according to the user's height and activity level. High-quality sockets, on the contrary, do not follow such a principle, and must be patient-specific, respecting stump characteristics, presenting good load response, providing safety and not affecting blood flow [5]. However, the perception of discomfort and pain, mainly caused by poorly fitted sockets, is one of the major difficulties associated with the use of lower limb prostheses [6]. The process of fitting a prosthetic limb is iterative and laborious, relying on professional experience and the patient's feedback [3]. The socket-limb interface is therefore the site at which most postoperative complications occur, and the patient's quality of life is directly influenced by the socket design. In the search for successful fittings, numerical simulations have been widely applied in the field, contributing towards greater comprehension of the biomechanical structure of sockets and reducing the uncertainties and labor intensity associated with the traditional fabrication process [3].
In this context, the finite element (FE) method is a useful computational tool for evaluating the performance of

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_220




prosthetic sockets [7]. By simulating the response of the studied system to certain loading scenarios, FE modeling predicts patterns of stress distribution and allows systematic parametric analyses to be performed [3]. The results provided by the method can be used to assess the socket design before manufacturing, allowing it to be continuously and systematically adjusted until satisfactory outcomes are achieved. This work therefore applies the FE method to analyze the effect of socket material and limb-prosthesis coefficient of friction (COF) on socket resistance to cyclic loading and on the interface stress distribution during the stance phase of the gait cycle, for a transfemoral amputee. It intends to highlight the need for customized designs, and the method's role in increasing reliability, reducing manufacturing costs, decreasing remanufacturing rates and improving user comfort, potentially aiding socket design.

2

Materials and Methods

In this computational study, the first step was to develop a 3D model composed of socket, residuum and femur geometries, in order to predict, by simulation, the pressure distribution at a real limb-prosthesis interface for a transfemoral amputee during gait. The effects of liner geometry and material were not considered. Once the model had been created, it was possible to compare the predicted outcomes with experimental and numerical results found in the literature. From the validated model, multiple simulations were conducted with modifications in socket material and coefficient of friction, aiming at improvements in terms of durability and comfort. The methods described herein were approved by the local Research Ethics Committee (CAAE 86378918.7.0000.5208).

2.1 Numerical Model Development

Aiming at the development of a reliable three-dimensional model, digital representations of the socket and the residual limb were created by 3D scanning. A 23-year-old male (1.70 m, 79.4 kg) unilateral transfemoral amputee was selected for this study. The patient had three years of prosthesis usage experience and did not present any additional physical, vascular, neurological or psychological conditions that could affect the results. The participant provided informed written consent prior to data collection. The scanning procedure took place in a clean and spacious laboratory. A Sense™ 3D scanner was used to capture the desired shapes. The patient was requested to remove his prosthesis, and the scanner was steadily rotated around the

residual limb, at a minimum distance of 38 cm, to capture views from all angles. The prosthetic socket was also scanned, including its inside. The resulting scans were smoothed and exported in STL format. SpaceClaim 18.1 was used to convert the faceted bodies to solids. Due to overclosure issues, the stump scans were not considered for the simulations, and the limb shape was recreated from the socket's internal surface. For the femur geometry, the model available at The Standardized Femur Model open repository [8] was adapted and sectioned 25 cm below the greater trochanter, to represent the amputation level.

Discretization: The FE method's working principle is based on the discretization process, which consists in modeling a complex system as a mesh of small elements, connected through nodes that allow a coherent transmission of the forces and displacements acting on the structure under loading [7]. The analytical solution, originally impossible or too complex to find, can then be approximated by the sum of each element's responses [7]. In this work, the SCDOC file with the final geometric assembly created in SpaceClaim was imported into ANSYS Workbench 18.1 for preprocessing. All bodies were modeled using ten-node tetrahedral elements. Starting from the software-generated automatic mesh, local refinements were conducted near the contact regions. The optimal mesh consisted of 284,303 elements, with a maximum size of 6 mm at the contact regions. Patch-independent algorithms were applied to discretize the femur and the residuum.

Material properties: Similarly to the discretization process, the definition of the laws that govern the behavior of the studied materials is of extreme importance for FE analyses. The way each element responds to external influences, represented by the material's stress-strain curve, exerts a direct influence on the obtained outcome [7].
Homogeneous, isotropic and linear-elastic behaviors were assumed for the femur, the socket and the residual limb, which is a commonly used approximation in finite element investigations of the socket-stump interface [3]. The adopted Young's moduli (E) and Poisson's ratios (ν) for the studied geometries were [3]: E_limb = 200 kPa, E_socket = 1.5 GPa, E_femur = 15 GPa, ν_limb = 0.475, ν_socket = 0.3 and ν_femur = 0.3.

Boundary conditions: After mesh generation and material characterization, the implementation of boundary conditions simulates the physics that govern the studied system's behavior, by adding restrictions and interactions with the surroundings, without which the solution cannot be obtained [7]. In this work, all degrees of freedom were constrained at the distal end of the socket. For the contact definitions, the authors chose to model the interaction between the bone and the residual limb using a bonded condition, while the socket-limb interface was defined by a frictional surface-to-surface contact, such that the slave surface (stump) would not penetrate the master surface (socket). Initially, a friction coefficient of 0.5 was adopted for the frictional contact [3].

Loading scenario: An inverse dynamics analysis was conducted to determine the joint loads during gait [9]. The data required for these calculations derive from experimental gait analysis and were shared by Segal et al. [10]. The final loading scenario used in the simulations was applied at the femoral head and consisted of the three components of the force acting on the hip joint during stance. The analyses were conducted in ANSYS Static Structural, with a final time of 0.6667 s, equivalent to the interval between heel strike and toe off. The ANSYS contact tool was used for results visualization. The runtime for each simulation ranged from 10 to 12 h, using two cores of an i7-7500U 2.70 GHz processor on a 16 GB RAM Windows 10 64-bit system.
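The loading scenario above applies a time-varying force at the femoral head over the 0-0.6667 s stance interval. A piecewise-linear load table of the kind a static-structural solver interpolates between load steps can be sketched as follows; the force samples are placeholders for illustration, not the data of Segal et al. [10].

```python
import bisect

# Illustrative stance-phase load table for the femoral head. The time
# grid spans the simulated interval (heel strike to toe off); the axial
# force values are invented placeholders.
times = [0.0, 0.1667, 0.3333, 0.5, 0.6667]   # s
fz = [300.0, 900.0, 700.0, 850.0, 200.0]     # N, axial component

def force_at(t):
    """Piecewise-linear interpolation of the axial hip load at time t."""
    if t <= times[0]:
        return fz[0]
    if t >= times[-1]:
        return fz[-1]
    i = bisect.bisect_right(times, t)
    t0, t1, f0, f1 = times[i - 1], times[i], fz[i - 1], fz[i]
    return f0 + (f1 - f0) * (t - t0) / (t1 - t0)

print(force_at(0.25))
```

The same interpolation would be applied independently to each of the three force components before they are imposed at the femoral head.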

2.2 Socket Modifications After obtaining the interface pressure distribution with the previously described configurations, multiple simulations were conducted, with modifications in socket material, aiming evaluation of its influence on pressure distribution and socket resistance to cyclic loading. Polypropylene, high-density polyethylene (HDPE), carbon fiber and fiberglass were selected for the computational tests. Table 1 informs the mechanical properties of interest for the chosen materials. Linear-elastic behaviors were assumed at all cases. The most adequate socket material in terms of resistance to cyclic loading was evaluated through calculation of the probability of fatigue failure, according to the procedure described by Chen et al. [11]. Different activity levels were considered for this analysis, based on the work of Halsne et al. [12], which reported between 222 and 3667 daily steps for 27 transfemoral amputees. After conducting simulations with the four previously mentioned socket materials and performing the fatigue

Table 1 Young’s modulus (GPa), Poisson’s ratio and fatigue curve for the evaluated socket materials


failure analysis, the coefficient of friction (COF) at the socket-limb interface was modified in ANSYS, assuming values between 0.3 and 3 [18]. Results were then exported to MATLAB, and nonlinear regressions were implemented through the curve fitting tool to estimate the equations that best describe how maximum contact pressure, maximum frictional stress and maximum sliding distance vary with the COF.
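The regression step described above could equally be prototyped outside MATLAB. The sketch below (Python/NumPy) fits a power law to hypothetical pressure-versus-COF pairs; the numeric values are illustrative stand-ins, not the study's exported results, and the power-law form is only one plausible first choice before a full nonlinear regression.

```python
import numpy as np

# Hypothetical (COF, maximum contact pressure in kPa) pairs, standing in
# for the values exported from the parametric contact simulations.
cof = np.array([0.3, 0.5, 0.8, 1.0, 1.5, 2.0, 3.0])
p_max = np.array([82.0, 76.0, 71.0, 69.0, 66.5, 65.0, 63.5])

# Fit a power law p = a * mu**b by linear regression in log-log space,
# a common first step before a full nonlinear regression.
b, log_a = np.polyfit(np.log(cof), np.log(p_max), 1)
a = float(np.exp(log_a))

def p_fit(mu):
    """Fitted maximum contact pressure (kPa) as a function of the COF."""
    return a * mu ** b

print(f"p_max(mu) ~ {a:.1f} * mu^({b:.3f})")
```

The same log-log trick applies to the sliding-distance and frictional-stress curves if they are approximately monotone in the COF.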

3 Results

The first step of this computational study was to develop a 3D model that satisfactorily simulates the limb-prosthesis interface pressure for a transfemoral amputee during gait. The predicted contact pressure distribution during this stage can be found in Fig. 1a–c, for loading response, mid-stance and terminal stance. The maximum pressure for the initially adopted configuration was approximately 75.9 kPa. After the creation of this representative model, the computational tests with modified socket materials did not present expressive variations in contact pressure distribution between the two thermoplastics (polypropylene and HDPE). For the two laminates (carbon fiber and fiberglass), however, the maximum pressure presented a slight increase of approximately 4.3% (from 75.9 to 79.2 kPa). The probability of socket fatigue failure is illustrated in Fig. 2, as a function of time in months. It is possible to observe that the carbon fiber laminate excels in terms of resistance to cyclic loading, while polypropylene presents the worst performance. The curves that best represent the influence of the friction coefficient on maximum contact pressure, sliding distance and tangential stress are presented in Fig. 3a–c.

4 Discussion

The following discussion is presented in two stages. First, the results obtained with the initially adopted configurations are validated against experimental and numerical studies. Then, the advantages and disadvantages of each computationally tested socket material, and of increasing the coefficient of friction, are weighed in order to suggest an optimal range.

Material                E (GPa)    ν          S–N curve
Polypropylene [3, 13]   1.5        0.3        N·S^9.71 = 6.31 × 10^15
HDPE [14, 15]           1.38       0.3        N·S^8.6 = 6.7 × 10^14
Carbon fiber [16, 17]   2.586      0.332162   N·S^6.1 = 1.51 × 10^15
Fiberglass [16]         2.5        0.330616   N·S^8.26 = 7.5 × 10^14
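As a rough illustration of how S–N relations of the form N·S^b = C translate into service life, the Python sketch below combines a hypothetical stress amplitude (the 10 MPa value is an assumption, not a result of this study) with the daily step counts reported by Halsne et al. [12]. The study's actual fatigue analysis follows the probabilistic procedure of Chen et al. [11]; this deterministic version only conveys the orders of magnitude involved.

```python
# Rough service-life estimate from an S-N curve of the form N * S**b = C.
# The 10 MPa stress amplitude below is a hypothetical placeholder, not a
# value computed in this study.

def cycles_to_failure(stress_mpa, b, c):
    """Cycles to failure N = C / S**b for a given stress amplitude S."""
    return c / stress_mpa ** b

def months_of_use(n_cycles, steps_per_day):
    # Assumes one step on the prosthetic side equals one loading cycle
    # and an average month of 30 days.
    return n_cycles / (steps_per_day * 30.0)

# Polypropylene S-N curve from Table 1: N * S**9.71 = 6.31e15
n = cycles_to_failure(10.0, 9.71, 6.31e15)

# Activity range reported by Halsne et al. [12]: 222-3667 steps/day
months_active = months_of_use(n, 3667)
months_sedentary = months_of_use(n, 222)
print(f"{n:.3g} cycles, {months_active:.0f} to {months_sedentary:.0f} months")
```

Note the strong sensitivity to the exponent b: with b near 10, a small change in stress amplitude shifts the predicted life by orders of magnitude, which is why a probabilistic treatment is preferable.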


Fig. 1 From left to right, anterior, medial, posterior and lateral views of the contact pressure distribution at the socket-limb interface at a 0.19 s, b 0.37 s and c 0.48 s. The analyzed instants correspond to the end of loading response, mid-stance and terminal stance phases of the gait cycle, respectively

A. de S. L. Rodrigues and A. E. F. Da Gama

Socket Material and Coefficient of Friction Influence …

Fig. 2 Probability of socket fatigue failure as a function of time of use of the prosthesis for the four tested materials. The lower limit for each material represents the least intense activity level, while the upper boundary represents the most intense considered level


4.1 Parallels with Literature-Based Experimental Pressure Measurements in Humans and Numerical Studies

The amount of data on experimental measurements of transfemoral stump-socket interface pressures is limited [19]. Studies are generally based on small numbers of subjects and do not follow standardized protocols [19]. Besides, the technologies commonly adopted for measuring interface pressures present disadvantages such as hysteresis, drift, temperature sensitivity and spatial restrictions [20]. Therefore, the comparison presented herein is a qualitative analysis of the results, since direct numerical verification is impracticable. The simulations predicted the distal end of the residual limb as the most critical region in terms of stress concentration. Thus, the maximum contact pressures in this area were selected as model validation criteria. Figure 4 compares the maximum predicted pressures at anterior, medial, posterior and lateral sites of this region during the replication stage against experimental results found in the literature [20–23]. It is possible to observe that the predicted pressures lie within the range of values obtained by direct measurements in humans. A large variability was verified among the presented results, not only between different methodologies, but also within the same study. It is worth highlighting that the pressure distribution depends not only on prosthesis characteristics, but also on the combined effect of clinical and anatomical factors, e.g. socket suitability, muscle action, body mass, residual limb volume and contact area [24]. Because of the frequent short- and long-term stump volumetric fluctuations, changes in the pressure pattern are expected even in the same individual over time. All the mentioned factors reinforce the need for subject-specific measurements and the impossibility of performing direct quantitative validations. Limitations were also found when comparing the developed model with similar computational investigations, since the number of studies that aimed to simulate the socket-limb interface for transfemoral amputees is restricted [3]. Zhang et al. [25] and Surapureddy [26] obtained maximum pressures of 80.6 kPa and 84 kPa respectively, both at the distal end of the stump, which are comparable to the maximum of 75.9 kPa predicted in this work. Despite the simplifying assumptions adopted in model development, the predicted contact pressures satisfactorily simulate real-world scenarios, with magnitudes comparable to those predicted by previous studies. Applying the method in more representative situations has the potential to monitor residual limb volume variations and pressure fluctuations, contributing towards proper targeting of solutions such as the use of liners [27], subatmospheric suspensions [28] or variable stiffness sockets [29].

4.2 Evaluation of Different Materials and COFs

Despite the clearly lowest probabilities of fatigue failure observed for the carbon fiber socket, it is important to highlight that such an outcome is directly dependent on the number of reinforcement layers, resin composition, fiber orientation and type of weave, which dictate the mechanical properties of the resulting laminate [30]. Therefore, manufacturing a carbon fiber socket requires prior identification of areas that demand flexibility, torsional resistance or reinforcement. Besides, the user's pain tolerance must also be investigated, since the increase in contact pressures observed at the distal region might lead to discomfort. The higher failure probabilities associated with the use of polypropylene indicate the need for more frequent returns to the orthopedic clinic for inspections and eventual replacements. The COF between the residual limb and the prosthetic component determines slip behavior, but reliable clinical values are still to be found [18]. Studies suggest that tangential stresses are at least as important as normal pressures in triggering tissue breakdown [31]. Excessive shear on the skin and underlying tissues might rupture the epidermis and obstruct blood and interstitial fluid flow [32]. Furthermore, repetitive friction is likely to degrade and heat the skin, causing blisters [32]. By observing the obtained relations, it is possible to deduce that the COF that provides the best compromise between normal pressure reduction and frictional stress increase lies between 0.5 and 0.8. However, the choice of the ideal limb-prosthesis interface COF must consider the desired adhesion (providing safety especially during swing) and the individual pain tolerance. It is important to highlight that the COF is influenced by external factors, such as temperature and humidity, and by the subject's characteristics, such as perspiration and hirsuteness [33]. Therefore, apart from the material in contact with the limb, components like lubricants [34] and cooling [35] or sweat expelling systems [36] can also be considered for improvements in friction control.

Fig. 3 Effects of varying the coefficient of friction on a maximum interface pressure, b maximum sliding distance and c maximum frictional stress




Fig. 4 Comparison of pressures predicted by this work's model with experimental measurements from previous studies [20–23]. The values related to the present work refer to nodes located at the anterior, medial, posterior and lateral regions of a reference plane, transverse to the stump, drawn at the same height as the distal end of the femur

5 Conclusions

The developed model produced results that satisfactorily simulate reality, when compared to previous studies. Among the tested socket materials, carbon fiber presented the lowest fatigue failure probabilities. However, this benefit was accompanied by an increase in maximum contact pressure, which requires that the user's pain tolerance be verified before a socket is made of this composite. Higher COFs between the stump and the prosthetic component tended to reduce contact pressures and increase tangential stresses. Even though elevated friction decreases slip occurrence, improving contact adherence, excessive shear might lead to pressure ulcers. COF values between 0.5 and 0.8 presented the best stress combination at the interface, but individual tolerances must always be considered. The conducted research presented preliminary results on different design criteria and their effects on pressure distribution for a transfemoral amputee. The need for customized designs, focused on the user's demands and lifestyle, is clear. The finite element method presents great potential as an analysis tool, allowing systematic studies on the configuration that provides the best compromise between resistance and comfort. Thus, FE modeling contributes towards more reliable designs, reduces fabrication costs related to the traditional manufacturing process and decreases the need for remanufacturing that arises from poorly fitted sockets.

Acknowledgements

The authors would like to thank Dr Glenn Klute and Krista Cyr, from the Department of Veterans Affairs Center for Limb Loss and Mobility, not only for sharing the data that made this work possible, but also for the availability and promptness they demonstrated.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Harkins CS, McGarry A, Buis A (2012) Provision of prosthetic and orthotic services in low-income countries. Prosthet Orthot Int 37(5):353–361. https://doi.org/10.1177/0309364612470963
2. Vargas MAO, Ferrazzo S, Schoeller SD et al (2014) The healthcare network to the amputee. Acta Paul Enferm 27(6):526–532. https://doi.org/10.1590/1982-0194201400086
3. Dickinson AS, Steer JW, Worsley PR (2017) Finite element analysis of the amputated lower limb: a systematic review and recommendations. Med Eng Phys 43:1–18. https://doi.org/10.1016/j.medengphy.2017.02.008
4. Balk EM, Gazula A, Markozannes G et al (2018) Lower limb prostheses: measurement instruments, comparison of component effects by subgroups, and long-term outcomes. Comp Effect Rev 213:1–18
5. Colombo G, Filippi S, Rizzi C et al (2010) A new design paradigm for the development of custom-fit soft sockets for lower limb prostheses. Comput Ind 61(6):513–523. https://doi.org/10.1016/j.compind.2010.03.008
6. Lee WCC, Zhang M (2007) Using computational simulation to aid in the prediction of socket fit: a preliminary study. Med Eng Phys 29:923–929. https://doi.org/10.1016/j.medengphy.2006.09.008
7. Ginestra PS, Ceretti E, Fiorentino A (2016) Potential of modeling and simulations of bioengineered devices: endoprostheses, prostheses and orthoses. P I Mech Eng H 230(7):607–638. https://doi.org/10.1177/0954411916643343
8. Viceconti M, Casali M, Massari B et al (1996) The "standardized femur program" proposal for a reference geometry to be used for the creation of finite element models of the femur. J Biomech 29(9):1241
9. Dumas R, Cheze L, Frossard L (2009) Loading applied on prosthetic knee of transfemoral amputee: comparison of inverse dynamics and direct measurements. Gait Posture 30(4):560–562. https://doi.org/10.1016/j.gaitpost.2009.07.126
10. Segal AD, Orendurff MS, Klute GK et al (2006) Kinematic and kinetic comparisons of transfemoral amputee gait using C-Leg and Mauch SNS prosthetic knees. J Rehabil Res Dev 43(7):857–870. https://doi.org/10.1682/JRRD.2005.09.0147
11. Chen N-Z, Lee WCC, Zhang M (2006) A numerical approach to evaluate the fatigue life of monolimb. Med Eng Phys 28(3):290–296. https://doi.org/10.1016/j.medengphy.2005.07.002
12. Halsne EG, Waddingham MG, Hafner BJ (2013) Long-term activity in and among persons with transfemoral amputation. J Rehabil Res Dev 50(4):515–530. https://doi.org/10.1682/jrrd.2012.04.0066
13. Maier C, Calafut T (1998) Polypropylene: the definitive user's guide and databook. William Andrew, Norwich
14. Quadrant (2017) Quadrant EPP Proteus HDPE at https://www.piedmontplastics.com/resources/literatures/view/quadrant-eppproteusR-hdpe
15. Djebli A, Bendouba M, Aid A et al (2016) Experimental analysis and damage modeling of high-density polyethylene under fatigue loading. Acta Mech Solida Sin 29(2):133–144. https://doi.org/10.1016/s0894-9166(16)30102-1
16. Al-Khazraji K, Kadhim J, Ahmed PS (2011) Effects of reinforcement material on fatigue characteristics of trans-tibial prosthetic socket with PMMA matrix. In: 4th international scientific conference of Salahaddin University-Su Erbil, Kurdistan, Iraq, pp 1–9
17. Mahjoob M, Alameer AKA (2018) Material characterization and fatigue analysis of lower limb prosthesis materials. Assoc Arab Univ J Eng Sci 25(3):137–154
18. Cagle JC, Hafner BJ, Taflin N et al (2018) Characterization of prosthetic liner products for people with transtibial amputation. J Prosthet Orthot 30(4):187–199. https://doi.org/10.1097/jpo.0000000000000205
19. Paternò L, Ibrahimi M, Gruppioni E et al (2018) Sockets for limb prostheses: a review of existing technologies and open challenges. IEEE T Bio-Med Eng 65(9):1996–2010. https://doi.org/10.1109/tbme.2017.2775100
20. Buis A, Kamyab M, Hillman S et al (2017) A preliminary evaluation of a hydro-cast trans-femoral socket, a proof of concept. Prosthet Orthot Open J 1(1):1–9
21. Moineau B (2014) Analyses des pressions à l'interface moignon-emboiture de la prothèse chez le patient amputé fémoral. Université de Grenoble, Grenoble
22. Neumann ES, Wong JS, Drollinger RL (2005) Concepts of pressure in an ischial containment socket: measurement. J Prosthet Orthot 17(1):2–11. https://doi.org/10.1097/00008526-20050100000003
23. Lee VSP, Solomonidis SE, Spence WD (1997) Stump-socket interface pressure as an aid to socket design in prostheses for trans-femoral amputees—a preliminary study. P I Mech Eng H 211(2):167–180. https://doi.org/10.1243/0954411971534287
24. Dumbleton T, Buis AWP, McFadyen A et al (2009) Dynamic interface pressure distributions of two transtibial prosthetic socket concepts. J Rehabil Res Dev 46(3):405–416. https://doi.org/10.1682/JRRD.2008.01.0015
25. Zhang L, Zhu M, Shen L et al (2013) Finite element analysis of the contact interface between trans-femoral stump and prosthetic socket. In: 35th annual international conference of the IEEE EMBS, Osaka, Japan, pp 1270–1273
26. Surapureddy R (2014) Predicting pressure distribution between transfemoral prosthetic socket and residual limb using finite element analysis. University of North Florida, Jacksonville
27. Baars ECT, Geertzen JHB (2005) Literature review of the possible advantages of silicon liner socket use in trans-tibial prostheses. Prosthet Orthot Int 29(1):27–37. https://doi.org/10.1080/17461550500069612
28. Kahle JT, Orriola JJ, Johnston W et al (2014) The effects of vacuum-assisted suspension on residual limb physiology, wound healing, and function: a systematic review. Tech Innov 15(4):333–341. https://doi.org/10.3727/194982413X13844488879177
29. Faustini MC, Neptune RR, Crawford RH et al (2006) An experimental and theoretical framework for manufacturing prosthetic sockets for transtibial amputees. IEEE T Neur Sys Reh 14(3):304–310. https://doi.org/10.1109/TNSRE.2006.881570
30. Eitel J (2013) Carbon fiber: the more you know, the more you can do at https://opedge.com/Articles/ViewArticle/2013-07_10
31. Laszczak P, Mcgrath M, Tang J et al (2016) A pressure and shear sensor system for stress measurement at lower limb residuum/socket interface. Med Eng Phys 38(7):695–700. https://doi.org/10.1016/j.medengphy.2016.04.007
32. Polliack AA, Scheinberg S (2006) A new technology for reducing shear and friction forces on the skin: implications for blister care in the wilderness setting. Wild Environ Med 17(2):109–119. https://doi.org/10.1580/pr30-05.1
33. Restrepo V, Villarraga J, Palacio JP (2014) Stress reduction in the residual limb of a transfemoral amputee varying the coefficient of friction. J Prosthet Orthot 26(4):205–211. https://doi.org/10.1097/jpo.0000000000000044
34. Cagle JC, Reinhall PG, Hafner BJ et al (2017) Development of standardized material testing protocols for prosthetic liners. J Biomech Eng 139(4):1–12. https://doi.org/10.1115/1.4035917
35. Webber CM, Davis BL (2015) Design of a novel prosthetic socket: assessment of the thermal performance. J Biomech 48(7):1294–1299. https://doi.org/10.1016/j.jbiomech.2015.02.048
36. Klute GK, Bates KJ, Berge JS et al (2016) Prosthesis management of residual-limb perspiration with subatmospheric vacuum pressure. J Rehabil Res Dev 53(6):721–728. https://doi.org/10.1682/jrrd.2015.06.0121

Subject Specific Lower Limb Joint Mechanical Assessment for Indicative Range Operation of Active Aid Device on Abnormal Gait

Carlos Rodrigues, M. Correia, J. Abrantes, M. A. B. Rodrigues, and J. Nadal

Abstract

This study presents subject specific lower limb joint angular kinematic and dynamic analysis in the time and frequency domains, as well as joint mechanical work, power and dynamic stiffness assessment, during normal gait, stiff knee gait and slow running, for the indicative range operation of a personalized active gait aid device. Gait aid devices are of increasing interest for the generalization of gait rehabilitation, as an answer to the growing demand from the population in need of gait rehabilitation and the shortage of health care personnel. Nevertheless, large costs and standardized equipment leave many patients without gait rehabilitation, creating the need for low-cost, personalized gait rehabilitation equipment based on subject-specific analysis. An in vivo, noninvasive case study of a healthy male subject of 70 kg mass and 1.86 m height was assessed in a human gait lab. Reflective adhesive markers were applied to the skin surface at selected anatomical points of the lower limb, and images were captured with an eight-camera 100 Hz Qualisys system, along with ground reaction forces and force moments acquired at 2000 Hz with two AMTI force plates during foot contact with the ground, on normal gait (NG) at a comfortable self-selected velocity, stiff knee gait (SKG) with lower knee flexion, and slow running (SR) at minimum run velocity in the stiff knee condition. Inverse kinematics and dynamics were performed using AnyGait with the TLEM model, and the lower limb joint angular signals were analyzed. Indicative range operation from lower limb joint mechanical assessment was obtained in complementary domains for subject specific gait aid device selection and parametrization.

C. Rodrigues (&)
Centre for Biomedical Engineering Research, INESC TEC, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
e-mail: [email protected]

M. Correia
Department of Electrical and Computer Engineering, FEUP, Porto, Portugal

J. Abrantes
MovLab Interactions and Interfaces Lab, ULHT, Lisbon, Portugal

M. A. B. Rodrigues
Department of Electronic and Systems, UFPE, Recife, Brazil

J. Nadal
Biomedical Engineering Program, UFRJ, Rio de Janeiro, Brazil

Keywords

Lower limb joint · Gait aid

1 Introduction

The increase in average life expectancy, ageing effects, the growing prevalence of neurological and non-neurological diseases, and human lifestyle have contributed to the increase in the number of persons with mobility problems using their own limbs and in the need for gait aid devices. Regardless of the type of gait limitation and rehabilitation device technology, anthropometric and anthropomorphic user features and subject specific kinematic, dynamic and joint mechanical power signals are determinant for the development of personalized gait aid devices [1]. Both rehabilitation and ergonomic active devices require an indicative range of operation for the human movement of the lower limb joints at the kinematic, dynamic and energy levels, involving complementary time and frequency domains, to select or develop the most adequate device, configuration and operation for a specific subject and purpose. Previous studies [1] conclude that there is a lack of clinical trials with human movement rehabilitation devices, namely regarding the progress achieved when compared with traditional therapies. Dominant sources of signal information on ergonomic and rehabilitation equipment are associated with the device operation itself, including physical and cognitive human-machine interaction, with the need for higher precision acquisition of

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_221




kinematic and dynamic signals during human movement, to understand subject specific biomechanics towards the prescription and configuration of the best-suited gait aid device. With this objective in mind, we performed a subject specific lower limb gait analysis in a human movement lab, with kinematic and dynamic inverse analysis for joint angular movement assessment on normal and modified gait, for comparison of the requirements of each gait mode and estimation of the subject specific gait aid device indicative range operation. The selected subject specific gait modes correspond to stiff knee gait (SKG), a common abnormal gait characterized by a reduced knee range of motion, generally associated with stroke [2] and cerebral palsy [3], slow running (SR) at minimum run velocity in the stiff knee condition, and normal gait (NG) for comparison with the SKG and SR abnormal gaits.

2 Materials and Methods

2.1 Gait Aid Devices Features

According to the main purpose of gait aid devices, assisting or replacing lower limb function, they are generally classified along three different dimensions [1]: (i) topology, determined by the anthropomorphic level; (ii) operation mode, usually classified as passive or active depending on the control level and applied technology; and (iii) functionality, based on utilization for rehabilitation or for ergonomics and assistance. The first issue with gait aid devices, more specifically with exoskeletons, is related to the security and physical integrity of the user. This implies that several operating limits are respected, to avoid exceeding joint space operation at the kinematic level of angular displacement, velocity and acceleration, while ensuring a sufficient level of joint force moment to produce the intended movements and dynamic stiffness for joint stabilization, within the limits acceptable to the specific subject without physical harm. To fulfill the range operation within these limits, subject specific assessment under different gait modes is determinant, since these limits are essentially subject dependent and gait mode dependent, with different preservation of subject gait function depending on the subject's pathology or injury. Another key issue is the technology of the applied actuators and its specifications, with the ability to operate at the frequency rate required for joint movement with an adequate action level. Among the various technologies, electrical actuators are the most used in gait aid devices, due to their compact and robust nature and their high torque/weight ratio. These actuators present a high control level and are normally coupled to speed reducers, adapting them to the lower angular velocities and higher force moments of human joints. Their efficiency, smaller size and shape variety, along with the ability to be powered by low voltage batteries, make them the most appropriate choice for portable devices. They are usually applied as serial elastic actuators (SEAs) providing variable dynamic stiffness, with prices varying according to specification and cost increasing for higher frequency range operation. Finally, the limitations on energy supply are determinant for the autonomy of portable devices, whereas the power development capability is essential to provide, at key instants, the individual joint power requirements and the total power requirements at all joints. These data can generally be obtained from gait measurements, but they are seldom used as indicative range operation for personalized active gait aid devices.

2.2 Trial Tests

According to the interest in a subject specific study and the difficulty of an average sample in representing an individual subject, a case study of a healthy male subject of 70 kg mass and 1.86 m height was assessed in a human gait lab, after informed written consent and ethical committee approval (N-20130014), in accordance with the ethical guidelines of the North Denmark Region Committee on Health Research Ethics and the Helsinki Declaration of 1975, as revised in 2000 and 2008. Reflective adhesive markers were attached to the skin surface at perceived selected anatomical points according to the marker protocol, including right and left anterior and posterior superior iliac spines, thigh superior, knee medial and lateral, shank superior, ankle medial and lateral, and toes. The gait measurement protocol started with performing instructions, followed by a standing reference trial (static trial), where the test subject was instructed to stand straight. New instructions were then provided, with the subject performing a gait trial at a self-selected speed (NG), a slow running (SR) trial at minimum run velocity, and finally a stiff knee gait (SKG) trial, where the subject was instructed to keep his right knee straight during gait at a self-selected speed. Rest time between trials corresponded to the time needed, two to five minutes, for rest, recovery and instructions [4]. Ground reaction forces and force moments were acquired at 2000 Hz with two AMTI force plates during foot contact with the ground in the NG, SKG and SR tests. 3D cartesian coordinates of the attached reflective markers were obtained by direct linear transform (DLT) of the 2D images captured with the eight-camera 100 Hz Qualisys system during NG, SKG and SR.
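The DLT step mentioned above recovers each marker's 3D position from its 2D image coordinates in multiple calibrated cameras. The sketch below shows the core linear-triangulation step under simplified assumptions: the two projection matrices are made-up pinhole cameras, not the lab's actual calibration.

```python
import numpy as np

def triangulate_dlt(proj_mats, image_pts):
    """Least-squares 3D reconstruction of one marker from two or more
    calibrated views, the core linear step of the direct linear
    transform (DLT). Each camera contributes two homogeneous equations."""
    rows = []
    for P, (u, v) in zip(proj_mats, image_pts):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    _, _, vt = np.linalg.svd(A)          # null vector of A is the 3D point
    X = vt[-1]
    return X[:3] / X[3]                  # de-homogenize

# Two hypothetical pinhole cameras (projection matrices are made up).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, 0.1, 2.0, 1.0])  # homogeneous marker position
pts = []
for P in (P1, P2):
    x = P @ X_true
    pts.append((x[0] / x[2], x[1] / x[2]))

print(triangulate_dlt((P1, P2), pts))
```

With more than two cameras, as in the eight-camera setup used here, the same least-squares formulation simply gains extra rows, improving robustness to occlusion and noise.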


2.3 Inverse Kinematics and Dynamics

AnyGait (AnyBody Technology, Aalborg, DK) was configured for the specific experimental motion capture lab setup, and musculoskeletal modelling was performed using AnyBody, with generation of a stick-figure model based on the static trial, over-determinate kinematic analysis over the dynamic trial to obtain the joint angles, and morphing of the Twente Lower Extremity Model (TLEM) to match the size and joint morphology of the stick-figure model for inverse dynamic analysis based on joint angles and kinetic boundary conditions [4].

2.4 Signal Analysis

A complete stride during NG, SKG and SR was selected for analysis, corresponding to the time span between the start of two successive contacts of the same foot with the ground, as presented in Fig. 1. The entire time series (heel strike to heel strike) and the frequency domain of the hip (H), knee (K) and ankle (A) joint angular displacements (θ), angular velocities (ω) and angular accelerations (α) in the sagittal plane were analyzed. Non-normalization to % of gait cycle was chosen because normalization modifies the signal time structure (either expanding or compressing it), thereby modifying the mechanical work, power and dynamic stiffness computation; it would also affect the frequency calculation under analysis.
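The frequency-domain quantities used later in this analysis, the dominant-harmonic frequency (fmax) and the frequency below which 90% of the spectral energy lies (f90), can be sketched as follows. The signal here is a synthetic stand-in for a measured joint angle, not the study's data, and the original computation was done in MATLAB rather than NumPy.

```python
import numpy as np

def fmax_f90(signal, fs):
    """Dominant-harmonic frequency fmax and the lowest frequency f90
    below which 90% of the spectral energy lies, for a sampled signal."""
    x = signal - np.mean(signal)            # drop the DC component
    spec = np.abs(np.fft.rfft(x)) ** 2      # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    f_max = freqs[np.argmax(spec)]
    cum = np.cumsum(spec) / np.sum(spec)    # cumulative energy fraction
    f_90 = freqs[np.searchsorted(cum, 0.90)]
    return f_max, f_90

# Synthetic "joint angle" over one 1.02 s stride sampled at 100 Hz,
# an illustrative stand-in for the measured NG knee signal.
fs, period = 100.0, 1.02
t = np.arange(0.0, period, 1.0 / fs)
theta = 30 * np.sin(2 * np.pi * t / period) + 15 * np.sin(2 * np.pi * 3 * t / period)

f_max, f_90 = fmax_f90(theta, fs)
print(f"fmax = {f_max:.2f} Hz, f90 = {f_90:.2f} Hz")
```

Because the stride is analyzed over exactly one period at its natural time scale, the fundamental harmonic falls on an exact FFT bin, which is one practical reason for not time-normalizing the signal before the Fourier decomposition.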


Gait subphases were time delimited on NG and SKG at the start of the same foot's contact with the ground on initial double stance (IDS), the start of single support (SS), the start of terminal double stance (TDS), and the start and end of the transfer phase (TRF), and on SR at the start and end of stance (ST), early float (EF), middle swing (MS) and late float (LF), from the alternate contact of each foot with the ground. Succeeding IDS sub-phases on NG and SKG, and the preceding LF sub-phase on SR, were time delimited to point out periodic gait behaviour in Fig. 1. Joint angular kinematic signals θ, ω and α at H, K, A were analyzed in the time domain based on amplitude and time span, providing information for the joint actuators' workspace and driver configuration. Fourier decomposition of θ, ω and α at H, K, A over the entire stride during NG, SKG and SR was implemented using the fast Fourier transform (FFT) in MATLAB R2010b (The MathWorks Inc., Natick, MA, USA), with the frequencies of the fundamental harmonics according to the time period of each gait mode. FFT coefficients from the decomposition of θ, ω and α at H, K, A during the entire stride on NG, SKG and SR were analyzed according to the frequency of the harmonic with the highest amplitude (fmax) and the 90th percentile (f90) of the Fourier series for each gait mode, providing information for joint actuator selection and configuration to operate at the frequency rate required for joint movement. Sagittal plane flexion–extension hip, knee and ankle joint force moments (Mfe) obtained from inverse dynamics were also assessed over the entire time series and frequency domain,

Fig. 1 Hip (H), knee (K), ankle (A) joint angular displacement (θ), velocity (ω), acceleration (α) time profiles during one trial stride on NG, SKG and SR



providing information for joint actuator selection and configuration, ensuring a sufficient level of joint force moment to produce the intended movements within the limits acceptable to the specific subject without physical damage. Joint mechanical work (W) and mechanical power (P) were obtained from the instantaneous joint flexion–extension force moment and the sagittal angular displacement and angular velocity at the hip, knee and ankle joints, providing information for joint actuator selection according to the energy requirements of portable device autonomy and the power development capability to deliver the instantaneous individual and collective joint power demands. Joint flexion–extension force moment vs joint angle (Mfe–θ) diagrams were analyzed for the hip, knee and ankle during NG, SKG and SR, with a linear correlation statistical test and the instantaneous joint angular dynamic stiffness kθ = dMfe/dθ at each gait subphase, contributing to joint actuator selection and configuration, ensuring variable dynamic stiffness to produce the intended movements as well as joint stabilization.
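The power, work and dynamic stiffness computations described in this subsection can be sketched as follows. The moment and angle series are synthetic stand-ins for the inverse-dynamics output, and summing |P| (absolute work) is an assumption here, since the paper does not state whether net or absolute work was used.

```python
import numpy as np

def joint_mechanics(m_fe, theta, omega, dt):
    """Instantaneous joint power P = Mfe * omega, total absolute
    mechanical work W = sum(|P|) * dt (absolute work is an assumption
    here), and dynamic stiffness k_theta = dMfe/dtheta along the stride.
    k_theta can spike near movement reversals, where dtheta is small."""
    power = m_fe * omega
    work = float(np.sum(np.abs(power)) * dt)
    k_theta = np.gradient(m_fe) / np.gradient(theta)
    return power, work, k_theta

# Synthetic hip flexion-extension moment (N*m) and sagittal angle (rad)
# over one stride: illustrative stand-ins for inverse-dynamics output.
dt, period = 0.01, 1.02
t = np.arange(0.0, period, dt)
theta = 0.4 * np.sin(2 * np.pi * t / period)
omega = np.gradient(theta, dt)
m_fe = 50.0 * np.sin(2 * np.pi * t / period + 0.3)

power, work, k_theta = joint_mechanics(m_fe, theta, omega, dt)
print(f"total absolute work over the stride: {work:.2f} J")
```

Evaluating kθ per subphase, as done in the paper, amounts to fitting the slope of the Mfe–θ diagram over each delimited interval rather than taking the pointwise derivative shown here.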

3 Results and Discussion

3.1 Kinematics

The subject specific stride period presented a higher value on SKG (1.169 s) than on NG (1.020 s) and SR (0.800 s). Figure 1 presents, for the right lower limb, the hip (H), knee (K) and ankle (A) joint sagittal plane θ, ω and α time profiles over an entire stride on NG, SKG and SR, respectively. The first important result is the higher amplitude of θH and θK than of θA on NG, SKG and SR (Fig. 1 and Table 1), requiring the selection or configuration of different actuators that enable a larger workspace amplitude at the hip and knee joints than at the ankle. SKG presents lower maximum hip and knee joint angles than NG, and the hip, knee and ankle present lower maximum joint angles on NG and SKG than on SR; the preparation for the rehabilitation process must therefore be carefully performed to preserve subject physical integrity, with a gradual increase of the range of motion (ROM) during the transition from SKG to NG and SR. Comparison of joint angular velocities and accelerations provides determinant information for the choice of the joint drive actuators, determining their velocity profiles and consequently their accelerations. Thus, knee joint actuators require faster drives, providing higher velocity and acceleration levels than the hip and ankle joints. These actuators also need to be carefully configured at the start of rehabilitation on SKG towards NG and SR, protecting subject physical integrity, given the substantially lower joint velocities and accelerations on SKG in relation to NG and SR. The exception is the ankle joint on SKG, where αA presents a higher maximum value than the knee αK; this increase of αA is an adaptation mechanism compensating for the αK decrease and needs to be carefully addressed in the drive actuator configuration.

3.2 Frequency Spectrum

The kinematic signal curve shapes presented at natural time scale in Fig. 1 are determinant in the Fourier distribution of the signal energy over the different harmonics, with impact on the operating frequency of the selected actuators. Comparison of the frequency fmax of the maximum amplitude FFT harmonic of θ, ω and α (Fig. 2a) points to the need of selecting different actuators for the hip in relation to the knee and the ankle, able to operate at higher frequencies for the knee and the ankle and at lower frequencies for the hip joint. This agrees with the results of the kinematic analysis, with the knee requiring faster actuator drives. The fmax results for each lower limb joint θ, ω and α also point to the need for dynamic control of the joint actuators during the rehabilitation process, due to the different variation of the dominant FFT harmonic frequency of each joint's kinematics at the transition from SKG to NG and SR. Joint control in the frequency domain provides an excellent tool for this purpose due to the simplicity introduced, namely

Table 1 Hip (H), knee (K), ankle (A) joint angular displacement (θ), velocity (ω) and acceleration (α) limits during one trial stride on NG, SKG and SR

              NG                  SKG                 SR
θH (rad)      [−0.30; 0.65]       [−0.31; 0.48]       [−0.20; 0.63]
θK (rad)      [0.08; 1.06]        [0.05; 0.70]        [0.16; 1.52]
θA (rad)      [−0.10; 0.40]       [0.04; 0.40]        [−0.12; 0.60]
ωH (rad/s)    [−3.23; 3.87]       [−2.39; 2.58]       [−2.97; 5.13]
ωK (rad/s)    [−6.30; 6.04]       [−3.65; 4.75]       [−10.29; 7.47]
ωA (rad/s)    [−4.92; 2.90]       [−3.66; 2.44]       [−6.77; 6.88]
αH (rad/s²)   [−27.84; 44.71]     [−27.40; 36.45]     [−62.65; 45.80]
αK (rad/s²)   [−71.84; 122.23]    [−52.47; 54.74]     [−119.95; 150.25]
αA (rad/s²)   [−55.59; 106.13]    [−44.18; 83.81]     [−110.47; 121.27]

Subject Specific Lower Limb Joint Mechanical Assessment …


Fig. 2 Frequency fmax of the FFT harmonic with highest amplitude (a), and Fourier series 90th percentile frequency f90 (b), of the hip (H), knee (K), ankle (A) joint angular displacement (θ), velocity (ω) and acceleration (α) during one trial stride on NG, SKG, SR

the conversion of the coupled differential equations relating successive derivatives of joint angular displacement, velocity and acceleration into algebraic equations. f90 corresponds to the frequency of the Fourier components necessary for 90% convergence of the FFT reconstruction of the original signal; thus, the lower the f90, the fewer harmonics are needed for 90% convergence. According to the registered differences of f90 (Fig. 2b), namely on SKG and SR in relation to NG, as well as at the ankle in relation to the hip and the knee, specific spectral adjustments must be performed on each lower limb joint actuator during the rehabilitation process from SKG towards NG and SR. Nonetheless, limiting the higher frequency spectrum of the joint actuators has, as its sole consequence, the smoothing of the kinematic signal profiles.
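A sketch of the two spectral descriptors used here, fmax (frequency of the largest FFT harmonic) and f90. The cumulative spectral energy criterion below is one plausible reading of the 90% convergence definition, not necessarily the authors' exact implementation:

```python
import numpy as np

def fmax_f90(signal, fs, energy_fraction=0.90):
    """fmax: frequency of the FFT harmonic with the largest amplitude.
    f90: lowest frequency at which the cumulative spectral energy reaches
    90% of the total (one reading of '90% convergence of the FFT
    reconstruction of the original signal')."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    fmax = freqs[np.argmax(spec)]
    energy = np.cumsum(spec ** 2)
    f90 = freqs[np.searchsorted(energy, energy_fraction * energy[-1])]
    return fmax, f90

# Toy joint angle: dominant 1 Hz component plus a weak 5 Hz harmonic
fs = 100.0
t = np.arange(0, 2.0, 1.0 / fs)
theta = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.sin(2 * np.pi * 5.0 * t)
fm, f90 = fmax_f90(theta, fs)
print(fm, f90)  # → 1.0 1.0 (the 1 Hz component dominates both measures)
```

For real gait signals the weak high-frequency harmonics matter, so f90 typically lies well above fmax, as in Fig. 2.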

3.3 Dynamics and Energy

The lower Mfe− at the knee on SKG (Table 2) requires particular attention in the configuration of the knee joint actuator at the beginning of rehabilitation, with a gradual increase of the joint force moment during the transition to NG and SR. Attention is also needed at the hip actuators during the rehabilitation transition from SKG to NG and SR, due to the opposite variation of Mfe+ on each transition. The developed joint mechanical work (Table 2 and Fig. 3) corresponds to higher energy consumption of the joint actuators at the transition from SKG to NG and SR, thus reducing energy autonomy and portability. Although NG and SKG present similar amplitudes of maximum power consumption (Table 2 and Fig. 3), the fact that the peaks at the hip and the ankle occur almost simultaneously on SKG may limit the power supply to these joints' actuators. SR presents a more challenging problem due to the simultaneous high peak power at the knee and the ankle. The higher knee joint force moment frequency fmax on SKG (Table 3) is associated with the stiff knee condition and must be carefully considered at the beginning of rehabilitation, with the knee actuator allowing this higher frequency to avoid injuring the subject.


C. Rodrigues et al.

Table 2 Hip (H), knee (K), ankle (A) joint force moments (Mfe), mechanical work (W) and power (P) during NG, SKG, SR trial stride

              NG                   SKG                  SR
MfeH (N m)    [−95.77; 76.94]      [−63.96; 69.89]      [−76.89; 44.32]
MfeK (N m)    [−96.72; 28.74]      [−21.53; 27.74]      [−154.55; 37.50]
MfeA (N m)    [−124.26; 11.91]     [−120.89; 14.05]     [−242.42; 0.38]
WH (J)        [−59.20; 5.31]       [−27.09; 5.86]       [−36.24; 7.32]
WK (J)        [−55.50; 7.68]       [−6.92; 2.93]        [−105.40; 14.67]
WA (J)        [−45.96; 2.20]       [−45.60; 1.69]       [−145.21; 0.25]
PH (W)        [−77.78; 112.47]     [−55.53; 180.18]     [−90.48; 136.83]
PK (W)        [−199.10; 124.70]    [−57.19; 24.22]      [−350.62; 251.90]
PA (W)        [−67.11; 393.50]     [−56.48; 317.61]     [−949.83; 808.17]

Fig. 3 Hip (H), knee (K), ankle (A) joint force moments (Mfe), mechanical work (W) and power (P) time profiles during one trial stride on NG, SKG and SR

Table 3 Frequency fmax of the joint force moment (Mfe) FFT harmonic with highest amplitude and 90th percentile frequency f90 of Mfe at the hip (H), knee (K), ankle (A) joint during one trial stride on NG, SKG, SR

f (Hz)        NG       SKG      SR
MfeH fmax     0.75     0.67     1.03
MfeK fmax     0.75     3.33     1.03
MfeA fmax     0.75     0.67     1.03
MfeH f90      17.28    85.24    103.23
MfeK f90      10.52    88.57    10.32
MfeA f90      5.26     5.99     5.16


The fact that SKG presents a higher Mfe frequency dispersion f90 for the hip and the knee (Table 3) calls for the selection of joint actuators supporting this larger spectrum range; the same applies to the hip on SR.

3.4 Dynamic Stiffness

The statistically significant (p < 0.05) linear correlations of dMfe/dθ (Table 4) are mainly negative, pointing to the need for linear and opposite control variation of the joint force moments with the joint angles. The main differences in joint dynamic stiffness were registered at IDS for NG and SKG, with a very high kθ at the knee and a small value at the ankle on SKG, as well as a lower kθ for the ankle at MS and LF on SR.
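The per-subphase stiffness estimate can be sketched as the slope of a least-squares line of Mfe against θ. The example below is illustrative, with the Pearson correlation r standing in for the significance test reported in Table 4 (the authors' pipeline may differ):

```python
import numpy as np

def dynamic_stiffness(theta, m_fe):
    """Joint dynamic stiffness k_theta = dMfe/dtheta over one gait
    subphase, estimated as the least-squares slope of Mfe vs theta,
    together with the Pearson correlation r of the fit."""
    slope, _intercept = np.polyfit(theta, m_fe, 1)
    r = np.corrcoef(theta, m_fe)[0, 1]
    return slope, r

# Toy subphase: moment decreasing roughly linearly with joint angle
rng = np.random.default_rng(0)
theta = np.linspace(0.1, 0.6, 50)                         # rad
m_fe = -300.0 * theta + 20.0 + rng.normal(0.0, 1.0, 50)   # N m
k, r = dynamic_stiffness(theta, m_fe)
print(round(k, 1), r < 0)   # slope near -300 Nm/rad, negative correlation
```

A negative slope of this magnitude is of the same order as the NG and SKG entries in Table 4, which is why the controllers are expected to impose an opposite, near-linear moment–angle relation within each subphase.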

4 Conclusions

According to the specificity of each subject's gait pathology and the need for personalized rehabilitation, the average sample approach has proven limited for subject specific treatment; the acquisition of patient specific mechanical signals of abnormal human gait is needed for treatment prescription and for active aid device selection or development, as well as for its parametrization and follow-up during the rehabilitation process. The presented methodology led to the detection of joint workspace differences in subject specific sagittal hip, knee and ankle angular displacement during stiff knee gait and slow running in relation to normal gait, with application as an indicator of the transition during the rehabilitation process from SKG to NG and SR. The differences in joint angular velocity and acceleration detected on NG, SKG and SR can also contribute to the definition of the actuation drives' profiles on joint angle trajectory, both in the time and frequency domains. Specifically, the transition from SKG to NG and SR must be performed gradually and carefully to preserve the physical integrity of the subject during rehabilitation, due to the higher amplitude of joint angular displacements, velocities and accelerations on NG and SR than on SKG. Another conclusion from the achieved results is the possibility of using a different low spectrum actuator for the hip in relation to the knee and the ankle, due to the lower dominant frequencies of the hip joint angular displacement, velocity and acceleration on NG, SKG and SR, taking both changes into account during the rehabilitation transition from SKG to NG and SR. Although joint angular movement can be approximately reproduced without reconstructing the joint signals with all their harmonics, the transition from SKG to NG and SR implies a different f90 FFT percentile at each joint, with the corresponding actuator selection and dynamic configuration during the rehabilitation process. The transition from SKG to NG and SR implies a large increase of the knee joint force extension moment (Mfe−), and of the ankle moment from SKG to SR, requiring a gradual increase of Mfe− with actuators capable of producing the necessary force moments while avoiding exceeding the efforts supported by the human body. Several peaks of joint mechanical work and power were detected at distinct gait phases on SKG, NG and SR, requiring actuators and energy sources that comply with these demands, both regarding battery autonomy for the operation time and instantaneous response to simultaneous peak energy and power. Dynamic stiffness assessment allowed discrimination of the linearity between flexion–extension force moment and joint angular displacement at the hip, knee and ankle on each subphase of SKG, NG and SR, providing information for the selection and configuration of the controllers for each joint variable

Table 4 Hip (H), knee (K) and ankle (A) joint dynamic stiffness (dMfe/dθ) at NG, SKG and SR subphases, with * statistically significant (p < 0.05)

dMfe/dθ (Nm/rad)   Hip       Knee      Ankle
NG–IDS             −383.7    −389.0*   −406.7*
NG–SS              −137.3*   −199.1*   −221.2
NG–TDS             −427.0*   15.3      −312.0*
NG–TRF             −82.0*    −33.3*    −3.5*
SKG–IDS            −337.2*   −925.8*   −9.5
SKG–SS             −200.6*   −279.5*   −312.6*
SKG–TDS            −276.9    −67.5*    −460.9*
SKG–TRF            −76.6*    −30.4*    −0.7
SR–ST              −169.1*   −409.0*   −369.6*
SR–EF              −52.8     −23.4*    29.0*
SR–MS              −88.9*    −19.2*    −2.0
SR–LF              57.6      −6.2      2.0

stiffness actuator according to the subphase of the subject specific gait.

Conflict of Interest The authors declare that they have no conflict of interest.


Web/Mobile Technology as a Facilitator in the Cardiac Rehabilitation Process: Review Study

Hildete de Almeida Galvão and F. S. Barros

Abstract

Every year, strategies have been rethought in order to reduce the time of diagnosis and hospital stay, expanding the demand for care of patients with various pathologies and ensuring citizens' access to quality public health. This study aims to identify in the scientific literature the use of web/mobile tools in the management of information of cardiac patients from their hospital discharge (Phase II) until the end of the semi-assisted or face-to-face phases (Phases III and IV). Method: search of articles in the electronic journal databases available for access in the library system of the Federal Technological University of Paraná. Publications in journals, magazines, conferences and reviews were considered in the time interval from January 2014 to May 2020.

Keywords
Mobile health • Coronary heart disease • Cardiac rehabilitation

1 Introduction

In the year 2017, it was estimated that there were more than 380,000 deaths from cardiovascular diseases, corresponding to 30% of the death records in Brazil [1]. Heart diseases are among the major causes of physical, social and labor disability, having received considerable sums of financial resources from the State. However, these resources do not always meet the diagnostic and therapeutic demands.

H. de Almeida Galvão (&)  F. S. Barros Biomedical Engineering, Federal Technological University of Paraná (UTFPR), Avenida 7 de Setembro, 3165–Rebouças, Curitiba, Brazil e-mail: [email protected]

Every year, strategies have been rethought with the purpose of reducing the time of diagnosis and hospital stay, increasing the demand for the care of patients with the most diverse pathologies, and ensuring citizens' access to quality public health [2]. Cardiac Rehabilitation (CR) is the set of actions that contribute to the physical, mental and social recovery of patients with some type of heart disease, enabling the resumption of their daily activities [3]. A CR programme is basically divided into four phases: hospital (Phases I and II) and semi-assisted or on-site (Phases III and IV). It has a multidisciplinary health team, involving physician, nurse, physiotherapist, physical educator, nutritionist, pharmacist, psychologist and social worker, contributing to the secondary prevention of heart diseases. Patients and family members are also advised on the pathology by which the patients were affected, the action of the medication in use, the relationship between the disease and lifestyle, and the healthy habits to be adopted while still in Phase I (hospital) [3]. Given this scenario, it is of utmost importance to manage the daily information of patients included in a CR program, including biomarkers such as blood pressure, pre- and post-exercise heart rate and subjective perception of effort, in addition to encouraging the adoption of a healthy lifestyle. According to publications in recent years, technological innovations such as mobile applications have provided the opportunity to develop tools for self-management of health, reducing the distance between patients and professionals in all areas. This research aims to identify in the scientific literature the use of web/mobile tools in the management of information of cardiac patients from their hospital discharge (Phase II) until the end of the semi-assisted or on-site phases (Phases III and IV). It will then guide the construction of a mobile phone application which, after usability testing, suggestions and the necessary adaptations, will serve as a tool to be used in the rehabilitation of cardiac

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_222


patients. We believe that the number of works resulting from the bibliometric survey will contribute to the achievement of our objectives.

2 Methodology

This study is exploratory research using a bibliometric process, guided by the literature review method Knowledge Development Process-Constructivist (ProKnow-C) [16].

2.1 Inclusion Criteria

In the first stage, articles from scientific journals, conferences and reviews were selected that included at least one of the descriptors used in the search: mobile health, coronary disease, cardiac rehabilitation. Reading the abstracts made it possible to identify the instruments used in each study, selecting technologies pertinent to the proposed bibliometric survey in the following formats: digital platforms, mobile devices, mobile phones, and applications for home rehabilitation aimed at patients affected by cardiac events.

2.2 Exclusion Criteria

Articles that used portable sensors, wearable technologies and robotic agents as research tools were excluded. At this stage, one editorial was also excluded because it consisted of the opinion of the authors and their peers in the Journal of the American Medical Association (JAMA). This editorial sought to foster discussion about the rise of mobile health applications and their actual implications in the treatment of people with cardiovascular diseases (CVD), in addition to the disproportion between the development of applications for CVD and the scientific evidence supporting their applicability [4]. Other studies, although presenting in their methodology the use of mobile applications or text messaging, were also excluded because they were specific to infant hygiene, neurological pathologies, or other pathologies different from heart diseases.

2.3 Databases and Research Strategies

The search for the articles was carried out in two stages, in May 2018 and May 2020, in the electronic journal databases available for access in the library system of the Universidade Tecnológica Federal do Paraná, selected based on the areas of interest (Electronic and Biomedical Engineering), Latin American publications, and a database with a considerable number of publications in its collection, respectively: IEEE, SciELO, Scopus. After extracting keywords from articles found with the Google Scholar tool, the following descriptors were used: “mobile health; coronary disease; cardiac rehabilitation”, combined with the Boolean operators “and” and “or”, resulting in the query: “mobile health” AND “coronary disease” OR “cardiac rehabilitation”. Publications of articles, journals, conferences and reviews in the time span from January 2014 to May 2020 were considered (Fig. 1). For data collection, an Excel spreadsheet was elaborated and the information of each work selected for full reading was inserted: title, author, year of publication, country, abstract, methodology, and the digital object identifier (DOI). Most of the listed studies analyzed the use of mobile technologies in cardiac rehabilitation, or in changing the lifestyle of patients with cardiac comorbidities such as diabetes, obesity, or exposure to tobacco.

Fig. 1 Flow chart of the review searches (January 2014 to May 2020): databases used (IEEE n = 71, SciELO n = 39, Scopus n = 36, total n = 146), 5 duplicate articles excluded, 58 articles excluded after title analysis, 83 articles selected for abstract reading, 56 articles excluded after reading the abstract, and 27 articles selected for full reading
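The deduplication and inclusion/exclusion screening described above can be sketched as a small filter. Titles, record fields and keyword lists below are invented placeholders for illustration; they are not the study's actual data or criteria logic:

```python
# Hypothetical records, as if exported from the database searches
records = [
    {"title": "Mobile health in cardiac rehabilitation", "db": "IEEE"},
    {"title": "Mobile health in cardiac rehabilitation", "db": "Scopus"},
    {"title": "Text messaging for coronary disease prevention", "db": "Scopus"},
    {"title": "Robotic agents for neurorehabilitation", "db": "Scielo"},
]

KEEP = ("mobile health", "coronary disease", "cardiac rehabilitation")
EXCLUDE = ("robotic", "wearable", "sensor")

def screen(recs):
    seen, selected = set(), []
    for rec in recs:
        key = rec["title"].lower()
        if key in seen:           # duplicate across databases: skip
            continue
        seen.add(key)
        if any(word in key for word in EXCLUDE):
            continue              # exclusion criteria
        if any(term in key for term in KEEP):
            selected.append(rec)  # inclusion criteria
    return selected

kept = screen(records)
print([r["title"] for r in kept])
# → ['Mobile health in cardiac rehabilitation',
#    'Text messaging for coronary disease prevention']
```

In the actual review the title and abstract screening was performed manually, so this sketch only mirrors the order of the steps in Fig. 1.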


At the end of this review, 9 articles were selected, coded with the letter E (study) and a number according to reading and analysis order (Table 1).

3 Results

In recent years, many studies have been conducted on the use of remote technologies in health, such as telemedicine and mobile technologies, contributing to patient care. CR aims at the recovery of patients affected by a cardiac event, with emphasis on physical exercise, favoring early hospital discharge and contributing to the reduction of new cardiac events that would lead patients to a new hospitalization [5]. However, there is a gap in the adherence and permanence of patients in programmes carried out in rehabilitation centres and outpatient clinics, a fact observed in the selected studies. All selected studies were conducted in the last 5 years, confirming the need for and benefits of using technologies in CR remotely, enabling greater compliance and reducing barriers between patients and the multiprofessional healthcare team. The studies are in English, and the research was carried out in countries spanning 4 of the 6 continents, pointing to the new technologies as a global trend. Five of the reported studies used randomized clinical trials in their methodology, reaffirming the importance of outpatient care simultaneously with home follow-up, a fact reported in study E7, which demonstrates the importance of CR after cases of acute myocardial infarction (AMI). The first selected study (E1) presented the development of a smartphone application to be used in the secondary prevention phase of CR, that is, from hospital discharge after a first cardiac event. Usability, acceptance and feasibility tests were performed in only one of the works (E2), considering the notes made by users of the application, providing important contributions for future studies that seek innovations in the field of user education and prevention of cardiac events by reducing risk factors (E5). Some of the barriers that make it difficult for patients to participate in CR were demonstrated in a study conducted in Australia (E3) with residents of rural and aboriginal areas, given the geographical difficulty they face in participating in traditional rehabilitation carried out in centres and outpatient clinics. The administration of medication prescriptions, with dosages and times to be followed, whether by text message or voice message, is essential for the patient's recovery, contributing to the prevention of new cardiac events or the reduction of their intensity (E4). Two of the studies used text messages exclusively to stimulate the practice of healthy habits, reducing cardiac risk factors. The first (E6) used semi-customized messages motivating the consumption of an adequate diet, physical exercise and smoking cessation in smoking patients [6]. The second (E8) included in the program, in addition to the text messages, an interactive site with exercise prescriptions adapted to the conditioning level of the participants [7], corroborating the health application program presented in the study conducted in Singapore (E9). In that study, Jiang et al. analyzed the effectiveness of the Care4Heart smartphone application, developed with 4 learning stations, providing its users with greater knowledge about coronary heart disease and the tools needed for adherence to physical activity and better health maintenance habits [8].

Table 1 Results found in the selected studies (n = 9)

Study   BD      Year   Country       Instrument                             Type of study
E1      IEEE    2016   Belgium       Smartphone application                 RCT
E2      Scopus  2018   Ireland       Application usability                  SR
E3      Scopus  2018   Australia     Smartphone, cell phone, text message   SR, RCT, NRCT
E4      Scopus  2016   China         Application, text message              Search/evaluation
E5      Scopus  2016   Australia     Smartphone application                 Analysis of RCTs
E6      Scopus  2015   Australia     Cell phone, text message               RCT
E7      Scopus  2015   USA           Smartphone application                 ES
E8      Scopus  2015   New Zealand   Cell phone, text message               RCT, RCT/DT
E9      Scopus  2019   Singapore     Smartphone application                 ES

RCT randomized clinical trial, NRCT non-randomized clinical trial, SR systematic review, ES experimental study, DT data triangulation. Source: the authors

4 Discussion

The barriers that prevent the use of new remote technologies in CR go beyond geographical, social and economic barriers, as shown in Fig. 2. Health priorities are higher in the Americas than in other continents. Mobile health depends on public policies, investment in communication and infrastructure, and guaranteed expansion of networks and access coverage for users. In the Americas, 50% of the countries point out the lack of legal guidelines as one of the main barriers to the expansion and credibility of mobile health. There is a scarcity of studies evaluating the cost/benefit relation of telemedicine and mobile health; such results could guide the political actions necessary for the implementation and expansion of remote health programs, providing a connection between professionals and patients, promoting greater adherence to CR, and ensuring continuous, quality care. Many professionals still do not feel safe using technological resources in health, due to a lack of knowledge about the emerging technologies and of trained human resources within health services [9]. Telemedicine is proof that remote CR increases the compliance of cardiac patients in secondary prevention programs, bringing medium-term benefits. However, the scarcity of long-term studies causes uncertainty about the effectiveness of this type of intervention [10, 11]. Mobile technology has proven to be an instrument capable of assisting secondary prevention, reducing cardiac risk factors through diet and physical activity and reducing readmissions. The use of patient-centered applications, with prescription and evaluation of exercises, encourages the change to a healthy lifestyle [12, 13].

Fig. 2 Four barriers in the implementation of mobile health, according to WHO regional data


According to Chen et al. [14], a text message bank reduces the workload of doctors, which makes the remote tool attractive to these professionals. The study also suggests that future works consider the participation of users in the development and customization of messages. The text messages were also well evaluated by users, stimulating the practice of physical activities. Physical exercise helps maintain good levels of biomarkers such as cholesterol, blood glucose, blood pressure and weight, promoting cardiovascular recovery and increasing adherence to the prescribed medication [14]. The use of the mobile health tool is advocated at two moments by the same author. In one of the studies, Chow et al. address the possibility of reducing socioeconomic differences among patients, benefiting everyone in the prevention of cardiovascular diseases. They point out data showing that more than half of the world population uses mobile phones, which makes more research on these resources possible. However, they note that the increase in cases of people affected by cardiovascular diseases has outpaced the research on new technologies, while observing a growing trend of clinical trials seeking to reduce the lack of CR services [15]. In another study, Chow et al. discuss the efficacy of text messages, corroborating the considerations of Frederix et al. about text messages and their influence and benefits in changing patients' behavior, based on the positive results achieved in randomized smoking cessation trials, smoking being one of the main exogenous risk factors in cardiac events [6, 10].

5 Conclusions

Most studies relating CR and the use of mobile technologies point to physical exercise and lifestyle change as the main agents in the reduction or normalization of biomarker levels. The studies selected by Chow et al. and Pfaeffli Dale et al. fully met the aims of this review, since they presented mobile technology instruments that feed an information system with data and values of the main biomarkers in CR, besides prescribing drugs and exercises, benefiting and contributing to the work of the health team and promoting personalized patient care [7, 15]. This systematic review will guide the development of an application capable of feeding a clinical database with daily patient information, seeking health promotion, the adoption of a healthy lifestyle, and adherence to CR programs.

Conflict of Interest The authors declare that there are no conflicts of interest.

References

1. Cardiometro. Morte por doenças cardiovasculares no Brasil. https://www.cardiometro.com.br
2. Campos CEA (2003) O desafio da integralidade segundo as perspectivas da vigilância da saúde e da saúde da família. Ciência & Saúde Coletiva 8(2):569–584
3. Herdy AH et al (2014) Diretriz Sul-Americana de Prevenção e Reabilitação Cardiovascular. Arq Bras Cardiol 103(2, supl. 1). https://doi.org/10.5935/abc.2014S0003
4. Eapen ZJ, Peterson ED (2015) Can mobile health applications facilitate meaningful behavior change? Time of answers. JAMA 314(12):1236–1237. https://doi.org/10.1001/jama.2015.11067
5. Carvalho T, Milani M, Ferraz AS, Silveira AD, Herdy AH, Hossri CAC et al (2020) Diretriz Brasileira de Reabilitação Cardiovascular–2020. Arq Bras Cardiol 114(5):943–987
6. Chow CK, Redfern J, Hillis GS et al (2015) Effect of lifestyle-focused text messaging on risk factor modification in patients with coronary heart disease: a randomized clinical trial. JAMA 314(12):1255–1263
7. Pfaeffli Dale L, Whittaker R, Dixon R et al (2015) Acceptability of a mobile health exercise-based cardiac rehabilitation intervention: a randomized trial. J Cardiopulm Rehabil Prev 35(5):312–319
8. WHO Global Observatory for eHealth (2011) mHealth: new horizons for health through mobile technologies: second global survey on eHealth. World Health Organization. https://apps.who.int/iris/handle/10665/44607
9. Jiang Y, Jiao N, Nguyen HD et al (2019) Effect of a mHealth programme on coronary heart disease prevention among working population in Singapore: a single group pretest-posttest design. J Adv Nurs 75:1922–1932
10. Frederix I, Sankaran S, Coninx K, Dendale P (2016) MobileHeart, a mobile smartphone-based application that supports and monitors coronary artery disease patients during rehabilitation. In: EMBC, 38th annual international conference of the IEEE engineering in medicine and biology society, Orlando, FL, pp 513–516
11. Hamilton SJ, Mills B, Birch EM, Thompson SC (2018) Smartphones in the secondary prevention of cardiovascular disease: a systematic review. BMC Cardiovasc Disord 18(1):25
12. Duff O, Walsh D, Malone S et al (2018) MedFit App, a behavior-changing, theoretically informed mobile app for patient self-management of cardiovascular disease: user-centered development. JMIR 2(1):e8
13. Widmer RJ, Allison TG, Lerman LO, Lerman A (2015) Digital health intervention as an adjunct to cardiac rehabilitation reduces cardiovascular risk factors and rehospitalizations. J Cardiovasc Transl Res 8(5):283–292
14. Chen S, Gong E, Kasi DS et al (2016) Development of a mobile phone-based intervention to improve adherence to secondary prevention of coronary heart disease in China. J Med Eng Technol 40(7–8):372–382
15. Chow CK, Ariyarathna N, Islam SM, Thiagalingam A, Redfern J (2016) mHealth in cardiovascular health care. Heart Lung Circ 25(8):802–807
16. Ensslin L, Ensslin SR, Lacerda RTO, Tasca JE (2010) ProKnow-C, knowledge development process-Constructivist. Processo técnico com patente de registro junto ao INPI. INPI, Rio de Janeiro

Biomedical Signal and Image Processing

Regression Approach for Cranioplasty Modeling

M. G. M. Garcia and S. S. Furuie

Abstract

Patient-specific implants provide important advantages for patients and medical professionals. The state of the art of cranioplasty implant production is based on bone structure reconstruction and the use of the patient's own anatomical information for filling the bone defect. The present work proposes a two-dimensional investigation of which dataset, combining points of the bone defect region and points of the healthy contralateral skull hemisphere, results in the polynomial regression closest to a gold standard structure. The similarity measures used to compare datasets are the root mean square error (RMSE) and the Hausdorff distance. The objective is to use the most successful dataset in the future development and testing of a semi-automatic methodology for cranial prosthesis modeling. The present methodology was implemented in Python scripts and uses five series of skull computed tomography images to generate phantoms with small, medium and large bone defects. Results from statistical tests and observations of the mean RMSE and mean Hausdorff distance indicate that the dataset formed by the phantom contour points counted twice plus the mirrored contour points is the one that significantly improves the similarity measures.

Keywords

Cranial prosthesis • Image processing • Patient-specific • Polynomial regression

1 Introduction

M. G. M. Garcia (B) · S. S. Furuie
School of Engineering, University of São Paulo, São Paulo, Brazil

Bone reconstruction of cranial areas represents a challenge for neurosurgeons and plastic surgeons because of the complex anatomical shapes, the unique characteristics of each bone defect, the presence of vital adjacent anatomical structures and the high risk of infection to which the patient is subjected. This procedure is necessary due to trauma, tumor resection, decompressive craniotomy, infections or deformities. It aims to aesthetically restore the affected area and rehabilitate its function, promoting protection of the brain and other vital structures and the restoration of cerebrospinal fluid dynamics and cerebral blood flow. Consequently, the patient may have his/her self-esteem restored and his/her social life normalized [1–3].

The state of the art of the cranioplasty implant production process is based on the reconstruction of the bone structure of the patient's skull from computed tomography images, generating a three-dimensional model. The missing bone segment is virtually modeled based on the patient's own anatomical information with the ideal geometry for filling the bone defect. The resulting implant can then be produced by subtractive or additive manufacturing [2]. A patient-specific implant provides advantages for patients and medical professionals by reducing surgery time, prosthesis adaptation trauma and patient recovery time, and by increasing surgery and implant success rates [4]. Case studies [5,6] and studies with patient groups [3,7–9] demonstrate the efficiency of the technique.

Currently, the three-dimensional modeling of patient-specific implants is characterized by the use of commercial, user-dependent software, which makes the process more expensive and affects its standardization and repeatability [10,11]. In this context, it is appropriate to propose semi-automatic and automatic methodologies that contribute to this process by automating its critical steps, as performed in the studies [1,11,12]. These studies use information from the morphologically healthy contralateral skull hemisphere in the generation of the segment corresponding to the bone defect.
The present work proposes a two-dimensional investigation of which dataset results in the closest polynomial regression to a gold standard structure combining points of the

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_223


bone defect region and points of the healthy contralateral hemisphere. Such analysis is not addressed in the cited reference studies and can improve the results of semi-automatic and automatic methodologies.

2 Materials

Two series of skull computed tomography images were obtained from the Patient Contributed Image Repository website [13] (Table 1). Three other series of skull computed tomography images were obtained from the public database The Cancer Imaging Archive (TCIA) [14], in the HNSCC-3DCT-RT collection [15] (Table 1). Images in DICOM format were de-identified and anonymized prior to being made publicly available in both databases. The methodology consists of Python scripts (version 2.7.13, Anaconda 4.4.0 package), using the libraries numpy, skimage and scipy.spatial.

3 Methodology

3.1 Information Extraction from DICOM Images

The identification of the points that delimit the patient's bone structure (called bone contours) is performed in each image by the measure.find_contours function [16], which is based on the marching squares algorithm (a two-dimensional counterpart of the marching cubes algorithm [17]). The function uses a fixed threshold value to determine the interface points between bone and adjacent tissues. Each bone contour is defined as a list of points with x- and y-coordinates in pixels for each slice. Bone contours of interest are defined as those whose Euclidean distance between the center of the bone contour (defined as the arithmetic mean of its points) and the center of the image is at most 10% of the smallest image dimension. This criterion is based on the fact that the contour of the skull bone is approximately centered on the image.

In tomographic images of a healthy skull, two bony contours of interest are expected: an outer and an inner contour approximately concentric, being the outer contour longer than the inner one. The contours are structured in a multidimensional array and associated with their image number. Those are defined as the gold standard contours. It is noteworthy that the x and y coordinates of the obtained contours are organized in such a way that the main axis of the sagittal plane of the patient’s skull is parallel to the x axis, allowing the treatment of each bone contour hemisphere data as a function.
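The contour-of-interest criterion above can be sketched as follows. This is a minimal NumPy sketch, not the authors' code; the function name `contours_of_interest` is ours, and the contours are assumed to be (N, 2) point arrays such as those returned by skimage's `measure.find_contours`.

```python
import numpy as np

def contours_of_interest(contours, image_shape, frac=0.10):
    """Keep contours whose center lies near the image center.

    Selection criterion from the text: the Euclidean distance between
    the contour center (arithmetic mean of its points) and the image
    center must be at most `frac` (10%) of the smallest image dimension.
    `contours` is a list of (N, 2) arrays of (row, col) points.
    """
    center = np.array(image_shape, dtype=float) / 2.0
    limit = frac * min(image_shape)
    kept = []
    for c in contours:
        c = np.asarray(c, dtype=float)
        if np.linalg.norm(c.mean(axis=0) - center) <= limit:
            kept.append(c)
    return kept
```

On a 512 × 512 slice this keeps only contours whose center is within 51.2 pixels of (256, 256), discarding, e.g., head-rest or table contours near the image border.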

3.2 Phantom Generation

Images above the orbit region and below the skull apex are selected from the image series of the five patients listed in Table 1. Each patient's image series has different image spacing and the skulls have anatomical variations, so the number of images selected to compose each phantom also differs. The numbers of images selected for patients P1 to P5 are, respectively, 63, 121, 31, 32 and 32. From the selected images the bone contours are then obtained in each image according to Sect. 3.1. The threshold value used for all patients is 200 HU. Three phantoms are produced for each patient by selecting points on the outer bone contours to simulate large (L), medium (M) and small (S) bone defects, resulting in fifteen phantoms. For each patient, a point on the central image of the series (x, y, image number) is defined, approximately centered on the left hemisphere of the skull for P1 and P2 and on the right hemisphere for P3, P4 and P5. This point is set as the center of an ellipsoid with parameters a, b and c referring respectively to the x-, y- and z-axes. Ellipsoids with parameters (18, 24, 12), (27, 36, 18) and (36, 48, 24) are used to generate small, medium and large bone defects, respectively. Each point on the contour of each image is converted to the patient's coordinate system and checked for externality to the ellipsoid volume; only points outside the ellipsoid are kept.
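The externality check that carves the defect can be written as a one-line ellipsoid test: a point is outside the ellipsoid when the sum of its squared normalized offsets exceeds 1. A minimal sketch (function name ours, not from the paper):

```python
import numpy as np

def outside_ellipsoid(points, center, abc):
    """Boolean mask: True where a point lies outside the ellipsoid.

    A contour point (x, y, z) in patient coordinates (mm) is kept only if
    ((x - x0)/a)^2 + ((y - y0)/b)^2 + ((z - z0)/c)^2 > 1.
    `points` is an (N, 3) array, `center` the ellipsoid center (x0, y0, z0)
    and `abc` the semi-axes (a, b, c), e.g. (18, 24, 12) for a small defect.
    """
    d = (np.asarray(points, dtype=float) - np.asarray(center, dtype=float)) \
        / np.asarray(abc, dtype=float)
    return (d ** 2).sum(axis=1) > 1.0
```

Keeping `points[outside_ellipsoid(points, center, abc)]` removes exactly the contour points inside the simulated defect volume.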

Table 1 Data from the skull computed tomography image series

Patient ID | Database | Patient ID in database | Sex(a) | Age(a) | Number of images in series | Image size (pixels) | Image thickness (mm)
P1 | PCIR | 77654033 | –    | 42 | 188 | 512 × 512 | 1.250
P2 | PCIR | 54879843 | Male | 25 | 226 | 512 × 512 | 0.625
P3 | TCIA | HN_P001  | Male | –  | 198 | 512 × 512 | 2.000
P4 | TCIA | HN_P003  | Male | –  | 174 | 512 × 512 | 2.000
P5 | TCIA | HN_P004  | Male | –  | 107 | 512 × 512 | 2.000

(a) Information filled in with "–" is missing in the series

3.3 Mirroring of Contour Points

In order to use the information from the morphologically healthy contralateral skull hemisphere it is necessary to align it with the bone defect region. Considering that the skull is approximately symmetric, a mirroring operation is convenient, and a contour symmetry axis must be defined to perform it. This axis is defined per image by a horizontal vector and a reference point (x_ref, y_ref) calculated as the mean between the maximum and minimum values of the x- and y-coordinates, respectively, of the gold standard contour points. The mirroring operation is performed on the phantom contours. The first step consists in reversing the sign of the y-coordinate of each contour point, mirroring the points with respect to the x-axis. Then the points are translated by adding 2·y_ref to the y-coordinate of each point in order to spatially overlap them with the original phantom contour points. The resulting contour is called the mirrored contour.
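The two steps (sign reversal, then translation by 2·y_ref) compose into the single reflection y' = 2·y_ref − y. A minimal sketch under that observation (function name ours):

```python
import numpy as np

def mirror_contour(points, y_ref):
    """Mirror contour points about the horizontal symmetry axis y = y_ref.

    The two-step operation from the text (reverse the sign of each
    y-coordinate, then translate by 2 * y_ref) is equivalent to
    y' = 2 * y_ref - y. `points` is an (N, 2) array of (x, y) coordinates.
    """
    mirrored = np.asarray(points, dtype=float).copy()
    mirrored[:, 1] = 2.0 * y_ref - mirrored[:, 1]
    return mirrored
```

A point already on the symmetry axis (y = y_ref) maps to itself, which is why well-aligned phantom and mirrored contours overlap visually, as reported for Fig. 2.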

3.4 Determination of Contour Points in the Hemisphere of Interest

Since the simulated bone defects in the present study are all lateral, it is convenient to determine the hemisphere that contains the bone defect and process only the contour points that lie in the hemisphere of interest. This measure contributes to the computational efficiency of the methodology. The gold standard contour points, the phantom contour points and the mirrored contour points that have a y-coordinate greater than y_ref for patients 1 and 2, and smaller than y_ref for patients 3 to 5, are selected for each image.

3.5 Determination of the Polynomial Regression Region in X-Axis

The set of reference points included in the regression affects the morphology of the resulting curve. Therefore, it is important to define reasonable criteria for the region that provides reference points. This region is determined for each image by ordering the phantom contour points by x-coordinate in ascending order. The largest interval between the x-coordinates of subsequent points is found, and the two points at these x-coordinates are established as the starting (x_ini, y_ini) and ending (x_end, y_end) points of the bone defect. The starting point of the evaluation region (x_ini_eval, y_ini_eval) is set as the point with x-coordinate smaller than x_ini that is closest to (x_ini, y_ini) with Euclidean distance greater than or equal to 10 mm. The endpoint of the evaluation region (x_end_eval, y_end_eval) is established as the point with x-coordinate greater than x_end that is closest to (x_end, y_end) with Euclidean distance greater than or equal to 10 mm. The prosthesis must overlap the edge of the bone defect for support, justifying the 10 mm added on each side of the bone defect.
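The gap search and the 10 mm margin walk can be sketched as below. This is our reading of the description, not the authors' code; names and the returned index convention are ours.

```python
import numpy as np

def defect_and_evaluation_region(points, margin=10.0):
    """Locate the bone defect and the regression evaluation region.

    Steps from the text: sort the contour points by x-coordinate, take
    the largest gap between consecutive x values as the defect (its two
    ends are the starting/ending points), then widen the region by
    walking outwards until the Euclidean distance to the defect edge is
    >= `margin` (the 10 mm of prosthesis overlap).
    `points` is an (N, 2) array of (x, y) in mm. Returns the sorted
    array and the indices of (x_ini, y_ini), (x_end, y_end),
    (x_ini_eval, y_ini_eval), (x_end_eval, y_end_eval).
    """
    pts = np.asarray(points, dtype=float)
    pts = pts[np.argsort(pts[:, 0])]
    gaps = np.diff(pts[:, 0])
    i_ini = int(np.argmax(gaps))   # last point before the defect
    i_end = i_ini + 1              # first point after the defect
    # walk left until the distance to the defect start is >= margin
    i_ini_eval = i_ini
    while i_ini_eval > 0 and np.linalg.norm(pts[i_ini_eval] - pts[i_ini]) < margin:
        i_ini_eval -= 1
    # walk right until the distance to the defect end is >= margin
    i_end_eval = i_end
    while i_end_eval < len(pts) - 1 and np.linalg.norm(pts[i_end_eval] - pts[i_end]) < margin:
        i_end_eval += 1
    return pts, i_ini, i_end, i_ini_eval, i_end_eval
```

For a contour sampled every 2 mm with a 20 mm hole, the evaluation region extends 10 mm past each defect edge, as specified.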

3.6 Dataset for Polynomial Regression

The objective of the present study is to verify which dataset, combining phantom contour points and mirrored contour points, improves the similarity measures between the gold standard contour and the estimated contour. The latter is obtained by a fourth-degree two-dimensional polynomial regression using the referred dataset. The polynomial coefficients are obtained by the numpy.polyfit function [18] (which performs a fourth-degree least-squares polynomial regression). The polynomial is evaluated over the evaluation region established in Sect. 3.5 using the numpy.polyval function [19]. The x-axis resolution of the evaluation region is 0.2 mm. The similarity measures used are the root mean square error (RMSE) [20] and the Hausdorff distance (H) [21]. Tests A through D are performed using different datasets as input points to the polynomial fit calculation: only the phantom contour points (Test A); the phantom contour points and the mirrored contour points (Test B); the phantom contour points twice and the mirrored contour points (Test C); the mirrored contour points twice and the phantom contour points (Test D). Using the same dataset twice aims to influence the curvature of the resulting curve more strongly. Tests A through D are performed for each bone contour comprising the phantom bone defect. The criterion for identifying a contour with bone defect is a distance between x_ini and x_end greater than 2 mm; smaller distances indicate that the points are neighbors, with no spacing and therefore no defect between them. The bone defect starting (x_ini, y_ini) and ending (x_end, y_end) points delimit the region for calculating the similarity measures. The evaluation region is not appropriate for this calculation because it comprises the bone defect edges, which are present identically in the gold standard and in the phantom; this would artificially reduce the error between the two datasets.
For each phantom the mean and the standard deviation of each similarity measure are also calculated.
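A per-slice run of Test C can be sketched as follows. The fit through numpy.polyfit/polyval matches the text; the function name, the interpolation of the gold contour onto the evaluation grid, and the pure-NumPy symmetric Hausdorff distance (the paper uses scipy.spatial) are our assumptions.

```python
import numpy as np

def fit_and_score(defect_pts, mirrored_pts, gold_pts, x_eval, weight_defect=2):
    """Sketch of Test C: fourth-degree polynomial fit plus similarity scores.

    The dataset stacks the phantom contour points `weight_defect` times
    (twice in Test C) with the mirrored contour points once, so the edge
    points pull harder on the least-squares fit. The fitted polynomial is
    sampled on `x_eval` (0.2 mm steps in the paper) and compared with the
    gold standard contour by RMSE and symmetric Hausdorff distance.
    All point sets are (N, 2) arrays of (x, y) in mm; `gold_pts` must be
    sorted by x for the interpolation.
    """
    data = np.vstack([defect_pts] * weight_defect + [mirrored_pts])
    coeffs = np.polyfit(data[:, 0], data[:, 1], deg=4)   # least-squares fit
    est = np.column_stack([x_eval, np.polyval(coeffs, x_eval)])

    # RMSE between estimated and gold y-values at the same x samples
    gold_y = np.interp(x_eval, gold_pts[:, 0], gold_pts[:, 1])
    rmse = np.sqrt(np.mean((est[:, 1] - gold_y) ** 2))

    # symmetric Hausdorff distance between the two point sets
    d = np.linalg.norm(est[:, None, :] - gold_pts[None, :, :], axis=2)
    hausdorff = max(d.min(axis=1).max(), d.min(axis=0).max())
    return coeffs, rmse, hausdorff
```

Passing `weight_defect=1` reproduces Test B; swapping the roles of the two point sets gives Test D.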

4 Results and Discussion

Figure 1 presents examples of the phantoms, demonstrating the variety of bone defect sizes and series characteristics due to the different numbers of images and different spacings between images.


Fig. 1 Phantoms: patient 2 medium bone defect (a), patient 3 large bone defect (b)

Fig. 2 Images 13, 33 and 53 of patient 1 large bone defect phantom (left to right, respectively) (a); images 10, 20 and 30 of patient 3 medium bone defect phantom (left to right, respectively) (b). Phantom contour points in blue, mirrored contour points in green, reference point represented by the magenta "x" and contour symmetry axis represented by the red dotted line. The bone defect region is highlighted by the black vertical bars, when present

Figure 2 shows the overlap of phantom contours and the respective mirrored contours in different skull regions for two patients. Good visual overlap of the contours can be observed in the whole skull. This is observed in all phantoms generated despite the anatomical differences between them, indicating convenient positioning of the data for the tests. The graphical results obtained indicate that the polynomial regression is satisfactorily filling the defect with the desired shape. An example is shown in Fig. 3 for image 7 of patient 2 small bone defect phantom.

Figures 4 and 5 show, respectively, the mean RMSE and the mean Hausdorff distance (both in millimeters, with their respective standard deviations) for each phantom and test. The mean similarity measures were calculated between the gold standard contour and the estimated contour in the bone defect region considering all contours that comprise the bone defect of each phantom for each test. Table 2 presents the number of contours considered in the calculation of similarity measures for each phantom.


Fig. 3 Test A (a) to D (d) results for the evaluation region of image 07 of patient 2 small bone defect phantom: in the images on the left, gold standard contour points are in blue, the mirrored contour points are in green and estimated contour points are in red; the image on the right shows the phantom bone contour points in blue and the estimated contour points in red

Fig. 4 Mean RMSE with respective standard deviation for each phantom, both in millimeters

Fig. 5 Mean Hausdorff distance with respective standard deviation for each phantom, both in millimeters


Table 2 Number (N) of contours with bone defect considered for calculating the similarity measures

Size | P1 | P2 | P3 | P4 | P5
S    | 19 | 38 |  7 |  6 | 10
M    | 29 | 57 | 15 | 14 | 16
L    | 38 | 77 | 21 | 21 | 22

Table 3 p-Values obtained for paired, bilateral t-test for test pairs A/B, A/C, A/D

Similarity measure | Test A/Test B | Test A/Test C | Test A/Test D
RMSE               | 8.43 × 10^-5  | 8.74 × 10^-7  | 1.25 × 10^-3
H                  | 1.15 × 10^-2  | 1.35 × 10^-5  | 2.78 × 10^-1

The mean RMSE and the mean Hausdorff distance varied considerably between patients and between defect sizes in the same patient due to anatomical variations. Both measures increase as the size of the defect increases, which is expected, because the curvature of the phantom contour becomes sharper and potentially more complex, not necessarily corresponding to the characteristic curvature resulting from the fourth-degree polynomial regression. In these cases, the RMSE and Hausdorff distance are decreased by using the mirrored contour, which provides this curvature information. This fact is demonstrated by the increase of error for test A (which only uses information from the edges of the bone defect in the regression) relative to the other tests as bone defect size increases. In small defects, the mean RMSE in test A is the smallest among the tests for 3/5 of the patients. In medium defects, this occurs for 2/5 of the patients. In large defects, this is the case for only 1/5 of the patients. Considering the Hausdorff distance, the corresponding proportions are similar: 4/5, 2/5 and 1/5, respectively. Considering only tests B, C and D, in which mirrored bone contour information is used in the regression, the mean RMSE in test C is the lowest for 4/5 of the patients with small defects and for all five patients with medium and large defects. The proportions are identical regarding the Hausdorff distance. From these observations the following null hypothesis is formulated: there is no statistically significant difference between the RMSE/Hausdorff distance obtained for test A and for each other test (B, C, D) considering all phantoms. Since the datasets are paired, it is verified and confirmed that the datasets resulting from the differences test A − test B, test A − test C and test A − test D for both similarity measures have approximately normal distributions. That condition allows the calculation of p-values using the paired t-test (Table 3).
If the p-value is smaller than 0.05, the null hypothesis is rejected. The null hypothesis was rejected for all test pairs regarding RMSE values and for test pairs A/B and A/C regarding the Hausdorff distance, indicating that there is a significant difference between these test results.
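The paired statistic behind Table 3 is standard; a pure-NumPy sketch of how it is computed is shown below (the paper does not state its implementation, and the function name is ours; in practice scipy.stats.ttest_rel returns the two-sided p-value directly).

```python
import numpy as np

def paired_t_statistic(a, b):
    """Paired (dependent-samples) t statistic for matched measurements.

    `a` and `b` are matched arrays of a similarity measure (e.g. the
    per-phantom mean RMSE for test A and for test C). With d = a - b,
    the statistic is t = mean(d) / (std(d) / sqrt(n)) on n - 1 degrees
    of freedom; the two-sided p-value is then read from the Student t
    distribution, and p < 0.05 rejects the null hypothesis of no
    difference between the tests.
    """
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1
```

The normality check on the differences mentioned in the text is a prerequisite for this statistic to follow the t distribution.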

5 Conclusion and Further Work

Combining the statistical test results and the observations made from the mean RMSE and mean Hausdorff distance, we conclude that the dataset formed by the phantom contour points twice and the mirrored contour points (Test C) is the one that significantly improves the similarity measures. It will be used for further development and testing of a three-dimensional polynomial regression approach for semi-automatic generation of a cranial prosthesis polygonal mesh that complements a given bone defect.

Acknowledgements The results published here are in part based upon data generated by the TCGA Research Network: http://cancergenome.nih.gov/. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brazil (CAPES)—Finance Code 001.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Hieu LC et al (2002) Design and manufacturing of cranioplasty implants by 3-axis CNC milling. Technol Health Care 10:413–423
2. Parthasarathy J (2014) 3D modeling, custom implants and its future perspectives in craniofacial surgery. Ann Maxillofac Surg 4:9–18
3. Park E et al (2016) Cranioplasty enhanced by three-dimensional printing: custom-made three-dimensional-printed titanium implants for skull defects. J Craniofac Surg 27:943–949
4. Ventola CL (2014) Medical applications for 3D printing: current and projected uses. Pharm Ther 39:704–711
5. Jardini A et al (2014) Customised titanium implant fabricated in additive manufacturing for craniomaxillofacial surgery. Virtual Phys Prototyp 9:115–125
6. Jardini A et al (2014) Cranial reconstruction: 3D biomodel and custom-built implant created using additive manufacturing. J Cranio Maxillofac Surg 42:1877–1884
7. Cabraja M, Klein M, Lehman TN (2009) Long-term results following titanium cranioplasty of large skull defects. Neurosurg Focus 26:E10
8. Kim BJ, Hong KS, Park KJ, Park DH, Chung YG, Kang SH (2012) Customized cranioplasty implants using three-dimensional printers and polymethyl-methacrylate casting. J Korean Neurosurg Soc 52:541–546
9. Rotaru H et al (2012) Cranioplasty with custom made implants: analyzing the cases of 10 patients. J Oral Maxillofac Surg 70:e169–e170
10. Wehmöller M, Eufinger H, Kruse D, Massberg W (1995) CAD by processing of computed tomography data and CAM of individually designed prostheses. Int J Oral Maxillofac Surg 24:90–97
11. Gelaude F, Sloten JV, Lauwers B (2006) Automated design and production of cranioplasty plates: outer surface methodology, accuracies and a direct comparison to manual techniques. Comput Aid Des Appl 3:193–202
12. Hieu LC et al (2002) Design and manufacturing of personalized implants and standardized templates for cranioplasty applications. IEEE Int Conf Ind Technol 2:1025–1030
13. Patient contributed image repository. Available at: http://www.pcir.org/researchers/downloads_available.html. Accessed 30 Jan 2019
14. Clark K et al (2013) The cancer imaging archive (TCIA): maintaining and operating a public information repository. J Digital Imag 26:1045–1057
15. Brejarano T, Couto MDO, Mihaylov IB (2019) Head-and-neck squamous cell carcinoma patients with CT taken during pretreatment, mid-treatment, and post-treatment (HNSCC-3DCT-RT). The Cancer Imaging Archive. Available at: https://doi.org/10.7937/K9/TCIA.2018.13upr2xf. Accessed 02 Apr 2019
16. Scikit-image documentation. Function measure.find_contours. Available at: http://scikit-image.org/docs/0.8.0/api/skimage.measure.find_contours.html#find-contours. Accessed 27 Apr 2018
17. Lorensen W, Cline EC (1987) Marching cubes: a high resolution 3D surface construction algorithm. In: SIGGRAPH 87 proceedings, vol 21, pp 163–170
18. NumPy v1.15 manual. Function numpy.polyfit. Available at: https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.polyfit.html. Accessed 01 Apr 2019
19. NumPy v1.15 manual. Function numpy.polyval. Available at: https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.polyval.html#numpy.polyval. Accessed 01 Apr 2019
20. Barnston A (1992) Correspondence among the correlation, RMSE, and Heidke verification measures; refinement of the Heidke score. Notes and correspondence, Climate Analysis Center
21. Gonçalves GA, Mitishita EA (2016) The use of Hausdorff distance as similarity measure in automatic systems for cartographic update. Bol Ciênc Geod 22:719–735

Principal Component Analysis in Digital Image Processing for Automated Glaucoma Diagnosis

C. N. Neves, D. S. da Encarnação, Y. C. Souza, A. O. R. da Silva, F. B. S. Oliveira and P. E. Ambrósio

Abstract

Digital image processing (DIP) techniques usually generate attribute vectors that tend to contain a large number of elements. When working with automated classification techniques, some commonly verified attributes may have low relevance for solving a specific problem or may even worsen the classification, unnecessarily increasing the dimensionality of the problem. A limited set of relevant attributes simplifies the representation of the image and, consequently, allows a better interpretation of the data. In this perspective, this research applied Principal Component Analysis (PCA), a dimensionality reduction technique widely disseminated in the literature, to the attribute vector generated by the DIP, with the aim of increasing the classification accuracy of this vector. As a case study, retinal image classification for the diagnosis of glaucoma was used. The data set used was the second version of RIM-ONE, provided by the Medical Image Analysis Group of the University of La Laguna, Spain. The results showed that with the application of PCA a better classification of the images occurs: with only 7 components, a better classification was obtained than with the original data set, which has 36 attributes. These results validate the possibility of applying PCA to optimize the automated glaucoma diagnostic process.

Keywords: Principal component analysis · Dimensionality reduction · Glaucoma · Digital image processing

C. N. Neves (B) · D. S. da Encarnação · Y. C. Souza · A. O. R. da Silva · F. B. S. Oliveira · P. E. Ambrósio Department of Exact and Technological Sciences, State University of Santa Cruz, Rod. Jorge Amado, Km 16—Salobrinho, Ilhéus, Brazil e-mail: [email protected]

1 Introduction

Glaucoma is an irreversible neuropathy characterized by a group of conditions that cause progressive damage to the optic nerve and loss of the visual field, most often occasioned by increased pressure in the intraocular region. The main risk factors are age, high intraocular pressure, family history and inclusion in a susceptible ethnic group [1]. This neuropathy is the second leading cause of blindness in the world [2] and represents a major challenge for many areas of science. One of the main techniques for the diagnosis of glaucoma is retinography, which consists in the capture of fundus images [3]. It is possible to optimize this process by associating the images obtained with retinography with digital image processing (DIP) techniques, facilitating pattern recognition and the characterization of visual occurrences related to the disease. An important step of DIP is feature extraction, which results in the generation of attribute vectors that tend to accommodate a large number of elements, increasing the dimensionality and complexity of the problem. Due to this elevated number of characteristics, in certain image analysis and processing problems dimensionality reduction is important to obtain an ideal set of characteristics needed to perform a certain operation on an image. In this process, the number of features used to represent a given data set is reduced. There are many dimensionality reduction methods for various tasks, such as data mining and pattern recognition, which demonstrates their importance for problem solving. In [4] there is a comparison of some of these methods, such as Principal Component Analysis (PCA), Kernel PCA (KPCA), Isomap, Maximum Variance Unfolding (MVU), Laplacian Eigenmaps and others. According to the authors, PCA is one of the most used methods. From this perspective, the present paper aims to validate the use of dimensionality reduction in glaucoma diagnosis by evaluating the classification of fundus images that underwent Principal Component Analysis.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_224

2 Texture Feature Extraction

The analysis of the texture pattern of an image provides a set of statistical information that aids its classification, such as spatial distribution, brightness variation, roughness, regularity, etc. [5]. In this perspective, several works develop methodologies based on the extraction of texture attributes. An example is [6], which creates a computational methodology for the diagnosis of glaucoma from the extraction of texture attributes based on taxonomic indices and machine learning. Another example is [7], whose main objective was to present a computational method using texture descriptors based on variants of local binary patterns to detect glaucoma automatically in retinographies. The extraction of texture attributes from retinography images for computational approximation of the medical diagnosis of glaucoma is thus widely used in the literature, which justifies the use of this type of extraction in the present work. In [8] fourteen features were postulated to describe the spatial behavior between the pixels of an image. To obtain this information, the co-occurrence matrix is used, transforming the image into a two-dimensional structure of gray-level intensity based on the spatial relationship between the pixels. This matrix is mainly used to apply statistical calculations for feature extraction and is an essential step for extracting the fourteen descriptors of the spatial behavior between the pixels of a texture image postulated by Haralick in [8]. The application of these techniques generates vectors with the attributes obtained from the images, and these vectors tend to contain a large number of elements. When working with automated classification techniques, some commonly verified attributes may have low relevance for solving a specific problem or may even worsen the classification.
With a smaller number of elements, which can be obtained by reducing the dimensionality of the generated vector, it is possible to have a better interpretation of the data, and consequently a better classification is obtained. In this work, a widely disseminated dimensionality reduction method, Principal Component Analysis (PCA), was used. This method will be covered in the next section.

3 Dimensionality Reduction

According to [9], some tasks such as mining, interpreting or modeling a data set often do not demand a significant number of attributes, and an excess can even harm the results. As stated by [10], it is necessary to consider a smaller number of variables in a model whenever possible, so that it can be more easily interpreted. Dimensionality reduction is a technique that consists in transforming high-dimensional data into structures with fewer attributes, which must maintain most of the original data space topology to keep its relevant properties [11,12].

3.1 Principal Component Analysis

According to [13], several methods have been developed to reduce dimensionality, but principal component analysis (PCA) is one of the oldest and most widely used. The authors also claim that the expanded use of large data sets in the area of image analysis has brought methodological advances in data analysis which often find their roots in PCA. This method can be seen as the projection of a point swarm in a high-dimensional space down onto a lower-dimensional subspace [14]. Its objective is to reduce the dimensionality of a data set. According to [4], PCA constructs a low-dimensional representation of the data that describes as much of the variance in the data as possible, performing dimensionality reduction by embedding the data into a linear subspace of lower dimensionality. In this method, the data set variance is maximally described to represent the different aspects of its topology with less data. It identifies new variables with maximum data variance, called principal components, which are linear combinations of the original variables, so that the data set can be represented by a smaller number of attributes [15]. A PCA may be described from a data set with p variables for n individuals as in Table 1, which is an adaptation from [16]. There are up to p principal components Z_p for p variables. By definition [17], the first principal component is the linear combination of X_1, ..., X_p, that is:

Z_1 = a_11 X_1 + a_12 X_2 + ··· + a_1p X_p    (1)

The second principal component is

Z_2 = a_21 X_1 + a_22 X_2 + ··· + a_2p X_p    (2)

Table 1 Format of data for a principal component analysis from n observations of variables X_1 to X_p

i | X_1  | X_2  | ··· | X_p
1 | a_11 | a_12 | ··· | a_1p
2 | a_21 | a_22 | ··· | a_2p
⋮ |  ⋮   |  ⋮   |  ⋱  |  ⋮
n | a_n1 | a_n2 | ··· | a_np


and so forth. The principal components Z_i are obtained from an analysis that consists in finding the eigenvalues of a sample covariance matrix [18], which is symmetric and has the form

    ⎡ c_11  c_12  ···  c_1p ⎤
C = ⎢ c_21  c_22  ···  c_2p ⎥    (3)
    ⎢  ⋮     ⋮    ⋱     ⋮  ⎥
    ⎣ c_n1  c_n2  ···  c_np ⎦

where n = p, the elements of the main diagonal (c_jj) are the variances of X_i and those off the diagonal (c_ij) represent the covariance between the variables X_i and X_j. There are p eigenvalues λ representing the variances of the principal components in the sample covariance matrix, each either positive or zero [19]. Assuming that the eigenvalues are ordered as λ_1 > λ_2 > ··· > λ_p > 0 [16], λ_1 corresponds to the first principal component (Eq. 1), and λ_i to the i-th principal component, that is,

Z_i = a_i1 X_1 + a_i2 X_2 + ··· + a_ip X_p    (4)

The constants a_ip in the last equation are the elements of the eigenvector corresponding to λ_i, and they are scaled in such a manner that

a_i1² + a_i2² + ··· + a_ip² = 1    (5)

The eigenvectors corresponding to the largest eigenvalues contain the most useful information related to the specific problem, while the others mainly comprise noise [14]. Therefore, these eigenvectors are usually written in descending order of their associated eigenvalues. From this information, it is understood that the eigenvectors with the largest associated eigenvalues represent the most significant relationship between the dimensions. Since (c_ii) is the variance of X_i and λ_i is the variance of Z_i, the sum of the variances of the principal components is equal to the sum of the variances of the original variables and, in this sense, the principal components contain all the variation of the original data [16]. To summarize the process, each principal component is a linear combination with coefficients equal to the eigenvectors of the covariance matrix of the data, ordered decreasingly by the associated eigenvalues. A lower-dimensional subspace is formed with the main components and the matrix with the original data is projected onto it, and thus the coordinates of the data in this new plane can be obtained.
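The summarized process (center, covariance, eigendecomposition, projection) can be sketched in a few lines of NumPy. This is an illustrative implementation under the definitions above, not the authors' code; the function name is ours.

```python
import numpy as np

def pca(X, n_components):
    """Minimal PCA following the steps above: center the data, form the
    sample covariance matrix, take its eigenvectors ordered by
    decreasing eigenvalue, and project the data onto the leading
    `n_components` of them. X is an (n, p) data matrix (n observations,
    p variables, e.g. p = 36 texture attributes per image).
    Returns the projected scores and all eigenvalues in descending order.
    """
    Xc = X - X.mean(axis=0)               # center each variable
    C = np.cov(Xc, rowvar=False)          # p x p sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)  # eigh: ascending eigenvalues
    order = np.argsort(eigvals)[::-1]     # reorder descending
    components = eigvecs[:, order[:n_components]]
    return Xc @ components, eigvals[order]
```

Keeping 7 of 36 components, as in this paper's case study, amounts to calling `pca(X, 7)` on the attribute matrix.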

3.2 PCA of a Digital Image

The PCA seeks to maximize variance in the attributes that describe the objects of an image, so they must be uncorrelated [20]. This is done so that the data set obtained from the dimensionality reduction represents the original data set as closely as possible. The PCA algorithm is summarized in Fig. 1, with the steps described in the previous subsection. The method was applied to retinography images, so the attributes representing the analysed images were the input parameter. The images used in this study and their attributes are presented and explained in the next section.

Fig. 1 PCA pipeline

4 Materials and Methods

4.1 Imagery Database

This research used an open retinal image database for optic nerve evaluation, called RIM-ONE,1 which is divided into three versions. The second version was used in this article because it contains a balanced number of retinograms of normal and glaucomatous eyes, totaling 455 images (255 healthy and 200 pathological). Two examples of images from this database are shown in Fig. 2: the first is the retinogram of a healthy eye (a), and the other of a glaucomatous eye (b).

Fig. 2 Examples from the RIM-ONE database. a Healthy eye. b Glaucomatous eye

1 Available at http://rimone.isaatc.ull.es.

C. N. Neves et al.

4.2 Texture Features

In this work, texture patterns were extracted to obtain statistical information about the images presented in the previous subsection, which aids their classification. We used the co-occurrence matrix to obtain such information, turning the image into a two-dimensional structure of gray-level intensities based on the spatial relationship between the pixels. The co-occurrence matrix is mainly used to apply statistical calculations for feature extraction. In [8], fourteen features were postulated to describe the spatial behavior between the pixels of an image. However, in this research only nine texture features were applied: angular second moment, contrast, sum average, variance, correlation, sum variance, inverse difference moment, entropy, and measure of correlation. The 9 texture features were computed at 4 angles of the co-occurrence matrix, resulting in 36 attributes for each image.
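As a minimal sketch of this step (not the authors' implementation), a co-occurrence matrix and a few of the Haralick [8] descriptors can be computed in plain numpy. The pixel offsets chosen for the four angles and the synthetic test image are illustrative assumptions:

```python
import numpy as np

def glcm(image, dx, dy, levels=8):
    """Gray-level co-occurrence matrix for offset (dx, dy), normalized to probabilities."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[image[y, x], image[y2, x2]] += 1
    return m / m.sum()

def haralick_features(p):
    """Four of the Haralick descriptors, computed from a normalized GLCM p."""
    i, j = np.indices(p.shape)
    asm = (p ** 2).sum()                      # angular second moment
    contrast = ((i - j) ** 2 * p).sum()
    idm = (p / (1 + (i - j) ** 2)).sum()      # inverse difference moment
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return asm, contrast, idm, entropy

# one common choice of offsets for the 4 angles (0, 45, 90, 135 degrees)
offsets = [(1, 0), (1, 1), (0, 1), (-1, 1)]
img = np.random.default_rng(0).integers(0, 8, size=(64, 64))  # synthetic 8-level image
vector = [f for dx, dy in offsets for f in haralick_features(glcm(img, dx, dy))]
print(len(vector))  # 4 angles x 4 features
```

With all nine features of the paper, the same loop would yield the 36-element attribute vector described below.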

4.3 Dimensionality Reduction Stage

In this research, dimensionality reduction was applied with the aim of achieving better classification of fundus images. This was accomplished by obtaining a more compact representation of the data space under study, focusing only on the attributes that are really important for classification. With the texture features mentioned in Sect. 4.2, it was possible to create a vector of attributes representing each of the images mentioned in Sect. 4.1. The number of elements of this vector is the product of the number of angles at which the co-occurrence matrix was computed and the number of implemented texture features; in this case, 9 texture features at 4 angles, resulting in 36 attributes for each image. The vector resulting from the DIP stage is the input to the PCA algorithm, which identifies the principal components and creates a smaller space onto which the original data are projected. The PCA was implemented in the Python programming language.

4.4 Classification

Data pattern processing techniques are applied to identify relevant information in large data sets. Several tools implement these techniques; this research used WEKA, a free software package that contains a collection of machine learning algorithms and classifiers, in the process of automatic glaucoma classification and detection [21]. All mining techniques go through a training process and are later used in the supervised learning phase, which uses data labeled with the specified class. In this work, the algorithm used from WEKA was Random Forest (RF), which combines the results of several decision trees. Basically, to classify an image, this algorithm uses the decision tree results to compute the mode, that is, to return the most frequent class. The methodology employed was 10-fold cross-validation, and the other RF settings were kept at the defaults provided by WEKA. This classifier was applied to the original attribute space and to the spaces obtained after dimensionality reduction, to measure the precision achieved with each of them and compare the results.
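The paper used WEKA; an analogous setup can be sketched with scikit-learn's Random Forest and 10-fold cross-validation. This is purely illustrative: hyperparameters are left at defaults (mirroring the WEKA standard configuration) and the attribute matrix and labels are synthetic stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(455, 36))            # stand-in for the 455 x 36 attribute matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic healthy/glaucomatous labels

# 10-fold cross-validation with default hyperparameters
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=10)
print(scores.mean())
```

The same `cross_val_score` call can be repeated on each PCA-reduced space to reproduce the comparison of the next section.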

5 Results

The application of the descriptors cited in the previous section generated a vector with 36 attributes for each image, i.e., a matrix of 455 rows and 36 columns. PCA was used to reduce the original data space until only 2 attributes remained. As previously discussed, the attributes that make up each reduced space are linear combinations of the original attributes. Thus, each space with p attributes shown in Table 2 is formed by the first p principal components (PCs) obtained with the PCA, except for the first line of the table, which is the original space with 36 attributes. To compare the performance of the data spaces in the classification, we observed the percentages of precision, sensitivity, and specificity of each classification, taking as reference the values of the same criteria for the original attributes. The values suppressed in Table 2 refer to reductions that did not show significant gains in the classification. The results in this table show that while the classification with the original attributes obtained 79.7% precision, with 31 attributes a precision of 81.8% was achieved. The sensitivity for the original attributes was 80.3%, while reducing to 7 attributes yielded a sensitivity of 81.6%. The best specificity was obtained with PCA by reducing the original space to 25 attributes, corresponding to 83.5%; the original space obtained 79.0% for this measure. Since sensitivity represents the probability of a positive diagnosis in sick people (true positives), specificity the probability of a negative diagnosis in non-sick people, and precision the probability of a correct diagnosis, the levels of precision and sensitivity are more significant in this analysis than those of specificity.
In this perspective, the best result set was found in the reduction to 31 attributes, equivalent to a precision of 81.8%, sensitivity of 80.9%, and specificity of 83.0%, compared to the original values of 79.7%, 80.3%, and 79.0% for the same measures, respectively. However, it is important to emphasize that some smaller data spaces also had better classification results than the

original attribute space. For example, the space containing only 7 attributes had a precision of 80.1% and a sensitivity of 81.6%, almost the same results as the space containing 31 attributes.

224 Principal Component Analysis in Digital Image Processing …

Table 2 Classification results (intermediate reductions without significant gains omitted)

Attributes   Precision (%)   Sensitivity (%)   Specificity (%)
36           79.7            80.3              79.0
⋮
31—PCA       81.8            80.9              83.0
⋮
25—PCA       81.5            79.9              83.5
⋮
7—PCA        80.1            81.6              78.2
⋮
2—PCA        57.8            62.2              52.1

6 Conclusions

One of the main techniques for the diagnosis of glaucoma is retinography, a process that obtains fundus images. It can be optimized by associating these images with digital image processing techniques. These procedures generate texture feature vectors that tend to have a large number of elements, increasing the dimensionality and complexity of the problem. In this perspective, it was possible to describe this data topology with a simpler data set through a dimensionality reduction method, improving the classification of retinography images.

This paper presented an approach for optimizing glaucoma diagnosis by applying a dimensionality reduction technique, Principal Component Analysis (PCA), to fundus images. By applying dimensionality reduction, the predictive capability of the classifier was improved. This improvement happens because such techniques seek to remove redundant or irrelevant attributes from the database, allowing the generation of a less error-prone classifier. The objective of the PCA application is to obtain results close to or better than those of the original data set while reducing the volume of data in the problem, considering the influence of each attribute on the classification. PCA is a consolidated dimensionality reduction method, used as a basis for several other methods and still widely used today, because its procedure is simple to understand and has a solid theoretical basis.

The results showed an improvement in classification after the application of PCA. For example, with only 7 principal components obtained through PCA, the classification achieved a precision of 80.1%, sensitivity of 81.6%, and specificity of 78.2%, almost the same values as those achieved by the original data set, which were 79.7%, 80.3%, and 79.0% for the same measures, respectively. In fact, the 7-PC space had better precision and sensitivity than the original data with 36 attributes. These results show that this method can be applied to improve the automated diagnosis of glaucoma. In addition to showing the operation and application of PCA and validating its use in glaucoma diagnosis, this work confirms the importance of dimensionality reduction in the field of image classification.

Acknowledgements The authors acknowledge the financial support from the Coordination of Superior Level Staff Improvement (CAPES).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Ávila M, Alves MR, Nish M (2015) As condições de saúde ocular no Brasil. Conselho Brasileiro de Oftalmologia, São Paulo
2. Barreto IFG (2019) Prevalência e causas de cegueira e visão subnormal em pacientes atendidos em um hospital público de referência de Fortaleza-CE. Monography (specialization in ophthalmology), Public Health School of Ceará, Fortaleza, Brazil
3. Lima ACM (2019) Aprendizagem Profunda Aplicada ao Diagnóstico do Glaucoma. Master's thesis, UFMA (Universidade Federal do Maranhão), São Luís, Brazil
4. Van Der Maaten L, Postma E, Van den Herik J (2009) Dimensionality reduction: a comparative review. J Mach Learn Res 10:13
5. Pedrini H, Schwartz WR (2008) Análise de imagens digitais: princípios, algoritmos e aplicações. Thomson Learning
6. Azevedo LM et al (2020) Diagnóstico de Glaucoma em Retinografias Usando Índices Taxonômicos e Aprendizado de Máquina. Rev Sistem Comput RSC 10
7. Silva MG, Pessoa ACP, Almeida JDS, Junior GB, Paiva AC (2018) Diagnóstico do glaucoma em imagens de retinografia usando variantes de padrões locais binários. In: Anais Principais do XVIII Simpósio Brasileiro de Computação Aplicada à Saúde. SBC
8. Haralick RM, Shanmugam K, Dinstein I (1973) Textural features for image classification. IEEE Trans Syst Man Cybern SMC-3:610–621
9. Borges HB, Nievola JC (2006) Redução de Dimensionalidade em Bases de Dados de Expressão Gênica. Master's thesis, PPGIa-PUCPR, Curitiba, Brazil
10. Larose DT (2006) Data mining methods and models, vol 2. Wiley Online Library
11. Medeiros CJF, Costa JAF (2008) Uma Comparação Empírica de Métodos de Redução de Dimensionalidade Aplicados à Visualização de Dados. Rev Soc Bras Redes Neurais 6:81–110
12. Camargo SS (2010) Um modelo neural de aprimoramento progressivo para redução de dimensionalidade. Ph.D. thesis, UFRGS (Universidade Federal do Rio Grande do Sul), Porto Alegre, Brazil
13. Jolliffe IT, Cadima J (2016) Principal component analysis: a review and recent developments. Philos Trans R Soc A 374
14. Wold S, Esbensen K, Geladi P (1987) Principal component analysis. Chemom Intell Lab Syst 2:37–52
15. Ringnér M (2008) What is principal component analysis? Nat Biotechnol 26:303–304
16. Santo RE (2012) Utilização da Análise de Componentes Principais na compressão de imagens digitais. Einstein 10:135–139
17. Haykin S (1998) Neural networks—a comprehensive foundation, 2nd edn. Prentice Hall
18. Shlens J (2014) A tutorial on principal component analysis. arXiv preprint arXiv:1404.1100
19. Smith LI (2002) A tutorial on principal components analysis. Statistics 51:52
20. Augusto KS (2012) Identificação automática do grau de maturação de pelotas de minério de ferro. Rio de Janeiro
21. Claro ML, Veras RMS, Santana AM (2019) Metodologia para Identificação de Glaucoma em Imagens de Retina. In: Anais Estendidos do XIX Simpósio Brasileiro de Computação Aplicada à Saúde. SBC, pp 103–108

Pupillometric System for Cognitive Load Estimation in Noisy-Speech Intelligibility Psychoacoustic Experiments: Preliminary Results

A. L. Furlani, M. H. Costa, and M. C. Tavares

Abstract

Cognitive load (CL) is the mental effort required to perform a task. It may be estimated in different ways, such as by electroencephalography or by analysis of pupil dilation (pupillometry). Due to the recent applications of CL to hearing aid performance assessment, this work presents a pupillometric system for CL estimation in speech intelligibility experiments. It consists of a one-eye image acquisition sensor used in clinical video-oculography and signal processing software specially developed for recording the pupil's response and estimating CL. To overcome the adversities found in very low intelligibility conditions, a new CL index was proposed. Psychoacoustic listening experiments with 7 normal-hearing volunteers and noisy speech compared intelligibility scores with pupil features for a wide range of signal-to-noise ratios (SNR). Preliminary results show a consistent increase of the proposed CL index as the SNR decreases. This index may be a useful tool for CL estimation in speech intelligibility investigations under very difficult listening situations. However, the observed findings are limited by the small testing group.

Keywords

Pupillometry · Speech intelligibility · Cognitive load · Listening effort

A. L. Furlani (✉) · M. H. Costa
Department of Electrical and Electronic Engineering, Federal University of Santa Catarina, Florianópolis, Brazil

M. C. Tavares
Contronic Sistemas Automáticos Ltd., Pelotas, Brazil

1 Introduction

Morphological analysis of the pupil is of great interest in many clinical and research areas. Dynamic and static pupil characteristics have been shown to be correlated with a series of physiological conditions and mental states in humans, such as multiple sclerosis, migraine, diabetes, alcoholism, Alzheimer's, Parkinson's, autism, and schizophrenia [1–3].

Pupillometry consists of measuring pupil characteristics, such as diameter and reactivity. It is a noninvasive and low-cost method for complementary diagnosis and assessment of neurological activity [3]. This is possible because the pupil is connected to the central nervous system by autonomic innervation; as a result, its diameter reflects the balance between sympathetic and parasympathetic activity.

Fatigue has been an issue of interest in many areas. It may be predicted using the cognitive load (CL) concept [1, 4], which is the effort required to perform a certain mental task. The use of pupillometry to measure listening effort has grown in the past decade, mainly due to specific advantages over electroencephalography (EEG): it is less invasive, has lower cost, and does not interfere with devices such as hearing aids and cochlear implants. In [5], the correlation between pupil dilation and speech intelligibility was investigated, indicating that listening effort increases in lower signal-to-noise ratio (SNR) conditions. Studies have shown that hearing-impaired people expend significantly more listening effort when processing noisy speech than normal-hearing listeners [6]. There is also evidence that speech enhancement and noise reduction techniques reduce this effort [6]. These findings permit differentiation between speech-enhancement/noise-reduction techniques that present similar intelligibility scores but lead to different fatigue levels [7, 8]. This is important for increasing the daily use of hearing aids [9, 10].
© Springer Nature Switzerland AG 2022. T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_225

As a result, the CL is of special interest in the speech enhancement area to

assess the subjective impact of residual noise and speech distortion, since listening to noisy or degraded speech may demand additional cognitive resources. Following the initial results presented in [5], many other works studied the CL associated with listening effort. In [11], the CL associated with different masker types and ages was investigated. In [7, 12], the impact of noise reduction on listening effort was measured, while in [13, 14] the influence of noise type, SNR, and hearing status was analyzed. In [15], it was shown that the Wiener filter noise reduction method improves speech understanding, but at the cost of increased CL.

In light of these issues, CL estimation seems to be of significant importance for investigating the appropriateness and design of noise reduction methods (as well as for adjusting their optimal parameters during the fitting procedure) in hearing aid applications. However, despite their commercial availability and low cost, most pupillometric systems are not specially designed for hearing aid performance assessment and do not yet incorporate CL indexes.

In this work, we present a pupillometric system for estimating the CL in listening tasks. To compensate for the concentration loss under very difficult hearing situations, we propose a new CL index based on three normalized pupil features. Preliminary intelligibility psychoacoustic experiments with pupil measurements from volunteers were performed to illustrate the performance of the developed system. The contributions of this work are: (1) the specification of a pupillometric system designed for listening experiments; (2) a new CL index for difficult hearing situations; (3) preliminary listening experiments with volunteers to illustrate the performance of the proposed index.

The remainder of this work is structured as follows: Sect. 2 presents the characteristics of the developed pupillometric system, the proposed CL index, and an analysis of its robustness, as well as the psychoacoustic experiment protocol. Section 3 presents preliminary results and discussion. Finally, conclusions are presented in Sect. 4.

2 Material and Methods

This section presents the proposed pupillometric system and the description of the psychoacoustic experiments performed for its validation.

2.1 Hardware

The image acquisition system comprises a commercially available video-oculography headset, originally designed for videonystagmography applications (Contronic Ltd., ANVISA 80384070008), shown in Fig. 1. It has two embedded infra-red cameras (one for each eye) with a resolution of 744 × 480 pixels and an acquisition frequency of up to 76 frames per second (fps). Considering the camera resolution, the distance between camera and eye, and the focal length of the lens, the spatial resolution is 0.036 mm/pixel. An internal light-emitting diode (LED) is included for fixing the gaze in closed-lid conditions. The developed system employs recorded videos only from the right eye, with the lid open, at a sampling rate of 30 fps.

Fig. 1 Video-oculography headset

2.2 Software

The software was developed in MATLAB. It consists of three parts: (a) acquisition, comprising noisy-speech presentation, video recording, and calculation of intelligibility scores; (b) image processing, comprising pupil detection, segmentation, and diameter estimation; and (c) CL estimation. Steps (b) and (c) were performed offline.

Acquisition: The subject under evaluation wears the headset in a silent environment with constant illumination. The subject was instructed to keep the eyes focused on a black cross painted on a white wall at a distance of 1 m. In each experiment, an audio file containing a pre-selected noisy-speech sentence was presented, and the pupil video was recorded from 3 s before the speech onset to 3 s after the end of the utterance.

Image Processing: The image processing is performed frame by frame on the acquired video, according to the recommended practices for pupillometry presented in [16]. First, luminosity is adjusted and images are converted to black and white by a fixed threshold. Segmentation of the pupil is performed by automatically searching for the square window with the lowest averaged intensity, considering an area of 120% of the maximum expected pupil area (289 × 289 pixels). After the segmentation, morphological


operations were applied (opening and closing) to fill holes. Next, large and small artifacts were removed, and the Sobel filter was applied to enhance the pupil edge. The resulting points were fitted to a circle using the Repeated Least Trimmed Squares method, which is robust to outliers [17]. The diameter of this circle is calculated for each video frame.

Cognitive Load Estimation: For each experiment (trial), the signal defined by the evolution of the pupil diameter over time (frames) had subtracted from it the average value calculated over the 1 s epoch before the noisy-speech onset. Samples more than three scaled median absolute deviations away from the median were considered outliers and removed from the data. Three samples at each side of missing-data intervals (due to blinking or failure of the detection method) were also removed. Signals with more than 20% of missing data (low quality) were discarded. Each valid trial was linearly interpolated to fill the missing data and then filtered by a 10-point moving-average filter. The pupil dilation curve (as a function of time) was obtained from the synchronous average of a set of trials obtained under the same SNR condition and aligned with respect to the speech onset. This is a standard procedure in pupillometric analysis for minimizing task-irrelevant noise in the estimated pupil diameter [16].

Three features were estimated from the averaged pupil dilation curve (at each SNR): peak latency (pL), the time between the stimulus onset and the maximum peak in the dilation curve; peak amplitude (pA), the maximum peak after the stimulus onset; and mean dilation (mD), the averaged value of the pupil dilation curve from the stimulus onset to the end of the data series. These features have been individually applied to estimate cognitive load in previous works [11, 18, 19]. In this work, we propose a new CL index, defined as

\mathrm{CL} = \widetilde{pL} \cdot \widetilde{pA} \cdot \widetilde{mD}   (1)

in which the tilde denotes normalization, defined by

\tilde{f} = \frac{f - \min\{f\}}{\max\{f\} - \min\{f\}} + 1   (2)

where f represents each feature {pL, pA, mD}. The main motivation for the proposed estimator in Eq. (1) is the observation that, from medium to large SNRs, higher CLs are associated with higher scores for all three normalized features. For small SNRs this behavior seems to be masked by noise.

2.3 Robustness of the CL Index

The proposed CL index was compared to the three individual features {pA, pL, mD} with respect to robustness to noisy estimates. Real estimates of the three features were contaminated by additive artificial white Gaussian noise. A total of 100,000 samples were drawn, with noise power magnitudes ranging from 0 to 100% of the true value of each feature. The mean absolute percentage error (MAPE), with respect to the true value, was calculated for each feature (pA, pL, mD, CL) according to

\mathrm{MAPE}(f) = \frac{1}{N} \sum_{n=1}^{N} \frac{|f_n - \hat{f}_n|}{|f_n|}   (3)

in which \hat{f}_n denotes the noisy estimate and f_n the true value.
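Eq. (3) is straightforward to reproduce. The sketch below uses a hypothetical true feature value contaminated by Gaussian noise with a standard deviation of 10% of the true value, in the spirit of the robustness experiment:

```python
import numpy as np

def mape(true, est):
    """Eq. (3): mean absolute percentage error of the estimates w.r.t. the true values."""
    true = np.asarray(true, dtype=float)
    est = np.asarray(est, dtype=float)
    return np.mean(np.abs(true - est) / np.abs(true))

rng = np.random.default_rng(0)
true = np.full(100_000, 2.0)                        # hypothetical true feature value
est = true + 0.2 * rng.standard_normal(true.size)   # noisy estimates (10% relative std)
print(mape(true, est))
```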

2.4 Psychoacoustic Experiments

The speech signals used in this work were obtained from the dataset referred to in [20] and recorded in [21]. It has 200 sentences in Brazilian Portuguese equally divided into 20 phonetically balanced lists, of which 10 were pronounced by a male and 10 by a female speaker. Speech sentences were contaminated by noise from the International Collegium for Rehabilitative Audiology (ICRA) database, with SNR = {−10, −5, 0, 5, 10} dB [22]. These values were defined according to preliminary experiments to represent the whole intelligibility range (from 10 to 100%), as well as the range of the most common listening situations to which adults with hearing losses are exposed (from 2 to 14 dB) [23]. The granularity was set to 5 dB to avoid a large number of listening experiments, which could cause volunteer fatigue.

Seven self-declared normal-hearing volunteers, aged from 20 to 39 years, participated in the experiment. Three were male and four were female. The non-parametric Kruskal–Wallis test (employed due to violation of ANOVA's assumptions) indicated no significant statistical differences between volunteers' results for all assessed SNRs. From these results, we dismissed audiometric tests for confirming hearing normality. Informed consent was obtained according to the project number 3.723.737, approved by the Research Ethics Committee of Federal University of Santa Catarina.

Two lists (1 male and 1 female) were randomly selected for each SNR condition, resulting in a total of 100 sentences. Before each experiment, a training session was performed with a different set of sentences for adjusting the volume and for habituation. On average, the experiments were about 25 min long. Pauses were allowed at any time, and whenever evidence of fatigue or loss of focus was detected. Each audio comprises a 3 s noise-only epoch before and after the noisy speech.
At the end of each trial, a beep (1 s of a 1 kHz tone) was presented to request the volunteer to repeat the listened sentence. The correct words were


annotated and the average intelligibility score was updated. At the same time, the pupil video was acquired. This protocol is widely used in pupillometry for estimating listening effort [11, 14]. Experiments were carried out in a quiet room, in which the participants were comfortably seated, using in-ear headphones.
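The SNR-controlled mixing used to prepare the stimuli can be sketched as follows; a pure tone and white noise stand in for the recorded sentences and the ICRA noise:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals `snr_db` dB, then mix."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

fs = 16000
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 200 * np.arange(fs) / fs)  # stand-in for a sentence
noise = rng.standard_normal(fs)                         # stand-in for ICRA noise
noisy = {snr: mix_at_snr(speech, noise, snr) for snr in (-10, -5, 0, 5, 10)}
```

Because the gain is computed from the measured powers, the achieved SNR matches the requested value exactly for any pair of signals.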

3 Results and Discussion

Table 1 shows MAPE results for both the pA and CL indices, followed by the error reduction percentage, given by 100 × [1 − MAPE(CL)/MAPE(pA)]. Note that the proposed CL index, based on normalization and combination of the three referred features, reduces the error considerably as compared to the use of a single feature. The remaining features result in larger errors.

Table 1 MAPE for the pA and CL indices

Noise power (%)   MAPE (pA)   MAPE (CL)   MAPE reduction (%)
10                0.08        0.06        25.8
20                0.16        0.12        22.7
30                0.24        0.17        29.5
40                0.32        0.19        39.2
50                0.40        0.21        46.6
60                0.48        0.22        53.3
70                0.56        0.23        58.5
80                0.64        0.24        62.9
90                0.72        0.24        66.5
100               0.80        0.24        69.6

Figure 2 shows the estimated pupil dilation curves, averaged over all volunteers, for SNR = {−10, −5, 0, 5, 10} dB. Note that the maximum pupil dilation is not directly associated with the lowest SNR level, since the highest peak occurs at SNR = −5 dB.

Fig. 2 Averaged pupil dilation curves for a SNR = −10 dB (red), b SNR = −5 dB (blue), c SNR = 0 dB (yellow), d SNR = 5 dB (purple), and e SNR = 10 dB (green). The dashed green vertical line f at 0 s indicates the stimulus onset. The dashed red vertical line g near 3 s represents the averaged stimulus offset, since the speech signals have different durations

Figure 3 shows boxplots for the intelligibility scores obtained in the psychoacoustic experiment, as well as the pupillometric measures (peak amplitude, peak latency, and mean dilation). Figure 3e shows the proposed CL index, calculated by Eq. (1). According to Fig. 3e, the maximum listening effort occurs at approximately 50% speech intelligibility (defined as the percentage of correctly identified words). The proposed index circumvents the non-monotonic (inverse U-shaped) pattern seen in Fig. 3b–d, which was previously described in [12–14]. The non-monotonic behavior at very small SNR may be related to estimation errors due to the low SNR, or associated with very difficult listening conditions, in which listeners have a decrease in concentration.

Fig. 3 Boxplots of: a intelligibility in terms of percentage of correctly identified words; b normalized peak amplitude; c normalized peak latency; d normalized mean dilation; and e cognitive load index

The repeated measures ANOVA (rmANOVA) indicates significant statistical differences for the CL index obtained under different SNRs (F(4,24) = 5.3319, p = 0.0032). Mauchly's test of sphericity indicates that the assumption of sphericity was not violated (p = 0.0887). Applying the Shapiro–Wilk test corroborates the validity of the normality assumption (W = 0.9668, p = 0.3615). Normality was also verified by visual Q–Q plot inspection. Multiple pairwise t-tests, with Benjamini–Hochberg correction, were performed to verify the statistical differences between CL at different SNRs. Results are shown in Table 2, in which significantly different conditions (p ≤ 0.05) are marked with an asterisk. As noted, there are significant statistical differences only for SNR = −5 dB with respect to SNR = {0, 5, 10} dB. This result shows that, although the proposed CL index presents a monotonic behavior with SNR, there is still large variability. Additional normalization procedures may be required to minimize this variability. However, additional experiments with volunteers must be performed to validate this preliminary result.

Table 2 Pairwise t-tests (* significant, p ≤ 0.05)

SNR (dB)   SNR (dB)   p
−10        −5         0.478
−10        0          0.362
−10        5          0.099
−10        10         0.362
−5         0          0.027 *
−5         5          0.022 *
−5         10         0.042 *
0          5          0.362
0          10         0.632
5          10         0.413
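The Benjamini–Hochberg step-up correction applied to the pairwise t-tests can be implemented in a few lines of numpy. The raw p-values below are hypothetical (Table 2 lists the corrected values):

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up adjustment: returns adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)   # p_(k) * m / k
    # enforce monotonicity from the largest p-value downwards
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adj, 0, 1)
    return out

# hypothetical raw p-values for the ten SNR pairs (before correction)
raw = [0.40, 0.30, 0.08, 0.30, 0.005, 0.004, 0.012, 0.30, 0.60, 0.38]
print(np.round(benjamini_hochberg(raw), 3))
```

Adjusted values are then compared against the 0.05 threshold, as in Table 2.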

4 Conclusion

In this paper, we presented the development of a pupillometric acquisition system for estimating the cognitive load in listening experiments. A new cognitive load index estimator was proposed, which presents a monotonic behavior as the SNR increases. In addition, it was shown that the proposed index is more robust than the individual features in the presence of noise. Despite these valuable characteristics, further improvements are desired to decrease its statistical variability.

Acknowledgements This work was supported by the Brazilian agencies National Council for Scientific and Technological Development (CNPq), Coordination for the Improvement of Higher Education Personnel (CAPES), and FAPESC.

Conflict of Interest The authors declare that they have no conflict of interest.


VGG FACE Fine-Tuning for Classification of Facial Expression Images of Emotion P. F. Jaquetti, Valfredo Pilla Jr, G. B. Borba and H. R. Gamba

Abstract

The human facial expression of emotions is an important form of nonverbal communication. Six facial expressions of emotion are considered distinguishable from a neutral expression by the face image: anger, disgust, fear, happiness, sadness, and surprise. Accurately recognizing and distinguishing these emotions is a complex and still open research question. In this work, a deep convolutional neural network (CNN) is applied to extract facial expression features from images. This CNN originates from a previously trained deep neural network (DNN). A multiclass support vector machine classifier (mSVM) replaced the fully-connected and softmax layers of the DNN. One specific dataset is used to fine-tune the CNN; another dataset is applied to train the mSVM and assess system performance. The system reached an average accuracy of 92.1%. The happiness class achieved the highest classification performance, with an F1 score of 98.9%. The lowest F1 score is 87.2% and belongs to the sadness class.

Keywords

Human non-verbal communication • Automated facial expression classification • Deep learning neural networks • Transfer learning

P. F. Jaquetti, Department of Electronics, Universidade Tecnologica Federal do Parana, Curitiba, Brazil
V. Pilla Jr (B), Department of Electronics, Universidade Tecnologica Federal do Parana, Av. Sete de Setembro, 3165, Curitiba, Brazil
G. B. Borba, Graduate Program in Biomedical Engineering, Universidade Tecnologica Federal do Parana, Curitiba, Brazil
H. R. Gamba, Graduate Program in Electrical and Computer Engineering, Universidade Tecnologica Federal do Parana, Curitiba, Brazil

1 Introduction

Facial expression (FE), a form of nonverbal communication considered universal to humans [1], shows a close relationship with human health [2]. Deficits in the ability to perceive and express emotions are characteristic of various permanent neuropsychological disorders, such as autism and schizophrenia [3,4]. FE is a way of assessing an emotional deficit in its expression and recognition, and the treatment and monitoring of patients with these deficits rely on clinical ratings that require evaluation by an external agent. Moreover, human-care systems, such as elderly-care robots [5] and indirect pain measurement for therapeutic support [6], are also important medical applications based on FE. Thus, automated facial expression recognition (AFER) is a valuable resource in medical applications: it can facilitate diagnoses and treatments of mental illnesses, improve human-machine interaction in human-care automated systems, and support systems for indirect pain assessment. AFER also has potential utility in research related to emotions [8], by locating images in datasets corresponding to specific expressions taken from healthy or mentally ill populations [9]. Six universal expressions of emotion have been identified that can be distinguished from the neutral expression by face image: anger, disgust, fear, happiness, sadness, and surprise [1]. Figure 1 exemplifies the six emotion expressions and the neutral expression, extracted from the Compound Facial Expressions of Emotions (CFEE) dataset [7].
The architecture of an AFER system includes the steps of image pre-processing, feature extraction, and expression classification [2,10]. The feature extraction can be designed and fine-tuned by hand, a complex and time-consuming task. There are plenty of hand-designed extractor algorithms, most of them based on the localization of face landmarks [4] such as the nose, eyes, eyebrows, chin, mouth, and cheeks, and on computations of geometric or appearance measures [11,12].

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_226


Fig. 1 Examples of emotion expressions from the CFEE [7] dataset. The background is automatically excluded (Sect. 3.1) for better visualization: a anger; b disgust; c fear; d happiness; e sadness; f surprise; g neutral

In recent years, deep neural networks (DNNs) [13] have been applied with success as pattern classifiers. The state of the art in image pattern recognition is a DNN whose first layers are convolutional neural networks (CNNs). A CNN is a special neural network capable of learning features from raw image data, acting as a feature extraction algorithm. A DNN with CNNs as first layers is an approach that integrates feature extraction and classification using machine learning techniques. In this work, we present an approach for AFER based on the VGG FACE architecture [14], focusing on the fine-tuning of its parameters.

2 Related Work

The feature extraction method is a critical element of the AFER structure and has been the subject of past and current research. The authors in [15] propose a taxonomy for these methods:

1. Texture feature-based: Gabor filters [16], Local Binary Pattern (LBP) [17], Vertical Time Backward (VTB) [18], Moments Descriptors [19], Weber Local Descriptor (WLD) [20], Weighted Projection based LBP (WPLBP) [21], and Discrete Contourlet Transform (DCT) [22].


2. Edge-based: Line Edge Map (LEM) [23], Graphics-processing-unit-based Active Shape Model (GASM) [24], and Histogram of Oriented Gradients (HOG) [25].
3. Global and local feature-based: Principal Component Analysis (PCA) [26], Independent Component Analysis (ICA) [27], and Stepwise Linear Discriminant Analysis (SWLDA) [28].
4. Geometric feature-based: Local Curvelet Transform (LCT) [29].
5. Patch-based: facial movement features [30].

CNN, the feature extraction algorithm used in this work, is an example of an edge-based method in this taxonomy. The first layers of a CNN detect edges; upper layers concatenate these edges into more complex shapes, with each higher layer increasing the complexity of the shapes it represents. Correlating these shapes with the input image yields feature measurements, so CNNs can recognize shapes, which in this work are taken from FEs. CNNs use supervised machine learning, whose training requires large numbers of images per class and an expensive, complex, and slow computational process. For these reasons, we chose to fine-tune the CNN layers of VGG FACE [14], a high-performance DNN for face recognition. We use two datasets in this work: the Facial Expression Recognition Challenge dataset (FER2013) [31] and the Compound Facial Expressions of Emotions (CFEE) [7]. FER2013 is used for CNN fine-tuning; CFEE is used for training and evaluation of the final architecture. Section 3 presents the details. The use of CFEE allows us to compare our approach with other works, summarized below.

The authors of [7] developed the CFEE dataset for AFER research. They established a computational model of face shape based on the measurement of distances between fiducial points, taking the neutral-class image as a reference; the appearance features were based on Gabor filters and on reflectance and albedo measures. In some evaluated models, shape and appearance features were combined by late fusion, while in others each feature set was evaluated alone. Two classifier structures were evaluated, the k-nearest neighbors (KNN) and the multiclass support vector machine (mSVM), assessed with 10-fold cross-validation. The KNN classifier achieved the best performance (average accuracy): 89.7% with shape features only, 92.0% with appearance features only, and 96.9% with both. The mSVM classifier reached average accuracies of 87.4%, 85.7%, and 96.9%, respectively, with the same feature arrangements.

The CFEE dataset was also used in [32]. The authors applied transfer learning for classification of the six basic facial expressions of emotions and the neutral expression. Their proposed structure contains previously trained CNN layers taken from the VGG16 (VGG FACE) architecture [33] for feature extraction. The classification module architecture of VGG16, composed of fully-connected layers and a softmax output layer, was kept; the parameters of these layers were discarded and retrained using 10-fold cross-validation. The average accuracy reached 82.5%; the highest per-class accuracy belonged to the neutral class (95.6%) and the lowest to the disgust class (69.0%).

3 Methodology

In this work, the CNN layers of a previously trained DNN are fine-tuned specifically for the classification of facial expressions. A linear multiclass support vector machine classifier (mSVM) replaced the classifying layer of the original network. The training and assessment of mSVM use 10-fold cross-validation. Below we describe the elements of this project.

3.1 Datasets and Data Pre-processing

This work uses two distinct datasets. The fine-tuning of the CNN layers from VGG FACE [14] uses the Facial Expression Recognition Challenge (FER2013) dataset [31]. The classification-module training and the performance assessment of the entire system use another dataset, the Compound Facial Expressions of Emotions (CFEE) [7]. FER2013 has more images than CFEE, making it more suitable for CNN fine-tuning. The use of CFEE for training the classifier and for the final evaluation of the system allows a direct performance comparison with [7] and [32]. The original images of the FER2013 dataset are 8-bit grayscale, with dimensions of 48×48 pixels. The following steps describe the pre-processing of these images:

1. Find, cut, and crop: There is only one face in each image (Fig. 2a). The face is localized using the Haar cascade technique [34]. After localization, a grayscale mask with pixels of intensity 0 and the same dimensions as the original image is generated. The coordinates of landmarks over the face contour regions are determined with the support of the dlib library [35], as shown in Fig. 2b. A convex hull interpolates the external landmarks, defining the face contour (Fig. 2c). The pixels of the inner region of the convex hull have their intensity set to 255 and are applied to the mask (Fig. 2d). A logical AND operation between the original image and the mask (Fig. 2e) keeps the region of interest: the face. Finally, the image is cropped (Fig. 2f).


Fig. 2 FER2013 [31] pre-processing. a Original image with a face; b face landmarks; c a convex hull interpolates the external landmarks, defining the face contour; d convex hull mask; e face image after masking; f cropped image; g image with distortion after data augmentation; h image without distortion after reapplying pre-processing step (1)

2. Data augmentation: The original dataset is expanded 8× with random variants drawn independently from the following parameters: rotation angle (0°–45°), horizontal and vertical displacement (0–10% of the image dimension), zoom-in (0–25%), and horizontal flip.
3. Elimination of distortions caused by data augmentation: The augmentation in step (2) can produce images with distortions of the face structure, as exemplified in Fig. 2g. For this reason, step (1) is applied again; an example of the final result is shown in Fig. 2h.
4. Image resizing: The input layer of VGG FACE has dimensions 224×224×3, so the images are enlarged to 224×224 by cubic interpolation and replicated into the other two channels.
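The convex-hull masking of step 1 can be sketched without the detection stages. The sketch below assumes the contour landmarks are already available (in the pipeline above they come from the Haar cascade and dlib) and rasterizes the hull with a pure-NumPy cross-product test instead of an image-library call; all function names are ours, not from the paper:

```python
import numpy as np

def convex_hull_mask(shape, hull_pts):
    """Rasterize a convex polygon into a 0/255 mask.

    hull_pts: hull vertices (x, y) in clockwise order in image
    coordinates (y grows downward). A pixel is inside a convex polygon
    iff it lies on the same side of every directed edge, which the
    per-edge cross product checks.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = np.ones((h, w), dtype=bool)
    n = len(hull_pts)
    for i in range(n):
        x0, y0 = hull_pts[i]
        x1, y1 = hull_pts[(i + 1) % n]
        # cross product of the edge vector with the vector to each pixel
        cross = (x1 - x0) * (ys - y0) - (y1 - y0) * (xs - x0)
        inside &= cross >= 0
    return np.where(inside, 255, 0).astype(np.uint8)

def mask_face(img, hull_pts):
    """Keep only the face region: logical AND of image and hull mask."""
    mask = convex_hull_mask(img.shape, hull_pts)
    return np.where(mask == 255, img, 0)

img = np.full((40, 40), 100, dtype=np.uint8)   # toy "face" image
hull = [(10, 10), (30, 10), (30, 30), (10, 30)]
out = mask_face(img, hull)   # pixels outside the hull become 0
```

In the real pipeline the hull would come from `cv2.convexHull` over the dlib contour landmarks; the point-in-polygon test here only stands in for that rasterization step.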

The first pre-processing step (find, cut, and crop) produced some faceless images, which were detected using another dlib library functionality [35] and discarded. This often occurred in images with significant occlusions or low frontalization (for example, one eye hidden). Table 1 shows the original number of images in each class of FER2013 and the final number of images in each class after pre-processing (steps 1, 2, and 3) and disposal. CFEE contains facial expression images taken from people of the most common ethnicities. There are two sets of images: the first brings together images of the seven basic facial expressions (six emotions plus neutral), and the second a set of compound emotions, that is, the possible combinations of the basic emotions. We used the first set of images

Table 1 Images per class in the original FER2013 dataset and after the complete pre-processing procedure (steps 1, 2 and 3), data augmentation and disposal

Facial expression   Original     After
Anger                  4953     22,780
Disgust                 547      2741
Fear                   5121     20,532
Happiness              8989     48,742
Sadness                6077     21,910
Surprise               4002     19,461
Neutral                6198     33,154
Total                35,887    169,320

in our work. This dataset contains a significantly lower number of images than the FER2013 and therefore was reserved exclusively for the final stage of training the classifier module and system assessment. CFEE images are RGB (24-bit per pixel) with dimensions 1000 × 750 pixels, with 230 images for each of the seven classes (six emotions plus neutral); a total of 1610 images. The pre-processing applied to this dataset is the same as described in the first pre-processing step of FER2013 (Sect. 3.1, step 1), followed by resizing to 224×224 pixels and conversion to grayscale.
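The resizing applied in Sect. 3.1 (cubic interpolation to 224×224, then replication across three channels to fit the VGG FACE input layer) can be sketched with SciPy; the helper name is ours, not from the paper:

```python
import numpy as np
from scipy.ndimage import zoom

def to_vgg_input(gray_img):
    """Enlarge a grayscale face image to 224x224 by cubic interpolation
    (order=3) and replicate it across three channels, matching the
    224x224x3 input layer of VGG FACE."""
    h, w = gray_img.shape
    resized = zoom(gray_img.astype(float), (224.0 / h, 224.0 / w), order=3)
    return np.stack([resized] * 3, axis=-1)

# A 48x48 FER2013-sized image becomes a 224x224x3 tensor.
x = to_vgg_input(np.zeros((48, 48), dtype=np.uint8))
print(x.shape)  # (224, 224, 3)
```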

3.2 Feature Extraction and Classification

The system for classification of facial expressions of emotion has two main parts: a feature extraction module and a classification module. The feature extraction module comes from the convolutional layers of VGG FACE, originally trained for the recognition of human faces. The classification module is a linear multiclass support vector machine (mSVM). The performance assessment uses 10-fold cross-validation. Figure 3 summarizes the transformations applied to VGG FACE for fine-tuning of its feature extraction module. The original architecture is represented in Fig. 3a: the feature extraction module contains successive convolutional and maxpool layers, and the classifier module contains fully-connected layers and a softmax output. The VGG FACE structure is modified for the fine-tuning process in Fig. 3b. The classification module is replaced by a new set of fully-connected layers and a softmax output, while the feature extraction module initially preserves its structure and parameters (transfer learning). This modified structure is trained to fine-tune the parameters of the feature extraction layers, using the already pre-processed FER2013 dataset. Normalization is applied to the dataset with the mean and standard deviation parameters of the original dataset used for training VGG FACE. The last step is presented in Fig. 3c, where the feature extraction module of VGG FACE is already fine-tuned. The classification module is replaced by a linear mSVM. The parameters of the feature extraction module are preserved, and the mSVM is trained and evaluated using the 10-fold cross-validation criterion with CFEE. In this process, the feature extraction module outputs receive an L2 normalization.
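The final stage (Fig. 3c) — L2-normalized CNN features feeding a linear mSVM assessed with 10-fold cross-validation — can be sketched with scikit-learn [36]. Random vectors stand in for the real VGG FACE features here, so the accuracy printed is only chance level; the sketch shows the pipeline, not the paper's results:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC

# Stand-in for the fine-tuned extractor: one feature vector per CFEE
# image (in the real system these are VGG FACE convolutional outputs).
rng = np.random.default_rng(0)
n_per_class, n_classes, n_features = 30, 7, 512
X = rng.normal(size=(n_per_class * n_classes, n_features))
y = np.repeat(np.arange(n_classes), n_per_class)

# L2 normalization of each feature vector, as applied in Fig. 3c.
X = normalize(X, norm="l2")

# Linear multiclass SVM assessed with 10-fold cross-validation.
scores = cross_val_score(LinearSVC(max_iter=10000), X, y, cv=10)
print(f"mean accuracy over 10 folds: {scores.mean():.3f}")
```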

4 Results and Conclusion

The proposed system for classification of facial expressions of emotions consists of two main modules: the feature extraction module and the classification module. The first comes from the VGG FACE convolutional layers fine-tuned using FER2013; the classifier module is a linear mSVM. The performance metrics used in this work are accuracy (Eq. 1), precision (Eq. 2), recall (Eq. 3), and F1 score (Eq. 4), defined as follows for the one-class case:

accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)

precision = TP / (TP + FP)    (2)

recall = TP / (TP + FN)    (3)

F1 = 2 × (precision × recall) / (precision + recall)    (4)

where TP = true positive cases, TN = true negative cases, FP = false positive cases, and FN = false negative cases. These metrics are calculated for each class in each fold of the 10-fold cross-validation; their average values are obtained by averaging each metric over the 10 folds. The 10-fold cross-validation criterion used the CFEE dataset for mSVM training and system performance assessment.

A first setup used all CNN outputs as mSVM inputs. Table 2 shows the average confusion matrix, and Table 3 presents the average precision, recall, and F1 score for each class and their global averages. Another setup used only the 6000 (23.9%) most significant CNN outputs as mSVM inputs; the criterion used for feature selection is the ANOVA F-value score [36]. Tables 4 and 5 present the results for this setup.

Analyzing the results of the mSVM with all the features provided by the CNN, we can see in Table 3 that the class with the lowest precision and recall is the anger class (F1


Fig. 3 System's feature extraction and classification modules. a Original VGG FACE [14]; b the classification module of VGG FACE is replaced for fine-tuning of the feature extraction module; c the system's final configuration, with a fine-tuned feature extraction module and an mSVM for training and evaluation

Table 2 Averaged confusion matrix for mSVM, no feature selection, 10-fold cross-validation (rows: real class; columns: estimated class)

Real class   Anger   Disgust   Fear    Happiness   Sadness   Surprise   Neutral
Anger        0.865   0.057     0.000   0.000       0.065     0.000      0.013
Disgust      0.052   0.887     0.017   0.009       0.030     0.000      0.004
Fear         0.004   0.000     0.896   0.013       0.004     0.070      0.013
Happiness    0.000   0.000     0.009   0.991       0.000     0.000      0.000
Sadness      0.070   0.026     0.017   0.000       0.852     0.000      0.035
Surprise     0.004   0.000     0.035   0.000       0.000     0.957      0.004
Neutral      0.004   0.000     0.000   0.000       0.009     0.000      0.987
Average accuracy of 0.919

score of 0.865). Table 2 indicates that 6.5% of anger images were classified as sadness, 5.7% as disgust, and 1.3% as neutral. Table 3 shows that this experiment achieved the highest F1 score (0.985) for the happiness class. According to Table 2, 99.1% of happiness images were correctly classified, and 0.9% were misclassified as fear. The average accuracy achieved was 91.9%.

For the setup where the mSVM received as inputs the 6000 most significant features, Table 5 shows that the sadness class had the lowest F1 score (0.872), closely followed by anger (0.873). This is not an unexpected result compared with the experiment in Tables 2 and 3, where anger presented the lowest F1, followed by sadness. As shown in Tables 2 and 4, most misclassifications of sadness result in anger, and vice-versa.
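The per-class figures discussed here follow from Eqs. (1)–(4) applied to the confusion matrix. A minimal NumPy sketch, using a hypothetical two-class matrix rather than the paper's results:

```python
import numpy as np

def per_class_metrics(cm):
    """Precision, recall and F1 for each class of a confusion matrix
    whose rows are the real classes and columns the estimated classes."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                    # correct decisions per class
    precision = tp / cm.sum(axis=0)     # TP / (TP + FP), column-wise
    recall = tp / cm.sum(axis=1)        # TP / (TP + FN), row-wise
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical two-class example: 90/100 and 80/100 correct
p, r, f1 = per_class_metrics([[90, 10],
                              [20, 80]])
print(r[0])  # 0.9
```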


Table 3 Performance metrics for mSVM, no feature selection, 10-fold cross-validation

Class            Precision   Recall   F1 score
Anger              0.868     0.865     0.865
Disgust            0.918     0.887     0.900
Fear               0.923     0.896     0.907
Happiness          0.979     0.991     0.985
Sadness            0.894     0.852     0.869
Surprise           0.935     0.957     0.944
Neutral            0.938     0.987     0.961
Global average     0.922     0.919     0.919
Average accuracy: 0.919

Table 4 Averaged confusion matrix for mSVM with 6000 features selected, 10-fold cross-validation (rows: real class; columns: estimated class)

Real class   Anger   Disgust   Fear    Happiness   Sadness   Surprise   Neutral
Anger        0.870   0.052     0.000   0.000       0.065     0.000      0.013
Disgust      0.048   0.891     0.022   0.004       0.030     0.000      0.004
Fear         0.004   0.000     0.900   0.009       0.004     0.065      0.017
Happiness    0.000   0.000     0.009   0.991       0.000     0.000      0.000
Sadness      0.061   0.026     0.017   0.000       0.857     0.000      0.039
Surprise     0.004   0.000     0.039   0.000       0.000     0.952      0.004
Neutral      0.004   0.000     0.000   0.000       0.009     0.000      0.987
Average accuracy of 0.921

Table 5 Performance metrics for mSVM with 6000 features selected, 10-fold cross-validation

Class            Precision   Recall   F1 score
Anger              0.880     0.870     0.873
Disgust            0.922     0.891     0.905
Fear               0.915     0.900     0.906
Happiness          0.987     0.991     0.989
Sadness            0.896     0.857     0.872
Surprise           0.939     0.952     0.944
Neutral            0.930     0.987     0.957
Global average     0.924     0.921     0.921
Average accuracy: 0.921

Table 4 shows that 6.1% of sadness images were classified as anger, 3.9% as neutral, 2.6% as disgust, and 1.7% as fear. Similarly to the experiment in Tables 2 and 3, Table 5 shows that this experiment obtained the highest F1 score (0.989) for the happiness class. According to Table 4, 99.1% of happiness images were correctly classified, and 0.9% were misclassified as fear. The average accuracy achieved was 92.1%, indicating that the feature selection was beneficial in this experiment.

These results are superior to those obtained in [32], which reached an average accuracy of 82.5% with a fine-tuned VGG16. Comparing the performance with [7], our average accuracy is higher than that of its best shape-only configuration (KNN, 89.7%) and higher than its mSVM setup with shape features only (87.4%), using only the features extracted by the CNN. Thus, we infer that adding appearance features could be important for the task of classifying facial expressions; it has not yet proved possible to replace the composition of shape and appearance features by the CNN module alone. The CNN module has advantages, however: (i) it is a widely applicable method, as it does not require ad-hoc refinements such as those needed to locate fiducial points and to define the reference standards for their metrics; (ii) fine-tuning of complex architectures potentially allows the adjustment of previously trained parameters to specific contexts, reducing the demand for computing time and processing capacity.
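The ANOVA F-value criterion of the second setup corresponds to scikit-learn's SelectKBest with f_classif [36]. A sketch with synthetic features (k = 10 here in place of the paper's 6000, and the class-dependent offsets are fabricated to give the selector something to find):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(210, 100))    # stand-in for the CNN feature vectors
y = np.repeat(np.arange(7), 30)    # 7 expression classes, 30 images each
X[:, :5] += y[:, None]             # make 5 features genuinely class-dependent

# Keep the k features with the highest ANOVA F-value across classes.
selector = SelectKBest(score_func=f_classif, k=10)
X_sel = selector.fit_transform(X, y)
print(X_sel.shape)  # (210, 10)
```

`selector.get_support(indices=True)` returns the indices of the retained features; in this toy run the five class-dependent columns rank among them.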


Therefore, we conclude that the approach presented in this work yields promising results toward an AFER system. This work could proceed towards a complete DNN system, integrating the feature extraction and classification modules in a single architecture with integrated training. The insertion of specific structures for texture information, associated with shape, can also be evaluated.

References

1. Keltner D, Ekman P (2000) Facial expression of emotion. In: Lewis M, Haviland-Jones J (eds) Handbook of emotions. Guilford Publications, New York, pp 151–249
2. Corneanu CA, Simón MO, Cohn JF, Guerrero SE (2016) Survey on RGB, 3D, thermal, and multimodal approaches for facial expression recognition: history, trends, and affect-related applications. IEEE Trans Pattern Anal Mach Intell 38:1548–1568
3. Christopher A, Christian K, Frederick B, Gur Raquel E, Gur Ruben C, Ragini V (2007) Computerized measurement of facial expression of emotions in schizophrenia. J Neurosci Methods 163:350–361
4. Kennedy Daniel P, Ralph A (2012) Perception of emotions from facial expressions in high-functioning adults with autism. Neuropsychologia 50:3313–3319
5. Díaz M, Saez-Pons J, Heerink M, Angulo C (2013) Emotional factors in robot-based assistive services for elderly at home. IEEE RO-MAN 711–716
6. Werner P, Al-Hamadi A, Niese R, Walter S, Gruss S, Traue HC (2014) Automatic pain recognition from video and biomedical signals. In: 2014 22nd international conference on pattern recognition, pp 4582–4587
7. Du S, Tao Y, Martinez AM (2014) Compound facial expressions of emotion. Proc Natl Acad Sci 111:E1454–E1462
8. Du S, Martinez AM (2015) Compound facial expressions of emotion: from basic research to clinical applications. Dialogues Clin Neurosci 17:443–455
9. Karsten W (2015) Measuring facial expression of emotion. Dialogues Clin Neurosci 17:457–462
10. Samadiani N, Huang G, Cai B et al (2019) A review on automatic facial expression recognition systems assisted by multimodal sensor data. Sensors (Basel, Switzerland) 19:1863
11. Ryu B, Rivera AR, Kim J, Chae O (2017) Local directional ternary pattern for facial expression recognition. IEEE Trans Image Process 26:6006–6018
12. Ding Y, Zhao Q, Li B, Yuan X (2017) Facial expression recognition from image sequence based on LBP and Taylor expansion. IEEE Access 5:19409–19419
13. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press, Cambridge
14. Parkhi OM, Vedaldi A, Zisserman A (2015) Deep face recognition. In: British machine vision conference
15. Revina IM, Emmanuel WRS (2018) A survey on human face expression recognition techniques. J King Saud Univ Comput Inform Sci
16. Zhang S, Li L, Zhao Z (2012) Facial expression recognition based on Gabor wavelets and sparse representation. In: 2012 IEEE 11th international conference on signal processing, vol 2, pp 816–819
17. Feng X, Pietikäinen M, Hadid A (2007) Facial expression recognition based on local binary patterns. Pattern Recogn Image Anal 17:592–598
18. Ji Y, Idrissi K (2012) Automatic facial expression recognition based on spatiotemporal descriptors. Pattern Recogn Lett 33:1373–1380
19. Singh G, Chhabra I (2014) Human face recognition through moment descriptors. In: 2014 Recent advances in engineering and computational sciences (RAECS), pp 1–6
20. Liu F, Tang Z, Tang J (2013) WLBP: Weber local binary pattern for local image description. Neurocomputing 120:325–335
21. Jia Q, Gao X, Guo H, Luo Z, Wang Y (2015) Multi-layer sparse representation for weighted LBP-patches based facial expression recognition. Sensors (Basel, Switzerland) 15:6719–6739
22. Biswas S, Sil J (2015) An efficient expression recognition method using contourlet transform. In: Proceedings of the 2nd international conference on perception and machine intelligence (PerMIn '15), New York, NY, USA. Association for Computing Machinery, pp 167–174
23. Gao Y, Leung MKH (2002) Face recognition using line edge map. IEEE Trans Pattern Anal Mach Intell 24:764–779
24. Song M, Tao D, Liu Z, Li X, Zhou M (2010) Image ratio features for facial expression recognition application. IEEE Trans Syst Man Cybern Part B (Cybern) 40:779–788
25. Carcagnì P, Del Coco M, Leo M, Distante C (2015) Facial expression recognition and histograms of oriented gradients: a comprehensive study. SpringerPlus 4:645
26. Calder AJ, Burton AM, Miller P, Young AW, Akamatsu S (2001) A principal component analysis of facial expressions. Vis Res 41:1179–1208
27. Bartlett MS, Movellan JR, Sejnowski TJ (2002) Face recognition by independent component analysis. IEEE Trans Neural Networks 13:1450–1464
28. Siddiqi MH, Ali R, Khan AM, Park Y, Lee S (2015) Human facial expression recognition using stepwise linear discriminant analysis and hidden conditional random fields. IEEE Trans Image Process 24:1386–1398
29. Atsawaruangsuk S, Katanyukul T, Polpinit P (2019) Analyze facial expression recognition based on curvelet transform via extreme learning machine
30. Zhang L, Tjondronegoro D (2011) Facial expression recognition using facial movement features. IEEE Trans Affect Comput 2:219–229
31. Kaggle Datasets. Challenges in representation learning: facial expression recognition challenge. https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/overview. Accessed 16 Mar 2020
32. Neto HS, Santos CC, Fernandes MR, Samatelo JLA (2018) Transfer learning for facial emotion recognition. Anais do XIV Workshop de Visão Computacional
33. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition
34. Viola P, Jones M (2001) Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE Computer Society conference on computer vision and pattern recognition (CVPR 2001), vol 1
35. King DE (2009) Dlib-ml: a machine learning toolkit. J Mach Learn Res 10:1755–1758
36. Pedregosa F, Varoquaux G, Gramfort A et al (2011) Scikit-learn: machine learning in Python. J Mach Learn Res 12:2825–2830

Estimation of Magnitude-Squared Coherence Using Least Square Method and Phase Compensation: A New Objective Response Detector
F. Antunes and L. B. Felix

Abstract

Auditory Steady-State Responses (ASSR) are bioelectric potentials evoked in the brain by periodic auditory stimulation. One example of a stimulus used to evoke an ASSR is the amplitude-modulated tone, for which the ASSR is characterized by an increase of energy at the modulation frequency in the electroencephalogram power spectrum. The presence or absence of an ASSR can be statistically determined by objective response detectors, usually implemented in the frequency domain through the Discrete Fourier Transform (DFT). To avoid spectral leakage in the DFT analysis, coherent sampling is often used. This technique consists of adjusting the stimulus modulation frequency so that a whole number of cycles occurs in an epoch of analysis. Thus, once the modulation frequency has been defined, other epoch lengths cannot be analyzed. This work proposes an objective response detector using the magnitude-squared coherence with least squares and phase compensation to estimate spectral content, which does not require coherent sampling. This detector was analyzed with different epoch lengths and different types of windowing. For small epochs, the new detector did not work because the false positive rate was not controlled. For large epochs, the new detector has lower detection rates. The best epoch lengths to use are the smallest possible ones before losing control of the false positive rate. For the data used in this work, the best epoch lengths ranged from 256 to 512 samples. The Tukey window was the most robust in terms of spectral leakage and presented the highest detection rate, with a false positive rate close to the significance level.
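The coherent-sampling constraint described above — a whole number of stimulus cycles per analysis epoch, so the modulation frequency falls exactly on a DFT bin — can be illustrated numerically. The helper name below is ours, not from the paper:

```python
def coherent_frequency(fm_target, fs, epoch_len):
    """Round a target modulation frequency (Hz) to the nearest frequency
    that completes a whole number of cycles in one analysis epoch, so the
    tone falls exactly on a DFT bin (no spectral leakage)."""
    cycles = round(fm_target * epoch_len / fs)
    return cycles * fs / epoch_len

fs, N = 1000.0, 1024                  # sampling rate (Hz), epoch length
fm = coherent_frequency(40.0, fs, N)  # 40.96 cycles -> 41 -> ~40.04 Hz
print(fm)
```

Once fm is fixed this way for N = 1024, an epoch of, say, 512 samples holds 20.5 cycles — no longer a whole number — which is exactly the restriction on epoch length that motivates the detector proposed here.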

F. Antunes (✉) Department of Electrical Engineering, Federal Institute of Education, Science and Technology of Minas Gerais—Ipatinga Campus, Maria Silva, Ipatinga, MG, Brazil e-mail: [email protected]
L. B. Felix Department of Electrical Engineering, Federal University of Viçosa, Viçosa, MG, Brazil

Keywords

Auditory steady-state response · Magnitude-squared coherence · Least square · Phase compensation

1 Introduction

According to the World Health Organization, around 466 million people worldwide suffer from some hearing disorder, 34 million of them children [1]. When an auditory disorder is detected early, treatments such as prostheses or implants tend to be more effective. The gold standard test for hearing screening is pure tone audiometry. This test requires the patient's behavioral response to obtain pure tone hearing thresholds; since it depends on feedback from the individual, it is very difficult to perform on patients who are unable to cooperate, such as babies and children. For these cases, objective, automatic audiometric techniques were developed, most of which use brain electric potentials evoked by external stimuli [2]. The Auditory Steady-State Response (ASSR) is an evoked potential used for the objective prediction of hearing thresholds [3]. The ASSR is elicited in the brain by repeating sound stimuli at a rate high enough that the responses to successive stimuli overlap. According to [4], the ASSR evoked by an amplitude-modulated tone is characterized by an increase in energy at the modulation frequency (and its harmonics) in the Electroencephalogram (EEG) power spectrum. In objective audiometry, the behavioral feedback of the patient can be replaced by a test of the presence or absence of the ASSR. These tests can be performed statistically through Objective Response Detectors (ORDs), usually in the frequency domain by means of the Discrete Fourier Transform (DFT) [3]. The Magnitude-Squared Coherence (MSC) is an

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_227


ORD frequently used to detect the ASSR [5, 6]. ORD functions depend on the Signal-to-Noise Ratio (SNR) as well as on the degrees of freedom used in the detector's estimation. In addition, the statistical threshold for the absence/presence decision is obtained from the detector's distribution under the null hypothesis of no response. The challenges of objective audiometry are the sensitivity versus specificity trade-off and the test time. The test time is directly proportional to the epoch length; thus, reducing the number of points per epoch without losing performance is desirable in practical objective audiometry systems. Usually, the epoch length cannot be varied because it is predefined by the coherent sampling criterion, which is used to prevent spectral leakage: the stimulus frequency is adjusted so that an integer number of cycles fits a fixed epoch length [7]. In this sense, this work proposes a new objective response detector based on coherence, least squares and phase compensation. The new detector eliminates the need for the fixed epoch length imposed by the coherent sampling criterion.
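For illustration, the coherent sampling adjustment described above can be sketched as follows (this snippet is ours, not from the paper; the function name is hypothetical):

```python
def coherent_frequency(f_desired, fs, epoch_len):
    """Return the frequency closest to f_desired that completes an
    integer number of cycles in an epoch of epoch_len samples."""
    cycles = round(f_desired * epoch_len / fs)  # force an integer cycle count
    return cycles * fs / epoch_len

# With the protocol used later in this paper (fs = 600 Hz, 1024-sample
# epochs), a nominal ~37.4 Hz modulation is pushed to exactly 37.5 Hz,
# which fits 64 whole cycles per epoch:
fm = coherent_frequency(37.4, 600, 1024)   # -> 37.5
```

Once the frequency is locked this way, any other epoch length would break the integer-cycle property, which is the limitation the proposed detector removes.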

2 Methods

2.1 Magnitude-Squared Coherence (MSC)

The coherence estimate between a deterministic and periodic input signal x[n], representing the auditory stimulus, and the output signal y[n], representing the EEG signal, depends only on the output signal and is given as [5]:

$$\widehat{MSC}(f) = \frac{\left|\sum_{i=1}^{M} Y_i(f)\right|^2}{M \sum_{i=1}^{M} \left|Y_i(f)\right|^2}, \tag{1}$$

where "^" denotes estimation, $Y_i(f)$ is the DFT of the i-th epoch of y[n], f is the frequency of the input signal and M is the number of epochs used in the calculation. In order to use this function as an ORD, the associated critical value must be found: a threshold above which a response is indicated. Critical values are commonly obtained from the inverse cumulative distribution function of the detector under the null hypothesis (H0) of absence of response. Under the null hypothesis, y[n] is assumed to be Gaussian noise, and the distribution of the MSC is given by [8]:

$$\widehat{MSC}(f)\,\big|\,H_0 \sim \beta_{(1,\,M-1)}, \tag{2}$$

where $\beta_{(1,M-1)}$ is the beta distribution with 1 and M − 1 degrees of freedom. The detection threshold is then obtained as [9]:

$$MSC_{crit} = 1 - \alpha^{\frac{1}{M-1}}, \tag{3}$$

where $\alpha$ is the given significance level. The ASSR is detected when $\widehat{MSC} > MSC_{crit}$.

2.2 New Detector

The MSC is an ORD that depends on the amplitude and phase estimates of the ASSR in different signal epochs. Estimates using the DFT are affected by spectral leakage, so coherent sampling is required and only epoch lengths containing an integer number of ASSR cycles can be used. The new detector keeps the MSC but replaces the DFT with least squares and phase compensation to estimate the ASSR spectral content.

Least Squares: The least squares method is a mathematical optimization technique that finds the best fit to a data set by minimizing the sum of the squared differences between the estimated values and the observed data. To apply it here, the signal in each epoch is modeled as a sinusoid at the ASSR frequency. The response model for the i-th epoch is given by:

$$y_i[n] = A_i \cos\!\left(\frac{2\pi f_m n}{F_s} + \theta_i\right), \tag{4}$$

where $f_m$ is the modulation frequency, n is the discrete time, $F_s$ is the sampling frequency, $A_i$ is the amplitude and $\theta_i$ is the phase. $y_i[n]$ can be rewritten as follows:

$$y_i[n] = A_i \cos(\theta_i)\cos\!\left(\frac{2\pi f_m n}{F_s}\right) - A_i \sin(\theta_i)\sin\!\left(\frac{2\pi f_m n}{F_s}\right). \tag{5}$$

For the i-th epoch with L samples, this takes the matrix form:

$$\begin{bmatrix} y_i(1)\\ y_i(2)\\ y_i(3)\\ \vdots\\ y_i(L) \end{bmatrix} = \begin{bmatrix} \cos\frac{2\pi f_m 1}{F_s} & -\sin\frac{2\pi f_m 1}{F_s}\\ \cos\frac{2\pi f_m 2}{F_s} & -\sin\frac{2\pi f_m 2}{F_s}\\ \cos\frac{2\pi f_m 3}{F_s} & -\sin\frac{2\pi f_m 3}{F_s}\\ \vdots & \vdots\\ \cos\frac{2\pi f_m L}{F_s} & -\sin\frac{2\pi f_m L}{F_s} \end{bmatrix} \begin{bmatrix} A_i\cos(\theta_i)\\ A_i\sin(\theta_i) \end{bmatrix}. \tag{6}$$

Writing the design matrix as $\mathbf{M}$ and the coefficients as $R_i = A_i\cos(\theta_i)$ and $I_i = A_i\sin(\theta_i)$, Eq. (6) can be rewritten as:

$$\mathbf{y}_i = \mathbf{M}\begin{bmatrix} R_i\\ I_i \end{bmatrix}. \tag{7}$$

The parameters $R_i$ and $I_i$ can be estimated as follows:

$$\begin{bmatrix} \widehat{R}_i\\ \widehat{I}_i \end{bmatrix} = \left(\mathbf{M}^{T}\mathbf{M}\right)^{-1}\mathbf{M}^{T}\mathbf{y}_i. \tag{8}$$

The amplitude and phase estimates are then determined by:

$$\widehat{A}_i = \sqrt{\widehat{R}_i^{\,2} + \widehat{I}_i^{\,2}}, \tag{9}$$

and

$$\widehat{\theta}_i = \arctan\!\left(\frac{\widehat{I}_i}{\widehat{R}_i}\right). \tag{10}$$

In this way, the amplitude and phase estimates of the ASSR are independent of the epoch length. For epoch lengths that satisfy coherent sampling, the least squares method estimates the same spectral content as the DFT.

Phase Compensation: Least squares estimates the ASSR amplitude and phase regardless of the epoch length, but for epoch lengths that do not contain an integer number of ASSR cycles, the expected value of the phase differs from epoch to epoch, because each epoch starts at a different phase of the ASSR. For the least-squares estimates to be used in the MSC, a phase compensation is proposed: the predicted difference in the expected phase of each epoch is removed, so that the phases computed in all epochs have the same expected value. The compensation increments the calculated phases by the following factor:

$$\widehat{\theta}_i \leftarrow \widehat{\theta}_i + 2\pi(i-1)\left[1 - \mathrm{decimal}\!\left(\frac{L f_m}{F_s}\right)\right], \tag{11}$$

where i is the index of the collected epoch and decimal(·) is a function that extracts the decimal part of a number.
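A minimal NumPy sketch of Eqs. (1)–(11) may help make the procedure concrete (this is our illustration, not the authors' code; all function names are ours):

```python
import numpy as np

def ls_phasor(epoch, fm, fs):
    """Least-squares fit of Eqs. (4)-(8): one sinusoid at frequency fm,
    valid for any epoch length (no coherent sampling needed)."""
    n = np.arange(1, len(epoch) + 1)
    w = 2.0 * np.pi * fm * n / fs
    M = np.column_stack([np.cos(w), -np.sin(w)])      # design matrix of Eq. (6)
    R, I = np.linalg.lstsq(M, epoch, rcond=None)[0]   # Eq. (8)
    return R + 1j * I   # magnitude gives Eq. (9), angle gives Eq. (10)

def msc_ls(signal, fm, fs, L):
    """MSC of Eq. (1) built from phase-compensated least-squares phasors."""
    m = len(signal) // L
    d = (L * fm / fs) % 1.0                # decimal part of cycles per epoch
    phasors = np.array([ls_phasor(signal[i * L:(i + 1) * L], fm, fs)
                        * np.exp(-2j * np.pi * i * d)  # Eq. (11) compensation
                        for i in range(m)])
    return np.abs(phasors.sum()) ** 2 / (m * (np.abs(phasors) ** 2).sum())

def msc_crit(m, alpha=0.05):
    """Detection threshold of Eq. (3)."""
    return 1.0 - alpha ** (1.0 / (m - 1))
```

Note that with, say, L = 500 samples (a non-integer 31.25 cycles of a 37.5 Hz response sampled at 600 Hz), the compensated MSC of a clean response still reaches 1, which is the point of the method.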

2.3 Windowing

Windowing with a weighting function is a well-known technique to minimize the effects of spectral leakage, especially in the presence of power-line interference. The technique consists of multiplying the signal point by point by a weighting function before estimating the spectral content. When no windowing is applied explicitly, this is equivalent to using a rectangular window [10].


3 Material and Methods

The experiments were conducted in a soundproof booth located in the Interdisciplinary Center for Signal Analysis (NIAS) at the Federal University of Viçosa (UFV). The study was conducted on 5 adults with healthy hearing (age range 21–29 years old). Each subject participated in 5 sessions consisting of EEG recording during AM auditory stimulation, according to the protocol approved by the Local Ethics Committee (UFV/CAAE: 56346916.4.0000.5153). The subjects were instructed to sit comfortably, keep their eyes closed and not fall asleep during the exam.

3.1 Stimuli

The volunteers were stimulated with an amplitude-modulated tone. The carrier frequency was 1000 Hz and the modulation frequency was fixed at 37.5 Hz, in order to fit 64 cycles in an epoch of 1024 points, according to the coherent sampling criterion (the sampling rate was 600 Hz). A modulation depth of 100% was used. The stimuli were generated digitally at CD quality and presented monaurally to the right ear, through a shielded cable coupled to an insert earphone E-A-R Tone 5A (Aero Technologies). The intensity of the stimuli was calibrated with a sound pressure level meter to 70 dB SPL (Brüel and Kjær model 2250 with 2 cc coupler DB 0138, DKK).
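The stimulus above can be sketched as follows (an illustration, not the authors' generator; we assume a 44.1 kHz "CD quality" rate and the function name is ours):

```python
import numpy as np

def am_tone(fc, fm, depth, fs, dur):
    """Amplitude-modulated tone: carrier fc modulated at fm with the given
    modulation depth (1.0 = 100%), normalized to unit peak amplitude."""
    t = np.arange(int(fs * dur)) / fs
    env = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return env * np.sin(2 * np.pi * fc * t) / (1.0 + depth)

stim = am_tone(1000.0, 37.5, 1.0, 44100, 1.0)  # 1 s of the paper's stimulus

# Coherent sampling check at the EEG acquisition rate (600 Hz):
cycles_per_epoch = 37.5 * 1024 / 600    # exactly 64.0 cycles per epoch
```

At 100% depth the envelope reaches zero once per modulation cycle, which is what makes the response energy concentrate at the modulation frequency.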

3.2 EEG Data

The electroencephalograph BrainNet BNT 36 (Lynx Tecnologia, Brazil) was used for EEG acquisition. The parameter settings were a 100 Hz low-pass filter, a 0.1 Hz high-pass filter and a sampling frequency of 600 Hz. Gold-plated electrodes, 10 mm in diameter, were connected to the signal amplifier and placed on the scalp with the assistance of an electrolytic gel. The electrode positions were defined according to the International 10–20 System, with reference at electrode Cz and ground at Fpz, in the derivations F7, T3, T5, Fp1, F3, C3, P3, O1, F8, T4, T6, Fp2, F4, C4, P4, O2, Fz, Oz, Pz, A1 and A2.

EEG Bipolar Derivations: These bipolar derivations are formed by the potential difference between two scalp positions. In this case, the total number of available bipolar derivations is the pairwise combination of 22 electrode positions, which results in 231 bipolar derivations. Since each of the 5 participants repeated the recording procedure 5 times, the number of bipolar derivation signals available for analysis was 5775. Each recording lasted about 1 min and 23 s, generating signals with 49 epochs of 1024 samples.


Since each bipolar derivation contains a different intensity of the ASSR, each one has a different SNR. In other words, the procedure allowed the analysis of 5775 EEG signals with different SNR levels, which improves the statistical significance of the results.
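The counts above can be checked directly (a trivial sketch of the combinatorics):

```python
from math import comb

# Pairwise combinations of the 22 electrode positions:
n_derivations = comb(22, 2)          # 231 bipolar derivations

# 5 subjects, each recorded in 5 sessions:
n_signals = n_derivations * 5 * 5    # 5775 signals available for analysis

print(n_derivations, n_signals)
```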

3.3 Epoch Length

The data were acquired so as to have 49 epochs of 1024 samples, resulting in signals with 50,176 samples. The use of least squares made it possible to vary the epoch length to values other than 1024; it was varied between 4 and 4000 samples. For epoch lengths with which it is not possible to use all the data, the last samples were discarded. Considering the results achieved in [10], two windowings were used: the standard one (rectangular) and the one that best mitigates the effects of spectral leakage (Tukey).
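The epoch split and the resulting discard can be sketched as follows (an illustrative helper; the name is ours):

```python
def split_epochs(n_samples, epoch_len):
    """Whole epochs that fit in the recording and samples discarded at the end."""
    return n_samples // epoch_len, n_samples % epoch_len

# The 50,176-sample recordings of this study:
split_epochs(50176, 1024)   # -> (49, 0): nothing discarded
split_epochs(50176, 3000)   # -> (16, 2176): 2176 samples dropped
```

This is why larger epoch lengths can waste more data, a point the Results section returns to.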

3.4 Performance Measurement

The performance measures were calculated considering all available EEG bipolar derivations. For each epoch length, the detection rate and the false positive rate were calculated at a significance level of 0.05.

Detection Rate: The detection rate was computed as the percentage of the 5775 signals in which the MSC detected the presence of the ASSR.


False Positive: The false positive rate was computed as the detection rate at 20 frequencies corresponding to the 20 bins neighboring the modulation frequency bin for an epoch of 1024 samples; that is, frequencies in the range between 35.24 and 41.12 Hz.

4 Results and Discussion

The database signals have a fixed number of samples: 50,176. The total number of epochs increases when the signals are divided into smaller epoch lengths. For most epoch lengths it is not possible to use all 50,176 samples, which requires discarding part of them. Part of the variation in the new detector's performance for different epoch lengths may therefore be associated with the amount of data actually used, especially for larger epochs, which have greater disposal. Figure 1 shows the detection rate for different epoch lengths obtained by analyzing the modulation frequency and neighboring frequencies, and also depicts the false positive rate, using the new detector with rectangular windowing. Regardless of the epoch length, the false positive rate was below 5%, the expected value for the significance level of 0.05. This result corroborates the one found in [10], which analyzed different types of windowing with the epoch length fixed at 1024. From Fig. 1 it can also be observed that the detection rate of the new detector with rectangular windowing is very sensitive to small variations in

Fig. 1 Detection rate of the new detector calculated at the modulation frequency and at the neighboring frequencies, for different epoch lengths, using rectangular windowing


Fig. 2 Detection rate of the new detector calculated at the modulation frequency and at the neighboring frequencies, for different epoch lengths, using Tukey windowing

the epoch length and presents low detection rates for small epoch lengths.

Figure 2 illustrates the detection rate for different epoch lengths obtained by analyzing the modulation frequency and neighboring frequencies, and also shows the false positive rate, using the new detector with Tukey windowing. The false positive rates were close to the expected value, except for smaller epoch lengths. This result agrees with [10]. From Fig. 2 it is also noticed that the detection rate of the MSC with Tukey windowing is less sensitive to small variations in the epoch length, but presents low detection rates for small epoch lengths. As the data were collected so as to split the signals into epochs of 1024 samples, it is possible to observe that smaller epochs can be analyzed with the new detector without loss of performance. This can be an advantage when using a sequential testing strategy, in which the time between one test and another is reduced [11–13].

Table 1 reports the detection rate and false positive rate determined for different epoch lengths using rectangular and Tukey windowing. These epoch lengths correspond to those in which no data were discarded, so a fair comparison can be made. As expected from [10], the rectangular window is very sensitive to spectral leakage and presented false positives well below expectation, while the Tukey windowing is more robust to spectral leakage and its false positives were close to the significance level for epochs larger than 256 samples. Using the Tukey windowing, the new detector showed higher detection rates, but for epochs of less than 256 samples the false positives remained below the significance level.

Table 1 Detection rate and false positive for different epoch lengths and windowing using the MSC with least squares and phase compensation

  Windowing     Epoch length   Detection rate (%)   False positive (%)
  Rectangular   128             3.26                0.10
                256             3.22                0.40
                512            21.44                0.46
                1024           17.21                0.86
                1792           16.64                1.09
                3136           23.64                1.58
                3584           15.39                1.73
  Tukey         128            12.81                2.19
                256            40.40                4.45
                512            38.79                5.15
                1024           36.66                4.88
                1792           36.26                4.27
                3136           36.40                4.55
                3584           35.40                4.41

The data used in this work were collected so as to respect the coherent sampling criterion for an epoch length of 1024 samples. Compared to this epoch length, the detection rate using epochs of 512 samples was 5.8% higher, and using epochs of 256 samples it was 10.2% higher. The low detection rates presented by the detector are due to the use of all EEG bipolar derivations, which include several signals with low SNR.

5 Conclusions

In this work a new objective response detector was proposed. The new detector uses the MSC but replaces the DFT with least squares and phase compensation. Its advantage is that it allows analyzing different epoch lengths: the detector does not require coherent sampling, although it is still sensitive to non-white noise such as power-line interference. Windowing pre-processing continues to mitigate spectral leakage with the new detector, and the Tukey windowing presented the best results. From the results, very small epochs are not recommended, as the false positive rate is not controlled, and large epochs present lower detection rates. For the data used in this work, the best epoch lengths for the new detector ranged from 256 to 512 samples. In future work, the effects of varying the epoch length in sequential test strategies can be verified, as well as the use of least squares with phase compensation in ORDs other than the MSC.

Acknowledgements This work received financial support from the Brazilian agencies CAPES, CNPq and FAPEMIG.

Conflict of Interest The authors declare that they have no conflict of interest.

References
1. WHO. https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss. Accessed Apr 2020
2. Chiappa KH (ed) (1997) Evoked potentials in clinical medicine. Lippincott Williams & Wilkins
3. Picton TW, John MS, Dimitrijevic A, Purcell D (2003) Human auditory steady-state responses: Respuestas auditivas de estado estable en humanos. Int J Audiol 42(4):177–219
4. Dolphin WF, Mountain DC (1992) The envelope following response: scalp potentials elicited in the Mongolian gerbil using sinusoidally AM acoustic signals. Hear Res 58(1):70–78
5. Dobie RA, Wilson MJ (1989) Analysis of auditory evoked potentials by magnitude-squared coherence. Ear Hear 10(1):2–13
6. Felix LB, Moraes JE, Yehia HC, Moraes MFD (2005) Avoiding spectral leakage in objective detection of auditory steady-state evoked responses in the inferior colliculus of rat using coherence. J Neurosci Methods 144(2):249–255
7. Xi J, Chicharo JF (1996) A new algorithm for improving the accuracy of periodic signal analysis. IEEE Trans Instrum Meas 45(4):827–831
8. de Sá AMFM (2004) A note on the sampling distribution of coherence estimate for the detection of periodic signals. IEEE Signal Process Lett 11(3):323–325
9. de Sá AMFM, Infantosi AFC (2007) Evaluating the relationship of non-phase locked activities in the electroencephalogram during intermittent stimulation: a partial coherence-based approach. Med Biol Eng Comput 45(7):635–642
10. Antunes F, Felix LB (2019) Comparison of signal preprocessing techniques for avoiding spectral leakage in auditory steady-state responses. Res Biomed Eng 35(3–4):251–256
11. Cebulla M, Stürzebecher E (2015) Automated auditory response detection: further improvement of the statistical test strategy by using progressive test steps of iteration. Int J Audiol 54(8):568–572
12. Antunes F, Zanotelli T, Bonato Felix L (2019) Automated detection of auditory response: applying sequential detection strategies with constant significance level to magnitude-squared coherence. Int J Audiol 58(9):598–603
13. Zanotelli T, Antunes F, Simpson DM, Mazoni Andrade Marçal Mendes E, Felix LB (2020) Faster automatic ASSR detection using sequential tests. Int J Audiol 1–9

Image Processing as an Auxiliary Methodology for Analysis of Thermograms C. A. Schadeck, F. Ganacim, L. Ulbricht, and Cezar Schadeck

Abstract


This paper presents a study of image processing to automate the analysis of thermograms of patients diagnosed with cancer. The objective is to develop a semiautomatic segmentation model for thermographic images using the Python computational language. A segmentation routine is proposed based on a region-growth algorithm capable of grouping pixels similar to a thermogram Region of Interest (ROI), starting from the manual positioning of a seed pixel, which is why the method is said to be semiautomatic. The tests were performed on twenty thermograms collected from patients with breast and thyroid cancer. The results show that the proposed model comprises the tumor region with greater reliability than the manual delimitation method: the average and minimum temperatures are higher (compared to the manual method), since the model ensures that temperature points outside the real nodular range are not included in the ROI. Regarding operating time, the proposed method delimits the ROI faster than the manual method. As future work, we suggest a statistical study of nodule benignity or malignancy based on the thermal differences recorded in the ROIs of thermograms analyzed with the semiautomatic segmentation.

Keywords

Thermography · Semiautomatic segmentation · Image processing · Cancer

C. A. Schadeck (✉) · F. Ganacim · L. Ulbricht · C. Schadeck Programa de Pós-Graduação em Engenharia Biomédica, Universidade Tecnológica Federal do Paraná (UTFPR), Sete de Setembro, Curitiba, 3165, Brazil

1 Introduction

Throughout life, one in eight women will be diagnosed with breast cancer, with 52% of the cases and 62% of the deaths occurring in developing countries [1, 2]. To reduce mortality, it is necessary to adopt technologies that allow the early diagnosis of breast cancer [3, 4]. Many techniques for early detection have been used, such as mammography, ultrasound (US) and magnetic resonance imaging. However, the use of these methods can present a high percentage of false positives. In addition, mammography is not always effective in young women with dense breast tissue [4, 5]. In the head and neck region, thyroid cancer has the highest incidence, being more frequent in females. The National Cancer Institute (INCA), in Brazil, estimates the occurrence of approximately nine thousand and six hundred cases in the last two years [6, 7]. Among thyroid nodules, malignant carcinomas have the greatest vascularization, especially in their central region. In general, tumors in this gland cause the lobes to grow, altering the structure of the entire gland. The diagnostic methods involve imaging exams, the most common being ultrasound and scintigraphy [8, 9]. In some cases, breast and thyroid neoplasms can also be identified in clinical evaluations by means of palpation. This is because the presence of neoplasms in these regions is more superficial and perceptible from the anatomical point of view. Therefore, due to this more superficial location, thermography becomes a promising technique, since the structures that must be visualized are closer to the subcutaneous tissue [9–12]. Recent studies indicate the association of diagnostic resources for greater diagnostic efficiency and, based on this, thermography can be extremely useful to be used as a complementary exam. In malignant tumors (breast or thyroid) vasomotor changes occur, since the temperature in these regions is higher than that of the surrounding areas

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_228


[13–15]. This occurs due to their greater metabolic activity, angiogenesis and vascular dilation, since even at a premature state tumors require nutrients to maintain or accelerate their growth [14, 16]. Therefore, IR (infrared) thermography has been considered a method to detect these changes: it is a safe technology (it does not emit radiation), non-invasive, painless, fast and low cost, and, with current technology, it has better thermal resolution [2, 13]. IR thermography is a digital diagnostic method based on thermal mapping, measuring small temperature variations, below 0.07 °C [17, 18]. In addition, a single piece of equipment can serve different health units and diagnostic centers, as it is portable (small and easy to transport), directly improving access to the technology [2, 8]. The analysis of thermograms (thermal images generated by a thermographic camera) is time-consuming and depends entirely on the examiner, which can affect the reliability of the results [19]. In this sense, computational models can, in addition to speeding up the process, increase the reproducibility and reliability of the information extracted from a thermogram [20]. Data analysis is usually done visually, using the software that comes with the equipment itself. These programs generally allow the manual selection of Regions of Interest (ROI) in order to extract thermal information. The selection can also be made with simple geometric figures (circumferences or rectangles, for example), which does not guarantee a precise delimitation of the nodular extension [21]. Computational methods for thermal image processing follow algorithms capable of delimiting, automatically or semiautomatically, the ROI that contains the tumor extension, in a standardized way for different points and samples, capturing the temperature of the defined area by associating it with the pixels of the image [22, 23].

The procedure performed automatically, or semiautomatically, guarantees a more precise and detailed analysis of the tumor area. This study proposes the development of a semiautomatic computational model to delimit a ROI that approximates the actual extension of the tumor, in order to verify the thermal behavior in the tumor area. The study uses the semiautomatic seed-growth segmentation method, which groups regions of pixels with similar characteristics following parameters such as color, texture and intensity. The filling, or region growing, process occurs when an initial pixel (called the seed) is selected and, from there, the similarity analysis is made automatically based on the similarity characteristics of the thermal image. The process is called semiautomatic because it requires manual selection of the seed region [20, 31]. The threshold governs the propagation of similarity between the seed and neighboring pixels, obeying a homogeneous stopping criterion [30].


and neighboring pixels, obeying a homogeneous stopping criterion [30].

2 Methodology

This is a descriptive study carried out in a capital of southern Brazil, with the sample data collected in a hospital specialized in the treatment of neoplasms. The samples (thermograms) were collected from volunteers diagnosed with breast and/or thyroid cancer, over 18 years old, who agreed to participate in the procedure. The study was submitted to the hospital's ethics committee and approved, for breast, on 05/16/2018, under number 2.656.992, and, for thyroid, on 08/15/2018, under number 2.822.595. The thermal images were captured following the image capture protocol recommended by the International Academy of Clinical Thermology (IACT), in a room with temperature controlled to 22 °C ± 0.5 °C, monitored by a digital thermo-hygrometer, model SH112 (J-Prolab, Brazil), with 0.1 °C resolution. The region behind the patient was insulated with black fabric to ensure the patient's privacy and reduce light variations during image capture. To prepare for the exam, participants were instructed to avoid coffee and cigarettes for at least two hours before the exam and to remove jewelry from the capture region. Each patient was then accommodated in a comfortable position for 15 min of acclimatization. The next step was cold stress: cooled gel bags were applied to the region of interest for two minutes in contact with the skin surface. After the acclimatization period, images were collected before and after the cold stress using a Fluke camera (version 2.49.0, Washington, USA), model Ti4000, with a 320 × 240 matrix, emissivity adjusted to 0.98 and calibration in the range −20 to +80 °C, with the blue-red color palette selected. The camera was supported on a tripod and positioned at a distance capable of capturing the entire length of the region of interest. Images were captured every 30 s for 15 min, which is the time needed for the cooled tissue to rewarm [16].
After collection, the images were sent for visualization and analysis in SmartView, version 4.3, the Fluke software for communication with the thermal camera, chosen because it is compatible with the camera and with Windows (the operating system chosen to run the application). SmartView loads images in the IS2 or IS3 formats (extensions of thermal image data files created by digital IR cameras [24]), which are then converted to the JPEG and PNG formats, digital file formats that operate with compact data for graphic models and are used in image


conversions [25]. Thereby, the thermal images were visualized on the computer and converted from IS2 to PNG. SmartView performs this step of converting the temperature matrix into the set of corresponding pixels automatically, providing the image in a color scale according to the selected palette. Thermogram analysis is performed after the images are transferred from the camera to the computer program. For this, the ROI must be defined, which is done manually in SmartView according to the examiner's perception. The file is first opened in the program, and the thermogram can then be edited by double-clicking the image. In this step, the examiner visually locates the tumor region and delimits the ROI using a geometric marker; the markers available in the software are rectangles, ellipses and polygons. After manual application of the marker, the maximum, average and minimum temperatures of the ROI enclosed by the geometric marker are displayed in the program interface.

The analysis proposed in this study, however, deals with the semiautomatic delimitation of the ROI, developed in CPython. To develop the routine, the thermal information of the captured image is first pre-processed in Python, version 3.8. This software was chosen because it is free and has many modules that can be easily downloaded or used from the standard library; Python is among the five most popular languages for object-oriented programming [26, 27]. Among the formats provided by SmartView for downloading thermal information, a text file (extension .txt) with the temperature information was selected for the thermogram processing: in this format there is no loss of thermal data, which usually happens when thermal information must be transformed into pixels. For this reason, it is possible to use the color image itself without converting it to shades of gray, which would require one extra step; the process is performed directly on the temperature matrix provided by SmartView. To develop the segmentation model, the following packages must be added to Python:

• PIL (Python Image Library): the Python image library that adds image processing capabilities to the interpreter, such as identifying image files, reading different image formats, display modes, resizing, etc.;
• NumPy: a library for mathematical operations, such as interpolation and calculations with multidimensional arrays (vector matrices);
• TKinter: responsible for the development of the Graphical User Interface (GUI); it is a native Python library.
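Reading the exported temperature matrix can be sketched with NumPy as follows (our illustration, not the authors' routine; the helper name is ours and a small in-memory sample stands in for a real SmartView export):

```python
import io
import numpy as np

def load_thermogram(path_or_buffer):
    """Load a plain-text temperature matrix (one sensor row per line) and
    return it together with its minimum and maximum temperatures."""
    temps = np.loadtxt(path_or_buffer)
    return temps, temps.min(), temps.max()

# Tiny in-memory stand-in for an exported .txt file (temperatures in deg C):
sample = io.StringIO("36.1 36.4 36.2\n36.8 37.9 36.5\n36.0 36.3 36.2\n")
temps, tmin, tmax = load_thermogram(sample)   # tmin = 36.0, tmax = 37.9
```

Working directly on this matrix, rather than on the rendered pixels, is what preserves the full thermal resolution mentioned above.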


Initially, the program reads the exported text file from its location on the computer, loads the data using a NumPy function, and turns it into a temperature matrix. This initial step lets the program scan the matrix for the highest and lowest temperatures. The new temperature matrix generated through NumPy must then be mapped onto a color scale. The operation is based on the Jet color palette (a red–green–blue gradient) and follows the Jet scale function originally from the MATLAB software, used for scientific and mathematical data: it takes a number in a linear range between the minimum and maximum temperatures and converts it into a color map through interpolation [28]. The pre-processing step is completed when the color matrix is converted into a PNG image using the PIL library. The next step is the image processing itself, which consists of the semiautomatic segmentation. The program remains idle until the user clicks anywhere on the pre-processed image within the tumor region; the selected point provides the starting position of the corresponding pixel, called the seed. The process is called semiautomatic because it requires manual selection of the initial seed pixel, obtained by clicking on a point in the tumor region. Semiautomatic segmentation by seed growth groups regions of pixels with similar characteristics following parameters such as color, texture, intensity, etc. The process is called filling, or region growing: an initial pixel (the seed) is selected and, from there, the similarity analysis is made based on the color under the seed, called the target color, or some other characteristic of the thermal image.
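The temperature-to-color pre-processing step described above can be sketched with a common piecewise-linear approximation of MATLAB's Jet palette (our illustration under that assumption, not the authors' exact routine):

```python
import numpy as np

def jet_rgb(temps):
    """Normalize a temperature matrix to [0, 1] and map it onto an
    approximate Jet palette (blue -> cyan -> yellow -> red)."""
    v = (temps - temps.min()) / (temps.max() - temps.min())
    r = np.clip(1.5 - np.abs(4.0 * v - 3.0), 0.0, 1.0)
    g = np.clip(1.5 - np.abs(4.0 * v - 2.0), 0.0, 1.0)
    b = np.clip(1.5 - np.abs(4.0 * v - 1.0), 0.0, 1.0)
    return np.stack([r, g, b], axis=-1)   # H x W x 3 array, e.g. for PIL

# Coldest pixels come out blue, hottest red:
rgb = jet_rgb(np.array([[30.0, 31.0], [32.0, 33.0]]))
```

Scaling this array to 0–255 and passing it to `PIL.Image.fromarray` would complete the PNG conversion the text describes.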
The next step deals with image processing, which consists of the semiautomatic segmentation step. The program remains static until the user clicks anywhere on the pre-processed image that includes the tumor region. The selected point provides the starting position of the corresponding starting pixel, called the seed position. The process is called semiautomatic because it requires manual identification of the initial seed pixel, obtained by clicking on a point in the tumor region. Semiautomatic segmentation by seed growth groups regions with pixels of similar characteristics, following parameters such as color, texture, intensity, etc. The process is called filling or region growing when an initial pixel (called seed) is selected and, from there, the similarity analysis is made based on the color under the seed, called the target color, or some other characteristic of the thermal image. The region growing segmentation method deals with the propagation of the similarity between the seed and the neighboring pixels, obeying a homogeneity stopping criterion, called threshold [20, 29, 30]. After executing the segmentation, the program presents a new color matrix with the segmented region, which returns a new PNG image showing the segregated region when applied to the PIL library. Figure 1 shows an image in PNG format before processing and segmented, respectively from left to right, delimiting the ROI of a thermogram. In order to make the process practical and functional, a GUI was added to the program. Buttons and windows with different functions were created using the functions available in the TKinter library. The interface presents a screen with visualization of the thermographic image in two situations: an image on the left that corresponds to the file selected for analysis, and an image on the right equivalent to the same thermogram with the ROI determined in a semiautomatic way.
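The region growing step described above can be sketched with plain Python lists, queuing pixel positions with append() and retrieving them with pop(), as the text describes. This is a minimal sketch; 4-connectivity and an absolute temperature difference as the homogeneity criterion are assumptions:

```python
def region_growing(temps, seed, threshold):
    """Grow a region from the seed pixel, grouping 4-connected pixels whose
    temperature is within `threshold` of the seed (target) temperature."""
    rows, cols = len(temps), len(temps[0])
    target = temps[seed[0]][seed[1]]   # the "target color" under the seed
    to_visit = [seed]                  # positions queued with append()
    region = set()
    while to_visit:
        r, c = to_visit.pop()          # next position to check
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(temps[r][c] - target) > threshold:
            continue                   # outside the homogeneity criterion
        region.add((r, c))
        for neighbor in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            to_visit.append(neighbor)
    return region
```

The maximum, average and minimum temperatures of the ROI can then be computed directly from the pixels collected in the returned region.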


C. A. Schadeck et al.

Fig. 1 Thermographic image, in PNG format, pre- and post-processing (from left to right) of a patient diagnosed with thyroid cancer, showing semiautomatic segmented ROI in the interface of the proposed application

The GUI also presents the maximum and minimum temperatures of the thermogram to be analyzed and the maximum, average, minimum and threshold temperatures of the thermogram with the segmented ROI. As the threshold is determined manually, a cursor below the thermogram showing the ROI allows the variation of the criterion, which may increase or decrease the segregation threshold. The data processing routine developed ranges from the acquisition of data as a text file to the generation of the segmented PNG image. The processing and segmentation algorithm is defined based on two Python functions: pop() and append(). These functions queue the pixel positions to be visited and check them according to the threshold homogeneity criterion stipulated by the user.

3 Results

The proposed semiautomatic segmentation model was tested on thermograms of 20 patients diagnosed with a tumor (benign or malignant), eight breast and twelve thyroid cases. Manual tests were performed based on the proven diagnosis of cancer, with the location of the nodules confirmed by biopsy. Three expert examiners made manual demarcations on SmartView and the average manual test time was recorded. For the semiautomatic method, knowing the location of the tumor, the seed pixel is placed manually in the region so that the delimitation is done automatically, encompassing the extension of the nodule. The test is called semiautomatic precisely because the seed pixel is positioned manually. The tests and the time records were performed with three repetitions per analysis of each thermogram, so that the average time was determined from the moment a thermal image is loaded until the moment the ROI is delimited, showing the maximum, average and minimum temperatures in the region. According to the proposed methodology, this time was approximately 16 s. For comparison, the time to delimit the ROI manually using the SmartView software is around 40 s. This time was recorded taking into account the process of loading the thermographic image, choosing the marker, manually limiting the ROI and recording the temperatures of interest in the region. Another important factor in the comparison between manual and semiautomatic ROI delimitation is the form of data recording. The proposed program makes it possible to export thermal data in .xls format for use in Excel, for example, while the manual model presents temperatures in the SmartView interface, without the possibility of exporting data, requiring that thermal data be entered in an Excel spreadsheet manually if needed for future analysis. As for the delimitation of the area that comprises the extension of the tumor, the semiautomatic model envelops the region with greater fidelity to the extension of the tumor. This happens because the ROI bounded by the threshold groups the pixels of greatest similarity and excludes pixels outside the predetermined homogeneity criterion. The extent of the tumor was completely selected in all of the tests. Manual delimitation in general uses standardized geometric shapes as markers, for example circles, ellipses, squares or rectangles, which can group pixels outside homogeneity patterns and use thermal data from those grouped pixels. In this way, standard geometric shapes can indicate maximum, minimum or average temperatures within the defined area that differ from the actual temperatures of the tumor extension. Figures 2 and 3 show, respectively, the ROI limits determined by semiautomatic segmentation and manually with a standard geometric shape, for the same tumor identified in a breast thermogram.
It is possible to see a difference in the areas determined by the two methods.
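The export capability mentioned above can be sketched as follows. The original program exports .xls; here CSV is used as an assumed, dependency-free stand-in that Excel also opens, and the column names are illustrative:

```python
import csv

def export_roi_temperatures(path, roi_stats):
    """Write ROI temperature statistics to a file Excel can open.
    `roi_stats` maps statistic names to temperatures in degrees Celsius,
    e.g. {"max": 37.4, "avg": 37.1, "min": 36.7}."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["statistic", "temperature_C"])
        for name, value in roi_stats.items():
            writer.writerow([name, value])
```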

Image Processing as an Auxiliary Methodology for Analysis …


The same occurs with minimum temperatures, because if the adjacent area irradiated by the tumor is defined as tumor area, the temperature tends to be lower. The greatest difference found among minimum temperatures was 2.7 °C, for a breast tumor.

4 Discussion

Regarding thermal differences, Table 1 presents two examples of thermograms analyzed, with data on maximum, average and minimum temperatures in the segmented areas, obtained both semiautomatically and manually. The maximum temperature is usually in the center of the tumor and therefore does not depend on the segmented area, presenting the same values in both methods. However, the average temperature varies widely, since the correct delimitation of the tumor region directly influences this calculation.

This study presents a new methodology for semiautomated analysis of thermographic images of patients diagnosed with cancer. The method proposes a semiautomatic image segmentation routine based on region growing. The development of the proposed algorithm was based on the work of González [20], which deals with the grouping of a region by pixels with similar characteristics, starting from a seed pixel, with a routine available in the image processing package of the MATLAB R2014a software. González explains that the segmentation of a thermogram deals with the process of separating an image into parts in order to analyze an area or region more accurately or easily. It can be used to search for details or particularities within the image, or to determine the limits of an object [20]. Semiautomatic computational ROI processing can avoid the long time spent on manual segmentation, as it is a much faster and more accurate process, besides allowing data to be stored. The region growing segmentation method has wide application in the analysis of medical images [20, 31, 32]. Therefore, the proposed algorithm was obtained through the development of a segmentation routine aiming at reducing the operational time of analysis of thermographic images and storing data in Excel. Milosevic also developed a ROI automation work, analyzing the possibility of classifying thermograms of the breasts as normal or abnormal through texture characteristics; the study separates the ROI using three texture classifiers [32]. Both Milosevic's and González's works start the processing with the conversion of RGB colors to gray scale, followed by the choice of a pixel segregation algorithm over different shades of gray [20, 32]. In a different way, the methodology proposed in the present article worked with a segmentation routine based on the pixel positions of the thermal image with their corresponding temperatures, based

Table 1 Comparison between ROI temperatures, obtained semiautomatically and manually

          T semiautomatic (°C)      T manual (°C)
Tumor     Max    Med    Min         Max    Med    Min
Thyroid   37.4   37.1   36.7        37.4   36.6   35.6
Breast    32.5   32.3   32.2        32.5   31.7   29.5

Fig. 2 Breast thermogram representing the ROI for a tumor in the right breast, semiautomatically delimited using the proposed model

Fig. 3 Breast thermogram representing the ROI for a tumor in the right breast, manually delimited using the software SmartView



on a thermogram temperature matrix, without the need for color scale conversion. The processing, called semiautomatic due to the manual selection of the seed position, was effective in exposing the thermal characteristics of the ROI, delimiting the tumor area and comparing it with a ROI defined manually by the user. The manual process deals with the delimitation of the tumor area using standardized geometric figures (circles, ellipses or rectangles) that, in this study, were made using the software SmartView. However, manual methods depend on the examiner's perception and demand longer operational time for analysis [20]. A very common manual method used to help in the diagnosis of neoplasms was selected for comparison: detection with standard geometric figures using SmartView, as this software already tracks and communicates with the Fluke camera and provides this tool. Other methods could also be compared; their analysis is suggested for future works.

5 Conclusions

The developed software was able to segment the ROI of breast and thyroid thermograms. In addition, the proposed model encompasses the tumor region with greater reliability than the manual ROI delimitation method. The semiautomatic model was able to capture higher average and minimum temperatures in the ROI when compared to the manual model, which ensures that temperature points outside the actual nodular extent are not included in the ROI. As for operational time, the proposed model performs the task of delimiting the ROI more quickly than the manual model. The semiautomatic software can export thermal data in an Excel-compatible format, which can facilitate statistical analysis or the filling of data in medical records. For future studies, the complete automation of the software is suggested, so that the ROI delimitation takes place completely automatically, without the need for prior knowledge of the tumor region. A statistical study of nodular benignity or malignancy is also suggested, based on the thermal difference recorded in the ROI of the thermograms under analysis.

Acknowledgements For sharing information, we thank researchers José Ramón González (UFF) and Adriano dos Passos (UFPR). This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES, Coordination for the Improvement of Higher Education Personnel)—Brazil—Finance Code 001.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. DeSantis CE et al (2015) International variation in female breast cancer incidence and mortality rates. Cancer Epidemiol Biomarkers Prev 24(10):1495–1506
2. Yadav P, Jethani V (2016) Breast thermograms analysis for cancer detection using feature extraction and data mining technique. In: AICTC Proceedings of the international conference on advanced information and communication technologies, Bikaner, India, vol 16, pp 1–5
3. Gerasimova-Chechkina E et al (2016) Comparative multifractal analysis of dynamic infrared thermograms and X-ray mammograms enlightens changes in the environment of malignant tumors. Front Physiol 7:1–15
4. Jesus Guirro RR et al (2017) Accuracy and reliability of infrared thermography in assessment of the breasts of women affected by cancer. J Med Syst 41(5):2–6
5. Gerasimova-Chechkina E et al (2014) Wavelet-based multifractal analysis of dynamic infrared thermograms to assist in early breast cancer diagnosis. Front Physiol 5:1–11
6. INCA at https://www.inca.gov.br
7. American Cancer Society at https://www.cancer.org
8. Alves MLD, Gabarra MHC (2016) Comparison of power Doppler and thermography for the selection of thyroid nodules in which fine-needle aspiration biopsy is indicated. Radiol Bras 49(5):311–315
9. Chammas MC, Gerhard R, Oliveira IR (2005) Thyroid nodules: evaluation with power Doppler and duplex Doppler ultrasound. J Otolaryngol Head Neck Surg 132:874–882
10. Lagalla R et al (1993) Analisi flussimetrica nelle malattie tiroidee: ipotesi di integrazione con lo studio qualitativo con color-Doppler. Radiol Med 85(5):606–610
11. Faria M, Casulari LA (2009) Comparação das classificações dos nódulos de tireoide ao Doppler colorido descritas por Lagalla e Chammas. Arq Bras Endocrinol Metab 53:811–817
12. Nardi F et al (2014) Italian consensus for the classification and reporting of thyroid cytology. J Endocrinol Invest 37(6):593–599
13. Gavriloaia G, Neamtu C, Gavriloaia MR (2012) An improved method for IR image filtering. In: Proceedings of advanced topics in optoelectronics, microelectronics, and nanotechnologies, vol 6, Constanta, Romania
14. González JR et al (2016) Registro de imagens infravermelhas do pescoço para o estudo de desordens das tireoides. Universidade Federal Fluminense, Niterói, Rio de Janeiro
15. Raghavendra U et al (2016) An integrated index for breast cancer identification using histogram of oriented gradient and kernel locality preserving projection features extracted from thermograms. Quant Infrared Thermogr 13(2):195–209
16. IACT at www.iact-org.org
17. Brioschi ML et al (2007) Utilização da imagem infravermelha em reumatologia. Rev Bras Reumatol 47(1):42–51
18. Ring F, Jung A, Zuber J (2015) Infrared imaging: a case book in clinical medicine. IOP Publishing, London
19. Brioschi ML (2011) Metodologia de normalização de análise do campo de temperaturas em imagem infravermelha humana. Universidade Federal do Paraná, Curitiba
20. González JR (2017) Um estudo sobre a possibilidade do uso de imagens infravermelhas na análise de nódulos de tireoide. Universidade Federal Fluminense, Rio de Janeiro
21. Barcelos EZ (2015) Progressive evaluation of thermal images with segmentation and registration. Universidade Federal de Minas Gerais, Belo Horizonte
22. Dayananda KJ, Patil KK (2014) Analysis of foot sole image using image processing algorithms. In: Proceedings of the 2014 IEEE global humanitarian technology conference—South Asia Satellite (GHTC-SAS), Trivandrum, India, pp 57–63
23. Bougrine A et al (2017) A joint snake and atlas-based segmentation of plantar foot thermal images. In: 2017 IPTA Proceedings of international conference on image processing theory, tools and applications, Montreal, Canada, vol 7, pp 1–6
24. Fluke at https://www.fluke.com/pt-br
25. Ahmed N, Natarajan T, Rao KR (1974) On image processing and a discrete cosine transform. IEEE Trans Comput C-23(1):90–93
26. Redmonk at https://redmonk.com/sogrady/2018/03/07/languagerankings-1-18/
27. Python Software Foundation at https://www.python.org
28. Mathworks at https://www.mathworks.com
29. Rouhi R et al (2015) Benign and malignant breast tumors classification based on region growing and CNN segmentation. Expert Syst Appl 4:990–1002
30. Glassner A (2001) Fill 'Er up. IEEE Comput Graphics Appl 21(1):78–85
31. Melouah A, Amirouche R (2014) Comparative study of automatic seed selection methods for medical image segmentation by region growing technique. In: Proceedings of international conference on health science and biomedical systems, vol 3, Florence, Italy, pp 91–97
32. Milosevic M, Jankovic D, Peulic A (2015) Comparative analysis of breast cancer detection in mammograms and thermograms. Biomed Tech 60(1):49–56

Performance Comparison of Different Classifiers Applied to Gesture Recognition from sEMG Signals

B. G. Sgambato and G. Castellano

Abstract

In the last years, surface electromyography (sEMG) has become a hot spot for research on signal classification methods due to its many applications with consumer grade sensors. Nonetheless, the correct classification of sEMG signals is not simple due to their stochastic nature and high variability. Our objective was to provide a comprehensive comparison between schemes used in the latest research, namely convolutional neural networks (CNN) and hyperdimensional computing (HDC), using a public high-quality dataset. Our results showed that while CNN had substantially higher accuracy (68 vs. 32% for HDC, for 18 gestures), its shortcomings may be more prevalent in this area, as low amounts of training data and lack of subject-specific data can cause an accuracy drop of up to 70%/19% and 56%/7% for CNN/HDC, respectively. Our results point out that while HDC cannot reach CNN classification levels, it is more versatile on small datasets and provides more adaptability.

Keywords: sEMG · Hand gesture classification · Hyperdimensional computing · Convolutional neural networks · Ninapro

1 Introduction

Surface electromyography (sEMG) is not a new technique and it has been used since the late 1960s in a plethora of applications, as a simple and noninvasive way of measuring

B. G. Sgambato · G. Castellano: Neurophysics Group, IFGW (Instituto de Física Gleb Wataghin), UNICAMP, R. Sérgio Buarque de Holanda, 777, Cidade Universitária, Campinas, SP 13083-859, Brazil
B. G. Sgambato · G. Castellano: Brazilian Institute of Neuroscience and Neurotechnology (BRAINN), Campinas, Brazil

biosignals for control aid of human–machine interfaces (HMI). Most research works have used the sEMG signals of the muscle groups of the upper limbs and hands, with the main application being upper limb prostheses. While there are available solutions for sEMG-controlled prostheses that use conventional myoelectric control schemes, such as on/off and proportional activation [1], their use is limited. The utilization of suitable pattern recognition schemes is paramount to the development of the field. However, pattern recognition has also been considered the main challenge researchers face, as, differently from other well-known bioelectrical signals (e.g. EOG, ECG), the sEMG signal is stochastic [2]. Some of the challenges associated to sEMG pattern recognition are the signal's high dependence on sensor model and placement, inter-subject and inter-session specificity, and muscular fatigue. Pattern recognition solutions must also balance the need for robustness (high accuracy) and high dexterity (high number of possible outputs) while maintaining low latency [1–3].

Nonetheless, three recent changes in the scientific landscape have allowed a surge of innovation in the field. Firstly, with the advent of "big data", the quantity and quality of trainable data publicly available to scientists have increased to unprecedented levels [2]. One example of this, used in this paper, is the Non Invasive Adaptive Prosthetics database (Ninapro) [3, 4], which is, to the best of our knowledge, the largest public dataset of myoelectric hand signals with intact and hand-amputated subjects. The Ninapro initiative started in 2014 and is an ongoing project that, at the time of this writing, has eight datasets publicly available on www.ninapro.hevs.ch, upon a simple registration procedure. Secondly, the advances in miniaturization and connectivity have allowed the emergence of wearable low cost sEMG recording devices.
The Myo armband (produced by Thalmic Labs), for example, was released in 2013 as a commercial consumer grade sEMG sensor. While a setup of Myo Armbands costs around a few hundred dollars, other commonly used sensors, such as the Delsys Trigno sensors

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_229




(www.delsys.com/trigno/sensors/), may cost up to several thousand dollars. Because of their consumer grade price and low setup complexity, these wearable sEMG devices have unlocked potential applications outside of the prosthetic and medical market [5–8].

Lastly, an influx of research in machine learning (ML) and classification has allowed the solution of formerly unsolvable pattern recognition problems. While this push was mainly led by neural networks with a very large number of hidden layers (known as deep learning, DL), other methodologies were also proposed and developed, such as Associative Memory approaches like Hyperdimensional Computing (HDC) (Sect. 3.3) [9]. The success of ML models based on Convolutional Neural Networks (CNN) (Sect. 3.2) is apparent in the field of computer vision, for example, as since 2016 one of the largest object recognition challenges (ILSVRC) has always been won by CNN-based schemes. HDC, on the other hand, is still scarcely studied but has had noticeable success in the processing of natural language [10, 11].

In this landscape, this paper has two main goals: (1) to compare three different pattern recognition schemes, namely the classical Support Vector Machines (SVM), Convolutional Neural Networks (CNN) and Hyperdimensional Computing (HDC), applied to the task of sEMG gesture classification; and (2) to bring attention to HDC since, as far as we know, there are no results for its use on public sEMG benchmark datasets. To accomplish this, we performed three classification experiments. First, we looked at how each model responds to an increasing number of classes (gestures). Second, we evaluated the impact of the number of training epochs on classification accuracy. Third, we analyzed the adaptation capabilities of the models when tested on never-seen subjects.
These experiments aimed at testing usual bottlenecks present in real use cases, namely lack of training data, increasing demand for complex command layouts, and the necessity of individual training for each user. The rest of this paper is organized as follows. Section 2 introduces the methodology, including the Ninapro database 5 [3] acquisition setup and a description of equipment and pre-processing. Section 3 introduces the three classification schemes used, with a focus on HDC. In Sect. 4 we discuss the classification results of our experiments. Finally, we conclude and describe possible future paths in Sect. 5.

2 Materials and Methods

2.1 Subjects

The Ninapro database 5 was first presented by Pizzolato et al. [3]. It comprises 10 nondisabled subjects (eight male and two female). The dataset is further divided into exercise A, with 12 basic finger movements; exercise B, with eight hand configurations, nine basic wrist movements and rest periods (totaling 18 classes); and exercise C, with 23 grasping and functional movements. In this study we decided to use only one exercise, due to the horizontality of our scope and to our hardware limits. Following the literature, we opted for exercise B, as it is the one that appears most often [3, 12, 13]. In the experiment the subjects were instructed to repeat each gesture six times (trials), with their right hand, as it appeared in a video, holding it for 5 s, with 3 s of rest between gestures.

2.2 Acquisition Setup and Protocol

The Myo armband has eight medical grade stainless steel sEMG single differential electrodes, a nine-axis inertial measurement unit (IMU) and a Bluetooth Low Energy (BLE) unit, all controlled by an ARM Cortex-M4 based microcontroller unit. The EMG sensors' sampling frequency is 200 Hz. Although the sensors usually employed in medical research have sampling frequencies of at least 1 kHz, it has been shown that the impact on classification of the lower sampling rate used by the Myo armband is not high [14], while its portability and cost are significant upsides. The Myo armband also has a limitation in resolution; its EMG signal is sent as an 8-bit signed integer (±127). In the database used here, the subjects wore two Myo armbands one next to the other, with the first placed closer to the elbow, on the radio-humeral joint, while the second was placed just below the first, closer to the hand, and tilted by 22.5°.

2.3 Pre-processing

Surface EMG is generally processed in sliding windows (or epochs), and the size of the window is a well-known tradeoff. Larger windows allow higher classification accuracy, especially with low sampling frequency sensors, while in real-time applications smaller windows reduce the input delay. This tradeoff is most studied in the realm of prostheses control, with Hudgins et al. [15] initially proposing a maximum allowed latency of 300 ms. However, later research by Farrell and Weir proposed that latency should be maintained within 100–125 ms [16], but that performance should take priority over speed. Considering all these factors and trying to keep some consistency with previous research done using the Ninapro database, we decided on epochs of 42 samples (210 ms) with 20 samples (100 ms) of stride. This resulted in around 1400 epochs for each gesture. For rest periods, only the first six instances were used, to avoid unbalanced data.
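The epoching described above can be sketched as a simple sliding-window slicer (a minimal illustration, not the authors' code):

```python
import numpy as np

def make_epochs(emg, window=42, stride=20):
    """Slice a (samples x channels) sEMG recording into overlapping epochs.
    At the Myo's 200 Hz sampling rate, 42 samples = 210 ms and a
    20-sample stride = 100 ms."""
    starts = range(0, emg.shape[0] - window + 1, stride)
    return np.stack([emg[s:s + window] for s in starts])
```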


The data were also filtered by a 50 Hz notch filter to remove European power line interference. Following the literature, the training set was always defined as the first, third, fourth and sixth trials, while the testing trials were the second and fifth [3]. While many of the papers that use this specific Ninapro dataset take advantage of the IMU measurements provided by the Myo armband, we decided against it, in order to keep a smaller scope, using only the 16 EMG channels.
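The trial-based split above can be sketched as follows (the trial numbering follows the protocol of [3]; the function itself is an illustration):

```python
def split_by_trial(epochs, trial_ids):
    """Split epochs into training (trials 1, 3, 4, 6) and testing
    (trials 2, 5) sets. `trial_ids` gives the trial number of each epoch."""
    train = [e for e, t in zip(epochs, trial_ids) if t in (1, 3, 4, 6)]
    test = [e for e, t in zip(epochs, trial_ids) if t in (2, 5)]
    return train, test
```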

3 Classification Schemes

3.1 Support Vector Machines

We chose SVM, with a linear kernel, as the standard classification method, to keep consistency with the literature [3, 8, 17, 18]. The features were also selected from the literature and from our previous experience with the problem. From the Hudgins time domain feature set we chose root mean square (RMS), mean absolute value (MAV), slope sign change (SSC), waveform length (WL), zero crossings (ZC) and Willison amplitude (WAMP) [18], and from the Ortiz-Catalan study we included cardinality (CARD) [19].
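Most of these time-domain features follow standard definitions and can be sketched per channel as below. The threshold values are illustrative assumptions (the paper does not report them), and CARD is omitted for brevity:

```python
import numpy as np

def td_features(epoch, zc_thresh=0.0, wamp_thresh=10.0):
    """Per-channel time-domain features for one epoch (samples x channels):
    RMS, MAV, WL, ZC, SSC and WAMP, concatenated into one vector."""
    d = np.diff(epoch, axis=0)
    rms = np.sqrt(np.mean(epoch ** 2, axis=0))
    mav = np.mean(np.abs(epoch), axis=0)
    wl = np.sum(np.abs(d), axis=0)                       # waveform length
    zc = np.sum((epoch[:-1] * epoch[1:] < 0)
                & (np.abs(d) > zc_thresh), axis=0)       # zero crossings
    ssc = np.sum(d[:-1] * d[1:] < 0, axis=0)             # slope sign changes
    wamp = np.sum(np.abs(d) >= wamp_thresh, axis=0)      # Willison amplitude
    return np.concatenate([rms, mav, wl, zc, ssc, wamp])
```

The resulting vectors can be fed directly to a linear-kernel SVM such as sklearn's LinearSVC.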

3.2 Convolutional Neural Networks

Laezza proposed a structure similar to VGGNet containing only 3 × 3 convolutions and 2 × 2 max pooling layers [20]. Due to its simplicity, we followed suit, but made some adaptations to the model based on the literature [12, 13, 21], such as zero padding to maintain the layers' shape and changing the input shape from 42 × 16 (42 time samples × 16 channels) to 14 × 16 × 3 pseudo images. A schematic of the architecture is found in Fig. 1.
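The pseudo-image reshaping can be sketched as below. Splitting the 42 time samples into 3 blocks of 14 is an assumed interpretation; the paper does not spell out the exact mapping:

```python
import numpy as np

def to_pseudo_image(epoch):
    """Reshape one 42 x 16 epoch (time x channels) into a 14 x 16 x 3
    'pseudo image' by stacking three consecutive 14-sample blocks as
    image channels (assumption)."""
    assert epoch.shape == (42, 16)
    return epoch.reshape(3, 14, 16).transpose(1, 2, 0)
```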

3.3 Hyperdimensional Computing

Initially proposed by Pentti Kanerva in 2009 [9], this type of computation relies on very high dimensionality and randomness, as it draws inspiration from brain architectures. In HDC, information is represented not by numbers but by very high dimensional vectors (10,000 dimensions or higher), called Hypervectors (HV), with (pseudo)random independent and identically distributed components. In such vectors, information is equally distributed through all components. For example, a HV with randomly distributed ones and minus ones obeys all the aforementioned properties. In the high dimensional space we can also define a distance metric to calculate similarities between vectors, such that it also represents similarities between the encoded information in


them. For example, we could encode adjacent numbers (i.e. one and two) such that a Euclidean distance calculation yields high similarity between them, preserving the numerical relationship. In this high dimensional space, mathematical operations can be created to manipulate the HVs and perform binding, association, and other cognitive-like operations [22]. Going back to our bipolar (ones and minus ones) HVs, we find that the component-wise sum is an association operation: two HVs created at random are almost surely (nearly) orthogonal to each other and to a third random HV; nonetheless, their sum is significantly closer to each of its predecessors than to this third HV. While the main application of HDC has been natural language processing, some papers have proposed its usefulness in biosignal processing, such as EEG [23] and sEMG [24–27]. Our HDC application to sEMG classification follows a model similar to the one proposed by Rahimi [24]. The model is based on an Item Memory (IM), a Continuous Item Memory (CIM), an Associative Memory (AM) and an encoder. The IM is composed of one randomly created HV for each channel of the sEMG signal. The CIM, differently from the item memory, possesses vectors which are not randomly created. It stores one vector for each possible value each sample can have; for example, a sample with 8-bit resolution can hold 256 possible values, resulting in the same number of HVs. For the first sEMG resolution level (e.g. zero), the model creates a random HV; then for the second, it takes the first level HV and randomly flips x of its components. This process is repeated for all levels. In summary, the CIM encodes the possible values of each sample, preserving the relations between them. The AM is simply composed of one HV for each possible output gesture (called gesture vector, GV); such vectors are initially zeroed, holding no information. The AM is where all learning takes place.
The encoding process, divided into spatial encoding and temporal encoding, is shown in Fig. 2. The spatial encoding takes all samples at time t from each channel. For each channel, it retrieves the CIM HV which matches the sample value and the respective channel HV from the IM, and multiplies (binding) both HVs; it then sums (association) over all channels into one HV; the resulting vector is therefore similar to each of its predecessors and represents the measured signal at time t. The temporal encoding, on the other hand, handles the encoding of different samples in time. It takes n HV outputs from the spatial encoder and, using a fixed permutation (ρ) operation, binds them into one single HV. For example, for n = 3 it does the following:

GV = HV(t = 1) ⊗ ρ(HV(2)) ⊗ ρ(ρ(HV(3)))

where ⊗ denotes component-wise multiplication (binding). The training and test processes are similar. For training, each epoch is encoded as described and added to the
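A compact sketch of the IM/CIM construction and the spatiotemporal encoder is given below, under stated assumptions: bipolar HVs of dimension 10,000, a cyclic shift standing in for the permutation ρ, an illustrative flip count per CIM level, and ties at the bundling threshold resolved to +1 rather than at random:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                        # hypervector dimensionality

def random_hv():
    return rng.choice([-1, 1], size=D)

IM = [random_hv() for _ in range(16)]   # one random HV per sEMG channel

# CIM: level 0 is random; each following level flips a fixed number of
# components of the previous one, preserving similarity between neighbors.
LEVELS, FLIPS = 50, 100                 # flip count is an assumption
CIM = [random_hv()]
for _ in range(LEVELS - 1):
    hv = CIM[-1].copy()
    hv[rng.choice(D, size=FLIPS, replace=False)] *= -1
    CIM.append(hv)

def spatial_encode(sample_levels):
    """Bind each channel HV (IM) with the HV of its quantized value (CIM),
    then bundle (sum) over channels and threshold back to {-1, +1}."""
    s = sum(IM[ch] * CIM[lvl] for ch, lvl in enumerate(sample_levels))
    return np.where(s >= 0, 1, -1)

def temporal_encode(hvs):
    """Bind n spatial HVs, permuting (cyclic shift) the i-th one i times."""
    out = np.ones(D, dtype=int)
    for i, hv in enumerate(hvs):
        out = out * np.roll(hv, i)
    return out
```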



Fig. 1 Schematic diagram of the architecture used in the CNN model. The shapes of each layer are shown as depth @ length × width. 'Conv' stands for convolution and 'Max-Pool' for maximum pooling

Fig. 2 Schematic of the Spatiotemporal HDC encoder used to capture correlations between channels (spatial encoder) and then over time (temporal encoder). Adapted from [23]

corresponding GV. For testing, the same encoder is used, but the resulting HV is called a query vector; instead of adding this example to the corresponding GV, the distance from the query vector to each GV is calculated and the nearest one is chosen as the model prediction. Our implementation used HVs of dimension 10,000 with elements 1 or −1. After each epoch encoding, the resulting HV was normalized by a threshold function to remain bipolar; ties were broken at random. Controllable parameters were extensively tested for better performance. All 16 channels were used to build the IM; the resolution used for

the CIM was 50. The temporal encoder used a length of 3 in the time domain; this means that our window of 42 samples was downsampled to 14. New GVs were only added to the AM if their similarity was smaller than 90%.
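The AM training and query steps described above can be sketched as follows (a minimal illustration; gesture names and the dot product as similarity measure are assumptions):

```python
import numpy as np

def am_train(AM, gesture, encoded_hv):
    """Accumulate an encoded epoch into the gesture vector (GV)."""
    AM[gesture] = AM.get(gesture, np.zeros(encoded_hv.shape)) + encoded_hv

def am_predict(AM, query_hv):
    """Return the gesture whose GV is most similar to the query vector."""
    return max(AM, key=lambda g: float(AM[g] @ query_hv))
```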

4 Results and Discussion

Experiments were performed using the Python modules sklearn and Keras (with TensorFlow 1.x backend). As for HDC, currently there are no modules available, so the implementation was


done from scratch. To validate the code, experiments similar to [24] were conducted and similar results (91% accuracy, while using smaller gesture windows, no downsampling and a spatial encoder of length 3) were obtained. Details and results for each proposed experiment are shown in Sects. 4.1–4.3 and discussed in Sect. 4.4.

4.1 Impact of Number of Gestures

This experiment tested the achievable accuracy starting with 2 gestures (rest plus one random gesture), then adding the other gestures, one by one, in random order. After each new gesture was added, the model was trained from scratch with the examples in the same (random) order. The results are shown in Fig. 3a, where the error bars show the standard deviation. These results are an average of 5 runs.


4.2 Impact of Number of Training Examples

While SVMs and HDCs can train with any number of epochs without compromises, the CNN's batch size affects the performance and stability of the model. Therefore, to perform the experiment we selected specific amounts so as to keep the batch size constant. For each gesture label (L), the amount of training examples was a multiple (M) of the batch size (N). Thus, the total number of examples used for each experiment can be calculated as N × M × L. Figure 3b shows the results of this experiment. These are an average of 5 runs; the error bars show the standard deviation. The results presented in this figure used all 18 gestures, but lower numbers of gestures were tested (not shown), and the general trend shown in Fig. 3b applies as well.

4.3 Adaptation Capabilities

The experiment was performed in a K-fold cross-validation style, where each fold comprised a single subject. For each subject the scheme was also tested using all folds for training (note that training and testing folds are always distinct). Table 1 shows the accuracies obtained for each classifier, for every subject, considering the case when the subject was included in the training ('all') and when he/she was not included ('adap'). The reported results are an average of 5 runs, and they were obtained considering 18 gestures.

4.4 Discussion

The sEMG classification literature has studies using all manner of gestures in widely different numbers; even when studies use the same dataset, they use different combinations of gestures (see, e.g., [13, 17, 28]). The impact of different combinations on different classification schemes is not a trivial question, and thus makes it hard to draw generalized conclusions when comparing different studies. Nonetheless, our approach tries to minimize this variation by adding different gestures at random. In the following, we will refer to the experiments detailed in the previous subsections as experiments A, B and C, respectively.

Fig. 3 Impact on the accuracy of the different classifiers when: (a) different numbers of gestures were used, and (b) different numbers of examples (trained epochs per gesture) were used. The results are an average over 5 runs, and the error bars represent the standard deviation


Table 1 Results of the experiment on adaptation. 'all' stands for accuracy achieved training on all subjects; 'adap' stands for accuracy achieved when the reference subject is not trained; 'diff' stands for the difference between 'all' and 'adap'

            SVM               CNN               HDC
        All  Adap  Diff   All  Adap  Diff   All  Adap  Diff
S1       47    22    25    68    17    50    23    19     3
S2       72    40    32    72    33    39    44    38     6
S3       56    28    28    67    26    42    38    34     5
S4       60    41    19    61    33    29    39    34     4
S5       57    14    43    68    12    56    18    11     7
S6       56    35    21    72    28    44    33    28     5
S7       55    31    24    69    31    38    30    28     2
S8       60    23    37    67    18    49    22    18     4
S9       62    39    23    74    27    48    42    39     3
S10      61    24    37    67    23    44    32    25     7
Mean     59    30    29    69    25    44    32    28     5
Std       6     9     8     4     7     8     9     9     2

Results are in percentages

From experiment A we can conclude that the CNN showed a better ability to generalize to larger numbers of gestures, losing on average 1.8% accuracy per extra gesture, while SVM and HDC lost 2.4% and 3.5%, respectively. For HDC, the results for large numbers of gestures were surprisingly low (32% for 18 gestures), and from 2 to 6 gestures the performance dropped by 30%. Therefore, it seems that while the use of hyperdimensional vectors provides strong associative capabilities to the system, both the encoding method and the use of bipolar vectors, instead of real-valued vectors, limit scalability for larger associative memory sizes. The results of experiment A also shaped our handling of the subsequent experiments, as we felt that the large difference in total accuracy for high numbers of gestures would influence the other tests. While not shown here, experiments B and C were also conducted with a lower number of gestures. Experiment B showed the amount of training data necessary for each scheme to achieve its maximum performance. In this experiment, HDC has the upper hand, being able to learn from very few examples, reaching 81% of its maximum accuracy (26% absolute) using 24 training epochs per gesture, while, for the same number of epochs, the CNN only reaches 30% of its maximum (20% absolute). Considering that a well-known problem in sEMG classification is the drop in performance when training and testing are done on different sessions, performed on different days [29], constantly providing large quantities of training data is neither desirable nor feasible. The same trend was observed using a lower number of gestures, but with a smaller performance gap between the schemes. Lastly, experiment C showed that subject-specific accuracy is highly variable, with some subjects experiencing up to a 50% improvement in performance when they were included in the training set. Each scheme performed very

differently, with the CNN having on average 44% higher performance when trained on the subject it was predicting on, while for HDC the gap was 5%. Again, the same trend was observed using a lower number of gestures, but with a smaller performance gap between the schemes. For both experiments B and C, there are many ways to improve the CNN performance. For experiment B, data augmentation methods may help to artificially increase the training set size, while for experiment C, early stopping may be used to prevent overfitting. Nonetheless, these methods create a need to fine-tune the scheme for different situations. As for HDC, its use in natural language processing is a topic of interest for many research groups, and improvements in that area may be applied to sEMG classification [30]. It is of note that, while not discussed in this paper, HDC has been demonstrated to have an enormous advantage in computing power, parallel efficiency and model footprint, making it especially useful for real-time applications and small, cheap, wearable electronics [27].

5 Conclusions

Our results corroborate that there is no unique solution to the sEMG classification problem, and different experimental designs produce widely different results. Also, while the use of DL (namely, CNN) has tremendously improved accuracy results, especially for larger numbers of classes (68 vs 32% for 18 classes), we show that it has lower performance on smaller training sets (20 vs 26% with 24 examples) and lower adaptation capability (44 vs 5% drop on new subjects) than HDC. These shortcomings are especially critical in the sEMG application, where such data are more likely


not to be present. Throughout this work we showed that the HDC approach, while still unrefined, does not exhibit such problems, being a more versatile option for usual sEMG problems. Future studies will focus on direct improvement of HDC, such as using real-valued vectors and proposing a better encoding scheme, mainly focusing on achieving better accuracy for large numbers of classes. We also propose that the inclusion of some form of hyperdimensional encoding in the usual DL techniques may be beneficial.

Acknowledgements We thank FAPESP (São Paulo Research Foundation, Grants 2013/07559-3 and 2019/10653-8) and FINEP (Studies and Projects Financing Agency, Grant 01.16.0067.00) for financial support.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Geethanjali P (2016) Myoelectric control of prosthetic hands: state-of-the-art review. Med Devices (Auckl). https://doi.org/10.2147/MDER.S91102
2. Phinyomark A, Scheme E (2018) EMG pattern recognition in the era of big data and deep learning. Big Data Cogn Comput. https://doi.org/10.3390/bdcc2030021
3. Pizzolato S, Tagliapietra L, Cognolato M et al (2017) Comparison of six electromyography acquisition setups on hand movement classification tasks. PLoS ONE. https://doi.org/10.1371/journal.pone.0186132
4. Atzori M et al (2014) Electromyography data for non-invasive naturally-controlled robotic hand prostheses. Sci Data. https://doi.org/10.1038/sdata.2014.53
5. Tabor A, Bateman S, Scheme E et al (2017) Designing game-based myoelectric prosthesis training. Conf Human Factors Comput Syst—Proc. https://doi.org/10.1145/3025453.3025676
6. Raina A, Lakshmi TG, Murthy S (2017) CoMBaT: wearable technology based training system for novice badminton players. In: Proceedings of IEEE 17th international conference on advanced learning technologies. https://doi.org/10.1109/ICALT.2017.96
7. Zhang R, Zhang N, Du C et al (2017) AugAuth: shoulder-surfing resistant authentication for augmented reality. IEEE Int Conf Commun. https://doi.org/10.1109/ICC.2017.7997251
8. Abreu JG, Teixeira JM, Figueiredo LS et al (2016) Evaluating sign language recognition using the Myo Armband. In: Proceedings—18th symposium on virtual and augmented reality. https://doi.org/10.1109/SVR.2016.21
9. Kanerva P (2009) Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors. Cognit Comput. https://doi.org/10.1007/s12559-009-9009-8
10. Najafabadi FR, Rahimi A, Kanerva P et al (2016) Hyperdimensional computing for text classification. In: Design, automation test in Europe conference exhibition (DATE), University Booth
11. Kleyko D (2016) Pattern recognition with vector symbolic architectures. Master's thesis, Luleå University, Luleå, Sweden
12. Chaiyaroj A, Sri-Iesaranusorn P, Buekban C et al (2019) Deep neural network approach for hand, wrist, grasping and functional movements classification using low-cost sEMG sensors. IEEE Int Conf Bioin Biome. https://doi.org/10.1109/BIBM47256.2019.8983049
13. Wan Y, Han Z, Zhong J et al (2018) Pattern recognition and bionic manipulator driving by surface electromyography signals using convolutional neural network. Int J Adv Robot Syst. https://doi.org/10.1177/1729881418802138
14. Mendez I et al (2017) Evaluation of classifiers performance using the Myo Armband. In: Proceedings of the myoelectric controls and upper limb prosthetics symposium
15. Hudgins B, Parker P, Scott RN (1993) A new strategy for multifunction myoelectric control. IEEE Trans Biomed Eng
16. Farrell TR, Weir RF (2007) The optimal controller delay for myoelectric prostheses. IEEE Trans Neural Syst Rehabil Eng. https://doi.org/10.1109/TNSRE.2007.891391
17. Shen S, Gu K, Chen XR, Yang M et al (2019) Movements classification of multi-channel sEMG based on CNN and stacking ensemble learning. IEEE Access. https://doi.org/10.1109/ACCESS.2019.2941977
18. Phinyomark A, Quaine F, Charbonnier S et al (2013) EMG feature evaluation for improving myoelectric pattern recognition robustness. Expert Syst Appl. https://doi.org/10.1016/j.eswa.2013.02.023
19. Ortiz-Catalan M (2015) Cardinality as a highly descriptive feature in myoelectric pattern recognition for decoding motor volition. Front Neurosci. https://doi.org/10.3389/fnins.2015.00416
20. Laezza R (2018) Deep neural networks for myoelectric pattern recognition—an implementation for multifunctional control. Master's thesis, Chalmers University, Gothenburg, Sweden
21. Tsagkas N, Tsinganos P, Skodras A (2019) On the use of deeper CNNs in hand gesture recognition based on sEMG signals. In: 10th international conference on information, intelligence, systems and applications. https://doi.org/10.1109/IISA.2019.8900709
22. Kanerva P (2010) What we mean when we say 'What's the dollar of Mexico?' Prototypes and mapping in concept space. In: AAAI fall symposium—technical report
23. Rahimi A, Kanerva P, Del Millán JR et al (2017) Hyperdimensional computing for noninvasive brain-computer interfaces: blind and one-shot classification of EEG error-related potentials. In: EAI international conference on bio-inspired information and communications technologies. https://doi.org/10.4108/eai.22-3-2017.152397
24. Rahimi A, Benatti S, Kanerva P et al (2016) Hyperdimensional biosignal processing: a case study for EMG-based hand gesture recognition. In: 2016 IEEE international conference on rebooting computing—conference proceedings. https://doi.org/10.1109/ICRC.2016.7738683
25. Moin A et al (2018) An EMG gesture recognition system with flexible high-density sensors and brain-inspired high-dimensional classifier. In: Proceedings—IEEE international symposium on circuits and systems. https://doi.org/10.1109/ISCAS.2018.8351613
26. Moin A, Zhou A, Benatti S et al (2019) Analysis of contraction effort level in EMG-based gesture recognition using hyperdimensional computing. In: Biomedical circuits and systems conference proceedings. https://doi.org/10.1109/BIOCAS.2019.8919214
27. Benatti S, Montagna F, Kartsch V et al (2019) Online learning and classification of EMG-based gestures on a parallel ultra-low power platform using hyperdimensional computing. IEEE Trans Biomed Circuits Syst. https://doi.org/10.1109/TBCAS.2019.2914476
28. Simão M, Neto P, Gibaru O (2019) EMG-based online classification of gestures with recurrent neural networks. Pattern Recognit Lett. https://doi.org/10.1016/j.patrec.2019.07.021
29. Du Y, Jin W, Wei W et al (2017) Surface EMG-based inter-session gesture recognition enhanced by deep domain adaptation. Sens Switzerland. https://doi.org/10.3390/s17030458
30. Imani M et al (2019) SearcHD: a memory-centric hyperdimensional computing with stochastic training. IEEE Trans Comput Des Integr Circuits Syst. https://doi.org/10.1109/TCAD.2019.2952544

Modelling of Inverse Problem Applied to Image Reconstruction in Tomography Systems J. G. B. Wolff, G. Gueler Dalvi, and P. Bertemes-Filho

Abstract

Numerical models were developed to analyze the eddy current distribution in conductive volumes excited by time-varying magnetic fields. The sensitivity matrix was obtained for a defined topology of field-generating sources and field-measurement sensors. From the secondary field readings in a generator/sensor system, images of the conductivity distribution inside the material were obtained. The mathematical methods used to obtain numerical solutions for the inverse problem in magnetic induction tomography are presented. The result showed slight fluctuations in the object's conductivity. This is because the high-frequency components were eliminated in the image reconstruction program developed.

Keywords






Tikhonov regularization • Tomography • Image reconstruction • Inverse problem • Rank deficiency

1 Introduction

The determination of the internal electrical characteristics of the human body, such as permittivity (ε), permeability (μ) or conductivity (σ), has important applications in biomedical engineering. Some applications are: the detection of cancerous nodules [1, 2], the determination of blood flow in the heart [3–5], and the detection of cerebral edema [4, 6]. There are several techniques for visualizing changes in the electrical characteristics of the human body due to

J. G. B. Wolff (✉) · G. G. Dalvi · P. Bertemes-Filho
Electrical Engineering Department, State University of Santa Catarina, Rua Paulo Malschitzki 200, Zona Industrial Norte, Joinville, Santa Catarina 89219-710, Brazil

pathological conditions. Some of these methods aim to generate images of the interior of the human body in a non-invasive way: this is tomography. There are several types of tomography, for example, computed tomography, electrical impedance tomography, industrial tomography, magnetic induction tomography, electrical capacitance tomography, etc. In tomography, reconstruction algorithms are used to produce an image of the electrical characteristics of the material from the results obtained in the measurements. For better systematization, most reconstruction methods are divided into two stages: the direct problem and the inverse problem. In the direct problem, the voltages are modeled as a function of the impedances present in the system. The inverse problem consists of inverting the relationship obtained in the direct problem to obtain the impedances. To solve the direct problem, computational methods are generally used, such as the finite element method, the finite difference method, the impedance method or, in some cases, linearization techniques. To solve the inverse problem, minimization and regularization criteria are generally used, for example, the least squares technique and Tikhonov regularization. The objective of this work is to map and reconstruct images of the conductivity distribution in an object using secondary field measurements produced by eddy currents in a magnetic induction tomography (MIT) system, using the Tikhonov regularization technique.

2 Materials and Methods

This study was based on simulations of a cylindrical object representing a conductivity distribution inside it, using the impedance method and Tikhonov regularization. The direct problem was solved with the impedance method, which was first used in this context in 2009 by Wolff and Ramos. The inverse problem in MIT is ill-posed and

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_230


non-linear. Singular value decomposition (SVD) was used to linearize the problem, together with Tikhonov regularization [7, 8]. The impedance method was initially designed to model eddy currents in conductors. It uses a three-dimensional network of impedances connected through volume elements to represent the conductivity of the object. The equivalent circuit of lumped parameters is obtained from the dimensions and electromagnetic properties of the object. The following steps were then developed. For the calculation of the magnetic potential, the Biot–Savart law was used, as shown in Eq. (1):

$$\vec{A} = \frac{\mu_0 I_0}{4\pi} \sum_{i=1}^{N} \frac{\Delta\vec{l}}{|\vec{r} - \vec{r}\,'|} \qquad (1)$$
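Equation (1) is a plain numerical sum over current segments. A sketch of that sum for an illustrative circular coil (the geometry, current, and field point below are assumptions, not the configuration used in the paper):

```python
import numpy as np

# Numerical evaluation of Eq. (1): vector potential A at a field point,
# summing over a coil discretized into N straight segments.
mu0 = 4 * np.pi * 1e-7   # vacuum permeability [H/m]
I0 = 1.0                 # excitation current [A]
N = 360                  # number of coil segments
radius = 0.05            # coil radius [m]

theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
pts = np.stack([radius * np.cos(theta), radius * np.sin(theta),
                np.zeros(N)], axis=1)         # segment endpoints on z = 0
dl = np.roll(pts, -1, axis=0) - pts           # segment vectors Delta-l
mid = 0.5 * (pts + np.roll(pts, -1, axis=0))  # segment midpoints r'

r = np.array([0.03, 0.0, 0.02])               # off-axis field point [m]
dist = np.linalg.norm(r - mid, axis=1)        # |r - r'| per segment

# A = (mu0 I0 / 4 pi) * sum_i dl_i / |r - r'_i|
A = mu0 * I0 / (4 * np.pi) * (dl / dist[:, None]).sum(axis=0)
```

By symmetry, at this field point the potential is purely azimuthal (here, along y), which is a quick sanity check on the discretization.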

The calculation of the electrical potential in each loop of the circuit is given by the mathematical expression of Eq. (2):

$$V_x(i,j,k) = -j\omega\left\{[A_y(i,j,k+1) - A_y(i,j,k)]\,\Delta y - [A_z(i,j+1,k) - A_z(i,j,k)]\,\Delta z\right\} \qquad (2)$$

The calculation of the induced currents in each loop starts with the impedance at x, which is given by Eq. (3): Zx ði; j; kÞ ¼

Dx : DyDz½rði; j; kÞ þ jxeði; j; kÞ

ð3Þ
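Equation (3) maps directly onto array operations over the whole discretization grid. A sketch with a made-up conductivity map (the grid size, cell dimensions, and frequency below are illustrative assumptions):

```python
import numpy as np

omega = 2 * np.pi * 1e6          # angular frequency for f = 1 MHz [rad/s]
dx = dy = dz = 1e-2              # cell dimensions [m]
eps0 = 8.854e-12                 # vacuum permittivity [F/m]

sigma = np.zeros((32, 32, 8))    # conductivity map [S/m]
sigma[8:24, 8:24, :] = 1.0       # 1 S/m inside an illustrative region
eps = np.full_like(sigma, eps0)  # permittivity map [F/m]

# Z_x(i,j,k) = dx / (dy * dz * (sigma + j*omega*eps)), Eq. (3) per cell
Zx = dx / (dy * dz * (sigma + 1j * omega * eps))
```

The same expression, with the roles of Δx, Δy, Δz rotated, gives Z_y and Z_z for the branches in the other directions.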

The mesh currents equation was obtained using Kirchhoff's second law and is specified in Eq. (4):

$$\begin{aligned}
&Z_y(i,j,k)\,[I_x(i,j,k) - I_x(i,j,k-1) + I_z(i-1,j,k) - I_z(i,j,k)] \\
&\quad + Z_z(i,j,k)\,[I_x(i,j,k) - I_x(i,j-1,k) + I_y(i-1,j,k) - I_y(i,j,k)] \\
&\quad + Z_y(i,j,k+1)\,[I_x(i,j,k) - I_x(i,j,k+1) + I_z(i,j,k+1) - I_z(i-1,j,k+1)] \\
&\quad + Z_z(i,j+1,k)\,[I_x(i,j,k) - I_x(i,j+1,k) + I_y(i,j+1,k) - I_y(i-1,j+1,k)] = V_x(i,j,k)
\end{aligned} \qquad (4)$$

The calculation of the primary field is given by the Biot–Savart law, as shown in Eq. (5), which is the discretized integral applied to the loop through which the current flows:

$$\vec{B}_p = \frac{\mu_0 I_0}{4\pi} \sum_{n} \frac{\Delta\vec{l}_n \times (\vec{r} - \vec{r}\,'_n)}{|\vec{r} - \vec{r}\,'_n|^3} \qquad (5)$$

The corresponding sum applied to the discretization network where the induced currents circulate gives the secondary field, shown in Eq. (6):

$$\vec{B}_s = \frac{\mu_0}{4\pi} \sum_{i,j,k} \frac{\left[I_{bx}\Delta x\,\hat{u}_x + I_{by}\Delta y\,\hat{u}_y + I_{bz}\Delta z\,\hat{u}_z\right](i,j,k) \times \left[\vec{r} - \vec{r}\,'(i,j,k)\right]}{\left|\vec{r} - \vec{r}\,'(i,j,k)\right|^3} \qquad (6)$$

The sensitivity can be defined in relation to the amplitude or phase variation of the magnetic field. Since the measurement of small phase variations is less susceptible to errors and interference than the measurement of small amplitude variations, in this study, we consider only the sensitivity in relation to the field phase. The calculation of the sensitivity matrix using primary and secondary field angle measurements is shown in Eq. (7):

$$S_{mn} = \frac{\Delta\phi_m}{\Delta\sigma_n} = \frac{\tan^{-1}(\Delta B / B)}{\Delta\sigma_n} \approx \frac{B_{sm}}{B_{pm}\,\Delta\sigma_n} \qquad (7)$$

where the approximation is justified because B_s is much smaller than B_p and lags the primary field by 90°. The solution to the poor conditioning of the sensitivity matrix S is obtained through regularization techniques. Regularization provides an approximate solution to the ill-conditioned problem by adding additional information in order to decrease the condition number of the matrix. Figure 1 represents the flowchart used to obtain the conductivity values from the values obtained by simulation.
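The regularization step can be sketched with the SVD filter-factor form of the Tikhonov solution. The sensitivity matrix below is a made-up ill-conditioned example, and the regularization matrix is taken as the identity, as in Table 1; this is an illustration of the technique, not the authors' reconstruction code:

```python
import numpy as np

def tikhonov_svd(S, b, lam):
    """Regularized least-squares solution of S @ x = b via the SVD:
    min ||S x - b||^2 + lam^2 ||x||^2  (identity regularization matrix).
    Filter factors f_i = s_i^2 / (s_i^2 + lam^2) damp the components
    associated with very small singular values (the 'high frequency'
    components mentioned in the text)."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    f = s**2 / (s**2 + lam**2)
    return Vt.T @ (f * (U.T @ b) / s)

# Hypothetical ill-conditioned sensitivity matrix and noisy readings.
rng = np.random.default_rng(1)
n = 40
S = rng.standard_normal((n, n))
S[:, -1] = S[:, -2] + 1e-12 * rng.standard_normal(n)  # near-dependent columns
x_true = rng.standard_normal(n)
b = S @ x_true + 1e-6 * rng.standard_normal(n)

x_reg = tikhonov_svd(S, b, lam=1e-3)
x_naive = np.linalg.solve(S, b)  # unregularized: amplifies the noise
```

The unregularized solve blows up the noise along the near-null direction, while the filtered solution stays close to the true conductivity vector, which is exactly the behavior contrasted in Figs. 3 and 4.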

3 Results

In this section, the simulations performed to solve the direct problem in a planar MIT system using the impedance method are presented and analyzed. The first simulation shows the geometric characteristics of the cylinder used to analyze the behavior of the distribution of fields and currents within this object. The reconstructed images of the conductivity distribution in the object are then presented, obtained by calculating the sensitivity matrix in conjunction with the Tikhonov regularization technique.


Fig. 1 Block diagram for inverse problem using Tikhonov regularization

Table 1 shows the dimensions and simulation parameters used in this study to calculate the distribution of eddy currents and magnetic fields in a conductive volume represented by the cylinder. The calculation of the sensitivity matrix takes about seven minutes, and the reconstruction of images using the Tikhonov regularization technique takes about two seconds. The technical specifications of the computer used are:
• Intel Core i5 1.6 GHz processor;
• 1.0 TB RAM memory;
• Windows 10 Pro operating system, 64 bits.

Table 1 Simulation parameters

Figure 2 shows the conductivity distribution in a cylinder with 1 S/m inside and 0 S/m outside. The distribution of Fig. 2 was created computationally by the authors, without solving a mathematical expression; recovering it from the field readings is the inverse problem in magnetic induction tomography. Figure 3 shows an unregularized solution, that is, the solution of the ill-conditioned system before the Tikhonov regularization technique is applied. The sensitivity matrix associated with Fig. 2 is indeed poorly conditioned: Ncond(A) = 3.3028 × 10^23. Note that the unregularized solution has several numerical errors. Figure 4, therefore, shows the expected (regularized) solution for a sensitivity matrix calculated numerically in a magnetic

Parameter                            Symbol   Value
Frequency                            f        1 × 10^6 Hz
Number of pixels in the mesh         N        32 × 32
Regularization matrix (identity)     L        I (n × n)
Regularization parameter             λ        1 × 10^10
Conditioning number                  Ncond    3.3028 × 10^23
Vacuum permeability                  μ0       4π × 10^-7 H/m
Relative permeability                μr       1
Conductivity inside the object       σ1       1 S/m
Conductivity outside the object      σ0       0 S/m
Vacuum permittivity                  ε0       8.854 × 10^-12 F/m


Fig. 2 Conductivity distribution in a cylinder—analytical solution

Fig. 3 Conductivity distribution in a cylinder—unregularized numerical solution

induction tomography system. Additional noise was generated in the data for reconstruction of Fig. 4.

Fig. 4 Conductivity distribution in a cylinder—regularized numerical solution

4 Discussions

A good agreement between the numerical results observed in Fig. 4 and Table 1 shows that the calculation of the primary magnetic potential through numerical integration of the Biot–Savart law, and the calculation of the currents through the impedance method, are accurate approximations under the frequency and conductivity conditions used in the simulations. However, the methods used in this work are limited in frequency, since they are based on the low-frequency approximations of electromagnetic theory, both in the calculation of the magnetic potential and in the calculation of the induced variations. The use of the Biot–Savart law assumes that the contribution of radiated fields is of little importance in this analysis. This is reasonable if the analysis is made in the near-field region, close to the radiating object, since the distances involved do not exceed approximately λ/2, where λ is the field wavelength. In addition, the electrical model of the object does not include inductive effects; consequently, it cannot describe the effects that arise from the self-induction of the eddy currents. Therefore, the skin effect cannot be observed, and the results for the induced currents are, in principle, reliable only if the dimensions of the object are much smaller than the penetration depth of the field. The penetration depth on a flat surface of a semi-infinite volume depends on the frequency and conductivity according to δ = (π f μ0 σ)^(−1/2). For a conductivity of 1 S/m at 1 MHz, this value is 0.5 m; at 10 MHz, it is about 0.16 m. However, if the object is finite and not flat, the skin effect is the main factor determining the current distribution. In this case, the current density depends more on the shape of the object.

5 Conclusion

In this work, a regularization algorithm based on the Tikhonov method and the application of singular value decomposition (SVD) was developed and tested to obtain the least squares solution. The proposed problem was modelled mathematically by infinitesimal magnetic dipoles, using Biot–Savart’s Law and Faraday’s Law. The regularized solution obtained by the method described in this work is similar to the conductivity distribution proposed in the electrical model of the medium under analysis. The regularized


solution shows fluctuations in the conductivity distribution, due to the elimination of the "high frequency" components associated with very small singular values during the Tikhonov regularization process.

Acknowledgements The authors would like to thank UDESC and FAPESC for the institutional support of this project.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Gençer NG, Ider YZ, Williamsom SJ (1999) Electrical conductivity imaging via contactless measurement. IEEE Trans Med Imaging 8(7):617–627
2. Surowiec AJ, Stuchly SS, Barr JR, Swarup A (1998) Dielectric properties of breast carcinoma and the surrounding tissues. IEEE Trans Biomed Eng 35(4):257–263
3. Guardo R, Charron G, Goussard Y, Savard P (1997) Contactless measurement of thoracic conductivity changes by magnetic induction. In: Proceedings of the 19th annual international conference of the IEEE engineering in medicine and biology society, 30 Oct–2 Nov 1997
4. Xu Z, He W, He C, Zhang Z, Luo H (2019) Eddy current simulation for magnetic induction tomography forward problem in biological tissues. State Key Laboratory of Power Transmission Equipment and System Security and New Technology, School of Electrical Engineering, Chongqing University, Chongqing, People's Republic of China
5. Zolgharni M, Ledger PD, Armitage DW, Holder DS, Griffiths H (2009) Imaging cerebral haemorrhage with magnetic induction tomography: numerical modelling. Physiol Meas 30(6):S187–S200
6. Scharfetter H, Lackner HK, Rosell J (2001) Magnetic induction tomography: hardware for multi-frequency measurements in biological tissues. Physiol Meas 22:131–146
7. Wolff JGB, Ramos A (2010) Cálculo de campo em um sistema de tomografia de indução magnética. In: XXII CBEB 2010, Tiradentes
8. Wolff JGB (2011) Análise computacional da distribuição de campos e correntes e reconstrução de imagem em um sistema de tomografia de indução magnética. Master's thesis (advisor: Airton Ramos), Joinville

Towards a Remote Vital Sign Monitoring in Accidents A. Floriano, R. S. Rosa, L. C. Lampier, E. Caldeira, and T. F. Bastos-Filho

Abstract

Monitoring vital signs remotely is essential for accident victims, because any movement can aggravate their health state. This paper evaluates the feasibility of heart rate measurements using a video camera with the subject lying down to simulate an emergency situation. The Plane-Orthogonal-to-Skin (POS) method was used to estimate the blood volume variation. The cardiac frequency was calculated using the Fast Fourier Transform (FFT). Three body positions were chosen for the study: Ventral Decubitus, Dorsal Decubitus and Lateral Decubitus. The results demonstrated the possibility of obtaining reliable heart rate values in these conditions, with an average Root Mean Square Error (RMSE) of 3.04 bpm. Future work aims to develop and evaluate an application for mobile devices that maintains the privacy of the examined subject.

Keywords

rPPG • Heart rate • PPG • Accidents

1

Introduction

A. Floriano (B)
Federal Institute of Espírito Santo (IFES), Rodovia BR 101 Norte—Km 58, São Mateus, Brazil

R. S. Rosa
Department of Computer Engineering, Federal University of Espírito Santo (UFES), Vitória, ES, Brazil

E. Caldeira
Department of Electrical Engineering, Federal University of Espírito Santo (UFES), Vitória, ES, Brazil

L. C. Lampier · T. F. Bastos-Filho
Postgraduate Program in Electrical Engineering, Federal University of Espírito Santo (UFES), Vitória, ES, Brazil

Remote Photoplethysmography (rPPG) is an emerging technique to measure blood volume variation in skin tissue by

capturing the intensity changes of the light reflected by the skin using a simple camera [1, 2]. This method can be a viable solution to measure cardiac parameters, such as heart rate, in situations where traditional methods that use contact sensors, such as electrodes, are not indicated, for example when measuring the heart rate (HR) of burn victims or neonates, who have highly sensitive skin [3, 4]. Given its potential applications in medical care and health monitoring, it has become a focal topic of research, with many proposed methods, as can be seen in the review in [5]. However, most of the procedures to evaluate rPPG signals in the literature consider people seated and gazing directly at the camera [5, 6]. On the other hand, people in need of urgent treatment are often found unconscious and lying on the ground. Depending on the situation that led to that state (a car crash, or a fall from a considerable height, for example), it may not be possible to move them immediately, as that would risk aggravating an eventual spine or neck injury resulting from the accident, causing permanent damage. Furthermore, nowadays it is important to maintain physical distancing due to COVID-19 [7, 8]. So, it would be useful if there were a way to measure vital signs without the need to move or touch people. As far as this research could ascertain, few studies have considered test cases in which the subject is lying in a decubitus position. Thus, evaluating a technology that offers a contactless and straightforward solution is important, because that approach could be vastly applied, not only by professionals, but by regular people as well. This study therefore aims to evaluate heart rate values estimated with the subject posed in three different body positions, in order to verify whether it is possible to obtain reliable heart rate values in these conditions. This work is divided into four sections.
The "Introduction" section states the motivation behind the research proposed in this paper. The "Materials and Methods" section presents the methodology to get the signals from the video and the PPG with the subjects in three different positions, the technique to extract skin
The “Introduction” section states the motivation behind the research proposed in this paper. The “Materials and Methods” section presents the methodology to get the signals from the video and the PPG with the subjects in three different positions, the technique to extract skin

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_231


samples from the video, and the processing steps to calculate the HR from the video. The “Results and Discussion” sections discuss the results obtained with rPPG in the three different positions. In “Conclusion”, the analysis of the results is summarized, followed by proposals for future work.

2 Materials and Methods

2.1 Experiment Protocol

Figure 1 illustrates the setup used for the experiments. Ten healthy subjects (age 25.9 ± 1.5 years; 2 F and 8 M), chosen from among students and collaborators of our university, participated in this study. Each of them was briefed on the research project. Then, they were instructed to lie down on a mat in one of the three body positions chosen for the study: Ventral Decubitus (VD), Dorsal Decubitus (DD) and Lateral Decubitus (LD) (Fig. 2), and were asked to stay still, as if they were unconscious. The application then ran for 1 min for each body position. The reference PPG signal was measured using a MAX30101 finger pulse oximeter [9–11], connected to an Arduino Mega 2560 programmed in C and communicating with the computer over USB. Figure 3 presents an overview of the methodology used to estimate heart rate with a camera.

Fig. 1 Configuration used in the experiments

Fig. 2 Positions analyzed in the experiment. a Dorsal Decubitus. b Ventral Decubitus. c Lateral Decubitus

Fig. 3 Methodology overview to estimate heart rate using a camera
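The processing chain summarized in Fig. 3 can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation): once the skin pixels of each frame are averaged into a one-dimensional trace, the HR is estimated as the dominant spectral peak inside a physiological band (the 0.7–3.0 Hz band used here, i.e. 42–180 bpm, is an assumed choice).

```python
import numpy as np

def estimate_hr_bpm(signal, fs=30.0, band=(0.7, 3.0)):
    """Estimate heart rate (bpm) as the dominant FFT peak of `signal`
    (one sample per video frame), restricted to the band given in Hz."""
    x = np.asarray(signal, dtype=np.float64)
    x = x - x.mean()                              # remove the DC component
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(x))
    keep = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[keep][np.argmax(spectrum[keep])]
```

With 1 min of video at 30 frames per second, the spectral resolution of this estimate is 1/60 Hz, i.e. 1 bpm.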

231 Towards a Remote Vital Sign Monitoring in Accidents


The heart rate estimates from rPPG and from the standard PPG of each volunteer were compared using the Root Mean Square Error (RMSE) metric [2].
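The RMSE between the two series of HR estimates can be computed directly (an illustrative helper; the function name is ours):

```python
import numpy as np

def rmse(hr_rppg, hr_ppg):
    """Root Mean Square Error between paired heart-rate estimates (bpm)."""
    a = np.asarray(hr_rppg, dtype=np.float64)
    b = np.asarray(hr_ppg, dtype=np.float64)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```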

2.2 Image Acquisition

The USB video camera, a Logitech C920 [12], was mounted on a wooden arm about 40 cm above the subject's head, perpendicular to the ground. The images were taken at a resolution of 640 × 480 pixels, at a rate of 30 frames per second. The application to capture and process the images was written in the Python language [13] with OpenCV, an open-source and collaborative library for image processing [14].

2.3 Skin Segmentation

The application first identifies the pixels corresponding to the skin of the face, using the YCbCr threshold values presented in [15], and excludes the other pixels. This step filters out the influence of luminance variation on other elements in the image. According to [15], skin segmentation is more robust on images described in the YCbCr color space, as it makes the detection of skin pixels more tolerant to luminance differences, given that the luminance component (Y) is separated from the color components (Cb and Cr). The skin tones in this color space are clustered in a contiguous volume delimited by the following values:

Y > 80, 85 < Cb < 135, 135 < Cr < 180   (1)

Fig. 4 Example of PPG and rPPG signals of a subject in Dorsal Decubitus
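This thresholding step can be sketched as below (an illustrative implementation, not the authors' code; the full-range BT.601 RGB-to-YCbCr conversion and the commonly cited skin cluster Y > 80, 85 < Cb < 135, 135 < Cr < 180 are assumptions here):

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of candidate skin pixels for an RGB uint8 image (H, W, 3)."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    # Full-range BT.601 RGB -> YCbCr conversion
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Keep pixels inside the assumed skin-tone cluster
    return (y > 80) & (85 < cb) & (cb < 135) & (135 < cr) & (cr < 180)
```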

70% of the total topographic variance) is representable by just a few topographies. Most studies that have examined resting-state EEG report these same four archetypal microstates, which explain most of the global topographic variance [12]. Microstate labels are assigned to the EEG samples according to the microstate prototype with which they are most topographically similar.

In order to extract relevant information from the microstates, each map was modeled using graph theory concepts, as described in Sect. 2.3. Figure 3 shows two examples of networks obtained from Microstate A and their correlation matrices, where SW is the small-worldness index. As can be seen, the networks modeled for the microstates have a small-world architecture (SW > 1), which preserves the brain's characteristic of processing information in a specialized and integrated way, as reported in previous works [23]. After modeling the microstate networks, global and local metrics were extracted and evaluated as described in Sect. 2.4. Figure 4 shows the three metrics that were most relevant in characterizing healthy and schizophrenic individuals. Assortative networks are characterized by connections between nodes of the same degree. Thus, high-degree

Accuracy = (TP + TN) / (TP + TN + FP + FN) × 100%   (6)

Sensitivity = TP / (TP + FN) × 100%   (7)

Specificity = TN / (TN + FP) × 100%   (8)

where true positive (TP) is the number of cases correctly identified as patients (with schizophrenia), false positive (FP)
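These definitions, which follow the standard formulation in [22], translate directly into code (the helper name is ours):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity and specificity (in %) from confusion counts."""
    accuracy = 100.0 * (tp + tn) / (tp + tn + fp + fn)
    sensitivity = 100.0 * tp / (tp + fn)      # true-positive rate
    specificity = 100.0 * tn / (tn + fp)      # true-negative rate
    return accuracy, sensitivity, specificity
```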

Fig. 2 The topographies of the four microstate classes from the clustering algorithm. Note that only the map topography is important; polarity is disregarded in the spontaneous EEG clustering algorithm
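The assignment of samples to the prototype maps of Fig. 2 (maximal absolute spatial correlation, with polarity disregarded) can be sketched as follows (illustrative code with hypothetical names, not the authors' implementation):

```python
import numpy as np

def assign_microstates(eeg, prototypes):
    """Label each EEG sample (n_samples, n_channels) with the index of the
    prototype map (n_maps, n_channels) of highest absolute spatial
    correlation; polarity is ignored, as in spontaneous-EEG clustering."""
    x = eeg - eeg.mean(axis=1, keepdims=True)               # re-reference samples
    p = prototypes - prototypes.mean(axis=1, keepdims=True)
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    return np.abs(x @ p.T).argmax(axis=1)                   # |spatial correlation|
```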

245 Microstate Graphs: A Node-Link Approach …


Fig. 3 Network graphs obtained for Microstate A and the respective thresholded Pearson correlation matrices (in absolute values) of the healthy controls and of the patients with schizophrenia. The main diagonal is zero because self-connections were disregarded

nodes (hubs) are likely to be connected to each other. In disassortative networks, the hubs are not connected to each other. This means that more assortative networks have greater resilience. As can be seen in Fig. 4, the assortativity of all microstate networks shows that resilience is lower in schizophrenic patients.

Small-worldness is a property of networks in which most nodes are not neighbors of each other but can be reached from every other node in a small number of steps. It was found that the microstate networks of healthy individuals are more efficiently wired, showing high small-worldness, and are more clustered and hierarchically organized. The microstate networks of schizophrenic patients have low local efficiency, as shown in Figs. 3 and 4. Local efficiency can be understood as a measure of the fault tolerance of the network, indicating how well each subgraph exchanges information when a node is eliminated. These three metrics were used to train an MLP to classify healthy controls and patients with schizophrenia.
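Of the three metrics, degree assortativity, for example, is simply the Pearson correlation between the degrees at the two ends of each edge. A numpy-only sketch (hypothetical helper, not the authors' implementation):

```python
import numpy as np

def degree_assortativity(adj):
    """Degree assortativity of an undirected graph from a 0/1 adjacency matrix:
    the Pearson correlation between the degrees at the ends of each edge."""
    adj = np.asarray(adj)
    deg = adj.sum(axis=1)
    i, j = np.nonzero(np.triu(adj, k=1))   # each undirected edge once
    # Count every edge in both directions so the correlation is symmetric
    x = np.concatenate([deg[i], deg[j]])
    y = np.concatenate([deg[j], deg[i]])
    return float(np.corrcoef(x, y)[0, 1])
```

For a star network (one hub connected only to leaves) this returns −1, the fully disassortative extreme; hub-to-hub wiring pushes the value toward +1.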

The classification results are shown in Table 1: the classifier obtained an accuracy of (91.67 ± 2.06)% and a sensitivity of (94.04 ± 5.59)%. These results indicate that the method was accurate in identifying the real cases of individuals with schizophrenia. Table 2 shows the results obtained by the main works involving the automatic classification of schizophrenia since 2016. As observed in Table 2, the work with the best classification results used features derived from the multichannel EEG signal, called effective connectivity, and an ensemble of convolutional neural networks, the MDC-CNN (multi-domain connectome CNN), to perform the classification. In the feature extraction phase, various measures of directed brain connectivity are estimated from the EEG signals: partial directed coherence (PDC), time-domain vector autoregressive (VAR) coefficients and topological complex-network measures. These features are used as inputs to an ensemble of multiple CNN classifiers.


Fig. 4 Selected metrics that better characterize the database

Table 1 Classification results using MLP

Test   Accuracy   Sensitivity   Specificity
1      92.86      93.33         92.31
2      89.29      88.89         89.47
3      92.86      100.0         84.62
Mean   91.67      94.04         88.87
Std.   2.06       5.59          4.09

Table 2 Most recent research results in schizophrenia classification

Reference   Features                                                Acc. (%)
[5]         Sensor-level / Source-level / Combined features         80.88 / 85.29 / 88.24
[6]         e-complexity of continuous vector function              89.3
[7]         Mean, skewness, and kurtosis of the SNR values
            of SSVEPs                                               ∼80 / ∼82 / ∼89
[8]         Time and frequency domain metrics of effective
            connectivity                                            91.3 / 89.13 / 92.87

Using the same database as [8], this work proposes a different methodology of analysis, using graph theory techniques to model and extract features from the microstate networks. The advantage of the proposed method in relation to [8] is the ability to obtain the same accuracy with less computational effort, since the proposed technique produced good results without the need for deep learning models, which demand more time and computational capacity.

Previous works have suggested that some features of the microstates, e.g., frequency of occurrence (the tendency of a microstate to be active), average duration (its temporal stability) and transition probabilities (the asymptotic transitions between microstates), may be promising for differentiating healthy and schizophrenic patients [3, 12]. However, using such microstate features to perform the same experiment as in Table 1, the classifiers were not successful: the MLP obtained a mean accuracy of 59.52%, mean sensitivity of 48.62% and mean specificity of 75.80%.
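These classical temporal features can be computed from the per-sample label sequence, as in this illustrative sketch (hypothetical helper; occurrence in segments per second, duration in seconds):

```python
import numpy as np

def microstate_features(labels, fs):
    """Occurrence (segments/s), mean duration (s) and transition-probability
    matrix from a sequence of per-sample microstate labels."""
    labels = np.asarray(labels)
    starts = np.concatenate([[0], np.flatnonzero(np.diff(labels)) + 1])
    lengths = np.diff(np.concatenate([starts, [labels.size]]))
    seg = labels[starts]                          # one label per segment
    n_maps = int(labels.max()) + 1
    occurrence = np.bincount(seg, minlength=n_maps) / (labels.size / fs)
    duration = np.array([lengths[seg == m].mean() if np.any(seg == m) else 0.0
                         for m in range(n_maps)]) / fs
    trans = np.zeros((n_maps, n_maps))            # segment-to-segment transitions
    for a, b in zip(seg[:-1], seg[1:]):
        trans[a, b] += 1
    trans /= np.maximum(trans.sum(axis=1, keepdims=True), 1.0)
    return occurrence, duration, trans
```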

4 Conclusion

This paper proposed a method, called Microstate Graphs, that combines EEG microstates with graph theory in order to identify schizophrenic individuals. The method allows each microstate to be modeled and interpreted as a complex network, so that its structure and the effects of schizophrenia can be evaluated. In addition, the automatic classification resulted in an average accuracy of (91.67 ± 2.06)% using an MLP trained with metrics extracted from the microstate networks: assortativity, small-worldness and local efficiency. This result indicates that the method is promising for the detection of schizophrenia and for understanding how EEG microstates are affected by this disorder. In future work, other brain diseases can be analyzed and possibly detected using the proposed method, e.g., diseases that have similar symptoms and are often difficult to differentiate early. Different statistical models can also be proposed for the analysis of the relationship between pairs of EEG electrodes, in order to obtain other models of microstate networks. Moreover, different metrics can be evaluated to increase the reliability of the proposed method.

Acknowledgements The authors would like to thank the National Mental Health Center of the Russian Academy of Medical Sciences for providing the database, and the Postgraduate Program in Electrical Engineering—UFES. This research received financial support from Fundação de Amparo à Pesquisa do Espírito Santo (FAPES), grant number 598/2018.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Bellack Alan S, Green Michael F, Cook Judith A et al (2007) Assessment of community functioning in people with schizophrenia and other severe mental illnesses: a white paper based on an NIMH-sponsored workshop. Schizophrenia Bull 33:805–822
2. Hofer A, Rettenbacher MA, Widschwendter CG, Kemmler G, Hummer M, Fleischhacker WW (2006) Correlates of subjective and functional outcomes in outpatient clinic attendees with schizophrenia and schizoaffective disorder. 256:246–255
3. Christina A, Faber Pascal L, Gregor L et al (2014) Resting-state connectivity in the prodromal phase of schizophrenia: insights from EEG microstates. Schizophrenia Res 152:513–520
4. Patel Krishna R, Jessica C, Kunj G, Dylan A (2014) Schizophrenia: overview and treatment options. Pharmacy Therapeutics 39:638
5. Shim M, Hwang H-J, Kim D-W, Lee S-H, Im C-H (2016) Machine-learning-based diagnosis of schizophrenia using combined sensor-level and source-level EEG features. Schizophrenia Res 176:314–319
6. Boris D, Alexandra P, Alexander K (2016) Binary classification of multi-channel EEG records based on the e-complexity of continuous vector functions. Comput Methods Programs Biomed 152
7. Alimardani F, Cho J, Boostani R, Hwang H (2018) Classification of bipolar disorder and schizophrenia using steady-state visual evoked potential based features. IEEE Access 6:40379–40388
8. Phang C-R, Ting C-M, Samdin SB, Hernando O (2019) Classification of EEG-based effective brain connectivity in schizophrenia using deep neural networks. In: 2019 9th international IEEE/EMBS conference on neural engineering (NER). IEEE, pp 401–406
9. Kathryn R, Laura DH, Anja B, Thomas K (2016) 15 years of microstate research in schizophrenia—where are we? A meta-analysis. Front Psych 7:22
10. Lehmann D, Ozaki H, Pal I (1987) EEG alpha map series: brain micro-states by space-oriented adaptive segmentation. Electroencephalography Clin Neurophysiol 67:271–288
11. Pascual-Marqui Roberto D, Michel Christoph M, Dietrich L (1995) Segmentation of brain electrical activity into microstates: model estimation and validation. IEEE Trans Biomed Eng 42:658–665
12. Arjun K, Alvaro P-L, Michel CM, Faranak F (2015) Microstates in resting-state EEG: current status and future directions. Neurosci Biobehav Rev 49:105–113
13. García-Laredo E (2018) Cognitive impairment in schizophrenia: description and cognitive familiar endophenotypes. Biopsych Rel Perspecti Rev Literature Psych 43
14. Tripathi A, Sujita Kumar K, Rashmi S (2018) Cognitive deficits in schizophrenia: understanding the biological correlates and remediation strategies. Clin Psychopharmacol Neurosci 16:7
15. Borisov SV, Alexander K, Gorbachevskaia NL, Kozlova IA (2005) Analysis of EEG structural synchrony in adolescents with schizophrenic disorders. Human Physiol 31:255–261
16. Trier PA, Andreas P, Nicolas L, Kai HL (2018) Microstate EEGlab toolbox: an introductory guide. bioRxiv
17. Karwowski W, Farzad VF, Nichole L (2019) Application of graph theory for identifying connectivity patterns in human brain networks: a systematic review. Front Neurosci 13:585
18. Michel Christoph M, Thomas K (2017) EEG microstates as a tool for studying the temporal dynamics of whole-brain neuronal networks: a review
19. Telesford Qawi K, Simpson Sean L, Burdette Jonathan H, Satoru H, Laurienti Paul J (2011) The brain as a complex system: using network science as a tool for understanding the brain. Brain Connectivity 1:295–308
20. Fabrizio V, Francesca M, Maria RP (2017) Connectome: graph theory application in functional brain network architecture. Clin Neurophysiol Practice 2:206–213
21. De Vico FF, Luciano FC, Aparecido RF et al (2010) A graph-theoretical approach in brain functional networks. Possible implications in EEG studies. Nonlinear Biomed Phys 4:S8
22. Alireza B, Mostafa H, Ahmed N, El Ashal G (2015) Part 1: simple definition and calculation of accuracy, sensitivity and specificity. Emergency 3:48–49
23. Telesford Qawi K et al (2011) The ubiquity of small-world networks. Brain Connectivity 1:367–375

Eigenspace Beamformer Combined with Generalized Sidelobe Canceler and Filters for Generating Plane Wave Ultrasound Images L. C. Neves, J. M. Maia, A. J. Zimbico, D. F. Gomes, A. A. Assef, and E. T. Costa

Abstract

The use of plane waves allows images with higher frame rates compared to conventional modes. Some studies have shown that the Eigenspace Beamformer technique associated with the Generalized Sidelobe Canceler (EBGSC) reduces the effects of interference and noise, minimizing sidelobes and providing images with higher contrast and resolution compared to the Delay and Sum (DAS) and the Minimum Variance (MV) methods. Filters such as the Wiener and Kuan filters can also be applied after image processing to reduce the speckle present in ultrasound images. This work presents the implementation of the EBGSC method with Wiener and Kuan filters to improve the processing of plane wave ultrasound images. The EBGSC method combined with the Wiener filter (EBGSC-W) showed an improvement in contrast of 70% and 57.2% compared to the DAS and GSC methods, respectively. Geometric distortion was evaluated using the full width at half maximum (FWHM) parameter, and the EBGSC-W also obtained a reduction in lateral FWHM of 63% and 25% compared to the DAS and GSC methods, respectively, approaching the actual value of the target size (0.1 mm). The EBGSC method with the Kuan filter (EBGSC-K) improved the lateral FWHM; however, it worsened the contrast, with loss of information.

Keywords

Ultrasound images · Generalized sidelobe canceler · Eigenspace beamformer · Plane wave · Adaptive filters

L. C. Neves (✉) · J. M. Maia · D. F. Gomes · A. A. Assef
Electrical and Computer Engineering (CPGEI), Federal University of Technology—Parana (UTFPR), Av. Sete de Setembro, 3165 - Rebouças, Curitiba, Paraná, Brazil

A. J. Zimbico
Electrical Engineering Department (DEEL), Eduardo Mondlane University (UEM), Maputo, Mozambique

E. T. Costa
Biomedical Engineering Department (DEB/FEEC), State University of Campinas (UNICAMP), Campinas, Brazil

1 Introduction

Conventional ultrasound systems have sufficient quality to aid in diagnosis, but not all human tissues can be analyzed well, especially moving tissues, because the limited frame rate reduces the range of motion that can be detected. Currently, the use of plane waves in the formation of ultrasound images has been studied as a way to increase the frame rate of conventional systems [1, 2], allowing more details of an examined region to be viewed in reduced time, mainly in moving organs.

The Delay and Sum (DAS) is the base method used to generate ultrasound images with plane waves [1]. Although this method already allows images of higher quality in less time compared to conventional techniques, its image quality needs to be improved, since it processes the received ultrasound signal with the noise present. Thus, adaptive beamformer methods have emerged, which separate the analyzed ultrasound signal into the desired real signal and noise, generating images with better contrast and resolution. The Minimum Variance (MV) beamformer is an adaptive filtering technique introduced by Capon [3] that can improve signal quality due to its ability to maintain the desired signal while reducing only noise and interference. This technique is used as a basis for research with adaptive filters.

In this work, new adaptive beamformer techniques were studied, including the Eigenspace-Based Minimum Variance (EBMV), which improves the contrast of the ultrasound image, as it allows the received echo to be divided into a signal subspace and a noise subspace, reducing much of the noise. The other method is the Generalized

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_246


Sidelobe Canceler (GSC), which contributes to improved resolution in images by reducing the sidelobes [4]. Ultrasound medical images contain a large amount of speckle, a type of granular noise that tends to reduce resolution and contrast and contributes to blurring the edges of lesion images in soft tissues [5]. To reduce speckle, post-filtering techniques are used, such as the Wiener and Kuan filters [6]. This work presents the use of the Eigenspace beamformer combined with the GSC and the Wiener (EBGSC-W) and Kuan (EBGSC-K) filters to improve the processing of plane wave ultrasound images.

2 Background

Considering a standard linear array of M elements, the output of the DAS beamformer can be described as a function of the weights w(k)^H and the received echoes X(k), as shown in (1) [7]:

Z(k) = w(k)^H X(k)   (1)

where k is the discrete time index, X(k) = [x1(k), x2(k), …, xM(k)]^T is the time-delayed version of the array received signals, w(k) = [w1(k), w2(k), …, wM(k)]^T is a complex vector of weights, (·)^T is the transpose, * is the complex conjugate and (·)^H denotes the Hermitian (conjugate) transpose [7]. For adaptive beamformers, X(k) can be represented as in (2) [7], where S(k) is the received echo signal, Ns(k) represents the interference and noise signals, and the two are uncorrelated:

X(k) = S(k) + Ns(k)   (2)

Thus, it is possible to apply weights to the received echo considering only the noise-free echo S(k), highlighting points of interest. The weights of the adaptive beamformer are determined by minimizing the output power Pt of the beamformer under a constraint, as in (3) and (4) [7]:

Pt = E[|Z|^2] = w^H R w   (3)

w_MV = arg min w^H R w, subject to w^H a = 1   (4)

2.1 Minimum Variance (MV) Beamformer

The optimal weight vector of the MV beamformer is given in (5) [8], where a is the 1 × M steering vector and R is the data covariance matrix (CM), which evaluates the correlation of the obtained signal with the desired signal and allows a proportional weight to be calculated for each received echo:

w_MV = R^{-1} a / (a^H R^{-1} a)   (5)

The CM R can be found by dividing the echo data into overlapped submatrices X_l. The subarray upper limit is L ≤ M/2, where L is the aperture and M the total number of transducer elements. The estimated R after averaging can be represented as in (6), where (2K + 1) is the number of data samples used to estimate the CM R [8]:

R = [1 / ((2K + 1)(M − L + 1))] Σ_{k=−K}^{K} Σ_{l=1}^{M−L+1} X_l(k) X_l(k)^H   (6)

To improve stability and robustness, a diagonal loading technique is applied, as in (7), where tr{·} is the trace of the sample CM and Δ is a constant that varies from 10 to 100 [8]:

R̃ = R + εI = R + [1 / (ΔL)] tr{R} I   (7)
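Equations (5)–(7) can be sketched numerically as follows (an illustrative numpy implementation under simplifying assumptions, namely K = 0 temporal averaging and a steering vector of ones; this is not the authors' MATLAB code):

```python
import numpy as np

def mv_weights(X, L, delta=20.0):
    """Minimum-variance apodization for one imaging point.
    X: (M,) vector of time-aligned element samples; L: subaperture length."""
    M = X.size
    # Spatially smoothed covariance over overlapped subapertures (Eq. 6, K = 0)
    R = sum(np.outer(X[l:l + L], np.conj(X[l:l + L]))
            for l in range(M - L + 1)) / (M - L + 1)
    # Diagonal loading for stability (Eq. 7)
    R = R + np.trace(R).real / (delta * L) * np.eye(L)
    a = np.ones(L)                       # steering vector (assumed)
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)      # Eq. (5): satisfies w^H a = 1
```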

2.2 Eigenspace-Based Minimum Variance (EBMV) Beamformer

The Eigenspace-Based Minimum Variance (EBMV) beamformer is a technique that divides the covariance matrix (CM) into two subspaces, one as a function of the received signal of interest and the other as a function of the noise. Thus, it is possible to project the Minimum Variance (MV) weights onto the signal subspace, maintaining the desired signal and reducing the contribution of sidelobe signals [9]. In the EBMV, the CM R is decomposed into a signal subspace (Es) and a noise subspace (Ep) as in (8), where Λ = diag[λ1, λ2, …, λL], with λ1 ≥ λ2 ≥ … ≥ λL the eigenvalues in descending order, and U = [v1, v2, …, vL], with vi (i = 1, …, L) the orthogonal eigenvectors [10]:

R = U Λ U^H = Es + Ep   (8)

The signal subspace Es is determined using the eigenvectors corresponding to the largest eigenvalues, where the energy contributed by the main lobe is concentrated. The eigenvectors used to construct Es depend on a weight coefficient δ, which we set, such that λi ≥ δλ1 [4]. Thus, the weight vector w_EBMV is calculated as in (9) [10]:

w_EBMV = Es Es^H w_MV   (9)
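In code, the subspace step of Eqs. (8)–(9) reduces to an eigendecomposition followed by a projection of the MV weights (illustrative sketch; the function name is ours):

```python
import numpy as np

def ebmv_weights(R, w_mv, d=0.2):
    """Project MV weights onto the signal subspace Es built from the
    eigenvectors whose eigenvalues satisfy lambda_i >= d * lambda_1."""
    lam, U = np.linalg.eigh(R)           # eigenvalues in ascending order
    lam, U = lam[::-1], U[:, ::-1]       # sort in descending order
    Es = U[:, lam >= d * lam[0]]         # signal-subspace eigenvectors
    return Es @ (Es.conj().T @ w_mv)     # Eq. (9)
```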

Eigenspace Beamformer Combined with Generalized …

3 Method

This work presents the use of the Eigenspace beamformer combined with GSC, the Wiener (EBGSC-W) and Kuan (EBGSC-K) filters to improve the processing of plane wave ultrasound images.

3.1 Image Data Acquisition

The processing was performed off-line in MATLAB R2016a using the Field II program [11, 12], the UltraSound ToolBox (USTB) [13] and the Plane-wave Imaging Challenge in Medical Ultrasound (PICMUS) database [14], which has simulated images and real phantom acquisitions (model 040 GSE) obtained with the Verasonics Vantage 128 ultrasound system and a linear transducer (L11-4v). Field II is a program in which any type of transducer geometry and excitation can be simulated, and pulse-echo and continuous-wave fields can be calculated for transmission and reception. The positioning, shape and number of transducer elements can be configured, and different tissues or phantoms can be simulated [12]. The UltraSound ToolBox (USTB) is a MATLAB tool that assists in signal processing and in the generation of ultrasound images. It was developed by an international group of researchers in the field of ultrasonic imaging to allow the use and comparison of different processing techniques [13]. The PICMUS database results from the challenge organized during IEEE IUS 2016, in which a framework was proposed to evaluate the performance of algorithms that reconstruct ultrasound images from the direct transmission of plane waves, respecting the rules of the ethics committee for in vivo acquisitions [14].

In this project, the USTB tool was used as the basis for processing the DAS and MV methods. On top of these, the EBMV, GSC and EBGSC adaptive methods and the Wiener and Kuan filters were implemented in this work. All simulated image data include the transmission of 11 steered plane waves with angles from −2.15° to +2.15°, an angle step of 0.43° and a central wave angle of 0°, and the data were simulated for a 128-element transducer (L11-4v) with a central frequency of 5.21 MHz. Among the techniques used, the main factor that significantly changes the processing time and quality of the ultrasound images is the number of transmitted steered plane waves. Therefore, each technique was processed with the same transmission settings and the image quality was analyzed.


3.2 Generalized Sidelobe Canceler (GSC)

The generalized sidelobe canceler (GSC), originally proposed by Applebaum and Chapman [15], is a technique that helps to minimize the effects of sidelobes present in ultrasound images [7]. In this method the weight is obtained by combining two orthogonal components, the adaptive weight w_a and the non-adaptive (quiescent) weight w_q. In this way, the interference and noise in the echo data can be suppressed. The weight vector w_GSC is calculated by (10), (11) and (12), where B is the blocking matrix of dimensions M × (M − 1), such that a^H B = 0 [4]:

w_GSC = w_q − B w_a   (10)

w_q = a (a^H a)^{-1}   (11)

w_a = (B^H R B)^{-1} B^H R w_q   (12)

Thus, the weight vector of the Eigenspace-Based beamformer with Generalized Sidelobe Canceler (EBGSC) is calculated by (13) [4]:

w_EBGSC = Es Es^H w_GSC   (13)
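Equations (10)–(12) can be sketched as follows (illustrative numpy code; the blocking matrix is built here from an SVD null-space basis, which is one of several valid choices):

```python
import numpy as np

def gsc_weights(R, a):
    """GSC weights: quiescent component minus the adaptive component
    formed in the null space of the steering vector a (Eqs. 10-12)."""
    w_q = a / (a.conj() @ a)                        # Eq. (11)
    # Blocking matrix B: orthonormal basis of the null space of a^H (a^H B = 0)
    _, _, Vh = np.linalg.svd(a.conj()[None, :])
    B = Vh[1:].conj().T                             # M x (M - 1)
    w_a = np.linalg.solve(B.conj().T @ R @ B, B.conj().T @ R @ w_q)  # Eq. (12)
    return w_q - B @ w_a                            # Eq. (10)
```

By construction a^H w_GSC = 1, since a^H B = 0, so the distortionless constraint of Eq. (4) is preserved.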

3.3 Wiener and Kuan Filters

Filtering is a technique for removing unwanted information from an image, such as noise and interference. One of the noises strongly present in medical ultrasound images is speckle, which tends to reduce resolution and contrast and contributes to blurring the edge regions of biological tissue images [5]. To improve the quality of the ultrasound image, filtering techniques were studied and adapted in the processing of ultrasound signals, such as the Wiener and Kuan filters.

The Wiener filter is introduced to improve image resolution and is determined by considering the minimum mean square error (MMSE) between the beamformer output power and the expected signal, as shown in (14):

H_Wiener = arg min E{|S − H w^H X|^2}   (14)

Thus, the filter can be defined by (15), where Rn is the noise covariance matrix and w^H Rn w is the output noise power of the beamformer [9]:

H_Wiener = |S(k)|^2 / (|S(k)|^2 + w^H Rn w)   (15)

resulting in the beamformer output, which can be calculated by (16):

Z_Wiener(k) = H_Wiener w(k)^H X(k)   (16)

The Kuan filter [16] is an adaptive filter that follows the local MMSE criterion, using the local characteristics (mean and variance) of the pixel to be filtered to calculate a linear combination [17]. The output after this filter can be calculated using (17), where Im is the average intensity in the filter window, Ip is the intensity of the central pixel of the window and W is the coefficient of the adaptive filter, which can be calculated by (18), where CI is the variation coefficient of the noisy image and CB is the noise variation coefficient [17]:

Z_Kuan(k) = Im(k) + W(k) (Ip(k) − Im(k))   (17)

W_Kuan(k) = (1 − CB²/CI²) / (1 + CB²)   (18)
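A direct (non-vectorized) sketch of Eqs. (17)–(18) for a real-valued envelope image (illustrative code, not the authors' implementation; the window size and the noise variation coefficient CB are assumed values):

```python
import numpy as np

def kuan_filter(img, size=3, c_b=0.1):
    """Kuan local-MMSE speckle filter using the local mean and variance
    inside a (size x size) window; c_b is the noise variation coefficient."""
    img = np.asarray(img, dtype=np.float64)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = padded[r:r + size, c:c + size]
            i_m = win.mean()
            c_i2 = win.var() / (i_m ** 2 + 1e-12)            # (C_I)^2
            w = (1.0 - c_b ** 2 / (c_i2 + 1e-12)) / (1.0 + c_b ** 2)
            w = max(w, 0.0)               # clamp: fully smooth flat regions
            out[r, c] = i_m + w * (img[r, c] - i_m)          # Eq. (17)
    return out
```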

4 Results and Discussion

Figures 1a–e show the beamformer responses of the resolution distortion simulation with 11 steered plane waves for (a) the DAS technique, (b) GSC (L = 64, K = 0, Δ = 20), (c) EBGSC (L = 64, K = 0, Δ = 20, δ = 0.2), (d) EBGSC-Wiener and (e) EBGSC-Kuan. Figures 1f–j show the contrast simulation for the same methods as in Fig. 1a–e, respectively. The geometric distortion was evaluated using the full width at half maximum (FWHM), which represents the width between the two extreme points where the function reaches half of its maximum value (−6 dB of the main lobe) [7]. In addition, the contrast ratio (CR) was used to compare the contrast resolution of the images.

The FWHM on the lateral axis (FWHMLat) was calculated in the regions R1, R2 and R3 of Fig. 2a, and the mean value for each method was compared with the real dimension of the analyzed point, 0.1 mm in diameter. The mean FWHMLat is shown in Fig. 2b, with the EBGSC-W and EBGSC-K methods obtaining equal reductions of 0.46 mm (63%), 0.09 mm (25%) and 0.10 mm (27%) compared to the DAS, GSC and EBGSC methods, respectively. Thus, the obtained image presents the targets, on the lateral axis, with dimensions closer to the real ones.

For the quantitative analysis of contrast, the mean value was calculated over the C1, C2 and C3 regions shown in Fig. 3a. Analyzing the values presented in Fig. 3b, it was observed that the EBGSC-W method generated a contrast value of 51.5 dB, which represents a contrast improvement, on average, of 21.09 dB (70%), 18.67 dB (57.2%) and 18.47 dB (56.3%) compared to DAS, GSC and EBGSC,

Fig. 1 Beamformer simulation response with 11 plane waves for distortion analysis in a DAS technique, b GSC, c EBGSC, d EBGSC-W, e EBGSC-K and for contrast analysis in f DAS technique, g GSC, h EBGSC, i EBGSC-W and j EBGSC-K


Fig. 2 FWHMLat results in the simulation image. a Regions R1, R2 and R3 for calculating the average FWHMLat. b Graph of average FWHMLat values for the DAS, GSC, EBGSC, EBGSC-K and EBGSC-W methods, respectively

respectively. However, the EBGSC-K method shows a contrast reduction of 6.03 dB (20%), 8.45 dB (25.9%) and 8.65 dB (26.3%) compared to the DAS, GSC and EBGSC methods, respectively.

The proposed methods involving the adaptive Wiener and Kuan filters with the EBGSC are relatively new; prior research exists only for the EBGSC and EBMV-W techniques. No research was found using the Kuan filter with the methods presented in this work, nor with ultrasound images using plane waves. Thus, the data obtained were evaluated in relation to the DAS, GSC and EBGSC-W methods presented in this work. Zeng et al. [9] applied the EBMV-W technique in simulation (with a 128-element linear transducer, central frequency of 7 MHz and L = 48) and obtained an FWHMLat value of 0.03 mm, a decrease of 0.17 mm (85%) compared to the DAS method and of 0.03 mm (50%) compared to the EBMV method. They also found an


Fig. 3 Contrast results in the simulation image. a Regions C1, C2 and C3 for calculating the average CR. b Graph of average CR values for the DAS, GSC, EBGSC, EBGSC-K and EBGSC-W methods, respectively

improvement of 34.83 dB (240%) and 12.23 dB (33%) in contrast with EBMV-W compared to the DAS and EBMV techniques, respectively. In a simulation developed by [10], using the EBMV-W and a 128-element transducer with a central frequency of 5 MHz and L = 48, the authors obtained CR values of 19.90 dB and 46.30 dB for the DAS and EBMV-W techniques, respectively, which means that the EBMV-W method provided an improvement of 26.4 dB (132%). The EBGSC-K method showed a reduction in FWHMLat equal to that of the EBGSC-W technique; however, it presented the lowest contrast and loss of image information (Fig. 1j), especially in the regions closest to the z axis, indicating that it is not a good technique for improving contrast. The results obtained in this work and those presented in [4] and [10] reinforce the efficiency of the EBGSC-W method in reducing distortion, suppressing secondary lobes and improving the contrast of ultrasound images, and motivate further research with this method.

5 Conclusions

This work proposed an improvement in the processing of plane wave ultrasound images through the use of the EBMV beamformer combined with the GSC and adaptive filters, and the objective was achieved with the EBGSC-W beamformer. The analysis of the results shows that the EBGSC-W performed better than the DAS, GSC and EBMV methods, as it generated improvements both in contrast and in the reduction of distortion. Considering that the image shows great improvement in relation to the DAS, GSC and EBGSC methods, it is possible either to shorten the processing time or to generate an image with more detail and better quality in the same time as traditional methods. The use of the EBGSC with adaptive filters is relatively new, so further studies of its application to ultrasound signals are needed to improve the quality of the reconstructed image and, consequently, to help in medical diagnosis.

Acknowledgements The authors would like to thank the Brazilian agencies CAPES, CNPq, Fundação Araucária, FINEP and the Ministry of Health for the financial support.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Montaldo G, Tanter M, Bercoff J et al (2009) Coherent plane-wave compounding for very high frame rate ultrasonography and transient elastography. IEEE Trans Ultrason Ferroelectr Freq Control 56:489–506
2. Bercoff J, Montaldo G, Loupas T et al (2011) Ultrafast compound Doppler imaging: providing full blood flow characterization. IEEE Trans Ultrason Ferroelectr Freq Control 58:134–147
3. Capon J (1969) High resolution frequency-wavenumber spectrum analysis. Proc IEEE 57:1408–1418
4. Li J, Chen X, Wang Y et al (2016) Eigenspace-based generalized sidelobe canceler beamforming applied to medical ultrasound imaging. Sensors 16:1192–2003. https://doi.org/10.3390/s16081192
5. Wu S, Zhu Q, Xie Y (2013) Evaluation of various speckle reduction filters on medical ultrasound images. In: 35th annual international conference of the IEEE Engineering in Medicine and Biology Society, Osaka, pp 1148–1151
6. Tasnim T, Shuvo MMH, Hasan S (2017) Study of speckle noise reduction from ultrasound B-mode images using different filtering techniques. In: 4th international conference on advances in electrical engineering, Dhaka, pp 229–234
7. Zimbico AJ, Granado DW, Schneider FK et al (2017) Beam domain adaptive beamforming using generalized sidelobe canceller with coherence factor for medical ultrasound imaging. In: IEEE international ultrasonics symposium, Washington, DC, pp 1–4
8. Neves LC, Zimbico AJ, Maia JM et al (2019) Adaptive beamformer with generalized sidelobe canceler for plane wave ultrasound image. In: XII symposium on biomedical engineering—IX symposium on instrumentation and medical images, Uberlândia, Brazil, pp 1–3
9. Zeng X, Chen C, Wang Y (2012) Eigenspace-based minimum variance beamformer combined with Wiener postfilter for medical ultrasound imaging. Ultrasonics 52:996–1004. https://doi.org/10.1016/j.ultras.2012.07.012
10. Aliabadi S, Wang J, Yu J (2016) Adaptive scaled Wiener postfilter beamformer for ultrasound imaging. In: URSI Asia-Pacific radio science conference, Seoul, South Korea, pp 1449–1452
11. Jensen JA, Svendsen NB (1992) Calculation of pressure fields from arbitrarily shaped, apodized, and excited ultrasound transducers. IEEE Trans Ultrason Ferroelectr Freq Control 39:262–267. https://doi.org/10.1109/58.139123
12. Jensen JA (1996) Field: a program for simulating ultrasound systems. Med Biol Eng Comput 34:351–353
13. The Ultrasound Toolbox. https://www.ustb.no/ius2017-abstract
14. Liebgott H, Rodriguez-Molares A, Cervenansky F et al (2016) Plane-wave imaging challenge in medical ultrasound. In: IEEE international ultrasonics symposium, Tours, France, pp 1–4
15. Applebaum S, Chapman D (1976) Adaptive arrays with main beam constraints. IEEE Trans Antennas Propag 24:650–662
16. Kuan D, Sawchuk A, Strand T et al (1987) Adaptive restoration of images with speckle. IEEE Trans Acoust Speech Signal Process 35:373–383. https://doi.org/10.1109/TASSP.1987.1165131
17. Sivakumar R, Gayathri MK, Nedumaran D (2010) Speckle filtering of ultrasound B-scan images—a comparative study between spatial and diffusion filters. In: IEEE conference on open systems, Kuala Lumpur, Malaysia, pp 80–85

Anatomical Atlas of the Human Head for Electrical Impedance Tomography L. A. Ferreira, R. G. Beraldo, E. D. L. B. Camargo and F. S. Moura

Abstract

Electrical impedance tomography (EIT) is a technique that can be used to estimate the resistivity distribution inside a domain based on surface measurements. This could be useful, for example, in the diagnosis of cerebral strokes. However, a method to acquire EIT images of the head with enough quality to achieve this task is still needed. In this work, an automated method for the calculation of a statistical atlas of the human head is presented, to be used as prior information for the ill-posed inverse problem associated with EIT. Fifty magnetic resonance images of healthy subjects were used for this purpose. Numerical simulations using a realistic head model with hemorrhagic and ischemic stroke were used to evaluate the effect of the atlas. The results show that, when the atlas was used, there was a decrease in the root mean square error of the images obtained. Also, some artifacts observed in the image generated without the use of the atlas were eliminated or diminished. These findings hint at the possibility of using a statistical atlas of the head to improve the quality of EIT images.

Keywords

Anatomical atlas • Electrical impedance tomography • Image processing • Stroke

1

Introduction

Electrical impedance tomography (EIT) is a technique where a group of electrodes is arranged around a domain to electrically stimulate it and measure its response to the stimulus. These measurements are then used to reconstruct a resistivity distribution image of the interior of that domain. The use of this technique has been studied for many medical applications, such as lung monitoring and assessment, brain activity evaluation, breast cancer diagnosis, and the measurement of gastric emptying [1].

In particular, the differentiation between types of stroke is a task that could benefit from the use of EIT. Stroke is a disease that affects the blood supply of the brain and is an important cause of death worldwide [2]. It can be classified into two main types: hemorrhagic, when blood leaks from a blood vessel into the brain; or ischemic, when there is a blockage of the blood flow. Both have similar symptoms, but they demand distinct treatments that, if wrongly administered, may lead to worsening of the condition. Changes in the blood flow provoke impeditivity alterations in the body [3], motivating the hypothesis that, after a stroke, it may be possible to detect a decreased (when hemorrhagic) or an increased (when ischemic) local resistivity in EIT head images.

Currently, it is advised to diagnose stroke using computerized tomography (CT) or magnetic resonance (MR) imaging [4]. Compared to these modalities, EIT has advantages such as its lower cost; its portability, allowing the equipment to be conveyed to the patient or even installed in an ambulance; and the absence of damage associated with its use, enabling continuous monitoring. Therefore, faster diagnosis and treatment would be achievable, which would improve the outcome of the disease [5]. However, at the time of this writing, no studies were found claiming to obtain EIT images of the human head that are good enough to accomplish this task. The main reason for this difficulty is the ill-posedness of the inverse problem associated with image reconstruction from the electrode measurements. In practice, this implies that using only (noisy) measurements is not enough to generate a satisfactory image.

L. A. Ferreira (B) · R. G. Beraldo · E. D. L. B. Camargo · F. S. Moura
Engineering, Modeling and Applied Social Sciences Centre, Biomedical Engineering, Federal University of ABC, Alameda da Universidade s/n, São Bernardo Do Campo, 09606-045, Brazil
e-mail: [email protected]
One way to resolve that is by adding a priori information about the solution, using a regularization method, in order to restrict the search space of the solution [6]. One possibility is to use a statistical anatomical atlas, which consists of a probability density function that describes

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_247




the distribution of resistivity inside the body of a population [7]. This is done so that the image reconstruction algorithm tends toward a result consistent with what is statistically expected. In Camargo [8], CT-scan swine chest images were used to build a statistical atlas to be utilized in the EIT regularization method to image the chest. The author showed that the resulting images were closer to what was expected. This improvement was also observed in the presence of pathological conditions, such as atelectasis and pneumothorax, even though the atlas was constructed using only healthy swine subjects. On this basis, the present study aims at the construction of a statistical atlas of the human head to be used in EIT as prior information. At the time of this writing, no other studies that explored the use of this method for EIT images of the head were found.
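The stabilizing effect of prior information on an ill-posed problem can be seen in a small numerical sketch (illustrative only; the 2 × 2 model, noise level and prior guess below are invented, not taken from the paper): a nearly rank-deficient least-squares problem whose naive solution is destroyed by noise is tamed by a Tikhonov penalty pulling the solution toward a prior guess.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-conditioned observation model v = J rho + noise (stand-in for EIT).
J = np.array([[1.0, 1.0],
              [1.0, 1.0001]])          # nearly rank-deficient Jacobian
rho_true = np.array([1.0, 2.0])
v = J @ rho_true + 1e-3 * rng.standard_normal(2)

# Naive least squares amplifies the measurement noise along the
# poorly observed direction.
rho_naive = np.linalg.solve(J.T @ J, J.T @ v)

# Tikhonov: minimize ||v - J rho||^2 + lam * ||rho - rho_prior||^2,
# where rho_prior plays the role of statistically expected values.
lam = 1e-3
rho_prior = np.array([1.2, 1.8])      # hypothetical prior (e.g. an atlas mean)
rho_tik = np.linalg.solve(J.T @ J + lam * np.eye(2),
                          J.T @ v + lam * rho_prior)

err_naive = np.linalg.norm(rho_naive - rho_true)
err_tik = np.linalg.norm(rho_tik - rho_true)
```

With the penalty, the component of the solution that the data cannot determine is filled in by the prior instead of by amplified noise, which is the role the anatomical atlas plays in this work.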

2

Materials and Methods

The atlas was constructed from a group of 3D MR images of the human head. The main image processing steps applied were spatial normalization, segmentation, and transformation into resistivity images. The computational implementation of these methods was made using Python 3.7. A detailed description of each step is presented in the following subsections.

2.1

MR Images

Fifty 3D MR images of the head of healthy people were used. They were made available by the CASILab at the University of North Carolina at Chapel Hill, and distributed by the MIDAS Data Server at Kitware, Inc.1 An example of how to use this database can be found in Bullitt et al. [9]. The chosen images were obtained using the Fast Low Angle Shot (FLASH) sequence, with a resolution of 1 mm × 1 mm × 1 mm, in a 3 T scanner. The mean age of the participants was 40 ± 15 years, with an equal number of female and male subjects.

2.2

Spatial Normalization

The spatial normalization was accomplished using the implementation of the software Advanced Normalization Tools (ANTs) available in Nipype,2 an open-source Python package that provides a uniform interface to access different existing neuroimaging software [10]. The function RegistrationSynQuick() was used with default parameters, except for the spatial transformation applied, which was chosen as a rigid transformation, for initial alignment, followed by an affine transformation. This function uses mutual information as its similarity measure, and the gradient descent method combined with a multiresolution strategy to perform the optimization. Further details about this software can be found in Avants et al. [11]. Each image was normalized using as reference the ICBM 2009a nonlinear symmetric atlas [12] with 1 mm × 1 mm × 1 mm resolution.

1 Available at: https://www.insight-journal.org/midas/community/view/21.
2 Available at: https://nipype.readthedocs.io/en/latest/.
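The mutual-information similarity measure mentioned above can be illustrated with a minimal joint-histogram estimate (a generic sketch, not the ANTs implementation; the image sizes and bin count are arbitrary choices for the example):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
img = rng.random((64, 64))
# An image is maximally informative about itself...
mi_self = mutual_information(img, img)
# ...and carries little information about unrelated noise.
mi_noise = mutual_information(img, rng.random((64, 64)))
```

A registration driven by this measure moves the moving image until its intensities become maximally predictive of the fixed image's intensities.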

2.3

Segmentation

The images were segmented into five classes: white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), skull and soft tissue. To accomplish that, the implementation of the software Statistical Parametric Mapping (SPM) available in Nipype was used. The function NewSegment() was employed with its default parameters. This function combines three distinct techniques in a single model: (i) the representation of the MR image intensities using a Gaussian mixture model, (ii) a method for bias field correction and (iii) the spatial normalization of tissue probability maps in order to transfer the probabilities to the target image. The parameters related to these three steps are then optimized in alternation to achieve the final result. Further details about this software can be found in Ashburner and Friston [13].

The outcome of the function is the probability of each voxel belonging to each class. The final segmentation is obtained by assigning to every voxel its highest-probability class. In the event of a tie, the voxel was initially left as background, so its assignment could be done in the next image processing steps.

Three repair stages were performed to correct potential faults of the segmentation. The first one consisted of filling holes in the coronal, transversal and sagittal planes. Each hole voxel was assigned to the nearest class in its neighborhood. In case of a draw, the most frequent class at the minimum distance found was chosen; if the tie persisted, the class was chosen randomly among the tied options. The second stage was the connectivity analysis of the voxels that were assigned to some tissue. Only the largest connected group, considering a 3D 6-connected neighborhood [14], was kept in the image. This step was applied to remove any artifact regions that were not connected to the head. The last stage consisted of applying four iterations of a morphological opening operation [15] on the binary mask of the head.
The structuring element used was a central voxel with its 3D 6-connected neighborhood. This step was implemented to smooth the border of the head.
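The connectivity-analysis repair stage (keeping only the largest 3D 6-connected group of tissue voxels) can be sketched as a flood fill; this is a pure-Python illustration on a tiny mask, not the code used in the study:

```python
from collections import deque

def largest_6connected(mask):
    """Keep only the largest 6-connected component of a 3D binary mask.

    mask: nested lists mask[z][y][x] of 0/1. Returns a mask of the same
    shape containing only the largest connected group of 1-voxels.
    """
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])
    seen, best = set(), set()
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if mask[z][y][x] and (z, y, x) not in seen:
                    # BFS flood fill over the 6-neighborhood (faces only).
                    comp, queue = set(), deque([(z, y, x)])
                    seen.add((z, y, x))
                    while queue:
                        cz, cy, cx = queue.popleft()
                        comp.add((cz, cy, cx))
                        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                            n = (cz + dz, cy + dy, cx + dx)
                            if (0 <= n[0] < nz and 0 <= n[1] < ny
                                    and 0 <= n[2] < nx
                                    and mask[n[0]][n[1]][n[2]]
                                    and n not in seen):
                                seen.add(n)
                                queue.append(n)
                    if len(comp) > len(best):
                        best = comp
    return [[[1 if (z, y, x) in best else 0 for x in range(nx)]
             for y in range(ny)] for z in range(nz)]

# A 3x3x3 example: a 2-voxel "head" region and an isolated artifact voxel.
m = [[[0] * 3 for _ in range(3)] for _ in range(3)]
m[0][0][0] = m[0][0][1] = 1      # connected pair (kept)
m[2][2][2] = 1                   # isolated artifact (removed)
clean = largest_6connected(m)
```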





Table 1 Resistivity values of the human head (Ω m)

Tissue         Mean    Standard deviation
Gray matter    3.71    0.271
White matter   5.11    0.471
CSF            0.57    0.085
Skull          71.43   24.439
Soft tissue    2.78    0.494

Values based on McCann et al. [16]

2.4

Resistivity Images

The labels of the segmented images were replaced by resistivity values based on information obtained from McCann et al. [16], where the authors performed a meta-analysis of papers that reported conductivity measures of the human head tissues. The selected values were acquired from graphs presented throughout the aforementioned study. The median values from the graphs were used as the mean values in the present article, while the interquartile ranges were adopted as the standard deviations. These values, converted to resistivity, are presented in Table 1.

2.5

Statistical Atlas

The assumption that the resistivity of the head is described by a multivariate Gaussian distribution was adopted, as presented in (1):

\pi(x) \propto \exp\left( -\tfrac{1}{2} (x - \bar{x})^T \Sigma^{-1} (x - \bar{x}) \right) \propto \exp\left( -\tfrac{1}{2} \| x - \bar{x} \|^2_{\Sigma^{-1}} \right)   (1)

In order to make the estimation of the covariance matrix achievable with the available computational resources, downsampling was applied. The resolution of the image was decreased by a factor of 9 in the cranial direction and by a factor of 5 in the other two. A cubic spline [17] was used for the interpolation, and values outside the border of the image were considered as zero. The statistical measures were calculated only inside the mean contour of the head. This area was determined as the region where at least 75% of the voxels were labeled as some tissue, considering the superposition of the images of all subjects. Before the calculation, background voxels inside this region were replaced by the soft tissue resistivity in all images.

Given the image of the j-th subject, segmented into N_t tissues, the characteristic function \chi_{t,j} of a tissue t localized on the region \Omega_t is defined as

\chi_{t,j}(x) = \begin{cases} 1 & \text{if } x \in \Omega_t \\ 0 & \text{otherwise}, \end{cases}   (2)

and a sample of the resistivity distribution for this subject can be written in matrix form as

x_{j,s} = \sum_{t=1}^{N_t} \chi_{t,j}\, \rho_{t,s} = [\, \chi_{1,j} \cdots \chi_{N_t,j} \,] \begin{bmatrix} \rho_{1,s} \\ \vdots \\ \rho_{N_t,s} \end{bmatrix} = X_j \rho_s ,   (3)

where \rho_{t,s} is a sample of the resistivity of the t-th tissue, \rho_s is a vector comprising the samples of all tissues and X_j is a matrix composed by stacking the characteristic functions \chi_{t,j} column-wise.

Considering that there exist N_s samples for each subject, it is possible to demonstrate that the mean value of the images of N_p subjects with those samples can be calculated as

\bar{x} = \frac{1}{N_s N_p} \sum_{s=1}^{N_s} \sum_{j=1}^{N_p} x_{j,s} = \frac{1}{N_p} \sum_{j=1}^{N_p} \bar{x}_j ,   (4)

where \bar{x}_j is the image with the mean resistivity values of the samples of each tissue. The covariance matrix can be calculated as

\Sigma = \frac{1}{N_s N_p - 1} \sum_{s=1}^{N_s} \sum_{j=1}^{N_p} (x_{j,s} - \bar{x})(x_{j,s} - \bar{x})^T .   (5)

Considering that N_s \to \infty, it is possible to demonstrate from (5) that

\Sigma = \frac{1}{N_p} \sum_{j=1}^{N_p} \left( X_j \Sigma_\rho X_j^T + \Delta\bar{x}_j \Delta\bar{x}_j^T \right),   (6)

where \Sigma_\rho is the covariance matrix of the resistivity samples and \Delta\bar{x}_j = \bar{x}_j - \bar{x}. Further details of the demonstration of these equations can be found in Camargo [8].

In this work, the atlas was estimated using 50 subjects, provided that the mean and covariance matrix of \rho are known. The covariance matrix of the tissues \Sigma_\rho was constructed as a diagonal matrix, with the variance of each tissue calculated from the standard deviations presented in Table 1. Since the probability density function in (1) requires the inverse of \Sigma, and this matrix is invertible only when the number of subjects is very large, an identity matrix multiplied by a small scalar parameter \beta was added to \Sigma before the inversion, as shown in (7). This is equivalent to adding a small amount of uncorrelated white noise to the atlas.

\Sigma^{-1} = (\Sigma + \beta I)^{-1} .   (7)
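Equations (4), (6) and (7) can be exercised on a toy example (the tissue means and standard deviations echo Table 1, but the image size, number of subjects and voxel labels are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

Nt, Nv, Np_subj = 3, 5, 4                    # tissues, voxels, subjects
std = np.array([0.1, 0.2, 0.05])             # hypothetical tissue std devs
Sigma_rho = np.diag(std ** 2)                # tissue covariance (diagonal)
rho_mean = np.array([3.71, 5.11, 0.57])      # mean tissue resistivities

# X_j stacks the characteristic functions column-wise: X_j[v, t] = 1 if
# voxel v of subject j belongs to tissue t (random labels here).
labels = rng.integers(0, Nt, size=(Np_subj, Nv))
X = np.stack([np.eye(Nt)[lab] for lab in labels])   # shape (Np, Nv, Nt)

# Eq. (4): atlas mean as the average of the per-subject mean images.
xbar_j = X @ rho_mean                        # per-subject mean image (Np, Nv)
xbar = xbar_j.mean(axis=0)

# Eq. (6): covariance in the limit of infinitely many samples per subject.
Sigma = np.zeros((Nv, Nv))
for j in range(Np_subj):
    dx = xbar_j[j] - xbar
    Sigma += X[j] @ Sigma_rho @ X[j].T + np.outer(dx, dx)
Sigma /= Np_subj

# Eq. (7): regularized inverse, since Sigma is rank deficient when the
# number of subjects is small.
beta = 1e-5
Sigma_inv = np.linalg.inv(Sigma + beta * np.eye(Nv))
```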

2.6

Mesh Development and Forward Problem

A three-layer numerical phantom of a human head consisting of scalp, skull and brain was generated based on a pediatric head model data set available online [18]. For each tissue, 3D surfaces were generated and stereolithography files were exported using Matlab. Subsequently they were cleaned and refined using the software Blender, before two layers of 16 electrodes each were added to the scalp of the model. Three meshes were developed using the Gmsh software [19] to discretize the domain into tetrahedra: a refined mesh for the forward problem (118 thousand elements, 24 thousand nodes); a refined, but different, mesh for the approximation error calculation (119 thousand elements, 24 thousand nodes); and a coarser mesh for the inverse problem (24 thousand elements, 6 thousand nodes).

To simulate the measurement of a single patient, scalp and skull resistivities were set according to Table 1. Brain resistivities were projected from the image of a subject (from the same database, but one that was not included in the atlas construction) that was processed in the same way as the other images in this study. The pathological states (strokes) were simulated as spheres with 2 cm radius located at the posterior region of the brain. Their resistivity was set to 11.0 Ω m in the case of ischemic stroke and 1.4 Ω m in the case of hemorrhagic stroke [20].

The forward problem, i.e., calculating the electrical potentials in the domain given its electrical resistivities \rho and an electrical current c imposed at the electrodes, was solved by means of the Finite Element Method as

v_c(c, \rho) = K^{-1}(\rho)\, c_{\partial\Omega} ,   (8)

where K(\rho) is the global conductivity matrix, v_c(c, \rho) is the nodal voltage vector, c_{\partial\Omega} is the imposed electrical current at the electrodes located at the surface \partial\Omega and \rho is the element resistivity vector.

The current was injected considering a pair-wise skip-m current injection pattern. Since small numbers of skipped electrodes result in poor observability of the interior region, the skip-8 pattern was chosen [21,22]. Simulating 32 electrodes in two layers of 16 electrodes, the skip-8 pattern allows the pair of electrodes to be on opposite sides of the head, encouraging the current to cross the skull. Single-ended voltage measurements were taken considering the ground as the node closest to the geometric center of the brain.

2.7

EIT Inverse Problem

The inverse problem, i.e., calculating the resistivities given the measured voltages and the imposed current, was solved employing a Gauss-Newton iterative algorithm [23]. The inverse problem was solved considering the complete electrode model [24], and generalized Tikhonov regularization was used as prior information. The solution \rho_s can be seen as an optimization problem

\rho_s = \arg\min_{\rho} \left\{ \| v_m - v_c + \varepsilon \|_2^2 + \sum_i \lambda_i \| L_i (\rho - \rho_i^*) \|_2^2 \right\},   (9)

where the sum comprises Tikhonov regularization terms. In this equation, \lambda_i are regularization parameters, L_i are regularization matrices, \rho_i^* are fixed vectors used as initial resistivity guesses, and \varepsilon is the average vector of the approximation error model, used to mitigate observation model mismodeling [7,25]. Two regularization terms were added: a Gaussian (spatial) high pass filter and the anatomical atlas. The atlas prior information can be promptly written in the form of a Tikhonov regularization term by taking the logarithm of (1), while the high pass filter can be constructed in the form of a matrix.

The solution of the optimization problem can be computed by iterating the following equations:

M_k = J_k^T J_k + \lambda_1 W_1 + \lambda_2 W_2   (10)

w_k = v_m - v_c(\rho_k) + \varepsilon   (11)

\rho_{k+1} = \rho_k + \alpha M_k^{-1} \left[ J_k^T w_k - \sum_i \lambda_i W_i (\rho_k - \rho_i^*) \right],   (12)

where W_i = L_i^T L_i, the matrix J_k is the Jacobian of the observation model (8) around \rho_k, and \alpha is a relaxation factor.
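The iteration (10)–(12) can be sketched on a toy linear observation model, for which the Jacobian is constant (all matrices, priors and regularization weights below are invented for illustration; this is not the EIT observation model):

```python
import numpy as np

rng = np.random.default_rng(3)

n = 6                                   # number of resistivity unknowns
A = rng.random((8, n))                  # toy linear observation model
rho_true = 1.0 + rng.random(n)
v_m = A @ rho_true                      # noiseless "measured" voltages
eps = np.zeros(8)                       # approximation-error mean (zero here)

# Two regularization terms with identity matrices as simple stand-ins
# for the high pass filter and the atlas term.
L1, L2 = np.eye(n), np.eye(n)
W1, W2 = L1.T @ L1, L2.T @ L2           # W_i = L_i^T L_i
lam1, lam2 = 1e-6, 1e-6
rho_star = [np.full(n, 1.5), np.full(n, 1.5)]   # priors / initial guesses
lams, Ws = [lam1, lam2], [W1, W2]

rho = rho_star[0].copy()
alpha = 1.0                             # relaxation factor
for _ in range(50):
    J = A                               # Jacobian of a linear model
    M = J.T @ J + lam1 * W1 + lam2 * W2               # Eq. (10)
    w = v_m - A @ rho + eps                           # Eq. (11)
    grad = J.T @ w - sum(l * W @ (rho - r)            # bracket of Eq. (12)
                         for l, W, r in zip(lams, Ws, rho_star))
    rho = rho + alpha * np.linalg.solve(M, grad)      # Eq. (12)

err = np.linalg.norm(rho - rho_true)
```

For this quadratic toy problem a single full step reaches the regularized optimum; in the nonlinear EIT problem the Jacobian must be recomputed around each iterate and a relaxation factor below one is typically used.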

3

Results

Figure 1 shows representative examples of images obtained throughout the stages performed for the construction of the atlas. An MR image after spatial normalization is presented in Fig. 1a. Figure 1b shows the final result of the segmentation of the image into gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), skull (SKU), soft tissue (TIS) and background (BG). Figure 1c presents the result obtained after replacing the segmentation labels with resistivity values and downsampling the image. Figure 1d shows the mean image calculated for the atlas using all fifty subjects.

Fig. 1 Images at different processing stages. a Transverse slice of a subject after spatial normalization. b Image after segmentation. c Resistivity image after downsampling. d Slice of the mean image of the resulting atlas using all subjects

Images were reconstructed using a fixed high pass filter regularization parameter λ1 = 8 × 10−8 in all cases, while λ2, the regularization parameter related to the anatomical atlas, ranged from 3.2 × 10−19 to 3.2 × 10−9. Other values set in every reconstruction were the initial guesses ρ1* and ρ2*, both taken as the average of the atlas, β = 10−5, imposed current c = 1 mA and relaxation factor α = 0.08. The constants were determined empirically from previous experiments as adequate to achieve acceptable results.

Figure 2 shows the root mean square error (RMSE) between the expected results and the ones obtained by varying the atlas regularization parameter in both the ischemic and hemorrhagic cases. Expected images were obtained by projecting the simulated resistivities onto the coarse mesh used to solve the inverse problem. When using only the Gaussian high pass filter for the regularization, the RMSE obtained were 1.106 and 0.916 Ω m for the ischemic and hemorrhagic cases, respectively.

Fig. 2 Root mean square error of the computed images using different regularization parameters for the atlas

Figure 3 shows the resulting images for both the hemorrhagic (upper row) and the ischemic (bottom row) cases. This figure shows the expected results (left), the solution using only the high pass filter (right) and the solution using both high pass and atlas (center) for λ2 = 3.2 × 10−14 (hemorrhagic) and λ2 = 3.2 × 10−15 (ischemic).

Fig. 3 Comparison between slices of the expected and the obtained results. a–c regard the hemorrhagic case, whereas d–f regard the ischemic case. a, d are the expected results. b, e are from the computed images using the atlas with the best regularization parameter found. c, f are from the computed images using only the Gaussian high pass filter

4

Discussion and Conclusion

The results show that it is possible to compute an estimate of the anatomical atlas of the human head through the presented methods. Using the atlas as prior information resulted in a decreased RMSE for both the hemorrhagic and the ischemic cases in the simulations. As shown in Fig. 2, this effect is achieved when an optimal parameter is selected, such that the atlas is relevant enough to affect the reconstruction but not so strong that the algorithm disregards the voltage measurements. Based on Fig. 2, the best value considering both cases would be around 10−14. The ischemic case presented higher RMSE values than the hemorrhagic one, possibly because the ischemic resistivity is more distant from the normal resistivities of the brain. Nonetheless, the computed images of both cases presented an abnormal region of increased or decreased resistivity close to where it was expected. Using only the high pass filter, other abnormal regions appeared, consisting of reconstruction artifacts. As shown in Fig. 3, the use of the atlas eliminated or attenuated some of those artifacts, indicating the benefits of this method for the improvement of image quality. This result hints at the possibility of using statistical atlases of the head to improve EIT images.

Despite the positive findings, the present study is only an initial exploration of the real benefits of the use of an anatomical atlas for the generation of EIT images of the head. Some limitations of this study include: computing the image only in the brain region, instead of in the whole head; using voltage measurements without adding noise from the electronics and calibration; and using a relatively small number of head images to compose the atlas.

Acknowledgements The MR brain images from healthy volunteers used in this paper were collected and made available by the CASILab at The University of North Carolina at Chapel Hill and were distributed by the MIDAS Data Server at Kitware, Inc. The authors gratefully acknowledge funding from the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES)—Finance Code 001 and The São Paulo Research Foundation (FAPESP), processes 2017/18378-0 and 2019/09154-7.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Holder DS (2005) Electrical impedance tomography: methods, history and applications. IOP Publishing, Cornwall
2. Wang H, Naghavi M, Allen C et al (2016) Global, regional, and national life expectancy, all-cause mortality, and cause-specific mortality for 249 causes of death, 1980–2015: a systematic analysis for the global burden of disease study 2015. Lancet 388:1459–1544
3. Solà J, Adler A, Santos A, Tusman G, Sipmann FS, Bohm SH (2011) Non-invasive monitoring of central blood pressure by electrical impedance tomography: first experimental evidence. Med Biol Eng Comput 49:409
4. Yew KS, Cheng E (2009) Acute stroke diagnosis. Am Fam Physician 80:33
5. Hacke W, Donnan G, Fieschi C et al (2004) Association of outcome with early stroke treatment: pooled analysis of ATLANTIS, ECASS, and NINDS rt-PA stroke trials. Lancet 363:768–774
6. Vauhkonen M, Vadasz D, Karjalainen PA, Somersalo E, Kaipio JP (1998) Tikhonov regularization and prior information in electrical impedance tomography. IEEE Trans Med Imaging 17:285–293
7. Kaipio J, Somersalo E (2006) Statistical and computational inverse problems. Springer Science & Business Media
8. Camargo EDLB (2013) Desenvolvimento de algoritmo de imagens absolutas de tomografia por impedância elétrica para uso clínico. Ph.D. thesis. Universidade de São Paulo
9. Bullitt E, Zeng D, Gerig G et al (2005) Vessel tortuosity and brain tumor malignancy: a blinded study. Acad Radiol 12:1232–1240
10. Gorgolewski K, Burns CD, Madison C et al (2011) Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python. Front Neuroinform 5:13
11. Avants BB, Tustison NJ, Song G, Gee JC (2009) ANTs: open-source tools for normalization and neuroanatomy. HeanetIe 10:1–11
12. Fonov VS, Evans AC, McKinstry RC, Almli CR, Collins DL (2009) Unbiased nonlinear average age-appropriate brain templates from birth to adulthood. NeuroImage S102
13. Ashburner J, Friston KJ (2005) Unified segmentation. NeuroImage 26:839–851
14. Borgefors G, Nyström I, Baja GS (1997) Connected components in 3D neighbourhoods. In: Proceedings of the Scandinavian conference on image analysis, pp 567–572
15. Raid AM, Khedr WM, El-Dosuky MA, Aoud M (2014) Image restoration based on morphological operations. Int J Comput Sci Eng Inf Technol (IJCSEIT) 4:9–21
16. McCann H, Pisano G, Beltrachini L (2019) Variation in reported human head tissue electrical conductivity values. Brain Topogr 1–34
17. McKinley S, Levine M (1998) Cubic spline interpolation. Coll Redwoods 45:1049–1060
18. Hammond D, Price N, Turovets S (2017) Construction and segmentation of pediatric head tissue atlases for electrical head modeling. OHBM, Vancouver
19. Geuzaine C, Remacle JF (2009) Gmsh: a 3-D finite element mesh generator with built-in pre- and post-processing facilities. Int J Numer Methods Eng 79:1309–1331
20. Horesh L (2006) Some novel approaches in modelling and image reconstruction for multi-frequency electrical impedance tomography of the human brain. Ph.D. thesis. Department of Medical Physics, University College, London
21. Silva OL, Lima RG, Martins TC, Moura FS, Tavares RS, Tsuzuki MSG (2017) Influence of current injection pattern and electric potential measurement strategies in electrical impedance tomography. Control Eng Pract 58:276–286
22. Beraldo RG (2019) Desenvolvimento de um modelo dinâmico da circulação cerebral para tomografia por impedância elétrica. Master's thesis. Universidade Federal do ABC
23. Vauhkonen PJ (2004) Image reconstruction in three-dimensional electrical impedance tomography. Ph.D. thesis. University of Kuopio
24. Cheng KS, Isaacson D, Newell JC, Gisser DG (1989) Electrode models for electric current computed tomography. IEEE Trans Biomed Eng 36:918–924
25. Moura FS (2013) Estimação não linear de estado através do unscented kalman filter na tomografia por impedância elétrica. Ph.D. thesis. Universidade de São Paulo

Sparse Arrays Method with Generalized Sidelobe Canceler Beamformer for Improved Contrast and Resolution in Ultrasound Ultrafast Imaging D. F. Gomes, J. M. Maia, A. J. Zimbico, A. A. Assef, L. C. Neves, F. K. Schneider, and E. T. Costa

Abstract

The new ultrasound systems (US) based on the transmission of plane waves are able to generate videos with elevated frame rates when compared to traditional techniques, allowing sophisticated exams. However, they generate a large amount of data to be processed. Adaptive beamformer techniques are capable of reconstructing images with elevated resolution, but with complex implementation and high processing demand. In this work, the use of a plane wave-based adaptive technique called Generalized Sidelobe Canceler (GSC) is proposed. To evaluate the efficiency of this method as the number of active transducer elements on reception is decreased, with a consequent decrease in the amount of data generated, sparse arrays are considered. The evaluation of the proposed methods and comparison with the traditional Delay and Sum (DAS) method was performed using a simulated data set as well as in vivo collected data. The performance evaluation metrics were the Full Width at Half Maximum (FWHM), to check the lateral/axial resolutions, the contrast ratio (CR) and the geometric distortion ratio (GDR). The results showed that the images generated by the proposed method, with a reduced number of active elements during reception, were close to those provided by DAS in terms of spatial resolution and GDR, with the 65-element GSC method presenting GDR better than the DAS with 128 elements, indicating that the GSC method combined with the proposed sparse arrays technique is suitable for imaging in B mode.

D. F. Gomes (&) · J. M. Maia · A. A. Assef · L. C. Neves · F. K. Schneider
Electrical and Computer Engineering (CPGEI), Federal University of Technology—Parana (UTFPR), Av. Sete de Setembro, 3165 - Rebouças, Curitiba, Paraná, Brazil

A. J. Zimbico
Electrical Engineering Department (DEEL), Eduardo Mondlane University (UEM), Maputo, Mozambique

E. T. Costa
Biomedical Engineering Department (DEB/FEEC), State University of Campinas (UNICAMP), Campinas, Brazil

Keywords

Ultrasound imaging • Adaptive beamformer • Generalized sidelobe canceler • Plane wave • Sparse arrays

1

Introduction

With the advent of ultrafast ultrasound imaging techniques using plane waves, frame rates of 10,000 frames per second (fps) can be achieved, a value well above the rates achieved by traditional dynamic focusing techniques, which are limited to around 40 fps for the transducer frequencies and depths of investigated tissue of clinical interest [1, 2]. The high temporal resolution allows the rapid visualization of responses of biological tissues that cannot be observed and analyzed by a conventional ultrasound system, enabling the visualization of shear waves and, consequently, the evaluation of the elastic properties of biological tissues [1–3].

Due to its easy implementation and low computational cost, one of the most used methods in the formation of ultrasound images is the Delay-And-Sum (DAS) algorithm, which is a non-adaptive beamforming technique, as it applies a fixed weight function to the sum of the received signal matrix data [4]. When used with plane waves, the coherent plane wave compounding (CPWC) technique has a high frame update rate, as it is physically limited only by the transit time of the ultrasound signal in the medium. On the other hand, this method generally produces low-quality images, requiring the coherent compounding of images from several plane wave transmissions at different angles [1–4].

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_248



D. F. Gomes et al.

Thus, with the use of this technique, there is an inherent trade-off between the frame rate and the quality of the image or composite video: obtaining high-quality images requires numerous transmitted waves and consequently decreases the frame rate, which makes it difficult to assess the elastic properties of biological tissues [1]. Algorithms based on adaptive beamformers, whose weight functions depend on the input data, are an interesting alternative to improve the resolution and contrast of images reconstructed from plane waves, as they have the ability to preserve the central lobe while reducing the side lobes. Adaptive techniques such as Minimum Variance (MV) and Generalized Sidelobe Canceler (GSC) are very efficient at improving image quality, but have high implementation complexity and high computational cost [5–7].

In order to reduce the amount of data generated and the complexity of the US equipment, sparse array techniques are employed, which consist in deactivating part of the transducer elements. However, image quality is also linked to the number of transducer elements used, and its decrease implies an increase in the lateral lobes and a consequent reduction in contrast and resolution [8, 9]. The computational cost of beamforming algorithms increases with the amount of data processed, and the hardware complexity increases with the number of reception channels and the number of active elements in the transducer. Thus, in the present work, the GSC adaptive beamforming technique was combined with sparse arrays in order to evaluate its efficiency as the number of the transducer's active elements in reception, and consequently the amount of data generated, decreases.

2 Sidelobe Canceler Beamformer

The output $z(k)$ of an adaptive beamformer is obtained by (1), applying a set of weights $\mathbf{w}$ to the received signals, where $k$ is the discrete time index, $\mathbf{x}(k) = [x_1(k), x_2(k), \ldots, x_N(k)]^T$ is the time-delayed version of the data array with $N$ transducer elements, $\mathbf{w}(k) = [w_1(k), w_2(k), \ldots, w_N(k)]^T$ is a vector of weights, and $(\cdot)^H$ and $(\cdot)^T$ represent the Hermitian (conjugate transpose) and transpose, respectively:

$$z(k) = \mathbf{w}^H(k)\,\mathbf{x}(k) \qquad (1)$$

The weights of the GSC beamformer are found by minimizing the output power of the beamformer, similarly to what Capon [6] proposed for the MV method; for the GSC, however, the weights are obtained through an unconstrained optimization problem and result in two orthogonal components.

The optimized weight vector $\mathbf{w}_{GSC}$ is composed of an adaptive term $\mathbf{w}_a$ and a non-adaptive term $\mathbf{w}_q$, and can be expressed by (2), (3) and (4), where $\mathbf{a}$ is the unit steering vector representing the direction of the desired signal, $R$ is the data covariance matrix (CM) and $B$ is a blocking matrix that satisfies the condition $B^H \mathbf{a} = 0$. In this paper, the $B$ matrix used was (5), as proposed by Li et al. [7]:

$$\mathbf{w}_{GSC} = \mathbf{w}_q - B\,\mathbf{w}_a \qquad (2)$$

$$\mathbf{w}_q = \mathbf{a}\,(\mathbf{a}^H \mathbf{a})^{-1} \qquad (3)$$

$$\mathbf{w}_a = (B^H R B)^{-1} B^H R\,\mathbf{w}_q \qquad (4)$$

$$B = \begin{bmatrix}
1 & 0 & \cdots & 0 \\
-1 & 1 & \cdots & 0 \\
0 & -1 & \ddots & \vdots \\
\vdots & \vdots & \ddots & 1 \\
0 & 0 & \cdots & -1
\end{bmatrix}_{N \times (N-1)} \qquad (5)$$
The covariance matrix R is difficult to obtain in practical applications, because the transmitted ultrasound pulses are nonstationary and result in a time-varying matrix [10]. Therefore, it is estimated using the subarray averaging technique to decorrelate the coherence between input echo signals [5] and the diagonal loading technique is applied to correct the instability as in [10].
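A minimal NumPy sketch of the GSC weight computation of Eqs. (2)–(4), assuming a steering vector of ones (delayed data) and a simple difference blocking matrix; in practice R would be the subarray-averaged, diagonally loaded estimate described above, and the names here are our own:

```python
import numpy as np

def blocking_matrix(N):
    """N x (N-1) difference matrix B whose columns are orthogonal to ones(N)."""
    B = np.zeros((N, N - 1))
    i = np.arange(N - 1)
    B[i, i] = 1.0
    B[i + 1, i] = -1.0
    return B

def gsc_weights(R, a):
    """GSC weights: quiescent term minus the blocked adaptive term."""
    B = blocking_matrix(a.size)
    w_q = a / (a.conj() @ a)                     # Eq. (3): quiescent weights
    w_a = np.linalg.solve(B.conj().T @ R @ B,
                          B.conj().T @ R @ w_q)  # Eq. (4): adaptive weights
    return w_q - B @ w_a                         # Eq. (2): combined weights
```

Because B^H a = 0, the beamformer response in the steering direction, w^H a, stays equal to one (distortionless) whatever R is; only the sidelobe content is adapted.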

3 Sparse Arrays Method

In this method, the sparsity is applied on reception, since in transmission all transducer elements are used for the generation of the plane waves. Thus, some of the reception elements are disabled, generating a sparse reception matrix. Null data is replaced by synthetic values obtained from linear interpolation using (6) [11]. The synthetic element $rf_{i,j}$, located in row $i$ and column $j$ at position $x_{i,j}$, is calculated from its neighboring elements in the adjacent columns, $rf_{i,j+1}$ being the element on the right and $rf_{i,j-1}$ the element on the left:

$$rf_{i,j} = rf_{i,j-1} + \left(rf_{i,j+1} - rf_{i,j-1}\right) \frac{x_{i,j} - x_{i,j-1}}{x_{i,j+1} - x_{i,j-1}} \qquad (6)$$

As an example of building a sparse reception matrix, consider a transducer array of N = 128 elements with only 65 active elements in reception. If only the elements at odd positions j are enabled, all even elements are disabled, with the exception of the last one (j = 128), and their respective values are calculated synthetically by linear interpolation using Eq. (6). In this case, the last

Sparse Arrays Method with Generalized Sidelobe Canceler …

element is active in order to keep the symmetry at the extreme positions of the received data matrix, as in [11, 12]. In the present work, configurations of 65, 44 and 23 sparse elements were used with a 128-element transducer. In each configuration, the sparsity was distributed symmetrically, with a constant distance between the active elements, with the exception of the last element, as proposed by Schiefler et al. [11].
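The interpolation of Eq. (6) can be sketched as below; the function name is ours, and it assumes (as in the configurations above) that the first and last elements are always active:

```python
import numpy as np

def fill_sparse_columns(rf, active):
    """Replace disabled receive channels by linear interpolation (Eq. 6).

    rf: (samples, N) RF data with null columns for the disabled elements.
    active: sorted indices of the enabled elements (must include 0 and N-1).
    """
    out = rf.copy()
    x = np.arange(rf.shape[1], dtype=float)   # element positions in pitch units
    for j in range(rf.shape[1]):
        if j in active:
            continue
        jl = max(i for i in active if i < j)  # nearest active element on the left
        jr = min(i for i in active if i > j)  # nearest active element on the right
        w = (x[j] - x[jl]) / (x[jr] - x[jl])
        out[:, j] = out[:, jl] + w * (out[:, jr] - out[:, jl])
    return out
```

In the odd/even 65-element configuration the nearest active neighbors are exactly the adjacent columns j−1 and j+1, so this nearest-active search reproduces Eq. (6) directly.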

Fig. 1 Beamformed responses for plane wave excitation using the 128-element linear array transducer with 7 angles (−16°, −10.7°, −5°, 0°, +5°, +10.7°, +15°) and reception with the DAS technique with 128 active elements (a), (f) and (k); GSC with 128 active elements (GSC-128) (b), (g) and (l); GSC with 65 active elements (GSC-65) (c), (h) and (m); GSC with 44 active elements (GSC-44) (d), (i) and (n); GSC with 23 active elements (GSC-23) (e), (j) and (o). The first line of


4 Results and Discussion

The B-mode images based on plane waves, generated with the traditional (DAS) or adaptive (GSC) beamformers, were simulated in this work using the FIELD II [13] and USTB (UltraSound ToolBox) [14] programs executed in MATLAB (MathWorks Inc., USA).

images a–e corresponds to the simulations for the resolution tests using FIELD II. The second line f–j corresponds to the simulations for contrast-speckle tests in FIELD II. The third line k–o was obtained by processing in vivo data acquired from the carotid artery of a volunteer with a Verasonics Vantage 256 platform and an L11-4v probe (Verasonics Inc., Redmond, WA), available in [15]



The set of simulated data to evaluate resolution, distortion and contrast was generated with the FIELD II software, simulating the reception from an area with 20 reflecting targets distributed vertically and horizontally in an anechoic region, according to Fig. 1a–e. For the contrast assessment data, nine anechoic cysts were generated according to Fig. 1f–j. Both simulations were carried out with a 128-element transducer with a central frequency of 5.21 MHz and transmission of seven plane waves with angles varying uniformly from −16° to +16° with respect to the direction of the transducer face, in steps of approximately 5.33°.

To evaluate the performance of the implemented algorithms on real data obtained in vivo, the PICMUS dataset provides data recorded from the examination of a volunteer's carotid artery. The data capture was performed with the Verasonics Vantage 256 research platform, using an L11-4v ultrasound transducer with a central frequency of 5.21 MHz [15]; the outputs for this dataset from the DAS and GSC beamformers with all 128 elements active at reception are shown in Fig. 1k, l. Figure 1m–o shows the results for the adaptive beamformer combined with sparse arrays of 65, 44 and 23 elements, respectively.

Figure 1a–e shows the simulated results using the traditional DAS beamformer and the GSC adaptive algorithm with sparse arrays. It is possible to see the loss in image quality with the decrease in the number of reception elements for the GSC method. The same effect is also verified in the images obtained from other simulated

data such as Fig. 1f–j, or even in the real data of Fig. 1k–o. Such an effect is presented and discussed in [11], with DAS beamformers and an algorithm based on Stolt migration, and also in [16], where two-dimensional transducers are used. The loss in image quality is a consequence of the development of lateral lobes in the beam profile, whose amplitudes increase as the number of elements in the array decreases. The geometric distortion ratio (GDR), defined as the ratio of the axial FWHM to the lateral FWHM, calculated for the central target at depth x = 0 mm and z = 30 mm shown in Fig. 1a–e, is presented in Table 1. The GSC beamformer presents a better performance than DAS: with all 128 elements active in reception, the GSC method has a GDR near one, which would be the ideal theoretical value, while DAS results in 0.39. For the sparse configurations, the GSC results suggest a distortion in the resulting image for the evaluated target, the GDR decreasing as the number of disabled elements increases. However, for the configuration with 65 active elements in reception, the GSC beamformer still presents GDR results superior to DAS with 128 active elements. In terms of contrast, the GSC adaptive method also presents better results than DAS for the same number of elements in reception, as shown in [4]. Figure 2 presents the results for the simulation of anechoic cysts, in which DAS has a CR of 26.9 dB while GSC has a CR of 30.6 dB, representing a 3.7 dB improvement.

Table 1 Geometric Distortion Ratio (GDR) of the central target at depth x = 0 mm and z = 30 mm shown in Fig. 1a–e

Method   128 elements   65     44     23
DAS      0.39           -      -      -
GSC      1.21           0.48   0.32   0.14
Fig. 2 Contrast results for simulated circular anechoic cysts for different beamformers. a Region used to calculate the CR; b contrast ratio (CR) calculated as in [15] for the DAS, GSC-128, GSC-65, GSC-44 and GSC-23 methods


For the GSC technique combined with a 65-element sparse array, the results shown in Fig. 2 suggest that, for this configuration, the adaptive method has the same performance as DAS with 128 active elements in terms of contrast.
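The resolution and distortion metrics used in this section can be sketched as follows: a generic half-maximum width estimator with linear interpolation at the crossings, and the GDR as the axial-to-lateral FWHM ratio (this is our reading of the definitions, not the authors' exact code):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked sampled profile."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]

    def crossing(ib, ia):  # ib: index below half maximum, ia: index above it
        return x[ib] + (half - y[ib]) / (y[ia] - y[ib]) * (x[ia] - x[ib])

    left = x[i0] if i0 == 0 else crossing(i0 - 1, i0)
    right = x[i1] if i1 == len(y) - 1 else crossing(i1 + 1, i1)
    return right - left

def gdr(axial_profile, lateral_profile):
    """Geometric distortion ratio: axial FWHM over lateral FWHM."""
    (xa, ya), (xl, yl) = axial_profile, lateral_profile
    return fwhm(xa, ya) / fwhm(xl, yl)
```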

5 Conclusions

From the analysis of the results obtained in this work, it is possible to conclude that the proposed use of sparse arrays with the adaptive GSC beamformer is efficient in terms of geometric distortion and contrast, and produces images of higher quality than the conventional DAS method with half the number of active transducer elements in reception. Even with fewer active elements in reception (e.g. 44), it was possible to identify the structures adequately. The use of sparse arrays allows the reduction of the complexity of the reception hardware, since it reduces the number of signal conditioning circuits, high-speed analog-to-digital converters and other necessary components, in addition to the consequent reduction in the amount of acquired data.

Acknowledgements The authors thank the following Brazilian agencies: CAPES, CNPq, FINEP, the Ministry of Health and the Araucária Foundation for their financial support.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Montaldo G, Tanter M, Bercoff J, Benech N, Fink M (2009) Coherent plane-wave compounding for very high frame rate ultrasonography and transient elastography. IEEE Trans Ultrason Ferroelectr Freq Control. https://doi.org/10.1109/TUFFC.2009.1067
2. Tanter M, Fink M (2014) Ultrafast imaging in biomedical ultrasound. IEEE Trans Ultrason Ferroelectr Freq Control. https://doi.org/10.1109/TUFFC.2014.2882
3. Hasegawa H, Korte DC (2018) Special issue on ultrafast ultrasound imaging and its applications. Appl Sci 8:1110
4. Zimbico AJ, Granado DW, Schneider FK, Maia JM, Assef AA, Schiefler N, Costa ET (2018) Eigenspace generalized sidelobe canceller combined with SNR dependent coherence factor for plane wave imaging. Biomed Eng Online. https://doi.org/10.1186/s12938-018-0541-1
5. Synnevag JF, Austeng A, Holm S (2009) Benefits of minimum-variance beamforming in medical ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control. https://doi.org/10.1109/TUFFC.2009.1263
6. Capon J (1969) High resolution frequency-wavenumber spectrum analysis. Proc IEEE 57:1408–1418
7. Li J, Chen X, Wang Y et al (2016) Eigenspace-based generalized sidelobe canceler beamforming applied to medical ultrasound imaging. Sensors 16(8):1192. https://doi.org/10.3390/s16081192
8. Lu JY, Zou H, Greenleaf JF (1994) Biomedical ultrasound beam forming. Ultrasound Med Biol. https://doi.org/10.1016/0301-5629(94)90097-3
9. Roux E, Varray F, Petrusca L, Cachard C, Tortoli P, Liebgott H (2018) Experimental 3-D ultrasound imaging with 2-D sparse arrays using focused and diverging waves. Sci Rep. https://doi.org/10.1038/s41598-018-27490-2
10. Synnevag JF, Austeng A, Holm S (2007) Adaptive beamforming applied to medical ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control. https://doi.org/10.1109/TUFFC.2007.431
11. Schiefler NT Jr, Maia JM, Schneider FK, Zimbico AJ, Assef AA, Costa ET (2018) Generation and analysis of ultrasound images using plane wave and sparse arrays techniques. Sensors 18(11):3660. https://doi.org/10.3390/s18113660
12. Gomes DF, Zimbico AJ, Maia JM, Neves LC, Assef AA, Schneider FK, Costa ET (2019) Sparse arrays method with minimum variance for high quality image in ultrasound ultrafast imaging. In: IEEE Sensors, Montreal, QC, Canada. https://doi.org/10.1109/SENSORS43011.2019.8956867
13. Jensen JA (1999) Linear description of ultrasound imaging systems: notes for the international summer school on advanced ultrasound imaging at the Technical University of Denmark. Technical University of Denmark, Department of Electrical Engineering
14. Rodriguez-Molares A, Rindal OMH, Bernard O, Nair A, Bell MAL, Liebgott H (2017) The ultrasound toolbox. In: IEEE international ultrasonics symposium (IUS). https://doi.org/10.1109/ULTSYM.2017.8092389
15. Liebgott H, Rodriguez-Molares A, Cervenansky F et al (2016) Plane-wave imaging challenge in medical ultrasound. In: IEEE international ultrasonics symposium, Tours, France, pp 1–4
16. Turnbull DH, Foster FS (1992) Two-dimensional transducer arrays for medical ultrasound: beamforming and imaging. In: New developments in ultrasonic transducers and transducer systems. International Society for Optics and Photonics

Center of Mass Estimation Using Kinect and Postural Sway G. S. Oliveira, Marcos R. P. Menuchi, and P. E. Ambrósio

Abstract

The Center of Mass (CM) plays an important role in balance assessments due to its physical definition, and knowledge of its behavior provides information that may help the implementation of protocols to aid physiotherapy with a view to improving postural control capacities. In this work, the kinematic (segmental) method was used to estimate the human body CM, which is only possible by knowing the locations of some joints in the body. The Kinect device was used to provide the joints because of its consistency and ease of use, and also for its markerless method of recognizing human joints. After calculating all segmental CMs from the anthropometric parameters and the endpoints (joints), the total body CM was estimated. A male subject, 1.67 m tall with 60 kg mass, participated in the initial test, and a static task was performed to acquire data for the analysis. The subject remained standing still for 30 s in a comfortable standard position at about 2 m from the Kinect. The subject also performed a voluntary oscillation task. As a result, two graphs were obtained representing the CM trajectory along the AP-ML plane (statokinesiogram) and the amplitude of CM displacement across time (stabilogram), plus one graph representing the voluntary oscillation around the ankle. The results also showed the CM location, as a percentage of height, to be 57.56 ± 0.10, which compares well with the physiological CM of 55%. Therefore, this low-cost photogrammetric device shows noticeable potential to compose a method of CM estimation and to support studies in balance assessments.

Keywords
Center of mass · Kinect · Kinematic method · Postural sway

M. R. P. Menuchi
Health Science Department, State University of Santa Cruz, Ilhéus, Brazil

1 Introduction

Efforts to describe and understand variations in body balance have led to the development of several postural assessment techniques, which can be performed either from a physiological or a functional point of view [1]. Postural oscillations are traditionally represented by the trajectories of the center of pressure (CP) and the center of mass (CM) [2]. However, while the CP is a kinetic measure easily obtained with a force platform, the CM is a kinematic measure that is more difficult to access, estimated either from the CP or with motion capture systems or inertial sensors, which involve a certain complexity of operation and spatial layout [3]. The body center of mass is a point, inside or outside a body, at which all of its mass can be considered concentrated; if a force is applied through this point, the body moves without rotating. The aim of this work is to estimate the CM position using the Kinect coordinates and the kinematic method, and to analyze its behavior in a standing task. As justification, knowledge of the behavior of the body center of mass is necessary to verify and implement protocols to reduce the risk of falls, and devices with any degree of sophistication are often expensive.

G. S. Oliveira, P. E. Ambrósio
Computational Modelling in Science and Technology, State University of Santa Cruz, Ilhéus, Brazil

2 Materials and Methods

In this section we describe the Kinect device and how it works, the hardware and software used in this work, and the kinematic method of estimating the body center of mass, also




known as the segmental method, as well as the procedures used to acquire the data.

2.1 Device–Kinect

The Microsoft Kinect device (version 360) was used in the development of this work. The Kinect was initially created as a new way to control video games, in which the player (user) performs the movements of the avatar, without the need for joysticks or controllers. This is possible because the device uses a technique called Structured-Light 3D Scanning, explained in detail in [4]. Based on this technique, the Kinect has dedicated software created by Microsoft that tracks a person's body using data extracted from the images and creates a skeleton model: a stick figure with the coordinates of the body joints.

2.2 Software—Python Codes and Kinect SDK

The Kinect 360 works with 20 joints. To capture the coordinates of all these joints it was necessary to install Microsoft's SDK (Software Development Kit) v1.8 for Windows [5]. A Python [6] module named pykinect was also installed to receive and manipulate the data from the Kinect. With this module and through Python programming we could access the data produced by the Kinect software and use the functions important to this work, such as Kinect initialization, depth and video frames, skeleton tracking and skeleton (joint) positions. We then coded Python routines to start the device, capture the frames to obtain the needed skeleton information, compute, save and show the data, and count the time during which the code runs.

2.3 Hardware–Machine

We used an ACER Aspire notebook equipped with an Intel Core i3-2328 CPU at 2.20 GHz, 4 GB of RAM and a 32-bit OS.

2.4 Kinematic Method

There are several ways to compute a body CM. One of them is based on a table supported by a pin (rotation point), on which the subject is required to lie down and stay still. This method uses the principle of torque, and it can provide the center of mass not only for the whole body but also for its segments. It was particularly important for the development of the kinematic method.

G. S. Oliveira et al.

Another method uses images and photogrammetry, which consists in extracting and gathering information through image processing and analysis. The present work makes use of this approach. The kinematic method of calculating the body center of mass was proposed by Zatsiorsky et al. in 1990, after estimating the center of mass for different segments of the human body [7]. De Leva observed some errors in Zatsiorsky's method and understood that they occurred because the landmarks had been taken at the bones [8]; he concluded that a better approach would be to take them at the axes of rotation of the body segments. The method basically consists in computing the center of mass of each segment of the human body and then composing the total body center of mass, each segment being delimited by endpoints, usually joints. In this work the joints were obtained from the Kinect skeleton model. The body segmentation comprises 14 parts: head, trunk, upper arms, forearms, hands, thighs, shanks and feet, so that each segment has its own CM, called the segmental CM. After segmenting, all segmental CMs must be located, which is part of Zatsiorsky et al.'s and de Leva's works [7, 8]: they established an anthropometric table containing the body segment parameters. To compute the body CM with this method it is necessary to calculate the positions of the segmental CMs based on the anthropometric table and the joint positions acquired by the Kinect. This is done through Eq. (1), which places the anthropometric segmental CM as a point along the line connecting the distal and proximal points that delimit the segment length. Proximal and distal endpoints refer to whether the joint is closer to or farther from the torso:

$$CM_{w,s} = w_p + L\,(w_d - w_p) \qquad (1)$$

where $CM_{w,s}$ is the position of the segmental CM in the chosen direction $w$; $L$ is the distance between the segmental CM and the proximal point $w_p$, as a fraction of the segment length; and $w_d$ and $w_p$ are the values of the chosen variable (x, y or z direction) at the distal and proximal positions, respectively. Once all the segmental CM positions in the needed directions are known, Eq. (2) is used to calculate the body center of mass:

$$CM = \frac{\sum_{i=0}^{n} m_i \, CM_{s,i}}{M_{body}} \qquad (2)$$

where $CM$ is the total body CM in the chosen direction, $m_i$ is the segmental mass, $CM_{s,i}$ is the segmental CM in a specific direction and $M_{body}$ is the total body mass.



In this work, at the end of the calculation with Eq. (2), we have the spatial position of the body CM.
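Eqs. (1) and (2) translate directly into code; in the sketch below (pure Python) the segment masses and CM positions used in the check are illustrative placeholders, not de Leva's actual table values:

```python
def segmental_cm(proximal, distal, L):
    """Eq. (1): segmental CM as a point on the proximal-to-distal line.

    proximal, distal: joint coordinates (tuples, any number of axes);
    L: CM distance from the proximal point as a fraction of segment length.
    """
    return tuple(p + L * (d - p) for p, d in zip(proximal, distal))

def body_cm(segments, body_mass):
    """Eq. (2): mass-weighted average of the segmental CMs, axis by axis.

    segments: {name: (segment_mass, segmental_cm_tuple)}.
    """
    axes = len(next(iter(segments.values()))[1])
    total = [0.0] * axes
    for mass, cm in segments.values():
        for k in range(axes):
            total[k] += mass * cm[k]
    return tuple(t / body_mass for t in total)
```

In practice each of the 14 segments would contribute one `(mass, cm)` pair, with the mass and `L` fractions taken from the anthropometric table [7, 8] and the joint coordinates from the Kinect skeleton.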

2.5 Procedures

Data acquisition was performed with a healthy male subject, 1.67 m tall and 60 kg mass, standing about 2.0 m from the Kinect in a standard position (standing straight, relaxed hands by the sides), in double-limb stance with feet apart and eyes open. The subject was instructed to look straight ahead, remain silent and stay as still as possible until the end of the acquisition. The task lasted 30 s and was repeated three times. A voluntary oscillation in the mediolateral direction around the ankle was also executed, in which the subject swayed to both sides until feeling his limit of stability (the sensation of falling), for 30 s.

Fig. 2 CM displacement across time

3 Results

The results are shown in Figs. 1 and 2, with data from one of the acquisitions; Fig. 1 represents data from the AP and ML directions. This result is what would be seen if a pen could be attached vertically at exactly the CM position and allowed to scribble on a sheet of paper. All data are in millimeters. Figure 1 shows the trajectory of the CM while performing the task. As the human body is commonly modelled as an inverted pendulum, the representation in this figure is the projection of the spatial CM onto the plane of the floor, where the ML direction corresponds to the X-axis and the AP direction to the Z-axis of the Kinect device.

Fig. 3 Ankle sway across time

It is important to say that the trajectory origin (0, 0) is taken as the first point calculated from the first frame captured. Figure 2 shows two time series that represent the displacement of the CM in both directions (AP and ML) computed as the displacement relative to the first point captured. Figure 3 shows the result for a voluntary oscillation in the ML direction around the ankle.

Fig. 1 CM trajectory

4 Discussion

The purpose of this work was to calculate the body CM for a standing task using Zatsiorsky's and de Leva's works and by extracting information from the MS Kinect device. The results shown in the previous section were obtained from a male


subject (because the anthropometric model distinguishes between male and female due to their morphologic variation). After implementing the segmental method for calculating the CM of the human body, tests were performed to verify its proper functioning. From then on, data were collected with the subject according to the protocol described, in order to verify and analyze the data obtained. It was possible to calculate and verify that the CM estimate for the subject was (57.56 ± 0.10)%, compared to the physiological CM, which is 55% of height, according to [9]. In addition to the estimate, its trajectory was also obtained, called a statokinesiogram (Fig. 1), as well as its stabilogram (Fig. 2), which represents the time series of the oscillation amplitude in each direction (AP and ML). It is thus possible to analyze metrics such as the oscillation amplitude, which indicates, in simple terms, how much of the base area is used for the oscillation of the CM (giving indications of stability). It is also possible to calculate displacement and velocity, among other metrics. In the test performed, a mean velocity of 0.30 mm/s was obtained for the AP direction and 0.11 mm/s for the ML direction, and the oscillation amplitude was also calculated in both directions, reaching 8.53 mm and 9.46 mm for ML and AP, respectively. Most of the literature found on the Kinect and stability analysis deals with device validation against other devices, as in [10], which makes it difficult to compare raw data such as those presented in this work.

Another task was performed, with the result shown in Fig. 3. The subject was standing, and the base formed by his feet had its largest edge measuring approximately 300 mm. This can be seen in Fig. 3 for the ML direction, where the maximum oscillation amplitude tends to 300 mm.
The AP direction shown does not contribute to the results, but the largest valley, at around 25 s, indicates that the CM was recalculated by the software because of an imbalance at the end of the test. A similar test was performed and analyzed in [11]. It is worth mentioning that none of the data were subjected to filtering. The application of the Kinect in practice has been studied by many researchers due to its potential for assessing postural stability, and many of these works are reviewed and compared in the systematic review of [10]. Some of these applications are single- and double-limb stance horizontal CM displacement and sway velocity with eyes open for static balance. The application of this device can help health professionals distinguish groups of equilibrium based on the metrics obtained from tests, discriminating patients at risk of falling, and also evaluate protocols to improve postural stability. It is also possible to use it as an application to help


elderly patients in doing home exercises to increase stability [12], supervised by physical therapists.
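The oscillation amplitude and mean velocity reported above can be computed from a displacement time series as sketched below (our reading of the usual definitions — peak-to-peak range and path length over duration — not the authors' exact code):

```python
import numpy as np

def sway_metrics(t, d):
    """Amplitude and mean velocity of a CM displacement series in one direction.

    t: time stamps (s); d: displacement (mm) relative to the first sample.
    """
    amplitude = d.max() - d.min()                              # peak-to-peak range (mm)
    mean_velocity = np.abs(np.diff(d)).sum() / (t[-1] - t[0])  # path length / duration
    return amplitude, mean_velocity
```

Applied separately to the AP and ML series of the stabilogram, this yields one amplitude and one mean velocity per direction.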

5 Conclusion

Based on what was shown in this work, the Kinect has good potential to be used as a device for kinematic analysis of the CM due to its advantages, such as markerless skeleton recognition, flexibility, low cost and ease of incorporation into a computational routine. Although the Kinect has been discontinued, this work shows that low-cost movement sensors can perform tasks otherwise done by expensive devices in balance assessments and give consistent results, even with limitations. For future work, a model to estimate the Kinect measurement error will be developed, as well as an application to explore and use the potential of the device.

Acknowledgements The authors wish to acknowledge the Bahia Research Support Foundation (FAPESB) for the scholarship, the Graduate Program in Computational Modelling in Science and Technology (PPGMC) for the knowledge, and the State University of Santa Cruz (UESC) for the support.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Browne J, O'Hare N (2001) Review of the different methods for assessing standing balance. Physiotherapy 87(9):489–495
2. Latash ML (2008) Neurophysiological basis of movement, 2nd edn. Human Kinetics, Champaign
3. Yeung LF, Cheng KC, Fong CH, Lee WCC, Tong K-Y (2014) Evaluation of the Microsoft Kinect as a clinical assessment tool of body sway. Gait Posture 40:532–538
4. Melgar ER, Diez CC (2012) Arduino and Kinect projects. Apress, New York
5. SDK Kinect version 1.8: https://www.microsoft.com/en-us/download/details.aspx?id=40278
6. Python: www.python.org/
7. Zatsiorsky VM, Seluyanov VN, Chugunova LG (1990) Methods of determining mass-inertial characteristics of human body segments. In: Chemyi GG, Regirer SA (eds) Contemporary problems of biomechanics. CRC Press, Massachusetts, pp 272–291
8. De Leva P (1996) Adjustments to Zatsiorsky-Seluyanov's segment inertia parameters. J Biomech 29(9):1223–1230. https://doi.org/10.1016/0021-9290(95)00178-6
9. Narciso FV, Santos SS, Ferreira F, Lemos VS, Barauna MA, Cheik NC, Canto RST (2010) Center of gravity height and number of falls in active and sedentary older adults. Rev Bras Cineantropom Desempenho Hum 12(4):302–307
10. Puh U, Hoehlein B, Deutsch J (2019) Validity and reliability of the Kinect for assessment of standardized transitional movements and balance: systematic review and translation into practice. Phys Med Rehabil Clin North Am 30:399–422. https://doi.org/10.1016/j.pmr.2018.12.006
11. Lafond D, Duarte M, Prince F (2004) Comparison of three methods to estimate the center of mass during balance assessment. J Biomech 37(9):1421–1426. https://doi.org/10.1016/s0021-9290(03)00251-3
12. Ejupi A, Gschwind YJ, Brodie M, Zagler WL, Lord SR, Delbaere K (2016) Kinect-based choice reaching and stepping reaction time tests for clinical and in-home assessment of fall risk in older people: a prospective study. Eur Rev Aging Phys Act 13(1). https://doi.org/10.1186/s11556-016-0162-2

Estimation of Directed Functional Connectivity in Neurofeedback Training Focusing on the State of Attention W. D. Casagrande, E. M. Nakamura-Palacios and A. Frizera-Neto

Abstract

The directed transfer function (DTF) is a measure based on the concept of Granger causality which, associated with a neurofeedback system, can represent an important analysis tool to support the treatment of neuropsychiatric disorders. Defined in the framework of the multivariate autoregressive model (MVAR), the DTF provides a spectral estimate of the strength and direction of any causal link between the signals acquired by electroencephalography (EEG). This study aims to estimate the directed functional connectivity related to the attention state of healthy adult individuals during neurofeedback sessions, in order to better understand how neuronal regions communicate and influence each other's activity, and to study the feasibility of using these algorithms with neurofeedback. Data were collected from 19 individuals, eleven male and eight female, with an average age of 21.21 years, a standard deviation of 2.39 and an age range of 18–26 years. As a result, it was possible to identify changes in the direction and strength of the interaction flow between certain brain regions during the sessions, suggesting that the sessions enabled individuals to activate brain regions related to the state of attention. The main contribution of the study is the use of the mathematical methods mentioned to identify and analyze brain modulations in individuals who participated in neurofeedback sessions aimed at strengthening the state of attention.

Keywords
Directed transfer function (DTF) · Electroencephalography (EEG) · Multivariate autoregressive model (MVAR) · sLORETA · Neurofeedback

W. D. Casagrande, A. Frizera-Neto
Department of Electrical Engineering, UFES, Avenida Fernando Ferrari 514, Goiabeiras Campus, Vitória, ES, Brazil
E. M. Nakamura-Palacios
Department of Physiological Sciences, UFES, Vitória, ES, Brazil

1

Introduction

Neurofeedback is a technique based on the principle of operant learning [1], in which brain activity is recorded and monitored, usually by electroencephalography (EEG) [2,3]. The acquired information is used by the subjects themselves to control their own performance [3]. Besides EEG, other techniques such as functional magnetic resonance imaging (fMRI) can be used [4]. To this end, the system makes use of auditory, visual and even tactile resources that are modulated in real time by the recorded brain activity. Neurofeedback is thus a technique based on a continuous learning process of both machine and human, through the modification of brain patterns carried out by the individuals themselves [2].

Neurofeedback has been used to treat a wide variety of neurological and psychological disorders and to improve cognitive performance in healthy individuals [1]. The patterns of brain activity to be modified through cognitive training with neurofeedback are defined according to the objective to be achieved, such as the recovery of frontal cognitive control that is compromised in impulsive-compulsive-addictive syndromes such as drug addiction [5,6], but also in other conditions such as Attention Deficit Hyperactivity Disorder (ADHD).

Electroencephalography (EEG) has been widely used to research brain connectivity [7,8], in order to explore how neuronal regions communicate and influence each other's activity, even without necessarily being structurally connected [9]. Compared with other methods, such as magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), EEG has a high temporal resolution [8]. This characteristic makes EEG suitable for investigating the causal relationships between different brain

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_250


sites, and specifically for performing functional connectivity analyses [9,10]. There is growing interest in the use of mathematical algorithms to estimate the flow of information between different areas of the scalp or cortex [11]. Studies have used multivariate spectral measurements based on autoregressive modeling of multichannel EEG in order to calculate efficient connectivity estimates [8,12]. Among multivariate methods, the directed transfer function (DTF) [13] is an estimator that simultaneously characterizes the direction and the spectral properties of the interaction between brain signals. The DTF requires that only one multivariate autoregressive (MVAR) model be estimated from all the time series. Recently, algorithms have been developed to estimate MVAR models with time-dependent coefficients. This estimation procedure allows the simultaneous adjustment of an average MVAR model to a set of individual trials, each representing a measure of the same task. The advantages of this estimation technique are an efficient computation algorithm and high adaptability [14].

The analysis of directed functional connectivity based on EEG source signals has been widely used in the study of resting brain networks [15] and in patients with neurological disorders [16,17]. Estimating the cerebral current density at a deeper level is also of great importance for the analysis of the collected EEG data. The possibility of studying the 3D distribution of electrical neuronal activity has made LORETA (low-resolution electromagnetic tomography analysis) a very powerful analysis tool. This method is an inverse solution that consists of estimating the density of the cortical electrical current from the scalp electrodes (EEG), using ideal smoothing to estimate a direct 3D solution for the distribution of brain electrical activity [18].
This article uses mathematical algorithms based on multivariate autoregressive modeling (MVAR) and the directed transfer function (DTF). The purpose of the proposed algorithms is to estimate the directed functional connectivity related to the attention state of healthy adult individuals during neurofeedback sessions, and to understand how neuronal regions communicate and influence each other's activity. Using these tools along with neurofeedback is intended to improve treatment: it becomes possible to correlate the activated internal structures with the corresponding spatial configuration of the EEG, contributing to a much more robust analysis. The main contribution presented in the study is the use of the mathematical methods mentioned to identify and analyze brain modulations in individuals who participated in neurofeedback sessions aimed at strengthening the state of attention.


2 Materials and Methods

2.1 Participants

This study was carried out with 19 participants, eleven male and eight female, with a mean age of 21.21 years, standard deviation of 2.39 and an age range of 18–26 years. All participants received three sessions of NFB training. All participants read, signed and agreed to an informed consent form. Exclusion criteria for participation included previous head trauma, history of seizures, recent use of drugs or alcohol and previous psychiatric diagnosis. This study is part of a project approved by the Brazilian Institutional Review Board of the Federal University of Espírito Santo (CAAE 19403713.6.0000.5060), Brazil. It was conducted in strict adherence to the Declaration of Helsinki and is in accordance with the ethical standards of the Committee on Human Experimentation of the Federal University of Espírito Santo, ES, Brazil.

2.2 Procedures

Participants were prepared for EEG recording using the Quick-20 EEG signal capture system (Cognionics, USA). The EEG equipment was cleaned before each session to maintain consistency. NFB training was performed using the 19 derivations of the international standard 10–20 system (FP1, FP2, F3, F4, Fz, F7, F8, C3, C4, Cz, T3, T4, T5, T6, P3, P4, Pz, O1 and O2), with a reference on the left ear.

The NF sessions were based on a rocket game in space, in which the participants' goal was to control the rocket's speed through the brain data captured by the EEG. The greater the detected state of attention, the greater the speed of the rocket, the aim being to capture as many stars as possible in 5 min. The data were collected and stored using the Cognionics Acquisition Software, with a bandpass filter set between 0.5 and 100 Hz, at a rate of 500 samples per second. Each session took approximately 15 min to complete (10 min to train the system and 5 min of NF training). All recordings and sessions were performed in a prepared room in the Research Laboratory. Lighting and temperature were kept constant during the experiment.

2.3 Data Acquisition

In contrast to studies using traditional NFB, all scalp EEG data were stored continuously during the sessions. In addition, the participants in this study provided a written record of the experiences, strategies and mental processes employed to obtain points in each session during this training.


2.4 Pre-processing of Data

All EEG data were processed with special attention to the frontal and temporal derivations, since these regions are directly related to the mental state of attention [19]. Initially, the signals were filtered with a FIR bandpass filter from 1 to 45 Hz to select the frequencies of interest [delta (1–4 Hz), theta (4–8 Hz), alpha (8–12 Hz) and beta3 (18–25 Hz)]. After filtering, the signals were referenced to the common average reference (CAR) to eliminate common-mode artifacts. Participants were instructed by the research group to minimize blinking and eye movements, to keep the jaw relaxed, and to minimize body movements during the experiment.

Table 1 Cortical structures grouped into Temporal, Frontal, Occipital, Parietal, Central, and Cingulate regions, and their ROI indices

Region      ROI                  Position
Temporal    Inferiortemporal     R.8–R.10
Temporal    Inferiortemporal     L.8–L.12
Frontal     Superiorfrontal      L.15–L.20
Frontal     Superiorfrontal      R.18–R.21
Occipital   Lateraloccipital     L.1–L.9
Occipital   Lateraloccipital     R.1–R.9
Parietal    Inferiorparietal     R.5–R.11
Parietal    Inferiorparietal     L.9–L.14
Central     Paracentral          L.1–L.3, R.1–R.3
Cingulate   Posteriorcingulate   L.1, R.1

2.5 sLoreta

We analyzed the cortical distribution of the current source density using sLORETA. This methodology produces a low-error solution for the source generators and provides statistical maps that model the distributed currents of brain activity [18], using realistic electrode coordinates for a concentric spherical head model registered in a Talairach atlas (a three-dimensional coordinate system of the human brain) of standardized magnetic resonance composed of 2394 brain volume elements (voxels), allowing a reasonable approximation of anatomical labeling in the neocortical volume [20]. The selected artifact-free EEG signals were analyzed to calculate the cortical current source density from 1 to 45 Hz. The sLORETA current source density image of cortical functioning was calculated for four frequency bands: delta (1–4 Hz), theta (4–8 Hz), alpha (8–12 Hz) and beta3 (18–25 Hz). In this work, to calculate the localization of the intracortical distribution of electrical activity, we used the standardized LORETA (sLORETA) method as a mathematical tool, via the Neuropype software (Intheon, USA).

2.6 Effective Connectivity Analysis

To analyze functional connectivity, we used the information acquired by the sLORETA method to determine the cortical regions of interest (ROI), aiming at greater precision, in order to obtain better results for further analysis. Ten cortical ROIs were defined, which are described in Table 1. It must be taken into account that each ROI is an average of several underlying channels. To analyze functional connectivity between all pairs of ROIs, we used mathematical algorithms based on multivariate autoregressive modeling (MVAR), the so-called directed transfer function (DTF). DTF is a measure based on Granger's concept of causality. It is defined within the structure of the multivariate autoregressive (MVAR) model, which is fitted simultaneously to all channels of the device. DTF describes the causal influence of channel j on another channel i at frequency f [14]. In this approach, we assume that a sample of the data in k channels at time t can be expressed as a weighted sum of the p previous samples with an added random component:

X(t) = Σ_{j=1}^{p} A(j) X(t − j) + E(t)    (1)

where X(t) = (X_1(t), X_2(t), …, X_k(t))^T is the vector of data values at time t and E(t) = (E_1(t), E_2(t), …, E_k(t))^T is the vector of the random component values at time t. The matrices A(j), of size k × k, are known as the model parameters and p is the order of the model. Transforming the MVAR model to the frequency domain, it takes the form of a linear filter H with noise E at the input and signal X at the output:

H(f) = ( Σ_{m=0}^{p} A(m) exp(−2πimfΔt) )^{−1}    (2)

X(f) = A^{−1}(f) E(f) = H(f) E(f)    (3)

where X(f), A(f) and E(f) are the Fourier transforms of X(t), A(t) and E(t), respectively. The matrix H(f) is known as the transfer matrix. The normalized DTF function is defined in the frequency domain as:

γ²_{ij}(f) = |H_{ij}(f)|² / Σ_{m=1}^{k} |H_{im}(f)|²    (4)


where H_{ij}(f) are the elements of the transfer matrix H. γ²_{ij}(f) represents the causal influence of channel j on channel i at frequency f. Equation (4) takes values from 0 to 1, expressing the ratio between the inflow from channel j to channel i and all the inflows to channel i. With respect to conventional methods for estimating functional relationships between biological signals, the DTF provides a spectral estimate of the strength and direction of any causal link between the signals in the set under examination, which are considered in a single model, exceeding the limits of conventional bivariate methods [21]. As with the sLORETA method, the mathematical algorithms used to generate the functional connectivity estimates were run in the Neuropype software (Intheon, USA).
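To make the computation concrete, the normalized DTF of Eq. (4) can be sketched from already-fitted MVAR coefficients as follows. This is an illustrative reimplementation, not the Neuropype code used in the study; the function name and the convention A(0) = I (so that the spectral matrix is Ā(f) = I − Σ A(m) exp(−2πimfΔt), whose inverse is H(f)) are our assumptions:

```python
import numpy as np

def normalized_dtf(A, fs, freqs):
    """Sketch of the normalized DTF (Eq. 4) from MVAR coefficients.

    A     : (p, k, k) array with the fitted matrices A(1)..A(p) of Eq. (1).
    freqs : frequencies (Hz) at which to evaluate the DTF.
    Returns gamma2 with shape (len(freqs), k, k), where gamma2[f, i, j]
    is the causal influence of channel j on channel i at freqs[f].
    """
    p, k, _ = A.shape
    dt = 1.0 / fs                       # sampling interval (delta t in Eq. 2)
    gamma2 = np.empty((len(freqs), k, k))
    for fi, f in enumerate(freqs):
        # Abar(f) = I - sum_m A(m) exp(-2*pi*i*m*f*dt);  H(f) = Abar(f)^(-1)
        Abar = np.eye(k, dtype=complex)
        for m in range(1, p + 1):
            Abar -= A[m - 1] * np.exp(-2j * np.pi * m * f * dt)
        H = np.linalg.inv(Abar)
        power = np.abs(H) ** 2
        # Eq. (4): each row i is normalized by the total inflow to channel i
        gamma2[fi] = power / power.sum(axis=1, keepdims=True)
    return gamma2
```

Because of the row normalization, the inflows to a given channel sum to one, so γ²_{ij}(f) can be read directly as the fraction of the inflow to channel i that originates from channel j.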

3 Results

As previously mentioned, to estimate the cortical distribution of the current source density using the sLORETA method and to estimate functional connectivity by DTF, we used the software Neuropype, a platform for real-time brain-computer interfacing, neuroimaging and bio/neural signal processing. In Fig. 1, it is possible to observe the block system assembled to generate the estimates.

Fig. 2 sLORETA neurofeedback session. a Delta (1–4 Hz). b Theta (4–8 Hz). c Alpha (8–12 Hz). d Beta (18–25 Hz). Color bar indicates current density values

For each participant, estimates for 1 min of neurofeedback were calculated for four frequency bands: delta (1–4 Hz), theta (4–8 Hz), alpha (8–12 Hz) and beta3 (18–25 Hz). The neurofeedback sessions were all based on the theta/beta attention training protocol, in which the intention is to increase the power in the beta frequency range and reduce it in theta. The sLORETA block was responsible for generating the functional images. Figure 2 presents the sLORETA functional images from one of the sessions of one of the individuals in the experiment, showing changes in the activation regions for all four frequency bands mentioned above. After generating the functional images, 10 ROIs were selected based on the analysis of the cortical distributions of the current source density. In the construction phase of the multivariate autoregressive model, several parameters were configured, such as the order of the model, the maximum number of iterations and the size of the sliding window, in order to obtain the best results for the direction and strength of flow in the functional connectivity estimate. In the DTF block, the frequencies were selected to describe the causal influence between ROIs. At the end, connectivity images were generated. Figure 3 shows the functional connectivity estimate generated for the four bands of interest for the same individual as in Fig. 2.
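The theta/beta protocol mentioned above can be illustrated with a minimal power-ratio computation. The periodogram estimator and the function names below are our simplification, not the actual training software:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean periodogram power of signal x in the band [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def theta_beta_ratio(x, fs=500.0):
    """Theta (4-8 Hz) over beta (18-25 Hz) power; the protocol aims to lower it."""
    return band_power(x, fs, 4.0, 8.0) / band_power(x, fs, 18.0, 25.0)
```

A falling ratio over a training window would correspond to the desired shift of power from theta toward beta.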

Fig. 1 Block diagram assembled on the Neuropype platform to generate functional connectivity estimates and sLORETA images

4 Discussion

In this article, we used a source localization method (sLORETA) to find the source of the state of attention in different frequency ranges for adult individuals and, thus, to define the cortical regions of interest for estimating functional connectivity through methods such as multivariate autoregressive modeling (MVAR) and the directed transfer function (DTF). We found significant differences in individuals between neurofeedback sessions in the four frequency bands: delta, theta, alpha and beta.

It is possible to observe in Fig. 2, that is, in the first session of a given individual, that the current density source was stronger in the regions of the temporal lobe for the delta and beta bands. Over the course of the sessions, the individual started recruiting the prefrontal, left and right dorsolateral prefrontal regions, which indicates the activation of regions more closely related to the state of attention [19] and suggests that these regions should be analyzed with particular care.

With the regions related to the state of attention identified, the cortical regions of interest were selected to estimate the functional connectivity between them. Figure 3 shows the rendering of the spherical nodes for the regions, connected by conical edges according to the direction and strength of the flow. Some individuals showed an increase in synchronization between the prefrontal regions.

A more detailed study is underway, which aims to make a comparative analysis between a group of individuals diagnosed with ADHD and a control group (undiagnosed individuals). A greater number of neurofeedback training sessions must also be performed, in order to obtain a more robust result, since the treatment requires many sessions to produce concrete effects.

Fig. 3 Connectivity estimation during neurofeedback session. a Delta (1–4 Hz). b Theta (4–8 Hz). c Alpha (8–12 Hz). d Beta (18–25 Hz). The scale represents the direction and strength of the flow

5 Conclusion

This study aimed to analyze the regions of brain activation and the functional connectivity between them when adult individuals were exposed to neurofeedback training, and to enable the use of real-time functional connectivity estimation methods together with neurofeedback training. From the results obtained, it was possible to identify the direction and strength of the flow of interaction between certain brain regions during the sessions, implying that the prefrontal regions (left and right dorsolateral prefrontal) interact more intensely when individuals are in a state of attention.

An important point to note is that estimating connectivity from the scalp channels, although technically straightforward, poses problems of interpretation due to the high probability of finding spurious associations between channels, caused by volume conduction, which smears brain activity across the channels. For this reason, MVAR is preferable on signals that better reflect the activation of individual brain sources or source areas.

The main contribution presented in the study was the use of mathematical methods such as sLORETA, multivariate autoregressive modeling (MVAR) and the directed transfer function (DTF) to identify brain modulations in individuals who participated in neurofeedback sessions aimed at strengthening the state of attention. The use of these tools demonstrated positive results of the training sessions (a change of activation region and of the connectivity of regions related to the state of attention, in agreement with the studied literature [22,23]). In the future, a larger number of sessions should be performed to obtain more robust results, and individuals with ADHD should be included for comparison with a control group. Another future goal is to employ and compare other methods of estimating functional connectivity.
Acknowledgements The authors are grateful for the financial support of the Brazilian agencies FAPES, CAPES and CNPq (processes no. 80615503, 304049/2019-0, 307531/2018-0) (FAPES/CNPq No. 05/2017—PRONEM, TO: 84/2017).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Kaur C, Singh P, Sahni S (2019) Towards efficacy of EEG neurofeedback from traditional to advanced approach: a review. Biomed Pharmacol J 12:619–627
2. Sherlin LH, Arns M, Lubar J et al (2011) Neurofeedback and basic learning theory: implications for research and practice. J Neurother 15:292–304
3. Larsen S, Sherlin L (2013) Neurofeedback: an emerging technology for treating central nervous system dysregulation. Psychiatr Clin 36:163–168
4. Young KD, Zotev V, Phillips R et al (2014) Real-time FMRI neurofeedback training of amygdala activity in patients with major depressive disorder. PLoS ONE 9(2):e88785
5. Nakamura-Palacios EM, de Almeida Benevides MC, Penha Zago-Gomes M et al (2012) Auditory event-related potentials (P3) and cognitive changes induced by frontal direct current stimulation in alcoholics according to Lesch alcoholism typology. Int J Neuropsychopharmacol 15:601–616
6. Nakamura-Palacios EM, Lopes IBC, Souza RA et al (2016) Ventral medial prefrontal cortex (vmPFC) as a target of the dorsolateral prefrontal modulation by transcranial direct current stimulation (tDCS) in drug addiction. J Neural Transm 123:1179–1194
7. Dai Z, De Souza J, Lim J et al (2017) EEG cortical connectivity analysis of working memory reveals topological reorganization in theta and alpha bands. Front Hum Neurosci 11:237
8. Wang H, Wu X, Wen X, Lei X, Gao Y, Yao L (2019) Exploring directed functional connectivity based on electroencephalography source signals using a global cortex factor-based multivariate autoregressive model. J Neurosci Methods 318:6–16
9. Friston KJ (2011) Functional and effective connectivity: a review. Brain Connect 1:13–36
10. Wen X, Rangarajan G, Ding M (2013) Multivariate Granger causality: an estimation framework based on factorization of the spectral density matrix. Philos Trans R Soc A Math Phys Eng Sci 371:20110610
11. Baccalá LA, Sameshima K (2001) Partial directed coherence: a new concept in neural structure determination. Biol Cybernet 84:463–474
12. Dutta S, Singh M, Kumar A (2018) Automated classification of nonmotor mental task in electroencephalogram based brain-computer interface using multivariate autoregressive model in the intrinsic mode function domain. Biomed Sig Process Control 43:174–182
13. Kamiński M, Ding M, Truccolo WA, Bressler SL (2001) Evaluating causal relations in neural systems: Granger causality, directed transfer function and statistical assessment of significance. Biol Cybernet 85:145–157
14. Astolfi L, Cincotti F, Mattia D et al (2008) Tracking the time-varying cortical connectivity patterns by adaptive multivariate estimators. IEEE Trans Biomed Eng 55:902–913
15. Hata M, Kazui H, Tanaka T et al (2016) Functional connectivity assessed by resting state EEG correlates with cognitive decline of Alzheimer's disease—an eLORETA study. Clin Neurophys 127:1269–1278
16. Umesh DS, Tikka SK, Goyal N, Nizamie SH, Sinha VK (2016) Resting state theta band source distribution and functional connectivity in remitted schizophrenia. Neurosci Lett 630:199–202
17. Storti SF, Boscolo Galazzo I, Montemezzi S, Menegaz G, Pizzini FB (2017) Dual-echo ASL contributes to decrypting the link between functional connectivity and cerebral blood flow. Hum Brain Mapp 38:5831–5844
18. Lehmann D, Faber PL, Gianotti LRR, Kochi K, Pascual-Marqui RD (2006) Coherence and phase locking in the scalp EEG and between LORETA model sources, and microstates as putative mechanisms of brain temporo-spatial functional organization. J Phys Paris 99:29–36
19. Cannon R, Congedo M, Lubar J, Hutchens T (2009) Differentiating a network of executive attention: LORETA neurofeedback in anterior cingulate and dorsolateral prefrontal cortices. Int J Neurosci 119:404–441
20. Lancaster JL, Woldorff MG, Parsons LM et al (2000) Automated Talairach atlas labels for functional brain mapping. Hum Brain Mapp 10:120–131
21. Kus R, Kaminski M, Blinowska KJ (2004) Determination of EEG activity propagation: pair-wise versus multichannel estimate. IEEE Trans Biomed Eng 51:1501–1510
22. Ligeza TS, Wyczesany M, Tymorek AD, Kamiński M (2016) Interactions between the prefrontal cortex and attentional systems during volitional affective regulation: an effective connectivity reappraisal study. Brain Topogr 29:253–261
23. He B, Astolfi L, Valdés-Sosa PA et al (2019) Electrophysiological brain connectivity: theory and implementation. IEEE Trans Biomed Eng 66:2115–2137

Characterization of Electroencephalogram Obtained During the Resolution of Mathematical Operations Using Recurrence Quantification Analysis A. P. Mendes, G. M. Jarola, L. M. A. Oliveira, G. J. L. Gerhardt, J. L. Rybarczyk-Filho and L. dos Santos

Abstract

This work aims to apply recurrence quantification analysis to characterize electroencephalogram signals in resting-state and active-state situations involving the resolution of mathematical operations. The best values of the incorporation parameters (delay time, embedding dimension, and threshold) are investigated to obtain the best classification results. The recurrence quantification analysis measures used are recurrence rate, determinism, average length of diagonal structures, Shannon entropy, laminarity, and maximum length of vertical structures. To compare the resting and active states, the Mann-Whitney test was performed. The results demonstrate that the resting state is predominant in the alpha and theta experiments. Statistical differences were observed in the comparisons between the resting and active states for the alpha and theta experiments, between the active state of the alpha experiment and the active state of the theta experiment, and between the resting and active states for the theta experiments. Keywords

Recurrence quantification analysis • EEG • Resolution of mathematical operation • Nonlinear analysis

A. P. Mendes · L. dos Santos (B) Instituto Científico e Tecnológico, Universidade Brasil, Rua Carolina Fonseca, 235, Itaquera, São Paulo, Brazil e-mail: [email protected] G. M. Jarola · L. M. A. Oliveira · J. L. Rybarczyk-Filho Depto. Física e Biofísica, Instituto de Biociências de Botucatu, Universidade Estadual Paulista “Júlio de Mesquita Filho” UNESP, Sao Paulo, Brazil G. J. L. Gerhardt Centro de Ciências Exatas e Tecnologia, Universidade de Caxias do Sul, Caxias do Sul, Brazil

1 Introduction

Biosignals are physiological phenomena that carry information originating from a biological system. Each biosignal has distinct characteristics, can be generated by different body parts, and is captured through sensors or peripheral devices [1]. Signals originating from brain and cardiac activity can be classified as nonlinear time series and reflect complex dynamics. Nonlinear dynamic methods, such as recurrence quantification analysis (RQA), have been widely applied to these signals and have successfully demonstrated useful biomedical applications [2].

Electroencephalogram (EEG) recordings can be divided into two categories: invasive EEG, recorded from electrodes implanted inside the cranium, and non-invasive EEG, obtained from electrodes attached to the surface of the scalp (used in this work). The signals obtained through the EEG represent synaptic potentials with frequencies between 0.5 and 30 Hz. This frequency spectrum is divided into bands: delta waves (0.5–4 Hz), theta waves (4–8 Hz), alpha waves (8–14 Hz), beta waves (12–22 Hz) and gamma waves (> 30 Hz) [3].

Advanced studies with EEG signals and nonlinear analysis methods have been well accepted for creating tools for the early detection of brain disorders and diseases, such as epilepsy [4,5], autism [6,7], depression [8] and sleep states [9]. The literature contains promising results on the development of devices for brain-computer interfaces (BCI), applications that use recognition of human emotions for robotics, and systems that recognize states of mental fatigue such as somnolence, lethargy, or directed attention fatigue [2]. This study applied recurrence quantification analysis to EEG signals to identify the changes between the states of rest and activity, and to assess whether recurrence quantification analysis can be used to develop BCI devices.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_251


2 Theoretical Framework

2.1 Electroencephalogram

The German researcher Hans Berger performed the first EEG recording in humans in 1928. Berger introduced the term electroencephalogram when he demonstrated that it was possible to register and study changes in the brain's electrical patterns by attaching electrodes to the scalp in a non-invasive procedure [10].

The EEG is a bioelectrical signal from nerve endings in the cranial cavity: it is the propagated variation of the postsynaptic potentials, summed spatially and temporally [11]. The postsynaptic signals produce action potentials across cellular membranes. The action potential is a variation in the cellular membrane potential induced by its permeability to sodium and potassium ions (Na+ and K+). In general, the EEG consists of electromagnetic waves produced by cerebral electrical activity, measured in cycles per second (hertz, Hz). The pattern of wave frequency and amplitude changes according to the neural activity and is directly related to states of relaxation, meditation, concentration, somnolence, and others. Even though each individual has their own profile of brain activity, basic behaviors can be found for each state, which permits studying the conditions of consciousness. These behaviors are basically divided into four types of cerebral waves: beta, alpha, theta, and delta [12] (Fig. 1).

Delta waves are in frequencies between 0.5 and 4 Hz. They predominate during deep sleep and generally have relatively large amplitudes [13]. Theta waves can be observed in the range of 4–8 Hz and occur when there is an increased demand for memory [13]. These waves are recorded during the waking state, the paradoxical sleep phase, and problem solving [14,15]. Alpha waves lie between the frequencies of 8 and 14 Hz. They are the most dominant rhythm in normal individuals, predominate in the parieto-occipital region, and are characterized by feelings of relaxation and meditation [13].

Fig. 1 Types of brain activity waves

Beta waves are recorded in the frequency range from 14 to 30 Hz. Beta activity is characterized by states of increased attention, concentration, and cognition, and can be seen in the frontal and parietal regions [13]. In this work we address only the alpha and theta waves.

2.2 Recurrence Plot and Recurrence Quantification Analysis (RQA)

Poincaré's idea of recurrence, from 1890, states that after a sufficiently long but finite time, certain systems will return to a state very close to the initial state. In 1987 Eckmann proposed recurrence plots, a tool whose purpose is to visualize the dynamics of recurrent systems, providing important details about the temporal evolution of the trajectories, where known patterns suggest specific behaviors of each system [16]. The patterns are divided into:

• Homogeneous recurrences present small points when compared to the recurrence plot as a whole. This is equivalent to saying that the duration of diagonal or vertical lines is short in relation to the total exposure time of the system (Fig. 2a).
• Periodic recurrences occur in oscillating systems and present fully filled diagonal lines parallel to the main diagonal (Fig. 2b).
• Drift recurrences appear when the system has a slow parameter variation (Fig. 2c).
• Discontinuous recurrences have white bands and appear when the system has abrupt changes in dynamics (Fig. 2d).

The recurrence plot is a square matrix of order N where the evolution of the dynamic states is represented along its axes. This matrix is composed of black points and white points, where a recurrent state of a system x_i at instant j is represented by a black point at coordinate (i, j) [16]. This is represented through the equation:

R_{i,j}(ε) = Θ(ε − ||x_i − x_j||),  x_i ∈ ℝ^m,  i, j = 1, …, N    (1)

where x_i represents each of the states formed by the system; ε, the neighborhood radius, defines which points are inside or outside of the neighborhood, that is, which are considered recurrent or non-recurrent; and Θ is the Heaviside function, responsible for mapping the recurrent points.

The recurrence plots are analyzed by observing their visual characteristics. This interpretation is made in a subjective way, which may lead to misinterpretations.
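Equation (1) translates directly into code. The sketch below is a naive O(N²) implementation; treating a distance exactly equal to the threshold as recurrent is our convention, not the paper's:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix R[i, j] = Theta(eps - ||x_i - x_j||) of Eq. (1).

    x   : (N, m) array of m-dimensional embedded states x_1..x_N.
    eps : neighborhood radius.
    """
    # pairwise Euclidean distances between all states
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d <= eps).astype(int)   # 1 = recurrent (black point), 0 = white
```

The matrix is symmetric with a filled main diagonal, which is why the visual patterns described above always appear mirrored about that diagonal.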
To make the analysis more reliable, Zbilut proposed recurrence quantification analysis (RQA) in 1992, which was improved in 2002 by Marwan


Fig. 2 Graphic types of recurrences: a homogeneous; b periodical; c drift; d discontinuous

[17–19]. RQA is based on the presence of diagonal and vertical lines, with measures able to quantify the recurrence. The main measurements are:

• Recurrence rate (RR) is a measurement based on the density of the points. It represents the probability that a state will return to its neighborhood in the phase space.
• Determinism (DET) refers to the predictability of the dynamic system. In periodic systems, where predictability is high, determinism has a value of 1.
• Average diagonal line (ADL) is the average time during which two segments evolve their trajectories in a similar way.
• Entropy (ENTR) reflects the complexity of the deterministic structure in the system and is obtained through the frequency distribution of diagonal lines greater than a certain length.
• Laminarity (LAM) is related to the quantity of laminar phases in the system. It provides the number of vertical structures, representing the occurrence of recurrent states that change over time.
• Trapping time (TT) estimates the average time the system will remain in the same state.
• Maximal length of the vertical lines (Vmax) measures the longest length of vertical lines.
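As a concrete example, the two most common measures, RR and DET, can be read off a binary recurrence matrix as follows. The minimum diagonal length lmin = 2 and the exclusion of the main diagonal are common conventions, assumed here rather than taken from this paper:

```python
import numpy as np

def rqa_rr_det(R, lmin=2):
    """Recurrence rate and determinism from a binary recurrence matrix R."""
    N = R.shape[0]
    rr = R.sum() / (N * N)              # RR: density of recurrent points
    lengths = []                        # lengths of diagonal line segments
    for k in range(-(N - 1), N):
        if k == 0:
            continue                    # skip the trivial main diagonal
        run = 0
        for v in np.diagonal(R, offset=k):
            if v:
                run += 1
            elif run:
                lengths.append(run)
                run = 0
        if run:
            lengths.append(run)
    total = sum(lengths)
    # DET: fraction of recurrent points lying on diagonals of length >= lmin
    det = sum(l for l in lengths if l >= lmin) / total if total else 0.0
    return rr, det
```

The other diagonal-based measures (ADL, ENTR) follow from the same `lengths` histogram, while LAM, TT and Vmax repeat the run-length scan over columns instead of diagonals.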

3 Materials and Methods

3.1 Database

The data were acquired from the research project Alfa2bit of the Physics and Biophysics Department of the Universidade Estadual Paulista (UNESP), which aims to create on and off keys using brain activity. Sixty volunteers participated, including both men and women between 18 and 37 years old. The project was approved by the Research Ethics Committee of the Botucatu Medical School under the number 59229916.4.0000.5411. The signals were obtained with the Emotiv EPOC+ device, which captures the signals at a sampling rate of 128 Hz and applies a double notch filter at 50 and 60 Hz to remove electrical interference. The device has 14 electrodes

Fig. 3 Electrode orientation and placement according to the international 10–20 system

positioned according to the international 10–20 system in the positions: anterior-frontal (AF3 and AF4), frontal (F3 and F4), anterior temporal (F7 and F8), fronto-central (FC5 and FC6), posterior temporal (T7 and T8), parietal (P7 and P8), and occipital (O1 and O2) (Fig. 3). The data were recorded on the computer via a Bluetooth connection. The signals for the alpha and theta experiments were collected during resting and active conditions. The resting condition consisted of remaining in a relaxed state for the first minute; this condition was applied in both experiments. The active condition occurred in the second minute of the test. In the alpha experiment, the volunteers remained with their eyes closed for 1 min. In the theta experiment, volunteers were asked to perform multiplication calculations between 4 numbers within a time limit of 1 min; the numbers were generated randomly through a number generator. These conditions were repeated 10 times for each volunteer in both experiments.

1722

A. P. Mendes et al.

Fig. 4 Workflow to determine the incorporation values

3.2 Obtaining RQA Measurements

The construction of the recurrence plot and its measures requires a careful choice of the incorporation parameters (delay time, dimension, and threshold). Inappropriate parameters degrade the quality of the reconstructed recurrence plot. The delay was obtained through the mutual information function, defined by:

I(\tau) = \sum_{n=1}^{N} P(x_n, x_{n+\tau}) \log_2 \frac{P(x_n, x_{n+\tau})}{P(x_n)\,P(x_{n+\tau})}   (2)

which acts as a nonlinear correlation function, where I(τ) is the amount of information obtained about x_{n+τ} from observing x_n, identifying correlated but independent points [20]. Among the values given by the function, the first minimum found must be chosen [21]. The dimension was obtained using the false nearest neighbors function [22], in which the consistency of the reconstruction at each dimension is checked; if the result is not favorable, the dimension is increased and a new iteration is performed, repeating the process until an acceptable dimension is obtained. The incorporation parameters (delay time τ, embedding dimension m, and recurrence threshold ε) and the recurrence quantification measures were calculated using the Cross Recurrence Plot Toolbox (CRP Toolbox) developed by Norbert Marwan, through the functions Mutual Information (MI) [20], False Nearest Neighbors [22], and CRQA [18]. The whole process can be seen in Fig. 4.
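The delay selection by the first minimum of Eq. (2) can be illustrated with a simple histogram estimate of the mutual information (a sketch under our own naming, not the toolbox's MI function):

```python
import numpy as np

def mutual_information(x, tau, bins=16):
    """Histogram estimate of I(tau) between x_n and x_{n+tau} (Eq. 2)."""
    a, b = x[:-tau], x[tau:]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    outer = np.outer(px, py)
    nz = pxy > 0                      # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / outer[nz])))

def first_minimum_delay(x, max_tau=40):
    """Return the first local minimum of I(tau), the recommended delay [21]."""
    mi = [mutual_information(x, t) for t in range(1, max_tau + 1)]
    for i in range(1, len(mi) - 1):
        if mi[i] < mi[i - 1] and mi[i] <= mi[i + 1]:
            return i + 1              # tau values are 1-based
    return int(np.argmin(mi)) + 1     # fall back to the global minimum
```

For a noisy oscillation, the first minimum falls near a quarter of the oscillation period, where consecutive samples are least redundant.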

3.3 Statistical Analysis

To test normality, the Kolmogorov-Smirnov test was applied to each RQA feature; the data proved to be non-parametric. Mann-Whitney tests were then applied for the following comparisons: AER-AEA (alpha experiment resting vs. alpha experiment active), TER-TEA (theta experiment resting vs. theta experiment active), AER-TER (alpha experiment resting vs. theta experiment resting), and AEA-TEA (alpha experiment active vs. theta experiment active).

4 Results

Figure 5 displays the series resulting from the different derivations for each pair of electrodes. The red line represents the transition from the resting time to the active time. This series represents the eighth alpha-wave recording of a 19-year-old female volunteer during the alpha experiment, between the resting and active (eyes closed) states.

Fig. 5 Time series resulting from the different derivations for each pair of electrodes


Fig. 6 Recurrence plot obtained from the derivation between electrodes O1 and O2 for: a rest moment (τ = 2, m = 6, and ε = 0.5); b activity moment (τ = 2, m = 6, and ε = 0.5)

Fig. 7 Distribution of the delay values (τ) found for the 16,800-series set of the survey

Figure 6 contains recurrence plots formed from a data series of 7,680 samples. The graphs represent the resting and active times. The RPs were calculated using delay τ = 2, dimension m = 6, and threshold ε = 0.5. Observe the difference in the pattern of the shapes between the two RPs, especially in the filling of black points (recurrent points) and in the formation of diagonal and vertical lines. Figure 7 shows the delay time values resulting from applying the mutual information method to all data series in

this study. The delay equal to 2 served the largest number of series, totaling 3,413. This value τ = 2 was used to calculate the RQA measurements for the whole set. In this case, τ = 1 corresponds to one sampling interval, approximately 7.8 ms at 128 Hz. Figure 8 charts the values found for the dimension parameter, through the false nearest neighbors function applied to the entire data set. The dimension value of 6 served the largest number of series in the database, approximately 7000 series. Therefore, the value of 6 was adopted in the process of obtaining the RQA measures for all series.


Fig. 8 Distribution of the dimension m values found for the 16,800 series set of the survey

The RQA measures were obtained from each EEG signal, considering the delay time and dimension values already obtained and threshold values of 0.5 and 0.8. According to Webber and Zbilut, the threshold should be chosen based on the set with the lowest average recurrence rate [23]. The average recurrence rates found were 9.2% for the 0.5 threshold and 20.7% for the 0.8 threshold. Therefore, the set of RQA measures adopted was the one obtained with the threshold of 0.5, for which the average recurrence rate was lowest.

4.1 Analysis Using the RQA Measures

The following analysis considers the comparisons described in Sect. 3.3. The Mann-Whitney test results, applied to the RQA values, indicated statistical differences in the majority of the volunteers in the comparisons, as shown in Table 1. The difference between the RQA measurements at the rest and activity times was calculated for each experiment; the result highlights the most prevalent moment. Both experiments, alpha and theta, presented a greater predominance at the resting times, especially the alpha experiment, where this predominance was found in the vast majority of volunteers, as shown in Fig. 9.

5 Discussion

In the comparison between the alpha and theta experiments in the resting state (AER-TER), the results presented a low occurrence of statistical differences across all volunteers, in all the RQA measurements. This behavior was expected to some degree, because the resting state was the same in both experiments. Evaluating the statistical tests for the other comparisons, although there were statistical differences for most volunteers in some comparisons and RQA measurements, many measurements and comparisons did not present differences. The study by Seco in 2019 [24], using the same database but a different approach, showed that changes in the behavior of the signals during the alpha experiment were more easily detected in the extra-occipital regions of the brain. For the theta experiment, during the calculation activity, Earle in 1996 [25] showed that this task was associated mainly with electrodes in the frontal and post-temporal regions. Taking into account the different regions of the scalp, we can assume that these different activity levels by region may have influenced this result.

6 Conclusion

Thus, non-linear analysis methods can help to identify changes in patterns in the electroencephalogram signal, opening the possibility of identifying the predominant wave type present during an action or activity. The identification of these patterns can open possibilities for the development of devices that interact through brain activity and that can assist individuals with impaired motor ability caused by accidents or illness.

Table 1 Comparisons and RQA measures that showed statistical differences in the majority of the volunteers, by percentage of volunteers

Comparison   RR     DET    ADL    ENTR   LAM    TT     LLVL
AER-AEA      51.67  48.33  48.33  50.00  46.67  51.67  40.00
TER-TEA      46.67  55.00  51.67  51.67  56.67  51.67  48.33
AER-TER      33.33  36.67  36.67  35.00  33.33  35.00  40.00
AEA-TEA      48.33  56.67  41.67  41.67  46.67  43.33  51.67


Fig. 9 Result obtained from the difference between resting and active state in each RQA measure for the experiments in the 60 volunteers. a Alpha experiment, b theta experiment

To improve this work, the feasibility of applying other strategies must be studied in the future, considering the application of filters to the signals, the location of the electrodes on the scalp, and the use of a classifier such as SVM, decision tree, or random forest.

Acknowledgements L. dos Santos thanks the Fundação de Amparo à Pesquisa do Estado de São Paulo (Fapesp grant no. 2018/03517-8) for financial support.

Conflict of Interest The authors declare that there is no conflict of interest regarding the publication of this manuscript.

References

1. Lins CT (2016) Fundamentos da medição do EEG: uma introdução. SEA - Seminário de Eletrônica e Automação 1–6
2. Rodríguez-Bermúdez G, García-Laencina PJ (2015) Analysis of EEG signals using nonlinear dynamics and chaos: a review. Appl Math Inform Sci 9:2309–2321
3. Carvalho L (2008) Instrumentação médico-hospitalar. Manole, Barueri
4. Ouyang G, Li X, Dang C, Richards DA (2008) Using recurrence plot for determinism analysis of EEG recordings in genetic absence epilepsy rats. Clin Neurophysiol 119:1747–1755
5. Wassila H, Laurent P, Haouaria S (2007) Interpretation of RQA variables: application to the prediction of epileptic seizures. In: International conference on signal processing proceedings. ICS, vol 4
6. Veronica R, Paula F, Schmidt RC, Richardson MJ (2016) Recurrence plots and their quantifications: expanding horizons, p 180
7. Heunis T, Aldrich C, Peters JM et al (2018) Recurrence quantification analysis of resting state EEG signals in autism spectrum disorder—a systematic methodological exploration of technical and demographic confounders in the search for biomarkers, pp 1–17
8. Rajendra AU, Oliver F, Kannathal N, Chua T, Swamy L (2005) Nonlinear analysis of EEG signals at various sleep stages. Comput Methods Programs Biomed 80:37–45
9. Rajendra AU, Shreya B, Oliver F et al (2015) Nonlinear dynamics measures for automated EEG-based sleep stage detection. Eur Neurol 74:268–287
10. Mead LC (1949) Electrical activity of the brain. Report (U.S. Naval Medical Research Laboratory), vol 4, pp 1–4
11. Caparelli TB (2007) Projeto e desenvolvimento de um sistema multicanal de biotelemetria para detecção de sinais ECG, EEG e EMG, p 58
12. Cabral FJ (2013) Utilização de ondas cerebrais para controle de componentes eletrônicos, p 31
13. Campisi P, La Rocca D (2014) Brain waves for automatic biometric-based user recognition. IEEE Trans Inform Forens Secur 9:782–800
14. Timo-Iaria C, Pereira WC (1971) Mecanismos das ondas elétricas cerebrais. Arquivos de Neuro-Psiquiatria 29:131–145
15. Klimesch W, Russegger H, Pachinger T (1996) Theta band power in the human scalp EEG and the encoding of new information. NeuroReport 1235–1240
16. Eckmann JP, Oliffson Kamphorst O, Ruelle D (1987) Recurrence plots of dynamical systems. Europhys Lett 4:973–977
17. Zbilut JP (1992) Recurrence plots. Phys Lett A 171:199–203
18. Marwan N, Kurths J (2002) Nonlinear analysis of bivariate data with cross recurrence plots. Phys Lett A 302:299–307
19. Marwan N, Romano MC, Thiel M, Kurths J (2007) Recurrence plots for the analysis of complex systems. Phys Rep 438:237–329
20. Fraser AM, Swinney HL (1986) Independent coordinates for strange attractors from mutual information. Phys Rev A 33:1134–1140
21. Hegger R, Kantz H, Schreiber T (2012) Practical implementation of nonlinear time series methods: the TISEAN package, p 413
22. Kennel MB, Brown R, Abarbanel HDI (1992) Determining embedding dimension for phase-space reconstruction using a geometrical construction. Phys Rev A 45:3403–3411
23. Webber CL Jr, Zbilut JP (2005) Recurrence quantification analysis of nonlinear dynamical systems. In: Tutorials in contemporary nonlinear methods for the behavioral sciences, pp 26–94
24. Seco GBS, Gerhardt GJL, Biazotti AA, Molan AL, Schönwald SV, Rybarczyk-Filho JL (2019) EEG alpha rhythm detection on a portable device. Biomed Signal Process Control 52:97–102
25. Earle JBB, Garcia-Dergay P, Anthony M, Christopher D (1996) Mathematical cognitive style and arithmetic sign comprehension: a study of EEG alpha and theta activity. Int J Psychophysiol 21:1–13

Performance Evaluation of Machine Learning Techniques Applied to Magnetic Resonance Imaging of Individuals with Autism Spectrum Disorder V. F. Carvalho, G. F. Valadão, S. T. Faceroli, F. S. Amaral and M. Rodrigues

Abstract

Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder characterized by persistent deficits in communication and social interaction and by restricted and repetitive patterns of behavior, interests, or activities. Its diagnosis is still a challenge due to the diversity of manifestations of autistic symptoms, requiring interdisciplinary assessment. This work investigates the performance of feature extraction and machine learning techniques applied to magnetic resonance imaging (MRI) for the classification of individuals with ASD. Three feature extraction techniques were applied to the MRI: histogram, histogram of oriented gradients, and local binary pattern. These features were used to compose the input data of the Support Vector Machine and Artificial Neural Network algorithms. The best result shows an accuracy of 89.66% and a false negative rate of 6.89%. The results obtained suggest that magnetic resonance analysis can contribute to the diagnosis of ASD as studies in the area advance. Keywords

Autism spectrum disorder • Magnetic resonance imaging • Features extraction • Support vector machine • Artificial neural network

1 Introduction

The Autism Spectrum Disorder (ASD) is a disorder of neurodevelopment, defined by persistent deficits in social communication and social interaction in multiple contexts. It shows restricted and repetitive patterns of behavior, interests, or activities, with early symptoms in the developmental period, causing losses in the social functioning of the individual's life [1]. The main features are related to language delay, difficulties in comprehension, echolalic speech, and use of literal language with little or no social initiative. These symptoms begin in childhood, causing limitations and losses in daily life [2]. Because it is a developmental disorder defined from a behavioral point of view, with multiple etiologies and many degrees of severity, its diagnosis is quite complex [3,4]. The criteria currently used to diagnose autism are those described in the Diagnostic and Statistical Manual of Mental Disorders (DSM) [5]. Currently the DSM-V is used, with some changes compared to the previous edition [6]. The precise etiopathogenesis of ASD has not yet been established, but some works suggest that brain structures may show alterations, including the frontal lobes, amygdala, cerebellum [7], corpus callosum [8], and basal ganglia [9]. These results raise the possibility of aiding the diagnosis of autism using brain scans. Magnetic resonance imaging has been studied for some years with this objective, as shown by [10–12]. Magnetic resonance imaging is a versatile technique that obtains medical images able to show different brain structures and their minimal changes [13]. Recent studies suggest that magnetic resonance imaging (MRI), when analysed with machine learning techniques, could help in ASD diagnosis [14]. In order to contribute to the evaluation of MRI as an auxiliary exam in the diagnosis of autism, this paper presents a performance comparison among several machine learning based image processing techniques applied to the classification and identification of individuals with ASD.

V. F. Carvalho · G. F. Valadão (B) · S. T. Faceroli · F. S. Amaral · M. Rodrigues
Mechatronics Engineering, IF SUDESTE MG, Juiz de Fora, Brazil

2 Materials and Methods

A system was created in Python, version 3.6, which follows the methodology presented in Fig. 1. First, a database that could provide good-quality, reliable magnetic resonance images, both from patients diagnosed with

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_252


Fig. 1 Methodology workflow

ASD and from patients in a neurotypic control group, was searched. After obtaining the images, their features were extracted by means of three different techniques. The data were separated into 70% of the patients for training and 30% for testing, that is, 66 patients for training and 29 for testing for each slice, so that no patient is reused between training and testing. The features were used to train, on the training data, two different types of predictive machine learning models. To validate the systems, the models were applied to the images set aside for validation. Finally, evaluation metrics were used to validate and compare these systems.

2.1 Database

MRI data analyzed in this work were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (http://adni.loni.usc.edu).1 The ADNI was launched in 2003 as a public-private partnership, led by principal investigator Michael W. Weiner, MD. The primary goal of ADNI has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer's disease (AD). The Autism Brain Imaging Data Exchange (ABIDE), which belongs to ADNI, was the database used in the development of this work. It has 1112 patients, of which 539 were diagnosed with ASD and 573 are control patients [15].

1 Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (http://adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wpcontent/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf.

Each patient has several MRI images, separated into three cuts (axial, coronal, and sagittal). The first step in this project was to select which patients would enter the study. Due to the number of patients and the amount of images per patient, it would be impracticable to use the whole image set. Besides, some patients have different numbers of images registered. To deal with these issues, the largest group of patients from the original database with the same number of images registered for each of the cuts was considered. This group has 55 patients diagnosed with ASD and 40 control patients, which yields a selected database of 95 patients. All patients have 512 images in the axial cut, 480 in the coronal cut, and 160 in the sagittal cut; all these images were made available by ABIDE in 2D form. However, the beginning and the end of each plane contain images without any brain information, so these portions of each plane were discarded. For the axial plane, the selected slices were from 150 to 470; for the coronal plane, from 30 to 400; and for the sagittal plane, from 20 to 140.
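The slice selection described above can be expressed as a small helper (illustrative only; the names are ours, not from the original code):

```python
# Slice ranges that contain brain information, per plane (from the text above)
SLICE_RANGES = {
    "axial": (150, 470),
    "coronal": (30, 400),
    "sagittal": (20, 140),
}

def select_slices(slices, plane):
    """Discard the empty beginning/end of a plane, keeping the reported range."""
    lo, hi = SLICE_RANGES[plane]
    return slices[lo:hi]
```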

2.2 Feature Extraction Techniques

Three feature extraction techniques were used: histogram, histogram of oriented gradients (HOG), and local binary pattern (LBP). These extractions were made slice by slice; in other words, each technique was applied to each slice. This procedure was done for all 840 utilized slices. An image histogram describes the frequency of the intensity values that occur in the image. In an 8-bit gray scale image, for example, the histogram gives the frequency of the 2^8 = 256 possible intensity values, ranging from 0, equivalent to black, to 255, equivalent to white. Therefore, a darker image has a histogram concentrated at values closer to 0, while a lighter image concentrates near 255. HOG is a simple and fast method based on the idea that the appearance and shape of an object can be described by the directions of the edges or by the distribution of the local intensity gradients. This descriptor summarizes the distribution measurements in regions of the image, being particularly useful when it is necessary to recognize the texture of objects [16]. LBP is a method of describing texture in the local neighborhood and can be considered a gradient of binary direction. The LBP operator labels the pixels of an image by thresholding the neighbors of each pixel by the central pixel value and presents the result in binary form [17].
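For illustration, the histogram and a basic 3×3 LBP can be written with NumPy alone (a sketch under our own conventions; the HOG descriptor is more involved and is typically taken from a library such as scikit-image's `skimage.feature.hog`):

```python
import numpy as np

def intensity_histogram(img, bins=256):
    """Frequency of each gray level of an 8-bit image (0 = black, 255 = white)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist

def lbp_basic(img):
    """3x3 LBP: threshold the 8 neighbors by the central pixel and read
    the comparison results as an 8-bit binary code (image borders skipped)."""
    img = img.astype(np.int32)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # Neighbor offsets, enumerated clockwise from the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbor >= center).astype(np.int32) << bit
    return code
```

The histogram of the LBP code image is what is usually fed to the classifier as the texture feature vector.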


2.3 Predictive Models of Machine Learning

To perform the classification, two techniques were used: the support vector machine (SVM) and the artificial neural network (ANN). Both are algorithms that learn to identify patterns based on the information given to them and the labels of each piece of information; both belong to the supervised classification paradigm. The support vector machine, proposed by Vapnik [18], is one of the most popular learning algorithms and can be used for both classification and regression problems on structured data. When used in regression problems the technique is called Support Vector Regression (SVR). The SVM aims to find an efficient way to separate a high-dimensional space with hyperplanes. SVM training thus produces a function that minimizes the training error while maximizing the margin that separates the data classes. The margin can be calculated as the perpendicular distance between the separating hyperplane and the hyperplanes generated from the nearest points [19]. In addition, these planes need not be linear; they can vary based on the polynomial degree of each one. For the ANN, information is processed in computational cells, called artificial neurons, which relate the input data to the output labels. Developing an ANN requires determining its architecture, that is, the number of neurons per layer and the number of layers in the network.
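A minimal scikit-learn sketch of the two classifiers, configured with the parameter values reported in the Training section (the helper names are ours):

```python
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def make_svm(degree):
    """SVM with a linear kernel for degree 1, polynomial kernel otherwise."""
    kernel = "linear" if degree == 1 else "poly"
    return SVC(kernel=kernel, degree=degree,
               decision_function_shape="ovr", gamma="scale")

def make_ann(neurons=5):
    """MLP with 3 hidden layers of 3-5 ReLU neurons, trained with Adam."""
    return MLPClassifier(hidden_layer_sizes=(neurons,) * 3,
                         activation="relu", solver="adam",
                         alpha=0.0001, learning_rate="constant",
                         learning_rate_init=0.001, tol=0.0001,
                         momentum=0.9, max_iter=2000, random_state=0)
```

Each model is then fitted on the 70% training split of the extracted features and evaluated on the remaining 30%.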

2.4 Training

The SVM training is based on hyperplanes. In this study, they were used based on their polynomial degrees. For each feature extraction technique, the degree was varied from 1 to 15; higher degree values produced similar or worse results. The SVM kernel was changed to match the degree variation: for degree 1 the kernel was linear, and above that the kernel was polynomial. Thus, for each feature extraction technique, 15 results were generated for each slice. The other parameters were: decision function shape: one-vs-rest (ovr); gamma: scale. Regarding the ANN training, the Multilayer Perceptron technique was used. The data were separated into 70% for training and 30% for testing. The network had 3 hidden layers, and the number of neurons per layer was varied from 3 to 5. Keeping the network this small helped avoid overfitting, which could occur with a larger number of neurons. The other parameters were: solver: adam; alpha: 0.0001; learning rate: constant; initial learning rate: 0.001; tolerance: 0.0001; momentum: 0.9. The training algorithm chosen was back-propagation, a supervised algorithm that is among the most used and most important in artificial neural networks. Its main advantage is that it works with multiple layers and solves nonlinearly separable problems. The activation function used for network training was the rectified linear unit (ReLU), as it is widely used and reduces training time.

2.5 Metrics

In order to compare the results obtained by each training with the various features, two metrics were used: accuracy, which relates the number of correct predictions to the total number of images per slice, and the number of false negatives, which counts the patients diagnosed with ASD that the prediction classified as control. The false-negative metric was chosen because patients classified as false negatives do not receive adequate assistance and may seek ineffective treatments or interrupt their search for treatment.

3 Results

The images obtained using the selected database are exemplified in Figs. 2 and 3, which represent slice 366 of the axial plane for the group diagnosed with ASD and the control group, respectively.

Fig. 2 Slice 366 of the axial plane of a patient diagnosed with ASD

Fig. 3 Slice 366 of the axial plane of a control patient
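The two evaluation metrics (accuracy and false negatives, Sect. 2.5) can be sketched as follows, taking ASD as the positive class (an illustrative NumPy implementation with our own naming):

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of correct predictions among all images of a slice."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def false_negative_rate(y_true, y_pred, positive=1):
    """Fraction of ASD (positive) patients predicted as control."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    pos = y_true == positive
    if not pos.any():
        return 0.0
    return float((y_pred[pos] != positive).mean())
```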

Table 1 presents the best results of the metrics used for the different simulated models. For each proposed extraction technique, the two predictive models were evaluated for each type of MRI plane. These results were chosen as the best among all trained slices, using each of the extraction techniques. All 840 slices were analyzed.

4 Discussion

According to the results summarized in Table 1, the HOG feature extraction technique presented, across the set of planes, the best results regarding the performance of the tested machine learning networks. The histogram technique also showed significant results. Regarding the tested machine learning techniques, both presented similar results in terms of precision; however, the artificial neural network had fewer false negatives than the SVM. Evaluating the MRI planes, the axial plane presented better results, both in the accuracy of the network and in the percentage of false negatives. In fact, the algorithm using the histogram technique with the SVM showed 0% false negatives. The best result obtained in the tests was for the axial plane, extracting the features with HOG and classifying with the SVM algorithm. The classifier had an accuracy of 89.66%, which indicates that improvement of the method can lead to significant results for the use of magnetic resonance imaging exams to assist in the process of diagnosing people with ASD.

5 Conclusion

This work analyzed the performance of computer vision techniques applied to MRI for classifying individuals with ASD. The main contribution of the study is to evaluate the possibility of using magnetic resonance imaging to assist in the diagnosis of autism spectrum disorder, which is still a challenge due to the complexity of the disorder. Based on the results obtained, some of the proposed techniques had quite significant results, indicating that the magnetic resonance images of ASD patients may contain relevant information to aid the diagnosis. Compared with previous articles in this area, the main methodological contribution is the application of the techniques to each individual MRI slice. In this way, it is possible to observe how the algorithms behave in each brain area, building an information system capable of evaluating the best techniques and the most relevant regions. Future work can improve the image processing and machine learning techniques, seeking to increase the classification accuracy. In addition, such research can provide important indications, such as the main brain locations of the characteristics of individuals with ASD and the variation of such characteristics with age and with stimuli. Deep learning algorithms may also be tested; in that case, data augmentation techniques are needed for repository expansion, considering that the databases available for this type of study do not have a sufficient amount of data for deep learning.

Table 1 Results

Extraction technique   Predictive model   Plane     Slice   Accuracy (%)   False negatives (%)
HOG                    SVM                Axial     366     89.66           6.89
HOG                    SVM                Coronal   398     86.21          13.79
HOG                    SVM                Sagittal  120     75.86          20.68
HOG                    ANN                Axial     366     86.21           3.44
HOG                    ANN                Coronal   398     82.76          13.79
HOG                    ANN                Sagittal   48     75.86           6.89
Histogram              SVM                Axial     177     86.21           0.00
Histogram              SVM                Coronal   398     86.21          13.79
Histogram              SVM                Sagittal  120     75.86          20.68
Histogram              ANN                Axial     257     82.76          10.34
Histogram              ANN                Coronal   398     82.76          13.79
Histogram              ANN                Sagittal   48     75.86           6.89
LBP                    SVM                Axial     296     82.76          10.34
LBP                    SVM                Coronal   283     79.31           3.44
LBP                    SVM                Sagittal   80     72.41           3.44
LBP                    ANN                Axial     191     79.31          17.24
LBP                    ANN                Coronal   287     82.76          10.34
LBP                    ANN                Sagittal   63     76.31          17.24


Acknowledgements Data collection and sharing for this project was funded by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie, Alzheimer’s Association; Alzheimer’s Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research and Development, LLC; Johnson and Johnson Pharmaceutical Research and Development LLC; Lumosity; Lundbeck; Merck and Co., Inc.; Meso Scale Diagnostics, LLC; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer’s Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California. The authors thank to the IF Sudeste MG—Campus Juiz de Fora, through the DPIPG for the support, and all people who collaborated directly or indirectly for the construction of this work.

4.

Conflict of Interest The authors declare that they have no conflict of interest.


Biomedical Signal Data Features Dimension Reduction Using Linear Discriminant Analysis and Threshold Classifier in Case of Two Multidimensional Classes

E. R. Pacola and V. I. Quandt

Abstract

The linear discriminant analysis algorithm is a statistical tool, requiring no iterative training, capable of determining the best data projection, reducing the dimension of the feature space. The new space has a number of dimensions equal to the number of input classes minus one. In the specific case of two input classes, the resulting projection is a one-dimensional space, where a simple threshold classifier is able to determine whether a sample belongs to one class or the other. This paper explains the theory and this special case, applying the technique to biomedical data: pulmonary crackle characterization and detection of spikes in EEG signals. Compared with machine learning techniques, this statistical analysis proved to be a simple, fast, and efficient way to classify patterns.

Keywords

LDA • ROC curve • Feature dimension reduction • Biomedical patterns classifier

1 Introduction

In pattern recognition, one of the biggest efforts is to reduce the number of dimensions of multivariate classes [1]. Known as the curse of dimensionality, this term was first introduced by Bellman in 1961 [2] and refers to the computational effort needed to handle growing multivariate data classes. When extracting characteristics from data to be classified, it is common to end up with characteristics that overlap each other or are somehow correlated, making the classification process difficult. To enhance classification performance, new non-correlated characteristics are extracted, trying to better distinguish one class from another.

Human beings may easily track and classify data of one, two, or three dimensions, but this capacity degrades very fast beyond four dimensions. Considering this, some tool is necessary to significantly lower the number of dimensions of a problem to an order that one can manipulate. The linear discriminant analysis (LDA) algorithm aims to reduce the dimension of a multivariate data class, preserving as much discriminatory information between classes as possible [3,4]. LDA takes C classes in a multidimensional space of characteristics and transforms this space into a new space having C − 1 dimensions. This new space presents the highest possible distance among classes. One example of LDA may be seen in Fig. 1, where in Fig. 1a classes Class1, Class2, and Class3 are shown. Each class is represented by three variables, i.e., X, Y, and Z. As presented in Fig. 1a, the classes projected onto these dimensions overlap, and a classifier aiming to identify one class or another needs to be carefully designed. The LDA algorithm takes this example containing three classes of data, i.e., C = 3, and determines a new projection containing C − 1 dimensions. The result is a bi-dimensional space described by the new variables LDA_X and LDA_Y, as presented in Fig. 1b. The three classes are re-projected onto this new space and, although not completely separated, they are clearly recognizable. In the example shown in Fig. 1, prior to the LDA algorithm we have a 3D space where it is still possible to see the limits of each class in the dataset; however, if the enclosing space grows to four or more dimensions, it is no longer a simple task to visualize or recognize the classes. Regardless of the growing number of dimensions, applying the LDA algorithm will always result in the same bi-dimensional space presented in Fig. 1b, because the number of classes (C = 3) remains the same.

Regarding the spread of these specific classes over the new projection, as seen in Fig. 1b, it would be possible to classify them using just the variable LDA_X. This is clear to us because of our facility, as human beings, to recognize patterns in a small number of dimensions.

E. R. Pacola · V. I. Quandt (B)
Universidade Positivo, Curitiba, PR, Brazil

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_253


Fig. 1 a Dataset containing classes Class1, Class2, and Class3, described by the variables X, Y, and Z. It is possible to see that the classes overlap each other; the intersections among classes are not clearly visible. b The LDA algorithm determines for this case a new bi-dimensional projection described by the new variables LDA_X and LDA_Y. Now the three classes have the highest possible distance to each other

2 LDA Calculation

To proceed with the LDA calculation we consider two classes $X_1$ and $X_2$, two distinct classes of data from a universe of classes $X_c$, where $c = 1, 2, \ldots, C$ and, for this case, $C = 2$. Each class has $n$ samples, and each sample is associated with $d$ dimensions. Taking a weight matrix of coefficients $w$, its scalar product with the $d$-dimensional class, as shown in Eq. (1), generates a new class $Y$ translated into a new space of $C - 1$ dimensions:

$Y = w^t X$   (1)

To find the weight matrix $w$, the first step is to determine the mean of each class of $X_c$, as presented by Eq. (2):

$m_i = \frac{1}{n_i} \sum_{x \in X_i} x$   (2)

The class means produced by Eq. (2) may also be projected by the matrix $w$, as demonstrated by Eq. (3):

$\tilde{m}_i = \frac{1}{n_i} \sum_{y \in Y_i} y = \frac{1}{n_i} \sum_{x \in X_i} w^t x = w^t m_i$   (3)

where $\tilde{m}_i$ is the mean of the projected subset obtained by $w$. Our target is to have the maximum distance among classes, i.e., the highest distance between the class means; thus Eq. (4) is our objective function:

$J = |\tilde{m}_1 - \tilde{m}_2| = |w^t (m_1 - m_2)|$   (4)

Although Eq. (4) is our objective function, it does not consider the standard deviation of each class. To take this into consideration we adopt the Fisher criterion [1], normalizing the distance between the class means by the within-class scatter. The scatter of each projected class is presented by Eq. (5):

$\tilde{s}_i^2 = \sum_{y \in Y_i} (y - \tilde{m}_i)^2$   (5)

Thus, the total scatter of the projected classes equals $\tilde{s}_1^2 + \tilde{s}_2^2$, and the Fisher criterion to be maximized is presented by Eq. (6):

$J(w) = \frac{(\tilde{m}_1 - \tilde{m}_2)^2}{\tilde{s}_1^2 + \tilde{s}_2^2}$   (6)

Maximizing $J(\cdot)$ with respect to $w$ leads to the highest separation among classes. This calculation may be done through the scatter matrices $S_i$ and $S_w$, presented by Eqs. (7) and (8), respectively:

$S_i = \sum_{x \in D_i} (x - m_i)(x - m_i)^t$   (7)

$S_w = S_1 + S_2$   (8)

These matrices are projected by multiplying by the weight matrix $w$, as presented in Eq. (9), and so are the projected means, as shown in Eq. (10):

$\tilde{s}_1^2 + \tilde{s}_2^2 = w^t S_w w$   (9)

$(\tilde{m}_1 - \tilde{m}_2)^2 = w^t S_B w$   (10)

Replacing Eqs. (9) and (10) in Eq. (6) results in Eq. (11):

$J(w) = \frac{w^t S_B w}{w^t S_w w}$   (11)


Equation (11) may be rewritten in the form presented in Eq. (12):

$S_B w = \lambda S_w w$   (12)

The constant $\lambda$ may be determined through eigenvalues and eigenvectors. Among the resulting eigenvalues, the maximum is selected; it corresponds to the eigenvector column that will multiply our multidimensional input class, i.e., our weight matrix $w$. This product re-projects the input multidimensional matrix of class characteristics into a new space, keeping the maximum distance among classes. It is important to highlight that the linear discriminant analysis consists of the generation of this weight matrix $w$, which allows the class re-projection, reducing the number of dimensions to a new space of $C - 1$ dimensions (where $C$ is the number of classes). This analysis is performed just once, i.e., it may be considered the training phase of the algorithm, but, as it is a statistical procedure, it does not need any iterative loop. As the LDA rotates the multidimensional matrix of characteristics into a new projection having $C - 1$ dimensions, there is no information loss: all features of the extracted characteristics used to generate the weight matrix are rearranged to maximize the separation among the $C$ classes.
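As a minimal sketch of this calculation (in NumPy; function and variable names are our own), note that for $C = 2$ the generalized eigenproblem of Eq. (12) admits the closed-form solution $w \propto S_w^{-1}(m_1 - m_2)$, so no explicit eigendecomposition is needed:

```python
import numpy as np

def lda_two_class(X1, X2):
    """Fisher LDA weight vector for two classes (Eqs. (2), (7), (8), (12)).

    X1, X2: (n_i, d) arrays of d-dimensional samples.
    Returns a unit-norm w; projections y = X @ w are one-dimensional (Eq. (1)).
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)   # class means, Eq. (2)
    S1 = (X1 - m1).T @ (X1 - m1)                # class scatter, Eq. (7)
    S2 = (X2 - m2).T @ (X2 - m2)
    Sw = S1 + S2                                # within-class scatter, Eq. (8)
    # For C = 2, S_B w = lambda S_w w is solved by w proportional to
    # Sw^{-1} (m1 - m2), since S_B (m1 - m2) is collinear with (m1 - m2).
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)
```

Projecting both classes with `y = X @ w` then yields the uni-dimensional space used in the two-class case.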

3 The Specific Case of Two Classes

As seen previously, the multidimensional input matrix will have its dimensions reduced to C − 1 (where C is the number of classes). Considering the specific case of only two classes, the final result after LDA processing is a uni-dimensional space, where the samples of one class or the other are distributed over a single dimension. Mathematically speaking, the input matrices are multiplied by a d × 1 matrix, resulting in a single vector. According to the theory of LDA, the classes will have the highest possible distance between them, and a simple threshold classifier will be capable of distinguishing each class. Figure 2 shows this specific case.
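A sketch of such a threshold classifier on the projected one-dimensional samples (NumPy; the midpoint rule below is one simple illustrative choice of cut-point, not necessarily the ROC-optimal one):

```python
import numpy as np

def midpoint_threshold(y1, y2):
    # simple cut-point: midpoint between the projected class means
    return (np.mean(y1) + np.mean(y2)) / 2.0

def threshold_classify(y, thr, above_is_class2=True):
    """Label 1-D projected samples: 0 -> class X1, 1 -> class X2."""
    labels = (np.asarray(y) > thr).astype(int)
    return labels if above_is_class2 else 1 - labels
```

Because the LDA projection maximizes the distance between the projected class means, even this naive cut-point separates well-conditioned classes.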

4 Performance Calculation

A classifier labels a sample as belonging to a predefined category or class. To evaluate a classifier, one may use accuracy as a measure of performance, or other techniques such as the ROC curve.

Fig. 2 a Classes X1 and X2 described by the variables X, Y, and Z. It is possible to see that the classes overlap each other. b At the top, in red, the sample distribution of class X1; below it, in blue, the sample distribution of class X2; and at the bottom, the uni-dimensional vector produced by the LDA. In this vector, samples in green represent overlapped samples of classes X1 and X2

4.1 ROC Analysis

The Receiver Operating Characteristic (ROC) analysis is a tool to measure the performance of a binary classifier and is widely used in medical statistics [5]. By means of a graphic, it allows the researcher to study variations of different threshold cut-points in terms of sensitivity and specificity. The ROC analysis is based on two basic concepts: sensitivity and specificity. Sensitivity is the likelihood of a test providing a positive result when a subject is sick. Specificity is the likelihood of a test providing a negative result when a subject is not sick. The true state of a subject, sick or not, is determined by a reference test [6]. This representation is presented in Table 1. In this context, sensitivity and specificity are defined by Eqs. (13) and (14), respectively [7]:

$Sensitivity = \frac{TP}{TP + FN}$   (13)

$Specificity = \frac{TN}{TN + FP}$   (14)

Table 1 Definitions of sensitivity and specificity in terms of true positive (TP), true negative (TN), false positive (FP), and false negative (FN)

Test result  | Positive (sick)          | Negative (not sick)
Positive     | True positive (TP)       | False positive (FP)
Negative     | False negative (FN)      | True negative (TN)
Total        | Total positive (TP + FN) | Total negative (FP + TN)
Performance  | Sensitivity              | Specificity

Considering distributions of sick and not sick subjects, and the values of sensitivity and specificity defined by a test, we may draw a ROC curve as shown in Fig. 3. Figure 3 shows two distributions, of sick and of not sick subjects. A test determines whether a subject is sick or not. Depending on the test performance, the two distributions may overlap. Curve 1, presented at the bottom left of Fig. 3, means that the test has a sensitivity near 100%: the distributions are almost completely separated. This test generates a ROC curve with an area under the curve (AUC) near 1. Curve 3, in opposition, corresponds to completely overlapped distributions, meaning that the classifier is as good as a random classifier, with a classification rate of 50%. Curve 3 is shown in Fig. 3 as the straight line and has an AUC equal to 0.5. Curve 2 presents a test with different sensitivities and specificities depending on the chosen threshold cut-point, as shown by points A, B, and C. The best threshold cut-point is the one with the shortest Euclidean distance from the upper left corner of the ROC graphic to the curve, i.e., point B.
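The sweep of cut-points over sensitivity and specificity, and the selection of the point nearest the upper left corner, can be sketched as follows (NumPy; `scores_sick` and `scores_healthy` are hypothetical one-dimensional test outputs, higher meaning sick):

```python
import numpy as np

def roc_points(scores_sick, scores_healthy):
    """Sensitivity (Eq. 13) and specificity (Eq. 14) at each candidate
    threshold taken from the pooled scores."""
    thresholds = np.sort(np.concatenate([scores_sick, scores_healthy]))
    sens = np.array([(scores_sick >= t).mean() for t in thresholds])
    spec = np.array([(scores_healthy < t).mean() for t in thresholds])
    return thresholds, sens, spec

def best_cutpoint(thresholds, sens, spec):
    # shortest Euclidean distance from the upper-left corner (0, 1)
    # of the ROC graphic, i.e. from each point (1 - spec, sens)
    d = np.hypot(1.0 - spec, 1.0 - sens)
    return thresholds[np.argmin(d)]
```

For two equal-variance Gaussian score distributions, this criterion selects a cut-point near the midpoint between their means, matching the intuition behind point B.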

5 Methodology

To apply the concepts seen in the previous sections, mainly the special case of two classes, two different datasets of biomedical signals were used: pulmonary crackles and electroencephalography (EEG) examinations of epileptic subjects. Crackles are discontinuous adventitious respiratory sounds, explosive and transient, associated with cardiorespiratory diseases [8]. During the EEG examination of epileptic subjects, short transients ranging from 20 to 70 ms (spikes) and from 70 to 200 ms (sharp waves) over the normal EEG background may be detected [9]. These events are associated with epilepsy. On the respiratory sounds dataset, two classes were used: normal lung sounds (X1) and crackles (X2). 271 sound epochs were sampled at 8 kHz with a duration of 40 ms. From the epochs, characteristics were extracted through the discrete wavelet transform and analysis of its sub-bands, resulting in a matrix of 3 dimensions [10,11]. The dataset used is available at [8]. For spike detection, 494 epochs of non-correlated spikes were sampled at 200 Hz with a duration of 1 s each (X1). Also, 1500 epochs of normal background EEG data were collected (X2). Each epoch was processed by the discrete wavelet transform with 2 different mother wavelets at four decomposition levels. From each decomposition level, the maximum and minimum values and the centered energy were extracted. These characteristics generated a matrix of 24 dimensions [12,13]. The dataset was obtained according to a protocol approved by the ethics committee (CEP/HPP 1104-12). The computed matrices of crackle and spike signals were individually processed by the algorithm described.
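As an illustrative simplification of this per-level feature extraction (the cited works use other mother wavelets, and two of them in the EEG case, which is what yields the 24 dimensions), a Haar filter bank applied repeatedly to the approximation can produce the maximum, minimum, and centered energy of the detail coefficients at each level:

```python
import numpy as np

def dwt_level_features(signal, levels=4):
    """Max, min and centered energy of the detail coefficients at each
    of `levels` Haar DWT decomposition levels (3 features per level)."""
    feats = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        n = (len(approx) // 2) * 2                           # even length
        a = (approx[0:n:2] + approx[1:n:2]) / np.sqrt(2.0)   # approximation
        d = (approx[0:n:2] - approx[1:n:2]) / np.sqrt(2.0)   # detail
        centered = d - d.mean()
        feats += [d.max(), d.min(), float(np.sum(centered ** 2))]
        approx = a                                           # next level
    return np.array(feats)
```

Concatenating the feature vectors of two mother wavelets, as in [12,13], doubles the dimension; the LDA step then reduces it back to one.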

6 Results

Using the specific case of two classes with the LDA algorithm supported research in two different areas: pulmonary crackle analysis [10,11] and spike detection in EEG and epilepsy [12,13]. For the pulmonary crackle dataset described, it was possible to classify crackles with an AUC of 0.9906 and sensitivity and specificity of 0.9527 and 0.9694, respectively.


Fig. 3 Three examples of ROC curves. Two distributions of sick and not sick subjects with different degrees of overlap produce Curves 1, 2, and 3. Each localization of the distributions results in a different ROC curve, as seen in the plot above. Points A, B, and C denote specific cut-points on ROC curve 2

Considering the EEG spike dataset, the detection achieved sensitivity and specificity of 97.4% and 97.2% respectively with an AUC index of 0.9941.

7 Conclusion

Differently from techniques based on neural networks (machine learning), linear discriminant analysis with the Fisher criterion uses a statistical algorithm. The weight matrix used to re-project the input space onto the uni-dimensional space is generated in a single step, without iterative calculations, and guarantees that the classes are at the maximum possible distance from each other. There is no need for additional training, and new samples are simply multiplied by the weight matrix w. The extraction of the characteristics used to construct the multidimensional dataset is still needed, since the re-projection process uses fixed input and output sizes according to the generated w matrix. Nevertheless, characteristic selection, tests, and classifier performance analysis are done in a very fast and dynamic way. In the specific case of two classes, the use of a threshold classifier makes the classification task trivial, and its performance is easily obtained by means of ROC analysis: the higher the AUC index, the better the classifier performance. It is known that the correct selection of non-correlated characteristics influences the class distributions; thus, using the specific case of two classes together with the LDA algorithm helps the experimentation and selection of the best features to be extracted during the development of a classifier architecture.

References

1. Duda RO, Hart PE, Stork DG (2001) Pattern classification, 2nd edn. Wiley Interscience, New York
2. Bellman R (1961) Adaptive control processes: a guided tour. Princeton University Press, Princeton, New Jersey
3. Fisher RA (1936) The use of multiple measurements in taxonomic problems. Ann Eugen 7:179–188
4. Martinez AM, Kak AC (2001) PCA versus LDA. IEEE Trans Pattern Anal Mach Intell 23(2):228–233
5. Zhu W, Zeng N, Wang N (2010) Sensitivity, specificity, accuracy, associated confidence interval and ROC analysis with practical SAS® implementations. In: NorthEast SAS Users Group, Health Care and Life Sciences
6. Martinez EZ, Louzada F, Pereira B (2003) A curva ROC para testes diagnósticos. Cad Saúde Coletiva 11:7–31
7. Fawcett T (2006) An introduction to ROC analysis. Pattern Recogn Lett 27:861–874
8. Lehrer S (2002) Understanding lung sounds, 3rd edn. Saunders, PA
9. Vavadi H, Ayatollahi A, Mirzaei A (2010) A wavelet-approximate entropy method for epileptic activity detection from EEG and its sub-bands. Biomed Sci Eng 3:1182–1189
10. Quandt VI, Pacola ER, Sovierzoski MA, Gamba HR, Pichorim SF (2015) Reconhecimento de estertores pulmonares utilizando descritores da transformada wavelet discreta. In: SPS-UNICAMP 2015, Campinas, Brazil, vol 1, pp 101–104
11. Quandt VI, Pacola ER, Pichorim SF, Gamba HR, Sovierzoski MA (2015) Pulmonary crackle characterization: approaches in the use of discrete wavelet transform regarding border effect, mother-wavelet selection, and subband reduction. Res Biomed Eng 31:148–159
12. Pacola E, Quandt V, Liberalesso P, Pichorim S, Schneider F, Gamba H (2017) A versatile EEG spike detector with multivariate matrix of features based on the linear discriminant analysis, combined wavelets, and descriptors. Pattern Recogn Lett 86:31–37
13. Pacola ER, Quandt VI, Liberalesso PBN, Pichorim SF, Gamba HR, Sovierzoski MA (2016) Influences of the signal border extension in the discrete wavelet transform in EEG spike detection. Res Biomed Eng 32:253–262

Facial Thermal Behavior Pre, Post and 24 h Post-Crossfit® Training Workout: A Pilot Study

D. B. Castillo, V. A. A. Bento, E. B. Neves, E. C. Martinez, F. De Merneck, V. M. Reis, M. L. Brioschi, and D. S. Haddad

Abstract

Physical exercise induces the thermoregulation process of the human body, in order to avoid overheating produced by muscle contraction. Infrared thermographic imaging (ITI) is an option to assess changes in body temperature, as it monitors the physiological functions related to the control of skin surface temperature in real time. The aim of the present research was to verify the effectiveness of infrared thermography as an instrument for monitoring the central body temperature through the facial thermal behavior of individuals pre, post and 24 h post-CrossFit® training workout. This study evaluated 10 adult volunteers of both sexes, physically active practitioners of CrossFit®, with a body mass index of 23.79 ± 2.66 kg/m². The training consisted of typical exercises of the CrossFit® modality and lasted 50 min. The athletes were monitored for 24 h, with acquisition of facial thermographic images pre-workout, post-workout and 24 h after training. The maximum (Tmax) and average (Tmed) temperatures were chosen to analyze the results, for both the frontal and lateral views. There was no difference in skin temperature between the regions of interest (p > 0.05), for both Tmax and Tmed at the different times. It can be concluded that thermography is a tool for monitoring the central body temperature, through facial thermal behavior, of individuals during CrossFit® training, and that the CrossFit® practitioners evaluated presented a good thermoregulation capacity, managing to effectively dissipate the heat produced in training within 24 h of evaluation.

Keywords

Thermal imaging • Face • Infrared thermography • Exercise • Skin temperature

D. B. Castillo
Faculdade de Odontologia, Universidade Federal de Mato Grosso do Sul, Mato Grosso do Sul, MS, Brazil
V. A. A. Bento
Faculdade de Odontologia, Universidade Estadual Paulista Júlio de Mesquita Filho, Aracatuba, Sao Paulo, Brazil
E. B. Neves (&)
Comissão de Desportos do Exército (CDE), Rio de Janeiro, Brazil
E. B. Neves
Universidade Tecnológica Federal do Paraná (UTFPR), Programa de Pós-Graduação em Engenharia Biomédica, Curitiba, Brazil
E. C. Martinez
Instituto de Pesquisa da Capacitação Física do Exército, Rio de Janeiro, Brazil
F. De Merneck
Departamento de Nefrologia, Faculdade de Medicina, Universidade Federal de São Paulo, São Paulo, Brazil
V. M. Reis
Universidade de Trás-os-Montes e Alto Douro, Vila Real, Portugal
V. M. Reis
Research Center in Sports Sciences, Health Sciences & Human Development, CIDESD, Vila Real, Portugal
M. L. Brioschi
Faculdade de Medicina, Universidade de São Paulo, São Paulo, SP, Brazil
D. S. Haddad
Faculdade de Odontologia, Universidade de São Paulo, São Paulo, SP, Brazil

1 Introduction

Physical exercises have been recommended for health promotion, as regular practice results in improved physical capacity, decreased insulin resistance and arterial hypertension, promoting a gain in quality of life [1]. During physical exercise, heat is a by-product of the metabolism itself, increasing body temperature [2]. This increase in skin temperature can be evaluated using different instruments, such

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_254


as thermistors, infrared thermometers, temperature probes and thermographic images [3]. Physical exercise induces the thermoregulation process of the human body, in order to avoid overheating produced by muscle contraction [4]. Part of the energy, in the form of heat, is released through the skin but, as not all the heat produced is released, muscle heating occurs, resulting in an increase in skin temperature followed by compensatory sweating [5, 6]. However, individuals who have comorbidities, such as type II diabetes, cardiovascular diseases, hypercholesterolemia and arterial hypertension, may have impaired thermoregulation mechanisms and, consequently, a deregulated metabolism [7]. CrossFit® (CF) is a method of physical training that has been gaining popularity since its creation and implementation in the early 2000s [8], characterized by performing varied exercises at high intensity, including running, weightlifting and Olympic gymnastics [9]. This modality aims to maximally develop the three metabolic pathways and several physical capacities [10–12]. Thus, studies in sports medicine are growing to assess the real benefits of this training modality [13]. Infrared thermographic imaging (ITI) is an option to assess changes in body temperature, as it monitors the physiological functions related to the control of skin surface temperature in real time, and it is able to identify vasomotor neurovegetative changes related to injury, inflammation, healing and microcirculation [14]. ITI is a non-invasive, non-ionizing, painless method, easy to perform and harmless to the user [15]. Studies have shown its use in the evaluation of sports performance, in risk identification and in the prevention of injuries [16, 17]. However, some authors have found it difficult to relate high-intensity exercise to changes in body temperature [18, 19].
Thermographic measurements have been performed in anatomical areas that are strongly influenced by exercise, such as the legs and hips, but these did not represent a relationship with the body's central temperature. Hensel et al. [20] revealed that cutaneous thermoreceptors located on the face act as important sensors in the regulation of the body's central temperature. Thus, monitoring changes in the temperature of the facial skin during exercise can be an alternative to explore relationships between central body temperature and the influence of exercise [20]. Haddad et al. [21] standardized 28 thermoanatomical reference areas by means of thermographic facial mapping, which makes thermographic analysis more accurate and highly reproducible. However, no studies have used this methodology for facial evaluation in CrossFit® athletes. The aim of the present research was to verify the effectiveness of infrared thermography as an instrument for


monitoring the central body temperature through the facial thermal behavior of individuals pre, post and 24 h post-CrossFit® training workout.

2 Methods

This study was conducted in Vila Real, Portugal, and the research protocol was approved by the Research Ethics Committee of the University of Trás-os-Montes and Alto Douro (CEP/UTAD) with the number 95/2018.

2.1 Volunteers

In the beginning, 19 volunteers participated in the survey, but 9 participants who were unable to perform the three thermographic measurements on different days were excluded. Thus, only 10 volunteers were included in the analysis, 6 women and 4 men with a body mass index of 23.79 ± 2.66 kg/m², all CrossFit® practitioners from the same gym and with the same type of training (Table 1). Exclusion criteria were: self-reported systemic diseases such as arterial hypertension, dysautonomia, muscle pain or any disease affecting the osteoarticular system; a diagnosis of temporomandibular muscle dysfunction according to the Diagnostic Criteria for Temporomandibular Disorders (DC/TMD); pregnancy; use of anti-inflammatory drugs, anticonvulsants, antidepressants or psychotropic analgesics; alcohol consumption in the 24 h before the exam; presence of a beard; and a recent history of facial or cervical trauma. The volunteers were instructed to avoid using cosmetics or ointments on the facial skin on the day of the exam, and not to eat a heavy meal, take a very hot shower or smoke in the 3 h before the thermographic measurements.

2.2 Anthropometry of the Volunteers

All 10 volunteers underwent anthropometric assessment in order to characterize the sample profile; the following were evaluated: body mass, height, body mass index (BMI), waist circumference and hip circumference.

2.3 Evaluation by Infrared Thermography

In the preparatory phase of the thermographic examination, the volunteers remained in a temperature-controlled room

Table 1 CrossFit® training session, Vila Real, Portugal, 2019

Time cap   | Activity           | Exercises
15–20 min  | Accessory work     | 3 × 5 Front Squat @ 80% 1RM; 3 × 20 Back Rack Lunges
15–20 min  | Workout of the day | Every 3′ for 15′: 30 Wall Balls @ 9 kg (men)/6 kg (women); Max Burpees + Box Jump

(20 ± 2 °C) with humidity below 60% for 15 min. The room temperature and humidity were monitored with a digital thermohygrometer (Minipa® model MT241) [22]. An infrared camera T430sc® (FLIR, USA), with a spectral range from 7.5 to 14 µm and a spatial resolution of 320 × 240 pixels, was used. Thermographic measurements were taken at three moments: pre-training, immediately post-training and 24 h after training. During the thermographic examination, the volunteers stood with an upright spine and with the head positioned so that the Frankfurt plane (an imaginary line drawn from the tragus of the ear to the palpebral commissure) was parallel to the horizontal plane [23]. The exams were performed at an average distance of 30 cm between the equipment lens and the volunteer's face to record the close-up position of the eyes (medial palpebral commissure), and at 70 cm to record the frontal and profile (right and left) positions. The pre-training and 24 h measurements were taken at the same time of day. Fourteen thermoanatomical reference areas of the face were identified in frontal view, and the mean over the sum of these areas was computed (Fig. 1a). For the analysis of the lateral view (right and left), the areas over the masseter and anterior temporal muscles were considered (Fig. 1b). For the analysis of the region over the masseter muscle, 4 lines were drawn, two horizontal and two vertical. The upper horizontal line was drawn from the tragus to the wing of the nose and the lower line from the earlobe to the chin; the first vertical line was 10 mm away from the ear tragus and perpendicular to the ground, and the second was 10 mm from the lateral palpebral commissure and parallel to the first vertical line. The central area resulting from the intersection of these lines was considered the area over the masseter muscle. For the analysis in close-up view, the region of the bilateral medial palpebral commissure was analyzed (Fig. 1c).

2.4 Training Session The training session was composed of a 10-min warm-up with joint mobilization exercises and 35 min of exercises (main work), following the Vila Real protocol (Portugal, 2019), as shown in Table 1.

2.5 Data Analysis After the identification of all regions of interest (ROI), the minimum (Tmin), average (Tmed) and maximum (Tmax) temperature values were obtained automatically with the VisionFy program (Thermofy, Brazil). These analyses were performed pre-training, post-training and 24 h post-training. The maximum (Tmax) and average (Tmed) temperatures were chosen for analyzing the results, both for the frontal view (close-up and face) and for the lateral view (masseter and anterior temporal). The mean maximum and mean average temperatures of each thermoanatomical area were analyzed, including right and left sides. A one-way ANOVA with Tukey's post hoc test was used for the statistical analysis, with significance set at p < 0.05.
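The one-way ANOVA used here compares the three moments (pre, immediately post, 24 h post). A minimal, dependency-free sketch of the F statistic on illustrative numbers (not the study data; the Tukey post hoc step is omitted):

```python
def one_way_anova_f(*groups):
    """F statistic of a one-way ANOVA across k independent groups
    (between-group mean square over within-group mean square)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative temperature groups: pre, immediately post, 24 h post
f_stat = one_way_anova_f([33.0, 34.0, 33.0], [35.0, 36.0, 35.0], [33.0, 34.0, 33.0])
```

The F statistic is then compared against the F(k − 1, n − k) distribution; in practice a statistics package (e.g. SciPy's `f_oneway`) returns the p-value directly.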

3 Results

The anthropometric measurements of the volunteers are described in Table 2. The maximum (Tmax) and average (Tmed) temperatures were evaluated pre-training, post-training and 24 h after training. The Tmax results for the pre-training, post-training and 24 h after training moments are presented in Table 3. When comparing the regions under study by maximum temperature (Tmax) pre, immediately post and 24 h after CrossFit® training (Table 3), there was no statistically significant difference between the isolated reference thermoanatomical areas (Tukey post hoc test, p > 0.05), the sum of the face in frontal view (p = 0.09), the anterior temporal region (p = 0.27), the masseter region (p = 0.40) or the medial palpebral commissure (p = 0.13) (one-way ANOVA). The Tmed results for the same moments are presented in Table 4. When comparing the average temperatures (Tmed) pre, immediately post and 24 h after CrossFit® training (Table 4), there was no statistically significant difference between the isolated reference thermoanatomical areas (one-way ANOVA). Likewise, there was no significant difference for

1742

D. B. Castillo et al.

Fig. 1 Representative image of the thermoanatomical reference areas of the face [21]. a Location of the 14 thermoanatomical areas in frontal view, b areas referring to the regions of the anterior temporal and masseter muscles in lateral view, c areas referring to the medial palpebral commissure (close-up view)

facial temperature (frontal view) (p = 0.10), the anterior temporal region (p = 0.36), the masseter region (p = 0.30) and the medial palpebral commissure (p = 0.23) (one-way ANOVA).

4 Discussion

Skin temperature is directly related to blood flow, is regulated by the autonomic nervous system and affects both sides of the body in a uniform and simultaneous manner, producing a symmetrical thermal pattern under normal conditions [23]. Due to the sensitivity and specificity of the method, the authors used skin thermometry to evaluate and monitor facial temperature in CrossFit® practitioners. Skin temperature is a significant parameter that conditions the evolution of physiological parameters such as lactate production and heart rate [24], directly influencing the thermoregulation process. All the results showed a slight increase in temperature after training and a decrease 24 h post-training, although not significant. The facial skin temperature results pre-workout, immediately post-workout and 24 h after CrossFit® training revealed differences above 1 °C between the regions over the anterior temporal and masseter muscles, with the temperatures (maximum and average) of the temporal region being higher than those of the masseteric region. These results corroborate the findings in the literature [25–28]: the masseter muscle is thicker than the temporal muscle, and the temporal region is influenced by the path of the temporal artery [23]. However, there was no statistical difference in Tmax or Tmed among the evaluated moments (p > 0.05). Thermoregulation induced by physical exercise aims to keep the body's core temperature within a pattern compatible with normality, close to 36–37 °C [29]. This mechanism is carried out through thermogenesis and thermolysis, the latter being responsible for the transfer of heat by emission of radiation [30]. Thus, each individual will have a different skin temperature, precisely because of differences in metabolism that depend on age, sex, height, weight, body hydration level, among other factors [31–34].
The anthropometric measurements of the volunteers in this study showed that the BMI was within the normal range, being favorable for better heat dissipation [35]. Thus, it can be considered that all participants showed regular metabolism, leading to good thermoregulation that was quantified by the small variation in facial skin temperatures immediately after training (0.31 °C) and a decrease in temperatures 24 h after training (0.43 °C), returning very close to the baseline (initial) temperature. This can be explained by the evaluation being made in the facial region that most faithfully represents the functioning of the thermoregulation capable of controlling the central body temperature [18, 19], and also by the small sample size. Unlike the studies analyzed in the systematic review by Fernandes et al. [35], where exercise induced temperature changes of

Facial Thermal Behavior Pre, Post and 24 h Post-Crossfit® …

Table 2 Anthropometric measurements of volunteers, Vila Real, Portugal, 2019

Volunteers   BMI     Waist (cm)   Hip (cm)
VM1          24.85   82.00        93.80
VM2          24.62   85.15        103.00
VM3          23.72   80.60        97.55
VM4          23.38   78.65        95.15
VW1          21.78   68.00        93.00
VW2          29.03   79.40        103.45
VW3          21.39   68.15        94.40
VW4          26.92   76.15        103.90
VW5          21.15   71.00        97.00
VW6          21.05   68.00        97.10
Average      23.79   75.71        97.84

VM Man volunteer; VW Woman volunteer

Table 3 Maximum temperatures (average ± standard deviation) pre-workout, post-workout and 24 h after training for the frontal view (average of the sum of all the thermoanatomical areas), the lateral view (anterior temporal and masseter) and the medial palpebral commissure

Region of interest            Pre-workout (°C)   Post-workout (°C)   24 h after training (°C)
Frontal face                  33.55 ± 0.46       33.98 ± 0.59        33.46 ± 0.59
Temporal                      34.28 ± 0.32       34.39 ± 0.54        34.07 ± 0.31
Masseter                      33.18 ± 0.62       33.60 ± 0.95        33.16 ± 0.81
Medial palpebral commissure   35.02 ± 0.47       35.32 ± 0.63        34.85 ± 0.41

Table 4 Average temperatures (average ± standard deviation) pre-workout, post-workout and 24 h after training for the frontal view (average of the sum of all the thermoanatomical areas), the lateral view (anterior temporal and masseter) and the medial palpebral commissure

Region of interest            Pre-workout (°C)   Post-workout (°C)   24 h after training (°C)
Frontal face                  33.12 ± 0.50       33.56 ± 0.57        33.02 ± 0.63
Temporal                      33.53 ± 0.45       33.75 ± 0.60        33.42 ± 0.44
Masseter                      32.19 ± 0.74       32.65 ± 0.84        32.13 ± 0.82
Medial palpebral commissure   34.23 ± 0.52       34.60 ± 0.64        34.22 ± 0.48

approximately 5.0 °C in the forearms, 3 °C in the trunk and 4.6 °C in the quadriceps, and the studies by Formenti et al. and Al-Nakhli et al., which showed post-exercise temperature changes greater than 1 °C when evaluating legs and hips [35–37]. Two anatomical regions have been used to check the central temperature in the facial region: the region of the external auditory canal and the preoptic area [38–40]. The region of the external auditory canal is irrigated by the posterior auricular arteries and veins, and its measurement is performed with an auricular thermometer, thus obtaining the tympanic temperature [25]. In the preoptic area there is a "tunnel" between the fat-free (thermally conductive) skin on the bridge of the nose and the cavernous sinus around the hypothalamic thermoregulatory center (through the superior ophthalmic vein), called the Brain Temperature Tunnel (BTT) by Abreu et al. [41]. In this work, we opted for the evaluation of the region of the

pre-optic area, using infrared thermography, referred to as the medial palpebral commissure [21]. The same summation methodology for the thermoanatomical areas used in the work by Haddad et al. [26] was adopted here. Those authors reported average maximum (35.1 ± 0.5 °C) and average (34.7 ± 0.5 °C) facial temperatures in Portuguese non-athletes asymptomatic for temporomandibular disorder. In our study, the average pre-workout facial temperature corresponded to 33.55 ± 0.46 °C (maximum temperature) and 33.12 ± 0.5 °C (average temperature). This lower temperature may be related to the difference in body composition between trained (athletes) and untrained subjects. Authors have observed that subjects with a higher percentage of body fat have a different thermal distribution than subjects with a lower percentage of body fat [42–44]. Neves et al. [42] reported that there is a negative correlation


between face temperature and the percentage of body fat in adult men. The facial analysis methods of this study were used to evaluate the facial sympathetic vasomotor response pre-training, immediately post-training and 24 h after a CrossFit® circuit. It is important to note that the results demonstrate post-workout facial homeostasis, with no change in the facial region after 24 h, corroborating the guidelines described in the literature [22]. The study's limitations are the small sample size and the absence of complementary methods for assessing core temperature.

5 Conclusions

Based on the results found, it can be concluded that thermography is a tool for monitoring central body temperature through facial thermal behavior in individuals pre, post and 24 h post-CrossFit® training, and also that the CrossFit® practitioners evaluated presented good thermoregulation capacity, managing to effectively dissipate the heat produced in training within 24 h of evaluation.

Acknowledgements We thank the Brazilian National Council for Scientific and Technological Development (CNPq) for financial support (303678/2018-6).

Conflict of Interest The authors declare that they have no conflict of interest.

References
1. Pagan LU, Gomes MJ, Okoshi MP (2018) Endothelial function and physical exercise. Arq Bras Cardiol
2. Guyton AC, Hall EJ (2015) Guyton and Hall textbook of medical physiology. Elsevier Saunders, Philadelphia, pp 911–919
3. Burnham RS, McKinley RS, Vincent DD (2006) Three types of skin-surface thermometers: a comparison of reliability, validity, and responsiveness. Am J Phys Med Rehabil 85(7):553–558
4. Johnson JM (1992) Exercise and the cutaneous circulation. Exerc Sport Sci Rev 20:59–97
5. Dibai FAV, Guirro RRJ (2014) Evaluation of myofascial trigger points using infrared thermography: a critical review of the literature. J Manip Physiol Ther
6. Ferreira JJ, Mendonça LC (2008) Exercise-associated thermographic changes in young and elderly subjects. Ann Biomed Eng 36(8):14–20
7. Schlader ZJ, Coleman GL, Sackett JR, Sarker S et al (2017) Behavioral thermoregulation in older adults with cardiovascular co-morbidities. Temperature (Austin) 5(1):70–85
8. Moran S, Booker H, Staines J, Williams S (2017) Rates and risk factors of injury in CrossFit®: a prospective cohort study. J Sports Med Phys Fitness 57(9):1147–1153
9. Meyer J, Morrison J, Zuniga J (2017) The benefits and risks of CrossFit®: a systematic review. Workplace Health Saf 65(12)
10. Sprey JWC, Ferreira T, Lima MV et al (2016) An epidemiological profile of CrossFit® athletes in Brazil. Orthop J Sports Med 4(8)
11. Claudino JG, Gabbett TJ, Bourgeois F et al (2018) CrossFit® overview: systematic review and meta-analysis. Sports Med Open 4(1):11
12. Warvaz GR, Suric V, Daniels AH et al (2016) CrossFit® instructor demographics and practice trends. Orthop Rev (Pavia) 8(4):6571
13. Eather N, Morgan PJ, Lubans DR (2016) Improving health-related fitness in adolescents: the CrossFit® teens™ randomized controlled trial. J Sports Sci 34(3):209–223
14. Charkoudian N (2003) Skin blood flow in adult human thermoregulation: how it works, when it does not and why. Mayo Clin Proc 78:603–612
15. Fernandez-Cuevas I, Marins JCB, Lastras JA et al (2015) Classification of factors influencing the use of infrared thermography in humans: a review. Infrared Phys Technol 71:28–55
16. Ring EFJ, Ammer K (1998) Thermal imaging in sports medicine. Sport Med Today 1(2):108–109
17. Brukner P, Khan K (2012) Clinical sports medicine, 4th edn. McGraw-Hill, Australia, 1268 p. ISBN-13 978-0-07099-813-1
18. Bryan AD, Hutchison KE, Seals DR, Allen DL (2007) A transdisciplinary model integrating genetic, physiological, and psychological correlates of voluntary exercise. Health Psychol 26:30–39
19. Magnan RE, Kwan BM, Bryan AD (2013) Effects of current physical activity on affective response to exercise: physical and social-cognitive mechanisms. Psychol Health 28:418–433
20. Hensel H (1981) Thermoreception and temperature regulation. Monogr Physiol Soc 38:1–321
21. Haddad DS, Brioschi ML, Baladi MG et al (2016) A new evaluation of heat distribution on facial skin surface by infrared thermography. Dentomaxillofac Radiol 45:20150264
22. Getson P et al (2019) Guidelines for dental-oral and systemic health infrared thermography. Pan Am J Med Thermol 5:41–55
23. Haddad DS, Brioschi ML, Arita ES (2012) Thermographic and clinical correlation of myofascial trigger points in the masticatory muscles. Dentomaxillofac Radiol 41(8):621–629
24. Mougios V, Deligiannis A (1993) Effect of water temperature on performance, lactate production and heart rate at swimming of maximal and submaximal intensity. J Sports Med Phys Fitness 33:27–33
25. Haddad DS, Brioschi ML, Vardasca R et al (2014) Thermographic characterization of masticatory muscle regions in volunteers with and without myogenous temporomandibular disorder: preliminary results. Dentomaxillofac Radiol 43:20130440
26. Haddad DS, Oliveira BC, Brioschi ML et al (2019) Is it possible myogenic temporomandibular dysfunctions change the facial thermal imaging? Clin Lab Res Dent 2019:1–10. http://dx.doi.org/10.11606/issn.2357-8041.clrd.2019.158306
27. Gratt BM, Sickles EA (1995) Electronic facial thermography: an analysis of asymptomatic adult subjects. J Orofac Pain 9:255–265
28. Weinstein SA, Weinstein G, Weinstein EL et al (1991) Facial thermography, basis, protocol, and clinical value. Cranio 9:201–211
29. Brioschi ML, Teixeira MJ, Silva MF (2010) Princípios e Indicações da Termografia Médica, 1st edn. Andreoli, São Paulo
30. Bernard V, Staffa E, Mornstein V et al (2013) Infrared camera assessment of skin surface temperature: effect of emissivity. Phys Med 29(6):583–591
31. Bandeira F, Neves EB, Moura MAM et al (2014) A termografia no apoio ao diagnóstico de lesão muscular no esporte. Rev Bras Med Esporte 20(1):59–64
32. Kenney WL, Johnson JM (1992) Control of skin blood flow during exercise. Med Sci Sports Exerc 24(3):303–312
33. Johnson JM (1992) Exercise and the cutaneous circulation. Exerc Sport Sci Rev 20:59–97
34. Charkoudian N (2010) Mechanisms and modifiers of reflex induced cutaneous vasodilation and vasoconstriction in humans. J Appl Physiol 109:1221–1228
35. Fernandes AA, Amorim PRS, Prímola-Gomes TN et al (2020) Evaluation of skin temperature during exercise by infrared thermography: a systematic review. Rev Andal Med Deporte 5(3):113–117
36. Formenti D, Ludwig N, Gargano M et al (2013) Thermal imaging of exercise-associated skin temperature changes in trained and untrained female subjects. Ann Biomed Eng 41:863–871
37. Al-Nakhli HH, Petrofsky JS, Laymon MS et al (2012) The use of thermal infra-red imaging to detect delayed onset muscle soreness. J Vis Exp 59:3551
38. Demartino MMF, Simões ALB (2013) A comparative study of tympanic and oral temperatures in healthy adults. Rev Ciênc Méd Campinas 12(2):115–121
39. Amorim AMAM, Barbosa JS, Freitas APLF et al (2018) Infrared thermography in dentistry. HU Rev Juiz de Fora 44(1):15–22
40. De Martino MMF, Simões ALB (2013) A comparative study of tympanic and oral temperatures in healthy adults. Rev Ciênc Med (Campinas) 12(2):115–121
41. Abreu MM, Haddadin AS, Shields B et al (2015) Noninvasive surface monitoring of core temperature via a medial canthal brain temperature tunnel. In: The Anesthesiology Annual Meeting, American Society of Anesthesiology. Abstract archives, 24 Oct
42. Neves EB, Salamunes ACC, de Oliveira RM et al (2017) Effect of body fat and gender on body temperature distribution. J Therm Biol 70:1–8. https://doi.org/10.1016/j.jtherbio.2017.10.017
43. Salamunes ACC, WanStadnik AM, Neves EB (2017) The effect of body fat percentage and body fat distribution on skin surface temperature with infrared thermography. J Therm Biol 66:1–9. https://doi.org/10.1016/j.jtherbio.2017.03.006
44. Dibai Filho AV, Packer AC, de Souza Costa AC, Rodrigues-Bigaton D (2013) Accuracy of infrared thermography of the masticatory muscles for the diagnosis of myogenous temporomandibular disorder. J Manip Physiol Ther 36(4):245–252

Implementation of One and Two Dimensional Analytical Solution of the Wave Equation: Dirichlet Boundary Conditions S. G. Mello, C. Benetti and A. G. Santiago

Abstract

This paper presents an implementation of an approximate analytical solution for the wave equation in one- and two-dimensional domains under Dirichlet Boundary Conditions (DBC), i.e., the primary variable is known on the domain boundary and is defined as a periodic function of time. The approximate solutions are derived using Fourier series and are implemented in the Python 3.7 programming language according to the Object-Oriented Paradigm (OOP), providing flexibility for increasing the number of terms, as well as several visualization tools. The presented results are compared with the Finite Difference Method (FDM) solution for the wave equation, leading to root-mean-squared errors of the order of 10−3.

Keywords

Wave equation • Fourier series • Analytical solutions • Dirichlet boundary conditions

1 Introduction

Ultrasound (US) is one of the most popular imaging modalities in medicine and is also known for its therapeutic applications. The popularity of this technique stems from the equipment's portability, the use of non-ionizing radiation, and its low cost when compared to other modalities such as magnetic resonance and computed tomography. As its name suggests, it uses sound waves at ultrasonic frequencies; however, the exact frequency, the beam shape, and the input waveform depend on the intended application.

S. G. Mello · C. Benetti · A. G. Santiago (B) Center for Engineering, Modeling and Applied Social Sciences, Biomedical Engineering, Federal University of ABC, São Bernardo do Campo, SP, Brazil e-mail: [email protected]

Modeling wave propagation is a key aspect of developing new US techniques and improving transducers. The acoustic propagation properties of ultrasound in tissue, i.e., the sound velocity and pressure, are governed by the well-known wave equation [1], which also describes oscillatory phenomena such as electromagnetic field propagation, seismologic waves, etc. Although the wave equation has great applicability, its solutions can be difficult or even impossible to find analytically due to domain complexity. To overcome this problem, numerical methods can be used to estimate approximate solutions within such domains, such as the Finite Element Method (FEM) [2] and the Boundary Element Method (BEM) [3], both weak formulations, and the Finite Difference Method (FDM), a strong formulation [4]. The choice of method is based on its stability, flexibility, speed and accuracy for a given domain and boundary conditions. Analytical solutions for regular domains provide baselines for numerical comparisons. Using Fourier series is an important and well-known approach to the development of analytical solutions of the wave equation. According to Igel [5], such solutions can be found in a vast literature in which the boundary conditions are considered constant, which does not provide enough flexibility to deal with more complex problems. Torii et al. [6] propose an analytical solution for one- and two-dimensional problems governed by Dirichlet Boundary Conditions (DBC), i.e., the boundary conditions are expressed through the main variable. The proposed solution considers a sine function with arbitrary angular frequency ω (rad/s) as the boundary condition. Although the authors present the solution and illustrate it with some examples, a programming interface in which the user is free to choose the wave and domain parameters, and which provides plotting functionalities, is desirable.
This paper aims to present an implementation of the approximate analytical solution proposed by Torii et al. [6] for the wave equation considering DBC. Since the wave equation for acoustic problems has a dual characteristic, i.e., it may be written as a function of the acoustic velocity or the

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_255


S. G. Mello et al.

acoustic pressure, denoted by p(x, t) [Pa].

2 Methodology

In this section, we present the formulation of the one- and two-dimensional approximate solutions. The code was developed in the Python 3.7 Anaconda distribution using an object-oriented approach, resulting in a simple API that can handle any number of approximation terms, offers total control of the domain size, number of sample points, frequency of the input wave and duration of the simulation, and provides plotting functions that allow inspection of the result at any given time or coordinate in the Cartesian plane. The NumPy and SciPy libraries were used for fast matrix-vector processing, while Matplotlib was used to implement the plotting tools [7].

2.1 Analytical Solution

One-dimensional wave equation: The formulation of the approximate analytical solution for the one-dimensional problem is defined within a domain of size L (m) and a time range (0, t_f) (s). The differential equation and the considered domain are given by Eq. 1:

∂²p/∂t² = c² ∂²p/∂x²,  t > 0,  x ∈ [0, L]   (1)

where c (m/s) is the medium plane wave velocity. The initial conditions for this problem are given by Eqs. 2 and 3:

p(x, 0) = 0   (2)

∂p(x, 0)/∂t = 0   (3)

The DBC are given by Eqs. 4 and 5:

p(x = 0, t) = 0   (4)

p(x = L, t) = sin(ωt)   (5)

where ω is the input angular frequency in rad/s. The approximate weak solution to this problem is given by Eqs. 6–8:

p(x, t) = p₁(x, t) + p₂(x, t)   (6)

p₁(x, t) = (x/L) sin(ωt)   (7)

p₂(x, t) = Σₙ₌₁^∞ sin(nπx/L) [Cₙ sin(cnπt/L) + Bₙ(t)]   (8)

with the parameters Cₙ and bₙ given by Eqs. 9 and 10 and the function Bₙ(t) given by Eq. 11:

Cₙ = −ωbₙ/(cnπ)   (9)

bₙ = (−1)ⁿ⁺¹ 2L/(nπ)   (10)

Bₙ(t) = K₂ sin(cnπt/L) + Lω²bₙ sin(ωt) / ((cnπ)² − L²ω²)   (11)

Equation 12 presents the parameter K₂:

K₂ = −L²ω³bₙ / ((cnπ)³ − cnπL²ω²)   (12)

Two-dimensional wave equation: The formulation of the approximate analytical solution for the two-dimensional problem is defined within a domain Ω. The differential equation and its domain are given by Eqs. 13 and 14, valid within the time range [0, t_f] (s):

∂²p/∂t² = c² (∂²p/∂x² + ∂²p/∂y²)   (13)

Ω = [0, Lx] × [0, Ly]   (14)

The initial conditions and the DBC are given by Eqs. 15–21:

p(x, y, 0) = 0   (15)

∂p(x, y, 0)/∂t = 0   (16)

The boundary conditions:

p(0, y, t) = 0   (17)

p(x, 0, t) = 0   (18)

p(x, Ly, t) = 0   (19)

p(Lx, y, t) = sin(ωt) f(y)   (20)

with f(y) satisfying the following conditions:

f(0) = f(Ly) = 0   (21)

Equation 21 imposes the boundary conditions described by Eqs. 17 and 19. The solution for this problem is given by Eqs. 22–25:

p(x, y, t) = p₁(x, y, t) + p₂(x, y, t) + p₃(x, y, t)   (22)

p₁(x, y, t) = Σₘ₌₁^∞ Σₙ₌₁^∞ vᵃ_{m,n}(x, y, t)   (23)

p₂(x, y, t) = Σₘ₌₁^∞ Σₙ₌₁^∞ vᵇ_{m,n}(x, y, t)   (24)

p₃(x, y, t) = (x/Lx) sin(ωt) f(y)   (25)

The term vᵃ_{m,n} is given by Eq. 26:

vᵃ_{m,n}(x, y, t) = B*_{m,n} sin(λ_{m,n} t) sin(mπx/Lx) sin(nπy/Ly)   (26)

The parameters λ_{m,n}, B*_{m,n}, c_{m,n}, g₁(x, y) and γ_{m,n} are given by Eqs. 27–31, respectively:

λ_{m,n} = cπ √[(m/Lx)² + (n/Ly)²]   (27)

B*_{m,n} = c_{m,n} ∫₀^Ly ∫₀^Lx g₁(x, y) γ_{m,n} dx dy   (28)

c_{m,n} = 4/(λ_{m,n} Lx Ly)   (29)

g₁(x, y) = −(ω/Lx) x f(y)   (30)

γ_{m,n} = sin(mπx/Lx) sin(nπy/Ly)   (31)

The term vᵇ_{m,n} is given by Eq. 32:

vᵇ_{m,n}(x, y, t) = B_{m,n}(t) sin(mπx/Lx) sin(nπy/Ly)   (32)

In Eq. 32, the function B_{m,n}(t) is given by Eq. 33:

B_{m,n}(t) = A₁ sin(zt) + b₁ sin(ωt) / (Lx (z² − ω²))   (33)

The parameters z, A₁ and b₁ and the function s(x, y) are given by the set of Eqs. 34–37:

z = √[(cmπ/Lx)² + (cnπ/Ly)²]   (34)

A₁ = −ωb₁ / (z Lx (z² − ω²))   (35)

b₁ = −(4/(Lx Ly)) ∫₀^Ly ∫₀^Lx s(x, y) γ_{m,n} dx dy   (36)

s(x, y) = x (c² f″(y) + ω² f(y))   (37)

3 Results

In this section, we present the analytical solutions for the one- and two-dimensional problems governed by DBC and their respective FDM solutions, implemented as in Igel [5]. As examples of application of the developed API, two different wavelengths were chosen (λ1 = 1.5 × 10−3 m and λ2 = 500 × 10−6 m). For the qualitative and quantitative evaluation of the results, different numbers of terms were used to approximate the analytical solution (N = 10 and N = 50). For the simulations via FDM, the spatial and temporal discretizations were refined until the obtained response differed from the previous simulation by less than 10−10.
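Under the assumptions of Eqs. 6–12, the truncated 1D series can be evaluated as in the sketch below (parameter values are illustrative, not the paper's; note that ω must be chosen away from the resonances cnπ = Lω, where the denominators of Eqs. 11 and 12 vanish):

```python
import numpy as np

def p_analytical_1d(x, t, L=1.0, c=1.0, omega=1.0, n_terms=50):
    """Truncated series p(x,t) = p1 + p2 for the 1D wave equation with
    p(0,t) = 0 and p(L,t) = sin(omega*t), following Eqs. 6-12."""
    p1 = (x / L) * np.sin(omega * t)                       # Eq. 7
    p2 = 0.0
    for n in range(1, n_terms + 1):
        w_n = c * n * np.pi / L                            # modal frequency cnpi/L
        bn = (-1) ** (n + 1) * 2.0 * L / (n * np.pi)       # Eq. 10
        cn = -omega * bn / (c * n * np.pi)                 # Eq. 9
        k2 = -(L**2 * omega**3 * bn) / ((c * n * np.pi)**3
             - c * n * np.pi * L**2 * omega**2)            # Eq. 12
        Bn = (k2 * np.sin(w_n * t)
              + L * omega**2 * bn * np.sin(omega * t)
                / ((c * n * np.pi)**2 - L**2 * omega**2))  # Eq. 11
        p2 += np.sin(n * np.pi * x / L) * (cn * np.sin(w_n * t) + Bn)  # Eq. 8
    return p1 + p2

# Sanity checks against the boundary conditions (Eqs. 4 and 5)
p_left = p_analytical_1d(0.0, 0.3)    # fixed end, should be ~0
p_right = p_analytical_1d(1.0, 0.3)   # driven end, should be ~sin(0.3)
```

Since every series term vanishes at x = 0 and x = L, the boundary values are reproduced exactly by p₁, which makes the boundary conditions a convenient correctness check for the implementation.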

3.1 1D Problem

The spatial domain considered was L = 2λ m with a spatial step Δx = λ/150 m. Figure 1a and b presents the results for p(x = L/2, t) (Pa) for λ1 and λ2, respectively. Time measurements and root-mean-squared errors (εRMS (Pa)) are shown in Table 1 according to the number of Fourier terms and the wavelength. The time measurement was taken using the line_profile API, as described by Lanardo [8]. All simulations were performed on a Windows 10 Core i5, 1.80 GHz, with 8 GB of RAM and a 1 TB HD (Table 2). Figure 2a and b presents the graphic result for p(x, t) considering λ1 = 1.5 × 10−3 m and λ2 = 500 × 10−6 m, respectively.
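The FDM baseline used for comparison can be sketched as an Igel-style second-order leapfrog scheme for the driven Dirichlet problem (the grid size, Courant number and simulated time below are illustrative, not the refined discretization used in the paper):

```python
import numpy as np

def fdm_wave_1d(L=1.0, c=1.0, omega=1.0, nx=301, courant=0.9, t_end=2.0):
    """Leapfrog (second-order) FDM for p_tt = c^2 p_xx with p(0,t) = 0
    and a driven Dirichlet end p(L,t) = sin(omega*t); zero initial state."""
    dx = L / (nx - 1)
    dt = courant * dx / c            # CFL-stable time step (courant <= 1)
    steps = int(t_end / dt)
    x = np.linspace(0.0, L, nx)
    p_old = np.zeros(nx)             # field at time level n-1
    p = np.zeros(nx)                 # field at time level n
    c2 = (c * dt / dx) ** 2
    for n in range(1, steps + 1):
        p_new = np.empty_like(p)
        # central differences in space and time at the interior nodes
        p_new[1:-1] = (2.0 * p[1:-1] - p_old[1:-1]
                       + c2 * (p[2:] - 2.0 * p[1:-1] + p[:-2]))
        p_new[0] = 0.0                         # fixed end (Eq. 4)
        p_new[-1] = np.sin(omega * n * dt)     # driven end (Eq. 5)
        p_old, p = p, p_new
    return x, p

x, p = fdm_wave_1d()
```

The εRMS values reported in the tables are then simply the root-mean-squared difference between such an FDM field and the truncated analytical series sampled on the same grid.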


Fig. 1 Comparison between FDM and 1D analytical solution: a λ1 = 1.5 × 10−3 m and b λ2 = 500 × 10−6 m

Table 1 Time and εRMS measurements for the 1D Dirichlet solution compared to FDM

λ × 10−3 (m)   N    εRMS × 10−3 (Pa)   Computational time (s)
1.5            10   81.4               0.047
1.5            50   81.4               0.115
0.5            10   785.4              0.047
0.5            50   785.4              0.115

3.2 2D Problem

In order to provide a standard model for comparison between different wavelengths, the following considerations were adopted:

• Ly = 5λ m, Δy = λ/20 m
• Lx = 3λ m, Δx = λ/20 m

Table 2 εRMS for p(x, t) over the x–t reference plane

λ × 10−3 (m)   N    εRMS × 10−3 (Pa)
1.5            10   121.1
1.5            50   123.3
0.5            10   548.3
0.5            50   436.3


Fig. 2 Response p(x, t) for the problems a λ1 = 1.5 × 10−3 m and b λ2 = 500 × 10−6 m

4 Discussion

This paper presented the analytical solutions and their implementations for the one- and two-dimensional wave equation considering Dirichlet Boundary Conditions, with results compared to Finite Difference Method solutions. For the one-dimensional problem, unlike numerical models that required further refinement of the domain discretization as the wavelength decreased, the behavior of the analytical response does not vary considerably with the increase in the number of approximation terms, as can be observed in Fig. 1 and also in Table 1. In Fig. 2 it is possible to observe the behavior of the simulation for λ1 = 1.5 × 10−3 m (Fig. 2a) and λ2 = 500 × 10−6 m (Fig. 2b) over the entire domain and time range. On the other hand, for the two-dimensional problem, the results show that, despite the increase in the number of terms,

the solution presents high divergence at the end of the response (Fig. 3), reflected in the high εRMS observed in Table 3. Given the objective of the work, this paper presented the implementation of an approximate solution, as an API, for the one- and two-dimensional wave equation with Dirichlet Boundary Conditions, which allows the variation of several simulation parameters as well as the visualization of the obtained analytical responses. It also demonstrated that the results are comparable to those obtained by Finite Differences, thus providing a reliable tool for the validation of numerical methods. The next step of this work is to use the presented model to compose more complex periodic signals, such as a Gaussian pulse train, using the current model as part of a Fourier series, thus providing a flexible analytical model capable of simulating the wave equation for any periodic input function.
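The composition step proposed as future work can be sketched by computing the trigonometric Fourier coefficients of a pulse train numerically; each harmonic would then drive one copy of the sinusoidal analytical solution. The period, pulse width and number of harmonics below are illustrative assumptions:

```python
import numpy as np

def fourier_coeffs(signal, t, T, n_harmonics):
    """Trigonometric Fourier coefficients (a_k, b_k), k = 0..n_harmonics,
    of a T-periodic signal sampled uniformly over one period t in [0, T)."""
    dt = t[1] - t[0]
    ks = range(n_harmonics + 1)
    a = np.array([2.0 / T * np.sum(signal * np.cos(2*np.pi*k*t/T)) * dt for k in ks])
    b = np.array([2.0 / T * np.sum(signal * np.sin(2*np.pi*k*t/T)) * dt for k in ks])
    return a, b

T = 1.0
t = np.linspace(0.0, T, 2000, endpoint=False)
pulse = np.exp(-0.5 * ((t - T / 2) / 0.05) ** 2)   # one Gaussian pulse per period
a, b = fourier_coeffs(pulse, t, T, 40)

# Reconstruction from the truncated series: a0/2 + sum of harmonics
recon = a[0] / 2 + sum(a[k] * np.cos(2*np.pi*k*t/T) + b[k] * np.sin(2*np.pi*k*t/T)
                       for k in range(1, 41))
```

Because the pulse is smooth and effectively periodic, a few dozen harmonics already reconstruct it closely, which suggests the number of sinusoidal solutions needed per input signal stays manageable.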


Fig. 3 Comparison between FDM and 2D analytical solution: a λ1 = 1.5 × 10−3 m and b λ2 = 500 × 10−6 m

Table 3 Time and εRMS measurements for the 2D Dirichlet solution compared to FDM

λ × 10−3 (m)   N    εRMS × 10−9 (Pa)   Computational time (s)
1.5            10   1099.8             6.42
1.5            50   1098.5             190.10
0.5            10   430.8              6.13
0.5            50   430.7              184.30

5 Conclusion

The analytical solution for the wave equation in one- and two-dimensional domains under Dirichlet Boundary Conditions was successfully implemented. The comparison with the Finite Difference Method (FDM) solution presented root-mean-squared errors of the order of 10−6 Pa.

Acknowledgements We would like to acknowledge the Federal University of ABC and the Technological Research Institute of the State of São Paulo (IPT).

Conflict of Interest The authors declare that they have no conflict of interest.

References
[1] Atalla N, Sgard F (2015) Finite element and boundary methods in structural acoustics and vibration. CRC Press, Boca Raton
[2] Simões RJ, Pedrosa A, Pereira WCA, Teixeira CA (2016) A complete COMSOL and MATLAB finite element medical ultrasound imaging simulation. In: ICA 2016
[3] Gimperlein H, Meyer F et al (2018) Boundary elements with mesh refinements for the wave equation. Numer Math 139:867–912
[4] Nielsen KL (1964) Methods in numerical analysis. MacMillan, New York
[5] Igel H (2017) Computational seismology. Oxford University Press, Oxford
[6] Torii A, Lima R, Sá R (2019) Benchmark solutions for the wave equation with boundary harmonic excitation
[7] Johansson R (2019) Numerical Python: scientific computing and data science applications with Numpy, Scipy and Matplotlib. Apress
[8] Lanardo G (2017) Python high performance. PACKT Publishing Limited

Application of the Neumann Boundary Conditions to One and Two Dimensional Analytical Solution of Wave Equation S. G. Mello, C. Benetti and A. G. Santiago

Abstract

This project aims at the implementation of an approximate analytical solution for the wave equation in one- and two-dimensional domains under Neumann Boundary Conditions (NBC), in which the gradient of the primary variable is known on the domain boundary and, for the proposed implementation, is defined as a periodic function of time. Fourier series were used to determine the approximate solutions for the wave equation, and the result was implemented in the Python 3.7 programming language. The Object-Oriented Paradigm (OOP) approach was used to provide flexibility for the implementation, resulting in an Application Programming Interface (API) that provides functionalities such as increasing the number of approximation terms and several visualization tools. The results are presented considering two different frequencies and compared to the Finite Difference Method (FDM) solution, leading to root-mean-squared errors of the order of 10−6 (Pa).

Keywords

Acoustic transient field • Neumann • Wave equation • Finite differences

1 Introduction

Acoustic propagation is one of several problems that can be described by Partial Differential Equations (PDEs), and it is fully characterized by a spatial domain, a time range, and initial and boundary conditions. The solutions of PDE problems are determined using numerical methods such as the Finite Element Method (FEM) [1], the Boundary Element

S. G. Mello · C. Benetti · A. G. Santiago (B) Center for Engineering, Modeling and Applied Social Sciences, Biomedical Engineering, Federal University of ABC, Alameda da Universidade s/n, São Bernardo do Campo, Brazil e-mail: [email protected]

Method (BEM) [2] and the Finite Difference Method (FDM) [3], and the results can be compared to analytical solutions in order to determine the accuracy, precision, speed and stability of the methods. In this context, Torii et al. [4] presented a very interesting approach for an analytical solution. In their work, a sinusoidal function is used as the input of an analytical solution for the wave equation. In this way, the solution itself may be used to approximate other periodic functions through Fourier series. Considering the analytical solutions, the Neumann Boundary Condition (NBC), along with the Dirichlet Boundary Condition (DBC), describes all possible configurations that a problem may have, including the Robin boundary condition, which may be obtained as a linear combination of the DBC and the NBC. The NBC is defined when the normal gradient of the primary variable is prescribed at the boundary of the problem domain. In terms of an acoustic problem, a non-zero NBC represents a wave incident on a non-rigid interface. This paper presents the development of an Application Programming Interface (API), implemented in the Python 3.7 programming language, to simulate the analytical solution of the Wave Equation in one and two dimensions considering NBC for acoustic problems. Since the Wave Equation may represent either the sound velocity or the sound pressure [5], the sound pressure, p(x, t), is used as the analysis variable. In order to assess the results of the proposed development, the analytical solution is compared to the Finite Difference Method (FDM) solution of the Wave Equation.

2 Methodology

The Anaconda distribution of Python 3.7 was used to develop the Object-Oriented implementation with a simple API; as a result, it can handle any number of approximation terms. The NumPy and SciPy libraries are used to improve the matrix-vector processing, and Matplotlib was used to implement the plotting tools [6]. In this way, a model was developed that allows total control of the input wave frequency,

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_256


duration of the simulation, number of sample points and domain size, and it is possible to inspect the results with the plotting functions at given time or spatial coordinates. After the model was fully implemented, the results of the analytical solution of the one- and two-dimensional problems were compared with their respective solutions using the Finite Difference Method (FDM). The formulations for the one- and two-dimensional approximate solutions are presented below.

2.1 One-Dimensional Wave Equation

The differential equation for the approximate analytical solution of the one-dimensional problem is given by:

∂²p/∂t² = c² ∂²p/∂x²,  t > 0,  x ∈ [0, L]   (1)

where c (m/s) is the plane wave velocity in the medium and L (m) is the domain size. The initial conditions of the problem are [4]:

p(x, 0) = 0,  0 ≤ x ≤ L   (2)

∂p(x, 0)/∂t = 0   (3)

The system is at rest, that is, at time t = 0 the primary variable and its time derivative are equal to zero. The NBC of the problem are:

p(x = 0, t) = 0   (4)

∂p(x = L, t)/∂x = sin(ωt)   (5)

where ω is the angular frequency in rad/s. The following equations describe the formulation for calculating the field:

p(x, t) = p1(x, t) + p2(x, t)   (6)

p1(x, t) = x sin(ωt)   (7)

p2(x, t) = Σ_{n=1}^{∞} sin(K_n x) [C_n sin(K_n c t) + B_n(t)]   (8)

The parameters K_n, C_n and A_n are given by Eqs. 9–11:

K_n = (2n − 1) π / (2L)   (9)

C_n = −(ω / (K_n c)) A_n   (10)

A_n = (2/L) (−1)^(n−1) / K_n²   (11)

and the B_n(t) function is given by the following equation:

B_n(t) = D_1 cos(K_n c t) + D_2 sin(K_n c t) + A_n ω² sin(ωt) / (K_n² c² − ω²)   (12)

Torii et al. [4] present a solution for the parameters D_1 and D_2; however, that solution does not satisfy the initial conditions expressed by Eqs. 13 and 14 for every t:

v(x, 0) = Σ_{n=1}^{∞} B_n(0) sin(K_n x) = 0  →  B_n(0) = 0   (13)

∂v(x, 0)/∂t = Σ_{n=1}^{∞} B_n′(0) sin(K_n x) = 0  →  B_n′(0) = 0   (14)

Thus, the expressions for D_1 and D_2 are presented by Eqs. 15 and 16:

D_1 = 0   (15)

D_2 = −ω³ A_n / (K_n³ c³ − K_n c ω²)   (16)

so that Eq. 12 can be written as Eq. 17:

B_n(t) = D_2 sin(K_n c t) + A_n ω² sin(ωt) / (K_n² c² − ω²)   (17)

It can be observed that these expressions satisfy Eqs. 13 and 14 for every t.

2.2 Two-Dimensional Wave Equation

The differential equation for the approximate analytical solution of the two-dimensional problem is shown below:

∂²p/∂t² = c² (∂²p/∂x² + ∂²p/∂y²)   (18)

Ω = [0, L_x] × [0, L_y]   (19)

where the problem is defined in the domain Ω and is valid within the time range [0, t_f] (s).




The initial conditions and the NBC of the problem are:

p(x, y, 0) = 0   (20)

∂p(x, y, 0)/∂t = 0   (21)

The boundary conditions at x = 0, y = 0 and y = L_y represent rigid walls, and therefore p(x, y, t) = 0 at these boundaries:

p(0, y, t) = 0   (22)

p(x, 0, t) = 0   (23)

p(x, L_y, t) = 0   (24)

For x = L_x, the boundary condition is defined by the gradient of p:

∂p(x = L_x, y, t)/∂x = sin(ωt) f(y)   (25)

The function f(y) must meet the requirement imposed by Eq. 26, thus satisfying the boundary conditions at y = 0 and y = L_y (Eqs. 23 and 24):

f(0) = f(L_y) = 0   (26)

The exact solution is given by Eqs. 27–30:

p(x, y, t) = p1(x, y, t) + p2(x, y, t) + p3(x, y, t)   (27)

p1(x, y, t) = Σ_{m=1}^{∞} Σ_{n=1}^{∞} v^a_{m,n}(x, y, t)   (28)

p2(x, y, t) = Σ_{m=1}^{∞} Σ_{n=1}^{∞} v^b_{m,n}(x, y, t)   (29)

p3(x, y, t) = x sin(ωt) f(y)   (30)

Equation 31 presents the function v^a_{m,n}:

v^a_{m,n}(x, y, t) = B*_{m,n} sin(λ_{m,n} t) γ_{m,n}   (31)

with γ_{m,n}, λ_{m,n}, B*_{m,n} and g1(x, y) given by Eqs. 32–35:

γ_{m,n} = sin((2m − 1)πx / (2L_x)) sin(nπy / L_y)   (32)

λ_{m,n} = cπ √( ((2m − 1) / (2L_x))² + (n / L_y)² )   (33)

B*_{m,n} = (4 / (λ_{m,n} L_x L_y)) ∫₀^{L_y} ∫₀^{L_x} g1(x, y) γ_{m,n} dx dy   (34)

g1(x, y) = −ω x f(y)   (35)

And v^b_{m,n} is defined by:

v^b_{m,n}(x, y, t) = B_{m,n}(t) sin((2m − 1)πx / (2L_x)) sin(nπy / L_y)   (36)

with the parameters required for the calculation of v^b_{m,n} given by the set of Eqs. 37–40:

B_{m,n}(t) = A_2 sin(λ_{m,n} t) + b_2 sin(ωt) / (λ_{m,n}² − ω²)   (37)

A_2 = −ω b_2 / (λ_{m,n} (λ_{m,n}² − ω²))   (38)

b_2 = −(4 / (L_x L_y)) ∫₀^{L_y} ∫₀^{L_x} s(x, y) γ_{m,n} dx dy   (39)

s(x, y) = x (c² f″(y) + ω² f(y))   (40)
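To make the one-dimensional formulation concrete, the series of Eqs. 6–17 can be sketched in Python with NumPy, in the OOP style described in this section. The class and method names below are illustrative only, not the authors' actual API:

```python
import numpy as np

class WaveSolution1D:
    """Approximate analytical solution of Eq. 1 with the NBC of Eqs. 4-5.

    Illustrative sketch only: the class and its interface are assumptions,
    not the paper's published API.
    """

    def __init__(self, c, L, omega, n_terms=50):
        self.c, self.L, self.omega = c, L, omega
        n = np.arange(1, n_terms + 1)
        self.Kn = (2 * n - 1) * np.pi / (2 * L)              # Eq. 9
        self.An = (2 / L) * (-1.0) ** (n - 1) / self.Kn ** 2  # Eq. 11
        self.Cn = -(omega / (self.Kn * c)) * self.An          # Eq. 10
        self.denom = self.Kn ** 2 * c ** 2 - omega ** 2
        # Eq. 16: D2 chosen so that Bn'(0) = 0 (Eq. 14)
        self.D2 = -(omega ** 3 * self.An) / (self.Kn * c * self.denom)

    def _Bn(self, t):
        # Eq. 17 (D1 = 0 by Eq. 15)
        return (self.D2 * np.sin(self.Kn * self.c * t)
                + self.An * self.omega ** 2 * np.sin(self.omega * t) / self.denom)

    def pressure(self, x, t):
        """p(x, t) = p1 + p2 (Eqs. 6-8) at scalar position x and time t."""
        p1 = x * np.sin(self.omega * t)                       # Eq. 7
        modes = np.sin(self.Kn * x) * (self.Cn * np.sin(self.Kn * self.c * t)
                                       + self._Bn(t))
        return p1 + modes.sum()                               # Eq. 8
```

Truncating the sum at `n_terms` terms reproduces the "number of approximation terms" control mentioned in the methodology; by construction p(x, 0) = 0 and p(0, t) = 0, matching Eqs. 2 and 4.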

3 Results

In this section we present the analytical solutions for the one- and two-dimensional problems governed by NBC and their respective solutions by FDM, implemented as in Igel [7]; the FDM mesh parameters were refined until the difference between models was lower than 10−10. In order to illustrate the implemented API, the following parameters were chosen: plane wave velocity c = 3 m/s, number of Fourier terms N = 10 and N = 50, t_f = 5 s and Δt = 5 × 10−3 s. Two different wavelengths, λ1 = 1.5 × 10−3 m (problem A) and λ2 = 500 × 10−6 m (problem B), were simulated in order to evaluate their influence on the solution quality. All simulations were performed on a Windows 10 machine with a Core i5 at 1.80 GHz, 8 GB of RAM and 1 TB of HDD storage.
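A minimal finite-difference counterpart, in the spirit of the leapfrog schemes in Igel [7], together with the ε_RMS metric used in Tables 1 and 2, could look as follows. The ghost-node treatment of the Neumann boundary is an assumption of this sketch, not necessarily the authors' discretization:

```python
import numpy as np

def fdm_wave_1d(c, L, omega, nx=301, t_final=1.0, cfl=0.5):
    """Leapfrog FDM for Eq. 1 with p(0, t) = 0 and dp/dx(L, t) = sin(wt).

    Illustrative sketch: the one-sided ghost-node Neumann treatment at
    x = L is an assumption. Returns the time vector and the trace p(L/2, t).
    """
    x = np.linspace(0.0, L, nx)
    dx = x[1] - x[0]
    dt = cfl * dx / c                      # CFL-stable time step
    nt = int(round(t_final / dt))
    r2 = (c * dt / dx) ** 2
    p_old = np.zeros(nx)                   # p(x, 0) = 0        (Eq. 2)
    p_now = np.zeros(nx)                   # dp/dt(x, 0) = 0    (Eq. 3)
    trace = np.empty(nt)
    for n in range(nt):
        t_now = n * dt
        p_new = np.empty(nx)
        p_new[1:-1] = (2 * p_now[1:-1] - p_old[1:-1]
                       + r2 * (p_now[2:] - 2 * p_now[1:-1] + p_now[:-2]))
        p_new[0] = 0.0                     # rigid wall         (Eq. 4)
        # central-difference ghost node enforcing dp/dx = sin(wt) (Eq. 5)
        ghost = p_now[-2] + 2 * dx * np.sin(omega * t_now)
        p_new[-1] = (2 * p_now[-1] - p_old[-1]
                     + r2 * (ghost - 2 * p_now[-1] + p_now[-2]))
        p_old, p_now = p_now, p_new
        trace[n] = p_now[nx // 2]
    return np.arange(1, nt + 1) * dt, trace

def rms_error(a, b):
    """Root-mean-squared error between two sampled fields (eps_RMS)."""
    return np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
```

Comparing `trace` against the analytical solution sampled at the same points with `rms_error` reproduces the kind of ε_RMS figures reported in the tables below.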


Fig. 1 Comparison between FDM and 1D analytical solution: a λ = 1.5 × 10−3 m and b λ = 500 × 10−6 m

3.1 1D Problem

The spatial domain was parameterized for problems A and B as follows:

• L = 2λ m, with 300 points
• Δx = λ/150 m

where λ is the wavelength being analyzed. Figure 1a and b present the results p(x = L/2, t) (Pa) for problems A and B, respectively. Time measurements and root-mean-squared errors (ε_RMS (Pa)) are shown in Table 1 according to the number of Fourier terms and wavelength. The time measurements were taken using the line_profiler API, as described by Lanaro [8]. The "Computational Time" column represents the time spent in the field calculation functions.

Table 1 Time and ε_RMS measurements for 1D Neumann solution compared to FDM

| λ (m) | Fourier terms | ε_RMS × 10−6 (Pa) | Computational time (s) |
|---|---|---|---|
| 1.5 × 10−3 | 10 | 89.5 | 0.046 |
| 1.5 × 10−3 | 50 | 89.5 | 0.120 |
| 500 × 10−6 | 10 | 31.9 | 0.047 |
| 500 × 10−6 | 50 | 31.7 | 0.121 |

3.2 2D Problem

The spatial domain was parameterized for problems A and B as follows:

• L_y = 4λ m, with Δy = λ/50 m
• L_x = 2λ m, with Δx = λ/50 m


Fig. 2 Comparison between FDM and 2D analytical solution: a λ = 1.5 × 10−3 m and b λ = 500 × 10−6 m

Table 2 Time and ε_RMS measurements for 2D Neumann solution compared to FDM

| λ (m) | Fourier terms | ε_RMS × 10−9 (Pa) | Computational time (s) |
|---|---|---|---|
| 1.5 × 10−3 | 10 | 2.46 | 7.43 |
| 1.5 × 10−3 | 50 | 2.45 | 203.27 |
| 500 × 10−6 | 10 | 69.90 | 6.99 |
| 500 × 10−6 | 50 | 70.00 | 192.57 |

where λ is the wavelength being analyzed. The results are presented in Fig. 2a and b with respect to p(x = L_x/2, y = L_y/2, t) (Pa) for λ1 and λ2, respectively. Time measurements and root-mean-squared errors (ε_RMS (Pa)) are shown in Table 2 according to the number of Fourier terms and wavelength. Figure 3 shows the response p(x, y, t) (Pa) at t_i = t_f/2.

4 Discussion

This paper presented the results of an implementation of an approximate solution for the Wave Equation considering Neumann Boundary Conditions in one and two dimensions. The results were then compared to the Finite Difference solutions. The API was developed according to the Object-Oriented Programming (OOP) paradigm and allows the variation of the different parameters that describe the propagation of the wave, such as plane wave velocity, frequency, number of approximation terms, sample points inside the domain and time range. Despite the increase in the number of terms for the one-dimensional problems, no significant differences were observed with respect to the solution behavior for the considered wavelengths (λ1 = 1.5 × 10−3 m and λ2 = 500 × 10−6 m). The comparisons can be observed both qualitatively (Fig. 1) and quantitatively, by the ε_RMS (Table 1).


Fig. 3 p(x, y, t = t_f/2) (Pa)

The invariability of the solution quality with respect to the number of approximation terms can also be observed in the two-dimensional implementation, as shown in Fig. 2 and Table 2. Another feature of this tool is the graphical response plotted at different domain points, as shown in Fig. 3. It is important to note that the API can compare the analytical responses to the Finite Difference results and can be used as a reference for the evaluation of other numerical methods. The next steps of this work include the association with the Dirichlet Boundary Condition (DBC) solution of the Wave Equation in order to generate Robin Boundary Conditions, which are linear combinations of the NBC and the DBC. The association of different boundary conditions in a single problem in order to compose more complex domains, including the variation of acoustic properties within a single domain, is also being considered.

5 Conclusion

It was possible to develop the API according to the Object-Oriented Programming (OOP) paradigm in one and two dimensions. Its comparison with the Finite Difference Method (FDM) solution showed root-mean-squared errors of order 10−6 Pa.

Acknowledgements We would like to acknowledge the Federal University of ABC and the Technological Research Institute of the State of São Paulo (IPT).

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Simões RJ, Pedrosa A, Pereira WCA, Teixeira CA (2016) A complete COMSOL and MATLAB finite element medical ultrasound imaging simulation. In: ICA2016
2. Gimperlein H, Meyer F et al (2018) Boundary elements with mesh refinements for the wave equation. Numer Math 139:867–912
3. LeVeque R (2007) Finite difference methods for ordinary and partial differential equations: steady-state and time-dependent problems. SIAM, Philadelphia
4. Torii A, Lima R, Sá R (2019) Benchmark solutions for the wave equation with boundary harmonic excitation
5. Atalla N, Sgard F (2015) Finite element and boundary methods in structural acoustics and vibration. CRC Press, Boca Raton
6. Johansson R (2019) Numerical Python: scientific computing and data science applications with Numpy, Scipy and Matplotlib. Apress
7. Igel H (2017) Computational seismology. Oxford University Press, Oxford
8. Lanaro G (2017) Python high performance. Packt Publishing, Birmingham

Comparative Analysis of Parameters of Magnetic Resonance Examinations for the Accreditation Process of a Diagnostic Image Center in Curitiba/Brazil M. V. C. Souza, R. Z. V. Costa, and K. E. P. Pinho

Abstract


Due to the increased demand for magnetic resonance imaging (MRI), there is a need to attest to the quality of the exams through quality certification. In the area of diagnostic imaging, several services seek certification, and the interested institution must ensure that the minimum required standards are met. Thus, this study aimed to analyze and compare the parameters of MRI exams at a Center for Diagnostic Imaging in Curitiba/PR, Brazil, against the guidelines of the Diagnostic Imaging Accreditation Program (PADI), for the evaluation and readjustment of the standardization for the accreditation process. Examination images of the musculoskeletal system were collected and recorded in the Digital Imaging and Communications in Medicine (DICOM) format, which is the submission standard of the accrediting organization. For this article, the results of the cervical spine exam were selected, following the sequence planning and the routine protocol. The results were then compared to the literature in the area and to the guidelines of the accrediting organization, PADI, regarding four evaluation points: mandatory minimum sequences, image contrast, anatomical coverage and spatial resolution. In the conformity assessments, the protocols were shown to be within the established standards and apt to be accredited.

Keywords







Magnetic resonance imaging • Quality control • Accreditation • Standardization • Hospital management

M. V. C. Souza (B) Diagnostic Clinic, Curitiba, Brazil R. Z. V. Costa · K. E. P. Pinho Federal University of Technology—Paraná, Av. Sete de Setembro, 3165 DAFIS, Curitiba, Brazil e-mail: [email protected]

1 Introduction

Magnetic resonance imaging (MRI) is an advanced diagnostic imaging technique, which is complex and detailed. In a very simplified description, a strong magnetic field is combined with radiofrequency (RF) pulses that transmit energy to the hydrogen (H) protons of the human body. The absorbed energy is immediately returned, initiating a relaxation process that induces a small electrical signal perceived by means of receiving coils. Through computational methods, which use the Fourier Transform associated with the mathematical model called k-space, this signal is converted into digital images of the studied anatomy directly on the command monitor of the MRI equipment [1]. The great advantage of this technique is the ability to provide a detailed diagnostic image in the three planes (axial, coronal and sagittal) of the region of interest without the need for exposure to ionizing radiation. For this reason, it is a less harmful technique compared to conventional radiology and computed tomography [2]. In Brazil, the number of MRI machines is still considered low, an average of 3.5 for every 500 thousand inhabitants in the private health network and 3.1 in the public network. There are 2776 pieces of equipment in the country, most of them concentrated in the southeast (1354) and south (510) and less present in the central-west (291) and north (143) regions [3]. However, its growth has been expressive, with an increase of around 131% in the number of machines in the last report years, compared to the 2005 Health Statistics report [4]. Regarding Diagnosis and Therapy Support Services (DTSS), the South and Southeast regions were responsible for a large fraction of this increase, representing 68%. In the state of Paraná, Brazil, there were 1654 establishments related to diagnosis and therapy [5]. In 2018, 7,904,467 MRI exams were performed, an increase of approximately 4.7% over the previous year [5, 6].
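The reconstruction step described above, an inverse Fourier transform of the k-space signal, can be illustrated with NumPy; the "anatomy" below is a synthetic phantom, not clinical data:

```python
import numpy as np

# Synthetic "anatomy": a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# Acquisition fills k-space; here we simulate it with a forward 2D FFT.
k_space = np.fft.fft2(image)

# Scanner-side reconstruction: the inverse 2D FFT of k-space recovers
# the image (magnitude is taken, as in standard MRI reconstruction).
reconstructed = np.abs(np.fft.ifft2(k_space))

print(np.allclose(reconstructed, image))  # -> True (lossless round trip)
```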
These data indicate that there is a large investment in equipment and, also, an expected increase in the demand

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_257


for MRI exams. Consequently, there is a concern with the quality of these exams. For quality to be achieved, it is necessary that the procedures for carrying out the exams have little variation and, therefore, are standardized through the implementation of technical standards that guide the reproducibility of the processes and the reduction of possible errors. The certification of compliance with technical standards implemented in a hospital environment is called hospital accreditation [7]. There are accrediting entities that define these pre-established standards and provide guidance. Among the reference entities in Brazil are the National Accreditation Organization (ONA), the Diagnostic Imaging Accreditation Program (PADI) [8], promoted by the Brazilian College of Radiology (CBR), and the Brazilian Accreditation Consortium (CBA). Thus, this study aimed to make a comparative analysis of the standards used in the MRI exams of a Diagnostic Imaging Center in Curitiba/PR, Brazil, against the guidelines of the accrediting entity with which the center decided to start the certification process.

2 Materials and Methods

The work proposal was submitted to the Research Ethics Committee of the Federal University of Technology—Paraná (UTFPR) and obtained authorization under registration CAAE 59639716.0.0000.5547. To carry out this research, a memory device was provided to the clinic, on which samples were stored, equivalent to the images of a complete examination with all protocols and sequences for each examination requested, in DICOM (Digital Imaging and Communications in Medicine) format without compression. From the analyzed images, those that presented artifacts that impaired the quality of the exams and did not represent the quality-control proposal of the work were excluded, according to the orientation of the clinic. From the samples, data were collected from the headers of the DICOM files, accessed through specific programs: Philips® DICOM Viewer R3.0-SP10 and RadiAnt® DICOM Viewer 4.0.3.16415. Other image edits were made with Adobe® Photoshop® 13.0.1. From the images, most of the information necessary for the production of the protocol sequences was identified. This methodology is justified because it is similar to the image submission and analysis technique proposed by the accrediting organizations [8]. From the selected raw data, information was obtained regarding the types of sequences used, the intrinsic parameters (related to the immutable characteristics of the weightings) and extrinsic parameters (characteristics that can be adjusted

by the user) of contrast, spatial resolution and coverage of the anatomical region. The exams were presented, compared and discussed individually, having as reference the accrediting organization that the company chose, PADI, and references by authors such as Bright [9] and Westbrook [10]. The PADI provides guidelines for four evaluation points: mandatory minimum sequences, image contrast, anatomical coverage and spatial resolution (the latter considering the section thickness, the interval between sections and the reading pixel size). The items of image contrast and anatomical coverage were part of the research. These assessment items include many quality variables, which can range from the programming of the protocols to the anthropomorphic measures of the patient. A set of sequences forms an examination protocol. A sequence is a series of images using the same configuration and includes information on spatial resolution and weighting of the images. The spatial resolution items presented in this work were observed from a single image of each sequence of each examination protocol. For the study, examinations of the musculoskeletal system were selected. Sixteen examinations of the following anatomical regions were analyzed: cervical spine, thoracic spine, lumbar spine, shoulder, arm, elbow, forearm, wrist, hand, sacroiliac joints, hip, thigh, knee, leg, ankle and forefoot. Each exam was analyzed in two stages: 1st (production of the protocol, with data collected and recorded, without suggestions for improvements) and 2nd (the exams were again presented and, this time, compared and discussed individually with reference to the PADI), considering 7 items: anatomy of interest; clinical indications; patient guidance; positioning; alignment of the central region; routine protocol and sequence planning.
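The header-collection step can be sketched as follows. For real files, the third-party pydicom package would be used (the `dcmread` call is shown commented); the helper itself, whose name is illustrative, only relies on the standard DICOM keyword attributes:

```python
def extract_resolution_params(ds):
    """Collect the spatial-resolution fields evaluated in the PADI checks.

    `ds` is any object exposing the DICOM keywords SliceThickness,
    SpacingBetweenSlices and PixelSpacing (e.g. a pydicom Dataset).
    """
    thickness = float(ds.SliceThickness)                     # mm
    # SpacingBetweenSlices is usually the center-to-center distance,
    # so the gap is spacing minus thickness (an assumption of this sketch).
    spacing = float(getattr(ds, "SpacingBetweenSlices", thickness))
    row_mm, col_mm = (float(v) for v in ds.PixelSpacing)     # mm per pixel
    return {"thickness": thickness, "gap": spacing - thickness,
            "pixel_row": row_mm, "pixel_col": col_mm}

# Typical usage on one image of a sequence (path is illustrative):
# import pydicom
# ds = pydicom.dcmread("cervical_spine/sag_t1/IM_0001.dcm")
# print(extract_resolution_params(ds))
```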

3 Results

The results will be presented for the cervical spine exam, following the sequence planning and the routine protocol. To program the sagittal plane (SAG) (Fig. 1), the cuts must be parallel to the structure, dividing it into right and left sides; for this, the axial and coronal planes are used. The field of view (FOV) must be large enough to cover the head, the entire cervical spine and the proximal part of the thoracic spine. In the axial plane, the cuts must be aligned parallel to the spinous process of the vertebra, and there must be sufficient cuts to cover the entire vertebral body and the intervertebral foramina. In the coronal plane, the sections must be centered and aligned parallel to the long axis of the spine. When programming the axial plane (Fig. 2), the cuts must divide the structure into an upper and a lower region. The sagittal and coronal planes are used for


Fig. 1 Programming of sagittal sections in the axial (1) and coronal (2) planes for examination of the cervical spine

this. Two cutting blocks are used. The FOV of the axial plane must cover the entire structure of the spine and must be aligned with the entire region of the patient's neck. Sagittal plane: the block of cuts must be aligned parallel to the intervertebral discs. The blocks must cover from the lower portion of the body of the C2 vertebra to the upper portion of the vertebral body of D1; all intervertebral discs from C2–C3 to C7–D1 must be included. Coronal plane: the blocks must be centered with the spine and the cuts aligned parallel to the intervertebral discs. The routine protocol for this exam includes: SAG T2 TSE (Turbo Spin Echo, sagittal plane), SAG T1 TSE and AXIAL T2 TSE. The reference parameters for each plane can be seen in Table 1.

4 Discussion

Considering the four evaluation points (mandatory minimum sequences, image contrast, anatomical coverage and spatial resolution), the image contrast and anatomical coverage items are fully in accordance with the guidelines. These assessment items include many variables, which can range from the programming of the protocols to the patient's anthropomorphic measurements.

Fig. 2 Programming of axial sections in the sagittal (1) and coronal (2) planes for examination of the cervical spine

The evaluation of contrast and anatomical coverage made during the accreditation process is visual, based on the good identification of certain structures and good coverage of the joints and their specificities according to the guideline. Thus, with the focus of this work on the items related to minimal sequences and spatial resolution, it was identified that the analyzed protocols are in conformity with the PADI guidelines [8]. The spine protocols have values well below those suggested and are fully in line with the proposed guideline. This characteristic indicates a possible need to readjust the matrices in the phase direction used in the protocols, since it shows a slight increase that may be imperceptible at first but demonstrates the practical need to reduce the sequence time and/or guarantee an increase in the signal-to-noise ratio (SNR). The cervical spine protocol presented is compatible with the guidelines of PADI [8], which suggest minimal acquisitions in the sagittal plane in the T2 and T1 weightings, with thickness ≤ 4.0 mm, interval ≤ 1.0 mm and reading pixel ≤ 1.1 mm, and also in the axial plane in the T2 or T2* weighting, where the only different limit is the reading pixel of ≤ 1.0 mm. The values presented in the results for the sagittal planes were: thickness 3.0 mm and interval 0.3 mm for both weightings, whereas the reading pixel for T2 was 0.7 mm × 0.9 mm and


Table 1 MRI cervical spine

| Mandatory minimum sequences | Image contrast | Anatomical coverage | Spatial resolution |
|---|---|---|---|
| Sagittal T1 | Spinal cerebrospinal fluid (CSF) must be hypointense in relation to the spinal cord. There must be tissue contrast between the CSF and the spinal cord | Must cover at least the craniovertebral transition to D1. Laterally, it must include the intervertebral foramina | Thickness ≤ 4.0 mm; Gap ≤ 1.0 mm; Pixel (reading) ≤ 1.1 mm |
| Sagittal T2 | Spinal cord signal should be homogeneous. CSF must be hyperintense in relation to the spinal cord. There must be tissue contrast between the CSF and the spinal cord | Must cover at least the craniovertebral transition to D1. Laterally, it must include the intervertebral foramina | Thickness ≤ 4.0 mm; Gap ≤ 1.0 mm; Pixel (reading) ≤ 1.1 mm |
| Axial T2 and/or T2* | Spinal cord signal should be homogeneous. CSF must be hyperintense in relation to the spinal cord. There must be tissue contrast between the CSF and the spinal cord/neural roots | Can be contiguous or angled. Minimum coverage from C2–C3 to C7–D1 | Thickness ≤ 4.0 mm; Gap ≤ 1.0 mm; Pixel (reading) ≤ 1.1 mm |

Source: [8]

for T1 it was 0.9 mm × 1.1 mm. In the axial plane, in the T2 weighting, the thickness is 3.3 mm, the interval is equal to zero and the reading pixel is 0.6 mm × 0.6 mm. Table 2 shows a comparison between the summary protocol presented in the results and those suggested by the literature for the examination of the cervical spine. When compared to the literature, the sequences used in the clinic's cervical spine protocol are similar, since all references use at least two planes [SAG and AXIAL (AX)] and two weightings (T1 and T2). The variation is due to the size of the FOV, which is much larger than recommended, as well as its respective matrix, as is the case of the SAG T1 sequence, which has a 35.7 cm FOV and a 1024 × 1024 matrix, when the recommendation by Berquist [11] is a FOV of 24 cm and a matrix of 512. The size of the cut interval is also incompatible: the literature suggests using about 20–33% of the cut thickness, while in the clinic protocol this value is limited to 10%. Other authors, such as Nacif and Ferreira [13] and Westbrook [10], do not mention values for FOV size, matrix, cut thickness or interval between cuts.

Table 2 Comparative table between authors on minimum sequences and suggested resolution parameters for cervical spine examination

| Author | Plane | Weighting | FOV (cm) | Matrix | Cut thickness (mm) | Interval (mm) |
|---|---|---|---|---|---|---|
| Clinical reference | SAG | T1 | 35.7 | 1024 | 3 | 0.3 |
| | SAG | T2 | 35.7 | 800 | 3 | 0.3 |
| | AX | T2 | 12 | 640 | 3.3 | 0 |
| Berquist [11] | SAG | T2 | 24 | 512 | 3 | 33% |
| | SAG | T1 | 24 | 512 | 3 | 33% |
| | AX | T2* | 16 | 512 | 3 | 0 |
| | AX | T2 FS | 16 | 256 | 3 | 18% |
| Moeller [12] | SAG | T2 | 24–26 | – | 3–4 | 20% |
| | SAG | T1/PD | 24–26 | – | 3–4 | 20% |
| | AX | PD/T2* | 18–20 | – | 3–4 | 20% |
| | COR | T2 | – | – | 3–4 | 20% |
| Nacif and Ferreira [13] | SAG | T1 | – | – | – | – |
| | SAG | T2/T2* | – | – | – | – |
| | AX | T2* | – | – | – | – |
| | AX | T2 | – | – | – | – |
| Westbrook [10] | SAG | T1 | – | – | – | – |
| | SAG | T2 or T2* | – | – | – | – |
| | AX | T2 or T2* | – | – | – | – |

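The conformity comparison discussed in this section can be expressed as a small helper. The limit values are the guideline figures quoted in the text (including the ≤ 1.0 mm reading pixel for the axial plane); the dictionary keys and function name are illustrative:

```python
# PADI spatial-resolution limits for cervical spine sequences, taken from
# the guideline values quoted in the text: thickness <= 4.0 mm, gap <= 1.0 mm,
# reading pixel <= 1.1 mm (<= 1.0 mm for the axial T2/T2* plane).
PADI_LIMITS = {
    "sag_t1": {"thickness": 4.0, "gap": 1.0, "pixel_read": 1.1},
    "sag_t2": {"thickness": 4.0, "gap": 1.0, "pixel_read": 1.1},
    "ax_t2":  {"thickness": 4.0, "gap": 1.0, "pixel_read": 1.0},
}

def conforms(sequence, thickness, gap, pixel_read):
    """True when every measured value (mm) is within the PADI limit."""
    lim = PADI_LIMITS[sequence]
    return (thickness <= lim["thickness"]
            and gap <= lim["gap"]
            and pixel_read <= lim["pixel_read"])

# Clinic protocol values reported in the text:
print(conforms("sag_t2", thickness=3.0, gap=0.3, pixel_read=0.7))  # -> True
print(conforms("ax_t2", thickness=3.3, gap=0.0, pixel_read=0.6))   # -> True
```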

5 Conclusions

From the data analyzed it was possible to identify some local characteristics:

• Use of contrast agent in most tests performed at the institution;
• Predominant use of thin cuts, between 2 and 3 mm, with an interval between cuts of 10% of the cut thickness in most exams;
• Compensation of small voxels and poor signal quality through an increased phase pixel size;
• Preference for proton density weighting instead of T2 with fat suppression when a liquid-sensitive sequence is suggested;
• Absence of standardization between sequences in the same planes, with matrices varying depending on the weighting.

Finally, the protocols presented are very close to those stipulated by the accrediting entity with which the diagnostic imaging center is interested in certifying. It will be necessary to optimize the protocols with a focus on improving the SNR, leading to a chain of changes to reduce the examination time and obtain better tissue contrast, and to add the mandatory sequences to the routines suggested by PADI's guideline.

Acknowledgements Our thanks to the Center for Diagnostic Imaging in the city of Curitiba, Brazil, for the authorization, release for research and data for this study.

Interest Conflicts The authors declare that they have no conflicts of interest.


References

1. Mazzola A (2009) Ressonância magnética: princípios de formação de imagem e aplicações em imagem funcional. Rev Brasileira de Física Médica 3(1):117–129
2. Rockall AG, Hatrick A, Armstrong P, Wastie M (2013) Diagnostic imaging, 7th edn. Wiley-Blackwell, Oxford
3. DATASUS (2020) Cadastro nacional de estabelecimentos de saúde: relatório de equipamentos at https://cnes2.datasus.gov.br/Mod_Ind_Equipamento.asp?VEstado
4. IBGE (2006) Estatísticas da saúde: assistência médico-sanitária 2005. Instituto Brasileiro de Geografia e Estatística, Rio de Janeiro
5. Martins LO (2014) The segment of diagnostic medicine in Brazil. Rev Fac Ciênc Méd Sorocaba 16(3):139–145
6. ANS (2019) Mapa assistencial da saúde complementar 2018. Agência Nacional de Saúde Suplementar, Rio de Janeiro
7. Ichinose RM, Almeida RT (2001) Demystifying the certification and the accreditation of hospitals. In: Memórias II Congresso Latino Americano de Engenharia Biomédica, La Habana, p 268
8. PADI at https://padi.org.br/norma-e-diretrizes
9. Bright A (2011) Planning and positioning in MRI. Elsevier Australia, Chatswood
10. Westbrook C (2014) Handbook of MRI technique. Wiley, Oxford
11. Berquist T (2013) MRI of the musculoskeletal system. Lippincott Williams & Wilkins, Philadelphia
12. Moeller TB, Reif E (2000) Normal findings in CT and MRI. Georg Thieme Verlag, New York
13. Nacif MSN, Ferreira FGM (2011) Manual de Técnicas em Ressonância Magnética. Rubio, Rio de Janeiro

Information Theory Applied to Classifying Skin Lesions in Supporting the Medical Diagnosis of Melanomas L. G. de Q. Silveira-Júnior, B. Beserra and Y. K. R. de Freitas

Abstract

In this paper, we propose a classification technique for melanocytic lesions based on fundamentals of Information Theory. In particular, we evaluate the accuracy of the Correntropy Coefficient as a similarity measure in classifying melanomas and (common and atypical) nevi. These lesions were chosen because they are generally similar to each other, so it can be difficult to carry out a differential diagnosis of the melanocytic lesion type. The effectiveness of the proposed approach was verified through a case study using a public dermoscopic image dataset. The obtained results for performance evaluation and comparison show a very high accuracy for melanoma classification, which outperforms state-of-the-art approaches. Besides, considering the simplicity of the proposed technique and the results obtained, it is possible to use this approach in developing computational systems to support the medical diagnosis of melanomas.

Keywords

Information theory • Correntropy coefficient • Melanocytic lesions • Classification of melanoma and nevus • Computer aided medical diagnosis systems

L. G. de Q. Silveira-Júnior (B) Departamento de Engenharia de Comunicações, Universidade Federal do Rio Grande do Norte, Senador Salgado Filho, 3000, UFRN Campus, Natal, RN, Brazil e-mail: [email protected] B. Beserra Grupo de Pesquisa em Prototipagem Rápida de Soluções para Comunicação, GppCom/UFRN, Natal, RN, Brazil Y. K. R. de Freitas Clínica de Oncologia e Mastologia, Natal, RN, Brazil

1 Introduction

Information Theory was proposed by the mathematician and electrical engineer Claude E. Shannon in 1948, in the article entitled A Mathematical Theory of Communication, published in the Bell System Technical Journal [1]. At the time, this article had an enormous impact because it presented the mathematical foundations which govern the mechanisms of communication from a transmitter to a receiver. For this reason, Information Theory is known as the Mathematics of Telecommunications. Despite its massive use in the field of Telecommunications, its concepts and measures have been applied in the most varied contexts, in a search for solutions in a wide variety of domains, including Medicine [2]. Medical diagnosis is part of a broad category of problems in which the decision-making process must be carried out considering all the known evidence and involving different levels of uncertainty [3]. Evidently, the presence of uncertainties favors developing systems based on a non-deterministic approach, where the treatment of uncertainties involved in the decision-making process rests on well-consolidated mathematical foundations with decades of maturity, as is the case with Information Theory. This article explores classic concepts from Information Theory to propose a nevus and melanoma classification technique based on the correntropy coefficient [4]. These lesions were chosen because they are generally similar to each other in the visual inspection carried out by specialists in the field. The motivation for using correntropy is that it generalizes the concept of correlation between two random variables, being more suitable for dealing with non-Gaussian systems, as, for example, in recognizing human faces with Computer Vision systems [5,6].

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_258


Thus, the images in two melanocytic lesion image databases from the literature, Mednode and PH2 [7,8], were classified in order to assess the robustness of the classification strategy proposed in this work. The results obtained in the performance evaluations show an accuracy rate in classifying melanomas of approximately 96.7%, with an irreducible error level of only 2 false negatives. Moreover, the accuracy rate in classifying nevi is 100%, which outperforms state-of-the-art approaches. These results confirm the effectiveness of the proposed classification technique and, at the same time, demonstrate the enormous potential of applying Information Theory to various problems in Medicine, and in Dermatology in particular. This article is organized as follows: Sect. 2 summarizes the motivation for this work, and Sect. 3 presents the fundamentals of correntropy. The methodology used for developing the proposed classifiers is presented in Sect. 4, while the results and their discussion are presented in Sect. 5. Finally, Sect. 6 presents the conclusions of this work and the prospects for continuing the development of tools to assist medical diagnosis in future work.

2 Problem Identification

Primary cutaneous melanoma (PCM) is defined as any primary skin lesion with histopathological confirmation, without clinical or histological evidence of regional or distant metastatic disease. It constitutes about 5% of malignant skin tumors and has an increasing incidence and high lethality, being responsible for the vast majority of skin cancer deaths. Recent estimates in Brazil point to about 8450 new cases, of which about 4200 new cases are predicted in men and 4250 in women [9–11]. The main form of diagnostic suspicion for melanoma in the past was based on clinical examination. The potential severity of this diagnosis justified extreme measures such as the removal of several suspicious lesions, which later proved to be benign. Dermoscopy emerged in recent years and has become a valuable tool in evaluating skin lesions, melanocytic or not, providing an observation of structures and colors of the epidermis, dermoepidermal junction and superficial dermis which are not visible to the naked eye. These findings correlate with histological characteristics and are used to assess whether the lesion is malignant or benign and whether surgical removal is indicated. The diagnosis sensitivity (success rate of correctly diagnosed melanomas) of PCM to the naked eye is around 71%, while with the dermoscopy it is 90%. However, despite the good results obtained, it has already

been demonstrated that dermoscopy is difficult to learn and has a somewhat subjective character. Thus, new technologies such as infrared sensing and multispectral sensing have been proposed to overcome these difficulties, with the objective of serving as diagnostic aid tools for physicians. The recommendation of the Brazilian Society of Dermatology for diagnostic confirmation is an excisional biopsy with margins of 1–3 mm (complete removal of the lesion), which enables a better histological evaluation of the tumor, directly affecting the conduct and prognosis. Incisional biopsy (partial removal of the suspected lesion) is acceptable when the lesion is very extensive or there is low suspicion of melanoma [10–14]. As can be seen, due to the severity of this pathology, it is recommended to remove the suspected lesion completely, even without a conclusive diagnosis. All of these factors illustrate the complexity of the scenario and the need for computational tools capable of assisting in identifying melanomas. Diagnostic imaging is an important tool, as it favors early detection of diseases and is often used in medicine.

3 Correntropy

The correlation between two random variables is often used as a similarity measure in data classification in many problems of practical interest. However, the non-linearity observed in non-Gaussian stochastic processes compromises the performance of correlation-based classification [5]. The Rényi entropy (of which the Shannon entropy is a particular case [15]) and Mutual Information are two classic concepts of Information Theory which have been successfully used in evaluating non-Gaussian signals [15,16]. However, for higher-order statistical analysis and for situations where time matters in the random variable, a new measure called correntropy has been proposed [4]. Correntropy generalizes the concept of correlation between random signals and has been applied to classifying data from different domains [4,5].

Correntropy can be defined as a non-linear transformation which seeks the similarity between two random variables in a Hilbert space controlled by the size of the kernel. By adjusting the kernel size, it is possible to disregard very discrepant values in a data set, meaning that this parameter controls the data observation window when the similarity measure is evaluated. Thus, the kernel size is a free parameter and must be properly tuned for the input data. In a simple analogy, one can understand the kernel as a magnifying glass. When this measure is applied to a single random variable, we have the auto-correntropy; for a set of distinct random variables, the measure is called cross-correntropy. The cross-correntropy of two discrete random variables X and Y, V(X, Y), is defined as [4,5]:

V(X, Y) = \frac{1}{N} \sum_{i=1}^{N} G_\sigma(x_i - y_i),   (1)

where G_\sigma is the Gaussian kernel, defined by:

G_\sigma(x_i - y_i) = \frac{1}{\sqrt{2\pi}\,\sigma} \, e^{-\frac{(x_i - y_i)^2}{2\sigma^2}},   (2)

where \sigma represents the kernel size. As defined by Eq. (1), cross-correntropy does not guarantee zero mean, meaning that it is not a zero-centered function, due to the non-linearity introduced by the kernel. In [17], a correntropy estimator centered at zero was proposed, called the centralized cross-correntropy between X and Y, U(X, Y), defined by:

U(X, Y) = V(X, Y) - \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} G_\sigma(x_i - y_j),   (3)

where the last term on the right side of Eq. (3) is known as the cross information potential, denoted by IP(X, Y). The cross information potential represents the mean of the correntropy, being a generalization of the centralized covariance [18]. A new similarity measure was defined in [18], called the correntropy coefficient, which computes the cosine of the angle between the two transformed random vectors and is thus able to extract more information than the commonly used correlation coefficient. The correntropy coefficient, \eta(X, Y), is defined as:

\eta(X, Y) = \frac{U(X, Y)}{\sqrt{U(X, X)}\,\sqrt{U(Y, Y)}},   (4)

and is therefore dimensionless, since it is normalized by the auto-correntropies of X and Y, respectively. In this work, the correntropy coefficient is used in the automatic classification of nevi and melanomas by Artificial Vision systems.
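To make Eqs. (1)–(4) concrete, the sketch below implements them in pure Python (this is our illustration, not the authors' code; the function names are assumptions):

```python
import math

def gaussian_kernel(d, sigma):
    # Eq. (2): G_sigma(d) with d = x_i - y_i
    return math.exp(-d * d / (2.0 * sigma * sigma)) / (math.sqrt(2.0 * math.pi) * sigma)

def cross_correntropy(x, y, sigma):
    # Eq. (1): V(X, Y), the mean kernel value over paired samples
    return sum(gaussian_kernel(xi - yi, sigma) for xi, yi in zip(x, y)) / len(x)

def centered_correntropy(x, y, sigma):
    # Eq. (3): U(X, Y) = V(X, Y) minus the cross information potential IP(X, Y)
    n = len(x)
    ip = sum(gaussian_kernel(xi - yj, sigma) for xi in x for yj in y) / (n * n)
    return cross_correntropy(x, y, sigma) - ip

def correntropy_coefficient(x, y, sigma=1.0):
    # Eq. (4): eta(X, Y), normalized by the auto-correntropies
    uxy = centered_correntropy(x, y, sigma)
    uxx = centered_correntropy(x, x, sigma)
    uyy = centered_correntropy(y, y, sigma)
    return uxy / (math.sqrt(uxx) * math.sqrt(uyy))
```

In the classification setting of Sect. 4, each 80 × 80 grayscale image would be flattened into a 6400-element vector before being passed to correntropy_coefficient; the kernel size σ remains a free parameter to be tuned for the input data.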

4 Methodology

The methodology used in this work followed the flowchart shown in Fig. 1. The proposed application was developed in Python using native functions from the OpenCV library, and was executed in Colab, the application development environment provided by Google Inc.

Fig. 1 Flowchart of the adopted methodology: Formation of the Image Database (Mednode, PH2) → Digital Image Processing → Features Extraction → Classification → Results Evaluation

4.1 Formation of the Image Database

The first challenge of the work consisted in forming a database of images with high visual quality in terms of sharpness, contrast, absence of specularities and good resolution. With these criteria, 140 images were manually selected from the Mednode and PH2 image databases [7,8], 70 images for each of the melanocytic lesion classes considered in this work. All images have a resolution of 768 × 560 pixels (PH2) or 1022 × 767 pixels (Mednode).

4.2 Digital Image Processing

Each image in the database formed in the previous step was initially subjected to segmentation using Otsu's technique [19], in which the melanocytic lesion is extracted from the background (healthy epidermis). Figure 2 illustrates an example of the result obtained in the segmentation of a nevus image.
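Otsu's technique [19] selects the gray-level threshold that maximizes the between-class variance of the image histogram. The following pure-Python sketch is only illustrative (the work itself relies on an existing library implementation); it assumes the lesion is darker than the healthy skin:

```python
def otsu_threshold(gray):
    # gray: 2D list of ints in [0, 255]; returns the threshold that
    # maximizes the between-class variance (Otsu, 1979).
    hist = [0] * 256
    for row in gray:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    sum_all = sum(i * hist[i] for i in range(256))
    w_b = 0        # dark-class pixel count so far
    sum_b = 0.0    # cumulative intensity of the dark class
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b
        m_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def segment_lesion(gray):
    # Binary mask: 1 for lesion pixels (assumed darker than background)
    t = otsu_threshold(gray)
    return [[1 if v <= t else 0 for v in row] for row in gray]
```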


The images were subsequently resized to the same dimension, 80 × 80 pixels. This resolution was chosen based on an analysis of the computational cost demanded by the digital processing itself. Although color is considered an important descriptor for visual discrimination, the non-deterministic approach adopted in this work considers only intensity information, which is why the images of melanocytic lesions were subsequently converted to grayscale [0, 255].

4.3 Approach Proposed for Features Extraction

The extraction of features considered in this work consists of obtaining the correntropy coefficient measurement, η(X, Y ), which is calculated through Eq. (4), considering the random vectors X and Y . Some studies of the random variables involved were performed so that the images considered in this work could be treated as random vectors. This is necessary because a random vector can be defined as a set of independent random variables with the same probability distribution. Thus, by converting the two-dimensional signal (image) to a one-dimensional signal (vector), the corresponding random vectors were obtained for the images of the two classes of lesions, nevus (X ) and melanoma (Y ), respectively. Consequently, there are two possible scenarios in evaluating the similarity between the images of the two classes of melanocytic lesions: • The images are of lesions of the same class (greater similarity), and • The images are of lesions of different classes (less similarity).

Fig. 2 Segmentation steps in the melanocytic lesion image

As we seek to assess similarity, it is essential that the correntropy coefficient considers one of the vectors, X or Y, as corresponding to an image/sample representative of the reference pattern of the melanocytic lesion class considered. In other words, we need to choose the most representative images of melanomas and of nevi from the 70 images per class. Starting from a subjective criterion using visual inspection, the 10 most representative samples (images) of the melanoma lesion type and the 10 most representative samples of nevi were chosen. These two sets constitute the two standards for the classes of melanocytic lesions considered in this work, with each image of each set being considered an authentic reference of its respective class. Proceeding in this way, we have a similarity assessment based on the maximum likelihood criterion, since a comparison is undertaken between the test image and the representative sample of the class, considered the reference


for melanoma/nevus. It is for this reason that the obtained measurements are consistent for classifying pigmented skin lesions based on the calculation of the correntropy coefficient.

4.4 Proposed Classification

The classifier proposed in this work is simple and therefore very easy to implement in low-cost embedded devices, which is an advantage of the proposed technique, since not every classifier proposed in the literature has this benefit. It adopts a consensus decision criterion obtained after accounting for the results of the similarity assessments carried out between the same test image and each of the 10 existing references of each lesion pattern considered. In this case, the consensus stems from an analysis which seeks a majority similarity in the 10 evaluations made per class (one for each reference). In this work, two different ways are proposed to account for the results of the similarity assessments in order to determine the consensus, namely:

• Defining an average random variable, V, which assumes values through the average of the 10 similarity measures obtained for the test image in comparison with the ten reference images, and
• Defining a cumulative sum random variable, C, which assumes values through the sum of the 10 similarity measures obtained in the evaluation of the test image.

Based on these definitions, two consensus decision criteria were adopted in this work to classify an image as melanoma: decision by mean consensus, when V_nevus < V_melanoma, and decision by accumulated consensus, when C_nevus < C_melanoma. Otherwise, the test image is classified as nevus. Generally speaking, the random decision variable D is defined as:

D = \begin{cases} \text{Melanoma}, & \text{if } \frac{V_{nevus}}{V_{melanoma}} < 1 \text{ or } \frac{C_{nevus}}{C_{melanoma}} < 1,\\ \text{Nevus}, & \text{otherwise.} \end{cases}   (5)

Therefore, our decision threshold is equal to 1. Obviously, a false negative occurs when the image of a melanoma is mistakenly classified as a nevus. Thus, this article proposes increasing the number of representative pattern samples of each class to try to minimize this problem.
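The decision rule of Eq. (5) reduces to a few lines of code. In this sketch (ours; the names are illustrative), each argument holds the 10 correntropy coefficients between the test image and the references of one class:

```python
def classify(scores_nevus, scores_melanoma):
    # scores_*: similarity measures (correntropy coefficients) between
    # the test image and the 10 reference images of each class.
    v_nevus = sum(scores_nevus) / len(scores_nevus)          # mean consensus V
    v_melanoma = sum(scores_melanoma) / len(scores_melanoma)
    c_nevus = sum(scores_nevus)                              # accumulated consensus C
    c_melanoma = sum(scores_melanoma)
    # Eq. (5): decision threshold equal to 1 on either ratio
    if v_nevus / v_melanoma < 1 or c_nevus / c_melanoma < 1:
        return "melanoma"
    return "nevus"
```

Note that with the same number of references per class, the mean and accumulated criteria necessarily agree, which is consistent with the single confusion matrix reported in Sect. 5.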

Table 1 Confusion matrix

Lesion        Classified as Melanoma    Classified as Nevus
Melanoma      58                        2
Nevus         0                         60

Table 2 Performance comparison

Approach             Accuracy (%)
Our                  96.7
Barros et al. [20]   90

5 Results and Discussion

Computational evaluations were carried out considering the proposed methodology, the two adopted criteria and the 120 non-reference images. The results obtained are shown in Table 1, which represents the confusion matrix. As can be seen, the choice between the two consensus decision criteria does not influence the performance of the proposed classification technique, which is why there is a single confusion matrix. The results of the performance evaluations show an accuracy rate of 96.7%, with an irreducible error level of only 2 false negatives in the classification of melanomas. The accuracy rate in the nevi classification is 100%. Performance comparisons were made between the technique proposed in this article and the alternative presented in [20].² Table 2 shows the results thus obtained. From these comparisons, it is possible to verify that the classification technique proposed in this article has greater accuracy than [20] and is simpler than deep learning (no training step is needed), which facilitates low-cost implementation in embedded systems. Therefore, considering the simplicity of the proposed technique and the results obtained, it is possible to use this approach in developing computational systems to support the medical diagnosis of melanomas.
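The reported per-class accuracies follow directly from Table 1 and correspond to the usual sensitivity and specificity:

```python
tp, fn = 58, 2   # melanomas classified as melanoma / as nevus (Table 1)
fp, tn = 0, 60   # nevi classified as melanoma / as nevus
sensitivity = tp / (tp + fn)               # melanoma accuracy: 58/60, about 96.7%
specificity = tn / (tn + fp)               # nevus accuracy: 60/60 = 100%
overall = (tp + tn) / (tp + fn + fp + tn)  # over the whole 120-image test set
```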

6 Conclusions and Future Works

This article proposed a technique for classifying images of melanocytic skin lesions based on Information Theory concepts and measures. In particular, the discriminating capacity of the correntropy coefficient in classifying nevi and melanomas was evaluated. The main advantage of the proposed classification technique is its effectiveness. It was possible to classify all nevi images successfully and most of the melanoma images in the performance evaluations, with only two melanoma images erroneously classified as nevi. Another advantage of the proposed classifier is its ease of implementation, which enhances the development of applications in low-cost embedded systems. Additionally, the results obtained in the performance comparisons demonstrate that the approach exceeds the accuracy of state-of-the-art approaches in the literature, which demonstrates the enormous potential of applying Information Theory in Dermatology. In the future, the authors intend to incorporate other types of pigmented skin lesions in order to extend the application to a larger class of pathologies, thus developing a medical diagnosis assistance system for a wide variety of skin diseases.

² In [20], model validation is performed using 30 images, divided into 22 non-melanomas and 8 melanomas.

Acknowledgements The authors would like to thank the financial support of PROPESQ/UFRN in developing the scientific initiation work carried out by one of the authors, whose results served as the basis for this work. The authors also acknowledge the support of the Group for Researching and Fast Prototyping Solutions for Communication (GPPCOM) of the Federal University of Rio Grande do Norte.

Compliance with Ethical Standards

Conflict of Interests The authors declare that they have no conflicts of interest.

Ethical Approval This article does not contain any study with human or animal participants done by any of the authors.

Informed Consent Formal consent is not necessary for this type of study.

References

1. Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27:379–423
2. Amelio L, Janković R, Amelio A (2018) A new dissimilarity measure for clustering with application to dermoscopic images. In: 2018 9th international conference on information, intelligence, systems and applications (IISA), pp 1–8
3. Bhise V, Rajan SS, Sittig DF, Morgan RO, Chaudhary P, Singh H (2018) Defining and measuring diagnostic uncertainty in medicine: a systematic review. J Gen Intern Med 33:103–115
4. Liu W, Pokharel PP, Principe JC (2006) Correntropy: a localized similarity measure. Vancouver, BC, Canada
5. Liu W, Pokharel PP, Principe JC (2007) Correntropy: properties and applications in non-Gaussian signal processing. IEEE Trans Signal Process 55:5286–5298
6. Jeong K-H, Liu W, Han S, Hasanbelliu E, Principe JC (2009) The correntropy MACE filter. Pattern Recognit 42:871–885
7. Giotis I, Molders N, Land S, Biehl M, Jonkman MF, Petkov N (2015) MED-NODE: a computer-assisted melanoma diagnosis system using non-dermoscopic images. Expert Syst Appl 42:6578–6585
8. Mendonça T, Ferreira PM, Marques JS, Marcal ARS, Rozeira J (2013) PH2—a dermoscopic image database for research and benchmarking. Osaka, Japan
9. Araujo IC, Coelho CMS, Saliba GAM et al (2014) Melanoma cutâneo: aspectos clínicos, epidemiológicos e anatomopatológicos de um centro de formação em Belo Horizonte. Rev Bras Cir Plást 29:497–503
10. Tovo LFR, Belfort FA, Junior JAS (2005) Melanoma cutâneo primário. Rev Assoc Méd Bras (Série: Diretrizes em foco—Medicina Baseada em Evidências) 51:7–8
11. Wainstein AJA, Belfort FA (2004) Conduta para o melanoma cutâneo. Rev Col Bras Cir 31:204–214
12. Castro LGM, Messina MC, Loureiro W et al (2015) Diretrizes da sociedade brasileira de dermatologia para diagnóstico, tratamento e acompanhamento do melanoma cutâneo primário—Parte I. An Bras Dermatol 90:851–861
13. Zager JS, Hochwald SN, Marzban SS et al (2011) Shave biopsy is a safe and accurate method for the initial evaluation of melanoma. J Am Coll Surg 212:454–462
14. Câncer de pele melanoma. https://www.inca.gov.br/tipos-de-cancer/cancer-de-pele-melanoma. Accessed 29 May 2020
15. Silveira Junior LGQ, Gurjão EC, Pinto EL, Assis FM, Medeiros JLA (2003) Critérios para retirada de arcos em Redes Bayesianas 1:668–673
16. Silveira Junior LGQ, Silveira LFQ, Gurjão EC, Pinto EL, Assis FM (2004) Ordenação de variáveis em Redes Bayesianas com medidas de informação mútua de Shannon e Rényi
17. Li R, Liu W, Príncipe JC (2007) A unifying criterion for instantaneous blind source separation based on correntropy. Signal Process 89:1872–1881
18. Xu J-W, Bakardjian H, Cichocki A, Principe JC (2008) A new nonlinear similarity measure for multichannel signals. Neural Netw 21:222–231
19. Otsu N (1979) A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 9:62–66
20. Barros WKP, Morais DS, Lopes FF, Torquato MF, Barbosa RM, Fernandes MAC (2020) Proposal of the CAD system for melanoma detection using reconfigurable computing. Sensors 20:3168

Alpha Development of Software for 3D Segmentation and Reconstruction of Medical Images for Use in Pre-treatment Simulations for Electrochemotherapy: Implementation and Case Study

J. F. Rodrigues, Daniella L. L. S. Andrade, R. Guedert, and D. O. H. Suzuki

Abstract

Cancer is the second most common cause of death in the world. Several treatments are used against this disease. Chemotherapy is a drug treatment against cancer, but it is a slow process with several side effects. Electroporation (EP) is a technique that increases cell permeability by applying high-intensity electrical pulses for short periods, and can be used to speed up the chemotherapeutic process, resulting in Electrochemotherapy (ECT). ECT is a well-known technique for the treatment of superficial tumors and is being developed for deep-seated tumors. Pre-treatment simulations are reasonable solutions to predict whether the electric field intensity will be high enough throughout the tumor mass. This article presents the development of software to segment and export biological 3D structures from a medical image to an electric field simulator. A realistic lung tumor was studied in silico using ECT with electrodes proposed by the ESOPE protocol. The results demonstrate that multiple applications of the electrodes eliminate the tumor. Evaluation of the required electric current and electric field distribution allows the development of new electrodes and equipment for real applications of ECT treatment.

Keywords

Computed tomography • DICOM segmentation • 3D reconstruction • Medical image

J. F. Rodrigues · D. L. L. S. Andrade · R. Guedert (✉) · D. O. H. Suzuki
Institute of Biomedical Engineering (IEB-UFSC), Federal University of Santa Catarina, R. Eng. Agronomico Andrei Cristian Ferreira, S/N, Florianopolis, Brazil

1 Introduction

The World Health Organization (WHO) estimates that cancer has been the second leading cause of death in recent years [1], surpassed only by deaths caused by cardiovascular diseases. The effectiveness of cancer treatment is directly linked to correct diagnosis, since different types of cancer call for different treatments, such as chemotherapy, radiotherapy or surgery, to cure the disease or to prolong the patient's life. Joining different areas of knowledge allows for improving results in cancer treatment. Diagnostics based on three-dimensional (3D) images and other techniques allow the early detection of anomalies which would cause future diseases [2]. Electrochemotherapy (ECT) is a technique that applies short electrical pulses to open pores in the cell membrane (electroporation), combined with anti-cancer drugs such as bleomycin or cisplatin to increase their cytotoxicity [3, 4]. Ensuring that the entire tumor region is electroporated is one of the main challenges of ECT. Simplified computer simulations and computational models are useful to observe the electric field distribution in superficial tissues such as the dermis and epidermis [5, 6]. However, such simulations may not show the real effect of treating deep-seated tumors due to the structural complexity of tissues and internal organs. In this sense, software and computational tools allow the transformation of tomographic images into simulation files, achieving pre-treatment results closer to the real environment and increasing the chance of success during treatment [7, 8]. This article presents the development of software capable of generating a 3D simulation file for the COMSOL Multiphysics® software (COMSOL AB, Stockholm, Sweden) from computed tomography (CT) images. In addition, we used the software to segment a dog's lung tumor and analyze the possibility of treatment through simulations of the electric field distribution by the finite element method.


2 Software Development

This section is split into two subsections that describe the user interaction environment (frontend) and the image processing functions themselves (backend). The main objective of the software was the development of an easily scalable platform, with the possibility of adding new tools as well as high processing capacity through transport to a cloud computing environment. In the alpha version described in this work, both frontend and backend run in the local environment (the user's computer); the transfer to cloud computing is scheduled for the next versions of the software. The communication protocol used to transfer data between frontend and backend is the Hypertext Transfer Protocol (HTTP). HTTP supports use on the internet (cloud computing) and at the same time can be used in a local environment with low latency and good transfer speed [9].

2.1 Backend Development

The backend was developed in Python, an interpreted programming language often used in scientific data processing. We used the ITK (Insight Toolkit) library to perform all medical image processing. Medical images are standardized by the Digital Imaging and Communications in Medicine (DICOM) format [10]. DICOM establishes the pattern of medical image metadata as well as how the image data will be available (data type, structure, et cetera). Metadata brings technical and personal information about the study. The first step of image processing consists of identifying, organizing, and compressing the study images. CT scans have hundreds of slices, each provided as a DICOM file. The DICOM Study Identifier algorithm is shown in Fig. 1a; it iterates over all files inside the supplied folder and checks whether each file is in DICOM format; if so, it analyses the metadata and stores all found content in a list of studies. During this process, slices from the same CT scan are also sorted by their slice number (provided in the metadata) and grouped in the same list element. The resulting list of studies is displayed to the user to confirm the desired study. After user confirmation, the processing phase starts with the compression of the study into a format that can be displayed in the frontend as well as serve as a source for future processing. Before generating the compressed file, all study slices are filtered for noise removal. In this project, as the main goal is to perform tissue segmentation, we chose the anisotropic diffusion filter because it preserves boundary regions (between tissues) [11]. Figure 2 shows a medical image before and

after the filtering process; the highlighted section presents in detail the preserved boundary regions. After noise removal, all slice files are compressed into a single one and stored in a temporary file. The whole process of filtering and compression is summarized in Fig. 1b. The next processing steps consist of segmentation filters. The alpha version has three implemented segmentation filters: manual, traditional threshold, and points of interest. The manual filter generates the segmentation directly from the points provided by the user as a binary mask, assigning "1" to selected pixels and "0" to all others. The traditional threshold filter uses the pixel value to perform the segmentation: the user provides both lower and upper thresholds and the filter selects only the pixels inside the provided range. The traditional threshold filter is often used in medical image processing, as different tissues or organs have different grayscale tones in the imaging exam. Filtering by points of interest is an advancement of traditional threshold filtering. Instead of applying threshold filtering to the whole image, points of interest (or seeding points) are provided as input. The filter analyses whether the neighbour pixels of the provided ones are inside the thresholds and, if so, selects them. The selected pixels then act as new points of interest, and the filter iterates until no new pixels are selected. In the end, the main difference between the traditional threshold filter and filtering by points of interest is that, in the latter, the filter avoids selecting two different organs with the same (or similar) gray tone in the exam image [12]. The segmentation process can introduce some artifacts or holes during its execution. Artifacts are small segments of images that were selected during segmentation with no connection to a large segment.
Although holes could occur by anatomic properties, small ones can also be considered as a segmentation noise. The post-processing filters manage to remove these segmentation noises. The removal of artifacts occurs by analyzing if its size is smaller than a defined threshold. In the same way, the undesired holes are filled if its size is smaller than a defined threshold [12]. The last step of the segmentation process was implemented to reduce the complexity of the simulation, focusing on the target volume. The user can cut the 3D model around the tumoral nodule by providing the axis limits manually. The cut filter will remove any segment outside the provided limits. The diagram presented in Fig. 3a shows all the main steps of the whole segmentation process. At the end of all segmentation process, it is necessary to transform these segmentations results in 3D objects and export them to COMSOL. The Export Algorithm summarized in Fig. 3b is in charge to create the 3D models and configure them into COMSOL. Each segmentation is converted in the stereolithography format (STL) and saved in a

Alpha Development of Software for 3D Segmentation …

1775

Fig. 1 a Block diagram of the DICOM study identifier algorithm. b Block diagram of the study preprocessing and compression algorithm

Fig. 2 Dog’s head CT slice a before and b after filtering with the anisotropic diffusion filter. Highlighted squares show in detail how boundary regions are preserved

temporary folder. The connection between COMSOL and Python is performed through the COMSOL Java API server. Once the connection is established, all data (3D objects and tissue properties) are transferred to the COMSOL environment. Our software also configures the parameters necessary to perform the simulations inside COMSOL.
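The conversion of each segmentation to STL can be illustrated with a minimal ASCII STL writer. The vertex and face arrays would come from a surface-extraction step over the binary mask (e.g. marching cubes); the function name and structure below are illustrative assumptions, not the software's actual exporter:

```python
import numpy as np


def write_ascii_stl(path, verts, faces, name="segmentation"):
    """Write a triangulated surface as an ASCII STL file.

    verts: (N, 3) float array of vertex coordinates.
    faces: iterable of (i, j, k) vertex index triples.
    Illustrative sketch; the real exporter also pushes tissue
    properties through the COMSOL Java API server."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in faces:
            v0, v1, v2 = (verts[i] for i in tri)
            n = np.cross(v1 - v0, v2 - v0)       # facet normal
            norm = np.linalg.norm(n)
            n = n / norm if norm > 0 else n
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n")
            f.write("    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")
```

ASCII STL is verbose but trivially portable, which makes it a convenient interchange format between a Python backend and a simulator such as COMSOL.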

2.2 Frontend Development

The frontend was developed in JavaScript (JS) with React (Facebook Inc., Menlo Park, USA) and the Visualization Toolkit (VTK). JS is a programming language usually interpreted by a web browser and present in most websites worldwide; in this project, JS was chosen to ease the migration to a cloud computing environment in future software versions. Both React and VTK are open-source libraries, the former used to speed up frontend application development and the latter to render scientific data. User interaction starts on the first application page, where the user must choose the folder that contains all the DICOM files of the study. The backend processes this folder and returns a list of all DICOM studies inside it. This list is then displayed to the user in a table with the main information (patient name, timestamps, and clinic name) of each study, so the user can confirm the choice. The chosen study is loaded and rendered, as shown in Fig. 4a. The studies are displayed with VTK: a side menu and four study views, three two-dimensional (2D) and one 3D. The 2D views represent the coronal, sagittal, and transversal planes, making it possible to analyse the details of each study. The 3D view represents the


J. F. Rodrigues et al.

Fig. 3 a Block diagram of the segmentation algorithm. b Block diagram of the data export algorithm

Fig. 4 a Study views (coronal, sagittal, transversal, and 3D). b Study views during segmentation mode. Selected pixels are colored green, and the 3D view is changed to an object preview. Image text is in Brazilian Portuguese (the software's original language)


three segmentations in 2D with their positions in space. In segmentation mode, the 3D view changes to a preview of the segmented object. Tissue electrical parameters must be provided to configure the simulator. A boot file contains the properties of some primary tissues with known electrical parameters [6]; Table 1 shows these tissues and their properties. The user is free to add other necessary tissues, and all data are automatically exported to COMSOL in the export step. The Create Segmentation option signals the start of the targeting process in the segmentation section. On entering segmentation mode, a right-side menu is displayed with the segmentation options. The segmentation process enables commands and changes the 2D and 3D views. In the 2D views, green indicates the pixels that are part of the segmentation; the 3D view shows a preview of the 3D reconstruction that will be generated. The point selection command is also enabled, allowing the user to select points of interest in any 2D view. These points are used in the segmentation by points of interest; if no points are selected, segmentation occurs exclusively by traditional threshold filtering. The last step is to select the upper and lower thresholds for the cut function. Figure 4b shows the software screen after segmenting a dog’s torso bones. The user can create as many segments as necessary; in the end, the Export option transfers all data to the COMSOL environment.
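Behind the folder-selection step described in Sect. 2.2, the backend essentially groups slice metadata by DICOM StudyInstanceUID before presenting the study list for confirmation. A sketch, assuming pydicom-style attribute names carried in plain dicts (the helper name and field choices are illustrative, not the application's API):

```python
from collections import defaultdict


def group_studies(slices):
    """Group DICOM slice metadata into studies, keyed by StudyInstanceUID.

    `slices` is a list of dicts such as one could build from pydicom
    datasets; field names mirror standard DICOM attributes. Returns one
    summary row per study, as shown to the user for confirmation."""
    studies = defaultdict(list)
    for s in slices:
        studies[s["StudyInstanceUID"]].append(s)
    return [
        {
            "uid": uid,
            "patient": group[0].get("PatientName", ""),
            "timestamp": group[0].get("StudyDate", ""),
            "clinic": group[0].get("InstitutionName", ""),
            "n_slices": len(group),
        }
        for uid, group in studies.items()
    ]
```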

3 Case Study

The purpose of the case study was to evaluate the effectiveness of ECT treatment, with commercial or customized electrodes, of a tumor mass about 3 cm in diameter in the right lung of a 9-year-old Golden Retriever dog, as shown in Fig. 5.

3.1 Numerical Simulations

The COMSOL Multiphysics® software was used to simulate the electric field distribution. The stationary current module was used for the resolution. The simulations solve the charge conservation equation, shown in Eq. 1.

Table 1 Tissue templates added automatically at application startup

∇ · (σ∇V) = 0  (1)

where σ represents the electrical conductivity of the material in [S/m] and V the electric potential in [V]. The applied voltage (model input) is modeled by a Dirichlet boundary condition on the contact surface between the electrode and the tissue. To mathematically separate the conductive segment from its surroundings, a Neumann boundary condition is applied. The conductivity of biological tissues varies under the effect of electroporation [13]. The tumor electrical properties are shown in Table 1. The mathematical dependence proposed by Miklavčič et al. [14] is shown in Eq. 2:

σ(E) = σ0 + (σmax − σ0) / (1 + D·e^(−(E − A)/B)),  A = (Eire + Ere)/2,  B = (Eire − Ere)/C  (2)

where σ(E) is the tissue electric conductivity as a function of the electric field, σ0 and σmax are the initial and maximum values of the tissue conductivity, respectively, and Ere and Eire are the electric field thresholds of reversible and irreversible EP, respectively. C = 8 and D = 10 are adjustment constants of the proposed sigmoidal function.
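Equation 2 translates directly into a small function (a plain transcription of the sigmoid model, with units as in Table 1; not the authors' code):

```python
import math


def sigma(E, sigma0, sigma_max, E_re, E_ire, C=8.0, D=10.0):
    """Field-dependent tissue conductivity, Eq. 2 (sigmoid model [14]).

    E, E_re, E_ire in the same field units (e.g. kV/m);
    sigma0, sigma_max in S/m; C and D are the fitted constants."""
    A = (E_ire + E_re) / 2.0          # sigmoid center
    B = (E_ire - E_re) / C            # sigmoid width
    return sigma0 + (sigma_max - sigma0) / (1.0 + D * math.exp(-(E - A) / B))
```

For the muscle row of Table 1, σ(E) stays near 0.135 S/m well below the reversible threshold and saturates toward 0.340 S/m beyond the irreversible one.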

4 Results

The lung was segmented using ten points selected from the study images, with a lower threshold of −900 and an upper threshold of −150. We cut the final model to a region of interest between points 200 and 395 on the I axis and between 200 and 350 on the K axis. The tumor mass was segmented with manual and threshold segmentation (−149 lower and 200 upper). We performed the manual segmentation due to the similar grayscale of the tumor mass and the rib bones: the tumor mass was leaning against the rib bone during the examination, so segmentation by points of interest inevitably selected both tumor and bones.
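The threshold-plus-cut operation described above can be sketched with NumPy, using the case study's values; the (I, J, K) axis convention and the function name are illustrative assumptions:

```python
import numpy as np


def threshold_and_crop(volume, lower, upper, i_range, k_range):
    """Select voxels inside [lower, upper] and keep only those inside the
    region of interest, as done for the lung in the case study.
    Illustrative sketch; axes are assumed ordered (I, J, K)."""
    mask = (volume >= lower) & (volume <= upper)
    roi = np.zeros_like(mask)
    roi[i_range[0]:i_range[1], :, k_range[0]:k_range[1]] = mask[
        i_range[0]:i_range[1], :, k_range[0]:k_range[1]]
    return roi


# lung: thresholds -900 / -150, ROI I in [200, 395], K in [200, 350]
# lung_mask = threshold_and_crop(ct, -900, -150, (200, 395), (200, 350))
```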

Tissue | σ0 (S/m) | σmax (S/m) | Erev (kV/m) | Eire (kV/m)
EC + Epidermis | 0.008 | 0.008 | 40 | 120
Dermis | 0.250 | 1.000 | 30 | 120
Muscle | 0.135 | 0.340 | 20 | 80
Standard tumor | 0.300 | 0.750 | 20 | 80

where σ0 is the initial conductivity, σmax is the maximum conductivity of the permeabilized tissue, Erev is the reversible electric field (RE) threshold, and Eire is the irreversible electric field (IRE) threshold [6]


Fig. 5 Computed tomography exam indicating the presence of a tumor mass in the right lung. Provided by Cães e Gatos Veterinary Clinic

Figure 6 shows the model inside COMSOL after the segmentation process. Figure 7 shows the electric field distribution for the ESOPE needle electrodes (types II and III), with applied voltages of 400 V (type II) and 730 V (type III), resulting in electric currents of 14.6 A and 14.3 A, respectively.

Fig. 6 3D model reconstructed from the segmentation in the software after importing into the COMSOL Multiphysics simulator

5 Discussion

It is hard to discuss the software's capability and accuracy in segmenting tissues because, as an alpha version, no validation has been performed so far. However, comparing Figs. 6 and 7 shows that our software can recreate the CT scan as 3D objects. We plan validations of both the image processing (backend) and the usability (frontend) in the next development steps. During validation, it will be necessary to evaluate studies from different equipment manufacturers, as well as the many available resolutions (including different pet breeds), to cover all variants. The case study showed the importance of segmenting real examination images. The peculiarities created by the tumor growth process can be a considerable challenge for the application of ECT, and preclinical studies are the main ally in this process. By reconstructing the study from the CT scan, the peculiarities of the patient's case can be analyzed better. The main challenge in the software development was the low quality of the veterinary medical images (low resolution and blur), often due to studies performed on old equipment or


Fig. 7 Electric field distribution to assess EP in a tumoral nodule in a dog’s lung using ESOPE a type II and b type III needle electrodes in their standard configuration

misconfiguration during acquisition. Other studies have reported the same experience with computed tomography image quality [15]. The number of slices in a CT scan also affects the software performance, considerably slowing the processing steps when studies have more than 1000 slices. We hope to solve both problems with additional filters that improve image quality and by optimizing all developed functions when migrating to the cloud environment. The results for ESOPE's standard-size electrodes (types II and III) demonstrated the possibility of carrying out ECT treatment in the case study, as shown in Figs. 6 and 7. Nonetheless, the electric currents are at the limits of most ECT pulse delivery equipment.

6 Conclusions

The alpha version of the proposed software can recreate CT scans as 3D models and export them to COMSOL. Validations are necessary to evaluate both the image processing and the usability. The case study shows the importance of the segmentation process in better analyzing the patient's case. Pre-treatment studies might be a tool to find the peculiarities created by the tumor growth process before the application of ECT. The case study demonstrated that the ESOPE electrodes could be sufficient for the treatment, but they are close to their limits for applications in internal tumors. Studies on new electrode configurations are recommended, as well as the production of new equipment capable of supporting the requirements.

Conflict of Interest The authors declare that they have no conflict of interest.

Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES). The authors would like to thank the Brazilian research funding agencies CAPES and CNPq for the scholarships granted to the postgraduate students.


References

1. Romeo S, Frandsen SK, Gehl J, Zeni O (2019) Calcium electroporation: an overview of an innovative cancer treatment approach. In: 2019 photonics & electromagnetics research symposium—Spring (PIERS-Spring). IEEE, pp 2979–2984
2. Liu S, Tang K, Feng X et al (2019) Toward wearable healthcare: a miniaturized 3D imager with coherent frequency-domain photoacoustics. IEEE Trans Biomed Circuits Syst 13:1417–1424. https://doi.org/10.1109/TBCAS.2019.2940243
3. Calvet CY, Famin D, André FM, Mir LM (2014) Electrochemotherapy with bleomycin induces hallmarks of immunogenic cell death in murine colon cancer cells. Oncoimmunology 3:e28131. https://doi.org/10.4161/onci.28131
4. Campana LG, Edhemovic I, Soden D et al (2019) Electrochemotherapy—emerging applications, technical advances, new indications, combined approaches, and multi-institutional collaboration. Eur J Surg Oncol 45:92–102. https://doi.org/10.1016/j.ejso.2018.11.023
5. Pavliha D, Kos B, Županič A et al (2012) Patient-specific treatment planning of electrochemotherapy: procedure design and possible pitfalls. Bioelectrochemistry 87:265–273. https://doi.org/10.1016/j.bioelechem.2012.01.007
6. Suzuki DOH, Anselmo J, de Oliveira KD et al (2015) Numerical model of dog mast cell tumor treated by electrochemotherapy. Artif Organs 39:192–197. https://doi.org/10.1111/aor.12333
7. Esmaeili N, Friebe M (2019) Electrochemotherapy: a review of current status, alternative IGP approaches, and future perspectives. J Healthc Eng 2019:1–11. https://doi.org/10.1155/2019/2784516
8. Cemazar M, Sersa G (2019) Recent advances in electrochemotherapy. Bioelectricity 1:204–213. https://doi.org/10.1089/bioe.2019.0028
9. Naik N (2017) Choice of effective messaging protocols for IoT systems: MQTT, CoAP, AMQP and HTTP. In: 2017 IEEE international systems engineering symposium (ISSE). IEEE, pp 1–7
10. Pianykh OS (2012) Digital imaging and communications in medicine (DICOM). Springer, Berlin
11. Weickert J (1997) A review of nonlinear diffusion filtering, pp 1–28
12. Johnson HJ, McCormick MM, Ibanez L (2020) The ITK software guide, 3rd edn
13. Corovic S, Lackovic I, Sustaric P et al (2013) Modeling of electric field distribution in tissues during electroporation. Biomed Eng Online 12:16. https://doi.org/10.1186/1475-925X-12-16
14. Sel D, Cukjati D, Batiuskaite D et al (2005) Sequential finite element model of tissue electropermeabilization. IEEE Trans Biomed Eng 52:816–827. https://doi.org/10.1109/TBME.2005.845212
15. D’Arnese E, Del Sozzo E, Chiti A et al (2018) Automating lung cancer identification in PET/CT imaging. In: 2018 IEEE 4th international forum on research and technology for society and industry (RTSI). IEEE, pp 1–6

A Systematic Review on Image Registration in Interventionist Procedures: Ultrasound and Magnetic Resonance G. F. Carniel, A. C. D. Rodas and A. G. Santiago

Abstract

Medical images are a powerful tool to help physicians in diagnosis and surgical planning. Ultrasound (US) and Magnetic Resonance Imaging (MRI) are the techniques most used in the literature for this purpose. Each has strengths and weaknesses related to image acquisition quality; desirable characteristics include high-quality definition of position and fast image acquisition, whether from a single technique or a combination of them. With this concern in mind, this work aims to answer a specific question: “Which are the characteristics of ultrasound and magnetic resonance image registration methods for interventional procedures?”. We used the PICOS (Problem, Intervention, Comparison, Outcome, Study) tool to define the search. The inclusion and exclusion criteria were the publication year range (2013 to 2019) and whether the study applied both US and MRI for image registration. As a result, 22 out of 96 articles matched the criteria. It was observed that Urology, Neurology, Hepatology, Cardiology and Gynecology are the medical fields in which the US–MRI combination is most used. The authors preferred non-rigid transformations for registering US and MRI images, which can be justified by the need to account for differences in the shape of structures between the US and MRI images. Accuracy was the metric most used in the evaluation of image registration methods, and the metrics used to assess accuracy varied according to the study field. Keywords

Ultrasound • Magnetic resonance • Image registration • Bibliography review • Up-date • Intervention procedures

G. F. Carniel · A. C. D. Rodas · A. G. Santiago (B) Center for Modeling, Engineering and Social Sciences, Federal University of ABC, São Bernardo do Campo, Brazil e-mail: [email protected]

1 Introduction

More and more technologies are demanded every day to improve and aid physicians in diagnosis and medical procedures. Some of them identify anatomic structures, such as Magnetic Resonance (MR), Ultrasound (US) and Computed Tomography (CT), while others present physiological details of the structures, such as Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT) [1]. The choice of modality is strongly related to characteristics such as the region and organ to be imaged and the particular structures of interest. US has many advantages, such as non-ionizing radiation, low cost, portable devices and short acquisition time, although its disadvantages include a limited field of view and images that are, in many situations, difficult to analyze. MR images, on the other hand, present better soft-tissue contrast, favoring the recognition of anatomic structures, and higher spatial resolution compared to other modalities; as drawbacks, MR is expensive and does not allow real-time acquisition [1]. Image registration has emerged as an attempt to improve quality of service, reducing procedure time and improving diagnosis, treatment and procedure planning [1,2]. This paper presents a systematic literature review of multimodal image registration methods, in particular ultrasound–magnetic resonance, applied to interventionist procedures.

2 Methodology

2.1 Target Question Design

The PICOS acronym [3] was used to design the target question. Table 1 presents its application and, by doing so, the following question was proposed: Which are the characteristics of ultrasound and magnetic resonance image registration methods for interventional procedures?

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_260


G. F. Carniel et al.

Table 1 Description of the PICOS acronym

Description | Abbreviation | Question components
Problem | P | Medical image registration
Intervention | I | Interventionist procedures guided by image registration between US and MRI
Comparison | C | –
Outcome | O | –
Study type | S | Experimental studies

The “Comparison” and “Outcome” components were not considered, since evaluating the results of each method is not in the scope of the present paper.

2.2 Search Strategy

The PubMed, Science Direct, Google Scholar and IEEE databases were manually consulted with the following keywords: “Image Registration”, “Ultrasound”, “Magnetic Resonance”, “Operative”, “Interventional”, “Similarity Measure” and “Medical Image Registration”. Table 2 presents the search strategies used. The keyword combinations and databases were chosen at random, and the stopping criterion was invariability of the results regardless of the keywords used; this is also why some databases were explored more than others. The Google Scholar database was ultimately not used, since its results were already present in the other databases.

2.3 Exclusion and Inclusion Criteria

To define which papers should be included in this review, the publication year, study field and image modality were chosen as the main parameters: only papers published between 2013 and 2019, dealing with medical image registration and with multi-modal registration between MRI and US, were considered. Given the preliminary results, each study was manually evaluated according to its title and abstract. Besides the criteria above, only papers that propose the development of an image registration method were considered.

3 Results

The total number of selected papers that matched the criteria is presented in the column “Selected” of Table 2. After the selection of the papers, it was observed that some of the

Table 2 Search strategies (SS), databases, keywords and results for the proposed study

SS | Database | Keywords | Found | Selected
1 | PubMed | ((Medical Image Registration) AND Ultrasound) AND Magnetic Resonance AND (“2013/01/01”[PDAT]: “2019/12/31”[PDAT]) | 418 | 49
2 | PubMed | (((Image Registration) AND Ultrasound) AND Magnetic Resonance) AND Similarity Measure AND (“2013/01/01”[PDAT]: “2019/12/31”[PDAT]) | 70 | 16
3 | PubMed | (((((Image Registration) AND Ultrasound) AND Magnetic Resonance) AND Interventional)) AND Operative AND (“2013/01/01”[PDAT]: “2019/12/31”[PDAT]) | 60 | 19
4 | Science Direct | Image Registration AND Ultrasound AND Magnetic Resonance AND Similarity Measure; Year: 2013–2018 | 224 | 6
5 | Science Direct | Medical Image Registration AND Ultrasound AND Magnetic Resonance AND Interventional; Year: 2013–2018 | 509 | 19
6 | IEEE | Image AND registration AND ultrasound AND magnetic AND resonance; Year: 2013–2018 | 50 | 14
Total | | | | 123

results repeated themselves across several search strategies; therefore, the total number of papers considered was reduced in comparison to those presented in Table 2, resulting in 96 papers. As shown in the Venn diagrams in Fig. 1, a relation can be observed among papers selected within the same database for PubMed and Science Direct. The diagrams present the intersections between the results of the strategies listed in Table 2. The IEEE database was not taken into account, since it presented invariant results across different keywords and search strategies, and no intersection with the other databases was observed. Given the 96 selected papers, a new filtering step was performed according to the goal of the present paper: selecting only papers that propose a new multi-modal image registration method, reducing the total to 22. The papers selected using the methodology described in Sections II.B and II.C are presented in Table 3, which contains information regarding the publication year, title and study

260 A Systematic Review on Image Registration in Interventionist …

Fig. 1 Venn diagram for search strategies considering PubMed and Science Direct databases: a PubMed: green: strategy 1; blue: strategy 2; gray: strategy 3; b Science Direct: yellow: strategy 4; red: strategy 5

type, and is grouped according to the research field: Urology, Neurology, Hepatology, Cardiology and Gynecology. Image registration can be divided into two main groups according to the technique employed: (i) rigid transformation or (ii) non-rigid transformation; in the present sample, these categories represent 23.7% and 72.3%, respectively. Non-rigid registration techniques are mainly employed in Urology, Gynecology, Cardiology and non-specific applications; rigid transformations, on the other hand, were used in the studies related to Hepatology. A more diverse picture can be observed


in Neurology, where some studies use rigid transformation (37.5%), non-rigid transformation (37.5%) or both (25%). Figure 2a, b presents the percentage distribution of the selected papers by application field and the transformations used in each field, respectively. Another interesting characteristic is the metric used to evaluate the accuracy of a given method. Some techniques were evaluated by the distance between reference points defined over the images, such as the Target Registration Error (TRE, 31.8%) [4–10], mean Target Registration Error (mTRE, 22.7%) [11–15], Fiducial Registration Error (FRE, 13.6%) [16–18], Root Mean Square Error (RMSE, 9.1%) [19,20] and Mean Square Error (MSE, 4.5%) [21]; others based their accuracy metric on the superposition of the target image over the reference structure, such as the Hausdorff Distance (HD, 4.5%) [21], Dice Similarity Coefficient (DSC, 13.6%) [21–23] and Mean Surface Distance (MSD, 9.1%) [23,24]; some studies also considered Cover Range (COR, 9.1%) [20,25], Cross Correlation (CC, 4.5%) [22], Peak Signal-to-Noise Ratio (PSNR, 4.5%) [25] and Mutual Information (MI, 4.5%) [25], and there are also cases in which this task was attributed to a specialist (4.5%) [22]. The most used techniques, TRE and mTRE, use the Euclidean distance between reference points and together correspond to 63.3% of the studies; the other techniques employ the superposition between structures, statistical tools or field specialists. Considering multi-modal image registration, the methods considered are based on geometrical characteristics of the structures and/or information about the image intensity distribution. With this, different combinations of transformations and similarity metrics are proposed that fit a specific application. It can be noted that the image registration techniques are associated with the interventionist procedures of each specific area.

In Urology, image registration is mainly used for prostate imaging, for example in biopsy procedures and in monitoring tissue extension. The majority of the studies analyzed in Neurology (87.5%) are related to neurosurgery, identifying anatomic structures with US imaging during procedures. Both studies considered in Gynecology developed methods for the diagnosis, monitoring and treatment of endometriosis. In Cardiology, image registration methods are related to the characterization of atherosclerosis. Another characteristic observed is the computational time required to complete the image registration task, reported by 68.2% of the studies. This parameter serves as an indicator for choosing which method should be employed according to the demands of the procedure. To minimize the overall computational time, some methods perform a pre-procedure registration step.
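The distance-based metrics (TRE/mTRE) and the overlap-based DSC discussed above reduce to a few lines each; a generic sketch, not tied to any of the reviewed implementations:

```python
import numpy as np


def target_registration_error(fixed_pts, registered_pts):
    """Euclidean distance between corresponding landmark pairs (TRE).
    The mean over all pairs is the mTRE reported by several studies."""
    d = np.linalg.norm(
        np.asarray(fixed_pts, float) - np.asarray(registered_pts, float),
        axis=1)
    return d, d.mean()


def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentations."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

TRE needs corresponding anatomical landmarks in both images, which explains its prevalence in urology and neurology, whereas overlap metrics such as DSC only require the structure contours.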


Table 3 Description of the selected papers, grouped by research field

Urology:
– Three-dimensional nonrigid landmark-based magnetic resonance to transrectal ultrasound registration for image-guided prostate biopsy [10] (Sun, Qiu, et al. 2015)
– Multiattribute probabilistic prostate elastic registration (MAPPER): application to fusion of ultrasound and magnetic resonance imaging [19] (Sparks et al. 2015)
– Non-rigid MR-TRUS image registration for image-guided prostate biopsy using correlation ratio-based mutual information [5] (Gong et al. 2017)
– Deformable registration of trans-rectal ultrasound (TRUS) and magnetic resonance imaging (MRI) for focal prostate brachytherapy [6] (Mayer et al. 2016)
– Biomechanical modeling constrained surface-based image registration for prostate MR guided TRUS biopsy [7] (Van De Ven et al. 2015)
– Statistical biomechanical surface registration: application to MR-TRUS fusion for prostate interventions [8] (Khallaghi et al. 2015)
– Learning non-rigid deformations for robust, constrained point-based registration in image-guided MR-TRUS prostate intervention [9] (Onofrey et al. 2017)
– Three-dimensional nonrigid MR-TRUS registration using dual optimization [4] (Sun, Yuan, et al. 2015)

Neurology:
– Registration of 3D fetal neurosonography and MRI [11] (Kuklisova-Murgasova et al. 2013)
– Global registration of ultrasound to MRI using the LC2 metric for enabling neurosurgical guidance [16] (Wein et al. 2013)
– Automatic ultrasound-MRI registration for neurosurgery using the 2D and 3D LC2 metric [17] (Fuerst et al. 2014)
– Automatic deformable MR-ultrasound registration for image-guided neurosurgery [12] (Rivaz, Chen and Collins 2015)
– ARENA: inter-modality affine registration using evolutionary strategy [13] (Masoumi, Xiao and Rivaz 2019)
– Nonrigid registration of ultrasound and MRI using contextual conditioned mutual information [14] (Rivaz et al. 2014)
– Fast rigid registration of pre-operative magnetic resonance images to intra-operative ultrasound for neurosurgery based on high confidence gradient orientations [15] (Nigris, Collins and Arbel 2013)
– A hybrid method for non-rigid registration of intra-operative ultrasound images with pre-operative MR images [18] (Farnia et al. 2014)

Gynecology:
– Mapping and characterizing endometrial implants by registering 2D transvaginal ultrasound to 3D pelvic magnetic resonance images [21] (Canis, M. et al. 2015)
– Mapping endometrial implants by 2D/2D registration of TVUS to MR images from point correspondences [22] (Yavariabdi et al. 2013)

Hepatology:
– Local structure orientation descriptor based on intra-image similarity for multimodal registration of liver ultrasound and MR images [20] (Yang et al. 2016)

Cardiology:
– Automated registration of freehand B-mode ultrasound and magnetic resonance imaging of the carotid arteries based on geometric features [24] (Carvalho et al. 2017)
– Joint intensity-and-point based registration of free-hand B-mode ultrasound and MRI of the carotid artery [23] (Carvalho et al. 2014)

Multipurpose area:
– Research on optical flow model based large deformation ultrasound-MRI registration [25] (Hou et al. 2016)


4 Discussion

Checking the application areas of each study individually, there was a trend toward categorizing the results into a few areas of medical application, with emphasis on Urology, Neurology, Hepatology, Cardiology and Gynecology. This observation may be related to the suitability of the technology for the clinical routine, owing to the improved performance of procedures carried out with the aid of these techniques, which are often limited by the physical properties of the imaged tissues, leading to the need for more than one imaging technique to access the features of interest. Among the selected articles, the only study that could not be classified in any specific category was Hou et al. [25]. As illustrated in Fig. 2b, the authors preferred non-rigid transformations for registering US and MRI images. This preference can be justified by the need to account for the difference in the shape of structures between the US and MRI images, which is directly related to the acquisition context and caused by the positioning of the transducer on the organ surface and by the position of the patient during image acquisition. In addition, accuracy was the metric most used in the evaluation of image registration methods, and the metrics used to assess accuracy varied according to the study field. In Urology and Neurology, the techniques were based on measuring the Euclidean distance between reference points established in the images before or after the interventional procedure, owing to the ease of establishing common, reliable anatomical points in both images. In the other fields this was not possible, and the contours of the structures were used to complete the registration of the images. Thus, considering the multimodal application, the methods studied were based on geometric characteristics of the structures and/or information on the distribution of intensities in the images. With this, different combinations of transformations and similarity metrics are constructed that suit each application of interest.

Fig. 2 Field of application of US–MR image registration: a distribution of the papers analyzed according to field of application; b distribution of the techniques employed in each field of application

5 Conclusion

Returning to our question, “Which are the characteristics of ultrasound and magnetic resonance image registration methods for interventional procedures?”, a systematic review of methods for registering ultrasound and magnetic resonance images applied to interventional procedures was carried out. The criteria established for study selection resulted in specific medical fields that apply US and MRI, namely Urology, Neurology, Hepatology, Cardiology and Gynecology. The vast majority of studies used non-rigid transformations to perform the image registration, and the choice of metric for evaluating a method was influenced by the characteristics of the images, such as the availability of corresponding anatomical points between the images or contour information of the structures. As highlighted in some studies, in addition to the accuracy of the method, the computational time required to complete the task is an important factor for the applicability of a method in a real-time procedure; however, this parameter was not reported by all studies. To assist computational performance, some studies used optimization techniques. Thus, the development of new image registration methods in a multi-modal context should consider the characteristics of the imaged structures, the image acquisition context, and the needs of the interventional procedure. Conflict of Interest The authors declare that they have no conflict of interest.


References 1.

2.

3. 4.

5.

6.

7.

8.


Application of Autoencoders for Feature Extraction in BCI-SSVEP R. Granzotti, G. V. Vargas and L. Boccato

Abstract

A brain-computer interface (BCI) based on the steady-state visually evoked potentials (SSVEP) paradigm deals with the challenge of determining the frequency associated with the visual stimulus the user is concentrated on, given electroencephalography (EEG) recordings of the brain activity. For this, the BCI processes the brain signals in order to remove artifacts and, more importantly, to extract relevant features that contribute to the classification. A technique known as the autoencoder (AE) has gained special attention in recent decades due to its ability to discover advantageous representations for a dataset, even with a significant dimensionality reduction. Essentially, autoencoders (AEs) are neural networks composed of two parts—encoder and decoder—whose roles are, respectively, to create the internal representation (named code) for the input data, and to reconstruct the input data from the generated code. Thus, the encoder corresponds to a powerful nonlinear feature extractor. In this work, we investigated the use of AEs to perform the feature extraction in a BCI-SSVEP. Different AE approaches have been analyzed, both in the time and frequency domains, considering two classifiers: logistic regression and support-vector machines. The obtained results reveal that AEs can offer a performance improvement when compared with a BCI using discrete-time Fourier transform (DFT) features.

Keywords

Brain-computer interfaces • Steady-state visually evoked potentials • Feature extraction • Autoencoders • Neural networks

R. Granzotti (B) · G. V. Vargas · L. Boccato School of Computer and Electrical Engineering, University of Campinas, UNICAMP, Campinas, SP, Brazil e-mail: [email protected]

1 Introduction

A brain-computer interface (BCI) is a communication system that exploits brain activity signals to control external devices, offering a communication alternative for many patients with some type of neuromuscular disorder [1,2]. In general, BCIs rely on the electroencephalography (EEG) technique for recording the brain signals due to its relatively low cost and non-invasiveness. Additionally, they are designed around the chosen paradigm. In this work, we consider the steady-state visually evoked potentials (SSVEP) paradigm, which is based on the fact that when an individual focuses his/her attention on a visual stimulus that flickers at a pre-specified frequency, a synchronized firing of neurons, mainly in the visual cortex area, is observed. Hence, the task of the BCI-SSVEP amounts to processing the EEG signals in order to identify the frequency of the visual stimulus selected by the user.

This is typically accomplished according to a sequence of steps: (1) signal acquisition, (2) signal enhancement or preprocessing, (3) feature extraction, (4) classification and, finally, (5) the control of the external device. The third step—feature extraction—aims at obtaining a set of attributes to represent the EEG signals which are discriminative with respect to the frequency selected by the user, in order to facilitate the classification task and, ultimately, improve the BCI performance. In the context of BCI-SSVEP, techniques such as the fast Fourier transform (FFT), Welch's periodogram and canonical correlation analysis (CCA) have been widely explored [3–5].

Interestingly, from the fields of neural networks and deep learning, the so-called autoencoders (AEs) represent powerful options for dimensionality reduction and feature extraction [6]. In simple terms, these models indirectly learn to create internal representations for the available data while they are trained to minimize the error between their inputs and outputs.
Additionally, since AEs are nonlinear structures, they have the potential of finding complex and advantageous representations when compared with classical techniques, such as Principal Component Analysis (PCA) [7]. Hence, in this work, we investigated the application of AEs in a BCI-SSVEP for extracting features from the EEG signals. More specifically, we considered two different approaches: in the first case, time windows of the EEG recordings represent the AE input data, whereas, in the second case, the AE deals with the frequency-domain representation of the EEG signals, obtained by the FFT. For both approaches, we assessed the performance of the system considering the logistic regression and an SVM classifier.

In the literature, there has been an increasing number of works considering the use of deep models in BCIs (see [8] for a thorough survey). With respect to AEs in BCI-SSVEP, [9] employed a sparse AE before the classification stage, but the AE input data was previously modified according to a coding scheme developed for multi-frequency visual stimulation. Other works explored AEs for a different purpose (not as a feature extractor), such as suppressing the noise components in EEGs [10]. Thus, it is still necessary to further analyze whether AEs can obtain relevant features from the EEG signals and improve the classification in BCIs.

This paper is organized as follows: Sect. 2 describes the techniques employed at each step of the BCI-SSVEP. Section 3 presents the fundamentals of the AEs explored in this work for processing EEG-SSVEP signals. The dataset and the adopted methodology are described in Sects. 4 and 5, respectively. Section 6 describes in more detail the experiments performed. In Sect. 7, the obtained results are presented and discussed. Finally, Sect. 8 brings the final considerations.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_261

2 BCI Signal Processing Module

In this section, we briefly present the signal processing steps involved in the BCI-SSVEP system implemented in this work: (1) pre-processing, whose objective is to reduce the artifacts in the EEG signals; (2) feature extraction, which retrieves discriminative information from the original signals, and (3) classification, which identifies the command (or frequency) associated with the brain activity.

2.1 Pre-processing

We employed the spatial filter known as Common Average Reference (CAR) to preprocess the EEG signals [11]. Let v_i(n) be the electrical potential associated with the i-th electrode at the instant n. Mathematically, the CAR filter is described by the following operation:

$$ v_i^{\mathrm{CAR}}(n) = v_i(n) - \frac{1}{E}\sum_{j=1}^{E} v_j(n), \qquad (1) $$

where E denotes the number of electrodes involved in the data acquisition. By subtracting the average electrical potential of all electrodes—the so-called common average—from the electrical potential recorded by each electrode, the CAR filter provides a nearly neutral reference for the EEG recordings, considerably reducing the artifacts present in the data [11].
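As a concrete illustration, the CAR operation in Eq. (1) amounts to subtracting the across-electrode mean from every channel. A minimal NumPy sketch (the array shapes below are our own illustrative choice, matching the 16-electrode, 12 s recordings described later):

```python
import numpy as np

def car_filter(v):
    """Common Average Reference (Eq. 1): subtract the mean over all
    electrodes (axis 0) from each electrode's signal.
    v has shape (E, N): E electrodes, N time samples."""
    return v - v.mean(axis=0, keepdims=True)

# Illustrative shapes: 16 electrodes, 12 s at 256 Hz.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(16, 3072))
eeg_car = car_filter(eeg)
```

After CAR, the instantaneous sum over all electrodes is exactly zero, i.e., the common-average component has been removed from every channel.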

2.2 Feature Extraction

The purpose of feature extraction is to map the sequence of EEG samples to a different space where the classification task becomes simpler. In the SSVEP paradigm, as the subject focuses on a particular stimulus, associated with a specific frequency, it is expected to observe peaks around that frequency (and its harmonics) in the EEG spectrum. Hence, it is common to extract features from the frequency-domain representation of the EEG signals in BCI-SSVEPs. In this work, we employed the FFT algorithm [12] to obtain a set of frequency features for each time window of the EEG signals. Additionally, we propose the use of AEs to extract attributes of the EEG signals, both from the time and frequency-domain representations. A more detailed exposition of AEs can be found in Sect. 3.
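To make the frequency-feature pipeline concrete, the sketch below computes a normalized magnitude spectrum for one 1 s window; with 256 samples at 256 Hz, each FFT bin corresponds to 1 Hz. The window length and normalization follow the description above, while the synthetic 12 Hz sinusoid is only our illustration of an SSVEP-like response:

```python
import numpy as np

FS = 256   # sampling rate (Hz)
WIN = 256  # 1 s window -> 1 Hz resolution per FFT bin

def fft_features(window):
    """Magnitude spectrum of one EEG window, normalized by its maximum
    (the frequency-domain feature vector described in the text)."""
    mag = np.abs(np.fft.rfft(window, n=WIN))
    return mag / mag.max()

# Synthetic SSVEP-like window: a 12 Hz sinusoid plus mild noise.
t = np.arange(WIN) / FS
rng = np.random.default_rng(0)
window = np.sin(2 * np.pi * 12 * t) + 0.1 * rng.normal(size=WIN)
feats = fft_features(window)
peak_hz = int(np.argmax(feats[1:])) + 1  # skip the DC bin
```

For this synthetic window, the spectral peak lands exactly on the 12 Hz bin, which is the kind of signature the classifier is expected to pick up.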

2.3 Classification

Having at hand the features representing the EEG signals, the BCI-SSVEP must identify the visual stimulus (or, equivalently, the frequency) selected by the user and, in the end, send the respective command to the involved application (e.g., to move a wheelchair forward). This challenge can be seen as a classification task, where each class is associated with one of the frequencies of the interface. Therefore, the classifier is responsible for creating a mapping between the EEG features obtained in the previous step and a class label, which identifies the frequency of the visual stimulus the user was focused on. In this work, we explored two well-established classification methods: Logistic Regression (LR) and Support Vector Machines (SVM) [13]. In a multiclass scenario, the LR is implemented using the softmax function and produces the probabilities of input x belonging to each class. The LR parameters are obtained with the aid of iterative methods that minimize the cross-entropy between the estimated probabilities and the desired probabilities [13].
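A compact NumPy sketch of the softmax LR prediction and the cross-entropy objective described above (the weights here are random placeholders for illustration, not trained values):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lr_predict_proba(X, W, b):
    """Multiclass logistic regression: P(class | x) = softmax(xW + b)."""
    return softmax(X @ W + b)

def cross_entropy(P, y):
    """Mean cross-entropy between predicted probabilities P and labels y."""
    return -np.mean(np.log(P[np.arange(len(y)), y]))

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))   # 6 feature vectors of dimension 8
W = rng.normal(size=(8, 4))   # 4 classes (one per stimulation frequency)
b = np.zeros(4)
P = lr_predict_proba(X, W, b)
loss = cross_entropy(P, rng.integers(0, 4, size=6))
```

Training amounts to iteratively adjusting W and b (e.g., by gradient descent) to reduce this cross-entropy, as stated in the text.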

261 Application of Autoencoders for Feature Extraction in BCI-SSVEP

A Support-Vector Machine is a classification algorithm based on the sound theory of maximum-margin hyperplanes [13]. Originally designed for binary classification of linearly separable data, this approach has been adapted to cope with general nonlinear classification tasks by (1) relaxing the requirement of having linearly separable patterns, thus implementing a soft-margin classifier, and, more importantly, by (2) using the kernel trick [13]. In this case, the inner product between data points x_i and x_j in the original space is replaced by the value of a kernel function κ(x_i, x_j). Implicitly, this is equivalent to nonlinearly mapping the original data to a new feature space, of a much larger dimension, where the maximum-margin hyperplane is obtained; the hyperplane in the feature space defines a nonlinear boundary between the classes in the original space. In this work, we adopted the Gaussian kernel, κ(x_i, x_j) = exp(−γ‖x_i − x_j‖²).
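The Gaussian kernel above is straightforward to implement; a small sketch (the γ value and the example vectors are arbitrary choices of ours):

```python
import numpy as np

def rbf_kernel(xi, xj, gamma):
    """Gaussian (RBF) kernel: kappa(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)."""
    d = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    return np.exp(-gamma * np.dot(d, d))

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([1.5, 2.5, 2.0])
k12 = rbf_kernel(x1, x2, gamma=0.1)
```

Note that κ(x, x) = 1 for any x, the kernel is symmetric, and its value decays toward 0 as the points move apart; the soft-margin classifier only ever sees the data through such kernel evaluations.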

3 Autoencoders

An autoencoder is an artificial neural network trained to copy its input to its output [6]. The network consists of two parts: an encoder function h = f(x) that generates an internal representation h, called code, for the input x, and a decoder function r = g(h) that attempts to reconstruct the input from the code. Figure 1 shows the basic architecture of an AE. Both the encoder and the decoder can be composed of one or more layers of neurons.

Naturally, there is no interest in a network that simply implements the identity function. In order to avoid this trivial solution, some constraints are imposed on the structure or on the loss function involved in the AE training process, so that a perfect reconstruction is no longer attainable. Interestingly, as the network is forced to prioritize which aspects of the input should be preserved at the output, it ends up creating internal representations for the data that are sufficiently informative. Therefore, if the network is properly trained, the learned encoder can be used to extract features from the dataset, since the decoder is still able to reconstruct the original data from the code with an acceptable error. This means that the encoder can be seen as a promising unsupervised feature extractor [6].

The training process of an AE is the same as that of any feedforward network, leading to a nonlinear and unconstrained optimization problem, in which it is necessary to minimize a loss function L(·) that expresses a dissimilarity measure between the output r = g(f(x)) and the input sample x. In this work, the adopted loss function corresponds to the mean squared error (MSE).

Fig. 1 Generic representation of an autoencoder

A relatively simple approach to extract features with an AE is to construct its architecture so that the code layer has a smaller dimension than the input layer, which means that the network effectively reduces the dimensionality of the data. In this case, the AE is said to be undercomplete [6]. When the encoder and the decoder are simply linear functions, and the loss function is the MSE, an undercomplete AE learns the same subspace as PCA [6]. Hence, an undercomplete AE with a general nonlinear structure has the potential of learning a generalization of PCA, thus obtaining a more representative set of latent variables.

Another possibility, called the sparse autoencoder, aims to obtain an internal representation with most of the latent variables inactive (i.e., equal to zero). Differently from the undercomplete AE, the code layer usually has a larger dimension than the input. A classical approach for generating sparse representations is to add a term Ω(h) to the cost function which forces the network to seek sparsity in the code layer (for example, Ω(h) = λ Σ_i |h_i|). In this work, we implemented a different approach by using a dropout layer [7] after generating the code, as explored in [14]. Since dropout randomly turns off some neurons during the training process, forcing their outputs to zero at certain iterations, it simulates a sparse activation of the code layer, which encourages the code to actually learn a sparse representation.

A third type of AE, known as the denoising autoencoder, corrupts the input pattern with noise in order to make the internal representation built by the AE more robust. From several noisy versions of the same input pattern, the AE must learn to extract relevant attributes in order to reconstruct the original pattern (i.e., without the noise).
Thus, what is desired is to minimize the reconstruction error between the output generated for the noisy input x̃ and the uncorrupted input, i.e., min L(x, g(f(x̃))). Many techniques can be employed to insert noise into the input pattern, for example, adding a Gaussian random vector with zero mean and a small variance. In this work, we explored, once again, the dropout technique to generate the noisy versions of the input.
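To make the encoder/decoder mechanics concrete, the following is a minimal undercomplete AE trained by plain gradient descent on the MSE loss. It is a toy sketch under our own assumptions (one tanh encoder layer, a linear decoder, synthetic data lying near a 2-D subspace), not the architecture or optimizer used in the experiments of this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data near a 2-D subspace of R^8, so a 2-D code can reconstruct it.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 8))
X += 0.01 * rng.normal(size=X.shape)

n_in, n_code = 8, 2
W1 = 0.1 * rng.normal(size=(n_in, n_code)); b1 = np.zeros(n_code)  # encoder
W2 = 0.1 * rng.normal(size=(n_code, n_in)); b2 = np.zeros(n_in)    # decoder

def forward(batch):
    h = np.tanh(batch @ W1 + b1)  # code h = f(x)
    r = h @ W2 + b2               # reconstruction r = g(h)
    return h, r

lr, losses = 0.05, []
for _ in range(500):
    h, r = forward(X)
    err = r - X
    losses.append(np.mean(err ** 2))
    # Backpropagate the MSE loss (constant factors absorbed in lr).
    gW2 = h.T @ err / len(X);           gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ dh / len(X);            gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

After training, discarding the decoder and keeping h = f(x) yields the feature extractor; the sparse and denoising variants described above differ only in the dropout applied to the code or input during training.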

4 EEG Data Set Information

The EEG data employed in this work corresponds to a set composed of 480 signals (10 subjects × 6 visual stimuli × 8 sessions). Each signal comprises a 16 × 3072 matrix, representing the 16 electrodes placed according to the 10-10 international standard (O1, O2, Oz, POz, Pz, PO3, PO4, PO7, PO8, P1, P2, Cz, C1, C2, CPz, and FCz), and the 3072 samples given by the 12 s stimulation session and the sampling rate of 256 Hz. The six stimulation frequencies were: 6, 7.5, 12, 15, 20 and 30 Hz. The acquisition was provided by a research group from the Laboratory of Digital Signal Processing for Communications (DSPCom), which the authors are part of, and was approved by the Ethics Committee of the University of Campinas (no. 791/2010, CAAE 0617.0.146.000-10). Further details on the EEG data set can be found in [5].

5 Methodology

In this section, we describe the methodology employed in the computational experiments.

5.1 Pre-processing

The first step was the segmentation of the EEG signals into intervals of one second, with an overlap of 50%, using a rectangular window. Thus, each of the 8 sessions of 12 s gave rise to 23 new signals. We considered four frequencies of visual stimulation—6, 12, 15 and 20 Hz, which means that we have four classes in the classification stage. Additionally, we applied the CAR filter over the data in order to remove artifacts. Finally, based on preliminary tests, we decided to use only the samples associated with the Oz electrode.
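The segmentation step above (1 s windows, 50% overlap) can be sketched as follows; applied to one 12 s session at 256 Hz it yields exactly the 23 windows mentioned in the text (the function name and the ramp stand-in signal are our own):

```python
import numpy as np

def segment(signal, win=256, overlap=0.5):
    """Split a 1-D signal into rectangular windows of length `win`
    with the given fractional overlap between consecutive windows."""
    step = int(win * (1 - overlap))
    n_windows = (len(signal) - win) // step + 1
    return np.stack([signal[i * step: i * step + win]
                     for i in range(n_windows)])

session = np.arange(3072.0)   # stand-in for one 12 s session at 256 Hz
windows = segment(session)
```

With a 50% overlap, the second half of each window coincides with the first half of the next one, which is how 8 sessions produce 8 × 23 = 184 training windows per stimulus.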

5.2 Hyperparameters of Classifiers

The hyperparameters of the classifiers were selected with the aid of a grid search and a cross-validation procedure. For the regularization factor C, we tested the following values: 10⁻², 10⁻¹, 1, 10, 10², 10³ and 10⁴. In the case of the SVM, we also tested the following values for the kernel width γ: 10⁻⁵, 10⁻⁴, 10⁻³, 10⁻², 10⁻¹ and 1. In order to evaluate the performance of the BCI, we separated two entire sessions (7 and 8) as the test set, while the remaining six sessions were used in the training phase. After the best hyperparameters were determined based on the validation error of a 6-fold cross-validation (CV), the BCIs were retrained using all the sessions available for training and then evaluated on the test set.
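The split described above reduces to index bookkeeping: each of the 8 sessions contributes 23 windows, sessions 7 and 8 are held out for testing, and each of the 6 CV folds validates on one entire remaining session. A sketch (variable names are ours; window counts follow Sect. 5.1):

```python
import numpy as np

N_SESSIONS, WIN_PER_SESSION = 8, 23
session_of = np.repeat(np.arange(1, N_SESSIONS + 1), WIN_PER_SESSION)

# Sessions 7 and 8 are reserved as the test set.
test_idx = np.where(session_of >= 7)[0]

# 6-fold CV over sessions 1-6: each fold validates on one whole session,
# so validation windows never share a session with training windows.
folds = []
for val_session in range(1, 7):
    val_idx = np.where(session_of == val_session)[0]
    train_idx = np.where((session_of <= 6) & (session_of != val_session))[0]
    folds.append((train_idx, val_idx))
```

Keeping whole sessions together in each split avoids leakage between overlapping windows of the same recording, which would otherwise inflate the validation accuracy.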

6 Experiments

In this section, we present the details regarding the application of AEs for feature extraction in a BCI-SSVEP. Two different approaches have been considered in this work: (1) a time-domain AE and (2) a frequency-domain AE.

In the first case, the AE must extract features directly from the time-domain signal. More specifically, the AE input data corresponds to time windows of the Oz electrode EEG recordings (after the CAR filter), normalized by the maximum absolute amplitude of each window. In this context, six different architectures were considered for the AE, with a different number of neurons in the code layer and a different number of hidden layers between the input and the code (the same as the number of layers between the code and the output). For the number of neurons in the code layer, we tested the following values: 1024 and 2048. With respect to the number of hidden layers, we tested the values 0, 1 and 2, with all layers containing 1024 neurons. Additionally, the time-domain AE has two additional hyperparameters: the dropout rate in the input layer and the dropout rate in the code layer. The dropout rate controls the percentage of neurons that will be set to zero at each iteration. The tested values were 0.25, 0.50 and 0.75 in both the input and code layers. The best values were chosen using 6-fold CV.

In the second case, the AE must extract features in the frequency domain. In other words, the AE input data corresponds to the frequency spectrum of the signal observed in the Oz electrode. More specifically, the EEG signal is preprocessed with the CAR filter; then, the FFT of each window is computed and, finally, normalized by the maximum magnitude of that spectrum. In this scenario, ten architectures were considered, with a different number of neurons in the code layer, a different number of hidden layers between the input and the code, and a different number of neurons in the hidden layers. For the number of neurons in the code layer, we tested the following values: 64 and 128. For the number of hidden layers, we tested 0, 1 and 2. The values tested for the number of neurons in the hidden layers were 128 and 1024.

In both approaches, the input and output layers of the AE have the same dimension (256 elements, corresponding to a 1 s time window).
While ReLU activations were used in the encoder layers, the neurons of the decoder layers used the hyperbolic tangent. As already mentioned, the AE training process involved the MSE loss function and used the Adam algorithm [7]. The weights of the AE were obtained according to the following procedure: first, we trained the AE considering each of the six folds as the validation set. Then, we computed the average number of iterations required to achieve the smallest validation error. Finally, we retrained the AE using all the folds until that average iteration count was reached. After that, the decoder was removed and the generated code was used as the input of the classifier (LR or SVM).

7 Results and Discussion

This section presents the results obtained in the experiments described in Sect. 6. Since the classes are well balanced in this dataset, the performance of each BCI was measured by the classification accuracy.

7.1 Time-Domain Autoencoder

For each AE architecture, we present in Fig. 2 the average validation accuracy (Acc. Val.) in the 6-fold CV as a function of the dropout rate at the input and code layers, considering the LR classifier. We omitted the results obtained with the SVM because they were always slightly worse than those obtained with LR. Some interesting remarks can be drawn from Fig. 2: (i) in general, increasing the dropout rate, either in the input layer or in the code layer, improves the average accuracy; (ii) using a higher number of hidden layers tends to increase the accuracy; (iii) increasing the number of neurons in the code layer, at least in the tested cases, does not seem to significantly impact the accuracy of the classifier. The network that achieved the best validation accuracy (59.06%) was composed of two hidden layers, 1024 neurons in the code layer and dropout rates of 0.75.

7.2 Frequency-Domain Autoencoder

In this context, the AE must extract features in the frequency domain, after the FFT has been applied. The goal here is to verify whether the AE can reduce the dimensionality of the input data and, at the same time, improve the BCI performance. Differently from what occurs in the time domain, the results obtained in the frequency domain do not vary much across the different architectures. The smallest accuracy in the validation set using LR was 72.10%, while the largest was 73.37%. This small difference was also observed for the SVM classifier, with the lowest and the highest validation accuracies equal to 73.01% and 75.36%, respectively. The best configuration of the BCI using LR employed an AE with no hidden layers and 64 neurons in the code layer. On the other hand, when the SVM classifier was used, the best AE architecture comprised two hidden layers of 128 neurons and a code layer of 128 neurons.

7.3 Comparative Analysis

Finally, after identifying the best configurations for the AEs, both in the time and frequency domains, we compared the performance of the different BCIs in the test set using the LR and SVM classifiers. The final results are exhibited in Table 1. We also included the performance of the BCIs using the features extracted with CCA and the FFT, as well as without any feature extraction (indicated by '–' in the first column; in this case, the samples of the EEG time window are the inputs of the classifier).

Notice in Table 1 that, in all cases, the standard deviation associated with the validation accuracy is considerably high. Moreover, there is a significant difference between the average performance in the validation set and in the test set, which indicates a high variability between different EEG sessions for the same user.

It is possible to observe in Table 1 that the absence of feature extraction (rows 1 and 6 of the table) led to the worst performances, in both the validation and test sets. On the other hand, when the time-domain AE was used, the performance was significantly improved, reaching a test accuracy of 79.35% for the LR classifier. However, it did not reach the performance level of the BCI using the frequency features computed by the FFT. Nevertheless, the results indicate that, by using a more flexible time-domain feature extractor, the performance gap with respect to the BCIs using frequency-domain information was significantly reduced. Interestingly, it may be possible to improve the time-domain AE performance if more training samples are available. Therefore, the time-domain AE should still be considered a potential tool to extract features in BCI systems, especially when the frequency-domain representation is not the most informative.

Finally, the experiments showed that although the frequency-domain AE reduces the dimensionality of the data—from 256 to 64 or 128 attributes—it still offered a performance improvement, albeit modest, when compared with the BCI using FFT features, which emphasizes the flexibility of AEs to extract relevant information.

8 Conclusions

Throughout this work, we presented the main steps involved in the EEG signal processing module of a BCI-SSVEP, with emphasis on the feature extraction stage. The challenge in feature extraction consists of obtaining a set of attributes to represent the original signals which are discriminative with respect to the frequency (or command) selected by the user. In the SSVEP paradigm, it is common to employ techniques that map the original data to the frequency domain, such as the FFT, since the evoked frequencies may be more easily perceived in the frequency spectrum. Here, we proposed the use of a general machine learning technique, known as the autoencoder, to perform the feature extraction from EEG signals, both in the time and frequency domains. The results of the computational experiments revealed possible paths to be followed.

The use of the time-domain AE significantly improved the BCI performance when compared with the case without any feature extraction. Still, the test accuracy was inferior to that attained by the BCI using the FFT features, which, in a certain sense, was expected in the BCI-SSVEP context. However, the difference in test accuracy was significantly reduced when the AE was employed, which indicates that it can be an interesting tool for feature extraction in BCI scenarios where the best option is to work in the time domain.

Finally, we also verified the ability of the AE to extract information in the frequency domain. In this case, the AE was able to modestly improve the performance of the classifiers, while reducing the dimensionality of the feature vector, confirming that its flexibility can be advantageous in the design of BCIs. As future research, it may be pertinent to explore more elaborate AEs, such as convolutional AEs, for processing EEG spectrograms, thus combining time and frequency-domain information in the same structure.

Fig. 2 Acc. Val. for AEs using 1024 and 2048 neurons in the code layer. Note: the figure has been modified. No results were changed, the image was just aligned


Table 1 Average validation accuracy (Acc. Val.) with the standard deviation, and accuracy in the test set (Acc. Test), both in percentage, for different classifiers (Cla.) with the best hyperparameters (C and γ) and different feature extraction techniques (F.E.)

F.E.     | Cla. | Acc. Val.    | Acc. Test | C    | γ
---------|------|--------------|-----------|------|-----
–        | LR   | 28.62 ± 4.72 | 38.59     | 1.00 | –
AE       | LR   | 59.06 ± 6.66 | 79.35     | 0.10 | –
CCA      | LR   | 67.75 ± 7.47 | 78.80     | 10.0 | –
FFT      | LR   | 72.46 ± 6.95 | 86.96     | 0.10 | –
FFT + AE | LR   | 73.37 ± 6.24 | 87.50     | 0.10 | –
–        | SVM  | 46.92 ± 7.95 | 69.57     | 10.0 | 0.10
CCA      | SVM  | 68.84 ± 8.71 | 78.26     | 100  | 0.10
FFT      | SVM  | 73.55 ± 7.80 | 83.15     | 100  | 0.10
FFT + AE | SVM  | 75.36 ± 7.06 | 88.04     | 1.00 | 0.10

Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nivel Superior—Brasil (CAPES)—Finance Code 001. Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Graimann B, Allison B, Pfurtscheller G (2009) Brain–computer interfaces: a gentle introduction. In: Brain-computer interfaces. Springer, pp 1–27
2. Nicolas-Alonso LF, Gomez-Gil J (2012) Brain computer interfaces, a review. Sensors 12:1211–1279
3. Lin Z, Zhang C, Wu W, Gao X (2006) Frequency recognition based on canonical correlation analysis for SSVEP-based BCIs. IEEE Trans Biomed Eng 53:2610–2614
4. Nakanishi M, Wang Y, Wang Y-T, Jung T-P (2015) A comparison study of canonical correlation analysis based methods for detecting steady-state visual evoked potentials. PLOS ONE 10:1–18
5. Carvalho SN, Costa TBS, Uribe LFS et al (2015) Comparative analysis of strategies for feature extraction and classification in SSVEP BCIs. Biomed Signal Process Control 21:34–42
6. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press, Cambridge, MA
7. Géron A (2019) Hands-on machine learning with Scikit-Learn, Keras & TensorFlow, 2nd edn. O’Reilly Media
8. Zhang X, Yao L, Wang X, Monaghan J, Mcalpine D, Zhang Y (2019) A survey on deep learning based brain computer interface: recent advances and new frontiers. arXiv:1905.04149 [cs.HC]
9. Pérez-Benítez JL, Pérez-Benítez JA, Espina-Hernández JH (2018) Development of a brain computer interface using multi-frequency visual stimulation and deep neural networks. In: 2018 international conference on electronics, communications and computers (CONIELECOMP), pp 18–24
10. Chuang CC, Lee CC, Yeng CH, So EC, Lin BS, Chen YJ (2019) Convolutional denoising autoencoder based SSVEP signal enhancement to SSVEP-based BCIs. Microsyst Technol
11. McFarland DJ, McCane LM, David SV, Wolpaw JR (1997) Spatial filter selection for EEG-based communication. Electroencephalogr Clin Neurophysiol 103:386–394
12. Oppenheim AV, Schafer RW (2009) Discrete-time signal processing, 3rd edn. Pearson
13. Bishop CM (2006) Pattern recognition and machine learning. Springer, Berlin
14. Zhai S, Zhang Z (2015) Dropout training of matrix factorization and autoencoder for link prediction in sparse graphs. In: Proceedings of the 2015 SIAM international conference on data mining

An LPC-Based Approach to Heart Rhythm Estimation J. S. Lima, F. G. S. Silva and J. M. Araujo

Abstract

Heart diseases are the cause of many deaths around the world, and a considerable part of the population is affected by them. A primary monitoring tool for the diagnosis of heart illness is the electrocardiogram. The heartbeat signal captured by this protocol is key for medical professionals to investigate whether individuals are healthy or suffer from some possible heart disease. In this paper, the LPC technique is proposed for automatic heart rhythm estimation. The QRS complex can have a variety of shapes depending on the sensor positioning, and can even be hard to discriminate by visual inspection of the signals or by conventional threshold detection techniques. The signals are filtered by a single, low-order LPC filter, and the absolute value of the estimation error can easily be used in a threshold detection scheme to estimate the heart rhythm in beats per minute. A database subset taken from MIT is used to test the efficiency of the approach. The direct measurement from the signals is presented to give a fair comparison of the findings.

Keywords

Electrocardiogram • LPC • Diagnosis • Heart frequency

1 Introduction

Health monitoring for humans and animals is a crucial issue nowadays. For patients that require closer attention in several situations, such as home care, hospitals, and specialized exams, the acquisition of certain biosignals can help medical professionals with their investigations and diagnoses [1].

J. S. Lima · F. G. S. Silva · J. M. Araujo (B)
Programa de Pós-Graduação em Engenharia de Sistemas e Produtos (PPGESP), Instituto Federal da Bahia (IFBA), Rua Emídio dos Santos, S/N, Barbalho, Salvador, BA, Brazil
e-mail: [email protected]

The heart rhythm, or heart rate, is undoubtedly one of the most important of these biosignals [2–9]. Its mean value and its variance, or variability, are vital indicators of the individual's status, even when the individual seems healthy [10–12]. Ergometric tests, for example, are a specialized exam that considers the mean value and the variability of the heart rhythm [13,14]. Several pathologies and diseases can be diagnosed with a correct interpretation of electrocardiogram (ECG) signals. Far beyond visual interpretation, some features of ECG signals cannot be accessed in this form, and signal processing techniques are the primary tool for computing and extracting them. The main focus of this note, heart rhythm estimation, although it may seem simple at first sight, can pose some challenges depending on the setup used for signal acquisition and conditioning. Existing techniques often must include filter banks for denoising, drift compensation, and other approaches that help the correct detection of the heart rhythm [15,16]. In this work, we propose an LPC-based automated heart rhythm estimator anchored in the reconstruction error of the ECG signal obtained with low-order linear prediction coefficients. The advantages and drawbacks of the proposal are discussed. The method is then applied to benchmark signals collected from the well-known MIT PhysioNet database. A comparison with a wavelet filter bank estimation of heart rhythm is presented to illustrate the virtues of the proposal.

2 Theoretical Development

2.1 ECG Signal

The ECG signal (Fig. 1) carries information about the biological changes that occur in the heartbeat, generated by stimuli from the ventricles and atria. The signal presents three major characteristics: length, given in samples (or seconds); amplitude, in millivolts; and morphology, from which it is possible to identify its waveform.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_262


Fig. 1 Typical morphology of normal sinus rhythm as seen on ECG

Figure 1 shows the major waves of the ECG signal: the P wave, the QRS complex, and the T wave. The QRS complex is composed of the Q, R, and S waves [17]. In the majority of cases, cardiac events occur in these waves; for that reason, a large amount of research has focused on the QRS complex.

2.2 LPC—Linear Prediction Coefficients

The fundamental idea of LPC is to express the one-step prediction \hat{x}[n] of a discrete signal x[n] from its previous p samples [18,19], whose difference equation is given by

\hat{x}[n] = -\sum_{k=1}^{p} a_k x[n-k],   (1)

where a_k are the LPC coefficients and p is the order of the predictor. The estimation error e[n] is then

e[n] = x[n] - \hat{x}[n].   (2)

Using Eq. (1), Eq. (2) can be rewritten as

e[n] = x[n] + \sum_{k=1}^{p} a_k x[n-k],   (3)

whose linear prediction coefficients a_k minimize the mean square of the error e[n] and can be found using Eq. (4):

\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix}
= -\begin{bmatrix}
r_0 & r_1 & \cdots & r_{p-1} \\
r_1 & r_0 & \cdots & r_{p-2} \\
\vdots & \vdots & \ddots & \vdots \\
r_{p-1} & r_{p-2} & \cdots & r_0
\end{bmatrix}^{-1}
\begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_p \end{bmatrix},   (4)

where r_k is given by

r_k = \sum_{i=0}^{N-1-k} x[i]\, x[i+k].   (5)

The error signal e[n] is sensitive to sudden variations of the signal x[n] due to its high-pass filter behavior, as can be deduced by applying the Z-transform to the error e[n] in (3):

H(z) = \frac{E(z)}{X(z)} = 1 + \sum_{k=1}^{p} a_k z^{-k}.   (6)

This property is used in the estimation of the heartbeat. The method is presented in the next section.

2.3 LPC: Heart Rhythm Estimation

Since most research on feature extraction from ECG signals focuses on the QRS complex, this paper proposes an LPC-based method for identifying the instants at which the QRS complex occurs. To understand the method, consider a discrete-time ECG signal x[n], whose samples can be predicted by the signal \hat{x}[n] obtained from the 4th-order (p = 4) LPC model (1). From the difference x[n] - \hat{x}[n], the error e[n] is determined, which is sensitive to high-frequency components and, consequently, to the variations of the signal level in the QRS complex. In this way, a threshold search can easily be applied to e[n] to retrieve information about the heart rhythm, as can be seen in Fig. 2. Since the error signal carries only the high-frequency content of the band, the offset, the drift caused by spurious interference, and even a possible T wave prominence are absent, so the threshold logic detection of the peaks is effective.

Fig. 2 Heart rhythm computation using detected R waves


The chosen threshold for detection has some degree of subjectivity [20]. However, given the normalized absolute value of the error,

\tilde{e}[n] = \frac{|e[n]|}{\sup_n |e[n]|},   (7)

sorted in ascending order, it is acceptable to discard the 10% greatest values of the samples of \tilde{e}[n], generating a truncated sequence \tilde{e}_t[n]. Considering that this new sequence gives good prominence to the spiking events, a reference for the threshold can be computed using the median \mu_{1/2}:

L = \sqrt{\mu_{1/2}\left(\tilde{e}_t^2[n]\right)},   (8)

and, finally, an adequate calibration for the threshold is taken as

D = \beta L, \quad \beta > 0.   (9)

An adequate value of the parameter \beta for use with the MIT database was found to be \beta = 6. A sample m is then marked as an event if the condition \tilde{e}[m] \ge D is met. For each pair of consecutive marked samples m_1 and m_2 with m_2 - m_1 = N, the heart rhythm HR in beats per minute (bpm) can be estimated as

HR = \frac{60 f_s}{N},   (10)

in which f_s is the sampling frequency in hertz.
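The whole estimation pipeline of Eqs. (1)–(10) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation; in particular, the 0.25 s refractory gap used to merge neighboring marks of the same spike is an added assumption, and function names are our own:

```python
import numpy as np

def lpc_coefficients(x, p=4):
    """Solve Eq. (4), a = -R^{-1} r, with autocorrelation lags from Eq. (5)."""
    N = len(x)
    r = np.array([np.dot(x[:N - k], x[k:]) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz
    return -np.linalg.solve(R, r[1:])

def prediction_error(x, a):
    """Eq. (3): e[n] = x[n] + sum_k a_k x[n-k] -- a high-pass FIR on x."""
    e = np.asarray(x, dtype=float).copy()
    for k, ak in enumerate(a, start=1):
        e[k:] += ak * x[:-k]
    return e

def estimate_hr(x, fs, p=4, beta=6.0):
    """Threshold detection on the LPC error, Eqs. (7)-(10)."""
    a = lpc_coefficients(x, p)
    e_norm = np.abs(prediction_error(x, a))
    e_norm /= e_norm.max()                             # Eq. (7)
    e_t = np.sort(e_norm)[: int(0.9 * len(e_norm))]    # drop the 10% greatest values
    D = beta * np.sqrt(np.median(e_t ** 2))            # Eqs. (8)-(9)
    marks = np.flatnonzero(e_norm >= D)
    peaks = [marks[0]]                                 # merge marks of the same spike
    for m in marks[1:]:
        if m - peaks[-1] > 0.25 * fs:                  # assumed 0.25 s refractory gap
            peaks.append(m)
    return 60.0 * fs / np.diff(peaks).mean()           # Eq. (10), averaged over beats
```

On a synthetic signal with R-like spikes over baseline wander, the high-pass LPC error lets the single threshold D recover the beat rate without any pre-filtering, which is the point made above about Eq. (6).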

3 Results and Discussion

To evaluate the proposal, the Massachusetts Institute of Technology arrhythmia database (MIT-BIH) [21] was used. In this database, ECG signals obtained from 45 patients, who may present some kind of heart disease, are applied to the method. A good segmentation in a MATLAB®-compatible format can also be obtained from [22]. An advantage of the LPC method lies in the frequency domain: more specifically, the high-pass behavior of the response E(z), Eq. (6). This property of E(z) is important to eliminate the presence of drift (low-frequency components) and offset in the ECG signal, which could impair the performance of a threshold-based method, as can be seen in Fig. 3. According to the figure, the measurement of the heartbeat from the threshold is not affected by the presence of drift and offset. In this way, due to the high-pass behavior, no data pre-processing is necessary before applying the LPC method. For the sake of simplicity, some registers of healthy patients and of patients with arrhythmia were chosen to evaluate the method, as can be seen in Table 1, where the column Direct Reading (DR) displays a measure of the heartbeat from the time interval between QRS complexes.

Fig. 3 An application of the method to an excerpt of NSR data

The results for the heart rhythm measure are displayed in Table 2, where the column Class shows the pathology associated with each ECG signal. The columns LPC Method (LPC) and LPC Accuracy (LPCA) present, respectively, the average heartbeat estimated by our method and its accuracy with respect to the Direct Reading of Table 1. For comparison purposes, a wavelet-based code was also applied to the data for heart rhythm measurement [23], and the results are displayed in the columns Wavelet Method (WT) and Wavelet Accuracy (WTA). As shown in Table 2, the LPC method shows promising results. In comparison to the wavelet-based method, our proposal presents an accuracy greater than 98%, whereas the wavelet transform estimates the heartbeat with a maximum accuracy of 97.02% (NSR case), a result below that obtained by the LPC-based method. It is also important to highlight that the accuracy of the wavelet-based method varies in the range [42.47%, 97.02%]. To explore the potential of the method, two excerpts from the classes NSR and Bigeminy with similar heart rhythms were analyzed. In Fig. 4, the estimations for the generated vector of heart rhythm are displayed. In this example, the method proves able to capture the heart rate


Table 1 Register of pathologies

Class  Pathology                          DR (bpm)
NSR    Normal sinus rhythm                67
APB    Atrial premature beat              98
AFL    Atrial flutter                     113
AFIB   Atrial fibrillation                73
SVTA   Supraventricular tachyarrhythmia   135
WPW    Pre-excitation                     65
PVC    Premature ventricular contraction  88
VT     Ventricular tachycardia            110
IVR    Idioventricular rhythm             63

Table 2 Results of LPC method

Class  WT (bpm)  WTA (%)  LPC (bpm)  LPCA (%)
NSR    69        97.02    67         100.00
APB    118       48.22    98         100.00
AFL    99        87.61    113        100.00
AFIB   115       42.47    73         100.00
SVTA   127       94.07    135        100.00
WPW    97        50.77    65         100.00
PVC    66        75.00    88         100.00
VT     84        76.36    112        98.18
IVR    32        50.79    63         100.00

variability, a relevant parameter for diagnosing heart diseases. In the two cases, the mean heart rhythm is close, whereas the standard deviation is quite different, as can be seen in the boxes. This indicates that the method can be used for feature extraction from the signal for automatic classification purposes.

Fig. 4 Box plot comparison of NSR and Bigeminy excerpts with the same mean value

4 Conclusions

In this note, a novel application of LPC signal reconstruction is proposed for heart rhythm estimation from ECG signals. The estimation error from LPC has advantages when applying threshold detection techniques for heartbeat localization, i.e., the R waves. Since the LPC error filter is high-pass, it obliterates the offset and baseline wander phenomena that plague the signal acquisition system. Moreover, a possible T wave prominence is also obliterated in the error samples. The proposed methodology is tested on ECG signals from the MIT arrhythmia database. The obtained results are very consistent and accurate, and the potential for heart disease classification is also verified. Future research intends to embed the technique in a more general heart disease classification context using convolutional neural networks and deep learning paradigms.

Acknowledgements The authors would like to thank the Instituto Federal da Bahia for supporting the realization of the present research.

Conflict of Interest The authors declare that they have no conflict of interest.

References

[1] Escabí M (2012) Biosignal processing. In: Enderle JD, Bronzino JD (eds) Introduction to biomedical engineering, 3rd edn. Biomedical engineering. Academic, Boston, pp 667–746
[2] Perkusich A, Perkusich MLB, Deep GS, de Moraes ME, Lima AMN (1990) System for analysis and diagnostic aid using ECG. Cadernos de Engenharia Biomédica 7:112–118
[3] Jung DK, Kim KN, Kim GR et al (2005) Biosignal monitoring system for mobile telemedicine. In: Proceedings of 7th international workshop on enterprise networking and computing in healthcare industry, 2005. HEALTHCOM 2005, pp 31–36
[4] Bojanic D, Petrovic R, Jorgovanovic N, Popovic DB (2006) Dyadic wavelets for real-time heart rate monitoring. In: 2006 8th seminar on neural network applications in electrical engineering, pp 133–136
[5] Zago GT, Andreão RV, Rodrigues SL, Mill JG, Filho MS (2015) ECG-based detection of left ventricle hypertrophy. Res Biomed Eng 31:125–132
[6] Figueiredo Dalvi R, Zago GT, Andreão RV (2017) Heartbeat classification system based on neural networks and dimensionality reduction. Res Biomed Eng 32:318–326
[7] Pławiak P (2018) Novel methodology of cardiac health recognition based on ECG signals and evolutionary-neural system. Expert Syst Appl 92:334–349
[8] Suganthi Evangeline C, Lenin A (2019) Human health monitoring using wearable sensor. Sens Rev 39:364–376
[9] Van Steenkiste G, Loon G, Crevecoeur G (2020) Transfer learning in ECG classification from human to horse using a novel parallel neural network architecture. Sci Rep 10
[10] Haapaniemi TH (2001) Ambulatory ECG and analysis of heart rate variability in Parkinson's disease. J Neurol Neurosurg Psychiatry 70:305–310
[11] Marques Vanderlei LC, Pastre CM, Hoshi RA, de Carvalho TD, de Godoy MF (2009) Noções básicas de variabilidade da frequência cardíaca e sua utilidade clínica. Braz J Cardiovasc Surg 24:205–217
[12] Shaffer F, Ginsberg JP (2017) An overview of heart rate variability metrics and norms. Front Public Health 5
[13] Shiraishi Y, Katsumata Y, Sadahiro T et al (2018) Real-time analysis of the heart rate variability during incremental exercise for the detection of the ventilatory threshold. J Am Heart Assoc 7
[14] Malta FP (2018) Análise experimental da variabilidade da frequência cardíaca e sua relação com o sistema de controle cardiorrespiratório em condições de exercício físico moderado e intenso. Master's thesis, Escola Politécnica da USP, São Paulo
[15] Zhang D (2005) Wavelet approach for ECG baseline wander correction and noise reduction. In: 2005 IEEE engineering in medicine and biology 27th annual conference, pp 1212–1215
[16] Leski JM, Henzel N (2005) ECG baseline wander and powerline interference reduction using nonlinear filter bank. Signal Process 85:781–793
[17] Becker DE (2006) Fundamentals of electrocardiography interpretation. Anesth Prog 53:53–64
[18] Marvi H, Esmaileyan Z, Harimi A (2013) Estimation of LPC coefficients using evolutionary algorithms. J AI Data Min 1:111–118
[19] Beraldo OA (1997) Processamento Digital do Sinal de Eletrocardiograma para Aplicação em Experimentos de Fisiologia Cardíaca. PhD thesis, Escola de Engenharia de São Carlos, Universidade de São Paulo
[20] Araujo JM, Embirucu M, Fontes CHO, Andrade Lima LRP, Kalid RA (2009) Melhoria da performance em reconciliação de dados pela eliminação de outliers com pré-filtro por predição linear. In: Proceedings of 8th international seminar on electrical metrology, pp 1–5
[21] Moody GB, Mark RG (2001) The impact of the MIT-BIH arrhythmia database. IEEE Eng Med Biol Mag 20:45–50
[22] Plawiak P (2017) ECG signals (1000 fragments). Version 3
[23] Aniyan A (2020) ECG beat calculation. Retrieved from MATLAB Central File Exchange, 6 June 2020

Spectral Variation Based Method for Electrocardiographic Signals Compression

V. V. de Morais, P. X. de Oliveira, E. B. Kapisch, and A. J. Ferreira

Abstract

In this paper, a compression method for electrocardiographic (ECG) signals is proposed. The method is based on the spectral content variation of each cycle of the ECG signal, with the fast Fourier transform (FFT) used as the signal spectrum estimator. The proposed method allows the entire reconstruction of the ECG waveform from the compressed data, keeping all information present in the original signal that is relevant for further diagnostic analysis and processing. In order to show the credibility of the method, a visual inspection of the reconstructed signals was carried out by a cardiologist.

Keywords

Electrocardiographic signals • Data compression • Digital signal processing • Fast Fourier transform

1 Introduction

Cardiovascular diseases (CVDs) are the leading cause of death in the world. In the United States, 82% of people over the age of 80 have some CVD, as do 70% of people between 52 and 82, 37% between 40 and 60, and 11% between 30 and 45 [1].

V. V. de Morais (B) · P. X. de Oliveira
Department of Electronic and Biomedical Engineering, Campinas State University (UNICAMP), Av. Roxo Moreira, 1402, Campinas, Brazil
e-mail: [email protected]

E. B. Kapisch
Department of Electrical Engineering, Federal University of Juiz de Fora (UFJF), Juiz de Fora, Brazil

Among the cardiovascular system pathologies, arrhythmias stand out. They are characterized by causing changes in the heart's electrical activity, consequently modifying the sinus rhythm, and, depending on the type and severity, they can be lethal [2]. Arrhythmias must be treated as soon as they are detected; otherwise, they can develop into more serious problems. The treatment consists of changing habits in simpler cases and of clinical interventions, such as surgery, in more severe cases.

Arrhythmias are detected by monitoring electrocardiographic (ECG) signals using cardiac monitors, electrocardiographs, and Holter devices. The Holter is a device used to monitor and record the electrical activity of the heart, aiming at the detection of intermittent pathologies not found in short-duration tests. It captures and stores the patient's ECG signals, generally on an SD memory card, for measurement periods ranging from 24 to 48 h. However, data transfer is offline: all stored information is later downloaded to a computer. In addition, there are situations in which it is desirable to monitor the patient for longer periods, in order to detect cardiac pathologies that may take more than two days to manifest [3]. For this reason, depending on the storage duration, the device may run out of memory space and, if the data transmission rate is low, it may take hours for the captured data to be transmitted.

Therefore, data compression algorithms can assist in the transmission and storage of ECG signals, since the purpose of compression is precisely to reduce the file size so that smaller memory spaces and lower transmission rates can be used. In this work, a compression method based on the variation of the spectral content of the Fourier transform of each cycle of the ECG signal is proposed.

The paper is organized as follows. Section 2 describes the materials and methods of the proposed strategy. Section 3 shows some results and discussion. Finally, Sect. 4 states some conclusions.

A. J. Ferreira
Monte Sinai Hospital, Juiz de Fora, Brazil

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_263

Fig. 1 Flowcharts: a The compression system. b The proposed compression method

2 Materials and Methods

2.1 Proposed Compression System

The flowcharts of the compression system and of the proposed compression method are depicted in Fig. 1a, b, respectively. The compression system is divided into five stages, one of which is the proposed method. These stages are described in the following subsections.

2.1.1 Frequency Estimator

The frequency estimator provides the system with the fundamental frequency estimation, in hertz, for each cycle of the ECG signal. The frequency estimation is used in the signal interpolation/subsampling process and is based on the interval between the R waves of the input signal [4]. This method was chosen due to its low computational complexity.

2.1.2 Interpolation/Subsampling

The interpolation/subsampling process ensures that each cycle of the input signal has 128 samples, so that the fast Fourier transform (FFT) can be computed for each cycle in the next step. From the estimated frequency, the interpolator/subsampler identifies the number of samples in each cycle. If there are more than 128 samples, some are taken out (subsampling); if there are fewer, some are added (interpolation). This process is performed online, sample by sample. The interpolator/subsampler used in this work is based on Lagrange polynomial interpolation [5], chosen for its good performance and ease of real-time implementation.
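As a minimal stand-in for this stage, each cycle can be mapped onto a fixed 128-sample grid. The paper uses Lagrange polynomial interpolation; the sketch below substitutes plain linear interpolation for brevity, and the function name is our own:

```python
import numpy as np

def resample_cycle(cycle, n=128):
    """Map one ECG cycle onto exactly n samples.

    The paper interpolates/subsamples with Lagrange polynomials; linear
    interpolation stands in here as an illustrative simplification.
    """
    src = np.linspace(0.0, 1.0, num=len(cycle))   # original sample grid
    dst = np.linspace(0.0, 1.0, num=n)            # fixed 128-sample grid
    return np.interp(dst, src, cycle)
```

Whatever the cycle length estimated from the R-R interval, the output always has 128 samples, which is what makes the fixed-size FFT of the next stage possible.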

2.1.3 FFT Per Cycle

The FFT per cycle is calculated using a 128-sample window, resulting in a vector of 128 complex bins, each representing a harmonic component according to the FFT resolution. However, the FFT results are manipulated here by handling the real and imaginary parts of the complex bins separately. Thus, a 256-position vector named FFT_ARRAY receives the real part followed by the imaginary part of each bin, in such a way that each bin occupies two consecutive positions in the FFT_ARRAY vector.
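The packing just described can be sketched as follows (illustrative; the function name is assumed):

```python
import numpy as np

def fft_array(cycle):
    """128-point FFT of one cycle, packed as the 256-position FFT_ARRAY:
    the real and imaginary parts of each bin occupy consecutive positions."""
    bins = np.fft.fft(cycle)            # 128 complex bins
    out = np.empty(2 * len(bins))
    out[0::2] = bins.real               # even positions: real parts
    out[1::2] = bins.imag               # odd positions: imaginary parts
    return out
```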


2.1.4 Proposed Compression Method

The proposed compression method is based on the spectral content of each cycle of the input signal. Each bin of the spectrum, obtained from the FFT of each signal cycle, is compared sequentially, cycle by cycle, with the objective of storing only relevant information while still enabling a good signal reconstruction. The aim is thus to preserve, in the reconstructed signal, the cardiac pathologies that the patient may present in the original signal, while reducing the file size. The flowchart of the method is shown in Fig. 1b.

Initially, the FFT_ARRAY vector of the first cycle of the ECG signal is copied to the MAX, MIN, and SUM vectors. These three vectors have the same length as FFT_ARRAY, i.e., 256 positions, and store the maximum value, the minimum value, and the accumulated value of each position q over time, respectively. The CNT vector, also 256 positions long, receives a unit value in all its positions in the first cycle of the signal, concluding the initialization.

With the vectors initialized, the system waits for a new FFT_ARRAY vector, corresponding to the next ECG signal cycle. All samples of this vector are compared with the previously stored values in MIN and MAX. If the value of any position of the FFT_ARRAY is greater than the same position of the MAX vector, or smaller than the corresponding position of the MIN vector, MAX or MIN is updated. Thus, the MIN vector contains the minimum value observed for each FFT position q up to a given cycle and, similarly, the MAX vector contains the maximum value for each position q.

With the updated MIN and MAX vectors, the absolute value of the difference between them is compared with a threshold value γ(q) for each position q. If the absolute difference is less than γ(q) for a given position q, the system interprets that there is no relevant variation in that position, and the SUM and CNT vectors are updated for it: the FFT result is accumulated into the SUM vector and the CNT vector is incremented by one, both in the corresponding position q. This happens every time the threshold γ(q) is not exceeded. Thus, the SUM vector stores the accumulated value for all 256 positions q of the spectra, and the CNT vector stores the number of times the accumulation was repeated. In this way, the average value of each position q can be calculated by dividing SUM by CNT, position by position.

If the absolute difference between MAX and MIN for any position q is greater than γ(q), i.e., the threshold is exceeded, the system understands that there is a relevant variation in that position. It then saves the index q, the average value for that position (SUM/CNT), and the CNT value, forming a compression matrix named MC. After saving this information, the process is restarted at that position q of the FFT_ARRAY vector.

In most ECG signals, the amplitudes of the harmonic components decrease as the harmonic order increases.


Therefore, in order to make a fair threshold comparison, it is necessary to use a different γ(q) value for each FFT bin (real and imaginary parts), instead of a single value for all bins. The variation of the γ(q) values with respect to the indexes q follows a decreasing exponential distribution for positive frequencies and an increasing exponential distribution for negative frequencies. This variation is given by

γ(q) = C q^b,   (1)

where C is a gain factor, q is the vector index position at which the threshold comparison is made, and b < 1 is the exponent responsible for the exponential variation of the γ(q) values. In this work, several compression and reconstruction tests using different values of C and b were performed. The values that provided the best trade-off between compression and reconstruction quality were C = 0.03 and b = 0.1.
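The per-bin run-length logic of Fig. 1b can be sketched as below. This is an illustrative reading of the flowchart, not the authors' code: the handling of the restart branch (the exceeding value becomes the start of a new run) is our interpretation, and the final flush of unfinished runs is an added assumption:

```python
import numpy as np

def compress(fft_arrays, C=0.03, b=0.1):
    """Compress a sequence of cycle spectra into MC triplets (q, mean, CNT).

    fft_arrays: 2-D array with one FFT_ARRAY row (real/imag interleaved)
    per ECG cycle. gamma(q) = C * q**b is the per-bin tolerance of Eq. (1).
    """
    n_cycles, n_bins = fft_arrays.shape
    gam = C * np.arange(1, n_bins + 1, dtype=float) ** b   # tolerance gamma(q)
    mx = fft_arrays[0].copy(); mn = fft_arrays[0].copy()   # MAX, MIN
    sm = fft_arrays[0].copy(); cnt = np.ones(n_bins)       # SUM, CNT
    mc = []                                                # compression matrix MC
    for row in fft_arrays[1:]:
        mx = np.maximum(mx, row)
        mn = np.minimum(mn, row)
        exceeded = np.abs(mx - mn) > gam
        for q in np.flatnonzero(exceeded):                 # relevant variation:
            mc.append((int(q), sm[q] / cnt[q], int(cnt[q])))  # save (q, mean, CNT)
            mx[q] = mn[q] = sm[q] = row[q]                 # restart the run at this bin
            cnt[q] = 1
        keep = ~exceeded                                   # no relevant variation:
        sm[keep] += row[keep]                              # accumulate into SUM
        cnt[keep] += 1                                     # and increment CNT
    for q in range(n_bins):                                # flush the remaining runs
        mc.append((q, sm[q] / cnt[q], int(cnt[q])))
    return mc
```

Running this on the simplified example used in Sect. 2.2 (bin 2 constant at 9 over four cycles, bin 5 equal to 3, 3, 7, 7) yields exactly the MC triplets (2, 9, 4), (5, 3, 2), and (5, 7, 2).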

2.1.5 Lossless Compression

The last stage of the compression system consists of applying a lossless compression method to the MC matrix, to ensure that it is in the most compact form possible. In this stage, no more information is removed; only a coding process is performed, seeking to further reduce the storage space. The well-known LZW algorithm was chosen for this stage, as it offers high compression rates, and open-source implementations, such as 7-zip, have been available to developers since December 2008 [6]. After going through all stages of the compression system, a compressed file in .bin format becomes available to be stored or transmitted for future reconstruction.

2.2 Signal Reconstruction

To perform the signal reconstruction, first, a lossless decompression of the .bin file is performed using the LZW method, recovering MC. Then, the data stored in MC are manipulated in order to assemble a reconstruction matrix named MR, whose lines represent the frequency spectrum of each cycle of the signal to be reconstructed. For this manipulation, the position values q, the average values (SUM/CNT), and the CNT values are used, as shown in the following example. A simplified example of signal reconstruction is shown in Eqs. (2) and (3): (2) represents MC, and (3) represents the MR created from the MC in (2). The columns C0, C2, and C4 of MR represent the real values of the first three spectral components, respectively, and columns C1, C3, and C5 the imaginary values of the same components. The lines L0, L1, …, L3 correspond to the frequency spectra of the first four cycles of the signal in this example.


The compression matrix of the example is

    MC = [ q        2   5   5
           SUM/CNT  9   3   7
           CNT      4   2   2 ],                    (2)

and the reconstruction matrix built from it is

           C0  C1  C2  C3  C4  C5
    MR = L0 [ ·   ·   9   ·   ·   3 ]
         L1 [ ·   ·   9   ·   ·   3 ]               (3)
         L2 [ ·   ·   9   ·   ·   7 ]
         L3 [ ·   ·   9   ·   ·   7 ],

where "·" marks entries not covered by this simplified example. The construction of MR from MC is as follows. With the SUM and CNT values, the average value (SUM/CNT) of each position q is repeated CNT times. In the given example, for position q = 2, the average value SUM/CNT = 9 is repeated over CNT = 4 cycles, so the value 9 fills 4 rows of column C2 of the reconstruction matrix MR, from the top down. This step corresponds to the first column of MC. The manipulation of the second and third columns of MC results in the filling of column C5 of MR, where the average value SUM/CNT = 3 is repeated for 2 cycles, and then the value SUM/CNT = 7 is repeated for 2 more cycles, as shown in (3). In this way, MC is read by columns, from left to right, while MR is built from top to bottom. To reconstruct the ECG signal and put it back in its original form in the time domain, after building the entire MR, it is necessary to take the inverse Fourier transform of each line of the matrix and concatenate the results horizontally, cycle by cycle, to form the signal along the time axis.

2.3 Error Versus Compression Rate Analysis

In order to show the performance of the proposed method, ECG signals from the MIT-BIH arrhythmia database were used. This base is composed of 48 two-channel ambulatory ECGs, with a duration of 30 min each, sampled at 360 Hz per channel with 11 bits of resolution [7].

To evaluate the compression performance of the proposed system, the compression rate (CR) was used as a metric. It is calculated as the ratio between the size of the original uncompressed signal x and the size of the same signal in its compressed form x̂ [8]. This metric was calculated for each signal, and an average compression factor (CRm) was also computed, representing the mean CR over all signals in the database.

The reconstruction quality metric used in this work is the percentage root-mean-square difference, PRD(%):

PRD(%) = \frac{\lVert x - \hat{x} \rVert}{\lVert x \rVert} \times 100\%,   (4)

where x is the original signal in vector notation, x̂ is the reconstructed signal in vector notation, and ‖·‖ denotes the ℓ2 norm. As with the CR factor, the average value of the percentage root-mean-square differences, PRDm(%), was also calculated:

\mathrm{PRD}_m(\%) = \frac{1}{N}\sum_{i=1}^{N} \mathrm{PRD}_i(\%),   (5)

where N is the number of signals in the database. To assess the dispersion of both the reconstruction quality and the compression rate across all signals in the database, the standard error of the percentage root-mean-square difference, PRDse(%), and the standard error of the compression factors, CRse, were used, as defined by (6) and (7), respectively:

\mathrm{PRD}_{se}(\%) = \frac{\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(\mathrm{PRD}_i(\%) - \mathrm{PRD}_m(\%)\right)^2}}{\sqrt{N}},   (6)

\mathrm{CR}_{se} = \frac{\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(\mathrm{CR}_i - \mathrm{CR}_m\right)^2}}{\sqrt{N}}.   (7)

An important analysis is to evaluate how the variation of the gain parameter C of the tolerance function (1) impacts the variation of CR and PRD and, at the same time, to identify up to which value of C the reconstructed signal still preserves the characteristics of the original signal needed for reliable analyses. To do so, the value of C was varied from 0 to 0.3 with a step of 0.01 and, for each step, CRm, CRse, PRDm(%), and PRDse(%) were calculated using the 48 signals of the database. The results and discussion of this evaluation are presented in Sect. 3.1.
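Eqs. (4)–(7) translate directly into NumPy (an illustrative sketch; function names are our own):

```python
import numpy as np

def prd_percent(x, x_hat):
    """Eq. (4): percentage root-mean-square difference between x and x_hat."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    return 100.0 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)

def mean_and_stderr(values):
    """Eqs. (5)-(7): mean and standard error over the N database signals.

    Works for both the PRD(%) values and the CR factors, since (6) and (7)
    have the same form."""
    v = np.asarray(values, dtype=float)
    return v.mean(), v.std(ddof=1) / np.sqrt(len(v))
```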

2.4 Visual Inspection

The PRD(%) is a debated metric in the literature, because it does not guarantee that the reconstructed signal remains suitable for a specialist to diagnose the patient [8–10]. In other words, a low PRD(%) does not necessarily mean that the compression method has correctly preserved all of the shape information relevant to the patient's ECG pathologies. Conversely, a relatively high PRD(%) does not necessarily mean that a disturbance capable of misleading physicians has been added to the reconstructed signal, invalidating the diagnosis. For this reason, three different C values were chosen for analysis: a minimum value C = 0, a maximum C = 0.3, and an intermediate C = 0.03. For C = 0, all c(q) are null, resulting in a perfect reconstruction (null PRDm) with unitary CRm (no compression). For the maximum C = 0.3, CRm and PRDm are also maximum, possibly inserting distortions into the signal. For the intermediate value C = 0.03, a relatively high CRm is obtained together with a lower PRDm than in the previous case.

Also, a part of a signal from the database containing a premature ventricular contraction (PVC) was selected to analyze the performance of the method for the three values above, in order to find out whether the pathology can be reconstructed for all three values of C without sudden changes in its morphology. This analysis is shown in Sect. 3.2.

To increase the credibility of the proposed method and certify that it is capable of maintaining information related to at least one specific pathology, 17 signals were selected from the database, compressed using C = 0.03 (the value that best met the expectations), reconstructed, and inspected by a cardiologist. The cardiologist had no prior information on the pathologies in the signals. The arrhythmia chosen for this purpose was again PVC. This analysis was compared with the analysis of the corresponding original signals made by the specialists of the hospital where the MIT-BIH database was collected. In addition, for each signal, the number of PVCs present in the original signal that were not detected by our cardiologist (false negatives) was registered, as well as the number of PVCs that were detected but do not exist in the original signal (false positives). These results are also presented in Sect. 3.2.

Spectral Variation Based Method for Electrocardiographic …

3 Results and Discussion

3.1 Error Versus Compression Rate Results

The results of the evaluation described in Sect. 2.3 can be seen in Fig. 2. CRm increases with increasing C; the higher the C value, the lower the growth rate of CRm, which tends to stabilize. The same holds for PRDm. It is also noticed that the proposed method reaches high CRm values, while the PRD values remain slightly high even at low C values. An explanation is that the compression system stores only the relevant information, filtering out most of the high-frequency spectral components, which makes the reconstructed signal less noisy than the original one.

3.2 Visual Inspection Results

Figure 3 shows the original and reconstructed signals for the three different C values described in Sect. 2.4.

Fig. 2 CRm and PRDm curves with standard error related to the variation of the C value. Upper plot: CRm-related curves. Bottom plot: PRDm-related curves


Fig. 3 Original and reconstructed signals. Upper plot: C = 0 (perfect reconstruction). Middle plot: C = 0.3. Bottom plot: C = 0.03

In the upper plot, where C = 0, CR = 1 and PRD = 0 were obtained: the reconstruction is perfect, with no change in PVC morphology, as expected. In the middle plot, with C = 0.3, CR = 1221 and PRD = 27.8% were obtained; in this case, the arrhythmia of the original signal was clearly not preserved in the reconstructed signal, which is therefore unsuitable for diagnostic purposes. In the bottom plot, where C = 0.03, CR = 161 and PRD = 10.8% were obtained. Despite minor differences, the reconstructed signal preserves the pathology of the original signal and can be used for visual inspection by a cardiologist.
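For readers implementing the metrics, a minimal sketch of commonly used PRD(%) and CR definitions follows. The paper's exact expressions are given by its earlier equations, so treat this as an illustration of the standard forms, not as the authors' code:

```python
import numpy as np

def prd_percent(x, x_hat):
    """Percentage root-mean-square difference between original x and
    reconstruction x_hat. This is the common non-mean-removed form; the
    paper's own normalization may differ."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

def compression_ratio(original_bits, compressed_bits):
    """CR = size of the original representation / size of the compressed one."""
    return original_bits / compressed_bits

x = np.sin(2 * np.pi * np.arange(100) / 25.0)
# A perfect reconstruction gives PRD = 0, and equal sizes give CR = 1,
# consistent with the C = 0 case described above.
```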

V. V. de Morais et al.

Table 1 Visual inspection of reconstructed signals by a cardiologist, where x is the original signal with the corresponding number in the database, CR is the compression ratio, PRD(%) is the percentage root-mean-square difference, and x̂ is the reconstructed signal

x   | CR  | PRD (%) | Total PVCs in x | PVCs detected in x̂ (cardiologist) | False positive | False negative
100 |  83 | 19.40   |   1 |   1 | 0 |  0
102 |  28 | 25.21   |   4 |   3 | 0 |  1
103 | 386 | 14.63   |   0 |   0 | 0 |  0
109 | 111 | 10.22   |  38 |  34 | 0 |  4
116 |  79 | 17.91   | 109 | 109 | 0 |  0
118 |  42 | 17.40   |  16 |  16 | 0 |  0
121 | 251 | 18.18   |   1 |   1 | 0 |  0
123 | 107 | 23.55   |   3 |   3 | 0 |  0
124 | 147 | 15.57   |  47 |  42 | 0 |  5
202 |  82 | 20.57   |  19 |  21 | 2 |  0
205 |  84 | 15.15   |  71 |  71 | 0 |  0
209 |  25 | 18.90   |   1 |   2 | 1 |  0
215 |  36 | 18.27   | 164 | 160 | 0 |  4
219 |  25 | 19.75   |  64 |  50 | 0 | 14
230 |  47 | 18.94   |   1 |   1 | 0 |  0
231 | 112 | 22.60   |   2 |   1 | 0 |  1
234 | 302 | 12.11   |   3 |   3 | 0 |  0

The visual inspection of the reconstructed signals (Sect. 2.4), performed by a cardiologist, is summarized in Table 1. Considering that the original signals contain interference and artifacts that can impair detection, and that the diagnosis of the pathology under analysis also involves the expert's personal judgment, the table shows that the identification of PVCs in the reconstructed signals is satisfactory: the number of false positives and false negatives per signal is, in most cases, zero or very small. It is worth mentioning that reducing the parameter C can further improve signal quality, and thus may further reduce the number of false positives and negatives, at the cost of a lower CR. This evidence indicates that, at least for this specific pathology, the proposed compression method can be applied while keeping the reconstructed signals suitable for diagnostic purposes.
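To make the table's overall picture concrete, the per-record counts can be tallied. The script below transcribes the counts from Table 1; the aggregate sensitivity figure is our own arithmetic over those counts, not a number reported by the paper:

```python
# Rows: (record, total PVCs in x, PVCs detected in x_hat, false pos, false neg),
# transcribed from Table 1.
rows = [
    (100, 1, 1, 0, 0), (102, 4, 3, 0, 1), (103, 0, 0, 0, 0),
    (109, 38, 34, 0, 4), (116, 109, 109, 0, 0), (118, 16, 16, 0, 0),
    (121, 1, 1, 0, 0), (123, 3, 3, 0, 0), (124, 47, 42, 0, 5),
    (202, 19, 21, 2, 0), (205, 71, 71, 0, 0), (209, 1, 2, 1, 0),
    (215, 164, 160, 0, 4), (219, 64, 50, 0, 14), (230, 1, 1, 0, 0),
    (231, 2, 1, 0, 1), (234, 3, 3, 0, 0),
]

total = sum(r[1] for r in rows)   # PVCs annotated in the original signals
fn = sum(r[4] for r in rows)      # annotated PVCs missed by the cardiologist
fp = sum(r[3] for r in rows)      # detections with no matching annotation
sensitivity = (total - fn) / total  # fraction of annotated PVCs recovered
```

Over the 17 records this gives 544 annotated PVCs, 29 false negatives, and 3 false positives, i.e. roughly 95% of the annotated PVCs were still identifiable in the reconstructed signals.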

4 Conclusions

The present work describes a new method for ECG signal compression based on the variation of the spectral content of each cycle of the ECG signal. The method uses the FFT to estimate the spectrum and performs a cycle-by-cycle comparison of these estimates, resulting in signal compression. The inverse Fourier transform is used in the signal reconstruction process.

The proposed method was tested with signals containing PVCs and presented good compression ratio (CR) results and good signal reconstruction quality, maintaining the relevant information of this pathology; a visual inspection by a specialist was used to evaluate the reconstruction quality. Regarding the parameter C of the tolerance function, we conclude that both CR and PRD(%) depend on this parameter, and that a stabilization tendency is visible in this relation. Finally, the proposed method shows promising compression results, with CR values above 100, demonstrating its suitability for ECG signal storage systems.

Acknowledgements The authors of this work would like to thank the State University of Campinas (UNICAMP) and CAPES for supporting this research.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Singh RS, Saini BS, Sunkaria RK (2019) Arrhythmia detection based on time–frequency features of heart rate variability and back-propagation neural network. Iran J Comput Sci 2(4):245–257. https://doi.org/10.1007/s42044-019-00042-1
2. Roberts-Thomson KC, Lau DH, Sanders P (2011) The diagnosis and management of ventricular arrhythmias. Nat Rev Cardiol 8(6):311. https://doi.org/10.1038/nrcardio.2011.15
3. Page RL, Wilkinson WE, Clair WK, McCarthy EA, Pritchett EL (1994) Asymptomatic arrhythmias in patients with symptomatic paroxysmal atrial fibrillation and paroxysmal supraventricular tachycardia. Circulation 89(1):224–227. https://doi.org/10.1161/01.CIR.89.1.224
4. Pan J, Tompkins WJ (1985) A real-time QRS detection algorithm. IEEE Trans Biomed Eng 32(3):230–236. https://doi.org/10.1109/TBME.1985.325532
5. Moler CB (2008) Numerical computing with MATLAB: revised reprint, vol 87. SIAM
6. Ziv J, Lempel A (1977) A universal algorithm for sequential data compression. IEEE Trans Inf Theory 23(3):337–343. https://doi.org/10.1109/TIT.1977.1055714
7. Goldberger AL, Amaral LA, Glass L et al (2000) PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101(23):e215–e220. https://doi.org/10.1161/01.CIR.101.23.e215
8. Nave G, Cohen A (1993) ECG compression using long-term prediction. IEEE Trans Biomed Eng 40(9):877–885. https://doi.org/10.1109/10.245608
9. Blanco-Velasco M, Cruz-Roldán F, Godino-Llorente JI, Blanco-Velasco J, Armiens-Aparicio C, López-Ferreras F (2005) On the use of PRD and CR parameters for ECG compression. Med Eng Phys 27(9):798–802. https://doi.org/10.1016/j.medengphy.2005.02.007
10. Němcová A, Smíšek R, Maršánová L, Smital L, Vítek M (2018) A comparative analysis of methods for evaluation of ECG signal quality after compression. BioMed Res Int. https://doi.org/10.1155/2018/1868519

Centre of Pressure Displacements in Transtibial Amputees
D. C. Toloza and L. A. Luengas

Abstract

In people with transtibial amputation, stability is altered as a consequence of the removed body structures (bone, muscle tissue, and somatosensory pathways), and this causes transtibial amputees to suffer a high number of falls. It is necessary to know the static stability of this type of amputee and its corresponding indicators, since stability directly influences postural control and the performance of various activities, including locomotion, and therefore rehabilitation. Information on stability in transtibial amputees is insufficient, and for amputees injured by landmines it is nonexistent: the effect of this type of amputation on the center of pressure (COP) is unknown. A cross-sectional observational study was carried out in transtibial amputees due to landmine trauma, together with a group of people without amputation; the COP was recorded in both groups to assess the upright position using the Romberg test. Data recorded from amputee subjects differed significantly from non-amputee subjects: the former showed an increase in stability-related biomechanical parameters, demonstrating that they use compensation strategies continuously and more frequently to preserve stability when standing.

Keywords Transtibial amputation · Postural stability · Center of pressure · Biomechanics · Romberg test

D. C. Toloza (&) Biomedical Engineering, Universidad Manuela Beltrán, Calle 33 No. 27-23, Bucaramanga, Colombia e-mail: [email protected] L. A. Luengas Tecnología en Electrónica, Universidad Distrital Francisco José de Caldas, Bogotá, Colombia

1 Introduction

When amputation of body segments is performed, asymmetry in the amputee's musculoskeletal structures becomes evident, affecting the center of mass and therefore the center of gravity and the center of pressure (COP). An amputation below the knee is called a transtibial amputation: the foot and part of the leg are removed, which reduces bone and muscle tissue and eliminates somatosensory information, affecting the proprioceptive system. This type of amputee lacks an anatomical ankle and all the sensorimotor structures of the amputated section; for this reason their stability is low and they tend to suffer more falls than non-amputees. Amputees use proprioceptive information from the contralateral segment to orient the center of mass and determine an appropriate standing position with low energy expenditure. Postural asymmetry causes the COP to deviate from the central region of the support base; thus, amputation changes stability with respect to non-amputees. Because they lack a biological ankle, amputees cannot maintain postural stability with an ankle strategy on the prosthetic side and must rely on the remaining structures and a modified posture, with a greater knee and hip component. Information on stability in transtibial amputees is insufficient, and for amputees injured by landmines it is nonexistent; the effect of this type of amputation on the COP is unknown [1].
The panorama Colombia faces in the post-conflict period makes this need pressing: in the last decade, more than 11 thousand people have been affected by landmines [2]. Although the COVID-19 pandemic imposed restrictions on people's mobility, it has not stopped the devastating effect of landmines; according to the International Committee of the Red Cross (ICRC), 100 people were victims of these devices during 2020 [3]. It is therefore necessary to establish health strategies that improve the inclusion of all amputees in the social, work, and family environment. The majority of

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_264


studies have recruited patients whose amputation was of traumatic (accident) or vascular origin, but none with amputation caused by landmines. Some research has demonstrated differences in postural control depending on the cause of amputation [4, 5]. The use of prostheses as a rehabilitation method is essential, and providing standing stability is a fundamental requirement of person-prosthesis adaptation for continuous use of the prosthetic system. It is necessary to know the postural stability conditions of this type of amputee; a study in the Colombian context is therefore needed to analyze transtibial amputees according to subject characteristics, amputation cause, prosthesis type, and age, among other factors [6]. Therefore, a study was carried out with the general objective of evaluating indicators of static stability in the standing position in subjects with transtibial amputation due to landmine trauma. The impact of transtibial amputation on stability is described through linear and non-linear analyses with the discrete wavelet transform (DWT).

2 Methods

2.1 Subjects

Two groups of nine adult subjects each were formed: a non-amputated control group and a group with transtibial amputation. All participants signed an informed consent document. The study was approved by the Bioethics Committee of Universidad Distrital Francisco José de Caldas, Bogotá (Colombia). The non-amputee group had an average age of 44.32 ± 12.7 years, weight of 69.44 ± 14.99 kg, and height of 167.44 ± 8.45 cm. All were physically active subjects who did not report any musculoskeletal or neurological disorders; they were selected to have anthropometric characteristics similar to the amputee group. The amputee group consisted of unilateral transtibial male amputees, with amputation due to anti-personnel mine trauma, who had used prostheses regularly for at least one year, without musculoskeletal, neurological, sensory, cognitive, or articular alterations or skin lesions. The average age in this group was 32.35 ± 3.2 years, weight 78.25 ± 6.5 kg, and height 176 ± 2.7 cm. The prosthesis used is an Ottobock® system with an internal socket, bolt suspension, and a multi-axial carbon fiber foot (Fig. 1); the mass of the prosthesis is approximately 3.5 kg [7], close to the anthropometric values of a population group with characteristics similar to the one examined.

Fig. 1 Type of prosthesis used by amputees

2.2 Experimental Design

The COP was recorded with the Pedar® system from the company Novel. The system uses instrumented insoles to measure the force arising from the interaction between the foot and a support surface, and it also returns the distribution of this force and the position of the COP. The insoles contain 198 capacitive sensors (99 per insole), and a computer application displays the recorded data. Pressure and COP were recorded with a sampling frequency of 50 Hz. The variables investigated were the COP position in the mediolateral (ML) and anteroposterior (AP) axes, from which the excursion, the velocity, the RMS value (root mean square of path length), the mean amplitude, and the minimum and maximum amplitude were calculated for each group. The protocol placed the Pedar® insoles on a hard, stable surface to control the location of the feet and the anatomical position of the subjects during data capture, following the first condition of the Romberg test [6]. The COP was recorded for 60 s, and the recording was repeated 3 times.
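The posturographic indicators listed above can be computed from a single-axis COP trace. The NumPy sketch below uses common definitions of these quantities (the paper's exact formulas may differ), with the 50 Hz sampling frequency from the study:

```python
import numpy as np

FS = 50.0  # Hz, Pedar sampling frequency used in the study

def cop_parameters(cop):
    """Posturographic indicators for one COP axis (ML or AP), in mm.
    These are the usual textbook definitions, given here as a sketch."""
    cop = np.asarray(cop, dtype=float)
    centered = cop - cop.mean()                # amplitudes about the mean position
    excursion = np.sum(np.abs(np.diff(cop)))   # total path length along this axis
    velocity = excursion / (cop.size / FS)     # mean velocity over the trial
    return {
        "excursion": excursion,
        "velocity": velocity,
        "rms": np.sqrt(np.mean(centered ** 2)),
        "mean_amplitude": np.mean(np.abs(centered)),
        "max_amplitude": centered.max(),
        "min_amplitude": centered.min(),
    }
```

Applying this to each 60 s recording of each limb and axis yields the kind of per-group values reported in Table 1 of this paper.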

2.3 Analysis Plan

All variables were calculated for both groups, for each limb, and in each axis of movement (ML and AP). For the amputee group, the variables were calculated for the amputated and non-amputated sides (right and left, respectively). The removal of unwanted


data was carried out in two steps: first, a Butterworth low-pass filter with a cutoff frequency of 10 Hz was applied [8]; second, the data were normalized by subtracting the mean, and the measurements were calculated with their respective standard errors for the amputee and control groups [9]. After this stage, we proceeded to the linear and non-linear analysis with the discrete wavelet transform (DWT). The Mann–Whitney U statistic was used to compare the two groups. With the DWT, the COP signal was decomposed into nine levels, or frequency bands, using the Haar mother wavelet. For this analysis the data were not filtered, since the non-linear results can be biased by that procedure.
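The preprocessing chain described above (10 Hz Butterworth low-pass, then a Haar decomposition into nine bands, D1–D8 plus the approximation A8, with per-band energy) can be sketched as follows. The filter order is an assumption (the paper does not state it), and the Haar transform is implemented directly from its averaging/differencing scheme rather than with a wavelet library:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_10hz(x, fs=50.0, order=4):
    """Zero-phase Butterworth low-pass at 10 Hz (filter order assumed here)."""
    b, a = butter(order, 10.0, btype="low", fs=fs)
    return filtfilt(b, a, x)

def haar_band_energy(x, levels=8):
    """Relative energy (%) per Haar-DWT band D1..D8 plus approximation A8.
    Depth 8 yields nine bands; band Dk spans fs/2**(k+1) .. fs/2**k Hz."""
    x = np.asarray(x, dtype=float)
    x = x[: 2 ** levels * (x.size // 2 ** levels)]  # multiple of 2**levels
    energies = {}
    a = x
    for k in range(1, levels + 1):
        pairs = a.reshape(-1, 2)
        d = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)  # detail coefficients Dk
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)  # next-level approximation
        energies[f"D{k}"] = np.sum(d ** 2)
    energies[f"A{levels}"] = np.sum(a ** 2)
    total = sum(energies.values())
    return {band: 100.0 * e / total for band, e in energies.items()}
```

By Parseval's relation the band energies sum to the (truncated) signal energy, so the relative energies sum to 100%; group comparisons of such indicators could then use, e.g., `scipy.stats.mannwhitneyu`.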

3 Results

Amputees presented higher values than non-amputees in all calculated variables, with statistically significant differences (p < 0.05) in all indicators (Table 1). Particularly large differences in the amputee group were evident in the maximum amplitude, excursion, mean, and RMS value along the AP axis of the non-amputated segment. The comparison between groups indicates that their behavior is significantly different: all displacements are greater in amputees than in non-amputees, regardless of direction. A statistically significant difference along the anteroposterior axis was highlighted for the amputee group with respect to non-amputees; the

Fig. 2 Comparison of displacement between groups* [*Displacement diagram of COP presented by each of the extremities, in AP and ML directions, and in each group studied. DA: anterior displacement; PD: posterior displacement; DM: medial displacement; DL: lateral displacement; D: right side (ipsilateral for amputees); I: left side (contralateral for amputees)]

former was less able to maintain stability with low displacement (Fig. 2). Regarding the DWT analysis, the energy percentage in the AP and ML axes was obtained for each foot and group; Fig. 3 compares the energy percentages obtained in the AP axis for the non-amputated side of the study group and the left side of the control group, the body segment where the greatest differences were found.

Table 1 Posturographic parameters

Parameter           | Amputees ML (LL/RL) | Amputees AP (LL/RL) | Non-amputees ML (LL/RL) | Non-amputees AP (LL/RL)
Excursion (mm)      | 68.893/214.944      | 1134.301/436.279    | 30.615/43.353           | 152.105/246.554
Velocity (mm/s)     | 0.909/1.393         | 5.378/2.204         | 0.179/0.187             | 0.766/0.980
RMS (mm)            | 0.186/0.571         | 3.049/1.126         | 0.079/0.113             | 0.416/0.666
Mean amplitude (mm) | 0.152/0.473         | 2.498/0.961         | 0.067/0.095             | 0.335/0.543
Max amplitude (mm)  | 0.408/1.066         | 6.286/2.120         | 0.140/0.228             | 0.732/1.158
Min amplitude (mm)  | −0.421/−1.145       | −5.719/−2.122       | −0.175/−0.191           | −0.982/−1.353

ML Medial–lateral; AP Antero-posterior; LL Left leg; RL Right leg


Fig. 3 Energy concentration percentage of COP-AP left

For the short time scales (D1–D3) the percentage of energy content was low, but it increases gradually over the moderate scales (D4–D6) and stabilizes over the long scales (D7–A8). For both amputees and non-amputees, the energy was mainly concentrated from level D6 through the approximation A8, suggesting a frequency band between 0 and 0.781 Hz. The largest differences between the amputee and non-amputee groups were observed in band D6, which corresponds to 0.391–0.781 Hz, and in A8, 0–0.098 Hz (Fig. 3).
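The band edges quoted above follow directly from the dyadic splitting of the 0–25 Hz range at the 50 Hz sampling frequency; a small sketch (not from the paper):

```python
FS = 50.0  # Hz, sampling frequency used in the study

def band_edges(levels=8):
    """Frequency span of each dyadic band for a depth-`levels` decomposition:
    detail Dk covers FS/2**(k+1) .. FS/2**k Hz, and the approximation
    A<levels> covers 0 .. FS/2**(levels+1) Hz."""
    bands = {f"D{k}": (FS / 2 ** (k + 1), FS / 2 ** k)
             for k in range(1, levels + 1)}
    bands[f"A{levels}"] = (0.0, FS / 2 ** (levels + 1))
    return bands

edges = band_edges()
# edges["D6"] -> (0.390625, 0.78125) Hz and edges["A8"] -> (0.0, 0.09765625) Hz,
# matching the 0.391-0.781 Hz and 0-0.098 Hz figures quoted in the text.
```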

4 Discussion

Investigations into the stability of transtibial amputees have had two main axes: identification of the motor control strategy used by amputees in the static standing anatomical position, and the structure of the strategy used to recover stability when a disturbance occurs. However, the behavior of the COP and its indicators in amputees had not been studied. The few studies on stability have recruited groups of amputees with accident trauma or vascular complications, never amputation by landmine [10]. The COP is considered the most important quantitative parameter for evaluating postural stability under either static or dynamic conditions. In the analysis of the COP in the standing position, the scientific literature is clear that amputees show a greater range of COP movement, greater velocity, and greater acceleration, among other parameters, than non-amputated groups. In this study, the comparison between the two groups evidenced significant differences between the medians of the stability indices: excursion, velocity, RMS

value, mean, maximum amplitude, and minimum amplitude, with the amputees generally presenting high values on the non-amputated side (left side). This denotes an association between altered stability and the amputation state, in agreement with Kendell et al. [11]. The results may indicate that the amputee group has a less stable bipedal posture, since stability involves all the active sensory systems (visual, vestibular, and proprioceptive) even when some are altered or removed. The COP analysis also suggests that the amputee group presents high oscillatory variation in the static standing position, which can lead to increased instability and loss of balance. The velocity of the amputee group was largest in the anteroposterior direction, indicating that amputees use an ankle strategy to try to maintain stability: with this tactic, the body rotates around the ankle and produces forces in the anteroposterior direction. Likewise, using a prosthesis disturbs the overall COP velocity; one cause may be increased movement of the contralateral ankle to maintain stability. Regarding the frequency analysis with the DWT, the energy content was centered in the range 0–0.781 Hz, confirming that the COP signal is a low-frequency signal, as proposed in [12–15]. In both mediolateral and anteroposterior displacements, and for both extremities, the shorter time scales (D1–D3) carry a low percentage of energy, which increases gradually over the moderate (D4–D6) and long (D7–A8) scales, in agreement with Kirchner et al. [16], who investigated healthy young subjects, and Chagdes et al. [17], who studied healthy adults and young people using the DWT to examine COP behavior.
The gradual variation of energy across the different time scales suggests the joint use of the active elements of the sensory system to control stability in the AP and ML directions of displacement; that is, postural control adapts to variations in the environment through sensory reweighting [18]. According to the results (Fig. 3), the amputee group showed its greatest impact in the D6 band (0.391–0.781 Hz), a range consistent with the literature [18, 19], which links the 0.5–1 Hz band to proprioceptive activity.

5 Conclusions

When comparing the stability parameters between the extremities, the asymmetry between the non-amputated limb and the prosthetic limb was evident. When examining

movement in the AP direction, the parameter values are higher (that is, more unstable) on the contralateral side, indicating that stabilization strategies such as the ankle strategy are used in the non-amputated leg, affecting this axis. In the ML axis, the values increase on the ipsilateral side, showing greater instability in the amputated limb and indicating the use of a hip strategy on that side. The data-adaptive spectral analysis method was successfully demonstrated for the analysis of temporal changes in the components of the COP. The balance pattern of the COP in the anteroposterior axis of the amputees differs markedly from that of non-amputees and could be used as a predictor of adequate movement of amputees in the static standing position. Techniques that accurately assess postural stability can help in designing and developing rehabilitation methods and prosthetic alignment protocols.

Conflict of Interest The authors declare that there is no conflict of interest regarding the publication of this article.

References

1. Arifin N, Abu N, Ali S, Gholizadeh H, Wan W (2015) Evaluation of postural steadiness in below-knee amputees when wearing different prosthetic feet during various sensory conditions using the Biodex® stability system. Proc Inst Mech Eng Part H J Eng Med 229(7):491–498
2. Dirección Contra Minas (2020) Víctimas de Minas Antipersonal y Municiones sin Explosionar. Presidencia de la República de Colombia, p 8
3. Sarralde M (2020) Minas antipersonal y explosivos: balance del CICR sobre víctimas civiles y de fuerza pública en 2020. EL TIEMPO at https://www.eltiempo.com
4. Hendershot B, Nussbaum M (2013) Persons with lower-limb amputation have impaired trunk postural control while maintaining seated balance. Gait & Posture 38(3):438–442
5. Ku P, Abu N, Wan A (2014) Balance control in lower extremity amputees during quiet standing: a systematic review. Gait & Posture 39(2):672–682
6. Luengas L, Toloza D (2020) Frequency and spectral power density analysis in the pressure center of amputees subjects. TecnoLógicas 23(48):1–16
7. Luengas L, Camargo E, Guardiola D (2018) Modeling and simulation of prosthetic gait using a 3-D model of transtibial prosthesis. Rev Ciencias la Salud 16(1):82–100
8. Kolarova B, Janura M, Svoboda Z, Elfmark M (2013) Limits of stability in persons with transtibial amputation with respect to prosthetic alignment alterations. Arch Phys Med Rehabil 94(11):2234–2240
9. Clark R, Howells B, Pua Y, Feller J, Whitehead T, Webster K (2014) Assessment of standing balance deficits in people who have undergone anterior cruciate ligament reconstruction using traditional and modern analysis methods. J Biomech 47(5):1134–1137
10. Molero-Sánchez A, Molina-Rueda F, Alguacil-Diego I, Cano-de la Cuerda R, Miangolarra-Page J (2015) Comparison of stability limits in men with traumatic transtibial amputation and a nonamputee control group. PM&R 7(2):123–129
11. Kendell C, Lemaire E, Dudek N, Kofman J (2010) Indicators of dynamic stability in transtibial prosthesis users. Gait & Posture 31(3):375–379
12. Winter D (2009) Biomechanics and motor control of human movement, 4th edn. Wiley, New Jersey
13. Carpenter M, Frank J, Winter D, Peysar G (2001) Sampling duration effects on centre of pressure summary measures. Gait & Posture 13(1):35–40
14. Luengas L, Toloza D (2019) Análisis de estabilidad en amputados transtibiales unilaterales. UD Editorial, Bogotá, p 170
15. Vieira T, De Oliveira L, Nadal J (2009) Estimation procedures affect the center of pressure frequency analysis. Braz J Med Biol Res 42(7):665–673
16. Kirchner M, Schubert P, Schmidtbleicher D, Haas C (2012) Evaluation of the temporal structure of postural sway fluctuations based on a comprehensive set of analysis tools. Phys A 391(20):4692–4703
17. Chagdes J, Rietdyk S, Haddad J, Zelaznik H, Raman A, Rhea C, Silver T (2009) Multiple timescales in postural dynamics associated with vision and a secondary task are revealed by wavelet analysis. Exp Brain Res 197(3):297–310
18. Maurer C, Mergner T, Peterka R (2006) Multisensory control of human upright stance. Exp Brain Res 171(2):231–250
19. Loughlin P, Redfern M (2001) Spectral characteristics of visually induced postural sway in healthy elderly and healthy young subjects. IEEE Trans Neural Syst Rehabil Eng 9(1):24–30

Lower Limb Frequency Response Function on Standard Maximum Vertical Jump
C. Rodrigues, M. Correia, J. Abrantes, M. A. B. Rodrigues and J. Nadal

Abstract

This study presents and applies in vivo lower limb frequency response analysis during standard maximum vertical jump (MVJ) with long and short counter movement (CM) and the corresponding muscle stretch shortening cycle (SSC), for comparison with the condition without CM and SSC. The study makes use of an algebraic relation in the frequency domain to obtain the response function from the input and output signals. A single-input/single-output (SI/SO) constant parameter linear system (CPLS) was applied with vertical ground reaction force (GRFz) as input and center of gravity (CG) vertical displacement (z) as output, obtaining the lower limb frequency response function during the MVJ impulse phase. Piecewise linearity and the limited input-output range of the experimentally acquired GRFz and CG z during the MVJ impulse phase were assessed to confirm the assumptions for CPLS application. Piecewise stationarity of the input and output signals was ensured by acquiring the signals of each MVJ type under similar conditions, guaranteeing experimental repetitions under statistically similar conditions for each CM. The different CM conditions of each MVJ type were compared with respect to maximum vertical height, time period of the impulse phase, fundamental harmonic frequencies, convergence of the GRFz input and CG z output Fourier series, their autospectral and cross-spectral densities, as well as input-output coherence, cross-spectrum gain factor, and phase of the frequency response function. Several differences were detected among CM conditions, potentially contributing to explain the differences in performance achieved for each CM and SSC.

Keywords Frequency response function • Lower limb

C. Rodrigues (B) INESC TEC, Centre for Biomedical Engineering Research, Porto, Portugal e-mail: [email protected]
M. Correia Department of Electrical and Computer Engineering, FEUP, Porto, Portugal
J. Abrantes ULHT, MovLab Interactions and Interfaces Lab, Lisbon, Portugal
M. A. B. Rodrigues Department of Electronic and Systems, UFPE, Recife, Brazil
J. Nadal UFRJ Biomedical Engineering Program, Rio de Janeiro, Brazil

1 Introduction

Although isolated isometric, concentric, and eccentric forms of muscle action have attracted greater research effort, the natural form of muscle action frequently involves the muscle stretch shortening cycle (SSC), in which concentric action is immediately preceded by eccentric action, potentiating more efficient concentric action in walking and more powerful concentric action in running and jumping [1]. Notwithstanding the importance of studying isolated muscle action, the in vivo study of natural human muscle action plays a key role, owing to the specific behavior of muscles under natural movement conditions, the complexity of multi-joint synergy, and movement specificity. Although muscle SSC can be registered in gait and running, it is strongly expressed in more controlled tests such as the maximum vertical jump (MVJ) with long countermovement (CM) in the counter movement jump (CMJ) and short CM in the drop jump (DJ), compared with the squat jump (SJ) without CM. During in vivo natural movement such as CM, the lower limb predominantly performs as a complex system, with multiple-muscle joint action, multi-joint muscle action, muscle-tendon coupling, and inter-joint synergy. Time-domain studies dominate this area, and the response function of the entire lower limb in the frequency domain (FRF) remains an open issue.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_265


Large research efforts have been devoted to the effect of small-amplitude oscillations on the human body, with Rasmussen [2], Goel [3], and Mester [4] presenting vast compilations of studies and results on individual anatomical segments, using natural oscillation frequencies, transmissibility, and mechanical impedance metrics. The major limitations are the application of ideal models with concentrated elastic, damping, and inertial parameters to account for nonlinear behaviour, as well as the need for new models able to account for dynamic deformations under higher-amplitude oscillations with fewer assumptions on model features. One category of such models consists of black-box models, with fewer assumptions on model features and the ability to estimate the system impulse response function from the time profiles of the system input and output signals. In mechanical systems analysis, the single-input/single-output (SI/SO) spectral relation is a classical approach, with well-founded methods and tools for applications ranging from structure development to test and analysis. Real examples of black-box SI/SO mechanical system spectral analysis include the acceleration input-acceleration output cantilever beam vibration experiment [5], the vertical gust velocity input-vertical CG acceleration output of an airplane flying through atmospheric turbulence, and the automobile acceleration input-acceleration output of the chest of a simulated front-seat passenger during an automobile collision [6]. The advantages of SI/SO spectral analysis result from the ability to use the input and output signals to determine system behavior without assumptions on its composition and configuration. Decomposition of complex input and output signals into their components at multiple frequencies allows the extraction of relevant signal features and, by the principle of superposition, the interpretation of the relation between input and output components at each frequency independently of the relation at the other frequencies [7].
A further advantage of SI/SO spectral relationship models results from the transformation of complex differential movement equations, with successive derivatives in the time domain, into simple algebraic equations in the frequency domain, with the possibility of directly calculating any one of the input, output, and response functions from knowledge of the other two. This is particularly important for the study of the behavior of complex systems, such as human body movement. The presented metrics [5,6] correspond to the input and output autospectral density functions, representing, for stationary records, the rate of change of the mean square values with frequency, and the cross-spectral density function, determining the relation between the input and output signals as a function of frequency. The coherence function, with values ranging from 0 to 1, is also applied, corresponding for linear systems to the fractional portion of the mean square value of the output that is contributed by the input at frequency f. In addition, the gain factor based on the autospectra of the input and the output is applied, as well as the gain factor based on the input autospectrum and the cross-spectrum between the input and the output. The phase factor is also used, with its algebraic sign determining whether the input or the output signal leads and its magnitude determining the phase difference. The specific research question focuses on the application of these metrics to detect differences in the lower limb frequency response function at long and short CM that could help explain the different performances achieved in CMJ, DJ, and SJ. For this purpose, the selected approach corresponds to a single-input/single-output (SI/SO) one degree-of-freedom (dof) model, with vertical ground reaction force (GRFz) input and body center of gravity (CG) vertical displacement (z) output, analyzed in the frequency domain, as well as the frequency response function of the SI/SO model, with GRFz model input and z model output, under different CM conditions.

2 Materials and Methods

2.1 Trial Tests

A small group of n = 6 male volunteer sports degree students was selected at Maia Higher Education Institute, with no previous injuries or specific training. The research project was approved by the local ethics committee (protocol CAE200903), procedures were in accordance with the Helsinki Declaration, and volunteers signed written informed consent. Subjects aged (21.5 ± 1.4) years weighed (76.7 ± 9.3) kg, with measured height (1.79 ± 0.06) m. Volunteers were instructed on the MVJ protocol: impulse phase starting from a static squat position with 90° knee flexion for SJ; from orthostatic position with a downward long CM inverting direction at 90° knee flexion to upward movement for CMJ; and a depth jump from a 40 cm step height inverting the downward movement to upward with a short and fast CM for DJ [8]. All tests were performed akimbo, with the hands on the hips and the elbows pointing outward to limit upper limb influence during the impulse phase, leaving the lower limbs as the exclusive source of vertical impulse. Each subject performed a total of 9 MVJ, in this order: 3 SJ, 3 CMJ, and 3 DJ repetitions, with 2-3 min rest between trials. During the MVJ impulse phase, ground reaction contact forces were acquired with an AMTI BP24164000CE multiaxis force platform at 1000 Hz, coupled to a Mini Amp MSA-6 amplifier with an ADI-32 interface to an A/D board DT3002, and Simi Motion 3D version 7.5.280 (Simi Reality Motion Systems GmbH, Germany).


2.2 Input-Output Signals

The adopted model corresponds to a SI/SO one degree-of-freedom (dof) model, with vertical ground reaction force (GRFz) input and body center of gravity (CG) vertical displacement (z) output, as presented in Fig. 1. The vertical ground reaction force component (GRFz) during the impulse phase (t < 0) before take-off (t = 0 s) on SJ, CMJ, and DJ, in Fig. 2a-c, was selected as the input (x) according to its higher intensity in relation to the anteroposterior and mediolateral force components, along with its role in developing the output (y), the CG vertical displacement (z). The best SJ, CMJ, and DJ trials were selected for each subject according to MVJ height (ht) assessed by maximum flight time, corresponding to null GRFz during the flight phase. The vertical resultant force RFz = GRFz − Fg was obtained with Fg = GRFz during the support phase. The vertical acceleration az = RFz/m was obtained with m = Fg/g each subject's mass and g = 9.81 m/s² the local gravitational acceleration. The vertical velocity vz = ∫ az dt was obtained from az time integration during the impulse phase, starting from zero initial vertical velocity. The CG vertical displacement z = ∫ vz dt was obtained from vz time integration during the impulse phase with no loss of useful information, as presented in Fig. 2d-f.
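The derivation above can be sketched numerically in Python (the function name and the simple rectangle-rule integration are assumptions for illustration, not the paper's MATLAB code):

```python
import numpy as np

def cg_displacement_from_grfz(grfz, fs, g=9.81):
    """Sketch of the paper's input-output derivation (assumed helper name):
    from vertical ground reaction force GRFz(t) sampled at fs during the
    impulse phase, recover CG vertical velocity and displacement by time
    integration, starting from zero initial velocity."""
    grfz = np.asarray(grfz, dtype=float)
    dt = 1.0 / fs
    fg = grfz[0]             # body weight, taken as GRFz during quiet support
    m = fg / g               # subject mass, m = Fg / g
    az = (grfz - fg) / m     # vertical acceleration, az = RFz / m
    vz = np.cumsum(az) * dt  # vz = ∫ az dt (rectangle rule)
    z = np.cumsum(vz) * dt   # z = ∫ vz dt
    return az, vz, z
```

With a constant GRFz equal to body weight, the resultant force is zero and both velocity and displacement stay at zero, which is a quick sanity check of the sign conventions.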

2.3 System Dynamics

The underlying assumption on the system dynamic characteristics is the applicability, over a limited input range, of a constant-parameter linear system (CPLS) described by an impulse response function h(τ), defined as the output of the system at time instant t to a unit impulse input applied τ time before, with the additional physically realizable (causal) condition of response to past inputs only, h(τ) = 0 for τ < 0. Under these conditions, the system output y(t) for an arbitrary input x(t) is given by the convolution integral.

y(t) = ∫−∞^∞ h(τ) x(t − τ) dτ    (1)

Fig. 1 Human body model with vertical force (GRFz) input and body center of gravity (CG) vertical displacement (z) output

For a physically realizable and stable CPLS, the dynamics of the system can be described by a frequency response function H(f) defined as the Fourier transform of h(τ). Taking the Fourier transform on both sides of (1), with X(f) and Y(f) the Fourier transforms of the input x(t) and output y(t), an important relation arises for the CPLS, transforming the convolution integral into an algebraic expression in terms of the frequency response function H(f), X(f), and Y(f): Y(f) = H(f) X(f) [6]. The frequency response function can be expressed in terms of its magnitude |H(f)|, the system gain factor, and associated phase angle φ(f), the system phase factor, H(f) = |H(f)| e^(−jφ(f)). The frequency response function has a direct physical interpretation: a system subjected to an input x(t), decomposed by the Fourier transform X(f) into sinusoidal inputs at each frequency f, produces an output Y(f) at each same frequency f, with the ratio of the output magnitude |Y(f)| to the input magnitude |X(f)| determined by the gain factor |H(f)| of the system, and the phase shift between the input X(f) and the output Y(f) determined by the phase factor φ(f) of the system. In order to eliminate delta functions at the origin f = 0 Hz of the Fourier spectrum, the corresponding mean values were subtracted from the input x(t) and output y(t) signals. The autospectral density functions Sxx(f) and Syy(f), as well as the cross-spectral density Sxy(f) of the input x(t) and output y(t) signals, were obtained from the Fourier transforms of the autocorrelations Rxx(t), Ryy(t) and cross-correlation Rxy(t). The input/output autospectrum relation Syy(f) = |H(f)|² Sxx(f) and the cross-spectrum relation Sxy(f) = H(f) Sxx(f) were used to determine the gain factor |H(f)| as well as the phase factor φ(f), with the coherence between the input and the output determined by relation (2).

γ²xy(f) = |Sxy(f)|² / (Sxx(f) Syy(f))    (2)

2.4 Signal Analysis

Although the selected input GRFz(t) and output z(t) correspond to nonstationary data with time-varying mean values, piecewise stationary time segments were selected in the GRFz(t) and z(t) data, corresponding to a specific set of fixed conditions on experimental repetitions under statistically similar conditions for each CM. This yielded ensemble sample records measured on a common time base, more specifically in the range −T ≤ t ≤ 0, with T the time period of the impulse phase and 0 the instant of take-off, with the analysis based on individual sample records of the SJ, CMJ, and DJ impulse phases presented in Fig. 2.


Fig. 2 Time profiles of input vertical ground reaction force GRFz (a-c) and output CG vertical displacement z (d-f) during impulse phases on individual sample records at SJ, CMJ and DJ

The time period T and fundamental harmonic frequencies f1 were compared across the different MVJ. Fourier decomposition of the GRFz and z time profiles during the impulse phase of each subject's SJ, CMJ, and DJ trials was implemented using MATLAB R2010b (The MathWorks Inc., Natick, MA, USA), with the integer multiples of the fundamental harmonic frequencies for fast Fourier transform (FFT) decomposition of GRFz(t) and z(t) defined according to the time period of each impulse phase. The FFT coefficient amplitudes from the |GRFz(f)| and |z(f)| decompositions were analyzed according to the frequency of the harmonic with highest amplitude (fmax) and the 90th percentile (f90) of the Fourier series for each subject's selected SJ, CMJ, and DJ trials. fmax, f90, and the maximum FFT amplitude of the |GRFz(f)| and |z(f)| Fourier series were compared for SJ, CMJ, and DJ trials, as well as scatter-plotted for clustering and correlation detection. The 90th percentile was selected because it includes the major part of the signal energy, while all percentiles were preserved for other possible comparisons. According to the concentration of human body resonant frequencies at low frequencies, from 2 Hz with the knees flexed to 20 Hz with rigid posture in standing position [2], the importance of transmission with amplification of these harmonics [4], and the concentration of the larger part of the |GRFz(f)| and |z(f)| signal energy at the lower-frequency harmonics, the normalized sums Σ|GRFz(f)| and Σ|z(f)| of the Fourier series were obtained for frequencies f < 25 Hz for each subject's SJ, CMJ, and DJ trials.
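The FFT metrics described above (fmax and the 90th-percentile frequency of the accumulated spectral energy) can be sketched as follows; the function name and the use of the coefficient-energy cumulative sum are illustrative assumptions, not the paper's MATLAB implementation:

```python
import numpy as np

def fft_metrics(x, fs, pct=0.90):
    """Sketch (assumed names) of the paper's FFT metrics: f_max, the
    frequency of the harmonic with largest amplitude, and f_90, the
    frequency at the pct-percentile of accumulated spectral energy."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                           # remove mean (kills f = 0 delta)
    amp = np.abs(np.fft.rfft(x))               # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    f_max = freqs[np.argmax(amp)]
    energy = np.cumsum(amp ** 2)               # accumulated coefficient energy
    k90 = np.searchsorted(energy, pct * energy[-1])  # harmonic index of pct
    return f_max, freqs[k90], k90
```

For a pure sine sampled over an integer number of periods, both metrics coincide with the sine frequency, mirroring the paper's observation that fmax corresponds to f1 when the energy is concentrated in the fundamental harmonic.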

The input GRFz and output z autospectral density functions Sxx(f) and Syy(f), as well as the input-output cross-spectral density Sxy(f) of GRFz and z, were compared with regard to the low-pass frequency fl at the ratio 10⁻⁴ of the signal maximum amplitude. The coherence γ²xy(f) profiles between the input GRFz and the output z were assessed on individual sample records at SJ, CMJ, and DJ. The gain factor from the cross-spectrum relation H(f) = Sxy(f)/Sxx(f) was assessed against the gain factor from the autospectrum relation |H(f)|, as well as the phase factor φ(f).
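The spectral estimates above can be sketched with a standard segment-averaged (H1) estimator; this is a common textbook approach, not necessarily the exact procedure of the paper, and the helper name is an assumption. Note that coherence is only meaningful when more than one segment is averaged, otherwise it is identically 1:

```python
import numpy as np

def frf_h1(x, y, fs, nseg):
    """H1 frequency-response sketch: average segment periodograms to get
    Sxx, Syy, Sxy, then H(f) = Sxy/Sxx and coherence |Sxy|²/(Sxx·Syy)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nwin = x.size // nseg
    Sxx = Syy = Sxy = 0.0
    for k in range(nwin):                       # accumulate over segments
        X = np.fft.rfft(x[k * nseg:(k + 1) * nseg])
        Y = np.fft.rfft(y[k * nseg:(k + 1) * nseg])
        Sxx = Sxx + np.abs(X) ** 2
        Syy = Syy + np.abs(Y) ** 2
        Sxy = Sxy + np.conj(X) * Y              # cross-spectrum
    f = np.fft.rfftfreq(nseg, d=1.0 / fs)
    H = Sxy / Sxx                               # gain |H| and phase angle(H)
    coh = np.abs(Sxy) ** 2 / (Sxx * Syy)        # γ²xy(f) ∈ [0, 1]
    return f, H, coh
```

For a noisy input-output pair related by a 0.5 gain at a single frequency, the estimator recovers a gain near 0.5 and near-unit coherence at that frequency, with coherence dropping at noise-dominated bins, as described for the jump records.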

3 Results and Discussion

The vertical ground reaction force GRFz(t) and CG vertical displacement z(t), assessed on piecewise linear relation during the impulse phase, presented a dominant linear relation in the impulse sub-phases of each MVJ type [9]. CMJ with long countermovement presented better performance, with higher MVJ height (ht) than SJ, both higher than DJ, as presented in Table 1 (p < 0.05). CMJ also presented a longer time period (T) during the impulse phase than SJ, both longer than DJ (p < 0.05), thus resulting in a lower f1 = 1/T value for CMJ than SJ, both lower than DJ. Both |GRFz(f)| and |z(f)| presented maximum coefficient amplitude of the FFT decomposition at a frequency (fmax) corresponding to each signal's f1. CMJ thus presents its dominant frequency fmax, accounting for the largest fraction of each signal's energy, either on input |GRFz(f)| or output |z(f)|, at a lower frequency than SJ, both lower than DJ.

Table 1 Average (μ) and standard deviation (sd) of vertical height (ht), period (T), frequency (f1) and FFT measurements of vertical ground reaction force (GRFz) and CG vertical displacement (z) during SJ, CMJ and DJ

μ ± sd          SJ              CMJ             DJ
ht (m)          0.33 ± 0.05     0.36 ± 0.04     0.27 ± 0.03
T (s)           0.35 ± 0.05     0.80 ± 0.12     0.20 ± 0.02
f1 (Hz)         2.94 ± 0.46     1.27 ± 0.21     5.00 ± 0.54
fmax (Hz)       2.93 ± 0.46     1.27 ± 0.21     4.97 ± 0.53
GRFzmax (N)     474.2 ± 88.4    782.5 ± 164.4   1915.2 ± 122.7
zmax (m)        0.13 ± 0.02     0.19 ± 0.04     0.08 ± 0.01
GRFz f90 (Hz)   160.4 ± 5.1     145.2 ± 14.9    30.4 ± 16.8
z f90 (Hz)      298.4 ± 5.1     114.0 ± 59.0    28.5 ± 34.3
GRFz k90        57 ± 10         119 ± 27        7 ± 3
z k90           105 ± 15        91 ± 45         7 ± 7
Σ|GRFz(f)|      0.76 ± 0.01     0.77 ± 0.02     0.90 ± 0.05
Σ|z(f)|         0.53 ± 0.01     0.81 ± 0.06     0.91 ± 0.04
Sxx fl (Hz)     47.53 ± 5.75    32.55 ± 4.03    134.14 ± 42.39
Syy fl (Hz)     40.36 ± 6.38    22.79 ± 4.57    77.90 ± 10.14
Sxy fl (Hz)     41.02 ± 5.38    26.04 ± 4.03    87.55 ± 17.73

The maximum amplitude of the |GRFz(f)| FFT decomposition (GRFzmax) presented a higher value at DJ than CMJ, both higher than SJ, in agreement with the GRFz(t) amplitudes, higher at DJ than CMJ, both higher than SJ (Table 1 and Fig. 2a-c). The maximum amplitude of the |z(f)| FFT decomposition (zmax) presented a higher value at CMJ than SJ, both higher than DJ, in agreement with the higher CG displacement amplitudes z(t) at CMJ than SJ, both higher than DJ (Table 1 and Fig. 2d-f). SJ and CMJ presented larger dispersion than DJ in the frequency spectrum of the input |GRFz(f)| FFT decomposition, expressed by the higher GRFz f90 frequency and corresponding harmonic GRFz k90 of the 90th percentile of the signal energy at SJ and CMJ than at DJ, as presented in Table 1. This larger spectral dispersion of the GRFz(f) FFT decomposition at SJ and CMJ than DJ is consistent with the lower value of the normalized accumulated signal energy Σ|GRFz(f)| at 25 Hz for SJ and CMJ than for DJ. Regarding the output |z(f)| FFT decomposition, SJ presents higher frequency dispersion than CMJ and DJ, expressed by the higher frequency z f90 and corresponding harmonic z k90 of the 90th signal energy percentile at SJ than at CMJ and DJ. This higher dispersion of the |z(f)| FFT decomposition at SJ than CMJ and DJ can also be confirmed by the lower normalized accumulated signal energy Σ|z(f)| at 25 Hz for SJ than for CMJ and DJ, as presented in Table 1. The relation GRFz f90 = f1 × GRFz k90 led to different results in the comparison of |GRFz(f)| spectrum convergence. Thus, although GRFz k90 presents a lower value at SJ than CMJ (p = 0.002), the ratio of f1 with higher values at

SJ than CMJ leads to higher GRFz f90 values at SJ than CMJ (p = 0.056). Regarding GRFz k90, with lower values at DJ than SJ and CMJ, the f1 ratios, with higher values at DJ than SJ and CMJ, are not sufficiently high to modify the comparison of GRFz f90, with lower values at DJ than SJ and CMJ. Regarding the output |z(f)| spectrum convergence, the f1 ratios at SJ, CMJ, and DJ did not change the comparison of z f90 and z k90, with both metrics presenting lower values for DJ than CMJ, both lower than SJ, as presented in Table 1. The 90th percentile f90 scatter of the |GRFz(f)| and |z(f)| FFT decompositions points to clustering by MVJ type, with SJ presenting higher f90 for |GRFz(f)| and |z(f)|, CMJ presenting higher |GRFz(f)| f90 and medium |z(f)| f90, and DJ presenting lower |GRFz(f)| and |z(f)| f90. SJ presented, at 25 Hz, higher convergence of the input GRFz(f) FFT decomposition, with higher normalized accumulated signal energy Σ|GRFz(f)| than the corresponding output value Σ|z(f)| at p < 10⁻³, whereas CMJ and DJ present the opposite, higher convergence of the output z(f) FFT decomposition, with higher Σ|z(f)| than Σ|GRFz(f)| at p > 0.05. CMJ presented, in the input GRFz and output z autospectra Sxx(f), Syy(f), as well as the input-output cross-spectrum Sxy(f), a lower frequency fl at the ratio 10⁻⁴ of signal maximum amplitude than SJ, both lower than DJ, as presented in Table 1 and, for Sxy(f), in Fig. 3a-c. SJ presents distinct behavior from CMJ and DJ in the coherence function γ²xy(f) between the input GRFz and the output z (Fig. 3d-f). Thus, SJ presents increasing γ²xy(f) from low values at low frequencies, reaching an average coherence of 0.70 at 25.4 Hz, whereas CMJ presents a higher average coherence of 0.75 at frequencies below 10.4 Hz, and DJ presents a stronger average coherence of 0.92 at very low frequencies up to 3.9 Hz.

Fig. 3 Cross-spectral density Sxy(f) (a-c) and coherence γ²xy(f) (d-f) of the input-output lower limb vertical ground reaction force GRFz(f) and CG vertical displacement z(f) during impulse phases on individual sample records at SJ, CMJ and DJ

The low input-output γ²xy(f) at low frequencies on SJ is likely associated with contributions to the output z other than the considered input GRFz, since both the input and the output present high autospectral values at low frequencies. Maximum values of γ²xy(f) indicate a stronger linear relation between the input GRFz and the output z at these frequencies for each MVJ type. Above these frequencies, SJ, CMJ, and DJ presented a loss of coherence γ²xy(f) in different frequency bands, 25.4-47.5 Hz for SJ, 10.4-33.2 Hz for CMJ, and 3.9-175.8 Hz for DJ, possibly due to extraneous measurement noise, a decrease in input-output linearity, or other input contributions to the output [6]. At higher frequencies, γ²xy(f) presents higher average values of 0.62 at SJ, 0.87 at CMJ, and 0.80 at DJ. Nevertheless, the decaying nature of the input and output Sxx(f) and Syy(f) and the oscillation of γ²xy(f) at higher frequencies point to the registered high γ²xy(f) as being associated with noise in the input and output. The gain factor from the cross-spectrum relation |H(f)| presented a maximum value, corresponding to a natural frequency with increased input-output transmissibility, at a higher frequency of 20.8 Hz for SJ and at lower average frequencies of 8.5 Hz for CMJ and 3.9 Hz for DJ, as well as minimum gain, with notches, at a higher average frequency of 40.4 Hz for SJ than for CMJ at 17.6 Hz, both lower than DJ at 76.6 Hz. SJ presented, below 35.8 Hz, a negative φ(f) of −2.3 rad, with the input GRFz(f) leading the output z(f) [6]. This result is in agreement with the GRFz(t) and z(t) time profiles in Fig. 2a, d, with the GRFz(t) increase ahead of the z(t) increase. As regards CMJ, it presents maximum positive φ(f) at the first harmonic, which accounts for the highest signal energy, with z(f) leading GRFz(f), in agreement with the CG z(t) decrease anticipating the GRFz(t) increase out of phase (Fig. 2b, e). CMJ presents at the second harmonic, at an average frequency of 7.8 Hz, a φ(f) inversion to −2.8 rad, with the input GRFz(f) leading the output z(f) below 22.8 Hz. DJ presented, below 45.4 Hz, maximum positive φ(f) with the output z(f) leading the input GRFz(f), consistent with GRFz(t) and z(t) (Fig. 2c, f), where z(t) leads GRFz(t) out of phase as a result of the z(t) decrease before ground contact and the GRFz(t) increase.

4 Conclusion

The presented methodology proved adequate for unveiling hidden frequency-domain differences between short and long CM, and the corresponding SSC, compared to the absence of CM and SSC in the different MVJ. Differences were detected in the autospectra and cross-spectra of vertical GRF and CG vertical displacement, as well as in input-output coherence, frequency response gain, and phase, along with natural frequencies and transmissibility notches for the different CM. The differences detected in the spectral domain for long and short SSC, in comparison with the no-SSC condition, can contribute to explaining the mechanism of SSC in gait, whether walking, running, or jumping.


Conflict of Interest The authors declare that they have no conflict of interest.

References

[1] Komi PV, Ishikawa M, Linnamo V (2011) Identification of stretch-shortening cycles in different sports. Port J Sport Sci 11(2):31-34
[2] Rasmussen G (1982) Human body vibration exposure and its measurement. Bruel & Kjaer Tech Rev 1:3-31
[3] Goel VK, Chang JM, Schimmels F, Wan Y (2001) Contributions of mathematical models in the understanding and prevention of the effects of whole-body vibration on the human spine. CRC Press LLC, Boca Raton, FL
[4] Mester J, Spitzenpfeil P, Yue Z (2003) Vibration loads: potential for strength and power development. In: Strength and power in sport, ch. 24, 2nd edn. Blackwell Science Ltd, pp 488-501
[5] Bendat JS, Piersol AG (1980) Engineering applications of correlation and spectral analysis. Wiley, New York
[6] Bendat JS, Piersol AG (2000) Random data analysis and measurement procedures. Wiley, New York
[7] Olesen HP, Randall RB. A guide to mechanical impedance and structural response techniques. Application notes 17-179, Bruel & Kjaer, Naerum, Denmark
[8] Asmussen E, Bonde-Petersen F (1974) Storage of elastic energy in skeletal muscles in man. Acta Physiol Scand 91(3):385-392
[9] Rodrigues C, Correia M, Abrantes JMCS, Rodrigues Benedetti MA, Nadal J (2019) Lower limb assessment of dynamic stiffness on different human maximum vertical jump. In: ENBENG IEEE 6th Portuguese meeting on bioengineering, Lisbon, Portugal, pp 1-4

Time-Difference Electrical Impedance Tomography with a Blood Flow Model as Prior Information for Stroke Monitoring R. G. Beraldo and F. S. Moura

Abstract

Continuous monitoring of brain hemodynamics is important to quickly detect changes in healthy cerebral blood flow, helping physician decision-making in the treatment of the patient. Resistivity changes in the brain happen as a result of the pulsatile character of the blood in the arteries or of pathological conditions such as ischemia. We developed a dynamic model of cerebral circulation capable of portraying variations in arterial resistivities within a cardiac cycle. From the hypothesis that the resistivity changes in the brain can be detected by Electrical Impedance Tomography (EIT), we included this model as prior information in a time-difference image reconstruction algorithm. With this prior information, image reconstruction of a brain with pre-existing ischemia was possible, showing that EIT is a potential technique for brain hemodynamic monitoring.

Keywords

Blood flow model • Electrical impedance tomography • Difference imaging • Stroke

1 Introduction

Brain activity depends on high oxygenation. The decrease or interruption of the supply of oxygen and nutrients to the brain can cause irreversible damage very quickly, e.g. death of neurons within minutes [1]. The perfusion deficit is related to the extent of the damage; the location of the perfusion deficit is related to the function of the compromised brain region. Clinical signs only appear at a cerebral blood flow low enough to already cause neuronal impairment, so quick monitoring in the acute phase of the pathology is important to decrease the chances of clinical complications [2]. The best-known cause is stroke, a cardiovascular disease encompassing a group of disorders of the cerebral circulation that can result in neurological functional impairment. A stroke can be ischemic, the most common type, when a blood vessel is obstructed (by a thrombus or an embolus), preventing oxygen from reaching the cells, or hemorrhagic, resulting from non-traumatic vascular rupture with blood leakage into the brain, subarachnoid space, or ventricular system. Gold-standard exams for monitoring blood flow in the brain, such as magnetic resonance (MR) or computed tomography (CT), require patient transport to specialized areas away from the bedside, a high-risk maneuver for critically ill patients, such as those in intensive care units. Also, neither provides continuous monitoring and, in the case of CT, there is exposure to ionizing radiation. In this context, Electrical Impedance Tomography (EIT) is a promising technique. EIT is a non-invasive method of obtaining images of the electrical impedance distribution within a region of interest by imposing low-intensity electrical currents through electrodes on the surface of the region of interest and measuring the voltages at that same interface [3]. The spatial resolution of EIT images is lower than that of the usual conventional methods, but EIT is a non-invasive continuous monitoring method with high temporal resolution. Also, it is not harmful to the patient, it is portable, and it has a lower cost compared to MR or CT [3]. The dynamics of fluids inside the human body affect the electrical resistivity of the region. Specifically, an increase in blood volume in a region decreases its electrical impedance [3], so EIT can be an important tool for monitoring brain hemodynamics.

R. G. Beraldo (B) · F. S. Moura
Engineering, Modeling and Applied Social Sciences Centre, Federal University of ABC, São Bernardo do Campo, Brazil
e-mail: [email protected]
Aorta monitoring studies using EIT can be found [4], but there are still few studies on the use of EIT in brain hemodynamics, which is the proposal of the present work.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_266


Table 1 Generated 3D mesh information for forward and inverse problems

Tissue       No.        Forward   Inverse
Scalp        Elements   32k       26k
Skull        Elements   27k       25k
Brain        Elements   31k       14k
Electrodes   Elements   1k        1k
Head         Nodes      17k       13k

2 Materials and Methods

2.1 Three-Layer Head Model Mesh Generation

In this work, no experimental data acquisition was performed. We generated the head meshes based on a mean head atlas, already segmented, available online [5]. We pre-processed the images by applying morphological filters such as erosion, dilation, hole filling, and area opening [6]. We used the Matlab programming platform to generate the isosurfaces of the scalp, skull, and brain from the atlas and converted them to stereolithography files using stlwrite.¹ We used the Blender 3D Creation Suite to fix non-manifold elements, to remove small structures irrelevant to the analysis, and to add two parallel rows of 16 electrodes each to the scalp. We discretized the entire volume into tetrahedral elements with the Gmsh mesh generator [7]. To avoid the inverse crime [8], which occurs when the same mesh is used to solve both the forward problem and the inverse problem, we discretized the mesh of the forward problem with a larger number of elements than the mesh of the inverse problem. The external surfaces, including the electrodes, were the same for the generation of both meshes. Table 1 presents the number of elements of each tissue and the total number of nodes of the head model.

2.2 Inclusion of Arteries in the Meshes

We generated 3D brain arteries from angiographies available online [9] and converted the MetaImage medical format files to Visualization Toolkit files with the Slicer software platform. Then, we extracted the brain arteries and converted them to stereolithography files using the Aneufuse toolkit. We positioned the arteries in the brain and segmented them into 13 parts with the Blender 3D Creation Suite. We extracted the centerlines of each artery with the Aneufuse toolkit, and the arteries were included in the mesh by selecting the pre-existing elements in the mesh whose centroids were at a distance lower than 4 mm from the artery centerline. Each artery has a different diameter, but given that the mesh elements are not small enough to represent these differences, this diameter was standardized.

2.3 Dynamic Model of Brain Blood Flow

We simulated cerebral hemodynamics with the openBF Julia package [10], a 1D blood flow simulator based on the finite volume method.² The parameters of the blood vessels were based on a previous Willis polygon model [11]. As a result of the simulation, we obtained pressure and blood flow curves for each vessel during a cardiac cycle. We used the Visser model [12] to convert the blood flow values to resistivity, for each instant of time, generating the desired dynamic model. By investigating the resistivity of blood in laminar flow in circular tubes for different constant flow rates, Visser et al. arrived at an expression that allows one to convert blood flow values into relative resistivity variations. Equation (1) relates the reduced average velocity v/R to the volumetric blood flow F. The Visser model is given by Eq. (2), where relative changes in resistivity at continuous flow rates are written as a function of the reduced average velocity.

v/R = F / (π R³)    (1)

Δρz/ρ = aH − aH exp(−b (v/R)^c)    (2)

where R is the radius of the vessel, v is the average velocity, Δρz/ρ is the relative variation of resistivity in the longitudinal direction, H is the hematocrit, and a, b, and c are constants obtained by non-linear regression (a = −0.45 ± 0.03, b = 0.26 ± 0.03 and c = 0.39 ± 0.05). We discretized each curve in 10 points, generating 10 instants of time during the cardiac cycle to include in the problem. The diameter of each vessel and the position of each element did not change; only the element's resistivity value possibly changed. We created a region of influence of 1 cm around these vessels, where the elements close to the arteries are also influenced by the pulse and their resistivity is a value linearly interpolated between the resistivity of the blood in the arteries at that time instant and the resistivity of the brain. We attributed isotropic resistivities to the elements of the scalp, skull, and brain for a frequency of 125 kHz and did not include the complex part of the impeditivity in the calculations. In the case of the brain, we considered an average resistivity between white matter and gray matter,

¹ www.mathworks.com/matlabcentral/fileexchange/20922.
² https://github.com/INSIGNEO/openBF.
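The Visser flow-to-resistivity conversion of Sect. 2.3, Eqs. (1)-(2), can be sketched as follows (Python; the function name is an assumption, and the constants are the regression values quoted in the text):

```python
import math

def visser_delta_rho(F, R, H, a=-0.45, b=0.26, c=0.39):
    """Sketch of the Visser conversion used in the paper (assumed helper
    name). F: volumetric blood flow, R: vessel radius, H: hematocrit
    (0..1). Returns the relative longitudinal resistivity change Δρz/ρ."""
    v_R = F / (math.pi * R ** 3)                    # Eq. (1): reduced average velocity
    return a * H - a * H * math.exp(-b * v_R ** c)  # Eq. (2)
```

At zero flow the relative change is zero, and for large flows it saturates at a·H (negative, i.e. resistivity decreases with flow), which matches the remark that increased blood volume decreases impedance.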


ρbrain = 9.68 Ω m [13,14]. For the scalp, an average value was based on Corazza et al. [15], ρscalp = 2.50 Ω m. For the skull, the bone resistivity was based on Gabriel et al. [13,14], ρskull = 47.94 Ω m. We modeled ischemia as an increase in resistivity in relation to the brain tissue [16], ρischemia = 11.00 Ω m, keeping this value constant over every time instant, reflecting the lack of pulse in the affected artery.
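The 1 cm region-of-influence rule described in Sect. 2.3 can be sketched as a simple distance-based linear interpolation (Python; function and parameter names are assumptions for illustration):

```python
def element_resistivity(d, rho_blood_t, rho_brain=9.68, r_infl=0.01):
    """Sketch of the region-of-influence rule (assumed names): an element at
    distance d (m) from an artery centerline takes a resistivity linearly
    interpolated between the blood resistivity at the current cardiac-cycle
    instant (rho_blood_t, Ω m) and the brain resistivity (Ω m)."""
    if d <= 0.0:
        return rho_blood_t        # element inside the artery
    if d >= r_infl:
        return rho_brain          # outside the 1 cm influence region
    w = d / r_infl                # 0 at the artery, 1 at the region boundary
    return (1.0 - w) * rho_blood_t + w * rho_brain
```

Halfway through the influence region the element takes the midpoint of the two resistivities, so the pulse effect fades smoothly into the background brain value.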

2.4 Forward Problem Solution

Calculating the voltages given the input electric current and resistivities is called the EIT forward problem, represented in Eq. (3). We solved the forward problem by the finite element method using the Matlab programming platform.

$$v(c, \rho) = K^{-1}(\rho)\, c \quad (3)$$

where K(ρ) is the global stiffness matrix, v(c, ρ) is the nodal voltage vector, c is the electric current imposed at the electrodes, and ρ is the resistivity vector. In this work, we used the complete electrode model to take the contact impedance of the electrodes into account [17]. We selected an input electric current of 1 mA, with pair-wise current injection and a skip-8 pattern to improve observability in the center of the head [18,19]. We simulated single-ended measurements with respect to the geometric center of the brain. We considered two situations for the forward problem: 1. the brain of a healthy patient, with the resistivities of the arteries varying between the moments of a cardiac cycle; 2. a patient's brain with ischemia in one artery, where the resistivity of the ischemia was imposed on the entire chosen artery at all times, to represent the absence of blood flow and pulse during the cardiac cycle. This leads to three possible clinical situations: (i) monitoring a healthy patient; (ii) monitoring a patient who was healthy when monitoring started but then developed ischemia, i.e., there is a reference measurement of the patient's healthy brain; and (iii) monitoring a patient with pre-existing ischemia, i.e., there is no reference measurement of the healthy brain.
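A minimal numerical sketch of Eq. (3): given an assembled stiffness matrix K(ρ) and a current vector c, the voltages follow from one linear solve. The FEM assembly itself is not shown, and the function name is illustrative; solving the system is preferred over forming K⁻¹ explicitly.

```python
import numpy as np

def solve_forward(K, c):
    """EIT forward problem, Eq. (3): v = K^{-1}(rho) c.

    K : (n x n) global stiffness matrix assembled by FEM for a given
        resistivity vector rho (assembly not shown in this sketch)
    c : nodal current-injection vector
    """
    # Solve K v = c instead of explicitly inverting K (cheaper, more stable).
    return np.linalg.solve(K, c)
```

For a pair-wise injection pattern, c is nonzero only at the driving electrode pair, and one solve is performed per injection pattern and per resistivity state.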

2.5 Blood Flow Model as Prior Information

Assuming that the patient is being monitored by an electrocardiogram, the duration of each cardiac cycle is known. By synchronizing the resistivity curves of the arteries with the patient's cardiac cycle, the expected resistivity variation in the arteries can be used as the prior Δρ* in Eq. (5).

To avoid an inverse crime, we simulated two different sets of resistivity curves: one for the forward problem and one for the inverse problem, so that the prior information was not exactly what was imposed on the forward problem.

2.6 Time-Difference Imaging: Solution of the Inverse Problem

We solved the inverse problem assuming that the linear approximation is valid, i.e., that the difference between the conductivity of the model and the actual conductivity of the object is small. In some medical problems this is not always true, but the approximation is widely used when one is interested only in the resistivity variation between two instants of time. Images computed in this way are called time-difference images, and they require two sets of voltage measurements at the electrodes [3]. In this case, Eq. (4) shows the cost function to be minimized:

$$\Delta\hat{\rho} = \arg\min_{\Delta\rho}\; \|\Delta v_m - J\,\Delta\rho\|_2^2 + \lambda^2\,\|L(\Delta\rho - \Delta\rho^*)\|_2^2 \quad (4)$$

where λ is the regularization parameter, L is the regularization matrix (a high-pass filter in this work), and Δρ* is a prior estimate of the resistivity variation. The first term is a data-fidelity term between the measurements and the model, while the second is a penalty term that regularizes the ill-posed problem; their influence is balanced by the parameter λ. Equation (5) gives the solution of the inverse problem [20], where the Jacobian J of the forward problem is calculated at the linearization point for the elements of the brain, the region of interest, since considering the whole head would require computational processing power beyond that available.

$$\Delta\rho = \left(J^T J + \lambda^2 L^T L\right)^{-1}\left(J^T \Delta v_m + \lambda^2 L^T L\,\Delta\rho^*\right) \quad (5)$$
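The one-step solution of Eq. (5) maps directly to a few dense linear-algebra calls. The sketch below assumes small dense matrices (the function name is illustrative); for realistic mesh sizes, sparse or factorized solvers would be used instead.

```python
import numpy as np

def time_difference_step(J, dv, L, drho_prior, lam):
    """One-step linearized time-difference reconstruction, Eq. (5):

    drho = (J^T J + lam^2 L^T L)^{-1} (J^T dv + lam^2 L^T L drho_prior)

    J          : Jacobian of the forward problem at the linearization point
    dv         : difference between the two sets of voltage measurements
    L          : regularization matrix (high-pass filter)
    drho_prior : prior estimate of the resistivity variation (Δρ*)
    lam        : regularization parameter λ
    """
    LtL = L.T @ L
    A = J.T @ J + lam**2 * LtL
    b = J.T @ dv + lam**2 * LtL @ drho_prior
    return np.linalg.solve(A, b)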

3 Results

3.1 Monitoring of Healthy Patient

R. G. Beraldo and F. S. Moura

We simulated two curves of the blood flow in the brain, one for the forward problem (red) and the other for the prior information (blue). Figure 1a shows, for each instant of time, the median Δρ of all arteries, with its 25th and 75th percentiles as error bars. In the first simulation, the patient did not present ischemia and the goal was to monitor the resistivity variation curves of the largest arteries. Figure 1b shows the resistivity difference between t = 0 s and t = 0.03 s, the Δρ of the forward problem where there is the greatest variation in blood resistivity in Fig. 1a; green denotes no variation outside the arteries. We computed time-difference images for each instant of time of a healthy patient using λ = 10⁻⁷. Figure 1c shows the computed image related to Fig. 1b using the prior information Δρ*. We observed that, when the information from Δρ* is not used, the image does not represent what we expected, as the arteries are not well defined in the brain. Using Δρ*, the arteries can be seen and their resistivities monitored. Figure 1d shows the resistivities of the elements for each instant of time. The indicated value is the median Δρ within each artery, and the lower and upper error bars represent the first and third quartiles, respectively. We observed that the shape of the curve is close to what we imposed in the forward problem, but that there is a difference between the Δρ of each artery, some varying more than others.

Fig. 1 a Comparison of forward problem and prior information simulated curves for blood flow in all arteries. b Imposed Δρ for an instant of time in the forward problem. c Reconstructed Δρ for the instant of time of (b). d Median, Q1, and Q3 error bars of reconstructed Δρ for all instants of time

3.2 Detection of Ischemia with Reference Measurements

The next step was to monitor ischemia. We first considered the case in which the pathological condition is not pre-existing, i.e., reference voltage measurements of the electrodes for the healthy patient had already been taken. Figure 2a shows the Δρ of the forward problem for this case. The resistivity of all the arteries continued to change within the cardiac cycle, resulting in Δρ = 0 for all the brain, except in the right middle cerebral artery, which remained constant over the cycle with a resistivity higher than the brain's. Figure 2b shows the computed images, related to Fig. 2a, without using the prior information, because it was already possible to identify the ischemic region. In this case, the prior information does not significantly affect the images, for better or worse, because the difference between the voltages of the healthy and pathological instants is already enough for detection.

Fig. 2 Detection of ischemia with reference measurements. a Imposed Δρ. b Reconstructed Δρ without prior information


3.3 Detection of Pre-existing Ischemia

When the pathological condition is pre-existing, it is not possible to reconstruct images considering a static brain, as the resulting Δρ would be zero over the cardiac cycle. But if the resistivities of blood vary over the cardiac cycle, and there is a model of how this variation should occur, time-difference image reconstruction of a pre-existing condition becomes possible using the expected variation Δρ*, as there would be differences between the expected and the computed resistivities. Figure 3a shows the Δρ of the forward problem for this case between t = 0 s and t = 0.03 s. While the other arteries continue to vary during the cardiac cycle, the ischemic artery (in this case, the right middle cerebral artery) does not vary. Figure 3b shows the computed image between t = 0 s and t = 0.03 s, using the prior information Δρ*. Comparing this result with the images of the healthy patient, Fig. 1c, we observed an increased resistivity in the right region of the brain. Proceeding in the same way as in Fig. 1a, Fig. 3c shows the median Δρ of the elements of the arteries, with its 25th and 75th percentiles as error bars. The elements of the right middle cerebral artery did not vary in the same way as the other arteries; we observed increased resistivities at all instants of time.

4 Discussion

Given the reference measurement of the healthy patient, we reconstructed time-difference images of ischemia with and without Δρ*. Figure 2 shows the reconstructed image without Δρ*; using Δρ* gives similar results. This case should be considered, as many secondary injuries happen after patients enter the hospital. An example that can cause decreased perfusion is vasospasm, a prolonged constriction of a cerebral artery caused by contraction of the smooth vascular muscle. Vasospasm can happen days after the rupture of a saccular aneurysm in a patient affected by subarachnoid hemorrhage, increasing the patient's morbidity and mortality [21]. Continuous monitoring at the bedside would be essential, and the EIT reference measurements would be available, making it possible to use time-difference imaging. Reconstruction of time-difference images of pre-existing conditions was once considered impossible, as it depends on the variation of resistivities over time and the pathology would not be detected [16]. But there are advantages of time-difference EIT over absolute EIT imaging that justify further investigation. Being a one-step algorithm, it is fast, an important feature in real-time monitoring. Besides, modeling errors and systematic errors are subtracted, and thereby mitigated, in this method. Using the prior information Δρ*, time-difference images could be computed and alterations of the expected resistivity distribution detected. Further studies are necessary to verify whether this prior information helps to reconstruct time-difference images in the presence of all the noise involved in real measurements and of model errors.

Fig. 3 a Imposed Δρ with ischemia in the RMCA. b Computed image without reference measurement. c Error bars of the median, Q1, and Q3 values of the resistivities of the artery elements

5 Conclusions

In this work, we developed a simplified geometric model of the head, with scalp, skull, brain, and arteries. Representing ischemia as an increase in resistivity and a lack of pulsatile blood flow over time, we computed time-difference images of the brain for each case. The main contribution of this work is the inclusion of new prior information in the solution of the EIT inverse problem: a temporal model of how variations in blood flow in the cerebral arteries alter their resistivity. Changes in the expected behavior could serve as indicators for physician decision-making, adding the information that EIT brings to the other forms of imaging and monitoring of a patient.


Caution is necessary when expecting to translate these computer-simulation results to clinical use. As future perspectives for detecting cerebrovascular diseases using EIT, it is necessary to use real data collected on the head for image reconstruction, to refine the conversion model between blood flow and electrical conductivity, to include the head's venous system, and to include the complex part of the electrical impedance. One of the goals of brain flow monitoring using EIT is to differentiate ischemic and hemorrhagic stroke. Some works on the classification of stroke types argued that, on the one hand, time-difference EIT could not be used, as there would be no reference measurements for the patient, while, on the other hand, the use of absolute images was not yet possible due to the strong influence of model errors [16,22]. The results presented in this work may potentially help in this case.

Acknowledgements The MR brain images from healthy volunteers used in this paper were collected and made available by the CASILab at The University of North Carolina at Chapel Hill and were distributed by the MIDAS Data Server at Kitware, Inc. The authors gratefully acknowledge funding from the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES)—Finance Code 001 and the São Paulo Research Foundation (FAPESP), processes 2017/18378-0 and 2019/09154-7.

Conflict of Interest The authors declare that they have no conflict of interest.

References
[1] Puig B, Brenna S, Magnus T (2018) Molecular communication of a dying neuron in stroke. Int J Mol Sci 19:2834
[2] Bor-Seng-Shu E, Kita WS, Figueiredo EG et al (2011) Cerebral hemodynamics: concepts of clinical importance. Arq Neuro-Psiquiatr 70:352–356
[3] Holder DS (2005) Electrical impedance tomography: methods, history and applications, 1st edn. IOP Publishing Ltd, Cornwall, UK
[4] Proença M (2017) Non-invasive hemodynamic monitoring by electrical impedance tomography. PhD thesis, École Polytechnique Fédérale de Lausanne
[5] Hammond D, Price N, Turovets S (2017) Construction and segmentation of pediatric head tissue atlases for electrical head modeling. OHBM, Vancouver, Canada
[6] Gonzalez R, Woods R (2008) Digital image processing, 3rd edn. Pearson Prentice Hall, London
[7] Geuzaine C, Remacle JF (2009) Gmsh: a three-dimensional finite element mesh generator with built-in pre- and post-processing facilities. Int J Numer Methods Eng 79:1309–1331
[8] Kaipio J, Somersalo E (2005) Statistical and computational inverse problems, 1st edn. Springer, New York
[9] Bullitt E, Zeng D, Gerig G et al (2005) Vessel tortuosity and brain tumor malignancy: a blinded study. Acad Radiol 12:1232–1240
[10] Melis A (2017) Gaussian process emulators for 1D vascular models. PhD thesis, Department of Mechanical Engineering, The University of Sheffield
[11] Alastruey J, Parker KH, Peiró J et al (2007) Modelling the circle of Willis to assess the effects of anatomical variations and occlusions on cerebral flows. J Biomech 40:1794–1805
[12] Visser KR (1989) Electric properties of flowing blood and impedance cardiography. Ann Biomed Eng 17:463–473
[13] Gabriel S, Lau RW, Gabriel C (1996) The dielectric properties of biological tissues: III. Parametric models for the dielectric spectrum of tissues. Phys Med Biol 41:2271–2293
[14] Andreuccetti D, Fossi R, Petrucci C (1997) An Internet resource for the calculation of the dielectric properties of body tissues in the frequency range 10 Hz–100 GHz. Based on data published by Gabriel C et al. in 1996
[15] Fernandez-Corazza M, Ellenrieder N, Muravchik CH (2011) Estimation of electrical conductivity of a layered spherical head model using electrical impedance tomography. J Phys: Conf Ser 332:012022
[16] Horesh L (2006) Some novel approaches in modelling and image reconstruction for multi-frequency electrical impedance tomography of the human brain. PhD thesis, Department of Medical Physics, University College London
[17] Cheng KS, Isaacson D, Newell JC et al (1989) Electrode models for electric current computed tomography. IEEE Trans Biomed Eng 36:918–924
[18] Silva OL, Lima RG, Martins TC et al (2017) Influence of current injection pattern and electric potential measurement strategies in electrical impedance tomography. Control Eng Pract 58:276–286
[19] Beraldo RG (2019) Desenvolvimento de um modelo dinâmico da circulação cerebral para tomografia por impedância elétrica. Master's thesis, Universidade Federal do ABC
[20] Vauhkonen P (2004) Image reconstruction in three-dimensional electrical impedance tomography. PhD thesis, University of Kuopio
[21] Findlay JM, Nisar J, Darsaut T (2016) Cerebral vasospasm: a review. Can J Neurol Sci 43:15–32
[22] Romsarueva A, McEwan A, Horesh L et al (2006) MFEIT of the adult human head: initial findings in brain tumours, arteriovenous malformations and chronic stroke, development of an analysis method and calibration. Physiol Meas 27:S147

Development of a Matlab-Based Graphical User Interface for Analysis of High-Density Surface Electromyography Signals

I. S. Oliveira, M. A. Favretto, S. Cossul and J. L. B. Marques

Abstract

High-density surface electromyography (HD-sEMG) is a non-invasive technique for measuring muscular activity using grids of surface electrodes, and it is used to evaluate neuromuscular disorders. The present work aims to develop a MATLAB-based graphical user interface (GUI) for HD-sEMG signal analysis. As input, the user uploads the HD-sEMG and force signals acquired during a sustained contraction. The parameters calculated are: (a) amplitude estimators: average rectified value (ARV) and root mean square (RMS); (b) frequency estimators: mean frequency (MNF) and median frequency (MDF); (c) topographic-map estimators: coefficient of variation (CoV) and modified entropy (ME). The HD-sEMG and force signals and the calculated parameters are presented in graphs along time intervals. The topographic map that represents the muscular electrical activity under the electrode array is also displayed within the GUI. To assess the reliability of the developed interface, the RMS and MNF parameters were used to compare the values obtained by the developed interface and the commercial software OT BioLab+. A perfect degree of reliability (ICC = 1) was found for both the RMS and MNF variables.

Keywords

High-density electromyography • GUI • MATLAB • Parameters • Force

I. S. Oliveira · M. A. Favretto (B) · S. Cossul · J. L. B. Marques Department of Electrical and Electronic Engineering, Institute of Biomedical Engineering, Federal University of Santa Catarina, Luis Oscar de Carvalho, Florianopolis, Brazil

1 Introduction

The electromyography (EMG) technique measures the electrical activity of the muscle fibers active during a muscular contraction [1]. These signals are expressed by the action potentials originating in the motor units (MUs), each comprising a motor neuron and the muscle fibers it innervates. EMG can be detected by intramuscular electrodes or by electrodes mounted on the skin surface [2]. Classically, muscular activity has been assessed by the intramuscular EMG (iEMG) technique. iEMG has the advantage of a high signal-to-noise ratio; however, it is an invasive procedure and has low reproducibility because of its high selectivity. Alternatively, surface EMG (sEMG) techniques, such as high-density electromyography (HD-sEMG), have been used to study motor unit behavior non-invasively [1]. The HD-sEMG method uses a two-dimensional grid of closely spaced electrodes (e.g., 3–8 mm center-to-center), generating a richer representation of muscle behavior, since the signal is recorded both parallel and perpendicular to the muscle fibers [1,2]. Besides temporal activity, HD-sEMG also allows the spatial EMG distribution to be recorded, thus expanding the possibilities to detect new muscle characteristics such as motor unit (MU) properties, the location of innervation zones, and measurements of the length and conduction velocity of muscle fibers. Due to the spatial distribution, HD-sEMG signals can be represented as topographic maps of the muscle electrical activity, allowing the activation of several MUs to be measured simultaneously [1–3]. HD-sEMG is a relatively new technique and has been used in clinical applications such as studies of muscular fatigue in neuromuscular disorders and studies of motor neuron diseases, neuropathies, and myopathies [2,4,5]. Thus, it is essential to develop signal processing tools, such as graphical user interfaces (GUIs), to facilitate the interpretation of the technique and, consequently, to further explore the applicability of HD-sEMG in research and clinical studies. Other authors have developed GUIs for EMG signal processing [6,7]. However, the vast majority apply to the processing of standard EMG signals, that is, using only one recording channel. Besides that, commercial HD-sEMG signal processing software, such as OT BioLab+ (OT Bioelettronica, Torino, Italy), requires data obtained from the vendor's own products. Therefore, considering the further development of the HD-sEMG technique, the aim of this work is to develop a MATLAB graphical user interface (GUI) for offline HD-sEMG signal processing and feature extraction.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_267

I. S. Oliveira et al.

2 Materials and Methods

The HD-sEMG user interface was developed as a tool to analyse HD-sEMG signals based on the visualization of graphs over time and on feature extraction. Also, considering that HD-sEMG signals are recorded during sustained and dynamic contractions, force signals can also be analysed within the GUI. The GUI was developed with MATLAB GUIDE 2018b (Graphical User Interface Development Environment), which provides tools to facilitate the design of user interfaces and the creation of custom apps. A flowchart of the GUI operation steps is shown in Fig. 1. The flowchart represents the steps to analyse the signals. It starts with the upload and plotting of the force and HD-sEMG signals. Then, spatial filters can be selected to be applied to the HD-sEMG: (a) IB2 (Inverse Binomial); (b) IR (Inverse Rectangle Filter); (c) LDD (Longitudinal Double Differential); (d) LSD (Longitudinal Simple Differential); (e) TDD (Transversal Double Differential); (f) TSD (Transversal Simple Differential); and (g) NDD (Normal Double Differential). After that, an offset adjustment between the channels can be applied to the signals. Finally, the parameters to be calculated can be selected among RMS (root mean square), ARV (average rectified value), MNF (mean frequency), MDF (median frequency), CoV (coefficient of variation), and entropy, setting the initial and final time of the analysis and the window size to be considered. The topographic map is plotted if RMS is the parameter being analysed. Based on the software steps and requirements, an outline of the user interface was developed, defining the layout of each graph, text, and button on the screen. The initial draft of the GUI is shown in Fig. 2.

Fig. 1 Block diagram of the developed graphical interface

2.1 HD-sEMG and Force Signals Upload and Preprocessing

267 Development of a Matlab-Based Graphical User Interface …

Fig. 2 Initial outline of the graphical user interface

To test and evaluate the reliability of the developed GUI, 32-channel HD-sEMG signals from 5 healthy individuals were used. The signals were recorded from the tibialis anterior muscle during an isometric contraction of ankle dorsiflexion at 30% of the maximum voluntary isometric contraction. The signals were recorded using a 32-channel electromyographic system in a monopolar configuration, with a sampling frequency of 2 kHz, a gain of 8, and digitization with a 24-bit analog/digital converter. The force signals were acquired at a sampling frequency of 80 Hz by a portable dynamometer developed to assess the isometric force of ankle dorsiflexion [8]. The procedures of this study were approved by the Ethics Committee in Human Research at the Federal University of Santa Catarina (protocol number: 3.326.385) [9]. Initially, within the GUI, the HD-sEMG and force signals are uploaded for offline processing. The HD-sEMG signals are imported in “.txt” format, arranged as a 32-column by N-line array, in which the columns represent the recording electrodes and the lines represent the time instants. The force signals are also imported in “.txt” format with one column, allocated as a vector. In the pre-processing step, the HD-sEMG signals are filtered using an eighth-order Butterworth band-pass filter of 20–500 Hz; interference from the 60 Hz power line and its five subsequent harmonics (i.e., 120, 180, 240, 300, 360 Hz) is also removed using fourth-order notch filters. The force signals are filtered using a fourth-order Butterworth low-pass filter with a cutoff frequency of 15 Hz [10].
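The preprocessing chain above can be sketched with SciPy. This is a minimal sketch, not the authors' MATLAB code: the zero-phase filtering, the notch quality factor, and the use of `iirnotch` (which yields a second-order biquad rather than the fourth-order notch stated in the text) are assumptions of this sketch.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt, sosfiltfilt

def preprocess_emg(x, fs=2000):
    """Band-pass 20-500 Hz Butterworth plus notches at 60 Hz and its next
    five harmonics (120 ... 360 Hz), applied channel-wise along axis 0."""
    sos = butter(8, [20, 500], btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x, axis=0)
    for f0 in (60, 120, 180, 240, 300, 360):
        b, a = iirnotch(f0, Q=30, fs=fs)  # Q = 30 is an assumed design choice
        y = filtfilt(b, a, y, axis=0)
    return y

def preprocess_force(x, fs=80):
    """Fourth-order Butterworth low-pass at 15 Hz for the force signal."""
    sos = butter(4, 15, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, x)
```

Second-order-sections (`sos`) filtering is used because high-order Butterworth designs in transfer-function form can be numerically unstable.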

2.2 Force Graphical Representation

The force graph represents the force level performed by the participant during a sustained isometric contraction, which can vary according to the developed protocol (e.g., 10%, 20% or 30% of the maximum voluntary isometric contraction). In the GUI, after the force signal upload, the user can select which force level is exhibited in the time-force graph.

2.3 HD-sEMG Signals Graphical Representation

The HD-sEMG graph represents the 32-channel signal acquired during a period of time and, in the same way as the force graph, the user can select which graph to plot according to the force level. For better signal visualization and analysis within the graphs, it is possible to add spatial filters as well as to set an offset between the channels.

2.4 Spatial Filters

Spatial filtering increases signal selectivity and reduces the number of MUs that contribute to the EMG signal. The technique applies high-pass filtering that reduces the contribution of low-frequency signals from deeper MUs and enhances the high-frequency signals from superficial MUs [11,12]. For HD-sEMG signal processing, the spatial filters used depend on the orientation of the electrodes in relation to the direction of the muscle fibers (e.g., transversal or longitudinal). With a monopolar acquisition configuration, different spatial filters can be formed. The spatial filters can be one-dimensional of first or second order, for example, the Longitudinal Simple Differential (LSD) (Fig. 3a), Transversal Simple Differential (TSD) (Fig. 3b), Longitudinal Double Differential (LDD) (Fig. 3c), and Transversal Double Differential (TDD) (Fig. 3d); or two-dimensional, such as the Normal Double Differential (NDD) (Fig. 3e), Inverse Rectangle Filter (IR) (Fig. 3f), and Inverse Binomial (IB2) (Fig. 3g). The two-dimensional filters make the signal even more selective, even in maximum voluntary contractions that involve the recruitment of many MUs. In the GUI, the user can select which spatial filter is applied to the HD-sEMG signal.
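As an illustration, the NDD filter is commonly implemented as the discrete Laplacian mask (center weight 4, four-neighbour weights −1). The sketch below assumes that mask and sign convention; the function name is hypothetical.

```python
import numpy as np

def ndd_filter(grid):
    """Normal double differential (NDD, Laplacian) spatial filter for
    monopolar HD-sEMG samples arranged on a rows x cols electrode grid.
    Only interior electrodes have all four neighbours, so the output
    grid loses its border."""
    g = np.asarray(grid, dtype=float)
    return (4.0 * g[1:-1, 1:-1]
            - g[:-2, 1:-1] - g[2:, 1:-1]   # neighbours above / below
            - g[1:-1, :-2] - g[1:-1, 2:])  # neighbours left / right
```

Applying the mask sample-by-sample over the grid suppresses the common low-frequency content shared by neighbouring electrodes, which is what makes the filtered signal more selective.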


Fig. 3 One-dimensional spatial filter configurations (the points represent the acquisition electrodes) of the first order (a longitudinal simple differential (LSD) and b transversal simple differential (TSD)) and second order (c longitudinal double differential (LDD) and d transversal double differential (TDD)), and two-dimensional spatial filters (e normal double differential (NDD), f inverse rectangle filter (IR), and g inverse binomial (IB2))

2.5 Feature Extraction

Surface EMG (sEMG) signals are difficult to interpret, since they are the algebraic summation of the MU action potentials occurring within the area of the electrodes. Therefore, both amplitude and frequency analysis techniques are used to improve the interpretation of sEMG signals, using time intervals (epochs) of 0.25 s, 0.5 s, and 1 s to calculate the parameters [13]. The EMG parameters calculated within the GUI are: (a) the amplitude estimators average rectified value (ARV) and root mean square (RMS); (b) the frequency estimators mean frequency (MNF) and median frequency (MDF); (c) the coefficient of variation (CoV); (d) the modified entropy (ME); and (e) the gravity center (GC). The user selects the parameters to be calculated and the time interval, since the frequency and amplitude estimators are calculated for each epoch. Then, a graph along the time interval is generated for each parameter.

2.6 Amplitude Estimators

The most frequently applied estimates of sEMG amplitude are the average rectified value (ARV) and the root mean square (RMS) [1]. ARV and RMS are calculated using Eqs. (1) and (2), respectively, in which $x_n$ is the nth sample of the EMG signal and N is the number of samples within the considered window [14].

$$ARV = \frac{1}{N}\sum_{n=1}^{N} |x_n| \quad (1)$$

$$RMS = \sqrt{\frac{1}{N}\sum_{n=1}^{N} x_n^2} \quad (2)$$

2.7 Frequency Estimators

The parameters used for frequency analysis are the mean frequency (MNF) and the median frequency (MDF). The MNF, or centroid frequency, represents the average frequency of the power spectrum (Eq. (3)). The MDF is the frequency that divides the spectrum into two regions of equal power (Eq. (4)) [14].

$$MNF = \frac{\sum_{i=1}^{M} f_i P_i}{\sum_{i=1}^{M} P_i} \quad (3)$$

$$\sum_{i=1}^{MDF} P_i = \sum_{i=MDF}^{M} P_i = \frac{1}{2}\sum_{i=1}^{M} P_i \quad (4)$$

where $f_i$ is the frequency value of the EMG power spectrum at frequency bin i, $P_i$ is the EMG power spectrum at frequency bin i, and M is the number of frequency bins [14]. The power spectrum is obtained using the Fourier transform. The MNF and MDF variables provide basic information about the signal spectrum and its variation over time. The standard deviation of the MDF estimate is theoretically higher than that of the MNF [15]. However, the MDF is less sensitive to noise and more sensitive to fatigue [16].
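Equations (1)–(4) can be computed per epoch as follows. This is a minimal sketch assuming an FFT-based one-sided power spectrum; the function names are illustrative.

```python
import numpy as np

def amplitude_estimators(x):
    """ARV (Eq. 1) and RMS (Eq. 2) of one epoch of a single channel."""
    x = np.asarray(x, dtype=float)
    return np.mean(np.abs(x)), np.sqrt(np.mean(x**2))

def frequency_estimators(x, fs):
    """MNF (Eq. 3) and MDF (Eq. 4) from the one-sided FFT power spectrum."""
    x = np.asarray(x, dtype=float)
    P = np.abs(np.fft.rfft(x))**2                 # power spectrum P_i
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)       # frequency bins f_i
    mnf = np.sum(f * P) / np.sum(P)
    cum = np.cumsum(P)
    mdf = f[np.searchsorted(cum, 0.5 * cum[-1])]  # bin splitting power in half
    return mnf, mdf
```

For a pure 50 Hz sine both MNF and MDF fall on 50 Hz, while the ARV and RMS approach the theoretical 2A/π and A/√2 of a sinusoid of amplitude A.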

2.8 Modified Entropy

Modified entropy (ME) is used to check the degree of homogeneity of the spatial distribution of muscle activity within topographic maps [17,18]. This variable is calculated by Eq. (5), where N represents the number of channels in the electrode array. The value $p(i)^2$ is the square of the RMS value of electrode i normalized by the sum of the squares of the N RMS values, according to Eq. (6).

$$Entr = -\sum_{i=1}^{N} p(i)^2 \times \log_2\!\left(p(i)^2\right) \quad (5)$$

$$p(i)^2 = \frac{RMS(i)^2}{\sum_{i=1}^{N} RMS(i)^2} \quad (6)$$

Entropy reaches its maximum value when all values in the set are equal (uniform distribution). Therefore, the higher the entropy value, the more uniform the distribution of the RMS values across the topographic map [18].
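Equations (5)–(6) reduce to a few lines; the sketch below (function name illustrative) assumes all RMS values are non-zero, since log₂(0) is undefined.

```python
import numpy as np

def modified_entropy(rms):
    """Modified entropy (Eq. 5) of the per-channel RMS values of a
    topographic map, with p(i)^2 normalized as in Eq. (6)."""
    rms = np.asarray(rms, dtype=float).ravel()
    p2 = rms**2 / np.sum(rms**2)          # Eq. (6): normalized squared RMS
    return -np.sum(p2 * np.log2(p2))      # Eq. (5)
```

For N equal channels the entropy equals log₂(N) bits (e.g., 5 bits for a 32-channel grid), and any concentration of activity on a few channels lowers it.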

267 Development of a Matlab-Based Graphical User Interface …

2.9 Coefficient of Variation

The CoV is used to quantify the heterogeneity of the topographic maps and is defined as the standard deviation (SD) of the RMS values divided by the average of the RMS values obtained in the topographic map [18]. The lower the CoV, the more uniform the potential distribution along the electrode grid.
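The CoV definition above is a one-liner; whether the SD is the population or sample estimate is not stated in the text, so the population SD used here is an assumption.

```python
import numpy as np

def cov_map(rms):
    """Coefficient of variation of the RMS values of a topographic map:
    SD divided by the mean (population SD assumed)."""
    rms = np.asarray(rms, dtype=float)
    return np.std(rms) / np.mean(rms)
```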

2.10 Gravity Center

The coordinates of the center of gravity are calculated for the topographic map of the RMS values [19]. These coordinates represent the position of the center in x ($G_x$) and y ($G_y$) and are obtained by Eqs. (7) and (8), respectively, where $RMS_{total}$ is the sum of all RMS values, I is the total number of pixels (channels) of the map, and $RMS_i$ is the RMS value at position i, which corresponds to the coordinates $(x_i, y_i)$.

$$G_x = \frac{1}{RMS_{total}}\sum_{i=1}^{I} RMS_i\, x_i \quad (7)$$

$$G_y = \frac{1}{RMS_{total}}\sum_{i=1}^{I} RMS_i\, y_i \quad (8)$$
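Equations (7)–(8) are a weighted centroid of the RMS map. The sketch below assumes grid-index coordinates (x as the column index, y as the row index), a convention not fixed by the text.

```python
import numpy as np

def gravity_center(rms_map):
    """Center of gravity (Eqs. 7-8) of a rows x cols RMS topographic map.
    Returns (Gx, Gy) in grid coordinates: x = column index, y = row index
    (an assumed convention)."""
    m = np.asarray(rms_map, dtype=float)
    ys, xs = np.mgrid[0:m.shape[0], 0:m.shape[1]]  # coordinate grids
    total = m.sum()                                # RMS_total
    return np.sum(m * xs) / total, np.sum(m * ys) / total
```

A uniform map yields the geometric center of the grid, while concentrated activity pulls the center of gravity toward the active channels.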

2.11 Topographic Maps

The GUI also allows the visualization of topographic maps, which represent a colour map of an amplitude or frequency parameter, such as the RMS or MNF value. For topographic map visualization, it is necessary to select one of the spatial filters (e.g., LSD, TSD, LDD, TDD, NDD, IR, or IB2) and one of the amplitude or frequency parameters (e.g., RMS, ARV, MNF, or MDF). Also, the maximum value of the colour map, represented by the colour red, can be adjusted.

2.12 Software Evaluation

The RMS and MNF parameters were used to compare the values obtained by the developed interface and the commercial software OT BioLab+ (OT Bioelettronica, Torino, Italy). This comparison was made to verify the reliability of the data obtained by the developed interface. These parameters were estimated for each of the 32 channels in periods of 0.5 s. Twenty consecutive values over time were obtained for each parameter, corresponding to the average value of the 32 channels for each of the 0.5 s periods of a 10 s signal.

1833

Agreement between measures was assessed with the intraclass correlation coefficient (ICC) and the Bland-Altman limits of agreement (LoA). While the ICC provides a single measure of the extent of agreement, the Bland-Altman plot provides a quantitative estimate of how closely the values from the two measurements lie. The LoA were defined as the mean difference ±1.96 times the SD of the differences. In the Bland-Altman plot, horizontal lines are drawn at the mean difference and at the LoA. ICC estimates and their 95% confidence intervals were calculated using the SPSS statistical package version 26 (SPSS Inc, Chicago, IL), based on a mean of measurements (window-size mean = 0.5 s), absolute-agreement, 2-way mixed-effects model.
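The LoA computation described above can be sketched directly (the function name is illustrative; the sample SD of the differences, ddof = 1, is an assumption):

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference and Bland-Altman limits of agreement:
    mean difference +/- 1.96 times the SD of the differences."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    md = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the paired differences
    return md, md - 1.96 * sd, md + 1.96 * sd
```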

3

Results

After finalizing the GUI, it was possible to test its functionalities through signal analysis. Regarding the force signals, it was possible to visualize the force variation over a given period. Figure 4a shows the force signal produced by a participant over a period of 30 s, sustaining 30% of his maximum force during that interval. Furthermore, the HD-sEMG signal uploaded and plotted on the graph is not in its best form for evaluation; therefore, a spatial filter and an offset between the channels were applied. As an illustration, in Fig. 4b the LSD filter was selected and the offset adjusted to 0.007. In the third region, designed for graph representation, the resulting parameters are plotted. Figure 4c is an example where the chosen parameter was RMS, with an initial time of 10 s, a final time of 20 s, and a window size of 0.5 s. Finally, the topographic map is presented in Fig. 4d. Figure 4 shows a complete demonstration of all parts of the developed interface.
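The LSD filtering with a per-channel plotting offset described above can be sketched as follows (an illustration assuming the LSD is the difference between adjacent channels ordered along the electrode column; names are not from the authors' code):

```python
import numpy as np

def lsd_with_offset(emg, offset=0.007):
    """Longitudinal single differential (LSD): difference between
    adjacent channels, plus a constant vertical offset per channel so
    that the traces do not overlap when plotted.
    emg: array of shape (n_samples, n_channels)."""
    lsd = np.diff(emg, axis=1)                     # ch[i+1] - ch[i]
    return lsd + offset * np.arange(lsd.shape[1])  # stack the traces
```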

3.1

Software Evaluation

A perfect degree of reliability (ICC = 1) was found between the RMS measurements, which can be visualized in the Bland-Altman plot shown in Fig. 5. In the same way, a perfect degree of reliability (ICC = 1) was found for the MNF measurements, which can be visualized in the Bland-Altman plot shown in Fig. 6.

4

Discussion and Conclusion

The present work aimed to develop a MATLAB graphical interface for the digital processing of HD-sEMG signals. The developed interface allows easy user manipulation, since its design facilitates the visualization and manipulation of the


I. S. Oliveira et al.

Fig. 4 Final version of the developed graphical user interface. a Force development representation during an isometric contraction, b HD-sEMG signals representation, c RMS value for each of the channels, and d topographic map

Fig. 5 Bland-Altman plot for the RMS parameter

Fig. 6 Bland-Altman plot for the MNF parameter

signals being analysed. In addition, different parameters can be chosen for analysis and applied only to the desired portion of the signal, by adjusting the start time, end time and window size. Finally, topographic maps of the signals can also be generated. To evaluate the agreement between the measurements, the RMS and MNF parameters obtained by the interface and by the commercial software OT BioLab+ (OT Bioelettronica, Torino, Italy) were compared. As can be seen in the Bland-Altman plots (Figs. 5 and 6), there is high agreement between the measurements: the results obtained with the two software packages lie mostly within the limits of agreement (±1.96 · SD), indicating high reliability of the developed software when compared to the commercial one. The main advantages of the developed GUI are the possibility of analyzing up to 32 EMG channels simultaneously, along with the analysis of the force signal. Compared to the OT BioLab+ commercial software, the developed GUI has an added feature that allows applying different spatial filters to HD-sEMG signals, which, in conjunction with parameters such as modified entropy, CoV and the center of gravity, allows evaluating changes in the spatial distribution of HD-sEMG signals from different muscle depths [20,21]. As improvements for future versions, we intend to include algorithms for motor unit (MU) decomposition, to obtain more selective parameters such as the MU firing frequency and muscle fibre conduction velocity.

267 Development of a Matlab-Based Graphical User Interface …

4.1

Availability of the Software

Upon publication, requests for the GUI developed in this work can be addressed to [email protected] or [email protected].

Acknowledgements The authors thank the Brazilian Government funding agencies CAPES and CNPq (Processes 142180/2018-1 and 170783/2017-0) and the Ministry of Education of Brazil, through the Tutorial Education Program of the Federal University of Santa Catarina (PET-UFSC).

Conflict of Interest The authors declare there are no conflicts of interest.

References

1. Merletti R, Farina D (2016) Surface electromyography: physiology, engineering and applications, vol 24
2. Stegeman DF, Lapatki BG, Van Dijk JP (2012) High-density surface EMG: techniques and applications at a motor unit level. Biocybern Biomed Eng 32(3):3–27
3. Zwarts MJ, Stegeman DF (2003) Multichannel surface EMG: basic aspects and clinical utility. Muscle Nerve 28:1–17
4. Enoka RM (2012) Muscle fatigue—from motor units to clinical symptoms. J Biomech 45:427–433
5. Gallina A, Merletti R, Vieira TMM (2011) Are the myoelectric manifestations of fatigue distributed regionally in the human medial gastrocnemius muscle? J Electromyogr Kinesiol 21:929–938
6. Kaur M, Mathur S, Bhatia D, Verma S (2015) siGnum: graphical user interface for EMG signal analysis. J Med Eng Technol 39(1):19–25
7. Mengarelli A, Cardarelli S, Verdini F, Burattini L, Fioretti S, Di Nardo F (2016) A MATLAB-based graphical user interface for the identification of muscular activations from surface electromyography signals. In: Annual international conference of the IEEE engineering in medicine and biology society (EMBC), pp 3646–3649
8. Favretto MA, Cossul S, Andreis FR, Balotin AF, Marques CMG, Marques JLB (2017) Desenvolvimento de um sistema de avaliação da força isométrica de flexão dorsal do pé. In: V Congresso Brasileiro de Eletromiografia e Cinesiologia e X Simpósio de Engenharia Biomédica, pp 377–380
9. Favretto MA, Cossul S, Andreis FR, Balotin AF, Marques JLB (2018) High density surface EMG system based on ADS1298-front end. IEEE Latin Am Trans 16:1616–1622
10. Andersen LL, Andersen JL, Zebis MK, Aagaard P (2010) Early and late rate of force development: differential adaptive responses to resistance training? Scand J Med Sci Sports 20:162–169
11. Farina D, Merletti R (2003) Estimation of average muscle fiber conduction velocity from two-dimensional surface EMG recordings. J Neurosci Methods 134:199–208
12. Farina D, Pozzo M, Merlo E, Bottin A, Merletti R (2004) Assessment of average muscle fiber conduction velocity from surface EMG signals during fatiguing dynamic contractions. IEEE Trans Biomed Eng 51:1383–1393
13. Merletti R, Parker P (2004) Electromyography: physiology, engineering and non-invasive applications, pp 1–474
14. Farina D, Merletti R (2000) Comparison of algorithms for estimation of EMG variables during voluntary isometric contractions. J Electromyogr Kinesiol 10:337–349
15. Stulen FB, De Luca CJ (1981) Frequency parameters of the myoelectric signal as a measure of muscle conduction velocity. IEEE Trans Biomed Eng 515–523
16. Stulen FB, De Luca CJ (1982) Muscle fatigue monitor: a noninvasive device for observing localized muscular fatigue. IEEE Trans Biomed Eng BME-29:760–768
17. Farina D, Merletti R (2008) Analysis of motor units with high-density surface electromyography. J Electromyogr Kinesiol 18:879–890
18. Watanabe K, Miyamoto T, Tanaka Y (2012) Type 2 diabetes mellitus patients manifest characteristic spatial EMG potential distribution pattern during sustained isometric contraction. Diabetes Res Clin Pract 97:468–473
19. Madeleine P, Leclerc F, Arendt-Nielsen L, Ravier P, Farina D (2006) Experimental muscle pain changes the spatial distribution of upper trapezius muscle activity during sustained contraction. Clin Neurophysiol 117:2436–2445
20. Nougarou F, Campeau-Lecours A, Massicotte D, Boukadoum M, Gosselin C, Gosselin B (2019) Pattern recognition based on HD-sEMG spatial features extraction for an efficient proportional control of a robotic arm. Biomed Signal Process Control 53:101550
21. Stegeman DF, Kleine BU, Lapatki BG, Van Dijk JP (2012) High-density surface EMG: techniques and applications at a motor unit level. Biocybern Biomed Eng 32:3–27

Development of an Automatic Antibiogram Reader System Using Circular Hough Transform and Radial Profile Analysis B. R. Tondin, A. L. Barth, P. R. S. Sanches, D. P. S. Júnior, A. F. Müller, P. R. O. Thomé, P. L. Wink, A. S. Martins and A. A. Susin

Abstract

The disk diffusion method is one of the most frequently used techniques to determine the antibiotic susceptibility profiles of pathogens. Fast identification of resistance profiles is essential for researchers and physicians. In laboratories without technological resources, these measurements are done manually using a ruler or a caliper, increasing the chances of error. We have designed a device that, by means of image processing of disk diffusion tests, semi-automates the process. A total of 53 images of plates containing, on average, 11 inhibition zones each were acquired, and we compared the results obtained by the proposed algorithm with the results from the commercial equipment Osiris® (considering the corrections in the measurements made by a human operator). The proposed algorithm correctly identified the positions of the 597 inhibition zones in 100% of the images. The Pearson correlation coefficient between the inhibition zone diameters measured by the proposed algorithm and our reference was 0.7728. Cohen's kappa coefficient between the reference and the proposed method was 0.729, and the concordance between methods was 86.3%, with the most serious error class (very major errors) occurring at a rate of 0.2%. This study was approved by the HCPA Health Ethics Committee under number 170587.

P. R. S. Sanches · D. P. S. Júnior · A. F. Müller · P. R. O. Thomé Serviço de Pesquisa e Desenvolvimento em Engenharia Biomédica, Hospital de Clínicas de Porto Alegre, Porto Alegre, Brazil B. R. Tondin (B) · A. A. Susin Programa de Pós-Graduação em Engenharia Elétrica—Departamento de Engenharia Elétrica, Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS, Brazil e-mail: [email protected] A. L. Barth · P. L. Wink · A. S. Martins Laboratório de Pesquisa em Resistência Bacteriana, Hospital de Clínicas de Porto Alegre, Porto Alegre, Brazil

Keywords

Image processing • Computer vision • Hough transform • Antibiogram • Disc diffusion method

1

Introduction

Infectious diseases cause a significant number of deaths worldwide and, more markedly, in developing countries. The emergence of bacterial resistance is an inherent effect of the use of antimicrobial agents [1]. The period required for this phenomenon to occur is extremely short for specific medications, showing a great capacity for adaptation by the microorganisms [2]. Several factors contribute to bacterial resistance by leading to erroneous prescriptions and possibly non-prescription use: the patient's expectation of an effective treatment, reduced clinical consultation time, pressure from the pharmaceutical industry to sell expensive and modern medicines, and skepticism of health professionals. The rational use of antimicrobial agents is fundamental to reduce the mortality caused by bacteria whose treatment is increasingly complex [3,4]. Successfully determining the profile of bacterial susceptibility, in addition to guiding decisions in the treatment of infectious diseases, makes it possible to trace the evolution and epidemiology of microbial resistance [5]. These profiles are obtained using Antibiotic Susceptibility Tests (ASTs), also called antibiograms. The disk-diffusion antibiogram is the method most widely used by microbiology laboratories in hospitals around the world. Several factors motivated the extensive adoption of antibiogram methods: the low cost of instruments, the simple execution protocol, and the fact that it allows testing a diversity of antibiotics simultaneously [6]. The test consists of sampling the patient's material, isolating the bacteria of interest and spreading it onto the surface of a Mueller-Hinton Petri dish. Then, the specialist places into this dish a set of paper disks impregnated with a known concentration of antimicrobial agent [7]. After 18 hours, bacterial growth will occur only where the concentration of the agent is not sufficient to inhibit it, forming an inhibition zone called an inhibition halo. The diameter of a halo determines whether the microorganism is resistant, poorly resistant (intermediate) or sensitive to the agent tested. Usually, the diameter measurements of the formed halos are done manually using rulers or calipers. However, each dish may contain up to 12 disks, making the task very repetitive and tedious. Even for experienced professionals, repeatedly performing these measurements makes the results dependent on their mental condition and working environment [6,8]. In this work, we present the development of a prototype to measure inhibition halos in disk-diffusion ASTs, aiming to avoid the inherent problems of manual measurements. Our technique uses the Hough transform to automatically identify the central position of the circles formed by the paper filters and then estimates the halo diameter based on the Radial Profile Analysis Algorithm (RPAA) [9]. The results of the proposed device were compared with those obtained with the original RPAA and also with the commercial equipment used at the Hospital de Clínicas de Porto Alegre (HCPA).

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_268
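The circle-center identification step can be illustrated with a minimal, self-contained circular Hough transform (a didactic sketch for a known radius, not the Atherton and Kerbyson implementation used by the authors):

```python
import numpy as np

def hough_circle_centers(edges, radius, n_angles=64, rel_thresh=0.6):
    """Minimal circular Hough transform for a known radius: every edge
    pixel votes for all candidate centers lying `radius` away from it;
    peaks in the accumulator are circle centers."""
    h, w = edges.shape
    acc = np.zeros((h, w))
    ys, xs = np.nonzero(edges)
    for t in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        cy = np.rint(ys - radius * np.sin(t)).astype(int)
        cx = np.rint(xs - radius * np.cos(t)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered accumulation
    return list(zip(*np.nonzero(acc >= rel_thresh * acc.max())))
```

The Atherton and Kerbyson variant cited later additionally exploits edge orientation, so each edge pixel casts only two votes instead of a full ring of candidates.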

Fig. 1 a Assembled device overview. b Bottom end with LED strip holders. c Mask to prevent camera glare. d Magnets to guide the correct drawer placement

2

Materials and Methods

2.1

Developed Device

In order to guarantee standardized image acquisition for the experiment, the device was built to offer constant illumination during capture, with a built-in camera (RGB 16-megapixel Logitech® C920) placed in a fixed position at the top of a 40 × 20 × 20 cm dark chamber (Fig. 1a). The illumination was provided by white LED strips concentric around the Petri dish (Fig. 1b), and a black mask made from laser-cut card was used to prevent camera glare (Fig. 1c). A drawer with magnets at its end was used to hold the Petri dish in place, as seen in Fig. 1a and d.

2.2

Strains and Antibiotics

Bacteria seeding was performed following the procedures described by the Clinical & Laboratory Standards Institute (CLSI) performance standard guidelines [10], in order to ensure confluent growth. The designated antimicrobial combination for each test is predefined based on the microorganism tested and the medium from which it is collected. We tested 53 antibiogram plates containing between 8 and 12 disks each; the total number of inhibition zones to be measured was 597.

2.3

Proposed Software

The algorithm developed is shown in the block diagram of Fig. 2. Our approach has 16 steps classified into 4 groups: Image Capture, Pre-Processing, Measurement of Inhibition Halos and Interpretation. The first 3 were implemented using MathWorks MATLAB® and the last using Google Sheets®. The data analysis of the results was performed using IBM SPSS® and MATLAB®. The algorithm runs once for each antibiogram dish. The device captures two different images for each of the Petri dishes tested: one with the camera in high-gain mode and the other in low gain. The first contains the information for measuring the halos and the second for finding their locations. Figure 3 shows both images after trimming the region of interest. Since the main goal of this approach is the location of the halos and the measurement of their diameters, colour information was not relevant; hence, we opted to use only grayscale images. The clinical routine of the HCPA requires the use of a tag on the external surface of the Petri dish containing the patient identification. For this reason, as can be seen near the right border of the high-gain image in Fig. 3, a white rectangular artifact is present. This artifact, as will be seen in the results section, degrades the performance of the algorithm and will need further research.


Fig. 2 Block diagram with the steps used in the algorithm developed in this work

Fig. 4 a Smoothing of the low gain image using a Gaussian filter. b Binarization of the low gain image using Otsu's method. c Detection of edges in the low gain image using Canny's method. d Detection of antimicrobial disc centers using the Hough transform

Fig. 3 On the left: high gain image capture. On the right: low gain image capture. Images were cropped to highlight the region of interest and their color scales were converted from RGB to grayscale

We decided to use a low-gain image because it enhances the white disks containing the antimicrobial agents and reduces the presence of the label artifact, minimizing errors in the identification of the disc positions and making them more evident in the image. In this low-gain image, a Gaussian filter with unitary standard deviation was used to attenuate high-frequency noise and to minimize possible errors in the later stages (Fig. 4a). Then, the binarization of the image was obtained using Otsu's thresholding algorithm [11] (Fig. 4b). With this binary image, it was possible to apply Canny's edge detection algorithm [12] (Fig. 4c). To locate the central position of each circle in the image, we used the Hough transform implementation proposed by Atherton and Kerbyson [13], which evaluates the occurrence of circles within a defined range of diameters. The algorithm considers the orientation of the detected edges, so each edge pixel casts only 2 votes in the parameter space rather than voting for the entire circle outline. Knowing that antibiotic disks have a standard diameter of 6 mm, we determined an

equivalent search interval, in pixels, corresponding to 5.5–6.5 mm. In Fig. 4d it is possible to verify, in green, the location of each disk in the image. The output of this step is the number of paper disks and their relative positions (x and y) in the image. In the high-gain image, when bacterial growth is weak, there is low contrast between the inner and outer halo regions. For this reason, we applied the contrast enhancement method Contrast Limited Adaptive Histogram Equalization (CLAHE) [14] to adjust the general contrast of the image. Conventional methods for contrast enhancement tend to amplify noise; CLAHE prevents this from occurring by limiting the amplification [15]. Following the procedure used for the low-gain image, a Gaussian filter, now with a standard deviation of 3, was applied to reduce high-frequency detail. Since pixels beyond the edge of the Petri dish degrade the performance of the algorithm, we added a mask over the image, creating a region of interest defined by the area inside the dish (where the discs are found); the external area is ignored in the next steps, as shown in Fig. 5. In order to evaluate each halo individually, we perform square cuts (35 × 35 mm) in the high-gain image at the previously detected locations; Fig. 5b shows some examples of cropped segments. In each segment of the image, N equally spaced radial vectors were drawn, starting at the center of the image and extending to a distance of 17.5 mm. Each of these vectors, associated with a different angle, sampled the gray


Fig. 5 Illustrated presentation of the applied mask. The pixels in red are ignored by the algorithm


level in each pixel of its extension, in order to generate a curve that represents the profile of the grayscale values of the halo at that angle. In Fig. 6a, we show an example considering N equal to 8. The pixel values occurring along the eight red lines are acquired and, by averaging the pixels at identical distances from the center over the different angles, we draw the curve profile shown in Fig. 6b. The larger the value of N, the smoother the curve becomes; however, during our experiments we noticed a considerable increase in execution time for very large N, so a value of 512 was used to analyze the acquired images. For each of the inhibition halos detected, we estimate a mean vector and then normalize its values to the interval between 0 and 1. The next step is to differentiate the normalized vector twice, so that the halo's edge can be identified as the global maximum, that is, the inflection point of the curve. Figure 7 shows an example of this procedure for an inhibition zone. This approach yields values, in pixels, that approximate the radius of each of the inhibition zones formed in the dish; knowing the pixels-to-millimeters ratio, we can then convert the values from pixels to millimeters. Following a recommendation from specialists in bacterial resistance, a caution factor was included to minimize the likelihood of a significant measurement error: subtracting 1.5 mm from the radius value decreases the chance of a favorable diagnosis for an antimicrobial that has not performed satisfactorily. Since the elements of the vectors belonging to the paper disk had been removed and the clinically relevant information is the diameter D (in millimeters), we can calculate the values with Eq. 1, where Rgross is the value found directly from the maximum of the graph of Fig. 7, Rdisk is the known radius of the antimicrobial disk (3 mm) and Rcaution is the safety factor of 1.5 mm.

D = 2 · (Rgross + Rdisk − Rcaution) = 2 · (Rgross + 3 − 1.5) = 2 · Rgross + 3 (1)

Fig. 6 a Illustration of the 8 example angles proposed by the algorithm on a halo figure. b Profile resulting from the arithmetic average of the pixels acquired at the different angles. In well-behaved halos a U-shape forms, where region 1 corresponds to the filter paper disk, region 2 to the inhibition zone and region 3 to the outside of the halo, where the bacterium could develop

Fig. 7 Second derivative of the mean vector plot of an inhibition halo. The maximum is highlighted in red

As depicted at the beginning of this paper, we need to classify the agent's performance against the microorganism as Sensitive, Intermediate, or Resistant, so it is necessary to cross-reference the database with the worksheets provided by CLSI. Spreadsheets were created in Google Sheets to perform this evaluation automatically, aiming to minimize errors and reduce the labor of manually analyzing each of the 597 disks.

3

Results

We compared the measurements obtained by the device and the proposed algorithm with those estimated by the Osiris® commercial equipment (CE), by the manual method (MM), and by the original RPAA implementation. Since all measurements using the commercial equipment were performed in real diagnostic routines, with qualified human interference when necessary, we consider it the reference standard. Previous work did not face the interference of an identification tag on the antibiogram plates; thus, to ensure a fair comparison, the performance evaluation of the methods disregards the halos affected by such a label (i.e., the white paper containing the patient identification at the bottom of the Petri dish), leaving 429 measures for analysis.

In the disk identification step, the phase where we locate the centers' positions, the method proved 100% accurate, correctly finding all 597 locations in the 53 dishes, so no further statistical analysis was necessary. To evaluate the inhibition zone measurements, we used the Pearson correlation coefficient and Cohen's kappa coefficient of agreement. The FDA recommends limits for the mismatch between the method and the standard used as reference. Major Errors (ME) occur when the method or device under test provides a resistant result but the reference shows it is sensitive; these cases should correspond to less than 3% of the measurements. Conversely, if the reference shows resistant and the evaluated method delivers a sensitive result, it is considered a Very Major Error (VME); these should not exceed 1.5% of cases. All other discrepancies are considered minor Errors (mE) [16].

We evaluated the agreement and correlation between the qualitative results of the measurements of the different methods using Cohen's kappa index and the Pearson coefficient. We also compared the results with the reference method (CE) using all halos, only the halos affected by tags, and only the halos unaffected by tags. Table 1 presents the Pearson correlation coefficients and Cohen's kappa indexes between the methods. In addition, VMEs occurred in only 0.2% of the measurements, MEs in 4.9% and mEs in 8.6%, resulting in a concordance rate of 86.3% between the results of the proposed method and the reference standard.
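The measurement procedure (radial gray-level profiles, second-derivative edge detection and Eq. 1) can be sketched as follows. This is an illustrative reimplementation on a synthetic segment, not the authors' MATLAB code, and it omits the removal of the disk-interior samples described in the Methods:

```python
import numpy as np

def halo_diameter_mm(segment, center, mm_per_px,
                     n_angles=512, r_disk_mm=3.0, r_caution_mm=1.5):
    """Estimate an inhibition-halo diameter from a grayscale segment:
    sample n_angles radial gray-level profiles from the detected disk
    center, average them, take the global maximum of the second
    derivative as the halo edge, then apply Eq. (1):
    D = 2 * (R_gross + R_disk - R_caution)."""
    cy, cx = center
    max_r = int(min(cy, cx, segment.shape[0] - cy - 1,
                    segment.shape[1] - cx - 1))
    radii = np.arange(max_r)
    profiles = []
    for theta in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        ys = (cy + radii * np.sin(theta)).astype(int)
        xs = (cx + radii * np.cos(theta)).astype(int)
        profiles.append(segment[ys, xs])
    mean_prof = np.mean(profiles, axis=0)               # average profile
    mean_prof = (mean_prof - mean_prof.min()) / np.ptp(mean_prof)
    r_gross_px = int(np.argmax(np.diff(mean_prof, n=2)))  # edge index
    return 2 * (r_gross_px * mm_per_px + r_disk_mm - r_caution_mm)
```

On a synthetic dark halo of 20 px radius surrounded by a bright bacterial lawn, with an assumed scale of 0.5 mm per pixel, the estimate lands close to the nominal 2 · (10 + 3 − 1.5) = 23 mm.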

Table 1 Pearson correlation coefficients and Cohen's kappa indexes between the reference measures (CE) and the other methods

Methods | All the halos | Only halos affected by tags | Only halos unaffected by tags
Manual method | ρ = 0.9060, κ = 0.807 | – | –
Original RPAA | ρ = 0.4666, κ = 0.588 | ρ = 0.3817, κ = 0.453 | ρ = 0.5116, κ = 0.656
Proposed algorithm | ρ = 0.7063, κ = 0.676 | ρ = 0.5727, κ = 0.553 | ρ = 0.7728, κ = 0.729

4

Conclusion

Based on the experimental results shown in this article, we conclude that the inclusion of new features in the algorithm improves the measurement performance of the original RPAA method. We also presented a novel approach to automatically detect and measure inhibition halos in Petri dishes using standard digital cameras. In future work, we plan to improve the algorithm's accuracy, port it to a vendor-independent programming language and include a graphical user interface, allowing use by end users such as microbiologists and students.

Acknowledgements This work was sponsored by INPRA—Instituto Nacional de Pesquisa em Resistência Antimicrobiana, CAPES and CNPq.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Maxwell F (1955) Emergence of antibiotic-resistant bacteria. N Engl J Med 253:1019–1028
2. Paterson DL, Bonomo RA (2005) Extended-spectrum β-lactamases: a clinical update. Clin Microbiol Rev 18:657–686
3. Wannmacher L (2004) Uso indiscriminado de antibióticos e resistência microbiana: uma guerra perdida. Uso racional de medicamentos: temas selecionados 1:1–6
4. Jensen US, Muller A, Brandt CT et al (2010) Effect of generics on price and consumption of ciprofloxacin in primary healthcare: the relationship to increasing resistance. J Antimicrob Chemother 65:1286–1291
5. Jenkins SG, Schuetz AN (2012) Current concepts in laboratory testing to guide antimicrobial therapy. Mayo Clin Proc 87
6. Nijs A, Cartuyvels R, Mewis A, Peeters V, Rummens JL, Magerman K (2003) Comparison and evaluation of Osiris and Sirscan 2000 antimicrobial susceptibility systems in the clinical microbiology laboratory. J Clin Microbiol 41:3627–3630
7. Hudzicki J (2009) Kirby-Bauer disk diffusion susceptibility test protocol. Am Soc Microbiol
8. Hombach M, Zbinden R, Böttger EC (2013) Standardisation of disk diffusion results for antibiotic susceptibility testing using the Sirscan automated zone reader. BMC Microbiol 13:225
9. Hejblum G, Jarlier V, Grosset J, Aurengo A (1993) Automated interpretation of disk diffusion antibiotic susceptibility tests with the radial profile analysis algorithm. J Clin Microbiol 31:2396–2401
10. Clinical and Laboratory Standards Institute (2019) Performance standards for antimicrobial susceptibility testing. In: Eleventh Informational Document MWo-Sll. Tech. rep., Clinical and Laboratory Standards Institute, Wayne, USA
11. Otsu N (1979) A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 9:62–66
12. Canny J (1986) A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell 679–698
13. Atherton TJ, Kerbyson DJ (1999) Size invariant circle detection. Image Vis Comput 17:795–803
14. Zuiderveld K (1994) Contrast limited adaptive histogram equalization. Graphics gems IV. Academic, San Diego, CA, pp 474–485
15. Sund T, Møystad A (2006) Sliding window adaptive histogram equalization of intraoral radiographs: effect on image quality. Dentomaxillofac Radiol 35:133–138
16. Romney Humphries M, Jane A, Mitchell Stephanie L et al (2018) CLSI methods development and standardization working group best practices for evaluation of antimicrobial susceptibility tests. J Clin Microbiol 56:e01934-17

Gene Expression Analyses of Ketogenic Diet A. A. Ferreira and A. C. Q. Simões

Abstract

The ketogenic diet (KD) is used to lose weight and in the treatment of epilepsy due to its anticonvulsant properties. Evidence demonstrates that this diet can help diminish inflammation and reduce mortality, even though the genetic and molecular mechanisms are not yet well understood. This article investigated gene expression under the KD, searching for differentially expressed genes and biological networks. Three publicly available microarray datasets of mouse liver were analysed, and differentially expressed genes and statistically overrepresented Gene Ontology categories are reported. Differentially expressed genes were filtered for adjusted p-value < 0.05. The intersection of the three datasets revealed only 2 genes. To identify a robust response (genes differentially expressed in at least 2 out of the 3 datasets), the union of the pairwise intersections of the differentially expressed genes was calculated, revealing 33 genes, which were used to evaluate Gene Ontology category overrepresentation and functional profiling using gProfiler with KEGG Orthology. According to the results, the KD has a fundamental role in the release of ketone bodies, mediated by PPARα, influencing metabolic processes. Keywords

Bioinformatics • Gene expression • Microarray • Biomedical engineering • Ketogenic diet

1

Introduction

The ketogenic diet (KD) has been used since the 1920s. It is based on high fat intake, moderate protein and low carbohydrate, which reduces insulin, replaces glucose and initiates and perpetuates the body's production of ketones [1]. It aims to simulate the state of prolonged fasting and is used primarily as a treatment for epilepsy, since this fasting state has anticonvulsive properties: it may act directly on antiepileptic activity or may act to stabilize neuronal membranes [2,3]. Ketone bodies are produced in the liver as a consequence of fatty acid oxidation, following the metabolism of acetyl-CoA formed during mitochondrial β-oxidation. Acetyl-CoA can enter the Krebs cycle for ATP production and/or be converted into the ketone bodies acetoacetate, beta-hydroxybutyrate (BHB), and acetone, which are transported to different tissues such as the heart and brain [4]. Although several regulatory genes encoding enzymes of fatty acid catabolism have been described, recent evidence suggests that mitochondrial 3-hydroxy-3-methylglutaryl-CoA synthase (HMGCS2) and the peroxisome proliferator-activated receptor α (PPARα) have an especially important relationship in promoting ketogenesis during fasting [3]. Hmgcs2 encodes the rate-limiting enzyme in ketogenesis, catalyzing the condensation reaction between acetoacetyl-CoA and acetyl-CoA to form 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA), which is then converted by 3-hydroxy-3-methylglutaryl-CoA lyase into the ketone bodies. The Hmgcs2 gene contains a peroxisome proliferator response element in its promoter region and is under the transcriptional regulation of PPARα [5]. PPARα is a ligand-activated transcription factor belonging to the nuclear receptor superfamily. These proteins use fatty acids and their derivatives as ligands and play a major role in the expression of genes involved in energy homeostasis. PPARα is the major PPAR expressed in the liver and controls the transcription of many genes involved in fatty acid metabolism, as well as ketogenesis.

A. A. Ferreira · A. C. Q. Simões (B) Centro de Engenharia, Modelagem e Ciências Sociais Aplicadas, Universidade Federal do ABC, Rua Arcturus 03, São Bernardo do Campo, Brazil e-mail: [email protected]
Subsequent studies determined that PPARα activation upregulates enzymatic pathways involved in lipid transport, lipid oxidation, and ketogenesis, thereby facilitating the metabolic remodeling required for cells and tissues to use fatty acids as an energy source [6,7]. To elucidate the genes and biological processes involved in ketosis, three publicly available microarray datasets from mice submitted to a KD were obtained from NCBI and analyzed for differentially expressed genes and functional profiling using bioinformatics tools.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_269

269 Gene Expression Analyses of Ketogenic Diet

2 Materials and Methods

Three different microarray datasets were searched for differentially expressed genes and functional profiling. The condensed pipeline is shown in Fig. 1.

Fig. 1 Pipeline of the analysis

2.1 Publicly Available Microarray Datasets from GEO-NCBI

The Gene Expression Omnibus (GEO) DataSets repository provides public access to transcriptomics datasets. For this study, three microarray datasets of mouse liver subjected to KD were selected to compare the differentially expressed genes from a less time-restricted carbohydrate diet to a more time-restricted one. The datasets are available at https://www.ncbi.nlm.nih.gov/gds and are listed below:

• GSE87425 (3 control regular chow × 3 keto: 20 h).
• GSE115342 (3 control regular chow × 3 LCKD: 7 weeks).
• GSE121096 (4 control fasting 16 h × 4 days keto).

2.1.1 GSE87425—(3 Control Regular Chow × 3 Keto: 20 h)

In this experiment the researchers evaluated the gene expression associated with the circadian cycle obtained through a KD along a day [8]. The microarray platform is GPL6246 [MoGene-1_0-st] Affymetrix Mouse Gene 1.0 ST Array [transcript (gene) version]. The control subjects were submitted to a regular chow diet. For our analysis we selected the liver samples from 2 sets (with 3 replicates each): one group submitted to regular chow (control group) and the other submitted to a KD (keto group); both groups were submitted to 20 h of diet.

2.1.2 GSE115342—(3 Control Regular Chow × 3 LCKD: 7 Weeks)

In this experiment, novel molecular targets of the low-carbohydrate ketogenic diet (LCKD) were explored by gene expression profiling in the liver and cerebral cortex of an LCKD-fed mouse model. The microarray platform is GPL21810 Agilent-074809 SurePrint G3 Mouse GE v2 8x60K Microarray [Feature Number version]. Although the study has samples from cerebral cortex and liver, for our analysis we used raw data from 2 sets of liver samples (with 3 replicates each): a control group on regular chow and a group submitted to an LCKD; both groups were submitted to 7 weeks of diet [9–11].

2.1.3 GSE121096—(4 Control Fasting 16 h × 4 Days Keto)

This experiment studied the effect of individual and combined PPARα and CREB3L3 deficiency on hepatic gene expression after 16 h of fasting and 4 days of KD. Whole-genome expression analysis was performed on liver samples using microarray [12]. The microarray platform is GPL11533 [MoGene-1_1-st] Affymetrix Mouse Gene 1.1 ST Array [transcript (gene) version]. Although the study has several different samples, for our analysis we used raw data from 2 sets (with 4 replicates each): one control group (Liver_wildtipe_fasted_diet, fasting) and one group submitted to a KD (Liver_wildtipe_ketogenic_diet), all male wildtype C57BL/6 mice.

2.2 Differentially Expressed Gene Analyses by GEO2R from GEO DataSets

The differential gene expression analysis was performed for each dataset using GEO2R, already available in the GEO DataSets interface. Internally, GEO2R uses the R-project and the Bioconductor packages GEOquery and limma to analyse the data. It can be used to compare two groups of samples to identify differential expression in a GEO series. The false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. GEO2R calculates the FDR for each differential expression test computed with moderated t-statistics using eBayes [13]. For further analyses, the gene lists were filtered for genes that showed a corrected p-value lower than 0.05.
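GEO2R performs this correction internally via limma's eBayes in R. As a rough illustration of the Benjamini–Hochberg adjustment and the 0.05 filtering step alone, a minimal Python sketch (gene names and p-values are made up, not taken from the datasets):

```python
# Sketch of the Benjamini-Hochberg FDR correction applied to per-gene
# p-values; illustrative only (the paper uses GEO2R/limma in R).

def bh_adjust(pvals):
    """Return Benjamini-Hochberg adjusted p-values (same order as input)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        value = min(prev, pvals[i] * m / rank)
        adjusted[i] = value
        prev = value
    return adjusted

genes = {"geneA": 0.0001, "geneB": 0.008, "geneC": 0.03, "geneD": 0.4}
adj = dict(zip(genes, bh_adjust(list(genes.values()))))
# Keep only genes whose corrected p-value is below 0.05
significant = [g for g, p in adj.items() if p < 0.05]
```

The adjustment multiplies each p-value by m/rank and enforces monotonicity from the least significant gene downward, matching the Benjamini and Hochberg procedure the paper cites.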

2.3 Gene Ontology Over- or Under-Representation: Cytoscape and BINGO

Cytoscape is an open-source software platform for visualizing molecular interaction networks and integrating these interactions with gene expression profiles and other functional genomics data. The Biological Networks Gene Ontology tool (BINGO), integrated into Cytoscape, determines which Gene Ontology (GO) terms are significantly over- (or under-) represented in a set of genes. BINGO maps the predominant functional themes of the tested gene set onto the GO hierarchy, in a subgraph of a biological network or any other set of genes [14]. Each of the three differentially expressed gene lists was submitted to BINGO through the Cytoscape environment. BINGO performed a hypergeometric test to evaluate the overrepresentation, or underrepresentation, of GO Biological Processes among the genes in each list. The p-value for each GO category was corrected with the False Discovery Rate (Benjamini and Hochberg) and filtered to values lower than 0.05. In order to visualize the lower levels of the GO tree, the categories appointed by BINGO were filtered to select only those with fewer than 1300 elements (genes). The set of genes from the union of the intersections among the three differentially expressed gene lists was submitted to the same analysis with the same parameters.
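The per-category test BINGO performs can be sketched as a hypergeometric tail probability. The following Python sketch uses made-up counts; the 20,000-gene universe and the category sizes are assumptions for illustration, not figures from the paper:

```python
# Sketch of a hypergeometric overrepresentation test for one GO category,
# as performed (per category) by BINGO; counts below are illustrative.
from math import comb

def hypergeom_pval(N, K, n, k):
    """P(X >= k) when drawing n genes from a universe of N genes,
    K of which are annotated to the GO category."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Universe of 20,000 genes, 100 annotated to the category,
# a 33-gene list with 5 of its genes in the category.
p = hypergeom_pval(20000, 100, 33, 5)
```

A small p indicates that seeing 5 or more category genes in a 33-gene list is unlikely by chance, i.e., the category is overrepresented; BINGO then corrects these p-values across all categories.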


2.4 gProfiler with KEGG Orthology

gProfiler comprises several tools to perform functional enrichment analysis [15]. The differentially expressed gene lists from the union of the intersections among the datasets were submitted to gProfiler using KEGG Orthology categories. gProfiler performs a hypergeometric test to evaluate the overrepresentation of categories, in this case KEGG Orthology categories, and uses the False Discovery Rate (Benjamini and Hochberg) to correct for multiple testing. Only categories with an adjusted p-value lower than 0.05 were considered significant.

2.5 Consistent Genes and Categories

The analysis of each dataset revealed a list of differentially expressed genes and Biological Processes when comparing control to KD. One of the datasets had microarray probe IDs as gene names, and these were converted to GeneSymbol using the bioinformatics tools available at https://www.biotools.fr/mouse. Besides the necessary conversion, some gene identifiers did not refer to a unique gene (for example Gm17026/// Gm10375/// Gm10377/// BC061237/// 1700001F09Rik). These entries were excluded from the BINGO and gProfiler analyses. In order to evaluate the consistency of the findings, the intersection and the union of the intersections among the three datasets were computed using a bash script. The Venn diagram was plotted using the vennDiagram package of R.
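The intersection and union-of-intersections step can be sketched with plain set operations; the gene symbols below are illustrative, not the actual lists (the paper computes this with a bash script):

```python
# Sketch of the consistency check across the three differentially
# expressed gene lists; gene symbols are illustrative.
gse87425 = {"Acaca", "Acacb", "Scd1", "Tff3"}
gse115342 = {"Acaca", "Acacb", "Scd1", "Elovl6"}
gse121096 = {"Acaca", "Scd1", "Mup21", "Abcd2"}

# Genes found in all three lists (a very strict criterion)
intersection = gse87425 & gse115342 & gse121096

# Genes found in at least two lists: the union of pairwise intersections
union_of_intersections = (
    (gse87425 & gse115342)
    | (gse87425 & gse121096)
    | (gse115342 & gse121096)
)
```

This mirrors the paper's finding that the triple intersection is much smaller than the union of the pairwise intersections (2 versus 33 genes in the real data).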

3 Results

3.1 Differentially Expressed Genes

Each differential expression analysis revealed a set of significant genes (p-value corrected by BH lower than 0.05) that were altered when comparing the control diet to the KD, as can be seen in Table 1. The ten most differentially expressed genes of each set are listed in Tables 2, 3 and 4.

3.2 The Union of Intersections

The gene list resulting from the intersection of the three datasets revealed only 2 genes, as can be seen in Fig. 2. Because it

Table 1 Quantity of unique geneSymbols with corrected p-value < 0.05

Datasets  | Qt. of genes
GSE87425  | 130
GSE115342 | 645
GSE121096 | 151


Table 2 Ten most differentially expressed genes for GSE87425: 3 control regular chow × 3 keto: 20 h

Adj P-value | Gene symbol | Gene name
0.00627 | Nat8f6 | N-acetyltransferase 8 (GCN5-related) family member 6
0.00627 | Nat8f7 | N-acetyltransferase 8 (GCN5-related) family member 7
0.00627 | Nat8f3 | N-acetyltransferase 8 (GCN5-related) family member 3
0.01446 | Acacb | Acetyl-Coenzyme A carboxylase beta
0.01446 | Klk1b4 | Kallikrein 1-related peptidase b4
0.0179 | Tff3 | Trefoil factor 3, intestinal
0.0179 | Osbpl3 | Oxysterol binding protein-like 3
0.01935 | Cidec | Cell death-inducing DFFA-like effector c
0.01935 | Cmpk2 | Cytidine monophosphate (UMP-CMP) kinase 2, mitochondrial
0.01935 | Gm8995 | Predicted gene 8995

Table 3 Ten most differentially expressed genes for GSE115342: 3 control regular chow × 3 LCKD: 7 weeks

Adj P-value | Gene symbol | Gene name
0.00138 | Scd1 | Stearoyl-coenzyme A desaturase 1
0.00298 | Pcp4l1 | Purkinje cell protein 4-like 1
0.00394 | Elovl6 | ELOVL family member 6, elongation of long chain fatty acids (yeast)
0.00394 | Gm32894 | Predicted gene, 32894
0.00562 | Asns | Asparagine synthetase
0.01299 | Acacb | Acetyl-Coenzyme A carboxylase beta
0.01299 | Tcf24 | Transcription factor 24
0.01299 | Cyp2c55 | Cytochrome P450, family 2, subfamily c, polypeptide 55
0.01377 | Cyp39a1 | Cytochrome P450, family 39, subfamily a, polypeptide 1
0.02365 | Aatk | Apoptosis-associated tyrosine kinase

Table 4 Ten most differentially expressed genes for GSE121096: 4 control fasting 16 h × 4 days keto

Adj P-value | Gene symbol | Gene name
0.00000107 | Abcd2 | ATP-binding cassette, sub-family D (ALD), member 2
0.00145905 | Slc39a4 | Solute carrier family 39 (zinc transporter), member 4
0.00281815 | Steap2 | Six transmembrane epithelial antigen of prostate 2
0.00281815 | Serpine2 | Serine (or cysteine) peptidase inhibitor, clade E, member 2
0.00281815 | Fkbp11 | FK506 binding protein 11
0.00329083 | Sco2 | Synthesis of cytochrome C oxidase 2
0.00343767 | Scd1 | Stearoyl-Coenzyme A desaturase 1
0.00469168 | Cyp39a1 | Cytochrome P450, family 39, subfamily a, polypeptide 1
0.00469168 | Mup21 | Major urinary protein 21
0.00529361 | Tbc1d24 | TBC1 domain family, member 24 (Mus musculus)

proved to be too strict an approach, the union of the intersections was calculated, revealing 33 genes, which are presented in Table 5.

3.3 BINGO and Gene Ontology Terms for the Union of Intersections

In order to get a better understanding of the biochemical processes that differ between the control diet and the KD, the gene list revealed by the union of the intersections of the differential expression analyses (adjusted p-value < 0.05) was submitted to BINGO (Table 6). In order to have a deeper view of the Gene Ontology (GO) categories, the categories appointed by BINGO were filtered to exclude categories with more than 1300 genes, as can be seen in Table 7. The search within each dataset for biological processes that stand out when comparing control to KD revealed no underrepresented GO terms; neither did the union of the intersections.
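The size filter applied to the BINGO output can be sketched as below; the category sizes shown are illustrative, not the real annotation counts:

```python
# Sketch of filtering BINGO categories to keep only GO terms with fewer
# than 1300 annotated genes; sizes are illustrative.
categories = {
    "GO:0042180 cellular ketone metabolic process": 350,
    "GO:0008152 metabolic process": 12000,
    "GO:0015881 creatine transport": 12,
}
filtered = {name: size for name, size in categories.items() if size < 1300}
```

Dropping very large, high-level terms (such as "metabolic process") keeps only the more specific, lower levels of the GO tree.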


Fig. 2 Venn diagram with differentially expressed genes with corrected p-value of the microarray sets. Union of unique geneSymbols: 891 geneSymbols

Table 7 Thirty most significantly overrepresented biological processes found with BINGO for the union of the intersections

GO-ID | Adj P-value | Description
42180 | 0.00000001 | Cellular ketone metabolic process
6082 | 0.00000001 | Organic acid metabolic process
19752 | 0.00000001 | Carboxylic acid metabolic process
43436 | 0.00000001 | Oxoacid metabolic process
32787 | 0.00000005 | Monocarboxylic acid metabolic process
44283 | 0.00000053 | Small molecule biosynthetic process
16053 | 0.00000053 | Organic acid biosynthetic process
46394 | 0.00000053 | Carboxylic acid biosynthetic process
6629 | 0.00000128 | Lipid metabolic process
6633 | 0.00000128 | Fatty acid biosynthetic process
8610 | 0.00002112 | Lipid biosynthetic process
6631 | 0.00006211 | Fatty acid metabolic process
44255 | 0.00029717 | Cellular lipid metabolic process
55114 | 0.01185900 | Oxidation reduction
8202 | 0.01456300 | Steroid metabolic process
45834 | 0.01575700 | Positive regulation of lipid metabolic process
6084 | 0.01575700 | Acetyl-CoA metabolic process
15881 | 0.01575700 | Creatine transport
34433 | 0.01575700 | Steroid esterification
34434 | 0.01575700 | Sterol esterification
34435 | 0.01575700 | Cholesterol esterification
30001 | 0.01749100 | Metal ion transport
10873 | 0.02460800 | Positive regulation of cholesterol esterification
30913 | 0.02460800 | Paranodal junction assembly
44268 | 0.02460800 | Multicellular organismal protein metabolic process
30497 | 0.02460800 | Fatty acid elongation
30573 | 0.02460800 | Bile acid catabolic process
6812 | 0.02889900 | Cation transport
10872 | 0.03279400 | Regulation of cholesterol esterification
42866 | 0.03279400 | Pyruvate biosynthetic process

Table 5 Genes present in the union of the intersections

Acaca, Acacb, Apcs, Arhgef16, Azin2, Bco1, Cd9, Cyp39a1, Elovl6, Fasn, Fkbp5, Gls2, Hsd17b6, Klk1b4, Mup21, Pklr, Pnpla3, Prss8, Rad51b, Rdh9, Saa3, Scd1, Slc17a4, Slc6a8, Snhg11, Steap2, Tmem56, Tmem86a, Tmie, Trpm4, Tspan4, Ube2u, Usp18

Table 6 Quantity of enriched categories found by BINGO

Datasets               | Enriched categories | Filtered enriched categories
GSE87425               | 100                 | 94
GSE115342              | 101                 | 94
GSE121096              | 145                 | 135
Union of intersections | 24                  | 23

3.4 gProfiler with KEGG Orthology for the Union of Intersections

KEGG Orthology categorizes the geneSymbols differently than Gene Ontology. In order to further investigate the relevant biological processes involved in a KD, a search for overrepresented KEGG Orthology terms was performed for the union of the intersections, as shown in Table 8.

4 Discussion

The molecular mechanisms involved in the KD have been the subject of several studies, which also try to determine the genes and processes associated with inflammation and autophagy in diseases related to chronic inflammatory processes, such as obesity, type II diabetes, and insulin resistance [16–20].


Table 8 KEGG categories appointed by gProfiler for the union of the intersections

Term ID    | Term name                 | Adj P-value
KEGG:00061 | Fatty acid biosynthesis   | 7.782 × 10^-4
KEGG:01212 | Fatty acid metabolism     | 1.185 × 10^-3
KEGG:00620 | Pyruvate metabolism       | 4.090 × 10^-3
KEGG:01100 | Metabolic pathways        | 5.851 × 10^-3
KEGG:04152 | AMPK signaling pathway    | 1.622 × 10^-2
KEGG:04910 | Insulin signaling pathway | 2.372 × 10^-2

Related to ketone bodies, Newman et al. have studied BHB as a normal human metabolite that is synthesized in the liver from fat and then circulates throughout the body as a glucose-sparing energy source. It is intrinsically produced during states such as intermittent fasting and dietary restriction that result in extended longevity, cognitive protection, cancer reduction, and immune rejuvenation [21]. Another study, carried out by Cannataro et al. and related to ketone bodies (including BHB), reinforces the action of ketone bodies on hydroxy-carboxylic acid receptors, which are considered targets for preventing metabolic and inflammatory disorders [6]. The mouse gene Acaca was shown to be present in all the dataset analyses; it is associated with the molecular pathways of insulin signalling, type II diabetes mellitus, and fatty acid metabolism, biosynthesis, and elongation in mitochondria, a result also found by Cannataro et al. [6]. As PPARα controls metabolism in the liver, it helps to increase insulin sensitivity and decrease inflammation in the KD. In a normal diet, reports indicate that factors that increase oxidative stress, like hyperglycemia, free fatty acids, and adipokines, contribute to insulin resistance [7]. Activated PPARα promotes the expression of genes required for fatty acid and lipoprotein metabolism in mitochondria, peroxisomes, and the endoplasmic reticulum. Activation of PPARα by agonists or fatty acids induces peroxisomal proliferation, fatty acid oxidation, and the production of ketone bodies [22]; its activation is thus able to influence biological processes such as fatty acid metabolism and biosynthesis pathways.
Pyruvate, whose metabolism was also found among the KEGG categories, is the end-product of glycolysis, is derived from additional sources in the cellular cytoplasm, and is ultimately destined for transport into mitochondria as a master fuel input undergirding citric acid cycle carbon flux; pyruvate can be converted to acetyl-CoA or oxaloacetate as well [23]. Another pathway demonstrated an important role in the maintenance of lipid and cholesterol homeostasis: the

AMPK pathway, which appeared among the biological processes of the differential gene expression analysis. AMPK activators mimic the actions of insulin on gluconeogenesis, repressing glucose production; therefore, agents that activate AMPK are beneficial in the treatment of insulin resistance and diabetes [22].

5 Conclusion

According to the results found, the KD has a fundamental role in the release of ketone bodies, mediated by PPARα, and influences biological processes such as fatty acid metabolism and biosynthesis and the AMPK signaling pathway, increasing insulin sensitivity and decreasing inflammation. Further analyses should include the integration of these findings with pathway information to provide a more accurate understanding of the molecular interactions and reactions involved.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Hutfles LJ, Wilkins HM, Koppel SJ et al (2017) HHS public access. Nat Protoc 45:537–544
2. Dhamija R, Eckert S, Wirrell E (2013) Ketogenic diet. Can J Neurol Sci 40:158–167
3. Cullingford TE (2004) The ketogenic diet; fatty acids, fatty acid-activated receptors and neurological disorders. Prostaglandins Leukot Essent Fatty Acids 70:253–264
4. Branco AF, Ferreira A, Simões R et al (2016) Ketogenic diets: from cancer to mitochondrial diseases and beyond. Eur J Clin Invest 46:285–298
5. Kostiuk MA, Keller BO, Berthiaume LG (2010) Palmitoylation of ketogenic enzyme HMGCS2 enhances its interaction with PPARα and transcription at the Hmgcs2 PPRE. FASEB J 24:1914–1924
6. Cannataro R, Perri M, Gallelli L, Caroleo MC, De Sarro G, Cione E (2019) Ketogenic diet acts on body remodeling and microRNAs expression profile. Microrna 8:116–126
7. Ndisang JF (2010) Role of heme oxygenase in inflammation, insulin-signalling, diabetes and obesity. Mediat Inflamm 2010
8. Tognini P, Murakami M, Liu Y et al (2017) Distinct circadian signatures in liver and gut clocks revealed by ketogenic diet. Cell Metab 26:523–538
9. Okuda T (2019) Data set for characterization of the glycosylation status of hepatic glycoproteins in mice fed a low-carbohydrate ketogenic diet. Data Brief 27
10. Okuda T (2019) A low-carbohydrate ketogenic diet promotes ganglioside synthesis via the transcriptional regulation of ganglioside metabolism-related genes. Sci Rep 9:1–11
11. Okuda T (2019) A low-carbohydrate ketogenic diet induces the expression of very-low-density lipoprotein receptor in liver and affects its associated metabolic abnormalities. NPJ Sci Food 3:1–6
12. Ruppert PMM, Park J-G, Xu X, Hur KY, Lee A-H, Kersten S (2019) Transcriptional profiling of PPARα-/- and CREB3L3-/- livers reveals disparate regulation of hepatoproliferative and metabolic functions of PPARα. BMC Genomics 20:199
13. Ye Y, Li S-L, Wang S-Y (2018) Construction and analysis of mRNA, miRNA, lncRNA, and TF regulatory networks reveal the key genes associated with prostate cancer. PLoS ONE 13
14. Maere S, Heymans K, Kuiper M (2005) BiNGO: a Cytoscape plugin to assess overrepresentation of gene ontology categories in biological networks. Bioinformatics 21:3448–3449
15. Reimand J, Arak T, Adler P et al (2016) g:Profiler—a web server for functional interpretation of gene lists (2016 update). Nucleic Acids Res 44:W83–W89
16. Cao J, Peng J, An H et al (2017) Endotoxemia-mediated activation of acetyltransferase P300 impairs insulin signaling in obesity. Nat Commun 8
17. Kang HS, Okamoto K, Takeda Y et al (2011) Transcriptional profiling reveals a role for RORα in regulating gene expression in obesity-associated inflammation and hepatic steatosis. Physiol Genomics 43:818–828
18. Prado WL, Lofrano MC, Oyama LM, Dâmaso AR (2009) Obesidade e adipocinas inflamatórias: implicações práticas para a prescrição de exercício. Rev Bras Med Esp 15:378–383
19. Alirezaei M, Kemball CC, Flynn CT, Wood MR, Whitton JL, Kiosses WB (2010) Short-term fasting induces profound neuronal autophagy. Autophagy 6:702–710
20. Xu L, Kanasaki M, He J et al (2013) Ketogenic essential amino acids replacement diet ameliorated hepatosteatosis with altering autophagy-associated molecules. Biochim Biophys Acta (BBA)—Mol Basis Dis 1832:1605–1612
21. Newman JC, Covarrubias AJ, Zhao M et al (2017) Ketogenic diet reduces midlife mortality and improves memory in aging mice. Cell Metab 26:547–557
22. Lee WH, Kim SG (2010) AMPK-dependent metabolic regulation by PPAR agonists. PPAR Res 2010
23. Gray LR, Tompkins SC, Taylor EB (2014) Regulation of pyruvate metabolism and human disease. Cell Mol Life Sci 71:2577–2604

Comparison Between J48 and MLP on QRS Classification Through Complexity Measures L. G. Hübner and A.T. Kauati

Abstract

The aging process causes changes in the cardiovascular structure, increasing the risk of diseases and making people more dependent and vulnerable. The Brazilian National Institute of Social Security points out that every year around 250 thousand new cases of stroke are documented in Brazil, and around 40% of retirement requests are due to heart attacks or strokes. Heart diseases are among the most prominent causes of death in the world. The majority of sudden death cases occur without previous symptoms, while some non-lethal arrhythmias, such as ventricular extra-systoles, precede others directly related to them. With that in mind, it is advisable to monitor high-risk individuals who are not hospitalized on a daily basis. Considering the aging of the population and the increasing number of people living alone, remote monitoring systems for various types of biomedical signals are important. The goal of this paper is to evaluate two approaches to QRS classification using the MIT-BIH Arrhythmia Database, comparing the performance obtained with each. The algorithms tested were J48 and Multilayer Perceptron (MLP); the J48 approach obtained a weighted average sensitivity of 96.91% and precision of 96.81%, while MLP obtained 94.77% sensitivity and 93.87% precision. Keywords

ECG • Signal processing • Arrhythmia • Classification

1 Introduction

The aging process causes changes in the cardiovascular structure, increasing the risk of diseases and making people more dependent and vulnerable [1]. The Brazilian National Institute of Social Security points out that every year around 250 thousand new cases of stroke occur in Brazil, with approximately 40% of retirement requests being due to strokes and heart attacks. Heart disease is one of the most prominent causes of death in the world [2,3]. The majority of sudden death cases occur without previous symptoms [4]; however, some non-lethal arrhythmias, like ventricular premature contractions, precede others directly related to them. With that in mind, predictive treatment and correct diagnosis are convenient: the early detection of abnormalities could help prolong the patient's life and preserve its quality when the correct treatment is applied. Nowadays, health monitoring in out-of-hospital environments has come to the interest of researchers and healthcare practitioners, and this monitoring modality becomes more viable every day as a result of advancing wireless communication technology. The acquisition of physiological and psychological variables on a day-to-day basis could be especially useful to detect chronic health problems [5]. Due to better life conditions, the population is living longer; however, elderly people become limited in various aspects (physiological and psychological), and daily activities become problematic, requiring continuous care [6]. In order to help with early diagnosis, some papers [7–9] propose systems to capture and store data for posterior medical analysis. However, this approach still needs a doctor and does not apply to real-time monitoring: the manual process is impracticable due to its need for constant and careful vigilance, and it can be compromised by tiredness or inattention [10].

L. G. Hübner (B) · A. T. Kauati UNIOESTE/CECE, Foz do Iguaçu, Brazil e-mail: [email protected]
Aiming to automate this process, a lot of work has been developed over the years. Nadal [10] used Principal Component Analysis (PCA), with the first ten components from eigenvector and eigenvalue extraction, to perform the classification of QRS complexes using a binary decision tree. Alongside that, another branch [11] developed a derivation using neural networks instead of binary decision trees.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_270


Others, like Jimena et al. [12], use the J48 algorithm to create a QRS classifier and later use the classifier to identify rhythms; the model is then implemented on a Personal Digital Assistant for real-time classification. Some use combinations of morphological and dynamic features: the Wavelet Transform (WT) and Independent Component Analysis were applied separately to each beat to extract the coefficients, and these features were used with a Support Vector Machine (SVM) to classify the QRS complexes [13]. A device that can monitor arrhythmias in real time is developed by Hu et al. [14], using a sensor to digitize the ECG signal and send it to a smartphone via Bluetooth; with the digitalized signal, a Markovian network is used to classify the QRS complexes. A low-complexity real-time arrhythmia detection system that includes QRS complex and arrhythmia detection is proposed by Chang et al. [15]. A signal-processing platform intended for online real-time classification of heart conditions is presented by Gutiérrez-Gnecchi et al. [16]; the classification uses Wavelet Transforms based on squared waves to identify individual ECG waves and obtain a fiducial mark vector. Extracting statistical features from RR intervals, Alfarhan et al. [17] perform the classification using an SVM with a Gaussian Radial Basis Function kernel. Using a memristor neuromorphic computing system, Hassan et al. [18] develop a real-time classification device. With an ANN algorithm implemented on a Raspberry Pi 3 board capturing the ECG leads, Nath et al. [19] present a real-time arrhythmia episode detector. A classification method that eliminates the first stage of ECG processing, combining the extraction and classification phases into a single stage, named Ensemble MLP, is proposed by Ihsanto et al. [20]. In this paper we test and compare two machine learning algorithms, J48 and Multilayer Perceptron (MLP), using complexity measures as input.
These algorithms have low computational cost and could be used to implement a real-time arrhythmia monitor with low-cost equipment.

2 Methods

At the beginning of the construction of the classifier, the signal needs to be processed. The first step was applying the Low-Pass and High-Pass filters from [21] to the original signal. With the signal filtered, we segmented the QRS complexes, using the database annotations, into small pieces of signal and normalized them using the Origin normalization described in Eq. 1:

z'_i = (z_i − μ(Z)) / σ(Z)    (1)

where Z is the vector that contains the signal, z_i is the point i of the signal, z'_i is the normalized observation of the signal at point i, μ(Z) is the mean of the signal, and σ(Z) is the Standard Deviation (SD) of the signal. The totals of extracted QRS complexes are presented in Table 1.

Table 1 QRS complexes extracted from database

Class            | Total extracted | Ignored | Total
NORMAL           | 75,017          | 2,701   | 72,316
APC              | 2,546           | 138     | 2,408
PVC              | 7,129           | 802     | 6,327
NAPC             | 193             | 4       | 189
FUSION           | 802             | 28      | 774
NESC             | 229             | 61      | 168
LBBB             | 8,072           | 105     | 7,967
ABERR            | 150             | 3       | 147
NPC              | 83              | 7       | 76
RBBB             | 7,255           | 197     | 7,058
FLWAV            | 472             | 12      | 460
VESC             | 106             | 2       | 104
Total            | 102,054         | 4,060   | 97,994

After that, the complexity measures were extracted. With the complexity measures of each QRS complex extracted, we applied the machine learning algorithms to generate different models using J48 and MLP, with all parameters set to default; the models were generated using the Weka toolbox. To evaluate the models, we chose k-Fold Stratified Cross-Validation with k = 10; to obtain a better overview of the results, we executed the k-Fold Stratified Cross-Validation 10 times with different random seeds. This process is illustrated in Fig. 1. A detailed explanation of each step is presented in the following sections. First, however, we need to introduce some concepts and materials: the ECG signals used in this paper come from a well-known database, the MIT-BIH Arrhythmia Database.
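Equation 1 is a standard z-score. A minimal Python sketch of the Origin normalization applied to one segment (the paper performs this step in R, and the use of the population SD is an assumption):

```python
# Sketch of the "Origin" normalization of Eq. 1 applied to one QRS segment;
# the segment below is synthetic (real segments have 301 points).
from math import sqrt

def origin_normalize(z):
    mu = sum(z) / len(z)
    sd = sqrt(sum((x - mu) ** 2 for x in z) / len(z))  # population SD assumed
    return [(x - mu) / sd for x in z]

segment = [0.0, 1.0, 2.0, 3.0, 4.0]
normalized = origin_normalize(segment)
# The normalized segment has zero mean and unit SD.
```

Normalizing every segment this way removes amplitude and baseline offsets, so the complexity measures describe waveform shape rather than scale.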

2.1 The BIH-Arrhythmia Database

In this paper we used the MIT-BIH Arrhythmia Database, developed by the Beth Israel Hospital and the Massachusetts Institute of Technology, which contains a set of ECGs previously analyzed and classified by cardiologists. This database is composed of ECG records of 48 patients in two leads; each record lasts 30 min, for a total of 24 h of signals, and the signals contain annotations indicating the QRS complexes and their classification [22]. In this paper, we used the following annotations of the database:


Fig. 1 Database processing and model creation

• Normal Beat (NORMAL);
• Left Bundle Branch Block Beat (LBBB);
• Right Bundle Branch Block Beat (RBBB);
• Atrial Premature Contraction (APC);
• Aberrated Atrial Premature Beat (ABERR);
• Nodal (junctional) Escape Beat (NESC);
• Ventricular Escape Beat (VESC);
• Premature Ventricular Contraction (PVC);
• Nodal Premature Contraction (NPC);
• Ventricular Flutter Wave (FLWAV);
• Fusion of Ventricular and Normal Beat (FUSION);
• Non-Conducted P-Wave (NAPC).
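Selecting only these beat annotations from a record can be sketched as below. The annotation tuples are illustrative (real records would be read with the WFDB tools), and the non-beat markers shown ('+' for rhythm change, '~' for signal quality) stand in for the annotations that the extraction ignores:

```python
# Sketch of keeping only the beat classes used in this work from a list of
# (sample_index, label) annotations; data and labels are illustrative.
USED = {
    "NORMAL", "LBBB", "RBBB", "APC", "ABERR", "NESC",
    "VESC", "PVC", "NPC", "FLWAV", "FUSION", "NAPC",
}

annotations = [
    (77, "NORMAL"),
    (312, "PVC"),
    (540, "+"),    # rhythm-change marker, not a beat
    (801, "APC"),
    (1033, "~"),   # signal-quality marker, not a beat
]
beats = [(sample, label) for sample, label in annotations if label in USED]
```

Filtering out the non-beat annotations leaves only the QRS positions used for segmentation.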

2.2 Process

To extract the QRS complexes and the complexity measures, we used the R language. The process used to extract them is described below:

• Segmentation of all QRS occurrences in each file;
• Extraction of a stretch containing each QRS occurrence (301 points);
• Normalization using the Origin method;
• Extraction of complexity measures.

After the segmentation and the extraction of the complexity measures, the results were stored in an ARFF file.
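The segmentation step can be sketched as extracting a 301-point window centred on each annotated R peak; the signal and peak positions below are synthetic, and the boundary handling (discarding windows that run past the signal edges) is an assumption:

```python
# Sketch of the QRS segmentation step: a 301-point stretch around each
# annotated R-peak position (the paper performs this step in R).
def extract_qrs(signal, r_peaks, width=301):
    half = width // 2
    segments = []
    for r in r_peaks:
        # Keep only windows that fit entirely inside the signal (assumption)
        if r - half >= 0 and r + half < len(signal):
            segments.append(signal[r - half : r + half + 1])
    return segments

signal = [0.0] * 1000
segments = extract_qrs(signal, r_peaks=[150, 400, 980])
# The peak at 980 is discarded: its window would run past the signal end.
```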

2.3 Complexity Measures

This paper uses complexity measures as a set of features extracted from the filtered and normalized signal containing the QRS complex. The features extracted from the signal in this paper are:

• Distance between the actual R peak and the last R peak;
• Distance between the actual R peak and the next R peak;
• Fractal Dimension: Katz [23]; Box Counting [24]; Hallwood [25]; Wavelets [26]; Periodogram [26]; DCT-II [26]; Incr1 [26]; Variogram [26]; Madogram [26]; Rodogram [26]; Variation [26]; Estimated Complexity [27];
• Entropy: Empirical [28]; Miller-Madow [29];
• Statistics: Mean/Average; SD; Median; Root Mean Square (RMS).
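As one concrete example of these features, the Katz fractal dimension can be sketched as below. This follows the common simplified Katz formulation and treats consecutive samples as unit steps on the x-axis, which is an assumption; the waveform is synthetic:

```python
# Sketch of the Katz fractal dimension [23], one of the complexity
# measures used as a feature; the waveform below is synthetic.
from math import log10, hypot

def katz_fd(signal):
    """Katz fractal dimension of a 1-D waveform (unit sample spacing)."""
    n = len(signal) - 1  # number of steps along the curve
    # Total curve length L and maximum distance d from the first point
    length = sum(hypot(1.0, signal[i + 1] - signal[i]) for i in range(n))
    d = max(hypot(i, signal[i] - signal[0]) for i in range(1, n + 1))
    return log10(n) / (log10(n) + log10(d / length))

fd = katz_fd([0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
# A jagged waveform yields a dimension above 1 (a straight line gives 1.0).
```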

2.4 Evaluation

To evaluate the proposed methods, the k-Fold Stratified Cross Validation (SCV) was used, both algorithms (J48 & MLP) were executed 10 times changing the random seed, changing this seed causes the SVC to rebuild the bases with a different set of data on each. After this, the SD and Average (AVG) were

1854

L. G. Hübner and A.T. Kauati

calculated for each method. All the results presents values of Precision (VPP) and Sensitivity (VSE). Precision (VPP): its the percentage of corrected classified examples in a class c by the classifier, calculated with Eq. 2. V PP =

TP T P + FP

(2)

where TP is the total number of examples of c correctly classified and FP is the number of examples of other classes classified as c.

Sensitivity (VSE) is the percentage of examples of c correctly classified relative to the total number of examples of c in the test database, calculated with Eq. 3:

VSE = TP / (TP + FN)   (3)

where TP is the number of examples of c correctly classified as c and FN is the number of examples of c classified as another class.
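The two metrics can be computed per class directly from paired label lists. The sketch below is a hypothetical helper, not the authors' code, but it mirrors Eqs. 2 and 3 term by term.

```python
from collections import Counter

def vpp_vse(actual, predicted):
    """Per-class Precision (VPP, Eq. 2) and Sensitivity (VSE, Eq. 3)."""
    tp = Counter()  # examples of c classified as c
    fp = Counter()  # examples of other classes classified as c
    fn = Counter()  # examples of c classified as another class
    for a, p in zip(actual, predicted):
        if a == p:
            tp[a] += 1
        else:
            fp[p] += 1
            fn[a] += 1
    classes = set(actual) | set(predicted)
    vpp = {c: tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0 for c in classes}
    vse = {c: tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0 for c in classes}
    return vpp, vse
```

Averaging these values over the ten seeded SCV runs yields the mean and SD figures reported in the tables below.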

3 Results

This section presents the results obtained by both approaches, J48 and MLP. All results are shown as the mean value and SD over ten executions of the k-Fold Stratified Cross-Validation. In addition, the weighted average of VSE and VPP for each approach is presented in the last line of each table; the weighted average takes into account the weight of each QRS class in the database. Table 2 presents the VSE and VPP results for the J48 approach.

Table 2 VSE and VPP, per class, on the J48 approach

Class              VSE     SD     VPP     SD
NORMAL             98.54   0.05   97.98   0.04
LBBB               94.25   0.17   94.97   0.29
RBBB               97.03   0.18   97.41   0.14
ABERR              53.46   3.49   60.89   4.50
PVC                93.29   0.18   93.63   0.23
FUSION             64.17   1.31   72.96   0.81
NPC                35.65   1.92   52.73   4.55
APC                85.51   0.39   89.56   0.31
VESC               90.74   1.03   93.68   1.78
NESC               37.73   3.50   52.95   2.99
FLWAV              93.35   0.56   95.62   0.53
NAPC               92.07   0.97   94.16   1.78
Weighted average   96.91   0.06   96.81   0.07

Table 3 presents the VSE and VPP results for the MLP approach.

Table 3 VSE and VPP, per class, on the MLP approach

Class              VSE     SD     VPP     SD
NORMAL             98.84   0.08   96.45   0.19
LBBB               93.88   0.65   92.62   1.00
RBBB               77.72   2.61   95.48   6.33
ABERR              30.08   4.33   59.06   4.81
PVC                91.69   0.50   90.01   1.03
FUSION             42.68   3.75   62.63   6.26
NPC                9.88    3.77   1.49    0.49
APC                66.80   1.88   89.49   1.45
VESC               87.10   3.67   69.43   18.41
NESC               3.23    1.79   15.85   12.78
FLWAV              87.39   1.84   86.97   1.92
NAPC               77.39   4.71   77.46   2.64
Weighted average   94.77   0.16   93.87   3.53

4 Discussion

All results presented in this section are written as Value (SD) %. Table 4 presents the state-of-the-art works used to compare the results. From Table 2, it is possible to observe that the J48 approach obtained VSE above 90% on the NORMAL, LBBB, RBBB, PVC, VESC, FLWAV and NAPC classes, with NORMAL, LBBB and RBBB obtaining the best values: 98.54(0.05)%, 94.25(0.17)% and 97.03(0.18)%, respectively. However, these classes represent more than half of the database and present low variability in their QRS morphologies. The ABERR class obtained 53.46(3.49)% VSE and 60.89(4.50)% VPP, low values in comparison with the classes presented before. Looking at examples of the ABERR class, it is possible to observe that the QRS complex does not present a consistent pattern; this lack of morphological consistency makes it harder to create a generic model for the class. The same occurs with the FUSION class, which obtained 64.17(1.31)% VSE and 72.96(0.81)% VPP. Another class that obtained good results is APC, with 85.51(0.39)% VSE and 89.56(0.31)% VPP. Other classes, such as NPC and NESC, obtained low VSE values: 35.65(1.92)% for NPC and 37.73(3.50)% for NESC. Both complexes have nodal origin; NPC has 76 examples and NESC has 161 examples in the database, and this lack of examples could lead to poor VSE and VPP rates. On the other hand, the MLP obtained VSE above 90% for NORMAL, LBBB and PVC,


being 98.84(0.08)% on NORMAL, 93.88(0.65)% for LBBB and 91.69(0.50)% on PVC. However, the MLP approach obtained worse results for most of the classes, except for NORMAL, where it obtained better VSE values. The work of [10] has better results for NORMAL and FUSION, and [11] has better results for NORMAL, ABERR, PVC, FUSION and VESC; however, both works consider only 7 types of QRS complexes, while this paper uses 12. Another work, Rodriguez et al. [12], uses a PDA to classify QRS complexes and rhythms; comparing the results for QRS classification, [12] obtained better VSE values for the LBBB, RBBB, FUSION, NPC and VESC complexes. However, the total number of examples used in [12] is 21,849 complexes, while this paper uses 97,994, so the present results are more reliable. The method proposed by [13] presents better VSE for all classes; however, it requires extra computational resources and uses more than the current QRS information, as it needs to calculate the Daubechies wavelet transform, Principal Component Analysis and Independent Component Analysis together with the mean RR interval. The extra computational resources needed by this method could prevent its use in real-time classification. The work developed by Hu et al. [14] has superior VSE values for the NORMAL, PVC and APC classes; however, its evaluation uses only 12 ECG signals from the MIT database, with a total of 34,799 simulated complexes, while this paper uses a total of 97,994 complexes for training and testing. A Wavelet Transform and Probabilistic Neural Network approach is developed by [16]; however, for none of the classified QRS classes does it obtain better performance than this paper.

In [17], superior VSE values are observed for APC complexes; however, the total number of examples used was 24, while this paper uses 2408 APC examples for training and testing with 10-Fold Stratified Cross-Validation. Another work, [19], obtained better results for NORMAL, LBBB, RBBB and APC; however, its evaluation uses only 45 examples for training and testing, which does not represent a trustworthy result. The method developed by [18] has superior values for LBBB, PVC and APC; however, its tests use only 6258 examples of the database. A different approach is presented by [20], which does not include a pre-processing step and combines feature extraction and classification into a single step; however, this work only has superior VSE for the NORMAL and LBBB classes.

Table 4 Comparison of the J48 approach (VSE, %) with the literature [10-20]

Work         NORMAL  LBBB   RBBB   ABERR  PVC    FUSION  NPC    APC    VESC   NESC   FLWAV  NAPC
This Paper   98.54   94.25  97.03  53.46  93.29  64.17   35.65  85.51  90.74  37.73  93.35  92.07
(per-class values for [10]-[20] were not reliably recoverable from the source layout)

5 Conclusion

Considering the discussion above, we can conclude that this work presents results consistent with the state of the art. Moreover, since the proposed method uses only local features, the model can be used to perform real-time classification. As presented in the previous section, the results show better performance for the J48 approach. The MLP presents results compatible with the state of the art for local classification of QRS complexes; however, some QRS classes still have low VSE. These classes are well known for being hard to classify, so this classifier is not far from the state of the art, although J48 presents better results on almost every QRS class. J48 presents good results even though it is a simple approach: it has shown better results than the MLP on most QRS classes, and its implementation is easy and efficient.


The Complexity Measures used in this paper present performance on QRS classification equivalent to the state-of-the-art works; thus, the present results support that this method can be applied to the classification problem.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Lakatta EG, Levy D (2003) Arterial and cardiac aging: major shareholders in cardiovascular disease enterprises. Circulation 107:490–497
2. Biolo A, Garcia SB, Silva S et al (2006) Análise do manejo agudo do acidente vascular cerebral no Hospital de Clínicas de Porto Alegre. Rev HCPA Porto Alegre 26(1):17–21
3. Gifari MW, Zakaria H, Mengko R (2015) Design of ECG homecare: 12-lead ECG acquisition using single channel ECG device developed on AD8232 analog front end. In: 2015 international conference on electrical engineering and informatics (ICEEI). IEEE, pp 371–376
4. Martino M, de Siqueira SF, Zimerman LI, Neto VA, Moraes AV Jr, Fenelon G (2012) Sudden cardiac death in Brazil: study based on physicians' perceptions of the public health care system. Pacing Clin Electrophysiol 35:1326–1331
5. Korhonen I, Parkha J, Van Gils M (2003) Health monitoring in the home of the future. IEEE Eng Med Biol Mag 22:66–73
6. Liu L, Stroulia E, Nikolaidis I, Miguel-Cruz A, Rincon AR (2016) Smart homes and home health monitoring technologies for older adults: a systematic review. Int J Med Inform 91:44–59
7. Chan AM, Selvaraj N, Ferdosi N, Narasimhan R (2013) Wireless patch sensor for remote monitoring of heart rate, respiration, activity, and falls. In: 2013 35th annual international conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, pp 6115–6118
8. Fensli R, Gunnarson E, Gundersen T (2005) A wearable ECG-recording system for continuous arrhythmia monitoring in a wireless tele-home-care situation. In: 18th IEEE symposium on computer-based medical systems. IEEE, pp 407–412
9. Rotariu C, Manta V (2012) Wireless system for remote monitoring of oxygen saturation and heart rate. In: 2012 federated conference on computer science and information systems (FedCSIS). IEEE, pp 193–196
10. Nadal J (1991) Classificação de arritmias cardíacas baseada em análise de componentes principais e árvores de decisão. PhD thesis, Programa de Engenharia Biomédica, COPPE, Universidade Federal do Rio de Janeiro, Rio de Janeiro
11. Bossan MdeC (1994) Classificação de batimentos cardíacos utilizando redes neurais. Master's thesis, Programa de Engenharia Biomédica, COPPE/UFRJ, Rio de Janeiro
12. Rodriguez J, Goni A, Illarramendi A (2005) Real-time classification of ECGs on a PDA. IEEE Trans Inf Technol Biomed 9:23–34
13. Ye C, Coimbra MT, Kumar BVKV (2010) Arrhythmia detection and classification using morphological and dynamic features of ECG signals. In: 2010 annual international conference of the IEEE Engineering in Medicine and Biology Society. IEEE, pp 1918–1921
14. Hu S, Wei H, Chen Y, Tan J (2012) A real-time cardiac arrhythmia classification system with wearable sensor networks. Sensors 12:12844–12869
15. Chang RC-H, Chen H-L, Lin C-H, Lin K-H (2018) Design of a low-complexity real-time arrhythmia detection system. J Signal Process Syst 90:145–156
16. Gutiérrez-Gnecchi JA, Morfin-Magana R, Lorias-Espinoza D et al (2017) DSP-based arrhythmia classification using wavelet transform and probabilistic neural network. Biomed Signal Process Control 32:44–56
17. Alfarhan KA, Mashor MY, Saad M, Rahman A, Omar MI (2018) Wireless heart abnormality monitoring kit based on Raspberry Pi. J Biomimetics Biomater Biomed Eng 35:96–108
18. Hassan AM, Khalaf AF, Sayed KS, Li HH, Chen Y (2018) Real-time cardiac arrhythmia classification using memristor neuromorphic computing system. In: 2018 40th annual international conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, pp 2567–2570
19. Nath US, Arunima CR, Jaisha R, Amjad UP, Monteiro JI (2018) A standalone open hardware system for ECG detection and classification. In: 2018 9th international conference on computing, communication and networking technologies (ICCCNT). IEEE, pp 1–5
20. Ihsanto E, Ramli K, Sudiana D (2019) Real-time classification for cardiac arrhythmia ECG beat. In: 2019 16th international conference on quality in research (QIR). IEEE, pp 1–5
21. Pan J, Tompkins WJ (1985) A real-time QRS detection algorithm. IEEE Trans Biomed Eng 32:230–236
22. Moody GB, Mark RG (1990) The MIT-BIH arrhythmia database on CD-ROM and software for use with it. In: Computers in cardiology 1990. IEEE, pp 185–188
23. Katz MJ (1988) Fractals and the analysis of waveforms. Comput Biol Med 18:145–156
24. Peitgen H-O, Jürgens H, Saupe D (2012) Fractals for the classroom: part two: complex systems and Mandelbrot set. Springer, New York
25. Hall P, Wood A (1993) On the performance of box-counting estimators of fractal dimension. Biometrika 80:246–251
26. Gneiting T, Ševčíková H, Percival DB (2012) Estimators of fractal dimension: assessing the roughness of time series and spatial data. Stat Sci 27:247–277
27. Batista GEdeAPA, Wang X, Keogh EJ (2011) A complexity-invariant distance measure for time series. In: Proceedings of the 2011 SIAM international conference on data mining. SIAM, pp 699–710
28. Hausser J, Strimmer K (2009) Entropy inference and the James-Stein estimator, with application to nonlinear gene association networks. J Mach Learn Res 10:1469–1484
29. Miller G (1955) Note on the bias of information estimates. In: Information theory in psychology: problems and methods

Low Processing Power Algorithm to Segment Tumors in Mammograms R. E. Q. Vieira, C. M. G. de Godoy, and R. C. Coelho

Abstract

The diagnosis based on mammography depends relevantly upon the radiologist's experience and accuracy in identifying shapes and tenuous contrast in the images. A possible improvement to mammography consists of image analysis algorithms suitable for detecting and segmenting tumors. However, such an approach usually demands high computational processing. In this context, the objective of the present work was to develop an algorithm to accurately search for and segment tumors in mammography images without requiring high processing power. Efficient, low computational demand, automatic image segmentation was achieved by performing traditional image processing, including uniform equalization, adaptive enhancement (CLAHE), simple thresholding, Otsu multiple thresholds, and morphology operators. The development of the algorithm is divided into two steps. The first step was the isolation of the breast and the removal of the pectoral muscle: the breast isolation was performed by removing artifacts outside of the breast image, and the pectoral muscle is removed because this region presents gray levels similar to the tumor. The second step segmented the tumor mass. The algorithm was validated by determining the accuracy and the Dice similarity coefficient, whose values (mean ± standard deviation) were, respectively, 0.75 ± 0.09 and 0.71 ± 0.11. These average values resulted from processing 150 images of the CBIS-DDSM database, signifying a good segmentation outcome. As for the processing speed, the algorithm spent 14.5 s to segment a tumor in an image of 5383 × 3190 pixels (16 bits) using a conventional computer. The tool was able to segment both tiny and large tumors, and it may represent a convenient approach to assist in the analysis of mammograms without requiring high computational resources.

R. E. Q. Vieira, C. M. G. de Godoy, R. C. Coelho (corresponding author)
Technology and Science Institute, Universidade Federal de São Paulo, São José dos Campos, SP, Brazil
e-mail: [email protected]

Keywords

Image segmentation, Mammography, Image processing, Breast cancer

1 Introduction

Image segmentation is a recurrent image processing technique beneficial for detecting breast cancer in a digital mammogram [1]. Mammography is the most used test in breast cancer diagnosis; if diagnosed early, breast cancer has a high chance of treatment success [2]. However, the diagnosis based on mammography depends on the radiologist's experience and accuracy in identifying shapes and tenuous contrast in the images [3]. Thus, the diagnosing process can be significantly impaired by inter-observer variations among distinct clinicians. The main mistakes in interpreting mammograms happen due to the heterogeneity of the breast parenchyma, anatomical noise arising from dense tissue masking [4], and the image nuances of some breast cancer types [5]. Data from the National Cancer Institute (USA) show that mammography detects about 80% of tumors; the remaining 20% are false negatives, causing complications in the treatment as the diagnosis occurs at more advanced stages of the tumor [2]. Concerning correct and early diagnosis, it is necessary to create mechanisms to assist in the analysis of mammograms by incorporating computer-aided diagnosis (CAD). Such assistance should increase the accuracy of the diagnosis, generating fewer false-negative results and thus allowing tumors to be treated at less evolved stages. That considerably increases the success of the treatment and, therefore, reduces the number of deaths.

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_271


A possible improvement to assist in mammography consists of image analysis algorithms suitable for detecting and segmenting tumors. Some works refer to algorithms that highlight parts of the mammogram to determine a region of interest where the tumor should be located [6], while others apply adaptive enhancement to give more contrast to the image, facilitating its analysis [7]. Many studies create algorithms that take the mammography image as input and return the precisely targeted tumor as output [7]. In the work of Zhu and colleagues [8], the authors used a convolutional network to train a potential function, followed by a conditional random field to carry out structured learning. In the work of Singh and colleagues [9], the authors used a conditional Generative Adversarial Network (cGAN) to segment the tumor within a region of interest (ROI). However, such approaches demand high processing power [8, 9], complicating their implementation on simpler computational platforms. Concerning those issues, the objective of the present work was to develop an algorithm to accurately search for and segment tumors in mammography images without requiring high processing power.

2 Materials and Methods

Efficient, low computational demand, automatic image segmentation was achieved by performing traditional image processing approaches in the following pipeline: breast isolation, pectoral muscle removal, tumor mass segmentation, and validation. This pipeline allowed us to focus just on the breast region, eliminating the pectoral muscle and segmenting the tumor region.

2.1 Database

The database used consists of 150 gray-level mammography images obtained in the mediolateral oblique view. The images originate from the CBIS-DDSM (Curated Breast Imaging Subset of DDSM) database, an updated version of the Digital Database for Screening Mammography (DDSM) that contains more than 2600 images in DICOM format depicting normal, benign, and malignant cases with verified pathology information [2]. This work only considered images of breasts with malignant tumors. All images have ground-truth segmentation provided by the database.

2.2 Preprocessing

Some mammography images have white bands at their edges that can impair the analysis and compromise the

segmentation process. Thus, concerning breast isolation, the initial step consisted of removing 5% of the total number of lines from the top and bottom of the image, and 3% of the total number of columns from both sides. There are other artifacts outside of the breast region (such as tags) that must be cleared out. Thus, the image is binarized: all pixels with intensity higher than 20% of the maximum intensity become one, and the others become zero. Only the largest connected component is kept as a result of this processing, as it corresponds to the breast region.

The approach to find and segment the tumor consists primarily of techniques that analyze the intensity of the image pixels. Thus, it is essential to clear out the pectoral muscle before the tumor segmentation: the image of this muscle is a region of high or medium intensity which, if maintained, could compromise the analyses. First, an adaptive enhancement tool evidences the separation between the muscle and the rest of the tissues. The enhancement used was Contrast Limited Adaptive Histogram Equalization (CLAHE) with tiles (2, 2) and clip limit 0.02 [6]. This method performs a local contrast enhancement by equalizing the histogram in sub-areas of the image: it divides the image into regions and applies the histogram equalization to each part individually. This approach makes hidden image characteristics more visible because of the balance of gray tones used [6]. As the muscle region is brighter than some breast regions after the equalization process, the algorithm removes dark regions by using the Otsu multiple thresholds technique: 14 thresholds result from dividing the image into 15 intensity ranges. The first bands contain the darkest pixels, while the last ones contain the brightest. All pixels belonging to the first nine bands are cleared out, as they do not belong to the pectoral muscle.
As for eliminating other dark pixels remaining in the image, the approach consists of calculating the median intensity of all pixels, except those with an intensity equal to 0; then, all pixels with an intensity less than the calculated median are cleared out. At this point, an image containing only high-intensity pixels results. A uniform equalization is then applied, aiming to distribute the gray levels better and highlight differences not visible before. The approach to isolate and remove the muscle consists of recalculating the median of the pixels, considering only the pixels remaining in the image and higher than zero. Then, a new threshold is calculated, setting to one all pixels with an intensity higher than 40% of the median and to zero the other pixels. The remaining connected components distant from the first lines of the image (those that do not have a pixel within the first 20% of lines) are cleared out. Among the


Fig. 1 Block diagram showing the preprocessing steps

remaining components, the one with the highest median intensity is kept, as the muscle is usually a brighter region in the image. This resulting image is then subtracted from the highlighted original image. Finally, the resulting preprocessed image is the highlighted original image without the pectoral muscle and the low-intensity pixels. Figure 1 shows the block diagram with the main preprocessing steps.
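The opening crop-and-binarize step of this pipeline can be sketched as follows. This is a simplified illustration on nested Python lists (the original implementation is in MATLAB), and the helper name `crop_and_binarize` is hypothetical; only the 5%/3% margins and the 20%-of-maximum threshold come from the text.

```python
def crop_and_binarize(image, row_frac=0.05, col_frac=0.03, thresh_frac=0.20):
    """First preprocessing steps as described in the paper: drop 5% of the
    rows at the top and bottom, 3% of the columns on each side, then
    binarize at 20% of the maximum remaining intensity.

    `image` is a list of equal-length rows of pixel intensities; the result
    is a 0/1 mask whose largest connected component would correspond to
    the breast region.
    """
    rows, cols = len(image), len(image[0])
    dr, dc = int(rows * row_frac), int(cols * col_frac)
    cropped = [row[dc:cols - dc] for row in image[dr:rows - dr]]
    peak = max(max(row) for row in cropped)
    cutoff = thresh_frac * peak
    return [[1 if v > cutoff else 0 for v in row] for row in cropped]
```

The subsequent largest-connected-component selection, CLAHE, and multi-Otsu steps would then operate on this mask and the cropped image.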

2.3 Tumor Segmentation

In this step, considering the preprocessing outcome, pixels with a lower intensity than the median (disregarding pixels with zero intensity) are cleared out. As the remaining pixels have intensities concentrated at the highest levels, a uniform equalization is performed to redistribute the gray levels (Fig. 2a). Next, the image is thresholded: pixels with intensities higher than 1.2 × median_intensity become one, and the others become zero (Fig. 2b). At this step, it is common to have several connected components joined in an undesirable manner by tenuous connections, so the algorithm applies a small erosion with a disk of size 1 to separate them. Note, in Fig. 2c, that many small fragments remain in the image; therefore, all connected components that represent less than 0.04% of the total image area are cleared out. The selection of the 10 connected components with the highest median intensity allows choosing which of the remaining connected components represent the tumor (Fig. 2d). Then, the goal is to choose connected components resembling a circle. For this, the algorithm finds the smallest convex polygon that fits each component and a circle around it, whose center is the midpoint of the component and whose radius is the distance from the center to the most distant vertex of the convex polygon (highlighted in Fig. 2e). The ratio between the area of the component and the area of the circle expresses the similarity between them, as a perfectly circular component would have a ratio equal to one. The component with the highest ratio (component t) is selected, together with the others whose ratio is similar to that of component t (difference smaller than 0.05). Among these remaining connected components, the tumor will be the one with the highest mean gray-level intensity (Fig. 2f). A dilation applied to the chosen connected component compensates for the erosion previously used (Fig. 2g). Finally, the edges of the found tumor are plotted on the original image together with the specialist tracing, as shown in Fig. 2h: in red, the contour found by the tool proposed in this work and, in yellow, the mark indicated by the ground truth.
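The circularity test can be sketched as below. This simplification measures the radius from the component's centroid to its farthest pixel rather than building the convex polygon explicitly; for a filled component the farthest pixel is always a convex-hull vertex, so the radius agrees, but the center and helper name are assumptions, not the authors' MATLAB code.

```python
import math

def circularity_ratio(pixels):
    """Ratio between a connected component's area and the area of the
    circle centred at the component's centroid whose radius reaches its
    farthest pixel. A ratio near 1 indicates a circular blob; elongated
    fibrous structures score much lower.

    `pixels` is a list of (row, col) coordinates belonging to the component.
    """
    area = len(pixels)
    cy = sum(p[0] for p in pixels) / area
    cx = sum(p[1] for p in pixels) / area
    radius = max(math.hypot(p[0] - cy, p[1] - cx) for p in pixels)
    circle_area = math.pi * radius ** 2
    return area / circle_area if circle_area else 1.0
```

A filled disc scores close to 1, while a thin strip scores near zero, which is the discrimination the selection step relies on.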

3 Validation Method

The algorithm was validated by using the accuracy and the Dice similarity coefficient, which evaluates the similarity between two figures [10]. The accuracy corresponds to the percentage of pixels in the image which are correctly segmented. Considering the numbers of true positive (TP), true


Fig. 2 Tumor segmentation process. a Equalized image; b median-based threshold image; c result after erosion; d result after removing small connected components; e enlarged image of the circularity analysis of the component that represents the tumor; f connected component represented as a tumor; g dilation of the chosen connected component; h final result of the segmentation (red line) superimposed on the initial image (the yellow line is the gold standard)



negative (TN), false positive (FP) and false negative (FN), the accuracy is:

Acc = (TP + TN) / (TP + FN + FP + TN)   (1)

The values of TP, FP, and FN are normalized by the number of pixels of the ground truth. TN is normalized by the number of pixels of the image minus the number of pixels of the ground truth. The Dice coefficient concerns the overlap area of two figures. Considering A the region segmented by the algorithm and B the mask provided by the specialist, the Dice is:

Dice(A, B) = 2 |A ∩ B| / (|A| + |B|)   (2)

The values of these indicators are between 0 and 1.
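A minimal sketch of Eqs. 1 and 2 on flat binary masks follows. Note that, unlike the normalized counts described above, this illustration uses raw pixel counts, and the function name is hypothetical.

```python
def acc_and_dice(seg, gt):
    """Accuracy (Eq. 1) and Dice coefficient (Eq. 2) for two flat binary
    masks of equal length (1 = tumor pixel, 0 = background)."""
    tp = sum(1 for s, g in zip(seg, gt) if s == 1 and g == 1)
    tn = sum(1 for s, g in zip(seg, gt) if s == 0 and g == 0)
    fp = sum(1 for s, g in zip(seg, gt) if s == 1 and g == 0)
    fn = sum(1 for s, g in zip(seg, gt) if s == 0 and g == 1)
    acc = (tp + tn) / (tp + fn + fp + tn)
    # Dice: twice the overlap divided by the sum of the two areas
    total = sum(seg) + sum(gt)
    dice = 2 * tp / total if total else 1.0
    return acc, dice
```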

4 Processing Speed Evaluation

The segmentation tool was implemented in MATLAB, version 2015a. It ran on a desktop with an Intel Core i7-3520M CPU at 2.90 GHz and 8 GB of RAM. The processing speed evaluation of the segmentation tool consisted of determining the average time to process a


mammogram image of 5383 × 3190 pixels, 16 bits (the mean size of the images used in this work).

5 Results and Discussion

Figure 3 illustrates part of the results obtained by the proposed algorithm. The images show that the algorithm segmented tumors fairly close to the ground truth. Table 1 presents the values (mean and standard deviation) of accuracy, Dice, false positive, false negative and true positive obtained after processing 150 mammograms. The accuracy (0.75) indicates that the segmentation performed by the algorithm is comparable to manual segmentation. According to the literature, Dice values higher than 0.7 represent good segmentation results [10], which indicates that the present algorithm succeeded in segmenting the cancer region in mammography images. Concerning this score, Zhu and colleagues [8] obtained a Dice of 90.97% for the INbreast dataset and 91.30% for DDSM-BCRP. However, unlike the present algorithm, their technique demands high computational power (it uses a convolutional network) and an ROI very close to the tumor instead of a complete mammography image. Table 1 also shows that the segmented area is, on average, 27% (FP = 0.27) or 31% (FN = 0.31) not superimposed on the ground truth. In contrast, the algorithm correctly segments 69% of the ground truth (TP = 0.69). As the algorithm input is a full mammogram image, the results depicted in Table 1 are comparable to, or even better than, the results of other works that consider full images, as mentioned by Singh and colleagues [9].

The algorithm inspects the gray levels of the image and assumes that tumors are always bright regions. In some cases, the Dice was less than 0.7 because the software detects just the bright region of the tumor (see the red line in Fig. 4b), while the ground truth includes some dark regions (yellow line). For comparison, Fig. 4a shows the tumor image with no trace or segmentation. The Dice obtained in this example was 0.67. The proposed software detected both tiny and large tumors. Figure 5 shows another circumstance in which the algorithm yields a low Dice value. The zoom of the tumor region features the yellow line as the ground truth and the red line as the segmentation found by the program. It is possible to see that the tumor is tiny; in this case, any small error in detecting the contour significantly impacts the Dice result, as there are very few points to be analyzed in both areas (ground truth and detected area). In this example, the Dice was 0.52, but the error was minimal.


A relevant criterion for choosing the components that could represent the tumor concerns roundness, as it can eliminate anything that is very far from a circle, such as the long connected components of the fibrous breast regions. The algorithm maintains the connected components whose convex-hull area is close to the area of the circle circumscribing the convex hull. If the area of a connected component is not entirely different from the area of the circle that surrounds it, the algorithm selects it as a possible tumor, even if it is spread out. That means, for instance, that if there is a circular cyst, the algorithm also selects it as a possible tumor. After this circularity analysis, the algorithm correctly chooses the tumor among the remaining connected components by selecting, in the next step of the procedure, the connected component with the highest average intensity. Thus, it chooses correctly even scattered tumors. The mean accuracy of 75% achieved with the proposed low processing power algorithm appears to be a reasonable score since, in a recent work, Singh and colleagues [9] obtained 80% overall accuracy using a higher processing power approach (generative adversarial and convolutional neural networks) to segment tumors in mammograms. Concerning the processing speed, the algorithm spent 14.5 s to segment a tumor in an image of 5383 × 3190 pixels (16 bits) using a conventional computer (as described in Sect. 4). Such a low processing time to segment a tumor in a relatively big mammogram image seems plausible since, based on a theoretical comparison, Ibachir and colleagues showed that processing time and segmentation optimization are achieved by concentrating on removing the pectoral muscle and the dispensable data [11], just as implemented in the present work.

6 Conclusions

The algorithm presented in this work is a simple, fast, and automatic method to segment cancer accurately in mammography images. It may represent a convenient approach to assist in the analysis of mammograms without requiring high computational resources. A limitation of this work concerns the algorithm's difficulty in finding dark tumors, as the developed method considers the tumor a bright region. Another limitation involves the algorithm's inability to discriminate the tumor from other areas as bright as the tumor region. Also, the algorithm indicates only one tumor area for medical analysis. Thus, future works include improving the algorithm's sensitivity to detect tumor regions, extending the segmentation to other image modalities, such as 3D tomography, and testing the method with images from more recent databases.

Fig. 3 Some segmentation outcomes of the tool proposed in this paper. Yellow contours correspond to the ground truth, whereas red contours are the segmentation outcomes


Table 1 Mean and standard deviation (SD) of accuracy (Acc), Dice, false negative (FN), false positive (FP) and true positive (TP) obtained to segment 150 images

        Acc    Dice   FN     FP     TP
Mean    0.75   0.71   0.31   0.27   0.69
SD      0.09   0.11   0.19   0.38   0.19

Fig. 4 Example of a tumor that yields a low Dice value because the ground truth includes the dark area. a Image with no tumor marking; b image with tumor marking, where the yellow line is the ground truth, and the red line is the segmentation found by the proposed tool

Fig. 5 Example of a tiny tumor that yields a low Dice value. a Image with no tumor marking; b image with tumor marking, where the yellow line is the ground truth, and the red line is the segmentation found by the proposed tool

Acknowledgements The authors are grateful to FAPESP (proc. 2017/22949-3) and FINEP (Ref. 1266/2013) for the financial support that helped to equip the Biomedical Computing Laboratory (at UNIFESP—São José dos Campos/SP, Brazil) with the hardware and software resources used in the present work.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Padhi S, Rup S, Saxena S, Mohanty F (2019) Mammogram segmentation methods: a brief review. In: 2nd International conference on intelligent communication and computational techniques (ICCT). Jaipur, India, pp 218–223
2. Lee RS et al (2017) A curated mammography data set for use in computer-aided detection and diagnosis research. Sci Data 4:170–177
3. Moreira IC, Ramos I, Ventura SR, Rodrigues PP (2019) Learner's perception, knowledge and behavior assessment within a breast imaging e-learning course for radiographers. Eur J Radiol 111:47–55
4. Ekpo EU et al (2015) Breast composition: measurement and clinical use. Radiography 21(4):324–333
5. Gaur S et al (2013) Architectural distortion of the breast. Am J Roentgenol 201(5):W662–W670
6. Makandar A, Halalli B (2015) Breast cancer image enhancement using median filter and CLAHE. Int J Sci Eng Res 6(4):462–465
7. Padhi S, Rup S, Saxena S, Mohanty F (2019) Mammogram segmentation methods: a brief review. In: 2019 2nd International conference on intelligent communication and computational techniques (ICCT). Jaipur, India, pp 218–223
8. Zhu W et al (2018) Adversarial deep structured nets for mass segmentation from mammograms. In: IEEE 15th International symposium on biomedical imaging (ISBI 2018)
9. Singh VK et al (2020) Breast tumor segmentation and shape classification in mammograms using generative adversarial and convolutional neural network. Expert Syst Appl 139:112855
10. Taha AA, Hanbury A (2015) Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool. BMC Med Imaging 15(1):29
11. lbachir IA, Es-salhi R, Daoudi I, Tallal S, Medromi H (2016) A survey on segmentation techniques of mammogram images. In: International symposium on ubiquitous networking. Springer, Singapore, pp 545–556

Electromyography Classification Techniques Analysis for Upper Limb Prostheses Control F. A. Boris, R. T. Xavier, J. P. Codinhoto, J. E. Blanco, M. A. A. Sanches, C. A. Alves, and A. A. Carvalho

Abstract

The classification of surface electromyographic signals is an important task for the control of active upper-limb prostheses. This article aims to analyze and evaluate techniques to classify surface electromyographic signals for the control of upper limb prostheses. The electromyographic signals were obtained from a public database. Machine learning algorithms and seven features of the EMG signal were used to classify the signals. Random samples were created for the training and testing tasks in subsets with 80% and 20% of the data, respectively. Machine learning algorithms for classifying electromyographic signals were trained with different configurations, allowing the evaluation of combinations of techniques and parameters. It was observed that signal feature extraction is an important process for obtaining accurate results. The best result produced an average accuracy of 95% with a Random Forest classifier and three features extracted from surface electromyography signals of two channels.

Keywords

EMG classifier · Machine learning · Upper limb · Random forest · Prostheses control

1 Introduction

Electromyography is a technique for monitoring the electrical activity of the excitable membranes of muscle cells, representing the action potentials triggered by reading the electrical voltage over time. The electromyographic signal (EMG), or electromyogram, is the algebraic sum of all the signals detected under the area of the electrodes [1]. EMG can be divided into two main categories: the intramuscular electromyogram (iEMG), where measurements are made using needles inserted into the muscle, producing a signal with low interference; and the surface electromyogram (sEMG), where measurements are made using electrodes on the skin over the muscle, with the characteristic of being non-invasive [2]. Currently, there are only two commercial microcontrollers approved by the Food and Drug Administration (FDA) for the recognition of sEMG patterns used in the control of upper limb prostheses [3]. A low-cost anthropomorphic upper limb prosthesis presented in [4] was able to reproduce anatomically functional movements through a single-channel sEMG acquisition system. It is equipped with a glove with flexible sensors to generate a database of hand movements. The classification system, based on the number of muscle contractions, showed good results with healthy volunteers. An upper limb prosthesis for patients with congenital or acquired deformity was described in [5]. Its control system works in three operation modes that allow the execution and configuration of hand movements reproduced by the prosthesis. An approach to pattern recognition for the identification of basic hand movements using sEMG signals was presented by [6], subjecting the data to pre-processing and the extraction of eight features that were tested in different subsets by a linear classifier. This same data set was submitted to another approach [1], involving two additional features and two dimensionality reduction methods, increasing the overall classification accuracy and indicating that the set of features used is adequate. These data sets were made available to the public at the UCI Machine Learning Repository [7].

F. A. Boris (corresponding author), R. T. Xavier, J. P. Codinhoto, J. E. Blanco, M. A. A. Sanches, C. A. Alves, A. A. Carvalho: School of Engineering, São Paulo State University (Unesp), Ilha Solteira, Brazil
© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_272

In a study [8], nineteen time-domain features extracted from three sEMG channels were analyzed for the control of upper limb prostheses, using scatter plots and a linear classifier with different combinations of features, and comparing the accuracy of two dimensionality reduction methods. The correlation of twelve finger, hand, and wrist movements with residual muscle activity in five amputee subjects, based on the analysis of sEMG obtained by twenty-four electrodes positioned on the residual limb, was investigated, showing that with a state-of-the-art classification architecture it is possible to correctly classify the activity of the phantom limb [9]. A comparative analysis [10] was performed on four classifiers without feature extraction for the onboard control of upper limb prostheses, using signals obtained by six sEMG channels collected from thirty amputee volunteers performing five different movements; the accuracy and computational cost of each algorithm were evaluated. A technique for the control of upper limb prostheses has been proposed [11] based on the use of sEMG and inertial measurements. The data for six hand movements were collected from twenty healthy volunteers and two amputees. The classification was performed by a linear classifier based on four distinct features. The authors pointed out that the inclusion of inertial measures significantly improved the classification accuracy. In a study [12], a system for classifying fourteen movements performed by ten healthy individuals, captured by six sEMG channels, was proposed. Six techniques for extracting features combined with seven different classifiers were compared to find the set with the best accuracy. A method [13] based on a single sEMG channel was explored for the classification of movements. In this method, an envelope is extracted from the signal with a pre-processing circuit; fifteen features are extracted from each valid segment, and those with the lowest correlation coefficients are discarded; dimensionality reduction is then applied; and the classification is finally performed with a hybrid algorithm.
A system [14] was developed with three sEMG channels to acquire signals from the forearm muscles. The signal features were extracted using the power spectral density method. The results showed better accuracy when combined with a genetic algorithm for the optimization of a support vector machine classifier. A low-energy-consumption embedded system [15] was developed for the recognition of movements from eight sEMG channels. The system was validated in real time with the execution of six activities by six volunteers, including an amputee. In a later study [16], the authors proposed the extraction of a new set of features based on derivative moments to improve the accuracy of classifying upper limb movements. The tests were performed with signals generated by eight healthy volunteers from a public database of surface electromyographic signals. Three different classifiers were compared, with increased accuracy obtained in all of them.

This paper aims to analyze and evaluate techniques for classifying surface electromyography signals for upper-limb prosthesis control. A bibliographic survey to examine the state of the art was carried out on research databases. Experiments involving a public sEMG dataset and combinations of feature extraction techniques and machine learning classification algorithms were performed to evaluate and compare the classification accuracy and the computing time of the training and prediction tasks. The combination of feature extraction and correct tuning of the machine learning classification algorithms produced viable results.

2 Methodology

2.1 Bibliographical Study

The research for the survey of the state of the art was carried out in the Web of Science, Scopus, Science Direct, IEEE Xplore, PubMed, BIOMED, and IOP databases using the Google Scholar search engine. The English language was chosen, and the search criteria were classification, surface electromyography, and upper limb prosthesis. Exclusion criteria were publications related to lower limbs, electroencephalography, electrocardiography, force myography, intramuscular or intraneural signals, exoskeletons, virtual platforms, and other works whose scopes differ from this study.

2.2 Experiments Dataset

This study used the dataset published in [7] and described by [6]. It comprises 900 instances obtained from five healthy individuals (two males and three females) aged between 20 and 22 years, carrying out six handgrip movements: Cylindrical, for holding cylindrical objects; Tip, for holding small objects; Hook, for supporting a heavy load; Palmar, for grasping with the palm facing the object; Spherical, for holding spherical objects; and Lateral, for holding thin, flat objects, as shown in Fig. 1, 30 times each, with a measurement time of 6 s. The speed and strength of the grips were intentionally left to the individual's will. Two sEMG electrodes were placed on the forearm, over the Flexor Carpi Ulnaris and the Extensor Carpi Radialis (Longus and Brevis), fixed by elastic straps, with the reference electrode in the middle, in order to obtain information on muscle activation. The data were collected at a sampling rate of 500 Hz using National Instruments LabVIEW. The signals passed through a Butterworth bandpass filter with a lower cutoff frequency of 15 Hz and a higher cutoff frequency of 500 Hz, plus a 50 Hz notch filter to eliminate powerline interference. The equipment used an NI USB-6009 analog/digital conversion board connected to a microcomputer. The signal was obtained by two differential sEMG sensors and transmitted to a 2-channel Delsys Bagnoli™ Handheld EMG System [6]. The dataset was made available in MATLAB MAT-file format (.mat extension), with the samples of the five individuals in separate files, one file per individual. Each file has, in addition to metadata, 12 matrices, corresponding to the data obtained from the two sEMG channels during each of the six grasping movements; each matrix has 30 rows, corresponding to the repetitions performed for each grasping movement; and each row has 3000 columns, corresponding to the six seconds of reading at 500 Hz.

Fig. 1 Manual grasp movements [6]

2.3 Software Tools

In this work, the experiments were performed on a computer with an Intel Core i5-3210M 2.50 GHz processor and 16 GB of DDR3 1600 MHz RAM. As the software set, open-source tools were used, mainly to avoid license costs, in addition to considerations such as portability, interoperability, and documentation. In this sense, the Python programming language [17] was chosen, which has been very popular in scientific publications, mainly in the field of data analysis and machine learning.

For convenience, the Anaconda software distribution [18] was chosen, which provides a wide range of software for scientific computing, providing, among others, the Python programming language, as well as modules and general tools. The Python modules used for numerical processing, data analysis and data visualization were NumPy [19], SciPy [20], Matplotlib [21], and Pandas [22], and the module used to create the machine learning classification models was scikit-learn [23].

2.4 Dataset Shaping

As pointed out, the dataset was made available in separate files, and in separate matrices within those files. The standard database files [7] were organized into a single matrix to facilitate processing. In this task, all the rows of all the matrices of all the files were gathered into one matrix, and a column was added at the end to identify the class (grasp movement) to which each instance (row) belongs. The resulting matrix has 900 rows (5 individuals × 6 classes × 30 repetitions) by 6001 columns (6 s × 500 Hz × 2 channels + 1 class) and was stored in a file for later use.
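As a sketch of this shaping step, the stated dimensions can be reproduced with NumPy; the random arrays below stand in for the matrices loaded from the five MAT-files (the loading code and the per-row channel concatenation are illustrative assumptions consistent with the 6001-column result, not code from the paper):

```python
import numpy as np

n_subjects, n_classes, n_reps, n_samples = 5, 6, 30, 3000  # 6 s at 500 Hz per channel

rows = []
for subject in range(n_subjects):
    for grasp in range(n_classes):
        # One 30 x 6000 block per grasp: both channels side by side (assumption)
        block = np.random.randn(n_reps, n_samples * 2)
        labels = np.full((n_reps, 1), grasp)      # class column appended at the end
        rows.append(np.hstack([block, labels]))

dataset = np.vstack(rows)
print(dataset.shape)  # (900, 6001): 5 x 6 x 30 rows, 6000 signal columns + 1 class
```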

2.5 Features Extraction

Initially, the data were analyzed without any kind of pre-processing, and low accuracy was obtained with all the classifiers evaluated.


To produce better results, as presented by [24], the following features were extracted: root mean square (RMS), mean absolute value (MAV), standard deviation (SD), variance (VAR), mean frequency (MNF), zero crossing (ZC), and slope sign change (SSC). To understand the effect of the features on the dataset, a scatter plot was produced (Fig. 2), which illustrates the result of the RMS, MAV, SD, VAR, MNF, ZC, and SSC features applied to each sEMG data channel (channel 1 and channel 2), separating the expected classes (grasps) by shape and color (legend on the bottom right). It was noticed that the MNF feature presents low variation between the classes, and that the RMS, MAV, SD, and VAR features present visually similar values, which confirms the high correlation found in a previous exploratory analysis. For these reasons, it was decided to try a new configuration of features, keeping RMS, ZC, and SSC, as they present visually distinct value distributions.

Fig. 2 Scatter plots from signal extracted features
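The three retained features can be computed with short NumPy routines; these are common textbook definitions (amplitude thresholds for ZC/SSC are omitted for clarity, an assumption rather than the paper's exact implementation):

```python
import numpy as np

def rms(x):
    """Root mean square of one sEMG channel."""
    return np.sqrt(np.mean(np.square(x)))

def zero_crossings(x):
    """Count of sign changes in the signal."""
    s = np.signbit(x).astype(int)
    return int(np.sum(np.abs(np.diff(s))))

def slope_sign_changes(x):
    """Count of sign changes in the first difference (slope reversals)."""
    d = np.signbit(np.diff(x)).astype(int)
    return int(np.sum(np.abs(np.diff(d))))

x = np.sin(np.linspace(0, 4 * np.pi, 1000))   # toy signal: two full sine periods
print(rms(x), zero_crossings(x), slope_sign_changes(x))
```

On the toy sine, RMS is close to 1/√2, the signal changes sign three times within the interval, and its slope reverses four times.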


2.6 Training and Testing Data Samples

The data samples for the training and testing tasks were generated by the train_test_split() function from the sklearn.model_selection module, which generates random train and test subsets from the original dataset. The random_state parameter was fixed at zero for reproducibility. The subset sizes were 80% of the instances for the training task and 20% for the testing task.
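A minimal sketch of this split, assuming scikit-learn is installed; the synthetic feature matrix below merely stands in for the real 900-instance dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.randn(900, 6)          # e.g. RMS, ZC, SSC for each of the two channels
y = np.repeat(np.arange(6), 150)     # six grasp classes

# 80/20 split with the random seed fixed at zero, as described above
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0)

print(X_train.shape, X_test.shape)   # (720, 6) (180, 6)
```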

2.7 Classifiers Fitting

After defining the training and test subsets, the classifiers' parameters were adjusted, and the models were trained with the training sample. With the trained models, predictions were made on the test samples. Each classifier has its own adjustment settings, inherent to its nature. A total of 576 tests were performed, corresponding to the product of 144 classifier configurations and 4 feature configurations, as shown in Tables 2 and 3. The list of Python sklearn module classifiers evaluated in this work is shown in Table 1. Table 2 presents the configuration parameters used in the classifiers during the creation of the models; the parameters were chosen after preliminary tests. Table 3 presents the feature configurations used to carry out training and tests. All of these sets have the same number of instances; however, the amount of information per instance is defined by the columns Signal, RMS, ZC and SSC, which indicate whether the corresponding data were included in the data sets or not.

Table 1 Tested sklearn classifiers

Table 2 Tested classifiers parameters

Classifier                             sklearn classifier
LR   Logistic regression               sklearn.linear_model.SGDClassifier()
LDA  Linear discriminant analysis      sklearn.discriminant_analysis.LinearDiscriminantAnalysis()
QDA  Quadratic discriminant analysis   sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis()
KNN  K-nearest neighbor                sklearn.neighbors.KNeighborsClassifier()
DT   Decision tree                     sklearn.tree.DecisionTreeClassifier()
RF   Random forest                     sklearn.ensemble.RandomForestClassifier()
SVM  Support vector machine            sklearn.svm.SVC()
MLP  Multi-layer perceptron            sklearn.neural_network.MLPClassifier()

Classifier   Tested parameters
LR           loss: log
LDA          solver: svd, lsqr
QDA          default parameters
KNN          n_neighbors: 1, 3, 4, 7, 9
DT           criterion: gini, entropy
RF           n_estimators: 10, 100, 1000; oob_score: True
SVM          C: 0.5, 1, 10, 100; kernel: rbf
MLP          hidden_layer_sizes: (4, 4), (4, 4, 4), (8, 8), (8, 8, 8), (16, 16), (16, 16, 16), (32, 32), (32, 32, 32), (64, 64), (64, 64, 64), (100, 100), (200, 200), (300, 300), (600, 600); activation: relu, tanh, logistic; solver: adam, lbfgs, sgd; max_iter: 1000
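The 144 × 4 = 576 run count stated in Sect. 2.7 can be tallied directly from the parameter lists in Table 2 (the per-classifier counts below follow that table):

```python
# Cross-check of the experiment count: 144 classifier configurations
# times 4 feature datasets = 576 runs.
configs = {
    "LR": 1,            # loss: log
    "LDA": 2,           # solver: svd or lsqr
    "QDA": 1,           # default parameters only
    "KNN": 5,           # five n_neighbors values
    "DT": 2,            # criterion: gini or entropy
    "RF": 3,            # three n_estimators values
    "SVM": 4,           # four C values with the rbf kernel
    "MLP": 14 * 3 * 3,  # 14 layer layouts x 3 activations x 3 solvers
}
n_classifier_configs = sum(configs.values())
n_feature_datasets = 4
print(n_classifier_configs, n_classifier_configs * n_feature_datasets)  # 144 576
```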



Table 3 Tested features configurations

No.   Dataset       Includes signal   RMS   ZC    SSC
1     sEMG signal   Yes               Yes   Yes   Yes
2     sEMG signal   Yes               No    No    No
3     sEMG signal   No                Yes   Yes   Yes
4     sEMG signal   No                Yes   No    No

3 Results and Discussion

The test accuracy can be visualized, among other ways, by plotting a confusion matrix [25], which can be generated with the plot_confusion_matrix() function of the sklearn.metrics module. A confusion matrix is a square matrix that reports the performance of a learning algorithm, showing the counts of a classifier's true positive, true negative, false positive, and false negative predictions [26]. Figure 3 shows the confusion matrix of an RF classifier model with the RMS, ZC and SSC feature set, which produced an average accuracy of 95%. The 20 best results among all 576 tested configurations (144 combinations of classifier parameters by 4 feature datasets) are shown in Table 4, ranked by the highest values in the Accuracy column and the lowest values in the Test and Training columns, respectively. Among the 20 best results, the accuracy varied from 95 to 80%, which are good results. Note that dataset 3, built with the RMS, ZC and SSC features, and dataset 1, built with the original sEMG signal plus the RMS, ZC and SSC features, together produced the 18 best results. The best result was obtained by an RF classifier, with 95% accuracy, shown in Fig. 3. It is noticed that the RF, KNN, SVM, DT and MLP classifiers produce satisfactory accuracy with the appropriate features and configurations. The best accuracies obtained from the LR, LDA and QDA classifiers were 21.11%, 39.44%, and 75.56%, respectively. Dataset 2, built with the pure sEMG signal, did not appear among the best results; its best result was 43.89% accuracy with an RF classifier, showing that feature extraction is an important step in sEMG classification.

Fig. 3 Confusion matrix: 95% accuracy from RF classifier
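The counting behind a confusion matrix such as Fig. 3 can be reproduced without plotting. This NumPy sketch is a simplified stand-in for sklearn.metrics.confusion_matrix (newer scikit-learn releases replaced plot_confusion_matrix with ConfusionMatrixDisplay); the toy labels are illustrative:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
print(cm)

accuracy = np.trace(cm) / cm.sum()   # correct predictions lie on the diagonal
print(accuracy)                      # 4 of 6 correct
```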

4 Conclusions

This work presented a study on the analysis and implementation of predictive models for the classification of sEMG signals using open-source software tools and a public signal database. Based on the results obtained, it is noticed that feature extraction improves the results. The best result was 95% average accuracy (Fig. 3), obtained by an RF classifier with the RMS, ZC and SSC feature set, which can be considered a good result when compared with the 89.2% overall accuracy obtained by [1], who used the same sEMG dataset. Comparing results was important to identify the best configurations in terms of accuracy and computing time among the algorithms and features analyzed in this study. The classification of bioelectric signals is an attractive field of research, given the number of publications in this field in recent years. Some future works can be proposed, such as the analysis of pre-processing techniques, the implementation of control systems for active prostheses, and the implementation of simulation systems with human-computer interaction.


Table 4 Best 20 results on tests

Classifier   Parameters                                                Dataset   Accuracy (%)   Test (s)   Train (s)
RF           n_estimators = 100                                        3         95.00          0.1028     0.2833
RF           n_estimators = 1000                                       3         95.00          0.2087     1.6658
KNN          n_neighbors = 5                                           3         90.56          0.1132     0.0007
KNN          n_neighbors = 3                                           3         90.56          0.1132     0.0007
KNN          n_neighbors = 5                                           1         90.00          0.2161     0.1220
SVM          C = 100.0                                                 1         90.00          0.7922     5.9878
SVM          C = 10.0                                                  1         90.00          0.8145     6.2223
KNN          n_neighbors = 7                                           1         89.44          0.2180     0.1284
SVM          C = 1.0                                                   1         89.44          0.8300     6.0215
KNN          n_neighbors = 3                                           1         88.89          0.2240     0.1212
KNN          n_neighbors = 9                                           1         88.33          0.2177     0.1221
KNN          n_neighbors = 1                                           3         88.33          0.1109     0.0007
KNN          n_neighbors = 7                                           3         88.33          0.1192     0.0007
KNN          n_neighbors = 9                                           3         88.33          0.1224     0.0007
KNN          n_neighbors = 1                                           1         87.78          0.2206     0.1147
RF           n_estimators = 10                                         3         87.22          0.1121     0.1206
DT           criterion = gini                                          3         86.67          0.0001     0.0034
DT           criterion = entropy                                       3         82.22          0.0001     0.0060
MLP          hidden_layers = (32, 32), transfer = tanh, solver = lbfgs 4         80.00          0.0005     1.7374
KNN          n_neighbors = 5                                           4         80.00          0.1155     0.0006

References

1. Sapsanis C, Georgoulas G, Tzes A (2013) EMG based classification of basic hand movements based on time-frequency features. In: 21st Mediterranean conference on control and automation, vol 21, pp 716–722. https://doi.org/10.1109/MED.2013.6608802
2. Chowdhury R, Reaz M, Ali M, Bakar A et al (2013) Surface electromyography signal processing and classification techniques. Sensors 13:12431–12466. https://doi.org/10.3390/s130912431
3. Calado A, Soares F, Matos D (2019) A review on commercially available anthropomorphic myoelectric prosthetic hands, pattern-recognition-based microcontrollers and sEMG sensors used for prosthetic control. In: 2019 IEEE International conference on autonomous robot systems and competitions (ICARSC), pp 1–6. https://doi.org/10.1109/ICARSC.2019.8733629
4. Xavier RT, Boris FA, Castro FR et al (2016) Prótese de membro superior com movimentos pré-definidos pelo usuário. XXV Congr Bras Eng Bioméd 25:856–859. ISSN 2359-3164
5. Xavier RT, de Carvalho AA, Rohmer E et al (2019) Upper limb prosthesis for patients with congenital or acquired deformity. XXVI Congr Bras Eng Bioméd 26:723–728. https://doi.org/10.1007/978-981-13-2119-1_111
6. Sapsanis C, Georgoulas G, Tzes A et al (2013) Improving EMG based classification of basic hand movements using EMD. In: 35th Annual international conference of the IEEE engineering in medicine and biology society (EMBC), vol 35, pp 5754–5757. https://doi.org/10.1109/EMBC.2013.6610858
7. Dheeru D, Karra Taniskidou E. UCI machine learning repository at https://archive.ics.uci.edu/ml
8. Negi S, Kumar Y, Mishra VM (2016) Feature extraction and classification for EMG signals using linear discriminant analysis. In: 2nd International conference on advances in computing, communication, and automation (ICACCA), vol 2, pp 1–6. https://doi.org/10.1109/ICACCAF.2016.7748960
9. Jarrassé N, Nicol C, Touillet A et al (2017) Classification of phantom finger, hand, wrist, and elbow voluntary gestures in transhumeral amputees with sEMG. IEEE Trans Neural Syst Rehabil Eng 25(1):68–77. https://doi.org/10.1109/TNSRE.2016.2563222
10. Bellingegni AD, Gruppioni E, Colazzo G et al (2017) NLR, MLP, SVM, and LDA: a comparative analysis on EMG data from people with trans-radial amputation. J NeuroEngineering Rehabil 14(1):82. https://doi.org/10.1186/s12984-017-0290-6
11. Krasoulis A, Kyranou I, Erden MS et al (2017) Improved prosthetic hand control with concurrent use of myoelectric and inertial measurements. J NeuroEngineering Rehabil 14(1):71. https://doi.org/10.1186/s12984-017-0284-4
12. Phukpattaranont P, Thongpanja S, Anam K et al (2018) Evaluation of feature extraction techniques and classifiers for finger movement recognition using surface electromyography signal. Med Biol Eng Comput 56:2259–2271. https://doi.org/10.1007/s11517-018-1857-5
13. Wu Y, Liang S, Zhang L et al (2018) Gesture recognition method based on a single-channel sEMG envelope signal. EURASIP J Wirel Commun Netw 2018:35. https://doi.org/10.1186/s13638-018-1046-0
14. Yang S, Chai Y, Ai J et al (2018) Hand motion recognition based on GA optimized SVM using sEMG signals. In: 11th International symposium on computational intelligence and design (ISCID), vol 11, pp 146–149. https://doi.org/10.1109/ISCID.2018.10134
15. Pancholi S, Joshi AM (2019) Electromyography-based hand gesture recognition system for upper limb amputees. IEEE Sens Lett 3:1–4. https://doi.org/10.1109/LSENS.2019.2898257
16. Pancholi S, Joshi AM (2019) Time derivative moments based feature extraction approach for recognition of upper limb motions using EMG. IEEE Sens Lett 3:1–4. https://doi.org/10.1109/LSENS.2019.2906386
17. Van Rossum G, Drake FL (2009) Python 3 reference manual. CreateSpace, Scotts Valley
18. Anaconda software distribution at https://docs.anaconda.com/
19. Oliphant TE (2006) Guide to NumPy. Trelgol Publishing
20. Virtanen P, Gommers R, Oliphant TE et al (2020) SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat Methods 17:261–272. https://doi.org/10.1038/s41592-019-0686-2
21. Hunter JD (2007) Matplotlib: a 2D graphics environment. Comput Sci Eng 9:90–95. https://doi.org/10.1109/MCSE.2007.55
22. McKinney W (2010) Data structures for statistical computing in Python. In: Proceedings of the 9th Python in science conference, pp 51–56. https://doi.org/10.25080/Majora-92bf1922-00a
23. Pedregosa F, Varoquaux G, Gramfort A et al (2011) Scikit-learn: machine learning in Python. J Mach Learn Res 12:2825–2830
24. Jahan M, Manas M, Sharma BB et al (2015) Feature extraction and pattern recognition of EMG-based signal for hand movements. In: International symposium on advanced computing and communication (ISACC), pp 49–52. https://doi.org/10.1109/ISACC.2015.7377314
25. James G, Witten D, Hastie T et al (2013) An introduction to statistical learning. Springer, New York
26. Raschka S (2016) Python machine learning: unlock deeper insights into machine learning with this vital guide to cutting-edge predictive analytics. Packt Publishing, Birmingham

EEG-Based Motor Imagery Classification Using Multilayer Perceptron Neural Network S. K. S. Ferreira, A. S. Silveira and A. Pereira

Abstract

Signals derived from brain activity can be used as commands to control an external device or application in Brain-Computer Interface (BCI) systems. Electroencephalography (EEG) is widely used to record brain signals due to its non-invasive nature, relatively low cost, and high temporal resolution. BCI performance depends on choices regarding the available options for signal pre-processing, classifiers, and feature extraction techniques. In this paper, we describe the use of an Artificial Neural Network (ANN) based on a Multilayer Perceptron (MLP) architecture as a classifier to identify motor imagery tasks using EEG signals from nine subjects of an experimental data set. BCIs based on brain signals recorded during motor imagery tasks use the changes in amplitude of specific cortical bands as features. Moreover, we evaluated the effect of systematically decreasing the number of inputs (EEG channels) on the classifier performance. The results show that an MLP classifier was able to segregate the EEG signatures of four motor imagery tasks with at least 70% accuracy using at least 12 EEG channels.

Keywords

Motor imagery • MLP • Periodogram • DWT

1 Introduction

Motor imagery is a mental task in which subjects imagine themselves performing motor actions without actually moving their body. This type of imagery activates the same cortical motor areas as the performance of real movement (cortical activation) [1]. Electroencephalography (EEG) signals recorded during motor imagery tasks can be used to implement brain-computer interfaces, as well as for motor rehabilitation protocols [2]. During motor imagery, Event-Related Desynchronization (ERD) and Event-Related Synchronization (ERS) can be observed in the mu (8–12 Hz) and beta (13–30 Hz) rhythms of the EEG signal [3]. The mu rhythm is related to alpha activity over the sensorimotor cortex. Desynchronization indicates suppression of oscillatory activity (energy decrease) in a particular frequency band, whereas synchronization refers to an increase of oscillatory activity. These changes in the amplitude of specific EEG bands have been used as input for Brain-Computer Interface (BCI) systems [4,5]. In addition to mu and beta, previous works have also included the delta rhythm (0.5–4 Hz) [6]. The elicited ERD/ERS patterns are topographically organized along the cortical surface. Hence, BCI-related motor imagery usually includes limb and tongue movements, because these movements engage relatively large cortical areas and are easier to isolate with surface EEG electrodes [7]. In particular, the pattern of brain activation during motor imagery of hand movements is lateralized: motor imagery of one hand causes ERD in the contralateral sensorimotor cortex (e.g., left-hand imagery produces ERD over the right hemisphere), and vice versa. Thus, motor imagery of hand movements can be associated with a corresponding increase/decrease of EEG energy in the C4/C3 electrodes, which are usually located over the regions representing hand movements in the sensorimotor cortex [3]. This study investigates the performance of a Multilayer Perceptron (MLP) Artificial Neural Network (ANN), built with 15 neurons in its hidden layer, during the classification of four motor imagery tasks: movements of the left hand, the right hand, the feet, and the tongue.

S. K. S. Ferreira (corresponding author, e-mail: [email protected]), A. Pereira: Institute of Technology, Federal University of Pará, Belém, Brazil. A. S. Silveira: Laboratory of Control and Systems, Federal University of Pará, Belém, Brazil
In order to test the classifier, we used data of nine subjects obtained from data set A of the BCI Competition IV. The feature We extracted from the dataset was the individual subject’s average power of the mu/alpha and beta EEg frequency ranges using periodogram

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_273


S. K. S. Ferreira et al.

as the spectral estimation method and the Wavelet Energy Spectrum (WES). We performed three simulations using features extracted from mu, beta, and a combination of both frequency ranges. Also, in order to investigate strategies to reduce computational cost and system complexity, we systematically reduced the number of channel inputs while measuring the effect on performance accuracy [6].

2 Materials and Methods

2.1 Feature Extraction

In EEG analysis, power spectral density (PSD) is usually calculated separately for each standard range (e.g. delta, theta, alpha, mu, beta, and gamma) (V²/Hz) [8]. The periodogram estimates the PSD using the squared magnitude of the Discrete-Time Fourier Transform (DTFT) and is defined as:

$P_x(\omega) = \frac{1}{N}\,|DTFT(x[n])|^2 = \frac{1}{N}\left|\sum_{n=0}^{N-1} x[n]\,e^{-j\omega n}\right|^2$  (1)

where $P_x(\omega)$ is the PSD, $N$ is the total number of samples of the EEG signal $x[n]$ from one channel, and $\omega$ is the frequency, $-\pi < \omega < \pi$. The Fast Fourier Transform algorithm is used to compute the DTFT and, as a result, the periodogram [9]. The use of an appropriate window function can reduce smearing and leakage effects. Considering a window function $W$ of length $L$, $x_W[n] = W[n]x[n]$ represents a windowed data segment of the EEG signal $x[n]$. The periodogram (1) of this segment is then:

$P_{x_W}(\omega) = \frac{1}{L}\,|DTFT(x_W[n])|^2 = \frac{1}{L}\left|\sum_{n=0}^{L-1} W[n]x[n]\,e^{-j\omega n}\right|^2$  (2)

Previous studies indicate that, for the spectral analysis of biomedical signals, the most appropriate windows are the rectangular and Hanning windows [10]. In the present work, we used a Hanning window of length 250 to compute the periodogram of each data segment, with a bin size of 100 ms (25 samples). By integrating the PSD within a frequency range, we obtain the average power contained in that frequency interval. In the present work, the area under the PSD curve of the mu/alpha/beta rhythms is decomposed into rectangles to approximate the integral, i.e., the average power of each band. In addition to average power, we calculated the WES with a Discrete Wavelet Transform (DWT) implementation, which decomposes a signal into multiple levels $j = 1, 2, \ldots, J$ with respective frequency components and is used especially for

EEG signals due to their non-stationary and non-linear characteristics [11]. The DWT is defined as:

$DWT(j,k) = \frac{1}{\sqrt{2^{j}}} \int_{-\infty}^{\infty} x(t)\, \psi\!\left(\frac{t - 2^{j}k}{2^{j}}\right) dt$  (3)

where $x(t)$ is the signal captured at each EEG channel, $\psi$ is a wavelet function, and $j$ is the decomposition level. In this study, the 4th-order Daubechies wavelet (db4) is used. The DWT implementation developed by [12] involves the use of Low-Pass (LP) and High-Pass (HP) filter pairs called quadrature mirror filters. In the first step, the EEG signal is passed through both the LP and HP filters, with cut-off frequency equal to one fourth of the sampling frequency $F_s$. This step yields the approximation ($cA_j$) and detail ($cD_j$) coefficients of the first level ($j = 1$). The same procedure can be repeated until the desired decomposition level is reached. The EEG signals used in the present work were sampled at 250 Hz ($F_s$). Following the Nyquist sampling theorem, $F_s \geq 2F_{max}$, the maximum useful frequency equals half the sampling frequency, i.e., 125 Hz. Thus, as a result of the DWT decomposition, the $cD_4$ (7.8125–15.625 Hz) wavelet coefficients correspond to the mu/alpha rhythm and the $cD_3$ (15.625–31.25 Hz) coefficients to the beta rhythm. The WES at scale $j$ and instant $k$ is the square of the wavelet transform coefficients, $E_{jk} = d_{jk}^2$, where $d_{jk}$ denotes the approximation coefficient $cA_{jk}$ or detail coefficient $cD_{jk}$. The sum of the $E_{jk}$ composes the wavelet spectrum at scale $j$ for $cA_{jk}$ and $cD_{jk}$. The WES for the detail coefficients is given by [13,14]:

$E_j^{detail} = \sum_{k=1}^{N/2^{j}} \left(cD_{jk}\right)^2$  (4)

In the present work, the WES of the $cD_3$ and $cD_4$ coefficients is obtained by sliding a fixed time window [13]. This procedure allows the variations in wavelet energy to be followed over time [15]. Hence, given the wavelet coefficients

$D = \{d_{jk}\},\quad k = 1, 2, \ldots, N;\ j = 1, 2, \ldots, J$  (5)

the sliding window can be written as

$W(m; w, \delta) = \{d_{jk}\},\quad k = 1 + m\delta, \ldots, w + m\delta$  (6)

where $2 \leq w \leq N$ is the width of the window, $1 \leq \delta \leq N$ is the sliding factor, and $m = 0, 1, 2, \ldots, M$, with $M = (N - w)/\delta$ the number of sliding steps [16,17]. We used the moving-window parameters $w = 33$ and $\delta = 1$ to calculate the wavelet energy of the $cD_3$ and $cD_4$ coefficients.
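The two feature extractors above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the test signal is synthetic, the band power uses a rectangle-rule integral of a Hanning-windowed periodogram as in Eqs. (1)–(2), and `sliding_wes` implements the windowed energy of Eq. (6) over an already-computed coefficient vector (an actual db4 decomposition would come from a wavelet library, which is not reproduced here).

```python
import numpy as np

FS = 250  # Hz, sampling rate of the EEG data used in this work

def band_power(x, f_lo, f_hi, fs=FS):
    """Average power in [f_lo, f_hi] Hz from a Hanning-windowed periodogram."""
    w = np.hanning(len(x))
    psd = np.abs(np.fft.rfft(w * x)) ** 2 / len(x)      # Eq. (2)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    df = freqs[1] - freqs[0]
    return np.sum(psd[band]) * df / (f_hi - f_lo)       # rectangle rule

def sliding_wes(d, w=33, delta=1):
    """Windowed wavelet energy spectrum of one coefficient vector, Eq. (6)."""
    steps = (len(d) - w) // delta + 1
    return np.array([np.sum(d[m * delta:m * delta + w] ** 2)
                     for m in range(steps)])

# toy example: 1 s of a 10 Hz (mu-band) oscillation plus noise
np.random.seed(0)
t = np.arange(FS) / FS
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(FS)
p_mu = band_power(x, 8, 12)
p_beta = band_power(x, 13, 30)   # much smaller than p_mu for this signal
```

With `w = 33` and `delta = 1`, a coefficient vector of length `N` yields `N - 32` energy values, one per window position.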

273 EEG-Based Motor Imagery Classification Using Multilayer Perceptron Neural Network

2.2 Experimental Design

We used the data set from the fourth edition of the BCI Competition provided by the University of Graz. This data set consists of EEG signals from 9 healthy subjects performing motor imagery tasks. The signals were sampled with 250 Hz and the electrodes were placed according to the International 10–20 System. Each subject, designated here as Sub1 , Sub2 , …, Sub9 , performed four different imagery tasks: movement of the left hand (Class 1), right hand (Class 2), feet (Class 3), and tongue (Class 4). Two sessions were performed on different days. Each session has 6 runs composed by 48 trials (12 trials per class) and each run is separated by short breaks. The experimental paradigm is represented in Fig. 1 [18]. As observed in Fig. 1, at t = 0s, i.e., at the beginning of the trial, a fixation cross is exhibited on a computer screen together with a short acoustic warning (beep). After two seconds (t = 2 s) a cue in the form of an arrow appears, indicating the motor imagery task: pointing to the left (class 1), right (class 2), down (class 3) or up (class 4). The cue remained on the screen for 1.25 s. The subject should carry out the task until t = 6s, when the fixation cross disappeared. At the end of each trial there is a short break.
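Given this timing (cue at t = 2 s, imagery until t = 6 s), the motor-imagery epochs can be sliced out of the continuous recording. The sketch below assumes trial-start sample indices are already available; the actual competition files store events in their own format, which is not reproduced here.

```python
import numpy as np

FS = 250                   # Hz, sampling rate of the data set
CUE_S, END_S = 2.0, 6.0    # cue onset and end of motor imagery within a trial

def extract_mi_epochs(eeg, trial_starts):
    """Slice the motor-imagery window (cue to t = 6 s) out of each trial.

    eeg          : (n_channels, n_samples) continuous recording
    trial_starts : sample indices where each trial (fixation cross) begins
    """
    a, b = int(CUE_S * FS), int(END_S * FS)
    return np.stack([eeg[:, s + a:s + b] for s in trial_starts])

# toy recording: 22 channels, two trials starting at samples 0 and 2000
eeg = np.zeros((22, 4000))
epochs = extract_mi_epochs(eeg, [0, 2000])
# each epoch spans 4 s of motor imagery, i.e. 1000 samples per channel
```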

2.3 Topology, Training, and Evaluation of MLP

The MLP used in this study has two layers, i.e., a hidden layer and an output layer, with 15 and 4 neurons, respectively. The number of hidden neurons (15) was chosen in an optimization process aimed at minimizing both the training time and the complexity of the classifier while maximizing the accuracy obtained using all channels. In the output layer, class 1 is represented as [1 0 0 0]^T, class 2 as [0 1 0 0]^T, class 3 as [0 0 1 0]^T, and class 4 as [0 0 0 1]^T. The training algorithm was the Levenberg-Marquardt backpropagation [19]. The classification accuracy was evaluated in three scenarios: simulations with features extracted from the mu/alpha rhythm, from the beta rhythm, and from both rhythms. A total of five experiments were performed in each scenario with 70%

of the data randomly reserved for training and 30% for validation. In addition, the number of EEG channels (n) used as input was systematically reduced in order to analyze the accuracy with low input complexity. The channels selected are detailed below:

• 22 Channels: Fz, FC3, FC1, FCz, FC2, FC4, C5, C3, C1, Cz, C2, C4, C6, CP3, CP1, CPz, CP2, CP4, P1, Pz, P2, and POz.
• 18 Channels: Fz, FC3, FC1, FCz, FC2, FC4, C5, C3, C1, Cz, C2, C4, C6, CP3, CP1, CPz, CP2, and CP4.
• 16 Channels: Fz, FC3, FC4, C3, C1, Cz, C2, C4, CP3, CP1, CP2, CP4, P1, Pz, P2, and POz.
• 14 Channels: Fz, FC1, FCz, FC2, C5, C3, C1, Cz, C2, C4, C6, CP1, CPz, and CP2.
• 12 Channels: FC3, FC4, C5, C3, Cz, C4, C6, CP3, CPz, CP4, P1, and P2.
• 10 Channels: Fz, FCz, C5, C3, C1, Cz, C2, C4, C6, and CPz.
• 8 Channels: Fz, C3, C1, Cz, C2, C4, CPz, and Pz.

The features extracted from those channels, as described in Sect. 2.1, are the WES and the average power in the mu/alpha and beta bands, which were included in the feature vector for each EEG channel. Hence, each subject is associated with four feature matrices per class, all with the same number of samples: the average power matrix in the mu band (1562 × 22), the average power matrix in the beta band (1562 × 22), the WES matrix corresponding to the mu band (1562 × 22), and the WES matrix corresponding to the beta band (1562 × 22). Since during training the system is biased toward the influence of features with larger values, the features are normalized to the range [0, 1]; a softmax activation function is used in both layers of the neural network. To evaluate classification performance, the accuracy is calculated from the confusion matrix:

$Ac = \frac{TN + TP}{TN + TP + FN + FP} \times 100$  (7)

where TN is the number of true negatives, TP of true positives, FN of false negatives, and FP of false positives. An accuracy higher than 70% is suggested for BCI applications [20].
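The two pre-classification steps above, min-max feature scaling and the accuracy of Eq. (7), can be sketched as follows. This is an illustration only (function names are ours, and the softmax detail of the network itself is not reproduced):

```python
import numpy as np

def minmax_normalize(F):
    """Scale each feature column to [0, 1] so no feature dominates training."""
    lo, hi = F.min(axis=0), F.max(axis=0)
    return (F - lo) / (hi - lo)

def accuracy_from_counts(tn, tp, fn, fp):
    """Eq. (7): percentage of correct decisions from confusion-matrix counts."""
    return (tn + tp) / (tn + tp + fn + fp) * 100

F = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
Fn = minmax_normalize(F)                                  # columns span [0, 1]
acc = accuracy_from_counts(tn=40, tp=35, fn=10, fp=15)    # -> 75.0
```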

Fig. 1 Experimental paradigm

3 Results and Discussion

In the present work, we used an MLP neural network to classify four types of motor imagery performed by 9 subjects, based on spectral estimation and the WES. Figures 2 and 3 show the features extracted before and during motor imagery (MI), considering the C3 channel for average band power and the C4 channel for WES, for Subject 5.

Fig. 2 Normalized average power in the mu/alpha and beta bands of Subject 5 (C3 channel, before and during MI, for the left hand, right hand, feet, and tongue tasks)

Fig. 3 Normalized WES in the mu/alpha (cD4) and beta (cD3) bands of Subject 5 (C4 channel, before and during MI, for the left hand, right hand, feet, and tongue tasks)

Analyzing the results of each simulation, it can be seen that the ANN reaches significant performance using only the mu/alpha frequency range (up to 94.06 ± 1.37% using all electrodes), as shown in Table 1. Also, the minimum number of channels needed to obtain at least 70% accuracy was

at least 12 channels for the simulations considering the mu and beta rhythms separately (Tables 1 and 2), and at least 14 channels for the simulation using the feature vector of both rhythms (Table 3). Therefore, classification using alpha or beta alone reaches higher accuracy than using the two rhythms together and enables a reduction of at least 10 EEG channels.


Table 1 Average accuracy (%) for the MLP using features extracted within the alpha frequency range (bold in the original marks the highest accuracy)

Subject   n = 22        n = 18        n = 16        n = 14        n = 12        n = 10        n = 8
Sub1      91.96 ± 0.28  78.24 ± 1.85  78.78 ± 3.64  76.25 ± 0.96  77.07 ± 3.13  69.57 ± 0.86  59.99 ± 1.44
Sub2      93.16 ± 1.09  81.90 ± 1.96  82.35 ± 1.61  80.79 ± 0.88  75.72 ± 0.93  69.33 ± 1.00  63.34 ± 1.54
Sub3      95.19 ± 0.34  80.36 ± 1.60  88.25 ± 4.56  82.65 ± 1.67  71.79 ± 1.38  69.64 ± 0.93  69.54 ± 1.78
Sub4      94.62 ± 0.23  86.03 ± 1.01  78.52 ± 1.45  82.19 ± 1.60  76.33 ± 0.95  70.49 ± 1.29  72.02 ± 0.61
Sub5      94.95 ± 0.51  85.62 ± 1.29  79.08 ± 1.53  81.43 ± 1.62  75.50 ± 1.75  70.11 ± 1.42  72.08 ± 1.36
Sub6      95.04 ± 1.08  90.49 ± 0.87  81.64 ± 1.25  82.40 ± 2.37  76.21 ± 1.58  74.69 ± 0.25  65.50 ± 2.16
Sub7      95.05 ± 0.40  79.44 ± 1.06  87.50 ± 0.82  80.81 ± 1.86  73.96 ± 1.57  72.86 ± 3.28  70.12 ± 0.83
Sub8      94.79 ± 0.58  82.47 ± 2.71  83.17 ± 1.21  84.33 ± 0.45  78.20 ± 1.23  79.40 ± 2.68  70.15 ± 1.96
Sub9      91.82 ± 1.03  82.24 ± 2.67  81.52 ± 3.17  73.89 ± 2.03  58.27 ± 1.40  63.25 ± 1.95  61.70 ± 1.03
μ ± σ     94.06 ± 1.37  82.98 ± 3.82  82.31 ± 3.56  80.53 ± 3.33  73.67 ± 6.06  71.04 ± 4.41  67.16 ± 4.61

Table 2 Average accuracy (%) for the MLP using features extracted within the beta frequency range (bold in the original marks the highest accuracy)

Subject   n = 22        n = 18        n = 16        n = 14        n = 12        n = 10        n = 8
Sub1      95.66 ± 0.29  91.34 ± 2.38  85.62 ± 0.91  88.84 ± 0.99  78.14 ± 1.16  72.38 ± 2.53  68.98 ± 1.85
Sub2      93.00 ± 0.75  81.23 ± 0.88  80.36 ± 2.46  73.28 ± 3.13  68.85 ± 0.71  67.78 ± 1.61  63.47 ± 0.60
Sub3      87.54 ± 2.54  78.49 ± 2.82  77.03 ± 1.82  73.68 ± 2.32  69.14 ± 1.55  64.46 ± 2.89  64.52 ± 1.13
Sub4      97.00 ± 0.19  93.78 ± 0.52  86.12 ± 4.41  82.94 ± 2.57  78.57 ± 2.60  73.72 ± 3.63  66.45 ± 1.37
Sub5      96.30 ± 0.90  82.74 ± 9.36  86.66 ± 4.24  80.66 ± 1.07  78.78 ± 1.26  74.64 ± 2.75  66.70 ± 2.52
Sub6      92.03 ± 1.61  77.00 ± 0.47  85.72 ± 1.90  75.53 ± 3.16  70.43 ± 0.81  68.22 ± 0.71  63.30 ± 1.18
Sub7      92.32 ± 2.06  79.53 ± 2.52  81.42 ± 2.93  72.71 ± 1.68  75.18 ± 1.66  66.32 ± 1.95  63.56 ± 3.49
Sub8      89.02 ± 0.84  79.17 ± 1.37  74.52 ± 1.46  63.35 ± 2.28  69.77 ± 1.87  59.06 ± 1.33  54.50 ± 0.33
Sub9      86.85 ± 1.26  80.68 ± 1.39  78.19 ± 1.30  79.97 ± 1.06  72.00 ± 0.96  70.91 ± 1.08  70.20 ± 1.11
μ ± σ     92.19 ± 3.76  82.66 ± 5.88  81.74 ± 4.52  76.46 ± 6.97  73.43 ± 4.24  68.61 ± 4.96  64.63 ± 4.53

Table 3 Average accuracy (%) for the MLP using features extracted within both the alpha and beta rhythms (bold in the original marks the highest accuracy)

Subject   n = 22        n = 18        n = 16        n = 14        n = 12        n = 10        n = 8
Sub1      89.59 ± 0.70  77.95 ± 2.24  71.72 ± 1.39  77.03 ± 0.47  71.61 ± 1.29  63.10 ± 1.01  58.35 ± 0.60
Sub2      86.34 ± 0.84  74.42 ± 1.08  75.07 ± 1.09  65.66 ± 2.75  70.40 ± 2.04  60.61 ± 1.90  55.25 ± 0.57
Sub3      87.55 ± 1.09  70.11 ± 3.57  69.45 ± 2.57  66.98 ± 0.97  62.61 ± 2.93  61.04 ± 1.46  65.10 ± 0.87
Sub4      90.57 ± 1.67  81.22 ± 2.14  76.58 ± 3.30  77.41 ± 4.25  77.96 ± 2.37  66.84 ± 2.96  56.76 ± 1.36
Sub5      91.78 ± 1.00  81.66 ± 1.91  76.07 ± 1.51  79.43 ± 2.25  70.87 ± 1.81  64.70 ± 3.65  57.95 ± 0.75
Sub6      91.26 ± 1.22  76.32 ± 1.87  78.40 ± 0.94  70.86 ± 1.46  67.95 ± 1.11  64.49 ± 0.75  58.29 ± 0.74
Sub7      90.91 ± 0.43  74.50 ± 1.17  81.01 ± 1.37  71.74 ± 1.24  71.71 ± 1.38  67.30 ± 1.20  62.15 ± 2.02
Sub8      87.72 ± 1.62  75.44 ± 1.36  72.67 ± 1.06  71.93 ± 0.89  63.36 ± 2.12  61.05 ± 1.79  57.93 ± 1.94
Sub9      85.08 ± 0.89  76.87 ± 1.07  72.16 ± 1.81  66.89 ± 0.97  60.34 ± 1.66  63.89 ± 0.80  55.78 ± 2.26
μ ± σ     88.98 ± 2.38  76.50 ± 3.56  74.79 ± 3.64  71.99 ± 5.03  68.53 ± 5.55  63.67 ± 2.46  58.62 ± 3.13

Furthermore, comparing the simulation results using the mu/alpha rhythm with those using the beta rhythm, for n = 22 we found greater classification accuracy for features extracted in the alpha band than in the beta band for most subjects, indicating differences in the degree of desynchronization of the sensorimotor rhythms. Other similar ANN studies include the use of average power and wavelet energy spectrum, such as [21], which reports accuracy of 77.1429% using average power and 83.5714% using WES for two classes of motor imagery. In addition, [6] achieved 80 ± 10% accuracy using an ANN for the classification of two classes of motor imagery and, after filtering and channel-reduction analysis, the accuracy improved to 90 ± 5% using 8 electrodes.

4 Conclusion

This paper presented a study on the use of an MLP to classify EEG signals based on motor imagery tasks, using features that can identify the energy modulation that occurs during this type of mental activity. The technique involves the average power, derived from the periodogram method, and the windowed wavelet energy spectrum, in order to characterize the ERD/ERS during four motor imagery tasks. Using the BCI Competition data set, we conclude that a two-layer MLP with 15 hidden neurons achieves sufficient accuracy for classification using all 22 electrodes. Moreover, the channel reduction analysis showed that fewer EEG channels than available can be used while maintaining sufficient accuracy.

Acknowledgements To the Laboratory of Control and Systems (LACOS-UFPA) and the Neuroprocessing Laboratory (LABNEP-UFPA). A. S. Silveira acknowledges the financial support of the Conselho Nacional de Desenvolvimento Científico e Tecnológico under grant 408559/2016-0.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Mulder T (2007) Motor imagery and action observation: cognitive tools for rehabilitation. J Neural Transm 114(10):1265–1278
2. Padfield N, Zabalza J, Zhao H et al (2019) EEG-based brain-computer interfaces using motor-imagery: techniques and challenges. Sensors 19(6):1423
3. Pfurtscheller G, Neuper C (1997) Motor imagery activates primary sensorimotor area in humans. Neurosci Lett 239(2–3):65–68
4. Zuo C, Jin J, Yin E et al (2020) Novel hybrid brain-computer interface system based on motor imagery and P300. Cogn Neurodyn 14(2):253–265
5. Abdalsalam E, Yusoff M, Malik A et al (2018) Modulation of sensorimotor rhythms for brain-computer interface using motor imagery with online feedback. Signal Image Video Process 12(3):557–564
6. Maksimenko V, Kurkin S, Pitsik E et al (2018) Artificial neural network classification of motor-related EEG: an increase in classification accuracy by reducing signal complexity. Complexity
7. Graimann B, Allison B, Pfurtscheller G. Brain-computer interfaces: a gentle introduction. In: Brain-computer interfaces. Springer, pp 1–27
8. Zhang Z (2019) Spectral and time-frequency analysis. In: EEG signal processing and feature extraction. Springer, pp 89–116
9. Mello C (2009) Biomedical engineering. BoD–Books on Demand
10. Akin M, Kiymik MK (2000) Application of periodogram and AR spectral analysis to EEG signals. J Med Syst 24(4):247–256
11. Thampi SM, Gelbukh A, Mukhopadhyay J (2014) Advances in signal processing and intelligent recognition systems. Springer
12. Mallat SG (1987) A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans Pattern Anal Mach Intell 11
13. Zheng-You HE, Xiaoqing C, Guoming L (2006) Wavelet entropy measure definition and its application for transmission line fault detection and identification. In: International conference on power system technology. IEEE, pp 1–6
14. Kumar Y, Dewal ML, Anand RS (2012) Relative wavelet energy and wavelet entropy based epileptic brain signals classification. Biomed Eng Lett 2(3):147–157
15. He A (2019) Fault detection of traction power supply system based on wavelet energy entropy. In: AIP conference proceedings, vol 2066, issue 1. AIP Publishing LLC
16. Chen J, Li G (2014) Tsallis wavelet entropy and its application in power signal analysis. Entropy 16(6):3009–3025
17. Yang Q, Wang J (2015) Multi-level wavelet Shannon entropy-based method for single-sensor fault location. Entropy 17(10):7101–7117
18. Brunner C, Leeb R, Müller-Putz G, Schlögl A et al (2008) BCI competition 2008-Graz data set A. Graz University of Technology, Institute for Knowledge Discovery (Laboratory of Brain-Computer Interfaces)
19. Rashid MM, Ahmad M (2016) Classification of motor imagery hands movement using Levenberg-Marquardt algorithm based on statistical features of EEG signal. In: 3rd international conference on electrical engineering and information communication technology (ICEEICT), pp 1–6
20. Kübler A, Kotchoubey B, Kaiser J et al (2001) Brain-computer communication: unlocking the locked in. Psychol Bull 127(3):358
21. Chatterjee R, Bandyopadhyay T (2016) EEG based motor imagery classification using SVM and MLP. In: 2nd international conference on computational intelligence and networks (CINE), pp 84–89

Real-Time Detection of Myoelectric Hand Patterns for an Incomplete Spinal Cord Injured Subject W. A. Rodriguez, J. A. Morales, L. A. Bermeo, D. M. Quiguanas, E. F. Arcos, A. F. Rodacki, and J. J. Villarejo-Mayor

Abstract

Individuals with spinal cord injuries lose the ability to complete hand movements. Active orthoses based on myoelectric signals may provide more intuitive control from the remaining muscles. Pattern recognition has been widely used to detect the intention to control assistive devices for rehabilitation, but little of this work has been extended to injured individuals. This work presents a proposal for real-time detection of hand movements based on myoelectric signals. A subject with an incomplete spinal cord injury at the cervical level attempted to elicit finger flexion/extension and rest while two-channel electromyographic signals were acquired. A classic on-off control was compared with different configurations of KNN, yielding classification performance up to 81.00% in real time. The results showed the ability of the subject to perform contractions with repeatable patterns for the control of a low-cost active orthosis.

Keywords: Orthosis · Spinal cord injury · Real-time · Electromyography

W. A. Rodriguez, J. A. Morales, L. A. Bermeo (corresponding author), E. F. Arcos: Department of Engineering, Universidad Santiago de Cali, Cali, Colombia; e-mail: [email protected]
D. M. Quiguanas: Department of Health, Universidad Santiago de Cali, Cali, Colombia
A. F. Rodacki, J. J. Villarejo-Mayor: Department of Physical Education, Federal University of Paraná, Curitiba, Brazil

1 Introduction

Spinal cord injuries and neurological diseases such as muscular dystrophy, cerebral palsy, and stroke can lead to partial or complete loss of hand mobility [1]. According to the WHO [2], about 500,000 people worldwide suffer a spinal cord injury (SCI) each year, and approximately 15 million people have a stroke [1]. Individuals with this type of injury lose the ability to perform everyday activities such as grasping objects, drinking a glass of water, or dressing, which affects their autonomy and emotional state [2]. Rehabilitation and assistive technologies such as active orthoses, exoskeletons, and robotic gloves have been proposed to restore hand functionality [3]. These devices assist hand movements through actuated systems that are directly controlled by the user. The control can be based on on/off schemes or pattern recognition techniques. In an on/off scheme, a trigger signal (defined by a threshold) acts as a switch to control the execution of a specific task [1]. In pattern recognition, movements are associated with patterns represented by features of the signal; these patterns are previously trained and classified to generate control commands [4]. The most common signals used for orthosis control are electroencephalographic (EEG) and voice signals, the electrooculogram (EOG), control switches, and surface electromyography (sEMG). Most orthoses (42%) are controlled by sEMG, while 8% are controlled by voice and 8% by EEG [5]. This is because sEMG-based orthosis control provides a natural mapping of the intention for spontaneous muscle movement [6]. Besides, it allows other activities to be performed simultaneously without requiring additional concentration [7]. In individuals with spinal cord injuries, sEMG signals from muscles with remaining voluntary contractions have been used for hand orthosis control [8]. Myoelectric pattern recognition has been widely used to control orthoses and prostheses.
Different factors determine

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_274


W. A. Rodriguez et al.

the type of signal processing, such as electrode locations, the influence of intrinsic and extrinsic noise-generating artifacts, and the number of electrodes [9]. High-density electrode configurations provide high recognition efficiency but impose higher requirements on hardware, processing, and computational cost [6, 10]. This configuration also increases the difficulty of donning and the probability of electrode failure, which restricts it mainly to laboratory environments and rarely to daily use. On the other hand, low-density systems are normally used in commercial EMG devices: they are low-cost, easy to don, and offer acceptable performance. In the literature, feature sets in the time, frequency, and time-frequency domains have been used to represent myoelectric patterns. Moreover, machine learning methods such as k-nearest neighbors (KNN), artificial neural networks (ANN), linear discriminant analysis (LDA), and support vector machines (SVM) have been used as classifiers [10–15]. In particular, studies showed that SVM combined with genetic algorithms for optimizing feature selection performed slightly better than LDA and SVM alone for hand movement classification [11, 13]. However, KNN is computationally efficient and easy to implement in hardware, making it a suitable option for real-time applications [10]. Nevertheless, most of these works have been evaluated on people without injuries, which shows the need to validate these methods with individuals with SCI. Besides, validation in real time is required to evaluate their true applicability in daily use [14–16]. Studies based on real-time sEMG showed better classification when increasing the number of channels. However, in individuals with spinal cord injury, low-density sEMG is recommended [17], due to muscle atrophy and the limitation of the active muscles, which induce movement compensations.
In a previous study, the authors presented the development of a hand orthosis for an individual with SCI, based on a two-channel sEMG On–Off control [18]. This study aims to develop a two-channel sEMG pattern recognition-based control, implemented on an embedded microcontroller system in a hand orthosis to detect finger flexion/extension and the rest state of an SCI subject. The study compares features in the time and frequency domains, as well as different metrics of the KNN classifier.

2 Methodology

2.1 Subjects

This study involved an individual with an incomplete spinal cord injury at the cervical level with fractured C5–C6 vertebrae (male, 29 years old, right-handed, injured 11 years ago) and a neurologically intact individual (female, 55 years old, right-handed). The subject with SCI currently has muscle atrophy and loss of mobility at the distal level of the hands. An incomplete injury at the low-cervical nerves (C5–C8) affects some motor and sensory functions of the wrist, arms, and hands. The study was approved by the ethics committee of the Faculty of Engineering of the Universidad Santiago de Cali, Colombia, act number 013, session 003. Subjects voluntarily signed an informed consent form before the experiments were performed.

2.2 Data Acquisition and Hardware

Data acquisition was performed on the dominant (right) hand of both participants. To capture the EMG data, electrodes were placed on the forearm according to [19], on the flexor digitorum profundus and extensor digitorum communis. Before placing the electrodes, the skin was shaved, lightly scraped, and cleaned with alcohol. The EMG signal is obtained with the MyoWare™ muscle sensor (AT-04-001). The EMG data are digitized and processed on a BeagleBone Black platform (AM335x 1 GHz ARM Cortex-A8 processor, 512 MB RAM, 12-bit ADC port), with a sampling frequency of 1 kHz per channel. The classification algorithms for the embedded system were implemented in Python under the Debian operating system. The data were analyzed on a notebook (Intel Core i7 processor, 16 GB RAM).

2.3 Experimental Protocol

The study was conducted in off-line and real-time modes. For the experiments, the subjects were seated with their backs supported by the seatback and their forearms resting on a table at 90°. The subjects were instructed to perform flexion and extension of the hand fingers; the rest state was also considered a task. The off-line experiment consists of a test of 6 repetitions of a hand movement pattern and a sequential test in which the tasks were performed alternately. Each test lasted 60 s, during which the participant was instructed to perform (or attempt to perform) the task as a 5 s sustained isometric contraction with 5 s of rest between repetitions. A verbal instruction was given to start and end each contraction. The experiment consists of five trials of each test, repeated on four different days, for a total of 180 repetitions per task. Breaks were given between tests to avoid physical and mental fatigue. For the real-time experiments, a serial test with repetitions similar to the off-line experiment was carried out. Also, a


randomized test was performed using a task generation algorithm until three repetitions of each task were completed. The subjects received visual instructions on a notebook screen to perform the muscle contractions.

2.4 Data Processing and Analysis

Data processing and analysis were carried out in two scenarios, off-line and real-time. The off-line analysis was performed to identify the best classifier configuration according to its performance; three feature sets were also analyzed. The resulting configuration was implemented in the embedded system for real-time validation with the active orthosis developed in [18]. The pre-processing and processing stages described below were applied in both the off-line and real-time configurations.

2.4.1 Pre-processing
For raw signal conditioning, the offset was removed by subtracting the average signal per channel. A notch filter was applied to eliminate interference from the power grid. A data segmentation scheme was used for each active segment corresponding to the repetition of each movement, for both channels. One second was removed at the beginning and at the end of each muscle contraction to eliminate the transitions between movements, as shown in Fig. 1. The remaining three seconds were used for data processing. According to the literature, the latency should be within 100–125 ms for real-time control; however, larger windows are suggested for higher classification accuracy [20]. For analysis, each segment was divided into 300 ms overlapping data windows with a 50 ms step to reduce the input delay, as in previous works [10].

Fig. 1 Section used for feature extraction, eliminating the transitions

2.4.2 Feature Extraction
Time-domain features were extracted: Mean Absolute Value (MAV), Variance (VAR), Root Mean Square (RMS), Zero Crossing (ZC), Willison's Amplitude (WAMP), Slope Sign Change (SSC), Wavelength (WL), and Integral (IEMG) [10]. Also, frequency-domain features were used: Median Frequency (MDF), Mean Frequency (MNF), Peak Frequency (PKF), Mean Power (MNP), and Total Power (TTP). Three


feature sets were composed for analysis according to their nature: Time-Domain (TD), Frequency Domain (FD), and total set of features (TF).
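The windowing scheme (300 ms windows, 50 ms step at 1 kHz) and a few of the time-domain features can be sketched as below. This is an illustration with a synthetic segment, not the authors' implementation, and only a subset of the named features is shown:

```python
import numpy as np

FS = 1000                 # Hz, sEMG sampling rate per channel
WIN = int(0.300 * FS)     # 300 ms analysis window
STEP = int(0.050 * FS)    # 50 ms step -> overlapping windows

def windows(x, win=WIN, step=STEP):
    """Overlapping analysis windows over one channel's active segment."""
    return [x[i:i + win] for i in range(0, len(x) - win + 1, step)]

def td_features(seg):
    """A subset of the time-domain features named above, for one window."""
    mav = np.mean(np.abs(seg))                                  # MAV
    rms = np.sqrt(np.mean(seg ** 2))                            # RMS
    zc = np.count_nonzero(np.diff(np.signbit(seg).astype(int))) # ZC
    wl = np.sum(np.abs(np.diff(seg)))                           # WL
    return np.array([mav, rms, zc, wl])

seg = np.sin(2 * np.pi * 50 * np.arange(WIN) / FS)   # toy 50 Hz burst
feats = td_features(seg)
```

A 3 s active segment (3000 samples) yields 55 overlapping windows under these parameters.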

2.4.3 Classification Methods
Two classification methods were used in this study: on-off control and K-Nearest Neighbors (KNN). For on-off control, the integrated signal from the MyoWare sensor was used. The signal was analyzed experimentally to define two thresholds per channel for decoding the tasks; steps of 1 mV were used to search for the thresholds for each channel and movement. KNN, in turn, computes the distance between the pattern to be classified and the points in the training data set, and the result is assigned to the majority class among its K nearest neighbors [11]. Three values of K (5, 7, and 9) and four distance metrics (Euclidean, Minkowski, Manhattan, and Chebyshev) were compared. Finally, the feature-set and classifier configuration with the highest performance was validated in real time, through tests performed on the active orthosis designed in [18].
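A minimal KNN with the Manhattan distance (the best-performing configuration reported in the results) can be sketched as follows; the toy 2-D feature space is ours, not data from the study:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=5):
    """KNN with Manhattan (city-block) distance; majority vote among k."""
    d = np.sum(np.abs(X_train - x), axis=1)   # Manhattan distance to all points
    nearest = np.argsort(d)[:k]               # indices of the k nearest
    return Counter(y_train[nearest]).most_common(1)[0][0]

# toy 2-D feature space: class 0 near the origin, class 1 near (1, 1)
X = np.array([[0, 0], [0.1, 0], [0, 0.1], [1, 1], [0.9, 1], [1, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(X, y, np.array([0.05, 0.05]), k=3)
```

Distance computation is a single vectorized pass over the training set, which is what makes KNN cheap enough for the embedded real-time setting described above.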

2.5 Evaluation of Performance To evaluate the performance of the KNN classification in the off-line phase, a Stratified Shuffle Split (SSS) five-fold cross-validation procedure was used. This technique splits the dataset into five groups: four are randomly assigned to the training set and the remaining one to the validation set, and the process is repeated for each group. The overall performance was calculated as the average over all validation sets. Moreover, a confusion matrix was obtained for a detailed analysis of task recognition.
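A minimal sketch of this evaluation scheme, assuming scikit-learn's `StratifiedShuffleSplit` as the SSS implementation; `X` and `y` are placeholders for the windowed feature matrix and task labels:

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 0.5, (60, 8)) for c in range(3)])  # toy features
y = np.repeat(["rest", "flexion", "extension"], 60)             # toy labels

# Five stratified splits, each holding out one fifth for validation;
# overall performance is the mean over the validation sets.
sss = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
accs, cms = [], []
for train, val in sss.split(X, y):
    clf = KNeighborsClassifier(n_neighbors=5, metric="manhattan")
    pred = clf.fit(X[train], y[train]).predict(X[val])
    accs.append(np.mean(pred == y[val]))
    cms.append(confusion_matrix(y[val], pred))

print(f"accuracy: {np.mean(accs):.3f} +/- {np.std(accs):.3f}")
cm = np.sum(cms, axis=0)  # aggregated confusion matrix for detailed analysis
```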

3 Results

3.1 Off-Line Classification Different thresholds were evaluated for on–off control. As a result, the extension class corresponded to values between 2.5 and 5.5 mV, the flexion class to values above 5.5 mV, and the rest class to values below 2.5 mV.
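With the thresholds reported above, the on–off decoding rule reduces to a simple comparison on the integrated-signal amplitude (the per-channel details are omitted here for brevity):

```python
def onoff_decode(amplitude_mv: float) -> str:
    """Decode a task from the integrated MyoWare amplitude in mV,
    using the experimentally defined thresholds (2.5 and 5.5 mV)."""
    if amplitude_mv < 2.5:
        return "rest"
    if amplitude_mv <= 5.5:
        return "extension"
    return "flexion"

print([onoff_decode(v) for v in (1.0, 4.0, 7.0)])  # → ['rest', 'extension', 'flexion']
```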


W. A. Rodriguez et al.

Fig. 2 Confusion matrix (on–off control). a Able-bodied and b SCI subjects

The best performance for on–off control was 98.42 ± 0.23% for the able-bodied subject and 80.98 ± 0.51% for the SCI subject. The confusion matrices for both participants are shown in Fig. 2. For the able-bodied subject, the flexion class showed 97.00% accuracy, while rest and extension were above 99.00%. For the SCI subject, the rest class obtained the highest accuracy, 87.00%, with 10.00% of false positives for flexion and 3.00% for extension. The flexion class obtained the lowest accuracy, with 25.00% of false positives for the extension class. Classification performance using KNN was compared for the three feature sets and the variations of k values with the proposed metrics. For the SCI subject, the TD and FD feature sets yielded accuracies below 97.00%, whereas TF was above 98.00%. Additionally, lower accuracies were observed for the SCI subject as the number of neighbors k increased, while no such variation was found for the able-bodied subject. The lowest performance was obtained with the Chebyshev distance and k = 9 (96.23 ± 0.44% for the SCI subject and 99.27 ± 0.08% for the able-bodied subject). The highest accuracy was obtained using the Manhattan distance and k = 5 for the TF dataset, with 99.06 ± 0.06% and 99.52 ± 0.06% accuracy for the SCI and able-bodied subjects, respectively. The TF set, combining time- and frequency-domain features, reached up to 99.06% for the SCI subject, compared to the TD and FD sets, with results below 98.00% and 97.00%, respectively. Figure 3 shows the KNN confusion matrices (Manhattan, k = 5) for both participants. For the SCI subject, a confusion between the flexion and extension classes can be noted, with 2.00% as the highest false-positive rate. For the control participant, the flexion and extension classes showed errors

of 1.00%, with false positives between these two classes. The rest class showed 100.00% recognition for both subjects.

3.2 Real-Time Classification The real-time classification was validated using the best configuration of the KNN classifier (Manhattan, k = 5) with the TF feature set. The confusion matrices for both subjects are shown in Fig. 4. For the able-bodied subject, the overall performance was 66.89%; the flexion class reached only 31.00% accuracy, with 20.00% of confusion with the extension class and 50.00% with the rest class. For the SCI subject, the overall accuracy was 81.66%. The confusion matrix for the SCI subject shows that the most accurately recognized class was rest, with 86.00%, followed by the flexion and extension classes with 80.00% and 79.00%, respectively. The extension and flexion classes had 12.00% and 14.00% of false positives between them, respectively.

4 Discussion

The ability to recognize myoelectric patterns from impaired subjects despite muscular atrophy, muscle spasticity, and movement compensations could facilitate the development of orthotic devices controlled by EMG. It is known that, unlike subjects with SCI, healthy people generate highly repeatable patterns across subjects, due to the analogous coordination of muscle synergies [21]. Despite this difference, SCI subjects were able to generate repeatable patterns that could be distinguished between classes, as shown in the

Real-Time Detection of Myoelectric Hand Patterns for an Incomplete …


Fig. 3 Confusion matrix (KNN, Manhattan metric). a Able-bodied and b SCI subjects

Fig. 4 Confusion matrix at real-time classification. a Able-bodied and b SCI subjects

results. Power tasks such as wrist or hand motions involve antagonist muscles of the forearm. However, muscle spasticity in SCI subjects affects voluntary EMG patterns due to crosstalk from involuntary contractions. Thus, basic methods for pattern detection based on amplitude thresholding of the time series or the frequency spectrum compromise classification accuracy, and more complex methods are needed. The present results with the SCI subject support this. This study focused on the evaluation of a hand pattern recognition system for a spinal cord injured subject based on low-density sEMG. The performance of a KNN classifier was compared with on–off control to detect finger flexion/extension movements and the rest state, evaluating different parameter configurations. The on–off control based on the integrated signal showed lower results compared to the KNN classifier. However, due to the reduced number of

tasks and channels, the threshold comparison based on the integrated signal obtained higher results than the a priori probability (the likelihood of a class without training or calibration). Although this method becomes inefficient for more tasks, its simple implementation and reduced hardware could make it a very low-cost alternative for applications designed for impaired subjects. This work represents a first approach to real-time control of an active orthosis for the assistance of hand movements of injured individuals, using only two channels of sEMG signals. Different parameters and feature sets were validated on a KNN classifier, which was implemented on a real-time embedded system using the Manhattan distance and k = 5. The performance obtained in real time was lower than offline, as expected. The cross-validation method selects random windows for the


classifier training dataset, which includes windows belonging to all the tests of every day of the experiment. In this way, the training dataset presents less variability related to intrinsic conditions of the experiment, such as electrode positioning, fatigue, sweating, and temperature, than real-time validation. Furthermore, offline training does not include transitions among tasks, whereas real-time validation includes all samples, transitions included. Although real-time performance was above 81.00% for the SCI subject, more studies need to be performed. The offline experiment allowed the selection, among three proposed datasets, of the features that best represent the patterns. Nonetheless, other feature selection methods, such as genetic algorithms, allow features to be assessed independently, which reduces redundant information among feature parameters. Computational cost could also be lowered by using fewer features to represent the patterns. The able-bodied subject showed lower performance for some tasks in comparison with the SCI subject, which could be related to low familiarization with the experiments.

5 Conclusions

The proposed system represents an advance towards the development of low-cost, low-density real-time myoelectric control systems, with only two sEMG channels to assist movements through an active orthosis. The results reported in this study showed that pattern recognition is feasible for detecting the flexion and extension of the fingers and the rest state of an SCI subject. A non-parametric classifier (KNN) was implemented on an embedded real-time system with two sEMG channels to assist movements through an active orthosis. A dataset comprising all implemented time- and frequency-domain features was used to represent the myoelectric patterns. Future studies include feature selection and the assessment of more classification methods. The number of participants involved in this study was a limitation; future work will focus on increasing the number of gestures and of SCI subjects.

Acknowledgements The authors are thankful for the support provided by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)—Brazil, Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brazil (CAPES)—Finance Code 001, and Departamento General de Investigación (DGI)—Universidad Santiago de Cali, Colombia, project No. 819-621119-487.

Conflict of Interest The authors declare that they have no conflict of interest.

References
1. Hameed HK, Hassan WZW, Shafie S et al (2020) A review on surface electromyography-controlled hand robotic devices used for rehabilitation and assistance in activities of daily living. J Prosthet Orthot 32:3–13. https://doi.org/10.1097/JPO.0000000000000277
2. WHO (2013) Spinal cord injury. World Health Organization
3. Viladot R, Riambau OC, Paloma SC (1998) Ortesis y prótesis del aparato locomotor. MASON, España
4. Oskoei MA, Hu H (2007) Myoelectric control systems—a survey. Biomed Signal Process Control 2:275–294. https://doi.org/10.1016/j.bspc.2007.07.009
5. Chu C-Y, Patterson RM (2018) Soft robotic devices for hand rehabilitation and assistance: a narrative review. J Neuroeng Rehabil 15:9. https://doi.org/10.1186/s12984-018-0350-6
6. Huang G, Xian Z, Tang F et al (2020) Low-density surface electromyographic patterns under electrode shift: characterization and NMF-based classification. Biomed Signal Process Control 59:101890. https://doi.org/10.1016/j.bspc.2020.101890
7. Hakonen M, Piitulainen H, Visala A (2015) Current state of digital signal processing in myoelectric interfaces and related applications. Biomed Signal Process Control 18:334–359. https://doi.org/10.1016/j.bspc.2015.02.009
8. Liu J, Zhou P (2013) A novel myoelectric pattern recognition strategy for hand function restoration after incomplete cervical spinal cord injury. IEEE Trans Neural Syst Rehabil Eng 21:96–103. https://doi.org/10.1109/TNSRE.2012.2218832
9. Merletti R, Botter A, Troiano A et al (2009) Technology and instrumentation for detection and conditioning of the surface electromyographic signal: state of the art. Clin Biomech 24:122–134. https://doi.org/10.1016/j.clinbiomech.2008.08.006
10. Mayor JJ, Costa RM, Frizera Neto A, Bastos TF (2017) Dexterous hand gestures recognition based on low-density sEMG signals for upper-limb forearm amputees. Res Biomed Eng 33:202–217
11. Mayor JJ, Rodacki AF, Bastos T (2018) Classification of dexterous hand movements based on myoelectric signals using neural networks. In: Anais do V Congresso Brasileiro de Eletromiografia e Cinesiologia e X Simpósio de Engenharia Biomédica, pp 2–5
12. Rahmatillah A, Salamat L, Soelistiono S (2019) Design and implementation of prosthetic hand control using myoelectric signal. Int J Adv Sci Eng Inf Technol 9:1231–1237. https://doi.org/10.18517/ijaseit.9.4.4887
13. Yang S, Chai Y, Ai J et al (2018) Hand motion recognition based on GA optimized SVM using sEMG signals. In: 2018 11th International Symposium on Computational Intelligence and Design (ISCID), pp 146–149
14. Connan M, Kõiva R, Castellini C (2020) Online natural myocontrol of combined hand and wrist actions using tactile myography and the biomechanics of grasping. Front Neurorobot 14:1–16. https://doi.org/10.3389/fnbot.2020.00011
15. Lu Z, Chen X, Zhang X et al (2017) Real-time control of an exoskeleton hand robot with myoelectric pattern recognition. Int J Neural Syst 27:1–11. https://doi.org/10.1142/S0129065717500095
16. Lu Z, Stampas A, Francisco GE, Zhou P (2019) Offline and online myoelectric pattern recognition analysis and real-time control of a robotic hand after spinal cord injury. J Neural Eng 16:36018. https://doi.org/10.1088/1741-2552/ab0cf0
17. Liu J, Li X, Li G, Zhou P (2014) EMG feature assessment for myoelectric pattern recognition and channel selection: a study with incomplete spinal cord injury. Med Eng Phys 36:975–980. https://doi.org/10.1016/j.medengphy.2014.04.003
18. Bermeo L, Rodriguez WA, Morales JA et al (2020) Design of a hand orthosis for people with deficiency of the medial, radial, and ulnar nerves. Int J Adv Sci Eng Inf Technol 10:945–951. https://doi.org/10.18517/ijaseit.10.3.10808
19. Bermeo L, Villarejo JJ, Arcos EF et al (2020) Acquisition protocol and comparison of myoelectric signals of the muscles innervated by the ulnar, radial and medial nerves for a hand orthoses. Commun Comput Inf Sci 1195:129–140
20. Farrell T, Weir R (2007) The optimal controller delay for myoelectric prostheses. IEEE Trans Neural Syst Rehabil Eng 15:111–118. https://doi.org/10.1109/TNSRE.2007.891391
21. Seth N, Freitas RCD, Chaulk M et al (2019) EMG pattern recognition for persons with cervical spinal cord injury. IEEE Int Conf Rehabil Robot 6:1055–1060. https://doi.org/10.1109/ICORR.2019.8779450

Single-Trial Functional Connectivity Dynamics of Event-Related Desynchronization for Motor Imagery EEG-Based Brain-Computer Interfaces

P. G. Rodrigues, A. Fim-Neto, J. R. Sato, D. C. Soriano, and S. J. Nasuto

Abstract

Functional connectivity (FC) analysis has been widely applied to the study of the brain functional organization under different conditions and pathologies, providing compelling results. Recently, the investigation of FC in motor tasks has drawn the attention of researchers devoted to post-stroke rehabilitation and those seeking robust features for the design of brain-computer interfaces (BCIs). In particular, concerning this application, it is crucial to understand: (1) how motor imagery (MI) network dynamics evolve over time; (2) how they can be suitably characterized by topological quantifiers (graph metrics); (3) what the discrimination capability of graph metrics is for BCI purposes. This work aims to investigate the MI single-trial time-course of functional connectivity defined in terms of event-related desynchronization/synchronization (ERD/S) similarity. Both ERD/S and the clustering coefficient (CC) of the underlying FC were used as features for characterizing rest, right-hand MI, and left-hand MI for 21 subjects. Our results showed that MI can be associated with a higher CC when compared to rest, while right- and left-hand MI present a similar CC time-course evolution. From the classification standpoint, ERD/S, CC, and their combination provided moderate to substantial single-electrode peak performances (in terms of Cohen's kappa) for discriminating rest and movement, i.e. for identifying alpha rhythm suppression. Weak peak classification performances were achieved for these features for right- and left-hand discrimination, but the combination of FC-based features and ERD/S provided significantly better results, suggesting complementary information. These results illustrate the symmetrical nature of brain activity relative power dynamics, as reflected in the dynamics of functional connectivity during single-trial MI, and motivate further exploration of such measures for BCI applications.

P. G. Rodrigues (✉) · A. Fim-Neto · D. C. Soriano, Engineering, Modeling and Applied Social Sciences Center, Federal University of ABC, São Bernardo do Campo, Brazil. e-mail: [email protected]
P. G. Rodrigues · A. Fim-Neto · D. C. Soriano, Brazilian Institute for Neuroscience and Neurotechnology (BRAINN), Campinas, Brazil
A. Fim-Neto, Institute of Physics, University of Campinas, Campinas, Brazil
J. R. Sato, Center of Mathematics, Computing and Cognition, Federal University of ABC, São Bernardo do Campo, Brazil
S. J. Nasuto, School of Biomedical Sciences, Biomedical Engineering, University of Reading, Reading, UK

Keywords

Dynamic functional connectivity · Brain-computer interface · Motor imagery · Graph analysis

1 Introduction

Brain functional connectivity (FC), i.e. the functional interplay or interaction between brain regions, has been widely applied to characterize various brain conditions and diseases [1–5]. Such studies have been performed with different imaging modalities, such as functional magnetic resonance imaging (fMRI), functional near-infrared spectroscopy (fNIRS), and electroencephalography (EEG) [6]. FC characterization in EEG has emerged as an interesting alternative for brain imaging due to its promising applications to motor-related tasks, particularly important for post-stroke rehabilitation [7] and the development of assistive technologies, including brain-computer interface (BCI) systems [8–11]. BCIs are based on the acquisition of neural signals and their subsequent translation into commands for external devices without the use of any muscles or peripheral nerves [12]. Among the BCI paradigms, motor imagery (MI) can be pointed out as the main asynchronous framework for

© Springer Nature Switzerland AG 2022 T. F. Bastos-Filho et al. (eds.), XXVII Brazilian Congress on Biomedical Engineering, IFMBE Proceedings 83, https://doi.org/10.1007/978-3-030-70601-2_275


P. G. Rodrigues et al.

decoding the user's intention, being widely applied to the study of cognitive aspects of action control in the healthy and pathological brain, and also in neuro-rehabilitation [13–15]. MI can be defined as the mental rehearsal of a movement without motor execution, a phenomenon essentially characterized by frequency modulation mainly within the alpha (8–13 Hz) and beta (14–26 Hz) rhythms of the sensorimotor cortex [16, 17]. Such bandpower fluctuations in these EEG rhythms are commonly captured by the event-related desynchronization/synchronization (ERD/S) and used as a biomarker for MI tasks. However, ERD/S computation usually involves averaging all trials of a given task, which constrains its employment in BCI systems. Moreover, MI is a dynamic process involving motor planning and preparation, with the interaction of different brain areas over time [18, 19], which naturally raises questions concerning the time-varying properties of FC and the underlying motor information encoded in it [20]. Several works have shown that static FC, i.e. FC estimated from the entire trial, can define promising BCI features [10, 21–23]. However, the dynamic nature of the motor task can result in time-varying performance [24], which requires a suitable characterization of the FC temporal evolution, not just for BCI, but also for a better understanding of the ERD/S underlying motor-related tasks (e.g. motor execution, motor imagery, motor observation). The aim of this work is to investigate the single-trial MI time-course of FC defined using the similarity of ERD/S, i.e. the most usual marker associated with movement. Specifically, the clustering coefficient (CC) will be used as a quantifier of the graph topological characteristics.
Our results show that (i) both left- and right-hand MI are associated with an increase of the CC when compared to rest; (ii) ERD/S, the FC clustering coefficient, and their combination can be used as features for classifying rest, right-, and left-hand MI, providing substantial agreement between predicted and actual classes for movement detection, while weak agreement is observed for MI hand discrimination in the population analyzed here. Combining FC-based features and ERD/S improved the final classification performance, motivating future investigations concerning its improvement and possible uses in BCIs.

2 Materials and Methods

2.1 EEG Dataset
A BCI motor imagery dataset comprising data from 50 right-handed subjects performing right-hand and left-hand motor imagery, acquired by Cho et al. [25], was used in this analysis. EEG data were collected from 64 Ag/AgCl active electrodes, recorded at 512 Hz using the Biosemi Active Two system. The experimental procedure consisted of a black screen with a fixation cross lasting 2 s to draw the subject's attention, followed by 3 s of the imagery task and 2 s of rest, giving a total of 7 s per trial. For both tasks, 100 or 120 trials were collected, so 100 trials per task were used for uniformity. In addition, approximately 66 s of resting-state EEG were also available. A detailed description of the experimental procedure and data acquisition can be found in [25].

2.2 Data Processing
The EEG data were bandpass filtered (6th order, 1–55 Hz) and further IIR notch filtered at 60 Hz. Eyeblinks were automatically removed by linear regression based on blinks recorded at the FP1 and FP2 electrodes. Both MI and rest recordings were sliced into 7 s trials, leading to 100 trials for each motor imagery class and 60 trials for the rest condition; in the latter, an overlap of 6 s was applied to achieve the mentioned number of trials. The onset of MI is at 0 s and the entire trial lies within the interval [−2, 5] s. A Laplacian spatial filter was used, according to the algorithm presented in [26] and further modified in [27], to minimize volume-conduction effects.

2.3 ERD/S Single-Trial
Single-trial ERD/S was evaluated for each electrode using the following steps: (1) temporal bandpass IIR filtering at 8–16 Hz (4th-order Butterworth) to yield the alpha rhythm; (2) squaring the samples; (3) smoothing (moving average) over 128 points (i.e. 0.25 s); (4) baseline normalization according to the classical definition of Pfurtscheller and da Silva [17]:

$$\mathrm{ERD/S}(t) = \frac{P(t) - P_{\mathrm{ref}}}{P_{\mathrm{ref}}} \times 100, \qquad (1)$$

in which $P(t)$ is the instantaneous signal power after smoothing and $P_{\mathrm{ref}}$ is the mean baseline power in the interval [−1.5, −0.5] s. Here, instead of averaging the power samples across all trials, we applied the baseline normalization directly to the trials, allowing the use of ERD/S for task classification.

2.4 Dynamic Functional Connectivity
Since it is known that the ERD/S contains relevant information about the imagery process, we defined the dynamic


functional connectivity (dFC) based on the pairwise Euclidean distance of the single-trial electrodes' ERD/S:

$$\mathrm{Dist}(x, y) = \sqrt{\frac{1}{N_{pw}} \sum_{i=1}^{N_{pw}} (x_i - y_i)^2}, \qquad (2)$$

in which $x_i$ and $y_i$ are ERD/S samples of the electrodes x and y, respectively, and $N_{pw}$ is the sliding window size. In this case, the factor 100 in Eq. 1 was omitted to cope with observations typically within the range [−1, 1]. The Euclidean distance between ERD/S observations was then converted to a proximity measure by means of a Gaussian kernel, as commonly used in spectral clustering [28]:

$$\mathrm{dFC}(x, y) = \exp\left(-\frac{\mathrm{Dist}(x, y)^2}{2\sigma^2}\right), \qquad (3)$$

where $\sigma$ is a parameter that controls the similarity decay. This provides for each network link a weighted measure of proximity within the range [0, 1], in which the clustering coefficient can be suitably evaluated. A Euclidean distance based FC measure can be found in [29] and was employed here given the normalized nature of ERD/S after baseline correction. Previously, [10, 11] have shown that proximity-based FC can lead to significantly better connectivity estimates. $\sigma$ was set to 0.05 empirically, after some preliminary evaluations. The graph metric used for characterizing the topology of the connectivity was the CC, which quantifies how the nodes tend to cluster. This metric is one of the most employed features in brain FC analysis [8, 20, 30]. The clustering coefficient of an undirected graph can be defined as the fraction of triangles around an individual node, i.e. the fraction of the node's neighbors that are also neighbors of each other [31]. For a weighted graph, the node clustering coefficient can be computed using the following definition [32]:

$$CC_i = \frac{1}{k_i(k_i - 1)} \sum_{j,k} \left(\tilde{w}_{ij}\, \tilde{w}_{jk}\, \tilde{w}_{ki}\right)^{1/3}, \qquad (4)$$

in which the edge weights $w_{ij}$ are scaled by the largest one, $\tilde{w}_{ij} = w_{ij} / \max(w)$, and $k_i$ is the node degree. The global clustering coefficient was defined as the mean value of the node CC. The size of the sliding window used to estimate the connectivity matrix based on Eqs. 2 and 3 was 0.5 s (256 points) with a step of 0.25 s (128 points). For comparison, we applied the same windowing approach to calculate the moving-average ERD/S, allowing a better estimation of the ERD/S and its comparison in the same time window with the CC behavior.
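Eqs. 2–4 can be sketched directly in NumPy. The σ value follows the text, while the array layout (electrodes × window samples) is an assumption:

```python
import numpy as np

def dfc_matrix(erds, sigma=0.05):
    """Gaussian-kernel proximity between electrodes (Eqs. 2 and 3);
    `erds` has shape (n_electrodes, window_samples)."""
    n = erds.shape[0]
    fc = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.sqrt(np.mean((erds[i] - erds[j]) ** 2))            # Eq. 2
            fc[i, j] = fc[j, i] = np.exp(-dist ** 2 / (2 * sigma ** 2))  # Eq. 3
    return fc

def weighted_cc(fc):
    """Per-node weighted clustering coefficient (Eq. 4, Onnela-style)."""
    w = fc / fc.max()            # scale edge weights by the largest one
    np.fill_diagonal(w, 0.0)     # no self-loops
    cbrt = np.cbrt(w)
    k = np.count_nonzero(w, axis=1)        # node degrees
    tri = np.diagonal(cbrt @ cbrt @ cbrt)  # weighted triangle intensity per node
    cc = np.zeros(len(w))
    mask = k > 1
    cc[mask] = tri[mask] / (k[mask] * (k[mask] - 1.0))
    return cc
```

The global CC used in the paper would then be `weighted_cc(fc).mean()`, recomputed for each 0.5 s window sliding in 0.25 s steps.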


2.5 Classification and Subject Inclusion Criterion
To evaluate the best channels and trial period, we classified all 64 channels for each time window separately. We tested all possible pairs of classes considering rest, right- and left-hand motor imagery tasks, and the hand tasks combined into a movement class. For each class, the number of trials was 60 for rest, 100 for each hand task, and 200 for their combination. Due to the imbalanced characteristic of the dataset, i.e. the presence of more movement than rest trials, Cohen's kappa was chosen for the assessment of the classification performance. Cohen's kappa is a statistic used to evaluate inter-rater reliability between classes, such as between actual and estimated class labels [33]. It is a more robust measure than a simple percent-agreement calculation, since it takes into account the possibility of the agreement occurring by chance. Kappa considers all the information contained in the confusion matrix, which allows dealing with asymmetric observations, i.e. unbalanced data. The metric provides values within the range [0, 1], and classification performance is usually analyzed in terms of ranges: 0–0.20 indicates no agreement; 0.21–0.40 weak agreement; 0.41–0.60 moderate agreement; 0.61–0.80 substantial agreement; and 0.81–1.00 practically perfect agreement. Cohen's kappa is defined as:

$$\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad (5)$$

in which $p_o$ is the relative observed agreement (accuracy) and $p_e$ is the hypothetical probability of chance agreement. A linear discriminant analysis (LDA) classifier under a five-fold cross-validation scheme was used to estimate the performance of each channel (one attribute per channel) for each subject individually. Since we chose to analyze the mean value over the population, a subject selection criterion, as introduced in [20, 25] with the aid of common spatial patterns (CSP), was applied to characterize the latent MI information. As CSP considers information from all electrodes beyond the ERD/S domain, we included in our analysis only subjects with sufficient ERD/S information, i.e. those with a peak kappa value higher than 0.5 (moderate agreement) in any electrode and time window considering rest-versus-movement discrimination. Under these assumptions, our analysis considered 21 subjects with representative peak performances in the ERD/S domain.
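Eq. 5 can be computed directly from a confusion matrix; this short sketch reproduces the chance correction that makes kappa robust to the rest/movement class imbalance:

```python
import numpy as np

def cohens_kappa(cm):
    """Cohen's kappa (Eq. 5) from a confusion matrix whose rows are
    actual classes and columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                                   # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return (p_o - p_e) / (1 - p_e)

print(cohens_kappa([[30, 0], [0, 30]]))    # perfect agreement -> 1.0
print(cohens_kappa([[15, 15], [15, 15]]))  # chance-level -> 0.0
```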

3 Results

Fig. 1 a Global and b local (C3 and C4 electrodes) clustering coefficient over time for the rest, right-hand, and left-hand MI conditions

Figure 1 depicts the global (1a) and local (1b) clustering coefficient considering the mean of all trials of the 21 subjects for the rest, right-hand, and left-hand MI tasks. It is noteworthy that baseline normalization introduced an increase in the local and global CC before the task onset in all analyzed conditions. After the MI flag onset, the CC progressively increases for 1 s during the MI trials, remaining elevated during the desynchronization phase. Figure 1b shows the population mean local CC for the C3 and C4 electrodes, i.e. the motor cortex electrodes most frequently used for MI analyses, in which a similar CC time course can be observed for right-hand and left-hand MI, with a clear distinction from the rest condition. This result also suggests a symmetry in the FC topology for right-hand and left-hand MI. The (re)synchronization phase starts to impact the CC around t = 3.5 s and takes approximately 1.5 s to match typical rest values. Figure 2 presents the kappa value time-course for the ERD/S and dFC features for each electrode using a sliding window of 0.5 s, highlighting movement detection (i.e. rest versus movement discrimination). The MI interval presents an elevated kappa in the interval [0.5, 3.5] s, indicating


successful alpha suppression detection. In addition, the local CC presented more electrodes with elevated kappa than ERD/S, which may be related to the neighborhood node effect in the CC definition (Eq. 4), while ERD/S shows a more localized effect. Furthermore, left-hemisphere electrodes exhibited higher mean performances, which is probably associated with the subjects' laterality (all right-handed). Higher kappa values were also found in the occipital electrodes (e.g. PO7, PO8 and O1 for ERD/S), in agreement with [20, 25]. Table 1 shows the population mean ± standard deviation of the peak kappa value for pairwise classification conditions for ERD/S, dFC, and the combination of the attributes (ERD/S ∪ dFC), i.e., when both attributes are included in the feature vector. In this analysis, the highest kappa value of the performance matrix (channels vs. time windows) of each subject was taken into account. Classification performance was compared using the Friedman hypothesis test and Dunn's multiple comparison test. For rest versus movement, ERD/S and the combination of attributes obtained substantial performances and dFC obtained moderate classification performance, with ERD/S and the combination yielding significantly better results (post-hoc p-values: 0.008, >0.999 and 0.999; combination versus ERD/S = 0.034 and combination versus dFC = 0.004). Both features and their combination allowed moderate to substantial discrimination for rest versus right-hand and rest versus left-hand, i.e. under movement detection, with the combination providing a significantly better performance than the dFC approach (post-hoc p-value for rest versus right-hand = 0.017 and rest versus left-hand =