Computational Intelligence Methods for Green Technology and Sustainable Development: Proceedings of the International Conference GTSD2020 [1 ed.] 3030623238, 9783030623234

This book is a selected collection of 54 peer-reviewed original scientific research papers of the 5th International Conference on Green Technology and Sustainable Development (GTSD2020).


English · 663 pages · 2020


Table of contents:
Editor’s Preface
Contents
Artificial Intelligence and Cyber Systems
Mobile Data Traffic Offloading with Content Centric Networking
1 Introduction
2 Related Work
3 Implementation and Evaluation
3.1 Network Topology Simulation
3.2 CCN-eNodeB Node Simulation
3.3 LTE Mobile Station Simulation
3.4 Simulation Results
4 Conclusions
References
Temporal Features Learning Using Autoencoder for Anomaly Detection in Network Traffic
Abstract
1 Introduction
2 Background and Related Works
3 Proposed Temporal Feature Learning Methodology
4 Experiments and Results
4.1 Dataset
4.2 Experiment Results and Discussion
5 Conclusions
References
A Computer-Aided Detection to Intracranial Hemorrhage by Using Deep Learning: A Case Study
1 Introduction
2 Related Works
3 Methodology
3.1 System Overview
3.2 Data Acquisition
3.3 Deep Convolutional Neural Network Architecture
3.4 Data Pre-processing
3.5 Highlighting ICH-Suspected Areas
4 Experiment
5 Conclusion
References
A Novel Security Solution for Decentralized Web Systems with Real Time Hot-IPs Detection
Abstract
1 Introduction
2 Preliminaries
2.1 Hot-IP
2.2 d-Disjunct Matrix
2.3 Non-adaptive Group Testing
2.4 IPFS Framework
3 The Proposed Solution
3.1 Online Hot-IP Detecting Algorithm
3.2 The Proposed Security Model for the Decentralized Web System
4 Conclusion
Acknowledgment
References
Segmentation of Left Ventricle in Short-Axis MR Images Based on Fully Convolutional Network and Active Contour Model
Abstract
1 Introduction
2 Materials and Methods
2.1 U-Net Architecture for Coarse Segmentation
2.2 Multiphase Active Contour Model for Fine LV Segmentation
3 The Proposed Approach
3.1 The Pipeline of the Proposed Approach
3.2 The Loss Function
4 Evaluations and Results
4.1 Dataset
4.2 Results
4.3 Compared to Other Works
5 Conclusion
Acknowledgement
References
Proposed Novel Fish Freshness Classification Using Effective Low-Cost Threshold-Based and Neural Network Models on Extracted Image Features
Abstract
1 Introduction
2 Hypothesis Analysis and Proposed Assumptions
3 Methodology
3.1 Pre-processing
3.2 Feature Extraction
4 Experiments and Results
4.1 Database Setup
4.2 Performance Evaluation Criteria
4.3 Training
4.4 Testing
5 Discussions and Conclusions
References
Machine Learning-Based Evolutionary Neural Network Approach Applied in Breast Cancer Tumor Classification
Abstract
1 Introduction
2 Proposed Method
2.1 Dataset
2.2 Neural Network Structure
2.3 Training Algorithm
3 Result and Discussion
4 Conclusion
Acknowledgement
References
Malware Classification by Using Deep Learning Framework
Abstract
1 Introduction
2 Proposed Framework
3 Dataset
4 Results
5 Conclusion and Future Work
References
Attention Mechanism for Fashion Image Captioning
1 Introduction
2 Related Works
3 Approach
3.1 Encoder-Decoder Architecture
3.2 Channel-Wise Attention Mechanism
3.3 Spatial Attention Mechanism
3.4 An Attention-Based Method for Fashion Image Captioning
4 Experiments
4.1 Fashion-Gen Dataset
4.2 Implementation Details
4.3 Quantitative Analysis
5 Conclusion
References
Robotics and Intelligent Systems
An Intelligent Inverse Kinematic Solution of Universal 6-DOF Robots
Abstract
1 Introduction
2 Problem Statement
3 Effective Inverse Kinematic Solutions
4 Simulation Validation
4.1 Comparative Solutions in Static Test
4.2 Comparative Solutions for Time-Varying Trajectories
5 Conclusion
Acknowledgements
References
A PD-Folding-Based Controller for a 4DOF Robot
Abstract
1 Introduction
2 Robot Dynamics and Problem Statement
3 Controller Design
3.1 Folding PD Control Scheme
3.2 Learning Integration Formed by a Neural Network
4 Simulation
4.1 Simulation Setup
4.2 Simulation Results
5 Conclusions
Acknowledgements
References
Designing and Analysis of a Soft Robotic Arm
Abstract
1 Introduction
2 Literature Review
3 Proposed Structural Design for a Robotic Arm
3.1 The Solution for the Weight of the Robotic Arm Structure
3.2 Section of Actuators, Gearhead and Gearbox System
3.3 Selection for Shoulder Joint
3.4 Selection for Elbow Joint
3.5 Other Parameters and Considerations
4 Proposed Design
4.1 Design of the Proposed Robot Arm
4.2 Components Installation
5 Validation of the Proposed Design
6 Conclusions and Future Work
References
ROV Stabilization Using an Adaptive Nonlinear Feedback Controller
1 Introduction
2 Dynamic Modeling of the Vehicle
2.1 The ROVVIAM900
2.2 Equations of Motion
2.3 Design Considerations
3 Vertical Controller Design
3.1 Nominal Nonlinear Feedback Controller (NNFC)
3.2 Adaptive Nonlinear Feedback Controller (ANFC)
4 Simulation and Results
4.1 Simulation Scenarios
4.2 Comment on Results
5 Conclusion
References
On Robust Control of Permanent Magnet Synchronous Generators Using Robust Integral of Error Sign
1 Introduction
2 Dynamic Model
2.1 Aerodynamic Model
2.2 Mechanic Model
2.3 PMSG Model
2.4 MPPT Algorithm
3 Control Design
3.1 RISE Based Current Controller
3.2 RISE Based Speed Controller
4 Simulation
5 Conclusion
References
A Flexible Sliding Mode Controller for Robot Manipulators Using a New Type of Neural-Network Predictor
Abstract
1 Introduction
2 Problem Statement
3 Controller Design
4 Simulation Validation
5 Conclusion
Acknowledgements
References
Optimized Gait Planning of Biped Robot Using Multi-objective JAYA Algorithm
Abstract
1 Introduction
2 Mathematical Background of the Proposed Concept
3 Proposed Algorithm
3.1 Explain Proposed Algorithm
3.2 Proposed MO-JAYA Technique
4 Results and Discussion
4.1 Results of Constrained Optimization
4.2 Results of Tested Pareto-Optimum Fronts of Solution-Candidates Using MO-JAYA Algorithm
5 Conclusions
Acknowledgments
References
Improve Stability of Deep Flux Weakening Operation Control Strategies for IPMSM
Abstract
1 Introduction
2 Speed Control of PMSM Drive Systems
2.1 PMSM Model and Its Limitations
2.2 Optimal Working Areas of PMSM
3 Proposed IPMSM Hybrid Flux-Weakening Speed Control Method
4 Simulation Results
4.1 Simulation IPMSM Velocity Control Model
4.2 Simulation Results of the Proposed Hybrid PMSM Control Method
5 Conclusions
Acknowledgments
References
Muscle-Gesture Robot Hand Control Based on sEMG Signals Utilizing Deep Neural Networks
Abstract
1 Introduction
2 Muscle-Gesture Robotic Arm and Finger Control System
2.1 System Architecture
2.2 Muscle-Gesture Computer Interface (MGCI)
2.3 One Dimensional Convolutional Neural Network (1D CNN)
2.4 Five Fingered Robot Hand
3 Experiment Results and Discussions
3.1 Recognition of Gesture Experiment
3.2 Robot Arm Motion Control
4 Conclusions
References
Green Energy and Power Systems
Techno-Economic Analysis of Grid Connected Photo-Voltaic Power Plant in Vietnam
Abstract
1 Introduction
2 Methodology
2.1 PVsyst Simulation Tool
2.2 Net Present Value Model
2.3 Levelized Cost of Electricity Model
2.4 Payback Year
2.5 Internal Rate of Return
3 Technical Results
4 Economic Analysis
4.1 Net Present Value
4.2 Internal Rate of Return and Payback Year
4.3 Levelized Cost of Electricity
5 Conclusion
References
Stability Identification of Power System Based Neural Network Training
Abstract
1 Introduction
2 Background
2.1 MDE Algorithm
2.2 MDE Algorithm for Training MLP Neural Model in Identification
3 The Proposed Approach: ANN-MDE
3.1 The Experimental Test to Identify the Stability of Power System
3.2 Case Study
4 The Experimental Result
5 Conclusion
Acknowledgement
References
Load Shedding in Islanded Microgrid with Uncertain Conditions
Abstract
1 Introduction
2 The Optimal Generator Dispatching
2.1 Mathematical Model
2.2 Linearization
3 Problem Solving
4 Load Shedding Considering the Voltage Deviation
5 Application
5.1 Optimize the Generator Output Power
5.2 Load Shedding Considering the Voltage Aspect
6 Conclusion
Acknowledgment
References
Effects of Designing and Operating Parameters on the Performance of Glucose Enzymatic Biofuel Cells
Abstract
1 Introduction
2 Experimental
2.1 Materials
2.2 Preparation of the Anodic Electrode
2.3 Characterization
2.4 Biofuel Cell Assembly and Testing
3 Results and Discussion
3.1 Wettability of CC Electrode
3.2 Morphology Analysis
3.3 Effect of Fuel Flow Rate on GEBC Performance
3.4 Effect of Current Collector Material on GEBC Performance
3.5 Effect of Assembled Stack Type on GEBC Performance
3.6 Effect of Open Ratio of Flow Field Plate on GEBC Performance
4 Conclusions
Acknowledgements
References
A Novel Fault Location Technique for Transmission Lines of a Power System Based on a Language of an Optimization Problem
Abstract
1 Introduction
2 Mathematical Model of the Fault Location
3 Modified ABC Algorithm Based Fault Location
4 Numerical Result
5 Conclusion
References
Parameter Estimation of Solar Photovoltaic Cells Using an Improved Artificial Bee Colony Algorithm
Abstract
1 Introduction
2 Mathematical Model of Parameter Estimation of a Solar PV Cell
3 Improved ABC Algorithm Based Parameter Estimation of a Solar PV Cell
3.1 Standard ABC Algorithm
3.2 Improved ABC Algorithm
4 Numerical Result
5 Conclusion
References
Improvement of ANFIS Controller for SVC Using PSO
Abstract
1 Introduction
2 The Studied Power System Configuration
3 Static Var Compensator Model
4 Apply ANFIS-PSO for Controlling SVC
5 Simulation Results
6 Conclusion
References
Study on the Operating Characteristics of Cell Electrodes in a Solid Oxide Fuel Cell (SOFC) Through the Two-Dimensional Numerical Simulation Method
Abstract
1 Introduction
2 Mathematical Model
3 Results and Discussion
4 Conclusions
Conflicts of Interest
References
The Structure and Electrochemical Properties of PAN-Based Carbon Aerogel Composite
Abstract
1 Introduction
2 Experimental
2.1 Preparation of the PAN and PAN-Graphene-Based Carbon Aerogels
2.2 Characterization
3 Results and Discussion
3.1 Structural Morphology of PAN and Graphene-Based PAN Aerogels
3.2 Raman Spectra and XRD Analysis
3.3 Electrochemical Apparatus and Measurements
4 Conclusions
Acknowledgments
References
Mechanical and Computational Mechanic Models
Theoretical and Experimental Research in the Assessment Fatigue Life of Prestressed Concrete Sleeper on the Urban Railway
Abstract
1 Introduction
2 Theoretical Basis for Calculating Fatigue Life for Prestressed Concrete Sleepers on Urban Railway
3 Experimental Method
3.1 Properties of Tested Sleepers
3.2 Test Equipments
3.3 Load Test Arrangements
3.4 Test Results
4 Conclusions
Acknowledgement
References
Modeling of Urban Public Transport Choice Behaviour in Developing Countries: A Case Study of Da Nang, Vietnam
Abstract
1 Introduction
2 Literature Review
3 Data and Methods
3.1 Location and Characteristics of the Study Area
3.2 Sampling
3.3 Binary Logit Model
4 Results and Discussion
4.1 Data and Descriptive Statistics
4.2 Binary Logit Model Specification
5 Conclusions
Acknowledgment
References
The Group Efficiency of Soil-Cement Columns in Comparison with the Group Factor of Piles
Abstract
1 Introduction
2 Methodology
2.1 Numerical Model
2.2 Method
3 Results
3.1 Group Efficiency of Piles
3.2 Group Efficiency of Semi Rigid Soil-Cement Columns
3.3 Percentage of Contribution of Each Factor to the Bearing Capacity of Group of Soil-Cement Columns
3.4 The Correlation Between the Group Efficiency for Piles and for Soil Cement Columns
3.5 Discussion
4 Conclusion
References
Design of Simplified Decoupling Control System of Pulsed MIG Welding Process for Aluminum Alloy
Abstract
1 Introduction
2 Preliminary
2.1 Mathematical Process Model of Pulsed MIG Welding for Aluminum Alloy
2.2 Simplified Decoupling Design for Aluminum Pulsed MIG Welding
3 PID Controller Design
4 Simulation Study
4.1 Integral Absolute Error Index (IAE)
4.2 Total Variation (TV)
4.3 Simulation Results
5 Conclusion
Acknowledgments
References
Static Analysis of HSDT-Based FGM Plates Using ES-MITC3+ Elements
Abstract
1 Introduction
2 ES-MITC3+ Element for HSDT-Based FGM Plates
2.1 Displacements, Strains and Stresses of FGM Plates Based on the HSDT
2.2 Formulation of ES-MITC3+ Element for the HSDT-Based FGM Plates
3 Numerical Examples
3.1 Square Al/ZrO2 Plate Under Uniform Loading
3.2 Simply Supported Square Al/Al2O3 Plate Under Sinusoidal Loading
4 Conclusions
References
Developing the Assembly Jig of Push-Belt CVT by SLA 3D Printing Technology
Abstract
1 Introduction
2 Experiment and Simulation
2.1 Object Description
2.2 ASTM D638
2.3 SLA 3D Printing Process
3 Results and Discussion
3.1 Material of Parts
3.2 The Development of SLA 3D Printing
4 Conclusions
References
The Effect of Airflow Rate on the Cooling Capacity of Minichannel Evaporator Using CO2 Refrigerant
Abstract
1 Introduction
2 Experiment Setup
3 Results and Discussion
4 Conclusion
Acknowledgment
References
An Approach to Optimize the Design of Ultrasonic Transducer for Food Dehydration
Abstract
1 Introduction
2 Materials and Methods
2.1 Design of Ultrasonic Transducer
2.2 Determination of Dimensions of Ultrasonic Transducer
3 Results and Discussion
3.1 Numerical Calculation of the Ultrasonic Transducer
3.2 Fabrication, Measurements of Ultrasonic Transducer
3.3 The Experiment of Drying Assisted by Ultrasound on Codonopsis Javanica
4 Conclusions
Acknowledgment
References
An ESO Approach for Optimal Steel Structure Design Complying with AISC 2010 Specification
Abstract
1 Introduction
2 Optimal Design Formulations
2.1 Optimization Problem
2.2 Direct Analysis Method
3 ESO Method
4 Illustrative Example
5 Conclusions
References
A Gravity Balance Mechanism Using Compliant Mechanism
Abstract
1 Introduction
2 Design Principles
3 Determine the Stiffness of Planar Springs and Compliant Rotary Spring
4 Energy-Free Adjustment with Spring Stiffness
4.1 The Adjustment Principle
4.2 Adjustment Method
5 Conclusion
Acknowledgements
References
Numerical Simulation of Flow around a Circular Cylinder with Sub-systems
Abstract
1 Introduction
2 Problem Definition
2.1 Model of Flow Through a Circular Cylinder Attached with 2 Splitter Plates
2.2 Model of Flow Through a Circular Cylinder with 2 Rotating Crosses Placed Behind
3 Numerical Method
3.1 Spacial and Temporal Discretization
3.2 Structure Solver
3.3 Navier-Stokes Solver
4 Computational Domain and Boundary Conditions
4.1 Model of Flow Through a Circular Cylinder Attached with 2 Splitter Plates
4.2 Model of Flow Through a Circular Cylinder with 2 Rotating Crosses Placed Behind
5 Results and Discussions
5.1 Test Method: Flow Through Circular Cylinder
5.2 Results of the Flow Through the Circular Cylinder Controlled by 2 Splitter Plates
5.3 The Results of Flow Through the Cylinder with 2 Crosses Placed Behind
5.4 Comparing the Effective of the 2 Models
6 Conclusion
References
Research on Performance of Color Reversible Coatings for Exterior Wall of Buildings
Abstract
1 Introduction
2 Description of the Color-Changing Coating Paint Components
3 Experimental Setup and Method
3.1 Sample Preparation
3.2 Experimental System
4 Results and Discussion
5 Conclusions
Acknowledgement
References
Advances in Civil Engineering
Experimental Study of Geopolymer Concrete Using Re-cycled Aggregates Under Various Curing Conditions
Abstract
1 Introduction
2 Experimental Program
2.1 Materials and Specimen Preparation
2.2 Test Setup and Procedure
3 Results and Discussion
3.1 Compressive Strength of Geopolymer Concrete with Recycled Aggregate
3.2 Effect of Recycled Coarse Aggregate on the Compressive Strength of Geopolymer Concrete
3.3 Effect of Recycled Fine Aggregate on the Compressive Strength of Geopolymer Concrete
3.4 Effect of the Curing Condition on the Compressive Strength of Geopolymer Concrete with Recycled Aggregate
4 Conclusion
References
Fundamental Study for Estimating Shear Strength Parameters of Fiber-Cement-Stabilized Soil by Using Paper Debris
Abstract
1 Introduction
2 Experimental Outline
2.1 Direct Box Shear Test
2.2 Unconfined Compression Test
3 Effect of the Amount of Paper Debris and Cement on the Shear Strength Parameters of Fiber-Cement-Stabilized Soil
3.1 Results of Direct Box Shear Test
3.2 Results of Unconfined Compression Test
3.3 Relationship Between Shear Strength Parameters and Failure Strength
4 Effect of Soil Properties on the Shear Strength Parameters of Fiber-Cement-Stabilized Soil
4.1 Experimental Conditions
4.2 Experimental Results
5 Conclusions
References
Study on the Compressive and Tensile Strength Behaviors of Corn Husk Fiber-Cement Stabilized Soil
Abstract
1 Introduction
2 Materials and Testing Programs
2.1 Materials
2.2 Testing Program
3 Results and Discussions
3.1 Unconfined Compression Test
3.2 Splitting Tension Test
4 Conclusions
References
Experimental and Numerical Investigation on Bearing Behavior of TRC Slabs Under Distributed or Concentrated Loads
Abstract
1 Introduction
2 Materials
2.1 Design Components for Fine Cement Concrete
2.2 Properties of Textile
3 Methodology
3.1 Casting and Preparation of Specimens
3.2 Pressure Testing Equipment and Experimental Protocol (Type A)
3.3 Punching Shear Testing Equipment and Experimental Protocol (Type B)
3.4 Finite Element Modeling
4 Results and Discussions
4.1 Mode of Failure and Cracking Patterns
4.2 Load – Deflection Behavior
5 Conclusions and Proposals
References
Probabilistic Free Vibration Analysis of Functionally Graded Beams Using Stochastic Finite Element Methods
1 Introduction
2 Theoretical Formulation
2.1 FG Beam Theory
2.2 Finite Element Model
3 Stochastic Spectral Methods for Uncertainty Propagation
3.1 Polynomial Chaos Expansion
3.2 Stochastic Collocation
4 Numerical Examples
5 Conclusion
References
A Study on Property Improvement of Cement Pastes Containing Fly Ash and Silica Fume After Treated at High Temperature
Abstract
1 Introduction
2 Materials and Experimental Program
2.1 Materials
2.2 Characterization
2.3 Experimental Method
3 Results and Discussion
3.1 Bulk Density
3.2 Compressive Strength
3.3 XRD Analysis
3.4 SEM Analysis
4 Conclusions
Acknowledgement
References
Tornado-Induced Loads on a Low-Rise Building and the Simulation of Them
Abstract
1 Introduction
2 Theory and Equations
2.1 Atmospheric Pressure Change During Tornadoes
2.2 Wind Velocity Pressure
2.3 Loads Due to Dropping Atmospheric Pressure
3 Simulation
3.1 Computational Program
3.2 Specific Example
4 Result and Discussion
4.1 Simulation
4.2 Comparison with Experiment Results
4.3 Effect of the External Pressure Coefficients Cp,e
5 Conclusions
References
Development of a New Stress-Deformation Measuring Device and Hazard Early Warning System for Constructional Work in Da Nang, Vietnam
Abstract
1 Introduction
2 Preparation for the New Monitoring System
2.1 Programming Language
2.2 Design of Each Components of the Measuring System
2.3 Calibration of the New Measuring System
2.4 Verification of the New Measuring System in Laboratory
3 Test-Bed
4 Conclusions
Acknowledgment
References
Compressive Behavior of Concrete: Experimental Study and Numerical Simulation Using Discrete Element Method
Abstract
1 Introduction
2 Uniaxial Compression Test on Concrete
2.1 Concrete Specimen
2.2 Experimental Results
3 Numerical Simulations
3.1 Numerical Specimen and Model Parameters
3.2 Uni-Axial Compressive Tests
3.3 Cracks Development
4 Conclusion
Acknowledgement
References
A Study on Motorcycle Equivalent Unit of Container Truck in Motorcycle Dominated Cities
Abstract
1 Introduction
2 Paper Objectives
3 Methodology
3.1 Data Collection
3.2 MEU Estimation Method
4 Research Results
5 Conclusions
References
Accelerated Cyclic Corrosion Testing of Steel Member Inside Concrete
Abstract
1 Introduction
2 Experiment Procedures
2.1 Test Specimens
2.2 Test Instrument and Corrosion Environment
2.3 Test Procedures and Measurement
3 Results and Discussion
3.1 The Corrosion Depth of Corroded Specimens
3.2 Distribution of Corrosion Depth on Corroded Surface
4 Conclusions
Funding
References
Prediction of Compressive Strength of Concrete Using Recycled Materials from Fly Ash Based on Ultrasonic Pulse Velocity and Design of Experiment
Abstract
1 Introduction
2 Research Methodology
2.1 Implementation Procedure
2.2 Materials
2.3 Design of Experiments
2.4 Specimen Preparation and Test Setup
2.5 Prediction Model: Multiple Linear Regression
2.6 Assessing Model Adequacy: Performance Criteria
3 Results and Discussions
3.1 Prediction of Concrete Compressive Strength and UPV-Simple Linear and Nonlinear Regression Model
3.2 Prediction of Concrete Compressive Strength via Concrete Compositions and UPV-Multivariate Linear Regression Model
3.3 Prediction of the Amount of Binder and Water for a Given Range of Concrete Compressive Strength Values
4 Conclusions
References
Performance of Foundation on Bamboo–Geotextile Composite Improved Soft Soil in Mekong Delta Through Both Plate Load Test and Numerical Analyses
Abstract
1 Introduction
2 Experimental Program
2.1 Materials
2.2 Experiments
2.3 Numerical Modelling
3 Results and Discussion
3.1 Strength Characteristics
3.2 Test Results
3.3 Numerical Simulation Results
4 Conclusions
Acknowledgment
References
Prediction of Long Span Bridges Response to Winds
Abstract
1 Introduction
2 Flutter Analysis
3 Buffeting Analysis
4 Conclusion
Acknowledgments
References
Effect of Wall Thickness and Node Diaphragms on the Buckling Behavior of Bamboo Culm
Abstract
1 Introduction
2 Finite Element Modelling
3 Numerical Results and Discussion
3.1 Effect of Nodal Thickness on the Buckling Behavior of Bamboo Culm
3.2 The Effect of Diaphragms on the Buckling Behavior of Bamboo Culm
4 Conclusion
References
Author Index

Advances in Intelligent Systems and Computing 1284

Yo-Ping Huang · Wen-June Wang · Hoang An Quoc · Le Hieu Giang · Nguyen-Le Hung   Editors

Computational Intelligence Methods for Green Technology and Sustainable Development Proceedings of the International Conference GTSD2020

Advances in Intelligent Systems and Computing Volume 1284

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong

The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, Perception and Vision, DNA and immune based systems, self-organizing and adaptive systems, e-Learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia. The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results. ** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink **

More information about this series at http://www.springer.com/series/11156

Yo-Ping Huang · Wen-June Wang · Hoang An Quoc · Le Hieu Giang · Nguyen-Le Hung
Editors

Computational Intelligence Methods for Green Technology and Sustainable Development

Proceedings of the International Conference GTSD2020

Editors

Yo-Ping Huang, Department of Electrical Engineering, National Taipei University of Technology, Taipei, Taiwan

Wen-June Wang, Department of Electrical Engineering, National Central University, Jhongli City, Taoyuan Hsien, Taiwan

Hoang An Quoc, University of Technology and Education, Ho Chi Minh City, Vietnam

Le Hieu Giang, University of Technology and Education, Ho Chi Minh City, Vietnam

Nguyen-Le Hung, University of Da Nang, Danang City, Vietnam

ISSN 2194-5357  ISSN 2194-5365 (electronic)
Advances in Intelligent Systems and Computing
ISBN 978-3-030-62323-4  ISBN 978-3-030-62324-1 (eBook)
https://doi.org/10.1007/978-3-030-62324-1

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Editor’s Preface

This book contains the scientific contributions included in the program of the 5th International Conference on Green Technology and Sustainable Development (GTSD2020), which was held during November 27–28, 2020, as a virtual conference. The GTSD International Conference is a prestigious biennial event created to provide an international scientific research forum on the technologies and applications of green technology and sustainable development in the fourth industrial revolution, with a focus on computational intelligence methods and techniques. The areas of GTSD are power, energy, new materials and environmental engineering, together with advances in computational intelligence and their applications to the real world.

The conference is organized to promote the active participation of all attendees and presenters, via plenary presentation sessions, keynote addresses, interactive workshops and panel discussions, to explore how power, energy, new materials and environmental engineering and advances in computational intelligence are applied in various domains. The aim was to further increase the body of knowledge in this specific area by providing a forum to exchange ideas and discuss results.

The program committee of GTSD2020 represented 11 countries, and authors submitted 130 papers from 9 countries. This certainly attests to the widespread international importance of the theme of the conference. Each paper was reviewed on the basis of originality, novelty and rigor. After the reviews, 54 papers were accepted for presentation and published in these proceedings.

We would like to take this opportunity to express our deep appreciation to all authors, participants, keynote speakers, program committee members, session chairs, organizing committee members and steering committee members, as well as the organizers in their various roles, for their contributions to making GTSD2020 successful, given the global surge in green technology research action for sustainable development. We highly appreciate their efforts and kindly invite all to continue to contribute to future GTSD conferences.


Contents

Artificial Intelligence and Cyber Systems

Mobile Data Traffic Offloading with Content Centric Networking
Dung Ong Mau, Tuyen Dinh Quang, Anh Phan Tuan, Nga Vu Thi Hong, and Quyen Le Ly Quyen

Temporal Features Learning Using Autoencoder for Anomaly Detection in Network Traffic
Nguyen Thanh Van, Le Thanh Sach, and Tran Ngoc Thinh

A Computer-Aided Detection to Intracranial Hemorrhage by Using Deep Learning: A Case Study
Kien G. Luong, Hieu N. Duong, Cong Minh Van, Thu Hang Ho Thi, Trong Thy Nguyen, Nam Thoai, and Thi T. T. Tran Thi

A Novel Security Solution for Decentralized Web Systems with Real Time Hot-IPs Detection
Tam T. Huynh, Chinh N. Huynh, and Thuc D. Nguyen

Segmentation of Left Ventricle in Short-Axis MR Images Based on Fully Convolutional Network and Active Contour Model
Tien Thanh Tran, Thi-Thao Tran, Quoc Cuong Ninh, Minh Duc Bui, and Van-Truong Pham

Proposed Novel Fish Freshness Classification Using Effective Low-Cost Threshold-Based and Neural Network Models on Extracted Image Features
Anh Thu T. Nguyen, Minh Le, Hai Ngoc Vo, Duc Nguyen Tran Hong, Tuan Tran Anh Phuoc, and Tuan V. Pham

Machine Learning-Based Evolutionary Neural Network Approach Applied in Breast Cancer Tumor Classification
Hoang Duc Quy, Cao Van Kien, Ho Pham Huy Anh, and Nguyen Ngoc Son

Malware Classification by Using Deep Learning Framework
Tran Kim Toai, Roman Senkerik, Vo Thi Xuan Hanh, and Ivan Zelinka

Attention Mechanism for Fashion Image Captioning
Bao T. Nguyen, Om Prakash, and Anh H. Vo

Robotics and Intelligent Systems

An Intelligent Inverse Kinematic Solution of Universal 6-DOF Robots
Dang Sy Binh, Le Cong Long, Khuu Luan Thanh, Dinh Phuoc Nhien, and Dang Xuan Ba

A PD-Folding-Based Controller for a 4DOF Robot
Vo Tan Tai, Doan Ngoc Minh, and Dang Xuan Ba

Designing and Analysis of a Soft Robotic Arm
Yousef Amer, Harsh Rana, Linh Thi Truc Doan, and Tham Thi Tran

ROV Stabilization Using an Adaptive Nonlinear Feedback Controller
Ngoc-Huy Tran, Manh-Cam Le, Thien-Phuong Ton, The-Cuong Le, and Thien-Phuc Tran

On Robust Control of Permanent Magnet Synchronous Generators Using Robust Integral of Error Sign
Tien Hoang Nguyen, Hong Quang Nguyen, Phuong Nam Dao, and Nga Thi-Thuy Vu

A Flexible Sliding Mode Controller for Robot Manipulators Using a New Type of Neural-Network Predictor
Dang Xuan Ba

Optimized Gait Planning of Biped Robot Using Multi-objective JAYA Algorithm
Tran Thien Huan, Cao Van Kien, and Ho Pham Huy Anh

Improve Stability of Deep Flux Weakening Operation Control Strategies for IPMSM
Pham Quoc Khanh, Truong Viet Anh, and Ho Pham Huy Anh

Muscle-Gesture Robot Hand Control Based on sEMG Signals Utilizing Deep Neural Networks
Guan-Chun Luh, Hao-Sung Chiu, and Min-Jou Tsai

Green Energy and Power Systems

Techno-Economic Analysis of Grid Connected Photo-Voltaic Power Plant in Vietnam
Le Phuong Truong, Bui Van Tri, and Nguyen Cao Cuong

Stability Identification of Power System Based Neural Network Training
Nguyen Ngoc Au, Le Vinh Thinh, and Tran Thien Huan

Load Shedding in Islanded Microgrid with Uncertain Conditions
Thi Thanh Binh Phan, Quoc Dung Phan, L. Tran, Q. Vo, T. Vu, H. Nguyen, and Trong Nghia Le

Effects of Designing and Operating Parameters on the Performance of Glucose Enzymatic Biofuel Cells
Ngoc Bich Duong, Van Men Truong, Thi Ngoc Bich Tran, and Hsiharng Yang

A Novel Fault Location Technique for Transmission Lines of a Power System Based on a Language of an Optimization Problem
Thanh H. Truong, Duy C. Huynh, Anh V. Truong, and Matthew W. Dunnigan

Parameter Estimation of Solar Photovoltaic Cells Using an Improved Artificial Bee Colony Algorithm
Duy C. Huynh, Loc D. Ho, and Matthew W. Dunnigan

Improvement of ANFIS Controller for SVC Using PSO
Van-Tri Bui and Dinh-Nhon Truong

Study on the Operating Characteristics of Cell Electrodes in a Solid Oxide Fuel Cell (SOFC) Through the Two-Dimensional Numerical Simulation Method
XuanVien Nguyen, PhamTrungKhanh Luong, ThiNhung Tran, MinhHung Doan, AnQuoc Hoang, and ThanhTrung Dang

The Structure and Electrochemical Properties of PAN-Based Carbon Aerogel Composite
Haolin Hsu, Kaiying Liang, Junkun Lin, Jeanhong Chen, and Lungchuan Chen

Mechanical and Computational Mechanic Models

Theoretical and Experimental Research in the Assessment Fatigue Life of Prestressed Concrete Sleeper on the Urban Railway
Tran Anh Dung, Tran The Truyen, Nguyen Hong Phong, Pham Van Ky, and Le Hai Ha

Modeling of Urban Public Transport Choice Behaviour in Developing Countries: A Case Study of Da Nang, Vietnam
Tran-Thi P. Anh, Nguyen-Phuoc Q. Duy, Phan Cao Tho, and Fumihiko Nakamura

The Group Efficiency of Soil-Cement Columns in Comparison with the Group Factor of Piles
Tham Hong Duong and Nguyen Thanh Nga

Design of Simplified Decoupling Control System of Pulsed MIG Welding Process for Aluminum Alloy
Luan Vu Truong Nguyen, Lam Chuong Vo, Thanh-Hai Nguyen, and Moonyong Lee

Static Analysis of HSDT-Based FGM Plates Using ES-MITC3+ Elements
Thanh Chau-Dinh, Huong Tran-Ngoc-Diem, and Jin-Gyun Kim

Developing the Assembly Jig of Push-Belt CVT by SLA 3D Printing Technology
Thanh Trung Do, Son Minh Pham, Minh The Uyen Tran, Quang Khoa Dang, Hoang Khang Tran, and Sang-Ryeoul Ryu

The Effect of Airflow Rate on the Cooling Capacity of Minichannel Evaporator Using CO2 Refrigerant
Tronghieu Nguyen, Thanhtrung Dang, and Minhhung Doan

An Approach to Optimize the Design of Ultrasonic Transducer for Food Dehydration
Hay Nguyen, Ngoc-Phuong Nguyen, Xuan-Quang Nguyen, and Anh-Duc Le

An ESO Approach for Optimal Steel Structure Design Complying with AISC 2010 Specification
Atitaya Chaiwongnoi, Huynh Van Thu, Sawekchai Tangaramvong, and Chung Nguyen Van

A Gravity Balance Mechanism Using Compliant Mechanism
Ngoc Le Chau, Hieu Giang Le, and Thanh-Phong Dao

Numerical Simulation of Flow around a Circular Cylinder with Sub-systems
Phan Duc Huynh and Nguyen Tran Ba Dinh

Research on Performance of Color Reversible Coatings for Exterior Wall of Buildings
Vu-Lan Nguyen, Chang-Ren Chen, Chang-Yi Chung, Kao-Wei Chen, and Rong-Bin Lai

Advances in Civil Engineering

Experimental Study of Geopolymer Concrete Using Re-cycled Aggregates Under Various Curing Conditions
Ngoc Thanh Tran

Fundamental Study for Estimating Shear Strength Parameters of Fiber-Cement-Stabilized Soil by Using Paper Debris
Kazumi Ryuo, Tomoaki Satomi, and Hiroshi Takahashi

Study on the Compressive and Tensile Strength Behaviors of Corn Husk Fiber-Cement Stabilized Soil
Nga Thanh Duong, Tomoaki Satomi, and Hiroshi Takahashi

Experimental and Numerical Investigation on Bearing Behavior of TRC Slabs Under Distributed or Concentrated Loads
Tran The Truyen and Tu Sy Quan

Probabilistic Free Vibration Analysis of Functionally Graded Beams Using Stochastic Finite Element Methods
Phong T. T. Nguyen, Luan C. Trinh, and Kien-Trung Nguyen

A Study on Property Improvement of Cement Pastes Containing Fly Ash and Silica Fume After Treated at High Temperature
Thi Phuong Do, Van Quang Nguyen, and Minh Duc Vu

Tornado-Induced Loads on a Low-Rise Building and the Simulation of Them
Ngoc Tinh Nguyen and Linh Ngoc Nguyen

Development of a New Stress-Deformation Measuring Device and Hazard Early Warning System for Constructional Work in Da Nang, Vietnam
Truong-Linh Chau, Thu-Ha Nguyen, Viet-Thanh Le, and Quoc-Thien Tran

Compressive Behavior of Concrete: Experimental Study and Numerical Simulation Using Discrete Element Method
Tran Van Tieng, Nguyen Thi Thuy Hang, and Nguyen Xuan Khanh

A Study on Motorcycle Equivalent Unit of Container Truck in Motorcycle Dominated Cities
Tran Vu Tu, Nguyen Huynh Tan Tai, Ho Si Dac, and Hiroaki Nishiuchi

Accelerated Cyclic Corrosion Testing of Steel Member Inside Concrete
Dao Duy Kien, Phan Tan Hung, Haidang Phan, Phan Thanh Chien, In-Tae Kim, and Nguyen Thanh Hung

Prediction of Compressive Strength of Concrete Using Recycled Materials from Fly Ash Based on Ultrasonic Pulse Velocity and Design of Experiment
Le Thang Vuong, Cung Le, and Dinh Son Nguyen

Performance of Foundation on Bamboo–Geotextile Composite Improved Soft Soil in Mekong Delta Through Both Plate Load Test and Numerical Analyses
Huu-Dao Do, Quoc-Thien Tran, Van-Hai Nguyen, Khac-Hai Phan, and Anh-Tuan Pham

Prediction of Long Span Bridges Response to Winds
Hoang Trong Lam and Do Viet Hai

Effect of Wall Thickness and Node Diaphragms on the Buckling Behavior of Bamboo Culm
Luan C. Trinh, Quaiyum M. Ansari, Phong T. T. Nguyen, Trung-Kien Nguyen, and Paul M. Weaver

Author Index

Artificial Intelligence and Cyber Systems

Mobile Data Traffic Offloading with Content Centric Networking

Dung Ong Mau, Tuyen Dinh Quang, Anh Phan Tuan, Nga Vu Thi Hong, and Quyen Le Ly Quyen
Industrial University of Ho Chi Minh City, Ho Chi Minh City, Vietnam
{ongmaudung,dinhquangtuyen,phantuananh,vuthihongnga,lelyquyenquyen}@iuh.edu.vn

Abstract. Nowadays, the mobile Internet is increasingly popular and the number of mobile subscribers (MSs) is growing very quickly. Internet MSs often access social networks such as Facebook and YouTube to share and download pictures, voice, video and so on. As a result, they occupy a lot of bandwidth on backbone links as well as server resources. With a traditional client-server connection, servers are easily overloaded by a huge number of MSs accessing a service at the same time. In the era of green information technology, the Content-Centric Networking (CCN) protocol has been proposed as a suitable solution to this problem. CCN uses named data to push and cache content at the edge gateways, so that highly popular content is generated once but can be consumed many times. In this paper, we present a solution for the Long Term Evolution (LTE) mobile network that uses the CCN protocol to cache contents in the E-UTRAN Node B (eNodeB). With the OPNET Modeler simulation tool, we build realistic mobile networks in which a huge number of mobile LTE MSs access the same service on a single server. The simulation results show that all MS requests are answered successfully while server traffic is offloaded significantly.

Keywords: Long term evolution · Content centric networking · User generated content · Power-law distribution

1 Introduction

Introduction

In the global mobile data traffic forecast from Cisco, overall mobile data traffic is expected to grow to 77 Exabytes per month by 2022, a seven-fold increase over 2017. And mobile data traffic will grow at a Compound Annual Growth Rate (CAGR) of 46% from 2017 to 2022 [1]. Indeed, hundreds of millions Internet users are self-publishing consumers. The GSMA reported that in 2018, there were more mobile internet users than non-users amongst the population covered by a mobile broadband network [2]. To understand the nature and the impact of Internet users, many researchers analyze data from YouTube, the world’s largest social Video on Demand (VoD) c The Editor(s) (if applicable) and The Author(s), under exclusive license  to Springer Nature Switzerland AG 2021 Y.-P. Huang et al. (Eds.): GTSD 2020, AISC 1284, pp. 3–14, 2021. https://doi.org/10.1007/978-3-030-62324-1_1

4

D. O. Mau et al.

Fig. 1. Mobile CCN network topology.

system. A term “User Generated Content” (UGC) is defined which re-shaped the way people watch video on demand and television, with millions of video producers and consumers [3,4]. The scale, dynamics and decentralization of the UGC videos make traditional content popularity prediction unsuitable. Along with UGC, Internet architecture is a remarkably complex technical system built on creative innovation around the world from the 1950s to the present. Among many research progress on the future Internet architecture, “Content Centric Networking” (CCN) was first proposed by Van Jacobson in 2009 [5,6]. In 2019, the CCN becomes the CCNx protocol which is a product of the Internet Research Task Force (IRTF) [7]. Last but not least, in recent years, research issues have emerged in mobile edge caching [8,9]. Some research has focused on edge caching policy and optimization to improve the Quality of Service (QoS) [10]. With the combination of the analysis of UGC and the CCN protocol underlying, we propose a realistic mobile CCN network as shown in Fig. 1. In Fig. 1, an example LTE network includes seven cells [11]. The CCN protocol maintains Pending Interest Table (PIT), Forwarding Information Base (FIB) and Content Store (CS or Cache) integrated to all enodeBs dulled CCNeNodeBs. A memory size of the CS is chosen as a few percent of the server’s total file size and the replacement policy for the memory is performed when it is full.

Mobile Data Traffic Offloading with Content Centric Networking

Hitting rate(%) =

N umber of cache hits N umber of cache hits + N umber of cache misses

5

(1)

Hitting rate as shown in Eq. (1) is an important metric for caching due to limited size of memory. Within the same cache size, we consider the performance of multiple replacement algorithms such as Least Frequency Used (LFU), Least Recently Used (LRU), First In First Out (FIFO) and Random. With high hitting rate value, CCN-eNodeBs enable store most of popular contents that are found and responded directly to MSs. For this reason, CCN-eNodeBs help to reduce backbone traffic and offload server traffic significantly. Summary of the first section, we start with forecasts of rapidly increasing mobile Internet traffic. Next, important UGC features and QoS improvement in edge cache of the wire and wireless networks are presented. From above findings, we propose to apply the CCNx protocol to edge cache for LTE mobile network at eNodeB. The rest of the paper is organized as follows. In Sect. 2, we review the related work in CCN and our mobile CCN proposal. Section 3 presents our simulation and determines results. Finally, we conclude the paper by Sect. 4.

2

Related Work

A large amount of data from YouTube, Daum from Korea or Youku in China are collected and analyzed. Their analysis reveals interested properties regarding the distribution of requests across video [12,13]. As shown in Fig. 2, the horizontal axis represents the videos sorted from the most popular to the least popular, with video ranks normalized between 0 and 100. The graph shows that 10% of the top popular videos account for nearly 80% of views, while the rest 90% of the videos account for 20% requests. Note that Daum and Youku databases also reveal a similar behavior. A nice immediate implication of this skewed distribution is that caching can be made very efficient since storing only a small set of objects can produce high hit ratios. That is, by storing only 10% of long-term popular videos, a cache can serve 80% of requests. So, skewness of user interests across videos distribution is fine turned with Pareto principle (or 80–20 law) and such skewness tells us how niche centric the service is. After the CCN protocol was proposed, researchers have done a lot of projects in this area. CCN is a transport protocol for a communication architecture built on named data [7]. There are two types of message in CCN protocol: interest and data. The interest message is used to request data by name while the data message is used to supply the content. Named data objects are very important for CCN as host’s IP for today’s Internet. Fundamentally, CCN defines hierarchical name schemes which require unique names for individual data objects, since names are used for identifying objects independent of its location. The name has a structure similar to current Uniform Resource Locations (URLs), where the hierarchical is rooted in a publisher prefix. The name is human-readable, which makes it possible for users to manually type in name and assess the relation

6

D. O. Mau et al.

Fig. 2. Skewness of user interests across videos [13]. Table 1. Summary of protocol Name

Components

Behavior

CCNx

Interest packet Data packet FIB PIT CS/Cache

Request data Receive data Content-based routing table Forwarded Interest packet management Store data as content chunks

LRU

Timestamp tracking Discards the least recently used item first

LFU

Frequency tracking

Discards the least frequency used item first

FIFO

None

First in item then first out

Random None

Discard the random used item

between a name and what the user wants. Summary of protocol is shown in Table 1. The Named Data Networking (NDN) project [14] is one of the four projects under the National Science Foundation (NSF) future Internet architecture program [15]. NDN project aims to develop a new Internet architecture by naming data instead of their locations. The project also develops specifications and prototype implementations of CCN protocol and applications by using end-to-end test bed deployments, simulation, and theoretical analysis to evaluate the proposed architecture [16]. Content-Oriented Networking (CONNECT) project [17] is the international project with multiple partners in Europe and the US, adopting as a starting point the concept currently promoted by the Palo Alto Research Center (PARC) team led by Van Jacobson. CONNECT aims to complement the existing work on CCN with original proposals in the following three technique areas: (i) The applicable and queue management require name-based criteria to ensure fairness and to realize service differentiation; (ii) It is important to prove that the name-based routing and forwarding is scalable and to design algorithms suitable for full-scale implementation; (iii) It is urgent to define replication and caching strategies in cheap memory and to evaluate their performance.

Mobile Data Traffic Offloading with Content Centric Networking

7

Fig. 3. LTE network topology in OPNET simulation.

As mentioned in CCN protocol [7], they emphasized that the CCN model is compatible with today’s Internet and has a clear, simple evolutionary strategy. Like IP, CCN is a “universal overlay”: CCN can run over anything including IP, and anything can run over CCN including IP. In the next section, we are going to present our CCN protocol simulation which is integrated in the LTE network.

3 3.1

Implementation and Evaluation Network Topology Simulation

OPNET Modeler [18,19] is used to simulate LTE network and CCN protocol integration. As shown in Fig. 3, the network topology has seven cells and every cell includes 25 LTE mobile stations (MSs), an eNodeB, and a CCN-eNodeB. A total of 175 LTE MSs exist in networks which are denoted from CCN-node-0 to

8

D. O. Mau et al.

CCN- node-174. In addition, the timestamp of sending data requests is started randomly using the random function RAND(0,10)s. The duration of sending and receiving data as known as round trip time (RTT) is fluctuated, depending on many factors such as latency, congestion, data availability, etc. For this reason, even though Interest message inter-arrival time is a constant time, the rate of request data is fluctuated too. CCN-eNodeBs run CCN protocol and integrate to eNodeBs via 1000 BaseX Internet links. All eNodeBs and Evolved Packet Core(EPC) connect to mobile IP backbone using optical links with Optical Carrier OC-12 and OC-24 data rate, respectively. A wide area network (WAN) supports optical link with OC-24 data rate between a gateway (GW-0) and EPC. Video server connects to GW-0 via 1000 BaseX Internet links. In the context of upstream links in CCN protocol, MSs send content requests to the CCN-eNodeB rather than they send directly to the Video server as normally. After checking requirement content in the CS, CCN-eNodeB replies data packet to MS data if having. Otherwise, CCN-eNodeB continues to forward request content to the video server. On the downstream link, after receiving content from the video server, CCN-eNodeB does a replacement policy and forwards to MSs. For this reason, advertise common node (ads-common-node) acts as the IP address administer of current CCN-eNodeB which is serving LTE MSs. After MSs, as known as CCN-node, are powered on and receive current eNodeB ID, MSs will retrieve CNN-eNodeB’s IP address in association with eNodeB ID by sending and receive information from ads-common-node. While MSs move around and roaming between cells, ads-common-node continues to update the IP address of CCN-eNodeB of the cell that MSs roams to. For this reason, the ads-common-node holds a list of mapping IP addresses constituted by the pairs eNodeB identification (ID) and CCN-eNodeB. A location of ads-common-node is not limited to directly connect to EPC, it can be linked to other points of LTE network such as eNodeB, IP Core. 3.2

CCN-eNodeB Node Simulation

IP infrastructure services that have taken decades to evolve, such as Domain Name Service (DNS) naming conventions and name-space administration or inter-domain routing policies and conventions, can be readily used by CCN. As described in [14], the CCN’s hierarchically structured names are semantically compatible with IP’s hierarchically structured addresses, the core IP routing protocols, Border Gateway Protocol (BGP), Intermediate System To Intermediate System (IS-IS) and Open Shortest Path First (OSPF), can be used as-is to deploy CCN in parallel with and over IP. Thus, we proposed the model for CCN processor which is integrated to router, gateway and eNodeB as shown in Fig. 4. From our knowledge, there is no topic talking about CCN overlay on mobile networks, especially for LTE networks until now. A CCN node provides caching and loop-free forwarding. As shown in Fig. 4, the Content Store (CS) includes a buffer memory to store data objects. When a CCN node receives an interest message or data message, it does prefix matching

Mobile Data Traffic Offloading with Content Centric Networking

9

Fig. 4. CCN node integrated eNodeB processor OPNET model.

by looking up by name. Any data content is potentially useful to many consumers. When a buffer memory is full, the CS must implement a replacement policy in order to maximize the possibility of reuse of popular data content. Least Recently Used (LRU) and Least Frequently Used (LFU) are two famous policies that are suggested to apply in CS. Besides that, there are multiple simple cache algorithms that may be considered such as First In First Out (FIFO) or Random. CCN supports on-path caching. When data content is routed back from Server or CCN nodes to users, all CCN nodes in the middle on-path can cache data so that subsequent received requests for the same data object can be answered from that cache immediately. From a CCN node behavior, we also realize that there is a balance of request and response messages; that is every single sent interest message is answered by one data packet of content. 3.3

LTE Mobile Station Simulation

In our visual LTE network, each cell has 25 LTE mobile stations (MSs), and they are in steady state on the beginning time simulation. Then, MSs will move around 7 cells randomly starting operating time, moving direction and speed. The total number of MSs is 175, and they request for interest video files which are supplied by a single video server in time periodically. Every MS maintains a timer used to determine interval request time and time-out. After MS sends an interest message, if time-out, they resend the same interest message again until they receive data content successfully. The index of popular video files is based on a Power Function probability density function (PDF) presented by Eq. (2) in OPNET modeler [18]. f (x) =

c.xc−1 bc

(2)


where x is a continuous random variable with 0 \le x \le b, c is a shape parameter, and b is a scale parameter. Hence, the mean E(x) and variance VAR(x) are given in Eq. (3). After algebraic manipulation, we obtain the b and c values used as inputs of Eq. (2) from the chosen mean and variance, as in Eq. (4):

E(x) = \int_{x=0}^{x=b} x \, f(x) \, dx = \int_{x=0}^{x=b} \frac{c \, x^{c}}{b^{c}} \, dx = \frac{b \, c}{c+1}, \qquad VAR(x) = \int_{x=0}^{x=b} x^{2} f(x) \, dx - (E(x))^{2} = \frac{b^{2} c}{(c+2)(c+1)^{2}} \qquad (3)

\Longleftrightarrow \qquad b = E(x) \cdot \frac{c+1}{c}, \qquad c = \sqrt{\frac{E(x)^{2}}{VAR(x)} + 1} - 1 \qquad (4)

Following Eqs. (3) and (4), the distribution is configured to reflect the Pareto Principle (also called the Pareto Law), which states that roughly 80% of Interest messages target 20% of the popular data objects [20]. With 1000 video files simulated in this paper, we chose 900 and 100 as the mean and variance values, respectively.
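As a quick worked example of Eqs. (3) and (4), the following Python snippet derives b and c from the mean and variance chosen in this paper (900 and 100):

import math

def power_params(mean, var):
    # Eq. (4): shape c and scale b of the Power Function PDF f(x) = c*x**(c-1)/b**c
    c = math.sqrt(mean**2 / var + 1) - 1
    b = mean * (c + 1) / c
    return b, c

b, c = power_params(mean=900, var=100)
print(f"b = {b:.2f}, c = {c:.2f}")   # b ~ 910.11, c ~ 89.01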

3.4 Simulation Results

In the OPNET modeler, the initial state is determined by the random number seed. For discrete event simulations, the random number seed can be set with the integer seed preference. For a simulation incorporating stochastic elements, each distinct seed value produces different behavior and yields new results. To improve the accuracy of the simulation results, we run our model with three seeds (runs) for each simulation scenario. The final simulation results are average values calculated and plotted with Matlab. Moreover, since multiple replacement policies are implemented, we use the simulation results under the LFU replacement policy to demonstrate the results of part a), while the results of part b) compare the replacement policies. Table 2 presents our network simulation parameters in detail. a) CCN-eNodeBs Help Reduce Network Traffic and Offload the Video Server Significantly: Figure 5a shows the number of bits responded by the video server per second. Without the CCN protocol, the video server must respond to every Interest message requested by all MSs. For this reason, the server always carries high traffic and easily suffers massive overload as the number of users increases. In contrast, with the CCN protocol deployed in eNodeBs, the server throughput is high only at the beginning, then fades down quickly as the CS fills up and the hitting ratio rises. It should be noted that all eNodeB caches are empty at first and are then step by step filled with popular files. Figure 5b shows the


Table 2. Network configuration

Element     | Attribute                           | Value
eNodeB      | Cell diameter                       | 2000 m
            | Carrier frequency (UL/DL)           | 1920/2110 MHz
            | Bandwidth                           | 20 MHz
            | Antenna gain                        | 15 dBi
            | Maximum transmission power          | 0.5 W
            | Receiver sensitivity                | -200 dBm
            | eNodeB selection threshold          | -110 dBm
            | Handover type                       | Intra-frequency
CCN-eNodeB  | Function                            | CCN protocol
            | Cache size                          | 90 Mbit (about 200 files)
            | Replacement policy                  | Random/FIFO/LFU/LRU
MS          | Antenna gain                        | -1 dBi
            | Maximum transmission power          | 0.5 W
            | Receiver sensitivity                | -200 dBm
            | Serving eNodeB ID                   | Perform cell search
            | Path loss                           | Free space, without fading
            | CCN directory                       | ccnx://iuh.edu.vn/video/[file index]
            | File-based popularity               | Pareto distribution
            | Interest message inter-arrival time | 6 s
            | Interest message time-out           | 2 s
MS mobility | Moving area                         | 25,000 m^2 inside 7 mobile cells
            | Moving direction                    | Random way
            | Moving speed                        | Uniform PDF [0, 10] m/s
            | Pause time                          | 100 s
            | Moving time                         | 10 s
Server      | CCN directory                       | ccnx://iuh.edu.vn/video
            | Number of video files               | 1000 files
            | Video file size                     | Uniform [100, 500] packets
            | Packet size                         | 1500 bits

total bits cached at eNodeB-7. The buffer quickly fills up to around 90 Mbits, after which the replacement policy keeps the most popular files and releases the least popular ones. Hence, the remaining low server traffic is caused by requests for the least popular files. Since the Internet is a best-effort delivery network, a certain amount of data packets may be dropped in transmission. However, thanks to the time-out maintained in the MSs, each Interest message sent by an MS is eventually answered by one data message of the video file. As shown in Figs. 5c and 5d, the total Interest messages sent from MSs to eNodeB-7 and the total data responses by eNodeB-7 confirm that the average-value curves of both figures are very similar. b) Replacement Policies Comparison in the CS: Figure 6 illustrates in more detail how much CCN helps offload servers. In Fig. 6a, the gap between the two lines of accumulated bits sent by the server with and without CCN is


Fig. 5. Offloading server traffic with CCN protocol.

the total number of offloaded bits. This is also the number of bits that are answered directly by the CCN-eNodeBs and need not be transmitted over the IP backbone. Figure 6b presents the offloading percentage. Under the Pareto Law PDF of popular contents, the offloading percentage can reach

Fig. 6. Hitting rate comparison.


up to 80% with the LFU replacement policy. This means that by applying CCN in the current context, we can offload nearly 80% of the traffic on the IP backbone. It should be noted that there is a tradeoff between acceptable performance and implementation cost. As shown in Fig. 6b, LRU performs slightly better than LFU, and much better than FIFO and Random. The hitting rate of LRU rises quickly and converges to the maximum value (80%). LFU converges more slowly than LRU but still reaches 80%. Compared with FIFO and Random, both LRU and LFU need extra memory to keep track of data-content age information and the number of content accesses, respectively. For this reason, FIFO and Random are suitable for low-cost solutions.

4 Conclusions

In this paper, we have presented the CCN protocol overlaid on the IP layer in the context of a mobile network with a popularity distribution of social network contents. We believe our work is realistic and timely. To our knowledge, this work is the first study of the CCN protocol applied to the mobile Internet, specifically the LTE network. Our simulation results indicate that CCN can reduce overall backbone traffic as well as offload servers. We have studied multiple cache algorithms applied to the CCN-eNodeB and shown that even a simple replacement policy can store the most popular contents and offload server traffic efficiently.

References

1. Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2017–2022, February 2019
2. GSMA: Mobile Internet Connectivity 2019 Global Factsheet
3. Rajamma, R.K., Paswan, A., Spears, N.: User-generated content (UGC) misclassification and its effects. J. Consum. Mark. 37, 125–138 (2019)
4. Wang, Z., Liu, J., Zhu, W.: Social Video Content Delivery. Springer, Heidelberg (2016)
5. Jacobson, V., Smetters, D., Thornton, J., Plass, M., Briggs, N., Braynard, R.: Networking named content. In: Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies (2009)
6. Ding, W., Yan, Z., Deng, R.H.: A survey on future Internet security architectures. IEEE Access 4, 4374–4393 (2016)
7. https://tools.ietf.org/html/rfc8569
8. Wang, X., et al.: Content-centric collaborative edge caching in 5G mobile internet. IEEE Wirel. Commun. 25(3), 10–11 (2018)
9. Li, H., Ota, K., Dong, M.: ECCN: orchestration of edge-centric computing and content-centric networking in the 5G radio access network. IEEE Wirel. Commun. 25(3), 88–93 (2018)
10. Zhang, T., et al.: Content-centric mobile edge caching. IEEE Access 8, 11722–11731 (2019)
11. 3GPP: UTRA-UTRAN Long Term Evolution (LTE) and 3GPP System Architecture Evolution (SAE), Release 8 (2008)
12. Liu, L.: Motivating user-generated content contribution with voluntary donation to content creators. In: International Conference on Human-Computer Interaction. Springer, Cham (2019)
13. Cha, M., et al.: I tube, you tube, everybody tubes: analyzing the world's largest user generated content video system. In: Proceedings of the 7th ACM SIGCOMM Conference on Internet Measurement (2007)
14. Named Data Networking Project. http://www.named-data.net/
15. NSF Future Internet Architecture Project. http://www.nets-fia.net/
16. Ahmed, S.H., Bouk, S.H., Kim, D.: Content-Centric Networks: An Overview, Applications and Research Challenges. Springer, Heidelberg (2016)
17. ANR Connect Project. http://www.anr-connect.org/
18. OPNET Technologies, Inc.: OPNET Modeler. http://www.opnet.com
19. Chen, M.: OPNET Network Simulation. Press of Tsinghua University, Beijing (2004). ISBN 7-302-08232-4
20. Jinbo, B., Hongbo, L.: Study on a Pareto principle case of social network. In: 2019 4th International Conference on Social Sciences and Economic Development (ICSSED 2019). Atlantis Press (2019)

Temporal Features Learning Using Autoencoder for Anomaly Detection in Network Traffic

Nguyen Thanh Van1,2(✉), Le Thanh Sach1, and Tran Ngoc Thinh1

1 Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology, VNUHCM, Ho Chi Minh City, Vietnam
[email protected]
2 Ho Chi Minh City University of Technology and Education, Ho Chi Minh City, Vietnam

Abstract. Anomaly detection in high-volume, fast-arriving network traffic is challenging. Data arrives over time with latent distribution changes, so a single stationary model does not match the streaming data at all times. Therefore, it is necessary to maintain a dynamic system that adapts to changes in the network environment. To solve this problem, supervised learning methods have attracted extensive attention for their ability to detect known attacks as anomalies. However, they require a large amount of labeled training data to train an effective model, which is difficult and expensive to obtain. In this paper, we propose to use an LSTM Auto-encoder to extract temporal features from different sequences of network packets that are unlabeled or partially labeled. After obtaining a good data representation, feed-forward neural networks are applied to identify network traffic anomalies. Our experimental results on the ISCX-IDS-2012 dataset show that our approach achieves high intrusion detection performance with accuracy of approximately 99%.

Keywords: Anomaly detection · Intrusion detection · Auto-encoder · Feature learning

1 Introduction

Network traffic is the amount of network data moving across networks at a given point in time. It can be generated at high volume and can be seen as a large network packet dataset. During network data communication, packets are generated and transferred between host pairs over time. Anomalies are unusual and significant changes in network traffic, which often involve multiple packets. Events that cause network anomalies can include attacks. Examining a single network packet is less effective for detecting attacks, especially attacks spread over many packets, such as APTs (advanced persistent threats) and DDoS, whose time horizons can span from days down to minutes or even seconds. Anomaly detection in network traffic has the advantage of effectively detecting new attacks, but anomalous events


may also be misuse or acceptable deviations from normal network behavior, including normal behavior that is absent from the normal network dataset. In this context, the question is how to properly model all the normal network data: whether the data share a common range of values for a set of features, and what correlation and sequencing exist between them. Supervised learning methods have been used to solve this problem because of their capability to detect known attacks. Nonetheless, they require a large amount of labeled training data to train an effective model, which is difficult and expensive to obtain. Building a database of temporal features in network traffic requires data mining methods such as sequence analysis, association mining, and frequent-episode mining. To obtain time-based features in network traffic, it is necessary to examine the connections in the past few seconds that have the same destination host as the current connection and to calculate statistics related to protocol behavior, service, etc. In practice, new attacks are constantly created and updated with specific changes; therefore, manually updating feature databases against newly generated attack samples becomes increasingly challenging. To reduce feature engineering cost and the need for domain expert knowledge, automatic feature learning is applied for intrusion classification without hand-designed features. In this way, the learned characteristics can generalize and cope with dynamic network environments. Recently, deep learning [1] techniques have shown that they can efficiently learn good feature representations from a large amount of unlabeled data, so the model can be pre-trained in a completely unsupervised fashion. The principle of deep learning is to compute hierarchical features or representations of the observational data, where higher-level features or factors are defined from lower-level ones. The most popular techniques include deep belief networks with restricted Boltzmann machines, auto-encoders, convolutional neural networks, and long short-term memory (LSTM) recurrent neural networks. Among these, LSTM [2] is designed specifically to support sequences of input data. It is capable of learning the complex dynamics within the temporal ordering of input sequences and uses internal memory to remember information across long input sequences. LSTMs can be organized into an architecture called the Encoder-Decoder LSTM [3], which allows the model to support variable-length input sequences and to predict or output variable-length output sequences. An LSTM Auto-encoder is an implementation of an auto-encoder for sequence data using an Encoder-Decoder LSTM architecture. LSTM Auto-encoders can learn a compressed representation of sequence data and have been used on video, text, audio, and time-series data. The simplest LSTM Auto-encoder is one that learns to reconstruct each input sequence. Inspired by the success of auto-encoder-based approaches in a number of feature learning problems and by LSTM's ability to find long temporal relations in its input sequences, our methodology employs an LSTM Auto-encoder to learn temporal features from a sequence of packets; a deep feed-forward neural network is then applied to identify network traffic anomalies. We performed various experiments on the ISCX-IDS-2012 dataset [4], the most widely used standard dataset for the evaluation of anomaly-based intrusion detection.
Experimental results show that our work can produce high accuracy of approximately 99.8% in learning features and a high anomaly detection rate of 96% to 99%. We also compared our work with other existing deep learning-based methods for intrusion detection.


The results show that using feature learning for anomaly detection helps our work achieve high performance.

2 Background and Related Works

In network traffic, events that cause anomalies can include attacks. The detected anomalies are analyzed and classified into network attack types. Anomalies can be categorized in three ways [5]: point anomalies, contextual anomalies, and collective anomalies. These anomaly types have a relationship with attacks in network security, including DoS, Probe, U2R (User to Root), and R2L (Remote to Local). The characteristics of DoS attacks match the collective anomaly. Probe attacks are driven by the specific purpose of gathering information and reconnaissance, so they can be matched to the contextual anomaly. U2R attacks are illegal accesses to the administrator account, exploiting one or several vulnerabilities. R2L attacks gain local access to obtain the privilege of sending packets on the network, with the attacker using trial and error to guess the password. Both U2R and R2L attacks are considered point anomalies. Among the three anomaly types, collective and contextual anomalies are usually related to temporal features. For example, a DoS attack spreads over many packets, and its execution time horizon can span from hours to minutes, in order to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting services of a host. In network traffic data, basic features include network flow, flags, network packet size, source IP, destination IP, protocols, etc. These intrinsic features serve general network analysis purposes and are not specific to intrusion detection. Therefore, it is necessary to create additional features with better discriminative ability than the initial feature set. Time-based statistics can be derived from TCP/IP packet headers to obtain temporal features, which are then used to detect network scans, network worm behavior, and DoS attacks. Various constructed feature sets have been used to identify anomalies [6, 7]. The learned features become increasingly informative for the machine learning task we intend to perform, for example, classification. Machine learning algorithms [8] are applied to detect and distinguish between normal traffic and anomalies or attacks in network traffic, for example, decision trees, k-nearest neighbor (k-NN), naïve Bayes networks, SOM, SVM, and ANN. These supervised learning methods require a large amount of labeled training data to train an effective model, which is difficult and expensive to obtain. Therefore, many researchers have used unsupervised deep learning algorithms, or combined them with supervised learning methods, to learn features from unlabeled data and then create a model with an increased detection rate. Deep learning techniques aim to learn useful feature representations from a large amount of unlabeled data, so the model can be pre-trained in a completely unsupervised fashion. For that reason, deep learning techniques can be adopted as unsupervised feature learning methods that help supervised machine learning improve its performance in identifying network traffic anomalies by increasing anomaly detection accuracy. Deep learning is a class of machine learning techniques that can extract deep features from input data with a deep neural network architecture. The deep


network is typically initialized by unsupervised layer-wise training and then tuned by supervised training with labels, progressively generating more abstract and high-level features layer by layer [9]. The auto-encoder (AE) is a deep learning technique that seeks to learn a compressed representation of an input, and it is used as an automatic feature extraction model. Once the model is fit, the reconstruction aspect of the model can be discarded and the model up to the bottleneck can be used. The output of the model at the bottleneck is a fixed-length vector providing a compressed representation of the input data. Various works have used auto-encoders to learn features in the network security domain [10–12]. The works in [13, 14] used AE-based feature learning for intrusion detection and showed that accuracy improves with AE-generated features compared to the original features on the KDD99 dataset. Thanks to the auto-encoder, useful deep features are obtained that improve the accuracy of anomaly detection in network traffic. However, these features are learned from intrinsic original features intended for general network analysis purposes. Network traffic can be considered a dataset consisting of a large number of packets generated during data transmission over time. When the network dataset is reviewed and analyzed as a time series, we obtain many temporal features in the packet sequences. In this context, we use an LSTM Auto-encoder to learn temporal features in network traffic, and these features are then fed to the anomaly detection system. With this combination of LSTM [2] and auto-encoders, we obtain an approximate representation of network data along with its dependency in the time domain. LSTM Auto-encoders encode the input to a compact value, which can then be decoded to reconstruct the original input. LSTM Auto-encoders are capable of dealing with sequences as input and can naturally take variable-length inputs. In contrast, regular auto-encoders take only fixed-size inputs and fail to generate a sample sequence for a given input distribution in generative mode. One of the early and widely cited applications of the LSTM Auto-encoder was by N. Srivastava et al. in 2015 [15]. They describe the LSTM Auto-encoder as an extension or application of the Encoder-Decoder LSTM and use the model with video input data both to reconstruct sequences of video frames and to predict future frames, both described as unsupervised learning tasks. Other applications of the LSTM Auto-encoder have been demonstrated, not least with sequences of text, audio data [16], and time series [17, 18]. The LSTM Auto-encoder consists of an encoder LSTM and a decoder LSTM, as shown in Fig. 1. The input to the model is a sequence of vectors. After the last input has been read by the encoder LSTM, the decoder LSTM takes over and outputs a prediction for the target sequence. The target sequence is the same as the input sequence, but in reverse order to make the optimization easier. The decoder unrolls the list previously constructed in the encoder, with the hidden-to-output weights extracting the element at the top of the list and the hidden-to-hidden weights extracting the rest of the list. There are two kinds of decoder: conditional and unconditioned. A conditional decoder receives the last generated output frame as input. An unconditioned decoder does not receive that input; this kind is presented in Fig. 1.
Fig. 1. A LSTM auto-encoder structure

With the operation of both the encoder and decoder LSTM, the representation retains information about the appearance of the objects as well as the motion contained in the data sequence; finally, we obtain the learned representations. There are several variants of the LSTM Encoder-Decoder model; since we consider extracting features from time-series data, we apply stacked LSTM Auto-encoders to generate features that can later be fed to our classification algorithm.

3 Proposed Temporal Feature Learning Methodology

Our proposed methodology includes two stages, illustrated in Fig. 2. In the first stage, an LSTM Auto-encoder is used as the baseline unsupervised model to extract features from the sequences of network packets. It consists of several auto-encoders, where the output of each auto-encoder in the current layer is used as the input of the auto-encoder in the next layer. After performing the necessary data processing stages and extracting features automatically from the raw data with stacked LSTM Auto-encoders, good representation features are fed to our classifier. The obtained features represent the basic and complete characteristics of the observed inputs, and they serve as useful input for a supervised predictor or classifier. Since the features have been refined by the LSTM Auto-encoder, high detection accuracy is possible even with simple models. In our proposal, we use a feed-forward neural network to classify network traffic in the second stage. Figure 3 shows the combination of an LSTM Auto-encoder and a feed-forward neural network.

Fig. 2. The main stages of the proposed methodology

Fig. 3. The model with one LSTM Auto-encoder

Every sequence of packets from the network traffic data X is fed to the LSTM Auto-encoder model. Each sequence X_i includes several packets and may contain a different number of packets:

X = \{X_1, X_2, X_3, \ldots, X_i, \ldots, X_n\}, \quad n: \text{number of sequences}
X_i = \{x_{i,1}, x_{i,2}, \ldots, x_{i,j}, \ldots, x_{i,d_i}\}, \quad d_i: \text{number of packets}
x_{i,j} \in \mathbb{R}^{p}, \ \forall j \in \{1, 2, 3, \ldots, d_i\}, \quad p: \text{number of features in a packet}

For each sequence X_i, there exist d_i-dimensional vectors representing the time-series data, where i denotes the data sequence and d_i is the number of time steps for that sequence of packets. The i-th sequence X_i is fed to the LSTM Auto-encoder model through time with input vector x_{i,j} \in \mathbb{R}^{p} (the j-th LSTM block, p features). Then, we initialize the learning parameters (weights and biases) with arbitrary values that will be adjusted through training. The cell states c_{i,j} of the LSTM are computed based on the input vector x_{i,j} and the learning parameter values. With x_{i,j} as the input vector of the i-th sequence at time step j, the equations of the j-th internal LSTM block are as follows:

z_{i,j} = \tanh(W^{z} x_{i,j} + U^{z} h_{i,j-1} + b^{z}) \qquad (1)
i_{i,j} = \mathrm{sigmoid}(W^{i} x_{i,j} + U^{i} h_{i,j-1} + b^{i}) \qquad (2)
f_{i,j} = \mathrm{sigmoid}(W^{f} x_{i,j} + U^{f} h_{i,j-1} + b^{f}) \qquad (3)
c_{i,j} = i_{i,j} \odot z_{i,j} + f_{i,j} \odot c_{i,j-1} \qquad (4)
o_{i,j} = \mathrm{sigmoid}(W^{o} x_{i,j} + U^{o} h_{i,j-1} + b^{o}) \qquad (5)
h_{i,j} = o_{i,j} \odot \tanh(c_{i,j}) \qquad (6)

Here the W^{(\cdot)} are rectangular input weight matrices for the block input (z), input gate (i), forget gate (f), and output gate (o); the U^{(\cdot)} are square recurrent weight matrices for the same components. Two point-wise nonlinear activation functions are used: the logistic sigmoid for the gates and the hyperbolic tangent for the block input and output. Point-wise multiplication of two vectors is denoted by \odot. Now, we extract the sequential information by driving each X_i through the LSTM encoder network. For each X_i, the output is given by

h_{i,j} = f_{enc}(x_{i,j}, h_{i,j-1}) \qquad (7)

where f_{enc} is the function computed by the encoding layer and f_{dec} is the function computed by the decoding layer. After the whole sequence has passed through the LSTM encoder, we get \{h_i\}_{i=1}^{n}. Then we perform a pooling operation, for example mean, max, or last pooling. In our experiment, we use last pooling: h_i = h_{i,d_i}. The LSTM decoder reconstructs the input from h_i; the reconstructed input is then obtained by a function v as follows:

\hat{h}_{i,j} = f_{dec}(h_i, \hat{h}_{i,j-1}) \qquad (8)
\hat{x}_{i,j} = v(\hat{h}_{i,j}) \qquad (9)

where \{\hat{x}_{i,j}\}_{j=1}^{d_i} is the reconstructed input. When retrieving the reconstructed input, we evaluate the mean square loss and then update the corresponding LSTM encoder and decoder parameters accordingly:

\sum_{j=1}^{d_i} \left\| x_{i,j} - \hat{x}_{i,j} \right\|^{2} \qquad (10)
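For illustration, the following NumPy sketch implements one LSTM block step following Eqs. (1)–(6); the weight initialization is arbitrary, as in the text, and the dimensions p and m are illustrative:

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM block step; W, U, b are dicts keyed by 'z', 'i', 'f', 'o'."""
    z = np.tanh(W['z'] @ x + U['z'] @ h_prev + b['z'])   # block input,  Eq. (1)
    i = sigmoid(W['i'] @ x + U['i'] @ h_prev + b['i'])   # input gate,   Eq. (2)
    f = sigmoid(W['f'] @ x + U['f'] @ h_prev + b['f'])   # forget gate,  Eq. (3)
    c = i * z + f * c_prev                               # cell state,   Eq. (4)
    o = sigmoid(W['o'] @ x + U['o'] @ h_prev + b['o'])   # output gate,  Eq. (5)
    h = o * np.tanh(c)                                   # block output, Eq. (6)
    return h, c

p, m = 20, 100                          # features per packet, hidden units
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(m, p)) * 0.1 for k in 'zifo'}
U = {k: rng.normal(size=(m, m)) * 0.1 for k in 'zifo'}
b = {k: np.zeros(m) for k in 'zifo'}
h, c = lstm_step(rng.normal(size=p), np.zeros(m), np.zeros(m), W, U, b)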

The size of each time series is reduced to a constant that depends on the structure of the LSTM Auto-encoder; the encoded feature vector of each sequence can be concatenated and then passed to our classification model. The LSTM Auto-encoders are trained to reconstruct the data sequences by minimizing the reconstruction loss. However, the decoders are only used to find suitable encoding functions to be applied before the classification task. In our experiments, we also added more layers to the proposed model, i.e., 2 or 3 layers each in the encoder and decoder parts, composing a Stacked LSTM Auto-encoder. A Stacked LSTM Auto-encoder is a form of deep neural network (DNN). It is trained by unsupervised greedy layer-wise pre-training before a supervised fine-tuning stage. The strategy of layer-wise unsupervised training followed by supervised fine-tuning allows efficient training of DNNs and gives promising results for challenging learning problems in many applications from diverse domains [17]. In our experiments, we have many layers in the encoder and


decoder parts; our DNN is illustrated in Fig. 4. After training the DNN, we evaluate our classifier in distinguishing normal from anomalous network traffic.

Fig. 4. A deep neural network
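To make the two-stage pipeline concrete, the following minimal Keras sketch builds an LSTM auto-encoder, then reuses its encoder in front of a feed-forward classifier; the sequence length, feature count, and unit sizes are illustrative, not the exact experimental setup:

from tensorflow.keras import layers, models

timesteps, p = 32, 20                        # packets per sequence, features per packet

# Stage 1: LSTM auto-encoder trained unsupervised to reconstruct sequences
inputs = layers.Input(shape=(timesteps, p))
encoded = layers.LSTM(100, return_sequences=True)(inputs)
encoded = layers.LSTM(100)(encoded)          # last pooling: h_i = h_{i,d_i}
decoded = layers.RepeatVector(timesteps)(encoded)
decoded = layers.LSTM(100, return_sequences=True)(decoded)
decoded = layers.TimeDistributed(layers.Dense(p))(decoded)
autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")   # reconstruction loss, cf. Eq. (10)
# autoencoder.fit(X_unlabeled, X_unlabeled, epochs=10, batch_size=128)

# Stage 2: reuse the trained encoder, add feed-forward layers (supervised)
encoder = models.Model(inputs, encoded)      # shares the trained encoder weights
clf_in = layers.Input(shape=(timesteps, p))
features = encoder(clf_in)
x = layers.Dense(64, activation="relu")(features)
out = layers.Dense(1, activation="sigmoid")(x)      # normal vs. anomaly
classifier = models.Model(clf_in, out)
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# classifier.fit(X_labeled, y_labeled, epochs=10, batch_size=128)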

4 Experiments and Results

4.1 Dataset

We used the ISCX-IDS 2012 dataset [4] to evaluate the proposed methodology. The dataset consists of network traces, including full packet payloads in pcap format, which along with the relevant profiles are publicly available to researchers. This dataset contains newer and more modern attacks suited to recent intrusion detection applications, and it is intended to overcome the technical issues of other IDS evaluation datasets. It comprises over two million traffic packets covering seven days of network activity, containing both normal and attack traffic. Four different attack types, referred to as Brute Force SSH, Infiltrating, HTTP DoS, and DDoS, are conducted and logged along with normal traffic on seven successive days. The proportion of normal traffic in ISCX-2012 is about 97.27%, while the percentages of the four attacks are 0.46%, 0.66%, 1.38%, and 0.23%. In our experiments, the distribution of data samples used in training, validation, and testing is presented in Table 1. We assume all attacks are abnormal, so we merge the 4 attack types into the abnormal class. All data is preprocessed to match the proposed methodology.

Table 1. ISCX-IDS 2012 subsets

           | Abnormal | Normal  | Total
Train      | 41,561   | 986,939 | 1,028,500
Validation | 4,226    | 338,674 | 342,900
Test       | 4,226    | 338,674 | 342,900

4.2 Experiment Results and Discussion

We experiment with LSTM Auto-encoder (LSTM_AE) models with different numbers of LSTM units and layers. The processing steps of the proposed methodology are as follows:

• Step 1: Define and train the LSTM_AE on the inputs to learn useful features using unsupervised learning.
• Step 2: Define a combined model with the same encoder function used in the LSTM_AE architecture, followed by fully connected layers in a DNN.
• Step 3: Load the weights of the trained LSTM_AE model (only the encoder part) into the first layers of the new model defined in Step 2.
• Step 4: Train the combined model with all layers trainable, and evaluate the model.

Our experiments run on Ubuntu 18.04.4 LTS 64-bit with TensorFlow [18]. The computer has an Intel Core i7-8700 CPU @ 3.20 GHz with 12 cores and 16 GB of memory. A GeForce GTX GPU is used as an accelerator. In each experiment, we design the model architecture, investigate the number of parameters in each model, and then evaluate the accuracy and execution time. For example, the LSTM_AE(100,100) model has 2 layers with 100 units in each layer; its feature learning stage has 289,115 parameters, while its classification stage has 33,411. Model performance is measured by the accuracy of feature learning and the accuracy of classification. Table 2 shows that LSTM_AE(200) obtains the highest accuracy among the 1-layer models, meaning that the more units (in a layer) a model has, the higher the accuracy it gets. To compare the accuracy of models with different numbers of layers, we choose models with 100 LSTM units per layer: LSTM_AE(100), LSTM_AE(100,100), and LSTM_AE(100,100,100). In general, the more layers and units a model has, the higher its accuracy; in particular, adding layers improves accuracy significantly. We also measure the training time of the feature learning models with different numbers of LSTM units and layers. Figure 5 shows that the more units and layers a model has, the more time it consumes. We also accelerate training on a GPU; our best experiment on the GPU is more than 3x faster than the CPU implementation. Once we obtain good representations of the input, classification requires little additional time. The training time for the classifier is therefore short; we need only the first 10 epochs to get high accuracy and low loss. For example, the LSTM_AE(100,100,100) model reaches training and validation accuracy of about 99%, with training and validation errors of 0.03 and 0.007, respectively. The number of layers in the feed-forward neural network depends on the number of layers in the stacked LSTM_AE. To assess classification performance, we use evaluation metrics for anomaly detection problems: recall, precision, and F1-score for the anomaly class. Precision summarizes the fraction of examples assigned the attack class that actually belong to the attack class. Recall, or sensitivity, refers to the true attack detection rate and summarizes how well attacks are predicted. F1-score combines precision and recall into a single score that balances both concerns. Table 2 shows that the model with good feature learning (90.20%) also achieves good classification (99.89%). Although the F1-score metric


Fig. 5. Training time for feature learning in comparison among models: (a) 1 layer with different units; (b) different numbers of layers, each layer with 100 units

Table 2. Accuracy of the feature learning stage and evaluation metrics of the classification

Models                | Feature learning accuracy | Abnormal classification        | Overall accuracy
                      |                           | Precision | Recall | F1-Score |
LSTM_AE(50)           | 76.43                     | 97.36     | 89.78  | 93.41    | 99.83
LSTM_AE(100)          | 79.58                     | 97.99     | 89.90  | 93.76    | 99.85
LSTM_AE(200)          | 89.70                     | 98.09     | 90.13  | 93.94    | 99.86
LSTM_AE(100,100)      | 85.01                     | 97.89     | 92.10  | 94.90    | 99.88
LSTM_AE(100,100,100)  | 90.20                     | 97.54     | 93.07  | 95.25    | 99.89

for anomaly detection of our best model is 95.25%, the detection rate of the normal class is close to 100% for all models; consequently, the overall accuracy is above 99% in all models. Table 3 compares experimental results on the same ISCX-2012 dataset, where the other authors used different methods and manually designed traffic features to detect intrusions in network traffic. Based on the available experimental results for the compared methods, overall accuracy is used as the evaluation metric. The experimental conditions are similar, yet our implementation outperforms most other methods. The authors in [19] use a sequential LSTM auto-encoder for computer network intrusion detection, but their accuracy is lower than ours. Some works using deep learning methods achieve high performance, such as LSTM [19], RNN-RBM [23], and HAST-IDS [24]. HAST-II and our proposal have the same accuracy, but our best model has a false alarm rate below 0.01 compared to their 0.02.


Table 3. Accuracy comparison with recent works

Methods                        | Training/testing ratio | Accuracy (%)
Deep Auto LSTM [19]            | 80/20                  | 95.12
LSTM [19]                      | 80/20                  | 95.19
Earth Mover's Distance [20]    | (all days)/(1 day)     | 90.12
AMGA2-Naïve Bayes [21]         | 50/50                  | 92.70
Naïve Bayes [21]               | 50/50                  | 98.40
K-Means + Naïve Bayes [22]     | (4 days)/(2 days)      | 99.00
RNN-RBM [23]                   | 75/25                  | 99.73
HAST-I using CNN [24]          | 60/40                  | 99.69
HAST-II using CNN + LSTM [24]  | 60/40                  | 99.89
Our proposed                   | 75/25                  | 99.89

5 Conclusions

In this paper, we use an LSTM Auto-encoder to learn temporal features and then apply fully connected deep feed-forward neural networks to perform classification based on the features learned automatically by the LSTM Auto-encoders. Our proposed methodology efficiently learns good feature representations from a large amount of unlabeled network data, so the model can be pre-trained in a completely unsupervised fashion. The pre-training process allows deep neural networks to be constructed naturally layer by layer; it also provides better network initialization than random parameters. Results show that feature learning helps obtain good anomaly detection performance in network traffic. In future work, we will study other methodologies to obtain richer representations of network traffic in order to improve the anomaly detection rate.

References

1. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521, 436–444 (2015)
2. Greff, K., Srivastava, R.K., Koutník, J., et al.: LSTM: a search space odyssey. IEEE Trans. Neural Netw. Learn. Syst. 28(10), 2222–2232 (2017)
3. Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv:1406.1078 [cs.CL] (2014)
4. Shiravi, A., Shiravi, H., Tavallaee, M., Ghorbani, A.A.: Toward developing a systematic approach to generate benchmark datasets for intrusion detection. Comput. Secur. 31(3), 357–374 (2012)
5. Chandola, V., Banerjee, A., Kumar, V.: Anomaly detection: a survey. ACM Comput. Surv. 41(3), 1–58 (2009)
6. Davis, J.J., Clark, A.J.: Data preprocessing for anomaly based network intrusion detection: a review. Comput. Secur. 30(6–7), 353–375 (2011)
7. Ahmadi, M., Ulyanov, D., Semenov, S., Trofimov, M., Giacinto, G.: Novel feature extraction, selection and fusion for effective malware family classification. In: 6th ACM Conference on Data and Application Security and Privacy (2016)
8. Bhattacharyya, D.K., Kalita, J.K.: Network Anomaly Detection: A Machine Learning Perspective. CRC Press, Taylor & Francis Group (2014)
9. Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H.: Greedy layer-wise training of deep networks. In: Advances in Neural Information Processing Systems 19 (2006)
10. Zhou, G., Sohn, K., Lee, H.: Online incremental feature learning with denoising autoencoders. In: 15th International Conference on Artificial Intelligence and Statistics (AISTATS) (2012)
11. Mirsky, Y., Doitshman, T., Elovici, Y., Shabtai, A.: Kitsune: an ensemble of autoencoders for online NID. In: Network and Distributed Systems Security Symposium (NDSS), San Diego, CA, USA (2018)
12. Niyaz, Q., Sun, W., Javaid, A., Alam, M.: A deep learning approach for NIDS. In: Bio-inspired Information and Communications Technologies (BIONETICS), Brussels, Belgium (2014)
13. Yousefi-Azar, M., Varadharajan, V., Hamey, L., Tupakula, U.: Autoencoder-based feature learning for cyber security applications. In: International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA (2017)
14. Farahnak, F., Heikkonen, J.: A deep auto-encoder based approach for IDS. In: International Conference on Advanced Communications Technology (ICACT), Korea (2018)
15. Srivastava, N., Mansimov, E., Salakhutdinov, R.: Unsupervised learning of video representations using LSTMs. In: 32nd International Conference on Machine Learning, France (2015)
16. Zhang, Z., Ringeval, F., Han, J., Deng, J., Marchi, E., et al.: Facing realism in spontaneous emotion recognition from speech: feature enhancement by autoencoder with LSTM. In: 17th Annual Conference of the International Speech Communication Association, INTERSPEECH, San Francisco, CA, USA (2016)
17. Mehdiyev, N., Lahann, J., Emrich, A., Enke, D., Fettke, P., Loos, P.: Time series classification using deep learning for process planning: a case from the process industry. In: Complex Adaptive Systems Conference with Theme: Engineering Cyber Physical Systems, Chicago, Illinois, USA (2017)
18. Abadi, M., et al.: TensorFlow: large-scale machine learning on heterogeneous distributed systems, March 2016. https://arxiv.org/abs/1603.04467
19. Mirza, A.H., Cosan, S.: Computer network intrusion detection using sequential LSTM neural networks autoencoders. In: 26th Signal Processing and Communications Applications Conference (SIU), Izmir, Turkey (2018)
20. Tan, Z., Jamdagni, A., He, X., Nanda, P., Liu, R., Hu, J.: Detection of DoS attacks based on computer vision techniques. IEEE Trans. Comput. 64(9) (2015)
21. Kumar, G., Kumar, K.: Design of an evolutionary approach for intrusion detection, vol. 2013, p. 14. Hindawi Publishing Corporation (2013)
22. Yassin, W., Udzir, N., Muda, Z., Sulaiman, M.: Anomaly-based intrusion detection through K-means clustering and naïve Bayes classification. In: International Conference on Computing and Informatics (ICOCI), Malaysia (2013)
23. Li, C., Wang, J., Ye, X.: Using a recurrent neural network and restricted Boltzmann machines for malicious traffic detection. NeuroQuantology 16(5), 823–831 (2018)
24. Wang, W., Sheng, Y., Wang, J.: HAST-IDS: learning hierarchical spatial-temporal features using DNNs to improve intrusion detection. IEEE Access 6, 1792–1806 (2018)

A Computer-Aided Detection to Intracranial Hemorrhage by Using Deep Learning: A Case Study

Kien G. Luong1, Hieu N. Duong1(✉), Cong Minh Van2, Thu Hang Ho Thi2, Trong Thy Nguyen3, Nam Thoai1, and Thi T. T. Tran Thi4

1 Ho Chi Minh City University of Technology, VNU, Ho Chi Minh City, Vietnam
[email protected]
2 Vinh Long Department of Health, Vinh Long, Vietnam
3 Vinh Long Hospital, Vinh Long, Vietnam
4 Hoa Sen University, Ho Chi Minh City, Vietnam

Abstract. Intracranial hemorrhage (ICH) is an acute stroke type with a high mortality rate. Timely ICH diagnoses can keep patients out of life-threatening and long-lasting consequences. These diagnoses are typically made by analyzing Computed Tomography (CT) scan images together with the patients' symptoms. Consequently, the expertise of radiologists is significant for detecting signs of ICH and making in-time decisions to save these patients. Over the past few decades, thanks to powerful and modern machine learning algorithms, several computer-aided detection (CAD) systems have been developed. They are becoming an effective part of routine clinical work for detecting critical diseases such as strokes and cancers. In this paper, we introduce a CAD system that combines a deep-learning model and typical image processing techniques to determine whether patients suffer from ICH based on their CT images. The deep-learning model, based on the MobileNetV2 architecture, was trained on the RSNA (Radiological Society of North America) Intracranial Hemorrhage dataset. It was then validated on a dataset of Vietnamese ICH cases collected from Vinh Long Province Hospital, Vietnam. The experiment indicated that the classifier achieved an AUC of 0.991, sensitivity of 0.992, and specificity of 0.807. After the deep-learning model identifies ICH-suspected slices, the Hounsfield Unit (HU) method is employed to highlight specific hemorrhage areas in these slices. DBSCAN is also used to remove noise detected by the HU method.

Keywords: Deep learning · Intracranial hemorrhage · Computer-aided detection · Hounsfield unit

1 Introduction

Intracranial Hemorrhage (ICH) is a type of acute stroke that can spontaneously happen to anyone at any age, especially those who suffer from hypertension. It


is caused by blood leaking from damaged blood vessels, flowing into brain tissues and quickly threatening an individual's life. Vietnam considers ICH one of its top lethal diseases, with approximately 168 cases per 100,000 population in 2008 [1]. Consequently, this has raised demand for on-time detection and treatment to restrict the spread of hematomas within the golden hour. Thus, highly accurate CAD systems are important for radiologists when diagnosing ambiguous cases. The CT technique was invented by Godfrey Hounsfield and Allan Cormack in the early 1970s. It has proven its effectiveness in reading brain images by producing more detail than the X-ray technique. There are only five X-ray density levels to describe all anatomical structures, grouped as metal, bone, soft tissue, fat, and air; this blurs the differences among brain structures, since they have similar X-ray densities. To tackle this challenge, the CT technique utilizes the strength of computers to combine information from multiple X-rays at several different angles and visualize the differences among tissue structures. A noticeable drawback of the CT technique is that radiation exposure carries a small probability of developing cancer. Radiologists, however, can minimize this probability by keeping radiation levels as low as possible. An alternative to CT called MRI (magnetic resonance imaging) was developed as a typical imaging method for medical diagnosis. The method leverages strong magnetic fields and radio waves to capture the body's inner structures. Despite several medical imaging advantages, such as excellent image quality for detecting subtle abnormalities in organs and zero risk of radiation exposure, MRI still has a few disadvantages, such as high cost, long scan times, and discomfort and claustrophobia due to the enclosed space [2]. Thus, the CT technique, which is faster and cheaper, is more appropriate for emergency cases than MRI. The Hounsfield Unit (HU) is a widely used unit for reading and understanding CT images. It is easily computed from data in the CT images: the HU values of an image are calculated from its pixel intensities, rescale intercept, and rescale slope. This technique classifies every pixel in a CT image into the most relevant tissue type. However, it does not by itself reveal any signs of illness, which must be interpreted by radiologists instead. As a result, several techniques, such as image processing, statistics, and machine learning, are commonly employed to support radiologists in detecting unclear symptoms of serious diseases [3,4]. In the past few years, deep learning has emerged and been successfully applied to many practical fields, such as traffic management, social networks, and the medical field [5,6]. Detecting breast cancer, for instance, which was a diagnostic challenge, has been gradually addressed with significantly higher sensitivity and specificity [3,4]. Keeping in mind that even super-powerful CADs never fully replace radiologists, they should instead serve as assistants for experts in decision-making. In this study, we developed a binary classifier to determine the presence of brain hemorrhage in CT images by applying multiple techniques, including deep learning, HU, and data clustering. As deep learning techniques can harness the power of high-performance computing (HPC) systems, they have


become increasingly attractive in the field of image processing. In particular, deep-learning models have won numerous contests in pattern recognition and machine learning [7]. To use these models successfully, they must be fed with a huge amount of data. However, due to the lack of local training data, we use a large dataset provided by the Radiological Society of North America (RSNA) in this study to train the deep model. The model was then tested on a small dataset collected at Vinh Long Hospital, Vinh Long Province, Vietnam. Besides, to further enhance the utility of the study, we use the HU technique to highlight ICH-suspected regions after removing noise with a clustering method called DBSCAN.
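As background for the HU-based steps below, a minimal sketch of the standard HU computation from DICOM attributes using pydicom might look as follows (the file name is a placeholder):

import pydicom

ds = pydicom.dcmread("slice.dcm")   # placeholder path to one CT slice
# Linear rescaling from stored pixel values to Hounsfield Units
hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
# For the RSNA files described in Sect. 3.2, slope = 1 and intercept = -1024,
# so a stored value of 1024 maps to 0 HU (water).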

2 Related Works

Acute hemorrhage detection has been investigated over the last few decades; notable methods include MRI T2 gradient-recalled echo imaging (GREI) [8] and histogram- and statistics-based methods [9–11]. The method proposed by Gong et al. [12] is a typical approach. They applied image processing techniques to classify abnormal regions in 2D brain CT images in 3 phases: pre-processing, feature extraction, and classification. In the pre-processing phase, they first convert the image into JPEG format and then segment the region inside the skull, called the "internal region". Afterward, they make the images as clear as possible by removing the redundant parts from the JPEG image. Gray matter and noise are also cleared to generate a binary mask at the end of this phase. Next, in the feature extraction phase, the points in the binary mask are divided into groups, and a feature vector is created for each group by calculating features such as area and major axis length. These feature vectors are fed to a decision tree that classifies each suspected group in the last phase. Their results are promising, detecting abnormal cases with a 93% average accuracy. Thanks to the rapid development of computational hardware, deep learning techniques, which require modern and powerful systems to prove their effectiveness, have been widely used in many artificial intelligence tasks around the world. Previous methods, which rely on knowledge-based feature thresholds, are inaccurate for NCCT images, which are low-detail (resolution approximately 500x500), grayscale, and easily affected by noise (artifacts). Therefore, brain hemorrhage classification employing traditional methods alone usually gives unstable results even with carefully pre-processed features. Thus, the deep convolutional neural network (DCNN), an essential part of deep learning techniques, has been variously applied to NCCT images to learn the patterns of the specific data [13,14]. Recent DCNN approaches for medical images are usually grouped by two typical data representations, i.e., 2-dimensional [14,15] and 3-dimensional [13,16]. To be more specific, a CT scan viewed as many 2-dimensional images is a sequence of slices, so researchers work not only with picture-style CT data but also with voxels.

3 Methodology

3.1 System Overview

Figure 1 illustrates the workflow of our CAD system, which includes two processes: ICH classification and suspected-area highlighting. First, ICH classification, which is based on deep learning methods, predicts the probability that each slice contains ICH. Then, if the diagnosis result is positive, multiple traditional processing methods are applied to highlight risk areas and report to the people responsible for treatment.

Fig. 1. Our system overview. It consists of two main tasks: binary ICH classification and ICH-suspected area highlighting. Finally, results from these tasks are combined into an image that visualizes ICH-suspected regions.

3.2 Data Acquisition

In this study, we used the dataset provided by the Radiological Society of North America (RSNA) for research purposes. The dataset was collected by the RSNA in collaboration with the American Society of Neuroradiology and MD.ai, and it was then prepared for a Kaggle competition to provide a database for medical experts researching novel treatments. In detail, it contains DICOM files stored as anonymized data to secure the personal information of patients. In this dataset, each DICOM file contains a 2-dimensional axial image capturing a specific area inside a patient's brain. Additionally, all DICOM files share the same values for the columns, rows, rescale slope, and rescale intercept attributes: 512, 512, 1, and −1024, respectively. The dataset was divided into two parts, the first containing 752,803 labeled images while the images in the other part were unlabeled. Each labeled file carries an annotation identified by medical experts, indicating the presence and type of acute intracranial hemorrhage. In this study, we randomly split the labeled data into two subsets containing 602,242 and 150,561 images for the training and validation tasks, respectively. Although ICH type names are also provided with each image, this study considers only positive or negative identifications. The principal purpose


of this study is to identify whether a case suffers from ICH and to highlight ICH areas within the image. Besides, we use a local dataset collected at Vinh Long Hospital, Vinh Long Province, Vietnam, to examine the effectiveness of the deep model trained on the RSNA dataset. This local dataset is quite small, consisting of 20 studies with a total of 304 slices, and it was reviewed by two radiologists working at Vinh Long Hospital. To secure patient privacy, all studies are anonymized and formatted the same as the RSNA dataset. Table 1 shows that the training and validating sets are imbalanced in the number of negative and positive cases: the training set contains only 86,285 positive slices against 516,747 negative ones, so the number of slices with ICH is significantly smaller. To address this, we increase the number of positive cases and reduce the number of negative samples; during training, the proportion of the two classes in each batch is set to be equal or nearly equal.

Table 1. Number of positive and negative cases in each dataset

           | Positive | Negative | Total
Training   | 86,285   | 516,747  | 603,032
Validating | 21,648   | 128,123  | 149,771
Testing    | 128      | 176      | 304

3.3 Deep Convolutional Neural Network Architecture

Recently, convolutional neural networks (CNNs) have been used ubiquitously in practice, especially for image-related tasks, and give outstanding performance in learning complex visual patterns. The CNN's performance has been proven in theory and practice and continuously improved by refactoring architectures [17–20] and internal techniques [21,22]. Furthermore, a model well trained on large datasets can be fine-tuned on other datasets and provide better performance. Thus, this study uses the pre-trained MobileNetV2 [20], published on the Torchvision hub after being trained on the large-scale ImageNet dataset (Fig. 2). According to Sandler et al. [20], the Inverted Residual Block, consisting of depthwise separable convolutions and a linear bottleneck, significantly enhances the performance of neural frameworks by trading off accuracy against system load. Although the accuracy of this model declines slightly compared with a few larger models, the reduction in parameters gives it a noticeable advantage. The design of MobileNetV2 is optimized to work mainly on CPUs, so it is suitable for low-cost computers without a GPU.
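A minimal PyTorch sketch of this adaptation might look as follows; the binary head replacement and hyperparameters are illustrative, not the paper's exact configuration:

import torch
import torch.nn as nn
from torchvision import models

# Load the ImageNet-pretrained MobileNetV2 from the Torchvision hub
model = models.mobilenet_v2(pretrained=True)
# Replace the 1000-class ImageNet head with a single logit: ICH vs. non-ICH
model.classifier[1] = nn.Linear(model.last_channel, 1)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)   # small fine-tuning LR, cf. Sect. 4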

Fig. 2. Inverted Residual Block proposed in MobileNetV2 architecture [20]

3.4 Data Pre-processing

To enrich the information in the DICOM files, we used a typical medical image pre-processing method called multi-window conversion. It was first introduced by Lee et al. [14] and inspired by a technique called windowing. The large range of HU values in DICOM files (16-bit unsigned) makes it challenging for radiologists to view and understand all of the key information. To tackle this issue, radiologists convert and view CT scans through several specific ranges of HU values by employing the windowing technique. Windowing involves two main definitions, namely Window Level (WL) and Window Width (WW). WW is the length of a specific range, calculated by subtracting the minimum from the maximum HU value of the range; WL is the median value of the range. Windowing follows Eq. (1). Different window settings highlight different head structures, so stacking images from three different windows produces an RGB color image. Three window settings are used in this research: brain window (WL: 40, WW: 80), subdural window (WL: 80, WW: 200), and bone window (WL: 40, WW: 400).

p_w(x, y) = \begin{cases} 0 & p(x, y) \le L - \frac{W}{2} \\ I_{max} \, \dfrac{p(x, y) - (L - \frac{W}{2})}{W} & L - \frac{W}{2} < p(x, y) \le L + \frac{W}{2} \\ I_{max} & p(x, y) > L + \frac{W}{2} \end{cases} \qquad (1)

where (x, y) is the coordinate of a point, p(x, y) is the original HU value, and p_w(x, y) is the pixel value displayed after mapping to the 8-bit gray-scale range (0–255). L and W are the Window Level and Window Width, respectively. Finally, I_max is the maximum value, usually 255 (Fig. 3).
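A minimal sketch of Eq. (1) and the multi-window stacking might look as follows; the clipping covers the two boundary cases of Eq. (1):

import numpy as np

def window(hu, level, width, i_max=255):
    low = level - width / 2
    out = (hu - low) / width * i_max        # linear mapping inside the window
    return np.clip(out, 0, i_max).astype(np.uint8)   # saturate below/above the window

def multi_window(hu):
    brain    = window(hu, level=40, width=80)
    subdural = window(hu, level=80, width=200)
    bone     = window(hu, level=40, width=400)
    return np.stack([brain, subdural, bone], axis=-1)  # H x W x 3 RGB image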

Fig. 3. Multi-window image [14] stacked from three types of window images: brain window (WL: 40, WW: 80), subdural window (WL: 80, WW: 200), and bone window (WL: 40, WW: 400)

3.5 Highlighting ICH-Suspected Areas

Figure 4 illustrates the workflow for highlighting ICH-suspected areas in a slice after ICH presence has been identified. The workflow consists of three linear steps: 1) removing the brain's skull from the image; 2) applying the DBSCAN algorithm to remove noise; and 3) visualizing the results with colors.

Fig. 4. Highlighting process with brain extraction and DBSCAN: remove skull with FSL's BET → apply DBSCAN → highlight ICH-suspected regions

In the first step, we pre-process images by removing the skull and outside-skull areas, whose pixels are insignificant for ICH detection. The technique used in this step is FSL's Brain Extraction Tool (BET) [23, 24]. It is effective for skull stripping, but it was mainly developed for MRI. Figure 5 illustrates the result of removing the skull structure in a CT image by BET without any preparation steps. White-colored regions in the image represent the cranial structure, and the whiter these regions are, the higher their density values. By contrast, bone regions in MRI head images are very dark, because bone emits almost no recordable nuclear magnetic resonance signal. As a result, BET cannot extract the skull structure in CT head images, since it is optimized for MRI. Owing to the large difference in radiodensity between bone and other substances in CT images, we imitated the skull characteristics of MRI by blackening high-value pixels before using BET. Eq. (2) depicts this change of pixel HU values; the raw image was changed as in Fig. 6a. After this preparation step, the bone structure becomes dark, quite similar to the bone structure in MRI images. Then, we used BET and obtained the desired result, as shown in Fig. 6b.

$$p_b(x,y) = \begin{cases} p(x,y) & p(x,y) \le 100 \\ p_{min} & p(x,y) > 100 \end{cases} \quad (2)$$

where $p_{min}$ denotes a low HU value used to blacken the bone regions.

Fig. 5. Result of using BET on a CT image without any preparation steps: (a) shows a raw CT image; (b) shows its result after using BET.

After removing the skull and some redundant areas from the head images, all pixels having HU in the range from 40 to 90 are identified. Due to the characteristics of NCCT images, outer-border brain tissue regions are normally affected by the high radiodensity of the bone. A clustering technique called DBSCAN is employed to resolve this problem: it removes outliers (artifacts) and keeps ICH-suspected pixels only. However, the output of DBSCAN still contains outliers in the areas around the head bones, as depicted in Fig. 7a. We therefore filter out the remaining noise clusters whose standard deviations are lower than a defined threshold. The results described in Fig. 7b were obtained after removing outliers with DBSCAN and applying the standard deviation threshold.
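The following sketch illustrates this noise-removal step with scikit-learn's DBSCAN; the clustering parameters and the standard deviation threshold are assumptions, as the paper does not report them.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def ich_suspected_mask(hu, brain_mask, eps=2.0, min_samples=10, std_thresh=3.0):
    # Candidate ICH pixels: inside the extracted brain with HU in [40, 90]
    ys, xs = np.where(brain_mask & (hu >= 40) & (hu <= 90))
    pts = np.column_stack([ys, xs])
    keep = np.zeros_like(brain_mask, dtype=bool)
    if len(pts) == 0:
        return keep
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    for lbl in set(labels) - {-1}:            # label -1 marks DBSCAN outliers
        cluster = pts[labels == lbl]
        # Discard remaining noise clusters whose spatial spread is too small
        if cluster.std(axis=0).mean() >= std_thresh:
            keep[cluster[:, 0], cluster[:, 1]] = True
    return keep
```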


Fig. 6. Result of using BET on an altered CT image: (a) is the result of applying Eq. (2) to modify the raw CT image (Fig. 5a), and (b) shows the final result.

Fig. 7. Results of the noise removing process: (a) shows the ICH-suspected areas after DBSCAN, and (b) shows the image after applying the standard deviation threshold.

4 Experiment

The pre-trained MobileNetV2 model was used for the ICH binary classification task. Because this model had been pre-trained on the ImageNet dataset, the training data were normalized with the mean and standard deviation of ImageNet to facilitate convergence. We also randomly augmented the input images with a few techniques consisting of flipping, shifting, scaling, and rotating. A small learning rate, approximately 1e−5, was used for the fine-tuning task; with a high learning rate, e.g., 1e−2, the model easily suffers from exploding gradients. Before training, the inputs were resized to 224 × 224 to make them consistent with the model architecture.

Table 2. Model performance on each dataset

             AUC    Sensitivity  Specificity
Validating   0.969  0.901        0.918
Testing      0.991  0.992        0.807
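As an illustration of this training setup, a Torchvision transform pipeline along the lines described above might look as follows; the exact augmentation magnitudes are assumptions.

```python
from torchvision import transforms

train_tf = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),                 # match the model input size
    transforms.RandomHorizontalFlip(),
    transforms.RandomAffine(degrees=15,            # rotation (assumed magnitude)
                            translate=(0.1, 0.1),  # shift
                            scale=(0.9, 1.1)),     # scaling
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
```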


Fig. 8. ROC curves of the model: it achieved AUCs of 0.969 and 0.991 on the validating set (a) and the testing set (b), respectively.

The model was trained on a system equipped with a Tesla T4 GPU to accelerate the training process. The model converged within six training epochs, all with a batch size of 128. Experimental results are described in Table 2 and Fig. 8. After training, the model was tested on the testing dataset collected at Vinh Long Hospital. Results on the validating set show that sensitivity is nearly equal to specificity, with values of 0.901 and 0.918, respectively. If the model is trained beyond six epochs, it gains a higher specificity, but the sensitivity decreases significantly; balancing sensitivity and specificity is an important consideration in treatment. The model gave a high performance on the testing set, with sensitivity and specificity of 0.992 and 0.807, respectively.

5 Conclusion

The experiment reveals that lightweight pre-trained deep models such as MobileNetV2 can give considerable results in brain hemorrhage detection. Besides, the brain images of international patients are effective for building CADs applied to local patient treatment, for instance at Vinh Long Hospital. Moreover, DBSCAN and the standard deviation can be utilized to pre-process and post-process brain CT images, especially for hemorrhage segmentation. However, due to the small size of the testing dataset, the effectiveness of the model remains uncertain and needs to be tested on larger datasets; we plan to address data collection in future work.

Acknowledgement. The authors would like to thank the doctors working at the Para-Clinical Department, Vinh Long Hospital, Vietnam, who provided expertise and imagery data and greatly assisted the study. This study was funded by the Vinh Long Department of Science and Technology, under contract number 03/HÐ-2019.


References 1. Phong, T.D., Duong, H.N., Nguyen, H.T., Trong, N.T., Nguyen, V.H., Van Hoa, T., Snasel, V.: Brain hemorrhage diagnosis by using deep learning. In: Proceedings of the 2017 International Conference on Machine Learning and Soft Computing, ICMLSC 2017, pp. 34–39. Association for Computing Machinery, New York (2017). https://doi.org/10.1145/3036290.3036326 2. Munn, Z., Moola, S., Lisy, K., Riitano, D., Murphy, F.: Claustrophobia in magnetic resonance imaging: a systematic review and meta-analysis. Radiography 21(2), e59–e63 (2015) 3. Shen, L., Margolies, L.R., Rothstein, J.H., Fluder, E., McBride, R., Sieh, W.: Deep learning to improve breast cancer detection on screening mammography. Sci. Rep. 9(1), 1–12 (2019) 4. Wu, N., Phang, J., Park, J., Shen, Y., Huang, Z., Zorin, M., Jastrzębski, S., Févry, T., Katsnelson, J., Kim, E., et al.: Deep neural networks improve radiologists’ performance in breast cancer screening. IEEE Trans. Med. Imaging 39(4), 1184– 1194 (2019) 5. Polson, N., Sokolov, V.: Deep learning for short-term traffic flow prediction. Transp. Res. Part C: Emerg. Technol. 79, 1–17 (2017) 6. Ker, J., Wang, L., Rao, J., Lim, T.: Deep learning applications in medical image analysis. IEEE Access 6, 9375–9389 (2018) 7. Schmidhuber, J.: Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015) 8. Akter, M., Hirai, T., Hiai, Y., Kitajima, M., Komi, M., Murakami, R., Fukuoka, H., Sasao, A., Toya, R., Haacke, E.M., et al.: Detection of hemorrhagic hypointense foci in the brain on susceptibility-weighted imaging: clinical and phantom studies. Acad. Radiol. 14(9), 1011–1019 (2007) 9. Imielinska, C., Liu, X., Rosiene, J., Sughrue, M., Komotar, R., Mocco, J., Ransom, E., Lignelli, A., Zacharia, B., Connolly, E., D’Ambrosio, A.: Toward objective quantification of perfusion-weighted computed tomography in subarachnoid hemorrhage: quantification of symmetry and automated delineation of vascular territories1. Acad. Radiol. 12, 874–87 (2005) 10. Phan, T.G., Donnan, G.A., Koga, M., Mitchell, L.A., Molan, M., Fitt, G., Chong, W., Holt, M., Reutens, D.C.: The aspects template is weighted in favor of the striatocapsular region. Neuroimage 31(2), 477–481 (2006) 11. Chan, T.: Computer aided detection of small acute intracranial hemorrhage on computer tomography of brain. Comput. Med. Imaging Graph. 31(4–5), 285–298 (2007) 12. Gong, T., Liu, R., Tan, C.L., Farzad, N., Lee, C.K., Pang, B.C., Tian, Q., Tang, S., Zhang, Z.: Classification of CT brain images of head trauma. In: IAPR International Workshop on Pattern Recognition in Bioinformatics, pp. 401–408. Springer, Cham (2007) 13. Trivizakis, E., Manikis, G., Nikiforaki, K., Drevelegas, K., Constantinides, M., Drevelegas, A., Marias, K.: Extending 2-D convolutional neural networks to 3-D for advancing deep learning cancer classification with application to MRI liver tumor differentiation. IEEE J. Biomed. Health Inf. 23(3), 923–930 (2018) 14. Lee, H., Yune, S., Mansouri, M., Kim, M., Tajmir, S.H., Guerrier, C.E., Ebert, S.A., Pomerantz, S.R., Romero, J.M., Kamalian, S., et al.: An explainable deeplearning algorithm for the detection of acute intracranial haemorrhage from small datasets. Nat. Biomed. Eng. 3(3), 173 (2019)


15. Chawla, M., Sharma, S., Sivaswamy, J., Kishore, L.: A method for automatic detection and classification of stroke from brain CT images. In: 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 3581–3584. IEEE (2009) 16. Jnawali, K., Arbabshirani, M.R., Rao, N., Patel, A.A.: Deep 3D convolution neural network for CT brain hemorrhage classification. In: Petrick, N., Mori, K. (eds.) Medical Imaging 2018: Computer-Aided Diagnosis, vol. 10575, pp. 307 – 313. International Society for Optics and Photonics, SPIE (2018). https://doi.org/10.1117/ 12.2293725 17. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) 18. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556 (2014) 19. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2818–2826 (2016) 20. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C.: Mobilenetv2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520 (2018) 21. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014) 22. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd International Conference on International Conference on Machine Learning, ICML 2015, vol. 37, pp. 448–456. JMLR.org (2015) 23. Smith, S.M.: Fast robust automated brain extraction. Hum. Brain Mapp. 17(3), 143–155 (2002) 24. Jenkinson, M., Pechaud, M., Smith, S., et al.: BET2: MR-based estimation of brain, skull and scalp surfaces. In: Eleventh Annual Meeting of the Organization for Human Brain Mapping, Toronto, vol. 17, p. 167 (2005)

A Novel Security Solution for Decentralized Web Systems with Real Time Hot-IPs Detection

Tam T. Huynh¹, Chinh N. Huynh², and Thuc D. Nguyen³

¹ Posts and Telecommunications Institute, HCMC, Ho Chi Minh City, Vietnam, [email protected]
² University of Technology and Education, HCMC, Ho Chi Minh City, Vietnam, [email protected]
³ University of Science, VNU-HCMC, Ho Chi Minh City, Vietnam, [email protected]

Abstract. The decentralized web is currently a new trend in web technology. It operates on a peer-to-peer network in which nodes communicate directly with each other without any central system; some nodes act as gateways of the network, while other nodes are used for data synchronization. However, this architecture may contain malicious nodes that can perform security attacks against other nodes on the network. Hence, guaranteeing the availability of the main nodes plays an important role in the system. In this paper, we use the InterPlanetary File System (IPFS) platform to build a decentralized web system, and we propose a novel security solution that detects Hot-IPs based on non-adaptive group testing at the gateway and storage miner nodes of the system. Hot-IPs are hosts that appear with high frequency in the network within a short period of time, caused by threats such as (distributed) denial-of-service attacks or scanning Internet worms. Detecting Hot-IPs in real time is the first important step in assisting administrators to select security policies for the system.

Keywords: IPFS · Hot-IP · Denial-of-service attack · Internet worm · Non-adaptive group testing

1 Introduction

Nowadays, almost all websites are deployed on the client/server model [1], where a centralized system is used for hosting websites and users can use a web browser application to send requests to the web server. However, the centralized architecture has some disadvantages, such as data security and availability [2]. To solve these problems, Dias et al. [3] used IPFS to host websites and form decentralized websites, and a decentralized framework for web hosting with IPFS and Blockchain was proposed by Huynh et al. [4]. On the IPFS network, there are two important types of nodes: the cluster node is responsible for maintaining data for the network, and the gateway node is used to distribute data to users. Besides, the variety of devices joining the IPFS network also increases security risks. Therefore, the early detection of


security threats, especially denial-of-service and distributed denial-of-service (DoS/DDoS) attacks or scanning Internet worms, is an urgent issue for guaranteeing the availability and stability of the network. In DoS/DDoS attacks, attackers send a very large number of packets to victims in a very short time, aiming to make a service unavailable to legitimate clients. Internet worms propagate very fast by scanning networks for vulnerable hosts. The problem is how to quickly detect attackers and victims of denial-of-service attacks, as well as sources of worm propagation, on the IPFS network. Based on these results, administrators can quickly devise a solution to prevent them or redirect attacks. In order to decrease these risks for the decentralized web system, we monitor the IP addresses in IPFS messages. In a DoS/DDoS attack, attackers send a lot of traffic to a destination node in a short time; on the other hand, if a lot of traffic comes from the same source IP address, that node may be infected by a (scanning) worm. Therefore, identifying sources or victims of DoS/DDoS attacks and Internet worms can be modeled as detecting Hot-IPs.

Our Contribution: In this paper, we introduce a new algorithm that can be used to detect and prevent Hot-IPs in networks. The algorithm is based on non-adaptive group testing. Traditional non-adaptive group testing algorithms depend on the value d of the d-disjunct matrix; our algorithm can extend the d value of d-disjunct matrices by using a Hot-List containing candidate Hot-IPs. Besides, we also deploy the algorithm to protect the decentralized web system. The rest of this paper is organized as follows: Sect. 2 presents preliminaries, Sect. 3 describes our solution, and Sect. 4 concludes the paper.

2 Preliminaries

2.1 Hot-IP

An IP address is used to identify a host in a network, and every packet has an IP header that contains the source and destination IP addresses. An IP packet stream is a sequence of IP packets $a_1, a_2, \ldots, a_m$ on a link; every packet $a_i$ carries an IP address $s_i$ ($s_i$ can be a source address or a destination address, depending on the particular application). Hot-IPs in an IP packet stream are hosts that appear with high frequency. Given an IP packet stream $S = (IP_1, IP_2, \ldots, IP_m)$ containing $N$ distinct IPs $(0 \le N \le m)$, let $f_i = |\{\, j : IP_j = IP_i,\ 1 \le j \le m \,\}|$ for $1 \le i \le N$, so that $f_1 + \ldots + f_N = m$. Given a threshold $\theta$,
$$\text{Hot-IP} = \{\, IP_i \in S \mid f_i \ge \theta m,\ 0 \le \theta \le 1 \,\}.$$
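As a toy illustration of this definition (names are ours), the Hot-IPs of a small stream can be computed directly by counting:

```python
from collections import Counter

def hot_ips(stream, theta):
    # Return the IPs whose frequency f_i >= theta * m
    m = len(stream)
    return {ip for ip, f in Counter(stream).items() if f >= theta * m}

# Example: hot_ips(["10.0.0.1", "10.0.0.1", "10.0.0.2"], theta=0.5) -> {"10.0.0.1"}
```

Of course, such per-IP counting is exactly what the group testing approach below avoids when N is large.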

2.2 d-Disjunct Matrix

A binary matrix $M \in \{0,1\}^{t \times N}$ is called a $d$-disjunct matrix if and only if the union (Boolean sum) of any $d$ columns does not contain any other column.


There are three methods to construct d-disjunct matrices [5–7]: the greedy algorithm, probabilistic construction, and concatenated codes. With the first two methods, the matrices must be stored while the program is running; this consumes much RAM, because d-disjunct matrices are often large for the great number of items in high-speed networks. With the concatenated-codes method [7], we can create any column of the matrix on demand. Hence, this paper considers the non-random construction of d-disjunct matrices. A non-random d-disjunct matrix is built from concatenated codes, concatenating a Reed–Solomon code with an identity code, as detailed below.

• Reed–Solomon (RS): Given a message $\bar{m} = (\bar{m}_0, \ldots, \bar{m}_{k-1}) \in F_q^k$, let $P_{\bar{m}}(X) = \bar{m}_0 + \bar{m}_1 X + \ldots + \bar{m}_{k-1} X^{k-1}$ be the associated polynomial of degree at most $k-1$. The RS code $[n,k]_q$ with $k \le n \le q$ is the mapping $RS: F_q^k \to F_q^n$ defined by
$$RS(\bar{m}) = (P_{\bar{m}}(a_1), \ldots, P_{\bar{m}}(a_n))$$
where $a_1, \ldots, a_n$ are any $n$ distinct elements of the field $F_q$ [8]. It is well known that any polynomial of degree at most $k-1$ over $F_q$ has at most $k-1$ roots. Hence, for any $\bar{m} \ne \bar{m}'$, the Hamming distance between $RS(\bar{m})$ and $RS(\bar{m}')$ is at least $d = n - k + 1$; therefore, the RS code is an $[n, k, n-k+1]_q$ code [8].

• Code concatenation: Let $C_{out}$ be an $(n_1, k_1)_q$ code with $q = 2^{k_2}$ (the outer code), and let $C_{in}$ be an $(n_2, k_2)_2$ binary code. Given $n_1$ arbitrary $(n_2, k_2)_2$ codes, denoted $C_{in}^1, \ldots, C_{in}^{n_1}$ (i.e., for all $i \in [n_1]$, $C_{in}^i$ is a mapping from $F_2^{k_2}$ to $F_2^{n_2}$), the concatenated code $C = C_{out} \circ (C_{in}^1, \ldots, C_{in}^{n_1})$ is an $(n_1 n_2, k_1 k_2)_2$ code defined as follows [9]: given a message $\bar{m} \in (F_2^{k_2})^{k_1}$, let $(x_1, \ldots, x_{n_1}) = C_{out}(\bar{m})$ with $x_i \in F_2^{k_2}$; then $C_{out} \circ (C_{in}^1, \ldots, C_{in}^{n_1})(\bar{m}) = (C_{in}^1(x_1), \ldots, C_{in}^{n_1}(x_{n_1}))$. In other words, $C$ is constructed by replacing each symbol of $C_{out}$ by a codeword of $C_{in}$.

In our solution, we choose $C_{out}$ to be a $[q-1, k]_q$ RS code and $C_{in}$ to be the identity matrix $I_q$. The disjunct matrix $M$ is obtained from $C_{out} \circ C_{in}$ by putting all $N = q^k$ codewords as columns of the matrix. According to [6], given $d$ and $N$, if we choose $q = O(d \log N)$ and $k = O(\log N)$, the resulting matrix $M$ is a $t \times N$ $d$-disjunct matrix with $t = O(d^2 \log^2 N)$. With this construction, all columns of $M$ have Hamming weight equal to $q = O(d \log N)$.


Here is an example of a matrix constructed by concatenated codes, with $q = 3$ and $k = 2$:

$$C_{out}: \begin{bmatrix} 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 \\ 0 & 1 & 2 & 1 & 2 & 0 & 2 & 0 & 1 \\ 0 & 1 & 2 & 2 & 0 & 1 & 1 & 2 & 0 \end{bmatrix} \qquad C_{in}: \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

$$C_{out} \circ C_{in}: \begin{bmatrix} 1&0&0&1&0&0&1&0&0 \\ 0&1&0&0&1&0&0&1&0 \\ 0&0&1&0&0&1&0&0&1 \\ 1&0&0&0&0&1&0&1&0 \\ 0&1&0&1&0&0&0&0&1 \\ 0&0&1&0&1&0&1&0&0 \\ 1&0&0&0&1&0&0&0&1 \\ 0&1&0&0&0&1&1&0&0 \\ 0&0&1&1&0&0&0&1&0 \end{bmatrix}$$
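The construction can be sketched in a few lines of Python; this version assumes a prime q (so that arithmetic modulo q realizes F_q) and reproduces the 9 × 9 example above up to column ordering.

```python
import itertools
import numpy as np

def ks_matrix(q, k):
    """Kautz-Singleton style matrix: an RS code of length q over F_q (q prime),
    concatenated with the identity code I_q; returns a (q*q) x (q**k) matrix."""
    columns = []
    for msg in itertools.product(range(q), repeat=k):
        # RS codeword: evaluate the message polynomial at the points 0..q-1 (mod q)
        codeword = [sum(c * pow(a, i, q) for i, c in enumerate(msg)) % q
                    for a in range(q)]
        col = np.zeros(q * q, dtype=np.uint8)
        for block, sym in enumerate(codeword):   # replace each symbol by e_sym
            col[block * q + sym] = 1
        columns.append(col)
    return np.stack(columns, axis=1)

M = ks_matrix(q=3, k=2)   # a 9 x 9 binary matrix, N = 3**2 columns
```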

2.3 Non-adaptive Group Testing

Preventing infectious diseases like syphilis was a major challenge for the US Army in World War II, and a lot of money and time was spent on testing infected people. The problem is how to detect infected persons as quickly as possible. To solve this problem, Robert Dorfman [10] proposed a solution called group testing: take N blood samples from N citizens, combine them into groups, and test the groups, which helps detect infected people with as few tests as possible. Group testing is an applied mathematical theory used in many different fields [7, 11, 12]. Its main goal is to determine the set of defective items in a large population of items using as few tests as possible. There are two types of group testing [13]: adaptive group testing and non-adaptive group testing (NAGT). In adaptive group testing, later stages are designed depending on the test results of the earlier stages; in non-adaptive group testing, all tests must be specified without knowing the results of the other tests. Many applications, such as data streams, require NAGT, in which all tests are performed at once: the result of one test cannot be used to adaptively design another test. Therefore, in this paper we only consider NAGT.

NAGT can be represented by a matrix $M \in \{0,1\}^{t \times N}$, where the columns of the matrix correspond to items and the rows correspond to tests; $m_{ij} = 1$ means that the $j$th item belongs to the $i$th test. We assume that at most $d$ items are defective. It is well known that if $M$ is a $d$-disjunct matrix, we can identify all of the at most $d$ defectives. We use non-adaptive group testing for real-time traffic, so the Hot-IP algorithm can be implemented in one pass; with adaptive group testing, the algorithm would no longer be one pass. We can represent each counter in $O(\log n + \log m)$ bits, which means we need $O((\log n + \log m)\,t)$ bits to maintain the counters. With $t = O(d^2 \log^2 N)$ and $d = O(\log N)$, the total space needed to maintain the counters is $O(\log^4 N\,(\log N + \log m))$. The $d$-disjunct matrix is built from concatenated codes and any column can be created as needed, so we do not need to store the matrix $M$; since the Reed–Solomon code is strongly explicit, the $d$-disjunct matrix is strongly explicit. The $d$-disjunct matrix $M$ is established by the concatenated code $C = C_{out} \circ C_{in}$, where $C_{out}$ is a $[q, k]_q$ RS code and $C_{in}$ is the identity matrix $I_q$; recall that codewords of $C$ are columns of $M$. The update problem is like an encoding: given an input message $\bar{m} \in F_q^k$ specifying which column we want (where $\bar{m}$ is the representation of $j \in [N]$ thought of as an element of $F_q^k$), the output is $C_{out}(\bar{m})$, which corresponds to the column $M_{\bar{m}}$. Because $C_{out}$ is a linear code, this can be done in $O(q^2 \cdot \mathrm{poly}\log q)$ time; since $t = q^2$, the update process can be finished in $O(t \cdot \mathrm{poly}\log t)$ time. In 2010, Indyk, Ngo and Rudra [5] proved that decoding can be done in time $\mathrm{poly}(d) \cdot t \log^2 t + O(t^2)$.

2.4 IPFS Framework

IPFS is a distributed system first proposed in 2014 by Juan Benet [14]. Each node is initialized with a public/private key pair generated by the RSA 2048-bit algorithm; the private key is used for signing data, while the public key is used for creating the identifier of the node and is published to the network for verifying signatures. Before exchanging data, nodes must initialize a connection by exchanging their public keys; if the NodeID matches the corresponding exchanged public key, the connection is established. In an IPFS network, there are three main types of nodes: client nodes, retrieval miner nodes, and storage miner nodes [15]:

– Client node: this type of node represents users, who use a client application to access an IPFS network.
– Retrieval miner node: a retrieval miner node is responsible for distributing objects to other nodes on the network. Objects are temporarily cached on its local storage and are removed periodically by the garbage collection process.
– Storage miner node: this type of node provides large storage space and high-speed processing capacity for the network. The cluster feature can be used on these nodes for replicating data across cluster nodes.

Each IPFS node uses a distributed hash table (DHT) for storing the information of nodes (such as IP address, UDP port, NodeID), the objects stored in local storage, and the objects served by particular nodes. Currently, the S/Kademlia DHT is used to maintain routing information [16].

Data Storage. When a file is uploaded to IPFS, it is put into objects. Each IPFS object includes two fields: the data field stores binary data, while the links field contains an array of links that point to other related objects. Each link consists of three components: Name, Hash, and Size, where Name is an alias of the link, Hash is the hash value of the pointed object, and Size is the size of the pointed object. Each object can store up to 256 kilobytes (KB) of data; thus, if the size of an uploaded file is less than 256 KB, the storage nodes use only one object with an empty links field. Otherwise, the file is split into chunks of 256 KB, and the Merkle DAG (Merkle directed acyclic graph) data structure is used for managing these chunks.
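A toy sketch of this object layout (field names follow the description above; the hashing and DAG handling are greatly simplified and not the actual IPFS encoding) is:

```python
import hashlib

CHUNK = 256 * 1024  # 256 KB per object

def build_objects(data: bytes):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    if len(chunks) == 1:
        return {"data": chunks[0], "links": []}   # small file: one object, empty links
    links = [{"Name": f"chunk-{n}",
              "Hash": hashlib.sha256(c).hexdigest(),  # stand-in for the IPFS multihash
              "Size": len(c)}
             for n, c in enumerate(chunks)]
    return {"data": b"", "links": links}          # root object pointing to the chunks
```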


Pinning Service. IPFS provides the pinning service to keep files available on the network; storage miner nodes use this service to store important objects. The garbage collection feature ignores pinned objects during a deletion process.

IPFS Cluster. IPFS Cluster is used for data replication between cluster nodes, which ensures data availability for the IPFS network. Normally, cluster nodes have high performance and large storage capacity, and data in these nodes are stored long term.

IPNS. Each object is identified by a hash value corresponding to its content; thus, altering the content of an object also produces a new identifier for that object. IPNS allows nodes to use their NodeID to publish mutable objects and to use their private key to sign these published objects. In this way, other nodes can use the same link to access mutable objects.

3 The Proposed Solution

3.1 Online Hot-IP Detecting Algorithm

Let $N$ be the number of distinct IP addresses and $d$ be the maximum number of IPs that can be attacked. IP addresses are assigned to group tests according to the construction of the $d$-disjunct matrix. The total number of tests, $t = O(d^2 \log^2 N)$, is much smaller than $N$; therefore, the total space required is far less than the naive one-counter-per-IP scheme. Given a sequence of $m$ IPs from $[N]$, an item is a "Hot-IP" if it appears more than $m/(d+1)$ times [17]. Given the $d$-disjunct matrix $M_{t \times N} = (m_{ij})$, $m_{ij} = 1$ if $IP_j$ belongs to the $i$th group test. We use counters $c_1, c_2, \ldots, c_t$: when an item $j \in [N]$ arrives, we increment all the counters $c_i$ such that $m_{ij} = 1$. From these counters, a result vector $r \in \{0,1\}^t$ is defined as follows: $r_i = 1$ if $c_i > m/(d+1)$ and $r_i = 0$ otherwise; a test's result is positive if and only if it contains a hot item.
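A direct (unoptimized) sketch of this one-pass counting and of the standard decoding, with assumed names, is:

```python
import numpy as np

def detect_hot_ips(M, stream, d):
    # M: t x N d-disjunct matrix; stream: sequence of item indices j in [N]
    t, N = M.shape
    counters = np.zeros(t, dtype=np.int64)
    for j in stream:
        counters += M[:, j]               # increment every test containing item j
    r = counters > len(stream) / (d + 1)  # result vector: positive tests
    # Decoding: j is a Hot-IP candidate iff every test containing j is positive
    return [j for j in range(N) if np.all(r[M[:, j] == 1])]
```

In practice the column M[:, j] is generated on the fly from the concatenated code rather than stored, as discussed in Sect. 2.3.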


Detecting Hot-IPs: To find Hot-IPs, we use the decoding algorithm.

The new algorithm, which improves performance compared with the traditional algorithm, is presented as follows. The main idea of this algorithm is to use a list called the Hot-List to store the candidate Hot-IPs.


Experiment Results (See Table 1).

Table 1. Decoding time of group testing and online detecting Hot-IP

N (IPs)   Group testing (s)   Online Hot-IP detecting (s)
3,000     0.08                0.05
5,000     0.14                0.09
7,000     0.16                0.11
9,000     0.21                0.14
11,000    0.26                0.15
20,000    0.48                0.26
40,000    1.01                0.53
60,000    1.37                0.80
80,000    1.84                1.04
100,000   2.28                1.29
120,000   2.79                1.55
140,000   3.19                1.82
160,000   3.65                2.08
180,000   4.10                2.34
200,000   4.56                2.61
220,000   5.01                2.88
240,000   5.48                3.14
260,000   5.93                3.39

3.2 The Proposed Security Model for the Decentralized Web System

In the decentralized web system, some nodes are published as gateways of the network, where users can access websites. Some nodes of the service provider, which have high performance and are called storage miner nodes, are used to synchronize websites. Other nodes, the retrieval miner nodes, are responsible for caching data. Security risks can originate from users or retrieval miner nodes. Therefore, we propose a security model for detecting Hot-IPs in the decentralized web system, as shown in Fig. 1, in which the online Hot-IP detecting algorithm is deployed at the gateway and IPFS storage miner nodes. When DoS/DDoS attacks or scanning worms occur, these nodes quickly detect the risks; combined with monitoring the performance of the nodes, the administrator can easily apply a security policy such as limiting connections or removing the malicious nodes from the network. In order to avoid wasting the performance of the nodes, we suggest that the algorithm be run every 10 or 15 s.

Fig. 1. The proposed model of the security solution for the decentralized web system.


We also monitor the performance of the gateway node. Figure 2 shows the network traffic during a DoS attack when the gateway does not run the online Hot-IP detecting algorithm. After running the algorithm every 15 s, the network traffic coming from the DoS attack is dropped by the firewall of the gateway node, as shown in Figs. 3 and 4.

Fig. 2. The performance of the gateway before employing Algorithm 4

Fig. 3. The performance of the gateway after employing Algorithm 4

Fig. 4. Hot-IPs are blocked during the interval

4 Conclusion

Hot-IPs can cause damage in decentralized networks through threats such as DoS/DDoS attacks and Internet scanning worms, which can make the main nodes unavailable to legitimate clients. In this paper, we introduced a novel approach for detecting Hot-IPs in real time based on non-adaptive group testing. We applied the improved algorithm in the decentralized web system to detect Hot-IPs in real time and guarantee the availability of the network. Our proposed algorithm extends the d value of d-disjunct matrices; this can be adapted to finding multiple Hot-IPs in DDoS attacks, and we compared the existing algorithm and ours in terms of efficiency and accuracy.


Acknowledgment. This research is funded by Vietnam National University Ho Chi Minh City (VNU-HCM) under grant number NCM2019-18-01.

References 1. Xiao, Z., Wen, S., Yu, H., Wu, Z., Chen, H., Zhang, C., Ji, Y.: A new architecture of web applications-the widget/server architecture. In: 2010 2nd IEEE International Conference on Network Infrastructure and Digital Content, pp. 866–869. IEEE (2010) 2. Zhang, C., Sun, J., Zhu, X., Fang, Y.: Privacy and security for online social networks: challenges and opportunities. IEEE Netw. 24(4), 13–18 (2010) 3. Dias, D., Benet, J.: Distributed web applications with IPFS, tutorial. In: International Conference on Web Engineering, pp. 616–619. Springer, Cham (2016) 4. Huynh, T. T., Nguyen, T. D., Tan, H.: A decentralized solution for web hosting. In 2019 6th NAFOSTED Conference on Information and Computer Science (NICS), pp. 82–87. IEEE (2019) 5. Indyk, P., Ngo, H. Q., Rudra, A.: Efficiently decodable nonadaptive group testing. In Proceedings of the Twenty-First Annual ACMSIAM Symposium on Discrete Algorithms (SODA), pp. 1126–1142 (2010) 6. Kautz, W., Singleton, R.: Nonrandom binary superimposed codes. IEEE Trans. Inf. Theory 10(4), 363–377 (1964) 7. Ngo, H.Q., Du, D.Z.: A survey on combinatorial group testing algorithms with applications to DNA library screening. Discrete Math. Probl. Med. Appl. 55, 171–182 (2000) 8. Rudra, A., Uurtamo, S.: Data stream algorithms for codeword testing. In International Colloquium on Automata, Languages, and Programming, pp. 629–640. Springer, Heidelberg (2010) 9. Forney Jr., G.D.: Concatenated codes, DTIC Document (1965) 10. Dorfman, R.: The detection of defective members of large populations. Ann. Math. Stat. 14, 436–440 (1943) 11. Huynh, C.N., Huynh, T.T., Le, T.V., Tan, H.: Controlling web traffic and preventing DoS/DDoS attacks in networks with the proxy gateway security solution built on open hardware. In: 2019 International Conference on System Science and Engineering (ICSSE), pp. 239–244. IEEE (2019) 12. Chinh, H.N., Hanh, T., Thuc, N.D.: Fast detection of DDoS attacks using Non-Adaptive group testing. Int. J. Netw. Secur. Appl. 5(5), 63 (2013) 13. Du, D., Hwang, F.K., Hwang, F.: Combinatorial Group Testing and Its Applications vol. 12. World Scientific (2000) 14. Benet, J.: IPFS-content addressed, versioned, P2P file system. arXiv preprint arXiv:1407. 3561 (2014) 15. Filecoin: A Decentralized Storage Network. https://filecoin.io/filecoin.pdf. Accessed 9 Feb 2020 16. Steichen, M., Fiz, B., Norvill, R., Shbair, W., State, R.: Blockchain-based, decentralized access control for IPFS. In: 2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), pp. 1499– 1506. IEEE (2018) 17. Cormode, G., Muthukrishnan, S.: What’s hot and what’s not: tracking most frequent items dynamically. ACM Trans. Database Syst. (TODS) 30(1), 249–278 (2005)

Segmentation of Left Ventricle in Short-Axis MR Images Based on Fully Convolutional Network and Active Contour Model Tien Thanh Tran, Thi-Thao Tran, Quoc Cuong Ninh, Minh Duc Bui, and Van-Truong Pham(&) School of Electrical Engineering, Hanoi University of Science and Technology, No. 1 Dai Co Viet, Hanoi, Vietnam [email protected]

Abstract. Left ventricle (LV) segmentation from cardiac MRI images plays an important role in the clinical diagnosis of LV function. In this study, we propose a new approach for left ventricle segmentation based on a deep neural network and an active contour model (ACM), formulated as a coarse-to-fine segmentation framework. In the first step of the framework, a fully convolutional network is employed to achieve coarse segmentation of the LV from input cardiac MR images. In particular, instead of using the cross-entropy loss function, we propose to utilize the Tversky loss, which is known to be suitable for unbalanced data, an issue in medical images, to train the network. The coarse segmentation from the first step is then used to create initial curves for the ACM. Finally, the active contour model is applied to further optimize the energy functional in order to get a fine segmentation of the LV. Comparative experiments with other state-of-the-art methods on the ACDCA and Sunnybrook challenge databases, in terms of Dice coefficient and Jaccard index, show the advantages of the proposed approach.

Keywords: Left ventricle segmentation · Active contour model · Fully convolutional network · Deep learning

1 Introduction

Segmentation of the left ventricle from cardiac magnetic resonance imaging (MRI) images is an essential step for cardiac diagnosis [1]. Many clinical diagnosis parameters, such as wall thickness, ejection fraction, left ventricular volume, and mass, can be derived from LV segmentation. In practice, LV segmentation tasks are often performed manually by clinicians, which is tedious, subjective, and prone to human error [2]. Therefore, automatic methods for LV segmentation are in high demand. However, automatically segmenting the left ventricle faces some difficulties related to the properties of cardiac MR images [2], such as the presence of inhomogeneity due to blood flow. There have been many automatic methods for LV segmentation, such as graph cuts [3], fuzzy clustering [4], active contour models [5–7], and machine learning based methods [8, 9]. Among them, active contour models (ACMs) and machine learning based methods have been shown to be promising approaches [10].


In the ACM approach, level set based active contour models get more attention since they allow the curve to change its topology, such as splitting and merging, during the image segmentation process. Despite these advantages, level set based ACMs have intrinsic limitations; for example, the segmentation results depend on the curve initialization. For the machine learning approach, recent studies based on deep learning have shown excellent performance [11], and some deep neural network based models have been successfully employed for LV segmentation [8, 9]. However, the deep learning approaches for LV segmentation are limited by the lack of large training datasets and the low signal-to-noise ratio. Consequently, there have been some works aiming at combining deep learning methods and deformable models to segment the LV in cardiac MR images [8, 9]; in these works, deep learning methods were employed to produce a rectangle that detects the region of interest of the LV, and then other post-processing methods were used to achieve the final segmentation. Inspired by the deep learning and active contour approaches, in this study we propose a fully automatic method that employs fully convolutional networks and a deformable model for LV segmentation. The basic idea of the proposed method is to train on a dataset consisting of multiple cardiac MRI images with ground truths, predict a coarse LV contour in the segmented image, and then evolve the contour towards the LV boundary based on image intensity information. In more detail, the proposed approach includes three steps. In the first step, we train on the dataset, consisting of multiple cardiac MRI images at different positions in one beat cycle along with the ground truths, using a fully convolutional network based framework, the U-Net neural network [12]. In particular, instead of using the cross-entropy loss function to train the network, we propose to utilize the Tversky loss [13], which is known to be suitable for unbalanced data, an issue in medical images. In the second step, the coarse segmentation results for the test image predicted by the U-Net are used to create initial contours, which are then converted to the corresponding initial signed distance functions (SDFs). Finally, the test image along with its corresponding initial contours is applied to a level set active contour model to provide accurate and robust LV segmentation. The main contributions of the paper include: (i) designing a fully automatic LV segmentation method for MRI datasets; (ii) performing deep learning algorithms, trained with limited data, for automatic LV localization and initial segmentation; (iii) employing the Tversky loss, which is known to be suitable for unbalanced data, to train the network; and (iv) incorporating the deep learning output into level set active contour models to address the shrinkage/leakage problems and reduce the sensitivity to initialization. The remainder of this paper is organized as follows: in Sect. 2, the proposed approach is described in detail; in Sect. 3, some experimental results are presented, including a comparison with state-of-the-art methods; finally, we conclude this work and discuss future applications in Sect. 4.


2 Materials and Methods

2.1 U-Net Architecture for Coarse Segmentation

The U-Net neural network architecture, shown in Fig. 1, is an improved version of the fully convolutional network architecture. This architecture has been shown to be applicable to multiple medical image segmentation problems and has been considered the standard architecture for medical image segmentation [12]. The structure of the U-Net extends the one from FCN [14]: it is the combination of a contracting part (the encoder) and an expanding part (the decoder). The contracting part is composed of convolutional and max-pooling layers, while the expanding part consists of the aggregation of the intermediate encoder features, upsampling, and convolutional layers. To recover fine-grained features that may be lost in the downsampling stage, cross-over connections are used by concatenating equally sized feature maps.

Fig. 1. Basic structure of the U-Net
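A minimal sketch of one contracting-path stage of Fig. 1 is shown below; the channel sizes are assumptions, and normalization layers are omitted here (the paper uses mean-variance normalization, see Sect. 3.1).

```python
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

encoder_stage = nn.Sequential(double_conv(1, 64), nn.MaxPool2d(2))  # one downsampling step
```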

2.2 Multiphase Active Contour Model for Fine LV Segmentation

In this study, to simultaneously segment the endocardium and epicardium of the left ventricle, we propose a multiphase version of the region-based active contour model. Let $\phi_1$ and $\phi_2$ be the level set functions, characterized by signed distance functions, representing the endocardium and epicardium regions respectively:

$$\begin{cases} \text{Endocardium} = \Omega_1 = \{x \in \Omega : \phi_1(x) > 0\}, \\ \text{Myocardium} = \Omega_2 = \{x \in \Omega : \phi_1(x) < 0 \wedge \phi_2(x) > 0\}, \\ \text{Background} = \Omega_3 = \{x \in \Omega : \phi_2(x) < 0\}. \end{cases} \quad (1)$$


The energy functional is then reformulated as:

$$E = \int_\Omega H(\phi_1(x))\,(I(x) - c_1)^2\,dx + \int_\Omega (1 - H(\phi_1(x)))\,H(\phi_2(x))\,(I(x) - c_2)^2\,dx + \int_\Omega (1 - H(\phi_2(x)))\,(I(x) - c_3)^2\,dx + \mu \int_\Omega \left( |\nabla H(\phi_1(x))| + |\nabla H(\phi_2(x))| \right) dx \quad (2)$$

where $H(\phi_1)$ and $H(\phi_2)$ are Heaviside step functions. The initial values for $\phi_1$ and $\phi_2$ are, respectively, signed distance functions created from the binary endocardium and epicardium masks resulting from the FCN model. To minimize the energy functional (2), we use the gradient descent scheme. At each evolution iteration, $c_1$, $c_2$, and $c_3$ are respectively the mean intensities inside the regions $\Omega_1$, $\Omega_2$, and $\Omega_3$. Denoting $\delta(\cdot)$ the Dirac function, the evolution equations for the level set functions $\phi_1$ and $\phi_2$ are:

$$\frac{\partial \phi_1}{\partial t} = \left\{ \mu\,\mathrm{div}\!\left( \nabla\phi_1 / |\nabla\phi_1| \right) - (I(x) - c_1)^2 + H(\phi_2(x))\,(I(x) - c_2)^2 \right\} \delta(\phi_1(x)) \quad (3)$$

$$\frac{\partial \phi_2}{\partial t} = \left\{ \mu\,\mathrm{div}\!\left( \nabla\phi_2 / |\nabla\phi_2| \right) - (1 - H(\phi_1(x)))\,(I(x) - c_2)^2 + (I(x) - c_3)^2 \right\} \delta(\phi_2(x)) \quad (4)$$
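A NumPy sketch of one gradient descent iteration of Eqs. (2)–(4), using a smoothed Heaviside/Dirac pair, is given below; the smoothing width, the weight mu, and the time step are assumptions.

```python
import numpy as np

def heaviside(phi, eps=1.5):
    return 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))

def dirac(phi, eps=1.5):
    return (eps / np.pi) / (eps ** 2 + phi ** 2)

def curvature(phi):
    # div(grad(phi) / |grad(phi)|) via finite differences
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    dnydy, _ = np.gradient(gy / norm)
    _, dnxdx = np.gradient(gx / norm)
    return dnxdx + dnydy

def evolve_step(I, phi1, phi2, mu=0.2, dt=0.1):
    H1, H2 = heaviside(phi1), heaviside(phi2)
    w1, w2, w3 = H1, (1 - H1) * H2, 1 - H2
    c1 = (w1 * I).sum() / (w1.sum() + 1e-8)   # mean intensity of Omega_1
    c2 = (w2 * I).sum() / (w2.sum() + 1e-8)   # mean intensity of Omega_2
    c3 = (w3 * I).sum() / (w3.sum() + 1e-8)   # mean intensity of Omega_3
    dphi1 = dirac(phi1) * (mu * curvature(phi1)
                           - (I - c1) ** 2 + H2 * (I - c2) ** 2)        # Eq. (3)
    dphi2 = dirac(phi2) * (mu * curvature(phi2)
                           - (1 - H1) * (I - c2) ** 2 + (I - c3) ** 2)  # Eq. (4)
    return phi1 + dt * dphi1, phi2 + dt * dphi2
```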

3 The Proposed Approach

3.1 The Pipeline of the Proposed Approach

In this study, we propose an approach for fully automatic LV segmentation, formulated as a coarse-to-fine framework with the pipeline presented in Fig. 2. In more detail, the proposed approach includes three steps. First, a deep fully convolutional network is applied to obtain the coarse segmentation of all 2D slices, including the endocardium and epicardium masks; in particular, we apply the U-Net architecture [12] to perform the coarse segmentation tasks. Next, based on the masks resulting from the U-Net, the corresponding signed distance functions (SDFs) are created and then used as initial level set functions for the multiphase active contour model (MP-ACM). Finally, the MP-ACM evolves the contours towards the boundaries of the desired objects to get the LV segmentation results. By integrating the U-Net architecture and the active contour model into one framework for the segmentation problem in cardiac magnetic resonance imaging, we can take advantage of both deep learning based and level set based active contour models. It is noted that, in the U-Net architecture used in this study, instead of batch normalization, mean-variance normalization is used to normalize the pixel intensity distribution of the feature maps. In addition, since training a deep model with a small MRI dataset might lead to overfitting, we use some well-known techniques to prevent overfitting, such as data augmentation, dropout, and regularization during training.

Fig. 2. The pipeline of the proposed framework: the input image is fed to the U-Net, which outputs the endocardium mask M1 and the epicardium mask M2; these masks are converted to SDF functions that serve as initial level sets for the MP-ACM, which produces the final segmentation.
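Converting a binary U-Net mask into the initial signed distance function can be done with a pair of Euclidean distance transforms, for example as below (the sign convention, positive inside, follows Sect. 2.2; the function name is ours):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mask_to_sdf(mask):
    mask = mask.astype(bool)
    inside  = distance_transform_edt(mask)    # distance to the boundary, inside
    outside = distance_transform_edt(~mask)   # distance to the boundary, outside
    return inside - outside                   # positive inside, negative outside
```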

It is also noted that the training and testing procedures for the endocardium and the epicardium by the U-Net are performed independently.

3.2 The Loss Function

To train the proposed network, in this study we utilize the Tversky loss function [13], which is known to be suited to imbalanced data, a common issue in medical image segmentation. The Tversky loss function is defined as follows. Let $P$ and $G$ be the sets of predicted and ground truth binary labels, respectively. The Tversky index [13] is defined as:

$$T(P, G; \alpha, \beta) = \frac{|P \cap G|}{|P \cap G| + \alpha|P - G| + \beta|G - P|} \quad (5)$$

where $0 \le \alpha, \beta \le 1$ are the parameters of the Tversky index with $\alpha + \beta = 1$. Equation (5) therefore becomes:

$$T(P, G; \alpha) = \frac{|P \cap G|}{|P \cap G| + \alpha|P - G| + (1 - \alpha)|G - P|} \quad (6)$$

The Tversky loss is defined as:

$$L_{Tversky} = 1 - T(P, G; \alpha) = 1 - \frac{|P \cap G|}{|P \cap G| + \alpha|P - G| + (1 - \alpha)|G - P| + \epsilon} \quad (7)$$

where $\epsilon$ is a small factor used to handle division by zero.


As pointed out in [15], at the pixel level the areas $|P \cap G|$, $|P - G|$ and $|G - P|$ in the Tversky index are calculated as:

$$|P \cap G| = \sum_{i=1}^{N} p_{0i}\, g_{0i} \quad (8)$$

$$|P - G| = \sum_{i=1}^{N} p_{0i}\, g_{1i} \quad (9)$$

$$|G - P| = \sum_{i=1}^{N} p_{1i}\, g_{0i} \quad (10)$$

where, in the output of the softmax layer, $p_{0i}$ is the probability of pixel $i$ being a lesion and $p_{1i}$ is the probability of pixel $i$ being a non-lesion. Likewise, $g_{0i}$ is 1 if pixel $i$ is a lesion pixel and 0 otherwise, and vice versa for $g_{1i}$.
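A compact PyTorch sketch of Eqs. (7)–(10), with p0 and g0 the flattened lesion probabilities and labels (names assumed), is:

```python
import torch

def tversky_loss(p0, g0, alpha=0.5, eps=1e-6):
    p1, g1 = 1 - p0, 1 - g0
    inter = (p0 * g0).sum()   # |P ∩ G|, Eq. (8)
    fp    = (p0 * g1).sum()   # |P − G|, Eq. (9)
    fn    = (p1 * g0).sum()   # |G − P|, Eq. (10)
    return 1 - inter / (inter + alpha * fp + (1 - alpha) * fn + eps)  # Eq. (7)
```

With alpha = 0.5 this reduces to the familiar Dice loss; larger alpha penalizes false positives more heavily.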

4 Evaluations and Results

4.1 Dataset

The proposed method has been tested on the 2009 Sunnybrook Left Ventricle Segmentation Challenge (SUN09) dataset [16] and the 2017 ACDCA dataset [17]. The SUN09 dataset includes 15 cases for each of the training, validation, and online sets, with ground truth contours provided. The SUN09 data are provided in DICOM format, which gives useful information, such as slice thickness and pixel spacing, to compute endocardial and epicardial areas for quantitative assessment. For the ACDCA data, the training database with provided ground truths is split into training and test sets to evaluate the image segmentation models. For each database, we use the training set to train the FCN model; after that, we use the test set (for the SUN09 database, the test set includes the validation and online sets) to evaluate the model performance with the proposed coarse-to-fine approach.

4.2 Results

To demonstrate the segmentation results of our approach, we show some representative samples of the results for the Sunnybrook (SUN09) database in Fig. 3. In addition, to evaluate the quantitative accuracy of the segmentation results, we used the Dice similarity coefficient (DSC). The DSC measures the similarity between automatic and manual segmentations and is calculated as [18]:

$$DSC = \frac{2\, S_{am}}{S_a + S_m} \quad (11)$$

where $S_a$, $S_m$, and $S_{am}$ are, respectively, the automatically segmented region, the manually segmented region, and the intersection between the two regions. As can be observed from Fig. 3, there is a good agreement between the results of our approach and the ground truths.
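For binary masks, Eq. (11) reduces to a few NumPy operations (names are ours):

```python
import numpy as np

def dice(auto_mask, manual_mask):
    s_am = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * s_am / (auto_mask.sum() + manual_mask.sum())
```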


Fig. 3. Representative segmentation by the proposed approach for the SUN09 data. First column: input images; Second column: results; Last column: ground truth

Besides, the agreement between the endocardial and epicardial areas obtained by the proposed model and those of the ground truth is depicted in the linear regression and Bland–Altman [19] plots shown in Fig. 4. It can be seen from the plots in Fig. 4 that the areas obtained by the proposed model are in good agreement with those from the expert, with high correlation coefficients (above 98% for both endocardium and epicardium). As we can also observe from this figure, the plots of the data obtained by the proposed model are close to those of the manual segmentation, which illustrates the small differences between them.

Fig. 4. Linear regression and Bland–Altman plots of the automatic segmentation versus the ground truth for the endocardium (a) and epicardium (b) of SUN09 dataset

For the ACDCA database, the data, including image slices and the corresponding ground truth, are split into training and validation sets. Representative results for these data are provided in Fig. 5. As can be observed from this figure, the endocardium and epicardium contours are in good agreement with the ground truths.


Fig. 5. Representative segmentation by the proposed approach for the ACDCA data. First column: input images; Second column: results; Last column: ground truth

4.3 Compared to Other Works

We now compare the performance of the proposed model with other models. To this end, we applied the proposed approach to segment all images from the SUN09 dataset and compared the results with those reported in previous works, including Phi Vu Tran [10], Avendi et al. [8], Queirós et al. [20], Ngo and Carneiro [9], and Hu et al. [21]. The quantitative comparisons, evaluated by the Dice similarity coefficient (DSC), are given in Table 1; the proposed approach achieves the highest index on both the endo- and epicardium regions. For the ACDCA data, we reimplemented some recent FCN based image segmentation architectures, including FCN [10], SegNet [22], and U-Net [12], and compared their results with the proposed method. The comparative results of this experiment are reported in Table 2. As we can observe in Table 2, the proposed approach achieves the highest scores in both Dice and Jaccard coefficient values when compared to the other methods, which shows the advantages of the proposed approach.


Table 1. The mean Dice similarity coefficient (DSC) of other state-of-the-art models and the proposed model on the SUN09 dataset, for both endocardium (Endo) and epicardium (Epi) regions.

Method                        Dice (Endo)  Dice (Epi)
Phi Vu Tran method [10]       0.92         0.95
Avendi et al. method [8]      0.94         –
Queirós et al. method [20]    0.90         0.94
Ngo and Carneiro method [9]   0.90         –
Hu et al. method [21]         0.89         0.94
The proposed method           0.94         0.96

Table 2. The mean Dice similarity coefficient (DSC) and Jaccard index of other state-of-the-art models and the proposed model on the ACDCA dataset, for both endocardium (Endo) and epicardium (Epi) regions.

Method              Dice (Endo)  Dice (Epi)  Jaccard (Endo)  Jaccard (Epi)
FCN [10]            0.89         0.92        0.83            0.89
SegNet [22]         0.82         0.89        0.75            0.83
Unet [12]           0.88         0.92        0.82            0.87
Proposed approach   0.90         0.95        0.85            0.92

5 Conclusion

This paper demonstrated the advantages of combining the U-Net architecture and an active contour model for the segmentation problem in cardiac magnetic resonance imaging. The approach takes advantage of both deep learning based and level set based active contour models for cardiac image segmentation. Experiments showed that this model achieves high accuracy on multiple metrics on benchmark MRI datasets.

Acknowledgement. This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 102.05-2018.302.

References 1. Miller, C., Pearce, K., Jordan, P., Argyle, R., Clark, D., Stout, M., Ray, S., Schmitt, M.: Comparison of real-time three-dimensional echocardiography with cardiovascular magnetic resonance for left ventricular volumetric assessment in unselected patients. Eur. Heart J. 13(2), 187–195 (2012) 2. Petitjean, C., Dacher, J.: A review of segmentation methods in short axis cardiac MR images. Med. Image Anal. 15(2), 169–184 (2011)


3. Boykov, Y., Lee, V.S., Rusinek, H., Bansal, R.: Segmentation of dynamic N-D data sets via graph cuts using Markov models. In: Proceedings of International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), pp. 1058–1066 (2001) 4. Rezaee, M., van der Zwet, P., Lelieveldt, B., van der Geest, R., Reiber, J.: A multiresolution image segmentation technique based on pyramidal segmentation and fuzzy clustering. IEEE Trans. Image Process. 9(7), 1238–1248 (2000) 5. Lynch, M., Ghita, O., Whelan, P.F.: Left-ventricle myocardium segmentation using a coupled level-set with a priori knowledge. Comput. Med. Imag. Graph. 30(4), 255–262 (2006) 6. Pham, V.T., Tran, T.T.: Active contour model and nonlinear shape priors with application to left ventricle segmentation in cardiac MR images. Optik 127(3), 991–1002 (2016) 7. Pham, V.-T., Tran, T.-T., Shyu, K.-K., Lin, Lian-Yu., Wang, Y.-H., Lo, M.-T.: Multiphase B-spline level set and incremental shape priors with applications to segmentation and tracking of left ventricle in cardiac MR images. Mach. Vis. Appl. 25(8), 1967–1987 (2014). https://doi.org/10.1007/s00138-014-0626-1 8. Avendi, M.R., Kheradvar, A., Jafarkhani, H.: A combined deep-learning and deformablemodel approach to fully automatic segmentation of the left ventricle in cardiac MRI. Med. Image Anal. 30, 108–119 (2016) 9. Ngo, T.A., Carneiro, G.: Left ventricle segmentation from cardiac MRI combining level set methods with deep belief networks. In: 20th International Conference on Image Processing, pp. 695–699 (2013) 10. Tran, P.V.: A fully convolutional neural network for cardiac segmentation in short-axis MRI. https://arxiv.org/abs/1604.00494 (2016) 11. Luo, G., Dong, S., Wang, K., Zuo, W., Cao, S., Zhang, H.: Multi-views fusion CNN for lef ventricular volumes estimation on cardiac MR images. IEEE Trans. Biomed. Eng. 65(9), 1924–1934 (2018) 12. Ronneberger, O., Fischer, P., Brox, T.: U-net: convolutional networks for biomedical image segmentation. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241 (2015) 13. Tversky, A.: Features of similarity. Psychol. Rev. 84(4), 327 (1977) 14. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431–3440 (2015) 15. Sadegh, S.M., Erdogmus, D., Gholipour, A.: Tversky loss function for image segmentation using 3D fully convolutional deep networks. In: Proceedings of International Workshop on Machine Learning in Medical Imaging, pp. 379–387 (2017) 16. Radau, P., Lu Y., Connelly, K., Paul, G., Dick, A.J., Wright, G.A.: Evaluation framework for algorithms segmenting short axis cardiac MRI. MIDAS J.-Cardiac MR Left Ventr. Segment. Challenge (2009). http://hdl.handle.net/10380/13070 17. Bernard, O., Lalande, A., Zotti, C., Cervenansky, F., Yang, X., Heng, P.A., Cetin, I., Lekadir, K., Camara, O., Gonzalez Ballester, M.A., Sanroma, G., Napel, S., Petersen, S., Tziritas, G., Grinias, E.K.M., Kollerathu, V.A., Krishnamurthi, G., Rohe, M.M., Pennec, X., Sermesant, M., Isensee, F., Jager, P., Maier-Hein, K.H., Full, P.M., Wolf, I., Engelhardt, S., Baumgartner, C.F., Koch, L.M., Wolterink, J.M., Isgum, I., Jang, Y., Hong, Y., Patravali, J., Jain, S., Humbert, O., Jodoin, P.M.: Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE Trans. Med. 
Imaging 37(11), 2514–2525 (2018) 18. Lynch, M., Ghita, O., Whelan, P.F.: Segmentation of the left ventricle of the heart in 3-D + t MRI data using an optimized nonrigid temporal model. IEEE Trans. Med. Imaging 27(2), 195–203 (2008)


19. Bland, J., Altman, D.: Statiscal methods for assessing agreement between two methods of clinical measurements. Lancet 1, 307–310 (1986) 20. Queirós, S., Barbosa, D., Heyde, B., Morais, P., Vilaça, J., Friboulet, D., Bernard, O., D’hooge, J.: Fast automatic myocardial segmentation in 4D cine CMR datasets. Med. Image Anal. 18(7), 1115–1131 (2014) 21. Hu, H., Liu, H., Gao, Z., Huang, L.: Hybrid segmentation of left ventricle in cardiac MRI using gaussian-mixture model and region restricted dynamic programming. Magn. Reson. Imaging 31(4), 575–584 (2013) 22. Badrinarayanan, V., Kendall, A., Cipolla, R.: SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481– 2495 (2017)

Proposed Novel Fish Freshness Classification Using Effective Low-Cost Threshold-Based and Neural Network Models on Extracted Image Features Anh Thu T. Nguyen(&), Minh Le, Hai Ngoc Vo, Duc Nguyen Tran Hong, Tuan Tran Anh Phuoc, and Tuan V. Pham The University of Danang - University of Science and Technology, Danang, Vietnam {ntathu,pvtuan}@dut.udn.vn, [email protected], [email protected], [email protected], [email protected]

Abstract. The quality of food has become a great concern not only in Vietnam but all over the globe. The quality of fish, in terms of fish freshness, has therefore attracted great attention from the research and industry communities. This paper proposes novel fish freshness classification models based on threshold-based and neural network based approaches applied to extracted image features. These features are identified based on the physiological characteristics of fish eyes in the fresh and stale states, and include 12 Intensity Slices, Minimum Intensity, Haziness, Histogram, and Standard Deviation. The nine proposed models, consisting of 4 threshold-based and 5 neural network based models, were trained on a training set composed of 49 fisheye images of 4 Crucian carp fishes at two main groups of time points (0–5 h and 21–22 h after death) and tested on a testing set including 18 images from a fifth fish sample. On the training set, 8 of the 9 models reach 100% accuracy, and on the testing set, 7 of the 9 models reach 100% accuracy. These results confirm our four proposed feature assumptions and reveal the feasibility of the proposed models based on extracted features, which are non-invasive, rapid, low cost, effective, and minimally affected by the environment, and which are consequently highly suitable for further studies and for a mobile application for freshness classification.

Keywords: Fish freshness · Image processing · Feature extraction · Classification · Threshold · Neural network

1 Introduction

In the food industry, fish freshness is the key factor in determining the quality of fishery products. Food made with stale or spoiled fish not only loses nutrients and taste but can also lead to food poisoning for its consumers. One of the main agents of spoilage is bacteria, whose numbers grow quickly after the fish dies, especially in warm and humid weather [1] such as in tropical countries like Vietnam.


Given the importance and popularity of fishery foods, over the years there have been many studies on different approaches for measuring or predicting fish quality and freshness. According to Ni et al. in 2017 [2], there are two main methods to detect fish freshness: sensory and instrumental evaluation. The sensory evaluation is conducted by humans through the senses of smell, taste, touch and hearing. However, the sensory method is normally limited by personal experience and is sometimes subjective and prone to misinterpretation. Meanwhile, instrumental assessment has been studied and developed based on chemical, biological, physical, electrical and other approaches, such as the Torrymeter [3], biosensors, nanotechnology [4] and more. The instrumental methods also show the disadvantages of equipment utilization, the required experience in fresh fish sorting, time consumption, invasiveness, etc. Therefore, such equipment is not always available for customer usage. Nevertheless, among the diversity of instrumental methods, image processing based techniques for fish quality and freshness classification are notable since they are non-invasive, safe and mostly low-cost. To name a few, a study in 2012 [4] observed the change and quantification of RGB color indices of fish eye and gill images at different periods of time after death, in comparison with a Torrymeter, on three types of fishes. It showed that the meters provided precise and fast measurements, while the RGB indices could only show the deterioration from day 3 of spoilage, and different species have different levels of deterioration. In 2014, Jun Gu and Nan He [5] introduced a rapid and non-destructive method that calculates statistical features of the gray values of the eye iris and surface texture features to accomplish freshness detection. The results reached a detection accuracy rate of 86.3%. In 2016, Isaac et al. [6] introduced an automatic and efficient method for gill segmentation for fish freshness validation and determination of any pesticide, with a maximum correlation of 92.4% with the ground truth results. In 2018, Navotas et al. presented an Android application that automatically classifies the freshness of three types of fish at 5 levels by using RGB values of eyes and gills, with acceptable results, but it requires an independent light source and auxiliary devices [7]. In this study, based on a hypothesis analysis of meaningful fisheye features, we have constructed our assumptions and then extracted features from the fisheye images which could help differentiate the fresh status from the stale status of the fishes. We have built a dataset of local fisheye images for training classification models based on threshold and neural network approaches. Subsequently, the models were tested on the self-built testing dataset for performance evaluation.

2 Hypothesis Analysis and Proposed Assumptions

According to published studies on the physiological characteristics of fish eyes when a fish turns from fresh to stale status [8], and from our observations on the self-built dataset as demonstrated in Fig. 1a, our assumptions on noticeable fisheye image features are given below; based on them, image processing methods are implemented to extract these noticeable features of the iris and the pupil of the eye for later processing. A fish anatomy is demonstrated in Fig. 1b.


Fig. 1. Our self-built data at different time points after death (0, 1, 2, 3, 4, 5 and 21, 22 h) (a) and a fish anatomy [9] (b).

Particularly, when a fish is fresh, its eyes appear bright, clear, transparent, and full. These characteristics probably relate to the transparency, fullness and smoothness of the cornea and anterior chamber layers of the eye. Consequently, the pupil area looks very dark. However, when a fish is spoiled, its eyes become opaque, cloudy, wrinkled and sunken, and the pupils turn grey, as if a haze layer covered the whole eye. This means that the frequency distribution of pixels in the pupil area lying in a very low-value zone (around black) will be shifted to a higher-value zone (around gray) in the intensity histogram. On the other hand, the iris of the fresh fish eye appears more colorful than that of the stale fish eye, which generally shows an opaque white color. Besides, the fresh fish eye easily displays specular reflection spots when being imaged, and the intensity at these pixels reaches saturation. This means the intensity variation in the fish eye image, especially in the iris area, is higher. From the above hypothesis analysis, the following assumptions have been proposed for feature extraction and classification purposes.

Assumption 1: The background intensity of the pupil area increases as the fish gets stale. Corresponding extracted feature: Minimum Intensity Feature - F1.

Assumption 2: The haziness level over the whole eye area increases as the fish gets stale. Corresponding extracted feature: Haziness Feature - F2.

Assumption 3: The group of low-intensity pixels (within the pupil area) in the intensity histogram shifts away from the value "0" as the fish gets stale. Corresponding extracted feature: Histogram Feature - F3.

Assumption 4: The level of intensity variation in the iris area decreases as the fish gets stale. Corresponding extracted feature: Standard Deviation Feature - F4.

3 Methodology

The overall diagram of our proposed fish freshness classification models, including the training and testing phases, is described in Fig. 2. The features extracted from the pre-processed images are used for training and testing the threshold-based models and the neural network models. The trained models are then used to classify an input image from the testing dataset into Class 1 (fresh status) or Class 2 (stale status).


Fig. 2. The overall diagram of the proposed fish freshness classification models.

3.1 Pre-processing

The eye region, comprising the iris and pupil areas, was segmented from the captured image for analysis. This region of interest is resized to a uniform size of 250 × 250 pixels and then converted from RGB to grayscale. No normalization or white balance is applied to the segmented region. The purpose of this pre-processing step is to reduce the computational complexity of the developed algorithms, as will be presented in the next sections.
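For illustration, a minimal sketch of this pre-processing step is given below in Python with OpenCV (the paper's own computations were done in MATLAB); the bounding box of the eye region is assumed to be provided by a manual segmentation step.

```python
import cv2

def preprocess_eye_region(image_path, box):
    """Crop the segmented eye region, resize it to 250x250 pixels
    and convert it from color to grayscale."""
    img = cv2.imread(image_path)           # BGR color image
    x, y, w, h = box                       # assumed manual eye bounding box
    eye = img[y:y + h, x:x + w]
    eye = cv2.resize(eye, (250, 250))      # uniform size used in the paper
    return cv2.cvtColor(eye, cv2.COLOR_BGR2GRAY)
```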

3.2 Feature Extraction

In order to eliminate unexpected factors, such as unwanted specular reflections received when capturing fisheye images, and also to reduce the image processing workload, we first extracted a primary feature - denoted F0 - consisting of a set of 12 central cutting slices along the eye image. In addition, four secondary features - denoted F1, F2, F3, F4 - were then derived from F0 for further analysis related to our assumptions, to support distinguishing the two classes of fresh and stale fish.

Fig. 3. 12 intensity slices collected on a fisheye image.

12 Intensity Slices Feature (F0). Twelve intensity slices are extracted from every 250 × 250 pixel fisheye image, as shown in Fig. 3. One intensity slice (a vector of 1 × 250 pixels) is created by cutting a central horizontal line from left to right of the image. The image is then rotated in steps of 15° from 0 to 180° to form 12 different slices. The aim of utilizing sampled intensity vectors instead of the whole image's intensity values is to simplify the large data of the whole image by a representative acquisition of intensity values along the image.

Minimum Intensity Feature (F1). In order to determine the background intensity of the pupil area without the specular reflection effect, the minimum value of each of the 12 intensity slices was calculated. This minimum value comes from a pixel belonging to the pupil area, since the intensity of this area is much lower than that of other areas. Subsequently, the average of the 12 minimum values from the 12 slices was calculated. When a fish gets stale, this minimum intensity is assumed to increase.

Haziness Feature (F2). Another secondary feature is the haze thickness of the fish eye, which increases over time after death. Two different algorithms for haziness estimation were proposed by Kaiming et al. [10] with an approximate dark channel and by Dubok et al. [11] with a simple dark channel. Both represent the general model describing a hazy image I with scene radiance J, transmission map t (haze thickness) and atmospheric light A. Given x, a 2D vector representing a pixel's coordinates in the image, and t(x), a scalar in [0, 1], then:

I(x) = J(x)t(x) + A(1 − t(x))   (1)

Considering J(x) as a dark pixel, Kaiming et al. defined that all other pixels satisfying the same condition as J(x) are also considered dark pixels [10]. This leads to recovering the scene radiance (dehazed image) J from an estimation of the transmission map t (F2) and the atmospheric light according to:

J(x) = (I(x) − A) / t(x) + A   (2)

An example of the increasing haziness as a fish becomes stale is shown in Fig. 4.

Fig. 4. 2D contour plot of haze thickness of a fisheye image over times.
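The following sketch shows how a dark-channel-based haze thickness can be estimated; it is a minimal Python rendition of the dark channel prior idea of [10], where the patch size and the weight omega are conventional assumptions and the atmospheric light A is assumed to be given.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over the color channels,
    followed by a minimum filter over a local patch."""
    dc = img.min(axis=2)                    # min over the RGB channels
    return minimum_filter(dc, size=patch)   # min over a patch neighbourhood

def haze_thickness(img, A, omega=0.95, patch=15):
    """Haze thickness 1 - t(x), with t(x) = 1 - omega * dark_channel(I/A)."""
    t = 1.0 - omega * dark_channel(img / A, patch)
    return 1.0 - t
```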

Histogram Feature (F3). The histogram is used in this study as a graphical illustration of the intensity distribution in a grayscale image, whose intensity lies in the range [0, 255], at a selected step size. Pixels with the same value are assigned to a group, so darker intensities are distributed closer to the left of the graph than brighter ones, and vice versa, as demonstrated in Fig. 8.


Standard Deviation Feature (F4). The standard deviation (STD), in mathematical terms, is a descriptive statistical quantity used to measure the dispersion of a data set. The STD is defined as the square root of the variance, which in turn is the average of the squared differences from the mean. For a random variable vector A made up of N scalar observations (A1, A2, A3, ...) with mean μ, the general equation for the standard deviation is:

σ = √( (1/(N−1)) · Σ_{i=1}^{N} |A_i − μ|² )   (3)

Following Assumption 4, which concerns the standard deviation of the intensity in the iris area, the standard deviation in this paper is calculated by taking the average of the 12 intensity slices of each image and estimating the variation of the 12 slices compared with the average line. MATLAB has been used for all the calculations.
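For illustration, a Python sketch of the slice extraction (F0) and the derived F1 and F4 features follows; the interpolation mode of the rotation and the use of the sample standard deviation are assumptions (the paper's calculations were done in MATLAB).

```python
import numpy as np
from scipy.ndimage import rotate

def intensity_slices(gray, n_slices=12):
    """F0: central horizontal slices of a 250x250 eye image,
    taken after rotating the image in 15-degree steps."""
    mid = gray.shape[0] // 2
    slices = [rotate(gray, angle=15 * k, reshape=False, mode='nearest')[mid, :]
              for k in range(n_slices)]
    return np.stack(slices).astype(float)    # shape (12, 250)

def min_intensity_feature(slices):
    """F1: average of the per-slice minima (pupil background intensity)."""
    return slices.min(axis=1).mean()

def std_feature(slices):
    """F4: per-position deviation of the 12 slices from their average line."""
    return slices.std(axis=0, ddof=1)        # vector of 250 STD values
```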

4 Experiments and Results

4.1 Database Setup

In this research, 67 images were taken from various angles of five samples of live Crucian carp fish. The fisheye images of each fish sample were captured several times each hour, from the freshest to the stalest status, under normal light conditions. Table 1 presents the training and testing database for this study. The fresh fish group, named Class 1, consists of images taken from 0 h–5 h, while the spoiled fish group, named Class 2, includes images shot from 21 h–22 h. In order to avoid overfitting, one part of the training set is reserved as a validation set.

Table 1. Distribution of fisheye images on training and test dataset.

Classes                           | Training (40) | Validation (9) | Testing (18) | Sum (67)
Class 1 (0–5 h) (4 fish samples)  | 20            | 6              | 12           | 38
Class 2 (21–22 h) (1 fish sample) | 20            | 3              | 6            | 29

4.2 Performance Evaluation Criteria

In this research, three measures are introduced to evaluate the classification performance of the proposed algorithms: True Positive Rate (TPR), True Negative Rate (TNR) and Accuracy (Acc). These statistical measures are calculated as follows:

TPR = TP / (TP + FN);  TNR = TN / (TN + FP);  Acc = (TP + TN) / (TP + FN + TN + FP)   (4)


where TP denotes true positives (freshness detected for fresh fish); FN false negatives (no freshness detected for fresh fish); TN true negatives (no freshness detected for spoiled fish); and FP false positives (freshness detected for spoiled fish).

4.3 Training

The threshold-based classifiers have been trained on the secondary features F1, F2, F3, F4 to form the four corresponding models: TH_F1, TH_F2, TH_F3, TH_F4. On the other hand, neural network classifiers have been developed for the primary feature and the four secondary features, which leads to the formation of five corresponding models: NN_F0, NN_F1, NN_F2, NN_F3, NN_F4. Threshold-Based Approach. A general block diagram of the training procedure for the threshold-based models is illustrated in Fig. 5.

Fig. 5. General block diagram of training procedure for threshold-based models.

Minimum Intensity Feature (TH_F1). Figure 6a demonstrates a clear difference in the average minimum intensity values between Class 1 and Class 2, obtained from the 4 training samples. This confirms the proposed Assumption 1. Therefore, the best threshold values are searched for to provide maximum separation between Class 1 and Class 2, which leads to the highest Acc and equal TPR and TNR.

Fig. 6. Average of F1 values from Class 1 (0–5 h) and Class 2 (21–22 h) of 4 samples (a) and TH_F1_Threshold evolution (b).

The TH_F1_Threshold was varied in the range [10, 52] (the minimum and maximum values of F1 in the training dataset). The maximum classification rates of Acc = TPR = TNR = 100% are obtained for TH_F1_Threshold values in the range [36, 39], as plotted in Fig. 6b.
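A minimal sketch of this threshold sweep is given below, assuming (per Assumption 1) that fresh eyes have F1 values below the threshold; the direction of the classification rule and the integer step are assumptions.

```python
import numpy as np

def best_threshold(f1_fresh, f1_stale, lo=10, hi=52):
    """Sweep a scalar threshold on feature F1 and return the accuracy-
    maximising values (classify as fresh if F1 < threshold)."""
    best_acc, best_ts = 0.0, []
    n = len(f1_fresh) + len(f1_stale)
    for t in range(lo, hi + 1):
        tp = np.sum(np.asarray(f1_fresh) < t)    # fresh correctly detected
        tn = np.sum(np.asarray(f1_stale) >= t)   # stale correctly rejected
        acc = (tp + tn) / n
        if acc > best_acc:
            best_acc, best_ts = acc, [t]
        elif acc == best_acc:
            best_ts.append(t)
    return best_acc, best_ts
```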


Haziness Feature (TH_F2). The simple DCP algorithm has been applied to estimate haze thickness values in the range [0, 1]. The average of the mean haziness values of all eye images belonging to Class 1 and Class 2 was calculated, respectively. As described in Fig. 7a, a general increasing trend of haziness can be observed when the fish gets stale. However, due to the impact of glare and lighting on the haziness intensity, all haziness values larger than a border "a" are eliminated from the average calculation. The value of the border "a" may be varied in the range [0.5, 0.7]. The calculated average value of each image is then compared to a threshold "b", which may be varied in the range [0.2, 0.4], to decide whether the fish belongs to Class 1 or Class 2. The colormap in Fig. 7b shows that the maximum classification rate of Acc = 100% is obtained for TH_F2_Threshold_(a, b) = (0.54, 0.294).

Fig. 7. Average of F2 values from Class 1 (0–5 h) and Class 2 (21–22 h) of 4 samples (a) and finding the optimal threshold based on the colormap (b).

Histogram (TH_F3). Assumption 3 has been evidenced through the histograms of the fresh and stale fish samples, as demonstrated in Fig. 8a. The first main beam, located close to the origin for fresh fish, tends to shift to the right when the fish becomes stale. Therefore, our aim is to search for a threshold, called TH_F3_Threshold, which can best distinguish between Class 1 and Class 2 based on the first beam location. The TH_F3_Threshold was varied in the range [0, 90]. A TH_F3_Threshold of 54 was found to be the best value, reaching 84% Accuracy, 92.31% TPR and 91.30% TNR, as demonstrated in Fig. 8b.

Fig. 8. Histogram of fresh and stale sample (a) and TH_F3_Threshold estimation (b).


Standard Deviation (TH_F4). In order to observe the general trend of intensity variation as stated in Assumption 4, we calculated the mean and STD of all coefficients over the 12 slices. Figure 9a shows typical curves of the mean and STD calculated from images of Class 1 (blue graph) and Class 2 (orange graph). It is obvious that the STD values of Class 1 are higher than those of Class 2 at the two convex regions, which correspond to the iris areas of the eyes. The concave region in the middle of the graph, representing the pupil area, however, does not show a difference between the two classes. From this observation, the TH_F4 model has been trained by searching for two optimal threshold values which could distinguish Class 1 and Class 2 through the value of the STD (threshold b) and the number of STD values larger than b (threshold a). From the minimum and maximum variation of the STD values, b was set in the range [15, 40], while a varied between 75 and 175 (corresponding to 30% and 70% of the 250 pixels of the F4 values). From the colormap result in Fig. 9b, a pair of thresholds (a, b) located at the center of the yellow area has been selected among 360 pairs with 100% Accuracy. The resulting TH_F4_Threshold (a, b) of (100, 25) corresponds to 100% Accuracy, TPR and TNR on the training set.

Fig. 9. Mean and STD curves of sample 3 from 0 h–5 h (blue) and 21 h–22 h (orange) (a) and colormap of the STD accuracy for all (a, b) pairs (b).

Neural Network Approach. In this study, we build a two-layer feed-forward neural network and use stochastic gradient descent to train the NN model parameters. The input layer has a number of input units equal to the size of the feature vectors proposed above. In this research, after considering the size of the self-built database and the task of distinguishing 2 classes, we selected only 1 hidden unit. The transfer function is the sigmoid function, a nonlinear function with output values in [0, 1]. A network output value of "1" corresponds to a spoiled fish sample, while an output value of "0" means a fresh fish sample. The maximum number of epochs was set to quasi-infinity, and the MSE goal of training was set to quasi-zero. The learning rate was varied from 0.01 to 0.05 to avoid the case when the input variable value is too large. The input values were normalized for the various features, as listed in Table 2.


Table 2. The difference in input and normalized values of all neural network-based models.

                         | NN_F0    | NN_F1 | NN_F2 | NN_F3 | NN_F4
Input vector (dimension) | 12 × 250 | 12    | 1     | 90    | 250
Normalization            | 255      | 100   | 1     | 10000 | 100
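A minimal Python sketch of such a two-layer network with a single sigmoid hidden unit, trained by stochastic gradient descent on the squared error, is given below; the weight initialisation, error measure and stopping rule are assumptions (the paper trained the equivalent models in MATLAB).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_nn(X, y, lr=0.01, max_epochs=1_000_000, goal=1e-9, seed=0):
    """Two-layer feed-forward net, one sigmoid hidden unit, sigmoid output
    (0 = fresh, 1 = stale), trained by stochastic gradient descent."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(scale=0.1, size=X.shape[1]); b1 = 0.0   # input -> hidden
    w2 = rng.normal(scale=0.1); b2 = 0.0                    # hidden -> output
    for _ in range(max_epochs):
        sse = 0.0
        for x, t in zip(X, y):
            h = sigmoid(x @ w1 + b1)
            o = sigmoid(w2 * h + b2)
            err = o - t
            sse += err ** 2
            go = err * o * (1 - o)        # gradient at the output unit
            gh = go * w2 * h * (1 - h)    # gradient at the hidden unit
            w2 -= lr * go * h; b2 -= lr * go
            w1 -= lr * gh * x; b1 -= lr * gh
        if sse / len(X) < goal:           # quasi-zero MSE goal
            break
    return w1, b1, w2, b2
```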

4.4 Testing

The trained models from both the threshold-based approach and the neural network-based approach have been evaluated on the testing set. The results are shown in Tables 3 and 4.

Table 3. Confusion matrices of the threshold-based and NN models on the testing dataset (rows: predicted label; columns: actual label).

Threshold-based models:
Predicted \ Actual | TH_F1 (C1, C2) | TH_F2 (C1, C2) | TH_F3 (C1, C2) | TH_F4 (C1, C2)
C1 (Class 1)       | 12, 2          | 12, 0          | 11, 3          | 12, 0
C2 (Class 2)       | 0, 4           | 0, 6           | 1, 3           | 0, 6

Neural network models:
Predicted \ Actual | NN_F1 (C1, C2) | NN_F2 (C1, C2) | NN_F3 (C1, C2) | NN_F4 (C1, C2)
C1 (Class 1)       | 12, 0          | 12, 0          | 12, 0          | 12, 0
C2 (Class 2)       | 0, 6           | 0, 6           | 0, 6           | 0, 6

Table 4. Performance evaluation of all threshold-based and neural network models.

Performance (%) | NN_F0 | TH_F1 | NN_F1 | TH_F2 | NN_F2 | TH_F3 | NN_F3 | TH_F4 | NN_F4
TPR             | 100   | 100   | 100   | 100   | 100   | 91.7  | 100   | 100   | 100
TNR             | 100   | 66.7  | 100   | 100   | 100   | 50    | 100   | 100   | 100
ACC             | 100   | 88.9  | 100   | 100   | 100   | 77.8  | 100   | 100   | 100

5 Discussions and Conclusions

This study has proposed novel low-cost methods for fish freshness classification based on simple but reliable extraction of various image features derived from observed physiological characteristics of fish eyes, without any special imaging setup. A threshold-based approach and a neural network-based approach have been developed for accurate classification. The first contribution is the proposal of simple and distinctive features: 12 intensity slices (F0), minimum intensity (F1), haziness (F2), histogram (F3), and standard deviation (F4). The next contribution is the proposed training process to build the threshold-based models and neural network models, which leads to nine different freshness classifiers. Notably, these models not only overcome most of the environmental effects on the captured images but also utilize these effects as a useful feature for F4. Furthermore, the study has evaluated and compared the classification performance of all nine proposed models on the self-built dataset, which reveals an overview of and insight into these methods for possible future studies and development in the field. In detail, the nine proposed models (4 threshold-based and 5 neural network-based) were trained on the training set comprising 49 fisheye images of 4 Crucian carp fishes at two main groups of time points (0–5 h and 21–22 h after death) and tested on the testing set of 18 images from the fifth fish sample. The testing results first confirm our four proposed assumptions on the changes of the image features linked to the physiological characteristics of fish freshness. In particular, 8/9 models reach 100% and 1/9 reaches 84% accuracy on the training set; 7/9 models reach 100%, 1/9 reaches 89%, and the remaining one reaches 78% accuracy on the testing set. Second, the results show the effectiveness and stability of the five neural network-based models for fish freshness classification on the five proposed features - F0, F1, F2, F3, F4 - with a classification accuracy of 100% on both the training and testing sets. Even without any image processing effort to derive a finer secondary feature from the raw primary feature F0, the NN_F0 model still achieves 100% accuracy on both the training and testing datasets. On the other hand, classification could be simply implemented with the threshold-based models on the four secondary features, with accuracies of 100% for F1, F2, F4 and 84% for F3 on training; and 100% for F2, F4, 89% for F1, and 78% for F3 on testing. All these simple and fast fisheye image processing-based models could potentially be applied to build user-friendly mobile apps for fish freshness determination through imaging. Although all the models reach high classification accuracy on the self-built dataset, there are still misclassification results, mainly caused by the limited training data. Therefore, we have been expanding our database, taking into account different types of fish, larger numbers of samples, various time points after death for different freshness-level detection, etc. This would offer potential applicability in the field of fish freshness determination.

References

1. Chamberlain, A.I., Titili, G.: Seafood spoilage and sickness. University of the South Pacific, Suva, Fiji Islands (2001)
2. Iswari, N.M.S., et al.: Fish freshness classification method based on fish image using k-nearest neighbor. In: 2017 4th International Conference on New Media Studies (CONMEDIA), pp. 87–91. IEEE (2017). https://doi.org/10.1109/conmedia.2017.8266036
3. Jarmin, R., Khuan, L.Y., Hashim, H., Rahman, N.H.A.: A comparison on fish freshness determination method. In: 2012 International Conference on System Engineering and Technology (ICSET) (2012). https://doi.org/10.1109/icsengt.2012.6339329
4. Dutta, M.K., et al.: Image processing based method to assess fish quality and freshness. J. Food Eng. 177, 50–58 (2016). https://doi.org/10.1016/j.jfoodeng.2015.12.018
5. Gu, J., et al.: A new detection method for fish freshness. In: 2014 Seventh International Symposium on Computational Intelligence and Design, pp. 555–558. IEEE (2014). https://doi.org/10.1109/iscid.2014.153


6. Issac, A., et al.: An efficient image processing based method for gills segmentation from a digital fish image. In: 2016 3rd International Conference on Signal Processing and Integrated Networks (SPIN), pp. 645–649. IEEE (2016). https://doi.org/10.1109/spin.2016.7566776
7. Navotas, I., Santos, C., Balderrama, E., Candido, F., Villacanas, A., Velasco, J.: Fish identification and freshness classification through image processing using artificial neural networks. J. Eng. Appl. Sci. 13, 4912–4922 (2018)
8. Basset, W.H. (ed.): Clay's Handbook of Environmental Health, 8th edn. (1933)
9. Biosearch Technologies. https://www.biosearchtech.com/support/education/stellaris-rnafish/applications/vision-rna-fish
10. He, K.: Single image haze removal using dark channel prior (2011)
11. Park, D., Park, H., Han, D.K., Ko, H.: Single image dehazing with image entropy and information fidelity. In: 2014 IEEE International Conference on Image Processing (ICIP) (2014). https://doi.org/10.1109/icip.2014.7025820

Machine Learning-Based Evolutionary Neural Network Approach Applied in Breast Cancer Tumor Classification

Hoang Duc Quy2, Cao Van Kien2, Ho Pham Huy Anh1, and Nguyen Ngoc Son2(✉)

1 FEEE, Ho Chi Minh City University of Technology, VNU-HCM, Ho Chi Minh City, Viet Nam
[email protected]
2 Faculty of Electronics Technology, Industrial University of Ho Chi Minh City, Ho Chi Minh City, Viet Nam
[email protected], {caovankien,nguyenngocson}@iuh.edu.vn

Abstract. Recently, particle swarm optimization (PSO) has been successfully applied to many optimization problems, such as training artificial neural networks (ANNs). In this paper, we propose an Adaptive Particle Swarm Optimization (APSO) algorithm to optimize the parameters of a neural network model for the breast cancer classification task. The aim of the model is to classify two types of tumor: benign and malignant. The dataset used in this study is the popular Wisconsin Diagnosis Breast Cancer (WDBC) dataset, which was partitioned into 70% for the training phase and 30% for the testing phase. The model was trained with three different optimization algorithms: back-propagation (BP), classical PSO and APSO. The results show that the proposed APSO method significantly improved the performance of the model in test accuracy, convergence speed and local minimum avoidance in comparison with BP and classical PSO.

Keywords: Breast cancer · Particle swarm optimization · Neural network · Wisconsin diagnosis breast cancer · Adaptive particle swarm optimization

1 Introduction

Breast cancer (BC) is a disease in which malignant (cancer) cells form in the tissues of the breast. According to the World Cancer Research Fund International (WCRFI), BC is the most common cancer in women worldwide and the second most common cancer overall. In the United States (US) in 2019, there were 268,600 new cases of invasive breast cancer diagnosed in women and nearly 2,670 cases in men; nearly 41,760 women and 500 men were expected to die of breast cancer [1]. In Viet Nam (VN), the estimated number of new cases in 2018 was 15,529 (9.2% of all cancer cases) and the estimated number of deaths was 6,103 (5.3% of all cancer deaths) [2]. Currently, there is no treatment to prevent breast cancer. However, with early detection, when BC has just developed, the five-year survival rate is 99% [1]. Thus, it is a specific research area attracting vast attention.


Most breast cancer cases are detected by patients when a painless lump is found in the breast or underarm lymph nodes; less specific symptoms are breast pain or heaviness, or persistent changes such as swelling or redness of the skin [1]. All of these changes should be evaluated by a physician. However, when the tumor is small and thus easily cured, there are no signs or symptoms. Thus, early diagnosis of BC is a very important factor in cancer treatment and allows patients to gain a higher survival rate. There are several methods for early detection of BC, such as breast self-examination, clinical breast examination, mammography, ultrasound and magnetic resonance imaging (MRI). Among these, mammography is the current gold-standard breast screening technique. A breast tumor is usually detected through screening, before the symptoms develop, or after the patient notices a lump. Ultrasound is a technique for examining the breast tissue using high-frequency ultrasonic waves that pass through the breast. An ultrasound examination of the breast is typically performed when a mass is found during a physical exam or mammography, the latter being a method that uses low-energy X-rays to examine the human breast for diagnosis and screening. Though mammography cannot prove that an abnormal area is cancer, if it raises a significant suspicion of cancer, tissue will be removed for a biopsy. However, mammography is less effective for women under 40 years old and for dense breasts. Recently, developments in digital mammography technology, such as contrast-enhanced digital mammography (CEDM), have provided more accurate diagnostics than film mammography and ultrasound in dense breasts [3, 4]. Breast MRI exploits radio waves and powerful magnets to present a detailed internal picture of the breast. Compared to ultrasound and mammography, MRI is less specific but more sensitive for detecting small tumors in subjects with high breast cancer risk [5]. As mentioned above, in case there is any symptom of cancer, microscopic tissue is analyzed based on a needle biopsy known as a fine needle aspirate (FNA). There are several methods for the breast biopsy. If the mass can be palpated, the simplest way is to pierce the mass with a thin needle and extract part of the content for analysis using a syringe. If the mass cannot be palpated, the biopsy is performed using a needle with ultrasound guidance. The details of the process were presented in [6]. Despite the development of breast cancer detection techniques, manual classification of breast cancer still has issues, such as imaging quality, human error and misdiagnosis by radiologists. Thus, computer-aided detection systems (CADs) have been developed and applied to breast cancer to overcome these restrictions, and they have been studied for different medical imaging modalities, including mammography, ultrasound and MRI [7]. The purpose of a CAD system is to improve the quality and productivity of clinicians in their interpretation of radiological images. Typically, the CAD system consists of four stages, as shown in Fig. 1.


[Fig. 1 pipeline: Input images → Image Preprocessing → Segmentation → Feature extraction & feature selection → Classification]

Fig. 1. Computer-aided detection system

Classification is the last stage in the CAD system; it identifies people with breast cancer, distinguishing benign from malignant tumors. Lately, machine learning (ML) techniques such as artificial neural networks have played a significant role in the diagnosis of BC by applying classification techniques in the classification stage of the CAD system. Accurate classification can assist clinicians in prescribing the most appropriate treatment. Within the field of ML, there are two main types of tasks: supervised and unsupervised. The main difference between them is that supervised learning has prior knowledge of what the output values for the input data should be; in other words, supervised learning requires all the data to be labeled. Much recent research reveals that almost all ML algorithms employed in BC classification are supervised. Among these, A. F. M. Agarap [8] combined the gated recurrent unit (GRU) variant of the recurrent neural network (RNN) with the Support Vector Machine (SVM) and obtained 93.75% accuracy on the WDBC dataset [15]; Reem et al. [9] applied two methods combined with feature selection to the problem, which resulted in the following accuracies: SVM-ANN reached 97.1388% and BP-ANN reached 96.7076%. Huang et al. [10] obtained 98.83% classification accuracy using an ANN with the Levenberg–Marquardt algorithm. Regarding ANN training, back-propagation (BP), a gradient-based method, is the most widely used algorithm. However, due to the limitations of BP, such as slow convergence and the tendency to get trapped in local minima, many evolutionary computational methods have been used to overcome these disadvantages, for example the genetic algorithm (GA) in [12, 13], differential evolution (DE) in [14, 15], and PSO in [10, 16]. Among these, PSO is an evolutionary swarm-based computational method inspired by swarm behavior such as bird flocking and fish schooling [11]. PSO is simple in its concept and coding implementation compared to other evolutionary computational algorithms. Moreover, in comparison with BP for neural network training, prior studies have proved that PSO has advantages in computational effort and faster convergence [17, 18]. However, PSO in its original form suffers from premature convergence due to the inability of particles to escape from local minima, causing them to get trapped in those regions. Many studies have shown that parameters such as the inertia weight and the cognitive and social acceleration coefficients have a significant impact on the performance of the algorithm, but manually tuning them is time-consuming. Hence, many parameter tuning methods have been proposed [19–23]. In this paper, an adaptive PSO (APSO) algorithm is applied in order to optimize the parameters of a neural network. For the BC classification task, the popular Wisconsin Diagnosis Breast Cancer (WDBC) dataset [24] was used and partitioned into two subsets: a training set (70%) and a test set (30%).


2 Proposed Method

2.1 Dataset

The WDBC dataset was collected by William H. Wolberg and co-authors. According to [24], the dataset has 569 instances, and the features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass; they describe characteristics of the cell nuclei present in the image. A typical image contains between 10 and 40 nuclei. Ten features are computed for each nucleus: area, radius, perimeter, symmetry, number and size of concavities, fractal dimension (of the boundary), compactness, smoothness (local variation of radial segments) and texture (variance of grey levels inside the boundary). Each feature has three information components: mean, standard error, and "worst" or largest (mean of the three largest values). Therefore, there are 30 features for each instance.

2.2 Neural Network Structure

A neural network is a branch of machine learning. Its structure is constructed by linking multiple neurons together, in the sense that the output of one neuron forms an input to another. In this paper, a typical three-layer neural network model is used as the BC classifier. In particular, the model contains an input layer with 30 neurons, a single hidden layer with 20 neurons and an output layer with 2 neurons. The structure of the model is illustrated in Fig. 2, where X denotes the vector of input variables; H is the hidden layer; Y is the output layer; b1 and b2 represent the bias vectors of the hidden layer and output layer, respectively; and W1 and W2 represent the weight matrices of the input layer and hidden layer, respectively.

Fig. 2. Neural network architecture


Then the prediction output is as follows:

ŷ_i = f2( W2 · f1( W1 · X + b1 ) + b2 ),  i = 1, 2   (1)

where f1 represents the rectified linear unit (ReLU) activation function at the hidden layer and f2 represents the softmax function (Eq. 2), which produces a probability distribution over the classes in the output layer. The aim of the model is to minimize the fitness function in Eq. 3 until it reaches its global minimum, or a point very close to it, where y and ŷ denote the actual class and the predicted class, respectively.

P(ŷ_i) = e^{y_i} / Σ_{j=0}^{n} e^{y_j},  i = 1, 2; n is the number of classes   (2)

L(ŷ|y) = − Σ_{i=0}^{N} ( y_i · log(ŷ_i) + (1 − y_i) · log(1 − ŷ_i) )   (3)
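A small NumPy sketch of the forward pass in Eq. (1) and the cross-entropy fitness in Eq. (3) is shown below; the numerical-stability shift in the softmax and the clipping in the loss are implementation assumptions, not part of the paper's formulation.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())        # shift for numerical stability
    return e / e.sum()

def forward(x, W1, b1, W2, b2):
    """Forward pass of the 30-20-2 network: ReLU hidden layer, softmax output."""
    h = relu(W1 @ x + b1)          # hidden layer, 20 units
    return softmax(W2 @ h + b2)    # class probabilities, 2 units

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy fitness of Eq. (3), summed over the output terms."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.sum(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
```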

2.3 Training Algorithm

Classical Particle Swarm Optimization. As mentioned above, PSO is a bio-inspired algorithm based on the behavior of a swarm of particles moving through a multi-dimensional search space. Each particle memorizes the optimal position of the swarm and of itself, as well as its velocity. A flowchart of the PSO algorithm is illustrated in Fig. 3.

[Flowchart: start → swarm initialization → particle fitness evaluation → calculate the particle best and swarm best positions → update particle positions and velocities → repeat until the ending condition is triggered → end]

Fig. 3. Flowchart of particle swarm optimization algorithm

Mathematically, the PSO algorithm can be described as follows. Assume that the swarm size is N, each particle's position in the D-dimensional space is a vector X_i = (x_{i1}, x_{i2}, ..., x_{iD}), the movement of each particle in the search space is determined by its velocity vector V_i = (v_{i1}, v_{i2}, ..., v_{iD}), and the individual's optimal position is denoted as P_i = (p_{i1}, p_{i2}, ..., p_{iD}). The index of the best particle in the population is denoted as g, and P_g is called the global best position (i.e., the best position found so far by the population). Then the particle positions and velocities are updated according to Eqs. (4) and (5):

v^d_{i,t+1} = w · v^d_{i,t} + c1 · r1 · (p^d_{i,t} − x^d_{i,t}) + c2 · r2 · (p^d_{g,t} − x^d_{i,t})   (4)

x^d_{i,t+1} = x^d_{i,t} + v^d_{i,t+1}   (5)

where w is the inertia weight; r1 and r2 are two uniform random numbers in the range [0, 1]; and c1 and c2 are both positive constants, called the acceleration coefficients.

Adaptive Particle Swarm Optimization. As mentioned above, parameters such as the inertia weight w, c1 and c2 have a huge impact on the performance of PSO. The typical PSO algorithm sets these parameters as constants, which leads to some disadvantages. Consider two situations for a fixed inertia weight. On the one hand, a large inertia weight gives the particle a strong global exploration ability: it tends to explore new areas, which makes it slow to converge to the global optimal solution. On the other hand, a small inertia weight gives the particle a strong local exploitation ability, which confines the particle's search to a local range close to its present position and thus increases the chance of refining a good solution; however, it leads to low convergence speed if there is no acceptable solution within the search area. Thus, many inertia weight adaptation mechanisms have been proposed [19–23]. In this section, a time-varying inertia weight strategy adopted from previous research [21] is applied in order to decrease the value of the inertia weight over time. The strategy is represented in Eq. 6:

w(t) = w_end + (w_start − w_end) · exp(−c·t / T)   (6)

where w_start is the initial value of the inertia weight, w_end is its final value, t is the current iteration number, T is the maximum number of iterations, and c is a positive constant chosen as 3, 5, 8 or 10. The higher the value of c, the faster w converges. The learning factors c1 and c2 are typically used to balance the global and local search abilities of PSO. In many cases, c1 and c2 have been set to a constant value of 2. However, a linearly changing version has proved to be successful [22, 26]. On the one hand, a large social component c2 and a small cognitive component c1 at the beginning allow particles to move toward the global best. On the other hand, a large c1 and small c2 allow particles to search around their own locations instead of moving toward the global best. Hence, in this paper, the time-varying acceleration coefficients method proposed by Ratnaweera et al. [23] is applied. The method can be represented as follows:

c1(t) = (c1_end − c1_start) · (t / T) + c1_start   (7)

c2(t) = (c2_end − c2_start) · (t / T) + c2_start   (8)


where c_start and c_end are the initial and final values of the acceleration coefficients, respectively, t is the current iteration number and T is the maximum number of iterations. Note that the inertia weight and acceleration coefficients are influenced not only by time but also by the history of the swarm's best value. The pseudo-code of the APSO algorithm is given in Fig. 4.

Applied PSO-NN in Breast Cancer Classification. Our proposed method for the BC classification task is presented in Fig. 5. In the data preprocessing stage, a min-max scaler was applied to rescale the range of the input data to a smaller range, for example [0, 1]. Empirical evidence has shown that min-max normalization increases the accuracy of the neural network classifier model [25].

Fig. 4. APSO algorithm
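The pseudo-code of Fig. 4 is not reproduced here; as an illustration, a Python sketch of the time-varying schedules of Eqs. (6)–(8) follows, with the parameter ranges taken from Table 1 (w from 0.7 to 0.4, c1 from 2.5 to 0.5, c2 from 0.5 to 2.5, c = 10).

```python
import math

def apso_schedules(t, T, w_start=0.7, w_end=0.4, c=10,
                   c1_start=2.5, c1_end=0.5, c2_start=0.5, c2_end=2.5):
    """Time-varying PSO parameters with the ranges listed in Table 1."""
    w = w_end + (w_start - w_end) * math.exp(-c * t / T)   # Eq. (6)
    c1 = (c1_end - c1_start) * (t / T) + c1_start          # Eq. (7)
    c2 = (c2_end - c2_start) * (t / T) + c2_start          # Eq. (8)
    return w, c1, c2
```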

[Fig. 5 pipeline: Breast cancer dataset → Data preprocessing → Neural network model, optimized by the APSO algorithm → Classification result]

Fig. 5. Flowchart of proposed method


In order to use the PSO algorithm for training the neural network, a vector encoding strategy is applied. Each particle in the swarm represents all the weights and biases of the neural network model. Supposing a model with structure 2-3-2, the strategy can be written as:

Particle(i) = [w11, ..., w23, b11, ..., b13, w41, ..., w62, b21, b22],  i = 1, ..., N   (9)

Particle Matrix = [particle(1), particle(2), ..., particle(N)]   (10)

where N is the number of particles in the swarm. However, when calculating the output of the neural network, this strategy requires decoding each particle back into weight and bias matrices.
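A minimal decoding sketch for the paper's 30-20-2 network is given below; the flat layout (W1, b1, W2, b2 in that order) is an assumption consistent with Eq. (9).

```python
import numpy as np

def decode_particle(particle, layout=(30, 20, 2)):
    """Decode a flat particle vector into the weight and bias matrices
    of a network with the given layout (30-20-2 as in the paper)."""
    n_in, n_hid, n_out = layout
    i = 0
    W1 = particle[i:i + n_in * n_hid].reshape(n_hid, n_in);   i += n_in * n_hid
    b1 = particle[i:i + n_hid];                               i += n_hid
    W2 = particle[i:i + n_hid * n_out].reshape(n_out, n_hid); i += n_hid * n_out
    b2 = particle[i:i + n_out]
    return W1, b1, W2, b2

# total particle length for a 30-20-2 network:
# 30*20 + 20 + 20*2 + 2 = 662 parameters per particle
```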

3 Result and Discussion

In this section, the performance of our proposed method is evaluated by four metrics: convergence speed, accuracy, sensitivity and specificity. The classification accuracy is measured using the following equation:

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (11)

where TP, TN, FP and FN denote true positives (input samples correctly classified as BC positive), true negatives (input samples correctly classified as BC negative), false positives (input samples incorrectly classified as BC positive) and false negatives (input samples incorrectly classified as BC negative), respectively. Among these metrics, sensitivity and specificity are two widely used terms in medical diagnosis. They are defined as:

Sensitivity = TP / (TP + FN)   (12)

Specificity = TN / (TN + FP)   (13)

Sensitivity, also known as the true positive rate, is defined as the proportion of actual positives that are correctly identified. Specificity, also known as the true negative rate, is the proportion of actual negatives that are correctly identified. Table 1 shows the hyper-parameters used for the algorithms. Table 2 shows the confusion matrix of the proposed method's performance. Table 3 shows the experimental results achieved for the breast cancer classification task using the WDBC dataset. Figure 6 illustrates the convergence behavior of the three algorithms, including our proposed method, over 1000 iterations.


As mentioned above, the proposed APSO algorithm has advantages in both global exploration and local exploitation. Therefore, it can improve the convergence speed and avoid local minima during the training phase of the neural network model compared to other models that use the BP algorithm or the standard PSO algorithm. Indeed, the graph in Fig. 6 shows that the APSO algorithm is superior to the BP algorithm in terms of convergence speed and outperforms the PSO algorithm in terms of finding the global optimum solution. In particular, APSO takes fewer than 100 iterations to reduce the loss to 0.1 and is the only model that achieved the loss goal of 0.01, at around 500 iterations.

Fig. 6. Fitness curve of three algorithms

Table 3 summarizes the empirical results of the four metrics used to evaluate the performance of the three models. According to Table 3, our proposed method achieves 98.24% overall accuracy, while the BP-NN and PSO-NN models achieve 96.49% and 97.66%, respectively. Moreover, the APSO-NN model reached a perfect 100% in the sensitivity analysis, meaning the model correctly identifies all people who are positive for breast cancer, with no mistakes. Meanwhile, the sensitivities of the BP-NN and PSO-NN models are 98.14% and 99.08%, respectively. However, the results also point out that the specificity is only 94.54% for the proposed model. According to Table 2, among the 55 women who are negative for breast cancer, the model misclassifies three of them as positive. The low accuracy in the specificity analysis exists for the other algorithms as well. This limitation was anticipated because of the imbalanced data in the dataset and will be addressed in future work.


Table 1. Hyper-parameters used for three algorithms

Hyper-parameters | BP   | PSO  | APSO
Iterations       | 1000 | 1000 | 1000
Learning rate    | 0.1  | N/A  | N/A
W                | N/A  | 0.7  | [0.7–0.4]
C1               | N/A  | 2    | [2.5–0.5]
C2               | N/A  | 2    | [0.5–2.5]
Swarm size       | N/A  | 30   | 30
Error goal       | 0.01 | 0.01 | 0.01
c                | N/A  | N/A  | 10

Table 2. Confusion matrix of APSO algorithm performance

                       | Actually positive (1) | Actually negative (0)
Predicted positive (1) | 116                   | 3
Predicted negative (0) | 0                     | 52

Table 3. The experimental results achieved for WDBC dataset

Type of algorithm | Sensitivity (%) | Specificity (%) | Classification accuracy (%) | Computation time (s)
BP-NN             | 98.14           | 93.65           | 96.49                       | 2.3
PSO-NN            | 99.08           | 95.16           | 97.66                       | 100.5
APSO-NN           | 100.00          | 94.54           | 98.24                       | 54.2

4 Conclusion

An adaptive particle swarm optimization (APSO)-based neural network approach has been proposed for breast cancer classification. Empirical results proved that this method exhibits high performance on the classification task (i.e., determining whether a tumor is benign or malignant). In the future, we plan to combine the APSO and BP algorithms into a hybrid algorithm, which is expected to overcome the slow computation time. For practical applications, a mammography database will also be considered in future work, which will require more advanced computational models such as convolutional neural networks (CNNs).

Acknowledgement. This paper is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant MDT 107.01-2018.318.


References

1. American Cancer Society: Breast Cancer Facts & Figures 2019–2020. American Cancer Society, Inc., Atlanta (2019)
2. International Agency for Research on Cancer (IARC): Global Cancer Observatory - Vietnam Population fact sheets. http://gco.iarc.fr/today/data/factsheets/populations/704-viet-nam-fact-sheets.pdf. Accessed 26 Oct 2018
3. Dromain, C., et al.: Dual-energy contrast-enhanced digital mammography: initial clinical results of a multireader, multicase study. Breast Cancer Res. 14(3), R94 (2012)
4. Mori, M., et al.: Diagnostic accuracy of contrast-enhanced spectral mammography in comparison to conventional full-field digital mammography in a population of women with dense breasts. Breast Cancer 24(1), 104–110 (2016)
5. Warner, E., Messersmith, H., Causer, P., et al.: Systematic review: using magnetic resonance imaging to screen women at high risk for breast cancer. Ann. Int. Med. 148(9), 671–679 (2008)
6. Mangasarian, O.L., Street, W.N., Wolberg, W.H.: Breast cancer diagnosis and prognosis via linear programming. Oper. Res. 43(4), 570–577 (1995)
7. Dromain, C., et al.: Computed-aided diagnosis (CAD) in the detection of breast cancer. Eur. J. Radiol. 82(3), 417–423 (2013)
8. Agarap, A.F.M.: On breast cancer detection: an application of machine learning algorithms on the Wisconsin diagnostic dataset. In: Proceedings of the 2nd International Conference on Machine Learning and Soft Computing, pp. 5–9 (2018)
9. Alyami, R., et al.: Investigating the effect of correlation based feature selection on breast cancer diagnosis using artificial neural network and support vector machines. In: 2017 International Conference on Informatics, Health & Technology (ICIHT), pp. 1–7 (2017)
10. Huang, M.-L., Hung, Y.-H., Chen, W.-Y.: Neural network classifier with entropy based feature selection on breast cancer diagnosis. J. Med. Syst. 34(5), 865–873 (2010)
11. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of ICNN'95 - International Conference on Neural Networks, vol. 4. IEEE (1995)
12. Aličković, E., Subasi, A.: Breast cancer diagnosis using GA feature selection and Rotation Forest. Neural Comput. Appl. 28(4), 753–763 (2015)
13. Leung, F.H.-F., et al.: Tuning of the structure and parameters of a neural network using an improved genetic algorithm. IEEE Trans. Neural Netw. 14(1), 79–88 (2003)
14. Thein, H.T.T., Tun, K.M.: An approach for breast cancer diagnosis classification using neural network. Adv. Comput. 6(1), 1 (2015)
15. Leema, N., Nehemiah, H.K., Kannan, A.: Neural network classifier optimization using differential evolution with global information and back propagation algorithm for clinical datasets. Appl. Soft Comput. 49, 834–844 (2016)
16. Tewolde, G.S., Hanna, D.M.: Particle swarm optimization for classification of breast cancer data using single and multisurface methods of data separation. In: 2007 IEEE International Conference on Electro/Information Technology. IEEE (2007)
17. Mohaghegi, S., et al.: A comparison of PSO and backpropagation for training RBF neural networks for identification of a power system with STATCOM. In: Proceedings 2005 IEEE Swarm Intelligence Symposium, SIS 2005. IEEE (2005)
18. Gudise, V.G., Venayagamoorthy, G.K.: Comparison of particle swarm optimization and backpropagation as training algorithms for neural networks. In: Proceedings of the 2003 IEEE Swarm Intelligence Symposium, SIS 2003 (Cat. No. 03EX706), Indianapolis, IN, USA, pp. 110–117 (2003)


19. Nickabadi, A., Ebadzadeh, M.M., Safabakhsh, R.: A novel particle swarm optimization algorithm with adaptive inertia weight. Appl. Soft Comput. 11(4), 3658–3670 (2011)
20. Tang, Y., Wang, Z., Fang, J.-a.: Feedback learning particle swarm optimization. Appl. Soft Comput. 11(8), 4713–4725 (2011)
21. Lu, J., Hongping, H., Bai, Y.: Generalized radial basis function neural network based on an improved dynamic particle swarm optimization and AdaBoost algorithm. Neurocomputing 152, 305–315 (2015)
22. Mohammadi-Ivatloo, B., Moradi-Dalvand, M., Rabiee, A.: Combined heat and power economic dispatch problem solution using particle swarm optimization with time varying acceleration coefficients. Electr. Power Syst. Res. 95, 9–18 (2013)
23. Ratnaweera, A., Halgamuge, S.K., Watson, H.C.: Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. Evol. Comput. 8(3), 240–255 (2004)
24. Wolberg, W.H., Street, W.N., Mangasarian, O.L.: Breast cancer Wisconsin (diagnostic) data set. UCI Machine Learning Repository (1992). http://archive.ics.uci.edu/ml/
25. Mohd Nawi, N., Atomia, W.H., Rehman, M.Z.: The effect of data pre-processing on optimized training of artificial neural networks (2013)
26. Isiet, M., Gadala, M.: Self-adapting control parameters in particle swarm optimization. Appl. Soft Comput. 83, 105653 (2019)

Malware Classification by Using Deep Learning Framework

Tran Kim Toai1,3(✉), Roman Senkerik2(✉), Vo Thi Xuan Hanh3(✉), and Ivan Zelinka1(✉)

1 VSB-Technical University of Ostrava, 17. listopadu 15/2172, 708 33 Ostrava-Poruba, Czech Republic
{tran.kim.toai.st,ivan.zelinka}@vsb.cz
2 Faculty of Applied Informatics, Tomas Bata University in Zlin, T. G. Masaryka 5555, 760 01 Zlin, Czech Republic
[email protected]
3 Faculty of Economics, HCMC University of Technology and Education, No. 1, Vo Van Ngan Street, Linh Chieu Ward, Ho Chi Minh, Vietnam
{toaitk,hanhvtx}@hcmute.edu.vn

Abstract. In this paper, we propose an original deep learning framework for malware classification based on malware behavior data. Currently, machine learning techniques are becoming popular for classifying malware. However, most of the existing machine learning methods for malware classification use shallow learning algorithms, such as the Support Vector Machine, decision trees, Random Forest, and Naive Bayes. Recently, the deep learning approach has shown superior performance compared to traditional machine learning algorithms, especially in tasks such as image classification. In this paper, we present an approach in which malware binaries are converted to grayscale images. Specifically, data in raw form are converted into a 2D decimal-valued matrix that represents an image. We propose an original DNN architecture with a deep denoising autoencoder for feature compression, since the autoencoder is much more advantageous due to its ability to model complex nonlinear functions, compared to principal component analysis (PCA), which is restricted to a linear map. The compressed malware features are then classified with a deep neural network. Preliminary test results are quite promising, with 96% classification accuracy on a malware database of 6000 samples with six different families of malware, compared to the SVM and Random Forest algorithms.

Keywords: Deep learning · Machine learning · Malware detection · Classification · SVM · Random forest

1 Introduction

Malware continues to facilitate crime, intelligence gathering, and other unwanted activities on our computer networks [1]. Attackers use malware as the primary tool for their campaigns. A malicious computer program, or malware, is an urgent problem for both enterprise and personal computers. Malware proliferates because of rapidly growing internet usage [2]. Together with the increase in the number of connected users, the number of detected computer viruses and malware samples is rising quite seriously [3]. In such a situation, the detection of new malware, possibly just a variant or minor mutation of existing malware, represents a serious problem for any anti-virus/anti-malware tool [4]. Recently, several techniques have been proposed for the clustering and classification of malware [5–9]. These include both static and dynamic analyses, which are widely used for malware detection. Static analysis uses a signature-based approach, whereas dynamic analysis uses a behavior-based approach to malware detection [3]. Static detection is implemented by dissecting the malware code and analyzing how it works. Dynamic detection analyzes the behavior of the malicious code by executing it in a safe virtual environment or sandbox [7–9]. Like static analysis, dynamic analysis also has many limitations, as it cannot explore all the possible execution paths of an executable file, and the malware may try to detect the virtual environment during the execution process. Thus, the false rate is high, and the true rate is low [9]. Over the past decade, researchers and cyber-security vendors have started exploring machine learning algorithms such as Support Vector Machines (SVM), Random Forests (RF), Naive Bayes (NB), Neural Networks (NN), and deep learning approaches to address the problem of malware classification and malicious software detection [9–11]. Recently, deep learning has been widely used in the recognition of handwritten numerals, in speech recognition, and especially in image recognition [12, 13]. A new technique for recognizing malware as images was introduced in [14], where a malware executable was represented as a binary string of zeros and ones. The vector was then reshaped into a matrix, so that the malware file could be viewed as a grayscale image. This was based on the observation that, for many malware families, the images belonging to the same family appear very similar in layout and texture. In [14], a group of 37,374 samples belonging to 22 families was applied to a deep neural network for image classification, namely the ResNet-50 architecture, a Convolutional Neural Network (CNN). Here, we propose an original deep neural network (DNN) framework consisting of standard deep denoising autoencoders and a multi-layer perceptron (MLP) subnet as a classifier. In the middle of the network, the layer size is smaller compared to the others; the layer sizes are 10000, 3000, 500, 100, 20, 100, 500, 3000, and 10000. A denoising technique is also applied here to generalize the data, preventing network overfitting. In comparison to [14], a completely different approach to characterizing and analyzing malicious software is presented here: in [14], executables are converted into an image representation of 32 rows and 32 columns, and a deep CNN (ResNet-50) was used on such a dataset. The organization of the paper is as follows: first, the proposed framework is explained in detail, followed by the presentation of the dataset, the results and brief conclusions.


2 Proposed Framework
Malware can be divided into categories that are not mutually exclusive, depending on its purpose. For each malware sample, behavioral data is collected and pre-processed to obtain a feature vector of size 10,000. This vector is then used as the input to the deep learning framework. However, vectors of size 10,000 are too large for efficient training. To overcome this problem, a deep denoising autoencoder is trained on all behavioral data. It reduces the data size by compressing a large amount of data into a small feature representation while preserving its structural information; in this particular case, vectors of size 10,000 are reduced to size 20. Such a compression rate requires dimensionality-reduction algorithms that still preserve the structure of the data. By performing this compression, we significantly reduce the complexity of training a DNN on the behavioral data. Several dimensionality-reduction algorithms exist, such as principal component analysis (PCA) and autoencoders. However, autoencoders have greater capability because they can include nonlinear encoders/decoders, whereas PCA is restricted to a linear map [8]. An autoencoder is an unsupervised neural network in which the numbers of neurons at the input and output layers are equal. A hidden layer of neurons is used between the input and output, and the number of neurons in this hidden layer is smaller than in the input and output layers. Autoencoders are typically trained using backpropagation with stochastic gradient descent. The whole workflow is depicted in Fig. 1. The dataset consists of behavioral data collected from the VirusTotal community using its API [15] (for details, see the next section). This data has to be converted into a fixed format that the neural network can process. To this end, the 1-gram (unigram) pre-processing technique is used for feature representation, producing a bit-string form such as '0111000111'. Next, a deep denoising autoencoder is used for feature selection and extraction. Denoising autoencoders prevent the network from simply copying its input by corrupting the data on purpose, randomly setting some of the input values to zero; the percentage of input nodes set to zero depends on the amount of data and the number of input nodes. The compressed malware features, now decimal values, are finally classified with the DNN.
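To make the feature-compression stage concrete, the following is a minimal PyTorch sketch of a single denoising-autoencoder layer with input corruption. The layer sizes and the noise ratio of 0.2 match the values stated in this paper, but the activation functions, loss, and training details are our assumptions, not the authors' code.

import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """One autoencoder layer: corrupt the input, encode, then reconstruct."""
    def __init__(self, in_dim, hidden_dim, noise_ratio=0.2):
        super().__init__()
        self.noise_ratio = noise_ratio
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Tanh())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        # Corrupt the input by randomly zeroing a fraction of its entries.
        mask = (torch.rand_like(x) > self.noise_ratio).float()
        return self.decoder(self.encoder(x * mask))

# Hypothetical usage: train one 10000 -> 3000 layer on bit-string vectors.
model = DenoisingAutoencoder(10000, 3000, noise_ratio=0.2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.MSELoss()
x = torch.rand(50, 10000).round()   # a dummy batch of 0/1 feature vectors
optimizer.zero_grad()
loss = loss_fn(model(x), x)         # reconstruct the clean (uncorrupted) input
loss.backward()
optimizer.step()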

Fig. 1. Training flow of malware classification: fetch behavior data from VirusTotal -> 1-gram preprocessing -> bit-string form -> deep denoising autoencoders for feature compression -> compressed malware features -> deep neural network for classification.


An important step in our approach is to convert binary files to images, as depicted in Fig. 2, so that we can extract image-based features. First, the binary file is read as a 1D array (vector) of 8-bit unsigned integers. This array is then reorganized into a 2D array, or matrix; the dimensions of the resulting matrix depend on the size of the binary input file.
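A minimal sketch of this conversion with NumPy and Pillow is shown below; the fixed row width of 256 is an assumption for illustration, since the paper only states that the matrix dimensions depend on the file size.

import numpy as np
from PIL import Image

def binary_to_grayscale(path, width=256):
    """Read a binary file as 8-bit unsigned integers and reshape it into
    a 2D matrix, which is then viewed as a grayscale image."""
    data = np.fromfile(path, dtype=np.uint8)
    height = len(data) // width                     # drop a trailing partial row
    matrix = data[: height * width].reshape(height, width)
    return Image.fromarray(matrix, mode="L")

# Hypothetical usage:
# binary_to_grayscale("sample.bin").save("sample.png")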

Fig. 2. Conversion of a malware binary to an image: malware binary (e.g. 0101010111010...) -> binary to 8-bit vector -> 8-bit vector to grayscale image.

For verification of our approach, we have used the F1 measure, which combines recall and precision as in (1)–(3):

\mathrm{Recall} = \frac{\text{number of correct positive predictions}}{\text{number of positive examples}}  (1)

\mathrm{Precision} = \frac{\text{number of correct positive predictions}}{\text{number of positive predictions}}  (2)

F_1 = \frac{2 \cdot \mathrm{Recall} \cdot \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}}  (3)
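For concreteness, (1)–(3) can be computed from raw prediction counts as in the short Python function below (a straightforward transcription, not code from the paper):

def f1_metrics(true_positives, false_positives, false_negatives):
    """Compute recall, precision, and F1 from prediction counts, per (1)-(3)."""
    recall = true_positives / (true_positives + false_negatives)
    precision = true_positives / (true_positives + false_positives)
    f1 = 2 * recall * precision / (recall + precision)
    return recall, precision, f1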

A Python environment was used to obtain all reported experimental data. Specifically, the PyTorch deep-learning framework was chosen because it offers sufficient flexibility and is easier to debug than TensorFlow.

3 Dataset
In this research, we used the VirusTotal API service, which requires a private key, to obtain summarized behavior data. The behavior data downloaded from VirusTotal is in JSON format [15]. For all experiments, the data was split 70% for training and 30% for performance testing. The performance of the proposed model was evaluated on 6,000 samples covering six malware families: the model was trained on 4,217 samples and validated on 1,783 samples, and the dataset contains 1,000 variants in each category. The following six major


malware categories were used: Cerber, Cryptowall, Petya, GandCrab, Wannacrypt, and Sality. The distribution of these six malware families across the training and testing datasets is given in Table 1.

Table 1. Experiment data set for six malware families

                              Cerber  Cryptowall  Petya  GandCrab  Wannacrypt  Sality  Total
Size of the training dataset  731     725         695    695       691         680     4217
Size of the testing dataset   269     275         305    305       309         320     1783

4 Results
In our experiments, we trained a deep denoising autoencoder consisting of nine layers: 10000, 3000, 500, 100, 20, 100, 500, 3000, 10000. At each step, only one layer is trained; its weights are then frozen, and the subsequent layer is trained. At the end of this training phase, we obtain a deep network capable of converting 10,000-dimensional input vectors into 20 floating-point values (a sketch of this layer-wise procedure is given after Table 2). Training takes approximately half a day on an NVIDIA GPU. By default, the network is trained for 1000 epochs per layer; the other parameters were a learning rate of 0.001, a batch size of 50, and a noise ratio of 0.2. Since there already exist many experiments on malware classification using image-based methods, as in [14], the motivation of our research is to improve accuracy by testing different scenarios and solutions. Therefore, we evaluate the accuracy of three classification algorithms: MLP, RF, and SVM. The three confusion matrices are shown in Figs. 3, 4 and 5; they compare the predicted classes with the actual classes. The diagonal values show the number of correct classifications by the trained classifier, while the off-diagonal entries represent misclassified predictions. Figures 3, 4 and 5 show the accuracy evaluation of the SVM, RF, and MLP algorithms for classifying the malware families. Figure 6(a)–(c) shows the classification reports of precision, F1-score, and recall for SVM, RF, and MLP, respectively. Table 2 shows a comprehensive numerical comparison of the performance of the classification methods within the proposed framework for malware detection. SVM achieves 93.77% accuracy for classifying malware (images) and shows the lowest performance among the compared methods. RF achieves an accuracy of 95.23%, and MLP obtains an accuracy of 95.96%.

Table 2. Comparison of the results

Classifier  Accuracy  Precision  Recall  F1-Score  Summary
MLP         0.9596    0.9588     0.9590  0.9588    1782
RF          0.9523    0.9520     0.9515  0.9517    1782
SVM         0.9377    0.9361     0.9361  0.9361    1782
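The greedy layer-wise pretraining described at the beginning of this section can be sketched roughly as follows, reusing the DenoisingAutoencoder class from Sect. 2. The loop structure and the data loader are assumptions; only the hyperparameters (1000 epochs per layer, learning rate 0.001, noise ratio 0.2) come from the paper.

import torch

sizes = [10000, 3000, 500, 100, 20]   # encoder half of the 9-layer stack
trained_layers = []

def encode(x):
    """Pass data through all already-trained (frozen) layers."""
    with torch.no_grad():
        for layer in trained_layers:
            x = layer.encoder(x)
    return x

for in_dim, hidden_dim in zip(sizes, sizes[1:]):
    dae = DenoisingAutoencoder(in_dim, hidden_dim, noise_ratio=0.2)
    opt = torch.optim.Adam(dae.parameters(), lr=0.001)
    loss_fn = torch.nn.MSELoss()
    for epoch in range(1000):
        for batch in loader:          # a DataLoader of 10000-dim vectors (assumed)
            x = encode(batch)         # input as seen through the frozen layers
            opt.zero_grad()
            loss = loss_fn(dae(x), x)
            loss.backward()
            opt.step()
    for p in dae.parameters():        # freeze this layer before training the next
        p.requires_grad_(False)
    trained_layers.append(dae)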


Fig. 3. Confusion matrix for SVM evaluation

Fig. 4. Confusion matrix for RF evaluation


Fig. 5. Confusion matrix for MLP evaluation

Fig. 6. Classification report for SVM (a), RF (b), and MLP (c) evaluation tests

Figure 7 provides a 2-dimensional visualization of the data, where each node represents one malware representation. The visualization is generated using the Uniform Manifold Approximation and Projection (UMAP) algorithm [16], which reduces the data dimensionality from 20 dimensions down to 2. The figure illustrates that variants of the same malware family are mostly clustered together in representation space, and the six families are reasonably well separated from each other. Dimensionality reduction avoids overfitting and redundancy; it also leads to better human interpretation and lower computational cost with simpler models.


Fig. 7. 2-dimensional visualization of the malware generated by Uniform Manifold Approximation and Projection (UMAP), which is a dimensionality reduction algorithm. Each color corresponds to one of six malware categories. The labels are used for coloring the nodes.

The goal of UMAP is to reduce dimensionality such that the closer two nodes are to each other in the original high-dimensional space, the closer they are in the 2-dimensional space. The figure shows that the deep denoising autoencoder captures invariant representations of malware. Some clustering errors are expected, since hundreds of variants of each malware family were created.
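A figure like Fig. 7 can be reproduced with the umap-learn package; a minimal sketch follows, where features (the 20-dimensional autoencoder codes) and labels (family indices) are assumed to be available as NumPy arrays.

import umap
import matplotlib.pyplot as plt

reducer = umap.UMAP(n_components=2)          # reduce 20 dimensions to 2
embedding = reducer.fit_transform(features)  # features: (n_samples, 20)

plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, cmap="tab10", s=5)
plt.title("UMAP projection of malware representations")
plt.show()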

5 Conclusion and Future Work
The proposed deep learning approach has shown strong performance. We propose an original DNN architecture with a deep denoising autoencoder for feature compression. In this paper we present an approach in which malware binaries are converted to grayscale images: raw data is converted into a 2D decimal-valued matrix that represents an image. A dataset with 6,000 malware samples from 6 different families was used. The test results are quite promising, with 96% classification accuracy obtained by means of the deep denoising autoencoder and MLP (as a subnet of the whole DNN). In future work, we plan to reduce the training time of our models by parallelizing the learning process. Furthermore, we will extract features over a broader range of datasets. We also plan to use an unsupervised training approach with a large amount of data.


References
1. Brundage, M., et al.: The malicious use of artificial intelligence: forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228 (2018)
2. Kim, C.H., Kabanga, E.K., Kang, S.-J.: Classifying malware using convolutional gated neural network. In: 20th International Conference on Advanced Communication Technology (ICACT) (2018)
3. Jeon, J., Park, J.H., Jeong, Y.-S.: Dynamic analysis for IoT malware detection with convolution neural network model. IEEE Access 8, 96899–96911 (2020)
4. Truong, C.T., Zelinka, I.: A survey on artificial intelligence in malware as next-generation threats. Mendel Soft Comput. J. 25(2), 27–34 (2019)
5. Chen, S., et al.: Automated poisoning attacks and defenses in malware detection systems: an adversarial machine learning approach. Comput. Secur. 73, 326–344 (2018)
6. Vinayakumar, R., Alazab, M., Soman, K.P., Poornachandran, P., Venkatraman, S.: Robust intelligent malware detection using deep learning. IEEE Access 7, 46717–46738 (2019)
7. Sornil, O., Liangboonprakong, C.: Malware classification using n-grams sequential pattern feature. Int. J. Inf. Process. Manag. 4, 59–67 (2013)
8. David, E., Netanyahu, N.S.: DeepSign: deep learning for automatic malware signature generation and classification. In: International Joint Conference on Neural Networks (IJCNN), pp. 1–8 (2015)
9. Islam, R., Tian, R., Batten, L.M., Versteeg, S.: Classification of malware based on integrated static and dynamic features. J. Netw. Comput. Appl. 36(2), 646–656 (2013)
10. Ranveer, S., Hiray, S.: Comparative analysis of feature extraction methods of malware detection. Int. J. Comput. Appl. 120, 1–7 (2015)
11. Gandotra, E., Bansal, D., Sofat, S.: Malware analysis and classification: a survey. J. Inf. Secur. 5, 56–64 (2014)
12. Gavrilut, D., Cimpoesu, M., Anton, D., Ciortuz, L.: Malware detection using machine learning. In: Proceedings of the International Multiconference on Computer Science and Information Technology, pp. 735–741 (2009)
13. Nguyen, M.H., Le Nguyen, D., Nguyen, X.M., Quan, T.T.: Auto-detection of sophisticated malware using lazy-binding control flow graph and deep learning. Comput. Secur. 76, 128–155 (2018)
14. Singh, A., Handa, A., Kumar, N., Shukla, S.K.: Malware classification using image representation. In: International Symposium on Cyber Security Cryptography and Machine Learning, pp. 75–92 (2019)
15. VirusTotal API documentation. https://developers.virustotal.com. Accessed 20 May 2020
16. McInnes, L., Healy, J., Melville, J.: UMAP: uniform manifold approximation and projection for dimension reduction. arXiv:1802.03426 (2018)

Attention Mechanism for Fashion Image Captioning

Bao T. Nguyen¹, Om Prakash², and Anh H. Vo³

¹ Faculty of Information Technology, Ho Chi Minh City University of Technology and Education, Ho Chi Minh City, Vietnam. [email protected]
² Department of Computer Science and Engineering, H.N.B Garhwal University, Srinagar, India. [email protected]
³ Artificial Intelligence Laboratory, Faculty of Information Technology, Ton Duc Thang University, Ho Chi Minh City, Vietnam. [email protected]

Abstract. Image captioning aims to make a computer learn to understand the visual content of a given image and produce one or more description sentences. Recently, with the help of the rapid development of deep learning, image captioning has become an active research area. The encoder-decoder architecture is commonly considered the baseline method for image captioning. This model focuses on recognizing objects and the relationships among these objects. However, with fashion images, the generated sentence should not only describe the items inside the input image, but also mention item attributes such as texture, fabric, shape, and style. This requirement of fashion image captioning cannot be met by image-captioning methods based on the traditional encoder-decoder architecture. Our study addresses this issue by proposing an image captioning model based on the attention mechanism for fashion images, which is able to cover both the items and the relationships among the detailed attributes of items. We introduce an efficient framework for fashion image captioning that incorporates spatial attention inside the traditional encoder-decoder architecture. Our model generates fashion image captions using the spatial attention mechanism, which dynamically modulates the sentence generation context in multi-layer feature maps. Experiments were conducted on Fashion-Gen, one of the state-of-the-art fashion image datasets, and achieved CIDEr/ROUGE-L/BLEU-4 scores of 0.913, 0.502, and 0.221, respectively. The experiments consistently show that our proposed model significantly improves the performance of the fashion image captioning task, and it even surpasses the baseline methods on the same fashion benchmark dataset.

Keywords: Fashion image captioning · Attention mechanism · Recurrent neural network


1 Introduction

In recent years, there has been a resurgence of deep learning in various domains such as computer vision, image processing, natural language processing, and data science. It has enabled many practical applications in sign language recognition [1,26], facial expression recognition [24,25], smile detection [27], and image captioning. Among these, image captioning aims at automatically generating a natural language description of an input image; it is representative of the combination of computer vision (image understanding) and natural language processing (language description). While image understanding identifies object locations, scene types, and object properties, the language description requires generating sentences that are well-formed both syntactically and semantically. Image captioning is important for many reasons. First, there is a large number of available images coming from various sources such as social networks, e-commerce websites, news articles, and advertisements, but most of these images have no description or only noisy descriptions, so viewers have to interpret them by themselves. Second, social network platforms such as Twitter, Facebook, or Instagram can directly generate descriptions from user images with the help of an image captioning model. The description of a picture can cover where we are (e.g., a mountain, a park), what we are doing there, and what we wear; this information becomes a useful source for further activities. The encoder-decoder architecture is the traditional approach to the image captioning task [3,18]. Although the encoder-decoder model is usually utilized to tackle image captioning with some encouraging results, it also has a limitation: it describes the image as a whole scene, instead of focusing on local aspects relevant to parts of the description. To address this issue, the encoder-decoder architecture can be equipped with an attention mechanism [28], which dynamically focuses on different regions of the input image when generating the output. Attention is one of the most interesting facets of the human visual system [23]. Instead of compressing the entire image into a static representation, an attention model allows salient features to dynamically come to the forefront as needed. This is especially important when there are many objects in an image. Using image representations (such as the output from the top layer of a CNN) that distill the information in an image down to the most salient objects is an effective way to capture information from different parts of the image. By using the attention mechanism, the captioning model is able to incorporate precise visual contexts, which empirically yields performance improvements. For fashion image captioning, we observe that if the captioning model is based only on the encoder-decoder architecture, it will fail to describe significant attributes of the fashion items inside the input image. In fact, a fashion image contains not only fashion items but also detailed attributes inside the scene. In this paper, we propose an attention mechanism incorporated inside the encoder-decoder architecture. Our proposed method captures the fashion items and their attributes based on spatial attention, whose effectiveness on natural images, compared with the original encoder-decoder


architecture, has been demonstrated in several previous works. To demonstrate that spatial attention is suitable for fashion image captioning, we also compare against two other methods: the original encoder-decoder without attention, and the encoder-decoder with channel-wise attention, one of the popular attention mechanisms. Besides, most current datasets for the fashion image captioning task lack information about item attributes. In this paper, we refined the large-scale datasets related to different fashion problems that were published in previous works, namely DeepFashion [15] and the Fashion-Gen dataset [20], to make them usable for the attribute-aware fashion captioning task. Finally, we evaluate our proposed method on the Fashion-Gen dataset after refining it to be consistent with the fashion attributes of the DeepFashion dataset. In short, our contributions in this paper are:
1. We propose an appropriate attention mechanism incorporated with the traditional encoder-decoder approach to deal with fashion image captioning.
2. We standardize the fashion image dataset for the problem of fashion image captioning with attributes, based on DeepFashion [15] and the Fashion-Gen dataset [20].
3. We evaluate the usefulness of our proposed model for fashion image captioning on the Fashion-Gen dataset.
The remainder of the paper is structured as follows. Section 2 gives a general overview of the image captioning problem and points out some specific requirements of the fashion image captioning task. Our proposed method based on the attention mechanism for fashion image captioning is then described in Sect. 3. Section 4 focuses on experiments running our proposed method on the Fashion-Gen dataset. Section 5 discusses and concludes what this paper contributes to the fashion image captioning problem; some future directions are also mentioned there.

2 Related Works

Generating a natural language description of an image is called image captioning, and it is quite challenging in the computer vision field. Bernardi et al. [4] provided a detailed review of most existing approaches to image captioning; their study presents the benchmark datasets and the evaluation measures for this problem. Image captioning demands recognizing the essential objects, their relationships, and their attributes in an image, and it also needs to generate natural language sentences that are correct in both syntactic and semantic aspects. There have been many studies on image captioning, such as [2,5,14,16,19,29]. Many of these approaches have typically solved the problem by using an object recognition system in conjunction with a natural language generation component based on language models or templates (Kulkarni et al. 2011) [10]. However, they have met some problems with preposition-region triples: they could not distinguish the difference between the main information of the image and the surrounding information of


the region. The recent advances in deep learning have substantially improved the performance of image captioning. A large number of studies on image captioning with deep machine learning have been published recently. With deep learning, features are learned automatically from training data, and such models can handle a large and diverse set of images; deep learning algorithms are able to handle the existing challenges of image captioning quite well. Specifically, a deep learning image captioning model usually follows an encoder-decoder design with two main components, a Convolutional Neural Network (CNN) [7,21] and a Recurrent Neural Network (RNN) [6,11]. The CNN is used to extract a compact representational vector of the whole image, while the RNN works well with any kind of sequential data, such as generating a sequence of words. A typical deep learning image captioning approach combines a CNN and an RNN, as in Karpathy et al. [9] and Vinyals et al. [23]. By merging them, the model can find patterns inside an image and then use that information to generate captions. There has also been increasing interest in integrating the encoder-decoder framework with reinforcement learning for image captioning, such as Liu et al. [14], Rennie et al. [19], and Zhao et al. [29]. Although the encoder-decoder image captioning model has attracted a lot of research and improves captioning performance, this approach also has some limitations. One case is that when the CNN extracts a feature vector from the image, this internal representation contains too much information for the RNN to decode into description sentences. There are some effective ways to improve encoder-decoder models. The most well-known one is to add an attention mechanism to the encoder-decoder image captioning model; for instance, Xu et al. [28] proposed an attention-based model that automatically learns where to attend when generating image descriptions. Another way to improve such models is presented in Squeeze-and-Excitation Networks (SENets) [8], which introduce a building block for CNNs that constructs informative features by fusing both spatial and channel-wise information within local receptive fields at each layer [8], at almost no computational cost. Each channel of a convolution carries a lot of information, and CNNs use their convolutional filters to extract hierarchical information from images, which can retain useful information for the RNN to generate captions. Li et al. [12] and Hu et al. [8] used this model and achieved encouraging results.

3 Approach

3.1 Encoder-Decoder Architecture

The encoder-decoder architecture is the most popular approach and is commonly applied to the image captioning problem because of its flexibility and effectiveness. Specifically, the architecture includes two main phases: the first extracts visual features from the given image and is known as the encoder phase, where convolutional neural networks (CNNs) are regularly applied; the second generates words to describe the image and is considered the decoder phase. Recurrent Neural Networks


(RNNs) and their variants (e.g., the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM)) are the common methods in the decoder phase. Therefore, the encoder-decoder architecture is also called the CNN-RNN architecture; a sketch of a typical encoder is given below.
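For illustration, the encoder phase of such a CNN-RNN pipeline can be sketched in PyTorch by stripping the classification head off a pretrained CNN; the choice of ResNet-50 anticipates Sect. 4.2, and the preprocessing details are assumptions.

import torch
import torchvision

# Keep ResNet-50 up to its last convolutional block (drop avgpool and fc),
# so a 224 x 224 input yields a 7 x 7 x 2048 feature map.
backbone = torchvision.models.resnet50(pretrained=True)
encoder = torch.nn.Sequential(*list(backbone.children())[:-2])

images = torch.randn(1, 3, 224, 224)       # a dummy, already-normalized batch
features = encoder(images)                 # shape: (1, 2048, 7, 7)
V = features.flatten(2).permute(0, 2, 1)   # (1, 49, 2048): m regions, C channels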

3.2 Channel-Wise Attention Mechanism

Channel-wise attention was first introduced in [5], with the aim of paying attention to the visual feature V obtained by the CNN filters. Each CNN filter is considered a pattern detector, and each channel of a feature map is the response activation of the corresponding CNN filter. Thus, a channel-wise mechanism can be considered a process of selecting semantic attributes. Formally, given the image feature map V \in \mathbb{R}^{w \times h \times C} and the hidden state h_{t-1} \in \mathbb{R}^d, where C is the total number of channels, d is the hidden state dimension, and w and h are the width and height of the feature map, the channel-wise attention is defined by a function g(\cdot) as

\alpha_t = g(\hat{v}_t, h_{t-1})  (1)

The average of the feature map is denoted \hat{v} \in \mathbb{R}^m with m = w \times h:

\hat{v}_t = \frac{1}{m} \sum_{i=1}^{w} \sum_{j=1}^{h} V_t(i, j)  (2)

a_t^{\hat{v}} = \tanh(W_c \odot \hat{v}_t + b_c)  (3)

a_t^{h} = \tanh(W_h \odot h_{t-1} + b_h)  (4)

a_t = a_t^{\hat{v}} + a_t^{h}  (5)

\alpha_t = \mathrm{softmax}(W_a \odot a_t + b_a)  (6)

Here the channel-wise attention weight a_t \in \mathbb{R}^m is computed by incorporating the compressed information of \hat{v}_t and h_{t-1}, denoted a_t^{\hat{v}} \in \mathbb{R}^m and a_t^{h} \in \mathbb{R}^m; b_c, b_h \in \mathbb{R}^m are model biases, and W_c \in \mathbb{R}^{m \times C}, W_h \in \mathbb{R}^{m \times d} are the transformation matrices. Meanwhile, \alpha_t \in \mathbb{R}^C is the attention weight of each feature in V at time t, and \odot denotes the element-wise product.

3.3 Spatial Attention Mechanism

Similarly, a spatial attention mechanism is also adopted to improve the ability to generate descriptions [5]. In particular, the spatial attention mechanism attempts to pay more attention to the semantic-related regions, rather than selecting semantic attributes as channel-wise attention does. As with the channel-wise function g(\cdot), the spatial attention function f(\cdot) takes the visual feature map V and the hidden state h_{t-1} as input. To compute the spatial attention weights, the width and height of the visual feature V = [v_1, v_2, \dots, v_m], v_i \in \mathbb{R}^C, are flattened out. The spatial attention f(\cdot) is defined as:

a_t = \tanh(W_s v_t + b_s) \oplus W_{hs} h_{t-1}  (7)

\beta_t = \mathrm{softmax}(W_i \odot a_t + b_i)  (8)

where W_s \in \mathbb{R}^{k \times C}, W_{hs} \in \mathbb{R}^{k \times d}, and W_i \in \mathbb{R}^k are the transformation matrices, b_i \in \mathbb{R} and b_s \in \mathbb{R}^k are model biases, and k is the dimension of the common mapping space of the CNN feature V and the hidden state h_{t-1}.

3.4 An Attention-Based Method for Fashion Image Captioning

In this paper, we propose an attention-based method integrated into the encoder-decoder architecture to deal with fashion image captioning. First, in the encoder phase, the fashion image is processed by the CNN filters of the ResNet-50 network to form the visual feature map V. At the same time, the corresponding description of the fashion image is converted to word embeddings y = [y_1, y_2, \dots, y_L], y_i \in \mathbb{R}^D, where L is the length of the caption and D is the size of the vocabulary. Next, in the decoder phase, we use an LSTM to generate the caption one word per time step t, conditioned on a context vector z_t, the previous hidden state h_{t-1}, and the previously generated words. The context vector z_t is a dynamic representation of the relevant part of the input image at time t. The visual feature map V is fed into f(\cdot) to compute the spatial attention weights \beta, and the context vector z_t is computed from \beta and the visual feature map as follows (a code sketch of this computation is given after Eq. (15)):

z_t = \sum_{t}^{C} \beta_t v_t  (9)

The LSTM unit includes a memory cell c_t, three gates i_t, f_t, o_t (the input, forget, and output gates, respectively), and the hidden state h_t:

i_t = \sigma(W_i E y_{t-1} + U_i h_{t-1} + Z_i z_t + b_i)  (10)
f_t = \sigma(W_f E y_{t-1} + U_f h_{t-1} + Z_f z_t + b_f)  (11)
c_t = \sigma(W_c E y_{t-1} + U_c h_{t-1} + Z_c z_t + b_c)  (12)
o_t = \sigma(W_o E y_{t-1} + U_o h_{t-1} + Z_o z_t + b_o)  (13)
h_t = o_t \tanh(c_t)  (14)

where W_\bullet, U_\bullet, Z_\bullet, and b_\bullet are the transformation matrices and biases, E \in \mathbb{R}^{m \times K} is the embedding matrix, m is the embedding dimension of the LSTM, and \sigma is the sigmoid function. In the last step of the decoder phase, the predicted distribution over words at step t, denoted w_t, is computed as

p(w_t \mid z_t) = \mathrm{softmax}(W_s h_t + b_s)  (15)

4 Experiments

4.1 Fashion-Gen Dataset

The Fashion-Gen dataset consists of 293,008 fashion images (260,480 for training, 32,528 for validation, and 32,528 for testing). All fashion items are photographed from 1 to 6 different angles, depending on the category of the item (see Fig. 1). Each fashion item is paired with a paragraph-length descriptive caption sourced from experts (some examples are in Fig. 2). This dataset was built for the fashion image generation problem; however, we recognized that the image-description pairs can be used for the fashion image description problem too. Therefore, we refined this dataset by performing pre-processing steps such as data cleaning, normalization, and removing infrequent words that are not related to fashion attributes. In this case, we used the fashion attribute classes collected from the DeepFashion dataset to refine the word dictionary for Fashion-Gen. After this preprocessing step, the word dictionary shrank by 80%, while only 10% of the image samples were lost. Finally, we obtained the attribute descriptions of clothing items to support training the fashion image description task in our framework.

Fig. 1. Some fashion image examples from Fashion-Gen dataset (the top row). Each fashion item is photographed from 1 to 6 different angles depending on the category of the item (the bottom row).


Fig. 2. Fashion images together with their descriptions from the Fashion-Gen dataset. Example descriptions: "long sleeve over long wool shirt in red and black. plaid pattern throughout. button closure and flap pocket at front. logo print at front hem and center back in white. shirt tail hem. tonal stitching. single button barrel cuff." and "long sleeve cotton and linen blend denim shirt in blue. fading and distressing throughout. spread collar. press stud closure at front. flap pocket at chest. dropped shoulder. single button barrel cuff. tortoiseshell hardware. contrast stitching in tan."

4.2 Implementation Details

We implement the encoder and decoder parts of our fashion captioning system. In the encoder phase, ResNet-50 is trained to extract fashion image attribute features, and the feature map from the convolutional layer res5c is used to form a global image feature V \in \mathbb{R}^{7 \times 7 \times 2048}. In the decoder phase, we concatenate the word embedding vector y_{t-1}, the last hidden state vector h_{t-1}, and the context vector z_t as the input vector to the RNN; we then use a single-layer neural network on the output vector h_t to predict the next word s_t. We use an LSTM with a hidden size of 512. At the beginning of training, the decoder is trained using the Adam optimizer with an initial learning rate of 1e-4 for 10 epochs to warm up, and we then continue training for 100 epochs with a learning rate of 1e-5; changing the learning rate makes the model weights easier to learn. The batch size is set to 64. Besides, we follow the Beam Search strategy [28] with a beam size of 3 when sampling captions for the Fashion-Gen dataset. A rough sketch of this learning-rate schedule follows.
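In this outline, decoder, loader, and train_one_epoch are hypothetical placeholders for the model, the data pipeline, and the per-epoch training routine; only the epoch counts and learning rates come from the paper.

import torch

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-4)
for epoch in range(10):                  # warm-up phase at lr = 1e-4
    train_one_epoch(decoder, loader, optimizer)

for group in optimizer.param_groups:     # then lower the learning rate
    group["lr"] = 1e-5
for epoch in range(100):                 # main training phase at lr = 1e-5
    train_one_epoch(decoder, loader, optimizer)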

4.3 Quantitative Analysis

We evaluate the quantitative effectiveness of the attention mechanism for the fashion image captioning task. We run three experiments on the Fashion-Gen dataset, with results presented in Table 1: the encoder-decoder model with no attention (first row), channel-wise attention (second row), and spatial attention (last row).


Table 1. Comparison between the traditional approach (encoder-decoder architecture) and two attention-based models for fashion image captioning on the Fashion-Gen dataset.

Method                           BLEU-1  BLEU-2  BLEU-3  BLEU-4  ROUGE-L  CIDEr
Encoder-decoder (non-attention)  0.292   0.235   0.164   0.124   0.455    0.526
Channel-wise attention           0.280   0.205   0.144   0.109   0.345    0.412
Spatial attention                0.408   0.333   0.267   0.221   0.502    0.913

Fig. 3. Example captions generated by the spatial attention model for fashion images from the Fashion-Gen dataset.


In particular, the spatial attention based model obtains strong results on the popular evaluation metrics of the image description problem: BLEU [17] (BLEU-1, BLEU-2, BLEU-3, and BLEU-4 scores of 0.408, 0.333, 0.267, and 0.221, respectively), ROUGE-L [13] of 0.502, and CIDEr [22] of 0.913. This improves on the channel-wise attention model on the same Fashion-Gen dataset, which obtains BLEU-1 to BLEU-4 scores of 0.280, 0.205, 0.144, and 0.109, ROUGE-L of 0.345, and CIDEr of 0.412. Moreover, Table 1 also shows a significant improvement in fashion image captioning performance over the encoder-decoder (non-attention) baseline on most of the common metrics. Some example captions generated by the spatial attention model on the Fashion-Gen dataset are shown in Fig. 3.

5 Conclusion

In this paper, we proposed a model for fashion image captioning inspired by spatial attention, which enhances attention on the relationships among fashion attributes. The spatial attention mechanism is designed to extract deep feature representations, so the proposed model is able to represent the relationships between items inside the input fashion image. We applied our proposed method to the fashion image captioning task and ran experiments on the Fashion-Gen dataset, which was refined to be appropriate for fashion image captioning with attributes. For a fair comparison, we also used two other approaches on the same Fashion-Gen dataset: the baseline encoder-decoder model and the channel-wise attention based model. The experiments proved the effectiveness of spatial attention compared with the baseline method (the encoder-decoder model alone) and the channel-wise attention method. In the future, we will explore more aspects of the attention mechanism and compare our proposed model on different fashion captioning datasets to further prove its effectiveness.

References
1. Vo, A.H., Nguyen, N.T.Q., Nguyen, N.T.B., Pham, V.H., Van Giap, T., Nguyen, B.T.: Video-based Vietnamese sign language recognition using local descriptors. In: ACIIDS 2019, vol. 11432. Springer (2019)
2. Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., Zhang, L.: Bottom-up and top-down attention for image captioning and visual question answering. In: CVPR (2018)
3. Bai, S., An, S.: A survey on automatic image caption generation. Neurocomputing 311, 291–304 (2018)
4. Bernardi, R., Cakici, R., Elliott, D., Erdem, A., Erdem, E., Ikizler, N., Keller, F., Muscat, A., Plank, B.: Automatic description generation from images: a survey of models, datasets, and evaluation measures, vol. 55, January 2016
5. Chen, L., Zhang, H., Xiao, J., Nie, L., Shao, J., Liu, W., Chua, T.: SCA-CNN: spatial and channel-wise attention in convolutional networks for image captioning. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6298–6306, July 2017


6. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition, vol. 7, December 2015
7. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016)
8. Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7132–7141 (2018)
9. Karpathy, A., Fei-Fei, L.: Deep visual-semantic alignments for generating image descriptions. IEEE Trans. Pattern Anal. Mach. Intell. 39(4), 664–676 (2017)
10. Kulkarni, G., Premraj, V., Dhar, S., Li, S., Choi, Y., Berg, A.C., Berg, T.L.: Baby talk: understanding and generating image descriptions. In: Proceedings of the 24th CVPR (2011)
11. Lee, C.: Image caption generation using recurrent neural network. J. KIISE 43, 878–882 (2016)
12. Li, S., Yamaguchi, K.: Attention to describe products with attributes. In: 2017 Fifteenth IAPR International Conference on Machine Vision Applications (MVA), pp. 215–218, May 2017
13. Lin, C.-Y.: ROUGE: a package for automatic evaluation of summaries. In: Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, Barcelona, Spain, pp. 74–81. Association for Computational Linguistics, July 2004
14. Liu, S., Zhu, Z., Ye, N., Guadarrama, S., Murphy, K.: Optimization of image description metrics using policy gradient methods. CoRR, abs/1612.00370 (2016)
15. Liu, Z., Luo, P., Qiu, S., Wang, X., Tang, X.: DeepFashion: powering robust clothes recognition and retrieval with rich annotations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
16. Lu, J., Xiong, C., Parikh, D., Socher, R.: Knowing when to look: adaptive attention via a visual sentinel for image captioning (2017)
17. Papineni, K., Roukos, S., Ward, T., Zhu, W.-J.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL 2002, pp. 311–318. Association for Computational Linguistics, Stroudsburg (2002)
18. Parikh, H., Sawant, H., Parmar, B., Shah, R., Chapaneri, S., Jayaswal, D.: Encoder-decoder architecture for image caption generation. In: 2020 3rd International Conference on Communication System, Computing and IT Applications (CSCITA), pp. 174–179 (2020)
19. Rennie, S.J., Marcheret, E., Mroueh, Y., Ross, J., Goel, V.: Self-critical sequence training for image captioning. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1179–1195 (2017)
20. Rostamzadeh, N., Hosseini, S., Boquet, T., Stokowiec, W., Zhang, Y., Jauvin, C., Pal, C.: Fashion-Gen: the generative fashion dataset and challenge. ArXiv e-prints, June 2018
21. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556 (2014)
22. Vedantam, R., Zitnick, C.L., Parikh, D.: CIDEr: consensus-based image description evaluation. In: CVPR, pp. 4566–4575. IEEE Computer Society (2015)
23. Vinyals, O., Toshev, A., Bengio, S., Erhan, D.: Show and tell: a neural image caption generator. CoRR, abs/1411.4555 (2014)
24. Vo, A., Ly, N.Q.: Facial expression recognition using pyramid local phase quantization descriptor. In: Knowledge and Systems Engineering (KSE), pp. 105–115 (2015)


25. Vo, A., Nguyen, B.T.: Facial expression recognition based on salient regions. In: 2018 4th International Conference on Green Technology and Sustainable Development (GTSD), pp. 739–743 (2018)
26. Vo, A.H., Pham, V.-H., Nguyen, B.T.: Deep learning for Vietnamese sign language recognition in video sequence. Int. J. Mach. Learn. Comput. 9, 440–445 (2019)
27. Vo, T., Nguyen, T., Le, T.: A hybrid framework for smile detection in class imbalance scenarios. In: Neural Computing and Applications, pp. 8583–8592 (2019)
28. Xu, K., Ba, J.L., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., Zemel, R.S., Bengio, Y.: Show, attend and tell: neural image caption generation with visual attention. In: Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, vol. 37, pp. 2048–2057. JMLR.org (2015)
29. Zhao, W., Xu, W., Yang, M., Ye, J., Zhao, Z., Feng, Y., Qiao, Y.: Dual learning for cross-domain image captioning. In: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, pp. 29–38. ACM, New York (2017)

Robotics and Intelligent Systems

An Intelligent Inverse Kinematic Solution of Universal 6-DOF Robots

Dang Sy Binh¹, Le Cong Long¹, Khuu Luan Thanh¹, Dinh Phuoc Nhien¹, and Dang Xuan Ba²

¹ HCMC University of Technology and Education (HCMUTE), Ho Chi Minh City, Vietnam. [email protected], [email protected], [email protected], [email protected]
² Department of Automatic Control, HCMC University of Technology and Education (HCMUTE), Ho Chi Minh City, Vietnam. [email protected]

Abstract. Industrial robots play a key role in the automation field. One necessary condition for high-precision control of such robots is an inverse-kinematics solution, which poses huge challenges in the case of universal structures. In this paper, we present an intelligent method for the inverse-kinematics problem of universal 6-degree-of-freedom (6DOF) robots. A numerical method is first developed for the robot configuration using the gradient method. To improve the convergence rate, a novel speed-up mechanism is proposed using error-based excitation signals. The effectiveness of the designed method is confirmed by comparative simulation results.

Keywords: Universal robots · 6DOF · Forward kinematics · Inverse kinematics

1 Introduction
Along with the rapid development of science and technology worldwide, automation of production processes has become indispensable in the industrial environment. That is why robots participate in most production stages and are gradually replacing humans in almost every field, especially where the working environment requires continuity and is harsh and risky. Therefore, robotic technology has become an important, integral part of science and research. One of the most typical tasks in robot control is to find solutions of the inverse kinematics (IK). Many studies have been performed to deal effectively with this problem, especially for robots with higher degrees of freedom (DOF). One category of inverse kinematics algorithms is based on closed-form methods [1, 2]. The multiple solutions for a desired end-effector pose could be found by equating the proper entries of the link homogeneous transformations [1]. A common constraint of these methods is that three consecutive joint axes must intersect at a certain point [2]. To avoid this limitation, another direction of the analytical approaches is the use of equivalent high-degree polynomials of the joint angles [3]. Although all


possible solutions could be obtained by solving 16th-order polynomials of the half-angle tangent functions transferred from the IK problem, applying these theories in real-time applications is not trivial [4]. Analytical IK solutions for some robot manipulators are normally difficult or impossible due to their complex kinematic structures [5]. For providing exact IK roots effectively, numerical techniques are a promising alternative. Newton-Raphson methods were adopted for the nonlinear kinematics equations [6, 7]. Recently, in [8], a hierarchical analysis method for the IK problem was introduced based on a graphical interpretation of the Jacobian matrix. In [9], another numerical algorithm for the position analysis of all serial manipulators was developed using inversion of the Jacobian matrix. The Gauss-Newton iterative method was also used for such nonlinear IK models [10, 11]. The main drawback of such approaches is the matrix-inversion problem at singularity points [12]: severe fluctuation phenomena can be triggered when the system passes through singular regions. To tackle this shortcoming, gradient-based nonlinear algorithms were proposed that minimize index functions [13–15]. One of the most effective methods in this numerical class is based on Levenberg-Marquardt theory [16, 17]. Another possible approach is a combination of partial analytical computation and incremental tuning [5]. In fact, the IK can also be viewed as an optimization task solved with general-purpose methods such as neural-network-based and genetic algorithms [18–21]. So far, the performance of these intelligent methods depends mainly on the guess of the initial solution and the chosen learning rate; over wide ranges of robot poses, a static rate can hardly yield excellent estimation results. In this paper, a second-order learning method is proposed to cope with the IK problem of a universal 6DOF robot. The method is structured on an extended Levenberg-Marquardt technique and is then integrated with a fast-convergence mechanism. The solving rate of a given problem is automatically adjusted to effectively support the convergence process. The advantages of the proposed solution are intensively verified by simulation results. The rest of this paper is organized as follows. The problem statement is presented in Sect. 2. The high-speed solving algorithm is addressed in Sect. 3. Simulation validation is discussed in Sect. 4, while the conclusion and future research are drawn in the last section.

2 Problem Statement
The studied robot is a universal UR3 robot, whose configuration and relative coordinate frames are depicted in Fig. 1. The Denavit-Hartenberg parameters of the robot were selected as shown in Table 1.


Fig. 1. Configuration and link coordinates of a universal 6DOF robot

Table 1. Denavit-Hartenberg (DH) table of the robot

i   a_{i-1}   α_{i-1}   d_i             θ_i
1   0         0         0               θ1
2   0         90        0               θ2
3   d3        0         0               θ3
4   d5        0         d2 - d4 + d6    θ4
5   0         90        d7              θ5
6   0         90        0               θ6

The homogeneous transformation from the last coordinate frame to the base frame is computed from the DH table as follows [6]:

{}^{0}_{6}T = \begin{bmatrix} r_{11} & r_{12} & r_{13} & p_x \\ r_{21} & r_{22} & r_{23} & p_y \\ r_{31} & r_{32} & r_{33} & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}  (1)

in which r_{ij}|_{i,j=1,2,3} are the entries of the following rotation matrix (2) of the 6th frame relative to the base frame,


\begin{cases}
r_{11} = c_6 (s_1 s_5 + c_1 c_5 c_{234}) + c_1 s_6 s_{234} \\
r_{12} = c_1 c_6 s_{234} - s_6 (s_1 s_5 + c_1 c_5 c_{234}) \\
r_{13} = c_1 s_5 c_{234} - c_5 s_1 \\
r_{21} = s_1 s_6 s_{234} - c_6 (c_1 s_5 - s_1 c_5 c_{234}) \\
r_{22} = s_6 (c_1 s_5 - s_1 c_5 c_{234}) + s_1 c_6 s_{234} \\
r_{23} = c_1 c_5 + s_1 s_5 c_{234} \\
r_{31} = c_5 c_6 s_{234} - s_6 c_{234} \\
r_{32} = c_6 c_{234} - c_5 s_6 s_{234} \\
r_{33} = s_5 s_{234}
\end{cases}  (2)

and p_{x,y,z} is the origin position of the 6th frame with respect to the base frame:

\begin{cases}
p_x = s_1 (d_2 - d_4 + d_6) + d_7 c_1 s_{234} + d_5 c_1 c_{23} + d_3 c_1 c_2 \\
p_y = d_7 s_1 s_{234} - c_1 (d_2 - d_4 + d_6) + d_5 s_1 c_{23} + d_3 s_1 c_2 \\
p_z = d_5 s_{23} + d_3 s_2 - d_7 c_{234}
\end{cases}  (3)

where c_i, s_i denote cos(θ_i) and sin(θ_i), θ_i|_{i=1..6} are the joint angles, and d_i|_{i=1..7} are the physical link lengths. Normally, the forward kinematics can easily be obtained from the DH table, but the inverse kinematics is a conundrum. The main objective of this paper is to effectively find an accurate set of joint angles with respect to a desired end-effector pose. The high nonlinearity of the kinematics (1) and the requirements of fast response and precise results are huge challenges.
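For illustration, the forward kinematics (1) can be evaluated numerically by chaining standard modified-DH link transforms built from Table 1. The sketch below is a generic composition under that convention, with the link-length values supplied by the caller; it is not the authors' code.

import numpy as np

def dh_transform(a, alpha, d, theta):
    """One modified-DH link transform (angles in radians)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa,  -sa * d],
        [st * sa,  ct * sa,  ca,   ca * d],
        [0.0,      0.0,      0.0,  1.0],
    ])

def forward_kinematics(dh_rows, thetas):
    """Compose the six link transforms into the base-to-end-effector matrix T."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_rows, thetas):
        T = T @ dh_transform(a, alpha, d, theta)
    return T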

3 Effective Inverse Kinematic Solutions
To solve the inverse kinematics (IK) problem of this robot, we employ a transpose approximation method. First, an observation function for the robot pose is synthesized from entry functions of the forward kinematics (FK) matrix (1), as follows:

f = [p_x \; p_y \; p_z \; r_{11} \; r_{12} \; r_{22} \; r_{23} \; r_{31} \; r_{33}]^T  (4)

From a given pose of the robot, it is possible to extract the desired value f_d. By applying the extended Levenberg-Marquardt methods [16, 17], the approximate solution can be found as follows:

\theta[k+1] = \theta[k] + N J_r^T[k] \left( J_r[k] J_r^T[k] + \gamma I_9 \right)^{-1} (f_d - f[k])  (5)

where \gamma is an arbitrarily small positive number, N is a positive-definite learning-rate matrix, and J_r is the Jacobian matrix derived from the synthesized function (4). The solving law (5) can produce the expected joint angles for any desired end-effector pose inside the robot workspace. However, the convergence time strongly


depends on how the learning rate N is chosen. As noted in (4), the values of the last six entries of the function f always lie inside the unit circle, while those of the other three terms of (4) span much larger ranges. Hence, selecting a proper value of N is not easy: the best N yields a fast response with an excellent estimation error. As a crucial contribution, a second-layer updating rule is proposed here for the learning-rate matrix N. The learning rate is divided into a static positive rate N_s, an offset positive rate N_o, and a dynamic rate N_d:

N_d[k+1] = \alpha N_d[k] + N_s \, \mathrm{sgn}\!\left( (f_d - f[k])^T J_r[k] J_r^T[k] \left( J_r[k] J_r^T[k] + \gamma I_9 \right)^{-1} \right) |f_d - f[k]|  (6)

where \alpha (0 < \alpha < 1) is a second-order learning rate. The primary learning rate N is composed as

N = N_s (N_o + N_d)  (7)

The low-level approximation laws (6)–(7) reveal that the solving mechanism speeds up depending on the magnitude of the robot pose error, and that the dynamic rate automatically vanishes once the roots are found.
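A scalar-rate reading of the solving laws (5)–(7) can be sketched as follows. The pose function f_func (Eq. (4)) and its Jacobian jac_func are assumed to be provided, and the matrix learning rates are collapsed to scalars for brevity, which simplifies (6).

import numpy as np

def solve_ik(theta, f_d, f_func, jac_func, iters=4000,
             n_s=0.1, n_o=1.0, alpha=0.9, gamma=1e-6):
    """Iterate the damped update (5) with the adaptive rate (6)-(7)."""
    n_d = 0.0
    for _ in range(iters):
        e = f_d - f_func(theta)                        # 9-dim pose error
        J = jac_func(theta)                            # 9 x 6 Jacobian of f
        A = np.linalg.inv(J @ J.T + gamma * np.eye(9))
        n_d = alpha * n_d + n_s * np.linalg.norm(e)    # simplified form of (6)
        N = n_s * (n_o + n_d)                          # Eq. (7), scalar here
        theta = theta + N * (J.T @ A @ e)              # Eq. (5)
    return theta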

4 Simulation Validation
In this section, the performance of the proposed solving algorithm is discussed through several static and dynamic tests. The designed method was applied to the 6DOF universal robot model depicted in Fig. 1; its detailed parameters, shown in Table 1, were measured from a real robot. To provide an intensive view of the solving rate, the proposed approach is compared with the conventional Levenberg-Marquardt method proposed in [16]. Note that the static and offset learning rates of the compared methods were chosen to be the same.

4.1 Comparative Solutions in Static Test

In this test, a certain robot pose was generated by applying the FK computation (1)–(3) to a set of six desired joint angles: θ1 = 20, θ2 = 30, θ3 = 40, θ4 = 50, θ5 = 60, θ6 = 70 (degrees). The conventional and self-learning Levenberg-Marquardt methods (CLM vs. SLM) were employed to solve the IK problem for this desired pose. The simulation results are presented in Figs. 2 and 3. As seen in the figures, the solution of the IK problem was easily found by both the CLM and SLM methods. However, the solving speeds of the two methods were quite different: as shown in Fig. 2(b), the CLM needed 18000 iterations to find the true root, while, as illustrated in Fig. 3(b), the SLM required only 4000 iterations to complete the same mission. To this end, as presented in Fig. 3(c), the second-order solving process (6)–(7) was effectively activated to force the pose error back to zero as quickly as possible. Here, the SLM clearly outperformed the conventional method.


Fig. 2. Solving process of the conventional Levenberg-Marquardt algorithm (CLM) in the static test: a) Convergence process in the view of joint angles b) Convergence process of the pose error

Fig. 3. Solving process of the self-learning Levenberg-Marquardt algorithm (SLM) in the static test: a) Convergence process in the view of joint angles b) Convergence process of the pose error c) Intelligent variation of the learning rate

4.2 Comparative Solutions for Time-Varying Trajectories

In the second test, the two algorithms were challenged with the dynamic trajectory plotted in Fig. 4. Note that, in this test, the number of solving iterations was limited to 100. The simulation results of the two methods are presented in Figs. 5 and 6. As in the first test, Fig. 5(a) indicates that the desired joint angles were easily found by the solving method even under the strict iteration constraint. As shown in the zoomed views of Figs. 5(b) and 6(a), the steady-state estimation errors of the CLM and SLM were about 2 mm and 0.3 mm, respectively. The data in Fig. 6(b) reveal that the superior solving performance of the SLM came from the prompt learning process of the lower layer (6)–(7). Hence, the outperformance of the proposed solving mechanism is strongly confirmed in this extensive test.

Fig. 4. Inverse kinematic performance of the conventional IK method: a) A time-varying desired trajectory (blue) b) The trajectory plotted by the CLM (red)

Fig. 5. Solving process of the conventional Levenberg-Marquardt algorithm (CLM) in the dynamic test: a) Convergence process in the view of joint angles b) Convergence process of the pose error


Fig. 6. Solving process of the self-learning Levenberg-Marquardt algorithm (SLM) in the dynamic test: a) Convergence process of the pose error b) Intelligent variation of the learning rate


5 Conclusion
In this paper, a second-order solving method based on Levenberg-Marquardt theory is proposed to cope with the inverse kinematics of a universal 6DOF robot. The first layer of the algorithm is designed by extending the LM theorem, while the second layer is designed to speed up the learning process. The proposed algorithm was carefully verified by intensive simulations, and the comparative results confirmed the advantages of the proposed method over existing ideas in terms of high accuracy and fast convergence.
Acknowledgements. The authors are very grateful to the referees and editors for their valuable comments, which helped to improve the paper's quality. The authors would also like to thank the support from Ho Chi Minh City University of Technology and Education (HCMUTE).

References
1. Pieper, D.: The kinematics of manipulators under computer control. Unpublished Ph.D. thesis, Stanford University (1968)
2. Pieper, D., Roth, B.: The inverse kinematics of manipulators under computer control. In: Proceedings of the Second International Congress on Theory of Machines and Mechanisms, Zakopane, Poland, vol. 2, pp. 159–169 (1969)
3. Raghavan, M., Roth, B.: Inverse kinematics of the general 6R manipulator and the related linkages. Trans. ASME J. Mech. Des. 115, 502–508 (1993)
4. Manocha, D., Canny, J.F.: Efficient inverse kinematics for general 6R manipulators. IEEE Trans. Robot. Autom. 10(5), 648–657 (1994)
5. Kucuk, S., Bingul, Z.: Inverse kinematics solutions for industrial robot manipulators with offset wrists. Appl. Math. Model. 38(7–8), 1983–1999 (2014)
6. Uicker, J.J., Denavit, J., Hartenberg, R.S.: An iterative method for the displacement analysis of spatial mechanisms. ASME J. Appl. Mech. 107, 189–200 (1954)
7. Angeles, J.: On the numerical solution of the inverse kinematic problem. Int. J. Robot. Res. 4(2), 21–37 (1985)
8. Martins, D., Guenther, R.: Hierarchical kinematic analysis of robots. Mech. Mach. Theory 38, 497–518 (2003)
9. Zhao, Y., Hang, T., Yang, Z.: A new numerical algorithm for the inverse position analysis of all serial manipulators. Robotica 24(3), 1–4 (2003)
10. Khalil, W., Dombre, E.: Modeling, Identification, and Control of Robots. Kogan Page Science, London (2004)
11. Abdelaziz, O., Luo, M., Jiang, G., Chen, S.: Multiple configurations for puncturing robot positioning. Int. J. Adv. Rob. Exp. Syst. 1(4) (2019)
12. Goldenberg, A.A., Apkarian, J.A., Smith, H.W.: A new approach to kinematic control of robot manipulators. ASME J. Dyn. Syst. Meas. Control 109, 97–103 (1987)
13. Hall, A.S.: A dependable method for solving matrix loop equations for general three-dimensional mechanism. ASME J. Eng. Ind. 99, 547–550 (1997)
14. Balestrino, A., De Maria, G., Sciavicco, L.: Robust control of robotic manipulators. In: Proceedings of the 9th IFAC World Congress, vol. 5, pp. 2435–2440 (1984)
15. Korein, J.U., Badler, N.I.: Techniques for generating the goal-directed motion of articulated structures. IEEE Comput. Graph. Appl. 2, 71–81 (1982)


16. Buss, S.R.: Introduction to inverse kinematics with Jacobian transpose, pseudoinverse and damped least squares methods. IEEE J. Rob. Autom. 17, 1–19 (2004)
17. Duleba, I., Opalka, M.: A comparison of Jacobian-based methods of inverse kinematics for serial robot manipulators. Int. J. Appl. Math. Comput. Sci. 23(2), 373–382 (2013)
18. Wei, L.X., Wang, H.R., Li, Y.: A new solution for inverse kinematics of manipulator based on neural network. In: Proceedings of the Second International Conference on Machine Learning and Cybernetics, China, pp. 1201–1203 (2003)
19. Kalra, P., Prakash, N.R.: A neuro-genetic algorithm approach for solving the inverse kinematics of robotic manipulators. IEEE Trans. Rob. Autom. 2, 1979–1984 (2003)
20. Zhang, Y., Wang, C., Hu, L., Qiu, G.: Inverse kinematics problem of industrial robot based on PSO-RBFNN. In: 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China (2020)
21. Demby's, J., Gao, Y., DeSouza, G.N.: A study on solving the inverse kinematics of serial robots using artificial neural network and fuzzy neural network. In: 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), New Orleans, LA, USA (2019)

A PD-Folding-Based Controller for a 4DOF Robot

Vo Tan Tai1, Doan Ngoc Minh1, and Dang Xuan Ba2(&)

1 HCMC University of Technology and Education (HCMUTE), Ho Chi Minh City, Vietnam
[email protected], [email protected]
2 Department of Automatic Control, HCMC University of Technology and Education (HCMUTE), Ho Chi Minh City, Vietnam
[email protected]

Abstract. In this paper, an intelligent controller is proposed for high-accuracy position tracking control of a 4-degree-of-freedom (4DOF) robot. The control scheme is formatted using a 2-layer neural-network template, in which the first layer is encoded by a proportional-derivative structure, and the second layer is comprised of three different functions. Three ranges of perturbation that affect the control performance are treated, respectively, by an offset-learning term, a folding PD control term, and a high-switching control term. The gains of the control terms are updated using modified gradient methods. Effectiveness of the proposed control is successfully verified by comparative simulation results.

Keywords: PD controller · Robotics · Position controller · Neural networks

1 Introduction

In the Industry 4.0 trend, robots have become an increasingly essential part of life. Robots not only replace humans in continuous and repetitive work, but also perform intelligent tasks like a real human being. With their fast speed and artificial intelligence, robots have successfully completed missions in exploration, working in dangerous places, rescuing people in accidents, and medical diagnosis. To complete such tasks, the robots need good controllers that satisfy the requirements for both accuracy and quick response. However, complex dynamics and varying practical working conditions are the main obstacles in designing high-precision controllers for robotic systems [1–3]. To tackle the aforementioned problems, a vast number of effective approaches have been studied, such as linear [4, 5], adaptive robust nonlinear [6–8] and intelligent controllers [9, 10], or their combinations [11–14]. In fact, the most popular control method used in industrial applications is the Proportional-Integral-Derivative (PID) based controller [4, 15, 16], thanks to its simple implementation and robustness. Such controllers have been continually exploited for higher performance. In [17], a fuzzy self-tuning algorithm was proposed to select proper PID gains according to the actual state of robot manipulators. In other directions, the fuzzy algorithm was employed to approximate the system dynamics [18]. The learning rules of this intelligent category are significantly


based on empirical human knowledge [19]. To alleviate the dependence on operator experience, improvements using neural networks have attracted more attention. For the same purpose as the fuzzy-logic approaches, the network could be adopted as a model approximator. The nonlinear system dynamics were easily estimated by appropriate network designs, but the large computation of the learning process is one noted drawback [20, 21]. The networks have also been applied as gain tuners. Indeed, fine gains of PID robotic controllers could be automatically adjusted using various learning methods. The adaptation rules were derived using back-propagation theories [3, 22–24] or vector quantization methods [25]. Most of the self-adjusting methods worked under a zero-order-linear assumption that might lead to instability of the closed-loop system or unexpected control effects. To cope with the aforementioned issues, in this paper, a neural-network-based controller is designed for high-accuracy tracking control of robot manipulators. Input signals of the network are synthesized by folding a PD form of the control error. That folding signal itself can ensure the stability of the closed-loop control system. The learning laws of the network are designed with excitation and leakage terms to provide the predefined performance regardless of changes of the internal dynamics and external disturbances. Effectiveness of the proposed controller is carefully verified by extensive simulation results. The rest of this paper is organized as follows: Problem statements are presented in Sect. 2. The proposed algorithm is developed in Sect. 3. Simulation validation is discussed in Sect. 4. Conclusions and future work are drawn in Sect. 5.

2 Robot Dynamics and Problem Statement

The dynamics of the studied 4-DOF robot can be generally expressed as

$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) + K_f\dot{q} + \tau_d = \tau$ (1)

where $q, \dot{q}, \ddot{q} \in \mathbb{R}^4$ are joint position, velocity, and acceleration vectors, respectively, $M(q) \in \mathbb{R}^{4\times 4}$ is the symmetric and positive definite matrix of inertia, $C(q,\dot{q}) \in \mathbb{R}^{4\times 4}$ denotes the Coriolis and centrifugal term matrix, $g(q) \in \mathbb{R}^4$ is the gravity term, $\tau \in \mathbb{R}^4$ is the torque acting on the joints, $K_f \in \mathbb{R}^{4\times 4}$ is a diagonal positive-definite matrix standing for frictional coefficients at the joints, and $\tau_d \in \mathbb{R}^4$ is the external disturbance.

Remark 1: The dynamics (1) possess the following properties [1, 26]:

Property 1. The maximum and minimum eigenvalues of the inertia matrix $M(q)$ are bounded and positive.

Property 2. The gravitational vector is bounded: $\|g(q)\| \le g_0$, where $g_0$ is a positive finite constant.

Property 3. The matrix $C(q,\dot{q})$ and the time derivative of the inertia matrix $M(q)$ satisfy the following constraint: $y^T\big(\dot{M}(q) - 2C(q,\dot{q})\big)y = 0 \;\; \forall y, q, \dot{q}$.


Remark 2: In general cases, the dynamics (1) contain unknown nonlinearities due to the complex configuration of the robot and unpredictable working conditions. It is a challenge to design a simple controller that drives the system state $q$ to accurately track a second-order-bounded desired profile $q_d$. Further intensive requirements for the controller are that it be model-free, strictly stable, and self-learnable.
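For simulation purposes, the model (1) can be integrated numerically. The sketch below is a generic explicit-Euler step; `M_fn`, `C_fn`, `g_fn`, the time step and the disturbance vector are placeholders standing in for the (here unknown) model terms, not the paper's implementation.

```python
import numpy as np

def dynamics_step(q, qd, tau, M_fn, C_fn, g_fn, Kf, tau_d, dt=1e-3):
    """One explicit-Euler step of the manipulator dynamics (1).

    q, qd: joint positions/velocities, shape (4,); tau: applied torque, shape (4,).
    M_fn, C_fn, g_fn: callables returning M(q), C(q, qd), g(q); Kf: friction matrix.
    """
    rhs = tau - C_fn(q, qd) @ qd - g_fn(q) - Kf @ qd - tau_d
    qdd = np.linalg.solve(M_fn(q), rhs)   # solve M(q) qdd = rhs for the acceleration
    return q + dt * qd, qd + dt * qdd
```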

3 Controller Design

Traditional PID controller theory is detailed in [30]. In this section, a new controller is designed based on the stated requirements, under the assumption that the lumped disturbance $\tau_d$ is bounded.

3.1 Folding PD Control Scheme

The main control objective is formulated as follows:

$e = q - q_d$ (2)

Since the primary goals of the controller design are simplicity and model-freeness, PD-based control schemes are reasonable solutions here. The PD control signal is first synthesized from the control error:

$u_{PD} = \dot{e} + K_P e$ (3)

where $K_P$ is a positive-definite gain matrix.

In fact, under the condition of a bounded disturbance $\tau_d$, applying the control signal (3) directly to the system (1) always results in stability. However, both transient and steady-state performances need further investigation. Unexpected behaviors could come from the effect of the Coriolis and centrifugal term matrix $C(q,\dot{q})$, characteristics of the reference signal $q_d$, and the lumped external disturbance $\tau_d$. These factors could be categorized into offset, medium- and high-frequency impacts [27]. As a result, the control rule (3) is modified to treat such regions for better performance, as follows:

$\tau = K_1\tanh(u_{PD}) + K_2\,\mathrm{sgn}(u_{PD}) + K_3$ (4)

where $K_{i|i=1,2}$ are positive-definite control gain matrices, and $K_3$ is another tunable bounded gain. Here, the gain $K_3$ and the term $K_2\,\mathrm{sgn}(u_{PD})$ are used to eliminate the offset and high-frequency perturbances, while the term $K_1\tanh(u_{PD})$ plays a dominant role in suppressing the medium-frequency effect.

Remark 3: The control scheme (4) has been structured from a frequency point of view for the best control error (2). The control effect might be exceptional if proper control gain matrices are selected. As noted in (4), large values of $K_2$ could lead to chattering problems, while a small $K_1$ gain makes it difficult to generate an excellent control performance. This means that automatic gain selection is definitely required.

3.2 Learning Integration Formed by a Neural Network

As discussed in Remark 3, the intelligent gains must have adaptation ability according to the rapid behaviors of the system, subject to the minimum control error (2) or the equivalent control objective (3). To this end, the control laws (3)–(4) are formatted using a neural network structure whose input layer ($x$) consists of two neurons, fed by the PD control term (3), and one bias:

$x = [\,\tanh(u_{PD})\;\;\mathrm{sgn}(u_{PD})\;\;1\,]^T$ (5)

It reveals that the weight factors of the layer are the gain matrices $K_{i|i=1,2,3}$:

$K = [\,K_1\;\;K_2\;\;K_3\,]^T$ (6)

The learning mechanism of the control gains is extended from back-propagation theories [23, 25, 28, 29] as follows:

$K_t = K_{t-T_s} + \mathrm{diag}\big(\mathrm{sgn}(|u_{PD}| - u_1)\big)\,\Lambda x$ (7)

where $\Lambda$ is a learning rate and $u_1$ is a desired convergence region.

Remark 4: The working principle of the learning rule (7) is that the high-switching and medium-frequency-treating gains $K_{i|i=1,2}$ are excited to force the equivalent error (3) back inside the desired region $u_1$ and are decreased thereafter by the leakage function to relax the whole system. The gain $K_3$ is a searching term for offset disturbances. An overview of the proposed controller is demonstrated in Fig. 1.

Fig. 1. Scheme of the proposed controller.
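To make the scheme concrete, a minimal per-step sketch of the control law (3)–(4) with the excitation/leakage update (7) is given below. The diagonal gain parameterization, the use of |x| in the update, the nonnegativity clipping of K1 and K2, and all numeric values are illustrative interpretations, not the authors' implementation.

```python
import numpy as np

def folding_pd_step(e, e_dot, K, lam, u1, Kp=5.0):
    """One control step of the folding-PD scheme.

    e, e_dot: joint tracking error and its derivative, shape (4,).
    K: stacked gains [K1; K2; K3], shape (3, 4); lam: learning rates, shape (3, 4).
    u1: desired convergence region for |u_PD|; Kp: assumed PD gain.
    """
    u_pd = e_dot + Kp * e                          # Eq. (3): folded PD error signal
    x = np.stack([np.tanh(u_pd),                   # medium-frequency neuron
                  np.sign(u_pd),                   # high-frequency neuron
                  np.ones_like(u_pd)])             # bias (offset) neuron, Eq. (5)
    # Eq. (7): gains are excited while |u_PD| lies outside the region u1,
    # and leak back toward smaller values once inside it.
    K = K + np.sign(np.abs(u_pd) - u1) * lam * np.abs(x)
    K[:2] = np.clip(K[:2], 0.0, None)              # keep K1, K2 nonnegative (assumption)
    tau = K[0] * np.tanh(u_pd) + K[1] * np.sign(u_pd) + K[2]   # Eq. (4)
    return tau, K
```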

4 Simulation

This section discusses validation results of the designed controller on a simulation model. The simulation results are extensively discussed to highlight the advantages of the designed features.

4.1 Simulation Setup

The simulation model and design, and detailed coordinates of the robot, are depicted in Fig. 2. Forward kinematics of the robot are expressed in Eq. (8):

$\begin{cases} {}^0x_{EE} = p_x = c_1(d_1 - d_3 c_{23} + d_4 s_{23} - d_2 c_2) \\ {}^0y_{EE} = p_y = s_1(d_1 - d_3 c_{23} + d_4 s_{23} - d_2 c_2) \\ {}^0z_{EE} = p_z = d_3 s_{23} + d_4 c_{23} + d_2 s_2 \end{cases}$ (8)

where $c_i = \cos\theta_i$, $s_i = \sin\theta_i$, $c_{23} = \cos(\theta_2 + \theta_3)$, and $s_{23} = \sin(\theta_2 + \theta_3)$, while the inverse kinematics are computed by Eqs. (9)–(11):

$\theta_1 = \mathrm{atan2}(p_x, p_y) + \mathrm{atan2}(1, 0)$ (9)

$\begin{cases} a' = 2 d_2 d_3 \\ b' = 2 d_2 d_4 \\ c' = p_x^2 + p_y^2 + p_z^2 + d_1^2 + 2 p_x c_1 d_1 + 2 p_y s_1 d_1 - d_2^2 - d_3^2 - d_4^2 \\ \theta_3 = \mathrm{atan2}(b', a') + \mathrm{atan2}\left(\sqrt{\dfrac{a'^2 + b'^2 - c'^2}{a'^2 + b'^2}},\; \sqrt{\dfrac{c'^2}{a'^2 + b'^2}}\right) \end{cases}$ (10)

$\begin{cases} a_{11} = c_1 p_x + p_y s_1 + d_1 \\ c_{11} = d_2 + d_3 c_3 - d_4 s_3 \\ b_{11} = p_z \\ a_{22} = p_z \\ b_{22} = c_1 p_x - s_1 p_y - d_1 \\ c_{22} = d_4 c_3 + d_3 s_3 \\ \Delta = a_{11} b_{22} - a_{22} b_{11} \\ \Delta_x = b_{22} c_{11} - b_{11} c_{22} \\ \Delta_y = a_{11} c_{22} - a_{22} c_{11} \\ \theta_2 = \mathrm{atan2}\left(\dfrac{\Delta_y}{\Delta},\; \dfrac{\Delta_x}{\Delta}\right) \end{cases}$ (11)
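The forward kinematics (8) translate directly into code. The sketch below assumes the shorthand defined above and uses the link lengths of Table 1 as defaults.

```python
import numpy as np

def forward_kinematics(theta, d1=0.075, d2=0.4, d3=0.075, d4=0.41):
    """End-effector position from Eq. (8); link lengths default to Table 1 values."""
    t1, t2, t3, _ = theta
    c1, s1 = np.cos(t1), np.sin(t1)
    c2, s2 = np.cos(t2), np.sin(t2)
    c23, s23 = np.cos(t2 + t3), np.sin(t2 + t3)
    r = d1 - d3 * c23 + d4 * s23 - d2 * c2        # common radial term of p_x, p_y
    return np.array([c1 * r,                      # p_x
                     s1 * r,                      # p_y
                     d3 * s23 + d4 * c23 + d2 * s2])  # p_z
```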

Dynamics of the robot could be derived using the Euler-Lagrange method, summarized by Eq. (12):

$\dfrac{d}{dt}\left(\dfrac{\partial k}{\partial \dot{\theta}}\right) - \dfrac{\partial k}{\partial \theta} + \dfrac{\partial p}{\partial \theta} = \tau$ (12)

where $k$ and $p$ are the kinetic and potential energies of the joints, computed using the extended forward kinematics of the robot. Simulation parameters of the robot model were selected based on a real robot and are shown in Table 1. In the simulation, the controller designed in Sect. 3 was applied to control the joints tracking different desired trajectories. The performance of the proposed controller was further analyzed by comparison with that of a conventional PID controller.
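As a worked instance of the recipe (12), the snippet below applies SymPy's Euler-Lagrange helper to a single illustrative link (point mass m at length d). It is a didactic example under these assumptions, not the paper's 4-DOF derivation.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, d, g = sp.symbols('m d g', positive=True)   # illustrative link mass, length, gravity
theta = sp.Function('theta')

k = sp.Rational(1, 2) * m * (d * theta(t).diff(t))**2   # kinetic energy of the link
p = -m * g * d * sp.cos(theta(t))                       # potential energy of the link
L = k - p                                               # Lagrangian

# euler_equations returns d/dt(dL/dq') - dL/dq = 0; adding the joint torque tau
# as a generalized force on the right-hand side recovers the form of Eq. (12).
print(euler_equations(L, [theta(t)], [t]))
# -> [Eq(-d**2*m*Derivative(theta(t), (t, 2)) - d*g*m*sin(theta(t)), 0)]
```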

Fig. 2. An overview of the 4-degree-of-freedom (DOF) robot: (a) 3D design; (b) principle drawing and coordinate setup

Table 1. The system parameters of the manipulator.

Parameter         Value                           Unit
Mass of link      m_i (i = 1, 2, 3, 4) = 0.2      kg
Length of link 1  d0 = 0.33, d1 = 0.075           m
Length of link 2  d2 = 0.4                        m
Length of link 3  d3 = 0.075                      m
Length of link 4  d4 = 0.41, d5 = 0.05            m

4.2 Simulation Results

(a) Step responses. The desired trajectories, step functions with specific levels for each joint, chosen in this test are listed in Table 2. Control results of the two controllers are plotted in Fig. 3. The system error is shown in Fig. 4, and the updating of the control gains of the proposed PD-folding controller for the four joints is represented in Figs. 5, 6, 7 and 8.

Table 2. Desired constant angles for the system.

Parameter       Value  Unit
Angle joint 1   1      rad
Angle joint 2   1.5    rad
Angle joint 3   2      rad
Angle joint 4   2.5    rad


Fig. 3. Comparison of the system responses: (a) first joint, (b) second joint, (c) third joint, (d) fourth joint

Fig. 4. Comparison of the tracking error: (a) first joint, (b) second joint, (c) third joint, (d) fourth joint

In the step test, the responses of both methods were generally very good, with steady-state errors approaching zero. However, detailed observation emphasizes some advantages of the PD folding as follows: the settling time of the PD-folding method is much better than that of the conventional PID method (the PD folding only


needed 0.75 s to reach the desired value, while the traditional PID controller required 1.45 s). The percentage of overshoot (POT) of the PD-folding method was also superior (the POT of the PD folding was only 14.3%, while that of the PID was up to 18.5%). The reason for such differences is that the PD-folding method possesses adaptation ability, wherein its control parameters (see Figs. 5, 6, 7 and 8) were automatically adjusted to make the system stable, speed up the response, and minimize POT and errors. Thanks to the intelligent corrections of the 2-layer neural network, even with a simple input function, the proposed idea showed clear advantages over the traditional PID method. Next, another test with a more complex profile was performed to examine more deeply the optimization, as well as the difference between the two methods.
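The two figures of merit quoted here can be computed from a simulated response as sketched below; the 2% settling band is a common convention assumed for illustration, not stated in the paper.

```python
import numpy as np

def overshoot_pct(y, y_final):
    """Percentage of overshoot (POT) relative to the final value."""
    return 100.0 * max(0.0, np.max(y) - y_final) / abs(y_final)

def settling_time(t, y, y_final, band=0.02):
    """Last time the response leaves the +/- band around the final value."""
    outside = np.abs(y - y_final) > band * abs(y_final)
    idx = np.flatnonzero(outside)
    return t[0] if idx.size == 0 else t[min(idx[-1] + 1, len(t) - 1)]
```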

Fig. 5. The adaptive gains (a, b, c) of 1st joint

Fig. 6. The adaptive gains (a, b, c) of 2nd joint

Fig. 7. The adaptive gains (a, b, c) of 3rd joint


Fig. 8. The adaptive gains (a, b, c) of 4th joint

(b) Sine wave responses. The desired trajectory in the second test was a sinusoidal signal of 0.7 Hz with amplitudes specified in Table 3. Control results of the two controllers are plotted in Figs. 9 and 10. The updating of the control gains of the proposed controller for the four joints is represented in Figs. 11, 12, 13 and 14.

Table 3. The desired sine-wave amplitudes of the manipulator.

Parameter       Value  Unit
Angle joint 1   1      rad
Angle joint 2   1.5    rad
Angle joint 3   2      rad
Angle joint 4   2.5    rad

Fig. 9. Comparison of the tracking performance of the four joints: (a) first joint, (b) second joint, (c) third joint, (d) fourth joint


Fig. 10. Comparison of the tracking error of the four joints: (a) first joint, (b) second joint, (c) third joint, (d) fourth joint

With the more complex profile, the difference between the two methods could be easily recognized. As seen in Fig. 10, the PD-folding method still retained its advantages over the traditional PID method. The error is the biggest difference: for example, at the first joint, the control error of the PD folding was only 0.006, while that of the PID was 0.11. The response time is another notable point. To this end, as observed in Figs. 11, 12, 13 and 14, the control parameters changed quickly to recalibrate the system.

Fig. 11. The adaptive gain (a, b, c) of 1st joint

Fig. 12. The adaptive gain (a, b, c) of 2nd joint


Fig. 13. The adaptive gain (a, b, c) of 3rd joint

Fig. 14. The adaptive gain (a, b, c) of 4th joint

The basic PID and PD-folding controllers both provided good performance for the two different types of reference input. However, the obtained data reveal that the PD-folding controller is much more effective than the other, thanks to the proposed learning mechanism. The adaptation design indicates that the learning gain $K_1$ mainly suppresses POT, $K_2$ is used for minimizing noise and ripples, while $K_3$ eliminates the steady-state errors well. As a result, the proposed controller satisfies the predefined requirements: model-free, self-learning, stable, and high control performance with small steady-state errors and fast responses.

5 Conclusions

In this paper, a model-free controller is designed based on a PD control framework integrated into a neural network structure. For optimizing the control performance, an adaptation law is developed for automatic gain tuning. The proposed controller was tested on a 4-DOF model in a Matlab/Simulink simulation. The results indicate that the proposed algorithm can exhibit high tracking accuracy compared to other controllers. In fact, since the nature of the robot system is nonlinear, a nonlinear control design with adaptation ability would yield excellent control accuracy. Expanding the controller into a nonlinear structure is thus one direction of future research for this work.

Acknowledgements. The authors are very grateful to the referees and editors for their valuable comments, which helped to improve the paper quality. This work is funded by the Vietnam National Foundation for Science and Technology (NAFOSTED) under grant number 107.012020.10.


References

1. Craig, J.J.: Introduction to Robotics: Mechanics and Control, 3rd edn. Pearson Prentice Hall, USA (2005)
2. Ba, D.X., Le, M.-H.: Gain-learning sliding mode control of robot manipulators with time-delay estimation. In: 2019 International Conference on System Science and Engineering (ICSSE) (2019)
3. Truong, H.V.A., Tran, D.T., Ahn, K.K.: A neural network based sliding mode control for tracking performance with parameters variation of a 3-DOF manipulator. Appl. Sci. 9, 2014–2023 (2019)
4. Rocco, P.: Stability of PID control for industrial robot arms. IEEE Trans. Robot. Autom. 12(4), 606–614 (1996)
5. Su, Y., Muller, P.C., Zheng, C.: Global asymptotic saturated PID control for robot manipulators. IEEE Trans. Control Syst. Technol. 18, 1280–1288 (2010)
6. Yim, W., Singh, S.N.: Feedback linearization of differential-algebraic systems and force and position control of manipulators. In: Proceedings of the 1993 American Control Conference, San Francisco, CA, USA, 2–4 June 1993, pp. 2279–2283 (1993)
7. Tran, D.T., Ba, D.X., Ahn, K.K.: Adaptive backstepping sliding mode control for equilibrium position tracking of an electrohydraulic elastic manipulator. IEEE Trans. Electron 67, 860–869 (2019)
8. Ba, D.X., Yeom, H., Kim, J., Bae, J.B.: Gain-adaptive robust backstepping position control of a BLDC motor system. IEEE/ASME Trans. Mechatron. 23, 2470–2481 (2018)
9. Ba, D.X., Yeom, H., Bae, J.B.: A direct robust nonsingular terminal sliding mode controller based on an adaptive time-delay estimator for servomotor rigid robots. Mechatronics 59, 82–94 (2019)
10. Jin, M., Lee, J., Tsagarakis, N.G.: Model-free robust adaptive control of humanoid robots with flexible joints. IEEE Trans. Ind. Electron. 64(2), 1706–1715 (2017)
11. Baek, J., Jin, M., Han, S.: A new adaptive sliding mode control scheme for application to robot manipulators. IEEE Trans. Ind. Electron. 63(6), 3628–3637 (2016)
12. Vo, A.T., Kang, H.-J.: An adaptive neural non-singular fast-terminal sliding-mode control for industrial robotic manipulators. Appl. Sci. 8, 2562 (2018)
13. Wai, R.J., Muthusamy, R.: Design of fuzzy-neural-network-inherited backstepping control for robot manipulator including actuator dynamics. IEEE Trans. Fuzzy Syst. 22, 709–722 (2014)
14. Le, T.D., Kang, H.-J., Suh, Y.-S., Ro, Y.-S.: An online self-gain tuning method using neural networks for nonlinear PD computed torque controller of a 2-DOF parallel manipulator. Neurocomputing 116, 53–61 (2013)
15. Skoczowski, S., Domesk, S., Pietrusewicz, K., Broel-Plater, B.: A method for improving the robustness of PID control. IEEE Trans. Ind. Electron. 58(6), 1669–1676 (2005)
16. Pan, Y., Li, X., Yu, H.: Efficient PID tracking control of robotic manipulators driven by compliant actuators. IEEE Trans. Control Syst. Technol. 27(2), 915–922 (2019)
17. Meza, J.L., Santibanez, V., Soto, S., Llama, M.A.: Fuzzy self-tuning PID semiglobal regulator for robot manipulators. IEEE Trans. Ind. Electron. 59(6), 2709–2717 (2011)
18. Liu, Z., Lai, G., Zhang, Y., Chen, P.: Adaptive fuzzy tracking control of nonlinear time-delay systems with dead-zone output mechanism based on a novel smooth model. IEEE Trans. Fuzzy Syst. 23(6), 1998–2011 (2015)
19. Yang, C., Jiang, Y., Na, J., Li, Z., Cheng, L., Su, C.Y.: Finite-time convergence adaptive fuzzy control for dual-arm robot with unknown kinematics and dynamics. IEEE Trans. Fuzzy Syst. 27(3), 574–588 (2019)


20. Wang, L., Chai, T., Zhai, L.: Neural-network-based terminal sliding-mode control of robotic manipulators including actuator dynamics. IEEE Trans. Ind. Electron. 56(9), 3296–3304 (2009)
21. Ba, D.X., Truong, D.Q., Ahn, K.K.: An integrated intelligent nonlinear control method for pneumatic artificial muscle. IEEE/ASME Trans. Mechatron. 21(4), 1835–1845 (2016)
22. Thanh, T.U.D.C., Ahn, K.K.: Nonlinear PID control to improve the control performance of 2 axes pneumatic artificial muscle manipulator using neural network. Mechatronics 16, 577–587 (2006)
23. Ba, D.X., Ahn, K.K., Tai, N.T.: Adaptive integral-type neural sliding mode control for pneumatic muscle actuator. Int. J. Autom. Technol. 8(6), 888–895 (2014)
24. Le, T.D., Kang, H.J., Suh, Y.S., Ro, Y.S.: An online self-gain tuning method using neural networks for nonlinear PD computed torque controller of a 2-DOF parallel manipulator. Neurocomputing 11, 53–61 (2013)
25. Ahn, K.K., Nguyen, H.T.C.: Intelligent switching control of a pneumatic muscle robot arm using learning vector quantization neural network. Mechatronics 17, 255–262 (2007)
26. Lewis, F.L., Abdallah, C.T., Dawson, D.M.: Control of Robot Manipulator. Macmillan, New York (1993)
27. Yasui, S.: Stochastic functional Fourier series, Volterra series, and nonlinear systems analysis. IEEE Trans. Autom. Control 24(2), 230–242 (1979)
28. Abid, S., Fnaiech, F., Najim, M.: A fast feedforward training algorithm using a modified form of the standard backpropagation algorithm. IEEE Trans. Neural Netw. 12(2), 424–430 (2001)
29. Lin, N.Q., Xuan, D.M., Ba, D.X.: Advanced control design for a high-precision heating furnace using combination of PI/neural network. J. Tech. Educ. Sci. 55, 25–31 (2020)
30. Ang, K.H., Chong, G., Li, Y.: PID control system analysis, design, and technology. IEEE Trans. Control Syst. Technol. 13(4), 559–576 (2005)

Designing and Analysis of a Soft Robotic Arm

Yousef Amer1, Harsh Rana1, Linh Thi Truc Doan1,2(&), and Tham Thi Tran2

1 UniSA STEM, University of South Australia, Adelaide, Australia
[email protected]
2 Department of Industrial Management, Can Tho University, Can Tho, Vietnam

Abstract. The research presented in this paper aims to design a soft robotic arm which can lift a patient in or out of a bed or a wheelchair on command. The new design copes with some existing limitations of the Robot for Interactive Body Assistance (RIBA), which has a lifting capacity of 61 kg and a limited degree of freedom. The proposed robotic arm is modelled in computer-aided design (CAD) software, namely SolidWorks. The results show that the developed robotic arm can lift a person of 82 kg, a 40% increase in carrying capacity compared to RIBA. The design of the robotic arm has also considered the comfort of the person whom the robot lifts, improving feelings of security and boosting confidence in the process. A special heating arrangement is also incorporated within the layers of the robotic arm. As populations age with reduced numbers of care workers, the proposed soft robotic arm seeks to find a solution for aged-care limitations.

Keywords: Robot · Aged care · Soft robot arm · RIBA

1 Introduction

The world's population is facing a significant rise in the number of elderly people, and this figure is estimated to reach approximately 2.1 billion by 2050 [8]. Recently, this increase has already been seen in countries such as the USA, China, Japan, India and the UK. In Australia, the elderly population is estimated to increase by approximately 30% by 2027 [14]. Ageing of the population will lead to increasing demand for health services as well as health workforces such as doctors, nurses and carers. There are some issues that old-age people face in their daily chores, such as eating, moving from bed to chair, and walking. Hence, governments should have a plan to ensure the future needs of the ageing population are met [16]. In this context, demand for supporting aged people with an emphasis on using new technologies is necessary. Therefore, a feasible solution is to design a soft robot which can interact like humans to assist aged-care staff and nurses in taking care of older people [10]. Soft robots can improve flexibility and adaptability for tasks, as well as safety when working around humans. These characteristics allow for their potential use in the areas of medicine and manufacturing. Soft robots may be equipped with various sensors that can sense and detect the current mood of a person to implement the task accordingly [15]. The soft robot is made from unique soft materials


that can lift patients and/or people from one specific place to another. Moreover, it can also have the capability of voice recognition, which can be very useful at times. Various robots have been designed to support people, such as PARO, Care-O-Robot, RIBA, RiMan, Kismet and Aspire. These types of robots will be discussed in detail in the literature review section. Fully functioning humanoid robots have been developed for various purposes, including taking care of people and supporting carers. One of the many humanoid robots is the RIBA robot, which has been designed to assist nurses and carers in taking care of patients and elderly people [5]. One of the main applications of the RIBA robot in specific is to lift the patient from the bed or wheelchair and place them in the required position. The RIBA robot is developed to tackle various challenges, such as lifting patients who may have physical limitations or mental discomfort [13]. The RIBA robotic arm uses specially designed elbow joints and shoulder joints made of stainless steel. The system used in the joints has a gear system installed in the small joint structure for converting the motion of the DC motor into the desired motion. The robotic arm is designed in such a way that the electrical components can be adjusted between the metallic structures of the arm. The system is designed to be very compact in order to maintain the centre of gravity at a proper position for balance. However, there are still some existing issues with the RIBA robot which need improvement, since this robot has a lifting capacity of 61 kg and a limited degree of freedom. The RIBA robotic arm has a length of 0.8 m with all the sensors and other components fitted together in the system, thus making the system very heavy. Therefore, this research aims to improve the weight-carrying capacity and reduce the overall structural weight of the RIBA robot. This study seeks a soft-exterior robot arm design which can be user-friendly, straightforward, comfortable and safe for old people and patients. Some significant contributions of this research can be summarized as follows:

• The designs of different robotic arms are reviewed to find the research gap.
• Selection of the appropriate design for the new soft robotic arm.
• A new robot arm design is proposed to improve some limitations of the RIBA robot.
• The verification of the new design is carried out to check whether the changes in the design can lift more weight or not.

The rest of the paper is organized as follows. Section 2 reviews some current types of assistive robots, while the proposed structural design for a robotic arm is discussed in Sect. 3. Section 4 presents the design of the proposed robot arm, whereas Sect. 5 focuses on the validation of the proposed design. Section 6 ends with the conclusion and future work.

2 Literature Review

This section reviews some types of robots which have been designed to assist people in real life. Lin and Schmidt [12] examined an assistive robot named Kismet. This robot does not depend on one criterion for identification of the mood of a


person. To achieve higher efficiency in identifying the mood and emotional state of a person, Kismet uses more than one cue, namely speech and facial expression. In the case of the Pleo robot, a haptic sensor is used, which gives a unique sense of belonging and warmth like a real person. Thus, to precisely identify a person's state of mind, a combination of techniques such as facial-expression recognition, speech recognition and postural-expression recognition is required. It can also be concluded that, alongside the technique used to identify the mood of a person, a feedback system is needed to improve the effectiveness of the process. To achieve this, specific equipment is required, such as cameras for facial recognition, microphones for speech and speech recognition, and various sensors to process the signals and identify the state. Encarnação [6] also explained the applications of various types of robots. It is stated that assistive robots can be beneficial in many scenarios, such as assistance to a disabled person, medical industries and care centres. The design of an assistive robot is based on the characteristics supporting the assistance, which are the mechanism of actuation, the programmed robot and a greater amount of self-sufficiency. Chen and Kemp [4] researched human-robot interaction with about 20 nurses. The robot has an arm with 7 degrees of freedom (DOF), elastic actuators, and various sensors for collecting data. The robot was then handled with the help of a gamepad controller for performing various motions, and a similar procedure was followed by the nurses to control the robot. It was found that the nurses and caregivers exert more force at the end effectors, and the concern was raised that errors in robots dealing with patients can lead to fatal problems. The ASPIRE robot is in the development phase and does not yet have a sophisticated software platform to perform complicated tasks to assist older people at aged-care centres and medical centres [3]. Unlike other robots in the field, ASPIRE has one key advantage: it uses small hardware systems, which improves cost-efficiency. Furthermore, the flexibility and adaptability of the robot are admirable. The controller used in the ASPIRE robot is the L1 controller, which makes it very quick and versatile; it is also robust due to the effective transformation in the transient phase. Another assistive robot is the European robot named Care-O-Robot, which was made to assist the elderly in living on their own [11]. The robot contains three special laser sensors to create an omnidirectional system and also includes a 3-dimensional time-of-flight camera. The robot has 4 DOFs and is capable of serving small things like water. This type of robot is typically used for helping old-age persons in their daily activities, enabling them to live on their own. Bemelmans et al. [2] examined the PARO robot, which is developed to assist with picking up and placing the older person; however, this robot is not yet entirely ready to carry out the task successfully. PARO can be handy and give a sense of belonging and warmth like a real person, whereas a robot like RIBA can be beneficial for people and for caregivers with limited capabilities [1]. The RIBA robot was developed by two Japanese organisations, Riken and Tokai, and has advanced capabilities [7]. The main aim of the robot is to lift a patient from a particular place, like a chair or bed, and place the patient at the desired location. This is a most complicated task for a robot to perform. The structure of the robot is specially designed, with specific joints and lengths of parts for successfully carrying


out the process. The rigid structure makes sure that the patient is safely lifted, while soft material covers the top of the structure, providing a feasible solution for the interaction of the robot with human beings [3]. The mechanical characteristics, such as the degrees of freedom, of current robots are presented in Table 1.

Table 1. The mechanical characteristics of current robots

PARO: DOF 7; Application: Therapeutic robot; Description: used for interacting with people and assisting at the mental level; Specifications: height 0.16 m, width 0.35 m, length 0.57 m, weight 2.7 kg; light sensor and tactile sensor; microphone, DC motors, nickel-metal-hydride battery.

ASPIRE: DOF 7; Application: Spherical robot; Description: used in the medical centre for assistance; Specifications: DC motor (synchronous and asynchronous); metal structure, cameras and sensors.

Kismet: DOF 21; Application: Emotional and social interactions; Description: used to support a person at a social and emotional state; Specifications: height 0.38 m, weight 7 kg; four cameras, Maxon DC servomotors, aluminium frame.

RIMAN: DOF 23; Application: Care-giving robot; Description: used to lift a person (typically a child) from bed to and from a wheelchair; Specifications: height 1.52 m, weight 100 kg; load-carrying capacity 12 kg; tactile sensors and tactile controllers; semiconductor pressure sensors; two cameras, microphone.

Robear: DOF 15; Application: Care-giving robot; Description: lifts a person from and to a bed; Specifications: height 1.4 m, weight 180 kg; actuators with a low gear ratio; torque sensors and rubber capacitance sensors.

Care-O-Robot: DOF 28; Application: Service robot; Description: helps and assists with household activities; Specifications: height 1.58 m, weight 140 kg; software platform CANopen; safety laser scanners, cameras, LED lights; tactile sensors and wheels.

RIBA: DOF 27; Application: Aged-care centres and medical centres; Description: used for picking the person up from bed to wheelchair and vice versa; Specifications: weight 180 kg, height 1.4 m; load-carrying capacity 61 kg; DC motors, potentiometers, omnidirectional wheels; two cameras in the eyes, two microphones; 214 tactile sensors (each hand); capacitance and resistance rubber sensors; torque sensors, nickel-metal-hydride battery.

The current RIBA robotic arm has the full motion capability of a normal human arm, with 7 degrees of freedom, just like a human being. Soft materials are used at the very outer end of the robotic arm, which is in contact with the person, making it safe to work around humans and to lift a person with the desired comfort. Three DC motors are used for the actuation of the joints of the arm. Based on the design of the current RIBA robot, the weight of the person it can lift is restricted to 61 kg, while the overall weight is 180 kg [9]. Hence, the weight-carrying capacity of the robot is merely one-third of its overall weight. The factor of safety considered in the system is 2, as it is used in the medical industry and deals directly with lifting a person. Therefore, this research aims to improve the weight-carrying capacity and reduce the overall structural weight of the RIBA robot.

3 Proposed Structural Design for a Robotic Arm

This section presents the selection of the materials, actuators, gearheads, gearboxes, shoulder joint and other considerations used for the operation of the robotic arm.

3.1 The Solution for the Weight of the Robotic Arm Structure

In the current design, the material for the structure of the arm is stainless steel, which has excellent strength and practically very high resistance to corrosion. Other advantages of using stainless steel for the structure of the robot include resistance to fire and heat, making it suitable for the medical industry. However, the major limitation of the material is its high specific weight, which can be as high as 8 kg/dm3, making the overall structure of the robotic arm heavy. There are various alternatives for the same application which have a lower specific weight than stainless steel and can reduce the weight of the product, such as aluminium, brass, nickel and wrought iron. However, various parameters are required to select and finalise the material for the structure: strength, which is one of the crucial parameters, weight, working conditions, machinability, cost and availability. Based on the parameters analysed, the metal that satisfies all desired parameters is aluminium. However, there are many grades of the metal, with different alloy compositions that change the properties. There are seven grades of aluminium which have different alloy proportions and different material parameters. The crucial parameters for the different grades of aluminium are explained below.

• Strength – the strength of alloys 1100 to 6063 is either low or medium, whereas the strength of alloy 7075 is high.
• Machining – the machinability of all the aluminium grades is either good or moderate; they can thus be machined with normal tools, in contrast to the special tools required for stainless steel.
• Corrosion resistance – the corrosion resistance of all the grades is excellent or good, and alloy 7075 falls in the good corrosion-resistance category.
• Heat treatment – heat treatment can be applied to all the alloy types except alloy 1100, alloy 3003 and alloy 5052.

When the application of the robot and the parameters of the different grades are considered, alloy 7075 turns out to be the most suitable option. Thus, the proposed design will use high-strength aluminium (7075) for the structural design of the robotic arm, which could reduce the weight of the structure by almost three times.

3.2 Selection of Actuators, Gearhead and Gearbox System

To attain all the motions and movements of the robotic arm similar to a human hand, various actuators need to be selected, based on the selection sheet, for the actuation of the joints. The first actuator selected is the actuator for the shoulder joint. In the shoulder joint, the movement mainly required is to move the arm up or down. The DC brushless motor selected has various advantages apart from its ultra-slim design. The motor is specially designed for high-precision positioning tasks and has a high level of protection against interference pulses. The gearhead selected is a planetary gearhead which


can be directly linked to the DC motor, with high-precision operation. One crucial consideration here is that the weight of the patient to be lifted is at least 80 kg at a certain distance.

3.3 Selection for Shoulder Joint

As shown in Fig. 1, a person of 80 kg is 400 mm away from the shoulder joint. Thus, the calculation for the motor and gearbox is as follows:

• The actuator selected for the movement of the robotic arm is a DC brushless motor manufactured by the Maxon company. The load to carry is at 400 mm; thus, the torque required for the motor is as follows:
• Torque required = r × F × sin θ
• For safety, considering the maximum value sin θ = 1, the torque = 0.4 × (80 × 9.8) ≈ 313 N·m

Therefore, the torque required for the motion of the arm under full load is at least 313 N·m. Based on the torque requirement and other considerations such as small size, cost and speed, the DC brushless motor selected and used in the project is the "EC 90 Flat" manufactured by Maxon. The detailed parameters are described in the selection and information sheet of the motor attached in Appendix 1. In a nutshell, the properties of the proposed motor are listed below:

• Nominal voltage = 18 V
• Nominal speed = 1620 rpm
• Nominal torque = 1.620 N·m
• Efficiency = 87.2%

To carry a high load with very smooth motion, the gearbox selected for the shoulder joint is a bevel-gear-type gearbox manufactured by the Chiaravalli company. The model "CHM 075" bevel gearbox is selected, which is able to handle a torque of more than 200 N·m. The reduction ratio is 100:1 relative to the motor; thus, the system of motor and gearbox will be able to handle the torque of 313 N·m. When the gearbox is connected directly to the motor, there are extra savings in space without compromising the operation of the robot and safety. In a nutshell, the specifications of the selected gearbox are:

• Rotations = 14 rotations/min
• T = 203 N·m

Fig. 1. Load of a person (80 kg at 400 mm) on the arm
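The sizing arithmetic above can be checked in a few lines. The minimum-reduction helper below is an illustrative addition (it uses the EC 90 Flat's nominal torque and ignores gearbox efficiency), not part of the paper's procedure.

```python
def required_torque(mass_kg=80.0, lever_m=0.4, g=9.8):
    """Worst-case static shoulder torque with sin(theta) = 1, as computed above."""
    return lever_m * mass_kg * g                       # 0.4 * 80 * 9.8 = 313.6 N*m

def min_reduction(load_torque, motor_torque=1.620):
    """Gear reduction needed for the motor's nominal torque to cover the load."""
    return load_torque / motor_torque

tau = required_torque()
print(f"required torque: {tau:.1f} N*m")               # ~313.6 N*m
print(f"minimum reduction: {min_reduction(tau):.0f}:1")  # ~194:1, losses ignored
```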

3.4 Selection for Elbow Joint

The load of the person acts directly on the elbow and is distributed over the area of the arm and forearm. Thus, considering maximum safety, the motor selected for the elbow joint is the same as for the shoulder joint (EC 90 Flat), whereas the gearbox coupled with the motor is selected from the Maxon company. The gearhead selected is the GP 81 A, an arrangement of planetary gears with three stages which is able to deal with a load of 1000 N. The arrangement of the gearhead and DC brushless motor is then connected to the bevel gearbox, which transmits the motion at an angle and thus translates rotational motion into up-and-down movement. The bevel gearbox is the same as the one used in the shoulder joint.

3.5 Other Parameters and Considerations

Once the motors and gearboxes are selected, various parameters need to be taken into consideration. One design requirement is enough space to add the parts required for proper operation. Various sensors are used in the robotic arm for sensing its position, the load applied on the arm and the parameters related to the movement. Thus, arrangements must be made not only for the sensors throughout the arm but also for the connection of the sensors and motors to the battery and processing unit. The important design parameters are summarised succinctly below:

– The material selected for the metallic structure of the robot and robotic arm is aluminium of grade 7079:
  • The aluminium alloy is composed of zinc, magnesium and copper; iron, chromium and titanium may also be present in the alloy.
  • High strength
  • Yield strength of 240 MPa
  • Heat treatment of the specimen is possible
– The motor selected for the actuation at the joints of the robotic arm is a DC brushless motor:
  • Nominal voltage = 18 V
  • Nominal speed = 1620 rpm
  • Nominal torque = 1.620 N·m
  • Efficiency = 87.2%
– The gearbox for the shoulder joint is selected with the following properties:
  • Rotations = 14 rotations/min
  • T = 203 N·m
– For the elbow joint, the gearhead coupled with the motor is the GP 81 A planetary gearhead, which has the following characteristics:
  • Number of stages = 3
  • Reduction ratio = 308:1
  • Continuous torque = 120 N·m
  • Efficiency = 70%


4 Proposed Design

4.1 Design of the Proposed Robot Arm

Based on the proposed structural design, the design is carried out in computer-aided design software, namely SolidWorks. The key assumption in the design is that there is no limitation in the software and everything designed can be manufactured. The proposed design of the robotic arm can be seen in Fig. 2.

Fig. 2. Proposed design of arm

As shown in Fig. 2, the robotic arm is kept almost similar to the existing arm of the RIBA robot. The arm can be divided into two parts, the forearm and the arm, with two crucial joints: the shoulder joint and the elbow joint. The designed arm has 7 degrees of freedom and can perform all the movements of the current RIBA robot design.

Fig. 3. Arm movement representation


The structural design of the robotic arm is made of aluminium alloy 7079, which provides very high strength while being very light. The structural design of the robotic arm is shown in Fig. 3.

Fig. 4. Structural representation of arm

The robotic arm is designed in two sections (the arm and the forearm) to achieve easy movement. The structure, made of aluminium alloy, is designed with and supported by two walls. The walls are connected by stiffeners providing enhanced support and strength, and are also connected by shafts where movement is required, as seen in Fig. 4. The two walls of the arm and forearm are connected by a bearing to provide smooth movement. As the robot is to be used in the medical industry and care centres, special care is taken in designing the parts so that no grease or oil leaks out and creates a risk. The bearing selected is sealed on both sides and requires no maintenance and no oil at specific intervals of time. Thus, the selected bearing is best suited for the application; it is capable of withstanding the load and can work at high rotations per minute. The components for the robotic arm are selected considering the worst-case scenario, with maximum safety taken into account. As carefully presented in Fig. 4, the walls of the arm have cuts all over their length. The cuts serve a specific purpose, which is to reduce the weight of the structure of the robotic arm without compromising its strength. The thickness of the wall is kept such that it is sufficient to support the other walls and can hold the fasteners properly. Observing the design of the forearm wall, it can be seen that a special angle is provided to the wall of the forearm only. When the robot lifts a person, the person is positioned in the centre of the robotic arm, right above the elbow joint. Thus, it is evident that the load will be maximum at the centre of the robotic arm; the load may not distribute evenly and can be greater in the centre, which creates issues for the strength of the structural design. To solve this issue and mitigate the risks involved while carrying a person, a special design is adopted: the walls in the forearm are given a special angle which helps distribute the load throughout the length of the walls. Besides, the walls of the arm are provided with extra material in the very centre, which gives additional strength to tackle any issue when the patient or person is lifted. Thus, special design consideration is


taken, ensuring the load is distributed evenly and ultimately reducing the risks when the robot is in operation.

4.2 Components Installation

Once the structural design is finished, the components required for proper operation are analysed, and the space requirement is worked out. All the components are therefore strategically placed according to the use and application of each component. Figure 5 shows the detailed representation of the robotic arm with all the components placed between the walls of the robotic arm.

Fig. 5. Components and space representation in arm

The first components for the operation of the robotic arm are the actuators. It can be seen that two DC motors are installed at the two joints, the elbow joint and the shoulder joint. However, the orientation of the motors is along the length of the arm, as shown in Fig. 5. Therefore, an arrangement is required to transmit the rotational motion of the motor, which is along the length, into the up-down motion of the arm. To achieve this movement, a gearbox using a bevel-gear system is employed, thus providing the solution. The gearbox selected has a sufficient reduction ratio so that it can withstand the load and still provide smooth movement of the arm. Figure 5 also shows that there is enough space for the electrical components and the connections to the power source. Besides, a battery pack made of nickel-metal hydride (NiMH) has been installed on one of the wall stiffeners in the structure. This battery is mainly used for supplying power to the sensors and the components in the robotic arm. Furthermore, observing the design of the RIBA robot, it is evident that its forearm is flat, where a capacitive sensor is installed to detect the position and the weight of the person it lifts; this also provides very good support while lifting the patient. In the proposed design, however, the shape of the forearm is kept circular, but the radius of the arm is sufficient to ensure the comfort of the patient. It also gives extra space, which enables the opportunity to accommodate more components. Thus, the proposed design improves the ratio of overall weight to load-carrying capacity, with an enhancement in the performance of the current RIBA robot.


5 Validation of the Proposed Design

To ensure that there is no catastrophic failure with the proposed design, it is crucial to conduct a finite element analysis to identify the load-carrying capacity of the new robot. Various types of stress, strain and load apply to the various parts and assemblies, and their effects on the parts need to be studied. Thus, simulation analysis is a crucial part of the design process, as it efficiently and instantly provides validation of the results. SolidWorks Simulation is utilised to conduct the finite element analysis. Based on the aim of the design, a static analysis is examined to validate the design. The main aim of the robotic arm is to lift a patient or a person, and thus the load is applied directly to the structure of the robotic arm. In implementing the simulation in the SolidWorks software, there are some assumptions as follows:

• When the deformation is very small, the deformation is not considered while plotting the stress chart
• The load is applied very slowly at a constant rate
• There is a linear relationship between stress and strain until the yield point

In analysing the structure of the designed robotic arm, a few steps are carried out, starting with fixing the shaft holes in the shoulder joint. When the robotic arm lifts a person, the shoulder is the point which locks the arm and is the only point of contact between the arm and the rest of the body, making it the key fixed region. In the next step, the load is applied to the applicable regions of the robotic arm. Then the meshes, which can be of various sizes and types, are created, and the last step is running the simulation to identify the effects of the loads on the part. The angle at which the current RIBA robot design conducted the simulation to identify the load-carrying capacity is shown in Fig. 6. Based on the analysis carried out by the Riken company, the angle of the shoulder joint is 42° and that of the forearm is 43°, which is ideal for lifting a person. To compare the simulation of the proposed design with the current design, this study conducts the simulation keeping the arm at the same angles.

Fig. 6. Finite element analysis angle of the robotic arm


Figure 7 shows the results of the finite element analysis simulation carried out in the SolidWorks 2017 software. Based on a review of the structure, the stiffener wall has been removed, which improves the safety consideration. Various steps are carried out to achieve the results. The very first step is to apply the material, the 7079 alloy, to the structure of the robotic arm and define the properties of the material in the software. The next step is to define the fixed or rolling parts in the assembly and apply them. The next step is to apply the loads on the surfaces or parts of the assembly; two types of load are applied in the designed robotic arm assembly, the static load on the arm and gravity. Once both loads are applied appropriately, the mesh is created; a fine mesh is generated to get accurate results. Lastly, the simulation study is carried out, which fetched the results shown in Fig. 7.

Fig. 7. Finite element analysis of the robotic arm

The yield strength of the alloy 7079 is 240 MPa, whereas at a load equivalent to 80 kg plus the gravity load, the maximum stress developed was 114 MPa, near the fixed region of the shoulder. This means the arm can lift a person weighing up to 164 kg. However, as the robot deals with lifting people, the factor of safety is selected as high as 2, limiting the rated load-carrying capacity to 82 kg.
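The capacity argument is a linear-elastic scaling, sketched below. The small gap between this rounding-limited estimate (about 84 kg) and the paper's reported 164 kg and 82 kg figures presumably comes from rounding of the quoted peak stress; the helper itself is an illustration, not the paper's tool.

```python
def rated_capacity(test_load_kg, sigma_yield_mpa, sigma_max_mpa, fos=2.0):
    """Linear-elastic scaling: load at yield divided by the factor of safety."""
    return test_load_kg * (sigma_yield_mpa / sigma_max_mpa) / fos

print(rated_capacity(80, 240, 114))   # ~84 kg with these rounded inputs
```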

6 Conclusions and Future Work

This research was carried out to improve the design of the robotic arm of the RIBA robot, including the analysis and selection of appropriate materials for a soft robotic arm that can assist carers and nurses in the health-care sector more efficiently. The research found that the new design can improve the load-carrying capacity to 82 kg, which is approximately 40% higher than that of the RIBA robot. In the future, the research could employ other types of gear systems for the actuation of the robot and compare the results to identify the gear system with the maximum smoothness. In addition, other scenarios would be used for


actuation of the robotic arm in future work. For instance, a ball-screw system can be used for actuating the robotic arm in a specific direction, along with various other types of actuation.

References

1. Bedaf, S., Huijnen, C., Van Den Heuvel, R., et al.: Robots supporting care for elderly people. In: Robotic Assistive Technologies, pp. 309–332. CRC Press (2017)
2. Bemelmans, R., Gelderblom, G.J., Jonker, P., et al.: Effectiveness of robot Paro in intramural psychogeriatric care: a multicenter quasi-experimental study. J. Am. Med. Direct. Assoc. 16, 946–950 (2015)
3. Böhlen, M., Karppi, T.: The making of robot care. Transformations (14443775) (2017)
4. Chen, T.L., Kemp, C.C.: A direct physical interface for navigation and positioning of a robotic nursing assistant. Adv. Robot. 25, 605–627 (2011)
5. Ding, M., Ikeura, R., Mori, Y., et al.: Measurement of human body stiffness for lifting-up motion generation using nursing-care assistant robot—RIBA. In: 2013 IEEE SENSORS, pp. 1–4. IEEE (2013)
6. Encarnação, P.: Fundamentals of robotic assistive technologies. In: Robotic Assistive Technologies, pp. 1–24. CRC Press (2017)
7. Guo, S., Kato, Y., Ito, H., et al.: Development of rubber-based flexible sensor sheet for care-related apparatus. SEI Tech. Rev. 75, 125–131 (2012)
8. Hyde, S., Dupuis, V., Mariri, B.P., et al.: Prevention of tooth loss and dental pain for reducing the global burden of oral diseases. Int. Dent. J. 67, 19–25 (2017)
9. Jeelani, S., Dany, A., Anand, B., et al.: Robotics and medicine: a scientific rainbow in hospital. J. Pharmacy Bioallied Sci. 7, S381 (2015)
10. Kachouie, R., Sedighadeli, S., Khosla, R., et al.: Socially assistive robots in elderly care: a mixed-method systematic literature review. Int. J. Hum.-Comput. Interact. 30, 369–393 (2014)
11. Lee, H.R., Šabanović, S., Stolterman, E.: How humanlike should a social robot be: a user-centered exploration. In: 2016 AAAI Spring Symposium Series (2016)
12. Lin, Y., Schmidt, D.: Wearable sensing for bio-feedback in human robot interaction. In: Wearable Electronics Sensors, pp. 321–332. Springer (2015)
13. Mukai, T., Hirano, S., Yoshida, M., et al.: Tactile-based motion adjustment for the nursing-care assistant robot RIBA. In: 2011 IEEE International Conference on Robotics and Automation, pp. 5435–5441. IEEE (2011)
14. Ofori-Asenso, R., Zomer, E., Curtis, A.J., et al.: Measures of population ageing in Australia from 1950 to 2050. J. Popul. Ageing 11, 367–385 (2018)
15. Rus, D., Tolley, M.T.: Design, fabrication and control of soft robots. Nature 521, 467–475 (2015)
16. Szabo, S., Nove, A., Matthews, Z., et al.: Health workforce demography: a framework to improve understanding of the health workforce and support achievement of the sustainable development goals. Hum. Resour. Health 18, 1–10 (2020)

ROV Stabilization Using an Adaptive Nonlinear Feedback Controller Ngoc-Huy Tran(B) , Manh-Cam Le, Thien-Phuong Ton, The-Cuong Le, and Thien-Phuc Tran Ho Chi Minh City University of Technology, VNUHCM, Ho Chi Minh City, Vietnam [email protected] Abstract. Remotely operated underwater vehicle (ROV) is one of the most important types of underwater robots used in water environments for many purposes, especially for navy and marine industries. They are highly nonlinear systems and often operate in an unpredictable nature of the ocean environment. Therefore they have many initially uncertain and varying parameters. To overcome this problem, we need controllers adapting itself to such changing conditions. In this paper, two control schemes for stabilizing vertical motion of ROVs, called nominal nonlinear feedback controller (NNFC) and adaptive nonlinear feedback controller (ANFC) will be discussed. The main feature and mathematical model of the ROVVIAM900 used as a controlled system are also considered. The comparative study of two controllers is performed in the different scenarios using a numerical simulation. The results highlight clearly the superiority of the ANFC and also validate the choice of this. Keywords: ROV

1

· PD · Adaptive control · Nonlinear system

Introduction

The ROV movement control problem involves two tasks. The first, called the vertical task, is to stabilize the altitude and force the ROV depth to converge to a desired depth value continuously. At the desired depth, the second, called the horizontal task, can be left free for a human operator. While some complicated missions require the ROV to maintain its position and heading, follow a desired path and sometimes satisfy a desired dynamic behavior. To accomplish these tasks, various advanced control techniques have been proposed recently, such as [1,2] for horizontal task and [3,4] for vertical task. In this paper, a nominal nonlinear feedback controller (NNFC) and adaptive nonlinear feedback controller (ANFC) for stabilizing a self-built ROV depth and pitch are examined and compared by numerical simulation.

2 2.1

Dynamic Modeling of the Vehicle The ROVVIAM900

The ROVVIAM900 is a ROV model built at VIAMLAB (VietNam Automation & Mechatronics Laboratory) with its appearance illustrated in Fig. 1a and 1c. c The Editor(s) (if applicable) and The Author(s), under exclusive license  to Springer Nature Switzerland AG 2021 Y.-P. Huang et al. (Eds.): GTSD 2020, AISC 1284, pp. 144–155, 2021. https://doi.org/10.1007/978-3-030-62324-1_13

ROV Stabilization Using an Adaptive Nonlinear Feedback Controller

145

rmg

rma m4

yb

Ob

m5

m1

rmf

rmc

rmd

xb

m7

m6 m2

m3

rmb

rme

(a) 3D model

(b) Top view

xb

rmh

zb

rmi

m6

Ob

(c) ROV prototype in swimming pool

(d) Size view

Fig. 1. The ROVVIAM900 vehicle

The body-fixed reference frame Ob xb y b z b is a moving coordinate frame which is fixed to the vehicle. The distance from the origin Ob to the point where the thruster forces is applied is denoted by rmα where α = a, b, ..., h, i. The propulsion system consists of seven T200 thrusters numbered from 1 to 7. Each thruster is connected to a separated driver. The τmβ where β = 1, 2, ..., 7 defines the β th thruster force vector acting on the ROV and has the corresponding arrow illustrated in Fig. 1b and 1d indicating the positive direction. The angles between the horizontal thrusters (1st , 2nd , 3rd , 4th ) and the axis Ob xb is ±135◦ . This allows a proper amount of maneuverability and controllability for the ROV in the horizontal plane referred as surge (linear longitudinal motion along Ob xb ) and sway (linear transverse motion along Ob y b ). While three vertical thrusters (5th , 6th and 7th ) accelerate the ROV along the Ob z b axis referred as heave. Additionally, the heading involving ψ and attitude stabilization involving φ and θ can also be obtained by differential speed control of the horizontal and vertical thrusters, respectively. Therefore, with the thruster arrangement as analyzed above, the ROV can be commanded to follow arbitrary trajectories in 3D space. In other words, the ROVVIAM900 is a fully-actuated system.

146

2.2

N.-H. Tran et al.

Equations of Motion

6DOF Kinetics and Kinematics Equations. The dynamics of an underwater vehicle involving three frames of reference: the body-fixed frame, the earthcentered earth-fixed frame (ECEF) and the North-East-Down frame (NED) can be conveniently expressed by [5] as compact matrix form: η˙ = J (η)ν

(1)

M ν˙ + C(ν)ν + D(ν)ν + g(η) = τ m + we

(2)

where: M ∈ R6×6 is the inertia matrix, C(ν) ∈ R6×6 defines the Corioliscentripetal matrix, D(ν) ∈ R6×6 represents the hydrodynamic damping matrix, g(η) ∈ R6×1 describes the vector of gravitational/buoyancy forces and moments, τ m = [τ 1 , τ 2 ] = [(X, Y, Z), (K, M, N )] ∈ R6×1 defines the control input vector of forces and moments; we ∈ R6×1 defines the vector of disturbances; ν = [ν 1 , ν 2 ] = [(u, v, w), (p, q, r)] ∈ R6×1 denotes the linear and angular velocity vector in the body-fixed frame; η = [η 1 , η 2 ] = [(n, e, d), (φ, θ, ψ)] ∈ R6×1 is the position and attitude vector decomposed in the ECEF frame and NED frame, respectively; J (η) ∈ R6×6 is the transformation matrix mapping from body-fixed frame to ECEF frame. Thruster Allocation. r bmβ ∈ R3×1 defines the position vector in the bodyfixed frame based on the priori known distances in Fig. 1b and 1d. τ bmβ denotes the thruster force vector expressed in body-fixed frame. The thruster resultant force and torque acting on the ROV denoted τ m is a vector sum of individual forces τ bmβ and torques r bmβ × τ bmβ as: ⎡ ⎢ τm = ⎢ ⎣

7  7  β=1

β=1



τ bmβ

r bmβ × τ bmβ

⎤ ⎥ ⎥ ⎦

(3)

Equation (3) is written in expanded form as follows: ⎞ ⎛ √ 2 (τ + τ + τ + τ ) m1 m2 m3 m4 ⎟ ⎜ √2 ⎟ ⎜ ⎟ ⎜ 2 ⎟ ⎜ (τm1 − τm2 + τm3 − τm4 ) ⎟ ⎜ 2 ⎟ ⎜ ⎟ ⎜ + τ + τ τ m5 m6 m7 √ ⎟ ⎜ τm = ⎜ ⎟ (4) 2 ⎟ ⎜ ⎟ ⎜ √ 2 rmi (τm1 − τm2 + τm3 − τm4 ) − rmc (τm5 − τm6 ) ⎟ ⎜ 2 ⎟ ⎜ ⎜− rmi (τm1 + τm2 + τm3 + τm4 ) − rmb (τm5 + τm6 ) − rma τm7 ⎟ ⎟ ⎜ √2 √ ⎠ ⎝ 2 2 (rmg + rmf ) (τm1 − τm2 ) − (rmd − rme ) (τm3 − τm4 ) 2 2

ROV Stabilization Using an Adaptive Nonlinear Feedback Controller

147

T200 Thruster Modelling. umβ (t) ∈ [1100, 1900] microseconds denotes a st is a steady-state ESC driver PWM input signal as a function of a time. τmβ value of the thruster force. A thrust measurement experiment performed by [6] st with its corresponding value of produces a dataset representing value of τmβ umβ . Based on this, the function as (5) and its fitting curve shown in Fig. 2, st and umβ can be obtain by MATLAB® which denote the relation between τmβ Curve Fitting Toolbox as: ⎧ 2 ⎪ ⎨ κm1 umβ + κm2 umβ + κm3 ; umβ ∈ [1100; 1464] st ; umβ ∈ (1464; 1536) τmβ = 0 (5) ⎪ ⎩ 2 κm4 umβ + κm5 umβ + κm6 ; umβ ∈ [1536; 1900] where κmi with i = 1, 2, ..., 6 is const. The first order transfer function is used as follows to include the β th thruster transient state in the thruster model: 1 τmβ (s) = st τmβ (s) Γmβ s + 1

(6)

st st where: τmβ (s) = L−1 {τmβ (t)}, τmβ (s) = L−1 {τmβ (t)}, L−1 {.} is the inverse laplace transform operator, s is the Laplace variable, Γmβ is the transient time, −Γmβ ln(0.02) = 1.5 s is the time required for the thruster force reached 98% of its steady value.

VERTICAL CONTROLLER 20

ROV MODEL

0

HORIZONTAL CONTROLLER

-20 1100 1200 1300 1400 1500 1600 1700 1800 1900

Fig. 2. BlueRobotics T200 thruster characteristic curve fitting based on experimental dataset provided by [6].

2.3

Fig. 3. The ROVVIAM900 overall control structure.

Design Considerations

Let n(ν, η) = C(ν)ν + D(ν)ν + g(η). In this section, we consider the ROV used has a slow dynamics, and hence it will be moving at velocities low enough to make the Coriolis and the nonlinear dampling terms negligible. This means C(ν) ≈ 0 and D(ν) = D l +D n (ν) ≈ D l . Then n(ν, η) = D l ν +g(η). Equation (2) is rewritten as: (7) M ν˙ + n(ν, η) = τ m + we Some design requirements for the ROVVIAM900, which we can take advantage of to simplify the mathematical models later are listed as follows:

148

N.-H. Tran et al.

– Due to the symmetry of the ROV, the y-coordinates of the center of gravity and buoyancy is zero (yb = yg = 0) and the inertia matrix relative to the center of mass is diagonal (Ixy = Ixz = 0). – The center of gravity of the ROV must be vertically below the center of buoyancy, which means zg − zb ≥ 0. – To deal with an emergency, the designed buoyant force is usually greater than the force of gravity, so B − W = ΔBW > 0. From the design requirements, the kinetics and kinematics equation for roll motion are derived from (1) and (7) as: ⎧ ˙ p + rcφ tθ + qsφ tθ φ= ⎪ ⎪ ⎪ ⎨ 1 p˙ = [K + Kp p − sφ cθ (W zg − Bzb ) (8) 2 mz + Ix − Kp˙ ⎪ g ⎪ ⎪ ⎩ − v(Y ˙ v˙ zg − mzg ) − Yv vzg + mrx ˙ g zg ] Note that m, W , B, xg , zg , zb , Ix , Kp˙ , Kp , Yv˙ , Yv are the ROV parameters. K is the input roll moment. Assuming that the horizontal motion variables in (8) like v, v, ˙ r, r˙ are low enough to make the terms containing them negligible, we have: ⎧ ⎪ ⎨ φ˙ = p + qsφ tθ (9) 1 ⎪ ⎩ p˙ = mz 2 + I − K [K + Kp p − sφ cθ (W zg − Bzb )] x p˙ g Then, we calculate the Jacobian matrix of (9) at the equilibrium point (φ, p, K) = (0, 0, 0) : ⎤ ⎡ 1 qtθ ⎦ Kp Aφ = ⎣ cθ (Bzb − W zg ) (10) mzg2 + Ix − Kp˙ mzg2 + Ix − Kp˙ and the corresponding characteristic equation with its coefficients denoted by κce1 , κce2 , κce3 . s is Laplace variable:   (11) det sI − Aφ = κce1 s2 + κce2 s + κce3 where: κce1 = 1,

κce2

−Kp + (Kp˙ − mzg2 − Ix )qtθ = , mzg2 + Ix − Kp˙

κce3 =

Kp qtθ + cθ (W zg − Bzb ) mzg2 + Ix − Kp˙

The necessary and sufficient condition for (φ, p, K) = (0, 0, 0) to be asymptotically stable is κce1 , κce2 and κce3 are all greater than zero. Thus, we have:  −Kp + (Kp˙ − mzg2 − Ix )qtθ > 0 (12) >0 Kp qtθ + cθ (W zg − Bzb )

ROV Stabilization Using an Adaptive Nonlinear Feedback Controller

149

Inequation (12) can be simplified by considering the value of qtθ low during the ROV’s operation:   >0 >0 −Kp −Kp ⇔ (13) cθ (W zg − Bzb ) > 0 W zg − Bzb > 0 The first condition in (13) is always satisfied because the pKp known as the roll drag moment acts in the opposite direction of the roll angular velocity p, which means Kp < 0. Additionally, the ROVVIAM900 has the small difference between the buoyant force and the force of gravity in practice, which we can consider ΔBW ≈ 0. Thus, the second condition rewrited as W zg − (W − ΔBW )zb = W (zg − zb ) − zb ΔBW ≈ W (zg −zb ) > 0 is also satisfied due to the second design requirement listed above. As a result, (φ, p, K) = (0, 0, 0) is an asymptotically stable equilibrium point which the system always returns to after small disturbances. Hence, it is reasonable to neglect the roll motion in the vertical controller design presented in the next section.

3

Vertical Controller Design

As shown in Fig. 3, the vertical and horizontal task is undertaken by vertical and horizontal controller, respectively. To obtain feedback signals for the horizontal controller, some high-cost underwater navigation sensors such as Doppler Velocity Logger (DVL), Ultra Short Baseline (USBL), . . . (for n and e) and compass sensor (for ψ) are required. Meanwhile, the vertical controller only needs some low-cost sensors like pressure (for d), gyroscopes and accelerometer (for φ and θ). The objective of two vertical controllers designed in this section is to force the ROV to follow the desired depth trajectory and simultaneously stabilize the pitch angle. Let the commanded acceleration in NED frame denoted by an be chosen as the PD-type control: ˜ ¨ d − K dη (14) ˜˙ − K p η an = η   ˜ φ, ˜ θ, ˜ ψ˜ ˜ = η − ηd = n with η ˜ , e˜, d, denoting the error values between the desired setpoints and the measured signals. K p > 0 and K d > 0 can be chosen as diagonal matrices. We apply this equation to the vertical controller as:  n   ad d¨des − Kd,d (d˙ − d˙des ) − Kp,d (d − ddes ) = (15) anθ −Kd,θ θ˙ − Kp,θ θ where Kp,d , Kd,d , Kp,θ and Kd,θ are controller parameters, ddes defines the desired signal for depth while the desired signal for pitch is constant and equals to zero. Differentiating both sides of (1) with respect to time, then substituting ν˙ ¨ with an , we obtain the commanded acceleration relation between with ab , η

150

N.-H. Tran et al.

body-fixed and NED frame as:   ab = J −1 (η) an − J˙ (η)ν

(16)

Neglecting the horizontal motion variables (u, v, r), horizontal commanded accelerations (ann , ane , anψ ) and roll commanded acceleration anφ in the 3rd and 5th row of (16), we have: ⎤ ⎡  b  and cφ cθ aw ˙ θ (c2 − 1) ⎦ (17) = ⎣ anθ cφ cθ + q θs φ abq cθ The formula below is derived from (4) and used to calculate the vertical thruster forces from the generalized force Z and pitch moment M . These vertical thruster forces aren’t affected by the horizontal thruster forces due to the rmi elimination. Additionally, the equation between τm5 and τm6 is the result presented the previous section, so: τm5 = τm6 = τm7 = − 3.1

M + Zrma 2(rma − rmb )

(18)

M + Zrmb rma − rmb

(19)

Nominal Nonlinear Feedback Controller (NNFC)

The NNFC control law based on the simplified kinematics equation as (7) of the ROV and given by: (20) τ m = M ab + n To make the vertical controller not affected by the horizontal controller, we ignore the horizontal and roll commanded accelerations. Then substituting ab =   0 0 abw 0 abq 0 into (20) and extracting Z and M from that, the vertical control law is obtained as follows:    b  a Z = M Z,M wb + nZ,M (21) aq M where:

 M Z,M =  nZ,M =

m − Zw˙ Zw˙ xg − mxg

−mxg Iy − Mq˙ + m(x2g + zg2 )



cφ cθ (B − W ) − Zw w sθ (W zg − Bzb ) − Mq q + Zw wxg + cφ cθ (W xg − Bxb )



ROV Stabilization Using an Adaptive Nonlinear Feedback Controller

151

Hence, to execute this control law, we need to know: – ROV parameters: the buoyant force (B), the force of gravity (W ), mass (m), the moment of inertia about the y axis (Iy ), the center of gravity position (xg , zg ), the center of buoyancy position (xb , zb ), dampling coefficients (Zw , Mq ), added mass coefficients (Zw˙ , Mq˙ ). – Measurement values: q and d are measured from the gyroscopes and the pressure sensor respectively, φ and θ are calculated from the accelerometers and gyroscopes, w is estimated by using measurements from the accelerometers and the pressure sensor. 3.2

Adaptive Nonlinear Feedback Controller (ANFC)

So far, the NNFC under the assumption that all ROV parameters are known has been discussed. In this section, a parameter adaptation law used together with the previous control laws is derived by dividing the matrix M and n into two parts: the known (denoted by “k”) and unknown part (denoted by “uk”) containing certain and uncertain parameters, respectively. So the kinematics equation can be rewritten as: (M k + M uk ) ν˙ + nk + nuk = τ m Taking the control law to be:   ˆ est τ m = M k ab + nk + Φest Θ

(22)

(23)

where the hat symbol denotes the parameters’ estimates. Note that the unknown term is parameterized by multiplying the regressor matrix Φest by the estimated ˆ est , which is updated according to the following update law: parameters vector Θ ˆ˙ est = −Γ est Φ J −1 y est Θ est

(24)

˜ +c1 η with y est = c0 η ˜˙ ; c0 and c1 are positive constants. Γ est is a diagonal positive definite matrix representing the adaptation gain. The choice of K p , K d , c0 and ˜ to zero: c1 must satisfy the following requirements to ensure convergence of η ⎧ 2 ⎪ ⎨ (c0 K d + c1 K p ) c1 > c0 I (25) 2c0 K p > βest I > 0 ⎪ ⎩ 2(c1 K d − c0 I) > βest I > 0 where βest is taken to be a small positive constant. In the case of our study, some ROV parameters known for certain are the center of gravity position (xg , zg ), the force of gravity (W ) and mass (m). Considering this assumption, the known and unknown part for M and n is derived as follows:   m −mxg Z,M Mk = (26) −mxg m(x2g + zg2 )

152

N.-H. Tran et al.



 −W cφ cθ W zg sθ + W xg cφ cθ   −Zw˙ 0 = Zw˙ xg Iy − Mq˙

nZ,M = k

(27)

M Z,M uk

(28)



Bcφ cθ − Zw w = Zw wxg − Bzb sθ − Mq q − Bxb cφ cθ

nZ,M uk

 (29)

ˆ uk ab + n ˆ est form, we obtain the regressor ˆ uk term in the Φest Θ Rewriting M matrix:   cφ cθ 0 0 −abw 0 −w 0 Z,M (30) Φest = 0 −cφ cθ −sθ abw xg abq wxg −q and the uncertain parameters vector:  Θ Z,M est = B

Bzb

Zw˙

Iy − Mq˙

Zw

Mq



(31)

Simulation and Results

Table 1. Simulation parameters

Ts

10 ms

Kp,d

0.8

Kp,θ

4

Kd,d

0.8

Kd,θ

4

c0

ANFC

0.4 0.2 0 Nominal NNFC

0.2

c1

External force ANFC

6

0.3

B

464.8

450i (Γest1 = 1)

Bxb

185.28

180i (Γest2 = 1)

Bzb

−42.078 0i

(Γest3 = 1)

Zw˙

−107

0i

(Γest4 = 1)

Iy − Mq˙ 12.21

0i

(Γest5 = 1)

Zw

−59.044 0i

(Γest6 = 1)

Mq

−15

(Γest7 = 1)

i

0.6

d [m]

NNFC

0.8

Initial value.

0i

θ [deg]

4

Bxb

4

2

0 Nominal

External force

Fig. 4. ANFC and NNFC root mean square errors comparation

ROV Stabilization Using an Adaptive Nonlinear Feedback Controller

4.1

153

Simulation Scenarios

A ROV 3D model constructed by SOLIDWORKS® software as in Fig. 1a is used to determine parameters of the controlled system. The properties of a rigid body such as the position of center of gravity and buoyancy, mass, moment of inertia can be easily computed using Mass Properties tool in SOLIDWORKS® . Meanwhile, translational drag coefficients are computed by SOLIDWORKS® Flow Simulation. All remaining parameters such as added mass, rotational drag coefficients are referred from the RRC ROV II model as in [7]. The comparison between ANFC and NNFC performance will be is drawn using MATLAB® Simulink. The simulation is done with first 100 seconds as a nominal case. A presence of constant force and moment is started at the 100th second and allowed to continue till the end of the simulation. A depth desired trajectory in the nominal case is defined by pulse function. It makes system input (there vertical thrusters) sufficiently “rich” to ensure proper adaptation of ANFC. While sine wave with proper amplitude and frequency is chosen in the case of the presence of external force to prevent the thruster’s force outputs from exceeding their upper and lower bounds as shown in Fig. 6. It is worth to note that the desired trajectory can easily be extended to larger scale in case of more powerful thrusters. Proportional gains, derivative gains and sampling rate (Ts ) are shared among both controllers. The ROV parameters in the NNFC and the known part of ANFC are assigned their “true” values. Meanwhile, the uncertain parameters vector are initialized to zero except the first two elements. Adaptation gains is properly chosen in the simulation because of their influence on the performance of the system. A high or low adaptation gain may lead to badly damped behavior or unacceptable slow response, respectively. Table 1 provides some specific values of simulation parameters. 4.2

Comment on Results

In two cases, the qualitative and quantitative evaluation of two control methods is based on the system output response analysis and the root mean square errors, respectively. In the beginning of simulation, a depth response of ANFC as in Fig. 5b is slightly worse compared to NNFC due to uncertainty of the parameters initialization. But in the next two cycles, the adjustment mechanism of ANFC makes the depth response gradually better and the pitch response more oscillatory but faster. At 100th second, due to the presence of external force, there is a considerable amount of error in pitch and depth using NNFC. The pitch error using ANFC also increases up to 10◦ . Since then, the difference between the two controllers has become more evident. ANFC has shown its adaptability by making the pitch and depth error converge to zero after a period of time. While the errors of NNFC haven’t changed much. A comment is drawn from Fig. 7 that

154

N.-H. Tran et al. ANFC

4

NNFC

Desired

d [m]

d [m]

2 0 0

100

200

300

400

500

Desired

1 0

600

0

20

40

60

80

100

0

20

40

60

80

100

0

20

40

60

80

100

2

d˜ [m]

2

d˜ [m]

NNFC

-1

-2

0

0

-2

-2 0

100

200

300

400

500

600

5

θ [deg]

10

θ [deg]

ANFC

2

5 0

0

-5

-5 0

100

200

300

400

500

600

t [s]

t [s]

(a) Entire response

(b) A closer look at the response in nominal case

˜ and pitch (θ) using Fig. 5. The system output response of depth (d), depth error (d) NNFC and ANFC. The simulation consists of two cases: nominal case in the first 100 s and the presence of external force from 100th to 600th s. The simulation time is set long enough to ensure the pitch error of ANFC is nearly zero at the end of simulation. τm5 = τm6

40

[N]

0

0 -20 -40 0

100

200

300

τm7

40

400

500

ANFC

600

200

400

600

ΘZ,M est3

0.4 0.2 0 0

200

ΘZ,M est2

185 180 175 0

200

0 -500 -1000 400

600

400

600

400

600

400

600

ΘZ,M est4

0

200

NNFC

ΘZ,M est5

2 1 0

20

[N]

ΘZ,M est1

452 450 448

20

0

0

200

ΘZ,M est6

0 -0.05 -0.1 400

600

0

200

t [s]

-20

ΘZ,M est7

0.1 0.05 0

-40 0

100

200

300

400

500

600

t [s]

Fig. 6. Time history of vertical thrusters forces using NNFC and ANFC. Dashed lines represent the upper and lower bounds of thrusts. All horizontal thrusters forces are set to zeros in the simulation.

0

200

400

600

t [s]

Fig. 7. The uncertain parameters vector of ANFC.

although the convergence of ANFC depth and pitch error to zero is guaranteed, the parameter vector will be not necessarily convergent to their “true” values. It’s also shown as in Fig. 4 that the ANFC has smaller root mean square errors than the NNFC for both cases.

ROV Stabilization Using an Adaptive Nonlinear Feedback Controller

5

155

Conclusion

To deal with the challenges arising from the high nonlinearities of the ROV’s dynamics, the uncertain initialization and variations of its parameters, an adaptive nonlinear feedback controller (ANFC) has been proposed and implemented on the self-built ROV. Besides, a non-adaptive version of this controller called nominal nonlinear feedback controller (NNFC) has also given in this paper. The simulation results show that the ANFC has an ability to adjust online the parameters of the ROV during its operation. The clearly superiority of the ANFC over NNFC is performed especially in the case of the presence of unknown external force and moment. Acknowledgement. The study was supported by The Youth Incubator for Science and Technology Program, managed by Youth Development Science and Technology Center - Ho Chi Minh Communist Youth Union and Department of Science and Technology of Ho Chi Minh City, the contract number is “04/2019/Ð-KHCN-VU”.

References 1. Soylu, S., Proctor, A.A., Podhorodeski, R.P., Bradley, C., Buckham, B.J.: Precise trajectory control for an inspection class ROV. Ocean Eng. 111, 508–523 (2016) 2. Yan, J. Gao, J., Yang, X., Luo, X., Guan, X.: Position tracking control of remotely operated underwater vehicles with communication delay. IEEE Trans. Control Syst. Technol. (2019) 3. Campos, E., Monroy, J., Abundis, H., Chemori, A., Creuze, V., Torres, J.: A nonlinear controller based on saturation functions with variable parameters to stabilize an AUV. Int. J. Naval Archit. Ocean Eng. 11(1), 211–224 (2019) 4. Maalouf, D., Chemori, A., Creuze, V.: L1 adaptive depth and pitch control of an underwater vehicle with real-time experiments. Ocean Eng. 98, 66–77 (2015) 5. Fossen, T.I.: Marine control systems–guidance. Navigation, and control of ships, rigs and underwater vehicles. Marine Cybernetics, Trondheim, Norway, Org. Number NO 985 195 005 MVA, ISBN: 82 92356 00 2 (2002). www.marinecybernetics.com 6. BlueRobotics: T200 thruster (2019). https://bluerobotics.com/store/thrusters/ t100-t200-thrusters/t200-thruster/ 7. Chin, C., Lau, M., Low, E., Seet, G.: Software for modelling and simulation of a remotely-operated vehicle (ROV). Int. J. Simul. Modell. 5(3), 114–125 (2006)

On Robust Control of Permanent Magnet Synchronous Generators Using Robust Integral of Error Sign Tien Hoang Nguyen1 , Hong Quang Nguyen2(B) , Phuong Nam Dao1 , and Nga Thi-Thuy Vu1 1

Hanoi University of Science and Technology (HUST), Hanoi, Vietnam 2 Thainguyen University of Technology, Th´ ai Nguyˆen, Vietnam [email protected]

Abstract. This work proposes a Robust Integral of the Sign of the Error (RISE) based robust control approach for permanent magnet synchronous generators (PMSG). The cascade control structure was established in the PMSG model to be decoupled into two sub-systems. Controllers are designed base on the RISE method for both the inner and outer loop to estimate and cancel uncertainties and disturbances. The stability, tracking effectiveness of the closed system are verified via theoretical analysis. The numerical simulation is given to illustrate the results. Keywords: Permanent magnet synchronous generators (PMSG) · Robust Integral of the Sign of the Error (RISE) · Robust control · Wind turbine

1

Introduction

Wind turbines play an important role in sustainable development because they can convert the wind’s kinetic energy into electrical energy. Wind turbines provide a clean energy source with low cost and do not pollute the environment like fossil fuels based power plants. In the industry, variable-speed wind turbines have received more attention than fix-speed turbines. These turbines can change the rotor speed according to changes in the wind speed. In order to extract maximum energy from wind, the rotor speed needs to track the optimal rotor speed to obtain maximum power from wind [1,8,15]. About generators for wind turbines, permanent magnet synchronous generators (PMSG) are increasingly used widely in the wind power industry because they have many advantages over others. Some advantages of PMSG are multipole design, gearless construction, no excitation system, no brushes [12,13,17]. The Robust Integral of the Sign of the Error (RISE) feedback controller is a continuous control strategy to estimate and cancel uncertainties and disturbances. The RISE controller was firstly introduced in [18]. After that, some c The Editor(s) (if applicable) and The Author(s), under exclusive license  to Springer Nature Switzerland AG 2021 Y.-P. Huang et al. (Eds.): GTSD 2020, AISC 1284, pp. 156–166, 2021. https://doi.org/10.1007/978-3-030-62324-1_14

Robust Controller for PMSG

157

applications of the RISE controller for Euler-Lagrange systems are proposed in [4,14,16]. Combinations of neural networks and RISE in optimal control for a class of nonlinear Euler-Lagrange systems are proposed in [3,14]. There are also applications of RISE controller for regulating the rotor speed of wind turbines to track the optimal speed which is the output of the maximum power point tracking (MPPT) algorithm [10]. The RISE based controllers for the rotor speed control loop are proposed in [11]. The RISE controller, which includes an adaptive feedforward term to reduce the gain of the controller, is proposed in [6]. However, these works only study the control of the dynamic of the mechanical part (outer control loop) and do not consider the control of the electrical system (inner control loop). In fact, the electrical system of PMSG has model uncertainty, PWM offset, and external disturbances. Hence, we propose a RISE based robust nonlinear controller for the electrical system to estimate and cancel time-varying uncertainties and disturbances. The rest of this paper is organized as follows: PMSG model and MPPT algorithm are discussed in Sect. 2. The proposed controllers and stability proof are discussed in Sect. 3. Subsequently, the simulation results are described in Sect. 4. Finally, the conclusions and future studies are pointed out in Sect. 5.

2 2.1

Dynamic Model Aerodynamic Model

The aerodynamic model of the wind turbine is well studied in [2]. The aerodynamic power Pa extracts from the wind turbine is expressed 1 ρπR2 v 3 Cp (λ, β) (1) 2 where ρ is the air density, R is the rotor radius, v is the wind speed, Cp is the power coefficient, β is the blade pitch, and λ is the tip speed ratio that is defined as Rωr λ= v where ωr is the rotor speed. The power coefficient Cp reflects capability of the turbine about extracting energy from wind, which is a nonlinear function of the tip speed ratio and the blade pitch   −c5 c2 Cp (λ, β) = c1 − c3 β − c4 e λi + c6 λ λi with 0.035 1 1 − 3 = λi λ + 0.08β β +1 where c1 = 0.5176, c2 = 116, c3 = 0.4, c4 = 5, c5 = 21, c6 = 0.0068 [5]. The aerodynamic torque which is obtained from wind is expressed Pa =

Tm =

Pm 1 ρπR3 v 2 Cp (λ, β) = ωr 2λ

158

2.2

T. H. Nguyen et al.

Mechanic Model

The model of wind turbine is described as follows [2] Jt ω˙ r = Tm − Kt ωr − Tg + τω (t)

(2)

where Jt is the turbine total inertia, Kt is the turbine total external damping, Tg is the generator torque in the rotor side, which is Tg = np Te with Te is the generator electromagnetic torque. We add a time-varying unstructured input τ ω (t) to the wind turbine model to consider the effect of unknown dynamics and external disturbances. 2.3

PMSG Model

The stator voltage equations of PMSG in the synchronous reference frame are given by [17]  L did Rs + ωs Ldq iq + L1 ud + τd (t) dt = − Ld id  d diq Ld Rs 1 1 dt = − Lq iq − ωs Lq id + Lq ψp + Lq uq + τq (t) The generator electromagnetic torque equation is given by Te =

3 np [ψp iq + (Ld − Lq ) id iq ] 2

where id , iq , ud , and uq are the d-axis and q-axis stator currents and stator voltages respectively; Rs is resistance of the stator windings; Ld and Lq are the inductance of generator on the d − q axis; ωs is the electrical angular frequency which is related to the mechanical angular frequency of the generator as ωs = np ωm and np is the number of pole pairs; ψp is the permanent magnet flux. In fact, there are dynamics that are not modeled and external disturbances. Therefore, we add time-varying unstructured inputs τd (t) and τq (t) to the PMSG model. 2.4

MPPT Algorithm

In this paper, our objective is to control wind turbines in region 2, in which the main objective is to maximize energy capture from the wind [9]. In this region, the blade pitch angle is fixed to its optimal value (β = 0). From (1), at a particular wind speed, the output power of the turbine is maximum when the value of Cp is maximum. In order to maintain Cp be maximum when the wind speed changes continuously, we must control the rotor speed of the turbine to track the following optimal speed [9] ωropt =

λopt v R

where λopt is the optimal tip-speed ratio and it is a constant value for a given wind turbine.

Robust Controller for PMSG

3

159

Control Design

In this section, we design d-axis and q-axis current controllers and the speed controller base on the RISE method in the presence of time-varying uncertainties and disturbances. The cascade control structure has two control loop: the outer loop for the mechanical part and the inner loop for the electrical system. With the inner control loop, the d-axis current controller forces the d-axis current to be equal to zero, and the q-axis current controller regulates the q-axis current to correspond to the desired torque which is the output of the speed controller. With the outer control loop, the speed controller has to regulate the rotor speed to track the optimal speed to extract maximum energy from wind at different wind speeds. 3.1

RISE Based Current Controller

Consider the d-axis current equation Rs Lq did 1 = − id + ωs iq + ud + τd (t) dt Ld Ld Ld

(3)

Define the tracking error and the filter tracking error as ed = id ∗ − id rd = e˙ d + αd ed Because the reference of id is zero, thus rd becomes   Rs Lq 1 − α ed − ωs iq − ud − τd (t) rd = − Ld Ld Ld

(4)

(5)

Base on (5), we propose a RISE based d-axis current controller ud = −ωs Lq iq + μd

(6)

where μd is the RISE feedback control term, which is defined as [18] t μd (t) = (kd + 1) ed (t)−(kd + 1) ed (0)+

[(kd + 1) αd ed (σ) + βd sgn (ed (σ))]dσ 0

where kd and βd are positive constants, and sgn(.) denotes the standard signum function. Lemma 1 (Barbalat Lemma [7]). Suppose f (t) is a real function of real varit able t and lim f (t) = α where α < ∞. If f (t) is uniformly continuous, then t→∞ 0

lim f (t) = 0

t→∞

160

T. H. Nguyen et al.

Theorem 1. For the d-axis current model given as (3) with the control law (6), the state variables of closed system will be bounded and tracking errors ed (t) → 0, rd (t) → 0 as time converges to infinity. Provided that we select 0 < αd < (1 + Rs )/Ld and βd , kd according to the following condition   −¨ τd Ld , βd > |−τ˙d | + αd ρ2 (zd ) kd > 4γ

s where γ = min αd , 1+R Ld − αd Proof. The time derivation of (4) is expressed as   Rs 1 βd r˙d = − (kd + 1 + Rs − αd Ld ) rd + αd − αd 2 ed − sgn (ed ) − τ˙d (7) Ld Ld Ld Define the auxiliary function L(t) ∈ R, P (t) ∈ R as   βd L (t) = −rd τ˙d + sgn (ed ) Ld βd P (t) = |ed (0)| + ed (0) τ˙d (0) − Ld

(8)

t L (σ) dσ

(9)

0

According to [18], if the following condition is satisfied −¨ τd βd > |−τ˙d | + Ld αd then

t L (σ) dσ


ρ2 (zd ) 4γ

Then, we can concluded that ed , rd and P are bounded and   t ρ2 lim γ − zdT zd dσ  V (0) − V (∞) < ∞ t→∞ 4kd 0

From (4) and (7), e˙ d and r˙d are also bounded. Therefore, zd is uniformly continuous. By applying Lemma 1, we conclude that ed and rd converge to zero asymptotically. This completes the proof.   Next, we design the controller for the q-axis current. Consider the q-axis current equation   diq Ld Rs 1 1 = − iq − ωs id + ψp + uq + τq (t) (15) dt Lq Lq Lq Lq Define the tracking error and the filter tracking error eq = iq ∗ − iq , rq = e˙ q + αq eq

162

T. H. Nguyen et al.

We write explicitly Rs rq = i˙ ∗q + iq + ωs Lq



Ld 1 id + ψp Lq Lq

 − αq eq −

1 uq − τq (t) Lq

(16)

Base on (16), we propose a RISE based q-axis current controller uq = ωs (Ld id + ψp ) + μq

(17)

where μq is the RISE feedback control term t μq (t) = (kq + 1) eq (t) − (kq + 1) eq (0) +

[(kq + 1) αq eq (σ) + βq sgn (eq (σ))]dσ 0

Theorem 2. For the q-axis sub-system model given as (15) with the control law (17), the state variables of closed system will be bounded and tracking errors eq (t) → 0, rq (t) → 0 as time converges to infinity. Provided that we select 0 < αq < (1 + Rs )/Lq and βq , kq according to the following condition   −¨ τq Lq , βq > |−τ˙q | + αq  2 ¨i∗q + Rs ˙i∗q + ρ (zq ) zq  Lq kq > 2 4γzq 

s where γ = min αq , 1+R − α q Lq  

Proof. Similar to the proof of Theorem 1.

Remark 1. The output of current controllers (6), (17) is smooth because of integral operator, thus they do not excite high frequency dynamics or cause the chattering phenomenon. 3.2

RISE Based Speed Controller

To design the speed controller, we define the tracking error and filter tracking error as follows eω = ωr∗ − ωr , rω = e˙ ω + αω eω We use the RISE based speed controller which is proposed in [6]

where Yd = ωropt ω˙ ropt expressed as

Tg = Tm − Yd θˆ − μω

(18)

and μω is the RISE feedback control term, which is t

μω (t) = (kω + 1) eω (t) − (kω + 1) eω (0) +

[(kω + 1) αω eω (σ) + βω sgn (eω (σ))] dσ 0

Robust Controller for PMSG

163

The parameter estimate vector θˆ ∈ R2 is updated with following law t  t  T ˆ ˆ ˙ Y¨dT eω (σ) − αω Y˙ dT eω (σ) dσ θ (t) = θ (0) + Γ Yd eω (σ) − Γ 0

0

where Γ is a constant diagonal matrix. Theorem 3. For the wind turbine model described as (2) with the control law (18), the state variables of closed system will be bounded and tracking errors eω (t) → 0, rω (t) → 0 as time converges to infinity. Provided that we select αω > 12 and βω , kω according to the following condition −¨ τω , βω > |−τ˙ω | + αω kω >   where δ = min αω , αω − 12

βω2 4δ

Proof. See [6, Theorem 1]

4

Simulation

Wind speed (m/s)

In this section, we validate the proposed RISE based controllers through simulation implimented in Matlab/Simulink. The parameters of the wind turbine and PMSG are R = 39 mω, h = 1.205 kg.m−3 , Jt = 10000 kg.m2 , ψp = 136.25 Wb, np = 11, Ld = Lq = 5.5 mH, Rs = 50 μΩ, Kt = 1 Nm−1 .rad−1 .s−1 . Control gains of the speed controller are chosen according to Theorem 3 and we have kω = 4 × 107 , αω = 5, βω = 40; from Theorem 1, the d-axis current controller has kd = 130, αd = 1, βd = 10; from Theorem 2, the q-axis current controller has kq = 100, αq = 1, βq = 4. 12 10 8 6 4 0

20

40

60 Time(s)

Fig. 1. Wind speed

80

100

T. H. Nguyen et al.

×10-4

id*-i d (A)

5

PID RISE

0

-5 0

20

40

60

80

100

iq* - i q (A)

Time(s) 0.2

600 400

0

200

-0.2 50

PID RISE 60

70

80

90

100

0 0

20

40

60

80

100

Time(s)

0.5 0 PID RISE

-0.5

ωr

opt

- ω r (rad/s)

Fig. 2. Tracking error of d-axis and q-axis currents

-1 0

20

40

60

80

100

Time(s) 0.6 0.4

0.5

0.2

0.45

C p (PID)

Cp

164

50

0 0

20

C p (RISE) 60

70

80

40

90

60

100

80

100

Time(s) Fig. 3. Tracking error of the rotor speed and the power coefficient Cp

Robust Controller for PMSG

165

Unstructured  disturbance inputs are τω = 15 sin (ωt) , τd = 15 sin (ωt) , τq =  15 sin ωt + π2 . Figure 1 shows the wind speed profile to the turbine. Figure 2 shows tracking error between the d-axis, q-axis currents and their setpoints when we use proposed controllers and PID controllers. We see that the tracking capability of the RISE based current controllers are excellent, the tracking error is approximately equal to zero despite the presence of uncertainties and disturbances. Moreover, the RISE based controller do not cause overshoot, while the PID controller creates large overshoot. Figure 3 shows tracking error between the rotor speed and optimal rotor speed as well as the power coefficient. When we use proposed controllers, the power coefficient value is almost kept at its maximum value, which is better than PID controllers. From the simulation results, we see that proposed controllers have excellent performance in capturing energy from wind.

5

Conclusion

In this paper, the RISE based robust nonlinear controllers are designed for PMSG to estimate and cancel the effect of uncertainties and disturbances. The designed controllers make subsystems asymptotically stable and deal with uncertainties and disturbances in both the electrical model and mechanical model of the turbine. The theory analysis and simulation results illustrate the effectiveness of proposed controllers. Future studies will focus on designing RISE based controllers for the grid-side converter to accomplish a complete control design system of PMSG based wind turbines. Acknowledgements. This research was supported by Research Foundation funded by Thai Nguyen University of Technology, No. 666, 3/2 Street, Tich Luong Ward, Thai Nguyen City, Vietnam.

References 1. Aziz, D., Jamal, B., Othmane, Z., Khalid, M., Bossoufi, B.: Implementation and validation of backstepping control for PMSG wind turbine using dSPACE controller board. Energy Rep. 5, 807–821 (2019) 2. Boukhezzar, B., Siguerdidjane, H.: Nonlinear control of variable speed wind turbines without wind speed measurement. In: Proceedings of 44th IEEE Conference on Decision and Control European Control Conference CDC-ECC 2005, vol. 2005, no. 3, pp. 3456–3461 (2005). https://doi.org/10.1109/CDC.2005.1582697 3. Dupree, K., Patre, P.M., Wilcox, Z.D., Dixon, W.E.: Optimal control of uncertain nonlinear systems using RISE feedback. In: 2008 47th IEEE Conference on Decision and Control, pp. 2154–2159. IEEE (2008) 4. Hassan, G., Chemori, A., Chikh, L., Herv´e, P.E., El Rafei, M., Francis, C., Pierrot, F.: RISE feedback control of cable-driven parallel robots: design and real-time experiments. In: 1st Virtual IFAC World Congress (2020) 5. Heier, S.: Grid Integration of Wind Energy: Onshore and Offshore Conversion Systems. Wiley, Hoboken (2014)

166

T. H. Nguyen et al.

6. Jabbari Asl, H., Yoon, J.: Adaptive control of variable-speed wind turbines for power capture optimisation. Trans. Inst. Meas. Control 39(11), 1663–1672 (2017). https://doi.org/10.1177/0142331216645175 7. Khalil, H.K., Grizzle, J.W.: Nonlinear Systems, vol. 3. Prentice Hall, Upper Saddle River (2002) 8. Khan, M.W., Wang, J., Xiong, L., Ma, M.: Fractional order sliding mode control of PMSG-wind turbine exploiting clean energy resource. Int. J. Renew. Energy Dev. 8(1) (2019) 9. Kumar, D., Chatterjee, K.: A review of conventional and advanced MPPT algorithms for wind energy systems (2016). https://doi.org/10.1016/j.rser.2015.11.013 10. Kumari, B., Aggarwal, M.: A comprehensive review of traditional and smart MPPT techniques in PMSG based Wind energy conversion system. In: 2019 International Conference on Power Electronics, Control and Automation, pp. 1–6. IEEE (2019) 11. Li, P., Li, D., Wang, L., Cai, W., Song, Y.: Maximum power point tracking for wind power systems with an improved control and extremum seeking strategy. Int. Trans. Electr. Energy Syst. 24(5), 623–637 (2014) 12. Nasiri, M., Mobayen, S., Zhu, Q.M.: Super-twisting sliding mode control for gearless PMSG-based wind turbine. Complexity 2019 (2019). https://doi.org/10.1155/ 2019/6141607 13. Quang, N.P.: General overview of control problems in wind power plants. J. Comput. Sci. Cybern. 30(4), 313 (2014) 14. Shao, S., Zhang, K.: RISE-Adaptive Neural Control for Robotic Manipulators with Unknown Disturbances. IEEE Access (2020) 15. The Nguyen, H., Al-Sumaiti, A.S., Vu, V.P., Al-Durra, A., Do, T.D.: Optimal power tracking of PMSG based wind energy conversion systems by constrained direct control with fast convergence rates. Int. J. Electr. Power Energy Syst. 118, 105,807 (2020). https://doi.org/10.1016/j.ijepes.2019.105807. http://www. sciencedirect.com/science/article/pii/S0142061519322136 16. Van, C.N.: Designing the adaptive tracking controller for uncertain fully actuated dynamical systems with additive disturbances based on sliding mode. J. Control Sci. Eng. (2016). https://doi.org/10.1155/2016/9810251 17. Wang, C.N., Lin, W.C., Le, X.K.: Modelling of a PMSG wind turbine with autonomous control. Math. Probl. Eng. 2014 (2014). https://doi.org/10.1155/ 2014/856173 18. Xian, B., Dawson, D.M., de Queiroz, M.S., Chen, J.: A continuous asymptotic tracking control strategy for uncertain nonlinear systems. IEEE Trans. Autom. Control 49(7), 1206–1211 (2004)

A Flexible Sliding Mode Controller for Robot Manipulators Using a New Type of Neural-Network Predictor Dang Xuan Ba(&) Department of Automatic Control, HCMC University of Technology and Education (HCMUTE), Ho Chi Minh City, Viet Nam [email protected]

Abstract. This paper presents a free-model high-precision controller for robot manipulators using variation sliding mode scheme and a semi-positive neuralnetwork design. The total dynamics of the robot is first estimated by the neural network in which a new type of learning laws is proposed based on twisting excitation signals. A flexible sliding mode control interface is then developed to realize control objectives using the estimation result of the neural network. The control performance of the closed-loop system is verified by the intensively theoretical proof and simulation results. Keywords: Sliding mode control

 Neural-network  Robot manipulators

1 Introduction So far, robots have become an important part of production activities and social life. To meet the high quality of industrial products, robots must be equipped with precise controllers. However, uncertain nonlinearities and unpredictable working conditions are main obstacles in developing the outstanding controllers. To deal with the robotdynamics fluctuation on the control performance, a plenty of model-based controllers have been proposed based on typical analyses such as Newton-Euler or Lagrange methods, or decomposition principles [1, 2]. Such approaches could only be possible to apply for specific robots. To control general robots, obviously neural-network-based controllers are reasonable candidates [3, 4]. Direct and indirect learning methods are the leading solutions for building neural networks nowadays [5]. The networks are favorite employed to estimate the system dynamics and then fed the results to the controllers for eliminating unexpected effects. Based on this kind of the design, the many types of the networks could be exploited such as Radial-basis function networks [6–8] or Fuzzy-hybrid-networks [9, 10]. With this configuration, the control error is adopted as the main excitation signals of the learning process, which could be difficult to yield excellent transient control performance [11]. The key convergence condition of such networks is the richness of the excitation signals [5]. As a solution, the neural networks could be used in a parallel phase of the control process. The learning process could be combined with the control process for higher adaptation effect [12]. Here, the excitation source would be easily adjusted to speed up the learning convergence. As a © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 Y.-P. Huang et al. (Eds.): GTSD 2020, AISC 1284, pp. 167–177, 2021. https://doi.org/10.1007/978-3-030-62324-1_15

168

D. X. Ba

further improvement, a new design of the neural network is required to overcome this criterion more effectively. Adoption of the new network inside control schemes is another open problem [13, 14]. In this paper, a high-precision controller is designed for general robot manipulators. A novel neural network is designed to estimate the extensive system dynamics. Its learning mechanism is activated by nonlinear signals. To cope with integration of the estimation system and a control phase, a flexible sliding mode control framework is developed. Variation of the sliding surface and synthetization of the control signal are promoted by nonlinear selection. Effectiveness and feasibility of the controller are confirmed by comparative simulation results. Rest of the paper is organized as follows. Section 2 presents general dynamics of the robots and problems statements. The proposed controller and the new neural networks are designed in Sect. 3. Validation results of the whole control system are discussed in Sect. 4. Finally, the paper is concluded in the last section.

2 Problem Statement The dynamics of a serial n-joint robot is expressed in a general form as follows: MðqÞ€ q þ Cðq; q_ Þq_ þ gðqÞ þ f ðq_ Þ þ sd ¼ s

ð1Þ

_ q €; s 2 3 times Temporary.Stop.Number None 1 stops >=2 stops Weather Sunny Rainy Cool Bus.Stop.Present No Yes Don’t know Age 15–24 25–60 >60 Occupation Pupils/Students Officers/Workers Housewife/Unemployed Free labor Others Income 25 millions Number.of.Children None 1 child 2 children >=3 children Motor.Certificate No Yes Car.Certificate No Yes Motor.Owning No Yes Car.Owning No Yes

p.overall 0.002 0.014

> krj ¼ þ \0:2 þ  1 for : 2/Pn /b Mnx /b Mny /Pn ð3Þ

424

A. Chaiwongnoi et al.

where rj is the resulting member stress, aaj is the associated allowable stress, Pu is the required axial strength (compression and tension), Pn is the nominal axial strength (compression and tension), Mux is the required flexural strength over a major axis, Muy is the required flexural strength about a minor axis, Mnx is the nominal flexural strength about a major axis, Mny is the nominal flexural strength about a minor axis, / is the resistance factor shown as /c = 0.9 for compression, /t = 0.9 for tension and /b = 0.9 for flexure. The serviceability constraints kd representing inter-story drift of multi-story frame are stated in the following equation: kdr ¼

jd  j  1  0; jd  j\dru dru

ð4Þ

where d*= dr – dr–1 is the inter-storey displacement at rth-level, dru is the allowable inter-storey displacement (viz., hr/300 for lateral sway) and hr is the story height of the rth-floor). 2.2

Direct Analysis Method

The direct analysis method requires that the second-order analysis, either the rigorous second-order or amplified first-order analysis, be performed. The notional loads are used to represent the influences of initial geometric imperfections consisting of out-ofplumpness and out-of-straightness. This lateral load is assigned as an additional load applied laterally at each frame storey. The notional loads applied at rth-level Ni is given as the following formula [15]: Ni ¼ 0:002aYi

ð5Þ

where a = 1 for LRFD and 1.6 for ASD, and Yi is a gravity load applied at rth-level. The factor of 0.002 assumes an out-of-plumpness ratio 1/500. The method applies the member stiffness reductions of EI* = 0.8sbEI and EA* = 0.8EA to account for inelastic material properties developed during the large deformations. This applies to connections, panel zones, diaphragms, column bases and member shear stiffness. The factor of sb reads [15]: sb ¼ 1 sb ¼ 4ðaPu =Pn Þ½1  ðaPu =Pn Þ

ð6Þ

In lieu of applying sb < 1.0, it is permissible to apply additional notional loads of Ni = 0.001Yi at all levels. This considers the out-of-straightness P-D conditions. Moreover, the effective length coefficient of K = 1 is adopted to all columns since the SAP2000 analysis applications perform the second-order P-d member deformations for all axially loaded members.

An ESO Approach for Optimal Steel Structure Design Complying

425

3 ESO Method An ESO method consists of two main steps, namely one called a uniform scaling factor and the other element sensitivity number. In essence, sensitivity plays a vital role in structural optimization. The sensitivity numbers are formulated using the optimality criteria in the form of a Lagrangian function [9], considering at an elementwise ultimate strength and displacements of structures. Based on the element sensitivity number, material is shifted from the strongest to the weakest part of the structure. These two steps are repeated in cycles until the feasible design domain is achieved. The structural responses are modelled and analyzed within a standard “line” finite element framework. For a generic jth element, the element sensitivity considering simultaneously ultimate strength and serviceability constraints is defined by [11]: aijþ ¼ Ss;j

ajdþ ajsþ þ S d;j þ þ as;av ad;av

ð7Þ

a ij ¼ Ss;j

a a js jd þ S d;j  a a s;av d;av

ð8Þ

þ þ where aijþ and a ij are the two sensitivity numbers of the ith group, as;av and ad;av are the  average values of the two ajsþ and ajdþ , and vice versa for a s;av and ad;av . The two uniform scaling factors are defined for the ultimate strength Ss;j ¼ rj =raj and serviceability Sd;j ¼ jd  j=dru conditions, complying with AISC-LRFD 2010 specifications. The iterative optimization procedures are performed. At each iteration, the crosssectional area of the jth element with the highest aijþ value is increased, whilst the area with the lowest a ij value is decreased. The sensitivity parameters iteratively trace the weight reduction of the structure, but also indicate the most active constraints (i.e., one required to increase and the other to decrease the sizes) [11]. During the optimization, the scaling design is applied to the most active constraint, determined of the two Ss,j and Sd,j factors. The sensitivity numbers

from the maximum

and ajdþ ; a ajsþ ; a js jd

are calculated for a generic jth element by: rjþ ðA þ DAÞ ¼ rj lj lj  r ðA  DAÞ j a ¼ rj js ¼  lj lj ajsþ ¼

ð9Þ

426

A. Chaiwongnoi et al.

ajdþ

a jd

   T    !  dj  djk Kj ðA þ DAÞ  Kj ð AÞ dj ¼  dj lj !     dj  Dajdþ dj  Ddjþ  ¼  ¼  dj dj lj lj

   !    dj   djk T Kj ðA  DAÞ  Kj ð AÞ dj ¼  dj lj        dj  Dajd dj  Ddj  ¼  ¼  dj dj lj lj

ð10Þ

where rjþ is the maximum stress associated with an increase of DA and vice versa r j with a decrease of DA, lj is the length of the jth element, Dajdþ and Da are the changes jd of the elemental virtual energy corresponding to the variation of displacements Ddjþ and Ddj , respectively. The ESO process performs the enumerative searches to converge the optimal design solution of structures. The pseudo-code describing the ESO algorithm is summarized. Initialization • All design members are categorized into ng groups of similar cross-sections, where each group i 2 f1,. . .,ngg contains mk elements. • All groups are initially assigned to the maximum cross-sectional area. • Initialize a termination tolerance d and the uniform scaling factors (Ss,j, Sd,j) for all ng design groups. ESO Procedures   While 1  maxðSs;j Sd;j j8j 2 f1; . . .; mkgÞ [ d for all groups 8i 2 f1; . . .; ngg • Perform analysis and design of steel structures with assigned member sizes complying with AISC-LRFD 2010 specification. • Enumerative for each design group i • Calculate the uniform scaling factors (Ss,j, Sd,j) and sensitivity numbers (aijþ ; a ij ) in Eqs. (7) and (8). • Update the member size, namely for one with the highest aijþ increasing the area size Anext ¼ Apresent þ MRR:Apresent , and for the other with the lowest a i i i ij present present decreasing the size Anext ¼ A  MRR:A . i i i

An ESO Approach for Optimal Steel Structure Design Complying

427

• Explore enumerative searches of the feasible sections lying within an interval range of [Anext  DA, Anext þ DA] to ensure the optimal design solutions. i i • Determine the total weight W ðXÞ of the structure employing the computed steel sections. End It is worthwhile noting that a material removal ratio (MRR) varying within a range between 5–10% defines the percentage of removed material (area or volume) between two consecutive iterations [16]. The characteristic parameters underlying the ESO procedures are problem-dependent. More explicitly, the value of MRR explores the feasible search space, and reduces the design domain to locate the optimal steel sections. The optimal designed cross-sections are ensured at an elemental level by the local searches over the intervals DA around the identified section areas A. The large value of DA involves the more computing efforts to furnish the optimal solutions. The small value incurs less resources, but may not provide the optimal design sections. In this study, a number of practical design examples successfully proceeded define the values of MRR = 10% and DA = 1–5%Anext.

4 Illustrative Example The two-bay, three-story frame under the applied forces shown in Fig. 1 was considered. The problem is set as a benchmark in many structural optimization literatures [2– 6, 17–19]. All these references construct the optimal design process complying with an AISC-LRFD specification. More specifically, the material properties, namely elastic modulus of E = 29,000 ksi, yield stress of Fy = 36 ksi and material density of 2.84  10−4 kip.in−3, were employed throughout.

Fig. 1. Two-bay, three-story steel frame.

428

A. Chaiwongnoi et al. Table 1. Optimization designs computed for two-bay, three-story frame. Group no 1 (Beams) 2 (Inner Columns) 2 (Outer Columns) Number of analyses Total weight (lb)

SAP2000 W18X76 W10X60 W10X49 – 21,127

GA [2] W24X62 W10X68 W10X68 30 19,512

Heuristic algorithm [20] Present work W18X76 W21X68 W10X60 W10X60 W10X49 W10X49 53 73 21,127 19,473

All design members were divided into three groups. The imposed fabrication conditions assigned six beam sections with the size selected from the available 274 AISC W-sections. The column members were grouped into two groups, namely one located at inner and outer grids. The column groups were determined from 18 W10 standard steel sections. This leads to the total 88,776 design combinations. The ESO algorithm was implemented. In essence, it was encoded within a Microsoft VBA environment. The standard structural analysis and design complying with AISC-LRFD 2010 specifications were run by the commercial software, called SAP2000. A direct interface between VBA and SAP2000 was enabled through the use of an OAPI function. The optimal design solutions were successfully processed by the ESO within the aforementioned frameworks. The computed results were reported in Table 1, and compared with some available analysis literatures, see [2] and [20], as well as the automatic design feature embedded in SAP2000. The optimal total weight of 19,473 lb was computed at modest computing resources (i.e., 73 iterations). This presents some 8.5% less than that given in the two standard SAP2000 and heuristic algorithm [20] design, and some 0.2% less than the genetic algorithm (GA) [2]. It hence evidences the simplicity and robustness of the ESO algorithm proposed. The detailed algorithmic and design performances are depicted in Fig. 2. It highlights the fast solution convergence and the optimal usages of all designed members as the corresponding stress ratios developed are close to units.

An ESO Approach for Optimal Steel Structure Design Complying

429

Fig. 2. Design results for two-bay, three-story frame, (a) solution convergences, (b) stress ratio developed for all steel members, (c) combined-action stress plot.

5 Conclusions The ESO-based algorithm is presented for the optimal design of steel structures. The design criteria comply strictly with the AISC-LRFD 2010 specifications. The influences of large-deformations (or second-order approximations) are incorporated by a direct analysis method. The implementation is simple as it can be encoded by any computer modeling platform having an interface with widely-available standard structural analysis and design software. The one presented is an integrating framework between ESO codes within Microsoft VBA and SAP2000 environments. A number of practical steel structure design examples were successfully processed. One of which is given in this paper. These all evidence the effectiveness and robustness of the proposed ESO algorithm that can be familiarly adopted by practical engineers, and further highlighted the nontrivial extension of the method for the design of 3D steel structures.


References
1. Jenkins, W.: Towards structural optimization via the genetic algorithm. Comput. Struct. 40, 1321–1327 (1991)
2. Pezeshk, S., Camp, C., Chen, D.: Design of nonlinear framed structures using genetic optimization. J. Struct. Eng. 126, 382–388 (2000)
3. Camp, C.V., Bichon, B.J., Stovall, S.P.: Design of steel frames using ant colony optimization. J. Struct. Eng. 131, 369–379 (2005)
4. Degertekin, S.: Optimum design of steel frames using harmony search algorithm. Struct. Multidiscipl. Optim. 36, 393–401 (2008)
5. Murren, P., Khandelwal, K.: Design-driven harmony search (DDHS) in steel frame optimization. Eng. Struct. 59, 798–808 (2014)
6. Carraro, F., Lopez, R.H., Miguel, L.F.F.: Optimum design of planar steel frames using the Search Group Algorithm. J. Braz. Soc. Mech. Sci. Eng. 39(4), 1405–1418 (2016). https://doi.org/10.1007/s40430-016-0628-1
7. Tangaramvong, S., Tin-Loi, F.: Optimal performance-based rehabilitation of steel frames using braces. J. Struct. Eng. 141, 04015015 (2015)
8. Xie, Y.M., Steven, G.P.: A simple evolutionary procedure for structural optimization. Comput. Struct. 49, 885–896 (1993)
9. Nha, C.D., Xie, Y., Steven, G.: An evolutionary structural optimization method for sizing problems with discrete design variables. Comput. Struct. 68, 419–431 (1998)
10. Li, Q., Steven, G.P., Xie, Y.: Evolutionary structural optimization for stress minimization problems by discrete thickness design. Comput. Struct. 78, 769–780 (2000)
11. Manickarajah, D., Xie, Y., Steven, G.: Optimum design of frames with multiple constraints using an evolutionary method. Comput. Struct. 74, 731–741 (2000)
12. Steven, G., Querin, O., Xie, M.: Evolutionary structural optimisation (ESO) for combined topology and size optimisation of discrete structures. Comput. Methods Appl. Mech. Eng. 188, 743–754 (2000)
13. Tanskanen, P.: A multiobjective and fixed elements based modification of the evolutionary structural optimization method. Comput. Methods Appl. Mech. Eng. 196, 76–90 (2006)
14. The 14th Edition Steel Construction Manual. American Institute of Steel Construction (2010)
15. Griffis, L.G., White, D.W.: Stability Design of Steel Buildings. American Institute of Steel Construction (2013)
16. Chu, D.N., Xie, Y., Hira, A., Steven, G.: Evolutionary structural optimization for problems with stiffness constraints. Finite Elem. Anal. Des. 21, 239–251 (1996)
17. Farshchin, M., Maniat, M., Camp, C.V., Pezeshk, S.: School based optimization algorithm for design of steel frames. Eng. Struct. 171, 326–335 (2018)
18. Maheri, M.R., Shokrian, H., Narimani, M.: An enhanced honey bee mating optimization algorithm for design of side sway steel frames. Adv. Eng. Softw. 109, 62–72 (2017)
19. Talatahari, S., Gandomi, A.H., Yang, X.-S., Deb, S.: Optimum design of frame structures using the eagle strategy with differential evolution. Eng. Struct. 91, 16–25 (2015)
20. Ky, V.S., Lenwari, A., Thepchatri, T.: Optimum design of steel structures in accordance with AISC 2010 specification using heuristic algorithm. Eng. J. 19, 71–81 (2015)

A Gravity Balance Mechanism Using Compliant Mechanism

Ngoc Le Chau1, Hieu Giang Le1, and Thanh-Phong Dao2,3

1 Faculty of Mechanical Engineering, Ho Chi Minh City University of Technology and Education, Ho Chi Minh City, Vietnam
2 Division of Computational Mechatronics, Institute for Computational Science, Ton Duc Thang University, Ho Chi Minh City, Vietnam
3 Faculty of Electrical and Electronics Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam
[email protected]

Abstract. Gravity balance mechanisms (GBMs) are designed to remove the effects of gravity throughout a specific range of motion. Traditional GBMs are often realized by the use of spring mechanisms or counterweights. In a quasi-static system, GBMs can move without operating energy. GBMs have been applied in industry, robotics, rehabilitation, etc. However, existing GBMs still have some disadvantages, such as inability to adjust the load, large size, and complex structure; conversely, the devices that can adjust the load require a large external energy input. In recent years, scientists have focused on energy-free adjustment for GBMs. Although some good results have been achieved, a fully energy-free adjustment has not been realized yet. Thus, this paper presents a new GBM structure with the ability to achieve a completely energy-free adjustment. The proposed GBM is a combination of a compliant rotary joint and a planar spring, which has not been previously studied. First, the principle design of the proposed GBM is demonstrated. Then, the stiffnesses of both springs are determined. Finally, the principle of energy-free adjustment is proposed based on adjusting the stiffness of the planar spring.

Keywords: Gravity balance mechanism · Energy free adjustment · Planar spring · Compliant rotary joint

1 Introduction
Industrial equipment and machines need static and dynamic equilibrium. Balance helps devices and equipment move without additional energy [1]. There are many methods to achieve equilibrium for equipment and machinery, and they can be divided into two types: active and passive [2]. The active equilibrium method is implemented according to a feedback principle [3]: based on the feedback signal from the output, an actuator provides the input energy needed to ensure balance. This balancing method requires complex structures. The passive equilibrium method is divided into three categories. i) Static balance mechanisms using counterweights [4–6], which add a necessary mass at opposite positions to ensure equilibrium; this method has often been used effectively in cranes and robots. In general, it ensures good balance but only applies to equipment with a few degrees of freedom. In addition, the added mass makes the device bulky and heavy, unsuitable for applications requiring compactness, and generates extra inertial force during motion. ii) Static balance mechanisms using deformable elements such as springs [7–10]. This balancing method is used quite effectively in many devices; its outstanding advantage is that springs add little inertial force to the initial system. However, these designs are often complex and sometimes cannot be used where the device has multiple degrees of freedom. iii) Static equilibrium methods combining springs and counterweights [11–13].

In addition to the above classification, gravity balance mechanisms are also classified by payload, namely fixed payload and adjustable payload. Fixed-payload balancers have been presented in numerous studies [9, 14–16]; in these mechanisms, if the payload is changed, the balance condition is broken, so they are no longer valid for a robotic manipulator or a mobile arm support. Adjustable-payload balancers are mechanisms in which the balance condition is preserved when the payload is adjusted [17–19]: when the payload changes, an adjustment process is performed to maintain the balance. Adjustable-payload gravity balance mechanisms are divided into two types: energy-free adjustment and non-energy-free adjustment. With non-energy-free adjustment [20, 21], maintaining balance after a payload change requires a relatively large external force, which becomes problematic when an outside energy source is missing or insufficient. With energy-free adjustment [7, 18, 22], when the payload is adjusted, the stiffness of the spring, the position of the connection point of the spring, or the length of the arm is changed without using energy.

This paper presents a new gravity balance mechanism that has not been studied before. The proposed mechanism combines a planar spring and a compliant rotary joint, and allows adjusting the stiffness of the planar spring without using energy when the load is adjusted.

2 Design Principles
Two important components in the design of the gravity balance mechanism are the zero-free-length spring and the compliant rotary joint. In a zero-free-length spring, the force applied on the spring is proportional to the length of the spring, in lieu of the deformation of the spring. The zero-free-length spring is created using a planar spring combined with a cable-pulley mechanism as shown in Fig. 1. The planar spring is a type of compliant mechanism that can deform to store elastic potential energy like a traditional torsion spring; due to the special nature of compliant mechanisms, it is compact in size [23, 24]. Therefore, the planar spring is chosen for this gravity balance mechanism instead of a traditional torsion spring. The principle design of the gravity balance mechanism is shown in Fig. 2. The mechanism consists of a bar connected to a pedestal by a compliant rotary joint of stiffness k1. The bar has a mass of m1 and carries an object with a mass of m2. A planar spring with a stiffness of k2 is connected to the bar by a cable-pulley mechanism.

Fig. 1. The structure of zero-free-length springs.

Fig. 2. Principle design of the gravity balance mechanism.

This gravity balance mechanism has the following working principle. During operation, when the bar rotates through an angle, the object moves accordingly. The total mass of the bar and the object would generate an inertial force; however, the gravity balance mechanism is designed for low-speed applications, so the inertial force is ignored in this research. Each position of the bar and the object produces a gravity moment Tm, whose value is calculated by Eq. (1):

$T_m = \left(\tfrac{1}{2} m_1 + m_2\right) L g \sin\varphi$   (1)

where m1 and m2 are the mass of the bar and the mass of the object, respectively, L is the length of the bar and g is the gravitational acceleration. Besides, the compliant rotary joint is twisted through the angle φ and creates the torque Tr. At the same time, the cable is stretched, causing the planar spring to deform by an amount l. The planar spring then generates the elastic force Fp and the torque Tp. The values of Tr and Tp are calculated using Eqs. (2)–(4):

$T_r = k_1 \varphi$   (2)

$F_p = k_2\, a \sqrt{2\left(1 - \cos\varphi\right)} = 2 k_2 a \sin\tfrac{\varphi}{2}$   (3)

$T_p = 2 k_2 a \sin\tfrac{\varphi}{2}\; a \cos\tfrac{\varphi}{2} = 2 k_2 a^2 \sin\varphi$   (4)

where k1 and k2 are the stiffness of the compliant rotary joint and of the planar spring, respectively. To ensure that the device operates under equilibrium conditions, the total torque generated by the mass must be equal to the total torque generated by the zero-free-length spring and the compliant rotary joint, as in Eq. (5):

$\left(\tfrac{1}{2} m_1 + m_2\right) L g \sin\varphi = k_1 \varphi + 2 k_2 a^2 \sin\varphi$   (5)

When the angle φ is small, sin φ ≈ φ, and Eq. (5) is transformed into Eq. (6):

$\left(\tfrac{1}{2} m_1 + m_2\right) L g = k_1 + 2 k_2 a^2$   (6)

Based on Eq. (6), the gravity balance mechanism ensures equilibrium over a small angle range. The gravity force created by the mass is calculated using Eq. (7):

$F_m = \dfrac{k_1 + 2 k_2 a^2}{L}$   (7)

Equation (7) shows that, when the payload is adjusted, one of the four parameters k1, k2, a or L has to be adjusted to maintain the equilibrium. In this study, the adjustment solution focuses on adjusting the stiffness of the planar spring k2, with the other parameters pre-selected. The stiffness of the planar spring k2 is then calculated as follows:

$k_2 = \dfrac{(0.5\, m_1 + m_2) L g - k_1}{2 a^2}$   (8)


3 Determine the Stiffness of the Planar Spring and the Compliant Rotary Joint
In Fig. 2, the bar is made of profiled aluminum. The length of the bar is 400 mm and its mass is 0.5 kg. The distance from the center of rotation to the connecting position of the zero-free-length spring is selected as 65 mm. The range of rotation of the gravity balance mechanism is 30°, and the weight of the moved object can be adjusted in the range from 0.5 kg to 3 kg. The parameters of the proposed gravity balance mechanism and the adjustable mass are given in Table 1.

Table 1. Parameters of gravity balance mechanism.

Parameter | L (mm) | a (mm) | m1 (kg) | m2 (kg)  | φmax (degree)
Value     | 400    | 65     | 0.5     | 0.5 – 3  | 30

Based on Eq. (5), at φ = 30°, the values of k1 and k2 are determined for each m2; they are given in Table 2. Table 2 shows that when the stiffness of the compliant rotary joint changes, the stiffness of the zero-free-length spring varies little. Therefore, to increase the rigidity of the mechanism, the stiffness of the compliant rotary joint is selected as 200; the stiffness of the planar spring is then 0.325 N/mm corresponding to the smallest load (0.5 kg). A small numerical check of these values follows Table 2.

Table 2. Values of k1 and k2.

k1 (N/mm) | k2 (N/mm)
          | m2 = 0.5 kg | m2 = 1 kg | m2 = 1.5 kg | m2 = 2 kg | m2 = 2.5 kg | m2 = 3 kg
10        | 0.347       | 0.579     | 0.811       | 1.044     | 1.276       | 1.508
20        | 0.346       | 0.578     | 0.810       | 1.042     | 1.275       | 1.507
50        | 0.342       | 0.575     | 0.807       | 1.039     | 1.271       | 1.503
100       | 0.336       | 0.569     | 0.701       | 1.033     | 1.265       | 1.497
200       | 0.325       | 0.557     | 0.789       | 1.021     | 1.253       | 1.468
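The mapping from payload to planar-spring stiffness can be checked directly from Eq. (5). The sketch below is a minimal numerical check; it assumes (since the table header does not state it) that k1 is a rotary stiffness expressed in N·mm/rad, a reading that reproduces the tabulated k2 values.

```python
import math

# Minimal check of Eq. (5) at phi = 30 deg. Assumption: k1 is a rotary
# stiffness in N*mm/rad (not stated explicitly in Table 2); this reading
# reproduces the tabulated k2 values.
L, a, m1, g = 0.4, 0.065, 0.5, 9.81   # bar length (m), attachment (m), kg, m/s^2
phi = math.radians(30)

def planar_spring_stiffness(m2, k1_nmm):
    """Solve Eq. (5) for k2 [N/mm], given payload m2 [kg] and joint stiffness k1."""
    k1 = k1_nmm * 1e-3                                   # N*mm/rad -> N*m/rad
    gravity_torque = (0.5 * m1 + m2) * L * g * math.sin(phi)
    k2 = (gravity_torque - k1 * phi) / (2 * a**2 * math.sin(phi))   # N/m
    return k2 * 1e-3                                     # N/m -> N/mm

print(planar_spring_stiffness(0.5, 10))    # ~0.347  (Table 2: 0.347)
print(planar_spring_stiffness(0.5, 200))   # ~0.324  (Table 2: 0.325)
```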

4 Energy-Free Adjustment with Spring Stiffness

4.1 The Adjustment Principle
The zero-free-length spring is a combination of a planar spring and a cable-pulley system; the stiffness of the planar spring is the stiffness of the zero-free-length spring. The planar spring is created by arranging leaf springs in a zigzag shape as shown in Fig. 3. The stiffness of one leaf spring is calculated by Eq. (9):

$k_l = \dfrac{E w t^3}{6 L^3}$   (9)


where t is the thickness, w is the width and L is the length of the leaf spring, and E is Young's modulus of the material. The planar spring is designed so that it can be considered as a spring system assembled as shown in Fig. 4. Therefore, the stiffness of the planar spring is calculated as follows:

$\dfrac{1}{k_{nt}} = \dfrac{1}{k_1} + \dfrac{1}{k_2} + \ldots + \dfrac{1}{k_n} = \dfrac{n}{k_l}$   (10)

$k = k_{nt1} + k_{nt2} = 2 k_{nt} = \dfrac{2 k_l}{n}$   (11)

where n is the number of leaf springs working on one branch and knt is the stiffness of a series-connected spring branch. As shown in Eq. (11), the planar spring stiffness is inversely proportional to the number of active leaf springs. Therefore, the stiffness of the planar spring can be changed by changing the number of active leaf springs. The relationship between the stiffness of the planar spring and the number of active leaf springs is given in Fig. 5; a small sketch of this relation follows Fig. 5.

Fig. 3. Structure of planar spring

Fig. 4. Modeling planar spring structure

Fig. 5. Relationship between the stiffness of planar spring and the number of active leaf springs
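The inverse relation of Eq. (11) between stiffness and the number of active leaves is easy to tabulate. The sketch below does so for assumed leaf dimensions and material, since the paper does not list them here.

```python
# Sketch of Eqs. (9)-(11): stiffness of the zigzag planar spring versus the
# number of active leaf springs n (material and geometry values are assumed).
E = 70e3                      # Young's modulus of aluminum (N/mm^2), assumed
w, t, L = 10.0, 0.5, 30.0     # leaf width, thickness, length (mm), assumed

k_leaf = E * w * t**3 / (6 * L**3)    # Eq. (9): one leaf spring (N/mm)

for n in range(1, 11):
    k_series = k_leaf / n             # Eq. (10): n leaves in series on a branch
    k_planar = 2 * k_series           # Eq. (11): two parallel branches
    print(f"n = {n:2d}  ->  k2 = {k_planar:.4f} N/mm")
```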

4.2 Adjustment Method

The stiffness adjustment of the planar spring is performed only when the bar is in the vertical position shown in Fig. 6. At this position, the zero-free-length spring has a length of zero; that means the planar spring is not deformed and the torque produced by gravity is zero. To change the stiffness of the planar spring, the number of active leaf springs must be changed, which requires adjusting the position of the shims as shown in Fig. 7. The stiffness is adjusted in the following way: a rack is machined with grooves whose positions fit the gaps of the planar spring (Fig. 7), and this rack is assembled in parallel with the planar spring (Fig. 6). After the shim position changes and the bar moves clockwise, the planar spring is compressed and the leaf springs underneath the shims are pressed against them; the number of active leaf springs thus changes and the stiffness of the planar spring changes accordingly.

Fig. 6. a) Stiffness k2 when there are n1 active leaf springs; b) stiffness k2 when there are n2 active leaf springs.


Fig. 7. Rack (unit: mm)

5 Conclusion
This paper presents a new structure for a gravity balance mechanism that can adjust the payload without energy. The mechanism is a combination of a compliant rotary joint and a planar spring, and it allows adjustment of the payload by changing the stiffness of the planar spring. The gravity balance mechanism achieves balance within the range 0°–30° and allows adjusting the payload within the range of 0.5 kg to 3 kg. The adjustment process does not need to use energy. In the future, the gravity balance mechanism will be fabricated to test the ability of the designed structure to achieve balance.

Acknowledgements. This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant No. 107.01-2019.14.

References
1. Agrawal, S.K., Fattah, A.: Design of an orthotic device for full or partial gravity-balancing of a human upper arm during motion. In: Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), pp. 2841–2846 (2003)
2. Rosyid, A., El-Khasawneh, B., Alazzam, A.: Gravity compensation of parallel kinematics mechanism with revolute joints using torsional springs. Mech. Based Des. Struct. Mach. 48, 27–47 (2020)
3. Agrawal, S.K., Gardner, G., Pledgie, S.: Design and fabrication of an active gravity balanced planar mechanism using auxiliary parallelograms. J. Mech. Des. 123, 525–528 (2001)
4. Lacasse, M.-A., Lachance, G., Boisclair, J., Ouellet, J., Gosselin, C.: On the design of a statically balanced serial robot using remote counterweights. In: 2013 IEEE International Conference on Robotics and Automation, pp. 4189–4194 (2013)
5. Kolarski, M., Vukobratović, M., Borovac, B.: Dynamic analysis of balanced robot mechanisms. Mech. Mach. Theor. 29, 427–454 (1994)
6. Walker, M., Oldham, K.: A general theory of force balancing using counterweights. Mech. Mach. Theor. 13, 175–185 (1978)
7. Barents, R., Schenk, M., van Dorsser, W.D., Wisse, B.M., Herder, J.L.: Spring-to-spring balancing as energy-free adjustment method in gravity equilibrators. J. Mech. Des. 133, 061010 (2011)
8. Wu, Q., Wang, X., Du, F.: Development and analysis of a gravity-balanced exoskeleton for active rehabilitation training of upper limb. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 230, 3777–3790 (2016)
9. Agrawal, S.K., Fattah, A.: Gravity-balancing of spatial robotic manipulators. Mech. Mach. Theor. 39, 1331–1344 (2004)
10. Lin, P.-Y., Shieh, W.-B., Chen, D.-Z.: A theoretical study of weight-balanced mechanisms for design of spring assistive mobile arm support (MAS). Mech. Mach. Theor. 61, 156–167 (2013)
11. Martini, A., Troncossi, M., Rivola, A.: Algorithm for the static balancing of serial and parallel mechanisms combining counterweights and springs: generation, assessment and ranking of effective design variants. Mech. Mach. Theor. 137, 336–354 (2019)
12. Woo, J., Seo, J.-T., Yi, B.-J.: A static balancing method for variable payloads by combination of a counterweight and spring and its application as a surgical platform. Appl. Sci. 9, 3955 (2019)
13. Laliberte, T., Gosselin, C., Gao, D., Menassa, R.J.: Gravity powered balancing system. Google Patents (2013)
14. Rahman, T., Ramanathan, R., Seliktar, R., Harwin, W.: A Simple Technique to Passively Gravity-Balance Articulated Mechanisms (1995)
15. Herder, J.L.: Design of spring force compensation systems. Mech. Mach. Theor. 33, 151–161 (1998)
16. Lin, P.-Y., Shieh, W.-B., Chen, D.-Z.: Design of a gravity-balanced general spatial serial-type manipulator. J. Mech. Robot. 2, 031003-1–031003-7 (2010)
17. Van Dorsser, W.D., Barents, R., Wisse, B.M., Herder, J.L.: Gravity-Balanced Arm Support with Energy-Free Adjustment (2007)
18. Van Dorsser, W., Barents, R., Wisse, B., Schenk, M., Herder, J.: Energy-free adjustment of gravity equilibrators by adjusting the spring stiffness. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 222, 1839–1846 (2008)
19. Chu, Y.-L., Kuo, C.-H.: A single-degree-of-freedom self-regulated gravity balancer for adjustable payload. J. Mech. Robot. 9, 021006 (2017)
20. Nathan, R.: A Constant Force Generation Mechanism (1985)
21. Takesue, N., Ikematsu, T., Murayama, H., Fujimoto, H.: Design and prototype of variable gravity compensation mechanism (VGCM). J. Robot. Mechatr. 23(2), 249–257 (2011)
22. Wisse, B.M., Van Dorsser, W.D., Barents, R., Herder, J.L.: Energy-free adjustment of gravity equilibrators using the virtual spring concept. In: 2007 IEEE 10th International Conference on Rehabilitation Robotics, pp. 742–750 (2007)
23. Howell, L.L., Midha, A.: Development of Force-Deflection Relationships for Compliant Mechanisms (1994)
24. Howell, L.L.: Compliant Mechanisms. Wiley, New York (2001)

Numerical Simulation of Flow around a Circular Cylinder with Sub-systems

Phan Duc Huynh1 and Nguyen Tran Ba Dinh2

1 Faculty of Civil Engineering, HCMC University of Technology and Education, Ho Chi Minh City, Vietnam
[email protected]
2 Faculty of Manufacturing Engineering, HCMC University of Technology and Education, Ho Chi Minh City, Vietnam

Abstract. When the Reynolds number Re ≥ 47, the flow of fluid past a structure gradually becomes unstable. Periodic vortices form behind the structure and interact with each other, causing the flow to oscillate periodically. This can cause intense structural vibrations and even structural damage. One technique to reduce structural vibration is to use sub-systems to change the flow structure. In this research, the immersed boundary method is applied to investigate the control of the flow past a circular cylinder by various sub-systems. Two models are surveyed: attaching 2 splitter plates behind the circular cylinder, and placing 2 rotating cross-shaped controllers behind the circular cylinder. The results show that the lift and drag forces acting on the circular cylinder decrease significantly for both models. In addition, the flow-control effects of the two models are compared to find the most effective model.

Keywords: Immersed boundary method · Splitter plate · Rotating cross

1 Introduction
Investigating the flow of fluid past an object is one of the fundamental problems of fluid mechanics. As is known, when the Reynolds number Re ≥ 47, periodic vortices gradually appear behind the structure, causing vibration that can lead to structural damage. This phenomenon is very common in practice. Therefore, research on techniques to reduce vibration and eliminate the vortices formed behind an object is very important for practical applications. Using a splitter plate to control the flow is considered a passive control method. Because this method is effective and easy to implement due to its simple geometry, it has attracted much attention. The approach was originally developed by Roshko [1, 2], who showed experimentally that attaching a splitter plate behind a circular cylinder could completely eliminate vortex shedding. From these early studies, many researchers have focused on the effect of control plates on the aerodynamic features, the vortex structure and the flow mechanisms necessary to control them. Apelt and West [3] examined the effect of a splitter plate attached behind a circular cylinder. They reported that, in the near wake, the effect of a rigid splitter plate on the fluid domain can be changed by varying the length of the plate. Wu and Shu [4] developed a numerical model to investigate the effect of a forced flapping splitter plate on the flow characteristics; their results showed that the amplitude and frequency of the plate flapping strongly affect the wake patterns, the drag force, the lift force and the Strouhal number.

Another research direction developed in recent years is the use of separate sub-systems placed behind the object. Mittal and Raghuvanshi [5] numerically studied the effect of placing a small fixed cylinder behind the main circular cylinder for Re from 60 to 100; they showed that this can reduce the drag coefficient, the lift coefficient and even the Strouhal number. Mittal and Kumar [6] numerically investigated the flow past a rotating cylinder with a rotation speed α varying from 0 to 5. Their studies showed that rotating the circular cylinder at a constant speed can completely eliminate the vortices, but the effect depends greatly on the rotation speed. In recent years, many studies have focused on changing the flow structure by using multiple rotating cylinders with constant rotation speeds, usually arranged symmetrically with opposite rotation directions (Schulmeister et al. [7]; Dehkordi et al. [8]). The results showed that placing two small counter-rotating cylinders symmetrically can completely eliminate the vortices, but the suppression is greatly influenced by the placement as well as the rotation speed.

In the present study, the immersed boundary method is applied to simulate the flow in two models: a circular cylinder with 2 splitter plates arranged symmetrically in a parallel configuration, and a circular cylinder with 2 rotating cross-shaped controllers placed behind it. The effects of the splitter plates and the crosses on the flow are investigated. In addition, the flow-control performance of the two models is compared and the most effective model identified.

2 Problem Definition

2.1 Model of Flow Through a Circular Cylinder Attached with 2 Splitter Plates

The model consists of a circular cylinder with 2 thin splitter plates placed parallel and symmetric about the horizontal centerline of the cylinder. The position of the splitter plates is determined by the angle θf (see Fig. 1a), which is varied within the range 10° ≤ θf ≤ 80°. The two splitter plates have length L = 0.3D (D is the diameter of the circular cylinder) and thickness h = 0.01D. The simulation is performed in the laminar flow regime at Re = 100 using Matlab software.


Fig. 1. a) Model of a circular cylinder attached with two control plates; b) Model of a circular cylinder with 2 rotating crosses placed behind.

2.2 Model of Flow Through a Circular Cylinder with 2 Rotating Crosses Placed Behind

The model consists of a circular cylinder and 2 crosses rotating in opposite directions. The two crosses have angular speed ω and are arranged symmetrically about the horizontal centerline of the cylinder. The position of the 2 crosses is determined by the angle θ and the radius r (see Fig. 1b). The angle θ is varied in the range from 10° to 80° and the dimensionless radius is rD = r/D = 1 (D is the diameter of the circular cylinder). The dimensions of the crosses are determined by d = 0.2D, as can be observed in Fig. 1b. The dimensionless rotation speed of the 2 crosses is α = ωD/(2U∞) = 6. The simulation is performed in the laminar flow regime at Re = 100 using Matlab software.

3 Numerical Method

3.1 Spatial and Temporal Discretization

The immersed boundary method is an Eulerian-Lagrangian finite difference method for computing the interaction between a fluid flow and a structure. We consider the model problem of a viscous incompressible fluid in a two-dimensional domain Ωf = [0, lx] × [0, ly] containing an immersed boundary in the form of a simple closed curve Γb (see Fig. 2a). We build a 2D model with an immersed boundary curve as shown in Fig. 2b, in which the Lagrangian grid represents the structure and the Eulerian grid represents the fluid domain. In this paper, an explicit time-stepping method is used to solve the governing equations of the fluid step by step in time.


Fig. 2. a) Model of fluid-immersed boundary system; b) Model of meshing method (Eulerian mesh (•) and Lagrangian mesh (∎)) and solution method.

3.2 Structure Solver

Basically, in both investigated models, the solver for the main circular cylinder is similar; the sub-systems of the two models, however, differ slightly. For the model of a circular cylinder with 2 splitter plates, the position of the splitter plates is fixed and does not change over time. In the model of a circular cylinder with 2 rotating crosses placed behind, the positions of the 2 crosses over time are determined by the equations:

$x_n^u = x_c^u + r^u \cos\left(\varphi_{n-1}^u - \omega\, dt\right)$   (1)

$y_n^u = y_c^u + r^u \sin\left(\varphi_{n-1}^u - \omega\, dt\right)$   (2)

$x_n^d = x_c^d + r^d \cos\left(\varphi_{n-1}^d + \omega\, dt\right)$   (3)

$y_n^d = y_c^d + r^d \sin\left(\varphi_{n-1}^d + \omega\, dt\right)$   (4)

where $(x_n^u, y_n^u)$ and $(x_n^d, y_n^d)$ are the coordinates determining the positions of the upper and lower crosses in the x and y directions in the Eulerian coordinate system; $(x_c^u, y_c^u)$ and $(x_c^d, y_c^d)$ are the rotation-center coordinates of the 2 crosses in Eulerian coordinates; $r^u$ and $r^d$ are the distances from the boundary points of the 2 crosses to their own rotation centers; and $\varphi_{n-1}^u$, $\varphi_{n-1}^d$ are the angles between the x-axis and the lines connecting the boundary points of the upper and lower crosses to their rotation centers at time step n − 1 (a short sketch of this position update follows Eq. (5)). Next, we determine the force acting on the entire fluid domain by the following equation:

$f_{i,j}^n = \sum_{k=1}^{N_b} F_k^n(t)\, \delta_h\!\left(x_{i,j}^n - X_k^n\right) \Delta s_k$   (5)
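The position update of Eqs. (1)–(4) amounts to rotating each cross's Lagrangian points by ±ω dt per step. The sketch below illustrates it for one cross; the point layout, angular speed and time step are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Sketch of Eqs. (1)-(4): advancing the Lagrangian points of one of the two
# counter-rotating crosses by one time step (all numerical values assumed).
omega, dt = 6.0, 1e-3            # angular speed and time step (illustrative)

def rotate_cross(phi_prev, center, radii, direction):
    """Return the new point coordinates and angles for one cross.
    direction = -1 for the upper cross, +1 for the lower one (Eqs. 1-4)."""
    phi_new = phi_prev + direction * omega * dt
    x = center[0] + radii * np.cos(phi_new)
    y = center[1] + radii * np.sin(phi_new)
    return x, y, phi_new

# Example: 40 boundary points along the 4 blades of a cross (10 per blade).
phi0 = np.tile(np.array([0.0, 0.5, 1.0, 1.5]) * np.pi, 10)
r = np.repeat(np.linspace(0.01, 0.1, 10), 4)
x_u, y_u, phi_u = rotate_cross(phi0, center=(1.0, 0.5), radii=r, direction=-1)
```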

where $f_{i,j}^n$ is the force density acting on the entire fluid domain; $F_k^n$ is the force density at a boundary point; $x_{i,j}^n$ are the coordinates of the Cartesian grid; $X_k^n$ are the coordinates of the boundary points in the Lagrangian coordinate system; $\delta_h(\mathbf{x}) = (1/h^2)\,\varphi(x/h)\,\varphi(y/h)$ is the discrete Dirac delta function; and $\varphi$ is the continuous function:

$\varphi(r) = \begin{cases} \left(3 - 2|r| + \sqrt{1 + 4|r| - 4r^2}\right)/8, & 0 \le |r| \le 1 \\ \left(5 - 2|r| - \sqrt{-7 + 12|r| - 4r^2}\right)/8, & 1 \le |r| \le 2 \\ 0, & 2 \le |r| \end{cases}$   (6)

Because the immersed boundary is an elastic boundary, according to Hooke's law we have:

$F(s, t) = k\left(X(s, t) - X^e(s, t)\right)$   (7)
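The kernel of Eq. (6) can be transcribed directly; the snippet below also checks the discrete unit-sum property that such regularized delta functions satisfy. It is a sketch only, with an arbitrary grid spacing and shift chosen for the check.

```python
import numpy as np

# Direct transcription of the smoothed kernel phi(r) in Eq. (6) and of the
# two-dimensional discrete delta delta_h(x) built from it.
def phi(r):
    r = np.abs(r)
    out = np.zeros_like(r, dtype=float)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r <= 2.0)
    out[m1] = (3.0 - 2.0 * r[m1] + np.sqrt(1.0 + 4.0 * r[m1] - 4.0 * r[m1]**2)) / 8.0
    out[m2] = (5.0 - 2.0 * r[m2] - np.sqrt(-7.0 + 12.0 * r[m2] - 4.0 * r[m2]**2)) / 8.0
    return out

def delta_h(dx, dy, h):
    """delta_h(x) = (1/h^2) * phi(dx/h) * phi(dy/h)."""
    return phi(dx / h) * phi(dy / h) / h**2

# The kernel sums to one over any grid of spacing h (arbitrary shift 0.013):
h = 0.05
offsets = np.arange(-2, 3) * h + 0.013
print(np.sum(phi(offsets / h)))            # ~1.0
```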

where k is the stiffness of the virtual spring, $X^e(s, t)$ is the equilibrium location, and $X(s, t)$ is the location of the immersed boundary points when interacting with the fluid. We then solve the Navier-Stokes equations with the force density term to find the pressure field $p_{i,j}^{n+1}$ and the velocity field $u_{i,j}^{n+1}$ using the finite difference method. This velocity field is interpolated to find the velocity at the boundary points according to the equation:

$\dfrac{dX_k^{n+1}}{dt} = \sum_{i,j} u_{i,j}^{n+1}\, \delta_h\!\left(x_{i,j} - X_k^{n+1}\right) h^2$   (8)

3.3 Navier-Stokes Solver

In two space dimensions, the Navier-Stokes equations for the flow of a viscous, incompressible fluid, including the external force term, read:

$\rho\left[\dfrac{u_{i,j}^{n+1} - u_{i,j}^n}{\Delta t} + \left[(u \cdot \nabla)u\right]_{i,j}^n\right] = -\nabla p_{i,j}^{n+1} + \mu\, \Delta u_{i,j}^{n+1} + f_{i,j}^n$   (9)

$\nabla \cdot u_{i,j}^{n+1} = 0$   (10)

Equations (9) and (10) are solved at time step (n + 1) in the following three main steps.

Treat Nonlinear, Viscosity and Force Density Terms. Equation (9) is first solved for an intermediate velocity field, determined by Eq. (11) once the nonlinear, viscosity and force density terms have been evaluated:

$\dfrac{u^* - u^n}{\Delta t} = -(u^n \cdot \nabla)u^n + \dfrac{\mu}{\rho}\, \Delta u^n + \dfrac{1}{\rho} f^n$   (11)

$\dfrac{u^{n+1} - u^*}{\Delta t} = -\dfrac{\nabla p^{n+1}}{\rho}$   (12)

Pressure Correction. We correct the intermediate velocity field u* by the gradient of the pressure $p^{n+1}$ (Eq. (12)); applying the divergence operator to both sides of Eq. (12) gives a linear system of equations:

$\nabla \cdot u^* = \dfrac{\Delta t}{\rho}\, \Delta p^{n+1}$   (13)

Equation (13) is the Poisson equation for the pressure field $p^{n+1}$ at time n + 1.

Update the Velocity Field. The new velocity field $u^{n+1}$ is updated according to $u^{n+1} = u^* - \Delta t\, (\nabla p^{n+1})/\rho$, with the pressure field $p^{n+1}$ calculated in the step above.

Algorithm of the Immersed Boundary Method. In this paper, we use an explicit algorithm, meaning that the force density at the Lagrangian points is calculated first. The algorithm proceeds as follows (a condensed sketch of this loop is given after Eq. (15)):
1. Update the new location of the structure domain.
2. Determine the force $F^n(s, t)$ from the boundary of the structure $X^n(s, t)$ according to Eq. (7).
3. Spread the force of the boundary points onto the entire fluid domain according to Eq. (5).
4. Solve the Navier-Stokes equations with the external force term.
5. Interpolate the new velocity of the boundary points according to Eq. (8). Return to step (1).

In addition, for the solution to converge, the value of Δt must be small enough at each step. To ensure this, we take the minimum of Δt between the diffusive stability condition

$\Delta t \le 0.25\, Re\, \dfrac{\min\left(h_x^2, h_y^2\right)}{2}$   (14)

and the stability condition of the CFL (Courant-Friedrichs-Lewy) type

$\Delta t \le \min\left(\dfrac{h_x}{u_{max}}, \dfrac{h_y}{v_{max}}\right)$   (15)

where u, v are the velocity components of the fluid domain in the x and y directions, respectively.
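Putting steps 1–5 together, one explicit time step of the method can be summarized as below. This is a structural sketch only: the three helpers are placeholders for the operations of Eqs. (5), (8) and (9)–(13), and only the time-step bound of Eqs. (14)–(15) is evaluated in full.

```python
# Condensed sketch of the explicit immersed-boundary loop of Sects. 3.2-3.3.
# The three helpers are placeholders, not implementations.

def spread_to_grid(F, X, grid):             # Eq. (5): Lagrangian -> Eulerian
    raise NotImplementedError

def ns_solve(u, f, dt):                     # Eqs. (9)-(13): projection steps
    raise NotImplementedError

def interpolate_to_boundary(u, X, grid):    # Eq. (8): Eulerian -> Lagrangian
    raise NotImplementedError

def ib_step(X, X_eq, u, grid, k, dt):
    F = k * (X_eq - X)           # restoring form of Eq. (7)'s virtual spring
    f = spread_to_grid(F, X, grid)
    u_new = ns_solve(u, f, dt)
    U_b = interpolate_to_boundary(u_new, X, grid)
    return X + dt * U_b, u_new   # step 5: advect boundary points, then repeat

def stable_dt(Re, hx, hy, umax, vmax):
    """Largest time step satisfying both Eq. (14) and Eq. (15)."""
    return min(0.25 * Re * min(hx**2, hy**2) / 2.0,
               min(hx / umax, hy / vmax))
```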


4 Computational Domain and Boundary Conditions

4.1 Model of Flow Through a Circular Cylinder Attached with 2 Splitter Plates

The computational domain and coordinate system, together with the boundary conditions, are illustrated in Fig. 3. In particular, a Dirichlet boundary condition with U∞ = 1 and V = 0 is set at the inlet boundary; a Neumann boundary condition with ∂U/∂x = 0 and ∂V/∂x = 0 is set at the outlet; the two side boundaries are free-slip conditions with ∂U/∂y = 0 and V = 0; and the boundary of the circular cylinder with the 2 attached splitter plates is a no-slip condition (U = V = 0). The model is simulated at Re = 100 (a small sketch of these outer boundary conditions follows Fig. 3).

Fig. 3. Boundary conditions for flow through a circular cylinder with control plates (refer to [9]).
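A minimal sketch of the outer boundary conditions is given below. The collocated grid arrangement is an assumption made for illustration, since the paper's Matlab implementation is not reproduced here.

```python
import numpy as np

# Sketch of the outer boundary conditions of Sect. 4.1 on a simple collocated
# grid (illustrative assumption; inlet at the first column, outlet at the last).
def apply_outer_bc(U, V, U_inf=1.0):
    U[:, 0], V[:, 0] = U_inf, 0.0            # inlet (Dirichlet): U = U_inf, V = 0
    U[:, -1], V[:, -1] = U[:, -2], V[:, -2]  # outlet (Neumann): d/dx = 0
    U[0, :], U[-1, :] = U[1, :], U[-2, :]    # side walls (free slip): dU/dy = 0
    V[0, :], V[-1, :] = 0.0, 0.0             # side walls (free slip): V = 0
    return U, V

U = np.zeros((64, 128)); V = np.zeros((64, 128))
U, V = apply_outer_bc(U, V)
```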

4.2 Model of Flow Through a Circular Cylinder with 2 Rotating Crosses Placed Behind

The computational domain and coordinate system, together with the boundary conditions, are illustrated in Fig. 4. Specifically, a Dirichlet boundary condition with U∞ = 1 and V = 0 is set at the inlet boundary; a Neumann boundary condition with ∂U/∂x = 0 and ∂V/∂x = 0 is set at the outlet; the two side boundaries are free-slip conditions with ∂U/∂y = 0 and V = 0; the boundary of the main circular cylinder is fixed as a no-slip boundary (U = V = 0); and the boundaries of the 2 crosses are no-slip conditions rotating with angular speed ω. The model is simulated at Re = 100.


Fig. 4. Boundary conditions for flow through a circular cylinder with 2 crosses placed behind (refer to [8]).

5 Results and Discussions

5.1 Test Method: Flow Through a Circular Cylinder

When the fluid flows past the circular cylinder, the shear layers separate from both the upper and lower sides of the cylinder and roll up behind it to form vortices. These vortices cause vibration and can lead to structural damage. This phenomenon is known as the von Karman vortex street. The simulation results can be observed in Fig. 5.

Fig. 5. Simulated interaction between the circular cylinder and the fluid at Re = 100.

On the other hand, to evaluate the action of the fluid flow on the circular cylinder, we consider the magnitude of two parameters, the drag coefficient CD and the lift coefficient CL:

$C_D = \dfrac{F_D}{0.5\, \rho\, U_\infty^2\, D}, \qquad C_L = \dfrac{F_L}{0.5\, \rho\, U_\infty^2\, D}$   (16)

where FD and FL are the drag and lift forces of the flow acting on the circular cylinder. In addition, to evaluate the fluctuation of the flow, we consider the Strouhal number St, a dimensionless number describing oscillating flow mechanisms:

$S_t = \dfrac{f D}{U_\infty}$   (17)

where f is the frequency of vortex shedding, D is the diameter of the circular cylinder, and U∞ is the inlet velocity. The results obtained are compared with previously published results in Table 1. The deviations of the present simulation results from other studies are relatively small, ranging from 3% to 7%, so the accuracy of the simulation is considered relatively good (a small post-processing sketch of Eqs. (16)–(17) follows Table 1).

Table 1. Comparison of the present results with the results of other authors.

Featured quantities | Present | Norouzi et al. [10] | Riahi et al. [11] | Liu et al. [12] | Russell and Wang [13]
Re = 100: CD        | 1.395   | 1.35                | 1.35              | 1.38            | 1.387
Re = 100: CL        | 0.302   | 0.237               | 0.339             | 0.3             | 0.298
Re = 100: St        | 0.163   | 0.164               | 0.165             | 0.169           | 0.164
Re = 200: CD        | 1.298   | –                   | 1.31              | 1.29            | –
Re = 200: CL        | 0.532   | –                   | 0.69              | 0.5             | –
Re = 200: St        | 0.191   | –                   | 0.192             | 0.195           | –
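Given force histories from a run, Eqs. (16)–(17) are straightforward to evaluate. The sketch below does so on synthetic drag and lift signals (assumed, for illustration only), recovering St from the dominant lift frequency.

```python
import numpy as np

# Evaluation of Eqs. (16)-(17) on synthetic force histories
# (illustrative inputs; rho = U_inf = D = 1 in dimensionless form).
rho, U_inf, D = 1.0, 1.0, 1.0
t = np.linspace(0.0, 50.0, 5001)
F_D = 0.70 + 0.01 * np.cos(2 * np.pi * 0.326 * t)   # synthetic drag signal
F_L = 0.15 * np.sin(2 * np.pi * 0.163 * t)          # synthetic lift signal

C_D = np.mean(F_D) / (0.5 * rho * U_inf**2 * D)     # Eq. (16): ~1.4
C_L = np.max(np.abs(F_L)) / (0.5 * rho * U_inf**2 * D)

# Eq. (17): Strouhal number from the dominant lift frequency.
spec = np.fft.rfft(F_L - F_L.mean())
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
f_shed = freqs[np.argmax(np.abs(spec))]
S_t = f_shed * D / U_inf                            # ~0.16
print(C_D, C_L, S_t)
```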

5.2 Results of the Flow Through the Circular Cylinder Controlled by 2 Splitter Plates

When 2 control plates are attached to the circular cylinder, the interaction between it and the fluid changes the position and size of the vortices formed behind the cylinder. Specifically, when the attachment angle of the 2 control plates increases, the size of the main vortex decreases, while its position tends to move toward the centerline between the 2 plates. In addition, the interaction forms sub-vortices lying symmetrically about the centerline between the two plates. These two types of vortices interact with each other, which leads to changes in the drag and lift coefficients of the fluid acting on the structure. The simulation results can be seen in Fig. 6.

Fig. 6. Simulated interaction between the circular cylinder with 2 attached control plates and the fluid at Re = 100.


Effect of the Attachment Angle of the 2 Splitter Plates. To evaluate the effect of the attachment angle θf of the 2 splitter plates on the change of the flow structure, we consider the change of the drag and lift coefficients as θf varies. Specifically, Fig. 7 shows that when θf increases from 10° to 40°, the drag coefficient decreases and reaches its smallest value at θf = 40° (CD = 0.371, a 73.41% decrease compared to the plain circular cylinder). If θf continues to increase, the drag coefficient grows very quickly and reaches its maximum value at θf = 70° (CD = 13.469, an increase of more than 10 times compared to the plain circular cylinder); if θf increases further, the drag coefficient decreases again. We also consider the effect of the attachment angle on the lift coefficient. From the results, when θf increases from 10° to 40°, the lift coefficient decreases and reaches its smallest value at θf = 40° (CL = 0.13, a 56.954% decrease compared to the plain circular cylinder). If θf continues to increase, the lift coefficient increases and, past a certain value, can exceed the plain-cylinder value.

Fig. 7. Change of the drag and lift coefficients with the attachment angle θf of the 2 splitter plates.

5.3 Results of the Flow Through the Cylinder with 2 Crosses Placed Behind

Changing the Placement Angle θ of the 2 Crosses. To consider the effect of the placement angle θ of the 2 crosses on the drag and lift coefficients of the fluid acting on the circular cylinder, the other values are kept fixed: the radius rD = 1 and the rotation speed α = 6 for the 2 crosses. The results can be observed in Fig. 8. Specifically, when the angle θ increases from 10° to 40°, the drag coefficient decreases and reaches its smallest value at θ = 40° (CD = 1.0433, a 19.62% decrease compared to the plain circular cylinder). If the angle θ continues to increase, the drag coefficient gradually increases and can exceed the plain-cylinder value. On the other hand, when considering the lift coefficient, it can be seen that when the angle θ increases from 10° to 50°, the lift coefficient gradually decreases and approaches zero for 40° ≤ θ ≤ 50°.


Fig. 8. Change of the drag and lift coefficients with the placement angle θ of the 2 crosses.

Meanwhile, if θ continues to increase, the lift coefficient increases but, overall, always remains smaller than in the case of the plain circular cylinder. To consider the overall effect of the position of the 2 crosses in changing the structure of the flow, we can observe the simulation results in Fig. 9. Placing 2 counter-rotating crosses behind the circular cylinder leads to the formation of 2 nearly symmetric, opposite main vortices behind the cylinder. In addition, when the 2 crosses rotate, two symmetric sub-vortices form behind them, mirrored about the horizontal symmetry plane passing through the center of the cylinder. These vortices suppress each other, which makes the flow become stable and completely eliminates the vibration acting on the circular cylinder.

Fig. 9. Simulated interaction between a circular cylinder with 2 rotating crosses placed behind and the fluid at Re = 100.

5.4 Comparing the Effectiveness of the 2 Models

By summarizing the simulation results, we can build a comparison table of the flow-control effects of the 2 models. From Table 2, it can be seen that, thanks to the sub-systems, the drag and lift forces acting on the circular cylinder are significantly reduced in both models. In particular, the model with 2 attached splitter plates reduces the drag force the most, while the model with 2 placed crosses is more effective in reducing the lift force as well as the oscillation. Therefore, we can conclude that the control model with 2 rotating crosses is the optimal model.

Table 2. Comparison of the effectiveness of the attached-2-splitter-plates model and the placed-2-rotating-crosses model behind the circular cylinder.

Featured quantities (Re = 100) | Cylinder | Attached 2 splitter plates (L = 0.3D; θf = 40°) | Attached 2 rotating crosses (rD = 1; θ = 40°; α = 6)
CD                             | 1.395    | 1.35                                            | 1.38
CL                             | 0.302    | 0.339                                           | 0.3
St                             | 0.163    | 0.165                                           | 0.169

6 Conclusion
The simulation results show that when the fluid flows past the circular cylinder, the shear layers separate from both the upper and lower sides of the cylinder and roll up behind it to form vortices; these vortices cause vibration and can lead to structural damage (the von Karman vortex street). Attaching 2 control plates behind the circular cylinder changes the drag and lift of the fluid acting on the cylinder. Compared to the plain circular cylinder, the 2 control plates can effectively suppress the wake owing to the interaction between the vortices formed behind the cylinder and the splitter plates. The results also show that the attachment angle of the splitter plates has a great influence on the control effect, and the most effective range regarding the maximum reduction of drag and lift is 30° ≤ θf ≤ 40°. Placing 2 counter-rotating crosses behind the circular cylinder can completely eliminate the vortices and stabilize the flow. The simulation results also determine the influence of the position of the 2 crosses on the flow structure, with an effective region at 40° ≤ θ ≤ 50°; in this region, the vortex suppression and flow stabilization are optimal. Comparing the two control methods, both reduce the drag and lift forces. The model with 2 splitter plates reduces the drag force best, whereas the model with two rotating crosses reduces the lift as well as the oscillation better. Therefore, the control model with two rotating crosses is the most effective model.

References
1. Roshko, A.: On the Drag and Shedding Frequency of Two Dimensional Bluff Bodies. National Advisory Committee for Aeronautics (NACA) (1954)
2. Roshko, A.: On the wake and drag of bluff bodies. J. Aeronaut. Sci. 22, 124–132 (1955)
3. Apelt, C.J., West, G.S.: The effects of wake splitter plates on bluff body flow in the range 10^4 < R < 5 × 10^4: part 2. J. Fluid Mech. 71, 145–160 (1975)
4. Wu, J., Shu, C.: Numerical study of flow characteristics behind a stationary circular cylinder with a flapping plate. Phys. Fluids 23, 73–91 (2011)
5. Mittal, S., Raghuvanshi, A.: Control of vortex shedding behind circular cylinder for flows at low Reynolds numbers. Int. J. Numer. Meth. Fluids 35, 421–447 (2001)
6. Mittal, S., Kumar, B.: Flow past a rotating cylinder. J. Fluid Mech. 476, 303–334 (2003)
7. Schulmeister, J.C., Dahl, J.M., Weymouth, G.D., Triantafyllou, M.S.: Flow control with rotating cylinders. J. Fluid Mech. 825, 743–763 (2017)
8. Dehkordi, E.K., Goodarzi, M., Nourbakhsh, S.H.: Optimal active control of laminar flow over a circular cylinder using Taguchi and ANN. Eur. J. Mech.-B/Fluids 67, 104–115 (2018)
9. Bao, Y., Tao, J.: The passive control of wake flow behind a circular cylinder by parallel dual plates. J. Fluids Struct. 37, 201–219 (2013)
10. Norouzi, M., Varedi, S.R., Maghrebi, M.J.: Numerical investigation of viscoelastic shedding flow behind a circular cylinder. J. Nonnewton. Fluid Mech. 197, 31–40 (2013)
11. Riahi, H., Meldi, M., Favier, J., Serre, E.: A pressure-corrected immersed boundary method for the numerical simulation of compressible flows. J. Comput. Phys. 374, 361–383 (2018)
12. Liu, C., Zheng, X., Sung, C.H.: Preconditioned multigrid methods for unsteady incompressible flows. J. Comput. Phys. 139, 35–57 (1998)
13. Russell, D., Jane Wang, Z.: A Cartesian grid method for modeling multiple moving objects in 2D incompressible viscous flow. J. Comput. Phys. 191, 177–205 (2003)

Research on Performance of Color Reversible Coatings for Exterior Wall of Buildings

Vu-Lan Nguyen1, Chang-Ren Chen2, Chang-Yi Chung2, Kao-Wei Chen2, and Rong-Bin Lai2

1 Ho Chi Minh City University of Technology and Education, Ho Chi Minh City, Vietnam
[email protected]
2 Green Energy Technology Research Center, Kun Shan University, Tainan, Taiwan

Abstract. The purpose of this study is to investigate the ability to improve the thermal insulation efficiency of a building envelope by adding a thermochromic material to the outer paint coating layer. Normal white-color paint (W) is mixed with a reasonable amount of black-color reversible thermochromic material (r) and thus turns into a gray-color paint (R) which can perform a color-changing process. The new paint mixture (R) is then compared with the normal gray-color paint (G) and white-color paint (W) on the market in terms of reflectance and absorption under solar radiation. Also, a phase-change material (p) is added to form a new paint mixture (R + p) in order to increase the heat-insulating effect of the original paint. Three parameters, namely reflectance, absorption and surface temperature, are investigated to analyze the thermal performance of all prepared paint samples. Experiments are conducted indoors with a halogen light source acting as the simulated solar infrared radiation source. Although the thermal performance of the color-changing paint (R) would be revealed more realistically in outdoor experiments, the indoor conditions allow a more convenient comparison of the achieved results with more controllable parameters. From the thermal analysis of the samples, it has been found that the color-changing paint (R) and the color-changing paint with phase-change material (R + p) can moderately improve the insulation effect to keep the indoor space cool in summer, but fail to keep the indoor space warm in winter. Besides, the combination of normal gray paint and phase-change material (G + p) is not recommended, since it helps neither reject heat in summer nor absorb heat in winter significantly.

Keywords: Thermal radiation · Reversible thermochromic material · PCM · K-value · Building envelope

1 Introduction
Solar radiation is considered an endless, clean and valuable renewable energy resource. However, when it is used improperly, it can become an unwanted heat flow causing discomfort, e.g. in buildings and the environment. Energy saving for buildings covers the most effective and economical technological and engineering solutions, not only for power-consuming devices or systems used in buildings to

facilitate occupants' life, e.g. for lighting, ventilation, heating-cooling, cooking, etc., but also for construction designs and materials that require the least energy. In particular, heat exchange between buildings and their surrounding environment is an important factor considerably influencing the total energy consumption of buildings [1, 2]. Among the solutions for improving heat exchange in buildings, thermal insulation of the building envelope, such as walls and roofs, is of very high interest for researchers. The insulation effect can be improved either by reducing the heat transfer rate through the envelope layer(s) [3–5], or by increasing the heat reflectance and reducing the heat absorption at the outer envelope surface by polishing or using suitable paint colors [6–8]. Technically, surfaces with high reflectivity are a good solution to reject incident solar radiation, but they become problematic when the incident heat is needed. For example, in summer, painting a house white helps increase the surface reflectance and thus reduces the heat penetrating through the wall into the indoor space; in winter, however, a white surface cannot utilize the valuable heat from the sun to keep the indoor space warm, and a dark paint is a better coating layer. In this regard, this study conducts an experimental investigation to answer the question: if a reversible thermochromic material (r) is added to the exterior paint to help it change color according to surface temperature, will the sunlight reflection and absorption of the building be appropriately adjusted during summer and winter? On the other hand, phase change materials (PCMs) have also been widely used in building materials, either as a separate thermal control layer or as an additive mixed with other conventional construction materials [9–13]. While normal construction materials work with sensible heat flows, PCMs can store latent heat and thus control the heat flow rate through the envelope layer(s). These materials have been prepared and made available for a wide range of melting temperatures, latent heat storage capacities and heat transfer coefficients [14]. Therefore, a phase change material (p) is also added to the experimental mixture in this study to take advantage of the delayed heating and cooling effect for the coating. The reflectance of each coating surface is measured with a spectrometer to make a preliminary evaluation of the reflection capability.

2 Description of the Color-Changing Coating Paint Components
In general, there are many kinds of coating materials used for building protection, decoration or special functions. Usually, a coating material is selected based on its appearance, thermal performance, fusibility, durability, convenience and safety. Plenty of them, after decades of development and use, are cheap and easily found. Thus, this research looks for a solution to modify the thermal performance of existing commercial house paints on the market by adding a reversible thermochromic material, whose optical properties vary with temperature, into the normal house paints. Thermochromic materials are dichroic or bi-chromatic and react sensitively to light, heat, electricity and pressure. They are used for information recording and presentation, medical diagnosis, cancer treatment devices, color displays, color printers and so on.


They can be divided into three categories including inorganic materials, liquid crystals and organic materials with different properties and color changing mechanisms.

Fig. 1. Wall surface temperature in summer time

Fig. 2. Wall surface temperature in winter time

Typical wall surface temperatures measured in summer and winter are shown in Fig. 1 and Fig. 2, respectively. The thermochromic material therefore has to be selected so that it stays in a dark-color state in winter, for more absorption of heat radiation, while easily reaching the color-changing temperature in summer to reflect more heat radiation. According to the data in Fig. 1 and Fig. 2, the authors selected the powder-type thermochromic material TM-43 for this research, which is black at temperatures lower than 43 °C and whose discoloration starts at 40 °C and above. Figure 3 shows the appearance of the thermochromic material at different temperatures.

Fig. 3. Appearance of the thermochromic material at different temperatures: a) at room temperature; b) at the color-changing temperature Tc.


Since the main purpose of this research is to produce the discoloration on the outer surface of the envelope, the mixture containing the reversible thermochromic material (r) must always stay on top of the envelope. Therefore, common paints are considered as the solvent for the thermochromic material. Three types of paints are available on the market: blended paints, cement paints and latex paints. From the AM1.5 solar spectrum, one may find that it is the infrared light with wavelengths from 780 nm to 2500 nm that heats building envelope surfaces. For this reason, the selected commercial paint is a white cement water-based paint. Phase change materials (PCMs) are substances that can shift between 2 phases at normal atmospheric conditions. Solid-liquid PCMs are the most widely developed and used type in comparison to solid-gas and liquid-gas PCMs, due to their availability over a large range of melting temperatures and latent heat values, their cost and their applicability. PCMs (mainly solid-liquid ones) can be used in buildings to condition the heat flow rate through building envelopes, since they remain near their phase-change temperature. The PCM used in this research is P-45, with a melting temperature of 44.64 °C and a latent heat of 170,425.7 J/g. Its Differential Scanning Calorimeter (DSC) data are shown in Fig. 4. This is a fatty acid phase change material with the chemical formula CH3(CH2)2nCOOH. The main raw materials of fatty acids are derived from plants or animals, which are relatively natural, environmentally friendly and renewable.

Fig. 4. DSC data of PCM P-45.

The following equations can be applied to identify the energy saving rate provided by the reversible thermochromic painting layer. According to the principle of energy conservation, one may write:

$W_q = W_r + W_a + W_p + W_l + W_t$   (1)

Wherein, Wq is the total incident radiation on the envelope surface; Wr is the radiation reflected back by the envelope surface; Wa is the radiation absorbed and stored by the envelope layer (bricks, mortar, …) in sensible heat form; Wp is the radiation absorbed and stored by the added PCMs in latent and/or sensible heat form; Wl is the heat loss from the envelope surface to the ambient by convection and radiation; and Wt is the heat transferred through the envelope layer.


Further detailed development of Eq. (1) gives the total thermal variation equation, Eq. (2):

Wq = {Σ mi·Cpi·ΔTi + [εσ(Ta² + To²)(Ta + To) + he]·(Ta − To) + mp1·Cpso·ΔTp1 + mp2·ΔH + mp3·Cpli·ΔTp3 + hi·(Tin − Tr)}·(1 − ρ)⁻¹    (2)

Wherein, ρ is the reflectivity of the envelope outer surface; mi, Cpi and ΔTi are the weight, specific heat and temperature difference across the two side interfaces of each sensible (solid) envelope layer, respectively; ε is the emissivity of the envelope outer surface; σ is the Stefan–Boltzmann constant; he is the average convection coefficient of the envelope outer surface; hi is the average convection coefficient of the envelope inner surface; mp1, Cpso and ΔTp1 are the weight, specific heat and temperature difference across the two side interfaces of the solid PCM layers, respectively; mp3, Cpli and ΔTp3 are the corresponding quantities for the liquid PCM layers; mp2 is the amount of PCM that has changed phase and ΔH is the latent heat; Ta is the average outdoor ambient temperature; To is the envelope outer surface temperature; Tin is the envelope inner surface temperature; and Tr is the average indoor temperature. It can be inferred from the equation that, under the same environment and location conditions, the higher the envelope reflectivity ρ, the smaller the variation of the indoor temperature Tr. For example, in summer, when the envelope rejects more incident radiation, the rise of indoor temperature is reduced, which makes the use of thermochromic materials on the envelope reasonable. The equation may be further simplified for the experimental sample without PCM; since the coating layer is very thin, its stored sensible heat can be neglected.
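As a minimal illustration of the reflectivity term in Eqs. (1)–(2), the snippet below computes the radiation that is not reflected, Wq·(1 − ρ), for two assumed surface states. Every number is a placeholder chosen only to show that a higher ρ leaves less energy to heat the envelope; it is a sketch, not the authors' calculation.

```python
# Minimal sketch of the energy balance in Eq. (1): for incident radiation Wq
# and surface reflectivity rho, the radiation entering the envelope (to be
# absorbed, stored, lost, or transmitted) is Wq * (1 - rho).
# All numbers below are illustrative assumptions, not measured values.

def entering_radiation(wq_w_per_m2: float, rho: float) -> float:
    """Radiation not reflected by the envelope surface, W/m^2."""
    return wq_w_per_m2 * (1.0 - rho)

WQ = 800.0  # assumed incident solar radiation, W/m^2

# Assumed reflectivities for a dark (winter) and a discolored (summer) state:
for label, rho in [("dark state", 0.2), ("discolored state", 0.6)]:
    print(f"{label}: rho={rho:.1f} -> "
          f"{entering_radiation(WQ, rho):.0f} W/m^2 enters the envelope")
```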

3 Experimental Setup and Method

3.1 Sample Preparation

As mentioned above, this experiment compares the gray color-changing paint (R) with a normal gray paint (G) from the market; R must first be prepared to obtain its specific gray color, and the matching gray G is then selected from the market accordingly. In total, six sample paints were tested. Each sample is painted on the surface of identical acrylic pieces of 10 × 10 × 1.5 cm, whose thermal conductivity is 0.196 W/(m·K). Figure 5 shows the prepared samples. To reduce experimental error, two pieces of each paint sample were made and the average data were calculated. It was found that (r) can be dissolved into (W) at a rate of up to 12%. However, because some samples also need the phase-change material (p), mixing 12% of (r) and then adding (p) causes cracks on the sample surface as it dries. Repeated attempts showed that the best ratio for (R + p) is 10% of (r) and 6% of (p) for proper integration of the PCM.
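A small batch calculator makes these mixing limits concrete. It treats the percentages above as mass fractions of the base paint, which is our assumption about how the ratios were defined; the function name is ours.

```python
# Mixing-ratio sketch for the color-changing paint samples: thermochromic
# powder (r) dissolves into the white paint (W) at up to 12%, but when PCM
# (p) is also added the workable recipe found above is 10% (r) + 6% (p).
# Percentages are assumed to be mass fractions of the base paint.

def batch(paint_g: float, with_pcm: bool) -> dict:
    r = paint_g * (0.10 if with_pcm else 0.12)
    p = paint_g * 0.06 if with_pcm else 0.0
    return {"paint_g": paint_g, "thermochromic_g": r, "pcm_g": p}

print(batch(500.0, with_pcm=True))   # (R + p) sample
print(batch(500.0, with_pcm=False))  # R sample at the 12% solubility limit
```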


3.2 Experimental System

A Hitachi U-4100 reflectance spectrometer is used to measure the reflectance of each paint sample surface. A thermometer system measures the temperature of the surface and inner layer of each test piece. To prevent heat loss to the surroundings, the pieces are tightly insulated with foam. They are placed horizontally so that heat flows evenly from the high-temperature upper surface to the low-temperature lower surface until the steady-state temperature difference across the material is reached and recorded.

Fig. 5. Prepared paint samples.

Because ambient temperature, wind and shading vary in natural outdoor conditions, making the incident radiation and surface heat loss unstable, an indoor experimental system was set up to provide a more controlled basis for comparing results. If the samples were exposed to direct sunlight outdoors, too many error factors would be introduced, such as uncontrollable clouds, wind and humidity, which directly affect the heat dissipation of the material and the ambient temperature. The only remedy would be to test all samples at the same time and place; however, the available equipment is not sufficient to test the above 12 test pieces simultaneously, and even if it were, fluctuating temperature would prevent the test pieces from reaching a steady-state temperature difference, so the results could not serve as a typical basis. In the indoor experiment, the ambient temperature, wind and shading are strictly controlled. Four 300 W halogen lights represent the solar radiation source. Although the spectrum generated by halogen lights is not exactly the same as solar radiation, it fully covers the infrared range of 780 nm – 2400 nm within the solar spectrum. The power of the lights is adjusted so that the surface temperature of the normal gray paint sample (used as reference) reaches 40 °C (representing winter) and 50 °C (representing summer), in line with Fig. 1 and Fig. 2. The lighting section of the experiment is surrounded on four sides by white plates; their high reflectivity and low thermal conductivity reflect back any radiation deflected toward the test piece surface and also block cold air convection.


4 Results and Discussion

Tests on all six samples were conducted and the results recorded at two ambient temperature values, 20 °C and 25 °C.

Fig. 6. Comparison between G and R at ambient temperature 20 °C.

Fig. 7. Comparison between G and R at ambient temperature 25 °C.

Fig. 8. Comparison between G and (R + p) at ambient temperature 20 °C.

460

V.-L. Nguyen et al.

Fig. 9. Comparison between G and (R + p) at ambient temperature 25 °C.

Fig. 10. Comparison between G and (G + p) at ambient temperature 20 °C.

Fig. 11. Comparison between G and (G + p) at ambient temperature 25 °C.

Research on Performance of Color Reversible Coatings

Fig. 12. Comparison between W and (W + p) at ambient temperature 20 °C.

Fig. 13. Comparison between W and (W + p) at ambient temperature 25 °C.

Fig. 14. Comprehensive spectrum comparison among 3 samples R, W and G.


Fig. 15. Color-changing process of R, (R + p), W and (W + p) samples.

Data are collected and presented in graphs for easy comparison. Figures 6, 7, 8, 9, 10, 11, 12 and 13 show pairwise comparison results between samples. The results show that the performance of the R sample is similar to that of the W sample, which means that adding thermochromic material to the white paint is not a reasonable solution. This can be further explained by the reflectance (%) of normal white paint ρ(W), normal gray paint ρ(G) and the gray thermochromic mixture ρ(R) measured with the Hitachi U-4100, as shown in Fig. 14. Interestingly, the reflectance of the color-changing paint (R) at room temperature is very similar to that of the gray paint (G), but only in the wavelength spectrum below 600 nm. Meanwhile, in the wavelength spectrum above 800 nm it is almost the same as the white paint (W), even though the temperature has not reached the color-changing value of the thermochromic material. In other words, the color-changing paint (R) has the same ability to reflect infrared light as the white paint (W) even though it looks gray. This is because the thermochromic material used only varies the reflectance in the visible range and has little effect in the infrared range. For the same reason, although the results show that the best sample, (R + p), lowers the indoor temperature slightly compared to the normal gray paint, the color-changing effect unfortunately does not significantly improve the insulation of the envelope (Fig. 15).


Table 1. Comparison among samples (surface temperatures, temperature difference, and benefit ratio).

At 20 °C (endothermic benefit ratio):
Reference | Sample | Tref (°C) | Tsample (°C) | ΔT (°C) | Ratio
G | R     | 39.2 | 31.9 | 7.3  | −18.6%
G | R + p | 39.2 | 33.4 | 5.8  | −14.8%
G | G + p | 39.2 | 38.5 | 0.7  | −1.8%
W | R     | 31.7 | 32.4 | −0.7 | 2.2%
W | R + p | 31.7 | 33.5 | −1.8 | 5.7%
W | W + p | 31.7 | 30.6 | 1.1  | −3.5%

At 25 °C (insulation benefit ratio):
Reference | Sample | Tref (°C) | Tsample (°C) | ΔT (°C) | Ratio
G | R     | 50.3 | 41.7 | 8.6  | 17.1%
G | R + p | 50.3 | 41.7 | 8.6  | 17.1%
G | G + p | 50.3 | 51.8 | −1.5 | −3.0%
W | R     | 42.6 | 42.2 | 0.4  | 0.9%
W | R + p | 42.6 | 42.5 | 0.1  | 0.2%
W | W + p | 42.6 | 41.1 | 1.5  | 3.5%

Table 1 gives a comprehensive pairwise comparison of the samples. For example, when the color-changing paint (R) is compared with the gray paint (G) of similar color, its heat insulation efficiency in summer (at 25 °C) is significantly better, while its heat absorption efficiency in winter (at 20 °C) is inferior. That is, for the same gray color, the heat insulation effect of the surface in the summer environment is increased by 17.1%, with a temperature difference of about 8.6 °C; but in winter, heat absorption drops by 18.6%, with a temperature difference of about 7.3 °C. Similar analyses can be made in the same manner for the other samples.
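The ratios in Table 1 follow directly from the measured surface temperatures: the temperature difference is Tref − Tsample, and the ratio is that difference normalized by the reference temperature, with the sign flipped at 20 °C because a cooler surface in winter means less useful heat absorbed. A minimal check (helper name is ours):

```python
# Reproducing the benefit ratios of Table 1 from the surface temperatures:
# dT = T_ref - T_sample, ratio = dT / T_ref. At 25 C a positive ratio is an
# insulation benefit; at 20 C the endothermic benefit carries the opposite
# sign, since a cooler winter surface absorbs less heat radiation.

def benefit(t_ref_c: float, t_sample_c: float, winter: bool = False):
    dt = t_ref_c - t_sample_c
    ratio = dt / t_ref_c
    return round(dt, 1), round(-ratio if winter else ratio, 3)

print(benefit(50.3, 41.7))               # summer G vs R -> (8.6, 0.171)
print(benefit(39.2, 31.9, winter=True))  # winter G vs R -> (7.3, -0.186)
```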


5 Conclusions

Combining thermochromic materials with PCM in normal building envelope coating paints proves to be an insignificant solution for improving the insulation of buildings, especially for the purpose of keeping indoor spaces cool in summer and warm in winter. Although the color change may give users a different impression of the buildings' appearance, the visible change of color has no effect in the infrared range, which is invisible to the human eye; the surface temperature shows no obvious variation when discoloration happens. For the mixture of color-changing paint and phase change material (R + p), the most suitable melting temperature of the phase change material (p) should fall within 40–50 °C; above that, it may have the opposite effect. Nevertheless, the (R + p) mixture was the best paint mixture in this test.

Acknowledgement. This research was subsidized and supported under project number MOST 107-2221-E-168-017 by the Taiwan Ministry of Science & Technology, the Ministry of Education, and the Green Energy Technology Research Center, Kun Shan University, Taiwan. The authors would like to give their deepest thanks to those who made this project possible, and to Ho Chi Minh City University of Technology and Education for its valuable support.


Advances in Civil Engineering

Experimental Study of Geopolymer Concrete Using Recycled Aggregates Under Various Curing Conditions

Ngoc Thanh Tran
Faculty of Civil Engineering, Ho Chi Minh City University of Transport, Ho Chi Minh City, Vietnam
[email protected]

Abstract. This paper examines how incorporating recycled aggregates into geopolymer concrete affects its compressive strength under various curing conditions. Both recycled coarse and fine aggregates were produced by crushing demolition waste concrete. In total, 8 test series of cube specimens were subjected to compressive testing. The replacement ratio of recycled coarse aggregate was maintained at 100%, while that of recycled fine aggregate was varied among 0%, 25%, 50%, 75% and 100%. The specimens were cured under two different conditions: a dry cabinet at 80 °C for 24 h and a mobile dryer at 120 °C for 8 h. The results showed that the replacement of recycled coarse aggregates had no significant effect on the compressive strength, whereas that of recycled fine aggregates had a negative effect, and a higher replacement ratio led to a greater reduction in compressive strength. Specifically, the replacement of 100% recycled coarse aggregate and 25% recycled fine aggregate did not produce any significant negative impact on the compressive strength of geopolymer concrete. In addition, specimens cured with the mobile dryer reached slightly higher compressive strength than those cured in the dry cabinet.

Keywords: Geopolymer · Recycled aggregate · Curing condition · Demolition waste concrete · Compressive strength

1 Introduction

Concrete is the most important construction material used in infrastructure. About 25 billion tons of concrete were produced in 2016 alone, and the demand for concrete is estimated to increase at a rate of 20% per year, especially in developing countries [1]. To meet the rising demand, large amounts of raw materials, including cement and aggregates, have been produced; cement and aggregate production have reached around 4.2 billion and 40 billion tons per year, respectively [1]. However, the production of raw materials causes adverse effects on natural resources and the environment. For example, producing each ton of cement generates 1 ton of carbon dioxide and consumes about 1.65 tons of limestone [2], while aggregate extraction results in landslides, changes in water flow and floods [1]. At the same time, millions of tons of demolition waste concrete are disposed of in


landfills, causing environmental pollution and consuming valuable landfill space [3]. Thus, it is necessary to minimize the cement and natural aggregate content in concrete while enhancing the utilization of recycled aggregates from waste material. Various methods have been proposed to reduce the cement and natural aggregate content in concrete: 1) the use of industrial waste such as fly ash, silica fume and slag as substitute materials for cement [4]; 2) the development of geopolymer concrete without cement [2]; 3) the use of sea sand and crushed sand to replace normal sand [1]; and 4) the partial replacement of aggregate with waste glass, coconut shell or ceramic brick [5]. However, the above methods have shown limitations: industrial waste can replace only a small portion of the cement in concrete [4]; although geopolymer concretes contain no cement, their composition still requires a large amount of aggregate [2]; crushed sand is costly, while sea sand can promote corrosion of steel reinforcement [1]; and the addition of waste glass, coconut shell or ceramic brick negatively affects the compressive strength of concrete [5].

To overcome these limitations, this study proposes to develop an environmentally friendly concrete by combining geopolymer concrete with recycled aggregates produced from demolition waste concrete. However, it is an open question whether geopolymer concrete with recycled aggregates can achieve excellent compressive strength. Unfortunately, very little research has been done to assess the compressive strength of geopolymer concrete with recycled aggregates from demolition waste concrete; most researchers have focused on the use of recycled aggregates in cement-based concrete. The compressive strength of cement-based concrete with recycled aggregates has been found to be lower than that of ordinary cement-based concrete: partial replacement of normal aggregate with recycled aggregate led to a 20% reduction in compressive strength, while full replacement resulted in a 50% reduction [6–9]. The incorporation of coarse recycled aggregates was found to have more significant negative effects on the properties of cement-based concrete than on those of geopolymer concrete [10]. This study aims to examine the compressive strength of geopolymer concrete using recycled aggregates under various curing environments. The detailed objectives are: 1) to evaluate the effect of the replacement of coarse and fine aggregates on the compressive strength of geopolymer concrete, and 2) to investigate the effect of the curing condition on the compressive strength of geopolymer concrete with recycled aggregate.

2 Experimental Program

2.1 Materials and Specimen Preparation

An experimental program was designed to evaluate the compressive strength of geopolymer concrete with recycled aggregates under various curing conditions, as shown in Fig. 1. In total, 8 test series were prepared, with at least three specimens per series. The replacement ratio of recycled coarse aggregate was maintained at 100%, while that of recycled fine aggregate was varied among 0%, 25%, 50%, 75% and 100%. The specimens were cured under two different conditions: a dry cabinet at 80 °C for 24 h and a mobile dryer at 120 °C for 8 h. The first condition is suitable for precast structural applications, whereas the second is effective for cast-in-place structures.

Fig. 1. Detail of the experimental program. Series labels combine the coarse aggregate type (D0 = normal, D1 = recycled), the recycled fine aggregate ratio (C0 = 0%, C25 = 25%, C50 = 50%, C75 = 75%, C1 = 100%) and the curing condition (DC = dry cabinet 80 °C, MD = mobile dryer 120 °C), giving 8 series: D0C0DC, D1C0DC, D1C25DC, D1C50DC, D1C75DC, D1C1DC, D1C0MD and D1C25MD, used to study the effects of coarse aggregate, fine aggregate and curing condition.

The composition of the mixture is given in Table 1 and images of the materials are shown in Fig. 2. The recycled coarse and fine aggregates were produced by crushing demolished concrete. The maximum grain size of the recycled coarse aggregate was around 20 mm, while that of the recycled fine aggregate was controlled at 5 mm through a sieve.

Table 1. Mixture proportions of geopolymer concrete (kg/m³)
Fly ash | Coarse aggregate | Fine aggregate | Sodium silicate | Sodium hydroxide | Water
428     | 1170             | 630            | 114             | 57               | 43
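A few ratios can be derived from Table 1. The definitions below (the activator taken as sodium silicate + sodium hydroxide + water) are our reading of the mix, not values stated by the author; the sketch only shows the arithmetic.

```python
# Derived ratios from the mixture proportions in Table 1 (kg per m^3 of
# concrete). The "activator" definition is our assumption.

mix = {  # kg/m^3
    "fly_ash": 428, "coarse_agg": 1170, "fine_agg": 630,
    "sodium_silicate": 114, "sodium_hydroxide": 57, "water": 43,
}

activator = mix["sodium_silicate"] + mix["sodium_hydroxide"] + mix["water"]
print(f"silicate/hydroxide ratio: {mix['sodium_silicate'] / mix['sodium_hydroxide']:.1f}")  # 2.0
print(f"activator/fly-ash ratio : {activator / mix['fly_ash']:.2f}")                        # 0.50
print(f"total mass              : {sum(mix.values())} kg/m^3")
```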


Fig. 2. Images of materials: a) fly ash; b) normal coarse aggregate; c) recycled coarse aggregate; d) sodium silicate; e) normal fine aggregate; f) recycled fine aggregate; g) sodium hydroxide; h) water.

In the preparation of the materials, the recycled coarse aggregates were washed right after crushing, and all aggregates were dried for two days at room temperature before mixing. The sodium hydroxide solution was produced by mixing sodium hydroxide and water; the sodium hydroxide and sodium silicate solutions were then mixed together to create the activation solution. The mixture was mixed using a Hobart-type mixer with a 20-L capacity. First, the coarse and fine aggregates were put into the mixer and dry-mixed for 5 min. The fly ash was then added and mixed for a further 5 min. After dry mixing, the activation solution was divided into two parts, added in two stages, and mixed for 5 min. Next, a wide scoop was used to pour the mixture into metal cube molds, and the fresh mixture was compacted by hand. After being stored for 48 h in the laboratory at room temperature, all specimens were demolded. After demolding, six series were placed in the dry cabinet at 80 °C for 24 h and two series were cured with the mobile dryer at 120 °C for 8 h, as shown in Fig. 3. All specimens were tested in compression at the age of 14 days.

Fig. 3. Curing conditions: a) dry cabinet; b) mobile dryer.

2.2 Test Setup and Procedure

The test set-up is illustrated in Fig. 4. The specimens are cubes with dimensions of 150 × 150 × 150 mm. A compression testing machine with a capacity of 3,000 kN was used for the compressive tests, and the compressive load was recorded by the load cell. The testing procedure followed standard TCVN 3118-1993.

Fig. 4. Compressive test set up
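From the cube geometry above, each strength value in Table 2 is simply the peak load divided by the loaded face area. A quick check (illustrative helper name) using the first specimen of series D0C0DC:

```python
# Cube compressive strength from the recorded maximum load: f_c = P / A for
# a 150 x 150 x 150 mm cube (loaded face A = 150 x 150 mm = 22,500 mm^2).

SIDE_MM = 150.0
AREA_MM2 = SIDE_MM * SIDE_MM  # 22,500 mm^2

def cube_strength_mpa(max_load_kn: float) -> float:
    # kN -> N gives N/mm^2, i.e. MPa
    return max_load_kn * 1000.0 / AREA_MM2

print(f"{cube_strength_mpa(792.50):.2f} MPa")  # -> 35.22 MPa, matching Table 2
```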

3 Results and Discussion

3.1 Compressive Strength of Geopolymer Concrete with Recycled Aggregate

The compressive strength of all series is summarized in Table 2. The results showed that the compressive strength of geopolymer concrete with recycled aggregates varied from 9.91 MPa to 35.6 MPa. In addition, the compressive strength of geopolymer concrete was strongly dependent on the replacement ratio of aggregates and curing conditions.

Table 2. Compressive strength of geopolymer concrete

Series  | Max. compressive load Sp1/Sp2/Sp3 (kN) | Avg. (kN) [STD] | Compressive strength Sp1/Sp2/Sp3 (MPa) | Avg. (MPa) [STD]
D0C0DC  | 792.50 / 820.20 / 755.40 | 789.37 [32.51] | 35.22 / 36.45 / 33.57 | 35.08 [1.45]
D1C0DC  | 757.70 / 727.20 / 772.00 | 752.30 [22.88] | 33.68 / 32.32 / 34.31 | 33.44 [1.02]
D1C25DC | 747.60 / 718.50 / 735.50 | 733.87 [14.62] | 33.23 / 31.93 / 32.69 | 32.62 [0.65]
D1C50DC | 505.10 / 553.00 / 526.00 | 528.03 [24.01] | 22.45 / 24.58 / 23.38 | 23.47 [1.07]
D1C75DC | 317.00 / 335.00 / 296.50 | 316.17 [19.26] | 14.09 / 14.89 / 13.18 | 14.05 [0.86]
D1C1DC  | 227.10 / 232.00 / 210.00 | 223.03 [11.55] | 10.09 / 10.31 / 9.33  | 9.91 [0.51]
D1C0MD  | 807.70 / 785.50 / 810.10 | 801.10 [13.56] | 35.90 / 34.91 / 36.00 | 35.60 [0.60]
D1C25MD | 770.10 / 792.20 / 735.30 | 765.87 [28.69] | 34.23 / 35.21 / 32.68 | 34.04 [1.27]

3.2 Effect of Recycled Coarse Aggregate on the Compressive Strength of Geopolymer Concrete

Figure 5 shows the effect of the full replacement of coarse aggregate by comparing the compressive strength of series D0C0DC and D1C0DC. It is clear that the full replacement of normal coarse aggregate with recycled coarse aggregate had no significant effect on compressive strength: the strength of geopolymer concrete with recycled coarse aggregate was only 4.8% lower than that with normal coarse aggregate. This is because the recycled coarse aggregates largely retained their properties compared to normal coarse aggregates, even though they had been recycled and reused.

Fig. 5. Effect of full replacement of recycled coarse aggregate on the compressive strength of geopolymer concrete (normal CA: 35.1 MPa; recycled CA: 33.4 MPa).

3.3 Effect of Recycled Fine Aggregate on the Compressive Strength of Geopolymer Concrete

The effects of partial replacement with recycled fine aggregate are shown in Fig. 6. As the replacement ratio of recycled fine aggregate increased, the compressive strength of geopolymer concrete decreased. The compressive strength decreased only slightly as the replacement ratio rose from 0% to 25%, but it dropped significantly at replacement ratios above 25%: the strength fell by 2% at a replacement ratio of 25%, and by 30%, 58% and 70% at replacement ratios of 50%, 75% and 100%, respectively. This is likely because the recycled fine aggregate contained different components, including sand and cement paste, which negatively affected the geo-polymerization process. Furthermore, the incorporation of recycled fine aggregate increased the porosity and weakened the interfacial transition zone in the geopolymer concrete [11]. Thus, the optimum replacement ratio of recycled fine aggregate is about 25% to maintain a high compressive strength of geopolymer concrete.
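The quoted reduction percentages follow directly from the series averages in Table 2, taking D1C0DC as the reference. A minimal check:

```python
# Strength reduction relative to the 0% recycled-fine-aggregate series
# (D1C0DC, 33.44 MPa), reproducing the percentages quoted in the text
# from the averages in Table 2.

REF = 33.44  # MPa, D1C0DC
series = {"D1C25DC": 32.62, "D1C50DC": 23.47, "D1C75DC": 14.05, "D1C1DC": 9.91}

for name, fc in series.items():
    print(f"{name}: {(REF - fc) / REF * 100:.0f}% reduction")
# -> 2%, 30%, 58%, 70%
```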

Fig. 6. Effect of partial replacement of recycled fine aggregate on the compressive strength of geopolymer concrete (D1C0DC: 33.4 MPa; D1C25DC: 32.6 MPa; D1C50DC: 23.5 MPa; D1C75DC: 14.1 MPa; D1C1DC: 9.91 MPa).

3.4 Effect of the Curing Condition on the Compressive Strength of Geopolymer Concrete with Recycled Aggregate

The mechanical resistance of geopolymer concrete depends strongly on the curing conditions, and most research has focused on curing temperatures ranging from 40 °C to 80 °C. Figure 7 shows the effect of curing condition on the compressive strength of geopolymer concrete with recycled aggregates. Interestingly, the specimens cured by the mobile dryer at 120 °C for 8 h showed slightly higher compressive strength than those cured in the dry cabinet at 80 °C for 24 h: the strength of the mobile-dryer specimens was 4–6% higher. This occurs because the higher temperature of the mobile dryer provided a more favorable condition for the geo-polymerization process and the structure formation of geopolymer concrete, leading to higher compressive strength. This finding is consistent with previous studies [12, 13], which examined the effect of curing temperatures ranging from 40 °C to 100 °C on the compressive strength of geopolymer concrete.


Fig. 7. Effect of curing condition on the compressive strength of geopolymer concrete with recycled aggregate (dry cabinet 80 °C: 33.4 MPa at 0% recycled FA, 32.6 MPa at 25%; mobile dryer 120 °C: 35.1 MPa at 0%, 34.0 MPa at 25%).

4 Conclusion

An experimental program was performed to evaluate the compressive strength of geopolymer concrete with recycled aggregates under various curing conditions. Based on the results of this study, the following conclusions can be drawn:
– The full replacement of normal coarse aggregate with recycled coarse aggregate had no significant effect on compressive strength; the compressive strength of geopolymer concrete with recycled coarse aggregate was 4.8% lower than that with normal coarse aggregate.
– As the replacement ratio of recycled fine aggregate increased, the compressive strength of geopolymer concrete decreased. The reduction was slight as the replacement ratio rose from 0% to 25%, but significant at higher ratios. The optimum replacement ratio of recycled fine aggregate is about 25% to maintain high compressive strength.
– The specimens cured by the mobile dryer at 120 °C for 8 h showed slightly higher compressive strength than those cured in the dry cabinet at 80 °C for 24 h.


References
1. Xiao, J., Qiang, C., Nanni, A., Zhang, K.: Use of sea-sand and seawater in concrete construction: current status and future opportunities. Constr. Build. Mater. 155, 1101–1111 (2017)
2. Das, S.K., Banerjee, S., Jena, D.: A review on geo-polymer concrete. Int. J. Eng. Res. Technol. (IJERT) 2(9), 2785–2788 (2013)
3. Safiuddin, M., Alengaram, U.J., Rahman, M.M., Salam, M.A., Jumaat, M.J.: Use of recycled concrete aggregate in concrete: a review. J. Civ. Eng. Manag., 1–35 (2013)
4. Nochaiya, T., Wongkeo, W., Chaipanich, A.: Utilization of fly ash with silica fume and properties of Portland cement–fly ash–silica fume concrete. Fuel 89(3), 768–774 (2010)
5. Thankachan, A., Kumari, G.: Experimental study on properties of concrete by using ceramic materials. Int. J. Eng. Trends Technol. 43, 404–408 (2017)
6. Mandal, S., Chakraborty, S., Gupta, A.: Some studies on durability of recycled aggregate concrete. Indian Concrete J. 76(6), 385–388 (2002)
7. Sagoe-Crentsil, K.K., Brown, T., Taylor, A.H.: Performance of concrete made with commercially produced coarse recycled concrete aggregate. Cem. Concr. Res. 31(5), 707–712 (2001)
8. Shayan, A., Xu, A.: Performance and properties of structural concrete made with recycled concrete aggregate. ACI Mater. J. 100(5), 371–380 (2003)
9. Tabsh, S.W., Abdelfatah, A.S.: Influence of recycled concrete aggregates on strength properties of concrete. Constr. Build. Mater. 23(2), 1163–1167 (2009)
10. Mesgari, S., Akbarnezhad, A., Xiao, J.Z.: Recycled geopolymer aggregates as coarse aggregates for Portland cement concrete and geopolymer concrete: effects on mechanical properties. Constr. Build. Mater. 236 (2020)
11. Abdollahnejad, Z., Mastali, M., Falah, M., Luukkonen, T., Mazari, M., Illikainen, M.: Construction and demolition waste as recycled aggregates in alkali-activated concretes: a review. Materials 12, 1–22 (2019)
12. Nurruddin, M.F., Haruna, S., Mohammed, B., Shaaban, I.G.: Methods of curing geopolymer concrete: a review. Int. J. Adv. Appl. Sci. 5(1), 31–36 (2018)
13. Heah, C.Y., et al.: Effect of curing profile on Kaolin-based geopolymers. Phys. Procedia 22, 305–311 (2011)

Fundamental Study for Estimating Shear Strength Parameters of Fiber-Cement-Stabilized Soil by Using Paper Debris

Kazumi Ryuo, Tomoaki Satomi, and Hiroshi Takahashi
Graduate School of Environmental Studies, Tohoku University, Sendai, Japan
[email protected], {tomoaki.satomi.c6, hiroshi.takahashi.b3}@tohoku.ac.jp

Abstract. A large amount of high-water-content mud is generated at civil engineering works and disaster sites; this mud is subjected to dehydration treatment and finally disposed of in landfills. As a method of recycling high-water-content mud, the fiber-cement-stabilized soil method has been developed. The failure strength and failure strain of fiber-cement-stabilized soil have been investigated through unconfined compression tests; however, there are few measured data on the shear strength parameters compared to failure strength and failure strain. In this study, to evaluate the shear strength parameters of fiber-cement-stabilized soil, box shear tests and unconfined compression tests were conducted with different additive amounts of cement and paper debris to clarify the effect of the additive conditions on the shear strength parameters of the modified soil. Furthermore, the improvement was performed on muds with different particle size distributions, and the effect of the difference in soil properties on the shear strength parameters was clarified. The results showed that the internal friction angle increased with increasing amount of paper debris, and the cohesion increased with increasing additive amount of cement. In the unconfined compression tests, the failure strength and failure strain tended to increase with the additive amounts of cement and paper debris. Furthermore, it is suggested that the shear strength parameters of fiber-cement-stabilized soil are closely related to the grain size characteristics of the mud.

Keywords: Shear strength parameters · Soil improvement · Poor ground

1 Introduction

Large amounts of mud with high moisture content are generated at construction sites and disaster recovery sites, causing difficulties such as reduced construction efficiency and increased disposal costs. To recycle high-water-content mud, sun drying and mixing with high-quality soil have been used [1]. However, these methods take a long time and are costly. Therefore, the fiber-cement-stabilized soil method has been developed as a new way to recycle mud at the site where it occurs [2]. The method produces high-quality modified soil by adding paper debris and cement to mud with high moisture content. The modified soil has several advantages: high failure strength, high failure strain and high durability under drying and wetting cycles [3, 4]. Studies on the failure strength and failure strain of fiber-cement-stabilized soil have been conducted extensively through unconfined compression tests. However, although the shear strength parameters, i.e., the cohesion c and the internal friction angle φ, are important for evaluating slope stability and the bearing capacity of foundation ground in geotechnical engineering, there are few measured data on the shear strength parameters of fiber-cement-stabilized soil. Therefore, an evaluation of the shear strength parameters of fiber-cement-stabilized soil is required, and it is necessary to investigate the effect of the additive amounts of paper debris and cement on these parameters. If, using accumulated experimental data, the shear strength parameters of fiber-cement-stabilized soil can be estimated from known conditions, such as the additive amounts of paper debris and cement and the physical properties of the mud, construction can be carried out more quickly. The purpose of this study is to clarify the shear strength parameters of fiber-cement-stabilized soil by conducting box shear tests and unconfined compression tests. Direct box shear tests were carried out with different cement and paper debris contents to find the relationship between the shear strength parameters and the additive conditions. Moreover, the relationship between the shear strength parameters and the unconfined compressive strength obtained from unconfined compression tests was evaluated. In addition, with the additive amounts of paper debris and cement held constant, direct box shear tests were carried out using muds with different physical properties, and the effect of the soil properties of the mud on the shear strength was considered.

2 Experimental Outline

In this study, in order to evaluate the shear strength of fiber-cement-stabilized soil, direct box shear tests and unconfined compression tests were performed. The outline of each experiment is given below.

2.1 Direct Box Shear Test

Procedure of Making Specimens
1. Adjust the initial water content by adding water to the simulated mud.
2. Add paper debris and cement and mix for three minutes.
3. Put the mixed modified soil in a container, seal it, and cure at 20 ± 3 °C for three days.
4. Place the modified soil in three layers in a mold of 6 cm diameter and 2 cm height and compact the layers one by one. Compaction was done by dropping a 1.5 kg rammer from a height of 10 cm, applied 6, 10 and 10 times from the lower layer upward.
5. Seal the specimen and cure at 20 ± 3 °C for seven days.

The cement used in this study is Geoset 200, manufactured by Taiheiyo Cement Co., Ltd. of Japan. For the paper debris, the average length of the long side is 13.92 mm, the average length of the short side is 7.57 mm, and the water absorption rate is 900%.


Experimental Method
The shear strength parameters were measured according to JGS 0561 “Direct box shear test”. Figure 1 shows a schematic diagram of the direct box shear test apparatus. In the direct box shear test, the specimen is placed in a shear box divided into upper and lower parts and sheared under an applied normal stress, and the relationship between shear stress and shear displacement is drawn. The maximum shear stress is then obtained for each normal stress, the relationship between maximum shear stress and normal stress is plotted, and finally the shear strength parameters are obtained according to Coulomb’s failure criterion. In this study, since it is assumed that the fiber-cement-stabilized soil will be used as a surface soil material for the travel of heavy machinery, the direct box shear test was conducted under consolidated undrained conditions. In the experiment, the specimen was first consolidated by applying normal stress. Consolidation was continued until the completion time determined by the 3t method, and three normal stresses were applied: 50 kN/m², 100 kN/m² and 150 kN/m². During shearing, the shear velocity was set to 1 mm/min and shearing continued until the shear displacement reached 7 mm. The experiment was performed three times under each condition to account for variation in the results.

Fig. 1. Schematic diagram of direct box shear test apparatus

2.2 Unconfined Compression Test

In general, saturated clay has a relationship between the cohesion under unconsolidated undrained conditions, cu, and the failure strength, qu, namely cu = qu/2 [5]. Therefore, in this study, unconfined compression tests were performed to examine the relationship between the failure strength and the shear strength parameters of the fiber-cement-stabilized soil.

Experimental Method
The failure strength of the specimens was measured according to JGS 0511 “Unconfined compression test”. Figure 2 shows a schematic diagram of the unconfined compression test apparatus. The specimen size and compaction conditions differ from those in the box shear test: the modified soil was placed in four layers in a mold of 5 cm diameter and 10 cm height, and the layers were compacted 5, 10, 10 and 20 times from the lower layer upward, with a rammer fall height of 20 cm. Since the compaction energies per unit volume in the box shear test and the unconfined compression test are the same, there is no difference in the water content or wet density of the specimens used in the two experiments. During the unconfined compression test, the compressive stress is measured at a compressive strain rate of 1%/min, and the maximum compressive stress is defined as the failure strength. Three specimens were tested per condition and the average value was adopted. In this study, the target failure strength was set to 123 kN/m² or more based on previous studies [6].

Fig. 2. Schematic diagram of unconfined compression test apparatus
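The claim that the compaction energies per unit volume are equal in the two tests can be verified from the mold sizes and blow counts given above. The sketch below assumes the same 1.5 kg rammer in both tests (its mass is stated only for the box shear specimens); the function name is ours.

```python
# Compaction energy per unit volume: E = n * m * g * h / V, for a cylindrical
# mold of given diameter and height. The 1.5 kg rammer mass is assumed to
# apply to both tests.

import math

def energy_kj_per_m3(blows: int, mass_kg: float, drop_m: float,
                     dia_m: float, height_m: float) -> float:
    volume = math.pi * (dia_m / 2) ** 2 * height_m
    return blows * mass_kg * 9.81 * drop_m / volume / 1000.0

# Box shear: 6+10+10 blows, 10 cm drop, 6 cm dia x 2 cm mold
print(f"box shear : {energy_kj_per_m3(26, 1.5, 0.10, 0.06, 0.02):.0f} kJ/m^3")
# Unconfined: 5+10+10+20 blows, 20 cm drop, 5 cm dia x 10 cm mold
print(f"unconfined: {energy_kj_per_m3(45, 1.5, 0.20, 0.05, 0.10):.0f} kJ/m^3")
# Both come out near 675 kJ/m^3, consistent with the statement in the text.
```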

3 Effect of the Amount of Paper Debris and Cement on the Shear Strength Parameters of Fiber-Cement-Stabilized Soil

The sample used in this experiment is a simulated mud of clay and silt mixed at a mass ratio of 4:6. Figure 3 shows the particle size distribution curve of the simulated mud. The initial water content of the simulated mud was set to 100% and 150%. To evaluate the effect of the additive amounts of paper debris and cement on the shear strength parameters, experiments were performed with the amount of paper debris varied while the amount of cement was held constant, and with the amount of cement varied while the amount of paper debris was held constant. Table 1 shows the experimental conditions. The additive amounts of cement and paper debris are expressed per unit volume of mud.


Fig. 3. Soil particle size distribution of simulated mud (clay 40% and silt 60%)

Table 1. Mixing conditions
Water content w, % | Cement Ac, kg/m³ | Paper debris Ap, kg/m³
100 | 60 (constant)  | 0–60
100 | 40–90          | 40 (constant)
150 | 100 (constant) | 0–80
150 | 60–140         | 60 (constant)

3.1 Results of Direct Box Shear Test

Figure 4 shows the shear stress–displacement curves obtained in this study. As a representative sample, results are shown for a water content of 100%, a cement content of 60 kg/m³ and a paper debris content of 40 kg/m³, together with the unmodified condition. From the stress–displacement relationships obtained in the direct box shear test, the Coulomb failure criterion line can be drawn as shown in Fig. 5, from which the cohesion c and the internal friction angle φ are calculated. The water content of the unmodified specimen equals the water content obtained by subtracting the amount of water absorbed by the cement and paper debris in the reinforced specimen. From Fig. 5, it was confirmed that both the cohesion and the internal friction angle were increased by applying the fiber-cement-stabilized soil method. Figures 6, 7, 8 and 9 show the relationships between the shear strength parameters and the additive amount of paper debris or cement at different initial water contents. At a water content of 100%, the cohesion was less than 30 kN/m² when no paper debris was added (see Fig. 6). With the addition of paper debris, however, the cohesion stayed roughly constant between 30 kN/m² and 40 kN/m² for almost all additive amounts. On the other hand, the internal friction angle increased gradually with increasing amount of paper debris, probably because the pores between soil particles were filled with the paper debris fibers, improving the friction at the fiber–soil interface.

For the relation with the cement content at the same water content, the cohesion increased significantly with increasing cement content (see Fig. 7). Ettringite generated by the hydration reaction of the cement strengthened the bonds between soil particles, so the shear strength increased. The internal friction angle remained around 40° regardless of the additive amount of cement. Under the condition of 150% water content, the cohesion was constant and the internal friction angle gradually increased as the additive amount of paper debris increased (see Fig. 8). In addition, when the additive amount of cement increased, the cohesion increased and the internal friction angle stayed roughly constant around 40°, as seen in Fig. 9. These tendencies are similar to the results at 100% water content shown in Fig. 6 and Fig. 7. However, since it is not yet clear why the cohesion at a cement content of 80 kg/m³ was smaller than that at 50 kg/m³, additional experiments are required.

Fig. 4. Relationship between shear stress and shear displacement for the reinforced specimen (w = 100%, Ac = 60 kg/m³, Ap = 40 kg/m³) and the unmodified specimen, with three runs at each normal stress σ = 50, 100 and 150 kN/m²

Fig. 5. Coulomb’s failure criterion. Fitted lines: reinforced specimen (w = 100%, Ac = 60 kg/m³, Ap = 40 kg/m³): τ = 0.8156σ + 38.321, R² = 0.9753; unreinforced specimen: τ = 0.4841σ + 15.257, R² = 0.9704
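Extracting c and φ from such data is a linear least-squares fit of τ = σ·tan(φ) + c. In the sketch below, the three stress points are synthetic values generated from the fitted line for the reinforced specimen in Fig. 5, standing in for the unavailable raw measurements; the fit then recovers the same cohesion and friction angle.

```python
# Fitting Coulomb's failure criterion tau = sigma * tan(phi) + c to
# (normal stress, max shear stress) pairs from a direct box shear test.
# The tau values are synthetic, taken from the Fig. 5 fitted line.

import math
import numpy as np

sigma = np.array([50.0, 100.0, 150.0])        # kN/m^2, applied normal stresses
tau = 0.8156 * sigma + 38.321                 # kN/m^2, synthetic peak shear stresses

slope, intercept = np.polyfit(sigma, tau, 1)  # linear least squares
c = intercept                                 # cohesion, kN/m^2
phi = math.degrees(math.atan(slope))          # internal friction angle, deg

print(f"c = {c:.1f} kN/m^2, phi = {phi:.1f} deg")  # -> c = 38.3, phi = 39.2
```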

Fig. 6. Relationship between additive amount of paper debris and shear strength parameters (w = 100%, Ac = 60 kg/m³)

Fig. 7. Relationship between additive amount of cement and shear strength parameters (w = 100%, Ap = 40 kg/m³)

Fig. 8. Relationship between additive amount of paper debris and shear strength parameters (w = 150%, Ac = 100 kg/m³)

Fig. 9. Relationship between additive amount of cement and shear strength parameters (w = 150%, Ap = 60 kg/m³)

3.2 Results of Unconfined Compression Test

Figure 10 shows the stress–strain curves obtained in this study. As a representative sample, results are shown for a water content of 100%, a cement content of 60 kg/m³ and a paper debris content of 40 kg/m³, together with the unreinforced specimen. The maximum compressive stress is the failure strength, and the strain at that point is the failure strain. From Fig. 10, it was confirmed that the failure strength was significantly increased by the fiber-cement-stabilized soil method. Figure 11 and Fig. 12 show the relationship between the failure strength and the additive conditions. At both 100% and 150% water content, the failure strength increased remarkably when the additive amount of cement was increased, and increased gradually when the additive amount of paper debris was increased. It is considered that, with more cement, the hydrate ettringite generated by the hydration reaction suppresses the movement of soil particles, so the failure strength increases significantly. When the additive amount of paper debris is increased, the water contained in the mud is absorbed by the paper debris and the water content of the improved soil approaches the optimum water content; this is considered to improve the compaction characteristics and thus increase the failure strength. From the results of the unconfined compression tests, it was found that the failure strength of the fiber-cement-stabilized soil is most strongly affected by the additive amount of cement.

Fig. 10. Stress–strain curves of the reinforced specimen (w = 100%, Ac = 60 kg/m³, Ap = 40 kg/m³) and the unreinforced specimen

Fig. 11. Relationship between failure strength and additive conditions (w = 100%): failure strength vs. cement content at Ap = 40 kg/m³ and vs. paper debris content at Ac = 60 kg/m³; the target value of 123 kN/m² is indicated

Fig. 12. Relationship between failure strength and additive conditions (w = 150%): failure strength vs. cement content at Ap = 60 kg/m³ and vs. paper debris content at Ac = 100 kg/m³; the target value is indicated

3.3 Relationship Between Shear Strength Parameters and Failure Strength


Figure 13 shows the relationship between the shear strength parameters and the failure strength of the fiber-cement-stabilized soil obtained in this study. The cohesion increased with increasing failure strength; specifically, above a failure strength of around 300 kN/m², the cohesion increased more steeply than it did below 300 kN/m². This suggests that the cohesion of fiber-cement-stabilized soil can be estimated from the failure strength. On the other hand, there was no clear relationship between the internal friction angle and the failure strength: the internal friction angle ranged from 30° to 50° regardless of the failure strength. From these results, it was found that by applying the fiber-cement-stabilized soil method to high-water-content mud, an internal friction angle of 30° or more can be secured.

Fig. 13. Relationship between shear strength parameters and failure strength (fitted curve for cohesion: c = 4×10⁻⁶·qu³ − 0.0025·qu² + 0.5713·qu, R² = 0.874)
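If the curve fitted in Fig. 13 is used as an estimator, cohesion can be predicted from the much simpler unconfined compression test. The sketch below (helper name is ours) evaluates it at a few strengths, including the 123 kN/m² target value defined in Sect. 2.2; it reproduces the steeper rise above roughly 300 kN/m² noted in the text.

```python
# Evaluating the Fig. 13 regression that relates cohesion (kN/m^2) to
# unconfined failure strength qu (kN/m^2):
# c = 4e-6*qu^3 - 0.0025*qu^2 + 0.5713*qu  (R^2 = 0.874)

def cohesion_from_qu(qu_kn_m2: float) -> float:
    return 4e-6 * qu_kn_m2**3 - 0.0025 * qu_kn_m2**2 + 0.5713 * qu_kn_m2

for qu in (123.0, 200.0, 300.0, 400.0):  # 123 kN/m^2 is the target strength
    print(f"qu = {qu:5.0f} kN/m^2 -> c = {cohesion_from_qu(qu):5.1f} kN/m^2")
```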

4 Effect of Soil Properties on the Shear Strength Parameters of Fiber-Cement-Stabilized Soil

4.1 Experimental Conditions

In this section, in order to clarify the effect of the physical properties of mud on the shear strength of fiber-cement-stabilized soil, direct box shear tests were conducted under the same mixing conditions on various soils. The water content was 60%, the cement content was 60 kg/m³, and the paper debris content was 40 kg/m³. Table 2 shows the physical properties of the simulated mud; 20 types of simulated mud, samples A to T, were prepared and used in the experiment.

Table 2. Soil properties of simulated soil

Sample | Density, g/cm³ | D50, mm | Uc, – | Uc′, – | Fc, % | Cohesion, kN/m² | Internal friction angle, deg.
A | 2.54 | 0.022 | 5.76  | 1.07 | 91.8 | 73.7 | 31.9
B | 2.61 | 0.024 | 6.68  | 0.98 | 86.5 | 69.5 | 35.4
C | 2.68 | 0.028 | 8.41  | 0.84 | 74.5 | 66.0 | 31.0
D | 2.73 | 0.032 | 9.49  | 0.79 | 78.0 | 64.4 | 31.9
E | 2.72 | 0.033 | 10.35 | 0.73 | 71.1 | 85.2 | 30.3
F | 2.71 | 0.034 | 12.15 | 0.62 | 64.1 | 62.9 | 32.1
G | 2.73 | 0.040 | 13.66 | 0.72 | 62.3 | 83.3 | 32.2
H | 2.48 | 0.050 | 4.09  | 1.06 | 72.1 | 65.7 | 35.5
I | 2.50 | 0.050 | 5.81  | 1.07 | 65.8 | 78.9 | 38.8
J | 2.60 | 0.050 | 9.13  | 1.11 | 62.8 | 54.5 | 40.6
K | 2.73 | 0.050 | 15.87 | 0.78 | 58.2 | 54.6 | 37.5
L | 2.47 | 0.024 | 5.42  | 1.23 | 91.5 | 65.0 | 36.4
M | 2.46 | 0.034 | 5.41  | 1.23 | 82.9 | 64.9 | 35.1
N | 2.47 | 0.046 | 5.41  | 1.04 | 68.3 | 65.7 | 38.3
O | 2.53 | 0.064 | 5.41  | 1.09 | 56.7 | 60.5 | 37.8
P | 2.59 | 0.094 | 5.41  | 1.22 | 40.4 | 59.1 | 42.7
Q | 2.67 | 0.151 | 5.41  | 0.87 | 27.4 | 36.3 | 45.5
R | 2.69 | 0.248 | 5.41  | 0.78 | 15.7 | 24.0 | 46.2
S | 2.64 | 0.139 | 3.10  | 1.05 | 20.3 | 35.1 | 46.1
T | 2.72 | 0.110 | 13.82 | 1.95 | 37.3 | 45.5 | 43.6

※ Dn is the grain size at which the mass passing percentage equals n%; Uc is the uniformity coefficient (= D60/D10); Uc′ is the coefficient of curvature (= D30²/(D10 × D60)).
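These grain-size indices are straightforward to compute from a grading curve. The distribution below is an assumed example, not one of samples A–T, and log-linear interpolation between sieve points is a common convention and our choice here.

```python
# Computing D50, the uniformity coefficient Uc = D60/D10 and the coefficient
# of curvature Uc' = D30^2 / (D10 * D60) from a grain-size distribution, as
# defined in the footnote of Table 2. The distribution is an assumed example.

import numpy as np

# percent passing (ascending) and the corresponding grain sizes in mm
passing = np.array([5.0, 20.0, 45.0, 70.0, 95.0])
size_mm = np.array([0.002, 0.01, 0.04, 0.1, 0.5])

def d_n(n: float) -> float:
    """Grain size at n% passing, interpolated on log(size)."""
    return float(np.exp(np.interp(n, passing, np.log(size_mm))))

d10, d30, d50, d60 = d_n(10), d_n(30), d_n(50), d_n(60)
print(f"D50 = {d50:.3f} mm")
print(f"Uc  = {d60 / d10:.2f}")
print(f"Uc' = {d30**2 / (d10 * d60):.2f}")
```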

4.2 Experimental Results

In this chapter, the relationship between the shear strength parameters of the fiber-cement-stabilized soil and the physical properties of the soil was considered. Figure 14 shows the relationship between the shear strength parameters and D50: as the grain size increased, the cohesion decreased and the internal friction angle increased. Figure 15 shows the relationship between the shear strength parameters and the fine fraction content: as the fine fraction content increased, the cohesion increased and the internal friction angle gradually decreased. From these two results, it was confirmed that the shear strength parameters of the fiber-cement-stabilized soil are closely related to the grain size characteristics. Figure 16 shows the relationship between the shear strength parameters and the failure strength: as the failure strength increased, the cohesion increased, although the correlation was not as clear as in Sect. 3; as in Sect. 3, there was no clear relationship between the internal friction angle and the failure strength. From the results of this chapter, it was found that the shear strength parameters of fiber-cement-stabilized soil have a strong correlation with D50. However, because the results in this chapter were obtained under a single mixing condition, it will be necessary to conduct experiments with different additive amounts of cement and paper debris to build a more versatile estimation formula.

Fig. 14. Relationship between shear strength parameters and D50 (fitted curves: cohesion c = 81.862·e^(−5.105·D50), R² = 0.8564; internal friction angle φ = 6.976·ln(D50) + 58.092, R² = 0.8023)

Fig. 15. Relationship between shear strength parameters and Fc (fitted curves: cohesion c = 5.9034·Fc^0.5694, R² = 0.8114; internal friction angle φ = −8.849·ln(Fc) + 73.046, R² = 0.7042)

Fig. 16. Relationship between shear strength parameters and failure strength (fitted line for cohesion: c = 0.2483·qu, R² = 0.4329)
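The regression equations read off Figs. 14 and 15 can be evaluated directly; treat them as a sketch of an estimation formula, valid only for this mixing condition (w = 60%, cement 60 kg/m³, paper debris 40 kg/m³), as the text itself cautions.

```python
# Evaluating the estimation curves fitted in Figs. 14 and 15 for the shear
# strength parameters of fiber-cement-stabilized soil. The equations are the
# figure regressions; they are not general-purpose formulas.

import math

def from_d50(d50_mm: float):
    c = 81.862 * math.exp(-5.105 * d50_mm)        # cohesion, kN/m^2 (R^2 = 0.856)
    phi = 6.976 * math.log(d50_mm) + 58.092       # friction angle, deg (R^2 = 0.802)
    return round(c, 1), round(phi, 1)

def from_fc(fc_percent: float):
    c = 5.9034 * fc_percent ** 0.5694             # cohesion, kN/m^2 (R^2 = 0.811)
    phi = -8.849 * math.log(fc_percent) + 73.046  # friction angle, deg (R^2 = 0.704)
    return round(c, 1), round(phi, 1)

print(from_d50(0.05))  # ~ (63.4, 37.2), close to samples H-K with D50 = 0.05 mm
print(from_fc(65.0))   # ~ (63.6, 36.1)
```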

5 Conclusions

In this study, in order to clarify the shear strength characteristics of fiber-cement-stabilized soil, direct box shear tests and unconfined compression tests were conducted with different additive amounts of paper debris and cement and with muds of different physical properties. The results are summarized as follows:
1. The cohesion increased with increasing additive amount of cement, and the internal friction angle increased with increasing additive amount of paper debris.
2. There was no clear relationship between the internal friction angle and the failure strength; the internal friction angle remained between 30° and 50° regardless of the failure strength. On the other hand, the cohesion increased with increasing failure strength, suggesting that the cohesion of fiber-cement-stabilized soil can be estimated from the failure strength.
3. As the grain size increased, the cohesion decreased and the internal friction angle increased.

References
1. Public Works Research Institute: Construction Generated Soil Utilization Technology Manual. Maruzen Publishing Co., Ltd. (2013)
2. Mori, M., Takahashi, H., Ousaka, A., Horii, K., Kataoka, I., Ishii, T., Kotani, K.: A proposal of new recycling system of high-water content mud by using paper debris and polymer and strength property of recycled soils. J. MMIJ 119, 155–160 (2003)
3. Mori, M., Takahashi, H., Kumamura, K.: An experimental study on the durability of fiber-cement-stabilized mud by repeated cycle test of drying and wetting. J. Min. Mater. Process. Inst. Jpn. 121, 37–43 (2005)


4. Takahashi, H., Takahashi, K., Mori, M.: Experimental study on dynamic strength of fiber-cement-stabilized soil. In: Proceedings of the 4th Symposium on Sediment Disasters, Kumamoto, pp. 1–5 (2008)
5. Tokita, K., Oda, K., Sano, I., Shibuya, S., Niiro, T.: Soil Mechanics. Ricoh Tosho Co., Ltd. (2010)
6. Takahashi, H.: Trial and actual construction on creation of artificial ground by recycling tsunami sludge. J. Jpn. Inst. Energy 94, 396–402 (2015)

Study on the Compressive and Tensile Strength Behaviors of Corn Husk Fiber-Cement Stabilized Soil

Nga Thanh Duong, Tomoaki Satomi, and Hiroshi Takahashi
Graduate School of Environmental Studies, Tohoku University, Sendai, Japan
[email protected], {tomoaki.satomi.c6, hiroshi.takahashi.b3}@tohoku.ac.jp

Abstract. Soft soil from construction sites and disaster areas has low strength and is treated as solid waste; it is therefore desirable to recycle it for reuse in construction. This research studies the behavior of modified soil produced using fiber and cement and carries out experiments to investigate the influence of fiber on cemented soil. The laboratory experiments comprise the splitting tension test and the unconfined compression test. Corn husk fibers produced from agricultural crop residue are used to reinforce the cemented soil. The corn husk fiber content is 0%, 0.25%, 0.5%, and 1%, and the cement content is 1% and 2%. The results show that fiber inclusion improves both the failure strength from the unconfined compression test and the failure tensile strength from the splitting tension test; the highest increases of failure strength and failure tensile strength of cemented specimens are 35.3% and 72.6%, respectively, at a fiber content of 1%. Fiber inclusion changes the behavior of the specimens from brittle to ductile. Moreover, corn husk fiber contributes to reducing the loss of post-peak stress: the more fiber is added, the smaller this loss becomes.

Keywords: Unconfined compression test · Splitting tension test · Corn husk · Failure strength · Soil improvement



1 Introduction

Nowadays, soil with high or low water content excavated from construction sites and disaster areas cannot be reused directly in practice because of its low strength under loading. It is therefore considered a kind of waste and discharged to final or unidentified places without environmental management. However, recycling this soil for other purposes (e.g. embankments, landfill) can contribute to reducing the discharged soil volume and to producing environmentally friendly materials. The Takahashi laboratory has developed the "Fiber-Cement Stabilized Soil" method to recycle excavated soil with high water content. Cement and paper debris are the additives that play the important roles in the soil improvement. Paper debris from old newspapers contributes to absorbing superficial water, reducing the free water content of the soil (Fig. 1). Cement is a binder to improve the strength of the modified


soil. Because of the addition of paper debris and cement, soil with high water content loses water content and gains strength. The modified soil therefore shows good performance in strength and durability, such as high durability under drying and wetting cycles, high failure strength, and high strain [1, 2]. In Japan, many projects have applied the Fiber-Cement Stabilized Soil method owing to these advantages [3]. In geotechnical engineering, excavated soil with low water content (soft soil) is also reused after reinforcement with fiber and cement or other binders (e.g. lime, rice husk ash, pond ash) [4–6]. Soil reinforced with cement shows not only high failure strength but also high brittleness; the addition of fiber therefore decreases the brittleness and increases the ductility of the modified soil. Besides, fiber inclusion also enhances other properties of cemented soil, such as energy absorption, loss of post-peak stress, failure strength and strain, tensile strength, and durability. Currently, from an environmental viewpoint, many natural fibers such as coir and jute have been used to reinforce soft soil [7, 8]. Many crop residues from agriculture (e.g. rice straw, cornsilk, rice husk) are also utilized in this method, both to reduce the solid waste from discharged soil and to reduce the emissions from crop residue burning, which is a common way of treating them [9–11].

Fig. 1. High water content soil with the addition of cement and paper debris [12]: 1) Sludge with high water content; 2) Paper debris inclusion; 3) Cement inclusion. (Schematic legend: soil particles, water, paper debris, and ettringite formed during hydration of cement.)

In this research, soft soil is recycled using fiber and cement. The fiber used is corn husk fiber produced from the husk of maize. Laboratory experiments are conducted to study the potential of corn husk fiber in the enhancement of soft soil. The changes in the mechanical properties of the cemented soil due to the fiber are evaluated by means of the splitting tension test and the unconfined compression test.


2 Materials and Testing Programs

2.1 Materials

The dry soil used for all tests is an artificial soil whose main component is silt; it consists of 8.1% sand, 8.5% clay, and 83.4% silt. Figure 2 and Table 1 show the particle size distribution curve and the properties of the dry soil [13].

[Fig. 2 plots percentage passing (%) against particle size (1–1000 μm).]

Fig. 2. The particle size distribution curve of dry soil

Table 1. Dry soil properties

Property                        Value
Specific gravity [-]            2.47
Silt [%]                        83.4
Clay [%]                        8.5
Sand [%]                        8.1
Optimum moisture content [%]    30.8
Plastic limit [%]               29.4
Liquid limit [%]                46.1
Chemical compound:
  SiO2 [%]                      72.58
  Al2O3 [%]                     17.28
  K2O [%]                       2.62
  Fe2O3 [%]                     4.11
  Na2O [%]                      1.67
  CaO [%]                       1.30
  MgO [%]                       0.60
  MnO [%]                       0.05


Cement is Geoset 200 cement from Taiheiyo Company, Japan; Geoset 200 is mainly applied to soil-improvement problems. Corn husk fiber is produced from corn husk, a crop residue of maize. From raw dry corn husk, 10 mm segments are cut and milled with water in a blender until corn husk fiber is obtained; the wet fiber is then dried to constant mass (Fig. 3). Corn husk fiber consists of 43% cellulose, 31% hemicellulose, 22% lignin, and 1.9% ash [14]. The fiber measures 0.32 mm in average diameter and 9 mm in average length, and the tensile strength of corn husk is 52 MPa. To study the properties of corn husk-cement stabilized soil, the mixing conditions vary the cement and fiber contents: cement contents are 1% and 2% of the dry soil mass, and for each cement content the added fiber contents are 0%, 0.25%, 0.5%, and 1% of the dry soil mass. To produce the mixture, dry soil and cement are first mixed by hand; fiber is then added and the mixture of dry soil, cement, and fiber is mixed by hand until homogeneous. Next, after adding water at the optimum moisture content of the dry soil, the mixture is mixed by a mixing machine until a homogeneous mixture of dry soil, cement, fiber, and water is obtained. This mixture is compacted to make specimens: it is poured into a mold in four layers, the layers being compacted 5, 10, 10, and 20 times, respectively, for a total compaction energy of 674.5 kJ/m³. The mold has an inner diameter of 50 mm and a height of 100 mm. After compaction, all specimens are wrapped and cured at 20 ± 3 °C for 7 days. Because of the cement addition, the strength of the mixture would continue to develop for at least 28 days; in this study, however, the specimens are subjected to the splitting tension test and unconfined compression test after 7 days. Three specimens are tested for each mixing condition, and the difference between each value and the average should be within 10% to confirm the accuracy of the data.
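For illustration, the batch masses for one mixing condition follow directly from these percentages. A sketch, assuming a hypothetical 1000 g dry-soil batch and water dosed on the dry soil mass:

```python
# Batch masses for one mixing condition (all additives dosed by dry soil mass;
# water at the optimum moisture content of the dry soil, 30.8%).
def batch_masses(dry_soil_g=1000.0, cement_pct=2.0, fiber_pct=0.5, omc_pct=30.8):
    return {
        "dry soil (g)": dry_soil_g,
        "cement (g)":   dry_soil_g * cement_pct / 100.0,
        "fiber (g)":    dry_soil_g * fiber_pct / 100.0,
        "water (g)":    dry_soil_g * omc_pct / 100.0,
    }

print(batch_masses())  # {'dry soil (g)': 1000.0, 'cement (g)': 20.0, 'fiber (g)': 5.0, 'water (g)': 308.0}
```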

Fig. 3. Corn husk fiber

2.2 Testing Program

The unconfined compression test is carried out using a compressive testing machine (Fig. 4), according to ASTM D1633 [15]. Stress and strain values are recorded by a computer.


Fig. 4. Compressive testing machine

The splitting tension test is carried out using a machine modified from the compressive testing machine: a pair of loading strips, 120 mm long, 5 mm thick, and 10 mm wide, is added. The splitting tension test is conducted according to ASTM C496 [16]. Tensile strength is calculated by Eq. (1):

T = 2N / (π l d)   (1)

where T is the tensile strength, N is the tension load, d is the diameter of the specimen, and l is the length of the specimen.
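Equation (1) can be evaluated directly; in the sketch below the specimen dimensions are those used in this study, while the peak tension load is a hypothetical value for illustration.

```python
import math

def splitting_tensile_strength(load_n: float, length_m: float, diameter_m: float) -> float:
    """Tensile strength T = 2N / (pi * l * d) from the splitting tension test, Eq. (1)."""
    return (2.0 * load_n) / (math.pi * length_m * diameter_m)

# 50 mm diameter, 100 mm long specimen; 500 N peak load is hypothetical.
T = splitting_tensile_strength(load_n=500.0, length_m=0.100, diameter_m=0.050)
print(f"T = {T / 1000:.1f} kPa")  # ~63.7 kPa
```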

3 Results and Discussions

3.1 Unconfined Compression Test

Stress-strain Behavior. Figure 5 shows the development of the compressive stress-strain behavior of specimens reinforced by corn husk fiber. The compressive stress-strain curves of specimens with and without fiber are clearly different. Without fiber inclusion, the compressive stress decreases very quickly after the peak point. With fiber inclusion, the slope of the curve after the peak is reduced compared with that of the plain cemented soil, and this slope declines gradually as the fiber content increases. It can therefore be concluded that fiber reinforcement contributes to reducing the slope of the stress-strain curve in the post-peak phase.

[Fig. 5 shows two panels (1% cement and 2% cement) plotting compressive stress (kPa, 0–600) against compressive strain (%, 0–10) for 0%, 0.25%, 0.5%, and 1% fiber.]

Fig. 5. Compressive stress-strain behavior of specimens with and without the addition of fiber

Failure Strength. Figure 6 shows the relationship between fiber content and failure strength for the two cement contents. Failure strength increases as the added fiber increases from 0 to 1%. At 1% cement content, fiber contents of 0.25% and 0.5% increase the failure strength, while further fiber inclusion (1%) changes it only slightly; the largest increase in failure strength, 27%, occurs at a fiber content of 0.5%, meaning that the optimum fiber content for 1% cement is 0.5%. At 2% cement content, the failure strength of cemented soil increases by 18.7%, 25.3%, and 35.3% when adding 0.25%, 0.5%, and 1% fiber, respectively. The increase in failure strength with fiber addition agrees with many previous studies [17, 18] and can be explained by the fact that fibers interlock the soil-cement particles, and bonding strength develops between the fibers and the soil-cement particles.

[Fig. 6 plots failure strength (kPa, 0–600) against fiber content (%, 0–1) for 1% and 2% cement.]

Fig. 6. Relationship between fiber content and failure strength

Failure Strain. The influence of fiber content on the development of failure strain is shown in Fig. 7. Failure strain increases as the fiber content rises from 0 to 1%; at the same cement content, the failure strain is highest with 1% fiber. This increase indicates that fiber inclusion changes the behavior of the cemented soil from brittle to ductile, and the ductility of the sample increases with the fiber content.

[Fig. 7 plots failure strain (%, 0–7) against fiber content (%, 0–1) for 1% and 2% cement.]

Fig. 7. Relationship between fiber content and failure strain

3.2 Splitting Tension Test

[Fig. 8 shows two panels (1% cement and 2% cement) plotting tensile stress (kPa, 0–70) against diameter deformation (%, 0–10) for 0%, 0.25%, 0.5%, and 1% fiber.]

Fig. 8. Tensile stress-strain behavior of specimens with and without the addition of fiber

Tensile Stress-strain Behavior. Figure 8 shows the tensile stress-strain behavior of the mixture with and without fiber inclusion. The tensile stress-strain curves differ in shape between samples with and without fiber. Without fiber, the curves drop sharply after the failure tensile strength. With fiber inclusion, two peak points appear as the tensile stress develops from the beginning to complete failure: the tensile stress increases from the origin to the first tensile strength, decreases slightly after the first peak, and then increases again to the second failure tensile strength. The figure also shows that the use of fiber mitigates the loss of post-peak tensile stress of the cemented specimens; the presence of fiber reduces this loss because, after the first tensile strength, the fiber plays the main role in bearing the load and limits the development of cracks in the specimens.

[Fig. 9 shows two panels (1% cement and 2% cement) plotting first and second tensile strength (kPa, 0–70) against fiber content (%, 0–1).]

Fig. 9. Relationship between fiber content and strength

Failure Tensile Strength. As mentioned in the previous section, mixtures with fiber inclusion exhibit two tensile strength points, the first and the second tensile strength. The relationship between fiber content and tensile strength is shown in Fig. 9. Compared with the tensile strength of cemented samples, the first tensile strength of fiber-cement stabilized soil changes only slightly (except for the mixing condition with 2% cement and 1% fiber), indicating that the influence of fiber on the first tensile strength is low. However, the second tensile strength of samples reinforced by fiber is higher than the tensile strength of samples without fiber inclusion. At 1% cement content, the second tensile strength of specimens with 0.5% and 1% fiber increases by 27.4% and 63.7%, respectively. At 2% cement content, the second tensile strength increases by 16.8%, 34.7%, and 72.6% when adding 0.25%, 0.5%, and 1% fiber, respectively. The second tensile strength is therefore highest at 1% fiber content, and it can be concluded that fiber inclusion leads to a significant increase in the second tensile strength.

Crack Pattern. Figure 10 shows the crack behavior of specimens in the splitting tension tests. Fiber inclusion changes the crack pattern: without fiber, the crack is clear and wide, whereas with fiber the cracks are smaller and follow a zigzag form. As the fiber content increases, the crack width reduces and the crack pattern becomes less distinct. The reason is that the fibers reduce the extension capacity of the cracks, delaying the process of destroying the samples.

Fig. 10. Crack pattern under loading: a) Fiber 0%; b) Fiber 0.25%; c) Fiber 0.5%; d) Fiber 1%

4 Conclusions

This research carries out splitting tension tests and unconfined compression tests to study the development of the mechanical properties of corn husk-cement stabilized soil. The results are briefly summarized as follows:

1. Fiber inclusion reduces the loss of post-peak stress of the compressive and tensile stress-strain curves. The more fiber is added, the more the loss of post-peak stress is reduced.
2. Corn husk fiber improves the failure strength, which increases with increasing fiber content.
3. The addition of fiber results in an increase of failure strain. The specimen behavior changes from brittle to ductile and the complete failure of the samples is delayed.
4. Fiber inclusion leads to the appearance of a second tensile strength. The impact of fiber on the first tensile strength is slight, while the second tensile strength improves significantly compared with the tensile strength of cemented soil; the inclusion of 1% fiber shows the highest second tensile strength.
5. In the splitting tension test, the crack pattern under loading differs between specimens with and without fiber inclusion. The presence of fiber results in smaller cracks and a zigzag crack form.

Corn husk is a new material in soil stabilization, so this study focused on the effect of the fiber on the behavior of cemented soil. These are first results to evaluate the workability of the fiber; the optimum fiber and cement contents were not determined, and further studies are necessary to find these parameters.

References

1. Satomi, T., Kuribara, H., Takahashi, H.: Evaluation of failure strength property and permeability of fiber-cement-stabilized soil made of tsunami sludge. J. JSEM 14, 303–308 (2014). https://doi.org/10.11395/jjsem.14.s303
2. Mori, M., Takahashi, H., Kumakura, K.: An experimental study on strength of fiber-cement-stabilized mud by use of paper sludge and durability for drying and wetting tests. Shigen-to-Sozai 122, 353–361 (2006). https://doi.org/10.2473/shigentosozai.122.353
3. Takahashi, H.: Waste materials in construction: sludge and recycling. In: Topical Themes in Energy and Resources, pp. 177–194. Springer, Japan (2015). https://doi.org/10.1007/978-4-431-55309-0_10
4. Kumar, A., Gupta, D.: Behavior of cement-stabilized fiber-reinforced pond ash, rice husk ash-soil mixtures. Geotext. Geomembranes 44, 466–474 (2016). https://doi.org/10.1016/j.geotexmem.2015.07.010
5. Anggraini, V., Asadi, A., Huat, B.B.K., Nahazanan, H.: Effects of coir fibers on tensile and compressive strength of lime treated soft soil. Meas. J. Int. Meas. Confed. 59, 372–381 (2015). https://doi.org/10.1016/j.measurement.2014.09.059
6. Basha, E.A., Hashim, R., Mahmud, H.B., Muntohar, A.S.: Stabilization of residual soil with rice husk ash and cement. Constr. Build. Mater. 19, 448–453 (2005). https://doi.org/10.1016/j.conbuildmat.2004.08.001
7. Bordoloi, S., Kashyap, V., Garg, A., Sreedeep, S., Wei, L., Andriyas, S.: Measurement of mechanical characteristics of fiber from a novel invasive weed: a comprehensive comparison with fibers from agricultural crops. Meas. J. Int. Meas. Confed. 113, 62–70 (2018). https://doi.org/10.1016/j.measurement.2017.08.044
8. Sivakumar Babu, G.L., Vasudevan, A.K.: Strength and stiffness response of coir fiber-reinforced tropical soil. J. Mater. Civ. Eng. 20, 571–577 (2008). https://doi.org/10.1061/(ASCE)0899-1561(2008)20:9(571)


9. Duong, T.N., Satomi, T., Takahashi, H.: Mechanical behavior comparison of cemented sludge reinforced by waste material and several crop residues. Adv. Exp. Mech. 4, 186–191 (2019)
10. Vatani Oskouei, A., Afzali, M., Madadipour, M.: Experimental investigation on mud bricks reinforced with natural additives under compressive and tensile tests. Constr. Build. Mater. 142, 137–147 (2017). https://doi.org/10.1016/j.conbuildmat.2017.03.065
11. Tran, K.Q., Satomi, T., Takahashi, H.: Tensile behaviors of natural fiber and cement reinforced soil subjected to direct tensile test. J. Build. Eng. 24, 100748 (2019). https://doi.org/10.1016/j.jobe.2019.100748
12. Mori, M., Takahashi, H., Kumakura, K.: An experimental study on the durability of fiber-cement-stabilized mud by repeated cycle test of drying and wetting. J. Min. Mater. Process. Inst. Jpn. 121, 37–43 (2005). https://doi.org/10.2473/shigentosozai.121.37
13. Tran, K.Q., Satomi, T., Takahashi, H.: Effect of waste cornsilk fiber reinforcement on mechanical properties of soft soils. Transp. Geotech. 16, 76–84 (2018). https://doi.org/10.1016/j.trgeo.2018.07.003
14. Youssef, A.M., El-Gendy, A., Kamel, S.: Evaluation of corn husk fibers reinforced recycled low density polyethylene composites. Mater. Chem. Phys. 152, 26–33 (2015). https://doi.org/10.1016/j.matchemphys.2014.12.004
15. ASTM International: ASTM D1633-00(2007): Standard Test Methods for Compressive Strength of Molded Soil-Cement Cylinders. West Conshohocken, PA, USA (2007)
16. ASTM International: ASTM C496/C496M-17: Standard Test Method for Splitting Tensile Strength of Cylindrical Concrete Specimens. West Conshohocken, PA, USA (2017). https://doi.org/10.1520/C0496_C0496M-17
17. Sharma, V., Vinayak, H.K., Marwaha, B.M.: Enhancing compressive strength of soil using natural fibers. Constr. Build. Mater. 93, 943–949 (2015). https://doi.org/10.1016/j.conbuildmat.2015.05.065
18. Tran, K.Q., Satomi, T., Takahashi, H.: Improvement of mechanical behavior of cemented soil reinforced with waste cornsilk fibers. Constr. Build. Mater. 178, 204–210 (2018). https://doi.org/10.1016/j.conbuildmat.2018.05.104

Experimental and Numerical Investigation on Bearing Behavior of TRC Slabs Under Distributed or Concentrated Loads

Tran The Truyen and Tu Sy Quan
University of Transport and Communications, Hanoi, Vietnam
[email protected]

Abstract. Textile-Reinforced Concrete (TRC) is a composite material combining a woven fiber grid with fine-grained cement concrete. The fiber grid, made from carbon, aramid, or glass, is not corroded by the environment. The woven mesh has a much larger surface area than traditional reinforcing bars, so the textile-reinforced concrete develops greater adhesion, which is very convenient for thin structures with high architectural requirements such as roofs, floors, and shells. In most cases, the woven fiber grid is stretched either by direct traction or by bending; a key problem is therefore the bearing behavior of thin slabs under different kinds of loads and boundary conditions. Based on existing studies in the literature, experimental models have been developed that allow the effect of the textile density on the bending capacity of TRC slabs under distributed or concentrated loads to be evaluated. The failure mode of the TRC slabs is almost identical to the simulation results given by Abaqus software, which supports several recommendations for design work.

Keywords: Textile-Reinforced Concrete · Textile density · Flexural behavior · Failure mode · Finite element analysis

1 Introduction

Depending on the kind of textile used to produce TRC, the fine-grained concrete mixtures may include many components, such as cement, fine aggregate, filler, fly ash, and superplasticizer, chosen to give the desired performance [1]. TRC has gradually been applied through many test programs and in-situ projects that demonstrate its outstanding structural advantages. The first TRC bridge in the world was built in 2006; in the same year, a second bridge, 17 m long and weighing 12.5 tons, for pedestrians and bicycles, was inaugurated in Kempten (Allgäu, Germany) [2]. Later, in civil engineering, TRC was used to make prefabricated floors, such as the buildings at the Institute of Materials on the TU Dresden campus. Fibrous concrete slabs have also been combined with insulating materials or lightweight concrete to make sandwich elements in studies at RWTH Aachen and TU Dresden [3]. Regarding the application of TRC in tunnel linings, the European project ACCIDENT (Advanced Cementitious Composite In DEsign and coNstruction of safe Tunnel) designed a multi-layered component made of steel fiber reinforced concrete (SFRC)


and a layer of high-performance fiber-reinforced concrete (HPFRC). The TRC is used to envelop these parts on the outside, acting as a protective layer to improve the load-bearing capacity and to resist other external actions [4]. In most structural-member concepts, the filament textile is stretched by direct tensile force or by bending moment. A study of TRC slabs is therefore necessary to clarify their bending behavior under load (Fig. 1).

Fig. 1. a) Rottachsteg Bridge in Kempten, Germany; b) A TRC application on the campus of RWTH Aachen University, Germany; c) The inner lining of a tunnel made with TRC.

In this perspective, Andrzej et al. (2009) [5] examined the load bearing capacity and stiffness of twelve concrete slabs, 1200 × 1000 mm in plan and only 40 mm deep. Different kinds of textile fabrics were used, such as alkali-resistant glass (AR-glass) fiber, poly-vinyl-alcohol with PVC coating, and a hybrid of AR-glass and carbon fiber. In each case, the fabrics were pre-stretched with a low force of about 0.5 kN/m and released immediately after casting. The experimental set-up is described in Fig. 2a. The study showed that the load bearing capacity of the specimens reinforced by AR-glass fabrics, presented in Fig. 2b, is relatively similar to that of slabs of the same dimensions reinforced by steel. Moreover, their post-cracking load slightly increased, unlike the specimens reinforced by carbon and PVA fabrics, which softened and behaved much more ductilely.

Fig. 2. a) Three-point bending test set-up of Andrzej et al. (2009); b) Load-displacement relationship in the case of AR-glass.


Williams Portal et al. (2014) [6] performed four-point bending tests on specimens of 1000 × 200 × 50 mm, reinforced by a single carbon fiber layer placed 7.5 mm from the upper face of the slab, as displayed in Fig. 3a. The global behavior of the specimens, schematized in Fig. 3b, may be classified into four stages, the first three of which are similar to those of uniaxial tensile tests on narrow TRC plates performed by Hegger et al. (2004) [7]. In the first stage, the material is uncracked, the reinforcement contribution is negligible, and the linear slope essentially represents the elastic modulus of the concrete. The transition to the second stage is marked by the appearance and development of the first cracks in the concrete matrix; in this stage, the load transfers from the cement matrix to the textile filaments and is progressively redistributed. This process creates a multi-cracking pattern along the specimen, with an oscillating, slightly increasing load. After that, the concrete contribution becomes negligible and only the textile filaments resist the load; in this third stage, the cracks keep widening, following the elastic modulus of the textile, until ultimate collapse is reached. The failure in the fourth, final stage may show plastic behavior depending on many factors, such as the fabric geometry, the reinforcement ratio, and the bond strength.

Fig. 3. a) Four-point bending test set-up of Williams Portal et al. (2014); b) Load-displacement relationship in the case of AR-glass.

Further experimental investigations on flexural strengthening have given deeper insight into the specific behavior of TRC one-way slabs, such as Volkova et al. (2016) [8] and Park et al. (2017) [9], but tests on two-way slabs under different kinds of loads have not yet been reported.

2 Materials

2.1 Design Components for Fine Cement Concrete

The materials used in this study included cement, fly ash, Quang Binh fine sand with particle sizes below 2.5 mm, Song Lo coarse sand with particle sizes below 5.0 mm, and a superplasticizer admixture. The composition of the fine-grained


concrete is determined by the theory of "absolute density". The Funk & Dinger composition is based on the following grading theory [10]:

P(D) = (D^q − D_min^q) / (D_max^q − D_min^q)   (1)

where P(D) is the total passing at sieve size D (mm), and D_min and D_max are the smallest and largest particle sizes in the grain mixture. In this study, the Funk & Dinger theory is applied with q = 0.25, D_max = 4.75 mm, and D_min = 0.075 mm. A ratio of Quang Binh fine sand to Song Lo coarse sand of 30/70 turned out to be the most suitable for the mixture when compared with the theoretical curve, so a fine sand/coarse sand ratio of 30/70 was selected for calculating the fine-grained concrete aggregate component. The aggregate gradation of the material mixture is described in Fig. 4.
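Equation (1) can be evaluated per sieve to build the target grading curve; a minimal sketch follows, where the sieve series chosen below is an assumption for illustration:

```python
import numpy as np

def funk_dinger_passing(d_mm, q=0.25, d_min=0.075, d_max=4.75):
    """Target cumulative passing P(D) = (D^q - Dmin^q) / (Dmax^q - Dmin^q), Eq. (1)."""
    d = np.asarray(d_mm, dtype=float)
    return (d**q - d_min**q) / (d_max**q - d_min**q)

# Target passing at a few (assumed) sieve sizes in mm:
sieves = np.array([0.075, 0.15, 0.3, 0.6, 1.18, 2.36, 4.75])
for s, p in zip(sieves, funk_dinger_passing(sieves)):
    print(f"{s:5.3f} mm -> {100 * p:5.1f} %")
```

A candidate sand blend (e.g. the 30/70 fine/coarse ratio above) can then be judged by how closely its measured cumulative passing tracks this curve.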

Fig. 4. Aggregate gradation of the mixture of Quang Binh sand and Song Lo sand

The water-binder ratio is determined by the required strength and the content of fly ash replacing cement; the water content is then calculated from this ratio to achieve the required concrete strength. The mix proportions of the fine cement concrete are shown in Table 1.

Table 1. Design components for one cubic meter of concrete

Component                                      Unit  Content
PC40 But Son fine cement                       kg    404
Water                                          kg    228
Quang Binh fine sand                           kg    468
Song Lo coarse sand                            kg    1092
Fly ash (Uong Bi thermoelectric plant)         kg    101
Superplasticizer admixture MasterGlenium ACE   kg    8
Water/binder ratio                             %     38
Slump flow                                     cm    23–25

2.2 Properties of Textile

The fabric used in the experiment was alkali-resistant glass fiber Fiberglass 100/100. The mechanical properties of Fiberglass 100/100, as provided by the manufacturer, are presented in Table 2.

Table 2. Physical properties of product Fiberglass 100/100.
Index properties: Ultimate tensile strength (kN/m); Tensile elongation (%); Tensile strength at 2% strain (kN/m); Mesh dimension (mm); Cross section area (mm²); Melting point (°C); Damage during installation (%). Values (truncated in the source): 100, 300, …

[…]

⟨u, Ψ_j⟩ = Σ_{i=0} α_i ⟨Ψ_i, Ψ_j⟩   (17)

Because the Ψ_j are mutually orthogonal, Eq. (17) becomes

α_i = ⟨u, Ψ_i⟩ / ⟨Ψ_i, Ψ_i⟩   (18)

Theoretically, all coefficients can be obtained by solving Eq. (18); however, the random response u is unknown. Moreover, each inner product involves a multidimensional integral evaluated numerically using either probabilistic techniques (sampling) or deterministic techniques (quadrature rules, sparse-grid approaches). This paper uses Gauss-Hermite quadrature to solve for {α_i}. The normalization factor ⟨Ψ_i, Ψ_i⟩ can be estimated analytically. For a 1-D integral:

α_i ∝ ⟨u, ψ_i⟩ = ∫ u(q) ψ_i(q) ρ_Q(q) dq = Σ_{j=1}^{Ngp} w_j u(q_j) ψ_i(q_j)   (19)

where Ngp is the number of quadrature points, and w_j and q_j are the weights and quadrature points, respectively. In the d-dimensional case, it can be expressed using simple tensor products of the "probabilistic" Gauss-Hermite rule as follows:

α_i ∝ ⟨u, Ψ_i⟩ = Σ_{j1=1}^{Ngp^1} ··· Σ_{jd=1}^{Ngp^d} (w^1_{j1} ⊗ ··· ⊗ w^d_{jd}) u(q^1_{j1}, …, q^d_{jd}) Ψ_i(q^1_{j1}, …, q^d_{jd})   (20)

For polynomial order p, the minimum number of Gauss points for each random variable is Ngp = (2p+1)/2, and the total number of terms needed to estimate the coefficients is Ngp^d. Hence, if the model response u(q^1_{j1}, …, q^d_{jd}) is obtained from a finite element solution, Ngp^d deterministic problems have to be solved. Clearly, this method is quite expensive for multidimensional and higher-order problems.
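As a concrete 1-D illustration of Eqs. (18)-(19), the sketch below projects a model output onto probabilists' Hermite polynomials with Gauss-Hermite quadrature; the polynomial toy model standing in for the FEM frequency solver is hypothetical.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pce_coeffs_1d(model, p=4, n_gp=None):
    """1-D PCE coefficients via probabilists' Gauss-Hermite quadrature (Eqs. 18-19).
    Basis He_i(q), with normalization <He_i, He_i> = i! under the standard normal pdf."""
    n_gp = n_gp or p + 1                       # enough nodes for a polynomial model of degree <= p
    q, w = He.hermegauss(n_gp)                 # nodes/weights for weight exp(-q^2/2)
    w = w / math.sqrt(2.0 * math.pi)           # rescale so the weights integrate the normal pdf
    u = np.array([model(qj) for qj in q])
    alphas = []
    for i in range(p + 1):
        psi_i = He.hermeval(q, [0] * i + [1])  # He_i evaluated at the nodes
        alphas.append(np.sum(w * u * psi_i) / math.factorial(i))
    return np.array(alphas)

# Hypothetical scalar model of one standard-normal input:
alphas = pce_coeffs_1d(lambda q: 15.0 + 0.8 * q + 0.05 * q**2)
print(alphas.round(4))  # [15.05, 0.8, 0.05, 0, 0]; mean = alpha_0, Var = sum_{i>=1} alpha_i^2 * i!
```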


Linear Regression Approach. Let {q^1, …, q^Ns} be a set of Ns (Ns > N) realizations of the input vector, and U = {u^1, …, u^Ns} the corresponding random output evaluations u^i = u(x, t, q^i), i = 1, …, Ns. The vector of residuals can be estimated from Eq. (16) in the compact form:

Υ = U − α^T Ψ   (21)

where Ψ is the matrix whose elements are given by Ψ_ij = Ψ_j(q^i), i = 1, …, Ns; j = 1, …, N. The coefficients α are estimated by minimizing the L2-norm of the residual (least-squares regression):

α = arg min ‖U − α^T Ψ‖²₂   (22)

Solving Eq. (22), the coefficients are given by

α = (Ψ^T Ψ)^(−1) Ψ^T U   (23)
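Equation (23) is an ordinary least-squares fit; a minimal sketch (again with a hypothetical scalar model in place of the FEM solver) could look as follows. A QR-based solver is used rather than forming (Ψ^T Ψ)^(−1) explicitly, which is the usual numerically stable choice.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(0)

def pce_fit_lstsq(model, p=4, n_s=150):
    """Least-squares estimate of the PCE coefficients (Eqs. 21-23)."""
    q = rng.standard_normal(n_s)                      # Ns realizations of the input
    U = np.array([model(qi) for qi in q])             # corresponding model outputs
    Psi = np.column_stack([He.hermeval(q, [0] * j + [1]) for j in range(p + 1)])
    alpha, *_ = np.linalg.lstsq(Psi, U, rcond=None)   # stable equivalent of Eq. (23)
    return alpha

alpha = pce_fit_lstsq(lambda q: 15.0 + 0.8 * q + 0.05 * q**2)
print(alpha.round(4))  # recovers [15.05, 0.8, 0.05, 0, 0], since the model lies in the basis span
```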

3.2 Stochastic Collocation

In the stochastic collocation method, instead of polynomials orthogonal with respect to the probability density function as above, Lagrange interpolation functions are used. These functions are formed from Gauss quadrature points, called collocation points. For a 1-D problem, after truncation Eq. (14) becomes

u(x, t, q) = Σ_{i=0}^{Np} α_i(x, t) L_i(q)   (24)

where Np is the number of collocation points and L_i(q) is the Lagrange interpolation function formed from a set of Np probabilistic Gauss-Hermite quadrature points {q_k, k = 1, …, Np}:

L_i(q) = Π_{j=1, j≠i}^{Np} (q − q_j) / (q_i − q_j)   (25)

The Lagrange interpolation functions satisfy L_i(q_k) = δ_ik, so the coefficients α_i(x, t) in Eq. (24) are the solutions u(x, t, q) at the collocation points q_i:

α_i(x, t) = u(x, t, q_i)   (26)

and Eq. (24) becomes

u(x, t, q) = Σ_{i=0}^{Np} u(x, t, q_i) L_i(q)   (27)

For multi-dimensional problems, standard tensor products can be used to extend the 1-D interpolation functions to multiple dimensions.
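A minimal 1-D illustration of Eqs. (25)-(27) follows, assuming probabilists' Gauss-Hermite nodes and a hypothetical model in place of the FEM solver.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def lagrange_basis(nodes, q):
    """L_i(q) = prod_{j != i} (q - q_j) / (q_i - q_j), Eq. (25), for scalar q."""
    vals = []
    for i, qi in enumerate(nodes):
        Li = 1.0
        for j, qj in enumerate(nodes):
            if j != i:
                Li *= (q - qj) / (qi - qj)
        vals.append(Li)
    return np.array(vals)

model = lambda q: 15.0 + 0.8 * q + 0.05 * q**2    # hypothetical stand-in for Eq. (13)
nodes, w = He.hermegauss(3)                        # 3 collocation points
coeffs = np.array([model(qk) for qk in nodes])     # alpha_i = u(q_i), Eq. (26)

surrogate = coeffs @ lagrange_basis(nodes, 0.5)    # Eq. (27), evaluated at q = 0.5
mean = coeffs @ (w / math.sqrt(2 * math.pi))       # quadrature mean of the surrogate
print(surrogate, mean)                             # 15.4125 (exact for a quadratic), 15.05
```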


Let {q^i_{ji}, ji = 1, …, Ngp^i, i = 1, …, d} be the set of collocation points of the i-th random variable; the full-tensor product interpolation of a d-dimensional function can then be expressed as

u(q) = Σ_{j1=1}^{Ngp^1} ··· Σ_{jd=1}^{Ngp^d} u(q^1_{j1}, …, q^d_{jd}) (L^1_{j1} ⊗ ··· ⊗ L^d_{jd})   (28)

When dealing with probabilistic problems using PCE or SC, one needs to estimate the corresponding output u(x, t, q) at each realization q^i of the random inputs. In this paper, the frequencies of the FG beams for each set of inputs are estimated by solving for ω in Eq. (13), as presented in Sect. 2.
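The Ngp^d growth of the full tensor grid in Eq. (28) can be made concrete in a few lines; the 3-point, 5-variable case below matches the 243 evaluations reported in Sect. 4.

```python
from itertools import product
from numpy.polynomial import hermite_e as He

# Full-tensor collocation grid (Eq. 28): with Ngp nodes per variable and d random
# variables, the model must be evaluated Ngp**d times, e.g. 3**5 = 243 FEM solves
# for the five random material parameters used in Sect. 4.
n_gp, d = 3, 5
nodes_1d, _ = He.hermegauss(n_gp)
grid = list(product(nodes_1d, repeat=d))  # every combination of the 1-D nodes
print(len(grid), n_gp ** d)               # 243 243
```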

4 Numerical Examples

In this study, the material properties are random, but they are assumed to be independent of the geometry of the FG beams. The detailed properties are summarized in Table 1. The numerical results are carried out for three boundary conditions: clamped-free (C-F), simply supported (S-S), and clamped-clamped (C-C). The statistical moments of the frequencies and the sensitivity indices of the parameters are estimated analytically from the coefficients of PCE and SC, as mentioned in [1]. Herein, MCS results with 10000 simulations are taken as the reference for comparison with PCE and SC.

Table 1. Distributions and parameters of material properties.

Parameter      Ec × 10^6 (MPa)  Em × 10^6 (MPa)  ρc (kg/m³)  ρm (kg/m³)  r
Mean (μ)       151              70               3000        2707        μr
COV (%)        10               10               10          10          10
Distribution   Normal           Normal           Normal      Normal      Lognormal

In order to verify the convergence of the proposed methods, Table 2 summarizes the mean μ and standard deviation σ of the first three natural frequencies of FG beams for different orders of PCE, with the number of quadrature points per random variable Ngp = 3 and the number of simulations for LR Ns = 150. The results are recalculated for different numbers of quadrature points in SP and SC and presented in Table 3. It can be seen that with polynomial order p = 4 and Ngp = 3 Gauss quadrature points the results are convergent and the three proposed methods are in excellent agreement. This is further supported by comparing the present first two moments of the frequencies with the MCS results for different mean values of the power index r. It is noted that direct MCS needs 10000 FEM solves, whereas LR, SP, and SC need only 150, 243, and 243, respectively. As in the framework presented in Fig. 1, the computational cost of the FEM solver dominates the other steps, so the alternative methods are by far cheaper than direct MCS. LR appears to be the cheapest, but it may have the largest uncertainty because of the sampling uncertainty in the simulations used to estimate the coefficients. As expected, for the same value of μr, the response variance increases as the mean value increases; the C-C beam has the largest variance and the C-F beam the smallest. Interestingly, the coefficients of variation (σ/μ) of all three beams are similar, around 5.2%, 5.7%, and 6% for μr = 2, μr = 5, and μr = 10, respectively. Clearly, the variation of the output is smaller than that of the inputs. Additionally, as μr increases, the mean value of the frequencies decreases but the variance shows the opposite tendency. This is logical, because an increase in μr leads to a decrease in the mean of the material properties and an increase in the variance of r.

Table 2. Convergence of the first three natural frequencies of FG beams with the order of PCE, L = 8 m, b = 0.8 m, h = 1 m, μr = 1.

BC    p   Prop.   LR (Ns = 150)                     SP (Ngp = 3)
                  λ1        λ2        λ3            λ1        λ2        λ3
C-F   2   μ       15.1977   93.5163   194.6697      15.1981   93.5189   194.6712
          σ       0.7781    4.7852    10.2587       0.7786    4.7883    10.2655
      3   μ       15.1982   93.5198   194.6725      15.1981   93.5189   194.6712
          σ       0.7787    4.7894    10.2674       0.7786    4.7885    10.2658
      4   μ       15.1981   93.5192   194.6717      15.1981   93.5189   194.6712
          σ       0.7786    4.7884    10.2655       0.7792    4.7925    10.2723
C-C   2   μ       96.4148   260.0021  389.5234      96.4212   260.0193  389.5365
          σ       4.9272    13.2798   20.4963       4.9387    13.3107   20.5371
      3   μ       96.4207   260.0180  389.5361      96.4212   260.0193  389.5365
          σ       4.9362    13.3040   20.5273       4.9389    13.3111   20.5377
      4   μ       96.4211   260.0189  389.5365      96.4212   260.0193  389.5365
          σ       4.9391    13.3117   20.5375       4.9431    13.3224   20.5507
S-S   2   μ       42.5766   165.9196  195.3043      42.5768   165.9203  195.3087
          σ       2.1789    8.4757    10.3248       2.1803    8.4815    10.3252
      3   μ       42.5766   165.9199  195.3075      42.5768   165.9203  195.3087
          σ       2.1803    8.4818    10.3243       2.1804    8.4818    10.3255
      4   μ       42.5766   165.9197  195.3082      42.5768   165.9203  195.3087
          σ       2.1804    8.4818    10.3253       2.1823    8.4892    10.3322

Table 3. Convergence of the first three natural frequencies of FG beams with the number of quadrature points per random variable, L = 8 m, b = 0.8 m, h = 1 m, μr = 1.

BC    Ngp  Prop.  SP (p = 4)                        SC
                  λ1        λ2        λ3            λ1        λ2        λ3
C-F   2    μ      15.1980   93.5187   194.6697      15.1980   93.5182   194.6691
           σ      1.0000    6.1498    13.2011       0.7751    4.7673    10.2326
      3    μ      15.1981   93.5189   194.6712      15.1979   93.5180   194.6703
           σ      0.7786    4.7885    10.2658       0.7783    4.7864    10.2623
      4    μ      15.1981   93.5189   194.6712      15.1979   93.5180   194.6703
           σ      0.7786    4.7887    10.2661       0.7783    4.7866    10.2626
C-C   2    μ      96.4210   260.0188  389.5338      96.4205   260.0174  389.5324
           σ      6.3431    17.0952   26.4104       4.9170    13.2520   20.4715
      3    μ      96.4212   260.0193  389.5365      96.4203   260.0169  389.5347
           σ      4.9389    13.3111   20.5377       4.9368    13.3053   20.5308
      4    μ      96.4212   260.0192  389.5365      96.4203   260.0168  389.5347
           σ      4.9391    13.3118   20.5383       4.9370    13.3060   20.5313
S-S   2    μ      42.5767   165.9202  195.3073      42.5765   165.9193  195.3067
           σ      2.8002    10.8912   13.2781       2.1707    8.4430    10.2922
      3    μ      42.5768   165.9203  195.3087      42.5764   165.9187  195.3078
           σ      2.1804    8.4818    10.3255       2.1795    8.4776    10.3222
      4    μ      42.5768   165.9202  195.3087      42.5764   165.9187  195.3079
           σ      2.1805    8.4823    10.3257       2.1796    8.4781    10.3225
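For reference, a direct MCS input sampler consistent with Table 1 might look as follows; this is a sketch only (the beam solver itself is not reproduced), and the moment-matched lognormal parameterization for r is our assumption of the intended setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # MCS sample size used in the paper

def sample_inputs(mu_r=1.0, cov=0.10):
    """One MCS batch of the five random material properties of Table 1 (same units)."""
    Ec    = rng.normal(151.0,  cov * 151.0,  N)   # x10^6 MPa
    Em    = rng.normal(70.0,   cov * 70.0,   N)   # x10^6 MPa
    rho_c = rng.normal(3000.0, cov * 3000.0, N)   # kg/m^3
    rho_m = rng.normal(2707.0, cov * 2707.0, N)   # kg/m^3
    # Lognormal r with mean mu_r and COV = cov, by moment matching (assumption):
    s2 = np.log(1.0 + cov ** 2)
    r = rng.lognormal(np.log(mu_r) - 0.5 * s2, np.sqrt(s2), N)
    return Ec, Em, rho_c, rho_m, r

Ec, Em, rho_c, rho_m, r = sample_inputs()
print(round(r.mean(), 3), round(float(r.std() / r.mean()), 3))  # ~1.0, ~0.1
```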

Figure 4 compares the probability density function (PDF) and the probability of exceedance (PoE) of the fundamental frequency of the FG beam obtained from the proposed methods with MCS for μr = 1. The results are estimated for all three boundary conditions, and the exceedance plots are presented in log scale. Again, all methods are in excellent agreement. Small discrepancies appear only in the very-small-probability (< 10⁻³) or large-frequency regions: for the C-F beam the largest frequency of MCS is less than those of the other methods, whereas for the S-S and C-C beams this trend is reversed for SP and SC.

[Fig. 4 shows six panels: (a) PDF (C-F), (b) PDF (S-S), (c) PDF (C-C), (d) PoE (C-F), (e) PoE (S-S), (f) PoE (C-C), each comparing LR (p = 4, Ns = 150), SP (p = 4, Ngp = 3), SC (Ngp = 3), and MCS (N = 10000) against the fundamental frequency u (Hz).]

Fig. 4. Probability density function (PDF) and probability of exceedance (PoE) of the fundamental frequency for C-F, S-S and C-C FG beams, L = 8 m, b = 0.8 m, h = 1 m, μr = 1.
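Empirical PoE curves of this kind can be produced from any batch of frequency samples; a sketch (with synthetic stand-in samples) follows.

```python
import numpy as np

def probability_of_exceedance(samples, thresholds):
    """Empirical PoE: P[U > u] estimated from MCS (or surrogate) samples."""
    samples = np.sort(np.asarray(samples))
    # searchsorted(..., side="right") counts samples <= each threshold
    return 1.0 - np.searchsorted(samples, thresholds, side="right") / samples.size

# Hypothetical fundamental-frequency samples standing in for a surrogate output:
rng = np.random.default_rng(0)
freqs = rng.normal(15.2, 0.78, 10_000)
u = np.array([14.0, 15.2, 17.0])
print(probability_of_exceedance(freqs, u).round(4))  # ~[0.94, 0.5, 0.011]
```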

5 Conclusion

This paper presents three different stochastic FEM approaches for the probabilistic free vibration of FG beams. All proposed methods are able to efficiently propagate the uncertainty of the FG material properties to the natural frequencies of Euler-Bernoulli FG beams. Monte Carlo simulation is considered as the exact method against which the proposed methods are compared. The results show that all three proposed approaches are in good agreement with MCS while requiring far less computational cost.

References

1. Adams, B.M., Ebeida, M.S., Eldred, M.S., Jakeman, J.D., Swiler, L.P., Stephens, J.A., Vigil, D.M., Wildey, T.M., Bohnhoff, W.J., Eddy, J.P., et al.: Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Technical report, Sandia National Laboratories (SNL-NM), Albuquerque, NM, United States (2014)
2. Blatman, G., Sudret, B.: An adaptive algorithm to build up sparse polynomial chaos expansions for stochastic finite element analysis. Probabilist. Eng. Mech. 25(2), 183–197 (2010)
3. Bressolette, P., Fogli, M., Chauvière, C.: A stochastic collocation method for large classes of mechanical problems with uncertain parameters. Probabilist. Eng. Mech. 25(2), 255–270 (2010)


4. Council, N.R.: Assessing the Reliability of Complex Models: Mathematical and Statistical Foundations of Verification, Validation, and Uncertainty Quantification. The National Academies Press, Washington, DC (2012)
5. García-Macías, E., Castro-Triguero, R., Friswell, M.I., Adhikari, S., Sáez, A.: Metamodel-based approach for stochastic free vibration analysis of functionally graded carbon nanotube reinforced plates. Compos. Struct. 152, 183–198 (2016)
6. Ghanem, R.G., Spanos, P.D.: Stochastic Finite Elements: A Spectral Approach (Revised Edition). Dover, New York (2003)
7. Ghosh, D., Farhat, C.: Strain and stress computations in stochastic finite element methods. Int. J. Numer. Meth. Eng. 74(8), 1219–1239 (2008)
8. Gupta, A., Talha, M.: Recent development in modeling and analysis of functionally graded materials and structures. Progr. Aerosp. Sci. 79, 1–14 (2015)
9. Hosder, S., Walters, R.W., Balch, M.: Point-collocation nonintrusive polynomial chaos method for stochastic computational fluid dynamics. AIAA J. 48(12), 2721–2730 (2010)
10. Lan, J., Dong, X., Peng, Z., Zhang, W., Meng, G.: Uncertain eigenvalue analysis by the sparse grid stochastic collocation method. Acta Mechanica Sinica 31(4), 545–557 (2015)
11. Li, J., Tian, X., Han, Z., Narita, Y.: Stochastic thermal buckling analysis of laminated plates using perturbation technique. Compos. Struct. 139, 1–12 (2016)
12. Lopez, R., Torii, A., Miguel, L., Cursi, J.S.: Overcoming the drawbacks of the FORM using a full characterization method. Struct. Saf. 54, 57–63 (2015)
13. Nobile, F., Tempone, R., Webster, C.G.: A sparse grid stochastic collocation method for partial differential equations with random input data. SIAM J. Numer. Anal. 46(5), 2309–2345 (2008)
14. Shaker, A., Abdelrahman, W., Tawfik, M., Sadek, E.: Stochastic finite element analysis of the free vibration of functionally graded material plates. Comput. Mech. 41(5), 707–714 (2008)
15. Shegokar, N.L., Lal, A.: Stochastic nonlinear bending response of piezoelectric functionally graded beam subjected to thermo electromechanical loadings with random material properties. Compos. Struct. 100, 17–33 (2013)
16. Shegokar, N.L., Lal, A.: Stochastic finite element nonlinear free vibration analysis of piezoelectric functionally graded materials beam subjected to thermopiezoelectric loadings with material uncertainties. Meccanica 49(5), 1039–1068 (2014)
17. Stefanou, G.: The stochastic finite element method: past, present and future. Comput. Meth. Appl. Mech. Eng. 198(9), 1031–1051 (2009)
18. Sudret, B.: Global sensitivity analysis using polynomial chaos expansions. Reliab. Eng. Syst. Saf. 93(7), 964–979 (2008)
19. Talha, M., Singh, B.: Stochastic perturbation-based finite element for buckling statistics of FGM plates with uncertain material properties in thermal environments. Compos. Struct. 108, 823–833 (2014)
20. Talha, M., Singh, B.: Stochastic vibration characteristics of finite element modelled functionally gradient plates. Compos. Struct. 130, 95–106 (2015)
21. Thai, H.T., Kim, S.E.: A review of theories for the modeling and analysis of functionally graded plates and shells. Compos. Struct. 128, 70–86 (2015)
22. Xiu, D., Karniadakis, G.E.: The Wiener-Askey polynomial chaos for stochastic differential equations. SIAM J. Sci. Comput. 24(2), 619–644 (2002). https://doi.org/10.1137/S1064827501387826
23. Xu, Y., Qian, Y., Chen, J., Song, G.: Stochastic dynamic characteristics of FGM beams with random material properties. Compos. Struct. 133, 585–594 (2015)


24. Xu, Y., Qian, Y., Song, G.: Stochastic finite element method for free vibration characteristics of random FGM beams. Appl. Math. Model. 40, 10238–10253 (2016)
25. Xu, Z., Zhou, T.: On sparse interpolation and the design of deterministic interpolation points. SIAM J. Sci. Comput. 36(4), A1752–A1769 (2014)
26. Yang, X., Lei, H., Baker, N.A., Lin, G.: Enhancing sparsity of Hermite polynomial expansions by iterative rotations. J. Comput. Phys. 307, 94–109 (2016)
27. Zhang, Q., Li, Z., Zhang, Z.: A sparse grid stochastic collocation method for elliptic interface problems with random input. J. Sci. Comput. 67(1), 262–280 (2016)

A Study on Property Improvement of Cement Pastes Containing Fly Ash and Silica Fume After Treated at High Temperature

Thi Phuong Do¹, Van Quang Nguyen¹,², and Minh Duc Vu³
¹ The University of Danang, University of Science and Technology, 54 Nguyen Luong Bang, Danang, Vietnam, [email protected]
² School of Chemical Engineering, Yeungnam University, 280 Daehak-ro, Gyeongsan, Gyeongbuk 38541, Republic of Korea
³ National University of Civil Engineering, 55 Giai Phong Road, Hai Ba Trung District, Hanoi, Vietnam

Abstract. A set of experiments was carried out to study the properties of composite cement pastes exposed to high temperature. The samples were prepared from Portland cement (PC), fly ash (FA), and silica fume (SF). The proportion of PC substituted by pozzolanic additives was 20, 30, 40, and 50%, with the SF content fixed at 5% or 10%. After curing, the samples were dried at 100 °C for 24 h and treated at temperature levels of 200, 400, 600, and 800 °C, with the same ramping rate of 5 °C/min, for 2 h, then cooled to room temperature in air. The bulk density and compressive strength of the samples at each firing temperature were determined. Moreover, the microstructure of selected samples was investigated by X-ray diffraction (XRD) and scanning electron microscopy (SEM). It was concluded that the compressive strength and bulk density of the cement paste made with 15% FA and 5% SF improved significantly in the sample treated at the highest temperature. The formation of new minerals was responsible for the increase of compressive strength, which was 2.53 times higher than that of the PC control sample at 800 °C.

Keywords: Fly ash · Silica fume · High temperature

1 Introduction

Buildings and structures related to thermal processes normally operate under the hazardous condition of high temperature. This is one of the reasons for changes in the chemical composition and the physical and mechanical properties of the materials used, and for undesirable structural failures. Heat-resistant concrete has therefore been used to prolong the lifetime of such structures. This kind of concrete generally utilizes binders such as high-alumina cement, phosphate binders, aluminum-phosphate binders, and water glass. According to reported works, concrete using PC as the binder cannot work under high temperature [1, 2]. Utilizing PC for this kind of concrete requires a combination with finely ground mineral additives. These additives consist of


high proportions of SiO2 and Al2O3, which can improve the heat resistance of PC [3]. Additives such as chamotte, metakaolin, brick powder, blast furnace slag, fly ash, silica fume, ground pumice, and volcanic ash have been investigated so far. Fly ash (FA) is a waste product of thermal power plants. FA particles are spherical, with sizes from 0.5 to 300 µm and a Blaine surface area of 1500–2000 cm²/g. FA consists mainly of a glassy phase with minor amounts of unburnt carbon and crystalline phases. B.N. Grainger investigated pastes of PC and FA at temperature levels from 100–600 °C with FA replacement contents of 20%, 25%, 37.5%, and 50% (by weight) of the total binder; the results showed that FA greatly improved the compressive strength of the binder above 300 °C [4]. S.S. Rehsi and S.K. Garg concluded that PC replaced by 20–30% FA (by mass) gave good heat resistance and volume stability under high temperature as well as under high-moisture environments during cooling [5]. X. Yigang et al. studied the influence of FA content, water-to-binder ratio, and curing conditions on the properties of concrete at high temperature; the results indicated that concrete containing FA performed better at 650 °C than concrete with PC alone [6]. Other research also showed that concrete using ground pumice and FA exhibited no strength reduction at 600 °C [7]. Silica fume (SF) is a by-product of silicon or ferrosilicon alloy production, an ultrafine powder (about 0.1 µm) of amorphous SiO2 with a spherical structure, widely used to manufacture high-performance concrete because of its high SiO2 content. The effects of temperature on the properties of SF-containing cement pastes have been investigated. Some studies showed that 10% SF substituting PC enhanced the quality of mortar and concrete at 600 °C [8, 9]. According to M. Heikal's work, the binder sample containing 20% SF obtained the highest compressive strength at 800 °C, while the highest compressive strength at 600 °C was achieved with 15% SF [10]. M.S. Morsy et al. reported that mortar with 30% of the PC substituted (metakaolin + SF) showed better heat resistance than normal cement mortar at 200–800 °C [11]. A mixture of PC, SF, and ground pumice produced concrete that could work well at 600 °C [12]. M. Heikal et al. concluded that PC replaced by 10% FA and 10% SF significantly improved the properties and structure of cement paste at 450 °C [10]. These studies signify that the presence of FA and SF, especially SF, dramatically hinders the compressive strength reduction of PC at high temperature thanks to the pozzolanic reaction. When PC is used at 400–600 °C, the calcium hydroxide (CH) component decomposes to produce free CaO. This free CaO reacts with moisture in the air during cooling, causing micro-cracks, expansion of the sample volume, and a considerable decrease of compressive strength [3]. FA and SF, especially SF, contain a large amount of active components, which consume the CH in the cement paste to produce hydration products with high durability [13]; thus the CH and free CaO contents of the cement paste are limited. This work investigated the utilization of FA and SF to improve some properties and the microstructure of cement pastes at 800 °C.


2 Materials and Experimental Program

2.1 Materials

The materials used in this investigation were Song Gianh Portland cement PC40 (PC) (Vietnam), fly ash (FA) from Duyen Hai Thermal Power Plant (Vietnam), and silica fume (SF, Sika Company). The cement, fly ash, and silica fume have specific densities of 3.11, 2.29, and 2.22 g/cm³, respectively; bulk densities of 973.2, 917.8, and 699.1 kg/m³, respectively, according to Vietnamese standard TCVN 4030:2003; and specific surface areas (BET) of 1.3881, 1.4398, and 17.6596 m²/g, respectively. The 28-day compressive strength of the cement is 51.96 MPa (following TCVN 6016:2011), and its physical and mechanical properties meet the requirements of TCVN 2682:2009. FA, with an activity index of 95.4% (TCVN 6016:2011), is classified as Type F according to TCVN 10302:2014. SF has an activity index of 109.2% and its properties satisfy the requirements of TCVN 8827:2011 for mineral additives for mortar and concrete. The chemical composition of the materials is given in Table 1.

Table 1. Chemical composition of materials, wt%.

Oxide   PC     FA     SF
SiO2    21.09  59.20  90.26
Al2O3   6.53   22.34  1.05
Fe2O3   3.43   8.18   1.03
CaO     64.21  4.23   1.23
MgO     0.85   1.19   1.41
SO3     0.15   0.20   0.02
K2O     2.91   0.83   2.03
Na2O    0.70   0.91   –
LOI     0.83   3.08   2.03

2.2 Characterization

The morphology of the samples was observed by scanning electron microscopy (SEM, JEOL 6010 PLUS/LV). The crystallinity of the samples was examined by powder X-ray diffraction (XRD, Rigaku SmartLab) using Cu Kα radiation (λ = 0.154 nm) at an accelerating voltage of 40 kV and an applied current of 30 mA. The Brunauer-Emmett-Teller (BET) surface area was obtained using an N2 adsorption-desorption apparatus (Micromeritics ASAP 2020) at −196 °C, with degassing at 200 °C for 24 h prior to the adsorption-desorption process.

2.3 Experimental Method

The binder samples were prepared from PC, FA, SF, and water following the proportions shown in Table 2. PC was substituted by the additives at proportions of 20, 30, 40, and 50% by weight, with the SF content fixed at 5% or 10%. After proportioning the binder mixtures as shown in Table 2, the mixtures were cast in 20 × 20 × 20 mm³ cube molds. The fresh specimens were covered tightly to prevent water evaporation. After 20 h, the specimens were removed from the molds and cured in steam for 4 h. The specimens were then dried at 100 °C for 24 h. After that, the specimens were heated to temperature levels of 200,


400, 600, and 800 °C in a laboratory furnace, with a ramping rate below 5 °C/min and a 2-h hold at the target temperature. After cooling to room temperature, the bulk density and compressive strength of all specimens were determined.

Table 2. The mixture composition of cement pastes, wt%.

Sample No  PC   FA  SF  Water of consistency
PC         100  0   0   32.2
F15S5      80   15  5   31.8
F25S5      70   25  5   30.6
F35S5      60   35  5   30.0
F45S5      50   45  5   29.2
F10S10     80   10  10  32.7
F20S10     70   20  10  31.8
F30S10     60   30  10  31.0
F40S10     50   40  10  30.7

3 Results and Discussion

3.1 Bulk Density

The bulk densities of the specimens at the different heating temperatures are presented in Table 3 and Fig. 1.

Table 3. Bulk density of cement pastes after treatment at high temperature (kg/m³).

Sample No  25 °C   100 °C  200 °C  400 °C  600 °C  800 °C
PC         2071.2  1997.4  1937.3  1872.1  1789.5  1756.5
F10S10     1989.1  1921.1  1867.1  1788.2  1729.2  1693.0
F20S10     1959.6  1893.7  1844.8  1761.1  1708.2  1672.4
F30S10     1933.5  1868.3  1820.5  1737.0  1684.3  1646.4
F40S10     1902.6  1839.9  1795.7  1709.5  1657.8  1619.3
F15S5      2016.2  1948.0  1893.6  1827.0  1765.3  1736.7
F25S5      1980.2  1915.4  1866.8  1795.6  1737.6  1703.5
F35S5      1951.2  1889.5  1843.2  1769.2  1710.9  1665.1
F45S5      1923.2  1862.3  1818.8  1740.6  1682.9  1638.8

At the same temperature levels, the PC sample showed the highest bulk density compared to the other samples containing additives, owing to the higher specific density of cement. The bulk density values of the specimens decreased as the temperature increased. This decline can be ascribed to water evaporation as well as the decomposition of mineral compositions in the cement pastes. When the temperature reached 200 °C, free water and adsorbed water evaporated, while the chemically bonded water began to evaporate as the temperature reached 300 °C, resulting in a significant decrease in bulk density. Dehydration of the CH mineral occurred in the range of 400–600 °C. From 600 to 800 °C, the decomposition of the CSH mineral to produce β-C2S and the dissociation of CaCO3 into CaO and CO2 were recorded, causing a further decrease in bulk density [2, 14, 15]. The PC specimen exhibited the most considerable decline in bulk density at temperatures above 400 °C (Fig. 2). Among the specimens containing additives, the F45S5 and F15S5 samples showed lower decreases in bulk density. Compared to the values at 100 °C, the decrease in the bulk density of the PC sample at 800 °C was 12.06%, while that of the F15S5 sample was 10.85%. The PC sample had the highest bulk density, yet its decrease in bulk density was the most significant, indicating that water loss and decomposition of CaCO3 occurred strongly [15]. The active components in the FA and SF additives interacted with free CaO species, hindering the decline in bulk density caused by the CH and CaCO3 components.

Fig. 1. Bulk density of cement pastes after treatment at high temperature.

Fig. 2. Bulk density loss of cement pastes after treatment at high temperature.

3.2 Compressive Strength

The compressive strength of the thermally treated specimens at different temperatures is presented in Table 4 and Fig. 3.


Table 4. Compressive strength of cement pastes after treatment at high temperature.

Sample No   Compressive strength of cement pastes at high temperature, MPa
            25 °C   100 °C   200 °C   400 °C   600 °C   800 °C
PC          55.86   63.89    79.00    48.99    33.00    18.80
F10S10      53.42   60.84    78.61    61.95    48.12    34.12
F20S10      48.62   57.57    75.26    56.14    52.48    35.19
F30S10      44.39   52.82    71.93    51.40    47.73    25.58
F40S10      39.93   49.85    68.62    48.21    43.85    22.99
F15S5       54.38   61.96    79.96    65.55    58.30    47.48
F25S5       51.43   57.26    75.09    59.88    55.68    40.12
F35S5       46.22   54.09    71.57    56.17    52.15    32.05
F45S5       43.22   51.01    68.46    50.72    46.29    25.84

At room temperature, the samples containing additives have lower compressive strength than the control sample (PC). This can be attributed to the FA additive lowering the activity of the samples.

Fig. 3. Compressive strength of cement pastes after treatment at high temperature.

Fig. 4. Compressive strength loss of cement pastes after treatment at high temperature.

When the specimens were dried and heated up to 100–200 °C, their compressive strength gradually increased. In this temperature range, the loss of free water and absorbed water led to shrinkage of the cement pastes, which formed a denser and more compact structure. Moreover, the separated free water also promoted the hydration of cement minerals, increasing compressive strength through a self-autoclaving process [11]. The PC sample showed higher compressive strength in this temperature range. It is clearly visible from Fig. 4 that the PC sample experienced a significant decrease in compressive strength from 200 to 400 °C due to the loss of absorbed water and of chemically bonded water at 300 °C [2]. The strength loss of PC was 23.32% compared to the value at 100 °C. In contrast, the compressive strength of the specimens with additives depended on the additive contents used. The F15S5 sample exhibited the highest compressive strength, with a strength gain of 5.79%. This increase can be attributed to the formation of new minerals by the pozzolanic reaction [16]. The hydration products in the samples also included tobermorite minerals (5CaO·6SiO2·xH2O), which are two to three times more durable than CSH minerals [13]. From 400 to 600 °C, the decomposition of CH to produce free CaO resulted in micro-cracks and an increase in sample volume due to the reaction of free CaO with moisture in the air during cooling, which further strongly reduced the compressive strength of the samples [3]. However, the samples containing additives exhibited only a slight reduction of compressive strength in comparison with the control sample (PC). The F25S5 sample had the smallest strength loss, of 2.76%, while that of the PC sample was 48.49%. When the samples were heated up to 800 °C, the decomposition of CSH to generate β-C2S and the decarbonation of CaCO3 caused the reduction of bulk density and compressive strength in the specimens. The strength loss of the PC sample was approximately 70.57%, while those of the F15S5 and F25S5 specimens were 23.37% and 29.93%, respectively. The specimens with additives showed a smaller reduction in compressive strength than the PC sample, which can be understood as the pores in the cement pastes being filled by additional hydration products from the pozzolanic reaction, especially in the presence of SF. Among the specimens containing 10% SF, the F10S10 and F20S10 specimens exhibited the smaller decreases in compressive strength at high temperatures. The compressive strength of the F15S5 and F25S5 specimens at high temperatures decreased more slightly than that of the rest of the samples, indicating that an SF content of 5% for substitution of PC achieved the better improvement. This result is in agreement with the research of Morsy et al. [17].

3.3 XRD Analysis

In this study, the F15S5 specimen, which showed the lowest reduction in bulk density and compressive strength at high temperature, was selected for characterization of phase compositions by XRD analysis in comparison with the PC control sample. The XRD patterns of the two samples at 25 and 800 °C are illustrated in Fig. 5. At room temperature (25 °C), both the PC and F15S5 samples showed the presence of CH, C2S, C3S, CaCO3, and CSH minerals. However, a lower peak intensity of CH and a higher peak intensity of CSH were recorded in the F15S5 sample compared to the PC sample. This observation can be explained by the high reactivity of the SF additive at room temperature: the SiO2 and Al2O3 in SF reacted with the CH in the cement pastes to form calcium silicate hydrate (CSH) minerals.


Fig. 5. XRD analysis of PC, F15S5 at 25 °C and 800 °C.

The CaO species generated from the dehydration of CH or the decomposition of CaCO3 were found in the PC control sample at 800 °C [2, 15], which is concrete evidence for the decrease in the peak intensities of CH and CaCO3. Secondary hydration of free CaO occurred when the sample was exposed to moisture in the air, resulting in the appearance of micro-cracks and a rapid reduction of compressive strength. The CSH gel completely disappeared due to the phase transformation to larnite (β-C2S) and hatrurite (C3S) [18]. The CH present at 800 °C mainly originated from the secondary hydration of free CaO. The XRD pattern of the F15S5 sample at 800 °C also showed the existence of wollastonite (CS) and gehlenite (C2AS) minerals. The decomposition of the hydrated products in cement pastes is favorable for the formation of new minerals such as CS and C2AS, which have structures similar to C2S minerals [15]. This explains why the F15S5 specimen showed a lower reduction in bulk density and compressive strength than the PC control sample at the same temperature of 800 °C.

3.4 SEM Analysis

SEM images of the PC and F15S5 specimens at 25 and 800 °C are depicted in Fig. 6.


Fig. 6. SEM images of PC and F15S5 at 25 °C and 800 °C: a) PC at 25 °C; b) PC at 800 °C; c) F15S5 at 25 °C; d) F15S5 at 800 °C.

The CH crystals with hexagonal flakes and the CSH crystals with a fibrous structure were observed in the SEM micrographs of the PC and F15S5 samples at 25 °C (Fig. 6a and 6c). It can be clearly seen that the structure of the F15S5 sample was denser, because the water content used in its binder mixture was lower than that in the PC control sample. The SEM micrograph of the PC sample at 800 °C showed small micro-cracks and no CSH (Fig. 6b), owing to its decomposition at this high temperature. The SEM images of F15S5 showed the appearance of small needle-like crystals, which were responsible for the high compressive strength (Fig. 6d). The crystallization of CS and C2AS confirmed the formation of additional CSH minerals from the pozzolanic reaction in the presence of FA and, especially, SF [15].

4 Conclusions

In this work, the effect of using pozzolanic additives as a substitute for cement to improve the physical and mechanical properties of binder mixtures at high temperature was investigated. The experimental results showed that the cement pastes containing 5% SF exhibited a lower reduction in bulk density and compressive strength. Among them, the specimen with 15% FA and 5% SF at 800 °C experienced the lowest reduction of bulk density, with a bulk density loss of 10.85% (1.21% smaller than that of the PC specimen), and a compressive strength of 47.48 MPa, which was 2.53 times higher than that of PC. Microstructure analysis indicated that the appearance of wollastonite and gehlenite minerals in the sample with 5% SF at 800 °C accounts for the property improvement of the cement at high temperature in the presence of pozzolanic additives.

Acknowledgement. This research was funded by The University of Danang – University of Science and Technology (DUT) under the project numbered T2020-02-39.

References

1. Schneider, U.: Concrete at high temperatures—a general review. Fire Saf. J. 13(1), 55–68 (1988)
2. Hager, I.: Behaviour of cement concrete at high temperature. Bull. Pol. Acad. Sci. Tech. Sci. 61(1), 145–154 (2013)
3. Remnev, V.V.: Heat-resistant properties of cement stone with finely milled refractory additives. Refract. Ind. Ceram. 37, 10–11 (1996)
4. Grainger, B.N.: Concrete at High Temperatures. Central Electricity Research Laboratories, UK (1980)
5. Rehsi, S.S., Garg, S.K.: Heat resistance of Portland fly ash cement. Cement 4(2), 14–16 (1976)
6. Yigang, X., Wong, Y.L., Poon, C.-S.: Damage to PFA concrete subject to high temperatures. In: Proceedings of International Symposium on High Performance Concrete—Workability, Strength and Durability, Hong Kong, pp. 1093–1100 (2000)
7. Aydın, S., Baradan, B.: Effect of pumice and fly ash incorporation on high temperature resistance of cement based mortars. Cem. Concr. Res. 37(6), 988–995 (2007)
8. Saad, M., Abo-El-Enein, S.A., Hanna, G.B., Kotkata, M.F.: Effect of temperature on physical and mechanical properties of concrete containing silica fume. Cem. Concr. Res. 26(5), 669–675 (1996)
9. Ghandehari, M., Behnood, A., Khanzadi, M.: Residual mechanical properties of high-strength concretes after exposure to elevated temperature. J. Mater. Civil Eng. ASCE 22(1), 59–64 (2010)
10. Heikal, M., El-Diadamony, H., Sokkary, T.M., Ahmed, I.A.: Behavior of composite cement pastes containing microsilica and fly ash at elevated temperature. Constr. Build. Mater. 38, 1180–1190 (2013)
11. Morsy, M.S., Al-Salloum, Y.A., Abbas, H., Alsayed, A.H.: Behavior of blended cement mortars containing nano-metakaolin at elevated temperatures. Constr. Build. Mater. 35, 900–905 (2012)
12. Demirel, B., Keleştemur, O.: Effect of elevated temperature on the mechanical properties of concrete produced with finely ground pumice and silica fume. Fire Saf. J. 45(6–8), 385–391 (2010)
13. Nasser, K.W., Marzouk, H.M.: Properties of mass concrete containing fly ash at high temperatures. ACI J. 76(4), 537–550 (1979)
14. Hertz, K.D.: Concrete strength for fire safety design. Mag. Concr. Res. 57(8), 445–453 (2005)
15. Heikal, M.: Effect of elevated temperature on the physico-mechanical and microstructural properties of blended cement pastes. Build. Res. J. 56, 157–171 (2008)
16. Tanyildizi, H., Coskun, A.: The effect of high temperature on compressive strength and splitting tensile strength of structural lightweight concrete containing fly ash. Constr. Build. Mater. 22(11), 2269–2275 (2008)
17. Morsy, M.S., Rashad, A.M., Sheble, S.S.: Effect of elevated temperature on compressive strength of blended cement mortar. Build. Res. J. 56(2–3), 173–185 (2008)
18. Alonso, C., Fernandez, L.: Dehydration and rehydration processes of cement pastes exposed to high temperature environments. J. Mater. Sci. 29, 3015 (2004)

Tornado-Induced Loads on a Low-Rise Building and the Simulation of Them

Ngoc Tinh Nguyen1 and Linh Ngoc Nguyen2

1 Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
[email protected]
2 National University of Civil Engineering, Hanoi 100000, Vietnam

Abstract. Tornadoes are vortices of air, typically 100–300 m in diameter, which develop within severe thunderstorms. In addition to velocity pressures, a tornado also causes a rapid change in atmospheric pressure, which can, in cases of closed or partially vented structures, produce direct differential pressure loads. In this paper, based on the assumption of an isothermal process with the Boyle-Mariotte gas law as its foundation, the equation that represents the internal pressure change in a partially vented structure due to external pressure changes during a tornado has been derived. Based on that, a computational program which simulates forces on a building during tornadoes has been established. The program calculates differential pressures on the walls and roof of a building of any geometry; with one or many compartments, or many floors; and any arrangement of openings. The inputs are tornado parameters, external pressure coefficients (Cp,e), and building geometry. The outputs are time histories of differential pressures on the walls and roof, and the total horizontal force and vertical (uplift) force on the building. In the case of closed structures, the results are verified against the simulation of a United States laboratory. The program is a useful tool for investigating the most adverse loads on a building during tornadoes.

Keywords: Tornado · Wind · Pressure · Building · Structure · Load · Simulation

1 Introduction

Tornadoes are dangerous hydrometeorological phenomena because their wind velocities are very high (250–500 km/h) and, in particular, a rapid change in atmospheric pressure (40–170 hPa) occurs over a small range (50–150 m) of the tornado. Tornadoes cause a great deal of damage to buildings, especially low-rise buildings. Tornado effects on a building are very complicated, especially for partially vented structures. In the United States of America, where the most tornadoes in the world occur [1, 2], a few experiments have been carried out to investigate tornado effects on buildings [3–6]. However, most of the experiments were carried out on non-porous building models, meaning that air does not circulate between the inside and outside of the building. Theoretically, J. Letzmann and colleagues proposed the hypothesis of pressure deficit in tornadoes [7] in 1930. This, in conjunction with the assumption of an adiabatic process, formed the premises for G.C.K. Yeh to calculate differential pressures on building walls due to changing atmospheric pressure during tornadoes [8], which was presented by E. Simiu and D. Yeo in their book [9]. However, Yeh's study only applies to the case of a one-dimensional effect, i.e. the external atmospheric pressure of the building changes in the same way everywhere, and not to three-dimensional effects as in reality. G.C.K. Yeh did not consider the decrease in pressure due to turbulent airflows caused by the impact of wind on the building. Likewise, in a previous study, this paper's author did not consider the decrease in pressure due to turbulent airflows [10], whilst it has been taken into account in the Standards by means of the external pressure coefficients Cp,e [11].

Consider a building with many compartments and ventilation openings which are constructed so that the oncoming wind does not enter the building. During a tornado, the rapid decrease in atmospheric pressure causes air to escape out of the building and also to circulate between compartments. Based on the assumption of an isothermal process with the Boyle-Mariotte gas law as its foundation, the author calculates the internal pressure change of a partially vented structure (with openings) due to external pressure changes during a tornado, in which the external atmospheric pressure of the structure is the outcome of not only the tornado static pressure drop but also the pressure decrease due to turbulent airflows caused by wind impacting on the building. Based on that, a computational program which simulates forces on a building during tornadoes has been established. The program calculates differential pressures on the walls and roof of a building of any geometry; with one or many compartments, or many floors; and any arrangement of openings. The inputs are tornado parameters, external pressure coefficients (Cp,e) and building geometry. The outputs are time histories of differential pressures on the walls and roof, and the total horizontal force and vertical (uplift) force on the building. In the case of closed structures, the results are verified against the simulation of the United States laboratory [5]. The program is a useful tool for investigating the most adverse loads on a building during tornadoes.

2 Theory and Equations

2.1 Atmospheric Pressure Change During Tornadoes

To estimate tornado effects on a building, it is necessary to assume a model of the tornado windflow. A model currently accepted for use in engineering calculations consists of a vortex characterized by rotational and translational motion, with the following assumptions [7–9]:

1- The wind velocities do not vary with height above ground. Given the fact that a typical tornado's height is thousands of meters, within a small range of height it is safe to suppose that the wind velocities do not vary with height.

2- The atmospheric pressure gradient at radius r from the tornado axis is given by the cyclostrophic wind equation:

$$\frac{dP_a(r)}{dr} = \rho\, \frac{v_t^2(r)}{r} \quad (1)$$

where $\rho$ is the mass density of air and $v_t(r)$ is the tangential wind velocity component.

3- The tangential wind velocity component $v_t(r)$ is given by the expressions:

$$v_t(r) = \begin{cases} \dfrac{r}{R_m}\, v_m & (0 \le r \le R_m) \\[2mm] \dfrac{R_m}{r}\, v_m & (R_m < r \le \infty) \end{cases} \quad (2)$$

where $v_m$ is the maximum tangential wind velocity and $R_m$ is the radius of maximum rotational wind speed. With $v_t(r)$ given by Eq. (2), integration of Eq. (1) from infinity to r gives the atmospheric pressure change at radius r:

$$P_a(r) = \begin{cases} \rho\, \dfrac{v_m^2}{2} \left( 2 - \dfrac{r^2}{R_m^2} \right) & (0 \le r \le R_m) \\[2mm] \rho\, \dfrac{v_m^2}{2} \cdot \dfrac{R_m^2}{r^2} & (R_m < r \le \infty) \end{cases} \quad (3)$$

Figure 1 is the graph of $P_a(r)$; $\Delta P = P_a(0) = \rho v_m^2$ is the tornado static pressure drop.
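As a quick numerical illustration of Eqs. (2) and (3), a minimal Python sketch of the Rankine-type vortex profiles might look as follows. This is an editorial reconstruction (the paper's own program is not listed), and the air density value is an assumption:

```python
import math

def v_t(r, v_m, R_m):
    """Tangential wind velocity, Eq. (2): linear inside the core, decaying as 1/r outside."""
    return v_m * r / R_m if r <= R_m else v_m * R_m / r

def p_a(r, v_m, R_m, rho=1.2):
    """Atmospheric pressure deficit at radius r, Eq. (3); rho (kg/m^3) is assumed."""
    if r <= R_m:
        return rho * v_m**2 / 2.0 * (2.0 - r**2 / R_m**2)
    return rho * v_m**2 / 2.0 * R_m**2 / r**2

# Tornado parameters of the example in Sect. 3.2: v_m = 73.68 m/s, R_m = 34 m
print(p_a(0.0, 73.68, 34.0))   # static pressure drop DP = rho*v_m^2 (cf. Fig. 1)
```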

Fig. 1. The graph of Pa(r)

Fig. 2. Schematic illustration of the tornado and building

2.2 Wind Velocity Pressure

A tornado has four velocity components, namely radial (vr), tangential (vt), translational (vtr), and vertical (vv). The radial and tangential components, when combined, give the rotational velocity (vro) (see Fig. 2). As suggested by Mehta and his group [12], the vertical and radial components can be approximated as vv = vt/3 to 2vt/3 and vr = vt/2. Thus:

$$v_{ro} = 1.12\, v_t \quad (4)$$


The minimum horizontal velocity is vh.min = vro − vtr and the maximum horizontal velocity is vh.max = vro + vtr. Based on worldwide opinions on tornado velocities, P.K. Dutta et al. suggested an average vtr which can be taken as equivalent to 1/6 of the tangential velocity [13]. Hence

$$v_{h.min} = 1.12\, v_t - v_t/6 = 0.96\, v_t \quad \text{and} \quad v_{h.max} = 1.12\, v_t + v_t/6 = 1.28\, v_t \quad (5)$$

Therefore, the total horizontal wind velocity can be expressed as:

$$v_h = K\, v_t \quad (6)$$

where K is a constant of proportionality whose value ranges from 0.96 to 1.28, depending on the context. This expression, which is not rigorously correct, is convenient in calculations. Finally, the field of horizontal wind velocity pressures is presented as

$$q(r) = \rho\, \frac{K^2\, v_t^2(r)}{2} \quad (7)$$

where q(r) is the horizontal wind velocity pressure at radius r from the tornado axis.
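The velocity pressure field of Eq. (7) follows directly from Eqs. (2) and (6); as an illustrative sketch (again editorial, with an assumed air density, not the paper's actual code):

```python
def q(r, v_m, R_m, K=1.12, rho=1.2):
    """Horizontal wind velocity pressure at radius r, Eq. (7).
    K may range from 0.96 to 1.28 per Eq. (5); rho (kg/m^3) is assumed."""
    vt = v_m * r / R_m if r <= R_m else v_m * R_m / r   # tangential velocity, Eq. (2)
    return rho * (K * vt) ** 2 / 2.0
```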

2.3 Loads Due to Dropping Atmospheric Pressure

The internal pressure change of a building with openings due to changing external pressure was proposed by Nguyen Ngoc Tinh as follows [10]:

• The volume flow rate of air passing through an orifice between two compartments

Consider two compartments 1 and 2 ventilated by an opening with area S. At the beginning, t = 0, they are at the same internal pressure; their pressures are P1(t) and P2(t). ΔVt is the volume of air passing through the orifice between the two compartments over an interval of time t; it can be expressed as

$$\Delta V_t = \int_0^t S\, v\, ds \quad (8)$$

where v is the velocity of air passing through the opening. It is written as

$$v = a\, n \quad (9)$$

with

$$a = \frac{1}{\rho}\, \frac{\Delta P_t}{\Delta l} \quad (10)$$

where ΔPt/Δl is the average atmospheric pressure gradient between the two compartments and n is the time in which air elements pass over the distance Δl under the pressure differential ΔPt with acceleration a; thus Δl is written as

$$\Delta l = \frac{a\, n^2}{2} \quad (11)$$

From (8), (9), (10) and (11), ΔVt is represented as:

$$\Delta V_t = \int_0^t \left[ \pm S \sqrt{\frac{2}{\rho}\, |\Delta P_t|} \right] ds \quad \text{or} \quad \Delta V_t = \int_0^t \left[ \pm S \sqrt{\frac{2}{\rho}\, |P_1(s) - P_2(s)|} \right] ds \quad (12)$$

where '+' applies when P1 > P2 and '−' when P1 < P2.

• Internal atmospheric pressure change in a building during a tornado

Consider a building with N compartments and openings which are constructed so that the oncoming wind does not enter the building. During a tornado, the rapid decrease in atmospheric pressure causes air to escape out of the building and also to circulate between compartments. This may be assumed to be an isothermal process. Consider compartment e and name the parameters of this compartment as follows: Ve is the volume of compartment e, e = 1..N; Pe(t) is the internal atmospheric pressure of compartment e; ne is the number of openings ventilating out; Sei is the area of an opening ventilating out, i = 1..ne; Sej is the area of an opening which ventilates with other compartments, j = 1..N, j ≠ e; in case compartments e and j do not open to each other, then Sej = 0. Pressure P and volume V are related by the Boyle-Mariotte law:

$$P_0 V_0 = P_e(t)\, V_t \quad (13)$$

in which: P0 is the internal atmospheric pressure of compartment e at the beginning, t = 0; Pe(t) is the internal atmospheric pressure of compartment e at time t; V0 is the atmospheric volume of compartment e at the beginning: V0 = Ve; Vt is the atmospheric volume corresponding to Pe(t): Vt = Ve + ΔVe, where ΔVe is the volume of air circulating (into, out of) compartment e over an interval of time t. Equation (13) becomes

$$P_e(t) = \frac{P_0}{1 + \dfrac{\Delta V_e}{V_e}} \quad (14)$$

in which

$$\Delta V_e = \Delta V_i + \Delta V_j \quad (15)$$

where ΔVi is the volume of air circulating through the openings ventilating out, Sei, and ΔVj is the volume of air circulating through the openings which ventilate with other compartments, Sej. Thus:

$$\Delta V_i = \sum_{i=1}^{n_e} \Delta V_{ei} \quad \text{and} \quad \Delta V_j = \sum_{j=1}^{N} \Delta V_{ej} \quad (16)$$

in which ΔVei is the volume of air circulating through opening i and ΔVej is the volume of air circulating with compartment j. According to Eq. (12), ΔVei and ΔVej can be expressed as

$$\Delta V_{ei} = \int_0^t \left[ \pm S_{ei} \sqrt{\frac{2}{\rho}\, \left| P_e(s) - P_{Ei}(s) \right|} \right] ds \quad (17)$$

$$\Delta V_{ej} = \int_0^t \left[ \pm S_{ej} \sqrt{\frac{2}{\rho}\, \left| P_e(s) - P_j(s) \right|} \right] ds \quad (18)$$

in which Pj(s) or Pj(t) is the internal pressure of compartment j, and PEi(s) or PEi(t) is the external pressure at opening Sei. The external pressure PE at opening Sei is the result of not only the pressure decrease Pa (see Eq. 3) but also the decrease in atmospheric pressure due to turbulent airflows caused by the impact of wind on the building; this decrease is represented by the external pressure coefficient Cp,e (Cp,e < 0) multiplied by q(r) [11]. Thus:

$$P_E = P_A - P_a(r) + C_{p,e}\, q(r) \quad (19)$$

From Eqs. (15), (16), (17) and (18), Eq. (14) is represented as

$$P_e(t) = \frac{P_0}{1 + \dfrac{1}{V_e} \left[ \displaystyle\sum_{i=1}^{n_e} \int_0^t \pm S_{ei} \sqrt{\frac{2}{\rho}\, \left| P_e(s) - P_{Ei}(s) \right|}\, ds + \sum_{j=1}^{N} \int_0^t \pm S_{ej} \sqrt{\frac{2}{\rho}\, \left| P_e(s) - P_j(s) \right|}\, ds \right]} \quad (20)$$

Equation (20) represents the internal pressure change of compartment e with respect to time during a tornado. Thus, the internal pressure in a building with N compartments is represented by a system of N differential equations with N unknowns Pe (or Pj), e, j = 1..N.
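Equation (20) lends itself to a simple explicit time-stepping scheme. The following Python sketch is a minimal editorial illustration of such a scheme for N compartments; it is not the paper's actual program (which is not published), and the function names, time step, and default air density are assumptions:

```python
import math

def simulate_internal_pressures(P0, volumes, openings_out, openings_between,
                                PE, dt, t_end, rho=1.2):
    """Explicit time stepping of Eq. (20) for N compartments (illustrative sketch).

    volumes[e]             : compartment volume V_e (m^3)
    openings_out[e]        : list of areas S_ei venting compartment e to the outside (m^2)
    openings_between[e][j] : area S_ej between compartments e and j (0 if closed)
    PE(t, e, i)            : external pressure at opening i of compartment e, Eq. (19)
    """
    N = len(volumes)
    dV = [0.0] * N               # accumulated exchanged air volume, Eqs. (15)-(18)
    P = [P0] * N                 # internal pressures P_e(t)
    history = [(0.0, list(P))]
    t = 0.0
    while t < t_end:
        flow = [0.0] * N
        for e in range(N):
            for i, S in enumerate(openings_out[e]):          # venting out, Eq. (17)
                dP = P[e] - PE(t, e, i)
                flow[e] += math.copysign(S * math.sqrt(2.0 * abs(dP) / rho), dP)
            for j in range(N):                               # between compartments, Eq. (18)
                S = openings_between[e][j]
                if j != e and S > 0.0:
                    dP = P[e] - P[j]
                    flow[e] += math.copysign(S * math.sqrt(2.0 * abs(dP) / rho), dP)
        for e in range(N):
            dV[e] += flow[e] * dt
            P[e] = P0 / (1.0 + dV[e] / volumes[e])           # Eq. (14) / Eq. (20)
        t += dt
        history.append((t, list(P)))
    return history
```

With this sign convention, air escaping a compartment (internal pressure above external) makes ΔVe grow, so Pe drops per Eq. (14), consistent with the '+'/'−' rule of Eq. (12).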


3 Simulation

3.1 Computational Program

Based on Eqs. (3), (7), (19) and (20), a computational program which simulates forces on a building during tornadoes has been established. The program calculates differential pressures on the walls and roof of a building of any geometry; with one or many compartments, or many floors, and any arrangement of openings. The inputs are the tornado parameters, building geometry and external pressure coefficients (Cp,e). In particular, the values of Cp,e for the roof, walls and even ventilation openings are different. The outputs include the time histories of differential pressures on the exterior walls and roof, the total horizontal forces (Fx, Fy) and the vertical (uplift) force (Fz) on the building. The program is a useful tool for investigating the most adverse loads on a building during tornadoes.

3.2 Specific Example

A building with a square plan has one compartment (see Fig. 3), with B = 18 m, L = 18 m, and height H = 9.15 m, so V = 2964.6 m3. A ventilation opening with an area of S is placed in the center of the roof; the opening is chimney-shaped, so the external pressure coefficient at the opening is Cp,e = 0. The initial atmospheric pressure is P0 = PA = 1013 hPa. The tornado parameters are Pa(0) = 61.1 hPa, Rm = 34 m, vm = 73.68 m/s, K = 1.12, vtr = 4.5 m/s. The building and tornado parameters in the example are equivalent to the building and tornado which were simulated in the experiment of Alireza Razavi and Partha P. Sarkar [5]. These parameters are shown in Fig. 3 and Table 1, in which rc corresponds to Rm, vt to vtr, and vh.c to vm. The coefficients Cp,e for roofs are averaged as −0.9 for crosswind sloped roofs and −0.3 for a pitched roof of 35° (average of upwind and downwind sloped roofs) [11]. The proposed average coefficient value Cp,e for all walls is −0.25. The coefficient Cp,e = +0.8 for windward exterior walls [11] is only applicable to calculating the velocity pressure on the walls, not to Eqs. (19) and (20).

Fig. 3. Illustration of the building model by the experiment [5]

Table 1. The building and tornado parameters for the experiment [5] and example.

Sign                    Experiment                              Example
                        Model      Scale    In full scale
Building
B                       0.090 m    1/200    18 m                18 m
L                       0.090 m    1/200    18 m                18 m
H                       0.030 m    1/200    6 m                 9.15 m
Roof type               Gable                                   By Cp,e
Roof in the y-axis      Roof pitch of 35°                       Cp,e = −0.3
Roof in the x-axis      Crosswind slope                         Cp,e = −0.9
V                                           2964.6 m3           2964.6 m3
Tornado
rc (Rm)                 0.17 m     1/200    34 m                34 m
vt (vtr)                0.5 m/s    1/9      4.5 m/s             4.5 m/s
vh.c (vm)               7.57 m/s   1/9      68.13 m/s           73.8 m/s
ρv²h.c/2 (ρv²m/2)                           2600 N/m2           3055 N/m2
ΔP = Pa(0)                                  6110 N/m2           6110 N/m2

Figure 4 shows a good agreement between the ground pressure distributions of the Manchester tornado of 2003 (field measurements) [4], the experiment (swirl ratio Svane = 0.16) [5] and the example (Pa). The comparison of the radial distribution of normalized tangential velocity between the experiment [5] and the example is shown in Fig. 5; the two are equivalent.

Fig. 4. Comparison of ground surface pressure between field measurements (Manchester), experiment (Svane = 0.16) and example (Pa)

Fig. 5. Comparison of radial distribution of tangential velocity between experiment (Svane = 0.16) and example (vt)


4 Results and Discussion

4.1 Simulation

Figure 6 shows simulation results at a point in time in the case where the average coefficient Cp,e for all the walls is −0.25 and the average coefficient Cp,e for the roof is −0.3, the area of the opening is S = 0.12 m2, and the tornado center passes through the center of the building and deviates from the x-axis by an angle Ф = −45°. The central figure is the differential pressure on the walls; the upper figure is the differential pressure on the roof and the vertical (uplift) force Fz; the right and lower figures are the components of the horizontal forces Fx and Fy (see Fig. 2), in which

$$F_x = F_x^P + F_x^W; \quad F_y = F_y^P + F_y^W \quad (21)$$

where FxP and FyP are the total forces due to differential pressure on the walls, and FxW and FyW are the total forces due to wind velocity pressure on the walls. In order to see the general tornado effects on the building, load coefficients were calculated as:

$$C_{Fx} = \frac{F_x}{A_{xm}\, \rho v_m^2 / 2}; \quad C_{Fy} = \frac{F_y}{A_{ym}\, \rho v_m^2 / 2}; \quad C_{Fz} = \frac{F_z}{A_{zm}\, \rho v_m^2 / 2} \quad (22)$$

where CFx, CFy and CFz are the load coefficients along the building ridge, along a direction normal to the building ridge, and in the vertical (uplift) direction, respectively;

$$A_{xm} = BH; \quad A_{ym} = LH; \quad A_{zm} = BL \quad (23)$$
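For reference, the normalization of Eqs. (22) and (23) amounts to the following small helper (an editorial sketch with an assumed air density, not the paper's code):

```python
def load_coefficients(Fx, Fy, Fz, B, L, H, v_m, rho=1.2):
    """Load coefficients of Eqs. (22)-(23): forces (N) normalized by the reference
    areas (m^2) and the reference pressure rho*v_m^2/2 (N/m^2); rho is assumed."""
    q_ref = rho * v_m**2 / 2.0
    A_xm, A_ym, A_zm = B * H, L * H, B * L   # Eq. (23)
    return Fx / (A_xm * q_ref), Fy / (A_ym * q_ref), Fz / (A_zm * q_ref)
```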

Figure 7 shows the time histories of the load coefficients of the building for S = 0 when the tornado center moves along the roof ridge (Ф = 0°). The building was located at x/Rm = 0, while the tornado translates from negative x/Rm to positive x/Rm. The tornado center moves along the roof ridge (x-axis), hence the coefficient Cp,e is equal to −0.3 for the roof pitch angle of 35°. In this case, the load coefficients CFx and CFz are the results of the dropping atmospheric pressure during the tornado: CFx is the horizontal attractive load coefficient and CFz is the vertical attractive (uplift) load coefficient on the building. CFy is the result of horizontal wind velocity pressures. Clearly, the maximum of CFz is 3.4 times as large as that of CFy; hence, it is necessary to strike a balance between the pressure inside and outside the building by leaving the vents open. Likewise, Fig. 8 shows the time histories of the load coefficients of the building in the case of a ventilation opening area S = 0.12 m2 (S/V = 4·10−5 /m). The result shows that the new maximum value of CFz is only 0.4 times the maximum value of CFz when S = 0. Figures 7 and 8 show that the horizontal load coefficients CFx and CFy are independent of the internal pressure of the building. As for the coefficient CFz, it is clear that during a tornado the rapid decrease in atmospheric pressure causes air to escape out of the building, decreasing the pressure difference between the inside and outside of the building.


Fig. 6. Simulation results at a point in time. 1- the center of the circle with radius Rm at the point in time, and the mean path of the tornado; 2- the plan of the building walls; 3- the atmospheric pressure value of the compartment; 4- the differential pressure diagram on the walls due to the atmospheric pressure drop (dark red) and the atmospheric decrease caused by wind impact on the building (light red); 5- the wind velocity pressure diagram on the walls (the differential pressure on the walls is the total of 4 and 5); 6- the diagram of the total force along the x-axis; 7- the diagram of the total force along the y-axis; 8- the differential pressure spectrum on the roof, presented by the different colors in 10; 9- the total vertical (uplift) force Fz; Pmax and Pmin are the maximum and minimum differential pressures on the roof.


Fig. 7. Time history of load coefficients for S/V = 0

4.2 Comparison with Experimental Results

Alireza Razavi and Partha P. Sarkar [5] studied tornado-induced loads on a rectangular building (see Fig. 3 and Table 1), in which the worst-case load scenario on a non-porous low-rise building (S = 0) was explored as influenced by the translation speed and swirl ratio of the tornado, the relative distance of the building to the tornado's mean path, and the orientation of the building with respect to the tornado's translation direction, considering all these factors simultaneously. One of the results of the study was the time histories of the load coefficients CFx, CFy and CFz. For the comparison, the load coefficients in the example and in the experiment were made compatible in their negative and positive signs. Figure 9 shows a good agreement between the time histories of the load coefficients CFx and CFy. The load coefficient CFz was compared in two cases: Cp,e = −0.3 for the roof when the tornado moved along the x-axis (see Fig. 10a) and Cp,e = −0.9 for the roof when it moved along the y-axis (see Fig. 10b); these are also equivalent.

Fig. 8. Time history of load coefficients for S/V = 4·10−5 /m (S = 0.12 m2)


Fig. 9. Comparison of the load coefficients CFx, CFy between experiment and example

Fig. 10. Comparison of the time history of the load coefficient CFz between experiment and example: a) Cp,e = −0.3 for roof; b) Cp,e = −0.9 for roof

4.3 Effect of the External Pressure Coefficients Cp,e

The external pressure coefficient Cp,e (Cp,e < 0) accounts for the pressure drop when the windflow hits the building. The values used in the example are based on AS/NZS 1170.2:2011 [11], which may be suitable for roofs but not for walls. Moreover, they are only suitable for calculating the load coefficients CFx, CFy and CFz, and are not rigorously correct for determining the differential pressure at a specific position on the roof or walls. However, the effect of wind velocity pressure expressed through the coefficient Cp,e is insignificant compared to the effect of the air pressure drop in a tornado (see Fig. 7). The peak values of the load coefficients CFx, CFy and CFz remain almost unchanged and only persist longer (for CFz, see Fig. 10).

5 Conclusions

The investigation has practical application. The assumptions for the mathematical model of the tornado are taken from recognized investigations from all over the world [7–9] and are quite consistent with the field measurements (see Fig. 4). Tornado effects on buildings are established based on the assumption of an isothermal process, with the Boyle-Mariotte gas law as its foundation. The values of the external pressure coefficients Cp,e should be investigated further in the future. Nevertheless, the simulation results show an overall picture of a tornado's effect on a building. The effect of the dropping atmospheric pressure is much greater than the effect of the wind pressure, specifically, 3.4 times as much. The peak value of the uplift load coefficient CFz is significantly reduced when a ventilation opening is made. The investigation results are verified against the simulation of the US laboratory [5]. Because measurements of tornado effects on buildings are very difficult, the computational program is a useful tool for investigating the most adverse loads on a building during tornadoes.

References

1. Goliger, A., Milford, R.: A review of worldwide occurrence of tornadoes. J. Wind Eng. Ind. Aerodyn. 74–76, 111–121 (1998)
2. Dotzek, N.: An updated estimate of tornado occurrence in Europe. Atmos. Res. 67–68, 153–161 (2003)
3. Jischke, M., Light, B.: Laboratory simulation of tornadic wind loads on a rectangular model structure. J. Wind Eng. Ind. Aerodyn. 13(1–3), 371–382 (1983)
4. Hu, H., Yang, Z., Sarkar, P., Haan, F.: Characterization of the wind loads and flow fields around a gable-roof building model in tornado-like winds. Exp. Fluids 51(3), 835–851 (2011)
5. Razavi, A., Sarkar, P.: Tornado-induced wind loads on a low-rise building: influence of swirl ratio, translation speed and building parameters. Eng. Struct. 167, 1–12 (2018)
6. Feng, C., Chen, X.: Characterization of translating tornado-induced pressures and responses of a low-rise building frame based on measurement data. Eng. Struct. 174, 495–508 (2018)
7. Letzmann, J., Wegener, A.: Die Druckerniedrigung in Tromben [The pressure deficit in tornadoes]. Meteorologische Zeitschrift 47, 165–169 (1930)
8. Yeh, G.: Differential pressures on building walls during tornadoes. Nucl. Eng. Des. 41, 53–57 (1977)
9. Simiu, E., Yeo, D.: Wind Effects on Structures: Modern Structural Design for Wind. Wiley, Hoboken (2019)
10. Nguyen, N.: Lực tác động lên ngôi nhà do thay đổi áp suất không khí khi lốc xoáy đi qua [Forces on a building due to changing atmospheric pressure during tornadoes]. J. Sci. Technol. 44, 91–100 (2006)
11. Structural design actions – Wind actions. Standards Australia (2011)
12. Mehta, K.C., McDonald, J.R., Minor, J.: Tornadic loads on structures. In: Ishizaki, H., Chiu, A.N.C. (eds.) Proceedings of the Second USA-Japan Research Seminar on Wind Effects on Structures, University of Tokyo, Tokyo, Japan, pp. 15–25 (1976)
13. Dutta, P., Ghosh, A., Agarwal, B.: Dynamic response of structures subjected to tornado loads by FEM. J. Wind Eng. Ind. Aerodyn. 90, 55–69 (2002)

Development of a New Stress-Deformation Measuring Device and Hazard Early Warning System for Constructional Work in Da Nang, Vietnam

Truong-Linh Chau, Thu-Ha Nguyen, Viet-Thanh Le, and Quoc-Thien Tran

Faculty of Road and Bridge Engineering, The University of Da Nang, University of Science and Technology, 54 Nguyen Luong Bang, Lien Chieu, Da Nang 550000, Vietnam
[email protected]

Abstract. The paper presents the process of creating and evaluating the workability of new measuring devices for the health monitoring of infrastructure projects in Da Nang city using special sensors, including deformation and crack observers, tentatively named TUD-v1. All the devices are connected and managed via a web server in order to provide real-time data to users. Through laboratory checks and a test-bed, the results confirmed that the observers and the management system work quite effectively and accurately. Additionally, given its reasonable cost, the new early warning system using the Internet of Things (IoT) in this study can feasibly be applied to various smart construction projects with better performance at a lower construction cost.

Keywords: Measuring devices · Web server · Early warning system · Internet of Things

1 Introduction

The value of deformation and the presence of cracks play an essential role in evaluating the working status and performance of a bridge. This information reflects structural health; values exceeded in use might reveal the presence of bridge deficiency or abnormal loading. Deflection of a bridge is measured as the displacement of certain points of the structure. When such measurements are performed automatically and continuously over a period of time, they are known as health monitoring [1]. Observation of the deflection of an aging bridge under dynamic and shock loads helps to predict its load-bearing capacity and can support engineers' decisions on whether they should pay for expensive improvement or upgrading of the bridge. Knowledge of bridge deformation and cracks is also crucial for evaluating the serviceability of the bridge. Hence, there are many reasons for applying health monitoring to bridges [1–6].

Along with the development of construction serving global urbanization, many manufacturers have been providing high-tech devices to monitor buildings or infrastructure projects while they are under construction or in operation, such as the deformation and crack observers TDS 303 from TML-Japan, UCAM-60B from Kyowa-Japan, and the P3 STRAIN INDICATOR. However, the price of these modern devices is not affordable for most constructors in developing countries. Hence, they have only been employed on big projects, not on all the projects which still need to be monitored and checked.

Deformation, subsidence, and cracks are the negative agents that usually cause common problems in construction. When a structure is under multiple loads or natural effects, such as static loads, dynamic loads, humid air, and high temperature, these negative phenomena are accelerated and occur quickly. If these phenomena take place over a long term without any measurement of the project's engineering properties, their value, number, or size will increase significantly and uncontrollably, leading to the loss of the bearing capacity and workability of the structures.

In this study, the authors present how they made the monitoring devices and the management system complying with the IoT paradigm. Especially, along with the reasonable production cost, the structure of the devices is simple and small in size, helping constructors and project operators to construct as well as maintain their projects safely and economically.

The operating mechanism of the new measuring devices is to transfer the output signals of various sensors into deformation values. To do that, various strain gages were attached to the locations to be monitored, with a deformation range from the µm to the cm scale. Strain gages are designed to measure deformation with very high accuracy based on the change of electrical resistance. However, the output signals of strain gages are quite weak and small; hence, a special Wheatstone bridge board was installed to read them. The Wheatstone board observes the change of electrical resistance and then transfers those signals into an effective voltage complying with the following formula:

$$\Delta e = \frac{\Delta R}{4R}\, E = \frac{E}{4}\, k\, \varepsilon \quad (1)$$

Hence:

$$\varepsilon = \frac{4\, \Delta e}{k\, E} \quad (2)$$

where Δe and ΔR are the changes of voltage and of the electrical resistance of the strain gages when deformation occurs, respectively; R is the electrical resistance of the strain gages; E is the excitation voltage; k is a constant; and ε is the deformation. After that, the signals can be converted into deformation data. The signals are sent to and from the web server via a SIM800C module installed at the observed locations. The strain gage sensor and the transferring board are shown in Fig. 1a and b, respectively.
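As an illustrative sketch of Eq. (2) (not the device firmware, which the paper does not list), the conversion from measured bridge output to strain could look like the following; the gage factor value k = 2.0 is an assumption, since the paper only calls k "a constant":

```python
def strain_from_bridge(delta_e, E, k=2.0):
    """Eq. (2): strain = 4*delta_e / (k*E), with delta_e the bridge output voltage (V),
    E the excitation voltage (V), and k the gage factor (assumed ~2 for metallic gages)."""
    return 4.0 * delta_e / (k * E)

# Example: a 2.5 mV bridge output at 5 V excitation corresponds to 1000 microstrain
print(strain_from_bridge(2.5e-3, 5.0) * 1e6)
```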


Fig. 1. Description of strain gage sensor (a) and transferring board (b).

Fig. 2. A working process of the monitoring system.

Figures 2 and 3 describe the working steps and the block diagram of the working process of the monitoring system in this study. The output signals of the sensors are weak; most sensors generate a voltage change of less than 10 mV per volt of excitation. Hence, in this study, a board supplying the excitation voltage for the bridge circuits, and coordinating circuits based on the bridge circuit, was employed, as described in Fig. 4.

The ratiometric method eliminates the dependence of electrical devices on the accuracy of the excitation voltage by continuously measuring the excitation voltage and handling the values in hardware. The excitation voltage is measured consistently and used to control the input of the analog-to-digital converter (ADC). As a result, the data is collected as the ratio of the output voltage of the bridge circuit to the excitation voltage. The system works continuously and automatically corrects for the uncertainty of the excitation voltage. The working process of the system following the ratiometric method is presented in Fig. 5. In this study, three-wire deformation sensors were employed to eliminate the effect of changes in the electrical resistance of the lead wires, since this influences the bridge circuit (Fig. 6).

Fig. 3. Block diagram of the working process in this study.

Fig. 4. A description of the coordinating circuits.


Fig. 5. The process of data collecting following the ratiometric method.

Fig. 6. The circuit diagram of the 3-line deformation sensor.
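A minimal sketch of the ratiometric conversion described above (an editorial illustration only; the names and the gain parameter are assumptions, not the device firmware):

```python
def ratiometric_strain(v_bridge, v_excitation, gain=1.0, k=2.0):
    """Ratiometric reading: dividing the (amplified) bridge output by the simultaneously
    measured excitation voltage cancels drift in the excitation source. k is an assumed
    gage factor; gain is the assumed amplifier gain ahead of the ADC."""
    ratio = (v_bridge / gain) / v_excitation   # equals delta_e / E in Eq. (2)
    return 4.0 * ratio / k                     # strain per Eq. (2)
```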

2 Preparation for the New Monitoring System

2.1 Programming Language

In this study, C and C++ were employed for analyzing the output data of the measuring devices and then updating the data to the web server. After that, JavaScript was used for the rest of the work on the web server. An example of the coding carried out in C++ is shown in Fig. 7.

Fig. 7. Coding work using C++ in this study.

2.2 Design of Each Component of the Measuring System

All circuits of the system were designed individually; the Altium software was employed to draw and then connect all the circuits into a unified part, as presented in Fig. 8. An advantage of this method of circuit production is that it can avoid and reduce errors, improve the aesthetics of the system, and, especially, allow many additional functions to be developed for the measuring devices. The completed measuring devices are illustrated in Fig. 9, and the designed interface of the web server is illustrated in Fig. 10.

Fig. 8. Designed circuits using Altium

Fig. 9. The completed measuring devices.

2.3 Calibration of the New Measuring System

The deformation and cracks of pre-stressed and steel structures are usually insignificant under static and dynamic loads; however, in the case of normal reinforced concrete, they are relatively large and visible. In this study, experiments were carried out to calibrate the new monitoring devices with various cases of deformation and cracks. The results obtained by the new measuring devices were confirmed by strain gauges with an accuracy of 0.001 mm, as shown in Fig. 10.


Fig. 10. Process of the calibration work

Based on the data measured by the new monitoring devices together with the confirmation by strain gauges, a correlation between the data obtained by the two different measuring methods was plotted, with R2 = 0.9954, as illustrated in Fig. 11.

Fig. 11. Correlation between the data obtained by the two measuring methods

Figure 11 shows that there is an uncertainty when collecting data in two different ways, due to the effects of environment, humans, and installation. Based on the correlation, the ultimate value of the measuring results can be derived from the equation below:

$$x = \frac{y - 15.375}{0.947} \quad (3)$$

where x and y are the deformation after and before calibration, respectively.
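Applying the calibration of Eq. (3) is a one-line correction (an editorial sketch; the units follow the correlation fitted in Fig. 11):

```python
def calibrate(y):
    """Eq. (3): convert a raw reading y into the calibrated deformation x."""
    return (y - 15.375) / 0.947
```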

2.4 Verification of the New Measuring System in the Laboratory

Besides the confirmation with the strain gauges, the new measuring device TUD-v1 was checked for accuracy against the TDS-303, an accurate device from the TML-Japan company. Figure 12a and b show the beam under bending and the sample under tensile loads, respectively. The deformation data were conveyed to a data collector and recorded by the TDS-303 with the measuring channel CH.NOI-CH.


Fig. 12. a) Tested beam, b) tensile sample, and c) Data collector

Figures 13 and 14 show the deformation–load relationships for the sample under tension and the beam under bending, respectively, obtained using the TDS-303 and the developed measuring device. It can be seen that when the load reaches the designed value, the differences in the deformation values measured by the TDS-303 and TUD-v1 devices were 12.2×10−6 m (3.4%) and 40.9×10−6 m (4.14%) for the cases of the sample under tension and the beam under bending, respectively. This indicates that the difference in measured results between the TUD-v1 and the TDS-303, a widely used commercial product, is insignificant and acceptable.

Fig. 13. Deformation–load relationship for the sample under tension obtained by using the TDS-303 and the developed measuring device TUD-v1.


Fig. 14. Deformation–load relationship for the beam obtained by using the TDS-303 and the developed measuring device TUD-v1.

3 Test-Bed

In order to ensure that the TUD-v1 is practically applicable and that the whole monitoring system works well on a real site under the various effects of nature and humans, two test-bed campaigns were performed on two different bridges in the city area. The basic information of the two bridges is briefly described below:

Project 1 – Nguyen Tri Phuong Bridge: The monitoring devices were installed between the reinforced retaining wall and the bridge abutments. This position is frequently subjected to shock loads because of the difference in stiffness of the two mentioned components; both the embankment and the reinforced retaining wall in this case are susceptible to deformation owing to their flexibility.


Fig. 15. a) Installation of parts of TUD-v1 and b) Connecting the measuring devices to the power source

The process of installing the parts of the new monitoring devices for observing the cracks' width, and of connecting them to a power source, is illustrated in Fig. 15a and b, respectively. All steps were done manually, and all devices were covered and protected carefully in order to avoid the adverse effects of nature and humans.


Fig. 16. The width of cracks with respect to elapsed time drawn in the web server

Figure 16 presents the width of cracks with respect to elapsed time as drawn in the web server, while Fig. 17 records all the detailed data in numbers. It is worth noting that the interface of the software is quite easy to view, understand, and use.


Fig. 17. Cracks’ width-elapsed time curve obtained by the new measuring devices at two different locations


Figure 17 depicts the cracks' width with respect to elapsed time obtained by the new measuring devices at the two designed locations. The upper part of the retaining wall (sensor 1, 1 m from the wall's top) deformed by 1.535 mm, while the lower part (sensor 2, 1.2 m from the wall's foot) deformed by 0.989 mm. This reveals that, in this case, the upper part of the retaining wall deformed more than the lower part under the shock and dynamic loads. Interestingly, the result shows that the deformation reaches its maximum value between 8:30 and 9:30 am; this can be explained by the fact that during this period the amount of traffic increases significantly due to transportation needs. Generally, the data collected by the new monitoring system were quite detailed and understandable.

Project 2 – Nam O Bridge: The monitoring devices were installed at the middle of the bridge's beams. Although the beams are pre-stressed, after 45 years of service, cracks have visibly occurred at the middle bottom parts of the beams, and the bridge is undergoing significant deterioration. The process of installing the new monitoring devices for observing the deformation of the bridge is illustrated in Fig. 18. All steps were done similarly to those at Nguyen Tri Phuong Bridge. The deformation of the bridge's beam with respect to elapsed time is shown in the web server in a form similar to that of Fig. 16 for Nguyen Tri Phuong Bridge.


Fig. 18. Installation of measuring device parts of TUD-v1 at the middle point of the bridge's beam

Figure 19a and b show the deformation and the stress of the beam under the dynamic load with respect to elapsed time, respectively. It can be recognized that the deformation occurs most significantly at around 9:30 am, which is supposed to be the rush hour. Besides, the beams of this bridge are pre-stressed beams: when the dynamic loads of heavy trucks affect the beams, the cracks' width and deformation increase; when the loads pass, the beams partially recover and the cracks' width and deformation decrease. When such damaging loads repeatedly affect the beams over a long time of service, the bridge becomes significantly deteriorated and damaged, and may even collapse.

Fig. 19. a) The deformation and b) the stress of the beam under the dynamic load with respect to elapsed time

Besides collecting the real-time data and illustrating it in the web server, in this study, dangerous thresholds were designated to raise alerts when the measured values exceed the acceptable number. For example, the width of dynamic-load-induced cracks should be smaller than 0.2 mm. In this case, the maximum width of the cracks was 0.396 mm, exceeding the safe threshold; hence, the system gives an alert to the user to consider the dangerous incident. Additionally, the more frequently the real-time data are taken, the more detailed and accurate the results. This can be seen graphically in Fig. 20.

Fig. 20. The deformation of the beam with respect to elapsed time (interval of 1 s) drawn in the web server, coupled with the dangerous threshold
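The threshold logic described above reduces to a simple comparison on the server side; a minimal sketch (an editorial illustration with assumed names, since the actual web-server code is not given in the paper):

```python
CRACK_WIDTH_LIMIT_MM = 0.2   # designated safe threshold for dynamic-load-induced cracks

def exceeds_threshold(width_mm, limit_mm=CRACK_WIDTH_LIMIT_MM):
    """Return True when a reading should trigger a dangerous-threshold alert."""
    return width_mm > limit_mm

print(exceeds_threshold(0.396))   # True: 0.396 mm exceeds the 0.2 mm threshold
```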

4 Conclusions

In this study, new monitoring devices and a monitoring system were successfully developed, giving useful real-time data and alerts in the web server. Through various tests carried out both in the laboratory and on real sites, the new monitoring system TUD-v1 provided accurate data and assessments, matching well with the popular commercial product TDS-303. The synchronization between the measuring devices and the monitoring system makes the newly developed monitoring system easy and feasible to use, allowing users to stay up to date on a daily basis. Hence, engineers can directly evaluate the working status of structures and give exact warnings to the authorities in a timely manner before problems occur. Given the requirement for a system with higher accuracy and the ability to monitor more parameters as well as various components of local infrastructure, further study will be expanded to monitoring the temperature, settlement, and large deformations of various structures, which will help users gather more data to evaluate and monitor the working status of structures from different aspects, resulting in useful and exact reports on the health of structures.

Acknowledgment. This work was supported by The University of Da Nang, University of Science and Technology.


References

1. Xu, Y., Brownjohn, J.M.W., Huseynov, F.: Accurate deformation monitoring on bridge structures using a cost-effective sensing system combined with a camera and accelerometers: case study. J. Bridg. Eng. 24, 1–14 (2019). https://doi.org/10.1061/(ASCE)BE.1943-5592.0001330
2. Kaloop, M.R., Li, H.: Monitoring of bridge deformation using GPS technique. KSCE J. Civ. Eng. 13, 423–431 (2009). https://doi.org/10.1007/s12205-009-0423-y
3. Khalil, A., Heiza, K., El Nawawy, O.: State of the art review on bridges structural health monitoring (applications and future trends). Int. Conf. Civ. Archit. Eng. 11, 1–25 (2016). https://doi.org/10.21608/iccae.2016.43761
4. Ároch, R., Sokol, M., Venglár, M.: Structural health monitoring of major Danube bridges in Bratislava. Procedia Eng. 156, 24–31 (2016). https://doi.org/10.1016/j.proeng.2016.08.263
5. Yi, T.H., Li, H.N., Gu, M.: Experimental assessment of high-rate GPS receivers for deformation monitoring of bridge. Meas. J. Int. Meas. Confed. 46, 420–432 (2013). https://doi.org/10.1016/j.measurement.2012.07.018
6. Yunus, M.Z.M., Ibrahim, N., Ahmad, F.S.: A review on bridge dynamic displacement monitoring using global positioning system and accelerometer. In: AIP Conference Proceedings 1930 (2018). https://doi.org/10.1063/1.5022933

Compressive Behavior of Concrete: Experimental Study and Numerical Simulation Using Discrete Element Method

Tran Van Tieng1(&), Nguyen Thi Thuy Hang1, and Nguyen Xuan Khanh2

1 HCMC University of Technology and Education, Ho Chi Minh City, Vietnam
[email protected]
2 Civil Engineering Department, Faculty of Engineering, Bach Khoa Sai Gon College, Ho Chi Minh City, Vietnam

Abstract. This paper focuses on the compressive behavior of concrete through both experimental study and modeling using the discrete element method (DEM). The experimental behavior of concrete samples with different fabricated gradations is first characterized following ASTM C469. The DEM is then used to simulate this behavior. With the DEM, the concrete sample is represented by an assembly of spheres. The simulation results are compared with the experimental results to verify the discrete element model. In addition, the mechanical behavior as well as the propagation of cracks and the destruction of the material are clearly shown, without the meshing required by finite elements. This study demonstrates the ability of the discrete element model to simulate concrete behavior and the possibility of developing the model's constitutive laws in order to simulate concrete structures.

Keywords: Concrete · Compressive behavior · Gradations · Discrete element method · Modeling · Simulation results · Experimental results

1 Introduction

Concrete, a kind of artificial stone material, is the best-known material used in the construction industry. Experimental research on the behavior of concrete samples was carried out very early worldwide. Mazars [1] performed a uniaxial compressive test on a concrete sample and showed that the behavior of concrete undergoes different stages. In the first stage, the voids and micro-cracks in the concrete close and the elastic modulus of the concrete increases. In the second stage, from 30% to 50% of the strength value, the concrete behaves almost linearly with a constant elastic modulus, while the sample volume decreases. In the next stage, from 50% to 100% of the strength value, the modulus of the concrete is reduced; the micro-cracks become oriented parallel to the loading direction and the sample's volume increases. The post-peak stage is the softening phase of the concrete: the micro-cracks form macroscopic cracks and lead to the destruction of the sample, and the sample volume continues to increase. Besides the uniaxial compression test, the uniaxial tensile test has been studied by many authors [2, 3]. These studies indicate that, in the tensile test, the concrete sample first behaves linearly and then enters a nonlinear phase with a reduction of the deformation modulus until the peak stress. After the peak, the stress decreases suddenly; the micro-cracks become oriented perpendicular to the loading direction owing to the disconnection between the large aggregate particles once the cement mortar has been destroyed. The micro-cracks form macroscopic fractures in the direction perpendicular to the loading direction.

In addition to experimental studies, the behavior of concrete has also been simulated by various numerical methods such as the finite element method and the discrete element method. Most simulation studies have used the finite element method, applied to concrete and reinforced concrete structures [4, 5]. However, these studies still have some restrictions in simulating concrete and in illustrating the propagation of cracks inside a structure. To improve the simulation of concrete behavior, the discrete element method developed by Cundall [6] has been used. Many studies have simulated the behavior of materials using the discrete element method [7–9]. Tran et al. [7] developed a discrete element model to simulate the triaxial behavior of concrete under high confinement. Hentz et al. [8] developed and identified a constitutive law implemented in the DEM to model the behavior of concrete. Gyurkó et al. [9] also used the DEM to simulate the uniaxial compression test of hardened concrete.

This paper studies the experimental uniaxial compressive behavior of concrete with different fabricated gradations. The discrete element model is then used to model this behavior; the simulation results are compared with the experimental ones to verify the ability of the discrete element model.
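As a rough illustration of the discrete-element idea invoked here, the sample being an assembly of spheres whose contacts transmit forces, the following Python sketch advances such an assembly by one explicit time step under a purely linear, repulsive contact law. All names and parameter values are illustrative assumptions; the paper's actual model relies on a calibrated constitutive law (including cohesion, needed to capture cracking) that this sketch does not attempt to reproduce.

```python
# Minimal DEM sketch: spheres in contact exchange a linear repulsive force,
# and motion is integrated explicitly. kn, damping, and dt are assumed values.
import numpy as np


def dem_step(pos, vel, radii, masses, kn=1.0e7, damping=0.02, dt=1.0e-6):
    """Advance an assembly of spheres by one explicit time step.

    pos, vel : (n, 3) arrays of particle centres and velocities
    radii    : (n,) particle radii; masses : (n,) particle masses
    """
    forces = np.zeros_like(pos)
    n = len(radii)
    for i in range(n):
        for j in range(i + 1, n):
            branch = pos[j] - pos[i]
            dist = np.linalg.norm(branch)
            overlap = radii[i] + radii[j] - dist
            if overlap > 0.0:                       # spheres in contact
                normal = branch / dist
                fn = kn * overlap * normal          # linear normal force
                forces[i] -= fn
                forces[j] += fn
    # symplectic Euler update with simple numerical damping
    vel = (1.0 - damping) * vel + dt * forces / masses[:, None]
    pos = pos + dt * vel
    return pos, vel


# e.g. two slightly overlapping spheres are pushed apart by the contact force
pos = np.array([[0.0, 0.0, 0.0], [0.009, 0.0, 0.0]])
vel = np.zeros_like(pos)
radii = np.array([0.005, 0.005])
masses = np.array([1.0e-3, 1.0e-3])
pos, vel = dem_step(pos, vel, radii, masses)
```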

2 Uniaxial Compression Test on Concrete

2.1 Concrete Specimen

The concrete specimens were cast with the material composition below (see Table 1): PCB40 Holcim cement; fine aggregate (sand); coarse aggregate (natural aggregate) with the particle size distribution shown in Table 2; and pure water, following the technical requirements of TCVN 4506-2012 [10]. These materials were mixed in different ratios to fabricate concrete mixtures with the two proportions DC01 and DC02 indicated in Table 1.

Table 1. Concrete mixture proportion (kg/m3)

Series  Cement  Fine aggregate  Coarse aggregate  Water
DC01    385     668             1182              201
DC02    437     625             1170              201
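As a quick derived check on Table 1 (not part of the paper's procedure), the water-to-cement ratios of the two mixes can be computed directly; the snippet below is purely illustrative.

```python
# Water-to-cement ratios implied by Table 1 (kg/m3 values from the table).
mixes = {"DC01": {"cement": 385, "water": 201},
         "DC02": {"cement": 437, "water": 201}}
for name, m in mixes.items():
    print(f"{name}: w/c = {m['water'] / m['cement']:.2f}")
# DC01: w/c = 0.52, DC02: w/c = 0.46 -- DC02 is the richer, lower-w/c mix.
```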


Table 2. Particle size distribution of fine aggregate and coarse aggregate

Fine aggregate
Size sieve (mm)  4.75  2.36  1.18  0.6  0.3  0.15