Synergetic Cooperation between Robots and Humans: Proceedings of the CLAWAR 2023 Conference - Volume 2 (Lecture Notes in Networks and Systems, 811) [1st ed. 2024] 3031472713, 9783031472718


Lecture Notes in Networks and Systems 811

Ebrahim Samer El Youssef · Mohammad Osman Tokhi · Manuel F. Silva · Leonardo Mejia Rincon, Editors

Synergetic Cooperation Between Robots and Humans Proceedings of the CLAWAR 2023 Conference—Volume 2

Lecture Notes in Networks and Systems

811

Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Türkiye
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the worldwide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).

Ebrahim Samer El Youssef · Mohammad Osman Tokhi · Manuel F. Silva · Leonardo Mejia Rincon Editors

Synergetic Cooperation Between Robots and Humans Proceedings of the CLAWAR 2023 Conference—Volume 2

Editors
Ebrahim Samer El Youssef, Universidade Federal de Santa Catarina, Blumenau, Santa Catarina, Brazil
Mohammad Osman Tokhi, School of Engineering, London South Bank University, London, UK
Manuel F. Silva, School of Engineering, Polytechnic Institute of Porto, Porto, Portugal
Leonardo Mejia Rincon, Universidade Federal de Santa Catarina, Blumenau, Santa Catarina, Brazil

ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-3-031-47271-8 ISBN 978-3-031-47272-5 (eBook) https://doi.org/10.1007/978-3-031-47272-5 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.

Foreword

CLAWAR (Climbing and Walking Robots) started with a 6-month exploratory phase in 1996 by four European organizations, namely the University of Portsmouth, the Royal Military Academy, FZI and RISO, with a view to identifying robotic stakeholders across Europe. The outcome was the initiation of the CLAWAR thematic network of excellence, supported by the European Commission over two phases: CLAWAR1 under the EC Brite Euram programme during 1998–2002 and CLAWAR2 under the EC GROWTH programme during 2002–2005. The CLAWAR Association was established by the end of 2005 to continue the activities of the CLAWAR Network globally, with the mission to advance robotics for the public benefit. The Association was registered in March 2006 with the Companies House in the UK as a non-profit-making company limited by guarantee, and in 2012 with the Charities Commission in the UK as a charitable organization. The CLAWAR annual conference series is one of the main activities of the CLAWAR Association. The first nine issues of the conference, starting from 1998, were held in locations across Europe, and further issues in various countries worldwide. The COVID-19 pandemic has had an impact on the mode of participation in the conference: while issue 22 (2019) was held in physical participation mode, virtual participation was exercised for issues 23 (2020, Russian Federation) and 24 (2021, Japan). The CLAWAR conference series has established itself as a popular and high-profile platform for networking and dissemination of research and development findings in the area of mobile robotics and associated technologies.

Preface

CLAWAR 2023 is the twenty-sixth edition of the International Conference series on Climbing and Walking Robots and the Support Technologies for Mobile Machines. The conference is organized by the CLAWAR Association in collaboration with the Federal University of Santa Catarina, Brazil, during 2–4 October 2023. CLAWAR 2023 brings new developments and new research findings in robotics technologies within the framework of 'synergetic cooperation between robots and humans'. The topics covered include wearable assistive robotics, from augmentation to full support for those with mobility disorders, innovative designs of components and full systems, and application-specific robotic solutions. The CLAWAR 2023 conference includes a total of 55 regular and special submission articles from research institutions worldwide. This number has been arrived at through a rigorous review of the initial submissions, where each submitted paper received at least three reviews. The conference further features three plenary presentations: "Biped walking with robots and exoskeletons: Marching towards bionic gait" by Arturo Forner-Cordero, Universidade de São Paulo, Brazil; "Multibody dynamics with contact-impact events: Roots, models and applications" by Paulo Flores, University of Minho, Portugal; and "Human-like bipedal locomotion" by Karsten Berns, University of Kaiserslautern, Germany. It is believed that this book will serve as a source of inspiration and further innovation in research and development in the rapidly growing area of mobile service robotics.

Ebrahim Samer El Youssef, Blumenau, Brazil
Mohammad Osman Tokhi, London, UK
Manuel F. Silva, Porto, Portugal
Leonardo Mejia Rincon, Blumenau, Brazil

Acknowledgements

The editors would like to thank members of the International Scientific Committee and National Organizing Committee of CLAWAR 2023 for their time and dedication in reviewing the submitted articles: Ahmad, S.—Malaysia Alkan, B.—UK Alexandre Campos Bonilla, A.—Brazil Almeshal, A.—Kuwait Armada, M.—Spain Banfield, I.—Panama Barasuol, V.—Italy Belter, D.—Poland Ben Amar, F.—France Berns, K.—Germany Bidaud, P.—France Boaventura Cunha, T.—Brazil Bolotnik, N.—Russia Bonsignorio, F.—Italy Brasil Pintarelli, G.—Brazil Bridge, B.—UK Briskin, E.—Russia Burlacu, A.—Romania Cascalho, J.—Portugal Chevallereau, C.—France Chrysostomou, D.—Denmark Chugo, D.—Japan Conceicao, A.—Brazil Costa, M. T.—Portugal Degani, A.—Israel Dehghani-Sanij, A.—UK Delisle Rodriguez, D.—Brazil Dias, A.—Portugal Dillmann, R.—Germany Doroftei, I.—Romania Dourado, A.—Brazil Drews-Jr, P.—Brazil Eguchi, A.—USA El Youssef E. S.—Brazil Esteban, S.—Spain Faina, A.—Denmark Farias, P.—Brazil


Fernandez, R.—Spain Ferreira, P.—Portugal Figueiredo, J.—Portugal Frantz, J. C.—Brazil Friebe, A.—Sweden Gallegos Garrido, G.—UK Goher, K. M.—UK Grand, C.—France Guedes, P.—Portugal Gwak, K.-W.—Korea Hassan, M. K.—Malaysia Hwang, K.-S.—Taiwan Ion, I.—Romania Jamali, A.—Malaysia Kottege, N.—Australia Lefeber, D.—Belgium Leon-Rodriguez, H.—Colombia Li, G.—UK Lima Brasil, F.—Brazil Lima, P. U.—Portugal Manoonpong, P.—Denmark Marques, L.—Portugal Martins, D.—Brazil Massoud, R.—Syria Mat Darus, I. Z.—Malaysia Mejia, L.—Brazil Mendes, A.—Portugal Mohamed, Z.—Malaysia Mondal, S.—UK Monje, C. A.—Spain Montes, H.—Panama Moon, S.—Korea Moreno, U.—Brazil Murai, E. H.—Brazil Muscato, G.—Italy Nakamura, T.—Japan Nunuparov, A.—Russia Ozkan-Aydin, Y.—UK Pandey, R.—India Paraforos, D.—Germany Park, H. S.—Korea Petry, M.—Brazil Pinto, M.—Brazil Pinto, V. H.—Portugal Plentz, P.—Brazil


Ponce, D.—Brazil Raffo, G.—Brazil Reina, G.—Italy Ribeiro, M.—Portugal Ribeiro, T. T.—Brazil Roccia, B.—Argentina Rocha, R. P.—Portugal Rodríguez, H.—Panama Rodríguez Lera, F. J.—Spain Röning, J.—Finland Santos, C. M. P.—Portugal Semini, C.—Italy Sequeira, J.—Portugal Silva, M. F.—Portugal Simas, H.—Brazil Simoni, R.—Brazil Skrzypczynski, P.—Poland Su, H.—China Suzano Medeiros, V.—Brazil Suzuki, D.—Brazil Toha, S. F.—Malaysia Tokhi, M. O.—UK Vainio, M.—Finland Visser, A.—Netherlands Wildgrube Bertol, D.—Brazil Wu, J.—China Xie, M.—Singapore Yatsun, S.—Russia Yigit, A.—Kuwait Yokota, S.—Japan Zaier, R.—Oman Zhao, Z.—UK Zielinska, T.—Poland


Contents

Multibody Systems and Mechanism Design in Robotics

Efficiency Optimization of the Gear Reducer of an Overhead Power Line Inspection Robot (Marina Baldissera de Souza, Gustavo Queiroz Fernandes, Luís Paulo Laus, Andrea Piga Carboni, and Daniel Martins)
Leg Mechanism of a Quadruped Wheeled Robot with a 4-DoF Spherical Parallel Link Mechanism (Taisei Suzuki, Hayato Ota, Hiroki Takenaka, Takayuki Tanaka, Yuta Ishizawa, and Kenji Hashimoto)
A New Method of Climbing on a High Place by Elasticity-Embedded Rocker-Bogie Vehicle with Dynamic Motions (Ryunosuke Takahashi, Ikuo Mizuuchi, Kimikage Kanno, and Kenta Hiyoshi)
Actively Variable Transmission Robotic Leg (Giorgio Valsecchi, Fabian Tischhauser, Jonas Junger, Yann Bernarnd, and Marco Hutter)
Tensegrity Approaches for Flexible Robots: A Review (Juan C. Guacheta-Alba, Angie J. Valencia-Casteneda, Max Suell Dutra, Mauricio Mauledoux, and Oscar F. Aviles)
Robotic Arm Development for a Quadruped Robot (Maria S. Lopes, António Paulo Moreira, Manuel F. Silva, and Filipe Santos)
Wrench Capability Analysis of a Serial-Parallel Hybrid Leg of a Disney's Bipedal Robot (Gustavo Queiroz Fernandes, Marina Baldissera de Souza, Leonardo Mejia Rincon, and Daniel Martins)

Planning and Control

Bipedal Walking Robot Control Using PMTG Architecture (Vladimir Danilov, Konstantin Klimov, Dmitrii Kapytov, and Sekou Diane)
Adaptive Suspension System Position-Force Control of Wheeled Wall-Pressed In-Pipe Climbing Robot (Sergey Jatsun and Andrei Malchikov)
Three-Rigid-Body Model Based NMPC for Bounding of a Quadruped with Two Spinal Joints (Songrui Huang, Wenhan Cai, and Mingguo Zhao)
Observer-Based Control Model Test in Biped Robot (João Carvalho and Mauricio Eiji Nakai)
Nonlinear Model Predictive Control and Reinforcement Learning for Capsule-Type Robot with an Opposing Spring (Armen Nunuparov and Nikita Syrykh)
Neural Control and Learning of a Gecko-Inspired Robot for Aerial Self-righting (Léonard Chanfreau, Worasuchad Haomachai, and Poramate Manoonpong)
Double Gradient Method: A New Optimization Method for the Trajectory Optimization Problem (Alam Rosato Macêdo, Ebrahim Samer El Youssef, and Marcus V. Americano da Costa)
Planar Motion Control of a Quadruped Robot (Gabriel Duarte Gonçalves Pedro, Thiago Boaventura Cunha, Pedro Américo Almeida Magalhães Júnior, and Gustavo Medeiros Freitas)
Exploring Behaviours for Social Robots (Joana Sá and João Silva Sequeira)
Impedance Control Analysis for Legged Locomotion in Oscillating Ground (Vivian S. Medeiros, Felix M. Escalante, Marcelo Becker, and Thiago Boaventura)

Robotic Navigation

A Cooperative Approach to Teleoperation Through Gestures for Multi-robot Systems (Dieisson Martinelli, Jonathan Cerbaro, Marco Antonio Simões Teixeira, Vivian Cremer Kalempa, Vitor de Assis Monteiro, and André Schneider de Oliveira)
NavPi: An Adaptive Local Path-Planning Pipeline for 3D Navigation in Difficult Terrain (Marvin Grosse Besselmann, Arne Rönnau, and Rüdiger Dillmann)
Indoor Localisation of Mobile Robots with Ultra Wideband Using Experimental TDOA (Ousmane Abdoulaye Oumar, Mohammad Osman Tokhi, Tariq P. Sattar, Sidik Haroune Ibrahim, and Shwan Dyllon)
Formation Tracking Control of Multiple UAVs in the Presence of Communication Faults (Thamiris Costa, Ebrahim El'Youssef, and Edson De Pieri)
Human-Robot Autonomous System: An Interactive Architecture (Giovane Moreira, Anderson Leite, José Díaz-Amado, Cleia Libarino, and Joao Marques)
Comparative Analysis of LiDAR SLAM Techniques in Simulated Environments in ROS Gazebo (Marcelo E. C. de Carvalho, Tiago T. Ribeiro, and Andre G. S. Conceicao)
Instance Segmentation to Path Planning in a Simulated Industrial Environment (Danilo G. Schneider and Marcelo R. Stemmer)

Robotics and Neurotechnologies for Healthcare Improvements

FPGA-Based Emulation of a Muscle Stretch Reflex on an Electric Series Elastic Actuator (Oleksandr Sivak, Patrick Vonwirth, and Karsten Berns)
Neural Multimodal Control for Versatile Motion Generation and Continuous Transitions of a Lower-Limb Exoskeleton (Chaicharn Akkawutvanich, Natchaya Sricom, and Poramate Manoonpong)
Positional Health Assessment of Collaborative Robots Based on Long Short-Term Memory Auto-Encoder (LSTMAE) Network (Naimul Hasan, Louie Webb, Malarvizhi Kaniappan Chinanthai, Mohammad Al-Amin Hossain, Erkan Caner Ozkat, Mohammad Osman Tokhi, and Bugra Alkan)
Human-Exoskeleton Interaction During Knee Flexion-Extension Under Different Configurations of Robot Assistance-Resistance (Denis Mosconi, Yecid Moreno, and Adriano Siqueira)
Walking on Virtual Surface Patterns Changes Muscular Activity (Maximilian Stasica, Kai Streiling, Celine Honekamp, Alexandra Schneider, Alexandros Exarchos, Saskia Henschke, Suat Pirincoglu, Melike Polat, Neele Scholz, Carina Stähler, Emma Syring, Loes van Dam, and André Seyfarth)
Towards a Gait Planning Training Strategy Using Lokomat (Thayse Saraiva de Albuquerque, Lucas José da Costa, Ericka Raiane da Silva, Geovana Kelly Lima Rocha, André Felipe Oliveira de Azevedo Dantas, Caroline do Espírito Santo, and Denis Delisle-Rodriguez)
Assistive Walker Which Stabilizes the User's Posture and Prevent Falls Through Abnormal Gait Pattern Recognition (Daisuke Chugo, Yao Li, Satoshi Muramatsu, Sho Yokota, Jin-Hua She, and Hiroshi Hashimoto)

Simulation and Digital-Twins in Robotic Applications

A Solution Architecture for Energy Monitoring and Visualisation in Smart Factories with Robotic Automation (Louie Webb, Mohammad Osman Tokhi, and Bugra Alkan)
Locosim: An Open-Source Cross-Platform Robotics Framework (Michele Focchi, Francesco Roscia, and Claudio Semini)
A Locomotion Algorithm for an Apodal Robot to Climb and Descend Steps (Vitor Hugo Prado Gomes, Thiago de Deus Lima Rocha, Carla Cavalcante Koike, Dianne Magalhães Viana, and Jones Yudi Mori Alves da Silva)

Author Index

Contributors

Chaicharn Akkawutvanich Vidyasirimedhi Institute of Science and Technology (VISTEC), Rayong, Thailand Bugra Alkan School of Engineering, London South Bank University, London, UK Oscar F. Aviles Davinci Research Group, Mechatronics Engineering Department, Universidad Militar Nueva Granada, Bogota, Colombia Marcelo Becker University of São Paulo, Sao Carlos, SP, Brazil Yann Bernarnd RSL, ETH Zürich, Zürich, Switzerland Karsten Berns Department of Computer Science, RPTU Kaiserslautern-Landau, Kaiserslautern, Germany Marvin Grosse Besselmann Department of Interactive Diagnosis and Service Systems (IDS), FZI Research Center for Information Technology, Karlsruhe, Germany Thiago Boaventura University of São Paulo, Sao Carlos, SP, Brazil Wenhan Cai Department of Automation, Tsinghua University, Beijing, China Andrea Piga Carboni Federal University of Santa Catarina (UFSC), Campus Joinville, R. Dona Francisca, Distrito Industrial, Joinville, SC, Brazil João Carvalho Federal Technological University of Paraná, Apucarana, Paraná, Brazil Jonathan Cerbaro Federal University of Technology - Paraná (UTFPR), Curitiba, Paraná, Brazil Léonard Chanfreau Institut Polytechnique de Paris - ENSTA Paris, Palaiseau Cedex, France Malarvizhi Kaniappan Chinanthai Computer Science and Engineering, University of Westminster, London, UK Daisuke Chugo Kwansei Gakuin University, Sanda, Hyogo, Japan Andre G. S. Conceicao Department of Electrical and Computer Engineering, Federal University of Bahia, Salvador, BA, Brazil Thamiris Costa Federal University of Santa Catarina, Automation and Systems Department, Florianópolis SC, Brazil Thiago Boaventura Cunha University of São Paulo, Sao Carlos, SP, Brazil Lucas José da Costa Postgraduate Program in Neuroengineering, Edmond and Lily Safra International Institute of Neurosciences, Santos Dumont Institute, Macaiba, Brazil Marcus V. Americano da Costa Universidade Federal da Bahia, Bahia, Brazil


Ericka Raiane da Silva Postgraduate Program in Neuroengineering, Edmond and Lily Safra International Institute of Neurosciences, Santos Dumont Institute, Macaiba, Brazil Vladimir Danilov Institute of Control Problems of Russian Academy of Sciences, Moscow, Russia; Voltbro LCC, Moscow, Russia Thayse Saraiva de Albuquerque Postgraduate Program in Neuroengineering, Edmond and Lily Safra International Institute of Neurosciences, Santos Dumont Institute, Macaiba, Brazil Vitor de Assis Monteiro Universidade do Estado de Santa Catarina (UDESC), São Bento do Sul-SC, Brazil André Felipe Oliveira de Azevedo Dantas Postgraduate Program in Neuroengineering, Edmond and Lily Safra International Institute of Neurosciences, Santos Dumont Institute, Macaiba, Brazil Marcelo E. C. de Carvalho Postgraduate Program in Electrical Engineering, Federal University of Bahia, Salvador, Brazil André Schneider de Oliveira Federal University of Technology - Paraná (UTFPR), Curitiba, Paraná, Brazil Edson De Pieri Federal University of Santa Catarina, Automation and Systems Department, Florianópolis SC, Brazil Marina Baldissera de Souza Federal University of Santa Catarina (UFSC), Centro Tecnológico, Campus Universitário Reitor João David Ferreira Lima—Trindade, Florianópolis, SC, Brazil Denis Delisle-Rodriguez Postgraduate Program in Neuroengineering, Edmond and Lily Safra International Institute of Neurosciences, Santos Dumont Institute, Macaiba, Brazil Sekou Diane Institute of Control Problems of Russian Academy of Sciences, Moscow, Russia José Díaz-Amado Instituto Federal de Educação, Ciência e Tecnologia da Bahia, Zabelê, Vitória da Conquista - BA, Brazil Rüdiger Dillmann Department of Interactive Diagnosis and Service Systems (IDS), FZI Research Center for Information Technology, Karlsruhe, Germany Caroline do Espírito Santo Postgraduate Program in Neuroengineering, Edmond and Lily Safra International Institute of Neurosciences, Santos Dumont Institute, Macaiba, Brazil Max Suell Dutra Robotics Laboratory, Mechanical Engineering Department, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil Shwan Dyllon School of Engineering, London South Bank University, London, UK


Ebrahim El’Youssef Federal University of Santa Catarina, Automation and Control Engineering and Computation Department, Blumenau, SC, Brazil Felix M. Escalante University of São Paulo, Sao Carlos, SP, Brazil Alexandros Exarchos Lauflabor Locomotion Laboratory, Institute für Sportwissenschaft, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany Gustavo Queiroz Fernandes Centro Tecnológico, Campus Universitário Reitor João David Ferreira Lima—Trindade, Federal University of Santa Catarina (UFSC), Florianópolis, SC, Brazil Michele Focchi Dynamic Legged Systems (DLS), Istituto Italiano di Tecnologia (IIT), Genoa, Italy; Department of Information Engineering and Computer Science (DISI), University of Trento, Trento, Italy Gustavo Medeiros Freitas Federal University of Minas Gerais, Belo Horizonte, MG, Brazil Vitor Hugo Prado Gomes Campus Universitário Darcy Ribeiro, Universidade de Brasília, Brasilia, DF, Brazil Juan C. Guacheta-Alba Robotics Laboratory, Mechanical Engineering Department, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil Worasuchad Haomachai College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China Naimul Hasan School of Engineering, London South Bank University, London, UK Hiroshi Hashimoto Advanced Institute of Industrial Technology, Shinagawa, Tokyo, Japan Kenji Hashimoto Waseda University, Kitakyushu, Fukuoka, Japan Saskia Henschke Lauflabor Locomotion Laboratory, Institute für Sportwissenschaft, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany Kenta Hiyoshi Meidensha Corp., Kiyosu-shi, Aichi, Japan Celine Honekamp Sensorimotor Control and Learning, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany Mohammad Al-Amin Hossain School of Engineering, London South Bank University, London, UK Songrui Huang Department of Automation, Tsinghua University, Beijing, China Marco Hutter RSL, ETH Zürich, Zürich, Switzerland Sidik Haroune Ibrahim ARCEP (Autorité de Régulation des Télécommunications Electroniques et des Poste) – Chad, Paris, France


Yuta Ishizawa Meiji University, Kawasaki, Kanagawa, Japan Sergey Jatsun Department of Mechanics, Mechatronics and Robotics, Southwest State University, Kursk, Russia Jonas Junger RSL, ETH Zürich, Zürich, Switzerland Pedro Américo Almeida Magalhães Júnior Pontific Catholic University of Minas Gerais, Belo Horizonte, MG, Brazil Vivian Cremer Kalempa Universidade do Estado de Santa Catarina (UDESC), São Bento do Sul-SC, Brazil Kimikage Kanno Department of Mechanical Systems Engineering, Tokyo University of Agriculture and Technology, Naka-cho, Koganei-city, Tokyo, Japan Dmitrii Kapytov Robotics Laboratory, Institute of Mechanics Lomonosov Moscow State University, Moscow, Russia; Voltbro LCC, Moscow, Russia Konstantin Klimov Robotics Laboratory, Institute of Mechanics Lomonosov Moscow State University, Moscow, Russia; Voltbro LCC, Moscow, Russia Carla Cavalcante Koike Campus Universitário Darcy Ribeiro, Universidade de Brasília, Brasilia, DF, Brazil Luís Paulo Laus Federal University of Technology—Paraná (UTFPR), DAMEC, Curitiba, PR, Brazil Anderson Leite Instituto Federal de Educação, Ciência e Tecnologia da Bahia, Zabelê, Vitória da Conquista - BA, Brazil Yao Li Kwansei Gakuin University, Sanda, Hyogo, Japan Cleia Libarino Instituto Federal de Educação, Ciência e Tecnologia da Bahia, Zabelê, Vitória da Conquista - BA, Brazil Maria S. Lopes INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP—Faculty of Engineering, University of Porto, Porto, Portugal Alam Rosato Macêdo Universidade Federal da Bahia, Bahia, Brazil Andrei Malchikov Department of Mechanics, Mechatronics and Robotics, Southwest State University, Kursk, Russia Poramate Manoonpong Embodied AI and Neurorobotics Lab, SDU Biorobotics, The Mærsk Mc-Kinney Møller Institute, University of Southern Denmark, Odense, Denmark; Bio-inspired Robotics and Neural Engineering Lab, School of Information Science and Technology, Vidyasirimedhi Institute of Science and Technology (VISTEC), Rayong, Thailand; SDU Biorobotics, University of Southern Denmark, Odense, Denmark


Joao Marques Instituto Federal de Educação, Ciência e Tecnologia da Bahia, Zabelê, Vitória da Conquista - BA, Brazil Dieisson Martinelli Federal University of Technology - Paraná (UTFPR), Curitiba, Paraná, Brazil Daniel Martins Centro Tecnológico, Campus Universitário Reitor João David Ferreira Lima—Trindade, Federal University of Santa Catarina (UFSC), Florianópolis, SC, Brazil Mauricio Mauledoux Davinci Research Group, Mechatronics Engineering Department, Universidad Militar Nueva Granada, Bogota, Colombia Vivian S. Medeiros University of São Paulo, Sao Carlos, SP, Brazil Ikuo Mizuuchi Department of Mechanical Systems Engineering, Tokyo University of Agriculture and Technology, Naka-cho, Koganei-city, Tokyo, Japan António Paulo Moreira INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP—Faculty of Engineering, University of Porto, Porto, Portugal Giovane Moreira Instituto Federal de Educação, Ciência e Tecnologia da Bahia, Zabelê, Vitória da Conquista - BA, Brazil Yecid Moreno Mechanical Engineering Departiment, University of São Paulo, São Carlos, Brazil Denis Mosconi Industry Department, Federal Institute of São Paulo, Catanduva, Brazil; Mechanical Engineering Departiment, University of São Paulo, São Carlos, Brazil Satoshi Muramatsu Tokai University. Hiratsuka, Kanagawa, Japan Mauricio Eiji Nakai Federal Technological University of Paraná, Apucarana, Paraná, Brazil Armen Nunuparov Institute for Problems in Mechanics of the Russian Academy of Sciences, Moscow, Russia Hayato Ota Meiji University, Kawasaki, Kanagawa, Japan Ousmane Abdoulaye Oumar School of Engineering, London South Bank University, London, UK Erkan Caner Ozkat Department of Mechanical Engineering, Faculty of Engineering and Architecture, Recep Tayyip Erdogan University, Rize, Turkey Gabriel Duarte Gonçalves Pedro University of São Paulo, Sao Carlos, SP, Brazil Suat Pirincoglu Lauflabor Locomotion Laboratory, Institute für Sportwissenschaft, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany Melike Polat Lauflabor Locomotion Laboratory, Institute für Sportwissenschaft, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany


Tiago T. Ribeiro Department of Electrical and Computer Engineering, Federal University of Bahia, Salvador, BA, Brazil Leonardo Mejia Rincon Campus Blumenau, Federal University of Santa Catarina (UFSC), Blumenau, SC, Brazil Geovana Kelly Lima Rocha Postgraduate Program in Neuroengineering, Edmond and Lily Safra International Institute of Neurosciences, Santos Dumont Institute, Macaiba, Brazil Thiago de Deus Lima Rocha Campus Universitário Darcy Ribeiro, Universidade de Brasília, Brasilia, DF, Brazil Arne Rönnau Department of Interactive Diagnosis and Service Systems (IDS), FZI Research Center for Information Technology, Karlsruhe, Germany Francesco Roscia Dynamic Legged Systems (DLS), Istituto Italiano di Tecnologia (IIT), Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genoa, Genova, Italy Joana Sá Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal Filipe Santos INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal Tariq P. Sattar School of Engineering, London South Bank University, London, UK Alexandra Schneider Lauflabor Locomotion Laboratory, Institute für Sportwissenschaft, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany Danilo G. Schneider Universidade Federal de Santa Catarina, Florianópolis, Brazil Neele Scholz Lauflabor Locomotion Laboratory, Institute für Sportwissenschaft, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany Claudio Semini Dynamic Legged Systems (DLS), Istituto Italiano di Tecnologia (IIT), Genoa, Italy João Silva Sequeira Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal André Seyfarth Lauflabor Locomotion Laboratory, Institute für Sportwissenschaft, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany Jin-Hua She Tokyo University of Technology, Hachioji, Tokyo, Japan Jones Yudi Mori Alves da Silva Campus Universitário Darcy Ribeiro, Universidade de Brasília, Brasilia, DF, Brazil Manuel F. Silva INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; ISEP—Instituto Superior de Engenharia do Porto, Instituto Politécnico do Porto, Rua Dr. António Bernardino de Almeida, Porto, Portugal


Adriano Siqueira Mechanical Engineering Departiment, University of São Paulo, São Carlos, Brazil Oleksandr Sivak Department of Computer Science, RPTU Kaiserslautern-Landau, Kaiserslautern, Germany Natchaya Sricom Vidyasirimedhi Institute of Science and Technology (VISTEC), Rayong, Thailand Carina Stähler Lauflabor Locomotion Laboratory, Institute für Sportwissenschaft, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany Maximilian Stasica Lauflabor Locomotion Laboratory, Institute für Sportwissenschaft, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany Marcelo R. Stemmer Universidade Federal de Santa Catarina, Florianópolis, Brazil Kai Streiling Sensorimotor Control and Learning, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany Taisei Suzuki Meiji University, Kawasaki, Kanagawa, Japan Emma Syring Lauflabor Locomotion Laboratory, Institute für Sportwissenschaft, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany Nikita Syrykh Institute for Problems in Mechanics of the Russian Academy of Sciences, Moscow, Russia Ryunosuke Takahashi Department of Mechanical Systems Engineering, Tokyo University of Agriculture and Technology, Naka-cho, Koganei-city, Tokyo, Japan Hiroki Takenaka Meiji University, Kawasaki, Kanagawa, Japan Takayuki Tanaka Meiji University, Kawasaki, Kanagawa, Japan Marco Antonio Simões Teixeira Graduate Program in Informatics (PPGIa) Pontifícia Universidade Católica do Paraná (PUCPR), Curitiba, Brazil Fabian Tischhauser RSL, ETH Zürich, Zürich, Switzerland Mohammad Osman Tokhi School of Engineering, London South Bank University, London, UK Dianne Magalhães Viana Campus Universitário Darcy Ribeiro, Universidade de Brasília, Brasilia, DF, Brazil Angie J. Valencia-Casteneda Robotics Laboratory, Mechanical Engineering Department, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil Giorgio Valsecchi RSL, ETH Zürich, Zürich, Switzerland Loes van Dam Sensorimotor Control and Learning, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany


Patrick Vonwirth Department of Computer Science, RPTU Kaiserslautern-Landau, Kaiserslautern, Germany Louie Webb School of Engineering, London South Bank University, London, UK Sho Yokota Toyo University, Kawagoe, Saitama, Japan Ebrahim Samer El Youssef Universidade Federal de Santa Catarina, Santa Catarina, Brazil Mingguo Zhao Beijing Innovation Center for Future Chips, Tsinghua University, Beijing, China

Abbreviations

2D 3D AAC ABNT ABV AGV AHN AI A-LOAM AMCL AMR ANN AOA API ARNL ATI AUV AVT BBO BC BCF BCI BF BiLSTM BLDC BMI BPC BSI BWSS CA CAD CAN CAPES CARL CATIA CB CFD CHS CM CMfg CMPC

Two-Dimensional Three-Dimensional All Aluminium Conductor Brazilian Association of Technical Norms Alcohol by Volume Autonomous/Automated Guided Vehicles Artificial Hormone Network Artificial Intelligence Advanced Implementation of LOAM Adaptive Monte Carlo Localization Autonomous Mobile Robot Artificial Neural Network Angle of Arrival Application Programming Interface Advanced Robotics Navigation and Localization Aerospace Technology Institute Autonomous Underwater Vehicle Actively Variable Transmission Black-Box Optimization Base Controller Musculus Biceps Femoris/Base Controller Fixed Brain–Computer Interface Biceps Femoris Bidirectional Long- and Short-Term Memory Brushless Direct Current Motor Brain–Machine Interface Body Posture Control British Standards Institution Body Weight Support System Completely Assisted/Cabo de Alumínio Computer-Aided Design Controller Area Network Coordenação de Aperfeiçoamento de Pessoal de Nível Superior Compliant Robotic Leg Computer-Aided Three-Dimensional Interactive Application Computational Brain Computational Fluid Dynamic Central Hormone System Conditional Monitoring/Centre of Mass Cloud Manufacturing Convex MPC


CNN COCO CoG (COG) CoM (COM) COT CP CPG CPU CWT DARPA DBSCAN DC DCAN DCT DDA DFG DFT DGM DH DHS DIBRIS DL DLR DLS DMF DoF (DOF) DRTLS DT DTW EA ECoG EEG EKF ELF EMA EMG ERD ESA ESG ESRIC F&E FA FAPERJ FCC FDM

Convolution Neural Network Common Objects in Context Centre of Gravity Centre of Mass Cost of Transport Convertible Plane Central Pattern Generator Central Processing Unit Continuous Wavelet Transform Defense Advanced Research Projects Agency Density-Based Spatial Clustering of Applications with Noise Direct Current Dense Co-Attention Symmetric Network Discrete Cosine Transform Digital Differential Analyzer German Research Foundation Discrete Fourier Transform Double Gradient Method Denavit–Hartenberg Distributed Hormone System Department of Informatics, Bioengineering, Robotics and Systems Engineering Deep Learning German Aerospace Center Dynamic Legged Systems Decreasing Rate Modifying Filter Degree of Freedom Dual Regularized Total Least Squares Method Digital Twin Dynamic Time Warping Extension Assisted ElectroCorticoGram ElectroEncephaloGram Extended Kalman Filter Euler–Lagrange Formalism Exponential Moving Average ElectroMyoGraphy Event-Related Desynchronization European Space Agency Environment, Social and Governance European Space Resources Innovation Centre Flexion-Extension Flexion Assisted Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro Federal Communications Commission Finite Difference Method


FEM FEMD FFT F-LOAM FOV FP7 FPGA GA GAS GIA GLM GPS GUI HAA HFE His HJ HMD HMI HMM HRI HSE HUAUV ICP IDS IEBI IEEE IIoT IIT IK IMU INCT IoT IP IPOPT IR ISD ISI-SM ISM ISO JUKF KFE kNN (KNN) KPI LAR LDA

Finite Element Method Forward Euler Discretization Method Fast Fourier Transform Fast LiDAR Odometry and Mapping Field of View Framework Program 7 Field-Programmable Gate Array Genetic Algorithm Musculus Hastrocnemius Lateralis Gravito-Inertial Acceleration Musculus Gluteus Maximus Global Positioning System Graphical User Interface Hip Abduction/Adduction Hip Flexion/Extension Health Indicators Hamilton-Jacobi Head-Mounted Display Human–Machine Interface Hidden Markov Model Human–Robot Interaction Health, Safety and Environment Hybrid Unmanned Aerial-Underwater Vehicle Iterative Closest Point Interactive Diagnosis and Service Systems Inter-Eyeblink Interval Institute of Electrical and Electronics Engineers Industrial Internet of Things Istituto Italiano di Tecnologia/ Italian Institute of Technology Inverse Kinematics Inertial Measurement Unit National Institute of Science and Technology Internet of Things Intellectual Property Interior Point OPTimizer Infrared Santos Dumont Institute Innovation Institute for Manufacturing Systems Industrial, Scientific, And Medical International Organization for Standardization Joint state, input and parameter Unscented Kalman Filter Knee Flexion/Extension k-Nearest Neighbours Key Performance Indicator Laboratory of Applied Robotics Linear Discriminant Analysis


LEADS LED LfD LiDAR Li-Ion Li-Po LMAS LMI LOAM LOF LOS LQR LRS LRST LSTM LSTMAE LU MAE MC MD MI MIT MM MPC MQTT MR MSIS MVC NASA NavPi NDT NLOS NMPC NTRT OC ODE OGM OPC-UA OPDL OSQP OSRF OUAV PC PCB PCL PD


Lab for Engineering Analysis, Design and Simulation Light-Emitting Diode Learning-from-Demonstration Light Detection and Ranging Lithium Ion Lithium Ion Polymer Line-less Mobile Assembly System Linear Matrix Inequality Lidar Odometry and Mapping Local Outlier Factor Line-of-Sight Linear–Quadratic Regulator Laser Range Scanner Low-Reaction Swing Trajectory Long Short-Term Memory Long Short-Term Memory Auto-Encoder Lower–Upper Mean Absolute Error Microcontroller Momentum Distribution Motor Imagery Massachusetts Institute of Technology Multidimensional Map Model Predictive Control Message Queuing Telemetry Transport Mixed Reality Mechanical Scanning Imaging Sonar Maximum Voluntary Contraction National Aeronautics and Space Administration Navigation Pipeline Non-destructive Testing Non-Line-of-Sight Nonlinear Model Predictive Control NASA Tensegrity Robotics Toolkit Optimal Control Open Dynamics Engine/Ordinary Differential Equation Occupancy Grid Map Open Platform Communications—Unified Architecture Overhead Power Distribution Line Operator Splitting Quadratic Program Open-Source Robotics Foundation Octocopter Unmanned Aerial Vehicle Personal Computer Printed Circuit Board Point Cloud Library Proportional and Derivative


PGT PHM PID PLA PMTG POM PPO PSA PTP PVC PWM QoL QP QR QUAV RAF RAM RAMP RandNet RBF RCF RCNN RDA RDC ReLU RFBR RFID RF-MPC RGB RGB-D RL RM RMS RMSE RNN RoM ROS ROV RPAS RPN RSS RTI RTLS RVIZ SA SAR

Planetary Gear Train Prognostics and Health Management Proportional-Integral-Derivative PolyLactic Acid Policy Modulating Trajectory Generator PolyOxyMethylene Proximal Policy Optimization Pressure Sensitive Adhesive Point-To-Point Polyvinyl Chloride Pulse Width Modulation Quality of Life Quadratic Programming Quick-Response Quadrotor Unmanned Aerial Vehicle RobotAtFactory Random Access Memory Reaction-Aware Motion Planning Randomized Neural Network Radial Basis Function Musculus Rectus Femoris Recurrent Convolution Neural Network Hard-to-Reach Space Rope-Driven Robot Rectified Linear Unit Russian Foundation for Basic Research Radio-Frequency Identification Representation-Free MPC Red–Green–Blue Red–Green–Blue-Depth Reinforcement Learning Rectus Femoris Root mean square Root Mean Squared Error Recurrent Neural Network Range of Motion Robot Operating System Remotely Operated Vehicle Remotely Piloted Aircraft System Region Proposal Network Received Signal Strength Real-Time Iteration Real-Time Locating System ROS Visualization Self-Attention Search and Rescue


SCC SCI SDK SDU SEA SEBR sEMG SENACYT SLAM SMA SOC SOL SQP SRB SRN SSD SSM ST STFT TDOA TF TFL TIB TOA UAV(s) UDP-IP UFSC UK UKF URDF USB UTEC UUV UWB VAS VHDL VL VM VNC VOR VR VRAM VTOL WBC WDr WHO

Stair-Climbing Chart Spinal Cord Injury Software Development Kit University of Southern Denmark Series Elastic Actuator Spontaneous Eyeblink Rate Surface ElectroMyoGraphy Secretaría Nacional de Ciencia, Tecnología e Innovación de Panama Simultaneous Localization and Mapping Shape Memory Alloy State of Charge Musculus Soleus Sequential Quadratic Programming Single-Rigid-Body Single Recurrent Neuron Solid-State Drive Swerve Steering Mechanism Semitendinosus Short-Time Fourier Transform Time Difference of Arrival Time–Frequency Musculus Tensor Fasciae Latae Musculus Tibialis Anterior Time of Arrival Unmanned Aerial Vehicle(s) User Datagram Protocol Internet Protocol Federal University of Santa Catarina United Kingdom Unscented Kalman Filter Unified Robot Description Format Universal Serial Bus Technological University of Uruguay Unmanned Underwater Vehicle Ultra Wide Band Musculus Vastus Medialis VHSIC Hardware Description Language Vastus Lateralis Vastus Medialis Virtual Network Computing Vestibulo-Ocular Reflex Virtual Reality Video Random Access Memory Vertical Take-off and Landing Whole-Body Control / Whole-Body Controller Wheel Drive World Health Organization


WPM WPMDr Xacro XML ZMP

Wall-Press Mechanism Wall-Pressed Mechanism Actuator XML-based scripting language Extensible Markup Language Zero-Moment Point


Multibody Systems and Mechanism Design in Robotics

Efficiency Optimization of the Gear Reducer of an Overhead Power Line Inspection Robot

Marina Baldissera de Souza1(B), Gustavo Queiroz Fernandes1, Luís Paulo Laus2, Andrea Piga Carboni3, and Daniel Martins1

1 Federal University of Santa Catarina (UFSC), Centro Tecnológico, Campus Universitário Reitor João David Ferreira Lima—Trindade, Florianópolis, SC CEP: 88.040-900, Brazil [email protected]
2 Federal University of Technology—Paraná (UTFPR), DAMEC, Av. 7 de Setembro, 3165, Curitiba, PR CEP: 80.230-901, Brazil
3 Federal University of Santa Catarina (UFSC), Campus Joinville, R. Dona Francisca, 8300—Bloco U, Distrito Industrial, Joinville, SC CEP: 89.219-600, Brazil

Abstract. Electrical utility companies regularly inspect their power line networks to guarantee efficiency and reliability in energy transmission and distribution. However, the power line inspection processes are expensive and time-demanding, requiring robotized solutions to become feasible. A key issue in developing power line inspection robots is their energy efficiency, as they are required to operate for as long as possible. This work aims to reduce an inspection robot's energy consumption by optimizing the mechanical efficiency of its traction motor's gear reducer. The robot's gear reducer is a planetary gear train whose efficiency is modeled via Davies' method. The planetary gear train efficiency is optimized considering volume, size, and allowable stress constraints. The impact of the performance of the original and the optimized gearboxes on the batteries' final SOC is evaluated, and it is verified that the gearbox efficiency only slightly impacts the robot's consumption. Keywords: Inspection robot · Overhead power lines · Energy consumption · Gear train efficiency · Efficiency optimization

1 Introduction

Electrical utility companies must regularly inspect their overhead power transmission lines (OPTLs) and distribution lines (OPDLs) to reduce the occurrence of energy supply interruptions. Since the inspection processes are expensive and time-demanding, robotized solutions that capture and process digital images of the energy network are being developed to automate the inspection of power lines. Due to the limited autonomy of flying devices and the existence of regulations for their use near buildings and people, the leading solution for OPDL inspection available in the literature is robots equipped with wheels that roll over the cables. These robots maintain contact with the power line when moving along the conductors and when transposing the electrical components that act as obstacles to their operation. Some examples are the POLIBOT [15], the Robonwire [11], the Cable-Crawler [4], the Expliner robot [7], as well as the solution proposed in [10]. These inspection robots are usually powered by batteries that need regular recharging or replacement, so their power supply is the main factor that limits the inspection distance [2]. Therefore, quantifying the energy an inspection robot consumes while moving on the cables is crucial for the OPDL inspection systems under development. However, although it is a mandatory design requirement for robot feasibility, to the best of the authors' knowledge there are no studies on the mechanical efficiency of the transmission system of OPDL robots and how it relates to the robots' energy consumption. The developers of the POLIBOT [15] only mention that the robot consumes less power than other models because it is lighter, without quantifying its energy consumption. Jalal et al. [10] only describe the consumption of the motor that moves one of the robot's arms, without describing the robot consumption as a whole. The studies that better deal with calculating mobile robots' energy consumption are not related to power line inspection [6,17,19]. To address this gap in the literature, this study assesses the efficiency optimization of the traction motor's gearbox of an OPDL inspection robot that moves along the conductors and rolls over the obstacles of the energy network. The robot's gearbox efficiency is assessed via the Davies method [14] for two planetary gear trains (PGTs): a preliminary (original) PGT, used as an optimization starting point, and the final optimized PGT. The optimization is performed through a genetic algorithm. This work also calculates the energy consumed by the robot with both PGTs, considering its velocity profile, the effect of its position on the cable, and, consequently, the slope it must climb during its displacement. The energy consumed by sensors, micro-controllers, cameras, and other electronic devices is assumed as constant. In the end, a discussion on the efficiency of both PGTs and the reduction of energy consumption with the optimized gear train is made.

2 Inspection Robot Description

Fig. 1. a CAD model of the overhead power distribution inspection robot and b CAD and c schematic representations of the PGT.

As represented in Fig. 1a, the power line inspection robot is composed of five systems, indicated by letters A to E. Letter A identifies the wheel, letter B indicates the transmission system, letter C shows the pendulum systems, letter D identifies the camera's positioning and orienting system, and letter E indicates the robot recharging system, which consists of a set of photovoltaic cells. The robot is driven by a DC motor with a nominal speed of 279 rpm [21]. The transmission system comprises two subsystems: the gear reducer, to be optimized in this work, and a belt and pulley system. The gear reducer is connected to an L-series timing belt and pulleys to transfer the motion from the DC motor to the wheel. The system of pulleys and belt accounts for a 1:6 reduction ratio. L-series timing belts and pulleys have an efficiency of 97% [3]. Since they are standardized and cannot be changed, the only option for improving the robot's transmission system efficiency is to optimize its gear reducer, which is a PGT, shown in Fig. 1b. The PGT has four links: the sun gear is link 1, the planet gear is link 2, the carrier is link 3, and the ring gear and frame (both fixed) correspond to link 0.
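As a quick sanity check (not a figure stated by the authors), combining the nominal motor speed with the original 1:10 planetary reduction reported in Sect. 4, the 1:6 belt stage, and the inner disc radius r_disc = 0.0525 m given in Sect. 5 yields the nominal cruising speed at the wheel:

v = \frac{\omega \, r_{disc}}{i_0 \, i_{PGT}} = \frac{(279 \cdot 2\pi/60)\ \mathrm{rad/s} \times 0.0525\ \mathrm{m}}{6 \times 10} \approx 0.026\ \mathrm{m/s}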

3 PGT Efficiency

This section presents an analytical equation for η_PGT, the efficiency of the reducer of the DC motor shown in Fig. 1b, generated via Davies' method. A detailed explanation of how to compute the mechanical efficiency of PGTs through Davies' method is found in [14]. For brevity, the procedure to generate η_PGT is not detailed in this paper; the step-by-step derivation is available in [25]. Figure 1c shows the schematic representation of the PGT, where r_1 is the pitch radius of the sun gear and r_2 is the pitch radius of the planet gears. In Fig. 1c, five couplings are identified: three revolute couplings (a, b and c) and two gear pairs (d and e). The DC motor is connected to coupling a, and coupling c is the reducer output, which provides speed and torque for the robot displacement. Thus, the power the motor provides enters through coupling a, and the power effectively consumed by the robot leaves through coupling c. Two loss sources, both independent of speed, are considered in this work: meshing friction at the gear pairs and load-dependent bushing losses at the revolute pairs. Assuming that power losses are not high enough to change the power flow direction, the efficiency of the PGT, η_PGT, is given by [25]:

\eta_{PGT} = \frac{r_1 \, (2 r_2 + \zeta_d - \zeta_e)\,(r_1 + r_2 - f r_b - f r_c)}{(r_0 + r_1)\,(r_1 - \zeta_d + f r_a)\,(r_2 - \zeta_e + f r_b)} \quad (1)


where r_0 = r_1 + 2r_2 corresponds to the ring gear's pitch radius, r_a, r_b and r_c are the internal radii of the bushings at couplings a, b and c, respectively, and f is the bushing friction coefficient. In this paper, the gear reducer's bushings are assumed to be lubricated with grease, leading to a friction coefficient of f = 0.15 [27]. The factors ζ_d and ζ_e are the Coulomb friction factors of gear pairs d and e, respectively. Introduced in [14], the Coulomb friction factor quantifies the losses that are independent of speed magnitude in gear pairs. The Coulomb friction factor can be expressed in terms of the efficiency of a gear coupling and the pitch radii of the driver and follower gears. For gear pair d, the sun gear 1 drives the planet gear 2, so ζ_d = r_1 r_2 (1 − η_ext)/(r_1 + η_ext r_2). For gear pair e, the planet gear 2 drives the ring gear 0, leading to ζ_e = r_2 r_0 (1 − η_int)/(r_2 − η_int r_0). η_ext and η_int are the ordinary efficiencies of a gear pair with external and internal meshing, respectively. In this work, η_ext = 98% and η_int = 99% [14]. Equation (1) quantifies the PGT efficiency considering a single planet gear, although it can be used to compute the efficiency of PGTs with multiple planets. This simplification is possible under the hypothesis that the ordinary efficiency of a gear pair is load-independent [14]. The efficiency of Eq. (1) is the cost function of an optimization problem that aims to determine the dimensions of the gears that make the gear train as efficient as possible.
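As a hedged illustration of how Eq. (1) can be evaluated numerically, the Python sketch below implements the efficiency expression and the Coulomb friction factors defined above. It is not the authors' code: the bushing radii r_a, r_b, and r_c are placeholder values, since their actual magnitudes are not reported in this excerpt, so the sketch does not reproduce the 78.81% figure quoted in Sect. 4 exactly.

# Minimal sketch of Eq. (1): PGT efficiency with meshing and bushing losses.
# Assumption: the bushing radii r_a, r_b, r_c are illustrative placeholders.

def pgt_efficiency(m, z1, z2, r_a, r_b, r_c,
                   f=0.15, eta_ext=0.98, eta_int=0.99):
    """Efficiency of the planetary gear train per Eq. (1).

    m      : gear module [mm]
    z1, z2 : number of teeth of the sun and planet gears
    r_a..c : internal radii of the bushings at couplings a, b, c [mm]
    """
    r1 = m * z1 / 2.0          # sun gear pitch radius
    r2 = m * z2 / 2.0          # planet gear pitch radius
    r0 = r1 + 2.0 * r2         # ring gear pitch radius

    # Coulomb friction factors for the external (d) and internal (e) meshes
    zeta_d = r1 * r2 * (1.0 - eta_ext) / (r1 + eta_ext * r2)
    zeta_e = r2 * r0 * (1.0 - eta_int) / (r2 - eta_int * r0)

    num = r1 * (2.0 * r2 + zeta_d - zeta_e) * (r1 + r2 - f * r_b - f * r_c)
    den = (r0 + r1) * (r1 - zeta_d + f * r_a) * (r2 - zeta_e + f * r_b)
    return num / den

# Original gear set (m = 0.35 mm, z1 = 22, z2 = 88) with guessed bushing radii
print(pgt_efficiency(0.35, 22, 88, r_a=3.84, r_b=2.0, r_c=3.84))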

4 Efficiency Optimization

The cost function of the optimization problem is the gear reducer efficiency, given by Eq. (1). Due to the non-linearity of the cost function and to avoid differentiating η_PGT, the optimization problem is solved through a genetic algorithm (GA) in MATLAB [18]. The design variables are the gears' module m and z_1 and z_2, the number of teeth of the sun gear and the planet gear, respectively, as r_1 = m z_1/2, r_2 = m z_2/2, and r_0 = r_1 + 2r_2. The module m is a discrete design variable that assumes values between 0.1 mm and 2 mm, belonging to a standard set according to the ISO metric gearing standards [9]. It is imposed that z_1 and z_2 are integer design variables. The lower and upper bounds of the number of teeth are 18 and 300, respectively. The minimum number of teeth is set as 18 to avoid undercutting against a standard rack with a pressure angle of 20° [24]. The upper bound of 300 is due to the fact that the CAD software used to design the reducer imposes a limit of 300 teeth to generate standard gears automatically. Figure 2 shows the cost function given by Eq. (1) for 18 ≤ z_1, z_2 ≤ 300 and for some values of m (m = 0.1, 0.5, 1, and 2 mm). Note that the higher the module, the greater the efficiency. For m > 1, η_PGT changes only slightly. Besides, the ratio between z_1 and z_2 has an insignificant impact on η_PGT when compared to m. Thus, defining the design variable m that gives the best performance is more critical than the number of teeth.

Fig. 2. Cost function over the optimization domain for m = 0.1, 0.5, 1, 2 mm.

Originally, the gears composing the PGT have m = 0.35 mm, the sun gear has z_1 = 22 teeth, and the planet gear has z_2 = 88 teeth, leading to a speed ratio of i_PGT^original = 10. Applying these parameters in Eq. (1) gives an efficiency of η_PGT^original = 78.81%. The optimization will provide the PGT dimensions that maximize the efficiency, considering the constraint functions to which the cost function is subject. The constraint functions are related to the speed ratio, the size of the gear reducer, and the maximum stresses the teeth can withstand. The speed ratio i_PGT of the PGT must belong to the interval 10 ≤ i_PGT ≤ 14. The size constraints impose that the pitch radii of the gears are greater than or equal to their shafts' radii and that the gear reducer fits inside the housing. The DC motor output shaft has a diameter of 7.68 mm, so the pitch diameter d_1 of the PGT's sun gear must satisfy d_1 ≥ 7.68 mm. Finally, the shaft of the planet gears composing the PGT has a diameter of 4 mm, meaning that the planet gears' pitch diameter d_p must satisfy d_p ≥ 4 mm. The shafts' diameters were measured from the prototype's motor [21]. The housing is assumed to be a parallelepiped with a square face of side H = 150 mm and a base with e = 30 mm of depth, setting a maximum volume of V_max = H²e for the gear reducer. The gear reducer volume V_PGT is calculated by adding the volume of each individual gear: V_PGT = V_int + V_ext^s + k V_ext^p, where V_int is the volume of the ring gear, V_ext^s is the volume of the sun gear, V_ext^p is the volume of the planet gear, and k is the number of planet gears composing the PGT (assumed here as k = 3). The procedure to calculate V_ext^s, V_ext^p, and V_int is described in [28]. The remaining size constraint imposes that the diameter of the ring gear's external circle d_v is less than or equal to the housing height H: d_v − H ≤ 0.

The third constraint category is related to the stresses to which the gears' teeth are submitted. In this work, two kinds of stress are considered: bending stress at the root of the tooth (σ_b) and contact stress at the tooth surface, or pitting resistance (σ_c). The AGMA bending stress equation for external gears is given by

\sigma_b = \frac{W_t K_a K_m}{b\, m\, J\, K_v} K_S K_B K_I,

where W_t is the tangential force acting on the tooth flank, J is the bending strength geometry factor, K_a is the application factor, K_v is the dynamic factor, K_m is the load distribution factor, K_S is the size factor, K_B is the rim thickness factor, and K_I is the idler factor [1,24].
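The speed-ratio and size constraints above can be checked for a candidate (m, z_1, z_2) as in the sketch below. This is an illustrative simplification, not the paper's implementation: the PGT ratio uses the standard relation for a planetary train with fixed ring gear, sun input, and carrier output, i_PGT = 1 + z_0/z_1 (which reproduces the reported ratio of 10 for z_1 = 22, z_2 = 88), and the housing check uses the ring pitch diameter as a crude stand-in for the external circle diameter d_v and for the full volume computation of [28].

# Sketch of the speed-ratio and size constraints for a candidate (m, z1, z2).
# Simplifications: i_PGT = 1 + z0/z1 (fixed ring, sun input, carrier output);
# the housing check below is a rough proxy for the d_v and volume constraints.

def candidate_feasible(m, z1, z2, d_shaft_sun=7.68, d_shaft_planet=4.0, H=150.0):
    z0 = z1 + 2 * z2                      # ring gear teeth
    i_pgt = 1.0 + z0 / z1                 # speed ratio of the PGT
    d1 = m * z1                           # sun pitch diameter [mm]
    dp = m * z2                           # planet pitch diameter [mm]
    d0 = m * z0                           # ring pitch diameter [mm]

    ok_ratio = 10.0 <= i_pgt <= 14.0      # speed ratio constraint
    ok_sun = d1 >= d_shaft_sun            # sun gear fits on the motor shaft
    ok_planet = dp >= d_shaft_planet      # planet gear fits on its shaft
    ok_housing = d0 <= H                  # crude proxy for d_v <= H
    return ok_ratio and ok_sun and ok_planet and ok_housing

print(candidate_feasible(0.35, 22, 88))   # original design -> True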


AGMA does not have an official standard to calculate bending stresses in internal gears (σb,int), so an adaptation of the AGMA method for internal gears is adopted in this work: σb,int = (Wt Kf)/(b Y) [16], where Y is the form factor and Kf is the stress concentration factor. The peak of the bending stress σb must be less than the corrected allowable bending stress S'fb, calculated as S'fb = KL/(KT KR) Sfb, where KL is the life factor, KT is the temperature factor, KR is the reliability factor, and Sfb is the published AGMA bending-fatigue strength [24]. The safety factor (SFb) against bending failure is given by the ratio SFb = S'fb/σb [24].

The AGMA pitting resistance formula is given by σc = Cp √[(Wt Ka Km)/(b I d Kv) KS Cf], where d is the pitch diameter of the smaller of the two gears in mesh, I is a dimensionless surface geometry factor, Cp is an elastic coefficient, and Cf is the surface finish factor. Similarly to the bending stress, the pitting resistance σc must be less than the corrected allowable contact stress S'fc, given by S'fc = (CL CH)/(KT KR) Sfc, where CL is the surface-life factor, CH is the hardness ratio factor for pitting resistance, and Sfc is the published AGMA surface-fatigue strength. The safety factor (SFc) against surface failure is given by the ratio SFc = (S'fc/σc)² [1,24]. The stress constraint in the optimization problem is defined by the safety factors SFb and SFc, which must be greater than or equal to 1.6 [22]. The gears are made of carburized and case-hardened steel, so Sfb = 520 MPa and Sfc = 1300 MPa [24]. How the gear design factors are defined is not in the scope of this paper; a detailed explanation can be found in [3] and [24]. The following factors are independent of the size of the gears and their values do not change during the optimization: Ka = KS = KB = KI = Cf = KT = KR = CL = CH = 1, Km = 1.6, KL = 0.87, and Cp = 191 √MPa. The dynamic factor Kv depends on the pitch-line velocity, which depends on the gears' pitch radii [1]. Since the gear sizes change for each population during the optimization, Kv is recalculated at each evaluation of the objective function. The bending strength geometry factor J, the form factor Y, the stress concentration factor Kf, and the surface geometry factor I are also recalculated. J and Y depend on the number of teeth, which corresponds to two design variables of the optimization, while Kf is related to the module, the remaining design variable [3]. I is computed from the radii of curvature of the gear teeth, which also depend on the size of the gears [1].
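To make the safety-factor check concrete, a minimal sketch of the bending-stress branch is given below, assuming the reconstructed Norton-style form of the equations above; the tangential load, face width, and geometry-factor values are illustrative placeholders, not data from the prototype.

```python
# Sketch of the bending-stress constraint check; factor values follow the text,
# load and geometry values are illustrative placeholders.

KA, KS, KB, KI, KM = 1.0, 1.0, 1.0, 1.0, 1.6     # application, size, rim, idler, load-distribution
KL, KT, KR = 0.87, 1.0, 1.0                      # life, temperature, reliability factors
S_FB = 520.0                                     # published bending-fatigue strength [MPa]
SF_MIN = 1.6                                     # required safety factor

def bending_stress(wt, b, m, J, Kv):
    """sigma_b = (Wt*Ka*Km)/(b*m*J*Kv) * Ks*KB*KI, in MPa for Wt [N] and b, m [mm]."""
    return (wt * KA * KM) / (b * m * J * Kv) * KS * KB * KI

def bending_safety_factor(sigma_b):
    """SF_b = S'_fb / sigma_b, with S'_fb = KL/(KT*KR) * S_fb."""
    return (KL / (KT * KR) * S_FB) / sigma_b

sigma = bending_stress(wt=150.0, b=8.0, m=0.5, J=0.35, Kv=0.9)   # illustrative values
sf = bending_safety_factor(sigma)
print(f"sigma_b = {sigma:.0f} MPa, SF_b = {sf:.2f}, constraint satisfied: {sf >= SF_MIN}")
```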

5 Inspection Robot's Power Consumption

The power demanded to displace the robot on the conductor depends on the traction force required to execute the motion, computed through Eq. (2) [8]:

Ft = M g [fr cos(α) + sin(α)] + M dv/dτ    (2)

where Ft is the traction force, M is the robot mass, g is the gravity acceleration, fr is the rolling resistance coefficient, α is the slope angle, v is the robot linear velocity, and dv/dτ stands for the robot linear acceleration.


The robot mass is M = 12.8 kg and corresponds to the mass of the final prototype. The gravity acceleration is taken as g = 9.81 m/s². The value of the rolling resistance coefficient is fr = 0.074 [26]. The velocity variation dv/dτ is assumed as linear with time, with a time step of 1 s. The robot's linear velocity is given by v = ω rdisc/(i0 iPGT), where ω is the motor speed in rad/s, rdisc is the radius of the robot's inner disc in meters, i0 is the reduction ratio of the pulley/belt transmission, and iPGT is the speed ratio of the gear reducer. The prototype's inner disc radius is rdisc = 0.0525 m and the pulley/belt transmission provides a reduction ratio of i0 = 6. Figure 3a shows the speed profile adopted in this paper, representing the displacement rate of the inspection robot while it travels along a cable connecting two consecutive poles with the same height. In the first 25 s, the robot departs from a pole and accelerates until reaching its cruising speed, corresponding to the motor's nominal speed. The robot's velocity is kept constant until the robot is a few seconds away from the next pole. Then, the robot decelerates, stops for a few seconds to cease the oscillations provoked by the displacement, and accelerates again to reach the nearest pole and overcome the vertical insulator on the top of the pole. The robot's velocity to overcome an obstacle corresponds to 60% of the motor's nominal speed. After surpassing the insulator, the robot decelerates again until zero velocity.
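For reference, a small sketch of Eq. (2) and of the wheel-speed-to-linear-velocity conversion is given below, using the parameter values quoted above; the slope angle, acceleration, and motor speed are left as inputs, and the example figures at the bottom are illustrative only.

```python
import math

# Eq. (2) and the relation v = omega * r_disc / (i0 * i_PGT), with the values quoted above.

M, G, FR = 12.8, 9.81, 0.074              # robot mass [kg], gravity [m/s^2], rolling resistance
R_DISC, I0, I_PGT = 0.0525, 6.0, 10.0     # inner disc radius [m], belt ratio, PGT speed ratio

def linear_velocity(omega_rad_s):
    """v = omega * r_disc / (i0 * i_PGT)."""
    return omega_rad_s * R_DISC / (I0 * I_PGT)

def traction_force(alpha_rad, dv_dtau):
    """Eq. (2): Ft = M g [fr cos(alpha) + sin(alpha)] + M dv/dtau."""
    return M * G * (FR * math.cos(alpha_rad) + math.sin(alpha_rad)) + M * dv_dtau

# Illustrative values (not from the paper): 300 rpm motor speed, 5 degree upward slope.
print(linear_velocity(300 * 2 * math.pi / 60))   # ~0.027 m/s at the cable
print(traction_force(math.radians(5.0), 0.0))    # ~20 N
```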

Fig. 3. a Robot’s linear velocity profile and b simplified geometry of the conductor when the robot moves along the cable.

The slope angle α depends on the cable length and the robot's position. The cable length can be defined using two catenary equations: firstly, the catenary sag is computed, and then the cable length is estimated. The catenary sag s is defined as s = p L²/(8 T0), where p is the cable unit weight, L is the distance


between two consecutive poles, and T0 is the conductor design tension [13]. The cable length S is defined as S = L + 8s²/(3L). The parameters used to calculate s and S are p = 0.296 daN/m, T0 = 348 daN [23], and L = 50 m. L is the average distance between consecutive poles in urban areas in the state of Santa Catarina, Brazil. The robot's position on the conductor is defined through a simplified geometric representation of the cable state when the robot is placed on it, shown in Fig. 3b. The simplified geometry enables using trigonometric relations to determine the slope angle α according to the robot's position. In Fig. 3b, points A and B are two consecutive poles and the robot's position is represented by the point r. The point r divides the distance L between the poles into L1 and L2 and the cable length S into S1 and S2, thus L = L1 + L2 and S = S1 + S2. Assuming that the robot departs from pole A, the distance already traveled by the robot is S1, given by the numerical integration of the robot's linear velocity over time τ. S2 is defined by the difference between S and S1. The distance L1 is obtained by applying trigonometry to Fig. 3b, resulting in L1(τ) = [2 S S1(τ) − S² + L²]/(2L). The distance L2 is defined through the difference between L and L1. The angle α between the conductor and the horizontal is given by the ratio between L2 and S2 and is updated for each instant τ of the robot displacement: α(τ) = arccos[L2(τ)/S2(τ)]. The robot's traction power Pt(τ) is given by:

Pt(τ) = Ft(τ) v(τ) / (ηbelt ηPGT ηm)    (3)

where ηbelt is the belt transmission efficiency and ηm is the motor efficiency. The motor nominal efficiency is ηm = 0.68, according to the motor's datasheet [21], and the timing belt transmission efficiency is ηbelt = 0.97 [3]. Besides the traction power Pt, the power demanded by the robot (Pdem) includes some auxiliary loads regarding the electronic components used during robot operation. In this paper, three sets of components are taken into consideration and their power demand is assumed to be constant: the microprocessor, the sensors, and the half-bridge, whose powers are denoted by Pmic, Psens, and Phb, respectively. Thus, the power demanded during the robot operation is calculated as Pdem(τ) = Pt(τ) + Pmic + Psens + Phb. The values adopted for Pmic, Psens, and Phb are 10 W, 10 W, and 6 W, respectively, following the maximum power magnitudes assumed by Coelho et al. [5] in the simulation of the inspection robot's power flow control. The batteries and the photovoltaic plates provide power to meet the power demand Pdem, leading to the power balance equation given by [5]:

Pdem(τ) = PPV ± Pbat(τ)    (4)

where PPV is the power supplied by the photovoltaic plates and Pbat(τ) is the power provided by the batteries. The robot has six photovoltaic plates, each one producing at most 4.4 W of power. In this work, it is assumed that the


photovoltaic plates operate at their maximum power during the whole trajectory, supplying a constant power of PPV = 26.4 W. The sign of Pbat(τ) depends on the intensity of Pdem(τ) and indicates if the battery is being charged or discharged, influencing the batteries' state-of-charge (SOC). The current SOC is determined from the batteries' power Pbat(τ) throughout the robot operation. Based on the SOC estimation of electric vehicles, the battery SOC at each instant τ of the robot's velocity profile is given by [12]:

SOC(τ) = SOC0 − (1/E0) ∫0^τ Pbat(τ) dτ    (5)

where SOC0 is the initial SOC and E0 is the energy initially stored in the batteries. E0 is calculated as E0 = nbat cbat Vbat [20], where nbat is the number of batteries, cbat is the battery capacity, and Vbat is the battery voltage. The inspection robot has nbat = 8 batteries, each one with cbat = 2.2 Ah and Vbat = 11.1 V. According to the power balance, if the robot power demand is higher than the power produced by the photovoltaic plates, i.e. Pdem(τ) > PPV, the batteries must provide the remaining power that fulfills the power demand. Consequently, Pbat(τ) is positive and the batteries are discharged, decreasing their SOC according to Eq. (5). On the other hand, if the power demand is lower than the power provided by the photovoltaic plates, i.e. Pdem(τ) < PPV, the batteries do not need to supply any power and can even be recharged, as the photovoltaic plates provide all the power required and even more. If the SOC level is above a threshold SOCb and the photovoltaic plates provide enough power to move the robot, the batteries do not provide any power, so Pbat(τ) = 0 and the SOC remains unchanged. In contrast, when the SOC level is below SOCb, the batteries are charged and the SOC increases, meaning that Pbat(τ) is negative. In this paper, it is assumed that SOCb = 0.4. The algorithm of the process to compute the inspection robot's SOC is found in [25]. The integrals in this paper are numerically solved through trapezoidal integration [20].
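Putting Eqs. (2)–(5) together, the sketch below reproduces the structure of the consumption simulation: slope angle from the simplified cable geometry, traction power, power balance with the photovoltaic plates, and trapezoidal SOC integration. The ramp-and-cruise velocity profile and the sign convention for the slope on the descending half of the span are assumptions of this sketch, since Fig. 3a is only described qualitatively.

```python
import math

# Sketch of the consumption model in Eqs. (2)-(5); the velocity profile and the
# slope-sign convention are assumptions, not the authors' implementation.

P_UNIT, T0, L_SPAN = 0.296, 348.0, 50.0           # cable unit weight [daN/m], tension [daN], span [m]
SAG = P_UNIT * L_SPAN**2 / (8 * T0)               # catenary sag s = p L^2 / (8 T0)
S_CABLE = L_SPAN + 8 * SAG**2 / (3 * L_SPAN)      # cable length S = L + 8 s^2 / (3 L)
M, G, FR = 12.8, 9.81, 0.074
ETA_BELT, ETA_PGT, ETA_M = 0.97, 0.8775, 0.68
P_AUX, P_PV = 10.0 + 10.0 + 6.0, 26.4             # Pmic + Psens + Phb, PV power [W]
E0 = 8 * 2.2 * 11.1 * 3600.0                      # nbat * cbat * Vbat [J]
SOC_B = 0.4

def slope_angle(s1):
    """alpha from L1 = (2 S S1 - S^2 + L^2)/(2L), L2 = L - L1, S2 = S - S1."""
    l1 = (2 * S_CABLE * s1 - S_CABLE**2 + L_SPAN**2) / (2 * L_SPAN)
    l2, s2 = L_SPAN - l1, S_CABLE - s1
    alpha = math.acos(max(-1.0, min(1.0, l2 / s2)))
    return -alpha if s1 < S_CABLE / 2 else alpha   # descending first half, ascending second (assumed)

def simulate(dt=1.0, v_cruise=1.0, accel=0.04):
    soc, s1, v_prev, p_bat_prev = 1.0, 0.0, 0.0, 0.0
    while s1 < S_CABLE:
        alpha = slope_angle(s1)
        v = min(v_cruise, v_prev + accel * dt)                      # placeholder ramp-up profile
        ft = M * G * (FR * math.cos(alpha) + math.sin(alpha)) + M * (v - v_prev) / dt   # Eq. (2)
        p_dem = ft * v / (ETA_BELT * ETA_PGT * ETA_M) + P_AUX       # Eq. (3) plus auxiliary loads
        if p_dem > P_PV:
            p_bat = p_dem - P_PV                                    # batteries discharge
        else:
            p_bat = 0.0 if soc >= SOC_B else p_dem - P_PV           # recharge only below SOC_b
        soc -= 0.5 * (p_bat_prev + p_bat) * dt / E0                 # Eq. (5), trapezoidal rule
        s1 += v * dt                                                # numerical integration of v
        v_prev, p_bat_prev = v, p_bat
    return soc

print(f"final SOC over one span: {simulate():.4f}")
```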

6 Results and Discussion

The PGT efficiency is optimized following the procedure described in Sect. 4. After 69 generations and 2665 evaluations of the cost function, the GA converged to m = 0.5 mm, z1 = 32, and z2 = 128. These values of module and number of teeth give a PGT with a speed ratio of iPGT^opt = 10 and an efficiency of ηPGT^opt = 87.75%, respecting the constraint functions. Note that iPGT^opt = iPGT^original = 10. This way, the inspection robot follows the same velocity profile of Fig. 3a with both the original and the optimized configurations. So, the batteries' SOC at the end of the route can be directly compared, as the only parameter distinguishing both configurations in the SOC calculation is the gear reducer efficiency ηPGT in Eq. (3), which calculates the traction power.


The difference in efficiency between the original PGT and the optimized PGT is 8.94 percentage points, which compensates for the belt losses. The impact of this difference on the robot's consumption is evaluated by simulating the robot traveling along a cable connecting two poles, following the velocity profile of Fig. 3a, and comparing the batteries' SOC at the end of the route for both configurations. The final SOC for both cases is practically the same: when the gear reducer of the motor is the original PGT, the batteries' final SOC is 99.95%. If the gear reducer is the optimized PGT, the final SOC goes to 99.97%, an insignificant increase. The influence of the gear reducer efficiency increases with the distance traveled by the robot, but not markedly. If the robot travels along 58 consecutive poles, corresponding to the pilot network adopted in the project, the final SOC with the original PGT reducer is 97.35%, while the final SOC with the optimized PGT reducer is 98.27%, a difference of 0.92 percentage points. Therefore, the higher efficiency of the PGT did not bring a considerable enhancement in the robot's consumption. A much simpler gear train design could be used as a gear reducer instead of a PGT, as it has been shown that ηPGT does not have a significant impact on the robot's consumption. Since it is not recommended to use a pair of gears with a ratio of about 10:1 with a pinion of more than 18 teeth [24], a simple gear pair could not be used. So, if a gear train with a design simpler than the PGT is preferred, the gear reducer should be composed of three gears in series or a compound gear train, as long as it respects the robot's volume constraint.

7 Conclusions

This study assessed the efficiency optimization of the gear reducer of the traction motor of a power line inspection robot, aiming to reduce the robot's power consumption. The gear reducer is a PGT, which is modeled through the Davies method and optimized in terms of its efficiency. The PGT presented an efficiency of 78.81% and 87.75% before and after optimization, respectively. Although a gain of 8.94 percentage points in the transmission efficiency is obtained after the optimization, very similar results were observed when the robot's consumption was simulated. Since no significant energy saving was obtained even for the robot moving along the whole pilot network, a much simpler gear train could be implemented for this robot. The main reason behind the low impact of ηPGT on the final SOC is the power supplied by the photovoltaic plates, as they provide a constant power that is high enough to meet almost all the power demanded by the robot. However, this is an ideal case, in which solar irradiation ensures that the photovoltaic plates operate at their maximum power. Besides, the robot has only one power source. For other mobile robots with multiple actuators and transmission systems, the improvement in energy consumption through the optimization of the gear train systems may be more relevant.


It can be observed that, for the case study presented, the belt transmission efficiency ηbelt is equal to 0.97 and the gear reducer efficiency ηPGT was optimized to 0.88. On the other hand, the motor efficiency ηm has a much lower nominal value of 0.68. By selecting proper combinations of motor velocities and torques depending on the task being performed, it is possible to improve the motor efficiency as well and, consequently, the robot's energy consumption. Therefore, the design of an optimum inspection robot will involve not only the design of an optimized transmission system but also the definition of velocity profiles for each task considering the motor's efficiency map.

Acknowledgement. The R&D project that made this work possible was entitled "Development of a Robotized System for Inspection of Electricity Distribution Lines" (05697-0317/2017), regulated by ANEEL ("Agência Nacional de Energia Elétrica") and financed by Centrais Elétricas de Santa Catarina S.A. (Celesc), the electrical utility company for the southern Brazilian state of Santa Catarina. Thanks are due to Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES) and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for partial financial support.

References

1. AGMA: Fundamental rating factors and calculation methods for involute spur and helical gear teeth (2004)
2. Alhassan, A.B., Zhang, X., Shen, H., Xu, H.: Power transmission line inspection robots: a review, trends and challenges for future research. Int. J. Electr. Power & Energy Syst. 118, 105862 (2020)
3. Budynas, R.G., Nisbett, J.K.: Shigley's Mechanical Engineering Design. McGraw Hill, New York (2020)
4. Bühringer, M., Berchtold, J., Büchel, M., Dold, C., Bütikofer, M., Feuerstein, M., Fischer, W., Bermes, C., Siegwart, R.: Cable-crawler–robot for the inspection of high-voltage power lines that can passively roll over mast tops. Ind. Robot. Int. J. (2010)
5. Coelho, V.D.O., Balbino, A.J., Artmann, V.N., Simas, H., Martins, D., Batista, G., Silva, L.R.D.J.D., Kinceler, R.: Design and analysis of a multi-port DC microgrid to power flow control in a robotic system. SEPOC 2021 (2021)
6. Datouo, R., Motto, F.B., Zobo, B.E., Melingui, A., Bensekrane, I., Merzouki, R.: Optimal motion planning for minimizing energy consumption of wheeled mobile robots. In: 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 2179–2184. IEEE (2017)
7. Debenest, P., Guarnieri, M., Takita, K., Fukushima, E.F., Hirose, S., Tamura, K., Kimura, A., Kubokawa, H., Iwama, N., Shiga, F., et al.: Expliner–toward a practical robot for inspection of high-voltage lines. In: Field and Service Robotics, pp. 45–55. Springer (2010)
8. Ehsani, M., Gao, Y., Longo, S., Ebrahimi, K.M.: Modern Electric, Hybrid Electric, and Fuel Cell Vehicles. CRC Press (2018)
9. ISO 54:1996: Cylindrical gears for general engineering and for heavy engineering—modules. Standard, International Organization for Standardization, Geneva, CH (1996)


10. Jalal, M.F.A., Sahari, K.S.M., Fei, H.M., Leong, J.C.T.: Design and development of three arms transmission line inspection robot. J. Robot. Netw. Artif. Life 5(3), 157–160 (2018)
11. Jayatilaka, M., Shanmugavel, M., Ragavan, S.V.: Robonwire: design and development of a power line inspection robot. In: Proceedings of the 1st International and 16th National Conference on Machines and Mechanisms, vol. 16, pp. 808–815 (2013)
12. Kularatna, N.: Energy Storage Devices for Electronic Systems: Rechargeable Batteries and Supercapacitors. Academic Press (2014)
13. Labegalini, P.R., Labegalini, J.A., Fuchs, R.D., de Almeida, M.T.: Projetos mecânicos das linhas aéreas de transmissão. Editora Blucher (1992)
14. Laus, L., Simas, H., Martins, D.: Efficiency of gear trains determined using graph and screw theories. Mech. Mach. Theory 52, 296–325 (2012)
15. Lima, E.J., Bomfim, M.H.S., de Miranda Mourão, M.A.: Polibot–power lines inspection robot. Ind. Robot. Int. J. (2017)
16. Lisle, T.J., Shaw, B.A., Frazer, R.C.: Internal spur gear root bending stress: a comparison of ISO 6336:1996, ISO 6336:2006, VDI 2737:2005, AGMA, ANSYS finite element analysis and strain gauge techniques. Proc. Inst. Mech. Eng. C J. Mech. Eng. Sci. 233(5), 1713–1720 (2019)
17. Liu, S., Sun, D.: Minimizing energy consumption of wheeled mobile robots via optimal motion planning. IEEE/ASME Trans. Mechatron. 19(2), 401–411 (2013)
18. MathWorks: Genetic Algorithm and Direct Search Toolbox User's Guide. MathWorks, 1st edn. (2004)
19. Mei, Y., Lu, Y.H., Hu, Y.C., Lee, C.G.: A case study of mobile robot's energy consumption and conservation techniques. In: ICAR'05, Proceedings of the 12th International Conference on Advanced Robotics, pp. 492–497. IEEE (2005)
20. Mi, C., Masrur, M.A.: Hybrid Electric Vehicles: Principles and Applications with Practical Perspectives. John Wiley & Sons (2017)
21. Motron: MR 910-IVEBUN-300-24V/MRTR 910-IVEBUN-300-BIV (2019)
22. Naunheimer, H., Bertsche, B., Ryborz, J., Novak, W.: Automotive Transmissions: Fundamentals, Selection, Design and Application. Springer Science & Business Media (2011)
23. de Engenharia e Normas (DVEN), D.: Estruturas para redes aéreas convencionais de distribuição. Celesc Distribuição S.A., 2nd edn. (2014). Code E-313.0002
24. Norton, R.L.: Machine Design: An Integrated Approach, 4th edn. Prentice Hall (2010)
25. de Souza, M.B.: Powertrain optimization of hybrid and electric vehicles. Doctoral thesis, Programa de Pós-Graduação em Engenharia Mecânica, Universidade Federal de Santa Catarina, Florianópolis (2022)
26. de Souza, M.B., Fernandes, G.Q., de Souza Vieira, R., Nicolazzi, L.C., Kinceler, R., Dadam, A.P.: Comparison of friction properties of materials with different hardness for cable riding robots' wheels. In: 26th ABCM International Congress of Mechanical Engineering (COBEM 2021). ABCM (2021)
27. Thecnolub: Linha de produtos mancais autolubrificantes livres de manutenção (2017)
28. Wang, C.: High power density design for planetary gear transmission system. Proc. Inst. Mech. Eng. C J. Mech. Eng. Sci. 233(16), 5647–5658 (2019)

Leg Mechanism of a Quadruped Wheeled Robot with a 4-DoF Spherical Parallel Link Mechanism Taisei Suzuki1(B) , Hayato Ota1 , Hiroki Takenaka1 , Takayuki Tanaka1 , Yuta Ishizawa1 , and Kenji Hashimoto2 1 Meiji University, 1-1-1 Higashi-Mita, Tama-ku, Kawasaki 214-8571, Kanagawa, Japan

[email protected]

2 Waseda University, 2-7 Hibikino, Wakamatsu-ku, Kitakyushu 808-0135, Fukuoka, Japan

[email protected]

Abstract. To develop a quadruped wheeled robot with high mobility, we have designed and developed the leg mechanism of a quadruped wheeled robot, named “MELEW-2LR” (MEiji LEg-Wheeled Robot–No.2 Leg Refined). MELEW-2LR has a total of 4 degrees of freedom (DoFs), which include translational movements of the leg tip in 3 directions (forward/backward, left/right, and up/down) and rotation of the leg tip in the Yaw direction to control the steering angle of the wheel when turning. The leg mechanism uses a 4-DoF parallel link mechanism that combines spherical links and parallel links to achieve high output. In addition, all motors are located at the hip joint to reduce inertia in the leg. Furthermore, a system was constructed to calculate the motor angle from the leg tip position by solving inverse kinematics on PC and to provide the angle command value to the brushless DC motor via a microcontroller. Evaluation experiments were conducted on the leg, and it was shown that accurate position control in the three translational directions and in the Yaw direction was possible, as well as bending and stretching movements with a payload equivalent to the leg mass. Keywords: Leg mechanism · Leg-wheeled robot · Parallel link · Spherical link

1 Introduction In recent years, there has been an increase in the use of mobile robots in situations such as package transportation and delivery, and guidance and security in facilities [1, 2]. Demand for mobile robots is also expected in disaster response and other environments where it is difficult for humans to work directly [3, 4]. Types of mobile robots include leg-type robots that are capable of traversing uneven terrain [5, 6] and wheel-type robots that can move at high speed on level ground [7, 8]. The robot in this study is a leg-wheeled robot that has both of these features. Examples of previous research on leg-wheeled robots include Roller-Walker [9] and ANYmal [10, 11] as quadruped wheeled robots, and WS-2/WL-16 [12] and Ascento [13] as bipedal wheeled robots. We also developed a quadruped wheeled robot named MELEW-1 (MEiji LEg-Wheeled Robot–No.1) in 2020.


Fig. 1. DoFs of the quadruped wheeled robot to be developed

The long-term goal of this research is to develop a quadruped wheeled robot with high mobility. Prior to this study, in 2021, we developed MELEW-2L (MEiji LEgWheeled Robot–No. 2 Leg) [14], a single-leg wheeled mechanism with 2-DoF leg, equipped with motors that rotate in the Pitch direction at the hip and knee joints. As shown in Fig. 1, the DoF configuration per leg of the quadruped wheeled robot to be developed in the future consists of a leg with 4-DoF and an active wheel. The reason for the 4-DoF leg configuration is to move the leg tips in the three translational directions (forward/backward, left/right, and up/down) and to rotate the leg tips in the Yaw direction to control the steering angles of wheels when turning. The purpose of this research is to design and develop a 4-DoF leg of a quadruped wheeled robot that can move in each of the 4-DoF directions and achieve movements that require high power (speed and torque). This paper describes the design and development of the 4-DoF parallel linkage leg using spherical links named “MELEW-2LR” (MEiji LEg-Wheeled Robot–No. 2 Leg Refined), and the experiments of the leg tip moving in each of the 4-DoF directions and the experiments of the robot carrying a payload equivalent to its own weight. The updates in MELEW-2LR compared to MELEW-2L are: • Addition of DoFs in the leg: 3 → 4 (Wheel section has not been implemented yet) • Adoption of a 4-DoF parallel link mechanism for the leg mechanism • Downsizing of leg motors.

2 Consideration of Leg Mechanism 2.1 Required Specifications for Leg Mechanism The quadruped wheeled robot, which will be built as a long-term goal of this research, is designed to perform operations with high mobility, such as climbing stairs with a height of 230 mm and running on wheels at a speed of 20 km/h. Therefore, the following three items were defined as the required specifications for the leg mechanism in this study.


The first requirement is “a 4-DoF leg mechanism per leg.” The reason for this is to control the leg tip in a total of 4-DoF directions: forward and backward, left and right, up and down, and Yaw direction. The second requirement is “a leg mechanism capable of generating motions with high output (speed and torque).” The reason for this is to be able to handle movements with high mobility as described above when developing a quadruped wheeled robot in the future. In response to this, this study considers a leg mechanism that can swing itself at high speed and support a payload equivalent to the mass of the leg. The third requirement is “a leg mechanism that is less affected by its own inertia.” The reason for this is that by reducing the inertia of the leg, the required torque of the motor at the hip joint can be reduced, preventing a reduction in the overall power output of the leg. 2.2 Consideration of Proposed Leg DoF Configurations Based on the required specifications described in Sect. 2.1, the leg mechanism to be designed was considered. First, to realize “a leg mechanism capable of generating motions with high output,” a parallel link mechanism was employed, which can increase the output of the entire leg by combining the output of multiple motors. A parallel link mechanism is a link mechanism in which power is transmitted from the base link to the output link by chains of multiple links connected in parallel.

Fig. 2. 4-DoF parallel link mechanism

In order to realize “a leg mechanism that is less affected by its own inertia,” a leg mechanism that can reduce the weight of the leg tip as much as possible was considered by concentrating the leg motors at the hip joint.


From the above, a 4-DoF parallel link mechanism with spherical links, as shown in Fig. 2, was adopted as the leg mechanism to be designed. 2.3 Designed Leg Mechanism Here, we describe the actually designed 4-DoF parallel link mechanism shown in Fig. 2. All joint parts of this leg mechanism are composed of 1-DoF rotational joints. The motors are concentrated at points A1 to A4 of the hip joint in Fig. 2, and all four motor shafts are coaxially placed to rotate in the Pitch direction. The total of eight links colored red and yellow above points C1 and C2 in Fig. 2 are all composed of spherical links centered at the origin O. These links are designed so that the angle between the axes at both ends of the link is π/3 rad, centered at the origin O as shown in Fig. 3. Using these links, spherical 5-node links are constructed connecting 5 points, A1 , B1 , C1 , B4 , A4 and A2 , B2 , C2 , B3 , A3 , and the outputs of the two motors are combined at points C1 and C2 , respectively. The total four links colored green and blue below points C1 and C2 in Fig. 2 are parallel links that operate so that the 4 points O, D1 , E, and D2 always maintain a parallelogram. By using these parallel links, the outputs of the two upper pairs of spherical 5-node links are combined at the position of point E. Using this mechanism, the total of four motors of the hip joint can be driven in the direction shown in Table 1 in the Pitch direction to control the leg tip in a total of 4-DoF directions: forward and backward, left and right, up and down, and Yaw direction.

Fig. 3. Design of spherical links: (a) upper link; (b) bottom link. The angle between the axes at both ends of each link is π/3 rad.

The procedure for calculating the analytical solution for the inverse kinematics of this 4-DoF parallel link is as follows: first, the (y, z) coordinates of the two points D1 and D2 at the knee joint are obtained by establishing a series of geometric conditions from the (x, y, z) coordinates and the Yaw angle of the leg tip. Next, the four motor angles θi (i = 1, 2, 3, 4) [rad] are obtained from the (y, z) coordinates of the two points D1 and D2 using rotation matrices.


Table 1. Orientation of motors relative to leg tip movement

Direction              | A1 | A2 | A3 | A4
Forward direction      | −  | −  | −  | −
Left direction         | −  | +  | −  | +
Upward direction       | −  | +  | +  | −
Yaw positive direction | −  | −  | +  | +

+: Pitch positive direction, −: Pitch negative direction

3 Design and Development of the Leg 3.1 Specifications of Developed Leg


Figure 4 shows the 4-DoF parallel linkage leg named “MELEW-2LR” designed and developed in this study. Table 2 shows the specifications of MELEW-2LR. The leg length of MELEW-2LR is 200 mm for both the upper leg and lower leg to ensure a vertical range of motion of at least 240 mm for stepping over a 230 mm-high step. The motor used for MELEW-2LR is a brushless DC motor AK80–9 (maximum speed: 245 rpm, maximum torque: 18 Nm) manufactured by T-Motor. The mass of the entire leg is approximately 4.9 kg, of which approximately 3.0 kg is concentrated in the hip joint.

Fig. 4. MELEW-2LR

Table 2. Specifications of MELEW-2LR

Leg length [mm]:                Upper leg 200, Lower leg 200
Range of motion of motor [rad]: −(13/36)π ∼ π/3 (all motors are common)
Mass [kg]:                      Hip joint part 3.0, Link part 1.9, Total 4.9

3.2 Detailed Design of Mechanism The structural members of MELEW-2LR are mainly made of lightweight superduralumin A7075, and stainless steel SUS304 is used for the shafts, which require higher strength. In addition, the bottom links are made of CFRP pipe to reduce weight. The four motors were placed as close as possible to each other with the two left motor shafts facing left and the two right motor shafts facing right, so that the width of the hip joint was narrower. As shown in Fig. 5, diagonal notches were cut in the members to which the motors are attached, and these notches were placed against the uppermost spherical links to allow calibration of the encoders built into the motors. The shapes of the spherical links were modified with stress analysis so that they could withstand a load of 300 N in each direction applied from the leg tip side. The shafts connecting eight spherical links were supported by deep groove ball bearings with flanges from above and below to increase the rigidity of the entire leg. On the lower side of the leg, the thickness of the links was made thinner, and links were subjected to lightening to prevent interference with the leg and to make the leg as light as possible. In addition, to increase the rigidity of the entire leg, a double-sided structure was adopted at several points of the links that make up the leg. 3.3 System Configuration and Leg Movement Generation A system configuration diagram of MELEW-2LR is shown in Fig. 6. To generate the leg motion of MELEW-2LR, the (x, y, z) coordinates and Yaw angle of the leg target are first given to the external PC (iiyama LEVEL-15FXR21-i7-ROZX), and the angles and angular velocities of the four motors are calculated by inverse kinematics on the PC. The PC then sends commands of angles and angular velocities to the four motors via the microcontroller (STM32 Nucleo Board STM32F446RE) using CAN communication to perform PD control of the motor angles inside the motors.
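As a rough sketch of this pipeline (leg-tip target solved into motor angles on the PC, then streamed as angle/angular-velocity commands while PD control runs inside each motor), the Python fragment below shows the loop structure only; inverse_kinematics and send_motor_command are placeholders, since neither the authors' analytical IK nor their CAN message format is reproduced here.

```python
import time

# Placeholder pipeline sketch: PC-side IK -> angle/velocity commands to four motors.

def inverse_kinematics(x, y, z, psi):
    """Placeholder for the analytical IK of Sect. 2.3 (leg-tip pose -> four motor angles [rad])."""
    return (0.0, 0.0, 0.0, 0.0)   # dummy values for illustration only

def send_motor_command(motor_id, angle, velocity):
    """Placeholder for the command sent via CAN through the STM32 Nucleo board."""
    print(f"motor {motor_id}: q={angle:+.3f} rad, dq={velocity:+.3f} rad/s")

def stream_commands(targets, dt=0.01):
    """Send commands at a fixed rate; angular velocities are finite-differenced on the PC."""
    prev = inverse_kinematics(*targets[0])
    for target in targets[1:]:
        q = inverse_kinematics(*target)
        for i in range(4):
            send_motor_command(i + 1, q[i], (q[i] - prev[i]) / dt)
        prev = q
        time.sleep(dt)

stream_commands([(0.0, 0.0, -350.0, 0.0), (0.0, 0.0, -340.0, 0.0)])  # leg-tip targets in mm / rad
```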


Fig. 5. Calibration of the encoder of MELEW-2LR

Fig. 6. System configuration diagram of MELEW-2LR

4 Evaluation Experiments 4.1 Purpose of the Experiments As the evaluation experiments of MELEW-2LR, movement experiments were conducted during the swinging leg and during the standing leg. In the movement experiments during the swinging leg, the feasibility of the motions in each direction was confirmed by controlling the leg tip in the three translational directions and in the Yaw direction while MELEW-2LR was in the swinging-leg state. In the movement experiments during the standing leg, the feasibility of the motions and the tracking of the motor angles were verified by moving the leg tip in the Yaw direction and the vertical direction with a payload equivalent to the weight of the robot.


4.2 Movement Experiments During the Swinging Leg With the MELEW-2LR in the swinging leg, the leg tip was moved in each of the three translational directions (forward/backward, left/right, and up/down) and in the Yaw direction. The supply voltage was set to 48 V, and the PD gain values of the motor were set to (KP , KD ) = (450, 2.0). The photographs of MELEW-2LR during the experiment are shown in Fig. 7. As shown in Fig. 7, we confirmed that the leg tip could be controlled in each of the three translational directions (forward/backward, left/right, and up/down) and in the Yaw direction by using analytical solutions for inverse kinematics of the leg.

Fig. 7. Movement experiments during the swinging leg: (a) forward and backward direction; (b) left and right direction; (c) up and down direction; (d) Yaw direction

4.3 Movement Experiments During the Standing Leg (a) Yaw direction movement experiment. The top of MELEW-2LR is attached to a black aluminum frame. A total weight of 5.0 kg, which is equivalent to the mass of MELEW-2LR, was mounted on the top of this aluminum frame as a payload. With the MELEW-2LR's leg tip grounded, a sinusoidal trajectory with an amplitude of π/6 rad and a frequency of 0.50 Hz was given in the Yaw direction of the leg tip. The command value of the leg tip at time t [s] is as shown in the following equation.

[xF(t), yF(t), zF(t)]^T = [0, 0, −400 cos(π/6)]^T [mm],  ψF(t) = (π/6) cos(π t) [rad]    (1)


Note that (xF, yF, zF) are the coordinates of the leg tip command value and ψF is the Yaw angle of the leg tip command value. The power supply voltage was set to 52 V, and the PD gain values of the motors were set to (KP, KD) = (450, 2.0). A series of photographs of MELEW-2LR during the experiment are shown in Fig. 8, and a graph comparing the command values of the four motors and the angles of the MELEW-2LR encoders, converted to Yaw angles of the leg tip by forward kinematics calculation, is shown in Fig. 9.

Fig. 8. Yaw direction movement experiment

Fig. 9. Yaw angle of leg tip in the Yaw direction movement experiment

As shown in Fig. 8, the robot actually achieved a leg tip rotation in the Yaw direction with a payload of 5.0 kg. The maximum value of the motor torque was approximately 15 Nm. Also, when the leg was rotated to the maximum in the Yaw positive direction, the torque was concentrated on the first and third motors from the left, and when the leg was rotated to the maximum in the Yaw negative direction, the torque was concentrated on the second and fourth motors from the left, resulting in a maximum error of about 0.03 rad in motor angles. This resulted in the sinking of the leg. The maximum error in the Yaw axis direction of the leg tip at this time was 0.067 rad as shown in Fig. 9.


(b) Sinusoidal flexion experiment. The top of MELEW-2LR is attached to a black aluminum frame. A total weight of 5.0 kg, which is equivalent to the mass of MELEW-2LR, was mounted on the top of this aluminum frame as a payload. With the MELEW-2LR's leg tip grounded, a sinusoidal trajectory with an amplitude of 100 mm and a frequency of 0.50 Hz was given in the vertical direction of the leg tip. The command value of the leg tip at time t [s] is as shown in the following equation.

[xF(t), yF(t), zF(t)]^T = [0, 0, −400 cos(π/12) + 100(1 − cos(π t))]^T [mm],  ψF(t) = 0 [rad]    (2)

The power supply voltage was set to 52 V, and the PD gain values of the motors were set to (KP , KD ) = (450, 2.0).
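For reference, the command trajectories of Eqs. (1) and (2) can be generated as in the sketch below (positions in mm, angles in rad, with the 0.50 Hz sinusoid giving cos(πt)); converting these targets into motor angles would require the analytical inverse kinematics of Sect. 2.3, which is not reproduced here.

```python
import math

# Command-value generators for the standing-leg experiments, following Eqs. (1) and (2).

def yaw_experiment_target(t):
    """Eq. (1): fixed leg-tip position, sinusoidal Yaw command of amplitude pi/6 rad."""
    return 0.0, 0.0, -400.0 * math.cos(math.pi / 6), (math.pi / 6) * math.cos(math.pi * t)

def flexion_experiment_target(t):
    """Eq. (2): zero Yaw, vertical sinusoid of amplitude 100 mm."""
    return 0.0, 0.0, -400.0 * math.cos(math.pi / 12) + 100.0 * (1.0 - math.cos(math.pi * t)), 0.0

# Example: sample the flexion trajectory at 100 Hz over one 2 s period.
z_values = [flexion_experiment_target(k / 100.0)[2] for k in range(200)]
print(round(min(z_values), 1), round(max(z_values), 1))   # z sweeps about 200 mm peak to peak
```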

Fig. 10. Sinusoidal flexion experiment

A series of photographs of MELEW-2LR during the experiment are shown in Fig. 10, and a graph comparing the command values of the four motors and the angles of the MELEW-2LR encoders, converted to the x-coordinate and z-coordinate of the leg tip by forward kinematics calculation, is shown in Fig. 11. As shown in Fig. 10, the robot actually achieved bending and stretching motion of the leg in the vertical direction with a payload of 5.0 kg. The motor torque reached its maximum value of 18 Nm. Also, when the leg was flexed, the increased moment arm around the knee joint due to gravity caused a maximum error of about 0.03 rad in the motor angles, which caused the leg to sink. The maximum error in the z-axis direction of the leg tip at this time was 11.1 mm, as shown in Fig. 11. Both the x-axis and z-axis errors of the leg tip position of MELEW-2LR were smaller than those of MELEW-2L, previously developed by the authors, which has motors rotating in the Pitch direction at the hip and knee joints and has the same leg length.


Fig. 11. Position of leg tip in sinusoidal flexion experiment

5 Conclusion In this study, we have developed a leg with a 4-DoF spherical parallel linkage, MELEW-2LR, and demonstrated that it could move in the three translational directions and in the Yaw direction, and support a payload equivalent to the mass of the leg. As a future prospect, we plan to develop a quadruped wheeled robot based on MELEW-2LR. Prior to this, in 2023, we designed and developed the active wheel part of the leg tip and the body part, and developed the bipedal wheeled robot "MELEW-2" (MEiji LEg-Wheeled Robot–No. 2) (Fig. 12) based on MELEW-2LR. In the future, we plan to further expand the number of legs of MELEW-2 and develop a quadruped wheeled robot.


Fig. 12. MELEW-2

Acknowledgements. This study was conducted with the support of Information, Production and Systems Research Center, Waseda University; Future Robotics Organization, Waseda University, and as a part of the humanoid project at the Humanoid Robotics Institute, Waseda University. This work was supported by JSPS KAKENHI Grant Numbers JP20H04267, JP21H05055. This work was also supported by a Waseda University Grant for Special Research Projects (Project number: 2023C-179).

References

1. Robots—Agility Robotics. https://agilityrobotics.com/robots. Accessed 04 Apr 2023
2. Robot|SoftBank. https://www.softbank.jp/en/robot. Accessed 04 Apr 2023
3. Spenko, M., Buerger, S., Iagnemma, K.: The DARPA Robotics Challenge Finals: Humanoid Robots to the Rescue. Springer (2018)
4. Nagatani, K., Kiribayashi, S., et al.: Redesign of rescue mobile robot Quince–Toward emergency response to the nuclear accident at Fukushima Daiichi Nuclear Power Station on March 2011–. In: 2011 IEEE International Symposium on Safety, Security, and Rescue Robotics, pp. 13–18, Japan (2011)
5. Katz, B., Carlo, J.D., Kim, S.: Mini Cheetah: a platform for pushing the limits of dynamic quadruped control. In: 2019 International Conference on Robotics and Automation (ICRA), pp. 6295–6301, Canada (2019)
6. Atlas™|Boston Dynamics. https://www.bostondynamics.com/atlas. Accessed 04 Apr 2023
7. What's next for Amazon Scout? https://www.aboutamazon.com/news/transportation/whatsnext-for-amazon-scout. Accessed 18 Dec 2023
8. Delivery Robot DeliRo|ZMP. https://www.zmp.co.jp/en/products/lrb/deliro. Accessed 04 Apr 2023
9. Endo, G., Hirose, S.: Study on Roller-Walker (system integration and basic experiments). In: 1999 IEEE International Conference on Robotics and Automation (ICRA), pp. 2032–2037, USA (1999)


10. Hutter, M., Gehring, C., et al.: ANYmal - a highly mobile and dynamic quadrupedal robot. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 38–44, Korea (2016)
11. Bjelonic, M., Bellicoso, C.D., et al.: Keep Rollin'-whole-body motion control and planning for wheeled quadrupedal robots. IEEE Robot. Autom. Lett. 4(2), 2116–2123 (2019)
12. Hashimoto, K., Hosobata, T., et al.: Realization by biped leg-wheeled robot of biped walking and wheel-driven locomotion. In: 2005 IEEE International Conference on Robotics and Automation (ICRA), pp. 2970–2975, Spain (2005)
13. Klemm, V., Morra, A., et al.: Ascento: a two-wheeled jumping robot. In: 2019 IEEE International Conference on Robotics and Automation (ICRA), pp. 7515–7521, Canada (2019)
14. 2020 Leg-wheeled mechanism with four-bar linkage and self-weight compensation mechanism, MELEW-2L – YouTube. https://www.youtube.com/watch?v=u7ieRxNQRqM. Accessed 26 June 2023

A New Method of Climbing on a High Place by Elasticity-Embedded Rocker-Bogie Vehicle with Dynamic Motions

Ryunosuke Takahashi1(B), Ikuo Mizuuchi1, Kimikage Kanno1, and Kenta Hiyoshi2

1 Department of Mechanical Systems Engineering, Tokyo University of Agriculture and Technology, 2-24-16, Naka-cho, Koganei-city, Tokyo, Japan {ryunosuke,ikuo,kanno}@mizuuchi.lab.tuat.ac.jp
2 Meidensha Corp., 496-1, Nishibiwajimacho, Ittangosewari, Kiyosu-shi, Aichi, Japan [email protected]

Abstract. The rocker-bogie mechanism increases the capacity of vehicles to step over steps. When it comes to climbing steps, however, the rocker-bogie vehicle depends on the length of the links and the friction force between the wheels and the vertical plate of the steps. Therefore, we propose a new method that enables an elasticity-embedded rocker-bogie vehicle to climb onto a high place with low dependency on the friction force and the length of the links. Firstly, it lifts up the links of the elasticity-embedded rocker-bogie vehicle with the excitation of a Series Elastic Actuator (SEA). Secondly, it keeps the posture of the lifted links with inverted pendulum control. Thirdly, the rocker-bogie vehicle moves forward to the step and puts some wheels there while keeping the link-lifted posture. Finally, it puts all wheels on the step by another dynamic motion. We verified and succeeded up to the third phase of the proposed method in both the simulation and the actual robot. Keywords: Wheeled mobile robot · Step climbing · Rocker-Bogie · Physical elasticity · Dynamic motion

1 Introduction

As shown in Fig. 1, the rocker-bogie mechanism [1] consists of a rocker-link holding a pair of wheels and a bogie-link having two pairs of wheels, which are connected by rotary joints. Due to this rotary joint, the fluctuation of the center of gravity when a wheel runs over an obstacle is small, and the vehicle can travel stably on uneven roads. Therefore, this mechanism is employed by the Mars rover [2,3]. There are research examples on stepping over steps using a vehicle equipped with a rocker-bogie mechanism [4,5]. There are also research examples on improving step climbing capability by elastic utilization of this mechanism [6,7].


Fig. 1. Concept of the proposed method

However, all of the step climbing conducted in these studies heavily depends on the dimensions of the link, including wheel diameters and inter-axle distances (the distance between the axles when the rocker-bogie vehicle is placed on a flat surface), as shown in Fig. 1. In addition, since step climbing is conducted while all wheels are always in contact with the ground, it heavily depends on the friction force between the vertical plate of the step and the wheels. Therefore, this study is based on the elastic utilization of rocker-bogie vehicle [6,7]. We proposed and verified a method to move the wheels of a rocker-bogie to a high place with dynamic motions, which is independent of the frictional force between the wheels and the vertical plate and the link dimensions.

2 Details of the Proposed Method

The flow of dynamic motions proposed in this study, shown in Fig. 2, is as follows.
(1) From a rocker-bogie vehicle with all wheels grounded, a series elastic actuator (SEA) and the wheels are controlled to increase the elastic energy of the elastic component and lift the center of gravity of either the rocker-link or the bogie-link to near just above the link joint.
(2) A rocker-bogie vehicle with one of the rocker-link or bogie-link lifted is considered an inverted pendulum and controlled.
(3) Keep moving forward and put the front wheels of the lifted link on the step without the vertical plate.
(4) Put all wheels on the step with further dynamic motion.


Fig. 2. Flow of sequence in the proposed method

Fig. 3. Model in simulation, and SEA rocker-bogie vehicle


We carried out the sequence up to the 3rd phase of this flow. In addition, we lifted the rocker-link in the 1st phase. This is because, when shifting to the inverted pendulum control in the 2nd phase after lifting the link, the two pairs of wheels attached to the bogie-link can be used for control, which makes it easier to stabilize the control than lifting the bogie-link and using only one pair of wheels for the inverted pendulum control.

2.1 About SEA and Exciting Motion

A series elastic actuator (SEA) [8] is a mechanism in which an elastic component and an actuator are connected in series. In this study, we use an SEA rocker-bogie vehicle in which an actuator and an elastic component are connected in series between the rocker-link and the bogie-link [7]. The reason for adopting an SEA is that the performance of the motor is expected to be improved due to the expansion of energy obtained by utilizing the vibration of the spring [10]. Furthermore, since the elastic element and the links are connected in series, the balance position of the spring can be changed according to the posture of the rocker-bogie vehicle, in contrast to parallel elastic utilization, which makes the vehicle better suited for dynamic motion. An SEA can enlarge the vibration of the spring by using the motor to apply force in the direction of the deformation speed of the spring, which is called feedback excitation [11]. This action enables the rocker-link to be lifted even by a motor whose torque alone cannot lift the rocker-link.
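A minimal sketch of these torque laws is given below; the bang-bang form and the saturation at 0.9 Nm (the torque used in the actual-robot experiment of Sect. 4) are assumptions of this sketch, not the authors' implementation.

```python
# Feedback excitation / damping sketch: the SEA motor torque is applied along
# (excitation) or against (damping) the deformation velocity of the spring.

def sea_torque(spring_deflection_rate, mode, tau_max=0.9):
    """Return the SEA motor torque [Nm] for feedback excitation or feedback damping."""
    if abs(spring_deflection_rate) < 1e-9:
        return 0.0
    direction = 1.0 if spring_deflection_rate > 0.0 else -1.0
    if mode == "excite":      # pump energy into the spring (feedback excitation [11])
        return tau_max * direction
    if mode == "damp":        # remove energy to reduce the spring deformation
        return -tau_max * direction
    return 0.0                # any other mode: no excitation torque
```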

2.2 Gain Calculation Method for Inverted Pendulum Control

After the center of gravity of the rocker-link is lifted from the attitude with all wheels on the ground to the vicinity directly above the rotational joint, the rocker-bogie vehicle is regarded as an inverted pendulum, and state feedback control is performed to keep the center of gravity of the rocker-link in the vicinity directly above the link joint.

The state vectors X used for state feedback are shown in Figs. 4 and 5 and in (1) and (2). While all the two pairs of wheels associated with the bogie-link are grounded, as shown in Fig. 4, the state vector X is set to the vector (1) with four state variables: the horizontal position of the bogie-link center of gravity x, the inclination of the rocker-link in the world coordinate system θr, and their time derivatives ẋ and θ̇r. The input u is the force acting on the rocker-bogie vehicle in the horizontal direction due to the wheel torque.

X = [x  ẋ  θr  θ̇r]^T    (1)

In addition, as shown in Fig. 5, while one of the two pairs of wheels associated with the bogie-link is off the ground, the state vector X is set to the vector (2)


Fig. 4. State value while all the two pairs of wheels associated with the bogie-link are grounded

Fig. 5. State value while one of the two pairs of wheels associated with the bogie-link is off the ground


with six state variables: the state variables of (1), the slope of the bogie-link in the world coordinate system θb, and its time derivative θ̇b.

X = [x  ẋ  θb  θ̇b  θr  θ̇r]^T    (2)

For the calculation of the control gains, the equation of motion of the rocker-bogie vehicle is first obtained from the Euler-Lagrange equation, and then the state equation is obtained. Since this state equation is nonlinear, it is linearized by partial differentiation with respect to the state X and by substituting the equilibrium point of the vehicle at rest. From this state equation, the control gain K is obtained by using the pole assignment method. Using the obtained gain K, the state X, and the target value Xref, the input u is determined using (3).

u = K (Xref − X)    (3)
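For illustration, the sketch below computes a state-feedback gain by pole assignment with scipy.signal.place_poles and applies u = K(Xref − X); the linearized matrices A and B are placeholders for a generic cart-and-inverted-pendulum model, not the rocker-bogie dynamics derived by the authors, and the doubling of u during the wheel-lifted phase (described in the next paragraph) is included as a flag.

```python
import numpy as np
from scipy.signal import place_poles

# Pole-assignment gain design for the four-state case of Eq. (1).  A and B below are
# placeholders for a generic cart-and-inverted-pendulum model, NOT the authors' model.

m_cart, m_pend, l, g = 8.0, 2.0, 0.3, 9.81          # illustrative parameters
A = np.array([[0, 1, 0, 0],
              [0, 0, -m_pend * g / m_cart, 0],
              [0, 0, 0, 1],
              [0, 0, (m_cart + m_pend) * g / (m_cart * l), 0]])
B = np.array([[0.0], [1 / m_cart], [0.0], [-1 / (m_cart * l)]])

K = place_poles(A, B, [-2.0, -2.5, -3.0, -3.5]).gain_matrix   # shape (1, 4)

def control_input(x, x_ref, wheel_pair_lifted=False):
    """u = K (Xref - X), doubled while one bogie-link wheel pair is off the ground."""
    u = float(K @ (x_ref - x))
    return 2.0 * u if wheel_pair_lifted else u

x = np.array([0.0, 0.0, 0.05, 0.0])       # [x, x_dot, theta_r, theta_r_dot]
x_ref = np.array([0.0, 0.2, 0.0, 0.0])    # hold the link upright, command forward speed
print(control_input(x, x_ref))
```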

The input u is set to be doubled while one of the bogie-link wheel pairs is floating. In Xref, the tilt angles of the rocker-link and the bogie-link and their time derivatives are set such that the center of gravity of the rocker-link is maintained directly above the link joint. In addition, since we do not set a specific target value for the horizontal position of the center of gravity of the bogie-link x, we set x to 0 and ẋ to a value in the moving forward direction.

3 Verification of the Proposed Method in Simulation

The proposed method was verified in simulation. In this study, we used the dynamics simulator of [7]. In this simulator, the equations of motion of the model of the rocker-bogie vehicle with an attached spring, shown in Fig. 3, are derived using the Euler-Lagrange equation, and the equations of motion are numerically integrated using the fourth-order Runge-Kutta method to calculate the position and posture of the rocker-bogie vehicle at each time step. The simulation time step Δt was set to 0.001 [s].
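A generic fourth-order Runge-Kutta step of the kind used by the simulator is sketched below; the `dynamics` function is a placeholder (a single pendulum with a torque input), not the rocker-bogie model, whose equations of motion the authors derive from the Euler-Lagrange equation.

```python
import numpy as np

# Fixed-step RK4 integration with the dt = 0.001 s quoted above; dynamics() is a placeholder.

def rk4_step(f, t, x, u, dt):
    """One RK4 step for x' = f(t, x, u)."""
    k1 = f(t, x, u)
    k2 = f(t + dt / 2, x + dt / 2 * k1, u)
    k3 = f(t + dt / 2, x + dt / 2 * k2, u)
    k4 = f(t + dt, x + dt * k3, u)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def dynamics(t, x, u, g=9.81, l=0.3):
    """Placeholder dynamics: x = [theta, theta_dot], torque input u."""
    theta, theta_dot = x
    return np.array([theta_dot, -(g / l) * np.sin(theta) + u])

x, t, dt = np.array([0.1, 0.0]), 0.0, 0.001
for _ in range(1000):          # simulate 1 s of motion
    x = rk4_step(dynamics, t, x, 0.0, dt)
    t += dt
print(x)
```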

3.1 Sequence to Be Verified in Simulation

In the simulations, the sequence from the posture of all wheels grounded to the placement of the front wheel on a step was performed in the following flow. We set the height of the step to be three times the wheel diameter and 1.5 times the distance between axles.
(1) All-wheel ground contact posture.
(2) Feedback excitation.
(3) Stop excitation, constant SEA motor torque, and wheel torque.
(4) Reduce spring deformation.
(5) Inverted pendulum control.
(6) Place the front wheels on a step.
The details of each phase of this flow are described below.


(2) Feedback excitation. From the all-wheel-ground posture, feedback excitation was applied by the SEA to lift the rocker-link.

(3) Stop excitation, constant SEA motor torque, and wheel torque. In the simulation, unlike the verification with the actual machine, the lifting angle of the rocker-link obtained from the all-wheel-ground posture was only about half of the angle at which the center of gravity of the rocker-link is lifted directly above the link joint. The feedback excitation was therefore stopped, the SEA motor torque was kept constant in the direction in which the spring lifts the rocker-link, and wheel torque was added in the forward direction. The switch from the 2nd phase to the 3rd phase was made at the time when the springs started to vibrate to lift the center of gravity of the rocker-link to the position closest to the point just above the joint.

(4) Reduce spring deformation. Feedback damping (applying motor torque in the opposite direction of the deformation speed of the spring) is used to suppress the vibration of the spring, thereby reducing the influence of the spring force on the inverted pendulum control in the 5th phase. The switch from the 3rd phase to the 4th phase is made after the rocker-link is lifted up and before the spring exerts force to swing the center of gravity of the rocker-link down to the ground.

(5) Inverted pendulum control. Inverted pendulum control was performed at the time when the center of gravity of the rocker-link approached the point directly above the link joint to some extent. The speed target ẋ is given in the direction of the step. This enabled the SEA rocker-bogie vehicle to maintain its posture and move toward the step.

3.2 Verification Results in Simulation

The results of the execution of the sequence are shown in Figs. 6 and 7. The number in parentheses next to the time notation in the upper right of each scene in Fig. 6 corresponds to the step number of the sequence, as mentioned in Sect. 3.1. The red dot on the rocker-link in Fig. 6 indicates the position of the center of gravity of the rocker-link. The angle of each link and the torque of the motor and spring of the axle and SEA are shown in Fig. 7. Regarding the bogie-link angle in Fig. 7, the angle when the wheel on the rear wheel side floats was defined as positive. The spring torque of lifting the rocker-link was defined as positive. The wheel torque moving forward in the direction of the step was defined as positive. The light blue line in Fig. 7 indicates the rocker-link’s lift angle when the rocker-link’s center of gravity reaches directly above the link joint with both wheels of the bogie-link in contact with the ground. From Figs. 6 and 7, the center of gravity of the rocker-link is located near just above the link joint after the 3rd phase. In the 4th phase, the deformation of the spring is reduced by feedback vibration control so that the force acting on


Fig. 6. Sequence of motion in the simulation

Fig. 7. The angle of each link and each output torque in the simulation experiment


the rocker-link by the spring does not significantly affect the inverted pendulum control in the subsequent phases. In the scene corresponding to the 5th phase, one wheel of the bogie-link is floating, and the inverted pendulum control is different from that when both wheels are on the ground. At 2.039 [s], the rocker-link wheel is mounted on the step, and the wheel torque input is no longer applied. In this simulation, we succeeded in placing the front wheels on a step 1.5 times as high as the distance between the axles, starting from a posture with all wheels grounded.

4 Verification of the Proposed Method on Actual Robot

The proposed method was verified using the SEA rocker-bogie vehicle [7] shown in Fig. 3. The structure of this actual robot is different from the simulation model. This difference only affects which of the relative angles (between the motor link and the bogie-link, or between the motor link and the rocker-link) is related to the spring force. We set the height of the step at 21 [cm], which is 3.5 times the wheel diameter (6 [cm]) and 1.4 times the inter-axle spacing (15 [cm]). An IMU sensor is installed to measure the angle of the rocker-link in the world coordinate system. In addition, encoders are installed to measure the angle between the bogie-link and the motor link and between the rocker-link and the motor link. Together with the IMU sensor mentioned above, the angle of the bogie-link in the world coordinate system can be obtained. In addition, encoders are installed on the two axles attached to the bogie-link to measure the actual machine's travel distance (the horizontal position of the center of gravity of the bogie-link).

4.1 Sequence to be Verified on Actual Robot

The sequence from the all-wheel-ground posture to placing the front wheels on the step was performed on an actual robot. The height of the step was set at 21 [cm]. This height is 3.5 times the wheel diameter (6 [cm]) and 1.4 times the distance between axles (15 [cm]). The phases of the sequence on the actual machine are as follows.
(1) All-wheel ground contact posture.
(2) Feedback excitation.
(3) Reduce spring deformation.
(4) Inverted pendulum control.
(5) Placing the front wheels on a step.

Details of each phase are described below.

(2) Feedback excitation. From the posture in which all wheels are grounded, feedback excitation is applied with 0.9 [Nm] motor torque, and the center of gravity of the rocker-link is lifted to a position near the point just above the link joint.


(3) Reduce spring deformation. To prevent the rocker-link from being swung down by the force of the vibrating spring, the feedback excitation was stopped and the motor torque of the SEA was kept constant in the direction in which the spring lifts the rocker-link. The timing of switching the motor torque was set according to the amount of spring deformation. Then, to prevent the rocker-link from lifting up too much, the motor torque output of the SEA was stopped according to the lifting angle of the rocker-link.

(4) Inverted pendulum control. Inverted pendulum control was performed on the robot with the rocker-link lifted up. The speed target ẋ is given in the direction of the step, so the robot moves forward toward the step. The timing of the start of the inverted pendulum control was set according to the lifting angle of the rocker-link.

4.2 Verification Results on Actual Robot

The sequence verified on the actual machine is shown in Fig. 8. The numbers in parentheses above the time stamp in the bottom right corner of each scene in Fig. 8 correspond to the phase numbers of the sequence described in Sect. 4.1. The scene at 0.86 [s], however, does not correspond to any phase because no input is applied to the servo module mounted on the axle or to the SEA motors. The red line extending from the link joint to the rocker-link in Fig. 8 indicates the direction from the link joint to the center of gravity of the rocker-link. The obtained sensor data and the output torque of each actuator are shown in Fig. 9. The light blue line in Fig. 9 shows the rocker-link's lifting angle at which the rocker-link's center of gravity is just above the link joint while both wheels of the bogie-link are on the ground. Figures 8 and 9 show that a large spring torque is generated around 0.6 [s] and that the rocker-link is lifted until its center of gravity is just above the link joint. After that, the lifted posture of the rocker-link is maintained by the motions of the 3rd and 4th phases. After the transition to the inverted pendulum control in the 4th phase, the center of gravity of the rocker-link tilted from the position directly above the link joint toward the step side, and the robot moved forward while oscillating. At 2.58 [s] in Fig. 8, the rocker-link wheels were placed on the step. In this experiment, the front wheels were successfully placed on a step 3.5 times as high as the wheel diameter and 1.4 times as high as the distance between the axles.

Fig. 8. Sequence of motion on the actual robot

Fig. 9. The angle of each link and each output torque in verification on the actual robot

5 Conclusion

In this study, we proposed a method to move wheels to a high place without using friction between a vertical plate and the wheels, by using an SEA rocker-bogie vehicle with dynamic motions. The proposed method was verified both in simulation and in the real world. In both environments, the wheels attached to the rocker-link were successfully placed on a high step without vertical plates by lifting the link through the use of its elasticity and inverted pendulum control, starting from the posture in which all wheels are grounded. Future tasks include achieving "moving all wheels to a high place by further dynamic motion," which is depicted in Fig. 2 but has not yet been achieved, as well as exploring other approaches to moving a rocker-bogie vehicle to a high place.

References

1. Harrington, D.B., Voorhees, C.: The challenges of designing the rocker-bogie suspension for the Mars Exploration Rover. In: 37th Aerospace Mechanisms Symposium (2004)
2. Rivellini, T.P.: Mars rover mechanisms designed for Rocky 4. In: NASA Ames Research Center, The 27th Aerospace Mechanisms Symposium (1993)
3. Lindemann, A.R., Bickler, B.D., Harrington, D.B., Ortiz, M.G., Voorhees, J.C.: Mars exploration rover mobility development. IEEE Robot. Autom. Mag. 13(2), 19–26 (2006). https://doi.org/10.1109/MRA.2006.1638012
4. Chinchkar, D., Gajghate, S., Panchal, R., Shetenawar, R., Mulik, P.: Design of rocker bogie mechanism. Int. Adv. Res. J. Sci. Eng. Technol. 4(1), 46–50 (2017)
5. Kim, D., Hong, H., Kim, H.S., Kim, J.: Optimal design and kinetic analysis of a stair-climbing mobile robot with rocker-bogie mechanism. Mech. Mach. Theory 50, 90–108 (2012)
6. Yokoyama, S., Mizuuchi, I., Yamanaka, K.: Proposal of dynamic motion utilizing physical elasticity in rocker-bogie mechanism. In: The Proceedings of JSME Annual Conference on Robotics and Mechatronics (Robomec), pp. 2P1-J08. The Japan Society of Mechanical Engineers Press, Japan (2021). https://doi.org/10.1299/jsmermd.2021.2P1-J08
7. Yokoyama, S., Mizuuchi, I., Azuma, T., Yamanaka, K.: Proposal of dynamic motion in rocker-bogie mechanism using SEA. In: The Proceedings of JSME Annual Conference on Robotics and Mechatronics (Robomec), pp. 2A2-F01. The Japan Society of Mechanical Engineers Press, Japan (2022). https://doi.org/10.1299/jsmermd.2022.2A2-F01
8. Pratt, G.A., Williamson, M.M.: Series elastic actuators. In: Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human Robot Interaction and Cooperative Robots, vol. 1, pp. 399–406. IEEE Press (1995)
9. Choi, D., Oh, J., Kim, J.: Analysis method of climbing stairs with the rocker-bogie mechanism. J. Mech. Sci. Technol. 27, 2783–2788 (2013)
10. Azuma, T., Mizuuchi, I., Yokoyama, S., Yamanaka, K.: Proposal of dynamic motion in rocker-bogie mechanism using SEA. In: The Proceedings of JSME Annual Conference on Robotics and Mechatronics (Robomec), pp. 2P1-M10. The Japan Society of Mechanical Engineers Press, Japan (2022). https://doi.org/10.1299/jsmermd.2022.2P1-M10
11. Hondo, T., Mizuuchi, I.: Kinetic energy maximization on elastic joint robots based on feedback excitation control and excitation limit hypersurface. J. Robot. Mechatron. 25(2), 347–354 (2013). https://doi.org/10.20965/jrm.2013.p0347

Actively Variable Transmission Robotic Leg

Giorgio Valsecchi(B), Fabian Tischhauser, Jonas Junger, Yann Bernarnd, and Marco Hutter

RSL, ETH Zürich, 8092 Zürich, Switzerland
[email protected]

Abstract. Quadrupedal robots consume large amounts of energy while walking or standing, limiting their range and operation time. Moreover, their design involves multiple trade-offs between conflicting requirements, reducing their flexibility and suitability for certain applications despite their advantages in rough and unknown environments. One of the root causes of this situation is the mismatch between the limited capabilities of the actuators and the wide range of requirements for these robots. This work introduces an actively variable transmission design intended to adjust the robot’s characteristics according to its current mission requirements, resulting in more efficient power usage and longer operational times. We designed and built a single-leg prototype and tested it. Experimental results show a decrease of up to 50% in the power consumption of the knee actuator, confirming the theoretical analysis.

Keywords: Legged robot · Actuator design · Actively variable transmission

1 Introduction

In recent years, legged robots have been used in a multitude of scenarios and applications, demonstrating their potential. A good example is the DARPA Subterranean (SubT) Challenge, where multiple teams relied on legged robots [4,12,17]. Thanks to their omnidirectionality, foothold selection, and obstacle negotiation capabilities, legged robots are increasingly seen as an interesting and versatile alternative to traditional wheeled systems. While versatility is a desirable characteristic, it introduces conflicting requirements that make the design of a walking robot more challenging. Velocity, agility, high payload capacity, and efficiency are necessary but can hardly coexist, resulting in inevitable trade-offs. The versatility problem is even more challenging when the application requires extreme behaviors at different times (such as long-distance transportation tasks alternated with dynamic maneuvers), the available energy is limited (such as space missions), or the payload varies significantly (logistics and delivery tasks).


These conflicting requirements are particularly problematic for the actuators, which are among the most expensive components of robots and the bottleneck for locomotion performance.

(a) Barry standing while carrying a heavy box.

(b) CAD model of Barry's leg design.

Fig. 1. The newly developed robot Barry and its knee transmission.

The literature is rich with attempts to extend the operational envelope of robotic actuators by creating architectures capable of outperforming individual motors’ capabilities. Careful design of the actuators and their configuration can achieve better performance than simply connecting motors serially. Matching the leg topology and motion carefully can be effective in improving a robot’s capabilities, as shown in [7] and in a more generic way in [1]. While carefully shaping the Jacobian of a robotic leg can be effective for a specific task, it cannot result in a design capable of addressing a multitude of tasks. Parallel elasticity is perhaps the most intuitive way to enhance a robotic limb, using elastic elements to passively compensate for a fixed component of the load. Several works have shown the energetic advantage of this approach [9– 11]. However, springs may restrict joint motion range, increase weight, and limit the robot’s versatility. Nan et al. [13] proposed a way to engage and disengage the spring by utilizing only leg mobility, adapting the robot to different tasks. Antagonistic actuation is a possible way to further improve the concept of parallel elasticity, utilizing multiple actuators on a single joint to control the preload of parallel elasticity, as demonstrated in [14,18]. A familiar approach to address the problem is the use of a variable transmission. Vehicles of all kinds use a multitude of solutions and mechanisms to match the motor’s operational envelope with the instantaneous requirements.

Fig. 2. Actuators' torque as a function of the payload weight (panels HAA, KFE, HFE; torque [Nm] versus payload [kg]). The blue circle represents the mean torque, with the blue bars representing the variance. The green diamond represents the root mean square torque, which is relevant for the heat emissions of the actuators. Red and yellow squares represent maximum and minimum torques respectively.

Actively variable transmission is an actuation architecture used in prosthetics, for example in [16]. For prosthetics, the requirements for actuation vary greatly in a single stride and weight must be contained for ergonomic reasons. Therefore, complex architectures have been explored and optimized already. Another bioinspired solution consists of multiarticular actuation. In such architecture individual motors are coupled with multiple joints, as is the case of animal muscle-skeletal systems [5]. Bioinspired robotic designs trying to replicate this structure have proved effective in increasing efficiency and extending robots’ capabilities beyond those of the individual actuators, as shown in [2,6]. A more system-level approach to the versatility problem is to create systems with multiple locomotion modalities. Hybrid systems combining legs and wheels, as in [3] for example, are capable of a “high velocity” driving mode and “high agility” walking mode. This solution successfully increases the robots’ capabilities. However, the addition of wheels can only indirectly influence the aforementioned trade-offs within a single locomotion mode. In this work, we analyze the actuation requirements of walking robots and propose a novel design for an actively variable transmission (AVT). Our focus is on Barry, a walking robot developed at ETH Zurich, shown in Fig. 1a. In Sect. 2, we analyze data from walking robot experiments and investigate potential implementations of AVTs. Section 3 introduces the prototype design and testing setup. Finally, Sect. 4 presents and discusses the experimental results.

2 Problem Analysis and Specifications

The design of an actuator is primarily determined by the selection of its topology, motor, and gearbox. The aim of this selection process is to ensure that the actuator is capable of fulfilling a reference task or a set of tasks. This is achieved by specifying the desired torque and velocity outputs that the actuator should provide. Furthermore, optimization goals and constraints on parameters such as mass, size, transmission ratio, and power consumption are typically taken into consideration during the design process.

Topology is driven by the application and the designer's experience. Currently, the quasi-direct drive architecture dominates the field of legged robots, with notable exceptions such as Spot and ANYmal. Since Barry's actuators are quasi-direct drives, we consider the topology to be fixed. The general principles stated in this analysis should hold for other architectures; however, we did not focus on them, so our results might not apply directly.

The task curriculum imposes constraints on the actuator's design space, most importantly on peak power, peak torque, and peak velocity. Among the feasible solutions, the one that satisfies the optimization criteria is picked. Although this is an oversimplified description of the design process, one dynamic emerges clearly: the broader the set of tasks, the smaller the solution space. This process often results in a suboptimal solution that is merely sufficient for all cases but not optimal for any specific task.

Figure 2 depicts the torque requirements of Barry's actuators for different payloads. We obtained the data by loading Barry with dummy masses and commanding it to walk at its maximum velocity (around 2 m/s, in a straight line). More details on the motors used in the experiments are discussed below. Figure 3 shows the velocity requirements for the knee actuators as a function of the jump height, taking into account different take-off postures. Since the actuators are currently not capable of delivering sufficient velocity for jumping, we obtained this information from energetic considerations. More specifically, we equated the potential energy of a given jump height h to the kinetic energy of the leg moving vertically and solved for the vertical velocity $v_z$, obtaining Eq. 1. The knee joint velocity was then obtained from the leg kinematics.

$v_z = \sqrt{2gh}$    (1)

Fig. 3. Knee actuator's target velocity (derived from energetic considerations) as a function of target jump height, for different take-off knee angles.

Fig. 4. Knee actuator's limits for the transmission ratio N = 9 and for alternative values. Dashed lines represent the limits for an alternative motor.
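The mapping from jump height to knee rate can be sketched numerically: Eq. 1 gives the take-off velocity, and a simple planar two-segment leg model (equal thigh and shank lengths, leg kept vertical) converts it into a knee angular velocity. The segment length and the symmetric-leg assumption are illustrative only and do not reproduce Barry's actual kinematics.

```python
import numpy as np

g = 9.81   # gravitational acceleration [m/s^2]
l = 0.35   # hypothetical thigh = shank length [m]


def knee_rate_for_jump(h, q_knee):
    """Knee angular velocity needed at take-off for a jump of height h [m].

    v_z = sqrt(2*g*h)                               (Eq. 1)
    Hip-foot distance of a planar two-link leg (law of cosines, equal links):
        d = sqrt(2*l**2 * (1 - cos(q_knee)))
    so d_dot = (l**2 * sin(q_knee) / d) * q_dot; solving d_dot = v_z gives q_dot.
    """
    v_z = np.sqrt(2.0 * g * h)
    d = np.sqrt(2.0 * l**2 * (1.0 - np.cos(q_knee)))
    return v_z * d / (l**2 * np.sin(q_knee))


for h in (0.5, 1.0, 1.5):  # target jump heights [m]
    print(h, knee_rate_for_jump(h, q_knee=np.deg2rad(90)))  # [rad/s]
```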


Fig. 5. Barry’s leg with overlayed simplified schematics of the transmission, and a detailed schematic representation. The dashed lines represent the ball screw axis (aligned with the nut position f ), the perpendicular axis (aligned with the knee joint offset b), and the extension of the thigh link (used to define qKF E ).

The knee actuator emerges as the most problematic joint. This is a consequence of the leg topology, whereby the knee is the joint most strongly coupled with the vertical force. Adding mass to the robot directly increases the torque on the knees but only marginally affects the load on the hip actuators. Similarly, jumping requires the knees to produce a peak of power and velocity but is decoupled from the hips (assuming a vertical jumping motion). The Halodi REV01-30 motor used in the knee is capable of 50 Nm peak torque and 36 rad/s velocity, and can theoretically generate sufficient power and velocity for both payload carrying and jumping. However, it is impossible to select a single transmission ratio that enables adequate velocity for jumping and enough torque for carrying the payload. Figure 4 highlights the current limitations of the actuators and hypothetical limitations for different transmission ratios and also for a motor with an alternative winding and compatible geometry, but lower motor constant. Besides the direct implications on torque and velocity, the transmission ratio has a strong influence on the heat generated by the actuators. As shown in Eq. 2, there is a quadratic relationship between the heat flux generated by the actuator $\dot{Q}_{Act}$ (dominated by ohmic losses [15,19]), the motor constant $K_M$, and the transmission ratio $N$, meaning that a small variation of N can have a significant impact on the energetic balance and thermal performance of the robot.

$\dot{Q}_{Act} \approx \dfrac{\tau_{Motor}^2}{K_M^2} = \dfrac{\tau_{Joint}^2}{N^2 K_M^2}$    (2)

As a consequence of the impact of the knee transmission ratio on the overall performance of a quadruped robot, and considering Barry’s design, we decided to focus on it.


Figure 5 shows the original transmission of the leg with an overlaid schematic model. The transmission is composed of a ball screw that transforms the motor's motion into linear movement, which is then transferred to the knee joint by a linkage. The resulting transmission ratio between the motor and the joint is not constant across the range of motion of the joint. It is possible to derive the transmission ratio $i(q_{KFE})$ as the change of input position relative to the change of output position. For this calculation it is convenient to use the relations that connect the nut position $f$ to the rotor angle $q_m$ (Eq. 3) and to the helper angle $\gamma$ (Eq. 4). All geometric parameters are defined as in Fig. 5, with the exception of $p$, the ball screw pitch. Equation 5 is the final result of all these substitutions. For the purposes of this discussion, the transmission ratio can be approximated with the final trigonometric function, which highlights how the gear ratio depends primarily on the length of the linkage lever $r$.

$f = \dfrac{q_m\, p}{2\pi}$    (3)

$\sqrt{c^2 - (r\cos\gamma - b)^2} - r\sin\gamma = f$    (4)

$i(q_{KFE}) = \dfrac{dq_m}{dq_{KFE}} = \dfrac{\partial q_m}{\partial f}\,\dfrac{\partial f}{\partial \gamma}\,\dfrac{d\gamma}{dq_{KFE}} = \dfrac{2\pi r}{p}\left(\cos\gamma - \dfrac{\sin\gamma\,(r\cos\gamma - b)}{\sqrt{c^2 - (r\cos\gamma - b)^2}}\right) \approx \dfrac{2\pi r}{p}\cos\gamma$    (5)
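A numerical sketch of Eq. 5 is shown below. The ball screw lead, joint offset, and rod length are assumed values (the lead is chosen so that 2πr/p lands in the 6.9–11.9 ratio range reported later for r = 22–38 mm), and γ is assumed to differ from q_KFE only by a constant linkage offset; none of these are the exact dimensions of Barry's leg.

```python
import numpy as np

p = 0.020  # ball screw lead [m/rev]; assumed so that 2*pi*r/p ~ 6.9-11.9 for r = 22-38 mm
b = 0.020  # knee joint offset [m] (assumed)
c = 0.250  # linkage rod length [m] (assumed)


def transmission_ratio(gamma, r):
    """Evaluate Eq. (5): exact expression and the (2*pi*r/p)*cos(gamma) approximation."""
    root = np.sqrt(c**2 - (r * np.cos(gamma) - b) ** 2)
    exact = 2 * np.pi * r / p * (
        np.cos(gamma) - np.sin(gamma) * (r * np.cos(gamma) - b) / root
    )
    approx = 2 * np.pi * r / p * np.cos(gamma)
    return exact, approx


# gamma is assumed to follow q_KFE up to a constant offset set by the linkage
for r_mm in (22.0, 28.6, 38.0):  # candidate link lever lengths [mm]
    exact, approx = transmission_ratio(np.deg2rad(10), r_mm / 1000.0)
    print(f"r = {r_mm} mm: i_exact = {exact:.1f}, i_approx = {approx:.1f}")
```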

To quantitatively evaluate the effects of the transmission ratio, we used torque and velocity profiles from previous experiments and computed heat losses with different transmission ratios. We considered two different experiments: one in which the robot walks on flat ground, labeled "Walk", and another in which the robot carries an additional payload and walks over more challenging terrain, labeled "Payload". We then computed the knee motor's heat losses for the current static transmission (Current), for the two extreme values of the transmission ratio dictated by the geometry of the transmission (Fixed), and for a hypothetical system that can adjust the geometry during the gait cycle (Dynamic). The results of this analysis are summarized in Table 1. According to the calculations, increasing the transmission ratio can greatly reduce the knee actuator's heat losses, by up to 50%, or temporarily increase the maximum velocity for particularly dynamic motions, at the cost of heat losses that would be unsustainable for a fixed design. Another conclusion is that a dynamic AVT would not bring additional benefits in terms of heat losses but would require substantial power for its own actuation. Given the outcome of the analysis, we decided to develop an AVT capable of adjusting the transmission ratio by regulating the link lever r. We also decided to prioritize integration and energy efficiency; for this reason, we do not aim for an AVT capable of adjusting the ratio during the gait cycle. This choice relaxes the power requirements on the AVT actuation and allows for compact solutions.

Table 1. Summary of results from computations.

Dataset | Transmission type | Lever arm length [mm] | Joint torque [Nm] | Motor torque [Nm] | Heat losses [W] | Transmission ratio
Walk    | Current | 28.6  | 81  | 10.1 | 76  | 9.0-8.8
Walk    | Fixed   | 40    | 81  | 7.2  | 39  | 12.6-12.3
Walk    | Fixed   | 20    | 81  | 14.4 | 156 | 6.3-6.2
Walk    | Dynamic | 40-20 | 81  | 10.6 | 42  | 12.6-6.2
Payload | Current | 28.6  | 100 | 12.6 | 116 | 9.0-8.6
Payload | Fixed   | 40    | 100 | 9.0  | 59  | 12.6-12.0
Payload | Fixed   | 20    | 100 | 18.0 | 237 | 6.3-6.0
Payload | Dynamic | 40-20 | 100 | 13.1 | 63  | 12.6-6.0
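The per-profile computation behind Table 1 can be sketched in a few lines: Eq. 2 is evaluated along a joint-torque trace and averaged for each candidate transmission ratio. The torque trace and motor constant below are synthetic placeholders rather than Barry's logged data, so only the relative comparison between ratios is meaningful.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 500)                    # one gait cycle [s]
tau_joint = 20.0 + 12.0 * np.sin(2 * np.pi * t)   # synthetic knee torque profile [Nm]


def mean_heat_loss(tau_joint, N, K_M=0.25):
    """Average winding losses over a profile, Eq. (2): Q = tau_joint^2 / (N^2 * K_M^2)."""
    return np.mean(tau_joint**2 / (N**2 * K_M**2))


reference = mean_heat_loss(tau_joint, N=9.0)      # current fixed ratio
for N in (6.3, 9.0, 12.6):                        # ratios considered in the analysis
    q = mean_heat_loss(tau_joint, N)
    print(f"N = {N:4.1f}: {q:6.1f} W  ({q / reference:.2f}x the N = 9.0 losses)")
```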

3 Design and Testing Setup

Figure 6 shows a CAD model of the final AVT design. The mechanism allows for continuous adjustment of the lever arm length r, increasing its adaptability. An actuated tilting lever changes the distance between the knee joint axis and the link lever axis. The lever is pivot-mounted on one side to sustain the knee linkage forces without loading its actuation, which is non-backdrivable. The AVT occupies the space opposite the thigh, thus avoiding collisions.

Fig. 6. CAD representation of the AVT mechanism. The motor is in blue, the lead screw in yellow, the bearing caps in purple, the tilting lever in green, and its interface with the nut in red. The resulting linkage lever r is also represented with the same colors as in Fig. 5.

As the tilting lever moves along a circular path, it introduces a shift perpendicular to the lever arm. To minimize this, the tilting lever needs to be sufficiently long. With a tilting lever of 90 mm, the angle it can describe is limited to ±8°, resulting in a perpendicular shift of only 1 mm. By adjusting the distance between the knee joint axis and the link lever axis with the actuated tilting lever, the AVT can achieve a lever arm range of 22–38 mm, corresponding to a transmission ratio between 6.9 and 11.9, as shown in Fig. 7.

Fig. 7. Transmission ratio as a function of the knee angle q_KFE for different link lever lengths (22–38 mm).

In the physical prototype of the AVT, the tilting lever is actuated by a nut that runs on a trapezoidal screw. The screw is self-locking and is driven by a Dynamixel XM540-W270-R servo that can produce approximately 10 Nm of torque and supports multi-turn operation. The interface between the nut and the tilting lever allows for relative rotational and translational movement, because of the aforementioned shift. We designed the mechanism to withstand the forces generated by the knee actuator, which can produce linear forces up to 15 kN. The tilting lever is made of structural steel and transfers the load via needle bearings. The trapezoidal screw and nut, DIN 103 Tr. 12 × 3, are rated for 5.3 kN. The screw runs on preloaded angular contact bearings to minimize backlash. The servo motor can produce more than double the torque necessary to actuate the AVT.

To validate the performance of the AVT, the leg was mounted on a linear guide, as shown in Fig. 9. A virtual model controller, emulating a spring between the hip flexion-extension (HFE) actuator and the ground, was used when the foot was in contact with the ground. An inverse dynamics controller was used during the flight phase. Overall, the setup is similar to that used in [8].

Fig. 8. Picture of the AVT physical prototype, integrated into the thigh.

Fig. 9. Testing setup with linear rail.
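The stance-phase behaviour described above can be illustrated with a generic virtual model controller: a virtual spring-damper acting between the hip and the ground is mapped to HFE/KFE torques through the transpose of the foot Jacobian. The two-segment leg geometry, gains, and sign conventions below are hypothetical and are not the controller used in the experiments.

```python
import numpy as np

l1, l2 = 0.35, 0.35   # hypothetical thigh / shank lengths [m]
k, d = 2000.0, 80.0   # virtual spring stiffness [N/m] and damping [Ns/m]
h0 = 0.55             # desired hip height above the foot [m]


def foot_jacobian(q1, q2):
    """Planar two-link foot Jacobian (foot position in the hip frame, z pointing up)."""
    return np.array([
        [l1 * np.cos(q1) + l2 * np.cos(q1 + q2), l2 * np.cos(q1 + q2)],  # d x_foot / d q
        [l1 * np.sin(q1) + l2 * np.sin(q1 + q2), l2 * np.sin(q1 + q2)],  # d z_foot / d q
    ])


def stance_torques(q1, q2, hip_height, hip_velocity):
    """Virtual spring-damper between hip and ground, mapped to joint torques via J^T."""
    fz = k * (h0 - hip_height) - d * hip_velocity  # desired vertical contact force [N]
    F = np.array([0.0, -fz])                       # force at the foot, hip frame (sign illustrative)
    return foot_jacobian(q1, q2).T @ F             # [HFE torque, KFE torque] in Nm


print(stance_torques(q1=0.4, q2=-0.8, hip_height=0.50, hip_velocity=0.0))
```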

4 Results and Discussion

Fig. 10. Power consumption of the KFE actuator when statically loaded with different weights with the AVT in different configurations (axes: KFE power consumption [W] versus mass [kg] and link lever [mm]). The plot on the left shows the quadratic dependency between the load on the leg and the KFE actuator power consumption. The plot on the right shows the same data, highlighting the relation between the AVT and the power consumption.

To validate Eq. 2 and the AVT's capability of reducing winding losses by increasing the transmission ratio, we performed a series of static tests. Although less representative than the actual scenario experienced by a walking robot, static tests are useful to isolate individual behaviors. We controlled the leg to maintain its position on the rail of Fig. 9, with the foot below the HFE axis and with $q_{KFE} = \pi/2$, a configuration similar to that of the robot standing. We then loaded the leg with different weights and logged the power consumption of the KFE actuator, repeating the process for different AVT configurations. Figure 10 shows the experimental measurements with two plots that highlight the quadratic relationship between power consumption and mass, and between power consumption and the link lever. The measurements include the idle power consumption of the electronics and the leg's own weight. The experimental results are consistent with the theoretical trend, and the AVT can significantly reduce power consumption in the higher transmission ratio configurations, with a 60% reduction for the highest transmission ratio. The deviation from the theoretical behavior, more visible in the right plot, can be attributed to non-uniform friction in the transmission and to experimental errors.

A better understanding of the overall performance of the AVT can be obtained with continuous hopping. We tuned the virtual model controller to produce stable hopping at all AVT configurations and acquired data for 50 cycles. Figure 11 synthesizes the outcome of the tests. The left plot illustrates how the quadratic relationship between the link lever length and power consumption holds also in the dynamic case, not only in the static one. The right plot shows how shorter link lever lengths, resulting in lower transmission ratios, produce higher vertical velocities for the entire leg. The experiments confirm that the AVT can be used to control the trade-off between velocity and energy efficiency, with a 50% reduction in power consumption compared to the nominal design. However, further improvements could be achieved by developing a controller that is fully aware of the AVT configuration and capable of optimizing the jumping trajectory. Similar considerations apply to the hardware, which could be further optimized to fully leverage the potential of the AVT concept.

Fig. 11. Plots showing the power consumption and the peak vertical velocity of the leg in continuous hopping experiments, as a function of the AVT configuration. In both plots the blue dots represent the mean value and the error bars the standard deviation. The red and yellow squares represent the maximum and minimum values measured. The minimum of the power is shown in absolute value, to not stretch the plot with negative values.

5 Conclusions and Future Work

In this study, we present an actively variable transmission (AVT) for robotic legs. We begin by analyzing the loads and requirements of the robotic leg actuators and identify the knee joint as the most critical. We then introduce an AVT design for the knee actuator that leverages the existing leg's characteristics. The design enables us to control the transmission ratio and switch between a more energy-efficient, payload-oriented mode and a higher-velocity, agility-oriented mode. Finally, we validate the results on real hardware, demonstrating that the design successfully achieves the goal of controlling the transmission ratio.

Acknowledgments. This work was supported by ESA Contract Number 4000131516/20/NL/MH/ic.

References 1. Abate, A., Hurst, J.W., Hatton, R.L.: Mechanical antagonism in legged robots. In: Robotics: Science and Systems, vol. 6., Ann Arbor, MI (2016) 2. Badri-Spr¨ owitz, A., Aghamaleki Sarvestani, A., Sitti, M., Daley, M.A.: Birdbot achieves energy-efficient gait with minimal control using avian-inspired leg clutching. Sci. Robot. 7(64) (2022). eabg4055 3. Bjelonic, M., Sankar, P.K., Bellicoso, C.D., Vallery, H., Hutter, M.: Rolling in the deep-hybrid locomotion for wheeled-legged robots using online trajectory optimization. IEEE Robot. Autom. Lett. 5(2), 3626–3633 (2020) 4. Bouman, A., Ginting, M.F., Alatur, N., Palieri, M., Fan, D.D., Touma, T., Pailevanian, T., Kim, S.K., Otsu, K., Burdick, J., et al.: Autonomous spot: Long-range autonomous exploration of extreme environments with legged locomotion. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2518–2525. IEEE (2020) 5. Elftman, H.: The function of muscles in locomotion. Am. J. Physiol.-Legacy Content 125(2), 357–366 (1939) 6. Flynn, L., Geeroms, J., Jimenez-Fabian, R., Vanderborght, B., Vitiello, N., Lefeber, D.: Ankle-knee prosthesis with active ankle and energy transfer: development of the cyberlegs alpha-prosthesis. Robot. Autonom Syst. 73, 4–15 (2015). Wearable Robotics 7. Hirose, S., Arikawa, K.: Coupled and decoupled actuation of robotic mechanisms. Adv. Robot. 15(2), 125–138 (2001) 8. Hwangbo, J., Tsounis, V., Kolvenbach, H., Hutter, M.: Cable-driven actuation for highly dynamic robotic systems. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8543–8550 (2018) 9. Kamikawa, Y., Kinoshita, M., Takasugi, N., Sugimoto, K., Kai, T., Kito, T., Sakamoto, A., Nagasaka, K., Kawanami, Y.: Tachyon: design and control of high payload, robust, and dynamic quadruped robot with series-parallel elastic actuators. In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 894–901. IEEE (2021) 10. Kolvenbach, H., Hampp, E., Barton, P., Zenkl, R., Hutter, M.: Towards jumping locomotion for quadruped robots on the moon. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5459–5466. IEEE (2019) 11. Mazumdar, A., Spencer, S.J., Hobart, C., Salton, J., Quigley, M., Wu, T., Bertrand, S., Pratt, J., Buerger, S.P.: Parallel elastic elements improve energy efficiency on the steppr bipedal walking robot. IEEE/ASME Trans. Mechatron. 22(2), 898–908 (2016) 12. Miller, I.D., Cladera, F., Cowley, A., Shivakumar, S.S., Lee, E.S., Jarin-Lipschitz, L., Bhat, A., Rodrigues, N., Zhou, A., Cohen, A., et al.: Mine tunnel exploration using multiple quadrupedal robots. IEEE Robot. Autom. Lett. 5(2), 2840–2847 (2020)


13. Nan, F., Kolvenbach, H., Hutter, M.: A reconfigurable leg for walking robots. IEEE Robot. Autom. Lett. (2021) 14. Roozing, W., Li, Z., Medrano-Cerda, G.A., Caldwell, D.G., Tsagarakis, N.G.: Development and control of a compliant asymmetric antagonistic actuator for energy efficient mobility. IEEE/ASME Trans. Mechatron. 21(2), 1080–1091 (2015) 15. Seok, S., Wang, A., Chuah, M.Y., Hyun, D.J., Lee, J., Otten, D.M., Lang, J.H., Kim, S.: Design principles for energy-efficient legged locomotion and implementation on the MIT cheetah robot. IEEE/ASME Trans. Mechatron. 20(3), 1117–1129 (2015) 16. Tran, M., Gabert, L., Cempini, M., Lenzi, T.: A lightweight, efficient fully powered knee prosthesis with actively variable transmission. IEEE Robot. Autom. Lett. 4(2), 1186–1193 (2019) 17. Tranzatto, M.: Cerberus: Autonomous legged and aerial robotic exploration in the tunnel and urban circuits of the darpa subterranean challenge. J. Field Robot. (2021) 18. Tsagarakis, N.G., Morfey, S., Dallali, H., Medrano-Cerda, G.A., Caldwell, D.G.: An asymmetric compliant antagonistic joint design for high performance mobility. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5512–5517. IEEE (2013) 19. Valsecchi, G., Rudin, N., Nachtigall, L., Mayer, K., Tischhauser, F., Hutter, M.: Barry: a high-payload and agile quadruped robot. IEEE/ASME Transactions on Mechatronics (2023 (Manuscript under review))

Tensegrity Approaches for Flexible Robots: A Review

Juan C. Guacheta-Alba1(B), Angie J. Valencia-Casteneda1, Max Suell Dutra1, Mauricio Mauledoux2, and Oscar F. Aviles2

1 Robotics Laboratory, Mechanical Engineering Department, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
[email protected]
2 Davinci Research Group, Mechatronics Engineering Department, Universidad Militar Nueva Granada, Bogota, Colombia

Abstract. Tensegrity robots are an emerging class of systems that use tensegrity principles to achieve a unique combination of flexibility and strength. Tensegrity is an engineering concept that combines compression and tension elements to create lightweight and resistant structures. This approach can achieve greater adaptability and flexibility in robots’ movement and their interaction with the environment. In addition, the ability of flexible systems with tensegrity to resist and recover from impacts and deformations makes them more resistant to damage than traditional rigid systems. Also, tensegrity robots have been applied in different environments, such as planetary exploration, flexible manipulation, locomotion, and modular robotics. This review presents basic definitions of tensegrity applied to flexible robots, the leading methods for structural and dynamic analysis of tensegrity robots, and emerging applications developed in recent years and still under development. Keywords: Flexible systems · Structural analysis · Tensegrity structures · Topology study · Nature-inspired systems

1 Introduction

Tensegrity, a term coined by Buckminster Fuller, describes a structure composed of tensioned cables and compressed bars that is self-supporting and pre-stressed. From cytoskeletons to flexible muscles, this structure is found in nature and is essential in animal biomechanics, providing high mobility and robustness. In the human body, bones, muscles, tendons and tissues form a network of continuous tension that ensures their integrity, stability and structural flexibility [1]. Furthermore, tensegrity structures have a high rigidity per unit mass, and their minimal mass distribution makes them light and resistant to collisions. In addition, they can adapt to environmental changes, making them ideal for reducing mechanical components and the cost of structures [2]. These structures offer helpful properties such as a high stiffness-to-mass ratio, controllability, reliability, structural flexibility and large deployment, making them attractive for diverse applications [3].


Fig. 1. Tensegrity robots’ knowledge map by VOSviewer, for the last 5 years using databases such as Elsevier, Web of Science, IEEE, among others.

Tensegrity robots are an exciting alternative to conventional robots due to their high flexibility and adaptability to the environment. Having infinite degrees of freedom, they can move with different configurations and adapt to unpredictable situations. Their modular construction and low weight make them ideal for applications in tight spaces or inaccessible environments [4, 5]. However, the intrinsic smoothness of tensegrity can be a disadvantage in situations requiring high forces. Despite this, these mechanisms can vary their programmable stiffness and deformation, making them useful for tasks that require different stiffness levels. Figure 1 presents a knowledge map made in the VOSviewer software, analyzing a thousand articles related to tensegrity robots. In this figure are framed the most relevant topics, such as soft robots, dynamic analysis, shape search, locomotion, design, and optimization, which will be analyzed throughout this paper. Tensegrity robots are versatile and adaptable for applications where physical interaction, impact resistance, and environmental adaptability are vital factors [6, 7]. Tensegrity offers several advantages over dexterity and resilience in building mechanisms and systems. Robots formed by tensegrity structures can perform a variety of movements such as rolling, vibrating, hopping and crawling, making them suitable for applications in architecture, civil engineering, aerospace, mechanics and others [8]. Tensegrity has excellent potential in wave propagation, transfer, storage of mechanical work, and motion control. However, the modeling and control of tensegrity structures are highly nonlinear problems with multiple solutions and equilibria, making them complex and challenging for engineering [4, 9]. This paper presents the highlights of the recent advances in tensegrity robots. First, Sect. 2 compiled the leading and most innovative methodologies for structural analysis. Then, in Sect. 3 are presented recent applications in which tensegrity robots are used, observing their advantages and disadvantages. Finally, the last section mentions conclusive considerations and future work that can be developed using these systems.


2 Structural Analysis

The design and construction of tensegrity robots requires careful structural analysis. First, a static and dynamic analysis is performed, followed by a feedback process to improve the design. Equilibrium equations are defined in standard forms, such as nodal coordinates, force density, and force vector, as expressed in Eq. (1), where A is the equilibrium matrix and q the force density vector. This matrix and vector are defined in Eq. (2), where C is the connectivity matrix of the elements (bars and cables), $f_k$ the internal force of element k, and $l_k$ its length [10]. The robot's behavior may be non-smooth because cables switch between taut and loose states. To find an equilibrium state, parametric variables can be used to add degrees of freedom in the finite element model. Non-linear equilibrium equations can be solved using the Newton-Raphson method with a fair estimate of the initial solution [11]. It is also crucial to consider unwanted phenomena, such as collisions between bars, which can occur due to high actuation speeds. Therefore, different techniques, such as the energy-based method, stiffness analysis, and the force density method, are used to ensure the stability of the robots [12]. The modeling and control of these systems is a highly non-linear problem and requires a careful and detailed approach to obtain an accurate solution.

$A \cdot q = 0$    (1)

where

$A = \begin{pmatrix} C^{T}\,\mathrm{diag}(C\mathbf{x}) \\ C^{T}\,\mathrm{diag}(C\mathbf{y}) \\ C^{T}\,\mathrm{diag}(C\mathbf{z}) \end{pmatrix};\qquad q_k = \dfrac{f_k}{l_k};\qquad C_{(k,p)} = \begin{cases} 1 & p = i \\ -1 & p = j \\ 0 & \text{other nodes} \end{cases}$    (2)
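As a concrete illustration of Eqs. (1)–(2), the short script below assembles the connectivity and equilibrium matrices for a planar X-module (one of the units in Fig. 2: two crossed struts and four edge cables) and verifies that cables in tension and struts in compression with equal force-density magnitude form a self-equilibrated state. The geometry and force densities are illustrative values, not taken from any of the cited works.

```python
import numpy as np

# Hypothetical planar X-module: 4 nodes, 4 edge cables, 2 diagonal struts (z = 0)
nodes = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [1.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0]])
members = [(0, 1), (1, 2), (2, 3), (3, 0),   # cables
           (0, 2), (1, 3)]                   # struts

m, n = len(members), len(nodes)
C = np.zeros((m, n))                         # connectivity matrix, Eq. (2)
for k, (i, j) in enumerate(members):
    C[k, i], C[k, j] = 1.0, -1.0

x, y, z = nodes.T
A = np.vstack([C.T @ np.diag(C @ x),         # equilibrium matrix, Eq. (2)
               C.T @ np.diag(C @ y),
               C.T @ np.diag(C @ z)])

q = np.array([1.0, 1.0, 1.0, 1.0, -1.0, -1.0])  # cables in tension, struts in compression
print("||A q|| =", np.linalg.norm(A @ q))       # ~0 -> self-equilibrated prestress, Eq. (1)
```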
The dynamics of tensegrity systems is a crucial part of analyzing these types of robots. Linear models can be helpful for an initial approximation, but nonlinear dynamic models are necessary for an accurate description of the dynamic response of the structure. It is essential to consider the effects of inertia and damping on dynamic stability and to assess critical dynamics and dynamic weaknesses [13, 14]. Traditional dynamic modeling methods may lack accuracy because they do not consider the internal displacements of the structure's members. Bloch's theorem can be used to verify dynamic behaviors and detect the presence of defect states and bandgaps [15]. A new method of nonlinear dynamic modeling, the Cartesian spatial discretization method, has been developed to include the internal displacements of elements in the longitudinal and transverse directions for bars and cables [16]. The dynamic analysis allows the derivation of appropriate actuation strategies and shows the possibility of changing equilibrium configurations.

Designing efficient and stable structures is a vital topic in structural engineering. Several traditional structures have been realized as tensegrities; Fig. 2 presents the conventional tensegrity units used. In the search for the optimal topology, nodes are fixed, and connectivity becomes the primary design variable. Different methods are used to find the appropriate topology, such as the dynamic relaxation method, which updates the coordinates through viscous or elastic damping applied to the equilibrium equations [17]. Eigenvalues can be minimized to find an optimal structure, or combinations can be applied to obtain stable structures with a given design [18]. The non-rigid-body motion analysis method uses virtual rigidity to obtain shapes in equilibrium and solve the shape search problem. Both the statics and the dynamics of the structures are considered in the topological design, specifying constraints of nodes and elements with the same force density [19, 20]. These methods make it possible to obtain efficient and stable structures for various engineering applications.

Fig. 2. Tensegrity units by Sabouni-Zawadzka [21] from left to right: (1) X-Module, (2) 4-strut simplex, (3) 3-strut simplex, (4) truncated tetrahedron, and (5) expanded octahedron.

Table 1. Main and recent methods used for the analysis of tensegrity robots.

Topic        | Methods
Equilibrium  | Newton-Raphson method; multibody simulation; energy-based method
Static       | Approach energy; screw method; virtual working principle; force density method
Dynamic      | Cartesian spatial discretization; periodic boundary conditions; corotational formulation; FEM by ANSYS; positional formulation FEM; Bloch's theorem; Lagrangian method based on generalized coordinates
Form-finding | Stiffness matrix method; geometric stiffness method; dynamic relaxation; natural shape finding; multistep FDM; particle spring system
Control      | Robust active control; active LQR control; tendon control technique; Lyapunov stability control; decoupled data-based control; shape memory alloy (SMA)
Optimization | Ground structure method; graph theory; MOTES software; mixed-integer linear problem

The design and control of structures is a field of study in constant evolution. Applying robust active control techniques attenuates vibration modes and maintains structural integrity and stability under dynamic loads. The mechanical characteristics of structures can be controlled by self-tensioning and by the relationship of properties between cables and rods [22]. Topology search is a crucial variable in the design, and various control strategies can be applied to reduce tensions and power in the cables [23]. Optimization of the structure's mass and maximization of the rank deficiency of the force density matrix are nonlinear programming problems, addressed by the ground structure method or topological optimization [24, 25]. Table 1 presents the leading and most innovative methods investigated in the tensegrity robot literature, covering the fields of equilibrium, statics, and control, among others. Together, these techniques allow the creation of deployable, lightweight and super-stable structures, focusing on connectivity between elements and minimizing the structure's mass.

3 Recent Applications

Tensegrity structures are widely used in various areas, such as aerospace, civil, medical, and robotics. Their use extends to different fields and scales, from large structures such as modular scaffolding, infrastructure, antennas, and buildings, to more specific applications such as soft tension cages to protect a robot during landing or in extreme environments [26, 27]. One application is the design of tensegrity domes, such as the one presented in Fig. 3, developed by Chen et al. [20] with the TsgFEM design tool. In the design of transmission towers, tensegrity can reduce the area of their base structure and make it deployable. It has also been used to create lightweight antennas, where the ability to deploy and control the length of cables is essential, such as the antenna presented in Fig. 4, developed by Tibert and Pellegrino [30]. Robots with tensegrity are suitable for operating in uneven and unpredictable environments in urban and planetary exploration. However, space application is limited by the need to maintain high structural rigidity without compromising packaging efficiency [29, 30], which is why tools such as the NASA Tensegrity Robotics Toolkit (NTRT) have been developed, providing central methods for modeling, simulating, and controlling broad types of tensegrity robots [31].

Fig. 3. Deployable tensegrity dome by Ma et al. [20].

The tensegrity structures are versatile and valuable in different areas of human-scale robotics. In particular, it has been used to design modular robots and collision-resistant arms, hands and wings [26]. In addition, introducing this structure in quadruped robots has significantly improved their flexibility, adaptability and safety. A study has been conducted on the 3D printed flexible tensegrity structure with metamaterial characteristics,


Fig. 4. Hexagonal tensegrity antenna by Tibert and Pellegrino [30].

allowing flexible robots to complete complex movements [32]. To develop cooperative robots, suitable locomotion strategies for exploration and obstacle avoidance must be considered, as well as implementing simple design and control solutions [33]. One possible solution is to connect tensegrity units using electromagnets so that, when moving, they can assemble large, solid and light structures [34]. Tensegrity robots have great application potential in mechanical wave control and vibration attenuation, making them a good choice for working in extreme and damage-resistant environments. However, linear elastic overdesigns are required to ensure system robustness, and it is essential to note that resonant frequencies are related to the pretension levels of active cables [23, 35]. Robotics is rapidly moving toward creating mechanisms with complex and customizable movement capabilities and, along with tensegrity, allows the construction of flexible and impact-resistant systems. In addition, thanks to machine learning techniques, effective vibration-driven movements can be discovered allowing locomotion [8, 36]. Another advantage of tensegrity-based robots is their carrying capacity and transporting fragile objects. Furthermore, by applying suitable actuators and mechanisms, complex movements such as rolling and jumping for locomotion can be achieved [31]. Furthermore, these robots can reconfigure, allowing them to adapt to different situations and perform multifunctional tasks. However, challenges remain to be overcome, such as designing actuators that allow large but reversible deformations with stable and unstable equilibrium positions [9, 37]. A new component investigated in recent years is tensegrity joints, which allow the connection and controlled mobility of the elements of a designed robot. Tensegrity joints are essential for lightweight and safe manipulators but must be analyzed regarding actuation forces, stiffness, geometry and mass. Several designs of tensegrity joints have been developed. The most recent has been developed by Ramadoss et al. [5] and Friesen et al. [38], presented in Fig. 5, respectively. Cable-driven parallel joints offer a balanced design that reduces limb inertia and allows controllable compliance. In contrast, pin and ball joints can be selected to produce tensegrity-inspired robots with different elastic behaviors [31, 38]. Tensegrity joints offer a unique combination of strength and flexibility, making them versatile for robotics applications.


Fig. 5. Tensegrity joints from left to right: (1) 3D tetrahedral manipulator by Ramadoss et al. [5], Concept Design for a Tensegrity Joint by Friesen et al. [38] at (2) Initial position (3) Deformed position.

For constructing tensegrity robots, actuators are a fundamental part since they counteract loads and guarantee the prestressing of the mechanism. These robots can be equipped with different actuators, from piezoelectric ceramics to tape-shaped dielectric elastomers [39]. The actuators allow these robots to be used in various applications, mainly rolling, climbing, crawling, vibrating, flying and swimming. The actuators allow to support loads to which the robot is exposed and modify its shape allowing its movement and control. Figure 6 shows three recent applications of actuators in tensegrity robots, which can be located on cables using motors [40] or linear actuators [34], or on rods using linear actuators [41]. The location of the actuators on the bars has a better performance than on the cables, which has been validated in terms of displacements and internal forces. Dynamic effects must also be considered if rapid action is to be achieved [14, 23]. Moreover, tensegrity robots require highly elastic and reliable sensors, such as the force resistor sensor and accelerometer, to measure the system’s natural frequency [42]. Robotic technology has advanced significantly in the last years, especially in prosthetics and bioinspired devices. One example is using intelligent tensegrity robots to develop a human foot prosthesis that combines mechanical properties with adaptive behavior, using SMA actuation [43]. This bio-inspired design of an adaptive foot mechanism is based on the concept of tensegrity, which has also been introduced in robotic grippers and the development of a bioinspired soft robotic gripper [1]. Stephen M. Levin coined the term biotensegrity to apply it to biology, allowing understand better the mechanics of the shoulder, elbow and spine. In addition, biomimetic fish and some continuous animal-based robots such as elephants, earthworms, snakes and octopuses have been created, with the ability to traverse confined spaces and manipulate objects [44]. Furthermore, 4D printing has allowed 3D printed structures to change their configuration or function over time in response to external stimuli, such as temperature, light, and water. The applicability of 4D printing with mono material to acquire control of the movement of pieces has also been validated, where the variation of the filling percentage affects the deformation obtained by thermal stimuli [45, 46]. 3D printing has even been used to create molds for the generation of tensegrity structures that, with


Fig. 6. Tensegrity robots from top-left to bottom-right: (1) SUPERball v2 robot (expanded octahedron) by Vespignani et al. [40], (2) 3-strut simplex robot with rod actuation by Yagi et al. [41], and (3) 3-strut simplex robot with cable actuation by Meng et al. [34].

smart materials, do not need the post-assembly process. All this has allowed the design of standing robots with the capacity for flexibility, self-stability and self-adaptability.

4 Conclusions and Future Work

The design of modular mechanical systems and self-assemblies is essential to achieve reliable and safe structures. Tensegrity is a promising solution for this purpose, as it reduces the risk of failure and facilitates transport. However, fixity, repeatability and reversibility must be guaranteed to deploy robots. In addition, applying smart materials with adjustable rigidity would allow a future realization of highly adaptable systems with adjustable mechanical properties [47, 48]. To achieve these advances, theoretical and experimental results must be compared more thoroughly, and the actively controlled deployment of cables must be studied in more depth. The use of programmable mechanical components and smart materials is expected in the future, given current drawbacks such as the weight of the motors, prestressing adjustment and heat dissipation. Further experiments studying manipulators with tensegrity joints should also be conducted to advance the field of adaptive and safe robotics [49, 50].

Advances in lightweight manufacturing have presented challenges in model accuracy, design scalability, and robustness of controls. To address these problems, research seeks to develop automatic design algorithms and rapid parametric modeling of robots inspired by tensegrity. However, the complexity of robots and their equipment of sensors and actuators make their rapid manufacture difficult. Therefore, analytical designs and algorithms for flexible and customized robots, impact protection equipment systems, and vibration isolation devices based on tensegrity are also expected to be developed. The future could also involve the development of active or passive control with dampers or actuators. Nevertheless, one of the main challenges is measuring the shape of a robot without using external hardware.

Acknowledgement. The authors are grateful to Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), through Programa de Excelência Acadêmica (PROEX), for the financial support on Process Numbers 88887.685173/2022-00 and 88887.518120/2020-00, and to the FAPERJ foundation for the financial support received through the program Bolsa de doutorado nota-10, number 204.144/2022.

References 1. Liu, Y., Bi, Q., Li, Y.: Development of a bio-inspired soft robotic gripper based on tensegrity structures. s.l., IEEE (2021) 2. Zheng, Y., Asano, F., Li, F., Yan, C.: Analysis of passive dynamic gait of tensegrity robot. In: Robotics for Sustainable Future. CLAWAR 2021. Lecture Notes in Networks and Systems, vol. 324 (2022) 3. Wang, T., Post, M.A., Tyrrell, A.M.: Articulating Resilience: Adaptive Locomotion of Wheeled Tensegrity Robot. Electronics, February, vol. 11, p. 666 (2022). (February) 4. Zhao, W., Pashkevich, A., Klimchik, A., Chablat, D.: Elastostatic modeling of multi-link flexible manipulator based on two-dimensional dual-triangle tensegrity mechanism. J. Mech. Robot. 14 (2021). (September) 5. Ramadoss, V. et al.: HEDRA: A Bio-Inspired Modular Tensegrity Robot With Polyhedral Parallel Modules. s.l., IEEE (2022) 6. Allen, E.A.: Variable stiffness structures via mechanics modification and antagonistic actuation in soft robotic materials (2020) 7. Zappetti, D.: Variable-stiffness tensegrity modular robots. In: EPFL Scientific Publications (2021) 8. Shah, D.S. et al.: Tensegrity robotics. Soft Robot. 9, 639–656 (2022) 9. Wang, C., Vangelatos, Z., Grigoropoulos, C.P., Ma, Z.: Micro-engineered architected metamaterials for cell and tissue engineering. Mater. Today Adv. 13, 100206 (2022) 10. Zhang, X., Nie, R., Chen, Y., He, B.: Deployable structures: structural design and static/dynamic analysis. J. Elast. 146, 199–235 (2021). (October) 11. Kan, Z., Peng, H., Chen, B., Zhong, W.: A sliding cable element of multibody dynamics with application to nonlinear dynamic deployment analysis of clustered tensegrity. Int. J. Solids Struct. 130–131, 61–79 (2018). (January) 12. Dong, Y., Ding, J., Wang, C., Liu, X.: Kinematics analysis and optimization of a 3-DOF planar tensegrity manipulator under workspace constraint. Machines 9, 256 (2021). (October) 13. Yuan, S., Zhu, W.: A new approach to nonlinear dynamic modeling and vibration analysis of tensegrity structures. s.l., American Society of Mechanical Engineers (2022) 14. Kan, Z., Peng, H., Chen, B., Zhong, W.: Nonlinear dynamic and deployment analysis of clustered tensegrity structures using a positional formulation FEM. Compos. Struct. 187, 241–258 (2018). (March)


15. Zhang, L.-Y. et al.: Multilevel structural defects-induced elastic wave tunability and localization of a tensegrity metamaterial. Compos. Sci. Technol. 207, 108740 (2021). (May) 16. Yuan, S., Zhu, W.: A cartesian spatial discretization method for nonlinear dynamic modeling and vibration analysis of tensegrity structures. Int. J. Solids Struct. 112179 (2023) 17. Bui, H.Q., Kawabata, M., Nguyen, C.V.: A combination of genetic algorithm and dynamic relaxation method for practical form-finding of tensegrity structures. Adv. Struct. Eng. 25, 2237–2254 (2022). (May) 18. Wang, X., Cai, J., Lee, D.S., Xu, Y., Feng, J.: Numerical form-finding of multi-order tensegrity structures by grouping elements. Steel Compos. Struct. 41(2), 267–277 (2021) 19. Su, Y., Zhang, J., Ohsaki, M., Wu, Y.: Topology optimization and shape design method for large-span tensegrity structures with reciprocal struts. Int. J. Solids Struct. 206, 9–22 (2020). (December) 20. Ma, et al.: TsgFEM: tensegrity finite element method. J. Open Source Softw. 7(75), 3390 (2022) 21. Sabouni-Zawadzka, A.A.: Extreme mechanical properties of regular tensegrity unit cells in 3D lattice metamaterials. Materials 13, 4845 (2020). (October) 22. Martins, D., Gonçalves, P.J.P.: On the dynamics of a smart tensegrity structure using shape memory alloy. J. Phys. Conf. Ser. 1264 (2019). (July) 23. Feng, X., Miah, M.S., Ou, Y.: Dynamic behavior and vibration mitigation of a spatial tensegrity beam. Eng. Struct. 171, 1007–1016 (2018). (September) 24. Wang, Y., Xu, X., Luo, Y.: Form-finding of tensegrity structures via rank minimization of force density matrix. Eng. Struct. 227, 111419 (2021). (January) 25. Goyal, R., Chen, M., Majji, M., Skelton, R.: MOTES: modeling of tensegrity structures. J. Open Source Softw. 4, 1613 (2019). (October) 26. Zappetti, D. et al.: Dual stiffness tensegrity platform for resilient robotics. Adv. Intell. Syst. 4, 2200025 (2022). (May) 27. Šljivi´c, A., Miljanovi´c, S., Zlatar, M.: A new classification of deployable structures. In: E3S Web of Conferences, vol. 244, pp. 05016 (2021) 28. Krishnan, S., Li, B.: Design of lightweight deployable antennas using the tensegrity principle. s.l., American Society of Civil Engineers (2018) 29. Doney, K. et al.: Behavioral repertoires for soft tensegrity robots. s.l., IEEE (2020) 30. Tibert, A.G., Pellegrino, S.: Deployable tensegrity reflectors for small satellites. J. Spacecr. Rocket. 39(5) (2012) 31. Kim, K., Agogino, A.K., Agogino, A.M.: Rolling locomotion of cable-driven soft spherical tensegrity robots. Soft Robot. 7, 346–361 (2020). (June) 32. Gao, R. et al.: Design of a novel quadruped robot based on tensegrity structures. s.l., IEEE (2021) 33. Mintchev, S., Zappetti, D., Willemin, J., Floreano, D.: A soft robot for random exploration of terrestrial environments. s.l., IEEE (2018) 34. Meng, P., Wang, W., Balkcom, D., Bekris, K.E.: Proof-of-concept designs for the assembly of modular dynamic tensegrities into easily deployable structures. s.l., American Society of Civil Engineers (2021) 35. Yuan, X. et al.: Recent progress in the design and fabrication of multifunctional structures based on metamaterials. In: Current Opinion in Solid State and Materials Science, vol. 25, pp. 100883 (2021). (February) 36. Rieffel, J., Mouret, J.-B.: Adaptive and resilient soft tensegrity robots. Soft Robot. 5, pp. 318– 329 (2018). (June) 37. Tan, N., Hayat, A.A., Elara, M.R., Wood, K.L.: A framework for taxonomy and evaluation of self-reconfigurable robotic systems. 
IEEE Access 8, 13969–13986 (2020) 38. Friesen, J.M., Dean, J.L., Bewley, T., Sunspiral, V.: A tensegrity-inspired compliant 3-DOF compliant joint. s.l., IEEE (2018)

62

J. C. Guacheta-Alba et al.

39. Tang, Y., Li, T., Lv, Q., Wang, X.: A self-vibration-control tensegrity structure for space large-scale construction. Mech. Syst. Signal Process. 177, 109241 (2022). (September) 40. Vespignani, M., Friesen, J. M., SunSpiral, V., Bruce, J.: Design of SUPERball v2, a compliant tensegrity robot for absorbing large impacts. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid, Spain, pp. 2865–2871 (2018) 41. Yagi, S., Kang, S., Yu, S., Mahzoon, H.: Evaluation of shape-changing tensegrity structure robot for physical human-robot interaction. In: 2019 IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO), pp. 20–24. Beijing, China (2019) 42. Barkan, A.R. et al.: Force-sensing tensegrity for investigating physical human-robot interaction in compliant robotic systems. s.l., IEEE (2021) 43. Brandão, A., Savi, M.A.: Nonlinear mechanics of a smart biotensegrity human foot prosthesis. Int. J. Appl. Mech. 14 (2022). (January) 44. Wei, D. et al.: Flexible bio-tensegrity manipulator with multi-degree of freedom and variable structure. Chin. J. Mech. Eng. 33 (2020). (February) 45. Kuang, X. et al.: Advances in 4D printing: materials and applications. Adv. Funct. Mater. 29, 1805290 (2018). (November) 46. Niazy, D., Elsabbagh, A., Ismail, M.R.: Mono–material 4D printing of digital shape–memory components. Polymers 13, 3767 (2021). (October) 47. Aberoumand, M., Rahmatabadi, D., Aminzadeh, A., Moradi, M.: 4D Printing by Fused Deposition Modeling (FDM). Em: Materials Forming, Machining and Tribology. s.l.:Springer International Publishing, pp. 377–402 (2021) 48. Chavez Vega, J., Schorr , P., Kaufhold, T., Zentner, L., Zimmermann, K., Böhm, V.: Influence of elastomeric tensioned members on the characteristics of compliant tensegrity structures in soft robotic applications. Procedia Manuf. 52 (2020) 49. Liu, Y. et al.: A review on tensegrity structures-based robots. Mech. Mach. Theory 168, 104571 (2022). (February) 50. Muralidharan, V., Wenger, P.: Optimal design and comparative study of two antagonistically actuated tensegrity joints. Mech. Mach. Theory 159, 104249 (2021). (May)

Robotic Arm Development for a Quadruped Robot

Maria S. Lopes1,2(B), António Paulo Moreira1,2, Manuel F. Silva1,3, and Filipe Santos1

1 INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
[email protected], [email protected], [email protected]
2 FEUP—Faculty of Engineering, University of Porto, Porto, Portugal
[email protected]
3 ISEP—Instituto Superior de Engenharia do Porto, Instituto Politécnico do Porto, Rua Dr. António Bernardino de Almeida, 4249-015 Porto, Portugal

Abstract. Quadruped robots have gained significant attention in the robotics world due to their capability to traverse unstructured terrains, making them advantageous in search and rescue and surveillance operations. However, their utility is substantially restricted in situations where object manipulation is necessary. A potential solution is to integrate a robotic arm, although this can be challenging since the arm's addition may unbalance the whole system, affecting the quadruped locomotion. To address this issue, the robotic arm must be adapted to the quadruped robot, which is not viable with commercially available products. This paper details the design and development of a robotic arm that has been specifically built to integrate with a quadruped robot to use in a variety of agricultural and industrial applications. The design of the arm, including its physical model and kinematic configuration, is presented. To assess the effectiveness of the prototype, a simulation was conducted with a motion-planning algorithm based on the arm's inverse kinematics. The simulation results confirm the system's stability and the functionality of the robotic arm's movement.

Keywords: Quadruped manipulation · Simulation · Robotic arm · Kinematic analysis

1

Introduction

Modern scientific and technological advances have enabled the creation of more intelligent robotic systems with a broader range of applications. New systems are continually being created to overcome human fragilities and increase the productivity and profitability of operations required in industry, agriculture, surveillance and security, and even home usage [12]. In recent years, research has focused on mobile robots so that activities can be carried out completely autonomously or with minimal human contact. The three
fundamental kinds of mobile robots are wheeled, tracked, and legged robots, of which quadrupeds are one example. Compared to the other categories, they are highly beneficial in rugged and uneven terrains, which is the reason why they have sparked much interest in robotics. Because they can alter their height and have fewer floor contact surfaces, they are more efficient at avoiding obstructions. They do, however, present additional issues as a result of the higher complexity of the motion planning and control algorithms [2].

Quadruped robots, like all other mobile platforms, are incapable of manipulation. This incapability is a severe disadvantage in specific applications, since the capacity to interact with the environment is critical in various circumstances. Usually, quadrupeds can only perform localization and mapping. To address these constraints, research is proceeding into incorporating manipulation capabilities into these robots, either through the use of legs with more flexibility or the addition of robotic arms, transforming the systems into mobile manipulators [11].

When researchers choose to manipulate objects using the quadruped's legs, the system can be used for applications that do not require specific manipulations or tools. When the job is specific, such as painting or cutting, the leg's dual role does not allow for a sophisticated end-effector, since the leg must still be able to walk; thus, friction and the area of floor contact constrain the shape of the end of the limb. This approach was used in [3,18] for pressing buttons. On the other hand, adding a robotic arm can be an efficient solution; however, some characteristics must be considered. The arm must be as light as possible without limiting the possible payload.

The motion-planning approach of the system is also an important point of study. In the literature, two different methods have been utilized [11]. Some authors [5,14,20] consider the arm as an independent system; the motion planning of the arm is entirely independent of the rest of the system. The control is considerably simpler using this method. However, it only partially exploits the system's capabilities, and there is a risk of the eventual destination being inaccessible if one of the subsystems is badly positioned. The alternative option is to consider the integration as a single entity [1,6,13]. This strategy is significantly more complex in quadruped manipulators, often employs a whole-body controller, and requires an optimization procedure to produce good results.

As the system developed in this work is intended for agricultural tasks, such as pesticide spraying, using the legs to manipulate is not a feasible option. The addition of a robotic arm provides enhanced maneuverability and versatility. The researched arm possibilities were found to be unsuitable due to their price-weight-payload ratio. To overcome this challenge and enable the use of various end-effectors, the authors opted to develop a robotic arm from scratch. After careful consideration of the intended application, the authors determined that treating the robotic arm as an independent system would yield sufficiently good results for motion planning. Consequently, a separate system approach based on inverse kinematics (IK) was implemented. The developed prototype was tested in a simulated environment using ROS and Gazebo to validate the motion planning and ensure the system's stability. This approach facilitated
the identification of the optimal working volume and trajectory for the robotic arm, leading to favorable outcomes.

The remainder of this paper is structured as follows: Sect. 2 describes the system's hardware and the process of choosing the kinematic configuration. Section 3 presents the kinematic analysis of the robotic arm. Section 4 outlines the steps taken to set up the simulation environment. Finally, Sect. 5 reviews the results obtained and describes the future developments that will be carried out in the scope of this work.

2

Hardware

This work uses Aliengo [15], a quadruped robot from Unitree Robotics, as the platform to integrate a lightweight robotic arm. The Aliengo robot, with a maximum payload of 13.0 kg and power output ports of 5, 12, 19, and 24 V, was previously equipped with a localization and navigation module and a Livox Mid 70 light detection and ranging (LiDAR) sensor (Fig. 1) for environment perception [4].

Fig. 1. Aliengo quadruped robot with the LiDAR mounted on its top [4]

In the literature, two systems already integrate a robotic arm into this quadruped robot. The robotic arms used are Unitree's Z1 [17] and the ViperX 300 [9], from Trossen Robotics. The Z1 weighs around 4.3 kg and has a maximum payload of 2 kg; however, it costs around 15,000 USD. The ViperX, on the other hand, weighs 3.3 kg with 0.750 kg of payload and costs 5,000 USD. Other systems that integrate a robotic arm with a quadruped robot of similar weight and payload were studied, such as [1,5,19]. The arms chosen show a clear tendency to be Kinova arms; these weigh around 4 kg and have a 1.6 kg payload, but cost more than 20,000 USD. Given these values, the authors decided to build a robotic arm at a lower cost. To that end, it was decided that the manipulator would use igus ReBeL joints (illustrated in Fig. 2) [7,8], since they already come with a motor, control system, and encoder. These joints come in two sizes; the bigger one provides more torque. Their most important characteristics and differences are described in Table 1.

Fig. 2. Igus ReBeL revolute joints, RL-SE-80 (left) [8], RL-SE-105 (right) [7].

This joint's motor controller uses a Controller Area Network (CAN) based protocol called CPR-CAN-V2, which needs a USB-CAN adapter (a PCAN-USB adapter). The joints need two input voltages, 5 V to power the controller and 24 V to power the motors, and those voltages are available as outputs on the quadruped. Rigid and light links were needed to connect the joints. First, the authors considered designing and printing the links, but this solution would have little resistance and stiffness. The best option found was rectangular aluminum tubing weighing around 0.400 kg per meter, with a 35 × 20 mm cross-section and 1.5 mm of thickness.

Table 1. ReBeL joint specifications

Description                    RL-SE-80   RL-SE-105
Weight (g)                     330        900
Maximum output torque (N.m)    2.5        20
Output speed (rpm)             6          6
Voltage supply (V)             24         24
Logic voltage supply (V)       5          5

Initially, the idea for the manipulator structure involved using the LiDAR as a support for the end-effector. However, it was positioned at a considerable height, which would increase the system's instability. Therefore, the LiDAR was relocated to the front of the robot while maintaining a height that permitted its cables, which connect to the bottom of the device, to be properly connected. Additionally, this positioning ensured that the quadruped structure did not obstruct the LiDAR's field of view (FOV). After careful consideration and testing, it was determined that an anthropomorphic manipulator would be the most appropriate choice since the quadruped's body could fulfill the remaining degrees of freedom. To realize the final arm, the team designed and 3D printed specific
pieces that connected the joints to their adjacent links and the quadruped's torso. The complete design is illustrated in Fig. 3 and is available online.

Fig. 3. Complete robotic arm design in Fusion 360

The self-developed arm is cheaper than the ones referred to before and has a lower payload capacity; however, the length of the links can be easily changed, and different payloads can be achieved. The arm cost about 2,274.94 USD. The payload for the current configuration is ≈ 0.595 kg (limited by the third joint); this value was obtained using Fig. 4 and Eq. 1.

Fig. 4. Data to payload calculation

$$W_{max} = \frac{T_{j3} - \frac{a_2}{2} \cdot W_{a_2} \cdot g}{g \cdot a_2} = \frac{2.5 - 0.19 \cdot 0.38 \cdot 0.400 \cdot 9.8}{9.8 \cdot 0.38} \approx 0.595\ \mathrm{kg} \qquad (1)$$
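As a quick plausibility check of Eq. 1, the payload limit can be recomputed numerically. The snippet below is a minimal sketch that assumes the values quoted above (a 2.5 N.m torque limit for the third joint, a 0.38 m link, and a 0.400 kg/m tube); the variable names are illustrative and are not taken from the authors' code.

```python
# Payload estimate at the tool tip, following Eq. 1.
G = 9.8          # gravitational acceleration (m/s^2)
TAU_J3 = 2.5     # maximum output torque of the third joint (N.m), RL-SE-80
A2 = 0.38        # length of the last link (m)
RHO = 0.400      # linear density of the aluminium tube (kg/m)

def max_payload(tau_j3: float, a2: float, rho: float, g: float = G) -> float:
    """Payload left after subtracting the link's own weight, acting at a2/2."""
    link_mass = rho * a2
    return (tau_j3 - (a2 / 2.0) * link_mass * g) / (g * a2)

print(f"max payload = {max_payload(TAU_J3, A2, RHO):.3f} kg")  # ~0.595 kg
```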

3

Kinematic Analysis

Having the robotic arm complete, the authors proceeded to its kinematic analysis. This analysis is fundamental for optimizing the arm’s motion control methods and workspace.

Fig. 5. Frames definition through DH convention

According to the coordinate system established in Fig. 5, the homogeneous transformation matrix from coordinate system i−1 to coordinate system i is obtained using Eq. 2:

$$A_i^{i-1} = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (2)$$

The transformation matrix (T) between the end-effector and the base frame is obtained using Eq. 3:

$$T = A_1^0 A_2^1 A_3^2 = \begin{bmatrix} c_1 c_{23} & -c_1 s_{23} & s_1 & c_1(a_1 c_2 + a_2 c_{23}) \\ s_1 c_{23} & -s_1 s_{23} & -c_1 & s_1(a_1 c_2 + a_2 c_{23}) \\ s_{23} & c_{23} & 0 & d_1 + a_1 s_2 + a_2 s_{23} \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3)$$

For the IK, the authors used a geometric approach. From the top view (Fig. 6, left), the following equations can be obtained:

$$r = \sqrt{x_c^2 + y_c^2}, \qquad \theta_1 = \operatorname{atan2}(y_c, x_c) \qquad (4)$$

From the side view (Fig. 6, right), we can take the following equations:

$$h = z_c - d_1, \quad \theta_3 = \pm\arccos\!\left(\frac{r^2 + h^2 - a_1^2 - a_2^2}{2\,a_1 a_2}\right), \quad \theta_2 = \beta - \alpha, \quad \beta = \operatorname{atan2}(h, r), \quad \alpha = \operatorname{atan2}(a_2\sin\theta_3,\; a_1 + a_2\cos\theta_3) \qquad (5)$$
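To make the geometric IK of Eqs. 4 and 5 concrete, a minimal sketch is given below. It is illustrative only, not the authors' implementation; the link parameters a1, a2, and d1 are placeholders to be replaced with the real arm dimensions.

```python
import math

def ik_anthropomorphic(xc, yc, zc, a1, a2, d1, elbow_sign=+1):
    """Geometric IK of the 3-DOF arm following Eqs. 4-5.
    elbow_sign selects the +/- branch of theta3 (elbow-up vs. elbow-down)."""
    r = math.sqrt(xc**2 + yc**2)            # Eq. 4
    theta1 = math.atan2(yc, xc)
    h = zc - d1                              # Eq. 5
    c3 = (r**2 + h**2 - a1**2 - a2**2) / (2 * a1 * a2)
    if abs(c3) > 1:
        raise ValueError("target outside the reachable workspace")
    theta3 = elbow_sign * math.acos(c3)      # one of the two elbow configurations
    beta = math.atan2(h, r)
    alpha = math.atan2(a2 * math.sin(theta3), a1 + a2 * math.cos(theta3))
    theta2 = beta - alpha
    return theta1, theta2, theta3
```

Combining the second θ1 solution (π + θ1) with the opposite elbow branch yields the four IK solutions mentioned below.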

Fig. 6. Top (left) and side (right) view of the manipulator

It is important to note that the manipulator has four possible solutions to reach each end position: θ1 can have another solution (π + θ1), and θ3 can have the elbow-up and elbow-down solutions, as concluded from Eq. 5. Finally, the velocity kinematics were calculated, relating the velocity of each joint (q̇) to the velocity of the tool (ξ). For a revolute joint, the Jacobian can be calculated through the following equations:

$$\dot{q} = J^{-1}\,\xi, \quad \text{with} \quad J_{v_i} = z_{i-1} \times (O_n - O_{i-1}), \qquad J_{\omega_i} = z_{i-1} \qquad (6)$$

Calculating it for all the joints, the Jacobian matrix results in the following:

$$J = \begin{bmatrix} -s_1(a_1 c_2 + a_2 c_{23}) & -c_1(a_1 s_2 + a_2 s_{23}) & -a_2 c_1 s_{23} \\ c_1(a_1 c_2 + a_2 c_{23}) & -s_1(a_1 s_2 + a_2 s_{23}) & -a_2 s_1 s_{23} \\ 0 & a_1 c_2 + a_2 c_{23} & a_2 c_{23} \\ 0 & s_1 & s_1 \\ 0 & -c_1 & -c_1 \\ 1 & 0 & 0 \end{bmatrix} \qquad (7)$$
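For reference, the Jacobian of Eq. 7 can be evaluated numerically and used to map joint velocities to the tool twist. The sketch below is a small illustration with placeholder link lengths, not code from the paper.

```python
import numpy as np

def jacobian(theta, a1, a2):
    """Geometric Jacobian of the 3-DOF arm (Eq. 7); theta = (theta1, theta2, theta3)."""
    t1, t2, t3 = theta
    s1, c1 = np.sin(t1), np.cos(t1)
    s2, c2 = np.sin(t2), np.cos(t2)
    s23, c23 = np.sin(t2 + t3), np.cos(t2 + t3)
    reach = a1 * c2 + a2 * c23          # horizontal extension of the arm
    rise = a1 * s2 + a2 * s23           # vertical extension of the arm
    return np.array([
        [-s1 * reach, -c1 * rise, -a2 * c1 * s23],
        [ c1 * reach, -s1 * rise, -a2 * s1 * s23],
        [ 0.0,         reach,      a2 * c23     ],
        [ 0.0,         s1,         s1           ],
        [ 0.0,        -c1,        -c1           ],
        [ 1.0,         0.0,        0.0          ],
    ])

# Tool twist [vx, vy, vz, wx, wy, wz] produced by a joint velocity vector (rad/s)
J = jacobian((0.0, 0.5, -0.8), a1=0.38, a2=0.38)   # placeholder lengths in metres
xi = J @ np.array([0.1, 0.0, 0.05])
print(xi)
```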

4

Simulation

A simulation using ROS was developed to test the developed system's robustness, check if the system is stable, and evaluate the motion planning algorithm based on kinematics. The Aliengo robot model is available on the Unitree Robotics GitHub page [16]. Additionally, there is an example package that launches the
robot into a chosen world in Gazebo. The chosen Gazebo world was a vineyard, to allow simulation on uneven terrain. To enable efficient simulation testing, it must be possible to move the quadruped through the environment in order to test different inclinations and uneven terrain. However, the implementation available in the repository only allowed low-level control, i.e., individual control of each motor by giving it a position, speed, or torque value. To overcome this limitation, the authors adapted a Python-based high-level joystick control from an implementation [10] for the A1 robot (https://m.unitree.com/products/a1/), also from Unitree Robotics. The robotic arm was added to this implementation, and its axes were defined following the DH convention shown in Fig. 5. To visualize the robot model, the authors used RViz with the joint_state_publisher_gui package (Fig. 7), which allowed the study of the manipulator workspace by iteratively giving the joints different values.

Fig. 7. Rviz with joint state publisher

Some components, such as inertias and transmissions, had to be added to the arm model to enable the simulation. To interact with the robot model in Gazebo, the authors used the ros_control packages. These packages allow sending position, velocity, or torque commands to the joints, and the simulation environment applies a generic control loop feedback mechanism, typically a PID controller.

4.1

Motion-Planning

In light of the intended applications, it was determined that the arm of the quadruped robot would remain in a resting position while the robot was in motion, and that the quadruped would remain still while performing a task. Therefore, a separate control approach can be employed to control the system. The chosen rest position is shown in Fig. 3, with the corresponding joint values of {1.57, −0.42, −3.10}, expressed in radians.

To expedite the transition to the testing phase, the authors implemented a simple motion planning module using the manipulator's IK, equivalent to a MOVJ instruction. The plan is to enhance the motion planning capabilities later by approximating the industrial manipulator instructions MOVL and MOVC. This enhancement would involve sending multiple actions to execute a trajectory using algorithms such as A*. Considering the structure of the quadruped and the positioning of the LiDAR, one specific solution to the IK problem stands out as superior compared to the other alternatives. This study's configuration is always elbow-up, with θ1 selected from the range [0, π]. RViz markers were introduced in various positions within space to examine arm movements; by clicking on them, the program sends the tool to the designated location.

Since the motor controller of the selected articulation (igus ReBeL) does not calculate acceleration ramps or interpolation between target points, this feature was incorporated into the main controller. With position control, the joint tries to achieve the reference position as fast as possible, using the maximum velocity. Even if intermediate position points are sent, the velocity of the joint would suffer oscillations. Therefore, it was decided that the joints would be controlled using velocity commands. The algorithm pipeline is depicted in Fig. 8.

Implementing the "disable joint" feature in the simulation posed a challenge. In real-life scenarios, this would involve stopping the joint and engaging the brake mechanism. However, no brake feature exists in Gazebo, and if velocity commands with a zero value are sent continuously, the arm begins to fall, yielding to the gravitational force. One method to solve this issue would be to calculate the velocity that counteracts the force of gravity. However, another approach was adopted, since the brake would be activated in the actual hardware. ROS provides the ability to define multiple controllers for a joint simultaneously, so two controllers were defined: one for position and one for velocity. By using the position controller, the arm can be held in the same position when no commands are issued. Therefore, the joint remained in position control mode, and when movement was required, the switch_controller service was employed to activate the velocity controllers and deactivate the position ones.

The simulation validated the prototype for the chosen manipulator configuration. The motion-planning algorithm is capable of moving the end-effector to the desired position with an error smaller than 2 cm. Concerning the system's stability, the quadruped does not fall with the arm completely extended to the side (in the position {0, 0, 0} or {π, 0, 0}) and using approximately the maximum calculated tool weight (0.570 kg), as seen in Fig. 9.
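The controller switching described above can be done through the controller_manager's switch_controller service. The snippet below is a hedged sketch (ROS 1, Python); the controller names arm_position_controller and arm_velocity_controller are placeholders, and the actual names in the authors' setup may differ.

```python
import rospy
from controller_manager_msgs.srv import SwitchController, SwitchControllerRequest

def switch_to_velocity_mode():
    """Deactivate the holding position controller and activate the velocity one."""
    rospy.wait_for_service("/controller_manager/switch_controller")
    switch = rospy.ServiceProxy("/controller_manager/switch_controller", SwitchController)
    req = SwitchControllerRequest()
    req.start_controllers = ["arm_velocity_controller"]   # hypothetical controller names
    req.stop_controllers = ["arm_position_controller"]
    req.strictness = SwitchControllerRequest.STRICT
    return switch(req).ok
```

Switching back to the position controller once the motion is complete effectively "locks" the joint at its last commanded position.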

Fig. 8. Motion-planning algorithm pipeline

Fig. 9. Aliengo with the arm extended and tool with 0.570 kg in Gazebo

The results of this implementation can be seen in the following link.

4.2

Velocity Calculation

The velocity calculation implemented in the simulation adopts a trapezoidal profile. First, it undergoes a constant acceleration phase (whose duration can be defined as “accel time”) until it reaches the maximum velocity, which is already constrained by the joint limits. Then, it maintains the maximum speed until the position error surpasses a predefined threshold. Once this threshold is crossed, the velocity is gradually reduced using a PID controller considering the joint position error. This method can be applied in the real system since the joint responds to all motion commands with its position.
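A minimal sketch of such a trapezoidal velocity reference is shown below. It is illustrative only: the acceleration time, deceleration threshold, proportional gain, and minimum velocity are placeholder values, not the ones used in the paper.

```python
def velocity_reference(err, t, v_max, accel_time=0.5, decel_threshold=0.3,
                       kp=4.0, v_min=0.05):
    """Trapezoidal-style velocity reference for one joint.
    err: remaining position error (rad, signed); t: time since motion start (s)."""
    direction = 1.0 if err >= 0 else -1.0
    if abs(err) < decel_threshold:
        v = kp * abs(err)              # deceleration: proportional term of a PID on the error
    elif t < accel_time:
        v = v_max * (t / accel_time)   # constant-acceleration ramp-up
    else:
        v = v_max                      # cruise at the joint velocity limit
    v = min(v, v_max)
    if abs(err) > 1e-3:
        v = max(v, v_min)              # joints do not move below a minimum command
    else:
        v = 0.0
    return direction * v
```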

This velocity calculation also determines the joint rotation direction, and the velocity reference is inverted if the destination point is more than π radians away. Finally, a minimum velocity threshold was imposed, since the joints did not move below certain command values. Figure 10 illustrates the velocity reference between two trajectory points.

Fig. 10. Velocity reference

5

Conclusions

This paper presents a low-cost arm designed to be integrated into a quadruped robot. The arm’s position and format were chosen to optimize its workspace and minimize platform instabilities. To assess the feasibility of the prototype, the authors developed a Gazebo simulation that considered the hardware restrictions and functionality. A separate system approach was adopted for motion planning, ensuring the manipulator’s movement is entirely independent of the base. When a destination point is received, the motion planning module converts the XYZ coordinates (defined in the base frame of the arm) to the desired joint values using the inverse kinematic equations, which automatically return the optimal configuration for the given position. The authors chose not to employ the Jacobian for these movements to avoid complicating the collision-free path planning algorithm in initial testing. The end-effector’s error to the given position is less than 1.5 cm, which is irrelevant for the applications in which the manipulator will be used, and the quadruped remains stable for all the given positions. The simulation results validate the proposed approach, and the authors’ next step is to build the physical system and test the motion planning implementation in real-life scenarios. Subsequently, the motion-planning algorithm will be optimized.

References 1. Bellicoso, C.D., et al.: Alma—articulated locomotion and manipulation for a torque-controllable robot. In: Proceedings—IEEE International Conference on Robotics and Automation 2019-May, pp. 8477–8483 (2019) 2. Biswal, P., Mohanty, P.K.: Development of quadruped walking robots: a review. Ain Shams Eng. J. 12, 2017–2031 (2021) 3. Z¨ urich, E.: Anymal using an elevator. https://www.youtube.com/watch? v=gM1z60aeunU (2017). Accessed on 28 Dec 2022 4. Ferreira, J.: Mapping, localization and trajectory planning for quadruped robots in highly uneven terrains. Master thesis, FEUP. https://hdl.handle.net/10216/142110 (2022) 5. Guo, J., et al.: Research on the autonomous system of the quadruped robot with a manipulator to realize leader-following, object recognition, navigation and operation. IET Cyber Syst. Robot. 1 (2022) 6. Hamed, K.A., Kim, J., Pandala, A.: Quadrupedal locomotion via event-based predictive control and QP-based virtual constraints. IEEE Robot. Autom. Lett. 5(3), 4463–4470 (2020) 7. Igus: Rebel revolute joint RL-SE-105. https://www.igus.pt/product/20711? artNr=RL-SE-105-70-0000 (2023). Accessed on 4 Mar 2023 8. Igus: Rebel revolute joint RL-SE-80. https://www.igus.pt/product/20710? artNr=RL-SE-80-50-0000 (2023). Accessed on 4 Mar 2023 9. Li, J., et al.: Whole-body control for a torque-controlled legged mobile manipulator. Actuators 11, 304 (2022) 10. lnotspotl: a1 robot simulation—python versions. https://github.com/lnotspotl/a1 sim py (2020). Accessed on 22 Dec 2022 11. Lopes, M.S., Moreira, A.P., Silva, M., Santos, F.: A Review on Quadruped Manipulators (2023). unpublished 12. Siciliano, B., Khatib, O.: Springer Handbook of Robotics. Springer (2016) 13. Sleiman, J.P., Farshidian, F., Minniti, M.V., Hutter, M.: A unified MPC framework for whole-body dynamic locomotion and manipulation. IEEE Robot. Autom. Lett. 6, 4688–4695 (2021) 14. Ulloa, C.C., Dom´ınguez, D., Cerro, J.D., Barrientos, A.: A mixed-reality teleoperation method for high-level control of a legged-manipulator robot. Sensors 22, 8146 (2022) 15. Unitree Robotics: Aliengo. https://shop.unitree.com/products/aliengo (2020). Accessed on 28 Dec 2022 16. Unitree Robotics: ROS simulation packages for unitree robots. https://github.com/ unitreerobotics/unitree ros (2020). Accessed on 22 Dec 2022 17. Unitree Robotics: Unitree robotics aliengo + z1 for fire rescue. https://www. youtube.com/watch?v=7CVwGY65In8 (2022). Accessed on 28 Dec 2022 18. Xin, G., Smith, J., Rytz, D., Wolfslag, W., Lin, H.C., Mistry, M.: Bounded haptic teleoperation of a quadruped robot’s foot posture for sensing and manipulation. In: Proceedings—IEEE International Conference on Robotics and Automation, pp. 1431–1437 (2019) 19. Xin, G., Zeng, F., Qin, K.: Loco-manipulation control for arm-mounted quadruped robots: Dynamic and kinematic strategies. Machines 10, 719 (2022) 20. Zimmermann, S., Poranne, R., Coros, S.: Go fetch!—dynamic grasps using boston dynamics spot with external robotic arm. In: Proceedings—IEEE International Conference on Robotics and Automation 2021-May, pp. 1170–1176 (2021)

Wrench Capability Analysis of a Serial-Parallel Hybrid Leg of a Disney's Bipedal Robot

Gustavo Queiroz Fernandes1(B), Marina Baldissera de Souza1, Leonardo Mejia Rincon2, and Daniel Martins1

1 Centro Tecnológico, Campus Universitário Reitor João David Ferreira Lima—Trindade, Federal University of Santa Catarina (UFSC), CEP: 88.040-900, Florianópolis, SC, Brazil
[email protected]
2 Campus Blumenau, Federal University of Santa Catarina (UFSC), R. João Pessoa, 2750—Velha, CEP: 89.036-002, Blumenau, SC, Brazil

Abstract. One of the most fascinating applications of walking robots is in the film industry, for creating special effects and animatronics. For this reason, the Walt Disney Company is investing efforts to develop bipedal walking robots. A serial-parallel hybrid leg for a humanoid robot was designed by Disney’s team, consisting of a pair of twin hybrid chains whose main body is a 5-bar linkage. This paper analyzes the wrench capability of a planar version simplification of this serial-parallel hybrid leg, aiming to determine the leg’s posture that provides the highest supporting and impulse forces during the robot’s gait. The wrench capability is calculated through Davies’ method. This paper demonstrates that the robot applies higher impulse forces when stepping forward or provides higher resistance forces when landing on the ground if the legs are brought closer to the actuators.

Keywords: Bipedal robot · Serial-parallel hybrid leg · Wrench capability · Impulse forces

1

Introduction

Walking robots are machines designed to move and navigate by walking, mimicking the locomotion of animals or humans. Walking robots have a wide range of applications, and one of their most fascinating uses is in the film industry. They are particularly useful in movie production for creating special effects and animatronics. This is the reason why the Walt Disney Company is investing efforts in the development of bipedal walking robots through its network of research labs known as Disney Research [10]. Disney Research has developed a serial-parallel hybrid leg for a humanoid robot aiming to set a compromise between the workspace dimensions and dynamic performance [5,6].

One way to quantify the dynamic performance of a walking robot is to evaluate how well the robot uses contact and impact forces to accelerate itself, as walking robots use contact forces to produce locomotion [1]. Bipedal walking follows a recurring pattern, comprising two distinct phases: double-support and single-support. The double-support phase is initiated by the forward foot making contact with the ground and concludes with the rear foot lifting off the ground. Throughout this phase, both feet maintain contact with the ground. The single-support phase commences with one foot stationary on the ground while the other swings from the back to the front [7]. This two-step process is illustrated in Fig. 1.

Fig. 1. Bipedal walking gait cycle. Adapted from [11].

During the gait of a bipedal robot, the feet apply a force to the ground to support the machine’s weight in the double-support phase. Then, in the singlesupport phase, another force is applied to the ground to propel the robot forward. A robot’s ability to apply forces to the environment is called wrench capability. Once a robot’s agility is related to the robot’s capacity to apply or sustain forces against the ground and change its velocity, wrench capability polygons are also a way to establish the walking robot’ agility. Agile robots are capable of abrupt changes in velocity through the generation of significant acceleration levels [1]. Mapping the regions on the walking robot’s workspace where the wrench capability is as high as possible is crucial for guaranteeing the walking robot’s agility. The wrench capability maps are a helpful tool for designing novel and improved agile-walking robots. They can be used as objective functions to optimize robot legs’ dimensions for a specific application, to develop power management and control strategies, to select actuators, and to compare the performance

of candidate designs. Wrench capability can also provide useful information for detecting optimum gait patterns for a walking robot that is already developed. This paper analyzes the wrench capability of the serial-parallel hybrid leg designed by the Disney Research team [5,6], whose prototype is shown in Fig. 2a, to determine the leg’s posture that provides the highest supporting and impulse forces during the robot’s gait. In this paper, wrench capability maps are generated through the Davies method, an analogy of Kirchhoff’s laws for a network of links and couplings that associates graph and screw theories [3], and the scaling factor technique [8,9]. This is the first time the Davies method is used to evaluate the agility of a walking robot, based on a static analysis. This paper also serves as a reference for the agility evaluation of any other robotic leg, as long as it is a planar kinematic chain and its net degree of constraint, considering the contact foot-ground, equals three. This paper is structured as follows. Section 2 briefly explains how to determine the wrench capability via Davies’ method and the scaling factor technique. Section 3 describes the hybrid leg mechanism developed by Disney and the computation of the static analysis equations. Section 4 describes the calculation of the leg’s wrench capability and the analysis of the results. Finally, some final considerations are drawn in Sect. 5.

2

Wrench Capability with the Davies Method and Scaling Factor

This section briefly describes the steps for using the Davies method and the scaling factor for the wrench capability analysis of Disney’s bipedal robot. By following the steps presented in this section, one may determine the static wrench capability of any other planar robotic leg that meets the following requirement: its net degree of constraint, considering the contact between foot and ground, equals three. For more details on how to apply the Davies method and the scaling factor, see Ref. [3,8,9]. The Davies method [4] adapts both Kirchhoff’s voltage and current laws to independently solve the kinematics and statics of any multi-body system. Once this work relies exclusively on the wrenches acting on the multi-body system to calculate the robotic leg’s wrench capability, the following Davies method description addresses only the determination and solution of the static problem. To determine the wrench capability of a robotic leg, the Davies method first represents the leg mechanism as a directed coupling graph. The edges of the coupling graph indicate each of the couplings, whereas the vertices represent the rigid bodies (links) connected through the couplings. For the purpose of standardization, this work considers the edges always pointing to the links closer to the foot. Due to the need to investigate the interaction of the leg against the ground, an additional coupling representing the contact between the foot and the fixed link is also considered. For the analysis considered in this paper, the rigid link indicating the robot’s trunk is also assumed as connected to the fixed link.

The next step is then the characterization of each coupling in terms of its degree of freedom (f), its degree of constraint (c), and the order (λ) of the screw space, which is given by λ = f + c. For the planar robotic legs considered in this work, λ = 3. The degree of freedom is defined as the number of unit-independent motions allowed within the coupling. The degree of constraint has the opposite definition, representing the number of unit-independent motions mechanically constrained by that coupling. From all this, we compute the robotic leg's gross degree of constraint (C), which is the sum of all degrees of constraint (c) of the leg's couplings. Then, the Davies method uses graph theory to determine the number of vertices (n) in the graph, which coincides with the number of links in the robotic leg; the number of edges (e), which is equivalent to the number of couplings; the number of fundamental independent circuits (ν = e − n + 1); and the number of fundamental cutsets (κ = n − 1). The number of independent circuits (ν) defines the number of chords in the coupling graph. The chords are edges that can be removed from the coupling graph so that the remaining edges, the branches, make a spanning tree, a maximal subgraph in which all edges are connected but there is no circuit in it. The number of independent cutsets (κ) indicates the number of cutsets allowed at the spanning tree and is equal to the number of branches composing the spanning tree. By defining all the cutsets available, the Davies method generates the fundamental cutset matrix ([Q]_{κ×e}). Each column of [Q] represents a different coupling (coupling graph's edge). Each row of [Q] indicates a fundamental cutset that passes through one branch and as many chords as necessary. By considering the direction of the branches as positive (leaving or entering the cut area), each element q_{i,j} of [Q] is filled as follows:

– q_{i,j} = 0: the edge e_j does not belong to the fundamental cutset κ_i;
– q_{i,j} = 1: the edge e_j belongs to the fundamental cutset κ_i and has the same direction as the defining branch of κ_i;
– q_{i,j} = −1: the edge e_j belongs to the fundamental cutset κ_i and has the opposite direction to the defining branch of κ_i.

The following step is the expansion of [Q] to account for all independent actions simultaneously transmitted by each coupling. This expansion occurs by repeating each column of [Q] a number of times equal to the degree of constraint, c, of each coupling. At the end of this process, the expanded cutset matrix ([Q_A]_{κ×C}) is obtained. From screw theory, the normalized wrenches that define a pure force or a pure moment are determined as indicated in Eqs. (1) and (2), respectively. Both screws are defined by an instantaneous screw axis—ISA (ŝ)—and the coordinates of one point contained in its ISA (s_0), both of them determined from the perspective of a global reference coordinate system O_{xyz}. The first three elements of Eqs. (1) and (2) correspond to the moments, whereas the remaining elements are the forces transmitted by the coupling, respectively.

$$\hat{\$}^a = \begin{bmatrix} s_0 \times \hat{s} \\ \hat{s} \end{bmatrix} \qquad (1)$$

The Davies method then determines a unit action matrix ([Â_D]_{λ×C}) by distributing side by side the normalized wrenches that act on each coupling, in the same order used for the calculation of the expanded cutset matrix ([Q_A]). Simultaneously, the magnitudes of the forces and moments are placed together as rows of the magnitude vector {Ψ}_{C×1}.

$$\hat{\$}^a = \begin{bmatrix} \hat{s} \\ 0 \end{bmatrix} \qquad (2)$$

By taking the unit action matrix ([Â_D]), multiplying it by each row of [Q_A] in a diagonalized form, and placing the results as rows of one matrix, the network unit action matrix ([Â_N]) can also be determined, as shown in Eq. (3). In this equation, diag([Q_{A(i,:)}]) is a diagonal matrix whose main diagonal is the i-th row of [Q_A].

$$\left[\hat{A}_N\right]_{\lambda\cdot\kappa\times C} = \begin{bmatrix} [\hat{A}_D]_{\lambda\times C} \cdot \mathrm{diag}\!\left([Q_{A(1,:)}]\right) \\ [\hat{A}_D]_{\lambda\times C} \cdot \mathrm{diag}\!\left([Q_{A(2,:)}]\right) \\ \vdots \\ [\hat{A}_D]_{\lambda\times C} \cdot \mathrm{diag}\!\left([Q_{A(\kappa,:)}]\right) \end{bmatrix} \qquad (3)$$

The adapted Kirchhoff's current law states that the sum of loads transmitted by a coupling is equal to zero [2]. This law translates into a mathematical formulation by taking [Â_N], multiplying it by the magnitude vector of the forces and moments ({Ψ}), and making the result equal to an array of zeros, as represented in Eq. (4).

$$\left[\hat{A}_N\right]_{\lambda\cdot\kappa\times C} \left\{\Psi\right\}_{C\times 1} = \left\{0\right\}_{\lambda\cdot\kappa\times 1} \qquad (4)$$

The net degree of constraint (C_N) is the difference between the gross degree of constraint (C) and the rank of [Â_N]. If C_N is equal to three, Eq. (4) can be rearranged to Eq. (5), which has the internal constraints of each of the robotic leg's couplings and the actuator torques (composing {φ}_{λ·κ×1}) expressed as functions of the wrenches at the contact between the foot and the ground ({Φ}_{3×1}). Matrix [A] represents the coefficients from the arrangement of [Â_N] for Eq. (5) to hold true.

$$\left\{\varphi\right\}_{\lambda\cdot\kappa\times 1} = [A]_{\lambda\cdot\kappa\times 3}\,\left\{\Phi\right\}_{3\times 1} \qquad (5)$$

From this point, the scaling factor method is capable of determining the maximum wrench the robotic leg can apply or sustain in any direction. For the scope of this work, the ground does not sustain or apply any moment to the foot. Therefore, the scaling factor method assumes the wrench acting on the robot's foot as a unit force ({Φ}_{3×1}) in a given direction (α), as indicated in Eq. (6).

$$\left\{\Phi\right\}_{3\times 1} = \begin{bmatrix} F_x \\ F_y \\ M_z \end{bmatrix} = \begin{bmatrix} \cos(\alpha) \\ \sin(\alpha) \\ 0 \end{bmatrix} \qquad (6)$$

Next, the internal constraints and the actuator torques necessary to sustain the wrench from Eq. (6) are determined using Eq. (5). A scaling factor is then
obtained as indicated in Eq. (7), for each of the j actuators. When a wrench with magnitude scaled by this factor occurs at the foot, the torque of the actuator for which the scaling factor is minimum saturates. Therefore, for a force in a given direction α, the maximum wrench the leg can sustain or apply is indicated in Eq. (8) [8].

$$\psi = \min\left(\frac{|\tau_j^{max}|}{|\tau_j|}\right), \qquad j = 0, 1, 2, \ldots, \text{number of actuators} \qquad (7)$$

$$\left\{\Phi\right\}_{3\times 1}^{max} = \psi \begin{bmatrix} \cos(\alpha) \\ \sin(\alpha) \\ 0 \end{bmatrix} \qquad (8)$$

3

Problem Formulation for the Hybrid Leg Mechanism

Figure 2a shows a prototype of Disney’s serial-parallel hybrid leg. The leg is a 6-degree-of-freedom (DOF) mechanism, consisting of a pair of twin hybrid chains. The main body of the hybrid chain consists of a 5-bar linkage chain with two actuators for hip and knee flexion/extension. One link composing the 5-bar linkage is a ternary connected to the robot’s foot through an ankle linkage composed of a ball joint and a universal joint, which guarantees three degrees of freedom for the ankle. The ankle linkage has an active joint to provide a pitch motion. One last actuator guarantees the sixth degree of freedom of the system by allowing the leg to rotate laterally (adduction/abduction) [6]. Once the main degrees of freedom required for robot locomotion are located in the plane, this paper simplifies Disney’s bipedal leg and considers it to be a planar kinematic chain with 5 links and 7 couplings, as indicated in the schematic representation of Fig. 2b. Couplings a, e, and f are actuated revolute joints— green colored in Fig. 2b— with two internal wrenches (forces in x and y directions) and one external constraint (actuator torques in z direction). Couplings b, c, and d are passive revolute joints with only the internal forces constraints (forces in x and y directions). Coupling g represents the contact between foot and ground, which due to Disney’s mechanism dimensions, has three constraints, not presenting any freedom. A reference frame Oxyz is attached to coupling a. The directed coupling graph of Fig. 2c represents the rigid bodies of the kinematic chain as vertices and the couplings as edges. From all this information, we get: – – – – – –

Order of the screw space (λ): 3; Gross degree of constraint (C): 18; Number of edges in the directed graph (e): 7; Number of vertices in the directed graph (n): 6; Number of fundamental independent circuits (ν): 2; Number of fundamental cutsets (κ): 5.

Wrench Capability Analysis of a Serial-Parallel

81

Fig. 2. a Prototype of the serial-parallel hybrid leg [6] and b schematic representation, c graph representation of the simplified planar leg, and d fundamental cutsets.

By choosing edges d and f from the directed graph of Fig. 2c as chords and computing the fundamental cutsets for each of the branches a, e, g, b and c, as illustrated in Fig. 2d, the fundamental cutset matrix [Q] becomes as indicated in Eq. (9). The next step for the Davies method requires knowing the coordinates of the couplings for calculation of the screws. The labels designating the links’ dimensions for the planar leg considered in this work are illustrated in Fig. 3a. The numeral values for each of these dimensions are listed in Table 1, according to the lengths provided by [6]. Coupling e is vertically aligned with coupling a, 28.5 mm below it. Figure 3b shows the joints’ angles of the actuated couplings

82

G. Q. Fernandes et al.

a, e and f (θa , θe , and θf , respectively), and the fixed angle θd , which equals to 110◦ . aeg bc d f ⎡ ⎤ 1 0 0 0 0 −1 −1 a ⎢ 0 1 0 0 0 1 0⎥ e ⎥ [Q] =⎢ (9) ⎢ 0 0 1 0 0 0 1⎥ g ⎣ ⎦ 0 0 0 1 0 −1 −1 b 0 0 0 0 1 −1 −1 c

Fig. 3. a Links’ dimensions and b joints’ angles of the simplified planar leg.

Table 1. Links’ dimensions in mm. l1 l2

l3

l4

l5

l6

l7

63 140 40 110 110 30 30

The Davies method leads to a network unit action matrix [AˆN ] with 15 rows and 18 columns (18 constraints on the couplings), generating the system of equations shown in Eq. (4). As described in Sect. 2, this system of equations can be rearranged to represent the internal constraints of the revolute joints and

Wrench Capability Analysis of a Serial-Parallel

83

the actuator torques as a function of the forces (Fx and Fy ) and moment (Mz ) coming from the contact between foot and ground. For purpose of repeatability of this work, Eq. (10) indicates the system of equations obtained when θa = 85◦ and θe = 5◦ , with the foot horizontally aligned with the ground. In Eq. (10), Fxk and Fyk are the internal constraints in x and y directions for each joint k = a, b, c, d, e, and f . The symbols τzk indicate the external torques in the actuated joints (k = a, e, and f ). For joint g, Fx , Fy , and Mz are the forces and moment on the robot’s foot when it contacts the ground. ⎤ ⎡ ⎡ ⎤ −1.56e + 00 −2.14e + 02 5.44e + 00 Fxa ⎢ Fya ⎥ ⎢−2.33e − 03 −3.19e − 01 8.11e − 03 ⎥ ⎥ ⎢ ⎢ ⎥ ⎢ τza ⎥ ⎢ 2.51e − 02 3.43e + 00 −8.73e − 02⎥ ⎥ ⎢ ⎢ ⎥ ⎢ Fxb ⎥ ⎢ 4.96e − 01 −4.17e + 01 7.86e + 00 ⎥ ⎥ ⎢ ⎢ ⎥ ⎢ Fyb ⎥ ⎢ 2.33e − 03 −6.81e − 01 −8.11e − 03⎥ ⎥ ⎢ ⎢ ⎥ ⎢ Fxc ⎥ ⎢−2.51e − 02 −3.43e + 00 −9.13e − 01⎥ ⎥ ⎢ ⎢ ⎥⎡ ⎤ ⎢ Fyc ⎥ ⎢−2.33e − 03 −3.19e − 01 8.11e − 03 ⎥ Fxg ⎥ ⎢ ⎢ ⎥ ⎢Fxd ⎥ = ⎢ 2.51e − 02 3.43e + 00 −8.73e − 02⎥ ⎣ Fyg ⎦ (10) ⎥ ⎢ ⎢ ⎥ ⎢ Fyd ⎥ ⎢−2.33e − 03 −3.19e − 01 8.11e − 03 ⎥ Mzg ⎥ ⎢ ⎢ ⎥ ⎢ Fxe ⎥ ⎢ 2.51e − 02 3.43e + 00 −8.73e − 02⎥ ⎥ ⎢ ⎢ ⎥ ⎢ Fye ⎥ ⎢−2.33e − 03 6.81e − 01 8.11e − 03 ⎥ ⎥ ⎢ ⎢ ⎥ ⎢ τze ⎥ ⎢ 2.51e − 02 3.43e + 00 9.13e − 01 ⎥ ⎥ ⎢ ⎢ ⎥ ⎢Fxf ⎥ ⎢−1.00e + 00 −3.00e + 01 3.00e + 01 ⎥ ⎥ ⎢ ⎢ ⎥ ⎣Fyf ⎦ ⎣ 0.00e + 00 −1.00e + 00 0.00e + 00 ⎦ τzf 0.00e + 00 0.00e + 00 −1.00e + 00 The scaling factor then determines the maximum forces the robot leg can apply or sustain when Mz is known, respecting each actuator’s torque limit. The actuators’ torque limit is assumed as 200 N.mm, which corresponds to the maximum required torque calculated in [6]. The moment Mz is considered null. Once Mz is zero, the wrench acting on the robot’s foot is assumed as a unit force in a given direction (α), as indicated in Eq. (6). Next, the internal constraints and the actuator torques necessary to sustain this wrench are determined. A scaling factor is obtained as indicated in Eq. (7).

4

Wrench Capability of Disney’s Bipedal Robot

Figure 4 illustrates the wrench capability polygon, plotted with a deep blue full line and determined when the robot’s leg has the posture described by θa = 85◦ and θe = 5◦ , by varying the unit force direction α from 0◦ to 360◦ . The foot is assumed as always horizontally aligned with the ground. This is the same posture illustrated in Fig. 3. Each pair Fxg and Fyg in this polygon indicates the maximum force the leg can apply or sustain in that direction. For this polygon, one can observe the maximum allowable force against the ground is 7.83 N when α = 82◦ , represented by the light blue dashed arrow in Fig. 4. A maximum isotropic force representing the maximum force the robot’s leg can apply or sustain in any direction is also obtained from this polygon and is given by the

84

G. Q. Fernandes et al.

radius of the orange dashed circle in Fig. 4. The maximum isotropic force is 0.92 N when α equals 179◦ , as indicated by the red dashed arrow in Fig. 4.

Fig. 4. Disney’s robotic leg wrench capability polygon for θa = 85◦ and θe = 5◦ .

This work uses the maximum isotropic force to measure the robot’s capability to apply or sustain forces with a known posture. Varying the angle θa from 60◦ to 120◦ and θe from 0◦ to 45◦ when simultaneously considering the angle θf as the necessary value to maintain the robot’s foot parallel to the ground leads to the maximum isotropic force map presented in Fig. 5. The optimum posture (the maximum isotropic force), represented in Fig. 6, occurs when θa is 120◦ and θe is 0◦ . One can also observe the optimum solution tends to the boundaries of the problem, bringing the foot as much closer to the actuators as possible to reduce the torques at the actuators necessary to counterbalance the wrenches consequent from the contact foot-ground.

5

Conclusions

This work analyzed the wrench capability of the bipedal leg under development by the Disney Research laboratories. By looking at the maximum isotropic force the leg can apply or sustain for different postures, this paper demonstrated that the robot would have higher impulse forces when stepping forward or would provide higher resistance forces when landing on the ground if the legs are brought closer to the actuators. For the joint limits this paper considered, this optimum posture would occur when θa is 120◦ and θe is 0◦ . Future studies include optimizing link dimensions and testing the optimized mechanism in practice. Moreover, the present study focused exclusively on the

Wrench Capability Analysis of a Serial-Parallel

85

Fig. 5. Maximum isotropic forces for different θa and θe .

Fig. 6. Optimum leg posture to apply or sustain the forces to the ground.

analysis of the leg. By considering the whole robot and its mass, it is possible to assess other parameters, such as the friction force and the optimum gait for walking. The friction force may reduce the maximum force the robot can apply against the ground in the y direction, serving as another constraint for the wrench capability problem. Additionally, by working with an optimum defined gait, postures that support/apply higher forces in the direction of the motion can also be selected.

86

G. Q. Fernandes et al.

Acknowledgements. This study was financed by the Coordena¸ca ˜o de Aperfei¸coamento de Pessoal de N´ıvel Superior—Brasil (CAPES)—Finance Code 001 and by the Conselho Nacional de Desenvolvimento Cient´ıfico e Tecnol´ ogico (CNPq) [grant number 383653/2022-3].

References 1. Bowling, A.: Impact forces and agility in legged robot locomotion. J. Vib. Control. 17(3), 335–346 (2011) 2. Davies, T.: Mechanical networks-iii wrenches on circuit screws. Mech. Mach. Theory 18, 107–112 (1983). https://doi.org/10.1016/0094-114X(83)90102-7 3. Davies, T.: Freedom and constraint in coupling networks. Proc. Inst. Mech. Eng. C J. Mech. Eng. Sci. 220(7), 989–1010 (2006) 4. Davies, T.H.: Kirchhoff’s circulation law applied to multi-loop kinematic chains. Mech. Mach. Theory 16, 171–183 (1981). https://doi.org/10.1016/0094114X(81)90033-1 5. Gim, K.G., Kim, J., Yamane, K.: Design and fabrication of a bipedal robot using serial-parallel hybrid leg mechanism. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5095–5100. IEEE (2018) 6. Gim, K.G., Kim, J., Yamane, K.: Design of a serial-parallel hybrid leg for a humanoid robot. In: 2018 IEEE International Conference on robotics and automation (ICRA), pp. 6076–6081. IEEE (2018) 7. Huang, Q., Yokoi, K., Kajita, S., Kaneko, K., Arai, H., Koyachi, N., Tanie, K.: Planning walking patterns for a biped robot. IEEE Trans. Robot. Autom. 17(3), 280–289 (2001) 8. Mejia, L., Frantz, J., Simas, H., Martins, D.: Modified scaling factor method for the obtention of the wrench capabilities in cooperative planar manipulators. In: Proceedings of the 14th IFToMM World Congress, pp. 572–581 (2015) 9. Mejia, L., Simas, H., Martins, D.: Force capability in general 3 DoF planar mechanisms. Mech. Mach. Theory 91, 120–134 (2015) 10. Research, D.: Disney Research: About. https://la.disneyresearch.com/innovations/ (2023). Accessed 07 May 2023 11. Tsai, C.S., Hsieh, C.H., Qiu, W., Qi, H.: Gait analysis of a little biped robot. ICIC Express Lett. Part B Appl. Int. J. Res. Surv. 7(7), 1453–1460 (2016)

Planning and Control

Bipedal Walking Robot Control Using PMTG Architecture Vladimir Danilov1,3(B) , Konstantin Klimov2,3 , Dmitrii Kapytov2,3 , and Sekou Diane1 1 2

Institute of Control Problems of Russian Academy of Sciences, Moscow, Russia [email protected] Robotics Laboratory, Institute of Mechanics Lomonosov Moscow State University, Michurinsky prosp.1, 119192 Moscow, Russia 3 Voltbro LCC, Moscow, Russia

Abstract. Reinforcement learning based methods can achieve excellent results for robot locomotion control. However, their serious disadvantage is the long agent training time and large number of parameters defining its behavior. In this paper, we propose a method that significantly reduces training time. It is based on the Policy Modulating Trajectory Generator (PMTG) architecture, which uses Central Pattern Generators (CPG) as a gait generator. We tested this approach on an OpenAI BipedalWalker-v3 environment. The paper presents the results of this algorithm, showing its effectiveness in solving a locomotion problem over challenging terrain.

Keywords: Reinforcement learning robot

1

· Locomotion · CPG · Walking

Introduction and Related Work

The development of walking machines and their real-world application represents a serious challenge in terms of locomotion control systems. Research on automatic control systems for such applications began in the 60s [1]. Mark Raibert made significant progress by creating algorithms that provide dynamically stable motion [2]. Thanks to the increase in computing power and the advances of control algorithms, the development of walking robots became substantially more widespread in the last decade. At the moment, the most popular systems use analytical algorithms or artificial intelligence. Analytical methods show excellent results. The most popular at the moment is the Model Predictive Control (MPC) method. Researchers from the Italian Institute of Technology (IIT) used this method when developing a controller for the HyQReal robot [3]. It allows the robot not only to move steadily over uneven surfaces, but also to pull a 3.3-ton aircraft. Sleiman et al. from ETH Zurich used MPC to control both locomotion and a manipulator attached to the robot’s body [4]. Bjelonic et al. applied this method to control a walking robot c The Author(s), under exclusive license to Springer Nature Switzerland AG 2024  E. S. E. Youssef et al. (Eds.): CLAWAR 2023, LNNS 811, pp. 89–100, 2024. https://doi.org/10.1007/978-3-031-47272-5_8

90

V. Danilov et al.

which additionally had wheels attached to its legs [5]. Thereby, it was possible to increase the speed of movement, while maintaining the ability to walk on rough terrain. Cheetah robots also used MPC [6]. The developers showed impressive results in terms of the robot velocity. Although these methods demonstrate excellent locomotion capabilities, they have some drawbacks. They require significant prior knowledge of the target robotic platform and its tasks [7], a very accurate dynamic robot model, which is often difficult to obtain [8], and the need to manually tune the parameters of the algorithms [9]. Moreover, separate controllers are used for each type of locomotion, which have a different architecture and are designed and configured separately [10]. The second group are algorithms based on artificial intelligence. Reinforcement learning (RL) methods have become increasingly popular in recent years. Margolis et al. showed impressive results using this approach in the Mini-Cheetah robot controller [11,12]. Using this algorithm, it was possible to increase robot velocity and the Froude number [13] compared to the basic algorithm based on MPC. Although these algorithms have a large number of parameters, their tuning can be automated. The combination of reinforcement learning and powerful function approximators such as neural networks make it possible to solve a wide range of locomotion problems [14]. However, at the same time, there are two serious problems: long training time and the reality gap. To solve the first problem, one can use the parallel training of several thousand agents, reducing the training time to several minutes [15]. Predefined gait generators can also be used, and the agent task in this situation is only to optimize their trajectories [7]. The problem of the first work is that the robot is not able to change its velocity. On the second work, the robot moves only along the lateral plane and cannot change its gait online. In order to overcome the reality gap, it is necessary to develop an accurate mathematical model of the robot’s servo drives, to randomize mechanical parameters and signals from feedback sensors during training, and also to simulate external force disturbances acting on the robot [8,10]. This algorithm provided stable and dynamic movement on such robots as the Minitaur [7,8], Unitree A1 [9], and ANYmal [10]. Based on the above information, it can be stated that reinforcement learning is the most promising approach. In this work we developed a PMTG architecture [7] where CPG was utilized instead of a conventional gait generator. This combination provides smooth movement, makes it possible to change the gait parameters online (duty factor, frequency, length and height of the stride), and significantly reduces the training time of the agent. The previous paper [16] dealt with the development of a CPG-based gait generator for a four-legged robot. In this work, in order to test the PMTG architecture, it was decided to use a simplified two-dimensional model of the OpenAI BipedalWalker-v3 walking robot [17]. See Sect. 2 for a detailed description of this environment. This environment

Bipedal Walking Robot Control Using PMTG Architecture

91

is popular among developers of locomotion algorithms, since it can be used to check the basic functionality of an algorithm. Our approach, based on the PMTG architecture, should have the following capabilities: – Ensure the movement of the robot both on slightly uneven terrain and serious obstacles: stairs, stumps, and pits; – Ensure movement with a desired velocity along the longitudinal axis of the robot; – The CPG-based gait generator should generate limb wave motions and have the ability to change the duty factor, length, height, and frequency of the stride online.

Table 1. Observation vector

Num    Observation                  Min    Max
0      hull angle                   0      2π
1      hull angular velocity        −∞     ∞
2      vel x                        −1     +1
3      vel y                        −1     +1
4      hip joint 1 angle            −∞     ∞
5      hip joint 1 speed            −∞     ∞
6      knee joint 1 angle           −∞     ∞
7      knee joint 1 speed           −∞     ∞
8      leg 1 ground contact flag    0      1
9      hip joint 2 angle            −∞     ∞
10     hip joint 2 speed            −∞     ∞
11     knee joint 2 angle           −∞     ∞
12     knee joint 2 speed           −∞     ∞
13     leg 2 ground contact flag    0      1
14–23  LIDAR readings               −∞     ∞

2 Environment Description

Fig. 1. Environment

The BipedalWalker-v3 environment from OpenAI (Fig. 1a) is a 2D model of a bipedal robot walking on a relatively flat surface. Each leg of the robot consists of two links. The task of the agent is to control the robot so that it can reach the end of the map. The BipedalWalkerHardcore-v3 version (Fig. 1b) has the same dynamic model of the robot, but obstacles, pits, and stairs are added to the map. In both cases the problem is considered solved when the value of the

average reward over 100 episodes is greater than 300. Each episode ends when the robot touches the surface or reaches the final point. The observation vector consists of 24 elements. These include the position and velocity of the center of gravity, the joint angles and angular velocities, ground contacts, and LIDAR readings. The last one is very important for completing the hardcore version. All of the above data are contained in Table 1. The action vector presented in Table 2 consists of four elements, each of which represents the desired torque in the joints (Fig. 2). The reward function encourages the robot to move along the horizontal axis as fast as possible, keeping the body in a horizontal position while consuming as little energy as possible. When falling or trying to walk backwards, the agent is penalized. The reward function has the following form:

r_fw = (13/3) (p_x(t) − p_x(t−1)) — moving forward reward, (1)

r_hull = −5 (|ϑ(t)| − |ϑ(t−1)|) — hull deviation penalty, (2)

r_τ = −0.028 Σ_{i=1}^{4} |a_i| — torque penalty, (3)

r_es = −100 if p_x < 0, and 0 otherwise — premature stop penalty, (4)

R = r_fw + r_hull + r_τ + r_es — total reward. (5)
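Written out in code, the per-step reward of Eqs. (1)–(5) looks as follows; this is a minimal illustrative sketch with hypothetical variable names, not the Gym source.

```python
def default_step_reward(px, px_prev, hull_angle, hull_angle_prev, actions):
    """Per-step reward of Eqs. (1)-(5) for BipedalWalker-v3 (illustrative sketch)."""
    r_fw = 13.0 / 3.0 * (px - px_prev)                        # moving forward reward, Eq. (1)
    r_hull = -5.0 * (abs(hull_angle) - abs(hull_angle_prev))  # hull deviation penalty, Eq. (2)
    r_tau = -0.028 * sum(abs(a) for a in actions)             # torque penalty, Eq. (3)
    r_es = -100.0 if px < 0 else 0.0                          # premature stop penalty, Eq. (4)
    return r_fw + r_hull + r_tau + r_es                       # total reward, Eq. (5)
```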

3 Control Algorithm Architecture

3.1 Algorithm Overview

A reinforcement learning controller for walking robots is usually a neural network that uses an observation vector as input and outputs the desired angles or torques of the joints in the robot’s legs [14]. Such a system is very expensive in terms of


Fig. 2. Designation of robot parts

Table 2. Action vector

Num    Action                       Min    Max
0      Hip 1 (Torque/Velocity)      −1     +1
1      Knee 1 (Torque/Velocity)     −1     +1
2      Hip 2 (Torque/Velocity)      −1     +1
3      Knee 2 (Torque/Velocity)     −1     +1

agent training time. Our method is based on the PMTG architecture proposed by Iscen et al. [7]. The distinguishing feature of the architecture is the separation of the gait generator from the learning agent. The block diagram of this architecture is shown in Fig. 3.

Fig. 3. Block diagram of PMTG architecture

The gait generator uses input parameters such as the height, length, and frequency of the stride and the duty factor. The result is the desired foot positions u_CPG of each robot leg. A walking robot cannot achieve stable movement on uneven terrain by constantly generating the same wave movements with its limbs. Therefore, an RL agent has been added to the architecture. As input, it receives the observation vector of the BipedalWalker-v3 environment (Table 1) and additionally the desired velocity of the robot υ_x. At each time step, the agent sets the input parameters for the gait generator (stride frequency ω_sw,


stride length S_L, stride height S_H, and duty factor β) and desired foot positions u_RL, which are then added to the output of the gait generator, optimizing the robot movements. The sum u of the signals u_RL and u_CPG then goes to the inverse kinematics solver as input, which converts foot positions into desired joint angles. These values are processed by the PD controller, which outputs the required torques for the robot's servos.
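The data flow described above can be summarized in a few lines. The sketch below assumes hypothetical cpg, ik_solver and pd_controller helpers (they are not defined in the paper) and only illustrates how the agent output is split into gait-generator parameters and residual foot positions.

```python
import numpy as np

def pmtg_control_step(agent_action, cpg, ik_solver, pd_controller, joint_state):
    """One PMTG control cycle (sketch; helper objects are assumed, not from the paper)."""
    # The first four agent outputs modulate the gait generator.
    omega_sw, beta, s_h, s_l = agent_action[:4]
    u_cpg = cpg.step(omega_sw, beta, s_h, s_l)   # desired foot positions from the CPG
    u_rl = np.asarray(agent_action[4:])          # residual foot-position corrections
    u = u_cpg + u_rl                             # combined desired foot positions
    q_des = ik_solver(u)                         # inverse kinematics: positions -> joint angles
    return pd_controller(q_des, joint_state)     # PD controller: joint angles -> torques
```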

3.2 CPG-Based Gait Generator

The main difference between the proposed method and [7] is utilizing CPG [16] as a gait generator adapted for the BipedalWalker-v3 design (Fig. 4).

Fig. 4. Block diagram of CPG-based gait generator

Consider first the CPG Network. Its task is to form sawtooth phase signals for each leg. The CPG Network consists of two symmetrically connected neurons, each of which is an oscillator that modulates the phase signal for a specific leg. There are several types of oscillators [18]. We chose the Hopf oscillator because it is the least computationally complex. The following are example equations for one oscillator:

ẋ_i = α(μ − r_i²) x_i − ω_i y_i + η Σ_{j=1}^{4} (x_j cos λ_ji − y_j sin λ_ji), (6)

ẏ_i = α(μ − r_i²) y_i + ω_i x_i + η Σ_{j=1}^{4} (x_j sin λ_ji − y_j cos λ_ji), (7)

r_i = √(x_i² + y_i²), (8)

ω_i = ω_sw ( β/(e^(−a y_i) + 1) + 1/(e^(a y_i) + 1) ), (9)

φ_i = arctan2(y_i, x_i), (10)

where α—convergence rate, μ—amplitude, i—oscillator number, ω_i—stride frequency, β—duty factor, ω_sw—stride frequency in the swing phase, a—alternation rate coefficient between swing and stance phases, λ_ji = φ_i − φ_j—phase difference between legs i and j, φ_i ∈ [0; 2π], x_i, y_i—oscillator internal state,


η—coupling coefficient, φ_i—current phase of the oscillator, used as a synchronization signal input for the formation of the foot trajectories, φ_i ∈ [−π; 0)—stance phase, φ_i ∈ [0; π]—swing phase. In this work, since the control is performed for a bipedal robot, the phase differences are represented by only one vector: Φ = [π, 0]. By changing the parameters β and ω_sw, it is possible to smoothly change the form of the signal φ_i. Figure 5a plots φ_i as a function of time when the stride frequency ω_sw changes from 1 to 2 Hz at time t = 2.5 s.

Now let's look at the Mapping Function block. Its task is to form the desired trajectory of the robot's feet based on the phase signal φ_i. It takes the values of the clock signal φ_i, stride length S_L, and stride height S_H and generates the desired foot positions P_xi, P_yi for both legs. Calculation of the foot end trajectory is done using the following equations.

If φ_i ∈ [−π; 0):

P_xi = −( (S_L/π) φ_i + S_L/2 ) + x_idle, (11)

P_yi = y_idle. (12)

If φ_i ∈ [0; π]:

P_xi = −(S_L/(2π)) sin(4φ_i) + (S_L/π) φ_i − S_L/2 + x_idle, (13)

P_yi = −(S_H/(2π)) sin(4φ_i) + (2 S_H/π) φ_i + y_idle,           φ_i ∈ [0; π/2],
P_yi =  (S_H/(2π)) sin(4φ_i) − (2 S_H/π) φ_i + 2 S_H + y_idle,   φ_i ∈ [π/2; π], (14)

where P_xi, P_yi—position of the ith foot, S_H, S_L—stride height and length, [x_idle, y_idle]^T—transition vector from the foot end to the body coordinate system. For the BipedalWalker-v3 environment the values x_idle = 0, y_idle = −1.8 were chosen. The Mapping Function results are shown in Fig. 5b.
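A direct transcription of the mapping function, following Eqs. (11)–(14) as reconstructed above and using the x_idle, y_idle values chosen for BipedalWalker-v3, is given below as a sketch.

```python
import numpy as np

def foot_trajectory(phi, s_l, s_h, x_idle=0.0, y_idle=-1.8):
    """Desired foot position for phase phi in [-pi, pi], Eqs. (11)-(14) (sketch)."""
    if phi < 0.0:                                   # stance phase, Eqs. (11)-(12)
        p_x = -(s_l / np.pi * phi + s_l / 2.0) + x_idle
        p_y = y_idle
    else:                                           # swing phase, Eqs. (13)-(14)
        p_x = (-s_l / (2.0 * np.pi) * np.sin(4.0 * phi)
               + s_l / np.pi * phi - s_l / 2.0 + x_idle)
        if phi <= np.pi / 2.0:
            p_y = (-s_h / (2.0 * np.pi) * np.sin(4.0 * phi)
                   + 2.0 * s_h / np.pi * phi + y_idle)
        else:
            p_y = (s_h / (2.0 * np.pi) * np.sin(4.0 * phi)
                   - 2.0 * s_h / np.pi * phi + 2.0 * s_h + y_idle)
    return p_x, p_y
```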

Fig. 5. CPG-based gait generator working results


Fig. 6. Kinematic scheme of the two-dimensional robot

3.3 Inverse Kinematic Solver

The kinematic scheme of the robot is shown in Fig. 6. The IK Solver transforms the target foot position into the joint angles. The equations have the following form:

l_2 = √(x_pos² + y_pos²), (15)

θ_1 = arctan(x_pos/y_pos) + arccos(l_2/(2 l_1)), (16)

θ_2 = −arccos(l_2/(2 l_1)), (17)

where x_pos, y_pos—foot end position, l_1—length of each leg link, l_2—distance from hip to foot end, θ_1, θ_2—hip and knee angles, respectively.
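The solver is a few lines of code. The sketch below follows Eqs. (15)–(17) as reconstructed above; the clipping of the arccos argument is a numerical safeguard added here, not part of the original equations.

```python
import numpy as np

def leg_ik(x_pos, y_pos, l1):
    """Hip and knee angles from the foot position, Eqs. (15)-(17) (sketch)."""
    l2 = np.sqrt(x_pos ** 2 + y_pos ** 2)                   # hip-to-foot distance, Eq. (15)
    ratio = np.clip(l2 / (2.0 * l1), -1.0, 1.0)             # numerical safeguard for arccos
    theta1 = np.arctan(x_pos / y_pos) + np.arccos(ratio)    # hip angle, Eq. (16)
    theta2 = -np.arccos(ratio)                              # knee angle, Eq. (17)
    return theta1, theta2
```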

3.4 RL Agent

In the PMTG architecture, the RL agent performs two tasks: modulating the gait generator and optimizing the output trajectories of the gait generator. In addition, the controller must ensure the robot's movement with a desired velocity along the longitudinal axis. Thus, the vectors of observations and actions and the reward function have been changed.

Table 3. Proposed action vector for the PMTG controller

Num    Action    Num    Action
0      ω_sw      4      x_pos1
1      β         5      y_pos1
2      S_H       6      x_pos2
3      S_L       7      y_pos2


The smallest change concerns the observation vector: the desired velocity variable υ_x* is added to the parameters from Table 1. The new action vector is shown in Table 3. Instead of the desired torques in the joints, it consists of the desired positions of the robot's feet. The new reward function also takes into account the desired velocity:

r_fw = (4/30) exp(−(υ* − υ)²/0.2), (18)

r_hull = −5 (|ϑ(t)| − |ϑ(t−1)|), (19)

r_τ = −0.56 Σ_{i=1}^{12} |a_i|, (20)

R = r_fw + r_hull + r_τ. (21)
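A sketch of the modified per-step reward is given below. The squared velocity error is taken with a negative sign inside the exponential so that the term is maximal at the desired velocity; this sign is our reading of Eq. (18) as reconstructed above, and the variable names are illustrative.

```python
import numpy as np

def pmtg_step_reward(v_des, v, hull_angle, hull_angle_prev, actions):
    """Per-step reward of Eqs. (18)-(21) (illustrative sketch)."""
    r_fw = 4.0 / 30.0 * np.exp(-(v_des - v) ** 2 / 0.2)       # velocity tracking, Eq. (18)
    r_hull = -5.0 * (abs(hull_angle) - abs(hull_angle_prev))  # hull deviation penalty, Eq. (19)
    r_tau = -0.56 * np.sum(np.abs(actions))                   # action penalty, Eq. (20)
    return r_fw + r_hull + r_tau                              # total reward, Eq. (21)
```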

During training, algorithms such as PPO, DDPG, TD3, and SAC were used. Section 4 shows the training results for each algorithm.

4 Results

The standard implementation of BipedalWalker-v3 assumes that the trained agent directly controls the robot, trying to reach the maximum velocity. In our implementation, the agent modulates the gait generator and optimizes its outputs, while maintaining the desired robot velocity. This section shows the training results for the following implementations:

– Vanilla RL algorithms;
– PMTG architecture without desired velocity;
– PMTG architecture with desired velocity.

Hyperparameter optimization for each training cycle was performed using the Tree-structured Parzen Estimator algorithm [19] and the Optuna library [20] (a minimal example of such a study is sketched after the list below). Additionally, to improve the effectiveness of training for the hardcore version, the following techniques were used:

– Within one time step, one action is applied three times;
– The final reward is zero when the robot falls;
– The total reward is multiplied by five;
– Noise is added to the action vector;
– During training, the probability of encountering an obstacle is increased by three times.
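As an illustration of the hyperparameter search, the sketch below sets up a minimal Optuna study with the TPE sampler. The hyperparameter names are typical SAC settings and train_and_evaluate is a placeholder; neither is taken from the authors' code.

```python
import optuna

def train_and_evaluate(params):
    """Placeholder: train SAC with the given hyperparameters and return the mean reward."""
    return 0.0  # replace with the actual training/evaluation routine

def objective(trial):
    params = {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True),
        "gamma": trial.suggest_float("gamma", 0.95, 0.999),
        "tau": trial.suggest_float("tau", 0.001, 0.02),
        "batch_size": trial.suggest_categorical("batch_size", [128, 256, 512]),
    }
    return train_and_evaluate(params)

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=50)
print(study.best_params)
```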

Figure 7 shows the learning curves for various algorithms using the default reward function. In the standard version of the BipedalWalker-v3 (Fig. 7a), Vanilla TD3 and SAC are able to achieve an average reward of 300 points, but in the hardcore version (Fig. 7b) they are not able to complete the task. Since SAC performs best in solving both problems, it was decided to use it for the PMTG architecture. We can see the following on the plots:

98

V. Danilov et al.

Fig. 7. Reward values as a function of simulation steps for BipedalWalker-v3 during training

Fig. 8. The learning curve for PMTG+SAC when moving at a desired velocity

– In the standard version, PMTG+SAC is able to solve the training problem much faster. It took the algorithm 300k time steps to reach 304.24 reward points, which equates to 696 episodes.
– In the hardcore version, the PMTG+SAC algorithm solved the problem, scoring 302.92 points over 7280 episodes. At the time of writing this paper, this is the best result among the algorithms that can solve this problem [21]. Since the reward is multiplied by 5 to speed up training, in Fig. 7b the reward values are divided by 5 for clarity. As a result, the robot is able to quickly run through the map without a single fall, overcoming all obstacles.1

In order for the robot to move at the desired velocity, random values of the desired velocity, ranging from 0 to 0.6 m/s with uniform distribution, were fed as input during the training process. The desired velocity was changed at time intervals ranging from 1 to 5 s, also drawn from a uniform distribution. The learning curve for the standard BipedalWalker-v3, when moving at the desired velocity, is shown in Fig. 8. It took 4.6M time steps to achieve a result in which the robot is able to move steadily; nevertheless, in the end the robot was able to move at the given velocity. Figure 9 shows the robot velocity as a function of time. A supplementary video is available via this link.2

1 https://youtu.be/h9M4VnhJPTs.
2 https://youtu.be/75HU gmHlZE.


Fig. 9. Plot of robot’s velocity as a function of time. The black curve displays the real robot velocity, the blue one displays the desired robot velocity

5 Conclusions and Future Work

The proposed method allows relatively fast training of an RL agent for walking locomotion tasks. The method was tested on the BipedalWalker-v3 environment, which has a standard and a hardcore version. In the standard version the robot moves on a relatively flat surface; in the hardcore version it moves over a rough surface. Compared to vanilla RL agents, our architecture is able to speed up training several times for the standard BipedalWalker-v3. At the same time, where ordinary RL agents are not able to solve the hardcore version of the problem, our PMTG-based approach is able to solve it, and can do so quickly. In addition, we managed to train an agent to control the robot movement at a given velocity. In the future, we are going to utilize this method in the development of a controller for a four-legged walking robot.

References

1. McGhee, R.B.: Finite state control of quadruped locomotion. Simulation 9(3), 135–140 (1967). https://doi.org/10.1177/003754976700900308
2. Raibert, M.H.: Legged Robots that Balance. MIT Press, Cambridge (1986)
3. Villarreal, O., Barasuol, V., Wensing, P.M., Caldwell, D.G., Semini, C.: MPC-based controller with terrain insight for dynamic legged locomotion. In: 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 2436–2442 (2020)
4. Sleiman, J.P., Farshidian, F., Minniti, M.V., Hutter, M.: A unified MPC framework for whole-body dynamic locomotion and manipulation. IEEE Robot. Autom. Lett. 6(3), 4688–4695 (2021)
5. Bjelonic, M., Grandia, R., Harley, O., Galliard, C., Zimmermann, S., Hutter, M.: Whole-body MPC and online gait sequence generation for wheeled-legged robots. In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8388–8395 (2021)
6. Di Carlo, J., Wensing, P.M., Katz, B., Bledt, G., Kim, S.: Dynamic locomotion in the MIT Cheetah 3 through convex model-predictive control. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1–9 (2018)


7. Iscen, A., Caluwaerts, K., Tan, J., Zhang, T., Coumans, E., Sindhwani, V., Vanhoucke, V.: Policies modulating trajectory generators. In: Conference on Robot Learning, pp. 916–926 (2018)
8. Tan, J., Zhang, T., Coumans, E., Iscen, A., Bai, Y., Hafner, D., Vanhoucke, V.: Sim-to-real: learning agile locomotion for quadruped robots (2018). arXiv:1804.10332
9. Kumar, A., Fu, Z., Pathak, D., Malik, J.: RMA: rapid motor adaptation for legged robots (2021). arXiv:2107.04034
10. Hwangbo, J., Lee, J., Dosovitskiy, A., Bellicoso, D., Tsounis, V., Koltun, V., Hutter, M.: Learning agile and dynamic motor skills for legged robots. Sci. Robot. 4(26) (2019). https://doi.org/10.1126/scirobotics.aau5872
11. Margolis, G.B., Yang, G., Paigwar, K., Chen, T., Agrawal, P.: Rapid locomotion via reinforcement learning (2022). arXiv:2205.02824
12. Margolis, G.B., Chen, T., Paigwar, K., Fu, X., Kim, D., Kim, S., Agrawal, P.: Learning to jump from pixels (2021). arXiv:2110.15344
13. Alexander, R.M.: Optimization and gaits in the locomotion of vertebrates. Physiol. Rev. 69(4), 1199–1227 (1989)
14. Haarnoja, T., Zhou, A., Hartikainen, K., Tucker, G., Ha, S., Tan, J., Kumar, V., Zhu, H., Gupta, A., Abbeel, P., et al.: Soft actor-critic algorithms and applications (2018). arXiv:1812.0590
15. Rudin, N., Hoeller, D., Reist, P., Hutter, M.: Learning to walk in minutes using massively parallel deep reinforcement learning. In: Conference on Robot Learning, pp. 91–100 (2022)
16. Danilov, V., Diane, S.: CPG-based gait generator for a quadruped robot with sidewalk and turning operations. In: Robotics in Natural Settings: CLAWAR 2022, pp. 276–288 (2022). https://doi.org/10.1007/978-3-031-15226-9_27
17. Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., Zaremba, W.: OpenAI Gym (2016). arXiv:1606.01540
18. Yu, J., Tan, M., Chen, J., Zhang, J.: A survey on CPG-inspired control models and system implementation. IEEE Trans. Neural Netw. Learn. Syst. 25(3), 441–456 (2013)
19. Yu, J., Tan, M., Chen, J., Zhang, J.: A survey on CPG-inspired control models and system implementation. IEEE Trans. Neural Netw. Learn. Syst. 25(3), 441–456 (2013)
20. Akiba, T., Sano, S., Yanase, T., Ohta, T., Koyama, M.: Optuna: a next-generation hyperparameter optimization framework. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 2623–2631 (2019)
21. OpenAI Gym Leaderboard. https://github.com/openai/gym/wiki/Leaderboard

Adaptive Suspension System Position-Force Control of Wheeled Wall-Pressed In-Pipe Climbing Robot

Sergey Jatsun and Andrei Malchikov(B)

Department of Mechanics, Mechatronics and Robotics, Southwest State University, Kursk 305040, Russia
[email protected]

Abstract. The article considers the implementation of position-force control of the adaptive suspension of a three-module wheeled robot designed to monitor ventilation pipeline systems. Mathematical equations have been developed that describe the dynamics of the controlled robot movement inside a vertical section of a pipeline. The mathematical model includes a nonlinear model of dry friction, which makes it possible to simulate the slippage of the robot's wheels relative to the pipe surface. The results of computational experiments are presented, demonstrating the operability of the proposed technique and the high performance of the adaptation mechanism when the diameter and surface properties of the pipe change.

Keywords: In-pipe robot · Adaptive suspension · Wall-pressed mechanism · Mathematical modeling · Wheeled robot

1 Introduction

Today, one of the most important parts of any building is the ventilation system, which is a structure of pipes arranged to ensure air exchange by supplying and extracting air. Air ducts are round or rectangular; they can be located horizontally, vertically or at an angle, and have a complex pattern of intersections, bends and dead ends. For the effective functioning of the ventilation system, systematic maintenance, condition monitoring and repair are needed. Access to ventilation systems is often difficult, which makes it technically complicated and quite costly to maintain the operational properties of pipeline systems. One of the ways to solve the problem of ventilation system monitoring is to use robotic systems that can independently move inside air ducts, performing operations to clean the pipes and determine their state [1–4]. Such devices can be equipped with various diagnostic equipment or tools for removing contaminants from the pipe surface [2–5]. Pipeline robots have been developed for a long time. Designs with caterpillar, wheeled, walking and other types of locomotion are known [6–10], which are applicable to the described tasks. Pipeline schemes often have a complex structure, including various tees, connectors, bends, and other fragments, requiring mechanisms for adapting the


robot to the shape and geometry of the ducts. A number of works [9–16], including the works of the authors [17, 18], are devoted to the issues of robot adaptation to a variable pipe diameter and orientation. The use of a linkage mechanism to adapt to the shape and dimensions of the pipe is well known, including the use of serial elastic elements in the drive [11, 12]. However, the use of position-force control to regulate the wall-press force has not been studied enough, despite the fact that it is the control of the force at the contact point that determines the efficiency of movement and the safety of using in-pipe robots. In this paper, methods for implementing and numerically modeling the position-force control of the wall-press mechanism (WPM) of a modular wheeled robot designed to monitor ventilation systems are considered. Designing a complex multi-loop control system requires, at the first stage, simulation on a mathematical model, determining the main force relations and features of the system dynamics. To test the applicability of position-force control and determine the optimal parameters of the controller, numerical simulation of the robot's motion inside a vertical pipe was performed.

2 Object and Task Statement Description

The robot considered in this work for monitoring the condition of ventilation pipeline systems is a three-module wheeled structure with an independent drive for each of its 9 wheels. Three modules is the minimum sufficient configuration to overcome most obstacles inside the pipe. Increasing the number of modules would increase the maneuverability of the robot, but the general principles of the control system would not change. The modules are pivotally connected to each other, which allows changing the overall configuration of the robot inside the pipe and overcoming changes in its shape. A similar approach is described in [6, 8, 10]. In order to provide a controlled change in the shape of the robot and to regulate the force pressing the robot to the surface of the pipe, the robot is equipped with nine independent mechanisms, each including a linear drive and a spring. When generating the supply voltage for the electric drives of the WPM, both the deformation of the suspension spring and the data from the inertial sensors of the robot and the angular velocity sensors of all wheels are taken into account. The adaptive capability of the suspension is to automatically adjust the force at the point of contact of the wheel with the pipe surface, depending on the orientation of the robot, the shape of the pipeline, as well as the amount of wheel slippage on the surface, which can occur in the presence of contamination or damage to the pipeline surface. The distinguishing feature of the proposed design is the implementation of an adaptive suspension based on linear guides, in which the spring deformation is a linear function of the preload force. In the scheme shown in Fig. 1, the following designations are accepted: 1—wheel, 2—bevel gearbox, 3—planetary gearbox of the wheel drive, 4—suspension system spring, 5—brushless wheel drive motor, 6—linear guide system, 7—linear drive of the WPM, 8—screw gear, 9—gearbox of the WPM drive, 10—belt drive of the WPM, 11—fragment of the robot frame. The WPM drive unit works as follows. The electric motor 7, through the gearbox 9 and the toothed belt drive 10, sets in motion the trapezoidal screw, which moves


Fig. 1. 3D-model of wheeled wall-pressed in-pipe climbing robot (a) and wall-pressed wheeled mechanism (b).

the guide nut 6. The screw drive nut transmits the force through the spring 4 to the wheel assembly, which is pressed against the pipe. The control system thus obtains information about the positions of the wheels and about the force applied to the spring when the wheels come into contact with the supporting surface. To rotate a wheel, the control system generates a supply voltage for the electric motor 5, which, through the planetary gear 3 and the bevel gear 2, turns the wheel 1. Thus, by controlling the positions of the wheels, which define the configuration of the robot, and the pressing forces at the contact points, the control system can provide movement inside pipelines of varying shape and orientation while performing the tasks of ventilation system monitoring. A structural scheme that implements the described adaptation algorithm for one WPM module is shown in Fig. 2.


Fig. 2. Scheme WPM control system.

The following designations are accepted on this diagram: V * —the linear velocity of the robot inside the pipe set by the operator; ω* —the desired angular velocity of the


wheel (provided there is no slippage); U_ω—wheel drive motor voltage; V—the actual linear speed of the robot, measured by an optical sensor; ΔV—the value of slippage of a point on the wheel relative to the pipe surface; N*—the desired value of the normal reaction, which provides the required friction force between the wheel and the pipe surface; ΔN—the difference between the desired and actual values of the normal reaction at the contact point; U_N—the voltage on the motor of the wall-pressed mechanism; (x_1 − x_2)—the spring deformation; c—the stiffness of the spring; r_w—the wheel radius. There are 3 independent regulators in the circuit: R_ω—wheel rotation speed regulator, R_V—slippage regulator, R_N—WPM normal reaction regulator. All 3 circuits contain the corresponding sensors S_ω, S_N, S_V. The scheme also contains the designations of the wall-pressed mechanism actuator—WPMDr, and the wheel drive—WDr. The proposed system makes it possible to implement position-force control of the wall-pressed mechanism, providing robot movement without slipping, as well as to control the linear speed of the robot inside the pipe. The speed control loop, which provides a smooth change of the robot speed, uses an incremental digital PD controller. On the one hand, the PD controller is quite simple in digital implementation; on the other hand, it allows obtaining the required accuracy and speed, which was established in the course of preliminary computational experiments. When writing the equation for the wheel drive voltage U_ω, we take the following designations: e_i = V_i* − ω_i r_w is the linear velocity error at the i-th cycle of the controller, where V_i* is the linear speed of the robot set by the operator and ω_i r_w is the current value of the linear speed (at the i-th cycle of the controller), determined by the wheel speed sensor S_ω; Δe_i = (V_i* − ω_i r_w) − (V*_{i−1} − ω_{i−1} r_w) is the change in the linear speed error between the current i-th and the previous cycle of the controller. Then, according to the proposed control strategy, the motor supply voltage can be represented as:

U_ωi = k_pω e_i + k_dω (Δe_i / Δt), (1)

where k_pω, k_dω are the proportional and differential coefficients of the linear speed PD controller, and Δt is the time interval between controller operation cycles. In the two-loop control system for the wall-press force, we use a P-controller in terms of the slip ΔV, which forms the required preload force N*, and a PD-controller for the normal reaction force. Let us take the following notation: ΔV_i = (ω_i r_w − V_i) is the amount of slip measured by the sensors at the i-th cycle of the controller, Δ_i = (x_2i − x_1i) is the value of the spring deformation at the i-th cycle of the controller. By analogy with the expression for the wheel drive controller, we get:

U_Ni = k_pN (k_pV ΔV_i − c Δ_i) + k_dN ((k_pV ΔV_i − c Δ_i) − (k_pV ΔV_{i−1} − c Δ_{i−1})) / Δt, (2)

where k_pN, k_dN are the proportional and differential coefficients of the wall-pressed mechanism motor PD-regulator, k_pV is the proportional coefficient of the slippage P-regulator, and c is the known stiffness of the spring suspension. The choice of controller type is based on the results of preliminary simulations. It is possible to use other types of controllers, including non-linear or fuzzy ones; however, the parameters of the transient

Adaptive Suspension System Position-Force Control

105

process for a given system are determined by the properties of the electromechanical system, and the selected types of regulators are minimally sufficient. Tuning WPM controllers is a complex non-trivial task one of the most effective ways to solve it is mathematical modeling, followed by digital probing of the parametric space of the model. This method consists in sequential enumeration of the controller parameters in a given range and obtaining multidimensional surfaces describing the dependencies of the main indicators of the quality of regulation (speed, static error, overshoot, integral error, and others), followed by the search for function minima and determining the optimal controller settings. The features of the software implementation of the method, as well as its applicability for optimizing the drive systems regulators, are described in detail in the works [19, 20].

3 Mathematical Model of the WPM

To test the performance of the proposed control system algorithms, as well as to obtain the required parameters of the controllers, it is necessary to develop a mathematical model. To compile the equations that describe the controlled movement of the WPM, consider the calculation scheme of the device (Fig. 3).


Fig. 3. Calculation scheme of WPM and wheel drive.

On this calculation scheme: 1—wall-pressed mechanism motor (maxon RE13), 2— spur gear (maxon GS16V 22:1), 3—robot frame, 4—timing belt (5M-15), 5—trapezoidal screw (TRR 10x2), 6—nut (TRSP-10x2), 7—spring, 8—drive wheel. The diagram does not show a wheel actuator, consisting of an electric motor (maxon Ec-i30), a planetary gear (maxon GP32A 66:1) and a bevel gear pair with a gear ratio of 1:1 for turning the axis of rotation of the drive. For the mathematical description of this device, we will take a number of assumptions. When modeling the suspension, we will assume that the wheel drive with mass m2 moves only along the guides and its position x 2 is determined only by the deformation of the spring with constant stiffness c and the reaction from the pipe surface N. Let the displacement of the nut x 1 with mass m1 of the WPM be determined by the drive force


F_act. We will take the mass of the entire robot equal to m and applied at the center of the wheel, since within the framework of the simulation we assume that the body can only move along the pipe axis—the y coordinate. According to the operation algorithms of the device, during rectilinear movement and when passing obstacles, at least 6 wheels of two modules must be in contact with the surface, that is, 6 wall-pressed mechanisms are involved. These devices have the same design; therefore, within the framework of WPM modeling, we will consider only one mechanism with values of forces and torques that make up 1/6 of the total acting on the robot. The movement of the device along the pipe occurs due to the friction force F_fr, acting at the points of contact of the wheels with the pipe surface and arising when they rotate with an angular velocity ω under the action of the drive torque M_d and the rolling friction torque M_r. Incremental encoders are used to measure the wheel speed (S_ω in the scheme of Fig. 2), the force on the spring is estimated from the value of its deformation using an analog Hall sensor (S_N in the scheme of Fig. 2), and the amount of slippage relative to the pipe surface is estimated by an optical sensor (S_V in the scheme of Fig. 2). Based on the assumptions made, the movement of the robot body along the axis of the pipe can be described by the differential equation:

m ÿ = F_fr − m g − b ẏ, (3)

where b is the coefficient of viscous resistance to the movement of the body in the environment, due to the presence of sludge or dirt inside the air ducts. Based on the Coulomb model, we write the equation for the friction force:

F_fr = −F, if |F| < k_fr N,
F_fr = −k_fr N · sign(F), if |F| ≥ k_fr N, (4)

where F = (M_d − M_r − J_w φ̈)/r_w − m ÿ − m g is the sum of the forces acting at the point of contact, M_d is the wheel drive torque, M_r is the rolling resistance torque (M_r = δ N, δ—rolling resistance coefficient), and J_w is the wheel moment of inertia. In this case, the limit value of the friction force F_fr = k_fr N is determined by the properties of the surface and the value of the normal reaction N, which can be obtained from the system of equations:

m_1 ẍ_1 = F_act − c (x_2 − x_1) − b_1 ẋ_1,
m_2 ẍ_2 = c (x_2 − x_1) − b_2 ẋ_2 − N, (5)

where F_act is the force generated by the WPM drive, and b_1, b_2 are the coefficients of viscous resistance to the displacement of the mechanism parts. In these equations, M_d and F_act are the torque and force generated by the actuators. To describe the dynamics of the wheel drive, we use the equation:

M_d = k_mw ((U_ω − k_ew ω_w i_Gw)/R_w) i_Gw η_w, (6)


where k_mw—the torque constant of the wheel drive motor, k_ew—the speed constant of the motor, ω_w—the wheel speed, R_w—the resistance of the motor armature winding, i_Gw, η_w—the gear ratio and efficiency of the wheel drive reducer, and U_ω—the motor supply voltage generated by the wheel drive control system (1). To describe the dynamics of the wall-pressed mechanism actuator, we use the equation:

F_act = k_mp ((U_N − k_ep ẋ_1 i_Gp)/R_p) i_Gp η_p, (7)

where k_mp—the torque constant of the WPM motor, k_ep—the speed constant of the WPM motor, R_p—the armature winding resistance of the WPM motor, i_Gp, η_p—the gear ratio and efficiency of the WPM mechanical transmission, and U_N—the motor supply voltage generated by the WPM drive control system (2). In fact, the prototype of the device uses a screw gear that has a self-braking effect; however, this property is not taken into account within the framework of this work. The dynamics of the WPM is characterized only by the torques on the electric motor and the values of the external forces. Friction losses are taken into account by means of the efficiency of the gearbox and the screw-nut transmission. Depending on the values of the forces and torques, two cases can be distinguished in the model: a state of rest or movement without slip, and movement with slippage. The state of rest or movement without slippage is described by the condition |F_fr| < k_fr N. In this case, the dynamics of the robot is described by:

J_w ω̇_w = M_d − M_r − F_fr r_w,
ẏ = ω_w r_w. (8)

That is, in the state without relative displacement between a point on the wheel rim and the pipe surface, the vertical movement of the robot is determined by the angle of rotation of the wheel, performed due to the drive torque. For movement with slippage, when the friction force reaches its limit value (F_fr = k_fr N), the dynamics is determined by the system:

J_w ω̇_w = M_d − M_r − k_fr N r_w,
m ÿ = F_fr − m g − b ẏ. (9)

For this mode, the dynamics of the rotation of the wheel and the displacement of the robot body are calculated independently. Since the system of equations has a non-linear character and a complex friction model that determines the nature of the motion and takes into account the possibility of motion with slippage, obtaining a numerical solution is a difficult task. To carry out computational experiments on the mathematical model, a numerical integration algorithm was developed for the resulting system of differential equations describing the WPM and wheel drive dynamics of the in-pipe climbing robot.
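To make the stick/slip switching of Eqs. (4), (8) and (9) concrete, the sketch below selects the friction force and the corresponding motion mode; it follows the equations as reconstructed above and is not taken from the authors' Matlab program.

```python
import numpy as np

def friction_force(F, N, k_fr=0.5):
    """Coulomb friction with stick/slip switching, Eq. (4) (sketch).

    F is the net tangential force acting at the contact point and N the normal
    reaction.  Below the friction limit the contact sticks and friction balances
    the applied force; above it the force saturates at the Coulomb limit.
    """
    if abs(F) < k_fr * N:
        return -F, False                    # stick: integrate the no-slip system, Eq. (8)
    return -k_fr * N * np.sign(F), True     # slip: integrate the slipping system, Eq. (9)
```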

4 Results

To obtain a numerical solution based on the original algorithm, a simulation program was developed in the Matlab environment. The model uses numerical parameter values corresponding to the design of the prototype developed within the project. The values


of the coefficients of viscous and dry friction are determined experimentally and are indicative. The main mathematical model parameters are shown in Table 1.

Table 1. Main parameters of the WPM in-pipe climbing robot model.

Model parameter                                              Value
Robot body mass m, kg                                        12
Wheel drive mass m2, kg                                      0.2
Mass of the moving part of the WPM with a spring m1, kg      0.1
Driving wheel radius r_w, m                                  0.03
Spring stiffness c, N/m                                      1500
Viscosity coefficient of robot body b, Ns/m                  4.2
Viscosity coefficient b1, Ns/m                               0.2
Viscosity coefficient b2, Ns/m                               0.22
Friction force coefficient for driving wheels k_fr           0.5
Rolling friction coefficient δ, m                            0.005
Pipe diameter D, m                                           0.3–0.32

Motor constants, resistance and mechanical transmission parameters can be found in the technical documentation. One of the basic modes of operation of the in-pipe robot is movement and stabilization in a vertical pipe, in which case the device actuators must compensate for the effects of gravity. As a demonstration of the operation of the mathematical model and to determine the control system parameters, we set the conditions under which the drive of the WPM system must provide the absence of slippage (ΔV = 0), and the wheel drive control system must try to keep the wheel fixed (V* = 0). This mode implies that the robot must remain stable inside the vertical section of the pipe, compensating for gravity by means of friction forces. Let us consider an experiment in which, at time 0.3 s from the beginning of the simulation (chosen for diagram clarity), the diameter of the pipe instantly increases by 20 mm. The task of the WPM control system at this moment is to generate a signal for the drive that restores the required value of the friction force and stabilizes the position of the robot, while the wheel drive control system must maintain the fixed initial position of the wheels (Figs. 4 and 5). As the simulation results show, with a stepwise increase of the pipe diameter, the robot wheel loses contact with the surface (N = 0), which leads to slippage. The occurrence of slippage activates the WPM drive, which, by moving the base of the spring x_1, presses the wheel against the surface, restoring the value of the normal reaction (for the given parameters of the model N = 45.9 N), sufficient to stop the robot from slipping in the pipe. With the selected coefficients of the regulators, during the loss of contact the robot manages to move (fall) by no more than 3 mm. In the tuned system,


Fig. 4. Simulation results: a WPM drive energy; b WPM force Fact , spring force (P = c(x2 − x1 ) − b2 x˙ 2 ) and support reaction N; c spring base displacement x1 , wheel displacement x2 , pipe diameter.

Fig. 5. Simulation results: a wheel drive energy: I ω —motor current, U ω —motor voltage; b wheel drive dynamics: Md —torque, Ffr —friction force, robot vertical displacement y.

there are practically no vibrations in the suspension, and the transient time of the WPM drive control system is no more than 25 ms.


5 Conclusions

Monitoring of ventilation systems using a wheeled robot implies that the device must be able to climb vertical segments of the pipeline. The required friction force between the robot and the surface is achieved through the use of an adaptive suspension that provides the necessary wall-press force. The paper considers a three-module scheme of an in-pipe robot equipped with a multi-loop automatic control system that implements, among other things, position-force control of the normal reaction at the points of contact of the wheels with the pipe surface. The mathematical equations presented in the paper made it possible to conduct a series of computational experiments on a model of the device and determine the optimal parameters of the controllers. The obtained results of modeling the movement of the robot inside a vertical pipe confirm the applicability of the proposed method for adapting the suspension of a wheeled robot to changes in the diameter and surface properties of air ducts. The article was prepared with the support of the Strategic Project “Priority-2030. Creation of robotic tools to expand human functionality”.

References

1. Wang, Y., Zhang, J.: Autonomous air duct cleaning robot system. In: 2006 49th IEEE International Midwest Symposium on Circuits and Systems, vol. 1, pp. 510–513. IEEE (2006)
2. Yamamoto, M., Enatsu, Y., Mohri, A.: Motion analysis of a cleaner robot for vertical type air conditioning duct. In: IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA'04, vol. 5, pp. 4442–4447. IEEE (2004)
3. Kim, Y.G., Shin, D.H., Moon, J.I., An, J.: Design and implementation of an optimal in-pipe navigation mechanism for a steel pipe cleaning robot. In: 2011 8th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), pp. 772–773. IEEE (2011)
4. Jatsun, S., Loktionova, O., Malchikov, A.: Six-link in-pipe crawling robot. In: Advances on Theory and Practice of Robots and Manipulators: Proceedings of Romansy 2014 XX CISM-IFToMM Symposium on Theory and Practice of Robots and Manipulators, pp. 341–348. Springer International Publishing (2014)
5. Li, Z., Zhu, J., He, C., Wang, W.: A new pipe cleaning and inspection robot with active pipe-diameter adaptability based on ATmega64. In: 2009 9th International Conference on Electronic Measurement & Instruments, pp. 2–616. IEEE (2009)
6. Roslin, N.S., Anuar, A., Jalal, M.F.A., Sahari, K.S.M.: A review: hybrid locomotion of in-pipe inspection robot. Procedia Eng. 41, 1456–1462 (2012)
7. Du, Y., Zhu, Q.M., Ghauri, S., Zhai, J.H., Jia, H.R., Nouri, H.: Progresses in study of pipeline robot. In: 2012 Proceedings of International Conference on Modelling, Identification and Control, pp. 808–813. IEEE (2012)
8. Shao, L., Wang, Y., Guo, B., Chen, X.: A review over state of the art of in-pipe robot. In: 2015 IEEE International Conference on Mechatronics and Automation (ICMA), pp. 2180–2185. IEEE (2015)
9. Zhang, Y., Yan, G.: In-pipe inspection robot with active pipe-diameter adaptability and automatic tractive force adjusting. Mech. Mach. Theory 42(12), 1618–1631 (2007)
10. Park, J., Kim, T., Yang, H.: Development of an actively adaptable in-pipe robot. In: 2009 IEEE International Conference on Mechatronics, pp. 1–5. IEEE (2009)
11. Hadi, A., Hassani, A., Alipour, K., Askari, R., Pourakbarian, P.: Developing an adaptable pipe inspection robot using shape memory alloy actuators. J. Intell. Mater. Syst. Struct. 31(4), 632–647 (2020)


12. Rusu, C., Tatar, M.O.: Adapting mechanisms for in-pipe inspection robots: a review. Appl. Sci. 12(12), 6191 (2022)
13. Kakogawa, A., Ma, S.: An in-pipe inspection module with an omnidirectional bent-pipe self-adaptation mechanism using a joint torque control. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4347–4352. IEEE (2019)
14. Xie, Q., Liu, S., Ma, X.: Design of a novel inchworm in-pipe robot based on cam-linkage mechanism. Adv. Mech. Eng. 13(9), 16878140211045192 (2021)
15. Kazeminasab, S., Akbari, A., Jafari, R., Banks, M.K.: Design, characterization, and control of a size adaptable in-pipe robot for water distribution systems. In: 2021 22nd IEEE International Conference on Industrial Technology (ICIT), vol. 1, pp. 39–46. IEEE (2021)
16. Long, Q., Tang, B., Tan, Z., Zhang, Y., Wang, M., Wang, Y.: Structure design and analysis of in-pipe robot adaptable to pipe diameter. Front. Sci. Eng. 1(9), 46–53 (2021)
17. Jatsun, S., Vorochaeva, L., Yatsun, A., Savin, S., Malchikov, A.: Bio-inspired adaptive control strategy for a snake-like robot. In: 2015 19th International Conference on System Theory, Control and Computing (ICSTCC), pp. 273–278. IEEE (2015)
18. Carbone, G., Malchikov, A., Ceccarelli, M., Jatsun, S.: Design and simulation of Kursk robot for in-pipe inspection. In: SYROM 2009: Proceedings of the 10th IFToMM International Symposium on Science of Mechanisms and Machines, pp. 103–114. Springer Netherlands (2009)
19. Jatsun, S., Malchikov, A., Yatsun, A., Saveleva, E.: Studying of copying control system with nonlinear measurer. In: Electromechanics and Robotics: Proceedings of 16th International Conference on Electromechanics and Robotics "Zavalishin's Readings" (ER (ZR) 2021), pp. 13–23. Springer, Singapore (2022)
20. Jatsun, S., Malchikov, A., Yatsun, A.: Adaptive control system for DC electric drive under uncertainty. In: 2020 International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM), pp. 1–5. IEEE (2020)

Three-Rigid-Body Model Based NMPC for Bounding of a Quadruped with Two Spinal Joints

Songrui Huang1, Wenhan Cai1, and Mingguo Zhao1,2(B)

1 Department of Automation, Tsinghua University, Beijing 100084, China
[email protected]
2 Beijing Innovation Center for Future Chips, Tsinghua University, Beijing 100084, China
[email protected]

Abstract. While real-world quadrupedal animals exploit significant spine motions to realize high-speed running, current high-speed quadrupedal robots seldom incorporate spinal structures. One main reason for this gap is the complicated robot dynamics brought by spinal structures, which causes difficulties in designing high-speed locomotion controllers like Model Predictive Control (MPC). In this paper, we propose the planar Three-Rigid-Body model for a planar quadruped with two actuated spinal joints. Based on this simplified model, we derive a Nonlinear MPC for the robot's bounding control. The Three-Rigid-Body model captures the main dynamical effects of the spinal motions while being simple enough to facilitate the NMPC's real-time execution. We employ Whole-Body Control (WBC) as a bottom-layer controller for the NMPC to compensate for modeling errors. The whole control scheme is verified in simulation. The 13 kg robot, with maximum joint torque and speed of 21 Nm and 20 rad/s, reaches the highest speed of 4.3 m/s with a Froude number of 5.24. The controllers' computation time implies real-time execution on the real robot.

Keywords: Quadrupedal robot with spine · Three-Rigid-Body model · Nonlinear Model Predictive Control · Whole-Body control · Bounding

1 Introduction

High-speed locomotion of quadrupedal robots that rivals real-world quadrupedal animals has long been a challenging task. Some quadrupedal animals, for example, cheetahs, utilize significant spine motions to facilitate high-speed running. Meanwhile, the Boston Dynamics' Cheetah robot and WildCat robot are currently the only quadrupedal robots that exploit spinal structures to realize high-speed running. The Cheetah robot, constrained to the vertical plane, can run at the highest speed of 12.6 m/s on a treadmill [1], and the WildCat robot reaches


8.9 m/s in outdoor running tests [2]. Both robots are hydraulically driven, and their mechanical and control designs are unrevealed. Except for the Boston Dynamics robots, current quadrupedal robots with spinal structures, for example, the Canid robot [3], the Inu robot [4] and the robots in [5,6], do not realize high-speed running and are primitive in their control methods. Meanwhile, current quadrupeds that realize high-speed running usually have rigid torsos. For example, the MIT Cheetah 2 robot can bound at 6.4 m/s with a Froude number of 7.1 on flat ground outdoors [7], and the MIT Mini-Cheetah robot can reach a top running speed of 3.7 m/s and a Froude number of 4.65 [8]. Both robots are electrically driven and have no spinal structures. To summarize, there is still a gap between animals and robots in the exploitation of spines for high-speed running. One of the main reasons for this gap is the difficulty of controlling quadrupeds with spinal joints. The difficulties result from these quadrupeds' system dynamics, which is much more complicated than that of quadrupeds with rigid torsos. Optimization-based control methods, especially Model Predictive Control (MPC) and Whole-Body Control (WBC), are currently popular in quadrupedal locomotion control. The MPC has been widely adopted in high-speed running control due to its ability to foresee future motion trajectories, and the WBC usually serves as a bottom-layer controller for the MPC. Many MPC methods, for example, the convex MPC (CMPC) [9] and the Representation-Free MPC (RF-MPC) [10,11], adopt linearized system dynamics. While linearized models are beneficial to the real-time solution of the MPC, they greatly deviate from the original dynamics in case of drastic system state changes. The Nonlinear MPC (NMPC), namely the MPC employing nonlinear system dynamics, has seen wider application in the control of quadrupeds recently. The development of high-performance optimization toolkits and techniques like the Real-Time Iterations (RTI) scheme [12] assists the NMPC's deployment on real robots. Still, the computational burden of solving a nonlinear optimization problem remains a great challenge. Current NMPC controllers that employ the robot's full dynamics do not exhibit highly dynamical locomotion on real-world robots [13,14]. Simpler models like various Single-Rigid-Body (SRB) based models [15–17] and the centroidal dynamical model [18] are preferable because they balance model accuracy and computational cost. For a high-speed quadruped with spinal joints, the SRB model becomes invalid, and a suitable robot model that considers the spinal motions is needed. The robot's full dynamics usually has high dimensions and strong nonlinearities, which makes the MPC problem difficult to solve in real time. An alternative option is the robot's centroidal dynamical model, which considers the robot's full kinematics and is much simpler than the full dynamical model. However, the centroidal dynamics still neglects the dynamical effects between the rigid bodies within a robot and is therefore unsuitable for high-speed locomotion control. In this paper, we propose the Three-Rigid-Body model for a planar quadruped with two actuated spinal joints in order to implement a high-speed bounding gait of the quadruped. The Three-Rigid-Body model simplifies the


robots into three rigid bodies connected by two spinal joints and subjected to ground reaction forces. The legs' dynamics and coupling effects on the spinal joints are neglected. Different from the robot model in [19], our model has active spinal joints and includes ground reaction forces in the system inputs rather than adopting massless springy legs. The proposed robot model captures the main dynamical effects of drastic spinal motions while being simple enough to facilitate the real-time execution of an NMPC. Based on the Three-Rigid-Body model, we derive an NMPC for the robot's bounding control. We employ Whole-Body Control (WBC) as a bottom-layer controller for the NMPC to compensate for modeling errors. The "NMPC+WBC" control scheme is verified in simulation. The robot reaches the highest speed of 4.3 m/s, with a Froude number of 5.24. The controllers' computation time implies real-time execution on the real robot. The rest of this paper is organized as follows. Section 2 introduces the Three-Rigid-Body model and derives the NMPC controller. Section 3 presents the whole control scheme and formulates the WBC controller. Section 4 presents the simulation setup and results. Section 5 concludes the paper and outlines future work.

2 Nonlinear Model Predictive Control with Planar Three-Rigid-Body Model

2.1 Planar Three-Rigid-Body Model

Consider a quadrupedal robot with two actuated revolute spinal joints, as shown in Fig. 1. The front and rear spinal joints connect the front torso with the central torso, and the central torso with the rear torso, respectively. The robot is constrained within the vertical plane and the two spinal joints’ axes are perpendicular to the vertical plane. We simplified the robot into three rigid bodies as shown in Fig. 2: the first for the central torso, the second for the front torso and front legs, and the third for the rear torso and rear legs. The three rigid bodies each have their own mass mb , mf , mr and rotational inertia Ib , If , Ir , respectively. For the center of mass (CoM) position and rotational inertia, the central rigid body adopts the actual values of the central torso, while the front(rear) rigid body adopts estimated values of the front(rear) torso and front(rear) legs’ combination. The front and rear spinal joints have joint positions qf , qr and exert joint torques τf , τr , respectively. If a front or a rear leg touches the ground, the ground reaction force exerts on the corresponding front or rear rigid body at the leg’s contact point. By the specifications above, we neglect the legs’ dynamics and their coupling effects on the spinal joints. Such simplifications are similar to those adopted in the SRB model but the modeling errors may be greater due to the presence of spinal joints. Still, we argue that as long as the legs weigh light enough compared with the torsos, the modeling errors will be tolerable and can be complemented by bottom-layer controllers like the WBC. The system dynamics of the Three-Rigid-Body model can then be derived as standard planar floating-base dynamics. We attach the robot’s body frame B to


Fig. 1. The quadruped with two actuated spinal joints.


Fig. 2. A sketch of the planar Three-Rigid-Body model.

the central rigid body's CoM. The body frame's position and orientation relative to the world frame are denoted as x, z and θ_b. The system's generalized coordinates are then formulated as q_M = [x, z, θ_b, q_f, q_r]. Since we are considering the bounding gait of a planar robot, we assume that the left and right legs always have the same positions and contact forces. Therefore the front and rear rigid bodies are each subjected to a single contact force F_1 and F_2, respectively. The contact positions of F_1 and F_2 relative to the central rigid body's CoM and their contact Jacobians are denoted as r_b1, r_b2 and J̃_1, J̃_2, respectively. The system dynamics can then be written as

M̃(q_M) q̈_M + C̃(q_M, q̇_M) + G̃(q_M) = S^T τ_sp + J̃_1^T(q_M, r_b1) F_1 + J̃_2^T(q_M, r_b2) F_2, (1)

where M̃(q_M), C̃(q_M, q̇_M), G̃(q_M) are the mass matrix, the Coriolis and centrifugal term and the gravity term, respectively. S is the selection matrix corresponding to the spinal torques τ_sp = [τ_f, τ_r]^T. The matrices M̃(q_M), C̃(q_M, q̇_M), G̃(q_M) and J̃_1(q_M, r_b1), J̃_2(q_M, r_b2) also rely on the three rigid bodies' geometric and inertial parameters. The detailed expressions of these matrices can be derived by the Newton method or the Lagrangian method. We refrain from presenting these details due to limited space here.

2.2 NMPC Problem Formulation

The NMPC requires repeatedly solving a finite-horizon Optimal Control (OC) problem as

min_{X(t), U(t)}  ∫_0^T l(X(t), U(t), t) dt + l_T(X(T)) (2)

s.t.  Ẋ(t) = f(X(t), U(t), t),  0 ≤ t ≤ T, (3)

      h(X(t), U(t), t) ≤ 0,  0 ≤ t ≤ T, (4)

      h_T(X(T)) ≤ 0, (5)

      X(0) = X_0, (6)

where T is the horizon length, X and U are the system states and inputs, respectively, and X_0 is the system's current state.


For the Three-Rigid-Body NMPC, the system states are chosen as X = [q_M^T, q̇_M^T]^T ∈ R^10 and the system inputs as U = [F_1^T, F_2^T, τ_sp^T]^T ∈ R^6. The system dynamics of (3) are implicitly determined by (1). In the cost function of (2), we punish the deviation of the actual states from the reference state trajectories X_ref(t) and the norm of the system inputs. Then

l(X(t), U(t), t) = (X(t) − X_ref(t))^T W (X(t) − X_ref(t)) + U^T(t) Q U(t), (7)

where W, Q are positive definite, diagonal weight matrices. We specify no terminal cost, therefore l_T(X(T)) ≡ 0. We denote the desired contact state of a leg i by TD_i, where TD_i = 1 means touch-down and TD_i = 0 means lift-off. If a leg touches the ground, we impose a square pyramid approximation of the friction cone to constrain its contact force. We also limit the z component of the contact force between a minimum f_z,min and a maximum f_z,max. Then for a contact force F_i = [F_i,x, F_i,z]^T, i = 1, 2, we have

[0, −∞, f_z,min]^T ≤ [1, μ; 1, −μ; 0, 1] F_i(t) ≤ [+∞, 0, f_z,max]^T,  if TD_i(t) = 1, (8)

where μ denotes the friction coefficient. If a leg is swinging, its contact force is constrained to zero, therefore

F_i(t) = 0,  if TD_i(t) = 0. (9)

We limit the maximum value τ_sp,max of the spinal joints' torque, then

−τ_sp,max ≤ τ_i(t) ≤ τ_sp,max,  i = f, r. (10)

The constraints in (8)–(10) can easily be transformed into the form of (4). We apply no terminal constraints, therefore inequality (5) is dropped.
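For implementation, the stance-foot constraint of Eq. (8) reduces to a constant matrix and constant bounds. The sketch below builds them for one foot with F_i = [F_i,x, F_i,z]^T; it is an illustration, not the authors' code.

```python
import numpy as np

def friction_cone_bounds(mu, fz_min, fz_max):
    """Constraint matrix and bounds of Eq. (8) for one stance foot (sketch)."""
    C = np.array([[1.0,  mu],
                  [1.0, -mu],
                  [0.0, 1.0]])
    lb = np.array([0.0, -np.inf, fz_min])
    ub = np.array([np.inf, 0.0, fz_max])
    return C, lb, ub   # lb <= C @ F_i <= ub while the leg is in stance
```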

2.3 NMPC Problem Solution

We employ the acados software package [20] to solve the NMPC problem constructed in Sect. 2.2. The acados NMPC solver implements the Multiple Shooting method with Sequential Quadratic Programming (SQP) and the Real-Time Iterations (RTI) technique [12]. The acados solver accepts implicit system dynamics like formula (1). The solver allows setting parameters for each shooting node, thus allowing time-varying system dynamics and constraints. The legs' desired contact states TD_1(t) and TD_2(t) and desired contact positions r_b1(t), r_b2(t) are provided by the planner, therefore chosen as time-varying parameters. In the system dynamics and constraints, we replace the contact forces F_i(t) with TD_i(t) · F_i(t), i = 1, 2. Then constraint (9) can be dropped and constraint (8) can be applied irrespective of the legs' contact states.
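As an illustration of the per-node parameter mechanism, the sketch below shows how the planner's contact schedule could be written into each shooting node, assuming the acados Python interface (acados_template) and an already configured solver; the packing order of the parameter vector is an assumption made here, not the authors' implementation.

```python
import numpy as np

def update_contact_parameters(solver, horizon_n, contact_schedule, contact_positions):
    """Pass the planner's contact schedule to every shooting node (sketch).

    solver            - an acados AcadosOcpSolver built from the Three-Rigid-Body OCP
    contact_schedule  - sequence of (TD1, TD2) flags over the horizon
    contact_positions - sequence of (r_b1, r_b2) contact positions over the horizon
    """
    for k in range(horizon_n):
        td1, td2 = contact_schedule[k]
        rb1, rb2 = contact_positions[k]
        p = np.concatenate(([td1, td2], np.asarray(rb1), np.asarray(rb2)))
        solver.set(k, "p", p)   # per-stage parameter vector
```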

3 Whole-Body Control

3.1 WBC Problem Formulation

To formulate the WBC problem, we consider the quadruped with two spinal joints in the full 3D space. The WBC controller for the 3D situation is still valid when the robot is constrained to the vertical plane. Each of the robot's legs has two degrees of freedom, namely the hip joint and the knee joint. Therefore the robot has 10 actuated joints and 11 links in total. The robot's generalized coordinates and velocities are denoted as q = [q_b^T, q_j^T]^T and \dot{q} = [\nu_b^T, \dot{q}_j^T]^T, where q_b \in R^3 \times SO(3), \nu_b \in R^6 are the floating base's configuration and velocities in the 3D space, and q_j, \dot{q}_j \in R^{10} are the actuated joints' positions and velocities. The robot's full dynamics is formulated as

M(q)\ddot{q} + C(q, \dot{q}) + G(q) = S^T \tau_j + \sum_{i=1}^{4} J_i^T(q) F_i,   (11)

where M(q), C(q, \dot{q}), G(q) are the mass matrix, the Coriolis and centrifugal term and the gravity term, respectively. S is the selection matrix corresponding to the actuated joints' torque \tau_j. F_i and J_i(q) are the contact force and contact Jacobian of the i-th foot, respectively. A Whole-Body Control scheme with N tasks and M constraints can be formulated as a Quadratic Programming (QP) problem as follows:

\min_{\chi} \sum_{i=1}^{N} \| W_i (A_i \chi - b_i) \|_2^2   (12)

s.t. \underline{c}_j \le C_j \chi \le \bar{c}_j, \quad j = 1, 2, \ldots, M.   (13)

The generalized acceleration \ddot{q} = \frac{d}{dt}\dot{q} and the contact forces F_i, i = 1, 2, 3, 4 form the optimization variables \chi. The task matrices and vectors A_i, b_i and the constraint matrices and vectors C_j, \underline{c}_j, \bar{c}_j are all independent of \chi but depend on the state feedback q, \dot{q}. W_i defines the weights of the i-th task. The optimal solution \chi_{opt} is substituted into Eq. (11) to obtain the actuated joints' torque commands \tau_j^{cmd}. The tasks and constraints adopted in the WBC are listed in Table 1. In the contact force constraint, the stance legs' contact forces are constrained by a friction cone and a maximum value in the vertical direction, and the swing legs' contact forces are constrained to zero. The derivation of the task matrices and vectors A_i, b_i and the constraint matrices and vectors C_j, \underline{c}_j, \bar{c}_j is similar to that in [8]. We refrain from presenting the detailed derivation due to limited space here.
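To illustrate the structure of the QP in (12)–(13), the sketch below solves a weighted multi-task least-squares problem with cvxpy. The paper uses qpOASES; cvxpy is only a stand-in here, and the task and constraint data are random placeholders rather than the robot's actual matrices.

```python
import numpy as np
import cvxpy as cp

def solve_wbc_qp(tasks, constraints, n_vars):
    """tasks: list of (W_i, A_i, b_i); constraints: list of (C_j, c_lo, c_hi)."""
    chi = cp.Variable(n_vars)
    cost = sum(cp.sum_squares(W @ (A @ chi - b)) for W, A, b in tasks)
    cons = [c for C, lo, hi in constraints for c in (C @ chi >= lo, C @ chi <= hi)]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return chi.value

rng = np.random.default_rng(0)
n = 6                                              # toy decision-vector size
tasks = [(np.eye(3), rng.standard_normal((3, n)), np.zeros(3)),
         (2.0 * np.eye(2), rng.standard_normal((2, n)), np.ones(2))]
constraints = [(np.eye(n), -5.0 * np.ones(n), 5.0 * np.ones(n))]
print(solve_wbc_qp(tasks, constraints, n))
```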

3.2 Control Scheme Overview

Table 1. WBC tasks and constraints.

No.  Tasks                         No.  Constraints
1    Spinal joint torque tracking  1    Floating base's dynamics
2    Spinal joint motion tracking  2    Contact force
3    Contact force tracking        3    Maximum joint torque
4    Torso motion tracking         4    Maximum joint velocity
5    Feet motion tracking          5    Maximum positive joint power

The entire control scheme is shown in Fig. 3. [·] denotes a time sequence of the enclosed variable. The planner offers the reference state trajectories of the Three-Rigid-Body model, the desired touch-down schedules and the reference foot trajectories of the stance legs to the NMPC. The central torso's forward motions are planned as a simple constant-speed or constant-acceleration motion. The central torso's height and pitch angle are set to constant values. The spinal joints' reference motions are generated by cubic spline interpolation. The WBC tracks the optimal state trajectories and system inputs of the NMPC. The NMPC operates at 40 Hz and the planner updates the NMPC's reference trajectories before every NMPC calculation. The planner updates the swing legs' trajectories to the WBC at 1 kHz. A Zero-Order Hold (ZOH) module is put between the NMPC and the WBC, and the WBC's references are updated once the NMPC completes its computation. For the state estimator, we employ a Kalman-filter-based method.

Fig. 3. Overall control scheme.
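The timing of the scheme in Fig. 3 can be sketched as a simple multi-rate loop: the WBC runs every 1 ms against the last NMPC solution, which is refreshed at 40 Hz through the zero-order hold. The placeholder solve/track functions below only stand in for the actual NMPC and WBC computations.

```python
WBC_DT = 0.001    # 1 kHz whole-body control step
NMPC_DT = 0.025   # 40 Hz NMPC update

def solve_nmpc(t):
    """Placeholder for planner update + NMPC solve; returns the held plan."""
    return {"tau_sp": [0.0, 0.0], "F": [0.0, 0.0, 0.0, 0.0], "t_solved": t}

def run_wbc(plan):
    """Placeholder for the WBC QP; tracks the zero-order-held NMPC plan."""
    return plan["tau_sp"]

def control_loop(t_end=0.1):
    t, next_nmpc, plan = 0.0, 0.0, None
    while t < t_end:
        if t >= next_nmpc:                 # 40 Hz: planner references + NMPC re-solve
            plan = solve_nmpc(t)
            next_nmpc += NMPC_DT
        tau_cmd = run_wbc(plan)            # 1 kHz: WBC uses the latest plan (ZOH)
        t += WBC_DT
    return tau_cmd

print(control_loop())
```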

4 Simulation and Results

4.1 Simulation Setup

We employ the open-source simulation environment Webots [21]. The robot weighs 13 kg in total and its important physical parameters are listed in Table 2. Note that "central torso's length" refers to the distance between the two spinal joints, and "whole torso's length" refers to the maximum distance between the front and rear hip joints. The friction coefficient between the robot's feet and the ground is set to 0.5 in the simulation. All the actuated joints have the same torque, speed and power constraints. The maximum joint torque, speed and power are set to 24 Nm, 20 rad/s and 500 W, respectively. Note that in the simulation, the robot model's physical parameters and joint limits are set according to the real robot shown in Fig. 1.

Table 2. Robot parameters.

Property  Item           Value   Units
Mass      Central torso  1.81    kg
          Front torso    3.66    kg
          Rear torso     3.72    kg
          Upper limb     0.829   kg
          Lower limb     0.173   kg
Inertia   Central torso  0.0047  kg m^2
          Front torso    0.0080  kg m^2
          Rear torso     0.0082  kg m^2
Length    Central torso  0.21    m
          Whole torso    0.35    m
          Upper limb     0.225   m
          Lower limb     0.21    m

Table 3. NMPC parameters.

Parameter             Value            Parameter                   Value
x, z, θ_b weights     200, 6000, 1000  ẋ, ż, θ̇_b weights           1, 1, 1
q_f, q_r weights      5000, 5000       q̇_f, q̇_r weights            10, 10
Contact force weight  1e-6             Nominal front body inertia  0.0320 kg m^2
Spine torque weight   1e-9             Nominal rear body inertia   0.0328 kg m^2

The control algorithms are implemented in C++. HPIPM [22] is chosen as the underlying QP solver of acados. For the WBC, qpOASES [23] is employed to solve the QP problem. In the simulation, we assume that all the calculations are done within the 1 ms control cycle. The actual time consumption of the control algorithms is measured on a PC with an Intel i7-8700 CPU (3.2 GHz, 6 cores, 12 threads). acados employs a 12-thread OpenMP parallel computing scheme. Key parameters of the NMPC are listed in Table 3. The NMPC's prediction horizon is 10 frames with a frame interval of 25 ms. The quadruped adopts a bounding gait with a period of 250 ms. For each leg, the stance period is 75 ms.
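With the gait timing above (250 ms period, 75 ms stance per leg, 10 NMPC nodes spaced 25 ms apart), the desired contact states passed to the NMPC as per-node parameters can be generated as sketched below; the half-period phase offset between front and rear legs is an assumption made only for this illustration.

```python
import numpy as np

GAIT_PERIOD = 0.250   # s, bounding gait period
STANCE_TIME = 0.075   # s, stance duration per leg
NODE_DT     = 0.025   # s, NMPC frame interval
HORIZON     = 10      # NMPC shooting nodes

def contact_schedule(t0, rear_offset=0.5 * GAIT_PERIOD):
    """Desired contact states [TD1, TD2] (front, rear) at each NMPC node from time t0."""
    td = np.zeros((HORIZON, 2), dtype=int)
    for k in range(HORIZON):
        t = t0 + k * NODE_DT
        td[k, 0] = 1 if (t % GAIT_PERIOD) < STANCE_TIME else 0
        td[k, 1] = 1 if ((t - rear_offset) % GAIT_PERIOD) < STANCE_TIME else 0
    return td

print(contact_schedule(0.0))
```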

4.2 Results

Fig. 4. Snapshots of the robot's motions in a single gait cycle. a Front legs touch-down. b Front legs in stance. c Front legs lift-off. d All legs in flight. e Rear legs touch-down. f Rear legs in stance. g Rear legs lift-off. h All legs in flight again. i Gait diagram. Black sectors for stance phases and grey sectors for swing phases.

In the simulation, our robot achieves a stable bounding gait. A video showing the robot's stable running behavior can be found at https://youtu.be/7jyVIfPouCc. Figure 4 presents the snapshots of the robot's motions in a single gait cycle. The central torso's forward speed and height trajectories during the stable bounding stage are shown in Fig. 5. The central torso accelerates during the front or rear stance phase and decelerates while the robot is in flight. The robot reaches an average forward speed of around 4.3 m/s. Taking the average hip height as 0.36 m, the Froude number is 5.24. Figure 6 shows the legs' contact forces. At the start of the legs' desired stance phase, delayed touch-downs result in large spikes in the actual contact forces. Otherwise, the WBC roughly implements the NMPC's optimal contact forces. Figure 7 shows the spinal joints' motion trajectories and torques. Both the front and the rear spinal joints have significant oscillations, with the front joint's position between around −15° and 45°, and the rear joint's position between around −30° and −7°. The WBC roughly follows the NMPC's optimal spine trajectories and torques most of the time, which implies that the Three-Rigid-Body model captures the main dynamical effects of the robot motions. Large deviations occur when the corresponding legs begin to lift off the ground. At the start of the leg's swinging stage, the leg achieves large accelerations and velocities, resulting in a significant coupling effect on the spinal joints. Since the Three-Rigid-Body model neglects this coupling effect, the NMPC does not generate feasible torques during this stage. Still, the WBC compensates for this defect and maintains the robot's stability.

During the stable bounding stage, all the joints' torque and power are within the limits. Violations of the joint speed constraints appear in two cases: when the legs encounter a delayed touch-down, or before the front legs lift off the ground. The former case will not appear on the real robot due to the mechanical limits of the real-world joints. In the latter case, the hip joints' speed reaches around 21.5 rad/s, which is slightly over the limit. We observed in the simulation that by lifting this limit, the robot can achieve an even higher running speed.

Fig. 5. Central torso's motion trajectories during the stable bounding stage. For this and the following figures, the black dashed curve refers to the desired contact states: 1 for front legs touch-down, −1 for rear legs touch-down, and 0 for all legs lift-off.

Fig. 6. Legs' contact forces. The blue and red lines refer to the optimal contact forces of the NMPC and the WBC, respectively. The green lines are actual contact forces measured by contact sensors in the simulation.

Fig. 7. Spinal joints' position and torque. The blue lines refer to the NMPC's optimal state trajectories or torques. The red lines refer to the actual joint positions or the WBC's torque commands.

Fig. 8. Histograms of the measured computation time of the NMPC and the WBC.

The measured computation times of the NMPC and the WBC are presented in Fig. 8. In most cases, the NMPC's computation time is within 4 ms and the WBC's computation time is within 2 ms. We believe that by further optimizing the controllers' code, the whole control scheme can operate in real time on the real robot.

5 Conclusion and Future Work

We verified the proposed Three-Rigid-Body NMPC and the entire control scheme in simulation. The robot achieves a stable bounding gait of 4.3 m/s with a Froude number of 5.24. Simulation results show that the Three-Rigid-Body model captures the main dynamical effects of the robot motions and that the WBC compensates for the deviations caused by model simplifications. The measured computation times of the NMPC and the WBC imply that this control scheme may execute in real time on the real robot. Since the Froude numbers of the Boston Dynamics Cheetah and Wildcat robots are unknown, we are unable to directly compare their results with ours. For comparison, the MIT Mini-cheetah robot, which is similar to our robot model in size and weight, has a Froude number of 4.65 as mentioned in Sect. 1. This indicates the effectiveness of spine motions and the capabilities of our method.


In the future, we plan to verify our methods on the real robot. We also plan to extend the current planar model to the full 3D case. Furthermore, the Three-Rigid-Body model can be applied to different types of robots, for example, a bipedal robot or a rigid-torso quadrupedal robot mounted with a robotic arm.

Acknowledgments. This research was funded by the STI 2030-Major Projects 2021ZD0201402.

References

1. Ackerman, E.: Boston dynamics' cheetah robot now faster than fastest human (2012). https://spectrum.ieee.org/boston-dynamics-cheetah-robot-now-faster-than-fastest-human
2. Legacy robots. https://www.bostondynamics.com/legacy
3. Pusey, J., Duperret, J., Haynes, G.C., Knopf, R.R., Koditschek, D.E.: Free-standing leaping experiments with a power-autonomous elastic-spined quadruped. In: Defense, Security, and Sensing (2013)
4. Duperret, J., Koditschek, D.E.: Empirical validation of a spined sagittal-plane quadrupedal model. In: 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 1058–1064 (2017)
5. Li, Z., Tan, Y., Wu, P., Zeng, S.: The compliant effect of controlled spine on interaction with the ground in quadruped trotting. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 234, 27–45 (2020)
6. Wang, C., Fan, C., Yang, Y., Zhan, M., Shao, H., Ma, B.: Realization of a bioinspired cheetah robot with a flexible spine. In: 2020 IEEE International Conference on Mechatronics and Automation (ICMA), pp. 1185–1190 (2020)
7. Park, H.W., Wensing, P.M., Kim, S.: High-speed bounding with the MIT Cheetah 2: control design and experiments. Int. J. Robot. Res. 36, 167–192 (2017)
8. Kim, D., Carlo, J.D., Katz, B., Bledt, G., Kim, S.: Highly dynamic quadruped locomotion via whole-body impulse control and model predictive control (2019). arXiv:1909.06586
9. Carlo, J.D., Wensing, P.M., Katz, B., Bledt, G., Kim, S.: Dynamic locomotion in the MIT Cheetah 3 through convex model-predictive control. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1–9 (2018)
10. Ding, Y., Pandala, A., Park, H.W.: Real-time model predictive control for versatile dynamic motions in quadrupedal robots. In: 2019 International Conference on Robotics and Automation (ICRA), pp. 8484–8490 (2019)
11. Ding, Y., Pandala, A., Li, C., Shin, Y.H., Park, H.W.: Representation-free model predictive control for dynamic motions in quadrupeds. IEEE Trans. Rob. 37, 1154–1171 (2021)
12. Diehl, M., Bock, H.G., Schlöder, J.P., Findeisen, R., Nagy, Z.K., Allgöwer, F.: Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations. J. Process Control 12, 577–585 (2002)
13. Koenemann, J., Prete, A.D., Tassa, Y., Todorov, E., Stasse, O., Bennewitz, M., Mansard, N.: Whole-body model-predictive control applied to the HRP-2 humanoid. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3346–3351 (2015)
14. Neunert, M., Stäuble, M., Giftthaler, M., Bellicoso, C.D., Carius, J., Gehring, C., Hutter, M., Buchli, J.: Whole-body nonlinear model predictive control through contacts for quadrupeds. IEEE Robot. Autom. Lett. 3, 1458–1465 (2018)
15. Bledt, G., Wensing, P.M., Kim, S.: Policy-regularized model predictive control to stabilize diverse quadrupedal gaits for the MIT Cheetah. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4102–4109 (2017)
16. Bjelonic, M., Grandia, R., Harley, O., Galliard, C., Zimmermann, S., Hutter, M.: Whole-body MPC and online gait sequence generation for wheeled-legged robots. In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8388–8395 (2021)
17. Li, H., Frei, R.J., Wensing, P.M.: Model hierarchy predictive control of robotic systems. IEEE Robot. Autom. Lett. 6, 3373–3380 (2021)
18. Sleiman, J.P., Farshidian, F., Minniti, M.V., Hutter, M.: A unified MPC framework for whole-body dynamic locomotion and manipulation. IEEE Robot. Autom. Lett. 6, 4688–4695 (2021)
19. Li, Z., Tan, Y.: Trotting motion of the quadruped model with two spinal joints and its dynamics features. J. Robot. 2020, 3156540:1–3156540:14 (2020)
20. Verschueren, R., Frison, G., Kouzoupis, D., Frey, J., van Duijkeren, N., Zanelli, A., Novoselnik, B., Albin, T., Quirynen, R., Diehl, M.: acados – a modular open-source framework for fast embedded optimal control. Math. Program. Comput. (2021). https://doi.org/10.1007/s12532-021-00208-8
21. Webots: open-source mobile robot simulation software. https://www.cyberbotics.com
22. Frison, G., Diehl, M.: HPIPM: a high-performance quadratic programming framework for model predictive control (2020). arXiv:2003.02547
23. Ferreau, H.J., Kirches, C., Potschka, A., Bock, H.G., Diehl, M.: qpOASES: a parametric active-set algorithm for quadratic programming. Math. Program. Comput. 6, 327–363 (2014)

Observer-Based Control Model Test in Biped Robot

João Carvalho(B) and Mauricio Eiji Nakai

Federal Technological University of Paraná, Apucarana, Paraná, Brazil
[email protected]

Abstract. This paper presents two tests of an observer-based control model for biped robots, which takes values of the measured Zero Moment Point and generates a trajectory for the robot: one in simulation and one applied on a real robot. The simulation was made in the Robot Operating System (ROS), a set of libraries that helps in robot development, together with Gazebo, an open-source simulation platform, and the real robot was built using a microcontroller, two sensors and eight servo motors. The results obtained from these tests were sufficient to produce a stable movement.

Keywords: Biped robot · Controller · Simulation

1 Introduction

When designing a robot, one of the biggest challenges is how to control it; the control model can depend on many details of the robot or even the environment it is in. For biped robots this becomes harder because of their stability, which depends on the position of the feet on the ground [3]. When only one foot is on the ground, the robot is in a single support phase [2], and when both feet are on the ground, the robot is in a double support phase. In the single support phase, the dynamics of the robot can be simplified to an inverted pendulum, resulting in a model based on this principle [5]. In bipedal locomotion, there is an important point on the ground where the summation of the gravity forces and inertial forces equals zero, called the Zero Moment Point (ZMP), and it is a good variable for controlling the robot [1]. Over the years, many control models for bipedal robots have been proposed, each with its own details, such as the observer-based model created by Czarnetzki, Kerner and Urbann in 2009, which takes the value of the ZMP from sensors and compares it to the input value, generating the next value of the position of the robot's Center of Mass [4]. With this model, it is possible to generate the trajectory of the Center of Mass (CoM) of the robot along the X and Y axes. The proposition of this project is to simulate a biped robot with 8 degrees of freedom and test the model proposed by Czarnetzki, Kerner and Urbann. The simulation is made with ROS, a framework with libraries that help develop robots [11]. The model is also tested on a real robot, similar to the simulation, with a microcontroller for processing (an Arduino UNO), servo motors for locomotion (eight SG-90), and inertial sensors to acquire data (two MPU-9250 IMU sensors).

2 Robot's Stability

The gait of the robot can be divided into two phases: a single support phase, where only one foot is on the ground, and a double support phase, where both feet are on the ground. To be stable, the ZMP must be in the center of the ground foot [4]; this is the main detail used to describe the expected trajectory of the ZMP. However, it is necessary to generate the CoM trajectory, and this is the main point of the model. To work with this model, a simplified version of the robot is used, considering only its CoM. When the robot is in a single support phase, its resulting motion can be described as an inverted pendulum. The intention is to make the CoM move along an arbitrary plane, where the movement is made by contracting and relaxing the legs of the robot. The result of this is called the 3D Linear Inverted Pendulum Mode [5]. In this mode, the pendulum's dynamics are given by:

\ddot{x} = \frac{g}{z_h} x + \frac{1}{m z_h} \tau_y   (1)

\ddot{y} = \frac{g}{z_h} y - \frac{1}{m z_h} \tau_x   (2)

where m is the mass of the robot, g is the gravity acceleration, z_h is the height of the CoM on the z axis, and the torques around the x-axis and y-axis are \tau_x and \tau_y, respectively. In this model, the position p of the ZMP on the ground along each axis is given by:

p_x = -\frac{\tau_y}{m g}   (3)

p_y = \frac{\tau_x}{m g}   (4)

Replacing Eqs. (3) and (4) in (1) and (2) results in the following ZMP equations:

p_x = x - \frac{z_h}{g} \ddot{x}   (5)

p_y = y - \frac{z_h}{g} \ddot{y}   (6)

Considering the robot's height constant, the position of the ZMP depends only on the acceleration and position of the CoM.
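Equations (5) and (6) translate directly into a small helper that maps the measured CoM position and acceleration to the ZMP; the numbers in the example call below are arbitrary illustrative values.

```python
def zmp_from_com(x, x_ddot, y, y_ddot, z_h, g=9.81):
    """ZMP from CoM position and acceleration, Eqs. (5)-(6)."""
    p_x = x - (z_h / g) * x_ddot
    p_y = y - (z_h / g) * y_ddot
    return p_x, p_y

# example: CoM 1 cm forward, accelerating 0.2 m/s^2 forward, CoM height 0.12 m
print(zmp_from_com(x=0.01, x_ddot=0.2, y=0.0, y_ddot=-0.1, z_h=0.12))
```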

3 The Control Model

Czarnetzki et al. proposed a model in which the ZMP is the target of the control algorithm and the output of the system [4], and in which the controller depends on the future values of the ZMP. Equations (5) and (6) suggest that the system can be represented by the state vector (x, \dot{x}, p). Then, the system's dynamics can be represented by:

\frac{d}{dt} \begin{bmatrix} x \\ \dot{x} \\ p \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ \frac{g}{z_h} & 0 & -\frac{g}{z_h} \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ \dot{x} \\ p \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} v   (7)

Discretizing Eq. (7) yields:

x(k + 1) = A_0 x(k) + B v(k)   (8)

y(k) = C x(k)   (9)

where x(k) is the state vector of the system. The controller proposed by Czarnetzki et al. builds on the idea of preview demand, which leads to a preview controller [6]. It is also necessary to estimate the disturbances of the robot and the state vector from the system input and the sensor data, which leads to the idea of an observer. A better description of the derivation of the system can be found in [4]. The resulting observer system is given by:

\hat{x}(k + 1) = A_0 \hat{x}(k) - L[p_{sensor}(k) - C \hat{x}(k)] + B u(k)   (10)

In this model, the observer-based controller is designed by [4]:

u(k) = -G_I \sum_{i=0}^{k} [C \hat{x}(i) - p_{ref}(i)] - G_x \hat{x}(k) - \sum_{j=1}^{N} G_d(j)\, p_{ref}(k + j)   (11)

This system results in the control algorithm in Fig. 1.

4 The Design of the Real Robot

The real robot is based on the project presented by the Arduino Team [7] and consists of a 3D-printed biped robot with 8 degrees of freedom, a microcontroller, sensors, and servo motors to test the model. It has five parts: the torso, the hip, the upper leg, the lower leg, and the foot. The inertial sensors are located on the torso to get the acceleration of the CoM, together with the microcontroller, so that the CoM stays in the upper part of the robot. In this robot, the hip joints are the only ones able to move along the y-axis; the other joints move along the x-axis. The center of the foot is located 3.8 cm away from the CoM. Taking the initial CoM position as zero, the expected ZMP cannot exceed this value, because it must be in the middle of the foot [4].

To control the robot, the model was applied on an Arduino Uno. It receives the acceleration values from the inertial sensor and averages the values of the sensor. Then it applies the model and gets the resulting value of the CoM position on the y-axis and x-axis. The robot's gait is given by the value of the expected ZMP: when the value is close to the maximum, the robot moves its foot to make a step. With the value of the CoM and the values of the foot position, the microcontroller applies inverse kinematics to get the angles of each joint and sends them to the servo motors by PWM. A better description of how this inverse kinematics works is presented in Sect. 6.

Fig. 1. The control algorithm [4]

5 The Simulation

For this simulation three ROS nodes were used: one with the Gazebo simulation toolbox, publishing the values of the sensors and receiving the state of the robot; one publishing the state of the robot to Gazebo and receiving the values of the joints, called the Robot State Publisher; and one for the controller, receiving the values of the sensor, applying the control algorithm and sending the values of the joints, as described in Fig. 2. The controller node runs the control algorithm presented in Fig. 1 as iterative C++ code and applies inverse kinematics, based on the values of the CoM and the position of the foot, to get the values of the joints. With ROS it is possible to set the period at which the code reads information from the topic it is subscribed to; this period is set at the beginning of the code when the node is created. For this project, due to the complexity and number of calculations of the controller, the period was set to 55 ms.


Fig. 2. The concept of the simulation with ROS
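The controller node just described was written in C++; the sketch below reproduces its structure in Python with rospy for brevity. The topic names and the joint-command message layout are assumptions for illustration, not the project's actual interfaces.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Imu
from std_msgs.msg import Float64MultiArray

latest_accel = [0.0, 0.0]          # x/y acceleration taken from the IMU topic

def imu_callback(msg):
    latest_accel[0] = msg.linear_acceleration.x
    latest_accel[1] = msg.linear_acceleration.y

def control_callback(event):
    # placeholder: the observer-based model and inverse kinematics would run here
    joint_angles = [0.0] * 8
    pub.publish(Float64MultiArray(data=joint_angles))

rospy.init_node("biped_controller")
rospy.Subscriber("/imu", Imu, imu_callback)            # assumed topic name
pub = rospy.Publisher("/joint_commands", Float64MultiArray, queue_size=1)
rospy.Timer(rospy.Duration(0.055), control_callback)   # 55 ms control period
rospy.spin()
```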

The robot used in this simulation is an 8-degrees-of-freedom robot with two IMU sensors to get the acceleration along each axis, described in the Unified Robot Description Format (URDF), a way to describe robots based on links (the physical parts of the robot) and joints [9]. Joints can be of six types; for this project only the fixed joint (which does not move) and the revolute joint (which rotates along an axis) were used, which enabled the creation of a robot able to move forward and sideways. The design of the robot can be seen in Fig. 3. It was based on the real robot developed for this project, with five parts and the sensor positioned on the torso, but without texture to simplify the simulation. After everything was set up, the simulation was made by running the three nodes at the same time and obtaining the values of the CoM, the value of the ZMP, and the expected value of the ZMP. To obtain the acceleration values of the robot, it was necessary to use a Gazebo plugin in the description file of the robot [10]. With this plugin, it is possible to create an inertial sensor as a link of the robot and have Gazebo publish the acceleration values on a topic specified by the plugin. The node subscribed to this topic, the controller node, must include the library of the sensor.

Fig. 3. Robot Description with URDF

6 Applying Inverse Kinematics

The movement made by the robot is based on the position of the Center of Mass and the position of its feet, and to get the values of the joints the robot was divided into 2D shapes to apply geometric inverse kinematics [12]. The leg stays in the xz plane, generating a triangle with the hip joint, knee joint, and foot joint, represented in Fig. 4. The upper and lower leg have the same length, l, and the dashed line is the difference between the hip joint height and the foot joint height, h. It was also necessary to consider the difference between the position of the hip and the foot along the x axis, hx. So, the resulting values of the joints are given by:

KneeJoint = \cos^{-1}\left(\frac{h^2}{-2 l^2} + 1\right)   (12)

HipJoint = \frac{180 - KneeJoint}{2} + \sin^{-1}\left(\frac{hx}{h}\right)   (13)

FootJoint = \frac{180 - KneeJoint}{2} - \sin^{-1}\left(\frac{hx}{h}\right)   (14)

Fig. 4. Side view of the joints, xz plane.


Concerning the movement along the y axis, the joints considered were the hip joint and the center of mass. Applying the same principle, the resulting system is demonstrated in Fig. 5. With Fig. 5 it was possible to determine the value of the hip joint along the y axis. Since the height of the hip joint was not constant, the equation of the system was instead based on the distance between the Center of Mass and the hip joint, Dh, which is constant because this part of the robot is continuous, without other joints. With this, the resulting equation of the system is:

HipJointY = \tan^{-1}\left(\frac{Dh}{Dp}\right)   (15)

where Dp is the difference between the positions of the joints along the y axis. Considering the position of the hip constant along the movement, the values of the joints depend only on the position of the CoM.

Fig. 5. Front view of the joints, yz plan.
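A direct transcription of Eqs. (12)–(15) into code is sketched below; the link lengths and offsets in the example calls are made-up values, not the robot's actual dimensions.

```python
import math

def leg_ik(h, hx, l):
    """Leg joints in the xz plane from Eqs. (12)-(14); angles returned in degrees."""
    knee = math.degrees(math.acos(h**2 / (-2 * l**2) + 1))      # Eq. (12)
    offset = math.degrees(math.asin(hx / h))
    hip = (180 - knee) / 2 + offset                             # Eq. (13)
    foot = (180 - knee) / 2 - offset                            # Eq. (14)
    return hip, knee, foot

def hip_ik_y(dh, dp):
    """Hip joint along the y axis from Eq. (15)."""
    return math.degrees(math.atan2(dh, dp))

print(leg_ik(h=0.09, hx=0.01, l=0.05))   # illustrative dimensions in metres
print(hip_ik_y(dh=0.06, dp=0.08))
```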

7 Results and Discussion

The constants of the controller, different from Czarnetzki et al. [4], were chosen by testing the robot, both in simulation and on the real robot, and adjusting the values to get a good performance. The inverse kinematics presented in the previous section made the robot move along the x and y axes depending only on the value of the CoM and the position of the foot. However, due to the low number of joints and the noise in the PWM signal between the microcontroller and the servo motors, the movement was not perfect. Small errors in the signal to the servo motors meant the required angle was not reached; however, this did not interfere with the general movement. The simulation was made on flat ground without any external disturbances, as shown in Fig. 6. The result of the system is represented in Fig. 7, where PosCm is the position of the Center of Mass along the x and y axes, Pref is the expected ZMP value and Zmp is the calculated value of the ZMP obtained by the sensors.


Fig. 6. Simulation of the model on Gazebo

In the graphs in Fig. 7 it is possible to see that the preview controller made the CoM move based on future expected values, making it easier to achieve the desired ZMP value. Along the y axis, even though the ZMP values are not exactly the expected ones, the robot's movement was stable and cyclic, making it possible for the robot to walk and to move the leg to take a step. For the x axis, Fig. 7 demonstrates the increase of the CoM, making the robot move forward.

Fig. 7. Results of the simulation on ROS and Gazebo

Regarding the real robot, it was tested in the same conditions as the simulation; however, it had cables that were used for power and data acquisition, which were held by the tester so that they would not interfere with the testing. The design of the real robot is similar to the one used in the simulation, as described in Fig. 8, but with some small changes in the size of each part. The results of the test are shown in Fig. 9, where PosCm is the position of the CoM along the x and y axes, Pref is the expected ZMP value and Zmp is the calculated value of the ZMP obtained by the sensors. Different from the simulation, this model was more susceptible to errors due to the noise from the sensors, and, with the sensors in front of the torso, the position of the center of mass along the x-axis is not in the middle of the robot. However, as shown in Fig. 9, the values of the ZMP were close to the expected ones, so it is possible to confirm that the model was working. Just like in the simulation, the values along the y axis were cyclic and stable, allowing the robot to move, and along the x axis the values were increasing, proving the forward movement of the robot. Comparing the results of the simulation and the real robot's sideways movement, it is possible to see that both results were stable and generated a satisfactory trajectory; however, they are slightly different. On the x axis, the ZMP values of the real robot were below the value of the CoM most of the time, while in the simulation the ZMP values were above the CoM most of the time. That means the robot was leaning back and that the feet were in front of it, but this did not interfere with the movement. Even though the results were different, the calculated ZMP of both tests was close to the expected values, meaning that the control model could generate a satisfactory trajectory of the CoM to make the robot walk.

8 Conclusion

This paper presented two tests, one in simulation and one on a real robot, of an observer-based control model, where the output is given not just by the present expected values, but by the future values too. The simulation was made in the Robot Operating System with Gazebo, and the real robot was 3D printed and controlled by an Arduino UNO. The results shown in the previous section made clear that the control model could give a stable movement to the robot, by taking sensor data and calculating the trajectory from the difference between the sensor data and the expected value. With the preview controller, the model generated a trajectory for the CoM based on future values of the ZMP, and the trajectory created was enough to make the robot walk forward. However, due to the limited number of joints of the robot and the constants chosen for the model, the movement was not perfect; one way to improve the performance of this model is to work on this topic. In order to improve the performance of the controller, it is possible to make changes to the robot to get better movement. In the simulation, one way to get results closer to the real robot is to use the parts created in CAD software in the URDF description. On the real robot, one method to improve the performance is to reduce the noise of its sensors and of the PWM signal to the servo motors.


Fig. 8. Real robot applying the control model

Fig. 9. Results of the real robot

Acknowledgements. This project was made possible thanks to the financial support of CNPq, through the scholarship, and of the Federal Technological University of Paraná, through the financial support for participation in events.


References

1. Vukobratovic, M., Borovac, B.: Zero-moment point – thirty five years of its life. Int. J. Hum. Robot. 1, 157–173 (2004). https://doi.org/10.1142/S0219843604000083
2. Vukobratovic, M., Borovac, B., Surla, D., Stokic, D.: Biped Locomotion: Dynamics, Stability, Control and Application (1990)
3. Vadakkepat, P.: Biped locomotion: stability, analysis and control. Int. J. Smart Sens. Intell. Syst. 186–207 (2008). https://doi.org/10.21307/ijssis-2017-286
4. Czarnetzki, S., Kerner, S., Urbann, O.: Applying dynamic walking control for biped robots. Robot. Auton. Syst. 57, 839–845 (2009). https://doi.org/10.1016/j.robot.2009.03.007
5. Kajita, S., Kanehiro, F., Kaneko, K., Fujiwara, K., Yokoi, K., Hirukawa, H.: A realtime pattern generator for biped walking. IEEE, Los Alamitos, pp. 31–37 (2002). https://doi.org/10.1109/ROBOT.2002.1013335
6. Katayama, T., Ohki, T., Inoue, T., Kato, T.: Design of an optimal controller for a discrete-time system subject to previewable demand. Int. J. Control 41 (1985). https://doi.org/10.1080/0020718508961156
7. Building an Arduino-based bipedal bot. https://blog.arduino.cc/2020/06/21/building-an-arduino-based-bipedal-bot/. Accessed 8 Feb 2023
8. Koenig, N., Howard, A.: Design and use paradigms for Gazebo, an open-source multi-robot simulator. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (2004). https://doi.org/10.1109/iros.2004.1389727
9. Sucan, I.: Wiki of the unified robot description format (URDF) (2009). https://wiki.ros.org/urdf
10. Open Source Robotics Foundation: Gazebo plugins in ROS (2014). https://classic.gazebosim.org/tutorials
11. Maciel, E.H., Henriques, R.V.B., Lages, W.F.: Control of a Biped Robot Using the Robot Operating System. Federal University of Rio Grande do Sul
12. Corke, P.: Inverse Kinematics for a 2-Joint Robot Arm Using Geometry (2019). https://robotacademy.net.au/lesson/inverse-kinematics-for-a-2-joint-robot-arm-using-geometry/

Nonlinear Model Predictive Control and Reinforcement Learning for Capsule-Type Robot with an Opposing Spring

Armen Nunuparov and Nikita Syrykh(B)

Institute for Problems in Mechanics of the Russian Academy of Sciences, 119526 Moscow, Russia
[email protected]

Abstract. The paper considers the solution of two control optimization problems for a capsule-type robot with an opposing spring. The first problem focuses on optimizing the average velocity of the robot, while the other one deals with a time-optimal control problem. The capsule-type robot is an essentially nonlinear system that performs cyclic reciprocating motions to move in any direction. In this study, the usage of Reinforcement Learning and Nonlinear Model Predictive Control approaches for such a nonlinear system is investigated. The trajectories and control laws are obtained, and the results are compared with the optimization results for a simple periodic piece-wise constant law with one switch. The study was conducted by methods of mathematical modeling.

Keywords: Control optimization · Capsubot · Reinforcement learning · Model predictive control

1 Introduction

A capsule-type robot is a locomotive (mobile) system that moves in resisting environments without external propulsors (legs, wheels, tracks, propeller fins) due to the movement of internal bodies in the presence of friction between the main body and the external environment. A capsule robot consists of a main body (capsule) and internal bodies, which can move relative to the body under the action of actuators. Actuators perform the interaction of the internal bodies with the body of the robot. Capsule robots are a promising development for the inspection of narrow pipes and channels, including in various corrosive environments. Accidents due to untimely maintenance of pipelines at industrial enterprises can lead to technogenic incidents and environmental pollution. The use of capsule robots for the inspection of critical pipelines should make it possible to reduce the likelihood of emergencies.

Previously, the optimization problems of motions of a system with a moving inner mass were considered in articles [1–4]. In [5], a capsule robot with an electromagnetic drive (solenoid) and a return spring was described. In particular, the dependence of the average velocity of the whole system on the excitation parameters was studied. The system was designed in such a way that the electromagnetic force from the solenoid acts on the inner body always in one direction, and the motion in the opposite direction occurs due to the spring connecting the inner body to the main body. When studying such a system, a periodic piece-wise constant law with one switching during a period was used, which does not guarantee a global solution to the problem of maximizing the average velocity of such a system. This paper proposes to study such a system with a wider class of control forces, by solving different optimization problems with Nonlinear Model Predictive Control [6] and Reinforcement Learning methods (which entered a new era of development after the publication of the article [7]), to obtain control laws which could outperform previous solutions. The study was conducted by methods of mathematical modeling.

2 Mechanical Model of the System

The robot under consideration consists of two bodies (main and internal) connected by a spring. The main body is constructed from a housing and a solenoid rigidly attached to it. The inner body is the core of the solenoid. The inner body can move along the symmetry axis of the solenoid. The actuator is designed so that when current is applied to the actuator, the electromagnetic force generated tends to pull the core inside. When the current is switched off, the inner body begins to move in the opposite direction due to the spring. The model of the capsule robot is shown in Fig. 1.

Fig. 1. Capsubot model

Let us denote by M the mass of the body, m the mass of the internal body, c the spring stiffness, x the body coordinate, Fe the force generated by the solenoid, ξ the coordinate of the internal body relative to the body (at ξ = 0 the spring is not stretched), N the normal force of the support reaction, μ the coefficient of dry friction, and Ffr the friction force between the main body and the horizontal surface. Then the motion of such a system is described by the following equations

M \ddot{x} = c \xi - F_e + F_{fr}(\dot{x}), \qquad m(\ddot{x} + \ddot{\xi}) = -c \xi + F_e.   (1)


Coulomb's law of dry friction is used as a model of the friction force Ffr:

F_{fr} = \begin{cases} -\mu N\,\mathrm{sign}(\dot{x}), & \dot{x} \neq 0, \\ -c\xi + F_e, & \dot{x} = 0 \ \text{and} \ |c\xi - F_e| \le \mu N, \\ -\mu N\,\mathrm{sign}(c\xi - F_e), & \dot{x} = 0 \ \text{and} \ |c\xi - F_e| > \mu N. \end{cases}   (2)

It is possible to control the motion of such a system by using a variety of laws of change of the force Fe. Thus, in [5] a periodic piece-wise constant control law is used:

F_e = \begin{cases} F_0, & \{t/T\} < \tau, \\ 0, & \{t/T\} \ge \tau, \end{cases}   (3)

where T is the period; F_0 corresponds to the maximum value of the force Fe; τ is a positive dimensionless constant, which belongs to the interval (0, 1) and denotes the fraction of the period in which the driving force is not equal to zero; curly brackets denote the fractional part of the expression enclosed in them. We will use some results with this control to make a comparison with other methods.

The current study proposes to find new control laws for system (1) by Nonlinear Model Predictive Control and Reinforcement Learning methods. To carry out a comparative analysis of the various approaches to the optimization of motion, the mechanical system is modeled with the parameters from the article [5]. They are given in Table 1.

Table 1. Parameters of the mechanical system.

Parameter name            Notation and value
Body mass                 M = 0.193 kg
Mass of the inner body    m = 0.074 kg
Spring stiffness          c = 256.23 N m^-1
Maximum force value Fe    F0 = 1.25 N
Dry friction coefficient  μ = 0.29
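As a minimal sketch, the model (1)–(3) with the parameters of Table 1 can be simulated as below. The normal reaction N = (M + m)g, the explicit Euler integration and the values of T and τ in the excitation law are assumptions made only for this illustration.

```python
import numpy as np

# parameters from Table 1; N = (M + m) g is an assumed normal reaction
M, m, c, F0, mu, g = 0.193, 0.074, 256.23, 1.25, 0.29, 9.81
N = (M + m) * g

def friction(x_dot, xi, Fe):
    """Coulomb dry friction, Eq. (2)."""
    if abs(x_dot) > 1e-9:
        return -mu * N * np.sign(x_dot)
    return (-c * xi + Fe) if abs(c * xi - Fe) <= mu * N else -mu * N * np.sign(c * xi - Fe)

def step(state, Fe, dt=1e-4):
    """One explicit Euler step of Eq. (1); state = (x, xi, x_dot, xi_dot)."""
    x, xi, xd, xid = state
    x_dd = (c * xi - Fe + friction(xd, xi, Fe)) / M
    xi_dd = (-c * xi + Fe) / m - x_dd          # from m (x_dd + xi_dd) = -c xi + Fe
    return (x + dt * xd, xi + dt * xid, xd + dt * x_dd, xid + dt * xi_dd)

def Fe_piecewise(t, T=0.1, tau=0.5):
    """Periodic piece-wise constant law, Eq. (3); T and tau are illustrative, not optimal."""
    return F0 if (t / T) % 1.0 < tau else 0.0

state, t, dt = (0.0, 0.0, 0.0, 0.0), 0.0, 1e-4
for _ in range(int(2.0 / dt)):                 # simulate 2 s
    state = step(state, Fe_piecewise(t), dt)
    t += dt
print("main body displacement after 2 s:", state[0])
```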

2.1 Optimal Control Problems

Problem 1. For the system of Eqs. (1) find the control force Fe which satisfies the constraints

F_e(t) \in \{0, F_0\}   (4)

and transfers the system in time T from a given initial state

x(0) = 0, \quad \xi(0) = \xi_0, \quad \dot{x}(0) = \dot{x}_0, \quad \dot{\xi}(0) = \dot{\xi}_0   (5)

into a final state

x(T) = x_T, \quad \xi(T) = \xi_T, \quad \dot{x}(T) = \dot{x}_T, \quad \dot{\xi}(T) = \dot{\xi}_T   (6)

in order to maximize the average velocity of the main body

V = \frac{x_T}{T}.   (7)

Problem 2. For the system of Eqs. (1) find the control force Fe which satisfies the constraints

F_e(t) \in \{0, F_0\}   (8)

and transfers the system in time T from a given initial state

x(0) = 0, \quad \xi(0) = \xi_0, \quad \dot{x}(0) = \dot{x}_0, \quad \dot{\xi}(0) = \dot{\xi}_0   (9)

into a final state

x(T) = x_T, \quad \xi(T) = \xi_T, \quad \dot{x}(T) = \dot{x}_T, \quad \dot{\xi}(T) = \dot{\xi}_T   (10)

in order to minimize T.

3 Nonlinear Model Predictive Approach

The specific feature of the proposed method is that only the optimal control at the first moment of time is applied at the beginning of each step, which makes it possible to find a control close to optimal for a complex system or process that is, at the same time, robust to model errors. For the variables, an algorithm for predicting their future behavior is used, which makes it possible to calculate the control action on the basis of such a prediction. Applying this method to complex dynamic processes, the optimization problem can be solved in a nearly real-time mode by varying the time interval over which the optimization problem is solved. The predictive control method is based on an algorithm for iterative optimization of the system model, which leads to increased robustness of the system control to errors due to sensor noise and mismatch between the mathematical model and the real model. Linear or linearized models are often used with this method. In the current section we consider the application of the nonlinear model predictive control approach to solve these highly nonlinear (2) control problems.

The general scheme of the NMPC algorithm is as follows. At each time t_n ∈ [t_0, ..., t_{N−1}], where N is the number of time steps for which NMPC optimization is needed, perform the following steps:

1. Measure the current state of the system z_n = (x(t_n), ξ(t_n), \dot{x}(t_n), \dot{ξ}(t_n)).
2. Solve the optimal control problem with initial conditions equal to z_n and duration T_mpc.
3. Apply the obtained control law until the next moment of measuring the state of the system.


The realization of this approach is based on the direct multiple shooting numerical method. It is used to map the optimal control problem to a nonlinear programming problem, which is easier to solve numerically, for example by using IPOPT [8].
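The sketch below sets up a direct multiple shooting transcription with CasADi and IPOPT for the velocity-maximization problem (Problem 1 with a fixed T and free terminal state). To keep the NLP smooth and tractable it relaxes the binary control (4) to the interval [0, F0] and replaces the dry-friction law (2) by a tanh approximation, so it is an illustration of the method rather than the authors' implementation.

```python
import casadi as ca

# Table 1 parameters; friction is smoothed with tanh so the NLP is differentiable
M, m, c, F0, mu, g = 0.193, 0.074, 256.23, 1.25, 0.29, 9.81
N_force, eps = (M + m) * g, 1e-2
T, N = 2.0, 200                       # fixed horizon and number of shooting intervals
dt = T / N

def f(s, Fe):
    x, xi, xd, xid = s[0], s[1], s[2], s[3]
    Ffr = -mu * N_force * ca.tanh(xd / eps)        # smooth surrogate of Eq. (2)
    x_dd = (c * xi - Fe + Ffr) / M
    xi_dd = (-c * xi + Fe) / m - x_dd
    return ca.vertcat(xd, xid, x_dd, xi_dd)

opti = ca.Opti()
S = opti.variable(4, N + 1)           # states at the shooting nodes
U = opti.variable(1, N)               # control, relaxed from {0, F0} to [0, F0]
for k in range(N):                    # RK4 step + continuity (multiple shooting)
    s, u = S[:, k], U[:, k]
    k1 = f(s, u); k2 = f(s + dt / 2 * k1, u)
    k3 = f(s + dt / 2 * k2, u); k4 = f(s + dt * k3, u)
    opti.subject_to(S[:, k + 1] == s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
opti.subject_to(opti.bounded(0, U, F0))
opti.subject_to(S[:, 0] == 0)         # start at rest with an unstretched spring
opti.minimize(-S[0, N])               # maximize the final body displacement
opti.set_initial(U, F0 / 2)
opti.solver("ipopt")
sol = opti.solve()
print("average velocity:", sol.value(S[0, N]) / T)
```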

4 NMPC Results

In our realization of the NMPC we fixed the terminal time T = 2 s and discretized time with a step dt = 5 ms. We also introduced a new parameter θ which regulates the length of the time interval between sequential NMPC steps and is determined from the formula θ = t_n − t_{n−1}. Solutions of the average velocity maximization problem with different prediction horizons T_mpc are presented in Fig. 2. During the research it was noticed that a change of the prediction horizon length T_mpc may result in a deterioration of the average velocity. In the same figures, the optimal piece-wise controller (3) from [5], with parameters optimal for maximizing the average velocity, is used to obtain trajectories for a better demonstration of the NMPC results. It can be seen that the resulting average velocity with prediction length T_mpc = 50 ms exceeds the solution for the piece-wise controller by more than two times; however, with T_mpc = 120 ms the NMPC solution is less optimal in comparison with the solution for the piece-wise controller. At the same time, the parameter θ, which regulates the time for which the locally optimal control law is preserved, also has a high impact on the resulting solution. For example, if the value of this parameter was too small, the resulting solution could be far from the globally optimal solution. The main reason for this is that the system does not know about the moment when the solution will be recalculated, and at that moment the final time point will be different. Also, the parameter θ allowed us to directly affect the execution time of the algorithm. The total running time of the NMPC could be significantly reduced by increasing θ.

For the time-optimal problem we considered the following terminal state of the system:

x(T) = 0.5 \ \mathrm{m}, \quad \dot{x}(T) = 0.0 \ \mathrm{m/s}, \quad \dot{\xi}(T) = 0.0 \ \mathrm{m/s}.   (11)

The resulting trajectory and control are depicted in Fig. 3. The system managed to achieve the goal state in nearly 14.3 s. In the same figure, the trajectory and control for the average velocity problem are depicted. It can be seen that up to 13.2 s the controls and the trajectory are exactly the same. That is, the transition from moving in the regime with maximum average velocity to reaching the goal state with zero velocities takes about 1.1 s. During the solution of the second problem, the NMPC demonstrated one property that is useful in a practical sense: we can change the optimality criterion in the middle of the execution of the algorithm to achieve a completely different result, and this transition does not need extensive recalculation of a new control law, and not much of the algorithm implementation needs to be changed.


Fig. 2. Resulting NMPC trajectories and control laws for velocity maximization

Fig. 3. Resulting NMPC trajectories and control laws for the time minimization problem

5 Reinforcement Learning Approach

Reinforcement learning implies interaction of the agent with a simulated environment. At each learning step i, while the environment is in a state s_i, the agent chooses an action a_i (in our environments a_i ∈ {0, 1}). The optimality of the chosen action is evaluated by the reward function r. The main goal of the agent is to learn how to take actions that maximize the total reward over each episode. Choosing such actions is the responsibility of another function called the policy. The policy π(a_i | s_i) determines the probability of taking action a_i in state s_i in order to maximize the total reward. In this paper, we use the Proximal Policy Optimization (PPO) algorithm, which belongs to the algorithms that directly optimize the policy function. The PPO algorithm was introduced in 2017 in [9] and has good performance for problems with a discrete action space and a continuous state space. The PPO implementation from StableBaselines3 [10] is used in this paper. It has good documentation and community support, and is constantly evolving. The main developers are employees of DLR (German Aerospace Center) and use this library in their work.

A reward function defines the reward an agent receives after taking the chosen action. By changing the reward function, we can make an agent do something (by giving it a positive reward) or not do something (by giving it a negative reward). Correctly constructing this function is a complex and iterative task without a general scheme. We started the construction of the reward function for the first virtual environment with the idea that if we force an agent to seek a control to reach some distant goal point from the start, without any limitation on its final velocities, then it would have to find a control law that allows it to reach the maximum average velocity. The final reward function is described by the following equation:

r_i = \begin{cases} \dfrac{x}{\hat{x}}, & x_{min} < x < x_{max}, \\ -2000, & x \le x_{min}, \\ V \cdot k, & x \ge x_{max}, \end{cases}   (12)

where x_min and x_max are the left and right limits of the episode, k is the scaling factor (which was eventually set to 5000), x̂ is the target point, and V is the average velocity. During training of the model, the target x̂ is determined by a pseudorandom number generator. For the inference we used x̂ = 0.5 m. If the robot moves outside the specified region, the training episode ends. If the robot reaches the right boundary of the region, it is assigned a large positive reward. In this way, as mentioned above, we can make the agent move in the direction we want.

The reward function for the second virtual environment was chosen to be continuous, based on the experience of constructing the function for the first environment. Let v̄_N be the average value of the body velocity over the last N steps,

\bar{v}_N = \frac{1}{N} \sum_{k=0}^{N-1} \dot{x}_{i-k},   (13)

then the resulting reward function looks as follows:

r_i = \begin{cases} \dfrac{x}{\hat{x}}, & x \in (x_{min}, x_{max}) \ \text{and} \ x \notin B_\delta(\hat{x}), \\ -5 \cdot \bar{v}_{10}, & x \in B_\delta(\hat{x}) \ \text{and} \ \bar{v}_{10} \ge 10^{-3}, \\ 2000, & x \in B_\delta(\hat{x}) \ \text{and} \ \bar{v}_{10} \le 10^{-3}, \\ -2000, & x \notin (x_{min}, x_{max}), \end{cases}   (14)

where B_δ(x̂) is the neighbourhood of the point x̂ with radius δ; being inside it is equated to reaching the target point. During training of the model, the target x̂ is determined by a pseudorandom number generator. For the inference we used x̂ = 0.5 m.
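A condensed sketch of the first virtual environment and the PPO training call is given below, using the Gymnasium API and Stable-Baselines3 (version 2.x assumed). The integration step, the episode limits, the fixed target x̂ = 0.5 m (the paper randomizes it during training) and the tiny number of training timesteps are all simplifications for illustration.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

M, m, c, F0, mu, g = 0.193, 0.074, 256.23, 1.25, 0.29, 9.81
N_FORCE, DT, X_MIN, X_MAX, K = (M + m) * g, 0.005, -0.1, 0.5, 5000.0

class CapsuleEnv(gym.Env):
    """Capsule robot environment with the reward of Eq. (12), fixed target x_hat = X_MAX."""
    def __init__(self):
        self.action_space = spaces.Discrete(2)                       # Fe in {0, F0}
        self.observation_space = spaces.Box(-np.inf, np.inf, (4,), np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.s, self.t = np.zeros(4), 0.0
        return self.s.astype(np.float32), {}

    def step(self, action):
        x, xi, xd, xid = self.s
        spring = c * xi - F0 * action                                # c*xi - Fe
        if abs(xd) > 1e-9:
            ffr = -mu * N_FORCE * np.sign(xd)
        else:
            ffr = -spring if abs(spring) <= mu * N_FORCE else -mu * N_FORCE * np.sign(spring)
        x_dd = (spring + ffr) / M
        xi_dd = -spring / m - x_dd
        self.s = self.s + DT * np.array([xd, xid, x_dd, xi_dd])
        self.t += DT
        x, obs = self.s[0], self.s.astype(np.float32)
        if x <= X_MIN:
            return obs, -2000.0, True, False, {}
        if x >= X_MAX:
            return obs, (x / self.t) * K, True, False, {}            # V * k
        return obs, x / X_MAX, False, self.t > 60.0, {}              # x / x_hat

model = PPO("MlpPolicy", CapsuleEnv(), verbose=0)
model.learn(total_timesteps=10_000)    # a real run would need far more timesteps
```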

5.1 RL Results

Let us first consider the results for the first virtual environment, in which the task was to maximize the average velocity. Figure 4a shows the control law obtained by the RL. Figure 4b shows the robot body coordinate vs time for RL and piecewise control laws.

Fig. 4. Dependence of the control (a) and the main body coordinate (b) versus time for the first problem

The average velocity obtained with the RL on a segment of 0.5 m was equal to 0.0369 m/s. One can see that the resulting control signal at some point becomes nearly periodic (the approximate control period is indicated by a rectangle) with more than one switching. For the comparison, here we also use the piece-wise control law (3) with parameters that provide the maximum average velocity (Fig. 2). We see that the control obtained from (3) is twice as slow as the RL one, but it has only one switch per period.

For the second problem we considered the following terminal state of the system,

x(T) = 0.5 \ \mathrm{m}, \quad \dot{x}(T) = 0.0 \ \mathrm{m/s}, \quad \dot{\xi}(T) = 0.0 \ \mathrm{m/s},   (15)

and the task was to reach x(T) in order to minimize T. Figure 5 shows graphs of the dependence of the control at 1–2 s and of the coordinate of the main body versus time. One can see that the control has a period of time with a non-periodic signal in the first few seconds. Figure 5a shows the non-periodic signal and Fig. 6 shows two different types of periodic signals that appear afterwards (periods are highlighted by rectangles). The system managed to achieve the goal state in nearly 16 s. Unfortunately, it is difficult to explain why these signal changes occur because of the low explainability of models based on neural networks (like PPO). Other algorithms, such as those based on ensembles of decision trees, should be used to obtain explainable results. But this is beyond the scope of this paper.


Fig. 5. Dependence of control (a) and main body coordinate (b) on time for the second problem

Fig. 6. Control signals versus time for the second problem

6 Comparison and Analysis

We can make a comparison with the piece-wise control from [5] for the problem of achieving the maximum average velocity. When simulating a capsule robot with the parameters from the article [5], the obtained results are presented in Table 2. The controls built with RL and MPC show a faster transition to the steady-state velocity at the start with zero initial conditions, and less external body rollback when moving. As a result, it turns out that the average velocity at the optimal parameters of the NMPC and RL solutions exceeds that of the piece-wise control law by more than 2 times.

Table 2. Results for the average velocity maximization problem.

Control type            Average body velocity, m/s
Piece-wise control (3)  0.0153
MPC control             0.0361
RL control              0.0369

For a qualitative comparison, the NMPC problem was also solved under constraints corresponding to a piece-wise constant law optimization with 1 switching per period as in [5]. As a result, it turned out that the maximum average velocity is nearly the same, but with NMPC it was found that it is possible to decrease the acceleration time. This can be achieved by choosing different parameters of the piece-wise control law (3) at the initial stage. Since for the second problem there is no solution for optimal piece-wise control from [5], we compare only the RL and NMPC approaches. The following results were obtained:

Table 3. Results for the time-optimal transfer problem.

Control type  Time, s
MPC control   14.3
RL control    16

In the time-optimal problem, the RL result is worse than the result obtained by NMPC. One reason is that the constructed RL control requires completing the robot's stage of motion at maximum average velocity earlier than with NMPC.

7 Conclusions

In this paper, the optimal control problem of maximizing the average velocity and the time-optimal control problem are solved by nonlinear model predictive control (NMPC) and reinforcement learning (RL) approaches. Both approaches were able to find decent solutions; however, the solutions with NMPC are obtained with higher stability. By using the NMPC and RL approaches, we also demonstrated that it is possible to reach a higher value of average velocity for the capsule-type robot by using more complex control laws than those studied in [5]. Moreover, for NMPC, the transition between different optimal control problems is not difficult. With the RL approach it was possible to find a solution with a higher average velocity than NMPC and the piece-wise control in the first problem, but in the time-optimal transfer problem NMPC showed a better result. The RL result was highly affected by the choice of the reward function, which is not uniquely determined by the optimal control problem, and its construction is a non-trivial task. In addition, the NMPC method builds the controls directly during the control process, i.e., it does not require prior training, whereas RL requires long, and not always stable, learning.

Acknowledgments. The research was partially supported by the Government program (contract #123021700055-6) and partially supported by RFBR and Moscow city Government (project number 21-31-70005).


References

1. Chernousko, F.L.: Analysis and optimization of the motion of a body controlled by a movable internal mass. J. Appl. Math. Mech. 70(6), 915–941 (2006)
2. Zimmermann, K., Zeidis, I., Bolotnik, N., Pivovarov, M.: Dynamics of a two-module vibration-driven system moving along a rough horizontal plane. Multibody Syst. Dyn. 22, 199–219 (2009)
3. Fang, H.B., Xu, J.: Dynamic analysis and optimization of a three-phase control mode of a mobile system with an internal mass. J. Vib. Control 74(4), 443–451 (2011)
4. Yan, Y., Liu, Y., Liao, M.: A comparative study of the vibro-impact capsule systems with one-sided and two-sided constraints. Nonlinear Dyn. 89, 1063–1087 (2015)
5. Nunuparov, A., Becker, F., Bolotnik, N., Zeidis, I., Zimmermann, K.: Dynamics and motion control of a capsule robot with an opposing spring. Arch. Appl. Mech. 1–16 (2019)
6. Diehl, M., Findeisen, R., Schwarzkopf, S., Uslu, I., Allgöwer, F., Bock, H., Gilles, E., Schlöder, J.: An efficient algorithm for nonlinear model predictive control of large-scale systems. Part I: description of the method. Automatisierungstechnik 50(01), 557–567 (2002)
7. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M.: Playing Atari with deep reinforcement learning (2013)
8. Nonlinear optimization problem solver Ipopt. https://github.com/coin-or/Ipopt. Accessed 10 Feb 2023
9. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O.: Proximal policy optimization algorithms (2017)
10. Raffin, A., Hill, A., Gleave, A., Kanervisto, A., Ernestus, M., Dormann, N.: Stable-Baselines3: reliable reinforcement learning implementations. J. Mach. Learn. Res. 22(268), 1–8 (2021)

Neural Control and Learning of a Gecko-Inspired Robot for Aerial Self-righting

Léonard Chanfreau1, Worasuchad Haomachai2, and Poramate Manoonpong3,4(B)

1 Institut Polytechnique de Paris - ENSTA Paris, Palaiseau Cedex, France
[email protected]
2 College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
[email protected]
3 Embodied AI and Neurorobotics Lab, SDU Biorobotics, The Mærsk Mc-Kinney Møller Institute, University of Southern Denmark, Odense, Denmark
4 Bio-inspired Robotics and Neural Engineering Lab, School of Information Science and Technology, Vidyasirimedhi Institute of Science and Technology, Rayong, Thailand
[email protected]

Abstract. Animals like geckos are able to reorient themselves while falling from a height and land safely, utilizing their flexible spines and tails for aerial self-righting. Researchers have studied this behavior to develop robots that can mimic such an animal locomotor skill. While quadruped robots have achieved impressive self-righting results, they still lack the utilization of a combination of a body, a tail, and legs as found in a sprawling posture animal like a gecko. In this study, three versions of gecko-inspired robots are developed, allowing investigation into how the spine (body and tail) and legs contribute to aerial self-righting behavior. A neural central pattern generator (CPG) with a radial basis function (RBF)-based premotor neuron network is used for the robots to learn self-righting behavior, along with continuous locomotion from falling to walking with a smooth transition. The results show that the robot with leg control but without active body control is limited to a 90-degree falling angle, while the robot with both leg and body control can achieve 110 degrees, and the one with full leg, body, and tail control can recover from a falling angle of 150 degrees to land successfully. Finally, we demonstrate the modular capability of the controller by enabling the robot to fall and walk simultaneously.

Keywords: Gecko-inspired robot · Aerial self-righting · Neural robot control · Central pattern generator

Léonard Chanfreau and Worasuchad Haomachai contributed equally to this work.

1 Introduction

When falling from a height, animals such as cats [1,6] and geckos [5] can turn themselves around into the desired parachuting posture to land safely. Specifically, geckos utilize not only their axial body twisting but also their long tail and leg movements for effective self-righting. Therefore, many researchers have investigated the self-righting behavior of animals or used it as inspiration to enhance robot performance with animal-like locomotor skills. For example, Shields et al. [10] observed a cat's movement during falling and then modeled a controller to fit this specific dynamic movement, coordinating the legs and a twisting body to rotate the body orientation in the air. Kurtz et al. [7] demonstrated self-righting in a mini cheetah robot using only the legs for aerial rotation. The leg trajectories of the robot were optimized using reinforcement learning to obtain self-righting behavior. Johnson et al. [4] integrated a tail into the central portion of a hexapedal robot's body. By employing a point-mass tail connected to a massless rod, they were able to adjust the robot's center of mass using model-based control to achieve effective aerial rotation. Roscia et al. [9] attached two actuated rotating flywheels to their robot body to control the orientation of the robot or reorient it by changing the angular velocity of a flywheel. Chang-Siu et al. [2] proposed a simple robot with a 2-DoF tail and used a nonlinear feedback controller for 3D orientation control of the robot's self-righting. Siddall et al. [11] developed a lizard robot with different tail lengths to explore the relationship between tail morphology and the capacity for self-righting. While all these studies achieve self-righting for their robots, none of them has explored a combination of legs, a body, and a tail, which seems to play an essential role in aerial rotation for obtaining a proper landing posture, as found in geckos [5].

Fig. 1. a Three versions of a gecko-inspired robot are investigated: only leg control (GeckoBot-I), leg and body control (GeckoBot-II), and leg, body, and tail control (GeckoBot-III). b GeckoBot-III, composed of four legs (four joints per leg), a body joint, and a 2-degree-of-freedom tail. c CPG-RBF network with a fast-learning mechanism (PI^BB) for learning joint trajectories to achieve both self-righting and walking locomotion modes.


Thus, in this study, we use three versions of our gecko-inspired robot [3] (Fig. 1) to address this issue by exploring different control strategies: (i) only leg control, (ii) leg and body control, and (iii) a combination of leg, body, and tail control. Using these different robot versions also allows us to investigate how the legs, body, and tail contribute to aerial self-righting behavior. Moreover, the generated aerial self-righting behavior is learned and controlled by using a neural central pattern generator (CPG) with a radial basis function (RBF)-based premotor neuron network (CPG-RBF network) with a fast-learning mechanism [13]. We also utilize the modularity of our controller to achieve both aerial self-righting and walking on the floor, as well as the transition between them. As a result, the robot can recover from a fall and continue walking afterward.

2 Methods

2.1 Gecko-Inspired Robot

The gecko-inspired robot is based on Haomachai et al. [3]. It is inspired by the morphology of Gekko geckos, especially regarding the leg joints and segmented body. The robot is composed of four legs, each of which has four joints, as shown in Fig. 1. To investigate how the legs and spine (body and tail) contribute to the gecko's ability to self-right when falling, three versions of the robot are simulated with identical legs but different spine configurations. The first version (called GeckoBot-I) has no active control of the spine (Fig. 1a). The second version (called GeckoBot-II) adds active control of an anterior-posterior torsion of the body joint (Fig. 1a). The third version, GeckoBot-III, adds active control of a 2-degree-of-freedom (DoF) tail (Fig. 1a and b), while the body joint is the same as in GeckoBot-II. Both GeckoBot-I and II have identical body and leg weights (the body weighs 1.1 kg and each leg weighs 0.34 kg). GeckoBot-III has a similar body and leg weight, plus an extra 0.7 kg for the tail. Each leg has an adhesive foot pad simulated with a relatively high friction coefficient of 2.5. All robot versions are equipped with foot contact sensors and an inertial measurement unit (IMU) to measure the orientation of the robot.
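For orientation, the three simulated configurations described above can be summarized in a small configuration sketch. The GeckoConfig class and its field names below are illustrative assumptions and not part of the simulation code; only the masses and friction coefficient quoted in the text are taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class GeckoConfig:
    """Illustrative container for one simulated robot version."""
    name: str
    body_mass: float = 1.1          # kg, body weight shared by all versions
    leg_mass: float = 0.34          # kg, per leg (four legs, four joints each)
    tail_mass: float = 0.0          # kg, only GeckoBot-III carries a tail
    active_body_joint: bool = False
    active_tail_dof: int = 0        # number of actively controlled tail DoF
    foot_friction: float = 2.5      # adhesive foot pad friction coefficient

GECKOBOT_I   = GeckoConfig("GeckoBot-I")
GECKOBOT_II  = GeckoConfig("GeckoBot-II", active_body_joint=True)
GECKOBOT_III = GeckoConfig("GeckoBot-III", active_body_joint=True,
                           active_tail_dof=2, tail_mass=0.7)
```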

2.2 CPG-RBF Control Network

The control network is based on [13]. The whole network consists of three layers: the central pattern generator (CPG), pre-motor neurons using the radial basis activation function (RBF), and motor neurons, respectively (Fig. 1c).

CPG: The central pattern generator is a biological neural network producing rhythmic signals. Here, we use a simple version, the SO(2)-based CPG, which is composed of two neurons and produces sinusoidal-like signals [13]. These two fully connected neurons use the hyperbolic tangent (tanh) as an activation function. Each neuron produces a discrete output as follows:

$o_i(t+1) = \tanh\Big(\sum_{j=1}^{N} w_{ij}(t)\, o_j(t)\Big)$,  (1)


where $o_i$ is the output from neuron $i$, $N$ is the number of neurons, and $w_{ij}$ is the synaptic weight from neuron $j$ to $i$. The weights are chosen as follows:

$\begin{pmatrix} w_{11}(t) & w_{12}(t) \\ w_{21}(t) & w_{22}(t) \end{pmatrix} = \alpha \begin{pmatrix} \cos\varphi(t) & \sin\varphi(t) \\ -\sin\varphi(t) & \cos\varphi(t) \end{pmatrix}$,  (2)

where $\alpha$ and $\varphi$ are two parameters that allow the amplitude and frequency of the periodic CPG outputs to be adjusted, respectively. For our CPG-RBF network, $\alpha = 1.01$ and $\varphi = 0.01\pi$ are used to obtain harmonic oscillation at a fixed frequency of approximately 0.30 Hz. The parameter values are based on [13].

Pre-motor Neurons: The pre-motor neurons are the hidden-layer neurons (Fig. 1c) that reshape the periodic (sinusoidal) signals generated by the CPG to enable more proper control of the joint trajectories. The activation function of hidden neuron $b$ is given as

$R_b = e^{-\frac{(o_1 - c_{1,b})^2 + (o_2 - c_{2,b})^2}{\sigma_{RBF}^2}}$,  (3)

where $b$ is the RBF neuron index, $o_1$ and $o_2$ are the outputs of the two CPG neurons, $c_{1,b}$ and $c_{2,b}$ are the two "centers" or means of the RBF neuron $R_b$, and $\sigma_{RBF}^2$ is the variance of $R_b$ corresponding to the centers. The RBF centers are set manually so that the RBF neurons, or kernels, are equally distributed along the period of the CPG outputs. This is achieved by

$c_{i,b} = o_i\!\left(\frac{T\,(b - 1)}{B - 1}\right)$,  (4)

where $i$ is the index of the CPG output signal, $T$ is the period of the CPG signal ($T \approx 1/0.30$ Hz), and $B$ is the total number of RBF neurons. In this study, we use $\sigma_{RBF}^2 = 0.04$ and $B = 20$.

Motor Neurons: The motor neurons of the third layer (Fig. 1c) transform the outputs of the pre-motor neurons into the signals controlling the robot joints. Each joint is independent and controlled by its own neuron. Moreover, the network is fully connected, meaning that each motor neuron is connected to all the pre-motor neurons and, in our case, we simply sum all the weighted outputs as

$M_n = \sum_{b=1}^{B} w_{n,b} R_b$,  (5)

where $n$ is the motor neuron index, while the weights $w_{n,b}$ are those that allow the reshaping of the original periodic CPG (sinusoidal-like) signals into the final complex joint trajectories. These weights are learned through a black-box optimization algorithm (BBO).
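To make the three-layer structure of Eqs. (1)-(5) easier to follow, a minimal numpy sketch is given below. It is only an illustration under stated assumptions (class and variable names are ours, the warm-up length and initial CPG state are arbitrary), not the authors' implementation.

```python
import numpy as np

class CPGRBFNet:
    """Minimal sketch of the CPG-RBF premotor network of Eqs. (1)-(5)."""

    def __init__(self, n_motors, n_rbf=20, alpha=1.01, phi=0.01 * np.pi,
                 sigma_sq=0.04):
        c, s = np.cos(phi), np.sin(phi)
        self.W_cpg = alpha * np.array([[c, s], [-s, c]])      # Eq. (2)
        self.o = np.array([0.2, 0.0])                         # CPG outputs (o1, o2)
        self.sigma_sq = sigma_sq
        # Let the oscillator settle on its limit cycle, then place the B RBF
        # centres at equally spaced phases of one CPG period (Eq. 4).
        for _ in range(1000):
            self.step_cpg()
        period = int(round(2 * np.pi / phi))                  # steps per cycle
        cycle = np.array([self.step_cpg() for _ in range(period)])
        self.centres = cycle[np.linspace(0, period - 1, n_rbf).astype(int)]
        self.W_out = np.zeros((n_motors, n_rbf))              # tuned by PI^BB

    def step_cpg(self):
        self.o = np.tanh(self.W_cpg @ self.o)                 # Eq. (1)
        return self.o.copy()

    def step(self):
        """One control step: CPG -> RBF kernels (Eq. 3) -> motor commands (Eq. 5)."""
        o = self.step_cpg()
        r = np.exp(-np.sum((self.centres - o) ** 2, axis=1) / self.sigma_sq)
        return self.W_out @ r
```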

2.3 Black-Box Optimization (BBO)

BBO is a probability-based approach where, in this case, the goal is to find the optimal values of the weight sets applied between the pre-motor neuron layer and the motor neuron layer that allow optimization of the joint trajectories to obtain the robot's desired behaviors (i.e., aerial self-righting and walking). The quality of the obtained behaviors is evaluated in simulation using reward functions.

PI^BB: The BBO method, called Policy Improvement with Black-Box Optimization (PI^BB, Algorithm 1) [12], is employed to optimize the policy (i.e., the set of weights), thereby maximizing a reward function, which basically describes how well the robot performs. PI^BB is an iterative algorithm that aims to approach the maximum of the reward function. At each learning iteration, the algorithm performs K roll-outs (K = 16). A roll-out k consists of adding Gaussian noise $\epsilon_k$ to the weights $w_{n,b}$ and then running a simulation to obtain a reward $R_k$. The rewards are used to define the probability $P_k$ of each corresponding roll-out, where the probability is higher when the reward is high. The probabilities are then used as weights associated with the noise added to each roll-out in the calculation of the weight update. Finally, between each iteration, the variance of the added Gaussian noise is decreased ($\gamma = 0.995$). Thus, the exploration domain, which is very large at the beginning, is reduced little by little as the reward function approaches an expected maximum value.

Algorithm 1 PI^BB
while cost not converged do
  for k ∈ K do
    $\epsilon_k = \mathcal{N}(0, \sigma^2)$
    $R_k = \mathrm{runRBFN}(w_{n,b} + \epsilon_k)$
  end for
  for k ∈ K do
    $S_k = e^{\lambda \frac{R_k - \min(R)}{\max(R) - \min(R)}}$
    $P_k = \frac{S_k}{\sum_{k=1}^{K} S_k}$
  end for
  $\delta w_{n,b} = \sum_{k=1}^{K} P_k\, \epsilon_k$
  $w_{n,b} \leftarrow w_{n,b} + \delta w_{n,b}$
  $\sigma^2 \leftarrow \gamma \times \sigma^2$
end while

Reward Functions: Two major reward functions are used in this work (one for the learning of self-righting behavior, and the other for walking). The reward function is defined by making a linear combination of multiple sub-rewards, which correspond to components of the behavior we want to promote. For the learning of aerial self-righting behavior, three main sub-rewards are used:

$r_{success} = \begin{cases} 1 & \text{if four legs touched the ground simultaneously,} \\ -1 & \text{otherwise,} \end{cases}$  (6)

$r_{rotation} = \sum_{t:\,time\text{-}step} \frac{pan(t) + tilt(t) + roll(t)}{3}$,  (7)

$r_{legs}$ = number of legs touching the ground during the landing period,  (8)


where $r_{success}$ favors a successful landing and $r_{rotation}$ favors a large rotation of the robot during falling. Indeed, the globally desired behavior is not a slow or non-existent rotation of the robot that relies entirely on a good angle of attack of the legs for a successful landing (for example, a robot landing on its side and flipping over onto its legs once on the ground). The ideal behavior is a robot that rotates as quickly as possible to reach the landing position (meaning its four legs face the ground before landing). The angle of inclination relative to the horizontal must therefore be as small as possible. The marker of this behavior is the average of the measured tilt angles: the faster the rotation, the smaller this average will be, and vice versa. The sub-reward $r_{legs}$ favors landing on the maximum possible number of legs. It reduces material breakage by distributing the impact over the maximum number of legs. Other subsidiary terms are then added: a term penalizing internal collisions, $r_{collision}$, to reduce breakage; a term penalizing slips on the ground, $r_{slipping}$, to reduce slipping during landing; and a term penalizing the average power used, $r_{power}$, to limit energy consumption. Finally, the reward function is defined as in Eq. (9), where $w_{sccss} = 10$, $w_{legs} = 1$, $w_{rot} = 20$, $w_{pwr} = 0.2$, $w_{slip} = 0.1$, and $w_{col} = 1$:

$R_{self\text{-}righting} = (w_{sccss} \times r_{success} + w_{legs} \times r_{legs}) - (w_{rot} \times r_{rotation} + w_{pwr} \times r_{power} + w_{slip} \times r_{slipping} + w_{col} \times r_{collision})$.  (9)
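For concreteness, the linear composition of Eq. (9) can be written in a few lines of code. The sub-reward values would come from the simulation logs; the function below is only an illustration of the weighting, not the authors' implementation.

```python
# Weights quoted above for Eq. (9); sub-reward values come from the simulation.
W_SELF_RIGHTING = {"success": 10, "legs": 1, "rotation": 20,
                   "power": 0.2, "slipping": 0.1, "collision": 1}

def self_righting_reward(r):
    """r is a dict with keys success, legs, rotation, power, slipping, collision."""
    w = W_SELF_RIGHTING
    return (w["success"] * r["success"] + w["legs"] * r["legs"]) \
        - (w["rotation"] * r["rotation"] + w["power"] * r["power"]
           + w["slipping"] * r["slipping"] + w["collision"] * r["collision"])
```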

The reward function used to learn walking behavior is based on Kyndbøl et al. [8]. It is defined as in Eq. (10), while the sub-rewards and weights are defined in Table 1.

$R_{walking} = (w_{dst} \times r_{distance}) - (w_{stab} \times r_{stability} + w_{pwr} \times r_{power} + w_{slip} \times r_{slipping} + w_{col} \times r_{collision})$.  (10)

Table 1. Sub-rewards and weights of R_walking

  r_distance  = distance traveled                                              w_dst  = 3
  r_stability = body height + body pan + body tilt + body roll                 w_stab = 1
  r_power     = τ_avg · θ̇_avg, with τ_avg = average torque of all motors
                and θ̇_avg = average angular velocity of all motors             w_pwr  = 0.2
  r_slipping  = 1 for each leg slipping during the step                        w_slip = 0.1
  r_collision = 1 − (distance between each leg tip / 0.005)                    w_col  = 1.0
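A compact sketch of the PI^BB roll-out loop of Algorithm 1 is given below. The reward_of_weights argument stands in for a full simulation roll-out (e.g., one fall or walking trial scored with Eq. (9) or Eq. (10)); the toy reward in the usage example is purely illustrative.

```python
import numpy as np

def pibb(w0, reward_of_weights, K=16, iters=300, sigma=0.05,
         gamma=0.995, lam=10.0):
    """Sketch of PI^BB (Algorithm 1): Gaussian exploration in weight space with
    reward-weighted averaging of the perturbations."""
    w, var = w0.copy(), sigma ** 2
    for _ in range(iters):
        eps = np.random.normal(0.0, np.sqrt(var), size=(K,) + w.shape)
        R = np.array([reward_of_weights(w + e) for e in eps])    # K roll-outs
        S = np.exp(lam * (R - R.min()) / (R.max() - R.min() + 1e-12))
        P = S / S.sum()                                          # roll-out probabilities
        w = w + np.tensordot(P, eps, axes=1)                     # weighted update
        var *= gamma                                             # shrink exploration
    return w

# Illustrative use with a toy reward instead of a CoppeliaSim roll-out.
if __name__ == "__main__":
    target = np.ones(5)
    best = pibb(np.zeros(5), lambda w: -np.sum((w - target) ** 2))
    print(best.round(2))
```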

3 Simulations and Results

To verify the self-righting performance of each robot version, we carried out a series of simulation experiments that compared the self-righting success rate


when starting at different tilt angles and analyzed the joint dynamics during the falling process of each robot. In addition, GeckoBot-III demonstrated continuous locomotion from falling to walking.

3.1 Simulation Setup

The learning process was performed in the CoppeliaSim simulator, using the Vortex physics engine. The simulation environment was flat terrain (a horizontal surface on which the robot landed). To learn aerial self-righting, the altitude of the drop point was 3 m. Each robot version learned to perform aerial self-righting with different initial tilt angles, including 80, 90, 100, 120, 130, 140, 150, and 160 degrees (Fig. 2a). The learning period was set to 300 iterations with 10 s of simulated time per iteration. To learn to walk, the robot used the same settings but started from standing on the flat ground. The inertial measurement unit (IMU) was used to monitor the body rotation angle during learning, especially the tilt angle during the falling and landing periods. We used the learned policies (after 300 iterations) to evaluate robot performance. It should be noted that all robot models, environments, and experiments were implemented in the CoppeliaSim simulation.
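The experimental protocol above can be summarized as a small configuration block. The dictionary below is only a restatement of the quoted settings, and run_learning is a hypothetical stand-in for one CoppeliaSim learning run.

```python
# Restatement of the simulation protocol; run_learning is hypothetical.
SETUP = {
    "simulator": "CoppeliaSim (Vortex physics engine)",
    "drop_altitude_m": 3.0,
    "initial_tilt_deg": [80, 90, 100, 120, 130, 140, 150, 160],
    "learning_iterations": 300,
    "sim_time_per_iteration_s": 10.0,
}

def evaluate_all(robot_version, run_learning):
    """Learn one policy per initial tilt angle and keep the final ones."""
    return {angle: run_learning(robot_version, angle, SETUP)
            for angle in SETUP["initial_tilt_deg"]}
```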

Fig. 2. a An example of a 60-degree initial tilt (falling) angle for the robot at an altitude of three meters above the ground. b Comparison of aerial self-righting success rates at different initial (falling) tilt angles (80, 90, 100, 120, 130, 140, 150, and 160 degrees) for the three robot versions. A video of the experiments (90, 120, and 150 degrees) can be seen at https://bit.ly/3MFsWMM.

3.2 Robot Aerial Self-righting Results

Figure 2b shows that the ability to recover when falling from an inverted angle can be improved by using the spine (body and tail). More specifically, GeckoBot-I used its four legs, without active control of the spine, to turn around in the air. It could only land from a vertical angle (90 degrees). Moreover, GeckoBot-II, with the addition of axial body joint control, was able to land from a larger angle of 110 degrees.


Furthermore, an initial falling angle of approximately 150 degrees can be achieved using the full functionality of the spine, including a body joint and a 2-DoF tail, as shown by GeckoBot-III.

3.3 Aerial Self-Righting Dynamic Movement Analysis

To understand the key factors (i.e., legs, body, tail, and their combinations) that help the robot achieve aerial self-righting, we investigated the joint dynamics and tilt angle of each robot version at an initial 90-degree tilt angle. The robot started at the release point (Fig. 3, dashed green line), went through the falling period (Fig. 3, the period between the dashed green line and the orange zone), and then the landing period (Fig. 3, orange zone). One of the evaluation metrics was the reduction of the tilt angle during the falling period (called the aerial rotation angle), which allows the robot's body to become as parallel to the ground as possible before landing. This can ensure that the robot smoothly and successfully lands with all four legs touching the ground.

Fig. 3. Evolution of the tilt angle and leg joint movements of GeckoBot-I with an initial angle of 90 degrees. The dotted red line indicates the first time the robot was observed to touch the ground.

GeckoBot-I: Results of the learned policy of GeckoBot-I (Fig. 3) show that the leg joints do not move much during the fall. Indeed, the most significant movements can be observed in the orange zone, during landing, because some of the legs are forced to bend when suddenly hitting the ground. The tilt angle also indicates that this robot configuration does not allow the robot to rotate more than a few degrees during falling. Indeed, the tilt angle at the beginning of the fall is about 90 degrees, and at the moment of landing (orange zone), it is still around 87 degrees. However, after touching the ground, the tilt angle decreases and eventually reaches about 0 degrees (body parallel to the ground), which means that most of the landings were successful. This suggests that controlling only the robot's legs does not allow efficient aerial rotation; the legs are mainly used for obtaining a proper angle of attack during landing.


GeckoBot-II: Figure 4 presents the results of the learned policy of GeckoBot-II, which is able to rotate a body joint along an anterior-posterior axis. In terms of joint movement, the body joint rotates to a certain angle before landing (−35 degrees in this case) and then gradually rotates back after landing. The body movement also induces more dynamic leg joint movement compared to when the body joint is absent. This is consistent with what can be expected when observing the behavior of geckos during a fall [5]. Consequently, the tilt angle reaches approximately 71 degrees before landing, which is 16 degrees less than that of GeckoBot-I. This indicates that a torsion axis in the body is one of the key factors in helping to reduce the tilt angle and achieve successful self-righting performance. However, even though the tilt angle may be reduced during the falling period, it is still high for landing. Self-righting success still relies heavily on the angle of attack of the legs when the robot touches the ground.

Fig. 4. Evolution of the tilt angle as well as leg and body joint movements of GeckoBot-II with an initial angle of 90 degrees.

GeckoBot-III: Figure 5 presents the results of the learned policy of GeckoBot-III. Here, a swinging motion of the tail, which relies mostly on rotation of the tail joint (approximately 20 degrees, from 0 to −20 degrees), is observed during the falling period. Together, the axial body joint and the 2-DoF tail significantly help the robot perform aerial rotation by reducing the tilt angle to around 57 degrees before landing. It is clear that the spine (the combination of body and tail) is the most influential factor in aerial rotation, providing a suitable tilt angle for a smooth landing with all four legs contacting the ground simultaneously.

3.4 Robot Walking and Locomotion Transition

Here we further used the best robot version, GeckoBot-III, for learning to walk. It successfully learned to walk forward within 300 iterations (Fig. 6a), where we obtained a high positive reward. The resulting walking control policy, which coordinates the leg, body, and tail joints and generates a walking gait, is shown in Fig. 6b.


Fig. 5. Evolution of the tilt angle as well as leg, body, and 2-DoF tail joint movements of GeckoBot-III with an initial angle of 90 degrees.

Finally, we demonstrated the transition between both locomotion modes, from self-righting at a 150-degree initial tilt angle to walking, using GeckoBot-III. First, we applied the learned policy of self-righting; then, after a successful landing (four legs standing on the ground), we switched to the learned policy of walking (note that there were two weight sets, or control policies: one for self-righting and the other for walking). Figure 6c shows the time evolution of all joint movements, including legs, body, and tail. The robot was released at around 0.5 s and performed aerial rotation (see (1) in Fig. 6c) until around 2.5 s, when the landing period began (see (2) in Fig. 6c), and successfully landed at 4 s. Consequently, the learned policy of walking was applied after 4 s, with manual switching between the two locomotion modes, to allow the robot to walk continuously (see (3) in Fig. 6c).

4 Discussion and Conclusion

We present a systematic way to investigate how the spine contributes to the ability of geckos to perform self-righting when falling. We used a gecko-inspired robot with a morphology design based on an experiment with a Gekko gecko. The function of the spine (body and tail) was investigated via three versions of the robot: GeckoBot-I, -II, and -III. GeckoBot-I did not have active control of the spine, GeckoBot-II had an additional body joint, and GeckoBot-III had a body joint and 2-DoF tail joints. It should be noted that the same leg and foot structure was used in all robot versions. The CPG-RBF network with a learning mechanism (PI^BB) was introduced to learn self-righting and walking behavior in all robot versions. We performed a series of self-righting experiments in simulation to evaluate the aerial rotation and landing success rate of each robot version. According to the results, GeckoBot-I performed some aerial rotation (i.e., in the case of an initial 90 degrees, the tilt angle was still high at 87 degrees before the landing



Fig. 6. a Time evolution of reward values during learning to walk of GeckoBot-III. b All joint trajectories after learning (i.e., using the control policy at the 300th iteration) and robot walking behavior. A video of robot walking can be seen at https://bit.ly/ 3nBoTa8. c Example of GeckoBot-III demonstrating locomotion transition from falling to walking. The top graph shows the joint movement of the legs, body, and tail. The snapshots depict the robot posture while performing aerial self-righting (1), (2), and then walking (3). A video of this can be seen at https://bit.ly/3MIughS.

period), leading to the robot having a limited opportunity for self-righting below only 90 degrees of initial tilt angle. GeckoBot-II used a body joint to amplify its aerial rotation ability. For example, at an initial 90 degrees, the tilt angle is reduced from 90 to 71 degrees before landing. As a result, this robot can achieve an initial tilt angle of 110 degrees. For the full configuration of the spine, in addition to the legs, GeckoBot-III used a body joint and 2-DoF tail joints to improve aerial rotation, enabling this robot to achieve a falling angle of 150 degrees. This clearly indicates that whole-body rotation during falling is mostly influenced by the spine, including the body and tail. The robot without a motorized spine could not recover from an inverted plane (i.e., an inverted slope or ceiling) and was limited to the vertical wall only. In addition, we also demonstrate continuous locomotion from falling to walking of GeckoBot-III. This illustrates both capabilities of the CPG-RBF network: first, the ability to learn different locomotion modes, and second, the modularity to change locomotion mode on the fly. Taken together, the robot could not only learn self-righting but also walk within the same network structure, where different weight sets are used. Therefore, a smooth transition between different locomotion modes can be achieved by simply changing the output weights of the network (i.e., the learned policy). In the future, we will optimize the tail weight, add more joints, and introduce a compliant mechanism to the tail to increase the self-righting success rate. We will also extend our CPG-RBF network with sensory feedback to improve aerial rotation during falling and apply adaptive muscle models to encode an elastic property in the joints to facilitate a smoother landing.


Acknowledgments. We thank CM Labs for providing Vortex.

References
1. Arabyan, A., Tsai, D.: A distributed control model for the air-righting reflex of a cat. Biol. Cybern. 79(5), 393–401 (1998)
2. Chang-Siu, E., Libby, T., Brown, M., Full, R.J., Tomizuka, M.: A nonlinear feedback controller for aerial self-righting by a tailed robot. In: 2013 IEEE International Conference on Robotics and Automation, pp. 32–39 (2013)
3. Haomachai, W., Shao, D., Wang, W., Ji, A., Dai, Z., Manoonpong, P.: Lateral undulation of the bendable body of a gecko-inspired robot for energy-efficient inclined surface climbing. IEEE Robot. Autom. Lett. 6(4), 7917–7924 (2021)
4. Johnson, A.M., Libby, T., Chang-Siu, E., Tomizuka, M., Full, R.J., Koditschek, D.E.: Tail assisted dynamic self righting, pp. 611–620
5. Jusufi, A., Zeng, Y., Full, R., Dudley, R.: Aerial righting reflexes in flightless animals. Integr. Comp. Biol. 51, 937–943 (2011)
6. Kane, T.R., Scher, M.P.: A dynamical explanation of the falling cat phenomenon. Int. J. Solids Struct. 5(7), 663–670 (1969)
7. Kurtz, V., Li, H., Wensing, P.M., Lin, H.: Mini cheetah, the falling cat: a case study in machine learning and trajectory optimization for robot acrobatics. In: 2022 International Conference on Robotics and Automation (ICRA), pp. 4635–4641. IEEE (2022)
8. Kyndbøl, P.M.: Using machine learning for optimizing and adapting gecko like robot locomotion to walk on surfaces with different slopes. Master's thesis, Syddansk Universitet, Odense, Denmark (2020)
9. Roscia, F., Cumerlotti, A., Del Prete, A., Semini, C., Focchi, M.: Orientation control system: enhancing aerial maneuvers for quadruped robots. Sensors 23(3) (2023)
10. Shields, B., Robertson, W.S., Redmond, N., Jobson, R., Visser, R., Prime, Z., Cazzolato, B.: Falling cat robot lands on its feet. In: Proceedings of the Australasian Conference on Robotics and Automation, pp. 74–82 (2013)
11. Siddall, R., Ibanez, V., Byrnes, G., Full, R.J., Jusufi, A.: Mechanisms for midair reorientation using tail rotation in gliding geckos. Integr. Comp. Biol. 61(2), 478–490 (2021)
12. Stulp, F., Sigaud, O.: Policy improvement methods: between black-box optimization and episodic reinforcement learning (2012). https://hal.science/hal-00738463
13. Thor, M., Kulvicius, T., Manoonpong, P.: Generic neural locomotion control framework for legged robots. IEEE Trans. Neural Netw. Learn. Syst. 32, 4013–4025 (2021)

Double Gradient Method: A New Optimization Method for the Trajectory Optimization Problem

Alam Rosato Macêdo 1(B), Ebrahim Samer El Youssef 2, and Marcus V. Americano da Costa 1

1 Universidade Federal da Bahia, Bahia, Brazil
{alam.rosato,marcus.americano}@ufba.br
2 Universidade Federal de Santa Catarina, Santa Catarina, Brazil
[email protected]

Abstract. In this paper, a new optimization method for the trajectory optimization problem is presented. This new method allows racing lines described by cubic splines (a problem solved in most cases by stochastic methods) to be predicted in times comparable to those of deterministic methods. The proposed Double Gradient Method (DGM) is not affected by the dimensionality of the problem. Comparison of the results with data collected from professional drivers has shown that the DGM is reliable for lap time simulations with race line optimization. It can help drivers find the fastest racing line, and it can be used for embedded algorithm development or for autonomous vehicle competitions.

Keywords: Lap time simulation · Optimization · Autonomous racing vehicle

1 Introduction

Over the years, simulators have been developed to help drivers find the fastest racing line [1] or to help develop embedded algorithms, such as stability control [2]. Lap time simulators with race line optimization using nonlinear programming have become attractive since the advances in computing capacity, as they can perform optimizations of hours [3, 4] in a few minutes [2, 5–8]. In 2014, F. Biral and R. Lot [5] developed a method for predicting the optimal path using the three-dimensional coordinates of the track as an implicit reference in the equations of the vehicle model. Using Darboux's trihedron, they modeled the circuit in curvilinear coordinates, which in turn were used to derive the vehicle's motion equations. The proposed method can optimize racing lines in less than a minute but requires the equations to be formulated at a symbolic level to derive the model that is inserted into the solver.
The formulation of equations at the symbolic level requires information that is not always available to engineers from, for example, the Brazilian automotive categories, especially the semi-professional ones. Data for tire modeling are particularly difficult to obtain, even in professional categories.


Stochastic methods [9, 10] and derivative-free methods [11] can solve optimization problems without having to fully model the tires of the vehicle [1, 12, 13], but are not as fast as deterministic methods [9].
Autonomous vehicle competitions are an interesting platform to develop control algorithms that work at the limit of handling [14], aiming at more reliable embedded systems for road vehicles. Trajectory prediction time is a key factor for such competitions [15, 16]. Simplifying the vehicle model and precomputing the racing line for a new track using cubic splines [13] is one strategy to reduce the trajectory prediction time. As in regular semi-professional competitions where tire data is not available, trajectory optimization for simplified models with a cubic-spline path description falls back on stochastic or derivative-free methods.
Based on the foregoing, this paper proposes an optimization method based on the gradient descent concept and inspired by Neural Networks to solve trajectory optimization problems: the Double Gradient Method (DGM). This new method allows us to predict racing lines described by cubic splines in times similar to those of deterministic methods. The results of the developed method were compared with the results obtained by the following methods: Genetic Algorithm (GA) [17, 18], Differential Evolution (DE) [19, 20], Nelder-Mead [21–23], and Trust Region [24–27]. These methods were chosen because they can solve the optimization problem without using the derivative of the objective function, which is not available in cubic spline path optimization problems.
The main contributions of this work are:
1. Development of a new method to solve trajectory optimization problems graphically, i.e., by manipulating a cubic spline that describes the vehicle path.
2. Application of this method to a race line prediction problem.
3. Comparison of the DGM with four other methods capable of solving race lines graphically.
4. Validation of the results by comparison with professional driver performance.
This paper is organized as follows: Sect. 2 presents the optimization problem for a path interpolated by a cubic spline. Section 3 develops the proposed Double Gradient Method, and simulation results are presented in Sect. 4. Section 5 contains the conclusions, applications of the algorithm, limitations of the algorithm, and future works.

2 Optimization Problem

The DGM presented in this paper is applied to the following problem: Given a vehicle model with one degree of freedom and a trajectory described by cubic splines, determine the path that gives the shortest lap time on a circuit by directly changing the control points of the cubic splines (see Fig. 1). The optimization problem in the standard formulation is given by:

$\min_{u} Fob(u) = \sum_{i} \frac{2 \cdot s_i(u)}{\dot{x}_i(u) + \dot{x}_{i-1}(u)}$  (1)

subject to:

$u_i \le u \le u_s$  (2)


Fig. 1. Updating the trajectory control point positions during the optimization process. The control coordinates (in red) are interpolated by the t parameters of the u vector of decision variables to generate the cubic spline control points Pn. pi are the interpolated points between control points.

The objective function $Fob(u)$ is the numerical integration of the path distance $s_i$ as a function of velocity (i.e., the lap time), where $s_i$ is the linear distance between two consecutive discrete points $p_{i-1}$, $p_i$ describing the vehicle path. The distance $s_i$ cannot be integrated as a function of the spline by Bézier curves [28], since it is an elliptic integral; hence, a linear approximation between the interpolated points is used, which requires an appropriate interpolation mesh (less than 1 m between two consecutive points, for errors up to the third decimal place). $\dot{x}_i$ represents the speed and is determined according to the model used to represent the vehicle on the track.
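A minimal numpy sketch of Eq. (1) is shown below; the function and array names are ours, and the speed profile would come from whatever vehicle model is used (in the toy check, a constant speed is assumed).

```python
import numpy as np

def lap_time(points, speeds):
    """Eq. (1): lap time from interpolated path points p_i (N x 2 array, in m)
    and the speed profile x_dot_i (length N, in m/s) from the vehicle model."""
    seg = np.diff(points, axis=0)                 # p_i - p_(i-1)
    s = np.hypot(seg[:, 0], seg[:, 1])            # linear distances s_i
    return np.sum(2.0 * s / (speeds[1:] + speeds[:-1]))

# Toy check: constant 50 m/s around a densely sampled circle of radius 200 m.
theta = np.linspace(0.0, 2.0 * np.pi, 1001)
circle = 200.0 * np.column_stack((np.cos(theta), np.sin(theta)))
print(lap_time(circle, np.full(1001, 50.0)))      # ~ 2*pi*200/50 = 25.1 s
```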

Fig. 2. In red, the control coordinates for generating the path for the Goiânia circuit.

The interpolation of the trajectory is based on control points that are uniformly distributed along the track. The control points must be homogeneously distributed due to the interpolation properties of the spline by Bézier curves. The interpolated points are distributed as a function of the linear interpolation of the Bézier points [29], so an irregular distribution of the control points leads to an irregular distribution of the interpolated points. Since the interpolated points correspond to the discretization of the problem in the space domain, it is necessary that they are distributed as homogeneously as possible to ensure an adequate mesh.


The u vector of decision variables contains the linear interpolation parameters of the pairs of control coordinates shown in Figs. 1 and 2, which form the control points for the spline interpolation, so problem (1) has j − 1 degrees of freedom for a u vector with j parameters t; a minimal code sketch of this mapping is given after the requirements list below. The bounds for the u vector are: $u_i = 0$ and $u_s = 1$. The evaluation of $Fob(u)$ requires simulating the vehicle model on the track, so the optimizer cannot use the objective function itself to solve the problem with deterministic methods. The value of $Fob(u)$ is not available for evaluation until the simulation is completed, so probabilistic methods and derivative-free methods are better suited to solve it, despite the high time cost.
The "Double Gradient" method proposed in this paper was inspired by Neural Network topology and aims to take advantage of the main benefit of using gradients in solving optimization problems: speed. The requirements for the method are:
• The simulation time of the developed algorithm must be comparable to the simulation times of deterministic algorithms (a few minutes [2, 5–8]) to allow its application during the shakedown of racing vehicles or during competitive events.
• The algorithm must optimize the trajectory without using the vehicle's dynamic equations, which allows for greater flexibility in terms of the simulation method used to represent the vehicle on the track. This flexibility allows the use of complex models with as much vehicle information as possible [6] or simplified models that require minimal vehicle data [13].
• The algorithm must be able to solve the problem graphically, by directly modifying the cubic spline that defines the path. This requirement is a direct consequence of the previous requirement.
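The sketch below illustrates how each t-parameter in u interpolates one pair of control coordinates (Figs. 1 and 2) into a spline control point. The names inner and outer (e.g., the two paired track-edge coordinates) are assumptions made only for the illustration.

```python
import numpy as np

def control_points(u, inner, outer):
    """Map the decision vector u (t-parameters in [0, 1]) to spline control
    points P_n by linear interpolation of each pair of control coordinates.
    inner and outer are j x 2 arrays holding the paired coordinates."""
    u = np.clip(np.asarray(u, dtype=float), 0.0, 1.0)[:, None]
    return (1.0 - u) * inner + u * outer

# Example: u = 0.5 places every control point midway between its pair.
# P = control_points(np.full(93, 0.5), inner_edge_xy, outer_edge_xy)
```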

3 Double Gradient Method

Artificial Neural Networks (ANN) are the electrical equivalent of the biological neural networks from which they were inspired [17]. Neurons are represented by linear functions, while synapses are represented by nonlinear inhibitory functions that limit the signal amplitude processed in a cell. The method developed in this section was inspired by a particular topology of Artificial Neural Networks: the single-layer network with error back-propagation, as shown in Fig. 3. In matrix form, the topology of Fig. 3 is represented as follows:

$Y'(k) = \varphi(W(k) \cdot U(k))$  (3)

where $Y(k)$ and $Y'(k)$ are the reference and the output of the network at iteration $k$. $W$ and $U$ are the weight and input matrices, respectively, and $\varphi$ is the activation function. The adaptation law is given by the gradient rule:

$e = Y - Y'$  (4)

$\frac{\partial e}{\partial W} = -\frac{\partial Y'}{\partial W} = -\frac{\partial \varphi(W \cdot U)}{\partial W}$  (5)


Fig. 3. Topology of an artificial neural network with error backpropagation

$W(k+1) = W(k) - \alpha \cdot \frac{\partial e}{\partial W}$  (6)

In order for the neural network’s W weights to adapt, the network is presented with a set of U input data and their respective reference outputs Y during training. Only one input is available for solving the optimization problem and there is no reference to the error, yet the Double Gradient Method uses the basic topology presented for constructing the algorithm. For application in solving the problem of trajectory optimization by control points for spline interpolation, the topology of Fig. 3 was modified according to Fig. 4. The following Eqs. 7 to 15 were developed to emulate the specialization phenomenon of a Neural Network by adjusting the weights to determine the best path. The resulting topology is not an ANN (there is no learning process), but an analog for optimization problems.

Fig. 4. Basic topology of the double gradient method

In Fig. 4, the vector ϑ of inputs (u in the neural network) is an invariant fixed reference (e.g. the track center line) during the optimization process and the vector u becomes both the “network” output and the input for the simulator running the vehicle model, i.e., it becomes the vector of decision variables of the optimization problem.


In matrix form:

$u_n(k) = \varphi(W_n(k) * \vartheta_n(k))$  (7)

where $*$ denotes the row-wise product of the $W$ and $\vartheta$ matrices, such that $u$ is an $n \times 1$ matrix. Three gradients are considered for the construction of the adaptation law:

$\partial p_{ft,n}(k)_{[n\times1]} = p_{ft,n}(k)_{[n\times1]} - p_{ft,n}(k-1)_{[n\times1]}$  (8)

$\partial u_n(k)_{[n\times1]} = u_n(k)_{[n\times1]} - u_n(k-1)_{[n\times1]}$  (9)

$\partial p_{\ddot{y},n}(k)_{[n\times1]} = p_{\ddot{y},n}(k)_{[n\times1]} - p_{\ddot{y},n}(k-1)_{[n\times1]}$  (10)

where $p_{ft}$, defined by Eq. (11), is a function that accounts for the time gain/loss between the control points of the spline that forms the trajectory, and $p_{\ddot{y}}$, defined by Eq. (12), is a function that accounts for the variation in lateral acceleration of the vehicle between the spline control points that form the trajectory. Both equations were established empirically. Equation (12) was established to emulate the driver's pace on the track, and the second and third terms of Eq. (11) provide more factual trajectory results when defined in this manner.

$p_{ft} = \begin{cases} Fob(u)^2 \cdot \sum_{ip(n-1)}^{ip(n)} \dfrac{2 \cdot s_i}{\dot{x}_i + \dot{x}_{i-1}} \cdot \sum_{ip(n)}^{ip(n+1)} \dfrac{2 \cdot s_i}{\dot{x}_i + \dot{x}_{i-1}}, & \text{if } n = 1, \ldots, j-2 \\ Fob(u)^2 \cdot \sum_{ip(n-1)}^{ip(n)} \dfrac{2 \cdot s_i}{\dot{x}_i + \dot{x}_{i-1}} \cdot \sum_{ip(0)}^{ip(1)} \dfrac{2 \cdot s_i}{\dot{x}_i + \dot{x}_{i-1}}, & \text{if } n = j-1 \end{cases}$  (11)

$p_{\ddot{y}} = \begin{cases} \sum_{ip(n-1)}^{ip(n)} |\ddot{y}_i - \ddot{y}_{i-1}| + \sum_{ip(n)}^{ip(n+1)} |\ddot{y}_i - \ddot{y}_{i-1}|, & \text{if } n = 1, \ldots, j-2 \\ \sum_{ip(n-1)}^{ip(n)} |\ddot{y}_i - \ddot{y}_{i-1}| + \sum_{ip(0)}^{ip(1)} |\ddot{y}_i - \ddot{y}_{i-1}|, & \text{if } n = j-1 \end{cases}$  (12)

Thus, the algorithm adaptation law is given by:

$w(k+1) = \begin{cases} w(k) - \alpha(k) * \mathrm{sgn}(\partial p_{ft}(k)) * \mathrm{sgn}(\partial u(k)), & \text{if } \kappa_{p_n} \ge 0.002 \\ w(k) - \alpha(k) * \mathrm{sgn}(\partial p_{\ddot{y}}(k)) * \mathrm{sgn}(\partial u(k)), & \text{if } \kappa_{p_n} < 0.002 \end{cases}$  (13)

The updating of the weights of the algorithm, as shown in Eq. (13), depends only on the sign of the gradient. The optimization of the trajectory is done by changing the location of the spline control points that form the path (see Fig. 1). The magnitude of the gradient is not important and would make it difficult to control the degree of change of the trajectory between iterations of the optimization process, so only the sign of the gradient is needed for the adaptation law. Comparing Eqs. (13) and (6), we find that the weight update step in (6) decreases as the reference error decreases, while this step in Eq. (13) is constant, since there is no error parameter. To mimic the weight update process, some features of Eq. (13) need to be analyzed:


• The weights are updated based on $\partial p_{ft,n}$ if the path curvature at the point $P_n$ is such that it can be considered a curve ($\kappa \ge 0.002$). Under this condition, small changes in the position of the control points can produce reasonable gains/losses in lap time.
• The weights are updated based on $\partial p_{\ddot{y},n}$ if the path curvature at the point $P_n$ is such that it can be considered a straight line ($\kappa < 0.002$). In this condition, small changes in the position of the control points do not effectively affect the lap time. In this case, better results are obtained by reducing the variation of the lateral acceleration $\ddot{y}$.
• The gradients serve only as a "compass" to indicate the direction of the weight update. The update step is defined by the update rate $\alpha$.
• The second term of Eq. (13) is always different from zero at all points except the trivial solution points. As the function approaches a suboptimal point, it stabilizes and oscillates indefinitely around that point.

Since we know that $\alpha$ is the only parameter that changes the update step of the weights, and that Eq. (1) stops progressing when approaching a suboptimum, the update rate $\alpha$ is treated as follows:

$\alpha(k+1) = \begin{cases} \dfrac{\alpha(k)}{2}, & \text{if } \varsigma > 0 \text{ and } \varsigma \,\%\, nk = 0 \text{ and } \alpha > 0.0001 \\ \alpha(k), & \text{elsewhere} \end{cases}$  (14)

where the parameter $\varsigma$ indicates for how many consecutive iterations $k$ the value of $Fob(u)$ has not progressed and $\%$ represents the remainder of the division $\varsigma/nk$. The parameter $nk$ specifies how many iterations are allowed without progress in the result. The limit of 0.0001 is fixed because path changes at the level of tenths of a millimeter are meaningless for the lap time. Finally, the $\varphi$ function ensures that $0 \le u_n < 1$ through the modified sigmoid of Eq. (15), where $f = (W * \vartheta)$:

$\varphi(f) = \frac{1}{1 + e^{-15.637 f + 7.819}}$  (15)
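To make the interplay of Eqs. (7) and (13)-(15) easier to follow, a schematic sketch of one DGM iteration is given below. It assumes a hypothetical simulate_lap helper returning the lap time and the per-point p_ft, p_ÿ, and curvature κ; variable names are illustrative and this is not the authors' implementation.

```python
import numpy as np

def sigmoid(f):
    """Modified sigmoid of Eq. (15), keeping 0 <= u_n < 1."""
    return 1.0 / (1.0 + np.exp(-15.637 * f + 7.819))

def dgm_step(w, theta, prev, alpha, stall, simulate_lap, nk=20, kappa_thr=0.002):
    """One Double Gradient iteration (Eqs. 7, 13, 14).
    prev holds u, p_ft, p_yy from the previous iteration; stall counts how many
    consecutive iterations the lap time has not improved (managed by the caller)."""
    u = sigmoid(w * theta)                                   # Eq. (7), row-wise product
    lap, p_ft, p_yy, kappa = simulate_lap(u)                 # hypothetical simulator call
    d_u = np.sign(u - prev["u"])
    d_ft = np.sign(p_ft - prev["p_ft"])                      # Eq. (8)
    d_yy = np.sign(p_yy - prev["p_yy"])                      # Eq. (10)
    corner = kappa >= kappa_thr
    w = w - alpha * np.where(corner, d_ft, d_yy) * d_u       # Eq. (13)
    if stall > 0 and stall % nk == 0 and alpha > 1e-4:       # Eq. (14)
        alpha *= 0.5
    return w, alpha, {"u": u, "p_ft": p_ft, "p_yy": p_yy}, lap
```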

4 Results

In this work, data collected from data loggers in competition vehicles were used to validate the simulated data (see Figs. 5, 6, 7 and 8). Track data obtained from two professional drivers at the Autódromo Internacional Ayrton Senna (Goiânia) were used for validation. The vehicles and the competition cannot be disclosed for confidentiality reasons. Four already available methods were compared with the Double Gradient Method: Genetic Algorithm, Differential Evolution, Nelder-Mead, and Trust Region. These methods were chosen because they can optimize problems without requiring the derivative of the objective function, which is the key feature for solving racing-line optimization problems with cubic splines. The Goiânia circuit was modeled by 93 pairs of control coordinates (see Fig. 2). Table 1 shows the results obtained after the simulations with the four selected optimization methods and the Double Gradient Method developed and presented in this work.


The proposed method was the only one that could solve the problem within a time window of minutes, comparable to that of the deterministic methods, which range from about 1 to 10 min [8]. To validate the data analysis, all data were compared taking the inner edge of the track as a fixed reference. The reference vector $\vartheta$ was used as the initial guess for the four selected methods. Only the Double Gradient data are presented so as not to overload the figures.

Table 1. Simulation results compared to the drivers' performance.

  Method                   Simulation time   Lap time [s]   Difference [s]
  Differential evolution   06:37:50          93.29          0.37
  Genetic algorithm        19:55:22          96.72          3.80
  Double gradient          00:07:29          93.30          0.38
  Nelder-Mead              01:15:29          94.23          1.31
  Trust region             01:59:45          94.30          1.38
  Driver 1                 NA                92.92          0.00
  Driver 2                 NA                93.20          0.28


The following figures show the metric comparisons between the proposed method and the professional drivers' performance:

Fig. 5. Vehicle position along the track in relation to the track inside edge (distance from the inner edge of the track, in m, versus distance traveled, in m; Driver 1, Driver 2, and Double Gradient).

Fig. 6. Vehicle speed along the track (speed in kph versus distance traveled in m; Driver 1, Driver 2, and Double Gradient).

Fig. 7. Race line (path) curvature (in 1/m) versus distance traveled (m) for Driver 1, Driver 2, and Double Gradient.

Fig. 8. Simulated g-g diagram (longitudinal versus lateral acceleration, in g) compared to those obtained with professional drivers (Driver 1 and Driver 2).

5 Conclusion

In this paper, a new optimization method for the trajectory optimization problem was presented, where control points are used to manipulate a spline representing the path of the vehicle. Since the solution to the problem involves direct manipulation of the spline


control points for path construction (see Fig. 1), and these control points are manipulated by a vector of linear interpolation parameters that vary between zero and one, the matrix product structure modulated by an activation function of Neural Networks was suitable for this application. By changing the topology, the role of the Neural Network variables was changed as follows:
• The input variables now act as reference vectors. For the optimization problem presented in this paper, the most suitable reference vector is the centerline of the track.
• The modifications applied to the network topology were made to emulate the specialization phenomenon of a Neural Network by adjusting the weights to determine the best path.
• The Neural Network output vector is the result of the problem posed to it. In the modified network, the output of the network is passed to the simulator, which calculates the vehicle lap time. The task of the network is now to adjust the vector of decision variables of the optimization problem, i.e., to change the layout of the center line of the race track until the lowest possible lap time is reached.
• The adaptation law of the Neural Network is constructed as a function of the error gradient. In the modified network, the adaptation law is constructed based on the gradients of the simulator outputs (lap time and lateral acceleration) and the gradient of the vector of decision variables. The adaptation law uses two functions to which two gradients are applied, hence the name of the proposed method.

5.1 Algorithm Limitations
• Since it is based on gradients, long stretches of straight lines can produce a local minimum. This condition has a small effect on the total lap time, but it can produce a path that is unusual for professional drivers. Since drivers travel a known path on the straights, the control points on these sections can be set at the known positions, minimizing this problem.
• The algorithm assumes that the vehicle simulation model is able to follow the path defined by the spline. This is done either by linking the curvature and spline length data to the model equations or by running a control algorithm that follows the optimized trajectory.

5.2 Algorithm Applications
• Vehicle setup assistance during test sessions in competitions.
• Baseline prediction for autonomous vehicle competitions.
• Development of amateur and semi-professional drivers, helping them learn the best race line for the circuit of interest.

5.3 Future Works
• Generalize the method for solving various problems graphically through the manipulation of cubic splines or surfaces.


• Trajectory optimization in restricted 3D space, such as racetracks or test tracks modeled in three dimensions.
• Trajectory optimization in unrestricted 3D space for applications with drones, Unmanned Aerial Vehicles (UAVs), or Autonomous Underwater Vehicles (AUVs).

Acknowledgement. The authors would like to thank the Bahia Research Foundation (FAPESB) for the financial support (grant 0342/2021).

References
1. Gadola, M., Cambiaghi, D., Manzo, L., Vetturi, D.: A tool for lap time simulation. SAE Technical Paper 962529, December 1996
2. Koutrik, S.V.: Optimal control for race car minimum time maneuvering. Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft (2015)
3. Casanova, D.: On minimum time vehicle manoeuvring: the theoretical optimal lap. Cranfield University, School of Mechanical Engineering (2000)
4. Kelly, D.P.: Lap time simulation with transient vehicle and tyre dynamics. Cranfield University School of Engineering, Automotive Studies Group (2008)
5. Biral, F., Lot, R.: A curvilinear abscissa approach for the lap time optimization of racing vehicles, Cape Town (2014)
6. Bianco, N.D., Lot, R., Gadola, M.: Minimum time optimal control simulation of a GP2 race car. J. Automob. Eng. 232(9), 1180–1195 (2018)
7. Herrmann, T., Passigato, F., Betz, J., Lienkamp, M.: Minimum race-time planning-strategy for an autonomous electric racecar (2020). arXiv:2005.07127
8. Lovato, S., Massaro, M.: A three-dimensional free-trajectory quasi-steady-state optimal-control method for minimum-lap-time of race vehicles. Veh. Syst. Dyn. (2021). https://doi.org/10.1080/00423114.2021.1878242
9. Bastos, E.A.: Otimização de Seções Retangulares de Concreto Armado Submetidas à Flexo-Compressão Oblíqua Utilizando Algoritmos Genéticos. Universidade Federal do Rio de Janeiro, Rio de Janeiro (2004)
10. Chaves, A.A.: Heurísticas Híbridas com Busca Através de Agrupamentos para o Problema do Caixeiro Viajante com Coleta de Prêmios. Instituto Nacional de Pesquisas Espaciais, São José dos Campos (2005)
11. Gross, J.C.: Derivative-free methods for high-dimensional optimization with application to centrifugal pump design. University of Cambridge, Cambridge (2021)
12. Lenzo, B., Rossi, V.: A simple mono-dimensional approach for lap time optimisation. Appl. Sci. 10, 1498 (2020)
13. Jain, A., Morari, M.: Computing the racing line using Bayesian optimization, 12 Feb. 2020
14. Srinivasan, S., Sa, I., Zyner, A., Reijgwart, V., Valls, M.I., Siegwart, R.: End-to-end velocity estimation for autonomous racing. IEEE Robot. Autom. Lett. 5(4), 6869–6875 (2020)
15. Kapania, N.R., Subosits, J., Gerdes, J.C.: A sequential two-step algorithm for fast generation of vehicle racing trajectories. ASME J. Dyn. Syst. Meas. Control 138(9) (2016)
16. Garlick, S., Bradley, A.: Real-time optimal trajectory planning for autonomous vehicles and lap time simulation using machine learning, 18 Mar. 2021
17. Konar, A.: Artificial Intelligence and Soft Computing: Behavioral and Cognitive Modeling of the Human Brain. CRC Press, Boca Raton (1999)
18. Filho, J.L.R., Treleaven, P.C., Alippi, C.: Genetic algorithm programming environments. IEEE Computer Society Press, pp. 28–43 (1994)


19. Storn, R., Price, K.: Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 11, 341–359 (1997)
20. Mór, F.N.: An evolutionary approach for the task mapping problem. Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre (2016)
21. Nelder, J.A., Mead, R.: A simplex method for function minimization. Comput. J. 7, 308–313 (1965)
22. Spendley, W., Hext, G.R., Himsworth, F.R.: Sequential application of simplex designs in optimisation and evolutionary operation. Technometrics 4, 441 (1962)
23. Gao, F., Han, L.: Implementing the Nelder-Mead simplex algorithm with adaptive parameters. Comput. Optim. Appl. 51, 259–277 (2012)
24. Ravindran, A., Ragsdell, K.M., Reklaitis, G.V.: Engineering Optimization: Methods and Applications, 2nd edn. Wiley, Hoboken, New Jersey (2006)
25. Giuliani, C.M.: Estratégias de Otimização não Diferenciável Aplicadas à Maximização da Produção de Campos de Petróleo. Universidade Federal de Santa Catarina, Florianópolis (2013)
26. Conn, A.R., Gould, N.I.M., Toint, P.L.: Trust-Region Methods. MPS/SIAM, Philadelphia (2000)
27. Yuan, Y.-X.: Recent advances in trust region algorithms. Math. Program. 151, 249–281 (2015)
28. Aflak, O.: Bézier interpolation—create smooth shapes using Bézier curves, 9 May 2020
29. Farin, G.E.: Curves and Surfaces for Computer-Aided Geometric Design—A Practical Guide, 3rd edn. Academic Press Inc., Arizona (2014)

Planar Motion Control of a Quadruped Robot

Gabriel Duarte Gonçalves Pedro 1(B), Thiago Boaventura Cunha 1, Pedro Américo Almeida Magalhães Júnior 2, and Gustavo Medeiros Freitas 3

1 University of São Paulo, São Carlos, SP, Brazil
[email protected]
2 Pontifical Catholic University of Minas Gerais, Belo Horizonte, MG, Brazil
3 Federal University of Minas Gerais, Belo Horizonte, MG, Brazil

Abstract. In robotics, the ability of quadruped robots to perform tasks in industrial, mining, and disaster environments has already been demonstrated. However, to move a robot in different environments, it is necessary to properly plan the steps and control its legs. This article considers a quadruped robot with legs modeled using the Euler–Lagrange equations. The robot's control strategy uses impedance control for each leg with reference trajectories generated by sixth-degree Bézier curves, which are constructed based on leg velocities through a planar kinematic model. Then, a body control scheme commands the robot to follow a predefined trajectory. The proposed control strategy is implemented in Matlab and simulated in CoppeliaSim using the Vortex physics engine. The simulations successfully illustrate the control strategy that commands the robot to follow a reference trajectory with static walking and trotting gait patterns.

Keywords: Quadruped robots · Bézier curves · Impedance control

1 Introduction

Recent advances in the development of legged locomotion systems have resulted in the emergence of several commercial robotic platforms, designed and sold for use in a variety of environments. Companies such as Boston Dynamics, ANYbotics, and Unitree currently offer commercial versions of quadruped robots for industry, mines, oil rigs, and other service applications [6,13,23]. Legged locomotion is a capability that has been researched for mobile robots to perform tasks in unsafe and unstructured terrains [4,6,13,23]. Research in this area has focused on developing locomotion, control, and applications of these robotic platforms.

This work was partially funded by the Brazilian National Research Council (CNPq), the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES), the Fundação de Amparo à Pesquisa do Estado de Minas Gerais - Brazil (FAPEMIG), and the São Paulo Research Foundation (FAPESP), grant 2018/15472-9. Thanks also to CM Labs-Vortex Studio for their support in the research.


One example of an application is the autonomous mapping of large physical structures using quadruped robots [23]. Another possible application is the autonomous inspection of the deterioration of concrete floors in sewer canals [13]. Furthermore, recent studies have explored the use of quadruped robots in future planetary exploration missions [12].
To control the motion and balance of legged robots, nonlinear control techniques are needed due to the nonlinearity of the mechanical systems. The nonlinear feedback linearization technique is applicable in legged robots, both for the control of the legs and for the control of the complete robot body. Computed torque is an approach used in robots with hydraulic and electric actuation to calculate joint torques. Recent studies in this field include [3,4,14,17]. Impedance control is also used in robots with hydraulic or electric actuators [18,21], and aims to change the stiffness and damping of the legs. This change allows the absorption of external forces due to terrain irregularities [18] and of impact forces resulting from dynamic steps [3], avoiding damage to the legs and actuators of the robot.
As noted in [8], the ability of animals to change their gait pattern is crucial to moving efficiently at different speeds and dealing with disturbances caused by irregularities in the ground. In order to achieve this mobility, a dedicated controller for each robot leg can be used to achieve the correct movement of each step based on the trajectory defined by a gait pattern. These trajectories must be smooth and adjustable on the basis of new motion and gait parameters of the robot. A viable solution is to use Bézier curves, as shown in [10,11].
This paper builds on top of our previous works [19] and [20], which address leg control and the gait pattern generator in robots. The current paper considers quadruped-legged robots modeled using the Euler–Lagrange equations and an impedance control with feedback linearization for each leg that receives reference trajectories of the gait patterns by means of sixth-degree Bézier curves, synchronized according to the static walking and trotting gait patterns. Steps are generated according to the desired leg speeds and a body control based on a planar kinematic model. Leg control, gait patterns, and planar body control were implemented in Matlab and simulated in CoppeliaSim using the Vortex realistic physics engine. The result was a planar control of the robot body capable of following a reference curve such as a Bernoulli lemniscate, with impedance control at the feet following the chosen gait pattern.

2 Dynamic Model and Control of the Legs

The dynamics of a 3-Degree of Freedom (DoF) quadruped robot's leg can be modeled using Euler–Lagrange equations as

M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) + J^T(q)\,F_e = \tau,   (1)

where M(q) ∈ R^{3×3} is the inertia matrix, C(q,\dot{q}) ∈ R^{3×3} is the Coriolis force matrix, g(q) ∈ R^{3} is the vector of gravitational forces, τ ∈ R^{3} is the vector of joint torques, and q, \dot{q}, \ddot{q} ∈ R^{3} are the position, velocity, and acceleration of the three joints. The torques resulting from external forces are calculated with the transpose of the foot Jacobian, J^T(q) ∈ R^{n×3}, and the vector of external forces F_e = [F_x, F_y, F_z, n_x, n_y, n_z]^T, using the principle of virtual work.

We propose to use the following inverse dynamics control law, where a_q ∈ R^{3} and a_f ∈ R^{3} are auxiliary control inputs of acceleration and force, respectively:

\tau = M(q)\,a_q + C(q,\dot{q})\,\dot{q} + g(q) + J^T(q)\,a_f.   (2)

To map (2) to the foot task space, we use the definition of the Jacobian, that is,

\dot{q} = J(q)^{-1}\,\dot{x},   (3)
\ddot{q} = J(q)^{-1}\,[\ddot{x} - \dot{J}(q,\dot{q})\,\dot{q}],   (4)

where \dot{x} and \ddot{x} ∈ R^{3} are the velocity and linear acceleration vectors in the foot task space. By changing the variables \ddot{q} and \ddot{x} from (4) to a_q and a_x, substituting them into (2), and substituting the result into (1), one obtains (5):

\ddot{x} = a_x + J(q)\,M^{-1}(q)\,J^T(q)\,(F_e - a_f).   (5)

For simplification purposes, it is assumed that a_f = F_e to recover the double-integrator system in the task space [22]. With this simplification, and assuming that any additional force feedback terms are included in the control term, a_x is obtained using (6):

\ddot{x} = a_x.   (6)

The idea of impedance control is to regulate the mechanical impedance, i.e., the apparent inertia, damping, and stiffness, by force feedback [22]. For the auxiliary control a_x, the following impedance control law is proposed:

a_x = \ddot{x}_d - M_d^{-1}\,[B_d(\dot{x} - \dot{x}_d) + K_d(x - x_d) + F_e],   (7)

where the subscript d indicates desired values. Substituting (7) into (6), and defining the error dynamics as e = (x - x_d), we finally obtain

M_d\,\ddot{e} + B_d\,\dot{e} + K_d\,e = -F_e.   (8)
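The following NumPy sketch illustrates how the cascade (2)–(7) fits together for one leg. It is only a minimal sketch, not the authors' Matlab implementation; the function and argument names are ours, and the dynamic terms M, C, g, J and its time derivative are assumed to be supplied by the leg model.

```python
import numpy as np

def impedance_accel(x, dx, xd, dxd, ddxd, Fe, Md, Bd, Kd):
    """Auxiliary task-space acceleration a_x from the impedance law (7)."""
    e, de = x - xd, dx - dxd
    return ddxd - np.linalg.solve(Md, Bd @ de + Kd @ e + Fe)

def leg_torque(M, C, g, J, dJ, dq, ax, Fe):
    """Joint torques from the inverse-dynamics law (2), with a_q obtained
    from (3)-(4) and a_f = F_e as assumed in the text."""
    aq = np.linalg.solve(J, ax - dJ @ dq)   # a_q = J^{-1}(a_x - J_dot * q_dot)
    return M @ aq + C @ dq + g + J.T @ Fe   # eq. (2)
```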

3 Bézier Curves for Steps

The basic mathematics for the development of Bézier curves was established in 1912, but these polynomials were not widely disseminated until the 1960s, by the French engineer Pierre Bézier [7]. In robotics, Bézier curves can be used to plan trajectories for manipulators and mobile robots. For the robot to walk, the feet need to move by taking steps. A step can be divided into a support phase and a swing phase, and this cycle repeats in different gait patterns. We needed to define a smooth curve for the foot trajectories used by the impedance control; we used Bézier curves as in [10,11]. We use a single sixth-degree Bézier curve, previously proposed in [20], traversing a closed trajectory for both phases of the step. This closed curve is defined by seven control points, with the aim of simplifying the planning of the swing and support trajectories of the foot in a reduced solution space. We chose a sixth-degree curve because it can generate a suitable trajectory, with the support time greater than the swing time, within the parameterization range of the robot's foot motion. The equation defining a Bézier curve B(u) ∈ R^3 can be rewritten in matrix form using a polynomial expansion. The weights of the curve and the parameterization variable are represented by W_i ∈ R^3 and u, with u ∈ R, 0 ≤ u ≤ 1. The derivatives of a Bézier curve can be obtained recursively with the characteristic equation of its derivatives. The curve is fitted so that it passes through the fit points P_i at specified values of the parameterization variable u. The weights of the Bézier curve, W_i, are calculated using Eq. (10). For the sixth-degree Bézier curve used in this paper, W ∈ R^{7×3} is the weight matrix, M ∈ R^{7×7} is the matrix of binomial coefficients of the curve, P ∈ R^{7×3} contains the curve fit points, and T ∈ R^{7×7} is a matrix with temporal information. Each row of the matrices W and P corresponds to a weight and a fit point, respectively, while each row of the matrix T is a vector u_i = [1 u u^2 ... u^6] with the parameter value determined for each fit point P_i within the range 0–1:

B(u) = \begin{bmatrix} 1 \\ u \\ u^2 \\ u^3 \\ u^4 \\ u^5 \\ u^6 \end{bmatrix}^T
\begin{bmatrix}
 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
 -6 & 6 & 0 & 0 & 0 & 0 & 0 \\
 15 & -30 & 15 & 0 & 0 & 0 & 0 \\
 -20 & 60 & -60 & 20 & 0 & 0 & 0 \\
 15 & -60 & 90 & -60 & 15 & 0 & 0 \\
 -6 & 30 & -60 & 60 & -30 & 6 & 0 \\
 1 & -6 & 15 & -20 & 15 & -6 & 1
\end{bmatrix}
\begin{bmatrix} W_0 \\ W_1 \\ W_2 \\ W_3 \\ W_4 \\ W_5 \\ W_6 \end{bmatrix},   (9)

W = M^{-1} (T^T T)^{-1} T^T P.   (10)
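As an illustration of (9)–(10), the sketch below builds the degree-6 coefficient matrix, fits the weights W to a set of fit points by least squares, and evaluates B(u). It is a hedged sketch with names of our choosing, not the authors' code; the fit points and parameter values are assumed to be provided by the step planner.

```python
import numpy as np
from math import comb

def bezier_basis_matrix(n=6):
    """Power-basis coefficient matrix of a degree-n Bezier curve
    (the 7x7 matrix in (9) for n = 6)."""
    M = np.zeros((n + 1, n + 1))
    for i in range(n + 1):          # basis (weight) index
        for j in range(i, n + 1):   # power of u
            M[j, i] = comb(n, i) * comb(n - i, j - i) * (-1) ** (j - i)
    return M

def fit_weights(P, u_fit, n=6):
    """Weights W from fit points P (rows) at parameter values u_fit, eq. (10)."""
    T = np.vander(np.asarray(u_fit, dtype=float), n + 1, increasing=True)
    return np.linalg.inv(bezier_basis_matrix(n)) @ np.linalg.pinv(T) @ P

def bezier_point(W, u, n=6):
    """Evaluate B(u) in the matrix form of eq. (9)."""
    uvec = np.array([u ** k for k in range(n + 1)])
    return uvec @ bezier_basis_matrix(n) @ W
```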

3.1 Step and Gait Pattern Using Bézier Curves

We defined the starting and ending points (P0 and P6 ) of the B´ezier curve that must coincide to form a closed trajectory that represents a step cycle. The beginning and end of the motion occur halfway through the swing phase of the leg. The distance variables d ∈ R for the step length, height h ∈ R of the step, and the angular variable θ ∈ S were established to determine the direction of the step in space. With (11), by defining the P3 point of the step in the foot task space, the other six curve fit points can be found as:


d_x = 0.7\,d\,\cos(\theta), \qquad d_y = 0.7\,d\,\sin(\theta),
P_0 = [P_{3x},\; P_{3y},\; P_{3z} + h],
P_1 = [P_{3x} + d_x\,(4/5),\; P_{3y} + d_y\,(4/5),\; P_{3z} + h\,(3/5)],
P_2 = [P_{3x} + d_x\,(5/5),\; P_{3y} + d_y\,(5/5),\; P_{3z} + h\,(1/5)],
P_3 = [P_{3x},\; P_{3y},\; P_{3z}],
P_4 = [P_{3x} - d_x\,(5/5),\; P_{3y} - d_y\,(5/5),\; P_{3z} + h\,(1/5)],
P_5 = [P_{3x} - d_x\,(4/5),\; P_{3y} - d_y\,(4/5),\; P_{3z} + h\,(3/5)],
P_6 = P_0.   (11)
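A direct sketch of (11) follows; it only arranges the seven fit points around the mid-stance point P3 and makes no assumption beyond the fractions given in the equation. The function name and argument order are ours.

```python
import numpy as np

def step_fit_points(p3, d, h, theta):
    """Seven Bezier fit points of one step cycle, following eq. (11).
    p3 is the mid-stance foot position [x, y, z], d the step length,
    h the step height, theta the step direction."""
    dx, dy = 0.7 * d * np.cos(theta), 0.7 * d * np.sin(theta)
    x, y, z = p3
    P0 = [x, y, z + h]
    P1 = [x + 0.8 * dx, y + 0.8 * dy, z + 0.6 * h]
    P2 = [x + dx,       y + dy,       z + 0.2 * h]
    P3 = [x, y, z]
    P4 = [x - dx,       y - dy,       z + 0.2 * h]
    P5 = [x - 0.8 * dx, y - 0.8 * dy, z + 0.6 * h]
    P6 = P0
    return np.array([P0, P1, P2, P3, P4, P5, P6])
```

These points can then be passed to the weight-fitting step of (10) to obtain the closed curve of the step cycle.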

The Bézier curve generates smooth trajectories for the impedance control of the robot foot, using (10) to calculate the weights W of the curve that fits the points P. The parameter values u_i of the matrix T were set to obtain a duty factor β_u of 0.6 and a deformation-free trajectory. The Bézier curve represents a one-step cycle with a step frequency of 1 Hz and is parameterized by u in the range 0–1. The step frequency can be modified by mapping the time t onto the parameterization range of u. The duty factor of the step, which is the fraction of time spent in the foot support phase, changes with the mapped time t. We parameterize the support phase between points P_2 and P_4 and the swing phase between points P_0 to P_2 and P_4 to P_6; the curve parameterized by u has a duty factor β_u of 0.6 between points P_2 and P_4, and these phases are re-parameterized with the desired duty factor β_t for the step. The parameterization of the variable u with respect to the time t, for a period T_p = 1/f_p and a desired duty factor β_t, is obtained by (12):

u(t) = \begin{cases}
\dfrac{\beta_u - 1}{\beta_t - 1}\,\dfrac{t}{T_p}, & 0 \le \dfrac{t}{T_p} < \dfrac{1-\beta_t}{2},\\[2mm]
\dfrac{\beta_u}{2\beta_t}\left(\dfrac{2t}{T_p} - 1\right) + \dfrac{1}{2}, & \dfrac{1-\beta_t}{2} \le \dfrac{t}{T_p} \le \dfrac{\beta_t + 1}{2},\\[2mm]
\dfrac{(\beta_u - 1)}{2(\beta_t - 1)}\left(\dfrac{2t}{T_p} - \beta_t - 1\right) + \dfrac{\beta_u + 1}{2}, & \dfrac{\beta_t + 1}{2} < \dfrac{t}{T_p} \le 1.
\end{cases}   (12)

The gait patterns chosen for reproduction were the static walking and trot patterns, similar to those presented in [1,16,24]. In symmetric patterns, the step length and the position of its start were chosen to ensure a support polygon or support line for the center of mass of the body [16,24]. The robot gait pattern is defined by determining the relative phases of each leg for the locomotion pattern, with the aim of synchronizing the movement of the legs to maintain the support of the center of mass [9]. The average velocity of legged locomotion, considering successful and constant strides, can be estimated by multiplying the stride frequency by its length, v_{mp} = d\,f_p [2]. To calculate the step frequency and length for a given leg speed, we developed a function using this equation. This function is summarized in the pseudocode available online.¹

¹ https://github.com/GabrielDGP/CLAWAR2023.git


This function finds the frequency and step length for the desired speed by traversing the allowed frequency range for different step lengths. It makes the stepping frequency vary linearly with the speed until the maximum stepping frequency is reached, and the maximum step length is reached only close to the maximum speed. The maximum length is chosen to keep the feet within their working range. The function is used to calculate the frequency and stride length of the leg with the highest speed; this frequency is then used to determine the step lengths of the other legs through v_{mp} = d\,f_p. In this way, the step frequency can be modified within this range as the desired speed increases, while the synchrony of the steps is maintained because all legs share the same frequency. A sketch of this mapping is given below.
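The authors' actual pseudocode is available in their repository; the sketch below is only an illustrative mapping consistent with the description above. The numeric limits (f_min, f_max, d_max) are placeholders of ours, not the values used in the paper.

```python
def step_parameters(v_des, f_min=0.5, f_max=2.0, d_max=0.12):
    """Illustrative speed -> (step frequency, step length) mapping:
    the frequency grows linearly with speed up to f_max, and the step
    length d = v / f only reaches d_max near the maximum speed."""
    v_max = f_max * d_max
    v = min(abs(v_des), v_max)
    f = f_min + (f_max - f_min) * v / v_max   # linear growth of frequency
    d = v / f                                 # from v_mp = d * f_p
    return f, min(d, d_max)

# The other legs then use the frequency of the fastest leg:
# d_i = v_i / f_fastest, keeping all steps synchronized.
```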

4 Planar Kinematic Model of the Body

The differential-drive kinematic model is one of the simplest models used for wheeled mobile robots. However, it is inappropriate for quadruped robots because of the non-holonomic constraint imposed by its wheels. This paper proposes a simple planar kinematic model for quadruped robots, based on the differential-drive model but including the lateral motion of the legs. This model assumes linear velocities in the x_r and y_r directions in a plane, plus the angular velocity ω_r about z, and considers non-sliding feet during motion. Figure 1 illustrates a quadruped robot, the variables used, and the robot's coordinate system, positioned at the center of a straight-line segment connecting the robot's front shoulders in order to simplify the mathematical modeling and movement of the robot. The robot position is expressed using this coordinate system as a reference. The lateral velocities v_d and v_e, the front velocity v_f and rear velocity v_t, the width L and length C of the robot, as well as the orientation θ in the plane, the curvature radii R_{cx} and R_{cy}, and the instantaneous center of rotation (CIR_x, CIR_y) are represented in Fig. 1. Using this diagram, and inspired by the differential-drive robot model, the equations of direct (13) and inverse (14) differential kinematics for a quadruped robot were obtained. The direct differential kinematics relates the linear velocities (v_d, v_e, v_f and v_t) in the plane to the velocities (v_{rx}, v_{ry} and ω_r) in the robot frame, while the inverse differential kinematics relates the velocities in the robot frame to the linear velocities in the plane:


Fig. 1. Quadruped robot planar diagram

\begin{bmatrix} v_{rx} \\ v_{ry} \\ \omega_r \end{bmatrix} =
\begin{bmatrix}
 \frac{1}{2} & \frac{1}{2} & 0 & 0 \\
 0 & 0 & 1 & 0 \\
 \frac{1}{2L} & -\frac{1}{2L} & \frac{1}{2C} & -\frac{1}{2C}
\end{bmatrix}
\begin{bmatrix} v_d \\ v_e \\ v_f \\ v_t \end{bmatrix},   (13)

\begin{bmatrix} v_d \\ v_e \\ v_f \\ v_t \end{bmatrix} =
\begin{bmatrix}
 1 & 0 & \frac{L}{2} \\
 1 & 0 & -\frac{L}{2} \\
 0 & 1 & 0 \\
 0 & 1 & -C
\end{bmatrix}
\begin{bmatrix} v_{rx} \\ v_{ry} \\ \omega_r \end{bmatrix}.   (14)

To obtain the velocity of each leg of the robot, we use the modulus of the velocity at the vertices formed by the width and length of the robot at each leg. When the robot's feet perform non-slip steps, the velocity of the foot motion is transferred to the shoulders. Equation (15) calculates the velocity moduli of the front right (v_{fd}), front left (v_{fe}), rear right (v_{td}) and rear left (v_{te}) legs. The angle indicating the direction of each leg velocity with respect to the robot frame is calculated with the atan2 function of the velocities at the vertices of the width and length of the legs, using (16). The obtained velocity and direction of each leg are then used to control the motion of the robot legs:

v_{fd} = \sqrt{v_f^2 + v_d^2}, \quad v_{fe} = \sqrt{v_f^2 + v_e^2}, \quad v_{td} = \sqrt{v_t^2 + v_d^2}, \quad v_{te} = \sqrt{v_t^2 + v_e^2},   (15)

\theta_{fd} = \mathrm{atan2}(v_f, v_d), \quad \theta_{fe} = \mathrm{atan2}(v_f, v_e), \quad \theta_{td} = \mathrm{atan2}(v_t, v_d), \quad \theta_{te} = \mathrm{atan2}(v_t, v_e).   (16)
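Equations (14)–(16) together map the commanded body velocities to per-leg speed and heading references. The sketch below is a minimal illustration of that mapping, with dictionary keys and argument names of our choosing.

```python
import numpy as np

def leg_velocity_references(v_rx, v_ry, w_r, L, C):
    """Leg speed and heading references from body velocities, eqs. (14)-(16)."""
    v_d = v_rx + w_r * L / 2      # lateral velocity, right side
    v_e = v_rx - w_r * L / 2      # lateral velocity, left side
    v_f = v_ry                    # front velocity
    v_t = v_ry - w_r * C          # rear velocity
    speeds = {
        "front_right": np.hypot(v_f, v_d), "front_left": np.hypot(v_f, v_e),
        "rear_right":  np.hypot(v_t, v_d), "rear_left":  np.hypot(v_t, v_e),
    }
    headings = {
        "front_right": np.arctan2(v_f, v_d), "front_left": np.arctan2(v_f, v_e),
        "rear_right":  np.arctan2(v_t, v_d), "rear_left":  np.arctan2(v_t, v_e),
    }
    return speeds, headings
```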

4.1 Planar Body Control

The linear and angular velocity model used for the robot body can be written as a coordinate transformation from the robot body velocities to the inertial-frame velocities, as presented in (17). Due to the presence of sines and cosines, this equation is nonlinear. To linearize it, a feedback linearization method is used: with the actuation vector [v_{rx}, v_{ry}, ω_r]^T, the system is linearized using Eq. (18):

\begin{bmatrix} v_x \\ v_y \\ \omega \end{bmatrix} =
\begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} v_{rx} \\ v_{ry} \\ \omega_r \end{bmatrix},   (17)

\begin{bmatrix} v_{rx} \\ v_{ry} \\ \omega_r \end{bmatrix} =
\begin{bmatrix} \cos(\theta) & \sin(\theta) & 0 \\ -\sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} u_{ax} \\ u_{ay} \\ u_{aw} \end{bmatrix}.   (18)

The vector [u_{ax}, u_{ay}, u_{aw}]^T is the auxiliary control action for the linear and angular velocities. For this auxiliary control, a proportional controller with feedforward was used to follow a curve; for the orientation, only a proportional control was adopted, following the methodology proposed in [15]. The auxiliary control is given in (19). The vector [v_{xd}, v_{yd}, 0]^T corresponds to the feedforward component of the controller, while k_x, k_y and k_θ are the proportional gains. The error vector for the in-plane position and orientation is composed of the terms p_{xd} - p_x, p_{yd} - p_y and θ_d - θ. During motion, the robot remains aligned with the direction of the curve; to orient the robot with the direction of motion, the term θ_d is calculated with the arctan2 function of the components u_{ax} and u_{ay}, as proposed in [15], similar to a feedforward action for the orientation, as presented in Eq. (20):

\begin{bmatrix} u_{ax} \\ u_{ay} \\ u_{aw} \end{bmatrix} =
\begin{bmatrix} v_{xd} \\ v_{yd} \\ 0 \end{bmatrix} +
\begin{bmatrix} k_x & 0 & 0 \\ 0 & k_y & 0 \\ 0 & 0 & k_\theta \end{bmatrix}
\begin{bmatrix} p_{xd} - p_x \\ p_{yd} - p_y \\ \theta_d - \theta \end{bmatrix},   (19)

\theta_d = \mathrm{arctan2}(u_{ax}, u_{ay}).   (20)



Linearizing (17) with (18) and replacing the auxiliary controls from (19) in the linearized (17), we find (21). This equation represents the error dynamics, where the position error and the angular velocity tend to zero over time:

\begin{bmatrix} \dot{e}_x \\ \dot{e}_y \\ \omega \end{bmatrix} =
- \begin{bmatrix} k_x & 0 & 0 \\ 0 & k_y & 0 \\ 0 & 0 & k_\theta \end{bmatrix}
\begin{bmatrix} e_x \\ e_y \\ e_\theta \end{bmatrix}.   (21)
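The sketch below combines (18) and (19) into a single body-control step. It is an illustrative sketch, not the authors' implementation; the sign convention for the proportional term follows the stable error dynamics (21), and the function and argument names are ours.

```python
import numpy as np

def body_control(p, theta, p_d, v_d, theta_d, kx, ky, ktheta):
    """Planar body control: proportional + feedforward action (19),
    then the feedback linearization (18) to obtain body-frame velocities."""
    u_ax = v_d[0] + kx * (p_d[0] - p[0])
    u_ay = v_d[1] + ky * (p_d[1] - p[1])
    u_aw = ktheta * (theta_d - theta)
    # eq. (18): rotate the auxiliary action into the robot frame
    c, s = np.cos(theta), np.sin(theta)
    v_rx = c * u_ax + s * u_ay
    v_ry = -s * u_ax + c * u_ay
    w_r = u_aw
    return v_rx, v_ry, w_r
```

The returned body velocities are then converted into leg references through (14)–(16).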


This control methodology allows us to obtain the desired linear and angular velocities for the robot body using (18). Based on these velocities and Eqs. (14) to (16) the leg velocities are calculated, which are then sent as a reference to the step generation control.

5 Methodology Implementation and Simulation Results

The general description of the proposed control strategy is shown in Figs. 2 and 3. The body position control comprises a proportional feedback control, a feedback linearization, and an inverse differential kinematic model that calculates the leg speeds. The leg control includes a gait generator with Bézier curves, an impedance control, a coordinate transformation, and a linearization by inverse dynamics. The leg speed references are generated from the body speeds of the robot.

Fig. 2. Body control diagram.

Fig. 3. Leg control diagram.

The proposed control methodology was validated in simulation, using the CoppeliaSim software and its legacy remote API for communication with Matlab. Simulations were performed using the Vortex physics engine, which provides realistic physical parameters and simulates rigid-body dynamics, collision detection, contact determination, and dynamic reactions, making it accurate for simulations of legged robots [5]. Unitree's A1 robot has 12 joint motors with a maximum torque of 33.5 N·m. In the CoppeliaSim simulations, the data from the force sensors on the robot's feet were filtered with a moving-average filter at the virtual sensor, and a Butterworth low-pass filter was used in Matlab; these two filters reduced the high-frequency noise of the virtual force sensor. The gains M_d, B_d, and K_d were adjusted to obtain the mass-spring-damper effect on the legs while following the trajectory of the Bézier curve. The gains found were M_d = 0.5 I, B_d = 2000 I, and K_d = 100000 I. Regarding the body control, the proportional gains were adjusted to follow the reference trajectory, with k_x = k_y = 0.8 and k_θ = 1.2 giving the best performance. Although the dynamic model of the A1 was used as a reference, the control strategies proposed in this paper can be applied to other quadruped robots with a similar architecture.

5.1 Simulations and Results

The initial simulations aimed to verify whether the proposed leg control could track the Bézier curves used as references. The robot was suspended, and leg movements were tested at different velocities in both the frontal and lateral directions. Figure 4 illustrates the results by comparing the reference Bézier curve B(u) with the position actually reached by the robot feet.

Fig. 4. Frontal and lateral movements following B´ezier curves.

We verified that the proposed leg control is able to follow the reference Bézier trajectories (in orange) in both simulations, with small errors between the reference curve and the actual curve traced by the robot's feet. Using this control, the legs performed movements in both the frontal and lateral directions of the robot, with the reference Bézier curve having a duty factor of 0.6 and step length and height of 100 mm and 60 mm, respectively.

Subsequent simulations evaluated the synchronized control of the legs during locomotion with the robot resting on the ground, using the same parameters and forward speed as in the previous simulation for the walking pattern. The torque control actions sent to the robot legs in this simulation, in the presence of forces applied at the foot tip, can be seen in the joint-torque plot of Fig. 5, where the support and swing cycles of the feet can also be observed. The torque values remained below the maximum torque of 33.5 N·m of the A1 motors. The cyclic pattern of the steps is also present in the resulting force on the feet, shown in the foot-force plot of Fig. 5. It can be seen that the norm of the foot force (norm of F_e) is close to the weight of the robot: the A1 has a mass of 12.7 kg, which results in a weight of approximately 31.4 N per foot in the case of uniform support.

Fig. 5. Graphs output of control and foot forces.

In the third set of simulations, the leg and body controls were tested together. The developed function was used to calculate the step frequency and length from the desired speed, with a duty factor of 0.8. A Bernoulli lemniscate with a length of 5 m, traversed at an angular velocity of 10°/s, was used as the reference trajectory; this curve tests the robot's ability to execute both left and right turns. The body position control was able to make the robot's reference coordinate system follow the lemniscate trajectory, both in static walking and in trotting. At the beginning of the simulation, the robot performed a 180-degree turn to face the yellow point that represents the current reference position on the lemniscate. When the center of the legs reached the yellow point, the robot followed the reference trajectory, with the position error minimized over time (Fig. 6).


Fig. 6. Simulations of the control of the robot body and legs.

At the beginning of the simulation, there is an error in the initial position of the robot with respect to the reference curve. Despite this initial error, over the course of the simulation the robot follows the lemniscate without difficulty in a cyclic pattern, reducing the position error. The locomotion speed during the simulations was approximately 0.19 m/s (0.68 km/h), and the simulated time was 46 s. Videos of the simulations performed for the static walk and trot are available online.²

6 Conclusions

This paper presented the planar motion control of a quadruped robot using Bézier curves to generate two gait patterns for the robot feet. We implemented an impedance control considering the dynamic model of each leg obtained with the Euler–Lagrange method, and the static walk and trot gait patterns were implemented using sixth-degree Bézier curves. The body control was implemented using a planar kinematic model, and the entire control scheme was validated in a simulated environment using CoppeliaSim integrated with Matlab. The impedance control performed as expected, and the Bézier curve proved to be an adequate mathematical tool for generating trajectories that can be easily modified. The simulations in CoppeliaSim with the Vortex physics engine produced force and torque results close to realistic estimates. The planar body control performed as proposed, following a trajectory that used a Bernoulli lemniscate as reference. In a real application on the A1 robot, some method of estimating the forces on the feet will be necessary, since the A1 does not have force sensors on its feet as simulated, only touch sensors. Further work therefore includes developing a method for estimating the foot forces or instrumenting the robot feet with force sensors. Another direction is a gait generator that modifies its parameters to allow the robot to move at higher speeds or on irregular terrain, using cameras or laser sensors to map the ground. Finally, we propose to extend the proposed control by investigating other kinematic or dynamic models that consider motion in three-dimensional space and disturbances caused by forces on the robot body.

² https://github.com/GabrielDGP/CLAWAR2023.git


References

1. Aeini, A., Pourassad, M., Haghjoo, M.R., Taghizadeh, M.: Trotting gait planning and modeling of a quadruped robot. In: 2022 8th International Conference on Control, Instrumentation and Automation (ICCIA). IEEE (2022)
2. Alexander, R.McN.: The gaits of bipedal and quadrupedal animals. Int. J. Robot. Res. 3(2), 49–59 (1984)
3. Boaventura, T., Semini, C., Buchli, J., Frigerio, M., Focchi, M., Caldwell, D.G.: Dynamic torque control of a hydraulic quadruped robot. In: 2012 IEEE International Conference on Robotics and Automation. IEEE (2012)
4. Buchli, J., Kalakrishnan, M., Mistry, M., Pastor, P., Schaal, S.: Compliant quadruped locomotion over rough terrain. In: 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE (2009)
5. Coppelia Robotics: Dynamics. Available at: www.coppeliarobotics.com/helpFiles/en/dynamicsModule.htm. Accessed 24 January 2023
6. Fankhauser, P., Hutter, M.: ANYmal: a unique quadruped robot conquering harsh environments (2018)
7. Farin, G.: Curves and Surfaces for Computer-Aided Geometric Design: A Practical Guide. Elsevier (2014)
8. Gehring, C., Coros, S., Hutter, M., Bloesch, M., Hoepflinger, M.A., Siegwart, R.: Control of dynamic gaits for a quadrupedal robot. In: 2013 IEEE International Conference on Robotics and Automation. IEEE (2013)
9. Hildebrand, M.: Walking and running (Chap. 3). In: Functional Vertebrate Morphology, pp. 38–57. Harvard University Press (1985)
10. Hou, W., Ma, L., Wang, J., Zhao, J.: Walking decision of hydraulic quadruped robot in complex environment. In: 2020 Chinese Control and Decision Conference (CCDC). IEEE (2020)
11. Jin, B., Sun, C., Zhang, A., Ding, N., Lin, J., Deng, G., Zhu, Z., Sun, Z.: Joint torque estimation toward dynamic and compliant control for gear-driven torque sensorless quadruped robot. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE (2019)
12. Kolvenbach, H., Arm, P., Valsecchi, G., Rudin, N., Hutter, M.: Towards legged robots for planetary exploration. In: ICRA Workshop on Legged Robots (2022)
13. Kolvenbach, H., Wisth, D., Buchanan, R., Valsecchi, G., Grandia, R., Fallon, M., Hutter, M.: Towards autonomous inspection of concrete deterioration in sewers with legged robots. J. Field Robot. 37(8), 1314–1327 (2020)
14. Liu, M., Daokui, Q., Fang, X., Zou, F., Di, P., Tang, C.: Quadrupedal robots whole-body motion control based on centroidal momentum dynamics. Appl. Sci. 9(7), 1335 (2019)
15. De Luca, A., Oriolo, G.: Local incremental planning for nonholonomic mobile robots. In: Proceedings of the 1994 IEEE International Conference on Robotics and Automation. IEEE Computer Society Press (1994)
16. Ma, S., Tomiyama, T., Wada, H.: Omnidirectional static walking of a quadruped robot. IEEE Trans. Rob. 21(2), 152–161 (2005)
17. Pandala, A., Kamidi, V.R., Hamed, K.A.: Decentralized control schemes for stable quadrupedal locomotion: a decomposition approach from centralized controllers. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE (2020)
18. Park, J., Park, J.H.: Impedance control of quadruped robot and its impedance characteristic modulation for trotting on irregular terrain. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE (2012)


19. Pedro, G.D.G., Freitas, G.M.: Controle complacente de passo para um robô quadrúpede utilizando curvas de Bézier. In: Proceedings do XV Simpósio Brasileiro de Automação Inteligente. SBA Sociedade Brasileira de Automática (2021)
20. Pedro, G.D.G., Freitas, G.M.: Controle dinâmico de marcha e passo para um robô quadrúpede utilizando curvas de Bézier. In: Anais do Congresso Brasileiro de Automática (2022)
21. Semini, C., Barasuol, V., Boaventura, T., Frigerio, M., Focchi, M., Caldwell, D.G., Buchli, J.: Towards versatile legged robots through active impedance control. Int. J. Robot. Res. 34(7), 1003–1020 (2015)
22. Spong, M.W., Hutchinson, S., Vidyasagar, M., et al.: Robot Modeling and Control (2006)
23. Wang, Y., Ramezani, M., Fallon, M.: Actively mapping industrial structures with information gain-based planning on a quadruped robot. In: 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE (2020)
24. Zhang, L., Ma, S., Inoue, K.: Several insights into omnidirectional static walking of a quadruped robot on a slope. In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE (2006)

Exploring Behaviours for Social Robots

Joana Sá and João Silva Sequeira

Instituto Superior Técnico, University of Lisbon, 1049-001 Lisbon, Portugal
{joana.sa,joao.silva.sequeira}@tecnico.ulisboa.pt

Abstract. The paper details a set of experiments to assess the perception of basic behaviours for social robots. These include (i) the blinking of the eyes, (ii) the shape of the mouth, (iii) the way the whole body moves, (iv) the motion of the arms, and (v) a collection of non-verbal sounds. The goals are (i) to validate behaviours that can be useful for social robots with simple anthropomorphic features, and (ii) to explore the perception of the audiences selected to interact with the robot, namely how long the behaviours last and the perceived emotions, i.e., the extent to which small behaviours are perceived by people observing the robot. Online questionnaires containing media displaying a Monarch MBot robot undertaking several actions are used to assess people's perception. Inequality indicators (Gini Index, pq-means, and Hoyes Index) are used as analysis indicators, in addition to classical statistical tests.

Keywords: Social robot · Social behaviours · Gini index · Likert questionnaires · Hoyes index · Inequality indexes

1 Introduction

Research in the field has addressed numerous aspects, notably the acceptance of robots by people. The uncanny valley paradigm, though not universally accepted, established a link between the appearance of a robot and the emotional response of people observing it. The paper presents a collection of experiments aimed at understanding how people perceive small physical differences between behaviours thought relevant for social purposes, and how important these are for the acceptance of social robots. The robot considered is an MBot, developed within the European project FP7 Monarch to interact with children in a hospital environment¹ (an omnidirectional platform with 1-DoF arms and neck and simple LED-based facial features). The good acceptance of this robot in previous experiments and its basic anthropomorphic features make it interesting for this kind of study. Furthermore, a major part of the populations involved is not new to the robot, and hence novelty effects are minimized.

¹ In the text the robot is named Casper, after the name given by the children at the hospital.



The experiments in the paper aim at demonstrating the importance of small or short-duration behavioural features to the perception of people observing the robot in social contexts. The paper is organized as follows. Section 2 presents an overview of the state of the art. Sections 3–7 detail the Likert questionnaires2 used to assess people’s perception and respective analysis. Section 8 discusses the results obtained and points to future developments.

2 Brief Overview of Behaviours for Social Robots

Social robots can be utilitarian (or service robots) or affective social robots (socially assistive robots) [2]. They are often endowed with anthropomorphic features to facilitate integration and acceptance in human environments; for example, a robot that moves its arms while in motion is more easily recognized by humans as walking [4]. The effect of gait speed on upper-body kinematics and on the intersegmental coordination between upper and lower body in healthy individuals was studied in [15], including shoulder, elbow, thoracic, and pelvic movements for 20 healthy subjects walking at six speeds ranging from extremely slow to very fast. As speed increases, there is a significant increase in the range of motion (RoM): the extremely slow walking speed shows a peak-to-peak difference of roughly 10°, whereas the very fast walking speed shows a difference of about 40°.

The behaviour of robotic eyes has been recognized as relevant and useful in the field of social robotics [1], e.g., to relay social cues during a conversation or to convey the psychological state of the person (or robot) exhibiting the behaviour [13]. Although complex eye behaviour includes gaze, saccades, and blinking, the MBot has static LEDs for eyes and hence is only capable of mimicking blinks. It is generally acknowledged that blinking patterns vary greatly depending on several factors, including the person, the environment, and the task at hand [14]. The work in [8] reported that both the spontaneous eyeblink rate (SEBR) and the inter-eyeblink interval (IEBI) change substantially depending on the task the subject is doing.

Facial expressions are an important medium for communicating emotions between people [12]. The MBot's facial features are limited: aside from the eye colour, the only configurable feature of the robot's face is its mouth LED matrix. Mouth movements and shapes for the MBot were studied in [5], which reported that, with a mouth, the robot is perceived as more life-like and less sad. Additionally, users prefer the human-like mouth, rated as the friendliest, over the wave-like mouth. Using a virtual environment, [9] studied users' recognition of eight different emotions displayed by the MBot. In general, the literature agrees that basic emotions "should be discrete, have a fixed set of neural and bodily expressed components, and a fixed feeling or motivational component that has been selected for through longstanding interactions with ecologically valid stimuli" [17].

² Five-point scales are used in all questionnaires.

3 Questionnaires Basics

The participants in the online surveys were not required to interact directly with the robot; media content displaying the relevant features was included in the questionnaires. The questionnaires were also shared in person, with people approached and asked to answer a quick set of questions to evaluate a given behaviour performed by a social robot. Access was granted through QR codes, and the questionnaires were also distributed through social media, direct messaging, and group chats. The university environment resulted in most of the answers coming from people in the 18–24 years old age group. Evidence that participants' engagement in a questionnaire drops significantly as its median completion time increases has been reported in [3]. Therefore, the duration of all questionnaires was kept under five minutes. Demographic and classification questions were left to the end of the questionnaire to reduce the effect of attention fading [3].

3.1 An Argument Against Standard HRI Questionnaires

The trait-specific questions do not follow any HRI questionnaire standards such as the Godspeed or the RoSAS. These were not used since they tend to be extensive and poorly adapted to audiences that quickly shift attention if the duration exceeds a short period. The semantic complexity of the Godspeed and the RoSAS is also a relevant aspect. Some of the questionnaires presented had children audiences. Wiggers and van Lieshout [18] justify children’s (namely girls from the ages of five to eight years old) inability to recognize shame and contempt with the fact that they are unable to conceive the complexity of these emotions and understand the verbal labels given to them. The Godspeed and RoSAS use terms such as “quiescent”, “compassionate”, and “organic” which may be difficult for children to understand. Finally, as referred in [19], it is still common within the field of HRI to use custom surveys to assess users’ subjective perceptions of a robot.

4 Survey 1: Arm Movements for a Social Robot

This questionnaire was designed to assess users' perception of the walking arm-movement trait. Two variations of arm movement are proposed: a slower one, with a smaller RoM, referred to as "small arm movement", and a faster one, with a larger RoM, referred to as "large arm movement". The questionnaire had 202 participants, with a balanced gender representation (51/49%) and with ages 18–34 representing 64% of the participants. For the trait-specific questions, the users were presented with three videos (one without arm movements and two with the different arm movements) showing the robot navigating between two goals in an indoor lab. The questionnaire contained a single question: "How naturalistic was the robot's arm movement?". This question was shown a total of 3 times, once for


each movement type. This feature was created to anthropomorphize the MBot's "walking" style by modulating a human-like arm movement that increases in amplitude and frequency in proportion to the walking speed [15]. The hypotheses under test are the following. (H1) Users will consider the robot performing either the "small arm movement" or the "large arm movement" to be more naturalistic than when there is no arm movement. (H2) Since the "small arm movement" was considered barely perceptible, the "large arm movement" was designed to be more noticeable; users will consider the "large arm movement" more natural than the "small arm movement". Table 1 shows the corresponding descriptive statistics. There is a clear user preference for robotic arm movement, as opposed to no arm movement, and a slight preference for the large over the small arm movement.

Table 1. Descriptive statistics for "no arm movement" (No), "small arm movement" (Small) and "large arm movement" (Large).

Statistic          | No    | Small | Large
Mean               | 2.324 | 3.58  | 3.88
Mean rank          | 1.35  | 2.18  | 2.46
Mode               | 2     | 4     | 4
Median             | 2     | 4     | 4
Standard deviation | 1.188 | 0.934 | 1.022

Although there is some debate over the use of parametric methods to analyze Likert-scale data (see [16]), interpreting these data as ordinal, and hence using nonparametric methods, is acceptable [7]. Therefore, nonparametric statistical methods were also used to interpret the Likert-scale data. A Friedman test was used to check whether there is a relevant difference in participants' opinions about the three arm movements (the samples are dependent since they were drawn from the same group of people). The null hypothesis is that there is no statistically relevant difference between the three samples, i.e., no difference in perceived robotic arm-movement naturalness between the three arm-movement types. A common level of risk, α = 0.05, was selected [7]. The Friedman test was significant (Fr(2) = 158.029, p < 0.001), meaning that the null hypothesis can be rejected. The Wilcoxon signed-ranks and Sign tests were then used for the three pairwise comparisons (no movement versus small movement, small movement versus large movement, and no movement versus large movement). It is also relevant to quantify the magnitude of the difference between groups, which can be achieved by calculating the effect size, ES (see [7]). ES ranges from 0 to 1, grows with the difference between groups, and can be classified as small, medium, and large for values of approximately 0.1, 0.3, and 0.5, respectively [6,7]. Table 2 shows the results. The null hypothesis was that there was no statistically relevant difference between the two compared samples, i.e., no difference in perceived robotic arm-movement naturalness between the two arm-movement types. The risk level is set to α = 0.05/3 ≈ 0.0167 by applying the Bonferroni procedure. All three comparisons (for both tests) are statistically significant, with p-values below the established α, which means that the null hypotheses can be rejected.

Table 2. Wilcoxon signed ranks and Sign tests. The z-scores were computed based on negative ranks for both tests, thus negative values indicate that the first rank in the calculation is higher than the second one.

        | Wilcoxon signed ranks              | Sign test
        | Small-No | Large-Small | Large-No  | Small-No | Large-Small | Large-No
z-score | −9.497   | −3.337      | −9.914    | −8.869   | −3.705      | −10.628
p-value |          |             |           |          |             |
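The analysis pipeline described above can be reproduced with standard SciPy routines. The sketch below is an illustrative sketch only, not the authors' analysis script; the function name, the pairing of conditions, and the effect-size formula r = |z|/√N (one common choice for Wilcoxon-type tests) are our assumptions.

```python
import numpy as np
from scipy import stats

def compare_conditions(no_mv, small_mv, large_mv, alpha=0.05):
    """Omnibus Friedman test, then pairwise Wilcoxon signed-rank and sign
    tests with Bonferroni correction, plus an approximate effect size."""
    fr_stat, fr_p = stats.friedmanchisquare(no_mv, small_mv, large_mv)
    pairs = {"Small-No": (small_mv, no_mv),
             "Large-Small": (large_mv, small_mv),
             "Large-No": (large_mv, no_mv)}
    alpha_bonf = alpha / len(pairs)
    results = {"friedman": (fr_stat, fr_p)}
    for name, (a, b) in pairs.items():
        a, b = np.asarray(a), np.asarray(b)
        w = stats.wilcoxon(a, b)                       # Wilcoxon signed ranks
        diff = a - b
        nz = diff[diff != 0]
        sign = stats.binomtest(int((nz > 0).sum()), int(nz.size))  # sign test
        z = stats.norm.ppf(w.pvalue / 2)               # approx. z from p-value
        r = abs(z) / np.sqrt(len(a))                   # effect size r = |z|/sqrt(N)
        results[name] = {"wilcoxon_p": w.pvalue, "sign_p": sign.pvalue,
                         "effect_size": r, "significant": w.pvalue < alpha_bonf}
    return results
```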

w_{ij} > 0 if (j, i) ∈ E, w_{ji} = w_{ij}, and w_{ij} = 0 otherwise. The interaction topology of the multi-agent system is represented by modeling each agent as a node and the communication between agents as edges. Two nodes, v_i ∈ V and v_j ∈ V, are neighbors if each one is located at an end of the edge e_{ij} ∈ E. The Laplacian matrix L of the graph G is obtained from l_{ii} = \sum_{j=1, j\neq i}^{N} w_{ij} and l_{ij} = -w_{ij} for i ≠ j, ∀ i, j ∈ {1, 2, ..., N} [1]. A graph G is connected if there is a path from any node to any other node. A root is a node with no parent. A tree is a directed graph with a root in which every other node has exactly one parent. A spanning tree of a graph G is a tree containing all nodes of G. A multi-agent system reaches consensus if the graph that models the information flow between the agents has a spanning tree [1,2].

2.2 Model

The dynamic model of a UAV comprises representations of its translational and rotational movements. Such a model is a nonlinear mathematical representation that includes coupling between both types of motion. As the translational dynamics present a larger time constant than the rotational dynamics, decoupling the translational and rotational dynamics usually simplifies the UAV model. The control system then takes a nested structure, where the inner loop corresponds to the attitude control and the outer loop controls the position. The formation control problem is treated in the translational loop, i.e., in the outer loop [11,12]. In this work, we consider a decoupled model and propose the control scheme presented in Fig. 1.

Fig. 1. Control Scheme of the UAV i.

For the dynamic attitude layer, we consider a modification of the simplified UAV model presented in [16]. The attitude model is given by

J^B \ddot{\eta}(t) = \tau^B(t),   (1)

where J^B ∈ R^{3×3} is the inertia tensor, η = [φ, θ, ψ]^T is the UAV attitude, and τ^B ∈ R^{3×1} is the torque acting on the UAV body. The position dynamics is simplified to a double-integrator model, given by

\dot{p}_i(t) = v_i(t), \qquad \dot{v}_i(t) = u_i(t),   (2)

for i = 1, ..., N, where p_i(t) ∈ R^{2×1} is the position vector, v_i(t) ∈ R^{2×1} is the velocity vector, and u_i(t) ∈ R^{2×1} is the control input vector.
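The decoupled model of (1)–(2) is simple enough to sketch directly. The snippet below is an illustrative sketch under our own naming and with placeholder PD gains; it is not the authors' controller, whose gains are reported later in the paper.

```python
import numpy as np

def attitude_pd_torque(eta, deta, eta_ref, J_B, kp=8.0, kd=4.0):
    """PD torque for the attitude dynamics J_B * eta_ddot = tau_B, eq. (1).
    Gains kp, kd are placeholders, not the values used by the authors."""
    return J_B @ (kp * (eta_ref - eta) - kd * deta)

def translational_step(p, v, u, dt):
    """One Euler step of the double-integrator position model, eq. (2)."""
    return p + dt * v, v + dt * u
```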

Problem Formulation

The objective of the work is to move the swarm of UAVs in a way that ensures formation in the two-dimensional space, which is defined by a virtual structure.

Formation Tracking Control

255

Thus, the formation control objective is to follow the virtual structure trajectory while the UAVs maintain a fixed position relative to the virtual structure. The virtual structure position and velocity vectors are described by pv s (t) ∈ ¨ v s = O2×1 . The R2×1 and v v s ∈ R2×1 , respectively, with p˙ v s = v v s and p time-invariant offsets between the UAV i and the virtual structure position are represented by hp i ∈ R2×1 , for i = 1, . . . , N . Being the vectors ξ i (t) =  T T T   T pi (t) v Ti (t) ∈ R4×1 , ξ v s (t) = pTv s (t) v Tv s (t) ∈ R4×1 , hi = hTp i O1×2 ∈ R4×1 , the multi-UAV system is said to achieve the formation tracking if, for any bounded initial conditions, the following condition is satisfied lim (ξ i (t) − hi − ξ v s (t)) = 0,

t→∞

(3)

for i = 1, . . . , N . Thus, the formation tracking control is designed to achieve the condition in (3).

3

Formation Tracking Control Design

This work focuses on developing a control system for the outer layer control scheme in Fig. 1. We designed a PD control for the attitude subsystem so that the closed-loop dynamics in (1) is stable. In this section, we first obtain an error model of the system and formulate the formation control law. For the sake of simplicity, we omit the term (t) of the time-dependent variables. Denoting the error vector as ξ˜i = ξ i − hi − ξ v s ∈ R4×1 , the state-space representation is written as follows ˜ξ˙ = Aξ˜ + Bui , i i

(4)

with A=

  O2×2 I2×2 O2×2 ,B= , O2×2 O2×2 I2×2

with Om×m and Im×m being a zero and an identity matrices with dimension Rm×m . T  the multi-agent system state vector, Being the vector ξ = ξ T1 ξ T2 . . . ξ TN the state space representation of the system is given by ˜ξ˙ = (IN ⊗ A) ξ˜ + (IN ⊗ B) u,

(5)

where ⊗ indicates the Kronecker product. The formation control law proposed for the system in (5) is given by ui = θik K r ξ˜i +

N

j=1

wij K c ξ˜i − ξ˜j ,

(6)

256

T. Costa et al.

for i = 1, . . . , N and k = 1, 2, and where K r , K c ∈ R2×4 are gains defined by the designer, and θik is a parametric uncertainty, with θi1 = 1 if the UAV i receives the information of the virtual structure, and θi2 = 0, otherwise. In (6), the first term on the right-hand side corresponds to a tracking control that aims to track the virtual structure trajectory. The second term on the righthand is an additional control input that considers the communication among the neighboring UAVs, which is added to enhance the system response [2]. Also, if an agent loses communication with the base, i.e., losing the trajectory information, it is possible to maintain the formation keeping the consensus with the other agents. For a multi-UAV system composed of N agents, the control law is given by ˜ u = (Θ N (θ) ⊗ K r + L ⊗ K c ) ξ,

(7)

where L is the Laplacian matrix, and with Θ N (θ) = diag (θ1k , θ2k , . . . , θN k ). To consider a more realistic scenario, we consider the UAVs can lose communication with the virtual structure. To model this matrix in a polytopic type, we represent it by

Θ N (θ) =

Γ =

N

2

θik Θ N i ,

i=1 k=1

θik ∈ R2N :

2N

(8) 

θik = 1, θik ≥ 0 ,

(9)

i=1

where Θ N i are the polytopic vertices. Considering the control law in (7) and the error model in (5), the closed-loop system is given by ˜ ˜ξ˙ = (IN ⊗ A + Θ N (θ) ⊗ BK r + L ⊗ BK c ) ξ,

(10)

To verify the stability of the closed-loop (10), consider the Theorem 1. Theorem 1. The multi-UAV system achieves the desired formation if the graph G has a spanning tree, the root receives the information of the virtual structure, and if there exists J = J T > 0, such that the LMIs IN ⊗ AJ + J AT +Θ N k ⊗BY r +Θ TN k ⊗Y Tr B T +L⊗BY c +LT ⊗Y Tc B T < 0, (11) for k = 1, . . . , 2N , are feasible for bounded initial conditions. Then, the gains are given by K r = Y r J −1 , Kc = Y c J

−1

(12) (13)

Formation Tracking Control

257

Proof. The following proof will be carried out for k = 1, . . . , N . To verify the system stability, we propose the quadratic Lyapunov function T V = ξ˜ (IN ⊗ P ) ξ˜ with P −1 = P T > 0 ∈ R22×2n , whose derivative is given by T ˙ ˙ V˙ = ˜ξ T (IN ⊗ P ) ξ˜ + ξ˜ (IN ⊗ P ) ˜ξ.

(14)

To verify the closed-loop system, the derivative in (14) must be negative definite. Then, replacing the closed-loop system (10), V˙ < 0 is satisfied if

IN ⊗ AT P + Θ TN k ⊗ K Tr B T P + LT ⊗ K Tc B T P + (IN ⊗ P A) +

(Θ N k ⊗ P BK r ) + (L ⊗ P BK c ) < 0

(15)

As there is multiplication of unknown variables, we manipulate (15) pre and   post multiplying it by IN ⊗ P −1 and making the following variables substitution J = P −1 ,

(16)

Y r = Kr P

−1

Y c = Kc P

−1

,

(17)

,

(18)

obtaining the following LMI IN ⊗ AJ + J AT +Θ N k ⊗BY r +Θ TN k ⊗Y Tr B T +L⊗BY c +LT ⊗Y Tc B T < 0

4

Results

In this section, we execute numerical simulations to verify the control system’s performance proposed in Sect. 3. In the simulation, four UAVs composing the system are modeled with the dynamics in (1) and (2). There is a PD controller for the attitude subsystem, whose gains are equal to 72, 98.3, 16, respectively, and saturators for the control torque input, with minimum and maximum values equal to −5N m and 5N m, respectively. The inertia tensor equals diag (0.1098, 0.1098, 0.2085). Those parameters were obtained from the previous work [16].  T The virtual structure trajectory is given by pv s = t 0.5t , and the desired distance between the UAVs and the virtual structure is given by  T  T  T  T h1 = 2 2 , h2 = 2 −2 , h3 = 2 −2 and h4 = −2 2 . The UAVs can communicate with its neighbor, and an undirected graph models the interaction graph. The edges set of the graph is given by E = {(v1 , v2 ) , (v2 , v3 ) , (v3 , v4 ) , (v2 ,v1 ) , (v3 , v2 ) , (v4 , v3 )}, and the weighted adjacency matrix is given by W = wij for wij = 1 if (vi , vj ) ∈ E, and wji = wij , for

258

T. Costa et al.

i, j = 1, . . . , 4. The gains of the control law (7) were obtained using the toolbox YALMIP and Sedumi and are given by  −1.1793 0 −5.3715 0 , (19) Kr = 0 −1.1793 0 −5.3715  −0.4727 0 −2.1545 0 Kc = . (20) 0 −0.4727 0 −2.1545 We consider that agent one always receives the information of the virtual structure, and the others can lose communication in different moments of the simulation. Initially, all the agents receive the virtual structure information, i.e., Θ N = IN . At simulation times 10, 15, 25, 30, and 40, we modified the matrix Θ N to consider faults and restoration in receiving information from the virtual structure. The matrix Θ N at each of these times are given by Θ N i = diag (1, 1, 0, 1), Θ N i = diag (1, 1, 1, 0), Θ N i = diag (1, 0, 1, 1), and Θ N i = diag (1, 0, 1, 0), respectively. In Fig. 2, we present the trajectory taken by the agents and the virtual structure. We verified the UAVs tracked the virtual structure trajectory respecting the desired offset even in the presence of faults in receiving information from the virtual structure. Agents trajectories

30

25

y

20

15

10

5

0

0

10

20

30

40

50

60

x

Fig. 2. Agents’ trajectory.

We present the position and velocity errors in Fig. 3. We verified the errors converge to zero in finite time, showing the UAVs’ trajectory tracking and formation achievement. We present the torque signals in Fig. 4. We analyze that the control laws are kept within the actuation limits. When receiving information from the virtual structure is modified, the control law changes but remains within the limits of performance and stability.

Formation Tracking Control Position Errors

Velocity Errors

0.5

2

259

0 0 -0.5 -2

0

25 t

-1

50

0

25 t

50

0

25 t

50

0.5 2 0

0 -2

-0.5 0

25 t

50

Fig. 3. Position errors. Torque Control 2 0 -2

Agent 1

0

10

20

Agent 2

Agent 3

Agent 4

30

40

50

30

40

50

30

40

50

t 2 1 0 -1 0

10

20 t

2 0 -2 -4 0

10

20 t

Fig. 4. Torque control.

Another simulation was taken using the control law proposed in (7) and considering another controller with equal trajectory tracking and consensus gains. Then, we solved the LMI (11) considering Θ N = IN and obtained  −0.3283 0 −0.6733 0 K= , (21) 0 −0.3283 0 −0.6733 with K r = K c = K. In this simulation, we considered faults at the same time as the first simulation, and the matrix Θ N at each of these times are given by Θ N i = diag (1, 0, 0, 0), Θ N i = diag (1, 0, 1, 1), Θ N i = diag (1, 1, 0, 1), Θ N i = diag (1, 0, 0, 1), and Θ N i = diag (1, 1, 1, 1), respectively. In Fig. 5, we present the trajectory of the UAVs for each of the controllers. We verified that the UAVs tracked the virtual structure trajectory in both cases. However, there were higher initial errors when the parametric uncertainty is not considered. The

260

T. Costa et al.

same analysis can be taken verifying Fig. 6, where it is presented the position and velocity errors. Agents trajectories - Comparison

30

A1 - Polytopic

A2 - Polytopic

A3 - Polytopic

A4 - Polytopic

A1

A2

A3

A4

VS

25

y

20

15

10

5

0

0

10

20

30

40

50

60

x

Fig. 5. UAVs trajectory—Comparison

4

Position Errors - Comparison

0.5

Velocity Errors - Comparison A1 - Polytopic A2 - Polytopic A3 - Polytopic A4 - Polytopic A1 A2 A3 A4

0

2

-0.5

0

-1

-2

0

25 t

50

0

25 t

50

0

25 t

50

1 2 0

0 -2

0

25 t

50

-1

Fig. 6. Position and Velocity errors of multiple simulations

The control torque was also analyzed, presented in Fig. 7. We verify that the control torque is kept within the actuation limits. However, for the case without considering the parametric uncertainty, the control signals saturate whenever the change of the matrix Θ N . In summary, the results demonstrate that the formation control system proposed in this work could guarantee the system stability in case there is the tracking of the virtual structure and consensus with the UAV’s neighbors and when there is only the consensus.

Formation Tracking Control

261

Torque Control - Comparison 2 0 -2 -4

A1 - Polytopic

0

10

A2 - Polytopic

A3 - Polytopic

20

A4 - Polytopic

A1

A2

A3

A4

30

40

50

30

40

50

30

40

50

t 5 0 -5

0

10

20 t

0 -2 -4 0

10

20 t

Fig. 7. Torque control—Comparison

5

Conclusions and Future Works

We proposed a formation control system based on the virtual structure method in this work. The UAVs model is simplified and decoupled into translational and rotational parts. A previously established control system controls the attitude subsystem. We proposed a formation control law for the translational control system to track the virtual structure trajectory and achieve a consensus with neighboring agents. We modeled the trajectory tracking part with a parameter considered a parametric uncertainty. The stability analysis was taken through LMIs, considering the parametric uncertainty as a polytope. The results showed that the proposed control system could guarantee the system stability even in case of losing the information of the virtual structure, leaving only the consensus. We intend to add the H∞ performance criteria for future work to increase system robustness.

References 1. Ahn, H.: Formation Control: Approaches for Distributed Agents. vol. 205. Springer (2019) 2. Oh, K-K., Park, M-C., Ahn, H-S.: A survey of multi-agent formation control. Automatica. 53, 424–440 (2015). Elsevier 3. Dong, X., Zhou, Y., Ren, Z., Zhong, Y.: Time-varying formation tracking for second-order multi-agent systems subjected to switching topologies with application to quadrotor formation flying. IEEE Trans. Ind. Electron. 64(6), 5014–5024 (2017). IEEE 4. He, L., Bai, P., Liang, X., Zhang, J., Wang, W.: Feedback formation control of UAV swarm with multiple implicit leaders. Aerosp. Sci. Technol. 72, 327–334 (2018). Elsevier 5. Xuan-Mung, N., Hong, S.K.: Robust adaptive formation control of quadcopters based on a leader–follower approach. Int. J. Adv. Robot. Syst. 16 (2019). SAGE Publications Sage, UK, London, England

262

T. Costa et al.

6. Ai, X., Yu, J.: Flatness-based finite-time leader–follower formation control of multiple quadrotors with external disturbances. Aerosp. Sci. Technol. 92, 20–33 (2019). Elsevier 7. Dong, X., Li, Y., Lu, C., Hu, G., Li, Q., Ren, Z.: Time-varying formation tracking for UAV swarm systems with switching directed topologies. IEEE Trans. Neural Netw. Learn. Syst. 30(12), 3674–3685 (2019). IEEE 8. Yan, L., Ma, B.: Distributed leader-follower formation tracking control of multiple quad-rotors (2020). arXiv:2003.01084 9. Liu, H., Wang, Y., Lewis, F.L., Valavanis, K.P.: Robust formation tracking control for multiple quadrotors subject to switching topologies. IEEE Trans. Control Netw. Syst. 7(3), 1319–1329 (2020). IEEE 10. Kartal, Y., Subbarao, K., Gans, N.R., Dogan, A., Lewis, F.: Distributed backstepping based control of multiple UAV formation flight subject to time delays. IET Control Theory Appl. 14(12), 1628–1638 (2020). IET 11. Wang, J., Han, L., Dong, X., Li, Q., Ren, Z.: Distributed sliding mode control for time-varying formation tracking of multi-UAV system with a dynamic leader. Aerosp. Sci. Technol. 111 (2021). Elsevier 12. Liu, H., Tian, Y., Lewis, F.L., Wan, Y., Valavanis, K.P.: Robust formation tracking control for multiple quadrotors under aggressive maneuvers. Automatica 105, 179– 185 (2019). Elsevier 13. Liu, H., Wang, Y., Lewis, F.L., Valavanis, K.P.: Robust formation tracking control for multiple quadrotors subject to switching topologies. IEEE Trans. Control Netw. Syst. 7(3), 1319–1329 (2020). IEEE 14. Oh, H., Shirazi, A.R., Sun, C., Jin, Y.: Bio-inspired self-organising multi-robot pattern formation: a review. Robot. Autonom. Syst. 91, 83–100 (2017). Elsevier 15. Yang, G., Hu, F.: Formation control of networked UAVs. In: UAV Swarm Networks: Models, Protocols, and Systems. CRC Press, pp. 37–54 (2020) 16. Vendrichoski, J.C., Costa, T.L., El’Youssef, E.S., de Pieri, E.R.: Mathematical modeling and control of a quadrotor aerial vehicle with tiltrotors aimed for interaction tasks. In: 19th International Conference on Advanced Robotics. IEEE, pp. 161–166 (2019) 17. Wu, J., Luo, C., Luo, Y., Li, K.: Distributed UAV swarm formation and collision avoidance strategies over fixed and switching topologies. IEEE Trans. Cybernet. 1–11 (2021). IEEE

Human-Robot Autonomous System: An Interactive Architecture

Giovane Moreira, Anderson Leite, José Díaz-Amado, Cleia Libarino, and Joao Marques

Instituto Federal de Educação, Ciência e Tecnologia da Bahia, Av. Sérgio Vieira de Melo, 3150 Zabelê, Vitória da Conquista - BA, Brazil
[email protected]

Abstract. Social robotics is a growing field that aims to enhance the interaction between robots and humans in everyday settings. To achieve this goal, various new techniques for human-robot interaction (HRI) have emerged. One such technique is proxemic interaction, which governs how people and robots interact based on their distance from each other, leading to the definition of proxemic zones. In our work, we present a socially-aware navigation system built on proxemic principles. This system responds to voice commands and incorporates a Chatbot to determine the robot's path within a crowded environment. This social navigation system is integrated into GProxemic Navigation, a system that not only provides the robot's location but also identifies the proxemic zones that the robot should avoid while navigating. These proxemic zones are determined based on the characteristics of the environment. To showcase the functionality and suitability of the proposed proxemic navigation system, we have implemented it on an autonomous Pepper Robot. This implementation allows the Pepper Robot to navigate efficiently while respecting the social constraints imposed by the environment, enhancing the robot's ability to coexist harmoniously with people in shared spaces.

Keywords: Navigation system · Proxemics · ROS · Interaction

1 Introduction

Social robotics is steadily becoming a prominent presence in our daily lives [1]. Research efforts are increasingly focused on seamlessly integrating social robots into a variety of environments, including hospitals, museums, shopping malls, and educational institutions [2]. In these settings, robots take on various service-oriented tasks and engage with people. To raise the quality of human-robot interaction in such contexts, it becomes imperative to equip robots with a greater ability to comprehend human behavior and environmental variations. Tools such as GProxemic Navigation and Dialogflow play fundamental roles in achieving this objective. GProxemic Navigation facilitates

264

G. Moreira et al.

the delineation of proxemic zones tailored to the specific environment, while Dialogflow aids in natural language processing [4,6]. These advancements empower robots to analyze spoken language, respond to voice commands, and navigate while adhering to social constraints. As a result, human-robot interactions become more efficient and harmonious, enhancing the overall experience of coexistence in these shared spaces.

Interaction based on proxemics is fundamentally determined by the spatial separation between individuals and the digital devices they are engaging with [9]. In the field of robotics, proxemics has gained increasing prominence as a means to enhance Human-Robot Interaction (HRI) and to facilitate the development of social navigation systems [10]. This approach relies on the categorization of interactions into four distinct proxemic zones, each defined by specific distance thresholds: (1) Intimate Zone: this zone is characterized by close proximity, typically within a range of 0–50 cm. (2) Personal Zone: at a distance of 0.5–1 m from the individual. (3) Social Zone: interactions at a distance range of 1–4 m fall within the social zone. (4) Public Zone: interactions in this zone occur at distances exceeding 4 m.

Social navigation for robots entails taking into account people's proxemic zones in order to adhere to social norms and prevent humans from reacting negatively to the robots [11]. A robot's movement reflects its intelligence and defines its social acceptance, based on how safe, comfortable, and understandable its actions are perceived to be. Therefore, researchers are currently dedicated to enabling robots to behave naturally by considering people, generating emotions, displaying both implicit and explicit social behaviors, and taking into account social settings and their attributes [13].

Within this framework, we introduce a system for socially aware navigation that responds to voice commands indicating a new location in the environment. This social navigation system is integrated into the GProxemic Navigation system [16], which can identify the robot's location and respect people's proxemic zones in accordance with the characteristics of the environment. In order to showcase the functionality and suitability of our proposed proxemic-aware social navigation system, we have implemented it within a simulated environment utilizing an autonomous Pepper Robot. In this scenario, the Pepper Robot responds to voice commands for autonomous navigation while taking into account proxemic zones, which define social boundaries based on the surrounding environment. The integration of voice command interpretation, combined with a Chatbot, enables us to define the optimal path for the Pepper Robot to follow. The GProxemic Navigation system plays a pivotal role by providing precise Pepper Robot localization and establishing the necessary proxemic zones for navigation. When a voice command is issued, it triggers autonomous navigation, allowing the Pepper Robot to move to a new location as directed by the Chatbot's interpretation of the command. Importantly, throughout this navigation process, the system respects the social constraints determined by GProxemic, ensuring harmonious coexistence between the Pepper Robot and individuals in its proximity. The primary contribution of this work lies in the seamless integration of a Chatbot, utilizing voice commands, into GProxemic, originally
designed as a location system. This integration enables the implementation of socially-aware navigation for robots, enhancing their ability to navigate shared spaces while adhering to social conventions and ensuring human comfort.

The article is organized as follows. Section 2 presents related work. Section 3 describes the navigation system and its integration into the GProxemic Navigation system. Section 4 shows the functionality and efficiency of the integration. Section 5 presents the discussion, and Sect. 6 concludes, focusing on the main contributions and results of this work.
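To make the zone thresholds listed in the Introduction concrete, the following minimal Python sketch maps a measured person-robot distance to one of the four proxemic zones. It is an illustration of the classification only; the function name and the hard-coded thresholds are not part of the original system.

```python
def proxemic_zone(distance_m: float) -> str:
    """Classify a person-robot distance (in metres) into one of the
    four proxemic zones used throughout this work."""
    if distance_m < 0.5:
        return "intimate"   # 0-50 cm
    elif distance_m < 1.0:
        return "personal"   # 0.5-1 m
    elif distance_m < 4.0:
        return "social"     # 1-4 m
    else:
        return "public"     # more than 4 m

# Example: a person detected 2.3 m away falls in the social zone.
print(proxemic_zone(2.3))  # -> "social"
```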

2 Related Work in Socially Aware Navigation Based on Interaction

Presently, numerous studies propose the deployment of service robots across various sectors, including restaurants [17,18], hospitals [19,20], the tourism industry [21], and even archaeological explorations [22]. These robots are tasked with performing diverse functions, and a fundamental requirement is their ability to navigate through the environment without encountering collisions. To achieve this, robots are equipped with sensors that collect a variety of data from their surroundings, which is then processed for decision-making [23]. Among the techniques commonly employed for processing this data, computer vision stands out, enabling tasks such as image analysis in both static and dynamic environments.

Nevertheless, in scenarios where humans and robots coexist in shared environments, it becomes crucial to account for new capabilities in robots to ensure their behaviors align more harmoniously with social norms and expectations. Social robotics has arisen from this imperative, propelling research beyond mere people detection and extending into other dimensions of human interaction, such as emotion recognition [24] and proxemic interactions [10,25]. These endeavors are geared towards enhancing Human-Robot Interaction (HRI) and streamlining the navigation process in shared environments.

Given the growing complexity and the increasing demand for robots to provide progressively intricate services, the integration of diverse techniques and approaches emerges as a more efficient solution. In [27], the authors elucidate the significance of integrating various frameworks within the field of robotics. For instance, the amalgamation of tools like auto-localization, in conjunction with IoT (Internet of Things) and computer vision techniques, proves highly advantageous in the context of rescue robots [28] or robots aiding in agricultural operations [29]. This integration enhances the capabilities of robots in fulfilling their intended functions effectively.

In [16], a proxemic navigation system is introduced, which accepts an IP address as input and performs tasks like determining the robot's location, assessing the surrounding environment, and establishing the appropriate proxemic zones to be adhered to. However, there is an ongoing requirement to further streamline Human-Robot Interaction (HRI) processes by incorporating neurocognitive mechanisms. These mechanisms are instrumental in recognizing critical aspects of the interaction, such as voice processing for HRI, as elaborated in
[3]. The proposition presented in [30] empowers the robot to interpret requests conveyed via voice recognition, encompassing actions such as assessing the distance required to reach an object and subsequently retrieving it. Similarly, in [31], the adoption of voice commands supplants the conventional use of buttons for navigation, streamlining the vehicle's movement.

In this study, the primary contribution lies in the integration of three distinct systems to realize a socially aware navigation system within a service robot: (1) a voice detection system, coupled with a Chatbot, is employed to recognize commands for controlling the robot's movements; (2) a geolocation system is utilized to ascertain both the robot's current position and the designated target position, and it additionally assists in determining the type of environment in which the robot operates; (3) an innovative proxemic navigation algorithm is implemented, which takes into account the social constraints dictated by proxemic zones. This integrated approach enhances the robot's ability to navigate and interact within shared spaces while respecting social norms and constraints.

3 Interactive Architecture Applied to a Social Aware Navigation System

Integrating service robots into social environments demands the utilization of diverse techniques that reinforce Human-Robot Interaction (HRI) while simultaneously ensuring autonomous navigation that aligns with social norms. As expounded in [5], Rachid Alami emphasizes the critical nature of prioritizing safety concerns, encompassing factors such as force and speed limits, collision detection, path planning, and motion control. These measures are imperative to mitigate the risk of harm to both humans and the robot itself within these communal settings. In order to accomplish this, we introduce a proxemic-based navigation system, broadly consisting of three subsystems: (1) the voice recognition system (chatbot) [4]; (2) the GProxemic Navigation system [16]; and (3) the social navigation algorithm [10]. In the subsequent sections, we elaborate on each of these components.

Our voice recognition system is powered by a chatbot; to achieve this, we have employed the Erika architecture [4], which leverages the robust Dialogflow technology developed by Google LLC. The seamless integration of the Erika architecture, with the Chatbot at its core, enables users to effortlessly specify their desired destination within the environment, such as the bathroom, using natural language commands.

To manage automatic location maps with semantic annotations, we use the GProxemic Navigation system [16]. This system leverages the Mapbox API, a service capable of providing regional data in response to requests, typically requiring either an IP address or geographical coordinates. Subsequently, this data is transformed into annotations and made available within the Robot Operating System (ROS). With this information, the robot can establish and adhere to
three distinct proxemic zones: the personal zone, the social zone, and the public zone. The choice of these zones depends on the characteristics of the environment.

The social navigation algorithm, as proposed in [10], blends Social Momentum and A* to autonomously navigate within a known map, considering both people's proxemic zones and those of coexisting robots. It determines the path from the current location to the target while ensuring social awareness. For autonomous navigation, the robot needs knowledge of the environment, which it obtains through a traditional Simultaneous Localization and Mapping (SLAM) algorithm. In this version, we use Adaptive Monte Carlo Localization (AMCL), which employs particle filtering to track the robot's position during map creation. Consequently, the social navigation algorithm, with input from the GProxemic system, defines the proxemic zone the robot must adhere to in its trajectory, based on the environment's characteristics.

Algorithm 1 outlines the integration of the three systems into a socially aware navigation system. It starts by checking whether the robot knows the environment; if not, it maps it using SLAM (Line 1). Then, it sends the IP address to GProxemic for geographic coordinates (Line 2). The Mapbox API provides location details (Line 3), helping define the proxemic zones (Line 4). The chatbot designates a target location (Line 5), and the robot, familiar with the map, autonomously navigates to the specified spot. Lastly, Line 6 ensures that navigation respects the social constraints.

Algorithm 1. Chatbot, GProxemic Navigation and integration of social autonomous navigation
1 - If (unknown map) then create and interpret maps with SLAM.
2 - Send IP address to GProxemic Navigation.
3 - The GProxemic system uses the Mapbox API and returns the characteristics of the environment.
4 - Define the proxemic zones.
5 - The chatbot receives the target location.
6 - Run social autonomous navigation respecting social restrictions.
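To illustrate how the six steps of Algorithm 1 could be wired together, the sketch below shows one possible top-level flow in Python. All function names are hypothetical stand-ins for the subsystems described above, not the authors' actual API; the stubs only illustrate the data handed from one step to the next.

```python
# Hypothetical stand-ins for the subsystems (illustrative stubs only).
def map_is_known():            return False
def run_slam():                print("building map with SLAM...")
def gproxemic_lookup(ip):      return {"type": "office"}     # via the Mapbox API in the real system
def zones_for(environment):    return "social"               # zone selected from the environment type
def chatbot_target():          return "environment 2"        # parsed from a voice command
def social_navigate(goal, keep_out):
    print(f"navigating to {goal}, respecting the {keep_out} zone")

def socially_aware_navigation(ip_address: str):
    if not map_is_known():                        # Line 1: unknown map -> SLAM
        run_slam()
    environment = gproxemic_lookup(ip_address)    # Lines 2-3: GProxemic returns environment characteristics
    zone = zones_for(environment)                 # Line 4: define the proxemic zone to respect
    target = chatbot_target()                     # Line 5: chatbot receives the target location
    social_navigate(target, keep_out=zone)        # Line 6: socially constrained navigation

socially_aware_navigation("203.0.113.7")          # example IP from the documentation range
```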

4 Implementation and Results: Pepper Robot

We implemented the social navigation system on an autonomous Pepper Robot [33], as shown in Fig. 1. Testing was carried out in a simulated environment using Robot Operating System Melodic (ROS Melodic), which provides libraries and tools for robotic application development. The simulated Pepper Robot is equipped with a LiDAR sensor, which collects environment data and communicates with Gazebo and RViz (for SLAM) and with MATLAB (for the social navigation algorithm) through ROS. Gazebo facilitated the Pepper Robot in mapping the environment. We implemented the AMCL SLAM algorithm using the gmapping package in ROS and the Pepper Robot's LaserScan data to support navigation planning.

For sensor observation and management, we utilized RViz, which offers a graphical interface, although we could also employ rospy and Python for monitoring and control. In RViz, we fine-tuned parameters for capturing and managing sensor data on the generated map. The proxemic navigation system was developed in MATLAB. Communication between the GProxemic and Chatbot systems and the Pepper Robot occurred via the ROSLibJs library, which established specific topics for each system and sent messages to MATLAB (for the proxemic navigation system). This facilitated data integration and the execution of the navigation functionalities.
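As a hedged illustration of the rospy-based monitoring mentioned above (the actual bridge in this work uses RViz, ROSLibJs and MATLAB), a minimal subscriber that watches the simulated laser scans could look like the sketch below; the node and topic names are assumptions for illustration only.

```python
#!/usr/bin/env python
# Minimal rospy sketch: subscribe to the simulated laser scanner and
# report the closest detected obstacle. The topic name "/pepper/scan"
# is an assumption, not the package's documented topic.
import rospy
from sensor_msgs.msg import LaserScan


def on_scan(msg):
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    if valid:
        rospy.loginfo("Closest obstacle: %.2f m", min(valid))


if __name__ == "__main__":
    rospy.init_node("scan_monitor")
    rospy.Subscriber("/pepper/scan", LaserScan, on_scan)
    rospy.spin()
```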

Fig. 1. General architecture of the social aware navigation system

The Erika architecture is based on a client-server model, as shown in Fig. 2. This architecture is functional and interactive, and aims to facilitate communication between the user and the service robot through voice and text commands [8]. We use the virtual 3D prototype in a simulated environment where we can test the integration algorithms.

For project enhancement, we conducted a simulation of an autonomous Pepper Robot within an office setting using ROS and Gazebo. The office comprises three distinct areas: the reception area (designated as environment 1), the commercial room (environment 2), and the library (environment 3). Figure 3 illustrates the layout of the office with its divisions. Throughout the experiments, two visualization tools were utilized: RViz was employed to display the real trajectory executed by the autonomous Pepper Robot on the map, while MATLAB was used to depict the theoretical trajectory.

In these experiments, the GProxemic system conveyed a semantic annotation to the Pepper Robot, indicating its requirement to adhere to the social zone, particularly in the office environment. The user activates the Chatbot by voice command, indicating the desire to go to environment 2, with the trajectory starting from environment 3. The flow of information shown in Fig. 2 then occurs, and this information is published in a ROS topic, as seen in Fig. 4. The Pepper Robot, under the control of MATLAB, proceeds to navigate to the specified location while meticulously considering the social constraints, as depicted in Fig. 5. In this figure, the concentric circles represent the proxemic zones associated with individuals present in the office. It is worth noting that there are seven individuals in total, but only one of them is situated within the robot's navigation path.


Fig. 2. Erika architecture system

Fig. 3. Office and respective area

Unfortunately, this autonomous navigation model does not currently account for individuals in motion. For a more comprehensive understanding of the metrics describing the navigation path, please refer to [16]. Additionally, Fig. 6 provides a practical visualization of the robot’s movement during navigation within RViz, offering a tangible representation of its trajectory. These results clearly demonstrate the achievement of the intended objective. The Chatbot is capable of transmitting the final destination of the trajectory to the Pepper Robot, which can be activated through a voice command. The GProxemic system provides the geographical coordinates, enabling the Pepper Robot to identify the environment and the corresponding proxemic zone it needs to adhere to. Regarding navigation with social constraints, it’s noteworthy that the theoretical trajectory depicted by MATLAB in Fig. 5 closely resembles the practical trajectory observed in RViz in Fig. 6.


Fig. 4. Communication between Architecture Erika and ROS

Fig. 5. Pepper Robot trajectory respecting the social restrictions, theoretically in MATLAB, in an office. Starting from environment 3 to environment 2

Fig. 6. Pepper Robot trajectory respecting the social restrictions, practically in RViz, in an office. Starting from environment 3 to environment 2

The simulation has been enhanced for improved performance. Figure 6 displays the trajectory observed in RViz, where the Pepper Robot, initially facing away from its target, rotates around its axis to reach the final destination. This rotation causes a circular movement at the beginning of the path. As a result,
we observe a trajectory in this scenario that diverges from the theoretical path depicted in MATLAB in Fig. 5. This deviation aligns the simulation more closely with the proxemics it must adhere to while also resembling the path a real Pepper Robot would follow. The simulations took place in an office, serving as a representative social environment. However, whether it’s a coffee shop, shopping mall, restaurant, or any other social setting, it’s clear that the system exhibits the flexibility to effortlessly adapt to these various scenarios. This adaptability is closely linked to its integration with GProxemic, which aims to capture the essential traits of each location. Consequently, it customizes proxemic interaction and autonomous navigation to the unique context in which the robot operates. The results highlight that during autonomous navigation, the Pepper Robot executes maneuvers that require refinement, particularly related to the mechanics of the front wheels affecting rotation when angular velocity is applied. Our proposed integration sets the stage for future algorithm enhancement, enabling it to navigate complex scenarios, diverse environments, and situations that may necessitate the incorporation of new sensors and algorithms. The chosen approach for social autonomous navigation is centered around regulating the distance between the Pepper Robot and other individuals based on the specific environment. This approach is fundamentally rooted in respecting the personal space and privacy of each individual. It’s important to note that the current work does not account for moving objects and individuals, which is a facet that will be addressed in future endeavors. Nonetheless, the examination of social interaction with autonomous robots remains a focal point of interest within this research and served as the cornerstone for the integration of the systems.

5 Discussions

In this work, the choice of good components for the robot is essential. It is necessary to have a sensor that can capture data and enable the robot to distinguish different objects. Therefore, we use a LiDAR sensor, which, besides being fast and accurate, can also operate in low-light conditions. As we are using closed environments in our simulation, LiDAR is suitable for navigating in darker parts of the environment, such as a market storage room.

Even though LiDAR is a highly relevant sensor in the market, there are already technologies developed specifically for indoor navigation. An example is the Marvelmind Starter Set HW v4.9-NIA (915 MHz), consisting of four fixed sensors, one mobile sensor, and a router. The fixed sensors capture information and communicate with the wireless router in the ISM (Industrial, Scientific and Medical) band. The mobile sensor captures information with calculated coordinates (±2 cm) and sends it to the router. The router is the central controller of the system and calculates the position of the mobile sensor 16 times per second based on the sensor data. They communicate via USB (Universal Serial Bus) with the panel and with the wireless sensors in the ISM band. The
great advantage of this kit is the amount of information obtained even at short distances. With four fixed sensors and one mobile sensor, it would be possible to monitor the environment more efficiently, preventing the Pepper Robot from colliding with any obstacle that is not in the field of vision of just one sensor.

6 Conclusion

We have developed an autonomous navigation system that considers proxemic zones by integrating the GProxemic Navigation system, a chatbot, and a social navigation algorithm. This system was initially tested on a simulated autonomous Pepper Robot. The GProxemic Navigation system enables the Pepper Robot to navigate routes while respecting social constraints, making it adaptable to different environments and cultures. The chatbot enhances Human-Robot Interaction (HRI) by providing an accessible interface to the navigation system.

Currently, we are in the process of creating a real prototype that allows remote operation of the Pepper Robot, data retrieval through a web-based system, and environment mapping using SLAM with the Monte Carlo algorithm. Our ultimate goal is to migrate the entire autonomous navigation system from simulation to the real prototype. During real-world tests, we identified the need to replace the front wheels to improve precision during curve navigation. Additionally, an extra battery is required to address the system's power demands, as they reduce the original battery life. We are also using a 3D printer to construct sensor supports for the real prototype, ensuring ease of installation and project maintainability.

Acknowledgment. We thank the Instituto Federal de Educação, Ciência e Tecnologia da Bahia (IFBA), the Research Group GIPAR - Grupo de Inovação e Pesquisa em Automação e Robótica, and Public Call No 03/2022/PRPGI for their support and assistance in the development of this project.

References

1. Naneva, S., Sarda Gou, M., Webb, T.L., Prescott, T.J.: A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. Int. J. Soc. Robot. 12(6), 1179–1201 (2020)
2. Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., Tanaka, F.: Social robots for education: a review. Sci. Robot. 3(21), eaat5954 (2018)
3. Henschel, A., Hortensius, R., Cross, E.S.: Social cognition in the age of human-robot interaction. Trends Neurosci. 43(6), 373–384 (2020)
4. Correia, E., Leite, A., Fernandes, G., Vilasboas, J., Sampaio, M., Bastos, A., Díaz-Amado, J., Soares, J., Cardinale, Y.: An architecture for social-aware navigation based on a chatbot interaction. In: Proceedings of the 18th International Conference on Intelligent Environments (IE2022). IEEE (2022)
5. Alami, R., Albu-Schaeffer, A., Bicchi, A., Bischoff, R., Chatila, R., De Luca, A., De Santis, A., Giralt, G., Guiochet, J., Hirzinger, G., Ingrand, F., Lippiello, V., Mattone, R., Powell, D., Sen, S., Siciliano, B., Tonietti, G., Villani, L.: Safe and dependable physical human-robot interaction in anthropic domains: state of the art and challenges. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 2006, pp. 1–16 (2006). https://doi.org/10.1109/IROS.2006.6936985
6. Valipour, S., Perez, C., Jagersand, M.: Incremental learning for robot perception through HRI. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2772–2777 (2017). https://doi.org/10.1109/IROS.2017.8206106
7. Sergiyenko, O.Y., Tyrsa, V.V.: 3D optical machine vision sensors with intelligent data management for robotic swarm navigation improvement. IEEE Sensors J. 21(10), 11262–11274 (2020)
8. Correia, E., et al.: An architecture for social-aware navigation based on a chatbot interaction. In: Workshops at 18th International Conference on Intelligent Environments (IE2022). IOS Press (2022)
9. Ballendat, T., Marquardt, N., Greenberg, S.: Proxemic interaction: designing for a proximity and orientation-aware environment. In: Proceedings of International Conference on Interactive Tabletops and Surfaces, Saarbrücken, Germany, 7-10 November. ACM, pp. 121–130 (2010). https://doi.org/10.1145/1936652.1936676
10. Daza, M., Barrios-Aranibar, D., Diaz-Amado, J., Cardinale, Y., Vilasboas, J.: An approach of social navigation based on proxemics for crowded environments of humans and robots. Micromachines 12(2), 193 (2021)
11. Mead, R., Matarić, M.J.: Perceptual models of human-robot proxemics. In: Experimental Robotics, pp. 261–276. Springer, Berlin, Germany (2016)
12. Redondo, M.E.L.: Comfortability detection for adaptive human-robot interactions. In: 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), Cambridge, UK, 3-6 September, pp. 35–39. IEEE (2019)
13. Breazeal, C., Dautenhahn, K., Kanda, T.: Social robotics. Springer Handbook of Robotics, pp. 1935–1972 (2016)
14. Kanda, T., Ishiguro, H.: Human-Robot Interaction in Social Robotics. CRC Press (2017)
15. Sheridan, T.B.: A review of recent research in social robotics. Curr. Opinion Psychol. 36, 7–12 (2020)
16. Vilasboas, J.P., Sampaio, M.S.C., Moreira, G.F., Souza, A.B., Diaz-Amado, J., Barrios-Aranibar, D., Cardinale, Y., Soares, J.E.: Application of social constraints for dynamic navigation considering semantic annotations on geo-referenced maps. In: IECON 2021 - 47th Annual Conference of the IEEE Industrial Electronics Society, pp. 1–7 (2021). https://doi.org/10.1109/IECON48115.2021.9589235
17. Akhund, T.M.N.U., Siddik, M.A.B., Hossain, M.R., Rahman, M.M., Newaz, N.T., Saifuzzaman, M.: IoT Waiter Bot: a low cost IoT based multi functioned robot for restaurants. In: 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), pp. 1174–1178. IEEE (2020)
18. Qing-xiao, Yu., Can, Y., Zhuang, F., Yan-zheng, Z.: Research of the localization of restaurant service robot. Int. J. Adv. Rob. Syst. 7(3), 18 (2010)
19. Kuo, C-M., Chen, L-C., Tseng, C-Y.: Investigating an innovative service with hospitality robots. Int. J. Contemp. Hospitality Manag. Emerald Publishing Limited (2017)
20. Takahashi, M., Suzuki, T., Shitamoto, H., Moriguchi, T., Yoshida, K.: Developing a mobile robot for transport applications in the hospital domain. Robot. Auton. Syst. 58(7), 889–899 (2010)
21. Papathanassis, A.: R-Tourism: introducing the potential impact of robotics and service automation in tourism. Ovidius University Annals, Series Economic Sciences, vol. 17, no. 1 (2017)
22. Coad, M.M., Blumenschein, L.H., Cutler, S., Zepeda, J.A.R., Naclerio, N.D., El-Hussieny, H., Mehmood, U., Ryu, J.-H., Hawkes, E.W., Okamura, A.M.: Vine robots: design, teleoperation, and deployment for navigation and exploration. IEEE Robot. Autom. Mag. 27(3), 120–132 (2019)
23. Bettencourt, R., Lima, P.U.: Multimodal navigation for autonomous service robots. In: 2021 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), pp. 25–30. IEEE (2021)
24. Fiorini, L., Mancioppi, G., Semeraro, F., Fujita, H., Cavallo, F.: Unsupervised emotional state classification through physiological parameters for social robotics applications. Knowl.-Based Syst. 190, 105217 (2020)
25. Banisetty, S.B., Williams, T.: Implicit communication through social distancing: can social navigation communicate social norms? In: Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, pp. 499–504 (2021)
26. Ginés, J., Martín, F., Vargas, D., Rodríguez, F.J., Matellán, V.: Social navigation in a cognitive architecture using dynamic proxemic zones. Sensors 19(23), 5189 (2019)
27. Lee, M.H., Ahn, H.S., Wang, K., MacDonald, B.: Design of an API for integrating robotic software frameworks. In: Proceedings of the 2014 Australasian Conference on Robotics and Automation (ACRA 2014), vol. 2, no. 3.2, pp. 1–1 (2014). Citeseer
28. Imteaj, A., Chowdhury, M.I.J., Farshid, M., Shahid, A.R.: RoboFI: autonomous path follower robot for human body detection and geolocalization for search and rescue missions using computer vision and IoT. In: 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), pp. 1–6. IEEE (2019)
29. Aguiar, A.S., dos Santos, F.N., Cunha, J.B., Sobreira, H., Sousa, A.J.: Localization and mapping for robots in agriculture and forestry: a survey. Robotics 9(4), 97 (2020). Multidisciplinary Digital Publishing Institute
30. Jishnu, U.K., Indu, V., Ananthakrishnan, K.J., Amith, K., Reddy, P.S., Pramod, S.: Voice controlled personal assistant robot for elderly people. In: 2020 5th International Conference on Communication and Electronics Systems (ICCES), pp. 269–274 (2020). https://doi.org/10.1109/ICCES48766.2020.9138101
31. Saravanan, M., Selvababu, B., Jayan, A., Anand, A., Raj, A.: Arduino based voice controlled robot vehicle. IOP Conf. Ser.: Mater. Sci. Eng. 993(1), 012125 (2020)
32. Vega, A., Cintas, R., Manso, L.J., Bustos, P., Núñez, P.: Socially-accepted path planning for robot navigation based on social interaction spaces. In: Proceedings of the Iberian Robotics Conference. Springer, pp. 644–655 (2019)
33. Karar, A.S., Said, S., Beyrouthy, T.: Pepper humanoid robot as a service robot: a customer approach. In: 2019 3rd International Conference on Bio-engineering for Smart Technologies (BioSMART). IEEE (2019)

Comparative Analysis of LiDAR SLAM Techniques in Simulated Environments in ROS Gazebo

Marcelo E. C. de Carvalho1, Tiago T. Ribeiro2, and Andre G. S. Conceicao2(B)

1 Postgraduate Program in Electrical Engineering, Federal University of Bahia, Salvador, Brazil
[email protected]
2 Department of Electrical and Computer Engineering, Federal University of Bahia, Rua Aristides Novis, 02, Federação, Salvador, BA 40210-630, Brazil
{tiagotr,andre.gustavo}@ufba.br

Abstract. Simultaneous localization and mapping (SLAM) algorithms using data from LiDAR sensors, which provide accurate distance measurements regardless of the lighting conditions, are widely studied and used. Carrying out mapping in simulated environments can be used to analyze the resources used in mapping and to plan the trajectory necessary for better acquisition of information from the real environment. In this work, three odometry and mapping techniques using a LiDAR-type sensor were compared, LeGO-LOAM, A-LOAM and F-LOAM, together with the odometry obtained from Ground Truth and the Extended Kalman Filter (EKF). The LiDAR SLAM algorithms performed well compared to the filtered odometry, especially in environments with more features.

Keywords: Mobile Robotics · LiDAR SLAM · Mapping · Odometry

1 Introduction

Among the main abilities of intelligent robots, mapping and pose estimation are undoubtedly fundamental tasks in mobile robotics. A great effort has been made by the academic community to achieve, in real time, simultaneous localization and mapping (SLAM) with data from visual sensors and/or LiDAR sensors [1]. LiDAR sensors have the advantage of capturing data in low-light environments, and high-resolution sensors allow capturing high-definition 3D detail over long distances [2]. This sensory modality is valued for its particular characteristics, such as non-contact measurement, wide field coverage and the large amount of information it provides.

The most widely used approach for estimating the transformation between two successive LiDAR readings is ICP (Iterative Closest Point) [3]. When finding a match at a point level, ICP aligns two sets of points iteratively until the stopping criteria are satisfied. In a moving position estimate, distortion is removed in two steps:
the velocity estimated by ICP is followed by distortion compensation using the computed velocity [4]. When the scan includes a large number of points, the computational cost of ICP increases to the point of making its use unfeasible. Some ICP variants were created with the aim of improving its efficiency and accuracy. Algorithms need to be simple enough to operate in real time, and robust enough to handle the inevitable measurement noise and modeling errors.

One of the most commonly used 3D LiDAR SLAM methodologies is the so-called LOAM (Lidar Odometry and Mapping), which is a real-time technique based on the ICP algorithm [5,6]. In this technique, more computationally efficient features are combined with a method for plane and edge detection; the algorithm calculates the pose by minimizing the point-to-edge and point-to-plane distances.

In this article, the odometry and mapping results obtained by three 3D SLAM techniques based on a LiDAR sensor, LeGO-LOAM [7], A-LOAM and F-LOAM [8], were compared with the Extended Kalman Filter (EKF) odometry results. Based on the odometry results, it is possible to carry out quantitative and qualitative analyses of pose estimation and map generation. Three performance analyses of the techniques were carried out, evaluating the processing time between pose estimates, the estimated total distance traveled and the estimated final position in the xy plane, in addition to a qualitative analysis of the generated mappings.
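Since the LOAM family minimizes point-to-edge and point-to-plane residuals, a brief NumPy sketch of those two geometric distances is given below. This is standard geometry for illustration only, not code from any of the compared implementations.

```python
import numpy as np

def point_to_plane_distance(q, p1, p2, p3):
    """Distance from query point q to the plane spanned by p1, p2, p3
    (the kind of residual minimized for planar features)."""
    n = np.cross(p2 - p1, p3 - p1)
    n = n / np.linalg.norm(n)
    return abs(np.dot(n, q - p1))

def point_to_line_distance(q, a, b):
    """Distance from q to the line through a and b (edge-feature residual)."""
    d = b - a
    return np.linalg.norm(np.cross(d, q - a)) / np.linalg.norm(d)

# Example: a point 2 m above the z = 0 plane and 2 m off the x axis.
q = np.array([0.0, 0.0, 2.0])
print(point_to_plane_distance(q, np.array([0., 0., 0.]),
                              np.array([1., 0., 0.]), np.array([0., 1., 0.])))  # 2.0
print(point_to_line_distance(q, np.array([0., 0., 0.]), np.array([1., 0., 0.])))  # 2.0
```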

2 Materials and Methods

2.1 Resources

To carry out the experiments, the ROS (Robot Operating System) framework, Noetic distribution, along with the Gazebo simulator and the RViz 3D viewer were used. The framework and its extensions were installed on the Linux Ubuntu 20.04 operating system, running on a notebook with an Intel Core i5-1021 processor, 8 GB of RAM and a dedicated GeForce MX250 video card with 2 GB of video memory (VRAM).

The husky package (https://github.com/husky/husky.git), developed by Clearpath Robotics, the manufacturer of the Husky robot, was used. The Husky is a skid-steer class robot, which features complex modeling due to vehicle wheel slippage and terrain deformation. The package comes with a variety of sensors for use on the robot, which can be simulated in an empty world (without objects), in the Gazebo playpen world or in a world created by the user. Figure 1 illustrates the Husky model in Gazebo.

The LiDAR sensor used was the Velodyne VLP-16, available in the husky package. The Velodyne VLP-16 is a ToF (Time of Flight) type sensor, with a measurement range of up to 100 m, a reading rate of up to 300,000 points/s, and a 360° horizontal and 30° vertical field of view (15° up and 15° down).
The rotation frequency of the sensor can be adjusted between 5 and 20 Hz, and the accuracy is around ±3 cm. The Husky has an embedded IMU (Inertial Measurement Unit), model UM7, with a filtered odometry estimation frequency of up to 500 Hz, a resolution of 0.01° and an accuracy between 1–5° for orientation reading (pitch, roll and yaw), depending on environmental characteristics, and a repeatability of 0.5°.

Fig. 1. Husky mobile robot model with the Velodyne VLP-16 LiDAR sensor.

2.2 Simulated Environments

To evaluate the performance of the LiDAR SLAM algorithms, several environments were created in the Gazebo simulator. The objective was to evaluate the behavior of the algorithms in environments with few and with many features. For all environments, the friction between the wheels and the surface was configured using the ODE (Open Dynamics Engine) model available in Gazebo, with mu and mu2 set to 100 and 50, respectively.

The first environment created was a straight corridor 30 m long and 4 m wide, with smooth walls 2.5 m high, without any objects in the way, as shown in Fig. 2a. The second environment was a straight corridor with the same dimensions as the first, but with one window every 5 m on both side walls, with the aim of simulating some features, as illustrated in Fig. 2b. The third environment created was a closed corridor, in a square shape, with smooth walls, 30 m long on each edge and 5 m wide, as illustrated in Fig. 3a. The fourth simulated environment was the Playpen world, already available in the husky package, which simulates an outdoor environment with the presence of simple features, such as cones, concrete containments, fire hydrants and garbage containers, as shown in Fig. 3b.

2.3 LiDAR SLAM Techniques

In the present work, the results of mapping with three techniques derived from the LOAM-Velodyne algorithm, developed by Zhang and Singh (2017), were


Fig. 2. World model in the gazebo for simulating the movement of the husky robot in a straight corridor without windows (a) and in a straight corridor with windows (b).

Fig. 3. World model in the gazebo for simulating the movement of the husky robot in a closed corridor without windows (a) and model of the gazebo Playpen world for husky robot movement simulation (b).

evaluated. The techniques used were LeGO-LOAM, A-LOAM and F-LOAM, with the odometry estimation results compared with Ground Truth and with the EKF, which merges wheel odometry data with data from the IMU.

The LeGO-LOAM technique, proposed by Shan and Englot (2018), is a lighter version of the LOAM-Velodyne technique optimized for land vehicles, which obtains pose estimates in real time with a lower computational cost. To determine the components of the pose, the Levenberg-Marquardt method is used in two stages: based on the points detected on the floor, the translations in x and y and the yaw rotation are calculated, while the edge points are used to calculate the roll and pitch rotations and the translation in z [6]. The technique also uses IMU data to correct the distortion of the point cloud.

A-LOAM (Advanced Implementation of LOAM) is a technique that uses Eigen and Ceres Solver to simplify the code structure of LOAM [9]. It is a modified version of the original LOAM code, but simpler and without complex mathematical derivations and redundant operations.

The F-LOAM technique (Fast LiDAR Odometry and Mapping), proposed by Wang et al. (2021), is characterized by two stages of distortion
compensation to reduce the computational cost. In the first stage, a constant angular and linear velocity is assumed during a short period of time to predict the motion and correct the distortion. In the second stage, the distortion is recalculated after the pose estimation process and the undistorted features are updated in the final map.

2.4 Data Acquisition Method

For data acquisition, the Gazebo simulator was used, with the Husky mobile robot initially positioned at the zero position in x and y, and with the LiDAR sensor positioned on the upper base, located 0.081 m along the x axis and 0.409 m along the z axis from the robot's reference point in relation to the world. The default frequency of data acquisition is different in each algorithm and was validated in the original articles: for LeGO-LOAM the frequency was 2.5 Hz, and for A-LOAM and F-LOAM the frequency was 10 Hz. The frequency of data acquisition affects the processing time between poses, which was calculated from the total simulation time.

Data acquisition was performed by recording a .bag file during the robot's movement in the simulated world. The robot was moved along the straight corridor, with and without windows, over a distance of 25 m in a straight line. For the closed square corridor and for the Playpen world, the robot was teleoperated to traverse the path and return to a point close to the initial pose. For all simulations the cruising speed was limited to 0.5 m/s.
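Since the evaluation works from recorded .bag files, a minimal sketch of reading the logged (x, y) positions back with the ROS 1 rosbag Python API is shown below. The bag filename and the odometry topic (/odometry/filtered, the usual output of the Husky EKF stack) are assumptions for illustration.

```python
import rosbag

# Collect the (x, y) positions logged during a run for later analysis.
poses = []
with rosbag.Bag("corridor_run.bag") as bag:
    for _, msg, _ in bag.read_messages(topics=["/odometry/filtered"]):
        p = msg.pose.pose.position
        poses.append((p.x, p.y))

print("messages:", len(poses), "first pose:", poses[0] if poses else None)
```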

2.5 Results Analysis

For the quantitative analysis of the results, 3 metrics were compared. The processing time (t_proc) between poses was analyzed through the relation between the total simulation time (t_sim) and the number of published messages (N.o_msg) in the topic corresponding to the odometry estimation during the simulation, as illustrated in Eq. 1.

$t_{proc} = \dfrac{t_{sim}}{N.o_{msg}} \qquad (1)$

In order to analyze the path error for the estimate of each algorithm, the reference distance was initially calculated using Ground Truth data. The travelled distance (D_travelled) was calculated by adding the modulus of the distances between consecutive estimated points, as shown in Eq. 2. The same method was used to calculate the distance covered in the estimates calculated by the EKF, LeGO-LOAM, A-LOAM and F-LOAM.

$D_{travelled} = \sum_{i=1}^{N.o_{msg}} \sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2} \qquad (2)$

The position error at the end of the simulation was calculated as the modulus of the distance between the coordinates x (x_estimated) and y (y_estimated) of the evaluated model and the final position obtained from the Ground Truth (GT) of the simulator. Equation 3 gives the magnitude of the error in the xy plane (Error_xy) of the analyzed techniques with respect to the reference.

$Error_{xy} = \sqrt{(x_{GT} - x_{estimated})^2 + (y_{GT} - y_{estimated})^2} \qquad (3)$
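The three metrics above could be computed from the logged (x, y) poses with a few lines of Python. The sketch below is illustrative, not the authors' evaluation code, and the example values are made up.

```python
import math

def processing_time(t_sim: float, n_msgs: int) -> float:
    """Eq. 1: average processing time between consecutive pose messages."""
    return t_sim / n_msgs

def travelled_distance(poses):
    """Eq. 2: sum of distances between consecutive (x, y) estimates."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(poses, poses[1:]))

def final_position_error(gt_final, est_final):
    """Eq. 3: Euclidean error between the Ground Truth and estimated final pose."""
    return math.hypot(gt_final[0] - est_final[0], gt_final[1] - est_final[1])

# Example with made-up poses:
poses = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.1)]
print(processing_time(51.6, 516))                    # ~0.1 s between poses
print(travelled_distance(poses))                     # ~1.01 m
print(final_position_error((1.0, 0.0), poses[-1]))   # ~0.1 m
```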

3 Results

This section describes the results of the experiments performed with the LiDAR SLAM techniques in simulated environments. The achieved results were compared with the Ground Truth and with the odometry filtered using the Extended Kalman Filter (EKF).

In the experiments carried out in the straight corridor, with and without windows, the robot was controlled through the rospy library, writing a linear velocity of 0.5 m/s to the topic /cmd_vel at a frequency of 10 Hz; after 500 iterations the robot travels an approximate distance of 25 m oriented along the x axis of the world. For the experiments carried out in the closed corridor, in square format, and in the Playpen world of the gazebo, the robot was teleoperated with the computer keyboard using the Husky control teleop keyboard package.

The results presented below are divided into analysis of the processing time between poses, error in the distance covered, and error in the final position in the xy plane. For a better comparison, the results are described in graphs and tables. In the tables, environments A, B, C and D represent the straight corridor without windows, the straight corridor with windows, the closed corridor without windows and the Playpen world, respectively, and the best compared values are highlighted in bold for better visualization.
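As a hedged illustration of the velocity command described above (the topic, rate and iteration count follow the text; the node name is an assumption), a rospy loop that publishes 0.5 m/s on /cmd_vel at 10 Hz for 500 iterations, i.e. roughly 25 m of straight-line travel, could look like this:

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("straight_corridor_drive")            # node name is illustrative
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
rate = rospy.Rate(10)                                  # 10 Hz

cmd = Twist()
cmd.linear.x = 0.5                                     # 0.5 m/s cruising speed

for _ in range(500):                                   # 500 iterations -> ~25 m
    if rospy.is_shutdown():
        break
    pub.publish(cmd)
    rate.sleep()

pub.publish(Twist())                                   # stop the robot
```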

3.1 Processing Time Analysis Between Poses

As shown in Table 1, the algorithm that presented the shortest processing time between poses was the EKF, followed by A-LOAM and F-LOAM.

Table 1. Comparison of processing time between poses according to the algorithm used.

Environment   Time (s)    Processing time between poses (s)
                          EKF      LeGO-LOAM   A-LOAM   F-LOAM
A             51.600      0.020    0.400       0.100    0.100
B             51.400      0.020    0.402       0.100    0.100
C             138.000     0.020    0.399       0.100    0.100
D             64.600      0.020    0.395       0.099    0.099

3.2 Distance Traveled Error

The error of the total distance traveled estimated by the evaluated algorithms, which takes into account only the distance, without evaluating the poses, is shown in Table 2. In the table it is possible to observe that the EKF algorithm presented the best performance for the straight corridor with and without windows. The LeGO-LOAM algorithm presented the best performance for the closed corridor without windows, and the A-LOAM algorithm presented the best performance in the Playpen world of the gazebo.

Table 2. Comparison of the total distance traveled error between each of the evaluated algorithms.

Environment   Distance (metres)   Reference distance error (%)
                                  EKF      LeGO-LOAM   A-LOAM   F-LOAM
A             24.980              0.020    12.570      1.237    1.244
B             24.980              0.000    0.297       0.532    1.345
C             95.813              0.808    0.018       0.706    0.730
D             58.557              5.776    2.320       0.710    0.728

3.3 Final Position Error in Plane xy

The final location error estimated by the evaluated algorithms, which takes into account only the distance, without evaluating the poses, is shown in Table 3. In the table it is possible to observe that the EKF algorithm presented the best performance for the straight corridor with and without windows. The LeGO-LOAM algorithm presented the best performance for the closed corridor without windows, and the F-LOAM algorithm presented the best performance in the Playpen world of the gazebo.

Table 3. Comparison between the final pose estimates in the xy plane, with values presented according to the environment and algorithm used.

Environment   Final position error in plane xy (m)
              EKF      LeGO-LOAM   A-LOAM   F-LOAM
A             0.001    3.274       0.411    0.406
B             0.001    0.089       0.163    0.418
C             0.648    0.110       0.199    0.253
D             1.030    0.074       0.053    0.044

The performance of the EKF in estimating the robot's final position for the straight corridors with and without windows was significantly better than that of the other algorithms. A possible reason for the better performance of the EKF is the restricted number of features present in the straight corridors, in addition to the absence of curves and a smaller sliding effect.

For the closed corridor without windows, the LeGO-LOAM algorithm presented the best performance in the final position estimation, this being a world with few features, despite the presence of curves. LeGO-LOAM, in addition to being optimized for land vehicles, uses the IMU as a resource, which may have favored the performance of the algorithm in this scenario.

In the Playpen world, the LiDAR SLAM algorithms present better results compared to the EKF. The Playpen world presents characteristics closer to the real world, with a greater diversity of features, which must have favored the performance of the LiDAR SLAM algorithms; they present similar performance, with a tendency towards better performance for the F-LOAM algorithm.

3.4 Comparative Graphical Analysis Between Algorithms

This section presents the graphical analysis of the algorithms in each situation tested. Figures 4a, b, 5a, b present the comparative graph of each algorithm for the environment with straight corridor without window, straight corridor with window, closed corridor without window and Playpen world of the gazebo.

Fig. 4. Comparative graph of the position estimates of the tested algorithms in the straight corridor without window (a). Comparative graph of the position estimates of the tested algorithms in the straight corridor with window (b).

The graphical analysis of the algorithms in the corridor environments allows observing the degree of oscillation in the position estimate, with each new iteration producing a new estimated pose based on data from the point cloud and the features present in the environment.

In the graph of Fig. 4a, obtained from the simulation of the robot moving in a straight corridor without windows, it is possible to observe the failure in the performance of the LeGO-LOAM algorithm, in the presence of few features,


Fig. 5. Comparative graph of the position estimates of the algorithms tested in the closed corridor without window (a). Comparative graph of the position estimates of the algorithms tested in the Playpen world of the gazebo (b).

in the determination of the final position, compared to the other LiDAR SLAM algorithms. In the graph in Fig. 4b, it is possible to observe the better performance of the algorithms in determining the final position on the x axis, in the presence of simple features, such as symmetrically spaced windows. In the graphs of Fig. 5a, b it is possible to observe the limitation of the EKF for position prediction in the presence of curves; in this scenario, the performance of the LiDAR SLAM algorithms is significantly better than that of the EKF.

3.5 Mapping Results

Figures 6a, b and 7 show the result of the point cloud mapping of the algorithms in the Playpen world of the gazebo. Due to the greater number of features, this world is more complex for point cloud mapping. The expectation is that the greater number of features will improve the results obtained with the SLAM techniques, due to the corrections of the odometry distortions and the step-by-step mapping in the simulation. The features serve as a reference for more assertive determination of new poses, recalculating the distance between them. Visually, all LiDAR SLAM algorithms were able to map the Playpen world, with emphasis on LeGO-LOAM, which managed to map some features of the environment in greater detail, such as the garbage container on the left side of the images. However, visually there is a greater presence of rotation in the border containments, which may have been a consequence of the orientation corrections obtained by the IMU data. The quality of the mapping would be better evaluated in a 3D reconstruction that could be performed with data from the cloud of points obtained by the algorithms, where the filling flaws of the structures would be better visualized. In this work, the analysis of the mapping was restricted to the positioning and rotation of the mapped structures.


Fig. 6. Result of the point cloud mapping in the Playpen world simulated in Gazebo using the LiDAR SLAM LeGO-LOAM algorithm (a). Result of the point cloud mapping in the Playpen world simulated in Gazebo using the LiDAR SLAM A-LOAM algorithm (b).

Fig. 7. Result of point cloud mapping in the Playpen world simulated in Gazebo using the LiDAR SLAM F-LOAM algorithm.

4 Conclusion

The results of the present study demonstrate the efficiency of the evaluated LiDAR SLAM algorithms for determining the trajectory and pose of mobile robots, especially in environments with a greater number of features. In environments with few features, the algorithms also performed satisfactorily, with the exception of the LeGO-LOAM algorithm in the straight corridor without windows, which presented a final position error of 13% for displacement along the x axis.

For environments with few features and/or trajectories with few curves, the EKF is a satisfactory solution and can be the first choice for robot localization. The IMU sensor is available in several mobile robots on the market, which makes the use of the EKF feasible in most cases.

For future work, it is necessary to compare the results obtained with the LiDAR SLAM algorithms in simulated worlds with the results obtained in real physical worlds, to validate the
mapping simulation proposal before the real-world experiments. The mapping simulation would allow evaluating the performance of the algorithms for this purpose, validating the choice of solution, in addition to allowing the determination of the best trajectory for complete mapping of the environment.

Acknowledgement. The authors acknowledge the financial support provided by CNPq, grant numbers 311029/2020-5 and 407163/2022-0, and CAPES - Finance Code 001.

References

1. Cruz, G.P.J., et al.: Investigação de Técnicas LiDAR SLAM para um Dispositivo Robótico de Inspeção de Ambientes Confinados. Revista da SBA, vol. 2, no. 1 (2020)
2. Wong, U., et al.: Comparative evaluation of range sensing technologies for underground void modeling. In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 2, pp. 3816–3823. IEEE (2011)
3. Besl, P.J., McKay, N.D.: Method for registration of 3D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992)
4. Hong, S., Ho, H., Kim, J.: ICP: velocity updating iterative closest point algorithm. In: IEEE International Conference on Robotics and Automation (ICRA), Anchorage, Alaska (2010)
5. Zhang, J., Singh, S.: LOAM: lidar odometry and mapping in real-time. Robot.: Sci. Syst. 2(9) (2014)
6. Zhang, J., Singh, S.: Low-drift and real-time lidar odometry and mapping. Auton. Robot. 41(2), 401–416 (2017)
7. Shan, T., Englot, B.: LeGO-LOAM: lightweight and ground-optimized lidar odometry and mapping on variable terrain. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4758–4765 (2018)
8. Wang, H., et al.: F-LOAM: fast LiDAR odometry and mapping. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2021)
9. Qin, T., Cao, S.: Advanced Implementation of LOAM (2022). https://github.com/HKUST-Aerial-Robotics/ALOAM. Accessed: 12 December

Instance Segmentation to Path Planning in a Simulated Industrial Environment

Danilo G. Schneider(B) and Marcelo R. Stemmer

Universidade Federal de Santa Catarina, Florianópolis 88040-900, Brazil
[email protected], [email protected]

Abstract. This paper presents an integrated framework that combines instance segmentation and path planning in a simulated industrial environment to improve mobile robot navigation. By training the Mask-RCNN model using a synthetic dataset, accurate object detection and segmentation are achieved. The resulting segmentation masks are used to generate an Occupancy Grid Map (OGM) that provides detailed information about the dynamic environment. Utilizing the OGM in the path planning process enables the mobile robots to make informed decisions, considering the presence and characteristics of objects. This allows for efficient path optimization and obstacle avoidance. The framework’s effectiveness is evaluated through extensive experiments, comparing it to traditional path planning approaches. The results demonstrate significant improvements in navigation performance, with the integrated framework achieving higher path efficiency and reduced travel time. Real-time instance segmentation enhances the system’s adaptability to changes in the environment. The proposed framework contributes to advancing intelligent automation in industrial settings, enabling safe and efficient robot navigation. Overall, the integration of instance segmentation and path planning through the proposed framework offers a valuable solution for enhancing mobile robot navigation in industrial environments, optimizing productivity, and facilitating seamless human-robot collaboration. Keywords: Instance segmentation · Path planning · Simulated environment · Navigation

1 Introduction

In recent years, there has been a growing demand for efficient and safe mobile robot navigation in industrial environments. The ability of robots to navigate autonomously and adaptively is crucial for optimizing productivity, ensuring worker safety, and achieving smooth operations. One key challenge in this context is providing robots with accurate and detailed information about the dynamic environment they operate in [1].

In the context of future assembly systems, there is a growing demand for enhanced flexibility. The concept of future factories involves the utilization of spatio-temporal workstations, as exemplified by the Line-less Mobile Assembly System (LMAS) [2]. While these systems exhibit a high level of complexity, they operate within a closed-world environment domain, where all movable instances are known and accounted for
within the production process. In order to effectively monitor and manage the dynamic nature of these environments, the integration of surveillance cameras has emerged as a potential solution. Surveillance cameras serve the purpose of overseeing the networked system comprising mobile robots, human operators, and other movable objects that collaborate in assembling products within flexible environment configurations.

In this paper, we investigate the following research question: using a supervision RGB-D camera, is it possible to use instance segmentation CNNs for online mapping of flexible and dynamic industrial environments?

We hypothesize that the integration of instance segmentation and path planning offers several advantages. Firstly, it enables the robots to detect and avoid obstacles in real time, thereby enhancing their safety and preventing collisions. Secondly, it allows for efficient path optimization, reducing travel time and improving overall productivity. Additionally, the framework enhances the adaptability of the robots to changes in the dynamic environment.

The main objective of this research is to leverage the segmentation masks obtained from Mask-RCNN to generate an Occupancy Grid Map (OGM). The OGM provides a comprehensive representation of the environment, highlighting the locations, shapes, and sizes of objects. By incorporating this detailed information into the path planning algorithm, mobile robots can make informed decisions regarding their navigation paths.

To address this challenge, this paper proposes an integrated framework that combines instance segmentation and path planning in a simulated industrial environment. Instance segmentation is a computer vision technique that enables the identification and separation of individual objects within an image. By training the Mask-RCNN model using a synthetic dataset, we achieve accurate object detection and segmentation. Figure 1 shows a navigation example in the simulated environment with standard path planning and illustrates the motivation of this work. As shown, the first planned trajectory passes through the person, since the person is initially occluded from the robot's view; the path is then re-calculated when the person comes into sight.

To evaluate the effectiveness of our proposed approach, extensive experiments are conducted, comparing it against traditional path-planning methods. The experimental results demonstrate the efficiency of the integrated framework in terms of navigation performance and adaptability.

In summary, this research aims to bridge the gap between instance segmentation and path planning in the context of mobile robot navigation in industrial environments. By leveraging the segmentation masks to generate an OGM, the proposed framework provides robots with a more detailed understanding of their surroundings, leading to safer and more efficient navigation.
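As a rough illustration of turning segmentation masks into occupancy grid cells, the NumPy sketch below projects binary instance masks from a supervision camera onto a 2D grid. It assumes a top-down view and a known metres-per-pixel scale, which are simplifying assumptions; the paper's pipeline uses an RGB-D camera and Mask-RCNN outputs, and the function name and parameters are illustrative.

```python
import numpy as np

def masks_to_occupancy_grid(masks, img_shape, grid_res=0.05, m_per_px=0.01):
    """Project binary instance masks from a top-down supervision camera
    onto a 2D occupancy grid (0 = free, 100 = occupied)."""
    h, w = img_shape
    grid_h = int(h * m_per_px / grid_res) + 1
    grid_w = int(w * m_per_px / grid_res) + 1
    grid = np.zeros((grid_h, grid_w), dtype=np.int8)

    for mask in masks:                                  # each mask: (h, w) boolean array
        ys, xs = np.nonzero(mask)
        gy = (ys * m_per_px / grid_res).astype(int)     # pixel row -> grid row
        gx = (xs * m_per_px / grid_res).astype(int)     # pixel col -> grid col
        grid[gy, gx] = 100                              # mark the projected cells as occupied
    return grid

# Example: a single 20x20-pixel mask in a 480x640 image.
mask = np.zeros((480, 640), dtype=bool)
mask[100:120, 200:220] = True
grid = masks_to_occupancy_grid([mask], (480, 640))
print(grid.shape, int((grid == 100).sum()), "occupied cells")
```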

2 Related Literature

Since their early development in [3], occupancy grid maps have become a dominant paradigm for environment modeling in mobile robotics and are widely used in a broad range of robotic applications [4–6]. Object detection and segmentation are fundamental tasks in computer vision, involving the identification and localization of objects in images or videos. While object detection identifies objects and provides bounding boxes, object segmentation goes a step further, accurately delineating object boundaries for precise shape and position information.


Fig. 1. Standard path planning in the simulated environment.

These tasks find applications in various fields like autonomous driving, medical imaging, and robotics. Deep learning, particularly Convolutional Neural Networks (CNNs), has revolutionized object detection and segmentation, yielding impressive results. Techniques such as Region Proposal Networks (RPNs) have improved the accuracy and efficiency of detection models. Prominent models like Mask R-CNN [7] incorporate instance segmentation by predicting object masks alongside bounding box recognition. Faster R-CNN [8], which utilizes an RPN that shares full-image convolutional features with the detection network, serves as the basis for the object detection component in Mask R-CNN. These advancements have significantly enhanced the performance of object detection and segmentation models, empowering diverse computer vision applications.

Regarding the use of synthetic data in training, the authors of [9] developed an automated synthetic data generation pipeline to generate large-scale photo-realistic data with precise annotations, and then formulated an instance segmentation network for object recognition and a 6D pose estimation network for localization. Moreover, they presented a generic and robust sim-to-real deep-learning-based framework, namely S2R-Pick, enabling fast and accurate object recognition and localization in industrial robotic bin picking. In robotic operations like bin picking or pick-and-place, as investigated in previous studies [9–12], training images often feature objects positioned randomly, occasionally occluded or even upside-down. However, in our specific case, where the objective is to track and segment mobile robots, humans, and other movable objects, including upside-down instances in the training data is not relevant, because such instances do not represent the typical operational domain we aim to target.

Numerous robotic simulators are currently available in the field. A comprehensive comparison conducted by [13] evaluates Unity3D, V-Rep, and Gazebo, highlighting the ease of integration with ROS for both V-Rep and Gazebo. In terms of annotation camera sensor implementation, we identified three simulators: Gazebo (since version Fortress), CARLA [14], and Nvidia Isaac Sim [15]. After careful consideration, Gazebo was chosen due to its complete open-source nature and perceived ease of integration with ROS2 [16].


3 Method

Regarding the problem formulation, a key difficulty in training CNNs lies in the need for large amounts of annotated images for each and every new part. In industrial environments, annotated images of the desired part are rarely available; however, construction data in the form of 3D CAD models is almost always available. Our algorithm relies on the Gazebo simulator (version Garden). Gazebo is an open-source robotic simulation environment developed by the Open Source Robotics Foundation (OSRF). It provides a platform for researchers and developers to test and evaluate robotic systems in a virtual environment, allowing for faster and more cost-effective development. The simulator includes a physics engine, sensor simulation, and support for various robotic platforms and driving mechanisms, making it a popular choice for a wide range of applications, including robotics, computer vision, and artificial intelligence. The simulator has been used in various research studies, including the development of autonomous robots, navigation systems, and object recognition algorithms. Moreover, we used the Robot Operating System 2 (version Humble) framework to autonomously navigate the robots and humans in the simulation environment. The Nav2 stack [17] was used for navigation. We developed an algorithm to generate synthetic datasets using the simulator while navigating the models autonomously inside an industrial environment. For instance segmentation training, 2613 images were generated with three annotated high-level classes (robot, person, object). All models used and the environment can be observed in Fig. 2. For generating this dataset, our algorithm uses a 2D mask grid map to spawn and control the moving instances inside the camera field of view, ensuring the long-term balance of the dataset classes.
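The paper does not state which Mask R-CNN implementation was used; the following is a minimal sketch assuming the Detectron2 implementation, with placeholder dataset names and annotation paths. The class count corresponds to the three classes above, and the iteration count and COCO-pretrained weights match the training settings reported in this section.

```python
# Sketch only: fine-tuning Mask R-CNN on the synthetic dataset (Detectron2 assumed).
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Register the synthetic COCO-style dataset (classes: robot, person, object).
# The JSON/image paths are illustrative placeholders.
register_coco_instances("synthetic_train", {},
                        "annotations/train.json", "images/train")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")  # COCO 2017 pre-trained
cfg.DATASETS.TRAIN = ("synthetic_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3   # robot, person, object
cfg.SOLVER.MAX_ITER = 10000           # number of training iterations
cfg.SOLVER.IMS_PER_BATCH = 2          # illustrative batch size
cfg.SOLVER.BASE_LR = 0.00025          # illustrative learning rate

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```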

Fig. 2. Every model used for the data generation. Robots on the left side, actors in the middle, and movable objects on the right side.

Mask R-CNN was trained for 10000 iterations. A Faster R-CNN model with pre-trained weights obtained on the COCO 2017 dataset was used as the backbone. After that, we are able to predict the segmentation of the defined classes in an RGB image and then generate the point cloud of the predicted output masks using the PointCloud


Library (PCL). In the next step, the PCL pipeline transforms the points based on the camera pose, downsamples the point cloud using a VoxelGrid filter with a uniform leaf size of 0.05, and projects the remaining points onto a 2D occupancy grid map of the area by iterating over the points in the filtered cloud. Finally, this map is merged with the full occupancy grid map of the empty environment and published in ROS2 so that any robot can make use of it for path planning. The segmented objects in the OGM were filled as convex polygons to better represent the area they occupy. As a qualitative result, we used the configuration of Fig. 1 and generated the merged grid map, which was tested in this configuration for global path planning of the robot. For a quantitative result, we tested every model from Fig. 2 individually, generated raw points and convex hulls, and calculated their Intersection over Union (IoU) with a ground-truth OGM. This ground truth was obtained using three simulated 3D lidars, each generating a point cloud that was projected into a 2D laserscan using the pointcloud_to_laserscan package, transformed into an OGM using the SLAM Toolbox package, and merged into the final ground truth; we then manually sequenced all the contour pixels to fill the occupied area as a polygon so that it is more representative. Figure 3 illustrates the ground truth generation process.
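The projection step described above can be summarized with a minimal NumPy sketch. This is not the authors' code: the grid extent, origin, and function name are illustrative assumptions; only the 0.05 m cell/leaf size follows the text.

```python
import numpy as np

def points_to_ogm(points_xyz, resolution=0.05, size_xy=(200, 200), origin_xy=(-5.0, -5.0)):
    """Project a 3D point cloud (already in the map frame) onto a 2D occupancy grid.

    resolution: cell edge length in metres (0.05 m as in the paper).
    size_xy, origin_xy: illustrative grid extent and origin.
    """
    grid = np.zeros(size_xy, dtype=np.int8)                       # 0 = free, 100 = occupied
    # Simple uniform-grid downsampling: keep one entry per (x, y, z) voxel.
    voxels = np.unique(np.floor(points_xyz / resolution).astype(int), axis=0)
    # Drop the height coordinate and mark the corresponding 2D cells.
    ix = voxels[:, 0] - int(origin_xy[0] / resolution)
    iy = voxels[:, 1] - int(origin_xy[1] / resolution)
    valid = (ix >= 0) & (ix < size_xy[0]) & (iy >= 0) & (iy < size_xy[1])
    grid[ix[valid], iy[valid]] = 100
    return grid
```

The resulting array could then be copied into the data field of a nav_msgs/OccupancyGrid message before merging it with the static map of the empty environment, as described above.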

4 Results and Analysis

4.1 Experimental Results

The predictions of the Mask R-CNN are shown in Fig. 4. The predicted output masks are presented in Fig. 5; note that the mask of the robot is intentionally not included, so that the robot itself does not need to be filtered out of the resulting dynamic map. Figure 6 shows the point cloud generated using the binary mask and its projection into 2D laserscan data. Finally, Fig. 7 gives the final merged OGM that is published for the robot. Table 1 shows the intersection and union values for each moving object model used in the simulation. Figure 8 gives a sample visualization of the obtained raw points on top of their convex hull, side by side with the ground truth of one model. The new global path planning of the robot from Fig. 1 is shown in Fig. 9; this time the robot planned a direct path avoiding the person.

4.2 Analysis

Our qualitative experimental results show the functionality and applicability of the developed framework: with the published OGM, any robot can avoid planning through obstacles even if they are not in its line of sight. For the ground truth generation, the conversion from point cloud to laserscan proved to be a handy but also limited solution. A lot of useful information is lost due to the nature of the laserscan data; thus, we needed three lidars to obtain a useful representation of the object, and the contour pixels needed to be manually sequenced, so the resulting polygon is possibly non-convex. In contrast, our framework converts directly from the point cloud to the OGM, using most of the resulting point cloud.


Fig. 3. Ground truth generation process. The top-left image shows an object in the middle of three 3D lidars, represented by black boxes. The top-right image shows the combination of the point clouds. The bottom-left image gives the projected laserscan. The bottom-right image shows the resulting ground truth mask.

Fig. 4. Mask R-CNN predictions.

As a single low-resolution RGB-D sensor was used (600 × 800 pixels), it is possible to observe that the IoU value indicates only a partial representation of the full objects,


Fig. 5. Output binary masks from Mask R-CNN predictions.

Fig. 6. Generated pointcloud (on the left) and projected laserscan (on the right).

Fig. 7. Final OGM published to the robot.

having the lowest values on small objects far from the camera. But this was expected since we downsampled the pointcloud and used a resolution of 0.05 m per pixel in the


Table 1. IoU values for raw points and convex hull.

Model   Raw points                          Convex hull
        Intersection   Union   IoU          Intersection   Union   IoU
1       116            299     0.39         255            419     0.61
2       111            285     0.39         248            415     0.6
3       28             74      0.38         46             74      0.62
4       51             118     0.43         91             117     0.78
5       559            897     0.62         840            896     0.94
6       52             102     0.51         91             102     0.89
7       20             55      0.36         36             55      0.65
8       368            578     0.64         531            578     0.92
9       168            278     0.6          230            283     0.81
10      84             141     0.6          133            150     0.89
11      77             147     0.52         118            156     0.76
12      224            415     0.54         331            415     0.8
13      62             166     0.37         144            218     0.66
14      24             50      0.48         40             49      0.82
15      405            578     0.7          537            577     0.93
16      34             112     0.3          83             120     0.69
17      36             95      0.38         80             104     0.77
18      37             99      0.37         74             107     0.69
19      34             101     0.34         65             105     0.62
20      43             116     0.37         95             123     0.77
21      21             62      0.34         35             63      0.56
22      44             109     0.4          95             123     0.77
23      16             43      0.37         23             46      0.5
SUM     2614           4920    0.53         4221           5295    0.80

OGM. Nevertheless, the overall IoUs of the raw points and convex hulls indicate a fair representation and placement of the occupied pixels.
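The intersection and union entries in Table 1 are occupied-cell counts of binary grids, so the IoU computation reduces to a few array operations. A minimal sketch with hypothetical mask arrays:

```python
import numpy as np

def grid_iou(pred_mask, gt_mask):
    """IoU between two boolean occupancy masks of equal shape."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return intersection, union, intersection / union

# For example, model 5 (convex hull) in Table 1: 840 / 896 ≈ 0.94.
```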

5 Conclusion

In conclusion, we have presented a comprehensive framework that integrates instance segmentation and path planning in a simulated industrial environment. Our framework offers several notable advantages in mobile robot navigation. By utilizing synthetic datasets to train the Mask-RCNN model, accurate object detection and segmentation are achieved, enabling the generation of detailed Occupancy Grid Maps (OGMs). The


Fig. 8. Sample visualization of the obtained raw points on top of the convex hull, side by side with the ground truth of one model.

Fig. 9. New planned path using the OGM from this framework.

integration of OGMs into the path planning algorithm empowers mobile robots with enhanced situational awareness, allowing for efficient path optimization and obstacle avoidance. It is important to state that the CNN model was trained in this project only with synthetic data and pre-trained weights from the COCO dataset. In order to apply this OGM generation in a real scenario, real data and further training is still needed to ensure a safe and accurate operation. One significant advantage of utilizing a simulated environment for the development of such algorithms is the ability to generate diverse and controlled scenarios. Synthetic datasets provide flexibility in manipulating object placement, occlusions, and orientation, allowing for comprehensive training and testing. Moreover, the simulated environment enables iterative algorithm development and testing without the constraints and risks


associated with real-world deployments. Another advantage is the potential for scalability. Furthermore, the use of simulated environments facilitates reproducibility and comparison of results across different research studies. Researchers can utilize the same synthetic datasets and simulated scenarios to evaluate and benchmark their algorithms, fostering collaboration and advancing the field. Overall, our framework demonstrates the efficacy of integrating instance segmentation and path planning in a simulated industrial environment. Future work may be focused on merging data from multiple input cameras, and dealing with the data association problem. Acknowledgments. We would like to acknowledge CAPES (Coordenação de Aperfeiçoamento Pessoal de Nível Superior) for their financial support.

References 1. Harapanahalli, S., Mahony, N.O., Hernandez, G.V., Campbell, S., Riordan, D., Walsh, J.: Autonomous Navigation of mobile robots in factory environment. Procedia Manuf. 38, 1524– 1531 (2019) 2. Buckhorst, A.F., do Canto, M.K.B., Schmitt, R.H.: The line-less mobile assembly system simultaneous scheduling and location problem. Procedia CIRP 106, 203–208 (2022) 3. Elfes, A.: Using occupancy grids for mobile robot perception and navigation. Computer 22(6), 46–57 (1989) 4. Thrun, S.: Learning occupancy grid maps with forward sensor models. Auton. Robot. 15, 111–127 (2003) 5. Panchpor, A.A., Shue, S., Conrad, J.M.: A survey of methods for mobile robot localization and mapping in dynamic indoor environments. In: 2018 Conference on Signal Processing and Communication Engineering Systems (SPACES), pp. 138–144. IEEE (2018) 6. Tsardoulias, E.G., Iliakopoulou, A., Kargakos, A., Petrou, L.: A review of global path planning methods for occupancy grid maps regardless of obstacle density. J. Intell. Rob. Syst. 84, 829–858 (2016) 7. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969 (2017) 8. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 28 (2015) 9. Li, X., Cao, R., Feng, Y., Chen, K., Yang, B., Fu, C.W, Heng, P.A.: A sim-to-real object recognition and localization framework for industrial robotic bin picking. IEEE Robot. Autom. Lett. 7(2), 3961–3968 (2022) 10. Danielczuk, M., Matl, M., Gupta, S., Li, A., Lee, A., Mahler, J., Goldberg, K.: Segmenting unknown 3D objects from real depth images using mask R-CNN trained on synthetic data. In: 2019 International Conference on Robotics and Automation (ICRA), pp. 7283–7290. IEEE (2019) 11. Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., Abbeel, P.: Domain Randomization for transferring deep neural networks from simulation to the real world. In: International Conference on Intelligent Robots and Systems–IROS, pp. 23–30. IEEE/RSJ (2017) 12. Andulkar, M., Hodapp, J., Reichling, T., Reichenbach, M., Berger, U.: Training CNNs from synthetic data for part handling in industrial environments. In: 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), pp. 624–629. IEEE (2018)


13. De Melo, M.S.P., da Silva Neto, J.G., Da Silva, P.J.L., Teixeira, J.M.X.N., Teichrieb, V.: Analysis and comparison of robotics 3D simulators. In: 21st Symposium on Virtual and Augmented Reality–SVR, pp. 242–251. IEEE/Brazil (2019) 14. Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A., Koltun, V.: CARLA: an open urban driving simulator. In: Proceedings of the 1st Annual Conference on Robot Learning, pp. 1–16 (2017) 15. Makoviychuk, V., Wawrzyniak, L., Guo, Y., Lu, M., Storeym, K., Macklin, M., Hoeller, D., Rudin, N., Allshire, A., Handa, A., et al.: Isaac gym: high performance GPU-based physics simulation for robot learning (2021) 16. Macenski, S., Foote, T., Gerkey, B., Lalancette, C., Woodall, W.: Robot operating system 2: design, architecture, and uses in the wild. Sci. Robot. 7(66), eabm6074 (2022) 17. Macenski, S., Singh, S., Martin, F., Gines, J.: Regulated pure pursuit for robot path tracking. Auton. Robot. (2023)

Robotics and Neurotechnologies for Healthcare Improvements

FPGA-Based Emulation of a Muscle Stretch Reflex on an Electric Series Elastic Actuator Oleksandr Sivak(B) , Patrick Vonwirth, and Karsten Berns Department of Computer Science, RPTU Kaiserslautern-Landau, 67663 Kaiserslautern, Germany {sivak,patrick.vonwirth,karsten.berns}@rptu.de https://agrosy.cs.uni-kl.de

Abstract. Spinal reflexes play an important role in maintaining balance during human dynamic locomotion because they provide the fastest reaction to unexpected external mechanical impact [6]. Implementing spinal reflex behavior in bipedal robotic systems can increase their stability and movement speed, which is an essential problem for this type of robot. The latency of the reflex response is one of the important factors that define the delay of mechanical stabilization, which influences the final system stabilization during movement. This article shows one possible implementation of the emulation of low-latency natural muscle behavior using the Field-Programmable Gate Array (FPGA) hardware platform.

Keywords: Stretch reflex · Artificial muscles · Series elastic actuator · Humanoid robot · FPGA

1 Introduction

Control of the motion in human- or animal-like robots is a complex task. A full mathematical formulation of a precise mechanical model that contains all of the physical details about the system requires much effort in its implementation. Moreover, since every moving body part contributes to the overall dynamic behavior, motion planners are required to traverse a broad search space, resulting in very long solve times. This problem can be solved by creating mathematically simple models that can be calculated by contemporary computers within real-time constraints. A bio-inspired approach gives us the possibility to build a relatively simple model that uses a premade emulation of natural segmental-level (spinal) muscle behavior on the lower level [2,6]. Natural segmental-level muscle behavior consists of three reflexes: the stretch reflex, the inhibition reflex, and the inverse stretch reflex. “The stretch reflex or myotatic reflex refers to the contraction of a muscle in response to its passive stretching by increasing its contractility as long as the stretch is within physiological limits.” [1]. According to this definition, the stretch reflex causes a contracting


force impulse in a muscle after its stretching [10]. The main purpose of the reflex in biology is to decrease the delay of the response to sudden changes in a limb's position. From the system point of view, the myotatic reflex provides a fast feedback loop that activates when the system is affected by external intensive movement perturbations. When the reflex is triggered it generates a response. This response could activate the stretch reflex in the antagonistic muscle. The inhibition reflex prevents this activation, i.e., the inhibition reflex makes the antagonistic muscle inactive. These two reflexes function simultaneously with the Golgi reflex (also called the inverse stretch reflex), whose purpose is to prevent muscle damage. The Golgi reflex works in the opposite direction to the stretch reflex; in other words, it partially deactivates the muscle when it is over-tensioned [8]. Control strategies that consider the reflex muscle behavior have been implemented on shape memory alloy artificial muscles [4] and on different types of series elastic actuators (SEA) [2,5,6]. The objective of this research is to implement the stretch reflex, inhibition reflex, and deep tendon reflex behaviors into the proportional closed-loop controller of the electrical RRLAB SEA system, as described in Ref. [13]. It is important to achieve a reflex response latency in the robot system that is equal to or faster than that of a human. This requires optimizing the sensory input processing, control algorithms, and system architecture to minimize the time between stimulus detection and motor response.

2 Hardware

The spinal reflex behavior is implemented and executed on the electrical RRLab SEA [13] (Fig. 1), which is used as a muscle-emulating actuation system in the compliant robotic leg CARL [9,11]. Every actuator mimics the combined control of a pair of antagonistic muscles [9]. The muscular control offers two control properties per actuator: the summed forces and the damping characteristics of both antagonistic muscles. The control scheme developed by Vonwirth et al. [13] is used as a basis for the development and integration of the muscular stretch reflex introduced in the paper at hand. The hardware diagram shown in Fig. 2 illustrates the system setup. The actuator motor is driven by the Elmo Gold Twitter high-power-density servo driver (https://www.elmomc.com/product/gold-twitter/), which provides efficient power transmission. To control the driver, an FPGA board based on the Intel (Altera) Cyclone IV FPGA (https://www.intel.com/content/www/us/en/products/sku/210465/cyclone-ivep4ce55-fpga/specifications.html) is utilized. Multiple Cyclone IV FPGA boards are interconnected through the Ethernet interface, allowing them to be managed by a personal computer (PC) via the User Datagram Protocol/Internet Protocol (UDP-IP). This architecture provides a robust and flexible control system for the actuator motor.


Fig. 1. Sketch of the whole muscular control system of a single actuation unit as originally published in [13]. From top to bottom, the mechanical construction of the RRLab SEA, the respective physical drive-train components, and the digital control computation scheme are shown [12].

Fig. 2. The general control diagram of the muscle behavior emulation.

The experimental setup based on the RRLab SEA (Fig. 3) was built to test the behavior of the system. One side of the actuator is fixed to a metal frame, while the other side is connected to a carriage that moves freely along two linear shafts. The system is equipped with an encoder that measures the actuator's position. The experimental setup serves as a platform for testing the control algorithm, reflex behaviors, and overall system functionality.

3 The Control System

Initially, the control system was based on a closed-loop proportional control approach (1) [12]. The system inputs are the control force Fc and the control damping coefficient Dc. The applied force Fa is the output of the controller; it is proportional to the force generated by the actuator. If the control force Fc


is applied to a non-moving actuator, it accelerates and tends toward a particular velocity. The absolute value of this velocity will be slightly less than $|F_c / D_c|$ due to mechanical friction in the actuator.

Fig. 3. The experimental setup for testing reflex behavior on CARL SEA.

$$F_a(t) = F_c(t) - \frac{dq(t)}{dt} D_c \qquad (1)$$

The system feedback loop was modified to take the reflex behavior R into account (2). It compares the absolute value of the reflex module output $|R|$ with $\left|F_c(t) - \frac{dq(t)}{dt} D_c\right|$ and takes the larger of the two as the output signal at any given time. In simpler terms, when a significant reflex is activated, the initial damping feedback loop has no effect on the system.

$$F_a(t) = \begin{cases} \max\!\left(F_c(t) - \frac{dq(t)}{dt} D_c,\; R\right) & \text{if } F_c(t) - \frac{dq(t)}{dt} D_c > 0 \text{ and } R > 0 \\ \min\!\left(F_c(t) - \frac{dq(t)}{dt} D_c,\; R\right) & \text{if } F_c(t) - \frac{dq(t)}{dt} D_c < 0 \text{ and } R < 0 \\ F_c(t) - \frac{dq(t)}{dt} D_c + R & \text{otherwise} \end{cases} \qquad (2)$$

The modified feedback loop is implemented on the FPGA in the VHDL language. The diagram of the FPGA peripheral part is shown in Fig. 4. There are two control inputs, a sensor input, an output, and other modules that provide the functionality of the FPGA peripheral part. The control inputs are connected to the Ethernet controller using the Avalon memory-mapped interface of the NIOS II processor. This connection is used to receive the control parameters and encoder data from a PC through Ethernet at an average rate of 1 kHz. The control damping multiplied by the time derivative of the EMA-filtered encoder signal forms a part of the modified feedback loop. The resulting signal is transmitted to the adder.
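The arbitration in Eq. (2) between the damping feedback term and the reflex output can be illustrated with a small floating-point sketch. The real implementation is fixed-point VHDL on the FPGA; variable names here are illustrative.

```python
def applied_force(F_c, dq_dt, D_c, R):
    """Select the applied force F_a according to Eq. (2)."""
    damping_term = F_c - dq_dt * D_c       # original proportional/damping loop
    if damping_term > 0 and R > 0:
        return max(damping_term, R)        # both act in the positive direction
    if damping_term < 0 and R < 0:
        return min(damping_term, R)        # both act in the negative direction
    return damping_term + R                # otherwise the contributions are summed
```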


Fig. 4. The FPGA peripheral part data flow diagram:
– Control force is a controlled force parameter input (orange).
– Control damping is a controlled damping parameter input (orange).
– Motor position is an encoder signal input (yellow).
– EMA filter is an exponential moving average filter module.
– dfdt(t) is a time derivative module combined with an embedded EMA filter at the output.
– Reflex module generates the reflex output when it is required.
– Limiter module limits the signal.
– PWM is a pulse-width modulation module.
– Motor driver is the system output (green).

The signal from the control force input travels along two paths. The first path corresponds to the transmission of the force input to the adder module. On the second path, the force input is processed by an EMA filter and a derivative module, and the resulting control force derivative is transmitted to the reflex module. The encoder input is processed by an exponential moving average (EMA) filter and two derivative modules and is also transmitted to the reflex module. The reflex module forms a reflex response according to the module inputs. The outputs of the reflex module and the adder module are then transmitted to the comparator module, which compares the absolute values of the two input signals and outputs the signal whose absolute value is larger. Next, the limiter module processes the signal. Finally, the PWM generator translates the signal into a PWM signal and outputs it to the motor driver.
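The EMA-filter/derivative chain on the encoder and control-force paths can be sketched as follows. The filter coefficient and sample time are illustrative assumptions; the actual design is a fixed-point VHDL pipeline.

```python
class EmaFilter:
    """Exponential moving average: y[k] = a*x[k] + (1 - a)*y[k-1]."""
    def __init__(self, alpha):
        self.alpha, self.y = alpha, 0.0

    def step(self, x):
        self.y = self.alpha * x + (1.0 - self.alpha) * self.y
        return self.y

class Derivative:
    """Finite-difference derivative with an embedded EMA filter on its output."""
    def __init__(self, dt, alpha):
        self.dt, self.prev = dt, 0.0
        self.out = EmaFilter(alpha)

    def step(self, x):
        d = (x - self.prev) / self.dt
        self.prev = x
        return self.out.step(d)

# Encoder path of Fig. 4: EMA filter -> d/dt -> multiplication by the control damping.
```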

4 The Reflex Behavior Implementation

The function of the reflex module, whose data flow is shown in Fig. 5, is to activate the reflex response only when the actuator is affected by significant external forces. The inputs of the module are the second derivative of the encoder data q, in other words the acceleration signal, and the first derivative of the control force


Fig. 5. The reflex module data flow diagram:
– The Decreasing rate modifying filter (DMF) module slows down the decreasing rate of the absolute value of the input signal, i.e., the filter adds a tail to the peaks of the input signal.
– The Inhibition function module calculates the level of suppression for the initial acceleration signal.
– The Shift R module bit-wise shifts right the value of the acceleration signal according to the level of suppression.
– The Threshold function module activates the next block when the absolute value of the input is higher than a specific value.
– The Peak finder module returns the next local maximum or minimum of the input signal and activates the next block.
– The Reflex generator module generates a reflex response proportional to the input value, shaped as a bell curve.

value Fc. At the start, the system applies the Decreasing Rate Modifying Filter (DMF) to the first derivative of the control force value using Eq. (3). This filter slows down the decreasing rate of the absolute value of the input signal. The coefficient E determines the extent of the slowdown in the decreasing rate.

$$\mathrm{DMF}\!\left(\frac{dF_c}{dt}\right)\bigg|_t = \begin{cases} \mathrm{DMF}\!\left(\frac{dF_c}{dt}\right)\bigg|_{t-\Delta t} - \mathrm{shift_R}\!\left(\mathrm{DMF}\!\left(\frac{dF_c}{dt}\right)\bigg|_{t-\Delta t} - \frac{dF_c}{dt}\bigg|_t,\; E\right) & \text{if } \left(\frac{dF_c}{dt}\big|_t < \mathrm{DMF}\!\left(\frac{dF_c}{dt}\right)\big|_{t-\Delta t} \text{ and } \frac{dF_c}{dt}\big|_t > 0\right) \text{ or } \left(\frac{dF_c}{dt}\big|_t > \mathrm{DMF}\!\left(\frac{dF_c}{dt}\right)\big|_{t-\Delta t} \text{ and } \frac{dF_c}{dt}\big|_t < 0\right) \\ \frac{dF_c}{dt}\bigg|_t & \text{otherwise} \end{cases} \qquad (3)$$

The DMF output drives the inhibition function, which suppresses the acceleration signal (via a bit-wise right shift) whenever the control force is changing; the suppressed signal is the inhibited acceleration a_inh(t). The threshold function then activates the subsequent modules only when the absolute value of the inhibited acceleration exceeds a threshold U_th:

$$\theta(t) = \begin{cases} 1 & \text{if } |a_{inh}(t)| > U_{th} \\ 0 & \text{if } |a_{inh}(t)| \le U_{th} \end{cases} \qquad (6)$$

The reflex, as mentioned in [14], is directly proportional to the muscular perturbation. In our system, the actuator functions as the muscle, and the perturbation represents a local extremum (maximum or minimum) of the actuator acceleration. However, since the signal contains a significant amount of noise, it is not feasible to consider every local extremum. To identify the true local extrema of the signal, a criterion is applied (7). This criterion states that the absolute values of the subsequent N points following a true extremum should be lower than the absolute value of that extremum. This criterion helps in filtering out noise and identifying the genuine local extrema of the signal. Increasing the value of N enhances the likelihood of finding the actual local maximum or minimum of the signal due to the stochastic nature of noise. However, a higher N also increases the delay of the reflex response.

$$a_{ex} = \frac{d^2q}{dt^2}\bigg|_{t_{ex}} \quad \text{if} \quad \forall n \in [1, N],\, n \in \mathbb{N}: \;\left|\frac{d^2q}{dt^2}\bigg|_{t_{ex}}\right| > \left|\frac{d^2q}{dt^2}\bigg|_{t_{ex}+n\Delta t}\right| \qquad (7)$$

Then, the system generates a reflex response proportional to aex using a simplified bell curve and outputs it to the “Comparator” module (Fig. 4).
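A minimal sketch of the extremum criterion of Eq. (7) and the bell-curve response generation is given below. N, the curve length, and the gain are illustrative values, not the FPGA parameters.

```python
import numpy as np

def is_true_extremum(acc, idx, N=5):
    """Eq. (7): acc[idx] is a genuine extremum if the next N samples
    all have a smaller absolute value."""
    window = acc[idx + 1: idx + 1 + N]
    return len(window) == N and np.all(np.abs(window) < abs(acc[idx]))

def reflex_response(a_ex, length=50, gain=1.0):
    """Simplified bell-curve reflex, scaled by the extremum amplitude a_ex."""
    t = np.linspace(-3.0, 3.0, length)
    return gain * a_ex * np.exp(-t ** 2)
```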

5 Experiment: Fixed Barrier Strike

The experiment is shown in Fig. 6. A force is applied to the fixed actuator in one direction, then in the opposite direction, and the actuator then hits a stationary barrier. According to the experiment design, the actuator should produce a reflex response when it hits the barrier, whereas it should not generate a reflex response when intensive acceleration is caused only by changes in the control parameters.


Fig. 6. Experiment Fixed barrier strike: a—start moving in a left direction. b—flip movement direction. c—hit a fixed barrier.

Control force is the signal that the system receives from the PC output. The control force signal of the system is shown in Fig. 7. There are three time points in the figure. The blue dashed line corresponds to the start of the movement, when the control force is applied in the left direction (Fig. 6a). The purple dashed line corresponds to the flip of the movement direction, when the control force reverses its direction (Fig. 6b). The red dashed line corresponds to the moment the actuator hits the fixed barrier. The system continues applying the control force during the hit and after it (Fig. 6c).

Fig. 7. Experiment Fixed barrier strike: the control force signal (blue curve) with respect to the motor encoder signal (yellow curve).

The experiment was performed on the described setup. The motor encoder signal (yellow curve) and the control force (blue curve) were recorded (Fig. 7). One can see that at the first and second time points (blue and purple dashed lines), the actuator starts intensive movement slightly after the force application. Because the actuator has inertia, it needs some time to accelerate after the force is applied (Fig. 7). At the moment of hitting the barrier (red dashed line), the actuator continues its movement. This effect is caused by a damping spring in the actuator's construction. As a result, the actuator continues its movement and then wobbles around the equilibrium point. This wobbling is visible in the 50 ms following the hitting time point (red dashed line).


The reflex response (red curve) is shown with respect to the encoder signal (yellow curve) and the acceleration signal (blue curve) in Fig. 8. One can see that the system generates the reflex only after the actuator hits the barrier, but it does not generate a reflex response after the system flips the direction of movement. This happens because the inhibition function suppresses the acceleration signal when the control force value changes. As a result, the inhibited acceleration signal is close to zero after the time points at which the control force changes (Fig. 8, green curve). After the contact with the barrier (red dashed line), the system does not suppress the acceleration signal, and the inhibited acceleration signal remains the same as the initial acceleration signal. In this case, the inhibited acceleration signal exceeds the threshold. This activates the Reflex generator module (Fig. 5), which generates a reflex response proportional to the input value, shaped as a simplified bell curve (Fig. 8, red curve).

Fig. 8. Experiment Fixed barrier strike: The reflex response signal (red curve) with respect to the acceleration signal (blue curve), inhibited acceleration signal (green curve), and the motor encoder signal (yellow curve). The vertical scale is only for the acceleration signal. The motor encoder and reflex response signals are scaled to show all signals in the same graph.

One of the most important properties of the reflex feedback loop is the delay between the application of the external force and the reflex response. According to the literature, the delay of a human reflex response is in the range of 50–200 ms [3,7]. The reflex response delay in our implementation is 16 ms, measured as the time between the barrier hit (red dashed line) and the first point of the reflex response (black dashed line). The FPGA implementation's reflex delay is thus more than three times shorter than a typical human reflex delay. This means that our system can provide the reflex response within the delay that is required for this behavior in nature.

6 Experiment: Hammer Strike

This experiment is shown in Fig. 9. The fixed actuator initially does not move. The hammer then hits the actuator (Fig. 9a). The resulting significant actuator acceleration emulates the muscle stretch in this experiment, and the system generates a reflex response (Fig. 9b).

Fig. 9. Experiment Hammer strike: a—hammer strike. b—actuator reflex response.

This experiment is similar to a medical reflex diagnostic. The only difference is that in a medical reflex diagnostic the hammer activates only the neural sensors responsible for muscle stretching and does not dramatically stretch the muscle, whereas in this experiment the hammer intensively accelerates the actuator; in other words, it stretches our emulated muscle. The system should therefore generate a reflex response if the hammer strike is sufficiently strong and no reflex response if the hammer strike is relatively weak. Since there is no change in the control parameters and the system does not move, it is sufficient to observe only the applied force, the limited applied force, and the motor position. This experiment checks the system without the influence of any internal forces. The control force parameter is set to zero and the control damping is set to a low value. Figure 10 shows the resulting force signal. The actuator started moving after it was hit with the hammer (Fig. 10, red dashed line). The damping feedback loop then counteracts the actuator movement; higher control damping parameters make this counteraction more intensive. This behavior corresponds to the first (small) peak; in the case of zero damping, this small peak is not generated. At that moment, the system behaves according to the damping feedback loop. At the next point in time, the reflex module starts to generate a reflex response shaped as a simplified bell curve (the big peak). This reflex response is directed against the hammer strike direction. One can see that the behavior of the system is similar to that of a human limb muscle in the case of a reflex response [3,7].

7 Conclusion and Outlook

A concept of a bioinspired stretch reflex behavior has been developed for the control of the compliant robotic leg (CARL) RRLab SEA. This concept incorporates a reflex feedback loop that is dependent on the actuator acceleration and control force parameter. When the actuator experiences acceleration due to external forces, it triggers the reflex response.


Fig. 10. Experiment Hammer strike: the result force signal before limitation.

The implementation of the stretch reflex feedback loop, based on this concept, has been integrated into the control system of the RRLab SEA using FPGA technology. Two experiments were conducted to evaluate and validate the reflex behavior in practice. In the first experiment, one side of the fixed actuator was subjected to controlled acceleration until it collided with a fixed barrier. This experiment tested the stretch reflex behavior when the actuator experienced intense deceleration caused by significant external forces. It also examined the reflex behavior when the actuator accelerated due to internal forces, specifically control parameters. The second experiment resembled a medical human reflex diagnostic, with the exception that the reflex hammer only activated neural sensors responsible for muscle stretching and did not excessively stretch the muscle. In this experiment, the fixed actuator remained stationary while the reflex hammer struck it, triggering a reflex response. The results demonstrated that after the hammer impact, the actuator moved backward and came to a stop, which corresponded to the stretch reflex behavior observed in medical human reflex diagnostics. Delay analysis revealed that the FPGA-implemented reflex behavior exhibited a delay of 16 ms, which is significantly lower than the natural human muscle reflex latency of 50–200 ms [3,7]. The next step in the research will involve integrating this behavior into the SEAs of the CARL robot and testing it on the entire bipedal system.


References

1. Bhattacharyya, K.: The stretch reflex and the contributions of C David Marsden. Ann. Indian Acad. Neurol. 20, 1 (2017)
2. Cao, Y., Xiang, K., Tang, B., Ju, Z., Pang, M.: Design of muscle reflex control for human upright standing push-recovery based on series elastic actuator. In: 2019 IEEE 9th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), pp. 192–196 (2019)
3. Cochrane, D.J., Stannard, S.R., Firth, E.C., Rittweger, J.: Acute whole-body vibration elicits post-activation potentiation. Eur. J. Appl. Physiol. 108(2), 311–319 (2010). https://doi.org/10.1007/s00421-009-1215-2
4. Kratz, R., Klug, S., Stelzer, M., von Stryk, O.: Biologically inspired reflex based stabilization control of a humanoid robot with artificial SMA muscles. In: 2006 IEEE International Conference on Robotics and Biomimetics, pp. 1089–1094 (2006)
5. Li, M., Pang, M., Xiang, K., Ju, Z., Zhou, S.: Design and reflex control for a series elastic actuator based ankle joint emulator. In: 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM), pp. 818–823 (2018)
6. Liu, X., Rosendo, A., Ikemoto, S., Shimizu, M., Hosoda, K.: Robotic investigation on effect of stretch reflex and crossed inhibitory response on bipedal hopping. J. R. Soc. Interface 15(140), 20180024 (2018). https://royalsocietypublishing.org/doi/abs/10.1098/rsif.2018.0024
7. Mathew, A., Nundy, M., Chandrashekaran, N., Oommen, V.: Wrestle while you learn: EMG as a teaching tool for undergraduate skeletal muscle physiology teaching. Adv. Physiol. Educ. 43, 467–471 (2019)
8. Miller, K.C., Burne, J.A.: Golgi tendon organ reflex inhibition following manually applied acute static stretching. J. Sports Sci. 32(15), 1491–1497 (2014). https://doi.org/10.1080/02640414.2014.899708. PMID: 24716521
9. Schütz, S.: CARL - a compliant robotic leg designed for human-like bipedal locomotion. Ph.D. Thesis, Kaiserslautern, Germany (2019)
10. Shibasaki, H., Hallett, M.: Tendon reflexes and pathologic reflexes. In: The Neurologic Examination: Scientific Basis for Clinical Diagnosis. Oxford University Press (2022). https://doi.org/10.1093/med/9780197556306.003.0018
11. Vonwirth, P., Berns, K.: Bio-inspired imprecise impedance control of muscle-driven robotic limbs. In: Cascalho, J.M., Tokhi, M.O., Silva, M.F., Mendes, A., Goher, K., Funk, M. (eds.) Robotics in Natural Settings, CLAWAR 2022. Lecture Notes in Networks and Systems, vol. 530, pp. 42–53. Springer International Publishing, Ponta Delgada, Portugal (2023)
12. Vonwirth, P., Berns, K.: Bio-inspired imprecise impedance control of muscle-driven robotic limbs. In: Cascalho, J.M., Tokhi, M.O., Silva, M.F., Mendes, A., Goher, K., Funk, M. (eds.) Robotics in Natural Settings, pp. 42–53. Springer International Publishing, Cham (2023)
13. Vonwirth, P., Berns, K.: Muscular damping distribution strategy for bio-inspired, soft motion control at variable precision. Sensors 23(5) (2023). https://www.mdpi.com/1424-8220/23/5/2428
14. Yavuz, U., Mrachacz-Kersting, N., Sebik, O., Unver, M., Farina, D., Türker, K.: Human stretch reflex pathways reexamined. J. Neurophysiol. 111 (2013)

Neural Multimodal Control for Versatile Motion Generation and Continuous Transitions of a Lower-Limb Exoskeleton Chaicharn Akkawutvanich1 , Natchaya Sricom1 , and Poramate Manoonpong1,2(B) 1

Vidyasirimedhi Institute of Science and Technology (VISTEC), Rayong, Thailand 2 SDU Biorobotics, University of Southern Denmark, Odense, Denmark {Chaicharn.a s17,natchayas pro,poramate.m}@vistec.ac.th

Abstract. The paper proposes a neural multimodal control for versatile motion generation and continuous transitions between activities of a mobile lower-limb exoskeleton. The control method combines neural rhythmic oscillators acting as central pattern generators (CPGs) for basic rhythmic pattern generation with shaping networks based on radial basis function (RBF) networks to encode different demonstrated patterns, e.g., sit-to-stand, climbing up the stairs, and walking on a level floor and a treadmill. Additionally, single recurrent neurons are used as low-pass filters to ensure smoothness. This approach is verified on a real exoskeleton in both static and dynamic situations. The experimental results show successful assistance with frequency adaptation to deal with different movement speeds.

Keywords: Adaptive neural network · Central pattern generator · Exoskeleton · Multimodal control

1 Introduction

A wearable assistive device such as an exoskeleton has gradually been applied to rehabilitate many patients, together with other gait training devices such as a treadmill or assisted lifting equipment, in different countries in the past decades. Extending the application to cope with individual everyday life requires the capability of the device to move in different environments or perform different activities, e.g., sit-to-stand, climbing stairs, or walking on flat terrain at different speeds. Multimodal motion generation is required for adapting to these various activities. To tackle this challenging problem, several control strategies have been developed [1]. In a typical process, input from a user (high-level control) is commonly used as a command for operation mode selection. It then initiates a pattern generation (mid-level control) to drive the device by position or torque control (low-level control). Much research has focused on the high-level stage to classify activities by exploring methods for human motion intention detection. In [2], they facilitated the online classification of multimodal sensor data acquired from


3D-force sensors and inertial measurement units (IMU) using Hidden Markov Models (HMMs). Many researchers have relied on surface Electromyography (sEMG) signals and Electroencephalography (EEG) signals. In [3], they developed a technique to collect, process, and classify multimodal signals from a foot motor imagery (MI)-based brain-machine interface (BMI) and multichannel EMG signals from leg muscles using Linear Discriminant Analysis (LDA). In [4], they deployed a K-Nearest Neighbours classifier (KNN) to predict movements from sEMG and pressure sensors. In [5], they proposed a multi-layer convolutional neural network (CNN) together with a Dense Co-Attention Symmetric Network (DCAN) to extract features and fuse sEMG with EEG data in order to predict standing, sitting, and walking positions. However, those methods require extensive offline data processing and may be of limited practical use due to the sensitivity of sEMG and EEG electrode placement. Other practical approaches utilize more direct commands from a human (high-level) mixed with machine decisions (mid-level). In [6,7], they distinguished between automatic (or robot-dominant), semi-automatic (assist-as-needed), and manual modes. The command control system does not require significant concentration of the user, as is the case with sEMG or EEG. However, its difficulty is adapting to different user activities (stairs, sitting, standing, etc.). Efficient predefined patterns for different activities (mid-level) are one of the keys to compensating for this drawback when the main adaptation decision belongs to the human (high-level). Frequently used methods include model-based approaches such as the inverted pendulum model [8] and model-free approaches such as adaptive oscillators [9]. Our approach also targets this mid-level control, but from a distinct perspective. It is a learning-from-demonstration (LfD) approach where the knowledge from humans is learned and transferred to the exoskeleton for operation. We allow memorization of motions on different terrains under the influence of humans. Then, we replicate those motions with certain adaptability (online speed changes) in the exoskeleton-dominant mode under human supervision to switch between patterns. The possibility of our current control framework to learn distinctive patterns from individual users has been previously shown in [10]. However, it treated different classes of motion separately. This work extends the concept to allow connectedness and smoothness of the command signals within one single, but versatile, neural control structure. Specifically, we propose here a neural multimodal control of a mobile lower-limb exoskeleton. The control can (i) generate multiple versatile patterns for multiple activities, (ii) adapt the locomotion frequency online, and (iii) offer smooth transitions between activities. It allows a user to perform continuous activities as well as smooth and continuous activity transitions (Fig. 1).

2 Materials and Methods

Our multimodal exoskeleton system in this study comprises the exoskeleton hardware and the neural multimodal control software, as described in the following.


Fig. 1. A lower-limb exoskeleton performs different activities continuously: sit-to-stand, climbing up the stairs, walking on a level floor (static situation), and walking on a treadmill (dynamic situation). Location: VISTEC-GAIT, VISTEC, Rayong, Thailand.

2.1 Mobilized Lower-Limb Exoskeleton

In this study, we deploy our control algorithm on a mobilized lower-limb exoskeleton called “Exo-H3” developed by Technaid, Spain. The original exoskeleton is used together with our developed wireless-based computing unit (Fig. 2).

Fig. 2. Our hardware architecture with wireless communication. It is based on a high-level, mid-level, and low-level control concept.

The main structure of the exoskeleton utilizes coated aluminum alloy bars with an adjustable mechanism to cope with different leg lengths. For each shank and thigh bar, there are also four exchangeable bars with different lengths, enabling the exoskeleton to adapt to individual user heights. The exoskeleton can support a user weight of up to 100 kg. This exoskeleton is designed to support a user mainly in the sagittal plane, so an assisting structure such as crutches is required to balance the user in the frontal plane. There are only six motors (six degrees of freedom) as active joints. Each motor can measure a joint angle, a


joint interaction torque, and a joint current which is proportional to the control torque. The right and left insoles are equipped with force sensors at the heel and toe areas. The ex-factory exoskeleton provides only a low-level microcontroller (MC). A Controller Area Network (CAN) bus is used as the main communication channel to interface between the MC and a mobile computing unit (Intel® NUC7i7BNH), where our neural control is implemented. The update frequency between the motors and the MC occurs at a fixed rate of around 1 kHz, but the one between the MC and the neural control is around 100 Hz. To facilitate the usage of the exoskeleton and integrate it into our automated gait lab (VISTEC-GAIT, Fig. 1), other high-level components are developed. They can monitor all command and feedback signals and support recording and selecting activities. The mobile computing unit on the exoskeleton is used to host our mid-level neural control and to connect to high-level control and monitoring software based on the Robot Operating System (ROS). This control loop (mid-level, Fig. 2) is designed to run on a private local network, where a (high-level) control computer using Linux as its operating system can send/receive signals to the exoskeleton system wirelessly through a router. This local network scheme can easily connect to a wider network (e.g., the cloud) in order to, in this case, store recorded data or stream real-time data via a protocol such as Message Queuing Telemetry Transport (MQTT).
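A minimal sketch of a 100 Hz mid-level publisher on the local ROS 2 network is given below. The topic name and message type are placeholders for illustration, not the actual Exo-H3 interface.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float64MultiArray

class MidLevelCommander(Node):
    """Publishes joint commands at roughly the 100 Hz mid-level control rate."""
    def __init__(self):
        super().__init__('mid_level_commander')
        self.pub = self.create_publisher(Float64MultiArray, 'joint_position_cmd', 10)
        self.timer = self.create_timer(0.01, self.tick)   # ~100 Hz loop

    def tick(self):
        msg = Float64MultiArray()
        msg.data = [0.0] * 6                              # six active joints
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(MidLevelCommander())

if __name__ == '__main__':
    main()
```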

2.2 Neural Multimodal Control

In this section, our developed mid-level control algorithm based on a neural network concept is presented. In Fig. 3, there are three main components: (i) the zero-torque module, (ii) the neural multimodal control module (decoupled-CPG based control with synaptic plasticity and adaptive CPG post-processing networks), and (iii) the low-level control module. In order to allow the control to know the external movement patterns of individuals for basic activities such as walking in different situations, we design our learning method based on the learning-from-demonstration concept. The (i) module is responsible in this case. The main mechanism for learning the demonstrated pattern is basically a direct combination of the frequency and the shape of the motor joint angle signals. The CPG-based rhythmic generators with a decoupled minimal structure are used to separately learn online the swing frequency of each leg from its hip angle. Then, the output signals, embedded with the correct frequency information, can be properly translated into the desired hip, knee, or ankle joint patterns by the post-processing modules (ii). The motor signals created from the aforementioned modules are then transmitted to drive the exoskeleton motors via the low-level control module (iii).

Zero-Torque Module ((i) in Fig. 3) The final torque at each joint is described as

$$\tau_F = \tau_H + \tau_G = \tau_H + (\tau_M - \tau_{res}), \qquad (1)$$

where τH is the torque from the human force and τG is the driving torque that creates the gait, resulting from the difference between the motor torque τM and the


Fig. 3. The block diagram of our control mechanism. It comprises three main modules: (i) zero-torque control, (ii) neural multimodal control, and (iii) low-level control. Our neural multimodal control module operates separately for the right and the left legs. It includes decoupled-CPG based control with synaptic plasticity (H0, H1, and H2 neurons) and an adaptive CPG post-processing network that is derived from an RBF network and single recurrent neurons (SRNs; for hip, knee, and ankle). The CPG can detect feedback (θRH; green) and adapt the command signals to an appropriate rhythm for each user activity. The zero-torque control module is applied to facilitate the RBF network in learning the target pattern via the right/left hip, knee, and ankle joint angles, which are encoded as RBF weights (ωj). The SRNs, acting as low-pass filters, are used for smoothing the learned pattern or learned joint angles.

internal friction torque τres. During the zero-torque mode, we control τM to cancel the frictional torque τres, so the final torque acting at each joint comes from the human. This module is first used to facilitate the recording of the targets (θRH,target, θRK,target, θRA,target, θLH,target, θLK,target, θLA,target; blue color in Fig. 3) of human motions under different activities (e.g., walking, stairs climbing, sit-to-stand).

Neural Multimodal Control ((ii) in Fig. 3) The first part, decoupled-CPG based control with synaptic plasticity, is based on a bio-inspired concept to create neural rhythmic generators. They are derived from decoupled neural oscillator networks with Hebbian-based synaptic plasticity [11]. Each network (or rhythmic generator) has three neurons H0, H1, and H2 acting as a central pattern generator (CPG) that can dynamically adapt the coupling strength according to the sensory feedback (ω2F). Their outputs are governed by the following equations:

$$\mathbf{H}_{act} = \mathbf{W}\mathbf{H}_{out} + \mathbf{F} + \mathbf{B}, \qquad (2a)$$
$$\mathbf{H}_{out} = \tanh(\mathbf{H}_{act}), \qquad (2b)$$

where

$$\mathbf{H}_{act} = \begin{bmatrix} H_{0,act}(t+1) \\ H_{1,act}(t+1) \\ H_{2,act}(t+1) \end{bmatrix},\;
\mathbf{W} = \begin{bmatrix} \omega_{00}(t) & \omega_{01}(t) & \omega_{02}(t) \\ \omega_{10}(t) & \omega_{11}(t) & 0 \\ \omega_{20}(t) & 0 & 0 \end{bmatrix},\;
\mathbf{F} = \begin{bmatrix} 0 \\ 0 \\ \omega_{2F}(t)F_{input}(t) \end{bmatrix},\;
\mathbf{B} = \begin{bmatrix} B_0(t) \\ B_1(t) \\ B_2(t) \end{bmatrix},\;
\mathbf{H}_{out} = \begin{bmatrix} H_{0,out}(t) \\ H_{1,out}(t) \\ H_{2,out}(t) \end{bmatrix},$$


B0 , B1 , B2 are bias terms, and Finput is the sensory feedback into the network which is the hip joint angle (θRH ) in this case. Their inter-connection weights ω01 , ω10 , ω02 , ω20 and self-connection weights ω00 , ω11 are adapted with respect to the external signal’s frequency according to the following equations: 

$$\begin{bmatrix} \omega_{00}(t) & \omega_{01}(t) \\ \omega_{10}(t) & \omega_{11}(t) \end{bmatrix} = \alpha \begin{bmatrix} \cos(\phi) & \sin(\phi) \\ -\sin(\phi) & \cos(\phi) \end{bmatrix}, \qquad (3)$$

$$\Delta\phi(t) = \mu \cdot \omega_{02}(t) \cdot H_{2,out}(t) \cdot \omega_{01}(t) \cdot H_{1,out}(t), \qquad (4)$$

$$\begin{bmatrix} \Delta\omega_{02}(t) \\ \Delta\omega_{20}(t) \\ \Delta\omega_{2F}(t) \end{bmatrix} = \begin{bmatrix} -H_{0,out}(t)H_{2,out}(t) & -(\omega_{02}(t)-\omega_{02,0}) \\ -H_{0,out}(t)H_{2,out}(t) & -(\omega_{20}(t)-\omega_{20,0}) \\ F_{input}(t)H_{2,out}(t) & -(\omega_{2F}(t)-\omega_{2F,0}) \end{bmatrix} \begin{bmatrix} A \\ B \end{bmatrix}, \qquad (5)$$

where μ is the learning rate in the Hebbian-type learning rule (Eq. 4) that alters the synaptic weights as shown in Eq. 5. ω02,0, ω20,0, and ω2F,0 are the initial weights at t = 0, i.e., predefined relaxation values. A, B > 0 are scaling factors. These equations force the structure to produce quasi-periodic outputs when −π < φ < π and α > 1. When φ > 0 and α = 1 + ε with ε ≪ 1, both outputs are almost sine-shaped with small amplitudes, where H1,out leads H0,out by π/2. The parameter φ indicates the internal oscillation frequency $f_{CPG} = \frac{\phi}{2\pi}$ [Cycle/Iteration]. Based on previous studies [10–12], the parameters are set as shown in Table 1 to operate properly on the exoskeleton system.

Table 1. The initial and meta control parameter values

Parameter     Value    Parameter   Value
B0(0)         0        ω02(0)      1
B1(0)         0        ω20(0)      0
B2(0)         0        ω2F(0)      0.03
H0,out(0)     −0.2     A           1
H1,out(0)     0        B           0.01
H2,out(0)     0.2      α           1.01
φ             0.02π    μ           1.3

This initial value of φ of 0.02π corresponds to an internal frequency fCPG of 0.01, which is f = 1 Hz (Cycle/s) in SI units. The discretization equation relating these frequencies is

$$f\,[\mathrm{Hz}] = f_{CPG}\left[\frac{\mathrm{Cycle}}{\mathrm{Iteration}}\right] \cdot \frac{1}{\mathrm{convNo}}\left[\frac{\mathrm{Iteration}}{\mathrm{s}}\right], \qquad (6)$$

where convNo is the discretization factor [Prog. step/Iteration] × timestep [s/Prog. step]. In this implementation, it is 0.01 (10 × 0.001). Our application uses two CPGs to handle the right and left legs.
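A minimal per-time-step sketch of the adaptive CPG of Eqs. (2)–(5), initialized with the values of Table 1, is given below. This is a plain NumPy transcription for illustration, not the exoskeleton implementation.

```python
import numpy as np

class AdaptiveCPG:
    """Decoupled three-neuron CPG with frequency adaptation (Eqs. 2-5)."""
    def __init__(self, phi=0.02 * np.pi, alpha=1.01, mu=1.3,
                 A=1.0, B=0.01, w02_0=1.0, w20_0=0.0, w2F_0=0.03):
        self.phi, self.alpha, self.mu, self.A, self.B = phi, alpha, mu, A, B
        self.w02, self.w20, self.w2F = w02_0, w20_0, w2F_0
        self.w02_0, self.w20_0, self.w2F_0 = w02_0, w20_0, w2F_0
        self.H = np.array([-0.2, 0.0, 0.2])      # H0, H1, H2 outputs (Table 1)
        self.bias = np.zeros(3)                  # B0, B1, B2 (Table 1)

    def step(self, F_input):
        # Eq. (3): rotation-matrix coupling of H0 and H1.
        w00 = self.alpha * np.cos(self.phi); w01 = self.alpha * np.sin(self.phi)
        W = np.array([[w00,  w01,  self.w02],
                      [-w01, w00,  0.0],
                      [self.w20, 0.0, 0.0]])
        F = np.array([0.0, 0.0, self.w2F * F_input])
        self.H = np.tanh(W @ self.H + F + self.bias)          # Eq. (2)
        H0, H1, H2 = self.H
        self.phi += self.mu * self.w02 * H2 * w01 * H1        # Eq. (4)
        # Eq. (5): Hebbian terms scaled by A, relaxation terms scaled by B.
        self.w02 += self.A * (-H0 * H2) + self.B * (-(self.w02 - self.w02_0))
        self.w20 += self.A * (-H0 * H2) + self.B * (-(self.w20 - self.w20_0))
        self.w2F += self.A * (F_input * H2) + self.B * (-(self.w2F - self.w2F_0))
        return H0, H1
```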


The second part, the adaptive CPG post-processing network, is developed based on a radial basis function (RBF) network for each CPG (each leg). It receives the CPG's outputs H0,out and H1,out as inputs. The RBF kernels are created from a two-variable Gaussian distribution qj(t):

$$q_j(t) = e^{\frac{-\left[(H_{0,out}(t)-\mu_x(j))^2 + (H_{1,out}(t)-\mu_y(j))^2\right]}{2\sigma^2}};\; \forall j = 1, 2, \ldots, k, \qquad (7)$$

where the means μH0(j) and μH1(j) are sampled from H0,out and H1,out, respectively, and σ is set to 0.07. All joints of one leg use the same kernels. At each timestep, the resulting output signal r(t) for each joint is created by a linear combination of the k kernels with a different set of weights for each joint:

$$r(t) = \sum_{j=1}^{k} \omega_j q_j(t). \qquad (8)$$

In matrix form, if the joint target signal has m data points, r(t) can be written as

$$\mathbf{R} = \begin{bmatrix} \omega_1 & \cdots & \omega_k \end{bmatrix} \begin{bmatrix} q_{11} & \cdots & q_{1m} \\ \vdots & \ddots & \vdots \\ q_{k1} & \cdots & q_{km} \end{bmatrix} = \begin{bmatrix} r_1 & \cdots & r_m \end{bmatrix}. \qquad (9)$$

The appropriate weights ωj are obtained from an error-based learning rule:

$$\begin{bmatrix} \Delta\omega_1 \\ \vdots \\ \Delta\omega_k \end{bmatrix} = \zeta \left( \begin{bmatrix} t_1 \\ \vdots \\ t_k \end{bmatrix} - \begin{bmatrix} r_1 \\ \vdots \\ r_k \end{bmatrix} \right), \qquad (10)$$

where ζ is the learning rate and t1, ..., tk are sampling points from the joint target signals received during the zero-torque control ((i), blue color in Fig. 3). We choose ζ = 0.1 and k = 40 to properly generate the resulting joint angles for the position control of all joints. The final part consists of low-pass filters applied to smooth the output signals of the RBF network before sending them to the hip, knee, and ankle joints. We deploy a single linear recurrent neuron as the filter for each joint angle signal:

$$\theta_{out}(t) = \omega_R \cdot \theta_{out}(t-1) + (1-\omega_R) \cdot \omega_I \cdot \theta_{in}(t), \qquad (11)$$

where the input weight ωI = 1, and the recurrent weight ωR = 0.01. Low-Level Control ((iii) in Fig. 3) The neural control commands are sent via CAN bus through this low-level module which allows position control. A basic control such as PID with predefined and fixed gains is also embedded in this module. This feedback control occurs at around 100 Hz. The feedback signals from sensors such as joint position, joint interaction torque, joint motor current, and insole force are read, packed, and also sent out via the same CAN bus for our analysis and evaluation with the same as the control rate.
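The following short Python sketch illustrates the shaping pipeline of Eqs. (7)-(11): kernel evaluation from the CPG outputs, the per-joint weighted sum, the per-kernel weight update, and the single-neuron output filter. The kernel centres and the sampled target/output vectors are placeholders; this is a sketch of the equations, not the authors' implementation.

```python
import numpy as np

sigma, zeta, k = 0.07, 0.1, 40            # values given in the text
w_I, w_R = 1.0, 0.01                      # Eq. (11) filter weights

def rbf_kernels(H0, H1, mu_H0, mu_H1):
    # Eq. (7): two-variable Gaussian kernels q_j(t), one per centre j = 1..k
    return np.exp(-((H0 - mu_H0) ** 2 + (H1 - mu_H1) ** 2) / (2 * sigma ** 2))

def joint_output(w, q):
    # Eq. (8): r(t) = sum_j w_j * q_j(t)
    return np.dot(w, q)

def update_weights(w, t_samples, r_samples):
    # Eq. (10): per-kernel update from the target/output error at k sampling points
    return w + zeta * (t_samples - r_samples)

def low_pass(theta_out_prev, theta_in):
    # Eq. (11): single linear recurrent neuron used as the output filter
    return w_R * theta_out_prev + (1 - w_R) * w_I * theta_in
```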

3 Experimental Results

The experimental protocol for this study was approved by the Research Ethics Committee of the University of Southern Denmark (Case no. 20/70422), allowing the experiment to be performed locally at VISTEC. The experiment was performed on a healthy male subject aged 39 (height 170 cm and weight 70 kg). His hip-to-knee and knee-to-ankle lengths are 48 and 45 cm, respectively. Multiple activities were selected and their patterns were recorded by the zero-torque module. It is possible for the framework to record a single continuous signal of different activities; however, it is better to record each activity separately, so that activities can be chosen and ordered freely. At least three trials were averaged to obtain the final patterns. This module allows different subjects to provide their personal patterns as targets. Fig. 4 shows the four selected activities, i.e., sit-to-stand (SS), climbing up the stairs (SU), walking on a level floor (W; static environment), and walking on a treadmill at a constant speed (W; dynamic environment). These activities can be achieved with at least three distinct patterns, since the walking pattern can be used on both a level floor and a treadmill. The same CPGs and their shaping networks in the neural multimodal control (Fig. 3) are utilized to handle all activities. During the assistive period, manual input from an operator is used to switch between activities. In the sit-to-stand case, the whole signal can be represented by CPG signals with a frequency of around 0.1089 Hz. In the stairs-up case, even though the whole 4-step-stair signal is around 0.1058 Hz, it is better to separate it into four meaningful sub-parts, i.e., the starting part (at the 1st step of the stairs), the middle part (at the 2nd and 3rd steps of the stairs), and the ending part (at the 4th step of the stairs)¹. The starting part comprises the right hip moving up with a CPG frequency of 0.4223 Hz and the straight hip with a CPG frequency of 0.6173 Hz. The middle part resembles the starting part, with CPG frequencies of 0.5464 and 0.4876 Hz. The ending part, with a CPG frequency of 0.6028 Hz, is where the user reaches and stands on the last step. The walking patterns can be classified into the first stride with a CPG frequency of 0.2282 Hz, the middle stride with a CPG frequency of 0.2798 Hz, and the last stride with a CPG frequency of 0.2021 Hz. This walking pattern creates a linear walking speed of around 0.25 m/s, which can be applied both in a static situation (a level floor) and in a dynamic situation (a treadmill running at the corresponding constant speed). We verified our neural multimodal algorithm for motion generation and continuous transitions in Fig. 5. Patterns recorded by zero-torque control were learned and encoded as weights (ωj) of the RBF networks during the learning period. The learned weights for each activity are [ω1, . . . , ω40]SS,n (n is the hip, knee, or ankle of the right or left leg) for the whole sit-to-stand signal, [ω1, . . . , ω40]SU1,n to [ω1, . . . , ω40]SU5,n for each segment/sub-part of the stairs-up pattern, and [ω1, . . . , ω40]W1,n to [ω1, . . . , ω40]W3,n for the first, middle, and last strides.

¹ The sub-parts reveal the proper pattern for the specific situation, e.g., the starting and ending parts are used once, but the middle part can be repeated depending on the number of stair steps. Each part's length should be within the operating period of the CPG signals.


Fig. 4. The selected motions/activities as our target patterns. The four activities, which are sit-to-stand, climbing up the stairs, walking on a level floor, and walking on a treadmill, can be represented by at least three patterns. One walking pattern can be used on both a level floor (static situation) and a treadmill (dynamic situation). Note that the shown pattern of walking on the floor and on the treadmill is the same, since the user walked at the same speed in both cases. The target signals (blue) and the resulting signals generated by our networks (orange) are shown below. The outputs of the right and left leg CPGs at different frequencies, proportional to each sub-part of each target, are also presented.

During the action period, the SS weight set was utilized to generate the sit-to-stand activity online via Eq. 8. Next, the SU weight set was introduced to continuously drive the user up the 4-step stairs. The feedback signals from those two activities showed that the exoskeleton performed and finished them smoothly and on time, as expected from the command signals. Subsequently, the W weight set for walking at a linear speed of 0.25 m/s was able to move the user from the level floor to a treadmill; the same generated walking pattern then assisted the user to walk synchronously on it. The ground reaction forces indicate a clear swing and stance pattern at the stairs and on the treadmill. To demonstrate the flexibility of the algorithm under speed changes, we performed a speed variation experiment as shown in Fig. 6. With the same set of learned weights, we altered the frequency of the CPGs by +10% (the upper row in Fig. 6) or −10% (the lower row in Fig. 6) with respect to the first experiment (the middle row in Fig. 6). All commands were recreated successfully and the resulting graphs show correct behavior for all activities at the adjusted movement speeds.


Fig. 5. The experimental result of our mobile lower-limb exoskeleton performing multiple activities continuously (pink area). The different sets of weights (ωj) related to each activity, in this case sit-to-stand [ω1, . . . , ω40]SS,n (n is hip, knee, or ankle), climbing up stairs [ω1, . . . , ω40]SU1−5,n, walking on a level floor, and walking on a treadmill [ω1, . . . , ω40]W1−3,n, were used to regenerate joint control signals online for each activity. A subject wearing the exoskeleton successfully accomplished the assigned tasks at our VISTEC-GAIT testbed (shown below the graph), where there are force plates to measure ground reaction forces at the stairs and the treadmill. Sit-to-stand took around 9.17 s, the 4-step stairs took around 9.45 s, walking on the level floor took around 17 s (1 stride ∼ 4.21 s), and walking on the treadmill took around 34 s (1 stride ∼ 4.21 s). HS = heel strike, MS = mid stance, and TO = toe-off. RH/LH = right/left hip, RK/LK = right/left knee, and RA/LA = right/left ankle joint angles, respectively. The video can be seen at https://youtu.be/ulNJrOSSVrE.

4 Discussion and Conclusions

We propose a novel adaptive neural multimodal control of a mobile lower-limb exoskeleton for motion generation in various activities. The algorithm extends the concept from our previous work, where individual patterns such as walking or climbing up the stairs were generated separately for each subject without a combination between them. In this study, we have demonstrated that our algorithm can continuously regenerate and smoothly transfer between four basic activities, i.e., sit-to-stand, climbing up the stairs, walking on a level floor, and walking on a treadmill at a constant speed, using three distinct patterns encoded in the form of RBF weights. We divide the target patterns into proper segments so that the rhythmic oscillator (CPG) module can handle them without an extra calculation step as in the previous work.


Fig. 6. The speed variation experiment. Different control signals were introduced to the system with different movement speeds. The middle, upper, and lower rows show normal, +10%, and −10%, respectively. Normal sit-to-stand took 9.17 s (+10%: 8.26 s; −10%: 10.01 s). Normal climbing up stairs took 9.45 s (+10%: 10.49 s; −10%: 8.52 s). Normal walking on a level floor or on a treadmill took 4.21 s (+10%: 4.51 s; −10%: 3.91 s), approximately.

Based on the experimental results, the algorithm achieves stable patterns and can assist a subject to perform the aforementioned activities in a real situation. Online speed adjustment is possible by adapting the CPG frequency through the Hebbian-type learning mechanism with only hip joint angle feedback (Eq. 4) over a certain range without distorting the final signal (at least ±10%, as demonstrated in this study). In the future, validation with multiple users should be conducted. In addition, we will integrate more feedback into our system so that the control can automatically select a pattern for different situations or obstacle negotiation on the move and directly respond to the user's force or intention.

Acknowledgement. We would like to thank Technaid S.L. for technical support. This work was supported by the VISTEC research grant under the EXOVIS project (Grant No. I20POM-INT010 [PM]).

References

1. Baud, R., Manzoori, A.R., Ijspeert, A., Bouri, M.: Review of control strategies for lower-limb exoskeletons to assist gait. J. NeuroEng. Rehabil. 18(1), 119 (2021)
2. Beil, J., Ehrenberger, I., Scherer, C., Mandery, C., Asfour, T.: Human motion classification based on multi-modal sensor data for lower limb exoskeletons. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5431–5436 (2018)
3. Gordleeva, S.Y., Lobov, S.A., Grigorev, N.A., Savosenkov, A.O., Shamshin, M.O., Lukoyanov, M.V., Khoruzhko, M.A., Kazantsev, V.B.: Real-time EEG-EMG human-machine interface-based control system for a lower-limb exoskeleton. IEEE Access 8, 84070–84081 (2020)


4. Langlois, K., Geeroms, J., Van De Velde, G., Rodriguez-Guerrero, C., Verstraten, T., Vanderborght, B., Lefeber, D.: Improved motion classification with an integrated multimodal exoskeleton interface. Front. Neurorobotics 15 (2021)
5. Shi, K., Mu, F., Huang, R., Huang, K., Peng, Z., Zou, C., Yang, X., Cheng, H.: Multimodal human-exoskeleton interface for lower limb movement prediction through a dense co-attention symmetric mechanism. Front. Neurosci. 16 (2022)
6. Li, X., Pan, Y., Chen, G., Yu, H.: Multi-modal control scheme for rehabilitation robotic exoskeletons. Int. J. Robot. Res. 36(5–7), 759–777 (2017)
7. Mineev, S.A.: Multimodal control system of active lower limb exoskeleton with feedback. In: Proceedings of the Scientific-Practical Conference "Research and Development - 2016", pp. 3–10. Springer International Publishing, Cham (2018)
8. Ren, H., Shang, W., Li, N., Wu, X.: A fast parameterized gait planning method for a lower-limb exoskeleton robot. Int. J. Adv. Rob. Syst. 17(1), 172988141989322 (2020)
9. Yan, T., Parri, A., Garate, V.R., Cempini, M., Ronsse, R., Vitiello, N.: An oscillator-based smooth real-time estimate of gait phase for wearable robotics. Auton. Robots 41(3), 759–774 (2017)
10. Akkawutvanich, C., Manoonpong, P.: Personalized symmetrical and asymmetrical gait generation of a lower-limb exoskeleton. IEEE Trans. Ind. Inf. 1–12 (2023)
11. Nachstedt, T., Tetzlaff, C., Manoonpong, P.: Fast dynamical coupling enhances frequency adaptation of oscillators for robotic locomotion control. Front. Neurorobotics 11 (2017)
12. Akkawutvanich, C., Knudsen, F.I., Riis, A.F., Larsen, J.C., Manoonpong, P.: Adaptive parallel reflex- and decoupled CPG-based control for complex bipedal locomotion. Robot. Auton. Syst. 134, 103663 (2020)

Positional Health Assessment of Collaborative Robots Based on Long Short-Term Memory Auto-Encoder (LSTMAE) Network

Naimul Hasan1, Louie Webb1(B), Malarvizhi Kaniappan Chinanthai2, Mohammad Al-Amin Hossain1, Erkan Caner Ozkat3, Mohammad Osman Tokhi1, and Bugra Alkan1

1 School of Engineering, London South Bank University, 103 Borough Road, London SE1 0AA, UK
{hasanm56,webbl4,hossam86,tokhim,alkanb}@lsbu.ac.uk
2 Computer Science and Engineering, University of Westminster, 309 Regent Street, London W1B 2HW, UK
[email protected]
3 Department of Mechanical Engineering, Faculty of Engineering and Architecture, Recep Tayyip Erdogan University, Rize, Turkey
[email protected]

Abstract. Calibration is a vital part of ensuring the safety and smooth operation of any industrial robot, and this is particularly essential for collaborative robots as any issue pertaining to safety can adversely impact the human operator. Towards this aim, Prognostics and Health Management (PHM) has been widely implemented in the context of collaborative robots to ensure safe and efficient working environments. In this research, as a subset of PHM research, a novel positional health assessment approach based on a Long Short-Term Memory auto-encoder network (LSTMAE) is proposed. An experimental test setup is utilised, wherein the collaborative robot is subject to variations of coordinate system positional error. The operational 3-axis position time-series data of the collaborative robot is collected with the aid of an industrial data acquisition platform utilising InfluxDB. The experiments show that, with the aid of this approach, manufacturers can assess the positional health of their collaborative robot systems.

Keywords: Collaborative robotics · Prognostics and Health Management (PHM) · Auto-encoder · Wavelet scattering · LSTM · Machine learning · Manufacturing assembly

1 Introduction

The term 'Collaborative Robot' (Cobot) refers to a robot that can safely do tasks in close proximity to humans [1]. They have a wide range of applications


across multiple sectors such as product assembly, product packaging, material handling, welding, material removal, and defect and quality inspection. In order to integrate them in advanced manufacturing systems to work alongside humans, it is essential that they are regularly calibrated and serviced to prevent health and safety issues [2]. Particularly, issues arising from positional encoders, defects in the robot-base securing system, or propagation of abnormal vibrations could manifest as functional failure of the cobot that might not only lead to a drop in quality and production efficiency, but also compromise the safety of human operators [3]. Therefore, it is critical to have a solid understanding of the positional degradations of a cobot such that anomalous trajectories can be flagged before they have the chance to propagate into a serious fault. To address the above-mentioned problem, Prognostics and Health Management (PHM) can be applied in cobots to support maintenance decision-making. It can help in reliable monitoring, detection of incipient defects, and forecasting of future faults. PHM can be implemented at two distinct levels: i) the component level, in which it is typically used to monitor the health of specific components (such as gears, engines, and electronic devices); and ii) the system level, in which it is employed to assess the health of the overall system (such as robots and workstations) by taking into consideration a variety of aspects, including system responses and process-related metrics [4]. The following paragraphs provide a brief review of the relevant literature and highlight the research gap that this article aims to fill. Wang et al. [5] proposed a deep learning (DL) architecture based on the vibration signal of rolling elements, which is de-noised using the combined use of self-attention (SA) mechanisms and a bidirectional long- and short-term memory (BiLSTM) network. A similar approach was presented in [6], in which a CNN-LSTM deep neural network called the MC-LSTM architecture was used instead of SA to detect collisions (collision points) of cobots relying on rotor channel estimation statistics. Furthermore, Nabissi et al. [7] proposed a Robot Operating System (ROS)-based solution for automatic failure detection and condition monitoring (CM) of cobots. Their architecture can detect joint anomalies using torque information to define standard health indicators (HIs), depending on whether the condition is highly dynamic or not. Auto-encoders are also utilised extensively in the PHM of industrial robots and cobots. Polenghi et al. [3] proposed a hybrid DL-based architecture for fault detection of cobots following PHM guidelines; a clustering algorithm was used on three different trajectories, followed by the use of an autoencoder to identify unhealthy trajectories, and a final clustering algorithm was used for functional failure trajectories. Similarly, the paper by Qiao et al. [4] followed PHM guidelines to identify the positional health changes of industrial robots; however, instead of the traditional DL mechanism, advanced sensors, visualization tools, and algorithms were used. On the other hand, Yun et al. [8] proposed an autoencoder-based architecture for inconsistency detection using a sound sensor in robot limbs. Sound sensors were implemented as a solution to address the problem of a limited quantity of defect data. A method that converts two-dimensional sound signals into one-dimensional sound signals


using the short-term Fourier transform (STFT) was employed. The STFT signals that were considered normal were sent to the training autoencoder; STFT signals that were both normal and anomalous were sent to the feed-forward part of the training encoder so that it could determine the difference between the reconstructed features. In another experimental study, the use of an autoencoder for the purpose of anomaly detection in cobots was investigated by Graabæk et al. [9]. They compared various outlier detection methods for cobot pick-and-place operations, including k-Nearest Neighbors (kNN), Local Outlier Factor (LOF), Randomized Neural Network (RandNet), Long Short-Term Memory (LSTM), and Bilinear (a combination of CNN and LSTM). The vast majority of the approaches that are currently available are utilised for the purpose of managing component-level fault detection of cobots, such as drivers, controllers, sensors, etc. [3,4]; however, only a select few of these methods are capable of being integrated at the system level [3]. There is a lack of research in the field of positional health assessment for cobots, as the majority of the published works in the literature do not concentrate on the concept of predictive maintenance for cobot systems but instead focus on collision detection and trajectory planning [3]. As positional health anomalies are inherently not classified as faults by the cobot, it is much more difficult to detect them than other types of system anomalies. However, identifying them can assist in avoiding collisions, errors in assembly tolerance, and product damage [4]. Therefore, this paper proposes a novel method for detecting anomalies from end-effector position data of cobots using an LSTM auto-encoder network. The proposed method is illustrated through the use of a case study that features a simple pick-and-place operation that incorporates two variations of a positional fault. The findings demonstrated that the proposed method is capable of detecting anomalies from the 3-axis positional trajectory such that associated safety risks and production quality issues can be avoided. The remaining sections of the paper are structured as follows: (i) the proposed auto-encoder-based solution is described in greater depth in Sect. 2, (ii) the testing environment is outlined in Sect. 3, (iii) a discussion of the approach's performance and experimental results is presented in Sect. 4, and (iv) the final section wraps up the paper and lays out the plan for further research.

2 Research Methodology

2.1 Long Short-Term Memory (LSTM) Autoencoder

An autoencoder is a type of neural network that is trained to reconstruct its input data. The idea is to train the network to encode the input data into a lower-dimensional representation and then decode this representation back into the original data. By doing so, the network can learn to capture the most important features of the input data, which can be useful for tasks such as data compression or anomaly detection. LSTM is a type of recurrent neural network (RNN) that is designed to handle the vanishing gradient problem in traditional RNNs [10]. The vanishing gradient problem occurs when the gradients used to update the weights in the network become very small, making it difficult for the network to learn long-term dependencies [11]. LSTMs solve this problem by introducing a set of memory cells that can store information over long periods of time, allowing the network to learn long-term dependencies more effectively [12]. Combining LSTM and autoencoder results in an LSTM autoencoder (LSTMAE), which can be used for tasks such as sequence-to-sequence prediction, anomaly detection or feature extraction. The LSTM encoder takes in a sequence of inputs and encodes them into a lower-dimensional representation using the LSTM memory cells. The LSTM decoder then takes this representation and decodes it back into the original sequence. During training, the network is trained to minimize the difference between the original sequence and the reconstructed sequence.

2.2 The Proposed Architecture

Figure 1 shows the proposed LSTMAE architecture. The proposed network involves a two-stage process that transforms the data into a matrix of frames coupled with an LSTM autoencoder. The first stage involves organising the input data into overlapping frames. The second stage involves feeding the organised data into an LSTM autoencoder to reconstruct the original signal. Figure 2 shows the flowchart of the proposed approach.

Fig. 1. The proposed LSTMAE model.

In stage one, overlapping frames of the input signal are first created as part of the data transformation. Following that, the dataset is split into training and testing data, where the latter consists of both normal and abnormal signals.


The z-score normalisation, as shown in Eq. 1, was applied to the training set and provided as input to the LSTMAE:

$$z = (x - \mu)/\sigma, \qquad (1)$$

where z is the normalised signal, x is the input signal, and μ and σ are the mean and the standard deviation of the input signal, respectively.
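A minimal Python sketch of this first stage is given below; the frame length of 5 time units matches the text, while the hop size of one sample is an assumption (it is not stated explicitly).

```python
import numpy as np

def make_frames(signal, frame_len=5, hop=1):
    """Slice a (T, 3) end-effector position signal into overlapping frames.

    Returns an array of shape (n_frames, frame_len, 3).
    """
    starts = np.arange(0, len(signal) - frame_len + 1, hop)
    return np.stack([signal[s:s + frame_len] for s in starts])

def zscore(frames, mu, sigma):
    # Eq. (1), applied with statistics computed on the training set only
    return (frames - mu) / sigma
```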

Fig. 2. The flow-chart of the proposed LSTMAE model.

In stage 2, an autoencoder that can learn to reconstruct the input data with high accuracy while at the same time capturing complex patterns and dependencies from the bottleneck is employed. The network's encoding phase comprises a series of interconnected layers, starting with an LSTM layer, followed by a ReLU layer, a second LSTM layer, a dropout layer and a final ReLU layer. In order to address the issue of diminishing gradients, ReLU layers are integrated to facilitate non-linear transformations. Additionally, a dropout layer is utilised during training to prevent overfitting by randomly discarding 20% of the input. To achieve the reconstruction of the input sequence, a custom layer was incorporated into the decoding stage of the network. This layer was specifically designed to emulate the output of the final LSTM layer in the encoding stage. The architecture of the decoding stage is composed of an initial LSTM layer, succeeded by a dropout layer, a ReLU layer, a second LSTM layer, and a final ReLU layer. To produce the predicted values, the regression layer receives input from the final decoder layer. It is noteworthy that the arrangement of LSTM cell numbers in the decoder stage is reversed in comparison to that of the encoder stage. Once the autoencoder reconstructs the signal, the Root Mean Square Error (RMSE) for each axis is calculated, and the average of the three RMSE values is then taken as the 'average prediction error'. The RMSE calculation is shown in Eq. 2:

$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(\hat{x}_i - x_i)^2}, \qquad (2)$$

where N is the total number of input samples, and x̂i and xi are the predicted and input signals, respectively. Hyperparameter tuning of the LSTMAE parameters, such as the optimiser and activation function, was performed using grid search. A moving average technique is applied to the 'average prediction error' for timeseries smoothing. From this processed timeseries data, an anomaly can be detected if the prediction error exceeds a threshold value; the operation continues normally until the moving average exceeds the threshold. Once an anomaly is detected, appropriate measures to deal with the positional fault, such as alerting the human operator and requesting calibration, can be carried out.
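As an illustration only, the following PyTorch sketch mirrors the layer ordering described above; the authors' implementation is in MATLAB, and the hidden sizes, dropout placement details, and frame dimensions used here are assumptions rather than values from the paper.

```python
import torch
import torch.nn as nn

class LSTMAE(nn.Module):
    def __init__(self, n_features=3, hidden1=64, hidden2=16, dropout=0.2):
        super().__init__()
        # Encoder: LSTM -> ReLU -> LSTM -> Dropout -> ReLU
        self.enc_lstm1 = nn.LSTM(n_features, hidden1, batch_first=True)
        self.enc_lstm2 = nn.LSTM(hidden1, hidden2, batch_first=True)
        # Decoder: LSTM -> Dropout -> ReLU -> LSTM -> ReLU -> regression layer
        self.dec_lstm1 = nn.LSTM(hidden2, hidden2, batch_first=True)
        self.dec_lstm2 = nn.LSTM(hidden2, hidden1, batch_first=True)
        self.regression = nn.Linear(hidden1, n_features)
        self.dropout = nn.Dropout(dropout)
        self.relu = nn.ReLU()

    def forward(self, x):                       # x: (batch, frame_len, 3 axes)
        frame_len = x.size(1)
        h, _ = self.enc_lstm1(x)
        h, _ = self.enc_lstm2(self.relu(h))
        h = self.relu(self.dropout(h))
        # Custom layer: repeat the last encoder state over the frame length
        z = h[:, -1, :].unsqueeze(1).repeat(1, frame_len, 1)
        d, _ = self.dec_lstm1(z)
        d, _ = self.dec_lstm2(self.relu(self.dropout(d)))
        return self.regression(self.relu(d))    # reconstructed frame
```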

3 Experiments

3.1 Experimental Test Setup

The experimental test setup consists of a small-scale cobot and a laptop PC that collects the real-time position of the end-effector within the coordinate system. The computer’s Intel(R) Core(TM) i7-11850H CPU @ 2.60 GHz and 32 GB RAM were used for all operations in the MATLAB environment. The cobot is equipped with a vision system and adaptive gripper to help pick objects of different shapes and colours. Figure 3 shows the test setup where the robot picks the workpieces and places them in a container.


Fig. 3. Experimental test setup.

3.2 Process Description

The considered pick-and-place operation comprises the human operator first placing workpieces within the cobot's workspace. The cobot identifies and locates the workpieces by their shape, colour, and position in relation to the calibrated workspace coordinates with the help of a vision system. The identified workpiece is then picked and placed in a container. These steps are repeated for 20 cycles, where the first 10 cycles operate under normal calibrated conditions. Multiple workspace coordinates were calibrated and saved as references. At the 11th and 15th cycles, new workspace reference coordinates were introduced. As a result, the cobot's end-effector position has an offset, but the cobot does not register the change of offset and continues working according to its process plan. As the robot requires the position of the workspace before identifying and picking the object, an erroneous workspace position can drive the robot out of its intended trajectory span and into anomalies or failure.

3.3 Faults

The cobot picks and places the object depending on the coordinates of the workspace. Hence, any offset in the cobot's coordinate system makes it difficult or, in some cases, impossible for the gripper to pick the object. This can be due to functional errors or hardware-related errors such as encoder faults. In the experiment, two variations of coordinate positional error are introduced by making changes to the cobot's software to mimic encoder faults. It is important to note that the actual positions of the cobot, the container and the workspace are not modified in any way. The first error is a 1 mm offset in the x, y and z axes introduced at 1970 time units, and the second error, introduced at 2924 time units, is a 5 mm offset in the x, y and z axes.

3.4 Data Acquisition

The top right corner of Fig. 3 displays the information flow and acquisition technologies used by the cobot to create a time-series database. As the manipulator used is centered around a custom ROS stack, the end-effector position comes from the robot's direct kinematics and not from an external tracking system. The internal sensors and joint encoders of the robot therefore provide the readings from which the direct kinematics is computed, coupled with the ROS trajectory planning packages (pose handlers and MoveIt). The robot communicates with the OPC-UA (Open Platform Communications-Unified Architecture) server via Modbus, where the real-time positional measurements can be read within the OPC-UA-Modbus-TCP client-server. This allows OPC-UA to map each reading to a unique identifier within the same namespace, further allowing protocol conversion and data access controls, thus providing a standardised and interoperable interface between the robot and the time-series database. For storing the readings, the combination of Telegraf and InfluxDB was chosen. InfluxDB is an open-source, high-performance, and scalable time-series database designed for handling large volumes of timestamped data. It is specifically optimised for storing, querying, and visualising time-series data, which typically consists of data points associated with timestamps, such as sensor readings, metrics, logs, events, and other time-stamped data generated by various applications and systems. Telegraf is a data collection agent developed by InfluxData that is designed to gather, process, and send data to InfluxDB. It acts as a bridge between various data sources and InfluxDB, further enabling seamless data ingestion into InfluxDB for storage, querying, and visualisation. Therefore, within the experiment, the positional measurements are identified as separate measurements and stored within the same database at a sampling interval of 50 ms.

3.5 Implementation of Methodology

In the experiment, the end-effector position (i.e., gripper) of the cobot was collected as a raw time-series signal for three axes (X, Y, and Z). A training dataset was used that covered the period from time unit 0 to 1969, and the remaining data was used to create a test dataset. Each time unit represents 100 milliseconds. For stage one of the methodology, a frame length of 5 time units is used to perform the data transformation. Following the data transformation, the timeseries is input to stage 2, where the LSTM autoencoder is used to reconstruct the signal. The L2WeightRegularization, SparsityRegularization, and SparsityProportion were selected as 1.0e-10, 1.0e10, and 0.7, respectively. The model converged at 2000 epochs, and the output reconstructed signal is explained in more detail in the next section.

4 Results

Figure 4 shows a timeseries profile of the end-effector position on the X axis at the top. The bottom profile in the figure represents the calculated RMSE in the X-axis. The higher the error, the more the end-effector position deviates from normal. The variations in the X axis until 1970 time units are the result of minor deviations attributed to the presence of a human operator interacting with the cobot's workspace throughout the training and testing phase. However, from 1970 time units, the deviations are more prominent. Again at 2924 time units, there is more fluctuation in the timeseries, which can be attributed to the introduction of the functional fault, i.e., the 5 mm offset in the cobot's workspace. Similarly, in Fig. 5, the increase in the RMSE can be seen from 1970 time units and then at 2924 time units, and a similar pattern emerges in Fig. 6. Following this, the average of the three RMSE values is taken as the 'average prediction error', which is shown in Fig. 7; it can be seen that the 'average prediction error' increases after the introduction of the positional faults at 1970. A threshold of 0.6 is set for anomaly detection; accordingly, at 2222 time units, an alert is generated to indicate that there is some issue with the end-effector position. Although such faults might not normally be captured by the cobot, the use of such a methodology enables continuous monitoring of the cobot's end-effector positions. Therefore, when a very small deviation, such as 1 mm, is introduced into the workspace coordinates, the proposed approach is capable of detecting such anomalies using timeseries data and generating alerts before they can progress into more serious issues.
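A hedged sketch of this alerting step is shown below: the averaged reconstruction error is smoothed with a moving average and an alert is raised once it crosses the threshold. The window length is an assumption; only the 0.6 threshold comes from the text.

```python
import numpy as np

def detect_anomaly(avg_pred_error, threshold=0.6, window=50):
    """Smooth the average prediction error and return the first alert index (or None)."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(avg_pred_error, kernel, mode="same")
    alerts = np.flatnonzero(smoothed > threshold)
    return smoothed, (int(alerts[0]) if alerts.size else None)
```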

Fig. 4. Error Prediction in X-axis.


Fig. 5. Error Prediction in Y-axis.

Fig. 6. Error Prediction in Z-axis.


Fig. 7. Anomaly detection results.

4.1 Discussions and Limitations

This section presents a brief evaluation of the proposed methodology. It is important to highlight that the key focus of the LSTMAE for positional health assessment is to identify faults before they manifest into more serious problems in the industrial system. From the above-mentioned results, it can be seen that sudden changes in the timeseries profile can be detected as anomalies using the end-effector positional measurements. However, there is a small lag, approximately 200 ms, from the introduction of the fault to when the model can actually detect it. Although this needs to be researched further using different use cases, it is a promising area for improving the quality and safety of human-robot collaboration. The authors would like to highlight a few limitations of the proposed work. The time-series data collection with the cobot depends on the process we set up for it to perform. In our experiment, we used a simple pick-and-place operation, which generates a dataset with limited stochasticity. Therefore, the raw data was used as input to the LSTMAE architecture. However, if the initial dataset has a lot of variation, it might not be possible to use the dataset as such; more complex techniques such as wavelet scattering might need to be employed before the dataset can be input into the LSTMAE architecture. The proposed methodology is limited in its scope to positional health assessment of cobots, and only sudden failures are considered. Propagating errors that can be interpreted from the timeseries are not considered in this work.

5 Conclusions

This research presents a novel application of an LSTMAE network to the positional health assessment of cobots. The approach comprises two stages wherein the positional measurement values of the cobot end-effector are subject to data transformation and used to train the LSTMAE network. During the training process, RMSE is used as the performance measure to evaluate the network. The trained network is then used to predict faults by detecting anomalies, which can then be used to alert and request calibration. The core contribution of the research pertains to positional anomalies, which are difficult failures to detect; however, timely detection of such failures helps prevent health and safety hazards and ensures product quality. Typically, cobots do not follow strict state transitions/processes as they need to accommodate working with a human operator. This means that they are subject to process deviations, and hence the research can be extended by using techniques such as process mining to represent the factual robotic process and state transitions. Another future direction is the determination of the threshold at which the timeseries values can be considered an anomaly. Furthermore, the authors believe that the performance of the LSTMAE network can be improved by fine-tuning the process parameters using different hyperparameter tuning approaches.

References

1. Peshkin, M., Colgate, J., Wannasuphoprasit, W., Moore, C., Gillespie, R., Akella, P.: Cobot architecture. IEEE Trans. Robot. Autom. 17, 377–390 (2001)
2. Djuric, A., Urbanic, R., Rickli, J.: A framework for collaborative robot (CoBot) integration in advanced manufacturing systems. SAE Int. J. Mater. Manuf. 9, 457–464 (2016)
3. Polenghi, A., Cattaneo, L., Macchi, M.: A framework for fault detection and diagnostics of articulated collaborative robots based on hybrid series modelling of artificial intelligence algorithms. J. Intell. Manuf. 1–19 (2023)
4. Qiao, G., Schlenoff, C., Weiss, B.: Quick positional health assessment for industrial robot prognostics and health management (PHM). In: 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 1815–1820 (2017)
5. Wang, Y., Cao, G., Han, J.: A combination of dilated self-attention capsule networks and bidirectional long- and short-term memory networks for vibration signal denoising. Machines 10, 840 (2022)
6. Czubenko, M., Kowalczuk, Z.: A simple neural network for collision detection of collaborative robots. Sensors 21, 4235 (2021)
7. Nabissi, G., Longhi, S., Bonci, A.: ROS-based condition monitoring architecture enabling automatic faults detection in industrial collaborative robots. Appl. Sci. 13, 143 (2023)
8. Yun, H., Kim, H., Jeong, Y., Jun, M.: Autoencoder-based anomaly detection of industrial robot arm using stethoscope based internal sound sensor. J. Intell. Manuf. 34, 1427–1444 (2023)
9. Graabæk, S., Ancker, E., Christensen, A., Fugl, A.: An experimental comparison of anomaly detection methods for collaborative robot manipulators. TechRxiv (2022)


10. Pascanu, R., Mikolov, T., Bengio, Y.: On the difficulty of training recurrent neural networks. In: International Conference on Machine Learning, pp. 1310–1318 (2013)
11. Ribeiro, A., Tiels, K., Aguirre, L., Schön, T.: Beyond exploding and vanishing gradients: analysing RNN training using attractors and smoothness. In: International Conference on Artificial Intelligence and Statistics, pp. 2370–2380 (2020)
12. Landi, F., Baraldi, L., Cornia, M., Cucchiara, R.: Working memory connections for LSTM. Neural Netw. 144, 334–341 (2021)

Human-Exoskeleton Interaction During Knee Flexion-Extension Under Different Configurations of Robot Assistance-Resistance

Denis Mosconi1,2(B), Yecid Moreno2, and Adriano Siqueira2

1 Industry Department, Federal Institute of São Paulo, Catanduva, Brazil
[email protected]
2 Mechanical Engineering Department, University of São Paulo, São Carlos, Brazil

Abstract. The purpose of this work was to evaluate the human-exoskeleton interaction during the performance of knee flexion-extension under different configurations of robot assistance-resistance. The interaction was assessed taking into account the myoelectric activity, muscle recruitment, robot torque and movement performed. To this end, an experimental procedure was conducted with an individual wearing an exoskeleton and performing knee flexion-extension movements in a seated position with the robot configured in three modes: assistance on flexion (FA), assistance on extension (EA), and assistance on flexion and extension (CA). The results obtained allowed us to conclude that robot configurations of flexion-assistance and extension-assistance can be used for training focused on strengthening, while the configuration for complete-assistance (that is, assistance in both flexion and extension) is useful for training focused on developing motor control.

Keywords: Knee orthosis · Exoskeleton · Robotic therapy

1 Introduction

Recovering the motor skills of the lower limbs after trauma or stroke is a challenge, since beyond ensuring the neuromotor control necessary to perform dynamic tasks such as walking, squatting and jumping, sufficient strength to maintain balance when standing must also be ensured. Among all the joints of the lower limbs, the knee is the largest and most complex, being essential for the good performance of the movements of these limbs. Injuries affecting this joint can therefore impose great motor limitations on the individual, so motor rehabilitation is fundamental to restore the joint functions as well as possible [5]. One of the various movements that can be used during rehabilitation therapy is knee flexion-extension (F&E): an open kinetic chain movement that is useful for strengthening, restoration of joint stability, improvement


of motor coordination and range of motion [8]. At the beginning of the treatment, such an exercise is performed in a seated position with the patient being assisted by the therapist. As improvement in the clinical picture is noted, the therapist can exert a resistance force against the movement performed by the patient, in order to promote strengthening combined with motor control. A powerful resource that has been increasingly used in the rehabilitation process is the exoskeleton robot: a wearable robot that can help in the treatment by applying forces to assist or resist movement, just like a therapist. Such devices can contribute to the reduction of therapist workload, treatment time and costs, as well as promote objective prognoses through the data collected with their sensors [10]. Robotic devices are a resource that can be utilized in knee rehabilitation, as they have the ability to repeat tasks accurately and the potential to measure the progress of the rehabilitation, are flexible, and contribute to the reduction of therapist workload, as well as the time and costs of the treatment [10]. One of the points that must be taken into account when using robots in rehabilitation is their interaction with the patient, which must be safe for both, in addition to guaranteeing the effectiveness of the treatment. Thus, the human-robot interaction should be tested and evaluated before being used in therapy itself. In this context, the human-robot interaction has been studied by many researchers, who took into account the movement performed, the torques applied by the robot and the myoelectric activity. For example, [7] evaluated the interaction between an individual and an active orthosis during lifting and lowering movements, concluding that muscular activity can be reduced with the orthosis assistance, especially when using configurations where the knee is assisted. Muscle activations when using lower-limb assistive orthoses were also observed by [9,11] in their works; however, despite the reduction of muscle fatigue, changes in the movement pattern appeared. Changes in the range of motion and movement speed were also observed by [3], with the use of a two-degrees-of-freedom knee orthosis during gait. Lee et al. [4] report that, despite the reduction of muscle activities, an increase in the metabolic cost resulting from compensatory movements can be noticed when using a unilateral knee exoskeleton during gait. The aforementioned research shows the importance of studying the interaction between humans and exoskeleton robots for rehabilitation, in order to understand how such an interaction occurs with regard to changes in movement, applied torques, the possibility of unwanted force exchange, myoelectric activity levels and muscle recruitment patterns. By understanding human-robot interactions, one can plan an adequate rehabilitation program and ensure treatment effectiveness and patient safety by developing effective interaction controls. The purpose of this work was to evaluate the human-exoskeleton interaction during the performance of knee flexion-extension movements, in a seated position, under different configurations of robot assistance-resistance, in order to verify how myoelectric activities and muscle recruitment patterns occur, as well


as to measure the torque applied by the robot and the changes that can occur in the movement. Based on this work, a better understanding of how the aforementioned movement occurs will be obtained, allowing for an adequate application of the orthosis and more effective and safe treatment planning.

2 Methodology

The human-exoskeleton interaction was evaluated through an experiment conducted with a human wearing a lower-limb exoskeleton and performing knee flexion and extension (F&E) movements, in a seated position, being sometimes assisted and sometimes resisted by the robot, following the protocol detailed below. The subject who participated in the experiment is a healthy man, 30 years old, weighing 64.6 kg and 1.75 m tall. He is right-handed, and during the experiment he only performed the movements with his right leg. The robot used was the ExoTAO, a modular lower-limb exoskeleton developed by [2], whose structure is composed of lightweight tubes connected by six independent free joints, able to be adjusted for humans with heights between 1.65 and 1.90 m. The modular characteristic of the robot allows it to be applied to the treatment of a single joint of the patient. In this work, only the right knee joint of the robot was used actively, the right hip and ankle joints were used passively, and the left leg of the robot was not used. Fig. 1b depicts the subject wearing the ExoTAO during the experiment. The exoskeleton was attached to the human by means of Velcro straps and a customized shoe, ensuring stability and avoiding joint misalignment.

Fig. 1. The muscles whose myoelectric signals were measured (a) and a subject using the ExoTAO during the experimental procedure (b).


An impedance control law (Eq. (1)) was applied to the exoskeleton. In this law, τR is the robot torque, θd is the knee angular reference to be tracked by the subject, θ is the measured knee angle, KR is the robot virtual stiffness (which expresses the level of assistance from the orthosis to the user), BR is the robot virtual damping and θ̇ is the measured knee angular velocity.

$$\tau_R = (\theta_d - \theta)K_R - B_R\dot{\theta} \qquad (1)$$
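As a simple illustration of Eq. (1), the sketch below computes the robot torque for one control step; the stiffness and damping values and the cosine-shaped reference are illustrative assumptions, not the study's settings.

```python
import numpy as np

def impedance_torque(theta_d, theta, theta_dot, K_R=20.0, B_R=1.0):
    # Eq. (1): tau_R = (theta_d - theta) * K_R - B_R * theta_dot
    return (theta_d - theta) * K_R - B_R * theta_dot

def reference(t, period=10.0, amplitude=70.0):
    # One way to realise the 0-70 deg sinusoidal trajectory with a 10 s period
    return amplitude * 0.5 * (1.0 - np.cos(2.0 * np.pi * t / period))

# FA mode holds theta_d at 0 deg (resisting extension), EA mode holds it at
# 70 deg (resisting flexion), and CA mode tracks the sinusoidal reference itself.
```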

The experiment protocol consisted of the human wearing the exoskeleton and performing knee flexion-extension in a seated position, following a sinusoidal trajectory with a period of 10 s and an amplitude ranging from 0° (flexion) to 70° (extension), for 90 s (i.e., 9 repetitions). Both the reference to be tracked and the measured angular trajectory of the knee were displayed to the user on a computer screen. The experiment was divided into four modes, namely:

• Bare (B): the subject performed the movement without the exoskeleton.
• Flexion Assisted (FA): the exoskeleton was configured to assist in the flexion phase and resist in the extension phase. For this, the reference of the robot was a fixed value of 0°.
• Extension Assisted (EA): the exoskeleton was configured to assist in the extension phase and resist in the flexion phase. For this, the reference of the robot was a fixed value of 70°.
• Completely Assisted (CA): the exoskeleton was configured to assist in both the flexion and extension phases.

The knee angular position and velocity were measured using Xsens MTw Awinda Wireless Inertial Measurement Units (IMUs) with a working frequency of 100 Hz. The ReRobApp framework from [6] was used to process the signals from the IMUs. One sensor was placed on the thigh and two on the shank. It was assumed that the knee positions and velocities of both the user and the exoskeleton were the same. The myoelectric activity of five muscles (Fig. 1a) was measured through surface electromyography (sEMG) using a Trigno Wireless EMG System (Delsys Inc., Natick, MA, USA). The muscles considered were the knee flexors biceps femoris (BF) and semitendinosus (ST) and the knee extensors rectus femoris (RF), vastus lateralis (VL), and vastus medialis (VM). The instructions provided by [1] were used for the placement of the electrodes and the preparation of the skin (shaving, abrasion with sandpaper and cleaning with 70% alcohol). The EMG data were normalized to %MVC, measured through a maximum voluntary contraction (MVC) procedure in which the subject performed isometric contractions against manual resistance. The EMG data were sampled at 2 kHz on a separate computer using the Delsys EMGworks software and then processed using MATLAB (The MathWorks, Inc.; Natick, MA, USA). First, the signal's moving average (50 ms time window) was subtracted in order to eliminate DC bias. Then, the signal was rectified and filtered by a second-order Butterworth low-pass filter (cut-off frequency of 2 Hz). Finally, the mean value was extracted and normalized to the MVC mean.
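A hedged Python sketch of this sEMG processing chain is given below; the paper's processing was done in MATLAB, and the SciPy calls here reproduce the described steps, with the returned %MVC envelope plus its mean being one reasonable way to package the result.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def process_emg(emg, mvc_mean, fs=2000, win_ms=50, cutoff_hz=2.0, order=2):
    win = int(fs * win_ms / 1000)                       # 50 ms moving-average window
    moving_avg = np.convolve(emg, np.ones(win) / win, mode="same")
    detrended = emg - moving_avg                        # remove DC bias
    rectified = np.abs(detrended)                       # full-wave rectification
    b, a = butter(order, cutoff_hz / (fs / 2))          # 2nd-order low-pass (2 Hz)
    envelope = filtfilt(b, a, rectified)
    envelope_mvc = 100.0 * envelope / mvc_mean          # express as %MVC
    return envelope_mvc, envelope_mvc.mean()
```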

3 Results and Discussion

The movements performed by the human during the experiment, as well as the reference to be tracked, are presented in Fig. 2a. For the bare mode, when the subject was not wearing the exoskeleton, the movement moved away from the reference in the extension phase, because the subject started with the knee semi-flexed and developed more velocity in this phase. The maximum extension was higher than that proposed by the reference, and in the flexion phase the knee angular velocity was lower, with the movement ending at a point of semi-flexion of the knee. Thus, despite having developed a sinusoidal trajectory, it is noted that the subject in question tends to develop more strength in extension than in flexion. In the flexion assisted mode, the tracking errors for the extension phase were reduced; however, there was an increase in tracking error for the flexion phase. In this case, it is possible to verify that the user exerted a flexion velocity beyond what was necessary, which indicates effort; that is, he did not take advantage of the robot assistance as much as he could. With regard to the assisted extension mode, tracking errors for extension were considerably reduced, but in addition to the maximum desired extension not being reached, the greatest deviation was observed in the flexion phase in relation to the other modes. When completely assisted, results similar to the bare mode were obtained, however with an improvement in trajectory tracking, especially for the flexion phase. Observing Fig. 2b, it is possible to verify that for the CA case the tracking error was the closest to zero over time. This is not surprising, as in this case the user does not experience resistance to movement. In any case, the tracking errors indicate that the user did not fully take advantage of the robot assistance, at times exerting more force than necessary to carry out the movement according to the reference trajectory. The torques applied by the robot over time are presented in Fig. 3a. In the flexion assisted (FA) mode, the torque is initially zero, since the movement begins with the knee flexed. Then, as the subject performs an extension, the robot increases the applied torque in an attempt to recover the flexed position. After the maximum extension, when the flexion phase starts, the torque applied by the exoskeleton decreases to almost zero at the end of the cycle, when the knee is flexed again. In the extension assisted (EA) mode, it is possible to notice a torque curve with the same shape as the one obtained for the flexion assisted mode. However, in this case, the torque initially applied by the robot is greater than zero, as the knee starts in the flexed position. As the user performs the extension movement, the torque applied by the robot decreases to almost zero at maximum extension. Then, during flexion, the robot applies a torque in an attempt to restore the extended position (at this moment the torque is perceived by the user as resistance to movement). For the completely assisted mode, little torque was applied by the robot, just in an attempt to correct the reference tracking errors.


Fig. 2. Knee angular position in degrees (a) and error related to the reference θd (b) for the modes bare (B), flexion assisted (FA), extension assisted (EA), completely assisted (CA).

The root mean square (RMS) values of the torques applied are presented in Fig. 3b. The highest value observed is related to assisted extension, while the lowest pertains to fully assisted movement. The high torque values related to the FA and EA modes are mainly due to the resistance that the robot imposes in each of the modes.

Fig. 3. Torque applied by the robot over time (a) and its root mean square (b) for the modes bare (B), flexion assisted (FA), extension assisted (EA), completely assisted (CA).

The muscle activations in %MVC are depicted in Fig. 4. Comparing the activations of the flexors and extensors, it is possible to notice that the flexors are less activated. This is due to the fact that, to perform flexion, the human is helped by the force of gravity, while for the extension such a force acts as a resistance. Analyzing the muscle activation of the flexors, we can see that in the FA mode the biceps femoris (BF) and the semitendinosus (ST) were more activated


Fig. 4. Muscle activations in %MVC for the flexors (a) and extensors (b) for the modes bare (B), flexion assisted (FA), extension assisted (EA), completely assisted (CA).

when compared with the bare mode. The main cause of this is the fact that, in the flexion phase of the movement, the angular acceleration was greater, since there was no resistance at this time. This is confirmed by observing Fig. 2a, which indicates greater velocity in flexion than in extension. In this mode, the muscle recruitment pattern was inverted in relation to the bare mode, with the BF acting more than the ST. In the EA and CA cases, it was possible to notice a small increase in the BF activation while a decrease in the ST activation was perceived, when compared with the bare mode. For these EA and CA modes, the recruitment pattern of the ST working more than the BF was maintained, as in the bare mode. The muscle activations of the extensors are presented in Fig. 4b. When using the orthosis, the vastus medialis was noticeably recruited, in order to ensure patellar stability through the maintenance of tibiofemoral alignment. In the FA mode the extensors acted more than in the bare mode, because the user faced a resistance to perform the extension. For the EA mode, these muscles were also more activated than in the bare mode due to the peak acceleration in the assisted phase. Finally, in the CA mode, the extensor muscles worked less than in the FA and EA modes, but more than in the bare mode. In all the modes where the subject used the exoskeleton, the recruitment pattern was different from the one observed for the bare mode: with the robot, the VM was the most activated muscle, seeking to ensure patellar stability, leading us to conclude that the exoskeleton may introduce some force that may cause unwanted rotation of the knee. Table 1 presents the numeric values of the mean muscle activations in %MVC together with the standard deviations. Thus, in general, it can be inferred that the use of an exoskeleton for knee flexion-extension training under different robot assistance-resistance configurations can lead to increased muscle activations, variations in recruitment patterns and changes in the movement executed.


Table 1. Mean muscle activations (±SD) comparisons for bare (B), flexion assisted (FA), extension assisted (EA), completely assisted (CA).

|    | Bare       | FA          | EA         | CA         |
| RF | 27.0 (1.6) | 17.8 (1.3)  | 23.6 (1.5) | 27.3 (1.2) |
| VM | 11.8 (1.8) | 55.7 (2.7)  | 63.0 (3.7) | 34.9 (2.7) |
| VL | 13.5 (2.2) | 15.9 (2.1)  | 24.0 (3.0) | 22.5 (2.4) |
| BF | 5.5 (2.4)  | 18.0 (0.97) | 6.2 (1.5)  | 7.1 (1.7)  |
| ST | 12.9 (1.9) | 10.8 (2.5)  | 6.7 (2.7)  | 7.9 (1.8)  |

However, despite the tracking errors observed, the user was able to perform flexion and extension movements within a range close to the desired one in all tested modes. Thus, it can be stated that the FA and EA modes are useful for training focused on strengthening, while the CA mode is useful for training focused on developing motor control.

4 Conclusions

The purpose of this work was to evaluate the human-exoskeleton interaction during the performance of knee flexion-extension movements under different configurations of robot assistance-resistance, in order to verify how myoelectric activities and muscle recruitment patterns occur. The results obtained with an experimental procedure conducted under three different conditions of robot assistance-resistance allowed us to conclude that changes in the muscle recruitment pattern, increases in myoelectric activity and alterations in the movement can occur. The work also led us to conclude that robot configurations of flexion-assistance and extension-assistance can be used for training focused on strengthening, while the configuration for complete-assistance (that is, assistance in both flexion and extension) is useful for training focused on developing motor control. For future work, we expect to conduct the experiment with other subjects, changing the pattern of movement and the level of assistance-resistance of the robot.

Acknowledgment. This work is supported by the Pro-Rectory of Research of the University of São Paulo, Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, PGPTA, under grant 3457/2014, and the São Paulo Research Foundation (FAPESP) under grant 2019/05937-7. This study was approved by the Ethics Committee of the Federal University of São Carlos (Number 26054813.1.0000.5504).



Walking on Virtual Surface Patterns Changes Muscular Activity

Maximilian Stasica1(B), Kai Streiling2, Celine Honekamp2, Alexandra Schneider1, Alexandros Exarchos1, Saskia Henschke1, Suat Pirincoglu1, Melike Polat1, Neele Scholz1, Carina Stähler1, Emma Syring1, Loes van Dam2, and André Seyfarth1

1 Lauflabor Locomotion Laboratory, Institute für Sportwissenschaft, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany
maximilian [email protected]
2 Sensorimotor Control and Learning, Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany

Abstract. Current designs of structures often involve creative uses of materials and colours. While it is well known that patients with balance-related disorders can experience negative effects through perceptual perturbations, the connection from these effects to physical changes still remains unclear. This paper therefore showcases the gait changes induced by walking on potentially irritating virtual surface patterns. Muscular data from nine healthy young participants point towards a more careful and insecure gait style when confronted with such a pattern. Inhibitions of the forward movement are induced by fluctuations in the activity of the musculus rectus femoris and a braking action by the biceps femoris. This implies a direct connection of visual disturbances to human gait, which is especially important for the control of assistive devices that should include an integrated detection of gait-relevant visual patterns to compensate for patients' uncertainties.

Keywords: Human gait · Muscular activity · Visual perturbations · Perception · Virtual reality · Control strategies

1 Introduction

In the last decades, rising life expectancy and an older average population have led to an increase of disabilities. An extensive survey ("Repräsentativbefragung zur Teilhabe von Menschen mit Behinderungen Zusammenfassung") conducted by the German Institute for Applied Social Sciences (infas) from 2017 to 2021 revealed an increase of people with disabilities of 9% between 2009 and 2017 [1].

WWW home page: https://www.sport.tu-darmstadt.de/institut ifs/fachgebiete ifs/sportbiomechanik ifs/index.en.jsp
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
E. S. E. Youssef et al. (Eds.): CLAWAR 2023, LNNS 811, pp. 345–356, 2024. https://doi.org/10.1007/978-3-031-47272-5_29


In Germany alone, 9.4% of the population had the status of "severely disabled" [2], which means 7.8 million people were affected in the year 2021. Globally, the World Health Organization (WHO) estimates 1.3 billion people to be affected by a disability in 2021 [3], which accounts for 16% of the population. Based on three dimensions defined by the WHO, the Center for Disease Control and Prevention created a definition of the term "disability": "any condition of the body or mind (impairment) that makes it more difficult for the person with the condition to do certain activities (activity limitation) and interact with the world around them (participation restrictions)" [4]. Especially the dimensions of activity limitation and participation restrictions are factors that affect how the environment in which we act is designed. Within our highly urban-centered environment, navigating through and being surrounded by architecture, which is often designed to balance functionality and aesthetics, is part of our everyday lives. Many architectural design properties either contain (e.g., through colouring) or create patterns (e.g., through light and shadows). Research has shown that complex, repetitive, or high-contrast patterns can impair well-being by creating a variety of perceptual perturbations in people with or without specific disorders [5,6]. Such patterns induce visual discomfort such as annoyance, irritation and distraction, as well as eye strain and headaches. Flickering light, such as that created by walking through light and shadow patterns, has been shown to trigger a variety of symptoms in people who experience migraine, epilepsy or strong sensory photosensitivity. Photosensitivity is a visual sensitivity disorder in response to light stimuli, intermittent light sources and complex stimuli; almost all patients suffering from this condition are sensitive to flickering light or certain patterns [7]. Bridge parapet railings and other structural elements commonly create such linear high-contrast patterns. Another group of people affected by patterns are people with vestibular disorders. Research has shown that complex, repetitive, or high-contrast patterns influence the balance and gait of people with "sensory and/or information processing differences" [8]. For example, Leonards and colleagues (2015) found that patterns on the floor lead to a deviation from the desired movement direction [9]. This might arise due to misinterpretation of the environment (e.g., perceiving barriers, obstructions, curbs, edges of steps or holes) that can lead to imbalance [5], as well as to other behavioral changes. Another possible explanation for discomfort and other problems could be so-called "discomfort glare", which causes visual fatigue [8]. Glare is described as a visual difficulty experienced in the presence of significantly brighter light than the eyes have adjusted to. A new guideline published in October 2022 by the British Standards Institution (BSI) specifically states: "Building designers should design to avoid glare from daylight or artificial light sources" [8]. As proposed in [10], this calls for in-depth investigations to link perception and human motor responses. To understand this relation, not only perceptual but also biomechanical measurements are required. It is known that emotional states


can change postural control and gait style [11–14]; however, these works mainly focused on fear of heights. To this point, there is no clear connection between the visual perception of surface patterns and human gait; however, there are known effects of such patterns on balance control, such as the moving room illusion, which can cause balance problems or even falls [15]. This is especially problematic for patients with balance disorders or the elderly, as they are specifically prone to falls. Visually induced gait style changes or balance problems also have to be considered during the design of assistive devices for walking. While some of these systems feature biologically inspired balance support [16,17] or improve balance support after slippage [18], their suitability to recover from visual perturbations still has to be evaluated. As a prerequisite to finding robust controllers for assistive devices, it has to be understood how humans react to visual patterns in the first place. It is expected that the muscular activity of humans subjected to such visual disturbances will change to promote a more careful and insecure gait style. This should also lead to a reduction in the vertical excursion of the body centre as well as an increase in lateral excursion.

2 Method

To test the effects of virtual surface patterns on human gait, we used an improved version of the setup presented in [19]. These improvements involved more processing power and a wireless head-mounted display (HMD), which increases the immersion and reduces the risk of stumbling over the HMD's cable. We further decided to change the virtual environment to an urban setting to increase realism. The experiment was conducted following the Declaration of Helsinki and approved by the Ethics board of Technische Universität Darmstadt (EK83/2022).

2.1 Virtual Environment

Participants were placed in an immersive virtual environment using a virtual reality (VR) headset (HTC Vive Pro Eye, Valve, Bellevue, US & HTC, Taoyuan, Taiwan). The VR contained a simple, pre-built model of a major city with high buildings, roads, and skyscrapers, and a custom-made footbridge on which participants had to walk. The pattern of the bridge and the colour of the pattern were varied between trials. Figure 1 shows the applied patterns in their greyscale and colour variants. The colours blue and orange were chosen to maximize the contrast between the colours. The bridge had waist-high railings with few, equally spaced pillars to clearly mark the end of the bridge while avoiding a frequency effect.

2.2 Participants

Twelve participants (5 female, 7 male) with normal or corrected-to-normal vision took part in the experiment. They were between 19 and 33 years of age (μ = 23.42, med = 22, σ = 4.25) and ranged from 156 to 190 cm in body height (μ = 174.0, med = 174.5, σ = 9.43). Their body weights were all between 50 and 89 kg (μ = 67.83, med = 67.0, σ = 12.78). None of the participants had any physical disabilities; two reported to be afraid of heights. Six participants reported not to have any previous experience with virtual reality. Three participants were excluded due to technical difficulties during the experiment.

Fig. 1. Patterns applied to the surface of the bridge in VR. Left: straight lines. Center: Lisbon pattern motivated in [6]. Right: random control pattern. Additionally, a solid grey surface colour was used as a control condition.

2.3 Measurement Equipment

Biomechanical measurement devices were utilized to record relevant gait parameters. The participants were therefore equipped with 16 wireless surface electromyography (EMG) sensors (Trigno Avanti, Delsys, Natick, US) which were placed perpendicular to the muscle fibers on the muscle belly of eight gait relevant muscles per limb, including the musculus tibialis anterior (TIB), m. soleus (SOL), m. gastrocnemius lateralis (GAS), m. vastus medialis (VAS), m. rectus femoris (RCF), m. biceps femoris (BCF), m. tensor fasciae latae (TFL), and m. gluteus maximus (GLM). Skin preparation included cleaning of the skin with alcohol. The rectangular electrodes (27 * 37 * 13 mm, mass 14 g) were attached with an adhesive. The sensors feature four silver bar contacts with an intraelectrode distance of 10 mm and collected data with a sampling rate of 1926 Hz. Further, each sensor contained a hardware-integrated 3D gyroscope (148 Hz).


We also collected movement data using a nine-camera motion capture system (Qualisys Oqus). For this, we used a full-body marker set consisting of 19 passive reflective markers. Additionally, the head movement was tracked using the position and rotation of the head-mounted display (HMD), which are logged by the VR system. This paper, however, focuses on the analysis of the EMG data, leaving the measurements of the latter two systems for future studies.

2.4 Measurement Protocol

After preparation, the headset was adjusted to the participant and the participant had time to get familiar with the VR. Each participant went through all conditions in a randomized order to avoid order effects. Within each condition, participants performed five trials to increase the amount of available data. Typically, each trial contained three full gait cycles per leg plus an initiation cycle. Since this study focuses on stable gait, the initiation cycle was neglected. Every condition featured a virtual bridge with one of the surface patterns depicted in Fig. 1, while the baseline condition took place on a neutral grey surface.

2.5 Data Processing

EMG data were processed using the method outlined in [20]. The raw signal was demeaned to eliminate potential offsets, band-pass filtered (10–450 Hz, fourth-order Butterworth) and time-normalized over one gait cycle. The time-normalized signal was then high-pass filtered (6 Hz, second-order Butterworth) and aggregated over each condition. Individual strides were identified based on gyroscope stride identification [20]. Thus, gyroscopic data in the sagittal plane from three of the sensors (TIB, SOL, and GAS) integrated in the EMG system were combined to obtain a reliable measure of the orientation of the shank. Following [20], the heel strike was defined using the first zero-crossing of the combined angular velocity after a highly negative combined signal. Further, the first stride of each leg was neglected to avoid gait initiation effects. Lastly, the Grand Mean [20] over all conditions and participants was calculated and used for detailed analysis. To check whether any possible changes arise from the pattern or the colour used, a 2-by-4 repeated-measures ANOVA (α = 0.05) was calculated for each muscle.
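A minimal Python sketch of this processing chain is given below. It assumes NumPy/SciPy, a single EMG channel and a precomputed combined sagittal-plane shank angular velocity (from the TIB, SOL and GAS gyroscopes); the negative-peak threshold used to arm the heel-strike detector is a placeholder, since the paper does not state a value, and the subsequent 6 Hz filtering and Grand Mean aggregation are omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.interpolate import interp1d

FS_EMG = 1926  # Hz, sampling rate of the EMG sensors reported above

def preprocess_emg(raw, fs=FS_EMG):
    """Demean and band-pass filter raw EMG (10-450 Hz, 4th-order Butterworth)."""
    x = raw - np.mean(raw)
    b, a = butter(4, [10 / (fs / 2), 450 / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def heel_strikes(shank_gyro, neg_thresh=-2.0):
    """Heel strike = first rising zero-crossing of the combined sagittal shank
    angular velocity after a strongly negative swing-phase excursion.
    neg_thresh is an assumed value (in the units of the gyroscope signal)."""
    events, armed = [], False
    for i in range(1, len(shank_gyro)):
        if shank_gyro[i] < neg_thresh:
            armed = True                                  # negative swing peak seen
        if armed and shank_gyro[i - 1] < 0 <= shank_gyro[i]:
            events.append(i)                              # first zero-crossing after it
            armed = False
    return np.asarray(events)

def time_normalize(signal, start, stop, n_points=101):
    """Resample one gait cycle (samples start..stop) onto 0-100 % of the cycle."""
    cycle = signal[start:stop]
    return interp1d(np.linspace(0, 100, len(cycle)), cycle)(np.linspace(0, 100, n_points))
```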

3 Results

With typically 6 strides per trial (3 left, 3 right), participants took 60 gait cycles per condition. Between these conditions, clear changes in muscular activity could be observed (see Figs. 2 and 3). The extensor muscles (GLM, VAS, and SOL) of the right leg generally showed an increased peak activation during the start of the swing phase (roughly at 60% of the gait cycle) (see Fig. 6). While the GLM showed its largest difference during the coloured Lisbon condition (see Fig. 1), the activity of the SOL increased by more than 10% for all conditions but the greyscale Lisbon and random pattern. In the left leg, among the extensors, the VAS shows a more fluctuating behavior in all conditions, while GLM and SOL remain within the range of the control (see Fig. 7). The left GLM, however, shows a tendency towards a reduction during the onset of swing. The left TFL showed a noticeable reduction in activity during the standing phase, while the right TFL increased shortly after the heel strike. While the flexor muscles (BCF, RCF, and GAS) of the right leg do not show significant changes, the flexors of the left leg fluctuate noticeably around their respective activity during control. While the BCF tends to be activated more during the gait cycle, the RCF shows a reduction during the transition between standing and swinging. The left GAS also peaks during this time. The right TFL shows tendencies towards a reduction in activity during the swing phase for the Lisbon and striped conditions (see Fig. 6). However, it peaks for both the greyscale and the coloured striped patterns around the same time. The left TFL does not show any changes above 10%. Considering the rate of change in muscular activity in both legs, Figs. 4 and 5 showcase the actuation patterns. It is noticeable that especially the GLM and SOL respond to all surface patterns with a higher rate of change over time during activation. These findings could indicate a reduced vertical excursion of the Centre of Mass (COM) as well as an increase in lateral excursion. In a preliminary effort, the geometrical mean of the pelvis was calculated based on motion capture data to approximate the COM. The trajectory of this approximate COM did not vary significantly between the conditions. All participants reported that they felt most irritated while walking on both the coloured and the greyscale Lisbon pattern. The results of the rm-ANOVA indicated a significant effect of the colouring on the overall activation of the right GAS (p = 0.02) and the right VAS (p = 0.01). Significant interactions between colour and pattern were found for the right RCF (p < 0.01).

Fig. 2. Grand mean of the muscular activity of the right leg per condition. Note the secondary peak in GLM activation during the coloured Lisbon condition and the earlier activation of the TFL during the onset of the swing. Also, the SOL shows more activity in all conditions when compared to the baseline. These findings point towards a small disruption during the gait cycle that is mainly countered by the unilateral extensor and the lateral muscles.

Fig. 3. Grand mean of the muscular activity of the left leg per condition. The BCF shows higher but more erratic activation patterns when confronted with any of the surface patterns, while the RCF shows lower activation especially during the onset of the swing. This could indicate a reduced vigor of the swing leg.

4 Conclusion

So far, the influence of patterns on people with migraine, photosensitive epilepsy, or vestibular disorders has not yet been taken into account when designing architectural structures in terms of their trigger-ability. Even though it is well known that people with the conditions mentioned above can suffer from visual patterns, the recently published guideline by the BSI is the first to address these issues. As the perception of structures is highly individual, inclusive architecture only recently started to arise. Today, architecture and design slowly start to incorporate inclusive ideas into their work. Although the trigger-ability of high-contrast patterns for many different conditions is widely known, there is very little research aimed at uncovering the effects of patterns created by our urbanized environment. The effect of surface patterns on human gait is especially important for patients with diseases


Fig. 4. Phaseplot of muscular activity for different surface conditions.

Fig. 5. Phaseplot of muscular activity in the left leg for different surface conditions.

related to balance or visual perception. As only healthy young participants were admitted to this study, the results are not directly transferable to patients or the elderly. Further research focusing directly on the target group would be needed to fully capture the clinically relevant effects of visual perception on human gait. While the validity of these results may also be subject to the limited immersion of HMDs and could partly be explained by the uncertainty induced by the


Fig. 6. Difference of the muscular activation in the right leg between each surface condition and control (solid grey surface). While GLM and SOL show increased activity during the beginning of the swing phase in some conditions, the flexor muscles do not show significant changes. Interestingly, the activation of the lateral TFL is generally reduced with a little peak at the start of the swing phase.

HMD itself, the changes in muscular activity during walking on virtual surface patterns indicate a more careful and insecure gait style. This is especially highlighted by the behaviour of the left BCF, which shows higher fluctuations during the conditions involving surface patterns. This could be interpreted as a stuttering braking action which inhibits the forward movement of the leg. An insecure gait style also becomes apparent as the left RCF muscle also shows a fluctuation and tends to reduce its activation during the beginning of the swing phase. This indicates a forward swing with less vigor. Further evidence for an insecure gait style stems from the increase in the activity of the right GLM during the onset of the swing in the coloured Lisbon condition. This would lead to a short but noticeable braking action of the leg. Surprisingly, many muscles of the left leg (esp. GLM, RCF, SOL, and TIB) showed a reduction in their activity. Since GLM, VAS, and SOL are extensor muscles, this points towards a stiffer gait style, which should lead to a reduction in vertical body centre excursion. Combined with the information about the BCF and RCF, this could indicate a more cautious gait style. Since the preliminary analysis of the approximate COM yielded no results, neither an increase in lateral nor a decrease in vertical COM excursion can be assumed. However, the findings call for a further in-depth analysis of the motion capture and the head tracking data, since the upper body sway could differ from the COM excursion.


Fig. 7. Difference of the muscular activation in the left leg between each surface condition and control (solid grey surface). Especially the flexor muscles BCF and RCF show a fluctuating behaviour which points towards a less efficient gait. This could be attributed to uncertainties induced by the surface patterns.

It is therefore possible that humans change the control strategy of their limbs when walking over irritating surface patterns. Some of these changes are attributed to the colour of the patterns or depend on both the colour and the pattern itself. This fits well into the literature on the impact of emotions and insecurity on gait and highlights the importance of inclusive structural design. Especially for patients, these patterns can prove to be a huge challenge. This also has to be considered by the controller of an exoskeleton or an active prosthesis to avoid tripping or falls. Such an adaptive controller could, for example, provide faster and more guided support to cope with these disturbances. For this, it would also be necessary to develop a measure for the level of irritation that goes beyond self-report. This measure could be derived from biomechanical data (e.g. the Centre of Mass excursion) and provide valuable information to the system's controller. Further investigations can be used to develop supportive structural designs and also to enhance the control mechanisms of assistive devices. While more research is needed to get a deeper insight into the mechanisms of disturbances by visual patterns, this study showed an influence of visual patterns on gait and behavior even in young and healthy participants.

Acknowledgments. This research was funded by Deutsche Forschungsgemeinschaft DFG (German Research Foundation) grant number 446124066. Further support was provided by The Adaptive Mind research initiative funded by the Hessian Ministry of Higher Education, Research, Science and the Arts.


References

1. Institut für angewandte Sozialwissenschaft GmbH (infas): Abschlussbericht - Repräsentativbefragung zur Teilhabe von Menschen mit Behinderung (2022). https://www.bundesregierung.de/breg-de/service/publikationen/abschlussbericht-repraesentativbefragung-zur-teilhabe-von-menschen-mitbehinderungen-2053322 [German]
2. Statistisches Bundesamt (Destatis): Disabled Persons: Severely disabled people by sex, age and severe-disability rate. Statistisches Bundesamt (2022). https://www.destatis.de/EN/Themes/Society-Environment/Health/Disabled-People/Tables/disabled-people-gender-age.html
3. World Health Organization: Global report on health equity for persons with disabilities. World Health Organization, Geneva (2022). Licence: CC BY-NC-SA 3.0 IGO. https://www.who.int/publications/i/item/9789240063600
4. Center for Disease Control and Prevention: Disability and Health Overview. National Center on Birth Defects and Developmental Disabilities, Centers for Disease Control and Prevention (2020). https://www.cdc.gov/ncbddd/disabilityandhealth/disability.html
5. Wilkins, A.J., Penachhio, O., Leonards, U.: The built environment and its patterns: a view from the vision sciences. J. Sustain. Des. Appl. Res. 6(1), 2–8 (2018)
6. Dickson, G., Burtan, D., James, S., Phillips, D., Stevanov, J., Heard, P., Leonards, U.: Walking on visual illusions. I-Perception 12(1) (2021). https://doi.org/10.1177/2041669520981101
7. Okudan, Z.V., Özkara, C.: Reflex epilepsy: triggers and management strategies. Neuropsychiatr. Dis. Treat. 14, 327–337 (2022). https://doi.org/10.2147/NDT.S107669
8. British Standards Institution: Design for the mind - Neurodiversity and the built environment - Guide (Standard No. 6463) (2022). https://standardsdevelopment.bsigroup.com/projects/2020-00234#/section
9. Leonards, U., Fennell, J.G., Oliva, G., Drake, A., Redmill, D.W.: Treacherous pavements: paving slab patterns modify intended walking directions. PLoS ONE 10(6), e0130034 (2015). https://doi.org/10.1371/journal.pone.0130034
10. Stasica, M., Frankenstein, J., Schwinn, G., Zhao, G., Seyfarth, A.: Physical and psychological factors influencing the acceptance of vibrations on lightweight footbridges. In: Footbridge 2022, Madrid, ES (2022). https://doi.org/10.24904/footbridge2022.287
11. Wiederhold, B.K., Bouchard, S.: Fear of heights (Acrophobia): efficacy and lessons learned from psychophysiological data. In: Advances in Virtual Reality and Anxiety Disorders, pp. 119–144. Springer, Boston, MA, US (2004)
12. Wuehr, M., Kugler, G., Schniepp, R., Eckl, M., Pradhan, C., Jahn, K., Huppert, D., Brandt, T.: Balance control and anti-gravity muscle activity during the experience of fear at heights. Physiol. Rep. 2(2), e00232 (2014)
13. Boffino, C.C., De Sá, C.S.C., Gorenstein, C., Brown, R.G., Basile, L.F., Ramos, R.T.: Fear of heights: cognitive performance and postural control. Eur. Arch. Psychiatry Clin. Neurosci. 259(2), 114–119 (2009)
14. Brandt, T., Kugler, G., Schniepp, R., Wuehr, M., Huppert, D.: Acrophobia impairs visual exploration and balance during standing and walking. Ann. N. Y. Acad. Sci. 1343(1), 37–48 (2015)
15. Lee, D.N., Aronson, E.: Visual proprioceptive control of standing in human infants. Percept. Psychophys. 15, 529–532 (1974)


16. Barazesh, H., Sharbafi, M.A.: A biarticular passive exosuit to support balance control can reduce metabolic cost of walking. Bioinspiration Biomim. 15(3) (2020)
17. Zhao, G., Sharbafi, M.A., Vlutters, M., van Asseldonk, E., Seyfarth, A.: Bioinspired balance control assistance can reduce metabolic energy consumption in human walking. IEEE Trans. Neural Syst. Rehabil. Eng. 27(9), 1760–1769 (2019)
18. Monaco, V., Tropea, P., Aprigliano, F., Martelli, D., Parri, A., Cortese, M., Molino-Lova, R., Vitiello, N., Micera, S.: An ecologically-controlled exoskeleton can improve balance recovery after slippage. Sci. Rep. 7(1) (2017)
19. Stasica, M., Zhao, G., Fritzsche, M., Schwinn, G., Frankenstein, J., Seyfarth, A.: A setup for assessing the acceptance of vibrations to improve footbridge design. In: Footbridge 2022, Madrid, ES (2022). https://doi.org/10.24904/footbridge2022.288
20. Grimmer, M., Zeiss, J., Weigand, F., Zhao, G.: Exploring surface electromyography (EMG) as a feedback variable for the human-in-the-loop optimization of lower limb wearable robotics. Front. Neurorobot. (2022). https://doi.org/10.3389/fnbot.2022.948093

Towards a Gait Planning Training Strategy Using Lokomat

Thayse Saraiva de Albuquerque(B), Lucas José da Costa, Ericka Raiane da Silva, Geovana Kelly Lima Rocha, André Felipe Oliveira de Azevedo Dantas, Caroline do Espírito Santo, and Denis Delisle-Rodriguez

Postgraduate Program in Neuroengineering, Edmond and Lily Safra International Institute of Neurosciences, Santos Dumont Institute, Macaiba, Brazil
{thayse.albuquerque,lucas.costa}@edu.isd.org.br, {caroline.santo,denis.rodriguez}@isd.org.br

Abstract. The main goal of this study was to investigate how strategies used to perform motor gait planning in an uncontrolled environment affect patterns of neural activity in healthy volunteers during task performance. Materials and Methods: A 16-channel EEG system and 2 channels of sEMG were used to acquire data from 10 healthy subjects. The data were processed to obtain, for each foot, the ERD for the μ and β rhythms, the spectrogram and continuous wavelet transform of the averaged 5 s segments, and the STFT of the motor planning period. Results: The results show significant differences in the ERD and FFT between feet, which will be used in future works to develop an intervention using Lokomat to aid SCI individuals who cannot voluntarily initiate gait. Conclusion: Although this is a preliminary study, it is still possible to draw some initial conclusions about the feasibility of using EEG signals to analyze gait motor planning.

Keywords: Electroencephalography · Gait rehabilitation · Lokomat · Motor planning · Motor-related potential

1 Introduction

The prediction of human movement intention plays an important role in the process of gait rehabilitation, enhancing reaction time, speed and movement accuracy. Being a link between decision-making and action, motor planning is the process by which the brain decides how a movement will be performed in a given situation, also involving the prediction of movement intention [1]. Motor planning requires the integration and processing of sensory inputs, such as visual, proprioceptive, and vestibular information, to better generate the recruitment of muscles and joints (motor output) before motor execution. Various studies have demonstrated that structures such as the basal ganglia, the cerebellum and the cerebral cortex are involved in the planning and execution

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
E. S. E. Youssef et al. (Eds.): CLAWAR 2023, LNNS 811, pp. 357–367, 2024. https://doi.org/10.1007/978-3-031-47272-5_30


of purposeful locomotion [2]. At the neural level, gait control involves all levels, whereas sensory integration, motor planning, and gait execution occur at the cortical-subcortical level [3]. This constant interaction between cortical and subcortical areas makes human motor planning flexible and adaptable to different environmental conditions (stimulus), generating the motor command addressed to the exact goal of the motor task (response) [4]. Time-domain techniques have been proposed to recognize with high accuracy the cortical changes preceding the motor act (i.e., the time course between the stimulus and the motor response), which are becoming crucial to observe and detect brain wave variations linked to the control of voluntary movement. Commonly, this phenomenon has been detected through invasive and non-invasive techniques, such as the electrocorticogram (ECoG) and the electroencephalogram (EEG), respectively, which are methods of high temporal resolution capable of detecting spontaneous brain electrical activity [5–7]. The conventional approaches for gait rehabilitation typically require the collaboration of several therapists to manually assist the patient's legs and trunk during gait training. These approaches can be intense and exhausting for both patients and professionals, as the therapy demands physical effort and exercises with a large number of repetitions. Moreover, the patients are not engaged with the therapy, as they do not know how their physiological activity (for instance, in the muscles and motor cortex) is evolving. Therefore, conventional therapy is not sustainable in the long term. Robotic technologies, such as exoskeletons, have rapidly advanced, leading to the development of upper- and lower-limb robotic devices that serve as assistive or rehabilitation tools for individuals with severe motor impairments [8–10]. As advantages, these systems can be used in rehabilitation therapies to provide a great number of controlled movements with high precision, avoiding additional damage. For instance, the Lokomat platform provides a walking experience for individuals who are often unable to perform this task voluntarily [11]. In short, the Lokomat has a pair of robotic legs, a treadmill, and a body weight support system (BWSS) that sustains the individual's body weight during walking on the treadmill. Various studies have been conducted with spinal cord injury (SCI) individuals using the Lokomat, showing its benefit for recovering or enhancing some of the individuals' physiological functions and necessities [11–13]. However, gait recovery in complete SCI individuals using the Lokomat is still a big challenge. As a result, there is an increasing search for more efficient and accessible alternatives for gait rehabilitation. For instance, brain-computer interfaces have shown great potential and promising results, engaging the user's central nervous system to activate external devices according to an identified walking intention [14–16]. The sensorimotor cortical oscillations over the μ (8–12 Hz) and β (13–30 Hz) rhythms are closely related to motor action production [17,18]. These cortical oscillations decrease their amplitude 2 s before the movement onset, and this decrease is sustained throughout the movement [19,20]. This power decrease


is often referred to as event-related desynchronization (ERD). ERD is considered a marker of motor processing and can be observed in the mu and/or beta band [21]. Therefore, investigating the changes in cortical oscillations induced by BCIs during gait training can provide insights into their effectiveness as a rehabilitation therapy [22,23]. Undoubtedly, the Lokomat provides a suitable and excellent environment for SCI individuals to train and relearn the motor planning of walking in a safe way. It is well known that sensorimotor cortical oscillations are closely related to motor action production, and they are considered a marker of motor processing [19,21]. Thus, gait training using the Lokomat that promotes the modulation of motor rhythms through neurofeedback or brain-computer interfaces (BCIs) can enhance neuroplasticity and consequently restore walking functions [24,25]. As a first stage, the objective of this study is to investigate, in an uncontrolled environment, how cortical patterns in healthy volunteers without any lower-limb impairments are affected while they perform motor planning of gait. The findings will be used in future work to develop an intervention strategy using the Lokomat for training motor preparation/planning, to aid SCI individuals who cannot voluntarily initiate gait.
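For readers unfamiliar with the quantity, the sketch below shows the standard relative band-power formulation of ERD/ERS in Python (negative values indicate a power drop with respect to a baseline window). It is a generic illustration, not the authors' implementation; trial averaging and window choices are omitted.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Average power spectral density of x in the [lo, hi] Hz band (Welch estimate)."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), int(fs)))
    return pxx[(f >= lo) & (f <= hi)].mean()

def erd_percent(epoch, baseline, fs, band=(8, 12)):
    """Relative power of an epoch with respect to a baseline segment, in percent.
    band=(8, 12) targets the mu rhythm; use (13, 30) for the beta rhythm."""
    p_epoch = band_power(epoch, fs, *band)
    p_base = band_power(baseline, fs, *band)
    return 100.0 * (p_epoch - p_base) / p_base
```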

2 Materials and Methods

2.1 Participants

To characterize the sample, the volunteers completed a sociodemographic questionnaire via an electronic form in which they reported their age, gender, weight (kg), height (m), and any impairments (cognitive, visual, motor, or auditory) that could affect the experimental protocol's feasibility. A total of ten healthy volunteers were recruited in our study and provided their sociodemographic data and informed consent. The final sample included nine healthy (six male, aged between 19 and 29 years, height 1.75 ± 12.9 m and weight 75.89 ± 19.9 kg), right-footed volunteers with no history of impairments or abnormalities. This study was carried out under the project with CAAE code number 53127921.2.0000.0129, approved by the Research Ethics Committee of the Santos Dumont Institute (ISD). The study was also conducted in accordance with the 1964 Declaration of Helsinki.

2.2 Experimental Protocol

The experiment comprised a single session, which was divided into two conditions: a static condition and a walking condition. The former was the baseline data acquisition (BL), in which volunteers were asked to remain in a standing, resting position without making sudden movements for a period of 15 s. In the latter condition, participants were instructed to walk while receiving random verbal commands via audio that conditioned them to change their movement plan. Audio cues were used here to randomly indicate the beginning of each gait cycle, following these four short phrases in Portuguese:


1. "Direito, longo", which means that the volunteer should start the gait cycle with the right foot and with long steps.
2. "Direito, curto", which means that the volunteer should start the gait cycle with the right foot and with short steps.
3. "Esquerdo, longo", which means that the volunteer should start the gait cycle with the left foot and with long steps.
4. "Esquerdo, curto", which means that the volunteer should start the gait cycle with the left foot and with short steps.

Here, each audio cue was randomized as a strategy to promote motor planning of walking. A total of 45 gait cycles was performed in an uncontrolled environment by each participant, on a flat surface simulating a daily living environment, as shown in Fig. 1. Figure 2 shows a trial, where each gait cycle lasted approximately 5 s, followed by a period of rest varying randomly between 1 and 3 s before starting the next trial. Before the data collection, all participants were briefly familiarized with how to correctly execute and complete a walking cycle, identifying the end of each cycle.

Fig. 1. a Flat surface simulating a daily living environment and b placement of the sEMG electrodes on the tibialis anterior muscle
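The randomized cue schedule of this protocol can be reproduced with a few lines of code; the sketch below is illustrative only, since the authors do not detail their randomization beyond it being random.

```python
import random

CUES = ["Direito, longo", "Direito, curto", "Esquerdo, longo", "Esquerdo, curto"]

def make_schedule(n_trials=45, seed=None):
    """Return (cue, rest_seconds) pairs: one randomly chosen cue per gait cycle
    and a rest interval drawn uniformly between 1 and 3 s, as described above."""
    rng = random.Random(seed)
    return [(rng.choice(CUES), rng.uniform(1.0, 3.0)) for _ in range(n_trials)]
```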

2.3 Data Collection

This step involves the placement of an EEG cap of 64 electrodes to capture brain activity over various locations simultaneously. Before the EEG and sEMG recording, the participant's skin on the scalp was cleaned with 70% ABV (Alcohol by Volume), followed by exfoliation with an abrasive gel and the application of a conductive gel. This process ensured a skin impedance below 10 kΩ. The EEG data were recorded in a frequency range from DC to 320 Hz through 16 dry active electrodes (actiCAP Xpress from Brain Products) over Fp1, Fp2, F3, F4, FC1, FC2, C3, C4, Cz, CP2, CP1, P3, P4, Pz, O1, and O2, in accordance with the 10–20 system. A similar electrode setup was used in [14,26]. The reference and ground were chosen at the Fpz and Fz locations, respectively. The V-Amp 16 equipment from Brain Products was used to acquire the EEG signals with a sampling rate of 512 Hz. Furthermore, two sEMG channels were also acquired by using the V-Amp 16 auxiliary inputs, as a reference to detect the gait onset. In our study, each pair of sEMG electrodes was placed near the middle of the muscle belly, with a distance of 2 cm between the centers of the electrodes [27]. Specifically, the two sEMG channels were used with self-adhesive disposable electrodes placed on the tibialis anterior muscle of each limb, as shown in Fig. 1. It is worth mentioning that this muscle was chosen for its influence and rapid activation during the walking action [28]. Prior to the electrode placement, the skin over the selected muscle was first shaved with disposable blades for volunteers presenting hair in the region of interest, and then cleaned with 70% ABV. The reference electrodes were fixed to the patella, which is the closest bone structure.

Fig. 2. Timing of each trial in the experimental protocol.

2.4 Data Processing

Gait Onset Detection. The sEMG signals were first filtered using a fifth-order band-pass Butterworth filter in a frequency range from 20 to 250 Hz to remove noise and unwanted components, such as motion artifacts caused by cable movements. The filtered signals were then rectified by applying the square operation and smoothed through a second-order Butterworth filter with a cutoff frequency of 6 Hz, in order to obtain their corresponding envelope [29]. The peak-to-peak amplitude range of the envelope of each sEMG signal was determined by finding two mean values, one from 10 peaks and the other from 10 minimum values. Afterwards, these two values were used to calculate the amplitude range (Amax − Amin), which is then applied as a reference to detect the gait onset, which occurs when the envelope signal reaches 20% of the amplitude range, as shown in Fig. 3.

Fig. 3. Procedure to detect the gait onset.

EEG Processing. The raw EEG was first filtered using a second-order band-pass Butterworth filter in a frequency range from 0.1 to 30 Hz to remove undesirable noise and preserve the frequency components of interest. Using the gait onset as a reference, the filtered EEG was then segmented from –2 to +3 s, with 0 s being the instant of gait initiation. Then, segments of 5 s in length were obtained, comprising a period of 2 s of pre-movement and 3 s of post-movement for each cycle. It is worth noting that the pre-movement or motor planning period begins around 2 s before the gait onset. With the EEG segments and the knowledge of whether the volunteer started walking with the right or left leg, the EEG data analysis was focused on Cz, which is located over the foot motor area. For each segment, the Common Average Reference (CAR) technique was used to remove the common interference [30]. After obtaining the reference-free Cz segments in the time domain, the continuous wavelet transform (CWT) and short-time Fourier transform (STFT) were calculated on each segment to obtain the time-frequency (TF) representation from –2 to 3 s. Moreover, the fast Fourier transform was computed to obtain the power spectrum corresponding to the motor planning period (–1 to 0 s). Then, the averaged TF and power spectrum were calculated on Cz for the following two conditions: gait initiation with the right leg, and gait initiation with the left leg. In order to obtain a baseline reference for our analysis, we calculated a reference-free Cz segment from condition 1 (baseline) using a moving window of 1 s with an overlap of 50%. This resulted in multiple segments, which were used to determine the average baseline segment for our analysis. The baseline segment was then used to compute the ERD of the signal, as well as the difference between the STFTs of the two experimental conditions. The ERD comparisons were made separately for the μ and β components of Cz, and we subtracted the mean value of the first 500 ms of the results to ensure that the results started close to 0% relative power.
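A minimal Python sketch of the sEMG onset detection described above is shown below. The selection of the "10 peaks" and "10 minimum values" is interpreted here simply as the 10 largest and 10 smallest envelope samples, which is an assumption, and the sampling rate is assumed to be above 500 Hz so that the 250 Hz cut-off stays below the Nyquist frequency.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def gait_onset(semg, fs):
    """Detect the gait onset sample index from a tibialis anterior sEMG channel."""
    # 1) 5th-order band-pass, 20-250 Hz, to remove noise and motion artifacts
    b, a = butter(5, [20 / (fs / 2), 250 / (fs / 2)], btype="band")
    x = filtfilt(b, a, semg)
    # 2) rectify by squaring and smooth with a 6 Hz 2nd-order low-pass -> envelope
    b, a = butter(2, 6 / (fs / 2))
    env = filtfilt(b, a, x ** 2)
    # 3) amplitude range from the mean of 10 maxima and the mean of 10 minima
    a_max = np.mean(np.sort(env)[-10:])
    a_min = np.mean(np.sort(env)[:10])
    threshold = a_min + 0.2 * (a_max - a_min)   # 20 % of the amplitude range
    # 4) onset = first sample whose envelope reaches the threshold
    idx = int(np.argmax(env >= threshold))
    return idx if env[idx] >= threshold else None
```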

3 Results

Due to the exclusion of one volunteer whose EEG and sEMG signals were heavily corrupted by artifacts, only nine volunteers were included in the analysis. Figure 4 shows the automatic gait onset detection for a segment of the sEMG signal. The method showed good performance, with small errors that may be minimized by using averaged segments, thus ensuring a robust interpretation of the EEG data when analyzed by gait cycles.

Fig. 4. Precision of the method utilized to detect the gait onset instants

Figure 5d shows a similar averaged power spectrum for the motor planning periods, taking into account the following two conditions: initiating with the right leg, and initiating with the left leg. We observed a predominant ERD for the μ and β rhythms, with different relative power for each gait initiation condition, as shown in Fig. 5a. For instance, when starting with the right leg, the μ ERD decreased during gait motor planning, reaching a relative power of –15%. In contrast, this effect was not observed for the β rhythm, which presented little variation throughout the motor planning period. On the other hand, an opposite effect was observed for the left leg: the relative power of the μ ERD showed only a slight, late decrease before gait initiation (see Fig. 5a), while the β ERD decreased by 15% during the motor planning period. It is worth noting the antagonistic effect between μ ERD and β ERD during gait motor planning when initiating with the right or the left leg, as shown in Fig. 5a. Also notice the suppression of the μ ERD relative power when the participants began walking with their left leg, compared to the condition beginning with the right (dominant) leg. Both time-frequency representations, obtained with the STFT and the CWT, confirmed the power increase at low frequencies during motor planning, as shown in Figs. 5b–c, respectively. Notice that, for gait initiation with the right leg, the highest power values were obtained at low frequencies when compared with the left leg.


Fig. 5. Average representation for ERD, time-frequency representation and power spectrum from all volunteers. Top row shows the results for the gait onsets with the right leg, whereas the bottom row for gait onsets with the left leg. a ERD for μ and β waves; b Spectrogram; c Wavelet transform; and d power spectrum over motor planning periods.

4 Discussion and Conclusion

The main goal of this study was to investigate how strategies used to perform motor gait planning in an uncontrolled environment affect patterns of neural activity in healthy volunteers during task performance. To the best of our knowledge, this research is a pioneering study analyzing neural patterns during gait motor planning in an uncontrolled environment. As a highlight, this research aims to create, in a second stage, a new motor rehabilitation strategy from the findings to act simultaneously on the central and peripheral nervous systems of SCI individuals by using the Lokomat, neurofeedback and BCI. Despite the environment being susceptible to external artifacts that can affect signal acquisition, our results showed the desynchronization of the mu and beta rhythms during motor planning, making it possible to identify the period that precedes the gait motor action, corroborating the findings already described in [6,26,31]. The ability to predict gait intention is an important resource for designing and implementing a BCI system for rehabilitation with possible real-life applications. The early identification of the individual's intention allows a natural and effective response of the end-effector (for instance, a robotic exoskeleton) on the limb, thus increasing the effectiveness of active rehabilitation. Zheng et al. (2021) demonstrated that slow and active training better facilitates cortical activation associated with cognition and motor control [32]. Our findings are relevant for advancing the development of novel interventions aiming to enhance gait rehabilitation in SCI individuals. Despite the effort and


progress in the scientific community, significant recovery of motor functions after a spinal cord injury is still a big challenge, as cortical plasticity mechanisms are required for the impaired individuals to relearn previous motor patterns using an altered motor pathway, since the lesion can lead to cortical disorganization that affects the topographic organization of the primary motor cortex, which is a critical area for the planning, initiation and execution of movements [33]. As a limitation, the small sample size is a factor that prevents consolidating our results. Also, other factors related to the environment used to execute the experimental protocol may have introduced artifacts during signal acquisition because of its uncontrolled nature. Additionally, during one of the acquisition days, it was necessary to use a set of cup electrodes for sEMG acquisition, which required fixation with adhesive tape and the application of conductive gel. The electrodes used on that day were found to be noisier than the self-adhesive electrodes, resulting in the exclusion of data from one of the volunteers. It is worth noting that, despite being instructed before the experiment, some of the volunteers were confused when executing the audio commands for walking. In some trials, subjects started with delays or used the wrong leg to initiate the gait. The step length (short or long) was also confused in some trials. This is a preliminary study and, although it was conducted with a small sample, it is still possible to draw some initial conclusions about the feasibility of using EEG signals to analyze lower-limb movements during gait motor planning.

Acknowledgments. The authors would like to acknowledge the CNPq scholarships from Brazil.

References

1. Wagner, F.B., Mignardot, J.B., Le Goff-Mignardot, C.G., Demesmaeker, R., Komi, S., Capogrosso, M., Rowald, A., Seáñez, I., Caban, M., Pirondini, E., et al.: Targeted neurotechnology restores walking in humans with spinal cord injury. Nature 563(7729), 65–71 (2018)
2. Takakusaki, K.: Functional neuroanatomy for posture and gait control. J. Mov. Disord. 10(1), 1 (2017)
3. MacKinnon, C.D.: Sensorimotor anatomy of gait, balance, and falls. Handb. Clin. Neurol. 159, 3–26 (2018)
4. Scott, S.H., Cluff, T., Lowrey, C.R., Takei, T.: Feedback control during voluntary motor actions. Curr. Opin. Neurobiol. 33, 85–94 (2015)
5. Biasiucci, A., Franceschiello, B., Murray, M.M.: Electroencephalography. Curr. Biol. 29(3), R80–R85 (2019)
6. Hortal, E., Úbeda, A., Iáñez, E., Azorín, J.M., Fernández, E.: EEG-based detection of starting and stopping during gait cycle. Int. J. Neural Syst. 26(07), 1650029 (2016)
7. Hassan, M., Wendling, F.: Electroencephalography source connectivity: aiming for high resolution of brain networks in time and space. IEEE Signal Process. Mag. 35(3), 81–96 (2018)
8. Grasmücke, D., Zieriacks, A., Jansen, O., Fisahn, C., Sczesny-Kaiser, M., Wessling, M., Meindl, R.C., Schildhauer, T.A., Aach, M.: Against the odds: what to expect in rehabilitation of chronic spinal cord injury with a neurologically controlled hybrid assistive limb exoskeleton. A subgroup analysis of 55 patients according to age and lesion level. Neurosurg. Focus 42(5), E15 (2017)
9. Gao, M., Wang, Z., Pang, Z., Sun, J., Li, J., Li, S., Zhang, H.: Electrically driven lower limb exoskeleton rehabilitation robot based on anthropomorphic design. Machines 10(4), 266 (2022)
10. Wendong, W., Hanhao, L., Menghan, X., Yang, C., Xiaoqing, Y., Xing, M., Bing, Z.: Design and verification of a human-robot interaction system for upper limb exoskeleton rehabilitation. Med. Eng. Phys. 79, 19–25 (2020)
11. Alashram, A.R., Annino, G., Padua, E.: Robot-assisted gait training in individuals with spinal cord injury: a systematic review for the clinical effectiveness of Lokomat. J. Clin. Neurosci. 91, 260–269 (2021)
12. Marchal-Crespo, L., Riener, R.: Technology of the robotic gait orthosis Lokomat. In: Neurorehabilitation Technology, pp. 665–681. Springer (2022)
13. Qaiser, T., Eginyan, G., Chan, F., Lam, T.: The sensorimotor effects of a lower limb proprioception training intervention in individuals with a spinal cord injury. J. Neurophysiol. (2019)
14. Delisle-Rodriguez, D., Cardoso, V., Gurve, D., Loterio, F., Romero-Laiseca, M.A., Krishnan, S., Bastos-Filho, T.: System based on subject-specific bands to recognize pedaling motor imagery: towards a BCI for lower-limb rehabilitation. J. Neural Eng. 16(5), 056005 (2019)
15. Jiang, N., Gizzi, L., Mrachacz-Kersting, N., Dremstrup, K., Farina, D.: A brain-computer interface for single-trial detection of gait initiation from movement related cortical potentials. Clin. Neurophysiol. 126(1), 154–159 (2015)
16. Karimi, F., Jiang, N.: A reference-based source extraction algorithm to extract movement related cortical potentials for brain-computer interface applications. In: 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), pp. 3603–3607. IEEE (2019)
17. McFarland, D.J., Miner, L.A., Vaughan, T.M., Wolpaw, J.R.: Mu and beta rhythm topographies during motor imagery and actual movements. Brain Topogr. 12, 177–186 (2000)
18. Neuper, C., Wörtz, M., Pfurtscheller, G.: ERD/ERS patterns reflecting sensorimotor activation and deactivation. In: Event-Related Dynamics of Brain Oscillations, vol. 159, pp. 211–222. Elsevier (2006)
19. Zhang, W., Low, L.F., Schwenk, M., Mills, N., Gwynn, J.D., Clemson, L.: Review of gait, cognition, and fall risks with implications for fall prevention in older adults with dementia. Dement. Geriatr. Cogn. Disord. 48(1–2), 17–29 (2019)
20. Erbil, N., Ungan, P.: Changes in the alpha and beta amplitudes of the central EEG during the onset, continuation, and offset of long-duration repetitive hand movements. Brain Res. 1169, 44–56 (2007)
21. Pfurtscheller, G., Da Silva, F.L.: Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin. Neurophysiol. 110(11), 1842–1857 (1999)
22. Lazarou, I., Nikolopoulos, S., Petrantonakis, P.C., Kompatsiaris, I., Tsolaki, M.: EEG-based brain-computer interfaces for communication and rehabilitation of people with motor impairment: a novel approach of the 21st century. Front. Hum. Neurosci. 12 (2018)
23. Micera, S., Caleo, M., Chisari, C., Hummel, F.C., Pedrocchi, A.: Advanced neurotechnologies for the restoration of motor function. Neuron 105(4), 604–620 (2020)
24. Shokur, S., Donati, A.R.C., Campos, D.S.F., Gitti, C., Bao, G., Fischer, D., Almeida, S., Braga, V.A.S., Augusto, P., Petty, C., Alho, E.J.L., Lebedev, M., Song, A.W., Nicolelis, M.A.L.: Training with brain-machine interfaces, visuo-tactile feedback and assisted locomotion improves sensorimotor, visceral, and psychological signs in chronic paraplegic patients. PLOS ONE 13 (2018)
25. James, N.D., McMahon, S.B., Field-Fote, E.C., Bradbury, E.J.: Neuromodulation in the restoration of function after spinal cord injury. Lancet Neurol. 17(10), 905–917 (2018)
26. Shafiul Hasan, S., Siddiquee, M.R., Atri, R., Ramon, R., Marquez, J.S., Bai, O.: Prediction of gait intention from pre-movement EEG signals: a feasibility study. J. Neuroeng. Rehabil. 17(1), 1–16 (2020)
27. Hallett, M., DelRosso, L.M., Elble, R., Ferri, R., Horak, F.B., Lehericy, S., Mancini, M., Matsuhashi, M., Matsumoto, R., Muthuraman, M., et al.: Evaluation of movement and brain activity. Clin. Neurophysiol. 132(10), 2608–2638 (2021)
28. Winter, D., Yack, H.: EMG profiles during normal human walking: stride-to-stride and inter-subject variability. Electroencephalogr. Clin. Neurophysiol. 67, 402–411 (1987)
29. Konrad, P.: The ABC of EMG. A practical introduction to kinesiological electromyography 1(2005), 30–5 (2005)
30. Ludwig, K.A., Miriani, R.M., Langhals, N.B., Joseph, M.D., Anderson, D.J., Kipke, D.R.: Using a common average reference to improve cortical neuron recordings from microelectrode arrays. J. Neurophysiol. 101(3), 1679–1689 (2009)
31. Hashimoto, Y., Ushiba, J.: EEG-based classification of imaginary left and right foot movements using beta rebound. Clin. Neurophysiol. 124(11), 2153–2160 (2013)
32. Zheng, J., Shi, P., Fan, M., Liang, S., Li, S., Yu, H.: Effects of passive and active training modes of upper-limb rehabilitation robot on cortical activation: a functional near-infrared spectroscopy study. NeuroReport 32(6), 479–488 (2021)
33. Mohammed, H., Hollis, E.R.: Cortical reorganization of sensorimotor systems and the role of intracortical circuits after spinal cord injury. Neurotherapeutics 15(3), 588–603 (2018)

Assistive Walker Which Stabilizes the User's Posture and Prevent Falls Through Abnormal Gait Pattern Recognition

Daisuke Chugo1(B), Yao Li1, Satoshi Muramatsu2, Sho Yokota3, Jin-Hua She4, and Hiroshi Hashimoto5

1 Kwansei Gakuin University, Sanda, Hyogo, Japan

[email protected]

2 Tokai University, Hiratsuka, Kanagawa, Japan
3 Toyo University, Kawagoe, Saitama, Japan
4 Tokyo University of Technology, Hachioji, Tokyo, Japan
5 Advanced Institute of Industrial Technology, Shinagawa, Tokyo, Japan

Abstract. In this paper, a novel walking assistance robot is proposed that takes into account the user’s gait. Many walking assistance robots have been developed and most of them have been discussed in terms of obstacle and step avoidance functions, path following functions and gravity disabling functions. However, in the opinion of care experts, a walking assistance robot must first realize a safe walking function that does not cause the user to fall over. Therefore, it is important to maintain the user’s body balance, and it is necessary to take the user’s characteristics into account and coordinate the walker to the user’s walking style. Therefore, our proposed system measures the gait of the user’s left and right legs and calculates the position of the center of gravity to estimate the user’s body balance. Our system is practical because it does not require sensors to be attached to the user’s body. Our system uses the estimated body balance to control the auxiliary force vector and keep the body balance stable. A prototype walker using the proposed system is fabricated to assist the patient in walking safely. Keywords: Assistive walker · Footstep recognition · Walking posture estimation

1 Introduction

In the daily life of older people, standing, walking and sitting can be the most serious and important activities due to lack of physical fitness [1, 2]. In severe cases, older people with poor physical fitness may become unable to stand or sit, and as a result may be restricted to a wheelchair or become bedridden [3]. Furthermore, once they are living that way, their physical fitness can deteriorate markedly due to lack of exercise [4]. In order to improve the QoL (quality of life) of the elderly, personal assistance robots are needed to enable them to carry out their daily activities alone easily, even as their physical strength declines with age.


We have developed robotic walkers and proposed standing assistance functions [5, 6] and sitting assistance functions [7], but we have not yet addressed walking assistance. Therefore, in this paper, we propose a walking assistance function. Of course, many researchers have developed walking assistive devices. Many conventional walkers are equipped with intelligent functions such as obstacle and step avoidance, path following and gravity disabling functions [8–14]. However, nursing professionals want to avoid falls during walking, which are the highest risk for walker users [15]. In their opinion, walker users are usually able to avoid obstacles and follow a reference path with the assistance of a walker, meaning that these functions are over-specified. On the other hand, walker users have limited physical strength, and unintended and erroneous leg movements can lead to a loss of body balance. Therefore, robotic walkers equipped with servo-brakes have been developed to provide a fall prevention function [12, 13]. However, these robots only judge that a fall accident is occurring when the center of gravity (COG) position of the user's body exceeds the safe range. Considering the user's characteristics, an abnormal gait can cause the body to lose balance, resulting in high-risk accidents such as falls, so a walker needs to detect such an erroneous gait before the user actually falls. Some researchers [14] have proposed a walker that detects abnormal gait, but the user has to wear sensors and markers, and there is no discussion of how the walker would assist the user during abnormal gait. Therefore, we have developed a new gait-assist robot that takes the user's gait into consideration [15]. This paper proposes a function for our gait-assist robot that provides gait assistance to balance the user's body before the user falls and, if a fall is unavoidable, prevents the user from falling by making an emergency stop before the fall. Specifically, the user's body balance is estimated by measuring the gait of the user's left and right feet using a three-dimensional laser scanner and calculating the position of the COG. By controlling the assist force using this estimated body balance, a stable body balance can be maintained. Furthermore, the angle of the user's feet when a fall is inevitable has been clarified in preliminary experiments. The assistive walker makes an emergency stop and supports the user in the short interval when a fall is inevitable but has not yet occurred. Experiments with the prototype confirmed the effectiveness of the proposed system.

2 Assistive Walker

Figure 1 shows our proposed walker. This walker has two drive wheels with in-wheel motors on the rear wheels. The gait-assist performance is equivalent to that of our previously developed walker with a standing assistance function [6]; therefore, the gait-assist function described in this paper can also be implemented on our walker with standing support. The walker robot proposed in this paper is a model specializing only in the walking assistance function and is therefore low-cost. The unit is small enough to be used in a typical Japanese house with corridors typically 900 mm wide: it is 550 mm wide and 740 mm long. The height can be varied between 735 and 860 mm according to the user's height. The radius of the rear wheels (actuated wheels) is 150 mm and that of the front wheels is 240 mm, making it possible to climb small steps with ease. Brushless DC motors are used for the in-wheel motors, and the output of each motor is 30 W (60 W in total).


This makes it possible for a user with a body weight of 100 kgf to climb a 12% slope. The maximum speed is 1.7 m/s; since the walking speed of a human being is about 4 km/h (≈1.1 m/s), this is sufficient to support the elderly in walking. The traction force is generated by the in-wheel motors and, in addition, a mechanical parking brake can be operated with the brake levers on the two hand grips. Our system is equipped with a 3D laser range scanner, YVT35-LX-FK (Hokuyo Automatic Co., Ltd.). This laser range scanner is used to detect the user's body shape and estimate the position of the COG; if the COG is out of balance, the system attempts to adjust the user's body balance by activating the motors on each wheel. This algorithm is described in the next section.

Fig. 1. Our prototype: (a) back view; (b) side view; (c) its coordinate system.

3 COG Position Estimation of the User

3.1 Body Posture Detection

The walker measures the positions of the feet and hips to estimate the user's COG position. As shown in Fig. 1a, the walker is equipped with a 3D laser range scanner mounted at a height of z = 250 mm (the coordinate system is defined in Fig. 1c), which can measure the user's upper body, thighs and lower legs.

Postural Estimation of Each Body Part. The proposed system measures the posture of each body part in order to estimate the user's walking condition. Generally, when a pedestrian falls, the lower leg often falls to the ground; therefore, our system first detects the right and left lower legs. This subsection describes the method of posture recognition for the left and right lower legs; the proposed system applies the same method to estimate the upper legs and upper body. To estimate the position and tilt of the lower legs, clustering is performed on the point cloud in the space defined in Fig. 1c using a k-d tree algorithm. An example of the clustering results is shown in Fig. 2. Next, posture information is derived from this point cloud information. Generally, the posture of a body part is derived by fitting the point cloud in three-dimensional space to a primitive such as a cylinder, which requires iterative calculation algorithms and places a high computational burden on the computer.


Therefore, in order to estimate posture changes in real time, this study proposes a posture estimation algorithm with a low computational load. First, the center position of the point group of each lower leg, P^{k}_{center} = (x^{k}_{center}, y^{k}_{center}, z^{k}_{center}), k = right or left, is determined as in (1) and Fig. 3a. Then, the point group recognized as the lower leg is divided into two parts, the points whose Z-axis coordinate values are larger than the center position and those whose values are smaller, as in (2) and Fig. 3b. The posture of each lower leg in space is calculated by finding the center position of each of the two divided parts and computing the vector connecting the respective center positions, as in (3) and Fig. 3c. The above algorithm can also be applied to posture estimation of body parts other than the lower leg, where the position of the center of gravity and the body posture vector are obtained in the same way as for the lower leg.

P^{k}_{center} = \frac{\sum_{i=0}^{n} P^{k}_{i}}{n}    (1)

P^{k}_{center1} = \frac{\sum_{i=0}^{m} P^{k}_{i}}{m}, \quad P^{k}_{center2} = \frac{\sum_{i=m+1}^{n} P^{k}_{i}}{n-m} \quad (z_{0} \sim z_{m} > z_{center})    (2)

L^{k} = P^{k}_{center2} - P^{k}_{center1}    (3)
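As an illustration of this low-cost posture estimation, the following Python sketch splits a clustered lower-leg point cloud at its center height and returns the posture vector of (1)–(3). It is a minimal sketch that assumes the point cloud has already been clustered (e.g., with a k-d tree based method); the array layout and function name are illustrative, not part of the authors' implementation.

```python
import numpy as np

def lower_leg_posture(points):
    """Estimate the posture of one lower leg from its clustered point cloud.

    points: (N, 3) array of [x, y, z] measurements belonging to one leg,
            assumed to be already clustered (e.g., via a k-d tree method).
    Returns (P_center, L): the cluster center (Eq. 1) and the posture vector
    connecting the upper and lower sub-cluster centers (Eqs. 2-3).
    """
    points = np.asarray(points, dtype=float)
    p_center = points.mean(axis=0)                  # Eq. (1)

    # Split the cluster at the center height (Eq. 2): points above / below z_center.
    upper = points[points[:, 2] > p_center[2]]      # z_0 .. z_m > z_center
    lower = points[points[:, 2] <= p_center[2]]
    p_center1 = upper.mean(axis=0)
    p_center2 = lower.mean(axis=0)

    posture_vector = p_center2 - p_center1          # Eq. (3)
    return p_center, posture_vector
```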

Fig. 2. Recognition results for the lower legs: (a) a subject's posture; (b) measured point cloud. The point cloud identified as the lower leg is shown in pink (right leg) and red (left leg).

Fig. 3. Posture detection of each lower leg: (a) center of gravity of the point group; (b) upper and lower parts; (c) posture vector.


A Foot Position Estimation. In foot detection, depending on the shape of the floor surface, the point cloud of the floor and the point cloud of the foot may not be properly separated. Therefore, the following algorithm is used to estimate the positions of the right and left feet; a short code sketch of this procedure is given after the list.

• To detect grounded feet, only point clouds in the xy-plane at a ground height of z = 15 mm are used. As a specification of the laser range scanner, it has a measurement range of 270° with 4 points/deg in the xy-plane. As shown in Fig. 4a, the measurement range of the walker is 74°, which means that the feet are detected with about 300 measurement points.
• If the measuring position P_{i}(x_{i}, y_{i}, z = 15 mm) of the ith laser line fulfills (4), our walker judges that P_{i-1} and P_{i} belong to the same obstacle. dist_{threshold} is set to 0.1 mm experimentally.

dist_{threshold} > \sqrt{(x_{i-1} - x_{i})^{2} + (y_{i-1} - y_{i})^{2}}    (4)

Our system expresses the start position of the jth object as P^{start}_{object j} and its end position as P^{end}_{object j}.

• If our system detects two objects, it evaluates the first object (j = 1) as the left foot and the second object (j = 2) as the right foot. If our system detects more than two objects, it selects the two objects with the largest values of (5) and evaluates them as the feet.

dist_{j} = \sqrt{\left(x^{end}_{j} - x^{start}_{j}\right)^{2} + \left(y^{end}_{j} - y^{start}_{j}\right)^{2}}    (5)

• If the jth object is the left foot and the (j + 1)th object is the right foot, each edge position is expressed as (6) and Fig. 4b.

P^{start}_{leftfoot} = P^{start}_{object j}, \quad P^{end}_{leftfoot} = P^{end}_{object j}, \quad P^{start}_{rightfoot} = P^{start}_{object(j+1)}, \quad P^{end}_{rightfoot} = P^{end}_{object(j+1)}    (6)

• The center position of each foot is given by (7). Figure 4b and c show the measurement results when the subject kicked off with the left or right foot while walking.

P_{leftfoot} = \frac{P^{start}_{leftfoot} + P^{end}_{leftfoot}}{2}, \quad P_{rightfoot} = \frac{P^{start}_{rightfoot} + P^{end}_{rightfoot}}{2}    (7)
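The following Python sketch illustrates the object segmentation and foot-center computation of (4)–(7). It is a minimal sketch, not the authors' implementation; the scan is assumed to be given as an ordered list of (x, y) points at z = 15 mm, and the threshold value is taken from the text (in the same units as the scan points).

```python
import math

DIST_THRESHOLD = 0.1  # threshold from the text (same units as the scan points)

def segment_objects(scan_xy):
    """Group consecutive scan points into objects using Eq. (4).
    scan_xy: ordered list of (x, y) points measured at z = 15 mm."""
    objects = [[scan_xy[0]]]
    for prev, curr in zip(scan_xy, scan_xy[1:]):
        gap = math.hypot(prev[0] - curr[0], prev[1] - curr[1])
        if gap < DIST_THRESHOLD:
            objects[-1].append(curr)   # same obstacle as the previous point
        else:
            objects.append([curr])     # start a new object
    return objects

def detect_feet(scan_xy):
    """Select the two largest objects (Eq. 5) and return the foot centers (Eq. 7)."""
    objects = segment_objects(scan_xy)

    def length(obj):
        # Distance between the start and end points of the object (Eq. 5).
        (xs, ys), (xe, ye) = obj[0], obj[-1]
        return math.hypot(xe - xs, ye - ys)

    feet = sorted(objects, key=length, reverse=True)[:2]
    centers = []
    for obj in feet:
        (xs, ys), (xe, ye) = obj[0], obj[-1]
        centers.append(((xs + xe) / 2.0, (ys + ye) / 2.0))   # Eq. (7)
    # Left/right assignment follows the scan order, as in Eq. (6).
    return centers
```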

Fig. 4. Foot position measurement and its results.

3.2 Estimation of COG Position

To estimate the location of the user's COG, a linkage model is used, as shown in Fig. 5a. When estimating, the following assumptions are made:

• The user enters the length of each link into the walker in advance.
• The user constantly holds the hand grips while walking (P_{lefthand} and P_{righthand} coincide with the positions of the hand grips).
• During walking, the user's left arm and right arm are in the same posture.

Using these assumptions, the walker can estimate the COG position of the user. Our system can estimate the center position of each body part (the right and left lower arms P^{k}_{lowerarm}, k = right or left, the right and left upper arms P^{k}_{upperarm}, the upper body P_{upperbody}, the right and left thighs P^{k}_{thigh}, and the right and left lower legs P^{k}_{lowerfoot}) using the algorithm explained in the previous subsection. The center position and the COG position of each body part are not the same because the human body is not homogeneous. Therefore, our system estimates the COG position using the human body parameters of the standard Japanese anthropometric data [15] shown in Table 1.

For example, the COG position of the left thigh link is shown in Fig. 5a and given by (8). l^{left}_{thigh} is the length of the left thigh and is measured beforehand. P^{left}_{thigh} is the center position estimated by our algorithm, so the distance from the end position to the center position is 50% of l^{left}_{thigh}. In this case, the distance from the end position to the COG position of the thigh is 47.5%, so the distance from the center position to the COG position is 2.5% of l^{left}_{thigh}. L^{left}_{thigh} is the posture vector estimated by our algorithm (Fig. 5b).

P^{left}_{thigh\_cog} = P^{left}_{thigh} + \frac{L^{left}_{thigh}}{\left\| L^{left}_{thigh} \right\|} \cdot (0.5 - 0.475)\, l^{left}_{thigh}    (8)

Using these, our walker calculates the COG position of the user as in (9). Figure 6 shows the estimation results of our proposal.

P_{cog} = 0.016\left(P^{left}_{lowerarm\_cog} + P^{right}_{lowerarm\_cog}\right) + 0.027\left(P^{left}_{upperarm\_cog} + P^{right}_{upperarm\_cog}\right) + 0.47\, P_{upperbody\_cog} + 0.11\left(P^{left}_{thigh\_cog} + P^{right}_{thigh\_cog}\right) + 0.051\left(P^{left}_{lowerfoot\_cog} + P^{right}_{lowerfoot\_cog}\right)    (9)
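A minimal Python sketch of the segment-COG and whole-body COG computation in (8) and (9) is given below. The segment mass ratios and COG ratios are taken from Table 1; the data structures and function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Mass ratio and COG (segment length) ratio per body segment, from Table 1.
SEGMENT_PARAMS = {
    "lowerarm":  {"mass_ratio": 0.016, "cog_ratio": 0.415},
    "upperarm":  {"mass_ratio": 0.027, "cog_ratio": 0.529},
    "upperbody": {"mass_ratio": 0.470, "cog_ratio": 0.493},
    "thigh":     {"mass_ratio": 0.110, "cog_ratio": 0.475},
    "lowerleg":  {"mass_ratio": 0.051, "cog_ratio": 0.406},
}

def segment_cog(p_center, posture_vector, length, cog_ratio):
    """Eq. (8): shift the measured segment center along the posture vector
    so that it lies at the segment's COG ratio instead of at 50% of the length."""
    unit = posture_vector / np.linalg.norm(posture_vector)
    return p_center + unit * (0.5 - cog_ratio) * length

def body_cog(segment_cogs):
    """Eq. (9): mass-weighted sum of the segment COG positions.
    segment_cogs maps a segment name to a list of its (left/right) COG positions."""
    total = np.zeros(3)
    for name, positions in segment_cogs.items():
        weight = SEGMENT_PARAMS[name]["mass_ratio"]
        for p in positions:
            total += weight * np.asarray(p)
    return total
```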

Fig. 5. Kinematical relationship of the waist, thigh and lower thigh: (a) human linkage model; (b) COG of the thigh.

Table 1. Human parameters

No  Name         M (%)   C.G. (%)   K (%)   Length (m)*
1   Lower arm     1.6     41.5      27.9    0.35
2   Upper arm     2.7     52.9      26.2    0.39
3   Upper body   47.0     49.3      34.6    0.48
4   Thigh        11.0     47.5      27.8    0.61
5   Lower thigh   5.1     40.6      27.4    0.56
6   Foot          1.1     59.5      20.4    0.26

M: mass ratio of the body segment to the total body mass. C.G.: the segment length ratio, indicating where the center of gravity is located along the length of each segment; from these data, this ratio is close to 50%. K: the ratio of the gyration radius of the body segment to the length of the segment; for example, the gyration radius of the forearm in this case is 0.35 m × 27.9% ≈ 0.098 m. *The length of each body segment and the body weight were measured for each subject; this data represents the body segment lengths of subject A.

Fig. 6. COG estimation result: (a) the user's posture; (b) COG of the user.


4 Assistance Methods in Consideration of Walking Balance

Our proposed walker assists the user's walking in the following ways:
1. Normally, the walker generates braking traction to eliminate imbalances in the user's body caused by walking.
2. If the user's posture exceeds the safe range (e.g. if the user is about to fall over), the maximum braking traction is generated in consideration of safety.

4.1 Walking Assistance for Maintaining Body Balance

Figure 7 shows the force applied to the walker and the braking force during walking. If (10) is satisfied, the walker judges that the user is kicking the ground with the left foot, as shown in Fig. 7. Otherwise, it judges that the user is kicking the ground with the right foot.

y_{leftfoot} > y_{rightfoot}    (10)

For a stable gait, the moment of the user's body M_{body} at P_{cog} is given by (11) and should be zero; f_{kick} is the forward kicking force of the supporting leg (in the case of Fig. 7, the supporting leg is the left one) and is derived by (12).

M_{body} = -f_{lefthand}\left(\frac{w}{2} - x_{cog}\right) + f_{kick} + f_{righthand}\left(\frac{w}{2} + x_{cog}\right)    (11)

f_{kick} = -m_{walker}\,\ddot{y} + f_{lefthand} + f_{righthand}    (12)

Our walker generates a braking traction force when the body moment M_{body} is nonzero, as in (13). k_{p} is a coefficient, which is derived experimentally. This wheel control can be used to maintain a stable posture of the user during walking assistance.

f_{rightwheel} = k_{p} M_{body},\; f_{leftwheel} = 0 \quad \text{if } M_{body} > 0; \qquad f_{rightwheel} = 0,\; f_{leftwheel} = k_{p} M_{body} \quad \text{if } M_{body} < 0    (13)

4.2 Emergency Brakes for Fall Prevention

Normally, the COG position changes within a certain range during walking, and this range varies between individuals. Therefore, the user first walks with the walker for 30 s without assistance, and the walker records the range of the user's COG position during this time. Of course, this first recording operation must be performed under the supervision of a physiotherapist. Using the measured data, our walker sets the safety zone as (14), where x^{min}_{cog}, y^{min}_{cog}, z^{min}_{cog} are the minimum values for this trial and x^{max}_{cog}, y^{max}_{cog}, z^{max}_{cog} are the maximum values; p, q and r are coefficients, experimentally set to p = 1.2, q = 1.3, r = 1.05. In the vertical direction, r is set to a smaller value due to the higher risk of falling down.

p \cdot x^{min}_{cog} < x_{cog} < p \cdot x^{max}_{cog}, \quad q \cdot y^{min}_{cog} < y_{cog} < q \cdot y^{max}_{cog}, \quad r \cdot z^{min}_{cog} < z_{cog} < r \cdot z^{max}_{cog}    (14)


Fig. 7. The force applied by the user to the walker and the braking force generated by the walker during walking assistance. In the case of this figure, the support leg is on the left.

If the estimated COG position exceeds the range defined by Eq. (14) during walking assistance, the walker applies maximum braking force to each wheel to prevent the user from falling.
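The wheel braking law (13) and the safety-zone check (14) can be summarised in a short Python sketch, shown below for illustration only. The gain k_p, the safety-zone coefficients and the interface to the wheel brakes are placeholders; the real controller runs on the walker's embedded system.

```python
KP = 1.0                      # experimentally derived gain (placeholder value)
P, Q, R = 1.2, 1.3, 1.05      # safety-zone coefficients from the text

def braking_forces(m_body, kp=KP):
    """Eq. (13): brake one wheel depending on the sign of the body moment."""
    if m_body > 0:
        return kp * m_body, 0.0        # (f_rightwheel, f_leftwheel)
    elif m_body < 0:
        return 0.0, kp * m_body
    return 0.0, 0.0

def inside_safety_zone(cog, cog_min, cog_max):
    """Eq. (14): check the COG against the recorded per-user range."""
    x, y, z = cog
    return (P * cog_min[0] < x < P * cog_max[0] and
            Q * cog_min[1] < y < Q * cog_max[1] and
            R * cog_min[2] < z < R * cog_max[2])

def assist_step(m_body, cog, cog_min, cog_max):
    """One control step: emergency stop if the COG leaves the safety zone,
    otherwise apply the balancing brake traction of Eq. (13)."""
    if not inside_safety_zone(cog, cog_min, cog_max):
        return "EMERGENCY_STOP"        # apply maximum braking force to both wheels
    return braking_forces(m_body)
```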

5 Experiment

5.1 Gait Stabilization Performance by Elderly Users

Our prototype walker was tested on three subjects with a care level of 1 or 2, and its effectiveness was verified. Subject A was a hemiplegic patient; subjects B and C were elderly. Each subject tried two cases, one with our proposed assistance and one without, and did not know a priori which condition was used in each trial. As shown in Fig. 8, subject A was able to walk smoothly using our prototype walker. Figure 9 shows the COG position, foot positions and hip position of subject A. Figure 9(a1), (b1) and (c1) show the COG position without our assistance, and Fig. 9(a2), (b2) and (c2) show the COG position with our proposed idea. From these figures, it can be seen that the range of COG positions is reduced by our proposed idea. Figure 9(d1), (d2), (e1) and (e2) show the foot positions during walking. The distribution of foot positions has been reduced by the proposal, particularly for the right foot. Subject A has hemiplegia affecting the right foot, but was able to walk smoothly with the assistance provided by the proposal. Figure 9(f1) and (f2) show the position of the hips. It can be seen that the distribution of the waist position has been reduced by the proposed system. These experimental results show that the proposed walker is effective in helping the subject to walk stably.


Fig. 8. Walking experiment with subject A, a hemiplegic patient. The occupational therapist is standing by for the safety of the subject. Note that our robotic walker with standing assistance [6] is used in this experiment; this assistive robot has the same walking assistance performance when the standing assistance manipulator is fixed at its upper height.

5.2 Fall Prevention

A subject experiment was conducted to verify that the walker's fall prevention function works when the user adopts an inappropriate walking posture. For safety reasons, the subjects were five healthy adults. Subjects were asked to move their COG (a) forward (in the negative direction of the Y-axis), (b) to the right (in the negative direction of the X-axis) and (c) to the left (in the positive direction of the X-axis) while using the walker, and the change in the subject's COG was measured using a force-plate device. Each subject tried two cases, one with our proposed assistance and one with the assistance proposed by previous researchers, which uses the upper body posture of the user; subjects did not know a priori which condition was used in each trial. As shown in Fig. 10, our walker prevented the user from falling with our idea. In Fig. 10b, the walker detected the user's abnormal gait and applied the brakes, so the walker could safely stop without any loss of posture by the user. On the other hand, the conventional method also prevented the user from falling, as shown in Fig. 10a, but the user's posture was on the verge of falling, which would be a high risk for elderly users. As shown in Fig. 11, the distance travelled by the user's COG trajectory was shorter when the user attempted to fall in any direction using our proposed idea. Furthermore, similar results were obtained for subjects E to H in this experiment. This confirms the effectiveness of the proposed idea, which takes the user's gait into consideration.

Fig. 9. Experiment results with subject A: (a1)–(c1) COG X, Y and Z positions, (d1) left foot position, (e1) right foot position and (f1) waist position without our proposed idea; (a2)–(f2) the corresponding results with our proposed idea.


Fig. 10. Motion of the walker when the user tries to fall forward (subject D): (a) without our proposed idea; (b) with our proposed idea.

Fig. 11. Subject D's COG trajectory during a fall: (a1) forward, (b1) right and (c1) left directions without our proposed idea; (a2) forward, (b2) right and (c2) left directions with our proposed idea. The markers indicate the beginning of the fall and the emergency stop.

6 Conclusion

In this paper, a novel walking support robot that takes the user's gait into consideration is proposed.


To realize such a walker, the proposed system measures the gait of the user's right and left legs and estimates the user's body balance by calculating the position of the COG. The estimated body balance was used to control the assist force vector to maintain a stable body balance.

Acknowledgments. This study received generous support on nursing walking assistance from Prof. Tsuyoshi Uematsu (occupational therapist), Mr. Yoichi Kondo (occupational therapist) and the staff of the Kunitachi Visiting Nursing Station, Tokyo, Japan. This work is supported in part by a Grant-in-Aid for Scientific Research B (20H04566) from the Japan Society for the Promotion of Science, the Matching Planner Program (VP29117940231) from the Japan Science and Technology Agency (JST), and an Individual Special Research Subsidy from Kwansei Gakuin University.

References
1. Alexander, N.B., et al.: J. Geom.: Med. Sci. 46, M91 (1991)
2. Hughes, M.A., et al.: J. Rehabil. Res. Dev. 133, 409 (1996)
3. Office, C.: Koureisha Hakusho (written in Japanese). Government of Japan (2021)
4. Hirvensalo, M., et al.: J. Am. Geriatr. Soc. 48, 493 (2000)
5. Chugo, D., et al.: Standing assistance which realizes voluntary movements of the patient within a safety motion tolerance. In: Proceedings of the International Conference on Climbing and Walking Robots (CLAWAR'20), held virtually, Moscow, Russia (2020)
6. Chugo, D., et al.: J. Artif. Intell. Technol. 2, 164 (2022)
7. Koyama, M., et al.: Sitting assistance considering with posture tolerance of its user. In: Proceedings of the 8th International Symposium on Industrial Electronics (ISIE2019), Beijing, China (2019)
8. Fujie, M., et al.: Power assisted walking support and walk rehabilitation. In: Proceedings of the 1st International Workshop on Humanoid and Human Friendly Robotics, Tsukuba, Japan (1998)
9. Hirata, Y., et al.: Motion control of Omni-directional type walking support system "Walking Helper". In: Proceedings of the IEEE Workshop on Robot and Human Interactive Communication (ROMAN2003), Millbrae, CA, USA (2003)
10. Wasson, G., et al.: User intent in a shared control framework for pedestrian mobility aids. In: Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2003), Las Vegas, Nevada, USA (2003)
11. Rentschler, A.J., et al.: J. Rehabil. Res. Dev. 40, 423 (2003)
12. Hirata, Y., et al.: IEEE Trans. Robot. 23, 981 (2007)
13. Hirata, Y., et al.: Fall prevention control of passive intelligent walker based on human model. In: Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2008), Nice, France (2008)
14. Zhao, D., et al.: IEEE Access 7, 76741 (2019)
15. Okada, H., et al.: Biomechanisms 13, 125 (1996)

Simulation and Digital-Twins in Robotic Applications

A Solution Architecture for Energy Monitoring and Visualisation in Smart Factories with Robotic Automation

Louie Webb(B), Mohammad Osman Tokhi, and Bugra Alkan

School of Engineering, London South Bank University, 103 Borough Road, SE1 0AA London, UK
{webbl4,tokhim,alkanb}@lsbu.ac.uk

Abstract. In today's manufacturing landscape, digital twin-enabled smart factories are revolutionising traditional practices by leveraging cutting-edge technologies such as Internet of Things (IoT) devices, advanced analytics, machine learning, and artificial intelligence (AI). These factories create virtual replicas, or digital twins, of their physical counterparts, enabling real-time monitoring, analysis, and control of manufacturing operations. One area of innovation within smart factories is the role of energy condition monitoring and data analytics, which has gained significant attention due to the challenges of interoperability in industrial environments and the emerging need for sustainable manufacturing systems. This paper proposes an energy monitoring and visualisation solution architecture and example data visualisation dashboards at multiple user levels. The proposed solution architecture is deployed in a case study that includes robotic material handling, and the results show that the proposed solution can provide valuable insights to users regarding the energy consumption of shop-floor components and offers a cost-efficient solution for energy analytics that can be used within SMEs.

Keywords: Industry 4.0 · Big data analytics · Energy analytics · Industrial robotics · Data visualisation · OPC-UA · InfluxDB · Grafana

1 Introduction

Smart factories are at the forefront of modern manufacturing, leveraging cutting-edge technologies to integrate physical and digital systems for optimised production processes. These factories digitally transform their physical counterparts, enabling real-time monitoring, analysis, and control of manufacturing operations. By doing so, smart factories can achieve unprecedented levels of operational efficiency, predictive maintenance, and product quality [1]. They are enabled by a wide array of technologies, including Internet of Things (IoT) devices, advanced analytics, machine learning, and artificial intelligence (AI), which support data-driven decision-making and automation [2].


This transformative approach is reshaping the manufacturing landscape, driving innovation, and revolutionising traditional manufacturing practices towards the wide adoption of Industry 4.0. In the context of Industry 4.0, energy condition monitoring and data analytics are areas of innovation that have gained significant attention due to the challenges of interoperability in industrial environments and the increasing need for sustainable manufacturing systems [3]. Resman et al. [4] propose a five-step approach for data-driven digital twins (DT) in discrete assembly scenarios. Their effective use of digital models and shadows allows for inline process monitoring at different time intervals for each state of the physical system, enabling communication of adapted processes from the DT to the physical world. This approach facilitates targeted monitoring of energy-related data through levels of integration. Franceschi et al. [5] aim to implement a DT for flexible coordination between workers and machines. Their proof-of-concept assembly of an aircraft fuselage's interior includes Human Machine Interface (HMI) dashboards that extend to mobile and Mixed Reality (MR) systems, enabling close monitoring of processes and task allocations for workers and machines. This work emphasises the importance of interoperability between workers and systems. Dall'Ora et al. [6] further highlight the need for interconnected and interoperable ecosystems within smart factories for effective data analytics. Their DT-enabled power consumption condition monitoring of a production line identifies network design as a key enabler for strategic decision-making and real-time monitoring. Their DT implementation aims to detect discrepancies between real-time data and digital twin data for early intervention in maintenance. Rocha et al. [7] address challenges of interoperability and propose an event-driven ecosystem for energy consumption management. Using a message broker system with Apache Kafka, the solution integrates tools from different vendors and technologies. Tested in automotive industry robotic cells, it stores energy consumption data in a Postgres database with a user-friendly interface. Promising results demonstrate the potential for sustainable operations and advanced solutions like production scheduling for energy optimisation. Further technical aspects of energy condition monitoring can be seen in [8], where key issues of data establishment and aggregation, data processing and analysis, scalability and performance, and further applications of data-driven energy management are addressed. Alternatively, Zhong et al. [9] introduce an IoT-enabled real-time machine status monitoring approach for Cloud Manufacturing (CMfg) with the aim of achieving ultimate service sharing and circulation among different manufacturing parties. The contributions of this research include presenting an SOA architecture for organising a CMfg shop floor, demonstrating a systematic deployment scheme for IoT facilities in a typical manufacturing shop floor, and presenting re-engineered and rationalised production operations in a CMfg environment. However, their approach to multi-level analytics revolves around the use of RFID and specialised handheld monitoring devices, which may introduce limitations in terms of deployment and usage efficiencies.


tion monitoring and data analytics platforms. Therefore, the need for low-cost and lightweight solutions are apparent for digital transformations, especially with small and medium-sized enterprises (SMEs) [10]. This paper proposes an interoperable n-tier energy monitoring and visualisation solution architecture and example dashboards at multiple user levels, providing insights from the process of data acquisition to visualisation. The boundaries of the scope of this study are carefully defined as being limited to the monitoring and visualisation of data pertaining to energy. However, network gateways and data analytics, which are included in the proposed solution architecture, were considered out of scope for this study. Therefore, this study can serve as a tangible demonstration of the potential insights that can be gained from granular and real-time data, with a key emphasis on data visualisation.

2 The Implementation of the Proposed Architecture

2.1 An Overview of the Approach

Within the scope of this study, the proposed n-tier solution architecture and its implementation are discussed in relation to energy monitoring and analytics within industrial robotic assembly. Figure 1 displays a high-level overview of the n-tier solution architecture used within industrial robotic assembly. The proposed solution architecture is primarily designed to aid machine-level energy monitoring and analytics in SMEs. Within the proposed architecture, the information flow starts with acquiring raw process data from shop-floor components such as robots and conveyors. Next, the acquired data are organised and managed through a network gateway to ensure data integrity while being transferred. After being passed through the network gateway, data management and storage are needed for adequate time-series data collection and identification. The final two layers are aimed at data analytics and visualisation. Gaining valuable insights from the collected data and visualising them in a manner that is usable and useful is where the value of this solution architecture can be seen. In the following subsections, we explain the implementation of the proposed architecture through a case study that includes a small-scale collaborative robotics system for low-volume, high-variety discrete assembly processes.

2.2 Test Set-Up

To demonstrate the proposed data-driven energy monitoring and visualisation solution architecture, a brief overview of the information flow and the test-bed set-up is given. Figure 2 shows the information flow and the technologies used to acquire and visualise energy-related information. In the test set-up, real-time energy consumption data were collected from a simple pick-and-place operation performed by a collaborative robot, the Niryo Ned. Figure 3 displays the performed operation, where the robot was in an idle position before and after the pick-and-place operation (4 min each), lasting 18 min in total.


Fig. 1. The proposed architecture for industrial robotic assembly.

Fig. 2. Energy-related information flow and implemented technologies (Layers that are outlined with a dashed line are not considered within the scope of this study.).

2.3 Data Acquisition Layer

To acquire real-time energy data from the robotic manipulator, a power analyser was used. The analyser sits between the mains power and the robot's PSU, thus measuring total power.


Fig. 3. The material handling operations performed in the experiment.

The acquired data consist of Vrms (V), Arms (mA), power (W), and power factor (%).

2.4 Network Gateway

The network gateway acts as a central communication hub that enables connectivity between different devices and protocols. It serves as an interface between the robot manipulator, connected via hotspot, and the power analyser, connected via Ethernet. The network gateway facilitates data exchange and communication between these devices using an OPC-UA (Open Platform Communications Unified Architecture) server, a widely used industrial communication protocol for interoperability in industrial automation and control systems. The network gateway plays a critical role in aggregating, processing, and routing data between the robot manipulator, power analyser, OPC-UA server, database, and visualisation tool. It ensures that data from the robot manipulator and power analyser are securely transmitted throughout the architecture. The network gateway may also handle protocol conversions, data filtering, and data transformation tasks to ensure seamless integration and interoperability among the different components of the energy monitoring and visualisation system.

2.5 Data Storage and Management Layer

The energy analytics and visualisation require a proper data management and storage approach and platform. Within industrial scenarios, the ability to acquire multiple sources of data while storing the timestamp of their measurements can provide valuable insights into production processes, leading to data-driven decisions. In this research work, the combination of OPC-UA and InfluxDB was used to address the requirements for data storage and management. OPC-UA is a widely used protocol for industrial communication and data exchange. It allows for the exchange of data between different devices and systems in an industrial environment, regardless of the manufacturer or platform. InfluxDB, on the other hand, is a time-series database that is designed to handle large volumes of time-stamped data. It provides a scalable and efficient way to store and retrieve time-series data, making it well-suited for industrial scenarios where data is generated at a high frequency.


Within this study, a custom OPC-UA server was developed in Python to facilitate interoperability between the power analyser and the Niryo Ned Modbus interface. Each node within OPC-UA can be given a namespace and identifier; as this study primarily focuses on monitoring energy consumption, all energy measurements are labelled within the same namespace with separate identifiers. The server then stores all OPC-UA nodes within an InfluxDB instance through a Telegraf plugin. The Telegraf plugin allows further management between OPC-UA and InfluxDB, where a data sampling rate of 100 ms was configured.
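As an illustration of this layer, the sketch below shows how a custom OPC-UA server exposing energy nodes could be written with the open-source python-opcua package. The paper does not name the library used, so the package choice, endpoint, namespace URI, node names and analyser stub are assumptions for illustration; in practice the values would be read from the power analyser driver.

```python
import time
from opcua import Server  # python-opcua package (assumed library choice)

def read_power_analyser():
    """Placeholder for the power analyser driver (returns dummy values here)."""
    return {"Vrms": 230.0, "Arms": 150.0, "Power": 30.0, "PowerFactor": 0.95}

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/energy/")          # hypothetical endpoint
idx = server.register_namespace("http://example.org/energy")    # single namespace for energy data

analyser = server.get_objects_node().add_object(idx, "PowerAnalyser")
nodes = {
    "Vrms": analyser.add_variable(idx, "Vrms", 0.0),                 # V
    "Arms": analyser.add_variable(idx, "Arms", 0.0),                 # mA
    "Power": analyser.add_variable(idx, "Power", 0.0),               # W
    "PowerFactor": analyser.add_variable(idx, "PowerFactor", 0.0),   # %
}
for var in nodes.values():
    var.set_writable()

server.start()
try:
    while True:
        sample = read_power_analyser()
        for name, var in nodes.items():
            var.set_value(sample[name])
        time.sleep(0.1)   # 100 ms sampling, matching the Telegraf configuration
finally:
    server.stop()
```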

2.6 Data Analytics Layer

This layer is where the actual analysis of the data takes place. It involves using machine learning/deep learning algorithms and statistical techniques to extract meaningful insights from the operational data. The analysis may involve identifying trends, anomalies, or patterns in the energy consumption data that can aid in the optimisation of the robotic assembly system's energy usage [11]. For real-time analysis, Apache Flink or Apache Spark can be used to analyse the data as it arrives, enabling a quick response to anomalies and events. For batch processing, Apache Hadoop can be used to process large volumes of data in parallel, enabling complex analyses that lead to deeper insights [12].

2.7 Data Visualisation Layer

The final tier within the proposed architecture visualises all the stored and processed energy-related data. Depending on the target user, multiple dashboards with modular displays can be utilised to view different KPIs and metrics throughout the value chain and product life cycles. Within this study, the Grafana data visualisation platform was used to display energy-related conditions targeted towards manufacturing technicians. Grafana allows the creation of custom dashboards that display the energy consumption data in various forms, such as charts, graphs, tables, and gauges, which can be useful for identifying trends, anomalies, and patterns in the data. Alerting features can also be used in Grafana to receive notifications when energy consumption levels exceed a certain threshold or when anomalies are detected. Grafana was configured so that it interfaces with InfluxDB, allows the dashboard to be viewed remotely or on a handheld device, and uses OAuth2 to securely authenticate users. Furthermore, the Grafana dashboards were designed with interactive features to improve the user experience. Users could filter the data by selecting specific time periods. Zooming and panning functionalities were implemented in the line charts to allow users to focus on specific time intervals or data points of interest. Drill-down capabilities were also provided, allowing users to click on specific elements in the visualisations to access detailed information. These interactive features were designed to enable users to explore the data and gain insights more effectively.
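To give a flavour of how a dashboard panel can retrieve the stored measurements, the following Python sketch queries InfluxDB for recent power samples; in Grafana the equivalent Flux query would be entered directly in the panel's data source. The sketch assumes InfluxDB 2.x with the influxdb-client package; the bucket, measurement and field names, URL, token and organisation are hypothetical.

```python
from influxdb_client import InfluxDBClient

# Connection details are placeholders for illustration.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

# Mean power per minute over the last hour, as a dashboard panel might request it.
flux = '''
from(bucket: "energy")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "opcua" and r._field == "Power")
  |> aggregateWindow(every: 1m, fn: mean)
'''

for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), record.get_value())  # timestamped mean power (W)

client.close()
```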

2.8 Information Flow Between Layers

Figure 4 further highlights the information flow within the use case of this study. The diagram shows the high-level functions and data from data acquisition to visualisation, further supporting the previous subsections. The key enabler within this architecture is the OPC-UA server acquiring, mapping, and sending the energy data. Then, within the storage and visualisation layers, appropriate queries are needed to display the data and convert it into metrics and Key Performance Indicators (KPIs).

Fig. 4. High-level system information flow.

3 Energy-Related Data Visualisation

This section provides insights into the dashboards presented for monitoring and managing the energy performance of the robotic manipulator under consideration. The section discusses the usefulness and benefits of these dashboards, including the dashboard designed for real-time data and alerts to technicians and the dashboard offering managerial-level insights and visuals for informed decision-making by managers. The customisation features, alerts, and comprehensive approach towards energy management are highlighted, showcasing how these dashboards serve as valuable tools for optimising energy usage, improving operational efficiency, and promoting sustainability in the assembly line operations.

3.1 Key Performance Indicators and Metrics

Energy-related KPIs are essential for properly gauging the most useful data visualisations for the users. The first dashboard design in this study is a shop-floor condition monitoring dashboard that is most useful for maintenance technicians and shop-floor engineers. This dashboard consists of the following. RMS voltage and current measurements provide an indication of the quality of the electrical supply and help in identifying any voltage and current fluctuations or deviations from standard levels. These measurements are important to ensure that the manufacturing cell is receiving a stable and reliable electrical supply.


Power (Watts) measurement is a key parameter that provides information about the amount of electrical energy being used by the manufacturing cell. Monitoring power consumption is important to identify areas where energy efficiency can be improved or to detect any abnormal or unexpected power usage. Power factor measurement is also important for electricity condition monitoring. The power factor indicates how efficiently the manufacturing cell/component is using the electrical power supplied to it. A low power factor indicates that the cell/component is drawing more power than required, which can lead to higher energy costs and the inefficient use of electrical power. However, these KPIs may not be appropriate for higher-level energy consumption insights. Zimmerman [13] provides a comprehensive list of KPIs and metrics used within robot performance and manufacturing processes. Utilising their proposed distinctions between metrics and KPIs, with hierarchical considerations, the second design includes further KPI metrics. For a dashboard to be useful at the managerial level, it is necessary to include additional energy-related KPIs and metrics in order to provide additional economic and sustainability insights. The following KPI metrics have been selected for inclusion in this dashboard (a sketch of how they can be computed follows the list):

– Current power (W): raw value in Watts.
– Total power usage (Wh): at each time interval, the power value is multiplied by the elapsed time, E = W · t.
– Current energy cost (£): the power is converted to kWh and then multiplied by the current UK energy cost per kWh, W/1000 × 0.34.
– Utilisation (%): the percentage of the current power measurement against the maximum power rating, W/66.6.
– Carbon footprint (kgCO2/kWh): the total power and time are multiplied by the UK carbon intensity, kgCO2/kWh = (kW_total × T_total) × carbonIntensity.
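The following Python sketch illustrates how these managerial KPI metrics could be derived from a series of timestamped power samples, using the constants stated above (UK tariff of £0.34/kWh and a 66.6 W maximum power rating). The function and variable names are illustrative and not part of the paper's implementation, and the carbon intensity value is a placeholder.

```python
ENERGY_COST_PER_KWH = 0.34     # £ per kWh (UK tariff used in the paper)
MAX_POWER_W = 66.6             # robot's maximum power rating (W)
CARBON_INTENSITY = 0.193       # kgCO2 per kWh (placeholder UK grid value)

def managerial_kpis(power_samples_w, interval_s):
    """Compute the dashboard KPI metrics from equally spaced power samples.

    power_samples_w: list of power readings in Watts.
    interval_s: sampling interval in seconds (e.g. 0.1 for 100 ms).
    """
    current_power = power_samples_w[-1]
    # Total energy: sum of power x time over each interval (Wh).
    total_energy_wh = sum(p * interval_s / 3600.0 for p in power_samples_w)
    return {
        "current_power_W": current_power,
        "total_energy_Wh": total_energy_wh,
        "energy_cost_gbp": total_energy_wh / 1000.0 * ENERGY_COST_PER_KWH,
        "utilisation_pct": current_power / MAX_POWER_W * 100.0,
        "carbon_kgCO2": total_energy_wh / 1000.0 * CARBON_INTENSITY,
    }

# Example: one minute of 100 ms samples at a constant 30 W draw.
print(managerial_kpis([30.0] * 600, 0.1))
```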

3.2 Dashboard Designs

The first dashboard presented is targeted towards maintenance technicians and shop-floor engineers, who need to quickly and easily identify energy-related fluctuations as they monitor the field devices on the shop floor. Figure 5 displays the power metric (W), represented as a gauge with colour-coded power thresholds. The power factor is represented as a percentage. Voltage and current (Vrms and mArms) are represented both over time, within a graph, and as a single value from the latest measurement. Figure 5 further illustrates the clear distinction between the robot's working and idle conditions that can be seen by looking at the current and power visualisations. Further state and alert indicators are presented to easily display the state of the machine and whether any energy spikes have been identified. The alerts can be communicated via email or SMS and are displayed within the same dashboard, indicating the date and time of the occurrence. Finally, a key deployment requirement was to display this dashboard on a handheld device, as shown in Fig. 6.


Fig. 5. Energy-related data monitoring and visualisation dashboard for shop-floor engineers and maintenance technicians.

Fig. 6. Energy-related data monitoring and visualisation dashboard on a handheld device.

The second dashboard presented aims at higher-level insights and allows managerial roles to view additional KPI metrics. The strategy implemented to address this consisted of providing visualisations for forecasting energy consumption on the shop floor. Figure 7 displays a proposed concept of such managerial insights, utilising the same acquired data, within an alternative higher-level dashboard.


Fig. 7. Energy-related data monitoring and visualisation dashboard for managerial decision-making.

By providing visualisations of real-time data on current power consumption, power usage, carbon footprint, and utilisation, the dashboard enables a comprehensive overview of the energy performance of the robotic manipulator. The combination of these energy KPIs across the shop floor enables informed decision-making regarding the optimisation of energy usage, the identification and resolution of potential energy inefficiencies, and the reduction of the carbon footprint caused by the assembly line operations.

4 Discussion

The presented dashboards provide benefits for monitoring and managing the energy performance of industrial robotic manipulators. They facilitate proactive energy management, energy performance indications, operational efficiencies, and the identification of potential reductions in energy costs. The state and alert indicators on the energy dashboards enable timely notifications of energy spikes or abnormal conditions, allowing maintenance engineers and technicians to quickly respond and mitigate any energy-related issues, minimising downtime and optimising energy usage. Handheld device deployment offers mobility, allowing technicians to monitor energy performance on the go, providing convenience and flexibility in managing energy resources. The energy dashboards designed for managers offer higher-level insights and visualisations, enabling informed decisions on energy optimisation strategies, identification of potential energy inefficiencies, and reduction of the assembly line's carbon footprint. This promotes sustainable practices and aligns with eco-friendly initiatives, contributing to the organisation's overall environmental sustainability goals. Customisable dashboards enhance usability, while the alerts and notifications feature ensures prompt information about energy-related issues, enabling timely response and action. The comprehensive approach to energy management offered by the dashboards, covering multiple energy metrics, provides a holistic view of the robotic manipulator's energy performance, allowing effective energy resource utilisation and cost savings. To further evaluate the proposed platform and visualisations, comparative approaches are discussed.


Zheng et al. [14] proposed a platform that incorporates the acquisition of various devices, controllers, machines, and sensors for AR-based visibility and real-time machine status visualisations. Although their approach considers gaining insights into the energy conditions of each system, the interoperability of the platform was not widely addressed. Saqlain et al. [15] propose an IIoT-based management and monitoring approach with similar complexity in data acquisition and management, but they also include cost considerations. However, the complexity of such platforms, together with their cost and interoperability requirements, may deter SMEs from implementing and operating them within their organisations. Thus, the platform proposed in this study aims to appeal to SMEs by adhering to industrial communication protocols and using open-source custom client-server and data visualisation tools, targeting the overall cost and interoperability of such ecosystems. In conclusion, some limitations of this study need to be addressed. Firstly, scalability and compatibility issues may arise when implementing this architecture in larger-scale manufacturing shop floors. Despite proposing an interoperable solution, the performance and capability of the OPC-UA server may not yield the same results for data mapping and real-time data transfer on the same infrastructure. Secondly, and building on the previous point, cost considerations need to be taken into account, as the increased volume of sampled data will require additional bandwidth, storage, and computational resources for a big data platform. Additionally, user-centered design approaches would improve the proposed solution, directly involving the people who will use the monitoring solutions in the design outcomes [16].

5 Conclusion and Future Works

This study proposed an n-tier energy-related data monitoring and visualisation solution architecture. With the implementation of dashboards, real-time energy-related data was collected, analysed, and visualised in a series of user-friendly interfaces. The proposed solution architecture facilitated real-time monitoring, analysis, and visualisation of energy consumption data, providing manufacturers with the information needed to optimise their operations and make data-driven decisions. In future work, efforts can be directed towards integrating advanced data analytics techniques, such as machine learning algorithms and deep learning techniques, to enable predictive analytics, anomaly detection, and the optimisation of energy consumption in real time. Furthermore, the network layer of the proposed solution architecture will be further developed to support efficient and secure data communication between the different tiers of the architecture. Also, further benchmarking and performance analytics will be conducted to forecast the potential of scaling this architecture to further smart manufacturing scenarios, with the addition of mobile robot/AGV monitoring through IoT expansions of the proposed solution architecture. These expansions will provide a collective energy monitoring platform for a smart factory's industrial robotic manipulators, mobile robots/AGVs, and manufacturing systems, solidifying a core infrastructure for future sustainable smart manufacturing practices.


References
1. Ghobakhloo, M.: Industry 4.0, digitization, and opportunities for sustainability. J. Clean. Prod. 252, 119869 (2020)
2. Erboz, G.: How to define industry 4.0: main pillars of industry 4.0. Managerial trends in the development of enterprises in globalization era, vol. 761, p. 767 (2017)
3. Zhou, Z., Yao, B., Xu, W., Wang, L.: Condition monitoring towards energy-efficient manufacturing: a review. Int. J. Adv. Manuf. Technol. 91, 3395–3415 (2017)
4. Resman, M., Protner, J., Simic, M., Herakovic, N.: A five-step approach to planning data-driven digital twins for discrete manufacturing systems. Appl. Sci. 11, 3639 (2021)
5. Franceschi, P., Mutti, S., Ottogalli, K., Rosquete, D., Borro, D., Pedrocchi, N.: A framework for cyber-physical production system management and digital twin feedback monitoring for fast failure recovery. Int. J. Comput. Integr. Manuf. 35, 619–632 (2022)
6. Dall'Ora, N., Alamin, K., Fraccaroli, E., Poncino, M., Quaglia, D., Vinco, S.: Digital transformation of a production line: network design, online data collection and energy monitoring. IEEE Trans. Emerg. Top. Comput. 10, 46–59 (2022)
7. Rocha, A., Freitas, N., Alemão, D., Guedes, M., Martins, R., Barata, J.: Event-driven interoperable manufacturing ecosystem for energy consumption monitoring. Energies 14, 3620 (2021)
8. Zhang, T., Ji, W., Qiu, Y.: A framework of energy-consumption driven discrete manufacturing system. Sustain. Energy Technol. Assess. 47, 101336 (2021)
9. Zhong, R., Wang, L., Xu, X.: An IoT-enabled real-time machine status monitoring approach for cloud manufacturing. Procedia CIRP 63, 709–714 (2017)
10. Chinnathai, M., Alkan, B., Harrison, R.: A novel data-driven approach to support decision-making during production scale-up of assembly systems. J. Manuf. Syst. 59, 577–595 (2021)
11. Yao, F., Alkan, B., Ahmad, B., Harrison, R.: Improving just-in-time delivery performance of IoT-enabled flexible manufacturing systems with AGV based material transportation. Sensors 20, 6333 (2020)
12. Kahveci, S., Alkan, B., Mus'ab H, A., Ahmad, B., Harrison, R.: An end-to-end big data analytics platform for IoT-enabled smart factories: a case study of battery module assembly system for electric vehicles. J. Manuf. Syst. 63, 214–223 (2022)
13. Zimmerman, T.: Metrics and key performance indicators for robotic cybersecurity performance analysis. US Department of Commerce, National Institute of Standards (2017)
14. Zheng, P., Wang, H., Sang, Z., Zhong, R., Liu, Y., Liu, C., Mubarok, K., Yu, S., Xu, X.: Smart manufacturing systems for Industry 4.0: conceptual framework, scenarios, and future perspectives. Front. Mech. Eng. 13, 137–150 (2018)
15. Saqlain, M., Piao, M., Shim, Y., Lee, J.: Framework of an IoT-based industrial data management for smart manufacturing. J. Sens. Actuator Netw. 8, 25 (2019)
16. Christer-Nilsson, C.: User Centered Design of a Monitoring Dashboard

Locosim: An Open-Source Cross-Platform Robotics Framework

Michele Focchi1,2, Francesco Roscia1,3(B), and Claudio Semini1

1 Dynamic Legged Systems (DLS), Istituto Italiano di Tecnologia (IIT), Genoa, Italy
{michele.focchi,francesco.roscia,claudio.semini}@iit.it
2 Department of Information Engineering and Computer Science (DISI), University of Trento, Trento, Italy
3 Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genoa, Genova, Italy

Abstract. The architecture of a robotics software framework tremendously influences the effort and time it takes for end users to test new concepts in a simulation environment and to control real hardware. Many years of activity in the field allowed us to sort out crucial requirements for a framework tailored for robotics: modularity and extensibility, source code reusability, feature richness, and user-friendliness. We implemented these requirements and collected best practices in Locosim, a cross-platform framework for simulation and real hardware. In this paper, we describe the architecture of Locosim and illustrate some use cases that show its potential.

Keywords: Computer architecture for robotics · Software tools for robot programming · Software-hardware integration for robot systems

1 Introduction

Writing software for robotic platforms can be arduous, time-consuming, and error-prone. In recent years, the number of research groups working in robotics has grown exponentially, each group having platforms with peculiar characteristics. The choice of morphology, actuation systems, and sensing technology is virtually unlimited, and code reuse is fundamental to getting new robots up and running in the shortest possible time. In addition, it is common for researchers to want to test new ideas in simulation without wasting time on coding, for instance, using high-level languages for rapid code prototyping. To pursue these goals, in the past years several robotics frameworks have been designed for teaching or for controlling specific platforms, e.g., OpenRAVE [1], Drake [2] and SL [3].

Michele Focchi and Francesco Roscia contributed equally to this paper.



Fig. 1. Examples of robots already included in Locosim (from left to right, top to bottom): Aliengo [7], Go1 [8], HyQ [9], Starbot, CLIO [10], UR5 [11] with Gripper, Solo with Flywheels [12], Tractor (images are not to scale).

To avoid roboticists reinventing the wheel whenever they buy or build a new robot, we present our framework Locosim. Locosim is designed with the primary goal of being platform-independent, dramatically simplifying the task of interfacing a robot with planners and controllers. Locosim consists of a ROS control node [4] (the low-level controller), written in C++, that interfaces a custom Python ROS node (the high-level planner/controller) with either a Gazebo simulator [5] or the real hardware. The planner relies on Pinocchio [6] for computing the robot's kinematics and dynamics and closes the control loop at a user-defined frequency.

1.1 Advantages of Locosim

The benefits of the proposed framework are multiple.
- Locosim is platform-independent. It supports a list of robots with different morphologies (e.g., quadrupeds, arms, hybrid structures, see Fig. 1) and provides features for quickly designing and adding new robots.
- Locosim implements functions needed for all robots. Once the robot description is provided, no effort is spent on libraries for kinematics, dynamics, logging, plotting, or visualization. These valuable tools ease the synthesis of a new planner/controller.
- Locosim is easy to learn. The end user invests little time in training and gets an open-source framework with all the benefits of Python and ROS.
- Locosim is modular. Because it heavily uses the inheritance paradigm, classes of increasing complexity can provide different features depending on the nature of the specific robotic application. For instance, the controller for fixed-base robotic arms with a grasping tool can be reused for a four-legged robot with flywheels since it is built upon the same base class.
- Locosim is extensible. Our framework is modifiable and the end-user can add any supplementary functionality.
- Locosim is easy to install. It can be used either inside a virtual machine, a docker container, or natively by manually installing dependencies.

Locosim can be downloaded from www.github.com/mfocchi/locosim.

1.2 Outline

The remainder of this paper is organized as follows. In Sect. 2 we highlight the critical requirements of a cross-platform robotics framework. In Sect. 3 we detail the structure and features of Locosim. In Sect. 4 we discuss use-case examples of our framework, either with the real robot or with its simulated digital twin. Finally, we summarize the results and present future work in Sect. 5.

Fig. 2. End-users, the robotic platform and the simulation environment form a triad only if an effective robotics framework joins them.

2 Key Aspects of a Robotics Framework

In the most general sense, a robotics framework is a software architecture of programs and data that adhere to well-defined rules for operating robots. End-users are the people who will ultimately use the robotics framework and potentially introduce some modifications. The robotics framework is the center of a triangle having at its vertices the end-user, the robotic platform and the simulation environment (see Fig. 2). The simulation must replicate the behaviour of the robot with a sufficient degree of accuracy. The end-user must be able to test new features and ideas in the simulation environment before running them on the robot platform. A robotics framework should provide the link among these three. In this context, we identify a list of crucial requirements that a robotics framework must possess: generality, modularity, reusability, extensibility, rapid prototyping versus performance, feature richness, and end-user development.

Generality. It is essential to release software free from unnatural restrictions and limitations. An end-user may need to design software for controlling a single robot, for multi-robot cooperation, for swarm coordination, or for reinforcement learning, which requires an abundant number of robots in the training phase. It should be possible to model any kinematic structure (floating-base/fixed-base robot, kinematic loops, etc.).



Modularity. A robotics framework should provide separate building blocks of functionalities according to the specific needs of the robot or the application. These building blocks must be replaceable and separated by clear interfaces. The end-user may want to only visualize a specific robot in a particular joint configuration, move the joints interactively, or plan motions and understand the effects of external forces. Each of these functions must be equipped with tools for debugging and testing, e.g., unit tests. Replacing each module with improved or updated versions with additional functionalities should be easy.

Reusability. Pieces of source code of a program should be reusable by reassembling them into more complex programs that provide a desired set of functionalities, with minor or no modifications. From this perspective, parametrization is a simple but effective method, e.g., the end-user should be able to use the same framework with different robots by changing only a single parameter. In the same way, digital twins [13] can be realized by varying the status of a flag that selects between the real hardware and the simulation environment. This avoids writing different code for the simulator and the real robot.

Extensibility. A robotics framework must be designed with the foresight to support the addition and evolution of hardware and software that may not exist at implementation time. This property can be achieved by providing a general set of application programming interfaces (APIs). Concepts typical of Object-Oriented Programming, such as inheritance and polymorphism, play a crucial role in extensibility.

Rapid Prototyping Versus Performance. A framework should allow for fast code prototyping of researchers' ideas. More specialized controllers/planners are built from simpler ones in the form of recursive inheritance (matryoshka principle). In this way end-users have unit tests at different levels of complexity. With fast code prototyping, end-users can quickly write software without syntax and logic errors. However, they do not have any assurance about the performance: such code is just good enough for the case study in a simulation environment. Stringent requirements appear when executing code on real robots, e.g., short computation time and limited memory usage. Thus, the framework must expose functionalities that can deal with performance.

Feature-Rich. Most end-users need a set of functionalities when working with robots. These include but are not limited to the computation of kinematics and dynamics, logging, plotting, and visualization. A robotics framework should provide them, and they must be easily accessible.

End-User Development. Besides its implementation details, a robotics framework should provide methods, techniques and tools that allow end-users to create, modify, or extend the software [14] in an easy way. It should run on



widely used Operating Systems and employ renowned high-level programming languages that facilitate software integration. Clear documentation for installation and usage must be provided, and modules should have self-explanatory names and attributes.

3 Locosim Description

Locosim was born as a didactic framework to simulate both fixed- and floating-base robots. It quickly evolved into a framework for researchers who want to program newly purchased robots in a short time. Locosim runs on machines with Ubuntu as the Operating System, and it employs ROS as middleware. Within Locosim, end-users can write robot controllers/planners in Python.

3.1 Architecture

Locosim consists of four components: Robot Descriptions, Robot Hardware Interfaces, ROS Impedance Controller, and Robot Control. We illustrate each component in the following.

Robot Descriptions. The Robot Descriptions component contains dedicated packages for the characterization of each robot. For instance, the package that contains the files describing the robot myrobot, a generic mobile robot platform, is named myrobot_description. With the main focus on fast prototyping and human readability, the robot description is written in Xacro (an XML-based scripting language) that avoids code replication through macros, conditional statements and parameters. It can import descriptions of some parts of the robot from the urdfs sub-folder, or meshes describing the geometry of rigid bodies from the meshes sub-folder. At run time, the Xacro file is converted into URDF, allowing the end-user to change some parameters. The gazebo.urdf.xacro launches the ros_control package and the gazebo_ros_p3d plugin, which publishes the pose of the robot trunk in the topic /ground_truth (needed only for floating-base robots). The robot description directory must contain the files upload.launch and rviz.launch. The former processes the Xacro, generating the URDF, and loads it into the parameter server; the latter allows visualizing the robot and interacting with it by providing the conf.rviz file. This is the configuration for the ROS visualizer RViz, which can be different for every robot. Additionally, the file ros_impedance_controller.yaml must be provided for each robot: it contains the sequence of joint names, the joint PID gains and the home configuration. The Python script go0 is used during the simulation startup to drive the robot to the home configuration.
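As a rough illustration of what this description pipeline amounts to (this is not code taken from Locosim; the file name, the Xacro argument and the launch-time steps of upload.launch are placeholders), a Xacro description can be expanded and loaded onto the ROS parameter server from Python as follows:

import rospy
import xacro  # the xacro package ships a Python API next to the command-line tool

rospy.init_node("upload_robot_description")

# Expand macros and arguments; 'mappings' plays the role of tunable Xacro arguments.
doc = xacro.process_file("myrobot.urdf.xacro", mappings={"some_arg": "some_value"})
urdf_xml = doc.toxml()

# The generated URDF is conventionally stored under /robot_description.
rospy.set_param("/robot_description", urdf_xml)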


Some of the functions in Locosim components, which are well established in the robotics community, are named after [3].



Fig. 3. Schematic representation of a typical use case of Locosim. The end-user wants to simulate the UR5 robot arm. An instance of Ur5Generic, which is a derived class of BaseControllerFixed, sends the commands to the robot through the ros_impedance_controller and receives back the actual state. Ur5Generic implements features to manage the real robot and the gripper, performs a homing procedure at startup and uses a class for inverse kinematics.

Robot Hardware Interfaces. This folder contains drivers for the real hardware platforms supported by Locosim. They implement the interface that bridges the communication between the controller and the real robot, abstracting the specificity of each robot and exposing the same interface (e.g., EffortInterface). For instance, the UR5 robot, through its driver, provides three possible ROS hardware interfaces: Effort, Position and Velocity, hiding the details of the respective underlying low-level controllers.

ROS Impedance Controller. ROS Impedance Controller is a ROS package written in C++ that implements the low-level joint controller, in the form of a PID with feed-forward effort (force or torque). The /joint_states publisher publishes the actual position, velocity and effort for each robot joint. By default, the loop frequency is set to 1 kHz. This and other parameters can be regulated in the launch file called ros_impedance_controller.launch. Robots with specific needs can be dealt with by specifying a custom launch file. This is the case of the CLIO climbing robot, which requires the model of the mountain to which it is attached. The robot_name parameter is used to load the correct robot description. If the real_robot flag is set to true, the robot hardware interface is initialized; otherwise, the Gazebo simulation starts, running first the go0 script from the robot description. In any case, RViz is opened with the robot's specific configuration file conf.rviz. The end-user can manually change the location where the robot is spawned with the spawn parameters. Physics parameters for the simulator are stored in the sub-folder worlds.
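The low-level loop itself is C++ code inside the ros_impedance_controller package; purely to illustrate the control law it is described to apply (a PID on the joint position error plus a feed-forward effort), a minimal per-joint sketch in Python could read as follows, where the gains and dimensions are placeholders:

import numpy as np

DT = 0.001  # 1 kHz loop period, as reported above

def joint_command(q, qd, q_des, qd_des, tau_ffwd, kp, kd, ki, i_err):
    # PID feedback on the joint position error plus feed-forward effort.
    # Gains are placeholders; the real values come from ros_impedance_controller.yaml.
    err = q_des - q
    i_err = i_err + err * DT
    tau = kp * err + kd * (qd_des - qd) + ki * i_err + tau_ffwd
    return tau, i_err

# Example for a 6-joint arm held at a constant set-point.
n = 6
i_err = np.zeros(n)
tau, i_err = joint_command(q=np.zeros(n), qd=np.zeros(n),
                           q_des=np.full(n, 0.5), qd_des=np.zeros(n),
                           tau_ffwd=np.zeros(n),
                           kp=np.full(n, 100.0), kd=np.full(n, 10.0),
                           ki=np.full(n, 1.0), i_err=i_err)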



Table 1. Main attributes and methods of the BaseControllerFixed (BCF) and of the BaseController (BC) classes. All vectors (unless specified) are expressed in the world frame. For methods with the same name, the derived class loads the method of the parent class and adds additional elements specific to that class.

Attributes (name: meaning; classes):
q, q_des: actual/desired joint positions; BCF, BC
qd, qd_des: actual/desired joint velocities; BCF, BC
tau, tau_ffwd: actual/feed-forward joint torques; BCF, BC
x_ee: position of the end-effector expressed in base frame; BCF
contactForceW: contact force at the end-effector; BCF
contactMomentW: contact moment at the end-effector; BCF
basePoseW: base position and orientation in Euler angles; BC
baseTwistW: base linear and angular velocity; BC
bRw: orientation of the base link; BC
contactsW: position of the contacts; BC
grForcesW: ground reaction forces on contacts; BC

Methods (name: meaning; classes):
loadModelAndPublishers(): creates the object robot (Pinocchio wrapper), loads publishers for visual features (ros_pub) and joint commands, and declares subscribers to /ground_truth and /joint_states; BCF, BC
startFramework(): launches ros_impedance_controller; BCF, BC
send_des_jstate(): publishes the /command topic with set-points for joint positions, velocities and feed-forward torques; BCF, BC
startupProcedure(): initializes PID gains; BCF, BC
initVars(): initializes class attributes; BCF, BC
logData(): fills the variables x_log with the content of x for plotting purposes; it needs to be called at every loop; BCF, BC
receive_jstate(): callback associated to the subscriber /joint_states; fills in actual joint positions, velocities and torques; BCF, BC
receive_pose(): callback associated to the subscriber /ground_truth; fills in the actual base pose and twist, and publishes the fixed transform between world and base; BC
updateKinematics(): from q, qd, basePoseW, baseTwistW, computes the position and velocity of the robot's center of mass, the contact positions, the associated Jacobians, the ground reaction forces, the centroidal inertia, the mass matrix and the nonlinear effects; BC



Robot Control. From the end-user perspective, this is the most crucial component. It embraces classes and methods for the computation of the robot's kinematics and dynamics, logging and plotting of time-series variables, and real-time visualization in RViz. Within this component, high-level planning/control strategies are implemented. The code of the component is entirely written in Python and has a few dependencies: above all, NumPy [15] and Pinocchio [6]. The former offers tools for manipulating multidimensional arrays; the latter implements functions for the robot's kinematics and dynamics. Pinocchio can be an essential instrument for researchers because of its efficient computations. Nevertheless, it can be time-consuming and cumbersome to understand for newcomers. To facilitate its use, we developed a custom robot wrapper for building a robot object and computing the robot mass, center of mass, Jacobians, centroidal inertia, and so on, with easy-to-understand interfaces.

For the end-user, the starting point for building up a robot planner is the class BaseControllerFixed, suitable for fixed-base robots, and the derived class BaseController, which handles floating-base ones. In Table 1, we report a list of the main attributes and methods of BaseController and of its parent BaseControllerFixed. To obtain a more complex and specific controller, the end-user can create their own class, inheriting from one of the previous two and adding extra functionalities. E.g., QuadrupedController inherits from BaseController and is specific to quadruped robots. Ur5Generic adds to BaseControllerFixed the features of controlling the real robot, a gripper and a camera attached (or not) to the robotic arm UR5 (see Fig. 3). The controller class is initialized with the string robot_name, e.g., we write BaseController(myrobot) for the controller of myrobot. The end-user must pay particular attention to this string because it creates the link with the robot description and, if needed, the robot hardware interface. The robot_name is used for accessing the dictionary robot_param too. Among the other parameters of the dictionary, the flag real_robot permits using the same code for both the real (if set to true) and simulated (false) robot, resulting in the digital twin concept.

The BaseControllerFixed class contains a ControllerManager to seamlessly swap between control modes and controller types if the real hardware supports more than one. For instance, the UR5 has two control modes (point, trajectory) and two controller types (position, torque), whereas Go1 supports a single low-level torque controller. Additionally, the GripperManager class manages the gripper in different ways for the simulation (i.e., adding additional finger joints) and for the real robot (on-off opening/closing service call to the UR5 driver, as specified by the manufacturer), hiding this complexity from the end-user. The method startFramework() permits launching the simulation or the driver, executing ros_impedance_controller.launch. It takes as input the list additional_args to propagate supplementary arguments, dealing with robot and task specificity. In the components folder there are additional classes for inverse kinematics, whole-body control, leg odometry, IMU utilities, filters and more. Finally, the folder lab_exercises contains an ample list of scripts, with didactic exercises of incremental complexity, to learn Locosim and the main robotics concepts.
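To give a flavour of this inheritance pattern, the self-contained sketch below mimics it with a stub base class exposing a few of the methods listed in Table 1; it is only an illustration, and the real BaseControllerFixed constructor, attributes and main-loop helpers in Locosim may differ.

import numpy as np

class BaseControllerFixedStub:
    # Minimal stand-in for BaseControllerFixed (see Table 1), used only to make
    # this sketch self-contained; the real class also builds the Pinocchio
    # wrapper, the publishers/subscribers, logging and plotting machinery.
    def __init__(self, robot_name):
        self.robot_name = robot_name
        self.q_des_home = np.zeros(6)  # placeholder home configuration
    def startFramework(self):
        pass  # would launch ros_impedance_controller.launch
    def startupProcedure(self):
        pass  # would initialize the PID gains
    def send_des_jstate(self, q_des, qd_des, tau_ffwd):
        pass  # would publish the /command topic
    def logData(self):
        pass  # would fill the *_log variables for plotting

class MyArmController(BaseControllerFixedStub):
    # Hypothetical derived controller: adds a toy joint-space plan on top of the base class.
    def plan(self, t):
        q_des = self.q_des_home + 0.2 * np.sin(2.0 * np.pi * 0.5 * t)
        return q_des, np.zeros_like(q_des), np.zeros_like(q_des)

    def run(self, duration=2.0, dt=0.001):
        self.startFramework()
        self.startupProcedure()
        t = 0.0
        while t < duration:
            q_des, qd_des, tau_ffwd = self.plan(t)
            self.send_des_jstate(q_des, qd_des, tau_ffwd)
            self.logData()
            t += dt

if __name__ == "__main__":
    MyArmController("myrobot").run()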

3.2 Analysis of Design Choices

To fulfill the requirements stated in Sect. 2, we made a number of choices. We want to focus on why we selected ROS as middleware, Python as the (preferred) programming language, and Pinocchio as the computational tool for the robot's kinematics and dynamics.

Why ROS. The ROS community is spread worldwide. Over the last decade, the community produced hundreds or thousands of open-source tools and software packages: from device drivers and interface packages of various sensors and actuators to tools for debugging, visualization, and off-the-shelf algorithms for multiple purposes. With the notions of nodes, publishers, and subscribers, ROS quickly solves the arduous problem of resource handling between many processes. ROS is reliable: the system can still work if one node crashes. On the other hand, learning ROS is not an effortless task for newcomers. Moreover, modeling robots with URDF can take a lot of time, as well as starting simulations [16]. Locosim relieves end-users from these complications by adopting a common skeleton infrastructure for the robot description and for the high-level planner/controller.

Why Python. Among general-purpose programming languages, Python is one of the most used. It is object-oriented and relatively straightforward to learn and use compared to other languages. The availability of open-source libraries and packages is virtually endless. These reasons make Python perfect for fast software prototyping. Being an interpreted language, Python may suffer in terms of computation time and memory allocation, colliding with the real-time requirements of the real hardware. In these cases, when testing the code in simulation, the end-user may consider using profiling tools to look for the most demanding parts of the code. Before executing the software on the real robot, these parts can be optimized within Python or translated into C++ code, providing Python bindings. For the same performance reasons, the most critical part of the framework, the low-level controller, is directly implemented in C++.

Why Pinocchio. Pinocchio [6] is one of the most efficient tools for computing the kinematics and dynamics of poly-articulated bodies. Unlike other libraries, in which a meta-program generates source code for a single specific robot model given as input, as in the case of RobCoGen [17], Pinocchio is a dynamic library that loads any robot model at runtime. This characteristic makes it suitable for a cross-platform framework. It implements state-of-the-art Rigid Body Algorithms based on revisited Roy Featherstone's algorithms [18]. Pinocchio is open-source, mostly written in C++ with Python bindings, and several projects rely on it. Pinocchio also provides derivatives of the kinematics and dynamics, which are essential in gradient-based optimization.
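As a small, generic example of the computations Pinocchio makes available at runtime (the URDF path is a placeholder; Locosim accesses these quantities through its own robot wrapper):

import numpy as np
import pinocchio as pin

# Load any robot model at runtime from its URDF description.
model = pin.buildModelFromUrdf("myrobot.urdf")
data = model.createData()

q = pin.neutral(model)        # neutral configuration
v = np.zeros(model.nv)        # joint velocities
a = np.zeros(model.nv)        # joint accelerations

pin.forwardKinematics(model, data, q, v)
pin.updateFramePlacements(model, data)

M = pin.crba(model, data, q)                  # joint-space mass matrix
h = pin.nonLinearEffects(model, data, q, v)   # Coriolis, centrifugal and gravity terms
tau = pin.rnea(model, data, q, v, a)          # inverse dynamics
com = pin.centerOfMass(model, data, q)        # center of mass position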



Fig. 4. Execution of the pick-and-place task with the anthropomorphic arm UR5. The end-user can drive the real hardware (setting real_robot to true) or perform a simulation (real_robot set to false).

4 Use Cases

We want to emphasize the valuable features of Locosim with practical use cases.

4.1 Visualize a Robot: Kinematics Check

As a first use case, we illustrate the procedure to visualize the quadruped robot Aliengo (see Fig. 1) in RViz and to manually interact with its joints. This is a debugging tool which is crucial during the design process of a new robot, because it allows testing the kinematics without added complexity. In the Robot Descriptions component, we create a folder named aliengo_description. In the robots folder we add the XML file describing the robot's kinematic and dynamic structure. We make use of the flexibility of the open-source Xacro language to simplify the writing process: we include files that describe a leg, the transmission model, and the meshes for each of the bodies. We create the launch folder containing the files upload.launch and rviz.launch. Launching rviz.launch from the command line, the end-user can visualize the robot in RViz and manually move the joints by dragging the sliders of the joint_state_publisher_gui. The conf.rviz file helps the end-user to restore previous sessions in RViz. With this simple use case we can effectively understand the importance of the key aspects formalized so far. Being extensible, Locosim allows for the introduction of any new robot, reusing parts of the code already present.

A video showing the above-mentioned and other use cases can be found here: https://youtu.be/ZwV1LEqK-LU.

4.2 Simulation and Real Robot

As a second example, we present a pick-and-place task with the anthropomorphic arm UR5 (6 degrees of freedom, see Fig. 1). The planning and control pipeline starts with the launch file ros_impedance_controller.launch. This is common to all the robots: it loads the robot description parameter by calling upload.launch, and it starts the Gazebo simulator or the robot driver according to the status of the real_robot flag. Additionally, it loads the controller parameters (e.g., PID gains), which are located in the robot description package of the robot. In the simulation case, the launch file spawns the robot at the desired location. In the real robot case, the robot driver runs (in a ROS node) on an onboard computer while the planner runs on an external computer. In both cases, the RViz GUI shows the robot configuration. Another node running on the same computer reads from a fixed-frame ZED2 camera and publishes a point cloud message of the environment on a ROS topic. We extract the coordinates of the plastic bricks that are present in the workspace. With Ur5Generic, we plan trajectories for the end-effector position and orientation to grasp and relocate the bricks. We set up an inverse kinematics problem to find a joint reference trajectory. It is published in the /ur5/command topic, together with feed-forward torques for gravity compensation. The low-level ros_impedance_controller provides feedback for tracking the joint references, based on the actual state in /joint_states. On the real robot there is no torque control and only position set-points are provided to the ur5/joint_group_pos_controller/command topic, as requested by the robot driver provided by the manufacturer. All this is dealt with by the controller manager class, transparently to the end-user. The power of Locosim lies in the fact that it is possible to use the same robot control code both to simulate the task and to execute it on the real robot, as reported in Fig. 4.
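Locosim's inverse kinematics class is not reproduced here; a generic damped least-squares scheme built on Pinocchio, of the kind that could produce such a joint reference for an end-effector position target, is sketched below (the frame name, gains and iteration limits are placeholders):

import numpy as np
import pinocchio as pin

def ik_position(model, data, frame_name, p_des, q0, iters=200, damping=1e-4, step=0.5):
    # Generic damped least-squares inverse kinematics for a position target;
    # illustrative only, not the inverse kinematics class shipped with Locosim.
    frame_id = model.getFrameId(frame_name)
    q = q0.copy()
    for _ in range(iters):
        pin.forwardKinematics(model, data, q)
        pin.updateFramePlacements(model, data)
        err = p_des - data.oMf[frame_id].translation
        if np.linalg.norm(err) < 1e-4:
            break
        J = pin.computeFrameJacobian(model, data, q, frame_id,
                                     pin.LOCAL_WORLD_ALIGNED)[:3, :]
        dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(3), err)
        q = pin.integrate(model, q, step * dq)
    return q

# Usage sketch (URDF path and frame name are placeholders):
# model = pin.buildModelFromUrdf("ur5.urdf"); data = model.createData()
# q_ref = ik_position(model, data, "tool0", np.array([0.4, 0.1, 0.3]), pin.neutral(model))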

5 Conclusions

Locosim is a platform-independent framework for working with robots, either in a simulation environment or with real hardware. Integrating features for the computation of robots' kinematics and dynamics, logging, plotting, and visualization, Locosim is a considerable help for roboticists who need a starting point for rapid code prototyping. If needed, performance can be ensured by implementing the critical parts of the software in C++, providing Python bindings. We proved the usefulness and versatility of our framework with use cases. Future work includes the following objectives: Locosim will be able to handle multiple platforms simultaneously, to be used in the fields of swarm robotics and collaborative robotics. We want to provide support for ROS2, since ROS lacks relevant qualifications such as real-time capabilities, safety, certification, and security. Additionally, other simulators like CoppeliaSim or PyBullet will be added to Locosim.



References

1. OpenRAVE. https://github.com/rdiankov/openrave
2. Drake. https://github.com/RobotLocomotion/drake
3. Schaal, S.: The SL simulation and real-time control software package. Technical Report, CLMC, Los Angeles, CA (2009). http://www-clmc.usc.edu/publications/S/schaal-TRSL.pdf
4. Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A.Y., et al.: ROS: an open-source robot operating system. In: ICRA Workshop on Open Source Software, vol. 3, no. 3.2, p. 5. Kobe, Japan (2009)
5. Koenig, N., Howard, A.: Design and use paradigms for Gazebo, an open-source multi-robot simulator. In: 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), vol. 3, pp. 2149–2154. IEEE (2004)
6. Carpentier, J., Saurel, G., Buondonno, G., Mirabel, J., Lamiraux, F., Stasse, O., Mansard, N.: The Pinocchio C++ library: a fast and flexible implementation of rigid body dynamics algorithms and their analytical derivatives. In: IEEE/SICE International Symposium on System Integration (SII), pp. 614–619. IEEE (2019)
7. Aliengo. https://www.unitree.com/en/aliengo
8. Go1. https://www.unitree.com/en/go1
9. Semini, C., Tsagarakis, N.G., Guglielmino, E., Focchi, M., Cannella, F., Caldwell, D.G.: Design of HyQ, a hydraulically and electrically actuated quadruped robot. Proc. Inst. Mech. Eng., Part I: J. Syst. Control Eng. 225(6), 831–849 (2011)
10. Focchi, M., Bensaadallah, M., Frego, M., Peer, A., Fontanelli, D., Del Prete, A., Palopoli, L.: CLIO: a novel robotic solution for exploration and rescue missions in hostile mountain environments (2022). arXiv:2209.09693
11. UR5. https://www.universal-robots.com/products/ur5-robot/
12. Roscia, F., Cumerlotti, A., Del Prete, A., Semini, C., Focchi, M.: Orientation control system: enhancing aerial maneuvers for quadruped robots. Sensors 23(3), 1234 (2023)
13. Ramasubramanian, A.K., Mathew, R., Kelly, M., Hargaden, V., Papakostas, N.: Digital twin for human-robot collaboration in manufacturing: review and outlook. Appl. Sci. 12(10), 4811 (2022)
14. Lieberman, H., Paternò, F., Wulf, V.: End User Development, vol. 9. Springer, Berlin (2006)
15. Harris, C.R., Millman, K.J., Van Der Walt, S.J., Gommers, R., Virtanen, P., Cournapeau, D., Wieser, E., Taylor, J., Berg, S., Smith, N.J., Kern, R., Picus, M., Hoyer, S., van Kerkwijk, M.H., Brett, M., Haldane, A., del Río, J.F., Wiebe, M., Peterson, P., Gérard-Marchant, P., Sheppard, K., Reddy, T., Weckesser, W., Abbasi, H., Gohlke, C., Oliphant, T.E.: Array programming with NumPy. Nature 585(7825), 357–362 (2020). https://doi.org/10.1038/s41586-020-2649-2
16. Joseph, L., Cacace, J.: Mastering ROS for Robotics Programming: Design, Build, and Simulate Complex Robots Using the Robot Operating System. Packt Publishing Ltd. (2018)
17. Frigerio, M., Buchli, J., Caldwell, D.G., Semini, C.: RobCoGen: a code generator for efficient kinematics and dynamics of articulated robots, based on domain specific languages. J. Softw. Eng. Robot. (JOSER) 7(1), 36–54 (2016)
18. Featherstone, R.: Rigid Body Dynamics Algorithms. Springer, Berlin (2014)

A Locomotion Algorithm for an Apodal Robot to Climb and Descend Steps

Vitor Hugo Prado Gomes, Thiago de Deus Lima Rocha(B), Carla Cavalcante Koike, Dianne Magalhães Viana, and Jones Yudi Mori Alves da Silva

Campus Universitário Darcy Ribeiro, Universidade de Brasília, Brasília, DF, Brazil
[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract. Snake robots are one of the most versatile kinds of modular robots. Due to their ability to perform different gait patterns, such robots have been used in multiple constrained locomotion scenarios, especially those which are inaccessible to other robots. This work summarizes the snake robots' mechanical principles, applies them to the construction of an apodal modular robot with pitch-yaw joints and simulates its locomotion over a single stair step using two different Finite State Machine-based algorithms. Both cases successfully achieve the desired movement, but only the second algorithm complies with NBR 9050, which regulates the construction of stairs and steps in Brazil.

Keywords: Snake robots · Climbing robots · Locomotion algorithms

1 Introduction

Search and rescue operations in cluttered environments are one of the many tasks which pose difficulties to traditional wheeled robots and human workers. During disaster-like situations, rubble and debris can build up quickly inside buildings, which both traps civilians inside harmful areas and hinders rescue by conventional methods. Brazilian law 13.425/2017 and [1] determine that every building must have an emergency exit, which is usually a set of stairs. Modular robots, especially snake robots, are suited for this scenario because they can change their own morphology in order to overcome their geometrical limitations and move over stairs, through rubble and across a huge variety of terrains [2].

The study of snake locomotion was mathematically formalized by Professor Hirose's serpenoid curve [3] in 1974. It describes the snake's motion through the bending angles of each of its joints. Gomez describes, in his Ph.D. thesis [4], snake robot motion in both continuous and discrete form. According to his work, there are many ways to model an apodal modular robot. One of these is the idea of a block, which consists of one or more connected modules. A block is made of either different or identical modules. In order to correctly describe the construction



and assembly of an apodal robot, a list of parameters must be described: the bending angle ϕ is the angle formed at the joint by the two segments of the module. Its value is restricted to ϕ ∈ [−90, 90]°. For the specific case of the simulation, the distances d and d0 are equal, due to symmetry in the modules. N denotes the number of modules.

Fig. 1. Discrete serpenoid curve with N = 9 modules. Adapted from [4]

This work aims to describe a new way to move pitch-pitch and pitch-yaw snake robots. In order to effectively describe these algorithms, there are some parameters worth mentioning, based on [4]. The winding angle α, shown in Fig. 1, can be defined as the angle between the tangent line to the robot's surface and its direction line (usually the X-axis) on the plane of motion. The number k of undulations defines how many equal periodic arcs can be seen during the robot's movement. The most usual way to control each of the robot's joints is through sinusoidal generators. In this case, each joint generates a sine wave with amplitude A, the same frequency f as all the others, but a phase shift of Δφ compared to its neighbouring joints. These control strategies focus mainly on setting movement patterns for each of the robot's joints, which are usually moved by servomotors. The continuous model for the bending angle of the pitch-pitch robot, after discretization through sampling at fixed points of measurement, becomes the discrete model in Eq. (1). This equation states that, given an N-module apodal robot with block length d and left block length d0, and choosing its winding angle α, its number of undulations k and the phase φ, for each i-th joint point a bending angle ϕi can be found that is applicable to the joint in order to make it move according to the chosen control strategy. Table 1 summarizes these parameters.

ϕ_i(φ) = 2α · sin((πk/N) · (d0/d)) · sin((2πk/N) · (i − 1) + φ)    (1)
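To make the sinusoidal-generator strategy concrete, the short Python sketch below computes the set-point of each joint as A·sin(2πf·t + i·Δφ); the numerical values of A, f and Δφ are placeholders, not the parameters used in this work.

import numpy as np

def sinusoidal_setpoints(t, n_joints, A_deg=30.0, f_hz=0.5, dphi_deg=40.0):
    # Joint bending angles (degrees) of n_joints sinusoidal generators at time t:
    # same amplitude and frequency for all joints, constant phase shift between neighbours.
    i = np.arange(n_joints)
    return A_deg * np.sin(2.0 * np.pi * f_hz * t + np.radians(i * dphi_deg))

# Example: sample the generators at 50 Hz over one second for a 10-joint robot.
ts = np.linspace(0.0, 1.0, 51)
trajectory = np.array([sinusoidal_setpoints(t, n_joints=10) for t in ts])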



Table 1. Parameters of the discrete serpenoid wave. Source: [4].

2D serpenoid curve/wave parameters (parameter: description; range):
k: number of undulations; k ≥ 1
α: winding angle; α ∈ [0, 120]
N: number of articulations; N ≥ 2
d: distance between joints; d > 0
d0: left segment of the module; d0 > 0
φ: phase; φ ∈ (−180°, 180°]
i: joint number; i ∈ {1, M}

In order to effectively perform the adequate movement there are two conditions: (1) the robot's geometry must be compliant with the spatial dimensions of both the stairs and the rubble; (2) the locomotion algorithm must be sufficiently robust and smart so as to guide the robot through its obstacles. On the one hand, the stairs' dimensions are well known and standardized by [5], which enables the design of a compliant working snake robot. On the other hand, although not impossible, the correct identification of rubble geometry and path planning poses a question which has not been completely answered yet. Aware of these challenges, this article explores the first steps towards the construction of a fully functional rescue snake robot, first proposing a design for a new pitch-yaw modular snake robot and also developing a motion algorithm so that it can climb one stair step. In order to validate the algorithm and the robot's motion, the V-REP PRO EDU [6] simulator was adopted throughout the experiment.

2 Methodology

2.1 Robot's Modules and Their Assembly

In order to achieve efficient locomotion, a new module, to the best of our knowledge not yet seen in similar literature, was proposed in [7] and is used throughout the simulations. The outer surface of each robot module is made of 3 different features. The first one is an octagonal prism, which prevents undesired rolling motion. The other two surfaces are made of non-traditional geometries. Each of these surfaces is positioned next to one of the prism's bases. Within this prism there are components such as the Arduino microcontroller, a battery, a power board and an XBee module. Each module is equipped with 2 servomotors responsible for moving the robot, one for each non-traditional geometry surface. In the same module, one non-traditional geometry surface is always shifted 90° relative to its analog on the other side, which enables both yaw and pitch movement. The full assembly of the module weighs 3 N and each motor is set to act with a torque of 3 N·m.



Figure 2a, b and c show important dimensions for each module. Am = 80 mm stands for its height, Lm = 80 mm stands for its width and Cm = 155.8 mm stands for its length. The value Cc = 21.25 mm is also relevant because it can be used to determine the full length of an N-module robot: when joining two modules, their surfaces overlap by exactly the distance Cc. Consider also the following step dimensions: Sh stands for the step's height, Sw for its width and Sl for its length.

(a) Module isometric view

(b) Width and height of the module

(c) Module’s length and distance to motor axis

Fig. 2. Module’s dimensions and overall geometry

In order to assemble the full robot, each module is connected in a chain configuration using rods as linking mechanisms. These rods couple one module's pitch joint with the next module's yaw joint, which enables these servos to take turns moving the robot.

2.2 Setting the Simulation

In order to effectively simulate the robot's movement, it is required that the robot's 3D model and its mechanical characteristics are all configured and correctly imported into the simulation environment. The drawing kit contained within the simulation environment was used to mimic exactly the outer surface of the proposed module. Its mechanical properties, such as the center of mass and the moments of inertia, were also numerically calculated using SolidWorks. Owing to the fact that the servos are the only means of locomotion for the robot, they were modelled as rotational joints within the simulation. In terms of size, the full robot is made of 10 individual modules, each coupled to its neighbours via linking rods. Each rotational joint is controlled via a Python script which connects with the V-REP simulator through an API called V-REP Remote API Functions (Python) [8]. To develop each locomotion algorithm, it is assumed that for each initial position of the robot, there will always be at least one step immediately above it which is unscathed, hence available for climbing purposes. This assumption is



based on the fact that emergency exits, such as stairs, can withstand more damage than the rest of the building. Based on this assumption, each of the following algorithms implements a step-by-step pattern of locomotion. Although this choice might not ensure minimum-time locomotion, it is rather useful in terms of repeatability of movement and of the geometrical constraints between the robot's modules and the stair's dimensions.
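For reference, commanding a simulated joint from Python through the V-REP remote API mentioned above looks roughly like the sketch below; the scene object name and connection settings are placeholders, and newer CoppeliaSim releases expose the same legacy functions under the module name sim instead of vrep.

import math
import vrep  # legacy V-REP remote API bindings ('sim' in newer CoppeliaSim releases)

vrep.simxFinish(-1)  # close any stale connection
client_id = vrep.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
if client_id == -1:
    raise RuntimeError("Could not connect to the V-REP remote API server")

# 'pitch_joint_1' is a placeholder scene object name, not the naming used in this work.
err, joint = vrep.simxGetObjectHandle(client_id, 'pitch_joint_1',
                                      vrep.simx_opmode_blocking)

# Command a 30-degree bending angle on that revolute joint.
vrep.simxSetJointTargetPosition(client_id, joint, math.radians(30.0),
                                vrep.simx_opmode_oneshot)

vrep.simxFinish(client_id)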

2.3 First Algorithm

In the first scenario, the robot's body is set longitudinally perpendicular to the stair and, using only pitch movement, it is expected to climb up the stair, one module at a time, so that throughout the simulation many of its modules are placed on top of the stair. In this case, the pitch movement is what both raises the robot up to the stair's superior edge and places each module back on its superior surface. Once the N-module robot is all set up, this algorithm runs according to the Finite State Machine in Fig. 3.

Fig. 3. Finite State Machine for the pitch-pitch algorithm

In order for the algorithm to work, the height of the step will be constrained by the robot’s dimensions. Because the robot is positioned perpendicularly to



the step as the simulation is set up, the step's width and the robot's length are related. Similarly, the robot's width and the step's length are also related. Finally, the distance by which the robot is capable of moving its modules up via the pitch joints is related to the step's height. Figure 4a and b show these relations.

(a) Minimum length for the step (X-axis) and for its width (Y-axis) using only pitch-pitch movement

(b) Maximum step’s height(Z-axis) during pitch-pitch first scenario

Fig. 4. Dimensional restrictions for the first scenario

Taking N as the robot’s number of modules and Ch the full length of one module, the step’s dimensions can be expressed as functions of the robot’s dimensions, which can be seen in Eqs. (2), (3) and (4), where Sh is the step’s height, Sw is its width and Sl is its length: Sh ≤

2.4

Lm + Cm − Cc, 2

(2)

Sw ≥ N · Cm + N · (Ch − 2 · Cc),

(3)

Sl ≥ Lm.

(4)
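Assuming the form of Eqs. (2) to (4) as written above and the module dimensions from Sect. 2.1, a candidate step can be checked against the first algorithm's restrictions with a short script like the following (the rod length Ch is a placeholder value):

# Feasibility check of a step against the first algorithm's restrictions.
# Module dimensions (mm) are those of Sect. 2.1; CH is a placeholder rod length.
AM, LM, CM, CC = 80.0, 80.0, 155.8, 21.25
N = 10          # number of modules
CH = 40.0       # linking rod length (placeholder)

def step_ok_first_algorithm(sh, sw, sl):
    # Return True if step height sh, width sw and length sl (mm) satisfy Eqs. (2)-(4).
    ok_height = sh <= LM / 2.0 + CM - CC            # Eq. (2)
    ok_width = sw >= N * CM + N * (CH - 2.0 * CC)   # Eq. (3)
    ok_length = sl >= LM                            # Eq. (4)
    return ok_height and ok_width and ok_length

print(step_ok_first_algorithm(sh=170.0, sw=1800.0, sl=300.0))  # True for these values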

2.4 Second Algorithm

This second scenario contains the same step and robot, but their relative angle is now different. The initial configuration is set in such a way that the robot is parallel in length to the step's length. Using both yaw and pitch joints, it is expected that the robot will place as many of its modules as it possibly can on top of the stair. Due to this initial configuration, the pitch joints will raise each module until the step's height is reached, then the yaw joints will laterally move each module until it is fully placed on top of the stair. Once the N-module robot is set, this algorithm can be represented by the Finite State Machine in Fig. 5. Because of the orientation difference between the robot and the step in each of the scenarios, the restrictions between module dimensions and step dimensions will also be different in each scenario. The height of the step is tied to the



Fig. 5. Finite State Machine for the Pitch-yaw algorithm

maximal height the robot is capable of achieving. The step's width will be tied to the robot's upper base surface. The step's length will be tied to the robot's total length. Figures 6a, b and 7 show these restrictions. Take N as the robot's number of modules and Ch as each linking rod's length. Equations (5) to (7) clarify the restrictions between step and robot dimensions:

Sh ≤ Am/2 + Ch + Cm − Cc,    (5)

Sw ≥ Lm/2 + Ch + Cm − 2 · Cc + Lm/2,    (6)

Sl ≥ N · Cm + N · (Ch − 2 · Cc).    (7)

3 Results

3.1 First Algorithm

Regarding initial conditions, the robot approaches the step according to Eq. (1) in [4] as much as possible, as can be seen in Fig. 8a and b. The robot lifts its module closest to the stair step in Fig. 9a. Because of this movement, the lateral distance between the robot and the stair increases, which requires the robot to move its ground modules towards the stair again. Then, the robot's new closest horizontally placed module to the stair is raised, as shown in Fig. 9b,



(a) Maximum step height (Z-axis) for the pitch-yaw scenario

(b) Minimum step width (X-axis) for the pitch-yaw scenario

Fig. 6. Dimensional restrictions for the second scenario

Fig. 7. Minimum step length (Y-axis) for pitch-yaw scenario

(a) First scenario’s initial position

(b) Position after the first approach

Fig. 8. Initial setup and approach for first scenario

(a) Robot after raising its first module

(b) Robot after raising its second module

Fig. 9. Initial setup to climb the step

(a) Robot after placing its first module on top of the step’s upper surface

(b) Robot after raising its third module

Fig. 10. Climbing process for first scenario


(a) Robot's position at half of the simulation time


(b) Robot's position at the end of the simulation time

Fig. 11. Arrival on step’s top surface

and the whole robot moves horizontally towards the stair again. By the time there are 2 modules vertically raised, the robot places the first one that was raised on top of the stair, as in Fig. 10a, and goes on to raise another horizontal module to a vertical orientation, as in Fig. 10b. From this point on, the robot repeats this cycle: horizontally approach the step, raise its horizontal module closest to the step, and place one module on top of the step, until there are no more modules below the stair or vertically placed, which means that all modules are on the step's top surface. It is crucial to mention that the robot's horizontal approach can be done both by modules above and below the step's top surface. The criterion for choosing which modules will drive the robot is based on how many modules there are at each level (above or below the step's top surface). Figure 11a and b depict different moments of the simulation, at half of the simulation time and at its end, respectively. The simulation as a whole took 5 min and 26 s. Most of this time is due to the approach movements the robot makes in order to move its raised modules next to the step's top surface.
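Purely as an illustration, the climbing cycle described above can be summarized by a small state machine such as the sketch below; the state names and the simplified module counting are our own abstraction of the behaviour in Fig. 3, not the code used for the simulation.

from enum import Enum, auto

class State(Enum):
    APPROACH = auto()  # move the whole robot horizontally towards the step
    RAISE = auto()     # raise the horizontal module closest to the step
    PLACE = auto()     # place the lowest raised module on the step's top surface
    DONE = auto()

def climb_one_step(n_modules):
    # Toy trace of the cycle: approach, raise, place, until all modules are on top.
    below, raised, on_top = n_modules, 0, 0
    state, trace = State.APPROACH, []
    while state is not State.DONE:
        trace.append(state.name)
        if state is State.APPROACH:
            state = State.RAISE if below > 0 else State.PLACE
        elif state is State.RAISE:
            below, raised = below - 1, raised + 1
            state = State.PLACE if raised >= 2 or below == 0 else State.APPROACH
        elif state is State.PLACE:
            raised, on_top = raised - 1, on_top + 1
            state = State.DONE if on_top == n_modules else State.APPROACH
    return trace

print(climb_one_step(4))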

3.2 Second Algorithm

Regarding initial conditions, in the second simulation case both the robot and the step are positioned parallel to each other in terms of length, as Fig. 12a depicts. In order to increase stability, the robot's tail is positioned horizontally, as in Fig. 12b, to create a foundation which is expected to reduce tendencies of tumbling. Once this is complete, 2 of the foremost modules on the other side of the robot, its head, are raised, as in Fig. 13a, and the first module to be raised is also placed on top of the step, as in Fig. 13b. After the first head module is set on top of the stair, each and every one of the remaining modules is also raised and set on top of the step, as shown in Fig. 14a and b. By the time half of the robot is on top of the stair, a new support is made using its head modules and the former base below the step is undone, as in Fig. 15a. The same movement done before by the previous modules is repeated to place the entire robot on top of the step. When this occurs, the head base is undone, as in Fig. 15b, and the robot remains at a rest position, again parallel to the step, but now on top of it. The simulated time is 3 min and 9 s, which is exactly 2 min and 17 s less than with the algorithm of the first scenario.



(a) Initial position for simulating the second scenario

(b) Robot after creating its stable foundation

Fig. 12. Initial setup for second scenario

(a) Robot after lifting its first two modules

(b) Robot after placing its first module on top of the step’s upper surface

Fig. 13. Initial movements for second scenario

(a) Robot after placing its second module on top of the step’s upper surface

(b) Robot after placing its third module on top of the step’s upper surface

Fig. 14. Climbing process for second scenario

(a) Robot assembles its upper head basis and disassembles its lower tail basis

(b) Robot’s final position for the second algorithm

Fig. 15. Arrival on step’s top surface

4 Conclusions

Both algorithms were successful during simulations. The set of dimensional restrictions for the robot's length, width and height was not prohibitive for its functioning. However, regarding real conditions and legal specifications for the design of stairs and steps, especially according to the Brazilian Association of Technical Norms (ABNT) NBR 9050 [5], the first algorithm's set of restrictions for the steps is not human-compatible. The worst inconsistency is that the robot's length must be tied to the step's width: the bigger the number N of modules, the wider the step and the more incompatible the step becomes relative to NBR 9050. Besides, this algorithm only uses pitch joints, which, for pitch-yaw modules, is a waste of the robot's capabilities. The same results could be obtained using simpler modules (e.g., pitch-only modules). Even though this choice is not cost-effective, it does not affect the robot's ability to move. The restrictions imposed by the second scenario are easier to comply with for stair steps because of the relation between the robot's length and the step's length. This algorithm makes full use of the servomotors to move, which is more cost-effective and enables more complex movement gaits, which of course do require more complex modules, such as a pitch-yaw one. On top of it all, at the end of the process of climbing one step, the robot is already set to climb another step of a supposed stair, which is beneficial for repetitive gaits used to climb a multi-step stair or even a large set of stairs. Because the climbing algorithms were successful, future work will focus on how to effectively traverse both rubble terrain and sets of stairs together, including surface drag tests to improve the robot's maneuverability.

References

1. Associação Brasileira de Normas Técnicas: NBR 9077: Saídas de emergência em edifícios, p. 8. Rio de Janeiro (2001)
2. Lipkin, K., et al.: Differentiable and piecewise differentiable gaits for snake robots. In: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, pp. 1864–1869 (2007). https://doi.org/10.1109/IROS.2007.4399638
3. Umetani, Y., Hirose, S.: Biomechanical study of serpentine locomotion. In: Proceedings of the 1st RoManSy Symposium, pp. 171–184 (1974)
4. Gomez, J.G.: Modular robotics and locomotion: application to limbless robots. Ph.D. thesis, Universidad Autonoma de Madrid, Madrid (2008)
5. Associação Brasileira de Normas Técnicas: NBR 9050: Acessibilidade a edificações, mobiliário, espaços e equipamentos urbanos. Rio de Janeiro (2015)
6. Rohmer, E., Singh, S.P.N., Freese, M.: V-REP: a versatile and scalable robot simulation framework. In: Proceedings of the International Conference on Intelligent Robots and Systems (IROS) (2013)
7. dos Santos, R., et al.: Aspectos da locomoção bioinspirada e simulação de robô modular apodal. Congresso Nacional de Engenharia Mecânica, Salvador, BA, Brasil (2018)
8. Remote API Functions (Python). http://www.coppeliarobotics.com/helpFiles/en/remoteApiFunctionsPython.htm

Author Index

A Akkawutvanich, Chaicharn Alkan, Bugra 323, 383 Aviles, Oscar F. 52

do Espírito Santo, Caroline 357 Dutra, Max Suell 52 Dyllon, Shwan 236

311

E El Youssef, Ebrahim Samer 159 El’Youssef, Ebrahim 251 Escalante, Felix M. 197 Exarchos, Alexandros 345

B Becker, Marcelo 197 Bernarnd, Yann 40 Berns, Karsten 299 Besselmann, Marvin Grosse 224 Boaventura, Thiago 197 C Cai, Wenhan 112 Carboni, Andrea Piga 3 Carvalho, João 125 Cerbaro, Jonathan 211 Chanfreau, Léonard 147 Chinanthai, Malarvizhi Kaniappan Chugo, Daisuke 368 Conceicao, Andre G. S. 275 Costa, Thamiris 251 Cunha, Thiago Boaventura 171

F Fernandes, Gustavo Queiroz 3, 75 Focchi, Michele 395 Freitas, Gustavo Medeiros 171

323

D da Costa, Lucas José 357 da Costa, Marcus V. Americano 159 da Silva, Ericka Raiane 357 Danilov, Vladimir 89 de Albuquerque, Thayse Saraiva 357 de Assis Monteiro, Vitor 211 de Azevedo Dantas, André Felipe Oliveira 357 de Carvalho, Marcelo E. C. 275 de Oliveira, André Schneider 211 De Pieri, Edson 251 de Souza, Marina Baldissera 3 Delisle-Rodriguez, Denis 357 Diane, Sekou 89 Díaz-Amado, José 263 Dillmann, Rüdiger 224

G Gomes, Vitor Hugo Prado 407 Guacheta-Alba, Juan C. 52 H Haomachai, Worasuchad 147 Hasan, Naimul 323 Hashimoto, Hiroshi 368 Hashimoto, Kenji 15 Henschke, Saskia 345 Hiyoshi, Kenta 28 Honekamp, Celine 345 Hossain, Mohammad Al-Amin 323 Huang, Songrui 112 Hutter, Marco 40 I Ibrahim, Sidik Haroune Ishizawa, Yuta 15

236

J Jatsun, Sergey 101 Junger, Jonas 40 Júnior, Pedro Américo Almeida Magalhães 171




K Kalempa, Vivian Cremer 211 Kanno, Kimikage 28 Kapytov, Dmitrii 89 Klimov, Konstantin 89 Koike, Carla Cavalcante 407

Roscia, Francesco

S Sá, Joana 185 Santos, Filipe 63 Sattar, Tariq P. 236 Schneider, Alexandra 345 Schneider, Danilo G. 286 Scholz, Neele 345 Semini, Claudio 395 Sequeira, João Silva 185 Seyfarth, André 345 She, Jin-Hua 368 Silva, Jones Yudi Mori Alves da 407 Silva, Manuel F. 63 Siqueira, Adriano 336 Sivak, Oleksandr 299 Souza, Marina Baldissera de 75 Sricom, Natchaya 311 Stähler, Carina 345 Stasica, Maximilian 345 Stemmer, Marcelo R. 286 Streiling, Kai 345 Suzuki, Taisei 15 Syring, Emma 345 Syrykh, Nikita 136

L Laus, Luís Paulo 3 Leite, Anderson 263 Li, Yao 368 Libarino, Cleia 263 Lopes, Maria S. 63 M Macêdo, Alam Rosato 159 Malchikov, Andrei 101 Manoonpong, Poramate 147, 311 Marques, Joao 263 Martinelli, Dieisson 211 Martins, Daniel 3, 75 Mauledoux, Mauricio 52 Medeiros, Vivian S. 197 Mizuuchi, Ikuo 28 Moreira, António Paulo 63 Moreira, Giovane 263 Moreno, Yecid 336 Mosconi, Denis 336 Muramatsu, Satoshi 368

T Takahashi, Ryunosuke 28 Takenaka, Hiroki 15 Tanaka, Takayuki 15 Teixeira, Marco Antonio Simões 211 Tischhauser, Fabian 40 Tokhi, Mohammad Osman 236, 323, 383

N Nakai, Mauricio Eiji 125 Nunuparov, Armen 136 O Ota, Hayato 15 Oumar, Ousmane Abdoulaye Ozkat, Erkan Caner 323

236

P Pedro, Gabriel Duarte Gonçalves Pirincoglu, Suat 345 Polat, Melike 345 R Ribeiro, Tiago T. 275 Rincon, Leonardo Mejia 75 Rocha, Geovana Kelly Lima 357 Rocha, Thiago de Deus Lima 407 Rönnau, Arne 224

395

171

V Valencia-Casteneda, Angie J. 52 Valsecchi, Giorgio 40 van Dam, Loes 345 Viana, Dianne Magalhães 407 Vonwirth, Patrick 299 W Webb, Louie

323, 383

Y Yokota, Sho 368 Z Zhao, Mingguo

112