Robot 2017: Third Iberian Robotics Conference: Volume 2
ISBN 9783319708355, 9783319708362, 331970835X, 3319708368



English, 859 pages, 2017


Table of contents:
Preface......Page 6
Organizing Committee......Page 8
Robot 2017 Program Committee......Page 9
Challenges in Medical Robotics in the Frame of Industry 4.0......Page 11
Machine Learning in Robotics......Page 12
Robot Competitions......Page 13
Sessions Program Committee......Page 14
Contents......Page 18
Robotic Solutions for Flexible Manufacturing......Page 26
1 Introduction to Large Scale Full Plant Automation......Page 27
2 System Description......Page 28
2.1 CDPR, Cable Robot......Page 29
2.2 Automated Gantry Crane......Page 31
2.3 Cable Driven Platform......Page 32
3.1 Workspace Definition......Page 33
3.2 Mechanical Design Calculation......Page 34
5 Control System Improvement......Page 35
References......Page 37
1 Introduction......Page 39
2 EuRoC Realistic Showcase......Page 41
3.1 Skill-Based Framework......Page 42
4.1 Multi Sensor Integration......Page 43
4.2 Human Detection and Tracking Using Laser Scanners......Page 44
5 Safety Management......Page 45
5.1 Region-Based Filtering and Reasoning......Page 46
5.2 Safety Handling......Page 47
6 Human-Robot Interaction......Page 48
8 Conclusion......Page 49
References......Page 50
1 Introduction......Page 52
2 On the Quality of a Grasp......Page 53
3 Grasp Transfer Quality Measures......Page 54
3.3 Measure Based on the Reachable Area of the Object Surface......Page 55
4 Implementation......Page 56
4.2 Obtaining Qg and Qp Using OWS......Page 57
4.3 Defining Sg and Sp......Page 58
5 Application Examples......Page 59
6 Validation......Page 60
7 Summary and Future Work......Page 62
References......Page 63
Application of Robotics in Shipbuilding......Page 64
1 Introduction......Page 65
2.1 System Description......Page 67
2.2 3D Acquisition System......Page 68
2.3 Adopted Solution......Page 69
3.1 Library of Templates......Page 70
3.3 Graphical User Interface......Page 71
3.4 Other Software Components......Page 72
4 Conclusions......Page 73
References......Page 74
1 Introduction......Page 75
2 Robotic Welding System Architecture......Page 78
2.2 Description of Robotic Arm Coordinate Systems......Page 79
2.3 System for Picking up Singular Points......Page 82
2.4 Computer System......Page 83
3 Conclusions......Page 84
References......Page 85
Cognitive Architectures......Page 86
1 Introduction......Page 87
2 Materials and Methods......Page 89
2.1 OrBiOne and KIO RTLS......Page 90
2.3 ROSRV......Page 91
2.4 Method......Page 93
3 Results and Discussion......Page 95
4 Conclusion and Further Work......Page 97
References......Page 98
1 Introduction......Page 99
2 Triaxial Sensor Model......Page 100
3 Calibration......Page 101
3.2 Model Redefinition......Page 102
3.3 Calibration Algorithm......Page 103
4.2 Data Acquisition......Page 104
5.2 Gyroscope Calibration......Page 106
References......Page 109
1 Introduction......Page 111
2 Related Work......Page 112
3.1 CPG Network and Neuron's Model......Page 113
3.2 Phase Resetting Controller......Page 115
4 Proposed Methodology for Automatic Characterization of Phase Resetting Controllers......Page 116
4.1 Experimental Results......Page 118
References......Page 120
1 Introduction......Page 122
2 Materials and Methods......Page 123
2.1 The Environment......Page 124
2.2 The Interface......Page 126
3 Results......Page 127
3.1 Proposal for Interface Improvements......Page 130
5 Conclusions and Future Work......Page 132
References......Page 133
Machine Learning in Robotics......Page 134
1 Introduction......Page 135
2 Background......Page 136
3 Methodologies......Page 137
3.1 Artificial Foveal Vision......Page 138
3.2 Weakly Supervised Object Localization......Page 141
4.1 Information Compression......Page 142
4.2 Attentional Framework Evaluation......Page 143
5 Conclusions and Future Work......Page 145
References......Page 146
1 Introduction......Page 147
2 Background......Page 148
3.1 WoLF-PHC......Page 149
3.3 WPL......Page 150
4 Deep Reinforcement Learning......Page 151
5 Mixed Deep Q-Learning......Page 152
6 Results......Page 153
References......Page 157
1 Introduction......Page 159
2 Revisiting Gaussian Mixture Models......Page 161
3 Using a GMM for Action Selection......Page 162
3.1 Initializing a GMM with Few Samples......Page 163
3.3 Updating an RWR-GMM......Page 164
4 Experimentation......Page 166
4.1 Using Algorithm 1......Page 168
References......Page 170
1 Introduction......Page 171
2 Related Work......Page 172
3.2 Visual Vocabulary......Page 173
3.5 Object Recognition......Page 174
4 Methodology......Page 175
5 Results......Page 176
6 Analysis of Results......Page 179
7 Conclusions......Page 181
References......Page 182
1 Introduction......Page 183
2.2 DCNN-SVM......Page 185
3 Experimental Setup......Page 186
4 Results......Page 188
References......Page 191
Robots Cooperating with Sensor Networks......Page 194
1 Introduction......Page 195
2 Related Work......Page 197
3 PF-EKF Range-Only Localization......Page 198
4 Experiments......Page 200
4.1 Performance......Page 201
5 Preliminary Tests in the Real Factory......Page 202
6 Conclusions......Page 203
References......Page 205
1 Introduction......Page 206
2.1 System Architecture......Page 207
2.2 Implementation......Page 208
3.1 Overview......Page 210
3.2 Architecture and Implementation......Page 211
4 Experiments......Page 212
References......Page 214
1 Introduction......Page 216
2 Related Work......Page 217
3 Robot-SN Security Architecture......Page 218
4.1 Sensor Node Discovery......Page 221
4.2 Topology Establishing......Page 222
5 Experimentation and Evaluation......Page 223
References......Page 226
1 Introduction......Page 229
2 State-of-the-Art......Page 230
3 Problem Formulation......Page 231
4.1 Short Term “equivalent” Cost Functional and Reach Set......Page 234
4.2 The Reach Set MPC Procedure......Page 236
4.3 The Robust Reach Set MPC Procedure......Page 237
References......Page 239
Sensor Technologies Oriented to Computer Vision Applications......Page 241
1 Introduction......Page 242
2 Related Work......Page 243
3 Overview of the Method......Page 244
4.1 3D Data Segmentation......Page 245
4.2 Object Classification and Positioning......Page 247
5 Experimental Test......Page 249
6 Conclusions......Page 251
References......Page 252
1 Introduction......Page 254
2.1 Field Platform......Page 256
2.2 3D Reconstruction......Page 257
3 Results......Page 259
References......Page 264
1 Introduction......Page 266
2.2 Edge Detection......Page 268
3 Localization Process......Page 269
4 Results......Page 271
5 Conclusions and Future Work......Page 273
References......Page 274
Robot Competitions......Page 276
1 Introduction......Page 277
2 The Grand Challenge......Page 278
3 Torre Del Sale - the Competition Site......Page 279
4.1 The Competition......Page 281
4.2 Grand Challenge Results......Page 283
5 Conclusions and Lessons Learned......Page 284
References......Page 285
1 Introduction......Page 286
2.1 Hardware Setup......Page 287
2.2 Simulation Environment......Page 288
3 Software Architecture......Page 289
3.1 State Machine to Manage the Mission Strategy......Page 291
4 Flight Tests......Page 292
5 MBZIRC Competition......Page 294
References......Page 295
1 Introduction......Page 297
3 Mapping System......Page 299
4 Experiments......Page 302
References......Page 307
1 Introduction......Page 309
2 Related Work......Page 310
3 Localization Filter......Page 311
4.1 Management of Dynamic Obstacles......Page 313
5 Software Description......Page 315
6 Experiments......Page 316
6.2 Robots......Page 317
6.3 Experimental Results......Page 318
7 Conclusions......Page 319
References......Page 320
Visual Perception for Robotics......Page 321
1 Introduction......Page 322
2 State of the Art......Page 323
3 Total Station as a Ground Truth......Page 324
4 Alignment Process......Page 326
5.1 Experiment with Total Station......Page 327
6 Discussion......Page 330
References......Page 331
1 Introduction......Page 333
2 Dirichlet Segmentation......Page 335
3.1 Pose Generation and Cloud Registration......Page 336
3.2 Arm Controller......Page 337
4 Experimental Validation......Page 339
5 Discussion......Page 342
References......Page 343
1 Introduction......Page 345
3.1 Kinematic Model......Page 347
3.2 Dynamic Model......Page 349
3.3 Camera Model......Page 350
4.2 Vision System......Page 351
5.1 Evaluation of the Performance of the Vision Module......Page 352
5.2 Impact Response: Cartesian Deflection and Velocity......Page 353
5.3 Passive Compliance Vs Active Compliance: Zero Deflection Control......Page 354
6 Conclusion......Page 355
References......Page 356
1 Introduction......Page 358
1.2 Motivation......Page 359
2 System Outline......Page 360
3.2 Grid Cells Classification Using 3D-NDT......Page 361
3.3 Map Update......Page 362
4 Path Planning......Page 363
4.1 Collision and Traversability Checking......Page 364
5 Implementation......Page 365
6.1 Plan Execution......Page 366
7 Conclusion......Page 368
References......Page 369
Educational Robotics......Page 370
1 Introduction......Page 371
2 The Robobo Robot......Page 372
2.1 Robobo Hardware......Page 374
2.2 Robobo Software......Page 376
3 Example of Student Project......Page 378
4 Conclusions......Page 380
References......Page 381
1 Introduction......Page 382
2 Background and Related Work......Page 383
3.1 Overview......Page 384
3.4 Education......Page 385
4 World-Wide ROS Training Activities......Page 386
5.1 Overview......Page 387
5.2 Education Measures......Page 388
5.3 The Education Cube......Page 389
5.4 Planned Course Contents......Page 391
5.6 Measures Conducted by the ROSIN Consortium......Page 392
References......Page 393
1 Introduction......Page 394
2 The DETI Robot......Page 395
3.1 Course Structure......Page 396
3.2 Robot Projects......Page 397
4 Assessing the Utilization of Robots in the Classroom......Page 398
5 Results......Page 399
5.2 Course Unit Quality Survey......Page 400
5.3 Students Survey......Page 401
References......Page 403
1 Introduction......Page 406
2 Methodology......Page 408
2.1 Mobile Robot Doc......Page 409
2.3 Test of the Teaching Material (Phase A)......Page 410
2.4 Run of the Educational Sessions at National Level (Phase B)......Page 411
2.5 Questionnaires: Evaluation and Feedback......Page 412
2.7 Scenarios......Page 413
3 Data Analysis......Page 414
References......Page 417
Autonomous Driving and Driver Assistance Systems (I)......Page 419
1 Introduction......Page 420
2 Car Model......Page 421
2.1 Linear Car Model......Page 422
3.1 Attitude Complementary Filter......Page 423
3.3 Car Estimator Model......Page 424
4 Test Platform......Page 425
5 Results......Page 426
5.2 Position Complementary Filter......Page 427
5.3 Car Estimation Model......Page 429
References......Page 431
1 Introduction......Page 433
2.1 Non Linear Model......Page 434
2.3 Validation......Page 435
3.1 Reference Value......Page 437
3.3 Proposed Controller......Page 438
3.4 LQR......Page 439
4.2 With Torque Vectoring Controller......Page 441
4.3 LQR......Page 442
4.4 Conclusion......Page 443
References......Page 444
1 Introduction......Page 445
2 Anticipative Kinodynamic Motion Planner Framework......Page 446
2.1 G2-splines Path Generation......Page 447
2.2 Velocity Profile Generation......Page 449
2.3 Cost Structure......Page 451
3 Simulations......Page 453
References......Page 456
1 Introduction......Page 457
2 System Model......Page 459
2.2 Road Geometry and Traffic Model......Page 460
3.1 Nonlinear Model Predictive Control......Page 461
3.2 Deadzone Penalty Functions......Page 462
3.3 Case Study: Ext-Eco-CC Mathematical Optimisation Problem......Page 463
4 System Evaluation......Page 464
4.1 Experimental Setup......Page 465
4.2 Experimental Results Validation......Page 466
5 Conclusion and Future Research......Page 468
References......Page 469
Autonomous Driving and Driver Assistance Systems (II)......Page 471
1 Introduction......Page 472
2 Methods......Page 473
3 Comparative Analysis......Page 475
3.1 Results......Page 477
4 Conclusion......Page 482
References......Page 483
1 Introduction and Motivation......Page 484
2.2 3D-LIDAR-based Vehicle Detection......Page 485
3 RefCN: Vehicle Detection Using 3D-LIDAR Reflection and YOLOv2 Object Detection Framework......Page 486
3.2 DRM-based YOLOv2 Object Detection Framework......Page 487
4.3 Experimental Setup and Computational Analysis......Page 489
4.4 Quantitative Results......Page 490
5 Conclusions, Key Findings and Future Work......Page 494
References......Page 495
1 Introduction......Page 496
2 Related Work......Page 497
3 System Overview......Page 498
4 Obstacle Detection......Page 499
4.2 Joint Detection and Viewpoint Estimation......Page 500
4.4 Implementation Details......Page 501
5.2 Extrinsic Parameters Auto-Calibration......Page 502
6.1 Object Detection and Viewpoint Estimation......Page 503
6.2 Scene Modeling......Page 504
7 Conclusion......Page 505
References......Page 506
1 Introduction......Page 508
2 Design......Page 509
2.2 Actuators......Page 511
2.4 Localization......Page 512
2.6 Planning......Page 513
2.8 Communication......Page 514
2.12 Debugging......Page 515
3.2 IvvI 2.0......Page 516
References......Page 518
Challenges in Medical Robotics in the Frame of Industry 4.0......Page 520
1 Introduction......Page 521
2 Some Health 4.0 Applications Today......Page 522
References......Page 530
1 Introduction......Page 532
2.1 Need of More Precise Tracking Technology......Page 534
2.2 Description of the Robotic Assisted Surgery System......Page 535
2.3 Mechanical Tracking System......Page 537
3.2 Surgical Planning......Page 538
3.4 Registration Performance......Page 539
4 Discussion and Conclusions......Page 541
References......Page 542
1 Introduction......Page 544
2.1 Ectomies......Page 545
2.2 Reparative-Reconstructive-Substitutive Surgery......Page 546
2.4 Laser Surgery......Page 548
3 Challenges......Page 549
3.1 Image Registration......Page 550
3.2 Modeling and Tracking......Page 553
4 Work in Progress......Page 554
References......Page 555
1 Introduction......Page 556
2 Cognitive Architecture......Page 558
4 Action System......Page 559
5 Reinforcement Learning......Page 560
References......Page 561
Rehabilitation and Assistive Robotics......Page 562
1.1 State of the Art......Page 563
2 Mechanical Design......Page 564
2.1 Finger Mechanism......Page 565
2.2 Hand Interface......Page 567
3 Biomechanical Analysis......Page 568
4 Prototype Stage......Page 571
5 Conclusion and Future Work......Page 572
References......Page 573
1 Introduction......Page 575
2.2 Participants......Page 578
2.3 Task......Page 579
3 Results......Page 582
References......Page 583
1 Introduction......Page 586
2 HybriSpot 24......Page 588
4 XML Protocols......Page 590
4.1 Basic Commands......Page 591
4.2 High-Level Commands......Page 592
6 Results......Page 594
7 Conclusions......Page 596
References......Page 597
Robotics and Cyber-Physical Systems for Industry 4.0 (I)......Page 598
1 Introduction......Page 599
1.1 State of the Art......Page 600
2 Work Principal......Page 601
2.1 Hand-Guiding Force......Page 602
2.2 Hand-Guiding Moment......Page 603
3 Controller......Page 604
3.1 Robot Control......Page 605
4 Experiments and Results......Page 606
5 Conclusion and Future Work......Page 608
References......Page 609
1 Introduction......Page 610
1.3 Overview of the System......Page 611
2.1 Real-Time 3D Reconstruction......Page 612
2.2 Off-line 3D Reconstruction......Page 613
3 Mesh Creation......Page 614
3.1 Integration with Unity......Page 616
4.2 Analysis and Results......Page 617
5 Conclusions......Page 619
References......Page 620
1 Introduction......Page 621
2 Related Work......Page 622
3 High Level System Architecture......Page 623
4 Logistic Planner......Page 625
5 Logistic World Model......Page 627
6 Conclusion......Page 630
References......Page 631
1 Introduction......Page 633
2 Human Action Validation......Page 635
3 Human Intention Recognition......Page 639
4 Simulation Results......Page 641
5 Conclusion......Page 643
References......Page 644
Robotics and Cyber-Physical Systems for Industry 4.0 (II)......Page 645
1 Introduction......Page 646
3 System Architecture......Page 647
3.2 Rviz......Page 648
3.4 MoveIt!......Page 649
3.5 Camera Calibration......Page 650
4 Path Planning......Page 651
5 Results......Page 652
6 Conclusion and Future Work......Page 655
References......Page 656
1 Introduction......Page 658
2 Related Work......Page 659
2.2 Trajectory......Page 660
3.2 Communication Between the Simulator and the Application......Page 661
4.1 Connectivity Grid......Page 663
4.2 Robot Controller......Page 664
4.3 Trajectories Defined by Splines......Page 665
5.1 Result of Trajectory Spline......Page 666
5.2 Result of the Planning Algorithm Applying a Grid of Connectivity......Page 667
References......Page 669
1 Introduction......Page 671
2 Related Work......Page 672
3 System Architecture and Heuristic-Based Approach......Page 673
4 Optimisation Methodologies......Page 675
4.1 Linear Scanning......Page 676
4.3 Simulated Annealing......Page 677
4.4 Potential Gradient......Page 678
5 Validation......Page 679
References......Page 681
1 Introduction......Page 683
2 Proposed Solution Concept......Page 684
3 Related Work......Page 686
4 Problem Description......Page 687
5.2 Robot Trajectory Generation System......Page 688
6 Tests and Results......Page 689
6.2 Robot Trajectory Generation System......Page 690
References......Page 693
Manipulation......Page 695
1 Introduction......Page 696
3 Vizzy's Hand Design......Page 698
4 Experiment Setup......Page 700
5 Results......Page 701
6 Conclusions and Future Work......Page 703
References......Page 704
1 Introduction......Page 705
2 Theoretical Development of the Kinematic Estimator......Page 707
2.1 Fundamentals of the Estimator......Page 708
2.2 Modelling of Flexible Links......Page 709
3.1 The Delta Robot Description......Page 712
3.2 Simulation Setup......Page 713
4 Conclusions......Page 715
References......Page 716
1 Introduction......Page 717
2 Problem Description......Page 718
3 Implementation of the Particle Filter......Page 719
3.2 Weight Update: Measurement Model......Page 721
4 Experimental Setup......Page 723
4.1 Results......Page 724
5 Conclusions......Page 728
References......Page 729
Legged Locomotion Robots......Page 730
1 Introduction......Page 731
2 Design of the Robot and Its Legs......Page 733
3.2 New Gait Patterns......Page 734
4.1 Trial 1......Page 736
4.2 Trial 2......Page 738
4.4 Trial 4......Page 740
5 Conclusions......Page 741
References......Page 742
1 Introduction......Page 743
2 Related Work......Page 744
3 Step Generator Architecture......Page 745
3.1 3D-LIPM and ZMP......Page 746
3.2 CPG Control Architecture......Page 747
4.1 Trajectories Generator......Page 749
4.2 Push Recovery Controller......Page 750
5 Optimization of Walk Engine Parameters......Page 751
5.2 Omnidirectional Scenario......Page 752
6 Conclusion......Page 753
References......Page 754
1 Introduction......Page 756
2.1 Body Subsystems......Page 757
2.2 Leg Subsystems......Page 758
2.3 Foot-Ground Subsystems......Page 759
3 Trajectory Planning......Page 760
4 Control System and Tuning......Page 761
5.1 Performance Indices......Page 762
5.2 Results......Page 763
References......Page 766
Communication-Aware Robotics (I)......Page 768
1 Introduction......Page 769
2 State of the Art......Page 770
3 Bluetooth Devices and Framework......Page 771
4 Perimeter Division Protocol Based on Coordination Variables......Page 772
5.1 Simulation Results......Page 774
5.2 Real Experiment......Page 776
6 Conclusions......Page 777
References......Page 778
1 Introduction......Page 780
2 Related Work......Page 781
3 Communication Architecture......Page 782
3.1 Data Compression......Page 784
3.3 User Interface......Page 785
4 Case Studies......Page 786
4.2 Reliability Based QoS......Page 787
5 Conclusions......Page 789
References......Page 790
1 Introduction......Page 792
2 Related Work......Page 794
3 Problem Setup......Page 795
4 Dynamic Communication Area......Page 796
5 Trajectory Planning with Limited Communication Time......Page 797
6 Simulations and Discussion......Page 800
References......Page 803
1 Introduction......Page 805
2 Related Work......Page 806
3 Problem Statement......Page 808
4 Optimal Relay Placement......Page 809
5.3 TDMA Layer -- Transmission Shaper......Page 811
6 Experimental Results......Page 812
6.2 Packet Delivery Ratio......Page 813
6.3 Throughput......Page 814
7 Conclusion......Page 815
References......Page 816
Communication-Aware Robotics (II)......Page 818
1 Introduction......Page 819
2 Fundamentals of Electromagnetic Propagation in Tunnels......Page 820
3 Discrete Localization Algorithm Based on RF Minimum Detection......Page 821
3.1 Minimums Map Generation......Page 822
3.2 Prediction and Estimation Using RLS and Mahalanobis Distance......Page 824
4.1 Scenario and Experimental Setup......Page 825
4.2 Localization Algorithm Implementation......Page 826
4.3 Preliminary Results......Page 828
References......Page 829
1 Introduction......Page 831
2 Related Work......Page 832
3 Signal Propagation and Nodes Deployment......Page 833
3.2 Measurement Setup......Page 834
3.3 Results of the Experiment......Page 836
4 A Proof of Concept......Page 838
4.2 Compressing the Data......Page 840
6 Conclusions......Page 841
References......Page 842
1 Introduction......Page 845
2 Radio Wave Propagation in Tunnels......Page 848
3 Experimental Setup......Page 849
4 Simulations and Experimental Results......Page 850
4.1 Measured Received Power......Page 851
4.2 Wireless Network Full Coverage Test......Page 853
4.4 Video Streaming......Page 854
5 Conclusions and Future Work......Page 855
References......Page 856
Author Index......Page 857

Advances in Intelligent Systems and Computing 694

Anibal Ollero · Alberto Sanfeliu · Luis Montano · Nuno Lau · Carlos Cardeira, Editors

ROBOT 2017: Third Iberian Robotics Conference Volume 2

Advances in Intelligent Systems and Computing Volume 694

Series editor Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland e-mail: [email protected]

About this Series

The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing. The publications within “Advances in Intelligent Systems and Computing” are primarily textbooks and proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results.

Advisory Board

Chairman
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India, e-mail: [email protected]

Members
Rafael Bello Perez, Universidad Central “Marta Abreu” de Las Villas, Santa Clara, Cuba, e-mail: [email protected]
Emilio S. Corchado, University of Salamanca, Salamanca, Spain, e-mail: [email protected]
Hani Hagras, University of Essex, Colchester, UK, e-mail: [email protected]
László T. Kóczy, Széchenyi István University, Győr, Hungary, e-mail: [email protected]
Vladik Kreinovich, University of Texas at El Paso, El Paso, USA, e-mail: [email protected]
Chin-Teng Lin, National Chiao Tung University, Hsinchu, Taiwan, e-mail: [email protected]
Jie Lu, University of Technology, Sydney, Australia, e-mail: [email protected]
Patricia Melin, Tijuana Institute of Technology, Tijuana, Mexico, e-mail: [email protected]
Nadia Nedjah, State University of Rio de Janeiro, Rio de Janeiro, Brazil, e-mail: [email protected]
Ngoc Thanh Nguyen, Wroclaw University of Technology, Wroclaw, Poland, e-mail: [email protected]
Jun Wang, The Chinese University of Hong Kong, Shatin, Hong Kong, e-mail: [email protected]

More information about this series at http://www.springer.com/series/11156

Anibal Ollero · Alberto Sanfeliu · Luis Montano · Nuno Lau · Carlos Cardeira



Editors

ROBOT 2017: Third Iberian Robotics Conference Volume 2


Editors

Anibal Ollero
Escuela Técnica Superior de Ingeniería, Universidad de Sevilla, Sevilla, Spain

Alberto Sanfeliu
Institut de Robòtica i Informàtica Industrial (CSIC-UPC), Universitat Politècnica de Catalunya, Barcelona, Spain

Luis Montano
Departamento de Informática e Ingeniería de Sistemas, Escuela de Ingeniería y Arquitectura, Instituto de Investigación en Ingeniería de Aragón, Zaragoza, Spain

Nuno Lau
Institute of Electronics and Telematics Engineering of Aveiro (IEETA), Universidade de Aveiro, Aveiro, Portugal

Carlos Cardeira
IDMEC, Instituto Superior Técnico de Lisboa, Universidade de Lisboa, Lisbon, Portugal

ISSN 2194-5357    ISSN 2194-5365 (electronic)
Advances in Intelligent Systems and Computing
ISBN 978-3-319-70835-5    ISBN 978-3-319-70836-2 (eBook)
https://doi.org/10.1007/978-3-319-70836-2
Library of Congress Control Number: 2017957985

© Springer International Publishing AG 2018

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by Springer Nature
The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

This book contains a selection of papers accepted for presentation and discussion at “Robot 2017: Third Iberian Robotics Conference,” held in Seville, Spain, November 22–24, 2017. Robot 2017 is part of a series of conferences jointly organized by the Sociedad Española para la Investigación y Desarrollo de la Robótica/Spanish Society for Research and Development in Robotics (SEIDROB) and the Sociedade Portuguesa de Robótica/Portuguese Society for Robotics (SPR). The conference was also organized in collaboration with several universities and research institutes, including the University of Seville, the Polytechnic University of Catalonia, the University of Zaragoza, the University of Aveiro, and the University of Lisbon. Robot 2017 builds upon several successful events, including three biennial workshops (Zaragoza–2007, Barcelona–2009, and Sevilla–2011) and two Iberian Robotics Conferences (Madrid–2013 and Lisbon–2015). The conference focuses on robotics scientific and technological activities in the Iberian Peninsula, although it is open to research contributions and delegates from other countries. Robot 2017 featured three plenary talks by:

– Oussama Khatib, Director of the Stanford Robotics Lab, Stanford University, USA, and President of the International Foundation of Robotics Research (IFRR)
– Dario Floreano, Director of the Laboratory of Intelligent Systems, EPFL, Switzerland, and Director of the Swiss National Center of Competence in Robotics, Switzerland
– Alin Albu-Schäffer, Head of the Institute of Robotics and Mechatronics at the German Aerospace Center (DLR), Germany.

Robot 2017 featured 27 Special Sessions, four of them with two slots in the conference program, plus five sessions coming from the General Track. The conference also had an industrial track with four sessions. The main purpose of this track is to present industrial needs and recent achievements in robotic industrial applications, seeking to promote new collaborations between industry and academia. Six papers of the industrial track have been included in this book.


The Special Sessions covered Aerial Robotics for Inspection, Agricultural Robotics and Field Automation, Autonomous Driving and Driver Assistance Systems, Challenges in Medical Robotics in the Frame of Industry 4.0, Cognitive Architectures, Communication-Aware Robotics, Cooperative and Active Perception for Robotics, Educational Robotics, Legged Locomotion Robots, Machine Learning in Robotics, Marine Robotics, Ontologies and Knowledge Representation for Robotics, Rehabilitation and Assistive Robotics, Robotics and Cyber-Physical Systems for Industry 4.0, Robotic and Unmanned Vehicles for Security, Robot Competitions, Robots Cooperating with Sensor Networks, Robots for Health Care, Sensor Technologies Oriented to Computer Vision Applications, Simulation in Robotics, Vision and Learning for Robotics, and Visual Perception for Robotics. Additionally, the four industrial Special Sessions were about Application of Robotics to Manufacturing Processes in the Aeronautic Industry, Application of Robotics to Shipbuilding, Integration of Drones in Low Altitude Aerial Space, and Robotics Solutions for Flexible Manufacturing. Finally, the sessions in the General Track covered the following topics: Aerial Robotics (double slot), Manipulation, Mobile Robotics, and Mobile Robotics Applications.

The Robot 2017 call for papers received 201 submissions. After a careful review process, with at least three independent reviews for each paper, 141 of them were selected for inclusion in this book. There are over 500 authors from 21 countries, including Australia, Brazil, Colombia, Croatia, Czech Republic, Denmark, Ecuador, Finland, France, Germany, Ireland, Italy, Luxembourg, Macao, Mexico, Poland, Portugal, Spain, United Arab Emirates, UK, and USA.

We would like to thank all Special Session organizers for their hard work in promoting their sessions, inviting the Program Committee, organizing the Special Session review process, and helping to promote the Robot 2017 Conference. This acknowledgment goes especially to the members of the Program Committee, the organizers of the Special Sessions, and the reviewers, whose hard work in preparing this volume was crucial for ensuring the high scientific quality of the event, and to all the authors and delegates whose research work and participation made this event a success. The work of the Local Organizing Committee was also crucial in producing the Robot 2017 program and this book. Last but not least, we acknowledge and thank our publisher, Springer, which was in charge of these proceedings, and in particular Dr. Thomas Ditzinger.

November 2017

Anibal Ollero Alberto Sanfeliu Luis Montano Nuno Lau Carlos Cardeira

Organization

General/Program Chairs

Anibal Ollero, University of Sevilla, Spain
Alberto Sanfeliu, Polytechnical University of Catalonia, Spain
Luis Montano, University of Zaragoza, Spain
Nuno Lau, University of Aveiro, Portugal
Carlos Cardeira, University of Lisbon-IST, Portugal

Organizing Committee

The following members of the Organizing Committee of Robot 2017 have contributed to the preparation of this book and its dissemination:

Alberto Sanfeliu, Polytechnical University of Catalonia, Spain
Ángel Rodríguez, University of Sevilla, Spain
Anibal Ollero, University of Sevilla, Spain
Arturo Torres, University of Sevilla, Spain
Carlos Cardeira, University of Lisbon-IST, Portugal
Carlos Rodríguez, University of Sevilla, Spain
Francisco Real, University of Sevilla, Spain
Iván Maza, University of Sevilla, Spain
Jesús Capitán, University of Sevilla, Spain
José Ángel Acosta, University of Sevilla, Spain
José Antonio Cobano, University of Sevilla, Spain
Luis Montano, University of Zaragoza, Spain
Nuno Lau, University of Aveiro, Portugal
Pablo Ramón, University of Sevilla, Spain
Pedro Sánchez, University of Sevilla, Spain


Robot 2017 Program Committee Agapito Ledezma Espino Alexandre Bernardino Angel Pasqual del Pobil Alfonso Garcia Cerezo Alícia Casals Anders Christensen Andry Pinto Ángel García Olaya Aníbal Matos Antidio Viguria Antonio Barrientos Antonio Jesús Bandera António Paulo Moreira António Pedro Aguiar Armando Jorge Sousa Artur Pereira Arturo Morgado Bernardo Cunha Begoña C. Arrue Brígida Mónica Faria Carlos Cerrada Carlos Vazquez Regueiro Carme Torras Cristina Santos Danilo Tardioli Estela Bicho Francisco J. Rodríguez Fernando Fernández Fernando Lasagni Fernando Ribeiro Filipe Santos Filomena Soares Guillermo Heredia Ismael García Varea Javier Civera Jesús Capitán João Calado João Sequeira Jon Agirre Ibarbia

University Carlos III Madrid University of Lisbon, Portugal University Jaume I University of Malaga, Spain Polytechnical University of Catalonia, Spain University Institute of Lisbon, Portugal University of Porto, Portugal University Carlos III Madrid, Spain University of Porto, Portugal FADA-CATEC, Spain CAR CSIC-UPM, Spain University of Málaga, Spain University of Porto, Portugal University of Porto, Portugal University of Porto, Portugal University of Aveiro, Portugal University of Cadiz, Spain University of Aveiro, Portugal University of Sevilla, Spain Polytechnic Institute of Porto, Portugal UNED, Spain University A Coruña, Spain Institut de Robotica i Informatica Industrial (CSIC-UPC), Spain University of Minho, Portugal University of Zaragoza, Spain University of Minho, Portugal University of Luxembourg, Luxembourg University Carlos III de Madrid, Spain FADA-CATEC, Spain University of Minho, Portugal University of Porto, Portugal University of Minho, Portugal University of Sevilla, Spain University of Castilla-La Mancha, Spain University of Zaragoza, Spain University of Sevilla, Spain Polytechnic Institute of Lisbon, Portugal University of Lisbon, Portugal TECNALIA, Spain


Jorge Dias Jorge Lobo Jorge Martins José Barata José A. Castellanos José L. Villarroel José Luis Azevedo José Luis Lima José Luis Pons José María Cañas Josep Amat Juan Andrade Juana Mayo Lino Marques Lluís Ribas Xirgo Luis Almeida Luis Correia Luis Merino Luis Moreno Luis Paulo Reis Manuel Armada Manuel Ferre Manuel Silva Micael Couceiro Miguel Oliveira Miguel A. Cazorla Nicolás García-Aracil Nuno Cruz Oscar Reinoso Pablo Bustos Pascual Campoy Paulo Costa Paulo Gonçalves Paulo Jorge Oliveira Pedro Costa Pedro Fonseca Pedro Lima Ramiro Martinez de Dios Raul Cano Raul Morais Raul Suarez Rodrigo Ventura Rui Araújo


University of Coimbra, Portugal University of Coimbra, Portugal University of Lisbon, Portugal University of Lisbon, Portugal University of Zaragoza, Spain University of Zaragoza, Spain University of Aveiro, Portugal Polytechnic Institute of Bragança, Portugal Instituto Cajal-CSIC, Spain University Rey Juan Carlos, Spain Polytechnical University of Catalonia, Spain Institut de Robotica i Informatica Industrial (CSIC-UPC), Spain University of Sevilla, Spain University of Coimbra, Portugal Autonomous University of Barcelona, Spain University of Porto, Portugal University of Lisbon, Portugal University Pablo Olavide, Spain University Carlos III de Madrid, Spain University of Minho, Portugal CAR CSIC-UPM, Spain CAR CSIC-UPM, Spain Polytechnic Institute of Porto, Portugal University of Coimbra, Portugal University of Porto, Portugal University of Alicante, Spain University of Miguel Hernández, Spain University of Porto, Portugal University of Miguel Hernández, Spain University of Extremadura, Spain Polytechnical University of Madrid, Spain University of Porto, Portugal Polytechnic Institute of Castelo Branco, Portugal University of Lisbon, Portugal University of Porto, Portugal University of Aveiro, Portugal University of Lisbon, Portugal University of Sevilla, Spain FADA-CATEC, Spain UTAD, Portugal Polytechnical University of Catalonia, Spain University of Lisbon, Portugal University of Coimbra, Portugal


Rui Rocha Thierry Keller Vicente Feliú Vicente Matellán Vitor Santos


University of Coimbra, Portugal TECNALIA, Spain University of Castilla-La Mancha, Spain University of León, Spain University of Aveiro, Portugal

Special Sessions’ Organizers Aerial Robotics for Inspection Guillermo Heredia

University of Seville, Spain

Agricultural Robotics and Field Automation Raul Morais Santos Filipe Neves dos Santos

UTAD, Portugal INESC-TEC, Portugal

Application of Robotics to Manufacturing Processes in the Aeronautic Industry Raúl Cano

FADA-CATEC, Spain

Application of Robotics to Shipbuilding Arturo Morgado

Universidad de Cádiz, Spain

Autonomous Driving and Driver Assistance Systems Vitor Santos Angel Sappa Miguel Oliveira Arturo de la Escalera

Universidade de Aveiro, Portugal CVC Barcelona and ESPOL Guayaquil, Spain INESC-TEC, Portugal Universidad Carlos III de Madrid, Spain

Challenges in Medical Robotics in the Frame of Industry 4.0 Josep Amat Alicia Casals

UPC, Spain UPC, Spain


Cognitive Architectures Vicente Matellán

Universidad de León, Spain

Communication Aware Robotics Danilo Tardioli Alejandro R. Mosteo Luis Riazuelo

CUD Zaragoza, Spain CUD Zaragoza, Spain Universidad de Zaragoza, Spain

Cooperative and Active Perception in Robotics Eduardo Silva Jesús Capitán Luis Merino

ISEP/IPP, Portugal Universidad de Sevilla, Spain Universidad Pablo de Olavide, Spain

Educational Robotics Armando Sousa Fernando Ribeiro

FEUP, Portugal Universidade do Minho, Portugal

Integration of Drones in Low Altitude Aerial Space Antidio Viguria

FADA-CATEC, Spain

Legged Locomotion Robots Manuel Silva Cristina Santos Manuel Armada

ISEP/IPP and INESC-TEC, Portugal University of Minho and Centro ALGORITMI, Portugal CSIC-UPM, Spain

Machine Learning in Robotics Guillem Alenyà Brígida Mónica Faria Luis Merino Carme Torras

Institut de Robotica Informatica Industrial (CSIC-UPC), Spain ESS-PP, Portugal UPO, Spain Institut de Robotica i Informatica Industrial (CSIC-UPC), Spain


Marine Robotics A. Pedro Aguiar António Pascoal João B. Sousa Fernando Lobo Pereira

FEUP, Portugal IST, Portugal FEUP, Portugal FEUP, Portugal

Ontologies and Knowledge Representation for Robotics Paulo Gonçalves Julita Bermejo-Alonso Ricardo Sanz

IPCB and IDMEC/Universidade de Lisboa, Portugal UPM, Spain Autonomous Systems Laboratory (ASLab), UPM, Spain

Rehabilitation and Assistive Robotics Luis Paulo Reis Brígida Mónica Faria

University of Minho, Portugal P.Porto, Portugal

Robotic and Unmanned Vehicles for Security A. García-Cerezo

Universidad de Málaga, Spain

Robotics and Cyber-Physical Systems for Industry 4.0 José Lima Germano Veiga Paulo Leitão

INESC-TEC, Portugal INESC-TEC, Portugal IPB, Portugal

Robotics Solutions for Flexible Manufacturing Jon Agirre Ibarbia

Tecnalia Research & Innovation, Spain

Robot Competitions Bernardo Cunha José Luis Azevedo

University of Aveiro, Portugal University of Aveiro, Portugal


Robots Cooperating with Sensor Networks J. Ramiro Martinez de Dios

University of Sevilla, Spain

Robots for Health Care Raquel Fuentetaja Pizán Angel García Olaya

Universidad Carlos III de Madrid, Spain Universidad Carlos III de Madrid, Spain

Sensor Technologies Oriented to Computer Vision Applications Carlos Cerrada

UNED, Spain

Simulation in Robotics Luis Paulo Reis Artur Pereira

University of Minho, Portugal University of Aveiro, Portugal

Vision and Learning for Robotics Miguel Cazorla Oscar Reinoso

Universidad de Alicante, Spain Universidad Miguel Hernández, Spain

Visual Perception for Robotics B.C. Arrue

University of Sevilla, Spain

Sessions Program Committee Aamir Ahmad Abdelghani Chibani Abdulla Al-Kaff Ahmed Hussein Alberto Olivares-Alarcos Alberto Sanfeliu Alejandro Linares-Barranco

Alejandro R. Mosteo Alejandro Suarez Alexandre Bernardino Alfonso García-Cerezo Alicia Casals Alicja Wasik Ana Lopes


Anais Garrell Anders Christensen Andre Dias Andrea Alessandretti Andry Pinto Angel Garcia Olaya Ángel Manuel Guerrero-Higueras Angela Ribeiro Angelos Amanatiadis Anthony Mandow Antidio Viguria Antonio Adan Antonio Agudo Antonio Bandera Antonio Barrientos Antonio Morales António Neves Antonio Paulo Coimbra António Paulo Moreira Antonio Valente Armando Sousa Artur Pereira Arturo Bertomeu-Motos Arturo Morgado Arturo Torres-González Augusto Tavares Begoña C. Arrue Belén Curto Benoit Landry Brígida Mónica Faria Bruno J.N. Guerreiro Camino Fernández Cándido Otero Carlos Cardeira Carlos Hernández Corbato Carlos Miguel Costa Carlos Rizzo Carlos Vázquez Regueiro Carme Torras Claudio Rossi Clauirton Siebra Craig Schlenoff Daniel Gutierrez Galan Daniel Silva Danilo Tardioli


David Alejo David Valiente Dimitrios Paraforos Domenec Puig Denes Nagy Duarte Valério Edson Prestes Eduardo Ferrera Eduardo Mendes Eduardo Molinos Vicente Enrique Valero Estela Bicho Eugenio Aguirre Molina Eugenio Marinetto Eurico Pedrosa Felipe Jimenez Fernando Caballero Fernando Fernández Fernando Garcia Fernando Gomez-Bravo Fernando Ribeiro Filipe Neves Dos Santos Filomena Soares Francisco Alarcón Francisco Bonnín Francisco Gomez-Donoso Francisco J. Rodríguez Lera Francisco Javier Badesa Francisco Madrigal Francisco Martín Rico Francisco Ramos Francisco Rodríguez-Sedano Frederic Lerasle Gary Philippe Cornelius Gil Lopes Guillermo Heredia Iñaki Diaz Ismael Garcia-Varea Ivan Armuelles Ivan Maza J.M. Aguilar Jaime Duque Domingo Javier Perez Turiel Jesús Balsa Jesus Capitan


Jesus Fernandez-Lozano Joao Calado Joao Fabro João Macedo João Sequeira Joel Carbonera Jon Agirre Ibarbia Jorge De León Rivas Jorge Díez Jorge Lobo Jorge Martins Jose A. Castellanos José Antonio Cobano José Barbosa José Carlos Rangel Jose Eugenio Naranjo Jose García Rodríguez Jose Javier Anaya Jose Joaquin Acevedo Jose Lima José Luis Villarroel José M. Cogollor José Manuel Cañas José María Armingol José María Cañas Plaza José María Catalán Orts Josenalde Oliveira Josep Amat Joshue Pérez Josu Larrañaga Leturia Juan Andrade Cetto Juan Barios Juan Braga Juan Jesús Roldán Gómez Juan Pedro Bandera Rubio Julián Estévez Julita Bermejo-Alonso Justyna Gerłowska Leopoldo Rodriguez Lluis Ribas Xirgo Luis Almeida Luis Correia Luis Costa Luis Evaristo Caraballo de La Cruz Luis Garrote


Luis Gomes Luis Merino Luis Montano Luis Moreno Luis Oliveira Luis Paulo Reis Luis Riazuelo Luis Rocha Manuel Armada Manuel Ferre Manuel Garcia Manuel Jesus Dominguez Morales Manuel Pérez Manuel Silva Marcelo Borges Nogueira Marcelo Petry Marco Ceccarelli Margarita Bachiller Martin Rico Francisco Micael Couceiro Miguel Ángel Conde Miguel Angelo Sotelo Miguel Cazorla Miguel Trujillo Mohammad Rouhani Natalia Ayuso Nicolas García Aracil Nicolas Jouandeau Nieves Pulido Noé Pérez-Higueras Nuno Cruz Nuno Lau Oscar Lima Oscar Reinoso Oscar Sipele P. Javier Herrera Pablo Bustos Pablo Jiménez Pablo Ramon Soria Paulo Costa Paulo Dias Paulo Gonçalves Paulo Menezes Paulo Oliveira Paulo Peixoto


Pedro Costa Pedro Fonseca Pedro Lima Pedro Neto Pedro Pedro Costa R. Praveen Kumar Jain Rafael Barea Ramiro Martinez de Dios Raquel Fuentetaja Raul Cano Raul Morais Santos Raul Suarez Ricardo Pascoal Ricardo Sanz Rita Cunha Robert Fitch


Roberto Arroyo Rui Araujo Rui Gomes Rui Rocha Sedat Dogru Stephen Balakirsky Tiago Nascimento Valter Costa Veera Ragavan Vicente Feliú Vicente Matellán Víctor Vaquero Vítor Carvalho Vitor Santos Yaroslav Marchukov

Contents

Robotic Solutions for Flexible Manufacturing

Full Production Plant Automation in Industry Using Cable Robotics with High Load Capacities and Position Accuracy . . . 3
David Culla, Jose Gorrotxategi, Mariola Rodríguez, Jean Baptiste Izard, Pierre Ellie Hervé, and Jesús Cañada

Human-Robot Collaboration and Safety Management for Logistics and Manipulation Tasks . . . 15
Gi Hyun Lim, Eurico Pedrosa, Filipe Amaral, Ricardo Dias, Artur Pereira, Nuno Lau, José Luís Azevedo, Bernardo Cunha, and Luis Paulo Reis

Grasp Quality Measures for Transferring Objects . . . 28
Fernando Soler, Abiud Rojas-de-Silva, and Raúl Suárez

Application of Robotics in Shipbuilding

Development of a Customized Interface for a Robotic Welding Application at Navantia Shipbuilding Company . . . 43
Pedro L. Galindo, Arturo Morgado-Estévez, José Luis Aparicio, Guillermo Bárcena, José Andrés Soto-Núñez, Pedro Chavera, and Francisco J. Abad Fraga

Towards Automated Welding in Big Shipbuilding Assisted by Programed Robotic Arm Using a Measuring Arm . . . 53
Arturo Morgado-Estevez, Pedro L. Galindo, Jose-Luis Aparicio-Rodriguez, Ignacio Diaz-Cano, Carlos Rioja-del-Rio, Jose A. Soto-Nuñez, Pedro Chavera, and Francisco J. Abad-Fraga


Cognitive Architectures

Cybersecurity in Autonomous Systems: Hardening ROS Using Encrypted Communications and Semantic Rules . . . 67
Jesús Balsa-Comerón, Ángel Manuel Guerrero-Higueras, Francisco Javier Rodríguez-Lera, Camino Fernández-Llamas, and Vicente Matellán-Olivera

Triaxial Sensor Calibration: A Prototype for Accelerometer and Gyroscope Calibration . . . 79
P. Bernal-Polo and H. Martínez-Barberá

Automatic Characterization of Phase Resetting Controllers for Quick Balance Recovery During Biped Locomotion . . . 91
Julián Cristiano, Domènec Puig, and Miguel Angel García

Interface Design of Haptic Feedback on Teleoperated System . . . . . . . . 102 Francisco Rodríguez-Sedano, Pere Ponsa, Pablo Blanco-Medina, and Luis Miguel Muñoz Machine Learning in Robotics Deep Networks for Human Visual Attention: A Hybrid Model Using Foveal Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 Ana Filipa Almeida, Rui Figueiredo, Alexandre Bernardino, and José Santos-Victor Mixed-Policy Asynchronous Deep Q-Learning . . . . . . . . . . . . . . . . . . . . 129 David Simões, Nuno Lau, and Luís Paulo Reis Reward-Weighted GMM and Its Application to Action-Selection in Robotized Shoe Dressing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 Adrià Colomé, Sergi Foix, Guillem Alenyà, and Carme Torras Pose Invariant Object Recognition Using a Bag of Words Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 Carlos M. Costa, Armando Sousa, and Germano Veiga Tactile Sensing and Machine Learning for Human and Object Recognition in Disaster Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165 Juan M. Gandarias, Jesús M. Gómez-de-Gabriel, and Alfonso J. García-Cerezo Robots Cooperating with Sensor Networks Autonomous Localization of Missing Items with Aerial Robots in an Aircraft Factory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 Julio L. Paneque, Arturo Torres-González, J. Ramiro Martínez-de Dios, Juan Ramón Astorga Ramírez, and Anibal Ollero


Wireless Sensor Networks for Urban Information Systems: Preliminary Results of Integration of an Electric Vehicle as a Mobile Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190 J.J. Fernández-Lozano, J.A. Gomez-Ruiz, Miguel Martín-Guzmán, Juan Martín-Ávila, Socarras Bertiz Carlos, and A. García-Cerezo Design of a Robot-Sensor Network Security Architecture for Monitoring Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200 Francisco J. Fernández-Jiménez and J. Ramiro Martínez-de Dios A Robust Reach Set MPC Scheme for Control of AUVs . . . . . . . . . . . . 213 Rui Gomes and Fernando Lobo Pereira Sensor Technologies Oriented to Computer Vision Applications Obtaining and Monitoring Warehouse 3D Models with Laser Scanner Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227 Antonio Adán, David de la Rubia, and Andrés S. Vázquez 3D Monitoring of Woody Crops Using a Medium-Sized Field Inspection Vehicle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239 José M. Bengochea-Guevara, Dionisio Andújar, Francisco L. Sanchez-Sardana, Karla Cantuña, and Angela Ribeiro A Vision-Based Strategy to Segment and Localize Ancient Symbols Written in Stone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251 Jaime Duque-Domingo, P. Javier Herrera, Carlos Cerrada, and José A. Cerrada Robot Competitions euRathlon and ERL Emergency: A Multi-domain Multi-robot Grand Challenge for Search and Rescue Robots . . . . . . . . . . . . . . . . . . 263 Alan F.T. Winfield, Marta Palau Franco, Bernd Brueggemann, Ayoze Castro, Gabriele Ferri, Fausto Ferreira, Xingcun Liu, Yvan Petillot, Juha Roning, Frank Schneider, Erik Stengler, Dario Sosa, and Antidio Viguria Autonomous Landing of a Multicopter on a Moving Platform Based on Vision Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 José Joaquín Acevedo, Manuel García, Antidio Viguria, Pablo Ramón, Begoña C. Arrue, and Anibal Ollero 3D Mapping for a Reliable Long-Term Navigation . . . . . . . . . . . . . . . . 283 Jonathan Ginés, Francisco Martín, Vicente Matellán, Francisco J. Lera, and Jesús Balsa


A Lightweight Navigation System for Mobile Robots . . . . . . . . . . . . . . . 295 M.T. Lázaro, G. Grisetti, L. Iocchi, J.P. Fentanes, and M. Hanheide Visual Perception for Robotics Bridge Mapping for Inspection Using an UAV Assisted by a Total Station . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309 Javier Prada Delgado, Pablo Ramon Soria, B.C. Arrue, and A. Ollero Multi-view Probabilistic Segmentation of Pome Fruit with a Low-Cost RGB-D Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320 Pablo Ramon Soria, Fouad Sukkar, Wolfram Martens, B.C. Arrue, and Robert Fitch Vision-Based Deflection Estimation in an Anthropomorphic, Compliant and Lightweight Dual Arm . . . . . . . . . . . . . . . . . . . . . . . . . . 332 Alejandro Suarez, Guillermo Heredia, and Anibal Ollero 3D Navigation for a Mobile Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345 Jan Škoda and Roman Barták Educational Robotics Robobo: The Next Generation of Educational Robot . . . . . . . . . . . . . . . 359 Francisco Bellas, Martin Naya, Gervasio Varela, Luis Llamas, Moises Bautista, Abraham Prieto, and Richard J. Duro The ROSIN Education Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370 Alexander Ferrein, Stefan Schiffer, and Stephan Kallweit Mobile Robots as a Tool to Teach First Year Engineering Electronics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382 Pedro Fonseca, Paulo Pedreiras, and Filipe Silva Methodology and Results on Teaching Maths Using Mobile Robots . . . 394 Paola Ferrarelli, Tamara Lapucci, and Luca Iocchi Autonomous Driving and Driver Assistance Systems (I) Application of Sideslip Estimation Architecture to a Formula Student Prototype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409 André Antunes, Carlos Cardeira, and Paulo Oliveira Torque Vectoring for a Formula Student Prototype . . . . . . . . . . . . . . . 422 João Antunes, Carlos Cardeira, and Paulo Oliveira Path and Velocity Trajectory Selection in an Anticipative Kinodynamic Motion Planner for Autonomous Driving . . . . . . . . . . . . . 434 Jordi Pérez Talamino and Alberto Sanfeliu


Deadzone-Quadratic Penalty Function for Predictive Extended Cruise Control with Experimental Validation . . . . . . . . . . . . . . . . . . . . 446 Seyed Amin Sajadi-Alamdari, Holger Voos, and Mohamed Darouach Autonomous Driving and Driver Assistance Systems (II) Comparative Study of Visual Odometry and SLAM Techniques . . . . . . 463 Ana Rita Gaspar, Alexandra Nunes, Andry Pinto, and Anibal Matos Real-Time Deep ConvNet-Based Vehicle Detection Using 3D-LIDAR Reflection Intensity Data . . . . . . . . . . . . . . . . . . . . . . 475 Alireza Asvadi, Luis Garrote, Cristiano Premebida, Paulo Peixoto, and Urbano J. Nunes Modeling Traffic Scenes for Intelligent Vehicles Using CNN-Based Detection and Orientation Estimation . . . . . . . . . . . . 487 Carlos Guindel, David Martín, and José María Armingol Complete ROS-based Architecture for Intelligent Vehicles . . . . . . . . . . . 499 Pablo Marin-Plaza, Ahmed Hussein, David Martin, and Arturo de la Escalera Challenges in Medical Robotics in the Frame of Industry 4.0 Health 4.0 Oriented to Non-surgical Treatment . . . . . . . . . . . . . . . . . . . 513 Carles Soler Collaborative Robots for Surgical Applications . . . . . . . . . . . . . . . . . . . 524 Álvaro Bertelsen, Davide Scorza, Camilo Cortés, Jon Oñativia, Álvaro Escudero, Emilio Sánchez, and Jorge Presa New Technologies in Surgery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536 Alícia Casals, Narcís Sayols, and Josep Amat Collaborative Robotic System for Hand-Assisted Laparoscopic Surgery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548 Carmen López-Casado, Enrique Bauzano, Irene Rivas-Blanco, Víctor F. Muñoz, and Juan C. Fraile Rehabilitation and Assistive Robotics Mechanical Design of a Novel Hand Exoskeleton Driven by Linear Actuators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557 Jorge A. Díez, Andrea Blanco, José M. Catalán, Arturo Bertomeu-Motos, Francisco J. Badesa, and Nicolás García-Aracil Robotic Platform with Visual Paradigm to Induce Motor Learning in Healthy Subjects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569 Guillermo Asín-Prieto, José E. González, José L. Pons, and Juan C. Moreno


A Protocol Generator Tool for Automatic In-Vitro HPV Robotic Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580 Juan Pedro Dominguez-Morales, Angel Jimenez-Fernandez, Saturnino Vicente-Diaz, Alejandro Linares-Barranco, Asuncion Olmo-Sevilla, and Antonio Fernandez-Enriquez Robotics and Cyber-Physical Systems for Industry 4.0 (I) End-Effector Precise Hand-Guiding for Collaborative Robots . . . . . . . . 595 Mohammad Safeea, Richard Bearee, and Pedro Neto Integrating 3D Reconstruction and Virtual Reality: A New Approach for Immersive Teleoperation . . . . . . . . . . . . . . . . . . . 606 Francisco Navarro, Javier Fdez, Mario Garzón, Juan Jesús Roldán, and Antonio Barrientos Enhancement of Industrial Logistic Systems with Semantic 3D Representations for Mobile Manipulators . . . . . . . . . . . . . . . . . . . . . 617 César Toscano, Rafael Arrais, and Germano Veiga Human Intention Recognition in Flexible Robotized Warehouses Based on Markov Decision Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . 629 Tomislav Petković, Ivan Marković, and Ivan Petrović Robotics and Cyber-Physical Systems for Industry 4.0 (II) Dynamic Collision Avoidance System for a Manipulator Based on RGB-D Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643 Thadeu Brito, Jose Lima, Pedro Costa, and Luis Piardi Development of a Dynamic Path for a Toxic Substances Mapping Mobile Robot in Industry Environment . . . . . . . . . . . . . . . . . . . . . . . . . 655 Luis Piardi, José Lima, Paulo Costa, and Thadeu Brito Poses Optimisation Methodology for High Redundancy Robotic Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668 Pedro Tavares, Pedro Costa, Germano Veiga and António Paulo Moreira Offline Programming of Collision Free Trajectories for Palletizing Robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680 Ricardo Silva, Luís F. Rocha, Pedro Relvas, Pedro Costa, and Manuel F. Silva Manipulation Estimating Objects’ Weight in Precision Grips Using Skin-Like Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695 Francisco Azevedo, Joana Carmona, Tiago Paulino, and Plinio Moreno


Kinematic Estimator for Flexible Links in Parallel Robots . . . . 704
Pablo Bengoa, Asier Zubizarreta, Itziar Cabanes, Aitziber Mancisidor, and Charles Pinto

Tactile-Based In-Hand Object Pose Estimation . . . . 716
David Álvarez, Máximo A. Roa, and Luis Moreno

Legged Locomotion Robots

Study of Gait Patterns for an Hexapod Robot in Search and Rescue Tasks . . . . 731
Jorge De León, Mario Garzón, David Garzón-Ramos, and Antonio Barrientos

A Hybrid ZMP-CPG Based Walk Engine for Biped Robots . . . . 743
S. Mohammadreza Kasaei, David Simões, Nuno Lau, and Artur Pereira

Modelling, Trajectory Planning and Control of a Quadruped Robot Using Matlab®/Simulink™ . . . . 756
Italo Oliveira, Ramiro Barbosa, and Manuel Silva

Communication-Aware Robotics (I)

Cooperative Perimeter Surveillance Using Bluetooth Framework Under Communication Constraints . . . . 771
J.M. Aguilar, Pablo Ramon Soria, B.C. Arrue, and A. Ollero

Development of an Adaptable Communication Layer with QoS Capabilities for a Multi-Robot System . . . . 782
Hannes Harms, Julian Schmiemann, Jan Schattenberg and Ludger Frerichs

Trajectory Planning Under Time-Constrained Communication . . . . 794
Yaroslav Marchukov and Luis Montano

Balancing Packet Delivery to Improve End-to-End Multi-hop Aerial Video Streaming . . . . 807
Luis Ramos Pinto, Luis Almeida, and Anthony Rowe

Communication-Aware Robotics (II)

Discrete Robot Localization in Tunnels . . . . 823
Teresa Seco, Carlos Rizzo, Jesús Espelosín, and José Luis Villarroel

Low-Bandwidth Telerobotics in Fading Scenarios . . . . 835
Samuel Barrios, Natalia Ayuso, Danilo Tardioli, Luis Riazuelo, Alejandro R. Mosteo, Francisco Lera, and José Luis Villarroel


Wireless Propagation Characterization of Underground Sewers Towards Autonomous Inspections with Drones . . . . 849
Carlos Rizzo, Pedro Cavestany, François Chataigner, Marcel Soler, German Moreno, Daniel Serrano, Francisco Lera, and Jose Luis Villarroel

Author Index . . . . 861

Robotic Solutions for Flexible Manufacturing

Full Production Plant Automation in Industry Using Cable Robotics with High Load Capacities and Position Accuracy

David Culla1, Jose Gorrotxategi1, Mariola Rodríguez1, Jean Baptiste Izard2, Pierre Ellie Hervé2, and Jesús Cañada3

1 Tecnalia Research and Innovation, Parque Científico y Tecnológico de Gipuzkoa, Mikeletegi Pasealekua 7, 20009 Guipúzcoa, Spain
[email protected]
2 Tecnalia Francia, CSU Building, Bât 6, 950 Rue Saint Priest, 34090 Montpellier, France
3 Cemvisa Vicinay, c/Zelaietabide, 1, 48210 Otxandio, Bizkaia, Spain

Abstract. The aim of this paper is to introduce an innovative machine that results from combining cable-suspended robot technology, based on parallel kinematics, with a traditional gantry crane. This machine has been named Cablecrane. Its purpose is to keep the same load capabilities as traditional gantry cranes while enabling full control of the 6 degrees of freedom (DOF) of the payload. This means that the payload is fully controlled in position and orientation while it is being manipulated. Thus, precise load handling and movement without oscillations are possible in any direction and in any orientation. In addition, combined with appropriate calibration, sensors and an integrated CNC controller, most manipulation tasks in the plant can be programmed and automated. The final result is an increase in production, full plant automation and enhanced plant safety. Keywords: Cablecrane · Industrial cable robot · Cable robotics · Parallel kinematics · Plant automation · Factory automation

1 Introduction to Large Scale Full Plant Automation

In any industrial factory or plant, the handling and assembly of large and heavy components or machines are two of the most demanding tasks that have to be carried out. These tasks do not only involve moving objects from one position to another, but also relatively precise positioning of the objects in particular spots or locations. The correct placement of an object involves both spatial position and orientation, that is, angular position. This means that three different geometrical coordinates and three different angles have to be considered. Due to the physical characteristics of the often very large and heavy objects to be handled, substantial technical skill or dexterity is often required from the operators for the correct adjustment of the six degrees of freedom (DOF). In addition, on many occasions fully automated movement is also desirable.

Currently, small scale automation activities in the plant are carried out by anthropomorphic robots distributed in cells. Recently, robots have also been successfully used in large gantries [9]. However, due to limitations such as the reduced workspace of commercial robots, or the high cost of larger gantry robots, large scale automation of heavy loads with full load control remains out of reach. Manual manipulation is still done with traditional cranes while the operator takes care not to swivel the load. Thus, at least two different issues have to be dealt with in the context of handling and assembly:

• The importance of sufficient hoist capacity and adequate positioning. This is clear in the case of, for example, the transport and assembly of wind turbines, airplanes, machine tools, heavy structures for buildings and heavy equipment, among others.
• Automation of operations throughout the production plant.

The Cablecrane system (Fig. 1) makes large scale automation possible along the plant by combining parallel cable-driven robotics embedded in a traditional gantry crane. This provides a cost-effective solution, automated and manually controlled, for most of the handling and production processes in the plant, with full load position and orientation control.

Fig. 1. Cablecrane system installed in a factory plant

The impact on working conditions is highly positive, as the system guarantees safe handling, reduces setup time after movements, increases productivity by 50%, and reduces both cycle time and work accidents.

2 System Description

The Cablecrane system has three clearly identifiable parts (Fig. 2):

• Cable-driven parallel robot (CDPR).
• Automated gantry crane.
• Cable-driven mobile platform.

Fig. 2. Main components of Cablecrane

Figure 3 shows the Cablecrane system assembled on the floor before installation in the production facility. Commissioning will be carried out by December 2017.

Fig. 3. Cablecrane main gantry crane and motors

2.1 CDPR, Cable Robot

This sub-system is formed by 8 servomotors installed in the gantry crane. Each servomotor controls the length of a cable that pulls the cable-driven platform at its end. The length that each motor reels in or releases is synchronized with the rest of the motors according to the parallel kinematics design. Therefore, by controlling the cable lengths and solving the kinematics, the desired position of the cable-driven platform within its workspace in the plant is reached at any time.
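As a rough illustration of this parallel-kinematics relation, the Python sketch below computes the straight-line length of each of the 8 cables for a given platform pose as the distance between the cable exit point on the crane and the corresponding attachment point on the platform. All coordinates, names and the yaw-only orientation are illustrative assumptions, not the actual Cablecrane geometry or controller; the pulley arc and cable elasticity discussed later are ignored here.

import numpy as np

def rotation_z(yaw):
    # Rotation about the z axis only (a full implementation would use the
    # complete roll-pitch-yaw orientation of the platform).
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def cable_lengths(platform_pos, platform_yaw, exit_points, attach_points):
    # exit_points   : (8, 3) cable exit points on the gantry, crane frame
    # attach_points : (8, 3) attachment points, platform frame
    # Returns the 8 straight-line cable lengths.
    R = rotation_z(platform_yaw)
    lengths = []
    for a, b in zip(exit_points, attach_points):
        world_attach = platform_pos + R @ b   # attachment point in crane frame
        lengths.append(np.linalg.norm(a - world_attach))
    return np.array(lengths)

# Illustrative geometry (not the real Cablecrane dimensions)
exit_pts = np.array([[x, y, 4.0] for x in (0.0, 8.0) for y in (0.0, 3.2)] * 2)
attach_pts = np.array([[x, y, z] for x in (-0.4, 0.4) for y in (-0.3, 0.3) for z in (0.0, 0.5)])
print(cable_lengths(np.array([4.0, 1.6, 1.0]), 0.1, exit_pts, attach_pts))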


Besides, there is a set of pulleys (Fig. 4) that direct each cable from the drum towards an exit point, from which it connects with the attachment point on the platform. The coordinates of these last two points, as well as the way they are connected, have been defined and optimized in order to maximize load capacities, increase workspace size and decrease interferences between cables, platforms, loads and crane, following optimization procedures [1, 3].

Fig. 4. Pulleys used to direct cables. Cable routing is A-B-C-D.

Fig. 5. Winch view

It is also important to minimize the number of pulleys while avoiding singularities between pulleys and cables. On the one hand, it is the cable, as it changes length, that pulls and orients each pulley; a low amount of contact between cable and pulley therefore produces sudden movements and vibrations, because the pulley does not follow the cable orientation smoothly. On the other hand, a high number of pulleys reduces the life of the cable. Industrial design practice has been applied to the cable, drum and pulley design, following ISO 4301/FEM 1001 and ISO 16625/FEM 1001.


2.2 Automated Gantry Crane

The gantry crane has two functions:

• Act as a mobile supporting frame of the cable robot, displacing it to any place and thus extending the workspace all over the plant.
• Withstand most of the payload, as any traditional crane.

These two functions are achieved, respectively, thanks to:

• All motors present in the crane are servomotors. Additional sensors are installed in order to control the absolute position of the crane in the plant: lasers are used to control the global position of the crane along the plant, and inductive sensors close to the crane wheels are used to keep the crane orientation parallel to the rails used for displacement.
• Installation of a secondary gantry embedded in the main gantry (Fig. 6). This secondary gantry moves an additional winch that drives its own cable. This 9th cable is used to withstand most of the load, as in traditional cranes. Its innovation is that this cable is force controlled. Thus, the load is distributed among the cables in such a way that almost all of it is held by the vertical cable, and the rest of the load is left in order to keep the minimum required tension in the cables that form the cable robot. This allows them to keep the platform controlled while it is being held in the most efficient way.
• The pull of the 9th cable must always be vertical with respect to the platform to keep it balanced in combination with the rest of the cables. In order to do so, the secondary gantry installed in the main gantry has its own rails, servomotors and sensors used to follow the planar (XY) position of the platform (Fig. 7). These allow for relative x and y axes movement with respect to the main gantry crane.

Fig. 6. Vertical cable can be used when payloads are high.


Fig. 7. Secondary gantry is used to follow platform's XY position.

2.3 Cable Driven Platform

The cable-driven mobile platform (Fig. 8) acts in a factory plant as a large scale, fully automated end-effector tool. Its design, based on parallel kinematics, allows the installation of different kinds of grippers/manipulators on its lower part. Cables are attached using industrial elevation rings so that they can be freely oriented while the lengths change.

Fig. 8. View of the platform (CAD and pre-assembly).

The end effector of Cablecrane is characterized by:

• Large workspaces covered in a fully automated manner.
• High load capacity.
• Accurate positioning of any kind of tooling along a wide workspace while withstanding external loads.
• Full control: it can manage 6 degrees of freedom.
• High productivity: its response time is low and it can move around at high speed.
• Versatile and multi-tasking.
• Low maintenance requirements.
• Low space usage: winches are installed in the gantry crane and the robot architecture does not need floor space to be stored.
• No swinging of the load: parts are firmly held by 8 cables coming from different directions.

3 CDPR Definition Methodology

3.1 Workspace Definition

One key issue for the definition of the CDPR in Cablecrane is to define the coordinates where the cables must be attached to the platform and where the pulleys have to be placed. This is not an easy task, especially if the workspace, and thus the achievable turns, are to be maximized. These coordinates are required to verify and compute the main limitations to be considered [1, 2] in the design:

• Cable tension values must always be positive and smaller than a maximum allowable value.
• Cable interferences must be avoided. These interferences can be between cables and/or objects such as the platform, pulleys, gantry crane, etc.
• Cable tension must compensate external loads within a given range [4].
• Cable mass and elongation affect the feasible workspace by modifying the platform balance [10–12].

Traditional CAD systems, although necessary in the design, are not enough by themselves to produce an optimal design in an effective manner, as they do not consider solving force systems with the conditions present in cable robotics. This is required because each of the positions that the platform will reach in space needs to be verified in terms of collisions but also in terms of force balances. Introducing mathematical interference calculation and force balance solving in a script tool makes the analysis of the feasible workspace for a given set of coordinates much faster (Figs. 9 and 10); a minimal sketch of such a tension-feasibility check is given below. Besides, the coordinate definition of each cable exit point in the pulleys and connecting point on the platform is usually submitted to a mathematical optimization [3] whose target can be to minimize cable tensions, motor consumption, or any other criteria. Therefore, it is required to develop optimization software tools in order to improve the design. Tecnalia has developed its own tools over the last years. In the case of the Cablecrane design, the tools have been used to maximize workspace size and turns around the z axis while keeping good turn amplitude around the x and y axes.
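The tension-feasibility check can be sketched as follows: at a candidate platform pose, solve the static equilibrium for the 8 cable tensions subject to positive lower and upper bounds, and accept the pose only if the residual wrench is negligible. The wrench matrix construction, bounds and tolerance below are illustrative assumptions, not the actual Tecnalia tool; cable mass, elasticity and collision checks are omitted.

import numpy as np
from scipy.optimize import lsq_linear

def tension_feasible(exit_pts, attach_pts_world, platform_com, mass,
                     t_min=500.0, t_max=20000.0, g=9.81, tol=1e-3):
    # exit_pts, attach_pts_world : (8, 3) arrays in the crane frame
    # platform_com               : centre of mass of platform + payload
    # Returns (feasible, tensions). Bounds and tolerance are placeholders.
    directions = exit_pts - attach_pts_world
    u = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    arms = attach_pts_world - platform_com
    W = np.vstack([u.T, np.cross(arms, u).T])            # 6 x 8 wrench matrix
    w_ext = np.array([0.0, 0.0, -mass * g, 0.0, 0.0, 0.0])  # gravity wrench
    res = lsq_linear(W, -w_ext, bounds=(t_min, t_max))   # solve W t + w_ext = 0
    feasible = np.linalg.norm(W @ res.x + w_ext) < tol * mass * g
    return feasible, res.x

A workspace analysis script would simply loop this check (plus the collision tests) over a grid of platform positions and orientations, keeping the poses for which a bounded positive tension solution exists.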


Fig. 9. Valid workspace considers cable collision points and positive cable tension solution.

Fig. 10. Cable tensions are calculated at each point, negative values are discarded.

3.2 Mechanical Design Calculation

From a classic mechanical design point of view, special attention needs to be paid to the mechanical design of the platform and the gantry crane. In fact, for every position and orientation in the workspace:

Full Production Plant Automation in Industry

11

• Loads are changing continuously in direction, which makes it complicated to determine the most critical design cases.
• Structures need to be stiffened to keep the cable exit points with the smallest deformations possible, in order to guarantee the best positioning precision of the platform.

The classic approach is to use finite element method (FEM) software to solve and verify the most critical load cases. However, the most critical load cases first have to be defined. With this purpose, the stiffness matrix of the CDPR was first used as input to a secondary tool that automatically pre-calculated deformations and tensions for each point and orientation of the workspace, considering the cable force application directions and values at each of these workspace points. Once these critical points were sorted and identified, FEM was used again for the final mechanical design.

4 Feasible Workspace

The resulting design features a cable-collision-free workspace of 8 × 3.2 × 2 m with a standard payload of up to 3000 kg (Fig. 11). Turn capacities are 40°, 20° and 90° around the x, y and z axes, respectively. The total volume required by the crane is 9 × 7.5 × 4.5 m.

Fig. 11. Available workspace.

Since all the servomotors are installed in the gantry crane, these workspace capabilities are extendable all along the reach of the gantry crane in the plant.

5 Control System Improvement

Achieving a fully automated and precise control system has required implementing the main modifications described in the following.


First, the total cable length from each winch to the platform attachment point needs to be precisely measured. Calibration is required to determine the pulley positions, orientations and final cable routing, which determine the total length of cable for each platform position. The total cable length is then used to compute the cable elongation and correct the amount of cable to be reeled in or reeled out by each winch. Obviously, cable stress-deformation tests (ISO 12076) have been carried out previously to obtain the elongation model. The best fit between strain ε and tensile force F has been found to be an offset logarithmic squared curve of the form ε = a·ln²(F − F₀) + b (Fig. 12).

Fig. 12. Stress-elongation analysis for the cables used in Cablecrane
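A minimal sketch of how a fitted elongation law of this form can be used to correct the reeled cable length is shown below; the coefficients a, b and F0 are placeholder values, not the ones identified in the ISO 12076 tests.

import numpy as np

# Illustrative coefficients of eps(%) = a * ln(F - F0)**2 + b, with F in kN.
A_FIT, B_FIT, F0_FIT = 0.004, 0.05, -1.0

def strain_percent(force_kN):
    # Elongation (in %) predicted by the offset squared-logarithmic model.
    return A_FIT * np.log(force_kN - F0_FIT) ** 2 + B_FIT

def reel_length(geometric_length, force_kN):
    # Unstretched cable length to reel out so that, once under the given
    # tension, the stretched cable matches the geometric (kinematic) length.
    return geometric_length / (1.0 + strain_percent(force_kN) / 100.0)

print(reel_length(10.0, 8.0))   # e.g. 10 m of kinematic length at 8 kN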

It is also necessary to take into account the swiveling angle of the pulleys, that is, the change in orientation but also the length of the arc of cable in contact with the pulley as the cable passes through it. Details of the model implementation have been developed in [13]. The improvement in the position error due to the implementation of the elasticity and pulley models was assessed by measuring several positions along the z axis. Figure 13 shows the impact on the precision: from an original error of 25 cm, the error drops to a few millimeters (max 6 mm, mean error 2.5 mm) over the full height.

Fig. 13. Error measures in Z versus desired height, in mm.

Another important modification with respect to a traditional crane is that all axes in the crane are driven by servomotors, which allows controlling the position of the crane at all times. For that purpose, distance sensors have been installed in order to maximize placement precision and correct guiding through the plant. In addition, a CNC interpreter has also been included as part of the standard control features, which allows the whole system to behave automatically as any other machine tool. Finally, cable robotics requires specific control strategies [5] that consider dual-space (joint and Cartesian space) feed-forward loops. These loops consider additional inertia forces in the platform and joints in order to get the smoothest and best calculated cable tension transition from one position to another.

6 Multi-task Orientated Platform

The Cablecrane architecture opens the door to automation in multiple sectors and activities due to its highly flexible solution for large spaces [6–8]. These tasks can have direct application in the naval, construction, aeronautics, nuclear, civil engineering, logistics and material handling industries, i.e.:

• Production of large and heavy metallic parts and structures, notably involving welding, sand-blasting, painting, stripping, inspection and deconstruction.
• Manipulation, assembly and maintenance of large parts and systems in a precise way (±2 mm) and with complete control of part orientation (6 DOF).
• Automated logistic operations: quick pallet storage in automated mode at high speeds, with unmanned or manual operation and collision control.
• Inspection, monitoring and maintenance: fast movement along large spaces with platforms and/or cameras, incorporating specific tooling and repairing materials. It can perform operations in highly risky confined spaces.
• Operations on big surfaces: painting and welding.
• 3D printing of large scale construction parts and small scale buildings.

Acknowledgement. This project has received funding from the Spanish Ministry of Economy, Industry and Competitiveness's RETOS research and innovation program under grant agreement No RTC-2015-4244-8 - CABLECRANE: "Design, manufacturing and validation of the servocontrolled gantry crane that integrates a cable-driven robot for fully controlled handling and assembling of heavy and high added value parts in large workspaces".

References

1. Riehl, N.: Modélisation et conception de robots parallèles à câbles de grande dimension. Ph.D. thesis. Université Montpellier II, Montpellier (2011)
2. Merlet, J.-P.: Kinematics of the wire-driven parallel robot MARIONET using linear actuators. In: IEEE International Conference on Robotics and Automation, Pasadena, CA, USA (2008)
3. Gouttefarde, M., Collard, J., Riehl, N., Baradat, C.: Geometry selection of a redundantly actuated cable-suspended parallel robot. IEEE Trans. Robot. 31(2), 501–510 (2015)
4. Gouttefarde, M., Gosselin, M.: Analysis of the wrench-closure workspace of planar parallel cable-driven mechanisms. IEEE Trans. Robot. 22(3), 434–445 (2006)


5. Lamaury, J., Gouttefarde, M., Chemori, A., Hervé, P.: Dual-space adaptive control of redundantly actuated cable-driven parallel robots. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo (2013)
6. CoGiRo Project. http://www2.lirmm.fr/cogiro/. Accessed Jun 2017
7. Tecnalia, Cable Driven Parallel Robotics for industrial applications. https://youtu.be/px8vwNerkuo. Accessed Jun 2017
8. Tecnalia, COGIRO: Cable Robot for agile operation in large workspaces. https://youtu.be/6RjBKvoc6N4. Accessed Jun 2017
9. Megarob, development of flexible, sustainable and automated platform for high accuracy manufacturing operations in medium and large complex components using spherical robot and laser tracker on overhead crane. http://www.megarob.eu/. Accessed Jun 2017
10. Riehl, N., Gouttefarde, M., Krut, S., Baradat, C., Pierrot, F.: Effects of non-negligible cable mass on the static behavior of large workspace cable-driven parallel mechanisms. In: IEEE International Conference on Robotics and Automation, pp. 2193–2198 (2009)
11. Riehl, N., Gouttefarde, M., Baradat, C., Pierrot, F.: On the determination of cable characteristics for large dimension cable-driven parallel mechanisms. In: IEEE International Conference on Robotics and Automation, pp. 4709–4714 (2010)
12. Kozak, K., Zhou, Q., Wang, J.: Static analysis of cable-driven manipulators with non-negligible cable mass. IEEE Trans. Robot. 22(3), 425–433 (2006)
13. Nguyen, D.-Q., Gouttefarde, M., Company, O., Pierrot, F.: On the analysis of large-dimension reconfigurable suspended cable-driven parallel robots. In: IEEE International Conference on Robotics and Automation, Hong Kong (2014)

Human-Robot Collaboration and Safety Management for Logistics and Manipulation Tasks

Gi Hyun Lim, Eurico Pedrosa, Filipe Amaral, Ricardo Dias, Artur Pereira, Nuno Lau, José Luís Azevedo, Bernardo Cunha, and Luis Paulo Reis

IEETA, Universidade de Aveiro, Aveiro, Portugal
{lim,efp,f.amaral,ricardodias,artur,nunolau}@ua.pt, [email protected], [email protected]

Abstract. To realize human-robot collaboration in manufacturing, industrial robots need to share an environment with humans and to work hand in hand. This introduces safety concerns but also provides the opportunity to take advantage of human-robot interactions to control the robot. The main objective of this work is to provide HRI without compromising safety issues in a realistic industrial context. In the paper, a region-based filtering and reasoning method for safety has been developed and integrated into a human-robot collaboration system. The proposed method has been successfully demonstrated keeping safety during the showcase evaluation of the European robotics challenges with a real mobile manipulator.

1 Introduction

Human-robot collaboration will play a key role in developing efficient solutions for small and medium-sized enterprises (SMEs), whose environments are typically less structured and demand higher flexibility than those of large-scale or mass-production industries [15]. This paradigm tackles the challenge of the safety of human operators [12]. By ensuring safety, an industrial robot can increase autonomy in a production line beyond a repetitive and low-intelligence paradigm. Up to the present, workspace sharing by human operators and robots demands a redesign of working environments with physical safety solutions such as a safety zone. This paper proposes a solution applicable not only in the packaging industry, but also in other industries, to increase human-robot collaboration while keeping safety.

Current packaging machine vendors do not provide any automated mechanism for the task of feeding blank piles, and the state of the art is to have a human operator dedicated to feeding blank piles to a packaging machine, as shown in Fig. 1. This is a tedious and repetitive task, and human operators may occasionally refrain from collecting blank piles from more distant pallets.

Fig. 1. Packaging environment: (a) human operator feeds a blank pile; (b) manipulation in a safety zone.

If we consider only the task of removing blank piles from a pallet, the industrial state of the art already provides robotised solutions. However, these solutions follow a completely different paradigm, as the robot operates in a very structured safety zone [2], where several laser scanners and physical fences forbid the entrance of humans. These robots cannot cope with small errors in the positioning of the objects [10]. Furthermore, the robot uses a very complex and expensive gripper with several degrees of freedom.

The EuRoC project [14] has launched industry-relevant challenges to exploit synergies among all the actors of the value chain in manufacturing and servicing. In particular, Challenge 2 (C2), named Shop Floor Logistics and Manipulation, addresses the Logistics and Robotic Co-Workers scenarios of the SRA2009 [4] in small batch manufacturing. The work described in this paper was used in the Showcase task of the EuRoC project by team TIMAIRIS. To complete the three challenges of the showcase task, the proposed solution addresses several technical issues, as described below.

It uses a skill-based anytime agent architecture [1] that has evolved from the one used for the Stage I simulation tasks [13]. The separation of task-dependent representations from a generic agent processing algorithm allows the robot to start at any task state. Using this feature, human collaborators or robot operators may repeat a task after recovering from a failure or test a skill from a given starting point. To develop the different skills, several motion planners have been tested, and algorithms have been devised to increase the robustness of the motion planner and obtain feasible and usable motions that respect the task constraints. The planners have been compared using four metrics: path length, joint variations, number of replans and number of points [18].

The use of multimodal interaction is an important research topic and, to the best of our knowledge, there is no off-the-shelf solution [8] for recognizing gesture commands in environments such as the TIMAIRIS showcase setup. The proposed solution combines RGB and depth features, extracted through deep learning and hand boundary histograms, which are combined to obtain the best recognition rate. By encoding a fixed number of symbols into interaction graphs, the system can represent any number of commands through simple edition of the graph [9].

The detection of the position of the pile of blanks is performed using a filtered edge-detection image, where the blank edges are preserved while the edges of the printed patterns are removed. This image is then used for spiral template matching against a model of the blank edge, scaled considering the distance between the camera and the blank piles. This approach proved to be robust to very different kinds of blank printed patterns and also to very different lighting conditions. During some tests, the decision on the top printed pattern to be used in each of the piles was performed by an external user, and all blank piles were detected without failure. Other perception tasks also use a combination of depth and RGB images, always promoting the robustness of the approach.

Fig. 2. Showcase environment: (a) simulated workspace for autonomous packaging; (b) KUKA KMR mobile manipulator.

This paper addresses safety management for sharing the workspace between human operators and a real mobile manipulator in logistics and manipulation tasks. There are several technical issues for safety: human detection and tracking, safety reasoning, multi-sensor integration, and resource management.

2 EuRoC Realistic Showcase

The EuRoC challenges are organized into successive stages with increasing timelines for productization. This paper presents the results of the Showcase of Stage II: Realistic Labs by the TIMAIRIS team, one of the 15 challenger teams advancing to this stage out of the 102 teams of the simulation contest. The TIMAIRIS showcase task is a simplified version of the complete blank feeding problem to be developed for the final use case in real packaging environments, as shown in Fig. 2. The main idea is to have a simplified problem as a realistic lab, while still having most of the components of the final solution. Hence, in the showcase, a prototype of a blank magazine (a mechanism that feeds the packaging machine) and two pallets filled with the same type of blanks are used to demonstrate the capacity of the KUKA platform to pick a pile of blanks from the pallets, transport it and place it in the blank magazine. A special gripper has been developed for this purpose. In this task the following technologies are covered: the platform is able to recognize empty pallets and to plan and replan its actions;

Fig. 3. An overview of the skill-based architecture.

an initial version of the multimodal interface is used, integrating the possibility of gesture commands in the interaction with the platform (gestures are used mostly for safe navigation); the robot is able to navigate in an environment that includes a few humans and to interact with them; and the manipulation of the selected type of blanks is demonstrated. The setup requires an open area of approximately 20 m², where the prototype of the blank magazine and the two pallets with blanks are placed. The equipment also includes the KUKA KMR platform and the new blank gripper (developed in the showcase). Initially, calibration may be required for configuring the detection and grasping of the blanks, and for the multimodal interface.

3 System Architecture

3.1 Skill-Based Framework

In a higher-level view of the skill-based framework, shown in Fig. 3, the functional components are represented by boxes. A Perception module collects sensory data and processes them to extract information used by the high-level modules. A Skill is the capacity of performing a particular task, such as picking and placing an object or moving the end-effector of the manipulator to a desired pose. Perception modules and Skills receive and transmit sensory-motor data via the Sensor interface and the Effector interface, respectively. The Planner provides solutions for the current packaging state, while the Solver is responsible for taking decisions on how to solve the current task based on the current sensory data and the available Skills. The Human tracker keeps tracking humans in the workspace using laser scanners. The Human-Robot Interaction (HRI) module communicates with a human by recognizing gestures and by providing information via multimodal interfaces [9]. The Safety manager infers whether the system is in a safe state or not based on the system states and the human tracking information. When the Safety manager decides not to go ahead, it requests the Solver to gaze at the nearest human operator and triggers HRI to interact with him. Then, HRI takes an order from the operator to resume or stop the paused task using gestures.

3.2 Resource Management Scheme

To continuously monitor humans, the Human tracker and the Safety manager need to be continuously active, while all other modules, including skills and perception modules, just run on request. This can cause conflicts over resources. For example, when the robot wants to recognize the pose of a blank magazine to feed a blank pile, a perception module tries to rotate a camera system towards the blank magazine on the table. If, at that moment, a human operator approaches, the Safety manager also tries to rotate the camera system to gaze at the operator.

Algorithm 1. Algorithm for resource management.
1: procedure Node(D)                ▷ D is the name of the daemon
2:     R ← buildResourceMap(D)
3:     while available(R) do
4:         run(Proc)                ▷ run normal procedures
5:         release(D)               ▷ release resources at the end
6:     end while
7: end procedure

Based on the skill-based agent architecture [1,13], a resource management scheme was added to the system architecture, as shown in Algorithm 1. When the robot starts, each module that needs to run continuously is launched and becomes a daemon. The Solver builds a resource map which lists all the necessary resources for each daemon module. At every spin, that is, a wake-up of all subscriptions, services, timers and so on in ROS (Robot Operating System, http://wiki.ros.org/tf), the daemon checks the availability of its resources. If they are available, it runs its normal procedures and releases all resources at the end of the procedures.
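A minimal, ROS-agnostic Python sketch of this resource management scheme is given below: shared resources are guarded by locks, and a daemon only runs its procedure in a given spin if all of its resources can be acquired, releasing them at the end. Class and function names are illustrative and do not correspond to the actual TIMAIRIS implementation.

import threading, time

class ResourceManager:
    # Shared resources (e.g. the pan-tilt camera) guarded by locks.
    def __init__(self, names):
        self._locks = {n: threading.Lock() for n in names}

    def try_acquire(self, names):
        acquired = []
        for n in names:
            if self._locks[n].acquire(blocking=False):
                acquired.append(n)
            else:                       # conflict: roll back what was taken
                for a in acquired:
                    self._locks[a].release()
                return False
        return True

    def release(self, names):
        for n in names:
            self._locks[n].release()

def daemon_spin(manager, needed, work, period=0.1):
    # Skeleton of Algorithm 1: at every spin the daemon only runs its normal
    # procedure if all of its resources are currently available.
    while True:
        if manager.try_acquire(needed):
            try:
                work()                   # run normal procedures
            finally:
                manager.release(needed)  # release resources at the end
        time.sleep(period)

With such a scheme, the Safety manager and a perception module could both declare the pan-tilt camera in their resource lists, so only one of them drives it in any given spin.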

4 Human Tracking

4.1 Multi Sensor Integration

Because of limitations on the field of view of single-sensor systems and resource conflicts over access to shared resources such as cameras, multi-sensor integration is a foremost approach to derive synthesized information from multiple sources [6,7,16]. In the challenge environment, each perception module uses camera systems at specific poses and locations based on spatial knowledge [1,11]. In order to track humans constantly, two low-cost Hokuyo LRFs have been mounted on both sides of the platform manipulator to cover all directions without shadow regions, as shown in Fig. 4. Firstly, multiple single-plane laser scans are merged to generate virtual laser scans, disregarding the actual occlusions as they would be seen from the merged scan viewpoint [3]. Then the resulting scans are transformed to the robot frame and aligned with the point clouds of the stereo camera systems.

Fig. 4. (a) Hokuyo laser range finder (LRF) URG-04LX-UG01; (b) sensors on the mobile manipulator: two Hokuyo LRFs in red circles, two stereo camera systems in blue rectangles, and a Kinect RGB-D camera in a yellow rectangle.

4.2 Human Detection and Tracking Using Laser Scanners

A machine-learning-trained classifier (https://github.com/wg-perception/people) is used to detect human bodies from single-plane laser scans. Positive and negative samples of laser scanner readings of human body patterns have been collected and trained into random forests beforehand [5]. Given a training set, the training algorithm repeatedly selects a random sample, with replacement, of the training set and fits the trees of the random forest to these samples. After training, the prediction for a new sample x′ is made by the cumulative result of all the trees f_b in the forest:

f̂ = (1/B) Σ_{b=1}^{B} f_b(x′)        (1)

where B is the number of training (bagging) rounds. In the tracking procedure, a laser ring from the merged scan is segmented and classified by the forest classifier. The segmented scans become the set of candidate clusters. Each candidate is matched, within a threshold, against the list of trackers kept from the previous tracking procedure.

Fig. 5. Realistic environment.

Fig. 6. Regions for safety handling: during navigation (left) and during manipulation (right).

If not matched, a new tracker is added into the match list. Then all trackers are updated based on the Kalman filter [17]:

H_k = P_{k−1} + Q
K_k = P (P + R)^{−1}
x̂_k = x̂_{k−1} + K_k (f̂ − x̂_{k−1})
P_k = (1 − K_k) P_{k−1} H_k        (2)

where Q and R are the uncertainty parameters of the Kalman filter; 0.002 and 10 are used in this work, respectively. Finally, the Human tracker attempts to pair individual body parts together and estimates the center of one person, since whole bodies might consist of one or more body parts, such as torso, legs and arms, from the single-plane scan viewpoint. In the realistic environment, there are many recognition failures, especially false positives (FPs) from objects such as blank piles on a workbench, bags on a windowsill and the robot arm, as shown in Fig. 4(b), because of the level at which the LRFs are mounted.
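For illustration, the snippet below implements a standard one-dimensional Kalman predict/update step with the noise values quoted above (Q = 0.002, R = 10); one such filter would be kept per tracked coordinate of each person. The naming differs slightly from Eq. (2), and the data-association (matching) step is omitted.

class ScalarKalmanTracker:
    # Minimal 1D Kalman filter for one tracked coordinate of a person.
    def __init__(self, x0, p0=1.0, q=0.002, r=10.0):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, measurement):
        # predict: the covariance grows by the process noise
        p_pred = self.p + self.q
        # update: blend prediction and measurement via the Kalman gain
        k = p_pred / (p_pred + self.r)
        self.x = self.x + k * (measurement - self.x)
        self.p = (1.0 - k) * p_pred
        return self.x

# Each tracker would keep one filter per coordinate (x and y of the person).
tracker_x = ScalarKalmanTracker(x0=1.2)
print(tracker_x.update(1.35))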

5 Safety Management

Figure 5 shows the showcase setup of TIMAIRIS as a realistic lab for autonomous packaging. The scenario of the TIMAIRIS showcase is a simplified version of the complete blank feeding problem, and it is composed of a prototype of a blank magazine, the mechanism that feeds the blanks into the packaging machine, and two pallets filled with actual blank piles used in production. The task consists in recognizing the blank piles on two different pallets, devising a plan for picking them, and transporting and placing them in the blank magazine in a smooth way so that it does not get jammed. The workspace is assumed to be shared by humans and the robot, as shown in Fig. 2. The Safety Manager module is responsible for taking care of safety issues, using the information provided by the Human Tracker module. Its processing unfolds in two parts: region-based filtering and safety handling.

5.1 Region-Based Filtering and Reasoning

Figure 6 shows regions for safety reasoning. One green region and several black regions represent the workspace and objects including the robot and tables, respectively. The observations in yellow regions and red regions are considered as humans and are evaluated in order to decide the safety action to be taken. The details are given in Sect. 5.2. All observations in the green region are taken into account as candidates to track and monitor humans. As referred above, in the realistic environment, the observations delivered by the Human Tracker module contain many FPs. Most of them appear in places where humans can not be, such as objects in the windowsill and humans on the boundary of the workspace. They are out of the workspace. To deal with these FPs, a green region is defined in advance, and any detection that lies outside of this region is discarded. On the other hand, there are also FPs caused by objects on the robot, the mast of the pan and tilt camera, piles on the pallets and the blank magazine. Those are in the regions that are occupied by the robot and objects such as tables. For them black regions are established, and any observation in these regions is discarded, as shown in Fig. 7. There is a difference between the robot inner region and the others: the others are fixed in the environment during the whole showcase round; the former is fixed to the robot frame. Since the virtual laser scans are defined in the robot frame, the detected human candidates are also defined in the same frame. Thus, to filter FPs based on the exclusion regions of the environment, a transformation from robot frame to environment frame is required.

Fig. 7. Filtered regions.
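The region-based filtering described above amounts to simple containment tests once the detections are expressed in the right frame. The sketch below assumes axis-aligned rectangular regions and a 2D homogeneous transform from the robot frame to the environment frame; the region shapes, names and parameters are illustrative, not the actual showcase configuration.

import numpy as np

def in_rect(points, rect):
    # Axis-aligned containment test; rect = (xmin, ymin, xmax, ymax).
    x, y = points[:, 0], points[:, 1]
    return (x >= rect[0]) & (y >= rect[1]) & (x <= rect[2]) & (y <= rect[3])

def filter_detections(detections_robot, T_env_robot, workspace, exclusions, robot_inner):
    # detections_robot : (N, 2) candidate positions in the robot frame
    # T_env_robot      : 3x3 homogeneous 2D transform robot -> environment
    # workspace        : green region; exclusions : fixed black regions
    # robot_inner      : black region attached to the robot frame
    keep = ~in_rect(detections_robot, robot_inner)        # robot frame check
    homog = np.hstack([detections_robot, np.ones((len(detections_robot), 1))])
    det_env = (T_env_robot @ homog.T).T[:, :2]             # environment frame
    keep &= in_rect(det_env, workspace)                    # discard if outside green
    for rect in exclusions:
        keep &= ~in_rect(det_env, rect)                    # discard if inside black
    return det_env[keep]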


Along the showcase round, the only situations in which the robot arm is out of the robot inner region are during the grasping operation of a pile and during the feeding operation of the blank magazine. In both cases, the arm movement can cause FPs to be detected by the human tracking module. In the former situation, the robot is close to the pallet containing the pile; in the latter, it is close to the table where the blank magazine is mounted. In both cases, the gap between the robot and the table or the pallet is narrower than a human body. This fact is used to eliminate the FPs caused by the arm movement, by defining narrow regions around the table, the pallets and the robot and filtering out human detections that appear in the intersection of these regions. Figure 8 is a snapshot of the rviz tool, showing the showcase environment and safety handling information. The small circular spots represent possible human parts (in our case, torso or arm), with the color representing a reliability value, increasing from black to blue. Only one of these spots was considered as a human requiring safety attention, the one represented by the wider green spot (see the linked video: https://youtu.be/bJaJznbzMOI).

Fig. 8. Regions for safety handling, shown using the rviz tool.

5.2 Safety Handling

Safety handling is based on regions defined around the robot platform and the robot arm. Two rectangular regions are defined around the robot platform during navigation: a red one close to the robot and a yellow one next to it, as shown in Fig. 6, left. During manipulation, a red circular region is defined around the robot arm, whose radius is determined by the arm range, as shown in Fig. 6, right. Actually, only the part of this region that does not intersect the inner region of the robot counts, as detections in this robot inner region are filtered out in the aforementioned filtering phase.

Based on these regions, safety handling proceeds as follows. If every human just stays in the green regions, it is considered that the robot operation does not put humans in danger, so no safety action is taken at all and the robot keeps doing its operation. If any human enters a yellow region, which is the furthest rectangular region and outside of the arm manipulation region, only the navigation operation is suspended, while the arm keeps doing its current manipulation operation, if any. If a human enters a red region, which is the arm manipulation region or the rectangular region closer to the robot platform, the robot suspends its current operation, holding its state, and stops platform and arm movements. In either of these two last situations, the pan-and-tilt camera points to the chest of the closest human, in order to allow for a human operator gesture command. The pan value is determined by the (x, y) coordinates of the human detection point; the tilt value is defined assuming the average height of a human. The robot gets out of the stopping and holding situation in two different ways. If a command is given by the human operator, the robot just executes it: the suspended operation can be resumed or interrupted, or the whole task can be interrupted. If no command is given and the human moves out of the yellow and red regions, the robot automatically resumes the suspended operation.
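This policy can be summarised as a small decision function: detections in red regions (the inner rectangle, or the arm circle during manipulation) suspend everything, detections only in the yellow rectangle suspend navigation, and otherwise the robot continues. The sketch below uses illustrative region parameters and ignores the gazing and gesture-resume logic.

from enum import Enum

class SafetyAction(Enum):
    CONTINUE = 0              # humans only in green regions
    SUSPEND_NAVIGATION = 1    # human in the outer (yellow) rectangle
    SUSPEND_ALL = 2           # human in a red region

def safety_action(humans, red_rect, yellow_rect, arm_center, arm_radius, manipulating):
    # humans: iterable of (x, y) tracked positions in the robot frame.
    action = SafetyAction.CONTINUE
    for x, y in humans:
        in_red = (red_rect[0] <= x <= red_rect[2]) and (red_rect[1] <= y <= red_rect[3])
        in_yellow = (yellow_rect[0] <= x <= yellow_rect[2]) and (yellow_rect[1] <= y <= yellow_rect[3])
        in_arm = manipulating and ((x - arm_center[0])**2 + (y - arm_center[1])**2 <= arm_radius**2)
        if in_red or in_arm:
            return SafetyAction.SUSPEND_ALL            # strongest reaction wins
        if in_yellow:
            action = SafetyAction.SUSPEND_NAVIGATION
    return action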

Fig. 9. Interaction graph for the packaging task.

6 Human-Robot Interaction

For small batch production, changing or adding new features, such as continuously changing printed patterns of blanks, can be a burden on both human operators and autonomous robots. Since industrial environments are noisy, with machines producing a continuous whirring sound, verbal communication is difficult for humans and impractical for robots. Thus, gestures have been considered a practical alternative way of communication. So far, gesture recognition systems in HRI have focused on a small number of implicit interactions such as pointing and handing over. With respect to pointing, the target object must be in the field of view of both human and robot. To ask the robot to fetch objects that are out of view, or in a cluttered environment where it is impossible for a human to specify an object by pointing, HRI systems need to have a rich vocabulary to designate an object. Figure 9 shows the interaction graph of the showcase challenge. It enables composing sentences with a fixed number of symbols [9].

7 Challenge Evaluation

During the showcase evaluation of Stage II with a real KUKA mobile manipulator, the TIMAIRIS showcase addressed all of these issues (and others) in the form of three challenges, with quantifiable metrics (see the official video of the evaluation: https://youtu.be/8sNQLCs1jwE). The objectives and metrics are organized so that the execution can be evaluated in a comprehensive and robust way. For safety management, the first objective, on perception, includes the metric of identifying the presence of humans in the vicinity of the robot (O1M4). The second objective, on manipulation, includes the metric of stopping manipulation and navigation if a human is close to the robot (O2M4). The third objective, on the planning capabilities, includes the metric of adapting navigation paths (O3M2). The last objective evaluates human-robot interaction and includes the metric of tracking a human for interaction (O4M2). The achieved metrics are presented in Table 1.

Table 1. Quantifiable evaluation

Metric                                   Target   Events   Percent (%)
Recognize safety situations (O1M4)       2        2        100
Manipulate safely (O2M4)                 2        2        100
Adapt navigation paths (O3M2)            2        2        100
Track a human for interaction (O4M2)     2        2        100

As can be seen from Table 1, TIMAIRIS completed the showcase with every objective and metric being accomplished. As regards Perception, Manipulation/Navigation and Planning, i.e. Objectives 1, 2 and 3 previously specified, all metrics were achieved during the execution of the first part of the showcase evaluation, which consisted of three challenges. Tracking humans with the pan-tilt camera (O4M2) was also achieved during this first part of the evaluation.

8 Conclusion

The proposed safety management system provides opportunities for a flexible and safe human-robot collaboration in a realistic industrial setting. As a result of the Stage II evaluations, TIMAIRIS was selected to advance to the final stage of EuRoC. The achievements obtained in the showcase phase provide an excellent base for the development of the pilot experiment. The pilot experiment environment will be a real industrial setting, but all the developments described in this paper are directly applicable in that environment. Some new features will have to be addressed that also depend on the speedup of the final prototype, such as considering several packaging machines and different types/shapes of blanks, enhanced safety and more challenging navigation issues.

Acknowledgement. This work was supported by the EuRoC Project under Grant no. 608849.

References

1. Amaral, F., Pedrosa, E., Lim, G.H., Shafii, N., Pereira, A., Azevedo, J.L., Cunha, B., Reis, L.P., Badini, S., Lau, N.: Skill-based anytime agent architecture for logistics and manipulation tasks: EuRoC Challenge 2, Stage II - Realistic Labs: Benchmarking. In: 2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), pp. 198–203. IEEE (2017)
2. Augustsson, S., Christiernin, L.G., Bolmsjö, G.: Human and robot interaction based on safety zones in a shared work environment. In: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, pp. 118–119. ACM (2014)
3. Ballardini, A.L., Fontana, S., Furlan, A., Sorrenti, D.G.: ira laser tools: a ROS laserscan manipulation toolbox. arXiv preprint arXiv:1411.1086 (2014)
4. Bischoff, R., Guhl, T.: The strategic research agenda for robotics in Europe [industrial activities]. IEEE Robot. Autom. Mag. 1(17), 15–16 (2010)
5. Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001)
6. Dias, R., Lau, N., Silva, J., Lim, G.H.: Multi-object tracking with distributed sensing. In: 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), pp. 564–569. IEEE (2016)
7. Lim, G.H.: Two-step learning about normal and exceptional human behaviors incorporating patterns and knowledge. In: 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), pp. 162–167. IEEE (2016)
8. Lim, G.H., Oliveira, M., Mokhtari, V., Kasaei, S.H., Chauhan, A., Lopes, L.S., Tomé, A.M.: Interactive teaching and experience extraction for learning about objects and robot activities. In: 2014 RO-MAN: The 23rd IEEE International Symposium on Robot and Human Interactive Communication, pp. 153–160. IEEE (2014)
9. Lim, G.H., Pedrosa, E., Amaral, F., Lau, N., Pereira, A., Dias, P., Azevedo, J.L., Cunha, B., Reis, L.P.: Rich and robust human-robot interaction on gesture recognition for assembly tasks. In: 2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), pp. 159–164. IEEE (2017)
10. Lim, G.H., Suh, I.H.: Improvisational goal-oriented action recommendation under incomplete knowledge base. In: 2012 IEEE International Conference on Robotics and Automation (ICRA), pp. 896–903. IEEE (2012)


11. Lim, G.H., Yi, C., Suh, I.H., Ko, D.W., Hong, S.W.: Ontology representation and instantiation for semantic map building by a mobile robot. In: Intelligent Autonomous Systems 12, pp. 387–395. Springer (2013)
12. Pedrocchi, N., Vicentini, F., Matteo, M., Tosatti, L.M.: Safe human-robot cooperation in an industrial environment. Int. J. Adv. Robot. Syst. 10(1), 27 (2013)
13. Pedrosa, E., Lau, N., Pereira, A., Cunha, B.: A skill-based architecture for pick and place manipulation tasks. In: Progress in Artificial Intelligence: 17th Portuguese Conference on Artificial Intelligence, EPIA 2015, pp. 457–468. Springer (2015)
14. Siciliano, B., Caccavale, F., Zwicker, E., Achtelik, M., Mansard, N., Borst, C., Achtelik, M., Jepsen, N.O., Awad, R., Bischoff, R.: EuRoC - the challenge initiative for European robotics. In: Proceedings of ISR/Robotik 2014, 41st International Symposium on Robotics, pp. 1–7. VDE (2014)
15. Stenmark, M., Malec, J., Nilsson, K., Robertsson, A.: On distributed knowledge bases for robotized small-batch assembly. IEEE Trans. Autom. Sci. Eng. 12(2), 519–528 (2015)
16. Teunissen, P.: An integrity and quality control procedure for use in multi sensor integration. In: ION GPS Redbook, Integrated Systems, vol. 7. Institute of Navigation (ION) (2010)
17. Thrun, S., Burgard, W., Fox, D.: Probabilistic Robotics. MIT Press, Cambridge (2005)
18. Tudico, A., Lau, N., Pedrosa, E., Amaral, F., Mazzotti, C., Carricato, M.: Improving and benchmarking motion planning for a mobile manipulator operating in unstructured environments. In: Portuguese Conference on Artificial Intelligence, pp. 498–509. Springer (2017)

Grasp Quality Measures for Transferring Objects

Fernando Soler, Abiud Rojas-de-Silva, and Raúl Suárez

Institute of Industrial and Control Engineering, Universitat Politècnica de Catalunya, Barcelona, Spain
{fernando.soler,francisco.abiud.rojas.de.silva,raul.suarez}@upc.edu

Abstract. There is a lack of quality indexes to evaluate grasps that are more likely to allow a hand-to-hand transfer of an object during a manipulation task. In order to overcome this, this paper presents a proposal of grasp transfer quality measures that evaluate how easy or feasible it is for an object grasped by one hand to be grasped by another hand to perform a hand-to-hand transfer. Experiments were conducted to evaluate the proposed grasp transfer quality measures using different objects and the model of a real robotic hand. Keywords: Grasping · Grasp quality measures · Transfer manipulation

1 Introduction

The use of dual-arm robots has been increasing in the field of object manipulation due to their advantages over one-arm robots. However, using two arms simultaneously can lead to other specific problems, such as the computation of suitable grasps to hold the object with both hands or to transfer the object from one hand to the other. This, in turn, generates the need for quality measures to evaluate and choose the best grasps to perform the transfer of an object. The evaluation of grasp quality in a general sense has already been studied; a review of different quality measures proposed for grasp planning in the literature is presented in [1]. In this review, the quality measures are classified into two groups: the first group is related to the contact points on the grasped object and the second group to the hand configuration. Besides, the combination of the quality measures from the previous two groups is also discussed to obtain good grasps for specific purposes. Nevertheless, despite the list of already proposed quality measures, none of them addresses the problem of measuring how good a grasp is for an object transfer, even though this is a very frequent action. This paper focuses on the proposal of quality measures to evaluate grasps in order to determine how good they are for transferring objects from one hand to another using a dual-arm robot, as shown in Fig. 1.

Work partially supported by the Spanish Government through the projects DPI201340882-P and DPI2016-80077-R. F. Soler and A. Rojas-de-Silva were partially supported by the Mexican CONACyT doctoral grants 410931 and 313768.


Fig. 1. Dual-arm robot (ADARS: Anthropomorphic Dual Arm Robotic System) about to perform a transfer of the object from the right to the left hand.

The rest of the paper is structured as follows: Sect. 2 summarizes relevant literature on grasp quality measures. Section 3 describes the grasp transfer quality measures. Section 4 describes how the grasp transfer quality measures are computed. Section 5 presents application examples of the grasp transfer quality measures and results. Section 6 presents the validation process, and Sect. 7 presents the summary and future work.

2 On the Quality of a Grasp

One of the fundamental properties of grasps is force-closure (FC) [2]. A grasp is FC if the forces applied by the fingers on an object counteract any disturbance force or torque in any direction, ensuring object immobility. The most common quality index is based on the FC property and was presented by Ferrari and Canny [3]; it describes the largest perturbation that the grasp can resist under arbitrary disturbances or wrenches. This quality measure is directly associated with the Grasp Wrench Space (GWS), which is the set of all wrenches that can be applied on an object through unitary forces at the grasping contact points. The quality measure is defined by the radius of the largest ball centered at the origin of the wrench space and contained in the convex hull of the possible wrenches produced by unitary forces of the fingers on the object. If the origin is inside the convex hull, then the grasp is FC. To quantify the quality of a grasp, several approaches have been proposed, such as the largest sphere fitting inside the convex hull of the GWS, the nearest distance from the origin to the border of the convex hull, and the volume of the GWS [4]. The Object Wrench Space (OWS) [5] contains all the wrenches that can be produced by a set of unitary external disturbance forces acting on the surface of the object. The OWS is hardly used by grasp planners due to the high amount of time required for its computation, which is a problem when many grasps have to be evaluated.


Recently, an algorithm to speed up the computation time of a grasp quality metric based on the OWS was presented in [6]. Another tool used to evaluate grasps based on wrench spaces is the Task Wrench Space (TWS) [7]. The TWS uses detailed information about the given task and incorporates all the perturbation wrenches that can be produced during a manipulation task. When the task information is not specified, such wrenches emerge from the interaction between the object and the environment, such as gravity and the object's acceleration due to arm movements. Earlier works on grasping consider that the object being grasped is alone in the environment. An algorithm has been proposed in which a grasp scoring function uses information about the environment around the object and the robot's kinematics to find feasible grasps in a cluttered environment [8]. Another quality function uses the sum of several measures to obtain grasps for each hand of a dual-arm robot [9]; there, the main task is to move an object from one position to another by transferring it from one hand to the other, so that at some moment of the task a bimanual grasp is done, which is evaluated by another quality function based on the quality of each hand separately and on the configuration of the robot when grasping the object at its start and goal configurations. The GWS can also be used to determine the quality of a bimanual grasp, as in the case of a single grasp. The computation and evaluation of grasps, as well as the search for collision-free motions, are done on-line, which means that the search for feasible grasps is not limited to a set of pre-computed grasps, since grasp poses are defined during the planning process [10]. Manipulability information is used as a quality measure in [11] to find grasps that allow a high manipulability when manipulating large objects, since such objects limit the movement of a dual-arm robot working as a closed kinematic chain.

3 Grasp Transfer Quality Measures

The goal of the grasp transfer quality measures (TQ) is to determine how good an initial grasp is, so that a second suitable grasp can be performed on the same object in order to carry out a hand-to-hand transfer. If the grasp being analyzed obtains a good quality for transferring, it means that: (a) it allows good grasps for another hand to perform a transfer, or (b) it makes it easy to find a grasp for the second hand. Both cases will be discussed below in this work. It is desired that both the initial grasp before the transfer and the second grasp after the transfer are FC grasps in order to ensure object immobility during any manipulation. How good an FC grasp is, in terms of the maximal perturbation it can resist in any direction, is evaluated using the FC grasp quality Q [3], which is defined as

Q = \min_{\omega \in \partial W} \|\omega\|    (1)

where ∂W is the boundary of the convex hull of the set of wrenches produced by the applied grasping forces on the object boundary. Some of the grasp transfer quality measures TQ introduced below in this work are based on Q.
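The following sketch (an illustration, not the authors' implementation) computes Q from a set of contact wrenches given as rows of an array, using the convex-hull interpretation of Eq. (1); the contact model and friction-cone discretization that produce the wrenches are assumed to be available and are outside the sketch.

```python
import numpy as np
from scipy.spatial import ConvexHull

def fc_grasp_quality(wrenches):
    """Radius of the largest wrench-space ball centered at the origin and contained in
    ConvexHull(wrenches); returns 0 if the origin is not strictly inside (not FC)."""
    hull = ConvexHull(np.asarray(wrenches, dtype=float))
    # SciPy stores each facet as: normal . x + offset <= 0 for interior points,
    # with unit-length normals, so -offset is the distance from the origin to the facet.
    distances = -hull.equations[:, -1]
    if np.any(distances <= 0.0):          # origin on or outside the hull
        return 0.0
    return float(distances.min())
```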

3.1 Measure Based on the Individual Maximum FC Grasp Quality Reachable

This proposal is based on the maximum FC grasp quality that can be achieved on the object. The grasp transfer quality TQ_MQ of the initial grasp is a function of the maximum FC grasp quality that can be achieved by a second grasp when the object is already grasped by the first hand, i.e., the first hand acts as an obstacle for the second hand and therefore not all of the object surface is reachable for a second grasp. TQ_MQ is computed as

TQ_{MQ} = \frac{Q_p}{Q_g}    (2)

where Q_p is the maximum FC grasp quality that could be reached by the second hand while the first hand is holding the object, and Q_g is the maximum FC grasp quality reachable on the whole object without any constraint. TQ_MQ ∈ [0, 1], with 1 being the best case. Note that in practice the computation of TQ_MQ depends on the hands used, because when the first hand grasps the object its structure makes some points of the object unreachable by the second hand. These practical aspects, and how Q_p and Q_g are computed, are described below in Sect. 4.2.

3.2 Measure Based on the Grasp Quality Improvement

This proposal is based on the ratio of Q_p, previously defined, and the FC grasp quality of an initial grasp Q_i, and is expressed as

TQ_{GQ} = \frac{Q_p}{Q_i}    (3)

TQ_GQ ∈ [0, ∞]. TQ_GQ has a simple interpretation: the greater the value of TQ_GQ, the better the chance of finding grasps with the second hand that improve the quality of the initial grasp.

3.3 Measure Based on the Reachable Area of the Object Surface

The area of the object surface is used to define this measure. The surface of the object that is reachable by a second hand (when the first hand is grasping the object) is used to compute the grasp transfer quality TQ_SA for the initial grasp. It is computed as

TQ_{SA} = \frac{S_p}{S_g}    (4)


where S_p is the reachable area of the grasped object surface when the first hand is grasping it, and S_g is the whole area of the object surface. TQ_SA ∈ [0, 1], with 1 being the best case. The computation of S_p and S_g is described in Sect. 4.3.

3.4 Measure Based on the Number of Reachable Points of the Point Cloud

This proposal is based on the number of points of the point cloud representing the object that are reachable by the second hand while the first hand is grasping the object. The grasp transfer quality TQ_NP is computed as

TQ_{NP} = \frac{m}{n}, \quad n > m    (5)

where m is the number of points reachable by the second hand when the first hand is grasping the object and n is the number of points of the whole object model. TQ_NP ∈ [0, 1], with 1 being the best case.
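As a summary of Eqs. (2)-(5), the following small sketch (an assumed helper, not part of the paper) gathers the four transfer quality measures once the underlying quantities have been computed.

```python
def transfer_qualities(Qg, Qp, Qi, Sg, Sp, n, m):
    """Qg/Qp: max FC quality on the whole/reachable surface, Qi: FC quality of the
    initial grasp, Sg/Sp: whole/reachable surface area, n/m: total/reachable points."""
    return {
        "TQ_MQ": Qp / Qg,   # Eq. (2): best quality still reachable vs. best possible
        "TQ_GQ": Qp / Qi,   # Eq. (3): potential improvement over the initial grasp
        "TQ_SA": Sp / Sg,   # Eq. (4): fraction of the surface area that remains reachable
        "TQ_NP": m / n,     # Eq. (5): fraction of the cloud points that remain reachable
    }
```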

4 Implementation

This section describes how the grasp transfer quality measures are computed in practice. The object model is represented by a point cloud P_g = {p_i, i = 1, ..., n} (Fig. 2), with high enough density, i.e., n must be large enough. The point clouds have been manipulated using the Point Cloud Library (PCL), a large-scale open project for 2D/3D image and point cloud processing [12].

Fig. 2. Left: 3D object model. Right: Point cloud of the complete object Pg .

4.1 Obtaining the Reachable Part of the Object

Given the point cloud P_g representing the whole object boundary, and an initial grasp for a given hand, the reachable part of the object is determined by the points of P_g that are neither in contact with nor enveloped by the hand, and that could therefore be considered as grasp points for the second hand. In practice, this set of points, denoted P_p, is obtained as follows:

1. The 3D hand model for a given grasp configuration is represented by a point cloud denoted P_h (see Fig. 3a-Left).
2. Compute the Oriented Minimum Volume Bounding Box of P_h (for simplicity denoted MBB-P_h henceforth), as shown in Fig. 3a-Right.
3. The reachable part of the object boundary is the set of points P_p = {p ∈ P_g | p ∉ MBB-P_h}.

Figure 3b shows an example of the resulting reachable part of an object.
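A minimal sketch of step 3 follows; it assumes the oriented bounding box is already available as a center, a 3x3 rotation matrix R whose columns are the box axes, and the half-extents along those axes (the paper obtains the box with PCL, which is not reproduced here).

```python
import numpy as np

def reachable_points(Pg, center, R, half_extents):
    """Return the subset of the (N, 3) object cloud Pg lying outside the hand's MBB."""
    local = (Pg - center) @ R                       # point coordinates in the box frame
    inside = np.all(np.abs(local) <= half_extents, axis=1)
    return Pg[~inside]                              # points outside the box are reachable
```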


Fig. 3. (a) Left: Ph representing the grasping configuration of the Allegro Hand. Right: MBB-Ph obtained for a point cloud of the Allegro Hand representing the grasping configuration. (b) Left: Ph and Pg . The red points represent the Allegro hand, the green points represent Pp and the blue points are in contact or enveloped by the hand. Right: MBB-Ph is shown in solid to show its dimension.

4.2 Obtaining Qg and Qp Using OWS

As stated above, the OWS includes the wrenches that can be generated by normalized forces acting anywhere on the surface of the object. The general form of the OWS is defined as

\omega = \sum_{i=1}^{n} \alpha_i \omega_i \;\wedge\; \alpha_i \ge 0 \;\wedge\; \sum_{i=1}^{n} \alpha_i = 1    (6)

where \omega_i denotes the wrench that can be applied on the object at contact point p_i ∈ P_g.


This is equivalent to the convex hull given by

OWS = ConvexHull\left( \bigcup_{i=1}^{n} \{\omega_i\} \right)    (7)

The metric used to evaluate the FC grasp quality Q is given by Eq. (1). Therefore, Q obtained from the OWS represents the maximum FC grasp quality that a grasp can achieve on the whole object, i.e., Q_g. Similarly, Q_p represents the maximum FC grasp quality that a grasp could achieve on the reachable part of the object while the first hand is holding the object.
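In terms of the fc_grasp_quality helper sketched in Sect. 2 (again, an illustration rather than the authors' code), Q_g and Q_p would simply be that measure applied to the OWS built from the whole cloud P_g and to the OWS built from the reachable part P_p:

```python
# ows_wrenches(P) is an assumed helper returning the disturbance wrenches of Eq. (6) for a cloud P.
Qg = fc_grasp_quality(ows_wrenches(Pg))   # best quality achievable anywhere on the object
Qp = fc_grasp_quality(ows_wrenches(Pp))   # best quality achievable on the reachable part
```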

4.3 Defining Sg and Sp

Since the representation of the objects is given by point clouds, and there is no guarantee of a uniform distribution of the points, the Greedy Projection Algorithm (GPA) [13] is used to create triangular mesh models representing the object surfaces from unorganized point clouds. The area of a triangular mesh model M is computed by

S = \sum_{j=1}^{n_t} A_j    (8)

where A_j is the area of the j-th triangle of M and n_t is the total number of triangles. A_j is obtained by Heron's formula considering the lengths of its three sides,

A_j = \sqrt{s(s - a)(s - b)(s - c)}    (9)

Fig. 4. Triangular mesh models Mg and Mp of a detergent bottle computed from Pg and Pp respectively


where a, b, c are the lengths of the triangle edges and s = (a + b + c)/2 is the semiperimeter of the triangle.

From the point clouds P_g and P_p of the object, their respective triangular mesh models M_g and M_p are obtained. Figure 4 shows an example of the triangular mesh model computation of a detergent bottle considering the grasp shown in Fig. 3b. Once M_g and M_p have been created, their respective areas S_g and S_p are computed using Eq. (8).
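A compact sketch of Eqs. (8)-(9) is shown below; it assumes the mesh produced by the GPA step is available as a vertex array and a triangle index array (the PCL reconstruction itself is not reproduced).

```python
import numpy as np

def mesh_area(vertices, triangles):
    """Total surface area of a triangular mesh: Heron's formula summed over triangles."""
    p0, p1, p2 = (vertices[triangles[:, k]] for k in range(3))
    a = np.linalg.norm(p1 - p0, axis=1)            # edge lengths of every triangle
    b = np.linalg.norm(p2 - p1, axis=1)
    c = np.linalg.norm(p0 - p2, axis=1)
    s = (a + b + c) / 2.0                          # semiperimeter, as in Eq. (9)
    return float(np.sum(np.sqrt(s * (s - a) * (s - b) * (s - c))))   # Eq. (8)

# S_g and S_p are then the areas of the meshes built from P_g and P_p, respectively.
```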


Fig. 5. (a) Allegro Hands. (b) Objects used for experimentation: a detergent bottle, a cookie box, a feeding bottle, a coffee mug and a milk box.

5 Application Examples

The experimental verification of the four proposed TQ was done using two robotic Allegro Hands from Simlab (Fig. 5a), which have 16 degrees of freedom (DOF): 4 fingers with 4 independent joints per finger. Five objects with different sizes and shapes have been used for the evaluation of the proposed TQ (Fig. 5b). The object models were obtained from the GrabCAD and Autodesk 123D repositories [14, 15], which offer a large variety of CAD models of objects with different shapes and sizes. All the selected objects can be grasped and handled by one hand. The computation of the grasps for each object was done using the robotic simulation toolbox Simox [16], which allows the generation of random grasps around free-flying objects satisfying the FC property. Equation (1) was used to compute the FC grasp quality of each individual grasp. The values of each TQ for each object are obtained as follows:

1. Compute a set of 30 grasps using the right hand, G^r = {g^r_1, ..., g^r_30}. Figure 6 shows an example of a random grasp for each object used in this work.


Fig. 6. Example of a random grasp for each object. Red points represent the grasping hand pose P_h. Green points compose the point cloud that represents the reachable part of the object P_p. Blue points are the points that are in contact or enveloped by MBB-P_h.

Table 1. Results of TQ for 10 grasps computed on a detergent bottle and the indices TQ* used for the correlation analysis.

Grasp  TQ_MQ  TQ_GQ  TQ_SA  TQ_NP  TQ*_MQ  TQ*_GQ  TQ*_RS
1      0.991  3.901  0.767  0.761  0.91    3.28    0.76
2      0.986  3.772  0.705  0.743  0.9     3.48    0.73
3      0.984  3.168  0.69   0.719  0.93    2.99    0.71
4      0.983  3.57   0.662  0.495  0.9     3.52    0.608
5      0.976  3.549  0.644  0.601  0.9     3.49    0.73
6      0.975  3.294  0.639  0.675  0.85    3.19    0.51
7      0.975  2.944  0.633  0.617  0.78    2.792   0.708
8      0.973  3.6    0.631  0.566  0.68    3.546   0.756
9      0.973  2.438  0.629  0.394  0.83    3.25    0.704
10     0.972  3.122  0.628  0.61   0.73    2.21    0.706

2. Compute a set of point clouds P_p = {P_p^1, ..., P_p^30}, representing the reachable part of the object for each grasp g^r_j.
3. Compute the OWS.

The four TQ described by Eqs. (2), (3), (4) and (5) were computed for each g^r_j ∈ G^r using P_g, the OWS and P_p. Table 1 shows the results of the grasp transfer quality measures for 10 randomly chosen grasps of one particular object.

6 Validation

An auxiliary index TQ ∗ for each proposed grasp transfer quality TQ will be computed in order to perform a correlation analysis between the values of TQ obtained for the right hand and the values of TQ ∗ obtained for the left hand (shown in Table 1). The indices TQ ∗ were computed as follows:


Table 2. Correlations between TQ and TQ* (%)

Object            TQ_MQ-TQ*_MQ  TQ_GQ-TQ*_GQ  TQ_SA-TQ*_RS  TQ_NP-TQ*_RS
Detergent bottle  76.09         80            81.86         81.31
Coffee mug        64.50         85.81         72.52         66.98
Cookies box       55.39         95.69         77.52         78.34
Feeding bottle    97.94         91.16         60.47         57.57
Milk box          80.66         81.70         76.01         77.21

1. Compute a set of 500 grasps using the left hand, G^l = {g^l_1, ..., g^l_500}, in the same way as the set G^r, around the whole object.
2. Select the grasp g^l_i with the best quality Q^l_{g_max}.
3. For each grasp g^r_j:
   (a) Remove the grasps in G^l that are in collision with MBB-P_h, in order to obtain another set of grasps G^l_j = {g^l_i ∈ G^l | g^l_i compatible with g^r_j}.
   (b) Select the grasp g^l_i ∈ G^l_j with the best quality Q^l_{p_max}.
   (c) Compute TQ* as:
       • For TQ_MQ:

         TQ^*_{MQ} = \frac{Q^l_{p_{max}}}{Q^l_{g_{max}}}    (10)

       • For TQ_GQ:

         TQ^*_{GQ} = \frac{Q^l_{p_{max}}}{Q^r_j}    (11)

         where Q^r_j is the FC grasp quality of g^r_j.
       • For TQ_SA and TQ_NP:

         TQ^*_{RS} = \frac{c}{m}    (12)

         where c is the number of grasps in G^l_j and m is the number of grasps in G^l.

The correlation between TQ and TQ* was then computed using the Pearson correlation coefficient, a regression measure that quantifies the variation between two variables, obtained as

CORR(%) = \frac{\sum xy}{\sqrt{(\sum x^2)(\sum y^2)}}    (13)

where x represents the value of TQ and y represents the value of TQ*. Table 2 shows the correlation percentage between the theoretical values obtained with TQ and the real values obtained with TQ* for the different objects. The results show a high correlation between TQ_MQ and TQ*_MQ for most of the objects used, and only one object shows a moderate correlation (coffee mug).
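A short sketch of Eq. (13) as written above is given for illustration (note that the textbook Pearson coefficient first subtracts the means of x and y; the expression here follows the formula as printed).

```python
import numpy as np

def corr_percent(tq, tq_star):
    """CORR(%) = sum(x*y) / sqrt(sum(x^2) * sum(y^2)), following Eq. (13)."""
    x = np.asarray(tq, dtype=float)
    y = np.asarray(tq_star, dtype=float)
    return 100.0 * np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2))

# Usage: corr_percent(tq_values, tq_star_values) with the 30 per-grasp values of one object.
```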


Fig. 7. Scatter diagram for each grasp transfer quality measure for 30 grasps using the detergent bottle (Fig. 2).

The correlation between TQ_GQ and TQ*_GQ shows the best results, with correlations above 80% for most objects. The correlation between TQ_SA and TQ*_RS, which is similar to the correlation between TQ_NP and TQ*_RS, shows the lowest values (under 80% for most objects): three objects (detergent bottle, cookie box and milk box) show a high correlation and two objects (coffee mug and feeding bottle) show a moderate correlation. Figure 7 shows the scatter diagram of the correlation analysis for each TQ for 30 grasps using the detergent bottle.

7 Summary and Future Work

This work proposes different grasp transfer quality measures to evaluate grasps with the purpose of transferring objects from one hand to another using a dual-arm robot or more than one single-arm robot. These grasp transfer quality measures can determine whether an initial grasp on an object is good enough so that another hand can easily find a good grasp on the same object. This may help to choose grasps that are more likely to allow the transfer of objects. The validation results showed a high correlation between TQ and TQ* for most objects. The results validated in this paper were obtained for one particular hand. Future work includes the validation of the grasp transfer quality measures using more objects with different shapes and sizes and other robotic hands.


References

1. Roa, M., Suárez, R.: Grasp quality measures: review and performance. Auton. Robots 38, 65–88 (2015)
2. Nguyen, V.: Constructing force-closure grasps. Int. J. Robot. Res. 7, 3–16 (1988)
3. Ferrari, C., Canny, J.: Planning optimal grasps. In: IEEE International Conference on Robotics and Automation, pp. 2290–2295 (1992)
4. Li, Z., Sastry, S.: Task-oriented optimal grasping by multifingered robot hands. IEEE J. Robot. Autom. 4, 32–44 (1988)
5. Pollard, N.: Parallel methods for synthesizing whole-hand grasps from generalized prototypes. Technical report AI-TR 1464 (1994)
6. Liu, S., Carpin, S.: A fast algorithm for grasp quality evaluation using the object wrench space. In: IEEE Conference on Automation Science and Engineering, pp. 558–563 (2015)
7. Borst, C., Fisher, M., Hirzinger, G.: Grasp planning: how to choose a suitable task wrench space. In: IEEE International Conference on Robotics and Automation (2004)
8. Berenson, D., Diankov, R., Nishiwaki, K., Kagami, S., Kuffner, J.: Planning in complex scenes. In: IEEE-RAS International Conference on Robotics and Automation (2007)
9. Saut, J., Gharbi, M., Cortés, J., Sidobre, D., Siméon, T.: Planning pick-and-place tasks with two-hand regrasping. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (2010)
10. Vahrenkamp, N., Asfour, T., Dillmann, R.: Simultaneous grasp and motion planning. IEEE Robotics and Automation Magazine (2012)
11. Vahrenkamp, N., Przybylski, M., Asfour, T., Dillmann, R.: Bimanual grasp planning. In: IEEE-RAS International Conference on Humanoid Robots (2011)
12. Rusu, R., Cousins, S.: 3D is here: Point Cloud Library (PCL). In: IEEE International Conference on Robotics and Automation (ICRA) (2011)
13. Marton, Z.C., Rusu, R., Beetz, M.: On fast surface reconstruction methods for large and noisy datasets. In: IEEE International Conference on Robotics and Automation (2009)
14. https://grabcad.com. Accessed February 2017
15. https://123dapp.com. Accessed February 2017
16. Vahrenkamp, N., Kröhnert, M., Ulbrich, S., Asfour, T., Metta, G., Dillmann, R.: Simox: a robotics toolbox for simulation, motion and grasp planning. In: Intelligent Autonomous Systems, vol. 12, pp. 585–594 (2013)

Application of Robotics in Shipbuilding

Development of a Customized Interface for a Robotic Welding Application at Navantia Shipbuilding Company

Pedro L. Galindo1(✉), Arturo Morgado-Estévez1, José Luis Aparicio1, Guillermo Bárcena1, José Andrés Soto-Núñez2, Pedro Chavera2, and Francisco J. Abad Fraga2

1 Universidad de Cádiz, Avda. Universidad de Cádiz, nº 10, 11519 Puerto Real, Cádiz, Spain
[email protected]
2 Navantia S.A., S.M.E., Astillero Bahía de Cádiz, Polígono Astillero s/n, 11519 Puerto Real, Cádiz, Spain

Abstract. In this paper, a customized interface developed in the framework of the ROBOT FASE II project is described. This project aimed at improving the productivity of two FANUC ARC Mate 100iC MIG welding robots with R-30iA controllers mounted on an 8 meter-high mobile gantry crane at the Navantia company in Puerto Real, Spain. The solution designed for the welding application by the University of Cadiz consists of four parts: (1) a library of piece templates including the relevant information for each piece to be welded, i.e., the typical piece geometry (shape and dimensions) as well as all parameters needed for welding (sequence, intensity of the arc, waving description, ...), optimized by a team of professional welders to obtain a perfect result; (2) a coordinate measuring arm used to capture 3D information from the real world; (3) software to automatically generate the optimized FANUC welding program using both the template and the 3D information captured by the arm, adapting the template to the real-world coordinates and orientation; and (4) an FTP interface to transmit the optimized welding program to each robot for immediate welding operation. The use of this solution for welding operation has reduced robot programming time from hours to minutes for a typical structure, allowing an important increase in productivity at the Navantia company.

Keywords: Industry 4.0 · Robotics · Shipbuilding · Welding · Shipyard 4.0

1 Introduction

Through robotics, the value of human labor is increased, resulting in greater economic return for businesses and higher wages for workers. Automated technology and robotic systems are increasingly important in modern shipbuilding manufacturing industries. Robots for welding, painting, heavy lifting and other tasks are helping to increase productivity at shipyards, while sparing human workers the most dangerous and thankless tasks. Navantia is a Spanish state-owned shipbuilding company [1], serving both the military and civil sectors. It is the first shipbuilding company in Spain, the fifth largest



shipbuilder in Europe, and the ninth largest in the world. By size and technological capability, it occupies a leading role in European and worldwide military shipbuilding, with experience in building technologically advanced ships (frigates, amphibious ships, patrol vessels, submarines, ...) for different navies. Robotics in the shipbuilding industry is neither a new [2] nor an easy task. Traditionally, the shipbuilding industry has relied on the labour of a large number of skilled workers, who worked for many hours in dangerous and highly demanding tasks such as welding, cutting and painting. However, with the intensive use of robotic applications, Asian countries such as South Korea, Japan and China have greatly reduced manufacturing time and labour costs, becoming leaders of the market [3]. The reason for this is the specialization of Asian companies, offering a narrow set of products that are highly competitive and manufactured in an impressively short period of time. To become competitive, European companies should offer the best product of the highest quality at a competitive price. This goal is achieved in Navantia thanks to its constant commitment to innovation and technology. Navantia has a powerful technical office and makes extraordinary efforts in Research and Development with the aim of always offering the latest products and services. The innovative spirit that reigns within the company leads not just to foreseeing the future needs of clients but also to getting a step ahead with original projects. It is precisely in this spirit that Navantia decided to consider robotics technology for welding more than 5 years ago. The Robot Fase I project, developed in 2012, allowed Navantia to acquire, as shown in Fig. 1, two FANUC ARC Mate 100iC MIG welding robots with R-30iA controllers mounted on an 8 meter-high mobile gantry crane at the Navantia facilities in Puerto Real, Spain. However, using this equipment for production purposes was challenging, as the programming process was difficult, time-consuming, and quite expensive in terms of highly specialized man-hours.

Fig. 1. Welding robotic structure: two FANUC ARCMate 100iC MIG welding robots with R-30iA controllers mounted in an 8 meters-high mobile gantry crane at Navantia company.


The Robot Fase II project was developed in 2016 in collaboration with the University of Cádiz. In this project, the main objective was the improvement of productivity and programming speed using the available infrastructure. The complete solution, described in the next sections, was originally conceived and undertaken by two research groups at the University of Cadiz with more than 20 years of experience in Industry 4.0 related projects (in areas such as Robotics, Artificial Intelligence, Image Processing, Data Analysis, Simulation, ...): (i) Intelligent Systems and (ii) Applied Robotics. The Intelligent Systems research group took care of all software-related questions, and also designed the software for calibration and automatic generation of robot code. The Applied Robotics research group was in charge of all hardware-related tasks, such as robot manipulation, maintenance and training.

2 Robotic Welding Solution Description

The shipbuilding process is mainly divided into design, cutting, welding, assembling, grinding, blasting, and painting processes [4]. Among these manufacturing processes, welding is the most crucial, expensive, and time-consuming one. For that reason, the main objective of the project was the use of a robotic arm for welding purposes in a production environment. This could yield increased productivity and an improvement in the working conditions of company workers, greatly reducing their work-related musculoskeletal disorders [5–7]. Today, welding robots are vital to shipbuilding for their economic feasibility, production effectiveness, speed, and level of detail. The problem with the use of robotics in shipbuilding is robot programming [8–10]. Usually, programming an industrial robot takes a long time and is only suitable for jobs consisting of the repetition of a monotonous motion, therefore applying to identical pieces (same size, shape and orientation). But, in shipbuilding, workpieces are not identical. What is more, they are usually so large that their dimensions are prone to large errors (in the order of a few millimeters in the best case), due not only to accumulated manufacturing errors but also to unavoidable causes, such as metal dilatation.

2.1 System Description

The solution to the original problem had some requirements to be fulfilled. The main requirement was that the provided solution should significantly reduce the time required to program the robot in a typical production situation. A second requirement was that the workpieces to be welded should be considered of variable size. While the shape among different pieces was considered to be nearly constant, differences in the dimensions across the three directions of space could vary significantly. Three different solutions were considered to meet these requirements: (i) finding a commercial solution, (ii) developing a completely automatic solution, and (iii) designing a semiautomatic process capable of reducing the required programming time. While some commercial solutions were available for different purposes, none of them had enough flexibility to be adapted to the available robotic system. What is more,


the flexibility needed due to the different shapes and sizes reduced the options to a few companies. Some of them provide solutions for fully automatic welding of open blocks (profiles, T-bars and girders) or determine the welding jobs in their working area by means of an integrated logic system. However, the system design is not open, and new block shapes or structures always mean a dependency on the supplier for new developments, which we tried to avoid. On the other hand, other companies offer different solutions for programming work pieces in place (shipbuilding solutions, panel structures) or manipulated work pieces (any product). However, their price and the difficulty of programming welding sequences for new workpieces, in order to expand the system without the help of the supplier, motivated us to discard this option as well. The completely automatic solution was quickly discarded, as the company was interested in a solution working in the workshop, not only in the laboratory, compatible with a huge number of workpiece shapes and sizes and easily adaptable to new structures. The high complexity of the task and the absence of such solutions in the market convinced us to find a solution in the middle, considering an ad-hoc semiautomatic welding process developed entirely by the University of Cadiz and Navantia. Once the semiautomatic process was defined, the system was conceived in three parts. First, a 3D acquisition system was required to capture the real dimensions of the pieces to be welded. Secondly, and given that an enormous range of shapes and sizes was possible for the workpieces, a library of templates was defined, including the shapes most frequently welded in the typical panel lines of a shipyard. Templates had to be designed for each different workpiece, but their configuration should be independent of the piece dimensions up to a given tolerance limit. Then, in a real operation, the worker should select the appropriate template according to the desired shape and collect the real dimensions from 3D space using the 3D acquisition system in real time, and the software should automatically combine the information from both sides (template and 3D real-world coordinates) to generate a welding program in the FANUC language to be loaded into the robot. Finally, the program should be transmitted to the robot in an automatic way.

2.2 3D Acquisition System

There were many alternatives for acquiring 3D information. Both mechanical and optical systems were considered, including coordinate measuring arms, trackers, optical systems, laser and/or structured-light scanners and stereo cameras. The accuracy of the final configuration would be the sum of the inherent error of the chosen technology, the alignment errors in transferring the coordinate system of the 3D capture to the robot coordinate system, and the measuring errors due to human intervention. After a series of testing experiments using different equipment solutions in a real-world environment, we decided to use a coordinate measuring arm, as the accuracy of this solution was in the order of microns, the price was extremely competitive compared to other alternatives, and the interface with the ad-hoc software was guaranteed by the use of the corresponding software development kit (SDK). The SDK would allow us to


easily capture 3D coordinates when the worker pressed the trigger of the arm. These coordinates are referred to the arm's own internal coordinate system and have to be rotated and translated, which can be done with a simple matrix multiplication using homogeneous coordinates [11]. The analysis of the market for coordinate measuring arms was quite exhaustive, and several brands and distributors were considered (ROMER, FARO, NIKON, ECHO Arm, HEXAGON, Kreon, RPS, CIMCore, etc.). Most of the requirements for the coordinate measuring arm (accuracy, light weight, number of degrees of freedom, possibility to add a scanner, ...) were fulfilled by all suppliers, but the ROMER arm length was unmatched and, given the application for which it was going to be used, a 4.5-meter ROMER coordinate measuring arm was chosen. The ROMER Absolute Arm features absolute encoders (there is no need for referencing; when the arm is turned on, it is ready to go), versatility, portability, stability, light weight and the possibility to attach a high-performance laser scanner. What is more, it does not require warm-up time or initialization, thanks to a stable carbon fiber structure and industry-leading absolute encoders.
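As an illustration of the coordinate change mentioned above (the 4x4 matrix is assumed to come from the calibration described in Sect. 3.2; the values are not Navantia's actual ones), a minimal sketch:

```python
import numpy as np

def to_robot_frame(points_arm, T_robot_from_arm):
    """Transform (N, 3) points from the measuring-arm frame to the robot frame
    using a 4x4 homogeneous transformation matrix."""
    homogeneous = np.hstack([points_arm, np.ones((len(points_arm), 1))])   # (N, 4)
    return (homogeneous @ T_robot_from_arm.T)[:, :3]
```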

Fig. 2. Schematic description of the proposed welding equipment configuration. 3D coordinate measuring arm is used to capture 3D information, while welding robots use this information in a continuous welding operation.

This approach has several advantages. When the robot is welding, the worker may start taking coordinates of the adjacent piece. This is extremely important, as the worker can advance the 3D coordinate measuring process while keeping out of the robot working range. As a consequence, the robot may stay welding 100% of the time, as welding is much more time consuming than coordinate measuring. In fact, as welding usually takes one order of magnitude more time than measuring (including arm


movement, anchorage and workpiece measuring), this makes it possible for a single worker to operate several robots at the same time. Therefore, using this approach and taking into account the use of a coordinate measuring arm, the final configuration adopted in the ROBOT FASE II project was the one shown in Fig. 3.

Fig. 3. Schematic description of the proposed welding solution. A coordinate measuring arm is used to capture 3D information. The software adapts a previously chosen template using real world coordinates and generates the robotic program.

3 Software Structure

The software has been developed entirely by the Intelligent Systems and Applied Robotics research groups of the University of Cadiz, in collaboration with the Navantia company and continuously supported by its most skilled and experienced welding workers. The software is structured into four parts:

1. A library of templates
2. An automatic generator of FANUC code
3. A friendly Graphical User Interface to have access to all functions and easily manipulate the whole system
4. Other software components:
   • An interaction layer to the SDKs from the ROMER and FANUC companies
   • An FTP interface to send the generated program to the robot
   • ROBOGUIDE-WeldPro simulation software

3.1 Library of Templates

The library of templates consists of a collection of packages of highly specialized information for each piece. The gathering of this information was made with the help of highly skilled professional welders, as it is the core of the system, being responsible for


the quality of the whole system. The information stored in each template consists of three parts: the geometry, the welding sequence, and the welding parameters for each section. The geometry (shape and theoretical dimensions) is stored in a CAD file using the OBJ format. This file is used not only for the generation of the corresponding program, but also for visualization purposes in the user interface, as shown below. The welding sequence is quite important as it effectively controls distortion. It consists of a series of steps that define the order in which the welds are made. It strongly depends on the piece shape and dimensions, and is supplied by the skilled welder. Welding parameters are also very important, but in this case they directly affect the quality of the welding at the end of the process. They form a complex set of parameters defining the process for each section of the welding, including the torch orientation and speed, arc intensity, waving amplitude, waving frequency, waving orientation, supplied material, material, etc.

3.2 Automatic Generator of FANUC Code

Once the 3D coordinates of the real-world piece are available, the chosen template is modified accordingly. This process uses all the template information and creates a new program specially fitted to the real-world piece. The generation of the FANUC code is not easy, and it is divided into several sections: initial positioning, approach, welding, separation, and go to home. Calibration is made by touching, with the coordinate measuring arm, four points previously identified and referenced in the robot. These points have been chosen to be in a position similar to the vertices of a tetrahedron in order to maximize the accuracy of the transformation [12]. A relevant feature included in the final version of the software is the orientation handling. The system uses the captured 3D points to estimate the orientation of the piece in 3D, making it possible to weld similar pieces in any orientation in space, as the angle of the piece with respect to the theoretical piece stored in the template library is calculated and properly considered in the welding program.

3.3 Graphical User Interface

In this project, the design of the Graphical User Interface (GUI) was very important, as it is intended to be user friendly [13]. It has been developed in Lazarus, a cross-platform IDE for Rapid Application Development [14]. It has a variety of components ready to use and a graphical form designer to easily create complex GUIs. In this project, the Castle Game Engine was used for 3D visualization of the OBJ files, allowing the visualization code to be quite simple and reduced in length. The first advantage of the developed software (in this case it was the main reason to choose the Lazarus environment) is that all the code, including libraries, graphical interfaces, etc., is encapsulated in a single executable file, making it quite easy to update, as copying the new executable file with the same name is all that is needed. The software is quite intuitive, as shown in Fig. 4.
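Relating to the calibration step described in Sect. 3.2 above, the rigid transformation between the measuring-arm frame and the robot frame can be estimated from the four referenced points with a least-squares (Kabsch-style) fit; the sketch below is an illustration under that assumption, not the project's actual routine.

```python
import numpy as np

def rigid_transform(points_arm, points_robot):
    """Estimate the 4x4 homogeneous matrix T mapping arm-frame points onto the
    corresponding robot-frame points (both inputs are (4, 3) arrays)."""
    ca, cr = points_arm.mean(axis=0), points_robot.mean(axis=0)
    H = (points_arm - ca).T @ (points_robot - cr)   # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, cr - R @ ca
    return T
```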


Fig. 4. Main window of the Graphical User Interface. On the left side, the template tree is shown. In the middle of the screen the chosen template can be observed in 3D. On the right-hand side, the message window can be observed. Three main buttons are present: one on the left to load the template, one in the center to capture coordinates using the coordinate measuring arm, and one on the right to send the resulting FANUC code to each robot.

The main window is easy to navigate, as it is designed to be used by the worker through a single window, consisting of (i) a template section to choose the desired template; (ii) a visualization area to show the chosen piece in 3D, which can be shifted, rotated, and zoomed in and out quite easily using the mouse; (iii) a message window to give the user feedback on the evolution of the process; and (iv) three main buttons, one to load the template, another to capture the real 3D coordinates with the coordinate measuring arm, and the last one to send the automatically generated code to the FANUC robot using FTP. The software is quite efficient in the sense that it is able to work with complex 3D pieces on an ordinary computer, taking only a few seconds to generate the FANUC code at the end of the capture process. Finally, the main advantage is that the software has been entirely developed by the University of Cadiz, and therefore any modification or improvement needed in the future may be implemented quite easily.

3.4 Other Software Components

The rest of the software components are intended to communicate with the ROMER arm and the FANUC robot. In the first case, an SDK has been available to interface the software with the ROMER arm, capturing the 3D coordinates and accessing them in real time. The capture is triggered by the user by manually activating the rugged and accurate touch-trigger probe of the ROMER arm.


In the case of the FANUC robot, two software packages have been used. The first one is a FANUC SDK giving access to all parameters of the robot (especially 3D coordinates and robot joint angles) in real time. The other one is an FTP interface, which was used to send the generated code to each of the robots. It is worth mentioning that the software is operated by the worker remotely, using a remote-desktop application (TeamViewer) and a ruggedized tablet connected via Wi-Fi. Finally, in the development of the templates, the ROBOGUIDE-WeldPRO programming tool was used (see Fig. 5), as this tool allowed us to simulate the two FANUC ARC Mate 100iC arc welding robots in 3D space. It proved to be extremely accurate and easy to use, and very useful for development and testing purposes.
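The FTP step can be as simple as a binary upload of the generated program file to the robot controller. The sketch below uses Python's standard ftplib; the host, credentials and file names are hypothetical, since the actual controller settings are not given in the paper.

```python
from ftplib import FTP

def upload_program(host, user, password, local_path, remote_name):
    """Upload a generated welding program to a robot controller over FTP."""
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        with open(local_path, "rb") as program:
            ftp.storbinary(f"STOR {remote_name}", program)

# Example call (illustrative values only):
# upload_program("192.168.0.10", "robot", "secret", "weld_job.ls", "WELD_JOB.LS")
```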

Fig. 5. ROBOGUIDE design with two FANUC ARC Mate 100iC robots, strongly resembling the real-world configuration at the Navantia company

4 Conclusions

In this paper, the development of a customized interface for a robotic welding application at the Navantia shipbuilding company, in the framework of the ROBOT FASE II project, has been described. This software allows the reduction of the programming time from hours to a few seconds, even for a non-experienced worker. The performance improvement in the welding process is quite remarkable. The system has been designed with a user-friendly Graphical User Interface and can be quite easily updated, improved and expanded. The FANUC ROBOGUIDE programming tool has been used to simulate the real environment and has been used for development and testing purposes. Future work will be devoted to replacing the coordinate measuring arm with an artificial vision system in order to automatically capture the 3D real-world environment.


References

1. Navantia Homepage. http://www.navantia.es. Last accessed 10 July 2017
2. Reeve Jr., R.C., Rongo, R.: Shipbuilding robotics and economics. In: NSRP Ship Production Symposium on Commercial Competitiveness for Small and Large North American Shipyards, pp. 25–27 (1995)
3. Mickeviciene, R.: Global shipbuilding competition: trends and challenges for Europe. In: Pachura, P. (ed.) The Economic Geography of Globalization (2011). http://www.intechopen.com/books/the-economic-geography-of-globalization/global-shipbuildingcompetitiontrends-and-challenges-for-europe
4. Kim, C.-S., Hong, K.-S., Han, Y.-S.: Welding robot applications in shipbuilding industry: off-line programming, virtual reality simulation, and open architecture. https://www.intechopen.com/books/industrial_robotics_programming_simulation_and_applications/welding_robot_applications_in_shipbuilding_industry__off-line_programming__virtual_reality_simulatio. Last accessed 10 July 2017
5. Keefe, J.: Shipconstructor drives automatic welding robots. Mar. News Mag. 28(2), 24–27 (2017)
6. Ogbemhe, J., Mpofu, K.: Towards achieving a fully intelligent robotic arc welding: a review. Ind. Robot Int. J. 42(5), 475–484 (2015)
7. Shi, L., Tian, X.C., Zhang, C.H.: Automatic programming for industrial robot to weld intersecting pipes. Int. J. Adv. Manuf. Technol. 81(9), 2099–2107 (2015)
8. Ku, N., Ha, S., Roh, M.-I.: Design of controller for mobile robot in welding process of shipbuilding engineering. J. Comput. Des. Eng. 1(4), 243–255 (2014)
9. Larkin, N., Short, A., Pan, Z., van Duin, S.: Automated programming for robotic welding. Trans. Intell. Weld. Manuf. 1(1), 48–59 (2017)
10. Wu, G., Wang, D., Dong, H.: Off-line programmed error compensation of an industrial robot in ship hull welding. Lecture Notes in Artificial Intelligence, vol. 10463, pp. 135–146 (2017)
11. Bez, H.E.: Homogeneous coordinates for computer graphics. Comput. Aided Des. 15(6), 361–365 (1983)
12. Garner, L.E.: Outline of Projective Geometry. Elsevier Science Ltd., Amsterdam (1981)
13. TechRepublic. http://www.techrepublic.com/blog/10-things/10-things-that-makesoftware-user-friendly/. Last accessed 10 July 2017
14. Lazarus Homepage. http://www.lazarus-ide.org. Last accessed 10 July 2017

Towards Automated Welding in Big Shipbuilding Assisted by Programed Robotic Arm Using a Measuring Arm

Arturo Morgado-Estevez1(✉), Pedro L. Galindo1, Jose-Luis Aparicio-Rodriguez1, Ignacio Diaz-Cano1, Carlos Rioja-del-Rio1, Jose A. Soto-Nuñez2, Pedro Chavera2, and Francisco J. Abad-Fraga2

1 Faculty of Engineering, University of Cadiz, Av. de la Universidad de Cádiz, nº 10, 11519 Puerto Real, Cádiz, Spain
[email protected]
2 Navantia S.A., S.M.E, Bay of Cádiz Shipyard, Polígono Astillero, s/n, 11519 Puerto Real, Cádiz, Spain

Abstract. This paper presents an automated robotic welding system adapted for shipbuilding in large shipyards. This solution has been devised at the Navantia shipyard located in the south of Spain, in the context of the ROBOT FASE II R&D project. The paper also presents the human teams that have developed this welding system. The article explains the three parts that make up the welding system. The first part details the location of the FANUC robotic welding arm; in addition, the gantry and the spatial coordinate axes in which the robotic arm is housed are described. The second part covers the system that captures the spatial coordinates of the singular points to be welded; these points are measured with a portable measuring arm. The last part is the communication system between the different components through the computer, which is responsible for synchronizing the measuring arm and the robotic welding arm by translating the points to be welded.

Keywords: Robotic welding · Shipbuilding · Automated welding · Fanuc robotic arm · Portable measuring arm · Industry 4.0 · Shipyard 4.0 · Naval industry

1 Introduction

Nowadays, it is necessary to increase the automation and robotization of the naval sector with the aim of increasing productivity and, at the same time, becoming more competitive in the market [7, 13]. Currently, the welding process occupies a large portion of shipbuilding construction; more than 65% of all welding processes in shipbuilding have been automated, both in the horizontal and in the vertical position [1]. There are several sources of variability in robotic arc welding. Firstly, the accuracy of component manufacture and assembly involves variability. Thus, attention must be given to parts manufacturing processes, assembly tooling and quality control to ensure that dimensional tolerances on assemblies for robot welding can be maintained within acceptable limits.


Secondly, the location of assemblies on the work manipulator leads to variability. Additional holding fixtures and a precise fixture manipulator location system are required, in the manner of those commonly used in NC machining systems. An acceptable level of repeatability is often the major system specification requirement for robot arc welding facilities. As with manual welding, careful attention must be given to weld sequencing [8]. The need for autonomous welding tasks has increased recently in shipyards to improve productivity. Autonomous welding tasks using robots have been used in many applications for more than 20 years. However, the robots work only at a fixed location, and a crane is usually used to move the robots from one working location to another [9]. A survey of the late 1980s revealed that the borrowed technology and know-how embodied in imported robots played a crucial role in improving competitiveness, by upgrading the quality of, and confidence in, the products and by increasing the flexibility of the production lines. In short, developing countries that had otherwise been stagnant in technology could step up to produce products of the same quality as the advanced countries [12]. In the 1990s there were several robotized solutions. The robot of Weston in 1989 was a very attractive robotic welding system because of its robustness and flexibility. Skjolstrup and Ostergaard in 1994 achieved the next step in the technological advance of shipbuilding [2]. In addition, in an effort to increase the productivity of the welding process and to reduce the impact of industrial accidents on workers, research in the field of welding automation has been sharply increasing in the shipbuilding industry. DSME (Daewoo Shipbuilding and Marine Engineering) developed and successfully applied a welding robot system for the grand assembly line in the OKPO shipyard in 1997 and has recently developed a multi-welding robot system for the sub-assembly line [11]. In the 1990s it was also possible to find a ship welding robotic system (SWERS), a robotic solution that was easy to use, that is, one that an unskilled worker could operate out of "common sense". This robot was not autonomous; it could be taught directly by the workers. The project was a collaboration between the National University of Singapore, the Gintic Institute of Manufacturing Technology and the Singapore Productivity and Standards Board [3] (Fig. 1). At the beginning of the 21st century, the Industrial Automation Institute (IAI) in Spain developed for "Astilleros Españoles S.A." a robotic welding system called "ROWER 1" that could be used in a closed block. The robot moved like a spider and had four legs capable of extending and contracting. It could move autonomously [4] (Fig. 2).


Fig. 1. Robot SWERS

Fig. 2. Robot ROWER 1

In 2011, the RRXC [1] appeared, which was composed of a 6-axis modularized controller and an auxiliary transportation device. The entire cross section of the RRXC is small enough to be placed inside double-hulled structures via a conventional access hole of 500 × 700 mm2 from the exterior of the structure. This robot was also developed in its second and third versions (RRX2 and RRX3), with more improvements and features as well as more axes [10]. This research was supported in part by Daewoo Shipbuilding and Marine Engineering (DSME) of the Republic of Korea [5, 6]. More recently, other welding robotic solutions can be found, based on autonomous robots with six-axis manipulators and multiple welding and movement types.

2 Robotic Welding System Architecture

The robotic welding system architecture shown below has the elements and interconnections needed to accomplish the robotic welding process. This architecture contains three main subsystems:

• The robotic group system
• The measuring arm system
• The computer system

The robotic group system is formed by a blue gantry and two robotic groups attached to it. This system is in charge of controlling and monitoring the welding process carried out by the robotic arms. These two robotic groups contain a controller that manages all the signals and movements performed by them: a 'group one' or 'yellow group', formed by a FANUC robot capable of moving around a limited workplace, and a 'group two' or 'orange group', which is able to move the 'yellow group' and extend its motion capabilities. This robotic group also has a welding group in charge of the welding process (Fig. 3).

Fig. 3. Robotic welding system architecture

The architecture also includes a computer system in charge of controlling and monitoring the robotic group system and the measuring arm group. The measuring arm group is responsible for taking the coordinates of points in order to compute the path to be welded right afterwards.


2.1 Joint Research Group Navantia-University of Cadiz

This automated robotic welding system has been developed jointly by the R&D group of the Navantia company and two research groups from the University of Cadiz. The research group at Navantia is composed of an expert welder in automated welding supervised by the workshop manager. They have been in charge of carrying out all the operational tests needed to put the system into production. The test pieces and the homologation of the operation of the welding system have been carried out by the members of Navantia. The most important contribution of the Navantia research group has been the experience and knowledge of the welding parameters that have been programmed into the robot. The Applied Robotics research group has worked on the interconnection of the communication system between the coordinate system of the robotic arm and the computer. In addition, it has interconnected the measuring arm and the computer wirelessly via Wi-Fi. The group has collaborated in the performance tests to integrate the system into the production line. The most important contribution of the Applied Robotics research group has been the interpretation of all the low-level orders generated by the robots and sent to the computer; in this way, total control over the robotic welding system has been achieved. The Intelligent Systems research group has developed all the software needed for the synchronization between the robotic arm and the measuring arm. The group has developed a new algorithm that improves the accuracy of the robotic arm and the 'group two'. In addition, a new concept of translation of the coordinate system has been devised by this research group, obtaining weld beads with an accuracy of less than 1 mm. This joint research group between Navantia and the University of Cadiz has reinforced knowledge in automated robotic welding systems applied to the naval industry, setting new challenges for future work within the company. The members who have participated in the development of the automated robotic welding system appear in the following photo.

2.2 Description of Robotic Arm Coordinate Systems

The following figure shows the structure of the entire welding system. The blue gantry is situated in a big workshop at Navantia. It is composed of two systems of three axes: the 'group one' or 'orange group', and at the end of every 'group one' there is a 'group two' (or 'yellow group') where the robotic arm is situated. The three coordinate axis systems are described below.

Blue gantry: In Fig. 4, we can see the big gantry. It allows movement on the X-axis. The gantry is moved manually by an operator. The controller is located at the base of the gantry. It allows powerful movements, but it is not very accurate. Before starting the procedure for automatic welding, the gantry must be positioned close to the area to be welded.


Fig. 4. Joint research group Navantia-University of Cadiz

'Group two' (orange): In Fig. 5, the 'orange group' can be distinguished. It permits movements in three spatial directions (X, Y, and Z) with full precision.

Fig. 5. The entire welding system.


'Group two' is automatically moved by the FANUC robot control system. In the automated welding sequence, this group intervenes to reach a greater distance at the time of welding. The X-axis can be moved 2 m, the Y-axis 10 m, and the Z-axis 6 m. The accuracy of this system of coordinate axes is less than 1 mm. 'Group two' is sufficiently accurate for arc welding (Fig. 6).

Fig. 6. ‘Group one’ and ‘Group two’

Fig. 7. ‘Group one’


'Group one' (yellow): The FANUC ARC Mate 100iC/12 robot has 6 DoF; it permits extremely precise movements in three spatial directions (x, y and z) and can give the tool any orientation (w, p and r). The robot is mounted inverted at the end of the Z-axis of 'group two'. The 30iB robot controller is mounted above the gantry, next to the Lincoln Electric welding system. This group has more resolution than 'group two'; the FANUC robot has a precision greater than 0.08 mm. 'Group one' is connected to the computer via an RJ45 connector. The computer transfers the program code to the robot's console (TPE), and the program is executed from the TPE. Welding 'group one' is shown in the following figure (Fig. 7). The following table summarizes the technical characteristics of the automated robotic welding system (Table 1).

Table 1. Features of the automated robotic welding system.

Parts                DoF num.  Axes              Range (m)  Precision (mm)
Blue gantry          1         X                 30         >20
Group two (orange)   3         X, Y, Z           2, 10, 6   >1
Group one (yellow)   6         x, y, z, w, p, r  1.4        >0.08

2.3 System for Picking up Singular Points

The portable measuring arm is a key component of the robotic welding system. The main purpose of this measuring system is to take 3D points following a welding template loaded in the software control system. This measuring arm has a number of features that make it suitable to be used as part of the robotic welding system in the shipyard. To start with, the arm has a magnetic base, which makes it easy to place in a metallic environment. The operator can take points in a specific place and move the measuring arm to another one while the robotic system is executing the welding routine. Furthermore, the arm includes a battery and a spare one, giving the system even more flexibility, since it is not necessary to have wires connected. The measuring arm installed is a high-precision 7-DoF arm with a length of 4.5 m, which makes it able to reach a great variety of distant and difficult-to-access points in the workplace. Besides, this arm has three different types of probes, which helps the operator take points in different locations and with different access (Fig. 8). The communication system between the control software installed in the computer and the measuring arm can be wired or wireless; in this case, wireless communication is used in order to give the system more flexibility. The arm comes with a specific communication protocol that manages the messages sent to and received from the computer wirelessly and gives the developer an easy way to control and monitor all the signals and the points taken.


Fig. 8. Portable measuring arm

2.4 Computer System

The computer system has an important role in the whole system. It is the part in charge of managing the robotic welding system, and it has the software that controls the welding process installed. This computer has connections with different systems. It is connected to a local network to remotely control and monitor all the signals from the robotic arm ('yellow group') and the 'orange group'. The computer communicates via Wi-Fi or USB with the portable measuring arm in order to get the points taken and to monitor different signals from it. In addition, the computer has an associated tablet; this tablet is connected via Wi-Fi and replicates the whole computer system. This helps the operator stay in contact with the computer and operate the control software at any moment and anywhere in the workspace, as being in front of the computer during the complete welding process is not needed. On the other hand, the computer system has specific software installed, developed to carry out the robotized welding process in an automatic way. This specific software has five different modules:

• Loading module
• Template visualization module
• Creation and modification of views module
• Capture points module
• Communication module

The first module is in charge of loading the programmed templates that will be used in the subsequent capture of points with the measuring arm. Afterwards, the software asks the operator to capture points following the welding path from the loaded template. Once all the


points are captured, the software generates a program. This program is transmitted via Wi-Fi from the computer system to the controller of the robotic system ('yellow group' and 'orange group'); finally, the operator supervises the program and runs it (Fig. 9).

Fig. 9. Communication with the computer system

With regard to the communication module, it is important to note that this module allows the user to establish a connection between the software integrated in the computer system and the two robotic systems via FTP. The robotic group can send the computer system information such as the list of programs loaded in the robot, the state of the signals, or the welding parameters loaded, and the computer system can upload newly generated programs to the robot.
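As an illustration of this kind of exchange (the host and credentials below are hypothetical, and the remote directory layout is an assumption rather than a documented controller setting), the program list mentioned above could be retrieved over FTP with Python's standard ftplib:

```python
from ftplib import FTP

def list_robot_programs(host, user, password):
    """Return the file names visible in the controller's FTP root (e.g. loaded programs)."""
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        return ftp.nlst()

# Example call (illustrative values only):
# print(list_robot_programs("192.168.0.11", "robot", "secret"))
```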

3 Conclusions

This paper has presented an automated robotic welding system developed by the Navantia R&D team and the Applied Robotics and Artificial Intelligence research group of the University of Cadiz. The Applied Robotics research group of the University of Cadiz has mainly been in charge of developing and managing all the communications between the computer system installed and the physical components of the architecture forming the robotic welding system and the measuring system. The Intelligent Systems research group has developed the control software to manage the whole welding process. In addition, the Navantia research group has supported and guided all research activities thanks to its experience in naval welding processes. The system takes full advantage of the infrastructure of one of Navantia’s workshops, formed by a big blue gantry and two orange motion groups attached to it, each of them holding a robotic arm.



Cognitive Architectures

Cybersecurity in Autonomous Systems: Hardening ROS Using Encrypted Communications and Semantic Rules

Jesús Balsa-Comerón1, Ángel Manuel Guerrero-Higueras1, Francisco Javier Rodríguez-Lera2, Camino Fernández-Llamas1, and Vicente Matellán-Olivera1

1 Research Institute on Applied Sciences in Cybersecurity (RIASC), Universidad de León, Av. de los Jesuitas s/n, 24008 León, Spain
[email protected]
2 Computer Science and Communications Research Unit (CSC), University of Luxembourg, Luxembourg City, Luxembourg

Abstract. Cybersecurity in autonomous systems is a growing concern. Currently, most research robotic systems are built using the ROS framework, along with other commercial software. The goal of this paper is to improve ROS security features by using encrypted communications and semantic rules to ensure a correct behavior. To encrypt communications, the Advanced Encryption Standard algorithm has been applied. Then, the ROSRV framework has been used to define semantic rules for ROS messages. In order to test this proposal, two experiments have been carried out: in the first one, plain-text messages are not allowed and must be blocked; in the second one, rules for detecting denial-of-service attacks are tested against a real attack performed on a Real-Time Locating System used by a mobile robot to estimate its location.

Keywords: Autonomous systems · Cybersecurity · Robotics · ROS

1 Introduction

Several vectors of attack have been described for robotic systems [5]. This paper is focused on the cybersecurity problems associated with mobile service robots. In particular, we are concerned about the problems associated with the middlewares that are becoming widespread as development frameworks. Robot Operating System (ROS) [9] has become the most popular framework for developing robotic applications. It started as a research project, but currently most manufacturers of commercial platforms use ROS as the de facto standard for building robotic software. Just in July 2016, ROS binary packages were downloaded 8,441,279 times [1], counting only downloads from the main repository in the official site, not the numerous mirrors around the globe.


ROS is basically a set of libraries for robotics similar to operating system services, providing hardware abstraction for sensors and actuators, low-level device control, and inter-process communication. The ROS framework is a message-passing distributed system. Computation takes place in processes named nodes, which can receive and send messages. Nodes publish messages into information buffers called topics. For instance, a node can be in charge of accessing a sensor, processing the information, and publishing it as an information structure on a named topic. Another node can subscribe to this topic, that is, read its information. This process can then make a decision about the movement of the robot. Next, this node will publish the commands in another topic to send them to the motors. ROS nodes can run in the same computer or in different ones. A usual configuration is composed of at least one ROS Master and some clients. The ROS Master is the key element in the ROS system. It runs as a name service and manages registration information about all topics used by nodes. The Master is updated in real time by the running nodes, which provide information about the topics they publish/subscribe to and the type of message used by each topic. Figure 1 describes the conceptual model of ROS, showing the Master with two nodes, Talker and Listener. Node Talker publishes messages of type foo in the topic bar, while node Listener is subscribed to topic bar to receive its messages.

Fig. 1. Conceptual model of ROS topics.
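To make the Talker/Listener example concrete, a minimal rospy sketch is shown below. It is not taken from the paper's code: the topic name follows the bar example above, and std_msgs/String is used in place of the illustrative foo type.

```python
#!/usr/bin/env python
# Minimal sketch of the ROS publish/subscribe model (Python 2 / rospy).
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('bar', String, queue_size=10)   # announce topic "bar"
    rospy.init_node('talker')
    rate = rospy.Rate(1)                                   # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello'))
        rate.sleep()

def listener():
    rospy.init_node('listener')
    # the callback is invoked for every message published on "bar"
    rospy.Subscriber('bar', String, lambda msg: rospy.loginfo(msg.data))
    rospy.spin()

if __name__ == '__main__':
    talker()   # run listener() instead in a second process
```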

Unfortunately, no security was considered in the design of this distributed communication mechanism. Major security problems in the current version of ROS are plain-text communications, unprotected TCP ports, and unencrypted data storage. Thus, ROS poses privacy risks to research organizations, companies and individuals. The plain-text transport of ROS may suffer eavesdropping, spoofing or denial of service (DoS). Intruders may compromise the privacy of final users by accessing camera or sensor readings, or by tracking a person with a compromised ROS-based robot. In particular, when developing assistance robots, the privacy of patients and caregivers is a major concern. The proposal presented here to improve ROS security includes two approaches. First, we apply encryption algorithms to provide a security layer for ROS data. Second, we define semantic rules which provide some meta-control for specific behaviors. For instance, we could define a rule to improve safety by ignoring decisions about the movement of the robot when a person is too close.


A performance evaluation of hardening ROS by encrypting its communications with the Triple Data Encryption Algorithm (3DES) [11] was studied in [8]. In the present work, we apply the Advanced Encryption Standard (AES) to encrypt the messages that selected nodes publish. Subscribing nodes are required to decrypt the messages in order to use the data. The second measure presented in this work is focused on both security and safety. Securing communications is just one dimension of the cybersecurity of autonomous systems. If we want to see these machines working in environments like homes, we need to secure navigation abilities and interaction mechanisms, to avoid manipulated or malicious behaviors and make robots reliable assistants. With this objective in mind, we include semantic rules that control the messages passed between ROS nodes. We use the ROSRV framework [7], which allows safety properties to be monitored and security policies to be enforced. We define two rules as a proof of concept. The first one aims to reject any message that is not encrypted. This rule provides confidentiality, but may also provide integrity and authentication. The second rule is in charge of detecting cyberattacks against Real-Time Locating Systems (RTLSs), which are commonly used by autonomous robots. The rule defined is able to discard the location values given by the RTLS system when it is suffering a Denial of Service (DoS) attack. Two experiments were carried out to test the proposal. The first one consisted of encrypting the messages that a specific node publishes in a topic. Another node subscribes to the topic and deciphers the messages in order to use the data. At the same time, a third node publishes plain-text messages to the same topic. The monitor implemented with the rule for detecting encrypted messages analyses the topic data and blocks the plain-text messages. In the second experiment, a DoS attack is performed against the beacons of an RTLS system. Location messages published by the RTLS node are discarded by the second monitor if they are affected by the attack. The rest of this paper is organized as follows: Sect. 2 describes the systems, tools, and environment used for the experiments, and specifies the procedure followed; Sect. 3 shows the executed tests and discusses the results that can be extracted from the data analysis; Sect. 4 summarizes conclusions and the future work envisioned.

2 Materials and Methods

The experiments were conducted in an indoor mock-up apartment, shown in Fig. 2, located at the Robotics Lab of the University of León (Spain). OrBiOne, a ROS-based robot, was used as the main platform for testing the developed proposal. For the experiment with encrypted messages, a notebook was used for publishing both plain-text and encrypted messages into the topic /chatter, to which the robot was subscribed. Robot and notebook were running Ubuntu 14.04 LTS with ROS Indigo installed, and communication between the two hardware platforms was made via a WiFi connection.


Fig. 2. Floor plan of the Robotics Lab. Red dots show beacon locations. The robot image shows OrBiOne's location during the experiments.

2.1 OrBiOne and KIO RTLS

OrBiOne is an assistant robot developed by Robotnik (http://www.robotnik.es/manipuladores-roboticos-moviles/rb-one/). It has several sensors, such as an RGBD camera, a laser sensor, and an inertial unit. It can operate a manipulator arm attached to its torso, and it has a wheeled base for moving around the room. OrBiOne includes a wireless access point, which allows WiFi communications with other robots or computers. The software that controls the robot hardware is based on the ROS framework.

KIO RTLS is a commercial solution for indoor localization, manufactured by Eliko (http://www.eliko.ee/kio-rtls/). KIO estimates the real-time position of a mobile transceiver, called tag, in either two- or three-dimensional space. The position is calculated using radio beacons, called anchors, that have to be placed at fixed points around the room. Red dots in Fig. 2 show the distribution of six anchors in the Robotics Lab. KIO RTLS has four types of beacons: A, B, C, and D. Each of the six beacons used in the experiments has its own identifier: 408A, 411A, 408B, 412B, 501C and 401D. Thus there are two A-beacons, two B-beacons, one C-beacon and one D-beacon. The tag needs to receive signals from at least one A-beacon, one B-beacon, one C-beacon and one D-beacon in order to calculate a 3D location. If only the 2D location is needed, just the signal of three beacons, each one of a different type, is required to estimate the position. Figure 3 shows a KIO tag placed on the robot OrBiOne. Also, one of the KIO anchors is detailed. This anchor was placed on the ceiling of the mock-up apartment. The size of the anchor has been enlarged in the picture (tags and anchors have the same size).



Fig. 3. OrBiOne robot with KIO RTLS system.

2.2 Encrypting Communications

As mentioned in the Introduction, the performance of hardening ROS by ciphering communications with the 3DES algorithm was evaluated with positive results in a previous work. In this work we have used another well-known symmetric encryption algorithm, AES, recommended by the National Institute of Standards and Technology (NIST) [2]. AES [4] is a symmetric-key block cipher with a variable key length of 128, 192, or 256 bits. It encrypts data blocks of 128 bits in 10, 12 or 14 rounds, depending on the key size. Due to its flexibility [6], AES is suitable for both hardware and software implementation. The library PyCrypto 2.6.1 (https://pypi.python.org/pypi/pycrypto/2.6) was used to apply the AES algorithm in the node that encrypts ROS messages.
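A minimal sketch of how such encryption could be wired into a publishing node is shown below. It is not the authors' exact implementation: the 128-bit pre-shared key, the CBC mode, the padding scheme and the Base64 encoding are assumptions made for illustration (Python 2, PyCrypto).

```python
# Illustrative AES helpers for ciphering ROS String payloads with PyCrypto.
import base64
from Crypto import Random
from Crypto.Cipher import AES

KEY = b'0123456789abcdef'          # 128-bit pre-shared key (assumed)

def encrypt(plaintext):
    iv = Random.new().read(AES.block_size)               # fresh IV per message
    pad = AES.block_size - len(plaintext) % AES.block_size
    padded = plaintext + chr(pad) * pad                   # PKCS#7-style padding
    return base64.b64encode(iv + AES.new(KEY, AES.MODE_CBC, iv).encrypt(padded))

def decrypt(ciphertext):
    raw = base64.b64decode(ciphertext)
    iv, body = raw[:AES.block_size], raw[AES.block_size:]
    padded = AES.new(KEY, AES.MODE_CBC, iv).decrypt(body)
    return padded[:-ord(padded[-1])]                      # strip the padding
```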

2.3 ROSRV

ROSRV is a runtime verification framework integrated seamlessly with ROS. It aims to monitor safety properties and enforce security policies. Its core is a monitoring infrastructure that intercepts, observes and optionally modifies messages passing through the ROS system. ROSRV checks the system's runtime behavior against user-defined properties, triggering the desired actions when verifying these properties. It also controls the system state and the execution of commands by enforcing an access policy to address security concerns. ROSRV allows safety properties based on event sequences to be ensured. Event handlers, called monitors, are defined with rules using a C++-based syntax.



For automatic monitor generation out of formal specifications, ROSRV depends on the ROSMOP framework (https://github.com/Formal-Systems-Laboratory/ROSMOP/), which generates code for monitors from the defined rules. It is important to note that the last available version of ROSRV (https://github.com/Formal-Systems-Laboratory/ROSRV/) was implemented for ROS Groovy, which has been deprecated since July 2014. Some modifications in the source code were needed in order to get ROSRV working in ROS Indigo. In the experiments implemented, two monitors were defined to be used by ROSRV. The first one aims to detect whether a message is encrypted, and to block it when it is not. The second monitor is able to detect a DoS attack by analyzing the data gathered by the KIO tag.

Monitor 1. Algorithm 1 describes the behavior of Monitor 1. It uses the ent tool (http://www.fourmilab.ch/random/) to identify whether a message is encrypted. This tool performs a variety of statistical tests on the stream of bytes passed as input, and reports the results. It is useful for evaluating pseudorandom number generators for encryption and statistical sampling applications, compression algorithms, and other applications where the information density of a file is of interest. The values calculated by ent are: Entropy (S), Chi-square Test (χ and χerr), Arithmetic Mean (μ), Monte-Carlo value for Pi (π), and Serial Correlation Coefficient (r). The first one, S, shows the information density of the contents of the message, expressed as a number of bits per character. Values close to 8.00 denote a high entropy in the data [10], as happens in ciphered data. S is the best indicator to detect encrypted messages [3]; however, it does not work well with short messages, which is the case with most ROS topics. With messages shorter than 200 bytes, there is not enough data to get a reliable value for the entropy. In that case, we have to check the value of μ, which is simply the result of summing all the bytes in the message and dividing by the message length. If the data is close to random, μ will approximate 127.5, according to the ent documentation. In our tests, μ values greater than 100 are obtained only for encrypted messages, so they can be clearly identified. In the cases where μ is lower than 100, we also need to consider the results of the Chi-square Test, the most commonly used test for the randomness of data. Values of χ lower than 290 along with χerr greater than 10% identify encrypted messages properly when the μ value is lower than 100.

Monitor 2. Listing 1.1 shows some data gathered by the KIO tag. Each line represents a location estimate. The values of columns 2, 4, 6, and 8 show the identifiers of the anchors whose data has been received. Columns 3, 5, 7, and 9 show the distance from the respective anchors to the tag. Columns 10 and 11 represent the tag identifier and the timestamp. The last three columns show the 3D coordinates of the location estimate.



Algorithm 1. Monitor 1
Input: msg                              ▷ Encrypted message
Output: IsEncrypted                     ▷ Boolean value
1: S, χ, χerr, μ, π ← ent msg           ▷ Analyse msg with ent
2: if μ > 100 then
3:     return TRUE
4: else
5:     if χ < 290 and χerr > 10% then
6:         return TRUE
7:     else
8:         return FALSE
9:     end if
10: end if

003c 408a 684 408b 351 501c 263 401d 659 4062 1214180 00000000 00000000 00000000 00000000 3.40 1.34 1.54

Listing 1.1. Data gathered by the KIO tag.
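A rough Python sketch of the decision rule of Algorithm 1 is given below. The real monitor relies on the external ent tool; here the arithmetic mean and the chi-square statistic are computed directly, and the χerr tail probability reported by ent is omitted, so this is only an approximation of the actual check.

```python
# Simplified stand-in for Algorithm 1 (Monitor 1): decide if a payload looks encrypted.
def looks_encrypted(msg_bytes):
    counts = [0] * 256
    for b in bytearray(msg_bytes):
        counts[b] += 1
    n = float(len(msg_bytes))
    mu = sum(i * c for i, c in enumerate(counts)) / n          # arithmetic mean
    expected = n / 256.0
    chi = sum((c - expected) ** 2 / expected for c in counts)  # chi-square statistic
    if mu > 100:          # high mean byte value: only observed for ciphered payloads
        return True
    return chi < 290      # near-uniform byte distribution (chi_err check omitted)
```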

A DoS attack may be detected when no data is received from one of the anchors. When this happens, the tag gathers "0000" as the anchor identifier and "0" as the distance to the tag. Listing 1.2 shows the data gathered by the KIO tag when the anchor 408A is suffering a DoS attack.

003c 0000 0 408b 351 501c 265 401d 664 4062 6419526 00000000 00000000 00000000 00000000 3.47 1.41 1.57

Listing 1.2. Data gathered by the KIO tag when suffering a DoS attack.

Algorithm 2 shows the behavior of Monitor 2. It is able to detect a DoS attack on the KIO RTLS system by analysing the data obtained by the KIO tag.

Algorithm 2. Monitor 2
Input: msg                              ▷ KIO RTLS message
Output: DoSDetected                     ▷ Boolean value
1: regex ← (.* 0000 0 .*)
2: if regex in msg then
3:     return TRUE
4: else
5:     return FALSE
6: end if
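Monitor 2 reduces to a single regular-expression test; a minimal Python sketch of that rule, equivalent in spirit to Algorithm 2, is shown below.

```python
# Sketch of Algorithm 2 (Monitor 2): a silent anchor shows up as "0000 0".
import re

DOS_PATTERN = re.compile(r'.* 0000 0 .*')

def dos_detected(kio_line):
    # True means the location estimate is affected by the attack and must be discarded
    return DOS_PATTERN.match(kio_line) is not None
```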

2.4 Method

Two experiments were carried out. The first one was intended to verify that Monitor 1 was able to block any plain-text messages. In the second one, Monitor 2 was checked against real DoS attacks performed on the KIO RTLS. The results of these tests are presented in Sect. 3.


Detection of Unencrypted Messages. Encrypted and plain-text messages were generated using two different ROS nodes. They publish ROS String messages in the same /chatter topic. Monitor 1 detects unencrypted messages and discards them (Fig. 4).

Fig. 4. Unencrypted messages detection through Monitor 1.

DoS Attacks Detection. Data from the KIO RTLS was recorded by the OrBiOne robot standing in the apartment, as shown in Fig. 2, under two different situations: during normal operation and while a DoS attack is being conducted. We created a different rosbag file for each situation (a rosbag file is a recording of the state of the robot over a period of time, which can be used as a data set). In order to get the data, we first developed the ROS node kio-rtls-talker to publish the KIO outputs in a topic that is used by the OrBiOne robot. The node kio-rtls-talker gets the outputs gathered by the KIO tag and publishes them as geometry_msgs/Point messages into the topic /kio_rtls_talker/stdout. Second, we implemented the node kio-rtls-listener, which consumes the messages published by kio-rtls-talker. Figure 5 depicts the conceptual model for the exchange of messages through the topic /kio_rtls_talker/stdout. Monitor 2 was developed to analyse the messages published on that topic.

Fig. 5. DoS attacks detection through Monitor 2.
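As an illustration of the kio-rtls-talker node described above, the following hypothetical sketch parses a KIO output line (the format of Listing 1.1, whose last three columns are the 3D coordinates) and publishes it as a geometry_msgs/Point; the parsing details are assumptions.

```python
# Hypothetical sketch of kio-rtls-talker: KIO tag output -> geometry_msgs/Point.
import rospy
from geometry_msgs.msg import Point

def parse_kio_line(line):
    fields = line.split()
    x, y, z = map(float, fields[-3:])     # last three columns: location estimate
    return Point(x=x, y=y, z=z)

def kio_rtls_talker(kio_lines):
    pub = rospy.Publisher('/kio_rtls_talker/stdout', Point, queue_size=10)
    rospy.init_node('kio_rtls_talker')
    for line in kio_lines:                # e.g. lines read from the KIO tag
        pub.publish(parse_kio_line(line))
```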


3 Results and Discussion

Figure 6 presents, at the top of the image, the text messages before being encrypted by the node talkerAES. Below, the output given by Monitor 1 is shown, together with the encrypted messages published in the topic /chatter.

Fig. 6. Plain-text messages before being encrypted by the node talkerAES (top); ROSRV Monitor 1 (middle); and encrypted messages published in /chatter (bottom).

Figure 7 shows the output produced by the node talker, the processing done by Monitor 1, and the published messages (none in this case) in the topic /chatter. The output of Monitor 1 shows the results of parsing the messages generated by the node talker with the ent tool. As can be seen in Fig. 7, plain-text messages are discarded, so no messages are published in the topic /chatter. Monitor 1 thus provides a security feature by ensuring that only encrypted messages are used. Figure 8 shows the playback of the rosbag file containing the recordings of the topic kio-rtls-talker during normal operation of the KIO RTLS. The image includes the output of Monitor 2 and the messages published in the topic /kio_rtls_talker/stdout. Figure 9 presents the same information when the KIO RTLS is suffering a DoS attack. As can be seen, Monitor 2 (middle) is able to detect the attack and block the affected messages.


Fig. 7. Output produced by node talker (top); ROSRV Monitor 1 (middle); and published messages, none, in /chatter (bottom).

Fig. 8. KIO RTLS during normal operation. Rosbag file with the recorded data from kio-rtls-talker (top); ROSRV Monitor 2 (middle); and published messages in /kio rtls talker/stdout (bottom).


Fig. 9. KIO RTLS during a DoS attack. Rosbag file with the recorded data from kio-rtls-talker (top); ROSRV Monitor 2 (middle); and published messages, none, in /kio rtls talker/stdout (bottom).

Following a similar approach, new security issues could be managed. In addition, Monitors 1 and 2 could be combined to improve both security and safety, allowing more complex security issues to be blocked.

4 Conclusion and Further Work

It has been shown that the ROS framework can be hardened by using symmetric encryption algorithms and semantic rules to ensure specific properties in ROS messages. In our experiments, the ciphering key is previously known by all the nodes used. A future line of work is to define a safe key exchange method, for example by using a Diffie-Hellman algorithm. It has also been proved that semantic rules can be used to improve security by defining conditions or properties. In this work they have been applied to detect plain-text messages in a ROS topic, as well as DoS attacks against the KIO RTLS. Other rules may be defined in an analogous way, to detect any other kind of attack or to ensure safety in the robot behavior. Monitor 1 could be improved not only to discard unencrypted messages, but also to check whether they have been encrypted with a specific key. This feature would greatly improve the identification of trusted nodes. However, it means that all nodes must know the key if symmetric encryption algorithms are used. The use of asymmetric encryption can avoid the key exchange problem. This feature has been considered in ROS 2 (https://github.com/ros2/ros2/wiki), which uses asymmetric encryption between publishers and subscribers. However, the new version of ROS is in beta and still far from being released. Finally, the technical contributions of this work are pointed out. They include:

– A ROS package to easily use the KIO RTLS in mobile robots.
– ROS nodes to encrypt/decrypt ROS String messages using AES.
– Migration of the ROSRV framework from ROS Groovy to ROS Indigo.
– Implementation of the rules to generate the proposed ROSRV monitors.

The source code of the ROS nodes and ROSRV monitors mentioned in this paper is available at a public Git repository (http://niebla.unileon.es/proyectos/publications/robot2017.git). Also, the ROS package to use the KIO RTLS has been uploaded to the ROS Wiki (http://wiki.ros.org/KIO-RTLS).

Acknowledgment. The authors would like to thank the Spanish National Institute of CyberSecurity (INCIBE) under grant Adenda21 ULE-INCIBE.

References
1. ROS community metrics report, July 2016. http://download.ros.org/downloads/metrics/metrics-report-2016-07.pdf. Accessed 22 Nov 2016
2. Barker, E., Barker, W.C.: Cryptographic Standards in the Federal Government. NIST Spec. Publ. 800-175A (2016)
3. Clifford, P., Cosma, I.: A simple sketching algorithm for entropy estimation over streaming data. In: AISTATS, pp. 196–206 (2013)
4. Daemen, J., Rijmen, V.: The Design of Rijndael: AES - The Advanced Encryption Standard. Springer, Heidelberg (2013)
5. Denning, T., Matuszek, C., Koscher, K., Smith, J.R., Kohno, T.: A spotlight on security and privacy risks with future household robots: attacks and lessons. In: Proceedings of the 11th International Conference on Ubiquitous Computing, pp. 105–114. ACM (2009)
6. Gueron, S.: Intel Advanced Encryption Standard (AES) New Instructions Set. Intel Corporation (2010)
7. Huang, J., Erdogan, C., Zhang, Y., Moore, B., Luo, Q., Sundaresan, A., Rosu, G.: Runtime verification for robots. In: International Conference on Runtime Verification, pp. 247–254. Springer (2014)
8. Lera, F.J.R., Balsa, J., Casado, F., Fernández, C., Rico, F.M., Matellán, V.: Cybersecurity in autonomous systems: evaluating the performance of hardening ROS. Málaga, Spain, p. 47 (2016)
9. Quigley, M., Conley, K., Gerkey, B., et al.: ROS: an open-source robot operating system. In: ICRA Workshop on Open Source Software, vol. 3, pp. 5–10, Kobe, Japan (2009)
10. Roman, S.: Coding and Information Theory, vol. 134. Springer, New York (1992)
11. Smid, M.E., Branstad, D.K.: Data encryption standard: past and future. Proc. IEEE 76(5), 550–559 (1988)


Triaxial Sensor Calibration: A Prototype for Accelerometer and Gyroscope Calibration

P. Bernal-Polo and H. Martínez-Barberá

University of Murcia, 30003 Murcia, Spain
[email protected], [email protected]

Abstract. The calibration of an accelerometer and a gyroscope is performed. A linear sensor model has been used. The triaxial sensor calibration algorithm is based on the minimization of a cost function, and is performed offline. Calibration hardware has been designed and used to obtain the calibration data. The algorithm has been tested with the acquired data, and the calibration results are presented. Although the sensor model is linear, some experience in dealing with non-linearities is reported.

Keywords: Triaxial sensor · Calibration · Accelerometer · Gyroscope · IMU

1 Introduction

Accuracy in the estimation of the mechanical state of a navigation system is a desirable property. With the introduction of sensors such as accelerometers and gyroscopes, the concept of inertial navigation emerged. This dead-reckoning technique allows higher update rates than others like odometry, GPS, or the Doppler effect. In addition, it has the advantage of being self-contained, so no external equipment is needed. Nowadays this concept has gained attention due to the development of low-cost devices capable of providing very precise measurements given their price. An Inertial Measurement Unit (IMU), which comprises three accelerometers and three gyroscopes, is an example of a triaxial sensor: a sensor composed of three measuring axes. With IMUs we can measure accelerations and, after a double integration, obtain position estimates. However, there is an inherent problem with this kind of device. Technological limitations in sensor manufacturing imply the existence of different sensor sensitivities, measurement biases, non-orthogonality between their axes, and non-linearities, among others. These intrinsic errors cause measurements to deviate from their actual value. That deviation is catastrophic when it comes to dead reckoning [5], which creates the need for accurate calibration. For accelerometer calibration, it is common to exploit the fact that the magnitude of gravity is independent of the orientation of the sensor [6,9,10,13].


Other works accommodate this idea to other types of triaxial sensors [3,4,8,11]. The advantage of this approach is that we do not need to know the actual orientation of the sensor to perform the calibration; only the magnitude of the measured vector is required. This work also relies on this concept. Naturally, there are works in which this idea is not used [2]. It is also common to use a linear model for the sensor [2–4,6,8–11,13], but there are works in which non-linear models have been proposed [1]. In [12] even a neural network model is used; however, the use of such a model would imply a computational overload introduced by the online computation of the neural network output.

Both online and offline calibration algorithms have been developed. Online calibration algorithms have the advantage of not having to perform a calibration before using the sensor, but they have the disadvantage of involving a greater computational overload and algorithmic complexity, which usually implies a simplification of the calibration model. Reference [7] is an example of an online calibration algorithm in which only gyroscope biases are calibrated. On the other hand, offline calibration algorithms have to be performed before using the sensor, but they allow calibration for complex sensor models, and after the calibration the corrected measurements are obtained very efficiently. Therefore, this second approach is the preferred choice. Offline calibration algorithms are usually methods for finding the solution to an optimization problem, which generally consists of the minimization of a cost function. It is common to obtain a non-linear system of equations after applying the minimum condition. The solution to these non-linear systems can be found using iterative methods [4,9–11], although there are also works in which an analytical solution is found [8,13]. Nevertheless, to arrive at an analytic solution they have to perform a change of variables. This change of variables may lead to a distortion of the cost function, which could result in irrelevant solutions. Of course, the strategy of minimizing a cost function is not always used: in [2] an offline Kalman filter is used to find the model parameters.

This paper reviews the linear sensor model, and introduces some new perspectives that help to understand the proposed one. An offline calibration algorithm is developed. The calibration method, based on the invariant magnitude of the measured vector, is redefined for arbitrary but known magnitudes of the measured vectors. We also present a low-cost prototype to automate the calibration procedure, and show some preliminary calibration results with real data. The paper is organized as follows. In Sect. 2 we introduce the sensor model. In Sect. 3 we present the calibration method. In Sect. 4 the calibration prototype is described; we also show examples of calibration data, which are then used in Sect. 5 to perform a real calibration. Finally, in Sect. 6, we present the conclusions.

2 Triaxial Sensor Model

In this Section we discuss the sensor error sources, and we present the model used to correct them.


Let us consider a measurable vector y expressed in an orthonormal base B = {ê1, ê2, ê3} anchored to the sensor. Let us also consider the vector x measured by the sensor. We are interested in the function relating them. Because of the misalignment between axes, the difference in sensitivity of each independent axis, their bias, and the noise in the measurements, the relation between x and y is not the identity.

Let B′ = {ê′1, ê′2, ê′3} denote the vectors defining each of the sensor measurement axes, expressed in the base B. We will assume that the i-th sensor axis measures the projection of the y vector onto the unit vector ê′i that defines such an axis. These projections, expressed in the base B, would be {ê′i ê′iᵀ y}i. However, the sensor measures y in the base B′. Then, our transformation is given by

        ⎛ — ê′1ᵀ — ⎞
    x = ⎜ — ê′2ᵀ — ⎟ y .                                                  (1)
        ⎝ — ê′3ᵀ — ⎠

Or it would be if the sensor had the same sensitivity on every axis and no bias. Assuming that the sensitivity of the axes does not change with the magnitude of the measured quantity, we can model it by multiplying the i-th projection by a scale factor si. Denoting by b the vector representing the measurement bias, and by r the vector of zero-mean random variables representing the measurement noise (note that if r had an expected value different from 0, its contribution could be delegated to b), the complete model for our triaxial sensor will be

        ⎛ — s1 ê′1ᵀ — ⎞
    x = ⎜ — s2 ê′2ᵀ — ⎟ y + b + r  =  S y + b + r .                       (2)
        ⎝ — s3 ê′3ᵀ — ⎠

It is important to note that this model does not incorporate non-linear relations between x and y, time variations, or temperature dependence. Therefore, we need to be sure that our sensor is free of these behaviors, or at least that they are negligible.
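A small numerical sketch of model (2), and of the correction y ≈ T(x + c) with T = S⁻¹ and c = −b that the calibration of Sect. 3 makes possible, is given below; the matrix, bias and noise values are arbitrary illustrative numbers, not data from the prototype.

```python
# Illustrative sketch of the linear sensor model and its inverse (made-up numbers).
import numpy as np

S = np.array([[1.02,  0.00, 0.00],      # scaled, slightly misaligned axes
              [0.01,  0.98, 0.00],
              [0.03, -0.01, 1.01]])
b = np.array([0.05, -0.02, 0.10])       # measurement bias

def measure(y, noise_std=0.001):
    r = np.random.normal(0.0, noise_std, 3)
    return S.dot(y) + b + r             # model (2): x = S y + b + r

def correct(x, T, c):
    return T.dot(x + c)                 # corrected measurement after calibration

T, c = np.linalg.inv(S), -b
y_true = np.array([0.0, 0.0, 1.0])      # e.g. gravity expressed in g units
print(correct(measure(y_true), T, c))   # close to y_true
```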

3 Calibration

Once our sensor model is defined, we are interested in finding the parameters that characterize a particular sensor. This section presents the method used to find those parameters: the calibration algorithm.

The fundamental concept from which the present calibration algorithm is developed arises from a first attempt to calibrate an accelerometer. A natural idea emerges in such a scenario: the measured module of the gravity vector should be the same independently of the orientation of the sensor. This conception leads us to consider the module of the y vector in our sensor model:

    ‖y‖ = ‖S⁻¹(x − b − r)‖ = ‖T(x + c) + r′‖ ,                            (3)


where T = S⁻¹, c = −b, and r′ = −S⁻¹ r. Whenever the sensor is static, the parameters of our model should satisfy this equation for any orientation. By collecting a set of sensor measurements for a wide range of static orientations we will obtain enough constraints to narrow the solution. Then, our calibration problem turns into a fitting problem. We will define an error function whose minimum we want to find. The location of the minimum will be the solution to our problem: the parameters of our model.

3.1 Error Function Definition

Let {x_n}_{n=1}^{N} be a set of measurements taken with our triaxial sensor under different static conditions (a static condition for an accelerometer is one in which we are not measuring any acceleration other than gravity; a static condition for a magnetometer is one in which we are not measuring any magnetic field other than the reference one). Let also {y_n}_{n=1}^{N} be the set of known modules of the vector y for each static orientation. We define our error function as

    f(T, c) = Σ_{n=1}^{N} ( ‖T(x_n + c)‖² − y_n² )² .                     (4)

We choose to minimize the distance between squared modules rather than the distance between modules because it greatly simplifies the mathematical treatment. We have also extended the initial idea of a fixed module, invariant under rotations, to the more general idea of an arbitrary module that is also invariant under rotations, based on the premise that the module of a vector is easily obtainable. We notice as well that given a solution to our minimization problem, {T∗, c∗}, then {RT∗, c∗}, with R such that RᵀR = I, would also be a solution to our minimization problem. Note that R does not need to be a rotation matrix. In fact, a change in the sign of any column of the matrix T∗ would not produce a change in the value of the error function f. The solution is not unique.

3.2 Model Redefinition

In order to limit the number of solutions, we can add further constraints to the calibration matrix T. These constraints take the form of relations between the vectors of B and the ones of B′. We will choose the vector ê1 of the base B to point in the direction of the vector ê′1 of the base B′ (then ê1 = ê′1). The vector ê2 will point in the direction of ê′2 minus its projection onto ê1 (this is ê2 ∝ ê′2 − ê1 ê1ᵀ ê′2). Finally, the vector ê3 will point in the direction of ê′3 minus its projection onto the vectors ê1 and ê2 (that is ê3 ∝ ê′3 − ê1 ê1ᵀ ê′3 − ê2 ê2ᵀ ê′3). Then, our vectors {ê′i}i will satisfy

    ê1 = ê′1                           ⟹   ê′1 = (1, 0, 0)ᵀ ,             (5)
    ê2 ∝ (I − ê1 ê1ᵀ) ê′2              ⟹   ê′2 = (∗, •, 0)ᵀ ,             (6)
    ê3 ∝ (I − ê1 ê1ᵀ − ê2 ê2ᵀ) ê′3     ⟹   ê′3 = (∗, ∗, •)ᵀ ,             (7)


where ∗ stands for any real value, and • means a real value different from 0. As a consequence, our calibration matrix will have the form

        ⎛ — s1 ê′1ᵀ — ⎞   ⎛ • 0 0 ⎞              ⎛ • 0 0 ⎞
    S = ⎜ — s2 ê′2ᵀ — ⎟ = ⎜ ∗ • 0 ⎟    ⟹    T = ⎜ ∗ • 0 ⎟ ,               (8)
        ⎝ — s3 ê′3ᵀ — ⎠   ⎝ ∗ ∗ • ⎠              ⎝ ∗ ∗ • ⎠

where we have used the property that the inverse of an invertible lower triangular matrix is also lower triangular. The matrix S should be invertible because if it were not, the set {ê′i}i would not be a base, and the sensor would not measure a vector, but its projection onto a subspace of R³. Some works use an upper triangular matrix in their calibration model [3,4,10]. If we had chosen ê3 = ê′3 instead of (5), and had then defined ê2 and ê1 using the same procedure followed before, we would have obtained an upper triangular matrix. As stated before, the solution is not unique, but our choice of the meaning of each vector of B defines the form of the matrix T.

Lastly, we want our reference frame to have a right-handed orientation. In a right-handed reference frame the result of (ê1 × ê2) · ê3 must be positive. This expression turns out to be the determinant of our calibration matrix, det(S). Knowing that det(S) = [det(S⁻¹)]⁻¹ = [det(T)]⁻¹, and that for a triangular matrix det(T) = ∏i Tii, we can ensure that our orientation will be right-handed by forcing positive terms on the diagonal of T.

3.3 Calibration Algorithm

Our final calibration model needs 9 parameters to be characterized:

    θ = (T11, T21, T22, T31, T32, T33, c1, c2, c3) .                      (9)

Being interested in finding the minimum of the cost function (4), we are looking for those values that satisfy

    F := ∂f/∂θ = 0 .                                                      (10)

The equations resulting from (10) form a system of non-linear equations. These equations being differentiable, we can use an iterative algorithm like Newton's method to solve this system:

    J(θ_k) (θ_{k+1} − θ_k) = −F(θ_k) ,   where   J(θ_k) = ∂F(θ)/∂θ |_{θ=θ_k} .      (11)
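A compact sketch of the fitting step is shown below. It parameterizes T and c as in (8)–(9) and minimizes the residuals of (4); for brevity it relies on SciPy's least-squares solver rather than on the Newton iteration (11), which is only an implementation choice for this illustration.

```python
# Illustrative calibration fit for the nine parameters of (9) (not the authors' code).
import numpy as np
from scipy.optimize import least_squares

def unpack(theta):
    T = np.array([[theta[0], 0.0,      0.0],
                  [theta[1], theta[2], 0.0],
                  [theta[3], theta[4], theta[5]]])   # lower triangular, as in (8)
    c = theta[6:9]
    return T, c

def residuals(theta, xs, ys):
    T, c = unpack(theta)
    # one residual per static sample: ||T (x_n + c)||^2 - y_n^2, as in (4)
    return np.array([np.dot(T.dot(x + c), T.dot(x + c)) - y * y
                     for x, y in zip(xs, ys)])

def calibrate(xs, ys):
    theta0 = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # near identity
    sol = least_squares(residuals, theta0, args=(xs, ys))
    return unpack(sol.x)      # fitted T and c
```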

4 Experimental Testbed

In order to test our calibration algorithm, it is convenient to have an experimental testbed in which the process is automated. In this section we describe our hardware design and how data acquisition is performed.

4.1 Hardware Design

Although the calibration algorithm can be applied to any triaxial sensor, our main goal is to use it for calibrating accelerometers and gyroscopes. In addition, several works have reported a dependence of the calibration parameters on temperature. Even though we do not present results of a temperature-dependent calibration, our hardware is designed to be able to do so in the future. To calibrate any triaxial sensor we need to be able to place it in different orientations. For this purpose we will use two servos, one attached to the other. We will secure the sensor to one end, and we will fix the other end to a pole. With this setup we will be able to calibrate our accelerometer. However, we will need to provide an angular velocity to calibrate our gyroscope. To this end we will mount the pole on top of a rotating motor. The actual angular velocity will be measured using an optical tachometer. Since (4) is invariant under rotations, it will not be necessary to orient the servos accurately; we just need them to provide a sufficiently diverse set of orientations. However, we will need the accuracy provided by the optical tachometer in the angular velocity measurements. The assembly will be placed inside a box that will function as thermal insulation. The box will have a window to see what happens inside. A Peltier device will be used to control the temperature inside the box. Since part of the assembly is on top of the rotating motor, we will need two different controllers: one on the rotating part (C1), and the other on the outside (C2). C1 will be responsible for placing the servos in different orientations, reading the sensor measurements, and sending these measurements to the external controller via an RF module. C2 will receive the measurements from the internal controller, will measure the actual modulus of the angular velocity, and will resend the received measurements along with the measured angular velocity modulus to the PC. It will also set the motor speed and the current going through the Peltier device. Figure 1 illustrates the architecture of the calibration system. Figure 2 shows the current version of the assembled hardware.

4.2 Data Acquisition

Our calibration subject is an MPU-6050 by InvenSense. This device is an IMU containing an accelerometer and a gyroscope. We aim to find the calibration parameters that characterize these sensors.


Fig. 1. Calibration hardware block diagram

Fig. 2. Calibration hardware prototype

Accelerometer Data. The accelerometer range has been established to measure accelerations in the interval (−16, 16) g. Using the hardware described in Sect. 4.1 we obtained a set of static accelerometer measurements (here, static means with no acceleration other than gravity). Figure 3 shows the 3-dimensional points that represent each acceleration measurement.

Gyroscope Data. The gyroscope measurement range has been set as (−2000, 2000) ° s⁻¹ ≈ (−34.9, 34.9) rad s⁻¹. Using the hardware described in Sect. 4.1 we obtained a set of static gyroscope measurements (here, static means with no angular velocity other than the one measured by the tachometer). Figure 4a shows the data collected with the gyroscope. Figure 4b shows the data obtained with the tachometer.


Fig. 3. Raw static accelerometer measurements (16-bit signed integer)

5 Experimental Results

This section presents the calibration results. Accelerometer calibration offers no complication. However, gyroscope calibration needs more attention.

5.1 Accelerometer Calibration

Knowing that our sensor measures accelerations in the interval (−16, 16) g, and that it outputs its measurements as 16-bit signed integers, we would expect Ta ≈ I · 16 g / 2¹⁵ ≈ I · 4.8823 · 10⁻⁴ g. After introducing the static acceleration samples to the calibration algorithm we obtain:

         ⎛ 4.8863   0       0      ⎞
    Ta = ⎜ 0.0111   4.8567  0      ⎟ · 10⁻⁴ g ,                           (12)
         ⎝ 0.0663  −0.0086  4.8073 ⎠

    ca = (−28.47, −1.51, 67.95)ᵀ .                                        (13)

The obtained result is close to what was expected. Nevertheless, the effects of bias, misalignment, and scale factors can be appreciated. Figure 5 displays a representation of the accelerometer axes.

5.2 Gyroscope Calibration

Knowing that our sensor measures angular velocities in the interval (−2000, 2000) ° s⁻¹, and that it outputs its measurements as 16-bit signed integers, we would expect Tω ≈ I · (2000 ° s⁻¹ · π rad / 180°) / 2¹⁵ ≈ I · 1.0653 · 10⁻³ rad s⁻¹.


Fig. 4. Angular velocity measurements. a Raw static gyroscope measurements (16-bit signed integer). b Tachometer data. The area outside the measuring range of the gyroscope is filled.


Fig. 5. Representation of the accelerometer measurement axes (g⁻¹)

Taking the whole set of static angular velocity samples, the calibration algorithm reaches the following solution:

         ⎛ 1.4074   0       0      ⎞
    Tω = ⎜ 0.0595   1.3300  0      ⎟ · 10⁻³ rad s⁻¹ ,                     (14)
         ⎝−0.2246  −0.0846  1.4198 ⎠

    cω = (2008.2, 383.9, −1804.7)ᵀ .                                      (15)

Figure 6a is a representation of the obtained gyroscope axes. We notice a great misalignment between the axes. This could be due to an actually large misalignment, or to an error in the calibration. Figure 6b shows the relation between the tachometer-measured angular velocity and the angular velocity module computed with the gyroscope measurements after being transformed using the obtained parameterization. We did not take into account the sensor saturation effects. This saturation leads to a non-linearity in the gyroscope output. Therefore, a new calibration must be performed, this time leaving out the measurements in the non-linear region. In our case, the non-linear effects


Fig. 6. Results of the first attempt to calibrate the gyroscope. a Representation of the gyroscope measurement axes (rad⁻¹ s). b Comparison between the tachometer-measured angular velocity and the calibrated gyroscope measurements. The area outside the measuring range of the gyroscope is filled.

start to be noticed for angular velocities greater than 20 rad s⁻¹. Calibrating our gyroscope with the measurements below this limit, we retrieve our final calibration transformation:

          ⎛ 1.0720  0       0      ⎞
    T*ω = ⎜ 0.0022  1.0679  0      ⎟ · 10⁻³ rad s⁻¹ ,                     (16)
          ⎝ 0.0089  0.0060  1.0702 ⎠

    c*ω = (29.0904, 26.9676, 87.4518)ᵀ .                                  (17)

The new result is closer to the result we expected than the previous one. As in the case of the accelerometer, we still notice the effects of bias, misalignment, and scale factors. Figure 7a shows a representation of the corresponding gyroscope measurement axes for these calibration parameters. The final comparison between the tachometer-measured angular velocity and the angular velocity module computed using the calibrated gyroscope measurements is displayed in Fig. 7b. We observe that the calibration is valid for angular velocities below 20 rad s⁻¹, but above that value non-linearities are present.


Fig. 7. Results of the final gyroscope calibration. a Representation of the gyroscope measurement axes (rad⁻¹ s). b Comparison between the tachometer-measured angular velocity and the calibrated gyroscope measurements. The area outside the measuring range of the gyroscope is filled.

6 Conclusion and Future Work

We have reviewed the triaxial sensor model, and revised its underlying concepts. We have proposed an error function, and designed the calibration algorithm based on it. A low-cost prototype for triaxial sensor calibration has been developed. Although it has been designed to calibrate accelerometers and gyroscopes, it can also be used to calibrate magnetometers, considering that the Earth's magnetic field is a constant vector in the vicinity of the calibration prototype. Calibration data have been acquired using this prototype, and the proposed calibration algorithm has been tested with these data. We have become aware of the saturation effects in the sensor, which involve the emergence of non-linearities, and we have learned that we must take them into account in the calibration process. Although we have calibrated our sensors satisfactorily, we still need to study the limitations of our calibration algorithm. We also need to think about how to handle the temperature dependence of the sensors.

References
1. Batista, P., Silvestre, C., Oliveira, P., Cardeira, B.: Accelerometer calibration and dynamic bias and gravity estimation: analysis, design, and experimental evaluation. IEEE Trans. Control Syst. Technol. 19(5), 1128–1137 (2011)


2. Beravs, T., Begus, S., Podobnik, J., Munih, M.: Magnetometer calibration using Kalman filter covariance matrix for online estimation of magnetic field orientation. IEEE Trans. Instrum. Meas. 63(8), 2013–2020 (2014)
3. Bonnet, S., Bassompierre, C., Godin, C., Lesecq, S., Barraud, A.: Calibration methods for inertial and magnetic sensors. Sens. Actuators A 156(2), 302–311 (2009)
4. Dorveaux, E., Vissiere, D., Martin, A.P., Petit, N.: Iterative calibration method for inertial and magnetic sensors. In: Proceedings of the 48th IEEE Conference on Decision and Control, held jointly with the 2009 28th Chinese Control Conference, CDC/CCC 2009, pp. 8296–8303. IEEE (2009)
5. Flenniken, W., Wall, J., Bevly, D.: Characterization of various IMU error sources and the effect on navigation performance. In: ION GNSS, pp. 967–978 (2005)
6. Fong, W., Ong, S., Nee, A.: Methods for in-field user calibration of an inertial measurement unit without external equipment. Meas. Sci. Technol. 19(8), 085202 (2008)
7. Markley, F.L.: Attitude error representations for Kalman filtering. J. Guid. Control Dyn. 26(2), 311–317 (2003)
8. Renaudin, V., Afzal, M.H., Lachapelle, G.: Complete triaxis magnetometer calibration in the magnetic domain. J. Sens. 2010 (2010)
9. Sipos, M., Paces, P., Rohac, J., Novacek, P.: Analyses of triaxial accelerometer calibration algorithms. IEEE Sens. J. 12(5), 1157–1165 (2012)
10. Skog, I., Händel, P.: Calibration of a MEMS inertial measurement unit. In: XVII IMEKO World Congress, pp. 1–6 (2006)
11. Vasconcelos, J.F., Elkaim, G., Silvestre, C., Oliveira, P., Cardeira, B.: Geometric approach to strapdown magnetometer calibration in sensor frame. IEEE Trans. Aerosp. Electron. Syst. 47(2), 1293–1306 (2011)
12. Wang, J.H., Gao, Y.: A new magnetic compass calibration algorithm using neural networks. Meas. Sci. Technol. 17(1), 153 (2005)
13. Zhang, H., Wu, Y., Wu, W., Wu, M., Hu, X.: Improved multi-position calibration for inertial measurement units. Meas. Sci. Technol. 21(1), 015107 (2009)

Automatic Characterization of Phase Resetting Controllers for Quick Balance Recovery During Biped Locomotion

Julián Cristiano1, Domènec Puig1, and Miguel Angel García2

1 Intelligent Robotics and Computer Vision Group, Department of Computer Science and Mathematics, Rovira i Virgili University, 43007 Tarragona, Spain
{julianefren.cristiano,domenec.puig}@urv.cat
2 Department of Electronic and Communications Technology, Autonomous University of Madrid, 28049 Madrid, Spain
[email protected]
http://deim.urv.cat/~rivi/

Abstract. This paper proposes a methodology for the automatic characterization of phase resetting controllers for quick balance recovery after balance is lost during biped locomotion. The system allows useful phase resetting controllers to be easily characterized and designed using a simulation environment. Several experiments have been performed using a NAO humanoid robot in order to automatically characterize and test the phase resetting mechanism. Notwithstanding, it can be implemented on any humanoid robot with a similar kinematic structure. Once the controllers are characterized, the proposed system detects the robot's current state through the information provided by its inertial sensors and then applies the correct phase resetting in a short period of time in order to quickly recover the robot's balance. The proposed control scheme reacts quickly whenever unknown external perturbations are applied to the robot's body by using the proposed phase resetting mechanism.

Keywords: Biped locomotion · Central Pattern Generators · CPGs · Humanoid robots · Phase resetting

1 Introduction

Many studies show the presence of specialized networks of neurons able to generate rhythmic patterns in animals and humans. These networks are called central pattern generators (CPGs). CPGs are modelled as networks of neurons capable of generating stable and periodic signals controlled through a set of constant parameters. In the case of vertebrates, these networks are located in the central nervous system within the spinal cord. The output signals from these CPGs are sent to the muscles through the peripheral nervous system. High-level


commands are sent to the different CPGs by the brain through the spinal cord. These commands do not generate the periodic signal by themselves, since the oscillation is autonomously generated within the CPG in the spinal cord. The complex nervous system of animals and humans is able to automatically learn and generate motion patterns by controlling extensor and flexor movements. One important characteristic of the locomotor system of animals and humans is the quick adaptation of the motion patterns generated by the system in response to internal changes or external environmental conditions. Recent studies have shown that the oscillatory activity found in some neurons uses phase resetting mechanisms in order to quickly synchronize and coordinate their functioning. Basically, the phase resetting mechanism synchronizes the neurons' current state in order to set the phase of an oscillation to a reference point for a given event or stimulus. Inspired by this fact, some previous studies have proposed using the phase resetting mechanism during biped locomotion in order to deal with external perturbations ([1,2]).

2 Related Work

Biped locomotion is a complex problem since it involves the inherent instability of humanoid robots. Therefore, it is important to develop an appropriate control scheme capable of generating stable motions and also guaranteeing a quick reaction whenever unstable situations appear. Phase resetting controllers have been shown to be a suitable feedback strategy to react quickly to internal changes or external perturbations. The phase resetting mechanism is also an efficient option for robots with reduced computational capabilities because it does not require complex data processing to react, which is not possible in most small-size humanoid robots [3]. Most of the systems proposed for locomotion control of biped robots using a phase resetting mechanism as a feedback strategy are only focused on the characterization of phase resetting controllers by performing some experiments with the real robot ([1,2]). However, in a real workspace the robot can be pushed by external forces coming from any unknown direction, and characterization in a real scenario can be infeasible. Furthermore, the characterization may differ from one robot to another. Therefore, in this paper a methodology for the correct characterization of these controllers is proposed. The idea is to define a testing scenario in simulation which allows these controllers to be extensively explored and characterized by performing several simulations under different situations. The final goal is to develop an initial model for a simulated scenario which can later be adjusted on a real robot by performing a reduced number of experiments, compared to those currently needed to characterize these controllers. This paper is organized as follows. Section 3 describes the control system used to generate the locomotion patterns for biped robots and also presents the implemented phase resetting mechanism. The methodology proposed for the


Fig. 1. CPG network of 4 neurons as proposed in [5].

The methodology proposed for the appropriate characterization of phase resetting controllers and some experimental results are presented and discussed in Sect. 4. Finally, conclusions and future work are given in Sect. 5.

3 Generation and Control of Biped Locomotion Patterns

In this section, our previous CPG-based locomotion control system [4] is used to generate and control the locomotion patterns for biped robots. The signals generated by the system are used to directly drive the robot's joints, and the phase resetting mechanism described in [3] is used to guarantee a quicker and deterministic response. Finally, the proposed methodology for automatic characterization of phase resetting controllers is presented.

3.1 CPG Network and Neuron's Model

The CPG utilized to generate the locomotion patterns is based on a network of 4 interconnected neurons with mutual inhibition previously proposed by Matsuoka [5]. The topology of that CPG is shown in Fig. 1. That network has been chosen as it generates oscillatory output signals in phase, in anti-phase and with phase differences of π/2 and 3π/2 radians. These phase differences are sufficient to control the robot's movement directly in the joint space, as shown in [6]. The interconnection weights between the neurons of that CPG are shown in Table 1. Figure 2 shows the output signal of each neuron of the CPG network. The CPG's neurons are defined according to the well-known Matsuoka's neuron model:

\tau \dot{u}_i = -u_i - \sum_{j=1}^{N} w_{ij} y_j - \beta v_i + u_e + f_i \qquad (1)

\tau' \dot{v}_i = -v_i + y_i \qquad (2)

y_i = \max(0, u_i), \qquad i = 1, \ldots, N

The external input u_e affects the amplitude of the neuron's output signal. The frequency of the output signal is determined by the time constants τ and τ′.


Fig. 2. Output signals of the 4-neuron CPG network.

Table 1. CPG's interconnection weights

w1,1 0.0   w1,2 0.0   w1,3 2.0   w1,4 0.5
w2,1 0.5   w2,2 0.0   w2,3 0.0   w2,4 2.0
w3,1 2.0   w3,2 0.5   w3,3 0.0   w3,4 0.0
w4,1 0.0   w4,2 2.0   w4,3 0.5   w4,4 0.0

The set of parameters must satisfy some requirements in order to yield stable oscillations ([5,7]). Term f_i is a feedback variable that can be used to control the output amplitude and to synchronize the output signals with a periodic input signal. Parameter w_ij represents the bidirectional interconnection weight between two neurons. Those interconnection weights determine the phase difference among the output signals generated by the CPG. When a network of neurons is set, they all oscillate together according to their internal parameters and the network interconnections, converging to specific patterns and limit cycles. Variable N represents the number of neurons that constitute the CPG (N = 4 in this work). Parameter K_f has been introduced as proposed in [8] in order to modulate the frequency of the output signal. The time constants in (1) and (2) are thus reformulated as τ = τ_o K_f and τ′ = τ′_o K_f, where τ_o and τ′_o are the original time constants.


The internal parameters that determine the behaviour of each neuron are summarized in Table 2. The CPG generates stable oscillations provided those parameters satisfy some requirements ([5,7]).

Table 2. Internal parameters for each neuron

Parameter   Value      Parameter   Value
τo          0.2800     ue          0.4111
τ'o         0.4977     fi          0
β           2.5000
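To make the oscillator dynamics concrete, the following Python sketch integrates Eqs. (1) and (2) with forward Euler, using the interconnection weights of Table 1 and the internal parameters of Table 2. The integration step, initial state and the value of K_f are illustrative choices and are not taken from the paper.

```python
import numpy as np

# Interconnection weights w_ij from Table 1 (neuron i is inhibited by output y_j).
W = np.array([[0.0, 0.0, 2.0, 0.5],
              [0.5, 0.0, 0.0, 2.0],
              [2.0, 0.5, 0.0, 0.0],
              [0.0, 2.0, 0.5, 0.0]])

# Internal neuron parameters from Table 2.
tau_o, tau_p_o, beta, u_e = 0.2800, 0.4977, 2.5000, 0.4111
K_f = 1.0                                  # frequency modulation factor (Sect. 3.1)
tau, tau_p = tau_o * K_f, tau_p_o * K_f    # tau = tau_o * K_f, tau' = tau'_o * K_f

def simulate(T=7.0, dt=0.001, f=None):
    """Forward-Euler integration of the 4-neuron Matsuoka CPG, Eqs. (1)-(2)."""
    n_steps = int(T / dt)
    N = W.shape[0]
    u = 0.01 * np.random.randn(N)          # small random state to break symmetry
    v = np.zeros(N)
    y_log = np.zeros((n_steps, N))
    for k in range(n_steps):
        y = np.maximum(0.0, u)                             # y_i = max(0, u_i)
        f_i = np.zeros(N) if f is None else f(k * dt, y)   # feedback term f_i
        du = (-u - W @ y - beta * v + u_e + f_i) / tau
        dv = (-v + y) / tau_p
        u, v = u + dt * du, v + dt * dv
        y_log[k] = y
    return y_log

outputs = simulate()   # columns y1..y4 settle into the phase-shifted limit cycle of Fig. 2
```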

The proposed system has been tested on the NAO platform [9], which is a small-size humanoid robot with 21 degrees of freedom, 56 cm tall and weighing 4.8 kg. Notwithstanding, the same control system can easily be adapted to other humanoid robots with a similar kinematic structure. The controllers proposed in [6] have been used to determine the angle in radians of the following joints of the NAO robot:

RHipPitch    = bias1 + a(−ξ(y1 − y3) + (y2 − y4))
LHipPitch    = bias1 + a(ξ(y1 − y3) − (y2 − y4))
LKneePitch   = bias2 + b(y2 − y4)
RKneePitch   = bias2 + b(y4 − y2)
RAnklePitch  = bias3 + c(ξ(y1 − y3) + (y2 − y4))
LAnklePitch  = bias3 + c(−ξ(y1 − y3) − (y2 − y4))
RHipRoll     = d(y2 − y4)
LHipRoll     = d(y2 − y4)
LAnkleRoll   = e(y4 − y2)
RAnkleRoll   = e(y4 − y2)
RShouldPitch = bias4 + f(y1 − y3)
LShouldPitch = bias4 − f(y1 − y3)          (3)

Those controllers depend on 10 internal parameters: 4 biases (bias1, ..., bias4) and 6 gains (a, b, c, d, e, f). Parameter ξ controls the stride length. Both the latter and the locomotion frequency, which is controlled through the value of K_f, determine the robot's walking velocity. Detailed information can be found in [10].
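Equation (3) translates almost directly into code. In the sketch below the bias and gain values, as well as ξ, are placeholders to be filled with the values reported in [10]; the joint names follow the equations above (note that the NAO API itself names the shoulder joints RShoulderPitch/LShoulderPitch).

```python
def cpg_to_joint_angles(y, bias, gain, xi):
    """Map the CPG outputs y = (y1, y2, y3, y4) to NAO joint angles [rad], Eq. (3).

    bias -- sequence (bias1, bias2, bias3, bias4); gain -- sequence (a, b, c, d, e, f);
    xi   -- stride-length parameter.  All numeric values are placeholders here.
    """
    y1, y2, y3, y4 = y
    b1, b2, b3, b4 = bias
    a, b, c, d, e, f = gain
    return {
        "RHipPitch":    b1 + a * (-xi * (y1 - y3) + (y2 - y4)),
        "LHipPitch":    b1 + a * ( xi * (y1 - y3) - (y2 - y4)),
        "LKneePitch":   b2 + b * (y2 - y4),
        "RKneePitch":   b2 + b * (y4 - y2),
        "RAnklePitch":  b3 + c * ( xi * (y1 - y3) + (y2 - y4)),
        "LAnklePitch":  b3 + c * (-xi * (y1 - y3) - (y2 - y4)),
        "RHipRoll":     d * (y2 - y4),
        "LHipRoll":     d * (y2 - y4),
        "LAnkleRoll":   e * (y4 - y2),
        "RAnkleRoll":   e * (y4 - y2),
        "RShouldPitch": b4 + f * (y1 - y3),
        "LShouldPitch": b4 - f * (y1 - y3),
    }
```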

3.2 Phase Resetting Controller

A phase resetting controller is a simple but effective feedback strategy. These controllers must detect the external force applied to the robot’s body through


the quick tracking and analysis of the measurements provided by the robot's sensors. Once the external perturbation is detected by the system, it must react by activating the phase resetting mechanism in order to quickly recover balance. In the control scheme used in this work, the controller synchronizes the neurons' output signals by modifying the current phase of the locomotion pattern generated by the system to a desired phase given an external event or stimulus, such as an external force applied to the robot's body. The aim of the phase resetting mechanism is to generate a force in the direction opposite to that of the external perturbation by changing the phase of the current locomotion pattern, in order to guarantee a quick recovery of balance. In this work, the information provided by the 3-axis accelerometer is tracked and used to detect the exact instant at which the external force is applied to the robot's body and also to estimate its magnitude and direction. According to the current phase of the generated locomotion pattern and the corresponding phase at which the external force applied to the robot is detected, the phase resetting controller must react by changing the current phase of the locomotion pattern to another phase that allows the robot to recover its balance. This phase change becomes effective after Δt seconds. In the experiments, the controller's response time (Δt) was set to 40 ms. However, this time could be modified according to the desired system response. Detailed information about the implementation of the phase resetting mechanism can be found in [3]. Figure 3 shows an example of the phase resetting controller implemented for the output signals of the 4-neuron CPG network shown in Fig. 1, which is used to generate the locomotion pattern together with the controllers described previously in (3). In the figure, the external force is detected by the system at sample number 3064 and the phase resetting is then activated in order to deal with the external perturbation.
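The runtime behaviour just described can be sketched as follows. The accelerometer threshold, the control period and the way the PRC is queried are assumptions made only for illustration; the actual detection logic and implementation are those of [3].

```python
import numpy as np

CONTROL_PERIOD = 0.010     # 100 Hz control loop (assumed)
DT_RESPONSE = 0.040        # controller response time, Δt = 40 ms (Sect. 3.2)
ACC_THRESHOLD = 2.0        # m/s^2, illustrative detection threshold (assumed)

def phase_reset_step(phase, acc_xyz, prc_forward, prc_backward, pending):
    """One control cycle of the phase resetting feedback.

    phase    -- current phase of the locomotion pattern [deg, 0..360)
    acc_xyz  -- latest 3-axis accelerometer reading
    prc_*    -- callables mapping current phase [deg] -> phase shift [rad] (Figs. 4-5)
    pending  -- (time_left, shift_deg) of an already scheduled reset, or None
    """
    # Detect an external push from the acceleration along the walking axis;
    # its sign selects the forward or backward phase resetting curve.
    a_x = acc_xyz[0]
    if pending is None and abs(a_x) > ACC_THRESHOLD:
        prc = prc_forward if a_x > 0 else prc_backward
        pending = (DT_RESPONSE, np.degrees(prc(phase)))

    # Apply the scheduled phase shift once the response time Δt has elapsed.
    if pending is not None:
        time_left, shift = pending[0] - CONTROL_PERIOD, pending[1]
        if time_left <= 0:
            phase = (phase + shift) % 360.0
            pending = None
        else:
            pending = (time_left, shift)
    return phase, pending
```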

4 Proposed Methodology for Automatic Characterization of Phase Resetting Controllers

The behaviour of a phase resetting controller is determined by the shape of the corresponding Phase Resetting Curve (PRC). After the external force is detected by the sensors, the PRC function indicates the phase shifts that must be applied to the locomotion pattern generated by the CPG-based control system, according to its current phase, in order to prevent the robot from falling down. In this paper we extend our previous work by proposing a methodology to automatically characterize and model the PRCs. This procedure must be done in advance by performing several experiments that allow the prior definition of the appropriate set of curves. However, these PRCs can be redefined at any time according to new sensory information, or their shape can be modified to account for the deterioration of the robot's physical parts. Two PRCs, used to react to external perturbations in the forward and backward directions, have been automatically characterized and tested on a NAO


Fig. 3. Example of the phase resetting mechanism applied to the output signals of the 4-neuron CPG network shown in Fig. 1. The plots represent the system's response without (top) and with (bottom) the phase resetting controller. In the figure, the external force is detected by the system at sample number 3064. After the phase resetting response time, the phase of the locomotion pattern is modified to produce a force in the direction opposite to that of the external perturbation.

biped robot in simulation in the Webots simulator, following the procedure described below.
• The locomotion pattern used for the experiments was generated by means of the CPG-joint-space control scheme described in Sect. 3, with the parameters for the straight-line locomotion pattern.
• The locomotion pattern's phase is varied between 0 and 360 degrees using a fixed step size of 10 degrees.
• External forces with different magnitudes are applied to the robot's body in the backward and forward directions for each tested phase of the locomotion pattern; the aim is to determine those force magnitudes that produce a loss of balance during bipedal locomotion, causing the robot to fall.
• The information provided by the accelerometer was analysed and used to predict the cases where the robot falls down whenever the external force is applied to its body. Later, this information is used to activate the phase resetting mechanism.
• Several experiments were performed in the Webots simulator in order to characterize the phase resetting controllers by means of the definition of the two PRCs for those external forces that cause a loss of balance during biped locomotion.


Fig. 4. Phase resetting curve found for the experiments using perturbations in the forward direction.


Fig. 5. Phase resetting curve found for the experiments using perturbations in the backward direction.

The phase shifts that the system must apply in order to prevent the robot from falling were determined in simulation; these values were used to build the corresponding PRC.
• The phase resetting controller is characterized after the definition of the PRCs; namely, if the system detects an external force applied to the robot, it uses the previously estimated PRC to calculate the amount of phase shift required to quickly recover balance.
A compact sketch of this characterization loop is given below.
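In the sketch, the force range, the grid of candidate phase shifts and the run_trial interface to the simulator are hypothetical placeholders; in the paper these steps are carried out by dedicated Webots controllers.

```python
import numpy as np

PHASES = np.arange(0, 360, 10)            # tested phases, 10-degree step size
FORCES = np.arange(5.0, 40.0, 5.0)        # candidate push magnitudes [N] (illustrative)
SHIFTS = np.linspace(0.0, 2 * np.pi, 36, endpoint=False)   # candidate phase shifts [rad]

def characterize_prc(run_trial, direction):
    """Build one phase resetting curve for pushes applied in a given direction.

    run_trial(phase_deg, force, direction, shift_rad) -> True if the robot keeps
    its balance; shift_rad=None means open loop (no phase reset).  This callable
    stands in for the Webots experiment controller.
    """
    prc = {}
    for phase in PHASES:
        # Forces that make the open-loop robot fall at this phase.
        falling = [f for f in FORCES if not run_trial(phase, f, direction, None)]
        if not falling:
            prc[phase] = 0.0              # no reset needed at this phase
            continue
        # Smallest phase shift that recovers balance for all of those forces.
        for shift in SHIFTS:
            if all(run_trial(phase, f, direction, shift) for f in falling):
                prc[phase] = shift
                break
        else:
            prc[phase] = float("nan")     # no single shift recovered balance
    return prc
```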

4.1 Experimental Results

In the experiments performed, the external forces were considered to be applied to the robot's trunk along the forward and backward directions. The simulator allows the definition of the exact point on the robot's body at which the force is applied, as well as its desired magnitude and direction. The external force was also applied at a known phase of the locomotion pattern and at the same point on the robot's body, in order to test the system under the same dynamic conditions. The whole system was implemented in Webots using several controllers that interact with each other in order to allow the complete study of the whole system in simulation. The forces that guarantee that the robot will fall down whenever the feedback mechanism is not activated have been used to define the PRCs and also to test the system operating in both open and closed loop. In all the experiments, the robot starts its motion from the same position.


The phase resetting curves PRC1 and PRC2, shown in Figs. 4 and 5 respectively, were obtained after performing several experiments in the simulator. Using the methodology described in this paper, several PRCs can be implemented and activated using different sensory information. Experimental results showing the behaviour of the overall system in the simulated workspace can be found on the companion website (see footnote 1) and in Figs. 6, 7, 8 and 9.

Fig. 6. System response with the phase resetting controller deactivated for a forward perturbation.

Fig. 7. System response with the phase resetting controller activated for a forward perturbation.

Fig. 8. System response with the phase resetting controller deactivated for a backward perturbation.

1 Companion website: https://youtu.be/C8FmdlcX7Ts.


Fig. 9. System response with the phase resetting controller activated for a backward perturbation.

5 Conclusions

The phase resetting mechanism is a suitable feedback strategy for humanoid robots with reduced computational capabilities. Furthermore, according to the experiments, this feedback strategy has a quick and effective response, able to deal with considerable external perturbations. In this paper a methodology to automatically characterize phase resetting controllers by taking advantage of simulation experiments is presented. Some experiments have been performed; however, an exhaustive characterization must be made in order to deal with all possible unstable situations for the humanoid robot. Thus, future work will include the rigorous study of feedback controllers in order to cope with more complex external perturbations. Future research will be carried out in order to propose a generalized modelling and characterization of phase resetting controllers under different conditions. The final goal is to first develop an initial model for the controllers in a simulated scenario which can later be adjusted to the real robot by performing a reduced number of experiments compared to the number currently needed to characterize these controllers.

References
1. Nakanishi, M., Nomura, T., Sato, S.: Stumbling with optimal phase reset during gait can prevent a humanoid from falling. Biol. Cybern. 95(5), 503–515 (2006). http://dx.doi.org/10.1007/s00422-006-0102-8
2. Nomura, T., Kawa, K., Suzuki, Y., Nakanishi, M., Yamasaki, T.: Dynamic stability and phase resetting during biped gait. Chaos Interdisc. J. Nonlinear Sci. 19(2), 026103, 1–12 (2009). http://dx.doi.org/10.1063/1.3138725
3. Cristiano, J., Garcia, M.A., Puig, D.: Deterministic phase resetting with predefined response time for CPG networks based on Matsuoka's oscillator. Robot. Auton. Syst. 74, 88–96 (2015). http://dx.doi.org/10.1016/j.robot.2015.07.004
4. Cristiano, J., Puig, D., Garcia, M.A.: Efficient locomotion control of biped robots on unknown sloped surfaces with central pattern generators. Electron. Lett. 51(3), 220–222 (2015). http://dx.doi.org/10.1049/el.2014.4255
5. Matsuoka, K.: Sustained oscillations generated by mutually inhibiting neurons with adaptation. Biol. Cybern. 52(6), 367–376 (1985). https://link.springer.com/article/10.1007/BF00449593


6. Morimoto, J., Endo, G., Nakanishi, J., Cheng, G.: A biologically inspired biped locomotion strategy for humanoid robots: modulation of sinusoidal patterns by a coupled oscillator model. IEEE Trans. Robot. 24(1), 185–191 (2008). http://dx.doi.org/10.1109/TRO.2008.915457
7. Matsuoka, K.: Analysis of a neural oscillator. Biol. Cybern. 104(4–5), 297–304 (2011). http://dx.doi.org/10.1007/s00422-011-0432-z
8. Zhang, D., Poignet, P., Widjaja, F., Ang, W.T.: Neural oscillator based control for pathological tremor suppression via functional electrical stimulation. Control Eng. Pract. 19(1), 74–88 (2011). https://doi.org/10.1016/j.conengprac.2010.08.009
9. Gouaillier, D., Hugel, V., Blazevic, P., Kilner, C., Monceaux, J., Lafourcade, P., Marnier, B., Serre, J., Maisonnier, B.: Mechatronic design of NAO humanoid. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 769–774. IEEE (2009). http://dx.doi.org/10.1109/ROBOT.2009.5152516
10. Cristiano, J., Puig, D., Garcia, M.: Generation and control of locomotion patterns for biped robots by using central pattern generators. J. Phys. Agents 8(1), 40–47 (2017). http://dx.doi.org/10.14198/JoPha.2017.8.1.06

Interface Design of Haptic Feedback on Teleoperated System

Francisco Rodríguez-Sedano1(✉), Pere Ponsa2, Pablo Blanco-Medina1, and Luis Miguel Muñoz2

1 Robotic Group, University of León, MIC, Campus de Vegazana, s/n, 24071 León, Spain
[email protected]
2 Automatic Control Department, Universitat Politècnica de Catalunya Barcelona Tech, Av. Víctor Balaguer, 1, 08800 Vilanova i la Geltrú, Barcelona, Spain

Abstract. The use of teleoperation systems can provide substantial advantages in various fields such as remote explosive dismantling and rescue operations, as well as improve the accuracy of certain tasks like surgery. These systems offer feedback mainly based on visual information of the task performed, showing relevant data of the systems used as well as their surroundings. Our proposal involves the use of haptic devices to remotely control a robot in order for the user to receive haptic feedback alongside the visual information. In a first design that several experts evaluated, it was concluded that the interface used had to be substantially improved. This communication describes the process developed to design and evaluate an interface to improve user interaction with the teleoperation system.

Keywords: Teleoperation · Human-Robot interface · Baxter robot · Haptics

1 Introduction

Certain tasks such as surgery, explosive dismantling and rescue operations can be quite complex and even dangerous to perform in a direct manner. In these kinds of situations, teleoperated robots or other systems can be very helpful by reducing the risk involved and potentially increasing the overall accuracy of said tasks by reducing the human error factor that can be caused by various motives, such as hand tremor. In fact, these types of systems are rather proficient in the areas and tasks introduced above [1]. These systems must provide proper enhancements for the user experience. In order to do this, they use different types of interfaces (e.g. cameras, microphones or input devices) so they can provide enough information to the user and complete his/her perception of the controlled device and its surroundings. However, the video feedback obtained from the cameras can be limited by certain technical constraints [2, 3], such as a decreased view due to obstacles or even bad resolution from the camera itself. To make up for these limitations, haptic devices can be used in certain applications to help the user better understand the systems involved as well as the items in the remote environment [4, 5]. In the already mentioned surgery scenario, the tissue's texture or the weight of other items could be replicated with the help of haptic devices. This information can


be presented on a graphical user interface, but the mere representation by numerical standards would not be as effective as the actual feeling those numbers represent in reality. We believe this type of information to be very valuable in the context of teleoperated systems. The motivation behind this work is to design and evaluate an efficient interface to test whether the feel of touch improves the performance and user experience in teleoperation tasks. To verify this, we have used an experiment designed in the robotics group of the University of León that has a sufficiently long exposure to the haptic feedback [6]. The teleoperation system of that experiment consists of a Geomatic Touch haptic interface and a Baxter Research Robot, acting as the master and the slave devices respectively. Additionally, two external cameras will be used to complete the user's perception of the controlled environment. This environment will be described in more detail in the next section. The physical appearance of the Baxter robot was designed to create the impression that Baxter is a member of the team: emotional expressions, human-robot task collaboration [7]. However, in this work, the most important feature is that Baxter can detect when there are no more objects to pick, and the aim is to show Baxter's behavior inside the graphical interface. Regarding the user experience in teleoperated systems, this work follows the recommendations of the ISO 9241-920 Ergonomics of human-system interaction (interface design, user-centered design) [8]. This document is organized as follows. Section 2 will cover in greater detail the environment in which the experiment will be performed and the methodology used to design the interface. Section 3 will describe the results obtained in the evaluation of the designed interface prototypes. Section 4 will give some recommendations from the point of view of usability testing. Finally, the document ends with our conclusions and future lines.

2 Materials and Methods

Our main goal with this project is to prove the utility of haptic feedback in remote control situations and its ability to improve the user's perception and overall performance in different scenarios while performing varied tasks. To do this, we propose an experiment in which we will teleoperate a robot with a haptic device, trying to complete a simple task while providing haptic feedback. In addition, in a second phase of the project, we will also verify our initial hypothesis via a questionnaire that will be filled in by users who are familiar with these types of systems, which will also help us to improve our design. The task performed will be similar to a previous design introduced in [6]. In a first phase, this task was performed without the use of a proper interface, and the conclusions we reached revealed that the interaction with the environment was not natural, so it would have to be improved in future work. This is the goal of this paper. But first let us briefly describe the working environment.


2.1 The Environment

A teleoperation system is composed of at least two devices, a master device and a slave device, which communicate through a communication channel that connects them physically. The master's movements are replicated on the slave, making it possible to control the latter's motion. In our case, the slave will communicate to the master specific information regarding the environment, which will be replicated on the latter so as to provide haptic feedback in certain scenarios. Figure 1 shows the control of the teleoperation system with human interaction.

Fig. 1. The teleoperated system.

As we have already mentioned, the main goal of our project is to demonstrate that haptic feedback improves the user's experience and performance, so our master device will be a haptic device. As for our slave device, we will use a Baxter Research Robot, completing our master-slave system. The task will be simple to perform, and the environment will suit Baxter's large size. Furthermore, in order to prove the utility of the interface's feedback, the user will not be able to directly see the slave device by regular means, finding an obstacle if they try to do so. Additionally, the user will be able to use a second monitor to perform training previous to the aforementioned task, so as to easily grasp the concepts of movement, picking and releasing. This training process will not have haptic feedback available, merely focusing the user on understanding said concepts. The entire environment can be seen in Fig. 2.


Fig. 2. Detail of the (users, teleoperated) environment.

In order to decrease communication delays as much as possible, both devices are connected to the same workstation via Ethernet cables. For the experiment, our master device must be able to send its movements to the slave robot, but it also has to be able to provide the user with kinesthetic information, that is, to reflect the forces or vibrations sensed by Baxter. For these reasons, we decided to use a Geomatic Touch haptic device, a 6DOF device that has 3 DOF dedicated to standard movements (X, Y and Z Cartesian coordinates) while the remaining 3 help replicate orientation (pitch, roll and yaw movements). In order to make it work on our workstation, we have used the package "Phantom Omni", developed by Suarez-Ruiz [9], adapting it to our ROS Indigo distribution because it was developed for ROS Hydro. Our chosen slave device was the Baxter Research Robot. Baxter consists of two 7-degree-of-freedom arms with inbuilt feedback systems, although for our experiment we only used one of the arms. It was developed to allow collaborative human-robot co-work and to improve Human-Robot Interaction. Its 7DOF system provides kinematic redundancy, which allows for enhanced object manipulation. It also has several mounted sensors: one camera located in each gripper, which will be used for display purposes, and an infrared sensor with a 4-40 cm range. It is connected to the same workstation as the master device via Ethernet so as to reduce the delay between them as much as possible. In order to interact with the environment, we must be able to properly translate the movements of the master device to Baxter. However, as pointed out in our previous work [6], both devices have a different number of DOFs. Due to this, adjustments must be carried out in order to reduce the complexity of the movement


correlation. Our approach will be to reduce the number of joints used in both systems, trying to replicate a standard 3DOF movement on the X, Y and Z axes. Specific details about our implementation can be found in our previous work, which concerns similar issues regarding teleoperation of the same systems [6]. Some adjustments have been made to this particular implementation, including an increased weight of the grabbed checkers piece as well as the ability to stop the teleoperation at any given time. This is due to the fact that the user can run out of physical space to perform a specific movement because of the different workspaces of the two devices. With this function, when the user stops the teleoperation he or she can reset the haptic device to a different position without the movement being replicated, effectively increasing their workspace and reducing frustration and other negative consequences of the limited original space.

2.2 The Interface

Even though our primary objective is to ascertain the utility of haptic feedback in teleoperated environments, we must also provide the user with a proper interface that allows them to grasp the concept and layout of the remote environment, as well as displaying relevant information about the teleoperation itself. A first version of the interface, designed for this experiment, can be seen in Fig. 3.

Fig. 3. First version of the interface.

Once this first prototype interface was designed, we had to evaluate it. For this, a group of 7 expert users was chosen. Five of these experts were familiar with the use of haptic devices. The other two had developed the evaluation methodology to be used and described below.


To evaluate the interface taking into account the experts' perceptions about improvements in the design, we decided to do it from a qualitative perspective. For this, we collected the experts' impressions using the think-aloud technique [10] during the development of a task defined with the teleoperated system. After their experience with the system, a semi-structured interview was conducted. Specific software was used to record the screen and user actions. The interview was also recorded with a video camera. After analyzing the results of this first evaluation, a second prototype interface was designed, taking into account the suggestions of the experts, and later evaluated according to the same methodology.

2.3 Methodology

For the evaluation of the interface prototypes, we used the Ergonomic Guideline for Supervisory Control Interface Design (GEDIS). The GEDIS guideline provides a series of recommendations to consider when designing an interface or evaluating interfaces already created [11]. GEDIS is not a standard; it is a guideline that seeks to cover all the aspects of interface design. But in our case study it is appropriate because it provides a clear view of the usability of the interface by the end users. The guide is based on the analysis and evaluation of 10 indicators to finally obtain an overall index of the interface quality and possible areas for improvement. For the correct use of the GEDIS guideline, collaboration between the teleoperation technical team and the human factors technician is necessary. In what follows, two evaluations of the interface are presented. The indicator values are used to calculate the global evaluation by solving the following formula:

\mathrm{Global\_evaluation} = \frac{\sum_{i=1}^{10} p_i \, ind_i}{\sum_{i=1}^{10} p_i} \qquad (1)

where ind = indicator and p = weight. All indicators have the same weight (p1 = p2 = ... = p10 = 1). The guide recommends that the global evaluation index should not be lower than 3 points. A positive evaluation should reach at least 4 points. The results of the evaluation of both prototypes are discussed below.
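As a quick check of Eq. (1), the snippet below recomputes the global index from the indicator scores of Table 1 for the second evaluation. Treating the non-assessed indicators as excluded from both sums is our reading of the procedure, not something stated explicitly in the guideline.

```python
def gedis_global(indicators, weights=None):
    """Weighted mean of the assessed GEDIS indicators (None = not assessed)."""
    if weights is None:
        weights = {name: 1.0 for name in indicators}
    num = sum(weights[n] * v for n, v in indicators.items() if v is not None)
    den = sum(weights[n] for n, v in indicators.items() if v is not None)
    return num / den

second_eval = {"architecture": 2.6, "distribution": 3.7, "navigation": 3.0,
               "color": 5.0, "text font": 5.0, "status of the devices": 2.5,
               "process values": 3.0, "graphs and tables": None,
               "data-entry commands": 3.0, "alarms": None}
print(round(gedis_global(second_eval), 1))   # -> 3.5, the value reported in Sect. 3.1
```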

3 Results

Table 1 shows the two evaluations of the first and second versions of the interface (where A = appropriate, M = medium, N.A. = not appropriate). In this study only seven indicators were measured in the first evaluation and eight in the second evaluation.

Table 1. Applying the GEDIS guideline.

Indicator / sub-indicator                   Numeric/qualitative scale                First eval. (Apr'17)   Second eval. (Jul'17)
Architecture                                                                         2.6                    2.6
  Map existence                             [YES, NO] -> [5, 0]                      0                      0
  Number of levels le                       [le < 4, le > 4] -> [5, 0]               5                      5
  Division: plant, area, subarea, team      [a, m, na] -> [5, 3, 0]                  3                      3
Distribution                                                                         2.0                    3.7
  Model comparison                          [a, m, na] -> [5, 3, 0]                  3                      3
  Flow process                              [clear, medium, no clear] -> [5, 3, 0]   0                      5
  Density                                   [a, m, na] -> [5, 3, 0]                  3                      3
Navigation                                                                           0                      3
  Relationship with architecture            [a, m, na] -> [5, 3, 0]                  0                      3
  Navigation between screens                [a, m, na] -> [5, 3, 0]                  0                      3
Color                                                                                5.0                    5.0
  Absence of non-appropriate combinations   [YES, NO] -> [5, 0]                      5                      5
  Color number c                            [4 < c <= 7] -> [5, 0]                   5                      5
  Blink absence (no alarm situation)        [YES, NO] -> [5, 0]                      5                      5
  Contrast screen versus graphic objects    [a, m, na] -> [5, 3, 0]                  5                      5
  Relationship with text                    [a, m, na] -> [5, 3, 0]                  5                      5
Text font                                                                            4.5                    5.0
  Font number f                             [f < 4, f > 4] -> [5, 0]                 5                      5
  Absence of small font (smaller than 8)    [YES, NO] -> [5, 0]                      5                      5
  Absence of non-appropriate combinations   [YES, NO] -> [5, 0]                      5                      5
  Abbreviation use                          [a, m, na] -> [5, 3, 0]                  3                      5
Status of the devices                                                                1.5                    2.5
  Uniform icons and symbols                 [a, m, na] -> [5, 3, 0]                  3                      5
  Status team representativeness            [YES, NO] -> [5, 0]                      0                      0
Process values                                                                       3.0                    3.0
  Visibility                                [a, m, na] -> [5, 3, 0]                  3                      3
  Location                                  [a, m, na] -> [5, 3, 0]                  3                      3
Graphs and tables                                                                    Not assessed           Not assessed
Data-entry commands                                                                  Not assessed           3.0
  Visibility                                [a, m, na] -> [5, 3, 0]                  -                      3
  Usability                                 [a, m, na] -> [5, 3, 0]                  -                      3
  Feedback                                  [a, m, na] -> [5, 3, 0]                  -                      3
Alarms                                                                               Not assessed           Not assessed


From the architecture's perspective, Fig. 3 shows a single-screen application, which can evolve towards a broad rather than a deep structure. It is recommended to add a main home screen. In this main screen it is necessary to differentiate between the use that will be made by the user and the use that the facilitator will make.
– The user (expert in teleoperated tasks) will access his/her screen related to the task to be performed.
– The facilitator (expert in usability testing) will access the configurable parameters: the way of recording the data, the frames per second that are monitored on the screen, the list of numerical values for tracking failures in the grip of the piece, and the number of collisions detected.
– It is necessary to evaluate migrating the gathered data to an Excel file for later analysis.
– Recording the video sequence of the cameras will allow analyzing Baxter's behavior and the user's behavior.
It is advisable for the user to identify the necessary information in the local area (the user's environment) and to differentiate it from the necessary on-screen information in the remote area. In a conventional industrial control room, the graphic interface presents a certain complexity. Instead, in teleoperation it is more important to enhance the user's sense of presence in the workplace where the task is performed, so the graphical interface is less complex. In the case of two screens, developers could name each screen at the top of the screen. For the screen of Fig. 3, for example, the name can be "Teleoperation" for the pick-and-place task. The process flow is not clear: although we can correctly see the task in the image provided by the cameras, it is confused with the lower part in which "Save Data" is indicated. Should the user save the data? In any case, the user must have a guide that indicates a sequential action: monitor, use the Phantom interface to perform the task, save data. The main distribution recommendations are:
– Split the screen into two parts.
– Put a name above each camera image (Camera in Body Robot, Camera in Robot Arm).
– Developers could add a gray rectangle sent to the background; this will highlight the box with the information on faults, falls and collisions. These variables are perceived by the user as grouped.
– It is convenient to sort the information of the cited variables and analyze the size of each image: the two images could have the same size.
– The Save Data image suggests that the user should quit the application.
For the Navigation indicator, it is recommended to add screens and navigation buttons between screens so that the user can move from the main screen to the screen of Fig. 3, and can return or quit the application. For the Color indicator, the use of a gray background is appropriate as it highlights the camera images; this improves the user's sense of presence in the remote place where the task is carried out.


It is important to evaluate the color of the table on which the checkers piece is placed. That color is the one captured by the camera on the graphic screen, hence the user must see, through the camera image, the contrast between the table surface and the piece to be taken. The use of red should be revisited. Usually red is used to indicate alarms, faults, etc., so it is necessary to evaluate whether it is convenient for the checkers piece and the target to be red. In any case, it is not appropriate for the image of the user's silhouette to be red, since the focus is drawn there when it should be on the image transmitted by the cameras. For the Text indicator, the recommendation is to avoid font sizes below eight. Some text is in Spanish, while "Swap" appears in English. Check the nomenclature used for the cameras: is the webcam the Baxter body camera? For the Status of the Devices indicator, check the iconography used, because it shows a combination of text and some icons. The save icon suggests leaving the application. When the user is using the Phantom interface and the robot is moving, this feedback of the movement should be visible on the screen. Therefore, a small box is missing in the upper center that indicates the state of the robot: whether it is at rest or in motion, and whether the gripper is open or closed. Round objects, emulating LED lights within a text box, can give us this information. For Process values, it should be checked which variables and which numeric values should appear on the screen. Failure and collision information can be logged and later shown as historical trends. The question is whether this information should be consulted while the task is being carried out or at the end of the task. Regarding the haptic feedback, it is necessary to assess whether it can be specified if the force that the user's hand is exerting on the Phantom interface is low, medium or high. The Graphs and tables indicator is not assessed. Data-entry commands is not assessed. The Alarms indicator is not assessed. Finally, the global evaluation index of the first interface is 2.6 (on a 0-5 numeric scale).

3.1 Proposal for Interface Improvements

After the initial feedback received by applying the GEDIS methodology, we improved our original interface to an upgraded version that would be used for the future task and related questionnaires. This interface is displayed in Fig. 4. Among other things, it shows the user a display of two interchangeable cameras for better perception, boxes indicating whether teleoperation is currently enabled or not as well as haptic feedback, and the number of fails, drops and collisions caused during the teleoperation. It also displays a progress bar that indicates the proximity of an item by using Baxter's infrared capabilities. A "Fail" is caused by the user opening the gripper with the haptic device to try to grab a piece, only to fail at said task because of a wrong measurement of the actual distance to the piece.


Fig. 4. Second version of the interface.

Similarly, a "Drop" will be added when the user drops a properly grabbed piece onto the table. Finally, collisions are also counted if the user hits surrounding items with enough speed or force. However, Baxter has proven quite sturdy when registering forces high enough to cause a proper collision, the minimum of said force being remarkably high. With that said, Baxter is more sensitive to fast movements applied to itself, so collisions will serve both as a register of sufficiently strong impacts and of fast movements made with the master device. An example of such a movement would be a quick circular move, which will cause Baxter to complain about the speed reached when trying to replicate it, but never compromises the stability of the teleoperation. In addition to these values, we will also record the overall time spent performing the task as well as the time spent on each of the cameras, in order to understand which of them is more useful for the users. After applying GEDIS to evaluate this new prototype, the score has improved in the following indicators: distribution, navigation, text font and status of the devices. In addition, it has been possible to evaluate the indicator "Data-entry commands", which improves the final score. The global evaluation index of the second version is 3.5 (on a 0-5 numeric scale). This indicates that the evaluation of the first prototype has been improved, although it does not reach the 4.0 points necessary for a positive evaluation.

4 Discussion

In addition to evaluating the interface, the experts who participated in the experiment reported some possible improvements to the task to be performed in the experiment. In previous work [6] this task had been designed following the Guideline for Ergonomic Haptic Interaction Design (GEHID). The GEHID guideline for ergonomic haptic interaction design provides a method that seeks to cover aspects of the haptic interface design and the human-robot task in order to improve the performance of the overall teleoperated system [12]. The task developed in the first test (April 2017) is carried out correctly, but the temporal interval in which forces and moments can appear, due to the haptic feedback on the hand of the human performing the task, is very brief. Thus, performing the task with or without haptic feedback may not produce much differentiation. It is recommended to think about a different task, for example the insertion of one part into another, in which the contact between the parts causes a force/moment feedback on the hand of the user that is more intense (and measurable) and is applied over a longer time interval. Increasing the complexity of the task, the time to carry it out and the effort of the human hand can enable a posteriori evaluation of physical effort, fatigue and comfort. Following the user-centered approach, usability testing in the laboratory follows a set of steps. The main recommendation is to improve the procedure of these steps. For instance:
– Define a consent form.
– Explain the aim of the experimental study, location, data, etc.
– Give a technical description of the teleoperated task. The user could be expert or novice and must pay attention to the right use of the teleoperated system.
– At the end of the experimental session, define the satisfaction questionnaires adapted for this study.
Taking into account all these recommendations, the design of the interface must be modified before using it in the experiment.

5 Conclusions and Future Work

This paper presents our interface design for a haptic teleoperation system and its evaluation through the GEDIS methodology. The use of haptic devices can increase the user's performance by a wide margin in different tasks performed in various areas. However, it has to be complemented with the use of a graphical interface to offer both the haptic feedback and the visual feedback, enhancing the user's experience with the remote environment as much as possible. We will further research and improve the topics presented in our previous work using our newly developed interface, improving the user's perception of the remote environment. Our future work will be focused on conducting an experiment designed to evaluate the use of haptic feedback in different tasks with a more qualitative approach, analyzing


the user’s performance by recording data that is relevant to the teleoperated system. This experiment will follow recommendations proposed by the GEHID guideline in order to prove our initial proposal about the implementation of haptic feedback in teleoperated systems.

References
1. Vertut, J., Coiffet, P.: Teleoperation and Robotics. Applications and Technology, vol. 3B. Springer, Netherlands (1985)
2. Woods, D.D., Tittle, J., Feil, M., Roesler, A.: Envisioning human-robot coordination in future operations. IEEE Trans. Syst. Man Cybern. C (Appl. Rev.) 34(2), 210–218 (2004)
3. Woods, D., Watts, J.: How not to have to navigate through too many displays. In: Helander, M., Landauer, T., Prabhu, P. (eds.) Handbook of Human-Computer Interaction. Restricting the Field of View: Perceptual and Performance Effects, 2nd edn., pp. 1177–1201. Elsevier Science, Amsterdam (1997)
4. Son, H.I., Franchi, A., Chuang, L.L., Kim, J., Bulthoff, H.H., Giordano, P.R.: Human-centered design and evaluation of haptic cueing for teleoperation of multiple mobile robots. IEEE Trans. Syst. Man Cybern. B (Cybern.) 43(2), 597–609 (2013)
5. Sitti, M., Hashimoto, H.: Teleoperated touch feedback from the surfaces at the nanoscale: modeling and experiments. IEEE ASME Trans. Mechatron. 8(2), 287–298 (2003)
6. Rodríguez Lera, F., Blanco Medina, P., Inyesto Alonso, L., Esteban, G., Rodríguez Sedano, F.: Strategies for Haptic-Robotic Teleoperation in Board Games: Playing Checkers with Baxter. WAF 2016, June 2016
7. Faneuff, J.: Designing for collaborative robotics. In: Follett, J. (ed.) Designing for Emerging Technologies. O'Reilly (2014)
8. ISO: ISO 9241-920. Ergonomics of Human-system Interaction - Part 920: Guidance on tactile and haptic interactions, Geneva (2009)
9. Suárez-Ruiz, F.: ROS Packages for Sensable Phantom Omni Device. GitHub (2014). https://github.com/fsuarez6/phantom_omni. Accessed 06 Apr 2016
10. Lewis, C.H.: Using the "Thinking Aloud" Method in Cognitive Interface Design. Technical Report RC-9265. IBM (1982)
11. Ponsa, P., Díaz, M.: Creation of an ergonomic guideline for supervisory control interface design. Eng. Psychol. Cognit. Ergon. (2007)
12. Muñoz, L.M., Ponsa, P., Casals, A.: Design and development of a guideline for ergonomic haptic interaction. In: Hippe, Z., Kulikowski, J.L., Mroczek, T. (eds.) Human-Computer Systems Interaction: Backgrounds and Applications 2: Part 2, pp. 15–19. Springer, Heidelberg (2012)

Machine Learning in Robotics

Deep Networks for Human Visual Attention: A Hybrid Model Using Foveal Vision

Ana Filipa Almeida, Rui Figueiredo(B), Alexandre Bernardino, and José Santos-Victor

Institute for Systems and Robotics, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
[email protected], {ruifigueiredo,alex,jasv}@isr.tecnico.ulisboa.pt

Abstract. Visual attention plays a central role in natural and artificial systems to control perceptual resources. Classic artificial visual attention systems use salient features of the image obtained from the information given by predefined filters. Recently, deep neural networks have been developed for recognizing thousands of objects and autonomously generating visual characteristics optimized by training with large data sets. Besides being used for object recognition, these features have been very successful in other visual problems such as object segmentation, tracking and, recently, visual attention. In this work we propose a biologically inspired object classification and localization framework that combines Deep Convolutional Neural Networks with foveal vision. First, a feed-forward pass is performed to obtain the predicted class labels. Next, we get the object location proposals by applying a segmentation mask on the saliency map calculated through a top-down backward pass. The main contribution of our work lies in the evaluation of the performances obtained with different non-uniform resolutions. We were able to establish a relationship between performance and the different levels of information preserved by each of the sensing configurations. The results demonstrate that we do not need to store and transmit all the information present in high-resolution images since, beyond a certain amount of preserved information, the performance in the classification and localization task saturates.

Keywords: Computer vision · Deep neural networks · Object classification and localization · Space-variant vision · Visual attention

1 Introduction

The available human brain computational resources are limited; therefore it is not possible to process all the sensory information provided by the visual perceptual modality. Selective visual attention mechanisms are the fundamental mechanisms in biological systems responsible for prioritizing the elements of


the visual scene to be attended. Likewise, an important issue in many computer vision applications requiring real-time visual processing resides in the computational effort involved [2]. Therefore, in the past decades, many biologically inspired attention-based methods and approaches were proposed with the goal of building efficient systems capable of working in real time. Hence, attention modeling is still a topic under active research, studying different ways to selectively process information in order to reduce the time and computational complexity of the existing methods. Nowadays, modeling attention is still challenging due to the laborious and time-consuming task of creating models by hand, trying to tune where (regions) and what (objects) the observer should look at. For this purpose, biologically inspired neural networks have been extensively used, since they can implicitly learn those mechanisms, circumventing the need to create models by hand. Our work is inspired by [4], which proposed to capture visual attention through feedback Deep Convolutional Neural Networks. Similarly in spirit, we propose a biologically inspired hybrid attention model that is capable of efficiently recognizing and locating objects in digital images using human-like vision. Our method comprises two steps: first, we perform a bottom-up feed-forward pass to obtain the predicted class labels (detection). Second, a top-down backward pass is made to create a saliency map that is used to obtain object location proposals after applying a segmentation mask (localization). The main contributions of this paper are the following: first, we evaluate the performance of our methodology for various well-known Convolutional Neural Network architectures that are part of the state of the art in tasks of detection and localization of objects when combined with multi-resolution, human-inspired foveal vision. Then, we establish a relationship between performance and the different levels of information preserved by foveal sensing configurations. The remainder of this paper is organized as follows: Sect. 2 overviews the related work and some fundamental concepts behind the proposed attentional framework. In Sect. 3, we describe in detail the proposed methodologies, more specifically, a theoretical explanation of an efficient artificial foveation system and a top-down, saliency-based mechanism for class-specific object localization. In Sect. 4, we quantitatively evaluate our contributions. Finally, in Sect. 5, we wrap up with conclusions and ideas for future work.

2 Background

The proposed object classification and localization framework uses several biologically inspired attention mechanisms, which include space-variant vision and Artificial Neural Networks (ANN). As such, in the remainder of this section we describe the fundamental concepts from neuroscience and computer science on which the proposed framework is based.

2.1 Space-Variant Vision

In this work, we propose to use a non-uniform distribution of receptive fields that mimics the human eye for simultaneous detection and localization tasks. Unlike the conventional uniform distributions which are typical in artificial visual systems (e.g. in standard imaging sensors), the receptive field distribution in the human retina is composed of a region of high acuity – the fovea – and the periphery, where central and low-resolution peripheral vision occur, respectively [14]. The central region of the retina of the human eye, named the fovea, is a photoreceptor layer predominantly constituted by cones, which provide localized high-resolution color vision. The concentration of these photoreceptor cells decreases drastically towards the periphery, causing a loss of definition. This space-variant resolution decay is a natural mechanism to decrease the amount of information that is transmitted to the brain (see Fig. 2). Many artificial foveation methods that attempt to mimic similar behavior have been proposed in the literature: geometric methods [15], filtering-based methods [16] and multi-resolution methods [5].

2.2 Deep Convolutional Neural Networks

Deep neural networks are a subclass of Artificial Neural Networks (ANN) and are characterized by having several hidden layers between the input and output layers. The deep learning breakthrough occurred in 2006, when researchers brought together by the Canadian Institute for Advanced Research (CIFAR) were capable of training networks with many more layers for the handwriting recognition task [9]. As far as visual attention is concerned, the most commonly used are Convolutional Neural Networks (CNN), which are feed-forward deep ANN that take into account the spatial structure of the inputs. These have the ability to learn discriminative features from raw data input and have been used in several visual tasks like object recognition and classification. A CNN is constituted by multiple stacked layers that filter (convolve) the input stimuli to extract useful and meaningful information depending on the task at hand. These layers have parameters that are learned in a way that allows filters to automatically adjust to extract useful information, so there is no need to manually select relevant features. In this work we study the performance of state-of-the-art CNN architectures when used within our attentional framework, namely CaffeNet [8], GoogLeNet [13] and VGGNet [12].

3 Methodologies

Our hybrid detection and localization methodology can be briefly outlined as follows: In a first feed-forward pass, a set of object class proposals is computed (Sect. 3.2) which are further analyzed via top-down backward propagation to obtain proposals regarding the location of the object in the scene (Sect. 3.2).


More specifically, for a given input image I, we begin by computing a set of object class proposals by performing a feed-forward pass. The probability scores for each class label (1000 in total) are collected by accessing the network's output softmax layer. Then, restricting our attention to the five highest predicted class labels, we compute the saliency map for each one of those predicted classes (see Fig. 3). Then, a top-down back-propagation pass is done to calculate the score derivative of the specific class c. The computed gradient indicates which pixels are more relevant for the class score [11]. In the remainder of this section, we describe in detail the components of the proposed attentional framework.
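The class-specific saliency step can be sketched with a modern autodiff framework as follows. The use of PyTorch and a torchvision VGG model is our choice for illustration only (the paper evaluates CaffeNet, GoogLeNet and VGGNet), and the thresholding constant used to build the segmentation mask is arbitrary.

```python
import torch
import torchvision.models as models

model = models.vgg16(pretrained=True).eval()   # stand-in for the CNNs compared in the paper

def class_saliency(image, top=5, threshold=0.5):
    """Return the top class indices and one binary localization mask per class.

    image -- preprocessed tensor of shape (1, 3, H, W).
    """
    image = image.clone().requires_grad_(True)
    scores = model(image)                        # bottom-up feed-forward pass
    top_scores, top_idx = scores[0].topk(top)    # five highest predicted class labels

    masks = []
    for s in top_scores:
        model.zero_grad()
        if image.grad is not None:
            image.grad.zero_()
        s.backward(retain_graph=True)            # top-down pass: d(class score)/d(pixels)
        sal = image.grad.abs().max(dim=1)[0][0]  # max over channels -> (H, W) saliency map
        masks.append(sal > threshold * sal.max())  # simple segmentation mask
    return top_idx, masks
```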

3.1 Artificial Foveal Vision

Our foveation system is based on the method proposed in [3] for image compression (e.g. in encoding/decoding applications) which, unlike the methods based on log-polar transformations, is extremely fast and easy to implement, with demonstrated applicability in real-time image processing and pattern recognition tasks [1].

Fig. 1. A summary of the steps in the foveation system with four levels. The image g0 corresponds to the original image and h0 to the foveated image. The thick up arrows represent sub-sampling and the thick down arrows represent up-sampling.

Our approach comprises four steps, as follows. The first step consists of building a Gaussian pyramid with increasing levels of blur but similar resolution. The first pyramid level (level 0) contains the original image g0, which is downsampled by a factor of two and low-pass filtered, yielding the image g1 at level 1.

Fig. 2. Example images obtained with our foveation system, where f0 defines the size of the region with highest acuity (the fovea), from a 224 × 224 uniform-resolution image: (a) f0 = 30, (b) f0 = 60, (c) f0 = 90.

More specifically, the image g_{k+1} can be obtained from g_k via convolution with 2D isotropic and separable Gaussian filter kernels of the form

G(u, v, \sigma_k) = \frac{1}{2\pi\sigma_k^2} \, e^{-\frac{u^2 + v^2}{2\sigma_k^2}}
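A compact NumPy/OpenCV sketch of this multi-resolution foveation is shown below. The per-pixel level assignment and the linear blending between adjacent pyramid levels are one reasonable realisation of the method; they are not necessarily the exact blending scheme of [3].

```python
import cv2
import numpy as np

def foveate(img, fovea_center, f0, levels=4):
    """Approximate space-variant (foveal) resolution with a blurred-image pyramid.

    img          -- uniform-resolution input image (H, W, 3)
    fovea_center -- (x, y) pixel coordinates of the fovea
    f0           -- size of the highest-acuity region, as in Fig. 2
    """
    h, w = img.shape[:2]
    # Increasingly blurred copies of the image, all brought back to full resolution.
    blurred = [img.astype(np.float32)]
    g = img
    for _ in range(1, levels):
        g = cv2.pyrDown(g)                                    # down-sample by 2 + low-pass
        blurred.append(cv2.resize(g, (w, h)).astype(np.float32))

    # Per-pixel pyramid level grows with the distance from the fovea: level 0 inside
    # the fovea (d < f0), roughly one extra level for every doubling of the distance.
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.sqrt((xs - fovea_center[0]) ** 2 + (ys - fovea_center[1]) ** 2)
    lvl = np.clip(np.log2(d / f0 + 1.0), 0, levels - 1)

    lo = np.floor(lvl).astype(int)
    hi = np.minimum(lo + 1, levels - 1)
    t = (lvl - lo)[..., None].astype(np.float32)

    stack = np.stack(blurred)                                  # (levels, H, W, 3)
    out = (1 - t) * stack[lo, ys, xs] + t * stack[hi, ys, xs]  # blend adjacent levels
    return out.astype(img.dtype)
```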

The GP likelihoods are weighted by Nk according to (4) and (5) and used to compute posterior association probabilities p(an = k) by normalisation. Our algorithm produces a desired number of samples for the scene segmentation, which can be used to draw probabilistic conclusions about the segmentation and the location and shape of objects.

3 Multiple View Segmentation

3.1 Pose Generation and Cloud Registration

The vine-like structure of apple trees on a trellis comprises leaves, branches and fruit. The leaves can occlude the fruit and also induce noise in the observed point clouds. For these reasons the input point clouds are first filtered with a standard noise-removal step. At this stage, the clouds contain fragments of the apples, but morphologically these are identical to leaves, since both are small planar surfaces. In order to achieve better results in the probabilistic segmentation we perform registration of point clouds taken from multiple points of view. A pose planner generates camera poses such that the scene is observed from different angles.


Fig. 1. System block diagram.

Figure 1 summarises the multiview probabilistic segmentation process. It is assumed that the robot is positioned in front of the apple branches. First, a snapshot is taken and used to infer the relative angle of the apple "vine" with respect to the arm. Using this angle, we compute candidate poses reachable by the arm, arranged on a portion of a sphere. Then, at each step of the algorithm a pose is selected that maximises the distance from the history of past poses, granting good convergence of the mapping process. From each viewpoint a new cloud is taken, which is registered using ICP plus information from the arm controller that provides an initial estimate of the point cloud's true pose. Finally, the map of the scene is filtered by color, using the Euclidean distance between colors and a model value in HSV. Then, it is simplified using the voxel cloud segmentation algorithm described in [10] before being introduced into the Dirichlet segmentation process.
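One way to realise the pose generation and the "farthest from the history" selection is sketched below. The spherical parameterisation, the angular ranges and the greedy max-min criterion are assumptions used for illustration rather than the exact planner of the paper.

```python
import numpy as np

def candidate_poses(center, radius, yaw0, n_az=7, n_el=3):
    """Camera positions on a portion of a sphere in front of the fruit wall.

    center -- 3D point on the trellis that the camera should look at
    yaw0   -- inferred orientation of the apple 'vine' relative to the arm [rad]
    """
    poses = []
    for az in np.linspace(yaw0 - 0.6, yaw0 + 0.6, n_az):      # roughly +/- 35 degrees
        for el in np.linspace(-0.3, 0.3, n_el):
            offset = radius * np.array([np.cos(el) * np.cos(az),
                                        np.cos(el) * np.sin(az),
                                        np.sin(el)])
            poses.append(center + offset)
    return np.array(poses)

def next_view(candidates, history):
    """Pick the candidate farthest (max-min distance) from all visited viewpoints."""
    if len(history) == 0:
        return candidates[0]
    hist = np.asarray(history)
    dists = np.linalg.norm(candidates[:, None, :] - hist[None, :, :], axis=-1)
    return candidates[dists.min(axis=1).argmax()]
```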

3.2 Arm Controller

Path Planning and Control. The arm path planning framework leverages recent breakthroughs in trajectory optimization algorithms that perform very well in high-dimensional configuration spaces. The algorithms used in our system are TrajOpt [13], GPMP2 [14] and CHOMP [15]. In preliminary experiments we found that none of the algorithms performed consistently better than the others across all scenarios: CHOMP had the highest success rate, while TrajOpt had significantly better computation time. The algorithms can fail for several reasons, most often because they are inherently prone to getting trapped in local minima, and there is no guarantee of ever finding a solution.

Table 1. Average computation time and success rates per planner. The Ranked planner runs all three optimisers in parallel and, unless all planners fail, returns the first successful solution.

Planner   Computation time (s)   Success (%)
TrajOpt   0.09                   94
GPMP2     0.09                   68
CHOMP     1.15                   96
Ranked    0.088                  98
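The "Ranked" strategy in Table 1 can be sketched as below: the optimisers are launched in parallel and the first successful plan is returned. The planner callables are placeholders, since the actual implementation (OpenRAVE planner plugins, described later in this section) is not shown in the paper.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def ranked_plan(start, goal, planners):
    """Run several planners in parallel and return the first successful path.

    planners: iterable of callables, each returning a path (list of waypoints)
              or None on failure, e.g. [plan_trajopt, plan_gpmp2, plan_chomp].
    Returns (planner_name, path), or (None, None) if all planners fail.
    """
    with ThreadPoolExecutor(max_workers=len(planners)) as pool:
        futures = {pool.submit(p, start, goal): p.__name__ for p in planners}
        for future in as_completed(futures):
            path = future.result()
            if path is not None:
                # Return the first successful solution; remaining planners
                # are still allowed to finish when the executor shuts down.
                return futures[future], path
    return None, None
```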


Fig. 2. Arm hardware setup and simulation in OpenRAVE

Sampling-based planners, such as RRT*, are another effective path planning method used in high-dimensional configuration spaces. A key property is that these planners are probabilistically complete, that is, as the planner continues running, the probability of not finding a solution (if one exists) asymptotically approaches zero. Bi-directional RRT, a variant of RRT, exhibits a high planning success rate at the cost of sub-optimal paths [16]. We performed comparison experiments with 100 random end-effector poses sampled over a sphere surface centred on the middle of the trellis. As can be seen from the results in Table 1, the Ranked planner (which runs all optimisers in parallel) performed best, but still had a 2% failure rate. Therefore, the Ranked planner is used in our system and supplemented by Bi-directional RRT, which is used as a last resort if all others fail. The paths returned are geometric and often sparse, hence they are first time-parameterised and up-sampled before being sent to the arm for execution.

Arm Hardware. In order for a robot arm to achieve an arbitrary end-effector position and orientation in 3D space, i.e. a 6D pose, six degrees of freedom (DOF) are necessary [17]. Hence, manipulators suited for dexterous tasks, such as active sensing, should have at least six DOF. Redundant manipulators have seven or more DOF and offer greater dexterity and flexibility for maneuvering around obstacles. Rethink Robotics' Sawyer, a 7-DOF robot arm, is used in our system. An open source SDK for the Sawyer arm provides a convenient API for executing trajectories and requesting state information. To control the robot arm, its SDK provides a built-in Joint Trajectory Action Server (JTAS) which facilitates commanding the arm through multiple waypoints [18]. JTAS takes as input a list of timestamped waypoints and then determines, through interpolation, appropriate joint velocity commands to send to the arm so that the given trajectory is followed.

Simulation Environment. The Sawyer arm and its environment are shown in Fig. 2. The arm is simulated in OpenRAVE, an Open Robotics Automation Virtual Environment for developing and testing motion planners [19]. OpenRAVE can import standard robot model formats, such as COLLADA, allowing seamless


integration with all of its interfaces, including motion planning libraries, inverse kinematics solvers and collision checkers. The motion planners are implemented as OpenRAVE planner plugins. Further, one of the strengths of OpenRAVE is its ability to robustly run multiple environments simultaneously in the same process. Hence, parallel path planning is well supported.

Function with Rest of the System. The Arm Controller provides a TCP server front end which acts as the interface to the rest of the system. It listens on a TCP socket for messages containing serialised commands and, once they are received, deserialises and parses them for execution. The sequence of commands is as follows (a minimal sketch of the loop is given below). First, the Cloud Registration node sends a request to the Arm Controller for the arm's current pose. Then the Pose Planner computes a target pose based on the current pose and map information, which is then sent to the Arm Controller for planning and execution. The Arm Controller then lets the Cloud Registration node know whether the execution succeeded or failed. This process is repeated until no more viewpoints are needed.
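The viewpoint-acquisition loop just described can be summarised with the following pseudocode-style sketch; `arm_controller`, `pose_planner` and `register_cloud` are placeholders standing in for the TCP front end and the other modules, not actual APIs from the paper.

```python
def acquire_viewpoints(arm_controller, pose_planner, register_cloud, map_state):
    """One possible rendering of the Cloud Registration / Pose Planner /
    Arm Controller message sequence described in the text (illustrative only)."""
    while pose_planner.more_viewpoints_needed(map_state):
        current_pose = arm_controller.request_current_pose()            # step 1
        target_pose = pose_planner.next_pose(current_pose, map_state)   # step 2
        ok = arm_controller.plan_and_execute(target_pose)               # step 3
        if ok:                                                           # step 4
            map_state = register_cloud(map_state)  # take a cloud, run ICP, fuse
    return map_state
```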

4 Experimental Validation

In this section we discuss the experimental setup used for testing our system. We use a 1.5 m by 1.5 m section of an artificial apple trellis with the same characteristics as typical orchard trellises. The initial position of the arm with respect to the trellis is unknown at the beginning of each experiment. The only assumption we make is that the trellis is in the range of the camera (the Intel RealSense SR300 has a maximum range of 1.5 m). Figure 3 illustrates the map building result of an experiment using the registration procedure described in Sect. 3. The first row in Fig. 4 shows the input cloud that is fed into the Dirichlet segmentation algorithm. In the second row the result of the segmentation process is shown, where different colours are used to show the association to the different objects. Finally, the last row shows the centroids of the resulting objects. As the sampler tends to create new candidates of groups with single parts at random


Fig. 3. Example of map building according to Sect. 3. The coordinate frames illustrate the inferred poses from where the point clouds were taken, and each cloud shows the reconstructed RGB-D pointcloud of the apple scene.


Fig. 4. Segmentation results for an increasing number of viewpoints from left to right. The first row shows the registered point clouds for multiple views, the second row shows the labeled groups representing the segmented apples, and the last row displays the centroids of apples that were segmented with a sufficiently large number of parts associated.

samples, a threshold has been set on the minimum number of parts for a group to be considered an object (a one-line sketch of this filter is given below). As the number of viewpoints increases, more parts are added to the scene and an increasing number of apples are correctly located. In order to validate the locations of the individual apples as identified by our segmentation algorithm, the arm has been used to take a close-up shot of each fruit, representing a key step in a harvesting or fruit inspection pipeline. Figure 5 shows a series of examples of these snapshots. The computation time of the Dirichlet segmentation scales superexponentially with the number of parts in the scene.
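The thresholding step mentioned above amounts to a simple filter over the sampled groups; a sketch follows, with an assumed threshold value since the paper does not state the exact number.

```python
def filter_objects(groups, min_parts=5):
    """Keep only groups with enough associated parts to count as objects.

    groups: mapping from object id to the list of supervoxel parts assigned to it.
    min_parts: assumed threshold; the paper does not give the exact value.
    """
    return {k: parts for k, parts in groups.items() if len(parts) >= min_parts}
```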


Fig. 5. Close-up view of the apples after complete segmentation process and validation of their locations.


Fig. 6. Effect of the seed size and voxel resolution on the number of parts of the input cloud for the Dirichlet segmentation and the effect on the speed of the sampler. Note: the legend describes the voxel resolution (number after ‘v’) and seed size (number after ‘s’).


Fig. 7. Segmentation results for different distributions of the apples over the vine.

The supervoxelisation algorithm mentioned in Sect. 2 over-segments the input clouds into supervoxels (which we refer to as parts). This algorithm depends on two input parameters: seed size and voxel resolution [10]. Figure 6(a) shows the effect of these parameters on the number of parts fed into the segmentation process for the same dataset. Figure 6(b) shows the average time per sample in the Gibbs sampler. Finer parameters (lower voxel resolution and seed size) produce more parts, which increases the computation time. Finally, Fig. 7 shows additional results of the probabilistic segmentation with different configurations and clusters of apples. All the apple vines were completely built and all the apples were successfully segmented using 1000 iterations of the Gibbs sampler. The centroids of the apples are distributed over the samples within a range of 3 cm. This range depends on the parameters of the


sampler, and a specific centroid depends on the selected iteration step, as these are updated sequentially to ensure convergence. The surfaces of the apples fit well to the data, demonstrating accurate reconstruction close to the observed data points, as required for harvesting and manipulation.

5 Discussion

We have presented a system for probabilistic segmentation of pome fruit, including a hardware realisation consisting of an Intel RealSense SR300 camera mounted on a commercial robotic arm, robust path planning algorithms, point cloud acquisition and registration, preprocessing (color-based presegmentation and cloud-based supervoxelisation) and MCMC-based probabilistic segmentation and fruit detection. We have investigated the robustness of the path planning and point cloud registration algorithms in hardware experiments and demonstrated the feasibility of the probabilistic segmentation algorithm. The results of the GPIS- and DP-based probabilistic segmentation algorithm are promising in that they reliably detect apples and their exact location in the

Fig. 8. Results of the probabilistic segmentation. The bottom right figure shows the association entropy map, where blue indicates low entropy and red indicates high entropy.


point cloud. Further, the probabilistic surface reconstruction using GPIS can be used for robust grasping algorithms [20] that take into account uncertainty in the fruit surface. Furthermore, probabilistic representations of the environment are essential for active methods to improve perception. Figure 8 shows the pointcloud and segmentation results at an intermediate step of 5 registered scans. As was observed before, apples are correctly located and segmented. Furthermore, the bottom right subplot shows the association entropy map, where blue indicates low entropy and red indicates high entropy. Those parts of the scene with high association entropy indicate that parts have been less consistently associated to the same objects compared to regions of lower entropy. Fruit that are densely cluttered (like in the center of the scene) generally result in larger segmentation uncertainty, whereas the single apple at the top right is confidently identified as one object. Such results can immediately be employed for grasping (start by picking the apple that is confidently segmented) and active perception (acquire more data in cluttered regions of the scene). In future work we intend to increase the robustness of the segmentation algorithm. We have already employed a noise model that rejects parts that do not belong to objects, and in a more involved approach planar prior shapes can be used to model the shape of leaves. Our general goal is to close the loop around perception and provide an end-to-end system that makes observations with a low cost camera, registers point clouds, chooses viewpoints based on information theoretic measures and robustly moves the arm accordingly. Acknowledgments. This work has been carried out in the framework of the AEROARMS (SI-1439/2015) EU-funded projects and the Australian Research Council’s Discovery Projects funding scheme (DP140104203).

References 1. Nguyen, T.T., Vandevoorde, K., Kayacan, E., De Baerdemaeker, J., Saeys, W.: Apple detection algorithm for robotic harvesting using an RGB-D camera. In: Proceedings of International Conference of Agricultural Engineering, Zurich, Switzerland (2014) 2. Sa, I., Lehnert, C., English, A., McCool, C., Dayoub, F., Upcroft, B., Perez, T.: Peduncle detection of sweet pepper for autonomous crop harvesting combined color and 3-D information. IEEE Robot. Autom. Lett. 2(2), 765–772 (2017) 3. Zhao, Y., Gong, L., Huang, Y., Liu, C.: Robust tomato recognition for robotic harvesting using feature images fusion. Sensors 16(2), 173 (2016) 4. Fern´ andez, R., Salinas, C., Montes, H., Sarria, J.: Multisensory system for fruit harvesting robots: experimental testing in natural scenarios and with different kinds of crops. Sensors 14(12), 23885–23904 (2014) 5. Williams, O., Fitzgibbon, A.: Gaussian process implicit surfaces. In: Gaussian Processes in Practice (2006) 6. Martens, W., Poffet, Y., Soria, P.R., Fitch, R., Sukkarieh, S.: Geometric priors for Gaussian process implicit surfaces. IEEE Robot. Autom. Lett. 2(2), 373–380 (2017)


7. van Hoof, H., Kroemer, O., Peters, J.: Probabilistic segmentation and targeted exploration of objects in cluttered environments. IEEE Trans. Robot. 30(5), 1198– 1209 (2014) 8. Pajarinen, J., Kyrki, V.: Decision making under uncertain segmentations. In: Proceedings of IEEE ICRA, pp. 1303–1309 (2015) 9. Cadena, C., Koseck´ a, J.: Semantic parsing for priming object detection in indoors RGB-D scenes. Int. J. Robot. Res. 34(4–5), 582–597 (2015) 10. Papon, J., Abramov, A., Schoeler, M., W¨ org¨ otter, F.: Voxel cloud connectivity segmentation - supervoxels for point clouds. In: Proceedings of CVPR, pp. 2027– 2034 (2013) 11. Rusu, R.B., Cousins, S.: 3D is here: Point Cloud Library (PCL). In: Proceedings of IEEE ICRA, pp. 1–4 (2011) 12. Rasmussen, C.E.: Gaussian processes for machine learning (2006) 13. Schulman, J., Ho, J., Lee, A., Awwal, I., Bradlow, H., Abbeel, P.: Finding locally optimal, collision-free trajectories with sequential convex optimization. Robot. Sci. Syst. 9, 1–10 (2013) 14. Dong, J., Mukadam, M., Dellaert, F., Boots, B.: Motion planning as probabilistic inference using Gaussian processes and factor graphs. In: Robotics: Science and Systems, vol. 12 (2016) 15. Zucker, M., Ratliff, N., Dragan, A.D., Pivtoraiko, M., Klingensmith, M., Dellin, C.M., Bagnell, J.A., Srinivasa, S.S.: Chomp: covariant hamiltonian optimization for motion planning. Int. J. Robot. Res. 32(9–10), 1164–1193 (2013) 16. Kuffner, J., Lavalle, S.: RRT-connect: an efficient approach to single-query path planning. In: Proceedings of IEEE ICRA, pp. 995–1001 (2000) 17. Hayashi, A.: Geometrical motion planning for highly redundant manipulators using a continuous model. Ph.D. thesis, Austin, TX, USA (1991) 18. InternaSDK. http://sdk.rethinkrobotics.com/intera/arm control systems 19. Diankov, R., Kuffner, J.: Introduction to the openrave architecture 0.9.0. http:// openrave.org/docs/latest stable/coreapihtml/architecture concepts.html 20. Dragiev, S., Toussaint, M., Gienger, M.: Gaussian process implicit surfaces for shape estimation and grasping. In: Proceedings of IEEE ICRA, pp. 2845–2850 (2011)

Vision-Based Deflection Estimation in an Anthropomorphic, Compliant and Lightweight Dual Arm

Alejandro Suarez, Guillermo Heredia, and Anibal Ollero

Robotics, Vision and Control Group, University of Seville, Camino de los Descubrimientos s/n, 41092 Seville, Spain
{asuarezfm,guiller,aollero}@us.es

Abstract. This paper proposes the application of a stereo vision system for estimating and controlling the Cartesian and joint deflection in an anthropomorphic, compliant and ultra-lightweight dual arm designed for aerial manipulation. Each arm provides four degrees of freedom (DOF) for end-effector positioning in a human-like kinematic configuration. A simple and compact spring-lever mechanism introduced in all joints provides mechanical compliance to the arms. A color marker attached at the end effector of each arm is visually tracked by a stereo pair installed over the shoulders. The Cartesian position and velocity of the markers are estimated with an Extended Kalman Filter (EKF), while the corresponding points in an equivalent stiff-joint manipulator are obtained from the kinematic model and the position of the servos. The Cartesian deflection is defined as the difference between these two measurements, and the joint deflection is obtained from the inverse kinematics. The vision-based deflection estimator is validated in test bench experiments: position estimation accuracy, impact response, passive/active compliance and contact force control.

Keywords: Compliance · Visual deflection estimation · Aerial manipulation

1 Introduction

Mechanical compliance is a highly desirable feature in an aerial manipulation robot interacting with the environment, where contact forces and motion constraints generated during certain operations may destabilize the aerial platform, and where impacts and overloads may damage the manipulator [1, 2]. Imagine for example a situation in which a robotic arm installed at the base of a multi-rotor vehicle grasps a fixed point while in flight. Any small displacement of the aerial platform with respect to the operation point will introduce an external torque that, if propagated through a stiff manipulator to the base of the aerial platform, may destabilize the attitude controller. Thus, joint compliance increases safety in the manipulation operation, as it provides a certain level of tolerance against external forces and overloads, and even makes it possible to estimate the contact force/joint torque from the deflection of the elastic element. However, this comes at the expense of decreased positioning accuracy of the end effector, due to the influence of inertia and gravity on the compliant joints.


Aerial manipulation is a recent research field which proposes the integration of one or multiple [3] robotic arms in vertical take-off and landing (VTOL) unmanned aerial vehicles (UAVs), either helicopters [4] or multirotors [5]. The development of this technology is motivated by the need, in several industrial scenarios, to reach areas and workspaces out of the range of humans in order to perform certain inspection and maintenance tasks. Some typical examples include the detection and insulation of leaks in high-altitude pipes in chemical plants, visual inspection and repair of cracks in wind turbine blades, contact-based measurement of beam deflection in bridges, or the installation and retrieval of sensor devices in buildings. Several prototypes of aerial manipulators have been developed and applied to grasping and transportation tasks. A 7-DOF industrial manipulator is integrated in a high-payload autonomous helicopter in [4]. Two compliant grippers at the base of another helicopter are used for object grasping in [6]. Quadrotor platforms have been widely employed in different tasks in indoor [7, 8] and outdoor [9] environments. Visual markers (tags) [10, 11] or visual features of the target object [12] have been applied to visual servoing in the context of aerial manipulation. Dual arm manipulation has also been addressed, although only in a few works. The valve turning operation described in [13] employs two simple 2-DOF arms and the torque in the yaw angle generated by the propellers. A human-size dual arm manipulator is described in [14] and tested in outdoor flights [15]. A commercially available manipulator is shown in [16]. The benefits of mechanical joint compliance in robotic arms intended for aerial manipulation have been evidenced in our previous work. Reference [1] describes a method for estimating the weight of grasped objects from the deflection of extension springs acting as tendons in the elbow joint, as well as the possibility of detecting collisions with low energy transfers. The 3-DOF compliant joint arm developed in [2] is applied to contact force control and to obstacle localization. However, the integration of a deflection potentiometer in the joints is a hard task from the mechatronics point of view. This has motivated the evaluation of vision sensors (cameras) for measuring the deflection of the end effector, inspired by the work done with flexible link manipulators intended for space robotics [17, 18].

The main contribution of this paper is the development and experimental validation of a vision system used for estimating and controlling the mechanical deflection of a compliant joint dual arm designed for aerial manipulation with a multirotor platform. A stereo camera installed over the shoulder structure is focused on the color markers attached to the end effector of the arms. A modified version of the CAMShift (Continuously Adaptive Mean-Shift) algorithm [19, 20] provides the tracking points to an Extended Kalman Filter (EKF), which estimates the 3D position and velocity of the markers. The Cartesian deflection is then computed from this estimation and from the forward kinematic model of the manipulator. Test bench experiments have been conducted, demonstrating its application to impact detection and force control.

The rest of the paper is organized as follows. Section 2 describes the anthropomorphic, compliant and lightweight dual arm designed for integration in a multirotor platform. The kinematics and dynamics of the manipulator are presented in Sect. 3 along with the model of the vision system. Section 4 details the vision-based deflection estimator, with results presented in Sect. 5 and conclusions in Sect. 6.

2 System Description

The range of operations that an aerial manipulation robot is able to perform depends on the level of dexterity of the robotic arms integrated in the aerial platform. Additionally, certain inspection and maintenance tasks typically conducted by human operators imply bimanual manipulation. Therefore, it is desirable that the manipulator is able to reproduce the kinematics of the human arm, as well as its features in terms of compliance, low weight and inertia, and robustness. The anthropomorphic, compliant and lightweight dual arm system developed from scratch at the Robotics, Vision and Control Group of the University of Seville has been designed taking these considerations into account. A picture of the prototype can be seen in Fig. 1. Each arm has three joints in the shoulder and one joint in the elbow, disposed in a human-like kinematic configuration. The lengths of the upper arm and forearm links (25 cm) and the separation between the arms (32 cm) are also similar to those of the human arm. The actuators employed are Herkulex servos manufactured by Dongbu Robot, models DRS-0201 and DRS-0101. The aluminum frame structure has been carefully designed in such a way that it isolates the servo actuators from impacts and overloads, while at the same time allowing the integration of a compliant spring-lever transmission mechanism in all the joints. The end effector is a simple gripper built with a Futaba S3003 servo and a micro switch for contact-based grasping. A ZED stereo camera, attached over the shoulder structure and rotated 45 degrees downwards in the pitch angle, allows the teleoperation of the arms with visual feedback and the application of visual servoing algorithms for automatic tasks.

Fig. 1. Anthropomorphic, compliant and lightweight dual arm.

3 Modeling

3.1 Kinematic Model

The reference frames, joint variables, lengths and vectors associated with the kinematic model of the dual arm are represented in Fig. 2. Here X_i Y_i Z_i is the coordinate frame of the i-th manipulator and X_c Y_c Z_c is the camera frame in which the marker positions


are measured. Superscript i = {1, 2} denotes the arm (left/right), while subscript j = {1, 2, 3, 4} indicates the joint in the following order: shoulder pitch, roll and yaw, and elbow pitch. The rotation angle of each link with respect to the previous link is denoted by q_j^i, while θ_j^i is the angular position of the corresponding servo. The joint deflection Δθ_j^i is then defined as the difference between these two angles:

Δθ_j^i = q_j^i − θ_j^i    (1)

Fig. 2. Kinematic model of the dual arm manipulator with stereo camera head.

The deflection of the joints is caused by the compression of the springs in the compliant transmission mechanism [2], with a maximum amplitude of around ±20 degrees. Let F^i : R^4 → R^3 represent the forward kinematics of the i-th arm, that is, the function that maps the joint space into the Cartesian space of the arm. Analogously, the inverse kinematic model is defined by the function Ω^i : R^3 → R^4, which returns the joint variables for a given Cartesian position. Note that the inverse kinematics has an infinite number of solutions due to the redundant degree of freedom of the manipulator. This work assumes that the shoulder roll angle is a known parameter for simplicity, so q_2^i = φ^i. The forward and inverse kinematics are then defined as follows:

r^i(q^i) = [x^i, y^i, z^i, 1]^T = F^i(q^i);    q^i(r^i) = [q_1^i, q_2^i, q_3^i, q_4^i]^T = Ω^i(r^i, φ^i)    (2)

Here r^i is an arbitrary point in the Cartesian space of the i-th arm associated with the vector of joint variables q^i. Let us particularize the kinematic model to the red marker attached to the wrist point of both arms, shown in Fig. 2. The vision system provides the 3D position of each marker referred to the camera frame, that is, the vector r_{M,i}^c. The position and orientation of the camera with respect to each arm is known, and so is the


corresponding transformation matrix ^i_c T ∈ R^{4×4}. Then, the position of the marker can be referred to the coordinate system of each arm in the following way:

r_{M,i}^i = ^i_c T · r_{M,i}^c    (3)

Imagine an equivalent stiff-joint manipulator, in which the compliant joints become stiff. The joint deflection is zero, and thus the angular position of the output links is the servo position vector θ^i. The Cartesian deflection Δl_p ∈ R^3 evaluated at a certain point p of the manipulator is defined as the difference between the position of this point in the compliant-joint manipulator and its position in the equivalent stiff-joint manipulator:

Δl_p^i = F_p^i(q^i) − F_p^i(θ^i)    (4)

The information available for estimating the deflection in the dual arm system is the angular position of the servo actuators, represented by the vector θ^i, and the Cartesian position of the markers attached at the end effectors, provided by the vision system, r_{M,i}^c. Therefore, the deflection of the manipulator at this point is computed as follows:

Δl_{M,i}^i = ^i_c T · r_{M,i}^c − F_{M,i}^i(θ^i)    (5)

The interpretation of the right-hand side of Eq. (5) is the following: the first term represents the position where the marker really is (as measured by the camera), while the second one corresponds to the position where the marker would be if the joints were stiff (given the position of the servos). The Cartesian deflection may be useful for directly estimating and controlling the contact force exerted by the end effector, or for estimating the weight of a grasped object. The joint deflection can be estimated in a similar way, applying the inverse kinematic model to the marker point given by the vision system and subtracting the angular position of the servos:

Δθ^i = Ω^i(r_{M,i}^i, φ^i) − θ^i    (6)

The joint torque can finally be derived from the static torque-deflection characteristic.

3.2 Dynamic Model

The most evident source of deflection in the developed compliant joint manipulator is gravity. As a reference, the deflection at the end effector when the arm is fully stretched is around 8 cm. The weight of the upper arm and forearm links themselves causes the compression of the springs of the transmission mechanism. However, the compliant joints will also suffer the effect of inertia, Coriolis and centrifugal forces generated by the motion of the links. Although the influence of these terms is in practice almost negligible at normal operation velocities, it is necessary to take the complete dynamics into account in the design of the control system. As described in [21, 2], the dynamic model of a compliant joint manipulator can be decomposed in two parts: the dynamics of the actuators, reduced to the inertia and


friction of the rotor and the torque transmitted to the output links through the transmission mechanism, and the output link dynamics, which comprises the inertia, centrifugal and Coriolis, and gravity terms along with the external forces and the torque received from the transmission:

B^i · θ̈^i + τ^i = τ_m^i − τ_f^i    (7)

M^i(q^i) · q̈^i + C^i(q^i, q̇^i) + G^i(q^i) = τ^i + τ_ext^i    (8)

Here B^i and M^i ∈ R^{4×4} are the generalized mass matrices of the rotors and the links, respectively, C^i ∈ R^4 is the Coriolis and centrifugal term, and G^i ∈ R^4 is the gravity torque. The vectors τ_m^i and τ_f^i ∈ R^4 model the torque and friction generated by the motors, while τ_ext^i represents the torque due to exogenous forces, typically contact forces. It is easy to observe in the previous equations that the term τ^i ∈ R^4 is the torque transmitted by the spring-lever mechanism, which is characterized by its stiffness and damping constants:

τ^i = K^i · (q^i − θ^i) + D^i · (q̇^i − θ̇^i)    (9)

It is convenient to remark that the possibility to measure the deflection of the compliant joints facilitates the development of applications based on force/torque control.

3.3 Camera Model

The Extended Kalman Filter (EKF) implemented for estimating the 3D position and velocity of the color markers makes use of the pin-hole camera model for computing the expected projection of the markers onto the image planes of the cameras. Each of the cameras in the stereo pair is identified by an index k = {1, 2}. The intrinsic parameters, obtained from calibration, are the focal lengths, f_x^k, f_y^k, and the radial and tangential distortion coefficients. If r_{M,i}^c = [x_{M,i}^c, y_{M,i}^c, z_{M,i}^c]^T represents the position of the marker of the i-th arm in the camera frame (see Fig. 2), then the projection of its centroid over the image plane of the k-th camera is given by:

c_{M,i}^k = [c_{x,i}^k, c_{y,i}^k]^T = [ f_x^k · (x_{M,i}^c ± b/2) / z_{M,i}^c ,  f_y^k · y_{M,i}^c / z_{M,i}^c ]^T    (10)

where b = 12 cm is the baseline of the stereo pair. The distortion model described in [22] is applied over the projected centroid to take into account the deviation produced by the wide-angle lenses of the ZED camera.
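A direct transcription of Eq. (10), without the lens-distortion step of [22], could look as follows; which sign of the baseline term corresponds to the left or right camera is an assumption made here for illustration.

```python
def project_marker(r_c, fx, fy, baseline=0.12, left=True):
    """Project a marker position given in the camera frame onto one image plane.

    r_c: (x, y, z) marker position in metres, expressed in the camera frame.
    fx, fy: focal lengths (in pixels) of the selected camera.
    baseline: stereo baseline b in metres (0.12 m for the ZED, as in the text).
    Returns the expected centroid (cx, cy) before distortion correction.
    """
    x, y, z = r_c
    half_b = baseline / 2.0 if left else -baseline / 2.0   # the +/- b/2 term of Eq. (10)
    cx = fx * (x + half_b) / z
    cy = fy * y / z
    return cx, cy
```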

4 Vision-Based Deflection Estimation

4.1 Structure of the Estimator

The block diagram of the developed deflection estimator is depicted in Fig. 3; it is based on the kinematic model described in Sect. 3.1. The estimator takes as input the angular positions of the servo actuators, θ^i, along with the centroid of the i-th marker projected onto the k-th camera given by the tracking algorithm, and gives as output the estimated Cartesian and joint deflections, Δl_{M,i}^i and Δθ^i, respectively. The EKF integrates the measurements provided by the tracking algorithm, obtaining the XYZ position and velocity of the markers, r_{M,i}^i and v_{M,i}^i. The Cartesian deflection is computed by subtracting the position of the marker in the equivalent stiff-joint manipulator from the position given by the EKF. The joint deflection is computed in a similar way, applying the inverse kinematic model.
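As a minimal sketch of Eqs. (5) and (6), the two deflections can be computed as below; `forward_kinematics` and `inverse_kinematics` stand for F^i and Ω^i and are assumed to be available, and `T_cam_to_arm` is the known 4 × 4 transform from the camera frame to the arm frame.

```python
import numpy as np

def cartesian_deflection(marker_cam, servo_angles, T_cam_to_arm, forward_kinematics):
    """Eq. (5): difference between the measured marker position and the position
    predicted for an equivalent stiff-joint manipulator."""
    marker_arm = (T_cam_to_arm @ np.append(marker_cam, 1.0))[:3]   # camera -> arm frame
    stiff_pos = forward_kinematics(servo_angles)                   # F(theta), marker point
    return marker_arm - stiff_pos

def joint_deflection(marker_arm, servo_angles, shoulder_roll, inverse_kinematics):
    """Eq. (6): inverse kinematics of the measured marker minus the servo angles."""
    q = inverse_kinematics(marker_arm, shoulder_roll)               # Omega(r, phi)
    return np.asarray(q) - np.asarray(servo_angles)
```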

Fig. 3. Structure of the deflection estimation system.

4.2 Vision System

The vision system consists of a ZED stereo camera focused on the color markers attached at the wrist points of both arms, and a program developed in C++ that implements a modified version of CAMShift which allows tracking-loss detection and object re-detection using color and geometric information. This modified version, called Tracking-Searching CAMShift, was developed previously by the authors in the context of a multi-UAV system that made use of color markers disposed over quadrotors for detecting, identifying and recovering from positioning sensor faults [20]. The CAMShift algorithm is a well-suited solution for tracking color objects due to its simplicity, low computational requirements in time and memory, and robustness to blurring and to changes in illumination conditions. However, its main drawback is the necessity of a marker whose color is in high contrast with respect to the background. The shape of the marker should be spherical so that its projection onto the image planes of the stereo pair is independent of the point of view or the pose of the arms. A calibration process is carried out in an offline phase to determine the focal length, principal point and distortion coefficients of the ZED camera. The hue, saturation and value (HSV) rejection thresholds used for isolating the markers on the


back-projection image were tuned experimentally from a picture of the markers at the nominal observation distance.

4.3 Extended Kalman Filter

Although the position of the markers can be estimated geometrically from the pair of projection centroids given by the CAMShift algorithm, the Kalman filter is preferred as it also provides an estimation of the velocity of the marker, which can be exploited for estimating the rate of energy exchange during an impact on the manipulator. The nonlinearity associated with the pin-hole camera model requires the extended version of this algorithm. A simple constant velocity model is assumed for describing the motion of the marker, so the state vector is defined as follows:

x = [x, y, z, v_x, v_y, v_z]^T    (11)

The measurement vector taken as input by the EKF is the centroid of the marker projected on each camera, c_{M,i}^k, along with the elapsed time since the last update, Δt. The accuracy of the position/velocity estimates has been tuned through the process noise and measurement noise covariance matrices Q ∈ R^{6×6} and R ∈ R^{2×2}, respectively.

Q = [ diag_{3×3}(σ_p²)   0_{3×3} ;  0_{3×3}   diag_{3×3}(σ_v²) ];    R = [ σ_c²  0 ;  0  σ_c² ]    (12)

where σ_p² = 2.5 [m], σ_v² = 100 [m/s] and σ_c² = 25 [pixels].
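A minimal constant-velocity EKF setup matching Eqs. (11)-(12) is sketched below; the prediction step is linear, and only the measurement function (the stereo projection of Eq. (10)) would need to be linearised, which is omitted here and assumed to be supplied by the caller.

```python
import numpy as np

sigma_p2, sigma_v2, sigma_c2 = 2.5, 100.0, 25.0   # values quoted in the text

# State x = [x, y, z, vx, vy, vz]^T, constant-velocity model.
def predict(x, P, dt):
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                      # position += velocity * dt
    Q = np.diag([sigma_p2] * 3 + [sigma_v2] * 3)    # Eq. (12), process noise
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, h, H):
    """z: measured marker centroid (2,); h: projection of Eq. (10) evaluated at x;
    H: its 2x6 Jacobian (the linearisation, computed elsewhere)."""
    R = np.diag([sigma_c2, sigma_c2])               # Eq. (12), measurement noise
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - h)
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new
```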

5 Experimental Results

5.1 Evaluation of the Performance of the Vision Module

The purpose of this experiment is to evaluate the time performance and the accuracy of the vision system described in Sect. 4, considering for simplicity a single marker. All the experiments have been conducted on a Sony VAIO laptop with an Intel Core i7-3632QM CPU @ 2.20 GHz × 8 and Ubuntu 14.04 LTS. The resolution of the ZED camera was set to 2560 × 720 pixels, at a frame rate of 30 fps. The execution of the vision module comprises the image acquisition using the VideoCapture class provided by the OpenCV library, the application of the modified CAMShift algorithm for each frame (left and right) along with the EKF update, and the visualization of the images on the screen. The execution times measured in a 50 s time interval are summarized in Table 1.

Table 1. Execution times of the visual tracking algorithm.

                   Mean [ms]   Standard deviation [ms]
Acquisition time   7.76        2.71
Processing time    16.66       2.92
Iteration time     33.32       5.06

The accuracy of the marker position estimate provided by the vision system was measured using a drylin® T rail guide system TS-04-15 and an igus® TW-04-15 sliding carriage. The marker can be linearly displaced along the Z axis of the camera (depth), obtaining the ground truth from a ruler placed parallel to the rail guide. Figure 4 represents the position estimation error when the marker is displaced from 0.15 m (minimum observable distance) up to 0.5 m (maximum reach of the arms). As can be seen on the right side of Fig. 4, the error is nonlinear with distance. This is mainly due to errors in the calibration of the camera and in the projection of the centroid of the marker onto the image plane. In order to improve the accuracy of the deflection controller, a calibration process is executed to obtain the deflection offset around the normal operation position of the arms.

Fig. 4. Position estimation error along the X, Y and Z (depth) axes.

5.2 Impact Response: Cartesian Deflection and Velocity

In this experiment, the color marker is attached at the wrist point of the left arm (see Fig. 2), so the Cartesian deflection and speed at this point are obtained by applying the estimation method described in Sects. 3.1 and 4.1. The impact is generated by throwing the TW-04-15 sliding carriage (62 g) along the rail guide from a height of 0.5 m, hitting the servo of the gripper. The left arm is in L-position, with the elbow joint flexed 90 degrees. The deflection and speed of the marker caused by the impact are represented in Fig. 5. Note that the underdamped response is characteristic of a mass-spring-damper system: the potential energy of the impact is transformed into elastic potential energy and released at a lower rate.


[Figure 5 plot: Cartesian deflection [m] (top) and deflection velocity [m/s] (bottom) along the X, Y and Z axes over a 2 s time window.]

Fig. 5. Cartesian deflection (up) and velocity (down) at the marker position due to impact.

5.3 Passive Compliance vs Active Compliance: Zero Deflection Control

This experiment illustrates the concepts of passive and active compliance, using the developed vision system for this purpose. Let us consider that a pushing-pulling force is applied over the end effector of the left arm, causing the compression of the springs employed in the compliant joints. At the moment when the force ceases, the excess of elastic potential energy stored in the springs will be transformed into kinetic energy, so the manipulator will reconfigure itself to reach the state of minimum energy. This passive behavior can be identified on the left side of Fig. 6, where the end effector of the left arm was pushed by hand sequentially along the X, Y and Z axes. Note that the displacement of the end effector w.r.t. its initial position is the Cartesian deflection. Now, imagine that the position of the marker is controlled to maintain a zero Cartesian deflection reference, which can be achieved by solving the inverse kinematics of the manipulator (see Sect. 3.1). When the arm is pushed, the deviation in the position of the marker measured by the vision system is fed back to the controller, which adjusts the position of the joint servos according to the direction of the deflection. This operation mode is called active compliance, indicating that the actuators actively contribute to releasing the excess of elastic potential energy stored in the springs, as can be seen on the right side of Fig. 6.

Fig. 6. Passive deflection of the left arm (left) and zero deflection control (right).


When the end effector is pushed, the apparent effect is that the arm is guided without resistance. A video of the experiments can be found in [23]. The zero deflection control mode may be useful for reducing the influence on the aerial platform of contact forces generated in grabbing situations.

5.4 Deflection (Force) Control

One of the main motivations for the development of a vision-based deflection estimation system is the possibility of estimating and controlling the contact forces that the aerial manipulator applies during the execution of certain operations or tasks, for example in the installation of sensors, the assembly of structures or the manipulation of tools. Although the analysis of the force-deflection characteristic is out of the scope of this work, a proportional relationship of the form F = K · Δl can be considered, where the diagonal constant matrix K ∈ R^{3×3} will in general depend on the joint angles. The control of the Cartesian deflection is evaluated here for simplicity [23]. Figure 7 shows the reference and measured values in the XYZ axes (left), and the position of the marker along with the joint positions of the servos (right). The position of the arm is the same as stated above (90 deg flexion of the elbow joint). A simple proportional controller generates an incremental position reference for the end effector by integrating the Cartesian error (a minimal sketch is given below). As seen on the left side of Fig. 7, the control error tends to zero.
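The proportional deflection controller just described can be sketched as follows; the gain value is an assumption, and the deflection measurement is the output of the vision-based estimator of Sect. 4.

```python
import numpy as np

def deflection_control_step(deflection_ref, deflection_meas, ee_ref, kp=0.5):
    """One iteration of the proportional controller: integrate the Cartesian
    deflection error into an incremental end-effector position reference.

    deflection_ref, deflection_meas: 3D vectors (desired and estimated deflection).
    ee_ref: current Cartesian position reference for the end effector.
    kp: proportional gain (assumed value, not given in the paper).
    """
    error = np.asarray(deflection_ref) - np.asarray(deflection_meas)
    return np.asarray(ee_ref) + kp * error   # new reference, passed to the IK / servos
```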

Fig. 7. Cartesian deflection-reference (left) and marker/joints position (right).

6 Conclusion

This paper has demonstrated the application of a stereo vision system for estimating and controlling the deflection of a lightweight and compliant dual arm designed for aerial manipulation with multirotors. A small color marker attached at the end effector is visually tracked by the camera head, and its position and velocity are obtained by means of an EKF. The Cartesian deflection is defined as the difference between the real position of the marker given by the EKF and the position in the equivalent stiff-joint manipulator. The proposed method avoids the integration of a deflection sensor in the joints, facilitating


the development of tasks requiring contact force control. As future work, it would be convenient to improve the accuracy of the position estimator. Acknowledgement. This work has been funded by the Spanish MINECO Retos project AEROMAIN (DPI2014-5983-C2-1-R) and by the H2020 AEROARMS Project, Grant Agreement Nº 644271. The research activity of Alejandro Suarez is supported by the Spanish Ministerio de Educacion, Cultura y Deporte FPU Program.

References 1. Suarez, A., Heredia, G., Ollero, A.: Lightweight compliant arm for aerial manipulation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015, pp. 1627–1632. IEEE, September 2015 2. Suarez, A., Heredia, G., Ollero, A.: Lightweight compliant arm with compliant finger for aerial manipulation and inspection. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016, pp. 4449–4454. IEEE, October 2016 3. AEROARMS Project http://aeroarms-project.eu/ 4. Kondak, K., Huber, F., Schwarzbach, M., Laiacker, M., Sommer, D., Bejar, M., Ollero, A.: Aerial manipulation robot composed of an autonomous helicopter and a 7 degrees of freedom industrial manipulator. In: IEEE International Conference on Robotics and Automation (ICRA), 2014, pp. 2107–2112. IEEE, May 2014 5. Jimenez-Cano, A.E., Martin, J., Heredia, G., Ollero, A., Cano, R.: Control of an aerial robot with multi-link arm for assembly tasks. In: IEEE International Conference on Robotics and Automation (ICRA), 2013, pp. 4916–4921. IEEE, May 2013 6. Pounds, P.E., Bersak, D.R., Dollar, A.M.: The yale aerial manipulator: grasping in flight. In: IEEE International Conference on Robotics and Automation (ICRA), 2011, pp. 2974–2975. IEEE, May 2011 7. Mellinger, D., Lindsey, Q., Shomin, M., Kumar, V.: Design, modeling, estimation and control for aerial grasping and manipulation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011, pp. 2668–2673. IEEE, September 2011 8. Cataldi, E., Muscio, G., Trujillo, M.A., Rodríguez, Y., Pierri, F., Antonelli, G., Ollero, A.: Impedance Control of an aerial-manipulator: preliminary results. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016, pp. 3848–3853. IEEE, October 2016 9. Heredia, G., Jimenez-Cano, A.E., Sanchez, I., Llorente, D., Vega, V., Ollero, A.: Control of a multirotor outdoor aerial manipulator. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), 2014, pp. 3417–3422. IEEE, September 2014 10. Lippiello, V., Cacace, J., Santamaria-Navarro, A., Andrade-Cetto, J., Trujillo, M.A., Esteves, Y.R., Viguria, A.: Hybrid visual servoing with hierarchical task composition for aerial manipulation. IEEE Robot. Autom. Lett. 1(1), 259–266 (2016) 11. Santamaria-Navarro, A., Grosch, P., Lippiello, V., Sola, J., Andrade-Cetto, J.: Uncalibrated visual servo for unmanned aerial manipulation. IEEE/ASME Trans. Mechatron. (2017) 12. Ramon Soria, P., Arrue, B.C., Ollero, A.: Detection, location and grasping objects using a stereo sensor on UAV in outdoor environments. Sensors 17, 103 (2017) 13. Korpela, C., Orsag, M., Oh, P. Towards valve turning using a dual-arm aerial manipulator. In: International Conference on Intelligent Robots and Systems (IROS 2014), pp. 3411–3416. IEEE/RSJ (2014)


14. Suarez, A., Jimenez-Cano, A.E., Vega, V., Heredia, G., Rodriguez-Castaño, A., Ollero, A.: Lightweight and human-size dual arm aerial manipulator. In: International Conference on Unmanned Aircraft Systems (ICUAS) (2017) 15. https://www.youtube.com/watch?v=4sjpmDOSZms 16. https://www.prodrone.jp/en/archives/1420/ 17. Jiang, X., Konno, A., Uchiyama, M.: A vision-based endpoint trajectory and vibration control for flexible manipulators. In: IEEE International Conference on Robotics and Automation, 2007, pp. 3427–3432. IEEE, April 2007 18. Xu, Y., Ritz, E.: Vision based flexible beam tip point control. IEEE Trans. Control Syst. Technol. 17(5), 1220–1227 (2009) 19. Bradski, G.R.: Computer vision face tracking for use in a perceptual user interface (1998) 20. Suarez, A., Heredia, G., Ollero, A.: Cooperative sensor fault recovery in multi-UAV systems. In: IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 1188– 1193. IEEE, May 2016 21. Albu-Schäffer, A., Ott, C., Herzinger, G.: A unified passivity-based control framework for position, torque and impedance control of flexible joint robots. Int. J. Robot. Res. 26(1), 23– 39 (2007) 22. OpenCV camera calibration. http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/ camera_calibration.html 23. Video of the experiments. https://youtu.be/yLR0DelSlXU

3D Navigation for a Mobile Robot

Jan Škoda and Roman Barták

Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
[email protected], [email protected]

Abstract. We propose a novel 3D navigation system for autonomous vehicle path-planning. The system processes a point-cloud data from an RGB-D camera and creates a 3D occupancy grid with adaptable cell size. Occupied grid cells contain normal distribution characterizing the data measured in the area of the cell. The normal distributions are then used for cell classification, traversability, and collision checking. The space of traversable cells is used for path-planning. The ability to work in three-dimensional space allows autonomous robots to operate in highly structured environments with multiple levels, uneven surfaces, and various elevated and underground crossings. That is important for the usage of robots in real-world scenarios such as in urban areas and for disaster rescue missions.

Keywords: Robotics · Navigation · 3D · Path finding

1 Introduction

In robotic research, the problem of mobile robot navigation is among the most important. It has been thoroughly studied, but mostly for two-dimensional space. Although there are many 3D mapping systems available for use, robust and general 3D navigation systems that are not constrained to particular environments are still very rare. Mostly, such navigation systems work with a 2D map or project the constructed 3D map into 2D for navigation. The major issue of 3D navigation, and the reason why most navigation systems work in 2D, is the computational complexity of 3D sensor processing, path-planning, and traversability estimation – the recognition of areas where the robot can drive. We therefore utilize several techniques to improve efficiency, especially the octree [3] data structure for map representation and the 3D normal distribution transform [9] (3D-NDT) for data representation and traversability estimation. In this paper, we introduce a general 3D navigation system for ground vehicles equipped with an RGB-D camera. Together with 3rd-party libraries, our system performs localization, mapping, traversability estimation, and navigation to a given goal location. The system is even able to work in a previously unexplored indoor environment, to build the map simultaneously with navigation, and to navigate through small yet-unmapped areas.

1.1 Related Work

There are many production-grade navigation systems operating in 2D [7], which are able to control the robot in dynamic environments. While mapping systems, especially visual ones, have been able to create 3D maps for several years, state-of-the-art navigation systems for ground vehicles still can't efficiently use them and instead project the 3D maps into a horizontal map, possibly enriched with elevation information (2.5D) [1], or several horizontal maps [2]. 2D maps can describe a floor or multiple floors of a building, but can't represent stairs, inclined surfaces or uneven terrain. 2.5D elevation maps add support for some cases of uneven terrain, but are unable to represent more complicated structures such as bridges or tunnels. 3D maps are commonly used by navigation systems for flying robots [12]; however, such UAV systems don't recognize traversable surfaces, so they can't be used for ground vehicle navigation. Traversability was solved for example in [1] by fitting planes to heightmap regions and classifying regions into several categories, which is very similar to our approach. The field of truly 3D navigation for ground robots contains just a few results, and published systems are mostly not well suited for use with a real robot. For example, Stoyanov et al. [10] use an approach similar to our proposal with the octree structure and 3D-NDT. However, it cannot be used as a real-time navigation system, as it cannot replan the path with respect to obstacles discovered while following the planned path. Morisset et al. [6] solved planning for a hexapod walking robot in a graph of 2.5D regions classified as flat terrain, uneven terrain and stairs. They then use different map representations and planning strategies for the individual classes. Their solution is significantly faster than our approach on flat terrain, but is also much more complex. They did not implement the traversability checking important for wheeled vehicles. We, however, consider this approach of composing a 3D map out of a graph of 2.5D regions to be a comparable alternative to the fully 3D map we are presenting. If we focus on individual subproblems – mapping, map representation, and path-planning – there are interesting published results. State-of-the-art visual mapping systems for RGB-D cameras [4] are precise and can correct accumulated odometry errors in the map using loop closure techniques. They are, however, very computationally expensive, and the large maps usually need to be preprocessed in order to be efficiently usable. For the preprocessing, we use tree-structured spatial occupancy grids called octrees [3], which are commonly used for dense 3D map representation. In addition to that, 3D-NDT [9] can be used for map simplification and traversability estimation.

1.2 Motivation

As robots operate in a 3D world, the simplification of the world into a 2D map causes a loss of information. That can prevent the robot from operating correctly in more complicated situations. We provide three examples of types of situations that may happen in real-world scenarios.


In Fig. 1, there are two surface levels and the robot can choose to ride under the bridge or on it. 2D navigation systems are unable to use the "bridge" and mostly consider it an obstacle. The robot sometimes has to enter a yet unmapped area. That can happen for example in rescue missions in an unknown environment, or when some parts of the environment were occluded during mapping. Finally, 2D navigation systems often cannot sense holes in the surface or its edges. In general, 3D navigation is necessary in any environment where the robot should traverse significantly uneven surfaces. One of the typical robotic problems where 3D navigation would be very useful is an autonomous or assisted rescue mission in damaged buildings after various disasters.

2 System Outline

The input of the system is a partial map and a location estimate continuously received from a localization and mapping system (SLAM). The map is composed of a set of nodes, each corresponding to a single captured frame from the RGB-D camera. A node contains a point-cloud representation of the captured RGB-D frame, a pose estimate of the robot at the time of capturing the respective RGB-D frame, and a timestamp.

Fig. 1. A situation where the robot can choose to ride under the bridge or on it.

As that map can contain millions of points, we process it into a 3D octree [3] grid (discussed in Sect. 3.1). We can then quickly check any grid cell for traversability or presence of an obstacle. A classical occupancy grid would, however, not be suitable for the traversability check, as it would require too fine a grid resolution, which would ruin the performance. We therefore propose a solution utilizing an approximation of the measured points with the 3D-NDT [9] (3D normal distribution transform) for traversability estimation. This is described in Sect. 3.2. Traversable cells of the map can be used for path-planning. We utilize the well-known A* algorithm with the Manhattan distance as the heuristic, and we propose a specific cost function (Sect. 4) as the optimality criterion; a compact sketch of this planner is given below. As the map representation distinguishes free and unexplored areas, the plan can even traverse unexplored areas. That is needed because maps often contain small unexplored "gaps", for example due to occlusions. The sensors are only capable of inaccurate and incomplete perception of the environment. Localization is sometimes imprecise, which means that the measured points will be added to a wrong part of the map. The system has to be able to deal with these errors during navigation and to relocate map points when an error is detected.
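The sketch below shows A* over the traversable leaf cells with a Manhattan-distance heuristic, as described above; the 6-connected neighbourhood, the unit cost function and the integer cell indexing are simplifying assumptions (the paper's actual cost function is introduced in its Sect. 4, which is not reproduced here).

```python
import heapq

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def a_star(start, goal, traversable, cost=lambda a, b: 1.0):
    """A* over integer 3D grid cells; 'traversable' is a set of allowed cells."""
    open_set = [(manhattan(start, goal), 0.0, start, None)]
    came_from, g_score = {}, {start: 0.0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:          # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:               # reconstruct the path back to the start
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        x, y, z = cell
        for dx, dy, dz in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            nb = (x + dx, y + dy, z + dz)
            if nb not in traversable:
                continue
            ng = g + cost(cell, nb)
            if ng < g_score.get(nb, float("inf")):
                g_score[nb] = ng
                heapq.heappush(open_set, (ng + manhattan(nb, goal), ng, nb, cell))
    return None                        # no traversable path found
```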

3 Mapping Subsystem

Our system receives measurements from an RGB-D camera and from a SLAM system in order to discover new obstacles or yet unexplored areas. RGB-D measurements are received as point clouds annotated with a timestamp and a pose estimate. For performance reasons, we use only those RGB-D measurements that might contain new information. We select them by thresholding the translational, angular, and temporal distances from the previously used measurement.

3.1 Octree Structure

The internal representation of the map uses an octree structure implemented in the Octomap library [3], which is frequently used in state-of-the-art robotic mapping systems. An octree is a tree structure for accessing a grid with dynamic size. Every node of the tree is a grid cell that can contain either no children or exactly 8 children (hence octree), which represent the 8 subcells with a two times shorter edge. The root node's cell contains the whole environment. The nodes with no children are called leaves, as usual in graph theory terminology. The structure is visualized in Fig. 2. The octree allows the map to describe cluttered parts of the environment with a denser grid and use a coarse grid in free space. The octree therefore saves both memory, which is an important aspect of 3D mapping, and processing time, especially during map updates. Currently, only unexplored and empty space, containing no points, is represented using a coarser grid. Merging such cells into a coarser grid can be done simply by recursively deleting their children. Splitting big cells can be done by creating children down to the leaf level. That happens when the camera measures a point in a merged cell.

Fig. 2. Octree data structure [3] visualization. One occupied cell is drawn in black, free cells are in non-transparent gray, unexplored cells are transparent.

3.2 Grid Cells Classification Using 3D-NDT

Each leaf cell of the map is classified into one of these disjoint classes: unexplored, free, obstacle, and traversable. All cells are initially considered unexplored and are later classified as traversable or obstacle only when the cell contains enough points for precise classification. In addition, we perform raytracing to mark cells as free space or remove previously detected obstacles. When no points were


measured in a cell and some points were seen through that cell, we mark the cell as free. To distinguish traversable and obstacle cells, we use a 3D normal distribution approximation of the points measured in the cell. We basically estimate a 3D normal distribution covering the points measured in every cell. If the distribution is flat and horizontal, we consider the cell to be traversable (Fig. 3). Each normal distribution is described by its mean value and a covariance matrix:

μ = (1/n) Σ_{i=1}^{n} x_i    (1)

C = (1/(n−1)) Σ_{i=1}^{n} (x_i − μ)(x_i − μ)^T    (2)

where the x_i are the 3D coordinate vectors of the points in the cell, n is the number of those points, \mu \in \mathbb{R}^3 is their mean value and C \in \mathbb{R}^{3 \times 3} is the covariance matrix. The 3D-NDT can also be represented by an ellipsoid describing the distribution’s confidence region (the generalization of a confidence interval to multiple dimensions), which is the area where a certain fraction (confidence) of the points is located (see Fig. 4). The shape of this ellipsoid is used for traversability estimation. The center of the ellipsoid is given by the mean value \mu. The directions of the principal axes of the ellipsoid are given by the eigenvectors of the distribution’s covariance matrix, and the squared relative lengths of the principal axes are given by the corresponding eigenvalues. We define the normal vector of the surface measured in a cell as the shortest axis of the distribution’s ellipsoid. The axis may be inverted (multiplied by −1) if it does not point towards the camera. That way, the normal vector always points in the direction of free space, and its length describes the flatness of the surface: a flat surface has a very short normal vector. Finally, we use this normal vector to estimate cell traversability: if the normal vector points upwards and is short, the surface is considered traversable.
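A self-contained sketch of this classification step, using the Eigen library, is given below. It follows Eqs. (1) and (2) and the eigen-decomposition described above; the flatness and slope thresholds are assumed values for illustration, not the parameters of the actual system.

```cpp
// Sketch of the 3D-NDT based traversability test for one cell (Eigen).
// Threshold values are illustrative assumptions, not the system's parameters.
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
#include <vector>

enum class CellType { Traversable, Obstacle };

CellType classifyCell(const std::vector<Eigen::Vector3d>& pts,
                      const Eigen::Vector3d& camera_pos) {
  if (pts.size() < 3) return CellType::Obstacle;   // not enough points to decide
  const int n = static_cast<int>(pts.size());

  // Mean of the points measured in the cell, Eq. (1).
  Eigen::Vector3d mu = Eigen::Vector3d::Zero();
  for (const auto& p : pts) mu += p;
  mu /= n;

  // Sample covariance, Eq. (2).
  Eigen::Matrix3d C = Eigen::Matrix3d::Zero();
  for (const auto& p : pts) C += (p - mu) * (p - mu).transpose();
  C /= (n - 1);

  // Eigen-decomposition; eigenvalues come sorted in increasing order, so the
  // first eigenvector spans the shortest axis of the confidence ellipsoid.
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(C);
  Eigen::Vector3d normal = es.eigenvectors().col(0);
  double shortest_axis = std::sqrt(std::max(0.0, es.eigenvalues()(0)));

  // Flip the normal so that it points towards free space (the camera side).
  if (normal.dot(camera_pos - mu) < 0.0) normal = -normal;

  // Traversable if the surface is flat (short axis) and roughly horizontal
  // (normal points upwards). Both limits are assumptions.
  const double kMaxFlatness = 0.02;   // metres
  const double kMinUpwards  = 0.9;    // cosine of the maximum slope angle
  if (shortest_axis < kMaxFlatness && normal.z() > kMinUpwards)
    return CellType::Traversable;
  return CellType::Obstacle;
}
```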

3.3 Map Update

There are two types of map update that have to be handled. The first type is an RGB-D measurement, which can contain new obstacles or previously unexplored parts of the map. For each cell where enough points were sensed in the received measurement, we compute a new 3D-NDT and update the type of that cell. We only consider points received in the latest RGB-D measurement containing this cell, to ensure that no duplicate points or surfaces caused by localization error will be present, because they might corrupt the traversability estimation. Moreover, during the execution of the plan, we periodically check each cell of the path and its surroundings for obstacles that could be discovered after a map update. The second type of map update is a loop closure. A loop closure is the discovery of an error in the map by the SLAM system.


Fig. 3. Visualization of the NDT usage in two dimensions. Blue ellipses are traversable cells, red ones are obstacles. Notice the effect of the octree cell merging. (We use 2D visualization for clarity.)

Fig. 4. The confidence region ellipse of an NDT representing a set of points. (We use 2D visualization for clarity.)

It happens, for example, when the localization provides an imprecise location for some time, during which we inserted obstacles into wrong places of the map. The localization failure can be caused by odometry error when mapping unknown areas, or by the robot failing to recognize an already known area. When a loop closure is detected, the SLAM system changes the location estimate of some RGB-D measurements and updates the map accordingly to fix errors, such as obstacles inserted into wrong parts of the map. The system performs reinsertion of the corrected RGB-D measurements, starting with the first one with a wrong location estimate and continuing to the last measurement. That means marking the cells corresponding to the former measurements as unexplored and inserting the corrected measurements into the map. The reinsertion can be very time consuming, because the upper bound of its computational complexity is linear in the number of measurements. However, loop closures in indoor environments (where we can use the RGB-D sensor) are usually small and contain just a few of the latest measurements, so in practice the loop-closure handling does not become linearly slower over time.

4 Path Planning

The task of the planning algorithm is to find a path between the robot position and a given goal coordinate in the map structure. The resulting path is a list of pairwise adjacent map cells and has to fulfill several constraints to ensure the safety of the path and the reliability of the system: it has to be traversable, obstacle free, and mostly explored, as we describe later. In addition to these constraints, we optimize the path with respect to a cost metric, which expresses distance and accounts for possible problems along the path, such as the number of nearby obstacles, unexplored cells, and the angle of slopes. The motivation for this specific cost metric is to minimize the chance of problems during plan execution that would lead to re-planning. The exact definition of the function can be found in the master's thesis [11].


We do not restrict the shape of the path according to any particular robot movement constraints (such as those of robots unable to turn in place), and we do not consider the robot orientation. That is appropriate, for example, for robots with a differential drive, but not for more complicated drives like Ackermann steering. For path planning, we currently use the well-known A* algorithm; a simplified, self-contained sketch of this search is given below. The algorithm traverses the traversable and unexplored cells of the map (see Sect. 3.2), while preferring cells in the direction of the goal. As the map representation distinguishes free and unexplored areas, the planner can even traverse unexplored areas. That is needed because maps often contain small unexplored “gaps” caused, for example, by occlusions.
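The following self-contained sketch shows an A* search over grid cells with the Manhattan heuristic. The `passable` and `enter_cost` callbacks are simplified placeholders for the system's safety check and cost metric, which are not reproduced here.

```cpp
// Self-contained A* sketch over integer grid cells with a Manhattan heuristic.
#include <array>
#include <cstdlib>
#include <functional>
#include <queue>
#include <unordered_map>
#include <utility>
#include <vector>

struct Cell {
  int x, y, z;
  bool operator==(const Cell& o) const { return x == o.x && y == o.y && z == o.z; }
};

struct CellHash {
  std::size_t operator()(const Cell& c) const {
    std::size_t h = std::hash<int>()(c.x);
    h = h * 31 + std::hash<int>()(c.y);
    return h * 31 + std::hash<int>()(c.z);
  }
};

static double manhattan(const Cell& a, const Cell& b) {
  return std::abs(a.x - b.x) + std::abs(a.y - b.y) + std::abs(a.z - b.z);
}

// Returns the path from start to goal (inclusive), or an empty vector if none exists.
std::vector<Cell> aStar(const Cell& start, const Cell& goal,
                        const std::function<bool(const Cell&)>& passable,
                        const std::function<double(const Cell&)>& enter_cost) {
  using Entry = std::pair<double, Cell>;               // (f-value, cell)
  auto cmp = [](const Entry& a, const Entry& b) { return a.first > b.first; };
  std::priority_queue<Entry, std::vector<Entry>, decltype(cmp)> open(cmp);
  std::unordered_map<Cell, double, CellHash> g;        // best known cost so far
  std::unordered_map<Cell, Cell, CellHash> parent;

  g[start] = 0.0;
  open.push({manhattan(start, goal), start});
  const std::array<Cell, 6> moves{{{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}}};

  while (!open.empty()) {
    Cell cur = open.top().second; open.pop();
    if (cur == goal) {                                 // reconstruct the path
      std::vector<Cell> path{cur};
      while (!(path.back() == start)) path.push_back(parent[path.back()]);
      return {path.rbegin(), path.rend()};
    }
    for (const Cell& m : moves) {
      Cell nxt{cur.x + m.x, cur.y + m.y, cur.z + m.z};
      if (!passable(nxt)) continue;                    // safety check failed
      double tentative = g[cur] + enter_cost(nxt);
      auto it = g.find(nxt);
      if (it == g.end() || tentative < it->second) {
        g[nxt] = tentative;
        parent[nxt] = cur;
        open.push({tentative + manhattan(nxt, goal), nxt});
      }
    }
  }
  return {};
}
```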

4.1 Collision and Traversability Checking

The system checks the traversability and possible collisions of the robot centered on a particular cell for all cells traversed by the planning algorithm. We call these two checks together a safety check. The safety check can fail because of an obstacle cell present within a certain distance from the checked cell, or because of an absence of traversable surface somewhere within that area. The distance depends on the dimensions of the robot; in this implementation, we use a simple spherical model of the robot for safety checking. Three examples of the safety check are shown in Fig. 5. During the checks, we also consider the 3D-NDT stored in the cells to increase the precision of the system. A traversable cell can satisfy the check only if its surface's altitude – the z coordinate of the 3D-NDT mean value – is close enough to that of the safety-checked traversable cell. Therefore, the system can distinguish between a small traversable asperity and a bigger step, as visualized in Fig. 5, part A.

Fig. 5. Three cases of safety check. In A, the traversability is not satisfied, as the traversable cell to the left is too low and the cell to the right is not traversable. In B, the cell is considered safe, as there is only one solitary obstacle. In C, the collision check will fail because of two obstacles.

Fig. 6. The sampling local planner. Selected path is shown as a thick blue line. Corresponding linear and angular velocity command (0.3; −0.1) will be sent for execution.


To increase robustness against errors in the map, we threshold the number of obstacles and the amount of missing traversable area, so that an error in one cell cannot cause the safety check to fail. The system requires at least two obstacle cells or at least two missing traversable surfaces to report the checked cell as unsafe; see the example in Fig. 5. That is extremely important during the post-planning checks after map updates, which detect new obstacles along the path. Without this thresholding, one incorrect cell measurement in that check would require recomputing the whole plan. We also ignore obstacles that are under some traversable surface, e.g. points incorrectly measured below the floor. A simplified sketch of the safety check is given below; full details are given in the master's thesis [11].
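The sketch below models the robot as a sphere of cells and applies the altitude tolerance and the at-least-two-bad-cells rule described above. The cell classification and surface altitude are supplied as callbacks, and the radius and tolerance values are assumptions made for illustration.

```cpp
// Sketch of the safety check for one candidate cell (simplified).
#include <cmath>
#include <functional>

enum class CellClass { Unexplored, Free, Obstacle, Traversable };

bool isSafe(int cx, int cy, int cz,
            const std::function<CellClass(int, int, int)>& cellClass,
            const std::function<double(int, int, int)>& surfaceAltitude) {
  const int kRadius = 3;          // spherical robot model, radius in cells (assumed)
  const double kMaxStep = 0.05;   // tolerated altitude difference in metres (assumed)
  const double refAlt = surfaceAltitude(cx, cy, cz);

  int obstacles = 0, missingSupport = 0;
  for (int dx = -kRadius; dx <= kRadius; ++dx)
    for (int dy = -kRadius; dy <= kRadius; ++dy)
      for (int dz = -kRadius; dz <= kRadius; ++dz) {
        if (dx * dx + dy * dy + dz * dz > kRadius * kRadius) continue;
        CellClass c = cellClass(cx + dx, cy + dy, cz + dz);
        // Obstacles below the supporting surface (e.g. points measured through
        // the floor) are ignored; the remaining obstacle cells are counted.
        if (c == CellClass::Obstacle && dz >= 0) ++obstacles;
        // In the supporting layer we expect a traversable surface whose
        // altitude is close to that of the checked cell.
        if (dz == 0) {
          bool supported = (c == CellClass::Traversable) &&
              std::fabs(surfaceAltitude(cx + dx, cy + dy, cz) - refAlt) <= kMaxStep;
          if (!supported) ++missingSupport;
        }
      }
  // One erroneous cell is tolerated; two or more make the cell unsafe.
  return obstacles < 2 && missingSupport < 2;
}
```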

4.2 Plan Execution

We implemented only the global path planning and used the ROS Navigation [7] local planner for motion planning. This local planner receives the global plan as a 3D curve and the robot location, and outputs the linear and angular velocity command standardized in ROS. Its algorithm generates several possible velocity commands, predicts the trajectory of the robot after 1.5 s of executing each velocity command, and chooses the command after which the robot gets closest to the requested trajectory and to the next vertex of the global path curve. The weighted sum of those two errors is used as the optimality criterion. An illustration of the local planner mechanism is shown in Fig. 6. During the execution of the plan, the system checks the upcoming part of the plan for traversability and collisions and triggers replanning if a problem is detected.
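The sketch below illustrates the sampling mechanism of such a local planner: candidate velocity commands are forward-simulated for 1.5 s with a simple differential-drive (unicycle) model and scored by a weighted sum of two distance terms. The sampling ranges, the weights, and the use of the closest global-path point as the "requested trajectory" are simplifications and assumptions; this is not the actual ROS Navigation implementation.

```cpp
// Sketch of a sampling-based local planner step (assumed parameters).
#include <algorithm>
#include <cmath>
#include <limits>
#include <utility>
#include <vector>

struct Pose { double x, y, theta; };
struct Cmd  { double v, w; };

static Pose simulate(Pose p, const Cmd& c, double dt, int steps) {
  for (int i = 0; i < steps; ++i) {        // simple unicycle forward simulation
    p.x += c.v * std::cos(p.theta) * dt;
    p.y += c.v * std::sin(p.theta) * dt;
    p.theta += c.w * dt;
  }
  return p;
}

static double dist(const Pose& p, double x, double y) {
  return std::hypot(p.x - x, p.y - y);
}

Cmd chooseCommand(const Pose& robot,
                  const std::vector<std::pair<double, double>>& globalPath,
                  std::size_t nextVertex) {
  const double kPathWeight = 0.6, kVertexWeight = 0.4;   // assumed weights
  Cmd best{0.0, 0.0};
  double bestScore = std::numeric_limits<double>::max();
  for (double v = 0.0; v <= 0.4; v += 0.1) {             // linear velocities (assumed)
    for (double w = -0.6; w <= 0.6; w += 0.1) {          // angular velocities (assumed)
      Pose end = simulate(robot, {v, w}, 0.05, 30);      // 30 * 0.05 s = 1.5 s horizon
      // Distance to the closest point of the global path.
      double pathErr = std::numeric_limits<double>::max();
      for (const auto& q : globalPath)
        pathErr = std::min(pathErr, dist(end, q.first, q.second));
      // Distance to the next path vertex to be reached.
      double vertexErr = dist(end, globalPath[nextVertex].first,
                              globalPath[nextVertex].second);
      double score = kPathWeight * pathErr + kVertexWeight * vertexErr;
      if (score < bestScore) { bestScore = score; best = Cmd{v, w}; }
    }
  }
  return best;   // e.g. (0.3, -0.1) as in Fig. 6
}
```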

5 Implementation

The navigation system was implemented in C++11. We have chosen to use ROS (Robot Operating System) as an abstraction layer. That allows us to use the ROS mapping tools, simplifies sharing the navigation system with the robotics community, and enables the system to work on different robotic platforms. We also use the Octomap [3] and PCL [8] libraries and the rtabmap [4] mapping system. We have tested our implementation on our local robotic platform, shown in Fig. 7, equipped with a differential drive and a Microsoft Kinect camera. The camera was positioned on an elevated mount and tilted downwards. The computations were performed on a quad-core notebook carried by the robot. The implementation is able to visualize the processed map and the planned path, as shown in Fig. 8.


Fig. 7. Our local robotic platform, on which we have tested the implementation.

Fig. 8. Planning on an inclined board. The robot is standing to the left of the image and the goal is denoted by a purple sphere. Unsafe cells are reachable cells where the collision check failed. The edges of the board (highlighted by the stripe pattern) were detected, so the robot will not fall off it.

6 Evaluation

To evaluate the implementation, we designed several scenarios and observed the overall robustness and performance of the system. We paid attention especially to the reliability of the system and to the special situations in which the navigation fails. In the scenarios, we ask the system to navigate to given coordinates relative to the robot's starting position. The system has no prior knowledge of the environment and no map; it plans the path after the first RGB-D measurement. As our robot unfortunately does not provide 3D odometry but only 2D wheel odometry, we use the visual odometry tracked by the SLAM system.

6.1 Plan Execution

We repeated each of the scenarios several times and measured the number of successful runs (Ok), the number of failures of our navigation system (Nav-f; the robot crashed or was unable to find a path), and the number of failures of the underlying systems (Other-f; localization was lost or the SLAM map was corrupted). The results can be found in Table 1. We also identified several cases where the system is unable to work: the RGB-D camera cannot detect transparent surfaces such as glass, and the narrow field of view of the camera occasionally caused the SLAM system to fail after sharp turns.


More details about the individual scenarios can be found in the master's thesis [11]. We have also recorded a few short videos of the scenarios, which are available at http://www.janskoda.cz/navigation/. Note that for safety reasons, we confirmed the path plans manually before execution, so the timing in the videos does not correspond to the implementation performance.

Table 1. The reliability evaluation results

Experiment            Ok   Nav-f   Other-f   N
Around slope           4     0        1       5
Slope                  6     2        2      10
Into unexplored        5     0        0       5
Bridge traversal       7     1        2      10
Long operation         6     1        3      10
Navigation reliability: 87.5%
Reliability overall: 70%

In addition to the overall results, we present one of the scenarios, “bridge traversal”, in detail. In this scenario, the robot had to navigate into a part of the environment that was only accessible via a wooden “bridge” (Fig. 9); other paths were blocked. The system was able to plan the path shown in Fig. 10 in about 2 s. Later, it discovered an unsafe cell on the path and, after replanning, the robot arrived at the goal location. The replanning took about 3 s and the resulting path is shown in Fig. 11.

Fig. 9. The environment of the scenario.


Fig. 10. The initial map and the plan to the temporary goal. The meaning of the used cell colors is described in Fig. 8.

Fig. 11. The map source point cloud and the resulting path as recorded by the SLAM system.

7 Conclusion

We have proposed a 3D navigation system for ground vehicles. The design of our system has several advantages compared to other state-of-the-art navigation systems. It is well suited for uneven terrain, which can even have multiple levels. 3D navigation is also safer than standard navigation systems in many situations, because it can avoid surface edges and holes in the ground that are invisible to most available navigation systems. The novelty of the system lies in our map structure and map update procedures, which are able to model the environment accurately, yet efficiently with respect to map update and path planning speed. We also paid a lot of attention to dealing with the problems caused by the imprecise localization and RGB-D input of broadly available cameras. On the other hand, our design has several disadvantages as well. The RGB-D camera used cannot sense any obstacles closer than approximately 0.4 m and has a relatively narrow field of view, which together limit the ability of the system to avoid obstacles. Also, the computational complexity, and especially the latency of the RGB-D stream processing both in the camera driver and in our system, slows down the reaction speed of the system: it takes about 2 s to react to new or moving obstacles. The results of the implementation were evaluated in a series of experiments where the robot had to navigate in environments containing a single navigational problem. In the evaluation, we have shown that the implementation works as expected, assessed the reliability of the system, and highlighted its problems.


Acknowledgements. We would like to thank Dr. Ohya from the University of Tsukuba, Japan, for his valuable insight that helped with this work, and Dr. Obdržálek for help with the robotic platform. The research is supported by the Czech Science Foundation under contract P103-15-19877S.

References
1. Doroodgar, B., Liu, Y., Nejat, G.: A learning-based semi-autonomous controller for robotic exploration of unknown disaster scenes while searching for victims. IEEE Trans. Cybern. 44(12) (2014)
2. Hornung, A., Phillips, M., Jones, E.G., Bennewitz, M., Likhachev, M., Chitta, S.: Navigation in three-dimensional cluttered environments for mobile manipulation. In: IEEE International Conference on Robotics and Automation (ICRA) (2012)
3. Hornung, A., Wurm, K.M., Bennewitz, M., Stachniss, C., Burgard, W.: OctoMap: an efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots 34(3) (2013)
4. Labbe, M., Michaud, F.: Online global loop closure detection for large-scale multi-session graph-based SLAM. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2014)
5. Maier, D., Hornung, A., Bennewitz, M.: Real-time navigation in 3D environments based on depth camera data. In: IEEE/RAS International Conference on Humanoid Robots (2012)
6. Morisset, B., et al.: Leaving flatland: toward real-time 3D navigation. In: IEEE International Conference on Robotics and Automation (ICRA) (2009)
7. Marder-Eppstein, E., Berger, E., Foote, T., Gerkey, B., Konolige, K.: The office marathon: robust navigation in an indoor office environment. In: IEEE International Conference on Robotics and Automation (ICRA) (2010)
8. Rusu, R.B., Cousins, S.: 3D is here: point cloud library (PCL). In: IEEE International Conference on Robotics and Automation (ICRA) (2011)
9. Stoyanov, T., Magnusson, M., Almqvist, H., Lilienthal, A.: On the accuracy of the 3D normal distributions transform as a tool for spatial representation. In: IEEE International Conference on Intelligent Robots and Systems (IROS) (2011)
10. Stoyanov, T., Magnusson, M., Andreasson, H., Lilienthal, A.: Path planning in 3D environments using the normal distributions transform. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2010)
11. Škoda, J.: 3D Navigation for Mobile Robots. Master's thesis, Charles University (2017)
12. Xu, S., Honegger, D., Pollefeys, M., Heng, L.: Real-time 3D navigation for autonomous vision-guided MAVs. In: IEEE International Conference on Intelligent Robots and Systems (IROS) (2015)

Educational Robotics

Robobo: The Next Generation of Educational Robot

Francisco Bellas¹, Martin Naya¹, Gervasio Varela², Luis Llamas², Moises Bautista¹, Abraham Prieto¹, and Richard J. Duro¹

¹ Integrated Group for Engineering Research, Universidade da Coruña, A Coruña, Spain
{francisco.bellas,martin.naya,m.bautista,abprieto,richard}@udc.es
² Mytech, A Coruña, Spain
{gervasio.varela,luis.llamas}@mytechia.com

Abstract. This paper presents Robobo in the context of higher education. Robobo is a low-cost educational mobile robot that combines a simple wheeled base with a smartphone, which provides the latest technology to the robot. With Robobo, students can develop their own projects through ROS using cameras, microphones or high-resolution screens, bringing teaching closer to the real requirements of the market they will find when they finish their studies. In this work, the hardware and software development that has been carried out is described in detail. Furthermore, an illustrative student project is presented that shows the potential of Robobo in this context.

Keywords: Educational robots · ROS · Human-robot interaction

1 Introduction

University degrees in computer science and engineering have been using mobile robots in different subjects for a long time [1–3]. They are a basic tool when teaching autonomous robotics but, in addition, they have proven very useful in other subjects like computer vision [4] or control systems. Up to now, mainly due to economic limitations, these educational robots have been quite simple in technological terms. Thus, the most popular educational robot in higher education is the Lego Mindstorms [5], a modular robot equipped with an Arm926ej-S Core@300 MHz processor in its latest version (EV3). This robot is equipped with a sonar or infrared sensor, a gyroscope and a color sensor, and it has an average price of 350–400€. A very popular small mobile robot in European universities is the Thymio II [6], suitable for multi-robot teaching. It only costs around 120€ and is equipped with infrared sensors, a temperature sensor and an accelerometer, but it has a low-performance PIC24@32 MHz microcontroller. As a consequence of this simplicity, the practical use of real robots in higher education has been limited to simple control mechanisms and laboratory applications. This has not been a real problem so far because robotics was not a real market in industry, so robots were used in classes as simple prototypes, without a rigorous translation to reality. In fact, the use of robotic simulators has been very popular. But, as is well known, the current and near-future situation is that robotics will be a key market for


engineers and computer scientists [7]. As a consequence, the robots used in university classrooms must be updated to include the features of the real robots that will be present in our reality, both in industrial and domestic domains. This implies that robotics classes must exchange simple platforms or simulated robots for realistic ones. But if one moves to robots with more powerful sensors, the price increases. For example, the e-puck robot [8] is a mobile robot that was very popular in the last decade in several European universities. It also has infrared sensors and a dsPIC30F6014A@64 MHz controller, and it includes a low-resolution camera of 640 × 480 pixels; the average price of one unit is 600€. A more powerful robot used in higher education is the Khepera IV [9], which uses a 752 × 480 resolution camera, an ARM Cortex 800 MHz processor, and infrared and sonar sensors, but with a price of around 2500€. Considering high-end mobile robots like the NAO [10], with an Atom Z530 1.6 GHz CPU, two 1280 × 960 cameras, and sonar and tactile sensors, the price becomes even higher (around 6000€), which is unreachable for most higher education institutions when considering the large number of robots that intensive teaching of these topics would imply (no more than two or three students per robot, and preferably only one). The most remarkable platform in terms of cost-features balance is the TurtleBot [11]. It is a low-cost (520€) and open-source mobile robot, which is very popular among ROS (Robot Operating System) users as a testing platform, mainly for SLAM (Simultaneous Localization and Mapping) research, because it contains a 360º LIDAR system in its latest version. However, its application to a wider range of robotic domains, such as human-robot interaction using speech recognition, cameras or tactile interfaces, is limited. Consequently, a few years ago, the authors of this paper decided to start the development of a low-cost educational mobile robot that can incorporate state-of-the-art technology to be used in engineering and computer science degrees. This robot has been called Robobo [12], and it combines a simple wheeled base with a smartphone. With Robobo, students can develop engineering projects using cameras, microphones or high-resolution screens, bringing teaching closer to the real market they will find when they finish their studies. This paper is focused on the presentation of the current version of the Robobo hardware and software, which is explained in Sect. 2. Furthermore, its potential in higher education is illustrated through an exemplifying student project in Sect. 3. This project exploits the human-robot interaction (HRI) features that are provided by the smartphone, based on its visual, auditory and tactile capabilities.

2 The Robobo Robot

The Robobo hardware is made up of two elements: a mobile wheeled base and a smartphone that is attached to the base, as shown in Fig. 1. It must be pointed out that the Robobo robot is not only the mobile base, but the combination of base and smartphone. This configuration provides the following features:
• Low-cost: The educational institution must acquire only the mobile base, because the smartphone can be provided by the student. Our experience in this respect is that students perceive this extra application of their own smartphone as positive.


Robobo will be commercialized in classroom packs with a reference cost per unit lower than that of the most popular educational robots, the TurtleBot 3 or the LEGO EV3.
• Continuous update: Smartphone technology, both in hardware and software, is continuously updated, so Robobo can be easily improved by simply upgrading the smartphone. It is well known that students usually have the latest models, so new hardware features can be easily incorporated in classes. The Robobo base will support new smartphone features by simply updating its firmware and software.
• Long-term duration: Related to the previous two features, the validity of Robobo is longer than that of other educational robots, and the investment in the mobile base lasts longer.
• State-of-the-art technology: Apart from the sensors and actuators in the mobile base, which will be explained later in detail, students can use all the hardware elements included in typical current smartphones:
– 2 high-resolution cameras
– Proximity, light and temperature sensors
– Gyroscope, accelerometer and magnetometer
– GPS
– Microphone
– High-resolution LCD screen
– Connectivity: 4G, Wi-Fi, USB, …
– Speaker
– …

Fig. 1. The Robobo robot with the smartphone attached (left). The mobile base with a 3D printed pusher (right)

In addition to the Robobo hardware, a software architecture that allows programming Robobo through ROS (Robot Operating System) [13] has been developed.


In the following sections, the Robobo hardware and software design and implementation are described.

2.1 Robobo Hardware

The Robobo hardware is composed of two elements: the smartphone and the mobile base. Any commercial smartphone with the following characteristics is supported:
• Operating system: Android 5.1 or higher (an iOS version is under development)
• Display size: minimum 4.7 inches and maximum 5.5 inches
• Thickness: 95 mm maximum
The main elements of the base hardware are displayed in Fig. 2. The mobile base has two driving motors (MT1, MT2), which are in charge of the displacement of the platform. They are 150:1 gear motors with encoders plus a pinion-crown design. This extra ratio increases the maximum available torque and the robot's acceleration. At the back of the base, there are two caster balls which allow omnidirectional movement. Furthermore, the base is equipped with two additional motors that allow independent movement of the base and the smartphone:

Fig. 2. Main hardware elements of the Robobo base

Pan motor (PAN): A 150:1 gear motor with an encoder and a pinion-crown reduction. This reduction provides the possibility of performing smooth and precise movements, with a minimum angular displacement of 4.86º.


Tilt motor (TILT): A 1000:1 gear motor with an encoder. This ratio is high for two reasons:
• The 1000:1 reduction implies that the smartphone is self-supported: no current is needed to maintain the smartphone in the desired position.
• This ratio provides the torque required to move the smartphone in the worst-case conditions. The gear motor gives a maximum of 44 Ncm, which is higher than the 13 Ncm of the worst case observed in our tests.
These two motors provide Robobo with many different movements, and with richer HRI possibilities. A specific Printed Circuit Board (PCB) has been designed for Robobo in order to maximize the performance and capabilities of the base and to adapt the electronics to the external design. These are the main electronic components:
Microcontroller: The electronic structure of Robobo is built around a PIC32 microcontroller. The CPU runs at 80 MHz, which is enough to control the sensors and actuators and to carry out the miscellaneous tasks that the PIC must perform. It is in charge of simple calculations and operations, such as motor-encoder adjustments or analog battery readings, leaving the complex and computationally expensive tasks to the smartphone.
Sensors: Robobo uses 8 Vishay VCNL4040 I2C infrared sensors (IR1 to IR8) for three purposes, which depend on their orientation:
• Object detection (IR1, IR3, IR5, IR7): Placed in the front and rear parts of the base, oriented parallel to the ground. They are used for obstacle detection and avoidance in cluttered environments. Furthermore, they allow a soft approach to objects thanks to their gradual detection range.
• Fall avoidance (IR2, IR4, IR6, IR8): Placed in the front and rear parts of the base at a 45º orientation towards the ground, with the aim of detecting holes and avoiding falls when on tables or stairs.
• Ambient light sensing: The VCNL4040 can also be used as an ambient light sensor.
Actuators: As mentioned above, 4 DC motors are used to move and control the base. The 4 motors have a 6-pole magnetic encoder shaft to precisely measure the motor revolutions. In addition, they are controlled by two motor drivers that are limited to a maximum of 1.3 A each. The motor drivers are controlled by a PWM signal from the PIC microcontroller.
RGB LEDs: There are 7 RGB LEDs (L1 to L7 in Fig. 2). Five of them are placed in the front part of the base and two at the rear. The LED driver has a 4096-level resolution per color, which provides high color resolution. The LEDs have a user-interface functionality, showing information to the user such as the battery status or infrared obstacle detection.
Bluetooth module: A Microchip BM78 Bluetooth module is used to establish communication between the base and the smartphone. This module was selected for being a simple, low-cost, CE- and FCC-certified Bluetooth module. It uses Bluetooth 4.2, which supports the SPP (Serial Port Profile) for serial communication with the microcontroller.


Power system: A 2500 mAh, 3.7 V LiPo battery is mounted in the base. This battery was selected for its high charge density, in order to provide a high level of autonomy in the minimum possible space. The voltage range of these batteries is 4.2 V–3.0 V. Two integrated circuits (ICs) complete the rest of the power system:
• A buck-boost converter to obtain a fixed 3.3 V used by the microcontroller, Bluetooth, LEDs, etc. This IC is needed due to the variable voltage range of the battery.
• A step-up converter to boost the battery voltage to the 7 V of the motors.
Furthermore, there is a battery manager on the circuit board for recharging the battery through the BATT USB connector (see Fig. 2).

2.2 Robobo Software

The Robobo software development can be organized into low-level (firmware) and high-level (software architecture) blocks:
Low-level: Figure 3 shows the general hardware architecture of the mobile base, where only the main connection paths are represented to clarify the scheme. The core of the system is the PIC32 microcontroller, which is in charge of controlling the different integrated circuits, sensors and actuators on the base. Furthermore, the PIC32 manages the communication with the Bluetooth module, which sends information to the smartphone and receives motor commands from it.

Fig. 3. Robobo electronic architecture


High-level: The main requirement of the Robobo software architecture was to support programming the robot using different paradigms like ROS, Java or Scratch. A completely modular software architecture, shown in Fig. 4, has been designed. It is based around the concept of a Robobo module and a very lightweight library, called the Robobo Framework, which provides the essential mechanisms to manage and run those modules on an Android smartphone. On top of this library, two different kinds of modules can be loaded:
• Functionality modules (the orange ones in Fig. 4): Implemented in Java using the standard Android API, they provide different functionalities to the robot, like speech recognition, face detection, environment sensing or physical movement. Furthermore, these modules complement the native API of the robot, which can be used directly for programming it in Java for Android, thus creating Robobo apps that can run on any Android smartphone.
• A series of proxy modules (the ros-proxy and remote-proxy modules in Fig. 4), also implemented in Java, which provide interfaces to other programming environments. These interfaces translate between an external API or protocol and the native Robobo API in Java.

Fig. 4. Block diagram of the Robobo software architecture

Robobo comes by default with a particular set of these modules but, as they are completely decoupled from each other and from the framework, advanced users can customize the set of modules and even implement new modules to support new robot functionalities, or new programming paradigms through proxy modules. Finally, it is important to note that there is a module for connecting the Robobo framework and its modules to the Robobo robotic base. This rob-interface module, shown in pink in Fig. 4, implements the Bluetooth communication protocol of the base and provides a control API for other modules to use.


Java programming is directly supported by the native API provided by the different modules. Using the Robobo framework, users can create custom Android apps that control the behavior of the robot. These apps use the programming interfaces of the Robobo modules to access the different functionalities of the robot and to build new robot behaviors or solve new tasks. For block programming, two different approaches are currently supported: Scratch, and our own block-based IDE that uses Blockly. As can be seen in Fig. 4, both approaches are connected to the framework using the same proxy, the Robobo Remote Access Protocol (RRAP). This is a simple JSON-based protocol that allows remote access to the Robobo modules' APIs. Finally, Robobo can also be programmed using the Robot Operating System (ROS), which is the main framework used for teaching robotics at higher education levels. ROS is supported by a ros-proxy module that translates between the native APIs of the modules and ROS messages and topics.
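As a rough illustration of what ROS-level access to Robobo can look like, the following roscpp snippet publishes a text message that a speech module could speak. The topic name and the message type are hypothetical placeholders: the actual topics and messages exposed by the ros-proxy module may differ.

```cpp
// Illustrative roscpp client for a Robobo exposed through the ros-proxy.
// "/robobo/speech" and the use of std_msgs/String are hypothetical placeholders.
#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "robobo_hello");
  ros::NodeHandle nh;
  ros::Publisher speech_pub =
      nh.advertise<std_msgs::String>("/robobo/speech", 1);  // hypothetical topic

  ros::Rate rate(1.0);
  while (ros::ok()) {
    std_msgs::String msg;
    msg.data = "Hello";            // ask the speech module to say something
    speech_pub.publish(msg);
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}
```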

3 Example of Student Project

To illustrate the type of project that can be carried out using Robobo in computer science or engineering degrees, this section describes a specific example that was proposed to students in the “Robotics” subject of the Computer Science degree at the University of A Coruña during the 2016/17 academic year. The project was focused on HRI, specifically on programming the Robobo to act as a pet: the robot must ask the user for food, attention and “affection”. The way this is performed must be selected by the students, giving them the opportunity to create highly realistic solutions. The functionality modules required to solve this project use all the interactive modalities of Robobo:
Speech: The speech capability of the robot was deployed using the Android libraries, and a speechProducer module was implemented.
Sound recognition: The speechRecognition and soundRecognition modules were implemented using the Pocketsphinx [14] and TarsosDSP [15] libraries.
Vision: A faceRecognition module was implemented using the Android libraries, and a colorDetection module using the OpenCV library [16].
Tactile sense: The touchDetection module performs the recognition of tactile gestures on the smartphone screen. It recognizes a tap (quick and simple touch on the screen), a touch (sustained touch on the screen), a caress (slide over the screen) and flings (ending a slide in a fast way). The Android libraries were used to deploy this module.
Visual emotion: The emotionProducer module allows displaying different images or animations on the smartphone screen, giving a powerful interaction capability for showing the robot's emotions.
Movements: The motorCommand module allows the user to move the four motors of the Robobo base.
With these functionality modules and the programming language selected by the students (and supported by ROS), the specifications of the pet behavior were the following:


• Basic instincts: Robobo must store hunger and thirst levels and, when they are below a threshold, it must say “I am hungry” or “I am thirsty” until they are refilled.
• Movement: Robobo must move in order to capture the user's attention if no event has been executed for a predefined time-lapse. For instance, it can spin or emit a sound.
• Touch: Two different behaviors must be implemented depending on the type of screen touch modality:
– Tap: If the user touches the screen with a single tap, Robobo must react differently depending on the part of the “face” that is touched. If it is in the “eye” or in the “mouth”, it will move the tilt motor backwards and show an angry expression (see Fig. 5a and b for examples). If it is in any other point, it will show a laughing face and emit a sound saying “tickles” (see Fig. 5c and d for examples).

Fig. 5. Expressions of the touching behavior of the Robobo pet

– Fling: If the user slides a finger over the screen, the pan-tilt unit moves accordingly. The possible slide directions must be discretized into four pan-tilt movements: tilt backwards or forwards and pan rightwards or leftwards.
• Voice: Robobo must react to the following predefined phrases:
– “Here comes the food”: The robot prepares to receive food.
– “Here comes the drink”: The robot prepares to receive drink.
– “Hello”: The robot answers by saying “Hello”.
– “How are you?”: The robot responds by saying its thirst and hunger levels.
• Vision:
– Color: Robobo must detect 2 different colors, green for food and blue for drink, but only after the corresponding voice command has been detected. In both cases, it must say whether the color is correct or not. For instance, if the user says “Here comes the drink”, the robot must look for a blue color area of a predefined size in its field of view (the left image of Fig. 6 shows an example), and after a time-lapse, it must say “Thank you for the drink” or “I don't see the drink”.


Fig. 6. Green ball that represents “food” (left) and user face recognition (right)

– Face: Robobo must detect the user's face. If the face is closer than a threshold distance, the robot must move backwards (Fig. 6, right).
This example provided an engaging experience for the students. As shown, it addresses many topics within robotics and programming while, at the same time, providing a fun way for the students to get acquainted with them; a minimal sketch of how such a behavior loop could be organized is given below. In fact, many of the functionalities can be initially programmed without using the base, and the students would often do this at home, playing around with their mobile phones. Summarizing, even though this example is simple and an initial test, it is easy to see that there is a lot of potential in using this type of simple robot with high-end functionalities (sensing and actuation structures provided by the combination of the base and the smartphone). It is our aim to progressively introduce this approach in more courses during the next year.
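The following compact sketch shows one possible way to organize the pet's decision loop. All helper functions are hypothetical stand-ins for the Robobo functionality modules listed above, and the thresholds and timings are arbitrary; it is not the students' actual implementation.

```cpp
// Sketch of a possible decision loop for the "pet" project described above.
// Every helper below is a hypothetical stand-in for a Robobo module
// (speechProducer, speechRecognition, touchDetection, colorDetection,
// emotionProducer, motorCommand); thresholds and timings are arbitrary.
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

void say(const std::string& text)       { std::cout << "[speech] "  << text << "\n"; }
std::string lastRecognizedPhrase()      { return ""; }     // speech recognition stub
bool tapOnEyeOrMouth()                  { return false; }  // touch detection stub
bool colorVisible(const std::string&)   { return false; }  // color detection stub
void showEmotion(const std::string& e)  { std::cout << "[emotion] " << e << "\n"; }
void spinInPlace()                      { std::cout << "[motors] spin\n"; }

int main() {
  int hunger = 100, thirst = 100;       // 100 = satisfied, 0 = starving
  auto lastEvent = std::chrono::steady_clock::now();

  while (true) {
    // Basic instincts: complain while a level is below its threshold.
    if (hunger < 30) say("I am hungry");
    if (thirst < 30) say("I am thirsty");

    // Voice commands: food handling shown here; drink is handled analogously.
    if (lastRecognizedPhrase() == "Here comes the food") {
      if (colorVisible("green")) { say("Thank you for the food"); hunger = 100; }
      else                       { say("I don't see the food"); }
      lastEvent = std::chrono::steady_clock::now();
    }

    // Touch: an angry reaction when poked in the "eye" or "mouth".
    if (tapOnEyeOrMouth()) {
      showEmotion("angry");
      lastEvent = std::chrono::steady_clock::now();
    }

    // Movement: attract attention if nothing has happened for a while.
    if (std::chrono::steady_clock::now() - lastEvent > std::chrono::seconds(30)) {
      spinInPlace();
      lastEvent = std::chrono::steady_clock::now();
    }

    --hunger; --thirst;                 // levels decay over time
    std::this_thread::sleep_for(std::chrono::seconds(1));
  }
}
```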

4 Conclusions

Traditional robots used in higher education must be updated in order to prepare engineering and computer science students for the real market they will face in the near future. In this line, a low-cost educational robot called Robobo has been designed, developed and tested, which is made up of a simple mobile base with a smartphone attached to it. With this configuration, state-of-the-art technology can be used by students in classes now and in the future, because Robobo can be easily updated simply by upgrading the smartphone model. The robot will be available in November 2017 through the Robobo Project web page: http://en.theroboboproject.com.


References
1. Schilling, K., Rösch, O.: Mobile mini-robots for engineering education. Global J. Eng. Educ. 6(1), 79–84 (2002)
2. Fagin, B., Merkle, L.: Quantitative analysis of the effects of robots on introductory Computer Science education. J. Educ. Resour. Comput. 4(1), Article 2 (2002)
3. Williams, A.: The qualitative impact of using LEGO MINDSTORMS robots to teach computer engineering. IEEE Trans. Educ. 46(1), 206 (2003)
4. Sankowski, D., Nowakowski, J.: Computer Vision in Robotics and Industrial Applications. World Scientific, Singapore (2014)
5. Danahy, E., Wang, E., Brockman, J., Carberry, A., Shapiro, B., Rogers, C.B.: Lego-based robotics in higher education: 15 years of student creativity. Int. J. Adv. Rob. Syst. 11(2), 27 (2014)
6. Mondada, F., et al.: Bringing robotics to formal education: the Thymio open-source hardware robot. IEEE Robot. Autom. Mag. 24(1), 77–85 (2017)
7. Frey, C., Osborne, M.: The future of employment: how susceptible are jobs to computerisation? Technol. Forecast. Soc. Chang. 114, 254–280 (2017)
8. Mondada, F., Bonani, M., Raemy, X., Pugh, J., Cianci, C., Klaptocz, A., Magnenat, S., Zufferey, J., Floreano, D., Martinoli, A.: The e-puck, a robot designed for education in engineering. In: Proceedings of the 9th Conference on Autonomous Robot Systems and Competitions (2009)
9. Soares, J., Navarro, I., Martinoli, A.: The Khepera IV mobile robot: performance evaluation, sensory data and software toolbox. In: Proceedings of Robot 2015: Second Iberian Robotics Conference, pp. 767–781 (2016)
10. Shamsuddin, S., et al.: Initial response of autistic children in human-robot interaction therapy with humanoid robot NAO. In: Proceedings of the 2012 IEEE 8th International Colloquium on Signal Processing and its Applications, pp. 188–193 (2012)
11. TurtleBot web page. http://www.turtlebot.com
12. The Robobo Project web page. http://en.theroboboproject.com
13. ROS web page. http://www.ros.org
14. Pocketsphinx web page. https://github.com/cmusphinx/pocketsphinx
15. TarsosDSP web page. https://github.com/JorenSix/TarsosDSP
16. OpenCV web page. http://opencv.org

The ROSIN Education Concept: Fostering ROS Industrial-Related Robotics Education in Europe

Alexander Ferrein, Stefan Schiffer, and Stephan Kallweit

Mobile Autonomous Systems and Cognitive Robotics Institute (MASCOR), FH Aachen University of Applied Sciences, Aachen, Germany
{ferrein,s.schiffer,kallweit}@fh-aachen.de

Abstract. ROS Industrial (ROS-I) is an effort to deploy the Robot Operating System (ROS) for industrial manufacturing applications. The ROS-I activities are organised by the ROS Industrial Consortium (RIC). With the EU-funded project ROSIN, which started in 2017, the ROS-I activities are further supported. The project will give out funds for developing ROS-I components. As a further important measure, the ROSIN project focuses on education measures for training a large number of students and industry professionals to become specialists in ROS-I. In this paper, we outline the broad ROSIN education programme, which consists of a series of summer schools and a professional academy, and which intends to provide the course contents as Massive Open Online Courses as well.

Keywords: Robot Operating System (ROS) · ROS Industrial (ROS-I) · Robotics education · Professional training · Summer school · MOOC

1 Introduction

The influence of robotic systems is steadily increasing, in industry and the automation sector as well as in areas such as mobile service robotics. The development of an intelligent robot application is a complex and time-consuming task that requires a lot of expertise in various fields. Hence, developers of such applications need to be well trained. Middlewares help in developing applications; one of the most well-known robotics middlewares is the Robot Operating System (ROS). The ROS Industrial (ROS-I) branch of the ROS project aims at extending ROS with features important for industrial manufacturing applications. In this paper we present the EU-funded ROSIN project, which aims at fostering the use of high-quality ROS-based solutions in industry applications. We focus on the educational activities planned and already started in the ROSIN project. The rest of the paper is organised as follows. We start with a brief introduction of the background and some related work. In the next section, we present the core ideas of the ROSIN project. As this paper focusses on the educational aspects of the ROSIN project, we review other existing ROS education initiatives in Sect. 4, before we outline the ROSIN education activities in Sect. 5. We conclude with Sect. 6.

2 Background and Related Work

Robot middlewares facilitate robot application design. They come with tool support, they allow debugging the real-time software in a principled way, and they make it possible to simulate the application in a 3D physics simulation environment. A number of middlewares from the academic sector are available. The Open Robot Control Software (Orocos) has been under development since 2000; it is quite well known for its Kinematics and Dynamics Library and its Bayesian filtering library. The Player project [6], which also started in early 2000, produced a widely used network server for cross-robot development, which is often combined with Stage [5], a 2D simulator for robot hardware. Other approaches like Urbi [1] are hardware dependent and belong more to the area of scripting languages. The Fawkes framework is used in competitions like RoboCup but is not widely deployed [8]. The aforementioned systems have in common that they are mostly used in academia; none of these frameworks has been applied intensively to industrial applications. Still missing from the above list is the very well-known middleware Robot Operating System (ROS) [9], which celebrated its 10th anniversary in 2017. ROS is one of the most accepted frameworks for robotic applications world-wide and has a wide-spread community that supports it. The ROS-I [3] branch started in 2012, with industrial robotic applications in mind. These applications should benefit from components already available in ROS, such as perception, navigation or path planning (e.g. [7]). Up to now, the acceptance of ROS-I in industrial environments is still low. This is despite the fact that ROS can help solve challenging future industrial robot applications. To steer and support the development of ROS for an industrial setting, the ROS Industrial Consortium America was founded in 2013 at the Southwest Research Institute (SwRI). The goal of the consortium is to foster the development of the ROS-I core as well as of ROS-I components for industrial robots. Industrial robot applications can benefit heavily from flexible and advanced techniques such as object detection, environment modelling or path planning, which so far have been deployed mainly in academic mobile robotics. This could also cut the costs of developing industrial robot applications by (a) making use of open-source software supported by a large community which allows commercial use without restrictions (through BSD or Apache 2.0 license models), and (b) reducing the manufacturers' technology "lock-in" risk by providing standardised robot and sensor interfaces through ROS Industrial. In the meantime, there also exists a ROS Industrial Consortium Europe (RIC-EU), led by Fraunhofer IPA, and very recently it was agreed to found the ROS Industrial Consortium Asia Pacific. The effort of pushing ROS Industrial recently gained more support. At the beginning of 2017, the H2020 project ROSIN, funded by the European Commission, was started. ROSIN has the goal to further push the ROS-I activities in cooperation with the ROS Industrial Consortium Europe. This is pursued by providing funds for ROS-I-related software projects, by improving the quality of the software developed in those projects, and by a broad education initiative for training software engineers in ROS-I.


We describe the ROSIN project in more detail in the following section.

3 The ROSIN Project

The Robot Operating System has gained much attention in the academic world and has become a de-facto standard middleware for mobile robotics applications. World-wide, many research institutes make use of this framework and provide their research work as open-source software modules. As a result, the process of developing a complex robotic application is much easier and less time-consuming today than it was 10 years ago, because many important components of such a system are available as ROS nodes. However, many ROS components do not yet quite meet the requirements of industrial robotics applications in terms of real-time guarantees or software quality. The ROS-I effort, as mentioned above, was started to deploy ROS also for industrial robots. The benefits are that a larger developer community from different stakeholders can provide software solutions, and that with ROS-I a form of standardisation of components, sensors and algorithms takes place.

3.1 Overview

The goal of the ROSIN project is to improve the availability of high-quality intelligent robot software components, especially for European industry. To reach this goal, the criticism brought forward by industry against ROS in its current form has to be addressed. The main points raised are:
– lack of (sufficient) real-time support;
– stability issues;
– no (software) quality guarantees of the framework, and
– a lack of professional training sites.

The ROSIN project's work programme addresses this criticism with three main focus areas. The development of relevant, high-quality industrial software components is supported by ROSIN with funding from the European Commission. The consortium consists of the Robotics Institute of TU Delft, the Netherlands, the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA), Germany, the IT University of Copenhagen, Denmark, the Mobile Autonomous Systems & Cognitive Robotics Institute at FH Aachen University of Applied Sciences, Germany, Tecnalia Research & Innovation, Spain, and ABB AB, Sweden. The first measure of the ROSIN project is to fund so-called Focussed Technical Projects (FTPs, see Sect. 3.2). Second, with special tool support and dedicated training, the quality of the software developed in FTPs will be improved (Sect. 3.3). Third, the development and QA measures are flanked by a broad education initiative for current and future ROS-I software engineers (Sect. 3.4). In the following, we describe these measures in greater detail.

3.2 Focussed Technical Projects

The idea of Focussed Technical Projects (FTPs), as promoted by the ROS Industrial Consortium (RIC), is to kick-start relevant and needed ROS-I capabilities. A full member of a RIC can champion a particular project and share the cost and effort with other RIC members who are interested in that particular capability. Under the guidance of the Consortium Manager, the SwRI, a network of interested developers starts with the implementation of the component as soon as the funds are in place. The FTP is the fundamental concept for supporting developments within the ROSIN project. With funds allocated by the European Commission for ROSIN, the funding for FTPs is in place. The ROSIN project board decides which projects should be funded and further monitors the progress of each FTP. The challenging goal of the ROSIN project is to give out funds for 50 successful FTPs over a time span of four years. For more information about how to apply for an FTP, please visit the ROSIN project's website.1

3.3 Quality Assurance Aspects

The ROSIN project proposal mentions three strategies for improving the software quality of the newly developed ROS-I components. The quality assurance strategy includes a community-based quality management process. It will be established in such a way that it proposes quality standards which may or may not be obeyed, but it will provide leverage to move software developers towards complying with the given quality standard. There will also be workshops and tutorials as well as some technical support. Within the ROSIN project, a continuous integration infrastructure will be provided, allowing code reviews, error reporting and unit tests. Further, ROS-specific bug and quality trackers are to be developed within the ROSIN project. Model-driven development has been identified as a successful design and software production methodology. While some description languages, such as the Unified Robot Description Format (URDF), already exist in ROS, the concept of model-driven code generation will be strengthened to provide another means of assuring software quality. Component testing will also be improved, including thorough testing of the ROS core. Besides test-based validation methods, program-analysis-based validation and regression methods will also be applied to the developed ROS-I components.

3.4 Education

Even when a company agrees to use ROS-I as the middleware for its software developments, the major question remains where to hire employees with fundamental ROS-I knowledge. Despite the fact that many roboticists leaving university these days have been in contact with ROS before, profound and certified knowledge is desirable from a company's viewpoint. Therefore, a third pillar of

1 http://www.rosin-project.eu/



the ROSIN concept foresees an education and training programme to impart in-depth knowledge of ROS Industrial. The different ROSIN education measures, which are described in detail in Sect. 5, will impart the ROS-I knowledge required by industry. The project comes with a number of measures, but it is open to further educational ideas; therefore, it is also possible to apply for funds to run additional ROS-I education measures.


4 World-Wide ROS Training Activities

Even though ROS is quite well established and is the standard middleware for mobile robotics, the possibilities for receiving professional training are limited. The academic world often uses ROS in the classroom, but the number of publicly available courses world-wide is less than ten. The well-established ROS Industrial trainings by Fraunhofer IPA for Europe2 and at SwRI3 in the USA are known to be of high quality. They usually host around 40 participants, with the aim of building the bridge between ROS and ROS Industrial with a lot of practical content.4

(a) ROS Summer School 2016 held in Aachen, Germany

(b) ROS Industrial training at Fraunhofer IPA

Fig. 1. ROS Summer School 2016 held in Aachen, Germany

The ROS Summer School in Aachen5 is well established and has already been held five times. The last summer school had more than 50 participants from 27 nations and went on for twelve days. The program is structured into a ROS introduction, basics, higher-level applications of mobile robotics and a UAV session. A small introduction to MoveIt! is given for interested participants. Lectures and laboratory sessions are mixed equally, with a high level of practical work on custom hardware. A more detailed description of the concept,

http://rosindustrial.org/news/2016/9/26/ros-industrial-training-and-conference-2016-schedulenow-online 3 http://aeswiki.datasys.swri.edu/rositraining/indigo/Exercises/ 4 See, e.g., http://rosindustrial.org/news/2016/4/20/recap-ros-i-training-spring-2016 5 https://www.fh-aachen.de/en/faculties/mechanical-engineering-and-mechatronics/ international-topics/ros/



the organisation and the financing of the Aachen ROS Summer School is given in [4]. Figure 1a gives an impression of the Summer School held in Aachen last year; Fig. 1b shows a training at Fraunhofer IPA.6 The number of new ROS training activities has increased over the last two years. For instance, in July 2015, a summer school was established at East China Normal University in Shanghai, China, with over 200 participants in its first edition. It is a four-day event and consists mostly of lectures, without any practical work with robots. In Turkey, at Eskisehir Osmangazi University & Inovasyon Muhendislik, a three-day event was held for the second time with ten participants, focusing on ROS basics, simulation in ROS and higher-level ROS applications; lectures and laboratory sessions are equally split. In the UK, a ROS summer course has been established at Middlesex University in London, England. It hosts eleven participants for five days and consists mainly of laboratory sessions, where the usage of advanced ROS packages and simulation is explained; the hardware deployed in this course is the well-known TurtleBot system. Another ROS training is held at Centro Universitario FEI in Sao Paulo, Brazil, but details about the number of participants are missing. That course takes five days and is structured into a ROS introduction, some basics, and higher-level applications for mobile robotics; lectures and lab sessions are offered, but the goal is a more practical approach to ROS, and it was held for the first time. In Maribor, a TEDUSAR ROS course is held with fewer than 20 participants for five days, focusing on RoboCup Rescue. This course concentrates on a ROS introduction, basics and higher-level applications for mobile rescue robotics; it is held for the third time and consists of lectures and lab sessions. Another ROS course takes place at the Australian Centre for Robotic Vision in Kioloa, Australia, where the number of participants and the contents are not known; it seems to focus on ROS vision. This list of summer schools shows that there are only a couple of ROS training facilities world-wide. The possibilities are increasing, but ROS Industrial – or even components like MoveIt! – is not yet covered by the mentioned courses.

5 Scope of Education Activities Within ROSIN

One of the main objectives of the ROSIN project is to kick-start industrial robot applications based on the ROS middleware. With the aforementioned FTPs, interested developers and industrial partners can start development projects. Educational activities are a further measure in ROSIN to facilitate ROS Industrial applications.

5.1 Overview

In order to strengthen the ROS Industrial community, it is also of utmost importance to educate and train current and future ROS-I software developers.

6 http://rosindustrial.org/news/2016/12/16/ric-europe-event-recap-part-1-ros-industrial-training-and-conference-2016


Therefore, the ROSIN Education programme targets different end-user groups. These are, in particular, students and professional developers, but also members of the broader ROS/ROS Industrial community. For each of the end-user groups, the ROSIN Education programme provides specific training and education activities tailored to the needs of that particular group. Hence, the ROSIN education initiative includes the following activities:

– ROS-I School offers Classes for students.
– ROS-I Academy offers Trainings for (industry) professionals.
– ROS-I MOOCs offers Online Courses for different stakeholders.
– ROS-I Market is a Central Hub where stakeholders can meet.

For the different end-users (students, professionals, community), tailor-made education activities exist which aim at optimally training the respective user group. The education programme for professionals also foresees an accepted certificate attesting the holder's profound knowledge of particular ROS-I topics. For self-study, online courses will be provided (ROS-I MOOCs). Important for the success of the ROSIN project (FTPs as well as educational activities) will be to bring together people from academia and industry to work on related ROS-I projects. For this purpose, the foundation of the educational activities will be the ROS-I Market, a brokering platform where stakeholders from both sides can find respective peers. In the following, we describe the different activities in greater detail.

5.2 Education Measures

The ROSIN education concept is based on three main measures and a supplementary activity. A structural overview is depicted in Fig. 2a.

ROS-I School. The ROS-I School addresses university students and young professionals seeking an entry into the ROS Industrial eco-system. In regularly held summer schools, participants can enroll for a one-week teaching activity. The training will give good insights into currently used ROS-I software tools such as MoveIt! or Descartes. As the activity aims at a very intense training, the number of admitted students per training event is rather limited. Therefore, the activities will be held several times per year to give a large number of interested students and young professionals the opportunity to participate. Further, we will also offer workshops and summer schools with varying topics. These are not limited to teaching particular ROS-I components and tools, but will also address how good software engineering works, in order to improve the quality of future ROS-I components.

ROS-I Academy. As a second activity, the ROS-I Academy aims at establishing a ROS-I certified engineer programme to certify certain skills within the ROS Industrial software engineering eco-system, similar to MSE or CISCO certified engineers.


The certified skills comprise basic knowledge in ROS Industrial, as taught in the ROS-I School activity, skills in code review, and specialised ROS-I topics inspired by FTPs or based on programming skills for different robot platforms. The contents of the certified ROS-I engineering programmes will be selected in close collaboration with industrial parties. Further, the courses will be accredited in some form by an official relevant body to ensure that the required skills are taught and that quality standards of the training are met. The number of participants per course will also be limited so that each engineer gets good hands-on experience with real hardware and good insights into ROS-I components. Also, a course concept will be offered in which courses with different topics build upon each other.

ROS-I MOOCs. To support the other two education activities, the ROS-I MOOCs' objective is (1) to develop Massive Open Online Courses and (2) to provide a learning platform. Each participant of the ROS-I School or the ROS-I Academy can deepen their knowledge with this type of online course in addition to the presence courses held in ROS-I Schools or ROS-I Academies. Here, a number of different up-to-date topics will be presented, ranging from the ROS-I core over software engineering and software quality assurance topics (cf. Sect. 3.3) to single ROS-I components.

ROS-I Market. Finally, the ROS-I Market activity aims at providing a common internet platform to bring together the interested stakeholders of ROS-I. The objective is to provide a brokerage platform where industrial partners can post their needs for staff or development actions, developers can offer their expertise, and students are provided with internships at industrial or academic ROS Industrial stakeholders. The ROSIN Education Board will try to match offers and demands from the different stakeholders' sides.

5.3 The Education Cube

The main educational activities are structured along three dimensions: topic, level of proficiency, and type of activity. We describe each dimension in the following.

Topic. The topics offered in the ROSIN education measures are oriented towards industry needs. The first and foremost industrial applications deal with robotic manipulation. A second area is mobility, dealing with mobile robots moving around an industrial site. Finally, the combination of these two forms the third area, mobile manipulation, where a robotic manipulator moves around on a mobile platform.

Level. Each topic can be dealt with at different levels of proficiency. Beginners start at a basic level, moving via intermediate proficiency to an advanced level. The levels are oriented along the SOLO taxonomy, where basic corresponds to SOLO 2, intermediate to SOLO 3, and advanced to SOLO 4.


Type. Depending on the particular target group, we organise education in three bodies. The ROS-I School provides university-level Classes for students. The ROS-I Academy offers vocational Trainings for industry professionals. Finally, for both groups mentioned above, as well as for interested developers from the (broader) ROS(-I) community, a collection of Online Courses is provided in the ROS-I MOOCs. Since the trainings in the ROS-I Academy and the classes in the ROS-I School are quite similar, we should and will not keep them too separated. The structure of the educational activities along the three dimensions described above can be understood as a cube; the core instantiation of this education cube is shown in Fig. 2b.
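To make the cube concrete, the following minimal sketch (illustrative only, not part of the ROSIN material) enumerates the cells of the core education cube; every combination of topic, proficiency level and activity type corresponds to one possible educational offering, and extending a dimension simply means appending to one of the lists.

from itertools import product

# The three dimensions of the core cube as described above.
TOPICS = ["manipulation", "mobility", "mobile manipulation"]
LEVELS = ["basic", "intermediate", "advanced"]  # SOLO 2, 3 and 4
TYPES = ["Class (ROS-I School)", "Training (ROS-I Academy)", "Online Course (ROS-I MOOCs)"]

for topic, level, activity in product(TOPICS, LEVELS, TYPES):
    print(f"{level} {topic}: {activity}")

# 3 x 3 x 3 = 27 cells in the core cube; adding an "expert" level or a
# "3D perception" topic would add a whole layer of new cells.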

Fig. 2. The ROSIN education concept and the education cube: (a) the ROSIN education concept (ROS-I School, ROS-I Academy, ROS-I MOOCs and ROS-I Market coordinated by the ROSIN Board, serving students, professionals and the community from academia and industry); (b) the ROSIN education cube spanned by topic (manipulation, mobility, mobile manipulation), level (basic, intermediate, advanced) and type (Classes, Trainings, MOOCs).

Extending the Cube. The core education cube can be extended along any of its dimensions. For the level of proficiency, we currently plan three stages, namely basic, intermediate, and advanced. At a later point in time, however, a sufficient number of participants may want to deepen their understanding further. Thus, introducing an expert layer on top of the existing levels might be appropriate; this level would correspond to SOLO 5 in the SOLO taxonomy. Also, not all interested participants will meet the entry requirements of the basic education measures. As a consequence, we foresee preparatory courses that bring those individuals up to speed. While offering such prep courses as MOOCs will have the maximal coverage, special classes or trainings might be offered in the ROS-I School or the ROS-I Academy, respectively, given sufficient demand. Just as the scope of industrial robotic applications will broaden, so will the topics of our educational measures. Following industry needs, new topics will be accommodated and accounted for, such as 3D perception or machine learning. Besides particular topics stemming from applications, more general subjects like continuous integration or software architectures might also be included.


The three types depicted in the core education cube are oriented towards the primary target groups. Eventually, to reach maximum impact, the educational measures could and should be franchised. This in turn requires training the instructors of such franchised educational activities. A ROS-I Teacher Training hence is a straightforward extension of the core cube.

5.4 Planned Course Contents

The overall training contents will focus on industrial applications. As an example, consider a standard pick-and-place scenario, where the individual positions of single objects are taught and later processed using a position list. This problem will be solved using ROS-I standard components. The next step includes a flexible vision-based approach, where the positions of the objects to manipulate are no longer pre-specified with a teach-in. Instead, they are determined via standard tools available in the ROS environment such as OpenCV, PCL and the alvar functionality. This approach already enhances the standard industrial pick-and-place procedure by using concepts from mobile robotics such as perception and navigation. The next step includes collision avoidance with alternative path planning, in order to improve this application beyond the industrial standard. The basic concept of a collaborating robot will be used to attract industrial users to the ROSIN activity. This setup focuses on the use of MoveIt! in combination with the necessary training in setting up a serial kinematic chain with URDF, XACRO (XML macros) and other description formats. Essential modules like the tf package will be explicitly explained for industrial serial kinematics, which are still the most common robots in industrial use.

Fundamental knowledge of the topics mentioned above should enable the industrial user to cover numerous non-standard robotic applications using ROS-I. These applications can also be supported through the FTP funding possibility. In this way, industrial users will be encouraged to use ROS-I for solving complex robotic tasks with open-source technology instead of going for cost-intensive proprietary solutions.

The course contents above are just one example. During the project, we will further develop the concept, including leading-edge components from successful FTPs. In general, training content will be aligned with prevalent industrial use cases; currently, these are very often pick-and-place applications like the one sketched above. Another focus of the teaching activities lies on the quality of the developed software components. Therefore, workshops on how to use state-of-the-art development tools and methods (cf. Sect. 3.3) will be part of the initiative. Finally, the performance, stability and quality of the used software components will be evaluated during the ROSIN project.

Intended Learning Outcomes. Upon reflection within the education board, we are trying to use the SOLO taxonomy [2]. Let us give an example of the goals formulated along the so-called Intended Learning Outcomes (ILO). Consider a basic manipulation training in the ROS-I Academy. The training covers a UR5 application with path planning and collision avoidance. After completing the training, the participant will be able to


– define their robot by using URDF, kinematic chains etc.;
– configure the robot by setting parameters using the ROS reconfigure tools;
– visualize the robot by making use of rviz;
– make it move by deploying MoveIt!;
– make it see by integrating additional cameras and recognizing objects using OpenCV;
– make it clever by combining motion planning and object recognition into an intelligent high-level control application;
– do something useful with ROS by finding task-specific solutions to given application problems.

This was an example of a basic to intermediate ROS-I Academy training. Following the SOLO taxonomy, different ILOs will be in focus for each education level.
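As a rough illustration of the teach-in position list mentioned in the pick-and-place scenario above, the following sketch replays a list of previously taught poses with MoveIt!'s Python interface (moveit_commander). It is only an assumed, minimal example: the planning-group name "manipulator" and the pose values are placeholders, not part of the ROSIN course material.

import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("position_list_pick_and_place")

# The planning group name is robot-specific; "manipulator" is only a placeholder.
group = moveit_commander.MoveGroupCommander("manipulator")

def taught_pose(x, y, z):
    # Helper that builds a Cartesian target with a fixed orientation.
    p = Pose()
    p.position.x, p.position.y, p.position.z = x, y, z
    p.orientation.w = 1.0
    return p

# The "position list": poses that would have been taught beforehand (made-up values).
position_list = [taught_pose(0.4, 0.2, 0.3), taught_pose(0.4, -0.2, 0.3)]

for target in position_list:
    group.set_pose_target(target)   # plan to the taught pose
    group.go(wait=True)             # execute the planned motion
    group.stop()
    group.clear_pose_targets()

moveit_commander.roscpp_shutdown()

A vision-based variant would replace the fixed position_list with poses detected at run time (e.g. via OpenCV or alvar markers), which is exactly the step up from teach-in described above.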

5.5 Educating Future Teachers

The ROSIN Education Board will set up a special education activity in order to educate future ROS-I instructors. Conducting education activities is not restricted to consortium members of the ROSIN project; the goal is rather to use the project as a multiplier and interest many other institutions in participating in ROS-I activities. However, a certain quality level of external education measures is required, and the ROSIN Education Board will ensure it by offering special train-the-trainer courses within the ROSIN education measures. In this measure, we need to distinguish between teachers and training instructors, because with teachers we can already assume a certain level of didactical proficiency, whereas with instructors for professional trainings that might not be the case.

5.6 Measures Conducted by the ROSIN Consortium

Within the ROSIN project, a number of education activities will be carried out by the ROSIN consortium members. The Mobile Autonomous Systems & Cognitive Robotics Institute (MASCOR) will conduct eight editions of the ROS-I School at its premises in Aachen, Germany. The Fraunhofer Institute for Manufacturing Engineering and Automation (IPA) will offer two kinds of professional training activities: a basic training course teaching the fundamentals of ROS-I, taking place every two months, and more advanced courses every three months. The workshops will take place either at the IPA training facilities or at MASCOR; some workshops will also be held at the consortium member TECNALIA in Spain. The courses will be accredited in the course of the ROSIN project. Further, the consortium members will set up online courses on an appropriate platform (e.g. edX, Moodle, or Coursera). At a later stage, an additional programme will be dedicated to training future ROS-I trainers.

6 Conclusion and Future Work

In this paper, we presented the ROSIN project in general and its ROS Industrial-related robotics education activities in particular. The ROSIN project aims to foster the acceptance and quality of ROS Industrial-based solutions in industry and academia. To this end, the ROSIN project features a set of educational activities to increase the number of ROS Industrial literate people among students and professionals alike. Tailored offers for the different end-user groups accommodate their particular interests. In so-called ROS-I Schools, students receive training in developing high-quality applications using ROS Industrial. Similarly, the ROS-I Academy aims at training and later also certifying professionals in developing industrial-strength solutions with ROS Industrial. ROS-I MOOCs are meant to offer any interested party the possibility to improve their ROS Industrial proficiency with online training. Finally, the ROS-I Market offers a central platform for cooperation and exchange between all stakeholders. The education activities are also open to third parties, who can propose additional ROS-I education measures and apply for funds through the ROSIN project from the European Commission.

Acknowledgements. The ROSIN project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 732287.

References

1. Baillie, J.C., Demaille, A., Nottale, M., Hocquet, Q., Tardieu, S.: The Urbi universal platform for robotics. In: SIMPAR 2008, Venice, Italy (2008)
2. Biggs, J.B., Collis, K.F.: Evaluating the quality of learning: the SOLO taxonomy (Structure of the Observed Learning Outcome). Academic Press, New York (1982)
3. Edwards, S., Lewis, C.: ROS-Industrial – applying the Robot Operating System (ROS) to industrial applications. In: ICRA/ROSCon, St. Paul, Minnesota (2012)
4. Ferrein, A., Kallweit, S., Scholl, I., Reichert, W.: Learning to program mobile robots in the ROS Summer School series. In: Proceedings of the 6th International Conference on Robotics in Education (RIE-15) (2015)
5. Gerkey, B., Vaughan, R.T., Howard, A.: The Player/Stage project: tools for multi-robot and distributed sensor systems. In: ICAR 2003, Coimbra, Portugal (2003)
6. Gerkey, B.P., Vaughan, R.T., Støy, K., Howard, A., Sukhatme, G.S., Mataric, M.J.: Most valuable player: a robot device server for distributed control. In: IROS 2001, Wailea, Hawaii (2001)
7. Michieletto, S., Tosello, E., Romanelli, F., Ferrara, V., Menegatti, E.: ROS-I interface for COMAU robots. In: Proceedings SIMPAR, pp. 243–254. Springer (2014)
8. Niemüller, T., Ferrein, A., Beck, D., Lakemeyer, G.: Design principles of the component-based robot software framework Fawkes. In: Proceedings SIMPAR 2010, LNCS, vol. 6472, pp. 300–311. Springer (2010)
9. Quigley, M., Conley, K., Gerkey, B.P., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A.Y.: ROS: an open-source Robot Operating System. In: ICRA Workshop on Open Source Software (2009)

Mobile Robots as a Tool to Teach First Year Engineering Electronics

Pedro Fonseca1(B), Paulo Pedreiras1, and Filipe Silva2

1 IT, Universidade de Aveiro, 3810-193 Aveiro, Portugal
{pf,pbrp}@ua.pt
2 IEETA, Universidade de Aveiro, 3810-193 Aveiro, Portugal
[email protected]

Abstract. Engineering degrees require a strong background in Physical Sciences and Mathematics, demanding a high level of conceptualization and abstract reasoning that many students do not possess at the entry level of their higher education studies. This can cause student demotivation and dropout, a situation that Higher Education institutions have felt the need to cope with. One methodology to address this problem is to introduce the use of robots in classes. This tool has unique characteristics that may potentially contribute to increasing students' motivation and engagement, which are key factors in their academic success. This paper presents the rationale, challenges and methodology used to introduce robots as a tool to teach introductory electronics to first year students in an Electronics and Telecommunications Engineering Masters degree. The paper also reports evaluation indicators that result from two different surveys, one generic, carried out in the scope of the Quality Assurance System of the University, and another developed specifically to evaluate the course. The results confirm that there is a clear and overall positive impact. Particularly significant are the gains in students' motivation and subject comprehension, without a noticeable impact on the course difficulty and required effort. It is also especially relevant that students are strongly in favour of keeping the robots due to their impact on both knowledge and motivation.

Keywords: Undergraduate/First year · Engineering curriculum · Learning environment/laboratory · Qualitative - Case study · Motivation

1 Introduction

For many students, starting their studies at the Higher Education (HE) level can be a challenging task. This is a problem that has been detected worldwide [4,10,23]. Engineering degrees, as well as other degrees in the STEM (Science, Technology, Engineering and Mathematics) area, are particularly prone to these problems. On the one hand, the required preparation in experimental sciences,


such as Physics, Chemistry, and Mathematics, demands a high level of abstraction for which not all students are prepared. As a result, many players in the education field have felt the need to develop strategies to tackle the problems potentially faced by STEM students. Many of these proposals are focused on increasing students' motivation [13,14], as demotivation and abandonment are known to be significantly related [8].

The use of robots appears to be one of the most promising strategies for improving STEM education, as advances in technology have reduced costs and facilitated their use in the classroom. One advantage is the fact that students can more easily learn abstract concepts and gain a more functional level of understanding [18]. Another argument for teaching with robots is that they stimulate creative solutions and problem-solving skills [2]. Robots are built of hardware components and software programs, each one depending on different fields of knowledge such as electronics, mechanics and programming. This interdisciplinary nature of robots inevitably gives rise to the learning of different subjects and, consequently, to an understanding of how all parts of a complex system interact and depend upon each other [2].

Case studies found in the literature positively document the use of robotics to teach different subjects [6,19,20] to a wide range of age groups [3,22]. In what concerns HE, most studies illustrate the potential effectiveness of robotics to positively impact both learning and student motivation [1,15,21]. At the same time, engaging students in active learning facilitates deeper understanding of content knowledge and promotes interest in STEM subjects [5,11,16]. Further, robotics promotes the application of the subject matter in a rigorous manner through experimentation, allowing students to put their learning into action. From the viewpoint of skills development, the use of robots requires problem solving, appears to foster cooperative learning and teamwork, and encourages students to creatively solve problems [9,12,17].

The main objective of this paper is to provide a quantitative study that examines the use of mobile robots for teaching introductory electronics to first-year engineering students. Despite the positive motivational benefits of introducing robots in engineering education suggested by other studies, rigorous quantitative research concerning an Electronics curriculum is missing from the literature. The results of this study rely on a research design based on student achievement data. In line with this, the remainder of the paper is organized as follows. Section 2 presents the robot used in the classes and Sect. 3 presents the course unit where the robot was used. Section 4 addresses the rationale and methodology for obtaining the results, which are presented in Sect. 5; Sect. 6 concludes.

2 The DETI Robot

The DETI robot (Fig. 1) is a small, didactic robot developed at the Department of Electronics, Telecommunications and Informatics (DETI) of the University of Aveiro. Two opposed wheels provide differential driving and a ball caster guarantees the robot's stability. The wheels are driven by DC motors coupled to a gear box.
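For readers unfamiliar with differential driving, the small sketch below shows the standard kinematic relation used by this kind of platform; it is textbook material rather than code from the DETI robot, and the wheel radius and wheel separation values are made-up examples.

import math

def differential_drive_velocity(omega_left, omega_right, wheel_radius, wheel_separation):
    """Return (linear, angular) body velocity from the two wheel angular speeds in rad/s."""
    v_left = omega_left * wheel_radius
    v_right = omega_right * wheel_radius
    linear = (v_right + v_left) / 2.0                 # forward speed of the robot body
    angular = (v_right - v_left) / wheel_separation   # positive means turning left
    return linear, angular

# Illustrative numbers only (not the DETI robot's real parameters).
v, w = differential_drive_velocity(omega_left=8.0, omega_right=10.0,
                                   wheel_radius=0.03, wheel_separation=0.12)
print(f"linear = {v:.3f} m/s, angular = {math.degrees(w):.1f} deg/s")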


Fig. 1. DETI robot

The robot is controlled by a board based on the PIC32 microcontroller from Microchip. A bootloader allows the board to be programmed by connecting it to a PC via a USB cable. A C/C++ library with the basic functions to operate the robot is made available to the students, with the objective of hiding the microcontroller's idiosyncrasies and providing a simple interface to the hardware resources. This is important because, at this stage, students' competences in computer architectures are still rather limited.

3 The Course at the University of Aveiro

Like many other HE institutions, the University of Aveiro found the need to develop means to motivate new students in order to improve students' satisfaction and retention and to reduce attrition. In the case of the Electronics and Telecommunications Engineering degree, a fundamental piece of this strategy was the development of a new course unit, aimed at motivating first-year students and introducing them to the field of Electronics and Telecommunications Engineering. This course unit started in 2007, following the introduction of the Bologna model. It started as a one-semester, 6 ECTS course unit, 47260 - “Técnicas Laboratoriais em Electrotecnia” (Laboratory Techniques in Electrotechnology), which, in the academic year 2013–14, was converted into a two-semester, 8 ECTS course, 40331 - “Laboratórios de Eletrónica” (Electronics Laboratories). We present here the course outline; the interested reader can find more details in [7].

3.1 Course Structure

The course structure is presented in Fig. 2. In each semester, students have a block of laboratory assignments, followed by a small project, which lasts for


Fig. 2. Course organization

approximately five to six weeks and aims at building a simple electronic circuit with a clear and evident application. In the first semester, this project consists of the design of a small, stand-alone device: e.g., a LED that lights up in the dark, or a jig with reference current and voltage sources to test the lab multimeters. Students' work starts with the project outline and then proceeds to the design of the circuit schematic and the printed circuit board, which is assembled and tested. The outcome is a fully functional circuit that the students designed and built and that they can keep. The second semester follows the same pattern, but now the circuits are not stand-alone devices; they are to be applied to the DETI robot. This can be a sensor, or some other means of interacting with the robot. The students' tasks include developing the software to program the robot.

3.2 Robot Projects

The challenge met by the lecturers when preparing the course can be described as trying to fulfil the following set of requirements:

– to provide a meaningful development activity, involving some level of engineering design, albeit simple enough to be tackled by first year students;
– to involve a robot, in order to provide the motivation that comes from working with an interactive device;
– to keep the focus on Electronics.

The solution adopted was to propose a project, based on the DETI robot, where students are challenged to develop an electronic extension module, comprising sensors or some other electronic means of interacting with the robot. When planning the course activities, care was taken to guarantee that the students' work is focused on Electronics, as 40331 is a course on Electronics. The robot is a means to an end: it is seen as a tool for teaching electronics. In this way, students are expected to follow a set of activities during the project:

– design the electronics for the sensor circuit, involving simple design activities;
– prototype the circuit on a breadboard and validate its operation;


– design and assemble the PCB;
– install the sensor; and
– program the robot.

4 Assessing the Utilization of Robots in the Classroom

As a result of the structure of 40331, with a small project at the end of each semester, students are exposed to design activities in Electronics in two different contexts:

– in the first semester, they develop their project as a stand-alone electronic device;
– in the second semester, they develop a similar set of activities, but now involving the application of their design to the DETI robot.

This means that students taking 40331 experience, in a relatively short time span, two different ways of developing their Electronics project: with and without the robot. As such, these students are particularly well positioned to assess the impact of using a robot to promote the study of Electronics, as they have experienced both situations.

Three sources were used to collect data to assess the impact of the robot use in the classroom. The first, subjective, is the lecturers' perception of the students' reaction. This is a course with a three-hour-long weekly session, most of the time in lab sessions, which allows for some degree of acquaintance with the students. The second source is the University's quality assurance data, including success rates and the students' survey. This questionnaire addresses several aspects of every course unit, from students' motivation and self-assessment to lecturers' performance, infrastructure conditions and the coordination of the several activities. Every item is assessed on a Likert scale from 1 (worst) to 9 (best). From the whole survey, the questions in Table 1 were selected for analysis; questions addressing issues not directly related to the subject of this work were not considered.

Finally, the third source was a questionnaire aimed specifically at measuring the impact of the robot usage. To understand how students perceived the impact of the robot usage on their motivation and learning, a survey was set up (Table 2). Questions in group 1 were asked both relative to the course unit and relative to the robot. This allowed us to estimate the difference in the students' perception of the stand-alone projects and of the robot-related projects. The questions in group 2 aim at understanding how students felt that the robot contributed to their knowledge and, in group 3, at the value of the robots as a learning tool. Finally, group 4 investigates whether the students value the robot's contributions concerning knowledge gains and motivation. For each question, students were invited to qualify each of the statements on a Likert scale from “Totally disagree” (−2) to “Totally agree” (+2). The questionnaire was sent to the 432 students that took 40331 since 2013–14; from those, 66 responses were received.

The questionnaire is still open at the moment of writing this text.


Table 1. Quality assurance survey questions.

Overall assessment
Q1  Motivation to the course unit
Q2  Satisfaction with own performance
Q12 Overall functioning of course unit

Learning activities
Q7  Coordination of course unit components
Q9  Suitability of proposed activities
Q14 Development of subject comprehension
Q15 Matching between course activities and previous skills

Difficulty and effort required
Q16 Course contents difficulty
Q17 Level of effort and time required for passing grade

Table 2. Students questionnaire.

Group 1 – 40331/robot contributions. The 40331 course unit/robot project contributed to...
Q 1.1 ... get to know more about Electronics
Q 1.2 ... get to know more about other technical subjects
Q 1.3 ... get more non-technical skills (soft skills)
Q 1.4 ... become more motivated toward Electronics
Q 1.5 ... conclude further course units more easily

Group 2 – Application of circuits to the robot. Applying the circuits to the robot...
Q 2.1 ... allows for a better understanding of their operation
Q 2.2 ... allows for a better understanding of their application
Q 2.3 ... allows for a greater motivation in their development

Group 3 – Suitability of robot usage
Q 3.1 The robot projects were an effective way of teaching Electronics
Q 3.2 The learning outcomes obtained with the robot could not be easily obtained otherwise

Group 4 – Continuation of robot usage. If I were responsible for the course, I would keep the usage of robots in classes, due to the gains in...
Q 4.1 ... knowledge
Q 4.2 ... motivation

5 Results

5.1 Lecturers Perception

As far as the instructors' perception goes, the students get quite motivated by the robot-based projects. When the sensor is working and connected to the robot, and interaction becomes possible, it can sometimes be difficult to bring the class to an end, due to the students' involvement in their projects. The possibility of designing the sensor, not only the circuit but also its physical realization, tends to spark creativity in the students. For instance, in one case, students designed the printed circuit board so that, by soldering the connector directly to the board, the sensor would be placed in the correct position when assembled on the robot, instead of using the standard interface cable provided to all the groups.

5.2 Course Unit Quality Survey

Figure 3 presents the success rate in 40331, with the results extended to the three years before, referring to the previous, one-semester course unit (47260). It can be seen that the introduction of the new structure in the academic year 2013–14 resulted in a drop of the success rate, recovering, in the last two years, to values close to 80%. When 40331 was introduced, there was no previous experience in lecturing annual course units, so this was also a year of learning for the faculty.

Fig. 3. Success rates

The results concerning the overall assessment, including students' self-assessment, are presented in Fig. 4. The effects of introducing the new course in 2013–14 are clearly visible, the most notable effect being on the students' self-satisfaction (Q2). In the following years, all these indicator values recover, with Q1


Fig. 4. Q.A. survey responses

(Motivation to the course unit) and Q12 (Overall functioning of the course unit) attaining higher values than those achieved with 47260. A similar pattern is perceived in the responses concerning the learning activities (questions 7, 9, 14 and 15): after a drop in 2013–14, all indicators recover. The improvement from 47260 to 40331 is especially visible in Q9 (Suitability of proposed activities), Q14 (Development of subject comprehension) and Q15 (Matching between course activities and previous skills), denoting that the students indeed appreciated the change and regard it as a positive factor. Finally, in the last group, and coherently with the previous results, the perceived difficulty and required effort to successfully complete the course peak in the academic year 2013–14 and decrease in the following years.

5.3 Students Survey

Concerning the questionnaire specifically designed to assess the impact of the robot use, Fig. 5 presents the mean value of the responses to Group 1 questions, comparing the responses concerning the course unit as a whole (“LabE”) with those concerning the robot projects (“Robot”). In the set of questions Q 1.1 to Q 1.4, the students' responses rate the robot project contribution with a mean value between 1 (“Partially agree”) and 2 (“Totally agree”). Comparing the course contribution with the robot projects contribution, students rate the course impact more highly in questions Q 1.1 (contribute to know more about Electronics) and Q 1.5 (contribute to complete


Fig. 5. Responses to group 1.

successfully further course units). On the other hand, they rate the contribution of the robot projects to learning more about other subjects, both technical (Q 1.2) and non-technical (Q 1.3), and to motivating the students to learn Electronics, higher than the overall course unit contribution. Concerning the gains perceived by the students from applying the circuits to a robot, the results are presented in Fig. 6. Here, the marks represent the average response value and the bars are one standard deviation long (positive and negative). For group 2, the values are clearly positive, between 1.31 and 1.66; the result most positively perceived by the students is the one related to motivation. Concerning groups 3 and 4, students consider the robot projects an effective way of teaching Electronics, with an average value of 1.32. The question of whether the same learning outcomes could not be easily obtained by other means is the one that gets the lowest average score (0.70). When asked whether they would keep the robots in the classes if they were in charge, the response is positive, with average values of 1.29 considering the gains in knowledge and 1.74 considering the motivational aspects.

Fig. 6. Responses to groups 2 to 4.


Again, and coherently with the responses to Group 2, students value the robot projects primarily as a motivational tool, while also recognizing their value as a learning tool. Considering the responses to Q 1.4, Q 2.3 and Q 4.2, we can conclude that students, most of all, value the robot projects as a motivational tool. The gains in knowledge brought by the robot projects are also recognized, as can be seen in the responses to Q 1.2, Q 1.3, Q 2.1, Q 2.2 and Q 4.1.
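For readers who want to reproduce this kind of summary, the short sketch below shows one way the per-question averages and standard deviations reported in Figs. 5 and 6 could be computed from raw Likert answers; the numbers in it are made-up placeholders, not the actual survey data.

import numpy as np

# question id -> list of answers on the -2 ("Totally disagree") .. +2 ("Totally agree") scale
responses = {
    "Q 2.3": [2, 2, 1, 2, 1, 2],   # placeholder values for illustration only
    "Q 3.2": [1, 0, 1, -1, 2, 1],
    "Q 4.2": [2, 2, 2, 1, 2, 2],
}

for question, answers in responses.items():
    a = np.asarray(answers, dtype=float)
    print(f"{question}: mean = {a.mean():+.2f}, std = {a.std(ddof=1):.2f}")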

6 Conclusions

Small mobile robots have unique and appealing characteristics that make them valuable teaching tools in many fields, among them introductory-level electronics. Engineering degrees are known for being particularly challenging for first year students; on the other hand, motivation and engagement are key factors to increase students' academic success. Therefore, taking the opportunity opened by a course restructuring carried out in 2013–2014, a robot-based project was introduced in the Electronics Laboratories course of the Masters degree in Electronics and Telecommunications Engineering at the University of Aveiro.

A thorough evaluation of the impact of the robot's introduction on the students' performance was carried out. The evaluation targeted several axes, such as motivation, acquisition of technical competences, required effort, and impact on and articulation with other courses. The survey results are, overall, very positive. In particular, the impact on students' motivation and on subject comprehension, without implying more effort or increasing the course difficulty, is particularly relevant. Therefore, it can be concluded that robotics can be an effective tool to improve the academic performance and satisfaction of first year electronics students.

The experience also allowed the authors to gain insight into the relevance of operational aspects. In particular, providing customized tools, tailored to hide the idiosyncrasies of the robots' hardware and so expose simple and concise interfaces, contributed decisively to keeping the students' focus on the core course components and limiting the complexity to acceptable levels. Moreover, the broad range of scientific domains involved in the course is also challenging for the faculty staff, who usually tend to have relatively narrow areas of expertise, typically closely related to their research areas.

Acknowledgment. The authors would like to acknowledge Prof. José Luís Azevedo, the main developer of the DETI robot, for his work in the development of tools to promote robotics at student level.

References

1. Aroca, R.V., Watanabe, F.Y., Avila, M.T.D., Hernandes, A.C.: Mobile robotics integration in introductory undergraduate engineering courses, pp. 139–144. IEEE (2016). https://doi.org/10.1109/LARS-SBR.2016.30. http://ieeexplore.ieee.org/document/7783516/


2. Beer, R.D., Chiel, H.J., Drushel, R.F.: Using autonomous robotics to teach science and engineering. Commun. ACM 42(6), 85–92 (1999). https://doi.org/10.1145/303849.303866. http://portal.acm.org/citation.cfm?doid=303849.303866
3. Benitti, F.B.V.: Exploring the educational potential of robotics in schools: a systematic review. Comput. Educ. 58(3), 978–988 (2012). https://doi.org/10.1016/j.compedu.2011.10.006. http://linkinghub.elsevier.com/retrieve/pii/S0360131511002508
4. Berzonsky, M.D., Kuk, L.S.: Identity status, identity processing style, and the transition to university. J. Adolesc. Res. 15(1), 81–98 (2000). https://doi.org/10.1177/0743558400151005
5. Eguchi, A.: RoboCupJunior for promoting STEM education, 21st century skills, and technological advancement through robotics competition. Robot. Autonom. Syst. 75, 692–699 (2016). https://doi.org/10.1016/j.robot.2015.05.013. http://linkinghub.elsevier.com/retrieve/pii/S0921889015001281
6. Fagin, B., Merkle, L.: Measuring the effectiveness of robots in teaching computer science. In: SIGCSE 2003 Proceedings of the 34th SIGCSE Technical Symposium on Computer Science Education, p. 307. ACM Press, Reno (2003). https://doi.org/10.1145/611892.611994. http://portal.acm.org/citation.cfm?doid=611892.611994
7. Fonseca, P., Pedreiras, P., Cabral, P., Cunha, B., Silva, F., Matos, J.N.: Motivating first year students for an engineering degree. In: CISPEE 2016 – 2nd International Conference of the Portuguese Society for Education in Engineering, Vila Real, Portugal (2016). https://doi.org/10.1109/CISPEE.2016.7777745. http://ieeexplore.ieee.org/document/7777745/
8. French, B.F., Immekus, J.C., Oakes, W.C.: An examination of indicators of engineering students' success and persistence. J. Eng. Educ. 94(4), 419–425 (2005). https://doi.org/10.1002/j.2168-9830.2005.tb00869.x
9. Greenwald, L., Kopena, J.: Mobile robot labs. IEEE Robot. Automat. Mag. 10(2), 25–32 (2003). https://doi.org/10.1109/MRA.2003.1213613
10. Kantanis, T.: The role of social transition in students' adjustment to the first-year of university. J. Inst. Res. 9(1), 100–110 (2000)
11. Kim, C., Kim, D., Yuan, J., Hill, R.B., Doshi, P., Thai, C.N.: Robotics to promote elementary education pre-service teachers' STEM engagement, learning, and teaching. Comput. Educ. 91, 14–31 (2015). https://doi.org/10.1016/j.compedu.2015.08.005. http://linkinghub.elsevier.com/retrieve/pii/S0360131515300257
12. Lalonde, J., Bartley, C., Nourbakhsh, I.: Mobile robot programming in education, pp. 345–350. IEEE (2006). https://doi.org/10.1109/ROBOT.2006.1641735. http://ieeexplore.ieee.org/document/1641735/
13. Linnenbrink, E.A., Pintrich, P.R.: Motivation as an enabler for academic success. Sch. Psychol. Rev. 31(3), 313 (2002)
14. Lynch, D.J.: Motivational factors, learning strategies and resource management as predictors of course grades. Coll. Student J. 40(2), 423–428 (2006)
15. McGill, M.M.: Learning to program with personal robots: influences on student motivation. ACM Trans. Comput. Educ. 12(1), 1–32 (2012). https://doi.org/10.1145/2133797.2133801. http://dl.acm.org/citation.cfm?doid=2133797.2133801
16. McLurkin, J., Rykowski, J., John, M., Kaseman, Q., Lynch, A.J.: Using multi-robot systems for engineering education: teaching and outreach with large numbers of an advanced, low-cost robot. IEEE Trans. Educ. 56(1), 24–33 (2013). https://doi.org/10.1109/TE.2012.2222646. http://ieeexplore.ieee.org/document/6363493/
17. Mirats Tur, J., Pfeiffer, C.: Mobile robot design in education. IEEE Robot. Autom. Mag. 13(1), 69–75 (2006). https://doi.org/10.1109/MRA.2006.1598055. http://ieeexplore.ieee.org/document/1598055/


18. Nourbakhsh, I.R., Crowley, K., Bhave, A., Hamner, E., Hsiu, T., Perez-Bergquist, A., Richards, S., Wilkinson, K.: The robotic autonomy mobile robotics course: robot design, curriculum design and educational assessment. Autonom. Robots 18(1), 103–127 (2005). https://doi.org/10.1023/B:AURO.0000047303.20624.02. http://link.springer.com/10.1023/B:AURO.0000047303.20624.02
19. Oliver, J., Toledo, R.: On the use of robots in a PBL in the first year of computer science/computer engineering studies. In: Global Engineering Education Conference (EDUCON), pp. 1–6. IEEE (2012). https://doi.org/10.1109/EDUCON.2012.6201026. http://ieeexplore.ieee.org/document/6201026/
20. Ortiz, O.O., Pastor Franco, J.A., Alcover Garau, P.M., Herrero Martin, R.: Innovative mobile robot method: improving the learning of programming languages in engineering degrees. IEEE Trans. Educ. 60(2), 143–148 (2017). https://doi.org/10.1109/TE.2016.2608779. http://ieeexplore.ieee.org/document/7582486/
21. Sakata Jr., K., Olguin, G.S.: Robotics: a case study of contextualization in engineering education. In: WEE2011 - 1st World Engineering Education Flash Week, Lisbon, Portugal (2011)
22. Spolaôr, N., Benitti, F.B.: Robotics applications grounded in learning theories on tertiary education: a systematic review. Comput. Educ. 112, 97–107 (2017). https://doi.org/10.1016/j.compedu.2017.05.001. http://linkinghub.elsevier.com/retrieve/pii/S0360131517300970
23. Tao, S., Dong, Q., Pratt, M.W., Hunsberger, B., Pancer, S.M.: Social support: relations to coping and adjustment during the transition to University in the People's Republic of China. J. Adolesc. Res. 15(1), 123–144 (2000). https://doi.org/10.1177/0743558400151007

Methodology and Results on Teaching Maths Using Mobile Robots

Paola Ferrarelli1(B), Tamara Lapucci2, and Luca Iocchi1

1 DIAG, Sapienza University of Rome, Rome, Italy
[email protected]
2 Department of Advanced Research, Clementoni Spa, Recanati, Italy

Abstract. In 58 Italian Public Comprehensive Institutes (Istituti Comprensivi), which include Primary and Elementary schools, 2911 students experimented with the use of a mobile robot, Sapientino Doc by Clementoni, to learn curricular subjects such as Mathematics, Geometry and Geography (MGG). The project “A scuola di coding con Sapientino” was carried out during the 2016/2017 regular school year for about 3 months (April–June 2017). The schools were distributed throughout Italy, and the project involved 2911 students from 5 to 8 years old, 155 classes, and 163 teachers. The aim of the research is to demonstrate a learning gain in Mathematics, Geometry and Geography after the students use a mobile robot during regular lessons held by their own teachers in their classrooms. In this paper, we present the methodology used to develop the project and the results of the data analysis.

1 Introduction

A survey on the use of robots in the educational field was made by Benitti [2], who reviewed the international literature on Educational Robotics (ER) published over 10 years, listing robots, students' ages and obtained results, and by Mubin [8], who gives an overview of robot kits, robot roles and robot usage domains. About robot kits, a recent study was conducted by Garcia [10], listing the characteristics of robot kits and apps currently available on the market for teachers of 4–14 year old students, and their costs, which for Mondada [7] are one of the obstacles to the massive use of robots in schools. From the literature analysis, it emerges that many ER activities are extra-curricular, during summer camps or workshops. Few studies show quantitative data using statistical methods, or a control group to compare the results of the test group against, or a random assignment of the users to each group. Few empirical data are available on the use of low cost robots, since in 90% of the cases Lego robots are used. Few studies analyze the use of robots to teach subjects different from computer science or mechatronics. Moreover, the use of robots in a class is not the only favorable condition for learning; also important are: the presence of the teacher, spaces suitable for the activities with the robot,


the availability of one kit for each team of 2–3 students, short theory lessons and tutorials to link theory and practice, realistic but affordable tasks linked with curricular subjects, teachers at ease with the robot, etc. So ER researchers and teachers need to choose the most suitable robot kit for their students, and carefully design where and how to use it and with which role. The analysis of the interviews with 85 students by Shin [11] highlighted that young students prefer a robot that acts as a peer during the learning process. In the ER research field, studies with students 5–12 years old have been conducted using different robot kits. For example, Bee-Bot was used by many authors, including: Highfield [5], with 33 children aged 3–5 years in a Primary school in a 12-week project with 2 h per lesson; Eck [4], with 4–5 year old children, 1 h a week for 6 weeks; Di Lieto [3], on a sample of 12 children aged 5–6 years, 75 min twice a week for 6 weeks. Lego WeDo was used by Kazakoff, Sullivan and Bers [6] during a 10 h intensive course in the robotic week with 29 children. Lego Mindstorms was used by Barker and Ansorge [1], in a post-school program with 32 students aged 9–11 years, 1 h twice a week for 6 weeks, and by Zygouris [13] to teach geometry concepts to 12 year olds. Thymio was used by Friebroon and Ben-Ari [12] with thirty 7–8 year old students of Elementary school, while Nulli and Di Stasio [9] used Cubetto with 5–6 year old kids of Primary School during a school year. Our research was conducted using the mobile robot Doc, during the project “A scuola di coding con Sapientino”, with almost 3,000 5–8 year old students, during the regular school lessons.

Fig. 1. Educational activity organization: (left) filling the questionnaire, (right) learning with robot Doc.

The project “A scuola di coding con Sapientino” was promoted by three partners: Clementoni, an Italian company that is a leader in educational toys (the inventor of Sapientino Doc), the DIAG Dept. (Department of Computer, Control, and Management Engineering) at Sapienza University of Rome, and MIUR (the Italian Ministry of Education, University and Research). The project was born with the main shared purpose of bringing into the classrooms an easy-to-use and fun tool

Web site of the project with additional information, material and results: https://sites.google.com/a/dis.uniroma1.it/doc-scuola/attivita.


(the robot Sapientino Doc by Clementoni) to introduce the basic concepts of programming and robotics. The educational activities with the robot (see Fig. 1, right) were designed to teach the main concepts of Maths, Geometry and Geography (MGG), measuring the learning gain through questionnaires (see Fig. 1, left).

2 Methodology

The methodology that we used for this project is illustrated below: 1. identify a robot that fits Primary and Elementary schools needs 2. identify the curricula subjects in which we want to analyze the learning gain after using the robot 3. write the educational activities within a scenario 4. write the initial and final evaluation questionnaires to be distributed to students for the assessment of the learning gain 5. Phase A: test the educational sessions and the teaching materials in few sample schools and refine the material before the national distribution 6. Phase B: run the educational sessions on a large distribution of Italian Public Comprehensive Schools and collect the evaluation questionnaires data through an on-line web system 7. analyze data and publish the results Due to time and project constraints, we could not arrange a controlled experiment to do a comparison with another method of teaching MGG. This further experimental modality is planned as future work. Table 1. Parameters of our experiment. Number of students

2911

Where

intra-curricular activity inside the classroom, during the regular lesson time

Robot role

tool to engage the students to learn

Teacher role

transfer base knowledge

Subjects

Maths, Geometry, Geography (MGG)

Background knowledge required none for students; provided teaching material for teachers Robot cost

low cost product, affordable for public schools

Table 1 summarizes the characteristic of the project in terms of number of participants, teacher role, robot role and cost, domain and location of the learning activities, that are fundamental parameters to describe the experiment, as underlined by [8]. 2

http://www.clementoni.com/en/61323-doc-educational-smart-robot/.

Methodology and Results on Teaching Maths Using Mobile Robots

2.1

397

Mobile Robot Doc

Our research was conducted using the mobile robot Doc, even if the methodology can be adopted using mobile robots with the same characteristics of Doc. Doc is an easy programmable mobile robot. It is tall 12 cm and it looks like an astronaut with a visor, see Fig. 2 (left). When Doc is switched on, 2 circles illuminate the visor for simulating the eyes, so one can easily identify the front of the robot. It is equipped with 2 driving wheels about 8 cm apart, with no odometers and sensors, it has a step of 13 cm and it moves at a speed of about 11.5 cm/s. Doc can move linearly or turn itself, on the spot, by 90◦ . It does not have movable arms or manipulators. The keyboard for its programming is placed on its head and is composed by 7 touch buttons through which it is possible to program Doc to move a step forward or backward, to turn 90◦ to the right or left, to reproduce a fun sound effect, to cancel the moves entered so far from robot’s memory, and to make Doc undertake the command sequence just entered (see Fig. 2 left bottom). There are three playing modes: Free, Game and Edu, that can be selected by moving the selector behind robot head (see Fig. 2 left top). In particular, using the Free mode, the robot is free to move around, either on its puzzle mat or on any other clean and smooth surface. The strong quality of the product is that the user can enter a sequence of commands and execute it with one touch. The robot can interact with the user through a set of predefined sentences at the beginning of the activity, with an introductory message to invite the user to start or at the end, to give feedback, or in the meanwhile, in case of lack of activity, to catch the user’s attention, before to eventually freeze. Its energy is supplied by 3 batteries 1.5 V AA alkaline, which guarantees continuous use of the robot for 5–7 h. The Sapientino Doc product box contains also rectangular double face puzzle mat of dimensions 91 cm × 65 cm, that recreates a grid with boxes that adapt to Doc’s paces (13 cm). The analysis of the product has highlighted several educational strengths among which the teaching of mathematics, geography (orientation, points of view, directions) and geometry (open and closed lines, geometric shapes) concepts. Scenarios and educational activities were therefore designed according to these basic concepts, using the robot in Free mode. We also designed an evaluation questionnaire for each scenario, to analyze the student improvements after using the robot, and a feedback

Fig. 2. Left: Robot Doc with its keyboard and modes selector. Right: Educational activity: commutative properties of the sum on the line of numbers.
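The programming model just described (queue a sequence of one-step moves and 90° turns, then execute them in one go on the 13 cm grid of the mat) can be illustrated with a minimal simulation sketch; this is purely illustrative code written for this text, not Clementoni software, and the class and command names are invented for the example.

FORWARD, BACKWARD, TURN_LEFT, TURN_RIGHT = "F", "B", "L", "R"

class DocSim:
    STEP_CM = 13                                    # one step equals one mat square
    HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]   # N, E, S, W on the grid

    def __init__(self):
        self.x, self.y, self.heading = 0, 0, 0      # start at the origin, facing "north"
        self.program = []

    def press(self, key):
        # Each button press appends one command to the robot's memory.
        self.program.append(key)

    def go(self):
        # The "go" button executes the whole queued sequence, then clears it.
        for key in self.program:
            if key in (FORWARD, BACKWARD):
                dx, dy = self.HEADINGS[self.heading]
                sign = 1 if key == FORWARD else -1
                self.x += sign * dx
                self.y += sign * dy
            elif key == TURN_RIGHT:
                self.heading = (self.heading + 1) % 4
            elif key == TURN_LEFT:
                self.heading = (self.heading - 1) % 4
        self.program.clear()
        return self.x, self.y

robot = DocSim()
for key in [FORWARD, FORWARD, TURN_RIGHT, FORWARD]:
    robot.press(key)
print(robot.go())   # -> (1, 2): two squares "up", then one square to the right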

398

P. Ferrarelli et al.

questionnaire for the teacher to evaluate the student experience in terms of interest, motivation, active participation, fun, cooperation, frustration, rule respect, anxiety, self-organization. 2.2

Experimental Design

The hypothesis we want to test with our research is that using a mobile robot enables a learning gain in MGG with additional benefits of being an amusement experience and of facing and learning a new technology. We measure the learning gain by evaluating the score of the initial and the final evaluation questionnaires, that consist in eight questions about MGG. The experimental method adopted a within-subject design (repeated measures). In fact, each student executed the questionnaires under two different conditions: before using the robot (initial questionnaire), after using the robot (final questionnaire). The null hypothesis is that there is no difference in the questionnaire scores. 2.3
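The paper does not state which statistical test is applied to this comparison; as an illustrative sketch only, the following code shows how such a within-subject null hypothesis could be checked with a paired t-test or a Wilcoxon signed-rank test on each student's initial and final scores (the score values below are invented placeholders, not project data).

import numpy as np
from scipy import stats

# Scores out of 8 questions, one entry per student: before and after the robot activity.
initial = np.array([4, 5, 3, 6, 5, 4, 7, 5], dtype=float)
final = np.array([6, 6, 5, 7, 6, 6, 8, 6], dtype=float)

t_stat, p_t = stats.ttest_rel(final, initial)     # parametric paired test
w_stat, p_w = stats.wilcoxon(final, initial)      # non-parametric alternative

print(f"paired t-test: t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"Wilcoxon test: W = {w_stat:.1f}, p = {p_w:.4f}")
# A small p-value would lead to rejecting the null hypothesis of equal scores.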

Test of the Teaching Material (Phase A)

Five classes were selected for phase A of the project. Each class received 4 boxes of Sapientino Doc, i.e. 4 robots, batteries, a grid board with white squares and a first version of the teaching material, which included the description of the didactic activities (scenarios) and the evaluation questionnaires for the students. During phase A, teachers were trained by Sapienza and Clementoni researchers, through a meeting at school, without the students, at a time immediately before the activities, and they chose two scenarios to be executed in the classroom. After, the teachers, supported by the researchers, administered the initial evaluation questionnaire, managed the educational activities and administered the final evaluation questionnaire. During these educational sessions the researchers could test each scenario, train the teachers, receive their comments and suggestions on scenarios, questionnaires and designed acquisition tools. Researchers also measured the time that took to do an educational session, that consists in the execution of two scenarios. It takes 30 min. to fill the initial evaluation questionnaires, about 2 h to execute the activity with the robot and 30 min. to fill the final evaluation questionnaire. At the end of phase A, we improved the teaching material and created the project website for teachers’ on-line training, with video demonstration of educational activities, a community, a contacts section and a frequent asked questions page. We also added the Edu and Game scenarios for Primary schools, we created one more scenario and questionnaire for the first Elementary classes and added a feedback questionnaire for the teachers. Finally, the proposed scenarios and the questionnaires associated with them are shown in Table 2. In order to improve the questionnaire aesthetics and the compilation experience, their final version was done by Clementoni’s graphic staff and colored printed. All this material (in Italian) is available in the already reference project web site.


Table 2. Scenarios and questionnaires for Primary and Elementary. An X indicates the association between a scenario and a questionnaire.

Scenario      | Mathematics 1 | Mathematics 2 | Geography | Geometry | Primary
Mathematics 1 | X             |               |           |          |
Mathematics 2 |               | X             |           |          |
Geography     |               |               | X         |          |
Geometry      |               |               |           | X        |
Storytelling  |               |               |           |          |
Edu           |               |               |           |          | X
Game          |               |               |           |          | X

2.4 Run of the Educational Sessions at National Level (Phase B)

As already mentioned, a total of 58 Italian Public Comprehensive Institutes (Istituti Comprensivi), including 155 Primary and Elementary classes, adhered to the project on a voluntary basis. The schools were distributed throughout Italy, involving 60% of Italian regions (see Fig. 3), 2,911 students from 5 to 8 years old, 163 teachers and about 600 robots.

Fig. 3. Geographic distribution of the schools participating in the project.

As in Phase A, each class received 4 boxes of Sapientino Doc, i.e. 4 robots, batteries, a grid board with white squares and the final version of the teaching material, which included the description of the scenarios and the evaluation questionnaires for the students. During Phase B, teachers were trained on-line. They chose


two scenarios and executed their activities autonomously (i.e., without the physical presence of Clementoni or Sapienza researchers). Finally, they collected the evaluation questionnaire data and filled in their on-line feedback questionnaire through the project website (Table 3).

Table 3. Number of classes and questionnaires involved in the project, divided by school grade.

                  | Total | Primary 5 y.o. | I Elem. 6 y.o. | II Elem. 7 y.o. | III Elem. 8 y.o.
Number of classes | 155   | 38             | 58             | 52              | 7
Questionnaires    | 4926  | 677            | 1938           | 2056            | 255

2.5 Questionnaires: Evaluation and Feedback

We created three sets of questionnaires: two paper evaluation questionnaires for the students and one on-line feedback questionnaire for the teachers. An initial evaluation questionnaire is filled in by the students before the execution of the activities with the robot, and a final one is filled in after the activities. Both are supposed to be filled in by the students in the classroom, just before and after the activities with the robot. Teachers could read the questions to the Primary and first Elementary students, if needed. The questions in the final questionnaire are slightly different from the ones in the initial questionnaire, but the difficulty level is the same. In fact, during Phase A, we observed that with a final questionnaire equal to the initial one, the students took less time to fill in the final questionnaire because they automatically wrote the same answers as in the initial one, without any cognitive effort on the question. The questionnaires regard Mathematics, Geometry or Geography subjects and aim at evaluating the students' capacity to answer eight questions for each scenario. For Primary school students, who can neither read nor write, we created ad-hoc questionnaires with three questions, where students have to color their answers. The protagonist of the initial questionnaire is a frog, Zap, while in the final one there is a robot, Doc, in order to create a cognitive link with the performed activities. We collate the initial and the final questionnaires together because we want to be sure that they refer to the same student, in order to relate individual performance before and after the teaching activity. The student can use pencil and rubber and must write the appropriate answer in a red box near the question. For example, in the Mathematics 1 questionnaire, the first two questions ask how many steps Zap/Doc will take to reach a flag, with the step equal to one unit in Question 1 and two units in Question 2; Question 3 asks how many steps Zap/Doc will take to reach the pool goggles, Question 4 to reach the swimming pool, and Question 5 asks for the total number of steps taken. To complete Questions 6 and 7, the student must read a short story and fill the gaps with the appropriate given


numbers. Finally, in Question 8, s/he must write which is the longest route taken by Zap/Doc to reach its home. The on-line feedback questionnaire for the teachers has multiple goals: (1) collect the student answers to the evaluation questionnaires; (2) collect the feeling of the teachers about the student experience using the robot, in terms of interest, motivation, active participation, fun, cooperation, frustration, rule respect, anxiety, self-organization; (3) collect the teachers' feedback about coding tools.

2.6 Educational Activity Organization: Teachers and Teams

The teacher was free to decide which scenarios to execute during the educational session, depending on his/her didactic objective. We asked teachers to run two scenarios on the same day, in order to minimize the bias effect of the evaluation questionnaires and the loss of data due to the absence of students. Moreover, we suggested that the teachers spend some time, in the days before the educational session and without the students, becoming familiar with the robot keyboard, modes and actions. As already mentioned, at the beginning the students filled in the initial evaluation questionnaire associated with the chosen scenario (see Fig. 1 left); then they were divided into teams of 4–5 students in order to execute the activities of the scenarios using the robot (see Fig. 1 right). Each team could use one robot and a billboard (the white one or the colored one). At the completion of the activities, the students filled in the final evaluation questionnaire. The teacher could fill in the feedback questionnaire on a different day.

2.7 Scenarios

The design of the educational scenarios took place alongside the analysis of robot Doc and the analysis of similar activities in similar domains (Thymio, https://www.thymio.org/en:thymio; BeeBot, https://www.bee-bot.us/; MARRtino Sapienza mobile robot, http://tinyurl.com/marrtino). At the beginning, we wrote the following 4 different scenarios and the related evaluation questionnaires: (1) Mathematics, (2) Geography, (3) Geometry, (4) Storytelling. After Phase A, we introduced a simplified Mathematics scenario for first Elementary classes, splitting Mathematics into two scenarios: Mathematics 1 and Mathematics 2. Moreover, we introduced the Edu and Game scenarios to address the needs of Primary students. The Mathematics 1 scenario includes a number of didactic activities related to baseline calculation on the line of numbers. The activities take place with the Doc robot set on a grid board with numbers. Students have to program linear open routes. The didactic objective is the verification, on the line of numbers, of simple math operations such as sum, subtraction and multiplication. The methodology includes students programming the robot to move on the line of numbers and checking the results of the maths operations. Figure 2 (right) shows an example


of an activity to teach the commutative property of the sum: (i) put the robot on the blank mat at the initial position (number 0); (ii) program robot A to move forward for 2 steps, turn to the right to watch the reached number, turn to the left and move forward for 3 steps, then turn to the right to watch the final number position (number 5): students must record robot A's final number position; (iii) program robot B to first move forward 3 steps and then 2 steps: students must record the final number position of robot B and compare it with the final number position of robot A. The Mathematics 2 scenario differs from the previous one in the methodology. In fact, in this scenario the robot is used as a tool to verify the results of operations previously calculated mentally by the students. Example of activity: students record in their notebook the result of the 2+3 and 3+2 sums, then they program the robot to verify the correctness of their results. The Geometry scenario includes a number of tasks related to geometry concepts that take place with the Doc robot on a grid board with white boxes and other graphical elements. The didactic objective is the verification of the knowledge of simple flat geometric figures (square, rectangle and circumference) and the introduction to the calculation of the perimeter. The methodology includes students programming the robot to move along a route with a geometric shape, for example programming the robot to move along a square route. The Geography scenario includes a series of activities related to geography concepts that take place with the Doc robot set on a grid board with white boxes and other graphical elements. The didactic objective is the verification of basic elements of geography such as different points of view and reference systems. The methodology includes students placing the robot on the grid board in an initial position and orientation and then programming it to reach a final position and orientation, while being at different relative orientations with respect to the robot. For example, the robot is placed in the left corner of the grid, watching the students' team, and they must program it to reach the opposite corner. The Storytelling scenario explores the ability to set and move the robot in scenes or stories represented on the puzzle mat. It includes a series of activities related to a story chosen by the teacher. It could be a story that the class has already developed in recent months (for example, going to the theater). It can be implemented with the white grid board, enriched with the characters of the story (protagonist, antagonist, aides, obstacles, etc.), or with the boards provided by the product. The Edu and Game scenarios are the ones described in the instruction manual of the product.
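To make the command model behind these activities concrete, the following minimal sketch (not part of the project's materials) simulates a Doc-like robot executing a stored sequence of forward/backward steps on the line of numbers; the function and command names are hypothetical.

```python
# Minimal sketch of Doc's one-touch command model on the line of numbers.
def execute(sequence, start=0):
    """sequence: list of 'F' (step forward) / 'B' (step backward) commands."""
    cell = start
    for command in sequence:
        cell += 1 if command == 'F' else -1
    return cell

robot_a = execute(['F'] * 2 + ['F'] * 3)   # 2 + 3
robot_b = execute(['F'] * 3 + ['F'] * 2)   # 3 + 2
assert robot_a == robot_b == 5             # same final number: the sum commutes
```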

3 Data Analysis

Table 4 summarizes the number of classes that executed the questionnaires, divided by school grade. In this paper we focus on the results of the Mathematics 1 questionnaire (Maths). The results of the Primary, Geometry, Geography and Mathematics 2 questionnaires are published on the project website mentioned above. By comparing the initial and the final evaluation questionnaires using statistical methods, we want to measure whether there were improvements in the score


Table 4. Number of classes that executed the questionnaires, divided by school grade.

School grade   | Mathematics 1 | Mathematics 2 | Geography | Geometry | Primary
Primary        | 0             | 0             | 0         | 0        | 52
I Elementary   | 44            | 2             | 28        | 32       | 0
II Elementary  | 3             | 37            | 36        | 34       | 0
III Elementary | 1             | 5             | 3         | 5        | 0
Total          | 48            | 44            | 67        | 71       | 52

obtained by the students on each question. Maths questionnaires were executed by 48 classes involving a total of 886 students (445 female and 441 male). Global statistics. The first analysis presented here considers the score on each question by all the students. The results are presented in Table 5, showing the percentage of correct answers given to each question before and after the educational activity with the robot, the difference, and the corresponding p-value of the two distributions. As shown in the table, most of the results are extremely significant, two results are very significant (Questions 4 and 6), one is significant (Question 7) and one is not significant (Question 8).

Table 5. Percentage of correct answers in 886 Maths questionnaires before and after the educational activity with the robot, and p-value of the distribution.

Question | Before act. | After act. | Difference | p-value  | Significance
1        | 60.9%       | 68.5%      | 7.6%       | 8.56e-04 | ***
2        | 53.6%       | 66.3%      | 12.7%      | 5.08e-08 | ****
3        | 57.1%       | 65.5%      | 8.4%       | 3.03e-04 | ***
4        | 54.0%       | 60.3%      | 6.3%       | 7.17e-03 | **
5        | 35.0%       | 43.6%      | 8.6%       | 2.14e-04 | ***
6        | 73.9%       | 80.4%      | 6.5%       | 1.25e-03 | **
7        | 71.4%       | 76.5%      | 5.1%       | 1.48e-02 | *
8        | 89.7%       | 88.1%      | −1.6%      | 2.89e-01 | -
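The paper does not state which statistical test produced the p-values in Table 5. One reasonable choice for paired before/after binary answers in a within-subject design is McNemar's test, sketched below in an exact binomial formulation; the function name and the tiny example arrays are illustrative only, not project data.

```python
import numpy as np
from scipy.stats import binomtest

def mcnemar_exact(before: np.ndarray, after: np.ndarray) -> float:
    """Exact McNemar p-value for paired 0/1 answers (1 = correct answer)."""
    b = int(np.sum((before == 1) & (after == 0)))   # correct -> wrong
    c = int(np.sum((before == 0) & (after == 1)))   # wrong -> correct
    if b + c == 0:
        return 1.0
    # Under H0 the discordant pairs split evenly between the two directions.
    return binomtest(min(b, c), n=b + c, p=0.5, alternative="two-sided").pvalue

# Illustrative call on dummy answers for one question (not real project data):
before = np.array([1, 0, 1, 1, 0, 0, 1, 0])
after = np.array([1, 1, 1, 1, 0, 1, 1, 0])
print(f"p = {mcnemar_exact(before, after):.3f}")
```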

From this analysis, it is possible to conclude that the educational activities with the robot have a very significant impact on the increase of the score of the Maths questionnaire. The same results are shown in graphical format in Fig. 4. In the rest of the paper, we present the other analyses only in graphical format for better readability. Note that, given the low number of II and III Elementary classes using this questionnaire, Fig. 4 (left) basically represents I Elementary results. By looking more specifically at the score of each student, we found some general positive trends after using the robot: 47% of students got a better score in the


final evaluation questionnaire and 26% of them got the same score; the percentage of students who achieved the maximum score (i.e., all correct answers) increased from 19% to 25%; only a few students did not answer the questions, but for Questions 4 and 5 the number of blank answers decreased, while it remained equal for Questions 2, 6 and 7; all the questions, except Question 5, got more than 50% of correct answers before the activity, confirming that the questionnaire is well calibrated to the students' level, which was possible mostly thanks to the teachers' feedback collected during Phase A; the general downward trend of the score over the questionnaire time is probably due to the students' effort in answering eight Maths questions. In particular, the last question, Question 8, albeit with low significance, kept a negative difference for all the school levels. Breaks should be foreseen during questionnaire time.

Fig. 4. Percentage of students who gave correct answers to the questions. Left: a comparison of the results before and after the activity with the robot, for all the students. Right: the comparison for III Elementary students only.

We would like to show in Fig. 4 (right) the data related to III Elementary students only. While these data are not statistically relevant (only 1 class), we can observe a higher percentage of correct answers (89% on average) with respect to the I Elementary students (65% on average), due to the fact that the Mathematics questionnaire was written principally for I Elementary students; the high number of questions with a negative difference (−6% means 1 student) is probably due to a loss of motivation in doing, for the second time, a questionnaire that is trivial for them. Here the lesson learned is that the improvement of the score is also affected by the way in which the activities to be performed with the robot are designed, not only by the robot itself. Teacher's feedback. Analyzing the feedback questionnaires, we can distinguish three types of teachers involved in the project: the neophytes, or beginners, not expert in coding; the apprentices, teachers who attended coding courses but had no classroom experience; and the experts, teachers who had medium or strong experience in coding and robotics in the classroom. Overall, they all commented that Doc is used by children with joy and fun, without fear of making mistakes. Most of them underlined how all the students were active and interested, including the ones usually assessed as not motivated.


4 Conclusions

In this paper we presented our methodology for teaching Mathematics using mobile robots and the results of the data analysis, collected from 2,911 students of 58 Italian public schools, Primary (5 years old) and Elementary (from 6 to 8 years old). Using an eight-question evaluation questionnaire on Maths, we found improvements in the scores obtained by the students on most of the questions, and this learning gain is very significantly correlated with the use of the robot. Overall, we found a general positive trend in the increase of better and maximum scores and in the decrease of blank answers. The lessons learned are to reduce the number of questions and to calibrate the questions to the level of the class. Regarding the teachers, all their feedback reported a good level of satisfaction with the experience. They discovered a new, cheap and practical tool for educational robotics and noticed high involvement and enthusiasm in their students. As future work we are planning a controlled experiment to compare our methodology with the traditional one and to schedule a didactic competition, associated with the project activities, between classes and schools. Acknowledgements. The project "A scuola di coding con Sapientino" was promoted by Clementoni, the DIAG Department at Sapienza University of Rome and MIUR (Italian Ministry of Education, University and Research). We would like to warmly thank all the teachers that, on a voluntary basis, supported and realized the project with us, and all the children that joyfully participated in the teaching activities.


Autonomous Driving and Driver Assistance Systems (I)

Application of Sideslip Estimation Architecture to a Formula Student Prototype

André Antunes2, Carlos Cardeira1,2(B), and Paulo Oliveira1,2

1 IDMEC, Universidade de Lisboa, 1049-001 Lisboa, Portugal
2 Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal
{andre.antunes,carlos.cardeira,paulo.j.oliveira}@tecnico.ulisboa.pt

Abstract. This paper describes an estimator architecture for a Formula Student Prototype, based on data from an inertial measurement unit (IMU), a global positioning system (GPS), and from the underlying dynamic model of the car. A non-linear dynamic model of the car and realistic models for the sensors are presented. The estimates of attitude, rate-gyro bias, position, velocity and sideslip are based on Kalman filtering techniques. The resulting system is validated on a Formula Student prototype and assessed given ground truth data obtained by a set of differential GPS receivers installed onboard.

Keywords: Kalman Filter · Sideslip · Sensors · Estimation

1 Introduction

Formula Student is a university competition that challenges students from around the world to build a single-seat racing car. Over the years the competition has been evolving with the implementation of new materials and technologies, always seeking to follow the world evolution of automotive technology. Recently it has approached driverless cars, starting a new parallel competition for these vehicles which, associated with the independent electric all-wheel drive already used by the teams, opens the door to countless control approaches. Control strategies like vehicle stability control and torque-vectoring [1] depend widely on a sideslip observer to assure that the vehicle stays on a stable route. This observer is especially needed when there are significant differences between the model and the true vehicle, something that usually happens when working with road vehicles and tires. This paper undertakes the implementation of an architecture proposed in [2] for a Formula Student prototype to estimate the sideslip of the vehicle. During a test day, data was acquired from an inertial measurement unit (IMU), consisting of an accelerometer, a magnetometer and a gyroscope, all of them with 3 axes, a global positioning system (GPS) and a steering encoder. These sensors were already part of the vehicle. In order to verify the results obtained


Fig. 1. IST - FST06e

from the estimators, a secondary system using a differential global positioning system (DGPS) was attached to the car. This system gives the heading and the velocity components, values required to calculate the sideslip angle of the car. Both systems, the DGPS and the car sensors, are completely independent. The acquired values are then processed offline to avoid implementation issues. In this document, the models and estimators are briefly explained in Sects. 2 and 3 respectively, with only the more important equations presented; their deduction can be found in [2,3]. A closer look at the test platform (Fig. 1), its technical characteristics and the sensors is given in Sect. 4; Sect. 5 illustrates the results obtained with each estimator and a comparison is performed with the values from the DGPS; finally, Sect. 6 outlines some remarks and future work.

2 Car Model

This section presents the general non-linear equations for the vehicle model from which the linear car model is obtained. These equations and their deduction are explored in detail in [2], and as such, in this work only the resulting equations are presented. The following model is called a planar model since it considers that no pitch (θ) or roll (φ) rotations exist, these being the major assumptions, which also implies that no load transfer occurs. Besides this limitation, aerodynamic forces acting on the car are neglected. This restricts all the forces to the tires, as can be seen in Fig. 2(a), where each one produces a longitudinal and a lateral force, respectively Fx and Fy, with the indexation Front Left (FL), Front Right (FR), Rear Left (RL) and Rear Right (RR); δ defines the wheel steer angle, v is the velocity vector, β is the car sideslip angle, a is the distance between the centre of gravity (CG) and the front axle, b is the distance between the CG and the rear axle, and tr is the length of the axle, where front and rear are considered equal. The resulting equations of this model are described by (1), where r is the


Fig. 2. (a) Forces applied on the Car; (b) Tyre angles and frames. Images from [2]

angular velocity around the z-axis, m is the mass of the car plus the driver and Iz is the moment of inertia around the z-axis.

$$\dot{v}_x = v_y r - \frac{1}{m}\left[F_{yF}\sin\delta - F_{xR}\right] \qquad (1a)$$
$$\dot{v}_y = -v_x r + \frac{1}{m}\left[F_{yF}\cos\delta + F_{yR}\right] \qquad (1b)$$
$$\dot{r}_z = \frac{1}{I_z}\,a\,F_{yF}\cos\delta - \frac{1}{I_z}\,b\,F_{yR} \qquad (1c)$$

The longitudinal force Fx on the car is assumed to be a direct input, without considering the tires' limitations. The lateral forces Fy on the tires are given by a cornering stiffness approximation expressed by (2), where Cα is the cornering stiffness constant and αi is the slip angle of tyre i, defined as αi = βi − δi, as seen in Fig. 2(b).

$$F_y = -C_\alpha \alpha_i \qquad (2)$$

The sideslip angle of the wheel (βi) is the projection of the sideslip angle β of the car onto the wheel through ${}^{B}v_i = {}^{B}v + {}^{B}r \times {}^{B}r_i$, where ${}^{B}r_i$ is the vector of distance between the CG and wheel i. This cornering stiffness approximation contains several assumptions that are explained in detail in [2].

2.1 Linear Car Model

The car model used in the estimator is an approximation of Eqs. (1) and (2) for small angles and the assumption of a constant longitudinal velocity (vx = const), which has been widely used in the literature [4–6]. With the small angle approximation it is possible to define the wheel slip angle as:

$$\alpha_i = \tan^{-1}\!\left(\frac{v_y + x_i r}{v_x - y_i r}\right) - \delta_i \approx \frac{v_y + x_i r}{v_x} - \delta_i \qquad (3)$$
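As an illustration of (3), the small sketch below computes the approximated slip angle of one wheel from the body velocities, the yaw rate and the wheel steering angle; the CG-to-axle distances used in the example call are assumptions, since they are not listed in this paper.

```python
import numpy as np

def slip_angle(vy: float, r: float, vx: float, x_i: float, delta_i: float) -> float:
    """Small-angle wheel slip angle from Eq. (3): alpha_i ~ (vy + x_i*r)/vx - delta_i."""
    return (vy + x_i * r) / vx - delta_i

# Example with assumed geometry: front wheel at x_i = +a, rear wheel at x_i = -b.
a, b = 0.873, 0.717   # assumed CG-to-axle distances (m)
print(slip_angle(vy=0.2, r=0.5, vx=10.0, x_i=a, delta_i=np.deg2rad(2.0)))
print(slip_angle(vy=0.2, r=0.5, vx=10.0, x_i=-b, delta_i=0.0))
```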


Using different cornering stiffnesses for the front and rear wheels, respectively Cαf and Cαr, it is then possible to rewrite (1)–(3) in state space form:

$$\begin{bmatrix}\dot{v}_y\\ \dot{r}\end{bmatrix} =
\begin{bmatrix}
-\dfrac{C_{\alpha f}+C_{\alpha r}}{v_x m} & \dfrac{-aC_{\alpha f}+bC_{\alpha r}}{v_x m}-v_x\\[2mm]
\dfrac{-aC_{\alpha f}+bC_{\alpha r}}{I_z v_x} & -\dfrac{a^2 C_{\alpha f}+b^2 C_{\alpha r}}{I_z v_x}
\end{bmatrix}
\begin{bmatrix}v_y\\ r\end{bmatrix} +
\begin{bmatrix}\dfrac{C_{\alpha f}}{m}\\[2mm] \dfrac{aC_{\alpha f}}{I_z}\end{bmatrix}\delta \qquad (4)$$
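A minimal sketch of how (4) can be assembled numerically with the stiffnesses and inertia from Table 1 follows; since a and b are not listed there, they are derived from the 1.59 m wheelbase and the 45.1/54.9% weight split, which is an assumption of this illustration.

```python
import numpy as np

M, IZ = 356.0, 120.0                    # mass and yaw inertia (Table 1)
C_AF, C_AR = 1.527e4, 1.995e4           # cornering stiffnesses (Table 1)
L = 1.59                                # wheelbase (Sect. 4)
A_DIST, B_DIST = 0.549 * L, 0.451 * L   # a, b inferred from the weight distribution

def bicycle(vx: float):
    """State matrix A(vx) and steering input vector B of the linear model (4)."""
    A = np.array([
        [-(C_AF + C_AR) / (vx * M), (-A_DIST * C_AF + B_DIST * C_AR) / (vx * M) - vx],
        [(-A_DIST * C_AF + B_DIST * C_AR) / (IZ * vx),
         -(A_DIST**2 * C_AF + B_DIST**2 * C_AR) / (IZ * vx)],
    ])
    B = np.array([C_AF / M, A_DIST * C_AF / IZ])
    return A, B

for vx in (3.0, 10.0, 20.0):            # evaluated above the 3 m/s floor used later
    A, _ = bicycle(vx)
    print(vx, np.linalg.eigvals(A))     # inspect how the dynamics change with speed
```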

3 Estimator Architecture

In this section, the proposed architecture for the estimation of the sideslip is presented. As depicted in Fig. 3, the estimation process is composed of three sequential filters: an Attitude Complementary Filter (ACF), a Position Complementary Filter (PCF), and a Car Estimation Model (CEM).

Fig. 3. Flowchart of Filter Scheme where mr , ωr , ar , pr and δr are respectively the reading from the magnetometer, gyroscope, accelerometer, GPS and steering encoder. Grey boxes represent required data processing before use in filters.

The ACF is used to correct the yaw reading from the magnetometer and to estimate the bias of the gyroscope. The yaw is then used in the rotation matrices inside the PCF, which uses the accelerometer and the GPS to estimate both velocity components in the body reference frame. The CEM is used to include the car dynamics in the sideslip estimation, using the velocity components, the corrected yaw rate and the steering angle. The three filters are implemented using discrete Kalman Filters [7,8], where the CEM is discretized from a continuous state space model. Both complementary filters are explained in more detail in [2,3].

3.1 Attitude Complementary Filter

The ACF combines the yaw readings with the angular velocity around the z-axis to give a more accurate yaw value and a bias estimate for the angular velocity, used to correct the gyroscope signal. This assumes that the yaw rate reading $\dot{\psi} = \omega_{r,k}$ is affected by random white noise $w_{\omega_r,k}$ as well as a constant bias $b_{\omega_k}$:

$$\omega_{r,k} = \omega_k + b_{\omega_k} + w_{\omega_r,k}$$


With this, it is then possible to write the discrete Kalman Filter in state space form as:

$$\begin{bmatrix}\psi_{k+1}\\ b_{\omega_{k+1}}\end{bmatrix} =
\begin{bmatrix}1 & -T\\ 0 & 1\end{bmatrix}
\begin{bmatrix}\psi_{k}\\ b_{\omega_{k}}\end{bmatrix} +
\begin{bmatrix}T\\ 0\end{bmatrix}\omega_{r,k} +
\begin{bmatrix}K_1\\ K_2\end{bmatrix}\left(y_k - \hat{y}_k\right)$$

$$\hat{y}_k = \hat{\psi}_k, \qquad y_k = \psi_{r,k} + v_k$$

where the index k defines the instant in time t = kT, with T the sampling time interval. The gyroscope reading is $\omega_{r,k}$, and the yaw reading from the magnetometer is $\psi_{r,k}$, which is corrupted with random white noise $v_k$. The values K1 and K2 are the Kalman gains associated with each state.
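A minimal sketch of one ACF iteration in the predictor form above is given below; the gains K1, K2 and the sampling time are placeholders, since the paper identifies them offline.

```python
def acf_step(psi_hat, b_hat, omega_r, psi_r, T=0.01, K1=0.05, K2=-0.001):
    """One discrete ACF step: propagate yaw with the bias-corrected gyro reading,
    then correct yaw and bias with the magnetometer innovation (y_k - psi_hat)."""
    innov = psi_r - psi_hat              # y_k - y_hat_k, with y_hat_k = psi_hat_k
    psi_next = psi_hat - T * b_hat + T * omega_r + K1 * innov
    b_next = b_hat + K2 * innov
    return psi_next, b_next
```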

3.2 Position Complementary Filter

The PCF combines the readings of the accelerometer with the ones from the GPS to give an estimate of the velocity components in the vehicle body frame, which in this case are not directly measured. Since the GPS uses a global reference frame, unlike the accelerometer and the needed velocity components, the yaw estimate from the ACF is used to convert between reference frames through a rotation matrix defined as Rk. The equations that rule the PCF are the motion equations:

$$\bar{p}_{k+1} = \bar{p}_k + T\,\bar{v}_k + \frac{T^2}{2}R_k\,\bar{a}_k, \qquad
\bar{v}_{k+1} = \bar{v}_k + T\,R_k\,\bar{a}_k$$

where $\bar{p}$, $\bar{v}$, $\bar{a}$ are respectively the vectors of position, velocity and acceleration. It is then possible to write the PCF system as:

$$\begin{bmatrix}\hat{\bar{p}}_{k+1}\\ {}^{B}\hat{\bar{v}}_{k+1}\end{bmatrix} =
\begin{bmatrix}I & T R_k\\ 0 & I\end{bmatrix}
\begin{bmatrix}\hat{\bar{p}}_{k}\\ {}^{B}\hat{\bar{v}}_{k}\end{bmatrix} +
\begin{bmatrix}\frac{T^2}{2}R_k\\ T I\end{bmatrix}\bar{a}_k +
\begin{bmatrix}K_{1p}\\ R_k^{\top}K_{2p}\end{bmatrix}\left(y_{p_k} - \hat{y}_{p_k}\right)$$

$$\hat{y}_{p_k} = \hat{\bar{p}}_k, \qquad y_{p_k} = \bar{p}_k + v_{p_k}$$

Car Estimator Model

The Car Estimator Model is based on system (4), which depends on two state

variables vy r . This system is time variant due to vx , which can generate complications if the longitudinal velocity is zero, or close to zero due to numeric problems. The CEM uses the velocity components estimations of the PCF, the

414

A. Antunes et al.

gyroscope reading corrected with the bias from the ACF, and the steering angle of the wheel. The system for the filter is given by: ⎡ ⎣

vˆ˙ y rˆ˙





C

+C

− αfvx m αr ⎦ = ⎢ ⎣ −aC +bC αf

Iz vx

⎤   ⎡ Cαf ⎤ − vx ⎥ vˆy K1vy K1r m + ⎣ aC ⎦ δ + (yl − yˆl ) ⎦ a2 Cαf −b2 Cαr αf rˆ K2vy K2r − I

−aCαf +bCαr vx m

αr

Iz vx

  vˆy yˆl = , rˆ

z

    vy v yl = + vy r vr

where vvy and vr are the noises associated to each measurement and the Ki gains are the Kalman gains that relate the error of measured and estimated data to each state variable.

4

Test Platform

In order to test the proposed estimation architecture, a real test was conducted. At the time, these algorithms weren’t already implemented in a hardware capable of real-time processing, so all the data was logged, and processed offline after the test. The test platform was FST06e (Fig. 1), an electric Formula Student Prototype. This vehicle is propelled by two independent 50 kW motors at the rear, one motor per wheel, with a single fixed gear with no clutch. With a weight of 280 Kg and a distance between axis of 1.59 m, is capable of achieve 0–100 km/h in 2.9 s, and a top speed of 150 km/h. The remaining parameters of the car needed for the models and estimators are presented in Table 1. Table 1. FST06e Parameters Description

Var Variable

Units

Front and rear track



1.24

m

Mass of the Car + Driver

m

356

kg

Yaw Inertia

Iz

120

kg.m2

Weight distribution



45.1–54.9

%−%

Static load at front wheels Static load at rear wheels

F R

Fz 787.5

N

Fz 958.7

N

Cornering Stiffness front

Cαf 1.527 × 104 N/rad

Cornering Stiffness rear

Cαr 1.995 × 104 N/rad

The car relies on a distributed electronic circuit for all the monitoring, controls and acquisition. This circuit spreads all along the car, and consists in several modules interconnected by a CAN-BUS line working at 1M bit/s. For this test 3 modules were essential, the GPS, the Steering and IMU, and the log unit. Each of these modules has one dedicated micro-controller, a dsPIC30f4013 from

Sideslip Estimation

415

Texas Instruments, working at 30 MHz in a self-developed board. The GPS module incorporates a SkyTraq S1216F8 chip configured to an update rate of 25 Hz . The Steering and IMU module consists of a GY-80 IMU that includes a 3-axis accelerometer (ADXL345), a 3-axis gyroscope (L3G4200D) and a 3-axis magnetometer (HMC5883L). The steering encoder is a 3-turn rotational potentiometer used as a voltage divider with a 12bit ADC, that is attached to the steering column. The log unit simply reads one message at the time in the CAN-BUS line and writes it to a file in an SD card. In order verify the estimated data, a second independent system was used. This one, wasn’t connected to the car’s previous system, and had a separated log system. This system consisted of two GPS antennas placed on the front and rear of the car separated by 2.5 m as seen in Fig. 4. The antennas were connected to an Ashtech MB100 board that can deliver the heading angle and velocity components needed to calculate the sideslip angle of the car. This data was logged at 10 Hz. The acquisition was made during a track day at the university campus using a parking lot limited to an asphalt area of approximated 60 m × 25 m. The trajectory of the car consisted in several circles in both ways, and turns after a long straight.

Car GPS antenna

Car IMU and Steering encoder

DGPS Acquisition system DGPSRear antenna

DGPS Front antenna

Fig. 4. FST06e Acquisition system with DGPS. In orange are the sensors belonging to the car, and in blue the elements of the DGPS.

5 Results

This section presents the results from the different filters and estimators during the test run using the FST06e with the DGPS (Fig. 4). The processing applied to each signal before the filters is also explained. For this test, the car was stripped of all aerodynamic elements, to better correspond to the models and to give the antennas a clean view of the sky.

5.1 Attitude Complementary Filter

To feed the Attitude Complementary Filter, the data from the gyroscope and the magnetometer are required. Before using this data, a calibration to align the axes of the IMU with the vehicle axes is done using the accelerometer with data from a static acquisition. This calibration is used in the three sensor units (accelerometer, gyroscope and magnetometer) and kept for all the filters. The yaw angle is computed from the magnetometer after a 3-dimensional calibration is done to correct hard and soft non-linearities [9]. The yaw angle is also corrected for the magnetic declination, to match the position referential. The ACF is fed with this data, resulting in a corrected yaw angle of the vehicle, as can be seen in Fig. 5. The angle must be in a continuous or cumulative form, since discontinuities in the transition from 0° ↔ 360° or −180° ↔ 180° generate problems in the ACF. In Fig. 5, it is possible to see the estimated heading angle of the car side by side with the raw value from the magnetometer. Besides the yaw angle, the bias of the gyroscope's z-axis is also estimated. In Fig. 6, an overlapping of several responses to different initial conditions is presented, showing convergence after some seconds. From the analysis of the raw data of the sensor, an offset of −0.62°/s was expected for a stationary measurement, something that is verified in the graph.

Fig. 5. ACF heading angle result and raw heading measurement from the magnetometer.

Analysing Fig. 6, it is possible to see that around 160 s the bias starts to oscillate. This is when the vehicle started to move. One of the assumptions made at the start is that no roll or pitch happens, but in the real vehicle this isn't true. Although small, these rotations exist and influence the IMU that is attached to the car. Besides that, the track isn't perfectly flat, containing a slight tilt to one side. These unpredicted rotations can justify the oscillations when the car is moving.

5.2 Position Complementary Filter

The Position Complementary Filter relies on three major sources: the GPS, the accelerometer and the corrected yaw angle, or heading, of the car from the ACF.


Fig. 6. Bias result from the ACF, and an overlapping of several responses for different initial conditions

A GPS receiver doesn’t give the required X and Y position coordinates, so a transformation is done from ECEF (Earth-Centred, Earth-Fixed) coordinates to ENU (East North Up) coordinates, using as origin a point along the track. The East is defined as the X coordinate, and North the Y coordinate, the Up or Z is not used. Also, a correction is performed to the location of the GPS antenna, this was considered necessary since the distance between the antenna and the centre of gravity was substantial (≈ 1 m). The correction was performed using ¯ gps is the vector with the distance from the GPS antenna to the (5) where d centre of gravity, and ψ is the yaw angle from the ACF.       xcg xgps cos ψ − sin ψ ¯ dgps (5) = + ycg ygps sin ψ cos ψ The GPS only has a 25 Hz acquisition frequency compared with the 100 Hz of the remaining sensors, this was overcome by using an interpolation for the missing points to match the general frequency, and was only possible because processing was done offline. For an online application, a multi-rate solution must be implemented, or the global frequency reduced to match the smallest frequency available. The accelerometer kept the axis calibration explained in the ACF section, and was corrected from offsets in the readings. It was also implemented a correction from the influences of angular velocities due to the distance between the IMU ¯ imu is the vector of accelerations and the centre of gravity using Eq. (6) where A ¯ cg is the vector of accelerations in the centre of gravity, ω ¯ is of the reading, A ¯ imu the distance vector from the IMU to the vector of angular velocities, and d the centre of gravity. ¯ imu ) ¯ cg + ω ¯ imu = A ¯ × (¯ ω×d A

(6)
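A small sketch of the two geometric corrections (5) and (6) is given below; the lever-arm vectors d_gps and d_imu must be measured on the car and the values shown are placeholders.

```python
import numpy as np

def gps_to_cg(p_gps_xy, psi, d_gps):
    """Eq. (5): move the GPS antenna position to the centre of gravity."""
    c, s = np.cos(psi), np.sin(psi)
    R = np.array([[c, -s], [s, c]])
    return p_gps_xy + R @ d_gps

def accel_at_cg(a_imu, omega, d_imu):
    """Eq. (6) solved for the CG: A_cg = A_imu - omega x (omega x d_imu)."""
    return a_imu - np.cross(omega, np.cross(omega, d_imu))

# Placeholder lever arms (metres), to be replaced by measured values:
d_gps = np.array([-1.0, 0.0])
d_imu = np.array([0.3, 0.0, 0.0])
```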

The results of the PCF, using the values of position, acceleration and yaw angle, can be seen in Fig. 7. This graph presents an overlapping of the velocity components acquired from the DGPS and the estimates from the PCF. It must be noticed that the PCF doesn't use in any way the data from the DGPS,


Fig. 7. Velocity components estimations resulting from the PCF compared with the ones from the DGPS.

the two signals being completely independent from each other. The noise is clearly higher in the estimation; this is due to the sensor signals, especially the accelerometer. Since the accelerometer is corrected with the unfiltered gyroscope data using (6), the combination of both signals results in a very noisy signal. An example is the time window between [225 s; 250 s], where the vehicle was performing several turns at constant radius and speed, and the resulting lateral acceleration from (6) has a mean value of 10.2 m/s² and a standard deviation σ = 1.8 m/s². Overall, even with the noise, the estimation of both velocity components is considerably close to the results from the DGPS, with small errors.

5.3 Car Estimation Model

For this estimator, the velocity components from the PCF, the steering angle of the front wheels, and the corrected gyroscope reading are necessary. The steering encoder mentioned before doesn't read the δ value of the wheel, but the steering wheel angle; besides that, to achieve a greater resolution, the encoder relies on a gear ratio that converts the steering wheel range so as to make better use of the 3-turn encoder. Due to Ackermann geometry [10] both wheels rarely have the same angle, and the ratio between the steering wheel angle and the wheel angle isn't linear, but for the sake of simplicity this angle is assumed linear and equal for both wheels. The linear car estimator uses Eq. (4), where the transition matrix relies on the longitudinal velocity vx. For values equal or close to 0, some problems arise since some of the elements divide by zero. To overcome this situation, it is assumed that the longitudinal velocity used in the transition matrix of (4) has a lower saturation of vx = 3 m/s. A similar problem arises when computing β = tan⁻¹(vy/vx) when the velocities are very close to zero: any small noise can generate huge angles. To get around this


Fig. 8. Sideslip angle (β) estimation from the CEM compared with the one given by the DGPS

situation, a rule was implemented such that when vx < 3 m/s then vy = 0 m/s, which means that for small velocities (under 3 m/s) no sideslip occurs. Note that 3 m/s was just a value that seemed small enough not to induce a large error, and it worked well during the tests. The results obtained using the car model estimator for the sideslip angle are presented in Fig. 8, together with the sideslip angle measured by the DGPS. The latter didn't use any signal treatment and is acquired at only 10 Hz.
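The low-speed guards described above can be condensed into a few lines; the 3 m/s threshold is the one reported in the text, while the function name is an assumption of this sketch.

```python
import numpy as np

VX_MIN = 3.0   # m/s, lower saturation reported in the text

def sideslip(vx: float, vy: float) -> float:
    """Sideslip angle beta with the two low-speed rules applied."""
    vx_eff = max(vx, VX_MIN)     # saturate vx used in the transition matrix
    if vx < VX_MIN:
        vy = 0.0                 # below 3 m/s, assume no sideslip
    return float(np.arctan2(vy, vx_eff))
```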

Fig. 9. Graphic representation of the results obtained from the estimators with the car represented by a triangle with the heading orientation, and a blue vector representing the velocity vector.


The resulting signal is a bit noisy due to the input signals, in this case the velocity components from the PCF, as seen previously. Once again, the estimator doesn't use any value from the DGPS. Analyzing both signals, it is possible to see that the estimate is a little more conservative in terms of amplitude of the angle. With some simple changes in the gains it is possible to increase this amplitude, but at the cost of an increase in the noise. The presented result is a compromise between accuracy and noise. It is also important to remember that one of the assumptions in this filter is the small angle approximation, which, from the DGPS values, was clearly exceeded. Even with some differences, the estimator keeps up with all the variations registered by the DGPS. In Fig. 9 it is possible to see a graphical representation of the car during a part of the track ([221 s; 227 s]), where the car is simplified by a red triangle aligned with the heading angle. A blue arrow represents the velocity vector of the car at the centre of gravity; β is the angle between the heading and the velocity vector. It is a well-known fact, among all the drivers of this car, that it suffers a lot from understeer, something visible in Fig. 9, where it is possible to see that the velocity vector always points to the inside of the curve in relation to the heading, showing that behaviour. This situation was also clearly visible during the data acquisition.

6 Conclusion

The estimation architecture explored was validated on a Formula Student prototype. The Attitude Complementary Filter was implemented with interesting heading results and an estimate that corresponds to the expected bias. The results of the Position Complementary Filter were close to the ground truth, given the low-cost sensors used. The Car Estimation Model was able to produce a sideslip estimate close to the values acquired from the DGPS and to what was observed during the field tests. The final results were similar to the values from the DGPS. Thus, the estimation architecture proposed is capable of producing the intended results. This architecture paves the way for the next stages of the project, namely control system design and the development of a driverless competition prototype, by providing an observer for the sideslip. Acknowledgments. This work was supported by FCT, through IDMEC, under LAETA, project UID/EMS/50022/2013. The authors thank the prompt and fruitful cooperation of the IST Formula Student team, FST Lisboa, and ISR - Dynamical Systems and Ocean Robotics Lab, namely the assistance of Bruno Cardeira.

References
1. De Novellis, L., Sorniotti, A., Grubber, P., Pennycott, A.: Comparison of feedback control techniques for torque-vectoring control of fully electric vehicles. IEEE Trans. Veh. Technol. 63(8), 3612–3623 (2014)
2. Antunes, A., Cardeira, C., Oliveira, P.: Sideslip estimation of a formula student prototype through GPS/INS fusion. In: 2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 26–28 April 2017, Coimbra, Portugal, pp. 184–191 (2017)


3. Vasconcelos, J.F., Cardeira, B., Silvestre, C., Oliveira, P., Batista, P.: Discrete-time complementary filters for attitude and position estimation: design, analysis and experimental validation. IEEE Trans. Control Syst. Technol. 19(1), 181–198 (2011)
4. Jazar, R.N.: Vehicle planar dynamics. In: Vehicle Dynamics: Theory and Application, 1st edn. Springer, New York (2008). Chap. 10
5. Ryu, J., Rossetter, E.J., Gerdes, J.C.: Vehicle sideslip and roll parameter estimation using GPS. In: 6th International Symposium on Advanced Vehicle Control, Hiroshima (2002)
6. Nam, K., Oh, S., Fujimoto, H., Hori, Y.: Estimation of sideslip and roll angles of electric vehicles using lateral tire force sensors through RLS and Kalman filter approaches. IEEE Trans. Ind. Electron. 60(3), 988–1000 (2013)
7. Franklin, G.F., Powell, J.D., Workman, M.: Multivariable and optimal control. In: Digital Control of Dynamic Systems, 3rd edn., pp. 389–391. Ellis-Kagle Press (2006). Chap. 9
8. Brown, R., Hwang, P.: Discrete Kalman filter basics. In: Introduction to Random Signals and Applied Kalman Filtering: with Matlab Exercises, 4th edn., pp. 249–262. Wiley, New York (2012). Chap. 7
9. Vasconcelos, J.F., Elkaim, G., Silvestre, C., Oliveira, P., Cardeira, B.: Geometric approach to strapdown magnetometer calibration in sensor frame. IEEE Trans. Aerospace Electron. Syst. 47(2), 1293–1306 (2011)
10. Abe, M.: Vehicle Handling Dynamics: Theory and Application, 1st edn, pp. 5–117. Butterworth-Heinemann publications, Oxford (2009). Chap. 2–3

Torque Vectoring for a Formula Student Prototype

João Antunes2, Carlos Cardeira1,2(B), and Paulo Oliveira1,2

1 IDMEC, Lisbon, Portugal
2 Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
{joao.pedro.antunes,carlos.cardeira,paulo.j.oliveira}@tecnico.ulisboa.pt

Abstract. Torque Vectoring (TV) has the objective of substituting the need for a mechanical differential, while improving the handling and response of the wheeled vehicle. This work addresses the design of a torque vectoring system in a rear-wheel-driven Formula Student prototype. The proposed solution resorts to a PID controller for yaw rate tracking with an evenly distributed torque to each wheel. An LQR scheme is also discussed, for tracking the yaw rate and the lateral velocity. To assess and design, first a 7 degree of freedom (DOF) non-linear model is constructed, followed by a linear 2 DOF model, both validated with real data. The linear model is used to design and simulate the proposed controllers. When the controller is within the desired parameters, it is tested in the non-linear model. Tests with the vehicle are performed to verify the contribution of the controller to the overall performance of the vehicle.

Keywords: Torque vectoring · Formula student · PID controller · LQR controller · Vehicle model

1 Introduction

Each year that goes by sees an increase in sales of electric vehicles for personal use (battery EVs, plug-in hybrids and regular hybrids), all of which rely on electric motors as a basis for, or an aid to, propulsion. This also opens the opportunity for vehicle stability control systems acting directly at the motors, like the Electronic Stability Program (ESP) and the Anti-lock Braking System (ABS), which give the vehicle better stability and maneuverability. This paper addresses another type of vehicle stability control called Torque Vectoring (TV). By controlling the amount of torque distributed to each driven wheel, the system has the potential to improve both the stability and response of a vehicle without compromising safety and drivability. Developing a torque vectoring system can be tackled resorting to several different approaches. The most common [6,8] use the yaw rate of the vehicle as the reference for the controller. More advanced solutions [7,9,11] use the sideslip


angle and/or a combination of yaw rate and sideslip angle. The choice of strategy will depend on the available sensors. A variety of control system design approaches can be used. The most basic method is to distribute the left and right torque proportionally to the amount of steering input, ΔT = f(δ), where δ is the steering wheel angle. The proportional integral derivative (PID) controller is the classic control structure and the most commonly used in practical applications. It is a straightforward method to implement and tune [9,12]. Sliding mode control is a non-linear control design methodology used by several researchers to achieve the objectives of tracking the yaw rate and slip angle [9,11]. Predictive control estimates the future states of the vehicle in order to find the best control input [3,4]. Some authors also try to implement fuzzy control to create a set of rules for the allocation of the torque [14].

2 Vehicle Model

Vehicle dynamics is the area devoted to the development of models that describe the behavior of a vehicle for any given set of inputs and disturbances. Modeling this type of system is a very complex and challenging task, and several different approaches and models can be needed: a complex multi-body system (with 20+ degrees of freedom) or a simple two degree of freedom model can be representative [5, p. 6]. The developed torque vectoring controller will be tested on the FST06e. The FST06e is a prototype electric vehicle powered by two Siemens permanent-magnet synchronous motors, each with an RPM range of 0 to 8000 and producing a maximum torque of 107 Nm. With a planetary gear set with a fixed gear ratio of 4.1:1, this is amplified at the wheel to a total of 876 Nm.

2.1 Non Linear Model

The vehicle model used to study torque vectoring control will typically have seven degrees of freedom. The lateral and longitudinal velocities of the vehicle (vx and vy respectively) and the yaw rate ψ̇ constitute three degrees of freedom related to the vehicle body. The wheel velocities of the four wheels, the front left wheel (wfl), front right wheel (wfr), rear left wheel (wrl) and rear right wheel (wrr), constitute the other four degrees of freedom [10]. The full model consists of a horizontal model, which describes the position and orientation of the vehicle; a model of the steering kinematics, which describes the relation between the steering wheel and the actual steering of each wheel; a simplified tire model to calculate the forces acting on the wheels; a balance at the wheel between the applied torque and the tire longitudinal force; and lateral and longitudinal weight transfer, to model the loads at each wheel during cornering, braking and acceleration. Figure 1 shows the block diagram of the model, where some of the physical variables can be found in [2].


Fig. 1. Simulink Schematic of Non Linear Model

2.2 Linear Model

The linear model is one of the most common models. The linear bicycle model is a simplification in which only the lateral velocity vy and the yaw rate ψ̇ are considered in the model [5,6,8,14]. In the way the model is usually presented, the equations only generate lateral force through the steering input. The goal with torque vectoring is to generate a yaw moment by controlling the torque (longitudinal force) at the driven wheels. For this it is necessary to introduce a new term Mz that represents the additional yaw moment. Table 1 summarizes the values from the FST06e needed for the model.

$$\dot{x} = Ax + Bu_1 + Eu_2, \qquad
\dot{x} = \begin{bmatrix}\dot{v}_y\\ \ddot{\psi}\end{bmatrix}, \quad u_1 = M_z, \quad u_2 = \delta$$

$$A = \begin{bmatrix}
-\dfrac{C_{y,f}+C_{y,r}}{m v_{x0}} & \dfrac{-l_f C_{y,f}+l_r C_{y,r}}{m v_{x0}} - v_{x0}\\[2mm]
\dfrac{-l_f C_{y,f}+l_r C_{y,r}}{I_{zz} v_{x0}} & -\dfrac{C_f l_f^2 + C_r l_r^2}{I_{zz} v_{x0}}
\end{bmatrix}, \quad
B = \begin{bmatrix}0\\[1mm] \dfrac{1}{I_{zz}}\end{bmatrix}, \quad
E = \begin{bmatrix}\dfrac{C_{y,f}}{m}\\[2mm] \dfrac{l_f C_{y,f}}{I_{zz}}\end{bmatrix} \qquad (1)$$
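A minimal sketch assembling (1) with the Table 1 values and checking the steady-state response to a wheel steering input with Mz = 0 follows; it is an illustration, not the authors' simulation code, and the 9.3 m/s speed is taken from the Test 3 setup in Table 2.

```python
import numpy as np

M, IZZ, LF, LR = 356.0, 120.0, 0.873, 0.717     # Table 1
CYF, CYR = 15714.0, 21429.0                     # Table 1

def linear_model(vx0: float):
    """A, B (yaw-moment input) and E (steering input) of Eq. (1)."""
    A = np.array([
        [-(CYF + CYR) / (M * vx0), (-LF * CYF + LR * CYR) / (M * vx0) - vx0],
        [(-LF * CYF + LR * CYR) / (IZZ * vx0), -(CYF * LF**2 + CYR * LR**2) / (IZZ * vx0)],
    ])
    B = np.array([0.0, 1.0 / IZZ])
    E = np.array([CYF / M, LF * CYF / IZZ])
    return A, B, E

A, B, E = linear_model(vx0=9.3)
x_ss = np.linalg.solve(-A, E)        # steady-state [v_y, yaw rate] per rad of wheel steer
print("steady-state yaw-rate gain:", x_ss[1], "rad/s per rad")
```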

2.3 Validation

The constant radius turn (skidpad) is defined by ISO 4138 [1]. According to the norm, the test should be performed with a minimum circle radius of 30 m. For the FST06e, the available test track is at the campus in front of the main build

Torque Vectoring for a Formula Student Prototype

425

Table 1. Linear model values Term Yaw rate

Symbol Value Units Term Symbol Value [rads−1 ] Rear wheelbase lr 0.717 ψ˙ −1

Velocity

vx0

[0,40] [ms

Rear Stiffness

Cy,r

21429 [N rad−1 ] Yaw moment

Front Stiffness

Cy,f

] −1

15714 [N rad

Steering angle

[m]

δ

[-3.3,3.3] [rad]

Mz

-

[N m]

-

[ms−1 ]

] Lateral velocity vy

2

Units

Inertia moment Izz

120

[Kgm ]

Gear ratio

Gr

4.4

-

Mass

356

[Kg]

Half track

tr

0.65

m

Wheel Radius

Rw

0.265

m

m

Front wheelbase lf

0.873 [m]

in the parking lot, with a maximum circle radius of 7.5 m. Another test track is at the International Kartodromo in Palmela, with a maximum radius of 15 m. In the Formula Student competition, one of the disciplines is to perform a constant radius turn in a track with a radius of 8.75 m. To be more close to the competition conditions, the tests are modified and performed with a range of 5-9 m radius. In total three data sets are available. “Test 1” and “Test 2” are both done at the campus. The radius of the circle is the same, but the driver varied the velocity. “Test 3” is a test with a bigger radius done in Palmela. Table 2 summarizes the tests. Table 2. Test setups Test 1 Test 2 Test 3 Unit Radius of circle

5.62

Global velocity

7

8.5

9.3

[m/s]

Steering angle

110

100

71

[deg]

72

58.6

[deg/s]

Measured yaw rate 72.3

5.62

9.02

[m]

During the tests some data are being monitored. The inputs are the global velocity (vCG ), the steering wheel angle (δ). The output values are the longitu˙ Although, many more data dinal, lateral acceleration (ax , ay ) and yaw rate (ψ). can be logged from the car, these are sufficient to conclude if the models are accurate enough [13, p. 345]. Figure 2 shows the comparison between the yaw rate from the vehicle and the yaw rate from both linear and non linear simulations. The simulated values are very close to the real data. The data presented illustrates that the vehicle made 4 turns. Two to the left and two to the right (≈ 5s each). Negative values of yaw rate represent the car cornering to the right (in a clockwise way) while positive values represents the car cornering left (counter clockwise).

426

3

J. Antunes et al.

Control System Design

Based on the availability, sampling time and quality of the sensors, the strategy for the calculation of the reference value is presented next, followed by the choice and tuning of the controllers. 3.1

Reference Value

Before the introduction of the control algorithm, it is necessary to define a reference signal. Various authors propose the calculation of the reference based on the velocity and steering angle of the car. It is assumed that the vehicle is in a steady state [10]. lf m lr m − (2) Ku = Cy,f (lf + lr ) Cy,r (lf + lr ) If the under-steer gradient, Ku is positive (Ku > 0): The car is said to have an under-steer behavior (under yaw rate). If Ku is negative (Ku < 0): The car is said to have an over-steer behavior (over yaw rate). And if Ku = 0, it means that the car is neutral-steer (ideal yaw rate). A neutral-steer vehicle has the smallest possible turning radius for a given velocity, which corresponds to optimal performance. Therefore, it would be assumed that a neutral-steered vehicle should be chosen as the reference. However, this approach would put the car on the verge of over-steer instability, so a slightly under-steered is taken as reference. This reference is much closer to neutral-steered than the actual car, so the controller can improve the yaw rate. Given the velocity and steering angle of the car and with the known steer gradient and wheelbase it is possible to know the turning radius, combining with 150

yaw rate [deg/s]

100

50

0

-50

-100

-150 145

Simulation Non Linear Model FST06e data Simulation Linear Model

150

155

160

165

170

175

180

time [s]

Fig. 2. Comparison between real data form the FST06e and both linear and non linear simulation during a skidpad to the left and right from test1

Torque Vectoring for a Formula Student Prototype

427

the under-steer gradient and the road radius, gives the desired yaw reference. ψ˙ desired =

vCG 2 δ (lr + lf ) + Ku vGC

(3)

The desired yaw rate is a function of the velocity, steering and characteristic of the car. The under-steer gradient Ku can be tuned in for each driver preference. The smaller the under-steer gradient the bigger the difference between the desired and actual yaw rate, more will the car have neutral steer characteristics. 3.2

Maximum Yaw Value

With all the various possible implementations and control strategies, some limitations are valid for all torque vectoring controllers. These limitations are related to the physical properties of the vehicle, such as the maximum possible yaw rate, the maximum tire adhesion, the microprocessor computing time, etc. Depending on the entry speed of the car, it will or will not be able to achieve the desired yaw rate. If the car enters a corner too fast, the road may be unable to provide the necessary tire forces, and the car just goes forward, thus under-steering. The solution is to bound the reference according to the tire-road friction coefficient.

v_{CG}\,\dot{\psi} + a_x\,\beta + \frac{v_{CG}\,\dot{\beta}}{1 + \tan^2\beta} \le \mu g    (4)

In Eq. (4), if considering that the car has a small heading angle, the equation can be further simplified and reduced to:

\dot{\psi}_{max} = \sigma\,\frac{\mu g}{v_{CG}}    (5)

where σ represents a tunability factor that accounts for changes in the friction coefficient on different types of pavement. The yaw rate reference is used as long as it does not exceed the maximum possible yaw rate:

\dot{\psi}_{ref} = \begin{cases} \dot{\psi}_{des}, & |\dot{\psi}_{des}| \le \dot{\psi}_{max} \\ \pm\dot{\psi}_{max}, & \text{otherwise} \end{cases}    (6)
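The following sketch illustrates the reference generation of Eqs. (3), (5) and (6). It is only a minimal reading of the formulas, assuming illustrative values for μ, σ and the wheelbase; the names and the standstill guard are ours, not the authors' implementation.

```python
import numpy as np

def yaw_rate_reference(v_cg, delta, K_u, wheelbase, mu=1.0, g=9.81, sigma=0.9):
    """Desired yaw rate from velocity and steering, bounded by the friction limit."""
    if v_cg < 0.1:                     # assumed guard against division by zero at standstill
        return 0.0
    psi_dot_des = v_cg * delta / (wheelbase + K_u * v_cg**2)   # Eq. (3)
    psi_dot_max = sigma * mu * g / v_cg                        # Eq. (5)
    return float(np.clip(psi_dot_des, -psi_dot_max, psi_dot_max))  # Eq. (6)
```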

3.3 Proposed Controller

The added yaw moment results from the difference between the left wheel torque T_{rl} and the right wheel torque T_{rr}. This difference, multiplied by the half track of the car, is the additional yaw moment M_z in the model. If the right wheel has more torque than the left wheel, the car has a positive yaw moment and turns to the left; if the opposite happens, the car has a negative yaw moment and turns to the right.

M_z = (T_{rr} - T_{rl})\, t_f    (7)

Table 3. PI gains for different velocities

Vx (m/s)   P        I
7          296.29   12716.7
10         392.23   12492.5
13         421.72   12040
16         479.87   11536.07
19         396.22   11058.5
22         404.79   13650

But the torque at the wheel is not the same as the torque at the motor. Between the motor and the wheel there is a planetary gear set, which multiplies the motor torque by a gear ratio G_r, so that T_{wheel} = G_r \cdot T_{motor}. The torque at the wheel then has to be divided by the wheel radius R_w to obtain the force at the ground. These values can be found in Table 3. Putting all this information together with Eq. 7, the relation between the torque difference and the yaw moment is given by:

\Delta T = \frac{R_w}{2\, t_r\, G_r}\, M_z = \frac{0.265}{2 \cdot 0.65 \cdot 4.4}\, M_z = 0.05\, M_z    (8)

Thus, rewriting the state space equation from Eq. 1 in terms of the torque difference instead of the yaw moment, the new input matrix is:

B = \begin{bmatrix} 0 \\ \dfrac{1}{0.05\, I_{zz}} \end{bmatrix}    (9)

Taking into consideration that the linear model depends on the velocity, a single controller is not sufficient to ensure control over the full range of operation, so a gain scheduled controller is implemented with six different setting points (a sketch is given below). The main disadvantage of having a small number of points is that, when the gains change, the driver could sense an unexpected yaw rate change. If this occurs, more setting points are necessary and the same procedure is refined, or the points are interpolated.
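As a sketch of the gain scheduling, the PI gains of Table 3 can be interpolated between the six setting points as a function of the longitudinal velocity. The linear interpolation itself is an assumption; the paper only states that points may be interpolated.

```python
import numpy as np

_VX = np.array([7, 10, 13, 16, 19, 22])                          # m/s, Table 3
_KP = np.array([296.29, 392.23, 421.72, 479.87, 396.22, 404.79])
_KI = np.array([12716.7, 12492.5, 12040, 11536.07, 11058.5, 13650])

def scheduled_pi_gains(vx):
    """Return (P, I) gains interpolated from Table 3 for the current velocity vx."""
    return float(np.interp(vx, _VX, _KP)), float(np.interp(vx, _VX, _KI))
```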

3.4 LQR

Linear quadratic regulator (LQR) is an optimal control strategy for linear systems. In the design of this type of control an optimal gain K is calculated based on the performance index J. The advantage of the PI controller is its simplicity


in implementation and in understanding what is happening in terms of allocated torque, but this simplicity also has its drawbacks. If the tire-road friction estimate is wrong or the calculated reference yaw rate is excessive for the current state of the vehicle, the vehicle behaviour may become unstable. To further improve the torque control, an LQR controller is presented. The controller also uses both models: the lateral velocity v_y and the yaw rate ψ̇ are considered state variables and ΔT is the control input. The difference is that the lateral velocity is now also monitored, which gives a more robust control.

\Delta T = K_r\, \dot{\psi} + K_v\, v_y    (10)

The performance index may be written in the following way:

J(u) = \frac{1}{2}\int_{t_0}^{t_f} \left[ (X_d - X)^T Q\, (X_d - X) + u^T R\, u \right] dt    (11)

where:
– u – control effort, u = M_z
– R – weight factor of the control effort
– Q – penalization matrix for the states: lateral velocity v_y, yaw rate ψ̇ and desired yaw rate ψ̇_des

J = \int_{t_0}^{t_f} \left[ \frac{1}{2}\left(\dot{\psi} - \dot{\psi}_{des}\right)^2 + \frac{1}{2}\, w\, \Delta T^2 \right] dt    (12)

where ψ̇_des is the desired yaw rate of the vehicle. Minimizing this index leads to a vehicle with behaviour very close to neutral steer. As discussed, the control effort ΔT must be constrained both by the maximum possible torque and by the tire limits. The K gains are calculated by solving the Riccati equation, choosing appropriate values of Q and R; the controller is tuned by varying both values, first R and then Q.

A^T P + P A - P B R^{-1} B^T P + Q = 0    (13)

Solving for our case gives:

2 a_{11} p_{11} + 2 a_{21} p_{12} - R^{-1}\,\frac{p_{12}^2}{I_{zz}^2} = Q_{11}

(a_{11} + a_{22})\, p_{12} + a_{21} p_{22} + a_{12} p_{11} - R^{-1}\,\frac{p_{12}\, p_{22}}{I_{zz}^2} = Q_{12}

2 a_{12} p_{12} + 2 a_{22} p_{22} - R^{-1}\,\frac{p_{22}^2}{I_{zz}^2} = Q_{22}    (14)


This system can be solved and the optimal feedback gain matrix K will be:

K = R^{-1} B^T P = R^{-1} \begin{bmatrix} 0 & \dfrac{1}{I_{zz}} \end{bmatrix} \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} = R^{-1} \begin{bmatrix} \dfrac{p_{12}}{I_{zz}} & \dfrac{p_{22}}{I_{zz}} \end{bmatrix}    (15)

Equation 14 is solved for Q = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 10^6 \end{bmatrix} and a value of R = 10^{-6}.

4 Results

The results of the implemented controller are discussed next. The car is tested in the same conditions as in Sect. 2.3. The driver performs a skidpad with a radius of 5 m, trying to match the same conditions used to validate the model (same speed and steering angle). With the data acquired with the different controllers, a comparison is made and the gain in vehicle performance is evaluated.

4.1 Without Torque Vectoring Controller

Figure 3 shows the desired yaw rate (blue), the current yaw rate (red), and the speed of the vehicle (yellow). The driver does 5 laps of approximately 5 s each (80 s-110 s) to the left, cornering counter-clockwise; from 110 s-120 s the driver exits and starts cornering the opposite way, clockwise, from 120 s-150 s. The vehicle speed is also plotted in the graph to confirm that the driver is maintaining the speed; recalling Eq. 3, the reference is calculated from the velocity and the steering angle, so if both are constant the desired yaw rate is also constant. Focusing on the current and desired yaw rate values, it can be observed that the current yaw rate is 70 deg/s and the desired yaw rate is 81 deg/s, which means that with torque vectoring there is a possible gain of 11 deg/s. It is also important to see that, looking at test 1 from Table 4, the yaw rate is quite similar, which makes sense because the test track is the same.

4.2 With Torque Vectoring Controller

After the baseline test, the tests with the controllers are performed. Figure 4 shows the same variables as Fig. 3, but this time the torque vectoring controller is acting on the vehicle. The data presented are from the car cornering to the right. As the driver starts the corner, as in Fig. 3, at first (525 s-535 s) the driver is inconsistent but becomes more consistent the longer he is in the corner (535 s-555 s). That is when it can be seen that the controller is improving the yaw rate of the car. Table 4 shows that the proposed controller contributes to an increase in lateral performance: the vehicle has more yaw rate, allowing a higher cornering speed, which translates into a 7.6% reduction in lap time.


Fig. 3. Data logged from the vehicle during the test with no torque vectoring. The variables presented are: yaw rate reference, yaw rate, and global velocity

Fig. 4. Data logged from the vehicle during the test with the PI controller. The variables presented are: yaw reference, yaw rate, and global velocity

Table 4. Comparison between torque vectoring and no torque vectoring for the PI controller

        Yaw rate [deg/s]   Velocity [m/s]   Time [s]
No TV   70                 8.4              4.97
TV      74                 8.8              4.59

4.3 LQR

Once the tests of the PI controller were done, the LQR controller was to be tested. However, due to tire wear and an electronics failure, it was not possible to test the LQR controller in the car. Figure 5 compares the real data with the PI controller and the simulation of the LQR controller; it can be seen that the LQR controller response would be very similar to that of the PI controller.


Fig. 5. Comparison of torques between simulation and real data from the car for the LQR controller and PI controller

4.4 Conclusion

A nonlinear model for simulation has been presented, as well as a linearized model for yaw rate control through the application of a differential torque. Both models are validated with logged data from the electric prototype. Two control strategies are presented: (i) a gain scheduling PI controller for controlling the wheel torques based on the yaw rate, and (ii) an LQR approach for controlling the wheel torques based on the yaw rate and lateral velocity. It was concluded that for low velocities the models are not much affected by the lateral velocity gain, and in terms of response the LQR was quite similar to the PI controller. The main advantage of the LQR is its robustness and the lack of gain scheduling when compared to the base solution. The microcontroller developed by the team proved sufficient for receiving and filtering the data, and also for running the controller every 0.2 s. The fail-safe system was crucial to ensure that the tests with the controller could be executed in safe conditions. Looking at all the different test procedures, it is clear that the implementation of torque vectoring has an effect on the vehicle behaviour. When the controller is used, a gain of 7.6% is achieved, which translates into a reduction of 0.38 s in a skidpad when compared to the same situation without torque vectoring.

Acknowledgments. This work was supported by FCT, through IDMEC, under LAETA, project UID/EMS/50022/2013. The authors thank the prompt and fruitful cooperation of the IST Formula Student team, FST Lisboa and ISR - Dynamical Systems and Ocean Robotics Lab, namely the assistance of Bruno Cardeira.


References
1. Passenger cars - Steady-state circular driving behaviour - Open-loop test methods, ISO 4138:2004, 4th edn., June 2012
2. Jazar, R.N.: Vehicle Dynamics: Theory and Application, 2nd edn. Springer (2014). ISBN 978-1-4614-8544-5
3. Anwar, S.: Yaw stability control of an automotive vehicle via generalized predictive algorithm. In: Proceedings of the 2005 American Control Conference, pp. 435–440, June 2005. doi:10.1109/ACC.2005.1469974
4. Di Cairano, S., Tseng, H.E., Bernardini, D., Bemporad, A.: Vehicle yaw stability control by coordinated active front steering and differential braking in the tire sideslip angles domain. IEEE Trans. Control Syst. Technol. 21(4), 1236–1248 (2013). ISSN 1063-6536
5. Schramm, D., Hiller, M., Bardini, R.: Vehicle Dynamics: Modelling and Simulation, 1st edn. Springer (2014). ISBN 978-3-540-36045-2
6. Ghosh, J., Tonoli, A., Amati, N.: A torque vectoring strategy for improving the performance of a rear wheel drive electric vehicle. In: 2015 IEEE Vehicle Power and Propulsion Conference (VPPC), pp. 1–6, October 2015. doi:10.1007/11823285 121
7. Kaiser, G., Holzmann, F., Chretien, B., Korte, M., Werner, H.: Torque vectoring with a feedback and feed forward controller – applied to a through the road hybrid electric vehicle. In: 2011 IEEE Intelligent Vehicles Symposium (IV), vol. 4128, pp. 448–453, June 2011
8. Kaiser, G., Liu, Q., Hoffmann, C., Korte, M., Werner, H.: Torque vectoring for an electric vehicle using an LPV drive controller and a torque and slip limiter. In: 2012 IEEE 51st Conference on Decision and Control (CDC), pp. 5016–5021, December 2012. doi:10.1109/CDC.2012.6426553
9. De Novellis, L., Sorniotti, A., Gruber, P., Pennycott, A.: Comparison of feedback control techniques for torque-vectoring control of fully electric vehicles. IEEE Trans. Veh. Technol. 63(8), 3612–3623 (2014). doi:10.1007/11823285 121
10. Rajamani, R.: Vehicle Dynamics and Control, Mechanical Engineering Series, 1st edn. Springer (2005). ISBN 978-0387263960
11. Rubin, D., Arogeti, S.: Vehicle yaw stability control using rear active differential via sliding mode control methods. In: 21st Mediterranean Conference on Control and Automation, pp. 317–322, June 2013. doi:10.1109/MED.2013.6608740
12. Stoop, A.: Design and Implementation of Torque Vectoring for the Forze Racing Car. Master's thesis, Delft Center for Systems and Control (2014)
13. Kiencke, U., Nielsen, L.: Automotive Control Systems: For Engine, Driveline and Vehicle, 2nd edn. Springer (2005). ISBN 978-3-540-26484-2
14. Zhao, C., Xiang, W., Richardson, P.: Vehicle Lateral Control and Yaw Stability Control through Differential Braking, pp. 384–389, July 2006. doi:10.1109/ISIE.2006.295624

Path and Velocity Trajectory Selection in an Anticipative Kinodynamic Motion Planner for Autonomous Driving

Jordi Pérez Talamino and Alberto Sanfeliu(B)

Institut de Robòtica i Informàtica Industrial (CSIC-UPC), Parc Tecnològic de Barcelona, 08028 Barcelona, Spain
{joperez,sanfeliu}@iri.upc.edu
http://www.iri.upc.edu/

Abstract. This paper presents an approach for plan generation, selection and pruning of trajectories for autonomous driving, capable of dealing with dynamic complex environments, such as driving in urban scenarios. The planner first discretizes the plan space and searches for the best trajectory and velocity profile of the vehicle. The main contributions of this work are the use of G2-splines for path generation and a method that takes into account accelerations and passenger comfort for generating and pruning velocity profiles based on 3rd order splines, both fulfilling kinodynamic constraints. The proposed methods have been implemented in a motion planner in MATLAB and tested through simulation in different representative scenarios, involving obstacles and other moving vehicles. The simulations show that the planner performs correctly in different dynamic scenarios, maintaining the passenger comfort.

Keywords: Autonomous driving · Urban · Anticipation · Kinodynamic motion planning · Path planning · G2-splines · Velocity profiles

1 Introduction

Nowadays autonomous driving is becoming more and more popular due to its numerous benefits, and motion planning is a key element [1,2]. Motion planners need to generate candidate geometric paths that will be followed by a low level control system. Popular techniques, such as Bézier curves, present a high generation complexity from the road shape, with the presence of non-intuitive geometric waypoints and a multitude of parameters [3]. On the other hand, velocity profile generation methods often use trapezoidal profiles due to their simplicity, despite their dynamic limitations caused by discontinuities in the acceleration [4]. Better approaches compute spline-based velocity trajectories over time, fixing the total maneuver time [5] or also over the road length [6]. Nevertheless, these time conditions make the discretization process difficult and can even lead to the need of a post-optimization.


We explain in Sect. 2 of this article the anticipative kinodynamic motion planner that is used; in Sect. 2.1, the G2 -splines for path generation; in Sect. 2.2, the velocity profile generation; and in Sect. 2.3 the cost structure. Finally in Sect. 3 we explain the simulations and in Sect. 4, the conclusions.

2 Anticipative Kinodynamic Motion Planner Framework

Figure 1 shows the general scheme of the proposed motion planner.

Fig. 1. Motion planner framework

First of all, the planner discretizes the environment choosing several endpoints, which are state configurations X = [x, y, θ, κ], where x and y are the position coordinates, θ is the heading and κ is the curvature, which is related to the steering wheel angle through the bicycle kinematic model. Then candidate paths are generated from the host vehicle state to the endpoints (Sect. 2.1). The motion planner uses a novel approach to generate paths, G2-splines [7]. This method only needs a basic geometric state X for the initial and the final points. It can approximate any kind of path shape (circular segments, straight lines, clothoids, complete lane changes, etc.) while preserving smooth curvature continuity and minimizing curvature variability, which implies kinematic feasibility for a vehicle following it.

Once the planner has a set of candidate paths, for each one of them it computes a set of velocity profiles in order to anticipate dynamic obstacles (Sect. 2.2). Each velocity profile has an associated cost and, for a given path, the one with the minimum cost is chosen. Finally, all the candidate paths with their associated velocity profiles are compared with a cost structure (Sect. 2.3) and the minimum cost solution (path and velocity profile) is chosen to be executed. The velocity profiles are computed using 3rd order splines, taking into account the dynamic restrictions of the candidate path and the road path shape, and they are also kinematically validated. The output of the proposed motion planner is a kinematically and dynamically feasible path with an associated velocity profile for the host vehicle, also fulfilling comfort restrictions for the passengers, such as bounded accelerations and jerk.


2.1 G2-splines Path Generation

The G2-splines are geometric polynomials of 5th order presenting second order geometric continuity (G2), so the curvature κ is continuous [7]. In order to build a general path p(u), understood as a path with an arbitrarily defined starting endpoint X_A = [x_A, y_A, θ_A, κ_A] and ending endpoint X_B = [x_B, y_B, θ_B, κ_B], the equations that define these splines are:

p(u) = \begin{bmatrix} x(u) \\ y(u) \end{bmatrix} := \begin{bmatrix} x_0 + x_1 u + x_2 u^2 + x_3 u^3 + x_4 u^4 + x_5 u^5 \\ y_0 + y_1 u + y_2 u^2 + y_3 u^3 + y_4 u^4 + y_5 u^5 \end{bmatrix}    (1)

where u ∈ [0, 1] and:

x_0 = x_A
x_1 = \eta_1 \cos\theta_A
x_2 = \tfrac{1}{2}\left(\eta_3 \cos\theta_A - \eta_1^2 \kappa_A \sin\theta_A\right)
x_3 = 10(x_B - x_A) - \left(6\eta_1 + \tfrac{3}{2}\eta_3\right)\cos\theta_A - \left(4\eta_2 - \tfrac{1}{2}\eta_4\right)\cos\theta_B + \tfrac{3}{2}\eta_1^2 \kappa_A \sin\theta_A - \tfrac{1}{2}\eta_2^2 \kappa_B \sin\theta_B
x_4 = -15(x_B - x_A) + \left(8\eta_1 + \tfrac{3}{2}\eta_3\right)\cos\theta_A + (7\eta_2 - \eta_4)\cos\theta_B - \tfrac{3}{2}\eta_1^2 \kappa_A \sin\theta_A + \eta_2^2 \kappa_B \sin\theta_B
x_5 = 6(x_B - x_A) - \left(3\eta_1 + \tfrac{1}{2}\eta_3\right)\cos\theta_A - \left(3\eta_2 - \tfrac{1}{2}\eta_4\right)\cos\theta_B + \tfrac{1}{2}\eta_1^2 \kappa_A \sin\theta_A - \tfrac{1}{2}\eta_2^2 \kappa_B \sin\theta_B

y_0 = y_A
y_1 = \eta_1 \sin\theta_A
y_2 = \tfrac{1}{2}\left(\eta_3 \sin\theta_A + \eta_1^2 \kappa_A \cos\theta_A\right)
y_3 = 10(y_B - y_A) - \left(6\eta_1 + \tfrac{3}{2}\eta_3\right)\sin\theta_A - \left(4\eta_2 - \tfrac{1}{2}\eta_4\right)\sin\theta_B - \tfrac{3}{2}\eta_1^2 \kappa_A \cos\theta_A + \tfrac{1}{2}\eta_2^2 \kappa_B \cos\theta_B
y_4 = -15(y_B - y_A) + \left(8\eta_1 + \tfrac{3}{2}\eta_3\right)\sin\theta_A + (7\eta_2 - \eta_4)\sin\theta_B + \tfrac{3}{2}\eta_1^2 \kappa_A \cos\theta_A - \eta_2^2 \kappa_B \cos\theta_B
y_5 = 6(y_B - y_A) - \left(3\eta_1 + \tfrac{1}{2}\eta_3\right)\sin\theta_A - \left(3\eta_2 - \tfrac{1}{2}\eta_4\right)\sin\theta_B - \tfrac{1}{2}\eta_1^2 \kappa_A \cos\theta_A + \tfrac{1}{2}\eta_2^2 \kappa_B \cos\theta_B

The resulting path depends on the parameter vector η = [η_1, η_2, η_3, η_4] that affects its shape. An important characteristic of the resulting spline is its curvature κ(u), which can be computed using the following equation:

\kappa(u) = \frac{\dot{x}(u)\,\ddot{y}(u) - \ddot{x}(u)\,\dot{y}(u)}{\left(\dot{x}(u)^2 + \dot{y}(u)^2\right)^{3/2}}    (2)
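A minimal sketch of Eq. (1) follows, evaluating the quintic G2-spline coefficients between two endpoint states. The function and variable names are ours, and the document's own MATLAB implementation is not shown here.

```python
import numpy as np

def g2_spline_coeffs(XA, XB, eta1, eta2, eta3, eta4):
    """Quintic G2-spline coefficients between states (x, y, theta, kappa), Eq. (1)."""
    xA, yA, thA, kA = XA
    xB, yB, thB, kB = XB
    cA, sA, cB, sB = np.cos(thA), np.sin(thA), np.cos(thB), np.sin(thB)
    x = np.zeros(6); y = np.zeros(6)
    x[0], y[0] = xA, yA
    x[1], y[1] = eta1 * cA, eta1 * sA
    x[2] = 0.5 * (eta3 * cA - eta1**2 * kA * sA)
    y[2] = 0.5 * (eta3 * sA + eta1**2 * kA * cA)
    x[3] = (10*(xB - xA) - (6*eta1 + 1.5*eta3)*cA - (4*eta2 - 0.5*eta4)*cB
            + 1.5*eta1**2*kA*sA - 0.5*eta2**2*kB*sB)
    y[3] = (10*(yB - yA) - (6*eta1 + 1.5*eta3)*sA - (4*eta2 - 0.5*eta4)*sB
            - 1.5*eta1**2*kA*cA + 0.5*eta2**2*kB*cB)
    x[4] = (-15*(xB - xA) + (8*eta1 + 1.5*eta3)*cA + (7*eta2 - eta4)*cB
            - 1.5*eta1**2*kA*sA + eta2**2*kB*sB)
    y[4] = (-15*(yB - yA) + (8*eta1 + 1.5*eta3)*sA + (7*eta2 - eta4)*sB
            + 1.5*eta1**2*kA*cA - eta2**2*kB*cB)
    x[5] = (6*(xB - xA) - (3*eta1 + 0.5*eta3)*cA - (3*eta2 - 0.5*eta4)*cB
            + 0.5*eta1**2*kA*sA - 0.5*eta2**2*kB*sB)
    y[5] = (6*(yB - yA) - (3*eta1 + 0.5*eta3)*sA - (3*eta2 - 0.5*eta4)*sB
            - 0.5*eta1**2*kA*cA + 0.5*eta2**2*kB*cB)
    return x, y

def evaluate_path(x, y, n=100):
    """Sample p(u) for u in [0, 1]."""
    u = np.linspace(0.0, 1.0, n)
    px = sum(x[k] * u**k for k in range(6))
    py = sum(y[k] * u**k for k in range(6))
    return px, py
```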


Fig. 2. η_{3,4} variations in a lane change maneuver (top) and η_{1,2} iterations approximating a circular path (bottom)

G2-splines optimization algorithm. The general case for the optimization of the η values requires numerical optimization [7]. However, due to the type of vehicle motion maneuvers required, we can apply a faster solution that does not need this numerical optimization, only a short iterative process. These trajectories have a symmetrical behaviour between the starting and ending points, so we can impose the following constraints on the η parameters: η_1 = η_2 and η_3 = −η_4, so only 2 parameters need to be tuned: η_{1,2} = η_1 = η_2 and η_{3,4} = η_3 = −η_4.

η_{3,4} ∈ (−∞, +∞) and it affects the curvature changes. It can be concluded that the best value for this parameter is 0 in all cases. In Fig. 2, different η_{3,4} values are tested performing a lane change maneuver. As can be seen, if η_{3,4} > 0 (green trajectories) the curvature changes are concentrated in the center of the trajectory, whereas η_{3,4} < 0 (blue trajectories) concentrates them at the extreme initial and final points. The red trajectory represents η_{3,4} = 0, and it is the smoothest and most balanced trajectory, presenting the minimum curvature variability. On the other hand, η_{1,2} ∈ (0, +∞) and it forces θ and κ to stay close to the initial and final values. It has been found that the smoothest trajectory is reached when η_{1,2} is close to the trajectory length (meters). For this reason, an iterative method is used to converge to the best possible trajectory, giving η_{1,2} the length of the path. After a few iterations, usually 3 or 4, all the computed trajectories converge into the smoothest one (Fig. 2). This process obtains similar results to the numerical optimization proposed in [7], which minimizes


the curvature variability \min_{\eta} \left\| \frac{d\kappa}{du} \right\|, but with much less computation, allowing real time performance to be reached.

2.2 Velocity Profile Generation

We explain three different situations for the velocity profile generation.

3rd Order Spline Without Initial Acceleration. The proposed 3rd order spline in velocity has 4 variables: a, b, c, d (Eq. 3). The fifth unknown is the total time of the trajectory, T. The equation system is solved by applying initial and final conditions, and also using the fact that the velocity spline is symmetric, so the maximum acceleration is achieved at time T/2.

x(t) = \tfrac{a}{4} t^4 + \tfrac{b}{3} t^3 + \tfrac{c}{2} t^2 + d\,t + x_0, \quad v(t) = a t^3 + b t^2 + c t + d, \quad a(t) = 3 a t^2 + 2 b t + c    (3)

\begin{cases} v(0) = v_0 \\ a(0) = 0 \\ v(T) = v_f \\ a(T) = 0 \\ a(T/2) = a_{max} \end{cases} \quad\Rightarrow\quad \begin{cases} b = \dfrac{4 a_{max}^2}{3 (v_f - v_0)} \\ a = -\dfrac{b^2}{3 a_{max}} \\ c = 0 \\ d = v_0 \\ T = \dfrac{3 (v_f - v_0)}{2 a_{max}} \end{cases}    (4)

3rd Order Spline with Arbitrary Initial Acceleration. The 3rd order spline in velocity is the same as in the case without initial acceleration. The equation system is slightly different because the spline is not symmetric and the maximum acceleration is reached in the time t1 . The 6 unknowns are the spline variables a, b, c, d, the total time of the trajectory T and the time t1 .

\begin{cases} v(0) = v_0 \\ a(0) = a_0 \\ v(T) = v_f \\ a(T) = 0 \\ a(t_1) = a_{max} \\ a'(t_1) = 0 \end{cases} \quad\Rightarrow\quad \begin{cases} 0 = \left(\dfrac{4 a_0^2}{a_0 - a_{max}} - 3 a_0\right) T^2 + \left(6 (v_f - v_0) - 12\,\dfrac{(v_f - v_0)\, a_0}{a_0 - a_{max}}\right) T + \dfrac{9 (v_f - v_0)^2}{a_0 - a_{max}} \\ b = \dfrac{1}{T}\left(\dfrac{3 (v_f - v_0)}{T} - 2 a_0\right) \\ a = \dfrac{b^2}{3 (a_0 - a_{max})} \\ c = a_0 \\ d = v_0 \\ t_1 = -\dfrac{b}{3 a} \end{cases}    (5)


We have to solve a 2nd order equation to find T_1 and T_2. A solution does not exist in all cases, due to the denominator value: a_{max} must always be greater than a_0 when accelerating, or lower when decelerating. When only one of the T_i is positive, it is the unique solution. When both T_i are positive, there are two possible solutions: increasing the acceleration up to a_{max} and finishing the maneuver faster, or decreasing the acceleration from a_0 and finishing the maneuver in a longer time. In this case, the fastest maneuver is always chosen, so T = min(T_1, T_2). The cases where the spline has no solution correspond to situations where the vehicle is accelerating in the opposite direction of the desired final speed, or other contradictory cases. This issue is solved by adding a linear section that starts at a_0 and ends at 0 acceleration, with a desired slope that guarantees comfort and feasibility.

Velocity Profile Generation Algorithm. The complete algorithm for generating a velocity profile is detailed in Algorithm 1.

Algorithm 1. Generate a velocity profile
if a0 = 0 then
    Spline case without a0
else
    if vf = v0 then
        Add linear acceleration from a0 to 0
        Spline case without a0
    else
        if contradictory case then
            Add linear acceleration from a0 to 0
            Spline case without a0
        else
            Spline case with arbitrary a0
        end if
    end if
end if
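A minimal sketch of the two spline cases of Eqs. (4) and (5) is given below; the dispatch between them follows Algorithm 1 and is not repeated. Variable names and the root selection details are ours.

```python
import numpy as np

def spline_no_initial_accel(v0, vf, a_max):
    """Symmetric case, a(0) = 0 (Eq. 4); assumes vf != v0 and a_max != 0."""
    b = 4.0 * a_max**2 / (3.0 * (vf - v0))
    a = -b**2 / (3.0 * a_max)
    T = 3.0 * (vf - v0) / (2.0 * a_max)
    return a, b, 0.0, v0, T

def spline_with_initial_accel(v0, vf, a0, a_max):
    """Asymmetric case with a(0) = a0 (Eq. 5); returns None when no positive T exists."""
    dv = vf - v0
    A2 = 4.0 * a0**2 / (a0 - a_max) - 3.0 * a0
    A1 = 6.0 * dv - 12.0 * dv * a0 / (a0 - a_max)
    A0 = 9.0 * dv**2 / (a0 - a_max)
    roots = np.roots([A2, A1, A0])
    T_pos = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0.0]
    if not T_pos:
        return None                      # contradictory case, handled per Algorithm 1
    T = min(T_pos)                       # the fastest maneuver is always chosen
    b = (3.0 * dv / T - 2.0 * a0) / T
    a = b**2 / (3.0 * (a0 - a_max))
    return a, b, a0, v0, T
```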

Set of Velocity Profile Candidates. This spline-based method gives a smooth transition between two different velocities, because the derivative of the acceleration, the jerk, is continuous. In addition, the possibility of directly adjusting the maximum desired acceleration of the trajectory is very useful for this application, as it is directly related to the time needed for the trajectory and also to the comfort of the passengers and the dynamic feasibility. For each candidate path, the motion planner computes a set of velocity profile candidates, discretizing the final velocities and also the accelerations, taking into account the kinodynamics of the vehicle. Starting in the current state of the


Fig. 3. Sets of all the velocity profiles, with a discretization of 0.5 m/s and different accelerations. In the right figure, cases with a lower final velocity are contradictory cases

host vehicle, several final velocities are chosen from 0 to the maximum allowed road/street velocity. Then, for each final velocity, different accelerations are also chosen, from the maximum deceleration limit to the desired acceleration value. Examples of these velocities and accelerations can be seen in Fig. 3. In order to reduce the computation, we impose some constraints on the velocity profile splines to reject non-feasible candidates and restrict the search space. The constraints are:
1. The distance to collision with a static obstacle, using the distance along the track from the host vehicle. If a collision is detected, the only allowed final velocity is 0.
2. The maximum allowed velocity due to the planned path curvature.
3. The maximum allowed velocity due to the center lane path curvature. This restriction assures safety, being more conservative and checking a static property of the track geometry, because the planned path could be smoothed when re-launching.
4. The longitudinal dynamics of the vehicle. This restricts the accelerations to ensure feasibility.
With this approach, we can obtain the required accelerations, reduce the search space and prune the velocity profiles, selecting the best ones.

2.3 Cost Structure

The chosen velocity profile is the one with minimum dynamic cost. The motion planner uses two kinds of cost terms: static and dynamic. Dynamic costs are related to the temporal dimension; they are associated to a velocity profile, include the dynamic obstacles (Table 1), and are computed as J_{dynamic} = \sum_i w_i c_i. The costs are based on the method proposed in [6], but adapted to fit the proposed approach.


Table 1. Dynamic costs

Cost       Formula                           Physical interpretation        Impact
c_v        1 − v_{f,vp}/v_{max,desired}      Velocity                       Behaviour
c_a        abs(a_{max,vp}/a_{max,braking})   Acceleration                   Comfort & Efficiency
c_{obs,d}  f · e^{(−1/λ)·d}                  Dynamic obstacles repulsion    Safety & Behaviour

Table 2. Static costs

Cost       Formula             Physical interpretation          Impact
c_l        l/s_{maneuver}      Path length                      Efficiency
c_κ        max(κ) · r_{min}    Maximum path κ                   Comfort & Kinematic feasibility
c_κ̇        max(κ̇) · r_{min}    Maximum path κ̇                   Comfort & Kinematic feasibility
c_{off}    o/o_{max}           Lateral offset from centerline   Behaviour
c_{obs,s}  f · e^{(−1/λ)·d}    Static obstacles repulsion       Safety

v_{f,vp} is the final velocity of the candidate velocity profile. v_{max,desired} is the maximum allowed velocity at the current moment. a_{max,vp} is the maximum acceleration value of the candidate velocity profile. a_{max,braking} is the acceleration value (always < 0) of the maximum allowed braking. On the other hand, static costs are the terms related to path geometry and static obstacles, and they are associated to a candidate path (Table 2). The cost function is computed as the weighted sum of all the static terms, J_{static} = \sum_i w_i c_i. All the terms (static and dynamic), with the exception of the obstacle terms, are normalized between 0 and 1. s_{maneuver} is the longitudinal distance from the host vehicle to the ending point of the path. κ is the curvature value at a specific point of the path. r_{min} is the minimum turning radius of the host vehicle. o_{max} is the lateral distance from the farthest endpoint to the center of the lane (the point with the largest lateral offset). f is a scale value and λ is the decay of the exponential function. d is the distance from an obstacle (static or dynamic) to the host vehicle (both are modelled as circles). If d is smaller than a threshold, the cost is penalized:

c_{obs} = \begin{cases} f \cdot e^{(-1/\lambda)\cdot d} + Penalization, & \text{if } d < threshold \\ f \cdot e^{(-1/\lambda)\cdot d}, & \text{otherwise} \end{cases}    (6)

Unlike other approaches that penalize collisions with an infinite cost, as in [6], it is preferred to preserve the value. Then, in a situation where all the candidate solutions present a high cost (for instance, an unavoidable obstacle), the planner will always give the minimum cost solution, so it will be the least dangerous one. Finally, for several maneuver candidates, formed by path-velocity profile pairs, a total cost function J_{total} is computed as the minimum cost maneuver, J_{total} = \min_{maneuver}(J_{dynamic} + J_{static}).
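A short sketch of the obstacle cost of Eq. (6) follows; the numerical values of f, λ, the threshold and the penalization are illustrative placeholders, not the planner's tuned values.

```python
import math

def obstacle_cost(d, f=1.0, lam=5.0, threshold=2.0, penalization=100.0):
    """Exponential repulsion, penalized but kept finite below the threshold (Eq. 6)."""
    cost = f * math.exp(-d / lam)
    if d < threshold:
        cost += penalization
    return cost
```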

3 Simulations

The motion planner and the low level vehicle control have been implemented in MATLAB. The Stanley lateral controller [8] has been used together with a longitudinal controller which takes into account the dynamics of the vehicle. The following figures show representative road scenarios (with straight and curved lanes), with the lane lines (in black), their center-lines (in blue) and the path followed by the host vehicle (in green). The host vehicle (in red and purple) and the velocity profile are plotted at each interval of time. The simulations were made in MATLAB on a slow CPU, and the average computation time for each simulation cycle with static and dynamic obstacles was 250 ms. Taking into account that a C++ implementation is around 10 times faster, this method could run at about 25 ms.

Fig. 4. Simulation without any obstacle. Followed path (left) and vehicle velocity profile (right) in red and planned in blue

Fig. 5. Detail of a static obstacle avoidance maneuver


Fig. 6. Overtaking of a slower vehicle in a one way track (left) and in a two way track with oncoming traffic (right). Red vehicle is the host.

Static Environment. Figure 4 shows a simulation in a road environment without any obstacle in the lanes. Figure 5 shows the same environment, but with several static obstacles, represented by circles in the lanes. This simulates an urban scenario, for instance with vehicles parked in double row.

Dynamic Environment. Figure 6 shows two different overtaking maneuvers involving dynamic obstacles.


Fig. 7. Incorporation in a T-intersection, in several instants of time. Conservative case

Fig. 8. Incorporation in a T-intersection, in several instants of time. Aggressive case


T-Intersection. To enter a T-intersection it is mandatory to anticipate the oncoming vehicles, which come from both directions. Figure 7 shows a conservative case, where the host vehicle slows down and lets the traffic pass. On the other hand, Fig. 8 shows a more aggressive case, where the host vehicle takes advantage of a gap between vehicles.

4 Conclusions

After analyzing the simulations, it is verified that the proposed methods of path and velocity profile generation behave correctly and help to successfully overcome all the presented situations, which correspond to representative cases that occur in on-road driving. The proposed approach for path generation, using the G2-splines, performs above expectations and real-time computation can be obtained. The proposed method for velocity profile generation behaves smoothly and provides a fully analytic solution that can deal with any arbitrary situation. Moreover, it allows the desired acceleration to be adjusted directly instead of the maneuver time, and this helps in the selection and pruning of a candidate set of velocity profiles, taking into account kinodynamic and dynamic constraints.

References
1. Paden, B., Čáp, M., Zheng Yong, S., Yershov, D., Frazzoli, E.: A survey of motion planning and control techniques for self-driving urban vehicles. IEEE Trans. Intell. Veh. 1(1), 33–55 (2016)
2. Katrakazas, C., Quddus, M., Chen, W.-H., Deka, L.: Real-time motion planning methods for autonomous on-road driving: state-of-the-art and future research directions. Transp. Res. Part C 60, 416–442 (2015)
3. Bautista, D.G., Rastelli, J.P., Lattarulo, R., Milanés, V., Nashashibi, F.: Continuous curvature planning with obstacle avoidance capabilities in urban scenarios. In: 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), pp. 1430–1435 (2014)
4. Ferguson, D., Howard, T.M., Likhachev, M.: Motion planning in urban environments, Part I and II. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1063–1069, 1070–1076 (2008)
5. Gu, T., Dolan, J.M.: On-road motion planning for autonomous vehicles. In: International Conference on Intelligent Robotics and Applications (ICIRA), vol. 3, pp. 588–597 (2012)
6. Xu, W., Wei, J., Dolan, J.M., Zhao, H., Zha, H.: A real-time motion planner with trajectory optimization for autonomous vehicles. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 2061–2067 (2012)
7. Bianco, C.G., Piazzi, A.: Optimal trajectory planning with quintic G2-splines. In: Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 620–625 (2000)
8. Hoffmann, G.M., Tomlin, C.J., Montemerlo, M., Thrun, S.: Autonomous automobile trajectory tracking for off-road driving: controller design, experimental validation and racing. In: American Control Conference (ACC), pp. 2296–2301 (2007)

Deadzone-Quadratic Penalty Function for Predictive Extended Cruise Control with Experimental Validation

Seyed Amin Sajadi-Alamdari1(B), Holger Voos1, and Mohamed Darouach2

1 Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, 29, avenue JF Kennedy, 1855 Luxembourg City, Luxembourg
{amin.sajadi,holger.voos}@uni.lu
2 Centre de Recherche en Automatique de Nancy (CRAN) UMR-CNRS 7039, Université de Lorraine, IUT de Longwy, 186 rue de Lorraine, 54400 Cosnes et Romain, France
[email protected]
https://www.uni.lu

Abstract. Battery electric vehicles have high potential for modern transportation; however, they face a limited cruising range. To address this limitation, we present a semi-autonomous ecological driver assistance system to regulate the velocity with energy-efficient techniques. The main contribution of this paper is the design of a real-time nonlinear receding horizon optimal controller to plan the online cost-effective cruising velocity. Instead of conventional ℓ2-norms, a deadzone-quadratic penalty function for the nonlinear model predictive controller is proposed. The obtained field experimental results demonstrate the effectiveness of the proposed method for a semi-autonomous electric vehicle in terms of real-time energy-efficient velocity regulation and constraint satisfaction.

Keywords: Deadzone penalty function · Nonlinear model predictive control · Ecological advanced driver assistance system · Electric vehicles

1 Introduction

The Battery Electric Vehicle (BEV) is one of the most promising powertrain technologies for future transportation [1]. However, BEVs have a limited onboard energy capacity, which limits their cruising range on a single charge. Several methods have been developed to extend the cruising range, such as Ecological Advanced Driver Assistance Systems (Eco-ADAS) with an anticipative driving style. For automotive systems, receding horizon optimal control, also known as Model Predictive Control (MPC), has been an attractive approach. In MPC, an Optimal Control Problem (OCP) is solved repeatedly in


a receding horizon principle. The first element of a sequence of finite control actions is applied to the system at each sampling time. Well-established modern Cruise Control (CC) systems automate the throttle and brake control of the vehicle to maintain the pre-set longitudinal velocity. Several works can be found in the literature, such as [2], where a sequential optimisation approach was presented for a connected CC system. In [3,4], an energy-efficient linear MPC that uses the energy consumption map of a BEV was established. Despite the effectiveness of linear MPCs to some extent, they have limitations. In order to improve the performance specifications, Nonlinear MPC (NMPC) is distinguished by the use of nonlinear system models in the OCP. An instance of NMPC for the Eco-CC system considering up-down road slopes with an Internal Combustion Engine (ICE) fuel consumption model was presented in [5]. An Extended Eco-CC (Ext-Eco-CC) system that further considers road curves and traffic speed limit areas for BEVs was introduced in [6]. Comparisons and assessments of energy consumption models, cost functions, and solution methods of the Eco-CC system for passenger vehicles were reviewed in [7,8].

Considering the general class of (residual) penalty functions used in NMPC, the ℓ2-norm is preferred in practice due to its efficiency in implementation. The quadratic penalty function yields a least-square or Euclidean norm approximation [9]. The ℓ2-norm is preferred for energy-efficiency applications. However, NMPC based on the ℓ2-norm associated to the states may also lead to aggressive system behaviour [10]. As an alternative, a systematic way of dealing with large state residuals based on the Huber function was proposed in [10]. The Huber function φ_M(x) is equivalent to an ℓ2-norm within the region [−M, M] and to an ℓ1-norm outside. The ℓ1-norm is preferred for robust regulation, where the absolute value penalty function yields an ℓ1-norm approximation; thus, the sensitivity to outliers or large residuals is lower than for the ℓ2-norm [9].

Although most of the mentioned NMPCs are based on agile and intuitive set-point tracking, this may not be a suitable strategy for energy-efficient state regulation. One of the main reasons for high energy consumption is strictly reaching and tracking the set-point. In this paper, we propose deadzone-quadratic and deadzone-linear penalty functions that have the advantages of the ℓ2- and ℓ1-norms, respectively. This method preserves the energy-efficient behaviour within the desired operating zone. The main idea of the deadzone-quadratic penalty function is to assess a low penalty, or insensitivity, for residuals smaller than the deadzone width and a quadratic or linear penalty for larger residuals. This motivates finding a trade-off between agile set-point tracking and an energy-efficient strategy.

The main contribution of this work is the design of a real-time NMPC with a deadzone-quadratic penalty function to enhance the Ext-Eco-CC system for BEVs. For this purpose, the following components are considered to develop a system model. First, the BEV longitudinal dynamics, its energy consumption, as well as road geometry and traffic sign information are modelled in a reasonably accurate framework. Second, a real-time nonlinear receding horizon optimal controller is designed to plan the online cost-effective cruising velocity. Then, the NMPC


takes advantage of a convex deadzone-quadratic penalty function for velocity tracking within a desired reference zone. Finally, the performance of the proposed concept is evaluated in terms of energy-efficient velocity regulation and constraint fulfilment.

The remainder of this paper is structured as follows: the system model is introduced in Sect. 2. The NMPC formulation with the deadzone-quadratic penalty function is presented in Sect. 3. Section 4 includes the field experimental validation of the proposed concept, followed by the conclusion and future work in Sect. 5.

Notation. Throughout this paper, R^n denotes the n-dimensional Euclidean space, R+ := [0, ∞), and N = {1, 2, . . .} is the set of natural numbers. N+ := N ∪ {0} and Z[a,b] := {a, a + 1, . . . , b} is the set of integers from a to b.

2 System Model

Safe and energy-efficient velocity profile identification contributes significantly to extending the cruising range of a BEV. The semi-autonomous BEV concept that extends the functionalities of an Eco-CC system is presented in Fig. 1. Similar to modern CC systems, the driver pre-sets the desired velocity. The semi-autonomous Ext-Eco-CC system predictively regulates the velocity with respect to the longitudinal motion of the vehicle, its energy consumption dynamics, road geometric navigation data, and traffic sign information. While the driver handles the steering control of the vehicle, the system plans a proper energy-efficient cruising velocity profile autonomously for the entire trip without requiring driver intervention. For more details about the proposed Ext-Eco-CC system, see [6].


Fig. 1. Semi-autonomous BEV Ext-Eco-CC system

2.1 Vehicle and Energy Dynamics

The position s and velocity v along the longitudinal motion of the BEV can be expressed by Newton's second law of motion, where the vehicle is assumed to be a point mass at its centre of gravity:

\dot{s} = v,    (1)
\dot{v} = (F_{trac} - F_{res})/M,    (2)

where M, F_{trac}(t), and F_{res}(t) are the equivalent mass of the vehicle, the traction force, and the total motion resistive force, respectively [1]. The traction force (throttle and brake pedals) depends on the equivalent mass and the control input as F_{trac}(t) := M u(t). The control input is bounded (u_{min}(v) ≤ u(t) ≤ u_{max}(v)) by the physical limits of the traction force that the wheel-road contact can support without slip. The total resistive force, including aerodynamic drag, gradient, and rolling resistance forces, is represented by:

F_{res} = \frac{1}{2}\rho A_f C_D v^2 + M g \sin(\theta(s)) + C_{rr}(v)\, M g \cos(\theta(s)),    (3)

where ρ, A_f, C_D, g, θ(s), and C_{rr}(v) are the air density, the vehicle frontal area, the aerodynamic drag coefficient, the gravitational acceleration, the road slope angle as a function of the host vehicle position, and the velocity-dependent rolling resistance coefficient, respectively. The rolling resistance coefficient for passenger vehicles on a concrete road can be approximated as C_{rr}(v) = 0.01(1 + v/576) [1].

Energy consumption of a BEV depends on a number of factors, including the driven velocity, the acceleration profile, the geometric characteristics of the roads, and the traffic situation. For a given velocity at a given traction force, the operating point of the electric machine and the related power consumption or regeneration can be determined [6]. The energy consumption during cruising at constant speed is equal to the resistive power. This can be approximated, through a curve-fit process with measurement data, by a polynomial of velocity, f_{cruise} = b_3 v^3 + b_2 v^2 + b_1 v + b_0, and of acceleration, f_a = a_2 u^2 + a_1 u + a_0. Therefore, at any given velocity and control input, a linear relation of the traction power-to-mass ratio can describe the energy consumption of the BEV as:

\dot{e} = f_a\,(p_{trac}/M) + f_{cruise},    (4)

where p_{trac} = F_{trac} v denotes the traction power. This model is capable of capturing the energy consumption of a BEV, including regenerative braking, for the full range of velocity and control input (for more details, see [6]).
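A minimal sketch of the longitudinal and energy dynamics of Eqs. (1)-(4) follows. All numerical parameters and the polynomial coefficients a_i, b_i are placeholders, not the Smart-ED values identified in [6].

```python
import numpy as np

M, rho, Af, CD, g = 1200.0, 1.2, 2.0, 0.35, 9.81        # assumed vehicle parameters
a_coef = (0.1, 1.0, 0.0)            # f_a = a2 u^2 + a1 u + a0 (assumed fit)
b_coef = (1e-3, 0.05, 20.0, 50.0)   # f_cruise = b3 v^3 + b2 v^2 + b1 v + b0 (assumed fit)

def derivatives(s, v, u, theta):
    """Return (s_dot, v_dot, e_dot) for traction input u and road slope theta."""
    Crr = 0.01 * (1.0 + v / 576.0)
    F_res = 0.5 * rho * Af * CD * v**2 + M * g * np.sin(theta) + Crr * M * g * np.cos(theta)
    F_trac = M * u
    v_dot = (F_trac - F_res) / M                     # Eqs. (1)-(2)
    a2, a1, a0 = a_coef
    b3, b2, b1, b0 = b_coef
    f_a = a2 * u**2 + a1 * u + a0
    f_cruise = b3 * v**3 + b2 * v**2 + b1 * v + b0
    e_dot = f_a * (F_trac * v / M) + f_cruise        # Eq. (4), p_trac = F_trac v
    return v, v_dot, e_dot
```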

2.2 Road Geometry and Traffic Model

Road geometries and traffic information have favourable advantages for the ADAS safety and energy management applications [11]. In [6], the road slopes,


road curves, and traffic speed limit zone data are modelled as continuous and differentiable functions. In that method, the road slope profile f_{slp}(θ(s)) is proposed to be the sum of quadratic functions of the vehicle position, representing the slope data of each road segment, as follows:

f_{slp}(\theta(s)) := \sum_{n=1}^{N_{sgm}} (a_n s^2 + b_n s + c_n)\, H^n_{(s-s_{n-1})}\, H^n_{(s-s_n)},    (5)

where N_{sgm} is the number of road segments, and H^n_{(s-s_{n-1})} and H^n_{(s-s_n)} are hyper-functions of the nth road segment. These functions represent the data points in each segment of the road, utilising the hyper-function concept to interconnect the estimated segments of the road at the boundary positions s_{n-1} and s_n. The road curve and traffic speed limit profiles are modelled in a similar way [6]. The simple curve is used to express the total absolute road curve profile f_{crv}(δ(s)), which is defined as:

f_{crv}(\delta(s)) := \sum_{n=1}^{N_{crv}} \left| H^n_{(s-s_{ent})}\, \frac{1}{R_{crv_n}}\, H^n_{(s-s_{ext})}(s) \right|,    (6)

where Ncrv is the number of road curves, and Rcrvn is the radius of a circle valid for the curve’s arc length with two position points, sent and sext , at the respective entrance and exit positions. Furthermore, the traffic speed limit profile (flmt (s)) can be modelled as: flmt (s) :=

N lmt 

n n H(s−s (vlmt − vmax )H(s−s + vmax , str ) end )

(7)

n=1

where Nlmt is the number of speed limit zones, and vlmt is the specified speed limit value at positions starts from sstr up to the end of the zone send . The vmax is the maximum speed value of the electric vehicle (for more details, see [6]).

3

Optimal Control and Penalty Functions

For the sake of completeness, a general NMPC formulation will be reviewed. Next, the deadzone-quadratic and deadzone-linear penalty functions will be introduced. 3.1

Nonlinear Model Predictive Control

Consider a general discrete-time system: xt+1 = f (xt , ut ),

(8)

where t ∈ N+ ; xt ∈ Rnx is the system states vector, and ut ∈ U ⊂ Rnu is a nonempty measurable set for the inputs. The f (·) is nonlinear Borel-measurable

NMPC with Deadzone-Quadratic Penalty Function

451

vector of functions that describes the system dynamics. Let N ∈ N be the both state and control prediction horizon. Define an N-stage feedback control policy as: π := {π0 (·), π1 (·), . . . , πN −1 (·)},

(9)

where the Borel-measurable function πi (·) : R(i+1)nx → U, for all i = 0, . . . , N −1 is a general state feedback control law. The control input ui is selected as the feedback control law ui = πi (·) at the ith stage of the control policy. In receding horizon optimal control, the cost function of an OCP is commonly defined as: VN (xt , π ) :=

N −1 

Jc (ˆ xi , ui ) + Jf (ˆ xN ),

(10)

i=0

where Jc : Rnx × U → R+ and Jf : Rnx → R+ are the cost-per-stage function and the final cost function, respectively, and x ˆi denotes the predicted states at −1 time i given the initial states x ˆ0 = xt , and control law {πi (·)}N i=0 . Using the cost function (10), the OCP for (8) is formulated as follows: VN∗ (xt ) := minimise π

VN (xt , π )

(11a)

subject to: x ˆi+1 = f (ˆ xi , πi ), πi (·) ∈ U,

for all i ∈ Z[0,N −1] , for all i ∈ Z[0,N −1] ,

(11b) (11c)

gj (ˆ xi ) ≤ 0, x ˆ0 = xt ,

for all j = 1, . . . , s, and i ∈ Z[0,N −1] ,

(11d) (11e)

where VN∗ (xt ) denotes the optimal value function under the optimal control polxi ) that are required icy π ∗ . The inequality state constraints are denoted by gj (ˆ to be fulfilled. The OCP in receding horizon principle involves applying the first element of the control action sequence ut = π ∗0 (·) repeatedly to the system at each sampling time. For more details and the first-order necessary conditions for a solution of the OCP see e.g. [9,12]. 3.2

Deadzone Penalty Functions

In many practical NMPC applications considering the energy-efficiency, it is desirable to reach a region of reference set-points with relatively low-cost value rather than costly but accurate and agile set-point tracking. This could be accomplished using a nonnegative and symmetric deadzone-quadratic penalty function such as:  0 : |x| ≤ z, (12) φq (x) := x2 − z 2 : |x| > z, where z is the edge of free zone that no penalty is assessed if |x| ≤ z. The φq (·) function agrees with least-square for any residual outside of the zone width. In other words, the residuals smaller than the zone width are ignored which lead to low-cost function value.

452

S.A. Sajadi-Alamdari et al.

In a case of energy-efficient robust regulations, deadzone-linear penalty function agrees with absolute value for the residual outside of the zone width as follows:  0 : |x| ≤ z, (13) φl (x) := |x| − z : |x| > z. Unfortunately, these deadzone penalty functions are not convex which lead to a challenging OCPs. However, a smooth approximation of deadzone penalty function may address the challenge. In this paper, a deadzone penalty function based on softplus rectifier is proposed. The softplus is an approximation to the activation function so-called Rectified Linear Unit (ReLU) which is mostly utilised in the deep neural networks [13]. The proposed deadzone-linear penalty function is a combination of the two softplus as follows: ψl (x) := ln(1 + exp(x − z)) + ln(1 + exp(−x − z)).

(14)

The ψl (x) have advantages such as being a convex function with efficient computation and gradient propagation [14]. The gradient of the deadzone-linear penalty function is a combination of two sigmoid functions as follows: dψl (x) exp(x − z) exp(−x − z) = − . dx 1 + exp(x − z) 1 + exp(−x − z)

(15)

Similar to ψl (x), the deadzone-quadratic penalty function can be formulated as follows: (16) ψq (x) := (ln(1 + exp(x − z)) + ln(1 + exp(−x − z)))2 . The gradient of the deadzone-linear penalty function is a linear continuous function with a deadzone area, [−z, z], as follows: dψq (x) dψl (x) = 2ψl (x) . dx dx

(17)

For sake of simplicity, Fig. 2 shows the proposed ψq (x) and ψl (x) penalty functions for a scalar residual with z = 5 in comparison with φq (x), φl (x), 2 , and 1 -norms. Note that when the state residual is within the zone, the gradient is non-zero and the optimality conditions are satisfied as 1 and 2 -norms. In other words, the states will converge to final reference set-point values but slower than conventional norms which lead to the energy-efficient behaviour. 3.3

Case Study: Ext-Eco-CC Mathematical Optimisation Problem

The state vector for the Ext-Eco-CC system from Eqs. (1), (2), and (4), is defined as xt = [s, v, e]T ∈ R3 ; the control input is the traction input applied on BEV, ut = u ∈ U ⊂ R (for more details see [6]); Please note that all states are measurable and the measurement noise is negligible.

NMPC with Deadzone-Quadratic Penalty Function

453

Fig. 2. Deadzone penalty functions with relative comparisons

The cost-per-stage function for Ext-Eco-CC system is defined as: Jc (ˆ xi , ui ) :=

N −1  i=0

1 [ψq (ˆ xi − xref )Q +  ui − uref 2R ], 2

(18)

with corresponding weights (Q and R). The final cost function for Ext-Eco-CC system is defined as: 1 Jf (ˆ xN ) := ψq (ˆ xN − xref )Q . (19) 2 The lateral acceleration of the BEV should be lower than the comfort level (ˆ ωref ) as inequality constraint as follows: si , vˆi ) := vˆi2 /fcrv (δ(ˆ si )) ≤ ω ˆ ref , g1 (ˆ

(20)

The velocity of the BEV should also be lower than speed limit zones as: si , vˆi ) := vˆi ≤ flmt (ˆ si ). g2 (ˆ

(21)

In addition, the velocity should be within the standstill and the reference setpoint so-called funnel concept (see e.g., [15]) as follows: vi ) := 0 ≤ vˆi ≤ vref + vrlx g3 (ˆ

(22)

where vref is the reference set-point and vrlx is the relaxed velocity for the inequality constraint. One of the efficient methods to solve the resulting OCP in the receding horizon manner can be based on Pontryagin’s Minimum Principle. The obtained dual OCP can be solved efficiently in real-time by the Continuation and Generalised Minimal RESidual (C/GMRES) with a proper inequality constraints handling method (for more details see [12,16]).

4

System Evaluation

The proposed Ext-Eco-CC system has been evaluated with practical experiments on a test track using realistic values of the parameters. A Smart Electric Drive

454

S.A. Sajadi-Alamdari et al.

Fig. 3. Smart fortwo electric drive 2nd Curve 1st Curve s = 500

Start Point 4th Curve 3rd Curve

c OpenStreetMap contributors 

s = 850

Fig. 4. Centre de Formation pour Conducteurs (CFC) [17]

third generation (Smart-ED) commercial BEV, which is available for practical experiments, is chosen here to model the dynamics of a BEV and its energy consumption (Fig. 3). A closed test track located at Colmar-Berg, Luxembourg, (CFC) is chosen to model the road geometry with traffic information (Fig. 4). The test track has a total length of 1.255 km and includes curves and relative slope profile. This track has four main curves with 20 m, 25 m, 15 m, and 27 m radius. Note that the speed limit zone is not imposed in system evaluation in order to simplify the system evaluation (for more detail, see [6]). 4.1

Experimental Setup

In order to validate the proposed concept, the NMPC with deadzone-quadratic penalty function is experimentally implemented on the Smart-ED vehicle and the Ext-Eco-CC system is tested on the CFC test track. The position of the SmartED is updated by the Global Positioning System (GPS) sensor. The velocity and energy consumption of the vehicle including the battery current and voltage information is updated by the Controller Area Network (CAN-bus) through the On-Board Diagnose (OBD) interface. The onboard computational resource for R the Ext-Eco-CC concept is foreseen by a Linux operating system on the Intel TM Core i7 with a memory of 7.7 GiB PC and connection panel. The connection

NMPC with Deadzone-Quadratic Penalty Function

455

Fig. 5. The Linux operated PC with connection panel

panel is developed for the system power supply and actuators communication (Fig. 5). The control input of the proposed NMPC with deadzone-quadratic penalty function is realised by actuating either the accelerator pedal or brake actuator. The accelerator pedal is replaced by an electronic board (E-accelerator) to manipulate the required acceleration and imitates the electric signals generated by the original accelerator pedal of the Smart-ED. The brake actuator is manipulated by an electric stepper motor that is connected to the brake pedal by a planetary gearbox and flexible cable. The automatic brake actuation is designed in a way that preserves the possibility for the driver to brake in emergency cases. Figure 6 shows the configuration of the E-accelerator and brake actuators for the Ext-Eco-CC system.

Fig. 6. Automatic accelerator and brake actuators

4.2

Experimental Results Validation

In order to show the performance of the proposed Ext-Eco-CC system, a prediction horizon for the predictive controller is set to T = 15 s, to cover upcoming road geometry, and traffic speed limit zone with N = 30 discretized steps.

456

S.A. Sajadi-Alamdari et al.

The constants in performance index function are set as Q = diag[0, 2, 0], and R = diag[450]. Note the weight for energy-consumption is set to zero since the effectiveness of the deadzone-quadratic penalty function in energy efficiency is the main focus in this paper. The reference for the lateral acceleration comfort level is ωref = 3.7 m/s2 . We have compared our proposed deadzone-quadratic NMPC (DQ-NMPC) with the conventional NMPC with 2 -norm (C-NMPC) and human driver (HD) in terms of velocity regulation, travel time (t), power consumption profile and total energy-consumption (e). For the sake of fair comparison, all of the tests started from the standstill and the maximum reference velocity value is chosen, vref = 100 km/h without imposing speed limit zone. The desired reference zone for velocity tracking is chosen as z = 2 m/s. We have proposed human driver to drive as fast and energy-efficient as possible. Figure 7a shows the performance of various tests in terms of velocity regulations and total travel time. The DQ-NMPC and C-NMPC increase the velocity up to reaching the first curve (220 ≤ s ≤ 270) where the lateral acceleration constraint should be satisfied. As it is shown, the human driver is faster than the controllers. However, during the first and second curves (320 ≤ s ≤ 440), the controllers and human drivers show similar behaviour. Afterwards, the controllers increase over again the velocity up to the point the third curve (860 ≤ s ≤ 930) are in their prediction horizon. This leads to the beginning of slowing down predictively to satisfy the upcoming constraints in an energy-efficient way. The human driver show late but sharper velocity reduction which may not be an energy-efficient technique. Finally, the controllers keep the velocity during the fourth curve (930 ≤ s ≤ 1045) and speed up once more to reach the starting point on the test track. Thus, the presented result shows that the maximum reference velocity is not reachable, however, the reference velocity for less than

Fig. 7. Experimental results of the DQ-NMPC compared with the C-NMPC and the human driver for (a) velocity regulation, and (b) power consumption profile.


Figure 7b shows the power consumption profile and the total energy. Note that negative power consumption refers to the energy recovery mechanism. Figure 8a and b show the normalised histograms of velocity and power consumption. The proposed DQ-NMPC benefits from an improved penalty function, which leads to a denser velocity and power consumption distribution compared to the C-NMPC and the human driver. Based on the achieved results, it is shown that the set-point value is not reachable on the test track by the controllers or the human driver. The DQ-NMPC leads to a steadier velocity profile and consequently better driving comfort, with a relatively small increase in travel time. In terms of total energy consumption, the DQ-NMPC is 13.65% more energy efficient than the human driver and 6.58% more energy efficient than the C-NMPC. In other words, for longer trips with more hilly and curvy roads, our proposed method has a higher potential to be energy-efficient. It is noteworthy that the average OCP calculation time for the DQ-NMPC is 2.35 ms, which indicates the real-time capability of the proposed controller.
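To make the shape of this cost term concrete, the following is a minimal sketch of a deadzone-quadratic penalty on the velocity tracking error, assuming the simple form "zero inside the tracking zone, quadratic outside"; the function name and scaling are illustrative, and the paper itself uses an approximate (smoothed) version suitable for the online solver rather than this exact expression.

```python
import numpy as np

def deadzone_quadratic(error, zone):
    """Penalty that is zero for |error| <= zone and grows quadratically outside.

    error: velocity tracking error, e.g. v - v_ref (m/s)
    zone:  half-width of the tolerated tracking zone, e.g. z = 2 m/s
    """
    excess = np.maximum(np.abs(error) - zone, 0.0)  # distance outside the deadzone
    return excess ** 2

# Example: errors of 1, 3 and 6 m/s with the z = 2 m/s zone used in the tests
print(deadzone_quadratic(np.array([1.0, 3.0, 6.0]), zone=2.0))  # [ 0.  1. 16.]
```

Compared with a plain quadratic penalty, deviations inside the zone cost nothing, which is what lets the controller trade exact set-point tracking for energy savings.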

Fig. 8. Experimental performance distribution of the DQ-NMPC compared with the C-NMPC and the human driver for (a) velocity and (b) power consumption.

5 Conclusion and Future Research

A semi-autonomous ecological driver assistance system was developed to regulate the velocity of electric vehicles in an energy-efficient manner. A real-time nonlinear receding horizon optimal controller with an approximate deadzone-quadratic penalty function was proposed to plan the cost-effective velocity profile online. The limited cruising range of electric vehicles was improved by the low penalty assessed inside the set-point tracking zone and by ecological driving techniques. Various tests on a semi-autonomous electric vehicle in terms of real-time


state regulation and constraint fulfilment were carried out. The effectiveness of the proposed method was demonstrated by the achieved field experimental results. Further practical experiments will be conducted, including extending the functionalities of semi-autonomous ecological driving.


Autonomous Driving and Driver Assistance Systems (II)

Comparative Study of Visual Odometry and SLAM Techniques

Ana Rita Gaspar, Alexandra Nunes, Andry Pinto, and Anibal Matos

INESC TEC, Porto, Portugal
{argaspar,apn,andry.m.pinto}@inesctec.pt, [email protected]

Abstract. The use of visual odometry and SLAM methods in autonomous vehicles has been growing. Optical sensors provide valuable information about the scenario that enhances the navigation of autonomous vehicles. Although several visual techniques are already available in the literature, their performance can be significantly affected by the scene captured by the optical sensor. In this context, this paper presents a comparative analysis of three monocular visual odometry methods and three stereo SLAM techniques. The advantages, particularities and performance of each technique are discussed, to provide information that is relevant for the development of new research and novel robotic applications.

Keywords: Evaluation · Odometry · SLAM · Vision

1 Introduction

The increased use of autonomous vehicles is related to the fact that their properties allow their application in diverse tasks that can be dangerous or repetitive for humans. To ensure that vehicles are fully involved in various applications, it is relevant to augment their capacity to navigate reliably and autonomously, even in unknown environments. In this context, a large effort is being made by researchers to explore the concepts of odometry and SLAM (Simultaneous Localization and Mapping) in order to support the activity of mobile robots in different scenarios. Odometry allows the estimation of the robot's position from a single reference, while SLAM localizes the robot and constructs a map of the environment. Therefore, SLAM techniques can use odometry-based methods to provide an estimation of motion. The biggest advantage of SLAM is related to the revisiting capability, which means that the technique reduces the positioning error along the navigation path once a revisited area is detected. Optical systems have the ability to provide information of high quality at a reasonable cost. Therefore, the development of visual odometry and SLAM approaches has been an active line of research, followed by a large number of institutions worldwide.


The appearance of several visual-based techniques has triggered a fundamental question: which technique is suitable for a specific application? Therefore, the major contribution of this paper is to provide a comparative study of some of the visual odometry and SLAM techniques that are currently available in the literature. Moreover, this paper discusses the performance of these methods for typical applications related to mapping and data fusion with other sensors. It is important to highlight this kind of study because it allows one to understand the main properties, advantages and disadvantages of each implementation, as well as its results in different environments and testing conditions. Therefore, it is possible to select the best and most convenient method for particular applications. This paper is organized as follows: Sect. 2 presents conventional methods for visual odometry and SLAM. The comparative study is presented in Sect. 3, where the results obtained by several techniques are evaluated in common testing scenarios. Finally, Sect. 4 presents the major conclusions of this paper.

2 Methods

As a prerequisite of the many tasks that involve robot motion, localization is the most crucial feature for an autonomous robot. In this sense, visual odometry estimates motion using only the input images from one or multiple cameras. The use of visual odometry presents advantages compared to the traditional method (wheel encoders), since it is more reliable in slipping events, namely on rugged terrain where drift errors can occur frequently. However, the analysis of egomotion from a sequence of images is complicated due to the presence of external objects moving in the scenario (which violates the motion coherence assumption [1]). It is also necessary to ensure that the acquisition rate is fast enough to avoid any aliasing phenomenon (and to increase the overlapping area between images). On the other hand, SLAM is a more sophisticated approach that constructs a local map according to the navigation of the robot in the environment and, within this map, provides the estimation of the robot's position. The implementation of visual approaches for both odometry and SLAM (often called vOdometry and vSLAM, respectively) usually resorts to feature-based analysis to increase the performance and, as a consequence, the frame rate of the output data from the visual system. With the aim of emulating human vision, it is possible to use stereo cameras to acquire 3D information from the environment. It should be highlighted that non-structured and dynamic environments usually impose severe challenges on visual-based techniques and, therefore, the detection of revisited areas is a key point for the navigation stack of mobile robots, since it decreases the positioning error. Considering the high number of implementations available in the literature, the current research selected the most promising ones by considering factors such as the performance expectations reported in scientific articles, the public availability of the methods and other particularities (robustness of features, internal assumptions and others). A comparative analysis is conducted by taking into consideration a set of three monocular odometry methods and three stereo-based SLAM methods.
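Several of the monocular implementations described below rely on a sparse feature front-end (e.g. FAST corners tracked with Kanade-Lucas-Tomasi in mono-vo). As a point of reference, the following is a minimal OpenCV sketch of such a front-end with RANSAC-based pose recovery; it is an illustrative pattern under common assumptions (a calibrated camera matrix K and consecutive grayscale frames), not the code of any of the evaluated implementations.

```python
import cv2
import numpy as np

def relative_pose(img_prev, img_curr, K):
    """FAST detection + KLT tracking + essential-matrix pose recovery
    (translation is only known up to scale, as in any monocular pipeline)."""
    fast = cv2.FastFeatureDetector_create(threshold=25)
    keypoints = fast.detect(img_prev, None)
    pts_prev = np.float32([k.pt for k in keypoints]).reshape(-1, 1, 2)

    # Kanade-Lucas-Tomasi tracking of the detected corners into the next frame
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(img_prev, img_curr, pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]

    # RANSAC-based essential matrix estimation acts as the outlier-removal step
    E, _ = cv2.findEssentialMat(good_curr, good_prev, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, good_curr, good_prev, K)
    return R, t  # relative rotation and unit-norm translation
```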


For the odometry evaluation, three algorithms were selected, namely mono-vo, viso2 and mORB-SLAM. The mono-vo [2] implementation was developed in 2015 and is based on OpenCV. It uses FAST for feature detection and the Kanade-Lucas-Tomasi tracker to search for correspondences in the next image. It incorporates a mechanism that searches for new features and an outlier removal mechanism. This implementation is designed to use scale information from an external source of data and, therefore, it is possible to correct the previous estimations. Moreover, mono-vo follows a heuristic that assumes forward motion as the dominant motion. The viso2 [3] implementation was developed in 2011 and calculates the camera position estimation using a set of rectified images. The method has a large number of configurable parameters, which increases its flexibility but makes it difficult to set up. Frequently, a motion estimation system cannot estimate a metric scale from monocular sequences. Thus, viso2 assumes that the camera moves at a fixed and known height from the ground (used to predict the scale factor). This method uses a bucketing technique to obtain a correct distribution of features in the images; however, it has a relevant limitation related to pure rotations, which degrade the estimation. The mORB-SLAM is the monocular implementation of the ORB-SLAM presented in [4]. This method uses keyframes and ORB as the feature extractor, detecting corners with FAST and describing them with BRIEF, to ensure the (soft) real-time capacity. A bundle adjustment is conducted with each new keyframe in order to remove erroneous estimations (and features) and to provide a better positioning.

For the SLAM evaluation, three algorithms were selected, namely RTAB-Map, S-PTAM and ORB-SLAM2. The RTAB-Map [5] system was developed in 2014 as a graph-based SLAM implementation with an incremental approach for loop closure detection. It is important to note that the calculation of the egomotion with its own odometry method, called "s_odom", presents limitations in situations comprising "Empty Space Environments" (when the features are at a distance from the camera larger than 4 m). This means that the performance is affected by image sequences presenting large egomotion (which reduces the feature matching in consecutive frames); however, it is possible to use other visual odometry methods, such as viso2, to overcome this limitation. The RTAB-Map was particularly developed for scenarios involving cars, based on two cameras with large focal distances. Although it is used in different robotic applications, the method only estimates a new position when 6DOF motion is detected between consecutive frames. The loop closure detection is performed online through a bag-of-words approach (with SURF descriptors). A graph optimization approach is used to correct the map during the robot navigation. Since mapping large-scale environments during long-term navigation is constrained by the computational power available onboard, RTAB-Map implements a memory management approach that considers only part of the map in order to fulfil the online processing requirements of today's applications. The S-PTAM [6] implementation was developed with the goal of obtaining a stereoscopic system able to support robot navigation by providing more reliable estimations.


It divides the SLAM approach into two tasks: tracking and map optimization. It uses the BRIEF descriptor with binary features to reduce the storage requirements and to speed up the feature correspondence, and the Shi-Tomasi algorithm imposes a good spatial distribution of the features. Like ORB-SLAM, S-PTAM uses a keyframe-based approach to estimate the motion. During the creation of the map, the method adjusts the nearby points by excluding those that are considered erroneous. This task runs as a maintenance process independent of the tracking, which is an advantage in terms of processing time. The ORB-SLAM2 [4] implementation was developed in 2015 and is able to use monocular, stereo and RGB-D cameras. It is a feature-based approach that uses the ORB extractor because of time constraints (important in real-time applications). The egomotion determination is therefore characterized by a reliable motion estimation, since it is invariant to viewpoint and illumination changes (FAST corners with BRIEF descriptors). In terms of motion estimation, it is a keyframe-based approach that avoids an excessive computational demand. It allows camera relocalization in real time when the tracking process is lost, by following a bag-of-words approach. It uses the Covisibility Graph concept to add new keyframes and, consequently, to obtain a better reconstruction of the environment. The Covisibility Graph also helps the loop closure detection, because this detection can be achieved by a similarity measure between the bag-of-words vector of the current frame and those of all neighbours in the Covisibility Graph. Finally, an Essential Graph provides effective real-time loop closure, since it maintains the words that represent a strong match (assuming a vocabulary constructed offline using the DBoW2 library [7]).
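To make the bag-of-words loop-closure test mentioned above concrete, the following is a minimal sketch of a similarity check between L1-normalized bag-of-words vectors; the scoring formula is the one commonly used for visual place recognition, while the threshold value and the omission of the covisibility restriction are illustrative simplifications, not the behaviour of any specific implementation.

```python
import numpy as np

def bow_similarity(v_a, v_b):
    """Similarity in [0, 1] between two bag-of-words vectors (higher = more similar)."""
    a = v_a / np.abs(v_a).sum()          # L1 normalization
    b = v_b / np.abs(v_b).sum()
    return 1.0 - 0.5 * np.abs(a - b).sum()

def loop_candidates(current, keyframe_vectors, threshold=0.75):
    """Indices of keyframes similar enough to be treated as loop-closure candidates."""
    return [i for i, v in enumerate(keyframe_vectors)
            if bow_similarity(current, v) >= threshold]
```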

3 Comparative Analysis

Two sets of experiments were conducted in this section. The first aims to provide evidence about the accuracy of the egomotion estimation considering monocular image sequences (odometry trials). The second aims to provide a comparative study of different vSLAM-based approaches without any constraint on the environment or navigation path. A simple but effective comparative analysis is performed by considering all the techniques introduced in the previous section. During the analysis, some public datasets were considered to allow the replicability of results by other scientific works (when proposing other visual-based methods). The performance of the methods is discussed by taking into consideration some metrics, namely the Central Processing Unit (CPU) utilization in percent, the processing time and the normalized Euclidean error between the ground-truth and estimated trajectories under the same conditions, for example data acquisition rate and image size. Regarding the results obtained in the monocular odometry, a normalization of the trajectories is conducted due to the unknown scale factor between different methods. Moreover, the image sets that were chosen represent paths that do not contain revisited areas, since the analysis is focused on the accuracy of the egomotion estimation. Following a similar methodology, the vSLAM techniques discussed in the previous section are evaluated by taking into account the same metrics used in the odometry evaluation.


In this case, the detection of loop closure is also contemplated, since it is a key feature of all SLAM techniques. A quantitative analysis can be obtained by measuring the accuracy of each SLAM technique at two checkpoints (since not all techniques provide estimates for all frames), represented by "error point1; error point2" in Tables 4, 5 and 6. A qualitative discussion is made using graph representations of the trajectories. A very relevant phenomenon that is usually ignored in the literature is aliasing. This research discusses this phenomenon during the trials, which means that the influence of the processing time on the overall accuracy of each method is investigated. Three test scenarios were selected to evaluate the methods, representing indoor and outdoor environments. These environments give a clear image of the behaviour that is to be expected from each technique. The three public datasets are KITTI¹, MIT Stata Center² and New College³. The KITTI dataset is composed of 22 stereo image sequences with different trajectories obtained in urban and freeway environments, see Fig. 1(a). The height of the camera in relation to the ground and the absence of oscillation have been taken into account. The camera calibration parameters are available, as well as the ground-truth of the trajectory made during the acquisition of the high-resolution image sequences. The MIT Stata Center is an indoor dataset obtained from a robot, see Fig. 1(b). This dataset was made to support the development of visual SLAM algorithms and, therefore, the trajectories are longer and present various direction changes. The New College dataset provides data that was acquired in gardens, see Fig. 1(c). All data is synchronized: images, laser, GPS and IMU information and odometry data (ground-truth) are available.
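The normalized Euclidean trajectory error used in the odometry comparison can be sketched as follows; this is a minimal NumPy illustration assuming time-aligned trajectories and a simple normalization by each trajectory's own extent, so the exact normalization applied in the paper may differ in detail.

```python
import numpy as np

def normalized_trajectory_error(est, gt):
    """Per-pose Euclidean error between scale-normalized trajectories.

    est, gt: (N, 2) arrays of x-y positions (estimated and ground-truth),
    assumed to be time-aligned; returns (maximum, average, std) of the error.
    """
    # Rescale each trajectory by its own extent so the unknown monocular
    # scale factor of each method does not dominate the comparison.
    est_n = (est - est[0]) / np.linalg.norm(est - est[0], axis=1).max()
    gt_n = (gt - gt[0]) / np.linalg.norm(gt - gt[0], axis=1).max()
    err = np.linalg.norm(est_n - gt_n, axis=1)
    return err.max(), err.mean(), err.std()
```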

Fig. 1. Illustrative example of the used datasets: (a) KITTI (b) MIT Stata Center (c) New College

Table 1 presents a summary of the testing conditions and scenarios evaluated for each technique.

¹ Dataset available at http://www.cvlibs.net/datasets/kitti/eval_odometry.php.
² Dataset available at http://projects.csail.mit.edu/stata/downloads.php.
³ Dataset available at http://www.robots.ox.ac.uk/NewCollegeData/.


Table 1. Test conditions for the evaluation of the methods

3.1 Results

Odometry

KITTI – Sequence 07. As visible in Fig. 2, the mono-vo implementation follows the movement of the camera most of the time. However, the incorrect detection of a direction change caused a small error between the real and estimated trajectories.

Fig. 2. Normalized trajectories obtained by KITTI dataset (07)

On the other hand, the mORB-SLAM implementation estimates the camera position with a lower error, since it captures all direction changes with satisfactory accuracy. It can be noticed in Table 2 that the mORB-SLAM presents the best egomotion estimation, but it takes a longer processing time (on average)⁴. In the majority of cases, the mono-vo implementation captures the egomotion during a large part of the trajectory made by the observer; however, its largest maximum error was caused by a wrong detection of one direction change.

⁴ KFr represents the number of keyframes used for the egomotion estimation.


The viso2 implementation presents a higher error, because of the deviations in the first positions.

Table 2. Comparison of the normalized trajectories obtained on the KITTI dataset (07)

Method    | Processed frames | Max. error | Avg. error | Std  | Time  | CPU
Mono-vo   | 1000             | 0.64       | 0.13       | 0.17 | 85 s  | >35% (max. 40%)
Viso2     | 999              | 0.62       | 0.32       | 0.18 | 74 s  | >12% (max. 15%)
mORB-SLAM | 996 (KFr = 374)  | 0.36       | 0.10       | 0.10 | 109 s | >18% (max. 20%)

New College. The mORB-SLAM and mono-vo implementations try to follow the circular motion of the observer, see Fig. 3.

Fig. 3. Normalized trajectories obtained by New College dataset

The viso2 provides an incorrect estimation, which can be justified by some oscillations of the camera during the motion of the observer, as well as by relative changes of depth, according to the literature. Therefore, the performance of viso2 is severely affected by these conditions (which reduces its robustness and reliability). From Table 3, it is visible that, although all implementations present errors, the mORB-SLAM is the best one. However, neither this implementation nor mono-vo reaches the intended final position. In terms of processing time, the mono-vo implementation presents better results, even using more frames, but with a slightly higher CPU usage.


Table 3. Comparison of the normalized trajectories obtained on the New College dataset

Method    | Processed frames | Max. error | Avg. error | Std  | Time  | CPU
Mono-vo   | 2500             | 1.22       | 0.61       | 0.26 | 127 s | >28% (max. 34%)
Viso2     | 1765             | 1.11       | 0.85       | 0.19 | 124 s | >13% (max. 18%)
mORB-SLAM | 1657 (KFr = 293) | 0.69       | 0.37       | 0.16 | 273 s | >19% (max. 24%)

SLAM

KITTI – Sequence 05. According to Fig. 4, it is possible to observe that the ORB-SLAM2 implementation is the only one that estimates all camera positions, detects the loops and adjusts its trajectory. It should be noticed that, as expected, the RTAB-Map system (with viso2 providing the egomotion estimation) tries to replicate the motion made by the observer; however, there are deviations. These deviations could be explained by the susceptibility of the method to situations with inclination changes or even camera rotations.

Fig. 4. Trajectories obtained by KITTI dataset (05)

Taking into account the results presented in Table 4, it is safe to say that ORB-SLAM2 leads to lower errors between the estimated trajectory and the ground-truth. In terms of CPU utilization and processing time, the ORB-SLAM2 implementation presents higher values than the S-PTAM implementation.

KITTI – Sequence 09. Figure 5 shows that ORB-SLAM2 has some drift during the estimation of the trajectory in this sequence. Moreover, the technique does not detect any revisited area and, as a consequence, the trajectory was not adjusted by the loop closure detection mechanism. In relation to the other implementations, it is not possible to draw conclusions about the effectiveness of the loop closure, since the estimated final position was not close enough to the initial position (circular path).


Table 4. Comparison of the trajectories obtained on the KITTI dataset (05)

                 | ORB-SLAM2      | RTAB-Map (viso2) | RTAB-Map (s_odom) | S-PTAM
Processed frames | 2761           | 2745             | 1224              | 1916
Error (maximum)  | 1.64 m         | 420.91 m         | —                 | 123.29 m
Error (average)  | 0.55 m; 1.05 m | 30.84 m; 17.73 m | —                 | 33.05 m; 8.41 m
Processing time  | 5 min 32 s     | 4 min 57 s       | 2 min 19 s        | 4 min 43 s
CPU              | 47%            | 57%              | 28%               | 54%

Fig. 5. Trajectories obtained by KITTI dataset (09)

One relevant issue was the number of frames that were not considered by the SLAM techniques, which led to an aliasing situation in which some motion components were not captured by these visual techniques. Although the other implementations try to follow the camera motion, they get lost along the trajectory. Table 5 demonstrates that ORB-SLAM2 is able to characterize the observer motion with better accuracy and reliability (no aliasing phenomenon, because the entire image sequence was processed). Its CPU utilization and processing time have higher values, but it uses all frames for the trajectory estimation. On the other hand, the RTAB-Map (viso2) does not estimate a realistic path, making it difficult to provide a quantitative analysis of the performance of this method.

MIT Stata Center. Figure 6 shows that RTAB-Map, with the odometry incorporated directly in the implementation, is not able to provide data (lack of inliers). It is important to emphasize that some feature extraction parameters were modified, but without changes in the obtained results. This can be explained by the limitation of this method in indoor environments; in particular, the presence of large homogeneous spaces (halls) represents a challenging problem for this method.


Table 5. Comparison of the trajectories obtained on the KITTI dataset (09)

                 | ORB-SLAM2        | RTAB-Map (viso2) | RTAB-Map (s_odom) | S-PTAM
Processed frames | 1591             | 1557             | 648               | 884
Error (maximum)  | 28.87 m          | 145.14 m         | —                 | 115.06 m
Error (average)  | 11.46 m; 37.62 m | —; 207.65 m      | —                 | 60.61 m; 103.25 m
Processing time  | 3 min 4 s        | 2 min 49 s       | 1 min 20 s        | 2 min 40 s
CPU              | 36%              | 55%              | 30%               | 80%

Fig. 6. Trajectories obtained by MIT Stata Center dataset

The S-PTAM implementation estimates the trajectory correctly, since the loops are correctly detected and, therefore, the method refines the trajectory taken by the observer. In fact, this technique was able to estimate the entire path with a realistic scale. The RTAB-Map implementation (viso2) tries to follow the direction changes that occurred along the path. The result of RTAB-Map (viso2) shows that the path suffered from a wrong estimation at the beginning of the sequence (y-axis), which caused a total deviation. This can be explained by the existence of a large and fast rotation at the start of the trajectory, which clearly demonstrates that this method is quite susceptible to errors in these situations. All methods demonstrated similar CPU usage (see Table 6); however, ORB-SLAM2 had the best performance in this metric. Despite the best CPU usage, ORB-SLAM2 did not have the best accuracy, since the localization was lost for a while (during a transition to a darker area) and the method was not able to detect the loop closure. Thus, it is possible to conclude that the first direction change was wrongly calculated and, consequently, it is quite difficult to characterize the error. The RTAB-Map (viso2) system was also unable to determine the loop closure, because it discards many frames, possibly due to the lack of 6DOF motion between consecutive frames.


Table 6. Comparison of the trajectories obtained on the MIT Stata Center dataset

                 | ORB-SLAM2  | RTAB-Map (viso2) | S-PTAM
Processed frames | 2312       | 2730             | 2962
Error (maximum)  | 38.92 m    | 26.95 m          | 2.57 m
Error (average)  | 6.30 m; —  | 3.29 m; 14.67 m  | 8.87 m; 0.34 m
Processing time  | 4 min 47 s | 4 min 23 s       | 4 min 10 s
CPU              | 33%        | 53%              | 68%

4 Conclusion

This article studies several visual-based techniques. It presents a comparative analysis of odometry and SLAM approaches in realistic indoor and outdoor scenarios. Moreover, a quantitative and qualitative discussion is presented by taking into account several metrics, such as the graphic representation of the navigation path, the processing time and the aliasing phenomenon (considering real-time constraints). This phenomenon causes position errors along the trajectory, since some frames are lost between consecutive samples. The results showed that mono-vo follows the motion relatively well in the majority of the trajectories, but any detected motion provides a new position even when the camera does not move, which causes an increase in the error. The viso2 presents a good motion estimation whenever 6DOF motion is detected; however, it provides erroneous estimations when the camera presents oscillations or changes its height in relation to the ground. The mORB-SLAM also generates good results in most cases, with errors lower than the others (by approximately 45%). Thus, the mORB-SLAM and viso2 are the most complete, with the principal difference being the requirement by the mORB-SLAM estimator of a vocabulary constructed a priori. In the case of the SLAM implementations, the ORB-SLAM2 presents, in the majority of the cases, a good motion estimation and provides estimations with lower errors (a decrease of more than 80%). It is important to reinforce that the RTAB-Map with its own odometry method is difficult to parametrize and highly dependent on the environment, as in the case of the KITTI dataset, which can be considered an "Empty Space Environment". The S-PTAM is suitable only for the MIT Stata Center dataset and, in this case, it was the only one that provided correct results. This can be explained by the longer time between input images in relation to the KITTI dataset. Thus, it is notable that the S-PTAM and ORB-SLAM2 are the most adequate. These implementations are differentiated by the fact that S-PTAM does not include an approach for loop closure detection, only bundle adjustment, which does not provide results as good for known revisited areas.


As future work, the authors will collect novel datasets (and incorporate them in the discussion), including new environments, to support the scientific community, and intend to reinforce the study with other methods.

Acknowledgements. This work is financed by the ERDF - European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme within project POCI-01-0145-FEDER-006961, and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) as part of project UID/EEA/50014/2013.

References

1. Pinto, A.M., Moreira, A.P., Correia, M.V., Costa, P.: A flow-based motion perception technique for an autonomous robot system. J. Intell. Robot. Syst. 75(3), 475–492 (2014). doi:10.1007/s10846-013-9999-z
2. Singh, A.: An OpenCV based implementation of Monocular Visual Odometry. Technical report, Indian Institute of Technology Kanpur, Kanpur (2015)
3. Kitt, B., Geiger, A., Lategahn, H.: Visual odometry based on stereo image sequences with RANSAC-based outlier rejection scheme. In: IEEE Intelligent Vehicles Symposium, University of California, San Diego, CA, USA, pp. 486–492 (2010)
4. Mur-Artal, R., Montiel, J.M.M., Tardós, J.D.: ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Trans. Robot. 31(5), 1147–1163 (2015). doi:10.1109/TRO.2015.2463671
5. Labbé, M., Michaud, F.: Online global loop closure detection for large-scale multi-session graph-based SLAM. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (2014). doi:10.1109/IROS.2014.6942926
6. Pire, T., Fischer, T., Civera, J., Cristóforis, P., Berlles, J.J.: Stereo parallel tracking and mapping for robot localization. In: Intelligent Robots and Systems, pp. 1373–1378 (2015). doi:10.1109/IROS.2015.7353546
7. Gálvez-López, D., Tardós, J.D.: Bags of binary words for fast place recognition in image sequences. Intell. Robots Syst. 28(5), 1188–1197 (2012). doi:10.1109/IROS.2012.2197158

Real-Time Deep ConvNet-Based Vehicle Detection Using 3D-LIDAR Reflection Intensity Data

Alireza Asvadi, Luis Garrote, Cristiano Premebida, Paulo Peixoto, and Urbano J. Nunes

Department of Electrical and Computer Engineering, Institute of Systems and Robotics, University of Coimbra, Coimbra, Portugal
{asvadi,garrote,cpremebida,peixoto,urbano}@isr.uc.pt

Abstract. This paper addresses the problem of vehicle detection using a little explored LIDAR modality: the reflection intensity. LIDAR reflection measures the ratio of the beam received back from a surface to the beam sent to it, which depends on the distance, the material, and the angle between the surface normal and the ray. Considering a 3D-LIDAR mounted on board a robotic vehicle and calibrated with respect to a monocular camera, a Dense Reflection Map (DRM) is generated from the projected sparse LIDAR reflectance intensity and inputted to a Deep Convolutional Neural Network (ConvNet) object detection framework for vehicle detection. The performance on the KITTI dataset is superior to some of the approaches that use the LIDAR range value, and hence it demonstrates the usability of LIDAR reflection for vehicle detection.

Keywords: Vehicle detection · 3D-LIDAR reflection · Deep learning

1 Introduction and Motivation

Vehicle detection is one of the key tasks in artificial sensor-based perception systems for intelligent robotic vehicles and intelligent transportation systems. Robust and reliable vehicle detection finds a variety of practical applications, including collision warning systems, collision avoidance systems, autonomous cruise control, advanced driver assistance systems (ADAS), and autonomous driving systems. Autonomous vehicles use different types of sensors (e.g., camera, LIDAR and RADAR) to have a redundant and robust perception system. Color cameras, the most used type of sensor, suffer from illumination variations, the lack of direct object distance estimation, and the inability to see at night, which restricts the reliability of safe driving. LIDAR and RADAR measure distance by emitting and receiving electromagnetic waves. RADAR is able to work efficiently in extreme weather conditions but suffers from a narrow field of view and low resolution, which limits its application in object detection and recognition tasks. In comparison with RADAR, a 3D-LIDAR has a precise range measurement and a full 360° field of view.


Motivated by the reduction in their cost and the increase in resolution and range, 3D-LIDARs are becoming a reliable solution for scene understanding. In this paper, we propose a sensory perception solution for vehicle detection (herein called RefCN, which stands for 'Reflectance ConvNet') using the less explored 3D-LIDAR reflection and a Deep ConvNet-based detection framework (YOLOv2 [1]). The approach introduced here depends on a front-view Dense Reflection Map (DRM) that is constructed from sparse 3D-LIDAR reflectance data. DRMs run through the trained DRM-based YOLOv2 pipeline to achieve vehicle detection. The paper is organized as follows. Related work is presented in Sect. 2. The proposed approach is described in Sect. 3. Experimental results are discussed in Sect. 4, and Sect. 5 brings concluding remarks and future work.

2 Related Work

This section reviews research works that relate to LIDAR reflection and surveys previous work in LIDAR-based vehicle detection.

2.1 Perception Using 3D-LIDAR Reflection

To our knowledge, only a few works exist that use LIDAR reflection for artificial perception tasks. Tatoglu and Pochiraju [2] use LIDAR reflection for point cloud segmentation based on the diffuse and specular reflection behaviors. Hernandez et al. [3] detect traffic lanes on roads by exploiting the LIDAR reflection of lane markings. Caltagirone et al. [4] use top-view LIDAR elevation and reflection data for road detection.

2.2 3D-LIDAR-based Vehicle Detection

This subsection gives a concise overview of vehicle detection approaches using 3D-LIDARs in the IV and ITS domains. Behley et al. [5] perform object detection with a hierarchical segmentation of the 3D-LIDAR point cloud (PCD) followed by a bag-of-words (BoW) classification. Asvadi et al. [6] use PCD clustering for object proposal generation and a depth-map-based ConvNet classifier for vehicle detection. Wang and Posner [7] discretize LIDAR points and reflectance values into a voxel grid; a 3D sliding window with a linear SVM classifier is used for obtaining detection scores. Li et al. [8] use a 2D Fully-Convolutional Network (FCN) on a 2D point map (top-view projection of 3D-LIDAR range data) and train it end-to-end to build a vehicle detection system. Li [9] extends it to a 3D Fully-Convolutional Network (FCN) to detect and localize objects as 3D boxes from LIDAR point cloud data. Gonzalez et al. [10] use images and LIDAR-based depth maps with HOG and Local Binary Patterns (LBP) features; they split the training set into different views to take into account the different poses of objects and train a Random Forest (RF) of local experts for each view for object detection. Chen et al. [11] use LIDAR range data and images. A top-view representation of the point cloud is used for 3D proposal generation; the 3D proposals are projected onto the color image and the LIDAR top and front views, and a ConvNet-based fusion network is used for 3D object detection.


Oh et al. [12] use segmentation-based object proposal generation from LIDAR depth maps and color images. They use decision-level fusion to combine the detections from two independent ConvNet-based classifiers on the depth map and the color image. Table 1 provides a review of vehicle detection approaches using 3D-LIDAR data.

Table 1. Related work on 3D-LIDAR-based vehicle detection. Col., Ran. and Ref. are abbreviations for Color, Range and Reflectance modalities, respectively.

Reference            | Modality           | Representation    | Detection technique
Behley et al. [5]    | Ran.               | Full 3D: PCD      | Hierarchical Seg. + BoW
Asvadi et al. [6]    | Ran.               | PCD + Front view  | Clu. based Prop. + ConvNet
Wang and Posner [7]  | Ran. + Ref.        | Full 3D: Voxel    | Sliding-window + SVM
Li et al. [8]        | Ran.               | Top view          | 2D-FCN
Li [9]               | Ran.               | Full 3D: PCD      | 3D-FCN
Gonzalez et al. [10] | Col. + Ran.        | Front view        | Sliding-window + RF
Chen et al. [11]     | Col. + Ran. + Ref. | Front + Top views | Top view 3D Prop. + ConvNet
Oh et al. [12]       | Col. + Ran.        | Front view        | Seg. based Prop. + ConvNet

While most of the 3D-LIDAR-based vehicle detection systems are built on range data, in this paper a less exploited modality, the 3D-LIDAR reflection intensity data, is used for vehicle detection. A front-view Dense Reflection Map (DRM) is generated and used with YOLOv2, and it is shown that the DRM can be useful for vehicle detection.

3 RefCN: Vehicle Detection Using 3D-LIDAR Reflection and the YOLOv2 Object Detection Framework

The architecture of the proposed RefCN vehicle detection system is shown in Fig. 1. The 3D-LIDAR point cloud is projected to the camera coordinate system and a Sparse Reflectance Map (SRM) is generated. The SRM is up-sampled to obtain a Dense Reflectance Map (DRM). The DRM is inputted to the trained DRM-YOLO to detect vehicles. The RefCN has two steps: (1) DRM generation and (2) the DRM-based YOLOv2 object detection framework.


Fig. 1. The pipeline of the RefCN: vehicle detection using reflection intensity data from a LIDAR and the Deep ConvNet based method (YOLOv2).

3.1 Delaunay Triangulation-Based DRM Generation

The DRM is generated by projecting the point cloud onto the image plane, followed by triangulation and interpolation, as described in the sequel.

3D-LIDAR Image Projection. The 4D LIDAR data, a 3D point cloud with reflection intensity P = {X, Y, Z, I}, is filtered to the camera's field of view and projected onto the 2D image plane using the projection matrices:

P* = P_C2I × R_0 × P_L2C × P    (1)

where P_C2I is the projection matrix from the camera coordinate system to the image plane, R_0 is the rectification matrix, and P_L2C is the LIDAR-to-camera coordinate system projection matrix. Considering P* = {X*, Y*, Z*, I*}, using the row and column pixel values {X*, Y*} together with the reflectance data I*, a compact Sparse Reflectance Map (SRM) is computed, which has a lower density than the image resolution.

Delaunay Triangulation (DT). The Delaunay Triangulation (DT) is used for mesh generation from the row and column values {X*, Y*} of the projected 3D-LIDAR points P*. The DT produces a set of isolated triangles Δ = {δ_1, ..., δ_n}, each triangle δ composed of three vertices n_k, k ∈ {1, 2, 3}, useful for building the interpolating function f(·) used to interpolate the reflectance values I*.

Interpolation of the Sparse Reflectance Map (SRM). The unsampled (missing) intensity value I of a pixel P that lies within a triangle δ is estimated by interpolating the reflectance values of the surrounding triangle vertices n_k, k ∈ {1, 2, 3}, using Nearest Neighbor interpolation (i.e., selecting the value of the closest vertex), ending up with a DRM:

I = f(argmin_{n_k} ||P − n_k||),  k ∈ {1, 2, 3}    (2)

For more details on the interpolation of scattered points, please refer to [13]. Premebida et al. [14] and Ashraf et al. [15] investigate the quality of up-sampled dense map representations generated from LIDAR data using different interpolation techniques.
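A minimal sketch of the SRM-to-DRM step is given below, using SciPy's nearest-neighbour scattered-data interpolation over the projected points; the matrix names follow Eq. (1), but the helper function, its parameters and the assumed matrix shapes are illustrative, not the authors' MATLAB or C++ implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def build_drm(points, reflectance, P_C2I, R_0, P_L2C, img_shape):
    """Project LIDAR points onto the image plane and densify the reflectance.

    points: (N, 3) LIDAR XYZ, reflectance: (N,) intensities,
    P_C2I (3x4), R_0 (4x4), P_L2C (4x4): calibration matrices as in Eq. (1),
    img_shape: (height, width) of the camera image.
    """
    # Homogeneous coordinates and projection onto the image plane (Eq. 1)
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    proj = (P_C2I @ R_0 @ P_L2C @ pts_h.T).T
    uv = proj[:, :2] / proj[:, 2:3]                     # pixel coordinates (u, v)

    # Keep only points in front of the camera and inside the image: the SRM
    h, w = img_shape
    keep = (proj[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
           & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv, refl = uv[keep], reflectance[keep]

    # Nearest-neighbour interpolation of the scattered reflectance values (Eq. 2)
    gu, gv = np.meshgrid(np.arange(w), np.arange(h))
    return griddata(uv, refl, (gu, gv), method='nearest')  # dense (h, w) map
```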

3.2 DRM-based YOLOv2 Object Detection Framework

You Only Look Once (YOLO) [16] models object detection as a regression problem. The most recent version of YOLO, denoted YOLOv2 [1], is used in this paper. In the proposed RefCN framework, the DRM is divided into 13 × 13 grid regions, where each grid cell is responsible for predicting five object BB centers with their associated confidence scores. A convolutional network runs once on the DRM to predict the object BBs. The network is composed of 19 convolutional layers and 5 max-pooling layers. YOLO looks at the whole DRM during training and test time; therefore, in addition to vehicle appearance, its predictions are informed by contextual information in the DRM. The constructed DRMs are run through the trained DRM-based YOLOv2 pipeline to achieve vehicle detection. Figure 2 illustrates the different steps of the RefCN.
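The grid-and-anchor parameterization described above can be illustrated with a schematic decode of a single cell's raw predictions; the array layout, anchor values and the absence of score thresholding and non-maximum suppression are simplifications for illustration, not the Darknet implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_cell(raw, row, col, anchors, grid=13):
    """Decode one grid cell's raw YOLOv2-style outputs into relative boxes.

    raw: (num_anchors, 5 + num_classes) array holding (tx, ty, tw, th, to,
    class logits...) per anchor; anchors are (w, h) pairs in grid-cell units.
    Returns boxes as (cx, cy, w, h, confidence) with coordinates in [0, 1].
    """
    boxes = []
    for (tx, ty, tw, th, to, *cls), (aw, ah) in zip(raw, anchors):
        cx = (col + sigmoid(tx)) / grid              # box centre as image fraction
        cy = (row + sigmoid(ty)) / grid
        w = aw * np.exp(tw) / grid                   # size scaled by the anchor prior
        h = ah * np.exp(th) / grid
        probs = np.exp(cls) / np.sum(np.exp(cls))    # softmax over the classes
        boxes.append((cx, cy, w, h, sigmoid(to) * probs.max()))
    return boxes
```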


Fig. 2. Illustration of the RefCN process. Top to bottom: a color image with superimposed projected LIDAR points. The SRM with color-coded reflectance values. The generated 2D triangulation. The zoomed area within the red box of the above image. The fifth image represents the constructed DRM. The last image shows the vehicle detection result with confidence scores.


4 Experimental Results and Analysis

For the RefCN evaluation, quantitative and qualitative experiments using the KITTI dataset [17] were performed.

4.1 KITTI Object Detection Dataset

The KITTI object detection ‘training dataset’ (containing 7,481 frames) was partitioned into two subsets: 80% as the training set (5,985 frames) and 20% as the validation set (1,496 frames). The ‘Car’ label was considered for the evaluation.

4.2 Evaluation Metrics

Following KITTI's assessment methodology, the PASCAL VOC intersection-over-union (IOU) metric on three difficulty levels was used as the evaluation criterion, with an overlap of 70% for car detection. The overlap rate in 2D is given by

IOU = area(2D-BB ∩ 2D-BB_g) / area(2D-BB ∪ 2D-BB_g)    (3)

The precision-recall curve and the Average Precision (AP), which corresponds to the area under the precision-recall curve, were computed and reported over the easy, moderate and hard data categories to measure the detection performance.
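A minimal sketch of the overlap criterion in Eq. (3) is given below; the function name and the example boxes are illustrative.

```python
def iou_2d(box_a, box_b):
    """PASCAL VOC style intersection-over-union for axis-aligned (x1, y1, x2, y2) boxes.

    A detection counts as correct for the 'Car' class when IoU >= 0.7.
    """
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Example: a detection overlapping a ground-truth car box
print(iou_2d((100, 100, 200, 180), (110, 105, 210, 185)))  # about 0.73
```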

4.3 Experimental Setup and Computational Analysis

The experiments were run on a computer with a hexa-core 3.5 GHz processor, powered with a GTX 1080 GPU and 64 GB RAM under Linux. The YOLOv2¹ [1] 416 × 416 detection framework, written in C, was used in the RefCN implementation.

YOLOv2 Training with DRMs. For training, the convolutional weights of the ConvNet pre-trained on ImageNet were used as initial weights. The DRM-YOLOv2 was trained on DRMs generated from the KITTI training set, to adapt it to DRM-based vehicle detection. The DRM-YOLOv2 is fine-tuned for 80,200 iterations using stochastic gradient descent with a learning rate of 0.001, a batch size of 64, a weight decay of 0.0005 and a momentum of 0.9.

Computational Complexity and Run-Time. Two versions of the DRM generation were implemented: a version using the MATLAB scatteredInterpolant function and a much faster re-implementation in C++. The DRM generation in MATLAB takes about 1.4 s, while in C++ it takes 34 ms. The implementation details and the computational load of the DRM generation and YOLO detection steps are reported in Table 2. The overall time for processing each frame using the C++ implementation is 49 ms (more than 20 frames per second). Considering that the KITTI dataset was captured using a 10 Hz spinning Velodyne HDL-64E, it can be concluded that the RefCN can be performed in real time.

¹ https://pjreddie.com/darknet/yolo/


Table 2. The RefCN processing time (in milliseconds).

Impl. details  | Proc. time | Environment
DRM generation | 34         | C++
YOLO detection | 15         | C

4.4 Quantitative Results

Quantitative experiments were conducted to assess the performance of the RefCN: (i) Sparse Reflectance Map (SRM) versus DRM; (ii) comparison of DRMs with different interpolation methods; (iii) DRM versus color and range data modalities; and (iv) RefCN versus state-of-the-art methods.

Sparse Reflectance Map (SRM) vs DRM. The RefCN was trained on the training set and evaluated on the validation set. As can be seen from Fig. 3 and Table 3, the results show that the DRM (with Nearest Neighbor interpolation) considerably improves the detection performance.

Table 3. Detection accuracy with SRM vs DRM on the validation set.

Input data | Easy    | Moderate | Hard
SRM        | 23.45 % | 17.57 %  | 15.57 %
DRM        | 67.69 % | 51.91 %  | 44.98 %

DRM with Nearest Neighbor, Linear and Natural Interpolations. The result of the previous experiment shows that the use of a dense up-sampled representation considerably improves the detection rate. A question that can be raised is which interpolation method gives the best performance. In this experiment, we evaluated three interpolation methods: Nearest Neighbor (DRM_nearest), Natural Neighbor (DRM_natural) and Linear (DRM_linear). DRM_natural is based on a Voronoi tessellation of the projected LIDAR points, which results in a continuous surface except at the projected points. DRM_linear is based on linear interpolation between sets of three points (of the projected LIDAR points) over the surfaces of the DT. Figure 4 shows an example of the generated DRM_linear and DRM_natural. The detection performance for each interpolation method is reported in Fig. 3 and Table 4. The best performance was attained, for all categories, with DRM_nearest.

Table 4. Detection accuracy using the DRM with different interpolation methods on the validation set.

Input data        | Easy    | Moderate | Hard
DRM (DRM_nearest) | 67.69 % | 51.91 %  | 44.98 %
DRM_linear        | 60.60 % | 45.71 %  | 40.79 %
DRM_natural       | 65.25 % | 50.07 %  | 44.76 %


Fig. 3. Precision-Recall on KITTI Validation-set (Car Class). Left: SRM (dashed lines) versus DRM (solid lines). Right: DRMnearest (solid lines), DRMnatural (dashed lines) and DRMlinear (dotted lines).

Fig. 4. Top to bottom: an example of the generated DRMlinear and DRMnatural , respectively. Please refer to Fig. 2 for the illustration of the corresponding color image and the generated DRMnearest .

Fig. 5. An example of the generated dense depth map (please refer to Fig. 2 for the illustration of the corresponding color image and DRM).


Table 5. Detection accuracy with color vs depth and DRM input on the validation set.

Input data    | Easy    | Moderate | Hard
Color         | 73.54 % | 61.41 %  | 53.71 %
Depth         | 66.86 % | 54.13 %  | 45.94 %
Reflec. (DRM) | 67.69 % | 51.91 %  | 44.98 %


Fig. 6. Left: Precision-Recall on the KITTI validation set (Car class) using Color (dotted lines), Depth (dashed lines) and DRM reflectance (solid lines). Right: Precision-Recall on the KITTI test set (Car class).

Table 6. RefCN evaluation on the KITTI test set.

Approach         | Moderate | Easy   | Hard   | Runtime (s) | Environment
MV3D (LID.) [11] | 79.24%   | 87.00% | 78.16% | 0.24        | [email protected] Ghz (Python+C/C++)
VeloFCN [8]      | 53.59%   | 71.06% | 46.92% | 1           | [email protected] Ghz (Python+C/C++)
Vote3D [7]       | 47.99%   | 56.80% | 42.57% | 0.5         | [email protected] Ghz (C/C++)
Proposed RefCN   | 35.72%   | 50.28% | 29.86% | 0.05        | [email protected] Ghz (C/C++)
CSoR [18]        | 26.13%   | 34.79% | 22.69% | 3.5         | [email protected] Ghz (Python+C/C++)
mBoW [5]         | 23.76%   | 36.02% | 18.44% | 10          | [email protected] Ghz (C/C++)
DepthCN [6]      | 23.21%   | 37.59% | 18.01% | 2.3         | [email protected] Ghz (MATLAB)

Reflectance vs Depth vs Color Data. In this experiment, we compared the reflectance, depth and color data modalities. DT with Nearest Neighbor interpolation and a range-inverse encoding were used for the dense depth map generation (see Fig. 5). To have a fair comparison among modalities, the images (DRM, dense depth map and color image) were converted to JPEG files with 75% compression quality.


Fig. 7. Example screenshots of RefCN results. Detection results are shown, as green BBs in the color-images (top) and DRMs (bottom) compared to the ground-truth (dashed-magenta). Notice that the depicted color-images are shown only for visualization purpose.


The average file size for the DRM, dense up-sampled depth map and color modalities is approximately 44 KB, 28 KB and 82 KB, respectively. As can be seen in Fig. 6 and Table 5, the results show that the DRM has a better performance than the depth modality in the Easy class.

Comparison with the State-of-the-Art. To compare with the state-of-the-art, the RefCN was trained on the full KITTI object detection ‘training dataset’ and evaluated on the test set. The results are reported in Fig. 6 and Table 6. Although the RefCN uses only reflection data, it surpasses some of the approaches that use the LIDAR range data, which provides empirical evidence of the usability of LIDAR reflection intensity data in vehicle detection systems.

Fig. 8. The effect of retroreflectivity of car license plates and traffic signs in the DRM and the corresponding color-image. The green BBs show a car license plate and traffic signs.

4.5 Qualitative Results

Figure 7 shows some representative qualitative results with many cars in the scene. A video result is available at https://youtu.be/1JJHihvp7NE. As can be seen, in most cases the RefCN correctly detects the target vehicles.

5 Conclusions, Key Findings and Future Work

To fulfill the requirements of a complementary and multimodal perception system, we presented a methodology for using 3D-LIDAR reflection intensity data for vehicle detection, and we provided empirical evidence of the benefits of the system through quantitative and qualitative experiments. In this paper, we observed that the dense up-sampled representation (DRM) considerably improves the detection rate (in comparison with the SRM). We also observed that the DRM has a better performance than the depth modality in the 'Easy' class.


The generated DRM can be used in multi-modal data fusion systems for object detection and tracking. A direction for future work is to explore the DRM for car license plate and traffic sign detection (see Fig. 8).

Acknowledgments. This work has been supported by "AUTOCITS - Regulation Study for Interoperability in the Adoption of Autonomous Driving in European Urban Nodes" - Action number 2015-EU-TM-0243-S, co-financed by the European Union (INEA-CEF); and FEDER through COMPETE 2020, Portugal 2020 program under grant UID/EEA/00048/2013.

References

1. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. In: IEEE CVPR (2017)
2. Tatoglu, A., Pochiraju, K.: Point cloud segmentation with LIDAR reflection intensity behavior. In: IEEE ICRA (2012)
3. Hernandez, D.C., Hoang, V.D., Jo, K.-H.: Lane surface identification based on reflectance using laser range finder. In: IEEE/SICE SII (2014)
4. Caltagirone, L., Samuel, S., Svensson, L., Wahde, M.: Fast LIDAR-based road detection using fully convolutional neural networks. In: IEEE IV (2017)
5. Behley, J., Steinhage, V., Cremers, A.B.: Laser-based segment classification using a mixture of bag-of-words. In: IEEE/RSJ IROS (2013)
6. Asvadi, A., Garrote, L., Premebida, C., Peixoto, P., Nunes, U.J.: DepthCN: vehicle detection using 3D-LIDAR and ConvNet. In: IEEE ITSC (2017)
7. Wang, D.Z., Posner, I.: Voting for voting in online point cloud object detection. In: Robotics: Science and Systems (2015)
8. Li, B., Zhang, T., Xia, T.: Vehicle detection from 3D Lidar using fully convolutional network. In: Robotics: Science and Systems (2016)
9. Li, B.: 3D fully convolutional network for vehicle detection in point cloud. In: IEEE/RSJ IROS (2017)
10. Gonzalez, A., Vazquez, D., Lopez, A.M., Amores, J.: On-board object detection: multicue, multimodal, and multiview random forest of local experts. IEEE Trans. Cybern. 47, 3980–3990 (2016)
11. Chen, X., Ma, H., Wan, J., Li, B., Xia, T.: Multi-view 3D object detection network for autonomous driving. In: IEEE CVPR (2017)
12. Oh, S.-I., Kang, H.-B.: Object detection and classification by decision-level fusion for intelligent vehicle systems. Sensors 17(1), 207 (2017)
13. Amidror, I.: Scattered data interpolation methods for electronic imaging systems: a survey. J. Electron. Imaging 11(2), 157–176 (2002)
14. Premebida, C., Garrote, L., Asvadi, A., Ribeiro, A.P., Nunes, U.: High-resolution LIDAR-based depth mapping using bilateral filter. In: IEEE ITSC (2016)
15. Ashraf, I., Hur, S., Park, Y.: An investigation of interpolation techniques to generate 2D intensity image from LIDAR data. IEEE Access 5, 8250–8260 (2017)
16. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. In: IEEE CVPR (2016)
17. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? The KITTI vision benchmark suite. In: IEEE CVPR (2012)
18. Plotkin, L.: PyDriver. https://github.com/lpltk/pydriver. Accessed Sep 2017

Modeling Traffic Scenes for Intelligent Vehicles Using CNN-Based Detection and Orientation Estimation

Carlos Guindel, David Martín, and José María Armingol

Intelligent Systems Laboratory (LSI) Research Group, Universidad Carlos III de Madrid, Leganés, Spain
{cguindel,dmgomez,armingol}@ing.uc3m.es

Abstract. Object identification in images taken from moving vehicles is still a complex task within the computer vision field due to the dynamism of the scenes and the poorly defined structures of the environment. This research proposes an efficient approach to perform recognition on images from a stereo camera, with the goal of gaining insight into traffic scenes in urban and road environments. We rely on a deep learning framework able to simultaneously identify a broad range of entities, such as vehicles, pedestrians or cyclists, with a frame rate compatible with the strong requirements of onboard automotive applications. The results demonstrate the capabilities of the perception system for a wide variety of situations, thus providing valuable information for understanding the traffic scenario.

Keywords: Object detection · Viewpoint estimation · Deep learning · Intelligent vehicles

1 Introduction

Technology has adopted an increasingly important role in transportation systems over the past decades. Advanced Driver Assistance Systems (ADAS) have been introduced in an attempt to deal with the fact that wrong decision-making and distractions are the cause of a significant proportion of traffic accidents. These systems represent an increase in the degree of automation towards the future goal of fully autonomous driving, which is expected to lead to significant improvements in several issues associated with transportation systems. While automated cars have already been successfully tested in urban environments [1], they are in most cases heavily dependent on off-line-built maps. Navigation with scarce or non-existent prior knowledge remains an open challenge due to the wide range of complex situations that must be handled (occluded landmarks, unexpected behaviors, etc.) within a highly dynamic, semi-structured environment.


Forthcoming self-driving systems will be required to understand complex traffic situations by themselves. This requirement relies on a robust inference of the position and motion of every traffic participant in the surrounding scene. Not only must the presence of obstacles be accounted for, but also an accurate estimation of the class to which every obstacle belongs (i.e. car, cyclist, etc.) is essential in order to correctly understand and predict the traffic situation. Vision-based approaches [2] have proved to be highly cost-effective while enabling close-to-production assemblies given their compact size and ease of integration. Furthermore, video frames provide a rich data source from which additional information can be extracted. In this paper, a vision-based approach enabling an enhanced onboard environment perception is introduced. It targets the detection and localization of the different road participants present in the surroundings of a movable platform. Additionally, object detection is enriched through a viewpoint estimation, enabling high-level inference about short-term behaviors. A modern convolutional network-based framework is employed to perform the critical inference steps according to appearance features; later, stereo information allows spatial reasoning to be introduced into the system. The remainder of this paper is organized as follows. In Sect. 2, we briefly discuss related works. Section 3 gives a general overview of the work. Section 4 describes the obstacle detection approach, while Sect. 5 introduces the scene modeling procedure. Results are reported in Sect. 6. Finally, Sect. 7 gives our conclusions about the work.

2 Related Work

Obstacle detection is an essential feature for automated driving systems. Consequently, a large number of algorithms have been historically developed to that end. Effort often focused on vehicle and pedestrian detection as these agents are the most commonly found ones in traffic scenes. According to the sensing device in use, vision-based methods have traditionally fallen into two main categories: monocular-vision methods and stereo-vision methods. Stereo-vision provides depth information about the scene and thus is commonly used in driving applications [3]. Stereo-vision algorithms usually make some assumptions about the ground or the expected free space on it [4]. However, rich geometry information about the scene can be recovered, enabling the building of representations such as probabilistic occupancy maps [5], elevation maps [6] or full 3D models [7], where obstacles can be identified. On the other hand, monocular obstacle detection is commonly based on appearance features. Selection of suitable features was traditionally the most crucial step in the performance of the detection pipeline and thus a lot of application-specific features, such as HOG-DPM [8], have been proposed to perform detection of traffic participants (e.g. cyclists [9]). Orientation estimation of the detected objects, while less frequent, has also been addressed [10].


Representation learning based on deep neural networks has delivered a paradigm shift in recent years, showing vast improvements over hand-crafted features in several kinds of recognition tasks. In particular, Convolutional Neural Networks (CNN) can learn hierarchical data representations that have been shown useful in object classification [11]. Instead of using the classical sliding-window approach, detection with CNNs frequently relies on attention mechanisms to limit the number of proposals to be effectively classified. Within this tendency, Girshick et al. introduced the now widely known recognition using regions (R-CNN) paradigm [12]. These regions can be selected according to classical similarity-based segmentation methods; however, much effort has recently been devoted to end-to-end pipelines where every stage, including region proposal, can be effectively learned. Faster R-CNN [13] takes advantage of a Region Proposal Network (RPN) which feeds the R-CNN responsible for the classification task. CNNs have been applied in several tasks involved in autonomous driving, such as lane detection [14] and, certainly, object detection [15]. In some cases, orientation is also predicted to increase the information about the detected instances; thus, in [16], a CNN is used for object detection and viewpoint estimation. Viewpoint is also estimated in [17] through keypoint likelihood models.

3 System Overview

This work has been designed to become the core element of the perception module in the IVVI 2.0 (Intelligent Vehicle based on Visual Information) research platform [18]. IVVI 2.0 is a manned vehicle, equipped with cutting-edge automotive sensors, which is meant for the development and testing of ADAS. Visual sensing units in the IVVI 2.0 include a trinocular stereo camera covering the field of view in front of the vehicle, which is the source of the images used by the presented approach. The processing unit includes a high-performance GPU which enables highly parallel processing, such as that carried out in CNNs. The Robot Operating System (ROS, http://www.ros.org/) is used for inter-module cooperation. The method presented here provides a step forward in vision-based detection and classification. The work consists of two main branches that are intended to run in parallel, as shown in Fig. 1: 1. Object detection and viewpoint estimation based on appearance. Features are extracted exclusively from the left stereo image. 2. Object localization, robust to changes in position and orientation of the vision system resulting from the vehicle movement. A stereo-based 3D reconstruction is performed, and the extrinsic camera parameters are extracted under a flat-ground assumption. As usual in deep learning frameworks, our object detection approach is meant to be performed almost entirely on the GPU; on the other hand, the object localization pipeline is expected to make intensive use of a typical multicore CPU during the 3D reconstruction step.


Fig. 1. Proposed system overview

This twofold process has been designed to fill the available computing capability, in order to meet the time requirements inherent to the application.

4 Obstacle Detection

A wide variety of dynamic obstacles can be found in urban and road environments. Whereas object classification aims to classify predefined image regions, object detection also requires localizing every object within the image coordinates. We adopt a state-of-the-art, CNN-based approach, Faster R-CNN [13], to perform object detection. Based on the popular R-CNN detector [12], Faster R-CNN provides an end-to-end trainable framework spanning from the image pixels to the final prediction. While outperforming classical detection pipelines, Faster R-CNN can deal with a large number of classes with no major impact on performance, making it particularly suitable for driving environments. Faster R-CNN involves two different stages: a Region Proposal Network (RPN), which is responsible for identifying those image regions where objects are located, and an R-CNN, where image regions from the previous RPN are classified. Both components are based on CNN architectures and in fact, they share the same set of convolutional layers. For this reason, Faster R-CNN enables object detection at real-time frame rates. We adopt the strategy introduced in [19] to incorporate the viewpoint inference into the detection framework. The basis of the idea is to benefit from the already computed convolutional features to obtain an estimation of the orientation of the objects with respect to the camera. Figure 2 illustrates the approach. As with the region proposals from the RPN, viewpoint can be estimated at almost no cost during test-time given that convolutions are computed only once. According to the requirements of the application, only the yaw angle (i.e. azimuth) from which objects are seen is to be estimated, since both relevant obstacles and the ego-vehicle are assumed to move on the same ground plane.


Fig. 2. Proposed object detection and viewpoint estimation approach.

4.1 Discrete Viewpoint Approach

A discrete approach is adopted for the viewpoint estimation, so that the full range of possible viewpoints (2π rad) gets divided into Nb bins Θi, i = 0, …, Nb − 1, of which only one is employed to represent the object viewpoint. Accordingly, objects with a ground-truth angle θ are assigned a viewpoint label i during the training step such that θ ∈ Θi:

Θ_i = { θ ∈ [0, 2π) : (2π/N_b) · i ≤ θ < (2π/N_b) · (i + 1) }   (1)

The proposed object viewpoint estimation system is aimed to provide a viewpoint estimation consisting of the parameters of a categorical distribution over the Nb possible viewpoints, r. A single-valued θ̂ estimation can therefore be provided as the center of the bin b* with the greatest probability according to r:

θ̂ = π(2b* + 1) / N_b   (2)
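For illustration only, a minimal sketch of this binning scheme is shown below (function and variable names are ours, not taken from the authors' implementation):

```python
import numpy as np

N_B = 8  # number of viewpoint bins (the value used later in Sect. 4.4)

def angle_to_bin(theta, n_bins=N_B):
    # Eq. (1): assign a ground-truth angle (radians) to its viewpoint bin label i
    theta = np.mod(theta, 2.0 * np.pi)            # wrap the angle into [0, 2*pi)
    return int(theta // (2.0 * np.pi / n_bins))   # bin index in {0, ..., n_bins - 1}

def bin_to_angle(b_star, n_bins=N_B):
    # Eq. (2): single-valued estimate as the center of the most probable bin b*
    return np.pi * (2 * b_star + 1) / n_bins
```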

4.2 Joint Detection and Viewpoint Estimation

In the R-CNN framework, image regions are propagated through the network and, finally, a fixed-length feature vector is extracted to predict the object class. We introduce the viewpoint estimation straightforwardly: it is inferred from the same feature vector that is also used to predict the class. This is motivated by the fact that appearance is highly affected by viewpoint, so a good set of features should be able to discriminate among different viewpoints.


Solutions introduced with Fast R-CNN [20] are adopted here, so the resulting feature vectors are fed into a sequence of fully connected layers that are finally split into three sibling layers. As in the original approach, the first two sibling layers provide a classification and a bounding box regression, respectively. On the other hand, the new third layer is responsible for giving an estimation of the viewpoint, which is ultimately normalized through a softmax function. Given that classification is performed over K classes, the output of this branch is a vector r composed of Nb · K elements, representing K categorical distributions (one per class) over the Nb viewpoint bins:

r^k = (r^k_0, …, r^k_{N_b−1})  for k = 0, …, K   (3)
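As a hypothetical sketch of how such an output could be decoded at test time (names and the exact normalization point are our assumptions, not the paper's code), the flat Nb·K vector can be reshaped into per-class distributions and turned into a single-valued angle for a given class:

```python
import numpy as np

def decode_viewpoint(viewpoint_scores, class_id, n_bins=8):
    # Reshape the Nb*K output into K rows of Nb scores each, Eq. (3)
    per_class = np.asarray(viewpoint_scores).reshape(-1, n_bins)
    logits = per_class[class_id]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                           # softmax over the Nb bins
    b_star = int(np.argmax(probs))                 # most probable viewpoint bin
    theta_hat = np.pi * (2 * b_star + 1) / n_bins  # bin center, Eq. (2)
    return probs, theta_hat
```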

4.3 Training

According to the results reported in [13], we adopt an approximate joint training strategy, which has been shown to offer the best time-precision trade-off. Viewpoint estimation loss is introduced as a logistic loss that only adopts non-zero values for the ground-truth class; that is, from the Nb·K-dimensional output r, we only take into account the Nb elements belonging to the ground-truth class when computing the loss:

L_v = (1/N_B) Σ_{j∈B} L_cls(r^{u*_j}, b*_j)   (4)

where N_B is the size of the batch B used to train the Fast R-CNN stage, and L_cls(r^{u*_j}, b*_j) is the multinomial logistic loss computed with the Nb elements from r corresponding to the ground-truth class u* (i.e. the probability distribution of the angular bins for the ground-truth class) and the ground-truth label for the bin classification b*. This summation is added to the existing four components of the loss in the original Faster R-CNN framework, to get a five-component multi-task loss which is used for training the CNN. Although different weights might be applied to the components of the loss function, we let every (normalized) loss have the same contribution.
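A minimal NumPy sketch of this loss term follows, assuming r has already been softmax-normalized per class; in practice this would be implemented as a network layer rather than as the loop shown here:

```python
import numpy as np

def viewpoint_loss(r_batch, gt_classes, gt_bins, n_bins=8):
    # Eq. (4): average multinomial logistic loss over the batch B, evaluated only
    # on the Nb entries of r that belong to each RoI's ground-truth class u*
    total = 0.0
    for r, u_star, b_star in zip(r_batch, gt_classes, gt_bins):
        probs = np.asarray(r).reshape(-1, n_bins)[u_star]  # GT-class distribution
        total += -np.log(probs[b_star] + 1e-12)            # cross-entropy for bin b*
    return total / len(r_batch)
```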

4.4 Implementation Details

As usual in classification tasks, convolutional layers are expected to be initialized from a model previously trained on the ImageNet classification dataset [21], while fully connected layers are given random values according to a Gaussian distribution. In this paper, eight evenly spaced viewpoint bins are considered for the viewpoint estimation (Nb = 8). Finally, a per-class non-maximum suppression (NMS) is applied to prune away the duplicated detections.
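For reference, a simple greedy per-class NMS of the kind described above could look like the following sketch (the IoU threshold and function names are our assumptions, not values from the paper):

```python
import numpy as np

def iou(a, b):
    # Intersection over union of two [x1, y1, x2, y2] boxes
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-12)

def per_class_nms(boxes, scores, classes, iou_threshold=0.3):
    # Greedy NMS applied independently within every class label
    keep = []
    for c in np.unique(classes):
        order = np.where(classes == c)[0]
        order = order[np.argsort(-scores[order])]        # best-scoring detections first
        while order.size > 0:
            i = order[0]
            keep.append(int(i))
            overlaps = np.array([iou(boxes[i], boxes[j]) for j in order[1:]])
            order = order[1:][overlaps < iou_threshold]  # discard duplicate detections
    return sorted(keep)
```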

5 Scene Modeling

Object detection can be augmented with geometrical information in order to retrieve an instantaneous, local model of the traffic participants in front of the vehicle. To that end, we use the information from the two cameras in the stereo rig to build a dense 3D reconstruction of the environment. Initially, the 3D point cloud of the scene is represented in camera coordinates. If the ground is assumed to be flat in a relatively small neighborhood of the vehicle, extrinsic parameters of the stereo system can be estimated in an online fashion. Accordingly, the effect of camera pose changes due to the vehicle movement (e.g. traveling on uneven road surfaces) can be properly removed. Through this process, obstacles first detected through the object detection stage can be localized in world coordinates and assigned an absolute yaw angle.

5.1 Stereo 3D Reconstruction

We adopt a semi-global approach [22] to perform dense stereo matching. Despite this family of algorithms being more processing intensive than the traditional block-matching methods, challenges posed by road environments, e.g. lack of texture or changes of illumination, make them more suitable for the intended application. As an example, the disparity map obtained from the scene in Fig. 3a is shown in Fig. 3b. As a result, a 3D point cloud is obtained (Fig. 3c). Then, a voxel grid downsampling, with a grid size of 20 cm, is performed. In addition to reducing the amount of data to be processed, this filtering aims to normalize the point density along the depth axis.
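As a rough illustration of this stage (a sketch under assumed settings: the library calls and parameter values are ours, not the authors' exact configuration), a semi-global disparity map and a 20 cm voxel filter could be obtained with OpenCV and Open3D:

```python
import cv2
import numpy as np
import open3d as o3d

def reconstruct_scene(left_gray, right_gray, Q, voxel_size=0.2):
    # Dense disparity via semi-global matching (parameter values are illustrative)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    points = cv2.reprojectImageTo3D(disparity, Q)   # (H, W, 3) camera-frame points
    valid = disparity > 0.0
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(points[valid].reshape(-1, 3))
    # Voxel-grid downsampling with a 20 cm grid, as described above
    return cloud.voxel_down_sample(voxel_size=voxel_size)
```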

5.2 Extrinsic Parameters Auto-Calibration

Coefficients defining the ground plane must be estimated as a first step to obtain the vision system extrinsic parameters. Two pass-through filters are applied in order to remove points outside a 0–2 m range along the vertical axis and a 0–20 m range along the depth axis. Within those ranges, the flatness assumption is fulfilled with a high probability. Points comprising the filtered point cloud are then fitted to a plane using RANSAC [23] with a 10 cm threshold. Only planes perpendicular to a fixed direction, with a small angular tolerance, are considered. Since the angles defining the camera pose are expected to be small, that axis is chosen as the vertical direction in camera coordinates. Figure 3d illustrates the ground plane (shown in green) obtained from the voxel-filtered point cloud. It can be shown [24] that, given a road plane defined by a·x_c + b·y_c + c·z_c + d = 0, with (x_c, y_c, z_c) being the coordinates of a point belonging to the plane, the roll (ψ), pitch (φ) and height (h) defining the camera pose can be obtained as:

ψ = arcsin(a),   φ = arctan(−c/b),   h = d   (5)
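A small sketch of Eq. (5) is given below, assuming the plane coefficients (a, b, c, d) come from a RANSAC fit such as the one described above (the RANSAC step itself is omitted and the function name is illustrative):

```python
import numpy as np

def extrinsics_from_plane(a, b, c, d):
    # Eq. (5): roll, pitch and camera height from the road plane a*x + b*y + c*z + d = 0,
    # assuming a unit-length plane normal (a, b, c) and small camera angles
    roll = np.arcsin(a)
    pitch = np.arctan(-c / b)
    height = d
    return roll, pitch, height
```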



Fig. 3. Extrinsic parameters estimation pipeline: (a) left image; (b) disparity map; (c) point cloud; (d) inliers for the plane, in green, over the voxelized cloud.

Yaw angle cannot be extracted solely from the plane, and thus it is assumed to be nil. We choose not to translate the world coordinate frame along x and y camera axes, although that displacement may be arbitrarily chosen (e.g. the origin might be centered at the front end of the vehicle). That set of extrinsic parameters defines a transformation which is then applied to the non-filtered point-cloud to get points in world coordinates.

5.3 Object Localization

To obtain the spatial location of the objects in the scene, the correspondence between points in the image and points in the 3D cloud, preserved within an organized point cloud structure, is exploited. Points belonging to the ground, as well as those too close to the camera (e.g. from the hood of the car), are removed beforehand. For every detection, the median values of the x, y and z coordinates for the set of 3D points corresponding to the 11 central rows of the object bounding box are computed and used as an estimation of the 3D location of the object. Yaw angle, expressed as the rotation around a local vertical axis, can be approximated taking into account the angle between the positive x axis of the world coordinate frame and the point given by the coordinates of the object. By using all the inferred information about the obstacles, a top-view model of the vehicle surroundings is built, where every object in the field of view is included alongside their estimated orientation.
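A hedged sketch of this localization step follows; the organized-cloud indexing and array names are our assumptions for illustration, not the actual implementation:

```python
import numpy as np

def locate_detection(cloud_xyz, box):
    # cloud_xyz: organized (H, W, 3) point cloud in world coordinates, aligned with the image
    # box: (x1, y1, x2, y2) bounding box of a detection, in pixel coordinates
    x1, y1, x2, y2 = box
    cy = (y1 + y2) // 2
    strip = cloud_xyz[max(cy - 5, 0):cy + 6, x1:x2].reshape(-1, 3)  # 11 central rows
    strip = strip[np.isfinite(strip).all(axis=1)]                   # drop invalid points
    position = np.median(strip, axis=0)                             # robust 3D location
    bearing = np.arctan2(position[1], position[0])                  # angle w.r.t. world x axis
    return position, bearing
```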

6 Results

Our joint object detection/viewpoint estimation pipeline was quantitatively evaluated according to the standard metrics on a well-established image benchmark, while the performance of the scene modeling stage and, eventually, the full system, have been tested in real traffic scenes using the IVVI 2.0 vehicle.

6.1 Object Detection and Viewpoint Estimation

Experiments to assess the object detection and viewpoint estimation branch have been conducted on the KITTI object detection benchmark [25], taking


advantage of the available class and orientation labels. Since annotations for the testing set are not publicly available, the labeled training set has been divided into two splits, for train and validation, ensuring that images from the same sequence are not used in both subsets. Overall, 5,576 images are used in training whereas 2,065 are subsequently employed to test our algorithms. Since our work focuses on the simultaneous recognition of the different agents of the scene, our algorithm has been trained to detect the seven foreground classes provided by the KITTI dataset. Special care was taken to avoid including regions overlapping with DontCare and Misc regions, neither as positive nor as negative samples, while training. Given that our approach is independent of the particular architecture selected for the convolutional layers, we tested the two baseline architectures from Faster R-CNN in our application: ZF [26] and VGG16 [27]. On the other hand, even though all the models were obtained scaling the input images to 500 pixels in height during the training, different scales were evaluated at test-time. In all cases, training was carried out for 90k iterations, with a base learning rate of 0.0005, which was scaled by 0.1 every 30k iterations. For the sake of brevity, we only evaluate Average Orientation Similarity (AOS), as introduced in [25], which is intended to assess the joint detection and viewpoint accuracy. Results for the different architecture/scale combinations are given in Table 1. Please note that results for the Person sitting and Tram classes are not reliable due to the low number of samples and were therefore excluded. Processing times are for our implementation using the Python interface of Caffe [28] and an NVIDIA Titan Xp GPU.

Table 1. Average orientation similarity (%) and run times (ms) on the test split for different scales and architectures

Net   Scale  Car   Pedest.  Cyclist  Van   Truck  Mean  Time
ZF    375    44.2  35.6     16.1     8.5   3.2    21.5  46
ZF    500    52.7  43.7     18.4     12.9  3.5    26.2  73
ZF    625    51.6  40.7     22.7     15.1  5.3    27.1  90
VGG   375    64.8  54.7     25.0     22.9  8.5    35.2  79
VGG   500    74.7  61.0     33.0     30.0  12.1   42.2  112
VGG   625    75.7  60.9     35.2     31.1  15.4   43.7  144

As shown, precision does not grow significantly when the test-time scale is raised beyond the original train-time scale, i.e., 500 pixels. On the other hand, VGG16 considerably outperforms ZF for every analyzed class.

6.2 Scene Modeling

Tests for the scene modeling were performed using the IVVI 2.0 vehicle in real traffic situations. According to the results in the previous section, we chose


the VGG16 architecture, and 500 as the image scale. Due to the generalization capability featured by CNN structures, models trained on the KITTI dataset were used without modifications. An ROI with 500 pixels in height, comprising the area where objects are typically present in the images, was extracted from the original 1024 × 768 images to be utilized by the CNN branch, while the full frame is employed to build the point cloud at the modeling branch. Figure 4 shows four examples of monocular detections (upper row) and their resulting scene models, where obstacles are represented as dots on a top view of the reconstructed point cloud (lower row). Object orientation is represented by an arrow. Additionally, points belonging to the ground plane (RANSAC inliers) are projected on the image and colored green; they provide a rough estimation of the traversable area for the vehicle.


Fig. 4. Some examples of traffic scenes correctly identified by our system.

7 Conclusion

A computer vision framework designed to reach a full traffic scene understanding has been presented. Traffic participants are identified by a CNN-enabled method, showing the potential of this approach within automotive applications. Obstacle viewpoint estimation is introduced as an additional information source to endow the system with further insight into the scene features. Because of the nature of the adopted approach, joint object detection and viewpoint estimation can be performed simultaneously over all classes. Since CNN parameters are shared across all categories and feature vectors computed by the CNN are low-dimensional, computation times are compliant with real-time requirements, yet achieving accurate results. This information can be further augmented using a stereo vision 3D reconstruction to gather an accurate situation assessment in complex traffic situations.


New categories of traffic elements, even those belonging to the infrastructure, may be subsequently added to enhance the scene understanding. The presented approach can be naturally extended to the time domain in order to make predictions about future behaviors of agents involved in the scene. In this regard, viewpoint estimation provided by the presented method plays a fundamental role to enable robust inference. Additionally, the output provided by the system is suitable to be combined with information from other perception modules, e.g., semantic segmentation, to build an even more comprehensive model of the surroundings of the vehicle. Acknowledgments. Research supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2016-78886-C3-1-R), and the Comunidad de Madrid through SEGVAUTO-TRIES (S2013/MIT-2713). The Titan Xp used for this research was donated by the NVIDIA Corporation.

References 1. Broggi, A., Cerri, P., Debattisti, S., Laghi, M.C., Medici, P., Panciroli, M., Prioletti, A.: PROUD-public road urban driverless test: architecture and results. In: Proceedings of IEEE Intelligent Vehicles Symposium (IV), pp. 648–654 (2014) 2. Zhu, H., Yuen, K.V., Mihaylova, L., Leung, H.: Overview of environment perception for intelligent vehicles. IEEE Trans. Intell. Transp. Syst. (2017) 3. Franke, U., Pfeiffer, D., Rabe, C., Knoeppel, C., Enzweiler, M., Stein, F., Herrtwich, R.G.: Making Bertha See. In: IEEE International Conference on Computer Vision Workshops (ICCVW), pp. 214–221 (2013) 4. Musleh, B., de la Escalera, A., Armingol, J.M.: U-V disparity analysis in urban environments. In: Computer Aided Systems Theory - EUROCAST 2011. Springer, Heidelberg, pp. 426–432 (2012) 5. Badino, H., Franke, U., Mester, R.: Free space computation using stochastic occupancy grids and dynamic programming. In: IEEE International Conference on Computer Vision Workshops (ICCVW) (2007) 6. Oniga, F., Nedevschi, S.: Processing dense stereo data using elevation maps: Road surface, traffic isle, and obstacle detection. IEEE Trans. Veh. Technol. 59(3), 1172– 1182 (2010) 7. Broggi, A., Cattani, S., Patander, M., Sabbatelli, M., Zani, P.: A full-3D Voxelbased dynamic obstacle detection for urban scenario using stereo vision. In: Proceedings of IEEE International Conference on Intelligent Transportation Systems (ITSC), pp. 71–76 (2013) 8. Felzenszwalb, P.F., Girshick, R., McAllester, D., Ramanan, D.: Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell. 32(9), 1627–1645 (2010) 9. Tian, W., Lauer, M.: Fast cyclist detection by cascaded detector and geometric constraint. In: Proceedings of IEEE International Conference on Intelligent Transportation Systems (ITSC), pp. 1286–1291 (2015) 10. Pepik, B., Stark, M., Gehler, P., Schiele, B.: Teaching 3D geometry to deformable part models. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3362–3369 (2012)


11. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Proceedings of Advances in Neural Information Processing Systems (NIPS), pp. 1097–1105 (2012) 12. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 580–587 (2014) 13. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1137–1149 (2016) 14. Li, J., Mei, X., Prokhorov, D.: Deep neural network for structural prediction and lane detection in traffic scene. IEEE Trans. Neural Netw. Learn. Syst. 28(3), 690–703 (2017) 15. Yang, F., Choi, W., Lin, Y.: Exploit all the layers: fast and accurate CNN object detector with scale dependent pooling and cascaded rejection classifiers. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2129–2137 (2016) 16. Yang, L., Liu, J., Tang, X.: Object detection and viewpoint estimation with auto-masking neural network. In: Computer Vision - ECCV 2014, pp. 441–455 (2014) 17. Tulsiani, S., Malik, J.: Viewpoints and keypoints. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1510–1519 (2015) 18. Martín, D., García, F., Musleh, B., Olmeda, D., Peláez, G.A., Marín, P., Ponz, A., Rodríguez Garavito, C.H., Al-Kaff, A., de la Escalera, A., Armingol, J.M.: IVVI 2.0: an intelligent vehicle based on computational perception. Expert Syst. Appl. 41(17), 7927–7944 (2014) 19. Guindel, C., Martin, D., Armingol, J.M.: Joint object detection and viewpoint estimation using CNN features. In: Proceedings of IEEE International Conference on Vehicular Electronics and Safety (ICVES), pp. 145–150 (2017) 20. Girshick, R.: Fast R-CNN. In: Proceedings of IEEE International Conference on Computer Vision (ICCV), pp. 1440–1448 (2015) 21. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vision 115(3), 211–252 (2015) 22. Hirschmüller, H.: Stereo processing by semiglobal matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell. 30(2), 328–341 (2008) 23. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381–395 (1981) 24. de la Escalera, A., Izquierdo, E., Martín, D., Musleh, B., García, F., Armingol, J.M.: Stereo visual odometry in urban environments based on detecting ground features. Robot. Auton. Syst. 80(June), 1–10 (2016) 25. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3354–3361 (2012) 26. Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Computer Vision - ECCV 2014, pp. 818–833. Springer (2014) 27. Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large-Scale Image Recognition. CoRR abs/1409.1 (2014) 28. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: convolutional architecture for fast feature embedding. In: Proceedings of ACM International Conference on Multimedia, pp. 675–678 (2014)

Complete ROS-based Architecture for Intelligent Vehicles Pablo Marin-Plaza(B) , Ahmed Hussein, David Martin, and Arturo de la Escalera Universidad Carlos III de Madrid, Leganes, Spain {pamarinp,ahussein,dmgomez,escalera}@ing.uc3m.es http://www.uc3m.es/islab

Abstract. In the Intelligent Transportation Systems Society (ITSS), research interest in intelligent vehicles has been increasing over the last few years. Accordingly, this paper presents the advances in the development of a ROS-based (Robot Operating System) software architecture for intelligent vehicles. The main contribution of the architecture is its power, flexibility, and modularity, which allow researchers to develop and test different algorithms. The architecture has been tested on different platforms: the autonomous ground vehicles of the iCab (Intelligent Campus Automobile) project and the intelligent vehicle based on Advanced Driver Assistance Systems (ADAS) of the IvvI 2.0 (Intelligent Vehicle based on Visual Information) project.

1 Introduction

Interest in autonomous driving vehicles started three decades ago, solving simple problems related to the dynamics and stabilization of the vehicle, such as the Anti-lock Braking System (ABS) and the Traction Control System (TCS), with proprioceptive sensors like encoders and inertial measurement units. From the late 1990s to the last decade, different Advanced Driver Assistance Systems (ADAS) applications, for instance park assistance, Adaptive Cruise Control (ACC), night vision, and lane departure detection, among others, were developed and installed in some commercial vehicles. The intention of these features was always to assist the driver, not to drive autonomously. In this decade, the revolution in sensors and computers has allowed research groups to focus on the different levels of autonomy, increasing the complexity of vehicle tasks: from collision avoidance and automated highway driving to, recently, cooperative behavior on the road, where the rules of each country apply to all vehicles. When talking about off-road navigation, the rules for roads are not applicable and new challenges emerge due to the unstructured environment and the lack of lanes or signs. The intention of this paper is to present a software architecture, based on a modular and portable solution, to solve each one of the problems that autonomous vehicles have to face. Most of them are common to the on-road case, such as


localization, mapping, perception, and behavior decision. However, the most important difference between on-road and off-road operation is the analysis of the map during behavior selection. This architecture is easily configurable to work with different platforms, and it is intended to compare and evaluate the solutions of problems by different methods. The document is structured as follows: Sect. 2 introduces the design of the architecture and the involved elements. Section 3 describes the platforms where this architecture has been tested. Section 4 exposes conclusions and future work.

2 Design

The main idea behind creating a whole architecture is to allow researchers to test and compare the implemented algorithms, in order to reduce the time spent on implementation. With a modular architecture, it is possible to easily change or select different modules while keeping the coherence of the connections between nodes. The prototyping software is ROS (Robot Operating System), and the most important aspect of the architecture is the connection between packages. Additionally, standard messages for sensors, mapping and navigation are used in order to keep the connections unbroken and ensure the compatibility between nodes when a new package needs to be incorporated. The ideal software for research platforms is focused on different aspects, such as modularity, flexibility, simplicity, and usability. They are described as follows: – Modularity: is the degree to which a system’s components may be separated and recombined. Based on that aspect, each problem to solve, mentioned in Sect. 1, has been simplified and isolated in order to work independently of the rest of the system. Navigation, localization, control, and path planning are isolated, in order to debug and test them individually, as well as mapping and behavior decision making. – Flexibility: means designs that can adapt when external changes occur. In this case, the external change will be a researcher changing the main method, in order to make a comparison between the old method and the new one. For example, one case of use is the odometry ego-motion method. Sources like wheel odometry from the encoders or GPS odometry may vary substantially between them, and a good filter will generate a better localization. The architecture allows the change from one filter to another, or even generating both. The use of managers, in order to select which method is active, is a fundamental part of the architecture. One important fact is that if a new feature like a new message, parameter or signal is necessary, the architecture is mature enough to adapt and assimilate this change without modifying the rest of the system. – Simplicity: is the state or quality of being simple. As the architecture grows in complexity with many more modules, the configuration file and the launch file of the whole architecture keep track of which parts of the modules are running.


Fig. 1. Extended overview of the architecture

– Usability: means the availability or convenience of use. This aspect is the most important for sharing the knowledge with other researchers. Each package is documented and available to be used by other researchers when necessary. The overview of the whole system is shown in Fig. 1. On the top left, there are some examples of the inputs of the system, such as sensors or controllers. In the top middle, a simplified diagram of the modules of the architecture is presented. On its right side are possible actuators such as motors or speakers. Photos of iCab 1, IvvI 2.0 and iCab 2 are under the top blocks and described later on. Below the photos, there is the description of the modules in more detail. The connection


messages between them are always the standard ones from the ROS repositories, for the reasons aforementioned, to keep the compatibility and interchangeability of the packages. In order to configure which packages are active, roslaunch files have been used with specific nodes. These files run the selected packages with specific parameters under different namespaces, in order to generate the whole system. If a new sensor or node is required, a new launch file is incorporated.

2.1 Inputs

The inputs of the system depend on which type of research platform is configured. Each sensor is able to run under Linux, or at least its driver is able to communicate with the ROS core. This section describes the sensors mounted on the platform. – Laser Rangefinder: LMS 291 and LD-MRS, both from the Sick company. The drivers are sick wrapper and sick_ldmrs. Both lasers are configurable and share information under a specific type of message called LaserScan.msg. – Stereo Camera: The use of Point Grey Bumblebee 2 and Bumblebee xB3 allows the architecture to gather information under the specific Image.msg messages. Each camera publishes the left and right images, and the post-processing nodes rectify them in order to create the disparity map. – GPS + IMU + Compass: A uBlox module is one solution to retrieve information about the inner state of the vehicle, with the IMU publishing Imu.msg and the compass Float.msg. For the GPS module, the device publishes information by NavSatFix.msg. The driver is Mavros. – Ultrasonic: In order to control them, a microcontroller is necessary to send and receive the time of flight and the corresponding distance. The simple solution is the use of an Arduino, and the driver to publish the information is rosserial_arduino. The messages published are Float.msg. – Joystick: Manual driving needs a stable joystick to govern the car. The driver Joy, which is generic from the ROS libraries, is the perfect solution. The message published is Joy.msg. – Lidar: The device is a Velodyne VLP-16. This special sensor is able to provide information from the environment over 360◦ across 16 different layers. The Velodyne driver allows the architecture to receive messages of the types PointCloud2.msg and LaserScan.msg.

2.2 Actuators

The only way to move the vehicle is through the motor, the brake and the steering wheel; therefore, the commands generated by the architecture conform to the standard ROS message AckermannDrive.msg. The importance of maintaining this message is reflected in the possibility of using it with different actuators: the only change is the low-level driver, outside of the architecture and outside the computers.
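For illustration, a minimal rospy node publishing such a command could look like the following sketch (the topic name, rate and command values are assumptions, not the project's actual configuration):

```python
#!/usr/bin/env python
import rospy
from ackermann_msgs.msg import AckermannDrive

def main():
    rospy.init_node('ackermann_command_example')
    # Topic name and rate are placeholders for this sketch
    pub = rospy.Publisher('vehicle/ackermann_cmd', AckermannDrive, queue_size=1)
    rate = rospy.Rate(20)
    while not rospy.is_shutdown():
        cmd = AckermannDrive()
        cmd.speed = 1.0            # m/s
        cmd.steering_angle = 0.1   # rad
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    main()
```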


The speakers use the sound driver to play different instructions for the passengers through the SoundRequest.msg type. The touchscreen is included here due to the GUI that is part of the architecture, in addition to taking inputs from passengers related to transportation requests.

2.3 Perception

Image processing is a tool for environmental understanding, where features of the scene are extracted and analyzed in order to make better decisions. The preprocessing stage is in charge of transforming the data extracted from the stereo camera as raw images, for both left and right, rectified accordingly with lookup tables. The use of lookup tables or the official ROS repository for camera calibration is up to the user of the architecture. For the specific Bumblebee Point Grey sensors of the iCab and IvvI projects, the lookup tables have been selected due to their better performance. In order to understand the environment using the stereo analysis, it is necessary to create the disparity map. Again, the architecture provides the opportunity to select the method and, as long as the use of standard image messages maintains the same structure, the method to rectify or create the disparity map is irrelevant from the point of view of the architecture, and only the performance is affected. Towards obstacle detection, some nodes use the extrinsic calibration from the camera to the road plane; accordingly, it is necessary to save computational power and perform it only once, because the use of point clouds requires a high CPU load.

2.4 Localization

The localization package is intended to solve the problem of the odometry of the vehicle. The knowledge of the vehicle pose, position, and orientation during the movement in the environment is critical to making effective decisions. For this reason, the vehicle needs its own localization at all times and, for this architecture, several ways to obtain the odometry have been developed, which are described as follows: – Encoders: The accumulative displacement in longitudinal and latitudinal coordinates is extracted from the rear wheels and the steering wheel. Several algorithms have been tested in order to obtain the smallest accumulative error on this type of odometry and, afterwards, the use of a compass in the wheel odometry calculation improved the overall efficiency. The geometry for the look-ahead distance of pure pursuit allows the node to calculate the distance covered. Through the covered distance and the compass angle, it is possible to identify the vehicle pose. – GPS: Based on the latitude coordinates, longitude coordinates and X and Y UTM values, the readings of this sensor provide the global movement of the vehicle. These readings are highly dependent on the number of available satellites in the testing area. In order to obtain the local odometry, the use of


the initial heading direction can be selected by two methods: based only on the GPS readings after the start of the movement, or based directly on the compass. – Visual: The possibility of obtaining the plane of the ground in front of the vehicle opens up two different approaches for this objective. One method is based on the analysis of the ground plane to extract features using PCL. The other method uses the UV-disparity analysis [10]. – Velodyne: Using the LOAM (Laser Odometry and Mapping) algorithm [12] from the ROS repositories and using different configurations, this package provides one of the best odometries for the vehicle, especially near buildings and structures around the path. This method is based on the 3D reconstruction of the environment that the sensor provides. – Wireless: The use of access points for wireless Internet communication provides the localization in a known environment [2]. The odometry manager is a fundamental part of this package. It is in charge of selecting the final odometry of the vehicle under the configured namespace. All other nodes that use the odometry are directly connected to the output of the odometry manager. Other odometries derived from this package are only for testing and debugging. The messages used for all odometry generation nodes are from the nav_msgs package, and the type is Odometry.msg. It includes the ego position of the vehicle, the velocity and the covariance for the confidence level. Moreover, the odometry manager is responsible for the fusion of the different types of odometry techniques in order to achieve better results with less error.
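A reduced sketch of such a manager is shown below; the topic names, parameter name and simple selection rule are illustrative assumptions (the actual node may also fuse the sources):

```python
#!/usr/bin/env python
import rospy
from nav_msgs.msg import Odometry

class OdometryManager(object):
    """Relays the currently selected odometry source to a single output topic."""

    SOURCES = ('encoders', 'gps', 'visual', 'velodyne', 'wireless')

    def __init__(self):
        self.active = rospy.get_param('~active_source', 'encoders')
        self.pub = rospy.Publisher('odometry/selected', Odometry, queue_size=10)
        for name in self.SOURCES:
            rospy.Subscriber('odometry/' + name, Odometry, self.relay, callback_args=name)

    def relay(self, msg, source):
        # Forward only the messages coming from the configured source
        if source == self.active:
            self.pub.publish(msg)

if __name__ == '__main__':
    rospy.init_node('odometry_manager_example')
    OdometryManager()
    rospy.spin()
```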

2.5 Mapping

The use of the nav_msgs package allows the system to use OccupancyGrid.msg to share information with the different processes. The main characteristic of this occupancy grid map is the reconfigurability to work with different sizes and resolutions. The mapping library creates the grid map for each sensor and later fuses them to create one map with all the surrounding information. The use of TF (transformation tree) is mandatory; therefore, proper transformations are estimated. The data fusion algorithm is up to the researcher, in order to test the importance of each sensor depending on the confidence level. Finally, the visualizer adds some elements, such as the icon for the vehicle in colors, waypoints and other features only for debugging purposes. Each message contains the data array, where values between 0 and 100 specify the occupation probability of each cell.
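As a sketch of one possible per-cell fusion rule (a simple maximum, chosen here only as an example; the paper leaves the fusion algorithm to the researcher), assuming both grids share size, resolution and origin:

```python
import numpy as np
from nav_msgs.msg import OccupancyGrid

def fuse_grids(grid_a, grid_b):
    # Unknown cells (-1) defer to the other grid; known cells keep the larger occupancy
    a = np.asarray(grid_a.data, dtype=np.int16)
    b = np.asarray(grid_b.data, dtype=np.int16)
    fused = np.where(a < 0, b, np.where(b < 0, a, np.maximum(a, b)))
    out = OccupancyGrid()
    out.header = grid_a.header
    out.info = grid_a.info
    out.data = fused.astype(np.int8).tolist()
    return out
```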

2.6 Planning

In order to go from one point to another, the planning is divided into two main layers. On the one hand, global planning creates a path for the vehicle from the starting point to the destination point, avoiding all static obstacles in the provided occupancy grid map. The message type Path.msg, from the nav_msgs package, is used in these nodes. The most important field is the waypoints,


where the car needs to pass by in order to send the proper commands to the controller, which are later converted into movement commands. The grid map message is mandatory for each node, in order to allow the use of different algorithms, such as A* or Simulated Annealing [3]. On the other hand, there is the local planning for the vehicle movement among the waypoints pre-set by the global planner. The planner uses the local grid map with the current information and the current location as input to navigate to the next waypoint. Accordingly, the better the local grid map of the surroundings, the better and more efficient the planning obtained. The local planning traces the path to follow using different methods, such as splines, Timed Elastic Bands [11] or Bézier curves [1]. The same workflow from the other packages, with the planning (global or local) manager, is used here. The manager is in charge of publishing to ROS the selected path, based on a simple selection or filtered and fused by different algorithms.

2.7 Control

The lateral and velocity controllers are generated in a closed-loop model. This package generates the commands to navigate through ackermann_msgs, where the velocity and the steering angle are the results of the PID controller package from ROS. This package is configured for each vehicle due to the slight differences in the actuators; therefore, the PID coefficients are different. The AckermannDrive.msg message is generated in this package as the result of the current position and the desired position, gathered from the aforementioned Path.msg message from the local planning manager.
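A compact sketch of one such closed loop is given below; the gains, error definitions and clipping limit are placeholders, and the project relies on the ROS PID package rather than this hand-rolled version:

```python
from ackermann_msgs.msg import AckermannDrive

class SimplePID(object):
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def control_step(lateral_error, speed_error, dt, steering_pid, speed_pid, max_steer=0.5):
    # Turn tracking errors from the local path into a single drive command
    cmd = AckermannDrive()
    steer = steering_pid.step(lateral_error, dt)
    cmd.steering_angle = max(-max_steer, min(max_steer, steer))
    cmd.speed = speed_pid.step(speed_error, dt)
    return cmd
```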

2.8 Communication

For the V2X communication, the use of Multimaster FKIE allows the architecture to discover other ROS core masters in the same network. A comparative study between 4G and WiFi networks is conducted in [7], along with the different configurations using a Virtual Private Network (VPN). The Vehicle-to-Vehicle (V2V) communication allows each agent in the system to know the position of the other agents, among other related information for cooperation purposes. The Vehicle-to-Infrastructure (V2I) communication creates the registry log used for debugging and is the main component of interaction with the iCab project to store the transportation requests for the vehicles to go from one point to another. Finally, the Vehicle-to-Pedestrian (V2P) communication uses the network to alert both nearby vulnerable pedestrians and the vehicle in case of a possible collision [4]. All configurations are bidirectional, which means that the infrastructure-to-vehicle and the pedestrian-to-vehicle communications are implemented as well. Due to the project specifications for each type of communication, and not for general purposes, the messages shared through the V2X configurations are not standard.

2.9 Cooperation

The main purpose of this package is to manage the cooperation and coordination among all vehicles in the system. It follows the multi-robot task allocation (MRTA) problem modeling, using the multi-traveling salesmen (mTSP) formulation. Currently, there are two proposed solutions: a market-based approach and a metaheuristic-based optimization approach. The cooperation manager selects which method is used. Because of the complexity of the cooperation level, an extra non-standard message is used in order to share all relevant information of the vehicle [5].

2.10 Simulation

One of the most widespread open-source tools for 3D simulation is Gazebo, due to its physics engine, reconfigurability, portability, and usability. The model of the vehicle has been created using an Ackermann vehicle 3D configuration. The control library is loaded in the ROS core and communicates the transmission of the movement of the links to the simulator via libroscontroller. The sensors, such as the laser rangefinder, Velodyne, cameras, and ultrasonic sensors, are implemented to be used by the model. For the returned odometry message of the vehicle inside the simulator, an interface between Gazebo and the architecture has been implemented, which publishes the Gazebo odometry as Odometry.msg, allowing the manager to select this odometry for simulation purposes and link it with the rest of the nodes.

2.11 Decision Making

The logic core is placed in this package. For the iCab project, there are simple tasks developed, such as follow right boundary, follow left boundary, drive between boundaries, path follower, and platooning. These tasks are based on the grid map and the odometry, which allow the iCab vehicles to move around the off-road environment. The selection of each task is based on a state machine with different levels of decision [6].

2.12 Debugging

The tools for recording data using rosbag, and its posterior analysis, allow testing and debugging with multiple configurations. The launch files are configured to select where to save the data, which topics are to be analyzed, and the duration of the bagfiles to split. Other tools such as rqt_plot, rqt_graph, rosconsole, rostopic, roswtf and tf_monitor, among others, are connected to the packages in order to provide useful information. The most common IDE for development is Qt, which allows live debugging while ROS is running.

2.13 GUI

This package contains the implementation of the graphical user interface (GUI). It is possible to run different interfaces related to the active project (iCab or IvvI), along with other visualization options such as RViz. It has been developed with Qt Creator.

3 Platforms

Based on the previous description, the architecture has been tested on the iCab project vehicles [6,8] and on the IvvI 2.0 project [9].

3.1 iCab

This platform is an electric golf cart where the steering wheel has been removed and replaced with a motor-encoder system. For the traction movement, the power unit is governed by MOSFET PWM, with the ability to go forward and backward at a maximum speed (limited at this moment) of 15 km/h. For the brake system, a linear motor actuator, controlled by an Arduino motor shield, has been incorporated to push the on-wheel mechanism and create the corresponding friction to decelerate. The steering angle and PWM traction control are handled by a microcontroller outside of the ROS scope. The sensor system is composed of a configurable stereo camera (from 640 × 480 RGB to 1024 × 768 RGB @ 20 Hz) at the front, a single-plane lidar (180◦ FOV @ 10 Hz) at the front, GPS + IMU + compass on top of the vehicle, a 16-layer lidar (360◦ horizontal + 30◦ vertical) on top of the vehicle, and two wide-angle (±45◦) ultrasonic sensors at the rear. The hardware to manage the sensors and the actuators is divided into two computers, an AAEON embedded i7 PC and an Intel NUC 6I7KYK, both running Ubuntu 16.04 and ROS Kinetic. For the emergency button, there is an electronic device inside the vehicle and a WiFi-paired button. There are two vehicles, iCab 1 and iCab 2, with the same setup. Figure 2 shows these elements.

3.2 IvvI 2.0

This platform is shown in Fig. 3 and presents the ADAS system for drivers, without any autonomous intervention. The sensors are a configurable stereo camera (from 640 × 480 RGB to 1024 × 768 RGB @ 20 Hz) at the front, an IR camera (320 × 240 @ 15 Hz) at the front, a 16-layer lidar (360◦ horizontal + 30◦ vertical) on top of the vehicle, and a Kinect 2 inside the vehicle pointing at the driver in order to monitor distractions and drowsiness. One powerful computer is mounted at the rear of the vehicle, and it has the same setup as in the iCab platforms: Ubuntu 16.04 and ROS Kinetic.


Fig. 2. iCab

Fig. 3. IvvI 2.0

4 Conclusions and Future Work

In this article, a complete ROS-based architecture for intelligent vehicles has been studied and developed as a complex and essential task for the intelligent transportation systems society (ITSS). A reliable solution based on the ROS software architecture has been proposed, fusing data and knowledge to adapt and adjust overall vehicle parameters on different platforms. The reconfigurable architecture helps different embedded systems in vehicles to increase power, flexibility, and modularity, allowing new algorithms to be tested easily. The architecture has been exemplified on two platforms: first, an autonomous ground vehicle, and second, an intelligent vehicle based on ADAS. The usefulness of the presented architecture, and its comparative advantages with respect to non-adaptable architectures, have been extensively demonstrated through results under demanding circumstances such as GPS outages, degraded computer vision scenarios, or multi-path trajectories, while maintaining the positioning accuracy in complex urban scenarios. The motivation of this research is that, with the growing research in the autonomous vehicle field in recent years, multiple research groups develop powerful algorithms only for specific platforms; depending on that dependency level, when a new platform appears, porting those algorithms is not simple. The work exposed in this paper provides a solution to create a complete and configurable architecture, built from different modules, that is not linked to the sensor devices or to the hardware low-level controller. Thus, the provided architecture is able to manage two different platforms, as exposed in the previous sections: the iCab platforms (autonomous vehicles) and IvvI 2.0 (ADAS vehicle). As future work, this architecture solution will be applied, using moderate-cost available sensors, in the forthcoming autonomous vehicle industry, covering cooperative driving, automatic maneuvers for safety, autonomous urban vehicles, or collision avoidance, among other complex systems that require complete architectures. Moreover, the study of different levels of compatibility is also required to provide solutions when a platform is not compatible enough to support or execute the whole architecture. Accordingly, the proposed future work includes the use of communication tools such as Lightweight Communications and Marshalling (LCM). Acknowledgement. This research is supported by the Madrid Community project SEGVAUTO-TRIES (S2013-MIT-2713) and by the Spanish Government CICYT projects (TRA2013-48314-C3-1-R, TRA2015-63708-R and TRA2016-78886-C3-1-R).

References 1. González, D., Pérez, J., Lattarulo, R., Milanés, V., Nashashibi, F.: Continuous curvature planning with obstacle avoidance capabilities in urban scenarios. In: IEEE 17th International Conference on Intelligent Transportation Systems (ITSC), 2014, pp. 1430–1435. IEEE (2014)


2. Hernandez, N., Ocaña, M., Alonso, J.M., Kim, E.: Continuous space estimation: increasing WiFi-based indoor localization resolution without increasing the site-survey effort. Sensors 17(147), 1–23 (2017) 3. Hussein, A., Mostafa, H., Badreldin, M., Sultan, O., Khamis, A.: Metaheuristic optimization approach to mobile robot path planning, pp. 1–6 (2012) 4. Hussein, A., Garcia, F., Armingol, J.M., Olaverri-Monreal, C.: P2V and V2P communication for pedestrian warning on the basis of autonomous vehicles. In: IEEE International Conference on Intelligent Transportation Systems (ITSC), pp. 2034–2039 (2016) 5. Hussein, A., Marin-Plaza, P., Garcia, F., Armingol, J.M.: Optimization-based approach for cooperation and coordination of multi-autonomous vehicles. In: Sixteenth International Conference on Computer Aided Systems Theory (EUROCAST), pp. 1–2 (2017) 6. Hussein, A., Marin-Plaza, P., Martin, D., de la Escalera, A., Armingol, J.M.: Autonomous off-road navigation using stereo-vision and laser-rangefinder fusion for outdoor obstacles detection. In: IEEE Intelligent Vehicles Symposium (IV), pp. 104–109 (2016) 7. Kokuti, A., Hussein, A., Marin-Plaza, P., Garcia, F.: V2X communications architecture for off-road autonomous vehicles (iCab). In: Proceedings of IEEE International Conference on Vehicular Electronics and Safety (ICVES) (2017) 8. Marin-Plaza, P., Beltran, J., Hussein, A., Musleh, B., Martin, D., de la Escalera, A., Armingol, J.M.: Stereo vision-based local occupancy grid map for autonomous navigation in ROS. In: Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), vol. 3, pp. 703–708 (2016) 9. Martin, D., Garcia, F., Musleh, B., Olmeda, D., Peláez, G., Marín, P., Ponz, A., Rodríguez, C., Al-Kaff, A., de la Escalera, A., Armingol, J.M.: IvvI 2.0: an intelligent vehicle based on computational perception. Expert Syst. Appl. 41(17), 7927–7944 (2014) 10. Musleh, B., Martin, D., de la Escalera, A., Armingol, J.M.: Visual ego motion estimation in urban environments based on UV disparity. In: Intelligent Vehicles Symposium (IV), 2012 IEEE, pp. 444–449. IEEE (2012) 11. Rösmann, C., Feiten, W., Wösch, T., Hoffmann, F., Bertram, T.: Trajectory modification considering dynamic constraints of autonomous robots. In: 7th German Conference on Robotics, Proceedings of ROBOTIK 2012, pp. 1–6. VDE (2012) 12. Zhang, J., Singh, S.: LOAM: lidar odometry and mapping in real-time. In: Robotics: Science and Systems, vol. 2 (2014)

Challenges in Medical Robotics in the Frame of Industry 4.0

Health 4.0 Oriented to Non-surgical Treatment

Carles Soler(✉)

Casiopea Robotics, Barcelona, Spain
[email protected]

Abstract. The emerging technologies that are shaping Industry 4.0 are also impacting on health. Artificial intelligence, 3D printing, robotics, big data, the Internet of Things and augmented reality, among others, are adding a layer of digitization on top of classical processes, allowing the effectiveness and efficiency of health-related processes to increase and opening a new space of possibilities. In this article, some examples show the state of the art of Health 4.0 in the non-surgical field.

Keywords: Industry 4.0 · Artificial intelligence · 3D printing · Robotics · Big data · Internet of Things · Augmented reality

1 Introduction

“Industry 4.0” is the name of an initiative, initially proposed by the German government, which wants to take advantage of a renewed technological framework in which the use of different technologies (including robotics, artificial intelligence, cloud computing, the Internet of Things, big data, additive manufacturing or augmented reality) aims to make production competitive again in countries with a long industrial tradition. From the application of the concepts associated with Industry 4.0, industry, and especially the manufacturing industry, is accelerating its automation process, adding a layer of digitization on top of physical reality. At the end of the process, it is expected to achieve a highly efficient production environment, where the necessary human resources will be considerably smaller in number than the present ones, and where the professional profiles and competencies required will be very different from those needed today. The concepts associated with Industry 4.0 are not exclusive to manufacturing production processes, but can be applied to areas as diverse as logistics, transportation, energy or health. Thus, the application of emerging technologies that imply the creation of a layer of digitization and aim to increase the efficiency of health-related processes opens a new space of possibilities. Today it is possible to find multiple operational applications of what, in a non-original way, we can qualify with the label “Health 4.0”.


2 Some Health 4.0 Applications Today

Robotics is deeply impacting surgical treatments, but non-surgical treatments are also taking advantage of the emerging technologies. Without wishing to be exhaustive, some significant examples are presented in the following sections.

Disease prediction [1]
Heart attacks are hard to anticipate. In an effort to predict them, many doctors use guidelines similar to those of the American College of Cardiology/American Heart Association (ACC/AHA), based on eight risk factors, including age, cholesterol level and blood pressure. The University of Nottingham (United Kingdom) has led a study comparing the use of the ACC/AHA guidelines with machine-learning algorithms. First, they analyzed data coming from the electronic medical records of 378,256 patients in the United Kingdom. They used about 80% of the data to search for patterns and build their own internal “guidelines”. Then they tested themselves on the remaining records. Using record data available in 2005, they predicted which patients would have their first cardiovascular event over the next 10 years, and checked the guesses against the 2015 records. Neural networks correctly predicted 7.6% more events than the ACC/AHA method, and raised 1.6% fewer false alarms. As prediction leads to prevention (through cholesterol-lowering medication or changes in diet), that would have meant many lives that could have been saved.

Diagnosis [2, 3]
Since 2013 Watson, the IBM artificial intelligence system, has dedicated part of its activity to supporting decision making on cancer treatments, developing diagnoses and proposing a ranking of possible personalized treatments for each patient, ranked by applicability. The system analyzes patient data against thousands of historical cases, information from more than 5,000 h of training by oncologists, nearly 300 medical journals, more than 200 textbooks and 12 million pages of text. After being initially tested at the Memorial Sloan-Kettering Cancer Center, several hospitals around the world today have access to the services of Watson for Oncology. The concordance of its diagnoses with teams of highly specialized oncologists exceeds 90%, and the time required to issue a diagnosis of high complexity has decreased from weeks to hours.

Mobile diagnosis [4, 5]
In early 2017, after training with a database of nearly 130,000 skin disease images, computer scientists at the Stanford Artificial Intelligence Laboratory presented an artificially intelligent diagnosis algorithm for skin cancer. The final product was tested against 21 board-certified dermatologists. In its diagnoses of skin lesions, which represented the most common and deadliest skin cancers, the algorithm matched the performance of the board-certified dermatologists. Once the algorithm has been developed, since the diagnosis is based on an image, it is immediate to think of taking advantage of the photographic capacity of smartphones to bring reliable skin cancer diagnosis everywhere; thus, the developing team is already working on it. The accessibility, intuitive user interface and inherent connectivity offered by smartphones can overcome the limiting factors of many point-of-care tests, creating new and exciting possibilities: a major revolution that will transform the patient experience and accelerate diagnostic decision making.

Remote medical care [6, 7]
Sometimes, in rural or small hospitals, a medical specialist is physically unavailable. In these cases, a robot can be deployed to check in on a patient with a physician from elsewhere. Since 2013, the InTouch Vita robot has aimed to facilitate remote medical care. It includes built-in ports for diagnostic devices like ultrasound machines, stethoscopes and otoscopes, plus two cameras and the ability to navigate autonomously in a non-structured environment.

New treatment plans [8, 9]
In 2013, Stanford researchers identified a possible new treatment for small cell lung cancer, which is particularly deadly. The pipeline worked by scanning hundreds of thousands of gene-expression profiles (gathered by multiple researchers and stored in large databases) across many different cell types and tissues, some normal and some diseased, some treated with medications and some not. Alone, these profiles may not mean much, but when viewed together, unsuspected patterns and trends appear. It turned out that imipramine, which is an anti-depressant, had the ability to help cure certain types of lung cancer. And because the drug was already approved by the Food and Drug Administration for use in humans, the researchers were able to move quickly and (relatively) inexpensively into human trials.

Remote monitoring [10–12]
For years there has been interest in applications for remote monitoring of patients. For instance, in-home monitoring of vital signs of patients with chronic obstructive pulmonary disease provides more proactive care and makes healthcare more convenient and persistent. Up to now, these applications have been complex and expensive. The current availability and decreasing price of sensors are facilitating the appearance of new products and services. The Consumer Electronics Show (CES) in Las Vegas, the biggest consumer electronics fair in the world, is a great showcase of the new proposals from both established players and new start-ups. Some examples: HTC and Under Armour have presented a wrist-worn activity tracker designed for athletes and a chest strap heart monitor to measure workout intensity. Misfit introduced Specter, a pair of sleep-tracking headphones. GreatCall unveiled the Lively Wearable, a device worn on the wrist or around the neck that tracks activity and offers a mobile emergency response service via a one-touch button that connects seniors to trained agents in emergency situations. L'Oreal teamed up with the flexible electronics company MC10 to create MyUVPatch, an adhesive patch users can apply to their skin and then consult a mobile app to track their UV exposure. Cercacor launched Ember, a finger clip sensor that measures pulse rate and hemoglobin, and connects to an app via Bluetooth. Omron debuted a new blood pressure monitor that also tracks activity and sleep, and connects to an app. Neogia offered a wearable that detects sleep apnea and improves sleeping quality via a personalized artificial intelligence that learns about the user. TempTraq offered a patch-like smart device, which monitors body temperature 24/7. QardioCore presented a heart monitor without patches and wires to record continuous ECG, heart rate, heart rate variability, respiratory rate, skin temperature and activity data, which can be shared with medical professionals. Bloomlife developed a “pregnancy wearable” that measures contractions and sends the information to a smart phone. Bodytrak presented a wearable that measures biometric information from the ear: body temperature, heart rate, VO2, speed, distance and cadence, continuously and in real-time.

Pre-surgical models [13, 14]
3D printed pre-surgical planning models, based on patients' scans, are being used all over the world with the goal of improving the accuracy and efficacy of complicated surgeries. 3D printed replicas allow physicians to evaluate and interact with patient anatomy in ways 2D images cannot, allowing unprecedented preparation for complex surgical cases, as doctors are able to examine all angles before even touching a scalpel. Physicians know from first-hand experience that 3D printed patient-specific models improve surgery, improve outcomes and result in lower treatment costs. While originally conceived as a way to help surgeons with complicated or extremely delicate surgical procedures, doctors are discovering some unexpected benefits from using 3D printed replicas of patients' organs for surgical pre-planning for more common surgeries. 3D-printed anatomical models allow doctors to get a much clearer idea of their patient's internal anatomy, and make better treatment recommendations (Fig. 1).

Fig. 1. 3D printed heart (Source: 3dprint.com)

Implants [15–17]
In 2014, the first tests of replacements of knee and hip by 3D printed ones were completed. That year, Peking University Third Hospital's Orthopedics Department announced clinical trials of 3D printed artificial vertebral bodies. But dental prostheses were the first to be introduced as a fully operational customized medical solution. Thus, in 2015 the Catalan company Avinent Implant System had already passed the testing stage and regulatory issues and was offering its customized implant solutions by 3D printing. Today, they are offering a complete digital solution of personalized implants to hospitals and clinics in areas such as reconstructive surgery, orthognathic surgery, neurosurgery, panfacial fractures, condyle reconstructions, and reconstructions of any part of the body, such as the spinal column. Implants have been used for decades. What is new is the way in which 3D printing allows the shapes of the orthotics to be created. There is an incredible flexibility in terms of the forms that can be generated. Rather than relying on boxy shapes that are easier to produce, the 3D printer's ability to create extremely complex geometries means a reduction in the amount of additional hardware necessary to force a generic implant to stay in place. Another benefit is the porous nature of the printed implant, which allows bones to grow into the implant, creating a natural bond. As with anything introduced into the body, these implants must be sterile and compatible with living cells. While it is a major challenge in the field, newer printable biocompatible materials are being developed for use in humans; some are synthetic polymers and some are derived from natural products like gelatin or seaweed (Fig. 2).

Fig. 2. 3D printed spine (Source: 3dprint.com)

Organ printing [18–20]
With many patients on the waiting list for an organ transplant and a shortage of donated organs, organ printing is a promising revolutionary technology for saving human lives. The goal is to print a working organ that can be transplanted into a human, but according to the estimations of researchers, the ability to produce 3D printed complex organs is at least a decade away. Meanwhile, printing living structures with bio-ink using a 3D printer is already a reality. In 2014, the company Organovo delivered the first sample of its 3D bioprinted liver tissue. Today, bioprinted lung, liver, skeletal muscle, cardiac, blood vessel, skin, bone, cartilage and nerve tissue are available. These tissue samples can be used by pharmaceutical companies in toxicology tests of new drug candidates. It is also possible to create structures like ears or noses. In 2016, the Wake Forest Institute for Regenerative Medicine unveiled a system that allows cartilage tissue and blood vessels to form naturally into those bioprinted structures after a couple of months. But it will take a while to start trials in humans.

Minimally invasive interventions [21, 22]
Nanorobots have huge potential for application in medical robotics, for which they offer accuracy in their performance of operations along with minimal invasion of the patient. Researchers from Johns Hopkins University are developing “soft” robots with microgrippers that are capable of adhering to specific body tissues. These could be used in extraction procedures for biopsies or for the localized injection of drugs. In another area, researchers from the University of Bristol are studying how swarms of nanorobots could detect cancer cells and carry out non-invasive surgical interventions at the cellular level in patients with tumors. For their part, researchers from the Swiss Federal Institute of Technology in Zurich (ETHZ) are working with magnetically guided nanorobots to carry out eye operations on patients with cataracts and glaucoma.

Surgical support [23, 24]
The team of Dr Itaru Endo, head of the department of surgery of the digestive tract and liver transplantation at Yokohama City University, in collaboration with Fraunhofer MEVIS, has developed a machine-vision system, based on an iPad app, to give surgical support in liver operations. The application, which is now being clinically evaluated, gives access to three-dimensional surgical data. Thanks to augmented reality techniques, during the operation images of the vascular system are superimposed so as to allow the pattern of blood circulation in the liver to be discerned, which would otherwise be invisible to the naked human eye. The application also shows the areas of blood flow and assesses the potential risks in real time (Fig. 3).

Fig. 3. Fraunhofer iPad app guides liver surgery through augmented reality (Source: www.engadget.com)

Sample processing [25]
Classical industrial robots are already present in laboratories, either for the processing of samples, for the preparation of drug dosages, or for preparing casework DNA samples. These activities need multiple steps that, when done with a manual workflow, are tedious. Robots can automate most, if not all, of these processes. That improves efficiency, saves time, reduces human error and improves documentation, enabling staff to focus on analysis and interpretation rather than process tasks.


Prosthetics [26, 27]
3D printing and open source platforms are helping to create low-cost prosthetic devices that users can buy at a fraction of the cost of traditional commercial prostheses. These prostheses may have reduced functionality, but they represent a clear advantage over traditional non-movable prostheses. In the open source domain, pure magic can happen when designers, experts in robotics, makers, and owners of 3D printers get together to bring robotic prostheses to people who otherwise could not afford them, and especially to children. The e-NABLE project involves a community of more than 3,200 people who have already created more than 700 prostheses for users all over the world. The designs are intended to be as functional as possible, rather than to mimic the human limb, and can be customized to suit individual needs (fingers, feet, whole arms, elbows…). The “Hando-matic” software allows users to design their own prosthesis and to obtain the data files for its 3D printing, which they can do themselves or get another person in the community who has a printer to do for them (Fig. 4).

Fig. 4. 3D printed prosthetic hand (Source: enablingthefuture.org)

Rehabilitation and mobility aid [28–30]
The first working medical exoskeleton was created in 1972 by the Mihajlo Pupin Institute in Belgrade, Yugoslavia. Although several labs demonstrated early success with exoskeleton development for medical use, research in industrial robotics was far more practical and profitable. It was not until decades later that exoskeleton development was restarted. There are two main categories of medical exoskeletons: rehabilitation and mobility aid. A rehabilitation device is used as a tool to augment a physical rehabilitation program and is no longer used after the completion of the program. A mobility aid wearable will be used permanently and the user is not expected to progress in the recovery process. In this case the exoskeleton becomes an augmentative device. As examples, the Ekso GT by Ekso Bionics has to be used by a person that still has some mobility, and the user has to be able to shift weight from one leg to the other (its software allows a gradual decrease in the assistive force as the patient becomes stronger during rehabilitation). In contrast, the REX by REX Bionics takes full control of walking, including transferring weight from one leg to the other, so its operator “rides” the suit and does not use their own muscles to walk. Although with less presence in the market, there also exist upper body exoskeletons, focusing on the arm (shoulder, elbow) or hand (fingers, wrist) with strengthening or augmentation functions (Fig. 5).

Fig. 5. Mobility exoskeleton (Source: www.medicaltourismitaly.com)

Assistive solutions [31–34]
Robotics is proposing solutions to make life easier for people with disabilities. The company Whill has gone beyond the concept of the wheelchair to introduce the “personal electric vehicle”. Robot Care Systems has created LEA (Lean Empowering Assistant), an elderly walker that supports more active and safer movement, so the elderly can live independently in their own home for longer. Camanio Care focuses its activity on positive mealtime experiences for the elderly or people with different disabilities, offering assistive eating and drinking aids like Bestic and Drinc. In the field of support for people with neurological disorders there is an extensive collection of advanced interactive robots. For years, robotic teddies looking like seals, dogs, cats, bears or rabbits have been helping in the treatment of patients with autism or Alzheimer's. For instance, since 2003 PARO has stimulated interactions and improved the socialization of patients (Fig. 6).

Fig. 6. PARO (Source: www.parorobots.com)


Delivery transport [35, 36]
Hospitals move an immense amount of materials through hallways, on elevators, in basements and to patient units. This is a complex and demanding internal logistics challenge that has implications for costs, quality and safety. Automating the delivery of this material generates considerable savings, efficiency and worker satisfaction, because if robots perform the delivery and transportation tasks they release clinical and service staff to focus on patient care. One of the first was UCSF Medical Center at Mission Bay, which in 2017 incorporated a fleet of 25 autonomous robots made by Aethon for the transportation of medicines, medical equipment, bedding, food and waste. These robots have an integrated mapping system to orient themselves wherever they might be on the premises and they communicate with the hospital information systems by means of Wi-Fi. Their infrared and ultrasonic sensors help them to avoid collisions and they use radio waves to open the doors of elevators, which they can only use if they are empty of people (Fig. 7).

Fig. 7. TUG (Source: www.aethon.com)

Economical efficiency [37]
The current evidence shows that, for most patients, all drugs from a class called statins used to prevent cardiovascular problems are equally safe and effective, so doctors are usually advised to use the cheapest. In a project that cost almost nothing to build, by doing one simple analysis on publicly available NHS prescriptions data, Mastodon C, a big data start-up company, found that in one year in England an average of £27 m a month of potentially unnecessary expenditure on the two proprietary statins took place. As Sir Bruce Keogh, Medical Director of the NHS Commissioning Board, said: “Variation in prescribing habits costs the NHS millions of pounds a year. Transparent sharing of information will help clinicians understand whether they are over or under prescribing. This will focus minds in a way that will not only improve the quality of treatment for patients but also reduce cost and free up money for re-investment in other parts of the NHS.”

3 Conclusions and Further Developments

The cases previously presented are only a few examples of what these technologies will allow us to do over the next decade, which will bring about a revolution in the field of medicine. We won't have to wait long to see achievements that today are only featured in science fiction. But Health 4.0 is not only a technological issue. Legal and ethical aspects will have to be faced with an open mind but in a very strict way: the border between what will be feasible and what will be acceptable too often won't be clear.

References
1. http://www.sciencemag.org/news/2017/04/self-taught-artificial-intelligence-beats-doctorspredicting-heart-attacks
2. https://www.ibm.com/watson/health/oncology-and-genomics/oncology
3. https://www.mskcc.org/about/innovative-collaborations/watson-oncology
4. http://news.stanford.edu/2017/01/25/artificial-intelligence-used-identify-skin-cancer/
5. https://www.nature.com/nature/journal/v542/n7639/full/nature21056.html
6. http://robohub.org/the-state-of-telepresence-healthcare-and-telemedicine/
7. https://www.intouchhealth.com/
8. http://www.datapine.com/blog/big-data-examples-in-healthcare/
9. http://scopeblog.stanford.edu/2013/09/27/big-data-big-finds-clinical-trial-for-deadly-lungcancer-launched-by-stanford-study/
10. https://mapr.com/blog/5-big-data-trends-healthcare-2017/
11. http://www.mobihealthnews.com/content/ces-2016-running-list-health-and-wellnessdevices
12. http://medicalfuturist.com/10-best-health-technology-innovations-ces-2017/
13. http://blog.stratasys.com/2017/03/08/3dheart-surgical-model-study/
14. http://www.materialise.com/en/blog/how-3d-printed-renal-models-improve-pre-surgicaldecisions
15. https://digital-health.avinent.com/eng/default.cfm
16. https://3dprint.com/165236/3d-printed-vertebrae-india/
17. http://www.faseb.org/Portals/2/PDFs/opa/2016/FASEB-HorizonsInBioscience-3DBioprinting.pdf
18. http://www.qmed.com/mpmn/gallery/image/1-3-d-printed-tissues-and-organs-promise-andchallenges
19. https://www.economist.com/news/science-and-technology/21715638-how-build-organsscratch
20. http://gizmodo.com/3d-bioprinting-just-took-a-major-step-forward-1758803208
21. https://spectrum.ieee.org/robotics/medical-robots/medical-microbots-take-a-fantasticvoyage-into-reality
22. http://robohub.org/minimally-invasive-eye-surgery-on-the-horizon-as-magneticallyguided-microbots-move-toward-clinical-trials/
23. https://www.engadget.com/2013/08/22/fraunhofer-ipad-app-guides-liver-surgery/
24. https://www.apple.com/ipad/life-on-ipad/new-eyes-for-hands-on-surgery/
25. https://www.forensicmag.com/article/2016/01/robotics-rescue-automated-sampleprocessing
26. http://enablingthefuture.org/
27. http://www.openhandproject.org/
28. http://exoskeletonreport.com/2016/06/medical-exoskeletons/
29. http://eksobionics.com/
30. https://www.rexbionics.com/
31. http://whill.us/
32. http://www.robotcaresystems.nl/
33. http://www.camanio.com/en/
34. http://www.parorobots.com/
35. http://www.ucsfmissionbayhospitals.org/articles/high-tech-tug-robots-do-heavy-lifting-atmission-bay.html
36. http://www.aethon.com/tug/tughealthcare/
37. https://theodi.org/news/prescription-savings-worth-millions-identified-odi-incubatedcompany

Collaborative Robots for Surgical Applications

Álvaro Bertelsen1(✉), Davide Scorza1, Camilo Cortés1, Jon Oñativia2, Álvaro Escudero2, Emilio Sánchez3, and Jorge Presa2

1 Vicomtech-IK4, San Sebastián, Spain
[email protected]
2 Egile Innovative Solutions, Mendaro, Spain
3 CEIT and TECNUN, San Sebastián, Spain

Abstract. This work presents the impact that collaborative robotic technologies can offer for surgical applications, with emphasis on the tracking and execution steps. In particular, a new workflow for spine and trauma surgery is presented, in which a miniature mechanical tracker is attached directly to the patient's bony structure (patent pending). The tracker is capable of following the patient's motion with high precision, measuring the deviation with respect to the trajectories defined in the surgical plan and providing a feedback channel to a robot which assists the surgeon by holding the surgical tools in place. The clinical application of vertebral fusion has been chosen as the testing scenario and preliminary results are presented to demonstrate the feasibility of this concept.

1 Introduction

The recent advances in artificial intelligence have paved the way for the Industry 4.0 concept, loosely defined as the fourth industrial revolution, preceded by those triggered by the introduction of steam power (first industrial revolution), assembly lines and electricity (second) and control and automation (third). In the upcoming industrial revolution, robots and automated systems should be able to work in coordination, sharing information between them and taking decentralised decisions. According to the definitions of Hermann, Pentek and Otto, Industry 4.0 is centred on the concept of cyber-physical systems (CPS) –defined as networks of interacting physical and digital elements– with six key design aspects: interoperability, virtualisation, decentralisation, real-time capability, service orientation and modularity [1]. Among the possible interaction modalities covered by Industry 4.0, collaborative robots –defined as robotic assistants designed to work hand-in-hand with human workers– take a relevant role to increase manufacturing efficiency and flexibility (Fig. 1).

Surgery –and healthcare in general– is also experiencing the push of the Industry 4.0 technologies, opening the way to a new era of intelligent and data-driven surgery and giving birth to the new discipline of surgical data science [2]. According to Maier-Hein et al., three key clinical applications would benefit from the achievements of surgical data science, which are surgical training, decision support and context-aware assistance [2]. In the last area, autonomous assistance can provide surgeons with timely information through surgical phase recognition, decision support through patient-specific simulations, and collaborative robots, the latter also identified as a key component of Industry 4.0 initiatives.

The introduction of robots into operating rooms has been gradual and slower than expected. In fact, tasks carried out during surgery are inherently collaborative, as they are always exerted on a human person –the patient– and with the assistance of the surgeon, nurses, anesthetists and the rest of the surgical staff: in the operating room, there is no way to isolate the robots from persons, as is usually done in industry. This requires providing robots with superior sensing, cognitive and interaction capabilities, to effectively cope with the lack of structure of the surgical situation and offer the required level of safety to avoid harm to the patient and surgical staff.

Fig. 1. The evolution of surgical practice. In the past, the physician offered treatment with minimum equipment. In the present, a wealth of information is given by a set of isolated sources, leaving the surgeon with the task of using all the available information based on their experience and knowledge domain. In the future, surgery will be based on a network of intelligent devices –among them, surgical robots– that will offer a holistic processing of all the available patient data. Picture originally published by Maier-Hein et al. [2]


To achieve the capabilities envisaged by the Industry 4.0 and Surgical Data Science paradigms, collaborative surgical robots first need to offer solid solutions to two fundamental problems –registration and tracking– required to effectively sense the patient's location and follow it over time. Registration consists in the computation of the geometrical transform that maps the coordinate system of the surgical plan to the coordinate system of the real-world scenario, whereas tracking consists in the detection of displacements over time of the alignment computed in the registration. In this work, a novel surgical assistant is presented, which offers innovative solutions to these two basic problems. In the following sections, an overall description is given as well as their application to the specific field of spinal surgery, which has demanding requirements that justify their development.

2 Bone-Mounted Mechanical Tracker

2.1 Need of More Precise Tracking Technology

The need for tracking technology in the operating room is twofold, as it is required to follow the motion of both the instruments and the patient. Two tracking technologies are most widespread nowadays: optical and electromagnetic tracking. Optical tracking employs a set of cameras which detect groups of markers attached to the surgical instruments and the patient's anatomy, computing their positions in three-dimensional space. Electromagnetic tracking employs generators that emit a space-varying magnetic field which is sensed by small detectors fitted on the instruments. Whereas optical tracking has relatively high precision, it suffers from the need to keep a clear line of sight between the cameras and the markers. On the other hand, electromagnetic tracking does not suffer from occlusions, but its precision is lower and its performance is seriously affected by the proximity of metallic objects, which is a usual situation in many surgical scenarios such as trauma surgery [3]. Although optical tracking is acknowledged as the tracking technology that offers the highest level of precision, its error levels are still insufficient for certain procedures such as vertebral fusion, which requires submillimetric precision to avoid nerve damage or vertebral fracture [6]. Determination of the optical trackers' error levels is a question that is difficult to answer: a study published by Elfring et al. compared multiple optical trackers –all of them available commercially– and estimated error levels between 0.22 and 1.02 mm [4]. Although the minimum error levels that can be attained are low, the optical trackers' performance is seriously degraded by the distance and angle between the markers and cameras, which are factors that are difficult to control in a real surgical scenario.

On the other hand, mechanical tracking offers considerable advantages for the surgical field which have not been adequately explored. These trackers are based on a mechanical linkage with sensors on each of its joints, which detect the displacement of its end-effector, which is attached to the patient's anatomy. These devices can be lightweight, are unaffected by occlusion and offer unmatched levels of precision due to their stable build and highly precise pose-estimation methods based on the resolution of their forward kinematics equations. Their main drawbacks are their limited working space and their obtrusiveness, although the latter can be alleviated by an appropriate design that can be effectively deployed without interfering with the surgeon's workspace.

2.2 Description of the Robotic Assisted Surgery System

The system presented here allows surgeons to plan a surgery on a preoperative CT scan before the patient goes into the operating room. During the surgery, the robotic assistant system provides a robust tool guide that is positioned and oriented according to the preplanned trajectories in order to perform the intervention in a minimally invasive way. This is achieved by means of a two-step process. First, the position of the patient is registered accurately by attaching a clamp with a registration fiducial on a non-critical and accessible area of the targeted bone. Second, after the position of the patient is registered, the relative movements between the robotic arm and the patient are constantly monitored in real time by a mechanical tracking system that is mounted on the end effector of the robotic arm. When bone motion is detected, the robotic arm compensates such movement in order to keep the same position and orientation of the tool guide with respect to the bone. Figures 2a and b illustrate the tracking system attached to the vertebra with the tool guide pointing to the targeted area.
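The bone-motion compensation described above reduces to a composition of rigid transforms: whatever displacement the tracker reports for the bone must also be applied to the tool-guide set-point, so that the guide keeps its pose relative to the bone. The following minimal sketch illustrates the idea with generic 4x4 homogeneous transforms; the frame names are hypothetical and the snippet is not the authors' implementation.

import numpy as np

def compensate_tool_pose(T_world_bone_prev, T_world_bone_now, T_world_tool_prev):
    """Return the new tool-guide pose that keeps the tool fixed w.r.t. the bone.

    All arguments are 4x4 homogeneous transforms. When the tracker reports a new
    bone pose, the desired tool pose is obtained by re-expressing the (constant)
    bone-to-tool transform in the new bone frame.
    """
    # Constant relation between bone and tool guide, captured at the previous instant.
    T_bone_tool = np.linalg.inv(T_world_bone_prev) @ T_world_tool_prev
    # New tool pose that preserves that relation after the bone has moved.
    return T_world_bone_now @ T_bone_tool

In a control loop, the current bone pose would come from the mechanical tracker described in Sect. 2.3 and the returned pose would be sent as the new set-point of the robotic arm.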


Fig. 2. Tool guide and mechanical tracking system. Both the tracking system and the tool guide are mounted on the end-effector of a robotic arm. The distal end of the tracking system is attached to the vertebra in order to monitor changes in position and orientation of the targeted bone. The robotic arm keeps the tool guide on the same trajectory at all times by compensating for bone movement.

The system is designed to achieve high accuracy without slowing down the workflow of the surgeon in the operating room. To achieve this goal, the mechanism to mount the registration fiducial or the tracking system on the clamp that is attached to the bone is based on a magnetic kinematic coupling system. The base part of the coupling mechanism belongs to the clamp that is fixed to the bone. This base part presents three V-shaped grooves with a magnet at the center. The top part of the coupling mechanism, which goes on the registration fiducial and at the distal end of the tracker, presents three spheres and another magnet. This coupling mechanism creates a precise and repeatable interface between two rigid bodies [5]. The system provides six contact points, two per sphere, in order to guarantee that the coupling mechanism constrains the six degrees of freedom of the relative movement between the clamp and the tip of the tracker or the registration fiducial. Figure 3 illustrates the coupling mechanism and the registration fiducial.


Fig. 3. Registration fiducial with magnetic kinematic coupling mechanism to mount the fiducial on the clamp in a fast and precise manner. In (a) the base and the top portions of the kinematic coupling mechanism can be seen. In (b) the fiducial is mounted on the clamp.

The joint coupling of the tracking system and the tool guide on the robotic arm's end effector makes the system very accurate and avoids the presence of additional elements such as cameras or magnetic field generators. The tracker's base and the tool guide belong to the same rigid body and their geometrical relationship is measured during the manufacturing process. Therefore, there is no need for any extra calibration step during the intervention. Moreover, since the tracking system monitors the exact positioning of the tool guide, the overall accuracy of the system is higher than that of other systems where the tracker monitors the position of the base of the robot. In those cases, the overall accuracy is directly influenced by the precision of the robotic arm, which is effectively compensated with the proposed configuration.

2.3 Mechanical Tracking System

The tracking device is a passive mechanical system with six angular joints that are constantly monitored with rotary encoders. In order to monitor changes in position and orientation of the bone with respect to the tool guide without constraining the mobility of the bone, the tracking system has to present at least six degrees of freedom. The exact position and orientation of the tip of the tracker are obtained by combining the readings of the rotary encoders with the forward kinematic equations of the kinematic chain.
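As an illustration of how the tip pose follows from the six encoder readings, the sketch below chains one homogeneous transform per joint. The joint arrangement and the segment lengths are placeholder assumptions, not the real geometry of the device.

import numpy as np

def rot_z(q):  # rotation about a segment's longitudinal axis
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rot_y(q):  # rotation about an axis perpendicular to the segment
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

def trans_z(d):  # translation along the segment
    T = np.eye(4)
    T[2, 3] = d
    return T

def forward_kinematics(q, lengths=(0.1, 0.1, 0.1, 0.1)):
    """Pose of the tracker tip in the base frame for joint angles q = (q1..q6).

    q1, q4 and q6 spin about the segment axes; q2, q3 and q5 bend about
    perpendicular axes, as described in the text. The four segment lengths
    (10 cm each) are illustrative values only.
    """
    l1, l2, l3, l4 = lengths
    T = rot_z(q[0]) @ trans_z(l1) \
        @ rot_y(q[1]) @ trans_z(l2) \
        @ rot_y(q[2]) @ rot_z(q[3]) @ trans_z(l3) \
        @ rot_y(q[4]) @ rot_z(q[5]) @ trans_z(l4)
    return T  # 4x4 homogeneous transform of the tracker tip

# Raw 16-bit encoder counts would be converted to radians before evaluating the
# chain, e.g. q_i = 2 * np.pi * counts_i / 2**16.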

Fig. 4. Schematic representation of the kinematic chain of the mechanical tracking device: the tool guide and the tracker are mounted on the robotic arm end effector, with joints q1 to q6 along the chain.

In Fig. 4 a schematic representation of the tracking system is provided. The system presents six rotary joints with their corresponding rotary encoders. The first, fourth and sixth encoders (q1, q4 and q6) track the rotational movement along the longitudinal axes of the corresponding segments, and the second, third and fifth encoders (q2, q3 and q5) track the rotational movement along axes that are perpendicular to the segments. The accuracy of the tracker depends on the length of the segments, the resolution of the encoders and the manufacturing and mounting process of the tracker itself. With a combination of four segments with lengths of the order of 10 cm and six rotary encoders with resolutions of 16 bits, an overall theoretical accuracy of the order of 50 µm can be achieved. However, experimental evaluation is required to include other factors such as calibration errors, slacks and dead zones of the kinematic chain. Ongoing work is currently being done on this subject (Fig. 5).
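The quoted order of magnitude can be reproduced with a simple error budget: a 16-bit encoder quantises each joint angle to 2π/2^16 ≈ 9.6·10^-5 rad, and an angular error Δθ at a joint displaces the tip by roughly L·Δθ, where L is the distance from that joint to the tip. The sketch below combines assumed per-joint contributions in quadrature; the lever arms are illustrative values for a chain of four 10 cm segments, not measured data.

import numpy as np

ENCODER_BITS = 16
d_theta = 2 * np.pi / 2**ENCODER_BITS        # angular resolution per joint, ~9.6e-5 rad

# Assumed distances from each bending joint to the tracker tip (m) for a chain
# of four 10 cm segments; the spin joints are neglected in this rough estimate.
lever_arms = np.array([0.4, 0.3, 0.2, 0.1])

tip_errors = lever_arms * d_theta            # tip error contribution per joint (m)
combined = np.sqrt(np.sum(tip_errors**2))    # root-sum-square combination

print(tip_errors * 1e6)                      # ≈ [38.3, 28.8, 19.2, 9.6] micrometres
print(combined * 1e6)                        # ≈ 52 micrometres, i.e. the order of 50 µm quoted above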

3 Application on Spinal Surgery

3.1 Description of the Surgical Procedure

The proposed mechanical tracking device can have considerable impact on high-precision surgical tasks, among which vertebral screw insertion is a relevant example. Screw insertion is a fundamental task in multiple spinal surgeries, such as cervical body fusion and transpedicular fixation, which consists in the insertion of screws into the vertebrae through their pedicles and their anchoring in their bodies' cortical bone. The precision requirements for this task are quite demanding, as there are considerable risks of nerve damage –which can lead to intense pain or even disability– or rupture of surrounding blood vessels, leading to internal hemorrhage. The required level of precision varies among vertebrae –ranging from 0.1 mm up to 3.8 mm– with the large majority of vertebrae requiring sub-millimeter accuracy and with the particular case of the T5 vertebra, which may require insertion of screws with diameters larger than the critical pedicular dimensions [6]. Unsurprisingly, most surgical robots designed for spinal applications have focused on this particular task [7].

Fig. 5. Schematic of screw insertion. A misplaced screw can breach the spinal canal producing nerve damage (left) or breach the outer vertebral walls damaging surrounding blood vessels (centre). A correctly placed screw should pass through the axis of the pedicle and be firmly anchored on the cortical bone (right).

3.2 Surgical Planning

According to the proposed workflow, the patient has to undergo a Computerised Axial Tomography (CAT) scan before the surgery. The acquired CAT scan is loaded into 3D Slicer1, extended with a custom planner specifically designed for this purpose [8]. The modules permit interactive placement of the screws, deciding their width and length and ensuring that appropriate anchoring on the cortical bone is achieved.

1 http://www.slicer.org, last visited on July 2017.
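As a rough sketch of the geometric quantities such a planning module manipulates –the screw axis, its length and an admissible diameter derived from the pedicle width– the following snippet uses illustrative coordinates and a hypothetical helper only; it does not rely on the actual 3D Slicer extension developed by the authors.

import numpy as np

def plan_screw(entry_point, target_point, pedicle_width_mm, safety_margin_mm=1.0):
    """Derive the screw axis, length and largest admissible diameter.

    entry_point / target_point: 3D points in image coordinates (mm), e.g. picked
    on the pedicle entry and on the anterior vertebral body cortex.
    """
    entry = np.asarray(entry_point, dtype=float)
    target = np.asarray(target_point, dtype=float)
    axis = target - entry
    length = np.linalg.norm(axis)            # screw length along the trajectory
    direction = axis / length                # unit insertion direction
    max_diameter = pedicle_width_mm - safety_margin_mm
    return direction, length, max_diameter

# Illustrative numbers only (mm):
direction, length, max_d = plan_screw([0, 0, 0], [10, 35, 5], pedicle_width_mm=7.0)
print(direction, round(length, 1), max_d)    # length ≈ 36.7 mm, diameter ≤ 6 mm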

3.3 Intra-operative Workflow

Once the patient enters the operating room and is put under anesthesia, a small incision is made over the vertebrae to be operated and a clamp is attached to one of the spinous processes. Then, a 3D printed fiducial object is attached to the clamp and a set of two fluoroscopic images is acquired. The fiducial object projection is segmented automatically and the relative position between the imaging plane and the origin of the fiducial object is computed. Then, a geometry containing two three-dimensional planes is produced. Once the geometry is computed, the pre-operative CAT scan is placed into the geometrical scene by means of a 2D-3D registration algorithm. This generates a set of Digitally Reconstructed Radiographs (DRR), projecting the CAT data onto the radiographic planes. The DRRs and actual radiographs are compared –computing a similarity metric– and the CAT image's position and orientation are updated until the similarity metric reaches a maximum value. A multi-resolution scheme using 3 levels –with downsamplings of 4, 2 and 1– is used to prevent falling into local minima.

After registration, the pre-operative and intra-operative scenes are brought into alignment, but the patient's motion has to be tracked to avoid misalignments. To achieve this, the fiducial is removed from the clamp and the tracker's end effector is connected in its place. Thus, any subsequent motion of the patient will be detected by the tracker and transmitted to the rest of the system. This feedback motion signal is fed into a robotic manipulator, which is commanded to align the tool holder's axis to the insertion trajectories over the pedicles. The surgeon is able to mill the pedicle and then insert the screw using surgical tools inserted through the cannula. Once a screw is successfully inserted, the robot is commanded to place the instrument holder on the next pedicle and the insertion procedure is repeated.
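The intensity-based 2D-3D registration loop described above –DRR generation, similarity computation and pose update with a coarse-to-fine schedule– can be summarised as in the sketch below. The normalised cross-correlation metric and the Powell optimiser follow the choices stated in the next subsection, but the DRR projector is left as a stub and the code is an assumed outline, not the authors' implementation.

import numpy as np
from scipy.optimize import minimize

def ncc(a, b):
    """Normalised cross-correlation between two images (higher is better)."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def register_2d3d(ct_volume, radiographs, geometries, render_drr, pose0,
                  levels=(4, 2, 1)):
    """Estimate the CT pose that best explains the intra-operative radiographs.

    render_drr(ct_volume, geometry, pose, downsampling) must return a DRR for the
    given radiographic plane; its ray-casting implementation is omitted here.
    pose is a 6-vector (3 rotations, 3 translations).
    """
    pose = np.asarray(pose0, dtype=float)
    for level in levels:                      # coarse-to-fine, downsampling 4, 2, 1
        def cost(p):
            score = 0.0
            for radiograph, geometry in zip(radiographs, geometries):
                drr = render_drr(ct_volume, geometry, p, level)
                score += ncc(drr, radiograph[::level, ::level])
            return -score                     # minimise the negative similarity
        pose = minimize(cost, pose, method="Powell").x
    return pose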

3.4 Registration Performance

Within the closed-loop tracking system, the 2D-3D registration is the largest source of error, so its accurate implementation is critical to locate the patient's anatomy with respect to the mechanical tracker. An inaccurate registration will lead to poor tracking, no matter how precise the mechanical tracker is. In the presented workflow, the DRRs and the actual radiographs have similar contrast, so the use of the Normalised Cross Correlation (NCC) metric is appropriate, along with a Powell optimiser, which offers good convergence properties without the need to compute the metric's gradient. The implemented 2D-3D registration algorithm was tested following a public evaluation protocol –originally published by the University of Ljubljana– which defines the 2D and 3D images, starting positions and performance evaluation metrics [9].

The accuracy of 2D-3D registration algorithms is affected by the number of 2D radiographs included in the registration and their relative angles [10]. Although better results are obtained by increasing the number of images, surgeons are normally reluctant to acquire more than two radiographs during interventions, so this number was fixed at two. The registration performance was found to be affected not only by the relative angle between radiographs, but also by the absolute angle, measured with respect to the patient axes. To measure this effect, an experiment was made using the Ljubljana dataset, running multiple registrations using different sets of 2D images separated by 40, 60, 80 and 100°. For each included vertebra, a series of registrations –with initial mean Target Registration Error (mTRE) up to 10 mm– were conducted, measuring their final mTRE.

Results –summarised in Fig. 6 and Table 1– show that the optimal angular separation between images is obtained with values between 80 and 100° –in accordance with the results published by Tomaževič et al. [10]– but different image pairs with the same angular separation give considerably different results. In fact, the standard pair formed by the lateral and antero-posterior images never gave the best precision, as illustrated in Fig. 6 and Table 1. Replacing the standard pair by a more optimal one can yield error decrements between 0.07 and 0.12 mm. It is hypothesised that the latter effect is caused by the better view of the vertebral pedicles and processes offered by the antero-posterior and oblique images, which could have more structure to help the registration obtain a better fit. This lack of performance of registrations that included the lateral view was also observed in phantom experiments, although those scenarios have additional factors which are not present in the Ljubljana dataset. These are the particular geometry of the fiducial object, which –in the lateral view– has its fiducial beads aligned in a quasi-linear pattern, an ill-posed situation for computation of the radiographic plane position. In addition, the radiographs' field of view is adjusted to cover the largest possible section of the vertebra, pushing the fiducial beads close to the edges of the images, which, in the case of radiographs acquired with image intensifiers, are more affected by distortion. Further experiments are required to better describe this effect.

Fig. 6. Boxplots of observed registration errors for 2D-3D registrations performed on each vertebra of the Ljubljana dataset (panels (a) to (e) correspond to vertebrae L1 to L5), with different pairs of 2D images. The lowest registration errors are obtained with pairs separated by 80 to 100° –in accordance with intuition– although the standard pair –formed by the antero-posterior and lateral views– never obtains the best results. On each plot, the horizontal axis marks the angles of each pair of 2D images and the vertical axis marks the target registration error in mm.

Table 1. TRE for the precision evaluation carried out using the Ljubljana dataset. For each vertebra, the mean TRE obtained for three image pairs –50–150, 30–110 and 30–130, separated by 100, 80 and 100° respectively– is compared with the standard AP-LAT pair. The standard pair did not obtain the most precise result in any case (for each vertebra, the best result is the lowest TRE in its row).

Vertebra  LAT-AP  50–150  30–110   30–130
L1        0.457   0.431   0.443    0.390
L2        0.374   0.302   0.4769   0.449
L3        0.359   0.213   0.31302  0.249
L4        0.275   0.315   0.252    0.156
L5        0.395   0.333   0.284    0.332
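For completeness, the mean Target Registration Error used in this evaluation is simply the average displacement of a set of target points under the estimated transform with respect to the gold-standard one. A minimal sketch, assuming 4x4 homogeneous transforms and target points given in CT coordinates:

import numpy as np

def mean_tre(targets, T_estimated, T_gold):
    """Mean Target Registration Error (mm) over a set of 3D target points.

    targets: (N, 3) array of points defined on the vertebra (e.g. pedicle centres).
    T_estimated, T_gold: 4x4 homogeneous transforms mapping CT to world coordinates.
    """
    targets = np.asarray(targets, dtype=float)
    pts = np.hstack([targets, np.ones((len(targets), 1))])   # homogeneous coordinates
    diff = (pts @ T_estimated.T)[:, :3] - (pts @ T_gold.T)[:, :3]
    return float(np.mean(np.linalg.norm(diff, axis=1)))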

4 Discussion and Conclusions

A novel collaborative surgical robotic assistant has been presented, along with its application to spinal interventions. A description of its components has been given, with particular emphasis on its mechanical tracking and image registration systems. Laboratory experiments characterising their performance have been presented, although additional experiments on phantoms and cadavers are underway.

A novel mechanical tracker has been presented, giving a description of its architecture and components. Preliminary testing has given error levels around 50 µm, which is below the error levels offered by currently available surgical tracking systems. Additional testing is required to fully characterise its performance, and integration tests are needed to evaluate its functionality in more realistic conditions.

In addition, a characterisation of the image registration system –identified as the largest contributor of error– has been given. Preliminary testing with a publicly available dataset has been presented, demonstrating that its precision level is quite high in comparison with other algorithms. In addition, it was shown that its performance is affected not only by the angular separation between 2D images, but also by their angular orientation with respect to the patient axes. Additional experiments are required to better characterise these effects, as well as integration testing on phantoms and cadavers.

The presented collaborative robot falls into the category of intelligent tool holders, capable of holding surgical instruments with minimal deviations and unaffected by tremor, fatigue and patient motion. Although the presented tracking system permits effective following of the patient's movements, there is certain room for improvement in the degree of autonomy given to the robot, which requires better sensing, cognitive capabilities and integration with the surgical situation. The prospect of fully robotised surgeons is certainly far-fetched, but the idea of autonomous robots capable of performing minor surgical tasks is not. In fact, a milestone has been achieved by the STAR system, a robot capable of performing fully automated sutures [11], and similar achievements could be translated to the spinal surgical field.

References
1. Hermann, M., Pentek, T., Otto, B.: Design principles for Industrie 4.0 scenarios. In: 2016 49th Hawaii International Conference on System Sciences (HICSS), pp. 3928–3937. IEEE (2016). doi:10.1109/HICSS.2016.488
2. Maier-Hein, L., et al.: Surgical Data Science: Enabling Next-Generation Surgery (2017). http://arxiv.org/abs/1701.06482
3. Franz, A.M., et al.: Electromagnetic tracking in medicine: a review of technology, validation, and applications. IEEE Trans. Med. Imaging 33, 1702–1725 (2014)
4. Elfring, R., de la Fuente, M., Radermacher, K.: Assessment of optical localizer accuracy for computer aided surgery systems. Comput. Aided Surg. 15, 1–12 (2010)
5. Slocum, A.H.: Design of three-groove kinematic couplings. Precis. Eng. 14(2), 67–76 (1992)
6. Rampersaud, Y.R., Simon, D.A., Foley, K.T.: Accuracy requirements for image-guided spinal pedicle screw placement. Spine (Phila. Pa. 1976) 26, 352–359 (2001)
7. Bertelsen, A., Melo, J., Sánchez, E., Borro, D.: A review of surgical robots for spinal interventions. Int. J. Med. Robot. Comput. Assist. Surg. 9, 407–422 (2013)
8. Fedorov, A., et al.: 3D Slicer as an image computing platform for the quantitative imaging network. Magn. Reson. Imaging 30, 1323–1341 (2012)
9. Tomaževič, D., Likar, B., Pernuš, F.: Gold standard data for evaluation and comparison of 3D/2D registration methods. Comput. Aided Surg. 9, 137–144 (2004)
10. Tomaževič, D., Likar, B., Pernuš, F.: 3D/2D image registration: the impact of X-ray views and their number. Med. Image Comput. Comput. Assist. Interv. 10, 450–457 (2007)
11. Shademan, A., et al.: Supervised autonomous robotic soft tissue surgery. Sci. Transl. Med. 8 (2016)
12. Knez, D., Likar, B., Pernuš, F., Vrtovec, T.: Automated pedicle screw size and trajectory planning by maximization of fastening strength. In: Computational Methods and Clinical Applications for Spine Imaging, CSI 2015, pp. 3–13 (2016)
13. Dao, T.T., et al.: Multimodal medical imaging (CT and dynamic MRI) data and computer-graphics multi-physical model for the estimation of patient specific lumbar spine muscle forces. Data Knowl. Eng. 96–97, 3–18 (2015)

New Technologies in Surgery

Alícia Casals(✉), Narcís Sayols, and Josep Amat

Centre of Research in Biomedical Engineering (CREB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
{alicia.casals,narcis.sayols,josep.amat}@upc.edu

Abstract. The progress of technology and robotics in industry has not led to an equivalent development in the medical field. This paper analyses surgical procedures from the point of view of industrial processes, looking for analogies between both fields so as to evaluate the possibilities of using equivalent technologies in both of them. After analyzing surgical specialties from the mechanical point of view, the actions to be performed, and the main requirements such as precision and working conditions, a look at the main challenges that surgical robotics should face is presented.

Keywords: Surgical robotics · Image registration · Tracking

1 Introduction

Surgery is the medical practice that implies mechanical manipulation of anatomical structures as a means of healing. Since this treatment consists in the mechanical manipulation of tissues, a logical consequence is to consider the possibility of applying production techniques, particularly CAD/CAM and those related to Industry 4.0. However, while technological progress has drastically changed production processes in industry, the application of technology in the medical field, although it has had a very high impact on medical practice, has not evolved at the same pace as in industry. The reasons for this lag are manifold [1]. First, the variability among humans: patients differ from each other in size, shape or proportions, thus impeding the possibility of routinely programming a task on a predefined object and of using common techniques such as CAD/CAM. Humans also differ from machines or industrial parts because many tissues in the human body are soft and deformable, and even hard tissues frequently cannot be fixed in a static position. In this context, trajectories cannot be defined in advance and there is a continuous need to adapt the actuation to changing environments. Unfortunately, the modeling of human organs and tissues is not yet good enough to allow the development of algorithms for the registration of patient modalities and for tracking human movements, so as to adequately position the robot with respect to the relevant anatomical part. And last but not least, when putting technology into practice in the medical field, many other issues arise concerning ethical, legal and safety aspects which should be considered when designing an application or system.


2 Technologies

Medical practice frequently involves three stages. The first and preliminary stage is diagnosis, which is critical since it determines the next steps in the process of healing as well as the way to guarantee an adequate treatment (Fig. 1). The second stage is the one that should implement the best possible treatment that can respond to the specific needs identified in the diagnosis phase. This second stage relies on medical knowledge, but not only that: it also relies on the clinical eye of the physician, who needs a wide database of all the symptoms, of the diagnoses and of the best treatments already experienced. All this information is crucial to help take the right decisions and perform adequately. A treatment admits many variants:

– A psychotherapeutic treatment, in which the organism itself fights against the disease it suffers with the help of psychotherapy.
– A pharmacologic treatment, based on drugs, and chemotherapy addressed to chemically combat cancer.
– A kinesiotherapy treatment, which combines different physical stimuli by means of manual manipulation and, eventually, with the aid of other mechanical or electronic elements or devices on the different body parts, aiming to pursue therapeutic outcomes.
– Surgery, understood as the most severe physical actuation, which can be of different types: Ectomies, Reparative-Reconstructive and Substitutive surgery, and Radiotherapy, Laser and US surgery applied over the specific affected anatomy.

Fig. 1. Schema of the three steps involved in a healing process and the different kinds of treatments

Finally, a third phase, after a good diagnosis and a specific treatment, may require a rehabilitation step, both to complete the previous treatment and to recover the quality of life after an accident, a trauma or the damage caused by aging. This work addresses basically severe treatments, that is, surgery. Their main characteristics are described in the following subsections.

2.1 Ectomies

One of the oldest techniques applied is that of removing the anatomic part considered to be the cause of the disease. This technique, which uses the suffix ectomy to name the concrete treatment, is usually related to actions such as cutting away, amputation or surgical ablation. The name comes from the ancient Greeks, the suffix ectomy meaning the act of cutting. It is thus used to refer to the kind of interventions that imply cutting or removing some materials, parts or tissues from inside the body. This surgery then implies a sequence of actions: cutting, removing and a last action of sewing or fixing the parts. Currently, all technological advances are oriented to improving the technology of the corresponding instruments (electro-cauterization, stapling …) as well as to robotizing the intervention when possible using CAD/CAM techniques. However, as robotizing is normally difficult, surgical robotics frequently relies on teleoperated robots. Cutting and separating tissues was traditionally performed in open surgery. In the 1980s it was replaced in many cases by surgery through small incisions using specific instruments to operate through trocars. This move from open surgery to minimally invasive surgery, MIS, based on the use of specific instruments, opened the doors, on the one hand, to the possibility that these instruments were robotically controlled and, on the other hand, to endowing these instruments with more degrees of freedom than those operated manually, Fig. 2. These instruments specifically developed for MIS have been evolving, facilitating a relatively comfortable operating posture and gestures, even when accessing the inside of the body through a single incision. They gave rise to the so-called Single Port Surgery or Single Incision Laparoscopic Surgery (SILS). Operation through a single port into the abdominal cavity through the belly button, which avoids new visible scars, has also evolved towards Natural Orifice Transluminal Endoscopic Surgery (NOTES). When this surgery, MIS, is used for joint repair, it is named arthroscopic surgery.

Fig. 2. (a) Conventional manual instruments endowed with opening-closing movements and a rotation (roll), and (b) instrument endowed with opening-closing movements plus the three orientation degrees of freedom used in robotized laparoscopic surgery.

2.2 Reparative-Reconstructive-Substitutive Surgery

This kind of surgery refers basically to orthopedic interventions, those dealing with bone tissues, such as hip, knee or shoulder surgery, maxillofacial reconstruction, or spine repair. This set of surgeries relies on mechanical procedures closer to those used in industry, such as drilling, milling, sawing, holing or puncturing. Orthopedic surgery, or orthopedics, is the branch of surgery concerned with conditions involving the musculoskeletal system. This kind of surgery comprises different kinds of operations: cutting, modelling and inserting, aiming to complete damaged bone structures; drilling, using bolts and rods, plates and scaffolds to join, immobilize or reinforce bone parts; and a third technique that consists in substituting natural structures by artificial elements (prostheses).

The first case constitutes the application of CAD/CAM in medicine. A typical intervention is maxillofacial reconstruction, required after removal of a part of the jaw to eliminate cancerous tissues. For this intervention, the planning of the optimal cut of part of a bone from another body part, such as the tibia, is used to extract the exact shape and size of the missing part of the jaw bone. Once planned, the right execution of the cut is achieved using CAM techniques, a process that guarantees the optimization of both the extracted bone part and its posterior insertion. This greatly improves on manual operation, which relies on human visual perception and an iterative try-and-correct strategy. A second type of surgery, that of performing mechanical operations such as inserting screws, plates or scaffolds, is another example of how CAD/CAM techniques can be applied to orthopedic surgery. In these surgeries CAD constitutes the instrument for the 3D modelling of the devices, while simulation techniques provide the necessary assistance to help decide the exact place where perforations should be made, both to assure patient integrity and to guarantee the maximum robustness and mechanical efficiency. Regarding the execution, the CAM process can be performed manually or robotized, depending on the surgical specialty, which may or may not allow immobilization of the anatomical part. When the anatomy cannot be immobilized, as in the case of the femur or the knee, it is also possible to resort to molds, which are fixed over the bone in such a way that the drilling and machining to be performed are no longer subject to the manual imprecision of the surgeons. In the third case, that of implanting prostheses, the work to be done carries with it some machining. But in this case, when the work is done manually, the differences between the optimal hole and the one actually executed, which is needed to implant the prosthesis, can be compensated with the injection of biological cements, which are manufactured from the patient's own bone extracted during the machining prior to the fitting of the prosthesis. For that, in the previous phase CAD permits computing the exact geometrical position of the prosthesis to implant to achieve the maximum mobility and robustness, as well as assuring the best position of the prosthesis in 3D space with respect to the symmetric healthy joint. In all cases, robotization has been introduced once the registration problems have been solved. Registration can involve CAD-images (CT and radiography) or CAD-CAD (CT and a 3D model based on taking fiducial points by means of a navigator). As registration in real time is not always possible, when this happens a partial or total immobilization of the anatomical part is required. These techniques are quite adequate to be robotized. As happens when using CAD/CAM techniques, the automatic operation can be programmed and thus the system can operate autonomously. However, due to the many eventualities that may appear along an intervention, interventions are performed by humans, but with the assistance of a teleoperated robot, or with the increasingly used collaborative robots cooperating with humans.
Working in cooperation, the human and the robot both hold the same surgical instrument; while the human guides the instrument movements, the robot supervises the task according to a pre-established plan and the perception of the environment conditions, allowing the human movements when moving in free space, slowing them down when operating close to critical areas, or steering them when approaching a target to facilitate its safe guidance.

2.3 Radiotherapy

Radiotherapy constitutes a kind of treatment that is an alternative or a complement to surgery and which, like surgery, acts directly on body tissues; in contrast to surgery, however, radiotherapy does not interact physically with the human body, being in this sense less invasive. The challenge is to completely avoid any damage to healthy tissues. With this aim, the radiation source, or radiation beam, should be highly focused on the target tissues. This treatment must imperatively be robotized, since it is not possible to hold and guide the beam manually due to the high requirements in precise focalization together with its high weight. Due to the need to avoid damaging healthy tissues, the beam does not statically focus on the target from a given point, but follows a trajectory in space while keeping the target point as the center of the spherical surface described by the end-effector tip. In this way, the diseased tissue receives a high radiation dose, while other tissues are only slightly irradiated, avoiding their damage. Thus, the robot guarantees the right focalization and minimizes the collateral effects, Fig. 3. To facilitate the access of the tool tip to the whole desired volume, it is possible to add external degrees of freedom to control the bed position and orientation.

Fig. 3. Focalization of the target point with a robot that follows a trajectory with curvature radii R and center of rotation on the target tissue.

2.4 Laser Surgery

The laser has become a multifunctional source of energy. Its name stands for Light Amplification by Stimulated Emission of Radiation. Its polarized light allows concentrating all the power at a given point or along a trajectory, producing different effects on the tissues according to the characteristics and parameters of the radiation. Lasers can be used to cut, for ablation, and for welding or cauterizing. As soft tissues are rich in water, the laser is good for cutting them, so laser therapies are especially suited to treating soft tissues. With the laser it is possible to obtain a strongly and precisely localized destructive effect, making precise cuts. With low-power beams the effect on surface areas produces ablation. The laser can weld when the extracellular components of connective tissues fuse photothermally (retina, cornea, plastic surgery). The main laser surgery is addressed to the eye. Lasers may be used to treat non-refractive conditions (e.g. to seal a retinal tear). Laser eye surgery or laser corneal surgery is a medical procedure that uses a laser to reshape the surface of the eye. This is done to correct myopia (short-sightedness), hypermetropia (long-sightedness) and astigmatism (uneven curvature of the eye's surface). Since back pain is one of the most prevalent afflictions, the laser can also be applied to it.

2.5 Ultrasound

The focalization of acoustic waves produces a local heating that provokes the cellular death of the affected tissues. It is comparatively the least invasive technique, since there are no incisions and, in contrast with radiotherapy, it does not produce radiation. The name HIFU comes from High-Intensity Focused Ultrasound. There are three kinds of actuators for this treatment: (a) piezoelectric actuators, which have limited energy and produce very local effects, as in prostate cancer; (b) electromagnetic actuators, usually arranged in arrays and focusing the beams with mirrors towards the target point (as they have to operate in an aqueous medium they are submerged in water); and (c) hydraulic actuators, which provide higher energy and can be used to destroy stones (lithotripsy). US treatment is mainly used in prostate cancer and to destroy solid tumors of the bone, brain, liver, pancreas and so on.

3 Challenges

While there are well-consolidated application areas, other applications still require solving some technical limitations. Surgery has experienced significant progress in recent years, not only due to the continuous growth of new medical knowledge, but also pushed by the development of new surgical techniques. These technologies have allowed, to different degrees, the introduction of robotics in surgical practice. But the contributions of robotics nowadays rely on its operating precision, its capacity to track predetermined trajectories, and even on providing the assistance required to avoid the physical and mental stress that surgeons suffer in many kinds of surgery, more than on its intelligence, which is still very low, and on being able to operate under a cognition-based behavior. With the aim of taking advantage of the potential of robotics, some bottlenecks still have to be overcome. The current operative limitations lie in the need to work over soft anatomies, which are deformable and elastic and change their shape depending on the patient's position and due to the effect of being manipulated by the surgical instruments. These anatomical characteristics of soft tissues strongly complicate the adaptation of tasks to the physical scene, even repetitive and mechanical ones such as suturing. On the other hand, operation over bone structures, that is, non-deformable hard tissues, can also present these difficulties. This problem appears when, as in many cases, it is not possible to achieve a sufficient immobilization level to be able to consider the operating reference axis fixed in space. This forces the need for localization systems operating in 3D space and at high speed, in order to obtain the 6 degrees of freedom of the anatomical structure with time delays low enough to adjust the reference frames to the anatomy movements in real time. In this way, tasks over moving anatomic parts can be executed as if the parts were fixed, and without producing strains or ruptures on the surgical instruments, no matter how delicate they are. To tackle these problems, the challenges concentrate on:

(a) The capability of dynamically modelling both rigid and flexible elements, based on the data obtained from the available CT images together with the information provided by the sensors used for the perception of the surgical environment, Fig. 4.

Fig. 4. Schema of the localization and registration for robot surgery.

(b) The capability of registering in real time the model obtained dynamically from the working scene with the previously known physical model over which the intervention has been planned.
(c) The capability of tracking the anatomic elements that the perception systems use for their registration with the model.

The challenge established in the research field of surgical robotics is achieving an advance in each of these issues, both to plan and execute certain surgical interventions in an autonomous way, and to allow the introduction of aiding systems. These aids, such as virtual reality, constitute an appropriate solution in teleoperation to benefit from the surgeon's knowledge, experience and ability in surgery, while counting at the same time on the precision and fatigue-free performance of robotics.

3.1 Image Registration

Image registration is a key problem in medical imaging and in particular in image-guided interventions, in which the objective is to align the preoperative and the intraoperative data. Hence, surgical procedures benefit from image registration since it allows providing the surgeon with the current position of the instruments in relation to the patient anatomy. In addition, surgeons can detect at any time that they are approaching nearby vulnerable structures or the objective target. In this context, the use of accurate and real-time methods is necessary. Mathematically, the registration of two images A and B consists in finding, from a set of admissible transformations {φ}, the map φ̃ that best aligns these two images based on a comparison criterion, usually by the minimization of an energy function ε, i.e., φ̃ = arg min_φ ε. Image registration has usually been classified following the criteria written in [2], which basically are image dimensionality, image modality, nature of the registration and geometric type of transformation.
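To make the formulation above concrete, the following Python sketch frames rigid 2D/3D registration as the minimization of an energy function over candidate transformations. The pinhole projection, the sum-of-squared-distances criterion and the optimizer choice are illustrative assumptions, not the method used by the authors.

```python
# Minimal sketch (not the authors' implementation): rigid 2D/3D registration
# posed as energy minimization over a 6-DoF pose, assuming paired fiducials.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def project(points_3d, pose, focal=1000.0):
    """Rigidly transform 3D points and project them with a pinhole camera."""
    rotvec, t = pose[:3], pose[3:]
    cam = Rotation.from_rotvec(rotvec).apply(points_3d) + t
    return focal * cam[:, :2] / cam[:, 2:3]          # perspective division

def energy(pose, points_3d, points_2d):
    """Comparison criterion eps: sum of squared reprojection distances."""
    return np.sum((project(points_3d, pose) - points_2d) ** 2)

def register(points_3d, points_2d, initial_pose):
    """phi_tilde = arg min_phi eps, searched over rotation vector + translation."""
    result = minimize(energy, initial_pose, args=(points_3d, points_2d),
                      method="Nelder-Mead")
    return result.x, result.fun
```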

Registration is classified by the dimensionality of the images into 3D/3D, 2D/2D and 2D/3D. In medical environments, 3D/3D registration is usually used in diagnosis to complement the information of different images such as CT and SPECT. An example of 2D/2D is the registration and display of combined X-ray images in the diagnosis of bone injuries. Instead, in orthopedic surgery, 2D/3D registration is used in navigation based on 3D CT preoperative images and 2D fluoroscopy intraoperative images, because fluoroscopy is a real-time X-ray image which is suitable for detecting bones. Hence, both images must be transformed to the same dimensionality to be registered. In essence, the dimensional correspondence is carried out by projecting the 3D data to a 2D space or by transforming the 2D data into 3D. Three different strategies can be used for this transform: projection, back-projection and reconstruction. The projection strategy transforms the 3D data into a 2D space by using a projection matrix T ∈ M3×4(ℝ). In the second strategy, back-projection, the 2D data are transformed to rays and then the distance between these rays and the 3D data is minimized to obtain the best transformation function. Finally, reconstruction uses a number N > 1 of 2D images to create a 3D image from all of them via a reconstruction function R, and then this reconstruction is compared directly to the 3D image.

Referring to modalities, four types of registration techniques can be identified. First, monomodal, when all the images have the same modality. Second, quasi-intra-modal, a registration where the different images are basically of the same modality but differ in the photon energy and in the detector characteristics; an example of this type is navigation using CT preoperative images and X-ray intraoperative images. The third type is multi-modal registration, which gathers registration methods that use different image modalities; this type is used in diagnosis combining CT and SPECT imaging. Finally, the last category includes the registration methods that try to align images with a standard model (atlas).

Referring to the nature of registration, image registration strategies are classified as extrinsic and intrinsic according to the method used to achieve the correspondence between the two data sets. The first ones are called extrinsic because they rely on artificial objects such as stereotactic frames or external markers implanted into the bone, soft tissues or skin. The big advantage of these techniques is that they do not require high computation, because the optimization algorithm only has to compare a few markers. However, the markers must be placed prior to the acquisition of the preoperative images and cannot be removed until the intervention is over, to guarantee that their position has not changed.

Intrinsic methods are so called because their strategy is the use of anatomical structures taken from the images themselves to be registered. These methods are usually divided into feature-based, intensity-based and gradient-based. As their name indicates, feature-based algorithms rely on finding the transformation that optimizes the distance between some features extracted from the different images. These features can be points [3], curves [4], contours [5] and/or surfaces [6]. The applied algorithms then usually follow an approach similar to, or an adaptation of, the iterative closest point (ICP) [7].
The ICP starts from a given initial transformation and tries to solve this minimization problem by alternately optimizing feature correspondences and transformations.
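As an illustration of the alternation just described, the sketch below implements a bare-bones point-to-point ICP in Python; the nearest-neighbour matching and SVD-based pose update are standard choices assumed here, not details taken from the paper.

```python
# Minimal point-to-point ICP sketch (assumed standard formulation, not the
# authors' code): alternate correspondence search and rigid-pose estimation.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                           # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, iterations=30):
    """Register 'source' points to 'target' points starting from identity."""
    tree = cKDTree(target)
    current = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)                   # correspondence step
        R, t = best_rigid_transform(current, target[idx])  # transformation step
        current = current @ R.T + t
    return current
```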

The intensity-based strategy relies on finding the best correspondence between the information contained in the pixels and voxels of the images. This strategy tries to minimize the difference between the two images and not the distance between some features extracted from the images. Hence, it can only be used when the information in the pixels and voxels of the images is comparable; it is therefore not generally adequate for aligning MR images with X-ray images, although there are some publications using these types of images when contrast agents are applied. Nowadays, as the use of fluoroscopy has spread, there is a lot of research work on this topic, registering 3D CT preoperative images with 2D fluoroscopy/X-ray intraoperative images. The method used to register these data sets is usually based on the projection of digitally reconstructed radiographs (DRRs). These DRRs are generated from a CT image by emitting rays from the camera position to the volume data and summing the intensities of the volume found along the ray path.

Gradient-based methods register images using objective functions that are directly dependent on the image gradient. These methods use the fact that there is a direct relationship between the 3D gradient vectors of CT images and the 2D gradient vectors of an X-ray image.

The classification in terms of geometric transformation is principally divided into rigid and non-rigid registration. Rigid registration is based on a Euclidean transformation, a six-degree-of-freedom transformation composed of a translation and a rotation. In non-rigid registration the transformation used is of any other type, such as affine, polynomial, splines, elastic transformations, local models, etc.

Aiming to achieve real time and adequate accuracy, we are working on a method to improve the computational time of the ICP algorithm. The ICP algorithm is suited to parallel architectures and is stable and robust even for special cases. The principal idea behind our work is the use of relevant points, those that better describe the object shape, in order to obtain better approximations to the solution. The points selected as relevant features are the contour points with higher curvature. The merits of the proposed method come from the search for a trade-off between the computational time, which depends on the number of iterations before reaching a solution, and the achieved accuracy. Thus, the contribution of the method is the improvement in the correspondence step, which reduces the time per iteration. Once all the features have been extracted from the images, the algorithm iteratively performs the following steps: first, projection of all the 3D points to the image plane and selection of the contour points; second, calculation of the curvature value of each contour point; after that, these points are classified by their curvature with respect to the nearby points and, finally, the most relevant points are selected as final features to be used in the ICP. In the optimization step of the ICP, these relevant points are weighted according to their curvature classification. In each following iteration, the number of points finally selected is increased. The algorithm has been tested in a scene using plastic models of different organs in different positions, Fig. 5. The experiments evaluate the accuracy and computational time in relation to the increment in the number of points used in each iteration of the algorithm.
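The following Python fragment sketches the relevant-point selection that drives the modified correspondence step: contour points are ranked by a discrete curvature estimate and only the top fraction is kept, with weights proportional to curvature. The curvature formula and the selection ratio are illustrative assumptions, not values reported by the authors.

```python
# Sketch (assumptions, not the authors' implementation): rank 2D contour points
# by a discrete curvature estimate and keep the most relevant ones for ICP.
import numpy as np

def discrete_curvature(contour):
    """Approximate curvature at each contour point from its two neighbours."""
    prev_pts = np.roll(contour, 1, axis=0)
    next_pts = np.roll(contour, -1, axis=0)
    d1 = (next_pts - prev_pts) / 2.0                  # first derivative
    d2 = next_pts - 2.0 * contour + prev_pts          # second derivative
    num = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
    den = (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5 + 1e-12
    return num / den

def select_relevant_points(contour, keep_fraction=0.2):
    """Return the highest-curvature points and curvature-proportional weights."""
    curvature = discrete_curvature(contour)
    n_keep = max(3, int(keep_fraction * len(contour)))
    order = np.argsort(curvature)[::-1][:n_keep]      # most relevant first
    weights = curvature[order] / curvature[order].sum()
    return contour[order], weights
```

In the ICP optimization the retained points would then be weighted by these values, and keep_fraction increased at each iteration, mirroring the progressive increase in the number of points described above.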

Fig. 5. Registration of the 3D images of the three models with the 2D images obtained from different views.

These experiments show an improvement of about 10% in computational time with respect to the standard ICP algorithm, obtaining the same accuracy. The computational cost can be further tuned to the application needs, since this methodology optimizes the number of points according to the accuracy needed.

3.2 Modeling and Tracking

Position tracking is a key problem in many fields such as robotics, medical engineering and virtual reality. This is the case in surgical treatments that rely on navigation systems: in this type of procedure, the surgeon is assisted by a computer-aided system, mainly in the planning phase and in image-guided surgery. In consequence, these medical systems need both accuracy and real-time execution. Tracking systems are based on different technologies such as electromagnetic systems, radio frequency, ultrasound or medical imaging. The objective of tracking is to determine the position and orientation of the tracked object. In the case of image tracking, the objective is to track a target along a sequence of frames. So here, tracking could be described as a registration with the addition of the temporal dimension. In consequence, the tracking of an object can be carried out by continuously registering the two images, giving as an initial solution for each new frame the position obtained in the previous frame. These tracking algorithms are divided into deterministic and Bayesian. Deterministic systems simply rely on the information extracted from the current frame and on the solution of the previous image to iteratively calculate the new position. Bayesian trackers try to estimate the best possible position of the target assuming that the previous positions and measurements contain statistical noise and other inaccuracies. Bayesian algorithms are divided into two steps called prediction and update. The prediction step uses the information from the prior position to estimate a new position, while the update step uses the information given by the new frame to get a better estimation. In Bayesian trackers, the position of the tracked object is described as a possible state which is characterized by the dynamic and measurement models. The dynamic model determines how the state changes over time (physical laws of motion), and the measurement model characterizes how measurements are obtained from the state and its statistics. The principal algorithms in Bayesian tracking are the Kalman and particle filters. The Kalman filter is a Bayesian algorithm that requires the dynamic and
measurement models to be linear and Gaussian. If these assumptions hold, this algorithm is optimal in terms of minimum mean square error. If the models are not linear, a generalization of the algorithm named the extended Kalman filter can be used. The particle filter is a Bayesian algorithm that approximates the probability models by a sum of Dirac functions (particles) centered on a set of possible solutions. In consequence, its main advantage is that it can be used with any type of stochastic model, neither Gaussian nor linear, and even when the model is unknown. Its principal drawback is that it needs many particles to infer the probabilistic distributions of the models correctly, which makes the algorithm computationally inefficient. In this field, we have implemented a set of Kalman trackers that use the mentioned registration algorithm to obtain new measurements of the possible position in real time. In our approach, we use a Kalman filter for each dimension of the solution (6 in total), because we suppose that the trajectory of the target is continuous in time and, in consequence, it has to be continuous in each dimension. This approach is a linearization of the real model, since it does not take into account the influence of one dimension on the others, which is what introduces non-linearities into the problem; it also permits the parallelization of the system. The Kalman filter was selected for its low computational cost with respect to other Bayesian trackers and because the errors of our measurement model are Gaussian. We define our states not as the position in the frame, but as the trajectory of the last points, to impose on the system a continuity of the solutions and to reduce the variability of the measurements. This imposition and the use of the whole trajectory in the filter determine the temporal response of the system. This filter was selected because in medical applications it is advisable to prevent sudden movements. A response of 18 processing cycles is obtained, that is, a response time of about 0.5 s, a time that still has to be reduced. Taking into account that a cycle time of 30 ms has been achieved with the implemented system, its bandwidth is up to 1/(20 × 30 ms), that is, about 1.7 Hz.
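To illustrate the per-dimension tracker described above, here is a minimal scalar Kalman filter sketch in Python with a constant-velocity dynamic model; the state layout, noise values and model matrices are assumptions chosen for illustration, not the parameters used in the actual system.

```python
# Minimal per-dimension Kalman filter sketch (assumed constant-velocity model,
# illustrative noise values): one such filter per pose dimension, run in parallel.
import numpy as np

class ScalarKalmanTracker:
    def __init__(self, dt=0.03, process_var=1e-3, meas_var=1e-2):
        self.x = np.zeros(2)                        # state: [position, velocity]
        self.P = np.eye(2)                          # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # dynamic model
        self.H = np.array([[1.0, 0.0]])             # we only measure position
        self.Q = process_var * np.eye(2)            # process noise
        self.R = np.array([[meas_var]])             # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]

    def update(self, z):
        y = z - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]

# One filter per degree of freedom of the registered pose (6 in total).
trackers = [ScalarKalmanTracker() for _ in range(6)]
```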

4 Work in Progress

Our research in surgical robotics started with the automatic guidance of the laparoscopic camera, interpreting the area of interest to be visualized as the central point defined by the surgical instrument tips. The first trials in the lab during 1994 and 1995 led to the first vision-based autonomous camera guidance system tested clinically (ICAR95). These first results led to the design of a three-arm robot, with two operating arms and a third for holding the camera. From this platform, mounted as a crane on the operating bed, a new robot was conceived in 2008, which was patented (patent) and motivated the creation of a spin-off, RobSurgicalSystems S.L., in 2012. This is a modular robotic system with 3, 4 or 5 arms, currently in the process of obtaining the CE mark. The challenges are: (a) optimizing the ratio of occupied volume to accessible space, (b) the design of a telecontrol station that allows the surgeon to be brought closer to the patient and the operative field and that introduces virtual reality and haptic feedback, and (c) the design of instruments (avoiding current patents) under the constraints imposed by the existing live patents, which impede the use of elements as elemental as pulleys.

In the field of maxillofacial surgery, the work started in 2000 on an industrial robotic platform still has to face the challenge of being able to operate on non-immobilized parts, by integrating a sufficiently fast registration system in addition to the current assistive aids such as virtual fixtures. New challenges come from the need to introduce robotics, and MIS, into fetal surgery. This application requires a small robotic system with arms to hold two instruments but not the camera, since the camera can be kept fixed as the operating area is small.

5 Conclusions

Robotics really constitutes an effective assistant to the surgeon, as robots can complement humans thanks to their complementary qualities. The main goal now is to improve surgical procedures and make possible new kinds of interventions that would not be possible without robotics. Given the good human-system synergies, the question that arises is whether we really want robots as substitutes for humans. Developing robotic applications autonomous enough to operate automatically is still far off, while many more cooperative tasks can be efficiently achieved. Another point in the development of robotics is the low speed of development of new systems and applications, which is due in part to legal issues. In this sense, the lack of the best solutions frequently arises from limitations due to the existence of patents, frequently abusive ones, that impede the use of the most reasonable technology. In developing robotic systems for surgical applications, besides the well-established safety issues, another main point, referring to ethics, is the awareness of developing just the technology needed, and no more than necessary, to achieve the expected results without adding superfluous costs and dependencies. The goal is to achieve social benefits by improving surgeries and widening the field of feasible interventions.

References

1. Casals, A., Frigola, M., Amat, J.: La Robótica una valiosa herramienta en cirugía. Revista Iberoamericana de Automática e Informática Industrial 6(1), 5–19 (2009)
2. van den Elsen, P.A., Pol, E.-J.D., Viergever, M.A.: Medical image matching - a review with classification. IEEE Eng. Med. Biol. 12(1), 26–39 (1993)
3. Bijhold, J.: Three-dimensional verification of patient placement during radiotherapy using portal images. Med. Phys. 20(2 Pt 1), 347–356 (1993)
4. Zikic, D., Groher, M., Khamene, A., Navab, N.: Deformable registration of 3D vessel structures to a single projection image. In: Proceedings of SPIE, vol. 6914, pp. 691412–691412-12 (2008)
5. Guéziec, A., Kazanzides, P., Williamson, B., Taylor, R.H.: Anatomy-based registration of CT-scan and intraoperative X-ray images for guiding a surgical robot. IEEE Trans. Med. Imaging 17(5), 715–728 (1998)
6. Yamazaki, T., Watanabe, T., Nakajima, Y., Sugamoto, K., Tomita, T., Yoshikawa, H., Tamura, S.: Improvement of depth position in 2-D/3-D registration of knee implants using single-plane fluoroscopy. IEEE Trans. Med. Imaging 23(5), 602–612 (2004)
7. Besl, P., McKay, N.: A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992)

Collaborative Robotic System for Hand-Assisted Laparoscopic Surgery

Carmen López-Casado1, Enrique Bauzano1, Irene Rivas-Blanco1, Víctor F. Muñoz1(✉), and Juan C. Fraile2

1 Departamento Ingeniería de Sistemas y Automática, Lab. Robótica Médica, Universidad de Málaga, Málaga, Spain
{mclopezc,vfmm}@uma.es
2 Departamento de Ingeniería de Sistemas y Automática, Universidad de Valladolid, Valladolid, Spain

Abstract. Hand-assisted laparoscopic surgery is a Minimally Invasive Surgery technique that is based on the insertion of one of the surgeon's hands inside the abdominal cavity. In this scenario, a robotic assistant can properly collaborate with the surgeon, working side by side with him/her. This paper presents a robotic system for this kind of technique, based on a cognitive architecture that makes efficient collaboration with the surgeon possible, thanks to a better understanding of the environment and the learning mechanisms included. This architecture includes a hand gesture recognition module and two different autonomous movement modules for the robotic arms, one for the camera motion and the other for the tool movement. All of these modules take advantage of the cognitive learning mechanisms of the architecture, fitting their behavior to the current user and procedure.

Keywords: Surgical robotics · Cognitive systems · Surgical procedure modelling · Smart surgical glove

1 Introduction

Hand-assisted laparoscopic surgery (HALS) is a new intermediate scenario between conventional laparoscopy and laparotomy, in which the surgeon inserts one hand into the abdominal cavity for organ manipulation, while handling the conventional minimally invasive surgical tool with the other. This is a useful approach in complex situations where conventional laparoscopic surgery cannot be performed. This kind of technique is widely used thanks to studies that have demonstrated that HALS techniques do not imply longer patient recovery times [1]. The surgeons' global requirements for this kind of technique are not covered by single robotized tools; a robotic assistant concept is needed for HALS procedures. The idea of the robot co-worker proposed by Haddadin et al. [2], where robots work side by side with humans, can be extrapolated for use within surgical environments. In these cases, the robot co-worker concept is based on cognitive architectures [3, 4] that make efficient collaboration with the surgeon possible, thanks to a better understanding of the environment and the learning mechanisms included. These mechanisms involve both
surgeon behavior and surgical procedure learning, and make it possible to react properly to unforeseen situations. This kind of robotic system should include interfaces in order to collaborate with the surgeon in a natural way. The basis of these interfaces is the recognition of the surgeon's task, to properly assist with autonomous tool movements or by providing a suitable view of the abdominal cavity. In HALS, task recognition is usually based on hand gesture recognition using smart surgical gloves [5]. This kind of device allows tracking the positions of and distances among phalanges and fingertips. This information is then used to recognize the hand gesture with techniques such as Hidden Markov Models (HMM) [6, 7] or Dynamic Time Warping (DTW) [8, 9], among others. Regarding the suitable view of the abdominal cavity, the use of conventional endoscopes leaves many unreachable blind areas in the surgical environment. To enhance the field of view, intra-abdominal devices are used. These devices are inserted through tool incisions and, when they have a magnetic coupling [10], they can be moved through the abdominal cavity, reaching areas not covered by conventional endoscopes. The aim of the present paper is the development of a robotic system for HALS procedures, using a cognitive co-worker architecture, where manipulators work side by side with the surgeon, collaborating during the surgical maneuvers and learning from practice. The workspace (Fig. 1) is composed of two robotic arms, one to handle an articulated laparoscopic tool and the other to handle an intra-abdominal camera through a magnetic coupling. The second tool is managed by the surgeon to perform laparoscopic procedures that require two tools (e.g. suturing). The surgeon also wears a surgical glove to track the hand movements within the abdominal cavity. The cognitive architecture will recognize the present stage of the surgical workflow by means of the surgeon's hand gestures and the motions of the surgical instruments. Thanks to this information, the robotic arm will be able to collaborate with the surgeon, assisting with the articulated tool and placing the camera in a suitable location to provide a complete and appropriate view of the surgical field.

Fig. 1. HALS workspace.

2 Cognitive Architecture

The cognitive architecture for HALS procedures is based on three layers (Fig. 2). The lower one is the environment itself, including the patient, the surgeon and the robotic assistant. The second one is formed by the modules that allow the robot-world interaction. The Perception System receives all the information about the patient and the surgeon (the surgeon's hand gesture, the tool position, etc.), which it then processes. The Action System commands the motion of the robotic assistant, receiving the new locations from the cognition layer. A Human Machine Interface (HMI) is also included to allow the surgeon to modify the autonomous movement of the robotic arms, or even to stop the whole system in case of emergency, using voice commands. Finally, the cognition layer implements all the system knowledge and the learning mechanisms to suitably assist the surgeon.

Fig. 2. Cognitive robotic architecture.

The cognition layer is divided into several memories. The long-term memory includes all the global knowledge of the system and is formed by the semantic and the procedural memories. The semantic memory, modelled as a database, includes evidences, meanings, concepts and any form of knowledge that is needed to understand the HALS environment:

• List of HALS procedures and their division into different stages, as well as the trigger signals that make the procedure change from one stage to the next one.
• List of actions to be completed by the action system. For each procedure stage, the actions to be executed have to be set.
• List of surgical tools to be used in the HALS procedures.

On the other hand, the procedural memory manages the actions that must be executed. It contains the information on how, when and what actions must be performed by the Action System. This knowledge is presented as a set of production rules (if-then rules). Each rule has a condition (the if part) that evaluates the current state of the environment, and a set of actions (the then part) that include the actions to be done for each element in the robotic assistant. This knowledge can be improved using the reinforcement learning module, which makes it possible to learn from practice, rewarding positively the rules correctly selected and penalizing the wrong ones. Finally, the working memory is the short-term memory of the architecture. It contains the information inferred from the environment combined with the long-term memory information, i.e. it offers the description of the current situation of the system. The Decision Procedure uses all this data to decide which production rule, within the set in the procedural memory, must be executed in each cycle.
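The decision cycle over if-then production rules can be pictured with the small Python sketch below; the rule structure, the example conditions and the action names are hypothetical placeholders, not identifiers from the actual SOAR implementation.

```python
# Illustrative sketch of a production-rule decision cycle (hypothetical rule
# names and state keys; the real system encodes this knowledge in SOAR).
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ProductionRule:
    name: str
    condition: Callable[[Dict], bool]   # the "if" part, tested on working memory
    actions: List[str]                  # the "then" part, sent to the Action System
    q_value: float = 0.0                # value tuned by reinforcement learning

rules = [
    ProductionRule("start_suture_view",
                   lambda wm: wm.get("gesture") == "suture_request",
                   ["move_camera_to_suture_area", "open_grasper"]),
    ProductionRule("hold_tissue",
                   lambda wm: wm.get("stage") == "dissection",
                   ["hold_tissue_with_tool"]),
]

def decision_procedure(working_memory: Dict) -> List[str]:
    """Pick the matching rule with the highest Q-value and return its actions."""
    matching = [r for r in rules if r.condition(working_memory)]
    if not matching:
        return []
    best = max(matching, key=lambda r: r.q_value)
    return best.actions
```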

3 Perception System

The Perception System is formed by all the devices that acquire information about the abdominal cavity. The most important source for HALS procedures is the data obtained by the smart glove that the surgeon wears. This glove provides information about the position, orientation and distances among the fingers and the phalanges, which is then used to recognize the hand gesture made by the surgeon. The hand gesture recognition algorithm allows the robotic assistant to detect patterns in the movement of the surgeon's hand, which may correspond to a required action of the surgical protocol or to a specific gesture to command the robotic assistant. All data acquired by the smart glove are processed by an HMM algorithm. A set of HMMs is first trained, one for each of the gestures to be recognized. Next, the movement of the surgeon's hand during a surgical protocol is processed by each of the trained HMMs. This process outputs the probability of the gesture being each of the trained gestures. The recognized gesture will be the one with the highest probability. Coupled with the recognized gesture, the algorithm provides a numerical value informing about the confidence of the recognition, the Confidence Index.
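A minimal sketch of the recognition step is given below, assuming one pre-trained HMM per gesture exposing a log-likelihood scoring function (as provided, for instance, by the hmmlearn library); the gesture dictionary and the confidence definition are illustrative assumptions, not those of the actual system.

```python
# Sketch of gesture classification with one HMM per gesture (assumed interface:
# each model exposes score(sequence) returning a log-likelihood, as in hmmlearn).
import numpy as np

def recognize_gesture(models, glove_sequence):
    """Return (best_gesture, confidence_index) for a window of glove samples.

    models: dict mapping gesture name -> trained HMM with a .score() method.
    glove_sequence: array of shape (n_samples, n_features) from the smart glove.
    """
    log_likelihoods = {name: m.score(glove_sequence) for name, m in models.items()}
    best = max(log_likelihoods, key=log_likelihoods.get)
    # Illustrative confidence: posterior probability of the winning gesture
    # under a uniform prior over the trained gestures.
    scores = np.array(list(log_likelihoods.values()))
    posterior = np.exp(scores - scores.max())
    confidence_index = float(posterior.max() / posterior.sum())
    return best, confidence_index
```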

4 Action System

The Action System oversees the movement of the two robotic arms. Each one has a different navigation strategy, but both are autonomous. The navigation strategy for the camera is based on the combination of two simple strategies: one proactive and one reactive. The first one consists in placing the camera depending on the stage of the HALS procedure; during each stage the camera is not moved. On the contrary, the reactive approach consists in tracking the surgical tools, without considering the procedure. Both approaches are rigid and include neither the intelligence nor the awareness needed to be considered autonomous. However, if both approaches are combined, the advantages of the reactive and proactive strategies can be obtained. This new strategy is based on computing the global focus of attention of the image (PFoA), i.e. the position where the camera must be placed, as a weighted combination of both contributions, tool tracking and stage field of view:

PFoA = Kr · PREACTIVE + Kp · PPROACTIVE    (1)
where Kr, Kp ∈ [0, 1], with Kr + Kp = 1, are the weights for the reactive and the proactive contributions respectively; and PREACTIVE, PPROACTIVE are the target positions of the camera when only the reactive or only the proactive approach is used. The robot motion highly depends on the value of these two constants (Kr, Kp). If Kr has a value close to 1, the camera will follow the tools regardless of the current stage of the HALS procedure. On the contrary, if Kp is high, the camera motion will depend on the stage of the procedure without considering the positions of the surgeon's tools. The first behavior is usually comfortable for novices, while the second one is more suitable for expert surgeons. Regarding the movement of the surgical tool, it is autonomous both in displacement and in the actuation of the laparoscopic grasp. This behavior is achieved using an auto-guided system based on potential field algorithms. In this way, a low potential field is assigned to the target position to attract the robot tool, whereas high potential fields are located over the obstacles (the surgeon's hand and tool) to create a repulsion effect. The robot action depends on the current state of the surgical procedure; however, its usual tasks are related to aiding the surgeon by holding tissues/organs or stretching the needle thread during a suture.
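The weighted combination in Eq. (1) can be sketched as follows; the example weights and positions are arbitrary values used only to show how the camera target shifts between the reactive and proactive extremes.

```python
# Sketch of Eq. (1): camera focus of attention as a weighted combination of the
# reactive (tool-tracking) and proactive (stage-dependent) targets. Values are
# illustrative, not taken from the real system.
import numpy as np

def focus_of_attention(p_reactive, p_proactive, k_r):
    """Blend both camera targets; k_r in [0, 1], and k_p = 1 - k_r."""
    k_p = 1.0 - k_r
    return k_r * np.asarray(p_reactive) + k_p * np.asarray(p_proactive)

# Example: tool midpoint vs. stage-specific viewpoint (arbitrary coordinates).
p_tools = np.array([0.10, 0.02, 0.15])     # reactive target (metres)
p_stage = np.array([0.05, 0.00, 0.20])     # proactive target for current stage
print(focus_of_attention(p_tools, p_stage, k_r=0.7))   # novice-friendly setting
```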

5 Reinforcement Learning

Reinforcement learning (RL) is a technique focused on the maximization of a numeric reward signal. It is based on discovering which actions are the most rewarded by trying them [11], which makes this kind of algorithm a trade-off between exploration and exploitation: the system must exploit what it has already learnt, but it also has to explore new actions that have not been tested yet. This kind of learning is based on production rules equivalent to the ones in the procedural memory. The difference between the normal rules and the ones belonging to the RL algorithm is that the latter have an associated value. Each RL rule has a value associated with it, the Q-value, which is obtained following a Q-learning technique. This technique computes the value depending on the reward received after executing the action associated with the rule and on the discounted expected future reward, which is obtained from the available actions in the next state [12]. The internal goal of the system is to maximize the expected future reward. The reward received after completing the action must inform on how good the executed action was. The RL algorithm has been used in two different fields within this work. On the one hand, it has been used to learn the Kp and Kr weights of the autonomous movement of the camera. On the other hand, it has been used to enhance the hand gesture recognition algorithm. Regarding the camera movement, different values of Kp and Kr have been learnt for each user and for each stage of the HALS procedure. The reward signal has been computed with a fuzzy model depending on the number of corrections made through the HMI, the time spent to complete a stage and a numerical value corresponding to the user's satisfaction. Regarding the recognition algorithm, the RL technique has been used to include new on-line gesture records in the gesture library. Each new record is weighted with respect to the old ones, and this weight is the one to be learnt. In this case, the reward signal is the Confidence Index obtained in each recognition.
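For reference, the tabular Q-learning update that this kind of rule valuation follows is sketched below; the learning rate, discount factor and the state/action encoding are assumptions for illustration, not parameters reported by the authors.

```python
# Tabular Q-learning sketch (illustrative hyper-parameters): the Q-value of the
# executed rule is moved towards the reward plus the discounted best future value.
from collections import defaultdict
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2      # learning rate, discount, exploration
q_table = defaultdict(float)               # (state, action) -> Q-value

def choose_action(state, actions):
    """Epsilon-greedy trade-off between exploration and exploitation."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

def q_update(state, action, reward, next_state, next_actions):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max((q_table[(next_state, a)] for a in next_actions), default=0.0)
    td_error = reward + GAMMA * best_next - q_table[(state, action)]
    q_table[(state, action)] += ALPHA * td_error
```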

6 Implementation and Conclusions

This paper has described the cognitive architecture of a robotic assistant that can be used for any HALS procedure, as well as the main modules that govern its behavior: the hand gesture recognition module and the autonomous movement modules. Both take advantage of the learning mechanisms of the architecture, making it possible to fit the performance of the robotic assistant to the users. Most of the Cognition Layer (Fig. 2) has been implemented in SOAR [13], a general cognitive architecture that supports all the capabilities required by a general intelligent agent. It is based on a production system and uses explicit production rules to govern its behavior. This architecture has been encapsulated into a ROS node [14]. ROS is an open-source software framework for robot software development. This framework is based on nodes with a publish/subscribe communication policy. Thus, all the devices of the HALS environment have been integrated as ROS nodes so that they can be connected with one another.
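As an illustration of the publish/subscribe integration, a minimal ROS (rospy) node could look like the sketch below; the topic names and message types are hypothetical placeholders, not the interfaces of the actual system.

```python
# Minimal rospy node sketch (hypothetical topic names and message types): it
# subscribes to recognized gestures and publishes camera target positions.
import rospy
from std_msgs.msg import String
from geometry_msgs.msg import Point

def gesture_callback(msg):
    rospy.loginfo("Recognized gesture: %s", msg.data)
    # A real node would query the cognition layer here and command the arms.
    target_pub.publish(Point(x=0.1, y=0.0, z=0.2))   # illustrative target

if __name__ == "__main__":
    rospy.init_node("hals_cognition_bridge")
    target_pub = rospy.Publisher("/camera/target_position", Point, queue_size=1)
    rospy.Subscriber("/glove/recognized_gesture", String, gesture_callback)
    rospy.spin()
```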

References

1. Vogel, J.D., et al.: Hand-assisted laparoscopic right colectomy: how does it compare to conventional laparoscopy? J. Am. Coll. Surg. 212(3), 367–372 (2011)
2. Haddadin, S., et al.: Towards the robotic co-worker. In: Robotics Research. Springer Tracts in Advanced Robotics, vol. 70, pp. 261–282 (2011)
3. Chui, C.K., et al.: Development of a cognitive engine for balancing automation and human control of surgical robotic system. In: 2014 11th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), pp. 13–17 (2015)
4. Weede, O., et al.: Towards cognitive medical robotics in minimal invasive surgery. In: Proceedings of Conference on Advances in Robotics, pp. 1–8 (2013)
5. Fahn, D.S., et al.: Development of a fingertip glove equipped with magnetic tracking sensors. Sensors 10(2), 1119–1140 (2010)
6. Galka, J., et al.: Inertial motion sensing glove for sign language gesture acquisition and recognition. IEEE Sens. J. 16(16), 6310–6316 (2016)
7. Rossi, M., et al.: Hybrid EMG classifier based on HMM and SVM for hand gesture recognition in prosthetics. In: IEEE Conference on Industrial Technology, pp. 1700–1705 (2015)
8. Plouffe, G., et al.: Static and dynamic hand gesture recognition in depth data using dynamic time warping. IEEE Trans. Instrum. Meas. 65(2), 305–316 (2016)
9. Bautista, M.A., et al.: A gesture recognition system for detecting behavioral patterns of ADHD. IEEE Trans. Cybern. 46(1), 136–146 (2016)
10. Valdastri, P., et al.: A magnetic internal mechanism for precise orientation of the camera in wireless endoluminal applications. Endoscopy 42(6), 481–486 (2010)
11. Sutton, R.S., et al.: Reinforcement Learning: An Introduction, vol. 1(1). MIT Press, Cambridge (2012)
12. Laird, J.E.: Tuning procedural knowledge: reinforcement learning. In: The Soar Cognitive Architecture, pp. 181–202. MIT Press (2012). ISBN 9780262301145
13. Laird, J.E.: The Soar Cognitive Architecture. MIT Press (2012)
14. Koubaa, A.: Robot Operating System (ROS). SCI, vol. 625. Springer (2016)

Rehabilitation and Assistive Robotics

Mechanical Design of a Novel Hand Exoskeleton Driven by Linear Actuators

Jorge A. Díez(B), Andrea Blanco, José M. Catalán, Arturo Bertomeu-Motos, Francisco J. Badesa, and Nicolás García-Aracil

Universidad Miguel Hernández, Elche, 03202 Alicante, Spain
[email protected]
http://nbio.umh.es/

Abstract. This paper presents the mechanical design of a novel hand exoskeleton for assistance and rehabilitation therapies. As a solution for the movement transmission, the proposed device uses modular linkages that are attached to each finger by means of snap-in fixations. The linkage is kinematically and dynamically analyzed by means of simulations with the AnyBody Simulation Software to obtain an estimation of the range of motion and admissible forces. In order to check the deviations of the real performance with respect to the simulated results, due to uncertain variables, a first prototype is built and tested.

Keywords: Hand exoskeleton · Assistive robotics · Rehabilitation robotics

1 Introduction

1.1 State of the Art

In the current literature we can find a wide diversity of robotic devices which can actuate the movements of the human hand [1]. Some of these devices, such as [2–4], are aimed at performing rehabilitation therapies, while many others are intended to assist the hand motion during activities of daily living [5–7]. Depending on the application, a hand exoskeleton may require different features. For example, a rehabilitation-aimed exoskeleton needs to be fairly backdrivable and allow a wide range of movement, so that it is flexible enough to perform different rehabilitation exercises. In contrast, an assistance exoskeleton must be stiff enough to assure a firm grasping of the objects present during activities of daily living and can sacrifice flexibility of movement in favor of predefined grasping patterns. These different requirements result in diverse force transmission architectures:

– Some devices use linkages in order to transmit the force from the actuator to the human joints. This is a stiff architecture that requires a proper alignment
between the kinematic centers of the linkage and the human joints, but allows a good control of the hand pose. Due to the flexibility of the design, with the correct sizing, these mechanisms can achieve complex movement patterns with simple actuators.
– Another extended architecture is the cable-driven glove. These are more flexible and simpler alternatives that rely on the human joints themselves to direct the movement, so they are less prone to uncomfortable poses. In contrast, they require pulleys to achieve high forces and are harder to control in intermediate positions. Additionally, this kind of exoskeleton needs a pair of cables in an antagonist configuration in order to assist both extension and flexion movements.
– Finally, some devices use deformable actuators, like pneumatic muscles or shape-memory alloys, attached directly to the hand by means of a glove. They result in very light and simple devices, but the actuators are not placed in the most advantageous place to achieve great forces.

In general, according to [1], there is a clear trend to use cable-driven devices and deformable gloves for assistance purposes, while linkage architectures are mainly used in rehabilitation devices.

1.2 Framework and Objective

The AIDE project (H2020) [8] is developing a multimodal system to assist disabled people in the realization of a wide range of activities of daily living. One of its main research lines is the analysis of the use of exoskeletons to assist users during the interaction with their environment [9]. In this framework, we are developing a hand exoskeleton with the concept described in the patent ES2558024B1 [10] as a starting point. This device is required to be able to grasp different kinds of everyday objects, such as glasses, adapted cutlery or handles, with a grip that must be firm and safe in order to provide real autonomy to the user.

2 Mechanical Design

Since we consider grip strength a decisive factor in achieving our goals, we have chosen an architecture based on a linkage that controls the pose of the phalanxes. Using this type of power transmission, we can tune the leverage between links in order to achieve a satisfying balance between the transmitted force and the range of movement, and therefore optimize the weight and power required for the actuators. As a first approximation, we propose an exoskeleton with three active degrees-of-freedom, corresponding to the flexion-extension of the index finger, the flexion-extension of the middle finger, and the flexion-extension of both the ring and little fingers. The thumb will have a series of passive degrees of freedom that will allow the thumb to be placed in a suitable pose in the installation phase, and will be lockable to allow the thumb to work in opposition during the grasp.

2.1 Finger Mechanism

The proposed solution (Fig. 1) consists of a five-bar linkage that couples the movements of both the proximal and medial phalanxes so that they can be controlled with a single active degree-of-freedom. This mechanical system is driven by a linear actuator fixed to the exoskeleton frame and connected to link L2. We have decided not to actuate the distal phalanx, since it would restrain the movements of the finger excessively, leading to a greater probability of uncomfortable poses.

Fig. 1. Proposed linkage for a finger

This linkage forms closed kinematic chains with the finger, but this is only true if the finger is perfectly fixed to the linkage. Unfortunately, the interface between the robot and the human body will introduce an uncertainty that, due to the complexity and the high degree of non-linearity of the mechanism, may completely modify the kinematic behavior of the robot. In this regard, similarly to the solution proposed by Ho et al. [11], we have added a pair of circular guides whose centers match the joints of a reference finger. In this way, the kinematic chain is closed exclusively by mechanical parts whose dimensional parameters are known. The addition of the circular guides carries extra dimensional restrictions, due to the possible interference between guides and links. The design developed by Ho et al. consists of a pair of slots in which a train of bearings slides, restricting the movement of the links. This architecture may be effective, but it requires a substantially good quality of the machined surfaces of the slot and the use of miniature elements in order to be as compact as possible. As an alternative, we propose the use of a double-edged curved guide which slides between a group of 4 V-shaped bearings (Fig. 2). This results in a simple but effective option that can be easily built with technologies like 3D printing or plastic moulding. Regarding the linear actuator, size and weight are the main limiting design factors. Although pneumatic actuators may be attractive for this kind of device due to their reduced dimensions and high force, we consider that electric drives
are more suitable for domestic environments, since it is easier and cheaper to use the household electric installation than to provide a compressed air supply to the user. Among the different existing technologies, we consider that the Actuonix PQ12 Linear Actuator [12] provides an adequate balance between size, weight and maximum force. Moreover, this actuator has two interesting features for this application: on the one hand, the screw transmission provides a high backdrive force that avoids releasing the grasped object in case of an accidental loss of the power supply; on the other hand, it has a built-in potentiometer, so no additional space is required to set up an external one. The sizing of the whole mechanism cannot easily be done by mathematical means, since the complex and non-linear kinematics result in multiple solutions or unstable convergence. Additionally, elements like screws or shafts are available only in specific sizes, so any optimization of the link sizes would hardly be compatible with these normalized parts. Therefore, we have used a parametric CAD model of the device in order to obtain a satisfying solution by manually changing the geometric variables. Figure 3 shows a detailed view of the resulting finger mechanism in the extreme theoretical poses of the finger. At this stage of the design, this linkage and its dimensions will be the same for each finger except the thumb. Unlike the rest of the fingers, the thumb cannot be approximated as a planar mechanism due to its degree of freedom of opposition. However, many common grasps can be performed by just firmly maintaining the position of the thumb while the rest of the fingers perform the whole movement. So, at this design stage, we have designed a mechanism that allows the user's thumb to be placed manually in a comfortable pose and then blocked.

Fig. 2. Detailed view and section of the designed circular guide, using bearings with V-shaped profile


Fig. 3. Drawing of a finger mechanism with extreme positions. (Left: Maximum extension. Right: Maximum flexion)

2.2 Hand Interface

Since the designed linkage is kinematically determined by means of the circular slides, it is necessary to design an interface able to deal with the misalignment between the human joints and the robot rotation centers, as well as with the deviation between the reference finger size and the actual user's dimensions. As the interface between the moving parts of the exoskeleton and the human fingers, we have designed a snap-in fixing system using elastic ring-like pieces that can be inserted at certain points of the finger linkage (Fig. 4). This system allows a quick attach-detach process, reducing the time required for setup and allowing a fast reaction to any dangerous situation. The stiffness of the ring can be designed so that it automatically releases itself if a certain force threshold is exceeded. Additionally, the diameter of each ring can be chosen according to the real dimensions and misalignment, in order to provide the user with the most comfortable experience. Each finger mechanism must be attached to a frame fixed with respect to the metacarpal section of the hand, but the optimal placement varies significantly depending on the hand's dimensions. As a solution, we propose an attachment similar to the finger's one (Fig. 5). In this case, the finger frame has a slot in which a double wedge-like part is inserted. This wedge constrains two translational degrees of freedom but allows movement along the slot direction, so that the position of the linkage along the finger metacarpal axis can be adjusted. Once the linkage is correctly placed, the friction between the frame and the wedge prevents it from displacing. Finally, each wedge is screwed to a semi-rigid hand orthosis that wraps both the palm and the back of the hand.


Fig. 4. Finger-Exoskeleton fixing system. Left: Section of the ring-box assembly with the force needed for releasing. Right: Detailed view of the attachment points in the finger linkage.

Fig. 5. Hand-Exoskeleton fixing system. Left: Section that shows the wedge-like part inserted in the slot of the finger frame. Right: Detailed view of the connection system before attachment.

As a result, we obtain a modular hand exoskeleton that can be easily adapted to the user's needs, since only the necessary phalanges, or even only the necessary finger linkages, need to be set up.

3 Biomechanical Analysis

In order to study how the exoskeleton kinematics affect the finger kinematics, we have decided to use the AnyBody simulation software [13] with the human finger model proposed by Wu et al. [14]. We have added the kinematics and dynamics of the finger linkage to this model (Fig. 6), paying special attention to modeling the finger-linkage interface as accurately as possible, in order to obtain realistic reaction forces. In detail, the ring snap-in fixation has been modeled as a cylindrical slide whose axis is aligned with the phalanx longitudinal axis, rigidly attached to the exoskeleton's link through a bar whose length equals the radius of the ring. With this numerical model, we can easily solve the kinematics of the mechanism and compute the relationship between the angles of the metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints.


Fig. 6. Human finger [14] and exoskeleton model in AnyBody

This result is shown in Fig. 7. Despite the internal non-linearity of the mechanism and interface, along the working range defined by the actuator's stroke the output angles present a fairly linear behavior, so we may use two simple linear expressions to estimate the hand's pose by just measuring the stroke of the motor. According to Chen et al. [15], the reaction torques in each joint of the finger, and the ratio between them, may vary greatly depending on the user and the type of grasp. Therefore, in order to determine at which working points (MCP torque, PIP torque) our exoskeleton can work properly, we have performed an inverse dynamics study, computing the force that the actuator must exert to balance the mechanism when different torque combinations are applied at the joints of the finger. Taking the maximum force required in each simulation for each combination of MCP and PIP torque, we have computed the admissible load assumptions for our exoskeleton (Fig. 8). We have performed a preliminary calculation in order to check whether these admissible loads are suitable for the intended applications. The heaviest object considered for grasping is a full 1.5 L bottle of water, which can be treated as a mass of 1.5 kg. According to the study of O'Meara and Smith [16], human skin has a static friction coefficient of about 0.8, so, to lift a weight of 1.5 kg, the hand must exert a total normal force of approximately 1.5/0.8 ≈ 1.9 kg (roughly 2 kg) on it.


Fig. 7. Angles of the MCP and PIP joints for each actuator stroke. Solid lines represent the output of the simulation. Dashed lines represent the corresponding linear fit. Linear fit equations are provided, where y refers to the joint angle and x to the motor stroke. Angles are measured as in Fig. 3.
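As an illustration of how such linear fits can be used at run time, the sketch below (Python) converts a measured motor stroke into estimated MCP and PIP angles. The slope and offset values are placeholders, not the coefficients reported in Fig. 7.

```python
# Minimal sketch: estimating MCP and PIP joint angles from the measured
# actuator stroke using linear fits like those of Fig. 7. The slope/offset
# values below are hypothetical placeholders, not the published coefficients.

def estimate_joint_angles(stroke_mm,
                          mcp_fit=(4.0, -5.0),   # (deg per mm, deg offset) - placeholder
                          pip_fit=(5.5, -8.0)):  # (deg per mm, deg offset) - placeholder
    """Return (mcp_deg, pip_deg) for a given motor stroke in millimetres."""
    a_mcp, b_mcp = mcp_fit
    a_pip, b_pip = pip_fit
    return a_mcp * stroke_mm + b_mcp, a_pip * stroke_mm + b_pip

if __name__ == "__main__":
    # Example: pose estimate at a 10 mm stroke.
    print(estimate_joint_angles(10.0))
```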

Fig. 8. Magnitude of the force required from the linear actuator in order to withstand a specific torque combination. The green area shows the working points at which the actuator can withstand the torques even without electrical power. The yellow zone defines working points that necessarily require a powered actuator. The red zone contains torque combinations that cannot be withstood by the mechanism.

As a simplification, the applied force is distributed as follows: half of the force is exerted by the thumb (1 kg) and the other half is distributed equally between the index and middle fingers (0.5 kg each). If the force is applied in the middle of each phalanx and only the MCP and PIP joints are actuated, the required torque for each joint can be summarized as in Table 1. With this load hypothesis, each finger would work inside the safety zone of Fig. 8.
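For reference, the arithmetic behind this load hypothesis can be written out explicitly. This is a worked sketch using g ≈ 9.81 m/s² and the friction coefficient from [16]; the per-contact forces are the ones listed in Table 1.

```latex
% Normal force needed so that friction can support the 1.5 kg bottle:
N \geq \frac{m g}{\mu}
     = \frac{1.5\,\mathrm{kg} \times 9.81\,\mathrm{m/s^2}}{0.8}
     \approx 18.4\,\mathrm{N} \approx 1.9\,\mathrm{kgf}

% Joint torque when a contact force F acts at the middle of the next phalanx
% of length L (reproduces Table 1, e.g. the Index MCP row):
\tau = F\,\frac{L}{2}, \qquad
\tau_{\mathrm{Index\,MCP}} = 2.5\,\mathrm{N} \times \frac{0.058\,\mathrm{m}}{2}
     \approx 0.073\,\mathrm{N\,m}
```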


Table 1. Loads required to grasp a 1.5 kg bottle with a simplified grasp model

Joint        Next phalanx average length (m)   Applied force (N)   Joint torque (Nm)   Required torque (Nm), safety factor > 2
Index MCP    0.058                             2.5                 0.0725              0.15
Index PIP    0.022                             2.5                 0.0275              0.06
Middle MCP   0.053                             2.5                 0.06625             0.14
Middle PIP   0.026                             2.5                 0.0325              0.07
Thumb MCP    0.032                             10                  0.16                0.32

4 Prototype Stage

There are many factors, such as clearances in the interfaces or joint misalignment, that cannot easily be evaluated through simulation, so it is mandatory to build a prototype to check the real performance of the designed exoskeleton. In this prototype, the exoskeleton has three finger modules that control the index finger, the middle finger and the pair formed by the ring and little fingers, in addition to the aforementioned thumb-blocking linkage. Each finger module is built in polylactic acid (PLA) by fused deposition modeling (FDM) 3D printing, which allows fast and cheap fabrication of complex geometries. Nevertheless, all parts have been designed or subdivided into geometries that can also be produced by computer numerical control (CNC) machining, in order to allow the production of further prototypes using traditional materials. We are particularly interested in studying the strength of the PLA 3D-printed parts in order to check the feasibility of building the exoskeleton from technical polymers, so as to reduce the weight of the final device as much as possible. In addition to the finger modules, we have built a set of rings with diameters covering the range from 15 mm to 25 mm in steps of 1 mm. Finally, as the interface with the hand, we have chosen a commercial hand orthosis, to which the fixations for the finger modules and the thumb-blocking linkage have been rigidly attached. Figure 9 displays the modules that compose the whole exoskeleton (which can be easily attached and detached), the device set up on a hand, and a grasp test in which the device holds a half-liter bottle of water. Once the user wears the hand exoskeleton, we have observed that, although the mechanism holds its position, the rings allow a certain freedom of movement of the fingers, so the user can accommodate them in a more natural pose. This adjustment introduces an error in the kinematics that varies each time the device is attached to and removed from the hand. Figure 10 shows the measured joint angles for the same user after installing the exoskeleton several times. These measurements were taken approximately by photogrammetry, with a precision of about ±1°.


Fig. 9. Left: Disassembled exoskeleton with finger modules, hand orthosis and finger rings. Center: Exoskeleton attached to a human hand. Right: Grasp test of a bottle of 0.5 L of water.

Fig. 10. Mismatch between simulated and measured angles due to clearances in the rings and misalignment between joints. Scattered points correspond to measurements for the same subject after different exoskeleton set-ups.

For both joints, we can observe that the error with respect to the simulated values is remarkably high for low angles and becomes lower in more closed poses. This phenomenon is due to the alignment between the two rings of the finger, which allows the finger to slide along the axial direction of the rings. When the exoskeleton becomes more closed, the misalignment between the rings restricts this displacement and the real behavior becomes closer to the simulated model.

5 Conclusion and Future Work

According to the first tests with the prototype, the designed device has proven feasible for grasping certain objects present in the activities of daily living, such as small bottles, glasses or adapted cutlery.


In comparison with other existing devices, we want to highlight several features that may represent an improvement for this kind of device:

– The snap-in fixations, together with the modular finger mechanisms, allow quick installation and removal of the device, and also ease hardware maintenance and the substitution of worn-out parts.
– The elastic rings used as the interface with the fingers, although they introduce uncertainty in the actual hand pose, provide a comfortable fixation that allows proper accommodation of the finger. Additionally, the rings add a safety measure when extending the hand, since they detach at a certain interaction force.
– As PLA has proven tough enough to withstand the hand loads, the use of technical plastics as structural materials seems feasible. These materials will result in lightweight devices and will allow cheap mass manufacturing processes such as molding or injection.
– Since the grasping contact forces fall mainly on the rings, they can be coated with adherent materials to improve the safety of the grip.

Future work will consist of an exhaustive validation of the device as well as the exploration of additional potential applications, such as rehabilitation therapies. The implementation of a force measurement system is a short-term objective, since it will provide valuable data for validation tests and studies with disabled users. This force measurement will also extend the functionality of the device, given its utility in control loops or in the detection of movement intention.

Acknowledgments. This work has been funded by the European Commission through the project AIDE: Adaptive Multimodal Interfaces to Assist Disabled People in Daily Activities (Grant Agreement No: 645322); by the Spanish Ministerio de Economía y Competitividad through the project DPI2015-70415-C2-2-R; and by the Conselleria d'Educació, Cultura i Esport of the Generalitat Valenciana through the grants ACIF 2016/216 and APOTIP 2016/021.

References

1. Heo, P., Gu, G.M., Lee, S.J., Rhee, K., Kim, J.: Current hand exoskeleton technologies for rehabilitation and assistive engineering. Int. J. Precis. Eng. Manuf. 13(5), 807–824 (2012)
2. Brokaw, E.B., Black, I., Holley, R.J., Lum, P.S.: Hand spring operated movement enhancer (HandSOME): a portable, passive hand exoskeleton for stroke rehabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 19(4), 391–399 (2011)
3. Mulas, M., Folgheraiter, M., Gini, G.: An EMG-controlled exoskeleton for hand rehabilitation. In: 9th International Conference on Rehabilitation Robotics, ICORR 2005, pp. 371–374. IEEE, June 2005
4. Kinetic Muscles Inc., Hand Physical Therapy with the Hand Mentor. http://www.kineticmuscles.com/hand-physicaltherapy-hand-mentor.html


5. Martinez, L.A., Olaloye, O.O., Talarico, M.V., Shah, S.M., Arends, R.J., BuSha, B.F.: A power-assisted exoskeleton optimized for pinching and grasping motions. In: Proceedings of the 2010 IEEE 36th Annual Northeast Bioengineering Conference, pp. 1–2. IEEE, March 2010
6. Yamada, Y., Morizono, T., Sato, S., Shimohira, T., Umetani, Y., Yoshida, T., Aoki, S.: Proposal of a SkilMate finger for EVA gloves. In: Proceedings of the IEEE International Conference on Robotics and Automation, ICRA 2001, vol. 2, pp. 1406–1412. IEEE (2001)
7. Tadano, K., Akai, M., Kadota, K., Kawashima, K.: Development of grip amplified glove using bi-articular mechanism with pneumatic artificial rubber muscle. In: 2010 IEEE International Conference on Robotics and Automation (ICRA), pp. 2363–2368. IEEE, May 2010
8. AIDE: adaptive multimodal interfaces to assist disabled people in daily activities (GA 645322). http://cordis.europa.eu/project/rcn/194307_es.html
9. Díez, J.A., Catalán, J.M., Lledó, L.D., Badesa, F.J., Garcia-Aracil, N.: Multimodal robotic system for upper-limb rehabilitation in physical environment. Adv. Mech. Eng. 8(9), 1–8 (2016). doi:10.1177/1687814016670282
10. Garcia-Aracil, N., Sabater, J.M., Fernández, E., Badesa, F.J., Morales, R., Díez, J.A., Enriquez, S.C.: Self-adaptive for hand rehabilitation and method of use and modular robotic device, ES Grant ES2558024B1, Oficina Española de Patentes y Marcas (2016)
11. Ho, N.S.K., Tong, K.Y., Hu, X.L., Fung, K.L., Wei, X.J., Rong, W., Susanto, E.A.: An EMG-driven exoskeleton hand robotic training device on chronic stroke subjects: task training system for stroke rehabilitation. In: 2011 IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 1–5. IEEE, June 2011
12. Actuonix Motion Devices Inc.: Miniature Linear Motion Series PQ12, Victoria, Canada (2016). https://www.actuonix.com/Actuonix-PQ-12-P-Linear-Actuator-p/pq12-p.htm
13. The AnyBody Modeling System (Version 7.0.0). [Computer software]. AnyBody Technology, Aalborg, Denmark (2017). http://www.anybodytech.com
14. Wu, J.Z., An, K.N., Cutlip, R.G., Krajnak, K., Welcome, D., Dong, R.G.: Analysis of musculoskeletal loading in an index finger during tapping. J. Biomech. 41(3), 668–676 (2008)
15. Chen, F.C., Favetto, A., Mousavi, M., Ambrosio, E.P., Appendino, S., Battezzato, A., Manfredi, D., Pescarmona, F., Bona, B.: Human hand: kinematics, statics and dynamics. In: 41st International Conference on Environmental Systems (ICES), Portland, Oregon, USA, vol. 5249, pp. 17–21. AIAA, January 2011
16. O'Meara, D.M., Smith, R.M.: Static friction properties between human palmar skin and five grabrail materials. Ergonomics 44(11), 973–988 (2001). doi:10.1080/00140130110074882

Robotic Platform with Visual Paradigm to Induce Motor Learning in Healthy Subjects

Guillermo Asín-Prieto, José E. González, José L. Pons, and Juan C. Moreno

Neural Rehabilitation Group, Cajal Institute, Spanish National Research Council (CSIC), Avda. Dr. Arce, 37, 28002 Madrid, Spain
[email protected]
http://www.neuralrehabilitation.org/

Abstract. Recent projects highlight how motor learning and a high level of attention control can potentially improve submaximal force production during recovery after stroke. This study focuses on the assessment of detailed metrics of force production and position control in healthy subjects, and on their correlation with the learning of submaximal force production control, during a new task consisting of maintaining a position, intended for early rehabilitation after stroke. We used a Motorized Ankle Foot Orthosis (MAFO) with zero-torque control, together with a visual paradigm interface, to apply controlled torque profiles to the ankle of the subject. The subject is asked to follow the trajectories in the visual interface while the robot disturbs the movement. The aim of the exercise is to improve motor control by learning how to maintain the position required to follow the trajectory, compensating for the perturbations, under three possible training paradigms: (1) fixed torque, (2) progressive increase of torque, and (3) torque modulated by the score obtained in the task. All training paradigms led to an improvement in the score when comparing pre- and post-training performances, so we conclude that this platform induces learning in healthy subjects. We will therefore keep improving the training paradigms for the translation of the platform into a rehabilitation tool.

Keywords: Stroke recovery · CVA · Neural rehabilitation · Motor control

1 Introduction

Recent projects highlight how motor learning and a high level of attention control can potentially improve submaximal force production during recovery after stroke. Motivation and attention are key elements for a potentially successful motor rehabilitation [1].


It is crucial for the success of the therapy that the patient understands the exercise and its potential outcome, in order to keep engagement and attention, thus enhancing motivation [2]. Furthermore, if the patient lacks the capacity to understand how an exercise is executed correctly, little effect can be expected [3]. It is important to note that, since the brain of a stroke patient has suffered an injury, learning function may be compromised. Nonetheless, it has been seen that after local damage to brain cortex tissue, rehabilitative training can produce a reorganization, to some extent, of the adjacent intact cortex [4]. These findings suggest that the undamaged motor cortex might play an important role in motor recovery. One important factor is that there seems to be a direct relationship between impairment and function: the function of walking is correlated to lower-limb strength (impairment) [5].

Another key factor in a successful therapy towards motor recovery is the concept of repetition. On the one hand, the idea of "repetition without repeating" is important for avoiding disengagement during the progress of the task. On the other hand, this need for repetition may lead to health problems for physiotherapists, and robotics can be of huge help here, providing a tool that enhances the therapy by ensuring repeatability and by allowing a more objective assessment of performance metrics [6]. The concept of "repetition without repeating" considers that training sessions must include recording the modified parameters used to challenge the patient [2]. Within this concept, the idea of interleaving different exercises also arises: it has been seen that mixing learning tasks is more effective for learning than practicing tasks one by one in specific focused training sessions [7], and this mixed schedule might aid recovery because it considers each movement as a problem to solve and not as a pre-memorized muscular sequence [8]. In fact, Shea and Kohl [9] stated that "retention of the test task was best facilitated by acquisition practice that included task variations rather than conditions that included additional practice on the test task or even a condition identical to the retention condition."

The nervous system seems to rely on the concept of adaptation, meaning that a previously learned skill can be recovered after a change in the operating environment. This theory of adaptation says that, after a change in the environment dynamics, the control system does not need to relearn but to adapt, suggesting that the nervous system tunes a learned internal model and has the ability to extrapolate this learning to new scenarios [10].

Robotics has been proposed, and partially proven, to be beneficial in interventions with acute and chronic stroke patients [11–13]. It is important to study the best tradeoff between the learning rate and the amount of helping guidance given by the robot during training [14]. Robotic guidance approaches also offer a powerful tool in rehabilitation and learning scenarios where large errors may be dangerous or undesirable, as the guidance provides the user with a tunnel inside which the movement is allowed and outside of which it is controlled. Robots can be used not only as standalone devices, but also in combination with virtual environments, providing the possibility of showing the patient and the physiotherapist feedback on the performance of the task, which has been proven to improve rehabilitation [3].


In the literature, we can find that visual feedback through virtual environments helps to improve performance in specific tasks, even improving the results in comparison to real-task training [15]. Indeed, this visual approach has been proven to improve robotic haptic guidance therapy scenarios [16]. Furthermore, the idea of a virtual environment or "video game" makes the implicit learning process transparent to the user, who is not aware of what is being learned [6]. Virtual environments also make it easier to offer the user a goal-oriented task and the possibility of repetition [17]. Robots are crucial for these haptic approaches, as they help the physiotherapist to provide an objectively consistent movement to the patient's limb.

Several authors have tested the error-augmentation approach [6,18–20] to enhance motor learning. The nervous system learns by forming an internal model of the dynamics of the environment via a process of error reduction. Emken and Reinkensmeyer concluded that the motor learning process can be accelerated by exploiting this error-based learning mechanism. They used the approach of perturbing the movement via a robotic device in order to augment the error while the user was performing the task. It seems that this learning process is indeed a minimization problem, based on the last performed trials to accomplish the task [20], where the quantity minimized is a weighted sum of the kinematic error and the muscular effort [21]. Combined haptic error augmentation and visual feedback have provided better results than conventional direct training (pure joint mobilization), exploiting the idea of using the after-effects of a resistive force to help the user perform the desired trajectory task [22]. Moreover, when the after-effects are evaluated, this error-augmentation strategy produces the correct muscle activation patterns to achieve the desired trajectory, and results showed that subjects were able to reduce errors more rapidly than those who received training based on a robotic guidance approach [14]. Reinkensmeyer and Patton suggest combining robotic guidance and error-augmentation techniques, starting with guidance and gradually removing it while increasing error augmentation.

Several studies have been conducted with stroke patients, showing promising results comparable to the learning results seen in healthy subjects. For example, Patton et al. [23] found some preliminary evidence showing that the error-augmentation approach could be used to obtain smoother trajectories in stroke patients. Krakauer [8] found that, although a guidance approach leads to better performance during training, it is not optimal for retaining the learning over time, so such training may improve performance in the clinical environment but may not transfer to activities of daily living. Reinkensmeyer and Patton demonstrated that, with this error-augmentation approach, stroke patients can potentially alter abnormal limb movement patterns that appear relatively fixed, although the reasons why people with stroke perform abnormal patterns remain to be identified [14]. One important point to take into account in rehabilitative processes is that "several types of exercise programs, including muscle strengthening, physical conditioning, and task-oriented exercises have led to an improvement in balance and mobility" [24]. Thus, it makes sense to focus the approach of early rehabilitation and motor recovery on simple tasks, towards integrating them into more complex processes as recovery takes place over time.


Patton et al. [19] also found evidence that stroke survivors retain the ability to adapt to force fields, adapting better to real scenarios that are less demanding than the training scenarios (lower forces in the real world than in training); more importantly, the effects can last longer if the exercise resembles normal movements and the after-effects can be perceived by the patient as an improvement [6]. Most studies focus on upper-limb rehabilitation, but Duncan et al. [25] demonstrated that similar motor recovery of the upper and lower limbs occurs after stroke. Given this state-of-the-art analysis, we therefore focus our study on lower-limb recovery, specifically on the ankle. It has been reported that both plantarflexion and dorsiflexion are compromised after a stroke: plantarflexor passive stiffness is the cause of reduced plantarflexion torque before the start of the swing phase in gait, and may also be the cause of limited dorsiflexion, compromising foot clearance [26]. It is also interesting to point out that these rehabilitative approaches can be used for either chronic or acute patients, leading to performance improvement and recovery beyond the current 6-month recovery plateau [27].

This study focuses on the assessment of detailed metrics of force production and position control in healthy subjects, and on their correlation with the learning of submaximal force production control, since improving regularity in submaximal force production might help improve functionality and reduce disability [28], during a new task consisting of maintaining a position, intended for early rehabilitation after stroke. Several assessment measurements are taken during the learning task to determine the temporal scale of this learning. Our aim is to characterize the capacity of healthy subjects to perform the precision task of maintaining a position with submaximal force production.

2 Materials and Methods

In this section, we briefly present the platform used for this study and describe the methodology. The analysis of the obtained data is performed with MathWorks MATLAB.

2.1 Experimental Platform

A Motorized Ankle Foot Orthosis (MAFO) with zero-torque control is used for this study (see Fig. 1). This robotic platform can exert controlled torque profiles on the ankle joint of the subject. It is connected to a BeagleBone Black board running Ubuntu and a custom application that interfaces the CAN-enabled driver board of the robot with MathWorks Simulink, where the visual paradigm is programmed, and with MATLAB, from which the control parameters for the robot are sent.

2.2 Participants

Nine males and six females without any history of neuromuscular or cardiovascular disorders participated in this study, with ages of 27.73 ± 3.58 years and a sport activity of more than 2 and less than 6 h per week.


Fig. 1. (a) Subject inside the robotic platform performing the experiment; and (b) detail of the actuator of the robotic platform.

2.3 Task

The experiment consisted of following the trajectories depicted in the visual paradigm (Fig. 2) while the robot disturbed the movement by applying interleaved plantarflexion and dorsiflexion torque patterns (Fig. 3). The aim of the exercise was to improve motor control by learning how to maintain the position needed to follow the trajectory on the screen, compensating for the perturbations applied by the robot.

Training Paradigms. Three training paradigms were designed for the experiment:

Fig. 2. Visual paradigm. The user controls the position of the bird with the angular position of the ankle.

Fig. 3. Torque patterns (normalized amplitude vs. samples).

– Fixed torque: this paradigm performed the thirty training trajectories at a fixed torque (established at 16 N·m, a technical limitation of the robot-controller set).
– Progressive: this paradigm progressively increased the torque exerted by the robot from a minimum of 1 N·m up to 16 N·m.
– Modulated: the peak torque exerted in each trajectory was increased by 2 N·m if the score in the previous one was 100%, and otherwise decreased towards the average of the peak torques of the two last trajectories, saturated so that the decrease was at least 1 N·m and at most 2 N·m with respect to the previous torque (a minimal sketch of this update rule is given below, after the task description). For example, if the score is lower than 100%, the previous peak torque was 8 N·m and the current torque is 10 N·m, then the next torque will be (8 + 10)/2 = 9 N·m, since the difference 10 − 9 = 1 N·m lies between 1 and 2 N·m.

Task Description. The task consisted of forty trajectories, distributed as in Fig. 4: (1) the first trajectory let the user move the foot freely in order to understand the dynamics of the exercise; (2) trials 2 to 4 assessed the performance of the subject in the task before the training (at the fixed torque); (3) thirty training trials were performed (one of the three training paradigms was randomly selected for each user); (4) trials 35 to 37 assessed the performance of the subject after the training at 80% of the fixed torque, i.e. 12.8 N·m (in order to assess the performance at a level different from that used for training in the fixed torque paradigm); and (5) finally, the last three trials, 38 to 40, assessed the performance of the subject after the training at the fixed torque.
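As a worked illustration of the modulated paradigm, the following sketch (Python; function and constant names are ours, not part of the experimental software, which ran in MATLAB/Simulink) updates the peak torque from one trajectory to the next according to the rule described above.

```python
# Hedged sketch of the "modulated" torque update rule described above.
TORQUE_MAX = 16.0  # N·m, fixed-torque limit of the robot-controller set
TORQUE_MIN = 1.0   # N·m

def next_peak_torque(prev_torque, curr_torque, curr_score):
    """Peak torque (N·m) for the next trajectory.

    prev_torque -- peak torque of the trajectory before the current one
    curr_torque -- peak torque of the current trajectory
    curr_score  -- score obtained in the current trajectory (0-100 %)
    """
    if curr_score >= 100.0:
        # Perfect score: increase by 2 N·m (clamped to the platform limit,
        # an assumption not stated explicitly in the text).
        return min(curr_torque + 2.0, TORQUE_MAX)
    # Otherwise decrease towards the average of the last two peak torques,
    # but by at least 1 N·m and at most 2 N·m.
    target = (prev_torque + curr_torque) / 2.0
    decrease = min(max(curr_torque - target, 1.0), 2.0)
    return max(curr_torque - decrease, TORQUE_MIN)

# Example from the text: score < 100 %, previous peak 8 N·m, current 10 N·m.
assert next_peak_torque(8.0, 10.0, 90.0) == 9.0
```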

Fig. 4. Experiment phases.


Fig. 5. Scores for the three training paradigms (average and standard deviation), for the three assessments: (1) Before training at fixed torque, (2) after training at 80% of fixed torque, and (3) after training at fixed torque.


Fig. 6. Root mean squared error for the three training paradigms (average and standard deviation), for the three assessments: (1) Before training at fixed torque, (2) after training at 80% of fixed torque, and (3) after training at fixed torque.

3 Results

In Fig. 5 we depict the scores for the three training paradigms (average and standard deviation), together with the trend (calculated as a fit over the two fixed-torque assessments). We observe that the score increases after the training, and that the score for the assessment at 80% of the fixed torque at the end of the training is better for all training paradigms. Figure 6 shows the root mean squared error for the three training paradigms; lower errors are obtained after the training. In Fig. 7 we can see the mean interaction torque for the three training paradigms, separated into dorsiflexion and plantarflexion trials.

Fig. 7. Torque for the three training paradigms, separated in dorsiflexion and plantarflexion. Average is represented with a solid black line, and standard deviation is represented by the grey shadow in the figure.


For (1) the fixed torque training paradigm, all trials reach the 16 N·m target; (2) in the progressive paradigm, we can observe the progressive increase in torque; and (3) in the modulated torque training paradigm, the torque increases up to a plateau, never reaching the established fixed torque. We had to remove the first training trial from the analysis due to a technical problem (a fixed torque of 16 N·m was incorrectly applied in this first trial for all the training paradigms).

We observed that all training paradigms led to an improvement in the score when comparing pre- and post-training performances, so we conclude that this platform induces learning in healthy subjects. Consistently, the error of the subject when positioning the bird on the visual cue (i.e. maintaining the foot at 0°), before and after training, decreased. All the users under the fixed torque and progressive torque paradigms obviously reached the prescribed torque, but this was not the case for the modulated torque paradigm, where none of the users was able to reach the 16 N·m torque during the training; we therefore conclude that this torque exceeds our target of using submaximal force production. Thus, we will study possible methodologies to calculate a customized submaximal torque for each subject. All the subjects reported that they liked the game, and most of them said that it was too easy, which is also observable in the scores. We will test more difficult trajectories to be maintained, as well as different torque profiles, in order to increase the difficulty of the game. To sum up, we conclude that this tool is useful to induce learning in healthy subjects, and we will therefore keep improving the training paradigms for the translation of the platform into a rehabilitation tool.

Acknowledgements. This research has been funded by the Commission of the European Union under the BioMot project - Smart Wearable Robots with Bioinspired Sensory-Motor Skills (Grant Agreement number FP7-ICT-2013-10-611695), and partially supported by grant RYC-2014-16613 from the Spanish Ministry of Economy and Competitiveness.

References

1. Cramer, S.C., Sur, M., Dobkin, B.H., O'Brien, C., Sanger, T.D., Trojanowski, J.Q., Rumsey, J.M., Hicks, R., Cameron, J., Chen, D., et al.: Harnessing neuroplasticity for clinical applications. Brain 134(6), 1591–1609 (2011)
2. Cano-de-la-Cuerda, R., Molero-Sánchez, A., Carratalá-Tejada, M., Alguacil-Diego, I.M., Molina-Rueda, F., Miangolarra-Page, J.C., Torricelli, D.: Theories and control models and motor learning: clinical applications in neurorehabilitation. Neurología (English Edition) 30(1), 32–41 (2015)
3. Asín-Prieto, G., Cano-de-la-Cuerda, R., López-Larraz, E., Metrot, J., Molinari, M., van Dokkum, L.E.H.: Emerging perspectives in stroke rehabilitation. In: Emerging Therapies in Neurorehabilitation, pp. 3–21. Springer (2014)
4. Nudo, R.J., Wise, B.M., SiFuentes, F., Milliken, G.W.: Neural substrates for the effects of rehabilitative training on motor recovery after ischemic infarct. Science 272(5269), 1791 (1996)
5. Langhorne, P., Coupar, F., Pollock, A.: Motor recovery after stroke: a systematic review. Lancet Neurol. 8(8), 741–754 (2009)


6. Patton, J.L., Mussa-Ivaldi, F.A.: Robot-assisted adaptive training: custom force fields for teaching movement patterns. IEEE Trans. Biomed. Eng. 51(4), 636–646 (2004)
7. Shea, J.B., Morgan, R.L.: Contextual interference effects on the acquisition, retention, and transfer of a motor skill. J. Exp. Psychol. Hum. Learn. Memory 5(2), 179 (1979)
8. Krakauer, J.W.: Motor learning: its relevance to stroke recovery and neurorehabilitation. Curr. Opin. Neurol. 19(1), 84–90 (2006)
9. Shea, C.H., Kohl, R.M.: Composition of practice: influence on the retention of motor skills. Res. Q. Exerc. Sport 62(2), 187–195 (1991)
10. Conditt, M.A., Gandolfo, F., Mussa-Ivaldi, F.A.: The motor system does not learn the dynamics of the arm by rote memorization of past experience. J. Neurophysiol. 78(1), 554–560 (1997)
11. Fasoli, S.E., Krebs, H.I., Stein, J., Frontera, W.R., Hogan, N.: Effects of robotic therapy on motor impairment and recovery in chronic stroke. Arch. Phys. Med. Rehabil. 84(4), 477–482 (2003)
12. Volpe, B.T., Krebs, H.I., Hogan, N., Edelstein, L., Diels, C., Aisen, M.: A novel approach to stroke rehabilitation: robot-aided sensorimotor stimulation. Neurology 54(10), 1938–1944 (2000)
13. Krebs, H.I., Hogan, N., Aisen, M.L., Volpe, B.T.: Robot-aided neurorehabilitation. IEEE Trans. Rehabil. Eng. 6(1), 75–87 (1998)
14. Reinkensmeyer, D.J., Patton, J.L.: Can robots help the learning of skilled actions? Exerc. Sport Sci. Rev. 37(1), 43 (2009)
15. Todorov, E., Shadmehr, R., Bizzi, E.: Augmented feedback presented in a virtual environment accelerates learning of a difficult motor task. J. Motor Behav. 29(2), 147–158 (1997)
16. Liu, J., Cramer, S.C., Reinkensmeyer, D.J.: Learning to perform a new movement with robotic assistance: comparison of haptic guidance and visual demonstration. J. Neuroeng. Rehabil. 3(1), 20 (2006)
17. Dobkin, B.H.: Strategies for stroke rehabilitation. Lancet Neurol. 3(9), 528–536 (2004)
18. Emken, J.L., Reinkensmeyer, D.J.: Robot-enhanced motor learning: accelerating internal model formation during locomotion by transient dynamic amplification. IEEE Trans. Neural Syst. Rehabil. Eng. 13(1), 33–39 (2005)
19. Patton, J.L., Stoykov, M.E., Kovic, M., Mussa-Ivaldi, F.A.: Evaluation of robotic training forces that either enhance or reduce error in chronic hemiparetic stroke survivors. Exp. Brain Res. 168(3), 368–383 (2006)
20. Scheidt, R.A., Dingwell, J.B., Mussa-Ivaldi, F.A.: Learning to move amid uncertainty. J. Neurophysiol. 86(2), 971–985 (2001)
21. Emken, J.L., Benitez, R., Sideris, A., Bobrow, J.E., Reinkensmeyer, D.J.: Motor adaptation as a greedy optimization of error and effort. J. Neurophysiol. 97(6), 3997–4006 (2007)
22. Bittmann, M.F., Patton, J.L.: Forces that supplement visuomotor learning: a "sensory crossover" experiment. IEEE Trans. Neural Syst. Rehabil. Eng. 25(8), 1109–1116 (2017)
23. Patton, J.L., Mussa-Ivaldi, F.A., Rymer, W.Z.: Altering movement patterns in healthy and brain-injured subjects via custom designed robotic forces. In: Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 2, pp. 1356–1359. IEEE (2001)


24. Leroux, A., Pinet, H., Nadeau, S.: Task-oriented intervention in chronic stroke: changes in clinical and laboratory measures of balance and mobility. Am. J. Phys. Med. Rehabil. 85(10), 820–830 (2006)
25. Duncan, P.W., Goldstein, L.B., Horner, R.D., Landsman, P.B., Samsa, G.P., Matchar, D.B.: Similar motor recovery of upper and lower extremities after stroke. Stroke 25(6), 1181–1188 (1994)
26. Lamontagne, A., Malouin, F., Richards, C.L., Dumas, F.: Mechanisms of disturbed motor control in ankle weakness during gait after stroke. Gait Posture 15(3), 244–255 (2002)
27. Page, S.J., Gater, D.R., Bach-y-Rita, P.: Reconsidering the motor recovery plateau in stroke rehabilitation. Arch. Phys. Med. Rehabil. 85(8), 1377–1381 (2004)
28. Panjan, A., Simunic, B., Sarabon, N.: Maximal voluntary torque and the ability to finely grade submaximal torque are not related / Sposobnost finega uravnavanja submaksimalnega navora ni povezana z najvecjim hotenim navorom. Kinesiologia Slovenica 17(3), 5 (2011)

A Protocol Generator Tool for Automatic In-Vitro HPV Robotic Analysis

Juan Pedro Dominguez-Morales¹, Angel Jimenez-Fernandez¹, Saturnino Vicente-Diaz¹, Alejandro Linares-Barranco¹, Asuncion Olmo-Sevilla², and Antonio Fernandez-Enriquez³

¹ Robotic and Technology of Computers Lab., University of Seville, Av. Reina Mercedes s/n, 41012 Sevilla, Spain
{jpdominguez,alinares}@atc.us.es
² Technical Department, Master Diagnostica S.L., Granada, Spain
[email protected]
³ Technical Department, VITRO S.A., Sevilla, Spain
[email protected]

Abstract. Human Papillomavirus (HPV) infection can lead to precancerous lesions and invasive cancer, as it is the main cause of nearly all cases of cervical cancer. There are many strains of HPV, and current vaccines can only protect against some of them. This makes the detection and genotyping of HPV a research area of utmost importance. Several biomedical systems can detect HPV in DNA samples; however, most of them do not have a procedure as fast, automatic or precise as is actually needed in this field. This manuscript presents a novel XML-based hierarchical protocol architecture for biomedical robots, which describes each protocol step and executes it sequentially, along with a robust and automatic robotic system for HPV DNA detection capable of processing from 1 to 24 samples simultaneously in a fast (from 45 to 162 min), efficient (100% marker effectiveness) and precise (able to detect 36 different HPV genotypes) way. It includes an efficient artificial vision process as the last step of the diagnosis.

Keywords: Robotic arm · In-vitro analysis · Automatic diagnostic · Computer vision · OpenCV · Convolutional neural networks · XML protocol

1 Introduction

Human papillomavirus (HPV) is a group of more than 150 DNA viruses from the papillomavirus family. Most HPV infections cause no physical symptoms, but others can cause benign papillomas or premalignant lesions that may lead to cancer, especially cervical cancer. HPV is transmitted through intimate skin-to-skin contact and is the most common sexually transmitted infection (STI).


This virus is so common that nearly all sexually active men and women get it at some point in their lives, making it a very high-interest subject of study for cervical cancer screening. Persistent human papillomavirus infection is a necessary cause of the development of cervical cancer [1,2]. Therefore, sensitive HPV detection is crucial to establish the risk of progression to high-grade intraepithelial lesions and cervical carcinoma. In addition, HPV genotyping constitutes an important tool to assess the specific genotype-associated risk of progression and the subsequent patient management. HPV testing offers greater sensitivity for the early detection of pre-cancerous cervical lesions and has been recommended as a triage tool for efficient patient management. The HPV detection systems available on the market are mainly based on hybridization assays [3], signal amplification assays [4], or the amplification of viral sequences by Polymerase Chain Reaction (PCR) [4]. Several HPV detection systems do not identify the specific genotype(s) present in the sample, and their results have limited usefulness, as they can only be reported as high-risk positive, low-risk positive, or high-risk + low-risk positive. The market requires highly sensitive and specific HPV genotyping systems. These methods should also be automated, rapid and reliable. There are several systems on the market for HPV DNA detection. Most of them are time consuming (6–8 h minimum to get results), such as the CLART technology from Genomica, Spain¹; others are not fully automatic, such as the FT-Pro Flow-Through hybridization system from Diagcor BioScience, China²; or are not able to identify a multiplex panel of HPV genotypes, such as Cobas from Roche Diagnostics, Switzerland³. Other systems are not able to run a set of samples simultaneously, but only one sample per run, such as GeneXpert from Cepheid, USA⁴. The HybriSpot 24 (HS24) platform⁵ is a robotic device intended for automatic and rapid multiplexing analysis of biomarkers, by reverse dot blot colorimetric hybridization on a porous CHIP based on DNA Flow Technology. This hybridization platform allows the target molecules (DNA amplicons) to flow vertically through the membrane towards the complementary capture probes that are immobilized inside the matrix pores. This approach enables the reaction between the target molecules in a three-dimensional porous environment rather than passive surface hybridization, allowing the reactions to be completed in seconds. It generates much higher signal intensities in a very short period of time, reducing reagent volumes and processing times from hours to minutes, which represents an economical alternative for molecular diagnostics.

1. Genomica, "CLART Technology." [Online]. Available: http://genomica.es/en/in_vitro_diagnostics_products.cfm
2. Diagcor BioScience, FT-Pro Flow-Through hybridization system. [Online]. Available: http://www.diagcor.com/en/mdx-products/detail/ft-pro-auto-system
3. Roche Diagnostics, Cobas. [Online]. Available: http://molecular.roche.com/assays/Pages/cobasHPVTest.aspx
4. Cepheid, GeneXpert. [Online]. Available: http://www.cepheid.com/us/cepheidsolutions/systems/genexpert-systems/genexpert-i
5. VITRO S.A., HybriSpot 24. [Online]. Available: www.masterdiagnostica.com/products/moleculardiagnostickits/dnaflowtechnology/instruments.aspx


The rest of the manuscript is structured as follows: Sect. 2 presents HybriSpot 24, the biomedical robot used in this work to implement and execute XML protocols, as well as to obtain execution times. Then, Sect. 3 explains the image analysis technique used in this work. Section 4 introduces XML protocols and describes their hierarchical structure and components (Basic Commands and High-Level Commands). Section 5 describes the process of building XML protocols. Section 6 presents results for an HPV diagnosis protocol in terms of execution times, performance, robustness and validation. Finally, Sect. 7 presents the conclusions of this work.

2 HybriSpot 24

HybriSpot 24 (see Fig. 1) is a fully automatic biomedical robot developed by VITRO S.A.⁶, DropSens⁷, Master Diagnostica⁸ and RTC Lab⁹, and commercialized by VITRO S.A., whose main aim is to detect and genotype viruses like HPV. HS24 can process from 1 to 24 samples simultaneously in a period of time that ranges from 45 to 162 min, respectively. It provides a diagnostic platform for rapid detection and identification of HPV genotypes by reverse dot blot hybridization on a microarray porous membrane (see Fig. 2). HS24 is based on a Gilson GX-271 Liquid Handler robotic arm. This arm is a very rigid and robust Cartesian Coordinate Robot (CCR) that also carries a dispenser (Fig. 1), a syringe-like electronic device that allows liquids to be dispensed or suctioned from various inputs automatically using specific commands. The dispenser is connected to a probe located on the robot, in addition to a washing solution container that is used to purge the system after particular steps of a protocol. HS24 also has:

– A washing station with two positions: a waste position and a probe-cleaning position.
– Two 3 × 8 (three rows by eight columns) matrices to place DNA samples.
– 28 containers for reagents. Four of them are dedicated to a special reagent (hybridization buffer), which is the most widely used in every protocol, and have two heaters so that the temperature inside the containers can be set. Setting this reagent to a specific temperature at a particular step of the protocol is essential for the speed and success of the analysis.
– Two 4 × 3 (four rows by three columns) reaction chambers, each with an associated Peltier cell. Each chamber allows the incubation process on the membranes.

6. VITRO S.A. Available: http://www.vitro.bio/
7. DropSens. Available: http://www.dropsens.com/
8. Master Diagnostica. Available: http://www.masterdiagnostica.com/
9. RTC, Robotic and Technology of Computers Lab. Available: http://www.rtc.us.es/


Fig. 1. HS24 instrument, composed of a Gilson GX-271 Liquid Handler robotic arm with one dispenser attached, a washing station, a webcam, two reaction chambers with Peltier cells, two reagent zones (with heaters), two 12-sample areas and embedded STM32-based USB PCBs for heater/valve control.

Using the robotic arm, the dispenser and the elements presented above, HS24 is able to combine reagents with DNA samples, incubate them and execute a set of operations that results in the development of colored spots at the end of the process. In order to provide the system with enough flexibility for other protocol implementations, or for tuning or debugging a protocol under development, all these operations are described in XML. Thus, the protocol is a sequence of XML sentences that include information about every single execution step of the robot. Each membrane has a printed matrix of spots. Some spots are always present for control purposes; others show up depending on the detected genotypes. Figure 2 shows a membrane with highlighted spots after successfully completing an HPV diagnosis protocol. HS24 includes a high-definition camera attached to the robotic arm that takes pictures of the membranes. These pictures are sent to the host computer software, where they are automatically analyzed with two different mechanisms to perform a diagnosis and store it in a database: (a) OpenCV [5] artificial vision algorithms and (b) a Convolutional Neural Network (CNN) previously trained with an extensive database collected at Master Diagnostica during the instrument testing period.


Fig. 2. Membrane with highlighted spots after executing a HPV diagnosis protocol (o). o+f, o+b and o+r are the output from different synthetic samples generation techniques.

3 Image Analysis

The principal approach implemented in this instrument is a classical one using artificial vision techniques through the OpenCV library; we aim to compare this principal approach with a deep learning based technique using Convolutional Neural Networks. The aim of the second approach is to validate deep learning in this particular application. The OpenCV-based technique is used as the ground truth for the deep learning training procedure, since 100% accuracy has been obtained in the laboratory with a proper camera configuration and light conditions. Once the protocol procedure on the samples has concluded, the camera installed in the instrument, close to the syringe, is activated and a picture is taken. The following operations are applied to the picture: (a) a smoothing filter; (b) circle detection for the membrane, plus centering it; (c) removal of anything not useful from the image; (d) search for key spots in the membrane using template matching techniques for the detection of three different small circles; (e) distance calculation between these spots (the maximum distance identifies two of the five control spots, and the other three follow logic rules to be located); (f) membrane rotation to the same view for all membranes in the database; and (g) diagnosis depending on how many spots, and which ones, are detected in the membrane. On the other hand, for the CNN technique, a database has to be created using the samples analyzed during the testing period of the instrument for the HPV analysis only. The images taken from the samples after the HPV analysis have been processed using different techniques, like focus filters (o+f), brightness filters (o+b) and rotation filters (o+r), to increase the number of images in the database in a synthetic way. A classic CNN architecture derived from LeNet-5 has been implemented and trained with Caffe [13]. The CNN execution can be performed in the HSHS software by using the open-source Caffe libraries.
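To make the classical pipeline more concrete, the following sketch (Python with OpenCV) illustrates steps (a), (b) and (d): smoothing, membrane circle detection and template matching for the key spots. The threshold values, radii and template are placeholders, not the parameters of the actual HS24 software.

```python
# Hedged sketch of steps (a), (b) and (d) of the classical pipeline.
# Parameter values and the spot template are illustrative placeholders.
import cv2
import numpy as np

def find_membrane_and_spots(image_path, template_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # (a) smoothing filter to reduce sensor noise
    blurred = cv2.GaussianBlur(gray, (9, 9), 2)

    # (b) detect the circular membrane and crop around its center
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=200, param1=100, param2=40,
                               minRadius=80, maxRadius=200)
    if circles is None:
        return None
    cx, cy, r = np.round(circles[0, 0]).astype(int)
    membrane = blurred[cy - r:cy + r, cx - r:cx + r]

    # (d) template matching to locate the small key/control spots
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(membrane, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(result >= 0.8)   # candidate spot locations
    return (cx, cy, r), list(zip(xs, ys))
```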

4 XML Protocols

Our instruments have to execute a particular protocol to analyze a tissue sample in order to diagnose the presence of a particular DNA sequence (i.e. HPV, Zoonosis, Sepsis, etc.).


This protocol is stored as a sequential list of commands that the robot has to execute in order to perform a specific functionality. XML (eXtensible Markup Language) [6] has been selected for this purpose. XML is a language developed by the W3C that is mainly used to store information. The main benefits of this language are its easy handling and its very legible structure. Information in this format is stored in a hierarchical structure similar to a tree: there is a root node and subtrees of children, each with a parent node. Thus, we call the result an XML-protocol. The root node is the name of the protocol, and two different types of children can appear below the root (by convention, trees are drawn growing downwards):

– If the node is a leaf node (it does not have any children), then it is a Basic Command, i.e. a simplest actuation of the robot.
– If the node is an inner node (it has at least one child), then it is a High-Level Command. This command is a set of Basic Commands that describes a higher-level functionality.
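A minimal sketch of what such a tree could look like, and of how it could be walked programmatically, is shown below (Python with xml.etree.ElementTree). The element and attribute names are illustrative assumptions, not the exact schema used by the HS24 software.

```python
# Hedged sketch: a hypothetical XML-protocol and a depth-first walk over it.
# Tag and attribute names are illustrative, not the real HS24 schema.
import xml.etree.ElementTree as ET

PROTOCOL = """
<HPVDiagnosisProtocol>
  <Wash/>                                       <!-- high-level: expands to valve/pump commands -->
  <ReagentsDispensing reagent="HybridizationBuffer" volume_ul="100">
    <RobotCmd x="120.0" y="45.0" z="10.0"/>     <!-- basic commands as children -->
    <DispCmd action="dispense" volume_ul="100" speed_ml_min="5"/>
  </ReagentsDispensing>
  <PeltierCmd chamber="1" target_c="41" wait_min="10"/>
</HPVDiagnosisProtocol>
"""

def walk(node, depth=0):
    """Walk the protocol tree: leaves are Basic Commands, inner nodes High-Level Commands."""
    kind = "high-level" if len(node) else "basic"
    print("  " * depth + f"{node.tag} ({kind}) {node.attrib}")
    for child in node:
        walk(child, depth + 1)

root = ET.fromstring(PROTOCOL)
walk(root)
```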

4.1 Basic Commands

These are instructions that describe basic functionalities of the robot, such as position control, dispenser control, valve and pump control, and Peltier cell control. The set of basic commands that an XML-protocol can use allows all the steps needed by a biomedical robot in the analysis process to be executed. These commands are:

– RobotCmd. Used to command the arm motion along the X, Y and Z Cartesian axes. The Z movement allows the syringe of the robot to go up and down in order to suction or dispense liquids properly. The robot has a liquid detector in the syringe. There are four additional parameters in this command that allow one to (1) select movements only in the X and Y axes, (2) perform a homing process after the command is executed, (3) perform a movement only in the Z axis, and (4) enable liquid detection in the probe. XY movements are allowed only if the Z axis is below a security threshold, to avoid syringe damage. Note that the speed of the movement cannot be set with this command: the robot speed is controlled with a different parameter and cannot be modified using this kind of command.
– DispCmd. Its purpose is to suction or dispense liquids, like reagents or DNA samples, using the probe attached to the dispenser. The quantities used in this command should be checked carefully to avoid exceeding the maximum capacity of the dispenser. The speed of this process can also be specified, with a value between 0.01 and 40 ml/min, in addition to the position of the dispenser valve, which allows suctioning or dispensing from the probe tip or from the washing solution container. The speed of the absorbing process should not be high, since the potential formation of air bubbles could affect the entire analysis [7].


– PeltierCmd. This command controls the temperature of the Peltier cells located below the reaction chambers and under the main reagents of the robot. This process is essential for the incubation of the samples. Some parameters are required for executing this command, such as the ID of the Peltier cell that will be affected, the target temperature, and the waiting time after reaching that temperature. This command blocks the following command until this wait is over.
– ValveCmd. On a biomedical robot, valves redirect the liquid flow in the system. The ValveCmd command performs this task, adding some control over it by means of three parameters: (1) the action (on/off) that will be applied, (2) the specific valve or set of valves it applies to, and (3) how long the system will wait after this action before the next command starts its execution.
– PumpCmd. Pumps allow the liquid flow in the robot to be turned on or off. This is very useful in washing and waste removal processes. This command has the same parameters as ValveCmd, but it acts on the pumps instead of the valves.
– WaitCmd. This command waits for a specific amount of time (minutes or seconds).
– CameraCmd. This command takes a picture of the membrane whose position is specified as an input. The captured picture is processed using an OpenCV (Open Source Computer Vision) [5] algorithm or a previously trained CNN. OpenCV is an open-source computer vision and machine learning library of programming functions mainly used for motion tracking [8], facial or gesture recognition [9,10] and augmented reality [11]. This library has been used to detect which spots of the matrix printed on the membrane are highlighted, and their intensity. With this information, OpenCV gives an automated diagnosis of the detected genotypes.

As they are focused on low-level functionalities, creating an entire protocol using only these commands could be a slow and tedious process with a high rate of human error, which may cause serious damage to the robot.

4.2 High-Level Commands

High-level commands are sets of basic commands that combine their basic functionalities into a more complex and useful task. These commands allow the creation of longer protocols faster and, since they need fewer parameters to be specified, human mistakes in this process are reduced. The most important high-level commands are:
– Wash. This command cleans the probe attached to the robot both internally and externally, using a washing solution, which is pumped through the system using basic commands to control valves and pumps. Throughout the execution of the protocol, the robot uses different reagents and samples, which could spoil the whole process if they mix at some point. That is what makes a purge command like this so important for a biomedical robot.
– Reagents dispensing. Before a protocol starts, the camera attached to the robotic arm scans the workspace to identify where the reagents needed in the protocol are placed, and saves this information. Using this information plus the localization of the membranes that will be used (set by the user), this command is able to dispense a specific quantity of any of the reagents (both values are specified as parameters for this functionality) in every membrane used in the protocol. Figure 3 shows the execution flowchart for this command; a code-level sketch is given after the figure.
– Samples dispensing. This high-level command is similar to the reagents dispensing command described above. Instead of dispensing a specific quantity of a reagent in the membranes, it dispenses the reagent along with a small quantity of a sample, both specified as parameters.
– Camera Capture. After the protocol is finished, this command is launched to take images of the membranes that were used, and to analyze them with the OpenCV algorithm. This command is a combination of multiple CameraCmd basic commands.
– Incubation. As the incubation of the membranes is one of the main steps in a protocol, this command was created for this purpose. The incubation high-level command combines a PeltierCmd and a WaitCmd. It sets the membranes to a temperature that is specified in the input parameters and then waits for a specific amount of time after the Peltier cell reaches the target temperature.
– Drain. After some particular steps of the protocol, the membranes need to be washed from the reagents and samples that were dispensed. This cleaning process is done with the drain command, which uses pump and valve basic commands to allow the flow of washing solution into the system below the reaction chambers where the membranes are placed.
– Remove waste. Usually, the quantity of reagent that is suctioned is higher than the one that is dispensed. This adds a small security gap of reagent to maintain the pressure in the pipe that connects the probe with the dispenser while dispensing. This gap is added for each dispensation, generating a remainder in the dispenser that keeps increasing after each of these instructions. The remove waste high-level command empties the syringe. It is important to execute this command before the remainder plus the quantity that will be suctioned exceeds the maximum capacity of the dispenser.

Fig. 3. Dispense Reagents command flowchart: suction the reagent (go to the reagent position with RobotCmd, suction a specific quantity with DispCmd), then dispense it in each membrane (go to the membrane position with RobotCmd, dispense a specific quantity with DispCmd), suctioning more reagent whenever there is not enough to dispense, until all membranes have been dispensed.
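As a rough, code-level illustration of the flow in Fig. 3, the following sketch chains hypothetical wrappers around the basic commands. All type and method names are assumptions introduced for this example and do not reproduce the actual hybriSoft implementation.

```csharp
using System.Collections.Generic;

// Hypothetical stand-ins for the real command wrappers (names are assumptions).
public record Position(double X, double Y, double Z);
public record Reagent(Position Pos, double RefillVolume);
public record Membrane(Position Pos);

public interface IRobot
{
    void MoveTo(Position p);     // would issue a RobotCmd
    void Suction(double ml);     // would issue a DispCmd (suction)
    void Dispense(double ml);    // would issue a DispCmd (dispense)
}

// "Reagents dispensing" high-level command, following the Fig. 3 flowchart.
public class ReagentsDispensing
{
    public void Execute(IRobot robot, Reagent reagent, double mlPerMembrane,
                        IEnumerable<Membrane> membranes)
    {
        double held = 0.0;                               // reagent currently in the dispenser
        foreach (var m in membranes)
        {
            if (held < mlPerMembrane)                    // "Enough reagent to dispense?" -> No
            {
                robot.MoveTo(reagent.Pos);               // go to reagent position (RobotCmd)
                robot.Suction(reagent.RefillVolume);     // suction a specific quantity (DispCmd)
                held += reagent.RefillVolume;
            }
            robot.MoveTo(m.Pos);                         // go to membrane position (RobotCmd)
            robot.Dispense(mlPerMembrane);               // dispense a specific quantity (DispCmd)
            held -= mlPerMembrane;
        }
    }
}
```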

5 Building XML Protocols

In an analysis of a virus like HPV, the number of Basic-Commands that the protocol can have is approximately 400 per sample. Creating an XML protocol manually for an analysis with more than one sample could take an inconceivable amount of time. This is why protocols are not created manually by researchers; it would take many hours to create them and, since they would contain a large number of Basic-Commands, the probability of human error would be very high. To solve this problem, a software tool was developed to generate the XML protocol from a template written in CSV format [12], which consists of very high-level instructions that will later be translated into basic and high-level commands. A CSV template can be written using more than 30 different instructions, making it possible to generate protocols that describe the same functionality as those created by using only Basic and High-Level commands, but spending less time and effort and reducing the error probability. A CSV template for the HPV analysis has around 63 instructions and the time spent to create it is approximately 30 min. The researcher therefore only needs to create a generic CSV protocol template based on his/her experiments and validations. Then, after selecting which reagents and samples are going to be used in the protocol, the software tool reads the CSV file associated with the selected analysis and automatically converts it to a valid XML protocol, which can be understood and executed by the robot based on this information. As it is written in a generic way, the CSV template, once created, is useful for the analysis regardless of which reagents, samples or membranes are used, being completely independent from these values. However, XML protocols can still be built manually, or even modified after being generated from a CSV, for further control. Basic and High-Level commands are programmed as classes and functions in C#, respectively. This means that adding new commands that were not described in Sect. 3 can be done just by creating a new class defining the command functionality and binding it to the main architecture class. This easy process allows a quick and effortless adaptation of this protocol system to new instruments or robot functionalities.
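Since the basic commands are implemented as C# classes, a new command could in principle be added along the lines of the sketch below. The base class, its members and the registration call are assumptions made for illustration; the actual architecture class and its bindings are not reproduced here.

```csharp
using System.Collections.Generic;

// Assumed abstract base for basic commands (not the real hybriSoft class).
public abstract class BasicCommand
{
    public abstract string Name { get; }
    public abstract string ToXml();                       // serialisation into the XML protocol
}

// Purely hypothetical new instrument command added by subclassing BasicCommand.
public sealed class ShakerCmd : BasicCommand
{
    public int Seconds { get; init; }
    public override string Name => "ShakerCmd";
    public override string ToXml() => $"<ShakerCmd seconds=\"{Seconds}\"/>";
}

// Stand-in for the "main architecture class" the new command is bound to.
public static class ProtocolArchitecture
{
    private static readonly List<BasicCommand> commands = new();
    public static void Register(BasicCommand cmd) => commands.Add(cmd);
}
```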

6 Results

The time that the analysis process takes to complete depends on two factors: first, the kind of analysis that will be executed and, second, the number of samples that will be analyzed. Figure 4 shows execution times for the HPV virus diagnosis protocol using 1, 2, 6, 12, 15 and 24 membranes. These six tests were


Fig. 4. Execution times of the HPV diagnosis protocol for 1, 2, 6, 12, 15 and 24 samples.

executed in the HS24 system. As can be seen, time increases linearly except for the range between 12 and 15 samples, where the second reaction chamber (12 samples per reaction chamber) needs to be used, which is farther from the washing station and the hybridization buffer containers than the first one. Table 1 shows execution times for different numbers of samples, together with the number of instructions in the CSV file and the number of Basic Commands in the XML protocol. As it is an automated process that does not need human supervision, robots can be programmed with protocols during nighttime, increasing the number of tests per day and reducing human cost. The performance and robustness of the automatic HS24 system was validated by testing low concentrations of the HPV genotypes included in the HPV detection kit (5 copies for HPV 16 and HPV 18, 50–500 copies for the other genotypes); all tests were performed at Master Diagnostica. Basically, this validation demonstrates the reproducibility of the results in positions 1 to 24 on the HS24 device and the reproducibility of the results for runs with different numbers of samples. Moreover, the system demonstrated a high sensitivity of detection for all the genotypes.

Table 1. Execution time, number of instructions and number of Basic Commands for 1, 2, 6, 12, 15 and 24 samples.

Number of samples | Execution time (minutes) | Number of instructions | Number of Basic Commands
1                 | 45                       | 63                     | 967
2                 | 49                       | 63                     | 1161
6                 | 70                       | 63                     | 2325
12                | 96                       | 63                     | 3877
15                | 128                      | 63                     | 5065
24                | 162                      | 63                     | 7587


The reproducibility of the results for runs with different numbers of samples was evaluated. Replicates of a positive sample containing a limiting copy number of several genotypes (50 GE) were loaded at different positions in the HS24 platform, according to four protocols: (1) protocol for 2 samples (2 replicates), (2) protocol for 12 samples (3 replicates), (3) protocol for 15 samples (4 replicates), and (4) protocol for 24 samples (6 replicates). The results were automatically analyzed and all the samples were detected without differences among positions and protocols. In order to verify the reproducibility of the protocol in different positions of the HS24 reaction chambers, four replicates of each of the 36 HPV genotypes detected by the method, at a limiting copy number, were loaded in different positions of the two reaction chambers of the HS24, and protocols for 24 samples were performed. The results were automatically analyzed and showed 100% reproducibility for all the genotypes analyzed in the different positions of the reaction chambers, using classical image processing techniques, during instrument validation at Master Diagnostica for around 2,000 samples. A LeNet-5 CNN has been implemented in Caffe [13] and a database is under construction, created from images taken from the membranes after the HPV analysis is applied. For the MNIST experiments, 60k training samples were needed to obtain 98% accuracy; we aim to obtain similar results with a database of similar size for these HPV images.

7 Conclusions

This manuscript presents a novel XML-based hierarchical protocol architecture for biomedical robots that allows describing any execution step that an HPV diagnosis protocol could need. This architecture is already implemented in hybriSoft (the hybriSpot 24 sample management software from VITRO), allowing the HPV DNA detection processing of up to 24 samples simultaneously in less than three hours, which is a novelty in terms of speed in the field of biomedical robotic systems. As XML files are generated automatically using a generic CSV template of the protocol, regardless of the number of samples and the reagents that will be used, the user does not have to create a new protocol for the same virus diagnosis when using a different number of samples, reducing time and effort to zero in these cases. This also leads to a minimum human error probability when creating the protocol. Adjusting and adapting specific steps of the protocol can be done by modifying a single XML or CSV file, which provides a fast and effortless way to change the robot behavior. Although in this manuscript we focused on HPV DNA detection, this protocol architecture allows the detection of other infections like sepsis, meningitis, etc., which are already implemented on the hybriSpot 24 and hybriSpot 12-Auto systems. Acknowledgments. We would like to thank VITRO for their continuous support and help during this work and for lending us a hybriSpot 24 for protocol testing purposes.


This work is supported by FIDETIA (P055-12/E03) and by the Spanish government grant (with support from the European Regional Development Fund) COFNET (TEC2016-77785-P).

References

1. Bosch, F.X., Lorincz, A., Munoz, N., Meijer, C.J.L.M., Shah, K.V.: The causal relation between human papillomavirus and cervical cancer. J. Clin. Pathol. 55, 244–265 (2002)
2. Walboomers, J.M., Jacobs, M.V., Manos, M.M., Bosch, F.X., Kummer, J.A., Shah, K.V., Munoz, N.: Human papillomavirus is a necessary cause of invasive cervical cancer worldwide. J. Pathol. 189, 12–19 (1999)
3. Wong, A.A., Fuller, J., Pabbaraju, K., Wong, S., Zahariadis, G.: Comparison of the hybrid capture 2 and cobas 4800 tests for detection of high-risk human papillomavirus in specimens collected in PreservCyt medium. J. Clin. Microbiol. 50, 25–29 (2012)
4. Abreu, A.L.P., Souza, R.P., Gimenes, F., Consolaro, M.E.L.: A review of methods for detect human Papillomavirus infection. Virol. J. 9, 262 (2012). http://doi.org/10.1186/1743-422X-9-262
5. Bradski, G.: The OpenCV library. Dr. Dobb's J. 25, 120–126 (2000)
6. Bray, T., Paoli, J., Sperberg-McQueen, C.M., Maler, E., Yergeau, F.: Extensible markup language (XML). World Wide Web J. 2, 27–66 (1997)
7. Savchenko, Y.: Supercavitation - problems and perspectives. In: CAV 2001, pp. 1–8 (2001)
8. Luo, Y., Yang, H., Hu, Z.: Human limb motion real-time tracking based on CamShift for intelligent rehabilitation system. In: IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 343–348 (2009)
9. Pingxian, Y., Rong, G., Peng, G., Zhaoju, F.: Research on lip detection based on OpenCV. In: International Conference on Transportation, Mechanical, and Electrical Engineering (TMEE), pp. 1465–1468 (2011)
10. Li, X., Han, G., Ma, G., Chen, Y.: A new method for detecting fatigue driving with camera based on OpenCV. In: International Conference on Wireless Communications and Signal Processing (WCSP), pp. 1–5 (2011)
11. Manaf, A.S., Sari, R.F.: Color recognition system with augmented reality concept and finger interaction: case study for color blind aid system. In: 9th International Conference on ICT and Knowledge Engineering, pp. 118–123 (2012)
12. Shafranovich, Y.: Common format and MIME type for comma-separated values (CSV) files (2005)
13. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014)

Robotics and Cyber-Physical Systems for Industry 4.0 (I)

End-Effector Precise Hand-Guiding for Collaborative Robots

Mohammad Safeea1, Richard Bearee2, and Pedro Neto1,2(B)

1 University of Coimbra, 3000-033 Coimbra, Portugal
[email protected]
2 Arts et Metiers ParisTech, LSIS Lille 8, Boulevard Louis XIV, 59046 Lille Cedex, France
[email protected]

Abstract. Hand-guiding is a main functionality of collaborative robots, allowing to rapidly and intuitively interact with and program a robot. Many applications require end-effector precision positioning during the teaching process. This paper presents a novel method for precision hand-guiding at the end-effector level. From the end-effector force/torque measurements, the hand-guiding force/torque (HGFT) is obtained by compensating for the tool's weight/inertia. Inspired by the motion properties of a passive mechanical system, a mass subjected to Coulomb/viscous friction, a control scheme was implemented to govern the linear/angular motion of the decoupled end-effector. Experimental tests were conducted on a KUKA iiwa robot in an assembly operation.

Keywords: Hand-guiding · Collaborative robot · End-effector

1 Introduction

Human-robot interaction has been studied over the last decades, from text-based programming to off-line programming [1] to the more recent intuitive techniques in which humans interact with robots as they interact with each other, using natural means like speech, gestures and touch [2,3]. In the touch interaction mode the robot can be hand-guided to teach the required paths. Hand-guiding is a representative functionality of collaborative robots, allowing unskilled users to interact with and program robots in a more intuitive way than using the teach pendant. Existing collaborative robots include hand-guiding functionality with limitations in terms of the accuracy required for many tasks like assembly. For precision positioning (position and orientation of the end-effector) the teach pendant is still used (even for sensitive robots). The use of the teach pendant limits the intuitiveness of the teaching process provided by hand-guiding, and it is time consuming (an important parameter on the factory floor). When the operator is using the teach pendant, he/she has to visualize


mentally the different reference axes of the robot. In some scenarios, teach pendant based robot positioning can lead to undesirable collisions, which can cause damage to sensitive equipment. This paper presents a novel method for end-effector precision hand-guiding to be applied in collaborative robots equipped with joint torque sensors or with a force/torque sensor attached to the end-effector. A control scheme was implemented that utilizes force feedback to compensate for the end-effector weight, so that minimal effort is required from the operator to perform the hand-guiding motion. Inspired by the motion properties of a passive mechanical system, a mass subjected to Coulomb/viscous friction, a control scheme was implemented to govern the linear/angular motion of the decoupled end-effector. Experimental tests were conducted on a KUKA iiwa 7 R800 robotic manipulator in an assembly operation requiring precision and fine tuning. It is demonstrated that the proposed method allows precise hand-guiding with smooth robot motion in terms of position and orientation of the end-effector.

1.1 State of the Art

The importance of collaborative robots and hand-guiding teaching is well known in the robotics community. It is still a challenge to have robots and humans sharing the same space and performing collaborative tasks with the required safety levels to minimize the risk of injuries [4]. Human subjects can interact physically with a robot arm, moving and guiding the robot into the desired grasping poses, while the robot's configuration is recorded [5]. Different control schemes for human-robot cooperation have been proposed, for example physical interaction in 6 DOF space using a controller that allows asymmetric cooperation [6]. Robot assistance through hand-guiding is also used to assist in the transport of heavy parts. In [7] the application of a human-industrial robot cooperative system to a production line is studied. Safety, operability and the assistance of human skills were studied as they relate to hand-guiding. Hand-guiding teaching for conventional industrial robots is normally sensor based. In [8] a sensorless hand-guiding method based on torque control is proposed. The dynamic model of a robot, along with the motor current and friction model, is used to determine the user's intention to move the end-effector of a robot instead of directly sensing the external force applied by the user. The problem of Cartesian impedance control of a redundant robot executing a cooperative task with a human has been addressed in [9]. Redundancy was used to keep the robot's natural behavior as close as possible to the desired impedance behavior, by decoupling the end-effector equivalent inertia. The authors claim that this allows easily finding a region in the impedance parameter space where stability is preserved. A method that uses impedance control of multiple robots to aid a human moving an object was presented in [10], along with an experiment with motion along one degree of freedom (DOF). In [11] an approach is proposed for collision detection and reaction for an industrial manipulator with a closed control architecture and without external sensors. The robot requirements are the joint velocity, motor currents and joint


positions. Robot-assisted surgery demands high hand-guiding accuracy. An approach that enables an operator to guide a robot along a predefined geometric path, while taking the required joint constraints into account and thus implicitly considering the actuator dynamics, is proposed in [12].

2 Working Principle

Assuming force feedback at the robot end-effector (the robot may have joint torque sensors or a force/torque sensor attached to the end-effector), three groups of robot hand-guiding motion are considered (Fig. 1):
1. The first motion group represents the positioning (with linear displacements) of the end-effector in the X, Y and Z coordinate system of the robot base;
2. The second motion group is used for orienting the end-effector in the Cartesian space;
3. The third motion group is used to rotate the end-effector around its axis.
These motion groups are introduced because they are the most intuitive for humans when performing a motion that requires precision.

Fig. 1. The proposed robot motion groups for precision hand-guiding

Let us define the hand-guiding force as the force applied by the operator at the end-effector for linear positioning, and the hand-guiding moment as the moment applied by the operator at the end-effector for angular positioning. The force/moment measurements represent the forces/moments due to (1) the end-effector weight, (2) the inertial forces/moments due to the acceleration of the end-effector, and (3) the external hand-guiding force/moment (to be applied by the operator for achieving hand-guiding). To simplify the calculations we omit the inertial forces/moments due to end-effector acceleration because, in precise hand-guiding applications, these inertial forces are relatively small compared to the end-effector weight and to the external forces/moments. The measured forces and torques can therefore be approximated as being due to the hand-guiding forces and the weight of the end-effector. The components of the hand-guiding force described relative to the robot base frame are used as inputs for the proposed linear motion controller, i.e., for positioning the end-effector according to the robot base frame.

2.1 Hand-Guiding Force

The components of the hand-guiding force described in the robot base frame, $f^b = (f_x, f_y, f_z)$, serve as input to move the robot end-effector along the X, Y or Z directions of the robot base (1st motion group). The maximum of the components $(f_x, f_y, f_z)$ is used to calculate the control command, so that the end-effector is controlled one axis at a time. The hand-guiding forces $(f_x^e, f_y^e, f_z^e)$ described in the end-effector reference frame are calculated by subtracting the weight of the end-effector from the measured forces:

$$\begin{bmatrix} f_x^e \\ f_y^e \\ f_z^e \end{bmatrix} = \begin{bmatrix} u_x^e \\ u_y^e \\ u_z^e \end{bmatrix} - R_b^e \begin{bmatrix} 0 \\ 0 \\ w \end{bmatrix} \quad (1)$$

where $(u_x^e, u_y^e, u_z^e)$ are the components of the end-effector force/torque measurements, described in the sensor frame (in this study the frame of the force/torque measurements is the same as the frame of the end-effector; otherwise a constant transform between the two frames can be introduced and the methodology described is still valid). Here, $w$ is the weight of the end-effector and $R_b^e$ is the rotation matrix from the base frame to the end-effector frame. This matrix is obtained from the transpose of the rotation matrix from the end-effector to the base frame:

$$R_b^e = \left(R_e^b\right)^T \quad (2)$$

where $R_e^b$ is obtained from the direct kinematics. The hand-guiding force $f^b$ described in the base frame is calculated from:

$$f^b = R_e^b \begin{bmatrix} f_x^e \\ f_y^e \\ f_z^e \end{bmatrix} \quad (3)$$

The force command $f$ sent to the control algorithm is:

$$f = \begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix} f^b \quad (4)$$

where the following conditions must be respected:

a b c | Condition
1 0 0 | $|f_x| > |f_y|$ and $|f_x| > |f_z|$
0 1 0 | $|f_y| > |f_x|$ and $|f_y| > |f_z|$
0 0 1 | $|f_z| > |f_x|$ and $|f_z| > |f_y|$
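A minimal sketch of Eqs. (1)–(4) is given below, using the System.Numerics types as stand-ins for the rotation matrices and vectors. All names, as well as the matrix conventions, are assumptions made for illustration rather than the authors' implementation.

```csharp
using System;
using System.Numerics;

public static class HandGuidingForce
{
    // u: measured force in the end-effector (sensor) frame; w: end-effector weight;
    // rBaseToEef: rotation from base frame to end-effector frame (R^e_b).
    public static Vector3 ForceCommand(Vector3 u, float w, Matrix4x4 rBaseToEef)
    {
        Vector3 gravityInEef = Vector3.Transform(new Vector3(0, 0, w), rBaseToEef); // R^e_b [0 0 w]^T
        Vector3 fEef = u - gravityInEef;                                            // Eq. (1)
        Matrix4x4 rEefToBase = Matrix4x4.Transpose(rBaseToEef);                     // Eq. (2)
        Vector3 fBase = Vector3.Transform(fEef, rEefToBase);                        // Eq. (3)

        // Eq. (4): keep only the dominant component, one axis at a time (ties fall through to Z).
        float ax = Math.Abs(fBase.X), ay = Math.Abs(fBase.Y), az = Math.Abs(fBase.Z);
        if (ax > ay && ax > az) return new Vector3(fBase.X, 0, 0);
        if (ay > ax && ay > az) return new Vector3(0, fBase.Y, 0);
        return new Vector3(0, 0, fBase.Z);
    }
}
```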

2.2 Hand-Guiding Moment

For end-effector orientation (2nd and 3rd motion groups) the hand-guiding moment $m^e$ shall be calculated from the end-effector force/torque measurements:

$$m^e = \begin{bmatrix} \tau_x^e \\ \tau_y^e \\ \tau_z^e \end{bmatrix} - \begin{bmatrix} \mu_x^e \\ \mu_y^e \\ \mu_z^e \end{bmatrix} \quad (5)$$

where $(\tau_x^e, \tau_y^e, \tau_z^e)$ are the measurements of the three components of torque from the end-effector force/torque measurements, and $(\mu_x^e, \mu_y^e, \mu_z^e)$ are the components of the moment due to the weight of the end-effector, described in the end-effector frame:

$$\begin{bmatrix} \mu_x^e \\ \mu_y^e \\ \mu_z^e \end{bmatrix} = \begin{bmatrix} 0 & -z_c^e & y_c^e \\ z_c^e & 0 & -x_c^e \\ -y_c^e & x_c^e & 0 \end{bmatrix} R_b^e \begin{bmatrix} 0 \\ 0 \\ w \end{bmatrix} \quad (6)$$

where $(x_c^e, y_c^e, z_c^e)$ are the coordinates of the center of mass of the end-effector, described in the end-effector reference frame. In the 2nd motion group, the end-effector axis $z_{eef}$ is oriented in space (Fig. 1). For this type of motion the input to the controller is calculated from the hand-guiding moment. The input command is the vector $m_{xy}$. To calculate this vector, the vector $m_{xy}^e$ shall be calculated first; it represents the component of the hand-guiding moment in the XY plane of the end-effector frame (Fig. 2). The components of the vector $m_{xy}^e$ are described in the sensor reference frame. This vector is calculated from:

$$m_{xy}^e = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} m^e \quad (7)$$

The input command for the 2nd motion group is calculated from:

$$m_{xy} = R_e^b\, m_{xy}^e \quad (8)$$

where $m_{xy}$ represents the vector $m_{xy}^e$ after being transformed to the base frame. For the 3rd motion group, the end-effector is allowed to rotate around its axis $z_{eef}$. For this type of motion the input command to the controller is the vector $m_z$. To calculate this vector, the vector $m_z^e$ shall be calculated first; it represents the component of the hand-guiding moment along the Z direction of the end-effector frame, as shown in Fig. 2 (right). This vector is calculated from:

$$m_z^e = m^e - m_{xy}^e \quad (9)$$

The controller command for the 3rd motion group is calculated from:

$$m_z = R_e^b\, m_z^e \quad (10)$$

where $m_z$ represents the vector $m_z^e$ after being transformed to the base frame.
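Equations (5) and (6) reduce to a cross product with the centre-of-mass vector, as in the following sketch (again with System.Numerics types and illustrative names):

```csharp
using System.Numerics;

public static class HandGuidingMoment
{
    // tau: measured torques in the end-effector frame; c: centre of mass of the
    // end-effector in the same frame; w: end-effector weight; rBaseToEef: R^e_b.
    public static Vector3 Compute(Vector3 tau, Vector3 c, float w, Matrix4x4 rBaseToEef)
    {
        Vector3 gravityInEef = Vector3.Transform(new Vector3(0, 0, w), rBaseToEef);
        Vector3 mu = Vector3.Cross(c, gravityInEef);   // skew(c) * R^e_b [0 0 w]^T, Eq. (6)
        return tau - mu;                               // m^e, Eq. (5)
    }
}
```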


Fig. 2. Hand-guiding moment in sensor reference frame for 2nd motion group (at left) and for the 3rd motion group (at right)

Fig. 3. Damper-mass mechanical system

Only one of the 2nd and 3rd motion groups is allowed to be performed at a time. The motion command vector $m$ sent to the control algorithm is calculated from:

$$m = \begin{cases} m_{xy} & \text{if } \|m_{xy}\| > \|m_z\| \\ m_z & \text{if } \|m_z\| > \|m_{xy}\| \end{cases} \quad (11)$$

3 Controller

The robot is controlled at the end-effector level, so that we consider the decoupled end-effector as a mass moving under the effect of Coulomb and viscous friction. The equation of linear motion of the center of gravity of the moving mass is:

$$m\ddot{x} + b\dot{x} + f_r = f \quad (12)$$

where $\ddot{x}$ is the linear acceleration of the center of mass, $\dot{x}$ is the linear velocity of the center of mass, $m$ is the mass, $b$ is the damping coefficient, $f_r$ is the Coulomb friction, and $f$ is the external force acting on the mass. The equation of angular rotation around the center of gravity of the mass is:

$$i\ddot{\theta} + \beta\dot{\theta} + \tau_r = \tau \quad (13)$$

where $\ddot{\theta}$ is the angular acceleration around the rotation axes, $\dot{\theta}$ is the angular velocity, $i$ is the moment of inertia around the rotation axes, $\beta$ is the damping coefficient, $\tau_r$ is the torque due to Coulomb friction, and $\tau$ is the external torque. We consider that the Coulomb and viscous friction effects are much bigger than the effect of the inertia of the mass. In this context, the inertial factors from the previous equations are omitted (Fig. 4), so that equation (12) becomes:

$$b\dot{x} + f_r = f, \quad |f| > |f_r| \quad (14)$$

and:

$$\dot{x} = 0 \quad \text{otherwise} \quad (15)$$

where $|f|$ is the absolute value of the external force. Equation (13) becomes:

$$\beta\dot{\theta} + \tau_r = \tau, \quad |\tau| > |\tau_r| \quad (16)$$

and:

$$\dot{\theta} = 0 \quad \text{otherwise} \quad (17)$$

where $|\tau|$ is the absolute value of the external torque.

3.1 Robot Control

The linear motion of the end-effector is controlled by:

$$v = \begin{cases} \dfrac{f\,(\|f\| - \|f_r\|)}{b\,\|f\|} & \text{if } \|f\| > \|f_r\| \\ 0 & \text{otherwise} \end{cases} \quad (18)$$

where $v$ is the linear velocity vector of the end-effector, $f$ is the force vector command issued to the controller, $f_r$ is the sensitivity threshold (motion is only executed when the force command reaches a value above this threshold), and $b$ is the motion constant that defines the rate of conversion from force measurement to velocity. The angular motion of the end-effector is controlled by:

$$\omega = \begin{cases} \dfrac{m\,(\|m\| - \|\tau_r\|)}{\beta\,\|m\|} & \text{if } \|m\| > \|\tau_r\| \\ 0 & \text{otherwise} \end{cases} \quad (19)$$

where $\omega$ is the angular velocity vector of the end-effector, $m$ is the moment command issued to the controller, $\tau_r$ is the sensitivity threshold (motion is only valid when the moment command value is higher than this threshold), and $\beta$ is a motion constant that defines the rate of conversion from force measurement to velocity. The end-effector velocity $\dot{x}$ is:

$$\dot{x} = \begin{bmatrix} v \\ \omega \end{bmatrix} \quad (20)$$

After calculating the linear and angular velocity of the end-effector, the joint velocities are calculated using the pseudo-inverse of the Jacobian $J^{\dagger}$:

$$\dot{q} = J^{\dagger}\, \dot{x} \quad (21)$$

From the joint velocities and by using an iterative solution, the state-space vector can be calculated, which is used to control the manipulator (Fig. 4).

Fig. 4. Robot control algorithm
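A minimal sketch of the velocity laws of Eqs. (18) and (19) is shown below; the class and parameter names are assumptions, and the mapping from the resulting end-effector velocity to joint velocities through the Jacobian pseudo-inverse (Eqs. (20)–(21)) is not reproduced here.

```csharp
using System.Numerics;

public static class HandGuidingControl
{
    // b, beta: motion constants; fr, tauR: sensitivity thresholds (Sect. 3.1).
    public static Vector3 LinearVelocity(Vector3 f, float fr, float b)
    {
        float norm = f.Length();
        if (norm <= fr) return Vector3.Zero;           // below threshold: no motion
        return f * ((norm - fr) / (b * norm));         // Eq. (18)
    }

    public static Vector3 AngularVelocity(Vector3 m, float tauR, float beta)
    {
        float norm = m.Length();
        if (norm <= tauR) return Vector3.Zero;         // below threshold: no motion
        return m * ((norm - tauR) / (beta * norm));    // Eq. (19)
    }
}
```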

4 Experiments and Results

The experiments were performed on a KUKA iiwa 7 R800 robotic manipulator in an assembly operation requiring precision positioning and fine tuning. The off-the-shelf hand-guiding functionality provided by KUKA lacks the precision


that is crucial for fine and precise positioning of the end-effector in many applications. The proposed method was implemented in a control loop updated every 8.5 ms. The controller compensates automatically for the piece weight, such that when no force is applied by the operator, the piece is held in place by the robot. Different tests demonstrated that in the 1st motion group the robot position along X, Y and Z can be precisely adjusted, in the 2nd motion group the end-effector rotation around X, Y and Z is achieved, while in the 3rd motion group we can rotate around the end-effector axis with accuracy (Fig. 5). Figure 6 shows the end-effector position along the Y axis according to a given applied force. The end-effector displacement when the force is applied is clearly visible. The accuracy of the proposed hand-guiding method was measured and compared with a nominal path in the xy plane (Fig. 7). The operator moved the end-effector along the y axis (250 mm) and a maximum error of 0.09 mm was achieved along the x

Fig. 5. Precision hand-guiding for assembly operation

Fig. 6. End-effector position along y axis according to applied force


Fig. 7. Nominal end-effector path against the real robot end-effector path in plane xy

direction. In summary, it was demonstrated that the precision hand-guiding abilities of the proposed method work well and are useful in collaborative robotics.

5 Conclusion and Future Work

A novel precision hand-guiding method at the end-effector level for collaborative robots was proposed. Experimental tests demonstrated that with the proposed method it is possible to hand-guide the robot with accuracy, with no vibration, and in a natural way, by taking advantage of the proposed three motion groups. It was also demonstrated that the system is intuitive and compares favorably with the off-the-shelf KUKA hand-guiding. Future work will focus on optimizing the control by utilizing the redundancy of the KUKA iiwa to achieve better stiffness while hand-guiding, and also to avoid collisions with obstacles while hand-guiding. In addition, the control can be optimized to give feedback to the user when approaching the joint limits. Acknowledgments. This research was partially supported by Portugal 2020 project DM4Manufacturing POCI-01-0145-FEDER-016418 by UE/FEDER through the program COMPETE 2020, the European Union's Horizon 2020 research and innovation programme under grant agreement No 688807 - ColRobot project, and the Portuguese Foundation for Science and Technology (FCT) SFRH/BD/131091/2017.


References

1. Neto, P., Mendes, N.: Direct off-line robot programming via a common CAD package. Robot. Auton. Syst. 61(8), 896–910 (2013). doi:10.1016/j.robot.2013.02.005
2. Neto, P., Pereira, D., Pires, J.N., Moreira, A.P.: Real-time and continuous hand gesture spotting: an approach based on artificial neural networks. In: 2013 IEEE International Conference on Robotics and Automation (ICRA), pp. 178–183 (2013). doi:10.1109/ICRA.2013.6630573
3. Simao, M.A., Neto, P., Gibaru, O.: Unsupervised gesture segmentation by motion detection of a real-time data stream. IEEE Trans. Ind. Inform. 13(2), 473–481 (2017). doi:10.1109/TII.2016.2613683
4. Haddadin, S., Albu-Schaffer, A., Hirzinger, G.: Requirements for safe robots: measurements, analysis and new insights. Int. J. Robot. Res. 28(11/12), 1507–1527 (2009). doi:10.1177/0278364909343970
5. Balasubramanian, R., Xu, L., Brook, P.D., Smith, J.R., Matsuoka, Y.: Physical human interactive guidance: identifying grasping principles from human-planned grasps. IEEE Trans. Robot. 28(4), 899–910 (2012). doi:10.1109/TRO.2012.2189498
6. Whitsell, B., Artemiadis, P.: Physical Human-Robot Interaction (pHRI) in 6 DOF with asymmetric cooperation. IEEE Access 5, 10834–10845 (2017). doi:10.1109/ACCESS.2017.2708658
7. Fujii, M., Murakami, H., Sonehara, M.: Study on application of a human-robot collaborative system using hand-guiding in a production line. IHI Eng. Rev. 49(1), 24–29 (2016)
8. Lee, S.D., Ahn, K.H., Song, J.B.: Torque control based sensorless hand guiding for direct robot teaching. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 745–750 (2016). doi:10.1109/IROS.2016.7759135
9. Ficuciello, F., Villani, L., Siciliano, B.: Variable impedance control of redundant manipulators for intuitive human robot physical interaction. IEEE Trans. Robot. 31(4), 850–863 (2015). doi:10.1109/TRO.2015.2430053
10. Kosuge, K., Yoshida, H., Fukuda, T.: Dynamic control for robot-human collaboration. In: Proceedings of the 2nd IEEE International Workshop on Robot and Human Communication, pp. 398–401 (1993). doi:10.1109/ROMAN.1993.367685
11. Geravand, M., Flacco, F., De Luca, A.: Human-robot physical interaction and collaboration using an industrial robot with a closed control architecture. In: 2013 IEEE International Conference on Robotics and Automation, pp. 4000–4007 (2013). doi:10.1109/ICRA.2013.6631141
12. Hanses, M., Behrens, R., Elkmann, N.: Hand-guiding robots along predefined geometric paths under hard joint constraints. In: 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1–5 (2016). doi:10.1109/ETFA.2016.7733600

Integrating 3D Reconstruction and Virtual Reality: A New Approach for Immersive Teleoperation

Francisco Navarro, Javier Fdez, Mario Garzón(B), Juan Jesús Roldán, and Antonio Barrientos

Centro de Automática y Robótica, UPM-CSIC, Calle José Gutiérrez Abascal, 2, 28006 Madrid, Spain
{francisco.navarro.merino,javier.ferfernandez}@alumnos.upm.es, {ma.garzon,jj.roldan,antonio.barrientos}@upm.es

Abstract. The current state of technology permits very accurate 3D reconstructions of real scenes, acquiring information through quite different sensors. High precision modelling that allows simulating any element of the environment on virtual interfaces has also been achieved. This paper illustrates a methodology to correctly model a 3D reconstructed scene, with either an RGB-D camera or a laser, and to integrate and display it in virtual reality environments based on Unity, as well as a comparison between both results. The main interest of this line of research is the automation of the whole process, from the map generation to its visualisation with the VR glasses, although this first approach only managed to obtain results using several programs manually. The long-term objective is a real-time immersion in Unity, interacting with the scene seen by the camera.

Keywords: Robotics · 3D reconstruction · Virtual reality · Immersive teleoperation

1 Introduction

The promise of Virtual Reality (VR) lies mainly in its ability to transport a user into a totally different world and make him feel that he is immersed in it. This immersion generally implies a total separation from the real world, meaning that the virtual environment rarely, if ever, represents any real-world location. In contrast with this, several 3D reconstruction techniques are being developed in order to create models of the real world and, in general terms, those reconstructions rarely include fictional or virtual elements. The motivation of this work is to combine the benefits of both techniques and, therefore, create a VR world from a reconstruction of a real location. With this solution, it will be possible to immerse a user in a virtual representation of a real location. For example, it would be possible to see and walk in a remote location and, if this technology is combined with teleoperation of mobile manipulators, actually interact with an inaccessible, dangerous or remote location.

1.1 Objectives

In order to achieve the goal of integrating 3D reconstructions into virtual worlds, the following partial objectives have been defined:
– Reconstruct 3D maps of a location using two different devices and techniques.
– Create a mesh of the reconstruction.
– Contrast the real environment with the virtual maps.
– Integrate the reconstruction in a VR interface.

1.2 Related Works

Nowadays, there are three different immersive technologies. First of all, augmented reality (AR), which overlaps virtual elements on videos or images of real scenes. Secondly, virtual reality (VR), which is based on virtual models of real or imaginary scenarios. The third technology is a combination of the first two, known as mixed reality (MR). Virtual reality is the option on which this paper has focused. On the other hand, in robotics and other fields like medicine [1], 3D reconstruction is used as a process of capturing the shape and appearance of real objects. There are several devices, such as a lidar, RGB-D cameras or sonars, that can perform this function. In addition, there are also several programs and libraries that filter and reconstruct the point clouds that those devices generate. Both 3D reconstruction and VR have been subjects of study for many years and there are hundreds of papers published on these topics separately. Nevertheless, the number of publications that connect both subjects is very low. Some of the existing research works are listed next. A successful integration of 3D reconstructions of antique objects in a virtual museum, using a 3D scanner and a high-resolution camera, was accomplished by the University of Calabria [2]. Additionally, Stanford correctly managed to import a mesh model of a reconstructed scene in Unity using the Kinect One [3]. Lastly, York University succeeded at modelling a 2D mapped trajectory, using a laser along with wheel odometry, and displaying it in Unity using a ROS connection based on Rosbridge [4].

1.3 Overview of the System

Figure 1 illustrates a general view of the process and the main devices and programs used along with this research. The environment information is acquired by two different devices and it is processed using diverse software tools in order to obtain a mesh of the scene. Then, these meshes are imported into Unity and visualised with the virtual reality glasses.


Fig. 1. Overview. This figure illustrates a general view of the process and the main devices and programs used along with this research.

1.4 Main Contributions

This research encompasses different ideas from previous works, but it also proposes several novel contributions, listed next, that distinguish this research from previous ones:
– It includes an interface and recreates a real world.
– It compares two different 3D reconstructions and their results.
– It uses ROS, allowing this same method to be used in real time just by creating a socket between ROS and Unity.

2 3D Map Generation

This section presents the necessary steps to accomplish a 3D representation of a real-world scenario using two different methods. The first one processes the information of an RGB-D camera in real time, while the second one creates the reconstruction after the data acquisition from a 2D laser scanner.

2.1 Real-Time 3D Reconstruction

This technique uses a Microsoft Kinect One (v2) RGB-D camera in order to acquire enough data from the environment. This device presents an RGB camera to get colour information, as well as a depth sensor, with an effective range of 40–450 cm, to obtain distance estimation from the scene. Once the environment data is available, colour and depth information are processed using the RTAB-Map software [5], which uses the OpenCV, Qt and PCL libraries. One of the main advantages of this method is the creation of a real-time point cloud, mapping the scene as the camera moves through the scenario. Figure 2 shows the final point cloud after the desired scene is completely mapped by manipulating this tool and the camera. Additionally, the odometry of the camera has been overlapped in the figure for the purpose of illustrating the mapping trajectory.


Fig. 2. Kinect Point Cloud. This figure shows the final point cloud after the desired scene is completely mapped manipulating this tool and the camera.

2.2 Off-line 3D Reconstruction

The second technique is based on the use of a Hokuyo 2D laser scanner. This sensor measures the distance to a target by illuminating it with a pulsed laser light and measuring the reflected pulses with a sensor. Differences in laser return phases and wavelengths can be used to make digital 3D representations of the target. Unlike the RGB-D camera, the map reconstruction is not done in real time. Since the Lidar can only provide distance measurements in a plane, a rotating mechanism was designed; it rotates the scanner on its x axis in order to get 3D information. The data is recorded and stored in a rosbag. Then, it is transformed to point clouds and stored in PCD files. Those point clouds can be grouped into one large 3D reconstruction using the libpointmatcher library [6], which uses an Iterative Closest Point (ICP) algorithm to estimate the correct transformation

Fig. 3. Lidar Point Cloud. This figure shows a view of the reconstructed scene with the Lidar.


between one point cloud and the next. Finally, the result is visualized, filtered and post-processed with Paraview software [7], as shown in Fig. 3.

3 Mesh Creation

Since the Unity VR engine does not support point clouds or PCD formats, it is necessary to use a format that allows a correct integration. The mesh format is the option selected in this work; a mesh is a collection of vertices, edges and faces that defines a 3D model. In this research, a polygonal mesh, whose faces are triangles, will be used. The mesh format was chosen not only because it is compatible with Unity, but also due to the fact that a solid 3D mesh contains more useful information than a point cloud. In short, a discrete 3D reconstruction is converted into a continuous 3D model. Before creating the mesh, a post-processing phase is applied to the map. It consists of applying noise reduction and smoothing filters in order to obtain a more realistic reconstruction. The filtering can be applied to both on-line and off-line maps, and it is based on the use of the Point Cloud Library [8]. Regarding the camera reconstruction, the obtained point cloud presents several local and global flaws, which result in a non-consistent 3D map. Firstly, to reduce these defects, the RANSAC algorithm is applied in order to reject outliers. Once the noisy points are removed, more loop closures are detected along with the Iterative Closest Point (ICP) algorithm. This step increases the correct identification of features, avoiding mapping an already visited area twice, as well as improving the alignment of object edges. Figure 4 presents the improved point cloud with fairly corrected edges and objects. The main parameters used for the organised meshing are the size and the angle tolerance of the faces, which are, in this case, triangles. The angle tolerance was set to 15° for checking whether or not an edge is occluded, whereas the faces

Fig. 4. Post-processed map. Point cloud with corrected edges and objects.


size was 2 pixels. It should be pointed out that smaller triangles mean a higher resolution of the resulting mesh. One of the main concerns during the meshing is the inclusion of textures, this step being essential for a correct colour display in Unity. To accomplish a complete integration, the triangulated point cloud, along with the cameras used, is exported from RTAB-Map to MeshLab [9] for further post-processing, adding the corresponding textures. The final camera mesh is shown in Fig. 5.

Fig. 5. Resulting Mesh. Final mesh from the RGB-D camera map.

On the other hand, the Lidar mesh creation consists of two steps. First, a filter is applied to the point cloud, which reduces the number of points. Then the 3D triangulation is performed, which is sped up by the previous filter. In the first step, the libpointmatcher library [6] was used, which is a modular library that implements the ICP algorithm for aligning point clouds. Additionally, the Voxel Grid Filter was used, which down-samples the data by taking a spatial average of the points in the cloud. The sub-sampling rate is adjusted by setting the voxel size along each dimension. The set of points which lie within the bounds of a voxel are assigned to that voxel and will be combined into one output point. There are two options to represent the distribution of points in a voxel by a single point. In the first, the centroid or spatial average of the point distribution is chosen. In the second, the geometrical centre of the voxel is taken. For this point cloud, the first option was chosen for its higher accuracy (see the sketch after Fig. 6). Figure 6a shows the model after the triangulation. The triangulation has been done with the Point Cloud Library (PCL). This library includes a function called Greedy Projection Triangulation that creates the Polygon File Format document. After the triangulation, the files are converted with MeshLab into DAE or OBJ format so that the model can be imported into Unity (as it only accepts certain formats). Figure 6 presents the triangulation and the mesh that has been finally imported into Unity.


Fig. 6. Lidar mesh creation. (a) shows the model after the triangulation. (b) presents the mesh that has been finally imported into Unity.
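The voxel-grid down-sampling described above (centroid variant) can be sketched generically as follows. This is not PCL code (PCL is a C++ library); the snippet only illustrates the idea of grouping points per voxel and replacing them with their spatial average.

```csharp
using System.Collections.Generic;
using System.Numerics;

public static class VoxelGrid
{
    public static List<Vector3> Downsample(IEnumerable<Vector3> cloud, float voxelSize)
    {
        // Accumulate the sum and count of the points falling into each voxel.
        var bins = new Dictionary<(int, int, int), (Vector3 Sum, int N)>();
        foreach (var p in cloud)
        {
            var key = ((int)System.MathF.Floor(p.X / voxelSize),
                       (int)System.MathF.Floor(p.Y / voxelSize),
                       (int)System.MathF.Floor(p.Z / voxelSize));
            bins[key] = bins.TryGetValue(key, out var b) ? (b.Sum + p, b.N + 1) : (p, 1);
        }
        // One output point per voxel: the centroid (spatial average) of its points.
        var result = new List<Vector3>();
        foreach (var b in bins.Values)
            result.Add(b.Sum / b.N);
        return result;
    }
}
```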

3.1 Integration with Unity

For the development of the virtual reality interface, the program selected was Unity and, in order to visualise the 3D reconstruction, the HTC Vive glasses and the SteamVR program were used. Unity is a cross-platform game engine which is primarily used to develop games and simulations for computers, consoles and mobile devices. The HTC Vive package includes an HTC Vive headset, two controllers with buttons and two base stations that can locate objects inside the work area. Lastly, SteamVR is a complement for Unity required to compile and run programs in virtual reality. In order to see the meshes in Unity, it is necessary to have a mesh and its textures. These textures are necessary in order to visualise the mesh with colours. Once the model is imported into Unity, it is possible to create scripts and interact with it as can be done with other objects. The scale and localization of the 3D map can be modified in order to better reflect the real world, or to allow a better immersion. Also, new elements, real or virtual, can be added to the scene, and it is possible to interact with them using the SteamVR capabilities.
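A minimal example of the kind of Unity script mentioned above is sketched below; the class name, fields and default values are placeholders chosen for illustration.

```csharp
using UnityEngine;

// Adjusts the pose and scale of the imported reconstruction so that it better matches
// the real room (or the desired immersion); values are set manually in the Inspector.
public class ReconstructionAligner : MonoBehaviour
{
    public Transform reconstruction;                 // the imported mesh (DAE/OBJ)
    public Vector3 offset = Vector3.zero;            // manual translation correction
    public Vector3 eulerCorrection = Vector3.zero;   // manual rotation correction (degrees)
    public float scale = 1.0f;                       // uniform scale factor

    void Start()
    {
        reconstruction.localScale = Vector3.one * scale;
        reconstruction.SetPositionAndRotation(offset, Quaternion.Euler(eulerCorrection));
    }
}
```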

4 Experiments and Results

This section presents the experiments developed in order to test the proposed system. Two different scenarios, one for each of the mapping techniques, have been used. Their description and the results of the experiments are described next.

4.1 Test Scenarios

The first scenario, where the RGB-D camera was used, is an indoor laboratory of the Department of Automatics at the Technical University of Madrid (UPM). This room is equipped with sensors for VR, which makes it interesting for an immersion in the same place as the reconstruction. In addition, this immersion could ensure an accurate validation of the mapped dimensions and distances of the room, to assess how faithful the reconstruction is and its quality. The second scenario, used for the laser reconstruction, took place in an olive field located in the Spanish province of Toledo. Unlike the previous case, this experiment was done in an outdoor environment, so that it was possible to get other results and compare them. It is important to mention that the test was done by rotating a base connected to the Lidar in order to be able to reconstruct 360°.

4.2 Analysis and Results

One of the drawbacks of the Kinect RGB-D mapping is its inability to reliably operate outdoors; the reason for this is that the sensor cannot deal with direct and intense lighting. This factor limited the research and comparison between both methods, forcing the camera reconstruction to take place indoors. Focusing on the final mesh obtained, which is shown in Fig. 7b, compared to the real world, shown in Fig. 7a, the global outcome is considered satisfactory. The vast majority of its objects are easily identified, and they have adequate shapes and sizes. However, although the overall result of this RTAB-Map reconstruction is very good and usable, it presents some local defects that occurred during the mapping stage. For example, most objects on the table are not correctly reconstructed because they were not in the camera's field of view. Moving on to the Unity integration and immersion, as a consequence of reconstructing the same room containing the VR equipment, the process of calibrating and validating the mesh was straightforward. In the second scenario, different results and conclusions were found. First of all, as shown in Fig. 8, there is an area in the middle of the point cloud that does not have any points. That is caused by the structure that rotates the lidar, which cannot reach the full range and consequently leaves an area without information. Furthermore, even though this is not relevant from a visual point of view, it causes many problems when creating the mesh reconstruction, because of the discontinuities found.


Fig. 7. Kinect 3D reconstruction in Unity. (a) presents the real room and one of the experiments. (b) illustrates the view of each eye of the virtual glasses.

Another important result of the use of the lidar is that the quality of the virtual map is worse than the one created with the other tool. This results from the fact that the point cloud generated by the lidar does not have colour. Therefore, it is not possible to import any textures into the virtual interface. However, the main advantage is that it is able to reconstruct outdoor worlds, which makes this tool really useful. Figure 8 displays the visualization of the mesh reconstructed from the Lidar. Finally, it is important to mention that there are two main difficulties in the development of the complete process. Firstly, the generation of the textures in MeshLab and the point cloud triangulation are very time-consuming. Secondly, a large part of the process is manual, which could hamper the future automation of the method.


Fig. 8. Lidar 3D reconstruction in Unity. This figure displays the visualization of the mesh reconstructed from the Lidar.

4.3 Comparison

The main differences observed objectively between both methods depend heavily on two different items: the sensor used and the scene mapped. Regarding the sensor, the results with the camera are of a higher quality due to the acquisition of colour information, along with the infrared depth sensor. However, the laser is limited in this kind of reconstruction, having to rotate to correctly map the environment. Lastly, the field of view of the laser is much longer and wider, with a range of 30 m and a field of view of more than 180°. Regarding the scene reconstructed, the real-time mapping was done indoors while the post-created reconstruction was achieved outdoors. Here, the longer range of the laser scanner, and its ability to handle sunlight and other exterior conditions, make it a better solution for the reconstruction of the scene, since the RGB-D camera suffers in adverse outdoor conditions.

5 Conclusions

This work correctly integrated 3D reconstructions into virtual worlds. For this purpose, two maps were reconstructed using two different devices: an RGB-D camera and a Lidar. Both of them worked in a different way, since the first one managed to reconstruct an indoor environment with colour, whereas the second could not. However, the laser worked satisfactorily in outdoor scenes. Regarding the mesh creation, it turned out to be an adequate approach, not only because it provides a valid format for the integration into Unity, but also because it improves the overall map, resulting in a more accurate, continuous and faithful 3D representation.


Another necessary task was to contrast the virtual and the real world, because it was essential to know if it was possible to identify the elements from the virtual sight. Fortunately, even if there were parts that were not positioned exactly in their intended place, the virtual map allowed us to distinguish and recognise the objects. It is important to mention that this occurred in both maps. Finally, future works will continue this line of research, trying to improve the reconstruction of the map and the performance of the process. Furthermore, the next efforts will be focused on the reconstruction and integration in Unity in real time, since the long-term objective is to be able to reconstruct an environment while the robot is tracking a path, allowing the user to see the elements around the robot and interact with them from a distance thanks to virtual reality. Acknowledgments. This work was partially supported by the Robotics and Cybernetics Group at Universidad Politécnica de Madrid (Spain), and it was funded under the projects: PRIC (Protección Robotizada de Infraestructuras Críticas; DPI2014-56985-R), sponsored by the Spanish Ministry of Economy and Competitiveness, and RoboCity2030-III-CM (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos, fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and cofunded by Structural Funds of the EU.

References

1. Cong, V., Linh, H.: 3D medical image reconstruction. Biomedical Engineering Department, Faculty of Applied Science, HCMC University of Technology, pp. 1–5 (2002)
2. Bruno, F., Bruno, S., De Sensi, G., Luchi, M., Mancuso, S., Muzzupappa, M.: From 3D reconstruction to virtual reality: a complete methodology for digital archaeological exhibition. J. Cult. Heritage 11, 42–49 (2010)
3. Cazamias, J., Raj, A.: Virtualized reality using depth camera point clouds. Stanford EE 267: Virtual Reality, Course Report (2016)
4. Codd-Downey, R., Forooshani, P., Speers, A., Wang, H., Jenkin, M.: From ROS to Unity: leveraging robot and virtual environment middleware for immersive teleoperation. In: 2014 IEEE International Conference on Information and Automation, ICIA 2014, pp. 932–936 (2014)
5. Labbé, M., Michaud, F.: Online global loop closure detection for large-scale multi-session graph-based SLAM. In: IEEE International Conference on Intelligent Robots and Systems, pp. 2661–2666 (2014)
6. Pomerleau, F., Colas, F., Siegwart, R., Magnenat, S.: Comparing ICP variants on real-world data sets. Auton. Robot. 34(3), 133–148 (2013)
7. Ahrens, J., Geveci, B., Law, C.: ParaView: an end-user tool for large-data visualization. In: Hansen, C.D., Johnson, C.R. (eds.) Visualization Handbook, pp. 717–731. Butterworth-Heinemann, Burlington (2005)
8. Rusu, R., Cousins, S.: 3D is here: Point Cloud Library. In: IEEE International Conference on Robotics and Automation, pp. 1–4 (2011)
9. Cignoni, P., Callieri, M., Corsini, M., Dellepiane, M., Ganovelli, F., Ranzuglia, G.: MeshLab: an open-source mesh processing tool. In: Sixth Eurographics Italian Chapter Conference, pp. 129–136 (2008)

Enhancement of Industrial Logistic Systems with Semantic 3D Representations for Mobile Manipulators

César Toscano1, Rafael Arrais1(B), and Germano Veiga1,2

1 INESC TEC, R. Dr. Roberto Frias, 4200-465 Porto, Portugal
{ctoscano,rafael.l.arrais,germano.veiga}@inesctec.pt
2 Faculty of Engineering, University of Porto, R. Dr. Roberto Frias, 4200-465 Porto, Portugal

Abstract. This paper proposes a logistic planner with supplementary 3D spatial representations to enhance and interact with traditional logistic systems in the context of mobile manipulators performing internal logistics operations. By defining a hierarchical structure, the logistic world model, as the central entity synchronized between multiple system components, the reliability and accuracy of the logistic system are strengthened. The proposed approach aims at implementing a robust and intuitive solution for the set-up of mobile manipulator based logistic systems. The logistic planner includes a web-based interface for fast set-up of the warehouse layout based on robot sensing, as well as for the definition of missions for the fleet of robotic systems.
Keywords: Mobile manipulators · Cyber-physical systems · World model · Logistic system · Service-oriented architectures · Vertical integration

1

Introduction

With the advent of flexible robots, able to sense and interact with increasingly complex and unstructured industrial environments, there is a crucial need to correctly identify, model and organize all their physical objects and functional components, in order to successfully support robotic operations. Vertical integration between Enterprise Information Systems (EIS) (e.g. a Manufacturing Execution System (MES)) and robotic systems operating in complex industrial environments presents great development challenges. One example is the difficulty of reliably modelling and disseminating accurate information on the pose of objects and physical components in the environment. To support robot operation and to facilitate vertical integration, there is an implicit need to conceive a dynamic and scalable structure to cope with the necessity to model spatial properties of all physical objects and functional


components in a complex industrial environment. This structure can be seen as an enhancement of traditional logistic systems, in the sense that it combines semantic data with geometric information updated in real time. This paper proposes a dynamic hierarchical structure, the logistic world model, to augment traditional logistic data with 3D spatial information. Thus, the logistic world model holds the semantic data, spatial location, geometric dimensions, and internal structure of every physical object present in the environment. This dynamic structure provides support for the operation of mobile manipulators and acts as the principal information provider to the planning, execution and monitoring features of the logistic system. The logistic world model structure exists and is maintained within a central node, the logistic planner, responsible for orchestrating operations on the robotic system and providing interfaces to handle software component interactions. The logistic planner also manages integration with the robotic fleet and the EIS or any other information management system supporting the production system (e.g. MES). The communication interface between components was developed with a Service Oriented Architecture [1] in mind. As such, this approach makes the functionality implemented by each entity accessible through a service interface. This work was driven forward by the Sustainable and Reliable Robotics for Part Handling in Manufacturing Automation (STAMINA) project, funded by the European Union's Seventh Framework Programme for research, technological development and demonstration. The project aimed at developing a mobile manipulator to perform picking operations in a logistic supermarket located in a car manufacturing plant. The purpose of this paper is twofold: on one hand, it provides a comprehensive definition and analysis of the high-level architecture of the system, focusing on the logistic planner component, which is the orchestration node and central hub of communication between the different components of the system; on the other hand, the logistic world model structure is presented as a tool for synchronizing data between different system components. The remainder of the paper is organized as follows: Sect. 2 describes related work, Sect. 3 presents an overview of the high-level software architecture of the system, Sect. 4 describes the logistic planner component, Sect. 5 details the proposed logistic world model, its creation and usability, and Sect. 6 draws the conclusions.

2

Related Work

The usage of mobile manipulators in industrial applications has increased in recent years [2]. Nevertheless, there are some barriers delaying the adoption of the technology. Due to safety concerns, the cost of the solutions, and the lack of flexibility of current systems to adapt to production changeovers and layout reconfigurations in a time-efficient manner, it is difficult to achieve the performance needed to justify the adoption of mobile manipulators in many industrial applications [3]. Moreover, the vertical integration of robotic systems with EIS


is still a research challenge [4]. In this context, a model of the environment specifying the semantics and geometric form of the physical objects that the robot may interact with seems to be a flexible and promising approach, not only to control its behaviour but also to establish an integration with an EIS managing a given industrial environment. In the context of representing 3D spatial data for robotics, the Simulation Description Format (SDF), an XML-based specification created by the Open Source Robotics Foundation, allows the definition of models of objects and environments so as to support the simulation of robotic systems. SDF models a physical environment as a set of world elements, describing scenes through a set of properties addressing elements such as the ambient light, the sky, shadows, and fog. A physical object is modelled as a hierarchical set of elements having a position and orientation relative to a specified frame. However, the geometric form of the object is not addressed. In a logistic supermarket, the key need is not only to specify the location of each physical object but also its internal structure, so that one can relate objects, constitute aggregates (e.g., a shelf has several layers where small boxes are stored) and help robots achieve their goals in an easier and quicker way (e.g. reaching a small box contained inside a shelf). Also, in its present form, SDF is oriented towards supporting the simulation of the behaviour of a robotic system as a set of rigid bodies connected by joints, and not exactly towards supporting its overall behaviour in a given physical environment.

3

High Level System Architecture

A service-oriented approach [5] was used to design and implement the STAMINA logistic system. The information flow between components was designed to be defined by the corresponding exposed services, whose functionality is defined through the service's Application Programming Interface (API). Services were designed to be stateless, with the objective of maximizing the loose coupling of the logistic components [6]. Figure 1(a) identifies the components that comprise the STAMINA system and the corresponding information flow. On the top level, the MES organizes factory data into object categories (possible types of physical things present in the logistic area) and kitting orders, providing them to the logistic planner whenever they are created and/or changed. A crucial function of the logistic planner is to act as the major information provider to the planning and execution functions performed by the mission planner and the components inside each robot, through the logistic world model. The logistic planner will be analysed in more detail in Sect. 4, and the logistic world model will be presented in Sect. 5. The mission planner [7] interacts exclusively with the logistic planner and is responsible for generating mission assignments for the individual robots in the fleet, taking as major inputs the current state of the logistic system (stated in the logistic world model) and a set of orders requesting the construction of kits composed of several parts. These kits (kitting boxes) will later be transported to the automotive assembly line. Parts can be stored in small boxes, which are


Fig. 1. STAMINA architecture: (a) high level overview of the logistic system, highlighting main components and information flow (colors are used to differentiate between the two main categories of information elements, World Model and Order Manager). (b) STAMINA components as ROS nodes (UML Deployment diagram).

placed on shelves, and also in large boxes (pallet style), which are placed on the ground of the logistic area. According to the assignment, missions (which may include the construction of one or two kits) are subsequently delivered to individual robots in the fleet. Each robotic system in the fleet is composed of an on-board task manager, a skills manager and a world model consistency checker software component. Upon the reception of a mission from the logistic planner, the task manager [8] component identifies and sequences the tasks that will be needed to fulfil the mission, considering the skills [9] available on the robot. This is handled by a novel multi-agent planning algorithm, invoked by general-purpose automated planning techniques. The execution of the skills is coordinated by the skills manager. The task manager is also responsible for keeping track of the robot's progress on its mission and for sending progress updates to the logistic planner. The skills manager performs and monitors the execution of skills on the robot. A skill is composed of three stages [10,11]. In the precondition stage, checks are performed to ensure that the conditions necessary for carrying out the requested skill have been satisfied. The execution stage performs the action related to the skill. Finally, a post-condition stage verifies whether the requested skill achieved its desired outcome successfully. The results of the assessment stages are reported to the task manager, which can trigger re-planning, in case of a failure, or request the execution of the following skill, in case of a success [9]. The world model consistency checker [12] is able to perform a real-time analysis of the reliability of the information stored in the logistic world model (which is provided to the task manager as the mission's context). This assessment is performed by comparing the reality sensed by the robotic fleet with the corresponding


virtual representation. The objective of this component is to check for inconsistencies in the logistic world model and either perform a correction on the stored information or report the detected situation to the operator responsible for the logistic supermarket.
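As an illustration of the three-stage skill lifecycle described above, the following is a minimal, hedged sketch in Python. The class, method and robot-interface names (PickSkill, detect_part, pick, is_holding) are hypothetical and are not part of the STAMINA code base; the sketch only shows the precondition/execution/post-condition pattern and how its outcome could be reported back to the task manager.

# Minimal sketch of a skill with precondition, execution and post-condition
# stages. All names are hypothetical, for illustration only.
class PickSkill:
    def __init__(self, robot, part_id, source_container):
        self.robot = robot
        self.part_id = part_id
        self.source = source_container

    def precondition(self):
        # Check that the conditions required by the skill hold,
        # e.g. the part is actually detected in the source container.
        return self.robot.detect_part(self.part_id, self.source)

    def execute(self):
        # Perform the action associated with the skill.
        self.robot.pick(self.part_id, self.source)

    def postcondition(self):
        # Verify that the skill achieved its desired outcome.
        return self.robot.is_holding(self.part_id)


def run_skill(skill):
    # Skills-manager style loop: the result is reported to the task manager,
    # which can trigger re-planning on failure or continue on success.
    if not skill.precondition():
        return "failure"
    skill.execute()
    return "success" if skill.postcondition() else "failure"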

4

Logistic Planner

The logistic planner is the coordinator node and the communications hub between the different sub-systems, responsible for managing robotic operations and providing service-oriented interfaces for integrating the different sub-system components. The management of the lifecycle of a kitting order, namely its creation, planning, assignment, execution and monitoring, is addressed by the order manager, a subcomponent within the logistic planner. At the level of vertical integration, the logistic planner interacts with the EIS [13] by providing the integration mechanisms to extract and handle data from its MES and complementary systems, which manage information regarding the logistic supermarket. Converting between the higher-level planning functions defined in Enterprise Resource Planning (ERP) or MES-type systems and the logistic system is a main goal of the logistic planner, as it shields the remaining sub-system components from integration-related issues. In order to provide a more suitable experience to the responsible operator, the logistic planner was implemented as an autonomous software component, running as a server on top of a Java virtual machine. The logistic planner provides a JavaScript and HTML5 user interface on standard Internet browsers. Besides mediating between the functions performed by the remaining components in the system and integrating the logistic activities with the EIS, managing the logistic world model is another feature of the logistic planner. The logistic world model serves the purpose of providing the robotic fleet with data about the working environment (i.e., the logistic area). As such, information from the high-level systems is extracted, processed and organized in such a fashion that it is understandable by the logistic planner and, consequently, can be used to create the logistic world model. Conversely, the semantics and format of the data are converted from the logistic planner back to the EIS in order to report actions performed or information sensed by the robotic fleet in the logistic supermarket. The data repository generated and accessed by the logistic planner, including the logistic world model, is provided by MongoDB, a document-oriented Database Management System (DBMS). This DBMS was chosen for its ability to keep data in documents organized as hierarchies, since the data in the logistic world model is hierarchically organized. Additionally, this database technology offers high degrees of performance and scalability [14]. Ubuntu 14.04.1 constitutes the operating system hosting the remaining components, such as the mission planner, the task manager and the world model consistency checker. All of these components are available as Robot Operating System (ROS) nodes (ROS Indigo release) and are programmed in the C++ language. In this instance of the STAMINA architecture (see Fig. 1(b)), the services correspond to ROS services and topics. Access to/from the logistic planner is


made through the ROS Bridge [15]. The ROS Bridge is a ROS node that facilitates communications from the ROS world (mission planner, task manager, skills manager) to other entities running in different computational environments. This bidirectional communication channel is implemented as an HTML5 WebSocket. Thus, the logistic planner communicates with all remaining STAMINA components through the ROS Bridge. Service invocations made by the logistic planner are sent to the ROS Bridge and from there to the ROS node running the targeted service (for example, to the one running the mission planner). The corresponding response from the service is sent to the ROS Bridge and then to the logistic planner. Conversely, a proxy of the services provided by the logistic planner is available at the ROS Bridge, so that an invocation of this proxy service will be translated into the invocation of the real service available at the logistic planner. In terms of communication, the internal transactions between the logistic planner and the ROS Bridge are hidden: the ROS nodes available in the network are not aware of the presence of the ROS Bridge. Additionally, the exchanged ROS messages are translated by the ROS Bridge into the JSON format. The logistic planner sends and receives JSON messages, and the ROS Bridge translates these messages into a ROS-specific format. Figure 2 identifies how the logistic planner is organized in terms of software implementation. Communication with the logistic planner's server element is achieved through the HTTP protocol by invoking RESTful web services using the JSON data format. The invocation of these web services, done both by the EIS and by the user interface running on an Internet browser, triggers the activation of Controllers and Actions inside the server, which validate the data input and output and map JSON data into Java objects. The Play framework, a high-productivity Java and Scala web application framework integrating the components and APIs currently needed for developing modern web applications, is used to implement the reactive part of the server.
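As a concrete illustration of the JSON traffic through the ROS Bridge mentioned above, the sketch below shows the kind of message a non-ROS client such as the logistic planner could push through the rosbridge WebSocket. The service name, arguments and correlation id are hypothetical; the "op"/"service"/"args" fields follow the general rosbridge JSON protocol, and the example uses the Python websocket-client package purely for illustration.

import json
import websocket  # pip install websocket-client

# Connect to the rosbridge WebSocket (9090 is its usual default port).
ws = websocket.create_connection("ws://robot-pc:9090")

# Hypothetical service call: ask the mission planner to plan two kitting orders.
request = {
    "op": "call_service",
    "id": "kitting-batch-42",
    "service": "/mission_planner/plan_missions",
    "args": {"kitting_order_ids": [101, 102]},
}
ws.send(json.dumps(request))

# rosbridge translates the ROS response back into JSON.
response = json.loads(ws.recv())
print(response)
ws.close()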

Fig. 2. Software implementation organization within the logistic planner.

At a lower level, the required functions are encapsulated as Services, and Spring Data provides the mapping of Java objects into database objects. Although the usual DBMSs, such as Microsoft SQL Server, Oracle and Sybase, are supported, MongoDB is the one selected. The Object Model specifies all the Java objects and relationships corresponding to project-specific structures, such


as kitting orders, missions, and the elements that comprise the logistic world model. Besides handling RESTful requests coming from either the user interface or the EIS, the logistic planner needs to have active threads supporting the interactions with the mission planner and the task managers without any human intervention. This mechanism is implemented by AKKA Actors [16], objects that encapsulate state and behavior, that communicate exclusively by exchanging messages, and that have their own light-weight thread. In practice, AKKA Actors are used within the logistic planner to control the planning of a set of kitting orders by the mission planner, their assignment to the task manager, and the execution, monitoring and other minor functions.
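To make the hierarchical persistence concrete, the following hedged sketch shows how one world-model entry could be stored in MongoDB as a nested document. It is written with Python and pymongo rather than the Java/Spring Data stack described above, and all collection and field names are assumptions made for the example.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["logistic_planner"]          # hypothetical database name

# A shelf stored as a single document; containment is expressed by nesting.
shelf_document = {
    "type": "shelf",
    "id": "shelf-07",
    "pose": {"translation": [12.4, 3.1, 0.0], "rpy": [0.0, 0.0, 1.57]},
    "dimensions": {"width": 1.2, "depth": 0.6, "height": 2.0},
    "children": [
        {
            "type": "level",
            "id": "shelf-07/level-1",
            "children": [
                {"type": "small_box", "id": "box-0042", "part": "bolt-M8"},
            ],
        },
    ],
}
db.world_model.insert_one(shelf_document)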

5

Logistic World Model

As the planning and execution procedures imply direct actions or assessments on physical objects or functional components located in the environment, it is crucial to store information on all the entities located in the logistic supermarket. The type of information exchanged between system components mainly concerns the physical objects located in the logistic supermarket, such as containers (shelves, small/large boxes), parts, packaging elements, kits, conveyors and robots. The geometry, internal organization and spatial location of every physical object in the logistic supermarket are modelled in the logistic world model with the objective of providing this information to the planning and execution components. The logistic world model is modelled through a hierarchical graph, where each object is represented by a node and the link between two nodes refers to a containment relationship between two physical things. Figure 3a depicts the organization of the logistic world model and the types of nodes it may contain. The root node represents the entire logistic supermarket. This is decomposed into specific regions, represented by the child nodes underneath the root node, where each region is further decomposed into different instances of physical objects such as shelves and large boxes. This decomposition of objects continues up to the level that is appropriate to enable the operation of robots in the environment. Since the world model aims to accommodate a multi-robot system, Fig. 3b shows how the objects in the logistic area are related to the reference robot's referential and to the other robots in the fleet. Each node in the logistic world model contains information specifying the corresponding object's position, orientation and geometry, in the form of a bounding volume. For example, below a specific region node there might be nodes modelling kits (containers decomposed into compartments), large boxes (containers organized in layers that contain parts), and shelves (objects decomposed into levels which contain cells where small boxes can be located). The bounding volume of each node specifies the volume of the represented physical object (including the volume of every child node) and its location in the logistic supermarket, by relating its location with the location of its container


Fig. 3. Logistic world model conceptual representation (cardinality represented between any two nodes): (a) Implantation hierarchy for the logistic area. (b) Relationship between each robot’s referential.

object. As most of the objects represented in the logistic world model have the geometric form of a parallelepiped, it is possible to have a common volume specification by defining a two-dimensional closed polyline extruded by a height along the z-axis, as shown in Fig. 4. The two-dimensional closed polyline is defined in a local coordinate system (of the object represented by the node), whose origin is an internal point of the closed polyline. There is a transformation which relates the origin of the local coordinate system to the origin of the parent's coordinate system (i.e. the container object). The translation component of the transformation is specified by a 3D vector, while the rotation component is given by a tuple specifying three rotations (roll, pitch and yaw). The points that specify the polyline are defined in the clockwise direction. The model also allows a given volume to be empty (i.e. its closed polyline is not defined), in order to enable nodes that simply provide a transformation between two coordinate systems. The construction of the logistic world model in the logistic planner is accomplished in two distinct phases that are part of the set-up process. The objective of this set-up process is to have a concrete virtual representation of each physical object present in the logistic area, specifying its characteristics and its 3D location. In the first set-up phase, all object types are defined, either through an interaction with the EIS or by using the user interface to manually create their 3D representations. The outcome of this stage is the identification and description of shelves, large/small boxes, conveyors, parts and kits. Objects are described in terms of their system identifier, their internal organization in terms of levels and cells within each level, and their geometric properties, such as width, depth and height.
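The node structure just described can be summarized with a short, hedged sketch: each node carries a transformation to its parent (3D translation plus roll/pitch/yaw), an optional extruded-polyline bounding volume, and a list of contained children. The names and the example values are illustrative assumptions, not the actual data model.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class BoundingVolume:
    polyline: List[Tuple[float, float]]   # closed 2D polyline, clockwise, in the local frame
    height: float                         # extrusion along the local z-axis


@dataclass
class WorldModelNode:
    name: str
    translation: Tuple[float, float, float]    # relative to the parent's frame
    rotation_rpy: Tuple[float, float, float]   # roll, pitch, yaw
    volume: Optional[BoundingVolume] = None    # None: the node is a pure coordinate transform
    children: List["WorldModelNode"] = field(default_factory=list)


# Example: a shelf contained in one region of the logistic supermarket.
shelf = WorldModelNode(
    name="shelf-07",
    translation=(12.4, 3.1, 0.0),
    rotation_rpy=(0.0, 0.0, 1.57),
    volume=BoundingVolume(
        polyline=[(-0.6, -0.3), (-0.6, 0.3), (0.6, 0.3), (0.6, -0.3)], height=2.0),
)
region = WorldModelNode("region-A", (0.0, 0.0, 0.0), (0.0, 0.0, 0.0), None, [shelf])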


Fig. 4. Logistic world model bounding volume concept. The origin of the local coordinate system is related to the origin of the parent’s coordinate system by a spatial transformation with translational and rotational components.

The second phase is performed in the logistic planner's user interface by the technician managing the logistic area. The goal of this stage is to correctly identify which objects actually exist in the logistic supermarket, which containment relationships relate some of the objects, and where they are located in space. The main element supporting the second phase is a two-dimensional image of the logistic supermarket area, provided to the technician by the logistic planner's user interface. This two-dimensional image can either be created by the robot through its Simultaneous Localization and Mapping (SLAM) functionality, or be provided by a Computer-Aided Design (CAD) application. In both cases, visual patterns on the image indicate to the technician the location and orientation of each physical object in the logistic supermarket (e.g. shelves and large boxes) within a 2D editor in the logistic planner. This editor enables the technician to follow an iterative process in which he/she tells the system the position and orientation of each shelf and large box, by using the graphical tool available on the user interface. Instances of shelves and large boxes are created and dragged and dropped onto the background image at the proper places. As a final step, the technician establishes the containment relationships between the automotive parts and the possible containers (i.e., small/large boxes and shelves). Also, small boxes are placed on shelves in a transversal or longitudinal orientation. After having been constructed by the set-up process described above, the logistic world model is provided to the interested components in the system (e.g., mission planner, task manager) through appropriate services. Actions having a physical effect on the logistic supermarket cause the robots (namely, their task management components) to generate notifications informing the logistic planner of changes in the logistic world model by way of a service (e.g. a part was picked from a large box and placed in a kitting box). Additionally, and as a result of the real-time analysis performed by the world model consistency checker, the logistic world model can also be updated when a robot detects


Fig. 5. Example of a logistic world model 3D representation of a PSA - Peugeot Citroën Automobiles S.A. factory, in Rennes, France, given by: (a) the web browser interface; (b) the ROS 3D visualizer, RVIZ.

spatial inconsistencies between the sensed reality and the digital representation (e.g. no small box is detected in the place stated in the logistic world model). The logistic planner is responsible for keeping the logistic world model synchronized between the different system components. Figure 5a is a 3D visualization of an example logistic world model obtained using the logistic planner web browser interface. The image shown in the background (grey area) is the SLAM-based image built by and retrieved from the robot. Point (0,0,0) of the robot corresponds to the bottom-left corner of the image. This establishes the initial referential, with three (X, Y, Z) axes, of the logistic world model. On top of the background image, one can see nine shelves (the bigger objects) and three large boxes (orange colour). These objects were placed in these locations by the technician during the set-up phase described above. The main purpose of the 3D visualization is to help the technician confirm that the model is correct after editing it in the 2D editor and to visually follow the progress of the robot fleet. Figure 5b is the visualization of the same world model in the ROS three-dimensional visualizer, RVIZ.
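To illustrate how such robot notifications could be folded back into the model, here is a hedged sketch that reuses the WorldModelNode structure sketched earlier in this section: it simply moves a picked part from its source container to the target kitting box. The function names and the notification shape are assumptions made for the example, not the system's actual API.

def find_node(root, name):
    # Depth-first search for a node by name in the containment hierarchy.
    if root.name == name:
        return root
    for child in root.children:
        hit = find_node(child, name)
        if hit is not None:
            return hit
    return None


def apply_pick_notification(world_model, part_name, source_name, kit_name):
    # Move a picked part from its source container to the kitting box.
    source = find_node(world_model, source_name)
    kit = find_node(world_model, kit_name)
    part = find_node(source, part_name)
    source.children.remove(part)
    kit.children.append(part)
    # The updated model would then be re-synchronized with the other
    # components (mission planner, task managers) and reported to the EIS.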

6

Conclusion

This paper presents a system which provides vertical integration with existing EIS. A hierarchical structure named logistic world model is proposed to augment traditional logistic data with 3D spatial information, in particular to store the pose of objects in the environment. A novel component called the logistic planner is responsible for creating and updating this spatial information. The logistic planner can be seen as the orchestrator and major information provider to the planning and execution of missions performed by each robot of the fleet. The proposed web user interface provides a robust, reliable and intuitive approach to specify, plan and monitor the execution of missions carried out by a robotic fleet. This work is integrated in the EU project STAMINA, where a mobile manipulator was developed to conduct automatic pick and place operations in a logistic supermarket. Future work will include the usage and adaptation of the logistic planner and the logistic world model in other industrial applications.


Acknowledgements. This work is financed by the ERDF European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme, and by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia, within project SAICTPAC/0034/2015 - POCI-01-0145-FEDER-016418. Project "TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER-000020" is financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF).

References
1. Perrey, R., Lycett, M.: Service-oriented architecture. In: Proceedings of the 2003 Symposium on Applications and the Internet Workshops, pp. 116–119 (2003). https://doi.org/10.1109/SAINTW.2003.1210138
2. Bostelman, R., Hong, T., Marvel, J.: Survey of research for performance measurement of mobile manipulators. J. Natl. Inst. Stand. Technol. (2016)
3. Shneier, M., Bostelman, R.: Literature review of mobile robots for manufacturing. National Institute of Standards and Technology, US Department of Commerce, May 2015
4. Krueger, V., Chazoule, A., Crosby, M., Lasnier, A., Pedersen, M.R., Rovida, F., Nalpantidis, L., Petrick, R., Toscano, C., Veiga, G.: A vertical and cyber-physical integration of cognitive robots in manufacturing. Proc. IEEE 104(5), 1114–1127 (2016)
5. Bianco, P., Lewis, G., Merson, P., Simanta, S.: Architecting service-oriented systems. Technical report CMU/SEI-2011-TN-008, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA (2011)
6. Girbea, A., Suciu, C., Nechifor, S., Sisak, F.: Design and implementation of a service-oriented architecture for the optimization of industrial applications. IEEE Trans. Ind. Inf. 10(1), 185–196 (2014)
7. Crosby, M., Petrick, R.P.A.: Mission planning for a robot factory fleet. In: IROS 2015 Workshop on Task Planning for Intelligent Robots in Service and Manufacturing, Hamburg, Germany (2015)
8. Rovida, F., Kruger, V.: Design and development of a software architecture for autonomous mobile manipulators in industrial environments. In: 2015 IEEE International Conference on Industrial Technology (ICIT), pp. 3288–3295. IEEE (2015)
9. Pedersen, M.R., Nalpantidis, L., Andersen, R.S., Schou, C., Bøgh, S., Krüger, V., Madsen, O.: Robot skills for manufacturing: from concept to industrial deployment. Robot. Comput. Integr. Manuf. 37, 282–291 (2016)
10. Pedersen, M.R., Nalpantidis, L., Bobick, A., Krüger, V.: On the integration of hardware-abstracted robot skills for use in industrial scenarios. In: 2nd International Workshop on Cognitive Robotics Systems: Replicating Human Actions and Activities, pp. 1166–1171 (2013)
11. Pedersen, M.R., Herzog, D.L., Krüger, V.: Intuitive skill-level programming of industrial handling tasks on a mobile manipulator. In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), pp. 4523–4530. IEEE (2014)
12. Arrais, R., Oliveira, M., Toscano, C., Veiga, G.: A mobile robot based sensing approach for assessing spatial inconsistencies of a logistic system. J. Manuf. Syst. 43, 129–138 (2017)


13. Niu, N., Da Xu, L., Bi, Z.: Enterprise information systems architecture analysis and evaluation. IEEE Trans. Ind. Inf. 9(4), 2147–2154 (2013)
14. Parker, Z., Poe, S., Vrbsky, S.V.: Comparing NoSQL MongoDB to an SQL DB. In: Proceedings of the 51st ACM Southeast Conference, p. 5. ACM (2013)
15. Crick, C., Jay, G., Osentoski, S., Pitzer, B., Jenkins, O.C.: Rosbridge: ROS for non-ROS users. In: Proceedings of the 15th International Symposium on Robotics Research (2011)
16. Roestenburg, R., Bakker, R., Williams, R.: Akka in Action. Manning Publications Co. (2015)

Human Intention Recognition in Flexible Robotized Warehouses Based on Markov Decision Processes

Tomislav Petković(B), Ivan Marković, and Ivan Petrović

Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
{tomislav.petkovic2,ivan.markovic,ivan.petrovic}@fer.hr

Abstract. The rapid growth of e-commerce increases the need for larger warehouses and their automation; thus, using robots as assistants to human workers becomes a priority. In order to operate efficiently and safely, robot assistants or the supervising system should recognize human intentions. Theory of mind (ToM) is an intuitive conception of other agents' mental states, i.e., beliefs and desires, and how they cause behavior. In this paper we present a ToM-based algorithm for human intention recognition in flexible robotized warehouses. We have placed the warehouse worker in a simulated 2D environment with three potential goals. We observe the agent's actions and validate them with respect to the goal locations using a Markov decision process framework. Those observations are then processed by the proposed hidden Markov model framework, which estimates the agent's desires. We demonstrate that the proposed framework predicts the human warehouse worker's desires in an intuitive manner, and in the end we discuss the simulation results.
Keywords: Human intention recognition · Markov decision processes · Hidden Markov model · Theory of mind

1

Introduction

The European e-commerce turnover managed to increase by 13.3% to €455.3 billion in 2015, compared to the 1.0% growth of general retail in Europe [1]. With the internationalization of distribution chains, the key to success lies in efficient logistics, consequently increasing the need for larger warehouses and their automation. There are many fully automated warehouse systems, such as Swisslog's CarryPick Mobile system and Amazon's Kiva system [2]. They use movable racks that can be lifted by small, autonomous robots. By bringing the product to the worker, productivity is increased by a factor of two or more, while simultaneously improving accountability and flexibility [3]. However, current automation solutions based on a strict separation of humans and robots provide limited operation efficiency in large warehouses. Therefore, a new


integrated paradigm arises where humans and robots will work closely together, and these integrated warehouse models will fundamentally change the way we use mobile robots in modern warehouses. Besides immediate safety issues, an example of a challenge such models face is to estimate the worker's intentions so that the worker may be assisted and not impeded in their work. Furthermore, if the robot is not intelligent but controlled by a supervisory system, the supervisory system needs to be able to estimate the worker's intentions correctly and control the robots accordingly, so that warehouse operation efficiency is ensured. There exists a plethora of challenges in human intention recognition, because of the subtlety and diversity of human behaviors [4]. Contrary to some more common quantities, such as position and velocity, the human intention is not directly observable and needs to be estimated from human actions. Furthermore, the intentions should be estimated in real time, and overly complicated models should be avoided. Having that in mind, only the actions with the greatest influence on intention perception should be considered. For example, in the warehouse domain, the worker's orientation and motion have a large effect on goal intention recognition. On the other hand, observing, e.g., the worker's heart rate or perspiration would provide very little, if any, information on the worker's intentions. Therefore, such measurements should be avoided in order to reduce model complexity and ensure real-time operation [4]. Many models addressing the problem of human intention recognition successfully emulate human social intelligence using Markov decision processes (MDPs). Examples of such models can be found in [4], where the authors propose a framework for estimating a pedestrian's intention to cross the road, and in [5], where the authors propose a framework for gesture recognition and robot-assisted coffee serving. There are multiple papers from the gaming industry perspective, proposing methods for improving the non-playable character's assisting efficiency [6,7]. An interesting approach is the Bayesian Theory of Mind (BToM) [8], where beliefs and desires generate actions via abstract causal laws. The BToM framework in [8] observes an agent's actions and estimates the agent's desire to eat at a particular food truck. However, though impressive, the BToM model does not account for the possibility of the agent changing its mind during the simulation [9]. In this paper, we propose an algorithm for warehouse worker intention recognition motivated by the BToM approach. We expand the BToM to accommodate the warehouse scenario problem and we present the formal and mathematical details of the constructed MDP framework. The warehouse worker is placed in a simulated 2D warehouse environment with multiple potential goals, i.e., warehouse items that need to be picked. The worker's actions, moving and turning, are validated based on the proposed MDP framework. Actions resulting in motion towards the goal yield greater values than those resulting in moving away from it. We introduce a worker intention recognition algorithm based on the hidden Markov model (HMM) framework, similar to those presented in [10,11]. The proposed intention recognition algorithm observes the action values generated by the MDP and estimates the potential goal desires. We have considered the worker's tendency to change their mind during the simulation, as well as the worker's intention


of leaving the rack. In the end, we demonstrate that the proposed algorithm predicts the human warehouse worker's desires in an intuitive manner, and we discuss the simulation results.

2

Human Action Validation

In the integrated warehouse environment the worker's position and orientation need to be precisely estimated, and in the present paper we assume that these quantities are readily available. Furthermore, we would like to emphasize that most warehouse worker duties, such as sorting and placing materials or items on racks, or unpacking and packing, include a lot of motion, which is considered to be inherently stochastic in nature. Therefore, we model the worker's perception model P(O|S), with O and S representing observation and state, respectively, as deterministic, and the action model P(S'|S, A), with A representing action, as stochastic. A paradigm that encompasses uncertainty in the agent's motion and is suitable for the problem at hand is that of MDPs [12]. The MDP framework is based on the idea that transitioning to a state S yields an immediate reward R. Desirable states, such as warehouse items the worker needs to pick up, have high immediate reward values, while undesirable states, such as warehouse parts that are of no immediate interest, have low immediate reward values. The rational worker will always take the actions that lead to the highest total expected reward, and that value needs to be calculated for each state. Before approaching that problem, we define the MDP framework applicable to the warehouse domain. In order to accomplish that, we have placed the worker (later referred to as the agent) in a simulated 2D warehouse environment, shown in Fig. 1. The environment is constructed using the MATLAB GUI development environment, without a predefined physical interpretation of the map tiles, and the map size is chosen arbitrarily to be 20 × 20. There are three potential goals, and the shortest path to each goal is calculated. There are many off-the-shelf graph search algorithms we can use to find the optimal path to each goal, such as Dijkstra's algorithm and A*. However, if there exist multiple optimal paths to the goal, there are no predefined rules for which one to select, and the selection depends on the implementation details. Consider the example scenario with the warehouse worker in Fig. 2. It is intuitive that the rational worker will tend to follow the green path, because the orange path would require the worker either to take the additional action of turning or to walk unnaturally by not looking forward. Having this in mind, we modify the A* search algorithm so that it selects the optimal path that the agent currently sees the most of. This has been done by introducing the heuristic matrix H using the Manhattan distance (L1) heuristic as follows:

H_{x,y} = \begin{cases} |x_g - x| + |y_g - y| - \epsilon, & \text{if the agent sees tile } (x,y) \\ |x_g - x| + |y_g - y|, & \text{otherwise} \end{cases}  (1)

where ε is a small value. Subtracting a small value from the visible tiles directs the search in their direction and does not disrupt the heuristic's admissibility. The cost


Fig. 1. Agent (green tile) in simulation environment with three potential agent’s goals (colored tiles). Unoccupied space is labeled with yellow tiles and occupied space (i.e. warehouse racks) is labeled with black tiles. The optimal path to each goal is shown with red dots and red solid lines denote agent’s vision field. Visible tiles are colored white. Red dashed line denotes agent’s current orientation and colored lines indicate direction of the average orientation of the visible optimal path to each goal calculated using (2).

of each movement is also modified in a similar way, by subtracting a small value ε from the base movement cost of 1 if the tile is visible. An example of the modified A* search results can be seen in Fig. 3. The average orientation of the visible path to each goal is defined as follows:

\theta_{goal} = \begin{cases} \mathrm{atan2}\!\left( \sum_{n=1}^{N} \sin(\theta_n), \; \sum_{n=1}^{N} \cos(\theta_n) \right), & \text{if } N > 0 \\ \theta_a + \pi, & \text{otherwise} \end{cases}  (2)

where N is the number of visible optimal-path tiles, θ_a is the agent's orientation, and θ_n are the relative orientations of each visible optimal-path tile (x, y) with respect to the agent at (x_a, y_a):

\theta_n = \mathrm{atan2}(y - y_a, x - x_a).  (3)

(An illustrative sketch of (1)–(3) is given after Fig. 3.) We propose a mathematical model for validating the agent's actions based on the assumption that the rational agent tends to (i) move towards the goal it desires most by taking the shortest possible path, and (ii) orient itself so as to minimize the difference between its orientation and the average orientation of the visible optimal path to the most desirable goal, calculated in (2). The goal of the proposed model is to assign large values to the actions compatible with these assumptions,


Fig. 2. Warehouse worker’s (blue circle) shortest path to the red goal is ambiguous because both orange dashed and green dotted paths are optimal. The black arrow denotes worker’s orientation.


Fig. 3. The proposed A∗ modification yields different optimal paths with the agent’s orientation change.
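As announced above, the following hedged sketch illustrates the visibility-biased Manhattan heuristic (1) and the average orientation of the visible optimal path (2)–(3). The grid and visibility representations, as well as the value of ε, are assumptions made for the example; this is not the authors' implementation.

import math

EPS = 0.05  # the small value epsilon subtracted for visible tiles (assumed)


def heuristic(x, y, goal, visible):
    # Equation (1): Manhattan distance, slightly reduced for visible tiles.
    xg, yg = goal
    h = abs(xg - x) + abs(yg - y)
    return h - EPS if visible[x][y] else h


def average_path_orientation(agent_xy, agent_theta, visible_path_tiles):
    # Equations (2)-(3): mean direction of the visible optimal-path tiles.
    if not visible_path_tiles:
        return agent_theta + math.pi
    xa, ya = agent_xy
    thetas = [math.atan2(y - ya, x - xa) for (x, y) in visible_path_tiles]
    return math.atan2(sum(math.sin(t) for t in thetas),
                      sum(math.cos(t) for t in thetas))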

and small values to the actions deviating from them. These values will be used to develop the agent's intention recognition algorithm in the sequel. We can notice that the introduced validation problem is actually a path planning optimization problem. A perfectly rational agent will always choose the action with the greatest value and consequently move towards the goal. We approach the calculation of the agent's action values by introducing the agent's action validation MDP framework. We assume that the agent's position and orientation are fully observable and create the MDP state space S as:

S_{x,y,k} = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix}.  (4)

The agent's orientation space Θ must be discrete because the MDP framework assumes a finite number of states. We have modelled Θ to include orientations divisible by π/4, and it can be arbitrarily expanded:

\Theta = \{0, \pi/4, \pi/2, 3\pi/4, \pi, 5\pi/4, 3\pi/2, 7\pi/4\}.  (5)

The action space A includes the actions 'Up', 'Down', 'Left', 'Right', 'Turn Clockwise', 'Turn Counterclockwise' and 'Stay', labelled in that order as follows:

A = (\wedge, \vee, <, >, R, L, S).  (6)

It has already been stated that the agent's actions are fully observable but stochastic. In order to capture the stochastic nature of the agent's movement, we define the transition matrix T of the agent's movement:

T = \begin{bmatrix}
1-2\epsilon & 0 & \epsilon & \epsilon & 0 & 0 & 0 \\
0 & 1-2\epsilon & \epsilon & \epsilon & 0 & 0 & 0 \\
\epsilon & \epsilon & 1-2\epsilon & 0 & 0 & 0 & 0 \\
\epsilon & \epsilon & 0 & 1-2\epsilon & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1-\epsilon & 0 & \epsilon \\
0 & 0 & 0 & 0 & 0 & 1-\epsilon & \epsilon \\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}  (7)

where element T_{ij} denotes the realization probability of the action A_j if the intended action is A_i. Moving actions have a small probability 2ε of resulting in lateral movement, and turning actions have a small probability ε of failing. The value of the constant ε is obtained experimentally and equals 0.1. If the agent's action cannot be completed because occupied space blocks the way, the column corresponding to the impossible action is added to the last column and is then set to a zero vector. We define three hypotheses, H_i, i = 1…3, one for each goal state, as follows: "The agent wants to go to goal i, and the other potential goals are treated as unoccupied tiles". The immediate reward values R for each hypothesis and state are calculated as follows:

R_{i,S'} = \begin{cases} \pi, & \text{if } S' \text{ is the goal state according to } H_i \\ -(\epsilon + |\theta_i - \theta_a|), & \text{otherwise} \end{cases}  (8)

where ε is a small number and |θ_i − θ_a| represents the absolute difference between the average orientation of the visible path to goal i and the agent's orientation. Note that we have taken the angle periodicity into account while calculating the angle difference in (8). The goal state is rewarded and other states are punished proportionally to the orientation difference. If the agent does not see a path to goal i, the reward is set to the lowest value, −π, which is derived from (3). One of the most commonly used algorithms for solving the MDP optimal policy problem is the value iteration algorithm [4], which assigns a calculated value to each state. The optimal policy is derived by choosing the actions with the largest expected value gain. The value iteration algorithm iteratively solves Bellman's equation [13] for each hypothesis H_i:

V_{j+1}(H_i, S) = \max_a \left\{ \sum_{S'} P_{S,S'} \left( R_{H_i,S'} + \gamma V_j(H_i, S') \right) \right\}  (9)

where S is the current state, S' an adjacent state, and P_{S,S'} the element of the row T_a of the transition matrix T which would cause transitioning from state S to S'. The algorithm stops once the criterion

\sum_{i,k} \left\| V_j(H_i, S_k) - V_{j-1}(H_i, S_k) \right\| < \eta  (10)

is met, where the threshold η is set to 0.01. The state values, for the case where the goal state is the dark yellow (southern) goal and the agent's orientation is 3π/2, are shown in Fig. 4. The agent's behavior consistent with the hypothesis H_i is defined as follows.

Fig. 4. State values for the agent's orientation of 3π/2 if the goal state is the southern goal labeled with the red circle.

Definition 1 (Consistent behavior). If the agent in state S takes an action a under the hypothesis H_i with an expected value gain greater than or equal to the expected value gain of the action "Stay", its behavior is considered consistent with the hypothesis H_i. Otherwise, its behavior is considered inconsistent with the hypothesis H_i.
Behavior consistency is an important factor in determining the agent's rationality, which will be further discussed in the next section. While calculating the immediate rewards and state values has O(n^4) complexity and can be time consuming, it can be done offline, before the simulation starts. The optimal action, Π*(H_i, S), for each state is the action that maximizes the expected value gain and, on the other hand, the worst possible action, Π̄*(H_i, S), is the action that minimizes the expected value gain.
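Since the state values are computed offline, the loop below gives a hedged, generic sketch of the value iteration step in (9) with the stopping criterion (10). The data structures (a transition function returning probability-weighted successors, a reward table per hypothesis) and the discount value are assumptions made for the example, not the authors' code.

GAMMA = 0.95   # discount factor gamma from (9); the concrete value is an assumption
ETA = 0.01     # stopping threshold eta from (10)


def value_iteration(states, actions, transition, reward):
    # transition(s, a) -> list of (probability, next_state); reward[next_state] -> float.
    values = {s: 0.0 for s in states}
    while True:
        new_values = {}
        for s in states:
            new_values[s] = max(
                sum(p * (reward[s2] + GAMMA * values[s2])
                    for p, s2 in transition(s, a))
                for a in actions
            )
        delta = sum(abs(new_values[s] - values[s]) for s in states)
        values = new_values
        if delta < ETA:
            return values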

3

Human Intention Recognition

Once the state values V_{H,S} are obtained, a model for solving the agent's intention recognition problem is introduced. While the agent's actions are fully observable, they depend on the agent's inner states (desires), which cannot be observed and need to be estimated. We propose a framework based on a hidden Markov model for solving the agent's desire estimation problem. HMMs are especially known for their application in temporal pattern recognition such as speech, handwriting and gesture recognition [14] and force analysis [15]. They are an MDP extension covering the case where the observation (the agent's action) is a probabilistic function of the hidden state (the agent's desires), which cannot be directly observed. We propose a model with five hidden states, which is shown in Fig. 5 and listed in Table 1.

Table 1. HMM framework components

Symbol  Name              Description
G1      Goal 1            Agent wants to go to the northern goal
G2      Goal 2            Agent wants to go to the eastern goal
G3      Goal 3            Agent wants to go to the southern goal
G?      Unknown goal      Agent's desires are not certain
Gx      Irrational agent  Agent behaves irrationally

In order to avoid confusion caused by MDP and HMM frameworks, both having similar or the same element names, MDP states will be referred as states and HMM states will be referred to as hidden states, or simply desires. All of the other similarly named elements in this chapter refer to the HMM unless stated otherwise. Hidden state G? indicates that the agent behaves consistent with multiple goal hypotheses and the model cannot decide between them with enough certainty. On the other hand, hidden state Gx indicates that the agent behaves inconsistently with every goal hypothesis. This hidden state includes the cases of the agent being irrational or agent’s desire to go to an a priori unknown goal. The proposed model cannot distinguish between these cases. The agent’s change of mind during the simulation is allowed, but with very low probability. The constant values in Fig. 5 are obtained experimentally and we introduce HMM transition matrix THM M :

Human Intention Recognition


T_{HMM} = \begin{bmatrix}
0.8 & 0 & 0 & 0.2 & 0 \\
0 & 0.8 & 0 & 0.2 & 0 \\
0 & 0 & 0.8 & 0.2 & 0 \\
0.1 & 0.1 & 0.1 & 0.65 & 0.05 \\
0 & 0 & 0 & 0.1 & 0.9
\end{bmatrix}.  (11)

During the simulation, each agent's action a ∈ A generates a three-element observation vector O_{S,a}, with each element belonging to one hypothesis. The observation vector element O_{S,a,i} is calculated as follows:

O_{S,a,i} = \frac{E[V(H_i, S) \mid a] - E[V(H_i, S) \mid \bar{\Pi}^*]}{E[V(H_i, S) \mid \Pi^*] - E[V(H_i, S) \mid \bar{\Pi}^*]},  (12)

where E[·] denotes the expected value gain. The calculated observations are used to generate the HMM emission matrix B. The emission matrix is expanded with each agent's action (simulation step) with a row B', where the element B'_i stores the probability of observing the observation vector O from hidden state S_i. The last three observations are averaged and the maximum average value φ is selected; it is used as an indicator of whether the agent is behaving irrationally. Each expansion row B' is calculated as follows:

B' = \zeta \cdot \begin{cases} \left[ \tanh(O_1) \;\; \tanh(O_2) \;\; \tanh(O_3) \;\; \tanh(0.55) \;\; 0 \right], & \text{if } \varphi > 0.5 \\ \left[ \tanh\!\left(\tfrac{O_1}{2}\right) \;\; \tanh\!\left(\tfrac{O_2}{2}\right) \;\; \tanh\!\left(\tfrac{O_3}{2}\right) \;\; \tanh(0.1) \;\; \tanh(1-\varphi) \right], & \text{otherwise} \end{cases}  (13)

where ζ is a normalizing constant and O_i is the i-th element of the observation vector. The initial probabilities of the agent's desires are

\Pi = \left[ 0 \;\; 0 \;\; 0 \;\; 1 \;\; 0 \right],  (14)

indicating that the initial state is G?. After each agent's action, the agent's desires are estimated using the Viterbi algorithm [16], which is often used for solving HMM human intention recognition models [17]. The Viterbi algorithm outputs the most probable hidden state sequence and the probabilities of each hidden state at each step. These probabilities are the agent's desire estimates.
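For completeness, the following hedged sketch shows a generic Viterbi pass over the five hidden states, using the transition matrix (11), the initial distribution (14) and emission rows built according to (13). It is a standard textbook implementation with rescaling, written as an illustration rather than the authors' code.

import numpy as np

T_HMM = np.array([
    [0.8, 0.0, 0.0, 0.2,  0.0],
    [0.0, 0.8, 0.0, 0.2,  0.0],
    [0.0, 0.0, 0.8, 0.2,  0.0],
    [0.1, 0.1, 0.1, 0.65, 0.05],
    [0.0, 0.0, 0.0, 0.1,  0.9],
])
PI = np.array([0.0, 0.0, 0.0, 1.0, 0.0])   # initial desires (14): start in G?


def viterbi(emission_rows):
    # emission_rows: one row per simulation step, as produced by (13).
    trellis = PI * emission_rows[0]
    backptr = []
    for b in emission_rows[1:]:
        scores = trellis[:, None] * T_HMM       # scores[i, j]: arrive in j coming from i
        backptr.append(np.argmax(scores, axis=0))
        trellis = np.max(scores, axis=0) * b
        trellis = trellis / trellis.sum()       # rescale to avoid numerical underflow
    state = int(np.argmax(trellis))
    path = [state]
    for bp in reversed(backptr):                # backtrack the most probable sequence
        state = int(bp[state])
        path.append(state)
    return list(reversed(path)), trellis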

4

Simulation Results

In the previous sections we have introduced the MDP and HMM frameworks for modeling human action validation and intention recognition. We have obtained the model parameters empirically and conducted multiple simulations evaluating the proposed human intention recognition algorithm. The proposed algorithm is tested in a scenario whose most important simulation steps are shown in Fig. 6, with the corresponding desire estimates shown in Fig. 7. The starting position is (x_1, y_1, θ_1) = (5, 5, 3π/2). The agent behaves consistently with all the hypotheses and proceeds to the state (x_6, y_6, θ_6) = (8, 5, 0). Because of the mentioned hypothesis consistency, the desire estimates for all of the goal


(a) Simulation step 2.

(b) Simulation step 3.

(c) Simulation step 6.

(d) Simulation step 7.

(e) Simulation step 12.

(f) Simulation step 13.

(g) Simulation step 18.

(h) Simulation step 25.

(i) Simulation step 31.

Fig. 6. Representative simulation steps (best viewed in color).

states increase. The actions from simulation step 7 to step 12 are consistent only with the hypothesis H3, which manifests as a steep rise of P(G3) and a fall of the probabilities related to the other goal hypotheses. In step 13, the action "Stay" is the only action consistent with the hypothesis H3, and because the agent chooses the action "Right", P(G3) instantly falls towards zero while P(G?) and P(G2) rise. While it might seem obvious that the agent now actually wants to go to Goal 2, it has previously chosen actions inconsistent with that hypothesis, and the model initially assigns a greater probability to the desire G? than to G2.


The next few steps are consistent with the hypothesis H2, and P(G2) rises until simulation step 18, when it enters a steady state of approximately 0.85. The goal desires will never reach a value of 1 because the element B'_4 is never zero, thus allowing the agent's change of mind. In the state (x_25, y_25, θ_25) = (16, 7, π/4) the agent can decide to go to either Goal 1 or Goal 2. However, it chooses to take the turn towards the dead end in simulation step 31. The proposed model recognizes that this behavior is inconsistent with all of the hypotheses, and P(Gx) steeply rises to a value slightly smaller than 1, declaring the agent irrational (Fig. 7).


Fig. 7. Hidden state (desires) probabilities. Probabilities of the goal states are colored according to the goal tile’s color. The unknown goal state probability is colored black and irrational agent state probability is colored red.

5

Conclusion

In this paper we have proposed a feasible human intention recognition algorithm. Our goal was to estimate the intention of a human worker, i.e., an agent, inside a robotized warehouse, where we assumed that the agent's position and orientation are known, as well as the potential goals. The proposed approach is based on the Markov decision process, where we first run the value iteration algorithm offline for the known agent goals and the discretized possible agent states. The resulting state values are then used within the hidden Markov model framework to generate observations and estimate the final probabilities of the agent's intentions. Simulations have been carried out within a simulated 2D warehouse with three potential goals, modeling a situation where the human worker needs to enter the robotized part of the warehouse and pick an item from a rack. The results suggest that the proposed framework predicts the human warehouse worker's desires in an intuitive manner and within reasonable expectations.


Acknowledgement. This work has been supported from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688117 “Safe human-robot interaction in logistic applications for highly flexible warehouses (SafeLog)” and has been carried out within the activities of the Centre of Research Excellence for Data Science and Cooperative Systems supported by the Ministry of Science, Education and Sports of the Republic of Croatia.

References 1. E-commerce Europe. European B2C E-commerce Report 2016 - Facts, Figures, Infographic & Trends of 2015 and the 2016 Forecast of the European B2C E-commerce Market of Goods and Services (2016) 2. D’Andrea, R.: Guest editorial: a revolution in the warehouse: a retrospective on kiva systems and the grand challenges ahead. IEEE Trans. Autom. Sci. Eng. 9(4), 638–639 (2012) 3. Wurman, P.R., D’Andrea, R., Mountz, M.: Coordinating hundreds of cooperative, autonomous vehicles in warehouses. AI Mag. 29(1), 9 (2008) 4. Bandyopadhyay, T., Won, K.S., Frazzoli, E., Hsu, D., Lee, W.S., Rus, D.: Intentionaware motion planning. In: Springer Tracts in Advanced Robotics, vol. 86, pp. 475–491 (2013) 5. Lin, H., Chen, W.: Human intention recognition using Markov decision processes. In: CACS International Automatic Control Conference, (Cacs), pp. 340–343 (2014) 6. Nguyen, T.-H.D., Hsu, D., Lee, W.-S., Leong, T.-Y., Kaelbling, L.P., Lozano-Perez, T., Grant, A.H.: CAPIR: collaborative action planning with intention recognition, pp. 61–66 (2012) 7. Fern, A., Tadepalli, P.: A computational decision theory for interactive assistants. In: Advances in Neural Information Processing Systems 23 (NIPS), pp. 577–585 (2011) 8. Baker, C.L., Tenenbaum, J.B.: Modeling human plan recognition using Bayesian theory of mind. In: Plan, Activity, and Intent Recognition, pp. 1–24 (2014) 9. Chater, N., Oaksford, M., Hahn, U., Heit, E.: Bayesian models of cognition. Wiley Interdisc. Rev. Cogn. Sci. 1(6), 811–823 (2010) 10. Wang, Z., Peer, A., Buss, M.: An HMM approach to realistic haptic Human-Robot interaction. In: Proceedings - 3rd Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, World Haptics 2009, pp. 374–379 (2009) 11. He, L., Zong, C., Wang, C.: Driving intention recognition and behaviour prediction based on a double-layer hidden Markov model. J. Zhejiang Univ. Sci. C 13(3), 208– 217 (2012) 12. Thrun, S., Burgard, W., Fox, D.: Probabilistic Robotics (1999) 13. Bellman, R.: A Markovian decision process. J. Math. Mech. 6, 679–684 (1957) 14. Rabiner, L.R.: A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE 77(2), 257–286 (1989) 15. Chen, C.S., Xu, Y., Yang, J.: Human action learning via Hidden Markov Model. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 27(1), 34–44 (1997) 16. Forney Jr., G.D.: The Viterbi algorithm. Proc. IEEE 61(3), 268–278 (1973) 17. Zhu, C., Cheng, Q., Sheng, W.: Human intention recognition in smart assisted living systems using a hierarchical Hidden Markov Model. In: IEEE International Conference on Automation Science and Engineering, pp. 253–258 (2008)

Robotics and Cyber-Physical Systems for Industry 4.0 (II)

Dynamic Collision Avoidance System for a Manipulator Based on RGB-D Data

Thadeu Brito1,2(B), Jose Lima2,3, Pedro Costa3,4, and Luis Piardi1,2

1 Federal University of Technology - Paraná, Apucarana, Brazil
thadeu [email protected], luis [email protected]
2 Polytechnic Institute of Bragança, Bragança, Portugal
[email protected]
3 Faculty of Engineering of University of Porto, Porto, Portugal
[email protected]
4 INESC-TEC, Centre for Robotics in Industry and Intelligent Systems, Porto, Portugal

Abstract. The new paradigms of Industry 4.0 demand collaboration between robots and humans. They can help and collaborate with each other without any additional safety equipment, unlike other manipulators. The robot should have the ability to acquire the environment and plan (or re-plan) the movement on-the-fly, avoiding obstacles and people. This paper proposes a system that acquires the environment space, based on a Kinect sensor, and performs the path planning of a UR5 manipulator for pick and place tasks while avoiding objects, based on the point cloud from the Kinect. The results allow the proposed system to be validated.

Keywords: Collaborative robots · Collision avoidance · RGB-D · Manipulator path planning

1 Introduction

One of the most important tasks in Industry 4.0 related to cooperation is the ability to estimate and avoid collisions for a robot manipulator. Collaborative robotics is a topic addressed in Industry 4.0 where humans and robots can share a workspace and help each other in a cooperative way. Collaborative robots can be used without any additional safety equipment, unlike other manipulators. This means that the robot should have the capacity to acquire the environment and plan the movement avoiding obstacles and people. The cooperation between human and robot requires that the robot can re-plan the path to reach the target position while avoiding collisions with human body parts and obstacles in real time, that is, on the fly, while the arm is moving. Such a process can be called dynamic collision avoidance. Nowadays, RGB-D sensors help this environment acquisition and perception so that the system can do the path planning with constraints. The depth cameras


are increasing in popularity and decreasing in price. The well-known Kinect sensor is an example of that. This paper proposes a system that acquires the environment space, based on a Kinect sensor, and performs the path planning of a UR5 manipulator while avoiding objects. Two algorithms were tested in real acquired situations with a simulated UR5 robot and the results point to the advantages of this approach.

2 Related Work

An important step to be considered when developing the manipulator system is path planning. Path planning is a key area of robotics. It comprises planning algorithms, configuration space discretization strategies and related constraints. It is well known that path planning for robots with many degrees of freedom is a complex task. Barraquand and Latombe [1], in 1991, proposed a new approach to robot path planning that consisted of building and searching a graph connecting the local minima of the potential function defined over the robot’s configuration space. This approach was proposed considering robots with multiple degrees of freedom. Later, Ralli and Hirzinger [2] refined that same algorithm, accelerating the system and calculating solutions with a lower estimated execution time. Probabilistic methods were introduced by Kavraki et al. [3] with the objective of reducing the complexity of the free configuration space. This method is not suited to dynamic environments since a change in the environment causes the reconstruction of the whole graph. Several variants of these methods were proposed: visibility-based PRM [4], medial axis PRM [5], Lazy PRM [6] and sampling-based roadmap of trees [7]. Other methods are also used: Helguera et al. used a local method to plan paths for manipulator robots and solved the local minima problem by searching a graph describing the local environment with an A* algorithm until the local minima are avoided [8]. Path planning becomes more complex when obstacles are inserted in a given environment. Blackmore and Williams, in 2006, presented a complete algorithm by posing the problem as disjunctive programming; they are able to use existing constrained optimization methods to generate optimal trajectories for manipulator path planning with obstacles [9]. Real-time path planning in dynamic environments was introduced by Lahouar et al. in 2006. This approach is based on the constraints method coupled with a procedure to avoid local minima by bypassing obstacles using a boundary following strategy [10]. More recently, Tavares et al. used a double A* algorithm for multiple industrial manipulators. This approach uses one A* algorithm to approach the target and another, more refined A* to reduce the error [11].

3 System Architecture

Many efforts have been made to achieve a system that acquires the environment by means of an RGB-D sensor and plans a path for a UR5 manipulator to reach its end point with the ability to avoid obstacles. Figure 1 presents a simplified block diagram of the system.


Fig. 1. Diagram of system functions. Image adapted from [22].

3.1 ROS

Robot Operating System (ROS) is a framework that is widely used for developing programs for robots. ROS connects the functionalities of a robot (sensors, locomotion, vision, navigation and localization) through libraries and services, facilitating the robotic application. The philosophy is to make a piece of software that could work on other robots by making only small changes in the code [16]. Several ROS modules are used in the present work. The next subsections address the main ones.
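As a minimal illustration of how a ROS node consumes sensor data, the sketch below subscribes to a Kinect point cloud topic; the topic name and node name are assumptions, not taken from the paper.

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import PointCloud2

def cloud_callback(msg):
    # each PointCloud2 message carries the Kinect depth data used for obstacle perception
    rospy.loginfo("Received cloud with %d x %d points", msg.height, msg.width)

if __name__ == "__main__":
    rospy.init_node("kinect_cloud_listener")
    # "/camera/depth_registered/points" is the usual openni/freenect topic name (assumed here)
    rospy.Subscriber("/camera/depth_registered/points", PointCloud2, cloud_callback)
    rospy.spin()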

3.2 Rviz

The ROS framework comes with a great number of powerful tools to help the user and developer in the process of debugging the code and detecting problems with both the hardware and the software. This comprises debugging facilities such as log messages, as well as visualization and inspection capabilities which allow the user to easily see what is going on in the system [16]. Rviz, presented in Fig. 2, is a 3D visualizer used to create a virtual simulation of robotic models in ROS. During the simulation it is possible to create scenarios with obstacles, to change the pose of a robot or to move it through a virtual “world”. It is also possible to insert sensors, such as the Kinect, and to change the positions of sensors or robots. In this way it can be confirmed that the application is ready to be deployed in a real setting, avoiding possible problems with real robots.


Fig. 2. Screenshot of a Rviz 3D visualizer with a simple application. On the left a display panel, in the middle a simulated UR5 on the table and on the right a views panel.

3.3 RGB-D Sensor

Developing a robotic application requires the use of sensors. There are currently several types of devices that ROS supports. This package is divided into different categories: 2D range finders, 3D sensors, pose estimation, cameras, sensor interfaces, and others [20]. The 3D sensor package contains the RGB-Depth (RGB-D) sensors such as the Kinect. RGB-D cameras consist of an RGB and a depth sensor that capture color images along with per-pixel depth information (depth map). These features have promoted the wide adoption of low-cost RGB-D cameras in numerous at-home applications, such as body tracking, gait monitoring for tele-rehabilitation, tracking of facial expressions, and object and gesture recognition, among others [12].

3.4 MoveIt!

MoveIt! is a well-known software package for planning mobile manipulation movements, incorporating the latest advances in motion planning, manipulation, 3D perception, kinematics, control and navigation. It provides an easy-to-use platform for developing advanced robotics applications, evaluating new robot designs and building integrated robotics products for industrial, commercial, R&D and other domains [13]. Figure 3 shows a screenshot of MoveIt! performing a path planning task. The main node of this software is the move group, which integrates several other tools. A good example of how the move group works is path planning, where it is necessary to collect information from a point cloud and turn it into obstacles in the simulation. MoveIt! can be used from C++ or Python, which makes it easy to establish commands and create an interface when viewing some movement


Fig. 3. Screenshot of a MoveIt! with a planned path executed.

in 3D. The algorithms embedded in MoveIt! can be used through several planners: the Open Motion Planning Library (OMPL), Stochastic Trajectory Optimization for Motion Planning (STOMP), the Search-Based Planning Library (SBPL) and Covariant Hamiltonian Optimization for Motion Planning (CHOMP).
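A minimal Python sketch of commanding a planning group through MoveIt! is shown below; the planning group name "manipulator" and the target pose are assumptions, and the snippet only illustrates the move group workflow described above.

#!/usr/bin/env python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

rospy.init_node("ur5_pick_demo")
moveit_commander.roscpp_initialize(sys.argv)

# "manipulator" is the planning group usually defined by UR5 MoveIt packages (an assumption here)
group = moveit_commander.MoveGroupCommander("manipulator")

goal = Pose()
goal.position.x, goal.position.y, goal.position.z = 0.4, 0.2, 0.4
goal.orientation.w = 1.0

group.set_pose_target(goal)
# plan and execute; obstacles present in the planning scene (e.g. the Octomap built
# from the Kinect point cloud) are avoided automatically
group.go(wait=True)
group.stop()
group.clear_pose_targets()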

3.5 Camera Calibration

The calibration of consumer-grade depth sensors has been widely investigated since the release of the first-generation Kinect in 2010. Various calibration methods, particularly for the depth sensor, have been studied by different research groups [14]. The availability of affordable depth sensors in conjunction with common RGB cameras (even in the same device, e.g. the Microsoft Kinect) provides robots with a complete and instantaneous representation of both the appearance and the 3D structure of the surrounding environment. This type of information enables robots to perceive and actively interact with other agents inside the working environment. To obtain reliable and accurate measurements, the intrinsic parameters of each sensor should be precisely calibrated and the extrinsic parameters relating the two sensors should also be precisely known. The calibration must be done because there are still no integrated sensors able to provide both color and depth information from a single element (the sensors are separate). These devices provide colored point clouds that suffer from an inaccurate association between depth and RGB data, due to an imperfect alignment between the camera and the depth sensor. Moreover, depth images suffer from a geometric distortion, typically irregular and position dependent. These devices are factory calibrated, so each sensor is sold with its own calibration parameter set stored inside a non-volatile memory. On the other hand, the depth distortion is not modeled in the factory calibration. So, a proper calibration method


Fig. 4. Calibration procedure to obtain the intrinsic and extrinsic parameters.

for robust robotics applications should precisely estimate the misalignment and both the systematic and distortion errors [15]. Figure 4 shows the calibration procedure.
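For reference, a hedged sketch of a standard chessboard-based intrinsic calibration with OpenCV is shown below; it covers only the RGB intrinsics, whereas the procedure of Fig. 4 also estimates the depth-to-RGB extrinsics and the depth distortion. The board size and image paths are assumptions.

import glob
import cv2
import numpy as np

# hypothetical 9x6 chessboard and image folder
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        obj_points.append(objp)
        img_points.append(corners)

# K holds the intrinsic matrix, dist the lens distortion coefficients
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)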

4 Path Planning

Different methods of path planning can be exploited in an application of robotic manipulators. An interesting planner is OMPL, a library implementing many trajectory calculation algorithms. To check for collisions, the FCL library (Flexible Collision Library, included in MoveIt!) is used. The OMPL planner offers two ways to create a path: one uses differential constraints (control-based planners) and the other establishes a path through the geometric and kinematic constraints of the system (geometric planners), which is the one addressed in this paper [23]. A widely used algorithm is the multiple-query PRM (Probabilistic Roadmap Method), which builds roadmaps of the environment. It is based on sampling algorithms, which can have a higher framework cost. Another good algorithm is RRT (Rapidly-exploring Random Trees), which is very simple to implement: it has a low framework cost and produces good outputs, accomplishing its work by building state trees. With these algorithms it is possible to carry out path planning from an initial pose to a final pose. The steps to perform this path planning are indicated in Fig. 1, where an RGB-D image generated by a Kinect is used to check for possible obstacles. Images are introduced into the system through interconnected nodes, which perform the calculations necessary to obtain collision-free paths between the start and the goal poses. Finally, the execution of the movement is done, if there is


a collision-free trajectory plan. Otherwise MoveIt! informs that it is not possible to carry out a collision-free movement. If the environment changes at any time, for example if obstacles change place or new obstacles are inserted into the work environment, the system (OMPL) will recalculate the trajectory to reach the final pose. In this way, the aim of a system that avoids collisions dynamically is achieved, that is, the whole collision avoidance system adapts the planned routes according to the environment.

5 Results

To verify that the dynamic collision avoidance system works, a lab simulation was carried out with a Kinect sensor and a virtual model of a UR5 manipulator. The purpose is to create a scenario with real obstacles in order to assess the operation of the system. The main idea is to have the Kinect sensor create a point cloud of the real obstacles and indicate to MoveIt! where the virtual manipulator cannot collide, that is, where the manipulator can move to reach the goal pose. The working environment for the validation of the dynamic collision avoidance system was built with a simple table as the base for one box that forms the real obstacle. The virtual manipulator model has been configured to be fixed to the center of the table, so that the manipulator stays between the box and the human without making contact with them. At this stage the use of the RGB-D sensor was important to generate the point cloud and to be able to calibrate the positioning of all the objects. In order to avoid shadow interference in the point cloud generation, the best RGB-D sensor placement is at the top of the working environment. In Fig. 5, it is possible to observe the real working environment and the process of transforming this environment into a point cloud for MoveIt!, in order to perform the perception of the real

Fig. 5. The scenario created, the obstacles in the real world and the steps to transform the scenario into the representation perceived by the software.


obstacles and to simulate collision-free path planning on a virtual model of the UR5 manipulator. For MoveIt! to perform the path planning it is necessary to configure the algorithm that will compute the routes of the manipulator. For the tests, two algorithms, PRM and RRT, were chosen. Both algorithms were chosen because they are widely used in path planning, so they can be inserted into the dynamic collision avoidance system. In order to guarantee the consistency of the comparison between the two algorithms, the same path planning configuration was used in the tests of each algorithm. Only two parameters were changed, planning time and planning attempts, while the other parameters remained at the MoveIt! default configuration. The planning time was set to 10 s. This parameter indicates the time limit the system may take to find a path plan. The planning attempts parameter was set to 15. This parameter indicates to the system how many planning attempts should be made within the set time. In case the system does not find a path plan within the timeout, the system will not move the manipulator. The same manipulator pose configurations (start and goal poses) were used for both algorithms. The start and goal poses are indicated in Fig. 6.
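A sketch of how these parameters could be set through the MoveIt! Python interface is shown below; the planner identifiers and the group name depend on the MoveIt! configuration in use and are assumptions here.

import sys
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
group = moveit_commander.MoveGroupCommander("manipulator")  # group name is an assumption

# Select the OMPL planner under test; the exact planner ids come from the
# ompl_planning.yaml of the MoveIt! configuration (these are common defaults)
group.set_planner_id("RRTkConfigDefault")   # or "PRMkConfigDefault" for the PRM test
group.set_planning_time(10.0)               # 10 s limit, as in the experiments
group.set_num_planning_attempts(15)         # 15 attempts within that limit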

Fig. 6. The state where the UR5 manipulator is shown transparent is the start pose, while the state where the manipulator is shown in orange is the goal pose.

Therefore, during each test the algorithm must find a route solution within the time limit and create the states (or poses) needed to realize the trajectory. The state sequence that each algorithm is expected to find can be visualized in Fig. 7. The first test uses the PRM algorithm. Figure 8 shows the real scenario created and the transformation of this scenario into the perception of the software. It consists of: a point cloud transformed into an octree (blue and purple boxes), the planning trail (in gray), and the remaining points of the point cloud not considered for the simulation. To ensure that the first test does not interfere with the next one, all nodes were restarted. The second test, with the RRT algorithm, presented in


Fig. 7. Example of a sequence of states forming a trajectory.

Fig. 8. Path planning using the PRM algorithm.

Fig. 9, shows the same scenario and the trail that the planner chose to reach the goal pose. At the end of each path planning run performed by the algorithms through MoveIt!, the time and the number of states (or poses) used to find a route to the final pose were collected and analyzed in graph format, as shown in Fig. 10. Although both algorithms find a certain number of states to perform the path planning, it is not necessary to use all found states. The logs presented in graph format show that the PRM algorithm uses all the time that was configured to find a path plan. This results in a higher number of pose states. However, the RRT algorithm uses less time to find a path planning solution, resulting in a smaller number of pose states.


Fig. 9. Path planning using the RRT algorithm.

Fig. 10. Results of logs generated by the system.

6 Conclusion and Future Work

In the presented paper a collaborative manipulator and two path planning algorithms were stressed and compared, allowing the development of a system that helps and collaborates with humans, according to the new paradigms of Industry 4.0. The system uses ROS and an RGB-Depth sensor (Kinect) to acquire the environment, such as the positions of objects and humans. The implemented system allows re-planning the movement, avoiding collisions while guaranteeing the execution of the operations. The logs generated by the system show a difference between the analyzed algorithms. While RRT is agile in finding a solution to plan the path to the goal pose, PRM uses all the time it has been assigned to find a trajectory. Therefore the choice between these algorithms must be adequate to the objective of the project in which it is implemented. The simulation of a UR5 robot with an acquired point cloud validates the approach for both algorithms. In future work, it will be possible to optimize the point cloud so that the system has a faster response to the introduction of new objects in the working environment of the UR5 robot.


Acknowledgment. Project "TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER-000020" is financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF). This work is also financed by the ERDF - European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme within project POCI-01-0145-FEDER-006961, and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) as part of project UID/EEA/50014/2013.

References

1. Barraquand, J., Latombe, J.C.: Robot motion planning: a distributed representation approach. Int. J. Robot. Res. 10(6), 628–649 (1991)
2. Ralli, E., Hirzinger, G.: Fast path planning for robot manipulators using numerical potential fields in the configuration space, vol. 3, pp. 1922–1929 (1994)
3. Kavraki, L., Svestka, P., Latombe, J.C., Overmars, M.: Probabilistic roadmaps for path planning in high dimensional configuration spaces. IEEE Trans. Robot. Autom. 12(4), 566–580 (1996). ISSN: 1042-296X
4. Siméon, T., Laumond, J.P., Nissoux, C.: Visibility based probabilistic roadmaps for motion planning. Adv. Robot. 14(6), 477–494 (2000)
5. Wilmarth, S., Amato, N., Stiller, P.: MAPRM: a probabilistic roadmap planner with sampling on the medial axis of the free space. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1024–1031 (1999)
6. Bohlin, R., Kavraki, L.E.: Path planning using lazy PRM. In: Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, vol. 1, pp. 521–528 (2000)
7. Plaku, E., Bekris, K.E., Chen, B.Y., Ladd, A.M., Kavraki, L.E.: Sampling based roadmap of trees for parallel motion planning. IEEE Trans. Rob. 21(4), 597–608 (2005)
8. Helguera, C., Zeghloul, S.: A local-based method for manipulators path planning in heavy cluttered environments. In: Proceedings of IEEE International Conference on Robotics and Automation, San Francisco, pp. 3467–3472 (2000)
9. Blackmore, L., Williams, B.: Optimal manipulator path planning with obstacles using disjunctive programming. In: American Control Conference, Minneapolis (2006)
10. Lahouar, S., Zeghloul, S., Romdhane, L.: Real-time path planning for multi-DoF manipulators in dynamic environment. Int. J. Adv. Robot. Syst. 3(2) (2006)
11. Tavares, P., Lima, J., Costa, P., Moreira, A.P.: Multiple manipulators path planning using double A*. Ind. Robot: Int. J. 43(6), 657–664 (2016). https://doi.org/10.1108/IR-01-2016-0006
12. Staranowicz, A., Brown, G.R., Morbidi, F., Mariottini, G.L.: Easy-to-use and accurate calibration of RGB-D cameras from spheres. In: Klette, R., Rivera, M., Satoh, S. (eds.) Image and Video Technology, PSIVT 2013. LNCS, vol. 8333. Springer, Heidelberg (2014)
13. Chitta, S.: MoveIt!: an introduction. In: Koubaa, A. (ed.) Robot Operating System (ROS). SCI, vol. 625. Springer, Cham (2016)
14. Darwish, W., Tang, S., Wenbin, L., Chen, W.: A new calibration method for commercial RGB-D sensors. Sensors 17, 1204 (2017). https://doi.org/10.3390/s17061204
15. Basso, F., Pretto, A., Menegatti, E.: Unsupervised intrinsic and extrinsic calibration of a camera-depth sensor couple. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA) (2014)
16. Martinez, A., Fernández, E.: Learning ROS for Robotics Programming. Packt Publishing, Birmingham (2013)
17. Joseph, L.: Mastering ROS for Robotics Programming. Packt Publishing, Birmingham (2015)
18. The Robot Operating System Wiki. http://wiki.ros.org/ROS/Tutorials/UnderstandingTopics
19. ROS Industrial Training Exercises (Kinetic version). https://github.com/ros-industrial/industrial_training/wiki
20. Sensors supported by ROS. http://wiki.ros.org/Sensors
21. Sucan, I.A., Chitta, S.: MoveIt. http://moveit.ros.org
22. Sucan, I.A., Chitta, S.: MoveIt. http://picknik.io/moveit_wiki/index.php?title=High-level_Overview_Diagram
23. Sucan, I.A., Moll, M., Kavraki, L.E.: The Open Motion Planning Library. IEEE Robot. Autom. Mag. (2012). http://ompl.kavrakilab.org

Development of a Dynamic Path for a Toxic Substances Mapping Mobile Robot in Industry Environment

Luis Piardi1,2(B), José Lima2,4, Paulo Costa3,4, and Thadeu Brito1,5

1 Federal University of Technology - Paraná, Toledo, Brazil
luis [email protected], thadeu [email protected]
2 Polytechnic Institute of Bragança, Bragança, Portugal
[email protected]
3 Faculty of Engineering of University of Porto, Porto, Portugal
[email protected]
4 INESC-TEC, Centre for Robotics in Industry and Intelligent Systems, Porto, Portugal
5 Federal University of Technology - Paraná, Campo Mourão, Brazil

Abstract. Some industries have critical areas (dangerous or hazardous) where the presence of a human must be reduced or avoided. In some cases, there are areas where humans should be replaced by robots. The present work uses a robot with differential drive to scan an environment with known and unknown obstacles, defined in 3D simulation. It is important that the robot be able to make the right decisions about its way without the need of an operator. A solution to this challenge will be presented in this paper. The control application and its communication module with a simulator or a real robot are proposed. The robot can perform the scan, passing through all the waypoints arranged in a grid. The results are presented, showcasing the robot’s capacity to perform a viable trajectory without human intervention.

Keywords: Simulation · Mobile robotics · Path planning

1 Introduction

The new paradigms of industry demand collaboration between robots and humans. It is desired that both can help each other and also collaborate. The impact on Industry 4.0 and Cyber-Physical Systems is a new technical systems paradigm based on collaboration [1] and will contribute to several areas, such as new value chain models in industry, human security, efficiency, comfort and health, among others. Some industries have critical areas where the presence of a human must be reduced or avoided. In these cases, there are dangerous or hazardous areas where humans should be replaced by robots.


In the present case, the developed robot should be able to map toxic substances in an area (replacing humans) and, outside that area, the robot should share the space with humans. This dynamic environment, with moving obstacles, must be handled by the robot. This paper presents an approach that can be used to scan a dangerous area in an industry. It is composed of a planning method that routes the path avoiding the known obstacles and also re-plans, on the fly, avoiding previously unknown obstacles that are detected by Time of Flight sensors during the scanning process. A simulation environment that allows testing and validating the proposed approach and algorithms is also presented. Later, these algorithms will be implemented in a real robot and the toxic mapping task will be done in a real environment. Due to confidentiality issues, the company name, photos and toxic elements are omitted. Nevertheless, the layout of the mapping area of the simulation environment is presented in Fig. 1.

Fig. 1. Layout of the scanning area in simulation environment.

The remainder of the paper is organized as follows: after a brief introduction in this section, the state of the art of path planning is addressed in the next section. Section 3 presents the system architecture, where the communication between the different modules and the robot model are stressed. The connectivity grid, the path planning and the robot control are presented in Sect. 4, whereas Sect. 5 shows the results. Finally, Sect. 6 rounds up the paper with conclusions and points out some future work directions.

2 Related Work

This section cites references that use techniques related to the planning of dynamic paths for robots, highlighting the resolution of the travelling salesman problem. In


addition to the path planning problem solved by the robot, a model of the trajectory that connects the waypoints is also proposed. These waypoints are followed by the robot using curves parametrized by polynomials, with splines for the trajectory.

2.1 Planning

The classic travelling salesman problem (TSP) regards a group of n cities, where the purpose is to start the route in a defined city, visit the other cities only once, and then return to the first city [2]. Considering the possibility of the existence of several cities, the TSP becomes complex, with (n − 1)! possible routes to be calculated. The work of Pereira et al. [3] presents a problem with characteristics similar to the TSP, where they seek to determine the most efficient way to collect golf balls scattered through an open field using a robot with differential drive. Similar to the TSP is the scanning of a room with toxic substances, as addressed in the present work. Using a connectivity grid with a standard spacing between points, or distributed points configurable by the user, the robot needs to find an optimal path between the starting and the destination point, crossing all points of the grid while avoiding repeated points. Moreover, obstacle avoidance should also be performed. For robot navigation and scanning, Hirakawa et al. [4] present research using the Adaptive Automaton theory to model, identify and classify the robot's decisions. Our work presents a similar approach, where we seek to find the next movement of the robot in the exploration process, with four movement options: north, south, east, west. It is possible to modify the decision due to the existence of unmapped objects, which are detected based on a Time of Flight (ToF) sensor. There is a great deal of effort by the academic community in researching path planning, using different approaches such as Rapidly-exploring Random Trees [5], Road Maps [6], Cell Decomposition [7], Potential Fields [8], and others.

2.2 Trajectory

Many works with mobile robotics use smooth curves because they are performed by robots with non-holonomic constraints. Bézier curves are an example. As presented in the works [6,9,10], robots can perform Bézier curves autonomously. However, this approach, despite implementing a smooth curve, does not require the robot to pass over all the points. It is only an approximation to the points, which is not interesting for the mapping of toxic substances inside a room or industry. Magid et al. [8] use a potential field for path planning. For building a smooth curve, they use splines in order to pass through all points and avoid obstacles [8]. In our approach, we will use an improved spline approach so that the curve performed by the robot is smooth and also precise at the waypoints.

3 System Architecture

3.1 Robot

For the present work a model of a mobile robot with a differential steering system was used. It consists of two drive wheels mounted on a common axis, and each wheel can independently be driven either forward or backward. In this way, it is possible to control the speed and rotation of the robot. The real robot was also developed, as presented on the right side of Fig. 2, whereas the left side shows the robot developed in a realistic simulation system, SimTwo. Further explanations about the implementation and use of this software can be found in [11,12]. The real robot will be used in future work for the toxic mapping, after the validation through simulation of the approach presented in this work.

Fig. 2. Model of simulated robot at left and the real robot at right.

3.2 Communication Between the Simulator and the Application

For this project, two different environments were used. The first one (ControlApp) is an application, developed in Lazarus, that interfaces with the user and controls the robot based on its position (x, y, θ), provided by the second one, SimTwo, a 3D simulation environment. The ControlApp has an interface for the user to interact with the robot. It provides the operator with the speed, position and orientation of the robot in real time. Moreover, ControlApp shows an image that represents the environment of the robot. The interface is shown in Fig. 3. The operator can control the robot through two methods. In the first one, the user can insert the waypoints by clicking with the mouse on the image at the desired position. The second method computes a grid of waypoints by selecting dx and dy as the distance between waypoints in the x and y directions. Then, a grid of possible waypoints is automatically inserted according to the values input by the user. The ControlApp is responsible for all processing that involves the robot, such as the calculations for the spline trajectory and planning, the calculations for the control of the wheel speeds and the recalculation of a new route when an unmapped obstacle is detected.

Dynamic Path Mapping Robot for Industry Environment

659

Fig. 3. Interface of Lazarus application, ControlApp.

In the simulation environment, the physical structure of the robot is defined: width, weight, length, thickness, size and position of the wheels and sensors. The motor and sensor simulation models are addressed in previous works [13–15]. Figure 4 represents the SimTwo simulation environment for the developed software interface shown in Fig. 3.

Fig. 4. SimTwo simulation 3D environment.

The communication between both applications is based on a UDP/IP network protocol. A packet encoding the robot position, orientation and distance sensor data is sent from SimTwo to the ControlApp, whereas a packet containing the right and left wheel speeds is sent from ControlApp to SimTwo, as presented in Fig. 5.
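A minimal sketch of such a UDP exchange is given below; the ports and the packet layout are assumptions, since the actual SimTwo/ControlApp packet format is not detailed in the paper.

import socket
import struct

# Hypothetical ports and packet layout, used only to illustrate the exchange above.
SIM_ADDR = ("127.0.0.1", 9808)   # SimTwo listening port (assumed)
APP_PORT = 9809                  # ControlApp listening port (assumed)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", APP_PORT))

def send_wheel_speeds(v_left, v_right):
    # two little-endian doubles: left and right wheel speeds
    sock.sendto(struct.pack("<2d", v_left, v_right), SIM_ADDR)

def receive_state():
    data, _ = sock.recvfrom(1024)
    # x, y, theta and one ToF distance reading (assumed layout)
    x, y, theta, tof = struct.unpack("<4d", data[:32])
    return x, y, theta, tof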

660

L. Piardi et al.

Fig. 5. Communication between the user, ControlApp and SimTwo.

4 Application Development

With the intention of replacing human presence by robots in high-risk environments (dangerous industrial environments), it is necessary that such robots are autonomous. Robots must have the ability to detect unmapped obstacles and make efficient decisions without human help. On the other hand, the Human Machine Interface (HMI) is necessary to perform some configurations, such as waypoints, grid accuracy, speeds and visualization.

4.1 Connectivity Grid

The connectivity grid is an approach used for the robot to scan all desired areas of a room or environment, considering the distance between the waypoints defined by the user. This grid establishes all the possible trajectories for the robot. Figure 6a shows a connectivity grid for an environment with 3 meters of length and width and waypoints with a spacing of 0.5 meters (dx, dy). The robot is constrained to perform only horizontal or vertical trajectories in the north, south, east and west directions (no diagonal trajectories). Initially, the existence of mapped objects is verified. If a mapped object is found, the waypoint that intercepts this object is excluded from the connectivity grid and, as a result


Fig. 6. (a) Connectivity Grid. (b) Connectivity grid with the presence of obstacle.


all trajectories that provide access to this point are excluded. Figure 6b illustrates the effect of an obstacle on the connectivity grid. Based on these conditions, the path for the robot to reach the end point while crossing all available waypoints is projected. During the course, the robot checks for unmapped obstacles through a distance sensor (in practice, a ToF sensor will be used). When an unmapped obstacle closer than 0.15 meters is detected, the robot returns to the last waypoint (avoiding a collision with the unmapped obstacle) while it plans a new route, thus eliminating the point where the obstacle is. With this algorithm the robot will go through all the waypoints until reaching the end point, while avoiding obstacles.
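A minimal sketch of this connectivity grid, assuming a square room and 4-connectivity, is given below; removing a blocked waypoint implicitly removes every trajectory that reaches it. Sizes and spacing are illustrative.

def build_grid(width, height, step):
    nodes = {(i, j) for i in range(int(width / step) + 1)
                    for j in range(int(height / step) + 1)}

    def neighbours(n):
        i, j = n
        cand = [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]  # N, S, E, W only
        return [c for c in cand if c in nodes]

    return nodes, neighbours

def remove_waypoint(nodes, blocked):
    # excluding a waypoint removes all trajectories to it, because
    # neighbours() only returns nodes still present in the set
    nodes.discard(blocked)

nodes, neighbours = build_grid(3.0, 3.0, 0.5)
remove_waypoint(nodes, (2, 2))   # e.g. a cell occupied by a mapped obstacle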

4.2 Robot Controller

The robot controller uses the kinematic model of the robot. It receives the position (x, y, θ) from the simulator and then controls the speed of the right and left wheels (Vr and Vl). In the simulation environment, SimTwo is responsible for the dynamics present in the system, since it uses the ODE (Open Dynamics Engine) library for simulating rigid body dynamics [12]. Figure 7 shows an outline of the robot and represents the characteristics of its movement along a room considering the speed of the robot's wheels.

Fig. 7. Considerations for calculating the linear velocity of the robot wheels.

Since V is the linear velocity, w is the angular velocity of the robot (δθ/δt) and b is the distance between the wheels of the robot, we obtain Eq. 1.

V = (Vl + Vr) / 2 ;  w = (Vr − Vl) / b    (1)

For the orientation of the robot towards the next point of its trajectory, the value of w must follow a reference (wref) given by a proportionality constant (Kp) and the error between the robot orientation and the trajectory. The velocity V must be constant (Vref), as shown in Eq. 2 and complemented by Fig. 7. The coordinate (x, y) indicates the current robot position and (x1, y1) represents the coordinates of the next robot trajectory point.

Vref = K1 ;  wref = Kp · error    (2)

Finally, the velocity of the wheels can be calculated by Eq. 3.

Vl = Vref − wref · b/2 ;  Vr = Vref + wref · b/2    (3)
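A small Python sketch of this wheel-speed controller, implementing Eqs. 1–3, is given below; the reference speed, proportional gain and wheel distance are illustrative values, not the ones used in the paper.

import math

def wheel_speeds(x, y, theta, x1, y1, v_ref=0.2, k_p=1.5, b=0.2):
    """Compute left/right wheel speeds towards the next waypoint (x1, y1).

    v_ref, k_p and the wheel distance b are placeholder values. The controller
    keeps a constant linear velocity and sets the angular velocity proportional
    to the heading error, then maps both onto the wheels (Eq. 3).
    """
    desired = math.atan2(y1 - y, x1 - x)
    # wrap the heading error to [-pi, pi]
    error = math.atan2(math.sin(desired - theta), math.cos(desired - theta))

    w_ref = k_p * error
    v_l = v_ref - w_ref * b / 2.0
    v_r = v_ref + w_ref * b / 2.0
    return v_l, v_r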

4.3 Trajectories Defined by Splines

Splines in two-dimensional space were used in this work to represent the trajectories performed by the robot. A trajectory is the path that the robot follows between two distinct waypoints. The total path that the robot will perform, from the starting point until reaching the end point, is divided into several trajectories, which depend on the layout of the mapped and unmapped obstacles. For n waypoints there are n − 1 trajectories. The implemented algorithm calculates a cubic spline for the mathematical representation of the trajectories (the calculations and mathematical proofs can be found in Chap. 18, Sect. 6 of the book [16]), which are feasible for the robot. Its purpose is to smooth the trajectory that connects the waypoints through a third degree function. To ensure a smooth curve, the angle of entry into a waypoint should be the same as the angle of exit. The implemented algorithm determines this angle as a function of the slope of the straight line defined by the waypoints that precede and succeed the point in question.

Fig. 8. Trajectory defined by Spline. The green curves indicate the trajectories to be followed by the robot. The black lines indicate the angle of entry and exit of the waypoints.

Figure 8a represents the calculation for the entry and exit angle of point 1, whereas Fig. 8b represents the estimate for waypoint 2. Equation 4 describes the calculation to determine the angle.

θi = atan2(yi+1 − yi−1, xi+1 − xi−1)    (4)
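One simple way to realize this idea is sketched below: the waypoint headings of Eq. 4 are used as tangent directions of a cubic (Hermite) segment between consecutive waypoints. This is an illustrative formulation under that assumption, not necessarily the exact cubic-spline construction of [16].

import math

def waypoint_headings(pts):
    """Entry/exit angle at each waypoint (Eq. 4), using the neighbouring points."""
    angles = []
    for i in range(len(pts)):
        prev_pt = pts[max(i - 1, 0)]
        next_pt = pts[min(i + 1, len(pts) - 1)]
        angles.append(math.atan2(next_pt[1] - prev_pt[1], next_pt[0] - prev_pt[0]))
    return angles

def hermite_segment(p0, p1, a0, a1, steps=20):
    """Cubic segment between two waypoints whose end tangents follow headings a0, a1."""
    d = math.hypot(p1[0] - p0[0], p1[1] - p0[1])
    m0 = (d * math.cos(a0), d * math.sin(a0))
    m1 = (d * math.cos(a1), d * math.sin(a1))
    curve = []
    for k in range(steps + 1):
        t = k / steps
        h00 = 2 * t**3 - 3 * t**2 + 1
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        x = h00 * p0[0] + h10 * m0[0] + h01 * p1[0] + h11 * m1[0]
        y = h00 * p0[1] + h10 * m0[1] + h01 * p1[1] + h11 * m1[1]
        curve.append((x, y))
    return curve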

4.4 Path Planning Considering the Grid of Connectivity

Considering the connectivity grid and the waypoints with their respective possible trajectories (Fig. 6a), an algorithm was developed for the path planning of the robot, knowing the start and end points. There are eight priorities for choosing a path for the robot (Fig. 9). The path planning is calculated for all priorities, one at a time. The priority that yields the smallest number of iterations, and consequently the shortest trajectory, is selected to be executed by the robot. The algorithm checks which waypoint adjacent to the current waypoint has been least visited by the robot. If only one waypoint was least visited, the robot will move to it. When the smallest number of visits is found in more than one adjacent waypoint, the choice will be the one of these least visited waypoints that matches the current priority of the robot's trajectory (Fig. 9). This algorithm is used for all eight priorities. Switching from one priority to the next occurs when the algorithm finds the final waypoint and all other waypoints have been visited at least once. When the eight priorities are finalized, the one with the shortest path is selected. If an obstacle is detected, the waypoint that intercepts the obstacle is deleted, as well as the corresponding trajectories, as shown in Fig. 6b. Then, the algorithm calculates a new route in the same way as described above. However, it takes into consideration the points already visited as well as the waypoints that cannot be visited due to the existence of obstacles.

Fig. 9. Priority of robot trajectories.
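A sketch of the greedy step described above is given below: move to the least visited neighbour, breaking ties with the current priority order. The concrete orderings of Fig. 9 are not reproduced in the text, so the one used here is illustrative.

PRIORITY = ["north", "east", "south", "west"]   # one hypothetical ordering out of the eight
MOVES = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def next_waypoint(current, nodes, visits, priority=PRIORITY):
    # candidate neighbours, kept in priority order
    candidates = []
    for direction in priority:
        di, dj = MOVES[direction]
        nxt = (current[0] + di, current[1] + dj)
        if nxt in nodes:
            candidates.append(nxt)
    if not candidates:
        return None
    least = min(visits.get(c, 0) for c in candidates)
    # candidates are already ordered by priority, so the first least-visited one wins the tie
    for c in candidates:
        if visits.get(c, 0) == least:
            return c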

5 Results

In order to validate the developed application, some tests were carried out using the SimTwo simulator and the ControlApp.

5.1 Result of Trajectory Spline

The first test was performed to verify the quality of the trajectories described by the spline algorithm. It consists of inserting the robot in an environment simulated by SimTwo whose dimensions are three meters in width


Fig. 10. Result of the spline trajectory algorithm.

by two meters in height. In ControlApp, the user inserts the waypoints that define the trajectories, as observed in Fig. 10. The results obtained from the simulator with the algorithm that calculates the spline trajectories show that it reaches the objective of constructing a discrete path, segmented by continuous parametric curves, constituting a smooth route feasible for a robot with differential drive.

5.2 Result of the Planning Algorithm Applying a Grid of Connectivity

With the objective of analyzing the path planning on a grid of connectivity with obstacles, three tests were made in a simulation environment of 3 meters width by 3 meters height, with a spacing dx and dy of the connectivity grid of 0.35 meters for all the tests performed below. Mapped obstacles are highlighted in red in the SimTwo simulation and in ControlApp, and unmapped obstacles are highlighted in purple in the SimTwo simulation and in ControlApp.

Result of the Path Planning in Environments Without Obstacles: The algorithm for the path planning is executed only once, because there are no obstacles in the environment, and gives an outstanding result. The robot (starting at the bottom left corner) explored the entire room, passing through all the waypoints present in the first seven columns of the connectivity grid only once. In the eighth column the robot passed twice through each waypoint to reach the end point (top right corner), as Fig. 11 shows. Given the proximity of the waypoints, this case does not have the concavities of the curve as well defined as in the example presented in Fig. 10.

Result of the Path Planning in Environments with Mapped Obstacles: Figure 12 presents mapped obstacles simulating a room with only a single path for entry or exit. The waypoints that are in the regions of the mapped obstacles are excluded from the connectivity grid before executing the path planning


Fig. 11. Result of planning algorithm without obstacle.

Fig. 12. Result of planning algorithm with mapped obstacles.

algorithm, i.e., only waypoints free of obstacles are computed for path planning. Consequently the algorithm is executed once. The results for this path planning validate the approach, since, to complete the planning, the robot passes a maximum of two times through each waypoint.

Result of the Path Planning in Environments with Mapped and Unmapped Obstacles: In the case of environments with the presence of unmapped obstacles (Fig. 13), the path planning algorithm is executed as many times as the robot identifies an unknown obstacle, recalculating a new path to reach the end point. Therefore, the arrangement of the unmapped obstacles results in a change in the route of the robot and also in the number of times that each waypoint of the connectivity grid is visited. Comparing Figs. 12 and 13, it can be observed that the robot's path in the presence of unmapped obstacles is considerably more complex; nevertheless, the robot performs its test successfully and safely, without colliding with obstacles.


Fig. 13. Result of planning algorithm with mapped and unmapped obstacle.

6 Conclusion and Future Work

This paper presents a system for exploring and scanning a dangerous area in an industry, where the presence of humans should be reduced. The planning method that routes the path avoiding the known and unknown obstacles, which are detected by ToF sensors during the mapping process, is implemented in a simulation environment. The results are convincing, since the robot maps the selected waypoints, introduced by the user or by the grid. As future work, the replacement of SimTwo by the real robot (already developed) will be done. Since the communication protocol between ControlApp and SimTwo will also be adopted by the real robot, this task is simplified.

Acknowledgment. Project "TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER-000020" is financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF). This work is also financed by the ERDF - European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme within project POCI-01-0145-FEDER-006961, and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) as part of project UID/EEA/50014/2013.

References

1. Mosterman, P.J., Zander, J.: Industry 4.0 as a cyber-physical system study. Softw. Syst. Model. 15(1), 17–29 (2016)
2. Miller, C.E., Tucker, A.W., Zemlin, R.A.: Integer programming formulation of traveling salesman problems. J. ACM 7(4), 326–329 (1960)
3. Pereira, N., Ribeiro, F., Lopes, G., Whitney, D., Lino, J.: Autonomous golf ball picking robot design and development. Ind. Robot Int. J. 39(6), 541–550 (2012)
4. Hirakawa, A.R., Saraiva, A.M., Cugnasca, C.E.: Autômatos Adaptativos Aplicados em Automação e Robótica (A4R). IEEE Lat. Am. Trans. 5(7), 539–543 (2007)
5. Vaz, D.A.B.O.: Planejamento de movimento cinemático-dinâmico para robôs móveis com rodas deslizantes. Universidade de São Paulo (2011)
6. Yang, X., Zeng, Z., Xiao, J., Zheng, Z.: Trajectory planning for RoboCup MSL mobile robots based on Bézier curve and Voronoi diagram. In: IEEE International Conference on Information and Automation, pp. 2552–2557 (2015)
7. Kloetzer, M., Mahuela, C., Gonzalez, R.: Optimizing cell decomposition path planning for mobile robots using different metrics. In: 19th International Conference on System Theory, Control and Computing (ICSTCC), pp. 565–570 (2015)
8. Magid, E., Keren, D., Rivlin, E., Yavneh, I.: Spline-based robot navigation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2296–2301 (2006)
9. Hwang, J.H., Arkin, R.C., Kwon, D.S.: Mobile robots at your fingertip: Bezier curve on-line trajectory generation for supervisory control. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1444–1449 (2003)
10. Simba, K.R., Uchiyama, N., Sano, S.: Real-time trajectory generation for mobile robots in a corridor-like space using Bezier curves. In: IEEE/SICE International Symposium on System Integration, pp. 37–41 (2013)
11. Costa, P., Gonçalves, J., Lima, J.: SimTwo realistic simulator: a tool for the development and validation of robot software. Int. J. Theor. Appl. Math. Comput. Sci. 17–33 (2011)
12. Nascimento, T.P., Moreira, A.P., Costa, P., Costa, P., Conceição, A.G.S.: Modeling omnidirectional mobile robots: an approach using SimTwo. In: 10th Portuguese Conference on Automatic Control, Funchal, pp. 117–223 (2012)
13. Lima, J., Gonçalves, J., Costa, P., Moreira, A.: Modeling and simulation of a laser scanner sensor: an industrial application case study. In: Azevedo, A. (ed.) Advances in Sustainable and Competitive Manufacturing Systems. Lecture Notes in Mechanical Engineering, pp. 697–705. Springer, Heidelberg (2013)
14. Lima, J., Gonçalves, J., Costa, P.: Modeling of a low cost laser scanner sensor. In: Moreira, A., Matos, A., Veiga, G. (eds.) CONTROLO'2014 - Proceedings of the 11th Portuguese Conference on Automatic Control, vol. 321, pp. 697–705. Springer, Cham (2015)
15. Gonçalves, J., Lima, J., Costa, P., Moreira, A.: Modeling and simulation of the EMG30 geared motor with encoder resorting to SimTwo: the official Robot@Factory simulator. In: Azevedo, A. (ed.) Advances in Sustainable and Competitive Manufacturing Systems, pp. 307–314. Springer, Heidelberg (2013)
16. Chapra, S.C., Canale, R.P.: Numerical Methods for Engineers, 6th edn. The McGraw-Hill Companies, New York (2010)

Poses Optimisation Methodology for High Redundancy Robotic Systems

Pedro Tavares1,3(B), Pedro Costa1,2, Germano Veiga1,2, and António Paulo Moreira1,2

1 FEUP - Faculty of Engineering of University of Porto, Porto, Portugal
[email protected]
2 INESC TEC - INESC Technology and Science (formerly INESC Porto), Porto, Portugal
3 SARKKIS Robotics, Porto, Portugal

Abstract. The need for efficient automation methods has prompted fast development in the field of Robotics. However, most robotic solutions found in industrial environments lack both the flexibility and the adaptability to be applied to any generic task. A particular problem arises when robots are integrated in work cells with extra degrees of freedom, such as external axes or positioners. The specification/design of high redundancy systems, including robot selection and tool and fixture design, is a multi-variable problem with a strong influence on the final performance of the work cell. This work builds on top of optimisation techniques to deal with optimal pose reachability for high redundancy robotic systems. In this paper, a poses optimisation approach applicable to high redundancy robotic systems is proposed. The proposed methodology was validated using existent real-environment infrastructures, namely those of the national CoopWeld project.

Keywords: Optimisation · Poses determination · Cost functions · Metaheuristics · High redundancy robotic systems

1 Introduction

The expansion of the robotics field leads to the desire of developing highly autonomous and efficient methodologies to address the prominent problems of not only research and development but also the needs pointed out by industrial corporations. One of those problems in robotic systems is task management, which involves the integration of motion planning in the work cell context, including technological process limitations, communication with external devices and automatic work cell calibration, among others. Currently, there is no optimal tool to create an action sequence to complete a given task (this is currently handled by human experience). However, some studies point to the usage of optimisation functions to address this problem [1,3].


Recently, a wide range of applications have been developed towards the robotic implementation of harsh and/or repetitive tasks, such as welding, cutting and transportation, commonly found in the construction industries. These tasks are fairly complex as they need to cope with several parameters. Robotic systems have to be autonomously capable of performing any coded task in a precise and efficient way. Thus, when developing systems that are going to be deployed in dangerous situations, the architecture that embraces the robotic system needs to consider robust elements. From the presented tasks, cutting and transportation seem to have a clear idea of where to improve and how to define a better and cleaner strategy, while welding still needs some re-visitation of sensing, planning and adequate optimisation for any required task. The specification of an optimal design methodology for high redundancy systems, including robot selection and tool and fixture design, is a multi-variable problem with a strong influence on the final performance of the work cell. An active example of this problem is related to the selection of flexible elements (such as the torch cables for welding operations) and their constraints. In 2015, Graetz and Michaels, in their work entitled "Robots at work", analysed for the first time the economic impact of industrial robots, pointing out that robots significantly added value to industries [9]. Thus, this optimisation research topic applied to industrial work cells seems to be adequate to the current state of industrial development. Throughout this project, a novel design methodology capable of defining the correct work cell for a given operation in robotic applications is being developed in order to achieve industrial requirements and potentiate production goals. Furthermore, some considerations on work cell motion and component repositioning will assure an efficient functioning of the system. The current paper is structured in five main sections. Section 2 aims to provide an overview of the current state of optimisation technique usage in research or industrial processes. Following, Sect. 3, System Architecture and Heuristic-Based Approach, presents the defined optimisation structure, as well as the detailed model description of a generic work cell submitted to the selected algorithms. Then, Sect. 4, Optimisation Methodologies, describes a set of algorithms to be applied in order to accurately find an optimal solution. Section 5, Validation, provides preliminary results of the proposed optimisation approach and a comparison between the selected algorithms. Section 6, Discussion and Future Perspectives, summarises the contribution of this project to the scientific community and its intended iterations.

2 Related Work

Robotic applications have become key to ensuring process efficiency in a wide range of scientific fields. Several studies have suggested that there can be considerable gains in terms of profitability and reduced operation times in fields ranging from medical surgery to manufacturing procedures [5,8,12].


Currently, optimisation methodologies have gained interest within the research and industrial communities due to their success in solving multi-variable problems. Several algorithms have been defined and implemented, showing great adaptability to complex tasks [17]. The main trend of such optimisation methodologies is related to heuristic-based algorithms. By defining detailed goals and constraints, such algorithms can iteratively search for an ideal solution, and this has been applied to several applications [7,15]. Another key topic connected to optimisation and focused on robotics is related to work cell design, which comprises, among others, robot selection and fixture design. Cheng mentioned the advantages of simulation tools for developing a robotic work cell [4]. Furthermore, there have been authors claiming to have found methodologies powerful enough to handle machining [2] and welding challenges [10,11]. Still, despite its importance, existent robotic systems are focused on solving separate necessities, while there is no optimal tool to properly design a generic robotic work cell. The nearest solution to an optimal design methodology was presented by Kamoun et al., with an approach concerning the display of equipment over a given area [13]. Recently, Pellegrinelli et al. proposed an extended formalisation of design and motion planning problems for spot-welding multi-robot cells [16]. However, both solutions only considered previously selected equipment or reduced features and did not include the optimal selection of such equipment. Despite the tremendous utility that optimisation methodologies have shown when applied to other fields, the robotic industrial world still lacks a flexible and autonomous solution regarding the complete optimisation of the work cell and its properties.

3 System Architecture and Heuristic-Based Approach

The optimisation approach presented is directly linked to a user interface software. This software follows the MVVM (Model-View-View Model) paradigm, a three-layer software architecture. Each layer has its own importance. The Model is the centralized database of all relevant content; here we can detail system components and their relations. The View Model is responsible for managing the communication between the raw data in the Model and the user interface, which is referenced as the View. Thus, in order to use the proposed optimisation approach, each user has to follow a script of inserting work cell components, defining the relations between them and then generating a robotic kinematic chain that will be stored as the system Model. The View Model of the proposed approach is related to the optimisation methodology. The raw data inserted in the Model is de-serialized and the kinematic chain's components are classified as fixed parts, conveyors, external axes, robots or tools. Then a world map is created according to the found components and converted to a visual interface, the View.

Once the system is loaded, the optimisation algorithm is fully defined. It can be described as a set of steps (see Algorithm 1).

Algorithm 1. Optimisation Procedure
Inputs: Model Data, Optimisation Algorithm;
Outputs: Optimised Pose;
Map = GenerateMap(Model Data);
Job = LoadJob();
Points = IdentifyPoints(Job);
OptimisedPoses = CreateStructure(Optimisation Algorithm);
RunOptimisationAlgorithm(Points, Optimisation Algorithm);

The final part of the optimisation procedure differs according to the chosen algorithm. An insight into which algorithms have been selected and validated follows in Sect. 4. However, all of the selected algorithms follow an ideology of heuristic-based solutions and share a common goal: minimisation of effort and maximisation of the viability of present and subsequent poses. In that regard, a cost function was developed based on six features:

1. External Axis Motion: while performing a task, external axis moves should be minimised, as they may introduce instability in the robotic system.
2. Singularities: robots' behaviour becomes unstable near singularities, thus they should be avoided.
3. Configuration Change: robots should, whenever possible, keep their original configuration to avoid sudden uncontrolled movements.
4. Joints' Effort: minimisation of system effort smooths movements and protects components.
5. Reachability: the distance between the robot's base and the goal position should be minimised to increase the reaching probability for future poses.
6. Joints' Limits: similar to the previous criterion, in order to increase the reaching probability of future poses, an ideal pose should maximise the margin between each joint position and its limits.

Thus, the cost function that ultimately dictates the viability of a candidate pose results in a weighted sum and can be described using Eq. 1.

COST = w1 * ExternalMove + w2 * Singularities + w3 * ConfigurationChange + w4 * JointsAmplitude + w5 * Reachability + w6 * JointsLimits    (1)

Although some of the variables in the equation are not continuous (ExternalMove and ConfigurationChange), the cost function is well defined over all of its domain. That is accomplished by processing each feature separately, as described below.

External Axis Motion is considered a boolean state, indicating whether the generated solution requires external axis moves. In that case, the value ExternalMove is set to 1, otherwise it is 0. The associated weight w1 is fairly high, since this is one of the features we most want to avoid.

Singularities are determined and analysed using the robot Jacobian, whose determinant is an indicator of singularities: when it is close to 0, the robot is approaching a singular state. Thus, the inverse of that value (higher when closer to a singular state) is used as the Singularities parameter.

Each robot manufacturer has a configuration definition for a given joint state, generically linked with the wrist-shoulder-elbow configurations. Ideally, when finding a new solution for a specified position, robots should avoid changing configuration, as this prevents uncontrolled movements. Once again, ConfigurationChange is a binary value: 0 when a configuration change is not required, 1 otherwise.

Regarding the joints' effort, the weight associated to each joint is not uniform, since some joints have higher implications than others. Considering, as an example, an anthropomorphic robot, the amplitude of the first three joints is more relevant than that of the wrist joints. Therefore, the parameter JointsAmplitude is obtained from a weighted sum with decreasing weights for each joint, from base to end.

The Reachability value is calculated based on the robot's full length and is defined as the quotient between the base-to-goal distance and the full length value. Finally, the joints' limits follow the same pattern as the previous parameter, as the value results from a quotient between the estimated joint value and its limits. However, another layer is added here, as the weight is different for each joint, for reasons similar to those presented for the joints' effort parameter.

One final comment is related to the usage of the cost function when there is no available solution. In those cases, the candidate hypothesis is discarded without even being weighted. By combining all values it is possible to classify any given pose of the system and, thus, to find the optimal solution following one of the methodologies presented next.
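
As a concrete illustration of how such a cost could be evaluated, a minimal Python sketch is given below. The numerical weights, the per-joint weighting and the numeric safeguards are assumptions; only the six features and their general behaviour follow the description above.

import numpy as np

# hypothetical weights; the text only states that w1 (external axis motion)
# should be fairly high because that feature is the one to avoid the most
W = np.array([10.0, 2.0, 5.0, 1.0, 1.0, 1.0])  # w1 .. w6

def pose_cost(moves_external_axis, jacobian, changes_configuration,
              joint_deltas, joint_positions, joint_limits,
              base_to_goal_distance, robot_full_length):
    """Weighted sum of Eq. 1 for one candidate pose (lower is better)."""
    n = len(joint_positions)
    joint_weights = np.linspace(1.0, 0.2, n)     # decreasing weight from base to wrist
    det = abs(np.linalg.det(jacobian))           # small determinant -> near a singularity
    features = np.array([
        1.0 if moves_external_axis else 0.0,                       # ExternalMove
        1.0 / max(det, 1e-9),                                      # Singularities
        1.0 if changes_configuration else 0.0,                     # ConfigurationChange
        joint_weights @ np.abs(joint_deltas),                      # JointsAmplitude
        base_to_goal_distance / robot_full_length,                 # Reachability
        joint_weights @ (np.abs(joint_positions) / joint_limits),  # JointsLimits
    ])
    return float(W @ features)

# toy example for a 6-DoF arm in a comfortable, singularity-free posture
cost = pose_cost(moves_external_axis=False, jacobian=np.eye(6),
                 changes_configuration=False, joint_deltas=np.zeros(6),
                 joint_positions=np.zeros(6), joint_limits=np.full(6, np.pi),
                 base_to_goal_distance=0.6, robot_full_length=1.2)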

4 Optimisation Methodologies

To face the proposed challenge, four main optimisation techniques were selected: Linear Scanning of the available poses, Genetic Algorithms, Simulated Annealing and Potential Fields. Each of the proposed methods returns a heap storing the best outcomes of the cost function. Considering that each solution results in an array of joint values throughout the kinematic chain, a dynamic structure is built upon the map generation, and every heap element follows the parameters defined within that same structure. The reason behind using a multi-solution heap is related to the continuous path of the robotic system following the array of solutions: even if a random position is validated and optimised, along the path between poses there might be an extra constraint, such as obstacles or speed effort. Thus, in those cases, the initial best solution has to be discarded and a new one is searched for within the heap. Each method can also be divided into two main phases: creation of hypotheses and validation. Since the idea is to find the optimal robotic system pose for a pre-defined position, the hypotheses initially focus on the external axis values; a path planner then validates them and determines the robot positioning for those external axis values. The implemented methods are synthesised in the following subsections.

4.1 Linear Scanning

The standard and easiest way to search for an optimal solution is to go linearly through all hypotheses while saving the best one. Since the final output is expected to be a heap, the saved results need to be extended to its size. The key step of this method is selecting the discretization step that balances memory usage and time consumption. As expected, this method raises problems for high redundancy systems, as computational capacities are limited and memory usage needs to be taken into account. Thus, when creating the hypotheses to test, a linear discretization of each element of interest was implemented, bounding the number of hypotheses. The algorithm can be described as follows (Algorithm 2).

Algorithm 2. Linear Scanning Algorithm
Inputs: PointsToOptimise, Model Data;
Outputs: Optimised Poses;
Hypothesis = Create_Testing_Hypothesis();
Optimised Poses = Create_Poses_Structure();
foreach PointsToOptimise do
    foreach Hypothesis do
        EvaluateHypothesisCost(Hypothesis.Current, PointToOptimise.Current);
        Update_Optimised_Poses();
    end
end
return Optimised Poses;
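
A minimal Python sketch of such a scan over a discretised external-axis space is given below; the rejection of unreachable hypotheses follows the text, while the data representation, the default step count and the heap size are assumptions.

import heapq
import itertools

def linear_scan(axis_ranges, evaluate_cost, steps=50, heap_size=10):
    """Exhaustively scan every combination of discretised external-axis values.

    axis_ranges   : list of (min, max) travel limits, one per external axis
    evaluate_cost : callable returning the cost of a hypothesis, or None if unreachable
    """
    grids = [[lo + (hi - lo) * k / (steps - 1) for k in range(steps)]
             for lo, hi in axis_ranges]
    best = []                                   # heap of (-cost, hypothesis)
    for hypothesis in itertools.product(*grids):
        cost = evaluate_cost(hypothesis)
        if cost is None:                        # unreachable: discarded, as in the text
            continue
        heapq.heappush(best, (-cost, hypothesis))
        if len(best) > heap_size:
            heapq.heappop(best)                 # drop the worst solution kept so far
    return sorted((-c, h) for c, h in best)     # cheapest solutions first

# toy usage: one linear track of 12 m, cost grows with distance from 4.0 m
solutions = linear_scan([(0.0, 12.0)], lambda h: abs(h[0] - 4.0))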

4.2 Genetic Algorithms

This method can be defined as a search and optimisation tool able to solve multi-constraint problems [6]. Genetic algorithms resort to genes (the variables of interest) to store a sequence or solution of interest. Most common applications of this method start with two sets of solutions and iteratively swap (exchange genes between solutions) or mutate (randomly or methodically change a given gene), creating a population of solutions. Within our proposed methodology we start with a larger number of randomly generated genes. Each gene is defined as a vector resulting from the external axis values. Then, each gene undergoes a reachability validation for the defined position. New genes are iteratively generated throughout a fixed number of iterations and the optimised heap is built. The generation of each gene is based on the swap and mutation operations, which are randomly selected. In the case of the swap procedure, the second gene is also randomly chosen from the multi-gene population. The algorithm can be described as follows (Algorithm 3).

Algorithm 3. Genetic Algorithm
Inputs: PointsToOptimise, Model Data;
Outputs: Optimised Poses;
Population = Generate_Genes();
Optimised Poses = Create_Poses_Structure();
foreach PointsToOptimise do
    for NumberOfIterations do
        foreach Gene in Population do
            EvaluateHypothesisCost(Gene, PointToOptimise.Current);
            Update_Optimised_Poses();
        end
        Population = Generate_NewGenes(Population, mutationRate, swapRate);
    end
end
return Optimised Poses;
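
The generation of new genes by swap and mutation could look like the following Python sketch; the gene encoding as a vector of external axis values follows the text, while the concrete operators and the default rates (chosen to mirror the parametrisation reported in Sect. 5) are assumptions.

import random

def new_generation(population, axis_ranges, swap_rate=0.5, mutation_rate=0.1):
    """Create a new set of genes by randomly swapping or mutating existing ones."""
    offspring = []
    for gene in population:
        child = list(gene)
        if random.random() < swap_rate:
            # swap: exchange one randomly chosen value with another gene
            partner = random.choice(population)
            i = random.randrange(len(child))
            child[i] = partner[i]
        if random.random() < mutation_rate:
            # mutation: redraw one value inside the travel range of its axis
            i = random.randrange(len(child))
            lo, hi = axis_ranges[i]
            child[i] = random.uniform(lo, hi)
        offspring.append(child)
    return offspring

# toy usage: population of 5 genes for two external axes
ranges = [(0.0, 12.0), (-3.14, 3.14)]
population = [[random.uniform(lo, hi) for lo, hi in ranges] for _ in range(5)]
population = new_generation(population, ranges)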

4.3 Simulated Annealing

Another optimisation technique is the simulated annealing method, a probabilistic technique for finding the global optimum of a given function [14]. This method starts from a random solution and iteratively searches its neighbourhood to define new possible solutions. Then, it probabilistically decides to which solution it should move, until it ultimately finds the global optimum.

However, when dealing with high redundancy systems this method entails a high time and computational effort. Thus, a minor adjustment was implemented in order to reduce the execution time of the method: a threshold was defined so that the method runs iteratively until reaching a fixed number of solutions that verify such a constraint, stopping without fully completing the algorithm and thus giving a secure and acceptable list of solutions while minimizing time consumption. Another add-in is related to the initial point. Since this method is neighbour-based, if the initial point and its neighbours did not produce a valid solution, the method would stop and an erroneous value would be found. As such, the initial point is randomly fixed within half the robot's length of the goal point. The algorithm is shown next.

Algorithm 4. Simulated Annealing
Inputs: PointsToOptimise, Model Data;
Outputs: Optimised Poses;
Solution = Generate_Initial_AcceptablePosition();
Optimised Poses = Create_Poses_Structure();
foreach PointsToOptimise do
    while SolutionNumber < IntendedSolutionNumber do
        EvaluateSolution(Solution);
        Update_Optimised_Poses();
        Neighbours = Get_Solution_Neighbours();
        Solution = Select_Next_Solution(Neighbours);
    end
end
return Optimised Poses;
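
A minimal Python sketch of the adjusted simulated annealing loop is shown below; the early stop once enough good-enough solutions are collected follows the text, while the Metropolis acceptance rule, the cooling schedule and the iteration cap are standard assumptions, not taken from the paper.

import math
import random

def simulated_annealing(initial, evaluate_cost, neighbours, cost_threshold,
                        wanted_solutions, temperature=1.0, cooling=0.95,
                        max_iterations=10000):
    """Neighbour-based search that stops early once enough good solutions are found."""
    current = initial
    current_cost = evaluate_cost(current)
    accepted = []
    for _ in range(max_iterations):
        if len(accepted) >= wanted_solutions:
            break
        candidate = random.choice(neighbours(current))
        cost = evaluate_cost(candidate)
        if cost is None:                      # unreachable neighbour: skip it
            continue
        # always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature decreases
        if cost < current_cost or random.random() < math.exp((current_cost - cost) / temperature):
            current, current_cost = candidate, cost
            if current_cost <= cost_threshold:
                accepted.append((current_cost, current))
        temperature *= cooling
    return accepted

# toy usage on a single axis discretised into 0.05 increments
neigh = lambda x: [round(x + d, 2) for d in (-0.05, 0.05)]
sols = simulated_annealing(0.0, lambda x: abs(x - 1.0), neigh,
                           cost_threshold=0.15, wanted_solutions=3)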

4.4 Potential Gradient

Similar to Simulated Annealing, Potential Gradient is an algorithm based on the solutions surrounding the current iteration. However, this algorithm stops at local optima. The iteration direction is defined by the sum of the directional derivatives framed by the optimisation function. Once the best directional vector is determined, a new solution is generated, until a local optimum is reached. Despite being computationally light, this algorithm does not guarantee optimal solutions for any given problem.
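
As an illustration, such a directional search can be sketched in a few lines of Python; the central-difference gradient estimate, the fixed step size and the stopping rule are assumptions, and only the idea of following the summed directional derivatives until a local optimum comes from the text.

import numpy as np

def gradient_descent(x0, cost, step=0.05, eps=1e-3, max_iterations=500):
    """Follow the numerical gradient of the cost until a local optimum is reached.

    x0   : initial external-axis values (1-D array-like)
    cost : callable mapping an axis-value vector to a scalar cost
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iterations):
        # central-difference estimate of the directional derivatives
        grad = np.array([(cost(x + eps * e) - cost(x - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])
        if np.linalg.norm(grad) < 1e-6:       # flat region: local optimum found
            break
        x = x - step * grad / np.linalg.norm(grad)
    return x, cost(x)

# toy usage: quadratic bowl with its optimum at (4.0, 1.0)
best, best_cost = gradient_descent([0.0, 0.0],
                                   lambda v: (v[0] - 4.0) ** 2 + (v[1] - 1.0) ** 2)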

5 Validation

Attempting to ensure a multidisciplinary validation process, two work cells with different properties were modelled and inserted into a custom simulator. Those work cells are displayed below in Fig. 1.

Fig. 1. On the left - a work cell with a cartesian external axis, a Fanuc ic30 robot and a cutting torch; on the right - a work cell with two kinematic chains, one composed of a rotative external axis (ring) and a Motoman MH5 robot and the second of an external positioner.

A third cell was used, although it is not presented above, because it is currently being physically implemented in the scope of the national project CoopWeld. Thus, the third cell was only tested and validated in a simulated environment. The work cells presented here are focused on high redundancy systems composed of external axes, robot and operation tool. However, the proposed approach is also applicable to simpler robotic cells. In order to validate the methodology, a set of cutting and welding jobs was generated using CAM (Computer Aided Manufacturing) software for the production of beams, MetroID and CLARiSSA, proprietary to SARKKIS Robotics. This software generates a set of vectors containing poses that describe the given operation (see Fig. 2).

Fig. 2. MetroID user interface.

Using this software, several test beams were created; examples are presented in Fig. 3. The idea behind the creation of those beams was to define different jobs that require external axis/positioner movement.

Fig. 3. Beams examples.

In order to evaluate each of the implemented algorithms, three parameters were considered: reachability percentage, time consumption and cost of the reached solution. The results are summarized in Table 1. They correspond to a validation test of 15 beams, each with 5 to 130 operations, resulting in a total of 563 points to be optimised. The results provided in the table are means per operation over the correctly achieved solutions.

Table 1. Results summary of the validation test

Optimisation methodology | Solution reachability (%) | Time consumption (s) | Cost value
Linear scanning          | 100%                      | 18.031               | 0.103
Genetic algorithms       | 100%                      | 1.022                | 0.098
Simulated annealing      | 96.4%                     | 0.740                | 0.147
Potential fields         | 81.4%                     | 0.412                | 0.134

These results were achieved once the proper parameters for each algorithm had been established. Those were determined by considering the best testing performance for random positions for each algorithm, concerning memory management, time consumption and solution reachability. For Linear Scanning, a discretization into 50 equally spaced hypotheses of each external axis/positioner, based on their interval range, was implemented. The Genetic Algorithm methodology was implemented using a cross rate of 50% and a mutation rate of 10%, for a randomly generated population of 1000, throughout 25 iterations; this was the set of parameters that produced the best results in an exhaustive study done with different parametrizations. We also limited the Simulated Annealing goal to a maximum cost of 0.15 in order to reach a higher number of solutions. Moreover, Simulated Annealing was implemented using a multi-dimensional neighbour radius of 8 increments, each increment being the interval value of each external part when discretized into 1000 equally spaced hypotheses.

6 Discussion and Future Perspectives

In this article we presented a solution with enough flexibility to be applied to all robotic installations. This solution allows one to avoid the common pitfalls associated with robotic pose configuration. Once the cost function containing the key elements for pose description is defined, a set of algorithms can be applied to identify the correct configuration for a pre-defined pose. Here, optimisation algorithms prove to be the best methodology; throughout the project, four were implemented. Tests show that each can be valuable within the scientific community. Directional algorithms, such as Potential Gradient, proved to be faster at reaching a solution; however, they fail to reach a reasonable solution in all cases. Energy-gradient-based methods, such as Simulated Annealing, reach all solutions, but the time consumption to reach the optimal one is sometimes too high; fixing a limit may help reduce time efforts but compromises efficiency. Linear Scanning is set to find the optimal solution within a discretization step; however, its time consumption is too high and the mentioned discretization, in order to avoid memory issues, might have to be enlarged, reducing the efficiency of finding the optimal position. A recursive Linear Scanning algorithm could be implemented; however, it would forfeit a set of solutions at each iteration (not ensuring that the optimal one is not discarded) and would still suffer from the time consumption problem. Thus, Genetic Algorithms seem to be the best methodology, as they reach a viable solution for all cases with a reduced time and computational effort.

Future work concerning optimisation in robotic work cells is related to component positioning and selection, thus providing a complete software solution to optimally design a high redundancy robotic work cell. In conclusion, the work presented here formalizes a flexible approach for a pose optimisation methodology, ensuring optimal configurations for robotic elements to perform a given task in a fast and efficient way.

Acknowledgments. A special word to SARKKIS robotics and INESC-TEC (in particular the ERDF – European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme, and the FCT – Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project «POCI-01-0145-FEDER-006961») for their commitment to research and development of revolutionary state-of-the-art algorithms and for their contribution regarding software tools and engineering hours availability.

References

1. Alatartsev, S., Stellmacher, S., Ortmeier, F.: Robotic task sequencing problem: a survey. J. Intell. Robot. Syst. 80(2), 279–298 (2015)
2. Andrisano, A.O., Leali, F., Pellicciari, M., Pini, F., Vergnano, A., Pini, F.: Integrated design of robotic workcells for high quality machining (2011)
3. Bennewitz, M., Burgard, W., Thrun, S.: Optimizing schedules for prioritized path planning of multi-robot systems, vol. 1, pp. 271–276 (2001)
4. Cheng, F.S.: Methodology for developing robotic workcell simulation models, vol. 2, pp. 1265–1271 (2000)
5. Chu, B., Jung, K., Chu, Y., Hong, D., Lim, M.T., Park, S., Lee, Y., Lee, S.U., Min Chul, K., Kang Ho, K.: Robotic automation system for steel beam assembly in building construction, pp. 38–43 (2009)
6. Deb, K.: Introduction to genetic algorithms. Sadhana - Acad. Proc. Eng. Sci. 24(4–5), 293–315 (1999)
7. Fathi, M., Álvarez, M., Rodríguez, V.: A new heuristic-based bi-objective simulated annealing method for U-shaped assembly line balancing. Eur. J. Ind. Eng. 10(2), 145–169 (2016)
8. Geller, E., Matthews, C.: Impact of robotic operative efficiency on profitability. Am. J. Obstet. Gynecol. 209(1), 20e1–20e5 (2013)
9. Graetz, G., Michaels, G.: Robots at Work (2015)
10. Gueta, L., Chiba, R., Arai, T., Ueyama, T., Ota, J.: Compact design of work cell with robot arm and positioning table under a task completion time constraint, pp. 807–813 (2009)
11. Hauer, S., M.V.H.C.S.K.: Design and simulation of modular robot work cells, pp. 1801–1802 (2009)
12. Kamezaki, M., Hashimoto, S., Iwata, H., Sugano, S.: Development of a dual robotic arm system to evaluate intelligent system for advanced construction machinery, pp. 1299–1304 (2010)
13. Kamoun, H., Hall, N., Sriskandarajah, C.: Scheduling in robotic cells: heuristics and cell design. Oper. Res. 47(6), 821–835 (1999)
14. Kirkpatrick, S., Gelatt Jr., C., Vecchi, M.: Optimization by simulated annealing. Science 220(4598), 671–680 (1983)
15. Mansouri, S.: A multi-objective genetic algorithm for mixed-model sequencing on JIT assembly lines. Eur. J. Oper. Res. 167(3), 696–716 (2005)
16. Pellegrinelli, S., Pedrocchi, N., Tosatti, L.M., Fischer, A., Tolio, T.: Multi-robot spot-welding cells for car-body assembly: design and motion planning. Robot. Comput.-Integr. Manufact. 44, 97–116 (2017)
17. Yang, J., Yang, J.: Intelligence optimization algorithms: a survey. 3(4), 144–152 (2011)

Offline Programming of Collision Free Trajectories for Palletizing Robots

Ricardo Silva1, Luís F. Rocha1, Pedro Relvas1, Pedro Costa1,2, and Manuel F. Silva1,3(B)

1 INESC TEC - INESC Technology and Science, Porto, Portugal
{rfsilva,luis.f.rocha,pedro.m.relvas}@inesctec.pt
2 FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
[email protected]
3 ISEP - School of Engineering of the Porto Polytechnic, Porto, Portugal
[email protected]

Abstract. The use of robotic palletizing systems has been increasing in the so-called Fast Moving Consumer Goods (FMCG) industry. However, because of the type of solutions developed, focused on high performance and efficiency, the degree of adaptability of packaging solutions from one type of product to another is extremely low. This is a relevant problem, since companies are changing their production processes from low variety/high volume to high variety/low volume. This environment requires companies to perform the setup of their robots more frequently, which has been leading to the need to use offline programming tools that decrease the required robot stop time. This work addresses these problems, and this paper describes the proposed solution for the automated offline development of collision free robot programs for palletizing applications.

Keywords: Palletizing · Industrial robots · Simulation · Offline robot programming

1 Introduction

The use of robotic systems has been increasing, in particular in the so-called Fast Moving Consumer Goods (FMCG) industry. Among the various applications of robotic systems in this industry, the automated systems for the movement of raw materials and ingoing products and, fundamentally, the robotic systems for palletizing and packaging of finished products should be highlighted. In particular, palletizing activities are especially demanding, not only in terms of operation speed but also due to the inherent complexity in the handling of different types of packaged products. Despite the common application of robotic technology solutions, these are generally designed to ensure high performance under well defined operating conditions (i.e. very specific in terms of operating standards). Therefore, a trade-off between the operational performance and the desired degree of flexibility is normally seen.

In fact, because of the type of solutions developed, focused on high performance and efficiency, the degree of adaptability of packaging solutions from one type of product to another is extremely low. There is also high rigidity regarding a possible change in the configuration of the packaging system to new operating requirements, for example when integrated into a new layout of the manufacturing process. From the perspective of the production process, this type of limitation significantly affects the flexibility of the production system itself, often preventing the development of production processes in a low volume/high variability environment [1]. In fact, after the efforts made in recent years by several industrial units to make production processes more flexible, allowing for a diversified and competitive response, it is clear that packaging/palletizing systems should follow this development and become more adaptable and flexible.

Following this set of identified problems, INESC TEC is developing a project whose main objective is to investigate, study, explore and develop a framework that allows the design and development of adaptive palletizing solutions based on robotic technologies. The framework includes a set of innovative elements that efficiently create and evaluate palletizing solutions and, at a later stage, design and develop the solution to be produced, based on an innovative concept of modularity (modular functional elements) and advanced technology for offline programming of the robotic systems. This project constitutes a unique opportunity for a qualitative leap in the equipment industry that provides palletizing solutions, evolving from a paradigm based on the use of conventional design and development practices to a new paradigm of solution creation, based on technologically advanced modular architectures that are adaptive and intelligent (aligned with the principles and concepts of Industry 4.0).

Bearing these ideas in mind, the next section describes the main ideas behind this project, followed by Sect. 3, which presents the work related to the problem addressed in this paper, described in Sect. 4. Section 5 presents the conceptual architecture proposed for the solutions and, in the sequel, Sect. 6 introduces the main results achieved and their discussion. Finally, in Sect. 7 the main conclusions of this work are referred, as well as some ideas for future developments.

2 Proposed Solution Concept

The development of flexible and configurable products that can evolve in the future is a complex task. The limitations of conventional design and development approaches are recognized in this context. Unlike conventional product development, where architectures/platforms with well-structured and detailed interfaces are not used, the concept underlying the proposed solution is based on the principles of integrating advanced modularity techniques into the design and development of products. In this way, it will be possible to perform the integrated and modular development of new products, drastically reducing the time of design, development and installation of new equipment.

The “modular product” design is assumed to guarantee two main characteristics. First, each functional element of the product consists of “parts” which have well defined interactions with the other parts of the same product. Second, modular product design allows changes to be made in one part, or even its replacement (e.g. conveyor, robot, etc.), without this change affecting other product functions. As in the automotive industry, modularity should consider the entire design process. While standardization involves design-to-production optimization, the product architecture influences the use and the sale of the final product, as it contributes to the enhancement of its flexibility and adaptability. The concepts of modularity considered here aim at the decomposition of what would be a complex monolithic robotized palletizing solution into subsystems that constitute complete functional units, which can be designed and produced independently (allowing the construction of different products by means of a combination of subsystems) and with a reduced complexity, while keeping their integrated operation. This innovative concept will be materialized in a technological kit that will validate the application of the developed research in the design and development of robotic adaptive palletizers, and will be supported by a set of software and hardware prototypes (see Fig. 1).

Fig. 1. Diagram of the proposed project solution modules

Considering the proposed project solution architecture depicted in Fig. 1, this paper specifically focuses on the “Palletizing System Simulator” and “Robot Trajectory Generation System” modules. In more detail, the simulation environment is presented, along with how this simulator can be used to quickly design a palletizing cell and program the respective robots.

3 Related Work

Robot palletizing cells have captured a lot of attention in the last decade in the scientific sphere. Two essential issues affecting the performance of palletizing robots, which should be considered when designing a flexible automatic robot palletizing production line, are the proper robotic cell layout arrangement and the path planning of the industrial robot.

In [2,3] an offline programming robot palletizing simulator is presented, consisting of: (i) a fast algorithm to generate the palletizing product mosaic, (ii) a 3D robot simulator, and (iii) an optimized trajectory generation algorithm. According to its authors, with this system it becomes possible to program the industrial robot without the need of using the teach pendant or writing a single line of code. The authors focus their work on the optimization of the computational time of their path planner, based on A*.

In [4] the authors deal with the facility layout analysis and path planning of a robotic palletizing production line, using a 4-DOF partial closed-loop palletizing robot and two carton conveyors. The layout of the robotic cell is arranged considering its reachability while minimizing the robot footprint. In terms of path planning, the authors propose a straight-line and arc based motion planning approach under the constraints of joint velocity and acceleration to achieve higher productivity and easier implementation.

Chiba et al. [6] also deal with the design of a palletizing cell and with the robot path planning problem. To solve both, the authors propose a design method to improve the performance of a palletizing manipulator. The working environment is optimized regarding the base position of the manipulator and the shape and position of the pallet. To reduce the computational time, the parameters of an environment are quickly evaluated with the proposed method, in which passing points are set to reduce the computation time of path planning.

Focusing on the path planning problem, the authors in [5] present an algorithm that computes a collision free trajectory for a 6-axis industrial robot. The target applications for the developed approach are palletizing and handling robotic cells. Given a 3D environment, the proposed algorithm decomposes the Cartesian workspace of the robot into 3-dimensional cylinder slices centered around the robot base.

In turn, Luan et al. [7] present a maximum speed algorithm for serial palletizing robots. As the authors state, operation speed is one of the most important performance indices of a robotic palletizing cell. To meet the required performance, the authors propose an iterative control algorithm that optimizes the maximum robot speed given a certain positional tolerance. The authors refer that, with this algorithm, it is possible to obtain the maximum speed and acceleration for the robot manipulator regardless of manipulator position, arm posture or joint couplings, thus improving palletizing efficiency and bringing more economic benefits into practical application.

4 Problem Description

The problem addressed in this paper is the identification, definition and development of the “Palletizing System Simulator” and “Robot Trajectory Generation System” modules. Figure 1 presents the project solution architecture. As mentioned, this project foresees the inclusion of a 3D robotic cell simulator as an integral part of the proposed modular architecture. As its main requirements, this simulator should allow the design and simulation of the whole palletizing cell, as well as the offline programming and the download of the generated program code for robots from distinct brands. The inclusion of this software in the project aims not only at reducing the design time of the palletizing cells, but also the development and setup times of the robot programs. Firstly, this software allows the offline programming of the mentioned robots, as well as the simulation of their operation. Secondly, it makes it possible to expand the software components library by designing new models of customized equipment or even whole palletizing cells, so that their inclusion in each project can be performed in a simple way. Indeed, by simulating the robots' virtual models and the palletizing systems in which they operate, the efficiency of the design of a robotic cell can be significantly improved. Also, considering the accuracy and level of detail offered by the simulation software, it is possible to achieve models having a behaviour very similar to the actual equipment, enabling the study and test of a solution before its implementation, with very high reliability. Moreover, using the simulator also ensures the concept of adaptability proposed in the project.

After the offline programming stage, performed in the software's own language, the goal is to translate this language into the ones used by the robot controllers of different vendors (namely ABB, KUKA, FANUC and Yaskawa Motoman). For this purpose, the idea is to define a so-called neutral language, into which the software's language is translated; the next step is then to perform the translation of this neutral language to each considered robot manufacturer's language. The neutral language adopted in this project is the Industrial Robot Language (IRL - DIN Standard 66312), since the software already allows the translation of its instruction set to this standard.

This simulator will be part of the “Palletizing System Simulator” module (see Fig. 1), whose main goal is to carry out the programming stage of each palletizing cell's robot. Therefore, using the Python API provided by the simulation software, a plug-in was developed that generates the robot's program simply by selecting the pick and place points of the products to palletize. This module, in turn, will exchange information with the “Robot Trajectory Generation System” module, which aims to program the movement of the robot between the pick and place points, considering its operation in the projected cell, in a way that results in collision free trajectories. Each of these two modules is detailed in the next section.

5 Modules Conceptual Architecture

The conceptual architecture of the two developed modules is presented in this section. Firstly, the “Palletizing System Simulator” module will be presented, followed by the “Robot Trajectory Generation System” module.

5.1 Palletizing System Simulator

The conceptual architecture of the simplified robot programming module for a palletization task follows the scheme of Fig. 2.

Fig. 2. Palletizing System Simulator architecture scheme

As the figure shows, an Init button is used to start the process. After that, the OnSelection.py function was developed to wait for the selection of the picking point, made by clicking on the corresponding component (which must be a simulator-predefined type of object - a product), and of the placing point (which must also be a simulator-predefined type of object - a pallet). If the selected components are different from what is expected, the function rejects the selections. After the selections are made successfully, the function stores the coordinates of the selected objects, which are passed to the Palletize.py function. Besides the mentioned coordinates, this function has two input files: Configs.txt, which has information about the dimensions of the products to be transported and the dimensions of the pallet on which the palletization will occur; and Mosaic.txt, which contains the points that define the mosaic of products to be palletized. As output, the function sets the program instructions for the robot's movements and gripper actuation, and starts the simulation. A simplified sketch of this generation step is given below.
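
The Python sketch below illustrates how such instructions could be assembled; the assumed one-offset-per-line layout of Mosaic.txt and the (MOVE, GRIP) instruction tuples are hypothetical, and only the overall flow (pick point, mosaic offsets on the selected pallet, gripper actuation) follows the description above.

def load_mosaic(path):
    """Read one 'x y z' offset per line (assumed Mosaic.txt layout)."""
    with open(path) as handle:
        return [tuple(float(v) for v in line.split()) for line in handle if line.strip()]

def palletize_program(pick_point, pallet_origin, mosaic):
    """Generate a simple move/grip instruction list for every mosaic position."""
    program = []
    for dx, dy, dz in mosaic:
        place = (pallet_origin[0] + dx, pallet_origin[1] + dy, pallet_origin[2] + dz)
        program += [
            ("MOVE", pick_point), ("GRIP", "close"),
            ("MOVE", place), ("GRIP", "open"),
        ]
    return program

# toy usage with two mosaic positions on the selected pallet
mosaic = [(0.0, 0.0, 0.0), (0.3, 0.0, 0.0)]
for instruction in palletize_program((1.2, 0.0, 0.4), (0.0, 1.0, 0.1), mosaic):
    print(instruction)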

5.2 Robot Trajectory Generation System

For the generation of collision free trajectories, the work presented in [8] was considered. This planner is similar to those of the A* family and requires space discretization and the definition of the kinematic equations that govern the robot's movement. Based on this algorithm, the implemented functionality follows the architecture scheme of Fig. 3.

For the generation of the robot's collision free trajectory, the A* Path button is pressed to start planning. Thus, the PathPlanning.py function firstly receives the start and end point coordinates of the trajectory from the OnSelection.py function, resulting from the selection process referred to in Subsect. 5.1. These are defined as points above the product to be palletized and above the chosen pallet, respectively, meaning that the path planning is made between these two points of the palletizing cell. Then, the algorithm starts. For its operation, the RobotKinematics.py function is first identified, which is responsible for handling the kinematics of the robot. In turn, the CollisionDetector.py function finds collision occurrences in the 3D robot workspace using the simulator's own collision detector. This way, it is possible to identify collisions between the set composed of the robot, gripper and transported product and any other component of the palletizing cell, for each discretized configuration. After that, all the discretized joint space information required for the A* algorithm's logic is stored in a data structure whose query functions are defined in DataHandler.py. Finally, as Fig. 3 shows, the output path points are handled by Palletize.py; this function, described in Subsect. 5.1, is responsible for their inclusion in the robot's movement instructions for the palletizing process.

Fig. 3. Robot Trajectory Generation System architecture scheme
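
The planning loop described above can be sketched as a generic A* over a discretised space with a collision-check callback; the grid representation and the helper callables are assumptions, not the actual PathPlanning.py implementation.

import heapq
import itertools

def a_star(start, goal, neighbours, in_collision, heuristic):
    """Generic A* over a discretised space with a collision-check callback."""
    counter = itertools.count()                 # tie-breaker for the priority queue
    frontier = [(heuristic(start, goal), next(counter), 0.0, start, None)]
    came_from, best_cost = {}, {start: 0.0}
    while frontier:
        _, _, cost, node, parent = heapq.heappop(frontier)
        if node in came_from:                   # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:                        # rebuild the path back to the start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nxt, move_cost in neighbours(node):
            new_cost = cost + move_cost
            if in_collision(nxt) or new_cost >= best_cost.get(nxt, float("inf")):
                continue
            best_cost[nxt] = new_cost
            heapq.heappush(frontier,
                           (new_cost + heuristic(nxt, goal), next(counter), new_cost, nxt, node))
    return None                                 # no collision-free path was found

# toy usage on a 2-D grid with one blocked cell between start and goal
moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
neighbours = lambda p: [((p[0] + dx, p[1] + dy), 1.0) for dx, dy in moves]
blocked = {(1, 0)}
path = a_star((0, 0), (2, 0), neighbours, lambda p: p in blocked,
              lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1]))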

6 Tests and Results

This section presents the results obtained with the two developed modules. A simple palletizing cell, depicted in Fig. 4(a), is used as an example. This cell has a 100 kg payload KUKA palletizing robot, two separate picking point feeders (identified by the green numbers) and two pallets representing two different placing points (identified by the red numbers). Also, a customized menu for this work was defined and added to the simulator's top tab menu to access the developed functions of the plugin (see Fig. 4(b)).

Fig. 4. Palletizing cell example for testing purposes: (a) palletizing cell; (b) customized project simulator menu

As in the previous section, this one is divided in order to present the results of the “Palletizing System Simulator” module in the first subsection, and the “Robot Trajectory Generation System” module in the second one.

6.1 Palletizing System Simulator

With the developed functions mentioned in Subsect. 5.1, it actually becomes possible to program the palletizing robot in a very simple way, just by selecting the pick and place points of the product to palletize. The process starts with the press of the Init button, located in the Palletize tab of the project simulator menu (Fig. 5(a)). This action starts the simulation automatically until the products of both feeders reach the two picking points (Fig. 5(b)). Selecting the product at the desired picking location (Fig. 5(c)) sets the coordinates of the picking point for the robot's instruction generation (Fig. 5(e)). Then, by clicking on the desired pallet, the coordinates of the placing point are also set (Fig. 5(d)). As stated in Subsect. 5.1, the exact coordinates of the placing points (Fig. 5(f)) are defined in the Mosaic.txt file. After this last selection, the simulation resets, only the feeder corresponding to the defined picking point works and the robot starts the palletization process. In Figs. 5(g), (h) and (i), three moments of a three-layer palletization process can be seen.

6.2 Robot Trajectory Generation System

Path planning comes as a result of the developed application described in Subsect. 5.2. The first test performed was the trajectory planning for the palletizing task shown in Fig. 5. After selecting the pick and place points and clicking the A* Path button located in the Path Planning tab of the project simulator menu, the algorithm process starts. When it finishes, it returns the points of the discretized space that must be included in the trajectory. To illustrate the result of the planned trajectory for the given example, Fig. 6(a) shows the points of the discretized space included in the resulting trajectory shown in Fig. 6(b).

Fig. 5. Palletizing System Simulator steps: (a) Init button to start the process; (b) products stop at picking points; (c) picking point selection by clicking on the product; (d) placing point selection by clicking on the pallet; (e) robot's generated picking points; (f) robot's generated placing points; (g) palletization: first layer; (h) palletization: second layer; (i) palletization: complete pallet

The second test performed on the trajectory planning plugin concerned its behaviour in collision situations. Thus, metallic fences were placed in the robot's workspace, resulting in the layout depicted in Fig. 7. After applying the algorithm under these conditions, the obtained result is shown in Fig. 8. Again, Fig. 8(a) illustrates the points added to the trajectory and Fig. 8(b) the resulting trajectory. Finally, in order to compare the two illustrated cases, Fig. 9 shows the result of the collision detector's inclusion in the application architecture. As can be seen, the robot's movement is adjusted, yielding a collision-free trajectory.

Fig. 6. Planned trajectory for the palletizing example of Fig. 5: (a) trajectory added points; (b) resulting trajectory

Fig. 7. Palletizing cell with fences as obstacles

Fig. 8. Planned trajectory in a palletizing cell with obstacles: (a) trajectory added points; (b) resulting trajectory

Fig. 9. Trajectory planning comparison: (a) without collision detector; (b) with collision detector

7 Conclusions

The use of robotic palletizing systems has been increasing in the FMCG industry. Due to market demands, most of these companies are changing their production processes from low variety/high volume to high variety/low volume, which demands frequent reprogramming of their equipment, with the inherent production breakdowns, and also the need to frequently customize and/or change the palletizing solutions. To overcome these problems, this work proposes a solution based on the principles of integrating advanced modularity techniques into the design and development of automated palletizing solutions.

This paper addressed the development of a system for the automated offline generation of collision free robot programs for palletizing applications in the scope of this project. The automatic generation of robot programs, based on the user's specification of the picking and placing position coordinates and on the pallet mosaic configuration, was detailed, together with the robot collision free trajectory generation system for the implementation of this program. Based on preliminary tests, the system works as predicted and has been considered user friendly by the people who have tested it. However, future improvements have to be made to the collision free trajectory generation system in order to lower its computational burden and decrease the computational time required to compute collision free trajectories.

Acknowledgements. This work is financed by the ERDF – European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation – COMPETE 2020 under the PORTUGAL 2020 Partnership Agreement, and through the Portuguese National Innovation Agency (ANI) as a part of project AdaptPack — POCI-01-0247-FEDER-017887.

References

1. Yaman, H., Sen, A.: Manufacturer's mixed pallet design problem. Eur. J. Oper. Res. 186, 826–840 (2008)
2. Yu, S., Lim, S., Kang, M., Han, C., Kim, S.: Off-line robot palletizing simulator using optimized pattern and trajectory generation algorithm. In: 2007 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Zurich, pp. 1–6 (2007)
3. Lim, S., Yu, S., Han, C., Kang, M.: Palletizing simulator using optimized pattern and trajectory generation algorithm. In: Di Paola, A.M.D., Cicirelli, G. (eds.) Mechatronic Systems Applications. InTech (2010)
4. Zhang, L., et al.: Layout analysis and path planning of a robot palletizing production line. In: 2008 IEEE International Conference on Automation and Logistics, Qingdao, pp. 2420–2425 (2008)
5. Scheurer, C., Zimmermann, U.E.: Path planning method for palletizing tasks using workspace cell decomposition. In: 2011 IEEE International Conference on Robotics and Automation, Shanghai, pp. 1–4 (2011)
6. Chiba, R., Arai, T., Ueyama, T., Ogata, T., Ota, J.: Working environment design for effective palletizing with a 6-DOF manipulator. Int. J. Adv. Robot. Syst. 13, 1–8 (2016)
7. Luan, N., Zhang, H., Tong, S.: Optimum motion control of palletizing robots based on iterative learning. Ind. Robot Int. J. 39(2), 162–168 (2012). https://doi.org/10.1108/01439911211201627
8. Tavares, P., Lima, J., Costa, P.: Double A* path planning for industrial manipulators. In: Robot 2015: Second Iberian Robotics Conference: Advances in Robotics, Lisbon, vol. 2, pp. 119–130 (2016)

Manipulation

Estimating Objects' Weight in Precision Grips Using Skin-Like Sensors

Francisco Azevedo, Joana Carmona, Tiago Paulino, and Plinio Moreno(B)

Institute for Systems and Robotics (ISR/IST), LARSyS, Instituto Superior Técnico, Univ Lisboa, Torre Norte Piso 7, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal
[email protected], [email protected], tiago paulino @hotmail.com, [email protected]
http://vislab.isr.tecnico.ulisboa.pt/

Abstract. The estimation of an object's weight is a very challenging problem due to limitations of tactile technology and of robust and fast controllers that can adapt to friction state changes. In this article we study weight estimation using skin-like tactile sensors, which provide accurate 3-dimensional force measurements on the fingertips of Vizzy, a robot with human-like hands. The sensor readings from the fingertips are analyzed and segmented in order to find the most adequate stages of the movement for the computation of the weight. The analysis is based on the difference of the load force between the grasping and holding-up stages, which provides a good estimation of the object's weight for different objects and various repetitions of the experiments.

Keywords: Weight estimation · Humanoid robots · Tactile sensors · Vizzy robot

1 Introduction

Humans are able to execute manipulation actions that adapt the gripping force while grasping objects, using rough weight guesses from vision as the initial estimation, followed by a tactile afferent control loop that provides a robust and precise grasp, such as the precision grip. The capability of fast adaptation using the tactile sensors relies on the high density of tactile afferents (about 140 afferents/cm2) in the fingertips and on the specialized action-phase controllers [3], which allow humans to accurately sense large areas of the objects when compared to the current technologies for robotic hand tactile sensing. Nevertheless, studies on grip control and slippage detection for robotic hands have shown the plausibility of haptic feedback for robots in simplified experimental setups. In addition to the technological limitations, adaptive object manipulation requires robust switching control algorithms and independent-mode controllers that provide a fast response and, at the same time, model unstable states such as grasping in the presence of dynamic friction.

All these challenges are closely related to the weight estimation of objects by humanoid robots in uncontrolled environments, which is the long-term objective of our work.

Fig. 1. Example of the initial robot configuration before the execution of a precision grip experiment for weight estimation

In this article we address the weight estimation of objects by executing manipulation actions with the humanoid robot Vizzy [4], which has two hands very similar to their human counterparts. Figure 1 shows the robot right at the beginning of a weight estimation experiment. Although the robot was mechanically designed for power grasps, Vizzy is able to execute precision grips for medium size objects. We focus on the weight estimation of objects during the execution of precision grips, using skin-like tactile sensors at the fingertips [5]. The sensors provide an estimation of the force from the changes in the magnetic field, considering three main elements: (i) a 3-dimensional Hall effect sensor, (ii) a magnet and (iii) a silicone cover for the magnet. The changes in the magnetic field due to the deformation of the silicone part are mapped onto 3-dimensional forces, which provide the tactile perception to the silicone cover.

These 3-dimensional forces estimated by the sensor are analyzed over time in order to find the different stages of the precision grip execution. The sequence of stages is as follows: (1) initial positioning of the hand around the object, (2) object grasping and lifting, (3) holding, (4) returning the object to the initial position and (5) returning the robotic hand to the initial position. These stages are segmented for all the sensors that touch the object, and the estimation of the weight of each object is based on the assumption that the difference between the forces exerted by the robotic hand in stages 2 and 3 along the three directions sums up to the weight of the object (i.e. the change in the load force due to gravity). In addition, we consider that the friction force keeps a constant value during stages 2 and 3, which is valid if there is no relative movement between the object and the sensors (no slips nor oscillations). Therefore, the friction component of the force is removed by calculating the difference between the forces in stages 2 and 3.

We evaluate the weight estimation algorithm on two objects with different size and shape, also increasing the weight of the objects by adding water. The results show that the tactile sensors provide a good estimation of the weight while having skin-like properties.

2 Related Work

Recent developments in tactile sensing, such as BioTac1, OptoForce2 and the skin-like magnetic-based sensors [5], have opened the door to exploring the challenging areas of grip control [1,7] and slippage detection [6] for robotic hands. Autonomous grip control using tactile sensors is still an open research problem due to technology limitations and to the development of controllers that are able to operate in different conditions, such as the changes between static and dynamic friction. A robust set of controllers should be able to switch between states and adaptively change the control reference in order to autonomously manipulate objects with different materials and shapes. Thus, autonomous weight estimation is a very challenging problem that has to consider physical properties (i.e. static and dynamic friction), autonomous switching between control modes and the contact points on the object. The weight estimation of objects is therefore usually done in simplified settings where, for instance, the object shape fits the gripper shape very well and the robot arm is carefully designed for the weight estimation by following the moving crane approach [2]. In that work, the manipulation action is a power grasp where the friction problems are not present, and the authors are able to estimate the object weight online in a very short time (0.5–0.7 s). In [2], the voltage signals of the load cells (sensors) are analyzed offline in order to characterize the different manipulation stages. Then, associations between the voltage response and the manipulation stages lead to an ad-hoc algorithm for the segmentation of the signal. Finally, the average value of the signal in the selected interval is utilized to learn the parameters of a regression function that maps load cell voltages onto weights. As in [2], we analyze the signal response over time and identify the manipulation stages. However, we address a more challenging manipulation problem, the two-fingertip precision grip of a humanoid robot hand, where friction issues arise. In the following we describe the characteristics of the hand.

3 Vizzy's Hand Design

In this work the robot Vizzy [4] is used to perform the grasps. Vizzy was designed as a human assistant for social interaction tasks. It has an anthropomorphic upper body with degrees of freedom and motion execution skills similar to those of a human.

1 https://www.google.com/patents/US7658119
2 https://optoforce.com/file-contents/OMD-20-SE-40N-DATASHEET-V2.2.pdf?v14

Regarding its hands, the palm and finger sizes and the number of limbs are also similar to those of an adult person, but it has only four fingers capable of grasping objects. The thumb and index fingers are each actuated by a single motor, while the middle and ring fingers are coupled to one motor. The motor of a finger is coupled to a pulley that pulls a fishing line. The fishing line is attached from the pulley to the last limb of the finger, such that the motion of one motor moves the three limbs of each finger in an under-actuated manner. In this work we only used two fingers, thumb and index, in order to perform the precision grasp. Regarding the sensors, the thumb has three sensors and the other fingers have four sensors each. The sensors are shown in orange in Fig. 2 and the ones used in this experiment are numbered from 1 to 4.

Fig. 2. Indexes of the force sensors in Vizzy’s hand

These tactile sensors [5] are composed of a soft elastomer body with a small permanent magnet inside. Below the magnet there is a magnetic field sensing element (i.e. a Hall-effect sensor). When an external force is applied on the elastomer, the relative magnet position changes and the Hall-effect sensor detects the magnetic field variation, which can be converted into a measure of the applied force. An air gap is left between the elastomer and the magnetic sensor in order to increase the sensitivity to small forces. The use of a 3-axis Hall-effect sensor allows the detection of the magnetic field variations along the 3 axes, meaning that the sensor is capable of measuring the force magnitude and direction in 3D. The presented tactile sensors are dependent on the contact area, which is unknown. The feedback of the Hall sensor provides the magnetic field vector; to obtain the force vector, some assumptions about the contact area are needed during the calibration process. The sensors are calibrated for a contact with a plane surface perpendicular to the Z axis of each sensor. The 2 sensors near the fingertips are covered with the same elastomer piece, but each sensor has its own individual calibration, made with a planar surface on top of that sensor.

4 Experiment Setup

The experimental tests were performed using two different objects (plastic containers) with similar shapes and surface characteristics but slightly different sizes (Fig. 3). In order to increase the variability of the test objects regarding the variable of interest, these objects were used in two different configurations: empty and partially filled with water. The movement executed by the robotic arm during data acquisition can be divided into a series of sequential steps, described in the Introduction and illustrated in Fig. 4.

Fig. 3. Test objects: object 1 (left) and object 2 (right).

Fig. 4. Movement phases: 1-initial position; 2-object grasping and lifting; 3-holding; 4-landing the object; 5-return to the initial position. Panels: (a) stage 1; (b) stage 2; (c) stages 3 and 4; (d) stage 5 (beginning); (e) stage 5 (end).

At the beginning of the test (stage 1, Fig. 4a), the palm of the robotic hand was placed perpendicularly to the surfaces of the object where the contact would be established. The initial position of the object and the hand was defined according to two main criteria.

On the one hand, there was no initial contact between them, in order to assure that the sensors would not detect any significant force at this stage of the sequence. On the other hand, the relative position of the thumb and the index finger had to be optimized so that the grip forces exerted by these fingers were approximately perpendicular to the surface of the object and parallel to the palm of the robotic hand. This was achieved by setting the thumb to an abducted position and the index finger in opposition to it. According to the GRASP Taxonomy [1], this type of grasp corresponds to a precision grasp with pad opposition, which is naturally executed by humans to grasp small objects. The optimization of the initial relative position of the hand and the object has proven to be critical for the success of the grasp, particularly to prevent oscillations in posterior stages of the movement and to guarantee the appropriate contact between the sensors and the object.

The following stage of the movement (stage 2, in Fig. 4) consisted of moving the thumb and the index finger against the surface of the object without significantly changing their initial configuration. The final position of the fingers at this stage was tuned to optimize the compromise between minimizing the potential occurrence of slipping events during the lifting phase and not inducing any significant degree of deformation in the object. The following stage of the sequence was the lifting and holding of the object (stage 3, in Fig. 4). During these phases, the most relevant issue was to minimize the motion artifacts resulting from small oscillations of the object, which was accomplished by controlling the velocity of the movement of the robot's joints. Finally, both the object and the robotic hand were returned to their initial positions (stages 4 and 5 in Fig. 4) and the robotic hand also returned to the configuration described in the first stage.

The sequential movement was repeated over several trials for each one of the objects and the acquisition of the data from the sensors was performed using an Arduino Nano board. The raw data consisted of the magnitude of the forces along the three directions, measured in each sensor's reference frame. Matlab was used for real-time visualization and monitoring of the results, as well as for post-acquisition signal processing and extraction of the results.

5 Results

Figure 5 displays the results obtained in one of the trials performed with the test object 1, consisting of the magnitude of the three components of the force (Fx, Fy and Fz) over time, for one of the sensors of the robotic hand. The sampling frequency used for data acquisition was 20 Hz and each trial lasted for approximately 12.5 s, which was enough to perform the previously described sequence of movements and to obtain an appropriate number of samples in each one of the stages. The estimation of the weight of each object was based on the assumption that the difference between the forces exerted by the robotic hand in stages (2) and (3) along the three directions sums up to the weight of the object. Another necessary assumption is that the friction (and resultant force) is constant during the movement, which is valid if there is no relative movement between the object and the sensors (no slips or oscillations). Therefore, the friction component of the force is nullified by calculating the difference between the


Fig. 5. Illustrative example of the results obtained for one trial performed with test object 1. The graphic represents the magnitude of the three components of the force (Fx, Fy, Fz) for one sensor over time as the sequence of movements is executed. The vertical lines along the time axis represent the temporal sequence of stages that constitute the overall movement: (1) initial position of the robotic hand; (2) lifting and grasping; (3) holding; (4) returning the object to its initial position (in contact with the table); (5) opening the robotic hand.

forces in stages (2) and (3). It was necessary to manually identify stages (2) and (3) for each one of the trials and compute the average force along the directions X, Y and Z for sensors 1, 2, 3 and 4 during those stages of the movement. The weight of the object was then estimated as the sum of the absolute differences between the average load and grip forces measured during stages (3) and (2), respectively (Eq. 1, where the superscripts denote the stage). The mass was computed according to Eq. 2, where m stands for the mass of the object in grams, $F_{Total}$ is the difference between the average forces in stages (2) and (3) that was identified as the weight of the object, and g is the acceleration due to gravity. Table 1 summarizes the results of the mass for objects 1 and 2 in both configurations.

$$F_{Total} = \sum_{i=1}^{4} \left[ \left|F_{x,i}^{(3)} - F_{x,i}^{(2)}\right| + \left|F_{y,i}^{(3)} - F_{y,i}^{(2)}\right| + \left|F_{z,i}^{(3)} - F_{z,i}^{(2)}\right| \right] \qquad (1)$$

$$m = \frac{1000\,F_{Total}}{g} \qquad (2)$$

The experimental results are a reasonable approximation of the actual mass of both objects with water. However, the standard deviation for these objects is considerable, which is mainly due to fluctuations in the initial positioning of the object that resulted in a non-optimal contact between the sensors and the surface of the objects and compromised data acquisition. Moreover, due to technical constraints, it was not possible to achieve the desired number of trials. On the other hand, the experimental mass of the test objects in the empty

Table 1. Experimental mass of test objects 1 and 2 in both configurations (E-Empty and W-With Water)

Object Index | Configuration | Mass (g) | Experimental Mass (g) | Standard Deviation (g)
1 | E | 40.0 | 73.8 | 8.1
1 | W | 94.0 | 94.6 | 13.8
2 | E | 26.0 | 52.0 | 19.2
2 | W | 76.0 | 78.5 | 7.4

configuration exhibits a significant deviation from its actual value. Since the sequence of movements was maintained for both configurations, it was verified that, regarding the test objects in the filled configuration, the added weight resulted in more stable contact points and, in general, more reproducible measurements. This fact contributes to the difference in accuracy observed between the experimental results for both configurations. Another relevant conclusion is the systematic overestimation of the mass for both objects and configurations. This overestimation can be explained by the acquisition of data from two sensors at each fingertip that do not contact the surface of the object in an optimal position. In fact, as can be observed in [reference to an image that illustrate the grasp], the contact is established in an intermediate position between the two sensors, which represents a deviation from the optimal position (at the center of the sensor's surface). Nevertheless, if only one sensor was used under similar circumstances, an underestimation of the mass would be expected.
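For readers who wish to reproduce the post-processing step, the following minimal Python sketch illustrates Eqs. (1)-(2). The array shapes, the manually chosen stage boundaries and the function name `estimate_mass` are illustrative assumptions and not part of the original Matlab pipeline.

```python
import numpy as np

G = 9.81  # acceleration due to gravity [m/s^2]

def estimate_mass(forces, stage2_idx, stage3_idx):
    """Estimate the mass (in grams) of a grasped object from skin-like sensor data.

    forces: array of shape (n_samples, n_sensors, 3) with the force components
            (Fx, Fy, Fz) measured by each sensor, in newtons.
    stage2_idx, stage3_idx: index ranges (e.g. slices) marking the manually
            identified grasping (2) and holding (3) stages of the movement.
    """
    # Average force per sensor and per axis during each stage.
    f_stage2 = forces[stage2_idx].mean(axis=0)   # shape (n_sensors, 3)
    f_stage3 = forces[stage3_idx].mean(axis=0)

    # Eq. (1): sum of absolute differences over sensors and axes, which
    # cancels the (assumed constant) friction component of the grip.
    f_total = np.abs(f_stage3 - f_stage2).sum()

    # Eq. (2): convert the net force to a mass in grams.
    return 1000.0 * f_total / G

# Example with synthetic data: 4 sensors, 20 Hz sampling, 12.5 s trial.
rng = np.random.default_rng(0)
forces = rng.normal(0.0, 0.01, size=(250, 4, 3))
forces[100:200] += np.array([0.0, 0.0, 0.05])   # extra load while holding
print(f"estimated mass: {estimate_mass(forces, slice(50, 100), slice(100, 200)):.1f} g")
```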

6 Conclusions and Future Work

This experimental work allowed us to obtain satisfactory results regarding the estimation of objects' weight on precision grips using skin-like sensors integrated on the humanoid robot Vizzy. However, more accurate and reproducible results will require an optimization of the experimental protocol concerning the positioning of the contact points between the sensors and the object's surface. For future work, one possible direction would be the adaptation of the data processing to achieve an estimation of the weight of deformable objects, for which the assumption of a constant friction force during the holding stage of the movement is no longer valid. Another relevant direction would be a more autonomous data processing in order to allow a real-time identification of the stages of the movement useful for weight estimation. This would be a crucial advancement towards an online estimation of the objects' weight, which could ultimately be used for real-time adjustments of the grip forces in order to avoid the occurrence of slip events during grasping. The elastomer body that contains the sensor is not tailored to measure weight. In an ideal scenario the sensor would be at an equal distance to the surface of contact. A spherical shape would be more suitable for these measurements than the one portrayed in Fig. 2.


Acknowledgements. This work was supported by FCT [UID/EEA/50009/2013], partially funded by POETICON++ [STREP Project ICT-288382], the FCT Ph.D. programme RBCog and FCT project AHA [CMUP-ERI/HCI/0046/2013] and the IEEE-IST EMBS Student Chapter Colab Sessions.

References

1. Chitta, S., Sturm, J., Piccoli, M., Burgard, W.: Tactile sensing for mobile manipulation. IEEE Trans. Robot. 27(3), 558–568 (2011)
2. Copot, D., Ionescu, C., Nascu, I., De Keyser, R.: Online weight estimation in a robotic gripper arm. In: 2016 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR), pp. 1–6. IEEE (2016)
3. Johansson, R.S., Flanagan, J.R.: Coding and use of tactile signals from the fingertips in object manipulation tasks. Nat. Rev. Neurosci. 10(5), 345–359 (2009)
4. Moreno, P., Nunes, R., de Figueiredo, R.P., Ferreira, R., Bernardino, A., Santos-Victor, J., Beira, R., Vargas, L., Aragão, D., Aragão, M.: Vizzy: a humanoid on wheels for assistive robotics. In: Robot (1), pp. 17–28 (2015)
5. Paulino, T., Ribeiro, P., Neto, M., Cardoso, S., Schmitz, A., Santos-Victor, J., Bernardino, A., Jamone, L.: Low-cost 3-axis soft tactile sensors for the human-friendly robot Vizzy. In: 2017 IEEE International Conference on Robotics and Automation (ICRA) (2017)
6. Su, Z., Fishel, J.A., Yamamoto, T., Loeb, G.E.: Use of tactile feedback to control exploratory movements to characterize object compliance. Front. Neurorobot. 6, 1–20 (2012)
7. Wettels, N., Parnandi, A.R., Moon, J.-H., Loeb, G.E., Sukhatme, G.S.: Grip control using biomimetic tactile sensing systems. IEEE/ASME Trans. Mechatron. 14(6), 718–723 (2009)

Kinematic Estimator for Flexible Links in Parallel Robots

Pablo Bengoa1, Asier Zubizarreta1(B), Itziar Cabanes1, Aitziber Mancisidor1, and Charles Pinto2

1 Automatic Control and System Engineering Department, University of the Basque Country (UPV/EHU), Ingeniero Torres Quevedo Plaza, 1, 48013 Bilbo, Spain
{pablo.bengoa,asier.zubizarreta}@ehu.eus
2 Mechanical Engineering Department, University of the Basque Country (UPV/EHU), Ingeniero Torres Quevedo Plaza, 1, 48013 Bilbo, Spain

Abstract. Control of flexible link parallel manipulators is still an open area of research. The flexibility and deformations of the limbs make the estimation of the Tool Center Point (TCP) position a non-trivial task, which is one of the main challenges in this type of robot. In the literature, different approaches to estimate this deformation and determine the location of the TCP have been proposed. However, most of these approaches require the use of high computational cost integration methods or expensive measurement systems. This work presents a novel approach which can not only estimate precisely the deformation of the flexible links (less than 3% error), but also its derivatives (less than 4% error). The validity of the developed estimator is tested in a Delta Robot, resulting in less than 0.025% error in the estimation of the TCP position in comparison with the results obtained with ADAMS Multibody software.

Keywords: Kinematic estimator · Flexible link · Parallel robot

1 Introduction

Nowadays, higher quality and shorter production times are required in industry due to globalization. However, for those robots traditionally used in industry, the serial robots, an increase in speed usually implies a loss in accuracy. Hence, when both quality and speed requirements need to be met, the use of Parallel Kinematic Robots (PKR) is more appropriate [11]. This type of robot is composed of multiple kinematic chains which connect a fixed or base platform and a mobile one, where the Tool Center Point (TCP) is located. This structure provides higher precision, stiffness and load/weight ratio [3]. Trying to achieve higher productivity, manufacturers have reduced the cross-section of the limbs in order to decrease the moving mass of the robot. This, in addition to the high accelerations demanded in short operation cycles, causes a


certain degree of elastic deformation of the limbs [3]. These deformations have an influence on the kinematics and dynamics of the robots, causing in some cases substantial errors in motion control. Hence, the tracking of the TCP's trajectory is one of the most challenging problems caused by elastic deformation, since it must be compensated so that the accuracy requirements are fulfilled. When stiff links are considered, the location of the TCP is usually determined by the kinematic model of the robot, which uses the data from the sensors attached to the actuated joints of the robot. In the case of flexible robots, however, the TCP location depends not only on the joint positions but also on the deformation of its flexible links. Therefore, as the deformation of different flexible elements must be considered, the stiff element assumption may not be accurate enough [14]. In the literature, different types of sensors have been considered for measuring link flexibility. One of the most accurate approaches is the use of artificial vision [4]. However, most visual servoing approaches require the robot and image Jacobian to map end-effector velocity [9], and, in flexible manipulators, these Jacobians need the information of the flexible variables [12], so they still need to be measured [7]. Hence, even if precise global data can be derived from vision systems, their use is limited by their computational cost, the low sampling rate and the effects of visual obstruction. To overcome these limitations, accelerometers and strain gauges have been widely used. Accelerometers are usually placed at the tip of each flexible link, and their measurements are traditionally used to stabilize the oscillation of the TCP by vibration control approaches [13]. Hence, it is possible to estimate the TCP location of the manipulator if joint motions are sensorized [6], since the tip acceleration contains information of both the flexible-link vibrations and the rigid-body motions. This approach, however, requires integrating the acceleration signal twice for velocity and position estimations, which provides noisy measurements that may contain biases, resulting in a high accumulation of errors [5]. On the other hand, strain gauges have been widely used to measure local deformation along the link. Strain gauges are an attractive approach since they do not require the knowledge of dynamic parameters, such as link masses or inertias, to estimate the deformation of the flexible links. However, strain gauges suffer from biasing due to electromagnetic interferences and they are prone to temperature variations and measurement noise [1]. Thus, even with the use of additional sensors, measurement of the deformation is not a trivial task. Based on this fact, some authors have proposed the use of sensor fusion techniques to improve the measurement. In [8], the forward kinematics of a 6 degrees of freedom (DOF) robot is estimated by using an extended Kalman filter to process accelerometer and encoder data, while in [16] the fusion of accelerometer and encoder signals using a disturbance observer to compensate the nonlinearities of the deformation was proposed. However, even though the use of sensor fusion techniques improves the accuracy of the measurement, those techniques usually require a high computational load [15].


In this work, a novel approach to estimate the deformation of flexible links is proposed for control purposes. The approach is based on the fusion of a mathematical model and a single high resolution optical encoder. Using this measurement, the position and orientation of the TCP of flexible robots can be estimated accurately. In addition, the proposed approach estimates the deformation of the flexible links with minimal computational cost, providing significant advantages for Real-Time implementation over the aforementioned approaches. The rest of the paper is organized as follows: Sect. 2 outlines both the concept and the theoretical development of the presented estimator. In Sect. 3, the theoretical approach is applied to a Delta robot, and its simulation results are discussed in Sect. 4. Finally, the most important ideas are summarized.

2 Theoretical Development of the Kinematic Estimator

In robotics, the Direct Kinematic Model allows calculating the trajectory in the operational space x(t) for any given joint space trajectory q(t), that is, f(q(t)) = x(t). Therefore, it is easy to see that for any flexible link manipulator, the direct kinematic equation admits multiple solutions, as the deformation of the links provides more than one end-effector position/orientation for the same configuration of joints q (Fig. 1).

Fig. 1. A flexible manipulator with the same configuration of rigid joints qr (q1, q2) but multiple end-effector positions/orientations due to the flexibility of the links, qf (qfd, qfs) (qfd: link tip transverse flexural deflection; qfs: link tip flexural slope).

Therefore, the Direct Kinematic Model has to be defined not only in terms of the rigid joint motion variables qr but also in terms of the flexible variables qf,

$$x(t) = f(q_r, q_f) \qquad (1)$$

Measurement of the rigid joints can be easily carried out using extra sensors attached to the joints. However, as stated in the introduction, the measurement of the flexible DOF can be a challenging task. In this section, a novel estimator based on a normal mode analysis approach is proposed, which allows estimating the deformation with high accuracy and minimal computational cost.

2.1 Fundamentals of the Estimator

Normal mode analysis is a mathematical tool which represents a pattern of motion in which all parts of a system move with the same frequency and a fixed phase relation. Hence, this mathematical tool provides the kinematic relation between the different DOF of the system based on its dynamic properties, such as its structure, material and boundary conditions. Moreover, the general motion of a system is composed as a superposition of its normal modes, all normal modes being orthogonal to each other, since the excitation of one mode does not affect a different mode. This way, if a particular mode is considered and the modal motion of a single degree of freedom is measured, the modal motions of the complete set of DOF qDOF can be estimated,

$$q_{DOF} = \sum_{k=1}^{n_{DOF}} X_k\, \eta_k \qquad (2)$$

where Xk is the kth mode shape or eigenvector and ηk is the kth modal motion [10]. Several researchers [2] have suggested considering only the first few modes in the model, neglecting the high frequency ones since their amplitudes are much smaller. Furthermore, in control applications, the bandwidth of the working frequencies is limited by the application itself and the actuator system. This usually ensures that even the second lowest resonant frequency of the manipulator is outside the working bandwidth over the whole robot workspace. So, the deflection of the limbs can be estimated considering just the first natural frequency mode of the manipulator. Hence, (2) can be simplified to,

$$q_{DOF} = \sum_{k=1}^{n_{DOF}} X_k\, \eta_k \approx X_1\, \eta_1 \qquad (3)$$

This constitutes the basis of the proposed estimator. Let us assume that a robot is composed of a series of flexible links i = 1, . . . n connected with several stiff links. If the modal analysis of each flexible link is carried out based on the previous considerations, the final deflection of the ith link can be estimated with a properly placed sensor that measures a single flexible DOF. Consider, for instance, the example of Fig. 2, where the flexible link b is connected with two stiff links (a, c) using rotary joints. If the Euler-Bernoulli Beam Theory and a Finite Element Method approach are used to model the flexible link, one of the flexible variables, qfsi, defines the deformation slope at the tip of the flexible link. If this angular deformation is measured by a high resolution optical encoder and its data is introduced in (3), by substituting η1 = qfsi, the total deflection at the tip of the link can be estimated, and therefore the kinematics of the TCP. The developed approach presents several advantages over previous ones. First, since each flexible link is considered separately, the obtained kinematic relation of the deformations is constant for a given mode, resulting in a low computational cost approach. Hence, the normal modes of each link are independent of


Fig. 2. Schematic of a flexible link (b) connected with two stiff links (a, c) using rotary joints.

the manipulator configuration, which allows calculating the required matrices off-line, reducing the time to compute the link deformation. Second, the approach provides the required accuracy for control purposes, as long as the deflections are small enough (small deflection assumption). Finally, the relationship holds not only for the position problem, but also for the velocity and acceleration problems, as the vector Xk defines the relationship between the kth mode of the flexible DOF motions of the ith link. Hence, the full kinematic relations between the flexible variables are,

$$q_{f_i} = \sum_{k=1}^{n_{DOF}} X_{k_i}\, \eta_{k_i} \approx X_{1_i}\, \eta_{1_i} \qquad
\dot{q}_{f_i} = \sum_{k=1}^{n_{DOF}} X_{k_i}\, \dot{\eta}_{k_i} \approx X_{1_i}\, \dot{\eta}_{1_i} \qquad
\ddot{q}_{f_i} = \sum_{k=1}^{n_{DOF}} X_{k_i}\, \ddot{\eta}_{k_i} \approx X_{1_i}\, \ddot{\eta}_{1_i} \qquad (4)$$

In the following section, the procedure to calculate the generalised inertia matrix and the stiffness matrix of each link is detailed, so that the modal analysis and their eigenvalues can be calculated.

2.2 Modelling of Flexible Links

When modelling flexible link robots, one of the most used approaches is the Finite Element Method (FEM). According to this procedure, each flexible link i is an assemblage of a finite number, ni, of small elements of length li which are interconnected at certain points called nodes. Each element is referenced as ij, where the subscript j denotes the number of the element. The more elements per link (hence, the smaller the elements), the more precise the overall solution of the system will be, converging to the exact solution as closely as desired at the cost of a higher computational cost. In order to solve the dynamic problem, the energy of each element is analysed using the Euler-Lagrange method. However, before calculating the potential and kinetic energy, the position vector r0i of each element must be defined (Fig. 3). For simplicity, this vector is expressed in terms of a local coordinate vector ri and then translated into the inertial frame. This is accomplished by using the transformation matrix T0i,

Fig. 3. Schematics of the DOF of each link's element.

$$\mathbf{r}_{0i} = T_{01}\begin{bmatrix} L_1 \\ 0 \\ u_{2n_1+1} \end{bmatrix} + T_{02}\begin{bmatrix} L_2 \\ 0 \\ u_{2n_2+1} \end{bmatrix} + \ldots + T_{0i}\,\mathbf{r}_i \qquad (5)$$

where,

$$\mathbf{r}_i = \begin{bmatrix} (j-1)\,l_i + x_{ij} \\ 0 \\ z_{ij} \end{bmatrix} \qquad (6)$$

According to Euler-Bernoulli beam theory, all elements possess two DOF at each end of the element: a transverse flexural deflection (u2j−1 and u2j+1) and a flexural slope (u2j and u2j+2) (Fig. 3). These flexible DOF are related by the shape functions φk(x), which describe the flexural displacement z, as

$$z(x,t) = \sum_{k=1}^{4} \phi_k(x)\, u_{2j-2+k}(t) \qquad (7)$$

where, for element j of the ith link, the shape functions are defined by Hermitian polynomials,

$$\phi_1(x) = 1 - 3\frac{x^2}{l_{ij}^2} + 2\frac{x^3}{l_{ij}^3}, \qquad
\phi_2(x) = x - 2\frac{x^2}{l_{ij}} + \frac{x^3}{l_{ij}^2},$$
$$\phi_3(x) = 3\frac{x^2}{l_{ij}^2} - 2\frac{x^3}{l_{ij}^3}, \qquad
\phi_4(x) = -\frac{x^2}{l_{ij}} + \frac{x^3}{l_{ij}^2} \qquad (8)$$

where lij is the length of the element and the boundary conditions to be satisfied by the displacement variables x and z(x) are,

$$z(0) = u_{2j-1}, \quad \frac{\partial z(0)}{\partial x} = u_{2j}, \quad z(l) = u_{2j+1}, \quad \frac{\partial z(l)}{\partial x} = u_{2j+2} \qquad (9)$$

After defining the position vector, the potential energy Vij and the kinetic energy Tij of each element are computed in terms of the generalised variables of the system q = (q1, q2, ...qn)^T and their time derivatives q̇. Afterwards, the total kinetic T and potential V energies for the entire system are obtained by summing those energies. This is,

$$T(\mathbf{q},\dot{\mathbf{q}}) = \sum_{i=1}^{m}\sum_{j=1}^{n_i} T_{ij} \qquad V(\mathbf{q}) = \sum_{i=1}^{m}\sum_{j=1}^{n_i} V_{ij} \qquad (10)$$

where m is the number of links and Tij is the kinetic energy, which is obtained by,

$$T_{ij} = \frac{1}{2}\int_{0}^{l_{ij}} \rho_i A_i \left(\frac{\partial \mathbf{r}}{\partial t}\right)^{T} \left(\frac{\partial \mathbf{r}}{\partial t}\right) dx = \frac{1}{2}\,\dot{\mathbf{z}}_j^{T} M_{ij}\, \dot{\mathbf{z}}_j \qquad (11)$$

where the matrix Mij is the generalised inertia matrix of element ij. This matrix is symmetrical and each (k, o) element of it is defined as,

$$M_{ij}(k,o) = \int_{0}^{l_i} \rho_i A_i \left(\frac{\partial \mathbf{r}}{\partial z_{jk}}\right)^{T} \left(\frac{\partial \mathbf{r}}{\partial z_{jo}}\right) dx_{ij}, \qquad o, k = 1, 2, ...n_q \qquad (12)$$

where ρi is the mass density, Ai is the cross-section area of the element, zjk is the kth element of zj = [qr, u2j−1, u2j, u2j+1, u2j+2]^T, nq is the number of variables of the system and qr = [q1, q2 ...qnr] is the vector of variables associated to the stiff model. Following a similar procedure, the overall potential energy of the system in (10) is obtained. The potential energy is divided into two components: the potential energy due to the elasticity (Veij) and the potential energy due to the gravity (Vgij). This is,

$$V_{ij} = V_{g_{ij}} + V_{e_{ij}} = \int_{0}^{l_{ij}} \rho_{ij} A_{ij}\, g\, [0\ 0\ 1]\, \mathbf{r}\, dx_{ij} + \frac{1}{2}\int_{0}^{l_{ij}} E I_i \left(\frac{\partial^2 y_{ij}}{\partial x_{ij}^2}\right)^2 dx_{ij} \qquad (13)$$

By rewriting the elasticity term of (13), the following expression is obtained,

$$V_{e_{ij}} = \frac{1}{2}\,\mathbf{z}_j^{T} K_{ij}\,\mathbf{z}_j \qquad (14)$$

where Kij is the stiffness matrix of the jth element of the ith link. In order to determine the deflection of each link, the generalised stiffness and inertia flexible submatrices (Kif and Mif, respectively) have to be defined. These matrices are obtained by selecting the submatrices associated to the flexible variables qf from the generalised inertia and stiffness matrices. After obtaining the generalised inertia submatrix Mif and the stiffness submatrix Kif of each limb, the Boundary Conditions (BC) of each flexible link have to be defined, that is, the deflection (w), the slope (w'), the shear force (v) and/or the bending moment (M) at both ends of each link (x = 0 and x = lTi). Thus, depending on the link's configuration, different BC are defined.
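As an illustration of the element-level quantities used in (8)-(14), the sketch below evaluates the Hermitian shape functions and builds the consistent mass and stiffness matrices of a uniform Euler-Bernoulli beam element. The closed-form 4×4 matrices are the standard textbook results that follow from (11) and (14) under the assumption of uniform ρ, A, E and I per element; they are not given explicitly in the paper, and the function names and single-element example are illustrative.

```python
import numpy as np

def shape_functions(x, l):
    """Hermitian shape functions of Eq. (8) for a beam element of length l."""
    xi = x / l
    return np.array([
        1.0 - 3.0 * xi**2 + 2.0 * xi**3,   # phi_1
        x * (1.0 - xi)**2,                  # phi_2 (= x - 2x^2/l + x^3/l^2)
        3.0 * xi**2 - 2.0 * xi**3,          # phi_3
        x * (xi**2 - xi),                   # phi_4 (= -x^2/l + x^3/l^2)
    ])

def element_matrices(rho, A, E, I, l):
    """Consistent mass and stiffness matrices of a uniform Euler-Bernoulli
    element, for the DOF ordering [u_{2j-1}, u_{2j}, u_{2j+1}, u_{2j+2}]."""
    M = rho * A * l / 420.0 * np.array([
        [156.0,   22*l,   54.0,  -13*l],
        [ 22*l,  4*l*l,   13*l, -3*l*l],
        [ 54.0,   13*l,  156.0,  -22*l],
        [-13*l, -3*l*l,  -22*l,  4*l*l],
    ])
    K = E * I / l**3 * np.array([
        [ 12.0,   6*l,  -12.0,   6*l],
        [  6*l, 4*l*l,   -6*l, 2*l*l],
        [-12.0,  -6*l,   12.0,  -6*l],
        [  6*l, 2*l*l,   -6*l, 4*l*l],
    ])
    return M, K

# Example: a single element with the aluminium platen properties of Table 2.
M_e, K_e = element_matrices(rho=2740.0, A=0.015 * 0.003, E=71e9,
                            I=0.015 * 0.003**3 / 12.0, l=0.400)
```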

2.3 Modal Analysis of the Flexible Links

After calculating the inertia and stiffness submatrices, the modal analysis procedure can be applied. For that purpose, a reduced set of equations of motion is required in which the damping and the applied load are not taken into account,

$$\left[M_{i_f}\right]_{BC} \ddot{\mathbf{q}} + \left[K_{i_f}\right]_{BC}\,\mathbf{q} = 0 \qquad (15)$$

To solve (15), a harmonic solution based on (16) is proposed.

$$\mathbf{q}_f = X \sin(\omega t) \qquad (16)$$

where ω is the natural frequency vector. The harmonic solution defines the deflection relationship between all the DOF of the link. Therefore, since the structural configuration does not change during motion, for a given mode it is possible to relate the effects of flexibility on each DOF if just one DOF amplitude is measured. This is the key concept to develop the proposed estimator, as stated at the beginning of Sect. 2. By substituting (16) into (15) and simplifying it, the following equation is obtained,

$$\left(\left[K_{i_f}\right]_{BC} - \omega_i^2 \left[M_{i_f}\right]_{BC}\right) X_i = 0, \qquad i = 1, 2, 3...n_{DOF} \qquad (17)$$

where the number of eigenvalues and eigenvectors nDOF is equal to the number of DOF of the discretised limb. Hence, by solving (17), the eigenvector Xi can be calculated. This eigenvector, as shown in (4), represents the relationship between the flexible DOF of the link. As stated in the introduction of Sect. 2, this calculation can be carried out off-line, since the inertia and stiffness submatrices do not depend on the robot's position. This way, if one of the flexible DOF is measured, the rest can be estimated based on the calculated eigenvector.
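A compact sketch of the estimation step itself is given below: the generalized eigenproblem (17) is solved off-line for the flexible submatrices, and the first mode shape is then used, as in (3)-(4), to recover every flexible DOF from the single DOF measured by the optical encoder. The assembly of the matrices and the application of the boundary conditions are assumed to be done elsewhere; the function names and the small example matrices are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def first_mode_shape(K_f, M_f):
    """Solve Eq. (17) for the flexible submatrices (with BC already applied)
    and return the mode shape X_1 of the lowest natural frequency."""
    eigvals, eigvecs = eigh(K_f, M_f)          # generalized problem K x = w^2 M x
    return eigvecs[:, np.argmin(eigvals)]

def estimate_flexible_dof(X1, measured_value, measured_index):
    """Eq. (3): scale the first mode shape so that it matches the single
    measured flexible DOF (e.g. the tip slope read by the encoder).
    The same relation applies to velocities and accelerations, as in Eq. (4)."""
    eta1 = measured_value / X1[measured_index]  # first modal coordinate
    return X1 * eta1                            # estimate of all flexible DOF

# Example with small illustrative matrices for one flexible limb.
K_f = np.array([[4.0, -2.0], [-2.0, 3.0]])
M_f = np.array([[2.0,  0.5], [ 0.5, 1.0]])
X1 = first_mode_shape(K_f, M_f)
q_f = estimate_flexible_dof(X1, measured_value=1e-3, measured_index=1)
print(q_f)   # estimated deflection and slope at the link tip
```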

3 Case Study: Delta Robot

In the following section, the developed estimator will be implemented in a flexible-link parallel Delta Robot in order to validate it. For that purpose, the procedure detailed in the previous section will be applied.

3.1 The Delta Robot Description

The Delta robot is a widely used, three DOF parallel robot whose lightweight structure provides fast dynamic capabilities, being mainly used for pick and place applications. This robot is composed of 3 uniformly distributed arms (i = 1, 2, 3) that connect the mobile platform with the fixed base. Each of these arms is composed of two limbs: the upper one is connected to the fixed base by an actuated rotational joint called qai (for i = 1, 2, 3) and the lower one is composed of an articulated parallelogram which allows passive rotations in two directions (qnai and αi for i = 1, 2, 3), as shown in Fig. 4. This configuration limits the motion of the TCP, which is located in the mobile platform, to a pure three dimensional translational motion. For the case study, the main parameters of the robot (Table 1) have been obtained from an Omron Mini Delta CR-UGD4MINI-NR commercial robot, whose flexible model has also been implemented in ADAMS Multibody Software. In order to increase the deformations, AW5083/H111 aluminium platens, whose mechanical properties are summarised in Table 2, have been used in the lower limbs.

Fig. 4. Robot schematic with the most important parameters and variables.

Table 1. Parameters of Omron Mini Delta CR-UGD4MINI-NR

Component | Length (m) | Mass (kg) | Inertia (kg m²)
Fixed base | |ai| = 0.100 | - | -
Upper link Li | 0.150 | 0.0365 | ILixx = 2.2781·10⁻⁶, ILiyy = 8.7001·10⁻⁵, ILizz = 8.6422·10⁻⁵
Lower link lTi | 0.400 | 0.1319 | Ilixx = 2.0023·10⁻⁶, Iliyy = 0.0010, Ilizz = 0.0010
Mobile platform | |di| = 0.040 | 0.1278 | Ipxx = 4.6225·10⁻⁵, Ipyy = 4.6225·10⁻⁵, Ipzz = 9.1472·10⁻⁵

3.2 Simulation Setup

In order to analyse the flexible behaviour of the flexible links and validate the proposed approach, a flexible simulation model of the robot developed in ADAMS Multibody Software has been excited with a periodical movement along the z axis. Hence, a sinusoidal motion has been applied to the actuated joints qai,

$$q_{a1} = q_{a2} = q_{a3} = 0.15 - 0.1 \sin(6\pi t) \qquad (18)$$

Two factors have been considered to carry out the validation of the proposed estimator. First, the deformation estimation capability of the developed approach is analysed. Based on FEM, a certain number of nodes composes each flexible link. Each of those nodes has two DOF, the deflection displacement and the slope deflection. Hence, in order to analyse the accuracy of the approach, the estimation of each DOF is compared with the data obtained from ADAMS Multibody Software. Since the developed estimator not only estimates the position and the orientation of the DOF, but also their speed and acceleration, the accuracy of the estimator is analysed in position, speed and acceleration. Results are shown in Fig. 5. As can be seen, the estimation is accurate in position, speed and acceleration. Hence, the developed approach is able to estimate the deflection of the tip with a maximum error of less than 0.005 mm, which is about 3% of the maximum deflection amplitude, less than 10⁻⁴ m/s (4%) in speed estimation error and around 2·10⁻³ m/s² (4%) in acceleration estimation error.

Table 2. Mechanical properties of the aluminium platens.

Length | Width | Thickness | Young's modulus | Mass density
0.400 m | 0.015 m | 0.003 m | 71 GPa | 2740 kg/m³

Fig. 5. Estimator link deflection estimation compared with the real deflection at the tip of the link obtained with ADAMS Multibody Software: (a) position, (b) speed and (c) acceleration.

On the other hand, the performance of the estimator in calculating the Direct Kinematic Problem is also evaluated. This model is derived from the loop closure equations of the mechanism (Fig. 6). For the defined Delta robot, these equations are,

$$\Gamma_i(\mathbf{x},\mathbf{q}) = \mathbf{a}_i + \mathbf{L}_i(q_a) + \mathbf{l}_{T_i}(q_a, q_{na}, \alpha_i) + \boldsymbol{\delta}_i - \mathbf{d}_i - \mathbf{p}_x = 0 \qquad (19)$$

where i = 1, 2, 3 is the link identification and δi is the variable to be estimated, which defines the deflection distance at the flexible link end with respect to its analogous rigid link. Hence, the position of the TCP of the robot is calculated by solving for px. Figure 7 shows the error between the estimated TCP position and the one obtained from the flexible ADAMS Multibody Software model for the x, y and z components. As can be seen, the estimation error of the developed approach rises up to 8·10⁻⁶ m for the x axis,

Fig. 6. Graphical representation of the loop closure of a Delta Robot.


Fig. 7. TCP deflection estimation error of the developed approach.

1.7·10⁻⁶ m for the y axis and 0.5·10⁻⁶ m for the z axis, which means errors of less than 0.025%, 0.005% and 0.001% of the applied movement, respectively.
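To make the use of (19) concrete, the following sketch recovers the TCP position from one kinematic chain once the rigid-link vectors and the estimated deflection are available. The vector names mirror (19); how each term is obtained from the joint readings is assumed to be implemented elsewhere, and all numerical values are illustrative.

```python
import numpy as np

def tcp_from_chain(a_i, L_i, lT_i, delta_i, d_i):
    """Solve the loop closure of Eq. (19) for the TCP position p_x of one arm:
    p_x = a_i + L_i + lT_i + delta_i - d_i, with delta_i the estimated deflection
    of the flexible link end with respect to its rigid counterpart."""
    return a_i + L_i + lT_i + delta_i - d_i

# Example (all vectors expressed in the fixed base frame, values illustrative).
a_i     = np.array([0.100, 0.0,  0.000])   # fixed base attachment
L_i     = np.array([0.120, 0.0, -0.090])   # upper (rigid) link
lT_i    = np.array([0.050, 0.0, -0.395])   # lower (flexible) link, rigid model
delta_i = np.array([2e-5,  0.0, -1e-5])    # estimated tip deflection
d_i     = np.array([0.040, 0.0,  0.000])   # mobile platform attachment
print(tcp_from_chain(a_i, L_i, lT_i, delta_i, d_i))
```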

4 Conclusions

This work presents a novel approach for the deformation estimation of flexible links. The approach is based on the use of modal analysis, the Euler-Bernoulli Beam Theory and the Finite Element Method. This estimator, which is focused on parallel robot control applications, provides accurate and computationally efficient estimations in comparison with other approaches. The proposed estimator has been validated in a flexible Delta Robot which has been modelled in ADAMS Multibody Software. The deformation data obtained by the estimator was compared with the one obtained by ADAMS, resulting in a maximum error of less than 3%, 4% and 4% for the deformation position, speed and acceleration estimation, respectively. Furthermore, the TCP position has also been estimated, resulting in a maximum TCP error of 0.025%.

Acknowledgement. This work was supported in part by the Spanish Ministry of Economy and Competitiveness under grant BES-2013-066142, UPV/EHU's PPG17/56 projects, the Spanish Ministry of Economy and Competitiveness' MINECO & FEDER funds within the DPI-2012-32882 project, and the Basque Country Government's (GV/EJ) PRE-2014-1-152 and BFI-2012-223 grants and recognized research group IT914-16.


References

1. Bascetta, L., Rocco, P.: End-point vibration sensing of planar flexible manipulators through visual servoing. Mechatronics 16(3–4), 221–232 (2006). doi:10.1016/j.mechatronics.2005.11.005
2. Book, W.: Modeling, design, and control of flexible manipulator arms: a tutorial review. In: 29th IEEE Conference on Decision and Control, vol. 2, pp. 500–506. IEEE (1990). doi:10.1109/CDC.1990.203648
3. Chen, X., Li, Y., Deng, Y., Li, W., Wu, H.: Kinetoelastodynamics modeling and analysis of spatial parallel mechanism. Shock Vib. 2015, 1–10 (2015). doi:10.1155/2015/938314
4. Corke, P.I., Hutchinson, S.A.: Real-time vision, tracking and control. In: 2000 IEEE International Conference on Robotics and Automation, pp. 622–629 (2000). doi:10.1109/ROBOT.2000.844122
5. Dubus, G.: On-line estimation of time varying capture delay for vision-based vibration control of flexible manipulators deployed in hostile environments. In: 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3765–3770. IEEE (2010). doi:10.1109/IROS.2010.5651211
6. Gu, M., Asokanthan, S.F.: Combined discrete-distributed control of a single-link flexible manipulator using a Lyapunov approach. J. Dyn. Syst. Meas. Contr. 121(3), 448 (1999). doi:10.1115/1.2802495
7. Gu, M., Piedboeuf, J.C.: A flexible-link as an endpoint position and force detection unit. IFAC Proc. Volumes (IFAC-PapersOnline) 15(1), 361–366 (2002). doi:10.1016/S0967-0661(03)00105-9
8. Henriksson, R., Norrlof, M., Moberg, S., Wernholt, E., Schon, T.B.: Experimental comparison of observers for tool position estimation of industrial robots. In: Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with the 2009 28th Chinese Control Conference, pp. 8065–8070. IEEE (2009). doi:10.1109/CDC.2009.5400313
9. Hutchinson, S., Hager, G., Corke, P.: A tutorial on visual servo control. IEEE Trans. Robot. Autom. 12(5), 651–670 (1996). doi:10.1109/70.538972
10. Maia, N.M.M., Montalvao e Silva, J.M.M.: Theoretical and Experimental Modal Analysis. Research Studies Press (1997)
11. Merlet, J.: Parallel Robots, 2nd edn. Springer Science & Business Media (2006)
12. Piedboeuf, J.C.: The Jacobian matrix for a flexible manipulator. J. Robotic Syst. 12(11), 709–726 (1995). doi:10.1002/rob.4620121102
13. Rodriguez-Donate, C., Morales-Velazquez, L., Osornio-Rios, R.A., Herrera-Ruiz, G., Romero-Troncoso, R.d.J.: FPGA-based fused smart sensor for dynamic and vibration parameter extraction in industrial robot links. Sensors 10(4), 4114–4129 (2010). doi:10.3390/s100404114
14. Rognant, M., Courteille, E., Maurine, P.: A systematic procedure for the elastodynamic modeling and identification of robot manipulators. IEEE Trans. Rob. 26(6), 1085–1093 (2010). doi:10.1109/TRO.2010.2066910
15. Trejo-Hernandez, M., Osornio-Rios, R.A., de Jesus Romero-Troncoso, R., Rodriguez-Donate, C., Dominguez-Gonzalez, A., Herrera-Ruiz, G.: FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts. Sensors 10(4), 3373–3388 (2010). doi:10.3390/s100403373
16. Zheng, J., Fu, M.: A reset state estimator using an accelerometer for enhanced motion control with sensor quantization. IEEE Trans. Control Syst. Technol. 18(1), 79–90 (2010). doi:10.1109/TCST.2009.2014467

Tactile-Based In-Hand Object Pose Estimation

David Álvarez1(B), Máximo A. Roa2, and Luis Moreno1

1 Robotics Lab, Carlos III University of Madrid, Leganes, Spain
{dasanche,moreno}@ing.uc3m.es
2 German Aerospace Center - DLR, Wessling, Germany
[email protected]

Abstract. This paper presents a method to estimate the pose of an object inside a robotic hand by exploiting contact and joint position information. Once an initial visual estimation is provided, a Bootstrap Particle Filter is used to evaluate multiple hypotheses for the object pose. The function used to score the hypotheses considers the feasibility and physical meaning of the contacts between the object and the hand. The method provides a good estimation of the in-hand pose for different 3D objects.

Keywords: Robotic grasping · Object pose estimation · Tactile sensing

1 Introduction

Grasp planning is generally considered as the computation of the locations where the fingers should contact an object in order to obtain a stable grasp. However, those theoretical locations are often hard to reach in real life due to uncertainties in the sensing, modeling, planning or execution phases, leading to a hand-object configuration different from the planned one. In-hand object pose estimation is an online cognitive process that humans perform while grasping or manipulating objects. Information acquired from different sources (including action, vision, tactile, and semantic knowledge) is linked in an interactive manner [1]. Tactile sensing is especially important when the object is inside the hand; in fact, experiments have shown that humans fail to accurately perform manipulation tasks when their sense of touch is impaired [2]. The problem of finding the pose of an object when it is grasped by a robotic hand has become a topic of high interest. A common approach starts with a vision-based pose estimation, which is later improved by using information coming from tactile sensors in the hand. In [3] the vision estimate is refined by minimizing the distance from the object's point cloud to the contact locations in the hand. An offline description of the object's facets to find possible combinations that match the current sensor measurements is used in [4]. The idea of preventing physically unfeasible solutions by discarding collisions between the object and the hand was included in [5] with an estimation based on a particle filter. A similar idea was presented in [6], in which a manifold of the state space that respects the contact constraints is used for a 2D analysis when the hand is pushing an object. An approach that uses arrays of tactile sensors to recognize the object and its 6D pose by exploring the object's surface and edges, using a particle filter combined with an Iterative Closest Point approach, was used in [7]. Estimation of an object's pose combining stereo vision and a force-torque sensor mounted on the wrist of a robot was reported in [8]. It also used joint positions to estimate the location of the fingers with respect to the object's faces, an idea also exploited in [9] in the context of a global search of the pose of the object.

Statement of contributions: this work presents the in-hand estimation of the 6D pose for a known object by exploiting tactile and joint information in the robotic hand, starting with an initial vision-based estimation of the pose. Based on ideas presented in [5] for 2D objects, geometrically meaningful constraints are included in a particle filter, where each particle represents a pose hypothesis of the object. In comparison to previous works, the estimation not only uses the contact locations, but also the force measurements provided by pressure-based sensors in the hand. Moreover, properties of a stable grasp are included in the cost function that evaluates the best estimation.

(This work has received funding from the Spanish Ministry of Economy, Industry and Competitiveness under the projects DPI2013-47944-C4-3-R and DPI2016-80077-R, and the RoboCity2030-III-CM project, stage III, S2013/MIT-2748, cofunded by "Programas de Actividades I+D en la Comunidad de Madrid", and by Structural Funds of the EU.)

2 Problem Description

The overall process of a grasp execution starts with a visual estimation of the object's pose in the robot frame. This is used to plan the grasp, and then the robotic arm is generally sent to a pre-grasp pose. The initial pose estimation of the object can be described with respect to the reference frame of the wrist by

$$\mathbf{x} = [\mathbf{q}, \mathbf{t}]^T = [q_x, q_y, q_z, q_w, t_x, t_y, t_z]^T \qquad (1)$$

where q is a rotation expressed as a quaternion, and t is a translation vector. The grasping process is then triggered, and the fingers start closing towards the object. When any contact with the object is detected, the pose estimation starts (after that moment the object might be moved by the hand). Therefore, the problem of finding the pose of an object with respect to the hand (wrist) can be formulated as finding a set of parameters that define a transformation (1) that matches the information provided by the tactile sensors. The ReFlex TakkTile hand used in this work (Fig. 1) is equipped with two types of sensors: pressure sensors located along the fingers (9 along each finger) and the palm (11 sensors), and magnetic encoders in the proximal joint and the tendon that moves the distal joint, which allow computing the location of both phalanges in each finger [10]. Therefore, the position of the tactile sensors with respect to the


Fig. 1. Experimental setup for in-hand object pose estimation. A reference frame parallel to the wrist reference frame is shown in the lower right corner.

wrist and the normal vector to the surface of each sensor can be inferred. The sensors deliver force measurements based on the pressure transmitted by the rubber that covers the fingers. However, since the pressure flows through the rubber, one single contact with an object may be detected by two consecutive sensors. When this happens, a linear combination of measurements is performed to compute the actual contact location ci with respect to the wrist, as follows:

$$c_i = t_i + \left(1 - \frac{f_i}{f_i + f_{i+1}}\right) \times (t_{i+1} - t_i) \qquad (2)$$

where ti is the position of sensor i and fi its force measurement. Note that this assumes that only one contact per link with the object is possible. The initial pose estimation of the object with respect to the hand is obtained by computing the relative transformation between two Apriltags located in the hand wrist and on the object (Fig. 1). Assuming that the CAD model of the object is available, the problem of in-hand pose estimation is tackled by combining the following general ideas:

I. When a contact is detected by a sensor, the estimated position of the object must produce a contact at the same location. The opposite situation is illustrated in the left side of Fig. 2, where the estimated object pose cannot explain the contact readings in two fingers.
II. The estimated position of the object should not be in collision with the hand (just in contact). Besides, the object cannot float without contacting the hand at all when at least one sensor reading is positive.
III. The inward normal of the object surface at the contact location and the outward normal at the contact surface in the hand should have the same direction. When friction is considered, the normals do not necessarily have to be aligned, but, since the friction coefficient is in general not known, the angle between the normals should be as close to 0 as possible (Fig. 2).
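A minimal Python sketch of the contact interpolation of Eq. (2) is given below; the sensor positions and force readings used in the example are illustrative, and the helper name `contact_location` is not part of the hand's driver API.

```python
import numpy as np

def contact_location(t_i, t_next, f_i, f_next):
    """Eq. (2): interpolate the contact point between two consecutive sensors
    (positions t_i, t_next w.r.t. the wrist) from their force readings."""
    w = 1.0 - f_i / (f_i + f_next)   # weight towards the neighbouring sensor
    return t_i + w * (t_next - t_i)

# Example: sensor i reads 1.2 N and sensor i+1 reads 0.4 N, so the contact
# point lies closer to sensor i.
t_i, t_next = np.array([0.02, 0.0, 0.10]), np.array([0.03, 0.0, 0.11])
print(contact_location(t_i, t_next, 1.2, 0.4))
```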

3 Implementation of the Particle Filter

A particle filter is a type of Bayesian filter, a general probabilistic approach for recursively estimating an unknown probability density function using incoming


Fig. 2. Left: two contacts are detected (red arrows), but the estimated pose of the object does not produce contacts. Right: friction cone at a contact location.

measurements and a mathematical modeling process. One of the most popular methods for performing Bayesian filtering is the Kalman Filter, usually applied to optimally solve a linear and Gaussian dynamic model [11]. However, for highly non-linear and non-Gaussian problems it fails to provide a reasonable estimate [12]. Besides, Kalman filters (regular or extended) perform better with an analytic description of the measurement model [5], not available here. The estimation method used here is the Bootstrap Particle Filter, a type of Sequential Monte Carlo method (SMC) [13]. These are a set of simulation-based methods that provide a convenient approach to compute posterior distributions sequentially, avoiding linearity or normality assumptions. In order to mathematically define how the Bayesian filtering works, let us start by defining two sets of random (vector) processes:

$$X_t := x(0), ..., x(t) \qquad Y_t := y(0), ..., y(t) \qquad (3)$$

where Xt is a set of random variables of interest and Yt is a set of measurements of the process of interest. Bayes' theorem for conditional distributions can be expressed as

$$Pr(X_t|Y_t) = \frac{Pr(Y_t|X_t) \times Pr(X_t)}{Pr(Y_t)} \qquad (4)$$

The principal idea underlying SMC techniques is representing a probability distribution as a set of samples from this distribution. Therefore, the more samples taken from the distribution, the better the representation achieved. Besides, if the system is assumed to be Markovian, using Sequential Importance Sampling (SIS), an estimate of P̂r(Xt|Yt) can be computed using the past simulated samples Xt−1 ∼ q(Xt−1|Yt−1) [14]. The desired sequential solution for the update of the weight of each particle at each time-step can be written as:

$$W_i(t) = W_i(t-1) \times \frac{Pr(Y_t|X_{i,t-1}) \times Pr(X_{i,t}|X_{i,t-1})}{q(X_{i,t}|X_{i,t-1}, Y_t)} \qquad (5)$$

where i refers to each particle. This weighting update enables the formulation of a generic Bayesian Sequential Importance Sampling algorithm, a constrained version of importance sampling.


Finally, in order to avoid the possible degeneration of the particles, i.e. that most particles have a weight value of 0, the Bootstrap Particle Filter uses Sequential Importance Resampling (SIR) to eliminate particles with low importance weights while preserving particles with large probabilities [12]. After this step, the weights of the new particles are all set to be equal to 1/N , which means they all have the same importance. Algorithm 1 shows the implementation of the Bootstrap Particle Filter used in this work.

Algorithm 1. Bootstrap Particle Filter
1: procedure BPF(Np, prior estimation)
   Initialization:
2:   xi(0) ∼ Pr(x(0)), Wi(0) = 1/Np
   Importance Sampling:
3:   xi(t) ← system_model(xi(t−1), input_t)
4:   Wi ∼ Pr(Wi(t))
   Weight Update:
5:   Wi(t) = Wi(t−1) × measurement(y(t)|xi(t))
   Weight Normalization:
6:   Wi(t) = Wi(t) / Σ(i=1..Np) Wi(t)
   Resampling:
7:   if N̂eff(t) ≤ Nthresh, then x̂i(t) ⇒ xj(t)
8: end procedure
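A compact Python sketch of one iteration of Algorithm 1 is shown below; `system_model` and `measurement_weight` stand for the motion and measurement models of Sects. 3.1 and 3.2 and are assumptions here, as is the systematic resampling variant chosen for the SIR step.

```python
import numpy as np

def bootstrap_pf_step(particles, weights, control, measurement,
                      system_model, measurement_weight, n_thresh):
    """One iteration of the Bootstrap Particle Filter of Algorithm 1.
    particles: (n, d) array of pose hypotheses; weights: (n,) array."""
    n = len(particles)
    # Importance sampling: propagate every particle with the system model.
    particles = np.array([system_model(p, control) for p in particles])
    # Weight update with the measurement model, followed by normalization.
    weights = weights * np.array([measurement_weight(measurement, p) for p in particles])
    weights = weights / weights.sum()
    # Resample (SIR) when the effective number of particles drops too low.
    if 1.0 / np.sum(weights ** 2) <= n_thresh:
        cum = np.cumsum(weights)
        cum[-1] = 1.0                                   # guard against round-off
        positions = (np.arange(n) + np.random.uniform()) / n
        idx = np.searchsorted(cum, positions)           # systematic resampling
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```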

3.1 Importance Sampling: System Model

In a Particle Filter, the system model relates the state at time i to the state at time i − 1 and the input to the system at time i. In this case, since the state is the object's pose, this relation is given by the displacement. In order to compute the object's displacement based on finger movements, only the location of sensors that detect contact is taken into account. Let si = [xi, fi > contact threshold] be a sensor location with positive contact. Then, Δxi = [Δqi, Δti]^T is the displacement and rotation of sensor i between two consecutive readings. The total object movement is then computed as the average translation and rotation, Δxia, among the sensors in which a contact is detected.

3.2 Weight Update: Measurement Model

The measurement model gives each of the particles a weight that quantifies how similar the state expressed by that particle is to the truth, based on the measurements provided by the position and tactile sensors in the hand. The measurement model used in this work is inspired by [5]. Three new features have been added: first, not only the sensor location but also the force measurements are used to compute the real contact locations. Second, the evaluation method


considers each sensor differently depending on whether it is in contact or not. And third, the friction cone of a contact is considered to evaluate the feasibility of a contact direction with the object. The estimation method works in parallel with the grasping execution using a simulated environment. The Flexible Collision Library (FCL) [16] is used to compute the shortest distance (no collision, positive value) or deepest penetration (in collision, negative value) between each sensor and the object. Taking into account this information, three kinds of measurements are considered:

– For every sensor that does not detect contact with the object, a probability is assigned to each particle (representing the object's pose) based on its distance to the object $d_i^o$ by:

$$p_{nc,i}(d_i^o) = 0.5 \left( 1 + \mathrm{erf}\!\left( \frac{d_i^o}{\sqrt{2}\,\sigma_{nc}} \right) \right) \qquad (6)$$

where σd is a standard deviation value that can be adjusted to match the inaccuracy of the measurements, and erf corresponds to the error function. This function is chosen to assign high weight values to positive distances and small values to negative distances, which helps to avoid estimations that predict unreal collisions. The resulting function is represented in Fig. 3.

– For each sensor that detects a contact, the distance $d_i^o$ is used to associate a probability to the measurement with:

$$p_{c,i}(d_i^o) = e^{-0.5 \left( \frac{d_i^o}{\sigma_c} \right)^2} \qquad (7)$$

where σc can be adjusted to account for the uncertainty in the force sensors. This function assigns high weights to values that are close to zero, i.e. close to contact. The function is represented in Fig. 3.

– Assuming the grasp is stable, the normal of the surface of the object at the contact point (for the sensors that detect a contact) should lie within the friction cone around each contact point in the hand. The contact force measured by the hand is considered to be normal to its surface; therefore, the angle αi between the normals to the surfaces can be computed, and afterward evaluated with:

$$p_{a,i}(\alpha_i) = e^{-0.5 \left( \frac{\alpha_i}{\sigma_a} \right)^2} \qquad (8)$$

where σa accounts for the friction between the surfaces. This function assigns high weights to values that are close to 0.

Finally, a combined weight for each particle (Wi) can be expressed as:

$$W_i = \prod_{k=1}^{N_m} p_{nc,k} \cdot p_{c,k} \cdot p_{a,k} \qquad (9)$$

where Nm is the number of measurements for each particle. This weight is calculated for every particle during the update step.
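The three weighting terms can be written compactly as in the sketch below. The σ values follow Table 3, the signed distances, angles and contact mask are assumed to come from the FCL queries and the sensor readings, and the product of Eq. (9) is taken over all available measurements; the function name is illustrative.

```python
import numpy as np
from scipy.special import erf

SIGMA_NC, SIGMA_C, SIGMA_A = 0.001, 0.001, np.deg2rad(8.0)  # values from Table 3

def particle_weight(distances, angles, in_contact):
    """Combined weight of Eq. (9) for one particle.
    distances: signed sensor-object distances from FCL (negative = penetration).
    angles: angles between contact normals for the sensors in contact [rad].
    in_contact: boolean mask saying which sensors currently detect contact."""
    d = np.asarray(distances)
    # Eq. (6): sensors without contact should not be in collision with the object.
    p_nc = 0.5 * (1.0 + erf(d[~in_contact] / (np.sqrt(2.0) * SIGMA_NC)))
    # Eq. (7): sensors in contact should be exactly at the object surface.
    p_c = np.exp(-0.5 * (d[in_contact] / SIGMA_C) ** 2)
    # Eq. (8): contact normals should stay close to the friction-cone axis.
    p_a = np.exp(-0.5 * (np.asarray(angles) / SIGMA_A) ** 2)
    # Eq. (9): combine all measurement likelihoods.
    return np.prod(p_nc) * np.prod(p_c) * np.prod(p_a)
```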


Fig. 3. Left: weight function used when a contact is not detected, σnc = 0.005 m. Right: weight function used when a contact is detected, σc = 0.005 m.

4 Experimental Setup

The ReFlex TakkTile Hand was used for experimental tests. Figure 1 shows the setup when the hand is about to start closing the fingers. Note that both hand and object have Apriltags [17] on their surfaces (36h11 family and 0.0163513 m of side length), which are used to compute the ground truth pose of the object with respect to the hand (but are not used in the pose estimation process). The experiments follow the next steps:

– Hand and object are placed on a table surface, and the scene is perceived with an Asus Xtion Pro sensor.
– Data recording is started, an operator picks up the hand and positions it at a seemingly good location to execute the grasp. The fact that the robotic hand is held by a human operator has no influence on the estimation provided by the filter (since this work deals with the estimation of the pose of an object with respect to the hand, no robot arm is actually needed to prove the benefits of the technique).
– 10 seconds later, the fingers close around the object. Once all of them are in contact, a constant "closing" velocity is maintained in all fingers to make the grasp stable (non-stable grasps are not studied in this work).
– The particle filter is started as soon as the first contact between any finger and the object occurs. The initial population is built out of the visual estimation of the pose of the object (based on AprilTags), adding Gaussian noise in every axis according to the expected error in the visual estimation.
– When the grasp is completed, the object is lifted to check if it is stable. New estimations are computed until the hand is commanded to open.

One important issue when using a particle filter is deciding the number of particles to use. It must be a compromise between a very big population, which increases the likelihood of obtaining a good estimate, and a small one, which allows a higher frequency of the estimation process. In order to optimize this


Table 1. Average update time.

Particles | Time [s]
100 | 0.3170
150 | 0.4787
200 | 0.6116
250 | 0.7727
300 | 0.9034
600 | 1.8772
1000 | 2.9871

Fig. 4. Variance of the estimated pose.

choice, the iteration period is analyzed against the number of particles by performing 10 tests using 100, 150, 200, 250, 300, 600 and 1000 particles; the average computation times on a standard desktop PC (Intel Core i5 with 4 cores) are shown in Table 1. Time grows almost proportionally with the number of particles. Assuming a desirable minimum update frequency of 1 Hz, this is only fulfilled with up to 300 particles. Then, additional tests are performed to study the variance of the posterior distribution over 10 different repetitions, as shown in Fig. 4. Blue and red lines correspond to the variance in the estimation of position and orientation, respectively. Theoretically, as the number of particles grows, the estimation of the particle filter is more accurate and the variance of the estimations should be smaller. Results show a small change in variance because the number of particles changes very little. However, a trend can be appreciated: estimations get better until 200 particles, when this improvement slows down or disappears. For this reason, further tests are performed using 200 particles on three different objects selected from the YCB database [15]; their characteristics can be seen in Fig. 5 and Table 2. The parameters used in the particle filter are summarized in Table 3. For the input to the system (displacement between iterations), Gaussian noise, with 0 mean and a diagonal covariance matrix, is added both in position and orientation. No artificial noise is added to the measurements, since the data provided by the sensors is quite noisy itself. The values used in the weighting function have been empirically chosen. For instance, σd and σc were initially set to 0.5 mm, but this made all the particles have a weight value of 0 in most of the iterations, which is useless for the estimation process. On the other hand, the greater the σ values are, the less importance is given to the measurements.

4.1 Results

Figure 6 shows an example of the grasping process of the Pringles can. The first image presents the starting pose of the object and the hand. In the second image,


Table 2. Characteristics of the objects used in the experiments.

Object | Size [mm] | Weight [kg]
Cheez-it | 60 × 160 × 230 | 0.453
Pringles | 75 × 250 | 0.205
Jell-o | 35 × 110 × 89 | 0.187

Fig. 5. Objects used for testing.

Table 3. Parameters used for the pose estimation with the proposed filter.

Input Noise | Position - Mean | 0
Input Noise | Position - Standard Deviation | 0.7 mm
Input Noise | Rotation - Mean | 0
Input Noise | Rotation - Standard Deviation | 0.35°
Measurement Noise | Mean | 0
Measurement Noise | Standard Deviation | 0
Weighting Function | σd | 1 mm
Weighting Function | σc | 1 mm
Weighting Function | σa | 8°

the proximal links of the fingers make contact with the object; the fingers keep closing until the distal link also makes contact, as shown in the third image. In the fourth and fifth images, the hand is robustly grasping the object, which is lifted and turned towards the camera. Note that this movement does not affect the computation of the actual pose of the object with respect to the hand (provided there is no slippage). Then, in the next images, the hand is moved down, the object is placed back on the table, and the fingers are opened. The pose estimation process was repeated 10 times. Figure 7 shows the results for this scenario, including the average of all the trials, and absolute values of the estimation for position (cm) and rotation (◦ ) in the three axes. All the given data are for poses of a frame at the center of mass of the object with respect to the base of the hand; the orientation of the axes of this frame is indicated in the first image of Fig. 6, to help interpreting the results. As an output of the filter, the best particle of the population at each iteration is drawn in a purple line. Besides, the weighted average of the posterior and its standard deviation are shown as a blue line and a shaded green area, respectively. Finally, the ground truth (GT) computed using the Apriltag system is included as a red line. Both the ground truth and the estimation show slight movements in X and Y axis. The motion in Y axis can be considered as noise, since is has a range of less than 1 mm. The motion in X axis is very likely induced by the asynchronous



Fig. 6. Pringles can, grasped from the side with a robotic hand held by a human operator. The sequence is ordered left to right, top to bottom.

Although the ground truth detects almost no rotation, the estimations show slight variations around 0°. Note that Y is the axis of revolution of the cylinder. When no textures are considered for the 3D model, the object can rotate around the Y axis while still fulfilling the geometrical constraints of the contacts when being grasped, so rotations around this axis do not make any difference. Rotations estimated around the X and Z axes are much noisier, reaching a maximum error of 1.2% (with respect to all possible orientations in 360°). Similar results are obtained for the other two objects. While the position is generally well estimated, the estimation of rotations has higher uncertainties. Table 4 summarizes the results of pose estimation for the three objects, comparing the ground truth to the best and to the average particles. Average error and standard deviation are presented for the errors in position and orientation. These results can be compared, at least in a qualitative way, with other similar works. References [3,6] evaluate techniques similar to this one for 2D applications. In both cases, average position errors are around 2 to 10 times bigger than ours, while having similar computation times. Orientation errors are also around 5 to 10 times bigger. In [5], results in 2D for real experiments and in 3D for simulated ones are presented. Their filter was tried with 5 to 80 particles, which obviously leads to better computation times; however, this results in larger errors in position and especially in orientation, which were improved here by adding information on the force measurement and the orientation of the surfaces in contact.



Fig. 7. Results of the pose estimation for the Pringles can. Left column corresponds to positions in X, Y, and Z axes. Right column corresponds to rotations around the same axes.


Table 4. Average errors and standard deviation in the pose estimation for three selected objects.

Object        Particle  Position error                Orientation error
Pringles can  Best      μ = 0.2140 cm, σ = 0.0583     μ = 1.739°, σ = 0.0571
              Average   μ = 0.1759 cm, σ = 0.0571     μ = 2.153°, σ = 0.106
Jell-o box    Best      μ = 0.3704 cm, σ = 0.1013     μ = 3.392°, σ = 0.871
              Average   μ = 0.3638 cm, σ = 0.10951    μ = 3.508°, σ = 0.954
Cheez-it box  Best      μ = 0.7310 cm, σ = 0.2069     μ = 0.812°, σ = 0.131
              Average   μ = 0.7187 cm, σ = 0.2397     μ = 0.905°, σ = 0.154

5 Conclusions

This paper proposes and implements an in-hand pose estimation algorithm to detect the movement of the object that occurs while performing a grasp. The estimation process is based on the measurements provided by the sensors of the hand: pressure sensors for contact detection and encoders for proprioceptive information. The measurement model relies on a virtual simulation of the relative position of the hand and the object, which uses a collision detection library, to compare real measurements with simulation-based ones. The approach uses a Bootstrap particle filter, selected mainly because of its flexibility in the measurement model. The algorithm is tested in real grasping scenarios with three different objects. Results show that the main movements produced while stabilizing the grasp can be estimated. The main deviations occur in the degrees of freedom in which there is no measurement update. The estimation might be improved by introducing a new measurement term that penalizes movements larger than those used in the control input of the system. However, increasing the number of variables in the measurement computation also increases the computation time required to predict the behavior of the object. Also, the experiments presented here compare results to a ground truth given by an Apriltag system, which greatly limited the variability of motions. Further experiments using a better ground-truth system that allows larger motion variations are left as future work. An ongoing work is the combination of the proposed tactile scheme with a visual-based pose estimator, robust to medium and high levels of occlusion (which commonly happen after the object is grasped with the hand). Visual information can also provide useful insights for the degrees of freedom that cannot be detected with the tactile-based pose estimation.


References
1. Macura, Z., Cangelosi, A., Ellis, R., Bugmann, D., Fischer, M., Myachykov, A.: A cognitive robotic model of grasping. In: Proceedings of International Conference on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems, pp. 89–96 (2009)
2. Rothwell, J., Traub, M., Day, B., Obeso, J., Thomas, P., Marsden, C.: Manual motor performance in a deafferented man. Brain 105, 515–542 (1982)
3. Bimbo, J., Seneviratne, L., Althoefer, K., Liu, H.: Combining touch and vision for the estimation of an object's pose during manipulation. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4021–4026 (2013)
4. Haidacher, S., Hirzinger, G.: Estimating finger contact location and object pose from contact measurements in 3-D grasping. In: Proceedings of IEEE International Conference on Robotics and Automation, pp. 1805–1810 (2003)
5. Chalon, M., Reinecke, J., Pfanne, M.: Online in-hand object localization. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2977–2984 (2013)
6. Koval, M., Dogar, M., Pollard, N., Srinivasa, S.: Pose estimation for contact manipulation with manifold particle filters. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4541–4548 (2013)
7. Aggarwal, A., Kirchner, F.: Object recognition and localization: the role of tactile sensors. Sensors 14, 3227–3266 (2014)
8. Hebert, P., Hudson, N., Ma, J., Burdick, J.: Fusion of stereo vision, force-torque, and joint sensors for estimation of in-hand object location. In: Proceedings of IEEE International Conference on Robotics and Automation, pp. 5935–5941 (2011)
9. Bimbo, J., Kormushev, P., Althoefer, K., Liu, H.: Global estimation of an object's pose using tactile sensing. Adv. Rob. Syst. 29, 37–41 (2015)
10. Tenzer, Y., Jentoft, L., Howe, R.: The feel of MEMS barometers: inexpensive and easily customized tactile array sensors. IEEE Rob. Autom. Mag. 21(3), 89–95 (2014)
11. Kalman, R.: A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Eng. 82, 35–45 (1960)
12. Doucet, A., de Freitas, N., Gordon, N.: An Introduction to Sequential Monte Carlo Methods. Springer, New York (2001)
13. Candy, J.: Bootstrap particle filtering. IEEE Signal Process. Mag. 24, 73–85 (2007)
14. Ristic, B., Arulampalam, S., Gordon, N.: Beyond the Kalman Filter: Particle Filters for Tracking Applications. Artech House, Norwood (2004)
15. Calli, B., Singh, A., Walsman, A., Srinivasa, S., Abbeel, P., Dollar, A.: The YCB object and model set: towards common benchmarks for manipulation research. In: Proceedings of IEEE International Conference on Advanced Robotics, pp. 510–517 (2015)
16. Pan, J., Chitta, S., Manocha, D.: FCL: a general purpose library for collision and proximity queries. In: Proceedings of IEEE International Conference on Robotics and Automation, pp. 3859–3866 (2012)
17. Olson, E.: AprilTag: a robust and flexible visual fiducial system. In: Proceedings of IEEE International Conference on Robotics and Automation, pp. 3400–3407 (2011)

Legged Locomotion Robots

Study of Gait Patterns for an Hexapod Robot in Search and Rescue Tasks

Jorge De León, Mario Garzón, David Garzón-Ramos, and Antonio Barrientos

Centro de Automática y Robótica, UPM-CSIC, Calle José Gutiérrez Abascal, 2, 28006 Madrid, Spain
{jorge.deleon,ma.garzon,antonio.barrientos}@upm.es, [email protected]

Abstract. This paper presents a study about gait patterns for hexapod robots with extremities called C-legs. The study analyses several gait modes that different animals use to move through terrestrial environments, as well as new ones that arise from the limitations of the existing modes. The whole study is reinforced with a series of simulations, whose results are analysed to select the best gait pattern for a specific situation.

Keywords: Hexapod robots · C-legs · Gait patterns · USAR

1 Introduction

Urban Search and Rescue (USAR) is one of the most challenging applications for mobile robotics. The complexity of the tasks and of the scenarios where they have to be carried out requires novel and robust solutions. This paper presents a contribution towards this goal by proposing a new approach for the use of mobile terrestrial robots in this application. One of the most problematic issues regarding the use of ground robots in USAR applications is their locomotion mode. Most conventional ground robots, whose locomotion is based on wheels or caterpillar tracks, are not suitable because they may not be able to overcome the obstacles found in their way and therefore cannot complete their missions. The new design presented in this paper is intended as a guide for the development of new bio-inspired robots, and thus to reduce failures due to this and other types of difficulties. A wide variety of bio-inspired robots may be found in the literature; however, a high percentage of those robots are based on the physiognomy of the hexapods. The hexapods are a class of the arthropods, and it is the class that groups the largest number of species, including among them a variety of insects. The hexapods found in nature can have terrestrial or aerial locomotion; however, robots inspired by them are mainly terrestrial. Hexapod locomotion is widely used in robotics because of its robustness, stability and the ease with which it is adapted to the field of engineering.


There are many developments of hexapod robots. The FZI (Research Centre for Information Technology) in Karlsruhe, Germany, has created the LAURON family, whose development is now in its fifth generation [1] (see Fig. 1a). This robot is equipped with different walking modes and is able to maintain stability even on terrain with numerous obstacles. Another example is the DIGbot robot, designed by Case Western Reserve University in Cleveland, United States, whose leg design makes it possible to advance along tree trunks and vertical walls, shown in Fig. 1b.

Fig. 1. Example of hexapod robots: (a) Robot LAURON IV; (b) Robot DIGbot; (c) Robot RHex.

However, although the kinematics of this type of robot is well studied and very simple to implement, the mechanical implementation and its control are very complex [7]; therefore, a new series of robots has been developed. These robots are still hexapods but present a new physiognomy in their extremities, which have the shape of the letter C. Robots using these legs can move at much higher speeds and have fewer mechanical requirements. Another advantage of this type of robot is its capacity for "locomotion without legs", by which robots can overcome obstacles even when none of their legs touch the ground [2]. A robot that uses this type of design is the RHex (see Fig. 1c), developed in the USA under DARPA funding. However, despite the great proliferation of robots using C-shaped legs, there have not been studies that deepen the kinematics of these legs ([5,8,10]), even though this study is fundamental for this type of robot because it largely conditions the design and construction process. The objective of this paper is to contribute in this regard by presenting a study of the different modes and gaits used for the locomotion of this type of robot. The results of those studies have been validated through simulations, and they provide insightful details as to how to work with this type of robot. The structure of the article begins with this introduction and then continues by describing the model of the robot. Subsequently, the gait patterns are defined and, finally, the tests and simulations performed are presented, ending with the conclusions of the analysis.

2 Design of the Robot and Its Legs

The robot that has been used is a prototype developed by the Robotics and Cybernetics group of the Polytechnic University of Madrid ([8,9]). This robot has been designed with a series of specifications so that it can perform search and rescue tasks (USAR [6]) and also robotic protection of critical infrastructures (PRIC [4]) in environments that present different obstacles, such as buildings or industrial complexes, where a conventional wheeled robot cannot move around the area to be monitored.

Fig. 2. Render of the designed robot

The first design requirement is that the robot must be able to climb a stair, so, according to the conclusions of the analysed works, it is necessary that the diameter of the leg is equal to or greater than the height of the step. Thus, to make the leg, an acetal tube with a 200 mm diameter, a 4.5 mm wall thickness and a 50 mm width has been selected; the legs are then cut in the shape of the letter C. With this design imposition a new restriction appears, the minimum length of the robot, which is the sum of the three leg diameters plus their thicknesses:

length_robot > 3 · (diameter_legs + thickness_legs)   (1)
length_robot > 613.5 mm   (2)
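As a quick numerical check of these restrictions, the following is a minimal sketch that recomputes the minimum body length of Eqs. (1)-(2) from the chosen leg dimensions and verifies the leg-diameter-versus-step-height rule; the function names are illustrative, and the 170 mm riser used in the example is the staircase dimension that appears later in the stair-climbing test.

```python
# Leg dimensions chosen in the text (millimetres).
LEG_DIAMETER_MM = 200.0
LEG_THICKNESS_MM = 4.5

def min_robot_length_mm(diameter: float, thickness: float) -> float:
    """Minimum body length: three leg diameters plus their thicknesses (Eq. 1)."""
    return 3.0 * (diameter + thickness)

def leg_can_climb_step(diameter: float, step_height: float) -> bool:
    """Design rule from the text: the leg diameter must be >= the step height."""
    return diameter >= step_height

if __name__ == "__main__":
    print(min_robot_length_mm(LEG_DIAMETER_MM, LEG_THICKNESS_MM))  # 613.5 mm, as in Eq. (2)
    print(leg_can_climb_step(LEG_DIAMETER_MM, 170.0))              # True for a 170 mm riser
```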

In addition to these two restrictions, there is another one depending on the deployability of the robot: it should be classified as a man-portable or man-packable robot. Man-packable: robot systems that can be carried in one or two backpacks along with the person's personal protective gear. Man-packable robots can be further subdivided into micro and mini; micro-robots fit completely in one pack and can be deployed in spaces where humans or canines cannot fit. Man-portable: robot systems that can be lifted or carried by one or two people but do not fit in a backpack or would distort the person's center of gravity; these are typically tank-like robots or robots with a manipulator arm. With this, the resulting design is the one that can be observed in Fig. 2.

3 Definition and Evaluation of the Gait Patterns

All walking hexapods have the characteristic of being stable, both dynamically and statically; that is to say, with their six extremities they have perfect support on the ground, so that when the support of one, two or three extremities is withdrawn, the five, four or three remaining legs continue providing support. This means that the hexapod has advantages compared to bipeds and quadrupeds: having more points of support gives excellent stability when generating different walking patterns.

Fig. 3. Scheme of the position and definition of the legs of our hexapod

3.1 Gait Patterns from Animals

Based on the observation of nature, it is possible to distinguish three types of gait for hexapods: alternate tripod, tetrapod and wave, whose walking patterns can be observed in Fig. 4. Alternate tripod: legs I1, D2 and I3 (see Fig. 3) are moved simultaneously in the air for a period of time (either forward or backward), black area, while legs D1, I2 and D3 remain in contact with the ground with a movement contrary to the other tripod, white area. Tetrapod: the legs are moved in pairs of opposing extremities. That is, I1 and D3 move (either forward or backward), black areas, while the rest remain on the ground. When the cycle of I1 and D3 is ending, leg D2 starts moving in the same way. Then legs I3 and D1 begin to perform the same action during their duty cycle and, finally, leg I2 performs the movement. Wave: in this type of walking the legs are independent; each one moves for a period of time (either forward or backward) and then the adjacent extremity follows.

3.2 New Gait Patterns

However, as humans have demonstrated throughout history, it is possible to develop things that nature has not. Therefore, this section presents two novel gait modes that have not been found in any biological study to date.


Fig. 4. Gait patterns. (From top to bottom: Alternate tripod, tetrapod and wave)

The first one (new mode 1) could be defined as a variant of the tetrapod gait, but in this case the combination of legs does not present a smooth transition. Instead of a smooth transition between pairs of legs, a movement pattern is generated in which each pair of legs always moves together: first the front pair, then the intermediate pair and finally the rear pair, after which the pattern starts again. When implementing this mode it is necessary to have a good calculation of the torques and efforts generated in the acting pair, because this type of movement does not provide a support as large as the other movements. The second designed movement (new mode 2) may seem rather rudimentary but is no less effective, and it is intended for very irregular terrain. In this case all the legs of the robot move together in each cycle, which means that the robot alternates between being completely supported and having no support at all, falling onto the ground. This movement may have the disadvantage of being aggressive for the robot because of the large number of impacts it receives in each transition.

Fig. 5. New gait patterns.
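To make the differences between these patterns concrete, the following is a minimal sketch of how the leg phase offsets could be scheduled in software; the offsets are read off the verbal descriptions above, and the swing duty cycle and the ordering within the wave gait are illustrative assumptions, not values from the paper.

```python
from typing import Dict

# Legs named as in Fig. 3: I = left, D = right; 1 = front, 2 = middle, 3 = rear.
# Each gait maps a leg to the fraction of the gait cycle at which its swing starts.
GAITS: Dict[str, Dict[str, float]] = {
    # Two alternating tripods, half a cycle apart.
    "alternate_tripod": {"I1": 0.0, "D2": 0.0, "I3": 0.0,
                         "D1": 0.5, "I2": 0.5, "D3": 0.5},
    # Opposing pair, then D2, then the other pair, then I2, following the text.
    "tetrapod": {"I1": 0.00, "D3": 0.00, "D2": 0.25,
                 "I3": 0.50, "D1": 0.50, "I2": 0.75},
    # Wave: one leg after another (ordering here is an assumption).
    "wave": {"I1": 0.0, "I2": 1/6, "I3": 2/6, "D1": 3/6, "D2": 4/6, "D3": 5/6},
    # New mode 1: front, middle and rear pairs in turn.
    "new_mode_1": {"I1": 0.0, "D1": 0.0, "I2": 1/3, "D2": 1/3, "I3": 2/3, "D3": 2/3},
    # New mode 2: all legs move together in every cycle.
    "new_mode_2": {leg: 0.0 for leg in ("I1", "D1", "I2", "D2", "I3", "D3")},
}

def swinging_legs(gait: str, t: float, period: float, duty: float = 0.3):
    """Return the legs that are in the air at time t for the given gait."""
    phase = (t % period) / period
    return [leg for leg, start in GAITS[gait].items()
            if (phase - start) % 1.0 < duty]

# Example: which legs are in swing a quarter of the way through a tripod cycle?
print(swinging_legs("alternate_tripod", t=0.25, period=1.0))  # ['I1', 'D2', 'I3']
```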

4 Simulations and Results

In order to obtain a complete analysis of the gait patterns, a series of tests has been carried out where all the patterns are tested under different conditions. The tests were carried out on the virtual robot programming platform ROS (Robot Operating System) and the Gazebo simulator. As in any design process, the simulation stage provides a lot of data, mainly to detect possible elements that have been poorly designed (both hardware and software). In the case discussed here, a set of four tests has been designed to evaluate the gait pattern algorithms that have been programmed. The first is a displacement test on flat ground, without slopes or obstacles; the second is a stair-climbing test; the third simulates uneven terrain; and the last one is the climbing of an inclined plane. Since a computer simulation can be considered a simulation in a controlled environment, and, if it is always performed under the same conditions, the results obtained will be identical, at the beginning of each test the robot is affected by a disturbance purposely provoked by the system. In this way we try to expose the robot to different cases in each repetition of the tests. Each test is repeated a total of 100 times to have a large sample and to obtain more conclusive results. In addition, a series of data is collected for analysis: duration of the test, distance travelled, average velocity, displacement in bodies/second, initial position (x, y), final position (x, y) and height of the center of mass of the robot.

4.1 Trial 1

The objective of the first test is to analyse whether all the gait patterns are able to achieve a displacement of at least one body length per second, which means that they would surpass numerous already designed hexapod robots in this ability. Besides that quality, the aim is to analyse the maximum speed that each mode is able to reach; to do this, the speed of the robot is increased successively until it is unable to travel a distance of X meters with an error in the transverse axis of less than ten percent of the distance travelled (Fig. 6). The duration of the test is 60 s. The results can be analysed in Table 1.
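The pass/fail rule of this test can be written as a one-line check; the sketch below is only an illustration of that rule, with hypothetical variable names and an invented example input.

```python
def trial1_run_is_valid(distance_travelled_m: float,
                        transverse_error_m: float) -> bool:
    """A run counts as valid if the lateral deviation stays below 10% of the
    distance travelled, as required by the first test."""
    return transverse_error_m < 0.10 * distance_travelled_m

# Illustrative values only: 5 m travelled with 0.3 m of lateral deviation passes.
print(trial1_run_is_valid(5.0, 0.3))  # True
```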

Fig. 6. Maximum error allowed in test 1.


Table 1. Results of the first test.

Pattern      Distance (m)  Velocity (m/s)  Bodies  Final pos. (x, y)  Average height (m)
Tripod       4.5           0.075           2.16    4.5; 0.066         0.0971
Tetrapod     7.53          0.125           3.62    6.95; −2.9         0.093
Wave         4.45          0.074           2.13    −0.05; 4.45        0.088
Wave V2      5.57          0.093           2.68    5.46; 1.12         0.088
New mode 1   4.15          0.069           1.99    3.98; −1.19        0.085
New mode 2   12.07         0.201           5.79    12.02; 1.16        0.062

Numerous data of interest can be obtained from test number 1. The first comes from the analysis of the highest speed reached by the robot: new mode 2 is about three times faster than the other gait modes, and the second highest average speed was achieved by the gait pattern that imitates the tetrapods, although far below. Although the velocities may seem low, they are quite the opposite: in robotics it is considered excellent that a mobile robot can reach a speed of one body length per second. Table 2 shows a comparison with the other robots designed to date [8].

Table 2. Comparative of speed with other hexapod robots.

Robot      Length (m)  Mass (kg)  Velocity (m/s)  Bodies/second
CW Robot   0.5         1          0.0833          0.16
Dante II   3           770        0.017           0.006
Atilla     0.36        2.5        0.03            0.083
ASV        5           3200       1.1             0.22
Boadicea   0.5         4.9        0.11            0.22
Sprawlita  0.17        0.27       0.42            2.5
Robot      0.48        9.5        12.07           5.79

Figure 7 shows the different paths that each gait pattern followed in test 1. The path of the wave mode, highlighted in white, makes almost a half-turn; in order to try to solve this error of the pattern's algorithm, the sequence of movement of the legs (Fig. 8) was changed, and the result can be seen in yellow. Another fact that comes from the data is that only the tripod mode is able to maintain a more or less rectilinear trajectory; the other modes are affected by the imbalances that occur at some point in the transitions, when the center of gravity loses the stability margin (not all the gait patterns are stable all the time), or by the lag that occurs in the first step. The comparison of the height of the center of mass of the robot is also interesting (Fig. 9).


Fig. 7. Trajectories followed by the different gait modes. (1) Alternate tripod (2) Tetrapod (3.1) Wave (3.2) Wave V2 (4) New Mode (5) New Mode 2

Fig. 8. New wave mode design.

Some patterns maintain a more constant height while others are constantly rising and falling; this can become a determining factor when selecting the mode of operation according to the surface on which the robot is going to act.

Fig. 9. Height of the center of mass of the robot in the gait patterns. (A) Alternate tripod (B) Tetrapod (C) Wave (D) Wave V2 (E) New Mode (F) New Mode 2.

4.2 Trial 2

The second test tries to find out which types of pattern are capable of overcoming a stair. This is a very important capability for the robot, since one of the main limitations of current USAR robots is the inability to climb stairs; therefore, getting the robot to pass this test becomes an essential objective.


The stair designed for the test follows the standards in architecture [3], where the footprint is required to be at least 28 cm and the riser to lie in the interval from 13 to 18 cm, fulfilling the following relation:

54 < 2 · step_height + 1 · step_width < 70   (3)

In the case considered, the riser has a height of 17 cm and the footprint is 34 cm, giving a result of 68 cm.

Fig. 10. Profile of the two patterns that overcome test 2. (A) New Mode (B) New Mode 2

In this case the new mode 2 was able to overcome the obstacle, although with difficulties: the robot had to be perpendicular to the step that it was going to climb, otherwise it did not surpass it and had to be repositioned; however, if there was no deviation in the path, it was capable of climbing the whole stair at once. The new mode 1 also achieved the goal, but only provided that it was placed in the same position as the previous mode and, in addition, it only overcame the step if the movement of the legs started with the front pair. The results of the two modes that passed the test are detailed in Fig. 10, where it can be seen that the new mode 2 is much faster despite making a greater number of attempts to climb each step. For its part, the new mode 1 shows that slowness because, for each step it overcomes, it has to wait until the movement sequence reaches the front pair again. As the other locomotion patterns were still unable to climb the stair, it was decided to make a change in the control strategy, since one of the reasons why the robot could not climb the stair is the lack of torque against the step. To do this, the algorithm was redesigned to perform a sequence control over time intervals; the positive side of this control is that when one leg pushes against a step, after the defined time interval the second front leg starts to apply force too, helping to overcome the obstacle. The consequence of using this running mode is that the synchronization between legs is lost.


The conclusion of this test clearly indicates that overcoming obstacles depends on two key factors: the first is the relationship between leg diameter and step height, where the second cannot be greater than the first; and the second is that the gait mode used must apply torque with the two front legs simultaneously in order to overcome the obstacle.

4.3 Trial 3

USAR robots are usually designed to move around irregular terrain, but, as already discussed in the introduction, a large number are unable to overcome this obstacle. The test does not only try to find out which gaits are able to overcome the abrupt terrain, but also which do it faster. The terrain modeled for the test can be observed in Fig. 11a; according to the selected gait mode, the robot follows a path that depends on the torque that can be applied in each sequence and on the possible collisions between the terrain and the body (Fig. 11b).

Fig. 11. Test 3: (a) model of the terrain for test 3; (b) results of test 3.

As can be seen from the results of this test, the alternate tripod and the new mode 1 are the ones that followed a more rectilinear path. This can be due to two main factors: they keep a higher average height with respect to the other modes, which allows them to overcome obstacles that the other modes had to go around, or their motion sequences are able to generate more torque to overcome obstacles. Finally, the profile of the route followed by each mode can be seen in Fig. 12.

4.4 Trial 4

Often, after earthquakes or building collapses, walls are torn down forming inclined planes, or USAR members have to access the interior of a house through the roof, as happens when there are landslides that bury houses. This test is designed to simulate these situations and to find out which movement sequence is able to overcome a steeper inclined plane.


Fig. 12. Profile of the route followed by each running mode in test 3. (A) Alternate tripod (B) Tetrapod (C) Wave (D) Wave V2 (E) New Mode (F) New Mode 2

The robots started by testing a constant slope of 10% with a length of 4 meters in the horizontal projection, and then the slope was increased up to a maximum of 50%. The results obtained in this test have been very satisfactory: all robots have been able to overcome it without any inconvenience. Table 3 shows the time that each pattern needs to overcome the trial with a slope of 50%.

Table 3. Results of passing test 4 for a slope of 50%.

Mode               Time (s)
Alternate tripod   80
Tetrapod           51
Wave               79
New mode 1         71
New mode 2         20

5 Conclusions

A study on the locomotion gaits for a C-leg based hexapod robot has been carried out. It shows that those gaits largely affect the performance of the robot, and therefore this is a field that needs to be studied in detail. Two novel gait modes have been proposed and compared with those found in the literature. It is worth highlighting that the results obtained by the two proposed gait modes, especially the second of them, new mode 2, have, to the best of our knowledge, not been studied before, maybe because of the unnatural movement it performs.


However, it has been by far the most versatile movement, much more than expected, being able to overcome all the tests with almost no drawbacks. It has also been observed that the alternate tripod is able to perform more rectilinear trajectories and maintain a more constant height than any of the other gaits, which is why it is the one that predominates in hexapods, both robotic and those found in nature. It would be very interesting to analyze a mixed behavior: in terrains where obstacles are easy to dodge or smaller than half the leg diameter, use the alternate tripod, and when a high obstacle has to be overcome, change to new mode 1 or new mode 2. The relevance of the study presented in this work has been demonstrated; moreover, it has opened new development routes for terrestrial rescue robots in scenarios that, to date, have been limited for robots that use conventional wheels and caterpillar tracks.

References
1. Roennau, A., Heppner, G., Nowicki, M., Dillmann, R.: LAURON V: a versatile six-legged walking robot with advanced maneuverability. In: IEEE ASME, pp. 82–87, July 2014
2. Balasubramanian, R.: Legless locomotion: concept and analysis. Technical Report, The Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213. https://pdfs.semanticscholar.org/207c/599e4d5c9bbbc71e109645d79fbc9def52e6.pdf
3. de España, G.: Código técnico de la edificación. DB-SI. Technical report, Gobierno de España
4. Gómez, J.J.R., Oviedo, M.G., de León Rivas, J., Ramos, D.A.G., Barrio, A.M., Terrile, S., Auñón, P.G., Giner, J.D.C., Rossi, C., Cruz, A.B.: Proyecto PRIC: protección robotizada de infraestructuras críticas. In: Libro de actas de las Jornadas Nacionales de Robótica 2017, vol. 1, pp. 1–6. CEA-IFAC, Valencia, June 2017. http://oa.upm.es/46819/, Robótica y Cibernética RobCib
5. Moore, E.Z.: Leg design and stair climbing control for the RHex robotic hexapod. Master's thesis, Department of Mechanical Engineering, McGill University, January 2002
6. Murphy, R.R.: Disaster Robotics. The MIT Press, Cambridge (2014)
7. Nie, C., Corcho, X.P., Spenko, M.: Robots on the move: versatility and complexity in mobile robot locomotion. IEEE Robot. Autom. Mag. 20(4), 72–82 (2013). http://ieeexplore.ieee.org/document/6582554/
8. Rivas, J.D.L.: Definición y análisis de los modos de marcha de un robot hexápodo para tareas de búsqueda y rescate. Master's thesis, Escuela Superior de Ingeniería Industrial, Universidad Politécnica de Madrid, October 2015
9. Tordesillas, J.: Diseño y simulación del sistema de locomoción de un robot hexápodo para tareas de búsqueda y rescate. Master's thesis, Universidad Politécnica de Madrid, July 2016. http://oa.upm.es/42893/
10. Tordesillas, J., Leon, J.D., Cerro, J.D., Barrientos, A.: Modelo cinemático de un robot con C-LEGS. Jornadas de Automática, pp. 267–275, September 2016. http://ja2016.uned.es/assets/files/ActasJA2016.pdf

A Hybrid ZMP-CPG Based Walk Engine for Biped Robots

S. Mohammadreza Kasaei, David Simões, Nuno Lau, and Artur Pereira

IEETA/DETI, University of Aveiro, 3810-193 Aveiro, Portugal
{mohammadreza,david.simoes,nunolau,artur}@ua.pt

Abstract. Developing an optimized omnidirectional walking for biped robots is a challenging task due to their complex dynamics and kinematics. This paper proposes a hierarchical walk engine structure to generate fast and stable walking. In particular, this structure provides a closed-loop CPG-based omnidirectional walking that takes into account two human-inspired push recovery strategies. In addition, this structure is fully parametric and allows using a policy search algorithm to find the optimum parameters for the walking. To show the performance of the proposed structure, a set of experiments on a simulated NAO robot has been carried out. Experimental results demonstrate that the proposed structure is able to generate fast and stable omnidirectional walking. The maximum speed of forward walking that we have achieved was 59 cm/s.

Keywords: Humanoid robots · Omnidirectional walking · Hierarchical structure · Central Pattern Generator (CPG) · CREPS-CMA · 3D Soccer simulation

1 Introduction

The ability to move efficiently in dynamic environments is an essential property for robots that want to operate in human environments. Humanoid robots are the most appropriate type of robot to operate in these environments due to the similarity of their kinematic architecture with that of a human. Hence, developing fast and stable walking has become a hot topic for several research groups, including roboticists and biologists [10,16,23]. Over the past decades, several approaches have been presented to construct stable walking for biped robots; they can generally be divided into four categories: trajectory-based methods, passive dynamics control, heuristic-based methods and Central Pattern Generators [11,18,21]. Trajectory-based methods employ dynamics models of robots with some stability criterion to produce stable walking. These approaches are computationally expensive due to calculating complex geometrical equations at each time step. Most of the approaches in this group consider some constraints in the dynamics to achieve simplicity.


For example, keeping the center of mass at a constant height is one of the general assumptions in these approaches. This constraint is not only harmful to the knee joints but also increases the consumption of energy. In addition, these kinds of approaches require predefined positions for placing the feet. Passive dynamics approaches try to describe the behavior of robots by their passive dynamics, with no sensing or control. These types of approaches fall beyond the scope of this paper. Some others are based on heuristic approaches such as genetic algorithms, reinforcement learning, etc. In these approaches, the learning phase takes a considerable amount of time because of the requirement of a large number of samples [21]. These approaches are not suitable for a real robot due to the high potential of damaging the hardware during the training period [12]. Central Pattern Generator (CPG) approaches are biologically inspired control methods. These methods try to produce stable walking by generating rhythmic patterns for each limb instead of using the physical dynamics model of the robot. They not only require less computational power but also provide energy-efficient walking. In addition, these approaches generate more human-like motions [19]. In this paper, an omnidirectional CPG-based walk engine is proposed to generate fast and stable walking for a simulated soccer humanoid robot. Towards this goal, Partial Fourier Series (PFS) oscillators are used to generate smooth output trajectories. In order to achieve more stable walking, sensory feedback signals are used to modify the oscillator outputs while interacting with the environment. In addition, an optimization technique based on Contextual Relative Entropy Policy Search with Covariance Matrix Adaptation (CREPS-CMA) [1] is used to appropriately tune the parameters. The remainder of this paper is structured as follows. In Sect. 2, we review related work. The architecture of our step generator is explained in Sect. 3. The hierarchical structure of our walk engine and detailed methodologies are explained in Sect. 4. The optimization method and the results are then presented in Sect. 5. Finally, conclusions are presented and future research is discussed.

2 Related Work

Over the past decade, several studies have been conducted on CPG-based walk engines for bipeds and quadrupeds [10,12,14]. Recently, interest in using bio-inspired approaches as possible alternatives to generate gait locomotion has been growing due to their successful results and fewer requirements [8]. The main focus of this paper is on biped robots. A brief literature survey concerning CPG-based walk engines is presented in the remainder of this section. Picado et al. [18] proposed a forward walk generator for a simulated humanoid robot. They used PFS oscillators to generate joint trajectories. Genetic Algorithms were then used to find optimum parameters for each trajectory. They chose a simple but effective fitness function which considered the forward walking velocity and the average torso oscillation. Their approach has been tested on a simulated NAO humanoid robot.


Simulation results showed their approach is able to generate forward walking with a good velocity (51 cm/s). Afterward, Shahriar et al. [2] extended their approach to produce forward and side walking for the agent. As they argued, their approach was able to produce faster forward walking (55 cm/s). Song et al. [19] developed a CPG-based forward walk generator with a balance controller for a humanoid robot. In their approach, onboard gyro and accelerometer sensors are used to measure the angles of the upper body. Using these angles, a controller was developed to adjust the tilt angle of the upper body. To verify the effectiveness of their controller, real Zero Moment Point (ZMP) data was recorded during walking on different slopes between −7° and 7°. Experimental results with a real NAO humanoid robot showed their approach can generate stable forward walking on unknown slopes. Cristiano et al. [4] implemented a CPG-based walk engine with sensory feedback that enabled biped robots to walk on uneven terrain. In order to develop a feedback walk engine, inertial and force sensors were used. Their system consisted of two stages, including the trajectory generator (trunk, legs and arms) and a stability controller to automatically adjust these trajectories using sensor data. To evaluate the abilities of their walk engine, simulation experiments with a NAO humanoid robot were carried out. Simulation results showed the robot was able to walk on stairs with a step height of 1 cm and was also able to walk on sloped terrain. Liu and Urbann [14] presented and implemented a CPG-based walk engine which was able to model the walking pattern using continuous and differentiable mathematical functions. In addition, they used the ZMP criterion and a body posture controller to stabilize the robot during walking. The proposed controller was based on sensory feedback and tried to modify the walking pattern in real time. They performed several experiments on a NAO humanoid robot to verify their method. The results showed their method could provide energy-efficient and stable dynamic walking. Most of the above approaches were able to provide a unidirectional walk engine, and some of them used sensory feedback to increase stability when walking. To use humanoid robots in a real human environment, we believe they should not only be able to generate omnidirectional human-like walking but also be able to react appropriately to disturbances (e.g., external pushes, inclined surfaces, uneven terrain, etc.). In the remainder of this paper, a CPG-based walking engine is presented that is able to generate omnidirectional walking patterns in an appropriate manner. Besides, two human-inspired push recovery strategies are developed to provide robustness against disturbances. Moreover, this walking engine is structured in hierarchical layers to manage the complexity.

3 Step Generator Architecture

Biped locomotion is inherently periodic and is commonly generated by repeating a series of steps. A step can be generated by two approaches: the engineering-oriented approach based on the Zero Moment Point (ZMP) [22] and the biologically-inspired approach based on CPGs [17].


In the remainder of this section, a brief overview of the Three-Dimensional Linear Inverted Pendulum Model (3D-LIPM) [9] is presented, and our CPG-based step generator is introduced.

3.1 3D-LIPM and ZMP

The 3D-LIPM is widely used to analyze the behavior of the Center of Mass (CoM) during walking. In this model, the overall dynamics of a humanoid robot is approximated by a single mass that is connected to a certain point on the ground via a massless rod. To achieve simplicity and linearity in the dynamics, the length of the rod is assumed to be constant; therefore, the single mass is limited to move along a horizontally defined plane (see Fig. 1). According to this assumption, the motion equations of the CoM are decoupled in the sagittal and lateral planes. Moreover, the ground reaction force always passes through the CoM and cannot generate angular momentum. The motion equation of this pendulum is as follows:

\ddot{x}(t) = \frac{g}{z_c}\, x(t) - \frac{1}{m\, z_c}\, \tau(t)   (1)

Fig. 1. 3D linear inverted pendulum.

where t represents the time, which is reset at the end of each step, m is the mass of the pendulum, g describes the gravity acceleration, z_c is the height of the CoM and τ is the torque around the axis. The ZMP is the most commonly used criterion for controlling a biped robot during walking. The ZMP concept is used to specify a point on the ground about which no horizontal moment (due to gravity and body inertia) is produced. The ZMP equation for the 3D-LIPM is obtained as follows:

p_x(t) = \frac{\tau(t)}{m\, g}   (2)

where p_x is the location of the ZMP. By substituting Eq. (1) into the ZMP Eq. (2) we obtain:

p_x(t) = x(t) - \frac{z_c}{g}\, \ddot{x}(t)   (3)

Traditionally, by using a preplanned ZMP trajectory and solving this differential equation, static walking can be obtained [9]. Although this approach has been successfully tested on several real robots, it has several drawbacks. It is computationally expensive and requires preplanned motions. In addition, to keep the CoM at a constant height, the knee joints have to be bent during walking, which is not only energy inefficient but also harmful to the knee actuators. Moreover, it does not generate very human-like motion [3,12,17].
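To make the relation between the CoM and the ZMP in Eq. (3) concrete, the following is a minimal numerical sketch that recovers the ZMP from a sampled CoM trajectory by finite differences; the sampling rate, CoM height and test trajectory are illustrative assumptions, not values from the paper.

```python
import numpy as np

G = 9.81        # gravity (m/s^2)
Z_C = 0.25      # assumed constant CoM height (m), illustrative only
DT = 0.01       # assumed sampling period (s)

def zmp_from_com(x_com: np.ndarray) -> np.ndarray:
    """Eq. (3): p_x(t) = x(t) - (z_c / g) * x''(t), with x'' from finite differences."""
    x_ddot = np.gradient(np.gradient(x_com, DT), DT)
    return x_com - (Z_C / G) * x_ddot

# Illustrative test: a sinusoidal CoM motion; the resulting ZMP has the same
# form but a different amplitude, the property exploited later in Eqs. (7)-(8).
t = np.arange(0.0, 2.0, DT)
x = 0.02 * np.sin(2 * np.pi * t)          # 2 cm lateral CoM sway at 1 Hz
print(zmp_from_com(x)[:5])
```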

3.2 CPG Control Architecture

The latest neurophysiological studies on invertebrate and vertebrate animals showed that rhythmic locomotion patterns (e.g. walking, running, etc.) are generated by CPGs at the spinal cord [6]. Furthermore, they proved that the basic building block of a CPG is an oscillator and that a CPG can be modeled by coupling some oscillators in a particular arrangement. Hence, CPGs are able to generate several endogenously rhythmic signals without a rhythmic sensory input. During the last decade, several CPG-based walk engines have been proposed based on different oscillator models such as Partial Fourier Series (PFS), Hopf, Matsuoka, etc. [15,18,19]. In most of them, an oscillator is allocated to each limb and its output is directly used as a set-point command (torque, position, etc.). Humanoid robots have several Degrees of Freedom (DOF); therefore, adjusting the parameters of each oscillator is not only difficult but also trial-intensive. In addition, adapting sensory information and obtaining appropriate feedback pathways for all the oscillators is a complex subject [19]. In this paper, to overcome these difficulties, a specific arrangement for the oscillators is designed which takes into account the dynamics and kinematics model of the robot. Our arrangement is composed of only seven oscillators, by considering the symmetric motion pattern between the legs and arms. These oscillators generate the Cartesian coordinate positions of the feet and arms to produce the overall stride trajectories. In order to develop a CPG trajectory generator, PFS oscillators are used as the main oscillator since they are able to generate multi-frequency signal shapes. A PFS decomposes a periodic signal into a sum of simple oscillators (e.g. sines or cosines). A PFS oscillator model is defined as follows:

f(t) = A_0 + \sum_{n=1}^{N} A_n \sin(n\omega t + \phi_n), \quad \omega = \frac{2\pi}{T}, \quad \forall t \in \mathbb{R}   (4)

where N is the number of frequencies, and A_n, ω and φ_n are the amplitude, the angular velocity and the phase of the nth term, respectively. These parameters can be adjusted to obtain different shapes. As we previously mentioned, walking is inherently periodic. Therefore, according to Eq. (3), if the trajectory of the ZMP is periodic, then the trajectory of the CoM is also periodic. Let us therefore consider that the trajectories of the ZMP and the CoM can be generated by PFS oscillators; then we have:

p_x(t) = A_0 + \sum_{n=1}^{N} A_n \sin(n\omega t + \phi_n), \quad \omega = \frac{2\pi}{T}, \quad \forall t \in \mathbb{R}   (5)

x(t) = B_0 + \sum_{n=1}^{N} B_n \sin(n\omega t + \phi_n), \quad \omega = \frac{2\pi}{T}, \quad \forall t \in \mathbb{R}   (6)

Considering \omega_0^2 = \frac{g}{z_c} and plugging Eqs. (5) and (6) into Eq. (3), we obtain:

A_0 + \sum_{n=1}^{N} A_n \sin(n\omega t + \phi_n) = B_0 + \sum_{n=1}^{N} \left( B_n + \frac{B_n n^2 \omega^2}{\omega_0^2} \right) \sin(n\omega t + \phi_n)   (7)

This equation shows that the ZMP trajectories and the CoM trajectories have the same form with different amplitudes. Now, by comparing the left-hand coefficients with the right-hand coefficients of Eq. (7), the relationship between the amplitudes is obtained:

A_0 = B_0, \qquad A_n = B_n \left( 1 + \left( \frac{n\omega}{\omega_0} \right)^2 \right)   (8)

These relationships reveal several features of biped walking which are very useful for adjusting the oscillator and walking parameters. For instance, by increasing the ZMP frequency, the amplitudes of the CoM are decreased. Generally, each step consists of two phases, single support and double support. To achieve the fastest walking speed, we considered the double support period to be very short. In our target structure, a step is defined by Length (L), Width (W), Orientation (θ) and Step Duration (T). According to these parameters, the ZMP trajectory can be generated. The most intuitive choice for the ZMP trajectory is the middle of the supporting foot [5]. Hence, the ZMP trajectories with respect to the torso location can be generated using two distinct periodic symmetric square functions. Therefore, the ZMP trajectories are defined as follows:

x_{ZMP}(t) = \begin{cases} -L & -T \le t < 0 \\ L & 0 \le t < T \end{cases}, \qquad y_{ZMP}(t) = \begin{cases} -(D+W) & -T \le t < 0 \\ D+W & 0 \le t < T \end{cases}   (9)

where D is the distance between the feet. Since y_{ZMP} and x_{ZMP} are odd functions, a specific method exists to approximate them by a PFS. By using this method, the coefficients of x_{ZMP} and y_{ZMP} can be calculated as follows:

A_0 = 0, \quad a_n = \begin{cases} \frac{4L}{n\pi} = C_x & \text{for } n \text{ odd} \\ 0 & \text{for } n \text{ even} \end{cases}, \qquad A_0 = 0, \quad a_n = \begin{cases} \frac{4(D+W)}{n\pi} = C_y & \text{for } n \text{ odd} \\ 0 & \text{for } n \text{ even} \end{cases}   (10)

By substituting these coefficients into Eq. (8), the trajectories of the CoM can be approximated. An approximation to the ZMP and the corresponding CoM trajectory in the Y direction when the sum is truncated at N = 31 is shown in Fig. 2(a). As shown in this figure, the output of the truncated PFS has large oscillations near the target ZMP. To overcome this issue and generate a smooth output, the Lanczos sigma factors [7] are employed, which are defined as follows:

\mathrm{sinc}\left(\frac{n\pi}{m}\right) = \frac{\sin(n\pi/m)}{n\pi/m}   (11)


Fig. 2. ZMP trajectory and corresponding CoM trajectory in Y-direction: (a) the outputs of oscillators when the sum is truncated at the N = 31; (b) the outputs of oscillators after applying the Lanczos sigma factors.

where m = N + 1. By multiplying each term of the PFS by these factors, properly smooth outputs are achieved. The obtained results are shown in Fig. 2(b). In the next section, we design an omnidirectional walk engine using this CPG step generator.
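The following is a minimal sketch of the step generator described above: the square-wave ZMP reference of Eq. (9) is expanded with the odd-harmonic coefficients of Eq. (10), the CoM amplitudes are obtained from Eq. (8), and the Lanczos sigma factors of Eq. (11) smooth the truncated sum. The parameter values (lateral amplitude, CoM height, step duration) and the choice of 2T as the full gait period are illustrative assumptions only.

```python
import numpy as np

G = 9.81
Z_C = 0.25          # illustrative CoM height (m)
T_STEP = 0.5        # illustrative step duration T (s)
D_PLUS_W = 0.06     # illustrative lateral ZMP amplitude D + W (m)
N_TERMS = 31        # truncation order, as in Fig. 2

def lanczos_sigma(n: int, m: int) -> float:
    """Lanczos sigma factor of Eq. (11): sinc(n*pi/m) with m = N + 1."""
    x = n * np.pi / m
    return np.sin(x) / x

def y_trajectories(t: np.ndarray):
    """Return (y_zmp, y_com) built from the PFS of Eqs. (5)-(10), sigma-smoothed."""
    omega = 2.0 * np.pi / (2.0 * T_STEP)   # the full gait period is taken here as 2*T
    omega0_sq = G / Z_C
    m = N_TERMS + 1
    y_zmp = np.zeros_like(t)
    y_com = np.zeros_like(t)
    for n in range(1, N_TERMS + 1, 2):                    # odd harmonics only, Eq. (10)
        a_n = 4.0 * D_PLUS_W / (n * np.pi)                # ZMP amplitude, Eq. (10)
        b_n = a_n / (1.0 + (n * omega) ** 2 / omega0_sq)  # CoM amplitude, Eq. (8)
        sigma = lanczos_sigma(n, m)
        y_zmp += sigma * a_n * np.sin(n * omega * t)
        y_com += sigma * b_n * np.sin(n * omega * t)
    return y_zmp, y_com

t = np.arange(0.0, 3.0, 0.01)
y_zmp, y_com = y_trajectories(t)
print(float(y_zmp.max()), float(y_com.max()))   # the CoM sway is smaller than the ZMP sway
```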

4 Hierarchical Walk Engine

Biped locomotion can generally be generated based on the trajectories of ZMP and CoM. An inverse kinematic method should then be used to calculate the joint angles. In this section, a hierarchical structure is proposed to develop an omnidirectional walk engine which has three levels, including the Trajectories generator, the Push recovery controllers and the Low level controllers. The overall architecture of this walk engine is depicted in Fig. 3.

Fig. 3. Overall architecture of our walk engine.

4.1 Trajectories Generator

The ZMP trajectories can be defined by substituting the parameters of a step into Eq. (10) to obtain the coefficients for Eq. (5). Then, by substituting these coefficients into Eq. (8) and using Eq. (6), the trajectories of the CoM can be generated.

According to the sagittal symmetry of walking in the frontal and lateral planes, the same trajectories with a half-period phase shift are generated for the swing leg movements. Moreover, three separate oscillators are allocated to generate the Z, θ and arm trajectories for both legs and both arms in a symmetrical manner. Once all of the reference walking trajectories have been generated, a feed-forward walking engine can be achieved by using an inverse kinematics method.

4.2 Push Recovery Controller

Feed-forward walk engines are able to provide walking only on even surfaces or on predefined inclined slopes. This type of walk engine generates rhythmic gaits for given inputs regardless of the environment. Therefore, due to unexpected errors which can arise from several sources, such as external disturbances, an inaccurate dynamics model, etc., they cannot be robust. For instance, while walking on rough terrain, several forces will be applied to the robot. Hence, to operate in these environments, robots should be able to apply recovery strategies to keep their stability. To develop stable walking, several criteria have been defined [5,13]; the most important criterion for stability is keeping the Center of Gravity (CoG, the vertical projection of the CoM on the ground) inside the polygon defined by the foot or feet touching the ground (support polygon). The real position of the ZMP and the position and velocity of the CoM are generally used to estimate the state of the robot. We have developed two independent human-inspired strategies to keep stability during walking and regain balance after external pushes.

Ankle Strategy. According to Eq. (1), the position of the CoG is proportional to the ankle torque. Therefore, to keep the CoG inside the support polygon, a compensating torque can be applied at the ankle joints. The ankle torque saturates whenever the CoG is at the boundaries of the support polygon. Therefore, by using this strategy, balance can be recovered whenever the CoG has not reached the edge of the support polygon. To provide more restoring torque, the arms can also be used. To implement this strategy a PD controller is used.

Fig. 4. Ankle strategy

\theta_c = K_p \times \omega + K_d \times \frac{d\omega}{dt}   (12)

where θ_c is the compensation angle that is added to the ankle angle, K_p and K_d are the proportional and derivative gains, and ω is the angular velocity of the CoM. K_p and K_d are set based on expert knowledge and experiments. This strategy is shown in Fig. 4.
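As an illustration of Eq. (12), the following is a minimal sketch of such an ankle compensation term computed from gyro feedback; the gain values and the saturation limit are illustrative assumptions, not the values tuned in the paper.

```python
class AnklePD:
    """PD compensation of Eq. (12): theta_c = Kp*omega + Kd*d(omega)/dt."""

    def __init__(self, kp: float, kd: float, dt: float, max_angle_rad: float):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.max_angle = max_angle_rad      # crude stand-in for the torque saturation
        self._prev_omega = 0.0

    def compensation(self, omega: float) -> float:
        """omega: angular velocity of the CoM (e.g. from the gyro), rad/s."""
        d_omega = (omega - self._prev_omega) / self.dt
        self._prev_omega = omega
        theta_c = self.kp * omega + self.kd * d_omega
        # Clamp, mimicking the saturation of the ankle torque at the support
        # polygon boundary described in the text.
        return max(-self.max_angle, min(self.max_angle, theta_c))

# Illustrative gains only; the paper sets Kp and Kd from expert knowledge and experiments.
ankle = AnklePD(kp=0.08, kd=0.01, dt=0.02, max_angle_rad=0.15)
print(ankle.compensation(0.3))   # compensation angle (rad) added to the ankle command
```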


Hip Strategy. During fast walking, the CoM accelerates forward, meaning the Center of Pressure (CoP) moves behind the CoG. The ankle strategy tries to decelerate the CoM and keep the CoG inside the support polygon. This causes the CoP to move in front of the CoG and to reach the boundary of the support foot. In this situation, the hip will start to roll over and, as a consequence, the robot cannot regain its stability by using only the ankle strategy. As shown in Fig. 5, the hip angle error can be a good feedback signal to detect this condition. Using this feedback, an indirect reference ZMP controller can be developed, which tries to follow the reference ZMP without measuring the real ZMP, by keeping the hip parallel to the ground [20]. Since humanoids have many joints, to control the acceleration of the CoM, beyond the ankle torque, the hip and waist joints can be used to apply more torque around the CoM. Therefore, to prevent a fall, in addition to the ankle and arm joints, the joints of the waist and the hips should be used. The hip strategy tries to compensate the disturbance by moving the CoM and hip quickly. To implement this strategy, two PD controllers are used: one of them is used to apply torque to the hip joints to create the acceleration of the upper body, and the other one is used to decelerate the body. The set points for both controllers and the gains for each of them are selected depending on the amplitude of the disturbance.

Fig. 5. Hip strategy

4.3 Low Level Controller

This is the lowest level of the proposed structure and consists of three main modules: a state estimator, inverse kinematics solvers and joint position controllers. The low-level controller estimates the position, velocity and acceleration of the CoM by using the data of the IMU, which is mounted on the hip, and the forward kinematic model of the robot, which uses the values of the joint encoders. This level also provides PID position controllers to control the actuators. Moreover, to increase portability, every module which depends on the robot's hardware is kept in this level. Most humanoid robots have standard 6-DoF legs with different configurations (3 DoF in the hip, 1 DoF in the knee and 2 DoF in the ankle). Therefore, to apply the proposed walking engine to a new humanoid robot, only this layer needs to be configured.

5 Optimization of Walk Engine Parameters

As described in previous sections, the walk engine is parameterized by 7 parameters, representing the expected step movement in the x, y, and z axes, along with the turning and inclination angles, the step duration and the center of mass height. The parameters are filtered in order to avoid out-of-bounds values, by using their absolute values when parameters must be positive. This creates a smooth fitness landscape for the algorithm to explore, instead of simply limiting the parameters within a certain range, which creates sharp ridges and flat areas.


These parameters have been tuned in order to obtain both an initial solution that was very stable and could move forward, albeit slowly, and the fastest hand-tuned solution. CREPS-CMA [1] is then used to optimize the slow and stable solution, and the result is compared against the best hand-tuned version. CREPS-CMA is a state-of-the-art contextual policy search algorithm that iteratively generates sets of candidate solutions sampled from a parameter distribution based on a given context. Candidate solutions are evaluated against a given task and context, and the parameter distribution is updated based on the candidates' fitness, thus improving the quality of the samples in the next iteration. Due to CREPS-CMA's parallel nature, we could take advantage of a distributed optimization architecture to achieve a speed-up of 30x, thus enabling us to optimize the parameters in a practical amount of time.
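To clarify the sample-evaluate-update cycle described above, the following is a schematic sketch of a generic episodic policy-search loop of this kind. It is not the actual CREPS-CMA update (which bounds the relative entropy between successive distributions, uses covariance matrix adaptation and conditions on a context, none of which are reproduced here); the weighted-mean distribution update below is a simple placeholder under that stated assumption.

```python
import numpy as np

def policy_search(evaluate, dim, iterations=100, pop_size=20, seed=0):
    """Generic sample-evaluate-update loop.

    evaluate(params) -> fitness (lower is better, as in Eqs. (13)-(14)).
    The Gaussian search distribution and its softmax-weighted update are
    placeholders for the CREPS-CMA update, which is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    mean, cov = np.zeros(dim), np.eye(dim)
    for _ in range(iterations):
        samples = rng.multivariate_normal(mean, cov, size=pop_size)
        fitness = np.array([evaluate(s) for s in samples])
        weights = np.exp(-(fitness - fitness.min()))     # better samples weigh more
        weights /= weights.sum()
        mean = weights @ samples                          # move the distribution
        diffs = samples - mean
        cov = (weights[:, None] * diffs).T @ diffs + 1e-6 * np.eye(dim)
    return mean

# Toy usage: minimize a quadratic instead of running the walking simulation.
best = policy_search(lambda p: float(np.sum((p - 1.0) ** 2)), dim=7)
print(best)
```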

5.1 Maximum Speed Scenario

This scenario focused on optimizing the humanoid agent to achieve the highest speed moving straight forward, without being unstable enough to fall. The agent is allowed to move for 15 s, and the fitness function f with parameters θ is defined as

f(\theta) = -\delta_x + \delta_y + \varepsilon   (13)

where δ_x represents the total distance covered in the x-axis, δ_y represents the total distance deviated in the y-axis, and ε is 0 if the agent did not fall and 30 otherwise. With this function, the agent is rewarded for moving along the x-axis, and it is penalized for deviating to the sides or falling. After 5000 epochs of training, CREPS-CMA improved upon our initial speed of 11 cm/s and reached a maximum stable velocity of 59 cm/s. In comparison, the best hand-tuned solution achieved only a top speed of 50 cm/s, so the optimized gait is 18% faster. In addition, our optimized agent is faster than both agents in [2,18]. The distances covered by the optimized, hand-tuned and initial solutions are shown in Fig. 6.
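A direct transcription of Eq. (13) as an evaluation routine might look like the sketch below; only the formula itself comes from the text, while the function name and the example numbers are a hypothetical wrapper around the simulator output.

```python
FALL_PENALTY = 30.0   # epsilon in Eq. (13) when the agent falls

def max_speed_fitness(delta_x: float, delta_y: float, fell: bool) -> float:
    """Eq. (13): f = -dx + dy + eps. Lower is better: distance covered along the
    x-axis reduces the cost, while lateral deviation (dy, a non-negative total)
    and falling increase it."""
    epsilon = FALL_PENALTY if fell else 0.0
    return -delta_x + delta_y + epsilon

# Example: 8.8 m forward in 15 s (~59 cm/s), 0.2 m of lateral deviation, no fall.
print(max_speed_fitness(8.8, 0.2, fell=False))   # -8.6
```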

5.2 Omnidirectional Scenario

In our second scenario, we focused on optimizing the humanoid agent to achieve high mobility and turning speed, without being unstable enough to fall. We allow the agent to start moving forwards at maximum speed for a random amount of time (from 1 to 3 s), thereby ensuring that our parameters allow the agent to turn when it is already moving. The agent is then allowed to turn towards a specific angle during 10 s, or less if it starts moving away from the target. We defined the angle as a random value in the interval [0, 45]°, and the agent is given a target 1 unit of distance away in that direction. Using a positive interval instead of a symmetrical one is a simple trick to speed up optimization, which we exploit by negating the y and θ parameters when, in actual play, the target stands in a direction with a negative angle. The fitness function f with parameters θ is defined as

f(\theta) = \delta_d + \frac{10}{3}\,\delta_\theta + \frac{20}{180}\,\delta_t + \varepsilon,   (14)



Fig. 6. The top panel shows the distance covered in the same amount of time by the agent after optimization. The middle panel shows the distance covered by our hand-tuned solution. The bottom panel shows the stable initial hand-tuned solution given to CREPS-CMA.

where \delta_d represents the total distance to the intended target, \delta_\theta represents the difference between the final and target orientations of the agent, \delta_t represents the time taken, and \varepsilon is 0 if the agent did not fall and 30 otherwise. With this function, we reward the agent for moving as fast and as accurately as possible to the target pose, without falling. We use the ratios above to scale each segment of the reward, such that the distance, angle, and time have a similar importance, penalty-wise. After 2000 epochs of training, we achieve a fast omni-directional walk, where the agent can turn up to 45° within a single unit of distance, as can be seen in Fig. 7.

Fig. 7. An exemplary omni-directional walk after the optimization process has finished. The agent is given several turning targets, starting with left turns and finishing with right ones, all of these with increasing angle. The agent walks forward for 2 s after every turn.

6

Conclusion

This paper proposed a hierarchical structure to generate an omnidirectional walk engine. This structure decoupled the walking motion into three hierarchical levels to increase flexibility and portability. In the first level, the 3D-LIPM and 7 PFS oscillators are used to generate the trajectories of the feet. In the second level, two human-inspired push recovery strategies are developed to increase stability during walking. To achieve a highly portable walk engine, all modules that depend on the robot's platform are developed in the lowest level. The proposed walk engine was developed and fully tested on a simulated Nao humanoid robot. CREPS-CMA was used to find the optimized parameters. Experimental results showed that this structure, using the optimized parameters, provides fast and stable walking (59 cm/s), which is 18% faster than the best hand-tuned solution. In the future, we intend to develop a knee push recovery strategy to provide even more stable walking.

Acknowledgment. This research is supported by: Portuguese National Funds through the Foundation for Science and Technology (FCT), in the context of the project UID/CEC/00127/2013; and by the European Union's FP7 under EuRoC grant agreement CP-IP 608849. The second author is supported by FCT under grant PD/BD/113963/2015.

References 1. Abdolmaleki, A., Simoes, D., Lau, N., Reis, L.P., Neumann, G.: Contextual relative entropy policy search with covariance matrix adaptation. In: Proceedings of 2016 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), pp. 94–99. IEEE, May 2016 2. Asta, S., Sariel-Talay, S.: Nature-inspired optimization for biped robot locomotion and gait planning. In: Applications of Evolutionary Computation, pp. 434–443 (2011) 3. Behnke, S.: Online trajectory generation for omnidirectional biped walking. In: Proceedings of 2006 IEEE International Conference on Robotics and Automation, ICRA 2006, pp. 1597–1603. IEEE (2006) 4. Cristiano, J., Puig, D., Garcıa, M.: Locomotion control of biped robots on uneven terrain through a feedback CPG network. In: Proceedings of the XIV Workshop of Physical Agents, pp. 1–6 (2013) 5. Erbatur, K., Okazaki, A., Obiya, K., Takahashi, T., Kawamura, A.: A study on the zero moment point measurement for biped walking robots. In: Proceedings of 2002 7th International Workshop on Advanced Motion Control, pp. 431–436. IEEE (2002) 6. Guertin, P.A.: The mammalian central pattern generator for locomotion. Brain Res. Rev. 62(1), 45–56 (2009) 7. Hamming, R.: Lanczos’ σ factors and the σ factors in the general case 32.6 and 32.7. In: Numerical Methods for Scientists and Engineers, pp. 534–536 (1986) 8. Ijspeert, A.J.: Central pattern generators for locomotion control in animals and robots: a review. Neural Netw. 21(4), 642–653 (2008) 9. Kajita, S., Kanehiro, F., Kaneko, K., Fujiwara, K., Harada, K., Yokoi, K., Hirukawa, H.: Biped walking pattern generation by using preview control of zeromoment point. In: Proceedings of 2003 IEEE International Conference on Robotics and Automation, ICRA 2003, vol. 2, pp. 1620–1626. IEEE (2003)



10. Kajita, S., Tan, K.: Study of dynamic biped locomotion on rugged terrainderivation and application of the linear inverted pendulum mode. In: Proceedings of 1991 IEEE International Conference on Robotics and Automation, pp. 1405– 1411. IEEE (1991) 11. Kasaei, S.M., Lau, N., Pereira, A., Shahri, E.: A reliable model-based walking engine with push recovery capability. In: proceedings of 2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), pp. 122– 127, April 2017 12. Kasaei, S.M., Kasaei, S.H., Shahri, E., Ahmadi, A., Lau, N., Pereira, A.: How to select a suitable action against strong pushes in adult-size humanoid robot: Learning from past experiences. In: Proceedings of 2016 International Conference on Autonomous Robot Systems and Competitions (ICARSC), pp. 80–86. IEEE (2016) 13. Komura, T., Leung, H., Kudoh, S., Kuffner, J.: A feedback controller for biped humanoids that can counteract large perturbations during gait. In: Proceedings of the 2005 IEEE International Conference on Robotics and Automation, ICRA 2005, pp. 1989–1995. IEEE (2005) 14. Liu, J., Urbann, O.: Bipedal walking with dynamic balance that involves threedimensional upper body motion. Robot. Auton. Syst. 77, 39–54 (2016) 15. Missura, M., Behnke, S.: Self-stable omnidirectional walking with compliant joints. In: Proceedings of 8th Workshop on Humanoid Soccer Robots, IEEE International Conference on Humanoid Robots, Atlanta, USA (2013) 16. Mohades Kasaei, S.H., Kasaei, M., Alireza Kasaei, S.: Design and implementation of a fully autonomous humanoid soccer robot. Ind. Robot. Int. J. 39(1), 17–26 (2012) 17. Or, J.: A hybrid cpg-zmp control system for stable walking of a simulated flexible spine humanoid robot. Neural Netw. 23(3), 452–460 (2010) 18. Picado, H., Gestal, M., Lau, N., Reis, L., Tom´e, A.: Automatic generation of biped walk behavior using genetic algorithms. In: Proceedings of Bio-Inspired Systems: Computational and Ambient Intelligence, pp. 805–812 (2009) 19. Song, K.T., Hsieh, C.H.: CPG-based control design for bipedal walking on unknown slope surfaces. In: Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 5109–5114. IEEE (2014) 20. Song, S., Ryoo, Y., Hong, D.: Development of an omni–directional walking engine for full–sized lightweight humanoid robots. In: IDETC/CIE Conference (2011) 21. Torres, E., Garrido, L.: Automated generation of CPG-based locomotion for robot NAO. In: RoboCup 2011: Robot Soccer World Cup XV, pp. 461–471 (2012) 22. Vukobratovic, M., Frank, A., Juricic, D.: On the stability of biped locomotion. IEEE Trans. Biomed. Eng. 1, 25–36 (1970) 23. Yu, J., Tan, M., Chen, J., Zhang, J.: A survey on CPG-inspired control models and system implementation. IEEE Trans. Neural Networks Learn. Syst. 25(3), 441–456 (2014)

Modelling, Trajectory Planning and Control of a Quadruped Robot Using Matlab/Simulink

Italo Oliveira1, Ramiro Barbosa2, and Manuel Silva3(B)

1 Universidade Tecnológica Federal do Paraná, Campo Mourão, Paraná, Brazil
[email protected]
2 Instituto Superior de Engenharia do Porto, Porto, Portugal
[email protected]
3 Instituto Superior de Engenharia do Porto and INESC TEC, Porto, Portugal
[email protected]

Abstract. Due to the difficulty of building and making control tests in real robots, it is usual to first have a simulated model that provides a good approximation of a real robot's behaviour. The importance of a good control system in the execution of a planned trajectory inspired this work, whose purpose is to design a control system for a quadruped robot and test its performance.

Keywords: MATLAB · Modelling · Trajectory planning · Control system · Simulation · Quadruped robots

1 Introduction

Mobile robotics is an important tool in today's society, since much scientific research relies on instruments that must move in places that are inhospitable for humans. For the stated reasons, it is necessary to create means of locomotion capable of withstanding several adversities, such as high temperature and pressure, steep and irregular terrain, and places with high radioactivity. Mobile robots are mainly divided into locomotion by wheels, by tracks and by legs. Although wheeled locomotion is more efficient on regular terrain and in indoor environments, legged locomotion is more robust when obstacles appear on the path [1,2]. Within legged locomotion, there is the sub-group of quadruped robots, which are robots that use four legs to move. Compared to other types of legged robots, quadrupeds have an advantage over bipedal robots in terms of stability, and compared to hexapod and octopod robots, they are more efficient [3,4]. There are also many studies about quadruped robot models, especially in control and gait planning, such as [5], which describes a Central Pattern Generator to generate the gait, or [6], which implements a free gait based on posture control.



Still, [7] presents an on-line technique to construct walking gaits on irregular terrain using a control basis, and [8] investigates an operational space control based on hierarchical task optimisation for quadruped locomotion. The quadruped robot presented in this work adopts the trot gait, originally found in horses [12], for its locomotion. Although factors such as energy efficiency, stability, speed and mobility are still far from ideal, quadruped robots provide a good approximation of the biological model when appropriate locomotion patterns are implemented [9]. The problem in the control of mobile systems is the proper selection of parameters according to the goals to be achieved. Defining the sampling rate, which controller to use, how much accuracy the locomotion pattern requires, the energy efficiency and the specifications of the actuators and sensors are necessary efforts that do not always have a trivial solution. In most cases the mathematical models of the systems are complex and non-linear, and finding a proper controller for such systems requires the application of linearization techniques in order to reach a simple and similar plant for the robot. Having these ideas in mind, this paper presents an overview of the implemented robot model in Sect. 2, considerations about the planned trajectory are shown in Sect. 3, the implementation and analysis of the control system using Matlab/Simulink are presented in Sect. 4 and, finally, the conclusions and future developments are given in Sect. 5.

2

Model Overview

The model of the robot used in this work is made of a four-part body with two rotational joints in each leg, which provide angular movements along the Z axis. Between each two adjacent parts of the body there is a spring and damper system associated with a 2-DOF joint that gives more flexibility to the movements of the body [13]. In order to improve the simulated model, a foot-ground contact system, using equations for the reaction forces in the X and Y axes, was implemented. Each part of the robot is associated with an inertia matrix according to its shape. The robot has a total weight of 4 kg. Figure 1a shows the aspect of the robot model in SimMechanics. The simplified diagram with connections between body, legs and foot-ground systems is presented in Fig. 1b. The diagram also has a GND block that is needed to provide the robot its position in relation to the ground [15].

2.1 Body Subsystems

The body has four interconnected blocks, each with its own rectangular-shape inertia matrix Ib with mass mb = 0.9 kg, which is described in Eq. (1), taking advantage of its symmetry around the coordinate axes [13]. The robot body dimensions are width (wb) = 0.30 m, depth (db) = 0.20 m and height (hb) = 0.15 m. The complete body subsystem is shown in Fig. 2.



(a) Simulated quadruped robot aspect in SimMechanics.

(b) Simplified diagram of the model.

Fig. 1. Quadruped robot model overview.

Fig. 2. Body scheme overview in SimMechanics.

I_b = \frac{1}{12}\begin{bmatrix} m_b(h_b^2+d_b^2) & 0 & 0 \\ 0 & m_b(w_b^2+d_b^2) & 0 \\ 0 & 0 & m_b(w_b^2+h_b^2) \end{bmatrix}   (1)

The CS2 port present in each body block is used to connect the corresponding leg subsystem, and the ground reference is connected to the center of gravity of BODY 1.

2.2 Leg Subsystems

As mentioned, each leg has two rotational joints: one in the hip and one in the knee. The implemented leg subsystem is depicted in Fig. 3.



Fig. 3. SimMechanics diagram of the leg subsystems.

The two IC blocks provide the initial angle of each joint in the simulation. The subsystem is connected on its left to the corresponding body block and on its right to its foot-ground block. The control system sends the torque signal to the actuator blocks and its feedback is provided by the sensor blocks. The cylindrical inertia matrix associated with the body blocks leg sup and leg inf has radius r = 0.02 m, length L = 0.2 m and mass ml = 0.05 kg. Its equation, for the applied case of alignment with the X axis, is given by (2) [13].

I_l = \begin{bmatrix} \frac{1}{2} m_l r^2 & 0 & 0 \\ 0 & \frac{1}{12} m_l(3r^2+L^2) & 0 \\ 0 & 0 & \frac{1}{12} m_l(3r^2+L^2) \end{bmatrix}   (2)
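For reference, the two inertia matrices can be computed as follows; the sketch simply evaluates Eqs. (1) and (2) and stores the diagonal entries, since both matrices are diagonal.

```cpp
#include <array>

using Diag3 = std::array<double, 3>;  // diagonal of a 3x3 inertia matrix

// Eq. (1): inertia of one rectangular body part (mass mb, width wb, depth db, height hb).
Diag3 bodyInertia(double mb, double wb, double db, double hb) {
  return {mb * (hb * hb + db * db) / 12.0,
          mb * (wb * wb + db * db) / 12.0,
          mb * (wb * wb + hb * hb) / 12.0};
}

// Eq. (2): inertia of one cylindrical leg segment aligned with the X axis
// (mass ml, radius r, length L).
Diag3 legInertia(double ml, double r, double L) {
  return {0.5 * ml * r * r,
          ml * (3.0 * r * r + L * L) / 12.0,
          ml * (3.0 * r * r + L * L) / 12.0};
}

// With the values reported in the text:
// bodyInertia(0.9, 0.30, 0.20, 0.15) and legInertia(0.05, 0.02, 0.2).
```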

2.3 Foot-Ground Subsystems

The foot-ground subsystems are included in the model to get a closer approximation to a real-world robot. Their architecture is shown in Fig. 4. The sensor provides the velocity and position of the robot feet in order to compute the friction and normal forces that the robot applies on the ground. The applied forces are calculated through Eqs. (3) and (4), and are applied only when the foot touches the ground.

F_x = -K_x (x - x_0) - B_x \dot{x}   (3)

F_y = -K_y y - B_y (-y)^m \dot{y}   (4)

The equation used to compute the friction force is based on a linear spring-damper system in which Fx is the applied force, Kx is the elastic constant of the spring, x and x0 are the current and previous X-axis positions in relation to the ground, respectively, Bx is the constant of the damper and \dot{x} is the velocity in the X axis provided by the body sensor. For the normal force calculation, a non-linear spring-damper equation was chosen because it presented better results in tests [11]. In this case, Fy is the applied normal force, Ky is the elastic constant of the spring, y is the current Y-axis position, By is the damper constant, m is a factor that depends on the characteristics of the soil and \dot{y} is the velocity in the Y axis provided by the body sensor.

Fig. 4. SimMechanics diagram of the foot-ground subsystems.
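A compact sketch of this contact model is given below. The spring, damper and soil constants are not reported in the paper, so the numbers used here are placeholders, and the ground-level convention (contact while y < 0) is an assumption.

```cpp
#include <cmath>

// Foot-ground contact model of Eqs. (3) and (4); constants are placeholders.
struct GroundModel {
  double Kx = 1000.0, Bx = 50.0;          // tangential spring/damper
  double Ky = 20000.0, By = 300.0;        // normal spring/damper
  double m  = 0.5;                        // soil-dependent exponent

  // x, y: current foot position; x0: contact point; xDot, yDot: foot velocity.
  double frictionForce(double x, double x0, double xDot) const {
    return -Kx * (x - x0) - Bx * xDot;                    // Eq. (3), linear
  }
  double normalForce(double y, double yDot) const {
    if (y >= 0.0) return 0.0;                             // foot not in contact
    return -Ky * y - By * std::pow(-y, m) * yDot;         // Eq. (4), non-linear
  }
};
```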

3

Trajectory Planning

Trajectory planning is crucial to the robot's locomotion. A good planning means higher efficiency, because it maximises the generated movement and minimises the energy spent to generate it. With this in mind, a good option for legged locomotion is to use a cycloid trajectory during the transfer phase of the leg. In this way, the implemented trajectory is given by the expressions:

P_T = \left[ V_F\left(t - \frac{t_{T_0}}{2\pi}\sin\!\left(\frac{2\pi t}{t_{T_0}}\right)\right),\ \frac{F_c}{2}\left(1 - \cos\!\left(\frac{2\pi t}{t_{T_0}}\right)\right) \right]^T   (5)

P_S = [V_F T_0,\ 0]^T   (6)

where T0 is the cycle time, VF is the frontal velocity of the body defined by VF = LS/T0, tT0 is the transfer phase time given by tT0 = (1 − β)T0 (where β is the duty factor of the leg), t is the time variable and Fc is the maximum height of the cycloid. In order to make the implementation of the gait possible, it is necessary to analyse the kinematics of the robot. The kinematics are responsible for providing the reference angles to the controller, based on the positions of the planned trajectory. The study of the kinematics was performed for one leg only, since the others present the same shapes and sizes. The kinematic model of a leg with two degrees of freedom was used in this case.
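The following sketch evaluates the transfer-phase cycloid of Eq. (5) and the closed-form inverse kinematics of a planar two-link leg as referred to above; the link lengths and the leg frame convention are assumptions made for illustration.

```cpp
#include <algorithm>
#include <array>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Transfer-phase cycloid of Eq. (5): planned foot position {x, y} at time t
// (step length LS, cycle time T0, duty factor beta, maximum foot height Fc).
std::array<double, 2> transferTrajectory(double t, double T0, double beta,
                                         double LS, double Fc) {
  const double VF  = LS / T0;                 // frontal velocity of the body
  const double tT0 = (1.0 - beta) * T0;       // transfer-phase duration
  const double w   = 2.0 * kPi * t / tT0;
  return {VF * (t - tT0 / (2.0 * kPi) * std::sin(w)),
          Fc / 2.0 * (1.0 - std::cos(w))};
}

// Closed-form inverse kinematics of a planar two-link leg (hip + knee) with
// thigh length l1 and shank length l2; returns {hip angle, knee angle} for a
// desired foot position (x, y) expressed in the hip frame (assumed convention).
std::array<double, 2> legIK(double x, double y, double l1, double l2) {
  const double c = std::clamp((x * x + y * y - l1 * l1 - l2 * l2) /
                              (2.0 * l1 * l2), -1.0, 1.0);
  const double knee = std::acos(c);           // knee flexion (one of the two branches)
  const double hip  = std::atan2(y, x) -
                      std::atan2(l2 * std::sin(knee), l1 + l2 * std::cos(knee));
  return {hip, knee};
}
```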


4


Control System and Tuning

Due to the difficulty of finding a model that describes the robot's non-linear behaviour, the design of the control system resorts to linearization methods to find a state-space representation that describes the main dynamics of the robot. This was done in order to find a proper solution for the tuning of the controller parameters. After the tuning, the controller acts on the robot's non-linear system. The simplified block diagram of the system with the controller is shown in Fig. 5.

Fig. 5. Overall control system diagram.

The adopted procedure for the tuning of the controller was the following:

(a) Select the IO points of the robot for linearization;
(b) Select the parameters that will be trimmed;
(c) Obtain the linear state space of the plant using the trim model;
(d) Find a proper compensator for the linearized plant using classic control methods;
(e) Test the selected parameters in the non-linear model of the robot;
(f) If necessary, make fine adjustments and repeat the last step.

The linearization of the system was made using the Simulink Linear Analysis tool. The linearized model was obtained using the trim model technique in the hip and knee of leg 1. All the joint positions and velocities remained untrimmed, while all the other parameters were set to steady state. Since all legs have the same length and shape, there was no need to repeat the analysis for all of them. The step-by-step linearization process is not in the scope of this work, and both techniques can be found in the Simulink Control Design reference [14]. After linearization, the state-space model was used in the Simulink Control System Analysis in order to obtain a proper compensator. The parameters were tuned using the PID Tuning tool shown in Fig. 6. A good option in this case is to use a PD controller with a first-order filter, due to the varying reference θd(t). The transfer function of the PD controller with filter for leg i and joint j is given by a lead-type compensator formula, as described in Eq. (7).

C_{ij}(s) = K_{ij}\,\frac{s + Z_{ij}}{s + P_{ij}}, \quad Z_{ij} < P_{ij}   (7)

Due to their resemblance, the controllers Ci1 and Ci2 were each tuned only once and then used in all hip and knee joints, respectively.
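For implementation on a digital controller, the lead compensator of Eq. (7) can be discretised, for example with the bilinear (Tustin) transform. This discretisation is our own illustrative addition; the gains in the usage note correspond to the tuned values reported later in Table 2 and to the 0.005 s sampling period of the tests.

```cpp
// Discrete realisation of C(s) = K (s + Z) / (s + P), Z < P (Eq. (7)),
// via the Tustin transform with sampling period Ts.
class LeadCompensator {
 public:
  LeadCompensator(double K, double Z, double P, double Ts) {
    const double a = 2.0 / Ts;
    b0_ = K * (a + Z) / (a + P);        // numerator coefficients after Tustin
    b1_ = K * (Z - a) / (a + P);
    a1_ = (P - a) / (a + P);            // denominator coefficient
  }
  // e: angle error theta_d - theta; returns the torque command.
  double update(double e) {
    const double u = b0_ * e + b1_ * prevE_ - a1_ * prevU_;
    prevE_ = e;
    prevU_ = u;
    return u;
  }
 private:
  double b0_, b1_, a1_;
  double prevE_ = 0.0, prevU_ = 0.0;
};

// Hip joints:  LeadCompensator(77, 20.88, 150, 0.005)
// Knee joints: LeadCompensator(204, 20.88, 150, 0.005)
```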



Fig. 6. Tuning interface for the compensator.

5

Tests and Results

The tests were carried out while the robot was adopting the trot gait. The velocity VF and the parameters Kij, Zij and Pij of the controllers were varied in tests with a saturation limit of ±20 Nm in order to obtain a first assessment. After that, the best tuning was applied to different values of saturation and the results were compared. The number of simulated cycles was nc = 5, corresponding to a simulation time of t = nc · T0. The sampling period was 0.005 s and the step length was LS = 0.2 m. A total of ten tests were performed varying the controller parameters, and each test considers five different periods. Finally, for the best case the system was tested with five different values of saturation.

5.1 Performance Indices

The used performance indices were the Integral of Absolute Error (Iae), the Integral of Squared Error (Ise) and their percentage in relation to the reference θd area, %Iae and %Ise, given respectively by expressions (8)–(11):

I_{ae_{hip}} = \sum_{i=1}^{4}\int_{0}^{nT_0} |\theta_{d_{i1}} - \theta_{i1}|\,dt, \qquad I_{ae_{knee}} = \sum_{i=1}^{4}\int_{0}^{nT_0} |\theta_{d_{i2}} - \theta_{i2}|\,dt   (8)

I_{se_{hip}} = \sum_{i=1}^{4}\int_{0}^{nT_0} |\theta_{d_{i1}} - \theta_{i1}|^2\,dt, \qquad I_{se_{knee}} = \sum_{i=1}^{4}\int_{0}^{nT_0} |\theta_{d_{i2}} - \theta_{i2}|^2\,dt   (9)

\%I_{ae} = \sum_{i=1}^{4}\sum_{j=1}^{2} \frac{\int_{0}^{nT_0} |\theta_{d_{ij}} - \theta_{ij}|\,dt}{\int_{0}^{nT_0} |\theta_{d_{ij}}|\,dt} \cdot 100   (10)

\%I_{se} = \sum_{i=1}^{4}\sum_{j=1}^{2} \frac{\int_{0}^{nT_0} |\theta_{d_{ij}} - \theta_{ij}|^2\,dt}{\int_{0}^{nT_0} |\theta_{d_{ij}}|\,dt} \cdot 100   (11)

where θ_{d_{ij}}(t) is the desired angle and θ_{ij}(t) is the measured angle for leg i and joint j. As a measure of the efficiency of the system, the average energy consumption per travelled distance, Eav, is used, given by:

E_{av} = \frac{1}{d}\sum_{i=1}^{4}\sum_{j=1}^{2}\int_{0}^{T_0} |\tau_{a_{ij}}(t)\,\dot{\theta}_{ij}(t)|\,dt   (12)

where τ_{a_{ij}} is the applied torque and \dot{\theta}_{ij} is the measured velocity in each leg i and joint j of the robot. Note that Eav assumes that energy regeneration by actuators doing negative work is not available [10]. For actuator sizing considerations, the maximum torques of the hips, τhip, and of the knees, τknee, which correspond to the stall torque needed to move the joint during a brief period of time, are also computed.
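The indices (8)-(12) can be evaluated numerically from logged joint data as sketched below; the layout of the log structure is an assumption made for illustration.

```cpp
#include <cmath>
#include <vector>

// One log entry per sample time, holding the desired and measured angles,
// the applied torques and the measured velocities of all 4 legs x 2 joints.
struct Sample {
  double thetaD[4][2], theta[4][2];
  double tau[4][2],    thetaDot[4][2];
};

struct Indices { double iae = 0, ise = 0, refArea = 0, energy = 0; };

// dt: sampling period; distance: travelled distance d used by Eq. (12).
Indices evaluate(const std::vector<Sample>& log, double dt, double distance) {
  Indices r;
  for (const Sample& s : log)
    for (int i = 0; i < 4; ++i)
      for (int j = 0; j < 2; ++j) {
        const double e = s.thetaD[i][j] - s.theta[i][j];
        r.iae     += std::fabs(e) * dt;                               // Eqs. (8), (10)
        r.ise     += e * e * dt;                                      // Eqs. (9), (11)
        r.refArea += std::fabs(s.thetaD[i][j]) * dt;                  // reference area
        r.energy  += std::fabs(s.tau[i][j] * s.thetaDot[i][j]) * dt;  // Eq. (12)
      }
  r.energy /= distance;   // Eav: energy per travelled distance
  return r;
}
// %Iae = 100 * r.iae / r.refArea and %Ise = 100 * r.ise / r.refArea.
```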

5.2 Results

After several tests with different control parameters, the results presented in Table 1 are given by test number 5, which best satisfied the desired performance in terms of error and efficiency. The best parameters are the ones that have the lowest values of Eav · %Iae and Eav · %Ise, as shown in Figs. 7 and 8, respectively.

Table 1. Best results of the tests.

VF (ms−1) | Iaehip (deg) | Iaeknee (deg) | Isehip (deg) | Iseknee (deg) | %Iae  | %Ise  | d (m) | Eav (Jm−1)
0.1       | 6.176        | 2.931         | 2.170        | 0.401         | 0.161 | 0.045 | 0.745 | 22681.668
0.05      | 6.571        | 4.344         | 1.548        | 0.414         | 0.096 | 0.017 | 1.126 | 9756.350
0.033     | 8.110        | 5.993         | 1.347        | 0.441         | 0.083 | 0.011 | 1.030 | 7425.555
0.025     | 11.249       | 7.892         | 4.216        | 0.631         | 0.084 | 0.021 | 0.981 | 5942.543
0.02      | 13.717       | 9.802         | 5.039        | 0.810         | 0.083 | 0.021 | 1.000 | 4645.231

The controller parameters that correspond to the results of Table 1 are shown in Table 2. Using the parameters of Table 2, the next test was to vary the saturation limit. The results for all five tests performed are shown in Table 3.



Fig. 7. Values of Eav multiplied by %Iae for each test.

Fig. 8. Values of Eav multiplied by %Ise for each test.

Table 2. Controller parameters for the best case.

Parameter | Hip joint | Knee joint
Kij       | 77        | 204
Zij       | 20.88     | 20.88
Pij       | 150       | 150

Table 3. Saturation tests in the best case of the controller.

τsat (Nm) | Iaehip (deg) | Iaeknee (deg) | Isehip (deg) | Iseknee (deg) | %Iae  | %Ise   | d (m) | Eav (Jm−1)
20        | 6.571        | 4.344         | 1.548        | 0.414         | 0.096 | 0.017  | 1.126 | 9756.350
15        | 6.867        | 4.814         | 2.368        | 0.639         | 0.103 | 0.027  | 1.071 | 9869.309
10        | 8.810        | 5.401         | 5.488        | 1.130         | 0.125 | 0.058  | 0.827 | 9789.381
5         | 73.618       | 11.784        | 652.621      | 8.986         | 0.753 | 5.836  | 0.862 | 5179.345
4         | 103.717      | 14.724        | 1292.455     | 17.459        | 1.045 | 11.555 | 0.905 | 3974.033

As expected, Fig. 9 shows that the highest energy efficiency (lowest Eav) is obtained for the lowest saturation. On the other hand, higher precision leads to higher torques, as shown in Fig. 10. The joint trajectories for a maximum torque of 10 Nm are shown in Fig. 11, while Fig. 12 illustrates the position of the robot's body.



Fig. 9. Values of Eav versus saturation torque.

Fig. 10. Values of %Iae versus saturation torque.

Fig. 11. Joint trajectories of leg 1 for a saturation of 10 Nm.

Fig. 12. Total distance travelled by the robot for a saturation of 10 Nm in the joints.



As can be seen, the robot follows the desired joint trajectories with good accuracy, travelling a total distance of approximately 1 m.

6

Conclusions

The presented work focused on the implementation of a control system for a quadruped robot using linear analysis and trajectory planning. The planned trajectory presents good results when the trot gait is adopted, especially for a velocity of VF = 0.05 ms−1. The linearization provides a good approximation for the design of the controllers, but fine adjustments are still required to get better results. It is also concluded that the error is directly associated with the joint torques, which may vary with the presence of saturation in the controllers. Although several parameters can still be changed to improve the results, this work provided an analysis of some relevant aspects of the modelling and control of a quadruped robot in a simulation environment and represents the first step for further developments.

Acknowledgments. This work is financed by the ERDF - European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme within project POCI-01-0145-FEDER-006961, and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) as part of project UID/EEA/50014/2013.

References 1. Hutter, M., et al.: ANYmal - a highly mobile and dynamic quadrupedal robot. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, pp. 38–44 (2016). https://doi.org/10.1109/IROS.2016.7758092 2. Raibert, M.H.: Legged Robots that Balance. MIT press, Cambridge (1986) 3. Cao, Q., van Rijn, A.T., Poulakakis, I.: On the control of gait transitions in quadrupedal running. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, pp. 5136–5141 (2015). https://doi.org/10. 1109/IROS.2015.7354100 4. Gao, F., Qi, C., Sun, Q., Chen, X., Tian, X.: A quadruped robot with parallel mechanism legs. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, p. 2566 (2014). https://doi.org/10.1109/ICRA.2014.6907223 5. Rutishauser, S., Sprowitz, A., Righetti, L., Ijspeert, A.J.: Passive compliant quadruped robot using Central Pattern Generators for locomotion control. In: 2008 2nd IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics, Scottsdale, AZ, pp. 710–715 (2008). https://doi.org/10. 1109/BIOROB.2008.4762878 6. Igarashi, H., Machida, T., Harashima, F., Kakikura, M.: Free gait for quadruped robots with posture control. In: 9th IEEE International Workshop on Advanced Motion Control (2006). https://doi.org/10.1109/AMC.2006.1631698 7. MacDonald, W.S., Grupen, R.A.: Building walking gaits for irregular terrain from basis controllers. In: 1997 IEEE International Conference on Robotics and Automation, Proceedings, vol. 1. IEEE (1997)



8. Hutter, M., et al.: Quadrupedal locomotion using hierarchical operational space control. Int. J. Robot. Res. 33(8), 1047–1062 (2014) 9. Suzuki, H., Nishi, H., Aburadani, A., Inoue, S.: Animal gait generation for quadrupedal robot. In: Second International Conference on Innovative Computing, Informatio and Control (ICICIC 2007), Kumamoto, p. 20 (2007). https://doi. org/10.1109/ICICIC.2007.169 10. Silva, M.F., Machado, J.A.T.: Kinematic and dynamic performance analysis of artificial legged systems. Robotica 26(1), 19–39 (2008) 11. Silva, M., Machado, J., Lopes, A.: Modelling and simulation of artificial locomotion systems. Robotica 23(5), 595–606 (2005). https://doi.org/10.1017/S0263574704001195 12. Muybridge, E.: Animals in Motion. Courier Corporation (2012) 13. Silva, M., Barbosa, R., Castro, T.: Multi-legged walking robot modelling in MATLAB/Simmechanics and its simulation. In: 2013 8th EUROSIM Congress on Modelling and Simulation (EUROSIM). IEEE (2013) 14. Mathworks. Simulink Control Design Reference. Version R2016b. User guide (2016) 15. Mathworks. Simscape Multibody Reference. Version R2016b. User guide (2016)

Communication-Aware Robotics (I)

Cooperative Perimeter Surveillance Using Bluetooth Framework Under Communication Constraints

J.M. Aguilar(B), Pablo Ramon Soria, B.C. Arrue, and A. Ollero

University of Seville, 41092 Seville, Spain
[email protected], {prs,barrue,aollero}@us.es

Abstract. The presented work focuses on simulations and real experiments of perimeter surveillance under communication constraints, performed by teams of UAVs using a Bluetooth communication framework. When UAVs work in a collaborative manner, communication among them is essential to properly perform their task. Moreover, the energy consumption and weight of the devices carried by a UAV should be reduced to the minimum possible, particularly in micro-UAVs. A coordination variables strategy is implemented to perform the perimeter division.

Keywords: Bluetooth low energy · UAV · Communication · Coordination variables

1

Introduction

Nowadays, there is great development and investment in the field of unmanned aerial systems (UAS), commonly called drones. There are countless applications thanks to their adaptability to different tasks, for instance in agriculture, mapping, delivery services, border defense, search and rescue operations, etc. Due to this, they have been the subject of much research to improve their autonomy, obstacle avoidance, localization, cooperation and path planning. This article focuses on a group of robots that divide a perimeter in order to surveil it in a cooperative manner. Any surveillance system comprises many activities that can be summarized in three main ones:

– The detection of new events, intruders or information of interest. This task is strongly dependent on the motion planning strategy of the aerial robots, which is based on the available information and the estimations about the problem, the environment and the situation of the rest of the aerial robots.
– The communication between the elements of the system, so that each of them is aware of the whole system. This also involves the motion planning strategy, as it needs the system status.



– The assignment of the best robot to handle every detected event. This can be done in two manners: as a centralized system, in which case a control station decides which aerial robot should handle each task; or as a distributed system, in which case a dynamic target allocation among the aerial robots can be used to assign the targets in the most efficient way.

The work in this article is focused on testing the communication among the robots of a team when range constraints exist. Real experiments have been performed with UAVs equipped with Bluetooth devices in order to allow them to communicate with each other, and also with an algorithm based on coordination variables to partition the total perimeter assigned to the group. This system has been tested in simulations and at a real location in the Escuela Tecnica Superior de Ingenieria of Seville (Spain). The remainder of the article is structured as follows:

– Sect. 2 summarizes the state of the art.
– Sect. 3 introduces the Bluetooth devices chosen for the experiments and briefly describes the operating framework.
– Sect. 4 details the developed perimeter division protocol.
– Sect. 5 shows both the simulations and the real experiments, and their evaluation.
– Sect. 6 presents the conclusions of the article.

2

State of the Art

The cooperation among robots to carry out a wide range of tasks has been studied for all kinds of robots. In [14,17] the authors develop systems with multiple ground robots capable of covering an area and tracking targets in a cooperative manner: one robot behaves as the master and reaches its objectives through other robots that behave as slaves. In [19] the authors focus on aerial robots, which track targets in an optimal way using the information of their neighbors. In [15], the system is also composed of UAVs, and the algorithm assigns missions autonomously based on their level of importance. The way the robots communicate among themselves depends on the application pursued. For instance, Zigbee is a standard protocol widely used for DIY projects. In [3,18] the authors used this protocol for communicating nodes and robots over relatively large distances, but its main disadvantage is its greater energy requirement in comparison with other systems. The authors in [6,13] used Bluetooth technology instead to connect the robots, due to its lower energy consumption and weight. In [5,11,16] the authors worked on the localization of the robots given the communication framework, although, being focused on ground robots, the task was simpler. In this article the communication framework used is the one presented in [8], based on standard Bluetooth Low Energy 4.1 connections. It works with the Nordic nRF51 chipset in combination with the S130 SoftDevice [12].



Perimeter surveillance has been a field of study in recent years. One way to approach this problem is through UAS and ground stations along the perimeter to manage the communication [2], or by using a method based on linear programming and Markov chains, as posed in [4]. Other methods, more interesting when there are communication constraints, are distributed and decentralized: the authors in [9] proposed a robust solution based on behavioural control of the multi-robot system. Another possibility are the coordination variables methods, which have been proven to be fast-converging solutions. In [7], an algorithm is developed to coordinate a team of small homogeneous UAS to cooperatively perform a perimeter partitioning strategy using coordination variables. This algorithm, slightly modified to consider heterogeneous robots as posed in [?], is the one used in this paper.

3

Bluetooth Devices and Framework

The use of Bluetooth Low Energy (BLE) has been steadily growing since its initial launch in the wireless communications market, aimed at applications such as the Internet of Things (IoT). This involves monitoring wireless sensor networks and controlling applications that are in continuous communication of state variables. The UAV coordination problem is very similar to this setting. In this work, an open-source implementation of a mesh network offered by M-Way Solutions [10] is taken as the development framework. This approach is placed in the upper application and host layers of the Nordic nRF51 chipset in combination with the S130 SoftDevice, so that it manages the roles of the different devices in the communications, including self-healing capabilities and operation with battery-powered devices. This device is advantageously small and has low power consumption, which suits micro-UAV teams. The S130 SoftDevice from Nordic enables the use of three central device connections and one peripheral. To manage the communication, the device establishes connections with its neighbors and also assigns group identifiers. All the nodes that share the same group identifier will be considered part of a swarm, so they can exchange information between them. Due to this, the structure conforms to a scatternet topology (Fig. 1) which interconnects all the robots. Every time a connection is made, the SDK provides an ID and the RSSI for that connection; the RSSI corresponds to a power indicator of the received signal in a radio communication and can be used in different wireless protocols. Regarding the specifications of the BLE technology, one is especially important and remarkable: its data transfer limit. BLE is conceived for the transmission of small amounts of data at 1 Mbps, and over short distances of up to 10 m. Due to the management of limited data packets in the connections, the BLE technology has another fundamental characteristic: a transmission time under 3 ms, allowing it to work in real-time applications.



Fig. 1. Example of the scatternet topology. Blue dots are the nodes, and the blue circles represent the range of the communications

4

Perimeter Division Protocol Based on Coordination Variables

The algorithm implemented to perform the perimeter surveillance is the one exposed in [1], based on coordination variables. It is a distributed and decentralized algorithm. Due to the short communication range, most of the time a robot of the patrol will be isolated from the other robots, so this kind of model allows the multi-robot system to converge to the final path partitioning through local decisions and asynchronous information exchanges, without any kind of hierarchical levels among the robots. Let B be the entire perimeter, defined as:

B := \{b(s) \in \mathbb{R}^k : s \in [0, L]\}   (1)

where b is a curve that covers the whole path B, s is defined as the distance to the initial path position b(0) along the curve b, and L is the length of the path B, b(L) being the final position. Each robot Q_i decides autonomously its own segment B_i := [b(s_i^{inf}), b(s_i^{sup})] so that the whole set of robots patrols the entire perimeter B. Thus, each Q_i uses a back and forth motion between its own first segment point b(s_i^{inf}) and its own last segment point b(s_i^{sup}). Hence, any robot Q_i knows its maximum motion speed v_i^{max}, its current direction of movement (right or left in its own segment) d_i, and its current position s_i on the curve b. The implemented algorithm uses a set of variables called coordination variables, which represent the minimum information necessary for each robot Q_i to calculate its own segment L_i. These variables are:

length_i = L, \qquad speed_i^{sum} = \sum_{j=1}^{n} v_j^{max}, \qquad \forall i = 1, 2, ..., n   (2)



Here length_i is the length of the total perimeter, and speed_i^{sum} is the sum of all the robots' maximum speeds. In addition to these, each robot Q_i has a set of intermediate variables which are required to calculate the coordination variables:

– length_i^{left} is the length of perimeter that Q_i currently has on its left.
– length_i^{right} is the length of perimeter that Q_i currently has on its right.
– speed_i^{left} is the sum of the speeds of all the robots which are to the left of Q_i.
– speed_i^{right} is the sum of the speeds of all the robots which are to the right of Q_i.

The sequence of the algorithm is as follows: the robot Q_i moves at its maximum speed v_i^{max} along its segment B_i. If it reaches the end of B_i, Q_i does not stop its movement but continues until either it communicates with another Q_j, in which case both exchange their variable information and update their segments, or it arrives at the end of the perimeter. If robot Q_j meets robot Q_i on Q_i's right side, Q_j sends it all the information about its right side, namely its sum of speeds speed_j^{right} and the length on its right side length_j^{right}. Q_i does the same, but with the left-side variables. Then, both Q_i and Q_j use this new information to update their coordination variables, length_i and speed_i^{sum}:

speed_i^{sum} = speed_i^{left} + speed_i^{right} + v_i^{max}, \qquad length_i = length_i^{left} + length_i^{right}   (3)

lengthi speedsum i length i + vimax speedsum i

t = speedlef sinf i i

ssup = sinf i i

(4)

If one robot reachs the initial si = 0 or ending si = L position in the perimeter, it also updates its variables according to its direction di and turns back. The algorithm minimizes the information exchanges because robots only has to communicate with their neighbors.

Fig. 2. A team of 4 UAVs with different speeds implementing a perimeter division strategy

776

5

J.M. Aguilar et al.

Experimental Testing

A set of simulation and experimental results are provide to demonstrate the effectiveness of the communication framework works and validate Bluetooth devices. Results show that UAVs divide the perimeter proportionally to their own capabilities. 5.1

Simulation Results

The proposed distributed algorithm is based on the coordination variables. UAVs’ model and communication framework simulation have been developed in C++. The initial positions has been defined randomly, but the speeds have been chosen proportionally between them in order to obtain a visual result understable at one glance. The first simulation consist on 4 UAVs patrolling single line of 160 m, similar to Fig. 2, being their speeds 1 m/s, 2 m/s, 3 m/s, and 4 m/s. Figure 3(a) shows the result. As it can be observed, each UAV patrol a length proportional to their speed. For instance, the slower UAV patrol a length which is a quarter of the faster UAV, accordingly to the relation between their speeds. It is highlighted that the final solution is not a unique point but a cycle. This will be discussed in the next simulation.

(a) Patrolling a line

(b) Patrolling a square

Fig. 3. Graphic representation of the UAVs movements in the experiments

The second simulation is the same 4 UAVs patrolling not a single line, but the square made of the previous line of 160 m, namely, a square of side 40 m. Figs. 3(b) and 4 shows the result of this simulation. Though it seems that the UAVs have converged in no solution, the final solution of this division problem is not a single point but a cycle. Indeed, taking a careful glimpse, for instance, the left image of Fig. 4, we can easily see how the system converges quickly to a cycle of 10 points which are clearly signaled in Fig. 5. Through different simulation experiments, modifying the number of robots, their speeds, the perimeter to surveil and other sets of variables, it is enough clear that all them affects the number of points of the final solution of

Cooperative Perimeter Surveillance Using Bluetooth Framework

777

Fig. 4. Simulation results of 4 UAVs patrolling a square of side 40 m. Images only show the position of 2 of the 4 UAVs for clarity. Left image shows position X in meters against time; right image shows position Y in meters against time

Fig. 5. Signalization of the cycle of ten points

the system, but which is the exact relation is not utterly determined, because of being out of the scope of this article. The algorithm proposes is robust enough to variations in the system. The last simulation performed present the same 4 UAVs patrolling the square of side 40 m, but two of them get lost at the middle of the mission. As it can be observed in the Figs. 6 and 7, the other two UAVs patrolling the whole perimeter converging in a new cycle solution.

Fig. 6. Simulation results of 4 UAVs patrolling a square of side 40 m, getting lost two of them

778

J.M. Aguilar et al.

Fig. 7. Simulation results of 4 UAVs patrolling a square of side 40 m. Images only show the position of 3 of the 4 UAVs for clarity. Left image shows position X in meters against time; right image shows position Y in meters against time

5.2

Real Experiment

The location selected to carry out the real experiment was an outdoor zone of the laboratories of the University of Seville Escuela Tecnica Superior de Ingenieria that is shown in Fig. 8(a). Only two UAVs were used in the experiment, in order to demonstrate the communication framework. Thanks to the scalability of the perimeter division algorithm, which was proved in the simulations, two UAVs are enough: more robots would increase the number of point of the final cycle solution, and would make the graphics more difficult to read.

(a) Zone of the experiments

(b) Perimeter division

Fig. 8. Satellite images of the experiments

Both UAVs were equipped with the required sensor to know their own GPS position, and the aforenamed Nordic BLE device. Also they have an Intel NUC computer to compute the perimeter division algorithm and the communication instructions load in them. The experiment takes approximately 3 min to complete. Figure 8(b) shows the result of the experiment over the satellite image of the experiment zone. The UAVs patrol the perimeter till they meet themselves or

Cooperative Perimeter Surveillance Using Bluetooth Framework

779

Fig. 9. The 3D representation is turned in order to ease the visualization of the connections (continuous line) and disconnections (dashed line) of the UAVs

reach the end of the perimeter, in both case they updates their coordination variables, recalculate their routes and turn back to continue their patrol. Figure 9 is included to show the connection and disconnection between both UAVs. It is a remarkable fact that the bluetooth devices are able to maintain the connection further away than the 10 m specified, even though to establish that connection they do need to be in the 10 m range. Finally, the Fig. 10 is the same 3D representation but projected to over the latitude-longitude plane: in this image is easier to see the perimeter, but slightly more difficult to see when they connect or disconnect.

Fig. 10. 3D representation of the connections (continuous line) and disconnections (dashed line) of the UAVs projected over the longitude-latitude plane

6

Conclusions

Real outdoor experiments have been performed with satisfactory results. Both communication framework and perimeter division algorithm have been demonstrated to be robust and efficient. The Nordic bluetooth devices are proved to

780

J.M. Aguilar et al.

work properly, being an excellent option to use for UAVs communication thanks to their low consumption energy and weight. The devices connect within the 10 m range, and even maintain the communication till the 20 m. As future work it would be interesting evaluate the existent relation between the number of UAVs, their capabilities, the length and shape of the perimeter, etc., and the number of points of the final solution cycle. Likewise, another future work could be an experiment with more UAVs not only performing a perimeter division but also allocating tasks, as tracking different moving targets. Also measuring the energy consumption of the BLE devices along the experiment would be a good subject of study. Acknowledgements. This work was partially supported by the European Commission H2020-ICT Programme under the project MULTIDRONE (731667). This work has been developed in the framework of the project AEROARMS (SI1439/2015) EU-funded project and the project AEROMAIN (DPI2014-59383-C2-1-R).

References 1. Acevedo, J.J., Arrue, B.C., Maza, I., Ollero, A.: Cooperative perimeter surveillance with a team of mobile robots under communication constraints. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5067– 5072. IEEE (2013) 2. Beard, R.W., McLain, T.W., Nelson, D.B., Kingston, D., Johanson, D.: Decentralized cooperative aerial surveillance using fixed-wing miniature UAVs. Proc. IEEE 94(7), 1306–1324 (2006) 3. Benavidez, P., Nagothu, K., Ray, A.K., Shaneyfelt, T., Kota, S., Behera, L., Jamshidi, M.: Multi-domain robotic swarm communication system. In: IEEE International Conference on System of Systems Engineering, SoSE 2008, pp. 1–6. IEEE (2008) 4. Darbha, S., Krishnamoorthy, K., Pachter, M., Chandler, P.: State aggregation based linear programming approach to approximate dynamic programming. In: 2010 49th IEEE Conference on Decision and Control (CDC), pp. 935–941. IEEE (2010) 5. Esteves, Y.R., Concejero, J.B., Jim´enez, A.V.: Indoor localization of the points of interest using ro-slam. In: 2015 12th International Joint Conference on e-Business and Telecommunications (ICETE), vol. 1, pp. 35–42. IEEE (2015) 6. Herbrechtsmeier, S., Witkowski, U., R¨ uckert, U.: Bebot: a modular mobile miniature robot platform supporting hardware reconfiguration and multi-standard communication. In: FIRA RoboWorld Congress, pp. 346–356. Springer (2009) 7. Kingston, D., Beard, R.W., Holt, R.S.: Decentralized perimeter surveillance using a team of UAVs. IEEE Trans. Rob. 24(6), 1394–1404 (2008) 8. M-Way solutions (2015), https://github.com/mwaylabs/fruitymesh/wiki 9. Marino, A., Parker, L., Antonelli, G., Caccavale, F.: Behavioral control for multirobot perimeter patrol: a finite state automata approach. In: IEEE International Conference on Robotics and Automation, ICRA 2009, pp. 831–836. IEEE (2009) 10. Heil, M.: Bluerange meshing now open source. @ ONLINE 2015 (2015) 11. Menegatti, E., Zanella, A., Zilli, S., Zorzi, F., Pagello, E.: Range-only slam with a mobile robot and a wireless sensor networks. In: IEEE International Conference on Robotics and Automation, ICRA 2009, pp. 8–14. IEEE (2009)

Cooperative Perimeter Surveillance Using Bluetooth Framework

781

12. Nordic Semiconductor (2013), https://www.nordicsemi.com/eng/Products/ Bluetooth-low-energy/nRF51822 13. P´ asztor, A., Kov´ acs, T., Istenes, Z.: Compass and odometry based navigation of a mobile robot swarm equipped by bluetooth communication. In: 2010 International Joint Conference on Computational Cybernetics and Technical Informatics (ICCCCONTI), pp. 565–570. IEEE (2010) 14. Patil, D.A., Upadhye, M.Y., Kazi, F., Singh, N.: Multi robot communication and target tracking system with controller design and implementation of swarm robot using arduino. In: 2015 International Conference on Industrial Instrumentation and Control (ICIC), pp. 412–416. IEEE (2015) 15. Sampedro, C., Bavle, H., Sanchez-Lopez, J.L., Fern´ andez, R.A.S., Rodr´ıguezRamos, A., Molina, M., Campoy, P.: A flexible and dynamic mission planning architecture for UAV swarm coordination. In: 2016 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 355–363. IEEE (2016) 16. Sun, D., Kleiner, A., Wendt, T.M., et al.: Multi-robot range-only slam by active sensor nodes for urban search and rescue. In: RoboCup, pp. 318–330. Springer (2008) 17. Yang, B., Ding, Y., Hao, K.: Area coverage searching for swarm robots using dynamic voronoi-based method. In: 2015 34th Chinese Control Conference (CCC), pp. 6090–6094. IEEE (2015) 18. Zhang, B., Gao, S.: The study of zigbee technology’s application in swarm robotics system. In: 2011 2nd International Conference on Artificial Intelligence, Management Science and Electronic Commerce (AIMSEC), pp. 1763–1766. IEEE (2011) 19. Zhou, Y., Dong, X., Zhong, Y.: Time-varying formation tracking for UAV swarm systems with switching interaction topologies. In: 2016 35th Chinese Control Conference (CCC), pp. 7658–7665. IEEE (2016)

Development of an Adaptable Communication Layer with QoS Capabilities for a Multi-Robot System Hannes Harms(B) , Julian Schmiemann, Jan Schattenberg, and Ludger Frerichs Institute of Mobile Machines and Commercial Vehicles, TU Braunschweig, 38106 Braunschweig, Germany {hannes.harms,j.schmiemann,j.schattenberg,ludger.frerichs}@tu-bs.de

Abstract. In this paper we present an approach to connect a multirobot system of unmanned ground and aerial vehicles inside a mobile ad-hoc network to exchange sensor data requiring a high bandwidth. The introduced communication layer extends the messaging system of an underlying middleware (ROS) to work with non-ideal wireless links. Every connection can be configured with a user defined data rate, protocol and priority to control the overall traffic load in the network. If required, messages can be bufferd during communication outages or due to low available bandwidth. Moreover, the message transport system contains mechanisms for lossless and lossy data compression as well as an user interface to quickly react to different application conditions.

1

Introduction

One key challenge in developing a multi-robot system, is to provide an appropriate communication system firstly for robot interaction and secondly for linking human machine interfaces. The characteristics of the communication system highly depend on the scope and area of application. However, many middleware frameworks used nowadays have a central service for establishing connections between software components that has to be reachable. They do not provide mechanisms for data compressions on-the-fly and the QoS (Quality of Service) parameters are only accessible for developers whereby human operators have no possibility to adapt communication behaviour to their needs. Within the joint development project ANKommEn a semi-automatic machine cluster consisting of three unmanned aerial vehicles (UAV), two unmanned ground vehicles (UGV) and a human operator with a control interface is developed to collect, process and visualize different sensor data on-the-fly. Every ground vehicle is equipped with a 3D-LiDAR sensor and a camera. In contrast each aerial vehicles has different sensor setup e.g. a high resolution camera for aerial images, a 3D-LiDAR sensor for mapping and a thermal camera for firefighting support. The sensor data is used to create a data base to decide about adequate rescue actions or to coordinate human rescue forces. The therefore used network is based upon a VHT c Springer International Publishing AG 2018  A. Ollero et al. (eds.), ROBOT 2017: Third Iberian Robotics Conference, Advances in Intelligent Systems and Computing 694, https://doi.org/10.1007/978-3-319-70836-2_64

Development of an Adaptable Communication Layer with QoS Capabilities

783

(Very High Throughput) 802.11ac mesh network in ad-hoc mode. Mobile ad-hoc networks (MANETs) can be used without any fixed networks infrastructure, which makes them suitable for disaster scenarios. Moreover, MANETS provide a very flexible mechanism to form a network between participants that meet in a certain area. The wireless communication has to deal with a high network load and should be as robust as possible e.g. to deal with lossy links and link failures. Therefore a communication architecture is proposed, which is capable of controlling the delivery rate for each data channel, adjusting QoS parameters for reliability and priority, compressing data on the fly, and changing the underlying transport protocol. When using the multi-robot system for a cooperative mapping task, data links can be configured for high reliability, because mapping algorithm are mainly not fault tolerate to message loss. On the other hand when using the multi-robot system for a search and rescue mission the operator might be more interested in an live sensor feedback and can therefore adjust the QoS parameters with the user interface.

2

Related Work

During the last years several communication frameworks for robotics have been developed. They can be distinguished in connection oriented and connection-less communication methods. The Player/Stage framework [1] is a connection oriented middleware based on a client server architecture, which has the drawback that data loss will occur if a connection fails. Connection-less frameworks often implement a publisher/subscriber mechanism to pass messages between components. As described by Eugster et al. [2] a publisher subscriber mechanism can provide a space, time and synchronization decoupling of the sender and receiver, which makes the approach more fault tolerant for a multi-robot system. One of the frameworks with the most valuable impact on the community that implement a publisher/subscriber concept for message transport is ROS (Robert Operating System) [3]. ROS allows the usage of TCP or UDP for the message transport and comes with a comprehensive sensor and visualisation support. One of the biggest disadvantages of ROS is the centralized approach of the roscore for managing subscriber and publisher connections. ROS was never been designed to run on a distributed network including non-ideal wireless links. LCM (Lightweight communications and marshalling) [4] also implements the publisher/subscriber concept, but in contrast to ROS a decentralized approach is used and LCM is capable of soft real-time message transmission. The decentralized approach of LCM is achieved by sending all messages as UDP multicast. However UDP multicast messages are unsuitable for a mobile ad-hoc network when streaming large files e.g. point clouds from mapping tasks. To improve the drawbacks of ROS currently ROS2 is under development, that will relay on different implementations of the Data Distribution Service (DDS)1 , like RTI Connext DDS2 or 1 2

http://www.omg.org/spec/DDS/1.4/. http://www.rti.com/products/dds/index.html.

784

H. Harms et al.

OpenSplice3 . ROS 2 enables the setting of different QoS profiles for topics e.g. to achieve a reliable or a best-effort connection, to set delivery deadlines and to configure different levels for data durability [13]. 2.1

Multi-Robot Frameworks

In order to avoid the drawbacks of current middleware frameworks for multi-robot systems, several approaches have been developed. Hartanto et al. [7] use a cloud-based publisher/subscriber system to enable reliable communication for cooperative mapping tasks, using the Data Distribution Service (DDS) for inter-robot communication. As described in [7], the inter-process communication is based on ROS and a ROS/DDS proxy is used for sharing data between robots. Another concept for a ROS multi-master system, called multimaster fkie, was developed by Tiderko et al. [5]. The multimaster fkie package provides methods to automatically discover independent roscores via UDP multicast and synchronise their states by exchanging the publisher and subscriber lists. Schwarz et al. [6] developed a set of ROS packages called nimbro network to exchange topics and services via an unreliable WiFi link between two independent roscores. The topic transport of nimbro network can be subdivided into subscribing to the topic, serializing the messages, sending the data to the receiver, and publishing the message. Furthermore, nimbro network is able to compress messages using BZip2, to switch the underlying protocol (TCP/UDP), and it provides mechanisms for FEC (Forward Error Correction) when using UDP. Another approach for a multi-robot ROS system, called Pound, was presented by Tardioli et al. [11]. Pound uses a similar approach to connect multiple ROS cores, but the communication is realized over a single UDP flow. Messages are inserted in the UDP send queue depending on the priority of the topic. Therefore Pound is able to reduce the jitter and meet bandwidth requirements, which makes the solution suitable for tight control loops. Furthermore, Tardioli et al. [14] presented a multi-robot framework based on the routing protocol RT-WMP [12] to implement network-wide fixed message priorities.

3 Communication Architecture

The presented communication layer is designed to work on top of a middleware system (in our case ROS) that handles the inter-process communication between applications in a subnetwork that can communicate reliably, e.g. several clients connected through Ethernet. Each swarm robot runs an independent roscore on a communication board. ROS nodes executed on local wired machines hook up with the roscore on the communication board. For transmitting and receiving ROS topics of wirelessly connected mesh neighbors, we have extended the publisher and subscriber mechanism of ROS. The regular topic transport mechanism is therefore split up and extended with a set of nodes for sending and receiving topics via an unreliable WiFi connection, as shown in Fig. 1.

³ http://www.prismtech.com/dds-community


In a similar approach, Schwarz et al. [6] built up the communication for the NimbRo rescue robot. In contrast to Schwarz et al. [6], our approach introduces an additional set of state nodes. The Sender State and Receiver State nodes are used to distribute the locally available topics of a single roscore to all other swarm neighbours, similar to the ROS master discovery developed by Tiderko et al. [5]. To exchange messages between the mesh robots, the Sender State node requests the current state of the local roscore and publishes a message containing a list of all available topics and services. The Topic Sender node subscribes to the sender state messages and transmits them periodically via UDP multicast to all other network participants. To avoid an endless loop of topic transmissions, the Sender State node also takes into account the state messages already received from other swarm clients. By doing this, the sender state message only contains the original topics and services of the local roscore, so retransmissions of external topics are avoided. Based on the status message, the Topic Sender node is capable of establishing a unique connection through TCP or UDP for each data channel that is requested. As outlined in Fig. 2, the topic transport process can be subdivided into several subroutines. First of all, the message is serialized, and if required it can also be compressed using zstd [8], a real-time compression algorithm. Following on from that, the message is sent out via the WiFi link and received by the Topic Receiver node. Before publishing the message on the receiver side, the original message has to be recovered through decompression and deserialization of the incoming byte stream.
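As an illustration of this send/receive path, the following sketch shows how a serialized message could be optionally compressed with zstd and shipped over a UDP socket with a small header. It is only a minimal sketch: the header layout, the helper names and the use of the "zstandard" Python binding are assumptions for illustration, not the authors' implementation.

import socket
import struct
import zstandard  # Python binding for zstd (assumption: pip install zstandard)

COMPRESS_FLAG = 1

def send_topic_message(sock, addr, payload, compress=True, level=6):
    """Send one serialized message over UDP, optionally zstd-compressed.
    Fragmentation of very large messages is ignored in this sketch."""
    flags = 0
    if compress:
        payload = zstandard.ZstdCompressor(level=level).compress(payload)
        flags |= COMPRESS_FLAG
    # 5-byte header: 1 byte flags + 4 bytes payload length (network byte order).
    sock.sendto(struct.pack("!BI", flags, len(payload)) + payload, addr)

def receive_topic_message(sock):
    """Receive one datagram and recover the original serialized bytes."""
    datagram, _ = sock.recvfrom(65535)
    flags, length = struct.unpack("!BI", datagram[:5])
    payload = datagram[5:5 + length]
    if flags & COMPRESS_FLAG:
        payload = zstandard.ZstdDecompressor().decompress(payload)
    return payload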

[Figure: extended ROS topic transport. A local node publishes /my topic; the Sender State and Receiver State nodes exchange states via UDP multicast; the Topic Sender node subscribes to the topic and forwards it over the WiFi link to the Topic Receiver node, which republishes it (as /IP/my topic) on the remote roscore.]

Fig. 1. ROS topic transport extension


[Figure: software layout of the topic transport. On the sender side, the Sender State node configures the Topic Sender, which subscribes topics, serializes, optionally compresses and sends them via TCP or UDP; on the receiver side, the Receiver State node configures the Topic Receiver, which receives, decompresses, deserializes and publishes the topics.]

Fig. 2. Software layout for the topic transport

Every connection between the sender and receiver node is handled by a unique socket and can be configured with a user-defined data rate to control the overall traffic load in the network. Furthermore, each connection can be adjusted with an individual buffer size on the sender side to achieve QoS in terms of reliability. In case of a connection loss, data can be buffered for a certain period of time. If the connection recovers, the buffered data is sent out at an appropriate rate to make a trade-off between the configured transmission frequency and the available network capacity. Image topics like aerial or thermal images can either be transferred as raw data or, if the compression mode is enabled, a JPEG image compression is used by subscribing to the ROS built-in compressed image topic. To achieve QoS in terms of priority, every socket on the sender node can be adjusted within three priority levels, matching the three bands of the pfifo_fast [9] queuing discipline of the Linux kernel. Moreover, the Topic Sender node also implements an additional threading model to prefer topics with higher priority. Therefore messages are divided into chunks of 0.1 MB. Whenever a message with higher priority is ready for transmission, topics with lower priority pause sending after the current chunk has finished.
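The following sketch illustrates the chunked, priority-aware sending described above. The 0.1 MB chunk size follows the text; the queue structure, the priority constants and all names are illustrative assumptions, not the authors' implementation.

import collections
import socket

CHUNK_SIZE = 100 * 1024   # 0.1 MB chunks, as in the text
HIGH, MID, LOW = 0, 1, 2  # three priority levels

class PrioritySender:
    def __init__(self, sock, addr):
        self.sock, self.addr = sock, addr
        self.queues = {HIGH: collections.deque(),
                       MID: collections.deque(),
                       LOW: collections.deque()}

    def enqueue(self, message, priority):
        # Split the message so a higher-priority message only has to wait for
        # the chunk currently in flight, never for a whole large message.
        for i in range(0, len(message), CHUNK_SIZE):
            self.queues[priority].append(message[i:i + CHUNK_SIZE])

    def send_next_chunk(self):
        # Always serve the highest non-empty priority band first.
        for prio in (HIGH, MID, LOW):
            if self.queues[prio]:
                self.sock.sendto(self.queues[prio].popleft(), self.addr)
                return True
        return False

# On Linux, a socket can additionally be mapped to a pfifo_fast band via
# SO_PRIORITY, e.g.: sock.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, 6)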

3.1 Data Compression

One of the main problems in our use cases is that the utilized sensors can easily overload the network capacity. To prevent this, the Topic Sender and Topic Receiver nodes contain the already mentioned mechanism for compressing and decompressing messages via zstd [8]. In contrast to Pound or nimbro network, which use BZip2, our approach utilizes zstd (compression level 6) for message compression. Figure 3 depicts the compression time and ratio for three typical data types exchanged in robotic applications: a status message (size 3 KB), a smaller point cloud (size 0.9 MB) and a larger point cloud (size 3.3 MB). In all cases zstd provides a significantly better compression time while the compression ratio is on a similar level compared to BZip2. Therefore zstd seems more suitable, especially for real-time applications.

[Figure: (a) compression time (ms) and (b) compression ratio for zstd and BZip2 on the status message (3 KB), point cloud a (0.9 MB) and point cloud b (3.3 MB).]

Fig. 3. Comparing compression time and ratio for zstd and Bzip2
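A small, self-contained benchmark in the spirit of this comparison can be written in a few lines. The synthetic payload below is of course not the point-cloud data used in the paper, so the resulting numbers will differ from Fig. 3; the "zstandard" binding and all names are illustrative assumptions.

import bz2
import os
import time
import zstandard

payload = os.urandom(256 * 1024) + b"\x00" * (256 * 1024)  # synthetic ~0.5 MB message

def bench(name, compress_fn):
    t0 = time.perf_counter()
    out = compress_fn(payload)
    dt_ms = (time.perf_counter() - t0) * 1000.0
    print(f"{name}: {dt_ms:.1f} ms, ratio {len(payload) / len(out):.2f}")

bench("zstd-6", zstandard.ZstdCompressor(level=6).compress)
bench("bzip2 ", lambda data: bz2.compress(data, 9))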

3.2 Video Streaming

To save an additional amount of bandwidth, image topics can also be transferred as an H.264-encoded video stream. For this purpose the already introduced Sender State node transmits a configuration message to a Video Server node that subscribes to the image topic. The Video Server node is able to inject every image into a GStreamer pipeline by using a GStreamer appsrc element. The H.264 video streams can be transferred with the RTP (Real-time Transport Protocol) from the UAV/UGV to the ground station. As shown in Fig. 4, the Video Client node receives the RTP video stream data and converts the video frames back to ROS image messages.
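A minimal sketch of such a Video Server-style pipeline is shown below: raw image bytes are pushed into a GStreamer appsrc, H.264-encoded and sent as RTP over UDP. The caps, encoder settings, host and port are illustrative assumptions, not the parameters used by the authors.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "appsrc name=src is-live=true do-timestamp=true format=time "
    "caps=video/x-raw,format=RGB,width=640,height=480,framerate=15/1 "
    "! videoconvert ! x264enc tune=zerolatency speed-preset=ultrafast bitrate=800 "
    "! rtph264pay config-interval=1 pt=96 "
    "! udpsink host=192.168.1.10 port=5600")
appsrc = pipeline.get_by_name("src")
pipeline.set_state(Gst.State.PLAYING)

def push_frame(raw_rgb_bytes):
    """Inject one raw RGB frame (e.g. taken from a ROS image message) into the stream."""
    buf = Gst.Buffer.new_wrapped(raw_rgb_bytes)
    appsrc.emit("push-buffer", buf)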

3.3 User Interface

The message transfer between the swarm participants can either be established automatically through predefined configuration files for the Sender and Receiver State nodes or manipulated via a user interface. The interface is implemented as a plugin for rviz and lets the operator switch between the UDP, TCP and RTP protocols for each topic. Apart from this, the transmission rate, the buffer size, the priority and the compression mode can be adjusted. For each data channel the actual bitrate is calculated and displayed to give the user a better understanding of the current network load. Figure 5 shows the developed rviz interface during an experiment with two connected vehicles.


[Figure: software layout of the video streaming. The Sender State node configures the Video Server, which subscribes video topics and injects frames via a GStreamer appsrc; the frames are sent via RTP to the Video Client, which receives them via a GStreamer appsink and publishes the video topics.]

Fig. 4. Software layout for video streaming

Enabled topics for inter-robot communication are plotted in green. The columns contain the parameters in the following order: transmission protocol (TCP/UDP), delivery rate (Hz), buffer size (number of messages), priority (low, mid, high), and compression (on/off).

Fig. 5. User interface for controlling network traffic
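For illustration, a predefined per-topic configuration of the kind mentioned above could look like the following Python dictionary; the topic names, field names and values are hypothetical and do not reflect the authors' actual file format.

# Hypothetical per-topic configuration: protocol, delivery rate, buffer size,
# priority and compression, mirroring the columns of the rviz plugin.
topic_config = {
    "/velodyne_points":   {"protocol": "TCP", "rate_hz": 2,
                           "buffer_msgs": 50, "priority": "high", "compress": True},
    "/camera/image_raw":  {"protocol": "UDP", "rate_hz": 10,
                           "buffer_msgs": 0,  "priority": "mid",  "compress": True},
    "/vehicle/status":    {"protocol": "UDP", "rate_hz": 1,
                           "buffer_msgs": 10, "priority": "low",  "compress": False},
}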

4 Case Studies

To evaluate the performance of the developed communication layer, several case studies were carried out. For all experiments, two ground vehicles (Fig. 6b) and a stationary control station for a human operator were used. The hardware for the communication consists of an embedded ARM board (Gateworks GW 5520) that contains the WiFi module (Compex WLE900VX), configured in ad-hoc mode.


For the routing algorithm we considered BATMAN [10] (Better Approach To Mobile Adhoc Networking) and open802.11s. Both protocols have shown similar throughput results in our use cases, but open802.11s prevents frequent route changes due to the air-time link metric of the protocol.

(a) Travelled path of the ground vehicle. The control station is located at point A, inside the building. (b) The unmanned ground vehicle equipped with a stereo camera, a 3D-LiDAR sensor, WiFi and GPS antennas.

Fig. 6. Testing area and ground vehicle

4.1 Priority-Based QoS

In order to test the priority-based QoS, two test data streams were transmitted from a single ground vehicle to the control station, sharing the available bandwidth. As shown in Fig. 7, both streams have a setpoint rate of 10 Hz. Due to the bandwidth constraints of the WiFi link, the setpoint rate is not achieved. If both streams have the same priority, they share the available bandwidth, resulting in a delivery rate that does not reach the setpoint of 10 Hz. If one stream is prioritized over the other, the setpoint rate can be reached and the prioritized stream consumes all of the available bandwidth.

4.2 Reliability-Based QoS

To test the reliability-based QoS implementation, the pattern A - B - A shown in Fig. 6a was driven by a ground vehicle. During the experiment a data stream was transmitted to the control station with a setpoint rate of 8 Hz. With increasing distance between the vehicle and the control station, the message delivery decreases due to the limited available bandwidth of the link. When reaching point B, the communication between the vehicle and the ground station suffers an outage. During the communication outage, messages are pushed into a waiting queue for later delivery. As depicted in Fig. 8, the message delivery rate for the data stream increases again when the distance towards the control station is reduced.

[Figure: delivery rate (Hz) over time (s) for two streams, each with a setpoint rate of 10 Hz (Stream 1: 8 MB/s, Stream 2: 11 MB/s); top: both streams without priority, bottom: Stream 1 with high priority and Stream 2 with low priority.]

Fig. 7. Streaming data with different priority levels

[Figure: delivery rate (Hz) over time (s) while driving the pattern A - B - A; Stream 1: 2.5 MB/s with QoS buffering, setpoint rate 8 Hz; Stream 2: 2.5 MB/s without QoS, setpoint rate 8 Hz.]

Fig. 8. Streaming data with reliability-based QoS during a communication outage

With the described reliability option, messages that cannot be delivered during the communication outage are buffered, and all outstanding messages are transmitted when the link recovers. During the transfer of the outstanding messages the delivery rate can exceed the configured setpoint rate and is only limited by the network throughput.

[Figure: delivery rate (Hz) over time (s) while driving the pattern A - B - C - D - C - B - A; top: TCP point cloud stream, setpoint rate 10 Hz (2.8 MB/s); bottom: UDP video stream, setpoint rate 10 Hz (0.7 MB/s).]

Fig. 9. Streaming data during a multi-hop case study

4.3 Multi-Hop Data Streaming

To evaluate the multi-hop capability of the mobile ad-hoc network, the pattern shown in Fig. 6a is used again. In contrast to the experiment described before, a relay station (a second ground vehicle) is now placed at point B. The other ground vehicle drives from point A towards the turning point D and back again, passing the relay station at point B during the ride. Figure 9 shows the message delivery rate of two data streams that are transferred during the experiment from the ground vehicle towards the control station. One of the data streams is sent via TCP (a 3D-LiDAR point cloud message) with reliability-based QoS enabled, and the other one is sent via UDP (a live video stream). The TCP point cloud stream often misses the setpoint rate, especially during multi-hop communication between points B and D. When the vehicle comes close to the control station, outstanding messages can be delivered. The UDP data stream mostly meets the configured setpoint rate, but it also has some dropouts with increasing link distance and during route changes when passing the relay station. Further multi-hop experiments for a mapping application within our communication setup are described in Harms et al. [15]. The tests have shown that smaller network traffic, like control messages and live video (0.3 MB/s) for teleoperation, could mostly be delivered in multi-hop stages, thanks to the implemented mechanisms for priority handling. Larger map updates (1.5 MB–100 MB per map update), in contrast, were only delivered during single-hop stages.

5 Conclusions

It is shown that the proposed communication architecture is capable of extending the underlying middleware to exchange messages over an unreliable network.


In addition, several methods were presented to reduce network load and meet bandwidth requirements. The presented user interface is another option to control data streams and manage different situations. In the future we would like to test whether sensor streams can be transferred via multiple hops from aerial vehicles to the ground station to exploit the full potential of the mobile ad-hoc network. Further tests for detecting link failures and measuring reconnection times are necessary, especially for TCP connections. It is also conceivable to spend more work on adjusting transmission rates automatically depending on the available network bandwidth, in order to avoid preconfigured rates. In the long term, another option would be to use some of the presented features, like data compression, delivery rate control and topic prioritization, in combination with a ROS 2 interface, to get access to the comprehensive QoS options of that middleware.

Acknowledgements. All presented work was done within the joint development project ANKommEn (German acronym for: Automated Navigation and Communication for Exploration), funded by the German Federal Ministry of Economic Affairs and Energy and administrated by the Space Administration of the DLR (funding code: 50NA1518).

References

1. Gerkey, B., Vaughan, R., Howard, A.: The Player/Stage project: tools for multi-robot and distributed sensor systems. In: Proceedings of the 11th International Conference on Advanced Robotics, pp. 317–323. Citeseer (2003)
2. Eugster, P.T., Felber, P.A., Guerraoui, R., Kermarrec, A.M.: The many faces of publish/subscribe. ACM Comput. Surv. (CSUR) 35(2), 114–131 (2003)
3. Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T.B., Leibs, J., Wheeler, R., Ng, A.Y.: ROS: an open-source robot operating system. In: Proceedings of the ICRA Workshop on Open-Source Software (2009)
4. Huang, A., Olson, E., Moore, D.: LCM: lightweight communications and marshalling. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4057–4062, October 2010
5. Tiderko, A., Hoeller, F., Röhling, T.: The ROS multimaster extension for simplified deployment of multi-robot systems. In: Robot Operating System (ROS): The Complete Reference, vol. 1, pp. 629–650. Springer International Publishing (2016)
6. Schwarz, M., Beul, M., Droeschel, D., Schüller, S., Periyasamy, A., Lenz, C., Schreiber, M., Behnke, S.: Supervised autonomy for exploration and mobile manipulation in rough terrain with a centaur-like robot. Front. Robot. AI 3, 57 (2016)
7. Hartanto, R., Eich, M.: Reliable, cloud-based communication for multi-robot systems. In: IEEE International Conference on Technologies for Practical Robot Applications (TePRA), Massachusetts (2014)
8. Zstandard - Fast real-time compression algorithm (2017). https://github.com/facebook/zstd
9. Linux Classless Queuing Disciplines (qdiscs) (2017). http://linux-ip.net/articles/Traffic-Control-HOWTO/classless-qdiscs.html
10. Johnson, D., Ntlatlapa, N., Aichele, C.: A simple pragmatic approach to mesh routing using BATMAN. In: 2nd IFIP International Symposium on Wireless Communications and Information Technology in Developing Countries, Pretoria, South Africa (2008)


11. Tardioli, D., Parasuraman, R., Ögren, P.: Pound: a ROS node for reducing delay and jitter in wireless multi-robot networks. CoRR, arXiv:1707.07540 (2017)
12. Tardioli, D., Sicignano, D., Villarroel, J.L.: A wireless multi-hop protocol for real-time applications. Comput. Commun. 55, 4–21 (2015)
13. Maruyama, Y., Kato, S., Azumi, T.: Exploring the performance of ROS2. In: International Conference on Embedded Software (EMSOFT), pp. 5:1–5:10. ACM (2016)
14. Tardioli, D., Sicignano, D., Riazuelo, L., Romeo, A., Villarroel, J., Montano, L.: Robot teams for intervention in confined and structured environments. J. Field Robot. 33, 765–801 (2016)
15. Schmiemann, J., Harms, H., Schattenberg, J., Becker, M., Batzdorfer, S., Frerichs, L.: A distributed online 3D-LiDAR mapping system. In: ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (2017)

Trajectory Planning Under Time-Constrained Communication

Yaroslav Marchukov(B) and Luis Montano(B)

Instituto Universitario de Investigación en Ingeniería de Aragón, University of Zaragoza, Zaragoza, Spain
{yamar,montano}@unizar.es

Abstract. In the present paper we address the problem of trajectory planning for scenarios in which a robot has to exchange information with other moving robots for at least a certain time, determined by the amount of information. We are particularly focused on scenarios where a team of robots must be deployed, reaching some locations to make observations of the environment. The information gathered by all the robots must be shared with an operation center (OP), thus some robots are devoted to retransmitting the data of their teammates to the OP. We develop a trajectory planning method called Time-Constrained RRT (TC-RRT). It computes trajectories to reach the assigned primary goals, subject to the constraint determined by the need to communicate with another robot acting as a moving relay, just during the time it takes to fulfill the data exchange. Unlike other methods in the literature, this method does not need a task allocator to assign specific meeting points or areas for communication exchange beforehand, because the planner finds the best area to do it while simultaneously minimizing the time to reach the goal. Evaluation and limitations of the technique are presented for different system parameters.

Keywords: Trajectory planning · Time-constrained communication · Cooperative scenarios

1 Introduction

Let us consider a scenario where a team of robots must be deployed to inspect an environment. The mission of the robots is to reach some points of interest and make observations of the environment (taking pictures, environmental sensing, etc.). The information must be transmitted to a static operation center (OP), where human operators monitor the mission. The robots and the OP have a limited signal range, so they cannot communicate continuously. Each robot is equipped with a wireless device, thus creating a dynamic communication area that, during the motion, extends the communication area determined by the OP.

(This work was partially supported by Spanish projects DPI2012-32100, DPI2016-76676-R-AEI/FEDER-UE and T04-DGA-FSE.)


This fact can be exploited by the system to exchange information between the robots. The exchange can be made in different ways. The simplest one, and probably the costliest in terms of time and resources, is that the robots periodically visit the OP. For large scenarios this solution wastes a lot of time only on information exchange. Another option is to devote some robots to relay tasks, re-transmitting the information of their mates to the OP. However, in a multi-robot application, minimizing the number of these relay robots maximizes the number of robots visiting the allocated primary tasks. Therefore, a better solution is to allocate the nearest goals to some robot, which is responsible for visiting these goals and, at the same time, for communicating directly with the OP. This way, the robots devoted to reaching the farthest goals share the collected data with the relay robot, which will upload this information to the OP when going to share its own data. In the same way, the OP updates the mission tasks and this information is shared with the team through the relay robot. This procedure is illustrated in Fig. 1(a)–(c).

Fig. 1. In (a), 3 robots must make observations of the environment (red crosses). The OP allocates the tasks to each robot (the colored areas associated to each robot in (a)). Additionally, R2 (black) is assigned to communicate directly with the OP. Each robot computes its trajectory and shares it with its mates. Figures (b)–(c) represent the trajectories planned by R1 and R3 to meet R2 and share data when they make observations of the environment. Figure (d) illustrates a simplified example for two robots in the x-y-time space, in which Relay is assigned for communications while moving towards its assigned goal, and Robot for primary tasks. The latter computes its trajectory towards its goal, constrained by the time needed for information exchange (tcmin).

Furthermore, constant connectivity between robots requires coordination, considerably constraining the motion. Nevertheless, based on the amount of collected data and the achievable bit-rate, the robot knows the time required to exchange the information. So, it can plan its trajectory, synchronizing with the relay only for the time required to fulfill the transmission. As a result, the maximum deviation of the robot from its trajectory to the primary goal is determined by the time needed to exchange information with the relay, see Fig. 1(d). We assume that a task allocation algorithm has assigned the sequence of goals and the roles of the robots, some of them serving both primary and relay tasks.


The relay robots compute their trajectories and share them with the rest of the robots. Thus, we focus this work on developing a Time-Constrained trajectory planner based on a Rapidly-exploring Random Tree algorithm (RRT), called TC-RRT. It computes a trajectory which leads the robot to a primary goal through the communication area of the relay robot. The planner attempts to reduce the time of the entire trajectory, remaining synchronized with the relay for the time needed to complete the transmission. The advantages of the presented algorithm are:

– the planner does not require a task allocator to define meeting points or areas for communication beforehand: it computes the time-constrained trajectory towards its assigned primary goal by itself.
– it also obtains the best area in which the exchange can be made, so the trajectory is obtained through that area.
– the time for communication can be adapted to the requirements of the amount of collected data for each application and the employed communication technology. The robot is able to find out the time required for each specific transmission, applying it to the best trajectory computation.

The rest of the paper is organized as follows: in Sect. 2, we present the related work. The problem is described in Sect. 3. In Sect. 4, we define the dynamic communication area concept and its parts, used in the planner. We present TC-RRT to solve the time-constrained trajectory planning problem in Sect. 5. In Sect. 6, we discuss the results obtained by our method in simulations. Finally, in Sect. 7, we present several conclusions and the future work.

2 Related Work

The centralized approach for exploration missions is the most secure way to assure coordination and connectivity among the robots of a team. In works such as [1] and [2], the teams are coordinated to accomplish a deployment mission. Both methods fully coordinate the team and are constrained to continuous connectivity between members, which in many cases might be a very strict constraint. The works [3–6] are interesting in how they solve time-constrained planning. In [3], the authors avoid coordination by exploiting the signal of other members, but the algorithm is still restricted to permanent communication. With our approach, the robot should be free to decide when to transmit the information and the duration of this transmission. The communication time is relaxed in [4] and [5], but coordinating the robots is still needed specifically for this end. In [4], the team periodically establishes line-of-sight communication, but the time of the data exchange is not taken into account. In [5], a robot communicates with the team only when it makes an observation, but it needs to be coordinated with its teammates. In [6], a multi-robot synchronization problem is addressed, where the robot trajectories are prefixed loops and the areas for inter-robot communication are also established a priori between each pair of trajectories.


In contrast to those previous works, in our work the robots share information without disrupting the trajectories of the relays, and without pre-fixed communication areas between trajectories. Furthermore, the aforementioned techniques require a specific allocation algorithm for connectivity tasks. The presented trajectory planner avoids the usage of this specific task allocator for connectivity, because how and where to achieve the coordination is computed by the trajectory planner itself. The trajectory computation involves the time at which the path is traversed, so the planner explores different temporal options to reach the goal. The runtime of methods such as Dijkstra, the Fast Marching Method (FMM), or A* scales with the number of dimensions. Therefore, we consider sampling-based methods, which do not require grid discretization. Randomized multiple-query algorithms such as Probabilistic Roadmaps (PRM) [7] are still computationally heavy for our problem. Therefore, we employ RRT [8] as the base method. Due to its single-query and randomized nature, it is well suited to exploring several trajectories and choosing the best one among them. There are many variations of RRTs in the literature. RRT-Connect [9] grows two trees, one from the initial position of the robot and another from the goal. This way, the environment is explored faster and the probability of getting trapped is reduced. The Dynamic-Domain RRTs (DD-RRT) [10] limit the sampling domain, generating random nodes in areas which do not lead towards the obstacles and minimizing the execution time. The solution cost is improved by RRT* [11] and Informed RRT* [12]. RRT* improves the parent selection and includes a reconnection routine of neighboring nodes, reducing the cost of possible paths with respect to the basic RRT. Informed RRT* employs heuristics to delimit the sampling domain, finding solutions faster and in more problematic scenarios where randomized algorithms are usually not effective, such as narrow passages. Both mentioned techniques increase the computation time with respect to the basic RRT. The presented TC-RRT, described in Sect. 5, takes into account some of the features of the cited methods. We delimit the sampling space to lead the robot through the communication area, as in DD-RRT and Informed RRT*. We use a parent selection routine, similar to RRT*, not to reduce the solution cost but to avoid deadlocks in local minima, thus increasing the success rate of the algorithm.

3 Problem Setup

The problem that we solve is the computation of a trajectory to some location, which is synchronized with the trajectory of a relay robot in order to exchange data; it therefore involves spatio-temporal planning. Thus, let us define a location x as a tuple [x y]^T. As each position is visited at some time t, we can define a node n = [x t]^T. Throughout the paper, x(n) and t(n) will denote the position and time of some node n. Since the space is three-dimensional, the distance between a pair of nodes includes the difference between times, besides the Euclidean distance. Thus, the distance between any two nodes n1 and n2 is expressed as:


d(n1, n2) = [ ‖x(n1) − x(n2)‖   |t(n1) − t(n2)| ]^T    (1)

We denote by τ a trajectory travelled by a robot, defined as a sequence of contiguous nodes τ = [n0, n1, ..., nN], where N denotes the end of the trajectory. The time of a trajectory to reach the goal is expressed as t(τ). The robot must communicate with a mate which is assigned to relay the information to the operation center. The trajectory of the mate is expressed as τm, and it generates a dynamic communication area A(τm). The details of the computation of this area are explained in Sect. 4. An example in a scenario with obstacles is depicted in Fig. 2(a). Knowing the size of the package to transmit, the robot determines the minimum time needed to fulfill the transmission, tcmin. The time during which the trajectory of the robot is synchronized with τm is then expressed as tc(τ). In summary, the problem to solve is to find the fastest trajectory which leads the robot to its primary goal while remaining synchronized with the relay for the time required to accomplish the information transmission, Fig. 1(d). Formally:

τ* = argmin_τ t(τ)   subject to   tc(τ) ≥ tcmin    (2)

4 Dynamic Communication Area

A basic wireless antenna located at some position x produces a communication area, denoted as A(x). This area is composed of all the positions where the received signal strength is higher than some specified threshold, which assures the communication. Thus, it is possible to obtain the distance dth at which communication is guaranteed from the following expression [14]:

PRX = PTX − 10·γ·log10(dth);    dth = 10^((PTX − PRX)/(10γ))    (3)

where PTX and PRX are the transmitted/received signal strength and γ is the path-loss exponent. Moreover, we want to always assure the communication, so we discard the positions which are obstructed by obstacles, considering only the line-of-sight (LoS) component. When a relay robot with an antenna follows its trajectory, it extends the static communication area with its movement. So, this dynamic area A(τm) can be expressed as all the nodes which are within communication distance along the trajectory τm, Fig. 2(a):

A(τm) = {n | dist(nmi, n) ∧ LoS(nmi, n), nmi ∈ τm, i = 0, ..., N}    (4)

where dist(nmi, n) is a boolean function that checks whether d(nmi, n) ≤ [dth Δtrelay]^T using Eq. (1), and Δtrelay is the maximum timestep of the relay's trajectory. LoS(nmi, n) is a boolean function that checks whether line-of-sight exists between the trajectory node and the node, using the Bresenham algorithm [13].
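As an illustration of Eq. (4), the following coarse, grid-based sketch marks every space-time node that is within dth and one relay timestep of some trajectory node and has line of sight to it. The occupancy grid, the sampling of candidate nodes and all helper names are assumptions made for illustration, not the authors' implementation.

import numpy as np

def line_of_sight(occ, a, b):
    """Bresenham-style visibility check between two grid cells (True if unobstructed)."""
    (x0, y0), (x1, y1) = a, b
    n = int(max(abs(x1 - x0), abs(y1 - y0))) + 1
    xs = np.linspace(x0, x1, n).round().astype(int)
    ys = np.linspace(y0, y1, n).round().astype(int)
    return not occ[xs, ys].any()

def dynamic_area(relay_traj, occ, d_th, dt_relay, times):
    """Nodes (x, y, t) within range of, and visible from, some relay trajectory node (Eq. 4)."""
    free_cells = np.argwhere(occ == 0)       # candidate positions (free grid cells)
    area = set()
    for (xm, ym, tm) in relay_traj:          # relay nodes n_mi of tau_m
        for (x, y) in free_cells:
            if np.hypot(x - xm, y - ym) <= d_th and line_of_sight(occ, (x, y), (xm, ym)):
                area.update((int(x), int(y), t) for t in times if abs(t - tm) <= dt_relay)
    return area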

Fig. 2. Different communication area parts: (a) A(τm), (b) Areach, (c) Aca, (d) Afeas.

The robot and the relay robot start from different initial positions. Thus, some nodes of A(τm) are impossible to reach and must be discarded. So, we define the area reachable by the robot, Fig. 2(b), as:

Areach = {n ∈ A(τm) | ‖x0 − x(n)‖ / vmax ≤ t(n)}    (5)

where x0 is the initial position of the robot and vmax its maximum attainable speed. As explained in the previous sections, the size of the package to transmit and the speed of the wireless link are known, so only those nodes of the communication area which guarantee the transmission of the data are considered by the planner. We thus define the communication assurance area Aca, Fig. 2(c), as:

Aca = {n ∈ A(τm) | t(n) ≤ t(nN) − tcmin}    (6)

where t(nN) is the time of the last position of the trajectory of the mate τm. Therefore, the feasible area, Fig. 2(d), which assures that the robot is able to reach the relay robot in time and guarantees a minimal time for the data exchange, is the intersection of the areas of Eqs. (5)–(6):

Afeas = Areach ∩ Aca    (7)

If ∃nfeas ∈ Afeas, the mission may be accomplished if the location x(nfeas) is reached no later than t(nfeas). However, if Afeas = ∅, there is no solution. TC-RRT guides the search to the goal through the communication area in order to synchronize the robot with the relay.
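Eqs. (5)–(7) translate almost directly into set operations. The sketch below assumes A(τm) is given as a set of (x, y, t) nodes (e.g. produced as in the previous sketch); all names are illustrative assumptions.

import numpy as np

def feasible_area(area, x0, v_max, t_end, tc_min):
    """A_feas = A_reach ∩ A_ca; `area` is A(tau_m) as a set of (x, y, t) nodes,
    x0 the robot start position, t_end the time of the last relay node t(n_N)."""
    a_reach = {n for n in area
               if np.hypot(n[0] - x0[0], n[1] - x0[1]) / v_max <= n[2]}   # Eq. (5)
    a_ca = {n for n in area if n[2] <= t_end - tc_min}                    # Eq. (6)
    return a_reach & a_ca                                                 # Eq. (7)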

5 Trajectory Planning with Limited Communication Time

The trajectory planner uses the basic RRT as base method. The proposed TC-RRT (Algorithm 1) has the same structure, but with different sampling and parent-selection functions. First of all, let us define each node in the tree as z = [x p t a tc]^T, where x(z) and p(z) are the position and the parent of z, t(z) is the time to reach z, a(z) is a boolean indicating whether z is within A(τm), and tc(z) is the communication time accumulated up to z in the tree.


All these variables are computed when the node is inserted in the tree with InsertNode. The algorithm generates random samples x in the workspace outside the obstacles (AreaSample), l.3 in Algorithm 1. The way the samples are generated depends on whether they are within or outside Afeas. The sample is connected to a parent in the tree by means of the AreaNearest procedure (l.4), selecting in this way the node znear that provides the fastest movement. As in the basic RRT, the Steer function cuts the stretch of the line between the generated sample (xsample) and the selected parent (znear), l.5. If the new branch is not obstructed by some obstacle, the node is inserted in the tree T, l.6-8. After expanding K nodes, there may be several branches or trajectories which achieve the goal. All of them fulfill the condition on the communication time, thus we select the fastest trajectory, as defined in Eq. 2. The algorithm iterates until the number of nodes specified by the user has been expanded. This way, the environment is rapidly explored, although without an optimality guarantee, because of the randomized nature of the algorithm. Two examples of tree computation and their respective trajectories are shown in Fig. 3. Let us explain the algorithm in more detail.

Algorithm 1. TC-RRT
1: T ← InsertNode(xini, 0)
2: for i = 1:K do
3:   xsample ← AreaSample()
4:   znear ← AreaNearest(xsample)
5:   xnew ← Steer(xsample, x(znear))
6:   if ObstacleFree(xnew) then
7:     T ← InsertNode(xnew, znear)
8:   end if
9: end for
10: return T

Algorithm 2. InsertNode(xnew, znear)
1: Δt = ‖xnew − x(znear)‖ / v(xnew, x(znear))
2: tnew = t(znear) + Δt
3: anew = ∃n : d([xnew tnew]^T, n) ≤ [d Δtrelay]^T, n ∈ A(τm)
4: if anew ∧ a(znear) then
5:   tcnew = tc(znear) + Δt
6: else
7:   tcnew = 0
8: end if
9: T ← [xnew znear tnew anew tcnew]^T

The InsertNode routine (Algorithm 2) receives the position of a new node and its parent, and computes the time to reach the node from the parent's position (l.1), where v represents the velocity of the robot. Then it computes the total time to reach it from the root (l.2). It checks the presence in the area by means of Eq. 1 in l.3, where d represents a small distance value. Note that this operation is quite lightweight, because Δtrelay selects only one horizontal slice of A(τm). If both the parent and the node are within A(τm), the time of l.1 is accumulated as communication time (l.4-8). AreaSample (Algorithm 3) works as follows. When some node is introduced into the communication area on time, i.e. in Afeas (l.1), and does not yet fulfill the communication-time condition of Eq. (2) (tc(z) < tcmin), the tree is expanded through A(τm). The algorithm selects the node which accumulates the maximal communication time, za in l.2, and chooses a greater time of the relay's trajectory, using tmin and tmax in l.3, delimiting the sampling space by the distance dth. The selection of the time limits is made in accordance with the relative speed between the relay and the robot, setting tmin = Δtrelay·vrelay/vrobot and tmax = tmin + Δtrelay, see Fig. 4(a).


Algorithm 3. AreaSample
1: if ∃z : {dist([x(z) t(z)]^T, n), n ∈ Afeas ∧ tc(z) < tcmin} then
2:   za = argmax_z(tc(z))    ▷ node which accumulates the maximal communication time
3:   xm− = x : ‖x − xm(t(za) + tmin)‖ ≤ dth ;  xm+ = x : ‖x − xm(t(za) + tmax)‖ ≤ dth
4:   xsample ← rand(xm−, xm+)
5: else
6:   xsample ← rand(xmin, xmax)    ▷ xmin, xmax are the limits of the scenario
7: end if
8: return xsample

Fig. 3. Tree (trajectory) computation for different scenarios. In (a), a robot that has collected data at [45,5] (red circle) must share information with a static mate at [5,5] and then return to the initial position. In (b), the tree explores the environment, looking for the static communication area of the mate. When A(τm) is reached, the robot remains within A(τm) until the data exchange is finished. Finally, T is expanded to the goal. The computed trajectory is depicted in green. In (c), the mate is moving from [5,5] to [45,45], thus the area is dynamic. The robot must transmit data before reaching the goal at [5,45]. In (d), T intercepts the communication trail area of the mate as fast as possible, remains in A(τm) until the end of the communication, and then achieves the primary goal.

In the opposite case, if no nodes have entered the communication area Afeas, or the communication time has already been accumulated so that tc(z) ≥ tcmin, the samples are generated outside the obstacles in the whole scenario, l.5-7. AreaNearest (Algorithm 4) finds the best parent node of the tree to connect the generated sample. The procedure is represented in Fig. 4(b). If there are no nodes of the tree inside A(τm), l.1, the sample is connected to the parent that provides the fastest movement, l.2. In the case that some node is within A(τm), l.3, and fulfills the condition of Eq. 2, l.4, it selects all the nodes fulfilling that condition, l.5, and chooses the parent of the fastest movement, l.6. When there are no nodes in A(τm) which fulfill tc(z) ≥ tcmin, l.7, the suitable parent candidates are those that have accumulated the maximum communication time of the entire tree, l.8. To increase the number of suitable candidate parents, a relaxation time tr is applied in l.9. It is computed as tr = nts·dmax/vmax, where nts is the number of timesteps and dmax and vmax are the maximum attainable step and speed of the robot, respectively. Finally, the chosen parent is the one that provides the minimal time, l.10.


Algorithm 4. AreaNearest(xsample)
1: if ∄z : a(z) then
2:   znear = argmin_z( ‖xsample − x(z)‖ / v(xsample, x(z)) )
3: else
4:   if ∃z : tc(z) ≥ tcmin then
5:     zc = {z | tc(z) ≥ tcmin}
6:     znear = argmin_{z∈zc}( ‖xsample − x(z)‖ / v(xsample, x(z)) )
7:   else
8:     z* = argmax_z(tc(z)) : a(z)    ▷ maximum communication time of the entire tree
9:     zc = {z ∈ z* | tc(z) ≥ tc(z*) − tr}
10:    znear = argmin_{z∈zc}(t(z))
11:   end if
12: end if
13: return znear

Therefore, the selected parent is one that reduces the node's time while maintaining the tree within A(τm), see Fig. 4(c).

[Figure 4: (a) sampling area around the relay trajectory; (b) parent selection example with candidate nodes z1, z2, z3 for a sample xsample, where znear = z2.]

Fig. 5. (top) End-to-end PDR on a two-link network as a function of end-to-end network length L (links modeled as in Fig. 4). (bottom) Optimal position of the relay as fraction of L (bold curve).


5 System Architecture

We implemented our system following a three-tiered architecture that we summarise below. Describing the full implementation is out of this paper's scope.

5.1 Application Layer (AL) – Video Capture and Collection

The AL is only used by the source and sink devices, the only nodes that can generate or consume data. The source grabs image frames directly from the camera device and passes them to the AL, which fragments frames into multiple packets as needed and sends them to the layer below. Each packet also carries a corresponding header to facilitate re-assembly at the sink. At the sink, the AL receives packets from the layer below, extracts their header, re-assembles them into a full image frame and delivers it to the application for display.

5.2 Packet Manager (PM) Layer – Routing

The PM layer is in charge of finding the next hop of an incoming packet, whether it comes from the AL or from another node. The PM passes all packets that are to be forwarded along the network to the layer below, which handles transmissions. Packets aimed at the node itself are passed to the AL above.

5.3 TDMA Layer – Transmission Shaper

The TDMA layer is in charge of shaping the outgoing communications, accessing the network interface directly. Periodically, i.e., every round period, enqueued packets are passed to the wireless card to be sent over the air only during a given interval of time called a slot. A new packet is only sent if the node is still within its slot and once the network card informs the TDMA module that the previous packet has been sent, which may take a variable time depending on whether there were access delays or retransmissions (a maximum of two were configured). The TDMA layer is also in charge of synchronizing the network nodes. We follow an adaptive, distributed, clockless approach similar to that in [9]. Every node measures the average delay of the last set of received packets and adjusts the phase of its own slot to initiate transmissions right after the end of the previous slot. Figure 6 exemplifies this process, showing packet reception and transmission instants in a relay node. Each line on the y-axis contains the events that occurred within a round period. Yellow squares represent received packets, and their x-axis position the time in the round when they were received. Gray squares represent packets transmitted (relayed) by the node. The node time slots are shown as long thin black rectangles. Figure 6 also shows the slot phase adjustment, visible through the slot shifts to the right, caused by delays in the packets received in the previous slot due to retransmissions and interference.
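A toy sketch of this clockless phase adjustment is given below: the node estimates the end of the previous slot from the arrival offsets of recently received packets and nudges its own slot phase towards that point. The averaging window, the gain and all names are assumptions made purely for illustration; the paper's approach is only described as being similar to [9].

ROUND_PERIOD = 0.100   # TDMA round of 100 ms, as used in the experiments
SLOT_LENGTH = 0.030    # slot duration (illustrative value)

def adjust_slot_phase(rx_offsets_in_round, current_phase, gain=0.5):
    """rx_offsets_in_round: arrival offsets (s) of the last received packets,
    measured from the start of the local round. Returns the new slot phase."""
    if not rx_offsets_in_round:
        return current_phase
    # Estimate the end of the neighbour's slot from the mean arrival offset of
    # its last packets, and start transmitting just after that point.
    desired_phase = sum(rx_offsets_in_round) / len(rx_offsets_in_round)
    new_phase = current_phase + gain * (desired_phase - current_phase)
    return new_phase % ROUND_PERIOD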


Fig. 6. Packet reception (yellow) and transmission (black) in a relay, using slots 1 and 2 of a TDMA round. Phase misalignment is due to adaptive and clockless synchronization. Blue packets are DRP protocol feedback PDR packets.

5.4 Dynamic Relay Placement (DRP) Protocol

To maximize the end-to-end PDR, each relay runs a distributed protocol to track its best placement. The main idea of the protocol is that every UAV constantly estimates the PDR of its incoming link and shares this value, together with its GPS position, with the node before it in the line topology. This allows each node to acquire the PDR and length estimates of both its incoming and outgoing links and, after collecting multiple estimates, to perform a function regression and infer the links' R and α parameters. Then it computes its optimal relative position between its neighbors and adjusts its position accordingly. Differential GPS units are to be used to minimize link length errors: as visible in Fig. 4, an error of 5 m, typical in standard GPS, leads to a deterioration of nearly 10% from the optimal PDR. To facilitate these computations, the TDMA layer of node i adds a unique link-i sequence number (sqi) to the packet headers. This way, receiving node i + 1 can determine which packets from sender node i were lost. An array of the last M sequence numbers of received packets (from every node) is kept updated (currently M = 200). The DRP protocol periodically sweeps this array, checking the differences between consecutively registered sequence numbers and determining how many packets from node i to i + 1 were lost. The PDR estimate (Pi) then follows from the number of missing packets over the window size M. The TDMA layer of node i + 1 then sends a PDR packet upstream, i.e. to node i, containing the PDR estimate of link i (cf. blue packet in Fig. 6). This way, every node is able to estimate the quality of its incoming link (source side) directly, and of its outgoing link (sink side) indirectly by means of the PDR packet.
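A minimal sketch of this sliding-window PDR estimate is shown below, with M = 200 as in the text. The class structure, and the assumption that sequence numbers do not wrap within the window, are illustrative simplifications rather than the authors' implementation.

from collections import deque

M = 200  # window size of the last received sequence numbers, as in the text

class LinkPdrEstimator:
    def __init__(self):
        self.seq_window = deque(maxlen=M)

    def on_packet(self, seq):
        """Record the sequence number of a packet received on this link."""
        self.seq_window.append(seq)

    def estimate_pdr(self):
        """Estimate the delivery ratio from the gaps between sequence numbers."""
        if len(self.seq_window) < 2:
            return 1.0
        seqs = sorted(self.seq_window)
        expected = seqs[-1] - seqs[0] + 1   # packets the sender emitted in the window
        received = len(self.seq_window)     # packets actually observed
        return min(1.0, received / expected)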

6 Experimental Results

This section confirms the capability of the proposed DRP protocol to assess the PDR of each link at each relay node, and shows how the PDR of the links and of the end-to-end network varies with the relative position of the relay in the line topology.


We also show the end-to-end throughput, i.e., video frame rate, to guarantee that we do not improve PDR at the expense of throughput.

6.1 Setup

Figure 7 shows the top view of the experimental setup, where a single source UAV streams images through a line network of two other UAVs that eventually deliver them to the base station. We used the AR.Drone 2.0 as the UAV platform. The experiment was carried out indoors, and the distances between nodes were measured with a tape measure. We chose a TDMA round period of T = 100 ms and dedicated one equal slot to each transmitting UAV. The network hop count is 3 (n = 3), and routing tables are hard-coded accordingly, namely a UAV source, two UAV relays and a computer base station. The distance between the source and the second relay (node 3) is fixed to D = 18 m, and the first relay (node 2) is the only moving node (Fig. 7). Thus, for the purpose of analysing network performance, we will focus only on the first three nodes and the respective links, considering relay node 3 as the effective sink. Queues in the relays are limited, thus the system drops the oldest packets first if new incoming packets find a queue full.

Fig. 7. Experiment layout (top view) of the UAVs and computer base station. All nodes are fixed except the first relay (node 2). End-to-end length set to D = 18 m.

All nodes were set to operate on the same 802.11g ad-hoc network channel at a fixed bit rate of 24 Mbps. At this speed, we guarantee that we can produce data fast enough to saturate the channel, thus not being limited by the processing power of our platform. The transmission power was set equally among the UAVs (10 dBm), as was the maximum number of 802.11 MAC retries (2, the minimum the platform allows). Each video streaming experiment lasted roughly 100 consecutive seconds due to limited local log file storage. Each image frame, in raw format, has 57600 bytes and is divided into 50 packets by the AL, which then passes them to the PM layer. Each packet corresponds to one horizontal line of pixels in an image frame. The source is programmed to capture a new frame only if the PM queue has room for at least 50 packets.

6.2 Packet Delivery Ratio

Figure 8 shows the effect of the relative distance of a relay to its upstream (source side) and downstream (base station side) nodes on the PDR of the respective links.


As the relay moves farther from the source and closer to its downstream neighbor, the PDR of the first link oscillates around 95% while that of the second link improves dramatically. The best end-to-end condition is reached when the product of the two link PDRs is maximum. If one of the two links has a low PDR, this translates into a low end-to-end PDR, since packets have to be successfully transported at every hop to reach the sink. As discussed in Sect. 4, the asymmetric links lead to a best end-to-end scenario with the relay node away from the midpoint between its upstream and downstream nodes. At that midpoint the network PDR is 90% × 79% ≈ 72%. Figure 9(a) shows a snapshot of the video stream received at the sink when the relay is placed exactly at the midpoint between its two neighbors. When the relay moves away from the source, the network PDR eventually increases to ≈100%, i.e., a 40% improvement. Figure 9(b) shows a snapshot of the video stream with the relay close to the optimal position, revealing fewer losses as expected. Moving the relay even farther away again deteriorates the network performance, since the first link worsens significantly.

Fig. 8. Experimentally measured packet delivery ratio at the first (orange) and second (purple) links, with respective standard deviation, at different static relay positions. Best scenario when relay is closer to sink (x = 13 m) leading to end-to-end PDR ≈100%. Dashed lines are superimposed regressions of Eq. 1 on experimental data.

6.3 Throughput

The second metric we analyzed is throughput, measured as the data received at the sink per second, which corresponds to the throughput of the second link, since all data that the relay transmits is originally generated at the source. Figure 10 shows that placing the relay at x = 13 m, ≈0.6 of the source-to-sink distance, maximizes throughput. Interestingly, the achieved throughput is nearly three times higher than at the midpoint x = 9 m. Overall, the experimental results validate our initial assumption that end-to-end network performance in a line topology with asymmetric links is maximized by moving the relay nodes so as to balance the PDR of the respective links.


Fig. 9. Video stream snapshot with relay positioned at: (a) the midpoint between source and sink, showing plenty of packet losses (black lines); and (b) 0.6 of the distance between source and sink (x = 13 m), leading to balanced links and fewer packet losses.

[Figure: measured end-to-end throughput (KB/s) as a function of relay position (m), from 0 to 18 m.]

Fig. 10. Measured end-to-end throughput. Placing the relay at the midpoint (x = 9 m) is again not optimal. At x = 13 m, throughput is three times higher.

7 Conclusion

UAVs are a promising tool to provide live video streaming to a ground station for multiple purposes. Due to the limited radio range, other UAVs can act as relays and form a line network with multiple UAVs. The relays' positioning has a significant impact on the end-to-end network performance, particularly PDR and throughput. We showed that, in the presence of asymmetries in the links, adjusting the UAV positions to balance the links' PDR maximizes network performance. Experimental results, at different static positions and with a two-link line network, showed a PDR increase of 40% and a 3-fold increase in throughput compared with placing the relay node at the network midpoint. We also described and implemented an online PDR estimation protocol that will be able to guide the relays dynamically to their best positions. We are currently addressing the solution of the general case of positioning n relays and proving convergence of the protocol when operating iteratively. Moreover, determining the optimal number of relays in asymmetrical conditions is still an open problem.


Acknowledgments. This work is a result of the project NanoSTIMA (NORTE-01-0145-FEDER-000016), supported by the Norte Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development Fund (ERDF). The work was also partially supported by UID/EEA/50008/2013.

References

1. Asadpour, M., Giustiniano, D., Hummel, K.A., Heimlicher, S., Egli, S.: Now or later?: delaying data transfer in time-critical aerial communication. In: Proceedings of the Ninth ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT), Santa Barbara, CA, USA, pp. 127–132. ACM Press (2013)
2. Bohm, A., Lidstrom, K., Jonsson, M., Larsson, T.: Evaluating CALM M5-based vehicle-to-vehicle communication in various road settings through field trials. In: Proceedings of the IEEE Local Computer Networks Conference (LCN), Denver, CO, pp. 613–620. IEEE (2010)
3. Forster, C., Lynen, S., Kneip, L., Scaramuzza, D.: Collaborative monocular SLAM with multiple micro aerial vehicles. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3962–3970 (2013). doi:10.1109/IROS.2013.6696923
4. Goddemeier, N., Daniel, K., Wietfeld, C.: Role-based connectivity management with realistic air-to-ground channels for cooperative UAVs. IEEE J. Sel. Areas Commun. 30(5), 951–963 (2012). doi:10.1109/JSAC.2012.120610
5. Henkel, D., Brown, T.X.: Delay-tolerant communication using mobile robotic helper nodes. In: Proceedings of the International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks and Workshops (WiOPT), Berlin, Germany, pp. 657–666. IEEE (2008)
6. Jia, F., Shi, Q., Zhou, G.M., Mo, L.F.: Packet delivery performance in dense wireless sensor networks. In: Proceedings of the International Conference on Multimedia Technology (ICMT), Ningbo, China, pp. 1–4. IEEE (2010)
7. Lindhe, M., Johansson, K.: Using robot mobility to exploit multipath fading. IEEE Wirel. Commun. 16(1), 30–37 (2009)
8. Muzaffar, R., Vukadinovic, V., Cavallaro, A.: Rate-adaptive multicast video streaming from teams of micro aerial vehicles. In: 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1194–1201 (2016). doi:10.1109/ICRA.2016.7487250
9. Oliveira, L., Almeida, L., Lima, P.: Multi-hop routing within TDMA slots for teams of cooperating robots. In: 2015 IEEE World Conference on Factory Communication Systems (WFCS), pp. 1–8 (2015). doi:10.1109/WFCS.2015.7160566
10. Pinto, L.R., Almeida, L., Rowe, A.: Video streaming in multi-hop aerial networks: demo abstract. In: Proceedings of the 16th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN 2017), pp. 283–284. ACM, New York, NY, USA (2017). doi:10.1145/3055031.3055047
11. Pinto, L.R., Moreira, A., Almeida, L., Rowe, A.: Characterizing multihop aerial networks of COTS multirotors. IEEE Trans. Industrial Inf. 13(2), 898–906 (2017). doi:10.1109/TII.2017.2668439
12. Rizzo, C., Tardioli, D., Sicignano, D., Riazuelo, L., Villarroel, J.L., Montano, L.: Signal-based deployment planning for robot teams in tunnel-like fading environments. Int. J. Rob. Res. 32(12), 1381–1397 (2013)



Communication-Aware Robotics (II)

Discrete Robot Localization in Tunnels

Teresa Seco¹(B), Carlos Rizzo², Jesús Espelosín¹, and José Luis Villarroel³

¹ Instituto Tecnológico de Aragón (ITAINNOVA), Zaragoza, Spain
{tseco,jespelosin}@itainnova.es
² EURECAT Technology Center, Barcelona, Spain
[email protected]
³ Aragón Institute for Engineering Research (I3A), University of Zaragoza, Zaragoza, Spain
[email protected]

Abstract. Tunnel-like environments represent a challenge in terms of robot localization due to the special conditions of this type of scenario. Traditional robot localization systems based on laser, camera or odometry sensors do not perform well due to the hostile conditions and the general lack of distinctive features. In this paper we present a discrete localization system which takes advantage of the periodic nature of the RF signal fadings that appear inside tunnels under certain configurations. Experimental results show the feasibility of the proposed method for periodically correcting the robot position during its displacement along the tunnel.

Keywords: Localization · RF · Tunnel-like environments · RLS · Mahalanobis distance

1 Introduction

There are several tasks that need to be carried out inside tunnel-like environments, such as inspections, rescue missions and even construction. The harsh conditions, as well as the implicit risk for people executing some of these tasks, make robots the most adequate devices to perform them. In this context, localization of the robot along the tunnel represents a challenge. Due to the absence of satellite signal in underground scenarios, outdoor methods based on GPS sensors cannot be considered. Additionally, the darkness and lack of distinguishable features cause the most common techniques for indoor localization, based on cameras or laser sensors, to perform erratically. Other localization methods rely on wheel odometers, but besides suffering from cumulative errors, they can be inaccurate due to uneven surfaces and the presence of fluids. Taking into account the aforementioned difficulties, other alternatives must be explored. From the point of view of RF (Radio Frequency) signal propagation, several studies have been performed in tunnel scenarios [1–3]. In the cited works, two different behaviours of the RF signal are presented: on the one hand, tunnel-like environments behave as waveguides, extending the communication range in

comparison to free space, but on the other hand, the signal suffers from strong fading phenomena. The authors demonstrate that it is possible to obtain periodic fadings under certain transmitter-receiver configurations (i.e. position in the tunnel cross section and operating frequency). In this paper we propose the use of the periodic nature of the fadings for localization purposes, extending the methodology presented in [4]. Our approach consists of identifying the minimums of the RF signal during the displacement of the robot along the tunnel, and subsequently matching these features with an RF map generated from the known signal propagation model. The robot position is periodically adjusted with the position reference provided by the map during the matching process. In order to accomplish these tasks, the combination of two strategies has been proposed as a first approach: the Recursive Least Squares (RLS) algorithm for feature identification [5], and the Mahalanobis distance for the matching step. The remainder of the paper is structured as follows. In the next section, the fundamental aspects of electromagnetic propagation in tunnels relevant for this work are described. Section 3 presents the proposed discrete localization method, with a detailed description of each step of the algorithm. In Sect. 4, the preliminary results of the experimental tests and the algorithm implementation are shown. Finally, the conclusions are summarized in Sect. 5.

2 Fundamentals of Electromagnetic Propagation in Tunnels

Propagation in tunnels differs from regular indoor and outdoor scenarios. For operating frequencies high enough that the free-space wavelength is much smaller than the tunnel cross-section dimensions, tunnels behave as hollow dielectric waveguides. If an emitting antenna is placed inside a long tunnel, the radiated spherical wavefronts will be multiply scattered by the surrounding walls. The superposition of all these scattered waves is itself a wave that propagates in one dimension (along the tunnel length) with a quasi-standing wave pattern in the transversal dimensions. This extends the communication range, but affects the signal with the appearance of strong fadings. There are many different possible transversal standing wave patterns for a given tunnel shape. Each one is called a mode and has its own wavelength, close to, but different from, the free-space one, and its own decay rate. For a detailed explanation, a good online source can be found in [6]. The electromagnetic field radiated from the antenna starts propagating along the tunnel distributed among many of the possible propagating modes supported by this waveguide. After a long-enough travel distance, only the lowest order modes survive (i.e. those with the lowest attenuation rate), giving rise to the slow fadings in the so-called far sector. These fadings are caused by the pairwise interaction between the propagating modes, and therefore, the higher the number of modes, the more complex the fading structure. On the transmitter side, the position of the antenna makes it possible to maximize or minimize the power coupled to

a given mode, favoring the interaction between certain modes, and allowing a specific fading structure to be produced. In this specific work, given the tunnel dimensions and transmitter-receiver setup, the dominant modes are the first three (the ones that survive in the far sector). By placing the transmitter antenna close to a tunnel wall, we maximize the power coupled to the first and second modes while minimizing the excitation of the third one. On the receiving side, this produces a strictly periodic fading structure. The superposition of the first and second propagation modes (called EH11^y and EH21^y respectively) creates a 512 m periodic fading structure (Fig. 1(a)). In the very center of the tunnel there is no contribution from the second mode, and the third mode (EH31^y), with lower energy, becomes observable, creating another fading structure with a different period. However, the received power associated with the fading maxima is lower compared to the previous fading structure. The situation is illustrated in Fig. 1(b), which shows the data collected by having one antenna in each half of the tunnel, and another located in the center. It can be seen that there is a spatial phase difference of 180° between both halves of the tunnel (i.e. a maximum of one fading matches a minimum of the other), caused by the transversal structure of the second mode. For a more detailed explanation, see [3].

Fig. 1. Measured received power at 2.4 GHz inside the Somport tunnel, from [3]. The transmitter was kept fixed and the receiver was displaced up to 4 km from the transmitter. In (b), the same experiment was repeated for three different receiver cross-section positions: left half, center and right half. The solid lines represent the modal theory simulations, and the dotted lines the experimental results.

3 Discrete Localization Algorithm Based on RF Minimum Detection

We consider that the most relevant features of the RF waveform are the valleys, where a sharp change in the RSSI (Received Signal Strength Indicator) value

is clearly distinguishable. Using the propagation model described in Chaps. 2.1 and 3.2 of [7], it is possible to know in advance the position of each valley along the tunnel. The proposed algorithm is based on the detection of these relevant features of the measured RF signal during the robot displacement. Thus, it is possible to correct the estimated position with the known positions corresponding to each fading feature, removing the odometric cumulative error periodically. The proposed localization approach is represented in Fig. 2. These steps are described in more detail in the next subsections.

Fig. 2. General stages of the localization algorithm

3.1 Minimums Map Generation

The map generation step consists of determining the location of the theoretical minimums from the known fadings model, which relates the RSSI to the distance along the tunnel. Although the map generation is executed in an offline process, the solution applied is the same as the one selected for the online process. In order to accomplish the online minimums detection, a change in the slope of the signal from negative to positive must be detected. For this purpose, we propose to approximate the signal with a first-order function (ŷ = θ0 + θ1u), using the resulting coefficients to check the change in slope. Therefore, a system identification method is needed to obtain the linear approximation at each timestamp. Although the Least Squares (LS) method is the most widely applied solution for linear optimization problems, it requires a significant number of previously recorded data samples for the solution to converge, which means that it is not suitable for problems with real-time requirements. For this reason, the Recursive Least Squares algorithm has been proposed as the solution. The RLS method, which is the recursive formulation of the LS algorithm, is capable of updating the solution online each time new data is available. The general formulation of the RLS algorithm is based on the following equations:

e(k) = y(k) − ŷ(k)                                      (1)
ŷ(k) = uᵀ(k) θ(k − 1)                                   (2)
θ(k) = θ(k − 1) + γ(k) e(k)                             (3)
γ(k) = P(k − 1) u(k) / (uᵀ(k) P(k − 1) u(k) + λ)        (4)
P(k) = (I − γ(k) uᵀ(k)) P(k − 1) / λ                    (5)

where e is the error, computed as the difference between the measured data y and the model output ŷ, calculated using the input sample u and the previous coefficient estimation θ. P is the inverse correlation matrix and is proportional to the covariance matrix of the parameter estimates θ. λ is the forgetting factor, a positive scalar with 0 < λ ≤ 1. The smaller the λ, the smaller the contribution of previous samples to the covariance matrix. This makes the filter more sensitive to recent samples, which means more fluctuations in the filter coefficients with noisy signals. As a result of applying the RLS algorithm to the theoretical fadings model and the slope change detection process, an array of n distance values is obtained (xm1, xm2, ..., xmn). These positions are considered global localization references and will be used as prior known information (Fig. 3(a)).
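To make the procedure concrete, the following Python sketch implements the RLS update in Eqs. (1)–(5) together with the negative-to-positive slope test used to flag a minimum. It is an illustrative re-implementation, not the authors' Matlab code; the class and function names are ours, and the default δ = 1000 simply mirrors the initialization value reported later in Sect. 4.2.

import numpy as np

class RLS:
    """Recursive least-squares fit of the first-order model y ≈ θ0 + θ1·τ (Eqs. 1–5)."""

    def __init__(self, lam, delta=1000.0):
        self.lam = lam                       # forgetting factor λ, 0 < λ <= 1
        self.theta = np.zeros(2)             # coefficients θ = (θ0, θ1)ᵀ, θ(0) = 0
        self.P = delta * np.eye(2)           # inverse correlation matrix, P(0) = δI

    def update(self, tau, y):
        u = np.array([1.0, tau])                                         # input u = (1, τ)ᵀ
        e = y - u @ self.theta                                           # error, Eq. (1), with ŷ from Eq. (2)
        gamma = self.P @ u / (u @ self.P @ u + self.lam)                 # gain γ(k), Eq. (4)
        self.theta = self.theta + gamma * e                              # coefficient update, Eq. (3)
        self.P = (np.eye(2) - np.outer(gamma, u)) @ self.P / self.lam    # Eq. (5)
        return self.theta

def detect_minimums(inputs, rssi, lam):
    """Indices where the fitted slope θ1 changes from negative to positive."""
    rls, prev_slope, minima = RLS(lam), None, []
    for k, (tau, y) in enumerate(zip(inputs, rssi)):
        slope = rls.update(tau, y)[1]
        if prev_slope is not None and prev_slope < 0.0 and slope > 0.0:
            minima.append(k)
        prev_slope = slope
    return minima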

Fig. 3. Algorithm flow chart: (a) Map generation, (b) Prediction and Estimation steps


3.2 Prediction and Estimation Using RLS and Mahalanobis Distance

The position of the vehicle x̂t is predicted using the information provided by the odometry sensors (vt, wt) and the position computed in the previous iteration xt−1. In our specific case the vehicle travels in a straight line along the center of the tunnel, and therefore only slight heading variations are observed during its displacement.

x̂t = xt−1 + v · dt                                      (6)

In the estimation step, the existence of a minimum of the actual RSSI signal (zi) is checked using the previously described RLS algorithm. If a potential minimum x̂pot appears, a data association technique is required in order to match the potential minimum with the minimums of the map Xm. In this work, the use of the Mahalanobis distance in combination with the individual compatibility (IC) test is proposed. These techniques are the basis for some of the most popular data association algorithms used in SLAM to determine the subset of map features that are compatible with a measurement [8]. The Mahalanobis distance is computed between the position of the potential minimum x̂pot and each of the positions of the minimum map Xm:

DM(x̂pot, xmj) = √[ (x̂pot − xmj)ᵀ S⁻¹ (x̂pot − xmj) ],   ∀xmj ∈ Xm, j = 1..n        (7)

It is important to notice that the covariance matrix S must take into account not only the covariances of the pose estimation and the minimum map, but also the covariance of the sensor, which in this case is a 'minimums' detector:

S = CovposeEst + Covmap + CovminSensor                                             (8)

In order to obtain CovminSensor, a calibration process is required. The data provided by this sensor are the minimum values obtained through the application of the RLS algorithm to the actual RSSI data (including the false positives). A first approach to the calibration procedure consists of measuring the distance between two minimums of the actual data (dmin) that would correspond to a known minimum. Figure 4 shows the results of the calibration procedure. The covariance is then obtained through the following expression:

CovminSensor = (dmin / 2)²                                                         (9)

The estimated position can be considered to correspond to an actual minimum if the Mahalanobis distance satisfies the individual compatibility (IC) test:

D²(x̂pot, xmj) < χ²d,1−α,   ∀xmj ∈ Xm, j = 1..n                                     (10)

where d is the dimension and 1 − α is the desired confidence level (usually more than 95%). The value for χ is obtained from the Student's t distribution table.
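The matching step of Eqs. (7)–(10) can be sketched as follows for the one-dimensional (along-tunnel) case, where S reduces to a scalar. The helper name and the numeric values in the usage example are illustrative; only the calibration rule of Eq. (9) and the threshold 6.314 reported later in Sect. 4.2 come from the paper.

import numpy as np

def match_minimum(x_pot, map_minima, cov_pose, cov_map, cov_min_sensor, chi_threshold=6.314):
    """Return the index of the map minimum compatible with x_pot, or None (Eqs. 7-10)."""
    S = cov_pose + cov_map + cov_min_sensor            # combined covariance, Eq. (8)
    d2 = (x_pot - np.asarray(map_minima)) ** 2 / S     # squared Mahalanobis distances, Eq. (7)
    j = int(np.argmin(d2))
    if d2[j] < chi_threshold:                          # individual compatibility test, Eq. (10)
        return j
    return None                                        # rejected as a false positive

# Illustrative usage: sensor covariance from the calibration of Eq. (9), with dmin = 20 m
cov_min_sensor = (20.0 / 2.0) ** 2
j = match_minimum(x_pot=1860.0,
                  map_minima=[1854.0, 2366.0, 2877.0, 3389.0, 3901.0],   # example map positions
                  cov_pose=25.0, cov_map=1.0, cov_min_sensor=cov_min_sensor)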

Fig. 4. Distance calculation between minimums of the actual signal

It is important to remark on the noisy nature of the RF signal in tunnels, which results in false positives during the minimum detection process, even though a digital filter is used. The proposed technique is capable of reducing or even removing the false positives during the matching process. Finally, if the potential minimum passes the IC test, the estimated position of the vehicle along its displacement xt is corrected with the position that comes from the corresponding minimum of the theoretical map, xmj. Therefore, the accumulated odometry error is reset after identifying a minimum. The prediction and estimation steps (Fig. 3(b)) are carried out iteratively.
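Putting the pieces together, one iteration of the discrete localization loop of Fig. 3(b) could be sketched as below; the match_minimum helper is the one sketched earlier, and the function name and covariance dictionary are ours, not part of the original implementation.

def localization_step(x_prev, v, dt, potential_min, map_minima, covs):
    """One predict/correct iteration: Eq. (6) prediction, then reset on a validated minimum."""
    x_pred = x_prev + v * dt                       # odometric prediction, Eq. (6)
    if potential_min is not None:                  # RLS flagged a potential RSSI minimum
        j = match_minimum(potential_min, map_minima,
                          covs['pose'], covs['map'], covs['min_sensor'])
        if j is not None:                          # passed the IC test
            return map_minima[j]                   # correction: reset to the map reference
    return x_pred                                  # otherwise keep the prediction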

4 Results

In order to test the proposed discrete localization method, the algorithm was implemented in Matlab. The main goal was to validate the solution using real data, and for this purpose a dataset logged in the Somport tunnel was used and processed offline.

4.1 Scenario and Experimental Setup

The test scenario selected for carrying out the experiments was the Somport railway tunnel, which is 7 km long and connects Spain with France. It is straight, but there is a change in slope at approximately 4 km from the Spanish entrance, and the cross-section is horseshoe-shaped (approximately 5 m high and 4.65 m wide). An all-terrain vehicle was used as the mobile platform. It was equipped with an array of twelve TP-LINK wireless receivers. The antennas were spaced 0.125 m apart and the working frequency was 2.412 GHz. All of them were vertically polarized with a 4 dBi gain. The receiver array was placed at a height of 2.17 m on the moving platform. The radio signal was generated with a TP-LINK TL-WN7200ND wireless adapter with a Ralink chipset. It was placed 100 m from the entrance of the tunnel, 0.25 m from the right wall and 2 m above the ground. With these transmitter-receiver settings, the expected period of the fadings is around 512 m.


In order to obtain the position of the mobile platform along the tunnel, the vehicle was equipped with two odometers and a SICK laser sensor. The vehicle localizes itself by fusing all the sensor data with a localization algorithm on a previously built map [9]. This approach is only feasible due to a very specific characteristic of this tunnel: the lateral emergency shelters located every 25 m, which act as landmarks. Without these shelters, the uncertainty along the tunnel grows indefinitely, as there are no relevant features in the map to match against and reset the errors. The experimental setup can be seen in Fig. 5.

(a) The Somport tunnel

(b) Instrumented platform

Fig. 5. Experimental setup

The mobile platform moved up to 4000 m from the transmitter position along the center of the tunnel. During the displacement of the robot, the data provided by the sensors were streamed and logged with a laptop running Robot Operating System (ROS) [10] over Ubuntu 12.04. The real data used to validate the proposed algorithm are the RSSI values provided by the rightmost antenna. This method is intended to solve the localization problem in the far sector, where the periodic fadings are observable. Therefore, the values corresponding to the near sector have been removed from the data set in order to avoid the fast-fadings effect (Fig. 6).

4.2 Localization Algorithm Implementation

The RLS algorithm needs to be implemented in order to obtain the minimums of the RF signal (fadings) along the tunnel. For this particular case, the variables involved in the RLS method are defined as follows:

Fig. 6. Received signal strength from antenna 12. The vertical dashed line denotes the limit between the fast fadings and the selected work zone.

– Measured process output: y = RSSI values
– Inputs: u = (1 τ)ᵀ, where τ is the time between two consecutive received RSSI values
– Coefficients: θ = (θ0 θ1)ᵀ
– Model output: ŷ = uᵀθ = θ0 + θ1τ, a first-order polynomial
– Initialization: θ(0) = 0 and P(0) = δI. Since the knowledge of θ at t = 0 is very vague, a very high covariance matrix of the coefficients is to be expected, and thus a high value must be assigned to δ.

A signal minimum is identified when a change in the slope is detected, i.e. when the slope of the previous iteration is negative (θ1(k − 1) < 0) and the slope of the current iteration is positive (θ1(k) > 0).

Minimums Map Generation: In this case, the RSSI values of the fadings model are used as the process output. The forgetting factor λ is set to 0.5 because of the lack of noise in the theoretical model. For the initialization of P, a value of 1000 is assigned to δ. The positions corresponding to each minimum are stored (xm1, xm2, ..., xmn).

Minimums Detection of the Actual RF Signal Values: The RLS algorithm is applied using the RSSI values provided by the RF receivers as the process output. Due to the noisy nature of the signal, a larger contribution of previous samples to the parameter estimation is needed. For this reason, a value of 0.95 is assigned to λ. The value selected for δ is 1000, as in the previous case.

Mahalanobis Distance and IC Test: The covariance of the minimum sensor obtained during the calibration process is (20/2)². For the IC test (10) it is necessary to select a value for χ. In this case, for a 95% confidence level with d = 1 and α = 0.05, the χ value provided by the Student's distribution table is 6.314.
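For completeness, the two parameterizations above map directly onto the detect_minimums sketch of Sect. 3.1; the wrapper below is purely illustrative (the input arrays are placeholders for the model and measured data described in the text, and δ keeps its default value of 1000 in both cases).

def run_both_stages(model_distance, model_rssi, sample_times, measured_rssi):
    """Offline map generation (λ = 0.5) and online detection on measured RSSI (λ = 0.95)."""
    # The fadings model is essentially noise-free, so a small forgetting factor is sufficient
    map_idx = detect_minimums(model_distance, model_rssi, lam=0.5)
    # The measured RSSI is noisy, so previous samples must contribute more to the estimate
    online_idx = detect_minimums(sample_times, measured_rssi, lam=0.95)
    return map_idx, online_idx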

Fig. 7. Results of the proposed algorithm based on RLS and Mahalanobis distance: (a) minimums map from the fadings model used as known information, (b) filtered signal obtained by a first order filter, (c) results of the application of the RLS method to actual data, (d) final minimums detection result after the IC test, (e) position correction during the displacement of the vehicle, (f) position error comparison between the odometry method and the presented approach

4.3 Preliminary Results

The result of the minimums detection on the fadings model is shown in Fig. 7(a). The vector of reference positions (green circles) is used as prior information

in the online process. Due to the noisy nature of the RF signal, a first order Butterworth filter has been applied to the real signal (Fig. 7(b)). The result of applying the RLS algorithm to the actual RF signal is shown in Fig. 7(c). The potential minimums provided by the algorithm are represented as blue asterisks. The green circles are the minimums provided by the map. As can be observed, using the RLS method alone is not enough, because several false positives are returned as signal valleys. Figure 7(d) shows the result of the combination of the RLS algorithm with the individual compatibility test based on the Mahalanobis distance. It clearly shows how most of the false positives are discarded using the proposed method. When a minimum of the RF signal is identified, the estimated position of the robot is set to the reference position of the minimums map. A detail of the position correction corresponding to the detected minimum denoted by A is shown in Fig. 7(e). As expected, the position error of the odometry estimation increases, reaching a value of 30 m, whereas the error of the proposed method (around 8 m) is reduced each time a valley is identified (Fig. 7(f)).

5 Conclusions

In this paper, a discrete localization system based on RF fadings for tunnel-like environments has been presented. The proposed method is based on the periodic nature of the RF fadings in this environment. It takes advantage of the RF propagation model to generate a minimums map used as position references. The algorithm has been validated with real data collected during an experimental test in a real scenario. The online identification of singular characteristics of the RF signal along the tunnel allows the robot position to be corrected. The preliminary empirical results show that it is possible to reset the accumulated odometry error periodically, this period being the known period of the fadings signal. The implemented discrete localization algorithm overcomes the lack of distinctive features in tunnels needed by other algorithms based on lasers or cameras. Moreover, the only equipment required is the odometers and the RF transmitter and receiver. Although this approach is a discrete method, it provides enough localization corrections for applications where continuous precise localization is not critical. A future challenge is to extend the discrete solution to a continuous localization system based on the periodic fadings, incorporating the information provided by the proposed algorithm.

References

1. Rizzo, C., Tardioli, D., Sicignano, D., Riazuelo, L., Villarroel, J.L., Montano, L.: Signal-based deployment planning for robot teams in tunnel-like fading environments. Int. J. Robot. Res. 32, 1381–1397 (2013)


2. Rizzo, C., Lera, F., Villarroel, J.L.: UHF and SHF fading analysis using wavelets in tunnel environments. In: 2013 IEEE 78th Vehicular Technology Conference (VTC Fall), pp. 1–6, September 2013
3. Rizzo, C., Lera, F., Villarroel, J.: Transversal fading analysis in straight tunnels at 2.4 GHz. In: 2013 13th International Conference on ITS Telecommunications (ITST), pp. 313–318, November 2013
4. Rizzo, C., Lera, F., Villarroel, J.: A methodology for localization in tunnels based on periodic RF signal fadings. In: 2014 IEEE Military Communications Conference (MILCOM), pp. 317–324, October 2014
5. Nelles, O.: Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models. Engineering Online Library. Springer (2001). https://books.google.es/books?id=7qHDgwMRqM4C
6. Orfanidis, S.J.: Electromagnetic waves and antennas. Rutgers University (2014). http://www.ece.rutgers.edu/~orfanidi/ewa/
7. Rizzo, C.: Propagation, Localization and Navigation in Tunnel-like Environments. Ph.D. thesis, University of Zaragoza, July 2015. https://zaguan.unizar.es/record/31897?ln=en
8. Castellanos, J.A., Neira, J., Tardós, J.: Map building and SLAM algorithms. In: Autonomous Mobile Robots, pp. 335–371, May 2006
9. Lazaro, M., Castellanos, J.: Localization of probabilistic robot formations in SLAM. In: 2010 IEEE International Conference on Robotics and Automation (ICRA), pp. 3179–3184, May 2010
10. Quigley, M., Conley, K., Gerkey, B.P., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A.Y.: ROS: an open-source robot operating system. In: ICRA Workshop on Open Source Software (2009)

Low-Bandwidth Telerobotics in Fading Scenarios

Samuel Barrios¹(B), Natalia Ayuso¹, Danilo Tardioli²(B), Luis Riazuelo¹, Alejandro R. Mosteo², Francisco Lera¹, and José Luis Villarroel¹

¹ Aragón Institute for Engineering Research (I3A), University of Zaragoza, Zaragoza, Spain
{666180,nayuso,riazuelo,lera,jlvilla}@unizar.es
² Aragón Institute for Engineering Research (I3A), Centro Universitario de la Defensa, Zaragoza, Spain
{dantard,amosteo}@unizar.es

Abstract. Sensor networks can monitor wide areas to gather information and relay alerts about concerning events. Response robotic missions in confined scenarios where such a network already exists, like tunnels or mines, could take advantage of it as a backbone. This paper addresses the challenges arising from the combined characteristics of the nodes, typically of low power and bandwidth, and of signal propagation in such scenarios, which exhibits extended range but also blind spots due to waveguide self-interference. Firstly, a measurement campaign is reported that characterized the RSSI and PDR performance of XBee nodes in the Somport tunnel, enabling improved placement of nodes. Then, a teleoperation mission is demonstrated in which a mobile robot that relays rangefinder readings is commanded through a backbone multi-hop network, within the restrictions of the extremely limited bandwidth of the IEEE 802.15.4 protocol.

1 Introduction

Sensor-based networks have registered constant growth in the last 20 years. The use of battery-powered nodes with communication capabilities has simplified the surveying of not easily accessible or dangerous environments. This includes mines, nuclear plants or tunnels, where the access of human beings can sometimes be harmful, for example in case of fire or gas leaks. The deployment of low-power wireless nodes allows this type of scenario to be monitored without the need to install long and expensive cables, with a battery life of months or even years. In this way, the information collected by the nodes is sent to a so-called sink node that acts as an aggregator and can be connected to the wired network and, possibly, to the Internet. The data can be checked remotely, but sometimes it can be useful to have the possibility of checking the situation on site, especially if the readings provided by the sensors show some unexpected or worrying value. This can be done using a mobile robot that can be teleoperated taking advantage of the pre-existing sensor infrastructure, which would be in charge of routing teleoperation-related information from the robot to the operator (possibly through the Internet).


However, on the one hand, the bandwidth offered by such small devices is extremely limited, especially if the communication is required to be multi-hop, and, on the other hand, propagation in this kind of confined environment has substantially different characteristics than in open space. Specifically, as has been studied in [1], in the ultra high frequency band (300–3000 MHz) the wireless transmission along tunnel-like environments takes the form of waveguide propagation. This allows communication over a range of several kilometers thanks to the reduced attenuation per unit length, but favours the appearance of fadings, periodic zones where the signal strength is weaker. These characteristics pose two different challenges: (1) the positioning of the nodes must be based on a criterion other than the simple distance among peers, and (2) it is necessary to reduce the amount of data needed for teleoperation. The contribution of this paper is twofold. On the one hand, we provide a method to position a set of sensor nodes along a long tunnel taking into account the characteristics of the propagation inside the tunnel itself, verifying the results with a measuring campaign. On the other hand, we propose a scheme to teleoperate a robot along the tunnel using the extremely reduced bandwidth available, and demonstrate this achievement with a real-world experiment. The remainder of this paper is organized as follows. In Sect. 2 the related works are presented. Section 3 describes the propagation of the communication signal in tunnels as well as a node deployment based on experimental analysis. Section 4 addresses the teleoperation issue, dealing with bandwidth limitations. In Sect. 5 we develop a proof of concept experiment in which we set up a robot teleoperation. Finally, conclusions are drawn in Sect. 6.

2 Related Work

Prior research on robots and wireless sensor-network interactions has not considered the use of preexisting communication infrastructure for mobile robot teleoperation in fading scenarios, to the best of our knowledge. Dunbabin et al. presented vision-based navigation for an Autonomous Underwater Vehicle (AUV), able to collect data from many static Underwater Sensor Nodes (USN) networked together optically and acoustically [2]. Palmer et al. considered USNs with depth adjustment capabilities that allow IEEE 802.15.4 communication between surface nodes as well as between a node and a UAV remotely controlled with an additional 802.15.4 radio [3]. Teleoperation of Unmanned Ground Vehicles (UGVs) in harsh environments through wireless LAN has been studied, facing the inherent problems due to the propagation characteristics in such environments. The performance and stability of even the simplest teleoperation systems, involving vehicle-mounted cameras' video feeds and hand-controlled motion commands, can be compromised. Shim and Ro faced the problem of random and unbounded communication time delay [4]. This work proposed measuring the round trip time (RTT) in real time to switch to the best control mode. In [5], the radio signal strength (RSS) gradient at the UGV location has been graphically represented for naturally guiding the


Fig. 1. The Somport tunnel.

human operators to drive the mobile robot (UGV) for reliable communications. In [6] the authors propose a complete system for tunnel intervention where a team of robots is teleoperated over a multi-hop network. Finally, because of the intrinsic bandwidth fluctuations, Gomes et al. proposed an approach based on ROS (Robot Operating System) that increases the operators' overall performance in low bandwidth situations through strategies that optimize the transmission quality of all displayed information [7].

3 Signal Propagation and Nodes Deployment

Wireless communication systems have become increasingly important as they represent, most of the time, a faster, more economical, and sometimes the only option to deploy a network, as in the case of mobile agents. A huge effort has been made in terms of mathematical modeling and measuring campaigns in order to predict radio-wave propagation in different types of scenarios, from indoor, outdoor and urban to even underground, with the final goal of ensuring a good quality communication link and increasing the channel capacity. Among these scenarios, tunnels have attracted attention for train applications, vehicular networks, and even service and surveillance missions in both military and civilian contexts [8–11]. Wireless propagation in these environments is described as strongly multipath, and if the wavelength of the signal is much smaller than the tunnel cross section they act as an oversized dielectric waveguide, extending the communication range but affecting the signal with strong fadings [1,12]. The analysis of electromagnetic wave propagation inside a tunnel with an arbitrary cross section is not analytically feasible. Even for simple geometries, such as rectangular or circular cross sections, no exact closed form solutions are available. To obtain approximate solutions, the most common approaches are the Modal Theory, which takes into consideration the interaction between the propagating modes [13,14], and the Geometrical Optics Theory (such as the Ray Tracing approach), which models radio signals as rays [15,16]. For these reasons, we proceed to analyze experimentally the signal propagation in a specific environment: the Somport Railway Tunnel. Based on the propagation already analyzed in [17,18], the objective is to characterize the communication link, finding a distance between nodes that ensures a correct operation of the network. In this way, the experiment focuses on two measurements. First, the RSSI


(a) XBee 802.15.4

(b) XBee-Pro 802.15.4

(c) XBIB-U-DEV

Fig. 2. RF modules (a, b) and USB interface board (c).

(Received Signal Strength Indicator) along the tunnel, obtaining the maximum communication range for different emission powers as well as the location of the fadings. Second, the RSSI-PDR (Packet Delivery Ratio) relationship, obtaining the minimum RSSI that guarantees the correct reception of the information exchanged between nodes. Finally, by joining both measurements, a deployment strategy is proposed.

3.1 The Somport Railway Tunnel

The Somport Railway Tunnel is the location chosen to perform the experiment. It is an out-of-service tunnel which connects Spain and France through the Pyrenees. It is 7.7 km long and has a horseshoe-shaped section, approximately 5 m high and 4.65 m wide (Fig. 1). The tunnel is straight along its entire length, suffering a change in slope at approximately 4 km from the Spanish entrance. The walls are limestone with short sections covered with a thin concrete layer. Thanks to its characteristics, it is an ideal environment to perform experiments and emulate long tunnels, common in transport or mining applications.

3.2 Measurement Setup

As previously mentioned, the behavior of the tunnel as a waveguide depends on the wavelength being much smaller than the transverse dimension of the tunnel. In other words, the higher the signal frequency, the lower the attenuation factor per unit length, disregarding wall roughness effects. An experimental analysis of signal propagation at different frequencies of the ISM (Industrial, Scientific and Medical) band is performed in [17]. As a result, it is determined that the 2.4 GHz and 5.2 GHz signals have a much lower attenuation factor than those at 433 MHz and 868 MHz. For this reason, 2.4 GHz is the selected frequency band, since it has a high propagation range, a reasonable wall roughness tolerance and a wide variety of hardware available on the market. Regarding the communications protocol, since one of the objectives of this work is the development of a low energy consumption wireless sensor network


(a) Instrumented vehicle


(b) Emitter module

Fig. 3. Receiver modules on mobile platform (a) and emitter module setup (b).

(WSN), IEEE 802.15.4 is the chosen one [19]. This technical standard specifies the physical and media access control layers for LR-WPANs (Low-Rate Wireless Personal Area Networks). It is suitable for communication in networks with low cost devices and low transfer rates, up to 250 kbit/s. In this way, and in exchange for a lower bandwidth, it allows a much lower power consumption than other standards such as IEEE 802.11 [20], maximizing the battery life of each node. According to the specifications described, the chosen RF (radio frequency) modules are Digi's XBee 802.15.4 (Fig. 2a) and XBee-Pro 802.15.4 (Fig. 2b), due to their flexibility and low cost. These devices communicate over IEEE 802.15.4, emitting in a power range of −10 to 0 dBm and 10 to 18 dBm, respectively. They allow a maximum payload of 100 bytes, even though the maximum size of the 802.15.4 MAC frame is limited to 127 bytes. In addition, they are mounted on XBIB-U-DEV interface boards (Fig. 2c), allowing the connection to a computer via USB. Concerning the antennas, Fig. 2 shows two different types depending on the module: the XBee 802.15.4 uses a whip antenna with a gain of 1.5 dBi, while the XBee-Pro 802.15.4 has a 2.1 dBi dipole antenna. Regarding the location of the devices, the emitter (Tx) module is placed on a 180 cm tripod separated 160 cm from the wall (Fig. 3b). This position is chosen in order to produce better defined and predictable fadings, as [21] reports. In order to avoid the effect of the change in slope, the emitter is placed close to the highest point of the tunnel, at approximately 4 km from the Spanish entrance (Fig. 1). The selected emission powers are −10, 0 and 18 dBm. The Tx module broadcasts packets of 60 bytes with a numeric identifier, which are captured by an array of two XBee-Pro 802.15.4 modules, allowing the influence of transversal fadings to be characterized. The array modules have a sensitivity of −100 dBm and are placed at a height of 200 cm, spaced 120 cm apart. As in [22] and [21], the spacing

(a) 0 dBm, Galleries 9-4    (b) 18 dBm, Galleries 9-1

Fig. 4. RSSI measured during the experiment. The closest module to the sending side is indicated by (Tx).

between two successive modules is fixed to be greater than one half of the wavelength, so that the coupling between the antennas is minimized. The receiver array is moved along the tunnel using an off-road vehicle as a mobile platform (Fig. 3a). In order to synchronize the received packets with the position in the tunnel, it is equipped with two 0.5 degree resolution encoders and a scanning laser rangefinder. The vehicle localizes itself along the tunnel using a localization algorithm on a previously built map [23], allowing the same position reference to be maintained for all the experiments. The experiment consists of six sweeps along the tunnel. For each emission power of the selected set (−10, 0 and 18 dBm), a route is made from the emitter, moving away until the signal is lost. At this point, the vehicle returns to the starting point, resulting in two sweeps per power.

3.3 Results of the Experiment

First, the signal propagation is analyzed in the regions near the emitter. The packets were received by the modules located at the ends of the array (XBee1 and XBee2), distinguishing between the one closest to the emitter side (Tx) and the furthest one, since the array is not placed at the center of the tunnel. Figure 4 shows the RSSI of the packets collected during sweeps 3 and 5. The fading phenomenon can be appreciated mainly in the signal received by the module closest to the emitter (Fig. 4a). In agreement with [21], the fadings have a spatial period of about 500 m. The signal received by the other module does not present fadings as sharp as the first one; however, its RSSI level is enough to ensure a suitable communication link (Fig. 4b). The next step is the measurement of the link quality. The objective is to find a minimum RSSI that ensures correct communication between two adjacent nodes. The chosen quality estimator is the Packet Delivery Ratio (PDR), which is defined as the proportion of the number of packets delivered to the number of packets sent.

Table 1. Results of the link quality study.

Range (dBm)    No. of packets   % PDR = 1
[−105, −100)   227              11.31
[−100, −95)    1805             30.93
[−95, −90)     5454             79.17
[−90, −85)     5717             91.81
[−85, −80)     5215             96.56
[−80, −75)     4839             98.68
[−75, −70)     4149             98.84
[−70, −65)     3838             98.98
[−65, −60)     1880             99.73
[−60, −55]     975              100

Since the PDR is a global measure and it is necessary to analyze it during the sweep, a moving average of 20 packets is applied in order to obtain its evolution along the path. The packets received during the whole experiment are then grouped into different ranges according to their RSSI. For each range, the percentage of windows with a PDR equal to 1 is found. This gives a measure of how likely it is that all packets reach their destination for a given RSSI range. The results are shown in Table 1. It can be observed that for RSSI values higher than −80 dBm the percentage of windows with a maximum PDR is greater than 98%. Depending on the application, it will be necessary to prioritize a greater reliability of packet delivery or to sacrifice it for a greater coverage range per node. In this work a 98% probability is considered sufficient, which means that each node must receive the packets of the adjacent one with an RSSI higher than −80 dBm. Table 2 shows the maximum range achieved during the experiments meeting the cited minimum RSSI requirement. It can be verified that even for the same power there are different ranges depending on the module and the path. This is due to the fact that the transverse separation of the receivers from the wall was not constant during the experiment. Hence, the transversal fadings have a strong influence on the RSSI of the received packets. Therefore, in order to achieve the indicated maximum ranges, a careful positioning of the transverse separation with respect to the emitter must be performed. Assuming the nodes are placed in the optimal transverse position, the number of devices needed to cover this tunnel (7.7 km) would be 61, 5 and 3, for emission powers of −10, 0 and 18 dBm, respectively. As detailed in the following sections, the addition of repeaters reduces the network bandwidth by a factor of (n − 1), n being the total number of nodes, since there is no spatial reuse. This is not problematic when monitoring the scenario, because the size of the measured data is usually small, as is the frequency of sensing. However, for a high number of nodes, it makes it difficult to establish the continuous communication required, for example, for the teleoperation of a robot.
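One plausible way to reproduce this post-processing (20-packet moving windows, then grouping by 5 dBm RSSI bins and counting the fraction of windows with PDR = 1) is sketched below in Python. The function name, the use of the packets' numeric identifiers to infer how many were sent, and the bin edges are our assumptions, not the exact scripts used in the campaign.

import numpy as np

def pdr_by_rssi_bin(seq_ids, rssi, window=20, bin_starts=range(-105, -50, 5)):
    """Percentage of `window`-packet windows with PDR = 1, grouped by mean RSSI bin."""
    stats = {lo: [0, 0] for lo in bin_starts}              # bin -> [windows, windows with PDR = 1]
    for i in range(len(seq_ids) - window + 1):
        ids = seq_ids[i:i + window]
        sent = ids[-1] - ids[0] + 1                        # packets emitted over this span
        pdr = window / sent if sent > 0 else 0.0
        lo = 5 * int(np.floor(np.mean(rssi[i:i + window]) / 5))
        if lo in stats:
            stats[lo][0] += 1
            stats[lo][1] += int(pdr >= 1.0)
    return {f"[{lo}, {lo + 5})": 100.0 * ok / n for lo, (n, ok) in stats.items() if n}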


Table 2. Maximum distance where an RSSI of −80 dBm is obtained. Underlined measurements are obtained in the modules next to the sending side.

Sweep   Power (dBm)   XBee1 range (m)   XBee2 range (m)
1       −10           46                127
2       −10           123               102
3       0             1557              1013
4       0             379               385
5       18            3380              3378
6       18            1613              2780

Hence, there are two different problems with different requirements that must be addressed separately. On the one hand, the teleoperation of a robot needs the highest coverage range using the least number of nodes. On the other hand, the monitoring of the scenario requires a higher number of nodes emitting at low power. For this reason, the use of devices with several different emission levels is proposed. In this way, they can work at low power during a monitoring situation, where there is no high bandwidth requirement. In an emergency context where the teleoperation of a robot is needed, a minimum number of nodes can raise their power, diminishing the number of hops and providing a higher bandwidth to the network. Besides, it is necessary to consider that the RSSI is not kept above the minimum desired value along the whole path. Due to the fading phenomenon, there are areas where the link quality is lower than desired. In this way, forbidden regions are established, where the deployment of a node is not advisable. Figure 5 shows these zones for emission powers of 0 and 18 dBm. In the case of the greater power, it is observed that the zones coincide with the valleys of the fadings, being periodic and predictable. On the other hand, at the smaller powers these regions form in higher areas of the fadings, making them comparable in length with the regions suitable for deployment. Therefore, a simple deployment strategy consists of placing node i + 1 within the coverage range of node i, avoiding the forbidden regions. Since monitoring applications usually require a periodic separation distance d between sensors, a node position may coincide with a forbidden region; in this case, it would be deployed in the closest previous suitable zone.
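The deployment rule described above can be sketched as a simple greedy procedure; the interval representation of forbidden regions, the function name and the illustrative numbers are ours, not values from the measurement campaign.

def deploy_nodes(tunnel_length, spacing, coverage, forbidden, step=1.0):
    """Place node i+1 at the nominal spacing within the coverage of node i, backing off
    to the closest previous position that is not inside a forbidden region."""
    def allowed(x):
        return not any(a <= x <= b for a, b in forbidden)

    nodes = [0.0]                                    # node 0 at the tunnel entrance / sink
    while nodes[-1] < tunnel_length:
        x = min(nodes[-1] + spacing, nodes[-1] + coverage, tunnel_length)
        while x > nodes[-1] and not allowed(x):      # back off to the closest previous suitable zone
            x -= step
        if x <= nodes[-1]:
            raise RuntimeError("no suitable position within the previous node's coverage")
        nodes.append(x)
    return nodes

# Illustrative call with round numbers: 7.7 km tunnel, 500 m nominal spacing,
# ~3 km coverage (18 dBm) and three fading valleys marked as forbidden intervals.
positions = deploy_nodes(7700.0, 500.0, 3000.0, [(450, 550), (950, 1050), (1450, 1550)])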

4 A Proof of Concept

In this section, we show the results of a proof of concept experiment in which we set up a robot teleoperation over a backbone network made of XBee nodes. The goal is to control the movement of a Pioneer P3DX robot using a joystick while receiving LIDAR reading feedback (Fig. 6). The system is built on top of the ROS (Robot Operating System) [24] middleware and consists of several nodes connected over the cited network using a multi-core scheme.


(a) 0 dBm, XBee1 (Tx)

(b) 18 dBm, XBee1 (Tx)

Fig. 5. Forbidden regions for the node deployment in the sweeps 3 and 5, respectively. Zones where the RSSI is below the minimum required (−80 dBm) are plotted in red, while Tx indicates that the module was the closest to the sending side.

Fig. 6. Teleoperation scheme.

In Fig. 7 it is possible to observe the conceptual design of the system: the JOY node, connected to the joystick, publishes the stick movement messages on the /joy topic; these are read by the TEO_BASE node, which serializes the information (see below) and sends it over the network. On the other side, the messages are received and deserialized by the TEO_ROBOT node and published on the /cmd_vel topic which, in turn, is read by the ROSARIA node, in charge of sending the velocity commands to the actual robot. In the other direction, the LIDAR readings are published by the SICK node on the /scan topic and read by the TEO_ROBOT node which, again, serializes the data before sending it over the network. On the base station side, the data is deserialized and published on the /comp_scan topic to be shown by the RVIZ node. Notice that the nodes on the two ends do not share any roscore, since each side is an independent ROS system connected through a non-ROS network. Thus, they share the LIDAR and joystick readings in a periodic and asynchronous way.

Fig. 7. ROS nodes and topics involved in the system.

4.1 Adapting to the Available Bandwidth

Although the theoretical bandwidth of the XBee modules is 250 kbps, the maximum measured experimentally is only 32.5 kbps, due to the bottleneck that their serial port communication introduces. Additionally, the use of repeaters in a network reduces the raw bandwidth by a factor of 1/(n − 1), n being the total number of nodes. Considering the case of the Somport tunnel, emitting at the maximum power would require the use of 3 nodes, which corresponds to a bandwidth of 16 kbps. In the case of 0 dBm, the resulting bandwidth would be 8 kbps, since 5 nodes would be required. As described, the system is conceptually very simple, but since the bandwidth is so limited it has been necessary to reduce the information sent in order to obtain an acceptable loop rate. For example, the SICK node configured as in our experiment (and connected via serial port to the computer) produces 180 range readings (one per degree) at a frequency of 4 Hz. Taking also into account the additional data published (timestamp, max and min range, etc.), this corresponds to approximately 0.8 KB of data, 4 times per second, that is, 3.2 KB/s. Since the maximum payload of the XBee modules is 100 bytes, sending the raw data would imply splitting the LIDAR message into at least 8 parts and then merging them at the receiver side to rebuild it. This has two implications: on the one hand, sending 8 different messages over a network would take several milliseconds (especially over a multi-hop one), reducing the loop rate; on the other hand, the loss of a single message would mean being unable to reconstruct the information. A similar consideration can be made for the joystick commands, even if in this case the information fits in a single packet. Additionally, the more bandwidth used for these tasks, the less bandwidth available for others. To mitigate this problem, we decided to compress the data as described below.
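The arithmetic above can be summarized in a few lines; the helper is illustrative, and the figures are simply the ones quoted in the text.

import math

def effective_bandwidth_kbps(raw_kbps, n_nodes):
    """Raw link rate divided by (n - 1) when repeaters share the same channel."""
    return raw_kbps / (n_nodes - 1)

RAW_KBPS = 32.5                                         # measured XBee throughput
print(effective_bandwidth_kbps(RAW_KBPS, 3))            # ~16 kbps for 3 nodes (18 dBm)
print(effective_bandwidth_kbps(RAW_KBPS, 5))            # ~8 kbps for 5 nodes (0 dBm)

SCAN_BYTES, PAYLOAD = 800, 100                          # ~0.8 KB per raw scan, 100-byte payload
print(math.ceil(SCAN_BYTES / PAYLOAD))                  # at least 8 fragments per raw LIDAR scan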

4.2 Compressing the Data

Having high precision and resolution data is usually indicative of correct system performance. However, if the bandwidth is limited, it might not be possible or convenient. Since this is the case, we decided to reduce the information sent to the minimum necessary for the data to remain useful. To do so, the precision and the resolution of the data are decreased while maintaining a more than acceptable level.

Table 3. Packet Delivery Ratio (PDR) and Inter-Arrival Time (IAT) measured during the experiment. Full indicates the percentage of laser packets recomposed using their two halves.

           LIDAR                                        Joystick
PDR (%)   Full (%)   μ IAT (ms)   σ IAT (ms)     PDR (%)   μ IAT (ms)   σ IAT (ms)
100       76.6       207          80             91        282          105

Specifically, the range representation is reduced from float64 (8 bytes) to a single byte of information. The length of each reading is thus scaled to 256 possible values and the maximum range is reduced to 4 m, independently of the actual range of the LIDAR sensor. All values above that are assigned the out-of-range value 0xFF. This provides an accuracy of 4 m/254 ≈ 1.5 cm, only 0.5 cm worse than that of the Hokuyo URG-04LX, just to give an example, and more than enough for a teleoperation task where the maximum speed of the robot is limited (0.5 m/s in our experiments). However, even with this reduction, the whole laser scan does not fit in a single packet (we need at least 180 bytes, and the maximum payload is 100 bytes), so it is split into 2 halves, using 1 byte as identifier. In order to avoid losing all the information in case one of the two packets does not reach the destination, we send the even laser readings (0°, 2°, etc.) in the first packet and the odd ones in the second. If a message cannot be completed with the second part, or only the second part is received, the message is published anyway, filling the missing information with the out-of-range value. In this way it is possible to have valuable feedback information also in lossy networks.
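A minimal sketch of this encoding in Python is given below: one byte per beam with a 4 m ceiling, 0xFF as the out-of-range value, and the scan split into even/odd halves identified by a leading byte. The exact on-air framing of the real system is not reproduced here; the function names and the padding behaviour on the decoder side are our assumptions.

import math

MAX_RANGE_M = 4.0
OUT_OF_RANGE = 0xFF

def encode_scan(ranges_m):
    """Quantize 180 beams to 1 byte each and split them into even/odd half-packets."""
    q = bytearray()
    for r in ranges_m:                                  # one reading per degree
        if r > MAX_RANGE_M or math.isinf(r) or math.isnan(r):
            q.append(OUT_OF_RANGE)
        else:
            q.append(min(254, int(round(r / MAX_RANGE_M * 254))))   # ~1.5 cm resolution
    even = bytes([0]) + bytes(q[0::2])                  # identifier 0: beams 0°, 2°, ...
    odd = bytes([1]) + bytes(q[1::2])                   # identifier 1: beams 1°, 3°, ...
    return even, odd                                    # 91 bytes each, below the 100-byte payload

def decode_scan(packets, n_beams=180):
    """Rebuild a scan from zero, one or two half-packets, padding gaps with out-of-range."""
    q = [OUT_OF_RANGE] * n_beams
    for p in packets:
        offset = p[0]                                   # 0 = even beams, 1 = odd beams
        for i, b in enumerate(p[1:]):
            q[offset + 2 * i] = b
    return [None if b == OUT_OF_RANGE else b * MAX_RANGE_M / 254 for b in q]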

5 Real-World Testing

In order to demonstrate the correct functioning of the system, the following experiment is proposed. Given a room with two doors connected externally by a corridor, the goal is to teleoperate the robot, making it exit through one door and return by the other, using only the visual LIDAR feedback to perform the task. Both the LIDAR and joystick data were sent every 200 ms, which is fast enough for effective teleoperation while leaving free bandwidth for other possible flows. The information is sent through a network formed by 4 XBee modules. Figure 8 shows the path achieved, where the map has been generated using a SLAM node running on the robot itself, while Table 3 shows the results in terms of inter-arrival time and PDR for each flow. As can be checked, it was possible to teleoperate the Pioneer P3DX, which is approximately 40 cm wide, through a door of comparable size.

6 Conclusions

Wireless sensor networks have, in the last 20 years, made it possible to survey hostile or dangerous environments like mines, nuclear plants or tunnels, where the access of human beings can be dangerous, as in the case of fire or gas leaks.


Fig. 8. Path followed by the robot during the teleoperation experiment, on the map generated by the on-board SLAM node.

However, the distribution of the nodes in the environment is subject to many factors such as communication range, sensing requirements and so on. Also, the nodes are usually static, which means that in case of unexpected readings there is no way to know what is actually happening. In this paper we proposed two different contributions. On the one hand, we presented a way of positioning a set of sensor nodes in fading environments, like tunnels or mines, taking into account their peculiar propagation pattern and as a function of the transmission power, which directly affects the battery life. On the other hand, we proposed the use of the sensor nodes as an emergency backbone network to provide network communication to a mobile robot, with the aim of teleoperating it for on-site intervention in case of unexpected readings or other necessity, for example with the extremely limited bandwidth provided by the 802.15.4 protocol commonly used by sensor networks. The measuring campaign performed allows us to provide parameters that can be useful in similar settings. Additionally, the real-world teleoperation experiment demonstrates that such teleoperation is possible, allowing a remote operator to explore the environment with LIDAR feedback alone.

References

1. Emslie, A., Lagace, R., Strong, P.: Theory of the propagation of UHF radio waves in coal mine tunnels. IEEE Trans. Antennas Propag. 23, 192–205 (1975)
2. Dunbabin, M., Corke, P., Vasilescu, I., Rus, D.: Data muling over underwater wireless sensor networks using autonomous underwater vehicle. In: 2006 IEEE International Conference on Robotics and Automation, pp. 2091–2098, May 2006
3. Palmer, J., Yuen, N., Ore, J.-P., Detweiler, C., Basha, E.: On air-to-water radio communication between UAVs and water sensor networks. In: 2015 IEEE International Conference on Robotics and Automation, pp. 5311–5317, July 2015
4. Shim, K., Ro, Y.: The mobile robot teleoperation to consider the stability over the time-delay of wireless network. In: 2003 7th Korea-Russia International Symposium, vol. 2, pp. 457–461, July 2003


5. Caccamo, S., Parasuraman, R., Båberg, F., Ögren, P.: Extending a UGV Teleoperation FLC interface with wireless network connectivity information. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4305–4312, September 2015
6. Tardioli, D., Sicignano, D., Riazuelo, L., Romeo, A., Villarroel, J.L., Montano, L.: Robot teams for intervention in confined and structured environments. J. Field Robot. 33(6), 765–801 (2016)
7. Gomes, J., Marques, F., Lourenço, A., Mendonça, R., Barata, J.: Gaze-directed telemetry in high latency wireless communications: the case of robot teleoperation. In: 42nd Annual Conference of the IEEE Industrial Electronics Society, pp. 704–709, December 2016
8. Masson, E., Cocheril, Y., Combeau, P., Aveneau, L., Berbineau, M., Vauzelle, R., Fayt, E.: Radio wave propagation in curved rectangular tunnels at 5.8 GHz for metro applications. In: 2011 11th International Conference on ITS Telecommunications (ITST), pp. 81–85. IEEE (2011)
9. Bernadó, L., Roma, A., Paier, A., Zemen, T., Czink, N., Karedal, J., Thiel, A., Tufvesson, F., Molisch, A.F., Mecklenbrauker, C.F.: In-tunnel vehicular radio channel characterization. In: 2011 IEEE 73rd Vehicular Technology Conference (VTC Spring), pp. 1–5. IEEE (2011)
10. Cerasoli, C.: RF propagation in tunnel environments. In: IEEE Military Communications Conference, MILCOM 2004, vol. 1, pp. 363–369, October 2004
11. Kjeldsen, E., Hopkins, M.: An experimental look at RF propagation in narrow tunnels. In: IEEE Military Communications Conference, MILCOM 2006, pp. 1–7. IEEE (2006)
12. Delogne, P.: EM propagation in tunnels. IEEE Trans. Antennas Propag. 39(3), 401–406 (1991)
13. Mahmoud, S.F., Wait, J.R.: Guided electromagnetic waves in a curved rectangular mine tunnel. Radio Sci. 9(5), 567–572 (1974)
14. Dudley, D.G., Lienard, M., Mahmoud, S.F., Degauque, P.: Wireless propagation in tunnels. IEEE Antennas Propag. Mag. 49(2), 11–26 (2007)
15. Mahmoud, S.F., Wait, J.R.: Geometrical optical approach for electromagnetic wave propagation in rectangular mine tunnels. Radio Sci. 9(12), 1147–1158 (1974)
16. Chen, S.-H., Jeng, S.-K.: SBR image approach for radio wave propagation in tunnels with and without traffic. IEEE Trans. Veh. Technol. 45(3), 570–578 (1996)
17. Rizzo, C., Lera, F., Villarroel, J.L.: UHF and SHF fading analysis using wavelets in tunnel environments. In: 2013 IEEE 78th Vehicular Technology Conference (VTC Fall), pp. 1–6, September 2013
18. Rizzo, C., Lera, F., Villarroel, J.L.: Transversal fading analysis in straight tunnels at 2.4 GHz. In: 2013 13th International Conference on ITS Telecommunications (ITST), pp. 313–318, November 2013
19. IEEE Computer Society: IEEE Standard for Information Technology - Telecommunications and Information Exchange Between Systems - Local and Metropolitan Area Networks - Specific Requirements - Part 15.4: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low-Rate Wireless Personal Area Networks (LR-WPANs), IEEE Std 802.15.4-2003, pp. 1–670, October 2003
20. IEEE Computer Society: IEEE Standard for Information Technology - Telecommunications and Information Exchange Between Systems - Local and Metropolitan Area Networks - Specific Requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE Std 802.11-2007 (Revision of IEEE Std 802.11-1999), pp. 1–1076, June 2007

848

S. Barrios et al.

21. Rizzo, C., Villarroel, J.L., Lera, F.: Propagation, Localization and Navigation in Tunnel-like Environments. PhD thesis, University of Zaragoza (2015) 22. Molina-Garcia-Pardo, J.-M., Lienard, M., Degauque, P.: Propagation in tunnels: experimental investigations and channel modeling in a wide frequency band for MIMO applications. EURASIP J. Wireless Commun. Netw. 2009(1), 560571 (2009) 23. L´ azaro, M.T., Castellanos, J.A.: Localization of probabilistic robot formations in SLAM. In: 2010 IEEE International Conference on Robotics and Automation, pp. 3179–3184, May 2010 24. Quigley, M., Conley, K., Gerkey, B.P., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A.Y.: ROS: an open-source robot operating system. In: ICRA Workshop on Open Source Software (2009)

Wireless Propagation Characterization of Underground Sewers Towards Autonomous Inspections with Drones

Carlos Rizzo1(B), Pedro Cavestany1, François Chataigner1, Marcel Soler1, German Moreno1, Daniel Serrano1, Francisco Lera2, and Jose Luis Villarroel2

1 EURECAT Technology Center, Barcelona, Spain
[email protected]
2 Robotics, Perception and Real-Time Group (RoPeRT), Universidad de Zaragoza, Zaragoza, Spain

Abstract. In tunnel-like environments such as sewers, road tunnels and mines, a robot or a team of networked mobile robots can provide monitoring, surveillance or search and rescue services. However, these scenarios pose multiple challenges from both the robotics and the communications points of view. Structurally, sewers are GPS-denied, narrow scenarios with a lack of illumination and the presence of sediments; in the communications context, multipath propagation causes strong fading phenomena. In this work we characterize a sewer scenario from the communications point of view, based on a measuring campaign carried out in the sewers of the city of Barcelona, Spain, in the context of an ECHORD++ project towards future inspection with drones.

Keywords: Tunnel-like environments · Communications · Wireless propagation · MAVs · Sewers · Inspection

1 Introduction

The sewer network is one of the essential infrastructures of a city. Its characteristics (a very wide underground grid of pipelines and galleries, frequently narrow and worn out), along with the presence of large amounts of waste, yield a hostile working environment. Sewer inspections require many workers to operate in risky and unhealthy conditions. Introducing a robotic solution in this process aims at reducing labour risks, improving the precision of sewer inspections and optimizing the city's sewer cleaning resources, not only in terms of economic expense but also in terms of the water and machinery required for cleaning.

In this context, the ECHORD++ PDTI Urban Robotics: Sewer Inspection project (http://echord.eu/pdti/pdti-urban-robotics-sewer-inspection/), with the final goal of developing an automated robotic solution to address the described problem, has chosen the sewer network of the city of Barcelona (Spain) as use case and test site to develop a solution over a three-year time span. The envisioned robotic solution will be able to determine the state of the sewer in order to identify segments where the sewer's functionality has been compromised, either by sediments or by structural defects.

Automated collection of data in an environment of this nature is a complex task: in many points of the sewer network the ground is highly irregular and full of obstacles which, combined with the high levels of wastewater and litter, greatly impede the operability of terrestrial vehicles. In this context, the ARSI team (Aerial Robot for Sewer Inspection, http://echord.eu/essential_grid/arsi/) has proposed and is developing a solution based on the use of a Micro Aerial Vehicle (MAV) for inspection tasks. It avoids the mobility constraints from which a ground robot would suffer, such as paths with steps, steep drops and even objects in the way. Additionally, a flying platform is able to move faster through the sewer than the terrestrial alternative and needs simpler logistics in deployment and operation. On the other hand, a MAV solution has to overcome strong constraints of size, weight and energy, as its flying space is bounded by sections less than 1 m wide. Therefore its size, and consequently its payload, are limited to minimal dimensions.

The ARSI consortium has addressed the problem of sewer inspection with the integral design of an aerial platform, multi-rotor type, endowed with sensors for semi-autonomous navigation and data collection within the network, and capable of communicating with an on-surface operator (see Fig. 1). The main features of the design are:

– Optimal layout of the payload on the platform (embedded PC, 2D laser scanner, multiple image sensors, LED lights, gas sensor and antenna), which minimizes the weight and energy requirements.
– Localization and motion planning methods in GPS-denied environments.
– Careful design of the image sensors and their configuration in order to ensure a detailed view of the whole section for structure assessment and defect detection purposes.
– Use of the morphology of the sewer network to enable out-of-sight communication.
– Integration of the mission planner within the operational software of sewer maintenance companies.

The MAV needs to communicate with a base station or operator located at a manhole of the sewer. However, electromagnetic waves do not propagate in tunnels as they do in regular indoor scenarios or in free space, even if Line-of-Sight (LoS) is maintained between emitter and receiver [1,2]. In tunnels, multipath propagation affects the radio signal. Free-space solutions and regular indoor propagation models do not work well, and therefore special techniques must be adopted in order to ensure reliable communications. This aspect must be taken into account when planning a communications-aware robotic application to avoid jeopardizing the success of the mission.


Fig. 1. ARSI platform: (a) sensor placement on the MAV (Pixhawk autopilot, two Pointgrey Firefly MV cameras, two IDS uEye cameras, Hokuyo UST-20LX laser scanner, ENVIRA air quality sensor, Intel NUC, external compass); (b) equipped platform flying during a field test.

In this paper we present the results of a measuring campaign carried out in the sewer network, with the final goal of characterizing propagation in order to overcome the communication issues. The remainder of the paper is organized as follows: in the next section, the fundamentals of electromagnetic propagation in tunnels are presented. Then, in Sect. 3, the test scenario and experimental setup are described. In Sect. 4, the results of the measuring campaign are presented and analyzed. Finally, Sect. 5 summarizes the main conclusions and future work.

2 Radio Wave Propagation in Tunnels

Propagation in tunnels differs from regular indoor and outdoor scenarios. For operating frequencies high enough that the free-space wavelength is much smaller than the tunnel cross-section dimensions, tunnels behave as hollow dielectric waveguides. If an emitting antenna is placed inside a long tunnel, the spherical radiated wavefronts are multiply scattered by the surrounding walls. The superposition of all these scattered waves is itself a wave that propagates in one dimension (along the tunnel length) with a quasi-standing wave pattern in the transversal dimensions. This extends the communication range, but affects the signal with the appearance of strong fadings. There are many different possible transversal standing wave patterns for a given tunnel shape. Each one is called a mode and has its own wavelength, close to (but different from) the free-space one, and its own decay rate [3]. For a detailed explanation, a good online source can be found in [4], and a complete survey on radio propagation modeling in these scenarios is presented in [5]. The electromagnetic field radiated from the antenna starts propagating along the tunnel distributed among the many propagating modes supported by this waveguide. After a long enough travel distance, only the lowest-order modes survive (i.e. those with the lowest attenuation rates), giving rise to the slow fadings in the so-called far sector [6]. These fadings are caused by the pairwise interaction between the propagating modes; therefore, the higher the number of modes, the more complex the fading structure. On the transmitter side, the position of the antenna makes it possible to maximize or minimize the power coupled to a given mode, favoring the interaction between certain modes and allowing a specific fading structure to be produced.

Fig. 2. Measured Received Power at 2.4 GHz in the Somport Tunnel, from [2]. The transmitter was kept fixed and the receiver was displaced along 4 km from the transmitter.


In our previous work in road tunnels, given the tunnel dimensions and the transmitter-receiver setup, the dominant modes were the first three. By placing the transmitter antenna close to a tunnel wall, we maximize the power coupled to the first and second modes while minimizing the excitation of the third one. On the receiving side, this produces a strictly periodic fading structure. The superposition of the first and second propagation modes (EH^y_11 and EH^y_21, respectively) created a 512 m periodic fading structure in the Somport tunnel (Fig. 2). See [2] for a detailed explanation.
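A back-of-the-envelope estimate of this beat period can be obtained from the usual first-order approximation of the modal phase constants of a hollow rectangular dielectric waveguide of width a and height b; this is a simplified sketch, not necessarily the exact model used in [2,11,12]:

\[ \beta_{mn} \;\approx\; k_0\left[1 - \tfrac{1}{2}\left(\tfrac{m\lambda}{2a}\right)^{2} - \tfrac{1}{2}\left(\tfrac{n\lambda}{2b}\right)^{2}\right], \qquad k_0 = \frac{2\pi}{\lambda}, \]
\[ \Lambda_{11,21} \;=\; \frac{2\pi}{\beta_{11}-\beta_{21}} \;\approx\; \frac{8a^{2}}{3\lambda}. \]

With λ ≈ 0.125 m at 2.4 GHz and a width of about 5 m, this yields roughly 530 m, of the same order as the 512 m observed in the Somport tunnel; the same expression predicts periods of only a few tens of meters for a gallery about 1 m wide, which anticipates the sewer results below.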

3 Experimental Setup

The Environment. The measuring campaign was carried out in the evaluation area of the project. It comprises the sewer network, currently in use, of a residential building block around Passeig Sant Joan street in Barcelona. The area of interest consists of three segments: Valencia Street, with a length of 132 m, Bailén Street, 129 m long, and Mallorca Street (41 m), as shown in Fig. 3(a). Each of them has a slightly different tunnel cross-section shape and dimensions, as shown in Fig. 3(b). Finally, there are two different kinds of joints between adjacent streets: abrupt 90° turns, and smooth turns, in which a smooth concrete curvature guides the water (Fig. 4(b)).

Fig. 3. Scenario description: (a) test environment (Valencia Street, 132 m, type T174; Bailén Street, 129 m, type T162B; Mallorca Street, 41 m, type T133; Passeig Sant Joan), with an abrupt 90° turn and smooth turns between streets; (b) the different cross-section types (T174 and T162B).

Fig. 4. Sewer scenario structure: (a) straight segment; (b) smooth turns.

The Communication Nodes. To collect the measured received power and bandwidth, an ad-hoc network was established. The communication nodes (base station, tx, and mobile receiver, rx) each consist of a laptop with an external WiFi network card (Ralink RT2770 chipset) with a sensitivity of −95 dBm (see Fig. 5). The tests were conducted at 2.4 and 5.2 GHz, with the transmitter broadcasting frames every 5 ms at a power of 20 dBm. All antennas used were dipoles with a 2.15 dBi gain. The transmitter was placed at different points according to the test, as specified in each case. Finally, a human carried the receiver, playing the role of the mobile platform, and an ad-hoc system based on a measuring tape provided the position estimate.

Fig. 5. Deployed communication nodes.
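The text does not specify the tool used to log the per-frame RSSI; purely as an illustration, the following Python sketch shows one possible way to capture it with Scapy on a monitor-mode interface. The interface name, the transmitter MAC and the output file are placeholders, and the dBm_AntSignal field assumes the driver exposes RadioTap signal information.

# rssi_logger.py -- illustrative sketch only, not the authors' actual tool.
# Assumes a WiFi card in monitor mode (e.g. 'wlan0mon'), root privileges,
# and a recent Scapy whose RadioTap dissector exposes dBm_AntSignal.
import csv
import time

from scapy.all import sniff
from scapy.layers.dot11 import Dot11, RadioTap

TX_MAC = "aa:bb:cc:dd:ee:ff"   # placeholder: MAC of the broadcasting node
IFACE = "wlan0mon"             # placeholder: monitor-mode interface

log_file = open("rssi_log.csv", "w", newline="")
log = csv.writer(log_file)
log.writerow(["timestamp_s", "rssi_dbm"])

def handle(pkt):
    # Keep only frames sent by the transmitter that carry signal information.
    if pkt.haslayer(RadioTap) and pkt.haslayer(Dot11) and pkt.addr2 == TX_MAC:
        rssi = getattr(pkt[RadioTap], "dBm_AntSignal", None)
        if rssi is not None:
            log.writerow([f"{time.time():.3f}", rssi])

# One frame every 5 ms gives roughly 200 samples per second per position.
sniff(iface=IFACE, prn=handle, store=False)

Pairing the logged RSSI with the tape-measured positions yields received-power-versus-distance curves such as those in Fig. 7.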

4 Simulations and Experimental Results

In [1,2,7] we have shown that, depending on the transmitter-receiver setup (i.e. frequency, position), predictable periodic fadings can be obtained in tunnel environments, which can subsequently be used for network planning and deployment [7], navigation under connectivity constraints [8], and even for localization purposes [9,10]. These experiments were performed in a 7 km long tunnel with a cross-section of 4.65 × 5 m, and periodic fadings with periods of 190 to 500 m were obtained at 2.4 GHz under different configurations (see Fig. 2). However, in the setup presented in this work, the cross-section dimensions are much smaller, which according to simulations causes higher attenuation but shorter fading periods.

4.1 Measured Received Power

Using the Modal Theory approach, with the approximate expressions for the electromagnetic field modes and the corresponding propagation constants obtained in [11] for rectangular hollow dielectric waveguides (valid for high enough frequencies, i.e. with the free-space wavelength much smaller than the tunnel cross-section dimensions), and following the procedure in [11] and Chap. 3.5 of [12], placing the transmitter and receiver close to the wall should yield fading periods of 26 and 24 m in Valencia and Bailén streets, respectively.

Valencia Street. In the first test the transmitter was placed at point A, and the receiver was displaced from point A to point B, 132 m away from the transmitter, logging the RSSI (Fig. 6). Results are shown in Fig. 7(a), where spatial fadings with a period of about 25 m and an attenuation of about 8 dB over 100 m can be observed. The free-space path loss at 2.4 GHz (as described in antenna theory textbooks; a good online reference is [4]) is approximately 82 dB for a 132 m path. These results corroborate both the waveguide behavior, which greatly extends the coverage compared to free space, and the presence of fadings.
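As a rough cross-check of these simulated periods, the snippet below evaluates the closed-form approximation Λ ≈ 8a²/(3λ) sketched in Sect. 2. The actual T174 and T162B dimensions are not quoted in the text, so the script back-calculates the effective gallery width implied by the simulated periods rather than taking it from the drawings; it illustrates the scaling only and is not the modal simulation of [11,12].

# fading_period.py -- rough sanity check of the fading periods, not the
# full modal simulation used for the predictions above.
from math import sqrt

C = 299_792_458.0  # speed of light, m/s

def beat_period(width_m: float, freq_hz: float) -> float:
    """Approximate EH11/EH21 beat (fading) period, Lambda ~ 8 a^2 / (3 lambda)."""
    lam = C / freq_hz
    return 8.0 * width_m ** 2 / (3.0 * lam)

def effective_width(period_m: float, freq_hz: float) -> float:
    """Invert the same approximation: width implied by a given fading period."""
    lam = C / freq_hz
    return sqrt(3.0 * lam * period_m / 8.0)

f = 2.4e9
# Widths implied by the simulated periods quoted in the text (26 m and 24 m).
for street, period in [("Valencia (T174)", 26.0), ("Bailen (T162B)", 24.0)]:
    a = effective_width(period, f)
    print(f"{street}: simulated period {period} m -> effective width ~{a:.2f} m")

# For comparison, a road-tunnel-sized cross-section (roughly 5 m wide).
print(f"~5 m wide tunnel: period ~{beat_period(5.0, f):.0f} m")

The implied widths of roughly 1 m are consistent with the narrow galleries described in the introduction, and the two-order-of-magnitude difference with respect to the ~500 m periods of Sect. 2 follows directly from the a² scaling.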

Fig. 6. Experiment paths: points A to F marked along Valencia (132 m), Bailén (129 m) and Mallorca (41 m) streets, with the abrupt 90° turn, the smooth turn and Passeig Sant Joan indicated.

90° Abrupt Turn. Following the previous test, the receiver was moved from point B to point C, while the transmitter was kept at point A. In this condition there is no LoS between the transmitter and the receiver. As soon as the receiver loses LoS with the transmitter, connectivity is lost (in less than 3 m). No results are shown, because if a frame is not received the RSSI cannot be determined (the absence of a frame is not equivalent to an RSSI of 0).

Bailén Street + Smooth Turn + Mallorca Street. To analyze the influence of the smooth turn, the transmitter was moved to point B and the receiver was displaced from point B to point F, passing through point E (Fig. 6).


Fig. 7. Measured Received Power at 2.4 GHz along different segments of the sewer.

Results are shown in Fig. 7(b). From point B to point D a behavior similar to that in Valencia Street can be observed: well-defined periodic fadings with a mean period of about 22 m and an attenuation of about 9 dB/100 m. The differences with respect to Valencia Street are due to the slightly different cross-section. Later on, at point E (where the LoS with the transmitter is lost), the signal suffers a large attenuation. Nevertheless, the received signal still remains above the sensitivity of the receiver. This means that the smooth turn guides the wave softly, making it possible to provide communication coverage into a perpendicular street.

In both cases, periodic fadings appear along the straight segments. They are clearer along Bailén Street (Fig. 7(b)) due to the transit conditions: transiting along Valencia Street was more difficult, and the receiver did not keep a constant cross-section position at all times. Nevertheless, the fadings are still apparent. Table 1 collects the results from both the simulations and the measuring campaign. Regarding the attenuation rate, the real-case attenuation is higher than in the simulations; this is caused by the roughness and irregularities along the sewer. Nevertheless, the attenuation is much lower than in free space, and hence the sewer is acting as a waveguide. With respect to the fading period, the simulation and the experimental results match quite well.

Table 1. Propagation results.

Street     Cross-section   Simulated path loss (dB/100 m)   Measured path loss (dB/100 m)   Simulated period (m)   Measured period (m)
Valencia   T174            6.06                             8                               26                     25
Bailén     T162B           6.36                             9                               24                     22

Based on the tests performed, we can summarize that:

– Similarly to propagation studies in tunnels, sewers behave as waveguides: the communication range is extended in comparison to free space, but the signal suffers from strong fading phenomena. Under certain configurations, these fadings are strictly periodic.
– In the abrupt 90° turns, the communication link is broken almost as soon as the LoS between the transmitter and receiver is lost, making it impossible to maintain communication along perpendicular streets.
– The smooth turns attenuate the signal, but also guide it, making it possible to provide coverage along perpendicular streets.

4.2 Wireless Network Full Coverage Test

Considering the previous results, a communication design test was performed in order to provide full coverage along Valencia, Bailén and Mallorca streets. First, two communication nodes were placed at points A and F, and a repeater was deployed at point B to overcome the abrupt 90° turn (following the convention of Fig. 6). Ping and text streaming were successfully achieved between both endpoints. To make a more detailed study, a bandwidth measuring campaign was performed on a different day.

4.3 Bandwidth Measurement

Following the wireless propagation characterization, an extensive measurement campaign was performed to determine the bandwidth of the link between the robot (drone) and the base station (BS), in both directions (tx → rx, rx → tx), specifically at eight strategic points (A to I in Fig. 8), under LoS and N-LoS conditions, with and without a network repeater to overcome the problems related to the turns described in the previous section. Two frequencies were tested: 2.4 GHz and 5.2 GHz. To measure the bandwidth, the JPERF software (https://iperf.fr/) was used, which generates traffic to measure the capacity of the link by estimating the time required to send and receive large amounts of data (a scripted sketch of an equivalent test is given after the summary list below). At each point, a sequential bidirectional link bandwidth measurement was performed. The drone was left static for a period of 120 s: in the first 60 s the bandwidth of the BS-drone link was measured, followed by the drone-BS link. The base station was kept fixed at point A and the repeater at point C. The robot was moved from point B through I.

Results are shown in Fig. 8. The letter U represents the upload link (BS-drone), while D represents the opposite direction. 2G and 5G represent the frequency, 2.4 and 5.2 GHz respectively. The units are Mbps. As an example, U 2G = 85 means the BS-robot link at 2.4 GHz with an obtained mean bandwidth of 85 Mbps. Finally, the bold letters represent the path (e.g. AH is the link between points A and H without using the repeater, while ACG is the link between A and G, passing through the repeater located at C).

After the tests performed, we can summarize that:

– Under LoS conditions, the bandwidth is higher at 5.2 GHz than at 2.4 GHz.
– As in the previous section, the signal is not able to make the abrupt 90° turn (point C + 3 m towards point D in Fig. 8), neither at 2.4 GHz nor at 5.2 GHz.
– At 2.4 GHz, the smooth curved structure is able to guide the signal to the perpendicular street (point A to points H and I, or point C to points F and G). At 5.2 GHz, the signal is not guided by the smooth turn. This can be explained by the fact that the wavelength is much smaller and gets affected by surface irregularities and roughness.
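As a rough illustration of the sequential bidirectional test described above, the following sketch drives iperf3 from Python (JPERF is a graphical front end built around the iperf family of traffic generators). The base-station address, durations and JSON field names are assumptions tied to a standard iperf3 installation, not a description of the actual setup used in the campaign.

# bw_test.py -- illustrative sketch of a sequential bidirectional bandwidth
# test with iperf3 (run "iperf3 -s" on the base station beforehand).
import json
import subprocess

BS_IP = "192.168.1.1"    # placeholder: base-station address
DURATION_S = 60          # 60 s per direction, as in the campaign

def run_iperf3(reverse: bool) -> float:
    """Return the mean received throughput in Mbps for one direction."""
    cmd = ["iperf3", "-c", BS_IP, "-t", str(DURATION_S), "-J"]
    if reverse:
        cmd.append("-R")  # -R: server (BS) sends, client (drone) receives
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    report = json.loads(out.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

drone_to_bs = run_iperf3(reverse=False)   # drone -> BS (the "D" link)
bs_to_drone = run_iperf3(reverse=True)    # BS -> drone (the "U" link)
print(f"drone->BS: {drone_to_bs:.1f} Mbps, BS->drone: {bs_to_drone:.1f} Mbps")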

4.4 Video Streaming

Finally, a video streaming test was performed. The compression used was JPEG at 90% quality, with the camera working at 11.42 Hz, generating a traffic of about 13.8 Mbps. The test was performed from point A to point I, at 2.4 GHz, in order to guide the signal with the smooth turn. Up to point H, the video streaming is smooth with no interruptions. From point H to point I, some interruptions can be observed (which matches the bandwidth results, as the video stream was using more bandwidth than was available). This issue can be addressed by optimizing the video quality, rate and compression settings.
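As a quick budget check of these numbers, the per-frame size and the frame rate that would fit a given link can be estimated as below; the 10 Mbps target link rate is an assumed example, not a measured value.

# stream_budget.py -- back-of-the-envelope video bandwidth budget.
BITRATE_BPS = 13.8e6     # measured stream rate quoted in the text
FRAME_RATE_HZ = 11.42    # camera rate quoted in the text
TARGET_LINK_BPS = 10e6   # assumed example of an available link rate

frame_bits = BITRATE_BPS / FRAME_RATE_HZ
print(f"per-frame size ~{frame_bits / 8 / 1024:.0f} KiB")                   # ~148 KiB
print(f"max rate on target link ~{TARGET_LINK_BPS / frame_bits:.1f} fps")   # ~8.3 fps

Lowering the frame rate (or the JPEG quality) by roughly this factor would keep the stream below such a link, which is consistent with interruptions appearing only where the measured bandwidth dropped.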


Fig. 8. Bandwidth measurement results. The letter U represents the BS-drone link, while D represents the opposite direction. 2G and 5G indicate the frequency, 2.4 and 5.2 GHz respectively. Values are given in Mbps, and the bold letters represent the path.

5 Conclusions and Future Work

In this work we have presented the results of communications field tests in the sewers of Barcelona, carried out in order to characterize the received power as well as the bandwidth of a communication link between a base station and a mobile robot. Results show that, similarly to road tunnels [1], periodic fadings are also obtained inside the sewer network, with an attenuation low enough to cover several hundred meters at 2.4 and 5.2 GHz. Also, as the attenuation is higher than in our previous road tunnel scenarios, the near sector is shorter and almost negligible, so clear fadings appear from the very beginning. Finally, the bandwidth measurement tests showed that in the smooth turns (which are largely identical and appear throughout the sewer network), the curvature radius is large enough to softly guide propagation and overcome the turn. This does not happen at the higher frequency (5.2 GHz), which might be due to the fact that the wavelength of the signal is smaller and is affected by the structure.

Some topics remain open as future work for the upcoming experiments, such as exploring spatial diversity as in [2] to verify the extent of this study and take advantage of spatial diversity schemes, as well as characterizing in depth the effect of the smooth turns at different operating frequencies, which would make it possible to minimize the use of repeaters at turns. Finally, we would like to introduce the periodicity of the fadings as a source of information in the localization algorithm, as a more robust implementation than in [9,10,13]; this is more feasible here because the fading period inside sewers is 20–30 m, compared to 200–500 m in road tunnels at the same frequency.

Acknowledgments. This work has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement No. 601116. The authors would like to thank Mr. Raul Hernandez and Mr. Albert Figueras from FCC (Fomento de Construcciones y Contratas) for their logistic support during the field tests.

References
1. Rizzo, C., Lera, F., Villarroel, J.L.: UHF and SHF fading analysis using wavelets in tunnel environments. In: 2013 IEEE 78th Vehicular Technology Conference (VTC Fall), pp. 1–6, September 2013
2. Rizzo, C., Lera, F., Villarroel, J.L.: Transversal fading analysis in straight tunnels at 2.4 GHz. In: 2013 13th International Conference on ITS Telecommunications (ITST), pp. 313–318, November 2013
3. Dudley, D., Lienard, M., Mahmoud, S., Degauque, P.: Wireless propagation in tunnels. IEEE Antennas Propag. Mag. 49(2), 11–26 (2007)
4. Orfanidis, S.J.: Electromagnetic Waves and Antennas. Rutgers University (2014). http://www.ece.rutgers.edu/~orfanidi/ewa/
5. Hrovat, A., Kandus, G., Javornik, T.: A survey of radio propagation modeling for tunnels. IEEE Commun. Surv. Tutor. 16(2), 658–669 (2014)
6. Delogne, P.: EM propagation in tunnels. IEEE Trans. Antennas Propag. 39(3), 401–406 (1991)
7. Rizzo, C., Sicignano, D., Riazuelo, L., Tardioli, D., Lera, F., Villarroel, J.L., Montano, L.: Guaranteeing communication for robotic intervention in long tunnel scenarios. In: Robot 2015: Second Iberian Robotics Conference: Advances in Robotics, Volume 1, pp. 691–703. Springer International Publishing, Cham (2016)
8. Rizzo, C., Tardioli, D., Sicignano, D., Riazuelo, L., Villarroel, J.L., Montano, L.: Signal-based deployment planning for robot teams in tunnel-like fading environments. Int. J. Robot. Res. 32(12), 1381–1397 (2013)
9. Rizzo, C., Kumar, V., Lera, F., Villarroel, J.L.: RF odometry for localization in pipes based on periodic signal fadings. In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2014)
10. Rizzo, C., Lera, F., Villarroel, J.L.: A methodology for localization in tunnels based on periodic RF signal fadings. In: IEEE Military Communications Conference (MILCOM 2014), pp. 317–324, October 2014
11. Laakmann, K.D., Steier, W.H.: Waveguides: characteristic modes of hollow rectangular dielectric waveguides. Appl. Opt. 15(5), 1334–1340 (1976)
12. Rizzo, C.: Propagation, Localization and Navigation in Tunnel-like Environments. PhD thesis, University of Zaragoza, July 2015. https://zaguan.unizar.es/record/31897?ln=en
13. Seco, T., Rizzo, C., Espelosin, J., Villarroel, J.L.: A robot localization system based on RF fadings using particle filters inside pipes. In: 2016 International Conference on Autonomous Robot Systems and Competitions (ICARSC), pp. 28–34, May 2016

Author Index

A Abad-Fraga, Francisco J., 43, 53 Acevedo, José Joaquín, 272 Adán, Antonio, 227 Aguilar, J.M., 771 Alenyà, Guillem, 141 Almeida, Ana Filipa, 117 Almeida, Luis, 807 Álvarez, David, 716 Amaral, Filipe, 15 Amat, Josep, 536 Andújar, Dionisio, 239 Antunes, André, 409 Antunes, João, 422 Aparicio, José Luis, 43 Aparicio-Rodriguez, Jose-Luis, 53 Armingol, José María, 487 Arrais, Rafael, 617 Arrue, Begoña C., 272, 309, 320, 771 Asín-Prieto, Guillermo, 569 Asvadi, Alireza, 475 Ayuso, Natalia, 835 Azevedo, Francisco, 695 Azevedo, José Luís, 15 B Badesa, Francisco J., 557 Balsa, Jesús, 283 Balsa-Comerón, Jesús, 67 Barbosa, Ramiro, 756 Bárcena, Guillermo, 43 Barrientos, Antonio, 606, 731 Barrios, Samuel, 835 Barták, Roman, 345 Bautista, Moises, 359 Bauzano, Enrique, 548 Bearee, Richard, 595 Bellas, Francisco, 359 Bengoa, Pablo, 704 Bengochea-Guevara, José M., 239

Bernal-Polo, P., 79 Bernardino, Alexandre, 117 Bertelsen, Álvaro, 524 Bertomeu-Motos, Arturo, 557 Blanco, Andrea, 557 Blanco-Medina, Pablo, 102 Brito, Thadeu, 643, 655 Brueggemann, Bernd, 263 C Cabanes, Itziar, 704 Cañada, Jesús, 3 Cantuña, Karla, 239 Cardeira, Carlos, 409, 422 Carlos, Socarras Bertiz, 190 Carmona, Joana, 695 Casals, Alícia, 536 Castro, Ayoze, 263 Catalán, José M., 557 Cavestany, Pedro, 849 Cerrada, Carlos, 251 Cerrada, José A., 251 Chataigner, François, 849 Chavera, Pedro, 43, 53 Colomé, Adrià, 141 Cortés, Camilo, 524 Costa, Carlos M., 153 Costa, Paulo, 655 Costa, Pedro, 643, 668, 680 Cristiano, Julián, 91 Culla, David, 3 Cunha, Bernardo, 15 D Darouach, Mohamed, 446 de la Escalera, Arturo, 499 de la Rubia, David, 227 de León, Jorge, 731 Dias, Ricardo, 15 Diaz-Cano, Ignacio, 53


862 Díez, Jorge A., 557 Dominguez-Morales, Juan Pedro, 580 Duque-Domingo, Jaime, 251 Duro, Richard J., 359 E Escudero, Álvaro, 524 Espelosín, Jesús, 823 F Fdez, Javier, 606 Fentanes, J.P., 295 Fernandez-Enriquez, Antonio, 580 Fernández-Jiménez, Francisco J., 200 Fernández-Llamas, Camino, 67 Fernández-Lozano, J.J., 190 Ferrarelli, Paola, 394 Ferrein, Alexander, 370 Ferreira, Fausto, 263 Ferri, Gabriele, 263 Figueiredo, Rui, 117 Fitch, Robert, 320 Foix, Sergi, 141 Fonseca, Pedro, 382 Fraile, Juan C., 548 Frerichs, Ludger, 782 G Galindo, Pedro L., 43, 53 Gandarias, Juan M., 165 García, Manuel, 272 García, Miguel Angel, 91 García-Aracil, Nicolás, 557 García-Cerezo, Alfonso J., 165, 190 Garrote, Luis, 475 Garzón, Mario, 606, 731 Garzón-Ramos, David, 731 Gaspar, Ana Rita, 463 Ginés, Jonathan, 283 Gomes, Rui, 213 Gómez-de-Gabriel, Jesús M., 165 Gomez-Ruiz, J.A., 190 González, José E., 569 Gorrotxategi, Jose, 3 Grisetti, G., 295 Guerrero-Higueras, Ángel Manuel, 67 Guindel, Carlos, 487 H Hanheide, M., 295 Harms, Hannes, 782 Heredia, Guillermo, 332 Herrera, P. Javier, 251 Hervé, Pierre Ellie, 3 Hussein, Ahmed, 499

Author Index I Iocchi, Luca, 295, 394 Izard, Jean Baptiste, 3 J Jimenez-Fernandez, Angel, 580 K Kallweit, Stephan, 370 Kasaei, S. Mohammadreza, 743 L Lapucci, Tamara, 394 Lau, Nuno, 15, 129, 743 Lázaro, M.T., 295 Lera, Francisco J., 283 Lera, Francisco, 835, 849 Lim, Gi Hyun, 15 Lima, José, 643, 655 Linares-Barranco, Alejandro, 580 Liu, Xingcun, 263 Llamas, Luis, 359 López-Casado, Carmen, 548 M Mancisidor, Aitziber, 704 Marchukov, Yaroslav, 794 Marin-Plaza, Pablo, 499 Marković, Ivan, 629 Martens, Wolfram, 320 Martín, David, 487, 499 Martín, Francisco, 283 Martín-Ávila, Juan, 190 Martínez-Barberá, H., 79 Martínez-de Dios, J. Ramiro, 179, 200 Martín-Guzmán, Miguel, 190 Matellán, Vicente, 67, 283 Matos, Anibal, 463 Montano, Luis, 794 Moreira, António Paulo, 668 Moreno, German, 849 Moreno, Juan C., 569 Moreno, Luis, 716 Moreno, Plinio, 695 Morgado-Estévez, Arturo, 43, 53 Mosteo, Alejandro R., 835 Muñoz, Luis Miguel, 102 Muñoz, Víctor F., 548 N Navarro, Francisco, 606 Naya, Martin, 359 Neto, Pedro, 595

Author Index Nunes, Alexandra, 463 Nunes, Urbano J., 475 O Oliveira, Italo, 756 Oliveira, Paulo, 409, 422 Ollero, Anibal, 179, 272, 309, 332, 771 Olmo-Sevilla, Asuncion, 580 Oñativia, Jon, 524 P Palau Franco, Marta, 263 Paneque, Julio L., 179 Paulino, Tiago, 695 Pedreiras, Paulo, 382 Pedrosa, Eurico, 15 Peixoto, Paulo, 475 Pereira, Artur, 15, 743 Pereira, Fernando Lobo, 213 Petillot, Yvan, 263 Petković, Tomislav, 629 Petrović, Ivan, 629 Piardi, Luis, 643, 655 Pinto, Andry, 463 Pinto, Charles, 704 Pinto, Luis Ramos, 807 Pons, José L., 569 Ponsa, Pere, 102 Prada Delgado, Javier, 309 Premebida, Cristiano, 475 Presa, Jorge, 524 Prieto, Abraham, 359 Puig, Domènec, 91 R Ramírez, Juan Ramón Astorga, 179 Ramon Soria, Pablo, 309, 320 Ramón, Pablo, 272 Reis, Luís Paulo, 15, 129 Relvas, Pedro, 680 Riazuelo, Luis, 835 Ribeiro, Angela, 239 Rioja-del-Rio, Carlos, 53 Rivas-Blanco, Irene, 548 Rizzo, Carlos, 823, 849 Roa, Máximo A., 716 Rocha, Luís F., 680 Rodríguez, Mariola, 3 Rodríguez-Lera, Francisco Javier, 67 Rodríguez-Sedano, Francisco, 102 Rojas-de-Silva, Abiud, 28 Roldán, Juan Jesús, 606 Roning, Juha, 263 Rowe, Anthony, 807

863 S Safeea, Mohammad, 595 Sajadi-Alamdari, Seyed Amin, 446 Sánchez, Emilio, 524 Sanchez-Sardana, Francisco L., 239 Sanfeliu, Alberto, 434 Santos-Victor, José, 117 Sayols, Narcís, 536 Schattenberg, Jan, 782 Schiffer, Stefan, 370 Schmiemann, Julian, 782 Schneider, Frank, 263 Scorza, Davide, 524 Seco, Teresa, 823 Serrano, Daniel, 849 Silva, Filipe, 382 Silva, Manuel, 680, 756 Silva, Ricardo, 680 Simões, David, 129, 743 Škoda, Jan, 345 Soler, Carles, 513 Soler, Fernando, 28 Soler, Marcel, 849 Soria, Pablo Ramon, 771 Sosa, Dario, 263 Soto-Núñez, José Andrés, 43, 53 Sousa, Armando, 153 Stengler, Erik, 263 Suarez, Alejandro, 332 Suárez, Raúl, 28 Sukkar, Fouad, 320 T Talamino, Jordi Pérez, 434 Tardioli, Danilo, 835 Tavares, Pedro, 668 Torras, Carme, 141 Torres-González, Arturo, 179 Toscano, César, 617 V Varela, Gervasio, 359 Vázquez, Andrés S., 227 Veiga, Germano, 153, 617, 668 Vicente-Diaz, Saturnino, 580 Viguria, Antidio, 263, 272 Villarroel, José Luis, 823, 835, 849 Voos, Holger, 446 W Winfield, Alan F.T., 263 Z Zubizarreta, Asier, 704