Introduction to Intelligent Robot System Design: Application Development with ROS (ISBN 9819918138, 9789819918133)

This book introduces readers to the principles and practical applications of intelligent robot systems with the robot operating system (ROS).


English · 589 pages [580] · 2023


Table of contents:
The Book Structure
Preface
Acknowledgments
Contents
About the Authors
Chapter 1: Composition of Robot System
1.1 Mobile Chassis and Manipulator
1.1.1 Mobile Chassis
1.1.2 Manipulator
1.2 Hardware Components of Robot System
1.2.1 Computing Platform
1.2.2 Motion Controller
1.2.3 Driver and Actuator
1.2.4 Sensor
1.2.5 Display and Touch Panel
1.2.6 Speaker and Microphone
1.2.7 Power Supply System
1.3 Sensor Description and Function Introduction
1.3.1 Encoder
1.3.2 Inertial Measurement Unit
1.3.3 LiDAR
1.3.4 Camera
1.3.5 Infrared Sensor
1.3.6 Ultrasonic Sensor
1.3.7 Millimeter Wave Radar
1.3.8 Collision Sensor
1.3.9 Multi-Sensor Fusion
1.4 Software Composition of Robot System
1.4.1 Operating System
1.4.2 Application Software
1.5 Summary
Further Reading
Exercises
Chapter 2: Connecting the Robot to ROS
2.1 Getting Started with ROS
2.1.1 Origin of ROS
2.1.2 Architecture of ROS
2.1.3 Features of ROS
2.2 ROS Installation
2.2.1 Operating Systems and ROS Versions
2.2.2 Introduction to Linux Foundation
2.2.3 ROS Installation
2.2.4 Set the Environment Variables
2.2.5 Verify Installation
2.3 File System and Communication Mechanisms in ROS
2.3.1 File System
2.3.2 ROS Communication and Its Working Mechanism
2.3.2.1 ROS Computation Graph Level
2.3.2.2 ROS Communication Mechanism
2.4 Writing the First ROS Program
2.4.1 ROS Package Dependency Management
2.4.2 ROS Workspace
2.4.3 Package Creation and Compilation
2.4.3.1 Package Creation
2.4.3.2 Package.xml File
2.4.3.3 Write the CMakeLists.txt File
2.4.3.4 Compile and Build Packages
2.4.4 Rules for Writing ROS Nodes
2.4.5 Two Ways to Run Nodes
2.4.6 Launch File
2.4.6.1 Basic Element
2.4.6.2 Parameter Settings
2.4.6.3 Remapping Mechanism
2.4.6.4 Nested Reuse
2.4.7 Fundamentals of Coordinate Transform
2.4.7.1 Introduction to Coordinate Transform
2.4.7.2 TF Principle
2.4.7.3 TF Message
2.4.7.4 TF Interface in roscpp and rospy
2.4.7.5 Use of Related Command in TF Package
2.5 ROS Common Components
2.5.1 Visualization Tools
2.5.1.1 Rviz
2.5.1.2 Gazebo
2.5.2 Rosbag: Data Logging and Replay
2.5.2.1 Use rosbag to Record Data
2.5.2.2 Play Back Bag Files
2.5.2.3 Check the Topic and Message of the Bag
2.5.3 Debugging Toolkit of ROS
2.5.3.1 Rqt_console
2.5.3.2 Rqt_graph
2.5.3.3 Rqt_plot
2.5.3.4 Rqt_reconfigure
2.5.3.5 Roswtf
2.6 Motion Control of Spark Chassis
2.7 Introduction to ROS External Devices
2.7.1 Remote Control Handle
2.7.1.1 Configuration and Use
2.7.1.2 Sending a Remote Handle Action Message
2.7.1.3 Controlling the turtlesim with the Remote Control Handle
2.7.2 LiDAR
2.7.2.1 Configuration and Use
2.7.2.2 LiDAR Releases Data in ROS
2.7.3 Vision Sensor
2.7.4 Inertial Measurement Unit and Positioning Module
2.7.4.1 Inertial Measurement Unit
2.7.4.2 Positioning Module
2.7.5 Servo Motor
2.7.6 Embedded Controller
2.7.6.1 Basic Configuration
2.7.6.2 Usage Example
2.8 Summary
Further Reading
Description of a Position
Description of an Orientation
Description of a Frame
Transformation from One Frame to Another
Exercises
Chapter 3: Robot System Modeling
3.1 Mobile Chassis Motion Model and Control
3.1.1 Mobile Robot Motion Model and Position Representation
3.1.2 URDF Modeling
3.1.2.1 Gazebo Simulation
3.1.2.2 URDF Model
3.1.3 Robot Status Publishing
3.1.4 Mobile Chassis Motion Control
3.2 LiDAR-Based Environment Perception
3.2.1 Rplidar Package
3.2.2 Introduction to hector_mapping
3.2.3 Usage of Hector_mapping
3.3 Summary
Further Reading
Exercises
Reference
Chapter 4: LiDAR SLAM for Mobile Robot
4.1 Foundation of SLAM
4.1.1 SLAM Overview
4.1.2 Mobile Robot Coordinate System
4.1.3 ROS Navigation Framework
4.1.4 Environment Mapping and Pose Estimation
4.2 Gmapping Algorithm
4.2.1 Principle Analysis
4.2.2 Implementation Process
4.3 Hector SLAM Algorithm
4.3.1 Principle Analysis
4.3.2 Mapping Results
4.4 Summary
Further Reading
Exercises
References
Chapter 5: Autonomous Navigation for Mobile Robot
5.1 Map-Based Localization
5.1.1 Monte Carlo Localization
5.1.2 Adaptive Monte Carlo Localization
5.1.2.1 Topics and Services of the AMCL Package
5.1.2.2 Parameters of AMCL Package
5.2 Map-Based Autonomous Navigation
5.2.1 Navigation Framework
5.2.2 Global Path Planning
5.2.2.1 Dijkstra's Algorithm
5.2.2.2 A* Algorithm
5.2.3 Local Path Planning
5.2.3.1 Artificial Potential Field Method
5.2.3.2 Dynamic Window Approach
5.2.3.3 Timed Elastic Band Method
5.2.4 Navigation Package
5.2.4.1 Move_base Package Interface
5.2.4.2 Costmap Configuration
5.2.4.3 Local Planner Configuration
5.3 Summary
Further Reading
Exercises
References
Chapter 6: SLAM Based on Multi-Sensor
6.1 Inertial Measurement Unit and Calibration
6.1.1 IMU Measurement Model
6.1.1.1 Accelerometer Measurement Model
6.1.1.2 Gyroscope Measurement Model
6.1.2 Pre-Calibration of the System Error
6.1.3 Calibration of Random Error
6.2 LiDAR and IMU Extrinsic Parameter Calibration
6.3 Kinematic Odometer Model for Differential Wheeled Mobile Robot
6.4 Kalman Filter-Based Multi-sensor Fusion
6.5 Cartographer Algorithm
6.5.1 Principle Analysis
6.5.2 Mapping Results
6.6 Summary
Further Reading
Exercises
References
Chapter 7: Manipulator Motion Control
7.1 Manipulator Modeling
7.1.1 Common Manipulators in ROS
7.1.1.1 Universal Robots
7.1.1.2 Franka Panda
7.1.2 URDF Manipulator Model
7.1.3 URDF Modeling
7.2 Manipulator Control: MoveIt
7.2.1 Introduction to MoveIt
7.2.2 Setup Assistant to Configure Manipulator
7.2.3 MoveIt Visualization Control
7.2.3.1 Drag Planning
7.2.3.2 Random Planning
7.2.4 Manipulator Kinematics
7.2.4.1 D-H Modeling of the Manipulator
7.2.4.2 Forward Kinematics
7.2.4.3 Inverse Kinematics
7.3 MoveIt Programming: Manipulator Motion Planning
7.3.1 Motion Planning in Joint Space
7.3.2 Motion Planning in the Workspace
7.3.3 Motion Planning in Cartesian Space
7.3.4 Manipulator Collision Detection
7.4 Summary
Further Reading
Exercises
References
Chapter 8: Computer Vision
8.1 Getting to Know OpenCV
8.1.1 Installing OpenCV
8.1.2 Using OpenCV
8.2 Use of Monocular Vision Sensors
8.3 Camera Calibration
8.3.1 Pinhole Camera Model
8.3.2 Distortion
8.3.3 Camera Calibration Principle and Procedure
8.3.4 Camera Calibration Kit
8.4 Image Transformation and Processing
8.4.1 Perspective Transformation
8.4.2 Image Matching
8.4.3 Image Stitching
8.5 Image Feature Detection Algorithms
8.5.1 SIFT Algorithm
8.5.2 SURF Algorithm
8.5.3 FAST Algorithm
8.5.4 ORB Algorithm
8.6 Object Recognition
8.7 Summary
Further Reading
Exercises
References
Chapter 9: Vision-Based Manipulator Grasping
9.1 Depth Camera
9.1.1 Binocular and RGB-D Cameras
9.1.1.1 Binocular Camera
9.1.1.2 RGB-D Depth Camera
9.1.2 Binocular Camera Model and RGB-D Camera Model
9.1.2.1 Binocular Camera Model
9.1.2.2 RGB-D Camera Model
9.2 Deep Learning-Based Object Recognition
9.2.1 Convolutional Neural Network-Based Object Recognition
9.2.1.1 Mask R-CNN
9.2.1.2 SSD
9.2.1.3 YOLO
9.2.2 Common Deep Learning Frameworks
9.2.2.1 TensorFlow Framework
9.2.2.2 PyTorch Framework
9.3 Hand-Eye Calibration Principle and Procedure
9.4 Vision-Based Robot Grasping
9.4.1 Object Recognition
9.4.2 Object Grasping Pose Estimation
9.4.3 Grasping Pose Detection
9.4.4 Manipulator Grasping Motion Planning
9.5 Summary
Further Reading
Exercises
References
Chapter 10: Visual SLAM for Mobile Robot
10.1 Visual SLAM Framework
10.1.1 Visual Odometry
10.1.2 Back End Optimization
10.1.3 Loop Closure Detection
10.1.4 Mapping
10.1.4.1 Metric Map
10.1.4.2 Topological Map
10.1.4.3 Semantic Map
10.2 Introduction to the ORB-SLAM Algorithm
10.2.1 Tracking Thread
10.2.2 Local Mapping Thread
10.2.3 Loop Closing Thread
10.3 Dense Mapping
10.3.1 Representation of the Space Map
10.3.2 Binocular Camera Geometry Model and Calibration
10.3.2.1 2D-2D Mapping: Epipolar Geometry
10.3.2.2 Binocular Camera Extrinsic Parameter Calibration
10.3.2.3 3D-2D Mapping: PnP
10.3.3 Dense Mapping
10.4 Introduction to Other Visual SLAM Algorithms
10.4.1 LSD-SLAM
10.4.2 SVO
10.4.3 OpenVSLAM
10.4.4 VINS-Fusion
10.5 Summary
Further Reading
Exercises
References
Chapter 11: Introduction to ROS 2 and Programming Foundation
11.1 ROS 2 Design Concept
11.1.1 Problems with ROS 1
11.1.2 ROS 2 Development Status
11.1.3 ROS 2 Design Framework
11.1.3.1 Communication Model for ROS 2
11.2 ROS 2 Installation and Use
11.2.1 ROS 2 Installation
11.2.2 Running the Turtlesim
11.2.3 ROS 2 Command Line
11.3 ROS 2 Programming Foundation
11.3.1 ROS 2 Programming Method
11.3.1.1 Create an ROS 2 Workspace
11.3.1.2 Create ROS 2 Packages
11.3.1.3 Create a Simple Subscriber and Publisher (C++)
11.3.2 Differences Between ROS 2 and ROS 1 Programming
11.4 Summary
Further Reading
Exercises
Appendix A: Abbreviations
Appendix B: Index of Tasks in Each Chapter

Gang Peng · Tin Lun Lam · Chunxu Hu · Yu Yao · Jintao Liu · Fan Yang

Introduction to Intelligent Robot System Design: Application Development with ROS


Gang Peng School of AI and Automation Huazhong University of Science and Technology Wuhan, Hubei, China

Tin Lun Lam Lab of Robot and AI Chinese University of Hong Kong Shenzhen, Guangdong, China

Chunxu Hu Research and Development Department Wuhan Jingfeng Micro Control Technology Co., Limited Wuhan, Hubei, China

Yu Yao School of Computer Science Wuhan University Wuhan, Hubei, China

Jintao Liu Research and Development Department YiKe Robot Lab Nanjing, Jiangsu, China

Fan Yang Research and Development Department Shenzhen NXROBO Co., Limited Shenzhen, Guangdong, China

ISBN 978-981-99-1813-3    ISBN 978-981-99-1814-0 (eBook)
https://doi.org/10.1007/978-981-99-1814-0

© Publishing House of Electronics Industry 2023
Jointly published with Publishing House of Electronics Industry. The print edition is not for sale in China (Mainland). Customers from China (Mainland) please order the print book from: Publishing House of Electronics Industry.
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publishers, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publishers nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publishers remain neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

The Book Structure

As illustrated in the figure, each chapter of the book is organized with tasks for hands-on practice, enabling readers to deepen their understanding of the theoretical knowledge.

Preface

Robotics is an integrated application field that covers many disciplines, such as mechatronics, automation control, electronic information, AI, machine vision, network communication, materials, and energy. Driven by project-based tasks and integrating advanced engineering-oriented teaching methods, this book adopts a teaching mode that combines theoretical knowledge with practice to build sensor-based intelligent robot systems with ROS step by step, enabling readers to enhance their project development capabilities for practical applications.

The book introduces the composition of intelligent robot systems and ROS from shallow to deep, not only enabling readers to quickly understand robotics and get started with ROS, but also presenting the basic principles and common algorithms for both mobile robots and manipulators. The comprehensive contents include ROS principles, mobile robot control, light detection and ranging (LiDAR), localization and mapping, path planning and navigation, manipulator control, image recognition, camera calibration, depth cameras, target grasping, visual SLAM, and the differences between ROS 2 and ROS 1 programming. Furthermore, this book emphasizes hands-on practice and application of the learned knowledge. The selected topics are conducive to mastering the development methods of ROS application software and full-stack development capabilities for robot systems.

Readers will learn the following from this book:

• Application programming development methods of ROS
• Methods of robot motion modeling and GUI visualization in ROS
• Development of robot sensors and actuators in ROS
• SLAM algorithms and multi-sensor fusion methods
• Navigation and path planning methods of mobile robots based on ROS
• Manipulator motion models, and MoveIt and Gazebo simulations
• Application development of computer vision in ROS
• Calibration of camera intrinsic and extrinsic parameters, and hand-eye calibration
• 2D/3D vision-based object recognition and manipulator control


• Application development of visual SLAM systems
• ROS 2 programming foundation

This book contains 11 chapters, organized into four parts.

Part I presents the foundations of ROS application systems (Chaps. 1–3, 11). It helps readers understand the composition of a robot system, become familiar with the functional uses of typical sensors and actuators as well as Linux operations and common ROS packages, and learn how to perform robot modeling. Readers then learn to use ROS to implement robot simulations and to connect to a physical robot for motion control and sensor data acquisition, which enables them to build simple ROS applications. The ROS 2 programming foundation, including the differences between ROS 2 and ROS 1, is also presented.

Part II describes how to build a multi-sensor mobile robot platform based on LiDAR, IMU, encoders, etc. (Chaps. 4–6). Readers will learn SLAM foundations, robot localization and mapping methods, sensor calibration, and navigation and path planning. This part teaches readers how to design and develop a relatively complete ROS mobile robot application system.

Part III introduces vision-based manipulator motion control (Chaps. 7–9), covering manipulator control, computer vision, calibration of camera intrinsic and extrinsic parameters, deep learning, and object grasping. By practicing the tasks presented in this part, readers can acquire the ROS and image-processing expertise required to design and develop vision-based intelligent manipulator systems.

Part IV covers the development of visual SLAM systems for mobile robots (Chap. 10), including the visual SLAM framework and several typical visual SLAM algorithms. This part is suitable for readers engaged in the advanced development of intelligent mobile robots and unmanned systems.

Each chapter is organized with corresponding tasks for hands-on practice, enabling readers to enjoy the fun of running ROS applications while deepening their understanding of theoretical knowledge. These programs involve the C/C++ or Python programming languages, so readers should have a basic knowledge of Linux and some computer programming skills; robot system developers should know Linux well. In addition, some mathematics is also required, including advanced mathematics, probability theory, linear algebra, and matrix theory.

As the saying goes, "knowledge is the beginning of action and action is the completion of knowledge," which reflects the compilation idea of this book: work-oriented instruction and practical learning handled in the manner of "learning-by-doing, practicing-while-learning," with a methodology driven by tasks. Further reading materials are provided at the end of each chapter, and readers are encouraged to explore the relevant literature to fully grasp the content. Readers can perceive problems clearly and understand the logic behind them only if they have a thorough understanding of the systematic knowledge.

This book is the accumulation of the author team's robotics research and project development over the past 20 years, and it provides advanced teaching methods and extensive code examples for readers to study and research. In addition, the book has


been used as teaching material and as an experimental manual in many universities in China, as well as practical material for engineering training and discipline competitions. Furthermore, the "Spark Project" and the Chinese ROS community have hosted a series of public classes based on it, achieving positive results. Hence, this book enables readers to quickly master the principles of ROS and the methods of application development.

In summary, this book can be used as teaching material for undergraduate and graduate programs, as well as a reference for researchers and engineers engaged in mobile robots, manipulator control, intelligent driving, and unmanned aerial vehicles (UAVs). It is also suitable for anyone interested in ROS application system development.

Wuhan, China        Gang Peng
Shenzhen, China     Tin Lun Lam
Wuhan, China        Chunxu Hu
Wuhan, China        Yu Yao
Nanjing, China      Jintao Liu
Shenzhen, China     Fan Yang
January 29, 2023

Acknowledgments

During the writing of this book, we received a great deal of support from colleagues and students, including but not limited to the following: Jinhu Liao, Lu Hu, Bocheng Chen, Shanliang Chen, Zhenyu Ren, Zejie Tan, Jianfeng Li, Hao Wang, Jin Yang, Shangbin Guan, Shaowei Wan, Jiaxie Peng, Xuan Xu, Yicheng Zhou, Hangqi Duan, Yu Xu, Zhiyong Li, and Jie Liu. We would like to express our gratitude to them.

By the time this book became a manuscript, the automatic driving construction machinery project developed by our team had been selected as one of the Major Technological Innovation Achievements (Scenes) of the New Generation of Artificial Intelligence in Hubei Province, China. The team has faced every challenge over the past years, and these achievements are owed to their perseverance and tenacity.

In addition, many of my colleagues from the School of Artificial Intelligence and Automation of Huazhong University of Science and Technology, as well as editor Wei Zhu from Springer, have lent their strong support to the publication of this book. Your unwavering support is highly appreciated!

The task code for each chapter of the book is available on GitHub (https://github.com/NXROBO/spark), and the routines are plentiful and continuously updated online. Readers can download and practice with them; most of the robot applications can be run, experimented with, and studied on the SPARK robot platform.

This book covers a vast amount of content, so some errors and omissions may be unavoidable. If you have any questions, please do not hesitate to contact me. I would like to give special thanks to my family for their understanding and support over the years.


About the Authors

Gang Peng, Ph.D., has research interests in intelligent robotics and intelligent manufacturing systems, intelligent sensing and control based on sensor fusion, intelligent analysis of industrial big data, AI, and machine learning algorithms. He has long been engaged in teaching, research, and development of intelligent robot control, multi-sensor integration and information fusion, intelligent driving, and human–robot collaborative systems. He has edited and published three Chinese monographs and one Springer English monograph, and has co-edited two Chinese monographs and one English translation. Additionally, as first or corresponding author, he has published papers in IEEE Transactions and other international journals in the field of robotics and automation and has been granted more than 40 patents. Furthermore, he has presided over and completed the transfer of scientific and technological achievements into products. He has repeatedly been awarded the Outstanding Instructor Award of Huazhong University of Science and Technology for major national competitions and innovation contests, and has supervised graduate students who have won provincial and municipal innovation and entrepreneurial talent awards.

Tin Lun Lam, Ph.D., Senior Member of IEEE, serves as an Assistant Professor at the Chinese University of Hong Kong (Shenzhen), Executive Deputy Director of the National-local Joint Engineering Laboratory of Robotics and Intelligent Manufacturing, and Director of Research at the Center on Intelligent Robots of the Shenzhen Institute of Artificial Intelligence and Robotics for Society. His research focus includes new mobile robots, multi-robot systems, field robotics, soft robotics, and human–robot interaction. He has published two monographs and over 50 papers in top international journals and conferences in the field of robotics and automation and has been granted more than 60 patents. He has received the IEEE/ASME T-MECH Best Paper Award and the IROS Best Paper Award on Robot Mechanisms and Design.

Chunxu Hu, the founder of the robotics community "Guyuehome," obtained his master's degree from the School of Artificial Intelligence and Automation of Huazhong University of Science and Technology. He focuses on the promotion and application of robot operating systems and is an evangelist of the China ROS Foundation. He was awarded the honorary title of one of the 10 most influential people in ROS in 2019.

Yu Yao, Ph.D., has been teaching, researching, and developing computer operating systems, AI technologies, and machine learning algorithms for many years.

Jintao Liu, Ph.D., the founder of ExBot Robotics Lab, has translated and published more than 10 ROS books and is dedicated to the promotion and application of robot operating systems.

Fan Yang, an extramural graduate advisor at Tsinghua University, has long been dedicated to research on intelligent robots and to intelligent hardware innovation and entrepreneurship. He has brought several products to market, achieving good economic and social benefits. He is one of the editors of China's White Paper on Artificial Intelligence Education, an evangelist of the China ROS Foundation, and a co-founder of the "Spark Project" ROS public classes.

Chapter 1

Composition of Robot System

Abstract  Robots are widely used in industry, medicine, agriculture, construction, and the military. A robot system typically consists of mechanical, drive, control, sensing, intelligence, and multi-machine collaboration parts, and as a whole it is composed of the robot, its operating objects, and the environment. As a teaching and research platform, a robot is ideally suited as an engineering object for learning and gaining expertise in software programming, embedded technology, control technology, sensor technology, mechatronics, image processing, pattern recognition, data communication, and multi-agent systems. This chapter describes the components of a robot system, including the computing platform, controller, actuator, sensors, and software composition.

Keywords  Robot system · Mobile chassis · Manipulator · Hardware components · Software composition

1.1 Mobile Chassis and Manipulator

1.1.1 Mobile Chassis

The chassis is the most fundamental component of a mobile robot. Whether for a self-driving automobile or a simple smart toy car, different chassis drive types are used to enable flexible movement through wheel layout and speed synthesis. According to whether the velocity output of each wheel is constrained, i.e., whether each wheel can output velocity independently without considering the output velocity constraints of the other wheels, chassis drives can be divided into constrained and incomplete constrained types, as shown in Fig. 1.1. The constrained types include differential, Ackermann, and synchronous drives; the incomplete constrained types include omnidirectional and legged drives.

Fig. 1.1 Classification of chassis drive types

Differential Drive: This is the most common mechanism. It consists of two parallel (independent) left and right drive wheels, both of which have power output. The differential drive chassis is also usually fitted with a universal wheel (a driven, unpowered wheel) for weight support. Velocity differences between the two motors drive the robot to move, both straight ahead and in turns; this is where the term differential drive comes from. By controlling the left and right wheels to produce different velocities, the robot can move at different linear and angular velocities. The differential drive system is used by the well-known TurtleBot robot, the Pioneer robot, and home sweeping robots.

Synchronous Drive: The most prevalent examples of this type are tracked vehicles. Synchronous drives are those in which the front and rear wheels on the same side move at the same speed, in the same way as differential drives. Some four-wheeled chassis without tracks also adopt synchronous drives to boost drive capacity, enabling them to carry more weight and ride more smoothly. Compared with a differential drive chassis, a synchronous drive chassis has relatively less accurate wheel odometry because turning on flat surfaces produces varying degrees of chatter. Hence, a synchronous drive chassis structure is not conducive to cornering.

Ackermann Drive: This is the type used by most automobile chassis, with front-wheel steering and rear-wheel drive. In the Ackermann model, the two rear wheels provide the power output. A differential distributes speed to the two rear wheels and produces differential speeds for different turning radii, while the front wheels, controlled by the steering wheel, are responsible for turning. Thus, the key to the Ackermann model lies in the differential at the rear wheels. The differential used in cars is a special gear combination that automatically distributes the output speed of the engine to both wheels according to the car's real-time turning radius. In mobile robots, on the other hand, an electronic governor is used instead of a differential, as the drive wheels are usually connected directly to the electric motors.

Omnidirectional Drive: This type places no restriction on the chassis motion direction, enabling the chassis to translate and rotate in any direction in the plane. It allows movement in both the straight and lateral directions, and the chassis can move in any direction without the body turning. Omnidirectional drives can increase the robot's movement efficiency; in large freight warehouses in particular, using an omnidirectional drive for logistics handling trolleys can save a large amount of time. The wheel of an omnidirectional drive chassis consists of a hub and rollers. The hub is the main support of the entire wheel, and the rollers, which are not connected to a power source, are mounted around the hub and can therefore roll freely. Based on the angle between the hub and the roller axis, the wheels can be divided into omni wheels and Mecanum wheels. The hub of the omni wheel is perpendicular to the roller axis, and the chassis can have a three-wheeled or four-wheeled omnidirectional structure. The three omni wheels of the three-wheeled structure are 120° apart and can move in all directions. The wheels of the four-wheeled structure are perpendicular to each other, forming an orthogonal structure: if the four omni wheels are placed in a cross shape, they form a rectangular coordinate system, whereas the X shape is commonly employed for improved performance in the axial direction. The Mecanum wheel is a common omnidirectional drive structure with a hub at a 45° angle to the roller axis. The left and right wheels are mirrored versions of one another. A four-wheel Mecanum chassis is commonly used on flat surfaces and is not suitable for non-flat surfaces.

Legged Drive: This type is derived from bionic structures. It has somewhat complicated forms (bipedal, quadrupedal, hexapod, etc.), such as bipedal humanoid robots and robotic dogs, cows, horses, and insects. This structure increases the degrees of freedom of the robot legs and enhances the robot's ability to walk with various gaits, thus improving its mobility and expanding the range of terrain it can traverse. However, the additional joints and drive motors also give rise to issues of power, control, weight, and energy, along with technical challenges.

The experimental robot used in this book adopts a differential drive chassis. Its structure is illustrated in Fig. 1.2, and the main parameters of the mobile chassis are listed in Table 1.1. The kinematic model of the two-wheeled differential chassis is presented below.

Fig. 1.2 Schematic diagram of the differential drive chassis structure

Kinematics Analysis of Two-Wheeled Differential Chassis

As shown in Fig. 1.3, the two drive wheels of the two-wheeled differential chassis are located on the left and right sides of the chassis, and steering control is achieved by providing different speeds. In addition, one or two auxiliary universal wheels are usually added as support wheels. As shown in Fig. 1.4, the radius of the wheels of the car is r, the distance from the two wheels to the center of the car is d, the distance between the left and right wheels is D = 2d, the angular velocities of the left and right wheels are WL and WR, and the left and right wheel velocities are VL and VR, respectively. The velocity of the car is set at V, the angular velocity is set at ω (rotating counterclockwise in the positive direction), and the radius of the turn is set at R. The kinematic equations of the two-wheeled differential chassis can be obtained as follows.

Left wheel velocity: VL = ω(R - d)
Right wheel velocity: VR = ω(R + d)
Vehicle velocity: V = ωR = ω(R - d + R + d)/2 = (VL + VR)/2 = (rWL + rWR)/2
Angular velocity of the vehicle: ω = VL/(R - d) = VR/(R + d), hence R = (VR + VL)d/(VR - VL) and R + d = 2VRd/(VR - VL)

Furthermore, it is possible to obtain

ω = VR/(R + d) = (VR - VL)/(2d) = (-rWL + rWR)/(2d)

The vehicle velocity and angular velocity can be written in matrix form as

$$
\begin{bmatrix} V \\ \omega \end{bmatrix}
= \begin{bmatrix} r/2 & r/2 \\ -r/(2d) & r/(2d) \end{bmatrix}
\begin{bmatrix} W_L \\ W_R \end{bmatrix}
= \begin{bmatrix} r/2 & r/2 \\ -r/D & r/D \end{bmatrix}
\begin{bmatrix} W_L \\ W_R \end{bmatrix}
$$

Table 1.1 Parameters of the mobile chassis
Drive type: two drive wheels, differential drive
Drive wheel diameter: 68 mm
Maximum speed: 0.4 m/s
Encoders: 12 positions
Load weight: 5 kg
Surmountable obstacle height: 5 mm
Battery capacity: 18,000 mAh
Photoelectric sensors: wall test, eight sensors distributed around the chassis for obstacle or wall detection; ground test, six sensors distributed at the bottom of the chassis facing downward for detecting stairs
Power output connector: two 12 V/3 A power outputs
Charging voltage: DC 19 V

Fig. 1.3 Schematic diagram of two-wheeled differential chassis
Fig. 1.4 Kinematic analysis of two-wheeled differential chassis
Fig. 1.5 Kinematic model of two-wheeled differential chassis
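As a concrete illustration of the kinematic model above, the following Python sketch converts measured wheel angular velocities into the chassis linear and angular velocity and back again. It is a minimal example under stated assumptions, not code from the book or the Spark platform; the function names and the sample wheel radius and wheel separation are illustrative only.

```python
# Minimal sketch of two-wheeled differential-drive kinematics.
# wheel_radius (r) and wheel_separation (D = 2d) are illustrative values.

def forward_kinematics(w_left, w_right, wheel_radius, wheel_separation):
    """Wheel angular velocities (rad/s) -> chassis (V, omega)."""
    v_left = wheel_radius * w_left
    v_right = wheel_radius * w_right
    v = (v_left + v_right) / 2.0                   # V = (VL + VR) / 2
    omega = (v_right - v_left) / wheel_separation  # omega = (VR - VL) / D
    return v, omega

def inverse_kinematics(v, omega, wheel_radius, wheel_separation):
    """Chassis (V, omega) -> wheel angular velocities (rad/s)."""
    v_left = v - omega * wheel_separation / 2.0    # VL = V - omega * d
    v_right = v + omega * wheel_separation / 2.0   # VR = V + omega * d
    return v_left / wheel_radius, v_right / wheel_radius

if __name__ == "__main__":
    # Example: the right wheel spins faster, so the chassis turns counterclockwise.
    v, omega = forward_kinematics(w_left=5.0, w_right=6.0,
                                  wheel_radius=0.034, wheel_separation=0.2)
    print(f"V = {v:.3f} m/s, omega = {omega:.3f} rad/s")
```

The forward function is simply the 2 × 2 matrix above applied to [WL, WR]; the inverse function is used when a commanded (V, ω) must be turned into wheel speed setpoints.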

Motion Model of the Two-Wheeled Differential Chassis

To perform trajectory control of the robot, the motion model of the robot needs to be known. The trajectory of a two-wheeled mobile robot can be treated as a combination of multiple circular arcs and straight lines (when the rotational speed is 0). Mobile robots with a two-wheeled differential chassis structure cannot move laterally; they can only move forward and rotate. At moment t, the robot velocity and angular velocity are defined as (vt, wt). As illustrated in Fig. 1.5, θt is the angle between the xrobot axis of the robot coordinate system and the x-axis of the world coordinate system at moment t. From moment t to moment t + 1, the motion distance of the robot is short (the encoder sampling period is generally at the millisecond level); therefore, the trajectory between two adjacent points can be viewed as a straight line, i.e., the robot has moved a distance of vt × Δt along the xrobot axis of the robot coordinate system. After projecting this distance onto the x-axis and y-axis of the world coordinate system, the displacement of the robot, Δx and Δy, moving in the world coordinate


system at moment t + 1 relative to moment t, and the rotation angle of the robot Δθ, are obtained:

Δx = vt × Δt × cos(θt)
Δy = vt × Δt × sin(θt)
Δθ = wt × Δt

By analogy, to obtain the trajectory of the motion over a period of time, it is necessary to sum the increments over that period of time.
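As a rough illustration of this dead-reckoning model, the Python sketch below accumulates the per-step increments Δx, Δy, and Δθ to track the robot pose in the world frame. It assumes (v, w) is constant over each sampling period; the time step and velocity values are made up for demonstration and are not from the book.

```python
import math

def integrate_odometry(pose, v, w, dt):
    """One dead-reckoning step: pose = (x, y, theta) in the world frame,
    v is the linear velocity along x_robot, w the angular velocity, dt the step."""
    x, y, theta = pose
    x += v * dt * math.cos(theta)   # dx = v * dt * cos(theta)
    y += v * dt * math.sin(theta)   # dy = v * dt * sin(theta)
    theta += w * dt                 # dtheta = w * dt
    return x, y, theta

if __name__ == "__main__":
    pose = (0.0, 0.0, 0.0)
    # Made-up command sequence: drive forward while turning slowly,
    # 100 steps of 10 ms each (millisecond-level sampling, as noted above).
    for _ in range(100):
        pose = integrate_odometry(pose, v=0.2, w=0.1, dt=0.01)
    print("x = %.3f m, y = %.3f m, theta = %.3f rad" % pose)
```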

1.1.2 Manipulator

A manipulator (robotic arm) is a typical industrial robot, and photographs or videos of manipulators at work can often be seen, for example performing tasks such as assembly, welding, and grasping in a factory, or carrying out surgical operations in an operating room. The world's first manipulator, a five-axis hydraulically driven robot for die casting with an arm controlled by a computer, was made in 1959. The first practical models (playback robots) of industrial robots were "UNIMATE" and "Versatran," introduced in 1962. Since then, robots have been used in industrial applications with great success, performing various tasks in many industries worldwide; many everyday items purchased today were assembled, packaged, or handled by robots.

Unlike the mobile robots discussed in the previous section, manipulators cannot move anywhere in the environment. Instead, they have a fixed base, so their working space is limited. Figure 1.6 shows several different types of manipulators. The number of joints describing the motion of a manipulator is usually expressed by its degrees of freedom (DOF), and the most common is the 6-DOF manipulator. As illustrated in Fig. 1.6a, it consists of a series of rigid linkages and rotating joints and is a tandem-linkage robotic arm. Figure 1.6b shows a selective compliance assembly robot arm (SCARA), which is rigid in the vertical direction and flexible in the horizontal plane; this type of robot is ideal for planar work tasks such as the assembly of electronic circuit boards. Figure 1.6c depicts a collaborative robot, which can work with a human on a production line, giving full play to the efficiency of the robot and the intelligence of the human. This type of robot is cost-effective, safe, and convenient and can greatly promote the development of manufacturing companies; as a new type of industrial robot, collaborative robots dispense with the fence or cage and have a wide range of application areas. Figure 1.6d shows a 7-DOF flexible manipulator, which has better flexibility and mobility in the workspace than the 6-DOF manipulator owing to the extra joint. Figure 1.6e shows a rectangular (Cartesian) coordinate robot, which has 3-DOF of motion along its tracks and can have a very large working space.


Fig. 1.6 (a) 6-DOF tandem industrial robot; (b) 4-DOF SCARA manipulator; (c) 6-DOF collaborative manipulator; (d) 7-DOF flexible manipulator; (e) rectangular coordinate robot; (f) parallel robot

Figure 1.6f depicts a parallel manipulator, in which all the linkages are connected to the end tool in parallel. All the joint motors are mounted on the base, leading to a rigid structure; the parallel structure also has the advantage of higher motion speed than the tandem structure.

A relatively fixed workspace and a structured scene reduce the manipulator's requirements for environmental perception. An industrial robot with a relatively fixed working environment is capable of moving at high speed and with high precision, and the position of the object it works on is known in advance. Meanwhile, safety issues are simplified by using safety fences to keep people out of the robot's workspace. The work of a manipulator encompasses trajectory planning and motion control. In addition, an actuator or tool is attached to the end of the robot to manipulate the work object. The terminal actuator can be a simple two-finger parallel gripper, a suction cup, or a bendable pneumatic gripper jaw; it can also be a complex humanoid manipulator with multiple bendable finger joints, as shown in Fig. 1.7.


Fig. 1.7 Common terminal actuators of a manipulator
Fig. 1.8 Desktop 4-DOF manipulator

The experimental robot used in this book is a mobile manipulator, also called a compound robot, in which a desktop 4-DOF manipulator¹ is mounted on a mobile chassis. The structure of the manipulator is shown in Fig. 1.8. The main parameters of the manipulator are shown in Table 1.2.

¹ Introduction of Uarm, https://www.trossenrobotics.com/uarm-swift-pro.aspx


Table 1.2 Main parameters of the manipulator
Weight: 2.2 kg
Material: Aluminum
Dimension (length, width, and height): 150 mm × 140 mm × 281 mm
DOF: 4
Repeat positioning error: 0.2 mm
Load: 500 g
Arm span (working range): 50 mm–320 mm
Maximum terminal motion speed: 100 mm/s
Operating voltage: DC 12 V
Drive type: Stepper motor + reducer
Actuator: Suction pad
Encoder: 12 positions

1.2 Hardware Components of Robot System

From the perspective of control, the robot system can be divided into four major parts, i.e., actuator, drive system, sensing system, and control system, as shown in Fig. 1.9. Figure 1.10 demonstrates the control relationships among the different components.

1.2.1 Computing Platform

The computing platform is the foundation for the design and development of computer system hardware and software and determines the performance of the computer system. The bases of hardware and software are the central processing unit (CPU) and the operating system, respectively. Currently, a mobile chassis with environmental perception is usually equipped with a computing host running ROS. The following types are available depending on the computing host chip.
1. MPUs based on the ARM Cortex-A series, such as Raspberry Pi, Orange Pi, and Banana Pi, belong to the low- to mid-end MPUs and offer a high performance-to-price ratio.
2. Chips such as the Rockchip RK3399 series, which integrate ARM cores, a GPU, and an NPU, belong to the mid- to high-end chips with a high performance-to-price ratio. This category also includes chips based on Huawei HiSilicon (e.g., Atlas 200).
3. Computing platforms based on the Nvidia Jetson TX2 or Xavier series are high-end products with powerful performance, equipped with Nvidia's own GPU.
4. CPUs based on the X86 architecture, such as mini-PCs, industrial personal computers (IPCs), laptops, and the AINUC series of mini-PCs with further GPU and VPU integration, are high-end products with powerful performance.


Fig. 1.9 Components of a robot system


Fig. 1.10 Control relationships among robotic components

Computing platforms are used to carry out robot system duties and information processing. Program development, installation, and power consumption must all be considered when selecting a computing host. All the abovementioned computing platforms support the Ubuntu Linux operating system and can be deployed with ROS. Among these platforms, high-end products are appropriate for applications in deep learning and computer vision. The computing platform of the Spark experimental robot in this book employs the AINUC series mini-computer based on an X86 architecture CPU with integrated GPU and VPU as the intelligent computing platform. It is suitable for the great majority of mobile platform development needs.

1.2.2 Motion Controller

Motion control refers to the conversion of a predetermined control scheme and planning instructions into the desired mechanical motion, thus achieving precise position, speed, acceleration, and torque or force control of mechanical motion. Motion controllers are used to control the operation of motors and are widely applied in robotics and computer numerical control (CNC) machine tools. Digital signal processors (DSPs) or 32-bit single-chip microcontrollers based on ARM Cortex-M cores are extensively used in motion controllers for mobile chassis and manipulators. They feature high performance, low cost, and low power consumption. The motion control system needs to provide current/speed/position feedback for the robot, the joint motor motion control algorithms, human–machine interaction, and other functions.

1.2.3 Driver and Actuator

Robot motion requires the power conversion of joint control signals to drive the joint motion and achieve the various movements of the robot. Common drive systems used in robotics can be divided into electrical, hydraulic, and pneumatic drives.
1. Electrical drives use various electric motors to generate force and torque to drive the actuator directly or through a mechanical transmission. In this way, it is possible to obtain various motions of the robot. Electrical drives are more efficient, easier to use, and less costly than hydraulic and pneumatic drives due to the omission of intermediate energy conversions.
2. The advantage of a hydraulic drive is that a large driving force or torque, i.e., a large power-to-weight ratio, can be obtained with a small driving output. The drive cylinder is constructed as part of the joint, which creates a simple and compact structure and good rigidity. The hydraulic drive is mostly used in situations that require a large output force and low-speed movement, such as autonomous construction machinery.
3. A pneumatic drive is generally used for the terminal actuator of a manipulator to pick up and place objects through the control of negative and positive pressure. The pneumatic drive system has a good cushioning effect due to the compressibility of air. However, high positioning accuracy is difficult to achieve, and a pneumatic drive generates noise. In addition, pneumatic systems are prone to rust and corrosion and will freeze at low temperatures because of the water in the air. All of these factors limit the application of pneumatic drives in robot systems.
Other available specialty drive types include piezoelectric drives, ultrasonic motors, rubber actuators, and shape memory alloys. Robot actuators are mechanical devices that directly interact with the work object and are equivalent to the hands and feet of the human body. Actuators vary for different objects. Manipulators generally use servo motors as motion actuators to form the individual joints, while mobile robots employ DC motors as movement actuators to move in the task scene.

1.2.4 Sensor

A sensor consists of a sensitive element, a conversion element, and a conversion circuit. The sensitive element is the part that directly senses or responds to the quantity being measured. The conversion element converts the nonelectrical quantities into electrical quantities, and the conversion circuit converts the electrical signal output from the conversion element into a signal convenient for processing, displaying, recording, and transmitting.


In a robot system, sensors play a vital role in signal acquisition and feedback. Sensors form the internal and external sensing systems. The internal sensing system consists of components such as encoders and gyroscopes, which detect the robot's own pose state through its own signal feedback. The external sensing system encompasses sensors such as cameras, infrared sensors, or sonar, which detect the external environment and obstacle information. This information is integrated with the internal sensor data to optimize the position estimation. Further details on the different sensors are provided in a later section.

1.2.5 Display and Touch Panel

Displays, also commonly referred to as monitors, are input–output (I/O) devices. Depending on the manufacturing material, they can be classified as cathode ray tube, plasma, or liquid-crystal displays (LCD). Touch panels, also known as “touch screens,” are inductive LCD devices that receive input signals such as touches to replace mechanical button panels. When a touch is made on a graphic control on the screen, the tactile detection system responds according to a predetermined setting. As a simple and convenient way of human–computer interaction, touch panels have been extensively applied in industrial control, consumer electronics, multimedia teaching, public information query, and military command.

1.2.6 Speaker and Microphone

Speakers and microphones are necessities in human–robot voice interactions. A speaker, also known as a "horn," is a common electroacoustic transducer. With the help of the electromagnetic, piezoelectric, or electrostatic effect, audio signals cause a paper cone or diaphragm to vibrate, and the vibration resonates with the surrounding air to emit sound. According to the transducer mechanism and structure, speakers can be divided into electric, piezoelectric (crystal or ceramic), electromagnetic, capacitive, electro-ionic, and pneumatic types. Among these, the electric speaker is widely used because of its good electroacoustic performance, solid structure, low cost, and other advantages.
Microphones are electroacoustic devices and transducers that convert acoustical energy into electrical energy. Specifically, a voltage is produced by the action of sound waves on electroacoustic components and then converted into an electrical signal. The major components of microphones are the signal transmission circuit and the signal amplifier. Different microphone types are available, including array and ordinary microphones. Ordinary microphones with low cost and high sensitivity are suitable for simple recording and recognition. Array microphones are necessary when there are higher requirements for voice interaction, such as sound source localization, echo cancelation, and command word wake-up.

1.2.7 Power Supply System

The power supply system provides energy for the actuators, sensor systems, control systems, and other components of the robot. The various components of a robot adopt different power supply voltage levels, such as 48 V, 24 V, 12 V, 5 V, or 3.3 V. To ensure the stability of the robot system, the power supply also needs to be protected and filtered. Accordingly, the robot system has a power management subsystem that provides and maintains the power requirements of each part of the system.

1.3 Sensor Description and Function Introduction
1.3.1 Encoder

An encoder is a critical component for measuring the feedback of the robot joint motor speed. Common types include optical and magnetic encoders. Encoders usually adopt a quadrature encoding form for signal output, i.e., the quadrature output of the A and B phases. In addition, when the motor completes one revolution, it outputs the Z-phase pulse signal.
Optical encoders are mounted on motor shafts, joints, or operating mechanisms to measure angular velocity and position. The optical encoder consists of an illumination source, a fixed grating, a turntable with a fine grating that rotates with the shaft, and a light detector. As the turntable rotates, the amount of light reaching the photodetector changes according to the arrangement of the fixed and moving gratings, generating a sine wave, which is then compared with a threshold value and transformed into a discrete square wave pulse. Encoder resolution is measured in units of cycles per revolution (CPR). A typical encoder resolution is 2000 CPR, and currently, optical encoders with a resolution of 10,000 CPR are available.
Quadrature optical encoders are commonly used in mobile robot systems, as shown in Fig. 1.11. The direction of rotation is determined by the phase relationship between the observed pulse sequences of channels A and B; specifically, it is decided by which square wave produces a rising edge first. Moreover, four times higher resolution can be achieved because four different states can be detected. Hence, a 2000 CPR quadrature encoder is able to produce 8000 counts.
The Spark robot in this book uses a magnetic encoder for the wheel encoder of its motion chassis. The magnetic encoder generates a Hall voltage based on the Hall effect principle. Through a signal processing circuit, the Hall voltage is converted into a digital signal that can be recognized by the computer to achieve the encoding function.
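The following short Python sketch (not from the book; the state table, sign convention, and variable names are illustrative) shows how the four detectable (A, B) states yield four counts per optical cycle and how the order of transitions gives the direction of rotation.

# Quadrature decoding sketch: each change of the (A, B) state advances the
# count by +1 or -1, giving 4 counts per cycle (4x resolution).
# The transition table encodes the Gray-code sequence 00 -> 01 -> 11 -> 10.
TRANSITION = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """samples: iterable of (A, B) logic levels sampled fast enough to catch
    every edge. Returns the signed quadrature count."""
    count = 0
    prev = None
    for a, b in samples:
        state = (a << 1) | b
        if prev is not None and state != prev:
            count += TRANSITION.get((prev, state), 0)  # 0 = invalid jump, ignored
        prev = state
    return count

# One full forward cycle produces +4 counts, so a 2000 CPR quadrature encoder
# yields 8000 counts per revolution, as stated in the text.
print(decode([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # -> 4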


Fig. 1.11 Schematic diagram of an optical encoder

The motor rotates once to generate 12 pulses, the diameter of the wheel is 68 mm, and the reduction ratio of the motor reducer is 61.6:1. Combining these, the mileage of wheel movement corresponding to one pulse can be calculated as 68 × 3.14159/61.6/12 ≈ 0.289 mm. Subsequently, the rotational speed can be measured according to the number of pulses per unit time.
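Using the figures quoted above (12 pulses per motor revolution, a 61.6:1 reduction ratio, and a 68 mm wheel), the per-pulse mileage and a simple speed estimate can be computed as in the following minimal Python sketch; the function name is illustrative and not part of the Spark driver.

import math

WHEEL_DIAMETER_MM = 68.0      # wheel diameter
GEAR_RATIO = 61.6             # motor revolutions per wheel revolution
PULSES_PER_MOTOR_REV = 12     # encoder pulses per motor revolution

# Distance travelled by the wheel for a single encoder pulse (about 0.289 mm).
MM_PER_PULSE = (WHEEL_DIAMETER_MM * math.pi) / (GEAR_RATIO * PULSES_PER_MOTOR_REV)

def wheel_speed_mm_s(pulse_count, dt_s):
    """Estimate wheel speed from the number of pulses counted in dt_s seconds."""
    return pulse_count * MM_PER_PULSE / dt_s

print(round(MM_PER_PULSE, 3))        # ~0.289 mm per pulse
print(wheel_speed_mm_s(1000, 1.0))   # speed if 1000 pulses arrive in one second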

1.3.2 Inertial Measurement Unit

An inertial measurement unit (IMU) is a device that measures the three-axis attitude angle (or angular rate) and acceleration of an object. Each IMU contains three single-axis accelerometers and three single-axis gyroscopes. Accelerometers detect the acceleration signal of an object, while gyroscopes detect the angular velocity signal; together they measure the angular velocity and acceleration of the object in 3D space. The attitude of the object is then solved based on the acceleration and angular velocity. IMUs have important applications in mobile robotics and aircraft navigation. In mobile robot systems, IMUs are often fused with wheeled odometers or participate directly in simultaneous localization and mapping (SLAM). Common types include 6-axis, 9-axis, and 10-axis IMUs,² depending on the data categories they output, as shown in Fig. 1.12.
Machining accuracy leads to some deviation between the actual axes of the IMU and the ideal axes during the manufacturing process. Additionally, the raw values of the 3-axis acceleration, 3-axis angular velocity, and 3-axis magnetometer have fixed deviations from the real values. This requires error calibration of the data when using the raw measurements from the IMU module, in order to eliminate axis, fixed zero, and scale deviations, as shown in Fig. 1.13.

² How to choose a ROS Robot Platform to get started with SLAM navigation? https://zhuanlan.zhihu.com/p/499864960


Fig. 1.12 6-axis, 9-axis, and 10-axis IMU modules
Fig. 1.13 Mathematical model of IMU error calibrations

In Fig. 1.13, the raw acceleration measurement vector acc, after compensation for the zero bias Ba and white noise Na, is transformed by the scale correction matrix Sa and the axis bias correction matrix Ta to obtain the calibrated 3-axis acceleration vector acc_calib. Using a similar method, the calibrated 3-axis angular velocity w_calib and 3-axis magnetic force mag_calib can be obtained. Finally, the magnetic force data mag_calib in the magnetometer coordinate system are transformed into the acceleration coordinate system, and the transformed data are used for the acceleration/magnetometer data fusion. The calibration process solves for the individual calibration coefficients on the right side of the error calibration model equation. Solving for the calibration coefficients is generally achieved using least squares or matrix inversion.
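As a rough numerical sketch of the calibration model described for Fig. 1.13, the accelerometer correction could be applied as follows. The matrix ordering, the sign of the bias term, and all numbers are assumptions for illustration, since only the verbal description of the model is given here; the gyroscope and magnetometer are handled analogously with their own matrices.

import numpy as np

# Assumed form of the error model from the verbal description:
# acc_calib = Ta @ Sa @ (acc_raw - Ba), where Ba is the zero bias,
# Sa is the scale correction matrix, and Ta is the axis (misalignment) correction.
# The white noise Na is treated as zero-mean and left to later filtering.
Ta = np.eye(3)                      # axis misalignment correction (identity = ideal axes)
Sa = np.diag([1.01, 0.99, 1.00])    # per-axis scale factors (illustrative values)
Ba = np.array([0.02, -0.01, 0.05])  # zero-bias estimate in m/s^2 (illustrative values)

def calibrate_acc(acc_raw):
    """Apply the assumed calibration model to one raw accelerometer sample."""
    return Ta @ (Sa @ (np.asarray(acc_raw) - Ba))

print(calibrate_acc([0.10, -0.05, 9.86]))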


Fig. 1.14 Schematic of position, velocity, and acceleration measurements for mobile robots

Once the IMU data calibration is completed, data fusion can be performed to obtain the 3-axis attitude angles (roll, pitch, yaw) based on the IMU data for each axis. Various attitude fusion algorithms are available, such as the extended Kalman filtering (EKF). To summarize, the implementation of odometry fusion or SLAM using IMUs in mobile robot systems requires the completion of the underlying IMU error calibration and attitude angle fusion. Figure 1.14 shows a schematic of the mobile robot position, velocity, and acceleration measurements. The IMU is very sensitive to measurement errors in both the gyroscope and accelerometer. For example, gyroscope drift inevitably affects the estimation of vehicle orientation. Accelerometer data are integrated twice to obtain the position, and any residual gravity vector will cause estimation errors in position. In mobile robotic applications, using some external measurement as a reference is necessary for drift estimation, often a camera or GPS. For example, with the help of a vision sensor, drift errors can be eliminated when specific features or locations in the environment are acquired. Similarly, the attitude estimation of the mobile robot can be corrected with GPS signals. The Spark robot chassis used in this book integrates a 6-axis motion attitude gyroscope sensor, as shown in Fig. 1.15. The module integrates a high-precision gyroscope and accelerometer to solve for real-time motion attitude data using dynamics solving and Kalman filtering algorithms.
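The Spark chassis solves attitude with dynamics solving and Kalman filtering; as a much simpler stand-in that still shows the idea of blending gyroscope integration with the accelerometer's gravity reference, a complementary-filter update for roll and pitch might look like the following Python sketch (the blending factor and all values are illustrative).

import math

ALPHA = 0.98  # weighting between integrated gyro and accelerometer estimates (assumed)

def complementary_update(roll, pitch, gyro, acc, dt):
    """One fusion step.
    roll, pitch: previous attitude estimate in radians.
    gyro: (gx, gy, gz) angular rates in rad/s.
    acc:  (ax, ay, az) accelerations in m/s^2.
    dt:   time step in seconds."""
    gx, gy, _ = gyro
    ax, ay, az = acc
    # Short-term estimate: integrate the gyroscope rates (drifts over time).
    roll_gyro = roll + gx * dt
    pitch_gyro = pitch + gy * dt
    # Long-term reference: direction of gravity from the accelerometer.
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.sqrt(ay ** 2 + az ** 2))
    # Blend the two: the gyro term tracks fast motion, the acc term removes drift.
    # Yaw is not corrected here; that requires a magnetometer or external reference.
    return (ALPHA * roll_gyro + (1 - ALPHA) * roll_acc,
            ALPHA * pitch_gyro + (1 - ALPHA) * pitch_acc)

roll, pitch = complementary_update(0.0, 0.0, (0.01, -0.02, 0.0), (0.0, 0.3, 9.8), 0.01)
print(roll, pitch)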

1.3.3 LiDAR

Radio detection and ranging (radar) refers to electronic devices that use electromagnetic waves to detect the position of a target. Its functions include searching for and finding the target; measuring motion parameters such as distance, velocity, and angular position; and measuring characteristic parameters such as target reflectivity, scattering cross section, and shape. Conventional radar uses electromagnetic waves in the microwave and millimeter wave bands as carriers. LiDAR (light detection and ranging) uses lasers as carrier waves and can carry information in terms of amplitude, frequency, and phase. LiDAR has the following advantages.


Fig. 1.15 Integrated IMU module for robot chassis

• The laser light has a small divergence angle, concentrated energy, and high detection sensitivity. It has high angular resolution, velocity resolution, and distance resolution.
• LiDAR displays a strong anti-interference ability, good concealment, no radio wave interference, and insensitivity to ground clutter when working at low elevation angles.
• The large Doppler shift enables the detection of targets from low to high speeds, and the use of range-Doppler imaging enables high-resolution, clear images of moving targets.
• The laser wavelength is several orders of magnitude shorter than that of microwaves, so the beam is narrower and targets can be detected down to the molecular scale.
• LiDAR is smaller and lighter than microwave radar for the same functionality.
On the other hand, LiDAR also has the following disadvantages.
• Lasers are affected by the atmosphere and meteorology. Atmospheric attenuation and severe weather reduce the range of action. In addition, atmospheric turbulence can reduce the accuracy of LiDAR measurements.
• The narrow laser beam makes it difficult to search for and capture targets. Therefore, other equipment is generally used first for a rapid, rough target capture over a large airspace, which is then handed over to the LiDAR for precision tracking and measurement of the target.
In mobile robot systems, it is essential to obtain the shape of the surrounding environment contours and detect obstacles. Robots can perform SLAM, obstacle avoidance, and autonomous navigation using the point cloud information obtained from LiDAR scanning. LiDAR is a radar system that detects the target position, velocity, and other characteristic quantities by emitting a laser beam, and the antenna and system size can be made very small, as shown in Fig. 1.16.


Fig. 1.16 LiDAR

The system consists of a laser transmitter, an optical receiver, a turntable, and an information processing system. The laser transmitter emits a laser of a certain power at a certain wavelength and waveform through the optical antenna. The transmitter consists of a laser, a modulator, a cooling system, a transmitting antenna, and a laser power supply. The echo signal of the target is collected through the optical antenna and then converted into an electrical signal through photoelectric detection. Then, the distance, position, speed, and image information are obtained through amplification and signal processing.
LiDAR works by first transmitting a detection signal (laser beam) to the target, then converting the received light pulse signal (target echo) reflected from the target into electrical pulses, processing them, and obtaining relevant information about the target, such as target distance, bearing, altitude, speed, attitude, and even shape parameters. In this way, it is possible to detect, track, and identify the target.
The core of the transceiver system of a LiDAR is the emission waveform and modulation form of the laser. The laser-transmitted waveform has several types, including an amplitude-modulated continuous wave, a frequency-modulated (FM) continuous wave, and a narrow pulse. The currently popular method is to use a triangular or linear FM continuous wave for the laser-transmitted waveform and pulse compression for the post-signal processing. Recent years have witnessed the development of tunable laser technology and narrow pulse technology with highly repetitive frequencies, which now offer many advantages. Table 1.3 shows a performance comparison of the transmitted waveforms of the FM continuous pulse and the narrow pulse.
There are two major types of receivers for LiDAR: noncoherent and coherent reception technologies. Noncoherent energy reception can be used for pulse counting-based ranging, with the advantages of simple and mature technology. In terms of coherent reception technology, heterodyne, autodyne, and homodyne receptions are available. Coherent reception technology has a high reception sensitivity and speed resolution. However, the coherent reception method requires a particularly wide receiver frequency band. It also has high requirements for the frequency stability of the laser emission and more stringent requirements for the calibration of the optical antenna system and the internal optical path. The transmitter of a coherent LiDAR has laser frequency stabilization, frequency control, and polarization control systems. The coherent LiDAR receiver has a more complex information processing unit.


Table 1.3 Performance comparison of the narrow and FM continuous-wave pulse transmitted waveforms

Waveform | Coherent detection (under the same ranging resolution and certain power limitations) | Ranging resolution | Speed measurement resolution | Imaging rate
FM continuous wave (time-domain pulse) | Easy | Medium | Relatively high | Relatively high
FM continuous wave (frequency-domain pulse) | Difficult | Medium | High | High
Narrow pulse | Difficult | High | Low | High

Table 1.4 LiDAR key performance indicators
Measurement range: Effective measurement of distance.
Scanning frequency: Number of scans the radar makes in one second.
Angular resolution: Angular stepping of two adjacent ranging points.
Measurement resolution/accuracy: Minimum amount of change in distance that can be perceived.
Distance measurement sampling rate: Number of range outputs in one second.
IP protection rating: Dust and water resistance rating, consisting of two numbers. The first number indicates the level of ability to prevent the intrusion of foreign objects, and the second number indicates the degree of airtightness against moisture and water intrusion. The larger the two numbers, the higher the level of protection.

Therefore, the type of reception should be determined according to the specific use requirements.
Mainstream LiDAR is based on two ranging principles: triangulation and time-of-flight (ToF). The key LiDAR performance indicators are shown in Table 1.4. LiDAR measurement methods include single-point ranging, single-line scanning, multi-line scanning, and fixed-area scanning.
1. Single-point ranging is the simplest basic ranging module, enabling monitoring over dozens of meters, several hundred meters, or even kilometers.
2. Single-line scanning uses a single-point ranging module plus motor rotation to form point cloud data in one plane, enabling monitoring of a one-dimensional plane of space to obtain angle and distance values of objects.
3. Multi-line scanning refers to a finer scan of three-dimensional space from multiple angles, such as 4-line, 8-line, 16-line, 64-line, and 128-line. The more lines, the finer the scan, the faster the scan speed, and the faster the point cloud data update.
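For the ToF principle mentioned above, the range follows directly from the round-trip time of the light pulse, as in this minimal Python sketch (the timing value is purely illustrative).

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_range_m(round_trip_time_s):
    """Time-of-flight ranging: the pulse travels to the target and back,
    so the one-way distance is c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A 100 ns round trip corresponds to a target roughly 15 m away.
print(tof_range_m(100e-9))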

Table 1.5 LiDAR main specification parameters
Range: 0.13–16 m (reflectivity 80%)
Sampling rate: 8 K/s
Scanning frequency: 4–20 Hz
Laser wavelength: 780 nm
Laser power: 3 mW (maximum power)
Precision:
Measurement resolution:
Communication interface:
Operating voltage:
Operating current:
Volume:
Weight:

/etc/apt/sources.list.d/ros-latest.list’

To increase the download and installation speed of the software, some mirror software sources in China can be used.
4. Download Sample Code
The sample code can be found on GitHub at https://github.com/NXROBO/. Once the correct repository is added, the operating system knows where to download the program and automatically installs the software based on the command.
5. Set Up the Keys
This step confirms that the original code is correct and that no one has modified any of the program code without the owner's authorization, i.e., it makes sure that the code comes from an authorized site and has not been modified. Generally, when the repository is added, the repository key has also been added. Meanwhile, the keys are added to the trustworthy list of the operating system.


$ sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421c365bd9ff1f717815a3895523baeeb01fa116

If there are problems connecting to the keyserver, try replacing the keyserver address in the command above with one of the following:
hkp://pgp.mit.edu:80 or hkp://keyserver.ubuntu.com:80

6. Installation
Before starting, update the package index to avoid various problems, such as wrong library or software versions. Enter the following command to update.

$ sudo apt-get update

ROS is very large, and four configurations are available depending on the purpose.
• Desktop-full install (recommended), which requires sufficient hard disk space. Use the following command.

$ sudo apt-get install ros-kinetic-desktop-full

This method will install ROS, the rqt toolkit, the rviz visualization environment, robot-generic libraries, 2D (e.g., Stage) and 3D (e.g., Gazebo) simulators, the navigation feature set (movement, localization, mapping, manipulator control), and other perception libraries such as vision, LiDAR, and RGB-D cameras (depth cameras).
• To install specific parts of the feature set or for limited hard drive space, the desktop installation files can be installed alone, including ROS, the rqt toolkit, rviz, and other generic robot libraries. Later, the other feature package sets can be installed when necessary. Use the following command.

$ sudo apt-get install ros-kinetic-desktop

• To simply try it out, ROS-Base can be installed. ROS-Base is usually installed directly on the robot, especially if the robot has no screen or human–machine interaction (HMI) and can only be logged into remotely.


This option only installs the ROS compilation and communication libraries and does not include any GUI tools. Use the following command.

$ sudo apt-get install ros-kinetic-ros-base

• To install a specific ROS feature package independently, use the following command (replacing PACKAGE with the specified feature package name).

$ sudo apt-get install ros-kinetic-PACKAGE

7. Initialize rosdep
Before using ROS, the rosdep command line tool first needs to be installed and initialized (rosdep is installed with ROS by default). Rosdep enables the easy installation of system dependencies for the desired source compilation and is required to run some core components in ROS. Rosdep should be used after installing ROS. Use the following commands to install and initialize rosdep.

$ sudo rosdep init
$ rosdep update

2.2.4 Set the Environment Variables

To be able to run ROS, the system needs to know the location of executables, binary files, and other commands; this is done by executing the following script.

$ source /opt/ros/kinetic/setup.bash

If another ROS distribution, such as indigo, is installed, it will also need to be configured. To do so, simply use indigo instead of kinetic. To test if the ROS installation is complete and correct, type roscore at the command line and see if a program starts. Note: If a command line window is opened again and roscore or another ROS command is typed, it will not work. This is because the script needs to be executed again to configure the global variables and the path to the ROS installation. To fix this, a script needs to be added to the end of the .bashrc file that will execute and configure the environment.


$ echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc

The .bashrc file is in the user's home folder (/home/USERNAME/.bashrc). Each time the user opens a terminal, this file loads the command line or terminal configuration. Adding the above script to the end of the .bashrc file avoids having to repeat the command each time a new terminal is opened. For the configuration to take effect, the following command must be used to execute the file, or the current terminal must be closed and a new one opened.

$ source ~/.bashrc

Note: more than one ROS distribution may be installed. Because each call to the script overwrites the current configuration of the system, ~/.bashrc can only set the setup.bash for the version of ROS currently being used. To switch between several distributions, different setup.bash scripts need to be called.

$ source /opt/ros/kinetic/setup.bash

Or

$ source /opt/ros/indigo/setup.bash

If there are multiple setup.bash scripts for multiple ROS distributions in the .bashrc file, the last script determines the running version. Hence, it is recommended that only a single setup.bash script be imported. To check the version being used via the terminal, the following command can be used.

$ echo $ROS_DISTRO

2.2.5 Verify Installation

Run the following command to install the dependencies required for building packages in Ubuntu.

$ sudo apt-get install python-rosinstall python-rosinstall-generator python-wstool build-essential

This completes the ROS system installation. A test to verify that the system has been installed successfully can be conducted by starting the ROS Master with the following command.

$ roscore

If it is installed correctly, the content illustrated in Fig. 2.13 will appear in the terminal.

2.3 File System and Communication Mechanisms in ROS
2.3.1 File System

Akin to an operating system, ROS files are also organized in a particular fashion, and files with different functions are placed under different folders, as shown in Fig. 2.14.

Fig. 2.13 Information appearing after the roscore command is run


Fig. 2.14 File system structure of ROS

Fig. 2.15 Typical structure of a ROS package

Packages: It is the form of software organization for ROS. The ROS packages are the minimal structures used to create ROS programs and contain the ROS runtime processes (nodes), configuration files, etc.
Package Manifest: It provides information about the package name, version number, content description, author, license, dependencies, compilation flags, etc. The package.xml file inside the ROS package is the manifest file of that package.
Stack: It is formed by organizing several packages with certain functions together.
Message Type: Messages are a type of communication information that is published/subscribed between ROS nodes. The message types provided by ROS can be used, or a .msg file can be customized with the desired message type in the msg folder of the ROS package.
Service: It defines the request and response data types under the ROS client/server communication model. Use either the service types provided by the ROS system or a .srv file defined in the srv folder of the ROS package.
Code: It is the folder where the source code of the package nodes is stored.
Fig. 2.15 shows a typical file structure of a package. The main functions of these folders are as follows.
• config: It stores the configuration files in the package, created by the user.
• include: It stores the required headers in the package.
• scripts: It stores Python scripts that can be run directly.
• src: It stores the C/C++ code to be compiled.
• launch: It stores all the launch files in the package.


Fig. 2.16 Example package.xml file of ROS packages

• msg: It stores the message types customized by the package.
• srv: It stores the service types customized by the package.
• action: It stores the action definitions customized by the package.
• CMakeLists.txt: It stores the rules for the compiler to compile the package, which must be available.
• package.xml: It stores the package manifest, which must be available.
Fig. 2.16 shows an example of a typical package manifest. The package manifest clearly shows the name, version number, content description, author information, and software license of the package. In addition, the <buildtool_depend> tag defines the compilation build tool, usually catkin. The <depend> tag defines the other packages on which the code depends for compilation, export, and running; specifically, <depend> is used to integrate the <build_depend>, <build_export_depend>, and <exec_depend> tags. This information will need to be modified during the development of ROS packages based on the specific content.


Table 2.2 Common ROS commands
catkin_create_pkg: Creates packages
rospack: Obtains information about the package
catkin_make: Compiles packages in the workspace
rosdep: Automatically installs the package dependencies
roscd: Changes directory of packages
roscp: Copies files from a package
rosed: Edits files in a package
rosrun: Runs an executable file in a package
roslaunch: Runs the launch file

Fig. 2.17 ROS computation graph level

2.3.2 ROS Communication and Its Working Mechanism
2.3.2.1 ROS Computation Graph Level

ROS creates a network that connects all processes, i.e., the computation graph level, including nodes, the master, parameter servers, messages, services, topics, and bags, all of which provide data to this computation graph level in different ways, as shown in Fig. 2.17. Any node in the system can access this network, interact with other nodes through it, obtain information posted by other nodes, and post its own data to the network.
• Node
Nodes are processes that perform computations. If a process needs to interact with other nodes, then a node must be created and connected to the ROS network. A system usually comprises multiple nodes with different functions, each with a specific single function. Nodes are written using a ROS client library such as roscpp or rospy. When many nodes are running at the same time, it is convenient to plot the end-to-end communication as a node relationship diagram, as demonstrated in Fig. 2.18. The processes are the nodes in the figure, and the lines between the nodes denote the end-to-end connectivity relationships.

Fig. 2.18 Node relationship in ROS

• Node Manager (Master)
The ROS Master provides name registration and lookup for nodes. Without the Master, there would be no communication between nodes, services, and messages in the entire ROS system. It is important to note that because ROS itself is a distributed network system, it is possible to run the Master on one computer and run the nodes managed by that Master on other computers.
• Parameter Server
The parameter server enables data to be stored by keywords. By using parameters, it is possible to configure nodes at runtime or change the job tasks of nodes.
• Message
Nodes communicate with each other by passing messages. There are many standard types of messages in ROS, and users can also develop custom messages based on the standard messages.
• Topic
Topics are the data bus on which the ROS network routes and manages messages. Each message is published to a given topic. A node sends out a message by publishing it to a topic, and nodes can receive messages from other nodes by subscribing to a topic. A node can subscribe to a topic without also publishing on that topic. This ensures that the publishers and subscribers of a message are decoupled from each other and are not aware of each other's existence. Topic names must be unique; otherwise, errors will occur in routing messages between topics with the same name.
• Service
When publishing to a topic, the messages being sent can interact in a many-to-many fashion. However, the topic model is not appropriate when a node requires a request/reply interaction. In this case, a service allows a node to be interacted with directly. The service must also have a unique name. When a node provides a certain service, all nodes can communicate with it by using code written with a ROS client library.
• Bag
Bags are used for saving and playing back ROS message data. Bags are an important mechanism for storing data. They are capable of acquiring and recording various sensor data for developing robot systems and testing algorithms.
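As a concrete illustration of nodes, topics, and messages, a minimal publisher/subscriber pair written with rospy might look like the following sketch. The node names, the chatter topic, and the message content are illustrative and not taken from the book.

#!/usr/bin/env python
# talker.py - publishes std_msgs/String messages on the chatter topic.
import rospy
from std_msgs.msg import String

def talker():
    rospy.init_node('talker')
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello ROS'))
        rate.sleep()

if __name__ == '__main__':
    talker()

#!/usr/bin/env python
# listener.py - subscribes to chatter and logs every message it receives.
import rospy
from std_msgs.msg import String

def callback(msg):
    rospy.loginfo('I heard: %s', msg.data)

if __name__ == '__main__':
    rospy.init_node('listener')
    rospy.Subscriber('chatter', String, callback)
    rospy.spin()  # keep the node alive and let callbacks run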

2.3.2.2 ROS Communication Mechanism

ROS is a distributed framework that provides users with communication services between multiple nodes (processes). All software functions and tools are built on this distributed communication mechanism. Therefore, the ROS communication mechanism is the lowest level but also the core technology. The following section presents the three core communication mechanisms of ROS.
1. Topic Communication Mechanism
Topics are the most frequently used communication mechanism in ROS, and they are rather complex. As shown in Fig. 2.19, there are two nodes: the Talker (publisher) and the Listener (subscriber). The two nodes publish and subscribe to the same topic, respectively, and there is no mandatory launching order. Assuming the Talker starts first, the detailed process of establishing communication can be analyzed in the following seven steps.
① Talker Registration: After the Talker is started, it registers its own information, including the topic name of the published message, with the ROS Master using a remote procedure call (RPC) through port 1234. The ROS Master adds the node registration information to the registry.
② Listener Registration: After the Listener is started, it also registers its own information, including the topic name of the message to be subscribed, with the ROS Master through RPC.

Fig. 2.19 Topic communication mechanism based on the publish/subscribe model

Fig. 2.20 Service communication based on server/client



③ ROS Master Matches Information: The Master searches for a matching Talker in the registry based on the Listener's subscription information. If no matching Talker is found, the Master waits for other Talkers to join. If information about a matching Talker is found, the Master sends the RPC address information of the Talker to the Listener through RPC.
④ Listener Sends a Request to the Talker: The Listener sends a connection request to the Talker through RPC according to the received RPC address and transmits the subscribed topic name, message type, and communication protocol (TCP/UDP).
⑤ Talker Confirms the Request: After receiving the Listener's request, the Talker also confirms the connection information to the Listener through RPC and sends its own TCP address information.
⑥ Network Connection between the Listener and Talker: The Listener uses TCP to establish a network connection with the Talker after receiving the confirmation.
⑦ Talker Sends Messages to the Listener: After the connection is established, the Talker starts publishing messages to the Listener.
The communication protocol used in the first five steps is RPC, and TCP is not used until the final process of publishing data. The ROS Master plays an important role in the process of establishing connections between nodes. Nevertheless, it is not involved in the actual data transfer between nodes. In addition, after the nodes have established a connection, the ROS Master can be shut down without affecting the data transfer between them; however, other nodes will then not be able to join the network between these two nodes.
2. Service Communication Mechanism
The service is a communication mechanism with a reply, and the communication principle is shown in Fig. 2.20.


Table 2.3 Distinction between Topic communication and Service communication

Item | Topic | Service
Synchronicity | Asynchronous | Synchronous
Communication model | Publish/subscribe | Client/server
Underlying protocol | ROSTCP/ROSUDP | ROSTCP/ROSUDP
Feedback mechanism | No | Yes
Cache | Yes | No
Real-time capabilities | Weak | Strong
Node relationship | Many-to-many | One-to-many (one server)
Applicable scenarios | Data transmission | Logical processing

The RPC communication between the Listener and the Talker is reduced compared to that in the topic communication mechanism. The process is as follows.
① Talker Registration: After the Talker is started, it registers its own information, including the name of the provided service, with the ROS Master using RPC through port 1234. The ROS Master adds the node registration information to the registry.
② Listener Registration: After the Listener is started, it also registers its own information, including the name of the service to be subscribed, with the ROS Master through RPC.
③ ROS Master Matches Information: The Master searches for a matching server in the registry based on the Listener's subscription information. If no matching service provider is found, the Master waits for other service providers to join. If information about a matching server is found, the Master sends the RPC address information of the Talker to the Listener through RPC.
④ Listener Tries to Build a Network Connection with the Talker: After receiving the confirmation, the Listener uses TCP to establish a network connection with the Talker and sends the request data of the service.
⑤ Talker Sends the Service Reply Message to the Listener: After receiving the service request and parameters, the Talker starts to execute the service function. After execution, the Talker sends the response data to the Listener.
Topic and Service are the most basic and most frequently used communication methods in ROS, and their differences are illustrated in Table 2.3. Topic communication is an asynchronous communication model in ROS based on the publish/subscribe model, which decouples the generators and consumers of information from each other. It is commonly used for data communication that is constantly updated and contains little logical processing. In contrast, the Service communication mechanism is mostly used to handle synchronous communication in ROS and adopts the client/server model. It is used for message exchange with smaller amounts of data but with strong logical processing.
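For comparison, a minimal service server and client written with rospy might look like the following sketch, using the standard std_srvs/SetBool service type; the node and service names are illustrative and not taken from the book.

#!/usr/bin/env python
# set_bool_server.py - offers an enable_motor service of type std_srvs/SetBool.
import rospy
from std_srvs.srv import SetBool, SetBoolResponse

def handle(req):
    # req.data carries the request; the handler returns the synchronous reply.
    rospy.loginfo('Motor enabled: %s', req.data)
    return SetBoolResponse(success=True, message='ok')

if __name__ == '__main__':
    rospy.init_node('set_bool_server')
    rospy.Service('enable_motor', SetBool, handle)
    rospy.spin()

#!/usr/bin/env python
# set_bool_client.py - waits until the service exists, then calls it once.
import rospy
from std_srvs.srv import SetBool

if __name__ == '__main__':
    rospy.init_node('set_bool_client')
    rospy.wait_for_service('enable_motor')
    enable_motor = rospy.ServiceProxy('enable_motor', SetBool)
    resp = enable_motor(True)  # synchronous request/reply
    rospy.loginfo('success=%s message=%s', resp.success, resp.message)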


Fig. 2.21 RPC-based parameter management mechanism

3. Parameter Management Mechanism
In this mechanism, the parameters are similar to global variables in ROS and are managed by the ROS Master. The communication mechanism is simpler and does not involve TCP/UDP communication, as shown in Fig. 2.21.
① Talker Sets Variables: The Talker sends parameters, including the parameter name and value, to the ROS Master through RPC, and the ROS Master saves the parameter name and value in the parameter list.
② Listener Queries Parameter Values: The Listener sends a parameter lookup request, containing the parameter name to be searched for, to the ROS Master through RPC.
③ ROS Master Sends Parameter Values to the Listener: The Master searches the parameter list based on the lookup request from the Listener, and when it finds the parameter, it sends the parameter value to the Listener through RPC.
It is important to note that if the Talker updates the parameter value on the Master, the Listener cannot know that the parameter value has been updated without re-querying it. A mechanism for dynamic parameter updates is required in many application scenarios, such as robot navigation and map-building applications in which parameter values in the program need to be modified frequently. In this case, a callback function can be used for dynamic parameter updates.
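A small rospy sketch of the parameter mechanism is shown below; the parameter name and value are illustrative and not taken from the book.

#!/usr/bin/env python
# param_demo.py - sets a parameter on the ROS Master, then reads it back.
import rospy

if __name__ == '__main__':
    rospy.init_node('param_demo')
    rospy.set_param('/max_speed', 0.5)           # Talker side: store name/value on the Master
    if rospy.has_param('/max_speed'):
        speed = rospy.get_param('/max_speed')    # Listener side: query the Master
        rospy.loginfo('max_speed = %.2f', speed)
    # A later set_param by another node is not pushed to this node automatically;
    # the value must be re-queried (or a dynamic update mechanism used) to see changes.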

2.4 Writing the First ROS Program

After learning about the ROS file system and communication mechanism, readers can explore how ROS programs are written in this section. First, the ROS package dependencies as well as the workspace are established. Then, the packages are created and the related files are edited, i.e., the source files for the publisher and subscriber nodes are written, and the packages are compiled and built.


After the ROS executable is compiled and generated, a single node is run via the rosrun command, or multiple nodes are run via the roslaunch command. In addition, this section also describes the rules for writing and running the launch file, as well as the programming foundation of the TF coordinate transform used to describe the coordinate transformations among the robot components.

2.4.1 ROS Package Dependency Management

After ROS is installed, two tasks need to be completed. One is to create a local ROS package dependency database. The other is to set the ROS environment variables (see Sect. 2.2.4 for details). ROS package dependency management is performed based on the local database. Packages that are downloaded or written often depend on other ROS or third-party packages. ROS provides a convenient tool for managing package dependencies: rosdep. Rosdep is a standalone tool that does not depend on a particular ROS distribution and is installed by default in the /usr/bin/ directory. If rosdep has not already been installed locally, it can be installed with the following command.

$ sudo apt-get install python-rosdep

Initializing and updating rosdep to create a local ROS package dependency database involves the following two steps; the run screen is shown in Fig. 2.22.

$ sudo rosdep init
$ rosdep update

Fig. 2.22 Rosdep update run screen


The first step, sudo rosdep init, creates a list index file in the /etc/ros/rosdep/sources.list.d directory, the content of which is the address from which rosdep gets its dependency files. The address of the dependency files for ROS distributions after Groovy is determined by the index.yaml in rosdistro on GitHub. The second step, rosdep update, updates the dependency database based on the list index file; the database is stored in the ~/.ros/rosdep/sources.cache directory, which is generated by these two commands. In addition to initializing and updating rosdep as mentioned above, rosdep is frequently used to check and install package dependencies. The rosdep command is used as follows.

$ rosdep [options] <command>

For example,

$ rosdep check <package>
$ rosdep install <package>
$ rosdep keys <package>
$ rosdep resolve <rosdep_key>

• rosdep check: It checks whether the system dependencies of the package have been installed.
• rosdep install: If the system dependencies are not installed, it generates and executes a bash script to install the system dependencies for the package.
• rosdep keys: It lists the keys on which the package depends.
• rosdep resolve: It resolves system dependencies based on the keys.

2.4.2 ROS Workspace

ROS is normally installed in the /opt/ros/ directory by default. However, during the process of developing robot systems, downloaded source code and user-written packages need to be stored in other directories for easy modification and compilation, and such a working directory is called a workspace. The creation of a workspace for downloaded packages enables the code to be modified to meet actual needs, enabling multiple versions to co-exist without affecting the packages in the ROS installation directory. Different build systems in ROS have different workspaces. The following sections present the build systems, workspace creation, and workspace overlay of ROS.


1. Build Systems
Two different build systems have been used in ROS: rosbuild and catkin. Catkin is the officially recommended build system of ROS and the successor to the original build system, rosbuild. Catkin combines CMake macros and Python scripts to provide some functionality on top of CMake's normal workflow. Compared with rosbuild, catkin was designed to be more conventional, enabling better distribution of packages, better cross-compiling support, and better portability. Packages after the Groovy version are basically built using catkin.
2. Create Catkin Workspace
Similar to a normal CMake project, a single catkin package can be built as a standalone project. However, the creation of a catkin workspace enables the simultaneous building of multiple, interdependent packages. Catkin workspace creation is performed by catkin itself. Catkin is usually already installed when ROS is installed; however, the environment variables need to be set before using it.

source /opt/ros/kinetic/setup.bash

The catkin workspace creation command is as follows.

mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
catkin_init_workspace

Once the catkin workspace is created, it can be built with catkin_make. The catkin_make command must be executed at the top level of the catkin workspace.

cd ~/catkin_ws/
catkin_make

At this point, the file structure of the catkin workspace is as follows.

catkin_ws/          -- workspace
  src/              -- source code space
    CMakeLists.txt  -- link to the catkin/cmake/toplevel.cmake file
  build/            -- build space
    CATKIN_IGNORE
    ...
  devel/            -- devel space (set by the CATKIN_DEVEL_PREFIX variable)
    lib/
    .catkin
    .rosinstall
    env.bash
    setup.bash
    setup.sh
    ...
  install/          -- install space (set by the CMAKE_INSTALL_PREFIX variable)
    lib/
    .catkin
    env.bash
    setup.bash
    setup.sh
    ...

As can be seen from the file structure above, the catkin workspace contains up to four different spaces: the source space, build space, devel space, and install space.
• The source space contains all the package source files. In this directory, packages can be found for building. Its CMakeLists.txt file is a symbolic link to the toplevel.cmake file.
• The build space contains the intermediate files of the compilation and build process.
• The devel space contains the built executables and runtime libraries and can be reset by the CATKIN_DEVEL_PREFIX variable; however, the chosen directory must contain no packages in its subdirectories.
• The install space is the directory in which the packages are installed and can be reset by the CMAKE_INSTALL_PREFIX variable. Because this is a newly created workspace, the default install directory has not yet been created. This directory is created only after catkin_make install is executed.
3. Workspace Overlay
Usually, the ROS core and packages are installed in the /opt/ros/ROS_DISTRO directory, while the user's workspace is set up in the user's directory. Users often change and customize a released package or create their own package based on their own needs. In addition, they want all calls that depend on this package to use the customized package with the same name in the user workspace instead of the package in the system installation directory. Workspace overlay enables this functionality.
In fact, workspace overlay is the overlay of system environment chains. The system environment chain overlay enables ROS to find packages whether they are installed in the system directory or in the user's workspace. For packages with the same name in both locations, the strategy of masking out the lower level is adopted. This strategy is implemented mainly through environment variables, with the top-level relevant path of the overlay placed at the front of each environment variable, which gives it priority in searching.


Fig. 2.23 Env command run screen

Environment Variables
The operation of ROS tools or packages that are not standalone depends on the setup of the system environment variables (which are set after installing the ROS system). The ROS default environment variables are set via a script with the format setup.XXX (XXX is the name of the shell used). In the kinetic version, bash is used with the following command:

> source /opt/ros/kinetic/setup.bash

To avoid executing the source command every time a new shell is created, the above command is usually added to the ~/.bashrc file. In this way, the script in .bashrc is automatically executed when a new bash shell is created. As shown in Fig. 2.23, the system environment variables associated with ROS can be checked with the env command:

> env | grep ros

Catkin Workspace Overlay
Executing catkin_make in the catkin workspace generates a system environment setup script (setup.XXX) in the devel directory, and executing catkin_make install generates a similar system environment setup script in the install directory. These scripts contain the settings from the environment setup script of the ROS install directory and also retain the environment variables already set by other scripts. The catkin workspace overlay can be implemented through these scripts. For example, the following script overlays the system installation directory, and the execution results are shown in Fig. 2.24.

source /opt/ros/kinetic/setup.bash
env | grep ros
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
catkin_init_workspace
cd ~/catkin_ws
catkin_make
source ~/catkin_ws/devel/setup.bash
env | grep ros


Fig. 2.24 First example of catkin workspace overlay

Fig. 2.25 Second example of catkin workspace overlay

Because catkin compilation defaults to building all packages in the entire workspace, users may need to create more than one catkin workspace. The following script, executed after the previous one, implements the overlay of two catkin workspaces; the execution results are shown in Fig. 2.25.

mkdir -p ~/catkin_ws_2/src
cd ~/catkin_ws_2/src
catkin_init_workspace



Fig. 2.26 Replacing the original source command

cd ~/catkin_ws_2
catkin_make
source ~/catkin_ws_2/devel/setup.bash
env | grep ros

In this way, the source command originally added to ~/.bashrc becomes redundant. If catkin_ws is the main workspace, all the environment parameters can be set up by executing the setup script in its devel directory (see _setup_util.py for details; the environment variable setup scripts are generated by it). Hence, the original source command can be replaced with source ~/catkin_ws/devel/setup.bash, as demonstrated in Fig. 2.26.

2.4.3 Package Creation and Compilation

Package writing generally consists of the following steps: package creation using build system tools, package feature writing and configuration file writing, package compilation, environment variable setup, and package running. Each of these steps is described below.

2.4.3.1 Package Creation

The catkin package consists of the following components.
• package.xml file: It is the package manifest, which must be included.
• CMakeLists.txt file: It contains the rules for the compiler to compile the package, which must be included.
• Packages: There can be no more than one package in each folder, and packages cannot be nested.
First, go to the source space and then use the catkin_create_pkg command to create a package called beginner_tutorials that depends on std_msgs, rospy, and roscpp:

cd ~/catkin_ws/src
catkin_create_pkg beginner_tutorials std_msgs rospy roscpp

where beginner_tutorials is the name of the package (required), followed by the list of dependencies on which the package depends. The syntax of the catkin_create_pkg command is as follows.

catkin_create_pkg [package_name] [depend1] [depend2] [depend3]

The dependency std_msgs contains common message types representing basic data types and other basic message constructs, such as MultiArray. rospy implements the ROS functionality for Python, while roscpp implements it for C++. They provide client libraries that enable Python and C++ programmers to call these interfaces and quickly complete development work related to ROS topics, services, and parameters. The creation process is shown in Fig. 2.27. This creates the beginner_tutorials folder and the corresponding files in the ~/catkin_ws/src directory. Its structure is as follows.

Fig. 2.27 Creating a package


beginner_tutorials/
  src/
  include/
  CMakeLists.txt
  package.xml

Package names should follow these naming conventions: lowercase letters, numbers, and underscore separators, and the first character must be a lowercase letter. Names that do not conform to this convention are not supported by the ROS tools.

2.4.3.2 Package.xml File

In the root directory of a package, there are two files, package.xml and CMakeLists.txt, which are mandatory for catkin packages. They are generated by the catkin_create_pkg command based on the following template files.

/usr/lib/python2.7/dist-packages/catkin_pkg/templates/package.xml.in
/usr/lib/python2.7/dist-packages/catkin_pkg/templates/CMakeLists.txt.in

The package.xml file is a replacement for the stack.xml and manifest.xml files in the original rosbuild system and contains the following information.
• Descriptive data (e.g., profile of the package and maintainer).
• Dependent ROS packages or system packages.
• Meta-information about the package (e.g., author and website).
• Package release information (e.g., package version).
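These entries can be illustrated with a minimal sketch of a format 2 package.xml for the beginner_tutorials package created above; the maintainer, license, and version values are placeholders, and the file generated on an actual system will differ in its details.

<?xml version="1.0"?>
<package format="2">
  <name>beginner_tutorials</name>
  <version>0.0.1</version>
  <description>A simple tutorial package</description>
  <maintainer email="user@example.com">user</maintainer>
  <license>BSD</license>

  <buildtool_depend>catkin</buildtool_depend>
  <depend>roscpp</depend>
  <depend>rospy</depend>
  <depend>std_msgs</depend>
</package>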

The package format has two versions, format 1 and format 2, and format 2 is the officially recommended format; its basic structure is the XML (eXtensible Markup Language) layout sketched above. Once the package.xml and CMakeLists.txt files have been written, return to the workspace root and compile:

cd ~/catkin_ws
catkin_make

The above command will compile and build all the packages under ~/catkin_ws/src. To compile and build only some specified packages, the following command can be used.

> catkin_make -DCATKIN_WHITELIST_PACKAGES="package1;package2"

To install the target, use the following command:

> catkin_make install

Task 1 Run a Simple ROS Program
1. Create Workspace
Create a catkin workspace. Note: If sudo is used to create the directories, they will all be owned by root, and a non-root user will then be unable to work in them; therefore, create the workspace as a normal user rather than under the root directory.

mkdir -p ~/dev/catkin_ws/src
cd ~/dev/catkin_ws/src
catkin_init_workspace

When the workspace folder is created, there are no packages in it. Next, compile the workspace using the following commands.

cd ~/dev/catkin_ws
catkin_make

Enter the ls -l command to see the new folders created, which are “build” and “devel.” Read and execute the setup.bash file in the current bash environment.

source devel/setup.bash

Verify that the catkin workspace environment variables are loaded.


echo $ROS_PACKAGE_PATH

A message similar to the following is displayed, indicating that the ROS workspace has been created.

/home/dev/catkin_ws/src:/opt/ros/kinetic/share

2. Create Package
Use the catkin_create_pkg command to create a new package in the created workspace.

cd ~/dev/catkin_ws/src
catkin_create_pkg amin std_msgs roscpp

The package creation command creates a directory (amin) where this package is stored and generates two configuration files in the directory: package.xml and CMakeLists.txt. The format of the catkin_create_pkg command includes the name of the package and the dependencies; in the example above, the dependencies are std_msgs and roscpp.
3. Write ROS Node Programs
Write a simple ROS node program named hello.cpp. The source file hello.cpp is also stored in the package folder (amin). The ROS node program is as follows:

#include <ros/ros.h>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "hello");
  ros::NodeHandle nh;
  ROS_INFO_STREAM("Hello, ROS!");
  return 0;
}

When a package contains multiple types of files, a src subdirectory can be created in the package directory to store C++ source files for easy file organization.
4. Compile the hello.cpp Program
Compile the hello.cpp program using the catkin build system of ROS.


Step 1: Declare the Dependency Library
For C++ programs, declaring the other packages on which the program depends is necessary to ensure that catkin can provide the C++ compiler with the header files and link libraries needed to compile the package. To declare the dependencies, the CMakeLists.txt and package.xml files need to be edited, using the build_depend (compile dependency) and run_depend (run dependency) tags in the following format.

<build_depend>package-name</build_depend>
<run_depend>package-name</run_depend>

This step can also be omitted, because the dependencies are declared automatically when the package is created with the dependencies specified; it is only needed if they were not specified at creation time. Use rospack depends1 amin to view the direct dependencies of the package. In many cases, dependencies of dependencies, i.e., indirect dependencies, occur; for example, roscpp, a dependency of amin, has other dependencies as well. The command rospack depends amin can be used to list all dependencies of the amin package.
Step 2: Declare the Executable
Add two lines to CMakeLists.txt declaring the name of the executable file to be created.

add_executable(executable-name source-files)
target_link_libraries(executable-name ${catkin_LIBRARIES})

The first line declares the name of the executable file and the list of source files needed to generate it. If there are multiple source files, they are separated by spaces. The second line tells CMake which libraries (defined in find_package) need to be linked when linking the executable. If multiple executable files are included, the above two lines are copied and modified for each executable file. In this routine, the hello.cpp source file is compiled into an executable named hello, and the following two lines need to be added after include_directories(include ${catkin_INCLUDE_DIRS}) in the CMakeLists.txt file.

add_executable(hello src/hello.cpp)
target_link_libraries(hello ${catkin_LIBRARIES})


Fig. 2.28 Compilation results of catkin_make

Step 3: Compile Workspace
In the workspace directory, use the catkin_make command to compile.

cd ~/dev/catkin_ws
catkin_make

The result of the compilation is shown in Fig. 2.28. The executable file hello will be generated in the ~/dev/catkin_ws/devel/lib/amin directory.
5. Execute the hello Program
A program is a node, and a node needs the node manager to function properly, so the node manager must be started first.

roscore

After the node manager starts, execute the setup.bash script file.

source devel/setup.bash

The setup.bash script file is automatically generated by catkin_make in the devel subdirectory of the workspace. It sets several environment variables that enable ROS to find the created packages and the newly generated executables (i.e., to register the program). Run the node using the following command.

rosrun amin hello


Fig. 2.29 Program results

The syntax format for running a node is as follows:

rosrun package-name executable-name

where package-name is the name of the package and executable-name is the name of the executable file. Fig. 2.29 demonstrates the result of running this simple ROS program.

2.4.4 Rules for Writing ROS Nodes

Nodes are processes that perform computational tasks, and a system typically consists of several nodes. Nodes are divided into publish nodes and subscribe nodes. A publish node publishes messages to a specified topic, and a subscribe node subscribes to the topic to obtain the messages. The types of messages to be published and subscribed need to be defined in advance.
1. Write a Publish Node
First, go to the package src folder in the workspace:

cd ~/catkin_ws/src/ros_demo_pkg/src

Then, create the publisher node file, e.g., the demo_topic_publisher.cpp program.

#include "ros/ros.h"
#include "std_msgs/Int32.h"
#include <iostream>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "demo_topic_publisher"); // initialize the ROS node
  ros::NodeHandle node_obj; // create a NodeHandle object to communicate with ROS



  // create a topic publisher named number_publisher; the published topic is "/numbers", the data type is Int32, and the message queue (buffer) size is 10
  ros::Publisher number_publisher = node_obj.advertise<std_msgs::Int32>("/numbers", 10);
  // define the frequency of sending data; if the frequency is high, the buffer size of the message queue needs to be increased as well
  ros::Rate loop_rate(10); // 10 Hz
  int number_count = 0; // initialize an int variable
  // when Ctrl+C is pressed, ros::ok() returns false, exiting the while loop
  while (ros::ok())
  {
    std_msgs::Int32 msg; // create a ROS message of type Int32
    msg.data = number_count; // assign the value of the integer variable to the message's data member
    ROS_INFO("%d", msg.data); // output the value of msg.data, which is also stored in the log system
    number_publisher.publish(msg); // the node publishes the message data to the topic
    // read and update ROS topics; callbacks are not processed without spinOnce() or spin()
    ros::spinOnce();
    loop_rate.sleep(); // sleep so that the set publishing frequency is maintained
    number_count++;
  }
  return 0;
}

The spinOnce() and spin() functions are the ROS message callback handling functions and are usually found in the main loop of a ROS program, which needs to keep calling ros::spinOnce() or ros::spin(). The difference between the two is that after spinOnce() is called, the program continues to execute; spin(), however, does not return after being called, so the program does not execute further. Because ros::spinOnce() processes callbacks only once per call, a loop is needed to call it repeatedly. ros::spinOnce() is flexible in its usage, and it is important to consider the timing and frequency of calling it, as well as the buffer size of the message queue. All of these have to be coordinated according to the actual situation; otherwise, message loss or delays can occur.
2. Write a Subscribe Node
Writing a subscribe node is akin to writing a publish node. The subscribe node file demo_topic_subscriber.cpp needs to be added after entering the src folder of the package in the workspace.


#include "ros/ros.h" #include "std_msgs/Int32.h" #include //create a callback function that is executed when new data is updated in the topic, this callback then extracts the new data and produces the outputs. void number_callback(const std_msgs::Int32::ConstPtr& msg) { ROS_INFO("Received [%d]",msg->data); } int main(int argc, char **argv) { ros::init(argc, argv, "demo_topic_subscriber"); ros::NodeHandle node_obj; // create subscribe node number_subscriber, subscribe topic is "/numbers", this node is associated with the callback function. ros::Subscriber number_subscriber = node_obj.subscribe("/ numbers",10, number_callback); //write into spin(), which is equivalent to an infinite loop, the subscribe node waits for the topic to update its data. ros::spin(); // exit only when Ctrl+C is pressed return 0; }

Provided that the program registers a message subscription, ROS automatically receives the subscribed messages in the specified format in the background during execution. However, the received messages are not processed immediately; they wait until ros::spinOnce() or ros::spin() is executed before the callbacks are invoked, which is the principle of the message callback function. The ros::spin() function does not appear in a loop because the program calls spin() and then executes no other statements; hence, spin() cannot be followed by any statement other than return. Note: Do not forget to add ros::spinOnce() or ros::spin() in the corresponding position if the program subscribes to messages; otherwise, the data or messages sent by the publish node will not be received. Additionally, it is necessary to set a reasonable message queue size and execution frequency for ros::spinOnce(). For example, if the message sending frequency is 10 Hz and the ros::spinOnce() calling frequency is 5 Hz, the message queue size must be larger than 2.
3. Build Nodes
Edit the CMakeLists.txt file in the package folder, then compile and build the source code. Go to the package folder and add the following statements to the CMakeLists.txt file.


include_directories(
  include
  ${catkin_INCLUDE_DIRS}
  ${Boost_INCLUDE_DIRS}
)
# This will create executables of the nodes
add_executable(demo_topic_publisher src/demo_topic_publisher.cpp)
add_executable(demo_topic_subscriber src/demo_topic_subscriber.cpp)
# This will generate the message header files before building the target
add_dependencies(demo_topic_publisher mastering_ros_demo_pkg_generate_messages_cpp)
add_dependencies(demo_topic_subscriber mastering_ros_demo_pkg_generate_messages_cpp)
# This will link the executables to the appropriate libraries
target_link_libraries(demo_topic_publisher ${catkin_LIBRARIES})
target_link_libraries(demo_topic_subscriber ${catkin_LIBRARIES})

After adding the executable, dependency, and target_link sections, enter the following commands in the terminal to compile the package containing the above nodes.

$ cd ~/catkin_ws/
$ catkin_make

After compiling the package, source the generated setup script so that the new executables can be found, and the ROS nodes can then be run.

$ cd ~/catkin_ws/
$ source devel/setup.bash

2.4.5 Two Ways to Run Nodes

There are two ways to run a ROS node. One is to run nodes directly through the rosrun command; however, the rosrun command can only run one node at a time. A complex robot application has many nodes, so using rosrun to run node by node seems tedious. To solve this problem, there is another way to run a ROS node: using roslaunch. Running roslaunch can automatically open and run master nodes and multiple custom nodes.


1. Run a Node Using rosrun
To run a node using rosrun, first run the node manager (Master):

$ roscore

Next, run any node in the ROS package.

$ rosrun package_name node_name

For example, run the previous publish and subscribe nodes.

$ rosrun mastering_ros_demo_pkg demo_topic_publisher
$ rosrun mastering_ros_demo_pkg demo_topic_subscriber

2. Run a Node Using roslaunch
The roslaunch command enables multiple ROS nodes specified in a launch file to be launched simultaneously, with the launch parameters configured in the launch file. roslaunch automatically launches roscore if it has not been launched already. The syntax of the roslaunch command is as follows.

$ roslaunch [package] [filename.launch]

For example,

$ roslaunch ros_demo_pkg demo.launch

Here, demo.launch is the launch file for the package ros_demo_pkg; the launch file is presented in the next section.

2.4.6 Launch File

The launch file is a way to launch multiple nodes at the same time in ROS. It can automatically launch the ROS Master (node manager) and enables various configurations for each node, providing a great convenience for multiple node operation.

2.4.6.1 Basic Element

Take the following simple launch file as an example.
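A minimal sketch of such a launch file is given below; the pkg, type, and name values are illustrative and assume two turtlesim nodes.

<launch>
  <node pkg="turtlesim" type="turtlesim_node" name="sim_1"/>
  <node pkg="turtlesim" type="turtlesim_node" name="sim_2"/>
</launch>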



The launch file is described in XML form and contains a root element <launch> and two node elements <node>. The XML file must contain the root element, which is defined using the <launch> tag, and all other content in the file must be contained within it. The node element is the core of the launch file. It is defined using the <node> tag and contains three attributes: pkg, type, and name. pkg is the name of the package in which the node is located, and type is the executable name of the node; these two attributes are equivalent to the input parameters of the rosrun command when executing the node. name defines the name under which the node will run, and it overrides the node name given in the input parameters of the init() function in the node. In addition to the three most-used attributes mentioned above, the following attributes may be used.
• output="screen": prints the standard output of the node to the terminal screen; the default output goes to a log file.
• respawn="true": respawn attribute; the node is automatically restarted when it quits; default: false.
• required="true": required node; when this node terminates, the other nodes in the launch file are also terminated.
• ns="namespace": namespace; adds a namespace prefix to relative names within the node.
• args="arguments": the input arguments required by the node.

2.4.6.2 Parameter Settings

To facilitate setup and modification, the launch file supports parameter setting, similar to variable declaration in programming languages. There are two tag elements for parameter setting: <param> and <arg>, one for parameters and the other for arguments.
1. The parameter is a running parameter of the ROS system and is stored in the parameter server. The parameter is loaded in the launch file via the <param> element. After the launch file is executed, the parameter is loaded into the ROS


parameter server. Each active node can obtain the parameter value via the ros::param::get() interface, and users can also obtain the parameter value in the terminal through the rosparam command. <param> is used as follows.

<param name="output_frame" value="odom"/>

After running the launch file, the value of the parameter output_frame is set to odom and loaded into the ROS parameter server. A real robot system has a large number of parameters, so another, similar element (<rosparam>) is used to load them.

<rosparam> loads all the parameters in a YAML file into the ROS parameter server. Its command attribute needs to be set to "load", and the namespace can optionally be set via the "ns" attribute.
2. The <param> tag and the <arg> tag correspond to the same word in Chinese, but the meanings of the two are completely different. The argument is similar to a local variable inside the launch file: it is restricted to the launch file and exists for the purpose of refactoring the launch file, and it has no relation to the internal implementation of the ROS node. The declaration of <arg> is shown in the sketch below.

In the launch file, a declared argument is then referenced through the $(arg ...) substitution, as the sketch below also shows.
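A combined minimal sketch of the three elements described above follows; the YAML file path, package name, and argument values are hypothetical.

<launch>
  <!-- load a whole YAML parameter file into the parameter server -->
  <rosparam file="$(find my_robot_config)/config/params.yaml" command="load" ns="my_robot"/>

  <!-- declare an argument local to this launch file, with a default value -->
  <arg name="robot_name" default="spark"/>

  <!-- use the argument when setting a parameter -->
  <param name="robot_name_param" value="$(arg robot_name)"/>
</launch>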

2.4.6.3 Remapping Mechanism

ROS provides a remapping mechanism, similar to the alias mechanism in C++. There is no need to modify the package interface. The interface name can be remapped to take an alias (the data type of the interface must be the same), and the


robot system will recognize it. In launch files, the remapping function is implemented through the <remap> tag. For example, for the keyboard control node of TurtleBot, the topic on which speed control commands are published is /turtlebot/cmd_vel. If the topic of the subscribed speed control in the actual robot system is /cmd_vel, remapping /turtlebot/cmd_vel to /cmd_vel allows the robot to receive the speed control commands, as in the sketch below.
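A minimal sketch of this remapping follows; the package and executable names of the teleoperation node are illustrative.

<launch>
  <node pkg="turtlebot_teleop" type="turtlebot_teleop_key" name="keyboard_teleop">
    <!-- make the node's /turtlebot/cmd_vel output appear as /cmd_vel -->
    <remap from="/turtlebot/cmd_vel" to="/cmd_vel"/>
  </node>
</launch>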

2.4.6.4 Nested Reuse

Complex robot applications often have many launch files, and dependencies exist among them. To reuse an existing launch file directly, the <include> tag can be used to include other launch files, similar to "include" in C; see the sketch below.
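A minimal sketch, assuming a hypothetical my_robot_bringup package:

<launch>
  <!-- reuse another launch file from a (hypothetical) package -->
  <include file="$(find my_robot_bringup)/launch/base.launch"/>
</launch>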

Launch is a very useful and flexible feature of the ROS framework. In many cases, it is not necessary to write a large amount of code when using ROS: existing packages can be used, and a launch file can be edited to implement numerous robot functions. Please visit http://wiki.ros.org/roslaunch/XML for more information about the tags.

2.4.7 Fundamentals of Coordinate Transform

2.4.7.1 Introduction to Coordinate Transform

Coordinate transform (TF) includes both transformations of position and pose. A coordinate TF is the representation of a coordinate in a different coordinate system, while a coordinate system TF is the relative positional relationship between different coordinate systems; it is important to distinguish between the two. The robot model in ROS contains a large number of components, each of which is called a link (e.g., a joint of the robot), and each link corresponds to a coordinate system (frame). The term "frame" is used to represent the coordinate system of a component, such as the world frame, base frame, or gripper frame. TF can track these frames over time (within 10 s by default). The following section outlines the meaning of coordinate TF.


1. TF defines the data format and data structure of the coordinate transform; it is a kind of tree data structure, i.e., the "TF tree."
2. TF can be viewed as the /tf topic. The messages on this topic hold the data structure of the TF tree, which maintains the coordinate transformation relationships among the robot's own components and between the robot and the environment map. The content of the /tf topic is the entire TF tree of the robot, and the /tf topic requires many nodes to maintain it, each node maintaining the relationship between two frames.
3. TF can be seen as a package that contains many tools, e.g., to view the coordinate transforms between joints.
4. TF provides an API that encapsulates publishers and subscribers and can be used for node programming. Through the TF interface, it is possible to maintain and subscribe to the frame transformation relationships in the TF tree.
The TF tree structure maintains the relationships between frames, and the coordinate relationships between different links can be continuously published based on the topic communication mechanism. This tree structure must ensure that, for every pair of parent and child frames, some node continuously publishes the correct positional relationship between them. In this way, the tree structure is kept intact, which ensures the connectivity between any two frames; a break in one of the links triggers a system error. Two neighboring frames rely on a broadcaster node to publish the pose relationship between them. A broadcaster is a publisher, and if relative motion occurs between the two frames, the broadcaster publishes the relevant message.

2.4.7.2 TF Principle

The TF tree is organized as parent–child frames and can provide the positional transformation relationship between any two frames. The parent frame is at the top, and the child frames are underneath. In other words, it is possible to provide a coordinate TF between all frames for any point in the robot system: as long as the coordinates of a point in one frame are given, the coordinates of that point in any other frame can be obtained. To represent the positional transformation relationships of arbitrary frames in a reasonable and efficient way, TF takes the form of a multi-level, multi-branch tree to describe the frames in a ROS system. Each node in the tree is a frame, which has one parent frame and may have multiple child frames. Parent–child frames are adjacent and are connected by lines with arrows.
1. TF Frame Description Specification
When describing a coordinate TF, source is the source frame of the coordinate TF, and target is the target frame; this transform represents the coordinate TF. When describing a frame TF, parent is the original frame, and child is the transformed frame; this transform represents the frame TF and describes the pose of the child frame in the parent frame. That is, a frame TF from the parent frame to


the child frame is equivalent to a coordinate TF of a point p in space from the child frame to the parent frame. For example, the frame TF from frame A to frame B, denoted ${}^{A}_{B}T$, is the description of frame B in frame A. It also represents the coordinate TF of a point p in space from its coordinate ${}^{B}P$ in frame B to its coordinate ${}^{A}P$ in frame A, i.e.,

$${}^{A}P = {}^{A}_{B}T \, {}^{B}P$$

2. Communication Mechanism of the TF Tree
The TF tree is built and maintained based on the topic communication mechanism. The TF tree maintains the transformation relationships of all frames in the system based on the transformation relationship of each parent–child frame pair. Each transformation relationship from a parent frame to a child frame relies on a broadcaster publisher node to be published continuously. However, instead of publishing a separate topic for each pair of parent–child frames, all parent–child frames are collected in one topic, and the data contained in the messages of this topic are the transformation relationships of all parent–child frames.
3. Building and Maintenance of TF Trees
In building a TF tree, a parent frame needs to be specified as the initial frame, such as the map frame in the robot system. A branch is added to the TF tree when a frame TF from an existing parent frame to a new child frame is published. TF tree building and maintenance rely on the sendTransform interface of the TransformBroadcaster class: the TransformBroadcaster class is a publisher, and the sendTransform interface is the function that wraps the publisher. It is necessary to keep the TF tree up to date with the latest positional transformation relationships by continuously updating the frame TFs from existing parent frames to existing child frames. In addition, the TF tree structure needs to be kept intact by ensuring that, for every parent–child frame pair, some node continuously publishes the correct positional relationship between the two frames.
4. Use of TF Trees
Once a TF tree has been built correctly, it is possible to subscribe to the transformation relationship of any two frames using the subscriber provided by TF. The subscriber receives the messages on the /tf topic, which include the transformation relationships of all published parent–child frames, i.e., the entire TF tree at the current moment. The subscriber then searches this TF tree to find the path of the desired TF based on the parent–child relationships. This transformation path connects the two frames through the parent–child relationship pathway, and the transformation matrices along that pathway are multiplied together to obtain the transformation relationship between the two frames. Hence, the subscriber has to search the tree to obtain the relationship between two frames, which takes a considerable amount of time; to meet real-time requirements, the data of each TF can be cached for 10 s.
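To make the broadcaster side concrete, the following is a minimal roscpp sketch (the node name, frame names, and the fixed transform values are illustrative assumptions, not code from the book) that keeps publishing one parent–child frame TF through sendTransform:

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "frame_broadcaster");   // hypothetical node name
  ros::NodeHandle nh;
  tf::TransformBroadcaster br;                  // publisher side of the /tf topic
  ros::Rate rate(10);                           // publish the relationship continuously
  while (ros::ok())
  {
    tf::Transform transform;
    transform.setOrigin(tf::Vector3(0.1, 0.0, 0.2));            // child frame position in the parent frame
    transform.setRotation(tf::Quaternion(0.0, 0.0, 0.0, 1.0));  // identity rotation (x, y, z, w)
    // publish the parent->child frame TF with the current timestamp
    br.sendTransform(tf::StampedTransform(transform, ros::Time::now(),
                                          "base_link", "laser_link"));
    rate.sleep();
  }
  return 0;
}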


TF encapsulates publishers and subscribers so that developers can simply establish, maintain, and subscribe to frame transformation relationships in the TF tree through the TF interface. When using the TF tree, the BufferCore data structure maintains the complete tree structure and its state, the TransformListener is used to monitor the TF between any two frames, and lookupTransform or waitForTransform is used to obtain the transform between any frames. The TF tree supports tf_prefix, which can be used with multiple robots; different robots are distinguished by different prefixes. If there is only one robot, / is generally used.

2.4.7.3 TF Message

Each broadcaster between parent–child frames, i.e., two adjacent frames, needs to publish messages to maintain the transformation relationship between them. TransformStamped.msg is the message on the /tf topic. This message format represents the relative coordinate transformation between two frames and is one small segment of a transform in a TF tree. As described earlier, ROS relies on TF trees to represent the transformation relationships between the coordinate systems of the entire system, rather than simply relying on the transformation relationship of a single pair of parent–child frames. Because the TransformStamped.msg message can only describe the pose of one parent–child frame pair, the TF tree message type is built on top of the TransformStamped.msg message. The specific format of TransformStamped.msg is as follows.

geometry_msgs/TransformStamped (this message type belongs to the geometry_msgs package, rather than the tf package)
std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
string child_frame_id
geometry_msgs/Transform transform
  geometry_msgs/Vector3 translation
    float64 x
    float64 y
    float64 z
  geometry_msgs/Quaternion rotation
    float64 x
    float64 y
    float64 z
    float64 w

This message represents the transformation relationship between the current coordinate system frame_id and its subcoordinate system child_frame_id. The specific pose transformation is defined by the geometry_msgs/Transform message


type. The translation is indicated by a three-dimensional vector, and the rotation is indicated by a quaternion. The content of the /tf topic represents the entire robot TF tree, rather than the transformation relationship of only two frames. Hence, the /tf topic requires many nodes to maintain it, each node maintaining the relationship between one pair of parent and child frames; that is, there may be a number of nodes sending messages to the one /tf topic. A TF tree is equivalent to a stitching together of coordinate TFs between multiple frames. The following describes the message type for TF trees that is transmitted on the /tf topic. It should be noted that a new iteration of tf, tf2, appeared after the release of ROS Hydro. tf2 provides the same set of features more efficiently and directly replaces the original tf, and some new features were added. tf2 is divided into tf2, which is used for specific operations such as coordinate TF, and tf2_ros, which is responsible for communicating with ROS via messages. The corresponding message type for the TF tree in tf2 is tf2_msgs/TFMessage.msg, which is located in the tf2_msgs package. tf2_msgs/TFMessage has the following specific format.

geometry_msgs/TransformStamped[] transforms
  std_msgs/Header header
    uint32 seq
    time stamp
    string frame_id
  string child_frame_id
  geometry_msgs/Transform transform
    geometry_msgs/Vector3 translation
      float64 x
      float64 y
      float64 z
    geometry_msgs/Quaternion rotation
      float64 x
      float64 y
      float64 z
      float64 w

It can be seen that the message type of the TF tree (tf2_msgs/TFMessage.msg) is a variable-length array of TransformStamped messages. It is formed by many sets of TF messages (TransformStamped) between pairs of frames and describes the TF tree of the entire robot system.

2.4.7.4 TF Interface in roscpp and rospy

The TF library is available in both roscpp and rospy, and TF provides a number of useful interfaces, including the following:


1. Definition of data types (classes): vectors, points, quaternions, 3×3 rotation matrices, poses, etc.
2. Data conversion: conversion functions among rotation matrices, quaternions, Euler angles, and rotation axes are provided.
3. Functions operating on points, vectors, angles, quaternions, etc.
4. TF classes encapsulating the publisher and subscriber interfaces. The publisher publishes a frame transformation relationship as a transform on the /tf topic, and the transformation relationship between the source and target frames is available through the subscriber.
The TransformBroadcaster class is the publisher interface. When a tf::TransformBroadcaster object is created in a node and sendTransform() is then called, the frame transformation relationship is published as a transform on /tf. The TransformListener is the class that receives the coordinate transformations from /tf; see the listener sketch below.
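Correspondingly, a minimal roscpp sketch of the listener side (the node name and frame names are assumptions, matching the broadcaster sketch given earlier) that obtains the transform between two frames:

#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "frame_listener");   // hypothetical node name
  ros::NodeHandle nh;
  tf::TransformListener listener;            // subscribes to /tf and caches the tree
  ros::Rate rate(1.0);
  while (ros::ok())
  {
    tf::StampedTransform transform;
    try
    {
      // wait until the transform is available, then query it
      listener.waitForTransform("base_link", "laser_link", ros::Time(0), ros::Duration(1.0));
      listener.lookupTransform("base_link", "laser_link", ros::Time(0), transform);
      ROS_INFO("laser_link in base_link: x=%.2f, y=%.2f",
               transform.getOrigin().x(), transform.getOrigin().y());
    }
    catch (tf::TransformException& ex)
    {
      ROS_WARN("%s", ex.what());   // the tree may not contain the frames yet
    }
    rate.sleep();
  }
  return 0;
}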

2.4.7.5 Use of Related Command in TF Package

The use of the TF package can be divided into the following two aspects.
1. Monitoring TF transforms: receiving and caching all frame TFs published by the system and querying the desired frame transformations from them.
2. Broadcasting TF transforms: broadcasting the transformation relationships between frames to the system. Multiple broadcasters of TF transforms for different parts of the robot can exist in the system, and each of them can insert a frame transformation relationship directly into the TF tree without further synchronization.
Although TF is a code library in ROS, a wealth of command line tools is provided to help users debug and create transforms. A few common command line tools are listed below.
1. Display all current frames.

$ rosrun tf tf_monitor   # display information about the current coordinate TF tree
$ rostopic echo /tf      # display all pose transformation relationships between parent–child frames as an array of TransformStamped messages

The function of the tf_monitor tool is to print information about all frames in the TF tree; information on specific frames can be viewed by passing them as parameters.
2. Draw the TF tree:

$ rosrun tf view_frames


This first subscribes to /tf and then draws the TF tree based on the received tf information. view_frames is a visual debugging tool that displays all the TF tree frames and their parent–child relationships in graphical form.
3. View the current TF tree.

$ rosrun rqt_tf_tree rqt_tf_tree

This command also queries the TF tree; the difference from the previous command is that it queries the current TF tree dynamically. Any current change is immediately visible, such as when a transform disconnects or reconnects, and the result is displayed via the rqt plugin.
4. Query the transformation relationship between any two frames.

$ rosrun tf tf_echo [source_frame] [target_frame]

The function of the tf_echo tool is to query the transformation relationship between the specified frames, i.e., to display the pose transformation between the source and target frames.
Task 2 Run the Turtlesim
First enter the following command:

$ roscore

This command opens the ROS master node, ROS parameter server, rosout logging node, and other services, which is the first step in turning on all ROS services. Then, execute the turtlesim program that comes with ROS.

$ rosrun turtlesim turtlesim_node

This opens a turtlesim example program; this node receives velocity information from other nodes and publishes information about the current pose of the turtle to the screen. Use the rostopic tool to see the names of the topics currently published by the node.

$ rostopic list
$ rostopic echo /turtle1/pose


Fig. 2.30 Using the rqt_graph visualization tool

The screen will print out the contents of the topic /turtle1/pose. A small turtle can be seen on the screen slowly moving forward after the following command is entered.

> $ rostopic pub -r 10 /turtle1/cmd_vel geometry_msgs/Twist '{linear: {x: 0.2, y: 0, z: 0}, angular: {x: 0, y: 0, z: 0}}'

Note: There are spaces after the colons; otherwise, the formatting is incorrect. This sends a message with message type geometry_msgs/Twist to /turtle1/cmd_vel using the rostopic tool. The content of the message uses the YAML format. If the value of x in the linear item is changed to 0 and z in the angular item is changed to 1.0, the little turtle rotates. Use the rosmsg tool to view the geometry_msgs/Twist message format.

> $ rosmsg show geometry_msgs/Twist

A definition of the geometry_msgs/Twist message format is printed on the screen; it represents the velocity of a rigid body in space as both linear velocity and angular velocity in three directions. Use the rosbag tool to record the topic data.

> $ rosbag record -a -O cmd_vel_record

Record for approximately 10 seconds and press Ctrl+C to exit. The recorded motion can then be played back using the following command.

> $ rosbag play cmd_vel_record.bag

Use the rqt_graph visualization tool to see the data being passed between several nodes, as shown in Fig. 2.30. Use the rosservice tool to demonstrate the service provided by the turtlesim node.


Fig. 2.31 Rqt toolbox service caller demo

> $ rosservice call /spawn '{x: 1.0, y: 1.0, theta: 1.0, name: ''}'

Enter the service request data to be sent to the service, including the position, pose, and name of the turtle. If the spawn is successful, the service response provided by turtlesim_node is returned. This can also be achieved using the Service Caller in the rqt toolbox, as shown in Fig. 2.31. Next, launch the keyboard control node so that the turtle can be made to move. Create a new terminal and enter the following command.

> $ rosrun turtlesim turtle_teleop_key

In this way, the movement of the turtlesim can be controlled by the up, down, left, and right cursor keys of the keyboard. The above section demonstrates the use of a few common ROS command line tools; the specific usage of more ROS command line tools can be printed with the -h parameter.

2.5 ROS Common Components

ROS contains a large number of tools that can help developers with visual simulation, data logging, and code debugging. This section focuses on the use of these ROS common components.


2.5.1 Visualization Tools

2.5.1.1 Rviz

Robot systems have sensors such as cameras and LiDAR and, if equipped with depth cameras or 3D LiDAR, can also provide 3D data in point cloud format. Rviz is a 3D visualization tool for ROS that enables the visual representation of such 3D data. For example, rviz can display, without programming, point cloud data from LiDAR or from depth cameras such as RealSense, Kinect, or Xtion, as well as image data acquired from cameras. In addition, rviz can preview images in an image_view-like window and display various basic elements, including polygons, different markers, maps (usually 2D grid maps), and even robot poses. Rviz supports the display of data for the navigation packages, such as odometry and paths. Moreover, rviz can display CAD models of robot components and take into account the transformations between the frames of each component to draw a coordinate TF tree. This provides tremendous help for debugging frames in simulations. When installing the full desktop version of ROS, rviz is installed by default. If rviz is missing from the ROS installation, use the following command to install it:

$ sudo apt-get install ros-kinetic-rviz

There are two ways to launch rviz. First, it can be started directly by running rosrun rviz rviz. Second, it can be started as an rqt plugin by running rosrun rqt_rviz rqt_rviz, or by launching rqt first and then loading the rviz plugin manually in the GUI. The interface of rviz after launching is shown in Fig. 2.32. The rviz interface includes the following:
1. 3D view: the middle part of the window, where various data can be viewed in 3D. The background color, fixed frame, and grid of the 3D view can be set in the Global Options and Grid items displayed on the left side.
2. Displays: the display area on the left is a view area where the desired data can be selected from various topics. Click the “Add” button at the bottom left to select a different display style.
3. Menu: provides commands to save or read the display status and allows the selection of various panels.
4. Tools: provides various functional tools, such as Interact, Move Camera, Select, Focus Camera, Measure, 2D Pose Estimate, 2D Navigation Goal, and Publish Point.
5. View setting area: sets the viewpoint of the 3D view.
– Orbit: rotates around the specified viewpoint; this is the default and most commonly used basic view.


Fig. 2.32 Rviz interface

– FPS (first person): displays the scene as seen from a first-person viewpoint.
– ThirdPersonFollower: displays the scene as seen from a third-person viewpoint.
– TopDownOrtho: the Z-axis view, displayed in orthographic projection instead of perspective.
– XYOrbit: similar to Orbit, but the focus is fixed on the XY plane with a Z-axis value of 0.
6. Time display: displays the current system time and ROS time. To restart rviz, click the “Reset” button at the bottom of the window.

2.5.1.2 Gazebo

Gazebo is a 3D dynamic simulator. It is capable of simulating robots in indoor and outdoor environments, providing high-fidelity physical simulation. Gazebo offers a complete set of sensor models and user-friendly interaction for easy program development. Gazebo can be installed with the following command.

$ sudo apt-get install ros-kinetic-gazebo-ros-pkgs ros-kinetic-gazebo-ros-control


Fig. 2.33 Gazebo interface

Gazebo starts up with the interface shown in Fig. 2.33. If an invalid parameter vmw_ioctl_command error appears, try setting export SVGA_VGPU10=0.
WORLD: The World tab displays the models that are currently in the scene and enables the user to view and modify model parameters. In addition, the camera view angle can be changed by expanding the “GUI” option and tweaking the camera pose.
INSERT: The Insert tab is where new objects (models) are added.
LAYER: A layer may contain one or more models. Toggling a layer on or off displays or hides the models in that layer.
The upper toolbar is the main toolbar of Gazebo and contains the most-used options for interacting with the simulator, such as buttons to select, move, rotate, and scale objects, create simple shapes (e.g., cubes, spheres, and cylinders), and copy/paste models.
The Gazebo simulation environment can be built in two ways: either by directly adding a model of the robot or external objects through the model list or by manually drawing a map with the Building Editor tool provided by Gazebo. As shown in Fig. 2.34, both of these methods are usually required to build a more realistic simulation environment.

2.5.2 Rosbag: Data Logging and Replay

When conducting robot system simulations, it is often necessary to record experimental session data to produce datasets for subsequent analysis, processing, and algorithm validation and to facilitate the sharing of dataset resources. ROS provides the rosbag tool for data logging and playback that is able to store all messages, message types, timestamps, and other information published by nodes


Fig. 2.34 Gazebo simulation environment

to their topics in a bag structure (mentioned in Sect. 2.3.1). These bags are used to play back experimental data offline and simulate real situations. The bag is a container that includes the messages sent on each topic and is used to record the session process between the nodes of the robot. Furthermore, the bag file is a record of the message deliveries during system operation, from which the entire process can be replayed, even including the time delays. Therefore, messages are recorded with a timestamp. Not only is there a timestamp in the message header, but there is also a timestamp recorded when the message is logged. The difference between the timestamp used to record a message and the timestamp within the message header is that the former is the time when the message is recorded, while the latter is the time when the message is generated and published. The data stored in the bag take a binary format that is capable of supporting the processing and logging of ultra-high-speed data streams. As the file grows, the speed of data logging decreases; to solve this problem, compression algorithms can be utilized to compress the data. For both recording and playing back bags, regular expressions or topic lists can be used to filter the topics to be processed. The subscribed topics can also be selected by specifying the node name, or all topics can be processed.

2.5.2.1 Use rosbag to Record Data

Once the nodes are running properly, data logging can be executed through the record command of the rosbag tool. For example:

$ rosbag record /camera /tf -j -O test

The abovementioned command uses the bz2 compression algorithm to save the data as a package file named test.bag. By default, the rosbag program subscribes to the relevant topics and starts recording messages; it then stores the data in a bag in the


Fig. 2.35 Record command parameters for the rosbag tool

current directory. Press Ctrl+C to end the recording when logging is complete. The specific parameters of the record command are shown in Fig. 2.35. In addition, calling rosbag record in a launch file also allows data logging; see the sketch below.
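A minimal sketch of the launch-file approach (the node name and the recorded topics are illustrative):

<launch>
  <!-- record the /camera and /tf topics into a bag file prefixed with "session" -->
  <node pkg="rosbag" type="record" name="bag_record"
        args="-O session /camera /tf"/>
</launch>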

2.5.2.2 Play Back Bag Files

To play back the recorded messages, run roscore without running any other nodes and then execute the following command:

$ rosbag play xxx.bag

During playback, the spacebar can be used to pause, and the “S” key can be used to step through the playback, which stops automatically at the end. Playing back the bag file reproduces the scene in ROS as if it were a real-time session and sends the data to the topics. Thus, this function is often used to debug algorithms.

2.5.2.3 Check the Topic and Message of the Bag

There are two ways to view the data inside a bag. The first method is to use the info command of rosbag, i.e., rosbag info <bag_name>. As shown in Fig. 2.36, this command displays information about the bag file, such as the date of creation, duration, file size, number of internal messages, compression format, and internal data types, as well as the list of topics with their corresponding names, numbers of messages, and message types. The second method is to use the rqt tool, as demonstrated in Fig. 2.37. The rqt_bag tool has a graphical interface. Additionally, it supports recording and playing back bags, viewing images (if available), and plotting scalar data graphs and message data structures. However, it should be noted that playback with this tool is not the same as with rosbag: only the data are displayed, and the topics are not published; thus, this tool cannot be used for algorithm debugging.

Fig. 2.36 Results of running the rosbag info command

Fig. 2.37 Rqt_bag tool interface

2.5.3 Debugging Toolkit of ROS

ROS provides a set of visualization tools for robot development and debugging; the collection of these tools is called the rqt toolkit. The rqt toolkit is included in the ROS desktop-full installation. If it is not installed, the following commands can be used to install it.

$ sudo apt-get install ros-kinetic-rqt
$ sudo apt-get install ros-kinetic-rqt-common-plugins

2.5.3.1 Rqt_console

Rqt_console is used to display log information such as Message, Severity, Node, Stamp, Topics, and Location. Rqt_console can be invoked by typing the following command:

$ rqt_console

After a successful launch, the screen appears as shown in Fig. 2.38, which demonstrates the current system log records.

2.5.3.2 Rqt_graph

Rqt_graph displays the peer-to-peer network in which ROS processes data. When this program runs, all processes and the data they exchange are represented in the form of a computation graph, i.e., by nodes, node managers, topics, and services. The following command can be used to invoke rqt_graph.

$ rqt_graph

Fig. 2.39 is an example of a computational graph for a turtlesim routine.

2.5.3.3 Rqt_plot

Rqt_plot is a 2D numerical curve plotting tool that plots data in the xy coordinate system. Rqt_plot is used for observing trends in variables over time. The following command can be used to invoke rqt_plot:


Fig. 2.38 Example of rqt_console tool interface

Fig. 2.39 Example of rqt_graph tool interface



Fig. 2.40 Example of the rqt_plot tool interface

$ rqt_plot

Once rqt_plot is started, the topic message to be displayed is entered at the top of the interface. In the turtlesim routine, the changes in the x and y coordinates of the turtle are depicted by the rqt_plot tool, as shown in Fig. 2.40.

2.5.3.4 Rqt_reconfigure

Robot hardware debugging requires dynamically adjusting and setting the values of running node parameters (enabling users to modify variables not only at launch but also during runtime). In this way, users can develop and test nodes more efficiently. The following command can be used to invoke rqt_reconfigure:

$ rosrun rqt_reconfigure rqt_reconfigure

Once rqt_reconfigure is launched, the interface displays all the current dynamically configurable parameters of the system. These parameters can also be configured dynamically via input boxes, sliders, or drop-down boxes. Figure 2.41 illustrates an example of dynamic parameter configuration for the Kinect camera.


Fig. 2.41 Example of rqt_reconfigure tool interface

2.5.3.5 Roswtf

In addition to the rqt toolkit, ROS provides the roswtf tool to check the tf configuration, environment variables, packages, and roslaunch files for potential issues. Roswtf supports plugin extensions and has the following three uses.
1. Check the system installation and environment variables: run roswtf directly.
2. Check a package: switch to the package directory first and then run roswtf.
3. Check a roslaunch file: run roswtf yourfile.launch.
Roswtf supports the following options:
• --all: check all packages in the path pointed to by the ROS_PACKAGE_PATH variable.
• --no-plugins: disable roswtf plugins.
• --offline: run offline tests only.

Generally, a warning test result has no effect on the normal operation of the system, and if there is a system abnormality, the error message will help to determine the problem.

2.6 Motion Control of Spark Chassis

Readers have already learned how to write and run a simple ROS program and how to run the turtlesim. Next, controlling a real robot in motion is described. Spark comes with a PC on which a communication node has been implemented that subscribes to a topic called "cmd_vel." Once the communication node receives


the message, it can control the rotation of the chassis. Therefore, users only need to publish on the "cmd_vel" topic in their program and keep publishing the corresponding messages to control the Spark chassis in real time. Similar to the "turtle_teleop_key" node in the turtlesim simulation, a keyboard control node can be implemented on the upper PC side to convert key presses into Twist-formatted speed control messages, which are published via the "cmd_vel" topic. The implementation of the keyboard control node is based on the relevant code in the spark_teleop package; refer to spark_teleop/src/teleop_node.cpp for the detailed code. After configuring the environment variables, the keyboard control node can be launched via the teleop.launch file in the spark_teleop package. A sketch of teleop.launch is shown below; the four keys "w," "s," "a," and "d" are used to control the motion of the Spark chassis, such as forward and backward movement.
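Because the exact listing depends on the Spark software version, the following is only a hypothetical sketch of what teleop.launch might contain; the included file path, node type, and node name are assumptions, not the actual file.

<launch>
  <!-- bring up the Spark chassis driver -->
  <include file="$(find spark_base)/launch/spark_base.launch"/>
  <!-- keyboard teleoperation node publishing Twist messages on the cmd_vel topic -->
  <node pkg="spark_teleop" type="teleop_node" name="keyboard_control" output="screen"/>
</launch>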







Task 3 Getting Spark in Motion
In practical applications, mobile robots need to achieve autonomous motion rather than being operated under human control. Therefore, in this task, readers will learn how to make Spark move by itself by publishing Twist messages on the cmd_vel topic. Open a terminal, go to the ~/spark/ directory, and launch the Spark session by entering the following commands:

$ source devel/setup.bash
$ roslaunch spark_move_straight spark_move_straight.launch


Fig. 2.42 spark_move_straight folder

After running the routine, Spark moves forward in a straight line, similar to making Spark spin in place as presented in Chap. 1. Here is a specific look at the implementation process. Open the spark_move_straight folder in the file manager. This folder contains the launch folder, the src folder, the CMakeLists.txt file, and the package.xml file, as shown in Fig. 2.42. First, open the src folder containing spark_move_straight.cpp, which has the following content:

#include "ros/ros.h"
#include <geometry_msgs/Twist.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "spark_move_straight_node"); // ROS node initialization
  ROS_INFO("The spark are moving straight!!!"); // terminal output text
  ros::NodeHandle nh; // create node handle
  ros::Publisher cmd_pub; // create a Publisher
  // publish a topic named /cmd_vel with message type geometry_msgs::Twist
  cmd_pub = nh.advertise<geometry_msgs::Twist>("/cmd_vel", 1);
  ros::Rate rate(10); // set the frequency of the loop
  double prev_sec = ros::Time().now().toSec(); // get the current time
  int sec = 3; // motion time: 3 s
  while (ros::ok())
  {
    if (ros::Time().now().toSec() - prev_sec > sec) // whether time is up
      break;
    geometry_msgs::Twist vel; // velocity command
    vel.linear.x = 0.1; // velocity of motion along the x-axis
    vel.angular.z = 0; // speed of rotation around the z-axis
    cmd_pub.publish(vel); // publish the speed topic
    rate.sleep(); // delay according to the loop frequency
  }



Fig. 2.43 Dependencies contained in package.xml
Fig. 2.44 Packages contained in CMakeLists.txt

Fig. 2.45 Launch file

ROS_INFO("Spark stop moving!!!!") ; ros::spin(); // loop waiting for callback function return 0; }

With respect to the Spark motion section, the velocity topic is published after creating the velocity object vel and setting the velocity of motion along the x-axis and the velocity of rotation around the z-axis. In addition, this routine sets the runtime (3 s) for Spark. To make the robot rotate, modify the velocity vel.angular.z around the z-axis. These velocity parameters need to be changed dynamically in many applications, depending on the actual situation, such as in robot obstacle avoidance. In the package.xml file, the dependencies are included as demonstrated in Fig. 2.43. Among these, the spark_base dependency is used to drive the Spark hardware (e.g., the motors on the chassis). Similarly, the spark_base package needs to be included in CMakeLists.txt, as shown in Fig. 2.44. In the launch folder, the spark_base.launch file also needs to be included to get Spark in motion, as shown in Fig. 2.45. Information about the running process of this task routine is illustrated in Fig. 2.46.
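For example, to make Spark rotate in place instead of moving straight, the velocity assignments inside the loop could be changed as follows (a sketch; the angular velocity value is illustrative):

    vel.linear.x = 0.0;   // stop translating along the x-axis
    vel.angular.z = 0.5;  // rotate around the z-axis at about 0.5 rad/s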


Fig. 2.46 Information about the program running process
Fig. 2.47 Remote control handle

2.7 Introduction to ROS External Devices

Using ROS external devices enables the robot to interact with the real world. The ROS package and ROS community support a wide range of external devices with stable interfaces. This section presents the common external devices used by ROS and how they are used with robots.

2.7.1 Remote Control Handle

2.7.1.1 Configuration and Use

A remote control handle is a device with a set of buttons and potentiometers that enables control of multiple movement modes of a robot, as shown in Fig. 2.47. ROS supports most general-purpose remote control handles as input devices. The following section describes how to use such devices in ROS, and users will learn how to operate them with examples such as the turtlesim.


Fig. 2.48 Result of running the ls command

First, some packages need to be installed and configured using the following commands.

$ sudo apt-get install ros-kinetic-joystick-drivers
$ rosstack profile & rospack profile

Use the ls command to check whether the remote control handle has been recognized, as shown in Fig. 2.48.

$ ls /dev/input/

Use the jstest command to test the operation of the remote control handle (js0, js1, etc.).

$ sudo jstest /dev/input/js1

At this time, the remote control handle will produce the corresponding output, as demonstrated in Fig. 2.49. If everything works, the remote control handle can be used in the ROS system. The joy package and its joy_node node are required here.

2.7.1.2 Sending a Remote Handle Action Message

First run the node manager and then start joy_node, as shown in Fig. 2.50.

$ roscore
$ rosrun joy joy_node

View the messages published by the node. As demonstrated in Fig. 2.51, these messages include the joystick axis messages and button messages of the remote control handle.

$ rostopic echo /joy


Fig. 2.49 Jstest command run result

Fig. 2.50 Start joy_node

View the message type, as shown in Fig. 2.52.

$ rostopic type /joy

View the fields used in the message, as shown in Fig. 2.53.

$ rosmsg show sensor_msgs/Joy


Fig. 2.51 View node release messages

Fig. 2.52 View message type

Fig. 2.53 View the fields used in the message

2.7.1.3 Controlling the turtlesim with the Remote Control Handle

After launching roscore, launch the turtlesim node (requires installation of the turtlesim package). $ roscore $ rosrun turtlesim turtlesim_node


Fig. 2.54 Remote control handle control effect

View the list of topics with the rostopic list command; the topic /turtle1/cmd_vel will appear. Use the following commands to view the topic type and message content:
$ rostopic type /turtle1/cmd_vel
$ rosmsg show geometry_msgs/Twist

The following section introduces how to associate the speed command of the remote control handle with the turtlesim. Download teleop_twist_joy (code address: https://github.com/ros-teleop/teleop_twist_joy.git). Then, find and modify the following code snippet in teleop_twist_joy/src/teleop_twist_joy.cpp.
pimpl_->cmd_vel_pub = nh->advertise<geometry_msgs::Twist>("/turtle1/cmd_vel", 1, true);

After the previous step is completed, compile in the catkin_ws workspace. Then, in a different terminal, enter the command after starting roscore: $ rosrun joy joy_node $ rosrun teleop_twist_joy teleop_node

As shown in Fig. 2.54, running this command will enable the control of the turtlesim movement with the remote control handle. The movement of the simulated robot in Gazebo and the real robot can be controlled by following similar steps. If a remote control handle is not available, the simple remote control of the robot can also be performed via the keyboard or mouse using the following command, as shown in Fig. 2.55.


Fig. 2.55 Keyboard or mouse remote control effect Fig. 2.56 LiDAR

$ roscore
$ rosrun turtlesim turtlesim_node
$ rosrun teleop_twist_keyboard teleop_twist_keyboard.py
$ rosrun mouse_teleop mouse_teleop.py

In addition to the three typical input devices for ROS (remote control handle, keyboard, and mouse), other devices, such as mobile phones or tablets, can also be used for information interaction and control of the robot. The details of their usage can be explored further in other references.

2.7.2 LiDAR

When moving a mobile robot, it is necessary to obtain information about the location and direction of obstacles and the profile of the site in the environment. Using LiDAR, it is possible to measure the distance between the robot and other objects; the LiDAR data can also be used to construct maps for navigation. Figure 2.56 illustrates the LiDAR.

2.7.2.1 Configuration and Use

The Spark robot has integrated drivers and companion programs for LiDAR. Users can download the code from GitHub and install the dependencies. $ git clone https://github.com/NXROBO/spark.git

It is important to note that when launching the laser node, issues generally occur because the rules are not installed correctly, which can be resolved by installing the Spark dependencies via the onekey.sh script. Use the following command to determine whether the dependencies are installed correctly (this step is not normally required), as illustrated in Fig. 2.57. Note: As the hardware is updated over time, the LiDAR model configured on Spark may change; hence, the parameters need to be modified according to the actual model.
$ ll /dev/ | grep 3ilidar
or
$ ll /dev/ | grep ydlidar

Once it is working properly, the LiDAR can be used to obtain information. If EAI LiDAR (g6) is used, launch the node using the following command: $ cd ~/spark $ source devel/setup.bash $ roslaunch lidar_driver_transfer ydlidar_g6.launch

Note: The parameters of the roslaunch command vary depending on the LiDAR model. Figure 2.58 shows the effect after the node is launched normally.

Fig. 2.57 Check if dependencies are installed correctly. (a) If Spark is configured with ShanChuan LiDAR (3i), operation parameter: 3ilidar. (b) If Spark is configured with EAI LiDAR, operation parameter: ydlidar


Fig. 2.58 Effect of a normal node launch

2.7.2.2 LiDAR Releases Data in ROS

Use rostopic to see if the Topic is publishing data. $ rostopic list

Successful publication of LiDAR data will include /scan, which is the topic on which the LiDAR messages are published. The node type, data structure, and more information about how the LiDAR works and publishes data can be viewed with the following commands:
$ rostopic type /scan
$ rosmsg show sensor_msgs/LaserScan
$ rostopic echo /scan
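Beyond inspecting /scan on the command line, the data can also be consumed in a node. Below is a minimal sketch (illustrative node name, not a Spark package) that subscribes to /scan and reports the nearest valid return in each sweep:

#include "ros/ros.h"
#include "sensor_msgs/LaserScan.h"
#include <limits>

// Callback: find the smallest valid range in the scan
void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan)
{
  float nearest = std::numeric_limits<float>::infinity();
  for (size_t i = 0; i < scan->ranges.size(); ++i)
  {
    float r = scan->ranges[i];
    if (r >= scan->range_min && r <= scan->range_max && r < nearest)
      nearest = r;
  }
  ROS_INFO("Nearest obstacle: %.2f m", nearest);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "scan_listener");   // illustrative node name
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/scan", 1, scanCallback);
  ros::spin();
  return 0;
}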

For ease of understanding, the following command can also be used to visualize the data in the rviz GUI. The rviz default interface is shown in Fig. 2.59. $ rviz


Fig. 2.59 Rviz default interface Fig. 2.60 Fixed frame option

Then, in the “Global Options”—“Fixed Frame” option on the “Displays” page, select “laser_frame” as shown in Fig. 2.60. After clicking the “Add” button, double click on “Create visualization”—“By topic” –“/point_cloud”–“PointCloud,” as shown in Fig. 2.61. After returning to the main rviz screen, the shape of the environment profile scanned by the LiDAR can be seen, as illustrated in Fig. 2.62.

2.7.3 Vision Sensor

In ROS systems, vision sensors mainly include two device types, RGB monocular cameras and depth cameras. The most common of the monocular cameras is the USB camera. It is simple to use these devices in ROS. First, install the corresponding package.


Fig. 2.61 Rviz selection screen

Fig. 2.62 LiDAR data displayed in rviz

$ sudo apt-get install ros-kinetic-usb-cam

Before using it, it is important to note that the pixel_format parameter needs to be set to match the camera's output format (e.g., yuyv).


Fig. 2.63 Camera node running result

Otherwise, errors will occur and the image will not appear. Launch the USB camera node with the following commands and test whether it works properly.
$ roscore
$ rosrun usb_cam usb_cam_node
$ roslaunch usb_cam usb_cam-test.launch

The contents of the usb_cam-test.launch file are as follows:











Launch the usb_cam-test.launch file; the running result is shown in Fig. 2.63.


Fig. 2.64 View Topic list Fig. 2.65 Orbbec 3D camera

The display can also be viewed by using the following commands.
$ roscore
$ rosrun rqt_image_view rqt_image_view

Use rostopic list to view the list of Topics as shown in Fig. 2.64. Fig. 2.65 shows a depth camera produced in China: the Orbbec 3D camera Astra with integrated microphone array. The Spark robot already has the camera drivers and dependencies installed. The node can be run after plugging in the camera USB port. $ roslaunch astra_camera astra.launch

Rostopic list can be used to view the list of Topics and perform related operations. Use /camera/depth/image to view depth information and /camera/rgb/image_raw to view RGB images, as shown in Fig. 2.66. The rqt_image_view package can also be used directly to view images.
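To process these images in a user node rather than just viewing them, the image message can be converted to an OpenCV matrix with cv_bridge. The following is a minimal sketch (illustrative node name; it assumes the usual cv_bridge, image_transport, and OpenCV dependencies are added to the package) that displays the RGB stream:

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/highgui/highgui.hpp>

// Callback: convert the ROS image to cv::Mat and show it in a window
void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
  try
  {
    cv::Mat frame = cv_bridge::toCvShare(msg, "bgr8")->image;
    cv::imshow("rgb", frame);
    cv::waitKey(1);
  }
  catch (const cv_bridge::Exception& e)
  {
    ROS_ERROR("cv_bridge exception: %s", e.what());
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "rgb_viewer");   // illustrative node name
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);
  image_transport::Subscriber sub = it.subscribe("/camera/rgb/image_raw", 1, imageCallback);
  ros::spin();
  return 0;
}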


Fig. 2.66 View depth information and RGB images. (a) Depth information. (b) RGB image

Fig. 2.67 Rviz displaying point cloud data

Select the Topic to view through the list on the top left of the interface. To view point cloud data, PointCloud2 can be added in rviz by clicking the Add button. The effect is shown in Fig. 2.67. The image can also be viewed in rviz.


2.7.4 Inertial Measurement Unit and Positioning Module

2.7.4.1 Inertial Measurement Unit

Inertial measurement units (IMUs) are electronic devices that measure quantities such as the acceleration and attitude of an object; they typically integrate a 3-axis gyroscope, a 3-axis accelerometer, and a magnetometer. The Xsens MTi is a common IMU, as shown in Fig. 2.68. First, install the IMU driver by running the following commands.
$ sudo apt-get install ros-kinetic-xsens-driver
$ rosstack profile && rospack profile

Then, launch the device node to view the sensor data.
$ roslaunch xsens_driver xsens_driver.launch
$ rostopic echo /imu/data

It is also possible to display visualized attitude data in rviz.

2.7.4.2 Positioning Module

Fig. 2.68 Xsens MTi IMU

The robot uses a positioning module to obtain information about its position. Common indoor positioning technologies include Bluetooth, Wi-Fi, Zigbee, ultra-wideband (UWB), IMU, infrared positioning, ultrasonic positioning, and LoRa Edge. Common outdoor positioning systems include the BeiDou Navigation Satellite


Fig. 2.69 NMEA GPS module

System (BDS, China), the Global Positioning System (GPS, U.S.), GLONASS (Russia), and Galileo (Europe), as well as Real Time Kinematic (RTK) and other Global Navigation Satellite System (GNSSS) techniques. The following section describes how to use a GPS module in ROS, taking the NMEA GPS module shown in Fig. 2.69 as an example. First, install the NMEA GPS module driver.
$ sudo apt-get install ros-kinetic-nmea-*
$ rosstack profile && rospack profile

Launch the node; _port is the serial port and _baud is the baud rate.
$ roscore
$ rosrun nmea_navsat_driver nmea_serial_driver _port:=/dev/rfcomm0 _baud:=115200

Rostopic list can be used to view the list of Topics for /fix, and rostopic echo /fix can be used to view the content of Topic messages. The results are shown in Figs. 2.70 and 2.71, respectively. In addition, the following command can be used to view the raw NMEA data, and the results are shown in Fig. 2.72.


Fig. 2.70 View the list of Topics

Fig. 2.71 View the contents of Topic messages

Fig. 2.72 View raw NMEA data



$ rosrun nmea_navsat_driver nmea_topic_serial_reader _port:=/dev/rfcomm0 _baud:=115200
$ rostopic echo /nmea_sentence
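The /fix topic can likewise be consumed programmatically. A minimal sketch (illustrative node name) that prints latitude and longitude from sensor_msgs/NavSatFix messages might look as follows:

#include "ros/ros.h"
#include "sensor_msgs/NavSatFix.h"

// Callback: print the received latitude/longitude in degrees
void fixCallback(const sensor_msgs::NavSatFix::ConstPtr& fix)
{
  if (fix->status.status >= sensor_msgs::NavSatStatus::STATUS_FIX)
    ROS_INFO("lat: %.6f  lon: %.6f  alt: %.1f m", fix->latitude, fix->longitude, fix->altitude);
  else
    ROS_WARN("No GPS fix yet");
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "gps_listener");   // illustrative node name
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/fix", 10, fixCallback);
  ros::spin();
  return 0;
}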

If no IMU or positioning module is available, a cell phone can be used to obtain these data instead. Most smartphones have built-in three-axis gyroscopes and accelerometers, as well as BDS or GPS receivers, and their data can be transmitted to a PC via Bluetooth, where ROS can then use them.

2.7.5 Servo Motor

Servos are also often used in desktop articulated robots and small mobile robots. This section gives a brief introduction to the use of the Dynamixel servo motor, shown in Fig. 2.73. First, install the driver.
$ sudo apt-get install ros-kinetic-dynamixel-*
$ rosstack profile && rospack profile

Once the installation is complete, connect the device to a computer and run the following command; the results are shown in Fig. 2.74. $ roslaunch dynamixel_tutorials controller_manager.launch

Then, run the following command: $ roslaunch dynamixel_tutorials controller_spawner.launch

Fig. 2.73 Dynamixel servo motor


Fig. 2.74 Result of running controller_manager.launch

After that, publish a message to the /tilt_controller/command topic via rostopic pub to drive the motor. Note: 0.5 is a value in radians, equal to approximately 28.65 degrees. $ rostopic pub /tilt_controller/command std_msgs/Float64 -- 0.5
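The same command can also be sent from a node. A minimal sketch (illustrative node name, not part of the dynamixel packages) that publishes a single position command is shown below:

#include "ros/ros.h"
#include "std_msgs/Float64.h"

int main(int argc, char** argv)
{
  ros::init(argc, argv, "tilt_commander");   // illustrative node name
  ros::NodeHandle nh;
  // Latched publisher so the single command is delivered even to late subscribers
  ros::Publisher pub = nh.advertise<std_msgs::Float64>("/tilt_controller/command", 1, true);
  ros::Duration(1.0).sleep();     // allow the connection to establish
  std_msgs::Float64 cmd;
  cmd.data = 0.5;                 // target position in radians (about 28.65 degrees)
  pub.publish(cmd);
  ros::Duration(1.0).sleep();     // keep the node alive briefly so the message is sent
  return 0;
}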

2.7.6 Embedded Controller

The underlying motion of mobile robots often employs an embedded controller, such as Arduino or STM32, which are shown in Fig. 2.75. The data interaction between the underlying embedded controller and the computer is generally established using the serial port. ROS establishes the communication connection with the embedded controller through the rosserial package.

2.7.6.1 Basic Configuration

This section introduces connection and usage of the embedded controller with ROS, considering Arduino as an example. First, install the package by entering the following command:


Fig. 2.75 Arduino and STM32

$ sudo apt-get install ros-kinetic-rosserial-arduino $ sudo apt-get install ros-kinetic-rosserial

If the Arduino IDE (Integrated Development Environment) is already installed, this step can be omitted. Otherwise, run the following command:
$ sudo apt-get update && sudo apt-get install arduino arduino-core

After the Arduino IDE is installed, run the following commands to generate the ros_lib library that enables data interaction with the embedded controller. Note: The storage path for ros_lib needs to be specified at the end of the command.
$ roscore
$ rosrun rosserial_arduino make_libraries.py /tmp

2.7.6.2 Usage Example

A wealth of example code is provided in the ros_lib library, which contains the ros.h header file. For example, the code for the hello world routine is as follows.

#include <ros.h>
#include <std_msgs/String.h>

ros::NodeHandle nh;   // set node handle


std_msgs::String str_msg;
// create a Publisher, set the name of the topic to be published and its message type
ros::Publisher chatter("chatter", &str_msg);

char hello[13] = "hello world!";

void setup()
{
  nh.initNode();          // node initialization
  nh.advertise(chatter);  // advertise the topic
}

void loop()
{
  str_msg.data = hello;
  chatter.publish(&str_msg);  // publish the message
  nh.spinOnce();              // handle ROS communication once per loop
  delay(1000);
}

The code consists of two main functions: setup() and loop(), which are used for initialization and continuous operation, respectively. The name of the Topic sending the message is chatter. After launching roscore, run the following command in a new terminal: $ rosrun rosserial_python serial_node.py /dev/ttyACM0

View the messages published by the Arduino via rostopic echo /chatter. Furthermore, the IO ports of the embedded controller can be controlled, or various sensor data can be acquired, by referring to the other example code and related documentation.
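As an illustration of controlling an IO port, the Blink-style example that ships with ros_lib subscribes to a topic and toggles the onboard LED. A sketch along those lines is shown below (the topic name toggle_led is illustrative):

#include <ros.h>
#include <std_msgs/Empty.h>

ros::NodeHandle nh;

// Toggle the built-in LED each time an empty message arrives
void messageCb(const std_msgs::Empty& toggle_msg)
{
  digitalWrite(LED_BUILTIN, HIGH - digitalRead(LED_BUILTIN));
}

ros::Subscriber<std_msgs::Empty> sub("toggle_led", &messageCb);

void setup()
{
  pinMode(LED_BUILTIN, OUTPUT);
  nh.initNode();
  nh.subscribe(sub);
}

void loop()
{
  nh.spinOnce();
  delay(1);
}

After uploading the sketch and starting serial_node.py as above, the LED can be toggled from the PC with rostopic pub toggle_led std_msgs/Empty --once.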

2.8 Summary

This chapter introduces the fundamentals of ROS and its installation and communication mechanisms. After completing the configuration of the ROS environment, readers can write a ROS program and become familiar with the concepts of the ROS workspace and packages. In addition, readers can understand the writing and running of nodes and run the turtlesim that comes with ROS. Thereafter, visualized debugging and getting the Spark robot in motion are discussed. In this way, readers can familiarize themselves with the external devices of the robot system.


Further Reading

The mathematical knowledge of coordinate system transforms.

Description of a Position

$$ {}^{A}P = \begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix} $$

The expression above describes a point P with respect to a coordinate system {A} defined by three mutually orthogonal unit vectors. The point P is represented as a vector and can equivalently be considered as a position in space, or simply as an ordered set of three numbers. The individual elements of the vector are assigned the subscripts x, y, and z. In summary, the position of a point in space is described by a position vector.

Description of an Orientation

Often, we will find it necessary not only to represent a point in space but also to describe the orientation of a body in space. In order to describe the orientation of a body, we attach a coordinate system to the body and then give a description of this coordinate system relative to the reference system. Thus, one way to describe the body-attached coordinate system {B} is to write the unit vectors of its three principal axes in terms of the coordinate system {A}. We denote this rotation matrix by ${}^{A}_{B}R$:

$$ {}^{A}_{B}R = \begin{bmatrix} {}^{A}\hat{X}_B & {}^{A}\hat{Y}_B & {}^{A}\hat{Z}_B \end{bmatrix} = \begin{bmatrix} \hat{X}_B\cdot\hat{X}_A & \hat{Y}_B\cdot\hat{X}_A & \hat{Z}_B\cdot\hat{X}_A \\ \hat{X}_B\cdot\hat{Y}_A & \hat{Y}_B\cdot\hat{Y}_A & \hat{Z}_B\cdot\hat{Y}_A \\ \hat{X}_B\cdot\hat{Z}_A & \hat{Y}_B\cdot\hat{Z}_A & \hat{Z}_B\cdot\hat{Z}_A \end{bmatrix} $$

The matrix has the following property:

$$ {}^{A}_{B}R^{-1} = {}^{B}_{A}R = {}^{A}_{B}R^{T} $$


Description of a Frame

The situation of a position and orientation pair arises so often in robotics that we define an entity called a frame, which is a set of four vectors giving position and orientation information. For example, frame {B} is described by ${}^{A}_{B}R$ and ${}^{A}P_{BORG}$, where ${}^{A}P_{BORG}$ is the vector that locates the origin of frame {B}:

$$ \{B\} = \left\{ {}^{A}_{B}R,\; {}^{A}P_{BORG} \right\} $$

Transformation from One Frame to Another

Suppose we know the description of a vector with respect to some frame {B}, and we would like to know its description with respect to another frame {A}. We now consider the general case of transformation. Here, the origin of frame {B} is not coincident with that of frame {A} but has a general vector offset. The vector that locates the origin of {B} is ${}^{A}P_{BORG}$. In addition, {B} is rotated with respect to {A}, as described by ${}^{A}_{B}R$. Given ${}^{B}P$, we wish to compute ${}^{A}P$.

We can first change ${}^{B}P$ to its description relative to an intermediate frame that has the same orientation as {A} but whose origin is coincident with the origin of {B}. We then account for the translation between origins by simple vector addition and obtain the following:

$$ {}^{A}P = {}^{A}_{B}R\,{}^{B}P + {}^{A}P_{BORG} $$

Then, we introduce the transformation from one frame to another as an operator in matrix form:

$$ {}^{A}P = {}^{A}_{B}T\,{}^{B}P $$

This formula has the structure below; the 4×4 matrix in it is called a homogeneous transform matrix. It can be seen as a compact matrix representation of the rotation and translation of a general transformation.

$$ \begin{bmatrix} {}^{A}P \\ 1 \end{bmatrix} = \begin{bmatrix} {}^{A}_{B}R & {}^{A}P_{BORG} \\ 0\;\;0\;\;0 & 1 \end{bmatrix} \begin{bmatrix} {}^{B}P \\ 1 \end{bmatrix} $$
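As a brief worked example (the numbers here are chosen for illustration and are not taken from the book), let frame {B} be rotated 30° about $\hat{Z}_A$ and translated 10 units along $\hat{X}_A$ and 5 units along $\hat{Y}_A$, and let ${}^{B}P = (3, 7, 0)^{T}$. Then:

$$ {}^{A}_{B}R = \begin{bmatrix} \cos 30^{\circ} & -\sin 30^{\circ} & 0 \\ \sin 30^{\circ} & \cos 30^{\circ} & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad {}^{A}P_{BORG} = \begin{bmatrix} 10 \\ 5 \\ 0 \end{bmatrix} $$

$$ {}^{A}P = {}^{A}_{B}R\,{}^{B}P + {}^{A}P_{BORG} = \begin{bmatrix} 0.866\cdot 3 - 0.5\cdot 7 \\ 0.5\cdot 3 + 0.866\cdot 7 \\ 0 \end{bmatrix} + \begin{bmatrix} 10 \\ 5 \\ 0 \end{bmatrix} \approx \begin{bmatrix} 9.10 \\ 12.56 \\ 0 \end{bmatrix} $$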

Exercises
1. What does a ROS package contain? Which two of these components are indispensable in forming the smallest unit of a package?
2. What is the framework of the workspace? How many folders does it contain?
3. What are the three communication mechanisms of ROS? Explain their features and differences.
4. Using ROS, multi-node deployment on different control machines can be realized for distributed communication, thereby decoupling the system. In a LAN, the IP address of the ROS master node is 192.168.0.101 and the subnet mask is 255.255.255.0; how many computer controllers can be deployed in the LAN for communication?
5. Briefly describe the difference between a topic and a service.
6. What is the meaning of the coordinate TF? Give examples of robot application scenarios where coordinate TF might be used.
7. According to the principle of TF transformation, for a 7-DOF robotic arm, the homogeneous transformation matrix of the i-axis coordinate system {i} relative to the (i-1)-axis coordinate system {i-1} is expressed as ${}^{i-1}_{i}T$. How can the transformation matrix ${}^{0}_{7}T$ of the end coordinate system {7} relative to the base coordinate system {0} be solved?
8. The expression of TF can be represented by Euler angles and quaternions; derive the transformation formula between them.
9. What is the difference between Gazebo and rviz?
10. What are the external devices of ROS? Which communication mechanism of ROS is used to communicate with the robot controller?
11. Write ROS nodes to obtain computer camera data, publish the data as a topic, and then display the topic content at the terminal.

Chapter 3

Robot System Modeling

Abstract Robot system modeling is the foundation for verifying a robot’s motion control and simulation. The 3D robot model in ROS is implemented through a Unified Robot Description Format (URDF) file, which is an XML file describing the robot as well as its parts structure, joints, and degrees of freedom (DOF). The 3D robot in the ROS system has a corresponding URDF file. This chapter covers several common-wheeled robot motion models and robot modeling with URDF. The robot motion control is performed through ROS to achieve synchronized motion between the simulated and real robots. Then, the typical sensor in robot systems, light detection and ranging (LiDAR), is discussed. Finally, readers can observe the scene profile point cloud data to experience environment sensing and obstacle detection from the robot’s perspective. Keywords Robot simulation · motion model · URDF modeling · motion control · environment perception

3.1 Mobile Chassis Motion Model and Control

3.1.1 Mobile Robot Motion Model and Position Representation

The most prevalent mobile robots are wheeled robots [1]. Next, several common wheeled robots are introduced in this section. The two-wheeled differential chassis adopts a differential drive, as shown in Fig. 3.1. The two power wheels are located on the left and right sides of the chassis, and a castor wheel is utilized to maintain balance. Each power wheel has an independent drive mechanism that controls its speed independently. Steering of the chassis is achieved by driving the two wheels at different speeds, and linear motion of the chassis is achieved by turning the drive wheels at the same rate and in the same direction. The mobile robot is treated as a rigid body that moves on a horizontal plane. The joints and DOF of the internal parts and wheels are ignored. The coordinate system is


Fig. 3.1 Two-wheeled differential chassis Fig. 3.2 Position representation of the robot

defined as shown in Fig. 3.2; therefore, the pose coordinate of the robot in the planar global coordinate system is $\xi_I = (x, y, \theta)^T$. The radius of the mobile robot wheel is r, and the distance from each driving wheel to the middle point P between the two wheels is d. The angular rotation velocities of the left and right wheels are $\dot{\varphi}_1$ and $\dot{\varphi}_2$, respectively; then, the linear velocities of the left and right wheels are $r\dot{\varphi}_1$ and $r\dot{\varphi}_2$, respectively. The velocity of the robot in the global coordinate system is defined as follows:


Fig. 3.3 Differential motion model

$$ \dot{\xi}_I = \begin{bmatrix} \dot{x}_I \\ \dot{y}_I \\ \dot{\theta}_I \end{bmatrix} $$

In the local coordinate system of the robot, the velocity of the robot at the point P in the middle of the two wheels is $\dot{\xi}_R = R(\theta)\dot{\xi}_I$, where $R(\theta)$ is the transformation matrix:

$$ \begin{bmatrix} \dot{x}_R \\ \dot{y}_R \\ \dot{\theta}_R \end{bmatrix} = R(\theta)\begin{bmatrix} \dot{x}_I \\ \dot{y}_I \\ \dot{\theta}_I \end{bmatrix}, \qquad \dot{\xi}_R = R(\theta)\dot{\xi}_I \;\Longrightarrow\; \dot{\xi}_I = R(\theta)^{-1}\dot{\xi}_R $$

The differential motion model is demonstrated in Fig. 3.3. Let the mobile robot move in the positive $X_R$ direction, so that the point P moves in the $X_R$ direction with a translational velocity of $\dot{x}_R$. When the two driving wheels rotate simultaneously, $\dot{x}_R = \frac{1}{2}r\dot{\varphi}_1 + \frac{1}{2}r\dot{\varphi}_2$. For a mobile robot in differential motion, point P has no translational velocity in the $Y_R$ direction, i.e., $\dot{y}_R = 0$.

If the left wheel rotates forward and the right wheel is stationary, point P rotates clockwise about the right wheel as the center of rotation, with rotational speed $\omega_1 = \frac{r\dot{\varphi}_1}{2d}$.

If the right wheel rotates forward and the left wheel is stationary, point P rotates counterclockwise about the left wheel as the center of rotation, with rotational speed $\omega_2 = \frac{r\dot{\varphi}_2}{2d}$.

Then, the rotational speed of the mobile robot is $\dot{\theta}_R = -\frac{r\dot{\varphi}_1}{2d} + \frac{r\dot{\varphi}_2}{2d}$. Therefore, the kinematic model of a differential-drive mobile robot is given as follows:


Fig. 3.4 Ackermann drive chassis

Fig. 3.5 Synchronous drive chassis

$$ \dot{\xi}_I = R(\theta)^{-1}\dot{\xi}_R = R(\theta)^{-1}\begin{bmatrix} \dfrac{r\dot{\varphi}_1}{2} + \dfrac{r\dot{\varphi}_2}{2} \\ 0 \\ -\dfrac{r\dot{\varphi}_1}{2d} + \dfrac{r\dot{\varphi}_2}{2d} \end{bmatrix} $$
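As an illustration, a direct implementation of this kinematic model is sketched below (hypothetical helper functions, not part of the Spark packages; the standard planar rotation matrix is assumed for R(θ)):

#include <cmath>

struct Twist2D { double vx, vy, wz; };   // chassis velocity in the robot frame

// Forward kinematics of a differential-drive chassis with wheel radius r and
// half-axle distance d: wheel speeds -> body-frame velocities.
Twist2D diffDriveForward(double phi_dot_left, double phi_dot_right, double r, double d)
{
  Twist2D v;
  v.vx = 0.5 * r * (phi_dot_left + phi_dot_right);          // x_R velocity
  v.vy = 0.0;                                               // no lateral motion
  v.wz = (r / (2.0 * d)) * (phi_dot_right - phi_dot_left);  // theta_R rate
  return v;
}

// Rotate the body-frame velocity into the global frame: xi_I_dot = R(theta)^(-1) * xi_R_dot
void toGlobal(const Twist2D& vr, double theta, double& x_dot, double& y_dot, double& theta_dot)
{
  x_dot     = std::cos(theta) * vr.vx - std::sin(theta) * vr.vy;
  y_dot     = std::sin(theta) * vr.vx + std::cos(theta) * vr.vy;
  theta_dot = vr.wz;
}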

Another common feature of wheeled robot chassis motion is front-wheel steering and rear-wheel drive, which belongs to the Ackermann drive type, as illustrated in Fig. 3.4. The front-wheel attitude is adjusted by a steering engine, thus changing the direction of the chassis motion. The two rear wheels connected to the motor rotate to provide forward power for the chassis. A synchronous drive mobile chassis utilizes a four-wheeled drive and differential steering, as shown in Fig. 3.5. Four-wheeled drive means that all four wheels of the


Fig. 3.6 Three-wheeled omnidirectional chassis Fig. 3.7 Four-wheeled orthogonal vertical chassis

chassis have an available driving force, and the power of the motor is distributed to all four wheels. Therefore, the wheels are less likely to slip in bad road conditions, and the controllability of the chassis is greatly improved. The three-wheeled omnidirectional mobile chassis is an omnidirectionally driven chassis with good mobility and a simple structure, as demonstrated in Fig. 3.6. The three omnidirectional wheels are 120° apart. Each omnidirectional wheel consists of several small rollers, and the generatrices of the rollers of each wheel form a complete circle. The robot can move either in the tangential direction of the wheel surface or in the axial direction of the wheel. The combination of these two motions can achieve motion in any direction in the plane. A four-wheeled orthogonal vertical chassis is another type of omnidirectional drive chassis. As shown in Fig. 3.7, it is composed of four omnidirectional wheels located at the vertices of the four sides. All four omnidirectional wheels are drive


Fig. 3.8 Mecanum wheel chassis

wheels driven by motors. This four-wheeled orthogonal vertical chassis can move in a straight line in any direction. The Mecanum wheel chassis, which is another type of omnidirectional drive chassis as shown in Fig. 3.8, uses four Mecanum wheels. The Mecanum wheel design was patented by the Swedish company Mecanum. Multiple rollers are mounted on each Mecanum wheel, and the rollers are set at a 45° angle to the hub axle. This structure allows for easy omnidirectional movement and good flexibility of the chassis. Compared with the omnidirectional wheel, the Mecanum wheel can be directly mounted in the same way as in a conventional four-wheeled chassis. Figure 3.9 demonstrates the kinematic analysis of the Mecanum wheel chassis. The relationship between the motion vectors of the moving platform and the external control inputs is as follows:

$$ \begin{bmatrix} v_{Rx} \\ v_{Ry} \\ w_R \end{bmatrix} = \frac{R}{4} \begin{bmatrix} 1 & -1 & -1 & 1 \\ 1 & 1 & 1 & 1 \\ -\dfrac{1}{l_a+l_b} & \dfrac{1}{l_a+l_b} & -\dfrac{1}{l_a+l_b} & \dfrac{1}{l_a+l_b} \end{bmatrix} \begin{bmatrix} w_{b1} \\ w_{b2} \\ w_{b3} \\ w_{b4} \end{bmatrix} $$

The motion vector [vRx, vRy, wR] is related to the angular velocities wbi of the four Mecanum wheels, and the 3-DOF independent motion of the moving platform can be controlled through wbi.
① When wb1 = wb2 = wb3 = wb4 ≠ 0, then vRx = wR = 0 and vRy = wbi·R, and the moving platform moves longitudinally.
② When wb1 = -wb2 = -wb3 = wb4 ≠ 0, then vRx = wbi·R and vRy = wR = 0, and the moving platform moves laterally.
③ When -wb1 = wb2 = -wb3 = wb4 ≠ 0, then wR = wbi·R/(la + lb) and vRx = vRy = 0, and the moving platform spins in place.
④ When wb1 + wb3 = wb2 + wb4 = w ≠ 0 and wb1 - wb2 = wb4 - wb3 = δ ≠ 0, then vRx ≠ 0, vRy ≠ 0, wR = 0, and the moving platform moves in a straight line in an oblique direction, at an angle β = arctan(w/δ) to the positive direction of the y-axis.


Fig. 3.9 Kinematic analysis of Mecanum wheel chassis. In the figure, vRx denotes the lateral moving velocity component of the mobile chassis; vRy denotes the vertical (longitudinal) moving velocity component of the mobile chassis; wR denotes the angular velocity of the mobile chassis; wbi refers to the angular velocity of Mecanum wheel i; wri denotes the angular velocity of the rotation of the roller about its own axis; wzi denotes the angular velocity of the wheel rotating around the axis perpendicular to the ground through the contact point Bi; la and lb denote the lateral and vertical distances between the center of the wheel and the center of the mobile chassis, respectively.

The inverse kinematic equation of the moving platform can be obtained by matrix transformation as follows:

$$ \begin{bmatrix} w_{b1} \\ w_{b2} \\ w_{b3} \\ w_{b4} \end{bmatrix} = \frac{1}{R} \begin{bmatrix} 1 & 1 & -(l_a+l_b) \\ -1 & 1 & l_a+l_b \\ -1 & 1 & -(l_a+l_b) \\ 1 & 1 & l_a+l_b \end{bmatrix} \begin{bmatrix} v_{Rx} \\ v_{Ry} \\ w_R \end{bmatrix} $$

Based on the inverse kinematic equation, the required rotational speed of the four wheels, i.e., the required motor output control, can be solved through the target moving speed of the moving platform. Fig. 3.10 demonstrates the relationship between the rotation of the four Mecanum wheels and the motion of the moving platform. If each wheel has rotation DOF and rolling DOF, it is called a steering wheel, as shown in Fig. 3.11. The steering wheel is an integrated mechanical structure with drive motor, steering motor, and gearbox. Through adjustments in the angle and speed of the steering wheel, the chassis can be steered and made to perform lane changes without the vehicle turning, which gives it great flexibility.
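For reference, a minimal sketch of this inverse kinematic mapping is given below (names are illustrative; R is the wheel radius and la, lb are the distances defined in Fig. 3.9):

#include <array>

// Compute the four Mecanum wheel angular velocities from the desired platform
// velocities (v_rx lateral, v_ry longitudinal, w_r yaw rate), following the
// inverse kinematic matrix above.
std::array<double, 4> mecanumInverse(double v_rx, double v_ry, double w_r,
                                     double R, double la, double lb)
{
  const double L = la + lb;
  std::array<double, 4> wb;
  wb[0] = ( v_rx + v_ry - L * w_r) / R;  // wheel 1
  wb[1] = (-v_rx + v_ry + L * w_r) / R;  // wheel 2
  wb[2] = (-v_rx + v_ry - L * w_r) / R;  // wheel 3
  wb[3] = ( v_rx + v_ry + L * w_r) / R;  // wheel 4
  return wb;
}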


Fig. 3.10 Schematic of omnidirectional motion Fig. 3.11 Steering wheel


3.1.2 URDF Modeling

3.1.2.1 Gazebo Simulation

The kinematics of robot can also be practiced through simulation if a physical robot platform is not available. Two common types of robot simulation are algorithmic simulation, which can be implemented through MATLAB (ROS is supported in R2012b and above) or V-rep, and real-world simulation, which can be implemented through Gazebo for physical structure and characteristics simulation. This section focuses on the widely used Gazebo and robot modeling using URDF. Gazebo is a 3D dynamic simulator that is capable of simulating robot systems in both indoor and outdoor environments accurately and efficiently. Similar to a game engine, Gazebo allows for highly realistic sensor physics simulation and provides an interactive interface between the program and the user. In addition, it can test robot algorithms and system designs and perform simulation testing in realistic scenarios. Prior to the ROS hydro version, Gazebo was integrated into the ROS system in the form of the simulator_gazebo package set. Now, Gazebo is no longer dependent on the ROS system. Instead, it is a standalone program that can be installed directly on Ubuntu systems and encapsulates the Gazebo interface through the gazebo_ros_pkgs metapackage. This package provides an interface for robot simulation in Gazebo using ROS messages, services, and dynamic reconfigurations. The ROS interface for the Gazebo_ros_pkgs package is shown in Fig. 3.12. • gazebo_ros is mainly used to encapsulate gazebo interface, launch gazebo server side, and client side and to generate the URDF model. • gazebo_msgs is the Msg and Srv data structure of gazebo. • gazebo_plugins is used for generic sensor plugins of gazebo. • Gazebo encapsulation is implemented via gazebo_ros_api_plugin and gazebo_ros_path_plugin. In a ROS system, Gazebo can be started using the gazebo_ros package. Launch Gazebo with the roslaunch tool and load an empty world model as demonstrated in Fig. 3.13. The command is as follows: roslaunch gazebo_ros empty_world.launch

The mapping parameters that can be set by this package include the following:
paused: Start Gazebo in a paused state (default: false).
use_sim_time: Determines whether ROS nodes use the Gazebo-published simulation time, published over the /clock topic (default: true).
gui: Launches the user interaction interface window of Gazebo (default: true).
headless: Disables the rendering components of the simulation; this parameter cannot be enabled when gui is true (default: false).
debug: Starts gzserver (the Gazebo server) in gdb for debugging (default: false).

148

Fig. 3.12 Interface between Gazebo and ROS

Fig. 3.13 Empty world model

3

Robot System Modeling

3.1

Mobile Chassis Motion Model and Control

149

Fig. 3.14 Execution result of mud_world.launch

For example, roslaunch gazebo_ros empty_world.launch paused:=true use_sim_time:=false gui:=true headless:=false debug:=true

If Gazebo is started using the roslaunch tool instead of rosrun, the general mapping parameters can be specified via the roslaunch file. The package gazebo_ros also provides other examples.
roslaunch gazebo_ros willowgarage_world.launch
roslaunch gazebo_ros mud_world.launch
roslaunch gazebo_ros shapes_world.launch
roslaunch gazebo_ros rubble_world.launch

Figure 3.14 shows the execution of mud_world.launch (if run for the first time, it may take a while to download the models). Next, the Gazebo operation process is analyzed, using mud_world.launch as an example. First, view the mud_world.launch file; as shipped with gazebo_ros, its content is roughly as follows:

<launch>
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="world_name" value="worlds/mud.world"/>
    <arg name="paused" value="false"/>
    <arg name="use_sim_time" value="true"/>
    <arg name="gui" value="true"/>
    <arg name="headless" value="false"/>
    <arg name="debug" value="false"/>
  </include>
</launch>

The file includes the path of the empty_world.launch file and the assignment of the mapping parameters. world_name specifies the definition file for the world model as a relative path, and the absolute path is determined by the GAZEBO_RESOURCE_PATH environment variable, which defaults to /usr/share/gazebo-x.x. For the Kinetic version, the default value is /usr/share/gazebo-7.0.0. Then, view the empty_world.launch file.






















This file sets the relevant parameters for running Gazebo through the mapping parameters, and finally runs two scripts, gzserver (debug) and gzclient, so that Gazebo works properly. The following section introduces how the world model is defined, using mud.world as an example.

model://sun
model://ground_plane
model://double_pendulum_with_base

For example, the file urdf/01-myfirst.urdf has the following contents:





• Running as a Library
Another way to run robot_state_publisher is to use it as a library. Write C++ code that includes the header file:
#include <robot_state_publisher/robot_state_publisher.h>

Then, add code for the constructor, which takes in a KDL tree. RobotStatePublisher(const KDL::Tree& tree);

To publish the state of the robot, call the publishTransforms functions.
void publishTransforms(const std::map<std::string, double>& joint_positions, const ros::Time& time);  // publish moving joints
void publishFixedTransforms();  // publish fixed joints

The first parameter is a map with the joint name and position, and the second parameter is the time at which the joint positions were recorded. The complete motion model needs to be passed to joint_state_publisher; otherwise, the tf tree is incomplete.


The following section describes the steps of using robot_state_publisher with a URDF.
1. Create the URDF File
Create a robot model URDF file named r2d2.
2. Publish the Status
Specify what state the robot is in and create the package:
cd %TOP_DIR_YOUR_CATKIN_WS%/src
catkin_create_pkg r2d2 roscpp rospy tf sensor_msgs std_msgs
Then, launch the editor and write the following code in the src/state_publisher.cpp file.

#include <string>
#include <ros/ros.h>
#include <sensor_msgs/JointState.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "state_publisher");
    ros::NodeHandle n;
    ros::Publisher joint_pub = n.advertise<sensor_msgs::JointState>("joint_states", 1);
    tf::TransformBroadcaster broadcaster;
    ros::Rate loop_rate(30);

    const double degree = M_PI/180;

    // robot state
    double tilt = 0, tinc = degree, swivel = 0, angle = 0, height = 0, hinc = 0.005;

    // message declarations
    geometry_msgs::TransformStamped odom_trans;
    sensor_msgs::JointState joint_state;
    odom_trans.header.frame_id = "odom";
    odom_trans.child_frame_id = "axis";

    while (ros::ok()) {
        // update joint_state
        joint_state.header.stamp = ros::Time::now();
        joint_state.name.resize(3);
        joint_state.position.resize(3);
        joint_state.name[0] = "swivel";
        joint_state.position[0] = swivel;
        joint_state.name[1] = "tilt";
        joint_state.position[1] = tilt;
        joint_state.name[2] = "periscope";
        joint_state.position[2] = height;

        // update transform (moving in a circle with radius=2)
        odom_trans.header.stamp = ros::Time::now();
        odom_trans.transform.translation.x = cos(angle)*2;
        odom_trans.transform.translation.y = sin(angle)*2;
        odom_trans.transform.translation.z = .7;
        odom_trans.transform.rotation = tf::createQuaternionMsgFromYaw(angle+M_PI/2);

        // send the joint state and transform
        joint_pub.publish(joint_state);
        broadcaster.sendTransform(odom_trans);

        // create a new robot state
        tilt += tinc;
        if (tilt < -.5 || tilt > 0) tinc *= -1;
        height += hinc;
        if (height > .2 || height < 0) hinc *= -1;
        angle += degree/4;

        loop_rate.sleep();
    }
    return 0;
}

Check and configure the port permissions.
ls -l /dev | grep ttyUSB
sudo chmod a+rw /dev/ttyUSB0

3. Run Tests
Open roscore, go to turtlebot_ws, and run rplidar and rviz.
roscore
cd ~/turtlebot_ws
source devel/setup.bash
roslaunch rplidar_ros view_rplidar.launch
At this time, the functioning of the LiDAR can be observed, as shown in Fig. 3.29.
4. Run the Script
This script uses udev rules to recreate the remapping of the LiDAR device mount points.
./scripts/create_udev_rules.sh

3.2.2 Introduction to hector_mapping

Hector_mapping is a SLAM approach that can be used without an odometer. It leverages LiDAR systems such as the Hokuyo UTM-30LX (scanning frequency: 40 Hz) to provide 2D pose estimates. While hector_mapping does not provide explicit loop closing ability, it is sufficient for general real-world scenarios. The


Fig. 3.29 LiDAR operation results

system has been successfully used on unmanned ground robots, unmanned surface vehicles, handheld mapping devices, and quadrotor UAVs. To use hector_mapping, a sensor_msgs/LaserScan data source is required, and the node uses tf to transform the scan data. The most important node in the hector_slam package is the hector_mapping node. The relevant ROS API is described in detail below.
• Subscribed Topics
scan (sensor_msgs/LaserScan): Scan data from the LiDAR, usually provided by the node driving the device, e.g., hokuyo_node.
syscommand (std_msgs/String): A system command; when the reset command is received, the map frame and robot pose are reset to their initial state.
• Published Topics
map_metadata (nav_msgs/MapMetaData): Other nodes can get the map metadata from this topic, which is latched and updated periodically.
map (nav_msgs/OccupancyGrid): Other nodes can get the map data from this topic, which is latched and updated periodically.
slam_out_pose (geometry_msgs/PoseStamped): The estimated robot pose without covariance.
poseupdate (geometry_msgs/PoseWithCovarianceStamped): The estimated robot pose with a Gaussian estimate of uncertainty.
• Services
dynamic_map (nav_msgs/GetMap): Get the map data.


• Required tf
<the frame attached to the laser scan data> -> base_frame: usually a fixed transform, published periodically by robot_state_publisher or a tf static_transform_publisher.
• Provided tf
map -> odom: the estimated robot pose in the map frame, published when pub_map_odom_transform is true.
• Parameters
~base_frame (string, default: base_link): The name of the base frame of the robot. This is the frame used for localization and for the transformation of laser scanner data.
~map_frame (string, default: map_link): The name of the map frame (usually map), which is also known as the world frame.
~odom_frame (string, default: odom): The name of the odometer frame.
~map_resolution (double, default: 0.025): The map resolution (m), i.e., the edge length of a grid cell.
~map_size (int, default: 1024): The number of cells per axis in the map. The map is square and has map_size × map_size grid cells.
~map_start_x (double, default: 0.5): The location of the origin [0.0, 1.0] of the map frame on the x-axis relative to the grid map; 0.5 is in the middle.
~map_start_y (double, default: 0.5): The location of the origin [0.0, 1.0] of the map frame on the y-axis relative to the grid map; 0.5 is in the middle.
~map_update_distance_thresh (double, default: 0.4): The threshold for map updates (m). After each update, the platform has to travel this distance before the map can be updated again.
~map_update_angle_thresh (double, default: 0.9): The threshold for map updates (rad). After each update, the platform has to rotate by this angle before the map can be updated again.
~map_pub_period (double, default: 2.0): The map publishing period (s).
~map_multi_res_levels (int, default: 3): The number of multiresolution grid levels of the map.
~update_factor_free (double, default: 0.4): The map update modifier for updates of free cells; 0.5 indicates no change.
~update_factor_occupied (double, default: 0.9): The map update modifier for updates of occupied cells; 0.5 indicates no change.
~laser_min_dist (double, default: 0.4): The minimum distance (m) for laser scan endpoints to be used by the system. Scan endpoints closer than this distance are ignored.
~laser_max_dist (double, default: 30.0): The maximum distance (m) for laser scan endpoints to be used by the system. Scan endpoints farther than this distance are ignored.
~laser_z_min_value (double, default: -1.0): The minimum height (m) relative to the laser scanner frame for laser scan endpoints to be used by the system. Scan endpoints lower than this value are ignored.


~laser_z_max_value (double, default: 1.0): The maximum height (m) relative to the laser scanner frame for laser scan endpoints to be used by the system. Scan endpoints higher than this value are ignored.
~pub_map_odom_transform (bool, default: true): Determines whether the map -> odom transform should be published.
~output_timing (bool, default: false): Outputs timing information for the processing of each laser scan via ROS_INFO.
~scan_subscriber_queue_size (int, default: 5): The queue size (buffer) of the scan subscriber. This value should be set high (e.g., 50) if logfiles are played back faster than real-time speed.
~pub_map_scanmatch_transform (bool, default: true): Determines whether the scanmatcher -> map transform should be published. The name of the scanmatcher frame is determined by the tf_map_scanmatch_transform_frame_name parameter.
~tf_map_scanmatch_transform_frame_name (string, default: scanmatcher_frame): The coordinate frame name used when publishing the scanmatch-to-map transform.

3.2.3 Usage of Hector_mapping

1. Create a Workspace and Install and Compile the rplidar Package
2. Install the hector_mapping Package
sudo apt-get install ros-indigo-hector-slam

3. Create the Launch File. Add the hector_mapping_demo.launch file to the rplidar_ros/launch/ directory.







(continued)

3.2

LiDAR-Based Environment Perception

183


















4. Run Tests
Check and configure port permissions.
lsusb
sudo chmod a+rw /dev/ttyUSB0

Run the following two launch files to check that the LiDAR is scanning the surroundings, as shown in Fig. 3.30.


Fig. 3.30 Hector_mapping results

roslaunch rplidar_ros hector_mapping_demo.launch
roslaunch rplidar_ros rplidar.launch

Task 2 Observing the Environment from the Robot's Perspective
Before starting the LiDAR, it is necessary to check that the Spark LiDAR model matches the one in the configuration file.
$ sudo gedit /opt/lidar.txt

If it is a ShanChuan LiDAR (3i), modify the file content to 3iroboticslidar2. If it is an EAI LiDAR (g2 or g6), modify the file content to ydlidar_g2 or ydlidar_g6, respectively; otherwise, configure it according to the actual model of the LiDAR.
1. Activate the LiDAR
In the case of the ShanChuan LiDAR (3i), use the following command to launch the LiDAR.
roslaunch iiiroboticslidar2 view_3iroboticslidar2.launch


Fig. 3.31 LiDAR point cloud results

The view_3iroboticslidar2.launch file reads as follows. This command first launches the LiDAR node and then launches the rviz display.



Modify the Fixed Frame to lidar_link, and add the point cloud plugin. Then, the lidar point cloud appears as shown in Fig. 3.31. In the case of EAI LIDAR (g2), use the following command to launch the LiDAR. $ roslaunch ydlidar_g2 lidar_view.launch

The lidar_view.launch file contains the following content.




Modify the Fixed Frame to laser_frame and add the point cloud plugin. The effect is the same as that in Fig. 3.31.
2. Activate the keyboard control to observe the laser point cloud data in real time as the robot moves.
Launch the Spark chassis motion control node.
$ source ~/spark/devel/setup.bash
$ roslaunch spark_base spark_base.launch serialport:="/dev/sparkBase"

Launch the keyboard control node by entering the following commands in a new terminal window.
$ source ~/spark/devel/setup.bash
$ rosrun spark_teleop teleop_node 0.2 1

3.3 Summary

This chapter introduces common mobile robot motion models, URDF modeling methods, and simulations in the Gazebo environment. Readers can learn how to realize the synchronous motion of ROS simulated robots and real robots on the Spark platform. As a result, readers can develop a deeper understanding of robot modeling and simulation in the ROS environment. Additionally, this chapter provides an overview of the typical LiDAR sensor in robot systems, and through experiments, readers can observe the scene profile point cloud data to experience the environment sensing and obstacle detection from the robot’s perspective.

Further Reading

The concept of redundancy of a robot manipulator is relative and is defined for specific tasks. For a planar task, the commonly used 6-axis (six-degree-of-freedom) robot manipulator is also redundant. However, it is common practice to generalize this concept to all tasks, as three-dimensional space can be described in terms of six


Fig. 3.32 Kinematic structure of S-R-S type manipulator

degrees of freedom. Consequently, a 7-axis (seven-degree-of-freedom) manipulator is often referred to as a redundant robot manipulator. The extra degrees of freedom of a redundant manipulator can be used to achieve additional tasks such as body obstacle avoidance, singularity avoidance, joint limit avoidance, joint torque optimization, and increased manipulability. At the same time, since a human arm also has seven degrees of freedom, redundant arms are also more realistic from the perspective of bionics. An S-R-S type anthropomorphic manipulator has a relatively simple structure, similar to that of the human arm. As shown in Fig. 3.32, the manipulator is composed of seven revolute joints. The first three and the last three joints can be regarded as equivalent spherical joints because their axes intersect at a single point. These spherical joints will be referred to as the shoulder (S) and wrist (W) joints, respectively, in analogy to those of the human arm. Similarly, the middle joint (Joint #4) will be called the elbow (E) joint. Furthermore, like the human arm, the S-R-S type manipulator comprises two characteristic subchains1: the positioning subchain formed by the first four joints with coordinates qi, i = 1, 2, 3, 4, and the orientation subchain formed by the last three (wrist) joints with coordinates qj, j = 5, 6, 7.

1. Taki S, Nenchev D. A novel singularity-consistent inverse kinematics decomposition for S-R-S type manipulators. 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014: 5070-5075.


An analytical methodology has been proposed for obtaining all of the feasible inverse kinematic solutions for an S-R-S redundant manipulator in the global configuration space constrained by joint limits2. The method can also be applied to the redundancy resolution problem. First, a parameterized inverse kinematic solution was derived using the arm angle parameter. Then, the relations between the arm angle and the joint angles were investigated in detail, and how to obtain feasible inverse solutions satisfying the joint limits was discussed. Finally, the inverse kinematic computation method was applied to the redundancy resolution problem. Analytical methods for joint limit avoidance were also developed. A MATLAB-based simulation of the S-R-S inverse kinematics is available at the link in footnote 3.

Exercises
1. It is known that the wheel radius of a differentially driven mobile robot is r, the distance from each driving wheel to the middle point P between the two wheels is d, and the rotational angular velocities of the left and right wheels are $\dot{\varphi}_1$ and $\dot{\varphi}_2$, respectively. Place the robot in the planar global coordinate system, in which the posture is expressed as $\xi_I = (x, y, \theta)^T$. How can its kinematic model $\dot{\xi}_I = (\dot{x}, \dot{y}, \dot{\theta})^T$ be calculated?
2. How to load the world model in Gazebo via roslaunch? And how to add a URDF model to Gazebo via roslaunch?
3. How to create a 0.6 × 0.1 × 0.2 white cube URDF model?
4. How to set up the visual model, collision model, and inertia model for a link in the URDF file?
5. In the URDF file, x, y, and z indicate the displacement offsets along the x-axis, y-axis, and z-axis, respectively (units: m); r, p, and y indicate the rotations around the x-axis, y-axis, and z-axis, respectively (units: rad). Given that x = 0.25, y = 0.24, z = 0.8, r = PI/2, p = PI/6, y = -PI/4, solve for the rotation matrix, translation matrix, and quaternion representation of the transformation relationship.
6. For the Ackermann drive type, whose chassis motion characteristics are front-wheel steering and rear-wheel drive, how can the inverse kinematics equation be obtained?
7. What is the message format of the LiDAR message type sensor_msgs/LaserScan?

2. M. Shimizu, H. Kakuya, W. Yoon, K. Kitagaki and K. Kosuge. Analytical Inverse Kinematic Computation for 7-DOF Redundant Manipulators With Joint Limits and Its Application to Redundancy Resolution. IEEE Transactions on Robotics, vol. 24, no. 5, pp. 1131-1142, Oct. 2008.
3. https://www.cnblogs.com/xi-jiajia/p/16084375.html


8. Referring to Figs. 1.4 and 1.5 (which demonstrate the two-wheeled differential motion model in Chap. 1), analyze the odometer model of the two-wheeled differential robot.

Reference 1. Siegwart R, Nourbakhsh IR, Scaramuzza D (2013) Introduction on autonomous mobile robots (trans: Renhou L, Qingsong S). Xi’an Jiaotong University Press, Xi’an, p 5

Chapter 4

LiDAR SLAM for Mobile Robot

Abstract Simultaneous localization and mapping (SLAM) means that a mobile device starts from a location in an unknown environment, observes its own position, pose, and motion trajectory through sensors (e.g., LiDAR, camera), and constructs an incremental map based on its own position. LiDAR SLAM technology has become the core technology for autonomous localization and navigation of robots due to its performance advantages such as stability and reliability. This chapter covers the principle and usage of 2D LiDAR SLAM algorithms, which scan the environment profile using LiDAR for localization and mapping. Common algorithms include Gmapping, Hector SLAM, Karto SLAM, and Cartographer. Based on the understanding of 2D LiDAR algorithms, readers can learn to use 3D multi-line LiDAR SLAM algorithms, such as LOAM, LeGO-LOAM, LIO-SAM, and Fast-LIO. Keywords SLAM · mobile robot · coordinate transformation · pose estimation · Kalman filter

4.1 Foundation of SLAM

4.1.1 SLAM Overview

Mobile robot technology involves localization, mapping, and path planning [1]. SLAM technology is at the intersection of localization and mapping, as shown in Fig. 4.1. The definition of SLAM is as follows: A robot starts its motion from a certain pose in an unknown environment and uses its sensors (e.g., LiDAR, cameras) and repeatedly observed map features (e.g., wall corners, columns) to determine its own position. Simultaneously, the robot constructs a map incrementally based on its own position, thus achieving simultaneous localization and mapping. In an unknown environment, autonomous robot localization and environmental mapping are interrelated and interdependent [2]. The process involves using the constructed map information to update the pose of the robot and using accurate estimated pose information for environmental mapping. The basic process of SLAM can be described in Fig. 4.2.


Fig. 4.1 Relationship among technical fields in mobile robots

Fig. 4.2 Basic process of SLAM


Autonomous navigation of a robot requires a strong ability to sense the surrounding environment. In an unknown and complex environment, humans can sense changes in the external environment through various means such as sight, sound, and smell to determine their state, which also reflects the strong sensory capacities of humans. However, in an unknown environment, robots cannot sense the dynamic changes in the external environment as humans do and need to rely on the sensors they carry. These sensors usually include cameras, LiDARs, radars, odometers, and inertial measurement units (IMU). The robot has to analyze the sensor data for pose estimation and environmental mapping. For robots in 3D space (unmanned aerial vehicle), it is necessary to determine their 3D pose information, 3D rotation information, and environment maps. With respect to mobile robots on a 2D plane, only 2D coordinate information, 2D angle information, and a 2D map are required. Accurate pose estimation and mapping help to achieve autonomous robot navigation.

4.1.2 Mobile Robot Coordinate System

To perform pose estimation and mapping, it is necessary to establish the coordinate systems of the mobile robot. In ROS, the coordinate system of a mobile robot generally includes several frames such as map, base_link, odom, and base_laser.
• map: The map frame, which is fixed and consistent with the world frame in which the robot is located.
• base_link: The robot body frame, coinciding with the robot center.
• odom: The odometer frame, which can be seen as a frame that moves as the mobile robot moves. The tf from base_link to odom is published whenever the robot moves. Before the robot starts moving (initially), e.g., when the robot is at the charging origin, the odom frame and map frame coincide; that is, the tf from map to odom is zero. The odom frame drifts over time due to accumulated errors in the odometer (calculated from the kinematic model). At this point, a sensor is required for correction, i.e., estimation. For example, if a LiDAR sensor is used for Gmapping pose estimation, the tf from map to base_link can be obtained, which in turn provides the deviation between the pose estimate and the odometer, i.e., the deviation between the map and odom frames. In this way, the pose of the robot can be corrected. Note the distinction between the odom frame and the /odom topic: the /odom topic carries the odometry calculated from sensors (encoder, LiDAR, or visual odometry), from which the pose transformation matrix, i.e., the transformation from the odometer to base_link, is obtained.
• base_laser: The LiDAR frame, determined by the LiDAR mounting position; the transform between base_laser and base_link is fixed.


Fig. 4.3 TF properties

Fig. 4.4 Tf from base_laser to base_link

For example, a chassis model is created using URDF, and its TF properties are demonstrated in Fig. 4.3, which shows the two frames defined in the URDF file: base_link and base_laser. According to the right-hand rule, red, green, and blue represent the x-axis, y-axis, and z-axis, respectively. To view the transform, launch a new terminal window and enter the following command:
$ rosrun tf tf_echo base_link base_laser

The tf from base_laser to base_link is shown in Fig. 4.4. Translation and rotation are the position and orientation parts of the transform. The translation shows that base_laser is offset by 0.088 m along the z-axis relative to base_link. The rotation is displayed in three different forms: quaternion, and rotation about the fixed axes X-Y-Z (Roll-Pitch-Yaw, RPY) in radians and in degrees. These relationships are defined in the URDF file. The tf tree can be viewed by entering the following command; it can be seen that robot_state_publisher publishes the TF from base_link to base_laser.


Fig. 4.5 Two coordinate frames of a mobile robot

$ rosrun tf view_frames

Task 1 Robot Coordinate Transformation In ROS, the transformation relationships among all reference frames are determined by tf trees, and if a tf between related frames is required, the transformation relationships need to be obtained by subscribing to the tf message tree. When using the tf tree, only the publishing and monitoring of the tf tree are required, and all the synchronization work is done automatically by ROS without the need for developer management. As illustrated in Fig. 4.5, the mobile robot platform contains the motion chassis and the LiDAR. Two frames are defined: one with the center point of the robot platform as the origin, named as the base_link frame, and the other with the center point of the LiDAR as the origin, named as the base_laser frame. The LiDAR sensor collects data about the obstacle in front of the robot. These data are measurements with the LiDAR as the origin; in other words, these data are measurements in the base_laser frame. A robot that uses these data for obstacle avoidance will have a deviation value from the center of the robot. The underlying reason is that the LiDAR is not mounted on the center point of the robot’s moving chassis, causing the obstacle to be at a distance from the center of the robot. Thus, the LiDAR measurements can be transformed from the base_laser frame to the base_link frame. It is known that the installation position of the LiDAR is 20 cm above and 10 cm in front of center point of the robot. The transformation relationship between these two frames can be defined; after the LiDAR measurement data is obtained, a coordinate transform (x: 0.1 m, y: 0.0 m, z: 0.2 m) can be adopted to transform the data from the base_laser to the base_link. If the opposite direction is defined, then the coordinate transform (x: -0.1 m, y: 0.0 m, z: -0.2 m) is used instead. In robotic applications, multiple coordinate frames, such as ultrasonic radar, millimeter wave radar, LiDAR, and camera frames, exist because complex systems are also configured with multiple sensors, such as ultrasonic radar, millimeter wave radar, LiDAR, and cameras. The tf package in ROS provides the capability to store


This is done by simply telling the tf tree the transforms between these frames; the tf tree data structure then manages the frame transforms. To define and store the relationship between the base_link and base_laser frames, they need to be added to a tf tree, as shown in Fig. 4.6.

Fig. 4.6 Tf of the mobile robot

Each node of the tf tree corresponds to a coordinate frame, and the edges between nodes correspond to the transformation relationships between frames. All transformations in the tf tree are from parent node to child node. First, two nodes need to be defined, one for the base_link frame and one for the base_laser frame. To create the transformation relationship between them, it must be decided which node is the parent and which is the child. Choosing base_link as the parent makes it easier to add more sensors as child nodes later. Hence, the transform from the base_link node to the base_laser node is (x: 0.1 m, y: 0.0 m, z: 0.2 m). With this transform set up, data can be transformed from the base_laser frame to the base_link frame by calling the tf library.
The code to implement the robot coordinate transform is divided into two parts:
• Write a node that broadcasts the transform between the two coordinate frames.
• Subscribe to the tf tree, look up the transform between the two coordinate frames, and use it to transform the coordinates.
First create a package and add the dependencies roscpp, tf, and geometry_msgs:

$ cd %TOP_DIR_YOUR_CATKIN_WS%/src
$ catkin_create_pkg robot_setup_tf roscpp tf geometry_msgs

1. Write Broadcast Nodes
Go to the package, create the src/tf_broadcaster.cpp file, and write the broadcast node code.

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>


int main(int argc, char** argv) {
  ros::init(argc, argv, "robot_tf_publisher");
  ros::NodeHandle n;
  ros::Rate r(100);
  tf::TransformBroadcaster broadcaster;
  while (n.ok()) {
    broadcaster.sendTransform(
      tf::StampedTransform(
        tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2)),
        ros::Time::now(), "base_link", "base_laser"));
    r.sleep();
  }
}

As the tf::TransformBroadcaster class is used to broadcast the tf tree, the relevant header file, tf/transform_broadcaster.h, needs to be included. The program creates a tf::TransformBroadcaster object to broadcast the transform from base_link to base_laser. Publishing a transform through the TransformBroadcaster interface takes five arguments.

broadcaster.sendTransform(
  tf::StampedTransform(
    tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2)),
    ros::Time::now(), "base_link", "base_laser"));

The first argument is the rotation between the two frames, stored as a quaternion. As there is no rotation between the two frames, the roll, pitch, and yaw values are zero. The second argument is the translation offset; the offset between the two frames occurs along the x- and z-axes and is placed into a Vector3 according to the offset values. The third argument is the timestamp, obtained through the ROS API. The fourth argument is the name of the parent frame, base_link, and the last argument is the name of the child frame, base_laser.

The first argument is the rotation transform between the two frames, which is stored via Quaternion. As no rotation transform occurs in the two frames, the pitch, roll, and yaw values are zero. The second argument is the offset transform of the coordinates. The offset of the two frames occurs in the x- and z-axes, which are placed into Vector3 according to the offset values. The third argument is the timestamp, done through the ROS API. The fourth argument is the coordinate frame stored by the parent node, base_link, and the last argument is the coordinate frame stored by the child node, base_laser. 2. Write Subscription Nodes Writing the node that subscribes to the tf broadcast will enable the use of the transform from base_link to base_laser in the tf tree to complete the coordinate transform of the location data. In the robot_setup_tf package, create the src/tf_listener.cpp file with the following code:


#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf/transform_listener.h>

void transformPoint(const tf::TransformListener& listener) {
  // we'll create a point in the base_laser frame that we'd like to transform to the base_link frame
  geometry_msgs::PointStamped laser_point;
  laser_point.header.frame_id = "base_laser";
  // we'll just use the most recent transform available for our simple example
  laser_point.header.stamp = ros::Time();
  // just an arbitrary point in space
  laser_point.point.x = 1.0;
  laser_point.point.y = 0.2;
  laser_point.point.z = 0.0;
  try {
    geometry_msgs::PointStamped base_point;
    listener.transformPoint("base_link", laser_point, base_point);
    ROS_INFO("base_laser: (%.2f, %.2f, %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f",
             laser_point.point.x, laser_point.point.y, laser_point.point.z,
             base_point.point.x, base_point.point.y, base_point.point.z,
             base_point.header.stamp.toSec());
  } catch (tf::TransformException& ex) {
    ROS_ERROR("Received an exception trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what());
  }
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "robot_tf_listener");
  ros::NodeHandle n;
  tf::TransformListener listener(ros::Duration(10));
  // we'll transform a point once every second
  ros::Timer timer = n.createTimer(ros::Duration(1.0),
                                   boost::bind(&transformPoint, boost::ref(listener)));
  ros::spin();
}

Because the tf::TransformListener object automatically subscribes to tf messages in ROS and manages all the transform data, the relevant header file, tf/transform_listener.h, needs to be included first. The function transformPoint is registered as a timer callback; with the timer period set to 1 s, it is called automatically at 1 Hz.


In the callback function, the coordinate transform of the data from the base_laser frame to the base_link frame is performed. First, a virtual test point laser_point of type geometry_msgs::PointStamped is created with coordinates (1.0, 0.2, 0.0). This type contains the standard header structure, which allows the timestamp and frame id of the published data to be included in the message. The following code performs the transform of the test point between the two frames.

try {
  geometry_msgs::PointStamped base_point;
  listener.transformPoint("base_link", laser_point, base_point);
  ROS_INFO("base_laser: (%.2f, %.2f, %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f",
           laser_point.point.x, laser_point.point.y, laser_point.point.z,
           base_point.point.x, base_point.point.y, base_point.point.z,
           base_point.header.stamp.toSec());
}

To transform the virtual test point laser_point into the base_link frame, the transformPoint() function of the TransformListener object is called with three arguments. The first argument is the id of the target frame (base_link) to which the point is transformed. The second argument is the original data being transformed, that is, laser_point. The third argument stores the transformed data; after the function executes, base_point holds the transformed coordinates.
Exception handling is needed to make the code fault-tolerant, so that when something goes wrong the program does not crash; instead, an error is reported. For example, if tf does not publish the required transform, an exception is thrown when transformPoint is called.

catch (tf::TransformException& ex) {
  ROS_ERROR("Received an exception trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what());
}

3. Build the Code
Next, the code needs to be built. Open the CMakeLists.txt file of the package and add the appropriate compile options.


add_executable(tf_broadcaster src/tf_broadcaster.cpp)
add_executable(tf_listener src/tf_listener.cpp)
target_link_libraries(tf_broadcaster ${catkin_LIBRARIES})
target_link_libraries(tf_listener ${catkin_LIBRARIES})

Then, save the file and build the package.

$ cd %TOP_DIR_YOUR_CATKIN_WS%
$ catkin_make

4. Run the Code
Open a terminal and run roscore.

$ roscore

Then, open a second terminal and run tf_broadcaster.

$ rosrun robot_setup_tf tf_broadcaster

Open a third terminal and run tf_listener to transform the virtual test point from the base_laser frame to the base_link frame.

$ rosrun robot_setup_tf tf_listener

If everything runs properly, in the third terminal, the mock coordinate points will have been successfully transformed to the base_link frame, with the coordinates transformed to (1.10, 0.20, 0.20). In the robot application system, note that the PointStamped data type should be modified to the type of message issued by the actual sensor being used.

4.1.3 ROS Navigation Framework

Taking the example of a common two-wheeled differential speed mobile robot, the ROS navigation and localization process is introduced in bottom-up order, as shown in Fig. 4.7. The bottom layer is the wheel speed and chassis control. The differential wheel structure mobile chassis includes two driving wheels and one driven wheel. The driving wheels are equipped with motors with reducers, and each motor shaft is
equipped with an encoder that enables the odometer function. The motor driver has an integrated speed controller that can be called directly, and ROS also includes many third-party drivers. Next upward is position control: if the robot is expected to move forward a certain distance at a certain speed according to a set value, PID closed-loop control needs to be applied. If the current position and orientation are known, as well as the target position and orientation, ROS provides the move_base package to control the robot's movement to the target point and to implement obstacle avoidance and path planning. Further up, Gmapping and other SLAM packages can be used to create maps, and then the Adaptive Monte Carlo Localization (AMCL) package can be used for localization based on the current data. The topmost layer parses commands into a series of actions based on semantic information and then drives the lower layers to execute the corresponding commands to make the robot move to the target location.

Fig. 4.7 Process of ROS navigation and localization (goal → move_base with AMCL and path planner → /cmd_vel and /odom → base controller → motor speeds)

The ROS navigation framework is generally intended for differential-drive and holonomic wheeled robots, where the mobile platform accepts only x and y velocities and a yaw angular velocity. The robot requires a laser sensor (or laser data simulated from depth data) for mapping and localization. In addition, the robot shape should be a square or circle of appropriate size. The Navigation function in ROS provides a complete solution for robot navigation. The Navigation Stack is a metapackage in ROS that contains various functions for path planning, localization, abnormal behavior recovery, the map server, etc. These open-source toolkits greatly reduce the workload of the robot application system
and can be deployed and implemented quickly. The Navigation package encompasses many navigation- and localization-related packages, including the move_base package, a powerful path planner that can drive the mobile chassis to a target point in the world frame. It contains global and local path planners to complete the navigation task while maintaining two costmaps, one for the global planner and one for the local planner. Global path planning plans the overall path according to the given target location, while local path planning plans the path based on the detected obstacles. The two costmaps can be configured with multiple layers as follows:
• Obstacle Map Layer: used to dynamically record the obstacle information sensed by the sensors.
• Inflation Layer: expands obstacles outward on the map to prevent the robot from colliding with them.
• Static Map Layer: remains unchanged, usually a static map created by SLAM.
• Other Layers: costmap layers are implemented in the form of plugins, and some open-source plugins are already available.

Fig. 4.8 ROS navigation framework

Figure 4.8 demonstrates the ROS navigation framework. The functions of its major components are as follows:
• sensor transforms component: This component involves the transform of sensor coordinates using tf. The center of the robot is often not the sensor center, and the measurement data from the sensor need to be transformed into coordinate information for the robot center. For example, the data acquired by the sensor are based on the base_laser frame, while the robot control is centered on the robot's base_link; therefore, a coordinate transform needs to be performed based on the positional relationship between the two.
• sensor sources component:
This is the input of robot navigation sensor data; only two types are available: laser sensor data and point cloud data.
• odometry source component: Odometer data (nav_msgs/Odometry messages) need to be provided for robot navigation.
• base controller component: This component is responsible for encapsulating the data obtained during navigation into specific linear velocities and steering angles that are published to the hardware platform.
• map_server component: The map component of ROS (costmap) takes the form of a grid, and the value of each grid cell ranges from 0 to 255, with three states: occupied (with obstacles), free, and unknown.
Mobile robot applications typically start with global path planning, which uses the navfn package to plan a global path from the robot's current location to the target location. The navfn package adopts the Dijkstra or A* path planning algorithm to compute the least costly path as the global path for the robot.
Local path planning performs real-time obstacle avoidance based on information about nearby obstacles. It is real-time path planning, implemented using the base_local_planner package. This package uses the Trajectory Rollout and Dynamic Window Approach methods to calculate the velocities (dx, dy, dtheta) with which the robot should move in each cycle. The base_local_planner package uses the map data to search for multiple paths to the target, selects the optimal path based on evaluation criteria (whether it will collide with an obstacle, how long it will take, etc.), and calculates the required real-time velocity and steering angle. The main ideas of the Trajectory Rollout and Dynamic Window Approach methods are as follows.
1. Sample the current state of the robot (dx, dy, dtheta).
2. For each sampled velocity, calculate the state of the robot after driving at that velocity for a period of time to obtain a travel route.
3. Score the routes according to the evaluation criteria and select the optimal route.
4. Repeat the above process.
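A rough C++ sketch of this sample–simulate–score loop is given below. It is illustrative only and not the base_local_planner implementation; the unicycle forward simulation and the goal-distance cost are simplifying assumptions standing in for the real planner's obstacle, goal, and path cost terms.

#include <cmath>
#include <limits>
#include <vector>

struct Velocity { double vx, vtheta; };              // sampled command (dx, dtheta)
struct Pose { double x, y, theta; };

// Hypothetical forward simulation: integrate a unicycle model for sim_time
// seconds starting from `start`, returning the end pose of the trajectory.
Pose simulate(const Pose& start, const Velocity& v, double sim_time, int steps = 20) {
  Pose p = start;
  const double dt = sim_time / steps;
  for (int i = 0; i < steps; ++i) {
    p.x += v.vx * std::cos(p.theta) * dt;
    p.y += v.vx * std::sin(p.theta) * dt;
    p.theta += v.vtheta * dt;
  }
  return p;
}

// Hypothetical cost: distance of the trajectory end point to the local goal.
// A real planner also penalizes proximity to obstacles and deviation from the path.
double score(const Pose& end, const Pose& goal) {
  return std::hypot(goal.x - end.x, goal.y - end.y);
}

Velocity pickBestCommand(const Pose& current, const Pose& goal,
                         const std::vector<Velocity>& samples, double sim_time) {
  Velocity best{0.0, 0.0};
  double best_cost = std::numeric_limits<double>::max();
  for (const Velocity& v : samples) {       // steps 1-2: sample and simulate
    Pose end = simulate(current, v, sim_time);
    double cost = score(end, goal);         // step 3: score the route
    if (cost < best_cost) { best_cost = cost; best = v; }
  }
  return best;                              // step 4: repeat every control cycle
}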

4.1.4 Environment Mapping and Pose Estimation

A map is a description of a particular environment, and common maps include grid, point cloud, and topological maps. The localization of mobile robots and mapping are two closely related problems that can be performed simultaneously. In the process of SLAM, the most common and effective expression is the grid map.


Fig. 4.9 Grid map

The mapping process can generally be updated directly in the global map or in a sub-map. An update in the global map means that all frames to be calculated refer to the frame of the global map; each update modifies the entire map or a local area of it. In the sub-map method, a local sub-map is constructed by the update process over a period of time. This local sub-map has its own frame, and its update process adopts the optimal SLAM algorithm. The sub-maps are connected to each other by pose transform matrices, and all the sub-maps are then accumulated to form the global map. In ROS, a map is usually a grayscale image in pgm format, called a grid map, as shown in Fig. 4.9. The white pixels in the image indicate feasible areas, the black pixels indicate obstacles, and the gray parts are unknown, unexplored areas.
The essence of SLAM is a state estimation problem, i.e., how to estimate unknown state variables from measurement data with noise. The measurement data can come either from sensors that measure the robot's own motion (odometer, IMU) or from sensors that observe the external world (camera, laser). The unknown state variables to be estimated include the robot's pose and the environment map [3]. As demonstrated in Fig. 4.10, a complete SLAM process can be described by the state estimation $p(x_{1:t}, m \mid z_{1:t}, u_{1:t})$, where $x_t$ denotes the pose of the robot, $m$ the environment map, $u_t$ the motion sensor measurements, and $z_t$ the observation data.

Fig. 4.10 Complete SLAM problem description

In terms of the mathematical formulation, a complete SLAM process consists of two basic equations, the motion equation and the observation equation, as shown in Eq. (4.1). The motion equation describes how the robot pose ($x$) evolves under the motion measurements ($u$), and the observation equation describes how the observations ($z$) arise from the robot pose ($x$) and the map landmarks ($y$).

$$
\begin{aligned}
x_k &= f\left(x_{k-1}, u_k, w_k\right) \\
z_{k,j} &= h\left(y_j, x_k, v_{k,j}\right)
\end{aligned}
\tag{4.1}
$$
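As a concrete illustration (one common instantiation, introduced here only as an example and not the book's specific model), consider a planar robot with pose $x_k = (p_x, p_y, \theta)^T$, odometry input $u_k = (\Delta d_k, \Delta\theta_k)$, and a range–bearing observation $z_{k,j}$ of a landmark $y_j = (l_x, l_y)^T$:

$$
\begin{aligned}
x_k &= \begin{pmatrix} p_{x,k-1} + \Delta d_k \cos\theta_{k-1} \\ p_{y,k-1} + \Delta d_k \sin\theta_{k-1} \\ \theta_{k-1} + \Delta\theta_k \end{pmatrix} + w_k, \\
z_{k,j} &= \begin{pmatrix} \sqrt{(l_x - p_{x,k})^2 + (l_y - p_{y,k})^2} \\ \operatorname{atan2}\!\left(l_y - p_{y,k},\; l_x - p_{x,k}\right) - \theta_k \end{pmatrix} + v_{k,j},
\end{aligned}
$$

where $w_k$ and $v_{k,j}$ are the motion and observation noise terms of Eq. (4.1).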

It should be noted that for a robot carrying only motion sensors, the ideal pose estimation is often not achieved because of the cumulative errors of the sensors over time. As a result, the robot usually also needs to carry sensors that observe external environmental information in order to continuously remove the cumulative errors. However, the motion and observation equations are often not accurate because of environmental noise. This problem of state estimation under noise can be solved with multiple optimization iterations.
The architecture of a SLAM system includes two main components, the front end and the back end, as shown in Fig. 4.11.

Fig. 4.11 SLAM front-end and back-end functions

The front end processes sensor data and estimates and updates the robot's poses and maps in real time, while the back end performs the optimal estimation of poses and maps to continuously improve their accuracy. Depending on the back-end processing, SLAM can be divided into filter-based methods and graph optimization-based approaches. In the early stage of SLAM, a filter-based solution is usually used to estimate the posterior probability of the
system state based on the observed information as well as the control information, following the Bayesian principle. Several filter-based methods are available depending on the representation of the posterior probability. The most prevalent filter-based SLAM solutions include the Kalman filter (KF), extended Kalman filter (EKF), unscented Kalman filter (UKF), and particle filter (PF).
Among the filtering methods for recursive state estimation mentioned above, when a Gaussian distribution is used in Bayesian filtering to estimate the probability density function (PDF), the filter is called a Gaussian filter. In linear systems, a Gaussian filter used for estimation or prediction is the Kalman filter. As many robot systems are not linear, the Kalman filter is not directly applicable. In nonlinear systems, the Taylor series expansion is often employed to locally linearize the nonlinear system so that the variables remain Gaussian distributed after the linear transformation, and the filtering is then performed using a Kalman filter. This process is called the extended Kalman filter, which is an efficient recursive filter (autoregressive filter). EKF performs a first-order truncation of the Taylor series expansion (due to the computational complexity of the second-order derivative, practical applications take only the first-order derivative, and the state transition equation and the observation equation are approximated as linear equations, which can still provide good results). In this way, the model can be transformed into a linear problem that the computer can handle, and good filtering can be achieved when the nonlinearity of the function is not very drastic. However, EKF must compute the Jacobian matrix of the nonlinear function (a matrix of first-order partial derivatives arranged in a certain way, reflecting the optimal linear approximation of a differentiable function). The Jacobian matrix is complicated and error-prone for systems with complex models. Meanwhile, EKF is prone to decreased filtering accuracy for systems with strong nonlinearity. Therefore, to improve the filtering accuracy and efficiency, other approximation methods are used.
In general, it is easier to approximate the probability density distribution of a nonlinear function than to approximate the nonlinear function itself, which gives rise to PF and UKF. If a large number of random samples are used to estimate the PDF for Bayesian filtering, a particle filter is obtained. Specifically, PF approximates the PDF by a set of random samples propagating in the state space and obtains a minimum-variance estimate of the system state using the sample means. These samples are called "particles." PF is based on the Monte Carlo method [4], which
uses a set of particles to represent probabilities and can be used with any form of state space model. Its core idea is to randomly generate a large number of particles from a reference distribution and to approximate the distribution of the particles as the posterior probability density function of the state, from which the estimate of the system state is computed. This is sequential importance sampling. When high accuracy is required for mobile robot pose estimation, the number of particles required for particle filtering can be large, which results in a huge computational effort. In addition, for high-dimensional states, the computational effort increases exponentially with dimensionality. Reducing the number of particles, however, degrades the estimation performance and affects the pose estimation accuracy.
To solve the abovementioned issues, UKF, which can produce more accurate samples, was developed. UKF makes nonlinear system equations applicable to the standard Kalman filter framework under linear assumptions through the unscented transform (UT). It uses a deterministic sampling of the PDF, which can obtain a high approximation accuracy with a relatively small number of sample points compared to PF. Unlike EKF, which achieves recursive filtering by linearizing the nonlinear function, UKF effectively overcomes the problems of low estimation accuracy and poor stability of EKF by approximating the probability density distribution of the nonlinear function (which is easier than approximating the nonlinear function itself). This is because it does not need to ignore the higher-order terms, resulting in a high computation accuracy of the nonlinear distribution statistics. In UKF, the UT is used to approximate a Gaussian distribution with a fixed number of parameters.
The implementation principle of the UT is as follows. In the original distribution, sigma point sets are constructed based on the statistical mean and covariance (or variance, if scalar) of the random variables, so that the mean and covariance of these points are equal to the mean and covariance of the original state distribution. These points are then substituted into the nonlinear function, and the sigma point sets are transformed nonlinearly to obtain the function-valued point sets. Then, for these point sets, the mean and covariance after the function mapping (transform) are calculated. Because the UT approximates the probability density distribution of a nonlinear function rather than the nonlinear function itself, the algorithm implementation is not affected by the complexity of the model. In addition, the statistics of the resulting nonlinear function are accurate up to the third-order moments of the mean and covariance, giving a higher filtering accuracy. Its computational cost is still of the same order as that of EKF, whereas the accuracy of EKF can only reach the first order. By applying the UT to the Kalman filter framework, the nonlinear model of the system is used directly, and there is no need to linearize the nonlinear system or to calculate the Hessian and Jacobian matrices as in quadratic filtering, which improves the computational speed. As there is no need to calculate the Jacobian matrix, nondifferentiable nonlinear functions can be handled without any form of approximation to the nonlinear function. Good filtering performance can
be obtained even if the system function is discontinuous. Moreover, UKF does not necessarily require a Gaussian distribution for the random states; therefore, UKF has a very wide range of applications, especially for nonlinear systems, with higher filtering accuracy and convergence speed.
In the above filter-based SLAM framework, because each step only considers the current pose of the robot or the poses of the previous few steps and ignores the historical pose information, this solution suffers from the problem of cumulative error. Hence, SLAM based on graph optimization emerged. Different from the filter-based SLAM algorithms, the graph optimization-based SLAM algorithm uses all the observed information to estimate the complete motion trajectory of the robot as well as the map. The map features scanned at different moments can be transformed into constraints between the robot's poses at different moments, thus turning the SLAM problem into an optimal estimation problem over the pose sequence. The nodes in graph optimization represent the robot's poses at different moments, while the edges between the nodes represent the constraint relationships between the different poses. The core of graph optimization is to continuously adjust the pose nodes so that the nodes satisfy the constraints on the edges as much as possible. The final result is the optimized robot motion trajectory and map, which will be closer to the robot's real motion trajectory and environment map.
According to the sensors used, SLAM research is divided into LiDAR-based SLAM and vision-based SLAM. LiDAR-based SLAM is highly accurate, fast, and moderately computationally intensive; however, LiDAR is more expensive. The main methods used in LiDAR-based SLAM include the EKF-based SLAM algorithm and the PF-based SLAM algorithm. EKF needs to store the means and covariance matrices of all landmarks, and with the disadvantages of the linearization hypothesis, the Gaussian noise hypothesis, and sensitivity to data association, it is rarely used at present. Although PF is highly fault-tolerant to data association errors, it is computationally intensive. The idea of graph optimization originates from the bundle adjustment (BA) method of structure from motion (SFM). The BA method can effectively use all historical observations, has a high fault tolerance for data association, makes loop closure detection easy, and features sparsity. The Cartographer algorithm released by Google in 2016 combines the filtering method with the graph optimization method and has high overall performance. This chapter focuses on two types of LiDAR-based SLAM algorithms (Gmapping and Hector SLAM); the Cartographer algorithm is introduced in subsequent chapters.
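To make the EKF linearization discussed above concrete, the standard textbook predict–update equations with the Jacobians $F_k = \partial f / \partial x$ and $H_k = \partial h / \partial x$ (given here for reference only; this is the generic form, not a derivation specific to this book) are

$$
\begin{aligned}
\bar{x}_k &= f\left(\hat{x}_{k-1}, u_k\right), \qquad \bar{P}_k = F_k P_{k-1} F_k^{T} + Q_k \\
K_k &= \bar{P}_k H_k^{T}\left(H_k \bar{P}_k H_k^{T} + R_k\right)^{-1} \\
\hat{x}_k &= \bar{x}_k + K_k\left(z_k - h(\bar{x}_k)\right), \qquad P_k = \left(I - K_k H_k\right)\bar{P}_k
\end{aligned}
$$

where $Q_k$ and $R_k$ are the motion and observation noise covariances; the Jacobians are exactly the first-order Taylor truncation described in the text.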

4.2 Gmapping Algorithm

Gmapping is a widely used 2D LiDAR-based SLAM algorithm that adopts the Rao–Blackwellized particle filter (RBPF) method [5, 6]. Adaptive resampling techniques are added in the resampling process to mitigate the particle degradation problem. In
addition, by incorporating the current sensor observations, the uncertainty of the robot pose estimation (particles) can be reduced, and the number of particles can also be appropriately reduced. The advantages of the Gmapping algorithm include the incorporation of odometer information and dynamic adjustment of the map size. However, the mapping quality depends on the number of particles: a sub-map needs to be maintained for each particle, which consumes a lot of memory, and SLAM can only be performed in flat indoor environments without closed-loop detection [7].

4.2.1 Principle Analysis

The core idea of the Gmapping algorithm is to use RBPF to solve the SLAM problem by estimating the joint probability density $p(x_{1:t}, m \mid z_{1:t}, u_{1:t-1})$ of the map $m$ and the robot's motion poses $x$, where $m$ denotes the map, $x_{1:t} = x_1, \cdots, x_t$ denotes the robot's motion poses, $z_{1:t} = z_1, \cdots, z_t$ denotes the observations (usually the observation data from a 2D laser sensor), and $u_{1:t-1} = u_1, \cdots, u_{t-1}$ is the odometer measurement. Drawing upon the Rao–Blackwellized idea, this joint probability is decomposed as in Eq. (4.2).

$$
p\left(x_{1:t}, m \mid z_{1:t}, u_{1:t-1}\right) = p\left(m \mid x_{1:t}, z_{1:t}\right) \cdot p\left(x_{1:t} \mid z_{1:t}, u_{1:t-1}\right)
\tag{4.2}
$$

That is, the robot poses can be estimated first, and then, the map can be estimated using the poses. The particle filtering algorithm can be used to estimate the poses.

4.2.2 Implementation Process

The core of the Gmapping algorithm is the RBPF method with sampling importance resampling (SIR) filtering. The most important steps in the implementation of this algorithm are as follows.
1. Particle Sampling
For each particle $i$, the pose $x_t^{(i)}$ at moment $t$ is sampled from the pose $x_{t-1}^{(i)}$ at moment $t-1$ using the proposal distribution (PD), based on the odometry motion model of the mobile robot.
2. Weights Updating
The weights are updated using Eq. (4.3), which is transformed into an iterative form for ease of programming implementation.

$$
\omega_t^{(i)} = \frac{p\left(x_{1:t}^{(i)} \mid z_{1:t}, u_{1:t-1}\right)}{\pi\left(x_{1:t}^{(i)} \mid z_{1:t}, u_{1:t-1}\right)}
= \frac{1}{p\left(z_t \mid z_{1:t-1}, u_{1:t-1}\right)} \cdot
\frac{p\left(z_t \mid x_{1:t}^{(i)}, z_{1:t-1}\right)\, p\left(x_t^{(i)} \mid x_{t-1}^{(i)}, u_{t-1}\right)}{\pi\left(x_t^{(i)} \mid x_{1:t-1}^{(i)}, z_{1:t}, u_{1:t-1}\right)} \cdot \omega_{t-1}^{(i)}
\tag{4.3}
$$

The PD uses the odometer motion model to approximate the target distribution function. However, the odometer is far less accurate than other sensors (e.g., LiDAR). Directly using the odometer motion model for robot pose estimation results in reduced particle weights in the region of interest, which affects the localization accuracy. Therefore, the Gmapping algorithm adopts a smoothed likelihood function that incorporates the laser observation $z_t$ into the PD; that is, the sample set is concentrated in the likelihood region of the observations, so that the particles of interest do not receive weights that are too low. The updated weights are shown in Eq. (4.4):

$$
\omega_t^{(i)} = \omega_{t-1}^{(i)} \cdot p\left(z_t \mid m_{t-1}^{(i)}, x_{t-1}^{(i)}, u_{t-1}\right)
= \omega_{t-1}^{(i)} \cdot \int p\left(z_t \mid m_{t-1}^{(i)}, x'\right)\, p\left(x' \mid x_{t-1}^{(i)}, u_{t-1}\right)\, \mathrm{d}x'
\tag{4.4}
$$

To generate the next generation of particles more efficiently, the Gmapping algorithm first uses a laser scan matching algorithm to determine the region of interest of the observation likelihood function and then samples from the target distribution within this region.
3. Particle Resampling
The number of copies of each particle is positively correlated with its weight. Because only a limited number of particles can be used to approximate the continuous distribution, the diversity of particles is lost over time; therefore, resampling must be introduced to maintain particle diversity. Resampling is all the more necessary when the target distribution differs from the PD. After resampling, all particles have the same weight, which increases the diversity of particles. The resampling threshold is determined based on Eq. (4.5):

$$
N_{\mathrm{eff}} = \frac{1}{\sum_{i=1}^{N} \left(\omega^{(i)}\right)^2}
\tag{4.5}
$$

When $N_{\mathrm{eff}}$ is less than $N/2$, resampling is required.
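A minimal sketch of this resampling test is shown below (illustrative only, not the Gmapping source code; it assumes the particle weights have already been normalized to sum to 1):

#include <vector>

// Returns true if the effective sample size N_eff of Eq. (4.5) drops below N/2,
// i.e., the particle set has degenerated and should be resampled.
bool needsResampling(const std::vector<double>& normalized_weights) {
  double sum_sq = 0.0;
  for (double w : normalized_weights) sum_sq += w * w;
  const double n_eff = 1.0 / sum_sq;                          // Eq. (4.5)
  return n_eff < 0.5 * static_cast<double>(normalized_weights.size());
}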

4. Map Updating
Gmapping employs the gradient descent method for search and matching. Each particle is scan-matched against the map, with high scores for high matching similarity and low scores for low similarity. The particle with the highest score is selected as the current pose of the mobile robot, which is then used to update the map. The scan matching criterion is given in Eq. (4.6):

$$
x_t^{(i)} = \underset{x}{\arg\max}\; p\left(x \mid m_{t-1}^{(i)}, z_t, x_t^{\prime(i)}\right)
\tag{4.6}
$$

where $x_t^{\prime(i)}$ is the initial estimate. The complete algorithm flow of Gmapping is demonstrated in Fig. 4.12.
Task 2 Gmapping-Based 2D LiDAR Mapping
This task demonstrates the Gmapping SLAM algorithm based on a single-line LiDAR for robot localization and mapping. Gmapping is usually installed together with ROS; for the Kinetic version of ROS, it can be installed (or its presence confirmed) with the following command:

$ sudo apt-get install ros-kinetic-slam-gmapping

1. Install Dependencies
The Gmapping algorithm in this task uses only the LiDAR and does not use the odometer data generated by the Spark wheel speed encoders. Instead, it uses the laser scan information from the rf2o library to generate a virtual odometer. The rf2o library is already included in the GitHub code; however, the MRPT library required by rf2o needs to be installed manually, which can be done with the following command:

$ sudo apt install libmrpt-dev mrpt-apps

2. Launch

$ cd spark
$ source devel/setup.bash
$ roslaunch spark_slam gmapping_no_odom_teleop.launch lidar_type_tel:=3iroboticslidar2

Note that if Spark is configured with an EAI LiDAR (G2 or G6), the argument needs to be changed to lidar_type_tel:=ydlidar_g2 or lidar_type_tel:=ydlidar_g6. It can also be configured according to the actual model. The file gmapping_no_odom_teleop.launch reads as follows:


Fig. 4.12 Flow chart of the Gmapping algorithm (particle initialization → motion update → scan matching; if matching succeeds, generate nearby new particles, compute the Gaussian proposal distribution parameters, and update weights, otherwise compute new particles from the Gaussian model and update weights → update the map with the particle set → resample if required and update the particle set)










The nodes started by this command include the Spark motion control model, the Spark motion control node, the laser node, the Gmapping SLAM algorithm node, the keyboard control, and the map saving node.
3. View the Gmapping SLAM Algorithm File
The spark_gmapping.launch configuration file has the following content:















Fig. 4.13 Mapping result of the Gmapping algorithm















4. Control the Keyboard to Move the Robot in Indoor Environments
The mapping results of the Gmapping SLAM algorithm are shown in Fig. 4.13.


The map size is 40 m × 20 m, and the grid resolution is 5 cm. Some local distortion and misalignment can be observed in the mapping result. However, the map as a whole is accurate, without obvious global distortion, and there is no obvious separation in the closed-loop area of the map.
5. Save Maps
After running for a while, switch to the terminal that prompts to save the map and press any key to save the currently constructed map. The map files will be saved to the spark_slam/scripts folder, where the test_map.pgm and test_map.yaml map files can be seen.
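The Spark scripts handle the saving internally; outside of them, a map built by a running SLAM node can also be saved with the standard map_server tool, which writes a .pgm image and a .yaml metadata file (the file name test_map below is just an example):

$ rosrun map_server map_saver -f test_map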

4.3 Hector SLAM Algorithm

Hector SLAM is another popular 2D/3D LiDAR-based SLAM algorithm that uses Gauss–Newton iteration to solve nonlinear least squares problems. The Hector SLAM algorithm has the following advantages: it does not need odometer data, has low requirements for ground flatness, and requires less computation. However, a laser sensor with high frequency and low measurement noise is needed, the map size is constant, and there is no closed-loop detection. Additionally, it is highly dependent on the result of scan matching, and the algorithm will fail once the matching fails.

4.3.1 Principle Analysis

1. Bilinear Map Interpolation
To obtain a smooth map, the Hector SLAM algorithm adopts the bilinear interpolation method to interpolate the occupancy grid map. The core idea of bilinear interpolation is to interpolate once in the x-direction and once in the y-direction. Based on Fig. 4.14, the probability value of the point $P_m$ is given by Eq. (4.7).

Fig. 4.14 Bilinear interpolation

$$
M(P_m) \approx \frac{y - y_0}{y_1 - y_0}\left(\frac{x - x_0}{x_1 - x_0} M(P_{11}) + \frac{x_1 - x}{x_1 - x_0} M(P_{01})\right)
+ \frac{y_1 - y}{y_1 - y_0}\left(\frac{x - x_0}{x_1 - x_0} M(P_{10}) + \frac{x_1 - x}{x_1 - x_0} M(P_{00})\right)
\tag{4.7}
$$

where $M(P_{x,y})$ denotes the probability that grid $P_{x,y}$ is an obstacle. The partial derivatives of $M(P_m)$ are

$$
\begin{aligned}
\frac{\partial M(P_m)}{\partial x} &\approx \frac{y - y_0}{y_1 - y_0}\left(M(P_{11}) - M(P_{01})\right) + \frac{y_1 - y}{y_1 - y_0}\left(M(P_{10}) - M(P_{00})\right) \\
\frac{\partial M(P_m)}{\partial y} &\approx \frac{x - x_0}{x_1 - x_0}\left(M(P_{11}) - M(P_{10})\right) + \frac{x_1 - x}{x_1 - x_0}\left(M(P_{01}) - M(P_{00})\right)
\end{aligned}
\tag{4.8}
$$
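A direct translation of Eqs. (4.7) and (4.8) into code is straightforward. The sketch below is illustrative only (the GridMap accessor M(ix, iy) is a hypothetical placeholder for the stored occupancy grid, and the cell size is assumed to be 1 in grid units):

#include <cmath>

struct GridMap {
  // Hypothetical accessor: occupancy probability of the grid cell (ix, iy).
  // A real implementation would index into the stored grid data.
  double M(int ix, int iy) const { (void)ix; (void)iy; return 0.5; }
};

struct Interp { double value, dx, dy; };

// Bilinear interpolation of the occupancy map at continuous coordinates (x, y),
// implementing Eq. (4.7) for the value and Eq. (4.8) for the partial derivatives.
Interp interpolate(const GridMap& map, double x, double y) {
  const int x0 = static_cast<int>(std::floor(x));
  const int y0 = static_cast<int>(std::floor(y));
  const double fx = x - x0, fy = y - y0;                  // (x - x0), (y - y0)
  const double m00 = map.M(x0, y0),     m10 = map.M(x0 + 1, y0);
  const double m01 = map.M(x0, y0 + 1), m11 = map.M(x0 + 1, y0 + 1);

  Interp r;
  r.value = fy * (fx * m11 + (1.0 - fx) * m01)
          + (1.0 - fy) * (fx * m10 + (1.0 - fx) * m00);   // Eq. (4.7)
  r.dx = fy * (m11 - m01) + (1.0 - fy) * (m10 - m00);     // Eq. (4.8), dM/dx
  r.dy = fx * (m11 - m10) + (1.0 - fx) * (m01 - m00);     // Eq. (4.8), dM/dy
  return r;
}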

2. Scan Matching
The optimization criterion for the scan matching of the Hector SLAM algorithm is

$$
\xi^{*} = \underset{\xi}{\arg\min} \sum_{i=1}^{n}\left[1 - M\left(S_i(\xi)\right)\right]^2
\tag{4.9}
$$

That is, an optimal transform $\xi = (p_x, p_y, \psi)^{T}$ is sought that best aligns the laser scan with the map. $S_i(\xi)$ is the coordinate of the end point of the $i$-th scan beam in the world frame, expressed as Eq. (4.10).

$$
S_i(\xi) = \begin{pmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{pmatrix}
\begin{pmatrix} s_{i,x} \\ s_{i,y} \end{pmatrix} + \begin{pmatrix} p_x \\ p_y \end{pmatrix}
\tag{4.10}
$$

For any given initial estimate $\xi$, an optimal increment $\Delta\xi$ is sought such that Eq. (4.11) holds.

$$
\sum_{i=1}^{n}\left[1 - M\left(S_i(\xi + \Delta\xi)\right)\right]^2 \rightarrow 0
\tag{4.11}
$$

The first-order Taylor expansion of $M\left(S_i(\xi + \Delta\xi)\right)$ is carried out and the minimum of the expansion is computed; i.e., the partial derivative with respect to $\Delta\xi$ is taken and set to zero, yielding Eq. (4.12).

$$
\Delta\xi = H^{-1} \sum_{i=1}^{n}\left[\nabla M\left(S_i(\xi)\right) \frac{\partial S_i(\xi)}{\partial \xi}\right]^{T}\left[1 - M\left(S_i(\xi)\right)\right]
\tag{4.12}
$$

where the Hessian matrix $H$ is given by Eq. (4.13), and the partial derivative of $S_i(\xi)$ with respect to $\xi$ is given by Eq. (4.14).

$$
H = \sum_{i=1}^{n}\left[\nabla M\left(S_i(\xi)\right) \frac{\partial S_i(\xi)}{\partial \xi}\right]^{T}\left[\nabla M\left(S_i(\xi)\right) \frac{\partial S_i(\xi)}{\partial \xi}\right]
\tag{4.13}
$$

$$
\frac{\partial S_i(\xi)}{\partial \xi} = \begin{pmatrix} 1 & 0 & -\sin(\psi)\, s_{i,x} - \cos(\psi)\, s_{i,y} \\ 0 & 1 & \cos(\psi)\, s_{i,x} - \sin(\psi)\, s_{i,y} \end{pmatrix}
\tag{4.14}
$$
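The Gauss–Newton step of Eqs. (4.12)–(4.14) can be written compactly with Eigen (available in ROS). The sketch below is illustrative only, not the Hector SLAM source; sampleMap() is a hypothetical placeholder for the bilinear map interpolation of Eqs. (4.7)–(4.8), returning the map value and its gradient at a world point.

#include <cmath>
#include <vector>
#include <Eigen/Dense>

struct MapSample { double value, dx, dy; };   // M(S_i), dM/dx, dM/dy at a world point

// Hypothetical placeholder for the bilinear interpolation of the occupancy map.
MapSample sampleMap(double x, double y) { (void)x; (void)y; return {0.5, 0.0, 0.0}; }

// One Gauss-Newton update of the scan pose xi = (px, py, psi),
// following Eqs. (4.12)-(4.14). `scan` holds the beam end points s_i in the scan frame.
Eigen::Vector3d gaussNewtonStep(const Eigen::Vector3d& xi,
                                const std::vector<Eigen::Vector2d>& scan) {
  const double c = std::cos(xi(2)), s = std::sin(xi(2));
  Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
  Eigen::Vector3d g = Eigen::Vector3d::Zero();

  for (const Eigen::Vector2d& p : scan) {
    // S_i(xi): rotate and translate the end point into the world frame, Eq. (4.10).
    const double wx = c * p.x() - s * p.y() + xi(0);
    const double wy = s * p.x() + c * p.y() + xi(1);
    const MapSample m = sampleMap(wx, wy);

    // J = grad M(S_i) * dS_i/dxi, a 1x3 row vector built from Eq. (4.14).
    Eigen::RowVector3d J;
    J << m.dx, m.dy,
         m.dx * (-s * p.x() - c * p.y()) + m.dy * (c * p.x() - s * p.y());

    H += J.transpose() * J;                    // Eq. (4.13)
    g += J.transpose() * (1.0 - m.value);      // right-hand side of Eq. (4.12)
  }
  // Delta xi = H^-1 * g, Eq. (4.12); a real implementation would also guard
  // against a singular H (e.g., too few valid beams).
  return xi + H.ldlt().solve(g);
}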

According to the above derivation, the core of the Hector SLAM algorithm is to use the first-order Taylor series expansion of the Gauss–Newton iteration method to approximately replace the nonlinear regression model and then minimize the sum of squared residuals of the original model (Eq. 4.11) by multiple iterations. Using the update equation (Eq. 4.12), a good solution can be obtained within a few iterations, so the Hector SLAM algorithm runs fast. Meanwhile, to obtain better scan matching results and a faster running speed, Hector SLAM adopts multiresolution map layered iterations to accelerate the convergence of the solution. It should be noted that the multiresolution map here is not simply a low-resolution map obtained by downsampling the high-resolution map; instead, the algorithm directly maintains maps of different resolutions. The iterative solution process starts from the low-resolution map, and the solution obtained from the low-resolution map iteration is used as the initial value of the high-resolution map solution; the algorithm then iterates again to obtain a more accurate solution. For example, a three-layer map can be selected in the iterative process, with map resolutions from low to high of 20 cm, 10 cm, and 5 cm, respectively. After several iterations on each resolution map, the solution is passed to the next, higher-resolution layer as the initial value and the iterative computation continues; finally, an optimal solution is obtained.
Task 3 Hector SLAM-Based 2D LiDAR Mapping
This task demonstrates the Hector SLAM algorithm based on a single-line LiDAR for robot localization and mapping.


1. Launch

$ cd spark
$ source devel/setup.bash
$ roslaunch spark_slam 2d_slam_teleop.launch slam_methods_tel:=hector lidar_type_tel:=3iroboticslidar2

Note that if Spark is configured with an EAI LiDAR (G2 or G6), the argument needs to be changed to lidar_type_tel:=ydlidar_g2 or lidar_type_tel:=ydlidar_g6. It can also be configured according to the actual model. The file 2d_slam_teleop.launch reads as follows:







The nodes started by this command include the following: the Spark motion control model, the Spark motion control node, the laser node, the Hector SLAM algorithm node, the keyboard control, and the map saving node.
2. View the Hector SLAM Algorithm File
When using the laser sensor only, modify the following three places in the spark_hector.launch configuration file.



The spark_hector.launch configuration file (which uses only laser sensor data, not odometer data) has the following content:



































3. Control the Keyboard to Move the Robot in Indoor Environments
The mapping result of the Hector SLAM algorithm is shown in Fig. 4.15, and the map shows no obvious offset or distortion.
4. Save Maps
After running for a while, switch to the terminal that prompts to save the map and press any key to save the currently constructed map. The map files will be saved to the spark_slam/scripts folder, where the test_map.pgm and test_map.yaml map files can be seen.

Fig. 4.15 Mapping result of the Hector SLAM algorithm


4.3.2 Mapping Results

Based on Eq. (4.12), the Hector SLAM algorithm uses the Gauss–Newton iterative method to seek the globally optimal solution. When the motion of the mobile robot is too vigorous, this method often does not reach the global optimum; instead, a local optimum is obtained, or the iteration fails to converge. Because the Hector SLAM algorithm accumulates errors and has no mechanism to eliminate them, the algorithm will eventually fail to find a solution. Therefore, when the translational and steering speeds of the mobile robot are too fast, the Hector SLAM algorithm fails. In particular, when the steering angular velocity is too high, obvious deviations appear in the map, leading to a mapping failure. Thus, Hector SLAM is suitable for sensors with a high sampling frequency.

4.4 Summary

This chapter covers the fundamental concepts and principles of SLAM and common 2D LiDAR SLAM algorithms. Based on the ROS navigation framework, two key technologies in SLAM are analyzed: environmental mapping and pose estimation. Thereafter, an example is presented to explain the coordinate transform method of mobile robots. Furthermore, two classical 2D LiDAR SLAM algorithms are analyzed: Gmapping and Hector SLAM. In addition, two mapping experiments presented in this chapter enable readers to deepen their understanding of the key SLAM techniques and lay the foundation for further research on other SLAM algorithms [8–10].

Further Reading
Ji Zhang and Sanjiv Singh proposed LiDAR odometry and mapping (LOAM) in real time using range measurements from a 2-axis LiDAR moving with 6-DOF.1 This is a difficult problem because the range measurements are received at different times, and errors in the motion estimation can cause misregistration of the resulting point cloud. Presently, coherent 3D maps can be built by offline batch methods, which often employ loop closure to correct for drift over time. The LOAM method achieves both low drift and low computational complexity without the requirement for high-accuracy ranging or inertial measurements. The key concept for obtaining
this level of performance is dividing the complex problem of SLAM, which seeks to optimize a large number of variables simultaneously, between two algorithms. One algorithm performs odometry at a high frequency but low fidelity to estimate the velocity of the LiDAR. The other algorithm runs at a frequency an order of magnitude lower for fine matching and registration of the point cloud. Combining the two algorithms enables the method to realize real-time mapping. The method has been evaluated by a large set of experiments as well as on the KITTI odometry benchmark. The results indicate that the proposed method can achieve accuracy at the level of state-of-the-art offline batch methods.
Tixiao Shan and Brendan Englot proposed a lightweight and ground-optimized LiDAR odometry and mapping method (LeGO-LOAM)2 to obtain real-time six degree-of-freedom pose estimation for ground vehicles. LeGO-LOAM is lightweight: it can achieve real-time pose estimation on a low-power embedded system. It is ground-optimized, as it leverages the presence of the ground plane in its segmentation and optimization steps. In LeGO-LOAM, point cloud segmentation is first applied to filter out noise, and feature extraction is utilized to obtain distinctive planar and edge features. A two-step Levenberg–Marquardt optimization method is subsequently applied to the planar and edge features to solve for different components of the six degree-of-freedom transformation across consecutive scans. Using datasets gathered from variable terrain environments with ground vehicles to compare the performance of LeGO-LOAM with LOAM, experimental results showed that LeGO-LOAM achieves similar or better accuracy with reduced computational expense. LeGO-LOAM was integrated into a SLAM framework to eliminate the pose estimation error caused by drift, which was tested using the KITTI dataset.

1 Ji Zhang, Sanjiv Singh. LOAM: Lidar Odometry and Mapping in Real-time. 2014 Robotics: Science and Systems Conference (RSS 2014), vol. 2, no. 9, 12–16 July 2014, Berkeley, California, USA.
2 Tixiao Shan, Brendan Englot. LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), pp. 4758–4765, 1–5 Oct 2018, Madrid, Spain.
Exercises
1. Type the start command in the terminal: roslaunch turtle_tf2 turtle_tf2_demo.launch
This command starts the turtle-following program built into ROS; the movement of one turtle can be controlled by the keyboard, while the other turtle follows it.
2. Some practical tools built into ROS can easily and quickly implement a function or debug a program. During the operation of the turtle following in exercise (1), please use rosbag to record the movement, and play back the recording at 0.5 times speed.


3. What optimization algorithms are used in the back-end processing of a SLAM system? What are the advantages and disadvantages of the different algorithms?
4. Use an example to describe the TF from base_link to base_laser of a mobile robot.
5. Gmapping is one of the classic 2D laser SLAM algorithms. What are the core idea and the process of the Gmapping algorithm?
6. What are the inputs and outputs of the ROS Gmapping package? What topics are subscribed and published? Please install the Gmapping package and its dependencies, download the relevant dataset and replay it, and then run the Gmapping algorithm to build a map.
7. Describe the principles, advantages, and disadvantages of the Hector SLAM algorithm.
8. Write the mathematical expression for the Gauss–Newton method.

References
1. Thrun S, Burgard W, Fox D (2019) Probabilistic robotics (trans: Cao H, Tan Z, Shi X). Machinery Industry Press, Cambridge
2. Zheng W (2017) Research on key technologies of mapping and localization system for mobile robots. Huazhong University of Science and Technology, Wuhan
3. Barfoot TD (2018) State estimation for robotics (trans: Gao X, Yan Q, Liu F). Xi'an Jiaotong University Press, Xi'an
4. Dellaert F, Fox D, Burgard W, Thrun S (1999) Monte Carlo localization for mobile robots. Proceedings of 1999 IEEE international conference on robotics and automation (ICRA 1999), 10–15 May 1999, Detroit, Michigan, USA, vol 2, pp 1322–1328
5. Doucet A, de Freitas JFG, Murphy K, Russel S (2000) Rao-Blackwellized particle filtering for dynamic Bayesian networks. Proceedings of the conference on uncertainty in artificial intelligence (UAI 2000), June 30–July 3, San Francisco, CA, USA, pp 176–183
6. Grisetti G, Stachniss C, Burgard W (2007) Improved techniques for grid mapping with Rao-Blackwellized particle filters. IEEE Trans Robot 23(1):34–46
7. Hess W, Kohler D, Rapp H, Andor D (2016) Real-time loop closure in 2D LiDAR SLAM. Proceedings of 2016 IEEE international conference on robotics and automation (ICRA 2016), 16–21 May 2016, Stockholm, Sweden, pp 1271–1278
8. Montemerlo M, Thrun S, Koller D, Wegbreit B (2002) FastSLAM: a factored solution to the simultaneous localization and mapping problem. Proceedings of the national conference on artificial intelligence (AAAI 2002), 28 July–1 August 2002, Edmonton, Alberta, Canada, pp 593–598
9. Montemerlo M, Thrun S (2003) Simultaneous localization and mapping with unknown data association using FastSLAM. Proceedings of the IEEE international conference on robotics & automation (ICRA 2003), 14–19 Sept 2003, Taipei, China, pp 1985–1991
10. Montemerlo M, Thrun S, Koller D, Wegbreit B (2003) FastSLAM 2.0: an improved particle filtering algorithm for simultaneous localization and mapping that provably converges. Proceedings of the international joint conference on artificial intelligence (IJCAI 2003), 9–15 August 2003, Acapulco, Mexico, pp 1151–1156

Chapter 5

Autonomous Navigation for Mobile Robot

Abstract Navigation and localization are important aspects of robot research. Mobile robots use sensors in unfamiliar environments to perform map modeling and localization, followed by navigation. Many well-established packages in ROS can be used directly. For navigation, three packages are required: (1) move_base: performs path planning to move the mobile robot to the specified location; (2) gmapping, hector, cartographer, etc. construct maps based on LiDAR data; and (3) adaptive Monte Carlo localization is used for localization based on the constructed map. In the previous chapter, the use of LiDAR sensors for mapping was described. Based on that, this chapter will describe how the robot can use sensor information to determine its position and to navigate autonomously when the map information is known. Keywords Navigation · path planning · AMCL · localization · particle filter

5.1 Map-Based Localization

If the initial position of the robot and velocities of the left and right wheels are known, the distances travelled by the left and right wheels over a period of time can be calculated, which in turn can be used to determine the displacement and rotation angle of the robot. In this way, the pose information will be updated. However, significant issues exist related to this approach. First, the velocity is obtained through the robot’s internal sensors. Due to errors in the sensors, the cumulative error will increase if time integration is used to determine the distance. In addition, mobile robots will encounter a skidding-like problem during their motion, which can lead to inaccurate pose estimations. Therefore, utilizing a map for robot localization can effectively reduce the error.
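As an illustration of this wheel-odometry calculation, a standard differential-drive dead-reckoning update is (the symbols below are introduced here only for illustration and are not used elsewhere in the chapter):

$$
\begin{aligned}
\Delta d &= \frac{\Delta d_r + \Delta d_l}{2}, \qquad \Delta\theta = \frac{\Delta d_r - \Delta d_l}{b}, \\
x_{k+1} &= x_k + \Delta d \cos\!\left(\theta_k + \tfrac{\Delta\theta}{2}\right), \quad
y_{k+1} = y_k + \Delta d \sin\!\left(\theta_k + \tfrac{\Delta\theta}{2}\right), \quad
\theta_{k+1} = \theta_k + \Delta\theta,
\end{aligned}
$$

where $\Delta d_r$ and $\Delta d_l$ are the distances travelled by the right and left wheels during one sampling period and $b$ is the wheel separation. Because each term is obtained by integrating measured wheel velocities, any velocity or slippage error accumulates in the pose estimate, which is exactly the drift described above.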

5.1.1 Monte Carlo Localization

The Monte Carlo localization (MCL) algorithm is a probabilistic method that considers the current position of the robot as a density model with many particles
[1]. It adopts a particle filter-based approach to localization. Conceptually, each particle can be considered to represent a hypothesis that the robot is at that position. Particle filters have become a very popular class of numerical methods for solving optimal estimation problems in nonlinear, non-Gaussian scenarios. In comparison with standard approximation methods, such as the popular extended Kalman filter, the principal advantage of particle methods is that they do not rely on any local linearization technique or any crude functional approximation. The price paid for this flexibility is a high computational cost. However, with the availability of ever-increasing computational power, these methods are already used in real-time applications in fields as diverse as robotics, computer vision, and object tracking.
MCL can be used on robot systems with external distance sensors, such as LiDAR. A LiDAR sensor measures the distance between the robot and the nearest obstacle in each direction, and at each point in time the robot obtains these measurements. As shown in Fig. 5.1a, the green triangle represents the robot, the red line represents the laser beam, and the yellow grid is the nearest obstacle detected by the robot in the direction of that laser beam. Figure 5.1b demonstrates the occupancy grid map (OGM), in which a white grid indicates an obstacle while a black grid indicates an empty location.

Fig. 5.1 LiDAR and map information. (a) Obstacle detection; (b) occupancy grid map

The mobile robot localization problem at a certain time point thus becomes an optimization problem: the data from the LiDAR sensor are used to determine the robot's optimal pose estimate that best matches the map information [2], as shown in Fig. 5.2. As shown in Fig. 5.3, this localization method is divided into several steps. First, all the particles are moved according to the robot motion model, and the particle probabilities depend on the scores calculated from the sensing model.


Fig. 5.2 Possible pose distribution of the robot

Fig. 5.3 Monte Carlo algorithm steps

Thereafter, based on these probabilities, more particles are moved to the sampling locations with higher probabilities, i.e., resampling is performed. Finally, the current optimal pose of the robot is estimated according to the average probability distribution, and the next round of particles is then determined according to the motion model.

Table 5.1 Correspondence between LiDAR sensor readings and the map result

                            Map: Free    Map: Occupied
  LiDAR reading: Free           a              c
  LiDAR reading: Occupied       b              d

1. Initialize Particle Swarm
The particle swarm size is defined as M. When the initial pose of the robot is known, M copies of the initial pose can be used as the initial particle swarm. When the initial pose is unknown, the swarm can be initialized by random sampling, distributing the particles over the map.
2. Simulate Particle Motion
To simulate the particle motion, first estimate the motion of each particle at the next moment based on the motion model of the robot, and then obtain the particle swarm at the next moment. This method accurately models the motion of the robot. However, the motion model of the robot is sometimes difficult to build. The continuity of the robot's motion can be exploited instead: the range of the robot's motion between two adjacent sampling instants is limited, so a random function can be used to generate particles describing the possible poses of the robot.
3. Calculate Particle Scores
For each point measured by the LiDAR sensor, there are four matching cases between the sensor reading and the map, as shown in Table 5.1. Each matching case has a corresponding score, e.g., (a, b, c, d) = (+1, -5, -5, +10). In practice, many of the areas scanned by the LiDAR are blank, and calculating the score of every grid cell is quite time-consuming. Hence, it is possible to count only the matches between the obstacles detected by the LiDAR sensor and the obstacles in the map, i.e., to modify the score assignment to (a, b, c, d) = (0, 0, 0, +1). After scoring each particle, the particle with the highest score is selected as the robot's pose estimate at that moment.
4. Particle Swarm Resampling
After each round of scoring, some particles have low scores; that is, they deviate severely from the possible poses of the robot and have to be discarded to improve the quality and convergence speed of the particle swarm. Other particles have high scores that are close to each other and need to be retained. However, directly discarding the particles with low scores would reduce the total number of particles. To keep the size of the particle swarm constant, the particles with high scores are copied and resampled so that the total number of particles remains unchanged. This is the process of particle swarm resampling.
Based on the above four steps, MCL has the following advantages:
• The global pose of the robot can be estimated without modeling the robot's motion.
• It is not necessary to know the initial position of the robot.


• The calculation process consumes less memory; it is less computationally intensive and produces more accurate results.
• The pose estimation of the robot motion is very smooth, which is suitable for the navigation control of mobile robots.
• The algorithm is easy to implement.

On the other hand, MCL also has the following disadvantages:
• MCL will fail once the robot is carried to an arbitrary position (the kidnapped robot problem).
• The localization accuracy is related to the number of particles, and a large number of particles are required to improve the localization accuracy.
• The localization is slow to converge.

MCL is applicable to both local and global localization problems and has become the dominant algorithm in the field of robot localization. However, MCL cannot recover from the kidnapped robot problem or from a global localization failure. When the robot position is obtained, the incorrect particles elsewhere fade away. Thus, to some extent, the particles can only "survive" in the vicinity of a single pose, and if this localization is incorrect, the algorithm cannot recover.
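As noted above, the four MCL steps can be strung together in a short C sketch. It is an illustrative simplification, not the ROS amcl implementation: the map size, particle count, noise amplitudes, and the single fake laser beam in main() are all assumed values, and the scoring uses the simplified rule (a, b, c, d) = (0, 0, 0, +1) described in step 3.

/* Illustrative Monte Carlo localization sketch (not the ROS amcl code). */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define MAP_W 100            /* map size in grid cells (assumed) */
#define MAP_H 100
#define M     500            /* particle swarm size (assumed) */
#define PI    3.14159265358979

static unsigned char grid_map[MAP_H][MAP_W];   /* 1 = obstacle, 0 = free */

typedef struct { double x, y, theta, w; } Particle;

static double frand(void)     { return (double)rand() / RAND_MAX; }
static double noise(double s) { return s * (2.0 * frand() - 1.0); }

/* Step 1: initial pose unknown -> spread the particles over the whole map. */
static void init_particles(Particle p[])
{
    for (int i = 0; i < M; ++i) {
        p[i].x = frand() * MAP_W;
        p[i].y = frand() * MAP_H;
        p[i].theta = frand() * 2.0 * PI;
        p[i].w = 1.0 / M;
    }
}

/* Step 2: move every particle by the odometry increment plus random noise. */
static void motion_update(Particle p[], double dx, double dy, double dth)
{
    for (int i = 0; i < M; ++i) {
        p[i].x += dx + noise(0.1);
        p[i].y += dy + noise(0.1);
        p[i].theta += dth + noise(0.05);
    }
}

/* Step 3: score a particle by counting how many laser endpoints, projected
 * from the particle's pose, land on occupied cells of the map. */
static void score_particles(Particle p[], const double *ranges,
                            const double *angles, int nbeams)
{
    for (int i = 0; i < M; ++i) {
        int hits = 0;
        for (int k = 0; k < nbeams; ++k) {
            int cx = (int)(p[i].x + ranges[k] * cos(p[i].theta + angles[k]));
            int cy = (int)(p[i].y + ranges[k] * sin(p[i].theta + angles[k]));
            if (cx >= 0 && cx < MAP_W && cy >= 0 && cy < MAP_H &&
                grid_map[cy][cx] == 1)
                ++hits;
        }
        p[i].w = hits + 1e-6;          /* keep weights strictly positive */
    }
}

/* Step 4: draw M particles with probability proportional to their weights,
 * so low-score particles are discarded and high-score ones are duplicated. */
static void resample(Particle p[])
{
    static Particle next[M];
    double cum[M], total = 0.0;
    for (int i = 0; i < M; ++i) { total += p[i].w; cum[i] = total; }
    for (int i = 0; i < M; ++i) {
        double r = frand() * total;
        int j = 0;
        while (j < M - 1 && cum[j] < r) ++j;
        next[i] = p[j];
        next[i].w = 1.0 / M;
    }
    for (int i = 0; i < M; ++i) p[i] = next[i];
}

int main(void)
{
    for (int y = 0; y < MAP_H; ++y) grid_map[y][50] = 1;   /* toy wall at x = 50 */

    static Particle p[M];
    double ranges[1] = { 10.0 };       /* one fake beam of length 10 cells */
    double angles[1] = { 0.0 };

    init_particles(p);
    for (int step = 0; step < 10; ++step) {   /* a few predict-score-resample cycles */
        motion_update(p, 0.0, 0.0, 0.0);      /* robot assumed stationary here */
        score_particles(p, ranges, angles, 1);
        resample(p);
    }

    double mx = 0.0, my = 0.0;                /* mean of the particle cloud */
    for (int i = 0; i < M; ++i) { mx += p[i].x / M; my += p[i].y / M; }
    printf("estimated position: (%.1f, %.1f)\n", mx, my);
    return 0;
}

In a real system the odometry increment and the laser scan would come from the robot at every cycle, and the pose estimate would be the weighted mean (or the highest-scoring particle) of the converged swarm.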

5.1.2 Adaptive Monte Carlo Localization

To address some of the drawbacks of MCL, many improved versions have been developed, of which adaptive Monte Carlo localization (AMCL) is quite effective. MCL may accidentally discard all particles near the correct pose in the resampling step. This problem becomes most evident when the number of particles is small (e.g., 50) and the particles are spread over a relatively large area (e.g., during global localization). In such cases, MCL needs to be optimized.

The AMCL method is adaptive in two main ways. First, it addresses the problem of a fixed particle number: when the mobile robot localization converges, the particles are basically concentrated in one place, and the number of particles can be appropriately reduced. Second, it addresses the kidnapped robot problem: when the average score of the particles is found to have suddenly decreased (i.e., the correct particles were discarded in a certain iteration), more particles are redistributed globally.

For the kidnapped robot problem, AMCL adopts a heuristic approach and adds particles randomly during the resampling process. The kidnapped robot problem is a small-probability event, so AMCL randomly adds free particles with a certain probability, which raises two questions: how many free particles need to be added to solve the kidnapped robot problem, and how are the free particles distributed? Regarding the first question, AMCL monitors two weights, ωslow and ωfast, representing the long-term and real-time estimated weights, respectively. ωslow and ωfast are controlled by the parameters αslow and αfast. The average weight of


all particles, ωavg, can be determined by combining the MCL principle and the particle filter. Then, ωslow and ωfast are updated as in Eq. (5.1):

ωslow = ωslow + αslow(ωavg − ωslow)
ωfast = ωfast + αfast(ωavg − ωfast)    (5.1)

The number of free particles is determined by Eq. (5.2):

max(0, 1 − ωfast/ωslow)    (5.2)

From Eq. (5.2), when the real-time estimated weights are better than the long-term estimated weights, no free particles are added; otherwise, the number of added free particles is obtained from Eq. (5.2). The added free particles are distributed in the free grid area of the map, and a uniform distribution model is adopted to add them randomly.

On the other hand, in MCL the number of particles used is fixed and immutable, while AMCL uses Kullback–Leibler divergence (KLD) sampling to adaptively calculate the number of particles required, which improves the utilization and reduces the redundancy of particles, thus increasing the system computational speed and reducing the consumption of system memory resources. In MCL, it is necessary to cover the entire map with uniform random particles; thus, a large number of particles are required. When the particles gradually converge, it is extremely wasteful to continue to maintain such a large particle swarm. Therefore, the algorithm uses Eq. (5.3) to calculate the upper bound Ntop on the number of required particles. KLD sampling keeps generating particles until the statistical bound of Eq. (5.3) is satisfied:

Ntop = ((k − 1)/(2α)) · [1 − 2/(9(k − 1)) + √(2/(9(k − 1))) · β]³    (5.3)

where α and β are, respectively, the maximum error between the real and estimated distributions and the standard normal distribution quantile, and k denotes the number of particles whose state space is nonempty. As can be seen from Eq. (5.3), the upper bound on the number of particles has an approximately linear relationship with k. In the initial case, i.e., at the beginning of global localization, the particles are relatively dispersed and the value of k is large, so the upper bound on the number of particles is high, which ensures the localization accuracy. When the global localization is completed, the problem is transformed into a trajectory-tracking problem. At this time, the particles are relatively concentrated and in a convergent state; the number of particles with a nonempty state space is small and the value of k is small, so the upper bound on the number of particles is low.
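The two adaptive mechanisms just described, random particle injection and the KLD particle bound, can be summarized in a short C sketch. This is an illustration rather than the amcl source code; the values used in main() (αslow = 0.001, αfast = 0.1, α = 0.01, and β ≈ 2.33, i.e., reading kld_z = 0.99 as a probability level) are example values borrowed from the defaults listed in the next subsection.

/* Illustrative sketch of the w_slow / w_fast recovery heuristic,
 * Eqs. (5.1)-(5.2), and the KLD particle bound, Eq. (5.3). */
#include <stdio.h>
#include <math.h>

static double w_slow = 0.0, w_fast = 0.0;

/* Eqs. (5.1)-(5.2): update the long-term and short-term average weights and
 * return the probability of injecting a random free particle. */
static double random_injection_prob(double w_avg, double a_slow, double a_fast)
{
    w_slow += a_slow * (w_avg - w_slow);
    w_fast += a_fast * (w_avg - w_fast);
    double p = 1.0 - w_fast / w_slow;
    return p > 0.0 ? p : 0.0;
}

/* Eq. (5.3): upper bound on the particle count, where k is the number of
 * particles with a nonempty state space, alpha the maximum error, and beta
 * the standard normal quantile. */
static int kld_particle_bound(int k, double alpha, double beta)
{
    if (k <= 1) return 1;
    double a = 2.0 / (9.0 * (k - 1));
    return (int)ceil((k - 1) / (2.0 * alpha) * pow(1.0 - a + sqrt(a) * beta, 3.0));
}

int main(void)
{
    /* let the averages settle on a typical weight, then simulate a sudden
     * drop of the average weight (assumed alpha_slow = 0.001, alpha_fast = 0.1) */
    for (int i = 0; i < 5000; ++i) random_injection_prob(0.5, 0.001, 0.1);
    printf("injection prob. after weight drop: %.3f\n",
           random_injection_prob(0.05, 0.001, 0.1));

    /* alpha = 0.01; beta ~ 2.33 if kld_z = 0.99 is read as a probability level */
    printf("particle upper bound for k = 50: %d\n",
           kld_particle_bound(50, 0.01, 2.33));
    return 0;
}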


Through such a control method, the number of particles is effectively and dynamically adjusted, thus improving the convergence speed of the system. In the ROS navigation framework, the AMCL package is used to implement mobile robot localization. This method is a probabilistic localization system in a 2D environment that uses a particle filter to track the mobile robot poses in a known map [3]. Unlike the traditional odometer localization method, which performs track projections based on odometer and IMU measurements, the AMCL method uses external sensors to obtain information about the mobile robot in the global map, eliminating the cumulative odometer localization error. The various interfaces of the AMCL package are described below.

5.1.2.1 Topics and Services of the AMCL Package

• Topics Subscribed by the AMCL Package
1. tf (tf/tfMessage): It indicates the coordinate transform message.
2. initialpose (geometry_msgs/PoseWithCovarianceStamped): It indicates the mean and covariance used to initialize the particle filter.
3. scan (sensor_msgs/LaserScan): It indicates LiDAR data.
4. map (nav_msgs/OccupancyGrid): When the use_map_topic parameter is set, AMCL subscribes to the map topic to retrieve the map data used for LiDAR-based localization.

• Topics Published by the AMCL Package
1. amcl_pose (geometry_msgs/PoseWithCovarianceStamped): It indicates the estimated pose of the robot in the map, with covariance information.
2. particlecloud (geometry_msgs/PoseArray): It indicates the set of pose estimates maintained by the particle filter.
3. tf (tf/tfMessage): It publishes the transform from odom (which can be remapped via the ~odom_frame_id parameter) to map.

• Services of the AMCL Package
1. global_localization (std_srvs/Empty): It initiates global localization, in which all particles are dispersed randomly through the free space in the map.
2. request_nomotion_update (std_srvs/Empty): It manually performs an update and publishes the updated particles.

• Services Called by the AMCL Package
1. static_map (nav_msgs/GetMap): AMCL calls this service to retrieve the map data.
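Assuming the default topic and service names listed above, these interfaces can be exercised directly from the command line once the amcl node is running; for example, the pose estimate can be monitored and a global re-localization triggered as follows:

$ rostopic echo /amcl_pose
$ rosservice call /global_localization "{}"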

5.1.2.2 Parameters of AMCL Package

There are many parameters in the AMCL package, and three categories of parameters are available for configuring AMCL nodes: overall filter, laser model, and odometer model.

• Overall Filter Parameters
1. ~min_particles (int, default: 100): minimum allowed number of particles.
2. ~max_particles (int, default: 5000): maximum allowed number of particles.
3. ~kld_err (double, default: 0.01): maximum error between the true and estimated distributions.
4. ~kld_z (double, default: 0.99): the upper standard normal quantile for (1 − p), where p is the probability that the error of the estimated distribution is less than kld_err.
5. ~update_min_d (double, default: 0.2 m): translational movement required before performing a filter update.
6. ~update_min_a (double, default: π/6.0 radians): rotational movement required before performing a filter update.
7. ~resample_interval (int, default: 2): the number of filter updates required before resampling.
8. ~transform_tolerance (double, default: 0.1 s): time with which to post-date the published transform to indicate that this transform is valid into the future.
9. ~recovery_alpha_slow (double, default: 0.0 (disabled)): exponential decay rate for the slow average weight filter, used to determine when to recover by adding random poses. A good value might be 0.001.
10. ~recovery_alpha_fast (double, default: 0.0 (disabled)): exponential decay rate for the fast average weight filter, used to determine when to recover by adding random poses. A good value might be 0.1.
11. ~initial_pose_x (double, default: 0.0 m): initial pose mean (x), used to initialize the filter with a Gaussian distribution.
12. ~initial_pose_y (double, default: 0.0 m): initial pose mean (y), used to initialize the filter with a Gaussian distribution.
13. ~initial_pose_a (double, default: 0.0 radians): initial pose mean (yaw), used to initialize the filter with a Gaussian distribution.
14. ~initial_cov_xx (double, default: 0.5 * 0.5 m): initial pose covariance (x * x), used to initialize the filter with a Gaussian distribution.
15. ~initial_cov_yy (double, default: 0.5 * 0.5 m): initial pose covariance (y * y), used to initialize the filter with a Gaussian distribution.
16. ~initial_cov_aa (double, default: (π/12) * (π/12) radians): initial pose covariance (yaw * yaw), used to initialize the filter with a Gaussian distribution.
17. ~gui_publish_rate (double, default: -1.0 Hz): maximum rate (Hz) at which messages are published for visualization; -1.0 to disable.


18. ~save_pose_rate (double, default: 0.5 Hz): maximum rate (Hz) at which to store the last estimated pose and covariance to the parameter server in the variables ~initial_pose and ~initial_cov. This saved pose will be used on subsequent runs to initialize the filter; -1.0 to disable.
19. ~use_map_topic (bool, default: false): when set to true, AMCL will subscribe to the map topic instead of making a service call to receive the map.
20. ~first_map_only (bool, default: false): when set to true, AMCL will only use the first map it subscribes to, rather than updating each time a new one is received.

• Laser Model Parameters
1. ~laser_min_range (double, default: -1.0): minimum scan range; -1.0 will cause the laser's reported minimum range to be used.
2. ~laser_max_range (double, default: -1.0): maximum scan range; -1.0 will cause the laser's reported maximum range to be used.
3. ~laser_max_beams (int, default: 30): the number of evenly spaced beams in each scan to be used when updating the filter.
4. ~laser_z_hit (double, default: 0.95): mixture parameter for the z_hit part of the model.
5. ~laser_z_short (double, default: 0.1): mixture parameter for the z_short part of the model.
6. ~laser_z_max (double, default: 0.05): mixture parameter for the z_max part of the model.
7. ~laser_z_rand (double, default: 0.05): mixture parameter for the z_rand part of the model.
8. ~laser_sigma_hit (double, default: 0.2 m): standard deviation for the Gaussian model used in the z_hit part of the model.
9. ~laser_lambda_short (double, default: 0.1): exponential decay parameter for the z_short part of the model.
10. ~laser_likelihood_max_dist (double, default: 2.0 m): maximum distance for obstacle inflation on the map, for use in the likelihood_field model.
11. ~laser_model_type (string, default: "likelihood_field"): indicates which model to use, either beam, likelihood_field, or likelihood_field_prob.

• Odometer Model Parameters
1. ~odom_model_type (string, default: "diff"): indicates which model to use, either "diff," "omni," "diff-corrected," or "omni-corrected."
2. ~odom_alpha1 (double, default: 0.2): specifies the expected noise in the odometry's rotation estimate from the rotational component of the robot's motion.
3. ~odom_alpha2 (double, default: 0.2): specifies the expected noise in the odometry's rotation estimate from the translational component of the robot's motion.


4. ~odom_alpha3 (double, default: 0.2): specifies the expected noise in the odometry's translation estimate from the translational component of the robot's motion.
5. ~odom_alpha4 (double, default: 0.2): specifies the expected noise in the odometry's translation estimate from the rotational component of the robot's motion.
6. ~odom_alpha5 (double, default: 0.2): translation-related noise parameter (used only when the model is "omni").
7. ~odom_frame_id (string, default: "odom"): specifies which frame to use for odometry.
8. ~base_frame_id (string, default: "base_link"): specifies which frame to use for the robot chassis.
9. ~global_frame_id (string, default: "map"): indicates the name of the coordinate frame published by the localization system.
10. ~tf_broadcast (bool, default: true): when set to false, AMCL will not publish the coordinate transform between map and odom.

Task 1 Mobile Robot Localization
Before performing this task, make sure that Task 2 or Task 3 in Chapter 4 has been completed and the map saved; otherwise, an error will be reported when executing the command. Please note that the robot must run within the boundaries of the constructed map, or it will fail to localize. The following demonstrates the mobile robot localization experiment. The AMCL localization function is activated by entering the following commands:

$ cd spark
$ source devel/setup.bash
$ roslaunch spark_navigation amcl_rviz.launch lidar_type_tel=3iroboticslidar2

Please note that if Spark is configured with an EAI LiDAR (g2 or g6), lidar_type_tel=3iroboticslidar2 needs to be changed to lidar_type_tel=ydlidar_g2 or lidar_type_tel=ydlidar_g6, i.e., configured according to the actual model. The amcl_rviz.launch file brings up the nodes described next.





The nodes launched by this command include the Spark motion control model, the Spark motion control node, the laser node, the map saving node, amcl localization, and the rviz display. After a successful run, the map and the initial set of particles can be observed in rviz, as shown in Fig. 5.4, where the particles represent the possible distribution of the mobile robot's pose estimates. Initially, the particles are dispersed. After launching the keyboard control node, the particles can be made to converge by controlling the robot motion with the following commands:

$ source ~/spark/devel/setup.bash
$ rosrun spark_teleop teleop_node 0.2 1


Fig. 5.4 Initial distribution of particle set

With the movement of the mobile robot, it is possible to see that the particle set gradually converges. The particle distribution after convergence is demonstrated in Fig. 5.5. The particle distribution is relatively concentrated with a good localization effect.

5.2 Map-Based Autonomous Navigation

5.2.1 Navigation Framework

Robot navigation consists of two processes: localization and path planning. The AMCL and move_base packages provided by ROS can build a complete framework for robot navigation.¹ The ROS navigation framework is shown in Fig. 5.6. The core of the navigation stack is the move_base node, which contains five modules: global_planner, local_planner, global_costmap, local_costmap, and recovery_behaviors. The move_base node subscribes to topics including tf (coordinate frame transform), odom (odometry data), map, sensor data (laser data or point clouds), and goal, and then publishes the cmd_vel topic. The modules are as follows.

¹ Configure and use the navigation feature package set on the robot: http://wiki.ros.org/cn/navigation/Tutorials/RobotSetup


Fig. 5.5 Particle distribution after convergence

Fig. 5.6 Framework of the ROS navigation

tf: It uses ROS's tf tool to publish the transformation relationships between coordinate frames, including /map->/odom, /odom->/base_link, and /base_link->/sensor.
odom: The navigation stack needs odometry data, which must be published using the tf tool and nav_msgs/Odometry messages.
map: It provides a global map before navigation, which needs to be created in advance (although a prebuilt map is not strictly required).


sensor data: It is used for obstacle avoidance and mapping. It can be LiDAR planar data or point cloud data (sensor_msgs/LaserScan or sensor_msgs/PointCloud).
goal: It is the coordinates of the target location in the global map, published in the geometry_msgs/PoseStamped message format.

The navigation framework contains a number of packages, which are installed directly using the following command:

$ sudo apt-get install ros-kinetic-navigation
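Whether the required transforms and odometry are actually being published can be checked with the standard ROS tools, for example (topic and frame names depend on the robot configuration):

$ rosrun tf view_frames          # generates frames.pdf showing the current tf tree
$ rosrun tf tf_echo map base_link
$ rostopic echo /odom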

Although the navigation stack is designed to be as generic as possible, it still imposes three requirements on the robot's hardware.
1. The navigation stack is intended only for differential-drive and holonomic (omnidirectional) wheeled robots, and it assumes that the robot can be controlled directly with velocity commands in the form of an x velocity, a y velocity, and an angular (yaw) velocity.
2. The navigation stack requires the robot to be equipped with a 2D planar ranging sensor, such as a LiDAR, to construct a map and localize.
3. The navigation stack was developed using a square or round robot as the model. Robots with other shapes can still be used, but they may not be as well supported.
As can be seen in the navigation framework diagram, move_base provides the interface for configuring, running, and interacting with ROS navigation, which consists of two main parts: global path planning and local path planning.

5.2.2 Global Path Planning

Global path planning is the process by which the robot finds an optimal path from its current location to a target point in a known environment. The optimal path here refers to the shortest path with no obstacles along the way. If the robot encounters an obstacle while moving along the planned global path, local path planning is performed to bypass the obstacle and reach the target point.

For example, a grid map divides the actual site into many small areas. Each small area is called a grid, and each grid represents a node. These nodes are divided into obstacle nodes, free nodes, and unknown nodes, and all nodes have unique coordinates in the same coordinate system. The obstacle information of each grid is represented by a binary number, where "1" indicates an obstacle and "0" indicates a free node. For simplicity and safety, the unknown nodes are also represented by "1" to prevent the robot from moving into unknown areas. All free nodes constitute a "free zone" in the map, and connecting the path points in the "free zone" in order constructs a weighted graph, with the weight being the distance


between nodes. The global path planning finds the shortest path from vertex to vertex in the graph. Graph theory is a branch of mathematics that takes graphs as the research objects; it has become a powerful tool for solving many problems in natural science, engineering, and social science. A graph is a mathematical object that represents the relationship between objects and is defined as an ordered binary set (V, E), where V is called the vertices set and E is called the edges set. E is disjoint from V. They can also be written as V(G) and E(G). If each edge is given a direction in the graph, it is called a directed graph. In a directed graph, there are outgoing and incoming edges associated with a node. Conversely, a graph in which edges have no direction is called an undirected graph. The shortest path problem is a classical problem in graph theory that looks for the shortest path between two points according to the value of “distance.” Common solving methods are Dijkstra’s algorithm and A* and D* algorithms.

5.2.2.1 Dijkstra's Algorithm

Dijkstra’s algorithm is a typical single source shortest path algorithm that is used to find the shortest path between nodes. The basic idea of this algorithm is to take the starting source point as the center, find one vertex closest to the source point at a time, and then expand outward layer by layer with that vertex as the center until it expands to the target point. Finally, the shortest path from the source point to all the remaining points is obtained. Let G = (V, E) be a weighted directed graph, with the weights all greater than 0. Vertices set V in the graph is divided into two groups. The first group is called group S, which is the set of vertices for which the shortest path has been found. Every time a shortest path is found in the future, it is added to group S until all the vertices are added to group S. The second group is called group U, which is the set of the remaining vertices for which the shortest path has not been determined. The second group of vertices is added to S in increasing order of the shortest path length. During the addition process, the shortest path length from the starting point to each vertex in S is always less than the shortest path length from the starting point to any vertex in U. The specific algorithmic steps are as follows. 1. During initialization, group S contains only the starting point, i.e., S = {V0}, and the V0 distance is 0, while group U contains points other than the starting point. If there is a path between V0 and vertex Ui in group U, then there is a normal weight between < Ui, V0>, which is generally the distance value. If there are no edges reachable between the two, the weight between < Ui, V0> is infinite (also set to a certain large value when programming, e.g., 9999). 2. Select a vertex Vk from group U that is closest to V0 and add Vk to group S. This selected distance is the shortest path length from V0 to Vk.

Fig. 5.7 Typical undirected graph

3. Taking Vk as the new intermediate point, modify the distances of the vertices in group U. If the distance from V0 to a vertex Vm passing through Vk is shorter than the direct distance from V0 to Vm, modify the distance value of vertex Vm so that it equals the sum of the distance from V0 to Vk and the distance from Vk to Vm. Otherwise, keep the distance value of vertex Vm unchanged.
4. Repeat the above steps until all vertices are contained in group S.

Figure 5.7 illustrates a typical undirected graph. Next, Dijkstra's algorithm is used to find the shortest paths from point A to the other points, as shown in Table 5.2. According to Table 5.2, the search results for the shortest paths are A→A = 0, A→B = 4, A→B→C = 6, A→F = 5, A→E = 6, A→F→D = 8. In practice, the optimal path can be found by taking the current position of the robot as the starting point and each path point as the target point, where the weight of a node-to-node path is simply the grid-to-grid distance. The validation result of Dijkstra's algorithm is shown in Fig. 5.8; it is the search result for the shortest paths of the undirected graph in Fig. 5.7. The upper chart of Fig. 5.8 is the adjacency matrix corresponding to the undirected graph in Fig. 5.7. The result is consistent with the analysis in Table 5.2.
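The search can be reproduced with a short C program. The following sketch runs Dijkstra's algorithm on the graph of Fig. 5.7, using the edge weights implied by Table 5.2 (A–B = 4, B–C = 2, C–D = 5, C–F = 4, D–F = 3, D–E = 8, A–E = 6, A–F = 5), and prints the same distances as obtained above.

/* Dijkstra's algorithm on the undirected graph of Fig. 5.7
 * (edge weights taken from the analysis in Table 5.2). */
#include <stdio.h>

#define N   6               /* vertices A, B, C, D, E, F */
#define INF 9999            /* "infinite" weight, as in step 1 */

int main(void)
{
    /* adjacency matrix; 0 entries off the diagonal mean "no edge" */
    const int w[N][N] = {
        /*        A  B  C  D  E  F */
        /* A */ { 0, 4, 0, 0, 6, 5 },
        /* B */ { 4, 0, 2, 0, 0, 0 },
        /* C */ { 0, 2, 0, 5, 0, 4 },
        /* D */ { 0, 0, 5, 0, 8, 3 },
        /* E */ { 6, 0, 0, 8, 0, 0 },
        /* F */ { 5, 0, 4, 3, 0, 0 },
    };
    int dist[N], inS[N] = {0};

    for (int i = 0; i < N; ++i) dist[i] = INF;   /* group U: undetermined */
    dist[0] = 0;                                  /* source vertex A */

    for (int round = 0; round < N; ++round) {
        /* pick the vertex in U closest to the source and move it into S */
        int k = -1;
        for (int i = 0; i < N; ++i)
            if (!inS[i] && (k == -1 || dist[i] < dist[k])) k = i;
        inS[k] = 1;

        /* relax the distances of the remaining vertices through k */
        for (int j = 0; j < N; ++j)
            if (!inS[j] && w[k][j] > 0 && dist[k] + w[k][j] < dist[j])
                dist[j] = dist[k] + w[k][j];
    }

    for (int i = 0; i < N; ++i)
        printf("A -> %c = %d\n", 'A' + i, dist[i]);
    return 0;
}

Running the program prints A = 0, B = 4, C = 6, D = 8, E = 6, F = 5, matching Table 5.2.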

5.2.2.2 A* Algorithm

The A* algorithm is an algorithm for solving shortest paths in static networks. It combines a heuristic approach with breadth-first search (BFS) and Dijkstra's algorithm and, provided the heuristic does not overestimate the true cost, is guaranteed to find a shortest path. The core idea of the A* algorithm is to find a path that minimizes, at each step along the path, the sum of the movement cost G and the estimated cost H. The problem is thus transformed into minimizing the sum of G and H. The meaning of G and H, and how to minimize their sum, are described in detail below. As shown in Fig. 5.9, suppose one wants to walk from grid A to grid B, with a wall in between.

Table 5.2 Path search process based on Dijkstra's algorithm

Step 1
S: Add vertex A. At this point S = <A>. The shortest path is A→A = 0. Start finding with A as the intermediate point.
U: A→B = 4, A→E = 6, A→F = 5, A→other vertices = infinite. The minimum weight A→B = 4 is obtained.

Step 2
S: Add vertex B. At this point S = <A B>. The shortest paths are A→A = 0, A→B = 4. Taking B as the intermediate point, start from the path A→B = 4.
U: A→B→C = 6, A→B→other vertices = infinite. The minimum weight A→B→C = 6 is obtained.

Step 3
S: Add vertex C. At this point S = <A B C>. The shortest paths are A→A = 0, A→B = 4, A→B→C = 6. Taking C as the intermediate point, start from the path A→B→C.
U: A→B→C→D = 11; A→B→C→F = 10 (greater than A→F = 5 in step 1, so the weight of F is kept as 5, i.e., A→F = 5); A→B→C→E = infinite (greater than A→E = 6 in step 1, so the weight of E is kept as 6, i.e., A→E = 6). The minimum weight A→F = 5 is obtained.

Step 4
S: Add vertex F. At this point S = <A B C F>. The shortest paths are A→A = 0, A→B = 4, A→B→C = 6, A→F = 5. Taking F as the intermediate point, start from the path A→F.
U: A→F→D = 8 (less than A→B→C→D = 11 in step 3, so the weight of D is changed to 8, i.e., A→F→D = 8); A→F→E = infinite (greater than A→E = 6 in step 1, so the weight of E is kept as 6, i.e., A→E = 6). The minimum weight A→E = 6 is obtained.

Step 5
S: Add vertex E. At this point S = <A B C F E>. The shortest paths are A→A = 0, A→B = 4, A→B→C = 6, A→F = 5, A→E = 6. Taking E as the intermediate point, start from the path A→E.
U: A→E→D = 14 (greater than A→F→D = 8 in step 4, so the weight of D is kept as 8, i.e., A→F→D = 8). The minimum weight A→F→D = 8 is obtained.

Step 6
S: Add vertex D. At this point S = <A B C F E D>. The shortest paths are A→A = 0, A→B = 4, A→B→C = 6, A→F = 5, A→E = 6, A→F→D = 8.
U: The U set is empty; path finding is complete.

The obstacles and unknown areas are uniformly marked as "1" in black, and the "free zone" areas where the robot can walk are marked as "0." A and B represent the robot and the target point, respectively. The goal is to find a shortest path from A to B. First, the following two definitions are given. G denotes the movement cost accumulated in moving from starting point A to a given square. H denotes the estimated cost of moving from that square to ending point B. By replacing both the movement cost


Fig. 5.8 Validation result of Dijkstra's algorithm
Fig. 5.9 Diagram of the distribution of spatial barriers

and estimated cost with distance values, G becomes the distance (or cost) accumulated from starting point A to the specified square, and H denotes the distance (or cost, i.e., the heuristic estimated cost) from the specified square to ending point B. Note that the "distance" here refers to the number of grids passed, not the straight-line distance measured with a measuring tool. The search process is as follows.

1. Starting from point A, first put point A into a list of grids to be checked, namely the open list, which holds the points to be checked and judged.
2. Check all the free grids adjacent to A, add them to the open list, and set the starting point A as the father of these grids.


3. Remove grid A from the open list and add it to another list, namely the close list, which holds the finally selected path points that will be concatenated into the final path.
4. Calculate the G and H values of all the grids adjacent to A in the open list and find their sum. Here, the G value of a grid is the distance accumulated from the starting point to that grid, and the H value is the estimated distance from that grid to the ending point. Then, find the grid with the smallest sum of G and H, put it into the close list as the next path point, and delete it from the open list.
5. Check all the free grids adjacent to the grid selected in step 4; ignore the grids already in the close list; and add any grids not yet in the open list to the open list.
6. Take the selected grid as the father of each grid newly added to the open list. If an adjacent free grid is already in the open list, check whether walking from the selected grid to this free grid gives a smaller G value. If so, set the father of this grid to the currently selected grid.
7. Repeat the above steps until the ending point is also added to the close list. At this time, the points in the close list form the shortest path.

The following conditions are required to implement the A* algorithm.
1. Map: It is a structure for storing a static road network, consisting of grids.
2. Grid: A grid is the basic unit that constitutes a graph and can also be called a node. Each grid has five attributes: coordinates, G, H, F (the sum of G and H), and parent node.
3. Open list: It is used to store the grids waiting to be processed.
4. Close list: It is used to store the grids of the actual path.
5. Starting point and ending point: They specify the initial position of the robot and the target point, which are the two inputs to the algorithm.

The above storage structures are required for the implementation of the A* algorithm, and abstract definitions of these structures are given below. The map struct definition contains the coordinates, reachability, number of adjacent nodes, and pixel value of the node.

typedef struct // map struct
{
    int x, y;                              // map coordinates
    unsigned char reachable, sur, value;   // reachability, number of adjacent nodes, value (0 or 1)
} MapNode;

The close list structure definition is used to store the final path node.

244

5

Autonomous Navigation for Mobile Robot

typedef struct Close // close list
{
    MapNode *cur;          // pointer to the map struct
    char vis;              // record whether the node is accessed
    struct Close *father;  // record the parent node
    int G, H, F;           // moving cost, estimated cost, and the sum of the two
} Close;

The open list structure definition is used to hold the grid points to be processed.

typedef struct Open // open table
{
    int length;                // the length of the current queue
    Close *Array[MaxLength];   // pointers to the evaluation nodes
} Open;

The A* algorithm is implemented around the above three important structures. Because the A* algorithm needs to analyze the F value of each grid's neighbors and the map consists of a 2D array, a grid can have up to eight adjacent grids. To process these grids one by one, an adjacency array of size 8 is designed, and a loop processes its elements from the first to the eighth. The adjacency array takes the current grid as the origin and its entries as offsets; that is, the search can travel in eight directions.

typedef struct
{
    signed char x, y;
} Point;

const Point dir[8] =
{
    {0,1},{1,1},{1,0},{1,-1},       // east, southeast, south, southwest
    {0,-1},{-1,-1},{-1,0},{-1,1}    // west, northwest, north, northeast
};
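Using the same idea (an open list ordered by F = G + H and a close list of finalized grids), the following self-contained C sketch runs A* on a small hard-coded grid. It deliberately uses plain arrays instead of the MapNode/Close/Open structures above, 4-connected neighbors, unit step costs, and the Manhattan distance as H; the toy map and the start and goal cells are arbitrary example values.

/* Simplified grid A* sketch (illustrative, not the exact implementation
 * built on the structures above). */
#include <stdio.h>
#include <stdlib.h>

#define ROWS 5
#define COLS 6
#define INF  1000000

static const int grid[ROWS][COLS] = {   /* 1 = obstacle, 0 = free grid */
    {0,0,0,0,0,0},
    {0,1,1,1,1,0},
    {0,0,0,0,1,0},
    {0,1,1,0,1,0},
    {0,0,0,0,0,0},
};

typedef struct { int r, c; } Pt;

static int g[ROWS][COLS], closed[ROWS][COLS];
static Pt parent[ROWS][COLS];

static int heur(Pt a, Pt goal)           /* H: Manhattan distance */
{ return abs(a.r - goal.r) + abs(a.c - goal.c); }

int main(void)
{
    Pt start = {0, 0}, goal = {4, 5};
    Pt open[4 * ROWS * COLS];
    int nopen = 0;
    const int dr[4] = {1, -1, 0, 0}, dc[4] = {0, 0, 1, -1};

    for (int r = 0; r < ROWS; ++r)
        for (int c = 0; c < COLS; ++c) g[r][c] = INF;
    g[start.r][start.c] = 0;
    open[nopen++] = start;

    while (nopen > 0) {
        int best = 0;                    /* grid with the smallest F = G + H */
        for (int i = 1; i < nopen; ++i)
            if (g[open[i].r][open[i].c] + heur(open[i], goal) <
                g[open[best].r][open[best].c] + heur(open[best], goal))
                best = i;
        Pt cur = open[best];
        open[best] = open[--nopen];      /* remove it from the open list */
        if (closed[cur.r][cur.c]) continue;
        closed[cur.r][cur.c] = 1;        /* add it to the close list */
        if (cur.r == goal.r && cur.c == goal.c) break;

        for (int k = 0; k < 4; ++k) {    /* expand the free neighbors */
            int nr = cur.r + dr[k], nc = cur.c + dc[k];
            if (nr < 0 || nr >= ROWS || nc < 0 || nc >= COLS) continue;
            if (grid[nr][nc] == 1 || closed[nr][nc]) continue;
            int ng = g[cur.r][cur.c] + 1;
            if (ng < g[nr][nc]) {        /* smaller G: update the father */
                g[nr][nc] = ng;
                parent[nr][nc] = cur;
                open[nopen++] = (Pt){nr, nc};
            }
        }
    }

    if (g[goal.r][goal.c] == INF) { printf("no path found\n"); return 0; }
    printf("shortest path length: %d steps\n", g[goal.r][goal.c]);
    for (Pt p = goal; !(p.r == start.r && p.c == start.c); p = parent[p.r][p.c])
        printf("(%d,%d) <- ", p.r, p.c);
    printf("(%d,%d)\n", start.r, start.c);
    return 0;
}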

As shown in Fig. 5.10, a shaded grid indicates an obstacle, and a blank grid is a free grid. The vertical axis in the figure is the x-direction and the horizontal axis is the y-direction. Figure 5.11 demonstrates the effect of path planning from the starting point (0,0) to the target point (10,12). It can be seen that the A* algorithm can effectively avoid the obstacles and accurately find the shortest path. Dijkstra’s algorithm and the A* algorithm are both effective approaches to solving shortest path problems. The differences between the two approaches are mainly reflected in the following aspects.


Fig. 5.10 Grid map

1. Dijkstra’s algorithm focuses on calculating the shortest path length from the source point to all other points and can be regarded as having only one input, the specified source point; the focus of the A* algorithm is the shortest path between points, and each calculation will have two inputs, the starting point and the ending point, which can lead directly to a specific path. 2. Dijkstra’s algorithm is built on a more abstract graph theory, while the A* algorithm can be applied more directly to grid maps. 3. The search range of the A* algorithm is smaller than that of Dijkstra’s algorithm: Thus, A* is more efficient. 4. When there are many target points, the A* algorithm will introduce numerous repetitive data points and complex valuation functions. Therefore, if the determination of obtaining specific paths is not required but only a comparison of path lengths, Dijkstra’s algorithm will be more efficient. The path obtained by the traditional A* algorithm has more broken line inflection points, resulting in some redundant nodes in the obtained path. Such a path is not optimal and is not conducive to controlling robot movement. Hence, the path obtained by the algorithm needs to be optimized to reduce the number of redundant nodes and improve the robot movement efficiency. The D * algorithm was developed from the A * algorithm. It is also a global path planning algorithm and a dynamic heuristic path search algorithm. The biggest advantage of the D* algorithm is that it does not need to explore the map in advance, which means that the


Fig. 5.11 Path planning from (0,0) to (10,12)

environment is unknown beforehand, and the robot can take action. In addition, as the robot continues to explore, the path will be continually adjusted. The ROS move_base framework provides an interface to configure, run, and interact with robot navigation, allowing the easy configuration of real-time global and local path planning for the robot. The default global path planning for the move_base framework is the navfn package, which implements Dijkstra’s algorithm to calculate the minimum cost path on the costmap as the robot global path. If the global_planner package is used, the global path planning algorithm is the optimized A* algorithm.

5.2.3 Local Path Planning

Mobile robot path planning contains two important parts. From the global point of view, the goal is to calculate the global path to the target point after determining its own position and target point coordinates in the map. If an obstacle is detected during the movement, the robot carries out obstacle avoidance, that is, local path


Fig. 5.12 Path planning process flowchart

planning. Local path planning requires a detailed judgment of the obstacle situation to cope with the changing dynamic environment, to adopt an adaptive processing method, and to simplify the algorithm complexity [4]. Figure 5.12 shows a flowchart of the path search on the grid map, where global path planning is performed first. When an obstacle or the target is encountered, the parameters of the corresponding environment are selected to avoid the obstacle or reach the target point.

In local path planning, the environmental information is partially or completely unknown to the robot. Local path planning focuses on the robot's current local environmental information, so that the robot can have good obstacle avoidance capabilities. The robot's working environment is detected through sensors to obtain information such as the location and geometric properties of obstacles. This planning requires the collection of environmental data and continuous dynamic updates of the environmental model [5]. The local planning method integrates the modeling and search of the environment, which requires the robot system to have high-speed information processing and computational capabilities, high robustness to environmental errors and noise, and the ability to provide real-time feedback and


Fig. 5.13 Obstacle distribution model

correction of the planning results. However, due to the lack of global environmental information, the planning results may not be optimal, and the correct or complete path may not be found. When the robot walks along the planned path, there will be an infrared distance sensor, ultrasonic sensor, or LiDAR sensor for obstacle detection and real-time ranging. Based on the actual size of the robot, when the distance to the detected obstacle is less than or equal to the safety distance, the robot considers the situation and performs obstacle avoidance. Therefore, local path planning is a real-time path planning for real-time avoidance based on the information of obstacles near the robot. As shown in Fig. 5.13, the obstacles are abstracted and standardized into rectangles according to their width and distance. In other words, in practical situations, the shape of the obstacle itself can be ignored, and the obstacle width and distance information in three directions can be directly obtained. After the abstracted obstacle distribution model is obtained, it is possible to safely avoid the obstacles and guarantee the safety of the robot.


Please note that if the size of the abstracted rectangle is not reasonable, the estimated obstacle range may be larger than the actual range, which will reduce the space for the mobile robot to move around. The main local path planning methods include the artificial potential field method, dynamic window approach, and timed elastic band algorithm.

5.2.3.1 Artificial Potential Field Method

The traditional artificial potential field (APF) method is a local path planning algorithm proposed by Khatib. It is mainly characterized by a small computational effort, easily met real-time requirements, and relatively smooth paths. APF treats the motion space as a potential field model: the target point generates an attractive (gravitational) potential field on the robot, the obstacles generate repulsive potential fields, and the robot moves toward the target point under the combined force of the two.

1. Selection of the Repulsive Force Function
Because the obstacle is inaccessible to the robot, the obstacle should exert a repulsive effect on the robot in the potential field; the smaller the distance, the stronger the repulsive effect, i.e., the greater the potential energy of the robot. Conversely, the larger the distance, the smaller the potential energy. Therefore, the potential energy is inversely proportional to the distance. The repulsive potential function is given as Eq. (5.4):

Ur = k1/r,   if r ≤ rmax (r ≠ 0)
Ur = 0,      otherwise                       (5.4)

where r is the distance between the reference point of the robot and the obstacle, rmax is the maximum range of the potential field action, and k1 is a weighting factor. The repulsive force on the robot is given as Eq. (5.5):

Fr = −grad(Ur) = k1/r²,   if r ≤ rmax (r ≠ 0)
Fr = 0,                    otherwise         (5.5)

Equation (5.6) gives the direction (angle) of the repulsive force:

θ = π + arctan((yobst − yrobot)/(xobst − xrobot))    (5.6)

In Eq. (5.6), (xobst, yobst) denotes the coordinates of the obstacle, and (xrobot, yrobot) denotes the coordinates of the robot. The direction of the repulsive force is away from the obstacle. When r → 0, Fr tends to infinity. To avoid a collision between the robot and the obstacle, a minimum safe distance L is set; when r → L, Fr should be sufficiently large. To keep Fr continuous, it can be modified as Eq. (5.7):

Fr = k1/(r − L)² − k1/(rmax − L)²,   if r ≤ rmax
Fr = 0,                              if r > rmax        (5.7)

rmax depends on the size of the robot and is generally taken as the diameter of the robot's circumscribed circle.

2. Selection of the Gravitational Potential Function
The motion of the robot is always toward the target point; therefore, the force of the target point on the robot can be regarded as a gravitational (attractive) force related to the distance between the robot and the target point. The greater the distance between the two, the greater the gravitational potential energy. When the distance is 0, the gravitational potential energy is also 0; at this point, the robot has reached the target point. Based on the above rules, the gravitational potential function of the robot can be set as Eq. (5.8):

Ug = k2 · d    (5.8)

where k2 is the weighting factor and d is the distance from the robot to the target point. The gravitational force on the robot is shown in Eq. (5.9):

Fg = k2    (5.9)

The angle of the gravitational force is calculated as shown in Eq. (5.10):

θa = arctan((ygoal − yrobot)/(xgoal − xrobot))    (5.10)

From Eq. (5.10), it can be seen that the direction of the gravitational force points to the target point, and the magnitude of the gravitational force is actually a constant. In practical applications, the gravity value can be set by adjusting the appropriate parameter.

3. Calculation of the Resultant Force
After calculating the gravitational and repulsive forces on the robot in the first two steps, the magnitude and direction of the final resultant force on the robot can be derived by decomposing and combining the forces. The calculation is as follows. Calculate the repulsive force Fr and the gravitational force Fg on the robot, and decompose each of them along the x- and y-axes to obtain the repulsive components Frx and Fry and the gravitational components Fgx and Fgy. The magnitude of the resultant force is then

F = √((Frx + Fgx)² + (Fry + Fgy)²)

and the direction of the resultant force is

θ = arctan((Fry + Fgy)/(Frx + Fgx))

Once the magnitude and direction of the resultant forces are obtained through the above steps, these forces can be used to determine the direction of the highest efficiency path to bypass the obstacle and the required speed and angular velocity, providing a parametric basis for the motion control of the robot. The APF method is simple in principle and easy to program and implement. However, when there are complex situations and multiple obstacles in the scene, it is easy to fall into local extrema, especially when the target point is close to the obstacle. The robot may stop its motion before reaching the target or vibrate and swing back and forth near the obstacle, increasing the burden of turning and shifting, and reducing the robot walking efficiency. To address this issue, some improved APF methods have emerged. For example, based on the traditional APF method, the improved APF methods analyze whether the robot will fall into a certain local extremum by evaluating the correlation between the gravitational force and the repulsive force on the robot. If this evaluation indicates a local extreme, the robot can move a distance in the direction perpendicular to the gravitational force to avoid the point. Furthermore, operation parameters of the robot can be refined when avoiding obstacles according to the different obstacle orientations and distances. In addition, different parameters can be chosen based on different situations. Compared with the traditional method of using only a set of fixed parameters to apply in all situations, the improved APF method has a stronger adaptability and can improve the obstacle avoidance efficiency. Figure 5.14 shows the dynamic result of the obstacle avoidance of local path planning using the improved APF method described above. The small red circles represent the moving mobile robot, and the small blue circles represent the obstacles that do not exist in the known map but are detected by the sensors. This figure indicates that the algorithm is able to avoid the obstacles effectively.
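The basic force calculation of Eqs. (5.7)–(5.10) can be written compactly as follows. The gains k1 and k2, the safe distance L, the action range rmax, and the obstacle and goal positions are arbitrary example values, and atan2 is used in place of the arctangent of a ratio so that the quadrant is handled automatically.

/* Illustrative APF resultant-force computation (example parameters). */
#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979

typedef struct { double x, y; } Vec2;

/* Repulsive force of one obstacle, Eq. (5.7), with direction from Eq. (5.6). */
static Vec2 repulsive(Vec2 robot, Vec2 obst, double k1, double rmax, double L)
{
    Vec2 f = {0.0, 0.0};
    double dx = obst.x - robot.x, dy = obst.y - robot.y;
    double r = sqrt(dx * dx + dy * dy);
    if (r > rmax) return f;                    /* outside the potential field */
    if (r < L + 1e-3) r = L + 1e-3;            /* clamp near the safe distance */
    double mag = k1 / ((r - L) * (r - L)) - k1 / ((rmax - L) * (rmax - L));
    double theta = PI + atan2(dy, dx);         /* points away from the obstacle */
    f.x = mag * cos(theta);
    f.y = mag * sin(theta);
    return f;
}

/* Attractive (gravitational) force toward the goal, Eqs. (5.9)-(5.10). */
static Vec2 attractive(Vec2 robot, Vec2 goal, double k2)
{
    double theta = atan2(goal.y - robot.y, goal.x - robot.x);
    Vec2 f = { k2 * cos(theta), k2 * sin(theta) };
    return f;
}

int main(void)
{
    Vec2 robot = {0.0, 0.0}, goal = {5.0, 5.0}, obst = {1.0, 1.5};
    Vec2 fr = repulsive(robot, obst, 1.0, 2.0, 0.3);
    Vec2 fg = attractive(robot, goal, 2.0);

    /* resultant force: magnitude and direction as derived above */
    double fx = fr.x + fg.x, fy = fr.y + fg.y;
    printf("|F| = %.3f, theta = %.3f rad\n",
           sqrt(fx * fx + fy * fy), atan2(fy, fx));
    return 0;
}

The resulting direction and magnitude would then be converted into linear and angular velocity commands for the robot, as described above.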

5.2.3.2 Dynamic Window Approach

The dynamic window approach (DWA) samples multiple sets of velocities in the robot velocity space and simulates the robot trajectories at these velocities for a certain time. After multiple sets of trajectories are obtained, they are evaluated, and the velocity corresponding to the optimal trajectory is selected to drive the robot motion to avoid obstacles. This method is generally used for local path planning on the 2D plane. It is suitable for static environments and requires less computational resources. In addition, the motion dynamics model of the robot is considered;

Fig. 5.14 Effect of the obstacle avoidance algorithm on local path planning (panels a–f)

however, its cost function may get stuck in local optima. The specific steps of the DWA are as follows.
1. Sample discretely in the robot velocity space (dx, dy, dtheta).
2. For each sampled velocity, simulate the state of the robot after traveling at this velocity for a short period of time, which generates a candidate trajectory.


3. Evaluate each trajectory obtained: whether it is close to obstacles, the target, and the global path; discard illegal trajectories.
4. Select the trajectory with the highest score and send the corresponding velocity to the mobile robot.
5. Repeat the above steps. (A minimal sketch of this sampling-and-scoring loop is given at the end of this subsection.)

DWA needs to simulate many candidate motion trajectories before the robot moves, which increases the amount of computation and tends to cause delays in obstacle avoidance. Many obstacles will cause the robot to spin, resulting in a non-optimal motion trajectory, which reduces the safety and motion efficiency of the robot.

The trajectory rollout algorithm is another local path planning method similar to DWA. Their major difference lies in how the robot velocity space is sampled. The sampling points of the trajectory rollout algorithm come from all available velocity sets throughout the forward simulation stage, while the sampling points of DWA come from the available velocity sets of just one simulation step. This means that DWA is the more efficient algorithm in comparison, because it uses a smaller sampling space; however, DWA is not suitable for mobile robots with low accelerations. In this case, the trajectory rollout algorithm is better.

In ROS, the base_local_planner package implements the DWA and trajectory rollout algorithms. It calculates the velocity and angle (dx, dy, dtheta velocities) at which the robot should travel in each cycle and uses the map data to search for multiple paths to reach the target. The optimal path is then selected based on evaluation criteria (whether it will hit an obstacle, the time required, etc.), and the required real-time velocity and angle are calculated and sent to the mobile robot.

A dynamic window subgraph approach for local path planning, similar to DWA, is presented below. In the motion space of a mobile robot, there are two types of obstacles: fixed obstacles known in advance and temporary obstacles. For the fixed obstacles known in advance, an optimal obstacle avoidance path is determined using the A* algorithm, the improved A* algorithm, or other methods, and the robot reaches the target location from its current position. For temporary obstacles, the sensor maps the observed local obstacle information into a local subgraph (a dynamic window subgraph of size N*N), and the obstacles mapped into the local subgraph are then inflated based on the obstacle radius. In this way, the obstacles around the robot can be built into the local subgraph in real time. As the robot keeps moving, the obstacles in the surrounding space, including fixed obstacles, temporarily placed obstacles, and dynamic obstacles, are continuously updated into the dynamic window subgraph for robot obstacle avoidance.

To accelerate the judgment of whether the obstacles fused into the global map will collide with the robot and to improve efficiency, the portion of the global path that lies within the current subgraph is intercepted. The current position of the robot is taken as the local starting point, and the point where the global path intersects the subgraph boundary is taken as the local target point. Then, the Bresenham line algorithm is adopted to judge whether there are obstacles on the path from the local starting


point to the local target point. This method can quickly and accurately determine whether the temporary obstacles will collide with the robot.

The updating principle of the dynamic window subgraph is to update the subgraph only when the motion distance of the robot exceeds a set threshold value. The threshold value of the dynamic window subgraph update is therefore critical: if it is set too small, the system computation time increases and the real-time performance decreases; if it is set too large, the robot obstacle avoidance accuracy decreases. Furthermore, the size of the dynamic window subgraph is also important: if it is too small, the obstacle avoidance accuracy is reduced; if it is too large, the system computation time increases. Hence, an appropriate dynamic window subgraph update threshold and subgraph size need to be tuned according to the actual effect.

After transforming the coordinates of the obstacles in the dynamic window subgraph into the global map coordinate frame, the locations corresponding to these coordinates are marked as obstacle areas in the global map, and the obstacle range inflation is performed. This ensures a suitable safety distance between the robot and the obstacles when the robot is moving; the robot can then be controlled to avoid the obstacles. When the robot has bypassed an obstacle, the obstacles fused into the global map from the dynamic window subgraph are removed. The above process keeps cycling until the robot moves to the global target point.
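As promised above, the DWA sampling-and-scoring loop can be sketched in a few dozen lines of C. This is a deliberately simplified illustration, not the base_local_planner implementation: the velocity and acceleration limits, the single circular obstacle, and the weights in the scoring function are all assumed example values.

/* Minimal dynamic-window sampling sketch (illustrative only). */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y, th; } Pose;

static double clampd(double v, double lo, double hi)
{ return v < lo ? lo : (v > hi ? hi : v); }

/* clearance from a single assumed circular obstacle at (2.0, 0.5), radius 0.3 */
static double obstacle_clearance(double x, double y)
{ return hypot(x - 2.0, y - 0.5) - 0.3; }

int main(void)
{
    Pose robot = {0.0, 0.0, 0.0};
    const double gx = 4.0, gy = 0.0;          /* local goal */
    const double v0 = 0.3, w0 = 0.0;          /* current velocities */
    const double acc_v = 0.5, acc_w = 2.0;    /* acceleration limits */
    const double dt = 0.1, horizon = 2.0;     /* control period and sim horizon */

    /* dynamic window: velocities reachable within one control period */
    double v_lo = clampd(v0 - acc_v * dt, 0.0, 0.5);
    double v_hi = clampd(v0 + acc_v * dt, 0.0, 0.5);
    double w_lo = clampd(w0 - acc_w * dt, -1.0, 1.0);
    double w_hi = clampd(w0 + acc_w * dt, -1.0, 1.0);

    double best_v = 0.0, best_w = 0.0, best_score = -1e9;
    for (int i = 0; i <= 10; ++i)
        for (int j = 0; j <= 10; ++j) {
            double v = v_lo + (v_hi - v_lo) * i / 10.0;
            double w = w_lo + (w_hi - w_lo) * j / 10.0;

            /* forward-simulate this (v, w) pair over the horizon */
            Pose p = robot;
            double min_clear = 1e9;
            for (double t = 0.0; t < horizon; t += dt) {
                p.x += v * cos(p.th) * dt;
                p.y += v * sin(p.th) * dt;
                p.th += w * dt;
                double c = obstacle_clearance(p.x, p.y);
                if (c < min_clear) min_clear = c;
            }
            if (min_clear <= 0.0) continue;   /* trajectory hits the obstacle */

            /* score: end close to the goal, roughly face it, keep clearance */
            double goal_dist = hypot(gx - p.x, gy - p.y);
            double heading = fabs(atan2(gy - p.y, gx - p.x) - p.th);
            double score = -goal_dist - 0.5 * heading
                           + 0.2 * min_clear + 0.1 * v;
            if (score > best_score) { best_score = score; best_v = v; best_w = w; }
        }

    printf("selected cmd_vel: v = %.2f m/s, w = %.2f rad/s\n", best_v, best_w);
    return 0;
}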

5.2.3.3 Timed Elastic Band Method

The robot model targeted by the DWA algorithm is a differential or omnidirectional robot, i.e., a robot that can rotate in place. However, the DWA algorithm cannot be used for the moving chassis of an Ackermann model. The timed elastic band (TEB) algorithm, on the other hand, is more suitable for the Ackermann model than the DWA.

The path connecting the starting point and the target point is deformed by internal and external forces. This deformable path is defined as an elastic band; i.e., the condition for deformation is the set of internal and external force constraints acting on the elastic band. Between the starting point and the target point, there are N control points that change the shape of the elastic band (the points where the robot changes its direction of motion when avoiding obstacles). The internal and external forces of the elastic band balance each other, shortening the path while maintaining a certain distance from the obstacles. As it takes time for the robot to move between these points, the method is called the timed elastic band algorithm. The internal and external force constraints of the TEB method include the following:

1. Global planning path and obstacle avoidance constraint: The constraint of following the global planning path pulls the elastic band toward the global path, while the constraint of obstacle avoidance keeps the elastic band away from the obstacles.


2. Velocity and acceleration constraint: This is a kinodynamic constraint, i.e., the maximum velocity and acceleration at which the mobile robot can move.
3. Non-holonomic kinematic constraint: A differential mobile robot has only two degrees of freedom in planar motion and can only rotate or move in a straight line in the direction it is facing. This kinematic constraint forces the differential mobile robot to move along a smooth trajectory consisting of several arcs.
4. Fastest path time constraint: This is the square of the time interval sequence; this constraint enables the robot to obtain the path with the fastest motion time, rather than the traditional shortest path in space.

It can be seen that the TEB algorithm finds a series of discrete poses with time information along a given path and, by means of graph optimization, forms these discrete poses into a trajectory that satisfies the objectives of shortest time, shortest distance, and clearance from obstacles, while satisfying the constraints of the robot motion dynamics. It is important to note that the optimized trajectories do not necessarily satisfy all constraints; that is, the given constraints are actually soft constraints.

With the above definition, it can be seen that the TEB algorithm describes the path planning problem as a multi-objective optimization problem, i.e., optimizing objectives such as minimizing the execution time of the trajectory, maintaining a certain distance from obstacles, and abiding by the constraints of kinematic dynamics. Because most of the objectives to be optimized are local and relate to only a few consecutive states of the robot, the optimization problem is the optimization of a sparse model. The sparse multi-objective optimization problem is solved to obtain the optimal motion trajectory of the robot. This problem can be solved by constructing a hyper-graph using the optimization algorithms for large-scale sparse matrices in the g2o framework (an open-source framework for graph optimization). The robot states and time intervals are used as nodes, the objective functions and constraint functions are used as edges, and the nodes are connected by edges to form the hyper-graph. In this hyper-graph, each constraint is an edge, and the number of nodes allowed to be connected per edge is not limited.

For a two-wheel differential chassis, using TEB to adjust the motion orientation can complicate the motion path with unnecessary backward movements at the starting and target points. This is not allowed in some application scenarios, as the backward movement may cause the robot to hit an obstacle. Therefore, TEB requires some optimization for different scenarios.

1. When the robot's orientation at the starting point is not toward the target point, TEB plans a route that involves backward motion and corrects the orientation. However, in some scenarios, such backward motion and corrected angles are not allowed. Hence, for a two-wheel differential motion model, the chassis can first be rotated in place so that it is oriented toward the target point; the optimization function is then executed to obtain the optimal trajectory.
2. The orientation of the robot is cached when it reaches the target point, and the actual orientation at this point is compared with the cached target orientation. If there is a difference in orientation, the robot rotates in place to the target orientation.


3. Based on the obstacle information, the shortest distance to the obstacle ahead is found, and the maximum linear velocity (max_vel_x) constraint is then adjusted according to the obstacle distance to improve obstacle avoidance.
4. The maximum linear velocity (max_vel_x) constraint is adjusted according to the distance of the robot's current position from the global target point. The maximum linear velocity can be set larger when the robot is far from the global target point and smaller when the robot is close to it, so as to prevent overshoot.

5.2.4 Navigation Package

The various interfaces of the move_base package are described below.

5.2.4.1 Move_base Package Interface

• Actions Subscribed to by the move_base Package
1. move_base/goal (move_base_msgs/MoveBaseActionGoal): It represents the movement planning goal of move_base.
2. move_base/cancel (actionlib_msgs/GoalID): It represents a request to cancel a specific goal.

• Actions Published by move_base
1. move_base/feedback (move_base_msgs/MoveBaseActionFeedback): It represents feedback that contains the coordinates of the robot base.
2. move_base/status (actionlib_msgs/GoalStatusArray): It represents status information on the goals sent to move_base.
3. move_base/result (move_base_msgs/MoveBaseActionResult): The result of the operation on move_base is empty.

• Subscribed Topics
1. move_base_simple/goal (geometry_msgs/PoseStamped): It provides a non-action interface to move_base for users who do not care about tracking the execution status of their goals.

• Published Topics
1. cmd_vel (geometry_msgs/Twist): The velocity commands sent to the robot chassis.

• Services
1. ~make_plan (nav_msgs/GetPlan): It allows the user to ask move_base for a plan to a given pose without causing move_base to execute that plan.


2. ~clear_unknown_space (std_srvs/Empty): Allows the user to clear the unknown space in the area around the robot. This is useful when the costmaps have been stopped for a long period of time and are then restarted in a new location in the environment.
3. ~clear_costmaps (std_srvs/Empty): Allows the user to tell the move_base node to clear the obstacles in the costmaps. This may cause the robot to hit things and should be used with caution.

• Parameters
1. ~base_global_planner (string, default: "navfn/NavfnROS"): Sets the name of the plugin for the global path planner used by move_base.
2. ~base_local_planner (string, default: "base_local_planner/TrajectoryPlannerROS"): Sets the name of the plugin for the local path planner used by move_base.
3. ~recovery_behaviors (list, default: [{name: conservative_reset, type: clear_costmap_recovery/ClearCostmapRecovery}, {name: rotate_recovery, type: rotate_recovery/RotateRecovery}, {name: aggressive_reset, type: clear_costmap_recovery/ClearCostmapRecovery}]): Sets the list of recovery behavior plugins to use with move_base. These behaviors are run when move_base fails to find a feasible plan.
4. ~controller_frequency (double, default: 20.0 (Hz)): The cycle rate at which control commands are published.
5. ~planner_patience (double, default: 5.0 (s)): How long the planner will wait for a valid plan before space-clearing operations are performed.
6. ~controller_patience (double, default: 15.0 (s)): How long the controller will wait without receiving a valid control before space-clearing operations are performed.
7. ~conservative_reset_dist (double, default: 3.0 (m)): When clearing space in the map, obstacles beyond this distance from the robot are cleared from the costmap.
8. ~recovery_behavior_enabled (bool, default: true): Whether or not to enable the recovery behaviors of move_base to clear out space.
9. ~clearing_rotation_allowed (bool, default: true): Whether or not the robot will attempt an in-place rotation when clearing out space.
10. ~shutdown_costmaps (bool, default: false): Whether or not to shut down the costmap nodes when move_base is in the inactive state.
11. ~oscillation_timeout (double, default: 0.0 (s)): The allowed oscillation time before recovery behaviors are executed.
12. ~oscillation_distance (double, default: 0.5 (m)): The distance the robot must move to be considered not to be oscillating.
13. ~planner_frequency (double, default: 0.0): The rate at which the global planning loop is run.


14. ~max_planning_retries (int, default: -1): How many planning retries are allowed before recovery behaviors are executed. A value of -1.0 corresponds to infinite retries.
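For reference, the action interface above is typically exercised from code with actionlib. The following minimal sketch sends a single goal and waits for the result; the goal pose is a placeholder that depends on the map in use.

#!/usr/bin/env python
# Minimal sketch: send a navigation goal through the move_base action interface.
# The goal pose below is a placeholder; frame and coordinates depend on your map.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

if __name__ == "__main__":
    rospy.init_node("send_goal_example")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0       # placeholder target
    goal.target_pose.pose.orientation.w = 1.0    # face along the map x-axis

    client.send_goal(goal)                       # published on move_base/goal
    client.wait_for_result()
    rospy.loginfo("Result state: %d", client.get_state())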

5.2.4.2 Costmap Configuration

The navigation stack requires two costmaps to store information about obstacles in the world. One costmap is used for global path planning over the entire environment, and the other is used for local planning and real-time obstacle avoidance. Some parameters are required for both maps, while some are different for each individual map. Therefore, there are three sections below: common configuration options, global configuration options, and local configuration options [6].

1. Common Configuration

The navigation stack uses the costmap to store information about obstacles. A common configuration file called costmap_common_params.yaml needs to be created, as shown below:

robot_radius: 0.2
map_type: costmap

static_layer:
  enabled: true
  unknown_cost_value: -1
  lethal_cost_threshold: 100

obstacle_layer:
  enabled: true
  max_obstacle_height: 2.0
  origin_z: 0.0
  z_resolution: 0.2
  z_voxels: 10
  unknown_threshold: 10
  mark_threshold: 0
  combination_method: 1
  track_unknown_space: false   # true needed for disabling global path planning through unknown space
  obstacle_range: 4.0          # for marking
  raytrace_range: 4.0          # for clearing
  publish_voxel_map: false
  observation_sources: scan
  scan:
    data_type: LaserScan
    topic: /scan
    marking: true
    clearing: true
    min_obstacle_height: -0.1
    max_obstacle_height: 1.5


    obstacle_range: 4.0
    raytrace_range: 4.0

inflation_layer:
  enabled: true
  cost_scaling_factor: 0    # exponential rate at which the obstacle cost drops off (default: 10)
  inflation_radius: 0.35    # max. distance from an obstacle at which costs are incurred for planning paths

The "obstacle_range" parameter determines the maximum sensor reading range that results in an obstacle being put into the costmap. Here, it is set to 4 m, which means that the robot will only update real-time information about obstacles within a radius of 4 m of its base in the costmap.

The "raytrace_range" parameter is used to clear real-time obstacles within this range (including obstacles that exist in the global costmap and real-time obstacles in the local costmap) during the robot's motion, based on the sensor information, so as to obtain free space for movement. Setting it to 4 m means that, given a sensor reading, the robot will attempt to clear out space up to 4 m in front of it to obtain free space. The logic of this action is that if an obstacle is not detected at the location where it should have been detected, the obstacle should be cleared from the map.

The "observation_sources" parameter defines the sensors that pass spatial information to the costmap. Each sensor is defined in the following lines; "min_obstacle_height" and "max_obstacle_height" describe the minimum and maximum heights of the obstacle, respectively.

The "inflation_radius" parameter gives the minimum distance the robot must maintain from an obstacle and inflates the obstacle accordingly. For example, setting the inflation radius to 0.35 m means that the robot will attempt to stay 0.35 m or more away from obstacles on all paths. It is important to note that this distance refers to the distance from the center of the robot to the obstacle; therefore, the value of the inflation radius should be greater than the radius of the robot. Inflation areas smaller than the robot's radius are collision zones, which are considered impassable. Inflation areas larger than the robot's radius are buffer zones, which are considered passable. However, the robot will still avoid passing through the buffer zone whenever possible, because the cost of the buffer zone is higher than that of a normal passable zone.

2. Global Configuration

The global configuration file named global_costmap_params.yaml is used to store configuration options specific to the global costmap. The file content is as follows:


global_costmap:
  global_frame: /map
  robot_base_frame: /base_footprint
  update_frequency: 2.0
  publish_frequency: 0.5
  static_map: true
  transform_tolerance: 0.5
  plugins:
    - {name: static_layer, type: "costmap_2d::StaticLayer"}
    - {name: obstacle_layer, type: "costmap_2d::VoxelLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}

The "global_frame" parameter specifies the coordinate frame in which the global costmap should run; the /map frame is usually selected. The "robot_base_frame" parameter specifies the coordinate frame that the costmap should reference for the base of the robot. Together, "global_frame" and "robot_base_frame" define the coordinate transform between the robot and the map that must be available to construct the global costmap. The "update_frequency" parameter determines the frequency at which the costmap is updated. The "publish_frequency" parameter determines the frequency at which the costmap publishes visualization information. The "static_map" parameter determines whether or not the costmap is initialized from the map provided by the map_server; it is set to false if no existing map is used.

3. Local Configuration

The local configuration file named local_costmap_params.yaml is used to store configuration options specific to the local costmap. The file content is as follows:

local_costmap:
  global_frame: /odom
  robot_base_frame: /base_footprint
  update_frequency: 5.0
  publish_frequency: 2.0
  static_map: false
  rolling_window: true
  width: 4.0
  height: 4.0
  resolution: 0.05
  transform_tolerance: 0.5
  plugins:
    - {name: obstacle_layer, type: "costmap_2d::VoxelLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}


The parameters "global_frame," "robot_base_frame," "update_frequency," "publish_frequency," and "static_map" have the same meanings as described in the global configuration. Setting the "rolling_window" parameter to true means that the costmap window rolls with the robot, remaining centered around it as it moves through the world. The "width," "height," and "resolution" parameters set the width (m), height (m), and resolution (m/cell) of the local costmap. The resolution here may be different from the resolution of the static map, but it is usually set to the same value.
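If the rolling local costmap needs to be inspected programmatically, it is published as a nav_msgs/OccupancyGrid; the short sketch below assumes the default move_base namespace for the topic name.

#!/usr/bin/env python
# Sketch: print the size and resolution of the rolling local costmap.
# The topic name assumes the default move_base namespaces.
import rospy
from nav_msgs.msg import OccupancyGrid

def cb(grid):
    info = grid.info
    rospy.loginfo("local costmap: %dx%d cells at %.2f m/cell, origin (%.2f, %.2f)",
                  info.width, info.height, info.resolution,
                  info.origin.position.x, info.origin.position.y)

if __name__ == "__main__":
    rospy.init_node("local_costmap_probe")
    rospy.Subscriber("/move_base/local_costmap/costmap", OccupancyGrid, cb)
    rospy.spin()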

5.2.4.3 Local Planner Configuration

The base_local_planner is responsible for computing the velocity commands to send to the mobile base of the robot, given the planned global path. Some configuration options need to be set based on the specs of the robot. If DWA is adopted, create a file named dwa_local_planner_params with the following content:

DWAPlannerROS:
  max_vel_x: 0.3            # Maximum linear velocity in x-direction, unit: m/s
  min_vel_x: 0.0            # Minimum linear velocity in x-direction, unit: m/s
  max_vel_y: 0.0            # Maximum linear velocity in y-direction, unit: m/s
  min_vel_y: 0.0            # Minimum linear velocity in y-direction, unit: m/s
  max_trans_vel: 0.3        # Absolute value of the robot's maximum translational velocity
  min_trans_vel: 0.05       # Absolute value of the robot's minimum translational velocity
  trans_stopped_vel: 0.05   # Translational velocity below which the robot is recognized as "stopped"
  max_rot_vel: 2.0          # Maximum rotational angular velocity of the robot
  min_rot_vel: 0.4          # Minimum rotational angular velocity of the robot
  rot_stopped_vel: 0.4      # Rotational velocity below which the robot is recognized as "stopped"
  acc_lim_x: 1.0            # Acceleration limit of the robot in the x-direction
  acc_lim_theta: 15.0       # Angular acceleration limit of the robot
  acc_lim_y: 0.0            # Acceleration limit of the robot in the y-direction

  # Goal Tolerance Parameters
  yaw_goal_tolerance: 0.3   # 0.05
  xy_goal_tolerance: 0.15   # 0.10
  # latch_xy_goal_tolerance: false


  # Forward Simulation Parameters
  sim_time: 1.5        # 1.7
  vx_samples: 6        # 3
  vy_samples: 1
  vtheta_samples: 20   # 20

  # Trajectory Scoring Parameters
  path_distance_bias: 64.0       # 32.0
  goal_distance_bias: 24.0       # 24.0
  occdist_scale: 0.5             # 0.01
  forward_point_distance: 0.325  # 0.325
  stop_time_buffer: 0.2          # 0.2
  scaling_speed: 0.25            # 0.25
  max_scaling_factor: 0.2        # 0.2

  # Oscillation Prevention Parameters
  oscillation_reset_dist: 0.05   # 0.05

  # Debugging
  publish_traj_pc: true
  publish_cost_grid_pc: true
  global_frame_id: odom

If the TEB local path planning method is used, it is implemented by the teb_local_planner local path planner, and the main parameters are divided into the following categories.

1. Key Parameters Related to Trajectories

# Trajectory
teb_autosize: True
dt_ref: 0.3                  # Desired trajectory temporal resolution in seconds
dt_hysteresis: 0.03          # Hysteresis window for automatic resizing based on the current temporal resolution; 10% of dt_ref is recommended
max_samples: 500             # Maximum number of samples
min_samples: 3               # Minimum number of samples (always greater than 2), default: 3
global_plan_overwrite_orientation: True   # Overwrite the orientation of the local subgoal provided by the global planner
global_plan_viapoint_sep: -0.1            # If positive, via-points are obtained from the global plan; this value determines the resolution of the reference path (the minimum separation between every two consecutive via-points along the global plan, which can be weighted via weight_viapoint), default: -0.1
allow_init_with_backwards_motion: false
max_global_plan_lookahead_dist: 3.0       # Maximum length of the subset of the global plan taken into account
feasibility_check_no_poses: 5             # Number of pose feasibility analyses per sampling interval, default: 4


publish_feedback: false # Whether or not to publish planner feedback containing full trajectories and dynamic obstacle lists, default: false

2. Key Parameters Related to the Robot

# Robot
max_vel_x: 0.4               # Maximum linear velocity, default: 0.4
max_vel_x_backwards: 0.2     # Maximum backward linear velocity, default: 0.2. This parameter cannot be set to 0 or a negative number; otherwise, it will cause an error.
max_vel_theta: 1.0           # Maximum angular velocity, default: 0.3
acc_lim_x: 2.0               # Maximum linear acceleration, default: 0.5. Setting it to 0 means no constraint.
acc_lim_theta: 1.0           # Maximum angular acceleration, default: 0.5. For a mobile chassis in the Ackermann model, the angular velocity and angular acceleration constraints restrict the steering velocity and acceleration of the steering gear; left and right rotation is also limited by this. If the steering velocity is set high, the steering may oscillate.
min_turning_radius: 0.0      # Minimum turning radius, default: 0.0. If the turning radius is set large, the robot enters a U-turn along the outer line and passes the bend relatively easily, but it also follows the outer line through small bends, which wastes time. If the turning radius is set small, the robot follows the inner line through small bends, which saves time, but it also enters a U-turn along the inner line and may fail to pass the bend.
wheelbase: 0.0               # Distance between the drive and steering shafts, default: 1.0. For a differential mobile robot, this value is set to 0.0.
cmd_angle_instead_rotvel: True   # Whether to convert the received angular velocity message into an angle change. When set to True, the data in cmd_vel.angular.z is the steering gear angle. Default: false.

3. Key Parameters Related to Obstacle Avoidance

# Obstacles
min_obstacle_dist: 0.2       # Minimum desired separation from obstacles, in meters
inflation_dist: 0.6          # Buffer zone around obstacles (should be larger than min_obstacle_dist to take effect); the robot slows down when it enters the buffer zone, default: 0.6. The distance between the trajectory and obstacles can be adjusted via min_obstacle_dist and inflation_dist to meet the needs of path planning in different environments.


include_costmap_obstacles: True           # Whether obstacles in the local costmap should be taken into account; must be set to true to avoid obstacles detected in real time
costmap_obstacles_behind_robot_dist: 1.0  # Consider obstacles within n meters behind the robot. The larger the value, the wider the scope considered, but the greater the amount of computation; it cannot exceed the locally planned area
obstacle_poses_affected: 30               # Each obstacle position is attached to the closest pose on the trajectory to keep a distance
obstacle_association_force_inclusion_factor: 1.5   # All obstacles within a radius of n * min_obstacle_dist are forced to be considered, default: 1.5
costmap_converter_plugin: ""              # Plugin name for converting costmap cells to points/lines/polygons; an empty string disables the conversion so that all cells are treated as point obstacles
costmap_converter_spin_thread: True       # If set to true, the costmap converter invokes its callback queue in a different thread, default: true
costmap_converter_rate: 5                 # How often the costmap_converter plugin processes the current costmap (this value should be no higher than the costmap update rate)

4. Key Parameters Related to Optimization (these parameters are important and set the weights for the optimization method)

# Optimization
no_inner_iterations: 5       # Number of actual solver iterations called in each inner loop iteration
no_outer_iterations: 4       # Number of actual solver iterations called in each outer loop iteration
optimization_activate: True
optimization_verbose: False
penalty_epsilon: 0.1         # Provides a buffer for the velocity constraint, penalizing the robot early so that it slows down before reaching the velocity limit
weight_max_vel_x: 2          # Optimization weight for satisfying the maximum allowed translational velocity. Whether the robot runs mainly at high or low speed during its motion depends on the assignment of these weights.
weight_max_vel_theta: 0      # Optimization weight for satisfying the maximum allowed angular velocity
weight_acc_lim_x: 1          # Optimization weight for satisfying the maximum allowed translational acceleration


weight_acc_lim_theta: 0.01   # Optimization weight for satisfying the maximum allowed angular acceleration
weight_kinematics_nh: 1000   # Optimization weight for satisfying the non-holonomic kinematics
weight_kinematics_forward_drive: 2   # Optimization weight for forcing the robot to choose only forward directions (forward translational velocities). The higher the weight, the greater the backward penalty. The range is 0 to 1000, and it is set as desired.
weight_kinematics_turning_radius: 1  # Optimization weight for enforcing a minimum turning radius. The larger it is, the easier it is to reach the minimum turning radius. The range is 0 to 1000, and it is set as desired.
weight_optimaltime: 1        # Optimization weight for shrinking the trajectory based on the transition/execution time, i.e., the optimal-time weight. If it is large, the robot accelerates quickly on straight segments and cuts the inner path when turning; the smaller this parameter is, the more stable the robot is throughout its motion. The range is 0 to 1000, and it is set as desired.
weight_obstacle: 50          # Optimization weight for keeping the minimum distance from obstacles, default: 50.0
weight_viapoint: 1           # Weight for tracking the global path
weight_inflation: 0.1        # Weight for the inflation radius (double, default: 0.1)
weight_dynamic_obstacle: 10  # Not in use yet
weight_adapt_factor: 2       # Factor by which some weights are increased during the iterations

TEB also provides recovery measures to restore robots that are stuck in clutter or have bad path planning to a normal state. In addition, the parameters of the other yaml files in the move_base package, as well as the parameters in the AMCL launch file, also deserve attention. Run the following command to configure the parameters at runtime, as shown in Fig. 5.15:

rosrun rqt_reconfigure rqt_reconfigure

Task 2 Mobile Robot Navigation

The autonomous navigation of the mobile robot can be performed after creating the appropriate parameter and launch files for the real scenario of the robot. First, launch the navigation-related nodes by entering the following commands in the terminal:

$ cd spark
$ source devel/setup.bash
$ roslaunch spark_navigation amcl_demo_lidar_rviz.launch lidar_type_tel=3iroboticslidar2


Fig. 5.15. Starting position of the robot after particle convergence

Please note that if Spark is configured with an EAI LiDAR (g2 or g6), lidar_type_tel=3iroboticslidar2 needs to be changed to lidar_type_tel=ydlidar_g2 or ydlidar_g6. It can also be configured according to the actual model. The nodes brought up by the amcl_demo_lidar_rviz.launch file are described below.
The nodes started by this command include the Spark motion control model, the Spark motion control node, the LiDAR node, the map server node, AMCL localization, the move_base navigation node, and the rviz display. Compared with amcl_rviz.launch in the previous task, the launch file amcl_demo_lidar_rviz.launch in this task starts an additional move_base node that enables optimal path planning for the mobile robot.

After launching the amcl_demo_lidar_rviz.launch file, the initial pose of the mobile robot needs to be estimated first. Click on the 2D Pose Estimate tab in rviz to provide an initial estimate, and move the robot so that the particles converge. The robot can also be controlled remotely via the keyboard, as in the previous task, to make the particles converge. The starting position of the robot after particle convergence is shown in Fig. 5.16.

Once the initial position of the robot is determined, the robot can be navigated through the map by clicking on the 2D Nav Goal tab in rviz to select a new target point. Click the Add button in the bottom left corner to add the Path plugin to view the path information for the move_base motion planning. After setting the navigation target point, a global path for navigation is generated in the interface, as demonstrated in Fig. 5.17a. The path contains global and local paths, and its properties can be set in the plugin. During the motion of the robot, due to the influence of odometry deviation and environmental obstacle avoidance, the robot

actually moves according to the locally planned path and finally reaches the set target point. The navigation result is shown in Fig. 5.17b.

Fig. 5.16 Starting position of the robot after particle convergence

Fig. 5.17 Schematic of the robot navigation process. (a) Robot starting position; (b) robot navigation result
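Instead of clicking in rviz, the initial pose estimate can also be published from code to the /initialpose topic consumed by AMCL. The following is a minimal sketch; the pose and covariance values are placeholders.

#!/usr/bin/env python
# Sketch: publish an initial pose estimate for AMCL, equivalent to the rviz
# "2D Pose Estimate" tool. Pose and covariance values are placeholders.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

if __name__ == "__main__":
    rospy.init_node("set_initial_pose")
    pub = rospy.Publisher("/initialpose", PoseWithCovarianceStamped, queue_size=1, latch=True)
    rospy.sleep(1.0)                      # give the connection time to establish

    msg = PoseWithCovarianceStamped()
    msg.header.frame_id = "map"
    msg.header.stamp = rospy.Time.now()
    msg.pose.pose.position.x = 0.5        # placeholder starting position
    msg.pose.pose.position.y = 0.3
    msg.pose.pose.orientation.w = 1.0
    msg.pose.covariance[0] = 0.25         # x variance
    msg.pose.covariance[7] = 0.25         # y variance
    msg.pose.covariance[35] = 0.07        # yaw variance
    pub.publish(msg)
    rospy.loginfo("initial pose published")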

5.3 Summary

In this chapter, the use of sensor information to enable a mobile robot to acquire localization in a known map and perform autonomous navigation is described. The accurate pose of the robot in the map is obtained through the adaptive Monte Carlo localization (AMCL) package. Thereafter, the move_base function package is used to complete global path planning and local path planning, enabling the robot to avoid obstacles and move to the target point.

Further Reading

More global path planning methods are described below.

1. ARA* and AD*
The A* algorithm is a powerful tool; however, its computational cost may be unaffordable for some applications. To address this problem and to adapt to different application contexts, various extensions of A* have been proposed. The main extension of A* is the ARA* (Anytime Re-planning A*) algorithm. This approach finds an initial solution quickly and then improves both the solution and the bound on its suboptimality. ARA* is a graph-based planning and re-planning algorithm that can generate bounded suboptimal solutions. It has a flexible time cost and can return a valid solution to a pathfinding or graph traversal problem even if it is interrupted before it finishes, by generating a fast, non-optimal solution first and then progressively optimizing it. ARA* is a heuristic incremental search algorithm that repairs its previous solutions incrementally. The heuristic search method uses heuristic functions to control the expansion scope of the search in order to find the best path. Because heuristic search can restrict the search space to a relatively small range, i.e., the search area is smaller, it is faster. Incremental search can obtain the optimal path faster through reuse: each incremental search determines whether the node information has changed and only updates the changed nodes, so it is faster than searching from scratch every time. The ARA* algorithm can effectively solve complex dynamic search problems.
As a reverse search algorithm, AD* (Anytime Dynamic A*) performs well in dynamic environments and carries out the search from the target point. AD* is an improvement built on the framework of A* and is well suited to engineering practice. During the initial traversal, it is consistent with the Dijkstra algorithm and saves the information of each node. The core of the algorithm is to assume that the unknown area is free space; on this basis, it incrementally performs path planning and finds the shortest distance from the target point to each node by minimizing the estimated value of the objective function. Because the shortest path information from each point to the destination is stored, the efficiency is greatly improved during re-planning. ARA*


solves the problem of real-time re-planning while the robot is on its way, so that the robot can quickly obtain a feasible (though not necessarily optimal) path. If there is time after obtaining the suboptimal path, the robot continues to optimize it so that it gradually approaches the optimal path; if there is no time, the robot continues to follow the suboptimal path. AD* theoretically combines the advantages of ARA* and LPA*. LPA* (Lifelong Planning A*) addresses the problem of dynamic obstacles during the path-following phase. The dynamic obstacles may be random pedestrians, or map deviations caused by an unreliable SLAM algorithm, which make obstacles that existed when the map was built disappear afterward. Therefore, AD* can solve the problem of real-time re-planning in a dynamic environment, meet the needs of engineering practice, and can be used to explore unknown environments.

2. D* and D* Lite
The D* algorithm is based on the A* algorithm. In 1994, Anthony Stentz proposed the Dynamic A* (D*) algorithm, which is a reverse incremental search algorithm. The incremental search algorithm calculates the distance measurement information H(x) of each node during the search process. If an obstacle appears in the dynamic environment and the search cannot continue along the pre-planned path, the algorithm re-plans the path from the current state point according to the previously obtained distance information of each point, without re-planning from the target point.
D* Lite is a sophisticated pathfinding algorithm with practical applications in many fields such as robotics and game design. D* Lite uses the reverse search method of D* and LPA*, which makes it well suited for handling dynamic obstacles. A new obstacle does not affect the path between the obstacle node and the target point, so the incremental property of LPA* can be exploited. The algorithm solves the shortest path problem and also incorporates additional functionality such as path re-planning and path correction. D* Lite works more efficiently than similar algorithms, as it provides a fast and reliable path from the start to the goal.

3. RRT and RRT*
A rapidly exploring random tree (RRT) is an algorithm designed to efficiently search nonconvex, high-dimensional spaces by randomly building a space-filling tree. The tree is constructed incrementally from samples drawn randomly from the search space and is inherently biased to grow toward large unsearched areas of the problem. RRTs were developed by Steven M. LaValle and James J. Kuffner Jr. They easily handle problems with obstacles and differential constraints (nonholonomic and kinodynamic) and have been widely used in autonomous robotic motion planning. RRTs can be viewed as a technique to generate open-loop trajectories for nonlinear systems with state constraints. An RRT can also be considered a Monte Carlo method that biases the search into the largest Voronoi regions of a graph in a configuration space. Some variations can even be considered stochastic fractals. RRTs can be used to compute approximate control policies to control high-dimensional nonlinear systems with state and action constraints.


An RRT grows a tree rooted at the starting configuration by using random samples from the search space. As each sample is drawn, a connection is attempted between it and the nearest state in the tree. If the connection is feasible (passes entirely through free space and obeys any constraints), the new state is added to the tree. With uniform sampling of the search space, the probability of expanding an existing state is proportional to the size of its Voronoi region. As the largest Voronoi regions belong to the states on the frontier of the search, the tree preferentially expands toward large unsearched areas.

The length of the connection between the tree and a new state is frequently limited by a growth factor. If the random sample is farther from its nearest state in the tree than this limit allows, a new state at the maximum distance from the tree along the line to the random sample is used instead of the random sample itself. The random samples can then be viewed as controlling the direction of the tree growth, while the growth factor determines its rate. This maintains the space-filling bias of the RRT while limiting the size of the incremental growth.

RRT growth can be biased by increasing the probability of sampling states from a specific area. Most practical implementations of RRTs make use of this to guide the search toward the planning problem goals. This is accomplished by introducing a small probability of sampling the goal into the state sampling procedure; the higher this probability, the more greedily the tree grows toward the goal.

RRT* inherits all the properties of RRT and works similarly to RRT. However, it introduces two additional operations: near-neighbor search and tree rewiring. The near-neighbor search finds the best parent node for the new node before it is inserted into the tree. The rewiring operation rebuilds the tree within a radius around the new node to maintain minimal-cost connections. These operations improve the efficiency of space exploration and the quality of the resulting paths. As the number of iterations increases, RRT* gradually improves its path cost due to its asymptotic optimality, whereas RRT does not improve its jagged and suboptimal path.
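To make the growth procedure concrete, a minimal 2D RRT sketch in an obstacle-free unit square is given below; the step size (growth factor), goal bias, and tolerance are illustrative choices, not values from the text.

import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Minimal 2D RRT: grow a tree from start toward goal in an obstacle-free unit square.
def rrt(start, goal, step=0.05, goal_bias=0.05, iters=2000, tol=0.05):
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Goal biasing: occasionally sample the goal itself to pull the tree toward it.
        sample = goal if random.random() < goal_bias else (random.random(), random.random())
        i = min(range(len(nodes)), key=lambda k: dist(nodes[k], sample))   # nearest tree node
        d = dist(nodes[i], sample)
        if d < 1e-9:
            continue
        t = min(step / d, 1.0)                       # growth factor: limit the step length
        nx, ny = nodes[i]
        new = (nx + t * (sample[0] - nx), ny + t * (sample[1] - ny))
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if dist(new, goal) < tol:                    # close enough: walk back to the root
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

print(rrt((0.1, 0.1), (0.9, 0.9)))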

Exercises

1. What is the principle behind the implementation of the adaptive Monte Carlo algorithm? What are its advantages compared with the Monte Carlo algorithm?
2. Consider that multiple locations exist in a map where the surroundings are identical, and the mobile robot remains stationary at one of these locations without knowing the initial location. Can the correct pose of the robot be obtained using the adaptive Monte Carlo algorithm? Why?
3. What is the "kidnapped" robot problem in SLAM algorithms? What methods can be used to solve it?


4. What is the navigation framework of the move_base package? What is its functionality in the robot navigation process, and what topics and services are included?
5. What is the difference between global path planning and local path planning?
6. Write a ROS node that enables the robot to start its motion from the starting point and navigate to the target point.
7. What are the definitions and roles of the global costmap and local costmap?
8. The inflation layer of the costmap has two important parameters, i.e., cost_scaling_factor and inflation_radius. The cost scaling factor, represented by fc, affects how quickly the cost values decay outward in the buffer zone of the inflation layer, as shown in the following equation:

$$\mathrm{cost} = 253\, e^{-f_c (d - r)}$$

where fc is the cost scaling factor, which is an adjustable parameter, d denotes the distance from the obstacle, and r is the radius of the robot. What will happen to the robot's motion near an obstacle when fc increases? What are the disadvantages when fc is set too large or too small?

References
1. Dellaert F, Fox D, Burgard W, Thrun S (1999) Monte Carlo localization for mobile robots. Proceedings of the 1999 IEEE international conference on robotics and automation (ICRA 1999), 10–15 May, Detroit, Michigan, USA, vol 2, pp 1322–1328
2. Thrun S, Burgard W, Fox D (2019) Probabilistic robotics (trans: Cao H, Tan Z, Shi C). Machinery Industry Press, Beijing
3. Zheng W (2017) Research on key technologies of mapping and localization system for mobile robots. Huazhong University of Science and Technology, Wuhan
4. Shen Y (2017) Research on path planning technology of substation inspection robot. Huazhong University of Science and Technology, Wuhan
5. Rösmann C, Feiten W, Wösch T, Hoffmann F, Bertram T (2012) Trajectory modification considering dynamic constraints of autonomous robots. Proceedings of the 7th German conference on robotics, 21–22 May, Munich, Germany, pp 74–79
6. Hu C (2018) Robot development practice in ROS. Machinery Industry Press

Chapter 6

SLAM Based on Multi-Sensor

Abstract Simultaneous localization and mapping (SLAM) is the key technology for mobile robots to achieve complete autonomy and multi-functionality. Mobile robots mainly carry sensors such as cameras and LiDAR to obtain measurements of the environment. However, due to the lack of accuracy and robustness as well as the complexity of the scene, SLAM using a single sensor suffers from degradation issues and fails to meet the requirements. Multi-sensor fusion can make up for the limitations and shortcomings of a single sensor and adapt to complex environmental changes. For example, the high-frequency output of the IMU can handle rapid motion, the feature tracking of the camera can overcome the drift of the IMU, and the LiDAR point cloud can provide high-precision, long-distance depth information. In order to establish an information-rich environment map and achieve accurate positioning, the complementarity of multi-sensor data can be exploited to realize multi-sensor fusion SLAM, so as to improve the robustness and adaptability of the robot in unknown, dynamic, and complex environments.

Keywords Multi-sensor fusion · SLAM · Loop closure detection · Extrinsic parameter calibration · Graph optimization

A mobile robot acquires its environmental information through sensors. A SLAM algorithm estimates the pose of the robot, environmental structures and features, and other states through environmental sensing to calculate an accurate pose in real time [1]. Many SLAM algorithms use loop closure detection on the constructed environmental maps to correct the cumulative deviation of the robot's relative poses in the environment. This avoids the impact on the environmental mapping, thus improving the accuracy of the constructed map. To obtain rich environmental information, SLAM algorithms usually use LiDAR or a camera as the external sensor. LiDAR is capable of acquiring highly accurate point cloud data. Compared with LiDAR, the camera obtains environmental images with more information at a lower cost. In addition, it is easier for cameras to obtain semantic information in the environment from the image data, and they are more adaptable to dynamic and complex unstructured environments. However, for cameras, the validity and accuracy of the image data are reduced when the mobile robot is moving rapidly, when there are


dynamic objects in the field of view, or when there are transparent objects such as glass. Furthermore, cameras are also vulnerable to interference from factors such as ambient light, low texture, or fewer features in the scene. Hence, it is difficult to meet the robustness requirements using only cameras as sensors. Due to the widespread use of low-cost microelectromechanical system (MEMS) sensors, mobile robots are also equipped with an inertial measurement unit (IMU) that can acquire acceleration and angular velocity measurements. Additionally, most ground mobile robots are driven by wheels or tracks and are often equipped with wheel speed sensors to obtain carrier motion speed. However, if only the IMU and odometer are utilized, the positioning is not accurate, and obstacle avoidance is not possible. Due to the complementarity of each sensor, ground mobile robots can combine LiDAR or vision measurements with easily available IMU measurements and wheel speed measurements for SLAM, which can improve the robustness and accuracy of the system. Therefore, it is important to utilize the different characteristics of different sensors. For example, gyroscopes have good dynamic performance, but they also have a drift problem, while accelerometers have good static performance but poor dynamic characteristics. Therefore, accelerometers can be used to compensate for gyroscopes and complement the data of multiple sensors, so as to draw on others’ strong points to offset one’s own weaknesses. Commonly used multi-sensor information fusion techniques include the weighted average, Bayesian estimation, multi-Bayesian, Kalman filter, particle filter, and artificial neural network methods. In complex environments, humans can accurately judge their situation and surroundings with robust position and environmental sensing. Humans use motor feedback from the eye, cochlea, and lower limbs to determine their position in the environment. The eye is responsible for visual measurements, the cochlea is able to obtain inertial measurements, and the lower limbs, as proprioceptors, can estimate their relative velocity in the environment through contact with the environment. As shown in Fig. 6.1, for a mobile robot, the camera plays the role of the eye, the IMU plays the role of the cochlea, and the wheel speed sensor senses its own “pace” similar to a human lower limb to obtain its relative speed in the environment. In SLAM algorithms, the use of these multiple complementary sensor data can improve robustness and enhance the adaptability of the robot in unstructured environments with unknown dynamics and complexity. This chapter presents the multi-sensor fusion SLAM algorithm based on LiDAR, IMU, and odometer. The following chapters will introduce the vision-based multi-sensor fusion SLAM algorithm.

6.1 Inertial Measurement Unit and Calibration

The six-axis IMU contains an accelerometer and gyroscope, with the three axes of the accelerometer and the three axes of the gyroscope corresponding to the x-axis, y-axis, and z-axis of the 3D Cartesian coordinate system. A common accelerometer

comprises a mass block, a damper, an elastic element, a sensing element, and processing circuitry; it is used to measure the acceleration in the axial direction. The basic principle of a gyroscope is that when an object rotates, the direction of its rotation axis remains constant in the absence of external disturbances, and the angular velocity of rotation around the axis is measured on this basis.

Fig. 6.1 Human self-perception and robot multi-sensor fusion sensing

6.1.1 IMU Measurement Model

6.1.1.1 Accelerometer Measurement Model

$$\tilde{a}_k = a_k + b_{a_k} + R_{B_k W}\, g_W + \eta_a \tag{6.1}$$

where $\tilde{a}_k$ is the accelerometer measurement at the $k$-th frame expressed in the IMU body frame, $a_k$ is the true acceleration at the $k$-th frame, $b_{a_k}$ is the accelerometer zero-point bias at the $k$-th frame that needs to be estimated online, $R_{B_k W}$ is the rotation between the world frame and the IMU body frame (used to express the gravity vector in the body frame), $g_W = [\,0 \; 0 \; g\,]^T$ is the gravitational acceleration in the world frame, $g$ is the local gravitational acceleration, and $\eta_a$ is the acceleration white noise. Typically, the acceleration white noise $\eta_a$ conforms to a Gaussian distribution with a mean of zero:

$$\eta_a \sim \mathcal{N}(0, \sigma_a^2) \tag{6.2}$$

Assume that the zero-point bias of acceleration $b_a$ wanders randomly with time and that its derivative with respect to time $\dot{b}_a$ conforms to a Gaussian distribution with a mean of zero:

$$\dot{b}_a = \eta_{b_a}, \qquad \eta_{b_a} \sim \mathcal{N}(0, \sigma_{b_a}^2) \tag{6.3}$$

6.1.1.2 Gyroscope Measurement Model

$$\tilde{\omega}_k = \omega_k + b_{g_k} + \eta_g \tag{6.4}$$

where $\tilde{\omega}_k$ is the gyroscope measurement at the $k$-th frame, $\omega_k$ is the true angular velocity at the $k$-th frame, $b_{g_k}$ is the gyroscope zero-point bias at the $k$-th frame that needs to be estimated online, and $\eta_g$ is the angular velocity white noise. Typically, the angular velocity white noise conforms to a Gaussian distribution with a mean of zero:

$$\eta_g \sim \mathcal{N}(0, \sigma_g^2) \tag{6.5}$$

Assume that the zero-point bias of angular velocity $b_g$ wanders randomly with time and that its derivative with respect to time $\dot{b}_g$ conforms to a Gaussian distribution with a mean of zero:

$$\dot{b}_g = \eta_{b_g}, \qquad \eta_{b_g} \sim \mathcal{N}(0, \sigma_{b_g}^2) \tag{6.6}$$
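To make the roles of the white noise and the bias random walk concrete, the short numpy sketch below simulates gyroscope measurements according to Eqs. (6.4)–(6.6) and integrates them; all numerical values are illustrative assumptions, not BMI055 specifications.

import numpy as np

# Illustrative simulation of Eqs. (6.4)-(6.6): measurement = truth + bias + white noise,
# where the bias performs a random walk. All numbers are arbitrary examples.
dt = 0.005                      # 200 Hz sampling (assumed)
n = 2000
sigma_g = 0.005                 # white-noise std (rad/s), assumed value
sigma_bg = 0.0001               # bias random-walk std per step (rad/s), assumed value

omega_true = 0.1 * np.ones(n)   # constant true angular velocity (rad/s)
b_g = np.zeros(n)
for k in range(1, n):
    b_g[k] = b_g[k - 1] + np.random.normal(0.0, sigma_bg)   # Eq. (6.6): bias random walk

eta_g = np.random.normal(0.0, sigma_g, n)                    # Eq. (6.5): white noise
omega_meas = omega_true + b_g + eta_g                        # Eq. (6.4)

# Integrating the raw measurement shows how bias and noise make the estimated angle drift.
angle_est = np.cumsum(omega_meas) * dt
angle_true = np.cumsum(omega_true) * dt
print("angle drift after %.1f s: %.4f rad" % (n * dt, angle_est[-1] - angle_true[-1]))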

6.1.2 Pre-Calibration of the System Error

The measurement models of the accelerometer and gyroscope in the previous section only consider the zero-point bias and white noise of the sensor. However, this is not enough to build a complete measurement model for an actual IMU. Before using the IMU sensor, it is necessary to clarify the systematic and random errors of the IMU and pre-calibrate the IMU to further compensate for the measurement errors. The systematic error, also called the deterministic error, includes the axial error and the scale factor. The axial error is used to correct the non-absolute orthogonality of the x-, y-, and z-axes caused by the manufacturing process accuracy, and the scale factor is used to correct the ratio between the actual value and the sensor output value, that is, the error induced in the conversion of the digital signal to the physical quantity of the sensor. Typically, deterministic errors are calibrated when IMUs leave the factory; thus, the axial error and scale factor are treated as deterministic values.

As demonstrated in Fig. 6.2, for an actual IMU, the three axes of the accelerometer are not perfectly orthogonal due to manufacturing accuracy, and the three axes of the gyroscope are not perfectly aligned with the three axes of the accelerometer, which is called "axis misalignment." The ratio between the acceleration and angular velocity measured by the sensor and the actual physical quantity (true value) is inconsistent with the nominal one, which is called the "scale factor." The six-axis IMU sensor BMI055 produced by the Bosch Company is used as an example here. According to the datasheet, the typical value of the cross-axis sensitivity of the accelerometer and gyroscope is ±1%, which means that the measured value in one axis will be applied to the other (orthogonal) axis by ±1%. This error can be compensated for by calibrating the "axis misalignment." The sensitivity tolerance of a gyroscope is typically ±1%, which means that the scale factor from the measured angular velocity to the actual physical quantity has an error of approximately ±1% from the nominal value. This error can be compensated for by calibrating the "scale factor."

Fig. 6.2 Schematic diagram of axial bias (left) and scale factor (right)

Reference [2] describes an IMU calibration method performed without external equipment that provides calibration results for "axis misalignment," "scale factor," and "zero bias." Let the accelerometer measurement model be:

$$a_O = T_a K_a a_S + b_a + \eta_a \tag{6.7}$$

where $a_O$ is the accelerometer measurement in the orthogonal frame, $a_S$ is the original measurement in the actual accelerometer frame, the accelerometer axis misalignment is

$$T_a = \begin{bmatrix} 1 & -\alpha_{yz} & \alpha_{zy} \\ 0 & 1 & -\alpha_{zx} \\ 0 & 0 & 1 \end{bmatrix},$$

the scale factor is

$$K_a = \begin{bmatrix} s_{ax} & 0 & 0 \\ 0 & s_{ay} & 0 \\ 0 & 0 & s_{az} \end{bmatrix},$$

the zero-point bias is $b_a = [\, b_{ax} \; b_{ay} \; b_{az} \,]^T$, and $\eta_a$ is white noise. The gyroscope measurement model can be written as follows:

$$\omega_O = T_g K_g \omega_S + b_g + \eta_g \tag{6.8}$$

where $\omega_O$ is the gyroscope measurement in the orthogonal frame, $\omega_S$ is the original measurement in the actual gyroscope frame, the gyroscope axis misalignment is

$$T_g = \begin{bmatrix} 1 & -\gamma_{yz} & \gamma_{zy} \\ \gamma_{xz} & 1 & -\gamma_{zx} \\ -\gamma_{xy} & \gamma_{yx} & 1 \end{bmatrix},$$

the scale factor is

$$K_g = \begin{bmatrix} s_{gx} & 0 & 0 \\ 0 & s_{gy} & 0 \\ 0 & 0 & s_{gz} \end{bmatrix},$$

the zero-point bias is

$b_g = [\, b_{gx} \; b_{gy} \; b_{gz} \,]^T$, and $\eta_g$ is white noise.

The system error pre-calibration process for the IMU is divided into the following steps.

① Plug the IMU into the acquisition equipment and leave it standing for 50 s to estimate the gyroscope zero-point bias.
② Pick up the IMU and rotate it in the positive and negative directions along each of its axes (leave it to stand for 5 s after each rotation) until the three axes of the gyroscope (both positive and negative directions) are sufficiently rotated and excited in a total of six directions. In this way, the "axis misalignment" and "scale factor" of the gyroscope can be calibrated.
③ Hold the IMU in the hand and move it in a rapid linear motion (leave it to stand for 5 s after each motion) along each axis of the IMU in both positive and negative directions until the accelerometer is effectively excited in a total of six directions for all three axes. In this way, the "axis misalignment" and "scale factor" of the accelerometer can be calibrated.
④ After converting the recorded sensor data to a suitable format, calibrate the above-mentioned Ta, Ka, ba, Tg, Kg, and bg using the open-source tool imu_tk provided in reference [2].
⑤ Write a ROS node to compensate for the original sensor measurements aS and ωS using the parameters obtained from the pre-calibration (a sketch of such a node is given after Table 6.1). The resulting sensor measurements aO and ωO are treated as $\tilde{a}_k$ and $\tilde{\omega}_k$ in Eqs. (6.1) and (6.4), respectively, and are used for the pose estimation of the robot.

The calibrated zero-point biases ba and bg are obtained by an offline method and are different from bak and bgk in the measurement model. The actual zero-point biases of the sensors, bak and bgk, change continuously with external factors such as sensor temperature and voltage. Therefore, during the pose estimation of the mobile robot, the actual zero-point biases bak and bgk are estimated in real time based on ba and bg. The system error calibration results for the BMI055 sensor are shown in Table 6.1 [3].

Table 6.1 System error calibration results for the IMU

                          Accelerometer         Gyroscope
Zero bias      x-axis     0.080551 m/s²         -0.0032665 rad/s
               y-axis     0.119632 m/s²         -0.0044932 rad/s
               z-axis     -0.340042 m/s²        0.0010749 rad/s
Scale factor   x-axis     1.01807               0.99514
               y-axis     1.01469               1.00125
               z-axis     1.00625               0.99586

Axis misalignment (accelerometer):
  [ 1.0   -0.038892   -0.002532 ]
  [ 0.0    1.0         0.022327 ]
  [ 0.0    0.0         1.0      ]

Axis misalignment (gyroscope):
  [ 1.0        -0.057251    0.0010978 ]
  [ 0.0646848   1.0         0.0165902 ]
  [ 0.0037830  -0.014965    1.0       ]
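A compensation node, as described in step ⑤, might look like the following minimal sketch. It applies Eqs. (6.7) and (6.8) as printed (noise terms omitted); the topic names are assumptions, and the calibration values are placeholders to be replaced with the actual imu_tk results (cf. Table 6.1).

#!/usr/bin/env python
# Minimal sketch of the compensation node in step 5: apply Eqs. (6.7)/(6.8)
# to raw IMU messages and republish them. Topics and values are placeholders.
import numpy as np
import rospy
from sensor_msgs.msg import Imu

Ta, Ka, ba = np.eye(3), np.eye(3), np.zeros(3)   # accelerometer T, K, b (placeholders)
Tg, Kg, bg = np.eye(3), np.eye(3), np.zeros(3)   # gyroscope T, K, b (placeholders)

def callback(msg):
    a_s = np.array([msg.linear_acceleration.x, msg.linear_acceleration.y, msg.linear_acceleration.z])
    w_s = np.array([msg.angular_velocity.x, msg.angular_velocity.y, msg.angular_velocity.z])
    a_o = Ta.dot(Ka).dot(a_s) + ba               # Eq. (6.7), noise term omitted
    w_o = Tg.dot(Kg).dot(w_s) + bg               # Eq. (6.8), noise term omitted
    out = msg                                    # reuse the header and covariances
    out.linear_acceleration.x, out.linear_acceleration.y, out.linear_acceleration.z = a_o
    out.angular_velocity.x, out.angular_velocity.y, out.angular_velocity.z = w_o
    pub.publish(out)

if __name__ == "__main__":
    rospy.init_node("imu_compensation")
    pub = rospy.Publisher("imu/data_calibrated", Imu, queue_size=10)
    rospy.Subscriber("imu/data_raw", Imu, callback)
    rospy.spin()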

6.1.3 Calibration of Random Error

Due to external environmental changes, the accelerometer and gyroscope measurement models carry random white noise signals, which conform to a Gaussian distribution with a mean of zero. Furthermore, the zero-point bias also changes with time, and their derivatives with respect to time also conform to a Gaussian distribution with a mean of zero. These two types of random noise signals cause the measured values to carry errors during the acquisition process that affects the accuracy. Hence, it is necessary to calibrate the white noise parameters and zero random walk parameters of the accelerometer and gyroscope. As the IMU data is continuously integrated, the random error will become larger. Therefore, it is necessary to calibrate the IMU random error. According to the IEEE standard in reference [4], the Allan variance (AVAR) method is used to test single-axis fiber optic gyros. The AVAR method is a time domain analysis technique used to determine the noise process of a signal [5]. Thus, the random error of the accelerometer and gyroscope is estimated using the AVAR method. First, the variance σ of the raw measurement data is calculated for different bandwidths τ of low-pass filters. Then, the filter bandwidth τ is treated as the x-axis, and the variance σ is treated as the y-axis to create a logarithmic plot (Allan standard deviation plot). This plot is used to analyze the variance trend as the filter bandwidth


changes. Finally, the following parameters of the gyroscope and accelerometer signals are obtained from the Allan standard deviation plot.

• White Noise N (or Angular Random Walk, ARW)
Unit for gyroscopes: rad/s/√Hz; unit for accelerometers: m/s²/√Hz. The value corresponds to the reading at τ = 1 s of the straight-line portion with a slope of -1/2 in the Allan standard deviation plot. The physical meaning is that for a gyroscope (or accelerometer) with a sampling frequency of f, the actual measurement carries white noise with a standard deviation of √f × N.

• Bias Instability (or Bias Stability) B
Unit for gyroscopes: rad/s; unit for accelerometers: m/s². The value corresponds to the minimum value of σ in the Allan standard deviation plot. For a gyroscope, the physical meaning of B is that, after accurate compensation of the static zero bias, the error in the hourly angular integration of the gyroscope is less than 3600 × B (rad).

• Rate Random Walk (RRW) K
Unit for gyroscopes: rad/s²/√Hz; unit for accelerometers: m/s³/√Hz. The value corresponds to the reading at τ = 3 s of the straight-line portion with a slope of 1/2 in the Allan standard deviation plot. The physical meaning is that for a gyroscope (or accelerometer) with a sampling frequency of f, the derivative of the zero bias with respect to time conforms to a Gaussian distribution with a standard deviation of √f × K.

Parameter ηa in the accelerometer measurement model corresponds to the N of acceleration, and parameter ḃa corresponds to the K of acceleration. Parameter ηg in the gyroscope measurement model corresponds to the N of angular velocity, and parameter ḃg corresponds to the K of angular velocity.

The specific calibration process for random errors is divided into the following steps:

① Fix the IMU on a stable plane to avoid vibrations affecting the calibration accuracy. The installation orientation is the same as that when it is mounted on the robot, so as to reduce the effect of gravitational acceleration on the gyroscope measurements.
② Connect the IMU to the acquisition device via an extension cable to establish communication with the PC.
③ Wait for several minutes until the IMU enters a stable state.
④ Use the ROS rosbag tool to record the IMU data for 2–5 hours. Keep the IMU stationary during this period; otherwise, the calibration results will be affected.
⑤ Use the open-source tool imu_utils, which is based on the AVAR method. Modify the sampling frequency of the IMU in the source code to the actual value and obtain the calibration result through curve fitting.
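For illustration, the Allan deviation described above can be computed with a few lines of numpy. This is only a rough sketch of the AVAR procedure, not the imu_utils implementation; the sampling rate and the synthetic input are assumptions.

import numpy as np

def allan_deviation(omega, fs, taus):
    """Overlapping Allan deviation of a 1-D rate signal omega sampled at fs (Hz)."""
    theta = np.cumsum(omega) / fs            # integrate the rate to an angle
    n = theta.size
    adev = []
    for tau in taus:
        m = int(round(tau * fs))             # cluster size in samples
        if m < 1 or 2 * m >= n:
            adev.append(np.nan)
            continue
        d = theta[2 * m:] - 2.0 * theta[m:-m] + theta[:-2 * m]
        avar = np.sum(d ** 2) / (2.0 * (n - 2 * m) * tau ** 2)
        adev.append(np.sqrt(avar))
    return np.array(adev)

# Example with synthetic data; in practice omega would come from a long static rosbag.
fs = 200.0
omega = np.random.normal(0.0, 0.005, int(fs * 3600))   # 1 h of white noise (assumed)
taus = np.logspace(-2, 2, 50)
adev = allan_deviation(omega, fs, taus)
# N is read near tau = 1 s on the -1/2 slope, K near tau = 3 s on the +1/2 slope,
# and B corresponds to the minimum of the curve.
print(adev[np.argmin(np.abs(taus - 1.0))])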


Table 6.2 Random error calibration results for the IMU

                          White noise        Bias instability
Gyroscope      x-axis     2.938e-3 rad/s     1.352e-5 rad/s²
               y-axis     4.813e-3 rad/s     1.085e-5 rad/s²
               z-axis     6.184e-3 rad/s     1.920e-5 rad/s²
Accelerometer  x-axis     1.103e-1 m/s²      1.194e-3 m/s³
               y-axis     2.980e-2 m/s²      1.996e-4 m/s³
               z-axis     3.271e-2 m/s²      2.904e-4 m/s³

Fig. 6.3 Schematic of data time alignment

Table 6.2 demonstrates the random error calibration results for the BMI055, which are in general agreement with the values given in the datasheet.

6.2 LiDAR and IMU Extrinsic Parameter Calibration

In a multi-sensor-based SLAM system, the sensors use different types of data, so their sampling frequencies are also different. For example, for a mobile robot system, the sampling frequencies of the LiDAR, camera, and IMU are 10 Hz, 15 Hz, and 400 Hz, respectively. Before performing data fusion, it is necessary to ensure that the data from the different sensors are acquired at the same moment, that is, data time synchronization is performed. The lowest frequency among these sensors is chosen as the reference; that is, the sampling time of the LiDAR is used as the alignment mark, as shown in Fig. 6.3. The data collected via the ROS package are marked with a time stamp for each frame. Because there is no hardware synchronization between the individual sensors, it is impossible to guarantee that the collected data frames have the same timestamp. Therefore, the LiDAR is used as a reference, and the camera frame with the closest

timestamp to the LiDAR data frame is kept. For IMU sensors, time synchronization is performed by the weighted mean method shown in Eq. (6.9):

$$\hat{d}_{t_2} = \alpha_1 d_{t_1} + \alpha_2 d_{t_3}, \qquad \alpha_1 = \frac{t_2 - t_1}{t_3 - t_1}, \qquad \alpha_2 = \frac{t_3 - t_2}{t_3 - t_1} \tag{6.9}$$

where $t_2$ is the LiDAR-based timestamp, $t_1$ and $t_3$ are the IMU timestamps nearest to $t_2$ before and after it, respectively, $d$ denotes the IMU data, that is, the measured acceleration and angular velocity, and $\alpha_1$ and $\alpha_2$ are the time interpolation factors.

Fig. 6.4 LiDAR-IMU pose transformation relationship

LiDAR acquires point cloud data, which roughly reflect the profiles of the objects in the scene. The LiDAR moves with the motion of the mobile robot. The frame in which the initial pose of the LiDAR is located is treated as the world frame, as shown in Fig. 6.4. The absolute coordinates of the point cloud obtained by the LiDAR in the world frame at each moment can be described by the following two methods.

1. If the coordinate transformation from the IMU to the LiDAR is known to be T = {R, t}, continuous integration of the IMU data yields the pose transformation matrix C. The representation of the LiDAR frame in the world frame at the current moment is then $T^{-1}CT$. A point cloud in the LiDAR frame is represented as


the vector $p = [x, y, z]^T$ (the same below). By the transitivity of the pose transform, the point is expressed as $T^{-1}CTp$ in the world frame.

2. When the LiDAR moves slowly, many point clouds in consecutive frames overlap. Using a point cloud registration algorithm, the iterative closest point (ICP) algorithm, the pose transform of the LiDAR between consecutive frames can be estimated. The transform C′ of the current pose relative to the initial pose can be obtained by continuous accumulation, and then the representation of any point cloud in the world frame is C′p.

Based on the two descriptions above, the transform T between the LiDAR and the IMU can be estimated by using the point cloud registration algorithm again to construct and optimize the nearest-neighbor error; the result T is the extrinsic parameter transformation matrix between the LiDAR and the IMU.

The ICP algorithm assumes that there are two sets of point cloud data, P and Q, to be processed. P denotes the source point cloud, and Q is the target point cloud. There exists an unknown spatial transformation T = {R, t} that allows them to be spatially matched, that is, to transform the frame in which P is located to the frame in which Q is located. Let the vectors be {pᵢ ∈ P, qᵢ ∈ Q, i = 1, 2, . . ., n}; the purpose of the ICP algorithm is to find the nearest neighboring point cloud pairs (pᵢ, qᵢ) such that Rpᵢ + t = qᵢ, where n is the number of nearest neighboring point cloud pairs in the two point clouds. The matching error is measured by the Euclidean distance between points:

$$d(p, q) = \sqrt{(x_p - x_q)^2 + (y_p - y_q)^2 + (z_p - z_q)^2} \tag{6.10}$$

The least-squares optimization objective function can then be established as follows:

$$\Delta T = \arg\min \sum_{i=1}^{n} d(p_i, q_i)^2 \tag{6.11}$$

There are two conditions for ending the iterative optimization, one of which needs to be satisfied: (1) the number of iterations reaches a set threshold and (2) the absolute value of the difference between the root mean square error before and after the two iterations is less than the set threshold, and the absolute value of the difference between the proportion of interior points before and after the two iterations is less than the set threshold. In the actual processing, the source and target point clouds are not identical, and the mismatched point cloud pairs need to be eliminated so that the iterative optimization converges more accurately. The adaptive thresholding method is used to perform the mismatch rejection by statistically analyzing the point cloud pairs and calculating the adaptive truncation distance value, so as to reject the point cloud pairs larger than this distance value. The mean and standard deviation of the corresponding point distance values during each iteration are calculated as follows.

$$\mu_d = \frac{1}{N}\sum_{i=1}^{n} d_i, \qquad \sigma_d = \sqrt{\frac{1}{N}\sum_{i=1}^{n} \left(d_i - \mu_d\right)^2} \tag{6.12}$$

The adaptive threshold can be set as follows:

$$d_{\mathrm{threshold}} = \mu_d + 0.1\,\sigma_d \tag{6.13}$$

Table 6.3 Calibration results of the LiDAR-IMU extrinsic parameter transformation matrix

Parameters    Rotation matrix R                           Translation vector t
LiDAR-IMU     [ -1.371e-2    0.962       -0.147   ]       [  4.257e-3  ]
              [  6.931e-2   -7.307e-2     0.893   ]       [  5.136e-3  ]
              [  0.902       3.432e-2     4.919e-2]       [ -13.566e-2 ]

Table 6.3 shows the calibration results of the LiDAR-IMU extrinsic parameter transformation matrix for a certain mobile robot system, where the translation vectors are in meters. Thus, the transformation matrix of all the sensor coordinate systems on the mobile robot platform can be established based on the pose transformation relationship between the sensors two-by-two, which lays the foundation for the subsequent multi-sensor fusion algorithm.
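As a concrete illustration of one ICP iteration with the adaptive rejection threshold of Eqs. (6.12) and (6.13), consider the following simplified numpy/scipy sketch. It is not the calibration code used to produce Table 6.3, and the synthetic point clouds are placeholders.

import numpy as np
from scipy.spatial import cKDTree

def icp_iteration(P, Q):
    """One point-to-point ICP step: match P (Nx3) to Q (Mx3), reject far pairs with
    the adaptive threshold d_th = mu_d + 0.1*sigma_d (Eq. 6.13), then solve R, t
    by SVD over the remaining pairs (minimizing the objective of Eq. 6.11)."""
    tree = cKDTree(Q)
    d, idx = tree.query(P)                      # nearest neighbor for every source point
    mu_d, sigma_d = d.mean(), d.std()           # Eq. (6.12)
    keep = d <= mu_d + 0.1 * sigma_d            # Eq. (6.13): reject likely mismatches
    p, q = P[keep], Q[idx[keep]]

    # Kabsch: optimal rotation/translation for the kept correspondences.
    p_c, q_c = p - p.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(p_c.T @ q_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q.mean(0) - R @ p.mean(0)
    return R, t

# Usage: apply repeatedly, transforming P each time, until the RMS error stops changing.
P = np.random.rand(500, 3)
Q = P + np.array([0.05, 0.02, 0.0])             # target = source shifted by a known offset
R, t = icp_iteration(P, Q)
print(t)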

6.3 Kinematic Odometer Model for Differential Wheeled Mobile Robot

SLAM algorithms need to perform pose estimation based on the robot's motion model and then correct the pose in combination with the observations of other sensors. Assume that the robot moves in a 2D plane, there is no wheel slip, and encoders are installed on the left and right wheels of the robot. The kinematic model of the differential wheeled mobile robot is derived based on these assumptions. As shown in Fig. 6.5, assume that the robot moves in the xy-plane, the center point of the robot is in the middle of the left and right wheels, the distance between the left and right wheels is 2L, and the linear velocities measured by the left and right wheel encoders are vL and vR, respectively. The kinematic parameters of the differential wheeled mobile robot consist of the linear velocity v and the angular velocity ω, which can be derived from the left and right wheel velocities of the robot [6]. Let the differential wheeled robot make a circular motion around the center of circle A with radius AB1 = R. When the robot moves from A1 to A2 along the arc for a very small time t, the distances moved by the left and right wheels are ΔSL and ΔSR, and the overall distance is ΔS.

Fig. 6.5 Schematic diagram of the circular motion of a differential wheeled mobile robot

According to the relationship between arc length, radius R, and angle θ in circular motion, the following equations can be obtained:

$$\Delta S_L = R\theta, \qquad \Delta S_R = (R + 2L)\theta \tag{6.14}$$

The arc lengths of the left and right wheel motions can be obtained by integrating the encoder readings over time; that is, ΔSL, ΔSR, and L are known quantities. The radius R and angle θ of the circular motion of the differential wheeled robot can be found by combining the two equations in Eq. (6.14):

$$R = \frac{2L \cdot \Delta S_L}{\Delta S_R - \Delta S_L}, \qquad \theta = \frac{\Delta S_R - \Delta S_L}{2L}, \qquad \Delta S = \frac{\Delta S_R + \Delta S_L}{2} \tag{6.15}$$

In Eq. (6.15), ΔSL = vL ∙ t and ΔSR = vR ∙ t. After the robot moves along the arc for a small time t, the radius R and angle θ of the circular motion can be solved from the arc lengths of the motion. When the equations for θ and ΔS in Eq. (6.15) are differentiated with respect to time t as t approaches zero, the linear velocity v and angular velocity ω of the robot are obtained as follows:

$$v = \frac{v_L + v_R}{2}, \qquad \omega = \frac{v_R - v_L}{2L} \tag{6.16}$$

Fig. 6.6 Schematic diagram of the differential wheeled mobile robot motion model

As shown in Fig. 6.6, assume that the velocity of the mobile robot at a certain moment is v, the angular velocity is ω, and the pose is (x, y, θ), where x and y denote the 2D coordinates and θ denotes the yaw (steering angle) of the robot. (x_c, y_c) denotes the coordinates of the center of the motion circle for the mobile robot. Based on the relationship between linear velocity and angular velocity:

$$v = \omega \cdot r, \qquad x_c = x - r\sin\theta, \qquad y_c = y + r\cos\theta \tag{6.17}$$

The motion speed of the robot does not jump and can be treated as constant over a very small time interval Δt. Using the triangle method, the robot moves a distance of vΔt and its yaw changes by ωΔt in the absence of noise. Thus, the pose (x′, y′, θ′) of the mobile robot can be expressed as Eq. (6.18). However, this is the ideal model of mobile robot motion based on the odometer. Usually, both velocity and angular velocity measurements contain various noises and can be expressed as Eq. (6.19).

$$
\begin{bmatrix} x' \\ y' \\ \theta' \end{bmatrix}
= \begin{bmatrix} x_c + \frac{v}{\omega}\sin(\theta + \omega\Delta t) \\ y_c - \frac{v}{\omega}\cos(\theta + \omega\Delta t) \\ \theta + \omega\Delta t \end{bmatrix}
= \begin{bmatrix} x \\ y \\ \theta \end{bmatrix}
+ \begin{bmatrix} -\frac{v}{\omega}\sin\theta + \frac{v}{\omega}\sin(\theta + \omega\Delta t) \\ \frac{v}{\omega}\cos\theta - \frac{v}{\omega}\cos(\theta + \omega\Delta t) \\ \omega\Delta t \end{bmatrix} \tag{6.18}
$$

$$
\begin{bmatrix} \hat{v} \\ \hat{\omega} \end{bmatrix}
= \begin{bmatrix} v \\ \omega \end{bmatrix}
+ \begin{bmatrix} \varepsilon_1(v, \omega) \\ \varepsilon_2(v, \omega) \end{bmatrix} \tag{6.19}
$$

In Eq. (6.19), ε1(v, ω) and ε2(v, ω) are the error (noise) models. Therefore, using the odometer for robot pose estimation has low accuracy and poor reliability. It needs to be solved jointly with other sensor observations.
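The following minimal Python sketch applies the odometry update of Eqs. (6.16) and (6.18); the function name and the straight-line special case for very small ω are illustrative additions, and no noise model is included.

import math

def odometry_update(x, y, theta, v_l, v_r, L, dt):
    # Propagate the pose (x, y, theta) over dt from the wheel speeds v_l, v_r
    v = (v_l + v_r) / 2.0              # Eq. (6.16): linear velocity
    omega = (v_r - v_l) / (2.0 * L)    # Eq. (6.16): angular velocity
    if abs(omega) < 1e-9:              # straight-line limit as omega approaches zero
        return x + v * dt * math.cos(theta), y + v * dt * math.sin(theta), theta
    r = v / omega                      # turning radius, from v = omega * r
    x_new = x - r * math.sin(theta) + r * math.sin(theta + omega * dt)
    y_new = y + r * math.cos(theta) - r * math.cos(theta + omega * dt)
    return x_new, y_new, theta + omega * dt

print(odometry_update(0.0, 0.0, 0.0, v_l=0.9, v_r=1.1, L=0.2, dt=0.1))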

6.4 Kalman Filter-Based Multi-sensor Fusion

SLAM can be thought of as a robot moving from a certain pose in an unknown environment, constantly and repeatedly using map features (e.g., corners, columns) observed by the sensors it carries (e.g., LiDAR, IMU, cameras) during its movement to determine its own pose. Meanwhile, the robot constructs a map incrementally based on its own pose, thus achieving simultaneous localization and mapping. Therefore, the essence of the SLAM problem is a state estimation problem. Currently, the methods to solve the SLAM problem fall into two main categories: the filter-based approach and the graph optimization-based approach.

The filter-based SLAM approach uses the Bayesian principle to estimate the posterior probability of the system state according to the observed information as well as the control information. Depending on the posterior probability representation, several filter-based methods are available. The most prevalent filter-based SLAM solutions include the Kalman filter (KF), extended Kalman filter (EKF), unscented Kalman filter (UKF), and particle filter (PF).

The KF method is used for the real-time dynamic fusion of the data in the sensor data layer. If the system has a linear kinematic model and its system noise and sensor measurement noise obey a Gaussian distribution, the KF method can provide the optimal estimate in a statistical sense for data fusion using the minimization criterion of the mean-square error. The KF method has achieved wide application in the field of multi-sensor data fusion due to its advantages of fast computation and the small storage space required during computation. For nonlinear systems, the EKF method can be adopted; that is, the nonlinear links are linearized, and the higher order terms are ignored or approximated. The UKF can also be used to approximate the nonlinear distribution by sampling. With respect to non-Gaussian models, the PF approach can be used. The PF algorithm originated from the Monte Carlo idea; that is, over a large number of repeated trials, the frequency of an event is used to approximate the probability of this event. Moreover, particle filters are also suitable for nonlinear systems and have a wide range of applications [7].

In the filter-based SLAM solution, each step only considers the current pose of the robot or the poses of the previous few steps and ignores the historical information of the robot pose. Hence, this solution suffers from the problem of cumulative error. To address this problem, later SLAM solutions focus on the graph-based optimization method.
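To make the predict/update cycle of the Kalman filter concrete, here is a minimal one-dimensional sketch; the state, noise variances, and measurements are made up and far simpler than a real multi-sensor SLAM fusion problem.

def kf_predict(x, P, v, dt, q):
    # Predict: propagate the position estimate with velocity v; q is the process noise variance
    return x + v * dt, P + q

def kf_update(x, P, z, r):
    # Update: fuse measurement z (variance r) using the Kalman gain
    K = P / (P + r)
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 1.0                          # initial state and covariance
for z in [0.11, 0.24, 0.29, 0.42]:       # simulated position measurements
    x, P = kf_predict(x, P, v=1.0, dt=0.1, q=0.01)
    x, P = kf_update(x, P, z, r=0.05)
    print("fused position estimate: %.3f (variance %.3f)" % (x, P))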

Task 1 Filter-Based SLAM
This task will demonstrate a LiDAR SLAM algorithm based on multi-sensor fusion. The algorithm uses the Gmapping algorithm for robot mapping and localization by fusing LiDAR, odometer, and IMU data.

Gmapping is a relatively mature algorithm based on LiDAR and odometry. The use of Gmapping requires the robot to provide odometry and LiDAR data. It is an implementation of the well-known open-source OpenSLAM package in the ROS framework. Slam_gmapping in ROS is a metapackage, and its specific algorithm is implemented in the Gmapping package. According to the input laser data and pose data, a grid-based 2D map is constructed. When installing ROS, Gmapping is installed by default. For the kinetic version of ROS, the following command installs Gmapping (or confirms that it is already installed):

sudo apt-get install ros-kinetic-slam-gmapping

When the Gmapping algorithm is run, the slam_gmapping node needs to be started. A launch file is usually created to start Gmapping, and the specific parameter configuration can be written in the launch file or in a yaml file. The input to the Gmapping algorithm consists of two main aspects: One is the coordinate transform (tf), and the other is the LiDAR topic scan. Tf includes the dynamic coordinate transform between the robot base and the odometer and the static coordinate transform between the base and the LiDAR frame, while scan contains the data from the laser input.

1. Launch

$ cd spark
$ source devel/setup.bash
$ roslaunch spark_slam 2d_slam_teleop.launch slam_methods_tel:=gmapping lidar_type_tel:=3iroboticslidar2

Note: If Spark is configured with an EAI LiDAR (g2 or g6), lidar_type_tel:=3iroboticslidar2 needs to be changed to lidar_type_tel:=ydlidar_g2 or lidar_type_tel:=ydlidar_g6. It can also be configured according to the actual model. The contents of the file 2d_slam_teleop.launch are as follows:










The nodes launched by this command include the Spark motion control model, Spark motion control node, LiDAR node, Gmapping SLAM algorithm node, keyboard control, and map save node.

2. View the Gmapping SLAM algorithm file

The spark_gmapping.launch configuration file is as follows:
























Fig. 6.7 Schematic diagram of the SLAM mapping and localization process based on multi-sensor data. (a) Start of mapping and localization (b) Implementation of mapping and localization







3. Use the keyboard to move the robot around the indoor environment

The schematic diagram of SLAM mapping and localization based on LiDAR, odometer, and IMU sensors is shown in Fig. 6.7. Figure 6.8 demonstrates the result of the mobile robot completing SLAM.

4. Save maps

After running for a while, switch to the terminal that prompts to save the map and press any key to save the currently constructed map. The map files will be saved to the spark_slam/scripts folder, where the test_map.pgm and test_map.yaml map files can be seen. Compare the mapping results in this task with the mapping results in Task 2 of Chap. 4, which employs the Gmapping algorithm for mapping without using odometer data.

Fig. 6.8 SLAM result of the mobile robot

6.5 Cartographer Algorithm

Unlike the filter-based SLAM algorithms, the graph optimization-based SLAM approach is derived from the similarity between the SLAM dynamic Bayesian network and a graph. The graph optimization-based SLAM algorithm takes the state variables composed of the robot poses to be estimated and the map observations as the nodes of the SLAM Bayesian network, while the pose transformation matrices between nodes are used as the transition probabilities to form edges. Repeated observations of environmental features by the mobile robot result in a large number of constraints between nodes, thus adding more edges to the graph. In this way, the SLAM problem is transformed into an optimization problem: how to minimize the average pose error while satisfying the constraints of the edges in the graph as much as possible. The idea of graph optimization originates from the bundle adjustment (BA) method of structure from motion (SFM). It is intuitive and can effectively use all historical observations. In addition, it is highly fault tolerant with respect to data association, facilitates loop closure detection, and benefits from sparsity. The graph optimization-based SLAM algorithm uses all the observations to estimate the complete motion trajectory of the robot as well as the map. The map features scanned at different moments can be transformed into constraints between the robot's poses at different moments; thus, the SLAM problem can be further transformed into an optimal estimation problem for a sequence of poses. The nodes in the graph represent the robot's poses at different moments, while the edges between the nodes represent the constraint


relationship between different poses. The core of the graph optimization is to continuously adjust the pose nodes so that the nodes satisfy the constraint relationships of the edges as much as possible. The final optimization result is the optimized robot motion trajectory and map, which will be closer to the robot's real motion trajectory and environmental map. In 2016, Google's open-source Cartographer SLAM algorithm combined the filter approach with the graph optimization approach [8]. The core of this algorithm is to use the UKF algorithm to fuse data from multiple sensors for SLAM and to prune the search during loop closure detection using the branch-and-bound method. In this way, this algorithm makes real-time loop closure detection possible while mapping on a large scale. The advantages of this algorithm include incremental map growth, real-time loop closure detection, and the option to fuse multiple sensors (multi-line LiDAR, IMU, and odometer). However, it is computationally intensive, and the mapping results depend on the performance of the computer.
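The toy one-dimensional pose-graph sketch below illustrates the idea of adjusting pose nodes so that odometry edges and a loop-closure edge are satisfied in the least-squares sense; it only illustrates the principle, not Cartographer's implementation, and all edge values are invented.

import numpy as np
from scipy.optimize import least_squares

odom_edges = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9), (3, 4, 1.05)]  # (i, j, measured offset)
loop_edges = [(4, 0, -4.0)]                                         # loop closure back to the start

def residuals(x):
    res = [x[0]]                                  # anchor the first pose at zero
    for i, j, meas in odom_edges + loop_edges:
        res.append((x[j] - x[i]) - meas)          # edge error: predicted minus measured
    return res

x0 = np.cumsum([0.0] + [m for _, _, m in odom_edges])   # initial guess from odometry alone
result = least_squares(residuals, x0)
print("optimized 1-D poses:", result.x.round(3))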

6.5.1 Principle Analysis

Cartographer is not innovative in theory, but it is in terms of engineering implementation. The Cartographer algorithm provides a means of implementing multi-sensor information fusion, using the UKF to fuse the odometer, IMU, and LiDAR data, and the fused pose results are used as the initial values for scan matching. In addition, the Cartographer algorithm includes a loop closure detection scheme that can run in real time to eliminate the cumulative errors generated during the mapping process. The entire algorithm structure block diagram of Cartographer is shown in Fig. 6.9.

In the process of scan matching, the Cartographer algorithm first uses the LiDAR observation model, the odometer model, and the yaw model to construct the optimization constraints.

Fig. 6.9 Cartographer algorithm structure diagram

Next, the Ceres Solver library developed by Google is used

for online real-time optimization, as shown in Optimization 1 in Fig. 6.9. With respect to the closed-loop optimization part, a multi-resolution map layering is used for branch-and-bound to find sub-maps that can be matched, and the constraints are first determined using the sparse graph optimization method. Then, the Ceres Solver library is invoked for optimization, as demonstrated in Optimization 2 in Fig. 6.9.

1. Local Optimization Process

As shown in Fig. 6.9, after the IMU and odometer data are fused, the initial pose matrix is obtained (including the translation matrix as well as the rotation matrix). During the data fusion, the estimate of the rotation matrix is first obtained by direct integration of the gyroscope. Then, complementary filtering of the accelerometer and gyroscope data is used to obtain an estimate of the fused rotation matrix. The estimate of the translation matrix is obtained by integrating the velocity vector. The fusion process is shown in Eq. (6.20).

$$g_v = g_v + a_v \times \Delta t, \qquad \alpha = 1 - e^{-\Delta t / T}, \qquad g_v = (1 - \alpha) \times g_v + \alpha \times acc \tag{6.20}$$

where g_v is the gravity vector, a_v is the angular velocity, Δt is the sampling interval, T is the time constant, α is the compensation factor, and acc is the acceleration.

When the current pose estimate is determined using the accelerometer and odometer data, the initial value is taken as the center, a fixed-size window is selected, candidate pose particles are evenly distributed, and the degree of coincidence between each particle and the sub-map is calculated. The particle pose with the highest score is selected as the optimized pose in real time. This initial optimized pose is then optimized again in the Ceres Solver library (Optimization 1 in Fig. 6.9) to obtain a more accurate pose, and the velocity vector is obtained using the feedback from this more accurate pose. Unlike the Hector SLAM algorithm, the Cartographer algorithm uses bicubic interpolation, which is more accurate and smoother, when calculating the map grid probabilities.

2. Global Optimization Process

In the Cartographer algorithm, sub-maps are stitched together from a certain number of laser-scanned point clouds. First, the best positions of the laser-scanned point clouds in the sub-map are estimated using the existing sub-maps and other sensor data combined with the UKF algorithm. Then, the laser-scanned point clouds are inserted into the corresponding sub-map. When a sub-map is constructed, no new laser-scanned point cloud is added to that sub-map. As the drift of the sensor is small enough over a short period of time, the error in the sub-map is also small enough to be negligible. However, as a greater number of sub-maps are created, the cumulative error between sub-maps becomes larger and


larger, and this error is not negligible. Therefore, Cartographer provides a loop closure detection method that can run in real time to reduce this part of the cumulative error. The method is presented below. The loop closure detection process runs in the background in a multi-threaded manner; when a sub-map is constructed, that sub-map is added to the queue for loop closure detection, which searches all undetected sub-maps in the queue. When a new laser-scanned point cloud is collected and added to the sub-map, if the estimated pose of that laser-scanned point cloud is close to that of one in an existing sub-map, the closed loop is found by a scan matching strategy. Specifically, the scan matching strategy sets a 3D window near the estimated pose of the newly added laser-scanned point cloud and finds a possible match for that laser-scanned point cloud in the multi-resolution layered map within that window. Next, it adopts a depth-first search principle to prune the search early and reduce the complexity of the matching process. If a good enough match is found, the matching closed-loop constraints will be added to the pose optimization problem, and the Ceres Solver library is used again for global pose optimization. After completing the optimization in the sub-map, to further reduce the cumulative error, the Cartographer algorithm uses global optimization (Optimization 2 in Fig. 6.9) to form a series of sub-maps into a sparse graph, calculates the node constraints in the sparse graph by the branch-and-bound method, and then calls the Ceres Solver library to complete the final optimization, so as to obtain the globally optimal pose and map.
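Before turning to the hands-on task, the sketch below gives a rough reading of the complementary filter in Eq. (6.20): the gravity vector is propagated with the gyroscope and blended with the accelerometer. The cross-product propagation step and the normalization are assumptions made for this example and are not taken from Cartographer's code.

import math
import numpy as np

def complementary_filter(g_v, gyro, acc, dt, T=0.5):
    # Propagate the gravity direction with the angular velocity over dt (assumed form)
    g_pred = g_v + np.cross(gyro * dt, g_v)
    # Compensation factor alpha = 1 - exp(-dt / T), as in Eq. (6.20)
    alpha = 1.0 - math.exp(-dt / T)
    # Blend the prediction with the accelerometer reading, rescaled to the same length
    acc_dir = acc / np.linalg.norm(acc) * np.linalg.norm(g_pred)
    return (1.0 - alpha) * g_pred + alpha * acc_dir

g = np.array([0.0, 0.0, 9.81])
g = complementary_filter(g, gyro=np.array([0.01, 0.0, 0.0]), acc=np.array([0.1, 0.0, 9.7]), dt=0.01)
print("fused gravity estimate:", g.round(4))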

Task 2 SLAM Based on Graph Optimization
This task will demonstrate SLAM based on the Cartographer algorithm. First, the installation and considerations of Cartographer are introduced. After that, the robot will be controlled by keyboard, and mapping and localization will be performed by fusing LiDAR, IMU, and odometer data.

Compile and install Cartographer:

1. Install tools

To install Cartographer ROS, it is recommended to install wstool and rosdep.

sudo apt-get update
sudo apt-get install -y python-wstool python-rosdep ninja-build

2. Initialize the workspace

Before starting this step, the domain name raw.githubusercontent.com needs to be manually mapped to its IP address, as it is often resolved incorrectly by DNS. First, visit https://ipaddress.com/website/raw.githubusercontent.com to get the currently updated IP address of the domain, such as 185.199.108.133.


Then execute the command: $ sudo gedit /etc/hosts

Open the file and add the following content in the first line: 185.199.108.133 raw.githubusercontent.com

After saving the file and closing it, execute the ping command to verify that you can visit: $ ping raw.githubusercontent.com

If you fail to visit, change the IP and repeat the above steps. After that, execute the following command: $ mkdir carto_ws $ cd carto_ws $ wstool init src $ wstool merge -t src https://raw.githubusercontent.com/ googlecartographer/cartographer_ros/master /cartographer_ros.rosinstall

When complete, execute $ gedit src/.rosinstall

If it contains the URL https://ceres-solver.googlesource.com/ceres-solver.git, change it to https://github.com/ceres-solver/ceres-solver.git; otherwise, leave it unchanged. Next, download the Cartographer main package by executing the following command:

$ wstool update -t src

3. Install dependencies and download Cartographer-related packages (default version is kinetic; it can be replaced according to the ROS version being used)


$ src/cartographer/scripts/install_proto3.sh
$ sudo pip install rosdepc
$ sudo rosdepc init
$ rosdepc update
$ rosdepc install --from-paths src --ignore-src --rosdistro=kinetic -y

If executing sudo pip install rosdepc reports an error, install the pip tool first by executing sudo apt install python-pip. Please note that rosdepc is a version of rosdep maintained for users in mainland China, which can fix some rosdep connection failures. Next, install the absl library:

$ sudo apt-get install stow
$ sudo chmod +x ~/carto_ws/src/cartographer/scripts/install_abseil.sh
$ cd ~/carto_ws/src/cartographer/scripts
$ ./install_abseil.sh

4. Compile and install

First, modify the compile options for Cartographer by opening carto_ws/src/cartographer/CMakeLists.txt and adding the compile option at the following location:

project(cartographer)
add_compile_options(-std=c++11)
set(CARTOGRAPHER_MAJOR_VERSION 1)

Then, open carto_ws/src/cartographer_ros/cartographer_ros/CMakeLists.txt and add the compile option at the following location:

project(cartographer_ros)
add_compile_options(-std=c++11)

Open carto_ws/src/cartographer_ros/cartographer_ros_msgs/CMakeLists.txt and add the compile option at the following location:

project(cartographer_ros_msgs)
add_compile_options(-std=c++11)


Fig. 6.10 Environmental map constructed using the 2D LiDAR packet

Open carto_ws/src/cartographer_ros/cartographer_rviz/CMakeLists.txt and add the compile option at the following location:

project(cartographer_rviz)
add_compile_options(-std=c++11)

Because the absl library requires compilation with the C++11 standard, it will fail to compile if the compile option is not added. Next, enter the following commands in the terminal:

$ cd ~/carto_ws
$ catkin_make_isolated --install --use-ninja
$ source ~/carto_ws/install_isolated/setup.bash

Please note that the source statement above cannot be put into ~/.bashrc.

5. 2D LiDAR packet testing

wget -P ~/Downloads https://storage.googleapis.com/cartographer-public-data/bags/backpack_2d/cartographer_paper_deutsches_museum.bag
roslaunch cartographer_ros demo_backpack_2d.launch bag_filename:=${HOME}/Downloads/cartographer_paper_deutsches_museum.bag

The environmental map constructed using the 2D packet is shown in Fig. 6.10.

Mapping Process of Cartographer by Fusing LiDAR, IMU, and Odometer

Spark and the mapping algorithm are run below.


1. Preparatory work

First, open ~/spark/devel/_setup_util.py, find CMAKE_PREFIX_PATH, and change it to

CMAKE_PREFIX_PATH = '/home/spark/carto_ws/install_isolated'

This step is to prevent conflicts between source setup.bash statements from different workspaces.

2. Mapping

Execute the following commands to start mapping:

$ source ~/spark/devel/setup.bash # Cannot omit this line
$ roslaunch spark_slam 2d_slam_teleop.launch slam_methods_tel:=cartographer lidar_type_tel:=3iroboticslidar2

Please note that if Spark is configured with an EAI LiDAR (g2 or g6), lidar_type_tel:=3iroboticslidar2 needs to be changed to lidar_type_tel:=ydlidar_g2 or lidar_type_tel:=ydlidar_g6. It can also be configured according to the actual model. The content of the file 2d_slam_teleop.launch is as follows:







The nodes started by this command include the Spark motion control model, Spark motion control node, LiDAR node, SLAM algorithm node, keyboard control, and map save node.

3. View the configuration file of Cartographer

The spark_lds_2d.lua file in the spark_app/spark_slam/config directory is the configuration file for Cartographer. Its content is as follows:


-- Copyright 2016 The Cartographer Authors
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.

include "map_builder.lua"
include "trajectory_builder.lua"

options = {
  map_builder = MAP_BUILDER,
  trajectory_builder = TRAJECTORY_BUILDER,
  map_frame = "map",
  tracking_frame = "IMU_link", -- imu_link. If you are using gazebo, use 'base_footprint' (libgazebo_ros_imu's bug)
  published_frame = "odom",
  odom_frame = "odom",
  provide_odom_frame = false,
  publish_frame_projected_to_2d = false,
  use_odometry = true,
  use_nav_sat = false,
  use_landmarks = false,
  num_laser_scans = 1,
  num_multi_echo_laser_scans = 0,
  num_subdivisions_per_laser_scan = 1,
  num_point_clouds = 0,
  lookup_transform_timeout_sec = 0.2,
  submap_publish_period_sec = 0.3,
  pose_publish_period_sec = 5e-3,
  trajectory_publish_period_sec = 30e-3,
  rangefinder_sampling_ratio = 1,
  odometry_sampling_ratio = 1,
  fixed_frame_pose_sampling_ratio = 1,
  imu_sampling_ratio = 1,
  landmarks_sampling_ratio = 1,
}

MAP_BUILDER.use_trajectory_builder_2d = true
TRAJECTORY_BUILDER_2D.min_range = 0.1
TRAJECTORY_BUILDER_2D.max_range = 3.5
TRAJECTORY_BUILDER_2D.missing_data_ray_length = 3.
TRAJECTORY_BUILDER_2D.use_imu_data = true
TRAJECTORY_BUILDER_2D.use_online_correlative_scan_matching = true
TRAJECTORY_BUILDER_2D.motion_filter.max_angle_radians = math.rad(0.1)
POSE_GRAPH.constraint_builder.min_score = 0.65
POSE_GRAPH.constraint_builder.global_localization_min_score = 0.7
return options

Fig. 6.11 Relationship between each node of the Cartographer algorithm

Fig. 6.12 Robot local mapping result. (a) Robot starting position. (b) Robot mapping process

The Cartographer algorithm-based mapping process and relationship between each ROS node are shown in Fig. 6.11. The robot local mapping result is demonstrated in Fig. 6.12.

6.5.2 Mapping Results

Experiments are conducted in a larger scene. The mapping results using the Cartographer algorithm with LiDAR only are shown in Fig. 6.13.

Fig. 6.13 Cartographer mapping result (with LiDAR only)

Compared with the Gmapping algorithm and the Hector SLAM algorithm, the Cartographer algorithm can

achieve good mapping results even if only LiDAR is used. The ring loops in the map can be successfully detected using the Cartographer algorithm. Fig. 6.14 illustrates the mapping results when using the Cartographer algorithm and fusing multiple sensors, namely the LiDAR, IMU, and odometer. The IMU and odometer data are fused by the UKF to provide the initial values for scan matching. Thanks to the IMU, the mobile robot can operate normally on uneven ground (including uphill and downhill environments with small inclinations). During the experiments, the Cartographer algorithm consumes more computer resources, and its memory usage increases with time; therefore, this algorithm requires a high-performance computer. In addition, because the Cartographer algorithm carries out loop closure detection concurrently with multiple threads, even the same data set may produce different maps on computers with different performance. Hence, this algorithm is better suited to high-performance computers for better mapping results.

6.6 Summary

This chapter first introduced the background of SLAM based on multi-sensor fusion and then presented the calibration of the IMU and an analysis of the motion model of a differential wheeled mobile robot. Subsequently, the Kalman filter-based multi-sensor fusion method and the use of the filter-based Gmapping algorithm in ROS were presented.

Fig. 6.14 Cartographer mapping (fusing LiDAR, IMU, and odometer data)

Finally, the principle of the Cartographer algorithm based on graph optimization was introduced. Through a Cartographer mapping experiment conducted by fusing LiDAR, IMU, and odometer data, readers can gain a comprehensive understanding of the graph optimization-based mapping method with multi-sensor fusion.

Further Reading

Understand the characteristics and differences between tightly coupled and loosely coupled multi-sensor methods, and the LIO-SAM and Fast-LIO tightly coupled methods.

A loosely coupled system provides flexibility in the combination of sensors, has lower requirements for timestamp synchronization, and typically requires lower computational cost. Taking LIO as an example, loosely coupled LiDAR-inertial odometry (LIO) methods typically process the LiDAR and IMU measurements separately and fuse their results later. A common procedure of the loosely coupled approach is to obtain a pose measurement by registering a new scan and then fuse that pose measurement with the IMU measurements. The separation between scan registration and data fusion reduces the computational load. However, it ignores the correlation between the system's other states (e.g., velocity) and the pose of the new scan. Moreover, in featureless environments, the scan registration can degenerate in certain directions and cause unreliable fusion in later stages.


Unlike the loosely coupled methods, tightly coupled LiDAR-inertial odometry methods typically fuse the raw feature points of the LiDAR (instead of scan registration results) with the IMU data. The LiDAR point cloud features must be put into the feature vector, so the dimension of the whole system state vector becomes very high, which requires a large amount of computation. Tightly coupled LIO puts the raw observations of the LiDAR and the raw observations of the IMU together for joint processing, considering their internal relationship and mutual influence. That is, a tightly coupled system jointly optimizes over both inertial and LiDAR sensor measurements, which provides higher accuracy in both mapping and tracking. The general idea is that the IMU data is first used to remove the distortion of the LiDAR observation, and then the LiDAR observation and IMU observation are fed together into some form of state estimation model, with the purpose of minimizing the overall error of the LiDAR and IMU observations. The final state quantities, such as position, velocity, orientation, bias, and gravity, are estimated. There are two main approaches to tightly coupled LIO: optimization based and filter based.

LIO-SAM is a factor-graph-based, tightly coupled LiDAR-inertial odometry via smoothing and mapping system,¹ which adds IMU pre-integration and GPS information and removes the inter-frame matching module. The inputs are IMU, point cloud, and GPS data. The LIO-SAM method obtains the IMU pre-integration factor from the IMU measurements between adjacent key frames. Then, the laser odometer factor is obtained by matching the key frame with the local map. When a new pose node is inserted into the factor graph, the GPS factor is added to the pose node. Finally, the loop factor is obtained by matching the key frame with the candidate loop key frame, and the overall map is optimized and constructed based on the optimization of the factor graph.

The imu_preintegration node in LIO-SAM specifically performs pre-integration on the IMU. The pre-integration result is given to the scan-to-local-map part at the back end to remove distortion and provide initial registration values. The registration result in turn is returned to the imu_preintegration node. In this node, a sliding window optimization problem of no more than N frames is constructed based on the gtsam (Georgia Tech Smoothing and Mapping) factor graph optimization library,² which specifically optimizes the current bias of the IMU. The new bias will be used for IMU pre-integration at subsequent times. At the same time, the IMU pre-integration results are also used as IMU pre-integration factors, which are input into the back-end factor graph optimization problem to participate in the optimization of the entire trajectory, as shown in Fig. 6.15.

¹ Tixiao Shan, Brendan Englot, Drew Meyers, Wei Wang, Carlo Ratti, Daniela Rus. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. Proceedings of 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020), pp. 5135-5142, 24-30 Oct. 2020, Las Vegas, Nevada, USA.
² https://gtsam.org/tutorials/intro.html


Fig. 6.15 System structure of LIO-SAM

Fig. 6.16 Framework overview of FAST-LIO

Fast-LIO and Fast-LIO2 fuse LiDAR feature points with IMU data using a tightly coupled iterated extended Kalman filter to allow robust navigation in fast-motion, noisy, or cluttered environments where degeneration occurs.³ The LiDAR inputs are fed into the feature extraction module to obtain planar and edge features. Then the extracted features and IMU measurements are fed into the state estimation module for state estimation at 10-50 Hz. The estimated pose then registers the feature points to the global frame and merges them with the feature point map built so far. The updated map is finally used to register further new points in the next step. The framework overview of Fast-LIO is shown in Fig. 6.16. Fast-LIO is compatible with rotary mechanical LiDAR and solid-state LiDAR. First, feature extraction is performed on the point cloud, and the feature points are de-distorted using the IMU observations. Then, the LiDAR observation and IMU observation are placed in an iterated Kalman filter to estimate the position, velocity, orientation, bias, and gravity state quantities at the current moment, all of which are located in the world coordinate system.

³ Wei Xu, Fu Zhang. FAST-LIO: A Fast, Robust LiDAR-Inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter. IEEE Robotics and Automation Letters, 2021, Vol. 6(2): 3317-3324.
Wei Xu, Yixi Cai, Dongjiao He, Jiarong Lin, Fu Zhang. FAST-LIO2: Fast Direct LiDAR-Inertial Odometry. https://arxiv.org/abs/2107.06829v1


Exercises

1. Calibrate an IMU sensor according to the IMU calibration method described.
2. Analyze the process of LiDAR and IMU extrinsic parameter calibration.
3. The Kalman filter is a Gaussian filter method, while the particle filter is a nonparametric filter method. What are the advantages, disadvantages, and application scenarios of these two methods?
4. What are the current improvements that address the limitations of the Kalman filter method?
5. Explain the difference between filter-based SLAM and graph optimization-based SLAM.
6. What method does the Cartographer algorithm use to reduce computational resource consumption and ensure real-time performance and accuracy in loop closure detection?
7. Refer to the method of environmental mapping on 2D data sets using the Cartographer algorithm and perform mapping on 3D data sets.

References

1. Makarenko AA, Williams SB, Bourgault F, Durrant-Whyte HF (2002) An experiment in integrated exploration. Proceedings of 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2002), September 30-October 4, 2002, Lausanne, Switzerland, pp 534-539
2. Tedaldi D, Pretto A, Menegatti E (2014) A robust and easy to implement method for IMU calibration without external equipments. Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA 2014), May 31-June 7, 2014, Hong Kong, China, pp 3042-3049
3. Zezao L (2019) Tightly coupled monocular visual inertial SLAM combined with wheel speed sensor. University of Science and Technology, Wuhan
4. Board ISS (2006) IEEE standard specification format guide and test procedure for single-axis interferometric fiber optic gyros. IEEE Std, pp 1-83
5. Woodman OJ, Adams DE (2007) An introduction to inertial navigation. J Navig 9:249-259
6. Zheng W (2017) Research on key technologies of mapping and localization system for mobile robots. Huazhong University of Science and Technology, Wuhan
7. Thrun S, Burgard W, Fox D (2019) Probabilistic robotics (trans: Cao H, Tan Z, Shi X). Machinery Industry Press
8. Hess W, Kohler D, Rapp H, Andor D (2016) Real-time loop closure in 2D LIDAR SLAM. Proceedings of 2016 IEEE International Conference on Robotics and Automation (ICRA 2016), 16-21 May 2016, Stockholm, Sweden, pp 1271-1278

Chapter 7 Manipulator Motion Control

Abstract Motion control is an important part of the field of manipulator research. This chapter first introduces URDF modeling and visualization control of a multi-joint manipulator. ROS provides a fully functional manipulator motion planning package, MoveIt, which includes kinematic modeling, operation control, environment perception, and motion planning. MoveIt is an easy-to-use integrated development platform. The manipulator is configured using the MoveIt configuration assistant. When combined with Gazebo, MoveIt enables visualization simulation with friendly human-computer interaction. The motion control of the manipulator will be presented in three parts: manipulator modeling, MoveIt configuration, and MoveIt motion planning.

Keywords Motion control · Kinematic modeling · Motion simulation · Motion planning · Digital Twins

Motion control is an important part of the field of manipulator research. This chapter first introduces URDF modeling and visualization control of a multi-joint manipulator. ROS provides a fully functional manipulator motion planning package, MoveIt, which includes kinematic modeling, operation control, environment perception, and motion planning. MoveIt is an easy-to-use integrated development platform. The manipulator is configured using the MoveIt configuration assistant. When combined with Gazebo, MoveIt enables visualization simulation with friendly human-computer interaction. The motion control of the manipulator will be presented in three parts: manipulator modeling, MoveIt configuration, and MoveIt motion planning.


7.1 Manipulator Modeling

7.1.1 Common Manipulators in ROS

7.1.1.1 Universal Robots

Universal Robots (UR) was founded in Denmark in 2005. The UR company has developed a product series including the UR3, UR5, and UR10 for different payload levels, as shown in Fig. 7.1. In 2009, UR released its first collaborative robot, the UR5, with a self-weight of 18 kg, a payload of up to 5 kg, and a working radius of 85 cm. UR not only upended the perception of traditional industrial robots but also developed the concept of a "collaborative robot" that can work with human operators in the same space, with exceptional safety and no safety fence. In March 2015, UR released the UR3, which weighs only 11 kg but has a payload of up to 3 kg, 360-degree rotation on all wrist joints, and infinite rotation on the end joint. Additionally, the UR10 offers a payload capacity of 10 kg and a working radius of 130 cm. All three robots are easy to program and flexible enough to work safely and reliably with humans on various tasks, such as grasping, assembly, and polishing, as illustrated in Fig. 7.2.

Fig. 7.1 UR3, UR5, and UR10 robots

Fig. 7.2 Collaborative robot performing a polishing operation

7.1.1.2 Franka Panda

Fig. 7.3 Franka Emika Panda manipulator

Franka Panda is a 7-DOF manipulator designed by the Franka Emika company (Germany) and is shown in Fig. 7.3. Each joint contains torque sensors with robust performance. The Franka Panda offers an open-source interface called the Franka

Control Interface (FCI), which enables users to program the manipulator using C++ and ROS. Its built-in robot control system can be conveniently operated through a mobile app or web page. In addition, the system is agile and can be automated to perform various repetitive and monotonous operations, such as assembly, screwing, and connecting operations with high fine control requirements.

7.1.2 URDF Manipulator Model

The unified robot description format (URDF) is an essential robot model description format in ROS. ROS provides a C++ parser for URDF files that can parse robot models described in XML format. In general, any robot model can be divided into two parts: link and joint, as illustrated in Fig. 7.4.

1. <link> tag

The <link> tag describes the appearance and physical properties of a rigid part of the robot, including size, color, shape, inertial matrix, and collision properties. The syntax for the URDF description of the link structure is as follows:

<link name="link_name">
  <inertial> ... </inertial>
  <visual> ... </visual>
  <collision> ... </collision>
</link>

The <inertial> tag describes the inertial parameters of the link, the <visual> tag describes the appearance parameters of the link, and the <collision> tag describes its collision properties. As shown in Fig. 7.5, the link area for collision detection is larger than the visual appearance area, which means that whenever another object intersects the collision area, the link is considered to have been in a collision.

Fig. 7.4 Link and joint in manipulator URDF model


Fig. 7.5 Contents of the <link> tag description

Table 7.1 Forms of manipulator joint motions

Type of joint | Motion description
Continuous | Hinge joint for infinite rotation around a single axis
Revolute | Hinge joint, similar to continuous, but with rotational angle limits
Prismatic | Sliding joint that slides along a certain axis with position limits
Planar | Planar joint that allows translation or rotation in the orthogonal direction of the plane
Floating | Floating joint that allows translation and rotation
Fixed | Special joint that cannot move

2. <joint> tag

The <joint> tag describes the kinematic and dynamic properties of the robot joints, including the position and velocity limits of the joint motion. Based on the motion form, joints can be classified into six types, as shown in Table 7.1. Similar to human joints, the main function of robot joints is to connect two rigid body links, which are called the parent link and child link, respectively. The <joint> tag is illustrated in Fig. 7.6. The <joint> tag has the following description syntax:



<joint name="joint_name" type="joint_type">
  <parent link="parent_link"/>
  <child link="child_link"/>
  <calibration ... />
  <dynamics ... />
  <limit ... />
  ...
</joint>

The parent link and child link of the joint must be specified, and other properties of the joint can be set.

Fig. 7.6 <joint> tag illustration

• <calibration>: the reference position of the joint, used to calibrate the absolute position of the joint.
• <dynamics>: describes the physical properties of the joint, such as damping values and physical static friction, which will be used in dynamic simulation.
• <limit>: describes the limit values of the motion, including the upper and lower position limits, speed limit, and torque limit of the joint motion.
• <mimic>: describes the relationship between this joint and existing joints.
• <safety_controller>: describes the safety controller parameters.

3. <robot> tag

<robot> is the topmost tag of the complete robot model, and both the <link> and <joint> tags must be included in the <robot> tag. A complete robot model consists of a series of <link> and <joint> tags. The <robot> tag is illustrated in Fig. 7.7. The name of the robot can be set in the <robot> tag, with the following basic syntax:

<robot name="robot_name">
  <link> ... </link>
  <link> ... </link>
  <joint> ... </joint>
  <joint> ... </joint>
</robot>
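As a small illustration of this link/joint hierarchy, the following sketch (standard-library Python only) walks a URDF file and prints its <robot>, <link>, and <joint> structure; the file name is an assumption for the example.

import xml.etree.ElementTree as ET

robot = ET.parse("ur5_description.urdf").getroot()   # the top-level <robot> tag
print("robot name:", robot.get("name"))
for link in robot.findall("link"):
    print("link:", link.get("name"))
for joint in robot.findall("joint"):
    parent = joint.find("parent").get("link")        # parent link of the joint
    child = joint.find("child").get("link")          # child link of the joint
    print("joint: %s (%s) %s -> %s" % (joint.get("name"), joint.get("type"), parent, child))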

Fig. 7.7 <robot> tag illustration

7.1.3 URDF Modeling

In most scenarios, a robot model is first designed using 3D software and then imported into the ROS environment. Solidworks is a common 3D design software, and the ROS community provides a model conversion plugin, sw2urdf, for this software. Next, the 3D model of the UR5 is taken as an example for readers to learn how to use the plugin to complete the conversion to the URDF model. First, obtain the UR5 manipulator 3D model. The model file can be obtained in stp format from the download link on the official website. Then log in to the sw2urdf plugin web page, as shown in Fig. 7.8. After downloading, install the sw2urdf plugin on the computer with the Solidworks software. After the plugin is successfully installed, open the 3D model Solidworks file of the UR5 in the Solidworks software, as shown in Fig. 7.9. Before exporting the model, use the Solidworks software to set up a global coordinate system (i.e., global frame) for the model and a reference axis for each joint; the result after configuration is shown in Fig. 7.10. Then, click "Tools" -> "File" -> "Export as URDF" and open the installed sw2urdf plugin, as shown in Fig. 7.11.

Fig. 7.8 The download interface of the sw2urdf plugin

Fig. 7.9 UR5 model in Solidworks

In the configuration screen of the plug-in, select the name and model of each link, the name and type of the joint, and the axis of rotation according to the configuration

instructions, and click on the number under “Number of child links” to add a tandem joint. The UR5 manipulator is a six-joint, seven-link tandem arm that requires all configurations to be completed in sequence, as demonstrated in Fig. 7.12. After completing the configuration, click on the “Preview and Export” button to automatically generate all reference frames (i.e., reference coordinate system) and open a confirmation screen for the exported parameters, as shown in Fig. 7.13.

Fig. 7.10 UR model with reference axes added

Fig. 7.11 Sw2urdf plugin open location

After confirming that the link and joint parameters are correct, click “Export URDF and Meshes” at the bottom right corner of the interface to automatically convert the UR5 model into a ROS package that contains the URDF model and its linked meshes file, as shown in Fig. 7.14. Place the package under the ROS workspace to compile it and run the following command to see the URDF model, as shown in Fig. 7.15.

$ roslaunch ur5_description display.launch

Fig. 7.12 Configuring the URDF model

Fig. 7.13 Confirmation of configuration parameters

Fig. 7.14 Completing the model export

Fig. 7.15 Displaying the exported model under ROS

Next, the Gazebo simulation software and the MoveIt motion planning package can be used to build a complete manipulator motion simulation system. As described earlier, Gazebo is a 3D physics simulation platform with a powerful physics engine, high-quality graphics rendering, and convenient programming and graphical interfaces. ROS provides the underlying driver package for Gazebo, making it easy to simulate a robot system with Gazebo in a ROS environment.

Fig. 7.16 Gazebo-based manipulator motion simulation system

As illustrated in Fig. 7.16, the robot model in Gazebo is the same as the one used by

rviz. Furthermore, the physical properties of the robot and its surroundings, such as mass, friction coefficient, and elasticity coefficient, can be added to the model as needed. Robot sensor information can also be added to the simulation environment in the form of a plug-in for visual display.

7.2 Manipulator Control: MoveIt

7.2.1 Introduction to MoveIt

The PR2, which first used the ROS framework, is not only a mobile robot but also has two multi-DOF manipulators. Based on the PR2 robot, ROS has provided a number of packages for the manipulator, which were integrated into a separate ROS software, MoveIt, in 2012. MoveIt provides developers with an easy-to-use, integrated development platform. This platform consists of a series of packages containing motion planning, operation control, 3D sensing, kinematics, and control and navigation algorithms. It also offers a user-friendly GUI for a wide range of industrial, commercial, research and development, and other applications [1]. As shown in Fig. 7.17, MoveIt currently supports dozens of common robots and can also be flexibly applied to various robot systems.

Fig. 7.17 MoveIt supports numerous robot platforms

7.2.2 Setup Assistant to Configure Manipulator

The first step in using MoveIt is to use its own Setup Assistant tool to complete a series of configuration tasks. The Setup Assistant generates a semantic robot description format (SRDF) file based on the user-imported robot URDF model and creates a MoveIt configuration package to complete the robot configuration, visualization, and simulation. The following is an example of how to configure the UR5 using the Setup Assistant. First, start the Setup Assistant with the following commands:

$ roscore
$ rosrun moveit_setup_assistant moveit_setup_assistant

After the Setup Assistant runs successfully, the interface shown in Fig. 7.18 will appear. The list on the left side of the interface includes the items that need to be configured in the following steps. Click on one of these items, and the main interface will display the corresponding configuration options.

1. Load the robot URDF model

There are two options here: One is to create a new configuration package, and the other is to use an existing configuration package. After New is selected, set the model file path to the URDF file ur5_description.xacro under the ur5_description package in the model loading window and click the "Load Files" button to finish loading the model. When the model is loaded successfully, the UR5 manipulator model can be seen in the right window, as shown in Fig. 7.19.

Fig. 7.18 Launch interface of Setup Assistant

Fig. 7.19 Load the robot model

2. Configure the Self-Collisions matrix

Click on the second configuration step, "Self-Collisions," on the left to configure the Self-Collisions matrix. MoveIt allows a certain number of random sampling points to be set, and collision parameters will be generated based on these points to detect links that


will not collide. If too many points are set, the operation will be slow, while too few points will lead to imperfect parameters. The default number of sampling points is 10,000, which can yield good results. With this default value, click "Generate Collision Matrix," as shown in Fig. 7.20.

Fig. 7.20 Configure the Self-Collisions matrix

3. Configure virtual joints

Virtual joints are mainly used to describe the position of the robot in the world frame (i.e., world coordinate system). If the robot is mobile, virtual joints can be associated with a mobile base. If the manipulator is stationary, virtual joints are not required.

4. Create planning groups

This step allows for the integration of multiple robot components (links and joints) into a group, and the motion planner will complete the planning task for a set of links or joints. During the configuration process, the kinematic parser can be selected. Two groups will be created here: one group containing the manipulator body and one group containing the end-effector (with a gripper). First, the manipulator group for the body is created. Click Add Group, and the configuration screen is shown in Fig. 7.21.

• Group Name: manipulator
• Kinematic Solver: kdl_kinematics_plugin/KDLKinematicsPlugin
• Kin. Search Resolution: 0.005
• Kin. Search Timeout (sec): 0.05
• Kin. Solver Attempts: 3

Then click the Add Kin. Chain button to set the links to be included in the kinematic calculation.

Fig. 7.21 Create the manipulator group for the manipulator body

Fig. 7.22 Add a kinematic tool chain

Click the inverted triangle symbol next to Robot Links

in the interface to expand all the links. After that, select the desired link and click Choose Selected to select the link, as shown in Fig. 7.22. Here, the joints included in the kinematic calculation of the manipulator are set as follows:

• Base Link: base_link
• Tip Link: link_6

Fig. 7.23 Define the ZERO pose

5. Define robot poses

Depending on the application scenario, some custom poses can be set, such as the initial pose of the robot and several working poses. When programming with MoveIt's API, these poses can be called directly by name. Two robot poses are configured below. Click the Add Pose button, and set the first pose name as zero in the interface that appears. Then, select the corresponding planning group as manipulator, as shown in Fig. 7.23. This pose is the position in which all six joint angles are 0, which can be treated as the initial pose of the robot. When the settings are complete, click Save. Then set the second pose: home. The setup process is similar, except that the joint control sliders in the setup screen need to be dragged to move the robot model shown on the right to the desired pose, and then that pose is saved as home. After the two poses are set, the list of poses can be seen in the main interface, as shown in Fig. 7.24.

6. Configure end-effectors

The manipulator will be fitted with end grippers in the application scenario, which can be added in this step.

7. Configure passive joints

Some joints on the manipulator that may not be used in the planning and control process can be declared in advance as passive. A typical manipulator does not have such joints, and this step does not need to be configured.

8. Set author information

Fig. 7.24 List of defined completed robot poses

Fig. 7.25 Set author information

As shown in Fig. 7.25, set the author information and complete it as appropriate.

9. Generate configuration files

This is the final step, in which all the files in the configuration package will be automatically generated as previously configured.

Fig. 7.26 Generate the configuration package

Click the “Browse” button and select a path to store the configuration package. The Setup Assistant will package all the configuration files into one ROS package, usually named “RobotName_moveit_config.” Here it is named “ur5_moveit_config.” Click “Generate Package.” If the configuration file is successfully generated and saved, the message “Configuration package generated successfully!” will appear, as shown in Fig. 7.26. Click on “Exit Setup Assistant” to complete the configuration of the manipulator.

7.2.3 MoveIt Visualization Control

After the configuration steps in the previous section are followed, a package called ur5_moveit_config will be generated. This package contains most of the configuration and launch files needed for MoveIt to start and also contains a simple demo that can be used to test the success of the configuration. It can be run using the following command:

$ roslaunch ur5_moveit_config demo.launch

After a successful launch, the interface appears as shown in Fig. 7.27.

Fig. 7.27 Launch interface of the MoveIt demo

Based on rviz, this interface adds the MoveIt plug-in. Through the plug-in window in the lower left corner, the relevant functions of MoveIt can be configured and the manipulator controlled to complete motion planning. For example, with the help of the MoveIt plug-in, the manipulator can be controlled to complete drag planning, random target planning, initial position update, collision detection, and other functions.

7.2.3.1 Drag Planning

Drag the front end of the manipulator to change its pose. In the planning tab, click “Plan and Execute,” and MoveIt starts to plan the path and control the manipulator to the target position. The entire manipulator motion process can be seen from the right interface, as shown in Fig. 7.28.

7.2.3.2 Random Planning

There is a “Select Goal State” function in the Query toolbar, and “random valid” can be selected in the drop-down option. When “Update” is clicked, MoveIt will generate a random goal state within the working range of the manipulator. Then, if “Plan and Execute” is selected, the manipulator will automatically move to the random goal state, as demonstrated in Fig. 7.29.
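The same planning group can also be driven programmatically. The sketch below uses the moveit_commander Python interface, assuming that demo.launch above is running and that the "manipulator" group and the named poses created in the Setup Assistant exist; it mirrors the named-pose and random-goal planning just described.

import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("ur5_moveit_demo", anonymous=True)
group = moveit_commander.MoveGroupCommander("manipulator")

group.set_named_target("home")      # pose defined in the Setup Assistant
group.go(wait=True)

group.set_random_target()           # like "random valid" in the rviz plug-in
group.go(wait=True)
group.stop()

moveit_commander.roscpp_shutdown()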

Fig. 7.28 Drag planning motion result

Fig. 7.29 Random planning motion result

7.2.4 Manipulator Kinematics

The manipulator kinematics are divided into two parts: forward kinematics and inverse kinematics. Given the structural parameters and joint angle variables of the manipulator, forward kinematics determine the pose of the end-effector of the manipulator. Given the desired poses of the end-effector in the reference frame,


inverse kinematics find the corresponding joint variables of the manipulator [2, 3]. The kinematics of the manipulator are briefly described below.

7.2.4.1 D-H Modeling of the Manipulator

The UR5 manipulator is a 6-DOF manipulator, and its kinematic model is determined using the D-H parameter method, as shown in Fig. 7.30. The UR5 manipulator D-H parameters are shown in Table 7.2, where the rotating joint θi is the joint variable, and the linkage offset di is a constant.

7.2.4.2 Forward Kinematics

Establish the homogeneous transformation matrix of frame i expressed in frame i-1, and then substitute the D-H parameters into these matrices to obtain the transformation matrices of all adjacent frames:

$$
{}^{0}_{1}T=\begin{bmatrix} \cos\theta_1 & 0 & \sin\theta_1 & 0\\ \sin\theta_1 & 0 & -\cos\theta_1 & 0\\ 0 & 1 & 0 & d_1\\ 0 & 0 & 0 & 1 \end{bmatrix},\qquad
{}^{1}_{2}T=\begin{bmatrix} \cos\theta_2 & -\sin\theta_2 & 0 & a_2\cos\theta_2\\ \sin\theta_2 & \cos\theta_2 & 0 & a_2\sin\theta_2\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

Fig. 7.30 D-H model of the UR5 manipulator

Table 7.2 D-H parameters of the UR5 manipulator

Joint number | α (around x-axis, deg) | a (along x-axis, mm) | θ (around z-axis) | d (along z-axis, mm)
1            | α1 = 90                | 0                    | θ1                | d1 = 89.2
2            | 0                      | a2 = -425            | θ2                | 0
3            | 0                      | a3 = -392            | θ3                | 0
4            | α4 = 90                | 0                    | θ4                | d4 = 109.3
5            | α5 = -90               | 0                    | θ5                | d5 = 94.75
6            | 0                      | 0                    | θ6                | d6 = 82.5

$$
{}^{2}_{3}T=\begin{bmatrix} \cos\theta_3 & -\sin\theta_3 & 0 & a_3\cos\theta_3\\ \sin\theta_3 & \cos\theta_3 & 0 & a_3\sin\theta_3\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix},\qquad
{}^{3}_{4}T=\begin{bmatrix} \cos\theta_4 & 0 & \sin\theta_4 & 0\\ \sin\theta_4 & 0 & -\cos\theta_4 & 0\\ 0 & 1 & 0 & d_4\\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

$$
{}^{4}_{5}T=\begin{bmatrix} \cos\theta_5 & 0 & -\sin\theta_5 & 0\\ \sin\theta_5 & 0 & \cos\theta_5 & 0\\ 0 & -1 & 0 & d_5\\ 0 & 0 & 0 & 1 \end{bmatrix},\qquad
{}^{5}_{6}T=\begin{bmatrix} \cos\theta_6 & -\sin\theta_6 & 0 & 0\\ \sin\theta_6 & \cos\theta_6 & 0 & 0\\ 0 & 0 & 1 & d_6\\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

Then, the transformation matrix of the end frame with respect to the fixed base frame is

$$T = {}^{0}_{1}T\,{}^{1}_{2}T\,{}^{2}_{3}T\,{}^{3}_{4}T\,{}^{4}_{5}T\,{}^{5}_{6}T$$

The transformation matrix T of the end frame in the base frame is obtained by substituting the six joint angles θi of the manipulator:

$$
T=\begin{bmatrix} n_x & o_x & a_x & p_x\\ n_y & o_y & a_y & p_y\\ n_z & o_z & a_z & p_z\\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

The upper left 3 × 3 matrix of the transformation matrix T is the rotation matrix, and the upper right 3 × 1 matrix is the spatial position [x, y, z]. The kinematic forward solution is completed by obtaining the end pose of the manipulator from the transformation matrix T.
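As a numerical cross-check of the forward solution, the product of the six transforms above can be evaluated directly. The following short script is only a sketch: it assumes NumPy is available and treats the link parameters of Table 7.2 as millimeters converted to meters; the test joint angles are arbitrary.

import numpy as np

# D-H constants from Table 7.2, converted to meters (assumption about units)
d1, d4, d5, d6 = 0.0892, 0.1093, 0.09475, 0.0825
a2, a3 = -0.425, -0.392

def forward_kinematics(q):
    """Multiply the six adjacent-frame transforms for joint angles q[0..5] (rad)."""
    c, s = np.cos, np.sin
    transforms = [
        np.array([[c(q[0]), 0,  s(q[0]), 0], [s(q[0]), 0, -c(q[0]), 0], [0, 1, 0, d1], [0, 0, 0, 1]]),
        np.array([[c(q[1]), -s(q[1]), 0, a2 * c(q[1])], [s(q[1]), c(q[1]), 0, a2 * s(q[1])], [0, 0, 1, 0], [0, 0, 0, 1]]),
        np.array([[c(q[2]), -s(q[2]), 0, a3 * c(q[2])], [s(q[2]), c(q[2]), 0, a3 * s(q[2])], [0, 0, 1, 0], [0, 0, 0, 1]]),
        np.array([[c(q[3]), 0,  s(q[3]), 0], [s(q[3]), 0, -c(q[3]), 0], [0, 1, 0, d4], [0, 0, 0, 1]]),
        np.array([[c(q[4]), 0, -s(q[4]), 0], [s(q[4]), 0,  c(q[4]), 0], [0, -1, 0, d5], [0, 0, 0, 1]]),
        np.array([[c(q[5]), -s(q[5]), 0, 0], [s(q[5]), c(q[5]), 0, 0], [0, 0, 1, d6], [0, 0, 0, 1]]),
    ]
    T = np.eye(4)
    for Ti in transforms:
        T = np.dot(T, Ti)
    return T   # 4x4 matrix [n o a p; 0 0 0 1]

if __name__ == "__main__":
    # Arbitrary test angles in radians
    print(forward_kinematics([0.1, -0.5, 0.8, -0.3, 1.2, 0.0]))

The upper-left 3x3 block of the printed matrix is the rotation, and the last column gives the end position, matching the structure of T above.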

7.2.4.3 Inverse Kinematics

The inverse kinematics is more complex compared to the forward kinematics and solves for the rotation angle of each joint of the manipulator based on the spatial pose of the manipulator’s end-effector. The methods of obtaining the inverse solution include analytical, numerical iterative, and geometric approaches [4]. Among these methods, the analytical approach uses mathematical derivation to find all the roots; however, it is computationally complex. The main derivation process is as follows: First, calculate some intermediate matrices in the process of finding the transformation matrix T:

$$
{}^{1}_{3}T = {}^{1}_{2}T\,{}^{2}_{3}T =
\begin{bmatrix}
c_2c_3 - s_2s_3 & -c_2s_3 - s_2c_3 & 0 & a_3(c_2c_3 - s_2s_3) + a_2c_2\\
s_2c_3 + c_2s_3 & -s_2s_3 + c_2c_3 & 0 & a_3(s_2c_3 + c_2s_3) + a_2s_2\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}
c_{23} & -s_{23} & 0 & a_3c_{23} + a_2c_2\\
s_{23} & c_{23} & 0 & a_3s_{23} + a_2s_2\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}
$$

where ci = cos θi, si = sin θi, c23 = cos(θ2 + θ3), and s23 = sin(θ2 + θ3).

$$
{}^{1}_{4}T = {}^{1}_{3}T\,{}^{3}_{4}T =
\begin{bmatrix}
c_{23}c_4 - s_{23}s_4 & 0 & c_{23}s_4 + s_{23}c_4 & a_3c_{23} + a_2c_2\\
s_{23}c_4 + c_{23}s_4 & 0 & s_{23}s_4 - c_{23}c_4 & a_3s_{23} + a_2s_2\\
0 & 1 & 0 & d_4\\
0 & 0 & 0 & 1
\end{bmatrix}
$$

$$
{}^{1}_{5}T = {}^{1}_{4}T\,{}^{4}_{5}T =
\begin{bmatrix}
(c_{23}c_4 - s_{23}s_4)c_5 & -c_{23}s_4 - s_{23}c_4 & (-c_{23}c_4 + s_{23}s_4)s_5 & d_5(c_{23}s_4 + s_{23}c_4) + a_3c_{23} + a_2c_2\\
(s_{23}c_4 + c_{23}s_4)c_5 & -s_{23}s_4 + c_{23}c_4 & (-s_{23}c_4 - c_{23}s_4)s_5 & d_5(s_{23}s_4 - c_{23}c_4) + a_3s_{23} + a_2s_2\\
s_5 & 0 & c_5 & d_4\\
0 & 0 & 0 & 1
\end{bmatrix}
$$

Then, from $T = {}^{0}_{1}T\,{}^{1}_{2}T\,{}^{2}_{3}T\,{}^{3}_{4}T\,{}^{4}_{5}T\,{}^{5}_{6}T$, the following is obtained:

$$ {}^{0}_{1}T^{-1}\,T\,{}^{5}_{6}T^{-1} = {}^{1}_{2}T\,{}^{2}_{3}T\,{}^{3}_{4}T\,{}^{4}_{5}T = {}^{1}_{5}T $$

Based on

$$
{}^{0}_{1}T^{-1}=\begin{bmatrix} c_1 & s_1 & 0 & 0\\ 0 & 0 & 1 & -d_1\\ s_1 & -c_1 & 0 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix},\qquad
{}^{5}_{6}T^{-1}=\begin{bmatrix} c_6 & s_6 & 0 & 0\\ -s_6 & c_6 & 0 & 0\\ 0 & 0 & 1 & -d_6\\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

the left-hand side can be written as

$$
{}^{0}_{1}T^{-1}\,T\,{}^{5}_{6}T^{-1} =
\begin{bmatrix}
(c_1n_x + s_1n_y)c_6 - (c_1o_x + s_1o_y)s_6 & (c_1n_x + s_1n_y)s_6 + (c_1o_x + s_1o_y)c_6 & c_1a_x + s_1a_y & -d_6(c_1a_x + s_1a_y) + c_1p_x + s_1p_y\\
n_zc_6 - o_zs_6 & n_zs_6 + o_zc_6 & a_z & -d_6a_z + p_z - d_1\\
(s_1n_x - c_1n_y)c_6 - (s_1o_x - c_1o_y)s_6 & (s_1n_x - c_1n_y)s_6 + (s_1o_x - c_1o_y)c_6 & s_1a_x - c_1a_y & -d_6(s_1a_x - c_1a_y) + s_1p_x - c_1p_y\\
0 & 0 & 0 & 1
\end{bmatrix}
$$

By equating the corresponding elements of the matrices on both sides of the equation, the following solutions can be found:

$$ \theta_1 = \pm\arccos\frac{d_4}{\sqrt{(d_6a_y - p_y)^2 + (p_x - d_6a_x)^2}} + \arctan\frac{p_x - d_6a_x}{d_6a_y - p_y} $$

$$ \theta_5 = \pm\arccos\left(s_1a_x - c_1a_y\right) $$

$$ \theta_6 = \arctan\left(-\frac{s_1o_x - c_1o_y}{s_1n_x - c_1n_y}\right) $$

Then, consider the relation

$$
{}^{0}_{1}T^{-1}\,T\,{}^{5}_{6}T^{-1}\,{}^{4}_{5}T^{-1} = {}^{1}_{2}T\,{}^{2}_{3}T\,{}^{3}_{4}T = {}^{1}_{4}T =
\begin{bmatrix}
c_{23}c_4 - s_{23}s_4 & 0 & c_{23}s_4 + s_{23}c_4 & a_3c_{23} + a_2c_2\\
s_{23}c_4 + c_{23}s_4 & 0 & s_{23}s_4 - c_{23}c_4 & a_3s_{23} + a_2s_2\\
0 & 1 & 0 & d_4\\
0 & 0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}
c_{234} & 0 & s_{234} & a_3c_{23} + a_2c_2\\
s_{234} & 0 & -c_{234} & a_3s_{23} + a_2s_2\\
0 & 1 & 0 & d_4\\
0 & 0 & 0 & 1
\end{bmatrix}
$$

where c234 = cos(θ2 + θ3 + θ4) and s234 = sin(θ2 + θ3 + θ4). The inverse transform needed to evaluate the left-hand side is

$$
{}^{4}_{5}T^{-1}=\begin{bmatrix} c_5 & s_5 & 0 & 0\\ 0 & 0 & -1 & d_5\\ -s_5 & c_5 & 0 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

Furthermore, the left-hand side of the equation is equal to


$$
\begin{bmatrix}
\bigl[(c_1n_x + s_1n_y)c_6 - (c_1o_x + s_1o_y)s_6\bigr]c_5 - (c_1a_x + s_1a_y)s_5 & \bigl[(c_1n_x + s_1n_y)c_6 - (c_1o_x + s_1o_y)s_6\bigr]s_5 + (c_1a_x + s_1a_y)c_5 & -(c_1n_x + s_1n_y)s_6 - (c_1o_x + s_1o_y)c_6 & d_5\bigl[(c_1n_x + s_1n_y)s_6 + (c_1o_x + s_1o_y)c_6\bigr] - d_6(c_1a_x + s_1a_y) + c_1p_x + s_1p_y\\
(n_zc_6 - o_zs_6)c_5 - a_zs_5 & (n_zc_6 - o_zs_6)s_5 + a_zc_5 & -n_zs_6 - o_zc_6 & d_5(n_zs_6 + o_zc_6) - d_6a_z + p_z - d_1\\
\bigl[(s_1n_x - c_1n_y)c_6 - (s_1o_x - c_1o_y)s_6\bigr]c_5 - (s_1a_x - c_1a_y)s_5 & \bigl[(s_1n_x - c_1n_y)c_6 - (s_1o_x - c_1o_y)s_6\bigr]s_5 + (s_1a_x - c_1a_y)c_5 & -(s_1n_x - c_1n_y)s_6 - (s_1o_x - c_1o_y)c_6 & d_5\bigl[(s_1n_x - c_1n_y)s_6 + (s_1o_x - c_1o_y)c_6\bigr] - d_6(s_1a_x - c_1a_y) + s_1p_x - c_1p_y\\
0 & 0 & 0 & 1
\end{bmatrix}
$$

Denoting the (1,4) and (2,4) elements of this matrix by m and n, respectively, and equating them to the corresponding elements $a_3c_{23} + a_2c_2$ and $a_3s_{23} + a_2s_2$ of ${}^{1}_{4}T$, the equation can be solved for

$$ \theta_3 = \pm\arccos\frac{m^2 + n^2 - a_2^2 - a_3^2}{2a_2a_3},\qquad
\theta_2 = \arctan\frac{n(a_2 + a_3c_3) - ma_3s_3}{m(a_2 + a_3c_3) + na_3s_3} $$

Finally, the following is obtained:

$$ \theta_2 + \theta_3 + \theta_4 = \arctan\frac{(n_zc_6 - o_zs_6)c_5 - a_zs_5}{\bigl[(c_1n_x + s_1n_y)c_6 - (c_1o_x + s_1o_y)s_6\bigr]c_5 - (c_1a_x + s_1a_y)s_5} $$

which gives θ4. Combining the cases with two solutions, the inverse kinematics of the UR5 manipulator has a total of 2 × 2 × 2 = 8 sets of solutions.

The above is a typical kinematic derivation process for a 6-DOF serial manipulator. Developers do not need to implement these algorithms themselves: a variety of kinematics packages are provided in ROS, and the kinematics can be solved after the MoveIt configuration is completed.

Task 1 Kinematic Simulation of the Manipulator

Next, configure a complete manipulator motion simulation system according to the following steps:

1. Refine the robot model

The robot URDF model describes the appearance characteristics and physical properties of the robot and thus meets the basic conditions for simulation in Gazebo [5]. However, the model still cannot move in the Gazebo simulation environment because it lacks the Gazebo-related properties. First, make sure that the <inertial> element of each link has been set appropriately, and then set the <gazebo> tag for each necessary <link>, <joint>, and <robot> element.


Fig. 7.31 Description of controller interface connection

The <gazebo> tag is an extended attribute used to describe the Gazebo simulation in the URDF model.

• Add transmission tags

The motion of a real manipulator requires motors, gearboxes, and other transmission devices to drive it, and so does the simulated manipulator. Add a transmission tag of the following form to the model; the element layout follows the standard ros_control transmission format, the name attributes shown are illustrative, and one such block is needed per joint:

<transmission name="trans_joint_1">
  <type>transmission_interface/SimpleTransmission</type>
  <joint name="joint_1">
    <hardwareInterface>hardware_interface/PositionJointInterface</hardwareInterface>
  </joint>
  <actuator name="motor_joint_1">
    <hardwareInterface>hardware_interface/PositionJointInterface</hardwareInterface>
    <mechanicalReduction>1</mechanicalReduction>
  </actuator>
</transmission>

In the above code, <joint> defines the joint to which the drive is bound, the <type> tag declares the type of transmission used, and <hardwareInterface> defines the type of hardware interface. In this case, the position control interface is used.


• Add the Gazebo controller plugin

In addition to the transmission devices, a simulation controller plugin needs to be added to help the model bind to ROS messages and complete the simulated sensor output and motor control. In this way, the robot model behaves more realistically. Add a plugin declaration of the following form to the model file; the surrounding element names follow the standard gazebo_ros_control plugin usage:

<gazebo>
  <plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so">
    <robotNamespace>/ur5</robotNamespace>
    <robotSimType>gazebo_ros_control/DefaultRobotHWSim</robotSimType>
    <legacyModeNS>true</legacyModeNS>
  </plugin>
</gazebo>

The ros_control framework is the controller package in ROS. The above is only a declaration to use the plugin; many parameters still need to be set.

2. Configure the controllers

To build a complete manipulator control system, the following three controllers need to be configured: the Follow Joint Trajectory Controller, the Joint Trajectory Controller, and the Joint State Controller, as shown in Fig. 7.31.

• Follow Joint Trajectory Controller

The output interface of MoveIt after motion planning is an action called "FollowJointTrajectory," which contains the series of planned path points. This information is published through the MoveIt controller plugin "Follow Joint Trajectory Controller." Start by creating the controllers.yaml file in the ur5_moveit_config package:

controller_manager_ns: controller_manager
controller_list:
  - name: ur5/arm_joint_controller
    action_ns: follow_joint_trajectory
    type: FollowJointTrajectory
    default: true
    joints:
      - joint_1
      - joint_2
      - joint_3
      - joint_4
      - joint_5
      - joint_6


The configuration file is then loaded in the launch file ur5_description_moveit_controller_manager.launch; a minimal sketch of such a launch file is shown below.
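Assuming controllers.yaml is stored in the config folder of ur5_moveit_config, and assuming the simple MoveIt controller manager is used (both are assumptions rather than reproductions of the original file), the launch file could look like this:

<launch>
  <!-- Tell MoveIt which controller manager plugin to use (assumed default) -->
  <param name="moveit_controller_manager"
         value="moveit_simple_controller_manager/MoveItSimpleControllerManager"/>
  <!-- Load the controller list created above (assumed path) -->
  <rosparam file="$(find ur5_moveit_config)/config/controllers.yaml"/>
</launch>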





• Joint Trajectory Controller

Once Gazebo has received the above trajectory data, the path points need to be converted into angle values for each joint, which is done by the "Joint Trajectory Controller" plugin. The Joint Trajectory Controller is used to control a set of joints in joint space. The received path-point information is interpolated with a spline function to compute the periodic position command for each joint of the robot. The following types of spline are commonly used: linear splines (which guarantee continuity of position only; velocity and acceleration are not continuous), cubic splines (which guarantee continuity of position and velocity, but not of acceleration), and quintic splines (which guarantee continuity of position, velocity, and acceleration).

This controller is used in a similar way to the other controllers, again requiring a configuration file. In the ur5_gazebo package, create the configuration file ur5_trajectory_control.yaml with the following content:

arm_joint_controller:
  type: "position_controllers/JointTrajectoryController"
  joints:
    - joint_1
    - joint_2
    - joint_3
    - joint_4
    - joint_5
    - joint_6
  gains:
    joint_1: {p: 1000.0, i: 0.0, d: 0.1, i_clamp: 0.0}
    joint_2: {p: 1000.0, i: 0.0, d: 0.1, i_clamp: 0.0}
    joint_3: {p: 1000.0, i: 0.0, d: 0.1, i_clamp: 0.0}
    joint_4: {p: 1000.0, i: 0.0, d: 0.1, i_clamp: 0.0}
    joint_5: {p: 1000.0, i: 0.0, d: 0.1, i_clamp: 0.0}
    joint_6: {p: 1000.0, i: 0.0, d: 0.1, i_clamp: 0.0}

Next, the Joint Trajectory Controller is also loaded via a launch file; create the ur5_trajectory_controller.launch file, a sketch of which follows.
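A minimal sketch of this launch file, assuming the YAML file above sits in the config folder of ur5_gazebo and that the controller runs under the /ur5 namespace declared in the Gazebo plugin (both assumptions), loads the parameters and spawns the controller:

<launch>
  <!-- Load the joint trajectory controller parameters (assumed path) -->
  <rosparam file="$(find ur5_gazebo)/config/ur5_trajectory_control.yaml" command="load"/>
  <!-- Spawn the controller inside the /ur5 namespace -->
  <node name="arm_controller_spawner" pkg="controller_manager" type="spawner"
        respawn="false" output="screen" ns="/ur5" args="arm_joint_controller"/>
</launch>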



• Joint State Controller

The main role of the joint state controller is to feed the robot's joint states back to MoveIt, so that the system knows the current state of the manipulator and can determine whether the manipulator is in motion. In addition, the manipulator model in rviz is updated dynamically based on this feedback data. Configuring the joint state controller requires the following content in the ur5_gazebo_joint_states.yaml file:

ur5:
  # Publish all joint states
  joint_state_controller:
    type: joint_state_controller/JointStateController
    publish_rate: 50

Then, create ur5_gazebo_states.launch to implement the parameter loading; a sketch of its typical content follows.
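A minimal sketch, assuming the YAML above sits in the config folder of ur5_gazebo (an assumed path), loads the parameters, spawns the joint state controller in the /ur5 namespace, and republishes the joint states as TF via robot_state_publisher:

<launch>
  <!-- Load the joint state controller parameters (assumed path) -->
  <rosparam file="$(find ur5_gazebo)/config/ur5_gazebo_joint_states.yaml" command="load"/>
  <!-- Spawn the joint state controller -->
  <node name="joint_state_controller_spawner" pkg="controller_manager" type="spawner"
        respawn="false" output="screen" ns="/ur5" args="joint_state_controller"/>
  <!-- Publish TF from the joint states -->
  <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher"
        respawn="false" output="screen">
    <remap from="joint_states" to="/ur5/joint_states"/>
  </node>
</launch>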







3. Run the manipulator motion simulation environment

Create a top-level launch file called ur5_bringup_moveit.launch that launches Gazebo, loads all of the controllers, and finally launches MoveIt; a sketch of such a file follows.
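A minimal sketch of this top-level file simply includes the launch files created above; the Gazebo-world and MoveIt-execution launch file names below are assumptions, since only ur5_bringup_moveit.launch is named in the text:

<launch>
  <!-- Start Gazebo and spawn the UR5 model (assumed launch file name) -->
  <include file="$(find ur5_gazebo)/launch/ur5_gazebo_world.launch"/>
  <!-- Load and start the controllers configured above -->
  <include file="$(find ur5_gazebo)/launch/ur5_gazebo_states.launch"/>
  <include file="$(find ur5_gazebo)/launch/ur5_trajectory_controller.launch"/>
  <!-- Start MoveIt (assumed launch file name) -->
  <include file="$(find ur5_moveit_config)/launch/ur5_moveit_planning_execution.launch"/>
</launch>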







Run MoveIt and Gazebo through the ur5_bringup_moveit.launch file, and the Gazebo simulation environment and rviz Moveit interface will open.


Fig. 7.33 Result of motion planning in joint space

$ roslaunch ur5_gazebo ur5_bringup_moveit.launch

The robot in Gazebo can be controlled using the motion planning methods in MoveIt discussed earlier. For example, select a random location, as illustrated in Fig. 7.32. Click "Plan and Execute," and the robot in Gazebo starts moving. Meanwhile, the robot model in rviz synchronously shows its status; the two remain consistent. MoveIt and Gazebo are now connected. The simulated robot in Gazebo is very similar to a real robot. The rviz visualization plugin can be used to control the simulated robot, and the robot can also be controlled through the MoveIt programming interface, which is covered in the following section.

7.3 MoveIt Programming: Manipulator Motion Planning

7.3.1 Motion Planning in Joint Space

In joint space motion, the robot joint angles are the controlled quantities. Although each joint takes the same amount of time to reach its desired position, the joints move independently of each other. The robot state is described by the position of each joint axis; after the target pose is specified, it is reached through the motion control of each joint axis.


Use the following commands to test the motion of the UR5 manipulator in joint space:

$ roslaunch ur5_moveit_config demo.launch
$ rosrun ur5_planning moveit_fk_demo.py

The result of manipulator motion planning in joint space is shown in Fig. 7.33. It can be seen that the manipulator moves to the specified pose. The ur5_planning/scripts/moveit_fk_demo.py source code for implementing this routine in MoveIt is as follows:

import rospy, sys
import moveit_commander

class MoveItFkDemo:
    def __init__(self):
        # API for initializing the move_group
        moveit_commander.roscpp_initialize(sys.argv)
        # Initialize the ROS node
        rospy.init_node('moveit_fk_demo', anonymous=True)
        # Initialization requires the use of the arm group in the manipulator controlled by the move group
        arm = moveit_commander.MoveGroupCommander('manipulator')
        # Set the allowable error value of the manipulator motion
        arm.set_goal_joint_tolerance(0.001)
        # Set the allowable maximum speed and acceleration
        arm.set_max_acceleration_scaling_factor(0.5)
        arm.set_max_velocity_scaling_factor(0.5)
        # Control the manipulator to return to the initialized position first
        arm.set_named_target('zero')
        arm.go()
        rospy.sleep(1)
        # Set the target position of the manipulator, described using position data for the six axes (in radians)
        joint_positions = [-2.3524302503112633, 0.23318268951514246, 2.1094235050263386,
                           -1.4769358554257048, -1.405569430264964, -0.9657781800186755]
        arm.set_joint_value_target(joint_positions)
        # Control the manipulator to complete the motion
        arm.go()
        rospy.sleep(1)
        # Control the manipulator to return to the initialized position
        arm.set_named_target('zero')
        arm.go()
        rospy.sleep(1)
        # Close and exit MoveIt
        moveit_commander.roscpp_shutdown()
        moveit_commander.os._exit(0)

if __name__ == "__main__":
    try:
        MoveItFkDemo()
    except rospy.ROSInterruptException:
        pass

The implementation flow of the code is analyzed below. Before MoveIt's API can be used, its Python interface module needs to be imported:

import rospy, sys
import moveit_commander

MoveIt's Python API is a Python wrapper on top of the underlying roscpp interface, so the roscpp layer must be initialized before the API is used:

moveit_commander.roscpp_initialize(sys.argv)

An important class, MoveGroupCommander, is provided in moveit_commander to create control objects for planning groups:

arm = moveit_commander.MoveGroupCommander('manipulator')

Set the allowable error of the motion control:

arm.set_goal_joint_tolerance(0.001)

Because the motion is performed in joint space, the allowable error of the joint motion needs to be set. In this case, each tolerance is set to 0.001 rad, which means that if every axis of the robot moves to within 0.001 rad of its target position, the robot is considered to have reached the target. To keep the manipulator's motion consistent, first return the manipulator to its initial position, which is the "zero" pose set in the Setup Assistant: use the set_named_target() interface to set the pose to "zero," and then use the go() command to move the robot to that pose.


arm.set_named_target('zero')
arm.go()
rospy.sleep(1)

Be careful to hold a delay for a period of time to ensure that the manipulator has completed its motion. Once the manipulator has returned to its initial position, the target pose for the motion can be set:

joint_positions = [-2.3524302503112633, 0.23318268951514246, 2.1094235050263386,
                   -1.4769358554257048, -1.405569430264964, -0.9657781800186755]
arm.set_joint_value_target(joint_positions)
arm.go()
rospy.sleep(1)

The interface used to set the target pose in joint space is set_joint_value_target(), and the parameter is the radian of each joint in the target pose. The manipulator is controlled in the same way as the gripper, by first setting the motion target, and then using go() to control the robot to complete the motion. If the specified target pose cannot be reached by the manipulator, an error message will be displayed in the terminal. This completes the motion of the manipulator in joint space. The interface is then closed and this routine is exited.

Fig. 7.34 Result of motion planning in workspace

7.3

MoveIt Programming: Manipulator Motion Planning

343

Fig. 7.35 Output log of the motion planning process

moveit_commander.roscpp_shutdown()
moveit_commander.os._exit(0)

Several key steps involved in MoveIt joint space motion planning include creating the control object for the planning group, setting the target pose for joint space motion, and controlling the manipulator to complete the motion. The corresponding APIs are as follows:

arm = moveit_commander.MoveGroupCommander('manipulator')
arm.set_joint_value_target(joint_positions)
arm.go()

7.3.2 Motion Planning in the Workspace

In a robot application system, the pose of the end of the manipulator is very important. The counterpart of joint space planning is workspace planning, in which the target pose of the manipulator is no longer given by the position of each joint axis but by the 3D coordinate pose of the end of the manipulator. In workspace motion planning, the position of each joint axis is solved by inverse kinematics, and motion planning is then performed to control the manipulator motion. MoveIt supports target pose settings in the workspace, and the motion planning routine in the workspace is run using the following commands:

$ roslaunch ur5_moveit_config demo.launch
$ rosrun ur5_planning moveit_ik_demo.py


After a successful run, the manipulator moves to the specified position, as shown in Fig. 7.34. Meanwhile, the log output of the motion planning can be seen in the terminal, which includes the time taken by the KDL kinematics solver to complete the inverse kinematics solution, as shown in Fig. 7.35. The ur5_planning/scripts/moveit_ik_demo.py source code for implementing this routine is as follows:

import rospy, sys
import moveit_commander
from geometry_msgs.msg import PoseStamped, Pose

class MoveItIkDemo:
    def __init__(self):
        # API for initializing the move_group
        moveit_commander.roscpp_initialize(sys.argv)
        # Initialize the ROS node
        rospy.init_node('moveit_ik_demo')
        # Initialization requires the use of the arm group in the manipulator controlled by the move group
        arm = moveit_commander.MoveGroupCommander('manipulator')
        # Get the name of the end link
        end_effector_link = arm.get_end_effector_link()
        # Set the reference frame to be used for the target position
        reference_frame = 'base_link'
        arm.set_pose_reference_frame(reference_frame)
        # Allow replanning when motion planning fails
        arm.allow_replanning(True)
        # Set the allowable error in position (in meters) and orientation (in radians)
        arm.set_goal_position_tolerance(0.001)
        arm.set_goal_orientation_tolerance(0.001)
        # Set the allowable maximum speed and acceleration
        arm.set_max_acceleration_scaling_factor(0.5)
        arm.set_max_velocity_scaling_factor(0.5)
        # Control the manipulator to return to the initialized position first
        arm.set_named_target('zero')
        arm.go()
        rospy.sleep(1)
        # Set the target pose in the manipulator workspace; the position is described using x, y, z coordinates
        # Orientation is described using a quaternion, based on the base_link frame
        target_pose = PoseStamped()
        target_pose.header.frame_id = reference_frame
        target_pose.header.stamp = rospy.Time.now()
        target_pose.pose.position.x = 0.454178
        target_pose.pose.position.y = 0.284321
        target_pose.pose.position.z = 0.651409
        target_pose.pose.orientation.x = 0.216404
        target_pose.pose.orientation.y = 0.399163
        target_pose.pose.orientation.z = 0.757239
        target_pose.pose.orientation.w = -0.469497
        # Set the current state of the manipulator as the initial state of motion
        arm.set_start_state_to_current_state()
        # Set the target pose for the end-effector of the manipulator
        arm.set_pose_target(target_pose, end_effector_link)
        # Plan the motion path
        traj = arm.plan()
        # Control the motion of the manipulator according to the planned path
        arm.execute(traj)
        rospy.sleep(1)
        # Control the manipulator back to the initialized position
        arm.set_named_target('zero')
        arm.go()
        # Close and exit MoveIt
        moveit_commander.roscpp_shutdown()
        moveit_commander.os._exit(0)

if __name__ == "__main__":
    MoveItIkDemo()

Many of the code segments in this routine are identical to the code for motion planning in joint space and will not be repeated here. The following analysis focuses on the code related to motion planning in the workspace. Use the get_end_effector_link() interface to obtain the name of the end-effector link in the model file, which is required for subsequent planning:

end_effector_link = arm.get_end_effector_link()

The workspace pose needs to be described using Cartesian coordinate values, so the frame (i.e., coordinate system) in which the pose is expressed must be declared. set_pose_reference_frame() sets the frame of the target pose, which here is the robot's base frame, base_link:

reference_frame = 'base_link'
arm.set_pose_reference_frame(reference_frame)


When the manipulator inverse kinematics is solved, there may be no solution or multiple solutions, some of which may not allow motion planning. In this case, allow_replanning() can be used to set whether replanning is allowed after a planning failure:

arm.allow_replanning(True)

If it is set to true, MoveIt will attempt to solve up to five times; otherwise, it will only solve once. Use the PoseStamped message type in ROS to describe the target pose of the robot. First, set the reference frame in which the pose is located and the timestamp. Next, set the xyz coordinates and quaternion orientation values of the target pose:

target_pose = PoseStamped()
target_pose.header.frame_id = reference_frame
target_pose.header.stamp = rospy.Time.now()
target_pose.pose.position.x = 0.454178
target_pose.pose.position.y = 0.284321
target_pose.pose.position.z = 0.651409
target_pose.pose.orientation.x = 0.216404
target_pose.pose.orientation.y = 0.399163
target_pose.pose.orientation.z = 0.757239
target_pose.pose.orientation.w = -0.469497

Before the target pose is entered, set the initial state of the motion planning:

arm.set_start_state_to_current_state()
arm.set_pose_target(target_pose, end_effector_link)

In the general case, the set_start_state_to_current_state() interface is used to set the current state as the initial state. Then, use set_pose_target() to set the target pose, together with the link described by that target pose, which is the robot end-effector link name obtained earlier. The first step in motion planning is to plan the path by calling the plan() function. If the path planning is successful, a planned motion trajectory is returned; the execute() function is then called to control the robot along the trajectory. If the planning fails, the planning is repeated or a planning-failure log message appears in the terminal, depending on the settings.

traj = arm.plan()
arm.execute(traj)
rospy.sleep(1)


Fig. 7.36 Result of motion planning in Cartesian space

In summary, the key steps of MoveIt workspace motion planning include creating the control object of the planning group, obtaining the manipulator end-effector link name, setting the reference frame corresponding to the target pose, setting the start and target poses, performing workspace motion planning, and controlling the manipulator to complete the motion. The corresponding APIs are as follows:

arm = moveit_commander.MoveGroupCommander('manipulator')
end_effector_link = arm.get_end_effector_link()
reference_frame = 'base_link'
arm.set_pose_reference_frame(reference_frame)
arm.set_start_state_to_current_state()
arm.set_pose_target(target_pose, end_effector_link)
traj = arm.plan()
arm.execute(traj)

7.3.3 Motion Planning in Cartesian Space

In workspace motion planning, there is no constraint on the trajectory of the manipulator's end-effector. After the target pose of the end-effector is given, the inverse kinematics solution is used to obtain the angle of each joint axis in joint space, and joint space motion planning is then carried out to complete the motion control of the manipulator. However, in many application scenarios, not only the start and end poses of the manipulator but also the trajectory and pose requirements during the motion need to be considered [6]. For example, the end-effector of the manipulator may be required to move along a straight line or a circular arc trajectory. MoveIt also provides an interface for Cartesian motion planning. Use the following commands to run the motion planning routine in Cartesian space:

$ roslaunch ur5_moveit_config demo.launch
$ rosrun ur5_planning moveit_cartesian_demo.py

After a successful run, the motion trajectory can be seen as in Fig. 7.36. Cartesian motion planning needs to guarantee the pose of each intermediate target point during the motion of the manipulator, and the manipulator end-effector sequentially completes the motion between multiple target points in straight lines. The source code for ur5_planning/scripts/moveit_cartesian_demo.py, which implements the routine, is as follows:

import rospy, sys
import moveit_commander
from moveit_commander import MoveGroupCommander
from geometry_msgs.msg import Pose
from copy import deepcopy

class MoveItCartesianDemo:
    def __init__(self):
        # API for initializing the move_group
        moveit_commander.roscpp_initialize(sys.argv)
        # Initialize the ROS node
        rospy.init_node('moveit_cartesian_demo', anonymous=True)
        # Initialization requires the use of the arm group in the manipulator controlled by the move group
        arm = MoveGroupCommander('manipulator')
        # Allow replanning when motion planning fails
        arm.allow_replanning(True)
        # Set the reference frame to be used for the target position
        arm.set_pose_reference_frame('base_link')
        # Set the allowable error in position (in meters) and orientation (in radians)
        arm.set_goal_position_tolerance(0.001)
        arm.set_goal_orientation_tolerance(0.001)
        # Set the allowable maximum speed and acceleration
        arm.set_max_acceleration_scaling_factor(0.5)
        arm.set_max_velocity_scaling_factor(0.5)
        # Get the name of the end-effector link
        end_effector_link = arm.get_end_effector_link()
        # Control the manipulator to return to the initialized position first
        arm.set_named_target('zero')
        arm.go()
        rospy.sleep(1)
        # Get the start pose of the manipulator motion from the current pose data
        start_pose = arm.get_current_pose(end_effector_link).pose
        print start_pose
        # Initialize the list of waypoints
        waypoints = []
        # Add the initial pose to the list of waypoints
        waypoints.append(start_pose)
        # Set waypoint data and add it to the waypoint list
        wpose = deepcopy(start_pose)
        wpose.position.z -= 0.2
        waypoints.append(deepcopy(wpose))
        wpose.position.x += 0.2
        waypoints.append(deepcopy(wpose))
        wpose.position.y += 0.2
        waypoints.append(deepcopy(wpose))
        fraction = 0.0   # path planning coverage
        maxtries = 100   # maximum number of planning attempts
        attempts = 0     # number of planning attempts made
        # Set the current state of the manipulator as the motion start state
        arm.set_start_state_to_current_state()
        # Try to plan a path in Cartesian space through all waypoints in turn
        while fraction < 1.0 and attempts < maxtries:
            (plan, fraction) = arm.compute_cartesian_path(
                                    waypoints,   # waypoint poses, list of waypoints
                                    0.01,        # eef_step, end-effector step value
                                    0.0,         # jump_threshold, jump threshold
                                    True)        # avoid_collisions, obstacle avoidance planning
            # Accumulate the number of attempts
            attempts += 1
            # Print the motion planning progress
            if attempts % 10 == 0:
                rospy.loginfo("Still trying after " + str(attempts) + " attempts...")
        # If the path planning is successful (fraction of 100%), start controlling the motion of the manipulator
        if fraction == 1.0:
            rospy.loginfo("Path computed successfully. Moving the arm.")
            arm.execute(plan)
            rospy.loginfo("Path execution complete.")
        # If path planning fails, print a failure message
        else:
            rospy.loginfo("Path planning failed with only " + str(fraction) +
                          " success after " + str(maxtries) + " attempts.")
        rospy.sleep(1)
        # Control the manipulator to return to the initialized position
        arm.set_named_target('zero')
        arm.go()
        rospy.sleep(1)
        # Close and exit MoveIt
        moveit_commander.roscpp_shutdown()
        moveit_commander.os._exit(0)

if __name__ == "__main__":
    try:
        MoveItCartesianDemo()
    except rospy.ROSInterruptException:
        pass

The following analysis focuses on the code in the Cartesian path planning section. A waypoint, also known as a path point, is a pose that the Cartesian path must pass through; a straight-line trajectory is used between two adjacent waypoints.

waypoints = []
waypoints.append(start_pose)

The waypoints that the manipulator motion needs to pass through are added to the list of waypoints, but no motion planning is started at this point. The following code snippet is the core part of the entire routine:

while fraction < 1.0 and attempts < maxtries:
    (plan, fraction) = arm.compute_cartesian_path(
                            waypoints,   # waypoint poses, list of waypoints
                            0.01,        # eef_step, end-effector step value
                            0.0,         # jump_threshold, jump threshold
                            True)        # avoid_collisions, obstacle avoidance planning
    # Accumulate the number of attempts
    attempts += 1
    # Print the motion planning progress
    if attempts % 10 == 0:
        rospy.loginfo("Still trying after " + str(attempts) + " attempts...")

This code uses the API for path planning in Cartesian space, compute_cartesian_path(), which takes four parameters: the first parameter is the list of waypoints; the second parameter is the end-effector step value; the third parameter is the jump threshold; and the fourth parameter sets whether obstacle avoidance is considered during the motion.

Fig. 7.37 Operational effect of obstacle avoidance planning

compute_cartesian_path() returns two values: "plan" and "fraction." Plan denotes the planned motion trajectory, and fraction describes the coverage of the successfully planned trajectory over the given list of waypoints, from 0 to 1. If fraction is less than 1, the given list of waypoints was not planned completely. In this case, the planning can be performed again; however, the number of planning attempts needs to be set manually. If the planning is successful and the value of fraction is 1, the robot can then be controlled to execute the successfully planned trajectory using execute().

if fraction == 1.0:
    rospy.loginfo("Path computed successfully. Moving the arm.")
    arm.execute(plan)
    rospy.loginfo("Path execution complete.")

The key to this routine is mastering the use of the compute_cartesian_path() API so as to achieve linear motion planning between a series of path points in Cartesian space. If the end-effector of the manipulator is required to move along a circular arc trajectory, the arc can be decomposed into a sequence of short straight-line segments, and the motion of the manipulator can then be controlled using compute_cartesian_path(); a sketch of this idea is given below.
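The following sketch illustrates this arc decomposition; it is not part of the original routine, and the radius, center offset, and number of segments are arbitrary illustrative values. The sampled waypoints can be passed to compute_cartesian_path() exactly as in the routine above.

import math
from copy import deepcopy

def arc_waypoints(start_pose, radius=0.1, segments=36):
    """Approximate a circle in the x-y plane, centered 'radius' meters from the
    current position along +x, by a list of straight-line waypoints."""
    center_x = start_pose.position.x + radius
    center_y = start_pose.position.y
    waypoints = []
    for i in range(segments + 1):
        angle = 2 * math.pi * i / segments
        wpose = deepcopy(start_pose)
        # Keep the orientation fixed and move the position along the circle
        wpose.position.x = center_x - radius * math.cos(angle)
        wpose.position.y = center_y + radius * math.sin(angle)
        waypoints.append(wpose)
    return waypoints

# Usage inside the routine above, after start_pose has been obtained:
# waypoints = arc_waypoints(arm.get_current_pose(end_effector_link).pose)
# (plan, fraction) = arm.compute_cartesian_path(waypoints, 0.01, 0.0, True)

The smaller the segments, the closer the executed path is to a true arc, at the cost of more interpolation work for the planner.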

7.3.4 Manipulator Collision Detection

In many application scenarios, there will be objects around the robot. These objects may lie in the robot's workspace and become obstacles during the robot's motion planning; therefore, motion planning needs to consider obstacle avoidance. MoveIt supports obstacle avoidance planning: the relevant interfaces of the planning scene plugin in move_group are used to add obstacle models and maintain information about the scene in which the robot is working. Run the following commands to start the obstacle avoidance planning routine:

$ roslaunch ur5_moveit_config demo.launch
$ rosrun ur5_planning moveit_obstacles_demo.py

After it runs successfully, the motion scene appears as shown in Fig. 7.37. First, the manipulator moves to the initial position. Then a hovering table appears, and the end-effector of the manipulator grasps a long rod. The manipulator then performs collision detection, plans the motion path, avoids the obstacles, and moves to the target position. After the motion is completed, the manipulator returns to the initial position. The source code for ur5_planning/scripts/moveit_obstacles_demo.py, which implements the routine, is as follows:

import rospy, sys
import thread, copy
import moveit_commander
from moveit_commander import RobotCommander, MoveGroupCommander, PlanningSceneInterface
from geometry_msgs.msg import PoseStamped, Pose
from moveit_msgs.msg import CollisionObject, AttachedCollisionObject, PlanningScene
from math import radians
from copy import deepcopy

class MoveAttachedObjectDemo:
    def __init__(self):
        # API for initializing move_group
        moveit_commander.roscpp_initialize(sys.argv)
        # Initialize the ROS node
        rospy.init_node('moveit_attached_object_demo')
        # Initialize the scene object
        scene = PlanningSceneInterface()
        rospy.sleep(1)
        # Initialization requires the use of the arm group in the manipulator controlled by move_group
        arm = MoveGroupCommander('manipulator')
        # Get the name of the end-effector link
        end_effector_link = arm.get_end_effector_link()
        # Set the allowable error in position (in meters) and orientation (in radians)
        arm.set_goal_position_tolerance(0.01)
        arm.set_goal_orientation_tolerance(0.05)
        # Allow replanning when motion planning fails
        arm.allow_replanning(True)
        # Control the manipulator to return to the initialized position
        arm.set_named_target('zero')
        arm.go()
        # Set the time limit for each motion planning attempt: 10 s
        arm.set_planning_time(10)
        # Remove attached objects left over from previous runs of the scene
        scene.remove_attached_object(end_effector_link, 'tool')
        scene.remove_world_object('table')
        # Set the height of the table
        table_ground = 0.6
        # Set the 3D size of the table and the tool
        table_size = [0.1, 0.6, 0.02]
        tool_size = [0.02, 0.02, 0.4]
        # Set the pose of the tool
        p = PoseStamped()
        p.header.frame_id = end_effector_link
        p.pose.position.x = 0.0
        p.pose.position.y = 0.0
        p.pose.position.z = -0.2
        p.pose.orientation.x = 1
        # Attach the tool to the end of the manipulator
        scene.attach_box(end_effector_link, 'tool', p, tool_size)
        # Add the table to the scene
        table_pose = PoseStamped()
        table_pose.header.frame_id = 'base_link'
        table_pose.pose.position.x = 0.3
        table_pose.pose.position.y = 0.0
        table_pose.pose.position.z = table_ground + table_size[2] / 2.0
        table_pose.pose.orientation.w = 1.0
        scene.add_box('table', table_pose, table_size)
        rospy.sleep(2)
        # Update the current pose
        arm.set_start_state_to_current_state()
        # Set the target position of the manipulator, described using position data for the six axes (in radians)
        joint_positions = [-0.17835936210466416, -0.5436051587483343, -2.4830557792422314,
                           -1.9394291217061226, -1.4178586738298844, -0.00027600657645464105]
        arm.set_joint_value_target(joint_positions)
        # Control the manipulator to complete the motion
        arm.go()
        rospy.sleep(1)
        # Control the manipulator to return to the initialized position
        arm.set_named_target('zero')
        arm.go()
        moveit_commander.roscpp_shutdown()
        moveit_commander.os._exit(0)

if __name__ == "__main__":
    MoveAttachedObjectDemo()

The main flow of the routine is as follows:

1. Initialize the scene and set the parameters.
2. Add obstacle models to the visualization environment.
3. Set the start and target poses of the robot.
4. Perform obstacle avoidance planning.

The following is a concrete analysis of the code implementation. The PlanningSceneInterface interface provides the ability to add and remove object models, and the PlanningScene message is the message type to which the scene update topic planning_scene subscribes:

from moveit_commander import RobotCommander, MoveGroupCommander, PlanningSceneInterface
from geometry_msgs.msg import PoseStamped, Pose
from moveit_msgs.msg import CollisionObject, AttachedCollisionObject, PlanningScene

Create an instance of the PlanningSceneInterface class, through which object models can be added or removed:

scene = PlanningSceneInterface()

The routine can be run repeatedly in the terminal; however, the object models loaded by the routine are not cleared automatically and thus need to be removed programmatically using remove_attached_object() and remove_world_object():

scene.remove_attached_object(end_effector_link, 'tool')
scene.remove_world_object('table')


Set the height of the obstacle table and the size of the long rod model gripped by the manipulator's end-effector. Each model is described by a box whose length, width, and height are given in meters.

table_ground = 0.6
table_size = [0.1, 0.6, 0.02]
tool_size = [0.02, 0.02, 0.4]

In addition to the size of an object, its position in the scene also needs to be set, which is described using the PoseStamped message:

p = PoseStamped()
p.header.frame_id = end_effector_link
p.pose.position.x = 0.0
p.pose.position.y = 0.0
p.pose.position.z = -0.2
p.pose.orientation.x = 1
scene.attach_box(end_effector_link, 'tool', p, tool_size)

table_pose = PoseStamped()
table_pose.header.frame_id = 'base_link'
table_pose.pose.position.x = 0.3
table_pose.pose.position.y = 0.0
table_pose.pose.position.z = table_ground + table_size[2] / 2.0
table_pose.pose.orientation.w = 1.0
scene.add_box('table', table_pose, table_size)

After the location of each object is determined, the object is added to the scene using the PlanningSceneInterface's add_box() and attach_box() interfaces. The properties of the objects added by these two interfaces are different. An object added by add_box() is an external obstacle with which the manipulator may collide while moving. An object added by attach_box() becomes one with the manipulator: the long rod here is "attached" to the end-effector of the manipulator, that is, it is rigidly connected to it. When the manipulator is in motion, neither the body of the manipulator nor the long rod is allowed to collide with the outside world. Once the scene has been configured, the manipulator can be instructed to move:

target_pose = PoseStamped()
target_pose.header.frame_id = reference_frame
target_pose.pose.position.x = 0.2
target_pose.pose.position.y = 0.0
target_pose.pose.position.z = table_pose.pose.position.z + table_size[2] + 0.05
target_pose.pose.orientation.w = 1.0
arm.set_pose_target(target_pose, end_effector_link)
arm.go()

Fig. 7.38 Framework of digital twin system in smart manufacturing

The motion trajectory planned by MoveIt will avoid a collision with the table; obvious obstacle avoidance can be observed during the motion of the manipulator, and the long rod grasped by the end-effector will not collide with the table.

arm.set_start_state_to_current_state()
joint_positions = [-0.17835936210466416, -0.5436051587483343, -2.4830557792422314,
                   -1.9394291217061226, -1.4178586738298844, -0.00027600657645464105]
arm.set_joint_value_target(joint_positions)
arm.go()
rospy.sleep(1)
arm.set_named_target('zero')
arm.go()

This routine describes how to add external object models to the MoveIt motion planning scene and how to set their properties through the different interfaces. Based on these scene models, collision detection is performed during manipulator motion planning, and autonomous obstacle avoidance is achieved.


Fig. 7.39 Technical architecture of the digital twin

Task 2 Digital Twins: Synchronized Motion of Physical and Simulated Manipulators

Digital twins were developed amid the gradual digitization of human society. Their purpose is to create digital virtual entities of physical entities. A digital twin integrates multidisciplinary, multi-physical-quantity, and multi-scale simulation processes using physical models, algorithmic models, sensor updates, and operational histories, and it completes the mapping in virtual space. Thus, it reflects the entire life-cycle process of the corresponding physical equipment and realizes the modeling, analysis, and optimization of physical entities. The digital twin model is based on multi-dimensional, large-scale, cumulative, real-time, real-world data measurements. Digital twin technology is widely used in smart manufacturing, smart cities, smart buildings, smart healthcare, and other fields. Fig. 7.38 demonstrates the framework of a digital twin system in smart manufacturing.

Digital twin technology has several typical characteristics:

1. Interoperability: The physical objects and digital space in the digital twin model are capable of bidirectional mapping, dynamic interaction, and real-time connectivity.
2. Scalability: Digital twin technology has the ability to integrate, add, and replace digital models and to scale for multi-scale, multi-physical, and multi-layered model content.
3. Real-time capability: Digital twin technology allows the representation of physical entities over time, forming a digital virtual mapping of the real-time state of the physical entities.
4. Fidelity: The fidelity of digital twin technology refers to the proximity of the digital virtual model to the physical entity, including a high degree of similarity in geometry, state, phase, and temporal behavior. The required degree of similarity can vary between digital twin scenarios; for example, an industrial scenario may require only the physical properties of the virtual entity to be described, without focusing on its chemical structure.


5. Closed-loop capability: The virtual entity in digital twin technology describes the visual model and intrinsic mechanism of the physical entity, so as to facilitate monitoring, analysis, and reasoning over the state data of the physical entity, optimize process and operational parameters, and realize a closed-loop decision-making function. In other words, it gives a brain to the digital virtual entity and the physical entity.

The digital twin is driven by the digital thread. The digital thread is an information interaction framework that bridges what were previously separate vertical business perspectives, connecting the interconnected data flows with an integrated view of the equipment's full life-cycle data (i.e., its digital twin model). Thus, the digital twin can be considered a digital mapping system of one or more important and interdependent equipment systems. It maps the real thing digitally in virtual space, with the main function of simulating and predicting the actual change patterns in the real environment by manipulating the model in virtual space and observing and studying the changes of the virtual model. The basic idea of digital twin technology is that the digital model constructed in virtual space is mapped onto the physical entity (a bidirectional virtual-real mapping) to faithfully describe the trajectory of the full life cycle of the physical entity.

The differences between digital twin technology and simulation are described below. Simulation technology applies simulation hardware and software to reflect the behavior or process of a system through simulation experiments, with the help of numerical calculation and problem solving. Simulation is a method of imitating the physical world by transforming a model containing deterministic laws and complete mechanisms into software; it relies on a correct model, complete information, and environment data to reflect the properties and parameters of the physical world. Simulation technology can only simulate the physical world in an offline, one-way manner, without analysis and optimization functions; in addition, it does not have the real-time and closed-loop characteristics of the digital twin. In digital twin technology, the twin can not only receive data from the physical ontology but also send data back to it, and further actions and interventions on the ontology are taken based on the information fed back by the twin.

As shown in Fig. 7.39, the digital twin contains sensing, modeling, control, analysis, and optimization processes, covering many technical fields such as sensing and control, data integration, model construction, model interoperability, business integration, and human-computer interaction. It is a synthesis of advanced technologies such as cloud computing, big data, 3D modeling, the industrial Internet, and artificial intelligence.

Another concept related to the digital twin is the cyber-physical system (CPS); both the digital twin and CPS use digital means to build systems. CPS is a system implementation, while digital twin technology focuses on the construction of "digital equipment" that integrates information technologies and automatic control technologies such as sensing, computing, communication, and control to build a complex system. This complex system features the mutual mapping, timely interaction, and efficient collaboration of humans, machines, objects, environment, and information in physical and virtual spaces, so as to realize the on-demand response, rapid iteration, and dynamic optimization of resource allocation and operation within the system.


Fig. 7.40 Synchronous motion display of the manipulator model in ROS

Therefore, the digital twin can be regarded as the enabling technology basis of CPS and is the concrete physical embodiment of CPS. The applications of the digital twin include products, production lines, factories, and workshops, which correspond directly to the products, equipment, and systems addressed by CPS.

In ROS MoveIt, it is possible to synchronize the motion of a physical manipulator with a simulated manipulator and thereby implement a simple interoperability function of the kind used in digital twins. The following takes a 6-DOF manipulator (sagittarius_arm) as an example. When the manipulator is dragged by hand, the model in ROS keeps pace with the physical manipulator's orientation; when the manipulator is controlled using ROS MoveIt, the physical manipulator stays in sync with the motion of the model. The GitHub download address for the manipulator is

https://github.com/NXROBO/sagittarius_ws.git

The following describes how to implement the synchronous motion of the manipulator with the robot model in ROS, that is, the manipulator in the simulation system acquires the joint information of the physical manipulator, so that when the physical manipulator is dragged, the manipulator in the simulation system follows the motion synchronously.

1. Compile the package of the manipulator: Download the workspace of the manipulator to the user directory, enter the workspace, and use the catkin_make command to compile it; typical commands are shown below.
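A typical command sequence for this step follows; it assumes the ROS environment has already been sourced and uses the repository address above as the clone source:

$ cd ~
$ git clone https://github.com/NXROBO/sagittarius_ws.git
$ cd ~/sagittarius_ws
$ catkin_make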


Fig. 7.41 Synchronized motion of the physical manipulator with the simulation model

2. Launch the physical manipulator: Turn on the power switch of the manipulator. Make sure that the Spark master PC and the manipulator are connected with a USB cable.

3. Synchronize the simulation model with the physical manipulator: Close all previously running terminals, then open a new terminal and launch the simulation model using the following commands:

$ source devel/setup.bash
$ roslaunch sagittarius_moveit demo.launch

After the successful launch, the robot model appears as shown in Fig. 7.40. At this point, drag the physical manipulator by hand, and the manipulator model in ROS moves synchronously.

4. Install MoveIt:

$ sudo apt-get install ros-kinetic-moveit

5. Synchronize the physical manipulator with the simulation model: Close all previously running terminals, then open a new terminal and start the manipulator's driver and the MoveIt control interface using the following command:

$ roslaunch sagittarius_moveit demo_true.launch


After the successful launch, the MoveIt control interface can be seen as shown in Fig. 7.41. Use the control sphere to select a target point for the manipulator motion, click “plan” and “execute”; then, MoveIt will automatically generate the motion trajectory, and the physical manipulator and simulation model will move synchronously.

7.4 Summary

This chapter first introduced URDF modeling of the manipulator and the MoveIt visualization configuration, and then deepened readers' understanding of manipulator kinematics by demonstrating how to control the manipulator in the MoveIt and Gazebo simulation environments. Finally, several manipulator motion planning methods in different coordinate spaces were presented, together with a simple digital twin experiment that lets readers experience the synchronized motion of mutually mapped physical and simulated manipulators.

Further Reading

We consider methods of path generation in which the path shapes (in space and in time) are described in terms of functions of joint angles. Each path point is usually specified in terms of a desired position and orientation of the tool frame, {T}, relative to the station frame, {S}. Each of these via points is "converted" into a set of desired joint angles by application of the inverse kinematics. Then a smooth function is found for each of the n joints that passes through the via points and ends at the goal point. The time required for each segment is the same for each joint, so that all joints reach the via point at the same time, resulting in the desired Cartesian position of {T} at each via point. Other than specifying the same duration for each joint, the determination of the desired joint angle function for a particular joint does not depend on the functions for the other joints. Hence, joint-space planning achieves the desired position and orientation at the via points. In between via points, the shape of the path, although rather simple in joint space, is complex if described in Cartesian space. Joint-space planning is usually the easiest to compute, and, because no continuous mapping between joint space and Cartesian space is enforced, issues related to mechanism singularities are avoided.

Consider the problem of moving the tool from its initial position to a goal position in a certain amount of time. Inverse kinematics allow the set of joint angles that correspond to the goal position and orientation to be calculated. The initial position of the manipulator is also known in the form of a set of joint angles. Therefore, it is necessary to find a function for each joint whose value at t0 is the initial position of the joint and whose value at tf is the desired goal position of that joint.


Thus far, we have considered motions described by a desired duration and a final goal point. In general, we wish to allow paths that include intermediate via points, and it is thus necessary to generalize the way a cubic polynomial is fitted to the path constraints. As in the case of a single goal point, each via point is usually specified in terms of a desired position and orientation of the tool frame relative to the station frame. Each of these via points is "converted" into a set of desired joint angles by application of the inverse kinematics. Cubic polynomials are then computed that connect the via-point values for each joint together in a smooth way. If the desired velocities of the joints at the via points are known, cubic polynomials can be constructed as before; now, however, the velocity constraints at each end are not zero but some known velocity. We can calculate the cubic polynomial that connects any initial and final positions with any initial and final velocities.

If desired joint velocities are needed at each via point, there are several ways in which they might be specified: (1) the user specifies the desired velocity at each via point in terms of a Cartesian linear and angular velocity of the tool frame at that instant; (2) the system automatically chooses the velocities at the via points by applying a suitable heuristic in either Cartesian space or joint space; (3) the system automatically chooses the velocities at the via points in such a way that the acceleration at the via points is continuous.

In option (1), Cartesian desired velocities at the via points are "mapped" to desired joint rates by using the inverse Jacobian of the manipulator evaluated at the via point. If the manipulator is at a singular point at a particular via point, the user is not free to assign an arbitrary velocity at that point. It is a useful capability of a path-generation scheme to be able to meet a desired velocity that the user specifies, but it would be a burden to require the user always to make these specifications. Therefore, the more common method is to use option (2) or option (3) (or both).

In option (2), the system automatically chooses reasonable intermediate velocities using some kind of heuristic. We can set reasonable joint velocities at the via points, with small line segments representing tangents to the curve at each via point. These values are the result of a conceptually and computationally simple heuristic: imagine the via points connected with straight line segments; if the slope of these lines changes sign at the via point, choose zero velocity; if the slope does not change sign, choose the average of the two slopes as the via velocity. In this way, from the specification of the desired via points alone, the system can choose the velocity at each point.

In option (3), the velocities are set in such a way that the acceleration is continuous at the via points. To do this, a new approach is needed: in this kind of spline, the two velocity constraints at the connection of two cubic polynomials are replaced with the two constraints that velocity be continuous and acceleration be continuous.

Higher-order polynomials are sometimes used for path segments. For example, if we wish to specify the position, velocity, and acceleration at the beginning and end of a path segment, a quintic polynomial is required.
Some algorithms are available for computing smooth functions (polynomial or otherwise) that pass through a given set of data points.
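As a concrete illustration of the single-segment case discussed above (standard cubic interpolation rather than a formula taken from this chapter), a joint trajectory $\theta(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3$ that starts at $\theta_0$ with velocity $\dot{\theta}_0$ at $t = 0$ and ends at $\theta_f$ with velocity $\dot{\theta}_f$ at $t = t_f$ has coefficients

$$
a_0 = \theta_0,\qquad a_1 = \dot{\theta}_0,\qquad
a_2 = \frac{3}{t_f^2}(\theta_f - \theta_0) - \frac{2}{t_f}\dot{\theta}_0 - \frac{1}{t_f}\dot{\theta}_f,\qquad
a_3 = -\frac{2}{t_f^3}(\theta_f - \theta_0) + \frac{1}{t_f^2}\bigl(\dot{\theta}_f + \dot{\theta}_0\bigr)
$$

With zero boundary velocities this reduces to the familiar rest-to-rest cubic; solving such a set of coefficients per joint and per segment is exactly what the via-point schemes above automate.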


Exercises

1. List the D-H parameters of the Franka Panda manipulator according to its ROS description, create a URDF model of the manipulator in the form of linkages, and visualize the model in rviz.

2. Use the D-H parameters of the UR5 manipulator in Table 7.2 to derive its forward and inverse kinematic equations. Then assume the joint-angle program code of the UR5 manipulator is as follows:

joint_positions = [-0.17835936210466416, -0.5436051587483343, -2.4830557792422314,
                   -1.9394291217061226, -1.4178586738298844, -0.00027600657645464105]
arm.set_joint_value_target(joint_positions)

Calculate the rotation matrix and translation vector corresponding to joint_positions based on the forward kinematic equation.

3. The motion planning program code of a UR5 manipulator in the world coordinate system is as follows:

target_pose.pose.position.x = 0.454178
target_pose.pose.position.y = 0.284321
target_pose.pose.position.z = 0.651409
target_pose.pose.orientation.x = 0.216404
target_pose.pose.orientation.y = 0.399163
target_pose.pose.orientation.z = 0.757239
target_pose.pose.orientation.w = -0.469497
arm.set_start_state_to_current_state()
arm.set_pose_target(target_pose, end_effector_link)

Use the inverse kinematic equation to calculate the rotation angles of the six joints.

4. Based on the URDF model of a manipulator, use the MoveIt Setup Assistant tool to generate the configuration package, and control the random motion of the manipulator in rviz to verify the result.

5. Create a program to achieve circular and linear motions of a manipulator's end-effector, and check the result in rviz.

6. Let a manipulator move back and forth between two points and observe the result in rviz. Then add an obstacle between the two points and observe the result again.

7. Analyze the relations and differences between motion planning in joint space and in Cartesian space.



Chapter 8: Computer Vision

Abstract  The way that humans perceive the environment mainly relies on vision, which accounts for approximately 80% of the total acquired information. Computer vision technology has been developed with the aim of providing machines with the same visual ability as humans, as well as higher-precision ranging capabilities. This enables machines to be applied in various practical systems. For example, in a vision inspection system, a camera is used to capture the image signal of the inspected target. The signal is then transmitted to a dedicated image processing system, where it is converted into a digital signal based on brightness, color, and other information. The image processing system performs various operations on these signals to extract visual features of the target, such as area, position, and quantity, thus achieving an automatic recognition function.

Keywords  Computer vision · Camera calibration · Image transformation · Feature detection · Object recognition


8.1 Getting to Know OpenCV

OpenCV refers to the Open Source Computer Vision Library. OpenCV is a lightweight and efficient cross-platform computer vision library released under the BSD license (open source). Since OpenCV 2.4, its architecture has been primarily C++ (previously C), with interfaces to Python, Java, and other languages, and it implements many general algorithms for image processing and computer vision (see https://opencv.org). OpenCV is integrated into ROS through the ROS package vision_opencv, which consists of two main functional modules.
• cv_bridge: provides an interface for converting between ROS image messages and cv::Mat. ROS images are sent and received in the sensor_msgs/Image message format. However, when the OpenCV library is used to process the images, the data must be converted, because the OpenCV image type cv::Mat is defined differently from the ROS message type. The cv_bridge package in vision_opencv acts as a "bridge" for transferring image data between the two.
• image_geometry: provides the transform from 3D world coordinates to the image coordinate frame; the camera needs to be calibrated before use.
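For reference, a minimal sketch of the cv_bridge conversion just described (the callback name, encoding, and processing step are illustrative assumptions, not from the text):

#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>
#include <opencv2/imgproc/imgproc.hpp>

// Convert an incoming ROS image message to cv::Mat, process it with OpenCV,
// and convert the result back into a ROS message.
void imageCallback(const sensor_msgs::ImageConstPtr& msg) {
    cv_bridge::CvImagePtr cv_ptr = cv_bridge::toCvCopy(msg, "bgr8");   // ROS -> cv::Mat
    cv::Mat gray;
    cv::cvtColor(cv_ptr->image, gray, CV_BGR2GRAY);                    // any OpenCV processing
    sensor_msgs::ImagePtr out =
        cv_bridge::CvImage(msg->header, "mono8", gray).toImageMsg();   // cv::Mat -> ROS
    (void)out;  // in a real node, publish this with an image_transport::Publisher
}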

8.1.1 Installing OpenCV

OpenCV is usually already integrated in ROS, so no additional installation is required. If it is not installed, it is recommended to install the ROS perception package, which integrates OpenCV, the open-source Point Cloud Library (PCL), and other robot perception-related packages. Enter the following command to install it:

$ sudo apt-get install ros-<current ROS version>-perception

Please note that to use advanced features such as GPU acceleration, the relevant OpenCV libraries need to be compiled by the user. This is because the ROS-integrated OpenCV does not include these extensions.

8.1.2 Using OpenCV

First, create a new package named ros_opencv.


catkin_create_pkg ros_opencv sensor_msgs cv_bridge roscpp std_msgs image_transport

Put an img_pub.cpp file under the package that serves to read an image and publish it. The content is as follows:

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/highgui/highgui.hpp>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "image_publisher");
    ros::NodeHandle nh;
    image_transport::ImageTransport it(nh);
    image_transport::Publisher pub = it.advertise("camera/image", 1);

    cv::Mat image = cv::imread("image.jpg", CV_LOAD_IMAGE_COLOR);
    if (image.empty())
    {
        printf("open error\n");
    }
    sensor_msgs::ImagePtr msg = cv_bridge::CvImage(std_msgs::Header(), "bgr8", image).toImageMsg();

    ros::Rate loop_rate(5);
    while (nh.ok())
    {
        pub.publish(msg);
        ros::spinOnce();
        loop_rate.sleep();
    }
}

Create another new img_sub.cpp file to receive the image and display it, using the following code:

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/highgui/highgui.hpp>

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    cv::imshow("view", cv_bridge::toCvShare(msg, "bgr8")->image);
    cv::waitKey(30);
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "image_listener");
    ros::NodeHandle nh;
    image_transport::ImageTransport it(nh);
    image_transport::Subscriber sub = it.subscribe("camera/image", 1, imageCallback);
    ros::spin();
}

Add the configuration to CMakeLists.txt as follows (each executable needs its own target name):

find_package(OpenCV)
include_directories(${catkin_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS})

add_executable(img_pub src/img_pub.cpp)
target_link_libraries(img_pub ${OpenCV_LIBS} ${catkin_LIBRARIES})

add_executable(img_sub src/img_sub.cpp)
target_link_libraries(img_sub ${OpenCV_LIBS} ${catkin_LIBRARIES})

Compile and run it.
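One possible way to build and run the two nodes, assuming the package name ros_opencv and the target names defined in the CMakeLists.txt above (img_pub reads image.jpg from its working directory):

$ cd ~/catkin_ws
$ catkin_make
$ source devel/setup.bash
$ roscore
# In two other terminals:
$ rosrun ros_opencv img_pub
$ rosrun ros_opencv img_sub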

8.2 Use of Monocular Vision Sensors

The common vision sensor used in the ROS system is the USB camera, which is inexpensive and easy to use. Once the driver is installed, the image_view package can be used to display the image, which is then provided to other algorithm packages for further processing. Unlike other sensors, the camera needs to be calibrated before its data can be used with confidence. The calibration can be done using the camera_calibration package that comes with ROS. Once the camera is connected to the computer via a USB port, the next step is to retrieve the camera image via ROS. However, the USB camera is not natively supported in ROS, and it is necessary to install a driver package to obtain the camera image and publish it. The image_view package can then obtain the image and display it.

Task 1 Image Acquisition
This task requires that the computer already has a camera connected via USB. You can also use the camera that comes with the laptop. First download and install the package used to acquire the image. The usb_cam package is used here. Use the following command to get the usb_cam package:


Fig. 8.1 Correct compiled output

# Note: Before this step, make sure you have created a new workspace catkin_ws as per the previous tutorial
$ cd ~/catkin_ws/src
$ git clone https://github.com/ros-drivers/usb_cam.git
$ cd ..
$ catkin_make

The result of the compilation is shown in Fig. 8.1. catkin_make will compile all the packages in the workspace. Then, go to the launch directory that comes with usb_cam and run the test script:

$ cd ~/catkin_ws/
$ source devel/setup.bash   # Skip this command if you have already added this source statement to your .bashrc file
$ roslaunch usb_cam usb_cam-test.launch

The program opens a new window to display the image acquired by the camera, as shown in Fig. 8.2. At this point, it is possible to acquire the camera image and display it via the ROS packages. The script code for the launch file is as follows:





<launch>
  <!-- usb_cam-test.launch: acquire images from a USB camera and display them -->
  <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen">
    <param name="video_device" value="/dev/video0" />
    <param name="image_width" value="640" />
    <param name="image_height" value="480" />
    <param name="pixel_format" value="yuyv" />
    <param name="camera_frame_id" value="usb_cam" />
    <param name="io_method" value="mmap" />
  </node>
  <node name="image_view" pkg="image_view" type="image_view" respawn="false" output="screen">
    <remap from="image" to="/usb_cam/image_raw" />
    <param name="autosize" value="true" />
  </node>
</launch>

The script starts two nodes. One is the usb_cam node, which comes with several parameters that determine the source and format of the image; the other is the image_view node, which is used to display the acquired image data. The video_device parameter sets the device from which the image is sourced, and the video devices currently available on the system can be viewed in Ubuntu with the following command:

ls /dev/ | grep video

Normally, if only one camera is connected, only video0 should appear here, as shown in Fig. 8.3. If more than one camera device is connected, video1, video2, and so on will appear. It is necessary to confirm which device is being used and modify the launch file accordingly. For more parameter settings, visit http://wiki.ros.org/usb_cam.

Fig. 8.2 Results of running the program


Fig. 8.3 View the camera devices available in the system

Fig. 8.4 Common calibration boards. (a) Check-board. (b) April-grid. (c) Circle-grid

8.3 Camera Calibration

Camera calibration has a very wide range of applications in computer vision, such as measurement, navigation, and 3D reconstruction. Its main purpose is to obtain the intrinsic parameters (focal length, optical center, etc.), extrinsic parameters (coordinate transformation relationship), and distortion coefficients of the camera by means of a calibration algorithm. In this way, it is possible to obtain the correspondence between the object in a 2D image and 3D space. The calibration method mainly uses a known calibration pattern and identifies the views from different angles. There are three common calibration boards: check-board, April-grid, and circle-grid, as demonstrated in Fig. 8.4. The following camera calibrations are introduced using the classic check-board pattern.

8.3.1 Pinhole Camera Model

Camera imaging refers to the process of mapping coordinate points from the 3D world onto a 2D image plane through a perspective projection transformation. A geometric model can describe this process, usually called pinhole imaging, and the corresponding camera imaging model is called the pinhole imaging model. A pinhole camera model is built without considering lens distortion, as shown in Fig. 8.5. A virtual imaging plane is created in front of the camera to define the focal point and focal length. Based on the pinhole camera model, four coordinate frames are established as shown in Fig. 8.6: the pixel frame (in pixels, with the origin in the upper left corner), the imaging plane frame, the camera frame (with the camera optical center as the origin), and the world frame.

Fig. 8.5 Pinhole camera model
Fig. 8.6 Four coordinate frames

1. Transform from pixel coordinates to imaging plane coordinates
The pixel frame UOV is established in pixels, and the position of a pixel in the image is described by a coordinate (u, v). The imaging plane frame XOY is established on the CCD or CMOS element of the camera. To simplify the transformation matrix, set the origin at the center of the image sensor; then the correspondence between the pixel coordinates (u, v) and the imaging plane coordinates (x1, y1) is

$$x_1 = (u - u_0)\,dx \;\Rightarrow\; u = x_1/dx + u_0, \qquad y_1 = (v - v_0)\,dy \;\Rightarrow\; v = y_1/dy + v_0$$

where (u0, v0) denotes the coordinate of the origin O of the imaging plane frame XOY in the pixel frame, and dx and dy denote the actual physical length of each pixel in the x and y directions in the imaging plane frame, that is, the actual physical size of a single pixel. In this way, the transform from pixel coordinates to imaging plane coordinates (photoreceptor units) can be expressed as the homogeneous transformation

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} \tag{8.1}$$

2. Transform from imaging plane coordinates to camera coordinates
Taking the optical center of the camera (the intersection of the camera lens and the optical axis) as the origin, the camera's 3D frame OXYZ is established, where X, Y, and Z are parallel to the x-axis, y-axis, and optical axis of the imaging plane frame; this is called the camera frame. The two origins of the imaging plane and camera frames are located on the optical axis, and the Euclidean distance between the two origins is defined as the focal length f of the camera (generally at the mm level). Suppose that a target point P(x, y, z) in the camera frame is projected to a point p(x1, y1) in the imaging plane frame; then the correspondence between the two points is obtained by the similarity principle as

$$f/z = x_1/x = y_1/y \;\Rightarrow\; x_1 z = f x, \quad y_1 z = f y$$

In this way, the transform between the imaging plane coordinate point p and the camera coordinate point P can be expressed as the homogeneous transformation

$$z \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \tag{8.2}$$

3. Transform between camera coordinates and world coordinates
To describe the relative position relationship between objects in the camera frame and objects in the world frame, the origin of the world frame can be selected according to the actual situation. Let the coordinates of point P in the corresponding world frame be P′(x2, y2, z2). Because the transform between the camera frame (i.e., camera coordinate system) and the world frame is a rigid transformation (rotation and translation), the correspondence between the coordinates of the two points is

$$\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \\ z_2 \\ 1 \end{bmatrix} \tag{8.3}$$

where R is the rotation matrix and t is the 3D translation vector. Then, the transform from the coordinate P′(x2, y2, z2) in the 3D world frame, where the target object is located, to the coordinate (u, v) in the 2D pixel frame can be established in the linear imaging model.

4. Transform between pixel coordinates and world coordinates
From Eqs. (8.1), (8.2), and (8.3), the transformation relationship between the pixel frame and the world frame can be obtained as follows:

$$z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 & 0 \\ 0 & f/dy & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \\ z_2 \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} x_2 \\ y_2 \\ z_2 \\ 1 \end{bmatrix} = M \begin{bmatrix} x_2 \\ y_2 \\ z_2 \\ 1 \end{bmatrix} \tag{8.4}$$

where f/dx, f/dy, u0, and v0 are determined by the structural parameters of the camera itself; therefore, M1 is referred to as the intrinsic parameter matrix, describing the transformation from the 3D camera frame to the 2D pixel frame. R (3 × 3) and t (3 × 1) are determined by the position of the camera in the world frame; therefore, M2 is referred to as the extrinsic parameter matrix, describing the transformation from the world frame to the camera frame. The matrix M determined by M1 and M2 is referred to as the camera projection matrix, which describes the mapping of the object from 3D space to the 2D pixel image.

Fig. 8.7 Frame transform during the imaging process

Thus, the camera calibration is transformed into a problem of solving for the intrinsic and extrinsic parameters in the above matrices. The imaging process is essentially a sequence of frame transforms, as illustrated in Fig. 8.7. First, the points in space are transformed from world coordinates to camera coordinates, then they are projected to imaging plane coordinates, and finally the data in the imaging plane are transformed to pixel coordinates. Here, the normalized plane multiplied by the camera focal length f becomes the imaging plane, which is then scaled and translated to become the pixel plane; equivalently, the normalized plane multiplied by the intrinsic parameter matrix becomes the pixel plane. In general, the normalized plane has the homogeneous transformation property, and the effect of the object point depth can be ignored.
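As a small numeric illustration of this projection pipeline (the intrinsic values below are made-up example numbers, not taken from the text):

#include <cstdio>

// Project a point (x, y, z) given in the camera frame to pixel coordinates
// using the pinhole model of Eq. (8.4): u = fx*x/z + u0, v = fy*y/z + v0.
int main() {
    // Example intrinsic parameters (fx = f/dx, fy = f/dy, principal point u0, v0)
    double fx = 700.0, fy = 700.0, u0 = 320.0, v0 = 240.0;

    double x = 0.1, y = -0.05, z = 2.0;   // point in the camera frame (meters)
    double u = fx * x / z + u0;           // normalized coordinate x/z scaled and shifted
    double v = fy * y / z + v0;

    std::printf("pixel: (%.1f, %.1f)\n", u, v);  // prints (355.0, 222.5)
    return 0;
}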

8.3.2 Distortion

Distortion is introduced by deviations in lens manufacturing accuracy as well as in the assembly process, resulting in distortion of the original image. Lens distortion is divided into radial distortion, caused by the lens shape, and tangential distortion, caused by the lens plane not being mounted parallel to the sensor pixel plane.

Radial distortion is distortion along the radius of the lens, caused by the fact that light rays are bent more far from the center of the lens than near the center. The further from the center of the image, the more severe the distortion. This distortion is more pronounced in ordinary cheap lenses. Radial distortion mainly includes barrel distortion and pincushion distortion and can be expressed by a Taylor expansion:

$$x_{distorted} = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6), \qquad y_{distorted} = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \tag{8.5}$$

where (x, y) is the normalized pixel coordinate before distortion, and (x_distorted, y_distorted) is the corresponding normalized pixel coordinate after distortion. r denotes the distance from the 2D coordinate point to the origin of the frame, $r = \sqrt{x^2 + y^2}$. The lens radial distortion coefficients are denoted by k1, k2, and k3. Generally, the first two coefficients are used for correction; for cameras with large distortion, k3 can also be used. If the distortion in the middle part of the image is small, k1 is used for correction; for the edge part of an image with large distortion, k2 can be added. The coefficient k3 is not required for ordinary camera lenses and is usually set to 0; only when lenses such as wide-angle or fisheye lenses are used does k3 need to be added.

Tangential distortion is caused by the fact that the lens itself is not parallel to the camera sensor plane (imaging plane). This is mostly due to installation deviations when the lens is affixed to the lens module. The tangential distortion model can be described by two additional parameters, p1 and p2:

$$x_{distorted} = x + 2p_1 xy + p_2(r^2 + 2x^2), \qquad y_{distorted} = y + p_1(r^2 + 2y^2) + 2p_2 xy \tag{8.6}$$

where p1 and p2 are the lens tangential distortion coefficients. There are a total of five distortion parameters in the radial and tangential distortion models. After obtaining these five parameters, it is possible to correct the image distortion caused by the lens. Combining the radial and tangential distortions of Eqs. (8.5) and (8.6), the distortion-corrected model is obtained:

$$x_{corrected} = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2p_1 xy + p_2(r^2 + 2x^2)$$
$$y_{corrected} = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1(r^2 + 2y^2) + 2p_2 xy \tag{8.7}$$
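For reference, OpenCV can apply this correction directly once the intrinsic matrix and the five distortion coefficients (k1, k2, p1, p2, k3) are known; the file names and parameter values below are placeholders, not from the text:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main() {
    cv::Mat src = cv::imread("distorted.jpg");

    // Intrinsic matrix K and distortion coefficients in the order (k1, k2, p1, p2, k3)
    cv::Mat K = (cv::Mat_<double>(3, 3) << 700.0, 0.0, 320.0,
                                           0.0, 700.0, 240.0,
                                           0.0, 0.0, 1.0);
    cv::Mat dist = (cv::Mat_<double>(1, 5) << 0.15, -0.46, 0.0, 0.0, 0.0);

    cv::Mat undistorted;
    cv::undistort(src, undistorted, K, dist);   // removes lens distortion using the model above
    cv::imwrite("undistorted.jpg", undistorted);
    return 0;
}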

8.3.3 Camera Calibration Principle and Procedure

Based on the pinhole camera and distortion models, there are four intrinsic parameters and five distortion parameters to be determined. The pinhole camera model is uniquely determined once the camera and distortion parameters are known; this determination process is known as "camera calibration." Once the camera structure is fixed, including the lens structure and focus distance, these parameters can be used to approximate the camera. The check-board is used for calibration as shown in Fig. 8.8. Define the upper left point of the calibration board as the origin of the world frame Ow, with the Xw-axis and Yw-axis in the plane of the calibration board. According to Eq. (8.4),

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = s \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 0 \\ 1 \end{bmatrix} = s \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} [\,r_1 \;\; r_2 \;\; t\,] \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \tag{8.8}$$

In the above equation, fx = f/dx and fy = f/dy are called the equivalent focal lengths. (X, Y) is the world coordinate of a point on the calibration board, and (u, v) is its corresponding pixel coordinate in the image. R and t are the rotation and translation matrices from the camera frame to the world frame, respectively, ri denotes the ith column vector of the rotation matrix R, and s = 1/Z denotes the scale factor. According to the above equation, the homography matrix H is defined as

Fig. 8.8 Calibration principle of monocular camera intrinsic parameters

$$H = s \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} [\,r_1 \;\; r_2 \;\; t\,] = sK\,[\,r_1 \;\; r_2 \;\; t\,] \tag{8.9}$$

The homography matrix H describes the transformation relationship between pixel coordinates and world coordinates, and it contains both the intrinsic and extrinsic parameters of the camera. Because the calibration board size is known, the world coordinates (x, y) of each corner point are known, and the pixel coordinates (u, v) of each corner point in the image are also known. Thus, according to Eq. (8.8),

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \;\Rightarrow\; u = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}}, \quad v = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}} \tag{8.10}$$

Because H is defined only up to scale, multiplying it by a nonzero factor does not affect the above equation. Therefore, it is common to restrict the modulus of H to 1:

$$h_{11}^2 + h_{12}^2 + h_{13}^2 + h_{21}^2 + h_{22}^2 + h_{23}^2 + h_{31}^2 + h_{32}^2 + h_{33}^2 = 1 \tag{8.11}$$

Based on Eqs. (8.10) and (8.11), the matrix H can be obtained by listing nine equations using the pixel coordinates and corresponding world coordinates of four corners on the check-board. Because the matrix H mixes the camera's intrinsic and extrinsic parameters, the intrinsic parameter matrix must be separated after H is obtained. According to the properties of the rotation matrix, r1 and r2 are orthogonal and their moduli are 1; thus,

$$r_1^T r_2 = 0, \qquad \|r_1\| = \|r_2\| = 1 \;\Rightarrow\; r_1^T r_1 = r_2^T r_2 = 1 \tag{8.12}$$

If H = [h1 h2 h3] = sK[r1 r2 t], then

$$r_1 = \lambda K^{-1} h_1, \qquad r_2 = \lambda K^{-1} h_2, \qquad t = \lambda K^{-1} h_3, \qquad \lambda = s^{-1} \tag{8.13}$$

Substituting Eq. (8.13) into Eq. (8.12), the following is obtained:

$$h_1^T K^{-T} K^{-1} h_2 = 0, \qquad h_1^T K^{-T} K^{-1} h_1 = h_2^T K^{-T} K^{-1} h_2 \tag{8.14}$$


Then, multiple images are selected to determine the corresponding matrix H, and the equations are listed in combination with the above equation to solve for the standard intrinsic parameter matrix K of the camera.

Camera distortion and image noise are not considered in the above process of determining the intrinsic parameters. To reduce the influence of uncertainties on the calibration results, the solution of the camera's intrinsic and distortion parameters is usually treated as a maximum likelihood estimation problem in the actual calibration process. Suppose that n pictures of a calibration board are taken, and the calibration board has m check-board corner points. The list of parameters to be solved is P = (K, k1, k2, k3, p1, p2), the world coordinate of a point on the calibration board is (X, Y), and (u, v) is its corresponding real pixel coordinate in the image. Based on Eqs. (8.7) and (8.8), the theoretical pixel coordinate (u′, v′) after the distortion correction is calculated. Assuming that the noise is independently and identically distributed, the above maximum likelihood estimation problem can be solved by minimizing the deviation between u and u′, which in turn yields the intrinsic and distortion parameters of the camera:

$$P = \arg\min_{P} \sum_{i=1}^{n} \sum_{j=1}^{m} \left\| u_{ij} - u'(P, R_i, t_i, X_j) \right\|^2 \tag{8.15}$$

It can be seen that the camera calibration process is a nonlinear optimization problem, and the accuracy of the camera parameter calibration results directly affects the accuracy of the results produced when the camera is used. Hence, a good camera calibration is an essential prerequisite for subsequent work, and the calibration process can be summarized as follows:
1. Extract the corner points on the calibration board.
2. Set the initial values for the nonlinear optimization problem. Initialize the optical center coordinate to half the image size. Initialize the focal length according to the equation fx = fy = d_vanishing points/π, where the vanishing points are the elliptical vertices fitted with straight lines on the calibration board.
3. Solve for the pose relationship of each image frame with respect to the calibration board. The size parameters of the calibration board are known, so the 3D coordinate of each corner point in the calibration board frame can be determined; that is, the 3D-2D correspondence is obtained, and the camera poses are then solved by the Perspective-n-Point (PnP) algorithm.
4. Through the camera projection model, the pixel coordinates of the corner points on the calibration board can be obtained on the image. The camera parameters and the pose of each frame relative to the calibration board are iteratively optimized until the maximum number of iterations is reached or the reprojection error (the pixel error when projecting the 3D points onto the pixel frame) for all frames is less than a set threshold. In this way, the camera parameters are obtained.
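The procedure above maps closely onto OpenCV's calibration API. The following is a minimal sketch, assuming an 8×6 inner-corner check-board with 36 mm squares and image files named calib_00.jpg, calib_01.jpg, and so on (these file names and counts are placeholders, not from the text):

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>
#include <vector>

int main() {
    const cv::Size boardSize(8, 6);        // inner corners per row and column
    const float squareSize = 0.036f;       // side length of one square, in meters

    std::vector<std::vector<cv::Point3f> > objectPoints;  // board-frame 3D corners, one set per view
    std::vector<std::vector<cv::Point2f> > imagePoints;   // detected 2D corners, one set per view

    // Board-frame coordinates of the corners (the board lies in the Z = 0 plane)
    std::vector<cv::Point3f> corners3d;
    for (int i = 0; i < boardSize.height; ++i)
        for (int j = 0; j < boardSize.width; ++j)
            corners3d.push_back(cv::Point3f(j * squareSize, i * squareSize, 0.0f));

    cv::Size imageSize;
    for (int k = 0; k < 20; ++k) {                                       // calibration views
        cv::Mat gray = cv::imread(cv::format("calib_%02d.jpg", k), 0);
        if (gray.empty()) continue;
        imageSize = gray.size();

        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(gray, boardSize, corners)) {       // step 1: corner extraction
            cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.1));
            imagePoints.push_back(corners);
            objectPoints.push_back(corners3d);
        }
    }

    // Steps 2-4 (initialization, per-view pose estimation, iterative refinement)
    // are handled internally; the return value is the RMS reprojection error.
    cv::Mat K, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     K, distCoeffs, rvecs, tvecs);
    std::printf("RMS reprojection error: %.3f px\n", rms);
    return 0;
}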

8.3.4 Camera Calibration Kit

In ROS, many mature camera calibration packages are provided, such as visp_camera_calibration and camera_calibration. Here, the camera_calibration package is selected. It adopts the flexible camera calibration technique based on viewing a plane from unknown orientations proposed by Zhengyou Zhang in 1998 (referred to as Zhang's calibration method [1]) and is implemented using the Python interface of the related OpenCV functions. To simplify the calculation and improve the stability of the solution, only the most influential radial distortion is considered in Zhang's calibration method, and only the second-order distortion coefficients are used. The intrinsic parameter matrix is first determined from the linear model, then estimated by the least squares method and refined by maximum likelihood estimation; the distortion coefficients are then obtained. The camera_calibration package provides a friendly GUI and API. By starting the cameracalibrator node, the GUI can be loaded to complete image acquisition, parameter calculation, and saving. In this example, only the intrinsic parameter matrix of the camera is obtained with this package; the extrinsic parameters are not solved because a static object and a fixed camera are used. If the image_view package does not need to be opened to display the image, the launch file used in Sect. 8.2 can be simplified by removing the statement that starts the image_view node, modifying it as follows:

<launch>
  <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen">
    <param name="video_device" value="/dev/video0" />
    <param name="image_width" value="640" />
    <param name="image_height" value="480" />
    <param name="pixel_format" value="yuyv" />
    <param name="camera_frame_id" value="usb_cam" />
    <param name="io_method" value="mmap" />
  </node>
</launch>







After launching the above script, execute the following command:

rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.036 image:=/usb_cam/image_raw camera:=/camera --no-service-check

In the above command, several parameters are transmitted, with the following meaning:


Fig. 8.9 Interface after launching the script

In the above command, the parameters have the following meanings:
--size: the size of the board used for the current calibration, i.e., the number of inner corners of the board
--square: the side length of each square on the board, in meters
--image: the image topic used
--no-service-check: disable checking for the set_camera_info service when launching

After launching the script, the interface is displayed as shown in Fig. 8.9. Once the interface appears normally, the board needs to be held at different distances and angular positions so that the program can recognize it, as shown in Fig. 8.10. Please note that the check-board needs to stay entirely within the image. Hold it still for a moment at each position, and move it again after the board is highlighted in color. During the movement, the progress bars in the upper right corner of the window increase and gradually turn green, as shown in Fig. 8.11. When the calibration button on the right lights up, the data required for calibration has been collected. Click the calibration button to automatically calculate and display the results. The results are generally displayed in the following format:

('D = ', [0.1464942242867504, -0.4618102141865415, -0.0002832001606639612, 0.007823380979168761, 0.0])


Fig. 8.10 Handheld check-board in different positions

Fig. 8.11 Camera calibration

('K = ', [701.7445347703235, 0.0, 329.2574859169867, 0.0, 702.4058434859102, 235.6741579182364, 0.0, 0.0, 1.0])
('R = ', [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0])
('P = ', [707.4067993164062, 0.0, 332.3106384985822, 0.0, 0.0, 711.5349731445312, 235.05484443414025, 0.0, 0.0, 0.0, 1.0, 0.0])
None
# oST version 5.0 parameters

[image]
width
640
height
480

[narrow_stereo]

camera matrix
701.744535 0.000000 329.257486
0.000000 702.405843 235.674158
0.000000 0.000000 1.000000

distortion
0.146494 -0.461810 -0.000283 0.007823 0.000000

rectification
1.000000 0.000000 0.000000
0.000000 1.000000 0.000000
0.000000 0.000000 1.000000

projection
707.406799 0.000000 332.310638 0.000000
0.000000 711.534973 235.054844 0.000000
0.000000 0.000000 1.000000 0.000000

If there are no issues with the results, click the commit button to send the results to the camera and save them. During the calibration process, please pay attention to the following:
• The epi error shown in the calibration screen should be less than 0.25 (this applies when calibrating a stereo camera), and the calibration board should not be moved around at random.
• After clicking the commit button, the yaml file is stored in the ~/.ros/ folder; then set camera_info_url in the camera driver to read the yaml file, so that camera_info and the image data can be published synchronously.
In the ROS camera driver, camera_info is the most difficult part to understand. First, the model parameters of the camera itself (intrinsic and extrinsic parameters) need to be understood. The camera also has different degrees of distortion, and describing these distortions requires the corresponding model and its parameters. For a binocular camera, the parameters that relate the two imaging planes also need to be recorded. This information is integrated by ROS into camera_info, which is published together with the image data. In the ROS system, the camera driver is responsible for managing this information by default. However, it is difficult for a ROS camera driver to handle this part well because of the variety of camera hardware; even if the manufacturer provides a ROS driver, camera_info may need to be determined manually. A very detailed tutorial on camera calibration is included in the ROS tutorials and is available at


http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration http://wiki.ros.org/camera_calibration/Tutorials/StereoCalibration
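Returning to the camera_info_url setting mentioned above, an illustrative sketch of wiring the saved yaml into the usb_cam driver might look as follows (the file name and path are assumptions, not from the text):

<launch>
  <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen">
    <param name="video_device" value="/dev/video0" />
    <!-- point the driver at the calibration file saved by the commit button -->
    <param name="camera_info_url" value="file://$(env HOME)/.ros/camera_info/head_camera.yaml" />
  </node>
</launch>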

8.4 Image Transformation and Processing

Image processing technology uses computers and digital processing techniques to apply certain operations to an image in order to extract various kinds of information from it and thereby achieve a particular purpose. Existing image processing methods include point operations, filtering, global optimization, image enhancement, grayscale conversion, binarization, image restoration, image segmentation, image recognition, coding, and compression; these are widely used. This section focuses on perspective transformation, image matching, and image stitching; the other image processing methods are not described.

8.4.1 Perspective Transformation

In plane image processing, the lens angle and other factors are prone to causing image skew, warping, and similar effects. To facilitate subsequent processing, image correction is often required. The main techniques involve two transformations: affine transformation and perspective transformation.

Affine transformation is a linear transformation between 2D coordinates; therefore, the transformed image retains some properties of the original image, including "straightness" and "parallelism." It is often used for image flip, rotation, translation, and scaling operations. However, affine transformation cannot correct warping issues, such as a rectangular area being partially warped into a trapezoid. In this case, perspective transformation is needed.

Perspective transformation is a nonlinear transformation in 3D space, which can be regarded as a more general form of affine transformation. Put simply, the perspective transformation projects the original image onto a new viewing plane through a 3 × 3 transformation matrix; therefore, it is also known as projective mapping or projection transformation. Visually, it produces or eliminates the sense of perspective. Its general transformation formula is

$$[x', y', w'] = [u, v, w] \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

where (u, v) is the coordinate of a pixel in the original picture and (x, y), with x = x′/w′ and y = y′/w′, is the corresponding coordinate in the transformed picture.

The transformation matrix

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

can be split into four parts. The submatrix

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$$

represents linear transformations, such as scaling and rotation; [a31  a32] is used for translation; and [a13  a23]^T produces the perspective effect. Hence, affine transformations can be understood as a special form of perspective transformation. The picture after a perspective transformation is usually not a parallelogram (except when the mapped view plane is parallel to the original plane).

To better understand the above process, consider a simple example of the transformation from a square to a quadrilateral. The four pairs of corresponding points of the transformation can be expressed as

$$(0, 0) \to (x_0, y_0), \quad (1, 0) \to (x_1, y_1), \quad (1, 1) \to (x_2, y_2), \quad (0, 1) \to (x_3, y_3)$$

According to the transformation equation,

$$a_{31} = x_0, \quad a_{11} + a_{31} - a_{13}x_1 = x_1, \quad a_{11} + a_{21} + a_{31} - a_{13}x_2 - a_{23}x_2 = x_2, \quad a_{21} + a_{31} - a_{23}x_3 = x_3$$
$$a_{32} = y_0, \quad a_{12} + a_{32} - a_{13}y_1 = y_1, \quad a_{12} + a_{22} + a_{32} - a_{13}y_2 - a_{23}y_2 = y_2, \quad a_{22} + a_{32} - a_{23}y_3 = y_3$$

Next, define a few auxiliary variables:

$$\Delta x_1 = x_1 - x_2, \quad \Delta x_2 = x_3 - x_2, \quad \Delta x_3 = x_0 - x_1 + x_2 - x_3$$
$$\Delta y_1 = y_1 - y_2, \quad \Delta y_2 = y_3 - y_2, \quad \Delta y_3 = y_0 - y_1 + y_2 - y_3$$

Whether the transformed plane is parallel to the original plane is determined by whether both Δx3 and Δy3 are 0; the possible cases are illustrated in Fig. 8.12.

Fig. 8.12 Transformation matrix

When both Δx3 and Δy3 are 0:

$$a_{11} = x_1 - x_0, \quad a_{21} = x_2 - x_1, \quad a_{31} = x_0$$
$$a_{12} = y_1 - y_0, \quad a_{22} = y_2 - y_1, \quad a_{32} = y_0$$
$$a_{13} = 0, \quad a_{23} = 0$$

When Δx3 and Δy3 are not both zero:

$$a_{13} = \begin{vmatrix} \Delta x_3 & \Delta x_2 \\ \Delta y_3 & \Delta y_2 \end{vmatrix} \Big/ \begin{vmatrix} \Delta x_1 & \Delta x_2 \\ \Delta y_1 & \Delta y_2 \end{vmatrix}, \qquad a_{23} = \begin{vmatrix} \Delta x_1 & \Delta x_3 \\ \Delta y_1 & \Delta y_3 \end{vmatrix} \Big/ \begin{vmatrix} \Delta x_1 & \Delta x_2 \\ \Delta y_1 & \Delta y_2 \end{vmatrix}$$
$$a_{11} = x_1 - x_0 + a_{13}x_1, \quad a_{21} = x_3 - x_0 + a_{23}x_3, \quad a_{31} = x_0$$
$$a_{12} = y_1 - y_0 + a_{13}y_1, \quad a_{22} = y_3 - y_0 + a_{23}y_3, \quad a_{32} = y_0$$

The solved transformation matrix transforms a square to a quadrilateral. The same is true for transforming a quadrilateral to a square. Any quadrilateral can be transformed to another quadrilateral through two transformations: quadrilateral to square and square to quadrilateral, as shown in Fig. 8.12.


Next, we introduce two related API functions in OpenCV that implement the above process: the image perspective transformation function and the point perspective transformation function.
1. The image perspective transformation function applies the perspective matrix to an image and writes the transformed image to the output:

warpPerspective(InputArray src, OutputArray dst, InputArray M, Size dsize,
                int flags = INTER_LINEAR, int borderMode = BORDER_CONSTANT,
                const Scalar& borderValue = Scalar());

Parameter meanings:
InputArray src: the input source image.
OutputArray dst: the output image, that is, the transformed image.
InputArray M: the perspective transformation matrix.
Size dsize: the size of the output image.
int flags = INTER_LINEAR: the interpolation method of the output image (optional parameter).
int borderMode = BORDER_CONSTANT: border pixel mode (optional parameter).
const Scalar& borderValue = Scalar(): border fill value (optional parameter).

The perspective transformation matrix can be obtained with the function findHomography, which computes the perspective (homography) matrix from two sets of corresponding points in the input and output images. Its interface is as follows:

Mat findHomography(InputArray srcPoints, InputArray dstPoints, int method = 0,
                   double ransacReprojThreshold = 3, OutputArray mask = noArray())

Parameter meanings:
InputArray srcPoints: the coordinates of the points in the source image.
InputArray dstPoints: the coordinates of the corresponding points in the output image.
int method = 0: optional parameter, the method used to calculate the homography matrix; 0 represents the conventional method using all points; RANSAC denotes the robust algorithm based on RANSAC; LMEDS refers to the least-median robust algorithm; and RHO represents the PROSAC-based robust algorithm.


Fig. 8.13 Transformation matrix. (a) Original image. (b) Transformed image

double ransacReprojThreshold = 3: optional parameter, the maximum allowed reprojection error for treating point pairs as inliers (used only by the RANSAC and RHO methods).
OutputArray mask = noArray(): optional output mask matrix, usually set by a robust algorithm (RANSAC or LMEDS).

The return value of findHomography is a Mat, a matrix class consisting of two parts: the matrix header information and a pointer. The header contains the size of the matrix, the storage method, and the address where the matrix is stored; the pointer points to the matrix containing the pixel values.
2. The point perspective transformation function applies the perspective matrix to a set of points:

perspectiveTransform(InputArray src, OutputArray dst, InputArray M);

Parameter meanings:
InputArray src: the coordinates of a set of points in the input source image.
OutputArray dst: the coordinates of the corresponding set of points in the output image, that is, the coordinates of the points after the transformation.
InputArray M: the perspective transformation matrix.

Fig. 8.13 shows an example of a perspective transformation.
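A minimal sketch tying the two functions together (the file names and the four point pairs are illustrative values, not from the text):

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main() {
    cv::Mat src = cv::imread("document.jpg");

    // Four corners of the warped region in the source image...
    std::vector<cv::Point2f> srcPts;
    srcPts.push_back(cv::Point2f(60, 80));
    srcPts.push_back(cv::Point2f(420, 60));
    srcPts.push_back(cv::Point2f(450, 300));
    srcPts.push_back(cv::Point2f(40, 320));
    // ...and where they should end up in the corrected image.
    std::vector<cv::Point2f> dstPts;
    dstPts.push_back(cv::Point2f(0, 0));
    dstPts.push_back(cv::Point2f(400, 0));
    dstPts.push_back(cv::Point2f(400, 300));
    dstPts.push_back(cv::Point2f(0, 300));

    cv::Mat H = cv::findHomography(srcPts, dstPts);        // 3x3 perspective matrix

    cv::Mat corrected;
    cv::warpPerspective(src, corrected, H, cv::Size(400, 300));
    cv::imwrite("corrected.jpg", corrected);

    // The same matrix can transform individual points as well.
    std::vector<cv::Point2f> probe(1, cv::Point2f(60, 80)), mapped;
    cv::perspectiveTransform(probe, mapped, H);            // mapped[0] is close to (0, 0)
    return 0;
}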

8.4.2 Image Matching

Perspective transformation provides a better view of an image, while image matching finds the target image of interest from an image, which plays an important role in practical applications. The following describes the foundation of image matching. A common method of image matching is template matching, which finds the most matching (similar) part of an image to another template image. For example, there are two images: • Source image (I): the picture in which we expect to find a region that matches the template image • Template image (T): the patch image which is compared with the source image Image matching is the process of searching to detect the target region that best matches the template image from the source image, as shown in Fig. 8.14. To determine the matching area, a comparison can be made by sliding the template image within the original image. “Sliding” means that the template image block is moved one pixel at a time (left to right, top to bottom). At each position, numeric measures are calculated to indicate the similarity/matching degree between the template image and the source image at the current position. In OpenCV, the function that performs template matching is cvMatchTemplate:

Fig. 8.14 Schematic diagram of image matching


cvMatchTemplate(const CvArr* image, const CvArr* templ, CvArr* result, int method);

Parameter meanings:
image: the image to be searched.
templ: the template image.
result: the result of the match, used to store the similarity value between the sliding window and the template calculated by one of the following methods.
method: the method for calculating the matching degree.
CV_TM_SQDIFF: squared difference matching; the best matching value is 0, and the worse the match, the larger the value.
CV_TM_CCORR: correlation matching, which uses a multiplication operation; the larger the value, the better the match.
CV_TM_CCOEFF: correlation coefficient matching; 1 indicates a perfect match and -1 the worst match.
CV_TM_SQDIFF_NORMED: normalized squared difference matching.
CV_TM_CCORR_NORMED: normalized correlation matching.
CV_TM_CCOEFF_NORMED: normalized correlation coefficient matching.
Please note that different methods produce different results: for some methods a larger return value indicates a better match, while for others a smaller return value indicates a better match.
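For reference, a short sketch using the C++ interface of the same functionality (file names are placeholders, not from the text):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main() {
    cv::Mat image = cv::imread("scene.jpg");
    cv::Mat templ = cv::imread("template.jpg");

    cv::Mat result;
    cv::matchTemplate(image, templ, result, CV_TM_CCOEFF_NORMED);

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    // For CV_TM_CCOEFF_NORMED the best match is the maximum location;
    // for CV_TM_SQDIFF(_NORMED) the minimum location would be used instead.
    cv::rectangle(image, maxLoc,
                  cv::Point(maxLoc.x + templ.cols, maxLoc.y + templ.rows),
                  cv::Scalar(0, 255, 0), 2);
    cv::imwrite("match.jpg", image);
    return 0;
}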

8.4.3 Image Stitching

In addition to image matching, image stitching is also a method that needs to be used frequently in image processing. Image stitching stitches multiple images into one panorama or stitches images from different perspectives together in the same scene, which is critical in practical applications. The image stitching algorithm is relatively complex. Stitcher is a simple and practical image stitching class in OpenCV. The member functions in this class include createDefault, optimizeTransform, composePanorama, and stitch. The major steps for image stitching in OpenCV using stitcher are as follows: 1. First create a stitcher object

Stitcher stitcher = Stitcher::createDefault(try_use_gpu);


Fig. 8.15 Multiple original images from different views

2. Choose the stitching method
Plane stitching:
PlaneWarper* cw = new PlaneWarper();
stitcher.setWarper(cw);

Spherical stitching:
SphericalWarper* cw = new SphericalWarper();
stitcher.setWarper(cw);

Stereographic stitching:
StereographicWarper* cw = new cv::StereographicWarper();
stitcher.setWarper(cw);

The feature point finding method supports both the SURF and ORB algorithms; for example, to use the SURF algorithm for feature point matching:
detail::SurfFeaturesFinder* featureFinder = new detail::SurfFeaturesFinder();
stitcher.setFeaturesFinder(featureFinder);

3. Generate the panorama image
status = stitcher.composePanorama(pano);

Fig. 8.15 and Fig. 8.16 are the original images in three different views and the panorama image after image stitching, respectively.
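Putting the pieces together, a minimal end-to-end sketch might look as follows (image file names are placeholders; Stitcher::stitch runs the whole pipeline, so the explicit warper and feature-finder calls above are optional tuning):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/stitching/stitcher.hpp>
#include <vector>

int main() {
    std::vector<cv::Mat> imgs;
    imgs.push_back(cv::imread("view1.jpg"));
    imgs.push_back(cv::imread("view2.jpg"));
    imgs.push_back(cv::imread("view3.jpg"));

    bool try_use_gpu = false;
    cv::Stitcher stitcher = cv::Stitcher::createDefault(try_use_gpu);

    cv::Mat pano;
    cv::Stitcher::Status status = stitcher.stitch(imgs, pano);  // estimate + compose in one call
    if (status == cv::Stitcher::OK) {
        cv::imwrite("panorama.jpg", pano);
    }
    return 0;
}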


Fig. 8.16 Image stitching effect

8.5 Image Feature Detection Algorithms

An image is made up of many pixel points, some of which can be considered feature points of the image. In the field of computer vision, image feature matching is performed on the basis of feature points; therefore, identifying the feature points in an image is very important. Several common methods for image feature detection are introduced below.

In the field of computer vision, the concept of feature points (also called key points or points of interest) has been widely used in target recognition, image alignment, visual tracking, and 3D reconstruction. The idea is to select certain feature points from an image and analyze the image locally, rather than observing the whole image. As long as there are enough detectable feature points in the image, and these feature points are distinctive and stable, they can be accurately localized, which is conducive to subsequent operations.

8.5.1 SIFT Algorithm

First, the concept of "scale" will be discussed. If the number of pixels in a photograph is continually reduced, or if the observer moves farther away, the photograph gradually becomes blurred; this continuous independent variable, which changes how the photograph is presented, can be called scale. Objects appear differently at different observation scales. Scale invariance of feature detection is a key concept, but solving the scale invariance problem is very difficult. The concept of a scale-invariant feature was introduced in the field of computer vision to address this issue; that is, not only can

Related OpenCV documentation:
https://docs.opencv.org/2.4/modules/features2d/doc/feature_detection_and_description.html
https://docs.opencv.org/4.x/dc/dc3/tutorial_py_matcher.html


Fig. 8.17 Feature detection image samples

consistent feature points be detected for objects taken at any scale, but each detected feature point corresponds to a scale factor. Ideally, for the same object point at different scales in two images, the ratio between the two scale factors should equal the ratio of the two image scales. In 1999, Lowe proposed the scale-invariant feature transform (SIFT) algorithm [2], which was refined and summarized in 2003 [3]. Fig. 8.17 shows a sample of feature detection images: the top two are landscape pictures taken with a mobile phone, and the bottom two are remote sensing images. First, perform the regular SIFT feature extraction and feature point matching to see how it works.

#include "highgui/highgui.hpp"
#include "opencv2/nonfree/nonfree.hpp"


#include "opencv2/legacy/legacy.hpp"
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    Mat image01 = imread("1.jpg", 1);
    Mat image02 = imread("2.jpg", 1);
    namedWindow("p2", 0);
    namedWindow("p1", 0);
    imshow("p2", image01);
    imshow("p1", image02);

    // grayscale map conversion
    Mat image1, image2;
    cvtColor(image01, image1, CV_RGB2GRAY);
    cvtColor(image02, image2, CV_RGB2GRAY);

    // Extract feature points
    SiftFeatureDetector siftDetector(2000);  // Hessian matrix threshold: the larger the value, the fewer the points and the more accurate it is
    vector<KeyPoint> keyPoint1, keyPoint2;
    siftDetector.detect(image1, keyPoint1);
    siftDetector.detect(image2, keyPoint2);

    // Feature point description for the subsequent feature point matching
    SiftDescriptorExtractor SiftDescriptor;
    Mat imageDesc1, imageDesc2;
    SiftDescriptor.compute(image1, keyPoint1, imageDesc1);
    SiftDescriptor.compute(image2, keyPoint2, imageDesc2);

    // Obtain matching feature points and extract the optimal pair
    FlannBasedMatcher matcher;
    vector<DMatch> matchePoints;
    matcher.match(imageDesc1, imageDesc2, matchePoints, Mat());
    cout

> ~/.bashrc

If no error is reported after the command is executed, it means that ROS 2 has been installed successfully.

11.2.2 Running the Turtlesim

In this section, readers are able to experience ROS 2 for the first time through an example of the turtlesim simulation. 1. Start the turtlesim emulator Start the turtlesim simulator by entering the following command in the terminal: $ ros2 run turtlesim turtlesim_node

After running the command, open a simulator interface, and a turtlesim will be randomly generated inside, as shown in Fig. 11.6. The name of the turtlesim and its location in the simulator coordinate system can be seen in the terminal (if you do not have the turtlesim simulator package, you can install it with the command sudo apt install ros-foxy-turtlesim). The terminal prints the following information about the location of the turtlesim. [INFO] [1641968936.339206332] [turtlesim]: Starting turtlesim with node name /turtlesim [INFO] [1641968936.344071096] [turtlesim]: Spawning turtle [turtle1] at x= [5.544445], y=[5.544445], theta=[0.000000] Fig. 11.6 Turtlesim simulator Interface




Fig. 11.7 Controlling the movement of turtles

2. Control the motion of turtlesim Open a new terminal and enter the following command: $ ros2 run turtlesim turtle_teleop_key

As shown in Fig. 11.7, the left and right arrow keys and the up and down arrow keys can be used to control the turtlesim’s rotation and forward and backward motion, respectively, in the new terminal window, which is the same as in ROS 1. The difference is that ROS 2 has an additional fixed-angle rotation action control, and the turtlesim can be rotated at a fixed angle by entering “G/B/V/C.” 3. Analyze the transmission of information Open a new terminal and enter the following command: $ rqt_graph

The ROS Nodes graphical interface can be opened. Select “Nodes/Topics(all)” and click on the refresh button to see the node and topic information corresponding to the turtlesim case, as well as the direction of information delivery with arrows. Among this information, “/turtlesim” is the graphical turtle node, “/teleop_turtle” is the turtlesim motion control node, and the two are related by the topic “/turtle1/cmd_ vel,” which is the turtlesim motion control topic. The rest of the topics control the fixed angle rotation action of the turtlesim, as shown in Fig. 11.8. 4. Install rqt graphical tool to spawn new turtlesim Install the rqt tool in Linux with the following command:

11.2



Fig. 11.8 Node topic analysis of the turtlesim case

$ sudo apt update $ sudo apt install ~nros-foxy-rqt*

After the installation is complete, run the graphical interface with the command “rqt” as shown in Fig. 11.9. The first time the interface is opened, it is empty; select “Plugins” > “Services” > “Service Caller.” This plugin is used to publish service requests. After clicking the refresh button, all the current services of the system can be seen. Select the “/spawn” service, double click on the data item below, fill in the name “turtle2” and location of the newly generated turtlesim, and click “Call” to send a service request to generate a new turtlesim. Note that the name cannot be the same as the name of the existing turtlesim or an error will be reported. After requesting the service, a new turtlesim and the topic and service corresponding to “turtle2” will be generated, as shown in Fig. 11.10. Next, control the motion of the second turtlesim. The simulator’s default keyboard control node can only publish turtle1’s topic, but the remapping mechanism of the ROS enables the renaming of the topic. In a new terminal, enter the following command:




Fig. 11.9 Rqt service request interface

Fig. 11.10 Rqt service request screen

$ ros2 run turtlesim turtle_teleop_key --ros-args --remap turtle1/cmd_vel:=turtle2/cmd_vel

This command changes the topic name turtle1/cmd_vel to turtle2/cmd_vel, and then the motion of the turtle2 turtlesim can be controlled through that node.

11.2




11.2.3 ROS 2 Command Line

The main command line entry for ROS 2 is "ros2," which is invoked in the form "ros2 <command>." The various commands are described as follows:

action: various action-related subcommands
bag: various rosbag-related subcommands
component: various component-related subcommands
daemon: various daemon-related subcommands
doctor: check ROS settings and other potential problems
interface: display information about the ROS interfaces
launch: run a launch file
lifecycle: various lifecycle-related subcommands
multicast: various multicast-related subcommands
node: various node-related subcommands
param: various parameter (param)-related subcommands
pkg: various package-related subcommands
run: run a specific executable file in a package
security: various security-related subcommands
service: various service-related subcommands
topic: various topic-related subcommands
wtf: alias of the doctor command

The ROS 2 command line is very similar to the ROS 1 command line in terms of functionality. Among the above commands, the following frequently used commands need to be noted. “ros2 node”: outputs node-related information. “ros2 run”: runs a package node/executable file and is the basic run command in ROS 2. “ros2 topic”: operates topic information, e.g., use ros2 topic list to see the names of all currently running topics, use ros2 topic info to print topic-related information, use ros2 topic pub to publish new topics, etc. “ros2 service”: operates service information, e.g., use ros2 service call to request a service, use ros2 service list to view the names of all currently running services, use ros2 service type to output the type of the specified service, etc. “ros2 param”: operates parameter server information, such as adding parameters and modifying parameters. “ros2 launch”: runs a launch file that integrates multiple package nodes/executables files in a single file to run simultaneously, which is convenient when running large projects. “ros2 action”: action is one of the communication types in ROS 2 and is suitable for long-running tasks. It consists of three parts: goal, feedback, and result. Action is built on topics and services. Its functionality is similar to that of a service, except




that action can be cancelled. Action also provides stable feedback, which is different from a service that returns a single response.
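As a brief illustration of these commands (the topic, service, and values are examples, not from the text), the turtlesim from Sect. 11.2.2 can be driven directly from the command line:

$ ros2 topic list
$ ros2 topic pub --once /turtle1/cmd_vel geometry_msgs/msg/Twist "{linear: {x: 2.0, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 1.8}}"
$ ros2 service call /spawn turtlesim/srv/Spawn "{x: 2.0, y: 2.0, theta: 0.0, name: 'turtle3'}"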

11.3

ROS 2 Programming Foundation

11.3.1

ROS 2 Programming Method

The programming method of ROS 2 is similar to that of ROS 1 and consists of three main communication mechanisms: topic communication, service communication, and parameter server communication. The basic principles and ideas of these three communication mechanisms do not differ from those of ROS 1; however, their implementation is different. For more information, please visit the ROS Wiki website at http://wiki.ros.org/ and the official ROS 2 tutorials, for example, the official ROS 2 Foxy Fitzroy version tutorial link: http://docs.ros.org/en/foxy/ Tutorials.html. To provide readers with a clear understanding of ROS 2 programming details, this section introduces ROS 2 programming methods using topic communication as an example—creating an ROS 2 workspace, creating an ROS 2 package, and creating a simple subscriber and publisher (C++). The differences between ROS 2 and ROS 1 programming will be covered in the next section.

11.3.1.1

Create an ROS 2 Workspace

A workspace in ROS 2 is a space for developing specific application projects. All packages are sourced, configured, and compiled in this space. When there are multiple workspaces, the hierarchy (similar to priority) needs to be set between the workspaces to differentiate the running priority of packages of the same name in different workspaces. By default, ROS starts the upper workspace (overlay). The packages in the overlay will override the packages with the same name in the lower workspace (underlay). Hence, when multiple workspaces exist, pay attention to setting the hierarchy of the workspace. The process of creating an ROS 2 workspace is as follows. 1. Set up ROS 2 environment variables $ source /opt/ros/foxy/setup.bash

The workspace hierarchy is configured through environment variables. Simply put, the next configured workspace is placed on top of the previous configured workspace. Packages in the ROS 2 installation path are typically set to the lowest workspace.

11.3

ROS 2 Programming Foundation


2. Create a new folder $ mkdir -p ~/dev_ws/src $ cd ~/dev_ws/src

dev_ws is the workspace created, and src is the location where the packagerelated files are placed. 3. Compile the workspace The compilation of workspace needs to be run in the root directory of the workspace: dev_ws $ cd ... $ colcon build

If colcon is not installed, the following command can be used to install it: $ sudo apt install python3-colcon-common-extensions

Colcon is a compilation tool for ROS 2, similar to catkin in ROS 1. The command can also be followed by some common parameter suffixes, such as “-packages-up-to”: compile the specified package instead of the whole workspace; “-- symlink-install”: save time for rebuilding python scripts each time; and “-event-handlers console_direct+”: display detailed logs during the compilation process in the terminal. After the compilation is completed, new “build,” “install,” and “log” folders are created in the dev_ws workspace. The “install” folder is where all future node launch files and scripts will be run. There are two important files in this folder, “local_setup. sh” and “setup.sh.” The former will only set environment variables related to the package in the current workspace, while the latter will also set environment variables of the other underlays in that workspace. When the “local_setup.sh” environment variable is set, only the package files in that workspace can be called; when the “setup.sh” environment variable is set, the environment variables in that workspace and the ROS 2 installation path will be set at the same time. In addition, the workspace will be set as the overlay, and the functions will be called with the package files in this workspace first; then, the package files in the ROS 2 installation path will be called. The command for setting environment variables in the user workspace is as follows: $ . install/local_setup.sh


or

$ . install/setup.sh
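Putting the build options and the overlay/underlay mechanism together, the following command sequence is a minimal sketch of a typical session; it assumes the ~/dev_ws workspace created above, and the build options shown are only examples and may be omitted:

$ source /opt/ros/foxy/setup.bash      # underlay: the ROS 2 installation
$ cd ~/dev_ws
$ colcon build --symlink-install --event-handlers console_direct+
$ . install/local_setup.sh             # overlay: dev_ws now sits on top

After these commands, a package that exists both in dev_ws and in the ROS 2 installation path resolves to the dev_ws version, because dev_ws was sourced last and is therefore the overlay.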

11.3.1.2 Create ROS 2 Packages

Packages are the basic containers for organizing code in ROS 2; they facilitate compilation, installation, and development. Each package is a relatively complete unit that implements a specific function. Packages in ROS 2 can be compiled using either CMake or Python. A package itself is a folder, but unlike an ordinary folder, each CMake package contains at least two files: package.xml, the package description, and CMakeLists.txt, which describes the rules for CMake to compile the package. A workspace can contain multiple packages, either CMake packages or Python packages; however, packages cannot be nested. The structure of a typical workspace is as follows:

workspace_folder/
    src/
        package_1/
            CMakeLists.txt
            package.xml
        package_2/
            setup.py
            package.xml
            resource/package_2
        ......
        package_n/
            CMakeLists.txt
            package.xml

Creating a package needs to be done inside the workspace. Use the following commands to create a new package in the src folder of the previously created dev_ws workspace, where <package_name> is the name of the package to be created:

$ cd ~/dev_ws/src
$ ros2 pkg create --build-type ament_cmake <package_name>

When the command is executed, a package named "<package_name>" is created, which contains an "include" folder, a "src" folder, a "CMakeLists.txt" file, and a "package.xml" file. The library files written by the user are saved in the "include" folder, and the .cpp files are saved in the "src" folder.


The command for creating packages also allows the user to set a node name and automatically generate a helloworld routine. For example, the following command generates a package named "my_package" containing a node named "my_node" with an automatically generated helloworld routine code:

$ ros2 pkg create --build-type ament_cmake --node-name my_node my_package

The above commands create CMake packages. If Python packages are desired, change "ament_cmake" to "ament_python." Note that the workspace needs to be recompiled after a package is modified. To compile only a specific package, the following command can be used, where "my_package" is the name of the package to be compiled:

$ colcon build --packages-select my_package

After the workspace and the package have been created, the file tree structure is as follows:

dev_ws/
    build/
    install/
    log/
    src/
        my_package/
            CMakeLists.txt
            package.xml
            include/
            src/
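As an optional shortcut (not part of the commands above), ros2 pkg create can also pre-fill package dependencies, which saves editing package.xml by hand later. A minimal sketch, using a hypothetical package name demo_package so that it does not clash with the my_package already created:

$ cd ~/dev_ws/src
$ ros2 pkg create --build-type ament_cmake --dependencies rclcpp std_msgs demo_package

The --dependencies option records the listed packages (here rclcpp and std_msgs, which the publisher and subscriber below require) as <depend> entries in the generated package.xml.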

11.3.1.3 Create a Simple Subscriber and Publisher (C++)

Internode communication through topic publishing and subscription is one of the most important ROS communication mechanisms, and a good understanding of how to use it is very helpful for understanding ROS 2 programming. After completing step (1) and step (2) above, the "dev_ws" workspace and the "my_package" package have been built; now the publisher and subscriber can be created.

1. Publisher

Go to the "my_package/src" directory, create the "HelloWorld_Pub.cpp" file, and copy the following code into it.


#include <chrono>        // some standard C++ header files
#include <functional>
#include <memory>
#include <string>

#include "rclcpp/rclcpp.hpp"        // common C++ interface header file in ROS 2
#include "std_msgs/msg/string.hpp"  // header file for String messages in ROS 2

using namespace std::chrono_literals;

/* This example creates a subclass of Node and uses std::bind() to register a
 * member function as a callback from the timer. */

// Create a node class MinimalPublisher that triggers a callback function at regular intervals to publish information
class MinimalPublisher : public rclcpp::Node
{
public:
  MinimalPublisher()
  : Node("minimal_publisher"), count_(0)
  {
    publisher_ = this->create_publisher<std_msgs::msg::String>("topic", 10);
    timer_ = this->create_wall_timer(
      500ms, std::bind(&MinimalPublisher::timer_callback, this));
  }

private:
  // Callback function: for external release of information
  void timer_callback()
  {
    auto message = std_msgs::msg::String();
    message.data = "Hello, world! " + std::to_string(count_++);
    RCLCPP_INFO(this->get_logger(), "Publishing: '%s'", message.data.c_str());
    publisher_->publish(message);
  }
  // Create timer, publisher, and count variables
  rclcpp::TimerBase::SharedPtr timer_;
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;
  size_t count_;
};

int main(int argc, char * argv[])
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MinimalPublisher>());
  rclcpp::shutdown();
  return 0;
}


MinimalPublisher() is the constructor of the node class MinimalPublisher. It initializes a node named "minimal_publisher" and creates a publisher for a topic with a message type of String and a message queue length of 10. Then, it creates a timer, timer_, that triggers every 500 ms and runs the timer_callback function. The main job of the callback function timer_callback is to publish one topic message per trigger: the string saved in the message is "Hello, world!" plus a count value; a log message is then printed through the RCLCPP_INFO macro, and the message is published via the publish method of the publisher. In the main function, the ROS 2 node is first initialized; then std::make_shared creates the MinimalPublisher object and rclcpp::spin enters a spin loop on it. When spinning ends, the node is shut down and the program exits.

2. Subscriber

Similarly, in the "my_package/src" directory, create the "HelloWorld_Sub.cpp" file and copy the following code into it.

#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

using std::placeholders::_1;

class MinimalSubscriber : public rclcpp::Node
{
public:
  MinimalSubscriber()
  : Node("minimal_subscriber")
  {
    subscription_ = this->create_subscription<std_msgs::msg::String>(
      "topic", 10, std::bind(&MinimalSubscriber::topic_callback, this, _1));
  }

private:
  void topic_callback(const std_msgs::msg::String::SharedPtr msg) const
  {
    RCLCPP_INFO(this->get_logger(), "I heard: '%s'", msg->data.c_str());
  }
  rclcpp::Subscription<std_msgs::msg::String>::SharedPtr subscription_;
};

int main(int argc, char * argv[])
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MinimalSubscriber>());
  rclcpp::shutdown();
  return 0;
}


The overall code flow of the subscriber is similar to that of the publisher. In the constructor, the node "minimal_subscriber" is initialized and a subscriber is created that subscribes to String messages on the topic named "topic," with a message queue length of 10. When data is received, the callback function topic_callback is entered, which prints the received String message via RCLCPP_INFO. The content of the main function is almost identical to that of the publisher and will not be repeated.

3. Set up dependencies and compilation rules

First, open the package.xml file of the package and complete the contents marked with TODO, for example:

<description>Examples of minimal publisher/subscriber using rclcpp</description>
...
<license>Apache License 2.0</license>

Then add the required dependencies after the ament_cmake build tool dependency:

<depend>rclcpp</depend>
<depend>std_msgs</depend>

With these dependencies declared, the package can be compiled and the two example nodes can be run, as sketched below. Note also, as an example of the API differences between the two generations, that a ROS 2 publisher is created with create_publisher<std_msgs::msg::String>("chatter", 100), whereas ROS 1 creates the equivalent publisher through the advertise method of ros::NodeHandle.
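To try the two nodes out, the package must first be compiled and the workspace environment sourced. The following is a minimal sketch; it assumes that the CMakeLists.txt of my_package defines executable targets for the two source files named talker and listener (hypothetical target names, set up with add_executable and ament_target_dependencies):

$ cd ~/dev_ws
$ colcon build --packages-select my_package
$ . install/setup.sh
$ ros2 run my_package talker      # publisher node (hypothetical target name)

In a second terminal, source the workspace again and start the subscriber:

$ cd ~/dev_ws
$ . install/setup.sh
$ ros2 run my_package listener    # subscriber node (hypothetical target name)

The published messages can also be inspected directly with ros2 topic echo /topic.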

In general, the programming ideas of ROS 2 remain basically the same as those of ROS 1; only the changes in the API interfaces need to be noted. Because of the large number of API interfaces in ROS programming, it is not possible to cover them all in this book. Readers who want to know more can refer to the ROS 2 documentation website: https://docs.ros.org/

11.4 Summary

This chapter introduced ROS 2, the new-generation design architecture of ROS. First, the origin, design ideas, and design framework of ROS 2 were presented. Then, the way ROS 2 is used was introduced by running the turtlesim example. Finally, through topic communication in ROS 2, readers can become familiar with ROS 2 programming methods and the programming differences between ROS 2 and ROS 1, paving the way for further study.

Further Reading


The ament² compilation system in ROS 2 comprises the "ament_cmake" build system and the "ament_tools" meta build tool. ament is an evolution of catkin, and the word "ament" is actually a synonym of "catkin." ament is a meta build system that improves the building of applications which are split into separate packages. It consists of two major parts:

• a build system (e.g., CMake, Python setuptools) to configure, build, and install a single package
• a tool to invoke the build of individual packages in their topological order

The tool relies on meta information about the packages to determine their dependencies and their build type. This meta information is defined in a manifest file called package.xml, which is specified in REP 140. Each package is built separately with its own build system. In order to make the output of one package available to other packages, each package can extend the environment in a way that allows downstream packages to find and use its artifacts and resources. If the resulting artifacts are installed into /usr, for example, it might not be necessary to alter the environment at all, since these folders are commonly searched by various tools.

ament_tools is a Python package that provides the command-line tool to build, test, install, and uninstall packages. It is similar to catkin_tools and builds each package in a workspace in topological order. Support for different build systems is integrated through extension points, which allows support for other build types to be added without changing the ament tool itself. While it currently does not build packages in parallel, that feature will be added in the future to speed up the build process. The goal is to reuse common functionality from catkin_tools by making it available through a third package, which can be used by both tools.

Each package can utilize a different build system to perform the steps of configuring, building, and installing. The build type is defined in each package manifest using the build_type tag in the export section. Currently, CMake and Python are supported, but support for others (e.g., autotools, plain Makefiles) will likely be added in the future.

catkin has been used for ROS 1 since ROS Groovy. It was developed as a replacement for rosbuild, which was used from the beginning of ROS; therefore, various features of catkin were designed to be similar to rosbuild. catkin is a single monolithic package providing various features and functionalities. For example, it integrates support for one particular tool, which makes it very difficult to also optionally support an alternative, since only one tool can be selected at a time. ament, on the other hand, is designed to be very modular: almost every feature is provided by a separate package and can be optionally used. The CMake part of ament also features an extension point system to extend it with further functionality.

² http://design.ros2.org/articles/ament.html; https://docs.ros.org/en/foxy/How-To-Guides/Ament-CMake-Documentation.html

Exercises

1. Can mobile robots running ROS 1 be used in highway scenarios with high real-time requirements? What problems exist?
2. How many versions of ROS 2 are currently available? What are the differences between ROS 2 and ROS 1 in terms of design framework? What are the differences in programming between ROS 2 and ROS 1?
3. What is DDS (Data Distribution Service), and what is its communication mechanism in ROS 2? What is its role?
4. Please design a program in ROS 2 to realize the function shown in the following figure: the small tank walks along an elliptical path in a two-dimensional environment. Please explain the design idea, the flow chart, and the core procedures.
5. With reference to the official ROS 2 documentation, write a node that realizes service communication in ROS 2.

Appendix A: Abbreviations

AMCL       Adaptive Monte Carlo Localization
BoW        Bag-of-Words
BRIEF      Binary Robust Independent Elementary Features
BA         Bundle Adjustment
Caffe      Convolutional Architecture for Fast Feature Embedding
CNN        Convolution Neural Network
CPS        Cyber-Physical Systems
DBOW       Bags of binary words for fast place recognition in image sequence (an open-source software library based on a hierarchical tree, using the FAST algorithm for feature detection and BRIEF as the descriptor)
EKF        Extended Kalman Filter
FAST       Features from Accelerated Segment Test
g2o        General Graph Optimization
GUI        Graphical User Interface
HOG        Histogram of Oriented Gradient
IMU        Inertial Measurement Unit
IOU        Intersection Over Union
LBP        Local Binary Pattern
LARK       Locally Adaptive Regression Kernels
LIO-SAM    Lidar Inertial Odometry via Smoothing and Mapping
LOAM       Lidar Odometry and Mapping
LeGO-LOAM  Lightweight and Ground-Optimized LOAM
NMS        Non-Maximum Suppression
ORB        Oriented FAST and Rotated BRIEF
PF         Particle Filter
RANSAC     RANdom SAmple Consensus
R-CNN      Region-Convolution Neural Network
ROS        Robot Operating System


RPN        Region Proposal Network
SFM        Structure from Motion
SIFT       Scale-Invariant Feature Transform
SLAM       Simultaneous Localization and Mapping
SPP        Spatial Pyramid Pooling
SSD        Single Shot multi-box Detector
SURF       Speed-Up Robust Features
SVO        Semi-direct Visual Odometry
ToF        Time-of-Flight
UKF        Unscented Kalman Filter
URDF       Unified Robot Description Format
XML        eXtensible Markup Language

Appendix B: Index of Tasks in Each Chapter

No.  Chapter  Task name                                                                                    Page
1    1        Task 1: Remote Desktop Connection: Using the Spark Robot Platform                            29
2    2        Task 1: Run a Simple ROS Program                                                             76
3    2        Task 2: Run the Turtlesim                                                                    92
4    2        Task 3: Get Spark in Motion                                                                  105
5    3        Task 1: Controlling the Simulated Robot to Synchronize Its Motion with the Real Robot in ROS 158
6    3        Task 2: Observing the Environment from Robot's Perspective                                   166
7    4        Task 1: Robot Coordinate Transformation                                                      173
8    4        Task 2: Gmapping-Based 2D LiDAR Mapping                                                      188
9    4        Task 3: Hector SLAM-Based 2D LiDAR Mapping                                                   193
10   5        Task 1: Mobile Robot Localization                                                            208
11   5        Task 2: Mobile Robot Navigation                                                              236
12   6        Task 1: Filter-Based SLAM                                                                    256
13   6        Task 2: SLAM Based on Graph Optimization                                                     262
14   7        Task 1: Kinematic Simulation of Manipulator                                                  293
15   7        Task 2: Digital Twins: Synchronized Motion of Physical and Simulated Manipulators            315
16   8        Task 1: Image Acquisition                                                                    327
17   8        Task 2: Monocular Camera-Based Object Recognition                                            364
18   9        Task 1: Depth Camera Driver Installation                                                     382
19   9        Task 2: Object Recognition Based on Convolutional Neural Network                             393
20   9        Task 3: Extrinsic Parameters Calibration of Robot Hand–Eye Vision                            398
21   9        Task 4: Vision-Based Manipulator Object Grasping                                             415
22   9        Task 5: Vision-Based Mobile Manipulator Pointing to Object                                   418
23   10       Task 1: Run ORB-SLAM2 on a Monocular Dataset                                                 435
24   10       Task 2: Scene Mapping Based on Depth Camera                                                  457