Error Compensation for Industrial Robots (ISBNs 9811961670, 9789811961670)

This book highlights the basic theories and key technologies of error compensation for industrial robots.


English · 246 pages · 2022


Table of contents:
Preface
Acknowledgements
Contents
Part I Theories
1 Introduction
1.1 Background
1.2 What is Robot Accuracy
1.3 Why Error Compensation
1.4 Early Investigations and Insights
1.4.1 Offline Calibration
1.4.2 Online Feedback
1.5 Summary
References
2 Kinematic Modeling
2.1 Introduction
2.2 Pose Description and Transformation
2.2.1 Descriptions of Position and Posture
2.2.2 Translation and Rotation
2.3 RPY Angle and Euler Angle
2.4 Forward Kinematics
2.4.1 Link Description and Link Frame
2.4.2 Link Transformation and Forward Kinematic Model
2.4.3 Forward Kinematic Model of a Typical KUKA Industrial Robot
2.5 Inverse Kinematics
2.5.1 Uniquely Closed Solution with Joint Constraints
2.5.2 Inverse Kinematic Model of a Typical KUKA Industrial Robot
2.6 Error Modeling
2.6.1 Differential Transformation
2.6.2 Differential Transformation of Consecutive Links
2.6.3 Kinematics Error Model
2.7 Summary
References
3 Positioning Error Compensation Using Kinematic Calibration
3.1 Introduction
3.2 Observability-Index-Based Random Sampling Method
3.2.1 Observability Index of Robot Kinematic Parameters
3.2.2 Selection Method of the Sample Points
3.3 Uniform-Grid-Based Sampling Method
3.3.1 Optimal Grid Size
3.3.2 Sampling Point Planning Method
3.4 Kinematic Calibration Considering Robot Flexibility Error
3.4.1 Robot Flexibility Analysis
3.4.2 Establishment of Robot Flexibility Error Model
3.4.3 Robot Kinematic Error Model with Flexibility Error
3.5 Kinematic Calibration Using Variable Parametric Error
3.6 Parameter Identification Using L-M Algorithm
3.7 Verification of Error Compensation Performance
3.7.1 Kinematic Calibration with Robot Flexibility Error
3.7.2 Error Compensation Using Variable Parametric Error
3.8 Summary
References
4 Error-Similarity-Based Positioning Error Compensation
4.1 Introduction
4.2 Similarity of Robot Positioning Error
4.2.1 Qualitative Analysis of Error Similarity
4.2.2 Quantitative Analysis of Error Similarity
4.2.3 Numerical Simulation and Discussion
4.3 Error Compensation Based on Inverse Distance Weighting and Error Similarity
4.3.1 Inverse Distance Weighting Interpolation Method
4.3.2 Error Compensation Method Combined IDW with Error Similarity
4.3.3 Numerical Simulation and Discussion
4.4 Error Compensation Based on Linear Unbiased Optimal Estimation and Error Similarity
4.4.1 Robot Positioning Error Mapping Based on Error Similarity
4.4.2 Linear Unbiased Optimal Estimation of Robot Positioning Error
4.4.3 Numerical Simulation and Discussion
4.4.4 Error Compensation
4.5 Optimal Sampling Based on Error Similarity
4.5.1 Mathematical Model of Optimal Sampling Points
4.5.2 Multi-Objective Optimization and Non-Inferior Solution
4.5.3 Genetic Algorithm and NSGA-II
4.5.4 Multi-objective Optimization of Optimal Sampling Points of Robots Based on NSGA-II
4.6 Experimental Verification
4.6.1 Experimental Platform
4.6.2 Experimental Verification of the Positioning Error Similarity
4.6.3 Experimental Verification of Error Compensation Based on Inverse Distance Weighting and Error Similarity
4.6.4 Experimental Verification of Error Compensation Based on Linear Unbiased Optimal Estimation and Error Similarity
4.7 Summary
References
5 Joint Space Closed-Loop Feedback
5.1 Introduction
5.2 Positioning Error Estimation
5.2.1 Error Estimation Model of Chebyshev Polynomial
5.2.2 Identification of Chebyshev Coefficients
5.2.3 Mapping Model
5.3 Effect of Joint Backlash on Positioning Error
5.3.1 Variation Law of the Joint Backlash
5.3.2 Multi-directional Positioning Accuracy Variation
5.4 Error Compensation Using Feedforward and Feedback Loops
5.5 Experimental Verification and Analysis
5.5.1 Experimental Setup
5.5.2 Error Estimation Experiment
5.5.3 Error Compensation Experiment
5.6 Summary
References
6 Cartesian Space Closed-Loop Feedback
6.1 Introduction
6.2 Pose Measurement Using Binocular Vision Sensor
6.2.1 Description of Frame
6.2.2 Pose Measurement Principle Based on Binocular Vision
6.2.3 Influence of the Frame FE on Measurement Accuracy
6.2.4 Pose Estimation Using Kalman Filtering
6.3 Vision-Guided Control System
6.4 Experimental Verification
6.4.1 Experimental Platform
6.4.2 Kalman-Filtering-Based Estimation
6.4.3 No-Load Experiment
6.5 Summary
References
Part II Applications
7 Applications in Robotic Drilling
7.1 Introduction
7.2 Robotic Drilling System
7.2.1 Hardware
7.2.2 Software
7.3 Establishment of Frames
7.3.1 World Frame
7.3.2 Robot Base Frame
7.3.3 Robot Flange Frame
7.3.4 Tool Frame
7.3.5 Product Frame
7.3.6 Transformation of Frames
7.4 Drilling Applications
7.4.1 Error-Similarity-Based Error Compensation
7.4.2 Joint Space Closed-Loop Feedback
7.4.3 Cartesian Space Closed-Loop Feedback
7.5 Summary
8 Applications in Robotic Milling
8.1 Introduction
8.2 Robotic Milling System
8.3 Milling on Aluminum Alloy Part
8.3.1 Line Milling
8.3.2 Arc Milling
8.4 Milling on Cylinder Head for Car Engine
8.4.1 Line Milling
8.4.2 Plane Milling
8.5 Edge Milling on Composite Shell
8.6 Summary

Wenhe Liao · Bo Li · Wei Tian · Pengcheng Li

Error Compensation for Industrial Robots


Wenhe Liao, Nanjing University of Science and Technology, Nanjing, Jiangsu, China

Bo Li, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu, China

Wei Tian, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu, China

Pengcheng Li, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu, China

ISBN 978-981-19-6167-0
ISBN 978-981-19-6168-7 (eBook)
https://doi.org/10.1007/978-981-19-6168-7

Jointly published with Science Press. The print edition is not for sale in China mainland. Customers from China mainland please order the print book from Science Press. ISBN of the Co-Publisher's edition: 9787030629036

© Science Press 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publishers, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publishers nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publishers remain neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore.

Preface

With the development of robot technology, industrial robots have been increasingly applied to aerospace and other high-end manufacturing fields, rather than being limited to the automotive, electronics and electrical industries. The aviation manufacturing industry has ever-higher requirements for the quality, efficiency and service life of aircraft, and realizing digital, flexible and intelligent aircraft manufacturing has become an inevitable trend in its development. At present, drilling and riveting processes in aviation manufacturing are still dominated by manual work, which is not only inefficient but also results in unstable machining quality due to the uneven skill levels of individual workers. More importantly, manual operations can no longer meet technical indicators such as the positioning accuracy and normal accuracy of the machined products. The use of automatic machining technology has become an inevitable choice for today's aviation manufacturing, and the automatic machining system based on industrial robots is the current research focus. As is well known, industrial robots have high repeatability but insufficient positioning accuracy, so robotic machining cannot meet the precision requirements of aviation products. It has therefore become an urgent problem to explore feasible and reliable positioning error compensation methods and to improve the accuracy of industrial robots. Research on the error compensation theories and applications of industrial robots is of great significance and practical value for promoting the development and innovation of aviation manufacturing technology. The objective of this book is to study error compensation technology for improving the accuracy of industrial robots.
In summary, the book includes theoretical analysis of error compensation for industrial robots in Part I: Theories and applications of error compensation in robotic drilling and milling in Part II: Applications. This book is organized as follows. Chapter 1 briefly introduces the research status of accuracy and error compensation technology for industrial robots. The forward and inverse kinematic models and error analysis of the robot are introduced in Chap. 2. In Chap. 3, error compensation using the kinematic calibration technique is investigated, together with two sampling methods. Chapter 4 proposes a compensation method based on error similarity analysis; unlike methods built on a complex kinematic model, positioning error estimation and compensation are realized by constructing an error mapping relationship. In Chap. 5, a robot accuracy improvement method is developed using feedforward compensation and feedback control, considering the influence of joint backlash. In Chap. 6, an error compensation technique using visual guidance is presented to effectively improve the pose accuracy of industrial robots. Applications of the error compensation technology to robotic drilling and milling are exhibited in Chaps. 7 and 8, respectively. The ideas in this book come from the research achievements of the authors' team in robot accuracy improvement over the past ten years and can serve as a reference for studies on robotics and advanced manufacturing using industrial robots.

Nanjing, China
January 2022

Wenhe Liao · Bo Li · Wei Tian · Pengcheng Li

Acknowledgements

Parts of this book were greatly inspired by discussions with collaborators and the authors' students. We would like to express our thanks for their contributions, which have definitely improved the manuscript. The students who participated in the work of this book include Dr. Yuanfan Zeng, Dr. Wei Zhou, Dr. Guangyu Cui, Dr. Yufei Li, Dr. Wei Zhao, Dr. Shengping Zhang, Ms. Wei Zhang, Ms. Ye Shen, Ms. Chuan Xu, Ms. Kaizhuo Xi, Ms. Jun Wang, Ms. Shuang Liang, Ms. Chufan Zhang and Ms. Song Wei. The editor Li from Science Press strongly supported the publication of this book. We would like to dedicate this book to our families. The support of our research by the National Natural Science Foundation of China (Nos. 52075256, 52005254 and 51875287), the Natural Science Foundation of Jiangsu Province (No. BK20190417) and the National Key R&D Program of China (No. 2018YFB1306800) is gratefully acknowledged.



Part I

Theories

Chapter 1

Introduction

1.1 Background

Currently, as production resources are expected to react rapidly to variations in the market environment while remaining flexible and efficient, the requirements for high-precision, flexible manufacturing equipment have been continually increasing in various industrial plants. Industrial robots, which incorporate multiple technologies such as computer science, mechanical engineering, electronic engineering, artificial intelligence, information sensing and control theory, are the product of multi-disciplinary intersections. With the maturity of industrial robot technology, they have become standard equipment widely used in industrial automation, and their technological level has become an important symbol of a country's level of industrial automation. The deep integration of robot technology and modern manufacturing technology will bring new vitality to existing products and technologies, enhance the comprehensive competitiveness of enterprises, and alleviate the crisis of labor shortage. Recently, due to their high degree of automation, flexibility and adaptability, industrial robots have been widely used in many traditional machining and manufacturing fields. For example, in the electronics and automotive industries, robots have become a necessary production tool owing to the variety and quantity of products. There are three reasons why robots are widely used in industrial countries: the first is to reduce labor costs; the second is to increase labor productivity; and the third, and most important, is to meet the needs of industrial transformation. With the improvement of their technical level, industrial robots have begun to enter high-precision manufacturing fields such as aerospace manufacturing, micro-machining and biomedicine.
Since the 1990s, the main robot-producing countries have developed flexible robot-integrated systems for specific industrial fields. Such systems, which take an industrial robot as the main body and combine it with peripheral manufacturing equipment and related software to serve a particular high-tech manufacturing industry, such as robotic drilling and


riveting, robotic welding and robotic fiber placement, etc., will definitely become a development direction of both the manufacturing and robot industries. As a leading sector of manufacturing, aviation manufacturing has always been a strategic industry for the national economy and national defense. In recent years, the aircraft manufacturing industry has demanded assembly technology that is high-quality, high-efficiency, low-cost and adaptable to small-batch, multi-model products. Aircraft assembly is the process in which aircraft parts or components are combined and connected to form higher-level assemblies or a complete aircraft according to the design requirements; it is an extremely important link in the aircraft manufacturing process. So far, aircraft assembly technology has evolved from manual assembly through semi-automatic and automated assembly to flexible assembly. Owing to the large size, complex shape, and large number of parts and connections of the product, assembly accounts for about 40–50% of the total workload of aircraft manufacturing. Improving the quality and efficiency of aircraft assembly has thus become one of the research focuses of today's aviation manufacturing industry. At present, drilling and riveting in the aviation manufacturing industry are still dominated by manual operations, which results in low efficiency and unstable assembly quality. Especially for advanced aircraft, manual operations can no longer meet requirements on technical indicators such as the positioning accuracy and normal accuracy of connecting holes. The use of automatic drilling and riveting technology has become an inevitable choice for aircraft assembly today, and the automatic drilling and riveting system based on industrial robots is a current research hotspot.
As automatic equipment integrating advanced technologies, industrial robots are well suited to automatic aircraft assembly tasks such as drilling, riveting, fiber placement and milling, as shown in Fig. 1.1. Compared with the large and expensive automatic drilling and riveting machines based on CNC machine tools, industrial robots offer high flexibility, high efficiency, and low manufacturing and maintenance costs. Giants in the aerospace field have already developed many robotic aircraft assembly systems, e.g., the Boeing 777 airframe assembly line by KUKA and Boeing (Fig. 1.2), and the early RACE (robot assembly cell) robotic automatic drilling and riveting system and the POWER RACE system by BROETJE Inc. (Fig. 1.3). The automatic drilling and riveting system based on industrial robots has thus gradually become an indispensable assistant in the aerospace industry. Unfortunately, low robot accuracy cannot satisfy the high-precision requirements of aircraft manufacturing, which has increasingly restricted the application of industrial robots in the aviation manufacturing industry. Therefore, it is necessary to vigorously develop flexible, automated assembly technology to improve the quality and efficiency of aircraft assembly and to reduce aircraft production costs. Flexible, automatic assembly technology based on industrial robots is one of the important development directions.


Fig. 1.1 Typical applications of industrial robots: a robotic drilling and riveting; b robotic milling; c robotic grinding; d robotic fiber placement

Fig. 1.2 Boeing 777 fuselage assembly line


Fig. 1.3 Robotic automatic drilling and riveting system by BROETJE Inc: a RACE; b power RACE

1.2 What is Robot Accuracy

Accuracy and repeatability are important performance indices of an industrial robot, comprising pose accuracy and pose repeatability as well as trajectory accuracy and trajectory repeatability [1]. Accuracy refers to the difference between the commanded position or orientation and the mean value of the attained positions or orientations. Repeatability refers to the dispersion of positions or orientations when the robot repeatedly approaches the same commanded position or orientation. The concepts of robot accuracy and repeatability are depicted in Fig. 1.4. Because accuracy is vitally important for applying industrial robots to high-precision machining, this book deals with accuracy rather than repeatability.

Pose accuracy (or trajectory accuracy) comprises positioning accuracy (or positioning trajectory accuracy) and orientation accuracy (or orientation trajectory accuracy). In practical applications, the normal accuracy of products to be machined is guaranteed by normal-alignment sensors mounted on the robot's end-effector, without considering the orientation of the robot itself; hence the focus of this book is the compensation of robot positioning errors. Without ambiguity, the positioning accuracy referred to in this book includes the positioning trajectory accuracy. Since positioning accuracy is also commonly called positioning error in the robotics community, the two terms are used interchangeably in this book.

The operating principle of an industrial robot is as follows: the desired pose or trajectory (reachable in the workspace) is entered into the robot controller; the joint values required at each instant are solved from the inverse kinematic model; and the motors drive the joints so that the end-effector moves to the desired pose. The positioning error is therefore affected by both the accuracy of the kinematic model and the control accuracy of the joint angles.
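As an illustration of these definitions, positioning accuracy and repeatability can be computed from repeated measurements of one commanded point. The sketch below follows ISO 9283-style formulas for the position components only (the standard's full definitions also cover orientation, which is omitted here):

```python
import numpy as np

def pose_accuracy_and_repeatability(command, attained):
    """ISO 9283-style positioning accuracy (AP) and repeatability (RP).

    command:  (3,) commanded position
    attained: (n, 3) positions actually reached over n repeated visits
    """
    attained = np.asarray(attained, dtype=float)
    barycenter = attained.mean(axis=0)
    # Accuracy: distance between the commanded position and the
    # mean (barycenter) of the attained positions.
    ap = np.linalg.norm(barycenter - np.asarray(command, dtype=float))
    # Repeatability: mean distance from the barycenter plus three
    # standard deviations of that distance.
    dists = np.linalg.norm(attained - barycenter, axis=1)
    rp = dists.mean() + 3.0 * dists.std(ddof=1)
    return ap, rp

# A tightly clustered set of attained points far from the command
# illustrates "high repeatability, low accuracy" (Fig. 1.4c).
ap, rp = pose_accuracy_and_repeatability(
    [0.0, 0.0, 0.0],
    [[1.0, 0.0, 0.0], [1.001, 0.0, 0.0], [0.999, 0.0, 0.0]])
```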
Each link of the robot has manufacturing and assembly errors, so the actual link parameters deviate from the nominal ones, eventually causing pose errors at the robot end-effector. On the other hand, control of each robot joint angle is realized by a motor connected through a reducer to drive the


Fig. 1.4 Robot accuracy and repeatability: a low accuracy with low repeatability; b high accuracy with low repeatability; c low accuracy with high repeatability; d high accuracy with high repeatability

joint link. Although the motor with its encoder can achieve high control accuracy, the joint reducer suffers from gear backlash, friction, and wear, so the actual joint position cannot be fed back to the robot control system. Furthermore, under the combined action of the robot's own weight and the payload, each joint deforms like a torsion spring, and this deformation error is also difficult to calculate. Both of these joint error sources contribute to the end-effector positioning error.

The above sources of error for an industrial robot can be divided into two main classes according to whether the error is easy to model: geometric errors and non-geometric errors. Geometric errors are due to the manufacturing process, assembly tolerances, and inaccuracies of the kinematic model. Non-geometric errors are quantities that are difficult to model, such as link deformation, joint backlash,

Fig. 1.5 Sources of errors and classifications of industrial robots

friction, and thermal effects caused by temperature change. The sources of error for an industrial robot are classified in Fig. 1.5.

1.3 Why Error Compensation

The accuracy requirements for industrial robots in aircraft assembly are determined by the programming methods of the robots and the precision requirements of aircraft products. Industrial robots mainly plan their tasks through online teaching or offline programming. Online teaching programming refers to manually guiding the robot end-effector, manually operating a mechanical simulation device, or using a teach pendant to make the robot complete the expected motion. Since the taught program can be stored and repeatedly reproduced, the positioning accuracy of such a system is mainly guaranteed by the repeatability of the robot.

The traditional manual teaching method via the teach pendant has many shortcomings in the aviation manufacturing field. First, the large size of aircraft parts and the large number of connecting holes result in a huge programming workload and low programming efficiency. Second, manual teaching requires online operation of the robot, which occupies a great deal of equipment time and raises costs. Most importantly, aircraft assembly demands high positioning accuracy of the robot; for example, the positioning accuracy of holes in aircraft components is required to be within ± 0.5 mm, which is difficult to achieve by manual teaching. Offline programming technology uses CAD/CAM software to plan the robot's work according to the geometric models of the robot and the product to be machined, which can effectively solve the above-mentioned problems. Therefore, in the field of aircraft assembly, the programming of industrial robots must rely on offline programming technology. To apply offline programming effectively in aircraft assembly, industrial robots are required to have sufficiently high positioning accuracy.
This is because offline programming is executed by specifying the


absolute position of the end-effector in the machining coordinate system. However, industrial robots typically have only high repeatability, not sufficiently high positioning accuracy. With the development of technology, the repeatability of existing industrial robots has reached ± 0.1 mm, but the positioning accuracy is still at the level of 1–2 mm. In addition, external factors such as installation errors during deployment and motion errors of the end-effector itself may further reduce the positioning accuracy of the entire robot system, causing the robotic automatic drilling and riveting system to fail to meet the accuracy requirements of aircraft assembly. Therefore, error compensation technology that improves the positioning accuracy of industrial robots is essential for their application to aircraft assembly.

1.4 Early Investigations and Insights

To deal with the low accuracy of industrial robots, numerous compensation approaches have been developed to improve robot positioning accuracy. As with CNC machine tools, methods to improve the positioning accuracy of robots can be divided into error prevention and error compensation [2]. The former is dedicated to improving the design, machining, and assembly accuracy of the robot and the accuracy of the control system as much as possible. However, this approach places high demands on materials and processing equipment, resulting in excessively high production costs; in addition, factors such as structural wear during operation that cause positioning errors cannot be avoided, so the positioning accuracy cannot be guaranteed after long-term operation. The latter uses advanced measurement technology to identify the real value of each kinematic parameter in the robot model, and then improves the positioning accuracy by modifying the parameters in the robot controller or adding an external control algorithm. Compared with error prevention, error compensation is more economical and effective, and is also more widely used.

Much research has been conducted to improve robot positioning accuracy by error compensation, which is typically divided into offline calibration and online feedback. Offline calibration approaches can be categorized as kinematic-model and non-kinematic-model methods. The basic principle of the former is to obtain the robot's kinematic parametric errors through measurement and identification procedures, and then to modify the robot's kinematic model.
The disadvantage of this method is that the modeling and identification process is complicated, and in particular it cannot compensate for non-geometric errors, which usually degrades the compensation effect. Hence, non-kinematic-model calibration methods with higher accuracy and stronger versatility emerged. Commonly used non-kinematic-model calibration methods include spatial interpolation [3] and neural networks (NNs) [4]. The online feedback method adds an external detection device to guide the robot


end-effector to the desired pose, in which the current pose of the robot is provided by the external sensor online and then compared with the desired value.

1.4.1 Offline Calibration

1.4.1.1 Kinematic Calibration

Robot kinematic calibration usually follows four steps: modeling, measurement, identification, and correction [5]. First, establish a mathematical model describing the real robot more precisely; then, measure the positioning errors of certain sample points at the robot end-effector; next, identify the kinematic parameters in the model mathematically from the collected data; finally, implement the correction of the kinematic model in the robot controller.

(1) Kinematic modeling

Kinematic modeling is the basis of motion control, dynamic analysis, and offline programming, as well as error calibration. Various kinematic models have been developed for robot error compensation. The D-H model proposed by Denavit and Hartenberg [6, 7] is the most classic and widely used kinematic model in robotics: four kinematic parameters defined at each joint describe the coordinate transformation between consecutive joints, thereby establishing the kinematic model of the entire robot. Although the D-H model has a clear physical meaning and is simple to use, it cannot fully meet the requirements of robot kinematic calibration, because the corresponding kinematic error model is based on the assumption of small displacements. If the D-H convention is used to define the joint kinematic parameters, a singularity emerges when two consecutive joint axes are parallel or nearly parallel: small variations in some parameters cause abrupt changes in others, so the small-displacement assumption no longer holds. In view of these shortcomings of the D-H model, many researchers have proposed corresponding solutions.
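The classic D-H convention described above can be sketched as follows; the two-link parameters in the usage example are illustrative and do not correspond to any specific robot:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive link frames from the
    four classic D-H parameters (joint angle, offset, length, twist)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_table, joint_angles):
    """Chain the link transforms to obtain the end-effector pose.
    dh_table rows: (d, a, alpha); joint_angles: one theta per row."""
    T = np.eye(4)
    for (d, a, alpha), theta in zip(dh_table, joint_angles):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Planar two-link arm with unit link lengths (illustrative).
dh_table = [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
T_home = forward_kinematics(dh_table, [0.0, 0.0])        # arm stretched along x
T_bent = forward_kinematics(dh_table, [np.pi / 2, 0.0])  # first joint at 90 deg
```

A kinematic error model perturbs these four parameters per joint and relates the perturbations to the resulting end-effector error, which is exactly where the parallel-axis singularity discussed above arises.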
Hayati [8, 9] presented a modified D-H (MDH) model by adding a kinematic parameter rotating about the y-axis between consecutive parallel joints, which was widely adopted in kinematic calibration by later researchers [10–13]. Stone et al. [14–16] re-defined the rules for establishing the link frames and proposed the S model, in which each link is described by six parameters: three translational and three rotational. Once identified, the kinematic parameters defined by the S model can be converted into D-H parameters. Judd [17] proposed a "type-two" model containing one translational parameter and three rotational parameters, in contrast to the D-H model's two translational and two rotational parameters, which effectively solves the singularity problem. Zhuang et al. [18] argued that only


when completeness and parametric continuity are both satisfied can a kinematic model be applied to robot kinematic calibration. Based on this idea, they proposed the complete and parametrically continuous (CPC) model, which overcomes the abrupt changes of geometric parameters caused by small variations in real dimensions. Gupta [19] proposed a zero-reference model, which takes the zero position of each joint as a reference and uses the position and direction of each joint axis at the zero position to characterize the kinematics of the robot, thereby avoiding model singularity. The product of exponentials (POE) formula [20] has also been used in robot kinematic models for calibration [21, 22]. In summary, researchers have proposed a variety of solutions to the singularity problem of the D-H model. Nevertheless, the kinematic models used in the majority of robot controllers are still dominated by the D-H representation, by virtue of its clear physical meaning, simple modeling process, and strong universality.

(2) Measurement

Error measurement is a key procedure and one of the most cumbersome and time-consuming steps in robot error compensation. The effect of the error compensation depends directly on the accuracy of the error measurement, because the actual positioning error obtained by high-precision measurement equipment is the raw basis for parameter identification and error estimation. The quality of error measurement depends principally on the measurement tools and methods employed. In practice, robot errors are measured mainly with zero-position calibration tools, ballbars, theodolites, coordinate measuring machines (CMMs), and laser trackers. For the measurement of joint zero offset, existing methods normally rely on dial gauges or special tools provided by robot manufacturers.
For instance, the electronic measuring device (EMD) provided by KUKA can detect the mechanical zero position of each axis of the robot. However, this method can only calibrate the joint zero offset; it cannot measure the link lengths or other geometric parameters, and hence cannot measure or compensate for the positioning error of the robot end-effector.

The ballbar is an easy-to-operate comprehensive measurement tool that accurately measures the distance between the end-effector and a fixed point in the workspace through a radial displacement sensor. It is usually used to assess the accuracy of multi-axis CNC machine tools, where the straightness of a single axis, the perpendicularity of two axes, the lead-lag of the servo system, and the backlash can be analyzed from the arc trajectory. Compared with other measurement instruments, it has the advantages of simple operation, low cost, and high accuracy.

The theodolite is an instrument for measuring horizontal or vertical angles. Its measurement accuracy is high, but so is its cost, and it requires trained operators; moreover, the measurement results depend greatly on the environment and the operator's skill. The CMM is normally used to obtain the 3D coordinates of measured points. Since its mechanical components are similar to those of CNC machine tools, it possesses the advantages of high accuracy, simple operation, and high efficiency. However, considering the small


measuring range and the large size, the CMM is not suitable for measuring the positioning error of heavy-duty industrial robots. The laser tracker conveniently measures the coordinates of points using the optical interference principle. It has a large measurement range and high efficiency, though its cost is relatively high; its greatest shortcoming is that the laser beam may be interrupted during measurement, which places higher requirements on the site environment. Owing to its automatic tracking, ease of operation, and high precision, the laser tracker is still widely used in research on error compensation technology for industrial robots [12]. With the development of machine vision technology, vision-based measurements have been applied more and more widely to robot compensation [10].

(3) Identification

The purpose of identification is to estimate the variations of the kinematic parameters in the robot model through an optimization algorithm; i.e., the optimal estimate of the parametric errors minimizes the difference between the estimated and measured errors of the samples. The identification of parametric errors is thus a typical regression problem, which can be solved by various mathematical methods. The main techniques for estimating the parametric errors in a kinematic model are the least squares method [12, 23], Kalman filtering [11, 24], and artificial NNs [25, 26]. The least squares approach is the simplest and most common algorithm, with a small amount of computation and fast convergence. However, when the Jacobian matrix is near singular, numerical errors appear and the accuracy of parameter identification decreases.
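The least-squares identification step can be illustrated with a linearized error model e = J Δp, where J is the identification Jacobian stacking one 3-row block per sample pose and Δp is the vector of parametric errors. The data below are synthetic, purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated identification Jacobian: each block of 3 rows maps the
# parametric errors to the measured position error at one sample pose.
n_params, n_samples = 6, 40
J = rng.standard_normal((3 * n_samples, n_params))

# "True" parametric errors, unknown to the identification procedure.
dp_true = np.array([0.02, -0.01, 0.005, 0.03, -0.015, 0.008])

# Measured position errors, corrupted by small measurement noise.
e = J @ dp_true + 1e-4 * rng.standard_normal(3 * n_samples)

# Least-squares estimate of the parametric errors.
dp_hat, *_ = np.linalg.lstsq(J, e, rcond=None)
```

When J is ill-conditioned this direct solve degrades, which is precisely why the improved and damped variants discussed next (weighted, iterative, and Levenberg-Marquardt least squares) are preferred in practice.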
Therefore, most researchers adopt improved versions of the least squares method rather than using it directly to solve the kinematic parameter identification problem [27, 28]. Zak et al. [29] used weighted least squares to identify the parametric errors, where the weights were determined by statistical methods. Gao et al. [30] utilized an iterative least-squares method to identify the kinematic errors of a robot after eliminating redundant parameters. Filion [31] applied a portable photogrammetry system to measure the position error of the FANUC LR Mate 200iC, and iterative least squares was utilized to identify the robot's geometric errors. Among the improved least squares algorithms, the Levenberg-Marquardt (L-M) technique [32] is widely used in robot kinematic calibration. Its damping parameter changes continuously during the identification process, which remedies the Gauss-Newton algorithm's sensitivity to improper initial values and non-existent matrix inverses by combining the advantages of the Gauss-Newton algorithm and the gradient descent method. Motta et al. [10] used the L-M algorithm to identify the parametric errors of the IRB 2400 industrial robot. Lightcap et al. [33] proposed a method to identify the geometric parameters and the joint flexibility parameters of an industrial robot using the L-M algorithm, and applied it to the kinematic calibration of the Mitsubishi PA10-6CE robot. Ginani [34] used the L-M algorithm to iteratively solve for the kinematic parameters of the IRB 2000 robot. In addition to the L-M algorithm, other


improved least squares algorithms, such as the simulated annealing algorithm [35] and maximum likelihood estimation [36], are also used to identify the robot's kinematic parameters.

The extended Kalman filter (EKF) can also be adopted for the identification of the robot's kinematic parameters. Park et al. [24] used the EKF to estimate the kinematic errors of a 7-DOF robot and a 4-DOF robot, with both simulations and experiments. Based on a 5-DOF PUMA robot, Omodei et al. [37] compared nonlinear optimization, linear iteration, and the EKF for parametric identification, and found that the EKF obtains extra information about the uncertainty of the parametric errors and has higher efficiency. All in all, the EKF has certain advantages in convergence speed, reliability, and evaluation of the identified results.

Many intelligent algorithms for solving nonlinear problems, especially artificial NNs, are also adopted for identifying the kinematic parameters of industrial robots. An artificial NN is a mathematical or computational model mimicking the structure and function of biological neural networks, such as the animal central nervous system. In kinematic compensation, the optimal value of each parametric error in the robot's kinematic model can be obtained by training artificial NNs. Zhong et al. [4, 38] applied a recurrent NN and a multilayered feedforward NN to identify the parametric errors of a PUMA robot, and raised its positioning accuracy approximately to the level of its repeatability. Jang et al. [39] used a radial basis function NN to identify the geometric and non-geometric errors of a DR06 industrial robot. The disadvantage of using NNs for parameter identification is that the solution found is often a local extremum, resulting in low identification accuracy.

(4) Correction

Error correction is the final and decisive step in kinematic compensation.
The basic principle is to improve the positioning accuracy of the robot by modifying the control parameters or changing the control strategy after the kinematic parameters have been identified. Kinematic calibration can achieve a good compensation effect if the kinematic model closely fits the real one. However, it has some deficiencies. First, the sources of positioning error include not only the parametric errors of the robot's kinematics but also backlash, the mass distribution of the links, the load, thermal effects, and so on; establishing a model that reflects the actual motion of the robot is therefore difficult. Second, when the structure of the robot changes, the kinematic parameters may change as well, so the kinematic model must be re-established. Third, modifying the kinematic parameters requires a sufficiently open robot control system, but obtaining such openness from most robot manufacturers is usually expensive, which increases the cost of applying industrial robots to machining.

1.4.1.2 Non-kinematic Calibration

To deal with the drawbacks of kinematic calibration approaches, researchers proposed a model-less calibration technique, also called non-kinematic calibration, in which the industrial robot is regarded as a "black box" regardless of how the error sources affect the positioning accuracy. Specifically, the mapping between the positioning error and the desired pose of the end-effector is studied to establish an error database for compensating positioning errors over the robot's entire workspace. Commonly used non-kinematic-model calibration methods include spatial interpolation [3] and NNs [4].

Spatial interpolation estimates and compensates the errors of unknown points by polynomial fitting or interpolation using the data of known points in the robot's workspace. Alici [40] combined Fourier series and orthogonal polynomials to predict the positioning errors of the Motoman SK 120 robot, and experimental results proved the accuracy and universality of the approximation method. Bai [41] proposed a novel error estimation method that improves a robot's accuracy by fuzzy interpolation of known point data; simulation results showed that the compensation accuracy of this method was higher than that of a kinematic calibration method. Zhu [42] presented a bilinear interpolation method to estimate the errors of target points from the measured errors of boundary points. Nonetheless, since most of the planes involved are not ruled surfaces, the compensated accuracy is limited.

The principle of the NN method is to train a network with sampled data, and then to predict and compensate the robot's positioning errors with the trained model. Xu [43] adopted a fast backpropagation algorithm to train a feedforward NN for predicting the joint errors of a robot, and then applied it in the robot's control system to correct the positioning errors.
Nguyen [11] utilized an NN with the three joint angles as inputs to compensate for non-geometric errors. Wang [44] established a mapping model between the actual and theoretical coordinates of the points a robot is expected to reach using an extreme learning machine (ELM) algorithm to increase the positioning accuracy of the robot; however, the sampling range of this method is too small for practical engineering problems. The existing literature shows that non-kinematic calibration does not rely on the kinematic model of the robot, but investigates only the observed behavior of the robot's positioning errors. It avoids the complex modeling process and overcomes the inaccuracy of parameter identification in kinematic calibration. Essentially, non-kinematic calibration is a numerical estimation technique: the more data used, the higher the compensated accuracy.
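The spatial-interpolation idea can be illustrated in two dimensions with the bilinear case: errors are measured at a regular grid of sample points, the error at an arbitrary target is interpolated from the four surrounding grid nodes, and the commanded position is then offset by the predicted error. A minimal sketch, with an illustrative error grid:

```python
import numpy as np

def bilinear_error(x, y, xs, ys, err_grid):
    """Estimate the positioning error at (x, y) from errors measured
    at a regular grid of sample points (non-kinematic compensation).

    xs, ys: ascending grid coordinates; err_grid[i, j] is the
    measured error at (xs[i], ys[j])."""
    i = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
    j = np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * err_grid[i, j]
            + tx * (1 - ty) * err_grid[i + 1, j]
            + (1 - tx) * ty * err_grid[i, j + 1]
            + tx * ty * err_grid[i + 1, j + 1])

# Illustrative 2x2 grid: error grows linearly along x.
xs = np.array([0.0, 1.0])
ys = np.array([0.0, 1.0])
err_grid = np.array([[0.0, 0.0],
                     [1.0, 1.0]])
e_mid = bilinear_error(0.5, 0.3, xs, ys, err_grid)
```

Compensation then commands the desired position minus the interpolated error, under the assumption that the error field varies smoothly between sample points.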

1.4.1.3 Physical Constraint Calibration

Normally, in kinematic and non-kinematic calibration, specialized technicians are required to operate the high-precision external devices that are necessary


to sense error signals of the end-effector, which limits the application of the two approaches on certain occasions. Therefore, physical constraint methods, which can complete the robot's calibration without any external measuring instrument, have been explored [45]. The main idea is to employ constraint equations, constructed through contact between the robot end-effector and physical constraints such as balls, planes, points, and lines, to implement the robot's calibration. Gaudreault et al. [46] established positional constraint equations by forming multiple contacts between three digital dial indicators mounted on the end-effector and a ball in the robot's workspace, thus realizing the kinematic calibration of an ABB robot, as shown in Fig. 1.6a. Joubair et al. [47] used high-precision probes on the robot end-effector to touch four planes of a cubic block to formulate constraint equations, and then compensated for the geometric errors and joint flexibility errors of a robot, as shown in Fig. 1.6b. He et al. [48] adopted multiple point constraints to improve the positioning accuracy of a six-axis industrial robot, in which the robot was driven to reach the same position in different postures to calibrate the robot's parameters.

Fig. 1.6 Physical constraint calibration: a ball contact [46]; b plane contact [47]; c point contact [48]
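The geometric core of ball-contact calibration is that every touch point must satisfy |p − c| = r for the (initially unknown) sphere center c and radius r. The sketch below shows only this geometric step, recovering the sphere from touch points; a full calibration would embed such residuals, evaluated through the robot's forward kinematics, in the identification of the kinematic parameters. The touch points here are synthetic:

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: each contact point p satisfies
    |p - c|^2 = r^2. Subtracting the first equation from the others
    yields linear equations 2 (p_i - p_0) . c = |p_i|^2 - |p_0|^2
    in the unknown center c."""
    p = np.asarray(points, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.linalg.norm(p - c, axis=1).mean()
    return c, r

# Synthetic touch points on a sphere centered at (1, 2, 3), radius 0.5.
touches = [(1.5, 2.0, 3.0), (0.5, 2.0, 3.0), (1.0, 2.5, 3.0),
           (1.0, 1.5, 3.0), (1.0, 2.0, 3.5), (1.0, 2.0, 2.5)]
center, radius = fit_sphere(touches)
```

In an actual calibration the "touch points" come from the nominal forward kinematics at the contact configurations, so the residual distances to the fitted sphere expose the kinematic parameter errors.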


The calibration accuracy of the physical constraint method depends heavily on the sensitivity of the sensors on the end-effector, and extremely high machining accuracy is required of the physical constraint itself. Moreover, the compensation effect of this method is good near the constraint area, while in other areas of the robot's workspace the compensated accuracy is unsatisfactory.

In summary, offline calibration technology is a research focus in the field of robot error compensation. Many researchers are committed to improving the robot's positioning accuracy, most of them focusing on kinematic calibration. However, the analysis above shows that offline calibration relies heavily on the robot's repeatability. In fact, the unidirectional repeatability of a robot is sufficiently high, but its multi-directional accuracy, i.e., its accuracy when approaching the same pose from different directions, is poor. In other words, the robot's positioning error at the same sampling point differs depending on the approach direction. Therefore, offline calibration alone cannot further improve the compensation effect.

1.4.2 Online Feedback

With the progress of sensor technology and electronic components, online correction methods for robot positioning errors have emerged. Online compensation refers to the use of external high-precision measurement equipment for real-time feedback of the end-effector's motion, so that the positioning error can be continuously driven toward the desired value during robotic machining. According to the type of feedback signal, it is normally divided into joint-space closed-loop feedback and Cartesian-space closed-loop feedback.

1.4.2.1 Joint Space Closed-Loop Feedback

In a robotic machining system, the link stiffness is much greater than the joint stiffness, so joint deformations are mainly responsible for positioning errors. To deal with this problem, joint-space closed-loop feedback compensation technology [49–52] was developed. The basic idea is to measure the joint rotation angles in real time by installing sensors at the robot's joints to form closed-loop feedback, thus reducing joint errors and improving accuracy, as shown in Fig. 1.7. To reduce the joint errors of a KUKA robot in point-to-point mode, Saund and DeVlieg [53] installed high-precision grating scales at each joint of the robot (Fig. 1.8) and re-developed the robot controller using a Siemens CNC system. The joint feedback system greatly improves the positioning accuracy of the robot, and has been successfully applied to the machining of aircraft parts and the production of composite materials. Möller et al. [52] also re-developed the robot controller through a CNC system, adding a frictional model of the joints, which greatly enhanced the bidirectional repeatability of the robot. Taking into account the influence of joint

Fig. 1.7 Joint closed-loop compensation technology

Fig. 1.8 Grating scales used in joint closed-loop feedback [53]

backlash on multi-directional accuracy, Zhang et al. [54] installed grating scales on the first three joints of a KUKA KR210 robot and applied this technology to aircraft assembly. Grating-scale feedback based on a Siemens CNC system is widely used in industry because it is difficult to obtain permission to modify the robot controller; moreover, grating-scale feedback can substantially enhance the multi-directional accuracy of the robot.

1.4.2.2 Cartesian Space Closed-Loop Feedback

To further improve robot accuracy, Cartesian-space closed-loop feedback is adopted, where the actual pose of the end-effector is obtained directly through a pose sensor, e.g., a laser tracker or a vision device, to correct the end-effector's error online, as shown in Fig. 1.9. Because the feedback signal is acquired by directly measuring the final output of the whole system (i.e., the end-effector), the achievable accuracy depends on the minimum resolution of the robot's motion in Cartesian space, the measuring accuracy of the pose sensor, and the steady-state error of the control system.


Fig. 1.9 Cartesian space closed-loop feedback using servo control
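The loop of Fig. 1.9 can be sketched as an iterative correction in which an external pose sensor closes the loop around an otherwise inaccurate robot. The plant model below is a stand-in with an arbitrary offset and gain error, not any real robot model:

```python
import numpy as np

def plant(command):
    """Stand-in for the real robot: a small gain error plus a
    systematic offset, unknown to the controller (illustrative)."""
    return 0.98 * command + np.array([0.8, -0.5, 0.3])

def closed_loop_positioning(desired, n_iter=10):
    """Cartesian-space closed loop: an external sensor measures the
    attained position and the command is nudged by the residual."""
    d = np.asarray(desired, dtype=float)
    command = d.copy()
    for _ in range(n_iter):
        measured = plant(command)   # external pose sensor reading
        command += d - measured     # correct command by the residual
    return plant(command)

final = closed_loop_positioning([100.0, 50.0, 25.0])
```

Because the residual shrinks by the plant's gain mismatch at each pass (here a factor of 0.02 per iteration), the attained position converges to the desired one regardless of the offset, which is exactly why closed-loop feedback is insensitive to systematic model errors.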

The laser tracker uses a laser beam to detect the position of a spherically mounted retro-reflector (SMR) installed on the end-effector. Qu et al. [55] constructed a robot-assisted aircraft drilling system based on the closed-loop feedback of a laser tracker, where the robot's error was compensated according to the deviation between the actual and theoretical poses. Shi et al. [56] designed an online error measurement and real-time compensation system for industrial robots in machining applications, in which a laser tracker measured the robot's position in real time and a corresponding compensation algorithm achieved the error compensation. Droll et al. [57] proposed an online compensation method using a laser tracker to realize fast path correction of a robot; in fact, this method uses the identified kinematic parameters to modify the parametric values in the robot controller online, and does not realize fully online feedback. Although closed-loop control with a laser tracker can achieve higher accuracy, it cannot meet the measurement requirements of some complex working conditions where the light path may be blocked.

With the improvement of computing power and image processing, closed-loop compensation methods based on vision guidance have emerged to improve robot accuracy [58–62]. Hajiloo et al. [62] constructed a vision-based compensation technique for a 6-DOF robot using a robust model predictive method, and solved the problem of excessive deviation between the initial and expected positions of the camera. Wei et al. [63] conducted an in-depth study of the trajectory tracking of a robot's fingertips based on image positioning, where the image quality and the trajectory recognition accuracy were improved by applying a wavelet transform to denoise the images. Gharaaty et al.
[64] proposed a dynamic pose correction method that uses a binocular vision 6D measurement device to track the position and posture of the robot’s end effector in real time. Shu et al. [65] suggested a dynamic tracking method based on Gharaaty’s research to improve the robot’s trajectory accuracy. To the best of our knowledge, however, the combination of vision and industrial robot is mostly used to complete a certain function, and there are fewer further studies on the accuracy improvement for the robotic machining. In order to improve further the robot’s motion accuracy, researchers began to focus on advanced nonlinear control algorithms, such as adaptive control [66], robust control [67], fuzzy control [68], NN control [69] and combinations of them [70, 71].

Although these algorithms can improve the robot's control performance to a certain extent, they are all developed under the assumption that the robot controller is open, and therefore cannot be directly applied to industrial robot products whose control systems are closed. More importantly, they are seldom implemented in actual engineering applications because of their complexity.

1.5 Summary

In this chapter, the application background of industrial robots is given, and the questions of what robot accuracy is and why error compensation is carried out are answered. A general overview of the methods and the state of the art of various error compensation technologies is then provided. Although numerous approaches have been proposed to solve the error problem of industrial robots, the majority of them offer no significant improvement over those proposed 20 years ago; the difference lies mainly in the increase in the robots' manufacturing accuracy and in the measuring accuracy and efficiency of sensors. Therefore, further research on robot error compensation technology is of great significance for expanding the application fields and effects of industrial robots. This is the subject throughout this book.

References

1. ISO 9283. Manipulating industrial robots: performance criteria and related testing methods. International Organization for Standardization; 1998.
2. Ouyang JF, Liu W, Qu X, Yan Y. Industrial robot error compensation using laser tracker system. Key Eng Mater. 2008;381–382:579–82.
3. Bayro-Corrochano E, Lechuga-Gutierrez L, Garza-Burgos M. Geometric techniques for robotics and HMI: interpolation and haptics in conformal geometric algebra and control using quaternion spike neural networks. Robot Auton Syst. 2018;104:72–84. https://doi.org/10.1016/j.robot.2018.02.015.
4. Zhong X, Lewis J, N-Nagy FL. Inverse robot calibration using artificial neural networks. Eng Appl Artif Intell. 1996;9(1):83–93. https://doi.org/10.1016/0952-1976(95)00069-0.
5. Li Z, Li S, Luo X. An overview of calibration technology of industrial robots. IEEE/CAA J Autom Sinica. 2021;8(1):23–36.
6. Hartenberg RS, Denavit J. A kinematic notation for lower pair mechanisms based on matrices. J Appl Mech. 1955;77(2):215–21.
7. Hartenberg R, Denavit J. Kinematic synthesis of linkages. New York: McGraw-Hill; 1964.
8. Hayati SA. Robot arm geometric link parameter estimation. In: The 22nd IEEE conference on decision and control. New York: IEEE; 1983. p. 1477–83.
9. Hayati S, Mirmirani M. Improving the absolute positioning accuracy of robot manipulators. J Robot Syst. 1985;2(4):397–413.
10. Motta JMST, De Carvalho GC, McMaster RS. Robot calibration using a 3D vision-based measurement system with a single camera. Robot Comput-Integr Manuf. 2001;17(6):487–97. https://doi.org/10.1016/S0736-5845(01)00024-2.

11. Nguyen H-N, Zhou J, Kang H-J. A calibration method for enhancing robot accuracy through integration of an extended Kalman filter algorithm and an artificial neural network. Neurocomputing. 2015;151:996–1005. https://doi.org/10.1016/j.neucom.2014.03.085.
12. Nubiola A, Bonev IA. Absolute calibration of an ABB IRB 1600 robot using a laser tracker. Robot Comput-Integr Manuf. 2013;29(1):236–45. https://doi.org/10.1016/j.rcim.2012.06.004.
13. Santolaria J, Ginés M. Uncertainty estimation in robot kinematic calibration. Robot Comput-Integr Manuf. 2013;29(2):370–84. https://doi.org/10.1016/j.rcim.2012.09.007.
14. Stone H, Sanderson A. Statistical performance evaluation of the S-model arm signature identification technique. In: Proceedings 1988 IEEE international conference on robotics and automation. New York: IEEE; 1988. p. 939–46.
15. Stone H, Sanderson A. A prototype arm signature identification system. In: Proceedings 1987 IEEE international conference on robotics and automation. New York: IEEE; 1987. p. 175–82.
16. Stone H, Sanderson A, Neuman C. Arm signature identification. In: Proceedings 1986 IEEE international conference on robotics and automation. New York: IEEE; 1986. p. 41–8.
17. Judd RP, Knasinski AB. A technique to calibrate industrial robots with experimental verification. IEEE Trans Robot Autom. 1987;6(1):20–30.
18. Zhuang H, Roth ZS, Hamano F. A complete and parametrically continuous kinematic model for robot manipulators. IEEE Trans Robot Autom. 2002;8(4):451–63.
19. Gupta KC. Kinematic analysis of manipulators using the zero reference position description. Int J Robot Res. 1986;5(2):5–13.
20. Okamura K, Park FC. Kinematic calibration using the product of exponentials formula. Robotica. 1996;14(4):415–21. https://doi.org/10.1017/s0263574700019810.
21. Chen G, Wang H, Lin Z. Determination of the identifiable parameters in robot calibration based on the POE formula. IEEE Trans Rob. 2014;30(5):1066–77. https://doi.org/10.1109/TRO.2014.2319560.
22. Yang X, Wu L, Li J, Chen K. A minimal kinematic model for serial robot calibration using POE formula. Robot Comput-Integr Manuf. 2014;30(3):326–34. https://doi.org/10.1016/j.rcim.2013.11.002.
23. Joubair A, Zhao LF, Bigras P, Bonev I. Absolute accuracy analysis and improvement of a hybrid 6-DOF medical robot. Ind Robot: Int J. 2015;42(1):44–53. https://doi.org/10.1108/IR-09-2014-0396.
24. Park I-W, Lee B-J, Cho S-H, Hong Y-D, Kim J-H. Laser-based kinematic calibration of robot manipulator using differential kinematics. IEEE/ASME Trans Mechatron. 2011;17(6):1059–67.
25. Wu H, Tizzano W, Andersen TT, Andersen NA, Ravn O. Hand-eye calibration and inverse kinematics of robot arm using neural network. In: Kim J-H, Matson ET, Myung H, Xu P, Karray F, editors. Robot intelligence technology and applications 2: results from the 2nd international conference on robot intelligence technology and applications. Cham: Springer International Publishing; 2014. p. 581–91.
26. Wang D, Bai Y, Zhao J. Robot manipulator calibration using neural network and a camera-based measurement system. 2012;34(1):105–21. https://doi.org/10.1177/0142331210377350.
27. Veitschegger WK, Wu CH. Robot calibration and compensation. IEEE J Robot Autom. 1988;4(6):643–56.
28. Kim DH, Cook KH, Oh JH. Identification and compensation of a robot kinematic parameter for positioning accuracy improvement. Robotica. 1991;9(1):99–105.
29. Zak G, Benhabib B, Fenton RG, Saban I. Application of the weighted least squares parameter estimation method to the robot calibration. J Mech Des. 1994;116(3):890–3.
30. Gao G, Sun G, Na J, Guo Y, Wu X. Structural parameter identification for 6 DOF industrial robots. Mech Syst Signal Process. 2018;113:145–55.
31. Filion A, Joubair A, Tahan AS, Bonev IA. Robot calibration using a portable photogrammetry system. Robot Comput-Integr Manuf. 2018;49:77–87.
32. Marquardt DW. An algorithm for least-squares estimation of nonlinear parameters. J Soc Ind Appl Math. 1963;11(2):431–41.

33. Lightcap C, Hamner S, Schmitz T, Banks S. Improved positioning accuracy of the PA10-6CE robot with geometric and flexibility calibration. IEEE Trans Rob. 2008;24(2):452–6.
34. Ginani LS, Motta JMS. Theoretical and practical aspects of robot calibration with experimental verification. J Braz Soc Mech Sci Eng. 2011;33(1):15–21.
35. Horning RJ. A comparison of identification techniques for robot calibration. Case Western Reserve University; 1998.
36. Renders J-M, Rossignol E, Becquet M, Hanus R. Kinematic calibration and geometrical parameter identification for robots. IEEE Trans Robot Autom. 1991;7(6):721–32.
37. Omodei A, Legnani G, Adamini R. Calibration of a measuring robot: experimental results on a 5 DOF structure. J Robot Syst. 2001;18(5):237–50.
38. Zhong X-L, Lewis JM. A new method for autonomous robot calibration. In: Proceedings of 1995 IEEE international conference on robotics and automation. New York: IEEE; 1995. p. 1790–5.
39. Jang JH, Kim SH, Kwak YK. Calibration of geometric and non-geometric errors of an industrial robot. Robotica. 2001;19(3):311.
40. Alici G, Shirinzadeh B. A systematic technique to estimate positioning errors for robot accuracy improvement using laser interferometry based sensing. Mech Mach Theory. 2005;40(8):879–906. https://doi.org/10.1016/j.mechmachtheory.2004.12.012.
41. Bai Y. On the comparison of model-based and modeless robotic calibration based on a fuzzy interpolation method. Int J Adv Manuf Technol. 2007;31(11–12):1243–50. https://doi.org/10.1007/s00170-005-0278-4.
42. Zhu W, Qu W, Cao L, Yang D, Ke Y. An off-line programming system for robotic drilling in aerospace manufacturing. Int J Adv Manuf Technol. 2013;68(9–12):2535–45. https://doi.org/10.1007/s00170-013-4873-5.
43. Xu WL, Wurst KH, Watanabe T, Yang SQ. Calibrating a modular robotic joint using neural network approach. In: 1994 IEEE international conference on neural networks, vol 1–7; 1994. p. 2720–5.
44. Wang L, Li X, Zhang L. Analysis of the positioning error of industrial robots and accuracy compensation based on ELM algorithm. Robot. 2018;40(06):843–51+59.
45. Nubiola A, Slamani M, Bonev IA. A new method for measuring a large set of poses with a single telescoping ballbar. Precis Eng. 2013;37(2):451–60.
46. Gaudreault M, Joubair A, Bonev IA. Local and closed-loop calibration of an industrial serial robot using a new low-cost 3D measuring device. In: 2016 IEEE international conference on robotics and automation (ICRA). New York: IEEE; 2016. p. 4312–9.
47. Joubair A, Bonev IA. Non-kinematic calibration of a six-axis serial robot using planar constraints. Precis Eng. 2015;40:325–33.
48. He S, Ma L, Yan C, Lee C-H, Hu P. Multiple location constraints based industrial robot kinematic parameter calibration and accuracy assessment. Int J Adv Manuf Technol. 2019;102(5):1037–50. https://doi.org/10.1007/s00170-018-2948-z.
49. DeVlieg R, Szallay T. Applied accurate robotic drilling for aircraft fuselage. SAE Int J Aerosp. 2010;3(2010-01-1836):180–6.
50. Saund B, DeVlieg R. High accuracy articulated robots with CNC control systems. SAE Int J Aerosp. 2013;6(2013-01-2292):780–4.
51. Rathjen S, Richardson C. High path accuracy, high process force articulated robot. SAE Technical Papers. 2013;7. https://doi.org/10.4271/2013-01-2291.
52. Möller C, Schmidt HC, Koch P, Böhlmann C, Kothe S-M, Wollnack J, et al. Machining of large scaled CFRP-parts with mobile CNC-based robotic system in aerospace industry. Proc Manuf. 2017;14:17–29. https://doi.org/10.1016/j.promfg.2017.11.003.
53. Saund B, DeVlieg R. High accuracy articulated robots with CNC control systems. SAE Int J Aerosp. 2013;6(2):780–4. https://doi.org/10.4271/2013-01-2292.
54. Zhang L, Tian W, Zheng F, Liao W. Accuracy compensation technology of closed-loop feedback of industrial robot joints. Trans Nanjing Univ Aeronaut Astronaut. 2020;37(6):858–71.
55. Qu W, Dong H, Ke Y. Pose accuracy compensation technology in robot-aided aircraft assembly drilling process. Acta Aeronautica et Astronautica Sinica. 2011;32(10):1951–60.

56. Shi X, Zhang F, Qu X, Liu B, Wang J. Position and attitude measurement and online errors compensation for KUKA industrial robots. J Mech Eng. 2017;53(8):1–7.
57. Droll S. Real time path correction of industrial robots with direct end-effector feedback from a laser tracker. SAE Int J Aerosp. 2014;7(2):222–8.
58. Deng LF, Janabi-Sharifi F, Wilson WJ. Hybrid motion control and planning strategies for visual servoing. IEEE Trans Industr Electron. 2005;52(4):1024–40. https://doi.org/10.1109/Tie.2005.851651.
59. Mariottini GL, Oriolo G, Prattichizzo D. Image-based visual servoing for nonholonomic mobile robots using epipolar geometry. IEEE Trans Rob. 2007;23(1):87–100. https://doi.org/10.1109/Tro.2006.886842.
60. Dong GQ, Zhu ZH. Position-based visual servo control of autonomous robotic manipulators. Acta Astronaut. 2015;115:291–302. https://doi.org/10.1016/j.actaastro.2015.05.036.
61. Lippiello V, Siciliano B, Villani L. Position-based visual servoing in industrial multirobot cells using a hybrid camera configuration. IEEE Trans Rob. 2007;23(1):73–86. https://doi.org/10.1109/Tro.2006.886832.
62. Hajiloo A, Keshmiri M, Xie WF, Wang TT. Robust online model predictive control for a constrained image-based visual servoing. IEEE Trans Industr Electron. 2016;63(4):2242–50. https://doi.org/10.1109/Tie.2015.2510505.
63. Wei Z, Tang B. Simulation analysis of robot fingertips tracking process based on image. Comput Simul. 2015;32(2):414–7.
64. Gharaaty S, Shu T, Xie W, Joubair A, Bonev IA. Accuracy enhancement of industrial robots by on-line pose correction. In: 2017 2nd Asia-Pacific conference on intelligent robot systems (ACIRS); 2017. p. 214–20.
65. Shu T, Gharaaty S, Xie W, Joubair A, Bonev IA. Dynamic path tracking of industrial robots with high accuracy using photogrammetry sensor. IEEE/ASME Trans Mechatron. 2018;23(3):1159–70. https://doi.org/10.1109/TMECH.2018.2821600.
66. Kolbari H, Sadeghnejad S, Bahrami M, Ali KE. Adaptive control of a robot-assisted telesurgery in interaction with hybrid tissues. J Dyn Syst-T ASME. 2018;140(12). https://doi.org/10.1115/1.4040818.
67. Abu Alqumsan A, Khoo S, Norton M. Robust control of continuum robots using Cosserat rod theory. Mech Mach Theory. 2019;131:48–61. https://doi.org/10.1016/j.mechmachtheory.2018.09.011.
68. Yen VT, Nan WY, Cuong PV. Recurrent fuzzy wavelet neural networks based on robust adaptive sliding mode control for industrial robot manipulators. Neural Comput Appl. 2019;31(11):6945–58. https://doi.org/10.1007/s00521-018-3520-3.
69. Zhang DH, Kong LH, Zhang S, Li Q, Fu Q. Neural networks-based fixed-time control for a robot with uncertainties and input deadzone. Neurocomputing. 2020;390:139–47. https://doi.org/10.1016/j.neucom.2020.01.072.
70. Yin XX, Pan L. Direct adaptive robust tracking control for 6 DOF industrial robot with enhanced accuracy. ISA Trans. 2018;72:178–84. https://doi.org/10.1016/j.isatra.2017.10.007.
71. Yen VT, Nan WY, Cuong PV. Robust adaptive sliding mode neural networks control for industrial robot manipulators. Int J Control Autom. 2019;17(3):783–92. https://doi.org/10.1007/s12555-018-0210-y.

Chapter 2

Kinematic Modeling

2.1 Introduction

Generally, an industrial robot is a chain mechanism of links connected in series by joints, whose base is mounted on a fixed or mobile platform. Its end-effector is driven by actuators (e.g., servomotors) to complete related tasks. The positioning accuracy of a robot is affected by a variety of error sources, which are transmitted to the end-effector through the joints, resulting in positioning errors. The kinematic model is the theoretical basis for studying the robot's motion law and for error analysis. Strictly speaking, research on robot kinematics should cover the pose, velocity, and acceleration relationships between the robot's links; among these, the pose relationship is of most concern in robot error compensation, especially the problem of the closed-form solution of the robot's inverse kinematics. In this chapter, the forward and inverse kinematic models of the robot are established, and the law of the positioning error is analyzed and evaluated to provide a theoretical basis for subsequent error compensation.

2.2 Pose Description and Transformation

2.2.1 Descriptions of Position and Posture

The posture and the position of a rigid body together are called its pose. For the description of industrial robots, the links are usually treated as rigid bodies. The homogeneous transformation method, which connects motion, transformation, mapping, and matrix operations, is introduced in this section to depict the robot's kinematics. The position of any reference point p of a rigid body moving in space can be denoted by a 3 × 1 vector in the Cartesian coordinate system {A}:

$$^{A}\boldsymbol{p} = \begin{bmatrix} p_x & p_y & p_z \end{bmatrix}^{\mathrm T} \tag{2.1}$$

© Science Press 2023. W. Liao et al., Error Compensation for Industrial Robots, https://doi.org/10.1007/978-981-19-6168-7_2

where px, py, and pz are the coordinate components of point p along the x-, y-, and z-axis of {A}, respectively. To describe the posture of a rigid body B in space, a Cartesian coordinate system {B} is attached to it. The posture of the rigid body B relative to {A} can then be denoted by a 3 × 3 matrix composed of the direction cosines of the three unit vectors xB, yB, zB of {B} relative to {A}:

$$^{A}\boldsymbol{R}_B = \begin{bmatrix} \boldsymbol{x}_B & \boldsymbol{y}_B & \boldsymbol{z}_B \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \tag{2.2}$$

where ARB, an orthogonal matrix, is the rotation matrix of {B} relative to {A}.

2.2.2 Translation and Rotation

2.2.2.1 Translational Operator

A translation moves a point in space a finite distance along a given vector direction. With this interpretation of actually translating the point in space, only one frame is involved. Figure 2.1 indicates pictorially how a vector AP1 is translated by a vector AQ. The new vector AP2 is then obtained as

$$^{A}\boldsymbol{P}_2 = {}^{A}\boldsymbol{P}_1 + {}^{A}\boldsymbol{Q} \tag{2.3}$$

Fig. 2.1 Translational operator

where AQ is the translational operator.

2.2.2.2 Rotational Operator

The rotational operators (or rotation matrices) for a rotation by θ around the x-, y-, and z-axis of a frame can be respectively represented as

$$\boldsymbol{R}(x,\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix} \tag{2.4}$$

$$\boldsymbol{R}(y,\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix} \tag{2.5}$$

$$\boldsymbol{R}(z,\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{2.6}$$

The translational operator and the rotation matrix describe the position and the posture of a rigid body, respectively. Generally, a frame fixed to the rigid body B is adopted to completely describe B in space. With respect to the reference frame {A}, a homogeneous transformation matrix ATB, composed of a translational operator Ap and a rotation matrix ARB, is utilized to describe the position and the posture of {B}:

$$^{A}\boldsymbol{T}_B = \begin{bmatrix} ^{A}\boldsymbol{R}_B & ^{A}\boldsymbol{p} \\ \boldsymbol{0} & 1 \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & p_x \\ r_{21} & r_{22} & r_{23} & p_y \\ r_{31} & r_{32} & r_{33} & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2.7}$$
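The constructions of Eqs. (2.4)–(2.7) are easy to sketch in code. The following Python fragment is an illustration written for this text, not code from the book; the function names (`rot_x`, `homogeneous`, `transform_point`) are our own. It builds the three basic rotation operators and stacks a rotation and a translation into a homogeneous transform:

```python
import math

def rot_x(t):
    """Rotation operator about the x-axis, Eq. (2.4)."""
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    """Rotation operator about the y-axis, Eq. (2.5)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(t):
    """Rotation operator about the z-axis, Eq. (2.6)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def homogeneous(R, p):
    """Stack a 3x3 rotation R and a 3x1 translation p into A_T_B, Eq. (2.7)."""
    return [R[0] + [p[0]], R[1] + [p[1]], R[2] + [p[2]], [0, 0, 0, 1]]

def transform_point(T, q):
    """Map a point expressed in {B} to {A}: rotate by the 3x3 block, then translate."""
    return [sum(T[i][j] * q[j] for j in range(3)) + T[i][3] for i in range(3)]
```

For example, a frame rotated 90° about z and shifted by (1, 0, 0) maps the point (1, 0, 0) of {B} to (1, 1, 0) in {A}.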

2.3 RPY Angle and Euler Angle

In some cases it is neither convenient nor intuitive to describe the posture of a rigid body using a rotation matrix. In this section, description methods adopting the RPY angle, the Z-Y-X Euler angle, and the Z-Y-Z Euler angle are introduced, respectively.

(1) RPY angle

The RPY angle is defined as follows: the first rotation, around the x-axis of a fixed frame, is called the "Yaw angle"; the second rotation, around the y-axis of the fixed frame,

Fig. 2.2 Schematic diagram of RPY angle: a rotation of γ around x axis; b rotation of β around y axis; c rotation of α around z axis

is called the "Pitch angle"; and the third rotation, around the z-axis of the fixed frame, is called the "Roll angle". As an example, the posture description of the frame ox‴y‴z‴ with respect to (w.r.t.) the frame oxyz through the RPY angle is exhibited in Fig. 2.2. Starting with the frame coincident with the reference frame oxyz, first rotate by an angle γ about the x-axis of frame oxyz to obtain a frame ox′y′z′; then rotate by an angle β about the y-axis of frame oxyz to obtain a frame ox″y″z″; finally, rotate by an angle α about the z-axis of frame oxyz to obtain the frame ox‴y‴z‴. Here the reference frame is the frame {A}, and the frame ox‴y‴z‴ is the frame {B}. From Eqs. (2.4)–(2.6), the rotation matrix of the frame {B} w.r.t. the frame {A} can be expressed via the three rotations about the axes of the reference frame {A} as

$$^{A}\boldsymbol{R}_B(\gamma,\beta,\alpha) = \boldsymbol{R}(z,\alpha)\boldsymbol{R}(y,\beta)\boldsymbol{R}(x,\gamma) = \begin{bmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{bmatrix} = \begin{bmatrix} n_x & o_x & a_x \\ n_y & o_y & a_y \\ n_z & o_z & a_z \end{bmatrix} \tag{2.8}$$

The solutions for α, β, γ can be obtained as

$$\begin{cases} \beta = \operatorname{atan2}\!\left(-n_z,\ \sqrt{n_x^2 + n_y^2}\right) \\ \alpha = \operatorname{atan2}(n_y,\ n_x) \\ \gamma = \operatorname{atan2}(o_z,\ a_z) \end{cases} \tag{2.9}$$

where atan2(y, x) is the two-argument arctangent function. To ensure that the rotation matrix corresponds uniquely to the RPY angle representation, the solution of β is usually constrained within −90° to 90°. However, Eq. (2.9) degenerates when β equals ±90°. In such cases, only the sum or the difference of α and γ can be computed,

and one common practice is to let α = 0, which yields

$$\begin{cases} \beta = 90^\circ \\ \alpha = 0 \\ \gamma = \operatorname{atan2}(o_x,\ o_y) \end{cases} \tag{2.10}$$

$$\begin{cases} \beta = -90^\circ \\ \alpha = 0 \\ \gamma = -\operatorname{atan2}(o_x,\ o_y) \end{cases} \tag{2.11}$$
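As a concrete illustration of Eqs. (2.8)–(2.11), the following Python sketch (written for this text; the function names are ours, not the book's) builds the RPY rotation matrix and recovers the angles, falling back to the α = 0 convention in the degenerate case:

```python
import math

def rpy_to_matrix(alpha, beta, gamma):
    """Rotation matrix R(z,alpha) R(y,beta) R(x,gamma) of Eq. (2.8), expanded."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    return [[ca * cb, ca * sb * sg - sa * cg, ca * sb * cg + sa * sg],
            [sa * cb, sa * sb * sg + ca * cg, sa * sb * cg - ca * sg],
            [-sb,     cb * sg,                cb * cg]]

def rpy_from_matrix(R, eps=1e-9):
    """Recover (alpha, beta, gamma) via Eq. (2.9); use Eqs. (2.10)/(2.11)
    with alpha fixed to 0 when beta degenerates to +-90 degrees."""
    (nx, ox, ax), (ny, oy, ay), (nz, oz, az) = R
    cb = math.hypot(nx, ny)              # sqrt(nx^2 + ny^2) = |cos(beta)|
    beta = math.atan2(-nz, cb)           # Eq. (2.9)
    if cb > eps:                         # regular case
        alpha, gamma = math.atan2(ny, nx), math.atan2(oz, az)
    elif beta > 0:                       # beta = +90 deg, Eq. (2.10)
        alpha, gamma = 0.0, math.atan2(ox, oy)
    else:                                # beta = -90 deg, Eq. (2.11)
        alpha, gamma = 0.0, -math.atan2(ox, oy)
    return alpha, beta, gamma
```

A round trip through `rpy_to_matrix` and `rpy_from_matrix` recovers the original angles away from the singularity, and at β = 90° it returns the conventional (0, 90°, γ) solution.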

(2) Z-Y-X Euler angle

The principle of describing the posture of a rigid body by the Z-Y-X Euler angle is as follows. Starting from the frame coincident with the reference frame {A}, first rotate by an angle α about the z-axis of {A} to obtain a frame {A′}; then rotate by an angle β about the y-axis of {A′} to obtain a frame {A″}; finally, rotate by an angle γ about the x-axis of {A″} to obtain a frame {A‴}, i.e., the frame {B}. The rotation matrix corresponding to the Z-Y-X Euler angle can be written as

$$^{A}\boldsymbol{R}_B(\alpha,\beta,\gamma) = \boldsymbol{R}(z,\alpha)\boldsymbol{R}(y,\beta)\boldsymbol{R}(x,\gamma) = \begin{bmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{bmatrix} = \begin{bmatrix} n_x & o_x & a_x \\ n_y & o_y & a_y \\ n_z & o_z & a_z \end{bmatrix} \tag{2.12}$$

(3) Z-Y-Z Euler angle

Similar to the Z-Y-X Euler angle representation, the principle of describing the posture of a rigid body by the Z-Y-Z Euler angle is: starting from the frame coincident with the reference frame {A}, first rotate by an angle α about the z-axis of {A} to obtain a frame {A′}; then rotate by an angle β about the y-axis of {A′} to obtain a frame {A″}; finally, rotate by an angle γ about the z-axis of {A″} to obtain a frame {A‴}, i.e., the frame {B}. The rotation matrix corresponding to the Z-Y-Z Euler angle can be written as

$$^{A}\boldsymbol{R}_B(\alpha,\beta,\gamma) = \boldsymbol{R}(z,\alpha)\boldsymbol{R}(y,\beta)\boldsymbol{R}(z,\gamma) = \begin{bmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix} \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} n_x & o_x & a_x \\ n_y & o_y & a_y \\ n_z & o_z & a_z \end{bmatrix} \tag{2.13}$$

The approach to obtaining α, β, γ from the Z-Y-X Euler angle and Z-Y-Z Euler angle representations is the same as that from the RPY angle representation; hence only the solutions for the Z-Y-Z Euler angle are given here as an illustration. If sin β ≠ 0, it follows that

$$\begin{cases} \beta = \operatorname{atan2}\!\left(\sqrt{n_z^2 + o_z^2},\ a_z\right) \\ \alpha = \operatorname{atan2}(a_y,\ a_x) \\ \gamma = \operatorname{atan2}(o_z,\ -n_z) \end{cases} \tag{2.14}$$

To ensure that the rotation matrix corresponds uniquely to the Z-Y-Z Euler angle, the solution of β is often taken between 0° and 180°. When β = 0° or 180°, Eq. (2.14) degenerates. One possible convention is to choose α = 0, which yields

$$\begin{cases} \beta = 0 \\ \alpha = 0 \\ \gamma = \operatorname{atan2}(-o_x,\ n_x) \end{cases} \tag{2.15}$$

$$\begin{cases} \beta = 180^\circ \\ \alpha = 0 \\ \gamma = \operatorname{atan2}(o_x,\ -n_x) \end{cases} \tag{2.16}$$
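The Z-Y-Z extraction of Eqs. (2.14)–(2.16) can be sketched the same way as for the RPY angle. The fragment below is our own illustration (function names are not from the book), pairing the expanded matrix of Eq. (2.13) with its inverse mapping:

```python
import math

def zyz_to_matrix(alpha, beta, gamma):
    """Rotation matrix R(z,alpha) R(y,beta) R(z,gamma) of Eq. (2.13), expanded."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    return [[ca * cb * cg - sa * sg, -ca * cb * sg - sa * cg, ca * sb],
            [sa * cb * cg + ca * sg, -sa * cb * sg + ca * cg, sa * sb],
            [-sb * cg,                sb * sg,                cb]]

def zyz_from_matrix(R, eps=1e-9):
    """Recover (alpha, beta, gamma) via Eq. (2.14); use Eqs. (2.15)/(2.16)
    with alpha fixed to 0 when sin(beta) = 0."""
    (nx, ox, ax), (ny, oy, ay), (nz, oz, az) = R
    sb = math.hypot(nz, oz)              # sqrt(nz^2 + oz^2) = |sin(beta)|
    beta = math.atan2(sb, az)            # Eq. (2.14); beta in [0, 180] deg
    if sb > eps:                         # regular case
        alpha, gamma = math.atan2(ay, ax), math.atan2(oz, -nz)
    elif az > 0:                         # beta = 0, Eq. (2.15)
        alpha, gamma = 0.0, math.atan2(-ox, nx)
    else:                                # beta = 180 deg, Eq. (2.16)
        alpha, gamma = 0.0, math.atan2(ox, -nx)
    return alpha, beta, gamma
```

At β = 0 the two remaining rotations share one axis, so only the sum α + γ is observable; the convention above reports it entirely in γ.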

2.4 Forward Kinematics

The forward kinematic model is generally obtained by parameterizing the geometric relationship between the robot's links so as to compute the pose of the end-effector from the robot's joint values. The D-H model is used here to establish the forward kinematic model of a typical six-degree-of-freedom revolute-joint robot.

2.4.1 Link Description and Link Frame

The structure of an industrial robot can be abstracted into several links connected by joints. The geometric relationship between the links can be expressed by the pose between the frames attached to them. The link parameters and link frames of a robot should be determined uniformly with the definition shown in Fig. 2.3. Taking link i as an example, the link frame {i} fixed to it is defined as follows:

(1) The z-axis of the link frame {i} coincides with the joint axis i, with its positive direction the same as that of the joint axis i, which is generally determined by the right-hand rule.

Fig. 2.3 Link kinematics parameters [1]

(2) The x-axis of the link frame {i} coincides with the common normal of joint axes i−1 and i, and its positive direction points from joint axis i−1 to joint axis i.
(3) The y-axis of the link frame {i} is determined according to the right-hand rule.
(4) The origin oi of the link frame {i} is the intersection of zi and xi when the joint axes i−1 and i are not parallel; otherwise, oi is the intersection of zi and xi−1.

The robot kinematic parameters based on the link frame {i} can be defined as follows:

(1) Joint rotational angle θi: the angle of rotation from xi to xi+1 around the zi axis; its positive direction is counterclockwise by the right-hand rule.
(2) Link offset di: the distance from xi to xi+1 along the zi axis; its positive direction is that of the zi axis.
(3) Joint torsional angle αi: the angle of rotation from zi to zi+1 around the xi+1 axis; its positive direction is counterclockwise by the right-hand rule.
(4) Link length ai: the distance from zi to zi+1 along the xi+1 axis; it has no positive direction.

2.4.2 Link Transformation and Forward Kinematic Model

The pose relationship of the frame {i + 1} w.r.t. the frame {i} is represented by the link transformation matrix iTi+1, which can be mathematically described by the four kinematic parameters θi, di, αi, and ai. According to the geometric characteristics of these four kinematic parameters, the link transformation iTi+1 can be decomposed into four sub-transformations as follows:

(1) The frame {i} rotates by θi around the zi axis to obtain the frame {i′}.
(2) The frame {i′} moves by di along the zi axis to obtain the frame {i″}.
(3) The frame {i″} moves by ai along the xi+1 axis to obtain the frame {i‴}.
(4) The frame {i‴} rotates by αi around the xi+1 axis to obtain the frame {i + 1}.

The sub-transformations above are implemented relative to the moving frames, i.e., {i′}, {i″}, and {i‴}; therefore the link transformation iTi+1 can be expressed as

$$^{i}\boldsymbol{T}_{i+1} = \boldsymbol{T}_{rot}(z,\theta_i)\,\boldsymbol{T}_{trans}(z,d_i)\,\boldsymbol{T}_{trans}(x,a_i)\,\boldsymbol{T}_{rot}(x,\alpha_i) = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2.17}$$

where Trot and Ttrans are the homogeneous rotational and translational matrices, respectively, with the form of Eq. (2.7):

$$\boldsymbol{T}_{rot}(\ell,\theta) = \begin{bmatrix} \boldsymbol{R}(\ell,\theta) & \boldsymbol{0} \\ \boldsymbol{0} & 1 \end{bmatrix} \quad (\ell = x, y, z) \tag{2.18}$$

$$\boldsymbol{T}_{trans}(x,d) = \begin{bmatrix} 1 & 0 & 0 & d \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},\quad \boldsymbol{T}_{trans}(y,d) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & d \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},\quad \boldsymbol{T}_{trans}(z,d) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & d \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2.19}$$
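The decomposition in Eq. (2.17) can be checked numerically. The sketch below (an illustration for this text; the function names are ours) multiplies the four sub-transformations built from Eqs. (2.18) and (2.19) and compares the product with the closed-form link matrix:

```python
import math

def mat4_mul(A, B):
    """4x4 homogeneous matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def trot_z(t):
    """Homogeneous rotation about z, Eq. (2.18)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def trot_x(t):
    """Homogeneous rotation about x, Eq. (2.18)."""
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def ttrans(dx, dy, dz):
    """Homogeneous translation, Eq. (2.19)."""
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

def dh_transform(theta, d, a, alpha):
    """Closed form of the link transformation iT_{i+1}, Eq. (2.17)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0,   sa,       ca,      d],
            [0,   0,        0,       1]]

def dh_by_products(theta, d, a, alpha):
    """Same matrix, built as Trot(z,theta) Ttrans(z,d) Ttrans(x,a) Trot(x,alpha)."""
    T = mat4_mul(trot_z(theta), ttrans(0, 0, d))
    T = mat4_mul(T, ttrans(a, 0, 0))
    return mat4_mul(T, trot_x(alpha))
```

The two constructions agree term by term, which mirrors the derivation of Eq. (2.17) from the four sub-transformations.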

Multiplying successively the transformation matrices of the links using Eq. (2.17), one obtains the robot's forward kinematic model, i.e., the pose transformation matrix of the end-effector, as

$$^{0}\boldsymbol{T}_n = {}^{0}\boldsymbol{T}_1\,{}^{1}\boldsymbol{T}_2 \cdots {}^{n-1}\boldsymbol{T}_n = \begin{bmatrix} ^{0}\boldsymbol{n}_n & ^{0}\boldsymbol{o}_n & ^{0}\boldsymbol{a}_n & ^{0}\boldsymbol{p}_n \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} ^{0}\boldsymbol{R}_n & ^{0}\boldsymbol{p}_n \\ \boldsymbol{0} & 1 \end{bmatrix} \tag{2.20}$$

where 0Rn, which consists of 0nn, 0on, and 0an, is the rotation matrix of the robot's end-effector, corresponding to the projections of the x-, y-, and z-axis of link frame {n} onto those of link frame {0}, respectively; 0pn is the position vector of the robot's

Fig. 2.4 Singularities in consecutive parallel joints

end-effector, which gives the coordinates of the origin of link frame {n} w.r.t. link frame {0}; n is the number of degrees of freedom of the robot. It should be noted that a singular phenomenon occurs if consecutive joint axes are parallel or nearly parallel. As shown in Fig. 2.4, when the axis of the actual joint i−1 has a small rotation, the origin of the joint frame i may change abruptly to the intersection of the actual axis of joint i and the axis of joint i−1 according to the definition of the D-H model; the theoretical link length then changes abruptly to zero, and the link offset approaches infinity. To address this problem, the modified D-H model was developed. In particular, when consecutive joint axes are parallel or nearly parallel, a rotational angle βi around the y-axis of the frame {i + 1} is introduced on the basis of the D-H model. Then Eq. (2.17) is corrected as

$$^{i}\boldsymbol{T}_{i+1} = \boldsymbol{T}_{rot}(z,\theta_i)\,\boldsymbol{T}_{trans}(z,d_i)\,\boldsymbol{T}_{trans}(x,a_i)\,\boldsymbol{T}_{rot}(x,\alpha_i)\,\boldsymbol{T}_{rot}(y,\beta_i) = \begin{bmatrix} c\theta_i c\beta_i - s\theta_i s\alpha_i s\beta_i & -s\theta_i c\alpha_i & c\theta_i s\beta_i + s\theta_i s\alpha_i c\beta_i & a_i c\theta_i \\ s\theta_i c\beta_i + s\alpha_i c\theta_i s\beta_i & c\theta_i c\alpha_i & s\theta_i s\beta_i - s\alpha_i c\theta_i c\beta_i & a_i s\theta_i \\ -c\alpha_i s\beta_i & s\alpha_i & c\alpha_i c\beta_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2.21}$$

where c denotes the cosine function cos, and s denotes the sine function sin.
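A direct transcription of Eq. (2.21) makes the role of βi easy to check: setting β = 0 recovers the standard link transformation of Eq. (2.17). This is an illustrative sketch written for this text (the function name is ours), not the book's code:

```python
import math

def dh_modified(theta, d, a, alpha, beta):
    """Modified D-H link transformation of Eq. (2.21): the standard D-H
    transformation followed by an extra rotation beta about the y-axis of
    frame {i+1}, used when consecutive joint axes are (nearly) parallel."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    return [[ct * cb - st * sa * sb, -st * ca, ct * sb + st * sa * cb, a * ct],
            [st * cb + sa * ct * sb,  ct * ca, st * sb - sa * ct * cb, a * st],
            [-ca * sb,                sa,      ca * cb,                d],
            [0,                       0,       0,                      1]]
```

With β = 0 the matrix reduces term by term to Eq. (2.17), so the modified model is a strict superset of the standard D-H model.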

2.4.3 Forward Kinematic Model of a Typical KUKA Industrial Robot

The KUKA KR210 industrial robot is considered in this section to show the modeling of a robot's forward kinematics using the D-H approach. The KUKA KR210 is a typical six-degree-of-freedom articulated robot, shown in Fig. 2.5, where axes A2 and A3 are parallel, and axes A4, A5, and A6 intersect at the wrist point.

Fig. 2.5 Typical KUKA industrial robot and its joint axes [2]

The structural dimensions and working envelope of the KUKA KR210 industrial robot are shown in Fig. 2.6a. The overall structure of the KUKA KR210 is the same as that of most KUKA industrial robots; only the values of geometric parameters, such as link lengths and joint constraints, differ. The link frames of the KUKA KR210 industrial robot are demonstrated in Fig. 2.6b. It is worth noticing that, as the frames {0} and {f} in Fig. 2.6b are not link frames, they are defined in light of the conventions for KUKA industrial robots. The frame {0} represents the base, which is a fixed reference frame. Its origin is at the center of the bottom surface of the robot; the z0 axis points vertically upward; the x0 axis is parallel to the x1 axis, with the same direction, when the robot lies at the "HOME position", i.e., the position with the six joint angles (0, −90°, 90°, 0, 0, 0). The frame {f} represents the flange; its origin is at the center of the flange plane, its zf axis is perpendicular to the flange plane and points away from the robot, and its xf axis points vertically downward when the robot is located at the "HOME position". According to the definition of the robot link parameters described in Sect. 2.4.1 and Fig. 2.6a, the link parameters of the KUKA KR210 industrial robot can be obtained as shown in Table 2.1, where the additional 90° in θ3 is due to the fact that the zero position of the A3 axis set by KUKA differs from the definition of the kinematic parameters here.

Fig. 2.6 Schematic diagram of KUKA KR210 industrial robot: a working space [2]; b link frames

Table 2.1 Kinematic parameters of KUKA KR210 robot

| Serial number | αi (°) | di (mm) | θi (°)  | ai (mm) |
|---------------|--------|---------|---------|---------|
| 0             | 180    | 675     | 0       | 0       |
| 1             | 90     | 0       | θ1      | 350     |
| 2             | 0      | 0       | θ2      | 1150    |
| 3             | −90    | 0       | θ3 + 90 | 41      |
| 4             | 90     | −1200   | θ4      | 0       |
| 5             | −90    | 0       | θ5      | 0       |
| 6             | 180    | −215    | θ6      | 0       |

Substituting the kinematic parameters in Table 2.1 into Eq. (2.17), the homogeneous transformation matrices between consecutive link frames are obtained as

$$^{0}\boldsymbol{T}_1 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & d_0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2.22}$$

$$^{1}\boldsymbol{T}_2 = \begin{bmatrix} \cos\theta_1 & 0 & \sin\theta_1 & a_1\cos\theta_1 \\ \sin\theta_1 & 0 & -\cos\theta_1 & a_1\sin\theta_1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2.23}$$

$$^{2}\boldsymbol{T}_3 = \begin{bmatrix} \cos\theta_2 & -\sin\theta_2 & 0 & a_2\cos\theta_2 \\ \sin\theta_2 & \cos\theta_2 & 0 & a_2\sin\theta_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2.24}$$

$$^{3}\boldsymbol{T}_4 = \begin{bmatrix} -\sin\theta_3 & 0 & \cos\theta_3 & -a_3\sin\theta_3 \\ \cos\theta_3 & 0 & \sin\theta_3 & a_3\cos\theta_3 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2.25}$$

$$^{4}\boldsymbol{T}_5 = \begin{bmatrix} \cos\theta_4 & 0 & \sin\theta_4 & 0 \\ \sin\theta_4 & 0 & -\cos\theta_4 & 0 \\ 0 & 1 & 0 & d_4 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2.26}$$

$$^{5}\boldsymbol{T}_6 = \begin{bmatrix} \cos\theta_5 & 0 & -\sin\theta_5 & 0 \\ \sin\theta_5 & 0 & \cos\theta_5 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2.27}$$

$$^{6}\boldsymbol{T}_f = \begin{bmatrix} \cos\theta_6 & \sin\theta_6 & 0 & 0 \\ \sin\theta_6 & -\cos\theta_6 & 0 & 0 \\ 0 & 0 & -1 & d_6 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2.28}$$

Substituting Eqs. (2.22–2.28) into Eq. (2.20), the forward kinematic model of the KUKA KR210 industrial robot, i.e., the pose matrix of the robot’s end-effector is acquired as




{}^{0}T_{f} = {}^{0}T_{1}\,{}^{1}T_{2}\cdots{}^{6}T_{f} = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}   (2.29)

with

\begin{cases}
n_x = c_1 s_{23}(s_4 s_6 - c_4 c_5 c_6) + s_1(s_4 c_5 c_6 + c_4 s_6) + c_1 c_{23} s_5 c_6 \\
n_y = s_1 s_{23}(c_4 c_5 c_6 - s_4 s_6) + c_1(s_4 c_5 c_6 + c_4 s_6) - s_1 c_{23} s_5 c_6 \\
n_z = c_{23}(s_4 s_6 - c_4 c_5 c_6) - s_{23} s_5 c_6 \\
o_x = s_1(s_4 c_5 s_6 - c_4 c_6) - c_1 s_{23}(c_4 c_5 s_6 + s_4 c_6) + c_1 c_{23} s_5 s_6 \\
o_y = s_1 s_{23}(c_4 c_5 s_6 + s_4 c_6) + c_1(s_4 c_5 s_6 - c_4 c_6) - s_1 c_{23} s_5 s_6 \\
o_z = -c_{23}(c_4 c_5 s_6 + s_4 c_6) - s_{23} s_5 s_6 \\
a_x = s_1 s_4 s_5 - c_1 s_{23} c_4 s_5 - c_1 c_{23} c_5 \\
a_y = s_1 s_{23} c_4 s_5 + c_1 s_4 s_5 + s_1 c_{23} c_5 \\
a_z = s_{23} c_5 - c_{23} c_4 s_5 \\
p_x = c_1(a_1 + a_2 c_2 - a_3 s_{23}) - c_1 c_{23}(d_4 + d_6 c_5) + d_6 c_1 s_{23} c_4 s_5 + d_6 s_1 s_4 s_5 \\
p_y = s_1 c_{23}(d_4 + d_6 c_5) - s_1(a_1 + a_2 c_2 - a_3 s_{23}) - d_6 s_1 s_{23} c_4 s_5 + d_6 c_1 s_4 s_5 \\
p_z = s_{23}(d_4 + d_6 c_5) + d_6 c_{23} c_4 s_5 - a_3 c_{23} - a_2 s_2 + d_0
\end{cases}   (2.30)

where c1 denotes cos θ1, s1 denotes sin θ1, c23 denotes cos(θ2 + θ3), s23 denotes sin(θ2 + θ3), and so on. The position of the robot's end-effector is represented by the vector [px py pz]T, and its posture is given by the vectors n, o, and a in Eq. (2.30).
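The forward kinematic model can also be checked numerically by multiplying the seven link transforms of Eqs. (2.22)–(2.28). Below is an illustrative sketch (not the book's code; the signs of the 3T4 entries follow from the α3 = −90° entry of Table 2.1 together with the wrist-point equations (2.37)–(2.38)):

```python
import numpy as np

# Table 2.1 values (mm); note d4 and d6 are negative in the table.
d0, a1, a2, a3, d4, d6 = 675.0, 350.0, 1150.0, 41.0, -1200.0, -215.0

def link_transforms(t1, t2, t3, t4, t5, t6):
    """The seven homogeneous link transforms of Eqs. (2.22)-(2.28)."""
    c, s = np.cos, np.sin
    return [
        np.array([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, d0], [0, 0, 0, 1.0]]),
        np.array([[c(t1), 0, s(t1), a1*c(t1)], [s(t1), 0, -c(t1), a1*s(t1)],
                  [0, 1, 0, 0], [0, 0, 0, 1.0]]),
        np.array([[c(t2), -s(t2), 0, a2*c(t2)], [s(t2), c(t2), 0, a2*s(t2)],
                  [0, 0, 1, 0], [0, 0, 0, 1.0]]),
        np.array([[-s(t3), 0, -c(t3), -a3*s(t3)], [c(t3), 0, -s(t3), a3*c(t3)],
                  [0, -1, 0, 0], [0, 0, 0, 1.0]]),
        np.array([[c(t4), 0, s(t4), 0], [s(t4), 0, -c(t4), 0],
                  [0, 1, 0, d4], [0, 0, 0, 1.0]]),
        np.array([[c(t5), 0, -s(t5), 0], [s(t5), 0, c(t5), 0],
                  [0, -1, 0, 0], [0, 0, 0, 1.0]]),
        np.array([[c(t6), s(t6), 0, 0], [s(t6), -c(t6), 0, 0],
                  [0, 0, -1, d6], [0, 0, 0, 1.0]]),
    ]

def forward_kinematics(joints_deg):
    """Pose of the flange frame w.r.t. the base, Eq. (2.29)."""
    T = np.eye(4)
    for Ti in link_transforms(*np.radians(np.asarray(joints_deg, float))):
        T = T @ Ti
    return T

T_home = forward_kinematics([0.0, -90.0, 90.0, 0.0, 0.0, 0.0])
```

At the HOME position the flange origin lands at x = a1 − d4 − d6 = 1765 mm and z = d0 + a2 − a3 = 1784 mm, consistent with the link dimensions of Table 2.1, and the computed flange x-axis points vertically downward, as stated for frame {f} above.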

2.5 Inverse Kinematics

In Sect. 2.4, the forward kinematics problem was considered: computing the pose of the tool frame w.r.t. the base frame from given joint angles. Here, the inverse kinematics problem is investigated: computing the set of joint angles that realizes a desired pose of the end-effector. Inverse kinematics is the theoretical basis for robot motion control, parametric calibration, and error compensation. This section discusses the formulation of the inverse kinematic model of a revolute-joint industrial robot with six degrees of freedom.


2.5.1 Uniquely Closed Solution with Joint Constraints

In general, the inverse kinematics of a 6-DOF serial robot has no closed-form solution: the kinematic equations established by the D-H method admit many inverse solutions that cannot be resolved effectively. However, a closed-form solution exists if the robot satisfies one of the following sufficient conditions:

(1) Three consecutive joint axes intersect at one point;
(2) Three consecutive joint axes are parallel to each other.

These conditions constitute the Pieper criterion [3]. For 6-DOF articulated industrial robots, the first condition is normally satisfied; for instance, the last three joint axes of the KUKA KR210 robot intersect at the wrist point. The idea for solving the inverse kinematics of a robot satisfying Pieper criterion (1) is as follows:

(1) Compute the position of the wrist point from the end-effector's pose, and solve for θ1 of joint A1;
(2) Solve for angles θ2 and θ3 from the solution of the planar double-link mechanism, since joint axes A2 and A3 are parallel;
(3) Solve for angles θ4, θ5, and θ6 based on the Z-Y-Z Euler angle representation, since the posture of the end-effector is produced by the rotations of axes A4, A5 and A6 and can therefore be extracted from the rotation matrix corresponding to Z-Y-Z Euler angles.

Unfortunately, the method above usually yields multiple inverse solutions; for a revolute-joint robot with six degrees of freedom there may be up to 8 groups of solutions. Hence, a 3-bit binary state quantity s (from 000 to 111) is used to constrain the joint angles and determine a uniquely closed solution. The meaning of each bit of the state quantity s (s2 s1 s0) is shown in Table 2.2, where s0 determines the sign of θ1, s1 determines the sign of θ3, and s2 determines the sign of θ5.
φ is the angle between the line from the wrist point to the origin of axis A3 and the line from the wrist point to the origin of axis A4, which is related to the vertical offset between axes A3 and A4. Using the constraint state quantity, one can solve for all the closed inverse solutions corresponding to a given pose of the end-effector, and obtain a uniquely closed solution by specifying the state quantity. In the following, the uniquely closed solution with joint constraints is derived for a typical KUKA industrial robot.

Table 2.2 Definition of state quantity s

Value | s2                             | s1     | s0
0     | 0° ≤ θ5 ≤ 180°, or θ5 < −180°  | θ3 < φ | The wrist point is in the +x direction of frame {1}
1     | −180° ≤ θ5 ≤ 0°, or θ5 ≥ 180°  | θ3 ≥ φ | The wrist point is in the −x direction of frame {1}


2.5.2 Inverse Kinematic Model of a Typical KUKA Industrial Robot

Without loss of generality, assume that the joint state quantity is s (s2 s1 s0) and that the pose transformation of the flange frame w.r.t. the base frame is

{}^{b}T_{f} = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}   (2.31)

(1) Solution of θ1

Since the wrist point is the intersection of axes A4, A5 and A6, the angles θ4, θ5 and θ6 do not affect the position of the wrist point w.r.t. the base frame. It can be seen from Fig. 2.6b that the wrist point of the typical KUKA industrial robot is the origin of link frame {5} and also the origin of link frame {6}; therefore, the position of the wrist point can be determined by calculating the poses of these two frames. The pose of link frame {6} w.r.t. the base frame can be calculated as

{}^{b}T_{6} = {}^{b}T_{f}\,({}^{6}T_{f})^{-1} = \begin{bmatrix} n'_x & o'_x & a'_x & p'_x \\ n'_y & o'_y & a'_y & p'_y \\ n'_z & o'_z & a'_z & p'_z \\ 0 & 0 & 0 & 1 \end{bmatrix}   (2.32)

Substituting Eq. (2.28) into Eq. (2.32), the positional coordinate of the wrist point w.r.t. the base frame can be obtained as

{}^{b}p_{6} = \begin{bmatrix} p'_x \\ p'_y \\ p'_z \end{bmatrix} = \begin{bmatrix} a_x d_6 + p_x \\ a_y d_6 + p_y \\ a_z d_6 + p_z \end{bmatrix}   (2.33)

Since the projection of the wrist point onto the xy plane of the base frame is only related to the rotation of axis A1, we can further compute the value of θ1.

If s0 = 0, then

\theta_1 = -\operatorname{atan2}(p'_y, p'_x)   (2.34)

If s0 = 1, then

\theta_1 = \begin{cases} -\operatorname{atan2}(p'_y, p'_x) - \pi, & \text{when } \operatorname{atan2}(p'_y, p'_x) > 0 \\ -\operatorname{atan2}(p'_y, p'_x) + \pi, & \text{when } \operatorname{atan2}(p'_y, p'_x) \le 0 \end{cases}   (2.35)
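As an illustrative sketch (not the book's code), Eqs. (2.33)–(2.35) translate directly into:

```python
import numpy as np

def wrist_point(T_bf, d6):
    """Wrist position b_p6 from the flange pose, Eq. (2.33)."""
    a = T_bf[:3, 2]          # approach vector (a_x, a_y, a_z)
    p = T_bf[:3, 3]
    return p + a * d6

def solve_theta1(p_wrist, s0=0):
    """theta_1 from the wrist position, Eqs. (2.34)-(2.35)."""
    at = np.arctan2(p_wrist[1], p_wrist[0])
    if s0 == 0:
        return -at
    # s0 = 1: the wrist lies in the -x direction of frame {1}
    return -at - np.pi if at > 0 else -at + np.pi
```

For a wrist point on the +x0 axis, solve_theta1 returns 0 with s0 = 0 and π with s0 = 1, matching the two configurations distinguished by Table 2.2.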


(2) Solutions of θ2 and θ3

According to θ1, the positional coordinate of the wrist point w.r.t. link frame {2} can be obtained as

{}^{2}p_{6} = ({}^{b}T_{1}\,{}^{1}T_{2})^{-1}\,{}^{b}p_{6} = \begin{bmatrix} p'_x c_1 - p'_y s_1 - a_1 \\ -p'_z + d_0 \\ p'_x s_1 + p'_y c_1 \end{bmatrix}   (2.36)

Further, the pose of link frame {5} w.r.t. link frame {2} can be calculated by

{}^{2}T_{5} = {}^{2}T_{3}\,{}^{3}T_{4}\,{}^{4}T_{5} = \begin{bmatrix} -s_{23}c_4 & -c_{23} & -s_{23}s_4 & a_2 c_2 - c_{23} d_4 - a_3 s_{23} \\ c_{23}c_4 & -s_{23} & c_{23}s_4 & a_3 c_{23} + a_2 s_2 - d_4 s_{23} \\ -s_4 & 0 & c_4 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (2.37)

Because {}^{2}p_{5} = {}^{2}p_{6}, we have

\begin{bmatrix} a_2 c_2 - c_{23} d_4 - a_3 s_{23} \\ a_3 c_{23} + a_2 s_2 - d_4 s_{23} \\ 0 \end{bmatrix} = \begin{bmatrix} p'_x c_1 - p'_y s_1 - a_1 \\ -p'_z + d_0 \\ p'_x s_1 + p'_y c_1 \end{bmatrix}   (2.38)

Let

l_3 = \sqrt{a_3^2 + d_4^2}   (2.39)

k_1 = p'_x c_1 - p'_y s_1 - a_1   (2.40)

k_2 = -p'_z + d_0   (2.41)

k_3 = \sqrt{k_1^2 + k_2^2}   (2.42)

If (|a_2| + l_3) < k_3 or ||a_2| - l_3| > k_3, the end-effector's pose is unreachable under the given joint constraints, and consequently the inverse kinematics has no solution. Let

\varphi = \operatorname{atan2}(|a_3|, |d_4|)   (2.43)

\varphi_1 = \operatorname{atan2}(k_2, k_1)   (2.44)


\varphi_2 = \arccos\left(\frac{a_2^2 + k_3^2 - l_3^2}{2 a_2 k_3}\right)   (2.45)

\varphi_3 = \varphi_2 + \arccos\left(\frac{l_3^2 + k_3^2 - a_2^2}{2 l_3 k_3}\right)   (2.46)

If s1 = 0, then

\theta_2 = \varphi_1 + \varphi_2   (2.47)

\theta_3 = \begin{cases} \varphi - \varphi_3, & a_3 < 0 \\ -\varphi - \varphi_3, & a_3 \ge 0 \end{cases}   (2.48)

If s1 = 1, then

\theta_2 = \varphi_1 - \varphi_2   (2.49)

\theta_3 = \begin{cases} \varphi + \varphi_3, & a_3 < 0 \\ -\varphi + \varphi_3, & a_3 \ge 0 \end{cases}   (2.50)
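The planar double-link construction of Eqs. (2.39)–(2.50) can be sketched as follows (illustrative code; the Table 2.1 constants are assumed in millimetres):

```python
import numpy as np

d0, a1, a2, a3, d4 = 675.0, 350.0, 1150.0, 41.0, -1200.0   # Table 2.1 (mm)

def solve_theta23(p_wrist, theta1, s1=0):
    """theta_2 and theta_3 via the planar double-link construction,
    Eqs. (2.39)-(2.50); returns None if the pose is unreachable."""
    c1, sn1 = np.cos(theta1), np.sin(theta1)
    l3 = np.hypot(a3, d4)                              # (2.39)
    k1 = p_wrist[0] * c1 - p_wrist[1] * sn1 - a1       # (2.40)
    k2 = -p_wrist[2] + d0                              # (2.41)
    k3 = np.hypot(k1, k2)                              # (2.42)
    if k3 > abs(a2) + l3 or k3 < abs(abs(a2) - l3):    # unreachable pose
        return None
    phi = np.arctan2(abs(a3), abs(d4))                 # (2.43)
    phi1 = np.arctan2(k2, k1)                          # (2.44)
    phi2 = np.arccos((a2**2 + k3**2 - l3**2) / (2 * a2 * k3))         # (2.45)
    phi3 = phi2 + np.arccos((l3**2 + k3**2 - a2**2) / (2 * l3 * k3))  # (2.46)
    if s1 == 0:
        theta2 = phi1 + phi2                           # (2.47)
        theta3 = (phi - phi3) if a3 < 0 else (-phi - phi3)            # (2.48)
    else:
        theta2 = phi1 - phi2                           # (2.49)
        theta3 = (phi + phi3) if a3 < 0 else (-phi + phi3)            # (2.50)
    return theta2, theta3
```

At the HOME configuration the wrist point sits at (1550, 0, 1784) mm, and the s1 = 1 branch recovers θ2 = −90°, θ3 = 90°.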

(3) Solutions of θ4, θ5 and θ6

On the basis of the solutions of θ1, θ2 and θ3, the pose of link frame {4} w.r.t. the base frame can be obtained as

{}^{b}T_{4} = \begin{bmatrix} -c_1 s_{23} & -s_1 & -c_1 c_{23} & c_1(a_1 + a_2 c_2 - a_3 s_{23}) \\ s_1 s_{23} & -c_1 & s_1 c_{23} & -s_1(a_1 + a_2 c_2 - a_3 s_{23}) \\ -c_{23} & 0 & s_{23} & d_0 - a_3 c_{23} - a_2 s_2 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (2.51)

Then, the pose of the flange frame w.r.t. link frame {4} can be calculated by

{}^{4}T_{f} = ({}^{b}T_{4})^{-1}\,{}^{b}T_{f} = \begin{bmatrix} n''_x & o''_x & a''_x & p''_x \\ n''_y & o''_y & a''_y & p''_y \\ n''_z & o''_z & a''_z & p''_z \\ 0 & 0 & 0 & 1 \end{bmatrix}   (2.52)

According to the link transformations, the rotation matrix corresponding to the posture of the flange frame w.r.t. link frame {4} is

{}^{4}R_{f} = \begin{bmatrix} c_4 c_5 c_6 - s_4 s_6 & c_4 c_5 s_6 + s_4 c_6 & c_4 s_5 \\ s_4 c_5 c_6 + c_4 s_6 & s_4 c_5 s_6 - c_4 c_6 & s_4 s_5 \\ s_5 c_6 & s_5 s_6 & -c_5 \end{bmatrix}   (2.53)


Combining Eqs. (2.52) and (2.53), one can solve for θ4, θ5, and θ6 by referring to the Z-Y-Z Euler angle solution. θ5 is solved first; note that the (3,3) entry of Eq. (2.53) gives a''_z = -\cos\theta_5.

If s2 = 0, then

\theta_5 = \arccos(-a''_z)   (2.54)

If s2 = 1, then

\theta_5 = -\arccos(-a''_z)   (2.55)

In general, the calculation formulas of θ4 and θ6 are

\theta_4 = \operatorname{atan2}\left(\frac{a''_y}{s_5}, \frac{a''_x}{s_5}\right)   (2.56)

\theta_6 = \operatorname{atan2}\left(\frac{o''_z}{s_5}, \frac{n''_z}{s_5}\right)   (2.57)

However, when θ5 = 0 or θ5 = π, Eqs. (2.56) and (2.57) degenerate, and θ4 and θ6 are replaced by

\begin{cases} \theta_5 = 0 \\ \theta_4 = 0 \\ \theta_6 = \operatorname{atan2}(n''_y, -o''_y) \end{cases}   (2.58)

\begin{cases} \theta_5 = \pi \\ \theta_4 = 0 \\ \theta_6 = \operatorname{atan2}(-n''_y, o''_y) \end{cases}   (2.59)
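The Euler-style extraction of Eqs. (2.54)–(2.59), including the wrist-singularity fallback, can be sketched as follows (illustrative code; the sign convention for θ5 follows from the (3,3) entry of Eq. (2.53), which equals −cos θ5):

```python
import numpy as np

def euler_matrix(t4, t5, t6):
    """Rotation matrix of Eq. (2.53)."""
    c4, s4 = np.cos(t4), np.sin(t4)
    c5, s5 = np.cos(t5), np.sin(t5)
    c6, s6 = np.cos(t6), np.sin(t6)
    return np.array([
        [c4*c5*c6 - s4*s6, c4*c5*s6 + s4*c6, c4*s5],
        [s4*c5*c6 + c4*s6, s4*c5*s6 - c4*c6, s4*s5],
        [s5*c6,            s5*s6,            -c5  ],
    ])

def solve_theta456(R4f, s2=0, eps=1e-9):
    """theta_4, theta_5, theta_6 from the flange posture w.r.t. frame {4},
    Eqs. (2.54)-(2.59)."""
    t5 = np.arccos(np.clip(-R4f[2, 2], -1.0, 1.0))   # a''_z = -cos(theta_5)
    if s2 == 1:
        t5 = -t5                                      # (2.55)
    s5 = np.sin(t5)
    if abs(s5) > eps:                                 # regular case
        t4 = np.arctan2(R4f[1, 2] / s5, R4f[0, 2] / s5)   # (2.56)
        t6 = np.arctan2(R4f[2, 1] / s5, R4f[2, 0] / s5)   # (2.57)
    elif abs(t5) < eps:                               # theta_5 = 0, Eq. (2.58)
        t4, t6 = 0.0, np.arctan2(R4f[1, 0], -R4f[1, 1])
    else:                                             # theta_5 = pi, Eq. (2.59)
        t4, t6 = 0.0, np.arctan2(-R4f[1, 0], R4f[1, 1])
    return t4, t5, t6
```

Rebuilding Eq. (2.53) from the recovered angles reproduces the original matrix, in both the regular and the singular case.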

2.6 Error Modeling

2.6.1 Differential Transformation

The differential motion of a rigid body comprises a differential translation and a differential rotation. Suppose a frame is fixed to a rigid body and its pose transformation matrix w.r.t. a fixed reference frame is T. When a differential motion occurs, the pose transformation matrix of this frame becomes T + dT, which can be expressed in the fixed reference frame as

T + dT = \operatorname{Trans}(d_x, d_y, d_z)\,\operatorname{Rot}(f, d\theta)\,T   (2.60)


where Trans(dx, dy, dz) represents the differential translation of the transformed frame w.r.t. the fixed reference frame, with (dx, dy, dz) the translation components along the coordinate axes, and Rot(f, dθ) represents the differential rotation of the transformed frame w.r.t. the fixed reference frame about the vector f, with dθ the angle of differential rotation. Rearranging Eq. (2.60), one obtains the expression of dT as

dT = [\operatorname{Trans}(d_x, d_y, d_z)\,\operatorname{Rot}(f, d\theta) - I]\,T = \Delta\,T   (2.61)

where Δ = Trans(dx, dy, dz) Rot(f, dθ) − I is the differential transform matrix of a rigid body w.r.t. the fixed reference frame. It is well known that the homogeneous transformation matrices representing the differential translation and rotation are

\operatorname{Trans}(d_x, d_y, d_z) = \begin{bmatrix} 1 & 0 & 0 & d_x \\ 0 & 1 & 0 & d_y \\ 0 & 0 & 1 & d_z \\ 0 & 0 & 0 & 1 \end{bmatrix}   (2.62)

and

\operatorname{Rot}(f, d\theta) = \begin{bmatrix} f_x f_x \operatorname{vers} d\theta + \cos d\theta & f_y f_x \operatorname{vers} d\theta - f_z \sin d\theta & f_z f_x \operatorname{vers} d\theta + f_y \sin d\theta & 0 \\ f_x f_y \operatorname{vers} d\theta + f_z \sin d\theta & f_y f_y \operatorname{vers} d\theta + \cos d\theta & f_z f_y \operatorname{vers} d\theta - f_x \sin d\theta & 0 \\ f_x f_z \operatorname{vers} d\theta - f_y \sin d\theta & f_y f_z \operatorname{vers} d\theta + f_x \sin d\theta & f_z f_z \operatorname{vers} d\theta + \cos d\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (2.63)

where vers dθ = 1 − cos dθ. Since dθ is a small quantity, we have

\begin{cases} \sin d\theta \approx d\theta \\ \cos d\theta \approx 1 \\ \operatorname{vers} d\theta \approx 0 \end{cases}   (2.64)

Then Eq. (2.63) can be reduced to

\operatorname{Rot}(f, d\theta) = \begin{bmatrix} 1 & -f_z d\theta & f_y d\theta & 0 \\ f_z d\theta & 1 & -f_x d\theta & 0 \\ -f_y d\theta & f_x d\theta & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (2.65)

Accordingly, the differential rotation about the vector f is equivalent to differential rotations about the three axes of the frame. Letting fx dθ = δx, fy dθ = δy, fz dθ = δz, we have




\operatorname{Rot}(f, d\theta) = \begin{bmatrix} 1 & -\delta_z & \delta_y & 0 \\ \delta_z & 1 & -\delta_x & 0 \\ -\delta_y & \delta_x & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (2.66)

Substitution of Eqs. (2.62) and (2.66) into Eq. (2.61) yields

\Delta = \begin{bmatrix} 0 & -\delta_z & \delta_y & d_x \\ \delta_z & 0 & -\delta_x & d_y \\ -\delta_y & \delta_x & 0 & d_z \\ 0 & 0 & 0 & 0 \end{bmatrix}   (2.67)

Thus the differential transformation Δ is determined by a differential translation vector d and a differential rotation vector δ, with d = dx i + dy j + dz k and δ = δx i + δy j + δz k.
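The first-order nature of Eq. (2.61) can be checked numerically: for a small screw motion, the exact increment Trans·Rot·T − T agrees with Δ·T up to second-order terms. A sketch (illustrative, using the Rodrigues formula for the exact rotation):

```python
import numpy as np

def delta_matrix(d, delta):
    """Differential transformation of Eq. (2.67)."""
    dx, dy, dz = d
    ax, ay, az = delta
    return np.array([[0.0, -az,  ay, dx],
                     [ az, 0.0, -ax, dy],
                     [-ay,  ax, 0.0, dz],
                     [0.0, 0.0, 0.0, 0.0]])

# Exact small motion: Trans(d) Rot(f, dtheta), with Rot built by Rodrigues.
f = np.array([1.0, 2.0, 2.0]) / 3.0          # unit rotation axis
dtheta = 1e-5
d = np.array([1e-5, -2e-5, 3e-5])
K = np.array([[0, -f[2], f[1]], [f[2], 0, -f[0]], [-f[1], f[0], 0]])
Rot = np.eye(4)
Rot[:3, :3] = np.eye(3) + np.sin(dtheta) * K + (1 - np.cos(dtheta)) * (K @ K)
Trans = np.eye(4)
Trans[:3, 3] = d

T = np.eye(4)
T[:3, 3] = [100.0, 50.0, 25.0]               # an arbitrary pose
dT_exact = Trans @ Rot @ T - T
dT_first_order = delta_matrix(d, f * dtheta) @ T   # Eq. (2.61): dT = Delta T
```

The two increments differ only by terms of order dθ², which is what Eq. (2.64) discards.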

2.6.2 Differential Transformation of Consecutive Links

The parametric error of a robot link, a small quantity, is responsible for the link's transformation error, which can be considered as a differential transformation caused by the link's parametric error. Assume that the theoretical transformation of link {i + 1} w.r.t. link {i} is {}^{i}T_{i+1}. When a parametric error occurs, the actual transformation becomes {}^{i}T_{i+1} + d\,{}^{i}T_{i+1}, where d\,{}^{i}T_{i+1}, the differentiation of link {i + 1} w.r.t. link {i}, can be approximately written as a linear combination of the link's parametric errors, i.e.,

d\,{}^{i}T_{i+1} = \frac{\partial {}^{i}T_{i+1}}{\partial \theta_i}\Delta\theta_i + \frac{\partial {}^{i}T_{i+1}}{\partial d_i}\Delta d_i + \frac{\partial {}^{i}T_{i+1}}{\partial a_i}\Delta a_i + \frac{\partial {}^{i}T_{i+1}}{\partial \alpha_i}\Delta\alpha_i + \frac{\partial {}^{i}T_{i+1}}{\partial \beta_i}\Delta\beta_i   (2.68)

where ∆θi, ∆di, ∆ai, ∆αi, and ∆βi represent the parametric errors of link {i} in the MD-H model. Taking the partial derivative of Eq. (2.21) w.r.t. θi yields

\frac{\partial {}^{i}T_{i+1}}{\partial \theta_i} = \begin{bmatrix} -s\theta_i c\beta_i - s\alpha_i c\theta_i s\beta_i & -c\alpha_i c\theta_i & -s\theta_i s\beta_i + s\alpha_i c\theta_i c\beta_i & -a_i s\theta_i \\ c\theta_i c\beta_i - s\alpha_i s\theta_i s\beta_i & -c\alpha_i s\theta_i & c\theta_i s\beta_i + s\alpha_i s\theta_i c\beta_i & a_i c\theta_i \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}   (2.69)

Since the partial derivative is evaluated at the theoretical link parameters, letting βi = 0, Eq. (2.69) can be rewritten as


\frac{\partial {}^{i}T_{i+1}}{\partial \theta_i} = \begin{bmatrix} -s\theta_i & -c\alpha_i c\theta_i & s\alpha_i c\theta_i & -a_i s\theta_i \\ c\theta_i & -c\alpha_i s\theta_i & s\alpha_i s\theta_i & a_i c\theta_i \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} = D_\theta\,{}^{i}T_{i+1}   (2.70)

where

D_\theta = \begin{bmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}   (2.71)

Similarly, the partial derivatives of Eq. (2.21) w.r.t. di, ai, αi, and βi, respectively, lead to

D_d = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}   (2.72)

D_a = \begin{bmatrix} 0 & 0 & 0 & c\theta_i \\ 0 & 0 & 0 & s\theta_i \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}   (2.73)

D_\alpha = \begin{bmatrix} 0 & 0 & s\theta_i & -d_i s\theta_i \\ 0 & 0 & -c\theta_i & d_i c\theta_i \\ -s\theta_i & c\theta_i & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}   (2.74)

D_\beta = \begin{bmatrix} 0 & -s\alpha_i & c\theta_i c\alpha_i & a_i s\theta_i s\alpha_i - d_i c\theta_i c\alpha_i \\ s\alpha_i & 0 & s\theta_i c\alpha_i & -a_i c\theta_i s\alpha_i - d_i s\theta_i c\alpha_i \\ -c\theta_i c\alpha_i & -s\theta_i c\alpha_i & 0 & a_i c\alpha_i \\ 0 & 0 & 0 & 0 \end{bmatrix}   (2.75)

Then, Eq. (2.68) can be rewritten as

d\,{}^{i}T_{i+1} = \left( D_\theta \Delta\theta_i + D_d \Delta d_i + D_a \Delta a_i + D_\alpha \Delta\alpha_i + D_\beta \Delta\beta_i \right) {}^{i}T_{i+1} = \delta\,{}^{i}T_{i+1}\,{}^{i}T_{i+1}   (2.76)

where δ\,{}^{i}T_{i+1} is the differential transformation matrix. Substituting Eqs. (2.71)–(2.75) into Eq. (2.76), one obtains




\delta\,{}^{i}T_{i+1} = \begin{bmatrix} 0 & -\Delta\theta_i & s\theta_i \Delta\alpha_i & c\theta_i \Delta a_i - d_i s\theta_i \Delta\alpha_i \\ \Delta\theta_i & 0 & -c\theta_i \Delta\alpha_i & s\theta_i \Delta a_i + d_i c\theta_i \Delta\alpha_i \\ -s\theta_i \Delta\alpha_i & c\theta_i \Delta\alpha_i & 0 & \Delta d_i \\ 0 & 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & -s\alpha_i \Delta\beta_i & c\theta_i c\alpha_i \Delta\beta_i & (a_i s\theta_i s\alpha_i - d_i c\theta_i c\alpha_i)\Delta\beta_i \\ s\alpha_i \Delta\beta_i & 0 & s\theta_i c\alpha_i \Delta\beta_i & (-a_i c\theta_i s\alpha_i - d_i s\theta_i c\alpha_i)\Delta\beta_i \\ -c\theta_i c\alpha_i \Delta\beta_i & -s\theta_i c\alpha_i \Delta\beta_i & 0 & a_i c\alpha_i \Delta\beta_i \\ 0 & 0 & 0 & 0 \end{bmatrix}   (2.77)

where the second matrix represents Dβ ∆βi. It can be seen that the differential transformation caused by the link's parametric errors has the same form as the differential transformation in Eq. (2.67), and its differential translation vector {}^{i}d_{i+1} and differential rotation vector {}^{i}\delta_{i+1} can be written as

{}^{i}d_{i+1} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\Delta d_i + \begin{bmatrix} c\theta_i \\ s\theta_i \\ 0 \end{bmatrix}\Delta a_i + \begin{bmatrix} -d_i s\theta_i \\ d_i c\theta_i \\ 0 \end{bmatrix}\Delta\alpha_i + \begin{bmatrix} a_i s\theta_i s\alpha_i - d_i c\theta_i c\alpha_i \\ -a_i c\theta_i s\alpha_i - d_i s\theta_i c\alpha_i \\ a_i c\alpha_i \end{bmatrix}\Delta\beta_i   (2.78)

{}^{i}\delta_{i+1} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\Delta\theta_i + \begin{bmatrix} c\theta_i \\ s\theta_i \\ 0 \end{bmatrix}\Delta\alpha_i + \begin{bmatrix} -s\theta_i c\alpha_i \\ c\theta_i c\alpha_i \\ s\alpha_i \end{bmatrix}\Delta\beta_i   (2.79)

Let

m_{1i} = [\,0 \;\; 0 \;\; 1\,]^T   (2.80)

m_{2i} = [\,c\theta_i \;\; s\theta_i \;\; 0\,]^T   (2.81)

m_{3i} = [\,-d_i s\theta_i \;\; d_i c\theta_i \;\; 0\,]^T   (2.82)

m_{4i} = [\,a_i s\theta_i s\alpha_i - d_i c\theta_i c\alpha_i \;\; -a_i c\theta_i s\alpha_i - d_i s\theta_i c\alpha_i \;\; a_i c\alpha_i\,]^T   (2.83)

m_{5i} = [\,-s\theta_i c\alpha_i \;\; c\theta_i c\alpha_i \;\; s\alpha_i\,]^T   (2.84)

Then the differential transformation vectors of link i can be written in the following linear form:

{}^{i}d_{i+1} = m_{1i}\Delta d_i + m_{2i}\Delta a_i + m_{3i}\Delta\alpha_i + m_{4i}\Delta\beta_i   (2.85)

{}^{i}\delta_{i+1} = m_{1i}\Delta\theta_i + m_{2i}\Delta\alpha_i + m_{5i}\Delta\beta_i   (2.86)
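Equations (2.80)–(2.86) translate directly into code; a sketch with illustrative names:

```python
import numpy as np

def link_differentials(theta, d, a, alpha,
                       dtheta=0.0, dd=0.0, da=0.0, dalpha=0.0, dbeta=0.0):
    """Differential translation and rotation vectors of one link,
    Eqs. (2.80)-(2.86) of the MD-H error model."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    m1 = np.array([0.0, 0.0, 1.0])                                  # (2.80)
    m2 = np.array([ct, st, 0.0])                                    # (2.81)
    m3 = np.array([-d * st, d * ct, 0.0])                           # (2.82)
    m4 = np.array([a*st*sa - d*ct*ca, -a*ct*sa - d*st*ca, a*ca])    # (2.83)
    m5 = np.array([-st * ca, ct * ca, sa])                          # (2.84)
    d_vec = m1*dd + m2*da + m3*dalpha + m4*dbeta                    # (2.85)
    delta_vec = m1*dtheta + m2*dalpha + m5*dbeta                    # (2.86)
    return d_vec, delta_vec
```

As a sanity check, a pure offset error Δdi shifts the link only along its z axis, while a pure joint-angle error Δθi rotates it only about that axis.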

2.6.3 Kinematics Error Model

Based on the differential transformation of links, we can model the positioning error of the robot end-effector. For an n-degree-of-freedom robotic manipulator, the coordinate transformation of its end-effector w.r.t. the base is

{}^{0}T_{n} + d\,{}^{0}T_{n} = \prod_{i=0}^{n-1} \left( {}^{i}T_{i+1} + d\,{}^{i}T_{i+1} \right)   (2.87)

Expanding Eq. (2.87) and ignoring the higher-order differential terms yields

{}^{0}T_{n} + d\,{}^{0}T_{n} = {}^{0}T_{n} + \sum_{i=0}^{n-1} \left( {}^{0}T_{1}\cdots{}^{i-1}T_{i}\cdot d\,{}^{i}T_{i+1}\cdot{}^{i+1}T_{i+2}\cdots{}^{n-1}T_{n} \right)   (2.88)

Substituting Eq. (2.76) into Eq. (2.88) yields

d\,{}^{0}T_{n} = \sum_{i=0}^{n-1} \left( {}^{0}T_{1}\cdots{}^{i-1}T_{i}\cdot \delta\,{}^{i}T_{i+1}\cdot{}^{i}T_{i+1}\cdot{}^{i+1}T_{i+2}\cdots{}^{n-1}T_{n} \right)
= \sum_{i=0}^{n-1} \left[ ({}^{0}T_{1}\cdots{}^{i-1}T_{i})\cdot \delta\,{}^{i}T_{i+1}\cdot ({}^{0}T_{1}\cdots{}^{i-1}T_{i})^{-1} \right]\cdot{}^{0}T_{n}
= \left[ \sum_{i=0}^{n-1} ({}^{0}T_{1}\cdots{}^{i-1}T_{i})\cdot \delta\,{}^{i}T_{i+1}\cdot ({}^{0}T_{1}\cdots{}^{i-1}T_{i})^{-1} \right]\cdot{}^{0}T_{n}   (2.89)

Letting d\,{}^{0}T_{n} = \delta\,{}^{0}T_{n}\cdot{}^{0}T_{n}, we have

\delta\,{}^{0}T_{n} = \sum_{i=0}^{n-1} {}^{0}T_{i}\cdot \delta\,{}^{i}T_{i+1}\cdot {}^{0}T_{i}^{-1}   (2.90)

Equation (2.90) can be further written as




\delta\,{}^{0}T_{n} = \begin{bmatrix} 0 & -\delta_z^n & \delta_y^n & d_x^n \\ \delta_z^n & 0 & -\delta_x^n & d_y^n \\ -\delta_y^n & \delta_x^n & 0 & d_z^n \\ 0 & 0 & 0 & 0 \end{bmatrix}   (2.91)

It can be seen that the positioning error of the end-effector w.r.t. the base also has the mathematical form of a differential transformation, and its differential translation vector {}^{0}d_{n} and differential rotation vector {}^{0}\delta_{n} can be expressed as

\begin{bmatrix} {}^{0}d_{n} \\ {}^{0}\delta_{n} \end{bmatrix} = \begin{bmatrix} M_1 \\ M_2 \end{bmatrix}\Delta\theta + \begin{bmatrix} M_2 \\ 0 \end{bmatrix}\Delta d + \begin{bmatrix} M_3 \\ 0 \end{bmatrix}\Delta a + \begin{bmatrix} M_4 \\ M_3 \end{bmatrix}\Delta\alpha + \begin{bmatrix} M_5 \\ M_6 \end{bmatrix}\Delta\beta = \begin{bmatrix} M_1 & M_2 & M_3 & M_4 & M_5 \\ M_2 & 0 & 0 & M_3 & M_6 \end{bmatrix} \begin{bmatrix} \Delta\theta \\ \Delta d \\ \Delta a \\ \Delta\alpha \\ \Delta\beta \end{bmatrix}   (2.92)

where ∆θ, ∆d, ∆a, ∆α, and ∆β are vectors composed of the parametric errors of each link, i.e.,

\begin{cases} \Delta\theta = [\,\Delta\theta_1 \cdots \Delta\theta_n\,]^T \\ \Delta d = [\,\Delta d_1 \cdots \Delta d_n\,]^T \\ \Delta a = [\,\Delta a_1 \cdots \Delta a_n\,]^T \\ \Delta\alpha = [\,\Delta\alpha_1 \cdots \Delta\alpha_n\,]^T \\ \Delta\beta = [\,\Delta\beta_1 \cdots \Delta\beta_n\,]^T \end{cases}   (2.93)

and M1, M2, M3, M4, M5, and M6 are 3 × n matrices composed of the link parameters θi, di, ai, and αi, with their ith column vectors expressed as

M_{1i} = {}^{0}p_{i-1} \times ({}^{0}R_{i-1} \cdot m_{1i})   (2.94)

M_{2i} = {}^{0}R_{i-1} \cdot m_{1i}   (2.95)

M_{3i} = {}^{0}R_{i-1} \cdot m_{2i}   (2.96)

M_{4i} = {}^{0}p_{i-1} \times ({}^{0}R_{i-1} \cdot m_{2i}) + ({}^{0}R_{i-1} \cdot m_{3i})   (2.97)

M_{5i} = {}^{0}p_{i-1} \times ({}^{0}R_{i-1} \cdot m_{5i}) + ({}^{0}R_{i-1} \cdot m_{4i})   (2.98)

M_{6i} = {}^{0}R_{i-1} \cdot m_{5i}   (2.99)


Heretofore, the mathematical relationship between the robot's positioning error and its kinematic parametric errors has been obtained; the robot kinematic model with the link parametric errors can thus be expressed as

{}^{0}T_{n} + d\,{}^{0}T_{n} = \left( I + \delta\,{}^{0}T_{n} \right)\cdot{}^{0}T_{n}   (2.100)
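The error model of Eqs. (2.90) and (2.100) can be validated numerically on a toy chain: build δ 0Tn from the link differentials and compare (I + δ 0Tn)·0Tn against the chain rebuilt with perturbed parameters. The sketch below is illustrative and restricts itself to joint-angle errors; the standard D-H transform is used purely as an example chain:

```python
import numpy as np

def dh(theta, d, a, alpha):
    """A standard D-H link transform, used as the i -> i+1 map of a toy chain."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st*ca,  st*sa, a*ct],
                     [st,  ct*ca, -ct*sa, a*st],
                     [0.0,    sa,     ca,    d],
                     [0.0,   0.0,    0.0, 1.0]])

def skew4(d_vec, delta_vec):
    """Differential transform of Eq. (2.67) built from d and delta."""
    S = np.zeros((4, 4))
    S[:3, :3] = [[0, -delta_vec[2], delta_vec[1]],
                 [delta_vec[2], 0, -delta_vec[0]],
                 [-delta_vec[1], delta_vec[0], 0]]
    S[:3, 3] = d_vec
    return S

params = [(0.3, 0.1, 1.0, 0.2), (0.7, 0.0, 0.8, -0.4)]   # toy 2-link chain
dtheta = [1e-6, -2e-6]                                    # joint-angle errors

T = [np.eye(4)]                   # T[i] = 0T_i, the partial products
for p in params:
    T.append(T[-1] @ dh(*p))

# delta_0Tn by Eq. (2.90); for a theta error, i_delta = m1 * dtheta (Eq. 2.86)
delta_0Tn = np.zeros((4, 4))
for i in range(len(params)):
    S = skew4(np.zeros(3), np.array([0.0, 0.0, dtheta[i]]))
    delta_0Tn += T[i] @ S @ np.linalg.inv(T[i])

T_pred = (np.eye(4) + delta_0Tn) @ T[-1]   # Eq. (2.100)

T_true = np.eye(4)                         # ground truth: perturbed chain
for (th, d, a, al), dt in zip(params, dtheta):
    T_true = T_true @ dh(th + dt, d, a, al)
```

To first order, T_pred and T_true coincide; the residual is quadratic in the parameter errors.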

2.7 Summary In this chapter, pose descriptions and coordinate transformations involved in robotics are first introduced, followed by the forward and inverse kinematics of industrial robots. Then, the error modelling is conducted by using the differential transformation approach to provide a theoretical basis for subsequent error compensations.

References

1. Craig JJ. Introduction to robotics: mechanics and control. 3rd ed. Upper Saddle River, NJ: Pearson Education; 2005.
2. KUKA Inc. KUKA Robots KR QUANTEC extra with F and C Variants Specification. https://xpert.kuka.com/service-express/portal/project1_p/document/kuka-project1_p-common_PB1888_en?context=%7B%7D. Accessed 2020.
3. Sirlantzis K, Larsen LB, Kanumuru LK, Oprea P. Robotics. In: Cowan D, Najafi L, editors. Handbook of electronic assistive technology. Academic Press; 2019. p. 311–45.

Chapter 3

Positioning Error Compensation Using Kinematic Calibration

3.1 Introduction

Studies have shown that the positioning error caused by geometric errors contributes more than 80% of the total error; hence geometric errors are the primary factor to be addressed in robot kinematic calibration. The geometric error describes the accuracy of the structural parameters of the robot body as well as the accuracy of the related parameters of the robot system and the external system. Therefore, studying kinematic calibration and accurately identifying geometric parametric errors are of great significance for improving robot positioning accuracy.

The measurement process, in which positioning errors of the robot end-effector are measured at specified points within the robot's workspace, is a key procedure in robot calibration. The achievable accuracy and the difficulty of implementing error compensation are closely related to the choice of sampling points, so the reasonable selection of sampling points is the primary problem to be solved in the measurement process. Furthermore, the robot kinematics error is the combined result of various geometric error parameters. The kinematic calibration should consider not only the parametric error of each link, but also, as far as possible, error factors such as the establishment error of the base frame and the flexibility error.

In this chapter, several planning methods for sampling points are first presented to achieve a compromise between sampling efficiency and positioning accuracy in the error compensation process. Subsequently, two positioning error models considering the construction error of the base frame and the flexibility error of the joints and links are established separately, and the corresponding parameters are identified. Finally, through variable parameter identification, the positioning accuracy of the robot is effectively improved, which is verified by experiments.

© Science Press 2023 W. Liao et al., Error Compensation for Industrial Robots, https://doi.org/10.1007/978-981-19-6168-7_3



3.2 Observability-Index-Based Random Sampling Method

It is common practice to use the observability of the robot kinematic parameters as an indicator to evaluate the quality of sampling points. This section elaborates the selection method for random sampling points based on the observability index, which provides a basis for determining the sampling points for the robot's kinematic calibration.

3.2.1 Observability Index of Robot Kinematic Parameters

Observability, a concept proposed by Kalman [1] in control theory, is an index measuring the degree to which a system's internal state can be inferred from its external output. In robot error compensation, the observability of the robot kinematic parameters refers to the degree to which the kinematic parametric errors can be identified from the end-effector positioning errors; it reflects the relationship between the kinematic parametric errors and the positioning errors. Different kinematic parameters have different degrees of observability. Here, the measure of observability is called the observability index.

The robot positioning error can be expressed as a linear combination of the robot kinematic parametric errors. For M sampling points, the following relationship holds:

\Delta T = J \Delta\rho   (3.1)

where ∆T collects the pose errors of the sampling points; ∆ρ is the vector of the robot's parametric errors to be identified, of dimension L; and J is the Jacobian matrix relating the parametric errors to the end-effector positioning errors. Substituting the singular value decomposition of J into Eq. (3.1) yields

\Delta T = U \Sigma V^T \Delta\rho   (3.2)

where U is a 6M × 6M matrix whose column vectors are the eigenvectors of the matrix J J^T; V is an L × L matrix whose column vectors are the eigenvectors of the matrix J^T J; and Σ is a 6M × L non-negative real diagonal matrix with (if 6M > L)

\Sigma = \begin{bmatrix} \sigma_1 & 0 & \cdots & 0 \\ 0 & \sigma_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_L \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}   (3.3)


where the non-negative real numbers σi are the singular values of the matrix J, with σ1 ≥ σ2 ≥ ··· ≥ σL. If σL ≠ 0, the error of each parameter of the robot is observable. From the above derivation, it follows that, for any set of sampling points, the observability of each parametric error of the robot can be judged by calculating the corresponding singular values. Therefore, in order to ensure that the robot's parametric errors can be identified, the observability of the parametric errors should in principle be guaranteed by the selection of sampling points. Meanwhile, sampling points with high observability of the parametric errors should be selected to improve the identification accuracy. Hence, the observability of the robot parameters needs to be described quantitatively. According to the analysis of Borm and Menq [2], when the robot's error parameter ∆ρ is constant, the robot's positioning error forms a spatial ellipsoid, the range of whose norm is determined by the singular values of the matrix J:

\sigma_L \le \frac{\|\Delta T\|}{\|\Delta\rho\|} \le \sigma_1   (3.4)

Based on this theory, different observability indexes have been proposed by different researchers. The first observability index was proposed by Borm and Menq [2]:

O_1 = \frac{(\sigma_1 \sigma_2 \cdots \sigma_L)^{1/L}}{\sqrt{M}}   (3.5)

This index exploits the geometric meaning of the singular values σi: its goal is to maximize the product of the singular values, which yields the largest envelope of the ellipsoid of the positioning error space. Thus, choosing sampling points that maximize this index produces the largest positioning error for a given parametric error; that is, the relationship between the positioning errors at these sampling points and the parametric errors is the most significant, and the best parameter identification accuracy can be achieved.

The second observability index was proposed by Driels and Pathre [3]:

O_2 = \frac{\sigma_L}{\sigma_1}   (3.6)

which is also the reciprocal of the condition number of the matrix J. The rationale for this index is that when it is maximized, the condition number of the matrix J is smallest and the distribution of the singular values σi is relatively uniform. In this case, the eccentricity of the ellipsoid in the positioning error space is improved, which reduces the influence of measurement errors on the identification of the parametric errors.


The third observability index was proposed by Nahvi and Hollerbach [4]:

O_3 = \sigma_L   (3.7)

which is the smallest singular value of the matrix J. Maximizing this index maximizes the volume of the ellipsoid in the positioning error space, which means that the sampling points satisfying this condition are more sensitive to the parametric errors of the robot.

The fourth observability index was also proposed by Nahvi and Hollerbach [4]:

O_4 = \frac{\sigma_L^2}{\sigma_1}   (3.8)

This observability index is also known as the noise amplification index; it can be used to evaluate the impact of measurement errors (i.e., noise) and unmodeled errors on the robot error compensation. The accuracy of error identification can be improved by selecting sampling points that maximize this index.

The fifth observability index was proposed by Sun and Hollerbach [5]:

O_5 = \frac{1}{\dfrac{1}{\sigma_1} + \dfrac{1}{\sigma_2} + \cdots + \dfrac{1}{\sigma_L}}   (3.9)

From Eq. (3.9), maximizing this observability index means minimizing the sum of the reciprocals of the singular values of the matrix J. Therefore, using this index, the volume of the ellipsoid in the positioning error space can be expanded to improve the identification accuracy of the parametric errors.
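Given the Jacobian J of a candidate sample set, the five indices follow directly from its singular values. A sketch (illustrative; O1 is computed via a log-domain geometric mean for numerical robustness):

```python
import numpy as np

def observability_indices(J, M):
    """The five observability indices O1-O5 of Eqs. (3.5)-(3.9),
    computed from the identification Jacobian of M sampling points."""
    s = np.linalg.svd(J, compute_uv=False)   # singular values, descending
    O1 = np.exp(np.mean(np.log(s))) / np.sqrt(M)  # (s1*...*sL)^(1/L) / sqrt(M)
    O2 = s[-1] / s[0]                        # reciprocal condition number
    O3 = s[-1]                               # smallest singular value
    O4 = s[-1] ** 2 / s[0]                   # noise amplification index
    O5 = 1.0 / np.sum(1.0 / s)               # inverse of the reciprocal sum
    return O1, O2, O3, O4, O5
```

For example, for J = diag(4, 2, 1) and M = 1 this gives O1 = 2, O2 = 1/4, O3 = 1, O4 = 1/4, and O5 = 4/7.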

3.2.2 Selection Method of the Sample Points

The factors that affect the positioning errors of the robot include not only error sources that follow a certain regular law, such as the link lengths, the zero deviations of the joint angles, and the link or joint flexibility, but also uncertain factors such as backlash, load, and gear transmission error. These factors unavoidably make the established robot kinematic error model inaccurate. The observability index of the last section is therefore adopted as the evaluation standard for sampling point selection, so as to minimize the influence of unpredictable error sources on the parametric errors during the identification process, as shown in Fig. 3.1. Two aspects need to be analyzed: first, which of the five observability indexes mentioned above is more suitable for the current robot calibration; second, how to select the experimental space and the candidate sampling points.


Fig. 3.1 Selection of measurement samples (flow: the observability index and the alternative sampling-point schemes feed the selection of sampling points, which yields the optimal sampling-point schemes)

(1) Key issues of sample selection

Many factors affect the positioning error of the robot's end-effector; some follow certain functional laws, and some do not. Because of the robot's high repeatability (e.g., the repeatability of the KUKA KR210 robot is ± 0.08 mm), the parametric errors corresponding to each configuration are relatively constant; hence the error factors that exhibit a certain functional law can be regarded as the main error sources. Therefore, the measurement samples should be distributed throughout the calibration space as much as possible. Meanwhile, considering that a limited number of sampling points cannot fully reflect the error characteristics of a given area, the sample space should be divided into several even areas, with sampling points selected evenly from each area, to prevent the sampling points from concentrating in a small region during the selection of the measurement samples. The influence of the number of sampling points on parameter identification will also be tested and discussed. The main method is to gradually increase the number of selected points in the small areas into which the calibration space is divided, and to determine the functional relationship between the number of sampling points and the observability index, which is calculated for each measurement scheme.

(2) Simulation setup

Before the calibration is carried out, the appropriate observability index is selected through simulation, and the influence of the selected measurement samples on parameter identification in the robot calibration is tested and verified in simulation. The space selected for the simulated calibration is 1000 mm × 1000 mm × 1000 mm, as shown in Fig. 3.2.
Considering that joints 2 and 3 significantly affect the position, but not obviously the posture, of the wrist point, the sampling space is divided according to the wrist point position regardless of its posture distribution, in order to facilitate the selection of measurement samples. Combined with the kinematic model of the KUKA robot in Sect. 2.4.3, the inverse kinematic solution near the robot's mechanical zero point (0, − 90°, 90°, 0, 0, 0) is selected as the joint space position of each candidate measurement sampling point by the principle of minimum energy. The selected space is divided into 3 × 3 × 3 grids, and 12 candidate sampling points are selected randomly in each grid.

Fig. 3.2 Simulated space and grid division (a 1000 mm × 1000 mm × 1000 mm cube gridded 3 × 3 × 3; the sampled position is the intersection point of the rear three axes)

It is assumed here that the observability of the measured sample can be improved by a better end-effector pose, and the 12 sampling points of each grid are cyclically tested. Certain parametric errors are introduced into the robot kinematic model for the purpose of simulation. A random variable conforming to a Gaussian distribution (the distribution area is ± 3σ), with mean μ = (0, 0, 0) and standard deviation σ = (0.01, 0.02, 0.04), is added to the positioning error of the robot end-effector to simulate positioning errors due to non-model errors and measurement uncertainties. In this way, there are a total of 324 candidate sampling points in the planned grid space. Combining the calibration method and the theory of the observability index, the process of selecting the observability index and the measurement samples is shown in Fig. 3.3. The main steps are as follows.

(1) Alternative sampling point settings. In order to prevent the parametric identification results from falling into local optima, the workspace of the robot to be calibrated is divided evenly into M subspaces and the sampling points are uniformly selected around the grid centers.
(2) Measurement samples setup. The optimal measurement sample is specified as Ω i (i = 1, 2, 3, 4, 5), where the subscript i corresponds to the observability index Oi (i = 1, . . . , 5).


Fig. 3.3 Selection of observability index and measurement samples (flowchart: set alternative sampling points; select measurement samples; calculate the observability indices O1, O2, O3, O4, and O5; update the optimal samples corresponding to each observability index until a termination condition is satisfied; then identify parameters, calculate the residual errors of the test points, and update the appropriate observability index until a termination condition is satisfied)

(3) Measurement samples updating. The points of one subspace within the measurement sample are cyclically replaced with points that are not in the measurement sample, and each observability index Oi (i = 1, …, 5) of the measurement sample is calculated. If the observability index of the current measurement sample is larger than that of the previous one, the current measurement sample becomes the optimal measurement sample corresponding to this observability index.
(4) Optimal observability index selection. After the optimal measurement samples corresponding to each observability index are obtained, the simulated errors of the robot are identified with these measurement samples and the residuals of the candidate sampling points are compared. Finally, the optimal observability index is selected.
(5) Optimal sample size determination. Based on the observability index selected in step (4), increase the size of the corresponding measurement sample and recalculate the observability index; finally, determine the optimal number of measurement samples.
(6) Simulation and analysis. Obtain the optimal measurement samples corresponding to each observability index through simulation, as shown in Fig. 3.4.
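The cyclic replacement of steps (1)–(3) amounts to a greedy exchange search that keeps any swap which raises the chosen observability index. A minimal sketch follows; the function names and the minimum-singular-value objective are illustrative, not the book's notation.

```python
import numpy as np

def cyclic_exchange(J_rows, k, index_fn, seed=0):
    """Start from a random k-subset of candidate poses and repeatedly
    swap in an unused candidate whenever the swap increases the chosen
    observability index. J_rows[i] holds the identification-Jacobian
    block of candidate i; index_fn maps a list of candidate indices to
    a scalar index value."""
    rng = np.random.default_rng(seed)
    n = len(J_rows)
    chosen = list(rng.choice(n, size=k, replace=False))
    improved = True
    while improved:                 # stop when a full pass yields no swap
        improved = False
        for slot in range(k):
            for cand in range(n):
                if cand in chosen:
                    continue
                trial = chosen.copy()
                trial[slot] = cand
                if index_fn(trial) > index_fn(chosen):
                    chosen = trial  # keep the improving exchange
                    improved = True
    return sorted(chosen)

# toy example: maximize the minimum singular value of the stacked Jacobian
J_rows = np.random.default_rng(1).standard_normal((30, 3, 4))  # 30 poses, 3x4 blocks
min_sv = lambda idx: np.linalg.svd(np.vstack([J_rows[i] for i in idx]),
                                   compute_uv=False)[-1]
samples = cyclic_exchange(J_rows, k=6, index_fn=min_sv)
```

Because the index strictly increases on every accepted exchange and the number of subsets is finite, the loop terminates at a local optimum of the chosen index.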

Fig. 3.4 Optimization of samples using different observability indexes: a O1; b O2; c O3; d O4; e O5 (observability value plotted against epoch in each panel)

The selected optimal samples are used for parameter identification, and error prediction is conducted for all 324 candidate sampling points. The predicted residuals of each sample are obtained, and their statistical results are tabulated in Table 3.1. It can be seen from Table 3.1 that the optimal measurement sample with O1 as the observability index has the best identification effect on the error model when simulating the actual working conditions. By increasing the number of the selected

Table 3.1 Predicted residuals of optimal samples (mm)

Index | σ = 0.01 Average / Maximum | σ = 0.02 Average / Maximum | σ = 0.04 Average / Maximum | σ = 0.1 Average / Maximum
O1 | 0.0122 / 0.0148 | 0.0187 / 0.0483 | 0.0395 / 0.0930 | 0.0963 / 0.3260
O2 | 0.0105 / 0.0516 | 0.0323 / 0.0698 | 0.0489 / 0.1212 | 0.0990 / 0.4427
O3 | 0.0513 / 0.3740 | 0.0247 / 0.0846 | 0.0519 / 0.2246 | 0.1387 / 0.6725
O4 | 0.0359 / 0.3740 | 0.0247 / 0.0709 | 0.0624 / 0.1759 | 0.0930 / 0.4103
O5 | 0.1190 / 0.1190 | 0.0247 / 0.0709 | 0.0624 / 0.1759 | 0.0930 / 0.4103


Fig. 3.5 History of O1 with number of samples

optimal samples, we can calculate the observability index O1, whose variation with the number of sampling points is shown in Fig. 3.5. It can be observed that once the number of sampling points exceeds 27, the observability index O1 changes little. Since a larger number of measurement samples means a heavier test workload and a longer test period, it is appropriate to select 27 measurement points.

3.3 Uniform-Grid-Based Sampling Method

3.3.1 Optimal Grid Size

3.3.1.1 Definition of Optimal Grid Size

The main principle of robot positioning error compensation based on grid interpolation is to divide the robot workspace into a series of cubic grids with a certain step length. The positioning error of any target point in the workspace is then estimated from those of the eight vertices of the smallest grid containing the target point, and the estimate is compensated into the theoretical positioning coordinates. Obviously, the error compensation effect is closely related to the grid size. Theoretically, the smaller the cubic grid, the better the error compensation effect may be. However, this does not mean that the compensation effect for points in a grid with a small step size is always better than in a grid with a large step size, because the outcome depends on the distribution law of the robot positioning error. Below we select


a small area in the workspace to illustrate the variation of the robot positioning error. To simplify the problem, all other axes are kept still and only axes A2 and A3 are rotated. The variation range of each axis is shown in Table 3.2. The error distribution of the robot in the three directions of the base frame within the given area is shown in Figs. 3.6, 3.7 and 3.8. From Fig. 3.6, although the distance between the joint angles corresponding to P1 and P2 is larger than that corresponding to P1 and P3, the error difference between P1 and P2 is smaller than that between P1 and P3 in the x-direction. From Fig. 3.7, the positioning errors of the three points in the y-direction behave analogously to those in the x-direction in Fig. 3.6. In addition, it can be observed from Fig. 3.8 that the error distribution in the z-direction changes monotonically within the selected joint range. Since the distance between the joint angles corresponding to P1 and P3 is smaller than that corresponding to P1 and P2, the error of P3 is closer to that of P1 in the z-direction.

Table 3.2 Variation range of joint angles

Axis number         | A1 | A2         | A3         | A4 | A5  | A6
Variation range (°) | 21 | −58 to −48 | 178 to 188 | 32 | −46 | −26

Fig. 3.6 Distribution of positioning errors in the x-direction

Fig. 3.7 Distribution of positioning errors in the y-direction

Fig. 3.8 Distribution of positioning errors in the z-direction


As mentioned above, although the accuracy of the same point compensated with a large grid size may approach or even exceed that compensated with a small one, this rarely occurs where the error surface changes dramatically. To ensure that the given positioning accuracy after compensation is reached for all points in the divided grid, the grid size should be as small as possible. However, if the grid size is too small, the workload of measuring the positioning errors of the samples increases greatly, which is not conducive to implementation on industrial sites. Therefore, under the premise of satisfying the given positioning accuracy, it is of great importance to find a maximum grid size. Here, the maximum grid size satisfying the requested positioning accuracy after compensation is defined as the optimal grid size.
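The grid-interpolation compensation described above can be sketched with standard trilinear weighting of the eight vertex errors; all names and values below are illustrative.

```python
import numpy as np

def trilinear_error(p, origin, h, vertex_err):
    """Estimate the positioning error at target point p by trilinear
    interpolation of the measured errors at the 8 vertices of the cubic
    grid cell containing p (cell origin `origin`, side length `h`).
    vertex_err[i, j, k] is the 3-vector error at vertex (i, j, k)."""
    t = (np.asarray(p, float) - np.asarray(origin, float)) / h  # normalized coords in [0, 1]
    tx, ty, tz = t
    e = np.zeros(3)
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = ((tx if i else 1 - tx) *
                     (ty if j else 1 - ty) *
                     (tz if k else 1 - tz))   # trilinear weight of vertex (i, j, k)
                e += w * vertex_err[i, j, k]
    return e

# toy cell: only one vertex carries a nonzero measured error
vertex_err = np.zeros((2, 2, 2, 3))
vertex_err[1, 1, 1] = [0.1, 0.0, -0.05]
target = np.array([150.0, 150.0, 150.0])
est = trilinear_error(target, [100.0, 100.0, 100.0], 100.0, vertex_err)
corrected = target - est   # pre-compensated command sent to the robot
```

Commanding the robot to the pre-compensated coordinates `target - est` is what "compensating the error into the theoretical positioning coordinates" means in practice.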

3.3.1.2 Determination of Optimal Grid Size

Several representative poses in the robot workspace, normally the front, back, left, right, top, bottom, and middle points, can be selected as the center points of the grids, and different grid sizes are then determined. The positioning error vectors of the center point and the grid vertices can be obtained through a certain measurement method. If the difference between the positioning error vector of a vertex and that of the measured point satisfies the desired tolerance, the positioning error vector of any point in the grid is considered to be highly similar to that of the vertex at the selected grid size. Furthermore, error compensation for the center point of the grid is implemented to verify the actual effect under the selected grid size. Finally, to determine the optimal grid size, the compensated positioning errors of all test points at each selected grid size are analyzed statistically, and the mean value and standard deviation of the positioning errors corresponding to each grid size are calculated; the grid size with a relatively large value and a relatively small average error and standard deviation is chosen as the optimal grid size for a given processing area. The specific steps can be summarized as follows.

(1) Select test points (usually no less than five) in the given workspace of the robot.
(2) Choose different grid sizes with each test point as the center, and conduct an error compensation experiment of the robot.
(3) For each selected grid size, perform probabilistic statistics on the compensated positioning errors of all test points.
(4) Determine as optimal the grid size whose average positioning error meets the specified tolerance, whose standard deviation is small, and whose value is relatively large.

3.3.1.3 Optimal Grid Size Experiment

The experiment for determining the optimal grid size is carried out with the KUKA KR150-2 industrial robot as the test object and a laser tracker as the measuring device.


During the experiment, the robot carries no load and the ambient temperature is maintained at 15–18 °C. At the same time, the robot's target posture and operating speed are kept constant. It should be noted that the motion to each grid point starts from a fixed point, also called the "HOME position", to exclude the influence of factors unrelated to the grid size on the experimental results. An arbitrary point in the robot workspace is chosen, e.g., (2000 mm, 700 mm, 1000 mm, 0, 90°, 0), whose first three and last three parameters indicate the position coordinates and the posture (expressed here in RPY angles) in the robot base frame, respectively. The grid size is initially set to 10 mm and then increased in steps of 10 mm up to 200 mm, building 20 cubic grids. The change curve of the compensated positioning error for each grid size is shown in Fig. 3.9. It can be seen that the compensated accuracy does not change noticeably with the grid size. Then the grid size is increased in steps of 80 mm; in this case, the change curve of the positioning error after compensation is shown in Fig. 3.10. Furthermore, another four test points (1900 mm, −100 mm, 2100 mm, 0, 90°, 0), (2150 mm, 0 mm, 1300 mm, 0, 90°, 0), (1450 mm, 200 mm, 1500 mm, 0, 90°, 0), and (2300 mm, 500 mm, 2100 mm, 0, 90°, 0) are selected to evaluate the effect of the grid size on the positioning error. The corresponding compensated positioning errors are shown in Figs. 3.11, 3.12, 3.13 and 3.14, respectively. It should be noted that multiple measurements are taken to reduce the influence of measurement error. The statistical analysis of the compensated positioning errors of all test points under each grid size proceeds with the following steps:

Fig. 3.9 Compensated error of point (2000 mm, 700 mm, 1000 mm, 0, 90°, 0) with a small grid size

(1) Determine and group the maximum and minimum positioning errors of all test points under the selected grid size.
(2) Count the frequencies of all experimental data under the selected grid size in each interval divided in step (1).

Fig. 3.10 Compensated error of point (2000 mm, 700 mm, 1000 mm, 0, 90°, 0) with different grid sizes

Fig. 3.11 Compensated error of point (1900 mm, −100 mm, 2100 mm, 0, 90°, 0) with different grid sizes

Fig. 3.12 Compensated error of point (2150 mm, 0 mm, 1300 mm, 0, 90°, 0) with different grid sizes

Fig. 3.13 Compensated error of point (1450 mm, 200 mm, 1500 mm, 0, 90°, 0) with different grid sizes

Fig. 3.14 Compensated error of point (2300 mm, 500 mm, 2100 mm, 0, 90°, 0) with different grid sizes

(Each of Figs. 3.10–3.14 shows eight compensation runs, with the positioning error plotted against grid sizes from 0 to 500 mm.)

(3) Calculate the mean value of the positioning errors under the selected grid size. For each grouping interval, the mean of its minimum and maximum values is defined as the median. The products of the frequency and the median of each grouping interval are summed, and the quotient of this sum and the total number of experimental data at this grid size is taken as the mean value of the positioning errors.
(4) Calculate the standard deviation of the positioning errors under the selected grid size. The difference between the median of each grouping interval and the mean value obtained in step (3) is taken as the group residual. The products of the squared group residual and the frequency of each grouping interval are summed, and the square root of the quotient of this sum and the total number of experimental data at this grid size is taken as the standard deviation of the positioning errors of all test samples.

It can be seen from the change curves of the compensated errors that each grid size corresponds to 40 groups of samples. Here, the number of grouping intervals is set to 6, balancing the distribution law of the positioning errors against the rationality of the grouping. Following the above procedure, the statistical results are shown in Tables 3.3, 3.4, 3.5, 3.6, 3.7, 3.8 and 3.9. Figure 3.15 shows the statistical result of the average positioning accuracy of the robot under the different selected grid sizes. It is clear that the variation of the average positioning accuracy with the grid size is V-shaped, where the maximum

Table 3.3 Statistics of positioning errors with a grid size of 20 mm

No. | Interval (mm)    | Median (mm) | Frequency | Residual (mm) | Frequency × median (mm) | Frequency × residual² (mm²)
1   | [0.0321, 0.0897) | 0.0609      | 9         | −0.1040       | 0.5481                  | 0.0966
2   | [0.0897, 0.1473) | 0.1185      | 10        | −0.0460       | 1.1847                  | 0.0212
3   | [0.1473, 0.2048) | 0.1760      | 13        | 0.0115        | 2.2880                  | 0.0017
4   | [0.2048, 0.2624) | 0.2336      | 0         | 0.0691        | 0.0000                  | 0.0000
5   | [0.2624, 0.3200) | 0.2912      | 4         | 0.1266        | 1.1647                  | 0.0642
6   | [0.3200, 0.3775) | 0.3488      | 4         | 0.1842        | 1.3950                  | 0.1357
Mean value: 0.1645 mm; standard deviation: 0.0894 mm

Table 3.4 Statistics of positioning errors with a grid size of 100 mm

No. | Interval (mm)    | Median (mm) | Frequency | Residual (mm) | Frequency × median (mm) | Frequency × residual² (mm²)
1   | [0.0565, 0.1092) | 0.0829      | 17        | −0.0820       | 1.4089                  | 0.1134
2   | [0.1092, 0.1619) | 0.1356      | 3         | −0.0290       | 0.4067                  | 0.0025
3   | [0.1619, 0.2146) | 0.1883      | 12        | 0.0237        | 2.2593                  | 0.0067
4   | [0.2146, 0.2673) | 0.2410      | 2         | 0.0764        | 0.4819                  | 0.0117
5   | [0.2673, 0.3200) | 0.2937      | 1         | 0.1291        | 0.2937                  | 0.0167
6   | [0.3200, 0.3727) | 0.3464      | 5         | 0.1818        | 1.7319                  | 0.1653
Mean value: 0.1646 mm; standard deviation: 0.0889 mm

Table 3.5 Statistics of positioning errors with a grid size of 180 mm

No. | Interval (mm)    | Median (mm) | Frequency | Residual (mm) | Frequency × median (mm) | Frequency × residual² (mm²)
1   | [0.0220, 0.0613) | 0.0416      | 7         | −0.0910       | 0.2913                  | 0.0577
2   | [0.0613, 0.1008) | 0.0811      | 7         | −0.0510       | 0.5675                  | 0.0184
3   | [0.1008, 0.1403) | 0.1205      | 11        | −0.0120       | 1.3259                  | 0.0015
4   | [0.1403, 0.1797) | 0.1600      | 4         | 0.0276        | 0.6400                  | 0.0031
5   | [0.1797, 0.2192) | 0.1995      | 4         | 0.0671        | 0.7978                  | 0.0180
6   | [0.2192, 0.2586) | 0.2389      | 7         | 0.1065        | 1.6724                  | 0.0795
Mean value: 0.1320 mm; standard deviation: 0.0667 mm

Table 3.6 Statistics of positioning errors with a grid size of 260 mm

No. | Interval (mm)    | Median (mm) | Frequency | Residual (mm) | Frequency × median (mm) | Frequency × residual² (mm²)
1   | [0.0260, 0.0720) | 0.0490      | 8         | −0.0830       | 0.3920                  | 0.0548
2   | [0.0720, 0.1180) | 0.0950      | 9         | −0.0370       | 0.8548                  | 0.0122
3   | [0.1180, 0.1640) | 0.1410      | 13        | 0.0092        | 1.8325                  | 0.0011
4   | [0.1640, 0.2099) | 0.1870      | 6         | 0.0552        | 1.1216                  | 0.0183
5   | [0.2099, 0.2560) | 0.2329      | 1         | 0.1012        | 0.2329                  | 0.0102
6   | [0.2560, 0.3019) | 0.2789      | 3         | 0.1471        | 0.8367                  | 0.0649
Mean value: 0.1310 mm; standard deviation: 0.0663 mm

Table 3.7 Statistics of positioning errors with a grid size of 340 mm

No. | Interval (mm)    | Median (mm) | Frequency | Residual (mm) | Frequency × median (mm) | Frequency × residual² (mm²)
1   | [0.0038, 0.0430) | 0.0234      | 8         | −0.1010       | 0.1870                  | 0.0814
2   | [0.0430, 0.0821) | 0.0626      | 6         | −0.0620       | 0.3753                  | 0.0228
3   | [0.0821, 0.1213) | 0.1017      | 1         | −0.0230       | 0.1017                  | 0.0005
4   | [0.1213, 0.1605) | 0.1409      | 8         | 0.0166        | 1.1271                  | 0.0022
5   | [0.1605, 0.1996) | 0.1801      | 14        | 0.0558        | 2.5207                  | 0.0436
6   | [0.1996, 0.2388) | 0.2192      | 3         | 0.0950        | 0.6577                  | 0.0271
Mean value: 0.1242 mm; standard deviation: 0.0666 mm

Table 3.8 Statistics of positioning errors with a grid size of 420 mm

No. | Interval (mm)    | Median (mm) | Frequency | Residual (mm) | Frequency × median (mm) | Frequency × residual² (mm²)
1   | [0.0328, 0.0653) | 0.0491      | 6         | −0.0970       | 0.2944                  | 0.0569
2   | [0.0653, 0.0978) | 0.0815      | 3         | −0.0650       | 0.2446                  | 0.0126
3   | [0.0978, 0.1302) | 0.1140      | 6         | −0.0320       | 0.6839                  | 0.0063
4   | [0.1302, 0.1627) | 0.1464      | 9         | 0.0000        | 1.3179                  | 0.0000
5   | [0.1627, 0.1951) | 0.1789      | 2         | 0.0325        | 0.3578                  | 0.0021
6   | [0.1951, 0.2276) | 0.2113      | 14        | 0.0649        | 2.9589                  | 0.0590
Mean value: 0.1464 mm; standard deviation: 0.0585 mm

Table 3.9 Statistics of positioning errors with a grid size of 500 mm

No. | Interval (mm)    | Median (mm) | Frequency | Residual (mm) | Frequency × median (mm) | Frequency × residual² (mm²)
1   | [0.0500, 0.0886) | 0.0693      | 7         | −0.1030       | 0.4850                  | 0.0743
2   | [0.0886, 0.1271) | 0.1078      | 2         | −0.0640       | 0.2156                  | 0.0083
3   | [0.1271, 0.1656) | 0.1463      | 12        | −0.0260       | 1.7557                  | 0.0081
4   | [0.1656, 0.2041) | 0.1848      | 4         | 0.0125        | 0.7394                  | 0.0006
5   | [0.2041, 0.2426) | 0.2233      | 6         | 0.0510        | 1.3400                  | 0.0156
6   | [0.2426, 0.2811) | 0.2618      | 9         | 0.0895        | 2.3564                  | 0.0721
Mean value: 0.1723 mm; standard deviation: 0.0669 mm

Fig. 3.15 Compensated average positioning error with different grid sizes

average positioning error is 0.1723 mm and the minimum is 0.1242 mm. The compensation effect at each grid size is reflected visually by the frequency distribution diagrams in Figs. 3.16, 3.17, 3.18, 3.19, 3.20, 3.21 and 3.22. Taking 0.2 mm as a limit, the probability of a compensated positioning error greater than 0.2 mm is computed for each grid size, as shown in Fig. 3.23; this probability is the lowest when the grid size is between 260 and 340 mm. In summary, when the grid size is 340 mm, the average positioning accuracy of the robot after error compensation is 0.124 mm, the best average value among all grid sizes involved in the statistics, and the maximum positioning error of all data samples under this grid size is 0.238 mm. When the grid size is 260 mm, the average positioning accuracy after compensation is 0.131 mm and the maximum positioning error is 0.301 mm. Since the probability of a compensated positioning error exceeding 0.2 mm is lowest for grid sizes between 260 and 340 mm, a grid size of 300 mm is selected as the optimal one for the error compensation of the KUKA KR150-2 robot.
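The grouped-frequency statistics of steps (1)–(4) can be sketched as follows; `np.histogram` performs the grouping into equal intervals, and applying the function to the 40 samples of a grid size should reproduce the mean and standard deviation columns of Tables 3.3–3.9 up to the rounding of the interval edges. The names and the demonstration data are illustrative.

```python
import numpy as np

def grouped_stats(errors, n_groups=6):
    """Grouped-frequency mean and standard deviation: divide [min, max]
    into equal intervals, take each interval's midpoint as its 'median',
    and weight by the interval frequency."""
    errors = np.asarray(errors, dtype=float)
    freq, edges = np.histogram(errors, bins=n_groups)
    median = 0.5 * (edges[:-1] + edges[1:])   # interval midpoints ('medians')
    n = freq.sum()
    mean = np.sum(freq * median) / n          # frequency-weighted mean
    resid = median - mean                     # group residuals
    std = np.sqrt(np.sum(freq * resid**2) / n)
    return mean, std

# demonstration on 40 synthetic compensated errors
errors = np.random.default_rng(0).uniform(0.03, 0.38, size=40)
mean, std = grouped_stats(errors)
```

The grouped mean deviates from the exact sample mean by at most half an interval width, which is why six intervals already give a stable statistic for 40 samples.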

Fig. 3.16 Frequency distribution histogram of errors with grid size of 20 mm

Fig. 3.17 Frequency distribution histogram of errors with grid size of 100 mm

Fig. 3.18 Frequency distribution histogram of errors with grid size of 180 mm

Fig. 3.19 Frequency distribution histogram of errors with grid size of 260 mm

Fig. 3.20 Frequency distribution histogram of errors with grid size of 340 mm

Fig. 3.21 Frequency distribution histogram of errors with grid size of 420 mm

Fig. 3.22 Frequency distribution histogram of errors with grid size of 500 mm

Fig. 3.23 Probability statistics of compensated errors greater than 0.2 mm with grid size


3.3.2 Sampling Point Planning Method

3.3.2.1 Experimental Method

Because the error surface of the robot is spatially variable, error compensation in different regions may respond differently to the same change in grid size. Therefore, several representative areas in the zone to be calibrated are selected to analyze the variation of the compensation effect. Both a peripheral area and a central area in the given zone are tested, and points close to the central points of the marginal area and the central area are used as the central points of the grids (see Fig. 3.24). To test error compensation with different grid sizes, several cubes with the same central point are chosen, whose side lengths increase by a fixed value. Then the positioning errors of the grid vertices are obtained through measurement at the selected sampling points for error compensation. To examine the actual accuracy after compensation, the proposed error compensation method is used to correct the errors of the test points in the region; the measured position error is the actual compensation effect for the selected grid size. The maximum and standard deviation of the positioning errors for the different grid sizes after error compensation are analyzed against the accuracy requirement, and the optimal grid size is then selected. The specific experimental steps are as follows:

(1) According to the error distribution in the zone, select experimental points within the given region as grid central points.
(2) For each grid central point, determine cubes with different grid sizes, and use an error compensation method to correct the positioning error.
(3) For each selected grid size, analyze the positioning accuracy of all experimental points after compensation and calculate their average value and standard deviation.

Fig. 3.24 Selection of grid center points



Fig. 3.25 Experimental points within a grid
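The nine measurement points of Fig. 3.25 (the cube center P1 plus eight points on the body diagonals, offset from each vertex by about 10 % of the diagonal length, as described in this subsection) can be generated with a short sketch; the function and variable names are illustrative.

```python
import numpy as np

def measurement_points(c_min, c_max, frac=0.10):
    """Return the nine measurement points of a cubic cell spanned by the
    opposite corners c_min and c_max: P1 at the cube center, and P2..P9
    on the four body diagonals, each offset from a vertex by `frac` of
    the diagonal length toward the opposite vertex."""
    c_min = np.asarray(c_min, float)
    c_max = np.asarray(c_max, float)
    center = 0.5 * (c_min + c_max)
    pts = [center]                                    # P1
    masks = ((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
             (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1))
    for mask in masks:                                # the 8 vertices C1..C8
        vertex = np.where(mask, c_max, c_min)
        opposite = c_min + c_max - vertex             # other end of the diagonal
        pts.append(vertex + frac * (opposite - vertex))  # 10 % along the diagonal
    return np.array(pts)

# a 200 mm cell at the workspace origin (illustrative coordinates)
pts = measurement_points([0.0, 0.0, 0.0], [200.0, 200.0, 200.0])
```

Each of P2 to P9 therefore sits at exactly `frac` of the body-diagonal length from its vertex, which matches the 10 ± 2 % placement rule stated below.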

(4) Determine the optimal grid size. If a grid size has a relatively small standard deviation and a relatively large value, and its positioning error meets the accuracy requirements, it is selected as the optimal grid size of the given region.

3.3.2.2 Selection of Experimental Points

To examine the positioning accuracy of an industrial robot, a cube in the workspace is determined whose vertices Ci (i = 1, 2, …, 8) are selected as shown in Fig. 3.25. Based on the standard requirement, four planes are chosen for the positioning experiment: C1–C2–C7–C8, C2–C3–C8–C5, C3–C4–C5–C6, and C4–C1–C6–C7. Five points P1, P2, P3, P4, and P5 that must be measured lie on the diagonals of the measuring planes in the standard requirement, with P1 at the center of the cube; the positions of P2 to P5 are shown in Fig. 3.25, where L denotes the length of the diagonal. To describe the errors within the entire grid space as completely as possible, the points on the other two diagonals are added, so that nine points P1, P2, …, P9 within each grid are selected as measurement points for the error compensation experiment, where P2 to P9 lie on the diagonals at 10 ± 2% of the diagonal length from the vertices, as shown in Fig. 3.25.

3.3.2.3 Statistical Method

The maximum error and the standard deviation of the measurements are used as the criteria, and the largest grid that meets the accuracy requirements is selected as the optimal compensation grid size in the region. The steps of the procedure are as follows:

(1) Determine the maximum and minimum errors under the selected grid size, and calculate the standard deviations of all error samples.
(2) Establish an error-grid size curve with the cubic polynomial interpolation method.
(3) According to the error variation curve from step (2), determine the thresholds of the grid sizes that meet the accuracy requirements.
(4) Select the optimal grid size. If a grid size has a relatively small standard deviation and its maximum positioning error meets the accuracy requirements, it is selected as the optimal grid size of the given machining region.

3.3.2.4 Experimental Statistical Method

A common working region of the robot is selected, with a size of 1000 mm × 1200 mm × 1000 mm, as shown in Fig. 3.26. The accuracy of the robot end-effector is required to be within ± 0.3 mm in each degree of freedom. Because the positioning error surface is spatially variable, the effect of error compensation on each part of the selected area may be inconsistent. For this reason, according to the characteristics of the workspace, five grid central points are selected, as shown in Table 3.10. A growth step of 60 mm is selected: with each grid central point as the start point, the grid size is gradually increased from 20 to 500 mm to establish 9 cubic grids.

Fig. 3.26 Space of the error compensation experiment (1000 mm × 1200 mm × 1000 mm)

Table 3.10 Coordinate distribution of measuring points

Point number | x (mm) | y (mm) | z (mm) | a (°) | b (°) | c (°)
D | 1700 | 0    | 1500 | 0 | 90 | 0
E | 2000 | −350 | 1800 | 0 | 90 | 0
F | 1400 | 350  | 1900 | 0 | 90 | 0
G | 1400 | −350 | 1400 | 0 | 90 | 0
H | 2000 | 350  | 1400 | 0 | 90 | 0

To reduce the effects of random errors during the measurements, five measurements are taken at each point and averaged. Table 3.11 shows the positioning errors of the test points after compensation with different grid sizes. The statistical method described in Sect. 3.3.2.3 is applied to the experimental data. To ensure the validity of the data, the means of the maximum and minimum values of the five measurements are used as the maximum and minimum values of the samples, respectively. The maximum and minimum error values in the x, y, and z directions are used to establish the error-grid size distribution maps with cubic polynomial interpolation, as shown in Figs. 3.27, 3.28 and 3.29. The analyses in Figs. 3.27, 3.28 and 3.29 show that the errors in the x and z directions increase as the grid size increases, i.e., the compensation effect in the x and z directions decreases as the grid size increases. According to the required error limit (± 0.3 mm), the threshold of the grid size in the x-direction is in the range of 200–250 mm. For all grid sizes tested in the y-direction, up to and including 500 mm, the error is less than ± 0.3 mm. The threshold of the grid size in the z-direction is also in the range of 200–250 mm. The variation of the standard deviation of the errors is relatively small in the range of 140–260 mm. Considering the statistical results and the convenience of the region partition, 200 mm is selected as the optimal grid size.

Table 3.11 Positioning error after compensation with different grid sizes (mm)

Grid size | Maximum value x / y / z | Minimum value x / y / z  | Standard deviation x / y / z
20  | 0.165 / 0.059 / 0.167 | −0.016 / −0.098 / −0.060 | 0.050 / 0.040 / 0.073
80  | 0.182 / 0.211 / 0.210 | −0.027 / −0.266 / −0.132 | 0.055 / 0.108 / 0.081
140 | 0.228 / 0.204 / 0.226 | −0.074 / −0.248 / −0.176 | 0.064 / 0.105 / 0.109
200 | 0.279 / 0.213 / 0.265 | −0.107 / −0.224 / −0.228 | 0.084 / 0.109 / 0.121
260 | 0.283 / 0.195 / 0.278 | −0.119 / −0.245 / −0.262 | 0.102 / 0.108 / 0.130
320 | 0.393 / 0.183 / 0.461 | −0.181 / −0.261 / −0.393 | 0.127 / 0.117 / 0.177
380 | 0.406 / 0.182 / 0.464 | −0.189 / −0.233 / −0.372 | 0.130 / 0.112 / 0.198
440 | 0.493 / 0.205 / 0.556 | −0.226 / −0.235 / −0.471 | 0.153 / 0.123 / 0.220
500 | 0.450 / 0.227 / 0.648 | −0.271 / −0.251 / −0.443 | 0.160 / 0.122 / 0.238

Fig. 3.27 x-direction error distribution of the end-effector after compensation

Fig. 3.28 y-direction error distribution of the end-effector after compensation

Following the above analysis, a grid size of 200 mm is selected to generate the grid for error compensation of the KUKA KR210 robot. The error compensation algorithm is applied, and the positioning errors of 200 random points in the workspace are measured after compensation. The experimental results (see Table 3.12) show that the maximum/minimum positioning errors of the 200 points are 0.27/−0.22 mm, respectively. Therefore, this planning method effectively reduces the difficulty of finding error compensation sampling configurations, and the experimental results show that the positioning accuracy of the robot can be raised to the required level using this method.
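The cubic-fit threshold analysis of steps (2)–(3) in Sect. 3.3.2.3 can be sketched as follows, using the x-direction maximum errors of Table 3.11 and the ± 0.3 mm limit; `np.polyfit`/`np.polyval` provide the cubic interpolation, and the variable names are illustrative.

```python
import numpy as np

# grid sizes and x-direction maximum errors from Table 3.11 (mm)
grid = np.array([20, 80, 140, 200, 260, 320, 380, 440, 500], dtype=float)
emax = np.array([0.165, 0.182, 0.228, 0.279, 0.283, 0.393, 0.406, 0.493, 0.450])

coef = np.polyfit(grid, emax, 3)        # cubic error-grid size curve
fine = np.arange(20.0, 500.0, 1.0)      # dense evaluation grid
pred = np.polyval(coef, fine)

within = fine[pred <= 0.3]              # grid sizes meeting the 0.3 mm limit
threshold = within.max()                # largest admissible grid size (mm)
```

Repeating the same fit for the y- and z-direction data yields the per-axis thresholds; the smallest of them, rounded for convenient region partitioning, gives the optimal grid size reported in the text.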


Fig. 3.29 z-direction error distribution of the end-effector after compensation

Table 3.12 Error results of optimal samples (mm)

                    | Maximum value x / y / z | Minimum value x / y / z  | Standard deviation x / y / z
Before compensation | 0.598 / 0.272 / 0.045   | −0.282 / −0.675 / −1.068 | 0.193 / 0.150 / 0.252
After compensation  | 0.167 / 0.270 / 0.240   | −0.144 / −0.217 / −0.216 | 0.061 / 0.101 / 0.084

3.4 Kinematic Calibration Considering Robot Flexibility Error

Currently, most error calibration methods regard the robot as a rigid body and achieve a relatively good accuracy improvement. However, they cannot improve the robot positioning accuracy further, which makes it difficult to meet the high-precision requirements of industries such as aviation manufacturing. Among the robot non-geometric parametric errors, the flexibility error is the main factor preventing further improvement of the robot positioning accuracy. The flexibility error mainly includes two aspects: the flexible deformations caused by the external load and by the weight of the robot itself. The method of solving the flexible deformation caused by the external load is relatively mature. However, due to the lack of physical parameters such as the weight and the moment of inertia of each link, it is difficult to construct the flexibility error model generated by the robot self-weight. To deal with this problem, the flexible deformation due to the robot self-weight is studied to realize an accurate estimation of the robot positioning error [6].


3 Positioning Error Compensation Using Kinematic Calibration

3.4.1 Robot Flexibility Analysis

The flexibility deformation of the robot is related to the joint positions and includes the deflections of both the links and the joints. In this section, each is analyzed in turn to identify the main sources of flexibility error and to establish the robot flexibility model.

(1) Link deflection

For analysis, the link can be simplified as a cantilever beam fixed at the joint. The link self-weight acts as a uniformly distributed load on the link, as shown in Fig. 3.30, and the external load acts as a concentrated force at the tip of the link, as shown in Fig. 3.31. For the uniformly distributed load, the deflection curve from the mechanics of materials is

w(x) = −qx²(x² − 4lx + 6l²)/(24EI),

so the tip deflection (at x = l) can be calculated as

wb = −ql⁴/(8EI)    (3.10)

Fig. 3.30 Deflection due to link self-weight (uniformly distributed load q over link length L, tip deflection wb)

Fig. 3.31 Deflection due to external load (concentrated force F at the tip of a link of length L, tip deflection wb)


where q is the equivalent uniform load of the link self-weight, l is the equivalent length of the link, E is Young's modulus of the link, and I is the moment of inertia of the link cross-section. The external load applied to the link is analyzed similarly. For the concentrated force F at the tip of the link, the deflection curve from the mechanics of materials is

w(x) = −Fx²(3l − x)/(6EI),

so the tip deflection (at x = l) can be calculated as

wb = −Fl³/(3EI)    (3.11)

where F is the equivalent concentrated load.

(2) Joint deflection

The joint deflection can be modeled as the torsional deformation of a round shaft, as shown in Fig. 3.32. The torsion angle can be obtained as

φ = Me l/(G Ip)    (3.12)

where Me is the equivalent torque acting on the joint, l is the length of the round shaft, G is the shear modulus, and Ip is the equivalent polar moment of inertia of the cross-section.

Fig. 3.32 Equivalent model of joint deflection (a round shaft of length l and radius R twisted through the angle φ by the torque Me)
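The deflection formulas above can be wrapped in small helper functions for a quick numerical check. This is an illustrative sketch, not code from the book: the function names and all inputs are assumptions, and the link is idealized as a cantilever with the standard tip-deflection results from the mechanics of materials.

```python
def tip_deflection_self_weight(q, l, E, I):
    """Cantilever tip deflection under a uniformly distributed load q."""
    return -q * l**4 / (8 * E * I)

def tip_deflection_point_load(F, l, E, I):
    """Cantilever tip deflection under a concentrated tip force F."""
    return -F * l**3 / (3 * E * I)

def joint_torsion_angle(Me, l, G, Ip):
    """Torsion angle of an equivalent round shaft under the torque Me."""
    return Me * l / (G * Ip)
```

With consistent units (N, mm, MPa), these return deflections in mm and the torsion angle in radians.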


3.4.2 Establishment of Robot Flexibility Error Model

The flexibility deformation analyzed above can be treated as a small deviation in the parametric errors of the robot D-H model: the link deflection corresponds to small deviations ∆a and ∆d of the link parameters, and the joint deflection corresponds to a small deviation ∆θ. According to the analysis in [7], for a six-degree-of-freedom industrial robot the largest link deflections occur on links 2 and 3 and usually reach the level of 0.01 mm, whereas the joint deflection can reach 0.1°–0.2°. Using the robot kinematic error model, the positioning deviation caused by the joint deflection is calculated to be 0.3 mm or more, which is much larger than that caused by the link deflection (normally less than 0.1 mm). For this reason, the link deflection is ignored in the robot flexibility error model here. Rearranging Eq. (3.12), the flexible deformation of a joint can be written as

δθc = Cθ Tθ    (3.13)

where δθc is the joint deflection, Cθ is the joint flexibility coefficient, and Tθ is the equivalent torque applied to the joint.

After a robot is installed and fixed, the axis of joint A1 is parallel to the direction of gravity, so this joint is not subject to a gravity moment. The joint axes A4, A5, and A6 are less affected by gravity, and the resulting deviations have little effect on the end-effector positioning error. Therefore, only the flexibility errors caused by gravity acting on joint axes A2 and A3 are considered here. The error model due to the flexibility of the robotic arm is shown in Fig. 3.33, where G2 and G3 represent the weights of links 2 and 3 acting at their centers of gravity, L2 is the length of link 2, l2 is the distance from the center of gravity of link 2 to axis 2, and l3 is the distance from the center of gravity of link 3 to axis 3. Without loss of generality, the offset angles of the two centers of gravity around the joint axes are denoted θG2 and θG3. According to Fig. 3.33, the torques on joint axes A2 and A3 are

Tθ2 = G3 l3 cos(θ2 + θ3 + θG3) + G3 L2 cos θ2 + G2 l2 cos(θ2 − θG2)    (3.14)

Tθ3 = G3 l3 cos(θ2 + θ3 + θG3)    (3.15)

Substituting Eqs. (3.14) and (3.15) into Eq. (3.13), the joint flexibility errors of axes A2 and A3 under the corresponding torques can be obtained as

δθc2 = Cθ2 [G3 l3 cos(θ2 + θ3 + θG3) + G3 L2 cos θ2 + G2 l2 cos(θ2 − θG2)]    (3.16)

δθc3 = Cθ3 G3 l3 cos(θ2 + θ3 + θG3)    (3.17)
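Equations (3.16) and (3.17) translate directly into code. The helper below is an illustrative sketch; the function name and all parameter values are assumptions chosen only to exercise the formulas.

```python
import math

def joint_flexibility_errors(theta2, theta3, C2, C3, G2, G3, L2, l2, l3, thG2, thG3):
    """Gravity-induced deflections of joints A2 and A3 (Eqs. 3.16 and 3.17).
    Angles in radians; C2, C3 are the joint flexibility coefficients."""
    # Equivalent gravity torques on axes A2 and A3, Eqs. (3.14) and (3.15)
    T2 = (G3 * l3 * math.cos(theta2 + theta3 + thG3)
          + G3 * L2 * math.cos(theta2)
          + G2 * l2 * math.cos(theta2 - thG2))
    T3 = G3 * l3 * math.cos(theta2 + theta3 + thG3)
    # Joint deflections per Eq. (3.13): delta = C * T
    return C2 * T2, C3 * T3
```

Because the torques depend only on θ2 and θ3, the resulting joint deflections vary with the arm pose, which is exactly what the variable-parameter treatment in Sect. 3.5 exploits.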


Fig. 3.33 Error model due to flexibility (links 2 and 3 with joint angles θ2 and θ3, weights G2 and G3 at distances l2 and l3 from their joint axes, and offset angles θG2 and θG3)

3.4.3 Robot Kinematic Error Model with Flexibility Error

To facilitate the analysis and suppress the non-primary factors that affect the flexibility error, an assumption is made on the parameters in Eqs. (3.16) and (3.17): Gi, li, and θGi vary very little when the robot pose changes, so they are treated as constants. Let

k22 = Cθ2 (G3 L2 + G2 l2 cos θG2)    (3.18)

k23 = Cθ2 G2 l2 sin θG2    (3.19)

k24 = Cθ2 G3 l3 cos θG3    (3.20)

k25 = Cθ2 G3 l3 sin θG3    (3.21)

k32 = Cθ3 G3 l3 cos θG3    (3.22)

k33 = Cθ3 G3 l3 sin θG3    (3.23)

Then Eqs. (3.16) and (3.17) can be simplified as

δθc2 = k22 cos θ2 + k23 sin θ2 + k24 cos(θ2 − θ3) + k25 sin(θ2 − θ3)    (3.24)


δθc3 = k32 cos(θ2 − θ3) + k33 sin(θ2 − θ3)    (3.25)

In addition to the flexibility deformation, the joint offset includes the zero deviation, so the errors of joints 2 and 3 can be expressed as

∆θ2 = ∆θo2 + δθc2 = k21 + k22 cos θ2 + k23 sin θ2 + k24 cos(θ2 − θ3) + k25 sin(θ2 − θ3)    (3.26)

∆θ3 = ∆θo3 + δθc3 = k31 + k32 cos(θ2 − θ3) + k33 sin(θ2 − θ3)    (3.27)

where ∆θ2 and ∆θ3 represent the angle errors of joints 2 and 3, respectively; ∆θo2 and ∆θo3 represent the zero position errors of joints 2 and 3, respectively; and k21 and k31 are the constants of the zero position errors. Substituting Eqs. (3.26) and (3.27) into the D-H kinematic error model in Chap. 2 and letting

∂P/∂k21 = ∂P/∂θ2,    (3.28)

∂P/∂k22 = (∂P/∂θ2) cos θ2,    (3.29)

∂P/∂k23 = (∂P/∂θ2) sin θ2,    (3.30)

∂P/∂k24 = (∂P/∂θ2) cos(θ2 − θ3),    (3.31)

∂P/∂k25 = (∂P/∂θ2) sin(θ2 − θ3),    (3.32)

∂P/∂k31 = ∂P/∂θ3,    (3.33)

∂P/∂k32 = (∂P/∂θ3) cos(θ2 − θ3),    (3.34)

∂P/∂k33 = (∂P/∂θ3) sin(θ2 − θ3),    (3.35)

one can obtain the partial differential terms for θ2 and θ3 as

(∂P/∂θ2) · ∆θ2 = (∂P/∂k21) · k21 + (∂P/∂k22) · k22 + (∂P/∂k23) · k23 + (∂P/∂k24) · k24 + (∂P/∂k25) · k25,    (3.36)

(∂P/∂θ3) · ∆θ3 = (∂P/∂k31) · k31 + (∂P/∂k32) · k32 + (∂P/∂k33) · k33.    (3.37)

Let the flexibility vector be

k = [k21, k22, k23, k24, k25, k31, k32, k33]^T    (3.38)

whose elements have the physical dimension of an angle. The kinematic error model can then be extended to

dP = [dx dy dz δx δy δz]^T = [Ma Mα Md Mθ Mk] [∆a; ∆α; ∆d; ∆θ; k] = J(q)dq    (3.39)

which is the robot kinematic error model with flexibility error.
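Assuming the base error-model Jacobian columns ∂P/∂θ2 and ∂P/∂θ3 are available (e.g., from the kinematic error model of Chap. 2), the eight columns of Mk in Eq. (3.39) follow from Eqs. (3.28)–(3.35). A minimal sketch (function name and array shapes are assumptions; a position-only model with 3-vectors is used here):

```python
import numpy as np

def flexibility_regressor(J_th2, J_th3, th2, th3):
    """Assemble the 8 columns of M_k from the Jacobian columns dP/dtheta2
    and dP/dtheta3, following Eqs. (3.28)-(3.35)."""
    c, s = np.cos(th2 - th3), np.sin(th2 - th3)
    cols = [J_th2,                   # dP/dk21, Eq. (3.28)
            J_th2 * np.cos(th2),     # dP/dk22, Eq. (3.29)
            J_th2 * np.sin(th2),     # dP/dk23, Eq. (3.30)
            J_th2 * c,               # dP/dk24, Eq. (3.31)
            J_th2 * s,               # dP/dk25, Eq. (3.32)
            J_th3,                   # dP/dk31, Eq. (3.33)
            J_th3 * c,               # dP/dk32, Eq. (3.34)
            J_th3 * s]               # dP/dk33, Eq. (3.35)
    return np.column_stack(cols)     # shape (3, 8) for a position-only model
```

Stacking this block beside Ma, Mα, Md, and Mθ gives the extended Jacobian of Eq. (3.39), so k can be identified together with the geometric parametric errors.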

3.5 Kinematic Calibration Using Variable Parametric Error

The kinematic error model above depends on the link parametric errors ∆ai, ∆αi, ∆di, ∆θi, and ∆βi, which are assumed not to change because the robot is regarded as an ideal rigid body. In reality, error sources such as the flexible deformation of joints and links and gear clearance make the link parametric errors non-constant, so the existing kinematic error model cannot fully reflect the distribution characteristics of the robot positioning error. In other words, the parametric errors ∆ai, ∆αi, ∆di, ∆θi, and ∆βi vary as the robot pose changes, although for a given point the parametric error is relatively determined [8]. Therefore, each parametric error of the robot can be expressed as a function of the joint values

∆x = (∆a, ∆d, ∆α, ∆θ, ∆β) = f(θ1, θ2, · · · , θ6) = f(θ)    (3.40)

Considering the coupling of the six joint angles θ1, θ2, · · · , θ6, it is difficult to establish an accurate error model in the joint space. Hence, when the robot configuration is determined, Eq. (3.40) is transformed into Cartesian space, i.e.,

∆x = (∆a, ∆d, ∆α, ∆θ, ∆β) = g(x, y, z) = g(x)    (3.41)

When the joint configuration θ is determined, the parametric error at θ is also determined. Assume that the parametric error corresponding to joint configuration θ1 is ∆x1, and that corresponding to θ2 is ∆x2. Then we have

E = ||∆x1 − ∆x2|| < ξ  when ∆θ → 0    (3.42)

where ∆θ is the difference between θ1 and θ2, and E is the 2-norm of the difference between ∆x1 and ∆x2. The formula above means that as ∆θ approaches 0, we can always find a ξ > 0 arbitrarily close to 0 such that E < ξ. When the robot configuration is determined, Eq. (3.42) can be transformed into Cartesian space as

E = ||∆x1 − ∆x2|| < ξ  when (∆x, ∆y, ∆z) → 0    (3.43)

Equation (3.43) indicates that when the robot configuration is determined, there is always a ξ > 0 such that E < ξ as the position variation (∆x, ∆y, ∆z) approaches zero. Therefore, the robot workspace can be divided into several subspaces, within each of which the positioning errors influenced by the flexibility error are close to one another. To improve the efficiency of the identification algorithm, the workspace is divided into small cubic meshes, as shown in Fig. 3.34. Each subspace is enveloped by sample points; here, the 8 vertices of each grid cell are used. To better describe the parametric error of the grid and bring it closer to the true value, the grid central point is also introduced; that is, a single grid cell uses 9 sampling points, as shown in Fig. 3.35. The maximum norm of the parametric error within a single grid cell is denoted ξ. In Fig. 3.35, the largest grid corresponds to ξ1, the intermediate one to ξ2, and the smallest to ξ3. Clearly, the smaller ξ is, the closer the parametric errors within the grid are and the smaller the residual error inside the grid, which means a higher accuracy after error compensation.
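The grid bookkeeping described above can be sketched as follows: locate the cell that contains a query point and enumerate its 9 sampling points (8 vertices plus the center). The function names and the cell-indexing convention are assumptions for illustration.

```python
import numpy as np

def grid_index(p, origin, size):
    """Index (i, j, k) of the cubic grid cell containing Cartesian point p."""
    return tuple(((np.asarray(p) - np.asarray(origin)) // size).astype(int))

def grid_sample_points(origin, idx, size):
    """The 9 sampling points of one cell: 8 vertices plus the central point."""
    base = np.asarray(origin, float) + np.asarray(idx) * size
    verts = [base + size * np.array([i, j, k])
             for i in (0, 1) for j in (0, 1) for k in (0, 1)]
    center = base + size / 2.0
    return np.array(verts + [center])
```

For example, with a 150 mm grid anchored at the workspace origin, a point at (250, 10, 10) mm falls in cell (1, 0, 0), and that cell contributes 9 sampling points to the identification.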

Fig. 3.34 Workspace divided into small square meshes


Fig. 3.35 Grid size and sampling points

3.6 Parameter Identification Using L-M Algorithm

The robot kinematic error model gives a linear transformation between the robot positioning error and the parametric error of each link. Conversely, the parametric errors of each link can be solved iteratively from the measured positioning errors of the robot, which is called robot parameter identification. The basic principle is as follows: the theoretical positions P_t of a number of sampling points are randomly generated in the robot workspace, the actual arrival positions P_a of these points are measured, and the corresponding joint angles θ_t are obtained by the inverse kinematic solution. The parametric error ∆x obtained by the identification algorithm is then used to calculate the end-effector position P_ac of the modified robot kinematic model. Comparing P_ac with P_a yields the positioning error ∆P_e. Through continuous iteration, the value of the parametric error that brings ∆P_e closest to 0 is taken as the optimal solution; that is, the optimal parametric error makes the end-effector position calculated by the modified kinematic model closest to the actual arrival position of the robot end-effector. The robot parameter identification problem can be summarized in the form ∆P = J∆X, i.e., as a problem of solving linear equations of the form Ax = b. In this section, the process of robot parameter identification using the L-M algorithm is elaborated. The least-square method is the most commonly used approach to identify the robot parametric error ∆X, i.e.,

∆X = (J^T J)^{−1} J^T ∆P    (3.44)
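Given the stacked Jacobian and the stacked measured positioning errors, Eq. (3.44) is a one-line computation. A minimal sketch (names assumed); it fails exactly when J^T J is singular, which is the weakness addressed next.

```python
import numpy as np

def identify_least_squares(J, dP):
    """One-shot least-squares estimate of the parametric error, Eq. (3.44).
    J stacks the error-model Jacobians of all sample points; dP stacks the
    corresponding measured positioning errors."""
    return np.linalg.solve(J.T @ J, J.T @ dP)
```

In practice, `np.linalg.solve` raises `LinAlgError` near singular robot poses, whereas the damped L-M update below stays well-conditioned.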

In this method, singular poses of the robot can make J^T J non-invertible and lead to wrong solutions for the parametric errors. The L-M algorithm [9] is a modified version


of the least-square method and is widely used in the field of robot kinematic calibration. Each iteration of the L-M algorithm mainly includes five steps:

(1) Calculate the Jacobian matrix J(x_k) at the kth iteration.

(2) Solve for the parametric error matrix and obtain the small increment of the parametric error for this iteration step as

∆x_k = −{[J(x_k)]^T J(x_k) + λ_k I}^{−1} [J(x_k)]^T ∆P(x_k)    (3.45)

where k is the iteration number, ∆x_k is the small increment of the parametric error after the kth iteration, x_k is the robot link parameter vector used in the kth iteration, ∆P(x_k) is the error matrix between the measured pose and the nominal pose computed by the forward kinematics using the current x_k, and λ_k is the damping factor of the kth iteration, adjusted by

λ_k = α_k (ρ||∆P_k|| + (1 − ρ)||J_k^T ∆P_k||),  ρ ∈ [0, 1]    (3.46)

with α_k being the optimization factor.

(3) Calculate the ratio r_k of the actual decrease Ared_k to the predicted decrease Pred_k in the kth iteration:

Ared_k = ||∆P_k||² − ||∆P(x_k + ∆x_k)||²    (3.47)

Pred_k = ||∆P_k||² − ||∆P_k + J_k ∆x_k||²    (3.48)

r_k = Ared_k / Pred_k    (3.49)

(4) Update the link parameters x_{k+1} and the optimization factor α_{k+1} for the (k + 1)th iteration:

x_{k+1} = x_k + ∆x_k  if r_k > p_0;  x_{k+1} = x_k  if r_k ≤ p_0    (3.50)

α_{k+1} = 4α_k  if r_k < p_1;  α_{k+1} = α_k  if r_k ∈ [p_1, p_2];  α_{k+1} = max(α_k/4, m)  if r_k > p_2    (3.51)


(5) When ||J_k^T ∆P_k|| < ε (ε is the expected residual error norm for convergence, generally ε = 0.0001) or when the specified maximum number of iterations is reached, the loop ends and the optimal solution of the robot parametric errors is obtained.
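The five steps above can be sketched as follows. This is an illustrative implementation, not the book's code: the residual and Jacobian callables, the threshold values p0, p1, p2, and the lower bound m are assumptions chosen for illustration.

```python
import numpy as np

def lm_identify(residual, jacobian, x0, alpha0=1.0, rho=0.5,
                p0=1e-4, p1=0.25, p2=0.75, m=1e-8, eps=1e-4, max_iter=100):
    """Levenberg-Marquardt parameter identification following steps (1)-(5).
    `residual(x)` returns the stacked positioning-error vector dP(x);
    `jacobian(x)` returns its Jacobian matrix."""
    x, alpha = np.asarray(x0, float), alpha0
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)                    # step (1)
        if np.linalg.norm(J.T @ r) < eps:                  # step (5): converged
            break
        lam = alpha * (rho * np.linalg.norm(r)
                       + (1 - rho) * np.linalg.norm(J.T @ r))        # Eq. (3.46)
        dx = -np.linalg.solve(J.T @ J + lam * np.eye(x.size), J.T @ r)  # Eq. (3.45)
        ared = np.linalg.norm(r)**2 - np.linalg.norm(residual(x + dx))**2  # Eq. (3.47)
        pred = np.linalg.norm(r)**2 - np.linalg.norm(r + J @ dx)**2       # Eq. (3.48)
        rk = ared / pred if pred != 0 else 0.0                            # Eq. (3.49)
        if rk > p0:                                        # step (4): accept step
            x = x + dx
        if rk < p1:                                        # Eq. (3.51): adjust alpha
            alpha *= 4.0
        elif rk > p2:
            alpha = max(alpha / 4.0, m)
    return x
```

Because the damping term λ_k I keeps the normal matrix invertible, the update remains well-defined even at singular robot poses where plain least squares fails.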

3.7 Verification of Error Compensation Performance

3.7.1 Kinematic Calibration with Robot Flexibility Error

To validate the performance of parameter identification and error compensation in the constructed robot kinematic error model with flexibility error, a simulated test is designed. The basic idea is to simulate the actual robot structure by presetting initial values for the link parametric errors and the flexibility errors. Under the condition of the preset actual robot structure, 50 points are selected, 20 of which are taken as the samples for calibration, while the remaining 30 points serve as test points. The specific implementation plan is as follows:

(1) A torsion angle of rotation around the y-axis is added to the transmission between links 2 and 3, since joints 2 and 3 of the KUKA KR210 robot are parallel to each other. The preset kinematic parametric errors and flexibility errors are given in Tables 3.13 and 3.14.
(2) Select 20 points in the robot workspace and calculate their coordinate values under the error model and the nominal model of the robot, respectively.
(3) Using these 20 points as the measurement samples for robot calibration, obtain the parametric errors by the L-M identification method, as shown in Tables 3.15 and 3.16.

Table 3.13 Link's parametric error

Link number  ∆a (mm)  ∆α (°)    ∆d (mm)  ∆θ (°)   ∆β (°)
1            − 0.7    0.00003   − 1.03   − 0.02
2            − 0.4    0.002     − 0.15            0.3
3            0.5      0.08      − 0.006
4            0.5      − 0.08    − 0.1    0.3
5            − 0.1    0.05      0.0003   0.02
6            − 0.2    − 0.06    0.05     − 0.6

Table 3.14 Flexibility error (°)

Error term  k21      k22   k23     k24   k25      k31       k32       k33
Value       − 0.038  0.03  0.0285  0.03  − 0.002  − 0.0068  − 0.0163  0.0078


Table 3.15 Identification results of link's parametric error

Link number  ∆a (mm)   ∆α (°)     ∆d (mm)    ∆θ (°)   ∆β (°)
1            − 0.7     0.00003    − 1.03     − 0.02
2            − 0.3999  0.002      − 0.134             0.34
3            0.5       0.08       − 0.00569
4            0.501     − 0.08     − 0.1002   0.33
5            − 0.1     0.05       0.000312   0.0202
6            − 0.2001  − 0.0601   0.051      − 0.599

Table 3.16 Identification results of flexibility error (°)

Error term  k21      k22   k23     k24   k25      k31      k32       k33
Value       − 0.038  0.03  0.0285  0.03  − 0.002  − 0.007  − 0.0163  0.0078

(4) Utilize the inverse kinematics to simulate the robot control system and test the robot error compensation. The corresponding compensation effect is shown in Fig. 3.36.

It can be seen that the errors ∆a, ∆α, ∆d, ∆θ, and k are all identified acceptably. Although the identification of ∆β is less accurate, it is still within the acceptable error range, so the overall identification effect is satisfactory. After the kinematic calibration with robot flexibility error, the positioning error of the end-effector stays below 0.1 mm, which indicates that the robot error compensation achieves a good calibration effect.

3.7.2 Error Compensation Using Variable Parametric Error

It is well known that the smaller the grid size, the smaller the identified parametric error and the compensated robot positioning error. At constant room temperature, an area of 600 mm × 600 mm × 600 mm in the robot workspace is divided into 1, 8, 64, and 216 grids with sizes of 600 mm × 600 mm × 600 mm, 300 mm × 300 mm × 300 mm, 150 mm × 150 mm × 150 mm, and 100 mm × 100 mm × 100 mm, respectively. In this experiment, 64 points were randomly selected in the 600 mm × 600 mm × 600 mm space for error compensation; the results are shown in Fig. 3.37 and Table 3.17. From Table 3.17, the average value, the standard deviation, and the maximum value of the robot positioning error before compensation are 1.069 mm, 0.118 mm, and 1.401 mm, respectively. After compensation with the 600 mm grid size, the positioning accuracy improves to a certain extent, with the average value, the standard deviation, and the maximum value being 0.341 mm, 0.124 mm, 0.581 mm,

Fig. 3.36 Kinematic calibration results with robot flexibility error (positioning error in mm versus number of points)

respectively, all within 0.6 mm. The uneven error distribution after compensation is mainly because the parametric errors are held constant within the 600 mm grid, which cannot accurately reflect the positioning errors of the sample points. After error compensation with the 300 mm grid size, the average positioning error is 0.246 mm, the standard deviation 0.076 mm, and the maximum value 0.392 mm; compared with the 600 mm grid size, the positioning accuracy is improved and the error spread is reduced. When the 150 mm grid size


Fig. 3.37 Error compensation results using variable parametric error

Table 3.17 Comparison of error compensation results with different grid sizes

                         Before compensation  600 mm  300 mm  150 mm  100 mm
Average value (mm)       1.069                0.341   0.246   0.121   0.076
Standard deviation (mm)  0.118                0.124   0.076   0.047   0.032
Maximum value (mm)       1.401                0.581   0.392   0.229   0.147
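The diminishing returns discussed around Table 3.17 can be checked with a few lines of arithmetic on the mean errors from the table (a small illustrative script; the dictionary layout is an assumption):

```python
# Mean positioning error before compensation and for each grid size (Table 3.17)
means = {"before": 1.069, "600 mm": 0.341, "300 mm": 0.246,
         "150 mm": 0.121, "100 mm": 0.076}

for size in ("600 mm", "300 mm", "150 mm", "100 mm"):
    gain = 100 * (1 - means[size] / means["before"])
    print(f"{size}: {gain:.1f}% reduction in mean error")
```

The reduction grows from roughly 68% at 600 mm to about 93% at 100 mm, but the step from 150 mm to 100 mm adds only a few percentage points while multiplying the number of grids (and sampling points) by more than three.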

is adopted for error compensation, the average positioning error, the standard deviation, and the maximum value reach 0.121 mm, 0.047 mm, and 0.229 mm, respectively; compared with the 600 and 300 mm grid sizes, both the positioning accuracy and the uniformity of the error dispersion are greatly improved. With the 100 mm grid size, an average positioning error of 0.076 mm, a standard deviation of 0.032 mm, and a maximum value of 0.147 mm are achieved; the compensation effect is slightly better than with the 150 mm grid size, but the additional improvement is relatively limited. These experimental results show that the smaller the grid size, the better the error compensation effect of the robot. Beyond a certain level of refinement, however, the accuracy improvement is limited, while the number of sampling points required grows greatly. Therefore, for different robots, an appropriate grid size should be selected according to the desired accuracy. In addition, it can be concluded that the parametric error indeed differs among pose states. If the same parametric error is used throughout the workspace, the error model of the robot cannot be characterized accurately in different pose states; using variable parametric errors, by contrast, yields an accurate error model and thus a better compensation effect. Figure 3.38 shows the relationship between the number of grids and the positioning error for the same space size. The compensation effect improves with the number of grids, whereas the increase in the


compensation effect gradually diminishes as the number of grids grows. Consequently, the compensation efficiency must be considered while ensuring the compensation effect. Figure 3.39 shows the iterative process of the L-M algorithm over 100 iterations when solving for the parametric error with a grid size of 150 mm; the horizontal axis is the number of iterations and the vertical axis is the norm of the parametric error. The algorithm exhibits high efficiency and good convergence. The iterative process of the parametric error using the EKF method is shown in Fig. 3.40: the convergence rate of the EKF method is faster than that of the L-M method at the same steady-state error, so the EKF method offers a better solution to the robot parameter identification problem. Figure 3.41 shows the parametric errors of a, d, α, and β for the 150 mm × 150 mm × 150 mm grid size. These parametric errors are variable and not uniformly distributed in the space. The average values of the

Fig. 3.38 Positioning error versus number of grids (error in mm)

Fig. 3.39 Error iterative process using L-M algorithm (error norm versus number of iterations)

Fig. 3.40 Error iterative process using EKF method (error norm versus number of iterations)

parametric errors of the 64 grids are computed and listed in Table 3.18. All the parametric errors of a, d, α, and β are relatively small and are therefore not the main cause of the robot positioning errors. The parametric errors of θ are shown in Fig. 3.42; they change dramatically compared with those of a, d, α, and β in Fig. 3.41. This is because the flexibility resides mainly in the joints, so it affects the joint angles more strongly and the link lengths and offsets less. From Table 3.19, it can be seen that ∆θ2, ∆θ3, and ∆θ5 are an order of magnitude larger than the parametric errors ∆θ1, ∆θ4, and ∆θ6 of joints 1, 4, and 6, as well as the parametric errors of a, d, α, and β. This means that the robot positioning errors are mainly caused by ∆θ2, ∆θ3, and ∆θ5, i.e., by the flexibility of joints 2, 3, and 5. Finally, three test spaces sized 600 mm × 600 mm × 600 mm are selected in the robot workspace and divided into 192 grids sized 150 mm × 150 mm × 150 mm. In each grid, one point is randomly selected, giving 192 points in total. The positioning errors obtained with the traditional and the variable parameter identification methods are measured and shown in Fig. 3.43. With the traditional parameter identification method, the average value, the standard deviation, and the maximum value of the positioning errors are 0.430 mm, 0.103 mm, and 0.596 mm, respectively; with the variable parameter identification method, the corresponding values are 0.115 mm, 0.051 mm, and 0.278 mm. The variable parameter compensation method therefore improves the robot positioning accuracy significantly compared with the traditional method.
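The EKF identification mentioned above can be sketched as a recursive filter that treats the parametric error as a constant state; because the measurement model is already used in the linearized form ∆P = J∆x, the update reduces to the standard Kalman form. This is an illustrative sketch only — the function name, data layout, and noise covariances are assumptions.

```python
import numpy as np

def ekf_identify(J_list, dP_list, x0, P0, R):
    """Kalman-filter style parameter identification.
    The parametric error x is modeled as constant (x_{k+1} = x_k), and each
    sample point contributes a measurement dP_k = J_k @ x + noise.
    J_list / dP_list hold the per-point error-model Jacobians and the
    measured positioning errors; P0 and R are covariance assumptions."""
    x, P = np.asarray(x0, float), np.asarray(P0, float)
    for J, dP in zip(J_list, dP_list):
        S = J @ P @ J.T + R                 # innovation covariance
        K = P @ J.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (dP - J @ x)            # measurement update
        P = P - K @ J @ P                   # covariance update
    return x
```

Processing the sample points one at a time is what gives the filter its fast convergence: each new measurement refines the estimate immediately instead of waiting for a full batch iteration as in the L-M method.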

Fig. 3.41 Parametric errors for a link 1, b link 2, c link 3, d link 4, e link 5, f link 6, and g β (each panel plots a_i in mm, d_i in mm, and α_i in rad versus grid number 1–64)


Table 3.18 Average values of errors for parameters (a, d, α, β)

Axis No.  ∆ai (mm)      ∆di (mm)      ∆αi (rad)     ∆β (rad)
A1        3.28 × 10⁻⁴   − 2.07 × 10⁻⁴  2.46 × 10⁻⁵   0
A2        2.09 × 10⁻⁴   2.81 × 10⁻⁶    5.14 × 10⁻⁵   − 1.3 × 10⁻⁵
A3        1.01 × 10⁻⁴   1.78 × 10⁻⁶    1.11 × 10⁻⁶   0
A4        − 5.2 × 10⁻⁵  1.74 × 10⁻⁴    − 1.5 × 10⁻⁵  0
A5        − 2.9 × 10⁻⁴  1.06 × 10⁻⁵    − 2.6 × 10⁻⁴  0
A6        − 4.6 × 10⁻⁴  3.65 × 10⁻⁴    1.52 × 10⁻⁴   0

Fig. 3.42 Parametric errors of a θ1, b θ2, c θ3, d θ4, e θ5, and f θ6 (rad)

Table 3.19 Average values of errors of parameter θ

Axis No.  ∆θi (rad)
A1        − 1.018 × 10⁻⁴
A2        1.024 × 10⁻³
A3        − 1.04 × 10⁻³
A4        − 3.6 × 10⁻⁵
A5        − 1.053 × 10⁻³
A6        − 4.1 × 10⁻⁵

Fig. 3.43 Compensated positioning error with different identification methods (error in mm versus number of points, comparing traditional and variable parameter identification)

3.8 Summary

The achievable accuracy of error compensation is closely related to the choice of sampling points. Hence, this chapter first explored the observability-index-based random sampling method and the uniform-grid-based sampling method to optimize the sampling points. Then the kinematic calibrations considering robot flexibility error and using variable parametric error were presented, together with the L-M algorithm for parameter identification. Finally, the error compensation performance of these two methods was validated.


References

1. Kalman RE. Mathematical description of linear dynamical systems. J Soc Ind Appl Math Ser A Control. 1963;1(2):152–92.
2. Borm J-H, Meng C-H. Determination of optimal measurement configurations for robot calibration based on observability measure. Int J Robot Res. 1991;10(1):51–63.
3. Driels MR, Pathre US. Significance of observation strategy on the design of robot calibration experiments. J Robot Syst. 1990;7(2):197–223.
4. Nahvi A, Hollerbach JM. The noise amplification index for optimal pose selection in robot calibration. In: Proceedings of IEEE international conference on robotics and automation. New York: IEEE; 1996. p. 647–54.
5. Sun Y, Hollerbach JM. Observability index selection for robot calibration. In: 2008 IEEE international conference on robotics and automation. New York: IEEE; 2008. p. 831–6.
6. Ginani LS, Motta JMS. Theoretical and practical aspects of robot calibration with experimental verification. J Braz Soc Mech Sci Eng. 2011;33(1):15–21.
7. Wang W, Loh R, Ang M. Passive compliance of flexible link robots. I. A new computation method. In: International conference on advanced robotics; 1997.
8. Zeng Y, Tian W, Liao W. Positional error similarity analysis for error compensation of industrial robots. Robot Comput-Integr Manuf. 2016;42:113–20. https://doi.org/10.1016/j.rcim.2016.05.011.
9. Zhang YG, Huang YM, Xie LM. Robot inverse acceleration solution based on hybrid genetic algorithm. In: 2008 International conference on machine learning and cybernetics; 2008.

Chapter 4

Error-Similarity-Based Positioning Error Compensation

4.1 Introduction

The kinematic error model was described in Chap. 2, and robot error compensation using kinematic calibration was conducted in Chap. 3. However, for robot error compensation, this kinematic error model is limited in that it includes only geometric error sources and neglects the non-geometric ones. If more error sources are to be included, more error parameters must be added to the robot error model, which leads to a significant increase in complexity and in the amount of computation. In addition, the geometric parameters change with the type of robot, so different error parameter models inevitably have to be established to compensate different types of robots. More importantly, applying the kinematic calibration method to error compensation requires modifying the controller parameters of the robot itself, and the cost of obtaining modification authority for the robot control system is relatively high. A robot system integrator needs a more economical error compensation method; from this perspective, the positioning error compensation method based on parameter calibration is less versatile. For these reasons, it is meaningful to improve the robot error compensation method. In this chapter, a robot error compensation method based on spatial similarity is proposed to improve the positioning accuracy of an industrial robot. This method does not rely on a specific robot kinematic model but focuses only on the observed behavior of the robot positioning errors. It does not need to modify the robot kinematic parameters and is therefore capable of overcoming the above problems of kinematic parameter calibration. To expound the compensation method, the similarity of the robot positioning error is first analyzed qualitatively and quantitatively, and a mathematical representation of the error similarity is explored and determined.
Based on the error similarity, an error compensation method is presented using the inverse distance weighting interpolation technique. Then another error compensation approach is proposed, combining linear unbiased estimation with the error similarity. Next, a method for choosing the optimal sampling points is suggested based on the error similarity. Finally, the

© Science Press 2023
W. Liao et al., Error Compensation for Industrial Robots, https://doi.org/10.1007/978-981-19-6168-7_4


effectiveness and feasibility of the above methods are verified by simulations and experiments on industrial robots.

4.2 Similarity of Robot Positioning Error

Research has shown that the robot positioning errors of nearby positions exhibit similarity. This property resembles the spatial dependency of spatial data, so the positioning errors can be estimated with spatial interpolation, a technique mainly used in geographic information systems [1]. This section discusses this similarity of the positioning error through qualitative and quantitative analysis.

4.2.1 Qualitative Analysis of Error Similarity

For manipulators with rotary joints, we can assume that only the joint angles θ_i are variable and the other kinematic parameters are constant. According to the kinematic model in Chap. 2, the variation of the end-effector's position depends on the joint angles. Generally, the kinematic parametric errors are also considered constant, so the variation of the end-effector's positioning error likewise depends only on that of the joint angles. Based on the above assumptions, the positioning error ∆P(θ_i) is composed of a series of elementary functions of the kinematic parameters and their errors. For 6-DOF manipulators with rotary joints, the positioning error ∆P is continuous since it is a function of the joint angles. Therefore, when two joint configurations are similar, the corresponding robot positions and their errors are also similar.

In addition, for 6-DOF industrial robots with rotary joints, the waist, shoulder and elbow joints (θ_1, θ_2 and θ_3) contribute primarily to the position of the end-effector, while the pitch, roll and yaw joints (θ_4, θ_5 and θ_6) contribute primarily to its posture. Therefore, the waist, shoulder and elbow joints have a more remarkable effect on the robot positioning error than the pitch, roll and yaw joints [2]. In general, there is a multi-solution issue in solving the inverse kinematics from Cartesian space to joint space. Under certain constraints on the joint angles of the robot, such as restricting the "status" value of KUKA industrial robots, the robot configuration can be uniquely identified, i.e. the waist, shoulder and elbow joint angles are similar when the positions of the end-effector are similar. Then, if the orientations of the end-effector are also similar, the corresponding robot configurations are similar, and thus the positioning errors are similar.
Thus, we can qualitatively assume that, under certain constraints, there is a similarity between the positioning errors of nearby positions in both joint space and Cartesian space. The "similarity" mentioned here means that if the positioning error at a position in the robot workspace is relatively large (or relatively small), the


positioning error at a nearby position also tends to be relatively large (or relatively small).

4.2.2 Quantitative Analysis of Error Similarity

The degree of similarity between the robot positioning errors within the joint range of an n-DOF robot can be characterized as

γ(θ, h) = (1/2) Var[∆P(θ) − ∆P(θ + h)]
        = (1/2) E{[∆P(θ) − ∆P(θ + h)]²} − (1/2) {E[∆P(θ) − ∆P(θ + h)]}²    (4.1)

where γ(θ, h) is the semivariogram (also known as the semi-variance function); θ is a set of joint values corresponding to a joint configuration; h represents the distance between two joint configurations, which can be defined as the Euclidean distance of the two joint angle vectors in the joint space

h_{i,j} = √( Σ_{k=1}^{6} (θ_k^i − θ_k^j)² )    (4.2)

It is worth noting that θ + h denotes not the sum of θ and h, but a joint vector at a distance h from θ. The value of the semivariogram is half the variance of the increment of the robot positioning error in the joint space, which quantitatively reflects the degree of similarity of the robot positioning error. In the joint space, the positioning error of the end-effector associated with any joint value always varies within a limited range. To facilitate calculation and analysis, the following assumptions are made:

(1) In the workspace of the robot, the mathematical expectation of the increment of the positioning error is zero, i.e.

E[∆P(θ) − ∆P(θ + h)] = 0,  ∀θ, ∀h    (4.3)

(2) In the workspace of the robot, the variance of the increment of the positioning error exists and is stable, namely

Var[∆P(θ) − ∆P(θ + h)] = E{[∆P(θ) − ∆P(θ + h)]²} − {E[∆P(θ) − ∆P(θ + h)]}²
                       = E{[∆P(θ) − ∆P(θ + h)]²},  ∀θ, ∀h    (4.4)


From the above assumptions, the semivariogram of the robot positioning error can be written as

γ(h) = (1/2) E{[∆P(θ) − ∆P(θ + h)]²}    (4.5)

In this case, the semivariogram of the positioning error also exists and is stable, and γ(h) is independent of θ and depends only on h. The smaller the value of the semivariogram γ(h), the smaller the expectation and variance of the increment of the positioning errors, and the greater the degree of similarity of the positioning errors. In practice, it is impossible to statistically analyze all the robot positioning errors in the entire workspace using Eq. (4.5), hence it is necessary to discuss how to analyze the similarity of the positioning errors from a limited number of known sampling points. On the premise that the above assumptions are satisfied, the semivariogram of the positioning errors can be calculated by averaging over the actual sampling data, and Eq. (4.5) can be rewritten as

γ(h) = (1/(2N(h))) Σ_{i=1}^{N(h)} [∆P(θ_i) − ∆P(θ_i + h)]²    (4.6)
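As a sketch, the experimental semivariogram of Eq. (4.6) can be computed directly from sampled data. The following Python fragment (function and variable names are our own, not from the book) groups sample pairs whose joint-space distance, Eq. (4.2), falls within a small tolerance band:

```python
import numpy as np

def experimental_semivariogram(thetas, errors, tol=0.05):
    """Experimental semivariogram of Eq. (4.6) for one Cartesian direction.

    thetas : (m, n) joint angles of the m sample points
    errors : (m,)   measured positioning errors in one direction
    Returns sorted arrays (hs, gammas) over all sample pairs, grouped by
    joint-space distance within a band of width `tol`.
    """
    m = len(thetas)
    dists, sq_incs = [], []
    for i in range(m):
        for j in range(i + 1, m):
            h = np.linalg.norm(thetas[i] - thetas[j])      # Eq. (4.2)
            dists.append(h)
            sq_incs.append((errors[i] - errors[j]) ** 2)
    dists, sq_incs = np.asarray(dists), np.asarray(sq_incs)
    hs = np.unique(np.round(dists / tol) * tol)            # group similar h
    gammas = []
    for h in hs:
        mask = np.abs(dists - h) <= tol / 2
        gammas.append(sq_incs[mask].mean() / 2.0)          # (1/2N(h)) * sum
    return hs, np.asarray(gammas)
```

The result is the scatter of (h, γ(h)) values from which a semivariogram curve can then be fitted.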

where γ(h) is called the experimental semivariogram, N(h) is the number of sample pairs with distance h, and ∆P(θ_i) represents the positioning error of the sample point P_i. A semivariogram curve can be fitted by calculating the semivariogram for several values of the distance h, after which the properties of the semivariogram can be analyzed. A typical semivariogram curve is shown in Fig. 4.1, where C_0 and C are the semivariogram and the covariance of two observations of the positioning error at the same point, respectively. The range a is the minimum distance corresponding to the maximum semivariogram value (C_0 + C). There is a relationship between the semivariogram and the covariance function [3]:

γ(h) = Cov(0) − Cov(h) = σ² − Cov(h)    (4.7)

where Cov(h) is the covariance between the positioning errors of two positions at a distance h, and σ² is the covariance of the positioning errors at the same position, which is equal to the semivariogram parameter C. The semivariogram curve reveals several properties of the positioning error similarity. When the distance between two sample points is small, the variance of the increment between their positioning errors is also small; the positioning errors of nearby positions show a greater similarity than those of distant ones. Notice that C_0 is generally not zero, because measurement errors lead to different observation results at the same position. The semivariogram stays stable after the distance reaches the range a, which means the positioning error similarity only appears within a limited range of distances.


Fig. 4.1 Typical graph of a semivariogram curve (spherical model)

The fitting of the semivariogram curve is based on the following empirical models: the spherical model, the exponential model, the Gaussian model and the linear model [4]. The general form of the spherical model is

γ(h) = C_0 + C(3h/(2a) − h³/(2a³)),  0 ≤ h ≤ a
γ(h) = C_0 + C,                      h > a    (4.8)

The general form of the exponential model is

γ(h) = C_0 + C[1 − exp(−h/a)],  0 ≤ h ≤ 3a
γ(h) = C_0 + C,                 h > 3a    (4.9)

The general form of the Gaussian model is

γ(h) = C_0 + C[1 − exp(−h²/a²)],  0 ≤ h ≤ √3·a
γ(h) = C_0 + C,                   h > √3·a    (4.10)

The general form of the linear model is

γ(h) = C_0 + C(h/a),  0 ≤ h ≤ a
γ(h) = C_0 + C,       h > a    (4.11)

where C_0, C and a are the parameters to be fitted in Eqs. (4.8)–(4.11). In practice, the semivariogram curve can be obtained by selecting the best-fitting model according to the distribution of the sample data.
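For illustration, the spherical model of Eq. (4.8) can be fitted to experimental semivariogram values with a standard least-squares routine. This sketch uses SciPy's `curve_fit`; the function names and the starting-guess heuristic are our own:

```python
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, c0, c, a):
    """Spherical semivariogram model, Eq. (4.8)."""
    h = np.asarray(h, dtype=float)
    inside = c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h <= a, inside, c0 + c)

def fit_spherical(hs, gammas):
    """Fit nugget c0, partial sill c and range a to experimental (hs, gammas)."""
    p0 = [gammas.min(), gammas.max() - gammas.min(), hs.max() / 2]
    (c0, c, a), _ = curve_fit(spherical, hs, gammas, p0=p0,
                              bounds=([0, 0, 1e-6], np.inf))
    return c0, c, a
```

The same pattern applies to the exponential, Gaussian and linear models of Eqs. (4.9)–(4.11); the model with the smallest residual is kept.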


4.2.3 Numerical Simulation and Discussion

Based on the robot kinematic error model established above, the semivariogram of the positioning error is simulated to verify and analyze the similarity of the positioning error of the robot. The simulation process is shown in Fig. 4.2, and the steps are:

(1) First, the theoretical kinematic model of the robot is established based on the nominal kinematic parameters of the industrial robot. Then the errors of each kinematic parameter are randomly generated, and the robot kinematic model with errors is established to simulate the real model.

(2) The motion range of each joint of the robot is determined, and the joint angles corresponding to several sampling points and verification points are generated randomly within this range. The joint angles are input to the theoretical kinematic model and to the kinematic model with errors established in step (1) to calculate the theoretical and actual positions of the robot end-effector, which simulates the real positioning errors of the robot.

(3) In the joint space, the semivariogram of the positioning errors is calculated according to Eq. (4.6). From the calculation results, a scatterplot of the positioning error semivariogram is drawn to analyze the local trend of the positioning error similarity.

(4) To show the variation tendency of the positioning error intuitively, an appropriate ∆h is set to group the sampling points according to h ± ∆h, such that the product of the step length of each group and the number of groups equals half of the maximum separation, which ensures that each group contains a sufficient number of sampling point pairs. Equation (4.6) is used to calculate the semivariogram of each group, and a dot-and-line graph of the mean and standard deviation of the semivariogram is drawn to analyze the global trend of the positioning error similarity.
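Steps (1) and (2) can be illustrated, in a much simplified form, with a planar 3R arm in place of the full 6-DOF KUKA model; the perturbed link lengths and joint offsets below are arbitrary illustrative values, not the book's parameters:

```python
import numpy as np

# Nominal link lengths (mm) of a simplified planar 3R arm -- illustrative only.
L_NOM = np.array([500.0, 400.0, 300.0])
rng = np.random.default_rng(0)
L_ERR = L_NOM + rng.normal(0.0, 0.1, 3)        # step (1): perturbed "real" model
DTH = rng.normal(0.0, 1e-4, 3)                 # joint-angle offsets (rad)

def fk(lengths, q, dq=0.0):
    """Planar forward kinematics: end-effector (x, y) for joint angles q."""
    acc, x, y = 0.0, 0.0, 0.0
    for L, qi in zip(lengths, np.asarray(q) + dq):
        acc += qi
        x += L * np.cos(acc)
        y += L * np.sin(acc)
    return np.array([x, y])

# Step (2): random joint configurations and simulated positioning errors
# (actual minus theoretical position).
Q = rng.uniform(-np.pi / 4, np.pi / 4, size=(200, 3))
errors = np.array([fk(L_ERR, q, DTH) - fk(L_NOM, q) for q in Q])
```

The resulting `errors` array plays the role of the simulated positioning errors fed into Eq. (4.6) in step (3).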
Here the KUKA KR 210-2 industrial robot is considered, with its joint ranges shown in Table 4.1.

Fig. 4.2 Simulation process of robot positioning error similarity


Table 4.1 Joint range of the KUKA KR 210-2 industrial robot (°)

Joint axis      A1          A2           A3          A4          A5          A6
Angular range   [−45, 45]   [−90, −30]   [80, 120]   [−15, 15]   [−15, 15]   [−15, 15]

Within the specified joint ranges, 200 sets of joint angles are randomly generated, and the positioning errors corresponding to each set of joint angles are calculated. Using the Euclidean distance in the joint space as the separation variable, the semivariogram values of the positioning errors in the x, y, and z directions of the robot base frame are calculated, as shown in Figs. 4.3, 4.4 and 4.5. Each point in the figures represents a pair of sampling points; the x-coordinate of each point indicates the separation of the point pair, and the y-coordinate indicates the semivariogram value. It is observed from the figures that when the Euclidean distance between two sets of joint angles is small, i.e., they are similar, the difference between the corresponding positioning errors is also small. As the Euclidean distance increases, the difference in positioning errors gradually enlarges, which indicates

Fig. 4.3 Semivariogram on the x-axis vs the distance

Fig. 4.4 Semivariogram on the y-axis vs the distance


Fig. 4.5 Semivariogram on the z-axis vs the distance

that the probability of the positioning error similarity decreases. This shows that the positioning error of the robot has remarkable similarity in the joint space. It is worth noting that the extreme value of the difference in positioning errors appears at about half of the maximum separation, denoted 1/2·h_max, and when h increases further, the similarity of the positioning error does not change significantly. It can be considered that when h is large enough, the positioning errors of the sampling points have no similarity. From a statistical point of view, it is more meaningful to study the error similarity of sample data with h ≤ 1/2·h_max. According to step (4), the sampling data with h ≤ 1/2·h_max are divided into 10 groups in terms of the Euclidean distance. From Eq. (4.6), the semivariogram values corresponding to each group are calculated; their mean and standard deviation are shown in Figs. 4.6 and 4.7. It can be found from Figs. 4.6 and 4.7 that after grouping, the global variation trend of the robot positioning error in the joint space is intuitively reflected. The mean and standard deviation of the semivariogram both rise with increasing h, indicating that the probability of the positioning error similarity decreases as h increases. In addition, anisotropy of the positioning error in the x, y, and z directions is observed, because the joint angle input has different

Fig. 4.6 Mean value of the semivariogram of the positioning errors after grouping


Fig. 4.7 Standard deviation of the semivariogram of the positioning errors after grouping

effects upon the positioning errors of the robot in each direction. Furthermore, from the data close to the origin, we can see that the variation trend of the semivariogram approaches a linear or parabolic shape, signifying that the positioning error has spatial continuity, which is consistent with the previous qualitative analysis.

4.3 Error Compensation Based on Inverse Distance Weighting and Error Similarity

The core problem of robot error compensation is to obtain the positioning errors of the points to be compensated. From the analysis in Sect. 4.2, the positioning errors of the robot have a certain continuity and similarity in the joint space. When the joint configurations are close, the corresponding positioning errors of the end-effector have a certain similarity, and the degree of similarity is related to the deviation between the joint angles. This error similarity provides a strong basis for new error compensation techniques. In this section, an error compensation method based on weights measuring the error similarity is proposed to improve the robot positioning accuracy.

4.3.1 Inverse Distance Weighting Interpolation Method

Inverse distance weighting (IDW) interpolation is a method proposed in the late 1960s. It is essentially a weighted average algorithm and is widely used in geographic information systems for spatial interpolation. Spatial interpolation refers to finding, from a set of known spatial data (in the form of discrete points or of partitioned data), an implicit functional relationship that approximates not only the known spatial data but also the data at arbitrary points near the known data. This relationship can be expressed mathematically as


z=

⎧ ⎪ ⎪ ⎨

n ∑

zi p i=1 di n ∑ 1 p j=1 d j

⎪ ⎪ ⎩

zi

z /= zi

(4.12)

z = zi

where z_i is a known data point, z is the value at the unknown point, d_i is the distance from z to z_i, and p > 0 is the weighted power exponent. It is apparent that z is a weighted average of z_1, z_2, ..., z_n. Although Eq. (4.12) is a piecewise expression, z is continuous, which can be proved as follows.

lim z =

z→z i

zi p d i=1 i lim n z→z i ∑ 1 p d j=1 j p

= lim

di →0

z 1 di p d1

=

+ ··· + p

di p d1

+ ··· +

z1 p d1 lim 1 di →0 p d1

+ ··· + p

z i−1 di p di−1

+ ··· +

p

di p di−1

z i−1 p di−1 1 p di−1

+ + p

+ zi +

z i+1 di p di+1

+1+

di p di+1

p

zi p di 1 p di

+ +

z i+1 p di+1 1 p di+1

+ ··· +

+ ··· +

p

di p dn

+ ··· + + ··· +

zn p dn 1 p dn

p

z n di p dn

= zi

(4.13)

In Eq. (4.12), the weighted power exponent p can be used to adjust the shape of the interpolation surface: it controls how fast the weight decreases as the distance between two points increases. As shown in Fig. 4.8, a larger p gives a higher weight to closer points, so the surface is flatter near the data points; a smaller p distributes the weight more evenly among the data points, producing a sharper surface. According to related research, when the distribution of the known sample points z_i is relatively uniform, the IDW approach approximates the interpolation point z well with a fast calculation speed. However, it cannot interpolate a value larger (or smaller) than the maximum (or minimum) of the known sample points. In addition, the method is sensitive to the sample points; in particular, the interpolated value near an isolated point may be significantly higher than that of the surrounding data points. This issue is usually solved by introducing a smooth coefficient. A smooth coefficient greater than zero ensures that no single sample point is given all the weight, even when a known sample point coincides with the point to be interpolated. The mathematical description is

Fig. 4.8 Surface of interpolation function

z = Σ_{i=1}^{n} (z_i / h_i^p) / Σ_{j=1}^{n} (1 / h_j^p)    (4.14)

with

h_i = √(d_i² + δ²)    (4.15)

where δ is the smooth coefficient.
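Equations (4.14)–(4.15), together with the exact-hit branch of Eq. (4.12), can be sketched as follows; the function name and parameter defaults are our own:

```python
import numpy as np

def idw(points, values, query, p=1.0, delta=0.0):
    """Inverse distance weighting with a smooth coefficient, Eqs. (4.14)-(4.15).

    points : (n, k) known sample coordinates
    values : (n,) or (n, 3) known data (e.g. positioning errors)
    query  : (k,) point to interpolate at
    p      : weighted power exponent
    delta  : smooth coefficient (delta > 0 avoids a single sample taking
             all the weight when the query coincides with it)
    """
    points, values = np.asarray(points), np.asarray(values)
    d = np.linalg.norm(points - np.asarray(query), axis=1)
    if delta == 0.0 and np.any(d == 0.0):
        return values[np.argmin(d)]          # exact hit: branch z = z_i of Eq. (4.12)
    h = np.sqrt(d ** 2 + delta ** 2)         # Eq. (4.15)
    w = 1.0 / h ** p
    return (w / w.sum()) @ values            # Eq. (4.14)
```

With `delta > 0` the piecewise branch is never needed, since all h_i stay strictly positive.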

4.3.2 Error Compensation Method Combining IDW with Error Similarity

For the robot positioning error at any point, if the positioning errors of several other points have a high degree of similarity to it, the positioning error of this point can in principle be obtained by interpolating the positioning errors of the known points with the IDW method. To make the distribution of the known sample points uniform and thus ease the interpolation, the robot workspace is divided into a series of cubic grids with a certain step size, and the vertices of the cubic grids are chosen as the sampling points. In this way, the positioning error of any other point in the workspace can be interpolated from those of the eight vertices of its cubic grid cell. As shown in Fig. 4.9, the eight vertices are denoted K_i (i = 1, 2, …, 8). Assuming that the nominal and measured coordinates at point K_i are (X_i, Y_i, Z_i) and (X_i', Y_i', Z_i'), respectively, the corresponding positioning error (∆X_i, ∆Y_i, ∆Z_i) can be obtained by comparing them. For any target point K with coordinates (X, Y, Z) in the cubic grid, its positioning error can be predicted as follows.

(1) Calculate the weight corresponding to each vertex K_i. First, the distance d_i from the measured coordinate (X_i', Y_i', Z_i') of each of the eight vertices to the nominal coordinate (X, Y, Z) of the target point K is calculated, and then the weight q_i is obtained from the distance d_i as

q_i = (1/d_i) / Σ_{i=1}^{8} (1/d_i)    (4.16)

with

d_i = √((X − X_i')² + (Y − Y_i')² + (Z − Z_i')²)    (4.17)

Fig. 4.9 IDW interpolation with eight vertices around point K

The weighted power exponent and the smooth coefficient in Eqs. (4.14) and (4.15) are both set to 1.

(2) Predict the positioning error of point K. The positioning error of point K is obtained by interpolating the errors of the vertices as

∆X = Σ_{i=1}^{8} ∆X_i q_i
∆Y = Σ_{i=1}^{8} ∆Y_i q_i
∆Z = Σ_{i=1}^{8} ∆Z_i q_i    (4.18)

Finally, a modified robot program can be generated by adding the error (∆X, ∆Y, ∆Z) to the nominal position of K. The positioning accuracy at K is then improved when the robot is driven by the modified program.
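The two-step prediction above can be sketched as a short routine (the function name is illustrative):

```python
import numpy as np

def predict_error(target_nom, vertices_meas, vertex_errors):
    """Predict the positioning error at a point K inside a grid cell.

    target_nom    : (3,) nominal (X, Y, Z) of the target point K (mm)
    vertices_meas : (8, 3) measured coordinates of the cube vertices (mm)
    vertex_errors : (8, 3) measured errors (dX, dY, dZ) at the vertices (mm)
    Uses p = 1 and delta = 0, as selected in the chapter.
    """
    d = np.linalg.norm(vertices_meas - target_nom, axis=1)  # Eq. (4.17)
    q = (1.0 / d) / np.sum(1.0 / d)                         # Eq. (4.16)
    return q @ vertex_errors                                # Eq. (4.18)
```

The returned (∆X, ∆Y, ∆Z) is then applied to the nominal position of K, following the sign convention of the chapter, to produce the modified command.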

4.3.3 Numerical Simulation and Discussion

In this section, a simulation scheme is designed to show the feasibility of the error compensation method combining IDW with error similarity. The basic idea is: first, select test points at arbitrary poses in the robot workspace; second, center a cubic grid of a given size on each test point to determine the corresponding grid vertices; next, use the robot positioning error model established in Sect. 2.6 to calculate the theoretical positioning error of each vertex of the cubic grid; then apply the proposed error compensation method to predict the errors at the selected test points; and finally compare the predicted error with the calculated one to obtain the residual error, by


Table 4.2 Pre-given kinematic parametric errors of the robot

Link number   ∆a (mm)    ∆d (mm)   ∆α (rad)    ∆θ (rad)
1             −0.00356   −0.0618   −0.00019    −0.000012
2             0.0112     −0.1056   −0.00128    0.0000362
3             −0.01018   −0.0899   0.00239     −0.0000301
4             −0.012     0.22561   0.000305    −0.0000166
5             0.00001    0.00002   −0.000217   0.0000188
6             0.0078     0.1156    0.000018    0.0000061

which the effectiveness of the error compensation method is judged. The specific steps are:

(1) Select test points at arbitrary poses. Here, (2000, 700, 1000 mm, 0, 90°, 0) and (1450, 200, 1500 mm, 0, 90°, 0) are selected as the test points. The first three parameters give the target position, and the last three give the posture expressed in RPY angles.

(2) Determine the poses of the vertices of the cubic grid. Centering the grid on the selected test point, choose different grid sizes to determine the position parameters of each vertex. The corresponding posture is defined with the same parameters as the test point, which simplifies the selection of the poses of the grid vertices. Here, grid sizes of 100 mm and 300 mm are tested.

(3) Calculate the positioning errors of the test points and of the vertices of the cubic grid. The inverse kinematics is first solved to obtain the rotation angles of each joint of the robot, which are then substituted into the robot error model to obtain the positioning error, where each kinematic parametric error adopts the pre-given values in Table 4.2.

(4) Predict the positioning errors of the test points. The pose of each selected test point is taken as the desired one, and the error compensation method based on IDW and error similarity is used to predict the positioning errors.

(5) Compare the positioning errors of the test points calculated by the error model in step (3) with those predicted in step (4). The effectiveness of the proposed error compensation method is judged from the residual errors obtained by comparing the two.

Table 4.3 gives the positioning error of each vertex of a cubic grid with a step size of 100 mm centered on the point (2000, 700, 1000 mm, 0, 90°, 0), obtained by the error model. Table 4.4 gives the positioning error of each vertex of a cubic grid with a step size of 300 mm centered on the point (1450, 200, 1500 mm, 0, 90°, 0), obtained by the error model.

Table 4.5 gives the comparison between the positioning errors predicted by the error compensation method and those given by the error model. From the residual results shown in Table 4.5, it can be seen that the positioning error predicted by the compensation method is almost the same as that obtained by the error model, which validates the effectiveness of the error compensation method.


Table 4.3 Positioning error with a 100 mm step size centered on (2000, 700, 1000 mm, 0, 90°, 0)

Serial number   Nominal position (mm)   Positioning error (mm)
1               1950, 750, 1050         0.0173, 1.0105, −0.2189
2               1950, 650, 1050         0.0634, 1.0094, −0.2116
3               1950, 750, 950          0.0191, 0.9860, −0.2353
4               1950, 650, 950          0.0644, 0.9835, −0.2286
5               2050, 750, 1050         0.0288, 0.9971, −0.2345
6               2050, 650, 1050         0.0692, 1.0104, −0.2253
7               2050, 750, 950          0.0283, 0.9760, −0.2492
8               2050, 650, 950          0.0682, 0.9880, −0.2406

Table 4.4 Positioning error with a 300 mm step size centered on (1450, 200, 1500 mm, 0, 90°, 0)

Serial number   Nominal position (mm)   Positioning error (mm)
1               1300, 350, 1650         0.2142, 0.7123, 0.0195
2               1300, 50, 1650          0.3508, 0.5894, −0.0554
3               1300, 350, 1350         0.3428, 0.3471, −0.0236
4               1300, 50, 1350          0.3553, 0.1806, −0.1258
5               1600, 350, 1650         0.1806, 1.0200, −0.0575
6               1600, 50, 1650          0.3509, 0.9472, −0.0839
7               1600, 350, 1350         0.2363, 0.7877, −0.1021
8               1600, 50, 1350          0.3429, 0.6915, −0.1490

Table 4.5 Simulated results using the compensation based on IDW and error similarity

Serial number   Nominal position (mm)   Grid size (mm)   Theoretical error (mm)      Predicted error (mm)      Residual error (mm)
1               2000, 700, 1000         100              0.0430, 1.0011, −0.2303     0.0449, 0.995, −0.2304    0.001979, −0.00595, −0.000159
2               1450, 200, 1500         300              0.3005, 0.6606, −0.0698     0.2968, 0.6592, −0.0722   0.00367, 0.001302, 0.002441


4.4 Error Compensation Based on Linear Unbiased Optimal Estimation and Error Similarity

In view of the limitations of the robot error compensation method based on inverse distance weighting, a novel robot positioning error estimation is proposed in this section. It calculates the weights in the different dimensions using the measured positioning error data of irregularly distributed sampling points, so as to realize a linear unbiased optimal estimate of the positioning errors of the points to be compensated.

4.4.1 Robot Positioning Error Mapping Based on Error Similarity

The error similarity of positioning errors can be used to solve the key issue of the robot error compensation problem. The distribution of robot positioning errors can be modeled from the error similarity of the sampled data. The error similarity between a target position and each sample can then be calculated with this model, and the positioning error of the target position can be obtained via linear unbiased optimal estimation. The actual positioning error of the target position can thus be compensated by modifying the robot positioning command according to the estimated positioning error. The modeling and estimation method of the robot positioning errors is developed below.

For an n-DOF robot with rotary joints, suppose that there are m samples in the joint space; the joint angles corresponding to these samples form an m × n matrix Θ̄ = [θ̄_1 ··· θ̄_m]^T with θ̄_i ∈ R^n. The positioning errors of the samples in the Cartesian coordinate system are denoted as an m × 3 matrix Ē = [ē_1 ··· ē_m]^T with ē_i ∈ R³. The overbar indicates that the data are measured. For convenience of computation, each element of the matrices Θ̄ and Ē is normalized, namely

θ_j^i = (θ̄_j^i − μ(θ̄_j)) / σ(θ̄_j),  i = 1, ..., m,  j = 1, ..., n    (4.19)

e_∗^i = (ē_∗^i − μ(ē_∗)) / σ(ē_∗),  i = 1, ..., m,  ∗ = x, y, z    (4.20)

where μ(·) and σ(·) denote the mean value and standard deviation, respectively; θ̄_j^i and θ_j^i denote the measured and normalized values of the jth joint of the ith sample, respectively; and ē_∗^i and e_∗^i denote the measured and normalized positioning errors in the ∗ direction of the ith sample, respectively. After normalization, one has


μ(θ_j) = 0,  σ(θ_j) = 1,  j = 1, ..., n    (4.21)

μ(e_∗) = 0,  σ(e_∗) = 1,  ∗ = x, y, z    (4.22)

Two new normalized matrices Θ and E are thus obtained. Unless otherwise specified, the following derivations assume that the normalization conditions are satisfied.

For a certain joint configuration θ ∈ R^n, the positioning error is composed of a regression model F and a stochastic process g:

e_∗(θ) = F(β_∗, θ) + g_∗(θ),  ∗ = x, y, z    (4.23)

where F(β_∗, θ) is a linear regression model in θ, which represents the deterministic errors of the robot. It can be written as

F(β_∗, θ) = β_{1,∗} + β_{2,∗} θ_1 + ··· + β_{n+1,∗} θ_n = [1 θ_1 ··· θ_n] β_∗ = f(θ)^T β_∗    (4.24)

where β_∗ contains the function parameters to be fitted, and g_∗(θ) is a stochastic process (i.e., a random function) of θ, which represents the random errors of the robot. The expectation of the stochastic process g is 0, and the covariance between g_∗(θ^i) and g_∗(θ^j) is

Cov[g_∗(θ^i), g_∗(θ^j)] = σ_∗² R(ξ, θ^i, θ^j)    (4.25)

where σ_∗² is the variance of the stochastic process in the ∗ direction, and R(ξ, θ^i, θ^j) is the correlation model parameterized by ξ. The most commonly used model is the Gaussian model

R(ξ, θ^i, θ^j) = exp( −Σ_{k=1}^{n} ξ_k |θ_k^i − θ_k^j|² )    (4.26)

The correlation model depends on the difference between each pair of joint angle sets. The parameter ξ reflects the influence of the variation of each joint angle on the positioning error. It can be seen that the closer any two sets of joint values are, the higher their correlation, which indicates that the positioning errors corresponding to these two sets of joint angles are more similar. Conversely, the correlation value approaches 0, indicating that the positioning errors corresponding to the two sets of joint angles are not similar. To establish the mapping between the positioning errors and the joint angles from the measurement samples, the coefficients in Eq. (4.23) need to be determined. For all samples, a matrix F is constructed as

F = [f(θ^1) ··· f(θ^m)]^T    (4.27)

where f(·) is defined in Eq. (4.24). Then a matrix R representing the similarity models of all samples is defined by

R_{ij} = R(ξ, θ^i, θ^j),  i, j = 1, ..., m    (4.28)

where Ri j is the element of the matrix R. [ ] Constructing the coefficient matrix β = β x β y β z , then β can be obtained by solving for the following regression problem ˜ Fβ −E

(4.29)

Let β ∗ be the maximum likelihood estimate of β, which has the following relationship ) F T R−1 F β ∗ = F T R−1 E

(4.30)

( ) β ∗ = F T R−1 F F T R−1 E

(4.31)

( Then

The maximum likelihood estimation of the variance corresponding to the estimation error is σ2 =

)T ( ) 1( E − Fβ ∗ R−1 E − Fβ ∗ m

(4.32)

Since the matrix R depends on ξ, β* and σ² also depend on ξ. The problem of solving for β can thus be transformed into the optimization of ξ. Let ξ* be the maximum likelihood estimate of ξ; then ξ* should maximize

−(1/2) (m ln σ² + ln |R|)    (4.33)

where |R| is the determinant of the matrix R. Let ψ(ξ) = |R(ξ)|^{1/m} · σ(ξ)²; the above optimization is then equivalent to

arg min_ξ ψ(ξ)    (4.34)

i.e., ξ* should minimize the function ψ(ξ). Then R, β* and σ² can be calculated using the optimized ξ*, and the robot positioning error model based on error similarity is finally established.
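The identification of β*, σ² and ξ* in Eqs. (4.30)–(4.34) can be sketched as follows. For numerical stability this sketch minimizes ln ψ(ξ), which is equivalent since the logarithm is monotone, and optimizes ξ in log space; the small jitter added to R and the choice of optimizer are our own implementation details:

```python
import numpy as np
from scipy.optimize import minimize

def fit_error_model(thetas, errs, xi0):
    """Maximum-likelihood fit of Eqs. (4.30)-(4.34) for one direction.

    thetas : (m, n) normalized joint configurations
    errs   : (m,)   normalized positioning errors in one direction
    xi0    : (n,)   initial guess for the correlation parameters xi
    Returns (xi, beta, sigma2, R).
    """
    m = thetas.shape[0]
    F = np.hstack([np.ones((m, 1)), thetas])     # rows f(theta)^T, Eq. (4.24)

    def build(xi):
        d2 = (thetas[:, None, :] - thetas[None, :, :]) ** 2
        R = np.exp(-(d2 * xi).sum(axis=2))       # Eqs. (4.26), (4.28)
        R = R + 1e-10 * np.eye(m)                # mild numerical regularization
        Ri = np.linalg.inv(R)
        beta = np.linalg.solve(F.T @ Ri @ F, F.T @ Ri @ errs)  # Eq. (4.31)
        resid = errs - F @ beta
        sigma2 = float(resid @ Ri @ resid) / m   # Eq. (4.32)
        return R, beta, sigma2

    def log_psi(log_xi):                         # ln psi(xi), cf. Eq. (4.34)
        R, _, sigma2 = build(np.exp(log_xi))
        _, logdet = np.linalg.slogdet(R)
        return logdet / m + np.log(sigma2 + 1e-300)

    res = minimize(log_psi, np.log(np.asarray(xi0, float)), method="Nelder-Mead")
    xi = np.exp(res.x)
    R, beta, sigma2 = build(xi)
    return xi, beta, sigma2, R
```

The fitted (ξ*, β*, σ², R) fully specify the positioning error model for one Cartesian direction; the procedure is repeated for x, y and z.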


It can be observed from the above analysis and derivation that the robot positioning error mapping method based on error similarity proposed in this section does not rely on any kinematic parameters; the only data required are the measured joint angles and the positioning error of each sample. Therefore, the method can be applied to different types of industrial robots, i.e., it is highly versatile and provides a good foundation for the estimation of the robot positioning error.

4.4.2 Linear Unbiased Optimal Estimation of Robot Positioning Error

To compensate the robot positioning errors, it is also necessary to obtain the positioning error of the point to be compensated in the uncompensated state. Since the robot positioning error has similarity in the joint space, the positioning errors of the points to be compensated can be estimated from the positioning errors of known points that have error similarity with them. The key to this estimation is to find the weight corresponding to each sampling point. In principle, sampling points that are more similar to the point to be compensated should receive larger weights, and less similar sampling points should receive smaller weights.

For any point to be compensated in the robot workspace, assume that its joint value is θ ∈ R^n; then a vector r ∈ R^m representing the correlation between the point to be compensated and each sampling point can be formulated as

r(θ) = [R(ξ, θ^1, θ) ··· R(ξ, θ^m, θ)]^T    (4.35)

The difference between the estimated and actual positioning errors is

$$\begin{aligned}
\hat{e}_\bullet(\theta) - e_\bullet(\theta) &= w^T e_\bullet - e_\bullet(\theta) \\
&= w^T (F\beta + G) - \left[ f(\theta)^T \beta + g_\bullet(\theta) \right] \\
&= \left[ w^T G - g_\bullet(\theta) \right] + \left[ F^T w - f(\theta) \right]^T \beta \\
&= \left( w^T G - g \right) + \left[ F^T w - f(\theta) \right]^T \beta
\end{aligned}$$

(4.36)

where $g = g_\bullet(\theta)$, $w \in \mathbb{R}^m$ is the vector of weights for all samples, and $G \in \mathbb{R}^m$ is the vector of random error components at the sampling points in the $\bullet$ direction:

$$G = \left[\, g_\bullet\left(\theta^1\right) \;\cdots\; g_\bullet\left(\theta^m\right) \,\right]^T = \left[\, g_1 \;\cdots\; g_m \,\right]^T$$

(4.37)

To keep the estimation unbiased, the following equation must be satisfied:

$$F^T w(\theta) - f(\theta) = 0$$

(4.38)

Under this condition, the mean square error (MSE) of the estimation is

$$\begin{aligned}
\varphi(\theta) &= E\left\{ \left[ \hat{e}_\bullet(\theta) - e_\bullet(\theta) \right]^2 \right\} \\
&= E\left[ \left( w^T G - g \right)^2 \right] \\
&= E\left( g^2 \right) + w^T E\left( G G^T \right) w - 2\, w^T E(G g) \\
&= \sigma^2 \left( 1 + w^T R w - 2\, w^T r \right)
\end{aligned}$$

(4.39)

It can be seen that Eq. (4.39) depends on the weight w. The optimal weight should satisfy the following two conditions:

(1) Unbiased condition: the optimal weight w should ensure that the estimation of Eq. (4.36) is unbiased.

(2) Optimal condition: the optimal weight w should minimize the MSE of the estimation of Eq. (4.36).

One can describe this problem mathematically as

$$\arg\min_{w}\; \varphi(\theta) = \sigma^2\left( 1 + w^T R w - 2\, w^T r \right)$$

$$\text{s.t.} \quad F^T w = f$$

(4.40)

This problem can be solved by the Lagrange multiplier method. The Lagrange function of Eq. (4.40) is

$$L(w, \lambda) = \sigma^2\left( 1 + w^T R w - 2\, w^T r \right) - \lambda^T \left( F^T w - f \right)$$

(4.41)

where λ is the Lagrange multiplier. The gradient of Eq. (4.41) w.r.t. the weight w is

$$L'_w(w, \lambda) = 2\sigma^2 (R w - r) - F\lambda$$

(4.42)

According to the first-order necessary condition of the Lagrange multiplier method, letting $L'_w = 0$ gives

$$2\sigma^2 (R w - r) = F\lambda$$

(4.43)

If $\tilde{\lambda} = -\lambda/(2\sigma^2)$ is defined, then Eq. (4.43) can be written in the following matrix form:

$$\begin{bmatrix} R & F \\ F^T & 0 \end{bmatrix} \begin{bmatrix} w \\ \tilde{\lambda} \end{bmatrix} = \begin{bmatrix} r \\ f \end{bmatrix}$$

(4.44)


Solving the above formula, one can have

$$\begin{cases} \tilde{\lambda} = \left( F^T R^{-1} F \right)^{-1} \left( F^T R^{-1} r - f \right) \\ w = R^{-1} \left( r - F \tilde{\lambda} \right) \end{cases}$$

(4.45)

Then the optimal weight w can be obtained. Finally, substituting the obtained optimal weight w and the measured positioning errors $e_\bullet$ corresponding to the sampling points into

$$\hat{e}_\bullet = w^T e_\bullet, \quad \bullet = x, y, z,$$

(4.46)

the error estimation of the points to be compensated, $\hat{e}_\bullet$, can be obtained, where $e_\bullet \in \mathbb{R}^m$ is the vector composed of the positioning errors of all sampling points, and $w \in \mathbb{R}^m$ is the vector composed of the corresponding weights. It can be seen that the estimated positioning error of a point to be compensated is a weighted average of the measured positioning errors of all m sampling points. Compared with positioning error estimation based on the IDW method, the advantages of the technique proposed in this section are:

(1) In the proposed linear unbiased optimal estimation method, the optimal weights are computed separately in each direction of the Cartesian coordinate system, which can reflect the anisotropy of robot positioning errors in space.

(2) The calculation of the optimal weight requires the correlation matrix between the compensation point and the sampling points and among the sampling points, whose computation takes the joint angles of the point to be compensated and of the sampling points as input. This means that the optimal weight depends not only on the positions of the point to be compensated and the sampling points, but also on their poses. It can therefore reflect the similarity of positioning errors in joint space and is less sensitive to changes of robot pose than the IDW method.

(3) The proposed positioning error estimation method imposes no special requirements on the spatial distribution of the sampling points. Whether the sampling points are randomly or uniformly distributed, the measured positioning error data can be used to estimate the positioning error of the point to be compensated.
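The solution of Eqs. (4.44)–(4.46) can be sketched numerically as follows. The function names and the synthetic test data are illustrative; R, F, r and f are assumed to have been built from the correlation model of Sect. 4.4.1:

```python
import numpy as np

def optimal_weights(R, F, r, f):
    # Solve Eq. (4.45): lambda~ = (F^T R^-1 F)^-1 (F^T R^-1 r - f),
    #                   w       = R^-1 (r - F lambda~)
    Rinv_r = np.linalg.solve(R, r)
    Rinv_F = np.linalg.solve(R, F)
    lam = np.linalg.solve(F.T @ Rinv_F, F.T @ Rinv_r - f)
    return Rinv_r - Rinv_F @ lam

def estimate_error(w, e_samples):
    # Eq. (4.46): weighted average of the measured sampling-point errors
    return w @ e_samples
```

Solving two triangular-factorized systems with `np.linalg.solve` avoids forming $R^{-1}$ explicitly, which is numerically preferable when the correlation matrix is ill-conditioned.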

4.4.3 Numerical Simulation and Discussion

This section verifies, through numerical simulation, the feasibility and correctness of the error-similarity-based robot positioning error mapping method and the linear unbiased optimal estimation method of the robot positioning error proposed above.

Fig. 4.10 Flow chart of verification of positioning error mapping and estimation method

The basic idea is to use the robot kinematics error model established in Chap. 2 to simulate the real robot positioning errors, to estimate the positioning errors of the test points using the method proposed here together with the positioning error data of the known sampling points, and then to compare the estimated values with the theoretical ones to analyze the feasibility and correctness of the method proposed in this section. The process of verifying the error-similarity-based robot positioning error mapping method and linear unbiased optimal estimation method is shown in Fig. 4.10, and the specific steps are:

(1) Establish the theoretical kinematics model of the robot based on the theoretical kinematic parameters; then generate each parametric error randomly; next, construct a robot kinematics model containing errors to simulate the real kinematics model of the robot.

(2) Determine the motion range of each joint of the robot, and randomly generate the joint angles corresponding to several sampling points and test points within this range. Enter these values into the robot theoretical kinematics model and the kinematics model with errors established in step (1) to calculate the theoretical and actual positions of the robot end-effector and the positioning errors corresponding to each point. The positioning errors corresponding to the sampling points are used to simulate the real measured errors, and the positioning errors of the test points are regarded as their real errors, which are used for comparative analysis with the estimated values.

(3) On the basis of the error-similarity-based positioning error mapping method, establish the mapping relationship between the joint values and the positioning error values using the randomly generated sampling points in step (2).
(4) Based on the linear unbiased optimal estimation method of the robot positioning error, enter the joint angle values corresponding to each test point into the robot positioning error mapping model established in step (3); then calculate the optimal weight corresponding to each test point and estimate its positioning error. Finally, compare the estimated values with the actual values of the positioning errors to verify the feasibility and correctness of the linear unbiased optimal estimation method.

Similar to Sect. 4.2.3, the KUKA KR 210-2 industrial robot is taken as the research object, and the motion range of each joint of the robot is shown in Table 4.1. One hundred sampling points are randomly generated within the motion range of the robot. Following the above steps, the positioning errors corresponding to these 100 sampling points are obtained, and the mapping relationship between the positioning errors and the joint angles of the sampling points is established using the error-similarity-based positioning error mapping method. To verify the correctness of the positioning error estimation method, 20 test points were randomly selected within the same motion range, and the theoretical and estimated values of their positioning errors were calculated, as shown in Table 4.6. Comparing the theoretical and estimated values of the robot positioning errors in the three directions of the Cartesian coordinate system, the average values of the estimation errors in the x, y and z directions are −0.0059 mm, −0.003 mm and −0.0013 mm, respectively, and the standard deviations are 0.0046 mm, 0.0024 mm and 0.0034 mm, respectively, indicating that the linear unbiased optimal estimation method based on error similarity can accurately estimate the positioning errors of the points to be compensated. To show the simulated results more intuitively, the theoretical and estimated values of the robot positioning errors are compared in Figs. 4.11, 4.12 and 4.13, where the x-coordinate and the y-coordinate are the theoretical and estimated values of the robot positioning errors, respectively. It can be seen from Figs. 4.11, 4.12 and 4.13 that the distribution of the points has a high degree of linearity and all points lie very close to the line y = x, which indicates a high degree of consistency between the estimated and theoretical values of the positioning error after applying the method proposed here.
In summary, the results of the numerical simulation experiments show that, without using robot kinematic parameters, the error mapping method based on error similarity can establish a spatial mapping relationship between the robot positioning errors and the robot joint angles without targeting a specific robot model, so the method has strong versatility. Using the linear unbiased optimal estimation method, the positioning errors of the compensation points can be estimated quickly and accurately, which proves that the method proposed here is feasible and effective and can provide a mathematical basis for the compensation of robot positioning errors.

4.4.4 Error Compensation

The calculation of the error identification model proposed above does not rely on the kinematic parameters, so there is no need to modify the kinematic parameters in the robot control system during error compensation. The target's positioning errors are compensated by modifying the position coordinates in the control commands. After the target's positioning errors are estimated, the position coordinates are modified using the following equations:

$$p'_x(P_0) = p_x(P_0) - \hat{e}_x(P_0)$$

(4.47)

Table 4.6 Simulated results of positioning errors using linear unbiased optimal estimation (mm): for each of the 20 test points, the theoretical values, estimated values, and estimation errors of the positioning error in the x, y and z directions

Fig. 4.11 Theoretical and estimated values of positioning errors in the x-direction

Fig. 4.12 Theoretical and estimated values of positioning errors in the y-direction


Fig. 4.13 Theoretical and estimated values of positioning errors in the z-direction

$$p'_y(P_0) = p_y(P_0) - \hat{e}_y(P_0)$$

(4.48)

$$p'_z(P_0) = p_z(P_0) - \hat{e}_z(P_0)$$

(4.49)

where $p_x(P_0)$, $p_y(P_0)$, and $p_z(P_0)$ are the target's nominal position coordinates along the x-, y- and z-axes, respectively; $p'_x(P_0)$, $p'_y(P_0)$, and $p'_z(P_0)$ are the target's modified position coordinates along the x-, y- and z-axes, respectively. The error compensation is performed by sending $p'_x(P_0)$, $p'_y(P_0)$, and $p'_z(P_0)$ as control commands to the robot.
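A minimal sketch of this command-modification step of Eqs. (4.47)–(4.49); the nominal coordinates and estimated errors below are hypothetical values for illustration:

```python
def compensate(p_nominal, e_hat):
    # Eqs. (4.47)-(4.49): subtract the estimated positioning error from
    # the nominal target coordinates before sending the motion command.
    return tuple(p - e for p, e in zip(p_nominal, e_hat))

# Hypothetical target: nominal position and estimated error (mm)
p0 = (1200.0, -350.5, 780.2)
e_hat = (0.41, -0.22, 0.15)
p_cmd = compensate(p0, e_hat)  # coordinates actually sent to the robot
```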

4.5 Optimal Sampling Based on Error Similarity

The two sampling-point optimization methods for robot error compensation in Chap. 3 mainly use the observability of the robot kinematic parameters as an index to measure the quality of robot sampling points. However, the observability measure depends on the robot's kinematic parameter model, and the purpose of the observability index is to provide a basis for selecting sampling points for kinematic parameter calibration. For non-kinematic-parameter calibration methods such as the error-similarity-based error compensation method proposed in this chapter, the observability index is not applicable, and there are few reports on sampling point optimization methods


for non-kinematic parameter calibration. Therefore, studying sampling point optimization for non-kinematic parameter calibration can effectively improve both the reasonableness of the sampling point distribution for error compensation and the sampling efficiency. In view of the above problems, a multi-objective optimization method for selecting optimal robot sampling points is proposed for the error-similarity-based error compensation. First, the characteristics and mathematical model of the optimal samples are formulated from the engineering application requirements of error compensation methods. Then, a multi-objective optimization method based on the non-dominated sorting genetic algorithm-II (NSGA-II) [5] is proposed to solve the sample point optimization problem.

4.5.1 Mathematical Model of Optimal Sampling Points

To optimize the sampling points for error compensation, the objective function of the sampling point optimization needs to be determined first; that is, the evaluation criteria of the optimal sampling points must be determined and a mathematical model for them established. In practice, due to the transmission errors and backlash of the robot itself, the measurement errors of the equipment, and the random errors caused by environmental factors, no error compensation method can completely eliminate the robot's positioning errors. Therefore, the most direct criterion for assessing the effect of robot error compensation technology is the residual error of the target point after compensation. In engineering applications, the technical indicators that the robot's positioning accuracy must meet are generally specified; only when the residual error of the robot is smaller than this indicator can the effect of error compensation meet the application requirements. Therefore, the optimal sampling points used in robot error compensation must minimize the residual error, which is one of their most important properties. In addition, to keep the residual error after compensation small, the estimated value of the robot positioning error must be sufficiently accurate, which requires a sufficient number of sampling points. However, more sampling points are not always better. According to the experimental analysis of Borm and Menq [6], the robot's residual error cannot be reduced indefinitely by increasing the number of sampling points.
Theoretically, as the number of sampling points increases, the estimated value of the robot positioning error becomes more and more accurate, but the positioning accuracy of the robot can only approach the repeatability and cannot exceed it. In reality, a larger number of sampling points does not necessarily make the estimate more accurate: when the number of sampling points is too large, the measurement time increases accordingly, and due to the thermal drift of the measuring


instrument, the measurement accuracy will be significantly degraded by long-term measurement, which in turn affects the final accuracy of error compensation. Therefore, the number of sampling points is an important indicator for evaluating the quality of sampling points: under the condition that the residual error meets the accuracy requirements, the number of sampling points should be reduced as much as possible to improve sampling efficiency.

From the above analysis and actual engineering application requirements, the characteristics required of the optimal sampling points can be defined as:

(1) The number of sampling points is the smallest.

(2) The optimal sampling points must minimize the sum of the residual errors of all target points after error compensation.

(3) The optimal sampling points must be selected within the given working space.

(4) The optimal sampling points must make the residual error of each target point after compensation satisfy the given accuracy requirement.

Analysis of these four feature descriptions shows that features 1 and 2 are the two evaluation criteria for the optimal sampling points, feature 3 is the natural constraint, and feature 4 is the additional constraint imposed by error compensation technology in practical applications. The four features can therefore be divided into two categories: features 1 and 2 are the goals of sampling point optimization, and features 3 and 4 are the constraints. According to this analysis, the mathematical model of the optimal sampling points can be written as

$$\begin{aligned}
& \min\; f_1 = M \\
& \min\; f_2 = \sum_{i=1}^{N} \left| \Delta P_c^i - \Delta P_u^i \right| \\
& \;\text{s.t.} \quad l_b \le \theta_j \le u_b, \; j = 1, 2, \ldots, M \\
& \qquad\;\; \left| \Delta P_c^i - \Delta P_u^i \right| \le \varepsilon, \; i = 1, 2, \ldots, N
\end{aligned}$$

(4.50)

where M is the number of elements in the set of optimal sampling points; N is the number of robot target points (points to be compensated); $\Delta P_c^i$ is the estimated positioning error of the ith target point; $\Delta P_u^i$ is the original positioning error of the ith target point before compensation; $l_b$ and $u_b$ are the lower and upper limits of each joint angle of the robot, which delimit the scope of the robot's workspace; $\varepsilon$ is the robot positioning accuracy required in practical applications; and $f_1$ and $f_2$ are the two objective functions.

Obviously, this mathematical model is a typical multi-objective optimization problem. Notice that the two objective functions are contradictory: the number of sampling points and the sum of the residual errors after error compensation are negatively correlated, which is a common situation in multi-objective optimization models. Since there is generally no globally optimal solution that minimizes both objective functions at the same time, it is necessary to find the non-inferior solutions of the multi-objective optimization problem.
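For one candidate sampling-point set, the two objectives and the constraint of Eq. (4.50) can be sketched as follows. The residuals are assumed to come from compensating the N target points with the estimator of Sect. 4.4; all names are illustrative:

```python
import numpy as np

def objectives(sample_idx, residuals):
    # Eq. (4.50): f1 = number of sampling points in the candidate set,
    # f2 = sum over the N target points of the residual |dPc_i - dPu_i|
    f1 = len(sample_idx)
    f2 = float(np.sum(residuals))
    return f1, f2

def feasible(residuals, eps):
    # Constraint: every post-compensation residual must satisfy the
    # positioning accuracy requirement eps
    return bool(np.all(residuals <= eps))
```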

4.5.2 Multi-Objective Optimization and Non-Inferior Solution

Multi-objective optimization studies optimization problems in which a vector objective function must be optimized subject to certain constraints [7]. Since the early 1960s, it has been a research hotspot in academia. Although the single-objective optimization problem can be solved well by many classical methods, the multi-objective optimization problem cannot be solved via the optimal solution of a single-objective problem, because the former requires multiple targets to reach the integrated optimum at the same time. However, an optimal solution that satisfies all the objective functions normally cannot be found, due to the contradiction between the objectives. In other words, multi-objective optimization generally cannot find a single global optimal solution, but only a set of balanced solutions. The characteristic of this set of solutions is that it is impossible to further improve one or several objectives without degrading the others. As a result, this solution set is called a non-inferior solution set, also known as the Pareto optimal set [5, 8]. Generally speaking, the mathematical expression of the multi-objective optimization problem is

$$\max/\min\; f(x) = \left[\, f_1(x), f_2(x), \ldots, f_n(x) \,\right] \quad \text{s.t.} \; \begin{cases} g_i(x) \le 0, & i = 1, 2, \ldots, m \\ h_i(x) = 0, & i = 1, 2, \ldots, k \end{cases}$$

(4.51)

where $x = (x_1, x_2, \ldots, x_p)$ is the decision variable vector, $f_i(x)$ are the objective functions, $g_i(x)$ are the inequality constraints, and $h_i(x)$ are the equality constraints. For multi-objective minimization problems, given two arbitrary decision variables $x_u$ and $x_v$, the following three dominance relationships can be defined:

(1) If $f_i(x_u) < f_i(x_v)$ for $\forall i \in \{1, 2, \ldots, n\}$, then $x_v$ is dominated by $x_u$.

(2) If $f_i(x_u) \le f_i(x_v)$ for $\forall i \in \{1, 2, \ldots, n\}$, and there exists at least one j such that $f_j(x_u) < f_j(x_v)$, then $x_v$ is weakly dominated by $x_u$.

(3) If $\exists i \in \{1, 2, \ldots, n\}$ such that $f_i(x_u) < f_i(x_v)$, and $\exists j \in \{1, 2, \ldots, n\}$ such that $f_j(x_u) > f_j(x_v)$, then $x_u$ and $x_v$ do not dominate each other.

According to the dominance relationships defined above, $x_u$ is a non-inferior solution of the multi-objective optimization problem if and only if there is no decision variable $x_v$ dominating $x_u$, i.e.,
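The three dominance relations can be written directly as predicates over objective vectors (for minimization; the function names are illustrative):

```python
def dominates(fu, fv):
    # Relation (1): x_u dominates x_v if f_i(x_u) < f_i(x_v) for every i
    return all(a < b for a, b in zip(fu, fv))

def weakly_dominates(fu, fv):
    # Relation (2): f_i(x_u) <= f_i(x_v) for all i, strictly for some j
    return (all(a <= b for a, b in zip(fu, fv))
            and any(a < b for a, b in zip(fu, fv)))

def incomparable(fu, fv):
    # Relation (3): each vector is strictly better in at least one objective
    return (any(a < b for a, b in zip(fu, fv))
            and any(b < a for a, b in zip(fu, fv)))
```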

Fig. 4.14 Schematic of non-inferior solutions to a multi-objective optimization problem

there exists no decision variable $x_v$ such that

$$\forall i \in \{1, 2, \ldots, n\},\; f_i(x_v) \le f_i(x_u) \;\wedge\; \exists i \in \{1, 2, \ldots, n\},\; f_i(x_v) < f_i(x_u)$$

(4.52)

Therefore, the non-inferior solution is also called a non-dominated solution. A typical set of non-inferior solutions to a multi-objective optimization problem is shown in Fig. 4.14, where the two objective functions $f_1$ and $f_2$ are to be minimized. The dots in the figure represent the objective function values corresponding to different decision variables. Comparing solutions C and F, $f_1(C) < f_1(F)$ and $f_2(C) < f_2(F)$, so solution F is dominated by solution C and is not a non-inferior solution. Comparing solutions D and H, although $f_2(D) = f_2(H)$, $f_1(D) < f_1(H)$, so solution H is weakly dominated by solution D and is not non-inferior. According to the mathematical definition of non-inferior solutions, only solutions A, B, C, D and E in the figure are non-inferior. The ultimate goal of the multi-objective optimization problem is to find these non-inferior solutions. Traditional mathematical programming methods can only perform a serial single-point search and cannot evaluate the pros and cons of multiple targets at the same time. The genetic algorithm, a kind of evolutionary algorithm, optimizes multiple solutions in the entire solution space synchronously through genetic operations on a population, and therefore has higher solving efficiency. Genetic algorithms can thus be effectively applied to multi-objective optimization problems and have become one of the mainstream methods for solving them.
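A non-inferior (Pareto) subset can be extracted by discarding every point that is dominated, weakly or strictly, by another point; the objective values below are hypothetical:

```python
def non_inferior(points):
    # Keep every objective vector that no other vector (weakly or
    # strictly) dominates; all objectives are minimized.
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p)))
            and any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (f1, f2) values for seven candidate solutions
pts = [(1, 9), (2, 6), (3, 4), (5, 4), (4, 2), (7, 2), (8, 1)]
front = non_inferior(pts)
```

Here (5, 4) and (7, 2) are weakly dominated by (3, 4) and (4, 2), respectively, and are removed from the front.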


4.5.3 Genetic Algorithm and NSGA-II

4.5.3.1 Genetic Algorithm

Genetic algorithm (GA) was first proposed in the 1960s by Holland of the University of Michigan and his colleagues while studying cellular automata [9]. It is based on Darwin's theory of biological evolution and Mendel's theory of genetic variation, follows the evolutionary law of survival of the fittest in the biological world, and performs an adaptive heuristic global search. In genetic algorithms, a solution to the optimization problem is called an individual, which is represented by a sequence of variables called a chromosome or gene string; this sequence is typically encoded as a simple string or array. A population is a set made up of a certain number of individuals, and the number of individuals in the population is called the population size. Each individual has a fitness value, obtained by evaluating the fitness function, which indicates the adaptability of the individual to the environment.

Fig. 4.15 Flowchart of genetic algorithm

The basic flow of the genetic algorithm is shown in Fig. 4.15, and the steps are:

(1) Encode and generate the initial population. Determine an appropriate encoding scheme according to the problem to be solved, and randomly generate an initial population composed of N chromosomes $pop_i(t)$:

$$pop(t) = \{\, pop_i(t) \,\}, \quad t = 1, \; i = 1, 2, \ldots, N$$

(4.53)

(2) Calculate the fitness value corresponding to each chromosome $pop_i(t)$ in the population $pop(t)$:

$$f_i = \text{fitness}\left[ pop_i(t) \right]$$

(4.54)

(3) Assess the fitness. Determine whether the algorithm meets the given convergence conditions according to the fitness values calculated in step (2). If the convergence condition is satisfied, terminate the search and output the final result; otherwise, continue with the genetic operations of steps (4)–(6) on the population.

(4) Perform the selection operation. First, calculate the selection probability according to the fitness value of each individual:

$$P_i = \frac{f_i}{\sum_{j=1}^{N} f_j}, \quad i = 1, 2, \ldots, N$$

(4.55)

Then let individuals be inherited to the next generation with the probability calculated by Eq. (4.55), generating the new population

$$newpop(t+1) = \{\, pop_j(t+1), \; j = 1, 2, \ldots, N \,\}$$

(4.56)

(5) Perform the crossover operation. The codes of different individuals are crossed and paired with probability $P_c$ to generate new individuals. Combining these new individuals with the original ones gives a new population, denoted as $crosspop(t+1)$.

(6) Perform the mutation operation. Based on the crossover, let each individual's code mutate with a small probability to obtain a new population, denoted as $mutpop(t+1)$. At this point, one complete round of genetic operations has been performed on the population; the result serves as the parent population in the next iteration of step (2), denoted as $pop(t+1)$.

4.5.3.2 NSGA-II Algorithm

To solve multi-objective optimization problems, the NSGA [10] was proposed by Srinivas and Deb; it can handle multi-objective problems because it replaces the traditional sorting algorithm with a non-dominated sorting algorithm. However, shortcomings such as high computational complexity limited its further development. To address these problems, NSGA-II [5], which adds an elite strategy to NSGA, was proposed by Deb et al. and greatly improved the performance of the NSGA algorithm. The flow chart of the non-dominated sorting genetic algorithm is shown in Fig. 4.16. The main difference between the NSGA algorithm and the basic genetic algorithm is that NSGA incorporates non-dominated sorting and stratification of the population on top of the basic genetic algorithm. The steps of this algorithm are:

(1) Let i = 1. For $\forall j \in \{1, 2, \ldots, N\}$ and $j \ne i$, where N is the population size, compare the dominance relationships between individuals $x_i$ and $x_j$ based on the fitness function.

(2) If there is no individual $x_j$ better than $x_i$, then $x_i$ is marked as a non-dominated individual.

(3) Let i = i + 1 and go back to step (1); terminate the loop once all non-dominated individuals are found.

Let Gen = 0, and initialize population

Front = 1

No Are populations all graded?

Identify non-dominated individuals

Yes Gen = Gen+1

No

Copy according to virtual fitness

Specify virtual fitness value

Crossover

Calculate shared fitness

Mutation

Front = Front+1

Satisfy termination condition?

Yes End

Fig. 4.16 Flowchart of NSGA algorithm


So far, a set containing all non-dominated individuals can be obtained, which is called the first-level non-dominated layer. Following the same procedure, the set of non-dominated individuals obtained by performing non-dominated sorting again on the remaining individuals is called the second-level non-dominated layer. Repeating this procedure yields a non-dominated ranking of all individuals in the population. When performing non-dominated sorting, each level of the population's non-dominated layers is assigned a virtual fitness value to reflect the non-dominated relationship between the levels. The advantage of this approach is that the probability of lower-level non-dominated individuals being inherited to the next generation is increased in the selection operation, and the diversity of individuals on each non-dominated layer is preserved, which makes it easier to confine the search within the optimal range. Suppose there are $n_m$ individuals on the mth non-dominated layer, each with virtual fitness value $f_m$, and let $i, j = 1, 2, \ldots, n_m$; then the steps to specify the virtual fitness value are:

(1) Calculate the normalized Euclidean distance between individuals i and j in the same non-dominated layer:

$$d_{ij} = \sqrt{\sum_{k=1}^{p} \left( \frac{x_k^i - x_k^j}{x_k^{\max} - x_k^{\min}} \right)^2}$$

(4.57)

where p is the number of decision variables, and $x_k^{\max}$ and $x_k^{\min}$ are the upper and lower bounds of the kth decision variable, respectively.

(2) Use the sharing function s to represent the relationship between individual i and the other individuals in the niche group:

$$s(d_{ij}) = \begin{cases} 1 - \left( \dfrac{d_{ij}}{\sigma_{\text{share}}} \right)^{\alpha}, & d_{ij} < \sigma_{\text{share}} \\ 0, & d_{ij} \ge \sigma_{\text{share}} \end{cases}$$

(4.58)

where $\sigma_{\text{share}}$ is the sharing radius and α is a constant.

(3) Let j = j + 1; if $j \le n_m$, return to step (1); otherwise, calculate the niche count of individual i:

$$c_i = \sum_{j=1}^{n_m} s(d_{ij})$$

(4.59)

(4) Calculate the shared fitness value of individual i:

$$f'_m = \frac{f_m}{c_i}$$

(4.60)
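The virtual-fitness sharing computation of Eqs. (4.57)–(4.60) can be sketched as follows; the function and variable names are illustrative, and α is the sharing exponent:

```python
import numpy as np

def share(d, sigma_share, alpha=2.0):
    # Eq. (4.58): sharing function between two individuals
    return 1.0 - (d / sigma_share) ** alpha if d < sigma_share else 0.0

def shared_fitness(X, fm, sigma_share, lb, ub, alpha=2.0):
    # X: (n_m, p) decision variables of one non-dominated layer,
    # fm: the layer's virtual fitness, lb/ub: per-variable bounds.
    n = len(X)
    span = ub - lb
    out = np.empty(n)
    for i in range(n):
        c = 0.0
        for j in range(n):
            # Eq. (4.57): normalized Euclidean distance d_ij
            d = np.sqrt(np.sum(((X[i] - X[j]) / span) ** 2))
            c += share(d, sigma_share, alpha)  # accumulates Eq. (4.59)
        out[i] = fm / c                        # Eq. (4.60)
    return out
```

Because $d_{ii} = 0$ gives $s(d_{ii}) = 1$, the niche count $c_i$ is always at least 1, so the division in Eq. (4.60) is safe.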


Repeat the above steps to get the shared fitness value of each individual.

In actual engineering applications, researchers found that the NSGA algorithm has the following three defects:

(1) The computational complexity is high: the algorithm has a long optimization time and low efficiency, especially when the population and the number of iterations are large.

(2) It lacks an elite strategy. An elite strategy can not only significantly improve the computation speed of a genetic algorithm but also prevent good individuals from being discarded.

(3) The sharing radius $\sigma_{\text{share}}$ must be specified manually.

In response to these defects, the NSGA-II algorithm improves on the NSGA algorithm in three aspects:

(1) A fast, layer-based non-dominated sorting algorithm is proposed, which effectively reduces the complexity of the algorithm and improves its execution efficiency.

(2) The congestion degree (crowding distance) is introduced into the calculation. On the one hand, it is no longer necessary to specify the sharing radius $\sigma_{\text{share}}$ to implement the sharing strategy; on the other hand, the congestion degree allows individuals within the same non-dominated layer to be compared, which preserves the diversity of the population and makes it easier to achieve global optimization.

(3) An elite strategy is introduced. The sampling space is expanded through competition between the parent and child populations, so that the next generation retains the elite of the parent generation, which helps produce a better child population.

The flow chart of the NSGA-II algorithm is shown in Fig. 4.17, and the steps are:

(1) Set the population size to N and initialize the population randomly. Then perform non-dominated sorting on the initial population, and apply selection, crossover, and mutation to the sorted population to generate the first-generation child population.
(2) For the second and subsequent generations, merge the parent and child populations into a whole and perform fast non-dominated sorting on it. While sorting, calculate the degree of congestion of each individual in each non-dominated layer. Then, combining the non-dominance relationships and the degrees of congestion, select appropriate individuals to form a new parent population. (3) Apply selection, crossover, and mutation to the parent population to generate a new child population. (4) Repeat steps (2) and (3) until the convergence condition is satisfied, then terminate the iterative loop of the algorithm. In NSGA-II, the fast non-dominated sorting technique is normally used to reduce the computational complexity. Assume that the population is P, the number of individuals that dominate individual p in the population is n_p, and the set of

4.5 Optimal Sampling Based on Error Similarity



Fig. 4.17 Flowchart of NSGA-II algorithm

individuals dominated by individual p in the population is S_p. The main steps of the fast non-dominated sorting algorithm are: (1) Find all individuals with n_p = 0 in the population and save them in the current set F_1. (2) For each individual i in the current set F_1, let the set of individuals dominated by it be S_i, and traverse each individual q in S_i: let n_q = n_q − 1, and if n_q = 0, save individual q in the set H. (3) Take the individuals obtained in F_1 as the first non-dominated layer, and let H be the current set. (4) Repeat the above steps until the entire population has been graded.
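The grading procedure above can be sketched in Python (a minimal illustration rather than the book's implementation; both objectives are assumed to be minimized):

```python
def fast_non_dominated_sort(objectives):
    """Sort individuals into non-dominated layers (NSGA-II style).

    objectives: list of objective tuples, one per individual, all minimized.
    Returns a list of fronts, each a list of individual indices.
    """
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    n = len(objectives)
    S = [[] for _ in range(n)]   # S[p]: individuals dominated by p
    n_dom = [0] * n              # n_dom[p]: number of individuals dominating p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dominates(objectives[p], objectives[q]):
                S[p].append(q)
            elif dominates(objectives[q], objectives[p]):
                n_dom[p] += 1
        if n_dom[p] == 0:        # p belongs to the first layer F1
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        H = []                   # candidates for the next layer
        for p in fronts[i]:
            for q in S[p]:
                n_dom[q] -= 1
                if n_dom[q] == 0:
                    H.append(q)
        i += 1
        fronts.append(H)
    fronts.pop()                 # drop the trailing empty front
    return fronts
```

Each pass peels off one non-dominated layer, so the whole population is graded in a single double loop plus a linear sweep.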

Fig. 4.18 Schematic diagram of degree of congestion



Another improvement in NSGA-II is the use of the degree of congestion in place of the shared-niche technique. The degree of congestion is an index of each individual in a population: it measures the density of other individuals in the area adjacent to a given individual. Intuitively, it is the size of the largest rectangle around individual n that contains only individual n itself, denoted n_d as shown in Fig. 4.18. Its calculation requires no manually specified parameters, so compared with the shared-niche technique it has greater engineering application value.
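Within one non-dominated layer, the degree of congestion can be computed as in the following sketch (the normalized form of the standard NSGA-II crowding distance; function and variable names are illustrative):

```python
def crowding_distance(objectives):
    """Degree of congestion of each individual within one non-dominated layer.

    objectives: list of objective tuples (f1, f2, ...) for the layer.
    Boundary individuals receive infinite distance so they are always kept.
    """
    n = len(objectives)
    m = len(objectives[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objectives[i][k])
        f_min, f_max = objectives[order[0]][k], objectives[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if f_max == f_min:
            continue            # degenerate objective: no spread to measure
        for j in range(1, n - 1):
            # side length of the surrounding "rectangle" along objective k
            dist[order[j]] += (objectives[order[j + 1]][k]
                               - objectives[order[j - 1]][k]) / (f_max - f_min)
    return dist
```

Larger values mark sparsely populated regions of the front, so selection by congestion keeps the population spread out without any manually chosen sharing radius.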

4.5.4 Multi-objective Optimization of Optimal Sampling Points of Robots Based on NSGA-II

There are two main types of sampling point optimization methods: experimental and theoretical. In the former, the robot positioning accuracy achievable with different sampling points is measured, and the number, positions, and step length of the optimal sampling points are then determined through statistical analysis. The advantage of this method is that the optimal samples are obtained directly from measured data, so it is highly reliable. The disadvantage is equally obvious: statistical analysis requires a large amount of measurement data, which entails an enormous sampling workload before the optimization even begins and thus defeats the original purpose of sampling planning. The second method determines the number and locations of the optimal sampling points before any actual sampling, through analysis and calculation according to certain criteria. Since it can determine the optimal samples in advance, it has more engineering application potential than the first method. However, the existing criteria are mainly based on the observability index of the robot kinematic parameters. As mentioned above, the observability index is only


applicable to robot error compensation methods based on parameter calibration, and therefore has certain limitations. In this section, the two methods above are integrated, and a multi-objective optimization method for robot optimal sampling points based on NSGA-II is proposed. The basic idea is as follows: first, by measuring a small number of positioning errors in the robot workspace, the robot is pre-calibrated to determine its positioning error law; then, the population of the genetic algorithm is composed of multiple sets of sampling points, and the error-similarity-based compensation technique is used to estimate and compensate the error of each target point; finally, with the mathematical model of the optimal sampling point as the criterion, NSGA-II is used for multi-objective optimization to obtain the non-inferior solution set of the optimal sampling points. The basic flow of the algorithm is shown in Fig. 4.19. The steps of the multi-objective optimization method based on NSGA-II are: (1) Pre-calibration. Several sampling points in the robot workspace are selected and their actual positioning errors are measured. These positioning error data are used to calibrate the errors of the robot's kinematic parameters initially, using the earlier calibration method based on the MD-H model. Note that the pre-calibration in this step is only pre-processing for the subsequent optimization of the sampling points; the number of sampling points required should therefore not be large, only enough to roughly capture the actual error state of the robot. Moreover, the kinematic parameter calibration in this step cannot serve as the basis for the final robot error compensation. (2) Initialization of the set of sampling points. A set D of N candidate sampling points is randomly generated in the robot workspace, constituting the search space for the optimal sampling points.
The robot kinematic error model established in step (1) is used to calculate the positioning errors of all points in the set D, completing the initialization of the sampling point set. In the multi-objective optimization algorithm, these positioning errors are regarded as the actual errors of the robot without error compensation, i.e., ∆P_i^u, i = 1, 2, …, N in Eq. (4.50). (3) Generation of the population. Each individual in the population is encoded as an N-dimensional binary vector, i.e., a vector whose elements are 0 or 1, each element corresponding to a candidate point in the set D. The significance of this coding is that if the code of the ith element is 1, the ith point in D is selected as a sampling point for error compensation; otherwise, that point serves as a verification point used to check the compensation effect achievable with the selected sampling points. According to the coding of each individual, a number of sampling points for error compensation are thus selected at random; the resulting sampling point set is denoted S, so S ⊆ D.
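The binary coding of step (3) can be sketched as follows (the candidate coordinates and helper names are hypothetical):

```python
import random

def random_individual(n, rng=random):
    """Binary code of length N: 1 = used as sampling point, 0 = verification point."""
    return [rng.randint(0, 1) for _ in range(n)]

def decode(code, D):
    """Split the candidate set D into the sampling set S and the verification set V."""
    S = [p for bit, p in zip(code, D) if bit == 1]
    V = [p for bit, p in zip(code, D) if bit == 0]
    return S, V

# Example with hypothetical candidate points in the workspace
D = [(100.0 * i, 0.0, 500.0) for i in range(6)]
code = [1, 0, 1, 1, 0, 0]
S, V = decode(code, D)
# S holds candidates 0, 2, 3; the remaining points verify the compensation effect
```

By construction S ⊆ D, and the verification set is simply the complement of S within D.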

Fig. 4.19 Flowchart of multi-objective optimization algorithm for robot optimal sampling based on NSGA-II

(4) Positioning error estimation and residual error calculation. Using the linear unbiased optimal estimation method of robot positioning error based on error similarity proposed above, for each individual the estimated positioning errors of all points in the set D are calculated from the positioning errors of the points in the corresponding sampling point set S. These estimates can be regarded as ∆P_i^c, i = 1, 2, …, N in Eq. (4.50). For each point in the set D, |∆P_i^c − ∆P_i^u|

is calculated, which simulates the residual error of each point after error compensation. (5) Fitness calculation. For each individual, the corresponding fitness is calculated according to the mathematical model of the optimal sampling point described in Eq. (4.50). Since two objectives are to be optimized, each individual has two fitness functions. The fitness function f_1 is the number of elements of the sampling point set S. The fitness function f_2 is the sum of the residual errors computed in step (4); however, owing to the constraint of the accuracy requirement after compensation (constraint 2 in Eq. (4.50)), f_2 is defined as

    f_2 = +∞,  if ∃ P_i ∈ D : |∆P_i^c − ∆P_i^u| > ε;
    f_2 = ∑_{i=1}^{N} |∆P_i^c − ∆P_i^u|,  otherwise.                (4.61)

Whenever the residual error of any point is out of tolerance, the fitness value is set to +∞ to ensure that the individual cannot be inherited by the next generation during non-dominated sorting and selection. Under the premise of satisfying the constraints of the multi-objective optimization, the efficiency of the optimization is thereby improved as much as possible. (6) Updating of the population and the non-inferior solution set. The NSGA-II algorithm performs fast non-dominated sorting, congestion-degree calculation, selection, crossover, and mutation to update the non-inferior solution set of optimal sampling points in the population; the updated population is then used for the next-generation calculation. (7) Iterative optimization. Steps (4)–(6) are repeated until the termination condition is met, yielding the final non-inferior solution set of optimal sampling points. In summary, the NSGA-II-based multi-objective optimization method proposed in this section has two characteristics. First, it is built on the mathematical model of the optimal sampling points of the robot, which takes the final positioning accuracy after error compensation and the number of sampling points as the criteria for evaluating sampling point quality. Compared with the observability index of the robot kinematic parameters, this is more in line with the requirements of engineering applications; moreover, the model places no specific restrictions on the error compensation method, so it is more versatile. Second, the method needs only a small number of robot samples. Compared with sampling point optimization by statistical analysis, it greatly reduces the prior sampling workload, is more in line with the original intention of sampling point optimization, and has better engineering application value.
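The penalized fitness pair of Eq. (4.61) can be sketched as follows (an illustrative Python fragment; `code`, `residuals`, and `eps` are assumed inputs, not the book's notation):

```python
def fitness(code, residuals, eps):
    """Fitness pair of one individual per Eq. (4.50)/(4.61).

    code:      binary vector; 1 marks a selected sampling point.
    residuals: simulated |dP_c - dP_u| for every point of the candidate set D.
    eps:       accuracy requirement after compensation (constraint 2).
    """
    f1 = sum(code)                      # number of sampling points
    if any(r > eps for r in residuals):
        f2 = float("inf")               # out of tolerance: never inherited
    else:
        f2 = sum(residuals)             # total residual error after compensation
    return f1, f2
```

The +∞ branch guarantees that any individual violating the accuracy constraint loses every non-dominated comparison and is discarded during selection.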


4.6 Experimental Verification

4.6.1 Experimental Platform

The layout of the experimental platform is shown in Fig. 4.20 and includes the following components: (1) Industrial robot. The industrial robot is the object of experimental verification and the carrier of the end-effector. It can be installed at a fixed position on the ground or on a mobile platform; however, the mobile platform is static while the positioning error is measured, so the installation position of the industrial robot is regarded as fixed. (2) End-effector. The end-effector is installed on the flange of the industrial robot and is the main working part that performs the specified functions. For the error compensation test, it provides the mounting position for the target ball of the laser tracker. (3) Laser tracker. The laser tracker is the measuring device responsible for detecting the actual positioning errors. Four types of robots are used in the error compensation experiments in this book; their basic parameters and performance indicators are listed in Table 4.7. It should be pointed out that KUKA provides only repeatability, not positioning accuracy. The laser tracker is a high-precision, large-scale measuring instrument in industrial measurement systems. It mainly consists of a laser tracking


Fig. 4.20 Layout of experimental platform


Table 4.7 Basic information of industrial robots used in experiments of this book

                         KUKA KR210 R2700 extra   KUKA KR 500-3   KUKA KR 30 HA   KUKA KR 150-2
DOFs                     6                        6               6               6
Rated load (kg)          210                      500             30              150
Maximum wingspan (mm)    2696                     2825            2033            2700
Repeatability (mm)       ±0.06                    ±0.08           ±0.05           ±0.06

Table 4.8 Laser trackers used in experiments of this book

                              API Radian           API Tracker 3        FARO Vantage E6
Linear measuring range (m)    40                   30                   35
Yaw angle (°)                 ±320                 ±320                 Infinite
Pitch angle (°)               −59 to 79            −60 to 77            −52.1 to 77.9
Accuracy                      ±10 µm or 1 ppm      ±15 µm or 1.5 ppm    16 µm + 0.8 µm/m
measuring head, a controller, an SMR (spherically mounted retroreflector), and a target base. The models and main performance parameters of the laser trackers used in the error compensation experiments in this book are listed in Table 4.8.

4.6.2 Experimental Verification of the Positioning Error Similarity

The KUKA KR210 R2700 extra industrial robot is used to test the similarity of the robot positioning error. The experimental setup is shown in Fig. 4.21, where the measuring device is the API Radian laser tracker. The process of the positioning error similarity test is shown in Fig. 4.22, and the specific steps are as follows. (1) Select a cubic area in the robot workspace and randomly generate N sampling points within it. (2) Use the laser tracker to measure and establish each required coordinate system, and sense the actual positions of the N sampling points generated in step (1) in the robot base frame; then compare them with the theoretical positions to obtain and analyze the positioning error of each sampling point.


Fig. 4.21 Experimental platform based on KUKA KR210 R2700 extra industrial robot


Fig. 4.22 Flowchart for verification of error similarity

(3) Pair the sampling points, then calculate the joint angles corresponding to the pose of each point from the robot's inverse kinematic model; set the amount of segmentation h according to the joint angles, and compute the semivariogram of the positioning error for each pair of sampling points separately. For the point pairs with h ≤ h_max/2, draw the scatter plot of the positioning error semivariogram. (4) Set an appropriate tolerance ∆h and group the sampling points by h ± ∆h, ensuring that each group contains a sufficient number of sampling point pairs; then calculate the semivariogram of each group, and draw a dot-line graph of the mean and standard deviation of the semivariogram. (5) Discuss and analyze the similarity of the robot positioning error. Following these steps, a cuboid area of 665 mm × 1100 mm × 900 mm is planned in the robot workspace, as shown in Fig. 4.23, in which 500 sampling points are randomly generated. Their positional coordinates (x, y, z) are selected at random over the whole cuboid area, and their posture angles (a, b, c) are selected at random within ±15°, ±10°, and ±10°, respectively.
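The pairwise semivariogram computation of steps (3) and (4) can be sketched as follows (a minimal Python illustration that assumes one scalar error component per point and illustrative joint weights ξ_k; Eq. (4.62) below gives the weighted joint-space separation h):

```python
import math
from itertools import combinations

def joint_distance(theta_i, theta_j, xi):
    """Weighted joint-space separation h (cf. Eq. (4.62)); xi weights each axis."""
    return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(xi, theta_i, theta_j)))

def empirical_semivariogram(thetas, errors, xi, bins):
    """Group point pairs by h and average the semivariance 0.5*(e_i - e_j)^2.

    bins: list of (h_lo, h_hi) groups, cf. the tolerance h +/- dh of step (4).
    Returns the mean semivariance of each group (None if a group is empty).
    """
    groups = [[] for _ in bins]
    for i, j in combinations(range(len(thetas)), 2):
        h = joint_distance(thetas[i], thetas[j], xi)
        g = 0.5 * (errors[i] - errors[j]) ** 2
        for k, (lo, hi) in enumerate(bins):
            if lo <= h < hi:
                groups[k].append(g)
                break
    return [sum(g) / len(g) if g else None for g in groups]
```

A rising mean semivariance over the bins indicates that positioning errors of joint configurations far apart are less similar, which is exactly the trend the experiment sets out to verify.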


Fig. 4.23 Schematic diagram of measurement range

Taking the theoretical poses as the NC commands, the robot is controlled to the positions of the random sampling points above, and the laser tracker is used to measure their positioning errors, as shown in Figs. 4.24, 4.25 and 4.26. The three-dimensional coordinates of each point in the figures represent the theoretical position of that sampling point in the robot base frame, and the color represents the magnitude of the positioning error. From Figs. 4.24, 4.25 and 4.26, the individual positioning errors in the robot workspace are analyzed. The probability of blue dots appearing in the vicinity of red dots is small, which indicates that when the positioning error of a single sampling point is large (or small) in a certain direction of the robot base frame, the positioning errors of nearby points in the same direction also tend to be large (or small). Evidently, the positioning error of each sampling point is strongly correlated with its position in the robot base frame: when two points are close to each other, their positioning errors tend to be similar; otherwise, the similarity is not significant. It can also be observed that the distributions of the positioning errors in the x, y, and z directions differ, presenting obvious anisotropy, which reflects the different influence of the rotation angle of each axis on the positioning error in different directions.


Fig. 4.24 Positioning error distribution of sampling points in the x-direction


Fig. 4.25 Positioning error distribution of sampling points in the y-direction


Fig. 4.26 Positioning error distribution of sampling points in the z-direction

To analyze the overall trend of the positioning errors of the sampling points, the cuboid area shown in Fig. 4.23 is divided into a 2 × 4 × 3 grid along the x, y, and z directions, and the positioning errors in each grid cell are analyzed statistically, with the results shown in Figs. 4.27, 4.28 and 4.29. For ease of observation, the three rows of grids in the figures are separated along the z-direction, although they are actually continuous in the Cartesian coordinate system. It is clear from Figs. 4.27, 4.28 and 4.29 that the positioning errors in all three directions of the robot base frame show a relatively remarkable trend: the positioning error in the x direction increases as the z coordinate decreases, the positioning error in the y direction increases as the y coordinate increases, and the positioning error in the z direction increases as the z coordinate decreases. As with the individual points, the average positioning errors of grid cells a short distance apart are more similar than those of cells far apart, and the distribution of positioning errors in all directions shows obvious anisotropy. Statistical analysis is performed on the measured positioning error data using the semivariogram. From the previous analysis, when determining the amount of segmentation h, the different effects of the rotation angle of each axis on the final positioning error can be considered. Hence, h can be determined according to Eq. (4.26) as


Fig. 4.27 Comparison of positioning errors in the x-direction by region


Fig. 4.28 Comparison of positioning errors in the y-direction by region



Fig. 4.29 Comparison of positioning errors in the z-direction by region


    h = √( ∑_{k=1}^{n} ξ_k (θ_k^i − θ_k^j)² ),   θ^i, θ^j ∈ R^n        (4.62)

where ξ_k can be calculated by Eq. (4.34). Based on the semivariogram results, the sampling point pairs with h ≤ h_max/2 are analyzed, and the resulting positioning error semivariograms are shown in Figs. 4.30, 4.31 and 4.32. It can be seen that there is significant similarity among the positioning errors of the samples in the joint space. The deviation of the positioning errors is

Fig. 4.30 Scatter plot of semivariogram of positioning errors in the x-direction


Fig. 4.31 Scatter plot of semivariogram of positioning errors in the y-direction


Fig. 4.32 Scatter plot of semivariogram of positioning errors in the z-direction


generally small when the amount of segmentation h between two samples is small. As h increases in the joint space, the deviation of the positioning errors tends to increase, indicating that the probability of the positioning errors being similar decreases. This result is consistent with the qualitative analysis and the simulation. The data in Figs. 4.30, 4.31 and 4.32 are divided into 10 groups according to the size of h, and the mean and standard deviation of the positioning error semivariogram in each group are obtained, as shown in Figs. 4.33 and 4.34. The overall similarity of the robot positioning error decreases as the amount of segmentation of the joint angle increases. In addition, the standard deviation of the semivariogram increases with the amount of segmentation, which shows that as h grows, the randomness of the positioning error gradually increases while the similarity gradually declines. In summary, the experimental results prove that the positioning error of the industrial robot exhibits similarity in the joint space, and that the semivariogram can effectively quantify that similarity.


Fig. 4.33 Mean value of semivariogram of positioning errors after grouping


Fig. 4.34 Standard deviation of semivariogram of positioning errors after grouping


4.6.3 Experimental Verification of Error Compensation Based on Inverse Distance Weighting and Error Similarity

4.6.3.1 Error Compensation in a Wide Area

To verify the proposed robot error compensation method based on inverse distance weighting and error similarity, a wide-area error compensation experiment is carried out on the KUKA KR150-2 robot under no load. With the robot at its mechanical zero position, the wide-area workspace of the robot is divided into 209 cubic grids with a step size of 300 mm, as shown in Fig. 4.35. A laser tracker measures the attained position as the robot is driven successively to all the vertices of the divided cubic grids. After the positioning data of all grid vertices are collected, a target point is randomly selected in each cubic grid for verification. Finally, the positioning errors of all verification points are analyzed statistically to verify the proposed error compensation method. The experimental process can be summarized as follows.


Fig. 4.35 Schematic diagram of division of spatial grid

(1) Offline programming. Within the determined robot envelope space, the spatial grid is divided with the 300 mm step size. The theoretical positioning coordinates of the vertices of each cubic grid are determined, and the robot positioning offline program is composed by combining them with the specified target posture. (2) Data acquisition. The robot is controlled to the positions determined by the offline program, and the laser tracker measures the actual positioning coordinates of each grid vertex. (3) Error compensation. In each cubic grid, a target point is selected randomly for verification. For each verification point, the robot error compensation method based on inverse distance weighting and error similarity is used to correct the theoretical coordinates, and the corrected coordinates are used to control the robot to the desired position. Finally, the actual coordinates of the robot are compared with the theoretical ones to evaluate the effect of the error compensation method. The positioning data of the grid vertices are collected and verified at a room temperature of 25 °C, with the robot moving at 50% of its maximum speed. The experimental results are shown in Fig. 4.36, whose 5 sub-pictures correspond to the partition planes in Fig. 4.35: the 63 grids contained in the left-most partition plane in Fig. 4.35 correspond to the 63 points in the first sub-picture of Fig. 4.36, and so on. The results show that the average and maximum positioning errors of the 209 randomly selected target points after compensation are 0.156 and 0.386 mm, an improvement of an order of magnitude over the positioning error of 1–3 mm before compensation. This proves that the proposed robot error compensation method based on the inverse


distance weighting and the error similarity can effectively improve the positioning accuracy of the robot.


Fig. 4.36 Error compensation results using inverse distance weighting and error similarity in a wide area
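The error estimate used in step (3) above can be sketched as follows (a minimal illustration assuming the standard inverse-distance weights 1/d^p with an assumed exponent p = 2; the chapter's exact weighting may differ):

```python
def idw_error_estimate(target, vertices, errors, power=2.0):
    """Estimate the positioning error at a target point as the inverse-distance
    weighted average of the measured errors at nearby grid vertices.

    vertices: list of (x, y, z) grid-vertex positions.
    errors:   measured error vectors (ex, ey, ez) at those vertices.
    power:    distance exponent (2 is a common, assumed choice).
    """
    weights = []
    for v in vertices:
        d = sum((a - b) ** 2 for a, b in zip(target, v)) ** 0.5
        if d == 0.0:                       # target coincides with a vertex
            return errors[vertices.index(v)]
        weights.append(1.0 / d ** power)
    s = sum(weights)
    return tuple(sum(w * e[k] for w, e in zip(weights, errors)) / s
                 for k in range(3))

def compensate(target, vertices, errors):
    """Corrected NC command: theoretical target minus the estimated error."""
    e = idw_error_estimate(target, vertices, errors)
    return tuple(t - ei for t, ei in zip(target, e))
```

For a verification point inside a grid cell, only the measured errors at that cell's vertices would typically be passed in, so nearby vertices dominate the weighted average.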

4.6.3.2 Error Compensation in a Given Area

In the last section, the positioning errors over a wide area of the robot were compensated using inverse distance weighting and error similarity. The following evaluates the positioning accuracy of the robot in a given work area. The experimental steps are: (1) Divide the given area into cubic grids and measure the actual positioning coordinates of the grid vertices. (2) Calculate the theoretical coordinates of the positioning points. According to the given area to be processed, the theoretical positioning coordinates of points P1 to P5 are calculated. C1-C2-C7-C8 is selected as the plane used for the pose test, in which points P1 to P5 are selected. (3) Error compensation and measurement. The proposed error compensation method is used to correct the theoretical coordinates of the positioning points, which are then used to control the robot to the target position, and the actual positioning coordinates are measured. (4) Cyclic measurement. Repeat step (3) 30 times for the selected points P1 to P5. (5) Calculate positioning accuracy and repeatability. From the 30 measured sets of actual data, the positioning accuracy and repeatability of points P1 to P5 are calculated. Following these steps, the positioning accuracy after compensation for points P1 to P5 is shown in Table 4.9. From the compensated positional coordinates in Table 4.9, the positioning error is obtained as shown in Table 4.10 and Fig. 4.37, where APx, APy, and APz represent the x, y, and z components of the robot positioning accuracy A_Pp = √(APx² + APy² + APz²). The results after compensation show that the maximum, minimum, and average positioning errors of the five points are 0.2511 mm, 0.2182 mm, and 0.2274 mm, respectively, nearly an order of magnitude better than the positioning error of 1–3 mm before compensation.

Table 4.9 Average of positional coordinates after compensation in a given area (mm)

Serial number   Theoretical values (x, y, z)   Actual values (x, y, z)
1               −70, −2250, 590                −69.8793, −2250.1, 589.8443
2               170, −2250, 590                170.1149, −2250.16, 589.8485
3               170, −1050, 1310               169.8523, −1050.11, 1310.126
4               −70, −1050, 1310               −70.1497, −1050.11, 1310.114
5               50, −1650, 950                 49.90174, −1650.14, 949.855


Table 4.10 Compensated errors in a given area (mm)

Nos   APx       APy       APz       APp
1     0.1206    −0.0959   −0.1556   0.2190
2     0.1148    −0.1640   −0.1515   0.2511
3     −0.1477   −0.1105   0.1262    0.2235
4     −0.1496   −0.1108   0.1137    0.2182
5     −0.0982   −0.1427   −0.1449   0.2250
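As a quick consistency check, the resultant column A_Pp of Table 4.10 can be recomputed from its components (values taken from the table):

```python
import math

def resultant(apx, apy, apz):
    """Resultant positioning error A_Pp = sqrt(APx^2 + APy^2 + APz^2)."""
    return math.sqrt(apx ** 2 + apy ** 2 + apz ** 2)

# Rows 1 and 3 of Table 4.10: the components reproduce the tabulated A_Pp
assert abs(resultant(0.1206, -0.0959, -0.1556) - 0.2190) < 5e-4
assert abs(resultant(-0.1477, -0.1105, 0.1262) - 0.2235) < 5e-4
```

The small residual differences are rounding in the tabulated components.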


Fig. 4.37 Positioning error after compensation in a given area


4.6.4 Experimental Verification of Error Compensation Based on Linear Unbiased Optimal Estimation and Error Similarity

To verify the feasibility of the error compensation method based on linear unbiased optimal estimation and error similarity, the positioning errors of 150 random verification points (distinct from the samples) are measured in a 1.2 m × 1.2 m × 1.2 m space, both before and after error compensation. The positioning errors of the verification points along the x, y, and z axes are shown in Fig. 4.38. The experimental results show that the positioning errors in the z direction are clearly concentrated on the positive side, while those in the x and y directions are distributed around zero, because the robot base frame constructed by rotating and fitting is more accurate in the x and y directions than in the z direction. This phenomenon shows the influence of the errors between the constructed and theoretical base frames on the robot positioning errors. Additionally, the positioning errors in the x direction are much larger than those in the y direction, mainly because the errors of the robot's kinematic parameters affect the positioning errors differently in different directions.


Fig. 4.38 Positioning errors with and without compensation: a x direction; b y direction; c z direction

After the error compensation based on linear unbiased optimal estimation and error similarity is performed, the positioning errors of the robot end-effector in the x, y, and z directions are all confined to ±0.3 mm. The result shows that the proposed method can improve the robot positioning accuracy without calibrating the errors of the base frame. The statistics of the positioning errors of the robot end-effector in the x, y, and z directions are given in Table 4.11, and the corresponding frequency distributions are shown in Fig. 4.39. The range and amplitude of fluctuation of the positioning error in the x direction are obviously reduced after compensation. There is no significant change in the mean value and standard deviation of the positioning errors in the y direction, because the accuracy there is already close to the open-loop positioning accuracy of the robot, so the compensation effect is not obvious; even so, the fluctuation range in the y direction is reduced after compensation. The mean value of the positioning errors in the z direction is significantly reduced to about zero after compensation. Figure 4.39d shows the positioning error of the robot end-effector measured before and after compensation. According to the experimental results, the maximum positioning error of the robot end-effector is reduced by 75.36%, from

4.6 Experimental Verification


Table 4.11 Error results by compensation using linear unbiased optimal estimation and error similarity (mm)

Direction  Method                 Error range          Mean value   Standard deviation
APx        Before compensation    [−0.9882, 0.7476]    −0.0232      0.3932
APx        After compensation     [−0.1189, 0.1553]     0.0232      0.0451
APy        Before compensation    [−0.3236, 0.3380]     0.0062      0.0996
APy        After compensation     [−0.2382, 0.2751]     0.0073      0.1092
APz        Before compensation    [0.4528, 0.9797]      0.7978      0.1208
APz        After compensation     [−0.1859, 0.1959]    −0.0088      0.0720

1.2912 to 0.3182 mm, indicating that the proposed positioning error compensation method can meet the accuracy requirements of most robot applications.
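The statistics reported in Table 4.11 (error range, mean, standard deviation) and the percentage reduction of the maximum positioning error follow from elementary statistics over the measured errors; a minimal sketch (the sample values and function names below are illustrative, not the book's measurements):

```python
# Illustrative computation of the quantities in Table 4.11 and the reported
# maximum-error reduction. The sample data are made up.
import math

def error_stats(errors):
    """Return (min, max), mean, and sample standard deviation of a list of errors."""
    n = len(errors)
    mean = sum(errors) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / (n - 1))
    return (min(errors), max(errors)), mean, std

def max_error_reduction(before, after):
    """Percentage reduction of the maximum absolute positioning error."""
    b = max(abs(e) for e in before)
    a = max(abs(e) for e in after)
    return 100.0 * (b - a) / b

before = [0.9, -0.4, 0.7, 1.2, -0.2]    # hypothetical errors before compensation (mm)
after = [0.1, -0.05, 0.2, -0.15, 0.08]  # hypothetical errors after compensation (mm)
rng, mean, std = error_stats(after)
reduction = max_error_reduction(before, after)
```

Applied to the book's reported maxima (1.2912 mm before, 0.3182 mm after), this reproduces the stated 75.36% reduction.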

Fig. 4.39 Frequency distribution of positioning errors of the robot end-effector: a x direction; b y direction; c z direction. d Positioning error


4.7 Summary

In this chapter, the positioning error similarity was proposed based on an analysis of the robot kinematic model. The spatial similarity of the positioning errors of industrial robots was analyzed qualitatively and quantitatively, showing significant spatial similarity in the robot joint space. An error modeling method was then proposed based on this spatial similarity, together with a linear unbiased optimal estimation method of the positioning errors built on the error model. Experimental results show that the proposed method can effectively compensate for the robot positioning errors. Since the robot positioning errors are treated as spatial data in this study, the error compensation does not rely on any particular kinematic model, so the proposed method has good generality. Moreover, the proposed method can be used to calibrate robots with a closed control system, since the error compensation does not require modifying the robot control system.

References

1. Tobler WR. A computer movie simulating urban growth in the Detroit region. Econ Geogr. 1970;46(sup1):234–40.
2. Shiakolas P, Conrad K, Yih T. On the accuracy, repeatability, and degree of influence of kinematics parameters for industrial robots. Int J Model Simul. 2002;22(4):245–54.
3. Gneiting T, Sasvári Z, Schlather M. Analogies and correspondences between variograms and covariance functions. Adv Appl Probab. 2001;33(3):617–30.
4. Jian X, Olea RA, et al. Semivariogram modeling by weighted least squares. Comput Geosci. 1996.
5. Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput. 2002;6(2):182–97.
6. Borm J-H, Menq C-H. Determination of optimal measurement configurations for robot calibration based on observability measure. Int J Robot Res. 1991;10(1):51–63. https://doi.org/10.1177/027836499101000106.
7. Hillermeier C. Nonlinear multiobjective optimization: a generalized homotopy approach. Birkhäuser Verlag; 2001.
8. Bechikh S, Kessentini M, Said LB, Ghédira K. Chapter four—Preference incorporation in evolutionary multiobjective optimization: a survey of the state-of-the-art. In: Hurson AR, editor. Advances in computers. Elsevier; 2015. p. 141–207.
9. Holland JH. Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. MIT Press; 1992.
10. Srinivas N, Deb K. Multiobjective optimization using nondominated sorting in genetic algorithms. Evol Comput. 1994;2(3):221–48.

Chapter 5

Joint Space Closed-Loop Feedback

5.1 Introduction

The offline calibration methods introduced in the previous chapters achieve error compensation with strong versatility and practicability. However, they rely on the robot's repeatability. In fact, the robot's unidirectional repeatability is fairly high while its multidirectional repeatability is poor, so offline calibration alone cannot further improve the robot's accuracy. The joint closed-loop feedback correction method can not only maintain high positioning accuracy but also has low requirements on the site environment, so it is particularly suited to the application requirements of robots in high-precision operations. In this chapter, a positioning error compensation method using a feedforward compensation loop and a feedback control loop is proposed to improve the accuracy of the industrial robot, synthesizing the advantages of the offline calibration and online correction approaches. In Sect. 5.2, a Chebyshev polynomial model is constructed to estimate the positioning error. Section 5.3 analyzes the influence of the joint backlash on the positioning accuracy. Section 5.4 proposes an error compensation strategy combining feedforward and feedback control loops. Section 5.5 verifies the proposed accuracy improvement method experimentally. Concluding remarks are given in Sect. 5.6.

5.2 Positioning Error Estimation

5.2.1 Error Estimation Model of Chebyshev Polynomial

Generally, a polynomial fitting method can be used to characterize the relationship between the positioning error and the joint angles. One can then predict the positioning error at any target point and complete the error compensation. In this procedure, the key is to accurately establish the relationship between the geometric and non-geometric errors and the joint angles.

© Science Press 2023
W. Liao et al., Error Compensation for Industrial Robots, https://doi.org/10.1007/978-981-19-6168-7_5


To address this problem, this section proposes an error estimation model using the Chebyshev polynomial, which can mathematically depict the geometric and non-geometric errors of the robot, to obtain the real robot kinematic model reflecting the distribution characteristics of the positioning errors. It can be seen from Chap. 2 that the nominal kinematic model of the robot can be expressed as

$$F_n(\theta) = {}^{0}T_1(\theta_1)\,{}^{1}T_2(\theta_2)\cdots{}^{n-1}T_n(\theta_n) = \begin{bmatrix} R_t & P_t \\ 0 & 1 \end{bmatrix} \tag{5.1}$$

where n is the number of DOFs of the robot; $P_t$ is the nominal position of the robot's end-effector; $R_t$ is the nominal posture rotation matrix of the robot's end-effector. Based on the nominal kinematic model, the actual transformation between two consecutive joint frames can be considered as Eq. (5.2) by introducing the error transfer matrix shown in Fig. 5.1:

$${}^{i}\overset{\Delta}{T}_{i+1}(\theta_i) = E_{u,i}(\theta_i)\,{}^{i}T_{i+1}(\theta_i)\,T_{\mathrm{link},i} \tag{5.2}$$

where $E_{u,i}(\theta_i)$ is an error term related to the joint angle $\theta_i$, and $T_{\mathrm{link},i}$ is a constant error matrix corresponding to link i. Then the actual kinematic model of the robot can be obtained by using Eq. (5.2) as

$$F_a(\theta) = {}^{0}\overset{\Delta}{T}_1(\theta_1)\,{}^{1}\overset{\Delta}{T}_2(\theta_2)\cdots{}^{n-1}\overset{\Delta}{T}_n(\theta_n) = E_{u,1}(\theta_1)\,{}^{0}T_1(\theta_1)\,T_{\mathrm{link},1}\,E_{u,2}(\theta_2)\,{}^{1}T_2(\theta_2)\,T_{\mathrm{link},2}\cdots E_{u,n}(\theta_n)\,{}^{n-1}T_n(\theta_n)\,T_{\mathrm{link},n} \tag{5.3}$$

Fig. 5.1 Schematic diagram of the error estimation model (base frame, nominal link frame {i}, actual link frame {i} related by the error transfer matrix $E_i$, and measurement frame)


Let

$$E_i(\theta_i) = T_{\mathrm{link},i-1}\, E_{u,i}(\theta_i). \tag{5.4}$$

Equation (5.3) is then simplified to

$$F_a(\theta) = E_1(\theta_1)\,{}^{0}T_1(\theta_1)\,E_2(\theta_2)\,{}^{1}T_2(\theta_2)\cdots E_n(\theta_n)\,{}^{n-1}T_n(\theta_n), \tag{5.5}$$

where $E_i(\theta_i)$ is the error transfer matrix, which describes the influence of common error sources on the spatial transformation of consecutive joints. This error can be regarded as the differential transformation caused by the geometric and non-geometric errors:

$$E_i(\theta_i) = \begin{bmatrix} 1 & -\varepsilon_{i,z}(\theta_i) & \varepsilon_{i,y}(\theta_i) & \delta_{i,x}(\theta_i) \\ \varepsilon_{i,z}(\theta_i) & 1 & -\varepsilon_{i,x}(\theta_i) & \delta_{i,y}(\theta_i) \\ -\varepsilon_{i,y}(\theta_i) & \varepsilon_{i,x}(\theta_i) & 1 & \delta_{i,z}(\theta_i) \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{5.6}$$

where $\varepsilon_{i,x}$, $\varepsilon_{i,y}$, and $\varepsilon_{i,z}$ are the differential rotations of frame {i} around the x, y, and z axes of frame {i − 1}, respectively; $\delta_{i,x}$, $\delta_{i,y}$, and $\delta_{i,z}$ are the differential translations of frame {i} along the x, y, and z axes of frame {i − 1}, respectively. The Chebyshev polynomial is an orthogonal polynomial defined on the interval [−1, 1] as

$$c_j(x) = \cos(j \arccos x), \quad -1 \le x \le 1 \tag{5.7}$$

where j is the order of the polynomial. Each differential transformation in the error transfer matrix can be written in the form of a Chebyshev series as

$$\varepsilon_{i,k}(\tilde{\theta}_i) = \sum_{j=0}^{m} \lambda^{(i)}_{j,k}\, c_j(\tilde{\theta}_i), \quad \delta_{i,k}(\tilde{\theta}_i) = \sum_{j=0}^{m} \gamma^{(i)}_{j,k}\, c_j(\tilde{\theta}_i), \quad i = 1, \ldots, 6,\; k = x, y, z \tag{5.8}$$

where $\lambda^{(i)}_{j,k}$ and $\gamma^{(i)}_{j,k}$ are the Chebyshev coefficients, m is the order of the polynomial, and $\tilde{\theta}_i$ is the normalized value of the joint angle $\theta_i$ in the form of

$$\tilde{\theta}_i = \frac{2(\theta_i - \theta_{i,\min})}{\theta_{i,\max} - \theta_{i,\min}} - 1 \tag{5.9}$$


with $\theta_{i,\min}$ and $\theta_{i,\max}$ being the minimum and maximum values of each joint angle, respectively. The zero-order term in Eq. (5.8) is not a function of the joint angle and can be regarded as a geometric error, such as a kinematic parametric error. The higher-order terms are mapping functions of the joint angle, which can depict the non-geometric errors, e.g., the transmission ratio error, the joint flexibility, etc. Hence, the Chebyshev polynomial can simultaneously model the geometric and non-geometric errors of the robot. Substituting Eqs. (5.6) and (5.8) into Eq. (5.5) yields the Chebyshev polynomial error estimation model

$$F_a(\theta) = E_1(\tilde{\theta}_1)\,{}^{0}T_1(\theta_1)\,E_2(\tilde{\theta}_2)\,{}^{1}T_2(\theta_2)\cdots E_6(\tilde{\theta}_6)\,{}^{5}T_6(\theta_6) = \begin{bmatrix} R_a & P_a \\ 0 & 1 \end{bmatrix} \tag{5.10}$$

where $P_a$ and $R_a$ are the actual position and rotation matrix of the robot's end-effector, respectively. Since the actual position of the robot's end-effector can be measured directly by a displacement sensor, e.g., a laser tracker, the positioning error $\Delta P_e$ can be obtained by comparison with the theoretical position as

$$\Delta P_e(\theta) = P_a - P_t = F_a(\theta)\, L_{6t} - P_t \tag{5.11}$$

where $L_{6t} = [\,L_x\; L_y\; L_z\; 1\,]^T$ is the actual position of the tool frame relative to link frame {6}. Computing the partial derivative of Eq. (5.11) w.r.t. the Chebyshev coefficients yields

$$\Delta P_e = \sum_{i=1}^{6} \frac{\partial P_a}{\partial \lambda_i}\,\Delta\lambda_i + \sum_{i=1}^{6} \frac{\partial P_a}{\partial \gamma_i}\,\Delta\gamma_i = \sum_{i=1}^{6} J_{\lambda_i}\,\Delta\lambda_i + \sum_{i=1}^{6} J_{\gamma_i}\,\Delta\gamma_i, \tag{5.12}$$

where

$$\Delta\lambda_i = \left[\lambda^{(i)}_{0,x}, \ldots, \lambda^{(i)}_{m,x}, \lambda^{(i)}_{0,y}, \ldots, \lambda^{(i)}_{m,y}, \lambda^{(i)}_{0,z}, \ldots, \lambda^{(i)}_{m,z}\right]^T \tag{5.13}$$

$$\Delta\gamma_i = \left[\gamma^{(i)}_{0,x}, \ldots, \gamma^{(i)}_{m,x}, \gamma^{(i)}_{0,y}, \ldots, \gamma^{(i)}_{m,y}, \gamma^{(i)}_{0,z}, \ldots, \gamma^{(i)}_{m,z}\right]^T \tag{5.14}$$

are the Chebyshev coefficient vectors to be identified, and

$$J_{\lambda_i} = \left[\frac{\partial P_a}{\partial \lambda^{(i)}_{0,x}} \cdots \frac{\partial P_a}{\partial \lambda^{(i)}_{m,x}}\;\; \frac{\partial P_a}{\partial \lambda^{(i)}_{0,y}} \cdots \frac{\partial P_a}{\partial \lambda^{(i)}_{m,y}}\;\; \frac{\partial P_a}{\partial \lambda^{(i)}_{0,z}} \cdots \frac{\partial P_a}{\partial \lambda^{(i)}_{m,z}}\right] \tag{5.15}$$

$$J_{\gamma_i} = \left[\frac{\partial P_a}{\partial \gamma^{(i)}_{0,x}} \cdots \frac{\partial P_a}{\partial \gamma^{(i)}_{m,x}}\;\; \frac{\partial P_a}{\partial \gamma^{(i)}_{0,y}} \cdots \frac{\partial P_a}{\partial \gamma^{(i)}_{m,y}}\;\; \frac{\partial P_a}{\partial \gamma^{(i)}_{0,z}} \cdots \frac{\partial P_a}{\partial \gamma^{(i)}_{m,z}}\right] \tag{5.16}$$


are the 3 × (3m + 3) Chebyshev coefficient matrices. Thus, once the positioning errors of s groups of sampling points are measured and substituted into Eq. (5.12), one obtains

$$\begin{bmatrix} \Delta P_e^{(1)} \\ \Delta P_e^{(2)} \\ \vdots \\ \Delta P_e^{(s)} \end{bmatrix} = \begin{bmatrix} J_{\lambda_1}^{(1)} & J_{\gamma_1}^{(1)} & \cdots & J_{\lambda_6}^{(1)} & J_{\gamma_6}^{(1)} \\ J_{\lambda_1}^{(2)} & J_{\gamma_1}^{(2)} & \cdots & J_{\lambda_6}^{(2)} & J_{\gamma_6}^{(2)} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ J_{\lambda_1}^{(s)} & J_{\gamma_1}^{(s)} & \cdots & J_{\lambda_6}^{(s)} & J_{\gamma_6}^{(s)} \end{bmatrix} \begin{bmatrix} \Delta\lambda_1 \\ \Delta\gamma_1 \\ \vdots \\ \Delta\lambda_6 \\ \Delta\gamma_6 \end{bmatrix} \tag{5.17}$$

which can be further simplified as

$$\Delta P = J(x)\,\Delta x \tag{5.18}$$

where $\Delta x$ is a vector composed of all the Chebyshev coefficients.
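The building blocks of this model — the Chebyshev basis of Eq. (5.7), the normalization of Eq. (5.9), the differential terms of Eq. (5.8), and the error transfer matrix of Eq. (5.6) — can be sketched in Python as follows (a minimal sketch; all function names are illustrative, not the book's implementation):

```python
import math

def chebyshev(j, x):
    """First-kind Chebyshev polynomial c_j(x) = cos(j * arccos x), Eq. (5.7)."""
    return math.cos(j * math.acos(x))

def normalize(theta, theta_min, theta_max):
    """Map a joint angle into [-1, 1], Eq. (5.9)."""
    return 2.0 * (theta - theta_min) / (theta_max - theta_min) - 1.0

def cheb_series(coeffs, theta_n):
    """Evaluate a differential term eps_{i,k} or delta_{i,k} of Eq. (5.8)
    from its Chebyshev coefficients at a normalized joint angle."""
    return sum(c * chebyshev(j, theta_n) for j, c in enumerate(coeffs))

def error_transfer_matrix(eps, delta):
    """Assemble the 4x4 error transfer matrix E_i of Eq. (5.6) from the
    differential rotations eps = (ex, ey, ez) and translations delta = (dx, dy, dz)."""
    ex, ey, ez = eps
    dx, dy, dz = delta
    return [[1.0, -ez,  ey, dx],
            [ ez, 1.0, -ex, dy],
            [-ey,  ex, 1.0, dz],
            [0.0, 0.0, 0.0, 1.0]]
```

With zero coefficients, `error_transfer_matrix` reduces to the identity, i.e., the nominal model of Eq. (5.1) is recovered.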

5.2.2 Identification of Chebyshev Coefficients

The identification of the Chebyshev coefficients is a typical regression problem. To solve it, an improved L-M algorithm [1] is used in this section. The iterative process can be divided into the following steps:

(1) Initialize the iteration number k, the convergence accuracy ϕ, the optimization factor $v_1$, and the convergence parameters w, $P_0$, $P_1$, and $P_2$, with $0 < P_0 < P_1 < P_2 < 1$ and $v_1 > w > 0$ satisfied.

(2) Calculate the Jacobian matrix $J(x_k)$ and the positioning error $\Delta P(x_k)$ by Eq. (5.17).

(3) Introduce the adaptive damping factor $\mu_k$ to get the kth iteration step of the Chebyshev coefficients:

$$\begin{cases} d_k^1 = -\left[J^T(x_k) J(x_k) + \mu_k I\right]^{-1} J^T(x_k)\,\Delta P(x_k) \\ \mu_k = \upsilon_k \lVert \Delta P(x_k) \rVert^2 \end{cases} \tag{5.19}$$

where I is the identity matrix and $x_k$ is the Chebyshev coefficient vector at the kth iteration.

(4) Calculate the update point $y_k$, the approximate iterative step $d_k^2$, the update point $z_k$, and the approximate iterative step $d_k^3$ by

$$\begin{cases} y_k = x_k + d_k^1 \\ d_k^2 = -\left[J^T(x_k) J(x_k) + \mu_k I\right]^{-1} J^T(x_k)\,\Delta P(y_k) \end{cases} \tag{5.20}$$

$$\begin{cases} z_k = y_k + d_k^2 \\ d_k^3 = -\left[J^T(x_k) J(x_k) + \mu_k I\right]^{-1} J^T(x_k)\,\Delta P(z_k) \end{cases} \tag{5.21}$$

(5) Define the total step length of the iteration as

$$s_k = d_k^1 + d_k^2 + d_k^3 \tag{5.22}$$

(6) Update the iterative coefficient $r_k$:

$$r_k = A_k / P_k \tag{5.23}$$

with

$$\begin{cases} A_k = \lVert \Delta P(x_k) \rVert^2 - \lVert \Delta P(x_k + s_k) \rVert^2 \\ P_k = \lVert \Delta P(x_k) \rVert^2 - \lVert \Delta P(x_k) + J(x_k) d_k^1 \rVert^2 + \lVert \Delta P(y_k) \rVert^2 - \lVert \Delta P(y_k) + J(x_k) d_k^2 \rVert^2 + \lVert \Delta P(z_k) \rVert^2 - \lVert \Delta P(z_k) + J(x_k) d_k^3 \rVert^2 \end{cases} \tag{5.24}$$

(7) Update the Chebyshev coefficient vector $x_{k+1}$ and the optimization factor $\upsilon_{k+1}$ by

$$x_{k+1} = \begin{cases} x_k + s_k & r_k > P_0 \\ x_k & r_k \le P_0 \end{cases} \tag{5.25}$$

$$\upsilon_{k+1} = \begin{cases} 2\upsilon_k & r_k < P_1 \\ \upsilon_k & P_1 \le r_k \le P_2 \\ \max\{\upsilon_k/2,\, w\} & r_k > P_2 \end{cases} \tag{5.26}$$

(8) Let k = k + 1 and return to step (2), iterating until $\lVert \Delta P(x_{k+1}) - \Delta P(x_k) \rVert^2 \le \phi$ is satisfied.

The accurate estimation of the robot's positioning error can be achieved by substituting the identified Chebyshev coefficients into the error estimation model.
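The three-step, adaptively damped iteration of steps (1)-(8) can be sketched on a toy problem — fitting a single slope parameter a in y = a·t, where the residual plays the role of ΔP(x). This is a minimal sketch with illustrative names and default parameters, not the book's implementation; for a scalar parameter the matrix inverse of Eq. (5.19) collapses to a division:

```python
# Toy sketch of the three-step L-M iteration of Sect. 5.2.2 on a scalar
# linear least-squares problem. The Jacobian J = t is constant here.
def lm_fit(t, y, a0=0.0, phi=1e-12, v=2.0, w=1e-9, P0=1e-4, P1=0.25, P2=0.75):
    a = a0
    JtJ = sum(ti * ti for ti in t)              # J^T J
    res = lambda p: [p * ti - yi for ti, yi in zip(t, y)]
    nrm = lambda r: sum(q * q for q in r)
    prev = nrm(res(a))                          # ||dP(x_k)||^2
    for _ in range(100):
        mu = v * prev                           # step (3): mu_k = v_k * ||dP(x_k)||^2
        step = lambda p: -sum(ti * ri for ti, ri in zip(t, res(p))) / (JtJ + mu)
        d1 = step(a);  yk = a + d1              # step (4): Eq. (5.20)
        d2 = step(yk); zk = yk + d2             # step (4): Eq. (5.21)
        d3 = step(zk); s = d1 + d2 + d3         # step (5): Eq. (5.22)
        Ak = prev - nrm(res(a + s))             # step (6): actual reduction, Eq. (5.24)
        Pk = sum(nrm(res(p)) - nrm([ri + ti * d for ti, ri in zip(t, res(p))])
                 for p, d in ((a, d1), (yk, d2), (zk, d3)))  # predicted reduction
        rk = Ak / Pk if Pk else 0.0             # Eq. (5.23)
        if rk > P0:                             # step (7): Eq. (5.25)
            a += s
        v = 2 * v if rk < P1 else (v if rk <= P2 else max(v / 2, w))  # Eq. (5.26)
        new = nrm(res(a))
        if abs(prev - new) <= phi:              # step (8): convergence test
            break
        prev = new
    return a
```

For this linear toy problem the iteration converges to the exact least-squares slope, since the fixed point of the damped step is where the gradient $J^T \Delta P$ vanishes.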

5.2.3 Mapping Model

Based on the Chebyshev polynomial error estimation model in the previous subsection, the estimated Cartesian error needs to be mapped to the first three joints to obtain the rotation angle correction values for the joint error compensation. In this


subsection, the Jacobian matrix is utilized to complete the error conversion from Cartesian space to joint space. According to the differential kinematics of the robot, one has

$$dx = J(\theta)\,d\theta \tag{5.27}$$

where dx is the differential vector in Cartesian space and dθ is the joint differential vector. The positioning error can be approximately expressed as the partial differential of the theoretical position of the robot's end-effector w.r.t. the rotation angles of the first three joints, that is,

$$\Delta P = \frac{\partial P_t}{\partial \theta_1}\Delta\theta_1 + \frac{\partial P_t}{\partial \theta_2}\Delta\theta_2 + \frac{\partial P_t}{\partial \theta_3}\Delta\theta_3 = J_\theta\,\Delta\theta \tag{5.28}$$

where $\Delta P$ is the positioning error of the end-effector, $\Delta\theta = [\Delta\theta_1\; \Delta\theta_2\; \Delta\theta_3]^T$ is the vector of the angular errors of the first three joints, and $J_\theta$ is a 3 × 3 matrix with

$$J_\theta = \left[\frac{\partial P_t}{\partial \theta_1}\;\; \frac{\partial P_t}{\partial \theta_2}\;\; \frac{\partial P_t}{\partial \theta_3}\right]. \tag{5.29}$$

The angular error vector of the first three joints can be acquired with the least squares method as

$$\Delta\theta = \left(J_\theta^T J_\theta\right)^{-1} J_\theta^T\,\Delta P. \tag{5.30}$$

Therefore, the correction values of the rotation angles of the first three joints are

$$\theta' = \theta - \Delta\theta. \tag{5.31}$$

In this way, the conversion from the Cartesian space error to the joint space error is achieved, and the correction values of the first three joints are obtained, which lays the foundation for the joint error compensation below.
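The mapping of Eqs. (5.28)-(5.31) can be sketched numerically; a minimal sketch with illustrative function names (a Cramer's-rule solver stands in for the 3 × 3 inverse, assuming $J_\theta^T J_\theta$ is well-conditioned):

```python
def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    xs = []
    for col in range(3):
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = b[r]
        xs.append(det(M) / d)
    return xs

def joint_correction(J, dP, theta):
    """Map a Cartesian error dP to the first three joints (Eqs. 5.28-5.31):
    dTheta = (J^T J)^-1 J^T dP, corrected angles theta' = theta - dTheta."""
    JtJ = [[sum(J[k][i] * J[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    JtP = [sum(J[k][i] * dP[k] for k in range(3)) for i in range(3)]
    dtheta = solve3(JtJ, JtP)
    return [t - d for t, d in zip(theta, dtheta)]
```

For a square, invertible $J_\theta$ the normal-equations form of Eq. (5.30) reduces to $J_\theta^{-1}\Delta P$, but the least-squares form is kept here to mirror the text.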

5.3 Effect of Joint Backlash on Positioning Error

5.3.1 Variation Law of the Joint Backlash

Joint backlash refers to the deviation of the robot's joint angle between the forward and reverse motions to a specified angle; its influence on the end-effector's positioning error is depicted in Fig. 5.2. Although the joint backlash is small, the end-effector can produce a large positioning error owing to the


Fig. 5.2 Effect of joint backlash on positioning error

amplification of the links. To study the mechanism of this effect experimentally, Renishaw RTLA-S absolute grating scales with a resolution of 50 nm and an accuracy of ±5 µm/m are installed on the arc surfaces of the first three joints, as shown in Fig. 5.3. To analyze the joint angle deviation in the positive and negative directions during a single joint motion, we first control one joint of the robot to move from the starting point to the endpoint with a certain angle increment, and then return to the starting point in the opposite direction with the same angle increment. The measurement conditions are shown in Table 5.1, where the A4, A5, and A6 axes are maintained at 0°, 90°, and −15°, respectively. Taking the A1 axis as an example, its joint motion range is set to [−30°, 30°] with a load of 150 kg. To study the influence of joint position on the joint backlash, we control the joint to move to each specified position at constant speed with an angle increment of 10°, and the test is repeated 5 times. To explore the effect of joint speed on the joint backlash, we then control the joint to move to the specified position at speeds of 1–50% (100% speed corresponds to 2 m/s), again with an angle increment of 10° and 5 repetitions. In the experimental measurements, the grating scale readings are recorded when the joint reaches each designated position. The difference between the average grating scale reading in the positive direction and that in the reverse direction is taken as the joint backlash. The variation of the joint backlash for the A1, A2, and A3 axes versus motion speed and joint position is shown in Fig. 5.4.
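The backlash estimate described above — the difference between the direction-wise averages of repeated grating readings — can be sketched as follows (function name illustrative):

```python
def joint_backlash(forward_readings, reverse_readings):
    """Backlash at one commanded joint position (Sect. 5.3.1): the difference
    between the mean grating reading over repeated approaches in the positive
    direction and the mean over approaches in the negative direction."""
    fwd = sum(forward_readings) / len(forward_readings)
    rev = sum(reverse_readings) / len(reverse_readings)
    return fwd - rev
```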


Fig. 5.3 Installation position of grating scales

Table 5.1 Experimental conditions of joint backlash measurement

Axis No.  Joint range (°)              Angular step (°)  Speed range (%)  Repeat times  Load (kg)
A1        [−30, 30] and [30, −30]      10                1–50             5             150
A2        [−100, −80] and [−80, −100]  5                 1–50             5             150
A3        [80, 100] and [100, 80]      5                 1–50             5             150

It can be seen from Fig. 5.4 that the joint backlash is jointly affected by the robot's motion speed and joint position, with the motion speed being the main influencing factor. In particular, the joint backlash decreases as the motion speed increases, but it starts to grow in the opposite direction once the motion speed increases


Fig. 5.4 Measurement value of joint backlash: a A1 axis; b A2 axis; c A3 axis

to a certain extent. The main reason is presumably that when the motion speed is low, the joint angular momentum is too small to take up the gear clearance, resulting in a large joint motion backlash. As the motion speed increases, the angular momentum of the joint gradually increases and its braking distance grows, which suppresses the influence of the gear clearance, so the joint backlash is reduced. When the motion speed continues to increase beyond a certain value, the excessive braking distance makes the joint backlash increase in the reverse direction. Further analysis shows that the joint backlash is mainly caused by gear clearance and joint friction lag. When there is a gap between the meshing gears, reverse motion of the joint leads to a return error. In addition, when the joint reverses direction, its instantaneous speed undergoes a process of deceleration followed by reverse acceleration; the joint friction then changes suddenly, causing the joint to lag. Therefore, increasing the robot's motion speed adds joint angular momentum, which can reduce, to some extent, the joint backlash due to gear clearance and friction. To sum up, the robot's motion speed has a more obvious influence on the joint backlash than the joint position, and the joint backlash decreases with increasing motion speed within a certain range.


5.3.2 Multi-directional Positioning Accuracy Variation

The positioning accuracy is defined as the deviation between the average actual arrival position and the command position when the robot's end-effector approaches the command position from the same direction, i.e.,

$$AP_p = \sqrt{(\bar{x} - x_c)^2 + (\bar{y} - y_c)^2 + (\bar{z} - z_c)^2} \tag{5.32}$$

where

$$\bar{x} = \frac{1}{n}\sum_{j=1}^{n} x_j, \quad \bar{y} = \frac{1}{n}\sum_{j=1}^{n} y_j, \quad \bar{z} = \frac{1}{n}\sum_{j=1}^{n} z_j \tag{5.33}$$

are the average coordinates of the actual position of the robot's end-effector; $x_c$, $y_c$, and $z_c$ are the coordinates of the command position; and $x_j$, $y_j$, and $z_j$ are the coordinates of each actual position of the robot's end-effector. From the previous subsection, the joint backlash is transmitted through the joints and amplified by the links, resulting in uncertainty in the end-effector's position. In other words, the positioning accuracy is uncertain for different motion directions in Cartesian space. To depict this accuracy difference, the concept of multi-directional positioning accuracy variation (MDPAV) is proposed in this subsection. The MDPAV is the maximum of the positioning accuracies obtained when the robot's end-effector moves repeatedly to the same command position from three mutually perpendicular directions, as shown in Fig. 5.5, and is defined as

$$vAP_p = \max_i \sqrt{(\bar{x}_i - x_c)^2 + (\bar{y}_i - y_c)^2 + (\bar{z}_i - z_c)^2}, \quad i = 1, 2, 3 \tag{5.34}$$

As shown in Fig. 5.6, a cuboid of 600 mm × 1000 mm × 600 mm in the robot's workspace is selected as the test area to measure the MDPAV, where $P_1$, $P_2$, and $P_3$ are test points on the diagonal of the cuboid, the arrows indicate the motion directions, and L is the diagonal length. The three mutually perpendicular directions are illustrated in Fig. 5.6, and the robot's end-effector is controlled to move to each test point 30 times at speeds of 1–50%, with a movement distance of 100 mm. The experimental conditions are shown in Table 5.2. During these motions, an API RADIAN laser tracker with an accuracy of 15 µm/10 m is used to measure the robot's multi-directional positioning accuracy; the experimental results are shown in Fig. 5.7. It can be seen from Fig. 5.7 that the MDPAV decreases with increasing movement speed.
The reason is that as the robot's movement speed increases, the joint backlash decreases and the consistency with which the joints reach the specified position improves. Under these circumstances, when the robot moves from different directions to the same command position in Cartesian space, the actual arrival positions gradually get closer, hence the multi-directional positioning


Fig. 5.5 Schematic of MDPAV for the robot

Fig. 5.6 Selected points to be measured during measurement of MDPAV

Table 5.2 Experimental conditions during measurement of MDPAV

Test points  Command coordinates (mm)  Speed range (%)  Repeat times  Load (kg)
P1           (1510, 700, 1060)         1–50             30            150
P2           (1650, 300, 1300)         1–50             30            150
P3           (1890, −100, 1540)        1–50             30            150



Fig. 5.7 Experimental results of MDPAV

accuracy variation of the robot declines. Furthermore, the position of the robot's end-effector has no significant effect on the MDPAV at the same motion speed, which is consistent with the conclusion that the joint position has limited influence on the joint backlash. It can be concluded that the joint backlash is the main influencing factor of the MDPAV, and increasing the movement speed of the robot's end-effector can reduce the joint backlash to some extent and improve the multi-directional positioning accuracy. However, since the Jacobian matrix is a function of the joint angles (it is the mapping between the end-effector velocity and the joint velocities), even when the end-effector's velocity is large, the joint velocities can remain relatively small in some configurations. Therefore, simply increasing the end-effector speed cannot effectively eliminate the effect of the joint backlash; feasible compensation measures are needed to improve the robot's multi-directional positioning accuracy.
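The accuracy measures of Eqs. (5.32)-(5.34) can be sketched directly from repeated position measurements (a minimal sketch; function names are illustrative):

```python
import math

def positioning_accuracy(actual_positions, command):
    """AP_p of Eq. (5.32): distance between the mean arrival position over n
    repetitions and the command position. Positions are (x, y, z) triples."""
    n = len(actual_positions)
    mean = [sum(p[k] for p in actual_positions) / n for k in range(3)]
    return math.dist(mean, command)

def mdpav(runs_by_direction, command):
    """vAP_p of Eq. (5.34): maximum positioning accuracy over the three
    mutually perpendicular approach directions."""
    return max(positioning_accuracy(runs, command) for runs in runs_by_direction)
```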

5.4 Error Compensation Using Feedforward and Feedback Loops

In this section, an error compensation strategy combining feedforward compensation and feedback control is proposed to suppress the joint backlash, drawing on the compensation principles of the offline calibration and online correction methods, as shown in Fig. 5.8. The method includes a feedforward control loop and a feedback control loop: the feedforward loop realizes the prediction and conversion of the robot's positioning error, while the feedback loop forms closed-loop control. In detail, the feedforward loop includes the Chebyshev polynomial error estimation model and the joint mapping model, which respectively realize the estimation of the positioning error of the robot's end-effector and the conversion of the Cartesian space error to the joint space error. In the feedback loop, the grating scales at the first three joints measure the joint angle


signals online and transmit them to the feedback controller in real time to reduce the influence of the joint backlash and to realize the online correction of the end-effector's positioning error. The feedback controller adopts a discrete PD control law, namely

$$\mu(nT) = k_P\, e(nT) + k_D\, \{e(nT) - e[(n-1)T]\} \tag{5.35}$$

where $k_P$ and $k_D$ are the proportional and derivative control coefficients, T is the sampling interval, n is the number of sampling intervals, e(nT) is the tracking error at the current sampling time, and μ(nT) is the controller output at the current time. In the joint closed-loop feedback correction, the upper computer obtains the joint grating feedback signals and calculates the actual joint rotation angles. The differences between the desired and actual joint angles are processed by the PD controller to become the joint correction values, which are then sent to the robot to complete the joint error correction. The error compensation process proposed in this section is shown in Fig. 5.9, and the specific steps are as follows.

(1) Calibrate the grating scales installed on the first three joints and determine the corresponding relationship between the grating readings and the actual rotation angle of each joint.

Fig. 5.8 Block diagram of the robot's positioning error compensation using feedforward and feedback loops (feedforward loop: Chebyshev error estimation and joint mapping model modify the desired joint position; feedback loop: a PD controller acts through KUKA RSI on the KUKA robot, with grating scales measuring the joint angles)

Fig. 5.9 Flowchart of the proposed error compensation (feedforward: positioning error measurement of sampling points, Chebyshev error estimation model, and joint mapping model; feedback: joint closed-loop correction through the robot control system with corrected joint angles)


(2) Determine the motion range of each joint in the robot's workspace, and randomly generate several sampling points and command points within this range. Input the command positions of the sampling points to the robot, use the laser tracker to measure the actual arrival position of the robot's end-effector under joint closed-loop control, and then calculate the positioning error corresponding to each sampling point.

(3) Estimate the positioning error of the robot using the Chebyshev polynomial model and the command positions and error values of the sampling points obtained in step (2).

(4) Calculate the positioning error estimates of the target points by substituting the command points into the error estimation model of step (3), then transform the estimated Cartesian errors into joint errors through the joint mapping model to obtain the corrected joint angle values.

(5) Implement the precise positioning control of the robot's end-effector under the joint closed-loop PD feedback controller together with the corrected joint angle compensation.
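The discrete PD law of Eq. (5.35) at the heart of the feedback loop can be sketched as follows (class name and gains are illustrative, not the book's controller):

```python
class PDController:
    """Discrete PD law of Eq. (5.35): u(nT) = kP*e(nT) + kD*(e(nT) - e((n-1)T)),
    turning a joint-angle tracking error into a correction value per sample."""

    def __init__(self, kp, kd):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def update(self, desired, measured):
        e = desired - measured                       # tracking error e(nT)
        u = self.kp * e + self.kd * (e - self.prev_error)
        self.prev_error = e                          # store e((n-1)T) for next sample
        return u
```

One controller instance per joint would be driven by the grating-scale feedback at each sampling interval T.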

5.5 Experimental Verification and Analysis

5.5.1 Experimental Setup

To verify the effectiveness of the proposed positioning error compensation method combining feedforward compensation and feedback control, the experimental platform shown in Fig. 5.10 is built. The industrial robot is a KUKA KR210 robot, whose main parameters are listed in Table 5.3. The API RADIAN laser tracker is used to measure the robot's positioning errors, and the Renishaw RTLA-S absolute grating scales (shown in Fig. 5.3) installed on the first three joints provide online detection of the actual joint angles. The tooling is used to fix the products to be machined.

5.5.2 Error Estimation Experiment

Chebyshev polynomial error estimation models of different orders are constructed and validated in this subsection. The basic idea is to build a robot kinematic model containing kinematic parametric errors, using a set of randomly generated parametric errors, to simulate the real robot kinematics. Then, the Chebyshev polynomial error estimation model is established from the theoretical positions and the positioning errors obtained by simulation. Finally, the model is used to estimate the positioning errors of the target points, and the estimates are compared with the simulated actual positioning errors to verify the accuracy of the spatial error estimation model. In the Chebyshev coefficient identification, the optimal polynomial order is determined so that the error estimation


Fig. 5.10 Experimental setup

Table 5.3 Main parameters of KUKA KR210 robot

Performance parameter      Value
Number of joints           6
Rated maximum load (kg)    210
Maximum wingspan (mm)      2696
Repeatability (mm)         ±0.06
Accuracy (mm)              ±0.8

model can accurately describe the distribution characteristics of the robot's positioning error without overfitting. The specific verification steps of the Chebyshev polynomial spatial error estimation model are as follows.

(1) Each kinematic parametric error is generated randomly within a reasonable range and substituted into the theoretical kinematic model to obtain a robot kinematic model with parametric errors, which is used to simulate the real kinematics.

(2) Determine the rectangular region of the robot workspace, and randomly generate 100 sampling points and 20 target points in this region. The 100 sampling points are used to construct the Chebyshev polynomial spatial error estimation model, and the 20 target points are used to verify its accuracy. First, the theoretical joint angles are obtained by the inverse kinematic solution of the theoretical positions of the sampling and target points. These joint angle values are then substituted into the kinematic model containing parametric errors to calculate the actual positioning errors of the sampling points and target points.


(3) The Chebyshev polynomial error estimation model is established from the theoretical positions and actual positioning errors of the sampling points.

(4) The theoretical joint angle values of the target points are substituted into the Chebyshev polynomial error estimation model to calculate the estimated positioning errors, which are then compared with the actual positioning errors of the target points from step (2) to verify the accuracy of the spatial error estimation model.

The sum of squares of the estimation errors in the x, y, and z directions for Chebyshev polynomials of different orders is shown in Fig. 5.11. The sum of the estimation errors for the command points decreases as the Chebyshev polynomial order rises, indicating that a higher-order Chebyshev polynomial reflects the distribution characteristics of the robot's positioning error more accurately. Moreover, the sum of the estimation errors of the second-order Chebyshev polynomial is 0.0798 mm, and the average estimation error is 0.003989 mm, which is much lower than the robot's repeatability. Balancing identification efficiency and accuracy, the second-order Chebyshev polynomial is therefore used to model the robot's positioning error. The corresponding estimation errors for the command points are shown in Fig. 5.12. The average values of the estimation errors in the x, y, and z directions are −0.000122 mm, −0.00101 mm, and −0.00056 mm, with standard deviations of 0.003119 mm, 0.002456 mm, and 0.001946 mm, respectively, indicating that the second-order Chebyshev polynomial model can accurately estimate the positioning error of the robot.


Fig. 5.11 Estimation of positioning error by Chebyshev polynomial with different orders



Fig. 5.12 Estimation of positioning error for second-order Chebyshev polynomial

Table 5.4 Joint motion range (°)

Axis Nos   A1    A2     A3    A4    A5    A6
Maximum    25    −70    120   15    110   20
Minimum    −25   −110   70    −15   60    −40

5.5.3 Error Compensation Experiment

In the rectangular area shown in Fig. 5.6, 100 sampling points and 100 target points are randomly generated; the sampling points are used to build the robot's error estimation model, and the target points are used to verify the effectiveness of the accuracy improvement method proposed in this subsection. The motion range of each joint is given in Table 5.4. First, with joint closed-loop control, the actual positioning errors of the 100 sampling points are measured with the laser tracker as the original error data for accuracy compensation. Then, the second-order Chebyshev polynomial is used to model the positioning errors of the 100 sampling points. The experimental results of the positioning errors with and without compensation for the 100 target points are shown in Fig. 5.13, where the positioning error is APp. The parameter calibration method in [2] is used for comparison, and the statistical results of the experimental data are given in Table 5.5. From Fig. 5.13, the robot's positioning errors after applying the proposed method are significantly reduced: the errors in the x, y, and z directions are all compensated to within ±0.2 mm, and the average error in each direction approaches zero, indicating higher stability. From Table 5.5, the maximum positioning error is reduced from 0.76 mm before compensation to 0.18 mm, and the average positioning error is reduced from 0.43


Fig. 5.13 Positioning errors with different compensation methods under no-load condition: a APx; b APy; c APz; and d APp

Table 5.5 Statistical results of positioning error in robotic no-load experiment

Error  Methods                 Range (mm)       Average value (mm)
APx    Without compensation    [−0.70, 0.54]    0.29
       With proposed method    [−0.15, 0.11]    0.04
APy    Without compensation    [−0.49, 0.43]    0.16
       With proposed method    [−0.17, 0.06]    0.07
APz    Without compensation    [−0.44, 0.54]    0.20
       With proposed method    [−0.08, 0.12]    0.04
APp    Without compensation    [0.10, 0.76]     0.43
       With proposed method    [0.03, 0.18]     0.10


to 0.10 mm. Compared with the uncompensated case, the maximum positioning error is decreased by 76.3%, which shows that the positioning accuracy of the robot is significantly improved. The feasibility and effectiveness of the proposed accuracy improvement method are thus demonstrated by the above experimental results and analysis.

5.6 Summary

This chapter proposes an accuracy improvement method that combines feedforward compensation and feedback control to decrease the positioning error of an industrial robot. A Chebyshev polynomial model that comprehensively considers geometric and non-geometric errors is established to accurately estimate positioning errors and construct a feedforward compensation loop. A joint mapping model is built to transform the end-effector error into joint angular errors, enabling online error correction with the joint PD feedback controller. The no-load experimental results show that the positioning error is reduced by 76.3% compared with the uncompensated case.

References

1. Yan Z, Fan X, Zhao W, Xu X, Fan J, Wang Y. Improving the convergence of power flow calculation by a self-adaptive Levenberg-Marquardt method. Proc CSEE. 2015;35(8):1909–18. https://doi.org/10.13334/j.0258-8013.pcsee.2015.08.010.
2. Zeng Y, Tian W, Liao W. Industrial robot error compensation methods for aircraft automatic drilling and riveting system. Aeronautical Manuf Technol. 2016;18:46–52. https://doi.org/10.16080/j.issn1671-833x.2016.18.046.

Chapter 6

Cartesian Space Closed-Loop Feedback

6.1 Introduction

The objective of this chapter is to enhance the motion accuracy of industrial robots in machining applications by using a vision-guided approach. In Sect. 6.2, the influence of the method used to establish the end-effector frame on the measurement error of the visual sensor is investigated mathematically. Section 6.3 presents a vision-guided control strategy for the robot. Experimental studies are carried out in Sect. 6.4. As a novel measurement method, the vision sensor has a large measurement space and can derive rich information. Integrated with industrial robot equipment, it can greatly improve the robot's sensing ability. However, restricted by its accuracy, the visual sensor is not widely used in the field of high-precision measurement. Starting from the measurement principle of vision, the sources of excessive error in the vision measurement process are analyzed through theoretical derivation, and a fuzzy PID controller is designed to implement the external closed-loop feedback.

6.2 Pose Measurement Using Binocular Vision Sensor

As a novel measurement method, the vision sensor has a large measurement space and provides rich information. However, restricted by its accuracy, it is not widely used in high-precision measurement. Starting from the measurement principle of vision, this section analyzes the causes of excessive error in the vision measurement process through theoretical derivation and designs a Kalman filter to achieve accurate estimation of the robot's pose signal.

© Science Press 2023 W. Liao et al., Error Compensation for Industrial Robots, https://doi.org/10.1007/978-981-19-6168-7_6


6.2.1 Description of Frame

According to the number of cameras, visual measurement technology can be divided into monocular, binocular, and multi-ocular measurement. Monocular measurement adopts one camera and can only obtain two-dimensional plane information of the measured object. Owing to the uncertain depth, the pose of the measured object in Cartesian space generally cannot be derived, so monocular measurement is commonly used for 2D vision measurement. Binocular measurement mimics the human eyes: by photographing the two-dimensional plane information of the same object from different views, it can calculate the three-dimensional position coordinates of the measured object, and it has therefore become the most widely used measurement method. Multi-ocular measurement, which is based on binocular measurement, can establish a large visual measurement field; however, because of the calibration required between multiple cameras, its accuracy is lower, making it suitable for large-range measurements with relatively low accuracy requirements. Comprehensively considering the high-precision measurement requirements and the working range of the robot, the C-Track binocular vision sensor from Creaform Inc. is used to measure the pose of the robot's end-effector in Cartesian space, as shown in Fig. 6.1a. C-Track includes two digital cameras that can automatically capture and detect all visual targets within the field of view. Several LEDs are installed on the outer ring of each camera and emit active light to improve the identification accuracy of visual targets in space. VXElements is the measurement software serving C-Track, comprising six functional modules: VXScan, VXShot, VXInspect, VXModel, VXProbe, and VXTrack. The VXProbe module can be used for entity detection in space and can establish characteristic entities of the measured object, such as points, planes, circles, cylinders, and spheres, in the VXElements virtual space with HandyPROBE. Before the measurement, visual targets are attached to the rigid body under test; these points are then selected in the software interface, and the position relationship between the current measurement frame and the visual targets is determined. In

Fig. 6.1 C-Track and its HandyPROBE: a C-Track and visual targets; b HandyPROBE


Fig. 6.2 Schematic diagram of the distribution for frames

addition, different measurement frames can be established according to the measured data in order to derive the transformation of measurement data between different frames. Through the VXProbe module of C-Track, a unified measurement field can be established. For convenience of description, the drilling process is taken as an example to derive the desired pose of the robot. The following frames are introduced: F_S (sensor frame), F_B (base frame), F_E (end-effector frame), F_W (workpiece frame), and F_H (hole frame), as shown in Fig. 6.2. The measurement field F_S is established by binocular vision, and the poses of the base frame F_B and the workpiece frame F_W w.r.t. F_S are represented by the transformation matrices ${}^{S}_{B}T$ and ${}^{S}_{W}T$, respectively. The center of the hole to be drilled on the workpiece surface is selected as the origin of F_H; its axial direction is set as the z-axis, and the x-axis is constrained to be parallel to the ground. The frame F_H of the hole to be drilled can then be constructed by the right-hand rule, and the relationship between F_H and F_W is written as ${}^{W}_{H}T$. The processing point of the robot is the coincidence position of F_E and F_H; namely, the pose of the end effector in the base frame F_B can be obtained as

$${}^{B}_{E}T = {}^{B}_{H}T = {}^{B}_{S}T\,{}^{S}_{W}T\,{}^{W}_{H}T = \left({}^{S}_{B}T\right)^{-1}{}^{S}_{W}T\,{}^{W}_{H}T \tag{6.1}$$
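Equation (6.1) is a plain composition of homogeneous transforms, which can be checked numerically. The sketch below uses made-up frame poses purely for illustration:

```python
import numpy as np

def hom(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    """Rotation of angle a (rad) about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# made-up measurements of the base and workpiece frames in the sensor frame
T_SB = hom(rot_z(0.10), [1.00, 0.20, 0.00])   # S_B T
T_SW = hom(rot_z(-0.30), [2.00, 0.50, 0.10])  # S_W T
T_WH = hom(np.eye(3), [0.05, 0.05, 0.00])     # W_H T, hole frame on the workpiece

# Eq. (6.1): B_E T = (S_B T)^-1 . S_W T . W_H T
T_BE = np.linalg.inv(T_SB) @ T_SW @ T_WH
print(T_BE[:3, 3])  # desired end-effector position in the base frame
```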


6.2.2 Pose Measurement Principle Based on Binocular Vision

To achieve high-precision pose measurement of the end-effector and reduce measurement noise, it is necessary to study the pose measurement principle of C-Track. Since a single visual target in Cartesian space can only represent position information, at least three visual targets are needed to describe the pose of an object, as shown in Fig. 6.3. The visual targets are attached to the end effector, and the origin of F_E is fixed at the tooltip. In addition, no three targets are collinear. The visual target group and the corresponding frame F_E are called the tracking model. When C-Track recognizes the visual targets, the pose transformation ${}^{S}_{E}T$ of the frame F_E w.r.t. the sensor frame F_S can be obtained. The pose solution can be described as follows: assume that n (n ≥ 3) visual targets are attached to the end effector; then their coordinates $m_i$ (i = 1, 2, ..., n) in the frame F_E and $s_i$ (i = 1, 2, ..., n) in the frame F_S form the matrices M and S, respectively, denoted as

$$M = \left[\, m_1 \;\; m_2 \;\; \cdots \;\; m_n \,\right], \tag{6.2}$$

$$S = \left[\, s_1 \;\; s_2 \;\; \cdots \;\; s_n \,\right]. \tag{6.3}$$

The coordinate transformation between $m_i$ and $s_i$ can be expressed as

$$R m_i + t = s_i, \tag{6.4}$$

where R and t are the rotation matrix and the translation vector from F_E to F_S, respectively. To solve for the optimal estimation of R and t, one can rewrite Eq. (6.4) as

$$F(R, t) = \sum_{i=1}^{n} \left\| R m_i + t - s_i \right\|^2 \tag{6.5}$$

Fig. 6.3 Schematic diagram of pose measurement: a C-Track; b end-effector


The partial derivative of Eq. (6.5) w.r.t. the variable t is

$$\frac{\partial F}{\partial t} = 2n\left(R \bar{m} + t - \bar{s}\right) \tag{6.6}$$

with

$$\bar{m} = \frac{1}{n}\sum_{i=1}^{n} m_i, \quad \bar{s} = \frac{1}{n}\sum_{i=1}^{n} s_i, \tag{6.7}$$

where $\bar{m}$ and $\bar{s}$ are the coordinate vectors of the geometric center of the targets in the end-effector frame F_E and the sensor frame F_S, respectively. Let

$$\frac{\partial F}{\partial t} = 0, \tag{6.8}$$

then the optimal translation vector can be obtained as

$$t^{*} = -R\bar{m} + \bar{s} \tag{6.9}$$

Substituting Eq. (6.9) into Eq. (6.5) yields

$$F(R, t) = \sum_{i=1}^{n} \left\| R m_{ci} - s_{ci} \right\|^2, \tag{6.10}$$

where

$$m_{ci} = m_i - \bar{m}, \tag{6.11}$$

$$s_{ci} = s_i - \bar{s}. \tag{6.12}$$

Equation (6.10) can be further expanded as

$$\sum_{i=1}^{n}\left\|R m_{ci}-s_{ci}\right\|^{2}=\sum_{i=1}^{n}\left(R m_{ci}-s_{ci}\right)^{T}\left(R m_{ci}-s_{ci}\right)=\sum_{i=1}^{n}\left(m_{ci}^{T} m_{ci}-s_{ci}^{T} R m_{ci}-m_{ci}^{T} R^{T} s_{ci}+s_{ci}^{T} s_{ci}\right)=\sum_{i=1}^{n}\left(m_{ci}^{T} m_{ci}-2 s_{ci}^{T} R m_{ci}+s_{ci}^{T} s_{ci}\right) \tag{6.13}$$


where the first and third terms are constants, so the minimization problem of Eq. (6.13) can be transformed into finding the maximum of

$$F(R) = \sum_{i=1}^{n} s_{ci}^{T} R m_{ci} = \operatorname{tr}\left(S_c^{T} R M_c\right), \tag{6.14}$$

with $M_c = \left[\, m_{c1} \;\; m_{c2} \;\; \cdots \;\; m_{cn} \,\right]$ and $S_c = \left[\, s_{c1} \;\; s_{c2} \;\; \cdots \;\; s_{cn} \,\right]$. Using the cyclic property of the trace,

$$\operatorname{tr}\left(S_c^{T} R M_c\right) = \operatorname{tr}\left(R M_c S_c^{T}\right). \tag{6.15}$$

Let

$$P = M_c S_c^{T} \tag{6.16}$$

and perform the singular value decomposition of Eq. (6.16); then the following equation can be derived,

$$P = U \Sigma V^{T}, \tag{6.17}$$

and Eq. (6.15) can be transformed into

$$\operatorname{tr}(R P) = \operatorname{tr}\left(R U \Sigma V^{T}\right) = \operatorname{tr}(\Sigma H), \tag{6.18}$$

where

$$H = V^{T} R U. \tag{6.19}$$

Because R, U, and V are all orthogonal matrices, H is also orthogonal, and no element of H exceeds 1 in magnitude. Therefore,

$$\operatorname{tr}(\Sigma H) = \operatorname{tr}\left(\begin{bmatrix} \sigma_1 & 0 & 0 \\ 0 & \sigma_2 & 0 \\ 0 & 0 & \sigma_3 \end{bmatrix}\begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}\right) = \sum_{i=1}^{3} \sigma_i h_{ii} \le \sum_{i=1}^{3} \sigma_i. \tag{6.20}$$

To maximize Eq. (6.18), the diagonal elements of H must all be 1, i.e.,

$$H = V^{T} R U = I. \tag{6.21}$$

Then the optimal expression of the rotation matrix R is

$$R^{*} = V U^{T}. \tag{6.22}$$


Substituting Eq. (6.22) into Eq. (6.9) yields the optimal value $t^{*}$ of the translation vector t. The pose transformation ${}^{S}_{E}T$ of the EE (end-effector) frame F_E w.r.t. the sensor frame F_S can then be represented as

$${}^{S}_{E}T = \begin{bmatrix} R^{*} & t^{*} \\ 0 & 1 \end{bmatrix}. \tag{6.23}$$
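The closed-form solution of Eqs. (6.4)–(6.23) is the classical SVD-based (Kabsch-type) point-set registration. A minimal numpy sketch, with a determinant guard against reflections (a standard safeguard not spelled out above), is:

```python
import numpy as np

def best_fit_pose(M, S):
    """Optimal R, t such that R @ m_i + t ~= s_i (Eqs. 6.4-6.23).
    M, S: (3, n) target coordinates in the end-effector and sensor frames."""
    m_bar = M.mean(axis=1, keepdims=True)
    s_bar = S.mean(axis=1, keepdims=True)
    Mc, Sc = M - m_bar, S - s_bar            # centered coordinates (Eqs. 6.11-6.12)
    P = Mc @ Sc.T                            # Eq. (6.16)
    U, sigma, Vt = np.linalg.svd(P)          # Eq. (6.17)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
    R = Vt.T @ D @ U.T                       # Eq. (6.22), R* = V U^T
    t = s_bar - R @ m_bar                    # Eq. (6.9)
    return R, t

# check: recover a known rotation/translation from 5 noise-free targets
rng = np.random.default_rng(1)
M = rng.uniform(-0.1, 0.1, (3, 5))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [2.0], [0.5]])
S = R_true @ M + t_true
R, t = best_fit_pose(M, S)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With noisy target measurements the same routine returns the least-squares optimum of Eq. (6.5) rather than an exact recovery.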

6.2.3 Influence of the Frame F_E on Measurement Accuracy

It can be seen from Sect. 6.2.2 that the measurement accuracy of ${}^{S}_{E}T$ is closely related to the positions of the visual targets in the EE frame F_E. Without loss of generality, four ways of establishing the EE frame F_E for a target group are considered, as shown in Fig. 6.4. The origin of the frame F_E0 is at the geometric center of the visual targets; its position vector in the sensor frame F_S is t, and its posture is the same as that of F_S. The frame F_E1 is obtained by rotating F_E0 through a rotation matrix Λ. The frame F_E2 is obtained by translating F_E0 by q. The frame F_E3 is obtained by the translation q plus the rotation Λ. The influence of these different methods of building the frame F_E on the measurement accuracy is discussed below.

6.2.3.1 Measurement Accuracy of F_E0

According to the matrix transformation relationship, the rotational and translational transformations of F_E0 w.r.t. F_S can be deduced as

$$R_0^{*} = E I, \tag{6.24}$$

$$t_0^{*} = t + e, \tag{6.25}$$

Fig. 6.4 Different establishment approaches of the EE frame F_E


where E is an orthogonal matrix representing the small deviation caused by the measurement noise of the visual targets, and e is a three-dimensional column vector also representing a small deviation. E and e represent the C-Track pose measurement error, which is an inherent property of the measurement system.

6.2.3.2 Measurement Accuracy of F_E1

The coordinates of the targets in F_E1 and F_E0 have the following relationship:

$$m_{1,i} = \Lambda^{T} m_{0,i} \tag{6.26}$$

The matrix $M_{1,c}$ w.r.t. the frame F_E1 can be obtained according to Eq. (6.11) as

$$M_{1,c} = \left[\, m_{1,1} - \bar{m}_1 \;\; m_{1,2} - \bar{m}_1 \;\; \cdots \;\; m_{1,n} - \bar{m}_1 \,\right] = \Lambda^{T}\left[\, m_{0,1} - \bar{m}_0 \;\; m_{0,2} - \bar{m}_0 \;\; \cdots \;\; m_{0,n} - \bar{m}_0 \,\right] = \Lambda^{T} M_{0,c} \tag{6.27}$$

Combining Eqs. (6.18) and (6.27) yields

$$\operatorname{tr}\left(S_{1,c}^{T} R_1 M_{1,c}\right) = \operatorname{tr}\left(R_1 M_{1,c} S_{1,c}^{T}\right) = \operatorname{tr}\left(R_1 \Lambda^{T} M_{0,c} S_{0,c}^{T}\right) \tag{6.28}$$

According to the method described in Sect. 6.2.2, one can have

$$R_1 \Lambda^{T} = V_0 U_0^{T} = R_0 \tag{6.29}$$

Substituting Eq. (6.24) into Eq. (6.29), the optimal attitude matrix of the frame F_E1 can be obtained as

$$R_1^{*} = \Lambda R_0^{*} = \Lambda E I \tag{6.30}$$

Comparing Eq. (6.24) with Eq. (6.30), the posture error term of F_E1 rotates with F_E1. Compared with the measurement error in F_E0, only the error components along the different coordinate axes of F_E1 change; that is, the overall error value remains unchanged. According to Eq. (6.9), and noting that $\bar{m}_0 = 0$ because the origin of F_E0 is at the geometric center of the targets, the optimal translation vector $t_1^{*}$ of F_E1 can be calculated as

$$t_1^{*} = -R_1 \bar{m}_1 + \bar{s} = -\Lambda E \Lambda^{T} \bar{m}_0 + t + e = t + e \tag{6.31}$$


It can be seen from Eqs. (6.25) and (6.31) that the posture of the frame does not affect the error of the optimal translation vector.

6.2.3.3 Measurement Accuracy of F_E2

The coordinates of the targets in F_E2 and F_E0 have the following relationship:

$$m_{2,i} = m_{0,i} + q \tag{6.32}$$

The matrix $M_{2,c}$ in the frame F_E2 can be obtained according to Eq. (6.11) as

$$M_{2,c} = \left[\, m_{2,1} - \bar{m}_2 \;\; m_{2,2} - \bar{m}_2 \;\; \cdots \;\; m_{2,n} - \bar{m}_2 \,\right] = \left[\, m_{0,1} - \bar{m}_0 \;\; m_{0,2} - \bar{m}_0 \;\; \cdots \;\; m_{0,n} - \bar{m}_0 \,\right] = M_{0,c} \tag{6.33}$$

It can be seen that the matrices $M_c$ in the frames F_E2 and F_E0 are equal. Physically, each column vector of $M_c$ represents the distance from a target to the geometric center; therefore, no matter how large q is, $M_{2,c}$ equals $M_{0,c}$ as long as the posture of the frame remains unaltered. Since $S_c$ and $M_c$ are unchanged, the optimal rotation matrices R for the frames F_E2 and F_E0 are also equal, namely

$$R_2 = R_0 = E I. \tag{6.34}$$

According to Eq. (6.9), the optimal translation vector $t_2^{*}$ of F_E2 can be calculated as

$$t_2^{*} = -R_2 \bar{m}_2 + \bar{s} = -E \bar{m}_0 - E q + t + e = -E q + t + e \tag{6.35}$$

Comparing Eq. (6.25) with Eq. (6.35), it can be seen that when the frame F_E is translated by q, an additional error term −Eq appears in the optimal translation vector: the larger q is, the larger this error becomes.

6.2.3.4 Measurement Accuracy of F_E3

The coordinates of the targets in F_E3 and F_E0 have the following relationship:

$$m_{3,i} = \Lambda^{T} m_{0,i} + q \tag{6.36}$$


The matrix $M_{3,c}$ in the frame F_E3 can be obtained according to Eq. (6.11) as

$$M_{3,c} = \left[\, m_{3,1} - \bar{m}_3 \;\; \cdots \;\; m_{3,n} - \bar{m}_3 \,\right] = \left[\, \left(\Lambda^{T} m_{0,1} + q\right) - \left(\Lambda^{T} \bar{m}_0 + q\right) \;\; \cdots \;\; \left(\Lambda^{T} m_{0,n} + q\right) - \left(\Lambda^{T} \bar{m}_0 + q\right) \,\right] = \Lambda^{T} M_{0,c} \tag{6.37}$$

Similar to the method for calculating $R_1$, one can have

$$R_3 = \Lambda R_0 = \Lambda E I \tag{6.38}$$

According to Eq. (6.9), the optimal translation vector $t_3^{*}$ of F_E3 can be acquired as

$$t_3^{*} = -R_3 \bar{m}_3 + \bar{s} = -\Lambda E\left(\Lambda^{T} \bar{m}_0 + q\right) + t + e = -\Lambda E q + t + e. \tag{6.39}$$

It can be seen from Eqs. (6.25) and (6.39) that the same conclusion holds for the measurement accuracy of F_E3 as for F_E2. From the above analyses, when C-Track continuously measures the pose of a static object at a high frequency, the measurement errors of the four methods of establishing the frame F_E vary with the changes in E and e. Since the optimal translation vectors calculated by the third and fourth methods introduce the error terms −Eq and −ΛEq, the measured position fluctuates severely. Although the noise of the inherent measurement error E of C-Track is very small, the measurement error mainly depends on the offset q of the frame F_E, which is positively correlated with the measurement noise. To sum up, when establishing the EE frame F_E, its origin should be as close as possible to the geometric center of the targets. In actual applications the origin is usually dictated by process requirements, such as placing it at the tip point of a drill bit; in that case, the center of the targets should be brought as close as possible to the tip point to minimize the error.
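The conclusion that the jitter of $t^{*}$ grows with the offset q can be illustrated with a small Monte Carlo simulation. The noise magnitudes below (a first-order skew-symmetric model of E, and the standard deviations of the angular and positional noise) are assumed values for illustration, not C-Track specifications:

```python
import numpy as np

rng = np.random.default_rng(3)

def t_star_jitter(q, n_trials=2000, eps=1e-4, e_std=0.01):
    """Largest per-axis standard deviation of t* = -Eq + t + e (Eq. 6.35),
    with t = 0 so that only the fluctuation is compared."""
    samples = []
    for _ in range(n_trials):
        w = rng.normal(0.0, eps, 3)               # small random rotation vector
        K = np.array([[0.0, -w[2], w[1]],
                      [w[2], 0.0, -w[0]],
                      [-w[1], w[0], 0.0]])        # skew(w)
        E = np.eye(3) + K                         # first-order orthogonal deviation
        e = rng.normal(0.0, e_std, 3)             # positional noise
        samples.append(-E @ q + e)
    return np.std(np.array(samples), axis=0).max()

noise_floor = t_star_jitter(np.zeros(3))                  # origin at target center
offset_noise = t_star_jitter(np.array([0.0, 0.0, 500.0])) # 500 mm offset
print(noise_floor, offset_noise)  # the offset amplifies the jitter
```

The second value is several times the first: the angular noise of E is multiplied by the lever arm |q|, exactly the effect derived for F_E2 and F_E3 above.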

6.2.4 Pose Estimation Using Kalman Filtering

Uncertain factors such as image noise and vibration caused by the motion of the measured object may disturb the pose measurement of C-Track. To meet the


requirements of dynamic measurement accuracy of the robot's pose, Kalman filtering [1, 2] is used to achieve smooth estimation of the pose signal. Within one sampling period, the displacement of the end effector is very small and can be considered linear. The following equations can be established:

$$X(k) = F X(k-1) + W(k), \tag{6.40}$$

$$Z(k) = H X(k) + V(k), \tag{6.41}$$

where X(k), Z(k), W(k), and V(k) are the state vector, observation vector, process noise vector, and measurement noise vector at time k, respectively, and F and H are the corresponding coefficient matrices. X(k) includes the robot's pose and its rate, described as

$$X(k) = \left[\, x_k \;\; y_k \;\; z_k \;\; \alpha_k \;\; \beta_k \;\; \gamma_k \;\; \dot{x}_k \;\; \dot{y}_k \;\; \dot{z}_k \;\; \dot{\alpha}_k \;\; \dot{\beta}_k \;\; \dot{\gamma}_k \,\right]^{T}. \tag{6.42}$$

F can be depicted as

$$F = \begin{bmatrix} I_{6\times6} & t_s I_{6\times6} \\ 0_{6\times6} & I_{6\times6} \end{bmatrix}, \tag{6.43}$$

where $t_s$ is the sampling period, $I_{6\times6}$ is the 6 × 6 identity matrix, and $0_{6\times6}$ is the 6 × 6 null matrix. Z(k) is a 6 × 1 vector measured by C-Track:

$$Z(k) = \left[\, x_k \;\; y_k \;\; z_k \;\; \alpha_k \;\; \beta_k \;\; \gamma_k \,\right]^{T}. \tag{6.44}$$

The observation matrix H is

$$H = \left[\, I_{6\times6} \;\; 0_{6\times6} \,\right]. \tag{6.45}$$

W(k) and V(k) are zero-mean Gaussian white noise with covariance matrices Q(k) and R(k), respectively, i.e.,

$$W(k) \sim N\left(0, Q(k)\right), \quad V(k) \sim N\left(0, R(k)\right),$$
$$E\left(W(k) W^{T}(j)\right) = Q(k)\,\delta_{k-j}, \quad E\left(V(k) V^{T}(j)\right) = R(k)\,\delta_{k-j}, \quad E\left(W(k) V^{T}(j)\right) = 0, \tag{6.46}$$

where $\delta_{k-j}$ is the Kronecker delta: $\delta_{k-j} = 1$ if k = j, and $\delta_{k-j} = 0$ if k ≠ j. The unbiased optimal estimation of X(k), i.e., the solution to the Kalman filtering problem, can be obtained as


$$\hat{X}(k/k-1) = F \hat{X}(k-1), \tag{6.47}$$

$$P(k/k-1) = F P(k-1) F^{T} + Q(k), \tag{6.48}$$

$$\hat{X}(k) = \hat{X}(k/k-1) + K(k)\left[Z(k) - H(k)\hat{X}(k/k-1)\right], \tag{6.49}$$

$$K(k) = P(k/k-1) H^{T}(k)\left[H(k) P(k/k-1) H^{T}(k) + R(k)\right]^{-1}, \tag{6.50}$$

$$P(k) = \left[I - K(k) H(k)\right] P(k/k-1), \tag{6.51}$$

where P(k) is the covariance matrix of the state vector, P(k/k−1) is the one-step prediction covariance matrix of the state vector, $\hat{X}(k)$ is the filtered estimate of the state vector, and $\hat{X}(k/k-1)$ is its one-step prediction.
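The recursion of Eqs. (6.47)–(6.51), with the constant-velocity model of Eqs. (6.43)–(6.45) and the Q of Eq. (6.57), can be sketched as follows; the measurement covariance R and the simulated trajectory are assumptions for illustration:

```python
import numpy as np

def kalman_step(x_hat, P, z, F, H, Q, R):
    """One predict/update cycle of Eqs. (6.47)-(6.51)."""
    x_pred = F @ x_hat                                 # Eq. (6.47)
    P_pred = F @ P @ F.T + Q                           # Eq. (6.48)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # Eq. (6.50)
    x_new = x_pred + K @ (z - H @ x_pred)              # Eq. (6.49)
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred      # Eq. (6.51)
    return x_new, P_new

# constant-velocity pose model, ts = 0.0125 s (the 80 Hz C-Track rate)
ts = 0.0125
I6, Z6 = np.eye(6), np.zeros((6, 6))
F = np.block([[I6, ts * I6], [Z6, I6]])                # Eq. (6.43)
H = np.hstack([I6, Z6])                                # Eq. (6.45)
Q = 1e-4 * np.diag([10]*3 + [1]*3 + [10]*3 + [1]*3)    # Eq. (6.57)
R = 1e-2 * np.eye(6)                                   # assumed measurement noise

rng = np.random.default_rng(2)
x = np.zeros(12); x[6] = 5.0                           # true state: 5 mm/s along x
x_hat, P = np.zeros(12), np.eye(12)
for _ in range(200):
    x = F @ x                                          # true motion
    z = H @ x + rng.normal(0.0, 0.1, 6)                # noisy pose measurement
    x_hat, P = kalman_step(x_hat, P, z, F, H, Q, R)
print(float(x_hat[0]), float(x[0]))                    # filtered vs true x position
```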

6.3 Vision-Guided Control System

As a mature commercial product, the controller of the KUKA robot considered in this book is not open to users. To correct the robot's positioning error, interactive interfaces such as the KUKA Robot Language (KRL) and the Robot Sensor Interface (RSI) provided by KUKA Inc. must be used to establish communication between the internal and external control systems. KRL is an external programming command interface: external control commands are transmitted to the inner robot control system through Ethernet to perform point-to-point, linear, and arc motions, realizing non-real-time control of the robot. RSI is an external real-time control interface that periodically processes and responds to externally transmitted data and instructs the robot to move in Cartesian or joint space. The error compensation principle combining the external visual servoing controller with the robot's own controller is shown in Fig. 6.5. Typically, there exists a deviation between the actual structural parameters and the theoretical kinematic and dynamic model in the robot's internal controller, which leads to an initial pose error. By turning on the external control system and using the vision sensor to measure the actual pose of the robot end-effector in real time, a closed-loop control system can be formed to correct the motion error. The pose error obtained from the closed-loop feedback is processed by the visual servoing controller, and the desired pose for the next moment is obtained and sent to the robot system. According to the calculated joint values, the motors are actuated to reach the corresponding positions, reducing the pose error of the end effector. In conclusion, the external control system communicates with the robot system through the RSI interface to realize online correction of the robot's pose by the external visual servo controller.


Fig. 6.5 Schematic of error compensation principle using a visually guided method

The PID controller is broadly applicable to controlled systems. The conventional PID law used in industrial applications is

$$u(t) = K_p e(t) + K_i \int_{0}^{t} e(\tau)\, d\tau + K_d \frac{d e(t)}{d t}, \tag{6.52}$$

where u(t) is the output of the PID controller, e(t) is the error of the controlled system, and $K_p$, $K_i$, and $K_d$ are the proportional, integral, and differential coefficients, respectively. In practice, however, the external system sends pulse signals to the robot system at a fixed period, so Eq. (6.52) must be discretized: the integral of the error is approximated by a summation, and the derivative of the error is approximated by the change of the error from the previous sampling instant to the current one. After discretization, the PID law becomes

$$u(k) = K_p e(k) + K_i \sum_{j=0}^{k} e(j) + K_d \left[e(k) - e(k-1)\right]. \tag{6.53}$$
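Equation (6.53) can be implemented directly as a positional discrete PID. In the sketch below, the first-order plant and the gains are stand-ins for illustration, not the robot model:

```python
class DiscretePID:
    """Positional discrete PID of Eq. (6.53):
    u(k) = Kp*e(k) + Ki*sum_j e(j) + Kd*[e(k) - e(k-1)]."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e_sum = 0.0     # running summation replacing the integral
        self.e_prev = 0.0    # previous error for the difference term

    def update(self, e):
        self.e_sum += e
        u = self.kp * e + self.ki * self.e_sum + self.kd * (e - self.e_prev)
        self.e_prev = e
        return u

# drive a simple first-order plant toward a 1.0 setpoint (illustrative gains)
pid = DiscretePID(kp=0.5, ki=0.05, kd=0.1)
y = 0.0
for _ in range(100):
    u = pid.update(1.0 - y)
    y += 0.1 * (u - y)       # assumed first-order plant, not the robot axis
print(y)                     # settles near the 1.0 setpoint
```

The integral term drives the steady-state error to zero, which is why the summation in Eq. (6.53) must be kept across sampling periods rather than recomputed.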

Since the robot is a highly nonlinear, time-varying electromechanical system, simple PID control cannot meet the requirements, and a control method suited to strong nonlinearity is needed. A fuzzy PID controller is therefore designed to implement the external closed-loop feedback: the error and its rate of change are fed to the fuzzy inference system, and the PID control parameters are adjusted online according to the fuzzy rules. The principle of the fuzzy PID controller is shown in Fig. 6.6. To realize high-precision control over all degrees of freedom of the robot end-effector, six such controllers are required, one per axis. The x-axis motion in the robot base frame is taken as an example to design the fuzzy PID controller. The most important part of fuzzy PID control is the FIS (Fuzzy Inference System), and the GUI for FIS in Simulink/Matlab can be used to design, build, and analyze the fuzzy inference machine. The GUI includes three editors and two viewers:

(1) FIS Editor;
(2) Membership Function Editor;
(3) Rule Editor;
(4) Rule Viewer;
(5) Surface Viewer.

Fig. 6.6 Schematic diagram of fuzzy PID controller

The error sign of the robot is uncertain in different poses; hence the error and its rate of change are taken as absolute values. |e| and |ec| are chosen as the inputs of the fuzzy controller, and the variations of the PID parameters, ∆K_p, ∆K_i, and ∆K_d, are chosen as the outputs. The centroid method is selected for defuzzification, as shown in Fig. 6.7.

Fig. 6.7 FIS editor


Fig. 6.8 Membership function editor

The fuzzy sets of the input variables |e| and |ec| and the output variables ∆K_p, ∆K_i, and ∆K_d are defined as {ZO, S, M, B} to describe the magnitude of each variable. The fuzzy domain interval is [0, 3], and the triangular function is selected as the membership function for its high sensitivity and uniform distribution. The completed setup is shown in Fig. 6.8. According to practical experience in robot operation, the following relationships exist between |e|, |ec| and K_p, K_i, K_d:

(1) When |e| is big, considering the limitation of the RSI compensation ability, an appropriate value of K_p should be selected to avoid vibration of the end effector; to eliminate overshoot, K_i is set to 0; moreover, a small K_d should be used to avoid braking the robot in advance.
(2) When |e| is medium, the RSI compensation capability can cover the error margin, so K_p, K_i, and K_d should be appropriately increased.
(3) When |e| is small, K_p is increased to amplify the control effect of the controller, and K_i should also be increased to improve the control accuracy; meanwhile, to suppress vibration, K_d should be medium.

The fuzzy control rules are tabulated in Table 6.1. The physical domains of |e| and |ec| are [0, 1] and [0, 2], respectively, and their fuzzy domains are both [0, 3]. The physical domains of ∆K_p, ∆K_i, and ∆K_d are [0, 0.05], [0, 0.01], and [0, 0.01], respectively, and their fuzzy domains are [0, 3]. The quantization factors K_e and K_ec and the scaling factors U_p, U_i, and U_d can be calculated from Eqs. (6.54) and (6.55).


Table 6.1 Fuzzy control rule

|e|   |ec| = B   |ec| = M   |ec| = S   |ec| = Z
B     M\Z\S      S\S\M      M\M\Z      M\B\Z
M     B\Z\M      M\S\M      B\B\S      B\B\Z
S     B\Z\B      M\Z\B      B\B\S      B\B\S
Z     B\Z\B      M\Z\B      B\B\S      Z\Z\Z

Fig. 6.9 Fuzzy PID controller model of the robot

$$K_e = \frac{3}{1} = 3, \quad K_{ec} = \frac{3}{2} = 1.5, \tag{6.54}$$

$$U_p = \frac{0.05}{3} \approx 0.016, \quad U_i = \frac{0.01}{3} \approx 0.003, \quad U_d = \frac{0.01}{3} \approx 0.003. \tag{6.55}$$

By trial and error, the initial values of PID are determined as K p = 0.05, K i = 0.01, and K d = 0.01. The final fuzzy PID simulation model is shown in Fig. 6.9.
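The rule table and the factors of Eqs. (6.54)–(6.55) can be turned into a gain-scheduling function. The sketch below covers only the ∆K_p output; the evenly spaced triangular memberships and the weighted-average defuzzification are simplifying assumptions (the book uses the Matlab FIS with centroid defuzzification):

```python
# fuzzy domain [0, 3] with four triangular sets ZO, S, M, B (centers assumed even)
CENTERS = {"ZO": 0.0, "S": 1.0, "M": 2.0, "B": 3.0}

def membership(x):
    """Triangular membership degrees of x in each fuzzy set (unit half-width)."""
    return {name: max(0.0, 1.0 - abs(x - c)) for name, c in CENTERS.items()}

# first symbol of each cell of Table 6.1, indexed by (|e| set, |ec| set)
RULE_KP = {
    ("B", "B"): "M", ("B", "M"): "S", ("B", "S"): "M", ("B", "ZO"): "M",
    ("M", "B"): "B", ("M", "M"): "M", ("M", "S"): "B", ("M", "ZO"): "B",
    ("S", "B"): "B", ("S", "M"): "M", ("S", "S"): "B", ("S", "ZO"): "B",
    ("ZO", "B"): "B", ("ZO", "M"): "M", ("ZO", "S"): "B", ("ZO", "ZO"): "ZO",
}

def delta_kp(e, ec):
    """Defuzzified ∆Kp for physical inputs |e| in [0, 1] mm, |ec| in [0, 2]."""
    mu_e = membership(min(abs(e) * 3.0, 3.0))    # quantization Ke = 3 (Eq. 6.54)
    mu_ec = membership(min(abs(ec) * 1.5, 3.0))  # quantization Kec = 1.5 (Eq. 6.54)
    num = den = 0.0
    for (se, sec), out in RULE_KP.items():
        w = mu_e[se] * mu_ec[sec]                # rule firing strength
        num += w * CENTERS[out]
        den += w
    return (0.05 / 3.0) * (num / den) if den else 0.0  # Up rescales to [0, 0.05]

print(delta_kp(0.0, 0.0), delta_kp(0.5, 1.0))
```

The analogous tables for ∆K_i and ∆K_d (second and third symbols of each cell) would use the scaling factors U_i and U_d instead of U_p.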

6.4 Experimental Verification

6.4.1 Experimental Platform

To verify the effectiveness of the vision-guided accuracy enhancement method proposed in this chapter, a robotic platform is built to investigate the robot's motion accuracy, as shown in Fig. 6.10, where the KUKA KR500-3 industrial robot, a drilling


Fig. 6.10 Experimental setup

and milling end effector, a product tooling, the C-Track binocular vision sensor, and the PLC and PC workstation are included. The end effector for drilling and milling mounted on the robot is equipped with a motorized spindle and a BT40 tool holder. The product tooling is used for clamping and fixing the products to be machined; to avoid vibration during machining, it must have sufficient rigidity and be rigidly connected to the ground through high-lock bolts. The PC workstation is the upper computer running the integrated control software and is responsible for sending instructions to each device. The lower computer, a PLC, controls the various devices of the entire system, serving as a transfer station for the work instructions from the upper computer. The visual measurement equipment is Creaform's C-Track, which includes two digital cameras that automatically capture and detect all visual targets within the field of view. TIA Portal from Siemens is adopted as the PLC control software, in which instructions corresponding to the hardware devices, such as spindle rotation, spindle feed, and the opening of external automatic control, have been developed. The robot's software system communicates with the PC upper computer and the PLC control software through the KRL and RSI interfaces. The main process is as follows: a subprogram number is sent to the robot by the PLC control software; the robot control software receives it through the KRL interface and calls subprograms for different motion modes; next, the integrated control software sends the real-time control signal to the robot; finally, the robot control software receives it through the RSI interface to implement the visual-servo-based guidance.


6 Cartesian Space Closed-Loop Feedback

6.4.2 Kalman-Filtering-Based Estimation

To verify the estimation effect of the robot's state using Kalman filtering, a circular trajectory is planned in the robot's workspace. The maximum sampling frequency of the C-Track is 80 Hz, that is, the sampling period is 0.0125 s. According to Eq. (6.43), the state transition matrix F can be acquired as

$$F = \begin{bmatrix} I_{6\times 6} & 0.0125\, I_{6\times 6} \\ 0_{6\times 6} & I_{6\times 6} \end{bmatrix} \tag{6.56}$$

The robot's pose typically changes uniformly in the actual motion; therefore, its movement reliability is high. Set the prediction noise covariance matrix Q(k) to be

$$Q(k) = 10^{-4}\,\mathrm{diag}(10, 10, 10, 1, 1, 1, 10, 10, 10, 1, 1, 1) \tag{6.57}$$

The results of the pose error with and without filtering are shown in Fig. 6.11. The positioning errors are shown in Fig. 6.11a–c, and the posture errors in Fig. 6.11d–f. In the figures, red and blue lines represent the pose error with and without filtering, respectively. It can be seen clearly that the measurement noise present in the unfiltered data is significantly reduced after filtering, thus providing a better estimate of the robot's pose in actual motion.
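The predict–update cycle behind these results can be sketched in a few lines. The following is a minimal sketch, not the book's implementation: the 12-dimensional state stacks the 6-DOF pose and its velocity, F and Q follow Eqs. (6.56) and (6.57), and the measurement noise covariance R and the simulated trajectory are assumed placeholder values.

```python
import numpy as np

dt = 0.0125  # C-Track sampling period (80 Hz)

# Constant-velocity model of Eq. (6.56): state = [pose (6); velocity (6)]
F = np.block([[np.eye(6), dt * np.eye(6)],
              [np.zeros((6, 6)), np.eye(6)]])
H = np.hstack([np.eye(6), np.zeros((6, 6))])          # only the pose is measured
Q = 1e-4 * np.diag([10] * 3 + [1] * 3 + [10] * 3 + [1] * 3)  # Eq. (6.57)
R = 1e-2 * np.eye(6)                                  # assumed sensor noise

def kalman_step(x, P, z):
    """One predict-update cycle; x is the length-12 state, P its covariance,
    z a length-6 measured pose."""
    # Predict with the constant-velocity model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured pose
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(12) - K @ H) @ P
    return x, P

# Filter a noisy measured pose sequence along a uniform motion
x, P = np.zeros(12), np.eye(12)
rng = np.random.default_rng(0)
for k in range(200):
    true_pose = np.array([0.1 * k * dt] * 3 + [0.0] * 3)   # made-up trajectory
    z = true_pose + 0.05 * rng.standard_normal(6)          # noisy reading
    x, P = kalman_step(x, P, z)
```

Because the model in Eq. (6.56) matches uniform motion exactly, the filtered pose settles close to the true trajectory while the raw measurements keep their sensor noise.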

6.4.3 No-Load Experiment

6.4.3.1 Point-To-Point Motion

The robot's machining area is divided into an 800 mm × 800 mm × 800 mm cubic grid, in which 100 sampling points are then generated randomly. The KUKA robot's moving speed is set to 30% of its maximum speed during the experiment. The pose errors of the end effector are measured by the C-Track without compensation and with compensation guided by the visual servoing controller. The experimental results of the pose errors are shown in Fig. 6.12: the positioning error in Fig. 6.12a and the posture errors around each axis in Fig. 6.12b–d. The experimental statistics are shown in Table 6.2. It can be found that the maximum positioning error of the robot without compensation is 1.11 mm, which is reduced by 96.4% to 0.04 mm with compensation. In addition, the maximum angular errors around the x, y, and z axes drop from 0.03°, 0.02°, and 0.03° to the compensated values of 0.01°, 0.0001°, and 0.01°, declines of 66.7%, 99.5%, and 66.7%, respectively. Moreover, the RMS values of the pose errors with compensation drop by 70.6–96.2% compared to the ones without compensation. The above analyses show that the proposed method significantly improves the robot's positioning accuracy.
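The maxima, RMS values, and decline percentages reported here can be reproduced from raw error samples with a few lines. The helper functions below are an illustrative sketch (the names are ours, not the book's):

```python
import numpy as np

def error_stats(errors):
    """Maximum and RMS of a set of signed pose-error samples."""
    e = np.abs(np.asarray(errors, dtype=float))
    return e.max(), np.sqrt(np.mean(e ** 2))

def decline_pct(without, with_comp):
    """Percentage reduction achieved by compensation, as tabulated."""
    return 100.0 * (without - with_comp) / without

# e.g. the maximum positioning error 1.11 mm -> 0.04 mm is a 96.4% decline
print(round(decline_pct(1.11, 0.04), 1))  # 96.4
```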


Fig. 6.11 Error results of with and without Kalman filtering: a x-axis positioning error; b y-axis positioning error; c z-axis positioning error; d x-axis angular error; e y-axis angular error; f z-axis angular error


Fig. 6.12 Comparative results of errors with and without compensation in the point-to-point motion: a positioning error; b x-axis angular error; c y-axis angular error; d z-axis angular error


Table 6.2 Statistical errors in the point-to-point motion with and without compensation

| Error | Maximum, without comp. | Maximum, with comp. | Decline (%) | RMS, without comp. | RMS, with comp. | Decline (%) |
|---|---|---|---|---|---|---|
| Positioning error (mm) | 1.11 | 0.04 | 96.4 | 0.78 | 0.03 | 96.2 |
| x-axis angular error (°) | 0.03 | 0.01 | 66.7 | 0.019 | 0.004 | 78.9 |
| y-axis angular error (°) | 0.02 | 0.0001 | 99.5 | 0.012 | 0.002 | 83.3 |
| z-axis angular error (°) | 0.03 | 0.01 | 66.7 | 0.017 | 0.005 | 70.6 |

6.4.3.2 Linear Trajectory

To explore the error compensation effect of the proposed method for the robot's linear motion, a desired linear trajectory with a length of 1000 mm is planned in the working area. The robot is actuated to move along the line without load, with and without compensation, and the pose of the end effector is measured in each case. The experimental results of the pose error during the linear motion are shown in Fig. 6.13. It can be observed clearly from Fig. 6.13 that the pose error with compensation is dramatically lower than that without compensation. The statistical results are given in Table 6.3. The uncompensated maximum values of the positioning error and the angular errors are 0.48 mm, 0.05°, 0.03°, and 0.05°, respectively, which are reduced by 89.6%, 60%, 93.3%, and 60% to the compensated values of 0.05 mm, 0.02°, 0.002°, and 0.02°, respectively. This delivers significantly better results due to the proposed


Fig. 6.13 Comparative results of errors in line trajectory with and without compensation: a positioning error; b x-axis angular error; c y-axis angular error; and d z-axis angular error


Table 6.3 Statistical errors in line trajectory with and without compensation

| Error | Maximum, without comp. | Maximum, with comp. | Decline (%) | RMS, without comp. | RMS, with comp. | Decline (%) |
|---|---|---|---|---|---|---|
| Positioning error (mm) | 0.48 | 0.05 | 89.6 | 0.31 | 0.02 | 93.5 |
| x-axis angular error (°) | 0.05 | 0.02 | 60.0 | 0.027 | 0.005 | 81.5 |
| y-axis angular error (°) | 0.03 | 0.002 | 93.3 | 0.014 | 0.001 | 92.9 |
| z-axis angular error (°) | 0.05 | 0.02 | 60.0 | 0.023 | 0.007 | 69.6 |

method in the present study. Additionally, the RMS values of the pose errors with compensation, i.e., 0.02 mm, 0.005°, 0.001°, and 0.007°, decrease by 93.5%, 81.5%, 92.9%, and 69.6%, respectively, compared to those of 0.31 mm, 0.027°, 0.014°, and 0.023° without compensation. In conclusion, superior results are achieved with the proposed method, which remarkably enhances the robot's linear-trajectory accuracy.

6.4.3.3 Circular Trajectory

To investigate the error compensation effect of the proposed method for the robot's circular motion, a desired circular trajectory with a radius of 500 mm is planned in the robot's workspace. Similarly, the pose of the end effector is measured while the robot moves along the circular trajectory with and without compensation. The experimental results of the pose error during the circular motion are shown in Fig. 6.14, and the corresponding statistics are given in Table 6.4. From Fig. 6.14, one can find that the pose error of the robot in the circular trajectory motion is decreased greatly with the proposed compensation method. In detail, Table 6.4 shows that the uncompensated maximum values of the positioning error and the angular errors reach 0.48 mm, 0.04°, 0.03°, and 0.05°, respectively, which are reduced by 85.4%, 50%, 93.3%, and 60% to the compensated values of 0.07 mm, 0.02°, 0.002°, and 0.02°, respectively. Furthermore, the RMS values of the pose errors with compensation, i.e., 0.02 mm, 0.006°, 0.002°, and 0.007°, decrease by 94.7%, 76%, 86.7%, and 69.6%, respectively, compared to those of 0.38 mm, 0.025°, 0.015°, and 0.023° without compensation. Overall, the results with compensation are substantially better than those without, which suggests that the proposed method is extremely effective in promoting the robot's circular-trajectory accuracy.



Fig. 6.14 Comparative results of errors in circular trajectory with and without compensation: a positioning error; b x-axis angular error; c y-axis angular error; d z-axis angular error

Table 6.4 Statistical errors in circular trajectory with and without compensation

| Error | Maximum, without comp. | Maximum, with comp. | Decline (%) | RMS, without comp. | RMS, with comp. | Decline (%) |
|---|---|---|---|---|---|---|
| Positioning error (mm) | 0.48 | 0.07 | 85.4 | 0.38 | 0.02 | 94.7 |
| x-axis angular error (°) | 0.04 | 0.02 | 50.0 | 0.025 | 0.006 | 76.0 |
| y-axis angular error (°) | 0.03 | 0.002 | 93.3 | 0.015 | 0.002 | 86.7 |
| z-axis angular error (°) | 0.05 | 0.02 | 60.0 | 0.023 | 0.007 | 69.6 |

6.5 Summary

In this chapter, an error compensation strategy for robotic machining is developed using the visual-guidance approach. Firstly, the influence of the construction of the end-effector frame on the measurement error of the visual sensor is mathematically explored. Next, a Kalman filter is designed to improve the measurement accuracy of the robot pose. Then, a fuzzy PID controller is devised to form the closed-loop feedback using the measured signal from the visual sensor. Finally, no-load motion experiments are conducted to verify the effectiveness and feasibility of the proposed method for improving the robot's pose accuracy. The experimental results suggest that the accuracy of the compensated robot is greatly improved compared to the uncompensated one.


References

1. Chen SY. Kalman filter for robot vision: a survey. IEEE Trans Industr Electron. 2012;59(11):4409–20.
2. Janabi-Sharifi F, Marey M. A Kalman-filter-based method for pose estimation in visual servoing. IEEE Trans Robotics. 2010;26(5):939–47.

Part II

Applications

Chapter 7

Applications in Robotic Drilling

7.1 Introduction

In this chapter, the error compensation methods proposed above are applied to a robotic drilling system. After the error compensation is performed, the positioning accuracy of each drilled hole is measured to verify the effectiveness and feasibility of the methods proposed in this book. Firstly, a robotic drilling platform is built and its hardware and software components and working principles are explained. Then the approaches for establishing the various frames needed in the drilling process are given in detail. Finally, the robot is controlled to conduct the drilling operation to validate the error compensation methods.

7.2 Robotic Drilling System

The robotic drilling system is composed of multiple hardware and software subsystems, whose details are introduced as follows.

7.2.1 Hardware

The hardware of the robotic drilling system mainly includes an industrial robot, a multi-functional end-effector, a heavy rail, tooling, and auxiliary components. As the core part of the whole system, the industrial robot is responsible for carrying the end-effector for movement and positioning. It is necessary to use error compensation technology to improve its positioning accuracy, as various sources of error exist in the motion of the TCP. Through the cooperation of the heavy rail and external automatic control technology, large-scale and high-precision movement of the robot can be realized.

© Science Press 2023 W. Liao et al., Error Compensation for Industrial Robots, https://doi.org/10.1007/978-981-19-6168-7_7


The end-effector is the core functional part of the robotic drilling system, and its structure is shown in Fig. 7.1. In addition to the drilling function, it can also provide functions such as datum detection and normal detection with the associated sensors on the end-effector. The main functional modules of the end-effector are elaborated as follows.

(1) The datum measurement module consists of a 2D laser scanner, a servo-motor, a ball screw, and a grating scale. The laser scanner is actuated to move in a straight line to realize the online scanning of datum holes and to determine their actual 3D coordinates. By computing the deviation between the actual and theoretical coordinates of the datum holes, the positioning errors of the holes to be drilled can be corrected.

(2) The normal measurement module is composed of four laser displacement sensors mounted on the pressure foot. Four laser beams are emitted to the product surface to obtain the distance between each sensor and the skin surface. Then, the actual normal vector of the approximate plane where the hole is to be drilled is fitted with a normal detection algorithm and compared with the theoretical normal vector to correct the normal deviation.

Fig. 7.1 Schematic diagram of end-effector
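The plane-fitting step inside such a normal detection algorithm can be sketched as follows. This is an assumed SVD-based least-squares fit, not necessarily the algorithm used on the end-effector, and the four sample points are made up:

```python
import numpy as np

def fit_surface_normal(points):
    """Least-squares normal of the plane through 3-D points measured on the
    skin surface (one per laser displacement sensor), via SVD of the
    centered point cloud."""
    P = np.asarray(points, dtype=float)
    centered = P - P.mean(axis=0)
    # The right singular vector with the smallest singular value is normal
    # to the best-fit plane of the points
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    return n / np.linalg.norm(n)

# Four sensor intersection points on a slightly tilted surface (z = 0.01 x)
pts = [(0, 0, 0.00), (1, 0, 0.01), (0, 1, 0.00), (1, 1, 0.01)]
n = fit_surface_normal(pts)
theta = np.degrees(np.arccos(abs(n[2])))  # angular deviation from the z-axis
```

Comparing `theta` with the tolerance on normal deviation would then decide whether the end-effector posture needs correction.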


(3) The pressure foot provides a certain amount of preload to the product surface before drilling, to reduce the interlayer gap between workpieces and to improve the stiffness of the end-effector during drilling. The compression of the pressure foot is realized by two feeding cylinders, and the compression force can be adjusted by the electromagnetic servo-valve.

(4) The drilling module adopts a servo-motor to actuate the motorized spindle through the ball screw to complete the hole-making.

The tooling is responsible for the positioning and clamping of the workpiece to be machined. To ensure the stability of the drilling, the tooling needs to have sufficiently high stiffness and to be connected to the foundation floor through anchor bolts. In addition, to clamp various types of workpieces and products, the tooling is equipped with several groups of prefabricated mounting holes.

The topology of the hardware configuration for the robotic drilling system is shown in Fig. 7.2. From the control point of view, the hardware system can be divided into an upper layer, a middle layer, and a lower layer. The upper layer integrates the robotic drilling system through the industrial PC, and is in charge of planning the machining tasks and of judgment, decision-making, and the sending of instructions. The middle layer is responsible for scheduling the subsystems and functional components through soft PLC, receiving task instructions from the upper layer and feeding information back to it, as well as for the modularized division of, and data interaction with, the controlled hardware equipment in the lower layer. The lower layer is integrated into the control system with communication modes compatible with the controlled hardware equipment, completing the data collection and execution of machining tasks. The control network of the system mainly realizes communication and interaction through industrial Ethernet and EtherCAT: Ethernet is mainly used for non-real-time communication tasks, while EtherCAT is used for real-time communication tasks.

7.2.2 Software

Off-line programming software and integrated control software are the main software of the robotic drilling system. The off-line programming software accomplishes the functions of adding and extracting product process information, planning and simulating machining tasks, and compiling and post-processing machining programs, and finally generates a machining program that can be executed in the integrated control software. The integrated control software is installed on the upper computer and mainly realizes the planning, execution, and monitoring of machining tasks. In addition, it integrates the logic control algorithms and database, and can carry out unified and efficient management of the robotic drilling system. The workflow of the offline programming software is shown in Fig. 7.3. Firstly, according to the task requirements and process information, the process model of the product is built, from which the product information to be machined is extracted


Fig. 7.2 Hardware configuration

on the basis of the digital model and displayed on the main interface of the software. Secondly, the pose and trajectory of the robot are planned in the machining sequence and process planning module. Meanwhile, the processing information of each hole and the necessary event information are utilized to obtain a file that can be recognized by the robot controller. In the simulation and verification module, the generated machining task is simulated to check for interference. Finally, the NC program that can be used in the integrated control system is generated by the postprocessing compiler.

The integrated control software is developed using the design method of separating the user interface layer from the logic function layer, and its system architecture is shown in Fig. 7.4. The user interface layer is divided into five functional modules of NC machining control, robot control, end-effector control, measurement control, and system management according to the functions of the integrated control software for automatic drilling, and the interface is designed and managed in the form of layered pages. Modules such as logic control, algorithm recall, database management, alarm management, log management, and communication control are included in the core function management layer, and a unified function entrance is provided for the user interface layer to efficiently call the above function modules. For the


Fig. 7.3 Workflow of offline programming software

communication control layer, correct and efficient communication methods and protocols are selected to achieve a close association among the integrated control software, the lower-level robot control program, and the middle-level PLC control program. It is worth mentioning that the offline error compensation algorithm proposed in this book can be programmed in the core function management layer of the integrated control software; when the robot end-effector needs to be positioned, it can be easily invoked by the system.

7.3 Establishment of Frames

To conduct the error compensation of the robot, several frames need to be established in the robot workspace, including the world frame, the robot base frame, the robot flange frame, the tool frame, etc. The measurement and establishment methods of each frame are different; therefore, they are described in detail in this section.

7.3.1 World Frame

The world frame is indispensable for the measurement in actual applications of robot error compensation. The specific reasons for setting the world frame are as follows.



Fig. 7.4 System architecture of integrated control software

(1) In order to evaluate the performance of the robot positioning error, an appropriate frame normally needs to be selected as the measurement reference frame. Since the robot flange frame and the tool frame are moving, they cannot be used as the measurement reference frame.

(2) When the robot is installed on a moving platform, e.g., a ground rail, the robot base frame moves and cannot be used as the measurement reference frame.

(3) The laser tracker frame also cannot be directly used as the measurement reference frame. One reason is that, to facilitate the measurement, the position of the laser tracker sometimes needs to be adjusted, hence the laser tracker frame may vary. The other is that the measurement accuracy of the laser tracker may degrade as the measuring time increases. If the laser tracker is powered off and restarted, its measurement frame will also be reset along with the encoder. Therefore, certain repetitive errors will exist for the laser tracker frame, so it cannot be directly used as the measurement reference frame.

Several fixed reference points can be set in the space of the experimental platform, and the world frame can be established by measuring the positions of these fixed reference points. Here, the landmarks on the ground are used as the fixed reference


Fig. 7.5 Schematic diagram of establishment method of the world frame

points. In order to ensure high repetitive accuracy of the world frame, the number of fixed reference points should be no less than 4, and they should not be collinear. Generally, a pinhole that can mate with the holder of the SMR is used to determine the positions of the fixed reference points. The method of establishing the world frame is shown in Fig. 7.5. The specific steps are given as follows:

(1) Use a laser tracker to measure the position coordinates of all the fixed reference points, which are represented by the solid dots in Fig. 7.5.
(2) Utilize the measured positions of all the fixed reference points to fit an optimal plane, which contains the position information of all the fixed reference points.
(3) Project all fixed reference points onto the fitted plane to obtain a new set of points, represented by hollow dots.
(4) Use the three-point construction method to establish the world frame by selecting the plane projection points (such as P1', P2', and P3' shown in Fig. 7.5). The basic principle of the three-point construction method is to regard P1' as the origin of the world frame, the second point P2' as a point on the x-axis of the frame, and the third point P3' as a point in the x–y plane of the frame. In this way, P1' and P2' directly determine the origin and x-axis of the frame by

$$\hat{x} = \frac{\overrightarrow{P_1' P_2'}}{\left\|\overrightarrow{P_1' P_2'}\right\|} \tag{7.1}$$

The z-axis of the frame is determined by the normal direction of the plane on which the three points P1', P2', and P3' lie:

$$\hat{z} = \frac{\overrightarrow{P_1' P_2'} \times \overrightarrow{P_1' P_3'}}{\left\|\overrightarrow{P_1' P_2'} \times \overrightarrow{P_1' P_3'}\right\|} \tag{7.2}$$

When the x-axis and z-axis of the frame are confirmed, the y-axis of the world frame can be obtained by the right-hand rule.
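Eqs. (7.1) and (7.2) plus the right-hand rule translate directly into code. A minimal numpy sketch of the three-point construction (the function name is ours):

```python
import numpy as np

def frame_from_three_points(p1, p2, p3):
    """Build a right-handed frame by the three-point construction method:
    p1 is the origin, p2 lies on the x-axis, p3 lies in the x-y plane
    (Eqs. 7.1 and 7.2)."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    x = p2 - p1
    x = x / np.linalg.norm(x)              # Eq. (7.1)
    z = np.cross(p2 - p1, p3 - p1)
    z = z / np.linalg.norm(z)              # Eq. (7.2)
    y = np.cross(z, x)                     # right-hand rule
    # Homogeneous transform from the new frame to the measurement frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p1
    return T

# Three projected reference points (made-up coordinates)
T = frame_from_three_points([1, 1, 0], [2, 1, 0], [1, 5, 0])
```

The returned 4×4 matrix can then be used to express laser-tracker measurements in the constructed frame by multiplying with its inverse.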


The three-point construction method will also be adopted for the subsequent frames. It is worth noting that, when establishing the world frame, the plane projections of the fixed reference points are used instead of their actual positions; the purpose is to ensure that the position information of all the fixed reference points is contained in the established world frame through the fitted plane, and to reduce the random error of establishing the world frame.

7.3.2 Robot Base Frame

Specific poses of the end-effector in the workspace are typically represented in the robot base frame. The basic idea of establishing the robot base frame is (1) to rotate several joints of the robot, (2) to measure certain points fixed on the robot flange using a laser tracker, and (3) to establish the robot base frame utilizing the three-point construction method. The establishment method of the robot base frame is shown in Fig. 7.6, with the steps as follows:

(1) Move the robot to its HOME position, and place the SMR of the laser tracker at a fixed position on the end-effector.
(2) Rotate the A1 axis of the robot, keeping the other joint angles unchanged, and measure the position of the center of the SMR along the rotation path using the laser tracker to acquire a series of points. Then fit a circle with these points and obtain its center C1.
(3) Move the robot to its HOME position and rotate the A2 axis, keeping the other joint angles unchanged. Use the same method as in step (2) to fit a circle and obtain its center C2.

Fig. 7.6 Schematic diagram of establishment method for the robot base frame


(4) Move the robot to its HOME position and rotate the A6 axis, keeping the other joint angles unchanged. Use the same method as in step (2) to fit a circle and obtain its center C6.
(5) Build Plane 1 passing through C1 along the normal direction of Circle 1. Then translate Plane 1 to obtain Plane 2 passing through C2, which is theoretically parallel to Plane 1. The xoy plane of the robot base frame can then be obtained by translating Plane 2 downward along its normal direction by a distance d0 (d0 is the distance between the A2 axis and the xoy plane of the base frame, taken from the robot's theoretical kinematic parameters).
(6) Project C1, C6, and any measuring point P onto the xoy plane of the base frame. C1', the projection of C1, is taken as the origin of the base frame; C6', the projection of C6, as a point on the x-axis; and P' as a point in the x–y plane. Then the three-point construction method is used to finish the establishment of the robot base frame.
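The circle-fitting step used above can be sketched as a plane fit followed by an algebraic least-squares (Kasa) circle fit in that plane; this is one standard way to do it, not necessarily the book's exact procedure:

```python
import numpy as np

def fit_circle_center(points):
    """Center of the circle traced by the SMR while one joint rotates.
    The measured 3-D points are first reduced to their best-fit plane (SVD),
    then an algebraic (Kasa) least-squares circle fit is solved in 2-D."""
    P = np.asarray(points, dtype=float)
    c0 = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - c0)
    u, v = vt[0], vt[1]               # in-plane basis; vt[2] is the plane normal
    x = (P - c0) @ u
    y = (P - c0) @ v
    # Kasa fit: x^2 + y^2 = 2a*x + 2b*y + c is linear in (a, b, c)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, _ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    return c0 + a * u + b * v         # circle center back in 3-D

# Noise-free arc around (10, 20, 5) in the z = 5 plane, radius 3 (made-up)
t = np.linspace(0.2, 2.0, 30)
pts = np.column_stack([10 + 3 * np.cos(t), 20 + 3 * np.sin(t),
                       np.full_like(t, 5.0)])
center = fit_circle_center(pts)
```

An arc (rather than a full circle) is enough for the fit, which matters here because joint limits usually prevent a full revolution.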

7.3.3 Robot Flange Frame

The function of the flange frame is to obtain the pose transformation of the tool frame w.r.t. the flange frame, which is stored in the robot controller; the tool frame can then be controlled directly to move the robot to the target pose. The relationship between the flange frame and the tool frame is shown in Fig. 7.7. The origin of the flange frame is at the center of the flange. The z-axis points out of the robot along the normal direction of the flange. When the robot is at the zero position, the x-axis of the flange frame is vertically downward. The y-axis can then be determined according to the right-hand rule. There are two methods for establishing the flange frame: the measurement construction method and the space transformation method. The former is to measure the feature positions on the flange directly with the laser tracker, and the measured geometric

Fig. 7.7 Relationship between the flange frame and the tool frame


Fig. 7.8 Measuring construction method of the robot flange frame

elements are used to construct the flange frame. This method is suitable for situations where the flange can be measured directly, such as during the calibration phase when the robot carries no payload. The space transformation method is suitable for situations where the flange cannot be directly measured. The principle is to establish the robot base frame first, and then to determine the spatial position of the flange frame w.r.t. the base frame according to the theoretical kinematic model of the robot, which is used as a basis to spatially transform the base frame into the flange frame. Note that the flange frame should be established by the measurement construction method whenever conditions permit. The method of establishing the robot flange frame using the measurement construction method is shown in Fig. 7.8. The specific steps are:

(1) Place the SMR of a laser tracker on the flange plane, and measure the three-dimensional coordinates of several points on the flange plane using the laser tracker. Fit the flange plane using these points. Since the measured data are the position coordinates of the center of the SMR, the plane should be translated by a distance equal to the SMR radius along the normal direction.
(2) Place the SMR at the evenly-distributed mounting holes on the robot flange, and measure the three-dimensional coordinates of the center of the SMR at these positions, similarly to step (1).
(3) Fit a circle, with its center denoted as C1, using the hole center points obtained in step (2).
(4) Determine the three-dimensional coordinates of the midpoint C2 of the centers of the two holes C3 and C4 shown in Fig. 7.8.


(5) Take point C 1 as the origin of the flange frame and point C 2 as a point on the x axis of the flange frame. Regard point C 3 as a point in the x–y plane of the flange frame. Then utilize the three-point method to construct the flange frame.

7.3.4 Tool Frame

The tool frame represents the pose of the parts on the end-effector, such as cutters, sensors, etc. When programming a task for an industrial robot, it is more intuitive and convenient to use the tool frame to decide the pose of the point to be attained. In the verification test of the robot error compensation, the tool used here is the SMR of the laser tracker together with its measuring rod, hence the SMR position can be identified as the origin of the tool frame. The measuring rod is mounted on the motorized spindle of the end-effector. There is a pinhole at the front of the measuring rod, where the SMR holder can be installed. The SMR is fixed as shown in Fig. 7.9. The origin of the tool frame lies at the center of the SMR. The x-axis of the tool frame is along the feed direction of the motorized spindle. The z-axis is vertically upward, and the y-axis is determined by the right-hand rule. The reason for defining the tool frame in this way is to make each axis direction of the tool frame approximate that of the robot base frame, such that the target point is more convenient to program. The method for setting up the tool frame is shown in Fig. 7.10, and the specific steps are as follows.


Fig. 7.9 Schematic diagram of installation of measuring rod and SMR



Fig. 7.10 Schematic diagram of establishment method of the tool frame

(1) Fix the SMR at an arbitrary position on the end-effector. Rotate the A4 axis while keeping the other joint angles unchanged, and determine the three-dimensional coordinates of the center of the SMR along the movement track to fit a circle, with its center denoted as C4.
(2) Install the measuring rod on the motorized spindle of the end-effector, and fix the SMR of the laser tracker at the tip of the measuring rod. Control the servo motor to drive the motorized spindle in the feed direction. At the same time, measure the three-dimensional coordinates of the center of the SMR along the trajectory, and then fit a straight line to the measured point data.
(3) Return the motorized spindle to the zero position, and record the reading of the grating scale. Then drive the motorized spindle to a certain position in the feed direction, set it as the TCP, and record the reading of the grating scale again. The difference between the two scale readings can be used as the basis for repeated positioning to the TCP.
(4) Through the TCP selected in step (3), create a plane along the direction of the fitted line. Project the center point C4 and another arbitrary point P onto this plane, denoted as C4' and P'. Then take the TCP as the origin of the tool frame, point C4' as a point on the z-axis of the tool frame, and point P' as a point in the oyz plane of the tool frame. Finally, establish the tool frame by the three-point method.
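The straight-line fit in step (2) is a standard principal-direction fit of the measured SMR centers. A sketch, with an assumed SVD-based method and made-up sample points:

```python
import numpy as np

def fit_line(points):
    """Least-squares 3-D line through points measured along the spindle feed:
    returns a point on the line (the centroid) and the unit direction (the
    principal right singular vector of the centered points)."""
    P = np.asarray(points, dtype=float)
    c = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - c)
    return c, vt[0]

# SMR centers recorded while the spindle feeds along one direction
pts = [(0, 0, 0), (1, 0.5, 0), (2, 1, 0), (3, 1.5, 0)]
c, d = fit_line(pts)
```

The sign of the returned direction is arbitrary, so in practice it would be oriented to agree with the spindle's positive feed direction.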



Fig. 7.11 Schematic diagram of establishment of the product frame

7.3.5 Product Frame

The product frame provides a datum for the positions of the holes to be drilled and for assessing the accuracy of the drilled holes. To construct the product frame, it is usually necessary to predrill several datum holes on the product. Since the drilled object considered in this book is a flat plate, whose normal direction is uniform, the two datum holes shown in Fig. 7.11 are used to establish the product frame. The specific steps are as follows.

(1) Move the SMR over the entire test plate, and use a laser tracker to measure points on the surface of the plate. Utilize the measured points to fit a plane, i.e., the datum plane, and regard the normal direction of this plane as the z-axis of the product frame.
(2) Place the SMR at the positions of datum holes 1 and 2 shown in Fig. 7.11, and use the laser tracker to measure the position of the SMR center. Then project the measured points onto the datum plane to obtain the position coordinates of datum holes 1 and 2, respectively.
(3) Set datum hole 1 as the origin of the product frame, and the straight line fitted through datum holes 1 and 2 as the x-axis of the product frame. With the z-axis established in step (1), the y-axis is determined by the right-hand rule, thereby constructing the product frame.

7.3.6 Transformation of Frames

The substance of transforming frames is to determine the relative pose between the individual frames. The meaning of frame transformation lies in two aspects. On the one hand, the three-dimensional coordinates of the measurement data can be obtained in any frame, which is convenient for data observation and data processing. On the other hand, it establishes the pose relationship between the robot and its surrounding environment, facilitating the motion programming and motion control of the robot. The transformations between the frames are shown in Fig. 7.12. When the error compensation test for the robot is carried out, the laser tracker and the landmark are first used to establish the world frame. Then the base frame can be obtained from the transformation wTb of the robot base frame w.r.t. the world frame. Next, the transformation fTt of the tool frame w.r.t. the robot flange frame is entered into the robot controller to program the points to be measured. When the robot is controlled for positioning tasks, the robot controller automatically converts the pose of the tool frame w.r.t. the base frame into the pose of the flange frame w.r.t. the base frame, solves the inverse kinematics, and controls the rotation of each axis to achieve the positioning of the target point. Finally, the laser tracker is used to directly measure the actual pose of the tool frame w.r.t. the base frame, and the corresponding positioning error is obtained by comparison with the theoretical pose.
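The controller-side conversion from the desired tool pose bTt to the flange pose bTf, given the tool transformation fTt, is one line of homogeneous-transform algebra; a minimal sketch with illustrative names:

```python
import numpy as np

def flange_pose(T_b_t, T_f_t):
    """Pose of the flange frame w.r.t. the base frame.

    T_b_t -- desired pose of the tool frame w.r.t. the base frame
    T_f_t -- tool frame w.r.t. the flange frame (entered in the controller)
    The controller internally computes T_b_f = T_b_t * inv(T_f_t).
    """
    return T_b_t @ np.linalg.inv(T_f_t)
```

The inverse kinematics is then solved on the resulting flange pose.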

Fig. 7.12 Transformation among frames


7.4 Drilling Applications

7.4.1 Error-Similarity-Based Error Compensation

In this section, the robotic drilling accuracy with the error-similarity-based compensation method is tested and verified. First, different types of fixtures are used to set up the angles (0°, 10°, and −10°) between the test plate and the installation plane of the tooling, as shown in Fig. 7.13. Then, the datum holes for establishing the product frame on the test plate are pre-drilled, and the positions of the holes to be drilled are planned in the product frame. Using the coordinate transformations described above, the hole positions in the product frame are transformed into coordinates in the robot base frame, and the error-similarity-based compensation test is performed. Finally, the robot is controlled to drive the end-effector to drill the holes, and the laser tracker is used to measure and verify the positioning accuracy of the holes in the product frame.

In each working condition, 50 holes to be verified are planned in the product frame. As shown in Fig. 7.14, the four ∅8 mm datum holes, indicated by circles, are pre-drilled to establish the product frame. The verification holes and the trial drilling holes drilled by the robot in the 0° working condition are indicated by the solid box and the dotted box respectively in Fig. 7.14a, where the trial drilling holes are not used to assess the positioning accuracy. The verification holes on the same test plate drilled by the robot under the +10° and −10° working conditions are indicated by the solid box and the dotted box respectively, as shown in Fig. 7.14b. In each working condition, the distances between adjacent verification holes in the y-axis and z-axis directions of the product frame are 20 mm and 50 mm, respectively.

Fig. 7.13 Three working conditions of robotic drilling with different installation angles: a 0°; b 10°; c −10°

Fig. 7.14 Hole positions drilled by the robot under three working conditions: a 0°; b ±10°

Before drilling, to prevent the robot from interfering with the tooling, the actual positioning errors of the sampling points are first measured at a position far from the tooling. Then, the robot is moved to the drilling station near the tooling. Next, the positions of the verification holes in the product frame are converted to those in the base frame, and the error compensation is carried out. Finally, the robot drives the end-effector to drill the holes. After the robot completes the drilling under each working condition, the laser tracker is used to measure the datum holes on the test plate to establish the product frame. Then, the positions of the verification holes are measured in the product frame, and the y-axis error, z-axis error and the positioning error on the yz plane of the verification holes in the product frame are determined, respectively. The measurement results under the three working conditions are shown in Figs. 7.15, 7.16, and 7.17. The statistical data verifying the hole position accuracy are shown in Table 7.1.
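Error-similarity-based compensation rests on the idea that positioning errors at nearby configurations are similar, so the error at a target point can be predicted from the errors measured at sampled points and subtracted from the command. The book's estimator (developed in earlier chapters) is more elaborate; the sketch below uses plain inverse-distance weighting as a stand-in, with hypothetical names:

```python
import numpy as np

def estimate_error(target, sample_pts, sample_errs, eps=1e-9):
    """Estimate the positioning error at `target` by inverse-distance
    weighting of the errors measured at nearby sample points."""
    d = np.linalg.norm(sample_pts - target, axis=1)
    if d.min() < eps:                 # target coincides with a sample point
        return sample_errs[np.argmin(d)]
    w = 1.0 / d
    return (w[:, None] * sample_errs).sum(axis=0) / w.sum()

def compensate(target, sample_pts, sample_errs):
    """Pre-correct the commanded position by the predicted error."""
    return target - estimate_error(target, sample_pts, sample_errs)
```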

Fig. 7.15 Robotic drilling results at 0° (APy, APz and APp errors vs. point number)

Fig. 7.16 Robotic drilling results at +10° (APy, APz and APp errors vs. point number)


7 Applications in Robotic Drilling

Table 7.1 Statistical results of the hole position errors in robotic drilling

Installation angle (°) | Error | Range (mm)    | Average value (mm) | Standard deviation (mm)
0                      | APy   | [−0.18, 0.27] | 0.04               | 0.12
0                      | APz   | [−0.23, 0.14] | −0.08              | 0.09
0                      | APp   | [0.03, 0.31]  | 0.16               | 0.07
+10                    | APy   | [−0.24, 0.31] | 0.10               | 0.16
+10                    | APz   | [−0.10, 0.25] | 0.07               | 0.09
+10                    | APp   | [0.04, 0.32]  | 0.20               | 0.07
−10                    | APy   | [−0.07, 0.28] | 0.13               | 0.09
−10                    | APz   | [−0.23, 0.26] | −0.10              | 0.12
−10                    | APp   | [0.02, 0.29]  | 0.18               | 0.07

Fig. 7.17 Robotic drilling results at −10° (APy, APz and APp errors vs. point number)

It can be seen from the experimental results of robotic drilling that the positioning accuracy of the holes is within 0.3 mm with the error-similarity-based compensation approach, which meets the precision requirement for hole positions in aircraft assembly (generally 0.5 mm). Additionally, the robotic drilling is tested under three working conditions, showing that the error-similarity-based compensation approach is robust w.r.t. robot configurations. Therefore, the compensation method proposed here effectively solves the problem of low positioning accuracy in robotic drilling and has high engineering application value.


7.4.2 Joint Space Closed-Loop Feedback

In this section, the robotic drilling experiment is implemented using joint space closed-loop feedback control. First, the workpiece is fixed on the tooling, and four datum holes are pre-drilled on the workpiece to establish the product frame. Then, in the product frame, 21 hole positions to be drilled are planned with a y-direction hole spacing of 20 mm, a z-direction hole spacing of 25 mm, and a hole diameter of 5 mm. Finally, according to the error compensation method proposed in Chap. 6, the robot is controlled to conduct the drilling operation.

As shown in Fig. 7.18, the holes inside the circles on the workpiece are the 4 pre-drilled datum holes with a diameter of 5 mm, and the holes in the rectangle are the verification holes drilled by the robot. The actual position of the center of each verification hole in the product frame is measured and compared with the command position to obtain the positioning error of the drilled hole. The robotic drilling results are shown in Fig. 7.19, and the statistical data of the positioning errors are shown in Table 7.2. It can be seen from the drilling results that the maximum values of the y-direction error, z-direction error and positioning error of the drilled holes are 0.23 mm, 0.17 mm and 0.24 mm, respectively, which meets the requirement of 0.25 mm hole positioning accuracy in aircraft assembly.
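The compensation method of Chap. 6 is not restated here; conceptually, the joint angles measured by the external grating rulers are compared with the target angles and the residual is fed back into the command until it falls below a tolerance. A rough sketch of that loop, where the "plant" below merely simulates a joint error and is not a robot model:

```python
import numpy as np

def joint_closed_loop(q_target, measure, tol=1e-4, max_iter=20):
    """Iteratively correct the commanded joint angles until the
    externally measured angles match the target within tol.

    measure(q_cmd) -- joint angles the robot actually reaches for the
                      command q_cmd, as read from external encoders
    """
    q_cmd = q_target.copy()
    for _ in range(max_iter):
        residual = q_target - measure(q_cmd)
        if np.max(np.abs(residual)) < tol:
            break
        q_cmd = q_cmd + residual      # feed the residual back into the command
    return q_cmd

# Toy plant: each joint overshoots by 1% plus a constant 0.002 rad offset.
plant = lambda q: 1.01 * q + 0.002
q_target = np.array([0.5, -1.0, 0.8])
q_cmd = joint_closed_loop(q_target, plant)
```

Because the residual shrinks by the plant's gain error each pass, a few iterations suffice in practice.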

7.4.3 Cartesian Space Closed-Loop Feedback

In this section, Cartesian space closed-loop feedback control is implemented to evaluate the drilling accuracy of the robot. A robotic platform is built to investigate the robot's motion accuracy, as shown in Fig. 7.20, which includes a KUKA KR500-3 industrial robot, a drilling and milling end-effector, product tooling, the C-Track binocular visual sensor, a PLC and a PC workstation.
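Cartesian space closed-loop feedback works analogously to the joint space loop, but the correction acts on the commanded pose using the pose measured by the visual sensor (here the C-Track). A minimal position-only sketch under these assumptions (orientation is handled similarly with a rotation-error term; names are illustrative):

```python
import numpy as np

def cartesian_closed_loop(p_target, measure, gain=1.0, tol=1e-3, max_iter=20):
    """Correct the commanded Cartesian position using external pose
    feedback until the measured position matches the target.

    measure(p_cmd) -- position actually reached for command p_cmd,
                      as reported by the external visual sensor
    """
    p_cmd = p_target.copy()
    for _ in range(max_iter):
        err = p_target - measure(p_cmd)
        if np.linalg.norm(err) < tol:
            break
        p_cmd = p_cmd + gain * err    # shift the command opposite the error
    return p_cmd

# Toy plant: the robot reaches the command plus a constant bias (mm).
plant = lambda p: p + np.array([0.5, -0.2, 0.1])
p_cmd = cartesian_closed_loop(np.array([800.0, 0.0, 600.0]), plant)
```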

Fig. 7.18 Robotic drilling using joint space closed-loop feedback: a scene; b drilled workpiece


Fig. 7.19 Robotic drilling results with joint space closed-loop feedback

Table 7.2 Error results of robotic drilling with joint space closed-loop feedback

Error | Range (mm)    | Average value (mm)
APy   | [−0.10, 0.23] | 0.10
APz   | [−0.14, 0.17] | 0.07
APp   | [0.05, 0.24]  | 0.13

Fig. 7.20 Robotic drilling and milling setup (KUKA KR500-3 industrial robot, end-effector with visual targets, C-Track, PLC, PC workstation, tooling)


Seventy holes to be drilled are planned on a 120 mm × 225 mm sheet metal part. The distance between adjacent holes is 15 mm and the diameter of each hole is 3 mm. Moreover, the left and right holes are datum holes with a diameter of 4 mm, which need to be drilled in advance. The product model is shown in Fig. 7.21. The robot is driven to the positions to be processed without and with compensation respectively, and the drilling operation is completed. Figure 7.22 shows the sheet metal part after drilling. A CMM is used to measure the center positions and axial angles of the 5 × 14 holes in the middle, and their positional and axial errors are obtained by comparing the measured values with the theoretical values, as shown in Fig. 7.23. The statistical data of the drilling error are shown in Table 7.3.

Fig. 7.21 Product model of the sheet metal part to be drilled
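From such CMM measurements, the positional error of a hole is the distance between the measured and nominal centers, and the axial error is the angle between the measured and nominal hole axes. A small sketch with illustrative names:

```python
import numpy as np

def hole_errors(center_meas, center_nom, axis_meas, axis_nom):
    """Positional error (same unit as the inputs) and axial error
    (degrees) of a drilled hole."""
    pos_err = np.linalg.norm(center_meas - center_nom)
    u = axis_meas / np.linalg.norm(axis_meas)
    v = axis_nom / np.linalg.norm(axis_nom)
    # clip guards against round-off pushing the dot product past +/-1
    ang = np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))
    return pos_err, ang
```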

Fig. 7.22 Sheet metal part after robotic drilling: a without compensation; b with Cartesian space closed-loop feedback


Fig. 7.23 Comparison of the hole errors in the robotic drilling, with and without compensation, vs. number of holes: a positioning error (mm); b axial error (°)

Table 7.3 Error results with and without Cartesian space closed-loop feedback

Metric                 | Maximum value (without / with / decline %) | RMS value (without / with / decline %)
Positioning error (mm) | 0.71 / 0.08 / 88.7                         | 0.06 / 0.02 / 66.7
Axial error (°)        | 0.11 / 0.03 / 72.7                         | 0.02 / 0.01 / 50.0

It can be seen from Fig. 7.23a that the compensated positioning error is consistently smaller than the uncompensated one. From Fig. 7.23b, the axial error of the drilled holes with compensation is smaller than that without compensation; in particular, the uncompensated axial error has worse consistency than the compensated one. Further, it can be found from Table 7.3 that in robotic drilling, the maximum values of the positional and axial errors are reduced by 88.7% and 72.7%, respectively, and the RMS values decline by 66.7% and 50.0% using the vision-guided closed-loop feedback technique, which greatly improves the drilling accuracy of the robot and confirms the effectiveness of the proposed method.


7.5 Summary

In this chapter, the error compensation approaches presented in the previous chapters are applied to robotic drilling. Firstly, the robotic drilling system, including hardware and software, is introduced, followed by the construction of the several frames used in robotic drilling. Then, the robotic drilling results are presented to demonstrate the effectiveness of the proposed methods.

Chapter 8

Applications in Robotic Milling

8.1 Introduction

In this chapter, robotic milling is investigated with the error compensation methods proposed in this book. First, a robotic milling platform with its hardware and software topology is constructed. Then, robotic milling experiments on a sheet metal part, a cylinder head of a car engine, a composite material part, and a spacecraft cabin bracket are conducted respectively to validate the effectiveness and feasibility of the proposed compensation methods.

8.2 Robotic Milling System

The hardware structure of the robotic milling system is shown in Fig. 8.1, which includes an automatic guided vehicle (AGV), a KUKA industrial robot and its controller, a milling end-effector, a fixture, the C-Track visual measuring equipment, a PLC, an upper computer, and peripheral devices. As a mobile base, the AGV provides the industrial robot with a broader workspace. The AGV roughly moves the industrial robot to the designated station according to the needs of the processing sequence, so that the workspace of the industrial robot covers the area to be processed, completing the preparation work before processing. As the most important part of the whole system, the industrial robot is responsible for moving the end-effector to the position to be processed. Although the positioning accuracy of an industrial robot is low, high-precision movement can be realized through the error compensation methods presented in this book. Given the mass of the end-effector and the machining force in the milling process, a 500 kg-payload industrial robot, the KUKA KR500-3, is adopted here. The milling end-effector is the core of the machining. The cutter is mounted on a BT40 tool holder equipped with a motorized spindle, and a ball screw driven by a servomotor realizes the axial feed of the spindle. The upper computer is integrated with the control software and is responsible for sending instructions to each device.

© Science Press 2023 W. Liao et al., Error Compensation for Industrial Robots, https://doi.org/10.1007/978-981-19-6168-7_8


Fig. 8.1 Hardware structure of the robotic milling system (upper computer, Siemens PLC over Ethernet, milling end-effector with grating ruler, cylinder, servomotor and spindle, AGV, laser tracker, C-Track, KUKA controller and KUKA robot)

The PLC connects the various devices, acting as a relay station for the instructions from the upper computer. The visual device is the C-Track from Creaform Inc., which has been mentioned in previous chapters. The laser tracker is used for establishing the different frames and for measuring the position of the end-effector. The software architecture of the robotic milling system comprises the upper control software, the PLC control software and the robot control software. It is similar to that introduced in Chap. 7 and is therefore omitted here.

8.3 Milling on Aluminum Alloy Part

To verify the effectiveness of the Cartesian space closed-loop feedback control approach based on the vision-guided strategy, an experimental platform of robotic machining is built, as shown in Fig. 7.20, to investigate the robot's milling performance in this section. Two scenarios, line milling and arc milling on an aluminum alloy part, are considered in this experiment.


8.3.1 Line Milling

In the line milling experiment, 10 linear trajectories are planned on a 150 mm × 250 mm sheet metal part, on which two ∅4 mm datum holes need to be pre-drilled. The product model is shown in Fig. 8.2a. The sheet metal part is milled along the five lines on the left without compensation and along the five lines on the right with compensation, respectively. The cutting depth is 0.02 mm and the feed speed is 10 mm/s. The sheet metal part milled along the straight lines is shown in Fig. 8.2b. Twenty points on each line trajectory are chosen to calculate the average, maximum and minimum errors. The statistics are shown in Table 8.1. It can be observed that the maximum error without compensation reaches 1.43 mm, which is reduced to 0.12 mm with compensation. This suggests that the proposed method significantly improves the line accuracy of the robotic milling.
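The line accuracy reported in Table 8.1 can be evaluated as the perpendicular distance of each measured point from the nominal line; a minimal sketch with hypothetical names:

```python
import numpy as np

def line_errors(points, a, b):
    """Perpendicular distances of measured points from the nominal
    line through points a and b."""
    d = (b - a) / np.linalg.norm(b - a)
    rel = points - a
    # subtract the component along the line; what remains is the deviation
    perp = rel - np.outer(rel @ d, d)
    return np.linalg.norm(perp, axis=1)
```

The per-trajectory statistics then follow from `errs.mean()`, `errs.max()` and `errs.min()`.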

Fig. 8.2 Sheet metal part: a product model (trajectories 1-5 and 6-10, without and with compensation); b after line milling

Table 8.1 Statistical results of positioning error in the robotic line milling (mm)

Number | Method               | Average value | Maximum value | Minimum value
1      | Without compensation | 1.07          | 1.11          | 1.02
2      | Without compensation | 1.28          | 1.32          | 1.21
3      | Without compensation | 1.12          | 1.16          | 1.08
4      | Without compensation | 1.35          | 1.43          | 1.26
5      | Without compensation | 1.11          | 1.16          | 1.05
6      | With compensation    | 0.03          | 0.11          | 0.01
7      | With compensation    | 0.04          | 0.10          | 0.01
8      | With compensation    | 0.03          | 0.12          | 0.01
9      | With compensation    | 0.03          | 0.09          | 0.01
10     | With compensation    | 0.03          | 0.12          | 0.00


8.3.2 Arc Milling

In the arc milling experiment, twelve semi-circular trajectories, every two of which form a circle, are planned on a 150 mm × 250 mm sheet metal part. The product model is shown in Fig. 8.3a. The sheet metal part is milled clockwise along the upper six semi-circular trajectories without compensation and along the lower six semi-circular trajectories with compensation, respectively. The cutting depth is 0.05 mm and the feed speed is 10 mm/s. The sheet metal part milled along the circular trajectories is shown in Fig. 8.3b. It can be clearly seen from the milled sheet metal part that the uncompensated circular trajectories do not match the planned product model. In particular, because the depth of cut is too large, the tool breaks during milling along trajectory 1, and the milling operation is not completed along circular trajectories 1 and 2. A new cutter is then installed to restart the milling task. Milling along trajectories 5 and 6 is also not finished. By observing trajectories 5 and 6, it can be found that the cutting depth over the whole circle varies gradually from deep to shallow and back to deep, so the milling cutter does not cut the material along part of the trajectory. The reason is a posture error of the robot's end-effector: the actual circular plane of motion is not parallel to the plane of the sheet metal part.

Fifteen points on each semi-circular trajectory are selected to evaluate the trajectory accuracy. The average, maximum and minimum errors, the positioning error of the circle center, and the radius error of the circle are calculated, respectively. The corresponding statistics are shown in Table 8.2. It can be observed from Table 8.2 that the maximum error, the maximum positioning error of the circle center and the maximum radius error of the circle are 1.55 mm, 1.34 mm and 0.26 mm, respectively, in the uncompensated case. In the compensated case, the corresponding errors are reduced by 91%, 93% and 81%, respectively. These results demonstrate that the proposed method can greatly improve the circular trajectory accuracy of the robotic milling.
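The center and radius errors in Table 8.2 come from fitting a circle to the measured points in the milling plane. A standard least-squares (Kåsa) fit can be sketched as follows; the book does not specify its fitting method, so this is one common choice:

```python
import numpy as np

def fit_circle(x, y):
    """Kasa least-squares circle fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c
    for the center (a, b) and radius sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.array([a, b]), np.sqrt(c + a**2 + b**2)
```

Comparing the fitted center and radius against the nominal circle yields the two error measures directly.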

Fig. 8.3 Sheet metal part: a product model (semi-circular trajectories 1-12, without and with compensation); b after circular milling


Table 8.2 Statistical results of positioning error in the robotic circular milling (mm)

Number | Method               | Average value | Maximum value | Minimum value | Position of the center | Radius of the circle
1      | Without compensation | 0.61          | 1.09          | 0.20          | 1.20                   | 0.13
2      | Without compensation | /             | /             | /             | /                      | /
3      | Without compensation | 0.96          | 1.55          | 0.35          | 1.34                   | 0.00
4      | Without compensation | 0.82          | 1.10          | 0.23          | 0.97                   | 0.13
5      | Without compensation | 0.54          | 0.98          | 0.20          | 0.73                   | 0.26
6      | Without compensation | 1.02          | 1.12          | 0.82          | 0.88                   | 0.21
7      | With compensation    | 0.04          | 0.14          | 0.01          | 0.08                   | 0.04
8      | With compensation    | 0.03          | 0.13          | 0.02          | 0.06                   | 0.05
9      | With compensation    | 0.03          | 0.10          | 0.01          | 0.08                   | 0.04
10     | With compensation    | 0.05          | 0.11          | 0.02          | 0.09                   | 0.04
11     | With compensation    | 0.04          | 0.10          | 0.00          | 0.10                   | 0.03
12     | With compensation    | 0.04          | 0.12          | 0.00          | 0.10                   | 0.03

8.4 Milling on Cylinder Head for Car Engine

In this section, an experimental platform of robotic milling is constructed to verify the effectiveness of the joint space closed-loop feedback control approach, as shown in Fig. 8.4, where the FARO Vantage laser tracker is used to measure the SMR position to evaluate the trajectory error during the milling process. Since the RESOLUTE grating scale used here can only measure linear displacement, it is pasted on the finished cylindrical surface of each joint when measuring the joint angles of the robot, to ensure the measurement accuracy of the joint angles to the greatest extent. The installation of the grating scale on each joint is shown in Fig. 8.5.

Fig. 8.4 Robotic milling platform with grating rulers (PLC, motorized spindle, grating ruler, KUKA KR500 robot, tooling, FARO laser tracker)

Fig. 8.5 Installation of grating rulers on joints A1-A6

The line and plane milling experiments are carried out on the surface of a car engine cylinder head; the model and machining features of the cylinder head are shown in Fig. 8.6.

Fig. 8.6 Features to be milled: a plane; b line

The engine cylinder head is clamped on a tooling with a custom fixture, and the SMR on the end-effector is tracked and measured in real time by the FARO laser tracker during robotic milling. For the milling of the plane features, it is impossible to measure the trajectory accuracy because of the complex tool path, so the flatness of the machined surface is measured after milling to evaluate the machining performance of the robot.

8.4.1 Line Milling

The tool for line milling the pipe groove on the engine cylinder head is a ϕ17 mm ball-end mill. The spindle speed is 5000 rpm, and the feed speed is 800 mm/min. The line length is 380 mm, and the milling length on the cylinder head is 100 mm. The milling errors before and after applying the error compensation method are shown in Fig. 8.7. The error compensation method reduces the maximum error of the linear trajectory from 0.698 mm to 0.227 mm. As can be seen from the milling results in Fig. 8.8, due to the positioning error of the robot, the second pre-cast groove on the blank cannot be milled before compensation (marked by the dotted line in Fig. 8.8). The milling error before compensation is therefore larger than that after compensation, and it is further affected by the machining force. In addition, without compensation the error grows with each machining pass. With the compensation method, the trajectory error is corrected after each section of the groove is milled, and the line accuracy is greatly improved.

8.4.2 Plane Milling

The size of the engine cylinder head is 380 mm × 330 mm. The tool is a ϕ125 mm disc-milling cutter. The spindle speed is 5000 rpm, and the feed speed is 500 mm/min.


Fig. 8.7 Line milling error of cylinder head groove, before and after compensation (error in mm vs. number of points)

Fig. 8.8 Milling results of cylinder head groove: a without compensation (missed groove); b with compensation (machined groove)

The flatness of the bonding surface before and after applying the error compensation method is measured and shown in Fig. 8.9. The compensation method reduces the flatness of the fitting surface of the cylinder head from 0.31 mm to 0.16 mm.

Fig. 8.9 Milling flatness of cylinder head surface: a without compensation; b with compensation

From the milling results shown in Fig. 8.10, it can be seen that before using the error compensation method the tool mark is clearly visible (marked by the dotted line in Fig. 8.10), which seriously affects the flatness of the cylinder head. After using the compensation method, tool marks remain only in local areas, and the height of the tool mark is less than 0.03 mm.
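A flatness value of this kind is typically obtained by fitting a reference plane to points probed on the machined surface and taking the peak-to-valley spread of the deviations. The book does not give its evaluation procedure; a minimal least-squares sketch:

```python
import numpy as np

def flatness(points):
    """Peak-to-valley flatness: fit a least-squares plane (SVD of the
    centered cloud) and return max - min of the signed distances."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]                      # normal of the best-fit plane
    d = (points - c) @ n            # signed distances to the plane
    return d.max() - d.min()
```

(The strict ISO definition uses a minimum-zone plane, which gives a value no larger than this least-squares estimate.)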

8.5 Edge Milling on Composite Shell

To verify the performance of the robotic milling, an edge milling test is carried out on a composite shell, whose model and features to be machined are shown in Fig. 8.11. The shell is fixed on the tooling shown in Fig. 8.4 via the slot of the flange on its lower surface. The milling tool is a polycrystalline diamond cutter with a diameter of 8 mm.


Fig. 8.10 Milling result for cylinder head: a without compensation (knife mark height 0.22 mm); b with compensation (knife mark height 0.03 mm)

Fig. 8.11 Model of composite shell (diameter: 935 mm; thickness: 4 mm)

The spindle speed is 4500 r/min and the feed rate is 100 mm/min. The circumference of the upper surface is 1468.69 mm. The trajectory accuracy and surface roughness of the milled part are measured to evaluate the performance of the robotic milling. The experimental results under no-load and milling conditions are shown in Fig. 8.12, where only partial results along the semicircular trajectory are given. This is because the tracker's laser is shielded by the dust generated during milling, and only the semicircular arc in the positive x direction of the robot base frame can be milled in the test. Furthermore, in the process of measuring the roundness of the part, the SMR of the laser tracker is blocked by the part itself, so some of the cutting area of the part cannot be measured. From Fig. 8.12, it is clear that the maximum error is 0.1 mm when the robot performs no-load circular motion at a speed of 100 mm/min. The maximum error when milling along the circumference is 0.22 mm, and it occurs at the slot with a width of 35 mm on the upper surface, as shown in Fig. 8.12. The reason is that the rigidity of the part at the slot is poor; when the robot approaches the slotted area, significant vibration occurs, which increases the milling error. The surface roughness at the 20 marked points is measured using a Mitutoyo SJ-210 instrument and is shown in Fig. 8.13. The average roughness is Ra 3.75 and the maximum is Ra 4.8.
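The Ra value reported by such an instrument is the arithmetic mean deviation of the surface profile from its mean line; for a sampled profile this reduces to a one-liner. This sketch omits the profile filtering (cutoff wavelength) a real roughness tester applies before evaluation:

```python
import numpy as np

def roughness_ra(z):
    """Arithmetic mean roughness Ra of a sampled profile z (same unit
    as z): mean absolute deviation from the mean line of the profile."""
    z = np.asarray(z, dtype=float)
    return np.mean(np.abs(z - z.mean()))
```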

Fig. 8.12 Trajectory error of composite shell after robotic milling (desired vs. measured trajectories under no-load and milling conditions; the 35 mm slot is marked)

Fig. 8.13 Surface roughness (Ra) of composite shell after robotic milling


8.6 Summary

This chapter mainly discusses robotic milling applications. Firstly, we discuss the hardware structure and software architecture of the robotic milling system. Milling experiments are then designed, and the final results show that the maximum error of straight-line milling is reduced by 92%, the maximum error of arc milling is reduced by 91%, and the maximum positioning error of the fitted circle center is reduced by 93%. The methods proposed in this book can greatly improve the milling accuracy of the robot.