SpringerBriefs in Computer Science Xin Luo · Zhibin Li · Long Jin · Shuai Li
Robot Control and Calibration Innovative Control Schemes and Calibration Algorithms
Xin Luo, College of Computer and Information Science, Southwest University, Chongqing, China

Zhibin Li, School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China

Long Jin, Department of Computer Science, Lanzhou University, Lanzhou, Gansu, China

Shuai Li, Faculty of Information Technology and Electrical Engineering and VTT-Technical Research Centre of Finland, University of Oulu, Oulu, Finland
ISSN 2191-5768 ISSN 2191-5776 (electronic)
SpringerBriefs in Computer Science
ISBN 978-981-99-5765-1 ISBN 978-981-99-5766-8 (eBook)
https://doi.org/10.1007/978-981-99-5766-8
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
Preface
Industrial robots have been widely adopted in numerous fields such as automobiles, airplanes, and trains, and they play a huge role in our daily life. However, with long-term use, a robot's links wear out, so its tracking and positioning errors grow too large for high-precision manufacturing. Therefore, how to accurately control and calibrate the robot is an important issue. We develop several control and calibration schemes for enhancing the tracking and positioning accuracy of the robot. The first model is a Model Predictive Control (MPC) scheme for robot trajectory tracking, which can optimize the tracking error, velocity norm, and acceleration norm. The second model is a new recurrent neural network (RNN), which performs better in dealing with time-varying underdetermined linear systems with double limits. The third model is a novel Joint Drift Free (JDF) scheme synthesized by the Projection Zeroing Neural Network (PZNN) model, which can effectively solve the motion generation and control of redundant manipulators under disturbance. The fourth model is an ensemble based on six different regularizations, which obtains better calibration results and effectively alleviates the overfitting problem. The fifth model is a new algorithm based on an improved covariance matrix adaptive evolution strategy and an extended Kalman filter, which addresses the local-optimum issue of the covariance matrix adaptive evolution strategy. The sixth model is a novel algorithm combining a quadratic interpolated beetle antennae search algorithm and an extended Kalman filter, which suppresses noise during the robot calibration process and addresses the local-optimum issue of the beetle antennae search algorithm.
The last model is a new calibrator based on an unscented Kalman filter and a novel variable step-size Levenberg-Marquardt algorithm, which addresses the local optimum of the conventional Levenberg-Marquardt algorithm and suppresses measurement noise. This book first introduces how to calibrate and control a robot, which can help researchers and engineers fully understand robot control and calibration algorithms. Moreover, the presented robot calibration and control technology achieves high
scalability in real industrial applications. In addition, we hope this monograph will be a useful reference for students, researchers, and professionals who want to learn the basic principles of robot control and robot calibration. Readers can immediately conduct numerous real experiments on the datasets presented in this book.

Chongqing, China    Xin Luo
Chongqing, China    Zhibin Li
Lanzhou, China      Long Jin
Oulu, Finland       Shuai Li

June 2023
Acknowledgments
This book was written in 2023 and mainly focuses on robot control and calibration issues. We would like to thank Dr. Fan Zhang and Dr. Huiyan Lu from Lanzhou University for providing valuable data and experimental results. We also thank Dr. Tinghui Chen from Chongqing University of Posts and Telecommunications for providing valuable suggestions during the revision of this book. We hope the viewpoints elaborated in this book provide readers with useful assistance.
Contents

1 Introduction
  1.1 Overview
  1.2 Preliminaries
    1.2.1 Kinematic Control Problem of the Robot
    1.2.2 Robot Kinematic Calibration
  1.3 Book Organization
  References

2 A Novel Model Predictive Control Scheme Based on an Improved Newton Algorithm
  2.1 Overview
  2.2 QP Problem
    2.2.1 Model Predictive Control Scheme
    2.2.2 ESEN Model Construction
  2.3 Theoretical Verifications for ESEN Algorithm
    2.3.1 Preconditions
    2.3.2 Convergence Analysis
  2.4 Simulations Based on MPC Scheme
    2.4.1 Without Extraneous Disturbance
    2.4.2 With Extraneous Disturbance
  2.5 Conclusion
  References

3 A Novel Recurrent Neural Network for Robot Control
  3.1 Overview
  3.2 Time-Varying Description
    3.2.1 Problem Formulation
    3.2.2 RNN Model
  3.3 Theoretical Analysis of RNN Model
  3.4 Experiments for RNN Model
    3.4.1 Simulations
    3.4.2 The Applications of Robot
  3.5 Conclusions
  References

4 A Projected Zeroing Neural Network Model for the Motion Generation and Control
  4.1 Overview
  4.2 Feedback-Considered Scheme
  4.3 Neural Network Design
    4.3.1 Neural Network Design
    4.3.2 Theoretical Analysis without Noise
    4.3.3 Theoretical Analysis in Constant-Noise Condition
    4.3.4 Theoretical Analysis in Bounded Random-Noise Condition
  4.4 Experimental Validations for the Developed PZNN Model
    4.4.1 Simulations
    4.4.2 Experiments for a Kinova JACO2 Robot
  4.5 Conclusions
  References

5 A Regularization Ensemble Based on Levenberg–Marquardt Algorithm for Robot Calibration
  5.1 Overview
  5.2 Diversified Regularized LM Algorithm
    5.2.1 Regularized Robot Kinematic Error Model
    5.2.2 Ensemble
  5.3 Experimental Results Based on the Proposed Ensemble
    5.3.1 General Settings
    5.3.2 Experimental Calibration Performance for M1–6
    5.3.3 Experimental Calibration Performance for Compared Algorithms
  5.4 Conclusions
  References

6 Novel Evolutionary Computing Algorithms for Robot Calibration
  6.1 Overview
  6.2 EKF-ICMA-ES Algorithm
    6.2.1 Extended Kalman Filter (EKF)
    6.2.2 Improved Covariance Matrix Adaptive Evolution Strategy (ICMA-ES)
    6.2.3 Quadratic Interpolated Beetle Antennae Search (QIBAS)
  6.3 Experimental Results for EKF-ICMA-ES and EKF-QIBAS
    6.3.1 General Settings
    6.3.2 Experimental Performance
  6.4 Conclusions
  References

7 A Highly Accurate Calibrator Based on a Novel Variable Step-Size Levenberg-Marquardt Algorithm
  7.1 Overview
  7.2 UKF-VSLM Algorithm
    7.2.1 Unscented Kalman Filter (UKF)
    7.2.2 Variable Step-Size Levenberg-Marquardt Algorithm (VSLM)
  7.3 Experiments for UKF-VSLM Algorithm
    7.3.1 General Settings
    7.3.2 Calibration Performance
  7.4 Conclusions
  References

8 Conclusion and Future Work
  8.1 Conclusion
  8.2 Future Work
  References
Chapter 1
Introduction
Abstract This chapter gives an introduction to robot control and robot calibration strategies, including the problem descriptions, recent work, and the structure of this book.

Keywords Robot control · Robot calibration · Problem descriptions · Trajectory tracking · Positioning accuracy
1.1 Overview
In recent decades, the industrial internet has entered a period of rapid development, and industrial robots have become critical devices in high-end manufacturing [1–5]. McKinsey & Company predicts that by 2025, advanced robots will create an output value of $1.7 trillion to $4.5 trillion in industrial manufacturing, healthcare, and services. Under the guidance of Industry 4.0 and the "Made in China 2025" policy, the scale of the industrial robot market continues to expand, the application potential of related products is gradually being released [5–10], and the overall industrial chain is becoming increasingly complete. Industrial robots are therefore widely adopted in industrial manufacturing tasks such as welding, loading and unloading, grinding and polishing, handling and stacking, assembly, and spraying [11–15]. Owing to their high flexibility, fault tolerance, and reliability, industrial robots have also made great progress in the aerospace, medical, and electronics fields [16–20]. Currently, the repeated positioning accuracy of industrial robots has reached 0.01 mm, and their attitude angle accuracy is about 0.1°. However, the positioning and trajectory errors of an uncalibrated robot can reach several millimeters, which rules out direct application in high-precision manufacturing. Therefore, it is critical to accurately calibrate and control industrial robots [21–25].

To date, extensive robot calibration and control algorithms have been proposed by researchers. Zhang et al. [3] design a variable inertia parameter model for the precise control of aerial robots. Wang et al. [4] present an underwater robot control system that uses flexible flippers to grab marine life. Zhang et al. [5] design a recursive neural network for the kinematic control of redundant robots. Yilmaz et al. [6]
develop a robot control scheme with self-adjusting fuzzy logic. Yan et al. [7] present a novel recursive neural network for controlling redundant robots. Liu et al. [8] adopt a six-point measuring device to calibrate a UR10 robot with the conventional Levenberg-Marquardt algorithm. Xu et al. [9] present a novel calibration method based on an improved manta ray foraging algorithm to calibrate a self-designed 6-DOF robot, decreasing its positioning error from 10.24 to 0.55 mm. Cao et al. [10] propose a new calibration method based on the extended Kalman filter and an artificial neural network, which successfully calibrates a Stewart platform. Wang et al. [11] present a multilayer neural network based on the beetle swarm optimization algorithm for robot calibration.

Robot trajectory tracking and positioning accuracy are crucial in robot applications [26–30]. On the one hand, inverse kinematics problems in trajectory tracking, such as redundancy resolution, start from the end-effector information (position or velocity) and compute the corresponding joint parameters. The redundancy resolution problem can be addressed at the velocity level or the acceleration level separately, each with its own characteristics and virtues. Moreover, the popular minimum velocity norm (MVN) scheme is widely adopted at the velocity level [16–18], as it maintains smooth joint speeds. On the other hand, extensive state-of-the-art calibration algorithms, with the advantages of easy implementation and high accuracy, have significantly improved the calibration accuracy of robots [19–21]. However, MVN is only applicable to velocity-driven manipulators because it cannot handle acceleration-level constraints. In addition, calibration algorithms usually suffer from local-optimum issues that impair calibration accuracy [31–35]. Extensive methods have been adopted to address these robot control and calibration problems [36–41].
Is it possible to design novel control and calibration methods that achieve accurate trajectory tracking and positioning for the robot? To address this issue, this book proposes several control and calibration methods for industrial robots, which enable efficient trajectory tracking and high positioning accuracy.
1.2 Preliminaries

1.2.1 Kinematic Control Problem of the Robot
The serial robot usually has $n$ joints, and the position and orientation vector of its end-effector, $r = [r_1, r_2, \ldots, r_m]^T \in \mathbb{R}^m$, satisfies

$$r(t) = f(\theta(t)), \qquad (1.1)$$

where $\theta = [\theta_1, \theta_2, \ldots, \theta_n]^T \in \mathbb{R}^n$ represents the vector of joint angles, and $f(\cdot): \mathbb{R}^n \to \mathbb{R}^m$ is a nonlinear function that maps from the joint space to the workspace. If $m < n$, the
robot satisfies the redundancy condition. Moreover, $f(\cdot)$ can be derived via the Denavit–Hartenberg (D-H) convention from the given D-H parameters. For the kinematic control of the industrial robot, we seek a joint vector that drives the end-effector to a desired position $r_d \in \mathbb{R}^m$, i.e., that satisfies the constraint $r = r_d$. In practice, the joint angles are limited by the robot's structure, so the joint angle and velocity are bounded:

$$\theta^- \le \theta(t) \le \theta^+, \quad \dot{\theta}^- \le \dot{\theta}(t) \le \dot{\theta}^+, \qquad (1.2)$$

where $\le$ holds element-wise, $\dot{\theta} = d\theta/dt \in \mathbb{R}^n$ is the joint velocity, $\theta^-$ and $\theta^+$ are the lower and upper bounds of the joint angle, and $\dot{\theta}^-, \dot{\theta}^+ \in \mathbb{R}^n$ are the lower and upper bounds of the joint velocity. In [31], Eq. (1.2) is unified as

$$\dot{\theta} \in \Omega, \qquad (1.3)$$

where $\Omega$ can be represented as

$$\Omega = \{\dot{\theta} \mid \eta^- \le \dot{\theta} \le \eta^+\}, \qquad (1.4)$$

with $\eta^- = \max\{\dot{\theta}^-, -\kappa(\theta - \theta^-)\}$ and $\eta^+ = \min\{\dot{\theta}^+, -\kappa(\theta - \theta^+)\}$. Additionally, $\kappa$ satisfies

$$\kappa > \max_{i \in \{1, 2, \ldots, n\}} \frac{\dot{\theta}_i^+ - \dot{\theta}_i^-}{\theta_i^+ - \theta_i^-}. \qquad (1.5)$$
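To make the unified bound concrete, the computation of $\eta^-$ and $\eta^+$ in Eqs. (1.4)–(1.5) can be sketched numerically. The joint limits below and the 1.1 safety factor on $\kappa$ are hypothetical values chosen only for illustration:

```python
import numpy as np

# Hypothetical joint-angle and joint-velocity limits for a 3-joint arm (rad, rad/s).
theta_min = np.array([-2.0, -2.0, -2.0])
theta_max = np.array([ 2.0,  2.0,  2.0])
dtheta_min = np.array([-1.0, -1.0, -1.0])
dtheta_max = np.array([ 1.0,  1.0,  1.0])

# kappa must exceed max_i (dtheta_i^+ - dtheta_i^-) / (theta_i^+ - theta_i^-), Eq. (1.5);
# the factor 1.1 is an arbitrary safety margin.
kappa = 1.1 * np.max((dtheta_max - dtheta_min) / (theta_max - theta_min))

def velocity_bounds(theta):
    """Unified velocity bounds (eta-, eta+) of Eq. (1.4) at joint configuration theta."""
    eta_lo = np.maximum(dtheta_min, -kappa * (theta - theta_min))
    eta_hi = np.minimum(dtheta_max, -kappa * (theta - theta_max))
    return eta_lo, eta_hi

# Near an angle limit the corresponding velocity bound shrinks toward zero,
# so the joint cannot be driven through its angle limit.
lo, hi = velocity_bounds(np.array([1.9, 0.0, -1.9]))
```

Note how joint 1 (far from its angle limits) keeps the full velocity range, while joints near $\theta^+$ or $\theta^-$ get a tightened upper or lower velocity bound, respectively.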
To handle these limitations at the velocity level, taking the time derivative of Eq. (1.1) yields

$$\dot{r}(t) = J(\theta(t))\,\dot{\theta}(t), \qquad (1.6)$$

where $\dot{r} = dr/dt$ and $J(\theta) = \partial f(\theta)/\partial \theta \in \mathbb{R}^{m \times n}$ is the robot Jacobian matrix. Defining the control input as $u(t) = \dot{\theta}(t)$, Eq. (1.6) becomes

$$\dot{r}(t) = J(\theta(t))\,u(t). \qquad (1.7)$$

Therefore, to address the robot kinematic control issue, we should find an input $u(t) \in \Omega$ such that $r(t) - r_d$ converges to zero.
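As a simple illustration of this goal, a classical pseudoinverse-based (resolved-rate) controller $u(t) = J^{+}(\theta)\,k(r_d - r)$ can be simulated for a hypothetical planar two-link arm. This is a generic textbook sketch, not one of the control schemes developed in later chapters; the link lengths, gain, and target below are made-up values:

```python
import numpy as np

L1, L2 = 1.0, 0.8  # hypothetical link lengths of a planar 2-link arm (m)

def fk(theta):
    """Forward kinematics r = f(theta) of Eq. (1.1) for the planar arm."""
    return np.array([L1 * np.cos(theta[0]) + L2 * np.cos(theta[0] + theta[1]),
                     L1 * np.sin(theta[0]) + L2 * np.sin(theta[0] + theta[1])])

def jacobian(theta):
    """Jacobian J(theta) = df/dtheta of Eq. (1.6)."""
    s1, c1 = np.sin(theta[0]), np.cos(theta[0])
    s12, c12 = np.sin(theta[0] + theta[1]), np.cos(theta[0] + theta[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

theta = np.array([0.3, 0.6])   # initial joint angles (rad)
r_d = np.array([1.2, 0.9])     # desired end-effector position (reachable)
k, dt = 5.0, 0.01              # feedback gain and Euler integration step
for _ in range(1000):
    # Invert Eq. (1.7) with the pseudoinverse to get the joint-velocity input.
    u = np.linalg.pinv(jacobian(theta)) @ (k * (r_d - fk(theta)))
    theta = theta + dt * u     # integrate joint velocities

error = np.linalg.norm(fk(theta) - r_d)  # r(t) - r_d driven toward zero
```

The sketch ignores the joint limits of Eqs. (1.2)–(1.5); enforcing $u(t) \in \Omega$ on top of this update is exactly what the schemes in the following chapters address.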
1.2.2 Robot Kinematic Calibration
Robot calibration consists of four parts: (a) kinematic modeling, (b) measuring the positions of sampling points, (c) identifying the kinematic parameters, and (d) compensating the kinematic parameter errors [42–50]. Kinematic modeling plays a vital role in determining robot positioning errors. In recent years, the D-H model [51–55] has become a popular method for kinematic modeling, so we utilize it to establish the kinematic model of the robot. The actual and desired positions of the robot end-effector are shown in Fig. 1.1. Based on the D-H model [56–62], we can obtain the kinematic transformation matrix:
$$T_i^{i-1} = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad (1.8)$$
where $T$ is the link transformation matrix, $a_i$ is the link length, $d_i$ is the link offset, $\theta_i$ is the joint angle, and $\alpha_i$ is the link twist angle. Thus, the overall transformation matrix of the 6-degree-of-freedom robot is calculated by

$$T_6^0 = T_1^0\, T_2^1\, T_3^2\, T_4^3\, T_5^4\, T_6^5. \qquad (1.9)$$

Fig. 1.1 The actual position and desired position of the robot end-effector

Considering the deviation of the robot transformation matrix, we have
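Equations (1.8) and (1.9) translate directly into code. The sketch below builds each link transform from its D-H row and chains them; the D-H table is a placeholder, not the parameters of any specific robot:

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Link transformation matrix T_i^{i-1} of Eq. (1.8)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Hypothetical D-H table for a 6-DOF arm: one (theta, d, a, alpha) row per link.
dh_table = [(0.1, 0.3, 0.0,  np.pi / 2),
            (0.2, 0.0, 0.4,  0.0),
            (0.3, 0.0, 0.3,  0.0),
            (0.4, 0.2, 0.0,  np.pi / 2),
            (0.5, 0.1, 0.0, -np.pi / 2),
            (0.6, 0.1, 0.0,  0.0)]

# Chain the link transforms, Eq. (1.9): T_6^0 = T_1^0 T_2^1 ... T_6^5.
T = np.eye(4)
for params in dh_table:
    T = T @ dh_matrix(*params)

position = T[:3, 3]  # end-effector position in the base frame
```

Because each factor is a rigid-body transform, the chained $T_6^0$ keeps an orthonormal rotation block and a $[0\;0\;0\;1]$ bottom row, which is a useful sanity check on any implementation.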
$$dT = T_R - T_6^0, \qquad (1.10)$$

where $dT$ is the robot pose deviation and $T_R$ is the robot's actual pose. With (1.10), $dT_i$ is expressed as

$$dT_i = \frac{\partial T_i}{\partial \alpha_i}\, d\alpha_i + \frac{\partial T_i}{\partial a_i}\, da_i + \frac{\partial T_i}{\partial d_i}\, dd_i + \frac{\partial T_i}{\partial \theta_i}\, d\theta_i. \qquad (1.11)$$

By combining Eqs. (1.10) and (1.11), the robot pose error is given as

$$e = [J_1 \; J_2 \; J_3 \; J_4] \begin{bmatrix} \Delta a \\ \Delta d \\ \Delta \alpha \\ \Delta \theta \end{bmatrix} = JX, \qquad (1.12)$$

where $J$ is the Jacobian matrix and $X$ is the vector of D-H parameter errors. According to the measured cable length and nominal cable length [8–11], we can establish the loss function of robot calibration:

$$f(X) = \min \frac{1}{n} \sum_{i=1}^{n} \left(L_i - L_i'\right)^2, \qquad (1.13)$$

where $L_i'$ and $L_i$ are the calculated cable length and the measured cable length, respectively, and $n$ is the number of sampling points. In general, the nominal cable length is written as

$$L_i' = \sqrt{(P_i - P_0)^2}. \qquad (1.14)$$

Note that $P_i$ is the calculated position of the robot end-effector and $P_0$ is the coordinate value of the fixed point [63–68].
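A minimal numerical sketch of the loss in Eqs. (1.13)–(1.14) follows. The fixed point, sampled positions, and "measured" lengths are synthetic stand-ins generated for illustration only, not real calibration data:

```python
import numpy as np

rng = np.random.default_rng(0)
P0 = np.array([0.5, 0.5, 0.0])             # hypothetical fixed point of the cable
P = rng.uniform(-1.0, 1.0, size=(20, 3))   # synthetic end-effector positions P_i

# Nominal cable lengths, Eq. (1.14): L'_i = ||P_i - P0||.
L_nominal = np.linalg.norm(P - P0, axis=1)

# "Measured" lengths: nominal plus small synthetic noise standing in for real data.
L_measured = L_nominal + rng.normal(0.0, 1e-3, size=len(P))

def calibration_loss(L_meas, L_nom):
    """Mean squared cable-length residual, the objective of Eq. (1.13)."""
    return np.mean((L_meas - L_nom) ** 2)

loss = calibration_loss(L_measured, L_nominal)  # small when the model fits
```

The calibration algorithms in Chaps. 5–7 all minimize this kind of objective over the D-H parameter errors $X$; here the positions are fixed, so the snippet only evaluates the loss rather than identifying parameters.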
1.3 Book Organization
This book has eight chapters:

• Chapter 1 The related work on the robot control and calibration issue is reviewed in this chapter.

• Chapter 2 This chapter proposes a Model Predictive Control (MPC) scheme for robot trajectory tracking, which can optimize the tracking error, velocity norm, and acceleration norm. The commonly used minimum velocity norm (MVN) and minimum acceleration norm (MAN) schemes require additional parameters to address joint limits at different levels, which reduces the feasible range of variables. The proposed scheme contains two ideas: (a) considering these limits at different levels without reducing the feasible range of variables; (b) developing the Error Sum Enhanced Newton (ESEN) algorithm to solve the MPC scheme and effectively suppress noises. Extensive simulations and experiments demonstrate that the proposed MPC scheme solved by the ESEN algorithm has faster responsiveness and better noise tolerance than other advanced trajectory tracking schemes.

• Chapter 3 To solve the perturbed time-varying underdetermined linear system, a new recurrent neural network (RNN) is proposed in this chapter. The bounded underdetermined linear system can be transformed into a time-varying system by constructing a nonnegative time-varying variable. Convergence analysis is then presented for the proposed RNN. A large number of simulation experiments show that the proposed RNN performs better in dealing with time-varying underdetermined linear systems with double limits.

• Chapter 4 This chapter proposes a novel Joint Drift Free (JDF) scheme synthesized by the Projection Zeroing Neural Network (PZNN) model, which can effectively solve the motion generation and control of redundant manipulators under disturbance. The scheme first resolves the interference between joint errors in joint space and position errors in Cartesian space, and convergence analysis is conducted for the PZNN model to address the joint-drift problem. Finally, a large number of experiments demonstrate that, compared to other control schemes, the JDF scheme synthesized by the PZNN model handles the joint-drift problem of redundant robots more efficiently.

• Chapter 5 Robots must maintain high positioning accuracy during high-precision production, but long-term use wears their links and degrades this accuracy, making recalibration necessary. Robot calibration commonly uses the least-squares method or the Levenberg-Marquardt (LM) algorithm to identify parameters, but these frequently suffer from overfitting. To solve this problem, this chapter introduces L1, L2, dropout, elastic, log, and swish regularization, integrating these six schemes into an ensemble that obtains better calibration results and effectively alleviates the overfitting problem.

• Chapter 6 This chapter proposes two novel robot calibration methods: the extended Kalman filter with a quadratic interpolated beetle antennae search algorithm, and the extended Kalman filter with an improved covariance matrix adaptive evolution strategy. It adopts two-fold ideas: (a) developing a novel calibration method based on an extended Kalman filter and an improved covariance matrix adaptive evolution strategy to optimize the robot kinematic parameters; (b) designing a robot calibration method combining an extended Kalman filter and a quadratic interpolated beetle antennae search algorithm. Numerous experiments on an ABB IRB120 industrial robot demonstrate that the two proposed calibration methods obtain higher calibration accuracy than their peers.

• Chapter 7 This chapter proposes a new calibrator based on an unscented Kalman filter and a novel variable step-size Levenberg-Marquardt algorithm, with two main ideas: (a) adopting an unscented Kalman filter to suppress noises in the calibration process; (b) incorporating a variable step-size operator into the LM algorithm to escape the local optima of the conventional LM algorithm. Extensive experiments on an ABB IRB120 industrial robot demonstrate that the proposed calibrator has better calibration performance than its peers.

• Chapter 8 The final chapter provides the conclusions of this book and some future research work.
References

1. Luo, X., Wang, D.X., Zhou, M.C., Yuan, H.Q.: Latent factor-based recommenders relying on extended stochastic gradient descent algorithms. IEEE Trans. Syst. Man Cybern. Syst. 51(2), 916–926 (2021)
2. Li, Z., Li, S., Luo, X.: An overview of calibration technology of industrial robots. IEEE/CAA J. Autom. Sin. 8(1), 23–36 (2021)
3. Zhang, G., He, Y., Dai, B., Gu, F., Han, J., Liu, G.: Robust control of an aerial manipulator based on a variable inertia parameters model. IEEE Trans. Ind. Electron. 67(11), 9515–9525 (2020)
4. Wang, Y., Cai, M., Wang, S., Bai, X., Wang, R., Tan, M.: Development and control of an underwater vehicle-manipulator system propelled by flexible flippers for grasping marine organisms. IEEE Trans. Ind. Electron. 69(4), 3898–3908 (2022)
5. Zhang, Y., Li, S., Kadry, S., Liao, B.: Recurrent neural network for kinematic control of redundant manipulators with periodic input disturbance and physical constraints. IEEE Trans. Cybern. 49(12), 4194–4205 (2019)
6. Yilmaz, B.M., Tatlicioglu, E., Savran, A., Alci, M.: Self-adjusting fuzzy logic based control of robot manipulators in task space. IEEE Trans. Ind. Electron. 69(2), 1620–1629 (2022)
7. Yan, J., Jin, L., Yuan, Z., Liu, Z.: RNN for receding horizon control of redundant robot manipulators. IEEE Trans. Ind. Electron. 69(2), 1608–1619 (2022)
8. Liu, Y., Zhuang, Z., Li, Y.: Closed-loop kinematic calibration of robots using a six-point measuring device. IEEE Trans. Instrum. Meas. 71, 1–12 (2022)
9. Xu, X., Bai, Y., Zhao, M., Yang, J., Pang, F., Ran, Y., Tan, Z., Luo, M.: A novel calibration method for robot kinematic parameters based on improved manta ray foraging optimization algorithm. IEEE Trans. Instrum. Meas. 72, 1–11 (2023)
10. Cao, H.Q., Nguyen, H.X., Tran, T.N.C., Tran, H.N., Jeon, J.W.: A robot calibration method using a neural network based on a butterfly and flower pollination algorithm. IEEE Trans. Ind. Electron. 69(4), 3865–3875 (2022)
11. Wang, Y.X., Chen, Z.W., Zu, H.F., Zhang, X., Mao, C.T., Wang, Z.R.: Improvement of heavy load robot positioning accuracy by combining a model-based identification for geometric parameters and an optimized neural network for the compensation of nongeometric errors. Complexity 2020 (2020)
12. Wei, L., Jin, L., Luo, X.: A robust coevolutionary neural-based optimization algorithm for constrained nonconvex optimization. IEEE Trans. Neural Netw. Learn. Syst. (2022). https://doi.org/10.1109/TNNLS.2022.3220806
13. Luo, X., Yuan, Y., Chen, S.L., Zeng, N.Y., Wang, Z.D.: Position-transitional particle swarm optimization-incorporated latent factor analysis. IEEE Trans. Knowl. Data Eng. 34(8), 3958–3970 (2022)
14. Luo, X., Wu, H., Li, Z.C.: NeuLFT: a novel approach to nonlinear canonical polyadic decomposition on high-dimensional incomplete tensors. IEEE Trans. Knowl. Data Eng. (2022). https://doi.org/10.1109/TKDE.2022.3176466
15. Luo, X., Zhou, Y., Liu, Z.G., Zhou, M.C.: Fast and accurate non-negative latent factor analysis on high-dimensional and sparse matrices in recommender systems. IEEE Trans. Knowl. Data Eng. (2021). https://doi.org/10.1109/TKDE.2021.3125252
16. Zhang, Y., Li, S., Zhou, X.: Recurrent-neural-network-based velocity level redundancy resolution for manipulators subject to a joint acceleration limit. IEEE Trans. Ind. Electron. 66(5), 3573–3582 (2019)
17. Xie, Z., Jin, L., Du, X., Xiao, X., Li, H., Li, S.: On generalized RMP scheme for redundant robot manipulators aided with dynamic neural networks and nonconvex bound constraints. IEEE Trans. Industr. Inform. 15(9), 5172–5181 (2019)
18. Lu, H., Jin, L., Zhang, J., Sun, Z., Li, S., Zhang, Z.: New joint-drift-free scheme aided with projected ZNN for motion generation of redundant robot manipulators perturbed by disturbances. IEEE Trans. Syst. Man Cybern. Syst. 51(9), 5639–5651 (2021)
19. Chen, X., Zhan, Q.: The kinematic calibration of an industrial robot with an improved beetle swarm optimization algorithm. IEEE Robot. Autom. Lett. 7(2), 4694–4701 (2022)
20. Santolaria, J., Brosed, F.J., Velázquez, J., Jiménez, R.: Self-alignment of on-board measurement sensors for robot kinematic calibration. Precis. Eng. 37(3), 699–710 (2013)
21. Fan, C., Zhao, G., Zhao, J., Zhang, L., Sun, L.: Calibration of a parallel mechanism in a serial-parallel polishing machine tool based on genetic algorithm. Int. J. Adv. Manuf. Technol. 81(1), 27–37 (2015)
22. Luo, X., Zhong, Y.R., Wang, Z.D., Li, M.Z.: An alternating-direction-method of multipliers-incorporated approach to symmetric non-negative latent factor analysis. IEEE Trans. Neural Netw. Learn. Syst. (2021). https://doi.org/10.1109/TNNLS.2021.3125774
23. Luo, X., Qin, W., Dong, A., Sedraoui, K., Zhou, M.C.: Efficient and high-quality recommendations via momentum-incorporated parallel stochastic gradient descent-based learning. IEEE/CAA J. Autom. Sin. 8(2), 402–411 (2021)
24. Luo, X., Zhou, Y., Liu, Z.G., Hu, L., Zhou, M.C.: Generalized Nesterov's acceleration-incorporated, non-negative and adaptive latent factor analysis. IEEE Trans. Serv. Comput. 15(5), 2809–2823 (2022)
25. Jiang, Z.H., Zhou, W.G., Li, H., Mo, Y., Ni, W.C., Huang, Q.: A new kind of accurate calibration method for robotic kinematic parameters based on the extended Kalman and particle filter algorithm. IEEE Trans. Ind. Electron. 65(4), 3337–3345 (2018)
26. Bai, P.J., Mei, J.P., Huang, T., Chetwynd, D.G.: Kinematic calibration of delta robot using distance measurements. Proc. Inst. Mech. Eng. C J. Mech. Eng. Sci. 230(3), 414–424 (2016)
27. Fu, Z.T., Dai, J.S., Yang, K., Chen, X.B., López-Custodio, P.: Analysis of unified error model and simulated parameters calibration for robotic machining based on Lie theory. Robot. Comput. Integr. Manuf. 61, 101855 (2020)
References
9
28. Zhao, Y., Chen, Z., Zhou, C., Tian, Y.C., Qin, Y.: Passivity-based robust control against quantified false data injection attacks in cyber physical systems. IEEE/CAA J. Autom. Sin. 8(8), 1440–1450 (2021) 29. Dong, Y., Gupta, N., Chopra, N.: False data injection attacks in bilateral teleoperation systems. IEEE Trans. Control Syst. Techonol. 28(3), 1168–1176 (2020) 30. Pang, Z.H., Liu, G.P., Zhou, D., Hou, F., Sun, D.: Two-channel false data injection attacks against output tracking control of networked systems. IEEE Trans. Ind. Electron. 63(5), 3242–3251 (2016) 31. Zhang, Y., Wang, J., Xia, Y.: A dual neural network for redundancy resolution of kinematically redundant manipulators subject to joint limits and joint velocity limits. IEEE Trans. Neural Netw. Learn. Syst. 14(3), 658–667 (2003) 32. Luo, X., Zhou, M.C., Wang, Z.D., Xia, Y.N., Zhu, Q.S.: An effective scheme for QoS estimation via alternating direction method-based matrix factorization. IEEE Trans. Serv. Comput. 12(4), 503–518 (2019) 33. Li, S., Zhou, M., Luo, X., You, Z.H.: Distributed winner-take-all in dynamic networks. IEEE Trans. Automat. Contr. 62(2), 577–589 (2017) 34. Li, S., Zhou, M., Luo, X.: Modified primal-dual neural networks for motion control of redundant manipulators with dynamic rejection of harmonic noises. IEEE Trans. Neural Netw. Learn. Syst. 29(10), 4791–4801 (2018) 35. Du, G., Liang, Y., Li, C., Liu, P.X., Li, D.: Online robot kinematic calibration using hybrid filter with multiple sensors. IEEE Trans. Instrum. Meas. 69(9), 7092–7107 (2020) 36. Yu, N., Yang, R., Huang, M.: Deep common spatial pattern based motor imagery classification with improved objective function. Int. J. Netw. Dyn. Intell. 1(1), 73–84 (2022) 37. Kamali, K., Bonev, I.A.: Optimal experiment design for elasto-geometrical calibration of industrial robots. IEEE/ASME Trans. Mech. 24(6), 2733–2744 (2019) 38. 
Chen, D., Wang, T.M., Yuan, P.J., Sun, N., Tang, H.Y.: A positional error compensation method for industrial robots combining error similarity and radial basis function neural network. Meas. Sci. Technol. 30(12), 125010 (2019) 39. Ma, L., Bazzoli, P., Sammons, P.M., Landers, R.G., Bristow, D.A.: Modeling and calibration of high-order joint-dependent kinematic errors for industrial robots. Robot. Comput. Integr. Manuf. 50, 153–167 (2018) 40. Zhou, S., Xing, L., Zheng, X., Du, N., Wang, L., Zhang, Q.: A self-adaptive differential evolution algorithm for scheduling a single batch-processing machine with arbitrary job sizes and release times. IEEE Trans. Cybern. 51(3), 1430–1442 (2021) 41. Li, S., You, Z.H., Guo, H., Luo, X., Zhao, Z.Q.: Inverse-free extreme learning machine with optimal information updating. IEEE Trans. Cybern. 46(5), 1229–1241 (2016) 42. Li, S., He, J., Li, Y., Rafique, M.U.: Distributed recurrent neural networks for cooperative control of manipulators: a game-theoretic perspective. IEEE Trans. Neural Netw. Learn. Syst. 28(2), 415–426 (2017) 43. Li, S., Zhang, Y., Jin, L.: Kinematic control of redundant manipulators using neural networks. IEEE Trans. Neural Netw. Learn. Syst. 28(10), 2243–2254 (2017) 44. Li, C., Wu, Y.Q., Löwe, H., Li, Z.X.: POE-based robot kinematic calibration using axis configuration space and the adjoint error model. IEEE Trans. Robot. 32(5), 1264–1279 (2016) 45. Joubair, A., Bonev, I.A.: Non-kinematic calibration of a six-axis serial robot using planar constraints. Precis. Eng. 40, 325–333 (2015) 46. Lembono, T.S., Suárez-Ruiz, F., Pham, Q.C.: SCALAR: simultaneous calibration of 2-D laser and robot kinematic parameters using planarity and distance constraints. IEEE Trans. Autom. Sci. Eng. 16(4), 1971–1979 (2019) 47. Li, Z., Li, S., Luo, X.: Using quadratic interpolated beetle antennae search to enhance robot arm calibration accuracy. IEEE Robot. Autom. Lett. 7(4), 12046–12053 (2022) 48. 
Li, Z., Li, S., Bamasag, O.O., Alhothali, A., Luo, X.: Diversified regularization enhanced training for effective manipulator calibration. IEEE Trans. Neural Netw. Learn. Syst. (2022). https://doi.org/10.1109/TNNLS.2022.3153039
10
1
Introduction
49. Li, Z., Li, S., Francis, A., Luo, X.: A novel calibration system for robot arm via an open dataset and a learning perspective. IEEE Trans. Circuits II. 69(12), 5169–5173 (2022) 50. Yuan, Y., Yuan, H., Guo, L., Yang, H., Sun, S.: Resilient control of networked control system under DoS attacks: a unified game approach. IEEE Trans. Ind. Inform. 12(5), 1786–1794 (2016) 51. Guo, K., Su, H., Yang, C.: A small opening workspace control strategy for redundant manipulator based on RCM method. IEEE Trans. Control Syst. Technol. 30(6), 2717–2725 (2022) 52. Yang, C., Jiang, Y., He, W., Na, J., Li, Z., Xu, B.: Adaptive parameter estimation and control design for robot manipulators with finite-time convergence. IEEE Trans. Ind. Electron. 65(10), 8112–8123 (2018) 53. Li, D., Zhang, W., He, W., Li, C., Ge, S.S.: Two-layer distributed formation-containment control of multiple Euler-Lagrange systems by output feedback. IEEE Trans. Cybern. 49(2), 675–687 (2019) 54. Zhang, Y., Li, S., Zou, J., Khan, A.H.: A passivity-based approach for kinematic control of manipulators with constraints. IEEE Trans. Industr. Inform. 16(5), 3029–3038 (2020) 55. Jin, L., Li, S., Yu, J., He, J.: Robot manipulator control using neural networks: a survey. Neurocomputing. 285, 23–34 (2018) 56. Guo, D., Xu, F., Yan, L.: New pseudoinverse-based path-planning scheme with PID characteristic for redundant robot manipulators in the presence of noise. IEEE Trans. Control Syst. Technol. 26(6), 2008–2019 (2018) 57. Wang, W., Liu, F., Yun, C.: Calibration method of robot base frame using unit quaternion form. Precis. Eng. 41, 47–54 (2015) 58. Tian, W.J., Mou, M.W., Yang, J.H., Yin, F.W.: Kinematic calibration of a 5-DOF hybrid kinematic machine tool by considering the ill-posed identification problem using regularisation method. Robot. Comput. Integr. Manuf. 60, 49–62 (2019) 59. 
Jiang, Y., Li, T.M., Wang, L.P., Chen, F.F.: Kinematic error modeling and identification of the over-constrained parallel kinematic machine. Robot. Comput. Integr. Manuf. 49, 105–119 (2018) 60. Yang, W., Li, S., Li, Z., Luo, X.: Highly-accurate manipulator calibration via extended Kalman filter-incorporated residual neural network. IEEE Trans. Industr. Inform. (2023). https://doi.org/ 10.1109/TII.2023.3241614 61. Huang, T., Zhao, D., Yin, F.W., Tian, W.J.: Chetwynd.: kinematic calibration of a 6-DOF hybrid robot by considering multicollinearity in the identification Jacobian. Mech. Mach. Theory. 131, 371–384 (2019) 62. Du, G.L., Zhang, P.: Online robot calibration based on vision measurement. Robot. Comput. Integr. Manuf. 29(6), 484–492 (2013) 63. Luo, X., Zhou, M.C., Li, S., Wu, D., Liu, Z.G., Shang, M.S.: Algorithms of unconstrained non-negative latent factor analysis for recommender systems. IEEE Trans. Big Data. 7(1), 227–240 (2021) 64. Liu, Z.G., Luo, X., Wang, Z.D.: Convergence analysis of single latent factor-dependent, non-negative and multiplicative update-based non-negative latent factor models. IEEE Trans. Neural Netw. Learn. Syst. 32(4), 1737–1749 (2021) 65. Liu, Z., Yi, Y., Luo, X.: A high-order proximity-incorporated nonnegative matrix factorizationbased community detector. IEEE Trans. Emerg. Top. Comput. Intell. 7(3), 700–714 (2023) 66. Luo, X., Zhou, M.C., Xia, Y.N., Zhu, Q.S.: An efficient non-negative matrix-factorization based approach to collaborative filtering for recommender systems. IEEE Trans. Industr. Inform. 10(2), 1273–1284 (2014) 67. Luo, X., Wang, L., Hu, P., Hu, L.: Predicting protein-protein interactions using sequence and network information via variational graph autoencoder. IEEE/ACM Trans. Comput. Biol. Bioinform. https://doi.org/10.1109/TCBB.2023.3273567 68. 
Hu, L., Yang, Y., Tang, Z., He, Y., Luo, X.: FCAN-MOPSO: an improved fuzzy-based graph clustering algorithm for complex networks with multi-objective particle swarm optimization. IEEE Trans. Fuzzy Syst. https://doi.org/10.1109/TFUZZ.2023.3259726
Chapter 2
A Novel Model Predictive Control Scheme Based on an Improved Newton Algorithm
Abstract This chapter proposes a Model Predictive Control (MPC) scheme for robot trajectory tracking that jointly optimizes the tracking error, the velocity norm, and the acceleration norm. Section 2.1 discusses the research background, Sect. 2.2 formulates the problem, Sect. 2.3 provides the theoretical verifications, and Sect. 2.4 presents the simulation results. Lastly, Sect. 2.5 summarizes the conclusions.

Keywords Model predictive control scheme · Tracking error · Velocity norm · Acceleration norm · Error-summation-enhanced Newton algorithm
2.1
Overview
Recently, robots have been widely used in fields like intelligent transportation, aerospace, and industrial electronics [1–3]. In particular, redundant robots, with their high flexibility, fault tolerance, and reliability, can accurately complete complex tasks [4–9], and appropriate control schemes allow them to avoid obstacles and singularities. Trajectory tracking plays a highly important role in robot control, and its inverse kinematics solution is also called the redundancy resolution problem. This problem can be solved at both the velocity and acceleration levels. The Minimum Velocity Norm (MVN) scheme [3, 10, 11] is widely utilized at the velocity level and maintains smooth joint velocities, but it cannot handle limits at the acceleration level. The Minimum Acceleration Norm (MAN) scheme [12–14], operating at the acceleration level, can handle joint limits at different levels, but it reduces the feasible region of the decision variables. Model predictive control (MPC) is commonly used to solve constrained optimization problems for multivariable systems [15–18]. Dai et al. [19] propose a robust MPC algorithm for the accurate control of robots subject to interference. Nubert et al. [20] develop a control method that combines robust MPC with neural-network function approximation, thereby achieving reliable trajectory control of the robot. However, the above control schemes cannot handle the noise problem [19–24].
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 X. Luo et al., Robot Control and Calibration, SpringerBriefs in Computer Science, https://doi.org/10.1007/978-981-99-5766-8_2
To solve the noise problem, this chapter proposes a noise-tolerant algorithm based on MPC. Robot optimization problems are also of great academic value: popular Newton-type and gradient-type algorithms can be used to solve them [24–26]. Ueland et al. [27] adopt a Newton method with the Karush-Kuhn-Tucker (KKT) conditions to solve the optimal force distribution problem for parallel robots. Jin et al. [28] propose a gradient-based differential neural solution for robot motion planning. However, because these schemes rely on time-derivative information, they amplify noise and can destabilize the system [29]. To address these drawbacks, this chapter proposes an error-summation-enhanced Newton (ESEN) algorithm with strong noise tolerance [29–34]. The research structure of this chapter is shown in Fig. 2.1: the MPC scheme jointly handles the tracking error, the velocity norm, and the acceleration norm [35–41], thereby achieving accurate trajectory control of redundant robots [15, 42–48]. The innovations of this work are as follows: (a) This chapter proposes an MPC scheme for trajectory tracking of redundant robots, which does not reduce the feasible region of the decision variables and can accurately control the robot; (b) This chapter uses the ESEN algorithm, which has strong noise tolerance, to solve the constrained QP problem; (c) Extensive experimental results demonstrate that the proposed MPC scheme solved by the ESEN algorithm has higher efficiency and stronger noise tolerance than its peers.
2.2
QP Problem
In this section, we discuss the proposed MPC scheme addressed by the ESEN algorithm, which can effectively solve the QP problem [48–53].
2.2.1
Model Predictive Control Scheme
According to the robot kinematic model [34], the relationship between the position of the redundant robot end-effector in Cartesian space and the joint space is described as:

$$r(t) = f(\sigma(t)),\tag{2.1}$$

$$\dot r(t) = J(\sigma(t))\dot\sigma(t),\tag{2.2}$$

where $r(t)\in\mathbb{R}^m$ represents the position of the robot end-effector; $\sigma(t)\in\mathbb{R}^n$ is the joint angle; $f(\cdot)\colon\mathbb{R}^n\to\mathbb{R}^m$ is a projection; $\dot r(t)\in\mathbb{R}^m$ denotes the Cartesian velocity; $\dot\sigma(t)\in\mathbb{R}^n$ is the joint velocity; and $J(\sigma(t))\in\mathbb{R}^{m\times n}$ is the Jacobian matrix.

Fig. 2.1 The research structure of this chapter

Based on the joint velocity, the input vector of the robot controller is

$$u(t) = \dot\sigma(t).\tag{2.3}$$

Discretizing (2.3) with sampling period $\tau$, we have

$$\sigma(k+1) = \sigma(k) + \tau u(k),\tag{2.4}$$

where $k$ represents the updating index. Based on (2.3), we obtain $\dot u(t) = \ddot\sigma(t)$; then $\Delta u(k)$ can be further derived as

$$\frac{\Delta u(k)}{\tau} = \frac{u(k) - u(k-1)}{\tau} \approx \ddot\sigma(k).\tag{2.5}$$

Considering Eqs. (2.1)–(2.5), we achieve that

$$u(k+i) = \Delta u(k+i) + \Delta u(k+i-1) + \cdots + \Delta u(k) + u(k-1),\quad i = 0, 1, \cdots, N_u - 1,\tag{2.6}$$

where $N_u$ is the control horizon, and $u(k+i)$ and $\Delta u(k+i)$ are the future input and future input increment predicted at instant $k$. Moreover, combining (2.4) and (2.6), the predicted state $\sigma(k+i)$ is written as

$$\sigma(k+i) = \sigma(k) + i\tau\Delta u(k) + (i-1)\tau\Delta u(k+1) + \cdots + \tau\Delta u(k+i-1) + i\tau u(k-1),\quad i = 1, 2, \cdots, N,\tag{2.7}$$

where $N$ represents the predictive horizon. By first-order linearization about $\sigma(k)$, we obtain the predicted output $f(\sigma(k+i))$:

$$f(\sigma(k+i)) = f(\sigma(k)) + J(\sigma(k))\left[\sigma(k+i) - \sigma(k)\right],\quad i = 1, 2, \cdots, N.\tag{2.8}$$
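To make the kinematic relation (2.2) and the linearized prediction (2.8) concrete, here is a minimal numerical sketch on a hypothetical planar 2-link arm; the link lengths and test configuration are illustrative assumptions, not parameters from this chapter:

```python
import numpy as np

# Hypothetical planar 2-link arm; link lengths and test angles are
# illustrative assumptions, not parameters from this chapter.
L1, L2 = 1.0, 0.8

def f(sigma):
    # Forward kinematics r = f(sigma), cf. (2.1).
    q1, q2 = sigma
    return np.array([L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
                     L1 * np.sin(q1) + L2 * np.sin(q1 + q2)])

def jacobian(sigma):
    # Analytic Jacobian J(sigma) such that r_dot = J sigma_dot, cf. (2.2).
    q1, q2 = sigma
    return np.array([[-L1 * np.sin(q1) - L2 * np.sin(q1 + q2), -L2 * np.sin(q1 + q2)],
                     [ L1 * np.cos(q1) + L2 * np.cos(q1 + q2),  L2 * np.cos(q1 + q2)]])

sigma = np.array([0.3, 0.7])
sigma_dot = np.array([0.2, -0.1])

# Finite-difference check of (2.2): (f(sigma + h*sigma_dot) - f(sigma))/h ~ J sigma_dot.
h = 1e-6
fd = (f(sigma + h * sigma_dot) - f(sigma)) / h
assert np.allclose(fd, jacobian(sigma) @ sigma_dot, atol=1e-4)

# First-order prediction (2.8): f(sigma_i) ~ f(sigma) + J(sigma)(sigma_i - sigma);
# the residual is second order in the joint displacement.
sigma_i = sigma + np.array([0.01, -0.02])
assert np.allclose(f(sigma) + jacobian(sigma) @ (sigma_i - sigma), f(sigma_i), atol=1e-3)
```

The same finite-difference test carries over to any manipulator for which an analytic Jacobian is available.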
Therefore, the MPC scheme can be represented as:

$$\min\ \sum_{i=1}^{N}\left\|f(\sigma(k+i)) - r_g(k+i)\right\|_{Q'}^{2} + \sum_{i=0}^{N_u-1}\left\|u(k+i)\right\|_{R_1'}^{2} + \sum_{i=0}^{N_u-1}\left\|\Delta u(k+i)\right\|_{R_2'}^{2},\tag{2.9a}$$

$$\text{s.t. } \sigma_{\min} \le \sigma(k+i) \le \sigma_{\max},\quad i = 1, 2, \cdots, N,\tag{2.9b}$$

$$\dot\sigma_{\min} \le u(k+i) \le \dot\sigma_{\max},\quad i = 0, 1, \cdots, N_u - 1,\tag{2.9c}$$

$$\ddot\sigma_{\min} \le \frac{\Delta u(k+i)}{\tau} \le \ddot\sigma_{\max},\quad i = 0, 1, \cdots, N_u - 1,\tag{2.9d}$$
where $Q'\in\mathbb{R}^{m\times m}$, $R_1'\in\mathbb{R}^{n\times n}$, and $R_2'\in\mathbb{R}^{n\times n}$ are the weights on the tracking error, the input, and the input increment, respectively; $r_g(k+i)\in\mathbb{R}^m$ is the reference trajectory; and $\|x\|_{Q'}^2 = x^{\mathrm T}Q'x$ (and likewise $\|\cdot\|_{R_1'}^2$ and $\|\cdot\|_{R_2'}^2$) denotes the weighted squared norm. The bounds $\sigma_{\min} \le \sigma(k+i) \le \sigma_{\max}$, $\dot\sigma_{\min} \le u(k+i) \le \dot\sigma_{\max}$, and $\ddot\sigma_{\min} \le \Delta u(k+i)/\tau \le \ddot\sigma_{\max}$ define the lower and upper limits of the joint angles, velocities, and accelerations, respectively. Based on these descriptions, we define:

$$r = [r_g(k+1), \cdots, r_g(k+N)]^{\mathrm T}\in\mathbb{R}^{N\times m},$$
$$\sigma = [\sigma(k+1), \cdots, \sigma(k+N)]^{\mathrm T}\in\mathbb{R}^{N\times n},$$
$$u = [u(k), \cdots, u(k+N_u-1)]^{\mathrm T}\in\mathbb{R}^{N_u\times n},$$
$$\Delta u = [\Delta u(k), \cdots, \Delta u(k+N_u-1)]^{\mathrm T}\in\mathbb{R}^{N_u\times n},$$
$$f(\sigma) = [f(\sigma(k+1)), \cdots, f(\sigma(k+N))]^{\mathrm T}\in\mathbb{R}^{N\times m},$$
$$I_u = [u(k-1), u(k-1), \cdots, u(k-1)]^{\mathrm T}\in\mathbb{R}^{N_u\times n},$$
$$I_\sigma = [\sigma(k), \sigma(k), \cdots, \sigma(k)]^{\mathrm T}\in\mathbb{R}^{N\times n},$$
$$I_f = [f(\sigma(k)), f(\sigma(k)), \cdots, f(\sigma(k))]^{\mathrm T}\in\mathbb{R}^{N\times m},$$
$$V = [\tau u(k-1), \cdots, N_u\tau u(k-1), \cdots, N\tau u(k-1)]^{\mathrm T}\in\mathbb{R}^{N\times n},$$
$$S = \begin{bmatrix} \tau & 0 & \cdots & 0 \\ 2\tau & \tau & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ N_u\tau & (N_u-1)\tau & \cdots & \tau \\ (N_u+1)\tau & N_u\tau & \cdots & 2\tau \\ \vdots & \vdots & \ddots & \vdots \\ N\tau & (N-1)\tau & \cdots & (N-N_u+1)\tau \end{bmatrix}\in\mathbb{R}^{N\times N_u},$$

$$E = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 1 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{bmatrix}\in\mathbb{R}^{N_u\times N_u},\qquad J = J(\sigma(k)),$$
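As a sanity check on the structure of $S$ and $E$, the following sketch builds both matrices for small horizons and verifies that they reproduce the accumulation formulas (2.6) and (2.7) for a single joint; the horizon values and numbers are illustrative assumptions:

```python
import numpy as np

def build_S(N, Nu, tau):
    # S[i-1, j] = (i - j) * tau for i = 1..N, j = 0..Nu-1 (zero when i <= j),
    # reproducing the banded lower-triangular structure displayed above.
    return np.array([[max(i - j, 0) * tau for j in range(Nu)]
                     for i in range(1, N + 1)])

def build_E(Nu):
    # E is the Nu x Nu lower-triangular all-ones matrix, so u = I_u + E @ du
    # accumulates the input increments exactly as in (2.6).
    return np.tril(np.ones((Nu, Nu)))

tau, N, Nu = 0.05, 4, 2
S, E = build_S(N, Nu, tau), build_E(Nu)
assert np.allclose(S[Nu - 1], [Nu * tau, (Nu - 1) * tau])   # row N_u of S
assert np.isclose(S[N - 1, Nu - 1], (N - Nu + 1) * tau)     # bottom-right entry

# Sanity check of (2.6)-(2.7) for a single joint (n = 1):
du = np.array([[0.3], [-0.1]])               # Delta u(k), Delta u(k+1)
u_prev, sigma_k = 0.5, 1.0
u = u_prev + E @ du                           # u(k), u(k+1), cf. (2.6)
V = np.arange(1, N + 1).reshape(-1, 1) * tau * u_prev   # i*tau*u(k-1) terms of V
sigma = sigma_k + S @ du + V                  # predicted states, cf. (2.7)
assert np.isclose(sigma[0, 0], sigma_k + tau * u[0, 0])
# Beyond the control horizon the input is held at u(k+Nu-1):
assert np.isclose(sigma[2, 0], sigma_k + tau * (u[0, 0] + 2 * u[1, 0]))
```

The last assertion confirms why the lower rows of $S$ keep growing past row $N_u$: the final input is held constant over the remainder of the prediction horizon.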
$$\boldsymbol{\sigma}_{\max} = [\sigma_{\max}, \cdots, \sigma_{\max}]^{\mathrm T}\in\mathbb{R}^{N\times n},\quad \boldsymbol{\sigma}_{\min} = [\sigma_{\min}, \cdots, \sigma_{\min}]^{\mathrm T}\in\mathbb{R}^{N\times n},$$
$$\dot{\boldsymbol{\sigma}}_{\max} = [\dot\sigma_{\max}, \cdots, \dot\sigma_{\max}]^{\mathrm T}\in\mathbb{R}^{N_u\times n},\quad \dot{\boldsymbol{\sigma}}_{\min} = [\dot\sigma_{\min}, \cdots, \dot\sigma_{\min}]^{\mathrm T}\in\mathbb{R}^{N_u\times n},$$
$$\ddot{\boldsymbol{\sigma}}_{\max} = [\ddot\sigma_{\max}, \cdots, \ddot\sigma_{\max}]^{\mathrm T}\in\mathbb{R}^{N_u\times n},\quad \ddot{\boldsymbol{\sigma}}_{\min} = [\ddot\sigma_{\min}, \cdots, \ddot\sigma_{\min}]^{\mathrm T}\in\mathbb{R}^{N_u\times n}.$$
Moreover, Eq. (2.9) is rewritten as:

$$\min_{\Delta u}\ \left\|\mathrm{vec}\left(I_f + S\Delta uJ^{\mathrm T} + VJ^{\mathrm T} - r\right)\right\|_Q^2 + \left\|\mathrm{vec}\left(I_u + E\Delta u\right)\right\|_{R_1}^2 + \left\|\mathrm{vec}(\Delta u)\right\|_{R_2}^2,\tag{2.10a}$$

$$\text{s.t. } \boldsymbol{\sigma}_{\min} \le \sigma \le \boldsymbol{\sigma}_{\max},\tag{2.10b}$$

$$\dot{\boldsymbol{\sigma}}_{\min} \le u \le \dot{\boldsymbol{\sigma}}_{\max},\tag{2.10c}$$

$$\ddot{\boldsymbol{\sigma}}_{\min} \le \frac{\Delta u}{\tau} \le \ddot{\boldsymbol{\sigma}}_{\max},\tag{2.10d}$$
where $Q = I\otimes Q'\in\mathbb{R}^{mN\times mN}$, $R_1 = I\otimes R_1'\in\mathbb{R}^{nN_u\times nN_u}$, and $R_2 = I\otimes R_2'\in\mathbb{R}^{nN_u\times nN_u}$; $\otimes$ is the Kronecker product; $I$ is an identity matrix; and $\mathrm{vec}(\cdot)$ denotes vectorization. Define

$$W = 2\left[(J\otimes S)^{\mathrm T}Q(J\otimes S) + (I\otimes E)^{\mathrm T}R_1(I\otimes E) + R_2\right]$$

and

$$q = 2(J\otimes S)^{\mathrm T}Q\left[\mathrm{vec}(I_f) + \mathrm{vec}(VJ^{\mathrm T}) - \mathrm{vec}(r)\right] + 2(I\otimes E)^{\mathrm T}R_1\mathrm{vec}(I_u).$$

Furthermore, the loss function (2.10a) is represented as

$$\min_{\Delta u}\ \frac{1}{2}\mathrm{vec}(\Delta u)^{\mathrm T}W\mathrm{vec}(\Delta u) + q^{\mathrm T}\mathrm{vec}(\Delta u).$$

We also have

$$H' = \begin{bmatrix} S \\ -S \\ E \\ -E \\ I \\ -I \end{bmatrix},\qquad h' = \begin{bmatrix} \boldsymbol{\sigma}_{\max} - I_\sigma - V \\ -\boldsymbol{\sigma}_{\min} + I_\sigma + V \\ \dot{\boldsymbol{\sigma}}_{\max} - I_u \\ -\dot{\boldsymbol{\sigma}}_{\min} + I_u \\ \tau\ddot{\boldsymbol{\sigma}}_{\max} \\ -\tau\ddot{\boldsymbol{\sigma}}_{\min} \end{bmatrix},$$

and, with $v = \mathrm{vec}(\Delta u)\in\mathbb{R}^{nN_u}$, $H = I\otimes H'\in\mathbb{R}^{n(2N+4N_u)\times nN_u}$, and $h = \mathrm{vec}(h')\in\mathbb{R}^{n(2N+4N_u)}$, we achieve the conventional QP problem for the MPC scheme (2.9):
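The collapse of the matrix-valued cost (2.10a) into the vector QP rests on the standard identity $\mathrm{vec}(AXB) = (B^{\mathrm T}\otimes A)\,\mathrm{vec}(X)$, which for the first term gives $\mathrm{vec}(S\Delta uJ^{\mathrm T}) = (J\otimes S)\mathrm{vec}(\Delta u)$. A quick numerical sketch (random matrices, column-stacking vectorization) confirms this; all sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, N, Nu = 3, 7, 5, 5
J = rng.standard_normal((m, n))
S = rng.standard_normal((N, Nu))
dU = rng.standard_normal((Nu, n))

def vec(A):
    # Column-stacking vectorization, the convention under which
    # vec(A X B) = (B^T kron A) vec(X) holds.
    return A.flatten(order="F")

# The identity behind (2.11a): vec(S dU J^T) = (J kron S) vec(dU).
assert np.allclose(vec(S @ dU @ J.T), np.kron(J, S) @ vec(dU))
```

Note that the identity requires column-major stacking (`order="F"` in NumPy); row-major flattening would pair the Kronecker factors the other way around.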
$$\min_v\ \frac{1}{2}v^{\mathrm T}Wv + q^{\mathrm T}v,\tag{2.11a}$$

$$\text{s.t. } Hv \le h.\tag{2.11b}$$
2.2.2
ESEN Model Construction
To address the QP problem (2.11), we have the Karush-Kuhn-Tucker (KKT) optimality conditions:

$$Wv + q + H^{\mathrm T}\mu = 0,\quad \mu \ge 0,\quad h - Hv \ge 0,\quad \mu\circ(h - Hv) = 0,\tag{2.12}$$

where $\circ$ is the Hadamard product, $v$ represents the solution of (2.11), and $\mu\in\mathbb{R}^{n(2N+4N_u)}$ is the Lagrangian multiplier [35]. Moreover, Eq. (2.12) can be handled via a function satisfying $\phi(a, b) = 0 \Leftrightarrow a\circ b = 0,\ a \ge 0,\ b \ge 0$:

$$\phi_\varsigma(a, b) = \varsigma\left(a + b - \sqrt{a\circ a + b\circ b}\right) + (1 - \varsigma)\,a_+\circ b_+,$$

where $\varsigma\in(0, 1)$ and $z_+ = \max\{0, z\}$. Therefore, Eq. (2.12) can be represented as $\phi_\varsigma(h - Hv, \mu) = 0$. Define

$$d = (1 - \varsigma)(h - Hv)_+\circ\mu_+ - \varsigma\sqrt{(h - Hv)\circ(h - Hv) + \mu\circ\mu},$$

$$P = \begin{bmatrix} W & H^{\mathrm T} \\ -\varsigma H & \varsigma I \end{bmatrix},\qquad x = \begin{bmatrix} v \\ \mu \end{bmatrix},\qquad y = \begin{bmatrix} -q \\ -\varsigma h - d \end{bmatrix}.$$
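The smoothed complementarity function $\phi_\varsigma$ can be checked numerically: it vanishes exactly on complementary nonnegative pairs and is nonzero otherwise. The following sketch uses the chapter's later choice $\varsigma = 0.95$; the test vectors are illustrative:

```python
import numpy as np

def phi(a, b, zeta=0.95):
    # phi_zeta(a, b) = zeta*(a + b - sqrt(a∘a + b∘b)) + (1 - zeta)*(a_+ ∘ b_+);
    # it vanishes exactly when a >= 0, b >= 0, and a∘b = 0.
    a, b = np.asarray(a, float), np.asarray(b, float)
    fb = a + b - np.sqrt(a * a + b * b)          # Fischer-Burmeister part
    pen = np.maximum(a, 0) * np.maximum(b, 0)    # positive-overlap penalty
    return zeta * fb + (1 - zeta) * pen

# Complementary pairs (one entry zero, the other nonnegative) map to zero...
assert np.allclose(phi([0.0, 2.0], [3.0, 0.0]), 0.0)
# ...while overlap (both positive) or infeasibility (a negative entry) do not.
assert np.all(phi([1.0], [1.0]) > 0)
assert not np.allclose(phi([-1.0], [0.0]), 0.0)
```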
Then Eq. (2.12) is compactly described as

$$Px - y = 0,\tag{2.13}$$

where $P\in\mathbb{R}^{(2nN+5nN_u)\times(2nN+5nN_u)}$, $x\in\mathbb{R}^{2nN+5nN_u}$, and $y\in\mathbb{R}^{2nN+5nN_u}$. At the $k$-th update, Eq. (2.13) can be written as

$$P_kx_k - y_k = 0.\tag{2.14}$$

Moreover, we define the error function:
$$e_k = P_kx_k - y_k.\tag{2.15}$$
To address (2.14), we adopt the NRI algorithm [36] to obtain the following iteration:

$$x_{k+1} = x_k - M_k^{-1}(P_kx_k - y_k),\tag{2.16}$$

where

$$M = \begin{bmatrix} W & H^{\mathrm T} \\ -\varsigma H + \varsigma D_1H - (1-\varsigma)D_3H & \varsigma I - \varsigma D_2 + (1-\varsigma)D_4 \end{bmatrix},$$

$$D_1 = \Lambda\big((h - Hv)\oslash w\big),\quad D_2 = \Lambda(\mu\oslash w),$$

$$D_3 = \Lambda\big(\partial(h - Hv)_+\circ\mu_+\big),\quad D_4 = \Lambda\big((h - Hv)_+\circ\partial\mu_+\big),$$

$$w = \sqrt{(h - Hv)\circ(h - Hv) + \mu\circ\mu},\qquad \partial z_+ = \begin{cases}1, & z > 0,\\ [0, 1], & z = 0,\\ 0, & z < 0,\end{cases}$$

where $\Lambda(\cdot)$ denotes the diagonal matrix formed from a vector and $\oslash$ denotes the Hadamard division. Based on (2.16), we have

$$\frac{x_{k+1} - x_k}{\tau} = -\frac{1}{\tau}M_k^{-1}(P_kx_k - y_k).\tag{2.17}$$

Taking the continuous-time limit of (2.17), we achieve

$$\dot x(t) = -\eta M^{-1}(t)\big(P(t)x(t) - y(t)\big),\tag{2.18}$$

where $\eta = 1/\tau$. According to (2.18), we introduce an integration term:

$$\dot x(t) = -M^{-1}(t)\left(\eta e(t) + \rho\int_0^te(w)\,\mathrm dw\right),\tag{2.19}$$

where $\rho > 0$. Discretizing (2.19), the ESEN model is written as

$$x_{k+1} = x_k - M_k^{-1}\left(P_kx_k - y_k + \xi\sum_{i=0}^k(P_ix_i - y_i)\right),\tag{2.20}$$

where $\xi = \rho\tau^2$.
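The benefit of the error-summation term in (2.20) can be illustrated on a toy problem. The sketch below solves a constant linear system $Px = y$ under a constant additive disturbance in the update, using the simplifying assumption $M_k = P$ (a stand-in for the full Jacobian $M$ of (2.16)); all numbers are illustrative. With $\xi = 0$ (a plain Newton-type update as in (2.16)) the residual stalls at the disturbance level, while the summation term drives it toward zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
P = rng.standard_normal((n, n)) + 4.0 * np.eye(n)   # well-conditioned toy system
y = rng.standard_normal(n)
c = 0.05 * np.ones(n)                                # constant additive disturbance

def run(xi, steps=300):
    # x_{k+1} = x_k - P^{-1}(e_k + xi * sum_{i<=k} e_i) + c, with e_k = P x_k - y.
    # xi = 0 mimics the plain Newton-type update (2.16) under disturbance;
    # xi > 0 adds the error-summation term in the spirit of (2.20).
    x, acc = np.zeros(n), np.zeros(n)
    for _ in range(steps):
        e = P @ x - y
        acc += e
        x = x - np.linalg.solve(P, e + xi * acc) + c
    return np.linalg.norm(P @ x - y)

res_plain, res_esen = run(0.0), run(0.2)
# Without summation the residual stalls near ||P c||; with it, the accumulated
# error builds up until it exactly cancels the constant disturbance.
assert res_esen < 1e-8 < res_plain
```

This mirrors the role of the integral term in (2.19): a constant disturbance is absorbed by the running sum, much like the integral action of a PI controller.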
2.3
Theoretical Verifications for ESEN Algorithm
In this section, we discuss the convergence analysis of the ESEN algorithm (2.20).
2.3.1
Preconditions
The developed ESEN algorithm (2.20) admits the following convergence analysis.

Theorem 1 The designed ESEN algorithm (2.20) satisfies

$$e_{k+1} + \xi\sum_{i=0}^ke_i + O(\tau^2) = 0,\tag{2.21}$$

where $O(\tau^2)$ is a vector.

Proof The ESEN algorithm (2.20) is rearranged as

$$M_k(x_{k+1} - x_k) = -\left(e_k + \xi\sum_{i=0}^ke_i\right).\tag{2.22}$$
Based on Eq. (2.15), we achieve

$$\dot e_k = \lim_{\tau\to0}\frac{e_{k+1} - e_k}{\tau} = M_k\dot x_k.\tag{2.23}$$

Therefore, (2.22) is represented as

$$\dot e_k = -\frac{1}{\tau}\left(e_k + \xi\sum_{i=0}^ke_i\right).\tag{2.24}$$

Additionally, approximating $\dot e_k$ in (2.24) by a forward difference gives

$$\frac{e_{k+1} - e_k}{\tau} = -\frac{1}{\tau}\left(e_k + \xi\sum_{i=0}^ke_i\right) - O(\tau).\tag{2.25}$$

With (2.25), we can further have

$$e_{k+1} - e_k = -\left(e_k + \xi\sum_{i=0}^ke_i\right) - \tau O(\tau).\tag{2.26}$$

Thus, we obtain the final result:

$$e_{k+1} + \xi\sum_{i=0}^ke_i + O(\tau^2) = 0.\tag{2.27}$$
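Relation (2.21) can be verified numerically. In the toy setting below, where $P$ and $y$ are constant and the exact inverse $M_k^{-1} = P^{-1}$ is used, the $O(\tau^2)$ term vanishes and the identity $e_{k+1} + \xi\sum_{i=0}^k e_i = 0$ holds at every step up to floating-point error (all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
P = rng.standard_normal((n, n)) + 3.0 * np.eye(n)
y = rng.standard_normal(n)
xi = 0.2

x, acc = np.zeros(n), np.zeros(n)
for k in range(20):
    e = P @ x - y                                 # e_k = P_k x_k - y_k, cf. (2.15)
    acc += e                                      # running sum e_0 + ... + e_k
    x = x - np.linalg.solve(P, e + xi * acc)      # update (2.20) with M_k = P_k
    # Counterpart of (2.21): e_{k+1} + xi * sum_i e_i = 0 (exactly here, since
    # the toy P and y are constant, so the O(tau^2) terms vanish).
    assert np.allclose(P @ x - y + xi * acc, 0, atol=1e-10)
```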
2.3.2
Convergence Analysis
We use the Lyapunov theorem to discuss the stability and convergence of the designed ESEN algorithm (2.20).

Theorem 2 The proposed ESEN algorithm (2.20) converges with residual error $O(\tau^2)$.

Proof Following [37], the Lyapunov function candidate is chosen as

$$V_{k+1} = e_{k+1}^{\mathrm T}e_{k+1},\tag{2.28}$$

which satisfies $V_{k+1} > 0$ when $e_{k+1}\ne0$ and $V_{k+1} = 0$ when $e_{k+1} = 0$. Using (2.21), we achieve

$$V_{k+1} - V_k = \xi^2\left(\sum_{i=0}^ke_i\right)^{\mathrm T}\left(\sum_{i=0}^ke_i\right) - \xi^2\left(\sum_{i=0}^{k-1}e_i\right)^{\mathrm T}\left(\sum_{i=0}^{k-1}e_i\right) + O(\tau^2) = \xi^2\|e_k\|^2\left(1 - \frac{2}{\xi}\right) + O(\tau^2) < 0.$$

Therefore, the developed ESEN algorithm (2.20) has a steady-state error $\lim_{k\to\infty}\|e_k\|_2$ that converges to the residual error $O(\tau^2)$. The proof is thus completed.
2.4
Simulations Based on MPC Scheme
In this section, we present simulations of the designed MPC scheme solved by the ESEN algorithm, together with extensive comparisons against other trajectory tracking schemes to verify the correctness and noise tolerance of the developed scheme. The experiments, with and without disturbance, are conducted on a 7-DOF Franka Emika Panda robot with $n = 7$ and $m = 3$. Moreover, we set $\varsigma = 0.95$, $\xi = 0.2$, $Q = 20000I$, $R_1 = R_2 = 0.9I$, $N = N_u = 5$, task duration $T_e = 15$ s, sampling period $\tau = 0.05$ s, and initial joint angle $\sigma(0) = [0, -0.7855, 0, -2.3543, 0, 1.5716, 0.7855]^{\mathrm T}$ rad, with joint limits:

$$\sigma_{\max} = [2.8, 1.7, 2.8, -0.07, 2.9, 3.8, 2.9]^{\mathrm T}\ \mathrm{rad},$$
$$\sigma_{\min} = [-2.9, -1.8, -2.9, -3.1, -2.9, -0.01, -2.9]^{\mathrm T}\ \mathrm{rad},$$
$$\dot\sigma_{\max} = -\dot\sigma_{\min} = [2.2, 2.2, 2.2, 2.2, 2.6, 2.6, 2.6]^{\mathrm T}\ \mathrm{rad/s},$$
$$\ddot\sigma_{\max} = -\ddot\sigma_{\min} = [15, 7.5, 10, 12.5, 15, 20, 20]^{\mathrm T}\ \mathrm{rad/s^2}.$$
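As a quick consistency check on the setup above, the initial configuration must lie inside the stated angle limits for constraint (2.9b) to be feasible at $k = 0$. Note that the fourth entry of $\sigma(0)$ is taken here as $-2.3543$ rad, the only sign choice compatible with the listed joint-4 limits $[-3.1, -0.07]$ rad:

```python
import numpy as np

# Joint-angle limits and initial pose as listed above (rad); the fourth entry
# of sigma0 is taken as -2.3543 rad, consistent with the joint-4 limits.
sigma_max = np.array([2.8, 1.7, 2.8, -0.07, 2.9, 3.8, 2.9])
sigma_min = np.array([-2.9, -1.8, -2.9, -3.1, -2.9, -0.01, -2.9])
sigma0 = np.array([0.0, -0.7855, 0.0, -2.3543, 0.0, 1.5716, 0.7855])

# Constraint (2.9b) is only feasible at k = 0 if the initial pose lies
# strictly inside the angle limits.
assert np.all(sigma_min < sigma0) and np.all(sigma0 < sigma_max)
```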
2.4.1
Without Extraneous Disturbance
This experiment makes the robot perform a trajectory tracking task along a petal-shaped path. Simulation experiments are conducted with the proposed MPC scheme solved via the ESEN algorithm, and the results are shown in Fig. 2.2. Figure 2.2a-c depicts the entire motion trajectory, the robot end-effector trajectory in blue together with the given trajectory in red, and the tracking errors, respectively. The tracking errors remain below $10^{-4}$ m, demonstrating the effectiveness of the proposed scheme. In addition, as shown in Fig. 2.2d-f, the joint angles, joint velocities, and joint accelerations do not exceed their limits. Therefore, the proposed scheme can effectively control the robot. Figure 2.3 shows the experiments on the MPC scheme solved by the proposed ESEN algorithm under constant noise $\omega(t) = 10$. Figure 2.3a-d and e-h show the trajectory errors and joint angles of the proposed MPC scheme (2.9), the MPC scheme in [38], the RMP scheme in [10], and the JDF scheme in [11], respectively. The MPC scheme (2.9) offers small tracking errors and strong noise resistance, while the MPC scheme in [38] and the RMP scheme in [10] exhibit large tracking errors. Figure 2.3d, h shows the results of the JDF scheme in [11], which attains a stable tracking error of $4.95\times10^{-3}$ m, but its joint angles remain stable only under constant noise. Therefore, the JDF scheme in [11] has weak noise resistance, which may result in an unstable system. In summary, the MPC scheme (2.9) achieves better tracking performance than the above schemes under constant noise.
Fig. 2.2 Simulations of the designed MPC scheme addressed by the ESEN algorithm (2.20). (a) Robot motion trajectory. (b) Robot end-effector trajectory and given trajectory. (c) Trajectory tracking errors. (d) Joint angles. (e) Joint velocities. (f) Joint accelerations
Fig. 2.3 Simulations for the robot with different control schemes. (a) The control errors of the MPC scheme. (b) The control errors of the MPC scheme [38]. (c) The control errors of the RMP scheme [10]. (d) The control errors of the JDF scheme [11]. (e) The joint angles of the MPC scheme. (f) The joint angles of the MPC scheme [38]. (g) The joint angles of the RMP scheme [10]. (h) The joint angles of the JDF scheme [11]
Fig. 2.4 The tracking trajectory of the Franka Emika Panda redundant robot under noise ω(t) = 50. (a) The tracking trajectory of the MPC scheme (2.9). (b) The tracking trajectory of the RMP scheme in [10]
Thereafter, analogous experiments are conducted on the MPC scheme solved by the proposed ESEN algorithm under constant noise ω(t) = 50, and similar results are obtained (Fig. 2.4).
2.4.2
With Extraneous Disturbance
To test the responsiveness and robustness of the MPC scheme (2.9) under external disturbance, this section compares it with a commonly used redundant robot control scheme: the MAN scheme solved by the piecewise linear projection equation with neural network (PLPENN) [13], which can be expressed as

$$\min_{\ddot\sigma}\ \frac{1}{2}\ddot\sigma^{\mathrm T}\ddot\sigma,$$

$$\text{s.t. } J(\sigma)\ddot\sigma = \ddot r_g - \dot J(\sigma)\dot\sigma + \alpha_v\varepsilon_v + \alpha_p\varepsilon_p,$$

$$\sigma_{\min} \le \sigma \le \sigma_{\max},\quad \dot\sigma_{\min} \le \dot\sigma \le \dot\sigma_{\max},\quad \ddot\sigma_{\min} \le \ddot\sigma \le \ddot\sigma_{\max},\tag{2.29}$$
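Ignoring the inequality bounds (which the PLPENN solver handles), the equality-constrained core of (2.29) has the closed-form minimum-norm solution $\ddot\sigma = J^{+}(\ddot r_g - \dot J\dot\sigma + \alpha_v\varepsilon_v + \alpha_p\varepsilon_p)$ via the Moore-Penrose pseudoinverse. The following sketch checks both feasibility and minimality on random data; the dimensions and the lumped right-hand side are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 7
J = rng.standard_normal((m, n))   # full row rank almost surely
b = rng.standard_normal(m)        # stands in for the whole right-hand side
                                  # r_g_ddot - J_dot@sigma_dot + alpha_v*eps_v + alpha_p*eps_p

# Minimum-norm acceleration satisfying the equality constraint of (2.29):
acc = np.linalg.pinv(J) @ b
assert np.allclose(J @ acc, b)

# Any other feasible acceleration differs by a null-space component of J and
# therefore has a norm at least as large.
other = acc + (np.eye(n) - np.linalg.pinv(J) @ J) @ rng.standard_normal(n)
assert np.allclose(J @ other, b)
assert np.linalg.norm(acc) <= np.linalg.norm(other) + 1e-12
```

The null-space term in the check is exactly the redundancy that schemes like (2.9) and (2.29) exploit to satisfy the joint bounds.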
where $\varepsilon_v$ and $\varepsilon_p$ are the velocity error and position error of the robot, respectively, and $\alpha_v$ and $\alpha_p$ are the corresponding feedback gains. Figure 2.5 describes the experimental results of the MPC scheme (2.9) solved by the ESEN algorithm (2.20) under external disturbances. Figure 2.5a, b shows that the robot end-effector deviates from the given trajectory at $t = 8$ s; however, the redundant robot quickly returns to the given trajectory, which verifies the robustness of the MPC scheme (2.9). In addition, Fig. 2.5c shows that the tracking error stabilizes at $10^{-4}$ m after 0.15 s, which verifies the high efficiency of the MPC scheme (2.9). Moreover, Fig. 2.5d-f indicates that the joint variables of the redundant robot stay within their limits and exhibit strong anti-interference ability. Figure 2.6 shows the experimental results of the MAN scheme (2.29) solved by the PLPENN algorithm [13]. As seen in Fig. 2.6a, b, the redundant robot deviates from the given trajectory due to the external disturbance and only slowly returns to it. Figure 2.6c shows that the tracking error stabilizes to $10^{-4}$ m only at $t = 10.9$ s, much later than the recovery time of the MPC scheme (2.9). Figure 2.6d-f depicts the joint angles, joint velocities, and joint accelerations. Overall, the experiments in Figs. 2.2, 2.3, 2.4, 2.5 and 2.6 show that the MPC scheme (2.9) solved by the ESEN algorithm (2.20) has strong noise tolerance and high efficiency.

Table 2.1 compares various control schemes for the redundant robot. From this table, the proposed scheme (2.9) and the scheme in [7] optimize indicators at both the joint velocity and joint acceleration levels, while the other schemes consider only one level. However, the scheme in [7] has weak noise resistance. Among the noise-tolerant schemes, only the proposed scheme (2.9) considers all three levels of joint constraints simultaneously. Additionally, in terms of MTEN, the proposed scheme (2.9) attains the second smallest value, larger only than that of the scheme in [11], while additionally handling both joint constraints and noise. Thus, the proposed scheme (2.9) has better practicality. Table 2.2 compares different MPC schemes. From this table, the MPC scheme (2.9) used for trajectory tracking simultaneously considers three levels of joint constraints, while the other schemes cannot handle some joint limits. In addition, the MPC scheme (2.9) can be extended to dynamics at the acceleration level.
2.5
Conclusion
This chapter proposes a new MPC scheme for robot control that simultaneously considers the tracking error, the velocity norm, and the acceleration norm, together with three levels of joint constraints. Meanwhile, this scheme does not reduce the feasible region of the decision variables. In addition, this chapter proposes an ESEN algorithm for solving the resulting QP problem. Extensive experimental results demonstrate that the proposed control scheme achieves better trajectory tracking performance than other advanced control schemes. In future work, dynamics will be incorporated into the MPC scheme.
Fig. 2.5 Simulations of the MPC scheme (2.9) solved by ESEN algorithm (2.20) with extraneous disturbance. (a) Robot motion trajectory. (b) Robot end-effector trajectory and given trajectory. (c) Robot trajectory errors. (d) The angles of the robot joint. (e) The velocities of the robot joint. (f) The accelerations of the robot joint
Fig. 2.6 Simulations of the MAN scheme (2.29) [13] with extraneous disturbance. (a) Robot motion trajectory. (b) Robot end-effector trajectory and given trajectory. (c) Robot trajectory errors. (d) The angles of the robot joint. (e) The velocities of the robot joint. (f) The accelerations of the robot joint
Table 2.1 Comparisons of various control schemes for the redundant robot

| Reference | Optimization level | Angle constraint | Velocity constraint | Acceleration constraint | Noise tolerance | Reduced feasible region | MTEN |
|---|---|---|---|---|---|---|---|
| Proposed (2.9) | Acceleration and velocity | Yes | Yes | Yes | Yes | No | $1.165\times10^{-4}$ m |
| [7] | Acceleration and velocity | Yes | Yes | Yes | No | No | $1.453\times10^{-3}$ m |
| [3] | Velocity | Yes | Yes | Yes | No | Yes | $1.244\times10^{-2}$ m |
| [10] | Velocity | No | Yes | No | No | No | $3.401\times10^{-3}$ m |
| [11] | Velocity | No | No | No | Yes | No | $9.061\times10^{-5}$ m |
| [12] | Acceleration | Yes | Yes | Yes | No | Yes | $5.183\times10^{-2}$ m |
| [13] | Acceleration | Yes | Yes | Yes | No | Yes | $7.212\times10^{-4}$ m |
| [34] | Velocity | No | No | No | Yes | No | $4.568\times10^{-4}$ m |
| [39] | Velocity | Yes | Yes | No | Yes | Yes | $1.347\times10^{-4}$ m |
| [40] | Velocity | No | No | No | Yes | No | $3.365\times10^{-3}$ m |

"MTEN" is the maximal tracking error norm
Table 2.2 Comparisons of various MPC schemes for the redundant robot

| Reference | Optimization level | Kinematics/dynamics | Joint constraints |
|---|---|---|---|
| Proposed (2.9) | Acceleration and velocity | Kinematics | Joint angle, velocity and acceleration |
| [19] | Torque | Dynamics | Joint torque |
| [23] | Torque | Dynamics | Joint torque |
| [41] | Torque | Dynamics | Joint torque |
| [42] | Torque | Dynamics | Joint torque and velocity |
| [43] | Velocity | Kinematics | Joint angle, velocity and acceleration |
Chapter 3
A Novel Recurrent Neural Network for Robot Control
Abstract To date, neural networks with strong learning ability have been widely used in natural language processing, process control, and other fields. In this chapter, a new recurrent neural network (RNN) is proposed to deal with time-varying underdetermined linear systems with disturbances, thereby achieving better control performance. The background of the underdetermined linear system is described in Sect. 3.1. Section 3.2 presents the problem formulation. The theoretical analysis is discussed in Sect. 3.3, and the experimental results are presented in Sect. 3.4. Finally, conclusions and future research directions are given in Sect. 3.5. Keywords Recurrent neural network · Time-varying underdetermined linear systems · Double bound limits · Residual errors · State variables
3.1 Overview
Recently, significant progress has been made in research on underdetermined linear equations, which have received widespread attention in fields such as graphics processing and speech recognition [1–3]. Numerous researchers have studied underdetermined linear equations: Donoho et al. [4] obtain the sparsest solution of underdetermined linear equations, and Wang et al. [5] derive semidefinite matrix solutions and nonnegative vector solutions of underdetermined linear systems. In addition, many algorithms have been proposed for solving underdetermined linear equations, such as ABS algorithms [6] and regularizing Lanczos iterative methods [7]. Neural networks, with the advantages of adaptive learning and strong robustness [8–12], can effectively handle large datasets. Powerful recurrent neural networks (RNNs) [12] are commonly used to handle complex nonlinear problems, such as Wang neural networks [13] for time-invariant problems and Zhang neural networks (ZNNs) [14] for time-varying problems. In addition, direct and iterative methods are also commonly used to solve complex mathematical problems. Keramati [15] designs a numerical algorithm based on homotopy perturbation to solve linear equations. Cichocki et al. [16] adopt various RNN algorithms to solve
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 X. Luo et al., Robot Control and Calibration, SpringerBriefs in Computer Science, https://doi.org/10.1007/978-981-99-5766-8_3
linear equations with inequality constraints. In short, the time-invariant underdetermined linear system has been a major focus of recent research [13–18], whereas time-varying underdetermined linear systems have received little attention. In addition, variable bounds should be considered for the time-varying underdetermined linear system [19–25]. Xu et al. [22] propose an RNN model that successfully solves the time-varying underdetermined linear system with bound limits, but they do not consider noises. Therefore, this chapter proposes a new RNN to deal with time-varying underdetermined linear systems with disturbances, which considers double bound limits on the residual errors and state variables [26–32]. Extensive experiments have been conducted on a PUMA560 robot, and the results show that the proposed method achieves better control performance for the redundant robot [17, 33–39]. The major contributions of this chapter are as follows:
(a) An RNN with strong noise-suppression performance is proposed, which can address the time-varying underdetermined linear system with double bound limits on the residual errors and state variables;
(b) Theoretical analyses of the proposed RNN are provided to guarantee its global convergence to zero;
(c) Extensive experiments have been carried out on a PUMA560 robot, and the results demonstrate that the proposed RNN can deal with the time-varying underdetermined linear system with double bound limits under disturbances.
3.2 Time-Varying Description
In this section, we discuss the time-varying system with double-bound limits.
3.2.1 Problem Formulation
Firstly, we design an RNN to solve the following time-varying underdetermined linear system [40–44]:

Q(t)y(t) = s(t),  y⁻ ≤ y(t) ≤ y⁺,  (3.1)

where Q(t) ∈ R^{m×n}, s(t) ∈ R^m, and y(t) ∈ R^n; y⁻ and y⁺ are the lower and upper bounds of y(t). The bound constraint in (3.1) can be rewritten as

Hy(t) ≤ b,  (3.2)

where H = [−E; E] ∈ R^{2n×n} with E ∈ R^{n×n} the identity matrix, and b = [−y⁻; y⁺] ∈ R^{2n}. In addition, by introducing a non-negative slack term, (3.1) can be transformed into [45–49]

Q(t)y(t) − s(t) = 0,  Hy(t) − b + r²(t) = 0,  (3.3)

where r²(t) = [r₁²(t), ⋯, r₂ₙ²(t)]ᵀ ∈ R^{2n}, T denotes the transpose of a matrix or a vector, and r²(t) ≥ 0 is unknown. Equation (3.3) can be compactly written as

A(t)h(t) − l(t) = 0,  (3.4)

where A(t) = [Q(t), 0; H, M(t)] ∈ R^{(m+2n)×3n} is an augmented matrix, h(t) = [yᵀ(t), rᵀ(t)]ᵀ ∈ R^{3n}, l(t) = [s(t); b] ∈ R^{m+2n}, and M(t) = diag{r₁(t), ⋯, r₂ₙ(t)} ∈ R^{2n×2n} is a diagonal matrix. To address the underdetermined linear system with bound limits, an error function ε(t) ∈ R^{m+2n} is expressed as

ε(t) = A(t)h(t) − l(t).  (3.5)

Thereafter, we let

ε̇(t) = −βFΩ(ε(t)),  (3.6)

where β > 0, ε̇(t) is the time derivative of ε(t), and FΩ(γ) represents an increasing odd activation function [12]. Combining (3.5) and (3.6), the RNN model is represented as

ḣ(t) = K⁺(t)(V(t)h(t) + l̇(t) − βFΩ(A(t)h(t) − l(t))),  (3.7)

where ⁺ denotes the pseudo-inverse operation, K(t) = [Q(t), 0; H, 2M(t)] ∈ R^{(m+2n)×3n}, and V(t) = [−Q̇(t), 0; 0, 0] ∈ R^{(m+2n)×3n}. Moreover, ḣ(t), l̇(t), and Q̇(t) are the time derivatives of h(t), l(t), and Q(t), respectively.
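The construction of H, b, and the augmented system (3.4) can be sketched numerically as follows. This is a minimal illustration assuming NumPy; the dimensions m = 1, n = 3, the bound 0.25, and the snapshot values of Q, y are hypothetical choices for demonstration only:

```python
import numpy as np

n, m = 3, 1
E = np.eye(n)
H = np.vstack([-E, E])                      # H = [-E; E] in R^{2n x n}, per (3.2)
y_minus = -0.25 * np.ones(n)                # lower bound y^-
y_plus = 0.25 * np.ones(n)                  # upper bound y^+
b = np.concatenate([-y_minus, y_plus])      # b = [-y^-; y^+] in R^{2n}

# A feasible point: y within bounds, slack r chosen so that Hy - b + r^2 = 0.
y = np.array([0.1, -0.05, 0.2])
r = np.sqrt(b - H @ y)                      # r^2(t) = b - Hy(t) >= 0

Q = np.array([[1.0, 2.0, -1.0]])            # an arbitrary Q(t) snapshot (m x n)
s = Q @ y                                   # consistent right-hand side

# Augmented system (3.4): A(t)h(t) = l(t).
M = np.diag(r)                              # M(t) = diag{r_1, ..., r_2n}
A = np.block([[Q, np.zeros((m, 2 * n))],
              [H, M]])                      # (m+2n) x 3n
h = np.concatenate([y, r])                  # h = [y; r] in R^{3n}
l = np.concatenate([s, b])

residual = A @ h - l                        # error function (3.5)
print(np.linalg.norm(residual))
```

For a y(t) inside the bounds and r chosen as above, the bottom block gives Hy + Mr = Hy + r² = b, so the residual (3.5) vanishes exactly.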
3.2.2 RNN Model
To address the underdetermined linear system with bound limits under noises, the error function ε(t) is again expressed as

ε(t) = A(t)h(t) − l(t).  (3.8)

Based on Eq. (3.8), we design

ε̇(t) = −βFΩ1(ε(t)) − αFΩ2(ε(t) + β∫₀ᵗ FΩ1(ε(δ))dδ),  (3.9)

where ε̇(t) is the time derivative of ε(t) and FΩ(γ) = argmin_{R∈Ω} ‖R − γ‖_F. Combining (3.8) and (3.9), we obtain

ḣ(t) = K⁺(t)(V(t)h(t) + l̇(t) − βFΩ1(A(t)h(t) − l(t)) − αFΩ2(A(t)h(t) − l(t) + β∫₀ᵗ FΩ1(A(δ)h(δ) − l(δ))dδ)),  (3.10)

where K(t) = [Q(t), 0; H, 2M(t)], V(t) = [−Q̇(t), 0; 0, 0], and ḣ(t), l̇(t), Q̇(t) are the time derivatives of h(t), l(t), and Q(t), respectively. Model (3.10) is the designed RNN model [50–54].
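A minimal numerical sketch of model (3.10) is given below. This is an illustration only, under several assumptions not from the text: NumPy, linear (identity) activations FΩ1 = FΩ2 (i.e., Ω unbounded), simple Euler integration, and a hypothetical small test system; the chapter's own experiments use bounded/nonconvex activations and a PUMA560 robot:

```python
import numpy as np

def simulate_rnn(Q_of_t, Qdot_of_t, s_of_t, sdot_of_t, y_minus, y_plus,
                 alpha=5.0, beta=5.0, dt=1e-3, T=3.0):
    """Euler-integrate the RNN model (3.10) with linear activations."""
    n = y_minus.size
    m = s_of_t(0.0).size
    E = np.eye(n)
    H = np.vstack([-E, E])
    b = np.concatenate([-y_minus, y_plus])
    # Initial state: y at the bound midpoint, slack r strictly positive.
    y = np.zeros(n)
    r = 0.5 * np.ones(2 * n)
    h = np.concatenate([y, r])
    w = np.zeros(m + 2 * n)                 # running integral of F_O1(eps)
    for k in range(int(round(T / dt))):
        t = k * dt
        Q, s = Q_of_t(t), s_of_t(t)
        M = np.diag(h[n:])
        A = np.block([[Q, np.zeros((m, 2 * n))], [H, M]])
        K = np.block([[Q, np.zeros((m, 2 * n))], [H, 2.0 * M]])
        V = np.block([[-Qdot_of_t(t), np.zeros((m, 2 * n))],
                      [np.zeros((2 * n, n)), np.zeros((2 * n, 2 * n))]])
        ldot = np.concatenate([sdot_of_t(t), np.zeros(2 * n)])
        eps = A @ h - np.concatenate([s, b])          # error (3.8)
        rhs = V @ h + ldot - beta * eps - alpha * (eps + beta * w)
        h = h + dt * (np.linalg.pinv(K) @ rhs)        # model (3.10)
        w = w + dt * eps
    return h, eps
```

Driving ε through the PI-like law (3.9) with linear activations, the residual obeys ε̈ + (α + β)ε̇ + αβε = 0, so ‖ε(t)‖₂ shrinks rapidly for positive α, β.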
3.3 Theoretical Analysis of RNN Model
Theorem 1 The residual error ε(t) can globally converge to zero.

Proof Firstly, we define ϖ(t) = ε(t) + β∫₀ᵗ FΩ1(ε(δ))dδ, whose time derivative is given by

ϖ̇(t) = ε̇(t) + βFΩ1(ε(t)).  (3.11)

Based on (3.9) and (3.11), we achieve

ϖ̇(t) = −αFΩ2(ϖ(t)).  (3.12)

Besides, we design a Lyapunov function ω(t) = ϖᵀ(t)ϖ(t)/2, whose derivative is ω̇(t) = −αϖᵀ(t)FΩ2(ϖ(t)). Since FΩ2 is a projection, for any R ∈ Ω2 we have

‖ϖ(t) − FΩ2(ϖ(t))‖²_F ≤ ‖ϖ(t) − R‖²_F.  (3.13)

Based on (3.13), we can derive the following inequality:

ω̇(t) ≤ −αFΩ2ᵀ(ϖ(t))FΩ2(ϖ(t))/2 ≤ 0.  (3.14)

Since ω̇(t) ≤ 0, ϖ(t) globally converges to zero. From (3.9), we then obtain

ε̇(t) = −βFΩ1(ε(t)).  (3.15)

Therefore, the residual error ε(t) of the RNN (3.10) globally converges to zero, which completes the proof.

Theorem 2 Under the condition FΩ2(ϖ(t)) − ρ(t)/α ≥ 0, the residual error ε(t) of the developed RNN model (3.10) with bound limits can globally converge to zero.

Proof With noises, (3.12) becomes ϖ̇(t) = −αFΩ2(ϖ(t)) + ρ(t), where ρ(t) denotes the given noises. Element-wise, we have

ϖ̇ᵢ(t) = −αFΩ2(ϖᵢ(t)) + ρᵢ(t).  (3.16)

Additionally, we develop a Lyapunov function γᵢ(t) = ϖᵢ²(t); thus γᵢ(t) is non-negative. Based on (3.16), we achieve

γ̇ᵢ(t) = 2ϖᵢ(t)(−αFΩ2(ϖᵢ(t)) + ρᵢ(t)).  (3.17)

Combining (3.11)–(3.17), we can obtain that lim_{t→∞} ϖ(t) = 0. From (3.16), we have

ε̇(t) = −βFΩ1(ε(t)).  (3.18)

Thus, the residual error of the RNN (3.10) globally converges to zero under the condition αFΩ2(ϖ(t)) ≥ ρ(t), which completes the proof.

Figure 3.1 shows the circuit topology of the developed RNN (3.10).

Theorem 3 The residual error ε(t) of the developed RNN model (3.10) with bounded random noises ρ(t) ∈ R^{m+2n} for addressing the time-varying underdetermined linear system is bounded. If α and β are sufficiently large, lim_{t→∞} sup ‖ε(t)‖₂ is very small.

Proof Consider the case where FΩ1(·) and FΩ2(·) reduce to linear activation functions. With Eq. (3.9), we achieve

ε̇(t) = −(α + β)ε(t) − αβ∫₀ᵗ ε(δ)dδ.  (3.19)

Defining vᵢ(t) = [εᵢ(t), ∫₀ᵗ εᵢ(δ)dδ]ᵀ and taking the noise into account, based on (3.19) we have

v̇ᵢ(t) = Zvᵢ(t) + gρᵢ(t),  (3.20)

where Z = [−(α + β), −αβ; 1, 0], ρᵢ(t) is the ith element of ρ(t), and g = [1, 0]ᵀ. Then Eq. (3.20) can be rewritten as

Fig. 3.1 The circuit topology for the developed RNN (3.10)

vᵢ(t) = exp(Zt)vᵢ(0) + ∫₀ᵗ exp(Z(t − δ))gρᵢ(δ)dδ.  (3.21)

Based on the triangle inequality, we have

‖vᵢ(t)‖₂ ≤ ‖exp(Zt)vᵢ(0)‖₂ + ‖∫₀ᵗ exp(Z(t − δ))gρᵢ(δ)dδ‖₂.  (3.22)

Under the condition (α + β)² > 4αβ, i.e., α ≠ β, we achieve

exp(Zt)vᵢ(0) = [εᵢ(0)(ι₁exp(ι₁t) − ι₂exp(ι₂t))/(ι₁ − ι₂); εᵢ(0)(exp(ι₁t) − exp(ι₂t))/(ι₁ − ι₂)],  (3.23)

exp(Zt)g = [(ι₁exp(ι₁t) − ι₂exp(ι₂t))/(ι₁ − ι₂); (exp(ι₁t) − exp(ι₂t))/(ι₁ − ι₂)].  (3.24)

Then, we achieve that

ι₁ = (−(α + β) + √((α + β)² − 4αβ))/2 = −β,  (3.25)

ι₂ = (−(α + β) − √((α + β)² − 4αβ))/2 = −α.  (3.26)

1. If ι₁ > ι₂, the following inequalities hold:

(ι₁exp(ι₁t) − ι₂exp(ι₂t))/(ι₁ − ι₂) < exp(ι₁t),  (3.27)

(exp(ι₁t) − exp(ι₂t))/(ι₁ − ι₂) < exp(ι₁t)/(ι₁ − ι₂).  (3.28)

Then, we deduce that

lim_{t→∞} sup ‖ε(t)‖₂ ≤ μk√(m + 2n)/β,  (3.29)

where k = √((α − β)² + 1)/(α − β) and μ = max_{1≤i≤m+2n}{max_{0≤δ≤t}|ρᵢ(δ)|}.

2. If ι₁ < ι₂, the following inequalities hold:

(ι₁exp(ι₁t) − ι₂exp(ι₂t))/(ι₁ − ι₂) < exp(ι₂t),  (3.30)

(exp(ι₁t) − exp(ι₂t))/(ι₁ − ι₂) < −exp(ι₂t)/(ι₁ − ι₂).  (3.31)

Then, we obtain

lim_{t→∞} sup ‖ε(t)‖₂ ≤ μk√(m + 2n)/α.  (3.32)

Under the condition (α + β)² = 4αβ, i.e., α = β, we have

exp(Zt)vᵢ(0) = [εᵢ(0)ι₁t exp(ι₁t); εᵢ(0)t exp(ι₁t)],  (3.33)

exp(Zt)g = [ι₁t exp(ι₁t); t exp(ι₁t)],  (3.34)

where ι₁ = −(α + β)/2. According to Lemma 1 in [31], there exist two constants ϕ > 0 and χ > 0 such that

t√(ι₁² + 1) exp(ι₁t) < ϕ exp(−χt).  (3.35)

Lastly, we conclude that

lim_{t→∞} sup ‖ε(t)‖₂ ≤ μϕ√(m + 2n)/χ.  (3.36)

In a word, the residual error ε(t) of the developed RNN model (3.10) for addressing the time-varying underdetermined linear system is bounded, and lim_{t→∞} sup ‖ε(t)‖₂ is very small when α and β are sufficiently large. Thus, the proof is completed.
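The roots ι1 = −β and ι2 = −α appearing in the proof of Theorem 3 are simply the eigenvalues of Z, which is easy to verify numerically (a quick NumPy check, with the illustrative values α = 10, β = 1 used in the simulations):

```python
import numpy as np

alpha, beta = 10.0, 1.0
# Z from (3.20): companion form of the second-order error dynamics (3.19).
Z = np.array([[-(alpha + beta), -alpha * beta],
              [1.0, 0.0]])
eigs = np.sort(np.real(np.linalg.eigvals(Z)))
print(eigs)  # eigenvalues -alpha and -beta
```

The characteristic polynomial of Z is λ² + (α + β)λ + αβ = (λ + α)(λ + β), confirming that the decay rates of the residual error are governed directly by the design gains α and β.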
3.4 Experiments for RNN Model
In this section, we discuss the experimental results of the developed RNN model (3.10) for addressing the time-varying underdetermined linear system.
3.4.1 Simulations
We discuss the following example:

Q(t) = [2 + ς(t), 3 + ξ(t), 4 − ς(2t) + ξ(t)]ᵀ,
s(t) = ς(2t) + ξ(t),  (3.37)
−y⁻ = y⁺ = [0.25, 0.25, 0.25]ᵀ,

where ς(t) and ξ(t) denote sin(t) and cos(t), respectively. The simulation results are shown in Figs. 3.2, 3.3 and 3.4.
From Fig. 3.2a, when α = 10 and β = 1, the residual error ‖ε(t)‖₂ = ‖A(t)h(t) − l(t)‖₂ converges to zero within 3 s, which demonstrates that the developed RNN model (3.10) is effective for addressing (3.4). Figure 3.2b shows the three state components of h(t) solved by the RNN model. Moreover, the derivative of the residual error solved by the RNN model with bounded activation functions is depicted in Fig. 3.2c.
Figure 3.3 depicts the developed RNN model (3.10) addressing the perturbed time-varying underdetermined linear system from five random initial states. From Fig. 3.3a, the residual errors converge to zero within 0.1 s. In addition, Fig. 3.3b describes the state trajectory of y(t) in (3.37). Comparing Fig. 3.2a with Fig. 3.3a, the larger β is, the faster the residual error converges. Moreover, Fig. 3.3c shows the derivative of the residual error with seven initial values.
Fig. 3.2 Simulation results of the developed RNN model (3.10) for addressing the bound-limited time-varying underdetermined linear system in five random states. (a) The residual error of the linear system. (b) State trajectory of the linear system. (c) The derivative of the residual error for the linear system
Fig. 3.3 Simulation results of the developed RNN model (3.10) for addressing the perturbed time-varying underdetermined linear system in five random states. (a) The residual error of the linear system. (b) State trajectory of the linear system. (c) The derivative of the residual error for the linear system
Figure 3.4 depicts the convergence performance of the developed RNN model (3.10) activated by different activation functions under different additive noises. From the simulation results in Figs. 3.2, 3.3 and 3.4, we can draw two conclusions: (a) the value of β contributes greatly to the convergence performance of the developed RNN model (3.10); (b) compared with the linear activation function, the developed RNN model (3.10) achieves better convergence performance with nonconvex and bounded activation functions.
Figure 3.5a shows that the residual error with α = 10 and β = 10 solved by the earlier RNN model (3.7), which lacks noise suppression, cannot converge to zero under constant noise; thus that model is ineffective under noise. Figure 3.5b depicts that the RNN model (3.7) under constant noise can break the limit [−0.25, 0.25]³. Moreover, Fig. 3.5c shows the simulation results of the developed RNN model (3.10) for addressing the time-varying underdetermined linear system with constant noises. Thus, the developed RNN model (3.10) remains effective for addressing the time-varying underdetermined linear system with disturbances.
3.4.2 Robot Applications
This section discusses the application of the developed RNN to the PUMA560 robot. The Cartesian path of the robot end-effector rd(t) and the joint angle θ(t) of the PUMA560 robot satisfy

J(θ(t))θ̇(t) = ṙd(t),  θ̇⁻ ≤ θ̇(t) ≤ θ̇⁺,  (3.38)

where J(θ(t)) is the Jacobian matrix, θ̇(t) is the joint-velocity vector, ṙd(t) is the time derivative of rd(t), and θ̇⁻ and θ̇⁺ are the lower and upper constraints on θ̇(t). Eq. (3.38) can be cast into the form of (3.1) by setting

Q(t) = J(θ(t)),  s(t) = ṙd(t),  (3.39)

y⁻ = θ̇⁻,  y⁺ = θ̇⁺.  (3.40)

The developed RNN is adopted for the tracking control of the PUMA560 robot, whose joint-velocity constraints are as follows:
Fig. 3.5 Simulation results of the developed RNN model (3.10) for addressing the time-varying underdetermined linear system with constant noises. (a) The residual error of the linear system. (b) State trajectory of the linear system. (c) The derivative of the residual error for the linear system
−θ̇⁻ = θ̇⁺ = [0.23, 0.23, 0.23, 0.23, 0.23, 0.23]ᵀ rad/s.  (3.41)
Figure 3.6 gives the simulation results of the developed RNN model (3.10) on a PUMA560 robot. From Fig. 3.6a, the joint velocities generated by the developed RNN model stay within the restricted region. In addition, Fig. 3.6b, c present the robot end-effector tracking the desired Lissajous trajectory, whose maximum tracking error is less than 8 × 10⁻⁶ m. Moreover, we adopt the software V-rep to simulate the tracking performance of a PUMA560 robot in Fig. 3.7, which shows the effectiveness of the developed RNN model (3.10).
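The identification (3.39)–(3.40), which turns the kinematics (3.38) into the form (3.1), can be illustrated on a simple arm. This is a sketch with a hypothetical planar 2-link arm rather than the PUMA560 (whose full Jacobian is not reproduced here); the link lengths L1, L2 and the sample configuration are assumptions:

```python
import numpy as np

L1, L2 = 0.4, 0.3  # assumed link lengths (m)

def fk(theta):
    """End-effector position of a planar 2-link arm."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t12),
                     L1 * np.sin(t1) + L2 * np.sin(t12)])

def jacobian(theta):
    """Analytic Jacobian J(theta) of fk."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([[-L1 * np.sin(t1) - L2 * np.sin(t12), -L2 * np.sin(t12)],
                     [L1 * np.cos(t1) + L2 * np.cos(t12), L2 * np.cos(t12)]])

theta = np.array([0.3, -0.8])
rd_dot = np.array([0.01, -0.02])      # desired end-effector velocity

# Identification (3.39)-(3.40): Q(t) = J(theta(t)), s(t) = rd_dot(t),
# y(t) = theta_dot(t), with bounds y^± given by the joint-velocity limits (3.41).
Q, s = jacobian(theta), rd_dot
theta_dot = np.linalg.pinv(Q) @ s     # one feasible joint velocity (least-norm)
print(np.linalg.norm(Q @ theta_dot - s))
```

In the chapter's scheme the joint velocity is not obtained by a bare pseudo-inverse as in the last line; it is produced by the RNN (3.10), which additionally enforces the bounds (3.40) through the slack variables.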
Fig. 3.6 Simulation results of the developed RNN model (3.10) on a PUMA560 robot. (a) The joint velocity of a robot. (b) The position error of a robot. (c) The motion trajectory of a robot
3.5 Conclusions
In this chapter, an effective RNN is developed to solve perturbed time-varying underdetermined linear systems with double bound limits on residual errors and state variables. Convergence of the developed RNN is then analyzed. In addition, extensive experimental results show that the proposed RNN achieves better trajectory-tracking performance than other advanced algorithms. In the future, we plan to study systems of linear equations with double bound limits on residual errors and state variables under large noises.
Fig. 3.7 The tracking performance of a PUMA560 robot is simulated by the software V-rep
References 1. Wu, D., He, Q., Luo, X., Shang, M.S., He, Y., Wang, G.Y.: A posterior-neighborhood regularized latent factor model for highly accurate web service QoS prediction. IEEE Trans. Serv. Comput. 15(2), 793–805 (2022) 2. Luo, X., Zhou, M.C., Wang, Z.D., Xia, Y.N., Zhu, Q.S.: An effective QoS estimating scheme via alternating direction method-based matrix factorization. IEEE Trans. Serv. Comput. 12(4), 503–518 (2019) 3. Luo, X., Zhou, M.C., Xia, Y.N., Zhu, Q.S., Ammari, A.C., Alabdulwahab, A.: Generating highly accurate predictions for missing QoS-data via aggregating non-negative latent factor models. IEEE Trans. Neural Netw. Learn. Syst. 27(3), 524–537 (2016) 4. Donoho, D., Tsaig, Y., Drori, I., Starck, J.L.: Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inf. Theory. 58(2), 1094–1121 (2012) 5. Wang, M., Xu, W., Tang, A.: A unique “nonnegative” solution to an underdetermined system: from vectors to matrices. IEEE Trans. Signal Process. 65(13), 3551–3582 (2017) 6. Esmaeili, H., Mahdavi-Amiri, N., Spedicato, E.: Explicit ABS solution of a class of linear inequality systems and LP problems. Bull. Iranian Math. Soc. 30(2), 21–38 (2004) 7. Morigi, S., Sgallari, F.: A regularizing L-curve Lanczos method for underdetermined linear systems. Appl. Math. Comput. 121(1), 55–73 (2001)
8. Hu, L., Zhang, J., Pan, X.Y., Luo, X., Yuan, H.Q.: An effective link-based clustering algorithm for detecting overlapping protein complexes in protein-protein interaction networks. IEEE Trans. Netw. Sci. Eng. 8(4), 3275–3289 (2021) 9. Hu, L., Yang, S.C., Luo, X., Yuan, H.Q., Zhou, M.C.: A distributed framework for large-scale protein-protein interaction data analysis and prediction using MapReduce. IEEE/CAA J. Autom. Sin. 9(1), 160–172 (2022) 10. Hu, L., Yuan, X.H., Liu, X., Xiong, S.W., Luo, X.: Efficiently detecting protein complexes from protein interaction networks via alternating direction method of multipliers. IEEE/ACM Trans. Comput. Biol. Bioinform. 16(6), 1922–1935 (2019) 11. Wang, Z., Liu, Y., Luo, X., Wang, J.J., Gao, C., Peng, D.Z., Chen, W.: Large-scale affine matrix rank minimization with a novel nonconvex regularizer. IEEE Tran. Neural Netw. Learn. Syst. 33(9), 4661–4675 (2022) 12. Jin, L., Hu, B.: RNN models for dynamic matrix inversion: a control-theoretical perspective. IEEE Trans. Industr. Inform. 14(1), 189–199 (2017) 13. Chen, K.: Robustness analysis of Wang neural network for online linear equation solving. Electron. Lett. 48(22), 1391–1392 (2012) 14. Xiao, L., Liao, B., Li, S., Zhang, Z., Ding, L., Jin, L.: Design and analysis of FTZNN applied to the real-time solution of a nonstationary Lyapunov equation and tracking control of a wheeled mobile manipulator. IEEE Trans. Industr. Inform. 14(1), 98–105 (2018) 15. Keramati, B.: An approach to the solution of linear system of equations by He’s homotopy perturbation method. Chaos Solitons Fract. 41(1), 152–156 (2009) 16. Cichocki, A., Ramirez-Angulo, J., Unbehauen, R.: Architectures for analog VLSI implementation of neural networks for solving linear equations with inequality constraints. Proc. IEEE Int. Sym. Circuits Syst., 1529–1532 (1992) 17. Wu, D., Luo, X., Shang, M., He, Y., Wang, G.Y., Zhou, M.C.: A deep latent factor model for high-dimensional and sparse matrices in recommender systems. 
IEEE Trans. Syst. Man Cybern. Syst. 51(7), 4285–4296 (2021) 18. Liu, Z., Luo, X., Wang, Z., Liu, X.: Constraint-induced symmetric nonnegative matrix factorization for accurate community detection. Inf. Fusion. 89, 588–602 (2023) 19. Wu, D., Shang, M.S., Luo, X., Wang, Z.D.: An L1-and-L2-norm-oriented latent factor model for recommender systems. IEEE Trans. Neural Netw. Learn. Syst. 33(10), 5775–5788 (2021) 20. Hu, L., Pan, X., Tang, Z., Luo, X.: A fast Fuzzy clustering algorithm for complex networks via a generalized momentum method. IEEE Trans. Fuzzy Syst. 30(9), 3473–3485 (2022) 21. Li, W., Luo, X., Yuan, H., Zhou, M.C.: A momentum-accelerated Hessian-vector-based latent factor analysis model. IEEE Trans. Serv. Comput. https://doi.org/10.1109/TSC.2022.3177316 22. Xu, F., Li, Z., Shao, H., Guo, D.: New recurrent neural network for online solution of timedependent underdetermined linear system with bound constraint. IEEE Trans. Industr. Inform. 15(4), 2167–2176 (2018) 23. Peng, Q., Xia, Y., Zhou, M.C., Luo, X., Wang, S., Wang, Y., Wu, C., Pang, S., Lin, M.: Reliability-aware and deadline-constrained mobile service composition over opportunistic networks. IEEE Trans. Autom. Sci. Eng. 18(3), 1012–1025 (2021) 24. Qin, W., Luo, X., Li, S., Zhou, M.: Parallel adaptive stochastic gradient descent algorithms for latent factor analysis of high-dimensional and incomplete industrial data. IEEE Trans. Autom. Sci. Eng. https://doi.org/10.1109/TASE.2023.3267609 25. Jin, L., Zheng, X., Luo, X.: Neural dynamics for distributed collaborative control of manipulators with time delays. IEEE/CAA J. Autom. Sin. 9(5), 854–863 (2022) 26. Jin, L., Liang, S., Luo, X., Zhou, M.: Distributed and time-delayed K-winner-take-all network for competitive coordination of multiple robots. IEEE Trans. Cybern. 53(1), 641–652 (2022) 27. Luo, X., Zhou, M.C., Li, S., Xia, Y.N., You, Z.H., Zhu, Q.S., Leung, H.: An efficient second order approach to factorizing sparse matrices in recommender systems. 
IEEE Trans. Industr. Inform. 11(4), 946–956 (2015)
28. Chen, D., Zhang, Y.: Robust zeroing neural-dynamics and its time-varying disturbances suppression model applied to mobile robot manipulators. IEEE Trans. Neural Netw. Learn. Syst. 29(9), 4385–4397 (2017) 29. Pazos, F.A., Bhaya, A.: Control Lyapunov function design of neural networks that solve convex optimization and variational inequality problems. Neurocomputing. 72(16–18), 3863–3872 (2009) 30. Zhao, K., Chen, L., Chen, C.L.P.: Event-based adaptive neural control of nonlinear systems with deferred constraint. IEEE Trans. Syst. Man Cybern. Syst. 52(10), 6273–6282 (2022) 31. Jin, L., Zhang, Y., Li, S., Zhang, Y.: Noise-tolerant ZNN models for solving time-varying zerofinding problems: a control-theoretic approach. IEEE Trans. Autom. Control. 62(2), 992–997 (2017) 32. Yan, J., Jin, L., Luo, X., Li, S.: Modified RNN for solving comprehensive Sylvester equation with TDOA application. IEEE Trans. Neural Netw. Learn. Syst. https://doi.org/10.1109/ TNNLS.2023.3263565 33. Yuan, Y., He, Q., Luo, X., Shang, M.S.: A multilayered-and-randomized latent factor model for high-dimensional and sparse matrices. IEEE Trans. Big Data. 8(3), 784–794 (2022) 34. Wu, H., Luo, X., Zhou, M.C., Rawa, M.J., Sedraoui, K., Albeshri, A.: A PID-incorporated latent factorization of tensors approach to dynamically weighted directed network analysis. IEEE/CAA J. Autom. Sin. 9(3), 533–546 (2022) 35. Luo, X., Wang, Z.D., Shang, M.S.: An instance-frequency-weighted regularization scheme for non-negative latent factor analysis on high-dimensional and sparse data. IEEE Trans. Syst. Man Cybern. Syst. 51(6), 3522–3532 (2021) 36. Zhang, Y., Li, S.: A neural controller for image-based visual servoing of manipulators with physical constraints. IEEE Trans. Neural Netw. Learn. Syst. 29(11), 5419–5429 (2018) 37. Yang, C., Jiang, Y., Li, Z., He, W., Su, C.Y.: Neural control of bimanual robots with guaranteed global stability and motion precision. IEEE Trans. Industr. Inform. 13(3), 1162–1171 (2017) 38. 
Wu, D., Luo, X.: Robust latent factor analysis for precise representation of high-dimensional and sparse data. IEEE/CAA J. Autom. Sin. 8(4), 796–805 (2021) 39. Chen, M., Ma, G., Liu, W., Zeng, N., Luo, X.: An overview of data-driven battery health estimation technology for battery management system. Neurocomputing. 532, 152–169 (2023) 40. Luo, X., Zhong, Y.R., Wang, Z.D., Li, M.Z.: An alternating-direction-method of multipliers incorporated approach to symmetric non-negative latent factor analysis. IEEE Trans. Neural Netw. Learn. Syst. https://doi.org/10.1109/TNNLS.2021.3125774 41. Li, W., Wang, R., Luo, X., Zhou, M.: A second-order symmetric non-negative latent factor model for undirected weighted network representation. IEEE Trans. Netw. Sci. Eng. 10(2), 606–618 (2023) 42. Ferreira, L.V., Kaszkurewicz, E., Bhaya, A.: Solving systems of linear equations via gradient systems with discontinuous righthand sides: application to LS-SVM. IEEE Trans. Neural Netw. 16(2), 501–505 (2005) 43. Zhang, Y., Li, S., Gui, J., Luo, X.: Velocity-level control with compliance to acceleration-level constraints: a novel scheme for manipulator redundancy resolution. IEEE Trans. Industr. Inform. 14(3), 921–930 (2018) 44. Luo, X., Zhou, M., Li, S., Shang, M.: An inherently non-negative latent factor model for highdimensional and sparse matrices from industrial applications. IEEE Trans. Industr. Inform. 14(5), 2011–2022 (2018) 45. Qin, W.J., Wang, H.L., Zhang, F., Wang, J.J., Luo, X., Huang, T.W.: Low-rank high-order tensor completion with applications in visual data. IEEE Trans. Image Process. 31, 2433–2448 (2022) 46. Song, Y., Zhu, Z.Y., Li, M., Yang, G.S., Luo, X.: Non-negative latent factor analysis incorporated and feature-weighted fuzzy double c-means clustering for incomplete data. IEEE Trans. Fuzzy Syst. 30(10), 4165–4176 (2022)
References
49
47. Li, W.L., He, Q., Luo, X., Wang, Z.D.: Assimilating second-order information for building non-negative latent factor analysis-based recommenders. IEEE Trans. Syst. Man Cybern. Syst. 52(1), 485–497 (2021) 48. Luo, X., Yuan, Y., Zhou, M.C., Liu, Z.G., Shang, M.S.: Non-negative latent factor model based on β-divergence for recommender systems. IEEE Trans. Syst. Man Cybern. Syst. 51(8), 4612–4623 (2021) 49. Zhang, Y., Wang, Y., Jin, L., Mu, B., Zheng, H.: Different ZFs leading to various ZNN models illustrated via online solution of time-varying underdetermined systems of linear equations with robotic application. Lect. Notes Comput. Sci. 7952, 481–488 (2013) 50. Guo, D., Li, K., Liao, B.: Bi-criteria minimization with MWVNINAM type for motion planning and control of redundant robot manipulators. Robotica. 36(5), 655–675 (2018) 51. Hu, L., Yan, S.C., Luo, X., Zhou, M.C.: An algorithm of inductively identifying clusters from attributed graphs. IEEE Trans. Big Data. 8(2), 523–534 (2022) 52. Luo, X., Wu, H., Yuan, H.Q., Zhou, M.C.: Temporal pattern-aware QoS prediction via biased non-negative latent factorization of tensors. IEEE Trans. Cybern. 50(5), 1798–1809 (2020) 53. Song, Y., Li, M., Luo, X., Yang, G.S., Wang, C.J.: Improved symmetric and nonnegative matrix factorization models for undirected, sparse and large-scaled networks: a triple factorization-based approach. IEEE Trans. Industr. Inform. 16(5), 3006–3017 (2020) 54. Shi, X.Y., He, Q., Luo, X., Bai, Y.N., Shang, M.S.: Large-scale and scalable latent factor analysis via distributed alternative stochastic gradient descent for recommender systems. IEEE Trans. Big Data. 8(2), 420–431 (2022)
Chapter 4
A Projected Zeroing Neural Network Model for the Motion Generation and Control
Abstract Recently, zeroing neural networks have played a vital role in robot control and trajectory tracking. This chapter designs a projected zeroing neural network for redundant robot control, which achieves high superiority and efficiency. The research background of robot control by zeroing neural networks is presented in Sect. 4.1. In Sect. 4.2, we study the feedback-considered scheme of the robot. The neural network design is briefly discussed in Sect. 4.3. The experiments are given in Sect. 4.4. Lastly, conclusions and future work are summarized in Sect. 4.5. Keywords Zeroing neural network · Redundant robot control · Joint-drift-free scheme · Joint errors · Projection zeroing neural network · Singular avoidance
4.1 Overview
In the past decades, robots have brought great convenience to people's daily lives. Owing to their flexibility, teleoperated robots [1–5] and flexible robot manipulators [6–11] can effectively complete difficult tasks. In addition, robots can be divided into non-redundant robots and redundant robots. Redundant robots have more degrees of freedom, enabling tasks such as obstacle avoidance and singularity avoidance. Moreover, a robot needs to return to its original state after completing a repetitive task. However, due to failures in performing tasks and other reasons, the robot may need to readjust its state, which cannot guarantee work efficiency [12–18]. Neural networks with efficient learning abilities are adopted to address computational problems and have been extensively researched in the field of robot control [19–26]. Shen et al. [20] propose a new RNN for addressing unknown noises with time delay. Li et al. [21] design a memory-based RNN exploiting the Lipschitz continuity and boundedness of the activation function, which achieves great robot control performance. Yang et al. [22] develop a method for teleoperation control of robots, which solves the problems of dynamic uncertainty and communication delay. Additionally, the zeroing neural network is a new type of RNN that can utilize the time derivatives of time-dependent parameters for solving robot control problems [26]. Chen et al. [28] present a novel zeroing neural dynamics method that

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
X. Luo et al., Robot Control and Calibration, SpringerBriefs in Computer Science, https://doi.org/10.1007/978-981-99-5766-8_4
considers the singularity problem of redundant robots. However, pseudo-inverse methods still face numerous problems in addressing joint drift [27–35]. Conventional joint-drift-free (JDF) schemes suffer from the drawback that reducing joint errors in joint space can increase position errors in Cartesian space [36–44]. In this chapter, a control scheme based on a Projection Zeroing Neural Network (PZNN) is developed, which addresses the shortcomings of the conventional JDF scheme [45–52]. It can reduce both joint errors in joint space and position errors in Cartesian space, thereby achieving accurate control performance [53–60]. The major contributions of this work are as follows:
(a) This chapter develops a new JDF scheme, which completely eliminates the mutual interference between joint errors and position errors of redundant robots;
(b) The designed PZNN can also handle saturated or nonconvex projection functions;
(c) This chapter presents the convergence analysis of the PZNN, which successfully resolves the joint-drift problem;
(d) Extensive simulation results indicate the superiority and accuracy of the proposed PZNN in handling joint-drift problems of redundant robots with noise.
4.2 Feedback-Considered Scheme
Firstly, we discuss robot kinematics [60–65]. The position of the robot end-effector can be written as:

γ(t) = f(θ(t)),  (4.1)

where γ(t) is the position of the robot end-effector, f(·) represents the kinematic operator, and θ(t) is the joint angle. In addition, the time derivative of γ(t) is calculated by:

γ̇(t) = J(θ(t))θ̇(t),  (4.2)

where γ̇(t) is the end-effector velocity, θ̇(t) is the joint velocity, and J(θ(t)) = ∂f(θ(t))/∂θ(t) ∈ R^(n×m). Moreover, the conventional pseudo-inverse scheme for addressing (4.2) is expressed as:

θ̇(t) = J⁺(θ(t))γ̇(t),  (4.3)

where ⁺ denotes the pseudo-inverse operation. However, the conventional pseudo-inverse scheme can hardly solve the joint-drift issue. Thus, this section introduces a novel JDF scheme for addressing the joint-drift issue, which is given by:
minimize θ̇ᵀ(t)θ̇(t)/2 + gᵀ(t)θ̇(t), subject to J(θ(t))θ̇(t) = s(t),  (4.4)

where g(t) = k(I − J⁺J)(θ(t) − θ(0)), k > 0, I is the identity matrix, and s(t) = γ̇d(t) + p(γd(t) − γ(t)) ∈ R^n.

Theorem 1 In the designed JDF scheme (4.4), the error ε(t) = θ(t) − θ(0) ∈ R^m converges to zero.

Proof The error function is given as:

ε(t) = θ(t) − θ(0).  (4.5)
Thereafter, we define a Lyapunov function:

l(t) = εᵀ(t)ε(t)/2.  (4.6)

The derivative of (4.6) is represented as:

l̇(t) = εᵀ(t)ε̇(t).  (4.7)

This section designs the update ε̇(t) = −k(I − J⁺J)(θ(t) − θ(0)); then we have:

l̇(t) = −kεᵀ(t)(I − J⁺J)ε(t).  (4.8)

Moreover, the matrix J can be decomposed as:

J = SΛD,  (4.9)

where J ∈ R^(n×m), S ∈ R^(n×n), Λ ∈ R^(n×m), D ∈ R^(m×m), SᵀS = I, and DᵀD = I. Then we obtain Jᵀ = DᵀΛᵀSᵀ, which can be further calculated by:

JJᵀ = SΛ²Sᵀ.  (4.10)

Based on (4.10), the inverse is as follows:

(JJᵀ)⁻¹ = SΛ⁻²Sᵀ.  (4.11)

Combining (4.10) and (4.11), we obtain that
J⁺ = Dᵀ[Λ₁; 0]Λ⁻²Sᵀ,  (4.12)

where Λ₁ ∈ R^(n×n) denotes the upper block of Λᵀ. Finally, we deduce that

I − J⁺J = [0, 0; 0, I₍m−n₎ₓ₍m−n₎] ≥ 0 ∈ R^(m×m).  (4.13)

Therefore, under θ̇(t) = −k(I − J⁺J)(θ(t) − θ(0)), the error ε(t) globally converges to zero, thus the proof is finished.
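As an illustrative numerical check (our own sketch, not part of the original derivation; the dimensions n = 3, m = 6, the gain k, and the Euler step size are arbitrary assumptions), the following Python snippet verifies that I − J⁺J is positive semidefinite as in (4.13) and that Euler-integrating ε̇(t) = −k(I − J⁺J)ε(t) drives the null-space drift component toward zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 6                       # task-space and joint-space dimensions (illustrative)
J = rng.standard_normal((n, m))   # a random full-row-rank Jacobian stands in for J(theta)

P = np.eye(m) - np.linalg.pinv(J) @ J   # the null-space projector I - J^+ J of (4.13)

# I - J^+ J is symmetric positive semidefinite: its eigenvalues are 0 or 1.
eigs = np.linalg.eigvalsh(P)
assert np.all(eigs > -1e-9)

# Euler-integrate the drift dynamics of Theorem 1: eps_dot = -k (I - J^+ J) eps.
k, dt = 2.0, 1e-2
eps = rng.standard_normal(m)      # an initial drift theta(t0) - theta(0)
for _ in range(2000):
    eps = eps - dt * k * (P @ eps)
print(np.linalg.norm(P @ eps))    # the null-space drift component decays toward zero
```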
4.3 Neural Network Design

4.3.1 Neural Network Design
Firstly, we design an error function:

ς(t) = F(t)x(t) − y(t),  (4.14)

where

F(t) = [I, Jᵀ(θ(t)); J(θ(t)), 0ₙₓₙ] ∈ R^((m+n)×(m+n)),  x(t) = [θ̇ᵀ(t), ιᵀ(t)]ᵀ ∈ R^(m+n),  y(t) = [−gᵀ(t), sᵀ(t)]ᵀ ∈ R^(m+n).

In addition, the derivative of ς(t) is designed as follows:

ς̇(t) = −xQγ(ς(t)),  (4.15)

where x > 0, and Qγ(V) = arg min_{K∈γ} ‖K − V‖F is the mapping function. Moreover, (4.15) is rewritten as:

ς̇(t) = Ḟ(t)x(t) + F(t)ẋ(t) − ẏ(t).  (4.16)

By combining (4.15) and (4.16), we have

F(t)ẋ(t) = −Ḟ(t)x(t) − xQγ(F(t)x(t) − y(t)) + ẏ(t).  (4.17)

Furthermore, the designed PZNN model is calculated by:
F(t)ẋ(t) = −Ḟ(t)x(t) − xQγ(F(t)x(t) − y(t)) + ẏ(t) + ψ(t),  (4.18)

where ψ(t) denotes the noise term.
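To make the update concrete, a minimal discrete-time sketch of the PZNN dynamics (4.18) follows. This is our own toy, not the authors' implementation: the Jacobian is frozen (so Ḟ = 0 and ẏ = 0), Qγ is chosen as a simple saturation projection, ψ(t) = 0, and integration is by explicit Euler, under which the residual ς(t) = F x(t) − y is driven toward zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2, 4
J = rng.standard_normal((n, m))   # a frozen Jacobian, so F is time-invariant here

# F(t) and y(t) of (4.14); y stands in for [-g(t); s(t)].
F = np.block([[np.eye(m), J.T], [J, np.zeros((n, n))]])
y = rng.standard_normal(m + n)

def Q(v, bound=10.0):
    # Saturation projection onto the box [-bound, bound]^(m+n).
    return np.clip(v, -bound, bound)

kappa, dt = 50.0, 1e-3            # design gain x > 0 of (4.15) and an Euler step size
x = np.zeros(m + n)               # state [theta_dot; iota]
for _ in range(5000):
    resid = F @ x - y             # varsigma(t) of (4.14)
    # Euler step of (4.18) with F_dot = 0, y_dot = 0, psi = 0:
    x = x + dt * np.linalg.solve(F, -kappa * Q(resid))
print(np.linalg.norm(F @ x - y))  # the residual is driven toward zero
```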
4.3.2 Theoretical Analysis without Noise
Theorem 2 For the JDF scheme (4.4), the state x(t) = [θ̇ᵀ(t), ιᵀ(t)]ᵀ of the designed PZNN model (4.18) converges to the theoretical solution x*(t) = [θ̇*ᵀ(t), ι*ᵀ(t)]ᵀ.

Proof We design a Lyapunov function:

ω(t) = ςᵀ(t)ς(t)/2.  (4.19)

The first-order time derivative of ω(t) is written as:

ω̇(t) = −xQγᵀ(ς(t))ς(t).  (4.20)

According to the property of the projection function Qγ(·), we have

‖K − ς(t)‖²F ≥ ‖Qγ(ς(t)) − ς(t)‖²F,  ∀K ∈ γ.  (4.21)

Defining K = 0, we obtain that

‖Qγ(ς(t)) − ς(t)‖²F ≤ ‖ς(t)‖²F.  (4.22)

Lastly, we infer that

ω̇(t) ≤ −xQγᵀ(ς(t))Qγ(ς(t))/2 ≤ 0.  (4.23)

Therefore, the state x(t) = [θ̇ᵀ(t), ιᵀ(t)]ᵀ of the designed PZNN model (4.18), starting from any initial value, converges to the theoretical solution x*(t), thus the proof is finished.
4.3.3 Theoretical Analysis in Constant-Noise Condition
Theorem 3 Define Qγ⁻ and Qγ⁺ as the lower bound and upper bound of Qγ(·). Under the condition 0 < ψ(t) < xQγ⁺ or 0 > ψ(t) > xQγ⁻, for the JDF scheme (4.4) with x(t) = [θ̇ᵀ(t), ιᵀ(t)]ᵀ, the residual error ς(t) of the designed PZNN model (4.18) converges to a fixed value under the constant noise ψ(t).

Proof To study the convergence of PZNN (4.18) under noise, we develop an auxiliary equation:

ς̇(t) = ψ(t) − xQγ(ς(t)).  (4.24)

In addition, based on the auxiliary Eq. (4.24), we obtain the elementwise equation:

ς̇ᵢ(t) = ψᵢ(t) − xQγᵢ(ςᵢ(t)),  (4.25)

where Qγᵢ(·) is an activation function with bounded limitation. Then we design a Lyapunov function:

hᵢ(t) = ςᵢ²(t)/2.  (4.26)

Based on (4.26), the derivative of hᵢ(t) is as follows:

ḣᵢ(t) = ςᵢ(t)ς̇ᵢ(t).  (4.27)

Combining (4.25)–(4.27), we have

ḣᵢ(t) = ςᵢ(t)(ψᵢ(t) − xQγᵢ(ςᵢ(t))).  (4.28)
Moreover, Fig. 4.1 shows that ςᵢ(t) converges to the fixed value.

Theorem 4 The residual error ς(t) of the designed PZNN model (4.18) converges to a fixed value Qγ⁻¹(ψ(t)/x) under the constant noise ψ(t) and an unbounded activation function.

Proof Equation (4.24) is restated as:

ς̇(t) = ψ(t) − xQγ(ς(t)),  (4.29)

where Qγ(·) is an activation function with unbounded limitation and the fixed value is ς(t) = Qγ⁻¹(ψ(t)/x); then (4.29) is rewritten elementwise as:
Fig. 4.1 ςᵢ(t) can converge to the fixed value Qγᵢ⁻¹(ψᵢ(t)/x)

ς̇ᵢ(t) = ψᵢ(t) − xQγᵢ(ςᵢ(t)).  (4.30)
Additionally, a Lyapunov function is established by:

vᵢ(t) = ςᵢ²(t)/2.  (4.31)

Based on (4.31), the derivative of vᵢ(t) is as follows:

v̇ᵢ(t) = ςᵢ(t)(ψᵢ(t) − xQγᵢ(ςᵢ(t))).  (4.32)
Based on the above analysis, the residual error ς(t) of the designed PZNN model (4.18) converges to the fixed value Qγ⁻¹(ψ(t)/x) under unknown constant noise, thus the proof is finished.

Theorem 5 The position error of the end-effector calculated by the designed JDF scheme, addressed by the developed PZNN model (4.18) for robot control, is ε/p, where ε = Qγ⁻¹(ψ(t)/x)₍m+1,⋯,m+n₎.

Proof Based on Eq. (4.4), we obtain that

J(θ)θ̇(t) = γ̇d(t) + p(γd(t) − γ(t)).  (4.33)

Then Eq. (4.33) can be rewritten as:

γ̇(t) − γ̇d(t) = p(γd(t) − γ(t)).  (4.34)

The position error of the robot end-effector is expressed as:
e(t) = γ(t) − γd(t).  (4.35)

In addition, by combining Eqs. (4.34) and (4.35), we have

ė(t) = −pe(t).  (4.36)

Considering lim_{t→∞} ς(t) = Qγ⁻¹(ψ(t)/x), Eq. (4.36) can be rewritten as:

ė(t) = −pe(t) + ε.  (4.37)

Applying the Laplace transform to Eq. (4.37), we have

seᵢ(s) − eᵢ(0) = −peᵢ(s) + εᵢ/s.  (4.38)

With Eq. (4.38) and the final value theorem, we infer that

lim_{t→∞} eᵢ(t) = lim_{s→0} s(eᵢ(0) + εᵢ/s)/(p + s) = εᵢ/p.  (4.39)

The position error of the end-effector calculated by the designed JDF scheme, addressed by the developed PZNN model (4.18) for robot control, is ε/p, thus the proof is finished.
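Both closed-form limits of this subsection can be replayed on scalar toy systems (our own illustration; the gains and noise values are arbitrary, and Qγ is taken as the linear unbounded activation Qγ(ς) = ς, so that Qγ⁻¹(v) = v):

```python
import numpy as np

# Theorem 4 with a linear (unbounded) activation Q(v) = v: fixed value psi / x.
x_gain, psi = 4.0, 0.8            # design gain x and constant noise psi (illustrative)
varsigma, dt = 5.0, 1e-3
for _ in range(10000):
    varsigma += dt * (psi - x_gain * varsigma)   # Euler step of (4.29)
print(varsigma)                   # settles at psi / x_gain = 0.2

# Eqs. (4.37)-(4.39): e_dot = -p e + eps gives the steady-state error eps / p.
p, eps = 3.0, 0.06
e = 1.0
for _ in range(20000):
    e += dt * (-p * e + eps)                     # Euler step of (4.37)
print(e)                          # settles at eps / p = 0.02
```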
4.3.4 Theoretical Analysis in Bounded Random-Noise Condition
Theorem 6 Consider the state variable x(t) = [θ̇ᵀ(t), ιᵀ(t)]ᵀ of the designed JDF scheme addressed by the developed PZNN model (4.18) for robot control. The residual error's upper bound satisfies lim_{t→∞} ‖ς(t)‖₂ ≤ β√(n+m)/x, where β = max_{1≤i≤n}{max_{0≤δ≤t}|ψᵢ(δ)|}.

Proof Equation (4.15) can be expressed as:

ς̇(t) = ψ(t) − xς(t),  (4.40)

where ψ(t) denotes bounded random noise. Eq. (4.40) is rewritten as:

ς̇ᵢ(t) = −xςᵢ(t) + ψᵢ(t),  ∀i ∈ {1, 2, ⋯, n, n+1, ⋯, n+m},  (4.41)

whose solution is ςᵢ(t) = ςᵢ(0)exp(−xt) + ∫₀ᵗ exp(−x(t − δ))ψᵢ(δ)dδ. Finally, we infer that
lim sup_{t→∞} ‖ς(t)‖₂ ≤ β√(n+m)/x,

where β = max_{1≤i≤n}{max_{0≤δ≤t}|ψᵢ(δ)|}. Hence the residual error is ultimately bounded by β√(n+m)/x, and the proof is finished.
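The bound of Theorem 6 can be probed empirically (our own check; the dimensions, gain, noise bound, and step size are arbitrary assumptions): each channel of ς̇ᵢ(t) = −xςᵢ(t) + ψᵢ(t) is driven by noise bounded by β, and after the transient the residual norm stays below β√(n+m)/x:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 6                        # illustrative dimensions
x_gain, beta = 5.0, 0.4            # design gain x and noise bound beta
dt = 1e-3

s = rng.standard_normal(n + m)     # initial residual varsigma(0)
for _ in range(20000):
    psi = rng.uniform(-beta, beta, n + m)  # bounded random noise, |psi_i| <= beta
    s += dt * (-x_gain * s + psi)          # Euler step of (4.41)

bound = beta * np.sqrt(n + m) / x_gain
print(np.linalg.norm(s), bound)    # steady-state norm stays below beta*sqrt(n+m)/x
```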
4.4 Experimental Validations for the Developed PZNN Model

4.4.1 Simulations
This chapter conducts simulations on the PUMA 560 robot; the experimental results are shown in Figs. 4.2, 4.3, 4.4, and 4.5. (a) Without noise: As shown in Fig. 4.2, the simulations of the designed JDF scheme based on the developed PZNN model (4.18) are conducted on a six-degrees-of-freedom robot. As shown in Fig. 4.2a, the joint angle solved by the pseudo-inverse scheme (4.3) does not return to its initial value. As shown in Fig. 4.2b, the joint velocity does not approach zero, indicating that the robot cannot stop moving after completing the task. As shown in Fig. 4.2c, d, the robot's joint values do not return to their initial values, which results in joint drift. In contrast, as shown in Fig. 4.2e, the joint angle solved by the JDF scheme (4.4) returns to its initial value. As shown in Fig. 4.2f, g, the joint velocity and acceleration eventually approach zero. As shown in Fig. 4.2h, the robot trajectory is close to the expected path. Figure 4.2i–k illustrate that the maximum position, velocity, and acceleration errors of the robot are small, being less than 10⁻⁶ m. Figure 4.2l shows that the six joints of the robot can successfully complete the specified task. (b) Noise: As shown in Figs. 4.3a and 4.4a, the robot joint angle can return to its initial state. As shown in Figs. 4.3b and 4.4b, the trajectory of the robot's end-effector is consistent with the expected path, thereby verifying the correctness of the proposed JDF scheme (4.4). As shown in Figs. 4.3c and 4.4c, the maximum position errors of the robot are less than 1 × 10⁻⁶ m and 2 × 10⁻⁶ m, respectively. In addition, as shown in Figs. 4.3d and 4.4d, the robot can return to its initial state, thereby solving the joint-drift problem with noise. Figure 4.5 depicts the simulation results for the PUMA 560 robot tracking a circular path solved by the designed JDF scheme addressed by the developed PZNN model (4.18). As shown in Fig. 4.5a, the joint angle does not return to its initial value. As shown in Fig. 4.5b, the final velocity is not close to zero, which indicates that the task execution fails. As shown in Fig. 4.5c, the maximum position error is 2 × 10⁻³ m. As shown in Fig. 4.5d, the joint angle can return to its original value. The joint velocity in Fig. 4.5e returns to zero, and then the joint acceleration in
Fig. 4.2 Simulation results for a 6-DOF robot based on the designed JDF scheme addressed by the developed PZNN model (4.18), q = θ, dq = θ̇, ddq = θ̈. (a) Joint-angle solved by the pseudo-inverse scheme (4.3). (b) Joint velocity solved by the pseudo-inverse scheme (4.3). (c) Position error solved by the pseudo-inverse scheme (4.3). (d) Motion trajectories solved by the pseudo-inverse scheme (4.3). (e) Joint-angle solved by JDF scheme (4.4). (f) Joint-velocity solved by JDF scheme (4.4). (g) Joint-acceleration solved by JDF scheme (4.4). (h) The desired trajectory and end-effector trajectory solved by the JDF scheme (4.4). (i) Position error solved by JDF scheme (4.4). (j) Velocity error solved by JDF scheme (4.4). (k) Acceleration error solved by JDF scheme (4.4). (l) Motion trajectories solved by the JDF scheme (4.4)
Fig. 4.3 Simulation results for a 6-DOF robot based on the designed JDF scheme addressed by the developed PZNN model (4.18) activated by nonconvex AF with constant noise. (a) Profiles of robot joint-angle. (b) Profiles of the desired robot trajectory and end-effector trajectory. (c) Robot position error. (d) Robot motion trajectories
Fig. 4.5f tends to a unique value. As shown in Fig. 4.5g, the maximum position error is less than 5 × 10⁻⁶ m. As shown in Fig. 4.5h, the robot successfully completes the specified task.
4.4.2 Experiments for a Kinova JACO2 Robot
As shown in Fig. 4.6, the experimental platform includes a Kinova JACO2 robot, a computer, MATLAB 2010b, and Visual Studio 2010. From Fig. 4.7, it can be seen that the Kinova JACO2 robot can perform experiments on the Chinese
Fig. 4.4 Simulation results for a 6-DOF robot based on the designed JDF scheme addressed by the developed PZNN model (4.18) activated by a bounded AF with constant noise. (a) Profiles of robot joint-angle. (b) Profiles of the desired robot trajectory and end-effector trajectory. (c) Robot position error. (d) Robot motion trajectories
character “Jiong” path, which shows that the JDF scheme (4.4) solved by the PZNN model (4.18) can effectively handle joint drift problems with noise.
4.5 Conclusions
This chapter proposes a new JDF scheme solved by the PZNN model to address the joint drift problem of redundant robots. Then, the convergence of the PZNN model is discussed. Moreover, numerous experiments demonstrate that the JDF scheme solved by the PZNN model can effectively handle the joint drift problem of redundant robots in noisy environments. In the future, we plan to study the adaptive control of redundant robots.
Fig. 4.5 Simulation results for the PUMA 560 robot by tracking a circular path solved by the designed JDF scheme addressed by the developed PZNN model (4.18). (a) Joint-angle solved by the pseudo-inverse scheme (4.3). (b) Joint velocity solved by the pseudo-inverse scheme (4.3). (c) Position error solved by the pseudo-inverse scheme (4.3). (d) Joint-angle solved by JDF scheme (4.4). (e) Joint-velocity solved by JDF scheme (4.4). (f) Joint acceleration solved by JDF scheme (4.4). (g) Position error solved by JDF scheme (4.4). (h) Motion trajectories solved by the JDF scheme (4.4)
Fig. 4.6 Experimental platform based on a Kinova JACO2 robot
Fig. 4.7 Snapshots for conducting the given task on a Kinova JACO2 robot
References
References 1. Shang, M.S., Yuan, Y., Luo, X., Zhou, M.C.: An Α-β-divergence-generalized recommender for highly-accurate predictions of missing user preferences. IEEE Trans. Cybern. 52(8), 8006–8018 (2022) 2. Zhang, F., Jin, L., Luo, X.: Error-summation enhanced newton algorithm for model predictive control of redundant manipulators. IEEE Trans. Ind. Electron. 70(3), 2800–2811 (2022) 3. Khan, A.H., Li, S., Luo, X.: Obstacle avoidance and tracking control of redundant robotic manipulator: an RNN based metaheuristic approach. IEEE Trans. Ind. Inform. 16(7), 4670–4680 (2020) 4. Chen, D.C., Li, S., Wu, Q., Luo, X.: New disturbance rejection constraint for redundant robot manipulators: an optimization perspective. IEEE Trans. Ind. Inform. 16(4), 2221–2232 (2020) 5. Xie, Z.T., Jin, L., Luo, X., Sun, Z.B., Liu, M.: RNN for repetitive motion generation of redundant robot manipulators: an orthogonal projection based scheme. IEEE Trans. Neural Netw. Learn. Syst. 33(2), 615–628 (2022) 6. Yang, C., Luo, J., Pan, Y., Liu, Z., Su, C.: Personalized variable gain control with tremor attenuation for robot teleoperation. IEEE Trans. Syst., Man, Cybern., Syst. 48(10), 1759–1770 (2018) 7. He, W., Dong, Y., Sun, C.: Adaptive neural impedance control of a robotic manipulator with input saturation. IEEE Trans. Syst., Man, Cybern., Syst. 46(3), 334–344 (2016) 8. Zinchenko, K., Wu, C., Song, K.: A study on speech recognition control for a surgical robot. IEEE Trans. Industr. Inform. 13(2), 607–615 (2017) 9. Li, Z., Li, S., Luo, X.: Efficient industrial robot calibration via a novel unscented Kalman filterincorporated variable step-size Levenberg-Marquardt algorithm. IEEE Trans. Instrum. Meas. https://doi.org/10.1109/TIM.2023.3265744 10. Xie, Z.T., Jin, L., Luo, X., Li, S., Xiao, X.C.: A data-driven cyclic-motion generation scheme for kinematic control of redundant manipulators. IEEE Trans. Control Syst. Technol. 29(1), 53–63 (2021) 11. 
Jin, L., Qi, Y.M., Luo, X., Li, S., Shang, M.S.: Distributed competition of multi-robot coordination under variable and switching topologies. IEEE Trans. Autom. Sci. Eng. 19(4), 3575–3586 (2021) 12. Chen, X.F., Luo, X., Jin, L., Li, S., Liu, M.: Growing echo state network with an inverse-free weight update strategy. IEEE Trans. Cybern. 53(2), 753–764 (2022) 13. Luo, X., Yuan, Y., Zhou, M.C., Liu, Z.G., Shang, M.S.: Non-negative latent factor model based on β-divergence for recommender systems. IEEE Trans. Syst. Man Cybern. Syst. 51(8), 4612–4623 (2019) 14. Hu, L., Yan, S., Luo, X., Zhou, M.C.: An algorithm of inductively identifying clusters from attributed graphs. IEEE Trans. Big Data. 8(2), 523–534 (2020) 15. Qi, Y., Jin, L., Luo, X., Zhou, M.C.: Recurrent neural dynamics models for perturbed nonstationary quadratic programs: a control-theoretical perspective. IEEE Trans. Neural. Netw. Learn. Syst. 33(3), 1216–1227 (2021) 16. Jin, L., Li, S.: Distributed task allocation of multiple robots: a control perspective. IEEE Trans. Syst., Man, Cybern., Syst. 48(5), 693–701 (2018) 17. Yang, C., Zeng, C., Cong, Y., Wang, N., Wang, M.: A learning framework of adaptive manipulative skills from human to robot. IEEE Trans. Industr. Inform. 15(2), 1153–1161 (2019) 18. Jin, L., Li, S., Xiao, L., Lu, R., Liao, B.: Cooperative motion generation in a distributed network of redundant robot manipulators with noises. IEEE Trans. Syst., Man, Cybern., Syst. 48(10), 1715–1724 (2018) 19. Hu, L., Hu, P., Yuan, X., Luo, X., You, Z.: Incorporating the coevolving information of substrates in predicting HIV-1 protease cleavage sites. IEEE/ACM Trans. Comput. Biol. Bioinform. 17(6), 2017–2028 (2020)
20. Shen, Y., Wang, J.: Robustness analysis of global exponential stability of recurrent neural networks in the presence of time delays and random disturbances. IEEE Trans. Neural Netw. Learn. Syst. 23(1), 83–95 (2012) 21. Li, S., Liu, B., Li, Y.: Selective positive-negative feedback produces the winner-take-all competition in recurrent neural networks. IEEE Trans. Neural Netw. Learn. Syst. 24(2), 301–309 (2013) 22. Yang, C., Wang, X., Li, Z., Li, Y., Su, C.: Teleoperation control based on combination of wave variable and neural networks. IEEE Trans. Syst., Man, Cybern., Syst. 47(8), 2125–2136 (2016) 23. Wei, L., Jin, L., Luo, X.: Noise-suppressing neural dynamics for time-dependent constrained nonlinear optimization with applications. IEEE Trans. Syst. Man Cybern. Syst. 52(10), 6139–6150 (2022) 24. Cheng, D., Huang, J., Zhang, S., Zhang, X., Luo, X.: A novel approximate spectral clustering algorithm with dense cores and density peaks. IEEE Trans. Syst. Man Cybern. Syst. 52(4), 2348–2360 (2021) 25. Zhong, Y.R., Jin, L., Shang, M.S., Luo, X.: Momentum-incorporated symmetric non-negative latent factor models. IEEE Trans. Big Data. 8(4), 1096–1106 (2020) 26. Luo, X., Zhou, M.C., Wang, Z.D., Xia, Y.N., Zhu, Q.S.: An effective scheme for QoS estimation via alternating direction method-based matrix factorization. IEEE Trans. Serv. Comput. 12(4), 503–518 (2019) 27. Luo, X., Liu, Z.G., Li, S., Shang, M.S., Wang, Z.D.: A fast non-negative latent factor model based on generalized momentum method. IEEE Trans. Syst. Man Cybern. Syst. 51(1), 610–620 (2019) 28. Chen, D., Zhang, Y.: Robust zeroing neural dynamics and its time-varying disturbances suppression model applied to mobile robot manipulators. IEEE Trans. Neural Netw. Learn. Syst. 29(9), 4385–4397 (2018) 29. Zhang, Y., Li, S., Kadry, S., Liao, B.: Recurrent neural network for kinematic control of redundant manipulators with periodic input disturbance and physical constraints. IEEE Trans. Cybern. 
49(12), 4194–4205 (2019) 30. Najmaei, N., Kermani, M.R.: Applications of artificial intelligence in safe human-robot interactions. IEEE Trans. Syst., Man, Cybern., Syst. B, Cybern. 41(2), 448–459 (2011) 31. Li, J., Zhang, Y., Mao, M.: Continuous and discrete zeroing neural network for different-level dynamic linear system with robot manipulator control. IEEE Trans. Syst., Man, Cybern., Syst. 50(11), 4633–4642 (2018) 32. Jin, L., Zhang, Y., Li, S., Zhang, Y.: Modified ZNN for time-varying quadratic programming with inherent tolerance to noises and its application to kinematic redundancy resolution of robot manipulators. IEEE Trans. Ind. Electron. 63(11), 6978–6988 (2016) 33. Luo, X., Zhou, M.C., Li, S., Hu, L., Shang, M.S.: Non-negativity constrained missing data estimation for high-dimensional and sparse matrices from industrial applications. IEEE Trans. Cybern. 50(5), 1844–1855 (2018) 34. Wu, D., Luo, X.: Robust latent factor analysis for precise representation of high-dimensional and sparse data. IEEE/CAA J. Autom. Sin. 8(4), 796–805 (2021) 35. Luo, X., Sun, J.P., Wang, Z.D., Li, S., Shang, M.S.: Symmetric and non-negative latent factor models for undirected, high dimensional and sparse networks in industrial applications. IEEE Trans. Ind. Inform. 13(6), 3098–3107 (2017) 36. Yuan, Y., Luo, X., Shang, M.: Effects of preprocessing and training biases in latent factor models for recommender systems. Neurocomputing. 275, 2019–2030 (2018) 37. Luo, X., Zhou, M., Li, S., Shang, M.: An inherently nonnegative latent factor model for highdimensional and sparse matrices from industrial applications. IEEE Trans. Ind. Inform. 14(5), 2011–2022 (2018) 38. Zhang, Z., Zhang, Y.: Acceleration-level cyclic-motion generation of constrained redundant robots tracking different paths. IEEE Trans. Syst., Man, Cybern., Syst. B, Cybern. 42(4), 1257–1269 (2012)
Chapter 5
A Regularization Ensemble Based on Levenberg–Marquardt Algorithm for Robot Calibration
Abstract This chapter investigates six regularization schemes, namely L1, L2, dropout, elastic net, log, and swish, and then develops an efficient ensemble that incorporates the six regularizations to achieve high calibration accuracy. Firstly, Sect. 5.1 discusses the research background of robot calibration. In Sect. 5.2, we introduce the six regularized robot calibration schemes and the principle of the ensemble. Then, Sect. 5.3 presents experiments on the proposed ensemble. Lastly, conclusions and future work are summarized in Sect. 5.4.

Keywords Regularization scheme · Ensemble · Kinematic parameters · Overfitting · Levenberg–Marquardt algorithm
5.1 Overview
With the high prosperity of industry and artificial intelligence, high requirements have been put forward for the positioning accuracy and stability of robots [1–8]. However, due to manufacturing tolerances, joint flexibility errors, and gravity, a robot exhibits large positioning errors, which cannot satisfy the requirements of high-end manufacturing [9–15], such as cell phone assembly and chip manufacturing. Therefore, it is urgent to calibrate the robot in advance. To calibrate robots accurately, researchers have conducted extensive explorations [16–25]. Li et al. [1] design a novel method to search for the optimal measuring posture count and accurately identify the robot tracking error. Joubair et al. [4] adopt planar constraints to limit the end-effector position of a FANUC LR Mate 200iC industrial robot and successfully identify its positioning errors. Jiang et al. [5] propose a calibration algorithm that integrates an extended Kalman filter and a particle filter, which effectively suppresses the noise in the robot calibration process. Ma et al. [8] develop a new calibration method based on maximum likelihood estimation, which calibrates the FANUC LR Mate 200i robot and achieves high calibration accuracy. The above methods can be adopted to calibrate robots; however, they frequently suffer from overfitting, which restricts their versatility. Therefore, this chapter designs regularization schemes to solve this issue [9–11, 18, 19, 22, 23].
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
X. Luo et al., Robot Control and Calibration, SpringerBriefs in Computer Science, https://doi.org/10.1007/978-981-99-5766-8_5
This chapter investigates six different regularization schemes, namely L1 [19, 26, 27], L2 [18, 19, 28, 29], dropout [11, 30–36], elastic net [10], log [37–39], and swish [40]. Specifically, robot calibration is a highly complex nonlinear problem, and the Levenberg–Marquardt (LM) algorithm is commonly used to handle nonlinear problems owing to its high robustness and effectiveness [26, 27, 37, 38, 40–45]. Therefore, we take the LM algorithm as the basic model for robot calibration and obtain six new calibration methods with different regularization schemes. In addition, the six methods with different regularization schemes are integrated to obtain an efficient ensemble, which achieves higher calibration accuracy than its peers do [28–31, 39, 46–49]. Moreover, the innovations and contributions are summarized as follows: (a) six robot calibration methods with different regularization schemes are researched, whose calibration accuracy is higher than that of the standard LM algorithm; (b) we propose a powerful ensemble consisting of the six new robot calibration methods, which achieves high calibration accuracy; (c) numerous experiments demonstrate that the proposed ensemble achieves higher calibration accuracy than other advanced calibration algorithms.
5.2 Diversified Regularized LM Algorithm

5.2.1 Regularized Robot Kinematic Error Model
LM Algorithm

According to the kinematic model and error model of the robot, the deviation of the robot kinematic parameters can be approximately calculated from the measured length $Y_i$ and the nominal length $Y_i^0$ of the drawstring displacement sensor [32–34, 50–56]. The objective function can be expressed as:

$$\varepsilon = \frac{1}{2n}\sum_{i=1}^{n}\left\|Y_i - Y_i^0 - \frac{\partial Y_i^0}{\partial w_t}\left(w_{t+1} - w_t\right)\right\|_2^2 + \frac{\lambda_0}{2}\left(w_{t+1} - w_t\right)^2, \tag{5.1}$$
where $w$ denotes the kinematic parameters, $\lambda_0$ represents the learning rate, and $n$ is the number of samples. Based on (5.1), we can obtain:

$$w_{t+1} = w_t + \left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(\frac{\partial Y_i^0}{\partial w_t}\right)^{T} + \lambda_0 I\right)^{-1}\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(Y_i - Y_i^0\right), \tag{5.2}$$
where $I$ is the identity matrix; the term $\lambda_0 I$ ensures that the matrix to be inverted remains positive definite throughout the iterative calculation process [35, 36, 57–64]. Based on the principle of the regularization effect, the objective function of the regularized LM algorithm can be described as:

$$\varepsilon = \frac{1}{n}\sum_{i=1}^{n}\frac{1}{2}e_i^2 + h + g, \tag{5.3}$$
where $e_i$ is the difference between the nominal cable length and the measured cable length, $h$ is the substantial term, and $g$ is the regularization term. According to the optimization principle, differentiating (5.3) yields the following updating rule for the robot kinematic parameters:

$$w_{t+1} = w_t + \left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(\frac{\partial Y_i^0}{\partial w_t}\right)^{T} + \lambda_0 I\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(Y_i - Y_i^0\right) - \frac{\partial g}{\partial w_t}\right). \tag{5.4}$$
In (5.4), ∂g/∂wt is the first derivative of the regularization term.
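As a concrete numeric illustration, the generic regularized update (5.4) can be sketched in a few lines of NumPy. The helper `lm_step`, the stand-in Jacobian `X`, and the zero regularization gradient below are hypothetical toy choices for a noise-free linear model, not the book's calibration code:

```python
import numpy as np

def lm_step(w, J, e, lam0, grad_g):
    """One update of the regularized LM rule (5.4).

    J      : (n, p) matrix whose rows are the partials dY0_i/dw
    e      : (n,)  residuals Y_i - Y0_i(w)
    grad_g : callable returning the gradient of the regularization term g
    """
    n = J.shape[0]
    H = J.T @ J / n + lam0 * np.eye(w.size)   # damped normal matrix
    rhs = J.T @ e / n - grad_g(w)             # regularized right-hand side
    return w + np.linalg.solve(H, rhs)

# toy noise-free linear "kinematic model": Y0_i = x_i . w, so the Jacobian is X
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w_true = np.array([2.0, -1.0])
y = X @ w_true
w = np.zeros(2)
for _ in range(20):                            # iterate (5.4) with g = 0
    w = lm_step(w, X, y - X @ w, 1e-3, lambda w: np.zeros_like(w))
# w now recovers w_true
```

With a nonzero `grad_g`, the same step realizes each of the regularized variants derived below.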
L1-Regularized LM Algorithm

The L1 regularization term is a penalty term of the loss function, which restricts the complexity of the model to avoid overfitting. Commonly, L1 regularization minimizes the sum of the absolute values of the decision parameters, driving some of them to zero and thereby yielding a sparse model [27, 43, 44], which can better calibrate the robot's kinematic parameters. Based on the L1 regularization term, $g$ can be represented as:

$$g = \lambda\left\|w_t\right\|_1 = \lambda\left|w_t\right|_{abs}. \tag{5.5}$$
In (5.5), $\lambda$ is the learning rate, and the absolute value is non-differentiable at zero. Following the literature [27], this chapter adopts the approximation:

$$\left\|w_t\right\|_{abs} \approx \sqrt{\left(w_t\right)^2 + v}, \tag{5.6}$$
where $v$ is a tiny constant. Based on (5.4) and (5.6), the parameter updating formula of the LM algorithm with L1 regularization can be expressed as:

$$w_{t+1} = w_t + \left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(\frac{\partial Y_i^0}{\partial w_t}\right)^{T} + \lambda_0 I\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(Y_i - Y_i^0\right) - \frac{\lambda w_t}{\sqrt{\left(w_t\right)^2 + v}}\right). \tag{5.7}$$
L2-Regularized LM Algorithm

When the loss function involves many decision parameters or the model is overly complex, overfitting frequently occurs. The L2 regularization scheme is usually adopted to solve this issue: it penalizes the L2 norm of the decision parameters, i.e., the sum of their squares [18, 19, 28]. The L2 regularization can be written as:

$$g = \frac{\lambda}{2}\left\|w_t\right\|_2^2 = \frac{\lambda}{2}\left(w_t\right)^2. \tag{5.8}$$
Substituting (5.8) into (5.4), the parameter updating formula of the LM algorithm with L2 regularization is represented as:

$$w_{t+1} = w_t + \left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(\frac{\partial Y_i^0}{\partial w_t}\right)^{T} + \lambda_0 I\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(Y_i - Y_i^0\right) - \lambda w_t\right). \tag{5.9}$$
Elastic Net-Regularized LM Algorithm

Elastic net regularization combines the advantages of L1 and L2 regularization, thereby making the model more generalized [10, 47, 50–53]. Elastic net regularization can be written as:

$$g = \lambda_1\sqrt{\left(w_t\right)^2 + v} + \frac{\lambda_2}{2}\left(w_t\right)^2, \tag{5.10}$$

where $\lambda_1$ and $\lambda_2$ are the learning rates of the L1 and L2 parts, respectively. When $\lambda_1 = 0$, elastic net regularization reduces to L2 regularization; when $\lambda_2 = 0$, it reduces to L1 regularization. Moreover, the decision parameter learning rule of the LM algorithm with elastic net regularization is represented as:

$$w_{t+1} = w_t + \left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(\frac{\partial Y_i^0}{\partial w_t}\right)^{T} + \lambda_0 I\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(Y_i - Y_i^0\right) - \frac{\lambda_1 w_t}{\sqrt{\left(w_t\right)^2 + v}} - \lambda_2 w_t\right). \tag{5.11}$$
Dropout-Regularized LM Algorithm

Dropout regularization is a common regularization method in machine learning: to reduce the overfitting of the model, neurons are randomly deleted from the neural network with probability $1 - \kappa$. According to the literature survey [32–34], dropout regularization is similar to Tikhonov regularization and depends only on the data, so its objective function is as follows:

$$\varepsilon = \frac{1}{n}\sum_{i=1}^{n}\frac{1}{2}\,\mathbb{E}_k\left\|Y_i - \frac{1}{\kappa}Y_i^0\right\|^2, \tag{5.12}$$

where $k = [k_1, \ldots, k_R]$ is a random vector whose entries satisfy the Bernoulli distribution $k_r \sim \mathrm{Bernoulli}(\kappa)$. According to the research literature [33–36], Eq. (5.12) can be transformed into the following objective function:

$$\varepsilon = \frac{1}{n}\sum_{i=1}^{n}\frac{1}{2}\left(\left\|Y_i - Y_i^0\right\|^2 + \lambda\left\|Y_i^0\right\|^2\right), \tag{5.13}$$

where $\lambda = (1-\kappa)/\kappa$. Then we can obtain the decision parameter learning rule of the LM algorithm with dropout regularization:

$$w_{t+1} = w_t + \left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(\frac{\partial Y_i^0}{\partial w_t}\right)^{T} + \lambda_0 I\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(Y_i - Y_i^0\right) - \frac{\lambda}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}Y_i^0\right). \tag{5.14}$$
Log-Regularized LM Algorithm

The log regularization scheme can also effectively prevent overfitting and improve the generalization ability of the model [37–39]. It makes the model sparser and insensitive to differences between decision parameters. The log regularization can be described as:

$$g = \lambda \ln\left(\left|w_t\right| + \tau\right), \tag{5.15}$$

where $\tau$ is a tiny constant. Substituting (5.15) into (5.4), we obtain:

$$w_{t+1} = w_t + \left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(\frac{\partial Y_i^0}{\partial w_t}\right)^{T} + \lambda_0 I\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(Y_i - Y_i^0\right) - \frac{\lambda}{w_t + \tau}\right). \tag{5.16}$$
Swish-Regularized LM Algorithm

Swish is a novel activation function designed by Google [40], which combines Sigmoid and ReLU. It has the advantages of smoothness, no upper bound, a lower bound, and non-monotonicity. This section defines the swish regularization scheme as follows:

$$g = \frac{\lambda}{2}\left(\frac{w_t}{1 + e^{-\beta w_t}}\right)^2. \tag{5.17}$$

Combining (5.17) with (5.4), we achieve:

$$w_{t+1} = w_t + \left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(\frac{\partial Y_i^0}{\partial w_t}\right)^{T} + \lambda_0 I\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{\partial Y_i^0}{\partial w_t}\left(Y_i - Y_i^0\right) - \frac{\lambda\left(w_t + w_t e^{-\beta w_t} + \beta w_t^2 e^{-\beta w_t}\right)}{\left(1 + e^{-\beta w_t}\right)^3}\right). \tag{5.18}$$
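The gradient terms appearing in the update rules (5.7), (5.9), (5.11), (5.16), and (5.18) can be verified against central finite differences. The sketch below uses illustrative hyper-parameter values; dropout is omitted because its penalty acts on the predictions rather than on the parameters, and for the log term the exact derivative carries a sign(w) factor that (5.16) drops for positive $w_t$:

```python
import numpy as np

lam, lam1, lam2, v, tau, beta = 1e-2, 1e-2, 1e-2, 1e-3, 1e-3, 1.0  # illustrative values

# g(w) and dg/dw for the parameter-space penalties, acting element-wise on w
regs = {
    "L1":      (lambda w: lam * np.sqrt(w**2 + v),                       # (5.5)+(5.6)
                lambda w: lam * w / np.sqrt(w**2 + v)),                  # used in (5.7)
    "L2":      (lambda w: 0.5 * lam * w**2,                              # (5.8)
                lambda w: lam * w),                                      # used in (5.9)
    "elastic": (lambda w: lam1 * np.sqrt(w**2 + v) + 0.5 * lam2 * w**2,  # (5.10)
                lambda w: lam1 * w / np.sqrt(w**2 + v) + lam2 * w),      # used in (5.11)
    "log":     (lambda w: lam * np.log(np.abs(w) + tau),                 # (5.15)
                lambda w: lam * np.sign(w) / (np.abs(w) + tau)),         # exact derivative
    "swish":   (lambda w: 0.5 * lam * (w / (1 + np.exp(-beta * w)))**2,  # (5.17)
                lambda w: lam * (w + w * np.exp(-beta * w)
                                 + beta * w**2 * np.exp(-beta * w))
                          / (1 + np.exp(-beta * w))**3),                 # used in (5.18)
}

w = np.array([0.3, -0.7, 1.2])
h = 1e-6
for name, (g, dg) in regs.items():
    fd = (g(w + h) - g(w - h)) / (2 * h)   # central finite difference
    assert np.allclose(fd, dg(w), atol=1e-5), name
```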
5.2.2 Ensemble
The ensemble is a learning method that integrates multiple learning algorithms through some rules for achieving better performance than a single learning algorithm. An effective ensemble generally requires that the basic learning models have diversity and basic learning abilities. According to literature research [12–15],
multiple robot calibration algorithms can be integrated to obtain an ensemble with better generalization performance. Firstly, this chapter assigns a weight $\omega_i$ to each training sample $Y_i$ of the ensemble. When the $m$-th basic model is added into the developed ensemble, the sample weight is denoted as $\omega_i^{(m)}$. Then, the average absolute error $u_{err}^{(m)}$ and standard deviation $\delta_{err}^{(m)}$ over the samples can be expressed as:

$$u_{err}^{(m)} = \frac{1}{n}\sum_{i=1}^{n}\left|Y_i^{(m)} - Y_i\right|, \qquad \delta_{err}^{(m)} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(Y_i^{(m)} - Y_i\right)^2 - \left(u_{err}^{(m)}\right)^2}, \tag{5.19}$$
where $Y_i^{(m)}$ is the predicted value of $Y_i$ generated by the $m$-th basic regularized robot calibration model. For the $m$-th basic regularized LM algorithm, the corresponding set of samples that are not suitable for learning can be defined as:

$$\Gamma^{(m)} = \left\{Y_i \;\middle|\; \left|Y_i^{(m)} - Y_i\right| - u_{err}^{(m)} > \gamma\,\delta_{err}^{(m)}\right\}. \tag{5.20}$$
Based on the above analysis, its error rate can be obtained as:

$$\varphi^{(m)} = \sum_{Y_i \in \Gamma^{(m)}} \omega_i^{(m)}. \tag{5.21}$$
Therefore, the weight $\rho^{(m)}$ of the $m$-th regularized robot kinematic calibration model can be expressed as:

$$\rho^{(m)} = \log\left(1/\varphi^{(m)}\right). \tag{5.22}$$
Then, for the added (m + 1)-th regularized robot model, the initial sample weight is set as $1/m$, and its updating rule is as follows:

$$\omega_i^{(m+1)} = \begin{cases} \dfrac{\omega_i^{(m)}}{D^{(m)}}, & \text{if } Y_i \in \Gamma^{(m)},\\[2mm] \dfrac{\varphi^{(m)}\,\omega_i^{(m)}}{D^{(m)}}, & \text{otherwise}. \end{cases} \tag{5.23}$$

In addition, $D^{(m)}$ can be calculated by the following formula:
$$D^{(m)} = \sum_{Y_i \in \Gamma^{(m)}} \omega_i^{(m)} + \sum_{Y_i \in \left(\Omega - \Gamma^{(m)}\right)} \varphi^{(m)}\,\omega_i^{(m)}, \tag{5.24}$$
where $\Omega - \Gamma^{(m)}$ is the complement of $\Gamma^{(m)}$ in $\Omega$. Finally, the output of the ensemble is as follows:

$$Y_i^{(A)} = \frac{\sum_{m=1}^{M} \rho^{(m)} Y_i^{(m)}}{\sum_{m=1}^{M} \rho^{(m)}}. \tag{5.25}$$
Currently, the ensemble is widely utilized in numerous fields; it adopts a specific rule to integrate the learning results of each basic model to obtain a powerful model. Algorithm 5.1 gives the pseudocode of the proposed ensemble.

Algorithm 5.1 Ensemble
Input: Ω, Y_i, w, ω^(m)                                       Cost
 1. Initialize Γ^(m) = empty set, u_err^(m), δ_err^(m)        Θ(1)
 2. Initialize φ^(m), D^(m), ρ^(m)                            Θ(1)
 3. for m ∈ Ω                                                 ×|Ω|
 4.   Compute u_err^(m), δ_err^(m) with (5.19)                Θ(1)
 5. end for                                                   –
 6. for m ∈ Ω                                                 ×|Ω|
 7.   Update φ^(m) with (5.20), (5.21)                        Θ(1)
 8. end for                                                   –
 9. for m ∈ Ω                                                 ×|Ω|
10.   Update D^(m) based on (5.24)                            Θ(1)
11. end for                                                   –
12. for m ∈ Ω                                                 ×|Ω|
13.   Compute ω_i^(m) with (5.23)                             Θ(1)
14. end for                                                   –
15. Compute ρ^(m) with (5.22)                                 Θ(1)
Output: ω^(m+1), ρ^(m)
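The weighting scheme (5.19)–(5.24) and the combined output (5.25) can be sketched as follows. `ensemble_update` and `ensemble_predict` are illustrative helpers, assuming uniform initial weights and a nonempty badly-fit set Γ (otherwise φ = 0 and the logarithm in (5.22) diverges); `gamma` plays the role of the threshold γ in (5.20):

```python
import numpy as np

def ensemble_update(Y, preds, omega, gamma=1.0):
    """One pass of the sample-weighting rules (5.19)-(5.24) for one base model."""
    err = np.abs(preds - Y)
    u = err.mean()                                     # (5.19) mean absolute error
    delta = np.sqrt(np.mean(err**2) - u**2)            # (5.19) standard deviation
    bad = err - u > gamma * delta                      # (5.20) badly-fit sample set
    phi = omega[bad].sum()                             # (5.21) error rate
    rho = np.log(1.0 / phi)                            # (5.22) model weight
    D = omega[bad].sum() + phi * omega[~bad].sum()     # (5.24) normalizer
    new_omega = np.where(bad, omega, phi * omega) / D  # (5.23) updated weights
    return new_omega, rho

def ensemble_predict(preds_list, rhos):
    """Weighted ensemble output (5.25)."""
    rhos = np.asarray(rhos, dtype=float)
    return rhos @ np.asarray(preds_list) / rhos.sum()

# toy usage: one model fits all samples well except the last one
Y = np.arange(5.0)
preds = Y + np.array([0.0, 0.1, 0.0, 0.0, 2.0])
omega = np.full(5, 0.2)                                # uniform initial weights
new_omega, rho = ensemble_update(Y, preds, omega)
```

After the update, the badly-fit sample keeps the largest weight while the well-fit samples are down-weighted by φ, and the new weights again sum to one.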
5.3 Experimental Results Based on the Proposed Ensemble

5.3.1 General Settings
Evaluation Metrics

Researchers commonly adopt the root mean squared error (RMSE), standard deviation (STD), and maximum error (MAX) to evaluate the performance of industrial robot calibration algorithms. These evaluation metrics have played a significant role in industrial applications [65–68]:

$$\mathrm{MAX} = \max_{i}\left\|Y_i - Y_i^0\right\|_2, \quad i = 1, 2, \cdots, n,$$
$$\mathrm{STD} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\left\|Y_i - Y_i^0\right\|_2 - \bar{e}\right)^2}, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left\|Y_i - Y_i^0\right\|_2^2}, \tag{5.26}$$

where $\bar{e}$ denotes the mean of the positioning errors $\left\|Y_i - Y_i^0\right\|_2$.
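The three metrics in (5.26) can be computed directly. The sketch below assumes scalar cable-length errors and that STD measures the spread of the errors about their mean (consistent with Table 5.4, where STD < RMSE); `calibration_metrics` is an illustrative helper, not the book's code:

```python
import numpy as np

def calibration_metrics(Y, Y0):
    """RMSE, STD, and MAX of the positioning errors |Y_i - Y_i^0| as in (5.26)."""
    e = np.abs(np.asarray(Y) - np.asarray(Y0))   # per-sample positioning error (mm)
    rmse = np.sqrt(np.mean(e**2))
    std = np.sqrt(np.mean((e - e.mean())**2))    # spread about the mean error
    return rmse, std, e.max()
```

For example, errors of 3 mm and 4 mm give RMSE = sqrt(12.5) ≈ 3.54 mm, STD = 0.5 mm, and MAX = 4 mm.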
Dataset

This chapter adopts an ABB IRB120 industrial robot as the experimental object; the experimental platform is shown in Fig. 5.1. Considering the impact of noises and
Fig. 5.1 The experimental platform
Table 5.1 Five sample points
No.  q1/°   q2/°  q3/°   q4/°   q5/°  q6/°  L/mm
1    -67.3  24.4  -24.4  -14.2  76.6  -57   538.56
2    -59.4  24.3  -24.4  -14.2  76.6  -57   536.88
3    -66    24.2  -24.4  -14.2  76.6  -57   539.72
4    -68.6  25.9  -24.4  -14.2  76.6  -57   528.01
5    -68.9  19.3  -24.4  -14.2  76.6  -57   581.55
personnel misoperation in the actual measurement environment, this experiment measures 110 groups of samples at different spatial positions. Table 5.1 shows five groups of samples, where q1–q6 are the joint angles of the robot and L is the measured cable length. During the calibration experiments, 100 groups of samples are selected for training, and the remaining 10 groups are utilized for testing the calibration performance of the proposed algorithm. Moreover, the termination conditions for the calibration algorithm are as follows: (a) the number of training iterations reaches the maximum value, e.g., 200; (b) the root mean square error between two consecutive training iterations is less than 10^-3.
Experimental Device

The experimental system includes an ABB IRB120 industrial robot with six rotating joints, a high-precision drawstring displacement sensor produced by Jinan Hengyang Intelligent Technology Co., Ltd., a drawstring displacement indicator, an RS485 communication module, data acquisition software, and a computer.
Experimental Process

Generally speaking, the sampling process of the robot is highly complex. This experiment collects 110 sample points, which should cover the entire workspace of the robot as much as possible. Firstly, we manually teach 110 points in the workspace of the robot, and then a drawstring displacement sensor is adopted to measure the position of the sample points. In addition, a data acquisition program is designed based on LabVIEW software to gather the information of the sample points; the data collection interface is shown in Fig. 5.1b. Figure 5.2 shows the sampling process of robot calibration. Finally, we adopt the proposed ensemble to estimate the optimal kinematic parameters of the robot.
Fig. 5.2 The sampling process for robot calibration
5.3.2 Experimental Calibration Performance for M1–6
Various calibration algorithms are listed in Table 5.2. Table 5.3 lists the hyper-parameter selection results of M0–6, which are empirical values verified over a long time. Table 5.4 provides the calibration results of M0–6. Tables 5.5 and 5.6 list the average per-iteration time cost and total time cost of M0–6, respectively. Figure 5.3 shows the calibration result of M7 by integrating M1–M6 sequentially, and Fig. 5.4 presents the calibration results obtained by integrating M1–M3 and M5–M6 with M4. From these experimental results, the following conclusions can be drawn:
Table 5.2 Various calibration algorithms
Algorithm  Description
M0   LM algorithm.
M1   LM algorithm with L1 regularization.
M2   LM algorithm with L2 regularization.
M3   LM algorithm with elastic net regularization.
M4   LM algorithm with dropout regularization.
M5   LM algorithm with log regularization.
M6   LM algorithm with swish regularization.
M7   The improved beetle swarm optimization algorithm [2].
M8   Hybrid genetic algorithm [5].
M9   The improved manta ray foraging optimization [12].
M10  The ensemble based on M1–6.
Table 5.3 Hyper-parameter values of M0–6
Algorithm  Hyper-parameters
M0   λ0 = 1.0 × 10^-3
M1   λ0 = 1.0 × 10^-4, λ = 1.0 × 10^-6
M2   λ0 = 1.0 × 10^-4, λ = 1.0 × 10^-7, v = 0.001
M3   λ0 = 1.0 × 10^-4, λ1 = λ2 = 1.0 × 10^-7
M4   λ0 = 1.0 × 10^-4, λ = 1.0 × 10^-6
M5   λ0 = 1.0 × 10^-4, λ = 1.0 × 10^-6, τ = 0.001
M6   λ0 = 1.0 × 10^-4, λ = 1.0 × 10^-6, β = 1

Table 5.4 Calibration accuracy of M0–6
Algorithm  RMSE(mm)  STD(mm)  MAX(mm)
M0   0.40  0.33  0.99
M1   0.35  0.28  0.86
M2   0.34  0.27  0.87
M3   0.34  0.27  0.87
M4   0.34  0.27  0.88
M5   0.34  0.27  0.87
M6   0.35  0.28  0.86

Table 5.5 Time cost per iteration of M0–6
Algorithm  Time cost per iteration(s)
M0   2.21
M1   2.31
M2   2.30
M3   2.31
M4   2.31
M5   2.32
M6   2.31
Table 5.6 Total time cost of M0–6
Algorithm  Average iteration count (RMSE)  Total time cost(s) (RMSE)
M0   132  292
M1   24   55
M2   40   92
M3   38   88
M4   46   106
M5   42   97
M6   55   127

Fig. 5.3 Calibration result of M7 by integrating M1–M6 sequentially ((a) RMSE, (b) Mean, (c) Max versus the integrated models)
(a) The calibration accuracy of M1–6 based on different regularization schemes is higher than that of M0. From Table 5.4, it can be seen that the RMSE, STD, and MAX of M0 are 0.40, 0.33, and 0.99, respectively, whose accuracy is lower than that of M1–M6. In addition, M1 and M6 obtain the highest calibration accuracy among them: their RMSE, STD, and MAX are 0.35, 0.28, and 0.86, with accuracy gains of 12.5%, 15.15%, and 13.13%, respectively.
Fig. 5.4 Calibration result by integrating M1–M3 and M5–M6 with M4 ((a) RMSE, (b) Mean, (c) Max for M* and M*+M4, where M* ∈ {M1, M2, M3, M5, M6})
(b) Different regularization schemes incur additional computing costs. As shown in Table 5.5, M0 reaches an average time cost of 2.21 s per iteration, which is smaller than the time cost of M1–6; the average per-iteration time cost of M1, for instance, is 2.31 s. Specifically, the learning rules of M1–M6 require additional operations to calculate their regularization terms; therefore, the average per-iteration time cost of M1–6 is higher than that of M0.
(c) M1 obtains the best calibration performance among its peers. From Tables 5.4 and 5.6, it can be seen that the maximum error and total time of M1 are 0.86 mm and 55 s, respectively. Compared with the other robot calibration methods, it obtains the best calibration accuracy at the lowest time cost. As shown in Table 5.6, the total time of M0 is 292 s, the longest among all methods. The experimental results show that M0 frequently falls into long-tail convergence, resulting in a long computation time. Therefore, to reduce the total time cost, it is appropriate to incorporate regularization terms into M0.
5.3.3 Experimental Calibration Performance for Compared Algorithms
This section compares the experimental results of M10 with those of three advanced calibration methods. Table 5.7 and Fig. 5.5a–c summarize their calibration accuracy, Table 5.8 and Fig. 5.6a show their total time cost, and Fig. 5.6b, c show the positioning errors after calibration via the various algorithms. Tables 5.9 and 5.10 show the identified DH parameters for M1 and M10, respectively. Based on the above experimental results, we have the following findings:
(a) M10 achieves the highest calibration accuracy among M1–9. As shown in Table 5.7, the MAX of M10 is 0.80, which is 19.19% lower than M0's 0.99, 6.98% lower than M1's 0.86, 8.05% lower than M2's 0.87, 8.05% lower than M3's 0.87, 9.09% lower than M4's 0.88, 8.05% lower than M5's 0.87, 6.98% lower than M6's 0.86, 54.29% lower than M7's 1.75, 38.93% lower than M8's 1.31, and 49.37% lower than M9's 1.58. Similar conclusions can be obtained when adopting STD and RMSE as the metric. Therefore, the proposed ensemble of LM algorithms based on different regularizations can effectively improve the robot calibration accuracy.
(b) The computational efficiency of the regularized robot calibration model is not high enough. From Table 5.8 and Fig. 5.6a, it can be seen that M1 has the highest computational efficiency among its peers. Therefore, a regularized robot calibration model with faster computation will be developed in the future.
(c) The proposed ensemble is highly effective in solving the problem of DH parameter overfitting. As shown in Table 5.9, the DH parameters identified by M1 suffer from overfitting: the change amplitude of the DH parameter values before and after calibration is far more than 10%, which does not comply with the calibration rule. The DH parameters obtained after calibration with M2–6 have a similar overfitting problem. However, when the proposed
Table 5.7 Calibration accuracy of M0–10
Algorithm M0 M1 M2 M3 M4 M5 M6 M7 M8 M9 M10
RMSE(mm) 0.40 0.35 0.34 0.34 0.34 0.34 0.35 0.71 0.59 0.66 0.32
STD(mm) 0.33 0.28 0.27 0.27 0.27 0.27 0.28 0.56 0.49 0.58 0.26
MAX(mm) 0.99 0.86 0.87 0.87 0.88 0.87 0.86 1.75 1.31 1.58 0.80
Fig. 5.5 Calibration accuracy of M0–10 ((a) RMSE, (b) STD, (c) MAX)

Table 5.8 Total time cost of M0–10
Algorithm  Average iteration count (RMSE)  Total time cost(s) (RMSE)
M0   132  292
M1   24   55
M2   40   92
M3   38   88
M4   46   106
M5   42   97
M6   55   127
M7   10   108.12
M8   40   135.14
M9   45   121.56
M10  2    129
Fig. 5.6 Position errors of the robot after calibration via various methods and total time cost ((a) total iteration time (s) of the compared algorithms, (b) position errors before calibration and for M8 and M10, (c) position errors for M0, M9, and M10)

Table 5.9 The calibrated DH parameters via M1
Joint i  αi/°      ai/mm     di/mm     θi/°
1        -90.3547  -3.1687   340.2746  1.6314
2        -0.2471   267.4358  10.9628   -89.0611
3        -89.7821  71.3326   11.1259   0.0460
4        90.0397   -16.4893  313.5153  -0.1238
5        -90.1266  -10.7394  -22.8587  0.0260
6        0         2.9949    104.0698  -0.0024
Table 5.10 The calibrated DH parameters via M10
Joint i  αi/°      ai/mm     di/mm     θi/°
1        -90.0165  1.4485    292.6661  0.1556
2        -0.0022   268.8818  1.2965    -89.9151
3        -89.9880  69.5233   1.2964    0.0347
4        90.0179   -1.4344   304.4124  -0.0032
5        -90.0142  -4.2796   -0.5773   0.0173
6        0.0000    -2.2217   73.3217   0.0001
ensemble is adopted to identify the kinematic parameters of the robot, the obtained parameters, shown in Table 5.10, exhibit no overfitting, thus demonstrating that the proposed ensemble is highly effective in solving the parameter overfitting problem.
After calibration, this section discusses the improvement of the robot positioning accuracy. In the experiment, 110 sampling points are measured, and 30 of them are adopted to show the calibration result. The position errors of the robot after calibration via the various methods are shown in Fig. 5.6b, c, where the dotted lines represent the average error value of the displayed samples. From the figure, it can be seen that the positioning accuracy of the calibrated robot is significantly improved, and M10 has the best calibration performance among them. Therefore, the experimental results demonstrate the reliability of the ensemble scheme.
5.4 Conclusions
To improve the calibration performance of the standard LM algorithm and solve the overfitting problem of robot kinematic parameters, this chapter incorporates different regularization schemes into the LM algorithm to form a powerful ensemble. Moreover, extensive experiments are conducted on an ABB IRB120 industrial robot. The experimental results show that the LM algorithms based on different regularization schemes have different generalization performances, and the learning performance of the robot calibration method can be further improved via an ensemble. In the future, we plan to develop a novel parallel computing framework to improve the computational efficiency of the proposed ensemble.
References 1. Li, T., Sun, K., Xie, Z.W., Liu, H.: Optimal measurement configurations for kinematic calibration of six-DOF serial robot. J. Cent. S. Univ. Technol. 18, 618–626 (2011) 2. Xiao, X., Ma, Y., Xia, Y., Zhou, M., Luo, X., Wang, X., Fu, X., Wei, W., Jiang, N.: Novel workload-aware approach to mobile user reallocation in crowded mobile edge computing environment. IEEE Trans. Intell. Transport. Syst. 23(7), 8846–8856 (2022) 3. Luo, X., Zhou, Y., Liu, Z.G., Hu, L., Zhou, M.C.: Generalized Nesterov’s acceleration incorporated, non-negative and adaptive latent factor analysis. IEEE Trans. Serv. Comput. 15(5), 2809–2823 (2021) 4. Joubair, A., Bonev, I.A.: Non-kinematic calibration of a six-axis serial robot using planar constraints. Precis. Eng. 40, 325–333 (2015) 5. Jiang, Z.H., Zhou, W.G., Li, H., Mo, Y., Ni, W.C., Huang, Q.: A new kind of accurate calibration method for robotic kinematic parameters based on the extended Kalman and particle filter algorithm. IEEE Trans. Ind. Electron. 65(4), 3337–3345 (2018) 6. Li, C., Wu, Y.Q., Löwe, H., Li, Z.X.: POE-based robot kinematic calibration using axis configuration space and the adjoint error model. IEEE Trans. Robot. 32(5), 1264–1279 (2016)
Chapter 6
Novel Evolutionary Computing Algorithms for Robot Calibration
Abstract This chapter discusses an improved covariance matrix adaptation evolution strategy and a quadratic interpolated beetle antennae search algorithm for robot calibration. Section 6.1 introduces the research motivation for robot calibration. Section 6.2 presents the learning rules of an extended Kalman filter, an improved covariance matrix adaptive evolution strategy, and a quadratic interpolated beetle antennae search algorithm. Section 6.3 provides the experiments on the developed evolutionary computing algorithms. Finally, Section 6.4 concludes this chapter.

Keywords Improved covariance matrix adaptation evolution strategy · Quadratic interpolated beetle antennae search algorithm · Extended Kalman filter · RobotCali · Noise
6.1 Overview
Industrial robots are crucial devices in the manufacturing industry and are widely used in assembly [1–5], welding, and spraying. Owing to their powerful versatility, robots have attracted widespread attention from researchers [6–10]. However, long-term operation subjects a robot to inevitable wear and tear, resulting in structural deformation [11–15]. As a consequence, the absolute positioning error of the robot can reach several millimeters, which is unacceptable in high-end manufacturing. To accurately calibrate the robot, researchers have developed various calibration methods. Wang et al. [2] develop a new screw axis calibration system that accurately calibrates the MOTOMAN-UP6 robot. He et al. [3] propose a new calibration system that accurately calibrates the kinematic parameters of the PUMA 560 robot. Zhong et al. [4] adopt an improved whale swarm algorithm to calibrate a legged robot. Before conducting robot calibration, it is necessary to obtain a dataset of the positions of the robot end-effector, but such datasets are currently not publicly available online [16–22]. Meanwhile, the commonly used covariance matrix adaptation evolution strategy and the quadratic interpolated beetle antennae
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 X. Luo et al., Robot Control and Calibration, SpringerBriefs in Computer Science, https://doi.org/10.1007/978-981-99-5766-8_6
search algorithm frequently fall into local optima during the robot calibration process [23–30]. To address the above issues, this chapter proposes a dataset called “RobotCali”, which can be obtained from the following website: https://github.com/Lizhibing1490183152/RobotCali. RobotCali contains 1042 groups of samples, which is sufficient for robot calibration experiments [31–35]. In addition, we develop two calibration methods: a new algorithm based on an improved covariance matrix adaptation evolution strategy and an extended Kalman filter (EKF-ICMA-ES), and a novel algorithm combining a quadratic interpolated beetle antennae search algorithm and an extended Kalman filter (EKF-QIBAS), both of which successfully suppress noise during the robot calibration process [36–40]. The contributions of this chapter are summarized as follows:
(a) This chapter develops a new dataset called “RobotCali”, which provides strong assistance for relevant researchers;
(b) We adopt an extended Kalman filter to handle noise in robot calibration;
(c) We design two new algorithms, EKF-ICMA-ES and EKF-QIBAS, which effectively improve the accuracy of robot calibration;
(d) Numerous experiments conducted on an ABB IRB120 industrial robot demonstrate that the two proposed algorithms achieve higher calibration accuracy than other advanced calibration algorithms.
6.2 EKF-ICMA-ES Algorithm

6.2.1 Extended Kalman Filter (EKF)
The extended Kalman filter (EKF) is the most direct method for nonlinear state estimation. The EKF is not the optimal filter, but it has been successfully applied in numerous nonlinear systems. The state prediction equations of the EKF are written as:

$$x_{k|k-1} = x_{k-1|k-1}, \quad (6.1)$$

$$P_{k|k-1} = P_{k-1|k-1} + Q_{k-1}, \quad (6.2)$$

where $x_k$ represents the errors of the DH parameters, and $P_k$ and $Q_k$ are the system covariance and the covariance of process noises, respectively. Thus, with the measurement model $Z_k = J_k x_k + E_k$, we have:

$$K_k = P_{k|k-1} J_k^T \left( J_k P_{k|k-1} J_k^T + R_k \right)^{-1}, \quad (6.3)$$

$$x_{k|k} = x_{k|k-1} + K_k \left( Z_k - J_k x_{k|k-1} \right), \quad (6.4)$$
$$P_{k|k} = \left( I - K_k J_k \right) P_{k|k-1}, \quad (6.5)$$

where $E_k$ is the robot positioning error, $Z_k$ is the robot measurement error, $J_k$ represents the Jacobian matrix, $K_k$ is the optimal Kalman gain, $R_k$ is the covariance of measurement noises, and $I$ represents the identity matrix.
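To make the update cycle of Eqs. (6.1)–(6.5) concrete, the following is a minimal scalar sketch in Python; the Jacobian, the noise covariances, and the measurements are illustrative placeholders, not values from the calibration experiments.

```python
# Minimal sketch of one EKF cycle, Eqs. (6.1)-(6.5), in the scalar case.
# J, Q, R, and the measurements Z are illustrative placeholders.

def ekf_step(x, P, J, Q, R, Z):
    """One predict/update cycle of a scalar extended Kalman filter."""
    # Prediction, Eqs. (6.1)-(6.2): the parameter-error state is modeled as constant.
    x_pred = x
    P_pred = P + Q
    # Kalman gain, Eq. (6.3).
    K = P_pred * J / (J * P_pred * J + R)
    # State update, Eq. (6.4), with measurement model Z = J*x + E.
    x_new = x_pred + K * (Z - J * x_pred)
    # Covariance update, Eq. (6.5).
    P_new = (1.0 - K * J) * P_pred
    return x_new, P_new

# Usage: noisy measurements pull the estimate toward Z/J while the
# covariance P shrinks.
x, P = 0.0, 1.0
for Z in [0.9, 1.1, 0.95, 1.05]:
    x, P = ekf_step(x, P, J=1.0, Q=1e-4, R=0.1, Z=Z)
```

In the matrix case the same five steps apply, with the division replaced by a matrix inverse and $I$ the identity matrix.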
6.2.2 Improved Covariance Matrix Adaptive Evolution Strategy (ICMA-ES)
The covariance matrix adaptive evolution strategy (CMA-ES) is an efficient population-based random search strategy with the advantages of easy implementation, fast convergence, and strong global search performance; it is highly favored in the field of real-valued optimization. Firstly, the distribution $N(0, C^{(t)})$ is adopted to generate the population:

$$\forall i \in \{1, \ldots, \lambda\}: \; z_i^{(t)} \sim N\left(0, C^{(t)}\right), \quad (6.6)$$

where $z_i$ is a random vector and $C$ represents the covariance matrix. From Eq. (6.6), we obtain:

$$x_i^{(t+1)} = m^{(t)} + \sigma^{(t)} z_i^{(t)}, \quad (6.7)$$

$$m^{(t+1)} = m^{(t)} + \sigma^{(t)} \sum_{i=1}^{u} w_i \left( x_{i:\lambda}^{(t)} - m^{(t)} \right), \quad (6.8)$$
where xi is the new generation, σ, m and wi are the mean vector, evolutionary step and the i-th weight, respectively. Combining (6.7) and (6.8), we obtain the learning rule of C(t), evolution path P(t) C: Pðctþ1Þ = ð1 - cc ÞPcðtÞ þ hðstþ1Þ
cc ð2 - cc Þueff
mðtþ1Þ - mðtÞ , σ ðt Þ
C ðtþ1Þ = 1 þ c1 δ hsðtþ1Þ - c1 - cu CðtÞ þ c1 Pcðtþ1Þ Pcðtþ1Þ þ
cu 2 ðσ ðtÞ Þ
u
ðt Þ
w xi:λ - mðtÞ i=1 i
ðt Þ
xi:λ - mðtÞ
T
δ hðstþ1Þ = 1 - hsðtÞ cc ð2 - cc Þ,
ð6:9Þ
T
ð6:10Þ
, ð6:11Þ
where $c_c$, $c_1$, and $c_u$ represent the updating learning rates, $h_s$ is the Heaviside function, and
$u_{\text{eff}}$ represents the variance effective selection mass. Then, the conjugate evolution path $P_\sigma$ can be calculated by:

$$P_\sigma^{(t+1)} = \sqrt{c_\sigma (2 - c_\sigma) u_{\text{eff}}} \times \left(C^{(t)}\right)^{-\frac{1}{2}} \frac{m^{(t+1)} - m^{(t)}}{\sigma^{(t)}} + (1 - c_\sigma) P_\sigma^{(t)}, \quad (6.12)$$

$$\sigma^{(t+1)} = \sigma^{(t)} \tau, \quad (6.13)$$

where $c_\sigma$ is the updating learning rate, $E\|N(0, I)\|$ is the expected length, and $\tau$ is the coefficient of $\sigma$. To accelerate the convergence of CMA-ES, a variable step-size operator (Eq. (6.14)) is added to the evolutionary step size $\sigma$.

Algorithm 6.1 EKF-ICMA-ES
Input: M, x0, {q(i1), q(i2), . . ., q(i6)}, {Y(1), Y(2), . . ., Y(|N|)}
/--Note: Initialization--/
1 Initialize: M, T
2 Initialize: x = x0
3 Initialize: cc, c1, ε, cσ
/--Note: Training Starts--/
/--Note: EKF-Step--/
4 for k = 1 to |M|
5 set P(0)k randomly
6 set K(0)k to zeros
7 for i = 1 to |N|
8 set Qk known
9 set Rk known
10 Update xk|k-1 with (6.1)
11 Make Jk evolve with (1.12)
12 Compute Pk|k-1 with (6.2)
13 Update Kk with (6.3)
14 Update xk|k with (6.4)
15 Compute Pk|k with (6.5)
16 end for
17 end for
18 xekf = xk|k
/--Note: ICMA-ES-Step--/
19 for t = 1 to |T|
20 set x0 = xekf
21 set x(t)i = x0
22 for i = 1 to |N|
23 Compute z(t)i ~ N(0, C(t))
24 Update x(t+1)i with (6.7)
25 Compute m(t+1) with (6.8)
26 Make P(t+1)C evolve with (6.9)
27 Compute C(t+1) with (6.10)
28 Compute δ(hs(t+1)) with (6.11)
29 Compute Pσ(t+1) with (6.12)
30 Make σ(t+1) evolve with (6.13)
31 end for
32 end for
/--Note: Training Ends--/
Output: x
$$\sigma^{(t+1)} = \begin{cases} \varepsilon \tau \sigma^{(t)}, & \delta_c > 0, \\ \tau \sigma^{(t)}, & \delta_c \le 0, \end{cases} \quad (6.14)$$
Algorithm 6.1 summarizes the procedure of EKF-ICMA-ES, which is adopted to suppress the noises in robot calibration. $M$ and $T$ are the maximum iteration counts of the EKF and ICMA-ES, respectively; $N$, $q_i$, and $Y_i$ are the number of sampling points, the joint angles, and the measured cable lengths.
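The sampling loop of the ICMA-ES step can be sketched in one dimension as follows; the covariance and evolution-path updates of Eqs. (6.9)–(6.12) are omitted (C is fixed to 1), and δc in Eq. (6.14) is approximated by the improvement of the best fitness — both are simplifying assumptions for illustration.

```python
import random

# Simplified 1-D sketch of the ICMA-ES sampling loop with the variable
# step-size operator of Eq. (6.14). The covariance matrix C is fixed to 1
# and delta_c is taken as the improvement of the best fitness; both are
# simplifying assumptions, not the chapter's full method.

def icmaes_1d(f, m, sigma, lam=8, u=4, eps=1.2, tau=0.9, iters=60, seed=0):
    rng = random.Random(seed)
    weights = [u - i for i in range(u)]          # decreasing selection weights
    wsum = float(sum(weights))
    best = f(m)
    for _ in range(iters):
        # Eqs. (6.6)-(6.7): sample lambda candidates around the mean.
        xs = sorted((m + sigma * rng.gauss(0.0, 1.0) for _ in range(lam)), key=f)
        # Eq. (6.8): weighted recombination of the u best candidates.
        m = m + sum(w * (x - m) for w, x in zip(weights, xs)) / wsum
        # Eq. (6.14): enlarge the step on improvement, shrink it otherwise.
        new_best = f(xs[0])
        sigma = eps * tau * sigma if best - new_best > 0 else tau * sigma
        best = min(best, new_best)
    return m

# Usage: minimize a simple quadratic; the mean converges near x = 3.
m_opt = icmaes_1d(lambda x: (x - 3.0) ** 2, m=0.0, sigma=1.0)
```

The variable step-size operator keeps the search aggressive while progress is being made and contracts it once improvements stop, which is what lets the scheme escape the slow late-stage behavior of plain CMA-ES.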
6.2.3 Quadratic Interpolated Beetle Antennae Search (QIBAS)
The beetle antennae search (BAS) algorithm is an efficient intelligent optimization algorithm proposed in 2017. Like the genetic algorithm and particle swarm optimization, the BAS algorithm can search for the optimum without knowing the specific form of the loss function or its gradient. Compared with other advanced evolutionary algorithms, such as particle swarm optimization and the genetic algorithm, the BAS algorithm requires only one individual, which reduces the computational complexity of the calibration process. Firstly, we denote the decision parameters:

$$x = [x_1, x_2, \cdots, x_n]^T. \quad (6.15)$$
Then, the learning rule of the beetle antennae search algorithm is as follows:

$$x_{t+1} = x_t + \delta_t \vec{b} \, \mathrm{sign}\!\left(f(x_t^r) - f(x_t^l)\right), \quad (6.16)$$
where $\delta$ is the search step size, $\vec{b}$ represents a random direction of beetle searching, and $\mathrm{sign}(\cdot)$ denotes the sign function. In addition, $\vec{b}$ is written as:
$$\vec{b} = \frac{\mathrm{rands}(k, 1)}{\|\mathrm{rands}(k, 1)\|_2}, \quad (6.17)$$
where $\mathrm{rands}$ is a random function and $\|\cdot\|_2$ is the two-norm operator. The positions of the left and right antennae are described as:
$$x_t^r = x_t + m_t \vec{b}, \qquad x_t^l = x_t - m_t \vec{b}, \quad (6.18)$$

where $m$ is the sensing length of the antennae and $\delta$ is the search step size, which are updated by:

$$\delta_{t+1} = \mu \delta_t + \delta_0, \qquad m_{t+1} = \tau m_t + m_0, \quad (6.19)$$

where $\mu \in (0, 1)$, $\tau \in (0, 1)$, $\delta_0 > 0$, and $m_0 > 0$. To improve the convergence rate of the BAS algorithm, we add a quadratic interpolation into its learning rule:

$$h(x_i) = c_0 + c_1 x_i + c_2 x_i^2 = f(x_i), \quad (6.20)$$
where $c_0$, $c_1$, $c_2$ are constants. The stationary point of $h(x_i)$ satisfies:

$$h'(x_i) = c_1 + 2 c_2 x_i = 0. \quad (6.21)$$

From (6.21), we obtain the stable equilibrium point:

$$x_k^t = -\frac{c_1}{2 c_2}. \quad (6.22)$$
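Once the parabola is fitted through three trial points, the stationary point (6.22) can be evaluated directly from the sampled losses. A small sketch follows, with illustrative sample points and an added `eps` to guard against a zero denominator:

```python
# Sketch of the quadratic-interpolation step: fit h(x) = c0 + c1*x + c2*x^2
# through three trial points and return its stationary point -c1/(2*c2),
# Eq. (6.22), in closed form. The trial points below are illustrative.

def quad_interp_min(x1, f1, x2, f2, x3, f3, eps=1e-12):
    """Stationary point of the parabola through (x1,f1), (x2,f2), (x3,f3)."""
    num = (x1**2 - x3**2) * f2 + (x3**2 - x2**2) * f1 + (x2**2 - x1**2) * f3
    den = 2.0 * ((x1 - x3) * f2 + (x3 - x2) * f1 + (x2 - x1) * f3) + eps
    return num / den

# Usage: for f(x) = (x - 2)^2 the parabola is exact, so the interpolated
# stationary point recovers the minimizer x = 2.
f = lambda x: (x - 2.0) ** 2
x_star = quad_interp_min(-0.5, f(-0.5), 0.5, f(0.5), 0.0, f(0.0))
```

In QIBAS, the three trial points are the left antenna, the right antenna, and the current global best, so each BAS step gets an extra interpolated candidate at essentially no additional function-evaluation cost.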
In addition, we define the positions of the left and right antennae and the global optimal position as follows:

$$x^l = [x_{l1}, x_{l2}, \cdots, x_{lk}]^T, \qquad x^r = [x_{r1}, x_{r2}, \cdots, x_{rk}]^T, \qquad x^b = [x_{b1}, x_{b2}, \cdots, x_{bk}]^T. \quad (6.23)$$
By combining Eqs. (6.23) and (6.20), we obtain:
$$\begin{aligned} h(x_{lk}) &= c_0 + c_1 x_{lk} + c_2 x_{lk}^2 = f(x_{lk}) = f_1, \\ h(x_{rk}) &= c_0 + c_1 x_{rk} + c_2 x_{rk}^2 = f(x_{rk}) = f_2, \\ h(x_{bk}) &= c_0 + c_1 x_{bk} + c_2 x_{bk}^2 = f(x_{bk}) = f_3, \end{aligned} \quad (6.24)$$
where $k = 1, 2, \cdots, n$. Letting $\chi_1 = x_{lk}$, $\chi_2 = x_{rk}$, $\chi_3 = x_{bk}$, and $\chi = x_k^t$, we obtain the stable equilibrium point of the objective function:

$$\chi = \frac{\left(\chi_1^2 - \chi_3^2\right) f_2 + \left(\chi_3^2 - \chi_2^2\right) f_1 + \left(\chi_2^2 - \chi_1^2\right) f_3}{2\left((\chi_1 - \chi_3) f_2 + (\chi_3 - \chi_2) f_1 + (\chi_2 - \chi_1) f_3\right) + \upsilon_0}, \quad (6.25)$$
where $f(\cdot)$ is the objective function for robot calibration and $\upsilon_0$ is a positive number. Algorithm 6.2 summarizes EKF-QIBAS, which is adopted to address the robot calibration problem. $I_1$ and $T$ are the maximum iteration counts of the EKF and QIBAS, respectively, and $N$ is the number of sampling points.

Algorithm 6.2 EKF-QIBAS
Input: I1, x0, {q(i1), q(i2), . . ., q(i6)}, {Y(1), Y(2), . . ., Y(|N|)}
/--Note: Initialization--/
1 Initialize: I1, T
2 Initialize: x = x0
3 Initialize: m0, δ0, μ, τ
/--Note: Training Starts--/
/--Note: EKF-Step--/
4 for k = 1 to |I1|
5 set P(0)k randomly
6 set K(0)k to zeros
7 for i = 1 to |N|
8 set Qk known
9 set Rk known
10 Update xk|k-1 with (6.1)
11 Make Jk evolve with (1.12)
12 Compute Pk|k-1 with (6.2)
13 Update Kk with (6.3)
14 Update xk|k with (6.4)
15 Compute Pk|k with (6.5)
16 end for
17 end for
18 xekf = xk|k
/--Note: QIBAS-Step--/
19 for t = 1 to |T|
20 set x0 = xekf
21 set xbest = x0
22 set fbest = f(x0)
23 for i = 1 to |N|
24 Update b with (6.17)
25 Compute xrt and xlt with (6.18)
26 Make f(xrt) and f(xlt) evolve with (1.13)
27 Compute xt+1 with (6.16)
28 Compute f(xt+1) with (1.13)
29 Update xtk with (6.25)
30 Compute f(xtk) with (1.13)
31 end for
32 if f(xtk) < f(xt+1) then
33 f(xt+1) = f(xtk), xt+1 = xtk
34 end if
35 if f(xt+1)