Lecture Notes in Control and Information Sciences 299
Editors: M. Thoma · M. Morari

Springer Berlin Heidelberg New York Hong Kong London Milan Paris Tokyo

T.-J. Tarn · S.-B. Chen · C. Zhou (Eds.)
Robotic Welding, Intelligence and Automation With 234 Figures and 36 Tables
Series Advisory Board A. Bensoussan · P. Fleming · M.J. Grimble · P. Kokotovic · A.B. Kurzhanski · H. Kwakernaak · J.N. Tsitsiklis
Editors Prof. Tzyh-Jong Tarn Washington University School of Engineering and Applied Sciences One Brookings Drive 63130 St. Louis, MO USA
Prof. Dr. Changjiu Zhou Singapore Polytechnic School of Electrical and Electronic Engineering 500 Dover Road 139651 Singapore Singapore
Prof. Shan-Ben Chen Shanghai Jiao Tong University Institute of Welding Engineering 200030 Shanghai People’s Republic of China
ISSN 0170-8643 ISBN 3-540-20804-6
Springer-Verlag Berlin Heidelberg New York
Cataloging-in-Publication Data applied for. A catalog record for this book is available from the Library of Congress.

Bibliographic information published by Die Deutsche Bibliothek: Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available on the Internet.

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in other ways, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under German Copyright Law.

Springer-Verlag is a part of Springer Science+Business Media. springeronline.com

© Springer-Verlag Berlin Heidelberg 2004. Printed in Germany.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: data conversion by the editors; final processing by PTP-Berlin Protago-TeX-Production GmbH, Berlin. Cover design: design & production GmbH, Heidelberg. Printed on acid-free paper. 62/3020Yu - 5 4 3 2 1 0
Preface
Robotic welding systems have been used in many types of manufacturing and provide several benefits in welding applications. The most prominent advantages of robotic welding are precision and productivity; another benefit is reduced labor cost. Robotic welding also reduces risk by moving the human welder/operator away from hazardous fumes and the molten metal close to the welding arc. A robotic welding system usually involves measuring and identifying the component to be welded, welding it in position, controlling the welding parameters and documenting the produced welds. However, traditional robotic welding systems rely heavily upon human intervention, and it does not seem that traditional robotic welding techniques by themselves can cope well with uncertainties in the welding surroundings and conditions, e.g. variations in weld pool dynamics, fluid flow, solidification, the weld torch, etc. On the other hand, the advent of intelligent techniques provides us with a powerful tool for solving demanding real-world problems in uncertain and unpredictable environments. Therefore, it is worthwhile to gather current trends and to provide a high quality forum for engineers and researchers working in the field of intelligent techniques for robotic welding systems. This volume brings together a broad range of invited and contributed papers that describe recent progress in this field.

This volume is mainly based on papers selected from the 2002 International Conference on Robotic Welding, Intelligence and Automation (RWIA'2002), December 9-12, 2002, Shanghai, China. We have also invited some well-known authors, as well as announced a formal Call for Papers to several research groups related to welding robotics and intelligent systems, to contribute the latest progress, recent trends and research results in this field. The primary aim of this volume is to provide researchers and engineers from both academia and industry with up-to-date coverage of new results in the field of robotic welding, intelligent systems and automation.

The volume is divided into four logical parts containing twenty-five papers. In Part 1 (6 papers), the authors introduce recent developments in welding automation; various applications, such as vision sensing and control of the welding process, are discussed. In Part 2 (4 papers), the authors deal with intelligent techniques for robotic welding. In Part 3 (7 papers), the authors describe their work on intelligent robotic systems, which may lead to new developments in intelligent robotic welding systems. Finally, in Part 4 (8 papers), the authors introduce some emerging intelligent techniques and their applications, which may contribute significantly to the further development of intelligent robotic welding systems.

We would like to thank Professors J.L. Pan, B.S. Xu, S.Y. Lin, H.G. Cai, S.W. Xie, T.H. Song, L. Wu, Y.X. Wu and T. Wang for their kind advice and support to
the organization of RWIA'2002 and the publication of this book; Wenjie Chen, Bing Wang, Wenhang Lei and Surong Zhang for devoting their precious time to all RWIA'2002 correspondence and to reformatting most of the final submissions into the required format of the book; Jun Ni, research associate of Singapore Polytechnic and PhD candidate of Shanghai Jiaotong University, for his editorial assistance; and, last but not least, Dr. Thomas Ditzinger for his advice and help during the production phases of this book.
Tzyh-Jong Tarn, Washington University at St. Louis, USA
Shan-Ben Chen, Shanghai Jiao Tong University, China
Changjiu Zhou, Singapore Polytechnic, Singapore

October 2003
Contents

Preface
T.-J. Tarn, S.-B. Chen, and C. Zhou .......... V

Part 1: Intelligent and Automatic Techniques for Welding Dynamic Process

Process Stability of Automated Gas Metal Arc Welding of Aluminum
E. Schubert .......... 1

Nd:YAG – Laser – GMA – Hybrid Welding in Shipbuilding and Steel Construction
U. Jasnau, J. Hoffmann, and P. Seyffarth .......... 14

Computer Vision Sensing and Intelligent Control of Welding Pool Dynamics
S.B. Chen, D.B. Zhao, Y.J. Lou, and L. Wu .......... 25

Surface Shape Reconstruction of Weld Pool during Pulsed GTAW from Its Single Image
D.B. Zhao, J.Q. Yi, S.B. Chen, and L. Wu .......... 56

A Novel Intelligent Monitor for Manufacturing Processes
M. Ge and Y. Xu .......... 63

3-D Grasp Analysis and Synthesis Using the Ray-Shooting Technique
Y.H. Liu, D. Ding, and M.L. Lam .......... 80

Part 2: Intelligent Techniques for Robotic Welding

Laser Visual Sensing and Process Control in Robotic Arc Welding of Titanium Alloys
H. Luo and X.Q. Chen .......... 110

Intelligent Technologies for Robotic Welding
S.B. Chen, T. Qiu, T. Lin, L. Wu, J.S. Tian, W.X. Lv, and Y. Zhang .......... 123

The Seam Tracking System for Submerged Arc Welding
H. Zhang, X. Ding, M. Chen, B. Da, and C. Zou .......... 144

Visual Measurement of Path Accuracy for Laser Welding Robots
D. Du, B. Sui, Y.F. He, Q. Chen, and H. Zhang .......... 152

Part 3: Intelligent Robotic Systems

Map Building and Localization for Autonomous Mobile Robots
Y.L. Ip, A.B. Rad, and Y.K. Wong .......... 161

Control and Stabilization of the Inverted Pendulum via Vertical Forces
D. Maravall .......... 190

Vision Based Motion Planning of Humanoid Robots
M.F. Ercan and C. Zhou .......... 212

Evolution of Locomotion Controllers for Legged Robots
A. Boeing and T. Bräunl .......... 228

Gait Planning for Soccer-Playing Humanoid Robots
Z. Tang, C. Zhou, and Z. Sun .......... 241

Fuzzy-Neural Impedance Control for Robots
Z. Xu and G. Fang .......... 263

On Singular Perturbation Based Inverse Dynamics Control for a Two-Link Flexible Manipulator
X. Dai, L. Sun, and H. Cai .......... 276

Part 4: Intelligent Techniques and Their Applications

Wavelets and Biorthogonal Wavelets for Image Compression
D. Hong, M. Barrett, and P. Xiao .......... 281

Adaptive Neuro-Fuzzy Control of Systems with Unknown Time Delay
F.H. Ho, A.B. Rad, Y.K. Wong, and W.L. Lo .......... 304

Adaptive Robust Fuzzy Tracking Control for a Class of Nonlinear Systems
Y. Yang and C. Zhou .......... 327

A Complete Algorithm for Attribute Reduction
B. Wang and S.B. Chen .......... 345

Preliminary Probe of Embedded Real-Time Operating System and Its Application in Welding Control
W. Chen, S. Chen, and T. Lin .......... 353

Estimation of Oxygen Concentration Using Dynamic Recurrent Neural Network Based on PCA
D. Sun, F. Ye, and X. Gu .......... 358

Robust Stability of Interval Lur'e Systems: A Bilinear Matrix Inequality Approach
J.T. Sun .......... 365

Thresholding Segmentation and Classification in Automated Ultrasonic Testing Image of Electrical Contact
H.D. Chen, Z.Y. Cao, Y.W. Wang, and J. Xue .......... 372

Author Index .......... 381
Process Stability of Automated Gas Metal Arc Welding of Aluminium

Emil Schubert

Alexander Binzel Schweisstechnik GmbH & Co. KG, Gießen/Germany, Kiesacker 7-9, D-35418 Buseck, Germany
[email protected]
Abstract. For modern transport systems, aluminium is of growing importance. To fully utilize this potential, economic, effective and stable joining technologies have to be available. However, special features of aluminium materials, like high heat conductivity, oxide formation, and pore and hot cracking susceptibility, require special measures for high quality welding, especially for thin sheet metals. New developments in gas metal arc welding power sources, torches and wire feeders for aluminium welding to overcome these problems will be described. Besides the technological factors, system aspects and their influence on process stability and manufacturing up-time will also be discussed. Current and potential future applications of these new developments will be demonstrated.
1 Introduction

The advantages of the material aluminium (low density and, at the same time, high strength) are increasingly being used as a means of weight reduction, especially in the field of transportation. Fig. 1 shows the use of aluminium in automobiles during the period from 1975 until today and gives an outlook to the year 2005, by which the employment of aluminium is expected to double [8]. However, in order to produce automobiles out of sheets, profiles and cast parts, suitable joining processes have to be available to enable an effective, efficient joining of the aluminium materials with sufficient process stability. Gas-shielded arc welding processes such as the GMA, TIG and plasma welding processes can be used in this field for producing sealed and continuous welding seams [6]. In addition, the combination of these processes with the laser welding process can be used to realize new possibilities for welded structures [3]. Especially in the field of thin sheets, the handling of the soft filler metals as well as the minimized heat input represent the most important challenges for the gas-shielded welding processes. Additionally, the system aspects (i.e. integration of the welding process into the production cells, including torch change, cleaning stations, robot control etc.) play an important role in production technology.
Fig. 1. Employment of aluminium alloys in the automotive industry (bar chart of material shares in automobiles by year of construction 1975, 1985, 1995 and 2005: steel and iron; elastomers/others; polymers; aluminium; other non-ferrous metals)
Therefore, possibilities to increase and maintain the process stability during gas-shielded arc welding will be described in the following, considering the special characteristics of the aluminium materials, and examples of applications will be given by way of explanation. The starting points are new developments in the field of welding power sources and torch technology, which will open the field of thin sheets to gas-shielded welding in the future. Further core components are the feeding of the filler metal as well as the system integration of all components into the industrial manufacturing cell, as already mentioned above.
2 Potentials and Problems of Aluminium Welding

The low density of 2.7 g/cm³, combined with strengths of up to 300 N/mm² (depending on the type of alloy), opens up a large application potential for aluminium alloys in light-weight construction. Due to other physical characteristics (high heat conductivity, hydrogen absorption, large solidification interval), and without going too deep into detail regarding these special characteristics, problems such as pipe cavities, pores and hot cracks as well as thermal distortion occur very frequently when welding these materials. To minimize the distortion problem, it is generally advisable to weld with an energy per unit of length as low as possible and to distribute the heat input by suitable welding cycles. Especially with pulsed arc welding, the heat input can be minimized (see Chapter 3).

The problem of pore formation [2] is caused by the high affinity of aluminium towards hydrogen. The hydrogen is dissolved in the melt pool and is then segregated, owing to its considerably lower solubility in solid aluminium, accumulating in the form of pores. To avoid this problem, hydrogen has to be prevented from dissolving in the first place, and there has to be sufficient time and space for the hydrogen to degas in case it has dissolved into the pool. Hydrogen gets into the pool e.g. through the formation of hydroxide layers on oxidized surfaces in a wet atmosphere, or through dirt or residues of greases and oils. Another possibility is direct entry through the humidity of the air in case of insufficient gas shielding or a spattering wire feed. Measures to prevent pores are the use of inert shielding gases, dry storage of work pieces and wires, and removal of the oxide layer by brushing, pickling or milling of the edges. Besides this, oxides have another negative characteristic: due to their considerably higher melting temperature compared to the base metal (approx. 2050 °C vs. 660 °C), notches can occur at the top and root bead, or oxides can sink into the pool [1]. Measures for degassing are above all an adapted welding speed, giving the gas pores time to rise, as well as the creation of degassing areas (bevel opening angles as large as possible) wherever the design allows. In addition, air gaps between the sheets, e.g. realized through distance sheets, can improve the degassing.

The problem of hot crack formation occurs especially with alloys having a large solidification interval, where the elements Mg, Si and Cu are critical [2]. The reason for this is a too small quantity of intergranular eutectic during the solidification process, accompanied by shrinkage stress. Countermeasures are the use of filler metal to bring the composition of the weld metal out of the crack-critical range, a suitable design enabling shrinkage, and an adapted weld heat input (not too high solidification speeds).
3 Process Stability of Welding of Aluminium Materials
3.1 Welding Power Source and Welding Torch Technology

The GMA, TIG and plasma welding processes are suitable for the gas-shielded arc welding of aluminium.

GMA Welding
While welding of thick plates is preferably done using a spray arc, pulsed-arc welding is recommended for the welding of thin sheets. Fig. 2 shows a current-time diagram for a typical pulsed-arc process. With the growing application of inverter technology, new possibilities are arising for the gas-shielded welding processes. The GMA process too, which so far has found only limited access to the industrial production of structural components made of aluminium, is profiting from these developments.

Fig. 2. Current-time characteristic for pulsed welding of aluminium alloys (pulsed current with non-short-circuiting metal transfer; pulse, medium and background current levels; pulse time and actual welding time)
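For orientation (this relation is standard for pulsed arc processes and is not taken from the paper, and the symbols are chosen here for illustration), the mean current of a rectangular waveform like that in Fig. 2 follows from the peak and background levels and their durations:

$$I_m = \frac{I_p\, t_p + I_b\, t_b}{t_p + t_b}$$

where $I_p$ is the pulse (peak) current acting during the pulse time $t_p$, and $I_b$ is the background current acting during the background time $t_b$. Lowering $I_b$ or shortening $t_p$ reduces $I_m$ and with it the heat input, which is why the pulsed arc suits thin aluminium sheets.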
Fig. 3a. MIG AC voltage pulsing (pulse voltage, background current, negative current time with negative background current, pulse time, pulse period time / pulse frequency)
Fig. 3b. MIG DC voltage pulsing (pulse voltage, background current, off-time, pulse time, pulse period time / pulse frequency)
These process variants enable process-stable welding of structural components made of aluminium down to a thickness of 0.6 mm. Fig. 4 shows the cross-section of a fillet weld with the clearly visible minimal penetration.

Fig. 4. Cross-section of a fillet weld of AlMgSi welded using the MIG-AC process
Conventional GMA welding of aluminium materials also places special requirements on the torch technology, especially when pulsed welding is used. A torch with a double cooling circuit enables stable welding processes even at high pulse currents, thanks to optimal cooling of the torch neck, contact tip and gas nozzle. Fig. 5 shows the sectional drawing of such a torch with two cooling circuits.
Fig. 5. Sectional drawing and photo of a double-cooling-circuit GMA welding torch for pulsed welding
Fig. 6. TIG welding with negative and positive polarity at the electrode (gas ions and electrons for the electrode at positive and at negative pole, with the resulting type of penetration)
TIG Welding Process
TIG-AC welding can be recommended for the TIG welding of aluminium [10]. Fig. 6 shows TIG welding with different polarities at the electrode. During the minus phase the electrode cools down, enabling a deep penetration, while during the plus phase a cleaning effect occurs, as the oxide layer is torn open by gas ions. The penetration during this phase is flatter and wider, and the thermal load on the electrode is very high. In practice, therefore, AC welding is applied in order to combine the advantages of both polarities.
Plasma Welding
Plasma welding offers possibilities similar to TIG welding, with the difference of a stronger arc constriction by the so-called plasma gas [9]. Additional advantages are the low heat input, the low quantity of spatter and the very smooth seams. Fig. 7 shows an industrial plasma torch used for aluminium welding. A plasma plus-pole torch with a rating above 200 A is under development [5] and promises a large application potential.
Fig. 7. Industrial plasma welding torch (back cap, plasma gas, upper and lower cooling circuit, cooling body, tungsten electrode, gas diffusor, shielding gas, ceramic shielding gas nozzle, plasma nozzle)

The conventional gas-shielded welding processes stand in competition with laser beam welding and the hybrid processes.
Fig. 8 a)-c). Comparison of welding processes (laser (CO2, YAG), HLDL, plasma, MIG/MAG and TIG) for stainless steel, structural steel and aluminium: a) welding speed in m/min, b) energy per unit length in kJ/cm, c) admissible tolerances (gap dimension tolerance and positional tolerance of the power source) in mm. Parameters: butt joint, 1 mm sheet; laser 2.5 kW; conventional welding with standard parameters
While the laser processes are superior in welding speed and energy input, the gas-shielded arc welding processes offer enormous advantages on the cost side. Figs. 8 a) to c) show a comparison of the achievable welding speeds, energy inputs per unit of length and gap bridging abilities. The diagrams show the especially large potential of plasma welding technology as a bridge between GMA and TIG welding on the one side and laser technology on the other side.

3.2 Wire Feeding

The feeding of soft, easily deformable wires with low strength and low resistance to wear, combined with the risk of oxide formation and humidity absorption, is a great challenge for wire-feeding systems. One solution, already proven in industrial use, is a system consisting of a heated wire-spool housing, a wire spool with drive motor and a digitally controlled 4-roller drive inside the housing, a cable assembly, and a second digitally controlled 4-roller drive mounted directly on the torch, as shown in Fig. 9.
Fig. 9. Wire feeding equipment for Al wires (Aluminium Power Drive, APD)
Both 4-roller drives run absolutely synchronously as a push-push drive, and a certain quantity of wire is always available in the cable assembly as a reservoir. The rollers are preset and require only very small pressing forces, as the wire spool is driven separately, so that only minimal wear or deformation occurs at the wire. In addition, this also minimizes the tensile forces acting on the wire. The heated housing keeps the temperature above the dew point, guaranteeing a dry wire with a uniform oxide layer and without humidity, so that welding with constant processing parameters and without the danger of pore formation due to oxide or hydrogen from the wire becomes possible.
3.3 System Aspects

The integration into complete production cells plays an important role for the extensive use of the single components. The power source, the welding torch, the wire-feeding unit and the welding robot should produce trouble-free in a production cell with the highest possible equipment availability and stability. Since wear and dirt accumulation (e.g. by weld spatter and wear debris) cannot be avoided, especially at the welding torch and in the wire feed, peripheral devices have to enable automatic cleaning or, where possible, an exchange of the components in question. To avoid damage to the welding torch in case of collision, emergency shut-down devices have to be used. Modular systems are especially suitable for frequently changing welding jobs. Fig. 10 shows a concept of a wire-feeding system for different gas-shielded welding processes, the so-called Master-Feeder system. The user selects the suitable wire package (small coil, large coil, barrel, etc.) and welding process (GMA, TIG, plasma, laser or hybrid) depending on the welding job. The required filler metal is fed by the first 4-roller drive after the separately driven wire package and is transported from there through a cable assembly to the second 4-roller drive situated close to the process. The wire is then led into a GMA torch or made available for other processes as cold or hot wire. Fig. 11 shows a processing head for laser-GMA welding.
Fig. 10. Modular concept for wire feeding with different welding processes (MIG, TIG, plasma, laser): process-dependent wire feeding into a MIG torch or as cold/hot wire for TIG, plasma and laser; adjustment of wire packages (barrel, large spool, small spool)
Fig. 11. Laser-GMA processing head
4 Examples of Application

Since 1999, the automatic gas-shielded welding of structural components made of aluminium, in combination with the wire-feeding concept (APD) described above, has been used in the production of the AUDI A8. Different seams at the wheel arch have been welded using this process and equipment. The use of the APD system requires minimum intervention by the operator (change of spool once a week) and thus enables an economic production. The proven APD system could also be used for GMA welding in the production of the AUDI A2. Here too, this system is used for welding jobs at the wheel arch, with 17 welding units in total, see Fig. 12.
Fig. 12. Al wheel arch of the Audi A2 welded with APD (original component, process picture, GMA weld seam)
Besides this application, the wire feeding technology described above has been successfully implemented as a laser cold-wire feeder on the successors of the A2 and A8. Fig. 13 shows the cold-wire feeding in detail. To meet the narrow process tolerances, an additional positioning capability exists in the x-, y- and z-directions. The cold-wire feeder can be adjusted to any focal length and to the interfering contours of laser processing heads.
Fig. 13. Laser cold-wire feeding system
5 Conclusions

The material aluminium makes special demands on welding technology. Thanks to new developments in the field of power sources, adapted welding torch designs and new wire-feeding units, these demands can be met economically as well as with regard to process stability. The integration of these components as a system into production cells allows welding jobs to be carried out with high process stability and equipment availability. The modular concept presented for wire feeding certainly has a large future potential for feeding wires that are difficult to handle, for the whole range of gas-shielded joining processes. This applies not only to the material aluminium but also to the welding and soldering of other light metals, as well as to the proven steel materials and their further developments, the high-strength and super-high-strength steels. In addition, applications in the field of laser as well as hybrid processes become possible, so that here too improvements in process stability can be expected.
References
[1] Baum L, Fichter V (1999) Der Schutzgasschweißer. DVS-Verlag, Düsseldorf
[2] Dilthey U (1995) Schweißtechnische Fertigungsverfahren. VDI-Verlag, Düsseldorf
[3] Dilthey U, Lüder F, Wieschemann A (1999) Erweiterte Möglichkeiten beim Schweißen von Aluminiumlegierungen durch den Laser-MIG-Hybridprozeß. Aluminium, Bd. 75, Nr. 1/2, pp 64-75
[4] Goecke SF, Dorn L (2000) MAG-Chop-Arc-Schweißen für Feinstblech > 0,2 mm. Große Schweißtechnische Tagung 2000, Nürnberg, DVS-Verlag, Düsseldorf, pp 163-168
[5] Kabatnik L (2000) Plasma-Pluspolschweißen. Vortragsmanuskript, Große Schweißtechnische Tagung 2000, Nürnberg, DVS-Verlag, Düsseldorf
[6] Klock H, Schoer H (1977) Schweißen und Löten von Aluminiumwerkstoffen. DVS-Verlag, Düsseldorf
[7] Nickel HG (2000) MIG-MAG DC-0- und AC-0-Spannungspulsen. Pers. Mitteilung, Nisterberg
[8] N.N. (2000) Werkstoffverteilung beim PKW in Deutschland. VDI-Nachrichten, Düsseldorf, pg 41
[9] Radscheit C (1999) Plasmalöten - Grundlagen und Einsatz; Neue Löttechnologien für den Dünnblechbereich. SLV München
[10] Zahariev S, Herold H (1996) Besonderheiten der Anwendung des WIG-Wechselstromschweißens von Aluminium unter Nutzung modifizierter Halbwellen. DVS-Berichte 176, pp 34-37
Nd:YAG – Laser – GMA – Hybrid Welding in Shipbuilding and Steel Construction

Ulf Jasnau, Jan Hoffmann, and Peter Seyffarth

Welding Training and Research Institute Mecklenburg – Western Pomerania Ltd., Rostock, Germany
[email protected]
Abstract. The paper deals with the Nd:YAG-laser-GMA-hybrid welding process for use in shipbuilding and steel construction. The advantages of a laser-arc hybrid welding process in comparison with a laser-alone or an arc-alone welding process are shown. The current possibilities of the Nd:YAG-laser-GMA-hybrid welding process are illustrated by results from hybrid welding of steel and aluminium alloys. Furthermore, the advantages of the Nd:YAG-laser technology, especially for simple and flexible automation, are described. Nd:YAG-laser-GMA-hybrid welding is a process which demands a higher degree of automation in shipbuilding. With regard to these advantages, and in view of the available Nd:YAG-laser sources, a perspective for the use of Nd:YAG-laser technology in shipbuilding and steel construction is pointed out. A current German research project with participation of the authors is presented in the paper. One aim of this project is to develop a system for the automatic programming of robots and other welding systems in shipbuilding. For this, the new programming tool should use the real geometrical data of the component before welding, obtained from an image processing tool. Concluding, the first manufacturing system in shipbuilding (a welding gantry for 12 meter T-joints) outfitted with Nd:YAG-laser-GMA-hybrid welding equipment at a German shipyard is presented. The retrofitting of this welding gantry was planned and accompanied by the company the authors are with.
1 Introduction

The process of laser beam welding is today a well-established joining process in industrial applications. Its advantages, e.g. deep and narrow welds with high speed and low heat input, are accompanied by some disadvantages, like the necessary precise preparation of the edges of the parts to be joined and the precise clamping of the work pieces. Especially in shipbuilding and steel construction, the advantages of laser technology can be exploited only at high cost and effort, owing to the boundary conditions in the production line. With the coupling of the laser beam and the arc it is possible to extend the range of application of laser technology and its advantages. Since the fundamentals of a laser-arc hybrid process were described in the 1970s [1], different variants of this technology have been studied, like the laser-GMA, the laser-TIG and the laser-plasma process. In most of these variants a CO2-laser was used as the laser source, because of its higher laser power and better beam quality. With the laser powers of up to 6 kW now available for the Nd:YAG-laser, this laser source becomes more interesting for laser-arc hybrid welding in shipbuilding and steel construction. Gas metal arc welding is today the preferred welding process in shipbuilding and steel construction in Germany. For this reason, and because of some important advantages of the Nd:YAG-laser, the coupling of this type of laser with the GMA welding process is considered here.
2 Process Fundamentals

The fundamentals of the coupled process are nearly the same for both the CO2-laser and the Nd:YAG-laser. Fig. 1 shows the general principle of the process. The laser process and the arc process share a common process zone and weld pool. Only one shielding gas flow is necessary; usually the gas flow is realized via the GMA torch.
Fig. 1. Principle of the laser-GMA-hybrid welding process (laser beam, laser-induced metal vapor, shielding gas cover, welding electrode, arc, fusion zone, keyhole, workpiece, welding direction)
The superposition of the arc and the laser leads to different interactions between the two processes. The stabilization of the arc under the influence of laser radiation or a laser-induced plasma is one of the main advantages. The wandering of the arc is prevented, because the laser beam evaporates material and ionizes the atmosphere, producing good conditions for ignition and burning of the arc. Thus the transfer of the energy of the electric arc is more efficient, and the thermal effect of the coupled process exceeds a simple sum of the effects of the laser and arc heat sources taken separately [2]. Depending on the energy ratio of the two energy sources, the character of the coupled process may be either more arc-like or more laser-like. The welding depth increases with the laser energy, whereas the weld width increases with the arc energy. For a given sheet thickness to be joined, the proportion between the "laser part" and the "arc part" of the hybrid weld pool gives an estimation of the process character. The transverse sections in Fig. 2 show an example of such different process characters.
Fig. 2. Comparison between a more laser-like (left) and a more arc-like (right) laser-GMA-hybrid weld (sheet thickness 5 mm, I-groove)
Depending on the welding task, one or the other process character may be preferable. For the control of the process character and the utilisation of the advantages of the coupled process (Fig. 3), the effects of a great number of welding parameters have to be known. The effect of the laser beam in a hybrid weld is well illustrated by comparing transverse sections of GMA welds and hybrid welds at the same welding speed (Fig. 4). The welding depth increases from 1.5 mm to 5.1 mm using a 4 kW Nd:YAG-laser beam. Despite the high welding speed, the quality of the top bead shape is good in hybrid welding, resulting from the laser-stabilized arc. The effect of the arc, resulting in a better gap bridging ability and a better weld geometry than in a pure laser weld, is illustrated in Fig. 5. The addition of the arc power and the process stabilisation in hybrid welding lead to a significantly higher welding speed. So the heat input per unit of length remains almost constant in comparison with the laser-alone welding process.
Fig. 3. Coupling of the advantages of arc and laser welding in a hybrid process. Arc: low-cost energy source, gap bridging ability, filler material, influence on the structure. Laser: deep weld penetration, high welding speed, low heat input. Hybrid process: better gap bridging ability compared with laser welding; higher welding speed compared with arc and laser; deeper weld penetration compared with arc and laser; lower heat input compared with arc welding
Fig. 4. Transverse sections of a GMA weld (without laser) and a laser-GMA-hybrid weld (with laser). Sheet thickness: 8 mm; laser type: Nd:YAG; laser power: ~4 kW; welding speed: 2 m/min; shielding gas: 82% Ar / 18% CO2
Fig. 5. Transverse sections of a laser weld (welding speed: 1.1 m/min) and a laser-GMA-hybrid weld (welding speed: 2.5 m/min). Sheet thickness: 5 mm, I-groove; laser type: Nd:YAG; laser power: ~4 kW; shielding gas: 90% Ar / 10% CO2
Besides the parameters laser power, arc power (arc voltage and arc current) and welding speed, the geometrical arrangement of the laser beam and the torch and also the type of shielding gas are very important. The free geometrical arrangement of laser beam and torch (torch before, behind or beside the beam in welding direction) and the possibility to couple more than one GMA torch with the laser beam offer a great variety of configurations for the solution of welding tasks.

The usable shielding gas is one of the biggest differences between hybrid welding with the CO2-laser and with the Nd:YAG-laser. For work with the CO2-laser, gas mixtures with a high helium content are necessary. With the Nd:YAG-laser, argon and active gases, e.g. mixtures of Ar, CO2 and O2, are also usable. It is thus possible to influence the metallurgy of the weld pool as in pure GMA welding. Pure argon and active shielding gases are not usable for hybrid welding with a CO2-laser because of the risk of ignition of a shielding plasma above the weld pool. This plasma interrupts the absorption of the laser radiation into the work piece, and the hybrid welding process breaks off. This disadvantage of the CO2-laser is caused by the different wavelengths of the radiation of the two laser types: the CO2-laser with a wavelength of 10.6 µm offers different optical conditions than the Nd:YAG-laser with a wavelength of 1.06 µm.

The practical qualification of Nd:YAG-laser-GMA-hybrid welding for different shipbuilding applications is the main content of two research projects in which the authors' company is involved. Besides the welding of constructional and shipbuilding steel, the welding of aluminium alloys for shipbuilding is also covered in one of the projects. In a large number of welding tests, the hybrid welding of square butt joints and T-joints of the mentioned materials is carried out under laboratory conditions. Welding tests at a real welding gantry at the shipyard "Kvaerner Warnow Werft" in Rostock, Germany, for the qualification of the hybrid welding process have also been started. Despite the high welding speeds reachable with the coupled process, the technical and economical benefits of the new welding process do not result only from the higher welding speed compared with GMA welding. A main aspect of the positive effects is the lower energy per unit of length, leading to lower distortions and less postweld straightening. Some results of the welding tests are shown in Fig. 6 and Fig. 7.

For aluminium alloys, laser-GMA-hybrid welding offers an additional advantage over pure laser welding compared with the welding of steel. During laser welding of aluminium alloys the keyhole is very unstable, owing to the lower viscosity of the aluminium melt versus steel melt. This leads to numerous collapses of the keyhole and to a lot of big pores, so-called process pores. The outgassing of these pores is very difficult because of the very small weld pool and short cycle time in laser welding. The phenomenon of process pores limits the welding depth in laser welding of aluminium. To solve this problem some strategies were developed, e.g. welding with two laser spots to enlarge the weld pool. In principle, all these strategies lead to a higher required laser power at constant welding depth.
Laser-GMA-hybrid welding is a good possibility to realise this higher power at the work piece. With the coupling of the laser beam and the arc, high welding depths and high welding speeds are also possible on aluminium alloys without the danger of process pores. The bigger weld pool geometry and the modified flow conditions in the melt allow the outgassing of pores. Furthermore, the influence on the metallurgy of the weld via the melting filler wire is much better compared with pure laser welding.
Fig. 6. Laser-GMA-hybrid weld on constructional steel (root formation and transverse section). Parameters:

sheet thickness: 5 mm
weld type: I-groove
welding position: PA
laser type: Nd:YAG
laser focal length: 200 mm
laser power: ~4 kW
current type / polarity / arc type: DC / + / pulsed arc
arc current / voltage: 290 A / 29 V
arc power: ~8.4 kW
wire feed rate: 12500 mm/min
welding speed: 3000 mm/min
shielding gas: 12 l/min (Ar/O2)
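As a rough cross-check of the heat input (our arithmetic, not a figure stated in the paper, with $P_{laser}$, $P_{arc}$ and $v$ chosen here as symbols), the energy per unit length of the weld in Fig. 6 follows from the listed powers and the welding speed of 3000 mm/min = 50 mm/s:

$$E = \frac{P_{laser} + P_{arc}}{v} \approx \frac{4\,\mathrm{kW} + 8.4\,\mathrm{kW}}{50\,\mathrm{mm/s}} \approx 0.25\,\mathrm{kJ/mm} = 2.5\,\mathrm{kJ/cm}$$

before any process efficiency factors.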
Fig. 7. Laser-GMA-hybrid welds on aluminium alloy 5083 (typical top bead shape, scale 1 cm, and transverse section). Parameters of the weld shown in the transverse section:

sheet thickness: 5 mm
weld type: I-groove
welding position: PA
laser type: Nd:YAG
laser focal length: 200 mm
laser power: ~4 kW
current type / polarity / arc type: DC / + / pulsed arc
arc current / voltage: 162 A / 24 V
arc power: ~4.0 kW
wire feed rate: 12000 mm/min
welding speed: 2500 mm/min
shielding gas: 45 l/min (Ar/He)
3 The Equipment of Production Lines
3.1 Theoretical Aspects

The possibility of transferring the Nd:YAG-laser beam via a flexible laser light cable is a second big advantage of the Nd:YAG-laser, again a consequence of its shorter wavelength. In contrast, the CO2-laser beam must be transferred to the working point via a mirror system. Laser light cables and laser working heads are manageable by industrial robots; numerous such solutions in automobile manufacturing have made them state of the art.
The flexible beam transfer, in connection with the higher available laser power of modern Nd:YAG-laser systems of up to 6 kW, makes the use of this laser type also very interesting for production lines in shipbuilding and steel construction. When using a Nd:YAG-laser, only the working head has to fulfil the high requirements on the dynamical behaviour, and not the whole beam transfer path or the complete machine tool, as when using a CO2-laser. A common welding gantry or another welding machine, or more precisely the point where the working head is mounted, should fulfil the requirements according to [3] as a precondition for refitting with a Nd:YAG-laser source. In connection with the laser-GMA-hybrid welding process, the advantages of laser technology become usable for conventional production lines.

Nevertheless, Nd:YAG-laser-GMA-hybrid welding demands a higher degree of automation in the production line. A precise seam tracking system, including measurement of the gap size and control of laser and/or arc power, is an elementary piece of equipment. The control system of the welding gantry must be able to handle higher welding speeds than in pure GMA welding, as well as a lot of new functions connected with the use and control of the laser system. But the effort for refitting a conventional welding gantry with Nd:YAG-laser equipment is much lower than the purchase of a new, specially designed laser welding gantry.

Another advantage of the Nd:YAG-laser sources is the possibility to connect one laser source with up to 6 handling systems and to add the radiation of two or three laser sources into one laser light cable. So, also with solid-state lasers, a power of more than 12 kW at the work piece is possible. As already realised in the automobile industry, so-called "laser networks" with several laser sources and several handling systems are also conceivable for production lines in shipbuilding and steel construction. Such a "laser network" can be installed step by step, according to the actual demands of the shipyard. An occasional use of laser technology for special manufacturing tasks is also possible. This flexibility results from the type of laser source, the Nd:YAG-laser, and the possibility to transfer the laser beam via a laser light cable.

In the two German research projects already mentioned, the anticipated problems of using Nd:YAG-lasers in shipbuilding, e.g. occupational safety and health and the necessary higher level of control systems and automation, are to be solved. One of these projects is carried out in a regional network of the authors' company and two other research companies, attended by several German shipyards and related firms. Besides the qualification of Nd:YAG-laser-GMA-hybrid welding and the problems regarding safety and control of the laser system, the aim of this project is to develop a system for the automatic programming of robots and other welding systems in shipbuilding. For this, the new programming tool should use the real geometrical data of the component before welding, obtained from an image processing tool. Thus the expensive manual work for the programming of welding machines, and the dependence on CAD data that are not up to date, should be eliminated in the future. Furthermore, with such an automatic programming tool it is possible to react without delay to short-term changes in the manufacturing tasks; the flexibility and reactivity of the production line increase.
For the automatic programming, the data resulting from the image processing tool are independent from any kind of machine control system and should therefore be usable for all programmable welding machines in a production line. The further processing of the data should be realized with a general software solution, usable for the different programmable robots and welding machines in a production line. Despite this great variability of the planned system, an adaptation to existing machines and manufacturing lines will be necessary. In addition to the specific adaptation of programming tools and laser-GMA-hybrid welding equipment, the theoretical and practical training of the staff is a focal point in the project of the three incorporated research companies.

3.2 Practical Applications

The first industrial application of the laser-GMA-hybrid welding process in a shipyard was realized with a CO2-laser (Fig. 8) at the German shipyard "Meyer Werft". This production line was designed especially for the hybrid welding of plates and stiffeners with a CO2-laser. Panels with a geometry of 20 m × 20 m are welded completely at this line. "Meyer Werft" is specialised in large cruise liners for the international market; therefore a lot of panels with low sheet thickness are used for their products.
Fig. 8. CO2-laser-GMA-hybrid welding of panels and stiffeners at the shipyard "Meyer Werft" in Papenburg, Germany [4]

At the German shipyard "Kvaerner Warnow Werft" in Rostock (Fig. 9), a common welding gantry was refitted with equipment for Nd:YAG-laser-GMA-hybrid welding. Within the regional research project mentioned above, the problems connected with such a refitting are to be solved. The refitting was done after measuring the dynamical behaviour at the point where the future hybrid working head is to be located. The results correlated well with the requirements according to [3]. The first test welds made with the refitted conventional welding gantry also verified the suitability of the gantry for use with laser technology. In the meantime, other German shipyards are also interested in such a refitting solution.
Fig. 9. Welding gantry (right) and Nd:YAG-laser-GMA-hybrid welding head (left) at the shipyard “Kvaerner Warnow Werft” in Rostock, Germany
4 Conclusions

The Nd:YAG-laser in connection with gas metal arc welding offers great possibilities for the use of laser technology in shipbuilding and steel construction. The lower energy per unit of length of laser-based joining processes leads to lower distortions and less postweld straightening. Besides the higher welding speed compared with conventional welding methods, laser-GMA-hybrid welding offers a better gap bridging ability compared with pure laser welding and makes the advantages of laser technology usable for manufacturing under the conditions of shipbuilding and steel construction. The high flexibility of the Nd:YAG-laser equipment, resulting from the wavelength of the Nd:YAG-laser radiation and its optical properties, enables an implementation of laser technology according to the actual demands of a company. In principle, the refitting of conventional welding machines, such as robots and welding gantries, with Nd:YAG-laser equipment is possible.
The implementation of laser equipment and technology in shipbuilding and steel construction leads to a higher level of automation in these branches, connected with a higher level of product quality and reproducibility. Besides the technical requirements for the introduction of laser technology in the mentioned industrial areas, the qualification level of the staff (workers as well as technicians and engineers) must be increased.
References
1. Eboo, Steen, Clarke: Arc Augmented Laser Processing of Materials. Conf. Proc. of Advances in Welding Processes, Harrogate, UK, 1978, p 257
2. Gvosdetzky, Krivtsun, Chzhenko, Yarinich: Laser-Arc Discharge: Theory and Applications. Harwood Academic Publishers, 1995
3. N.N.: IIW Document IV-789-01: Acceptance test for Nd:YAG laser beam welding machines, part 1: machines with optical fiber delivery, 2001
4. N.N.: Schiffe aus Licht. In: INFORM – Das Magazin für Umformtechnik, 01/2002, Schuler AG, Göppingen
Computer Vision Sensing and Intelligent Control of Welding Pool Dynamics

S.B. Chen,1 D.B. Zhao,2 Y.J. Lou,3 and L. Wu3

1. S.B. Chen is with the Institute of Welding Technology, Shanghai Jiao Tong University, Shanghai 200030, P. R. China.
2. D.B. Zhao is with the Mechanical Department, Qinghua University, Beijing 100080, P. R. China.
3. Y.J. Lou and L. Wu are with the Welding Division, HIT, Harbin 150001, P. R. China.
[email protected]
Abstract. This paper presents a successful application of intelligent technologies, such as computer vision, image processing and fuzzy-neural networks, to the sensing, characterization, modeling and control of the weld pool dynamic process in pulsed Gas Tungsten Arc Welding (GTAW). The image processing algorithm is discussed in detail. Results of the two-dimensional and three-dimensional geometric characterization of the weld pool from single weld pool images are presented. Neural models and a self-learning neural controller for the weld pool dynamic process have been developed. The experimental results indicate that the designed vision sensing and control systems are able to emulate a skilled welder's intelligent behaviors: observing, estimating, decision-making and operating. The work in this paper shows the great potential and promising prospects of artificial intelligence technologies in welding manufacturing.
1 Introduction

Artificial intelligence (AI) is an applied science and technology spanning multiple disciplines. Industrial requirements and the promotion of AI techniques have resulted in considerable development of AI, and developing effective AI methodology for practical applications has long attracted research efforts in scientific and technical fields. In practical engineering applications of AI, a challenging problem is how to control a complex object with uncertainty (Willis et al., 1992). In general, the means for resolving this problem include sensing techniques for simulating human sensory functions, modeling techniques for cognition, and control techniques for reasoning, decision-making, and the administration of motion and operation.

Computer vision technology has become a powerful tool for a wide variety of engineering applications (Hoffman, 1991; Marr, 1982). Its use in industry has accelerated advancement in quality control and automation systems. AI provides powerful approaches to control complex systems through the synthesis of intelligent techniques. The nonlinear behaviour and uncertainties of many practical systems make it difficult for conventional methods to model and control these systems analytically. Many real industrial process plants fall into this category, and hence intelligent techniques are required to model and control such systems. Among the many different "intelligent" approaches, neural network and fuzzy methodologies have emerged as very powerful tools for designing intelligent control systems, owing to their capabilities of emulating human learning processes (Willis et al., 1992). Artificial neural networks (ANN) have been used on a modest scale to develop nonlinear models and control systems; for example, recent studies (Andersen et al., 1990; Lim et al., 1993; Pham et al., 1999) and their cited references have demonstrated the utility and flexibility of the concept within the domain of process engineering. Artificial neural network applications have great prospects and potential in various fields of industrial engineering.

As is well known, arc welding processes are extremely nonlinear and multivariable-coupled. They involve many uncertainties, such as the influences of metallurgy, heat transfer, chemical reaction, arc physics, and magnetization (Saedi et al., 1988). As a result, it is very difficult to obtain a practical and useful controllable model of an arc welding process through classical modeling approaches. Until now, control of weld quality, such as weld penetration and fine formation of the welding bead (i.e., control of the size and changes of the welding pool), has remained a perplexing problem faced by control engineers and welding technologists (Vilkas, 1966). Weld seam accuracy is difficult to control due to the nonlinearity and uncertainties of the process. Intelligent control systems should be developed for modeling and control of the welding process, as they derive the control performance from human experience, knowledge, and logic techniques, instead of mathematical process models. Andersen (1990) and Lim et al. (1993) have studied neural network and fuzzy techniques for arc welding processes.

Another critical difficulty in the dynamic control of an arc welding process is how to detect weld pool geometrical features, such as weld bead width and penetration, conveniently and in real time, especially from the topside of the weld pool. Though various efforts have been made to sense weld pool sizes in real time from the topside, such as ultrasonic detection, infrared sensing, pool image processing, and radiographic sensing (Brzakovic et al., 1991; Lee et al., 1996; Pietrzak et al., 1994; Richardson et al., 1994), weld quality control has been achieved with only limited success. In order to achieve better control of weld quality, we should not only improve reliable measurement means, but also introduce advanced real-time control methodologies. The two technical fronts should be paid equal attention and developed simultaneously.
In this study, a computer vision system combining a video camera and image processing was used in a closed-loop system to control pulsed GTAW in real time. A novel sensing system, which captures the topside and backside vision images of the weld pool in the same frame simultaneously, was employed for monitoring and modeling of the welding process. A real-time image processing algorithm was developed to acquire the topside and backside sizes of the weld pool. Furthermore, neural network models of the pulsed GTAW process were established to predict weld pool features from measured data. Based on the ANN models, experiments on intelligent control schemes for the weld pool dynamic process were conducted.
2 The Pulsed GTAW Process and Monitoring Systems

2.1 Description of the pulsed GTAW process

Pulsed GTAW is one of the dominant arc welding technologies. It is widely used in high-quality weld manufacturing, especially in high-precision joining of thin materials. The major quality indexes in pulsed GTAW are precise penetration and fine formation of the weld seam. In order to obtain good quality of welding products, the dynamics of the weld pool should be controlled precisely in real time, i.e., the size of the weld pool must be regulated accurately. The main influences on the weld pool during the welding process include the welding current parameters, such as pulse duty ratio, peak current, base current, arc voltage, and welding speed; the experimental conditions, such as the root opening or geometry of the groove, material, thickness, work piece size, electrode tip angle, and rate of shielding gas flow; and the welding conditions, such as heat transfer, arc emission and so on. In designing control systems, the following problems have to be dealt with: sensing of weld pool features, modeling of the weld pool dynamic process, and designing effective controllers for the process.
2.2 Monitoring systems of the pulsed GTAW

The structure diagram of the monitoring system for pulsed GTAW is shown in Fig. 2-1. The experimental system includes a double-side visual sensing subsystem, which consists of a CCD camera, recorder, frame grabber, and monitor; the image processing sections and the welding quality controller, both implemented on the computer; and the welding power supply, whose welding current setting is regulated by the computer. A single chip microcomputer (SCM) was used for control of the travel speed. A wire feeder was applied for filling wire metal into the weld pool during pulsed GTAW.
Fig. 2-1. The structure diagram of the experimental system for pulsed GTAW (personal computer with interface, frame grabber and monitor; SCM motion control board; manipulator; recorder; power supply; wire feeder; CCD camera; torch, work piece and work plate; composite filter system capturing backside and topside images; travel direction)
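To make the closed loop of Fig. 2-1 concrete, the following is a minimal sketch of one control cycle, written here for illustration only: every function in it is a simplified stand-in invented for this sketch (the paper's actual image processing is described in Section 3, and its neural models and controller later in the paper).

```python
# A minimal, self-contained sketch of one closed-loop control cycle for
# pulsed GTAW, mirroring Fig. 2-1 (vision sensor -> image processing ->
# neural model -> controller -> power supply). All functions below are
# illustrative stand-ins, not the paper's actual algorithms.

def extract_pool_geometry(frame):
    """Stand-in for the image processing of Section 3: returns the
    topside weld pool width (mm) measured from the captured frame."""
    return frame["topside_width_mm"]

def ann_model(topside_width, peak_current):
    """Stand-in for the neural process model: predicts the backside
    width from topside geometry and welding current (an illustrative
    linear form only; the paper uses trained ANN models)."""
    return 0.6 * topside_width + 0.01 * peak_current - 0.5

def control_cycle(frame, peak_current, target_backside_width, gain=5.0):
    topside_width = extract_pool_geometry(frame)
    predicted_backside = ann_model(topside_width, peak_current)
    error = target_backside_width - predicted_backside
    # Proportional correction of the pulse peak current; the paper
    # develops a self-learning neural controller instead.
    return peak_current + gain * error

# One simulated cycle: a captured frame and the adjusted peak current.
frame = {"topside_width_mm": 5.2}
new_ip = control_cycle(frame, peak_current=110.0, target_backside_width=3.0)
print(f"adjusted peak current: {new_ip:.1f} A")
```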
2.3 Double-side visual sensing system for weld pool dynamics

In many practical cases, access to the backside of the work piece is not possible. According to the experience of skilled welders, the geometry of the weld pool can provide instantaneous information about weld penetration. Topside and backside images of the weld pool should therefore be captured, and the geometry extracted, for modeling and control of the dynamic process. The backside width and penetration can then be predicted by the dynamic model and controlled by an intelligent controller. For this purpose, a double-side visual sensor was designed to observe the face and back of the weld pool for modeling the penetration and formation process. The sensing system consists of topside and backside light paths and composite filters. Both topside and backside images are focused onto the same target of the CCD camera through this double-side imaging light path system.
3 Image Processing of the Weld Pool in Pulsed GTAW

3.1 Two-dimensional feature processing of the weld pool image without wire filler

A bead-on-plate experiment was conducted on low-carbon steel during pulsed GTAW using the double-side imaging system; the experimental conditions are omitted here. A complete weld pool image in one frame is shown in Fig. 3-1, in which the left part is the backside image and the right part is the topside image. In Fig. 3-1, the nozzle, arc center, topside molten portion, and topside solidified portion can be clearly distinguished in the topside image of the weld pool.
Fig. 3-1 A complete weld pool image of pulsed GTAW in one frame (labels: nozzle and arc centre partition, topside molten and solidified partitions on the topside image; backside molten and solidified partitions on the backside image)
Fig. 3-2 The visual images of the weld pool at different times in a pulse cycle, panels (a)-(g)
Fig. 3-2 shows serial topside/backside pool images within one pulse cycle. Figs. 3-2(b), (c), and (d) are pool images during the pulse peak current, and (e), (f), and (g) during the pulse base current. Based on this analysis, the time sequence of the pulsed current for image capture was determined: the image-taking current was selected as 60 A and the image-taking time as 80 ms for one complete weld pool frame. By exploiting the contrast between the mirror-like arc reflection from the liquid pool surface and the diffuse arc reflection from the frozen metal on the workpiece surface, the arc disturbance could be turned into an advantage for capturing a clear image of the weld pool. Based on arc spectral analysis, an appropriate light filtering system was designed for the 600 nm-700 nm wavelength band, and stable topside/backside images were obtained, as shown in Fig. 3-3.
Fig. 3-3 Canonical topside and backside visual images of the weld pool: (a) backside visual image; (b) topside visual image (labels: nozzle, arc centre partition, unmolten material, solidified material)
3.1.1 Exponential base filter processing of weld pool images

The first step of weld pool image processing is filtering to remove noise disturbances. In order to reduce computation, a recursive exponential base filter was used to smooth the images; the response of the exponential base filter is similar to that of the Gaussian filter (Marr, 1982).
One-dimensional recursive exponential base filter

The unit sample response of Exponential Base Smoothing (EBS) is defined as:

S(n) = k(α|n| + 1)e^{−α|n|}    (3-1)

In Eq. (3-1), n is a discrete variable, α is a constant of the filter for the one-dimensional spatial range, and k is a proportionality factor defined as:

k = (1 − e^{−α})² / (1 + 2αe^{−α} − e^{−2α})    (3-2)

Decomposing S(n) into causal and anti-causal parts and letting x(n) be the filter input and y(n) the output response of the EBS filtering, we have:

y(n) = y_c(n) + y_a(n)    (3-3)

y_c(n) = k[x(n) + e^{−α}(α − 1)x(n−1)] + 2e^{−α}y_c(n−1) − e^{−2α}y_c(n−2)

y_a(n) = k[e^{−α}(α + 1)x(n+1) − e^{−2α}x(n+2)] + 2e^{−α}y_a(n+1) − e^{−2α}y_a(n+2)

with initial conditions x(0) = 0 and y_c(0) = y_c(−1) = 0 for the causal pass (n = 1, 2, …, M), and x(M+1) = x(M+2) = 0 and y_a(M+1) = y_a(M+2) = 0 for the anti-causal pass (n = M, …, 2, 1). In Eq. (3-3), y_c(n) is the response of the causal part of the filter and y_a(n) that of the anti-causal part. The one-dimensional recursive exponential base filtering algorithm is realized by Eq. (3-3).
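For illustration, the two-pass recursion of Eqs. (3-1)-(3-3) can be transcribed directly into code. The following Python sketch is an assumption of this edition (the chapter does not give an implementation); the function name and array handling are illustrative.

import numpy as np

def ebs_filter_1d(x, alpha=1.41):
    # One-dimensional recursive EBS smoothing, Eqs. (3-1)-(3-3).
    k = (1 - np.exp(-alpha)) ** 2 / (1 + 2 * alpha * np.exp(-alpha)
                                     - np.exp(-2 * alpha))
    e1, e2 = np.exp(-alpha), np.exp(-2 * alpha)
    M = len(x)
    yc, ya = np.zeros(M), np.zeros(M)
    prev1 = prev2 = 0.0              # y_c(0) = y_c(-1) = 0
    xprev = 0.0                      # x(0) = 0
    for n in range(M):               # forward (causal) pass
        yc[n] = (k * (x[n] + e1 * (alpha - 1) * xprev)
                 + 2 * e1 * prev1 - e2 * prev2)
        prev2, prev1, xprev = prev1, yc[n], x[n]
    next1 = next2 = 0.0              # y_a(M+1) = y_a(M+2) = 0
    x1 = x2 = 0.0                    # x(M+1) = x(M+2) = 0
    for n in range(M - 1, -1, -1):   # backward (anti-causal) pass
        ya[n] = (k * (e1 * (alpha + 1) * x1 - e2 * x2)
                 + 2 * e1 * next1 - e2 * next2)
        next2, next1 = next1, ya[n]
        x2, x1 = x1, x[n]
    return yc + ya                   # Eq. (3-3)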
Two-dimensional recursive exponential base filtering algorithm

Based on the one-dimensional exponential base filtering function, a separable two-dimensional exponential base filter was developed. For a two-dimensional input x(m, n), the one-dimensional recursive filtering algorithm is first applied along one direction, owing to the separability of the filter, and its output is then taken as the input for one-dimensional filtering along the other direction. The topside/backside images of the weld pool were smoothed by this algorithm, as shown in Fig. 3-4. The results show that the noise in the topside image was filtered out while the image edges were fully preserved with α = 1.41 (σ = 1.0); with α = 0.94 (σ = 1.5), the noise in the backside image was effectively removed by the exponential base filter. Since the two-dimensional exponential base filter can be separated into two one-dimensional recursive filters along the two directions, the computing time and memory cost of the algorithm are greatly reduced: for example, on a PC-486 computer with a 100 MHz clock, the time cost was no more than 8 ms for an image window of 160×190 pixels, whereas the Gaussian filter took about 50 ms to process an image of the same size. The exponential base recursive filter is therefore well suited to real-time image processing.
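Since the filter is separable, the 2-D smoothing reduces to running the 1-D recursion sketched above over the rows and then over the columns; a minimal rendering:

def ebs_filter_2d(img, alpha=1.41):
    # Separable 2-D EBS smoothing: filter each row, then each column.
    rows = np.array([ebs_filter_1d(r, alpha) for r in np.asarray(img, float)])
    return np.array([ebs_filter_1d(c, alpha) for c in rows.T]).T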
Fig. 3-4 The smoothed images of the weld pool with the EBS algorithm: (a) original topside image; (b) topside image smoothed with the EBS algorithm; (c) original backside image; (d) backside image smoothed with the EBS algorithm
3.1.2 Contrast enhancement algorithm for the weld pool

Based on the contrast enhancement algorithms presented by Gordon et al. (1984) and Beghdadi et al. (1986), for a center pixel (x, y) with greyscale G_xy and a window W_m in which each pixel (i, j) has greyscale G_ij, the following contrast enhancement algorithm, referred to as CE, was adopted to enhance the contrast of the weld pool image. The CE algorithm is as follows:

Step 1. For the pixel (x, y), calculate the average greyscale Ḡ in its adjacent region:

Ḡ = Σ_{(i,j)∈W_m} G_ij / (m × m)    (3-4)

Step 2. For all pixels in the window W_m of the pool, calculate the edge value Δ_ij:

Δ_ij = |G_ij − Ḡ|    (3-5)

Step 3. For the window W_m, calculate the weighted average greyscale E_xy of the pool edge:

E_xy = Σ_{(i,j)∈W_m} Δ_ij · G_ij / Σ_{(i,j)∈W_m} Δ_ij

Step 4. Calculate the contrast degree C_xy of the pixel (x, y):

C_xy = |G_xy − E_xy| / (G_xy + E_xy)    (3-6)

Step 5. Transform C_xy by the contrast transfer function F:

C′_xy = F(C_xy)    (3-7)

C′_xy = F(C_xy) = (C_xy)^{a/b},  b = 2^p,  a < b    (3-8)

Step 6. Defining the enhanced greyscale of the pixel (x, y) as G′_xy, calculate:

R_xy = (1 − C′_xy)/(1 + C′_xy)  if G_xy ≤ E_xy
R_xy = (1 + C′_xy)/(1 − C′_xy)  if G_xy > E_xy    (3-9)

G′_xy = E_xy · R_xy    (3-10)

Step 7. Repeat Steps 1 to 6 for each pixel.
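The following Python sketch transcribes Steps 1-6 directly; it is illustrative only (border pixels and integer rounding are handled in the simplest way, and the small constant guarding the division is an addition of this edition):

import numpy as np

def contrast_enhance(img, m=9, beta=0.5):
    # CE algorithm, Eqs. (3-4)-(3-10); beta = a/b is the transfer exponent.
    h, w = img.shape
    out = img.astype(float).copy()
    r = m // 2
    for x in range(r, h - r):
        for y in range(r, w - r):
            win = img[x - r:x + r + 1, y - r:y + r + 1].astype(float)
            gbar = win.mean()                        # Eq. (3-4)
            delta = np.abs(win - gbar)               # Eq. (3-5)
            if delta.sum() == 0:
                continue                             # flat window: no edge
            exy = (delta * win).sum() / delta.sum()  # weighted edge greyscale
            gxy = float(img[x, y])
            cxy = abs(gxy - exy) / (gxy + exy + 1e-9)    # Eq. (3-6)
            cp = cxy ** beta                         # Eqs. (3-7)-(3-8)
            rxy = ((1 - cp) / (1 + cp) if gxy <= exy
                   else (1 + cp) / (1 - cp))         # Eq. (3-9)
            out[x, y] = exy * rxy                    # Eq. (3-10)
    return out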
Using the above CE algorithm on the topside images, the contrast enhancement results are shown in Fig. 3-5. Letting β = a/b and varying β from 0.5 to 0.25 and the window size m from 5 to 13, the edges and background of the weld pool are clearly distinguished in Fig. 3-5. Trading off enhancement effect against computational complexity, β = 0.5 and m = 9 were chosen for processing; the calculation time was no more than 16 ms.
Fig. 3-5 Contrast enhancement of the topside image of the weld pool: (a) EBS-smoothed image; (b) CE (β = 0.5, m = 5); (c) CE (β = 0.5, m = 9); (d) CE (β = 0.5, m = 13); (e) CE (β = 0.25, m = 5); (f) CE (β = 0.25, m = 9); (g) CE (β = 0.25, m = 13)
3.1.3 Extracting the geometric features of the weld pool

Define the topside pool features as the maximum width W_fmax and the maximum half length L_fmax, and the backside pool features as the maximum width W_bmax, the maximum length L_bmax, and the area S_b, as shown in Fig. 3-6. These parameters can be used to describe the pool features under full penetration.
Fig. 3-6 The definition of the feature size parameters of a fully penetrated weld pool: (a) topside of weld pool (axes x, y; W_fmax, L_fmax); (b) backside of weld pool (axes x, y; W_bmax, L_bmax, area S_b)
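As a simple illustration of how such feature sizes might be measured from a segmented (binary) pool image; this is a hypothetical helper of this edition, not the authors' ETG algorithm described next:

import numpy as np

def pool_feature_sizes(mask):
    # mask: 2-D boolean array marking the molten region of one pool image.
    ys, xs = np.nonzero(mask)
    width = int(xs.max() - xs.min() + 1)    # maximum width, e.g. W_bmax
    length = int(ys.max() - ys.min() + 1)   # maximum length, e.g. L_bmax
    area = int(mask.sum())                  # area, e.g. S_b
    return width, length, area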
Algorithm for extracting the topside features of the weld pool

Based on the greyscale distribution of the image in Fig. 3-7, the algorithm ETG for extracting the top geometry of the weld pool was developed as follows:

The algorithm ETG: extract-top-geometry (image window topw) { Step1: get-centre-point C (image window topw); Step2: for (i=-5; i

where λ > 0 is a regularization constant. The last term controls the flatness of the function; in this context, flatness means that one seeks a small w, which has a significant effect on the generalization capability of the algorithm. Intuitively, the SVR maps the input vectors onto a high-dimensional feature space and then locates an optimal function f that not only reduces the risk but also controls the generalization capability. Using the loss function of Eq. (2.4), it can be shown that solving an SVR is equivalent to solving the following optimization problem:
min_{w,b,ξ,ξ*}  ½⟨w, w⟩ + C Σ_{i=1}^{n}(ξ_i + ξ_i*) = ½‖w‖² + C Σ_{i=1}^{n}(ξ_i + ξ_i*)    (2.7)

s.t.  y_i − ⟨w, φ(x_i)⟩ − b ≤ ε + ξ_i
      ⟨w, φ(x_i)⟩ + b − y_i ≤ ε + ξ_i*
      ξ_i, ξ_i* ≥ 0,  i = 1, 2, …, n.
where the slack variables ξ_i, ξ_i* cope with additive noise, and ε represents the expected level of precision. With the slack variables ξ_i and ξ_i* equal to zero, Eqs. (2.7) form the so-called ε-tube. C, a function of (λn), in Eq. (2.7) is a parameter corresponding to the value ‖w‖²: it provides a trade-off between the flatness of f and the amount up to which deviations larger than ε are tolerated. From Eq. (2.7), a Lagrange function can be constructed as:

L = ½‖w‖² + C Σ_{i=1}^{n}(ξ_i + ξ_i*) − Σ_{i=1}^{n} α_i(ε + ξ_i − y_i + ⟨w, φ(x_i)⟩ + b) − Σ_{i=1}^{n} α_i*(ε + ξ_i* + y_i − ⟨w, φ(x_i)⟩ − b) − Σ_{i=1}^{n}(η_iξ_i + η_i*ξ_i*),    (2.8)

where α_i, α_i*, η_i, η_i* ≥ 0 are the Lagrange multipliers. A more general form of SVR involves the use of a kernel function K, i.e., a function K such that for all x, z ∈ Ω:

K(x, z) = ⟨φ(x) · φ(z)⟩.    (2.9)

The kernel function must satisfy the Mercer condition [14]:

∬ K(x, z) φ(x) φ(z) dx dz > 0,    (2.10)

where K is a symmetric function, φ(x) is not identically zero, and ∫ φ²(x) dx < ∞. Replacing w·φ(x) with K and setting the partial derivatives of L with respect to the variables (w, b, ξ_i, ξ_i*) to zero, it can be shown that solving the optimization problem in Eq. (2.8) is equivalent to solving the quadratic optimization problem:

min_{α,α*}  ½ Σ_{i,j=1}^{n}(α_i − α_i*)(α_j − α_j*)K(x_i, x_j) + ε Σ_{i=1}^{n}(α_i + α_i*) − Σ_{i=1}^{n}(α_i − α_i*)y_i    (2.11)

s.t.  Σ_{i=1}^{n}(α_i − α_i*) = 0,  0 ≤ α_i, α_i* ≤ C,  i = 1, 2, …, n.
This problem can be solved using various methods, such as projected conjugate gradients or quasi-Newton methods. The quadratic optimization process guarantees that the solution is the global minimum. After the unique solutions α, α* are found, the weight factor can then be represented as:

w = Σ_{i=1}^{n}(α_i − α_i*)φ(x_i).    (2.12)

Note that, according to the Karush-Kuhn-Tucker (KKT) complementarity conditions [15], the product of the dual variables and constraints at the optimal solution must vanish; that is,

α_i (y_i − Σ_{j=1}^{n}(α_j − α_j*)K(x_i, x_j) − b − ε − ξ_i) = 0
α_i* (Σ_{j=1}^{n}(α_j − α_j*)K(x_i, x_j) + b − ε − ξ_i* − y_i) = 0.    (2.13)

Thus, a number of input vectors lie on the boundary of the ε-tube; they are referred to as the support vectors (SVs). According to the SVs, for which ξ_i, ξ_i* = 0, b can be calculated as follows:

b = y_i − Σ_{j=1}^{n}(α_j − α_j*)K(x_i, x_j) − ε  for α_i, α_i* ∈ (0, C).    (2.14)

Finally, the SVR can be constructed for real-valued function estimation as:

f(x, α, α*) = Σ_{i∈SV}(α_i − α_i*)K(x_i, x) + b.    (2.15)
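To make Eqs. (2.11)-(2.15) concrete, the following minimal Python sketch solves the dual with a general-purpose constrained optimizer. The exponential RBF kernel (the kernel named later in this paper), the SLSQP solver, and all names are assumptions of this edition, not the authors' implementation (which, as noted in Section 4, uses SMO):

import numpy as np
from scipy.optimize import minimize

def exp_rbf(X1, X2, sigma=0.4):
    # Exponential RBF kernel; sigma = 0.4 follows Section 4 (assumed here).
    d = np.sqrt(((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1))
    return np.exp(-d / (2.0 * sigma ** 2))

def fit_svr(X, y, C=10.0, eps=0.1):
    n = len(y)
    K = exp_rbf(X, X)

    def dual(z):                        # objective of Eq. (2.11)
        a, a_s = z[:n], z[n:]
        d = a - a_s
        return 0.5 * d @ K @ d + eps * (a + a_s).sum() - y @ d

    cons = {"type": "eq", "fun": lambda z: (z[:n] - z[n:]).sum()}
    res = minimize(dual, np.zeros(2 * n), method="SLSQP",
                   bounds=[(0.0, C)] * (2 * n), constraints=[cons])
    a, a_s = res.x[:n], res.x[n:]
    d = a - a_s
    sv = np.where((a > 1e-6) & (a < C - 1e-6))[0]   # unbounded SVs
    i = sv[0] if len(sv) else int(np.argmax(np.abs(d)))
    b = y[i] - K[i] @ d - eps                       # Eq. (2.14)
    return d, b

def svr_predict(X_train, d, b, X_new):
    # Eq. (2.15): f(x) = sum_i (alpha_i - alpha_i*) K(x_i, x) + b
    return exp_rbf(X_new, X_train) @ d + b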
2.2 Similarity measurement
Let us examine Eq. (2.6) again. Given a set of training samples with additive noise, S_f, the goal of SVR is to find a function f that has at most ε deviation from the actually measured targets for all the training samples, and at the same time is as flat as possible. To do this, the definition of the loss function c(x, y, f(x, w)) is very important. Ideally, it should have a simple structure to avoid difficult optimization problems; in addition, it should be general enough that the data can be fit. Assume that the set S_f was generated by f plus additive noise ξ with density p(ξ):

y_i = f(x_i) + ξ_i.    (2.16)
Then, the likelihood of an estimate Ff = {(x1, f(x1,w)), (x2, f(x2,w)), …, (xn, f(xn,w))} based on Sf is
P(F_f | F) = Π_{i=1}^{n} P(f(x_i, w) | (x_i, y_i)) = Π_{i=1}^{n} P(f(x_i, w) | y_i) = Π_{i=1}^{n} p(y_i − f(x_i, w)) = Π_{i=1}^{n} p(ξ_i).    (2.17)

To maximize P(F_f | F) is equivalent to maximizing log P(F_f | F):

log P(F_f | F) = log Π_{i=1}^{n} p(ξ_i) = Σ_{i=1}^{n} log p(ξ_i) = Σ_{i=1}^{n} log p(y_i − f(x_i, w)).    (2.18)
Therefore, in a maximum likelihood sense, the optimal loss function is achieved by setting

c(x, y, f(x, w)) = −log p(y − f(x, w)) = −log p(ξ).    (2.19)

The desired estimate then minimizes the risk function described by Eq. (2.3) with the loss function described by Eq. (2.19). Eq. (2.18) is equivalent to

log P(F_f | F) = Σ_{i=1}^{n} log p(ξ_i) = −Σ_{i=1}^{n} c(x_i, y_i, f(x_i, w)).    (2.20)
The importance of this result is twofold. On the one hand, it presents a principle that enables the construction of a loss function: once the noise density of the system is defined, its related loss function can be obtained according to Eq. (2.19). As in many practical applications, the standard Normal density model N(0, 1) is used here to describe the noise:

p(y − f(x, w)) = p(ξ) = (1/√(2π)) exp(−½ξ²).    (2.21)

This results in the following loss function:

c(x, y, f(x, w)) = ½(y − f(x, w))² = ½ξ².    (2.22)
On the other hand, the result provides a new similarity measurement method: a theoretical standard by which it can be determined whether or not two signals were generated under the same condition. Under normal conditions, the process exhibits unique characteristics that are reflected in its signals; signals from abnormal conditions exhibit deviations, referred to here as noise. Predicted values are obtained by substituting the unknown signal as input into the regression model. With a given noise density model, or the related loss function, the likelihood of the unknown input vectors under the estimated function can be calculated according to Eq. (2.20). As a result, condition monitoring can be implemented by comparing the likelihood of an unknown signal with the likelihood distribution of normal ones. Employing the loss function defined in Eq. (2.22), the related SVR estimation can be defined as
min_{w,b,ξ,ξ*}  ½⟨w, w⟩ + C Σ_{i=1}^{n}(ξ_i² + ξ_i*²) = ½‖w‖² + C Σ_{i=1}^{n}(ξ_i² + ξ_i*²)    (2.23)

s.t.  y_i − ⟨w, φ(x_i)⟩ − b ≤ ε + ξ_i
      ⟨w, φ(x_i)⟩ + b − y_i ≤ ε + ξ_i*
      ξ_i, ξ_i* ≥ 0,  i = 1, 2, …, n.
The dual problem can be derived using the method introduced in Section 2, taking into account that ξ_iξ_i* = 0 and, therefore, that the same relation α_iα_i* = 0 holds for the corresponding Lagrange factors [12]:

min_{α,α*}  ½ Σ_{i,j=1}^{n}(α_i − α_i*)(α_j − α_j*)(K(x_i, x_j) + (1/C)δ_{i,j}) + ε Σ_{i=1}^{n}(α_i + α_i*) − Σ_{i=1}^{n}(α_i − α_i*)y_i    (2.24)

s.t.  Σ_{i=1}^{n}(α_i − α_i*) = 0,  α_i ≥ 0, α_i* ≥ 0,  i = 1, 2, …, n,
where δ_{i,j} is the Kronecker delta (δ is the n×n identity matrix). The precision error ε plays an important role in the SVR estimation; aiming to model the signal for condition monitoring, ε was set to zero. This corresponds to a standard linear least-squares regression with a weight-decay factor controlled by the parameter C. As C → ∞, the problem tends to an unconstrained least squares, which is equivalent to leaving the diagonal of the dot-product matrix unchanged. According to the KKT complementarity condition, b can be calculated using Eq. (2.25):
b = y_i − Σ_{j=1}^{n}(α_j − α_j*)K(x_i, x_j) − ε − (α_i − α_i*)/C  for α_i, α_i* > 0.    (2.25)
The final function estimation shares the same expression as Eq. (2.15).
3 Novel Condition Monitoring System

Implementing a condition monitoring system requires the measurement of typical quantities related to process and quality data. In practice, sensor signals with a high signal-to-noise ratio (SNR) are preferred. Nevertheless, whatever methods are employed, some extraneous noise remains in the normal signals: some of it is related to measurement, while the process causes the rest. Current thinking suggests that abnormal operations contribute deviations to the signal patterns, which can also be regarded as a kind of noise added to the normal signal. Thus, a state-of-the-art condition monitoring system can
be developed based on the SVR and noise density models employing the architecture depicted in Figure 1.
Fig. 1. The architecture of the condition monitoring system (off-line training path and on-line monitoring path)

The system consists of two paths: the off-line path is used for training, and the on-line path for real-time condition monitoring. In off-line training mode, SVR is carried out on the available training sample sets acquired under normal conditions; the mean and variance of the likelihood values P are then calculated according to the loss function, which finally enables the threshold on P to be computed. In on-line monitoring mode, by assessing the likelihood of the real-time data, the system detects faults (or their absence) and issues appropriate and efficient commands to the actuator.

The SVR-based process monitor demonstrates a number of interesting features. First, it has an exceptional ability to estimate signal functions, owing to the regression estimation in the kernel-induced feature space. By replacing the dot product ⟨φ(x), φ(x_i)⟩ with the kernel K(x, x_i), inputs are mapped non-linearly onto a high-dimensional feature space whose dimension is determined by the kernel function. For example, if the kernel is a 2nd-order polynomial over a d-dimensional vector, the dimension of the feature space is d(d+3)/2. This effectively captures more information with which to describe the input. In practice, various kernel functions can be employed as long as they comply with the Mercer condition defined in Eq. (2.10). Second, the solution of the SVR is the global optimum: the quadratic optimization process guarantees the global minimum. Moreover, as shown in [15], two parts are associated with a learning machine designed according to the ERM principle, the empirical risk of the training samples R_emp(w) and the confidence interval Φ:

R(w) ≤ R_emp(w) + Φ(h),    (2.26)

where h is the Vapnik-Chervonenkis (VC) dimension and Φ grows with h. The structural risk minimization (SRM) principle [13] provides a new way of designing learning machines by endeavoring to locate a hyperplane that minimizes the sum of R_emp(w) and Φ, thereby achieving the
goal of minimizing R(w). As opposed to the ERM approach commonly employed in machine learning, the formulation of the SVR embodies the SRM principle [15]; therefore, SVR not only constructs an optimal hyperplane with a strong generalization capacity, but also overcomes overfitting and other difficulties in the design of the learning machine. Third, the system does not require complicated calculation during monitoring and hence can operate in real time. Although the dimension of the feature space increases, constructing the optimal hyperplane requires no extra computational load, thanks to the kernel function: replacing the dot product with an appropriately chosen kernel performs an implicit non-linear mapping onto a high-dimensional feature space without increasing the number of tunable parameters, since the kernel function computes the dot product of the feature vectors corresponding to the two input vectors. As a result, the exceptional estimation ability brings no extra computational complexity or load.

Table 1. The procedure for training the SVR-based condition monitoring system

Step 1. Obtain training sample sets under normal conditions, S_i = {(x_ij, y_ij)}_{j=1}^{n}, i = 1, 2, …, M, where S_i ∈ S = {S_i | i: number of training sample sets}.
Step 2. (Optional) Apply signal processing for de-noising.
Step 3. Obtain a sample set derived from S using statistics to train the system.
Step 4. Define a kernel function K(x, z); C can be determined by cross-validation.
Step 5. Define the loss function as in Eq. (2.22).
Step 6. Apply quadratic optimization, Eq. (2.24), to find α_i, α_i*; the x_i with α_i, α_i* ≠ 0 are the corresponding support vectors according to Eq. (2.13); acquire b* from Eq. (2.25).
Step 7. Substitute x_i, α_i, α_i* into Eq. (2.15) to obtain f(x, α, α*).
Step 8. For i = 1:M, compute log(P_i) from Eq. (2.20).
Step 9. Fit log(P_i) to N(m, σ); the threshold on log(P) is m − 3σ.
Step 10. Monitor on-line: obtain the real-time data set S_new = {(x_j, y_j)}_{j=1}^{n} and substitute it into the system to calculate log(P_new); if log(P_new) ≥ m − 3σ, then condition = normal; otherwise condition = fault.
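A compact Python rendering of Steps 8-10, assuming a fitted regression function f such as the SVR sketched earlier; names and data layout are illustrative:

import numpy as np

def log_likelihood(f, X, y):
    # Eq. (2.20) with the Gaussian loss of Eq. (2.22):
    # log P = -n/2 * log(2*pi) - 0.5 * sum of squared residuals.
    r = y - f(X)
    return -0.5 * len(r) * np.log(2 * np.pi) - 0.5 * np.sum(r ** 2)

def fit_threshold(f, normal_sets):
    # Step 9: fit N(m, sigma) to the likelihoods of the normal sample sets.
    logs = np.array([log_likelihood(f, X, y) for X, y in normal_sets])
    return logs.mean() - 3.0 * logs.std()       # three-sigma threshold

def monitor(f, threshold, X_new, y_new):
    # Step 10: a sample set whose likelihood falls below the threshold
    # is declared a fault.
    ok = log_likelihood(f, X_new, y_new) >= threshold
    return "normal" if ok else "fault"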
Finally, the system requires only training sample sets produced under normal operating conditions, thus making it more feasible than other techniques. Given a number of training sample sets, the SVR technique will calculate the
dependency between input vectors and outputs; moreover, its loss function, defined by the noise density model, allows the likelihood thresholds to be computed automatically. Thus, the system has the intelligence to cope with different processes. All of these features make the proposed process monitor very attractive for real-world applications. In summary, the procedures for training the monitor and carrying out the on-line monitoring are given in Table 1. In practice, three major factors may affect the performance of the system. First, an appropriate kernel can make the system more efficient. The kernel function reflects the geometric relationship between a new input vector and a specific support vector. The Gram matrix K = {K_{i,j}}, where K_{i,j} = K(x_i, x_j), of the kernel is positive definite and contains all the information necessary for the learning algorithm. In condition monitoring, K is used to measure the similarities between normal features and faulty features; in this paper, the exponential RBF kernel function is employed. Second, the size of the input vector may pose a computational limitation. In high-speed stamping, a stroke is completed in a fraction of a second, while SVR requires a significant amount of computation. In fact, to protect precision moulds and ensure quality, real-time response and detection accuracy must have first priority in process monitoring; therefore, the size of the input vectors should be as small as possible. Third, with regard to the signal type, an appropriate signal with high SNR not only effectively represents the physical characteristics, but also makes the monitoring system more robust.
4 Experimental Study

4.1 Experimental design
To verify its performance, the new monitor was applied to a complicated process. The sheet metal stamping operation is one of the most commonly used manufacturing processes; it involves the design and lubrication of the mould, the transient elastic and plastic deformation of the sheet metal, and the static and dynamic behavior of the press [16]. Nearly 40 process variables can affect the operation and the product, so monitoring such processes is quite difficult. The experiments were carried out on a C-frame double-column press (SEYI SN1-25), a typical machine used in practice, with a maximum stamping force of 25 tons and a maximum speed of 110 spm. It is reported that strain signals are the most commonly used in monitoring stamping operations [16], as they gather meaningful information and reflect robust physical properties. Here a strain signal was employed, so further signal processing was not required. The raw signals were acquired from a strain sensor (Kistler 9232A)
mounted on the middle of the press frame. The signals were sampled using an NI DAQ board (PCI 4452) installed in an Intel Pentium III 550 MHz PC. Because sheet metal stamping is a high-speed transient process, a trigger mechanism, a proximity sensor installed near the main shaft, was developed so that the signal could be captured at the same instant in every stroke: each time the keyway of the rotating shaft passes the sensor, a pulse is generated and sent to the computer to trigger the start of sampling. Three experiments were conducted covering a range of situations. The first and second experiments were conducted in a laboratory setting; in the third experiment, data from a real-world application were acquired using a JG21-14 press (manufactured by Jieyang Machine Tool Works). Parts were produced under normal conditions; when working conditions changed, so-called faults developed and deviations occurred. Table 2 describes the conditions that occur frequently in practice.

Table 2. Fault types occurring frequently during the stamping process

Type  Description
-     No detection: under normal conditions.
A     Misfeed: workpiece is not aligned with the dies.
B     Slug: slug(s) left on the surface of the sheet metal.
C     Too thick: workpiece is thicker than normal.
D     Too thin: workpiece is thinner than normal.
E     Material missing: the material is used up.
F     Crack: occurs in the drawing operation due to bad lubrication, materials, or design.
G     Double stroke: part left in the upper module.
4.2 Conducting the experiments
The first experiment incorporated a blanking operation with a simple mould. The parts produced are simple buckets used for mounting I/O boards in desktop computers; the material is SPCC and the blank dimension is approximately 32 cm. The normal working condition is 85 spm with a 1.1 mm workpiece. The signals in Figure 2 show that, except for the last two conditions, the signals are all similar in appearance. The second experiment used a multi-station mould and included the three stamping categories; it produces the rear covers of a small motor. The material is SPCE and the dimension of the parts is approximately 35 mm × 6.6 mm (diameter × height); the normal working condition is 80 spm with a 1.0 mm workpiece. Owing to the rather long multi-station mould and operation procedure, two strain sensors were mounted on the left and right sides of the press to detect faults; the strain signals from the two sensors were merged to provide richer information. Figure 3 presents typical strain signals under different conditions.
Fig. 2. Typical strain signals under different stamping conditions in the first experiment
Fig. 3. Typical strain signals under different stamping conditions in the second experiment

The third experiment also used a multi-station mould; its products are battery caps. The material is copper and the approximate dimension of the part is 12 mm × 4.3 mm (diameter × height), designed to work at 150 spm with a 0.7 mm workpiece. Note that the normal stamping force of the press is 140 kN. A number of presses work in the same area, which may disturb the process, and the press itself is timeworn. Figure 4 illustrates strain signals under different conditions in this experiment. The material specification and the design of the mould caused the battery caps to break easily. Moreover, due to the age and condition of the press, along with the adverse working conditions, the SNR of the strain signals in the third experiment was low. As shown in Figure 4, only serious cracks were detected by traditional condition monitoring indices, i.e., peak value, RMS, kurtosis, and skewness. In all
cases, operators are informed of the results too late to reduce the number of faulty parts manufactured. No obvious signature indicates that cracks are about to occur, and it is almost impossible to diagnose these faults.
Fig. 4. Typical strain signal under normal condition in the third experiment

It was for just these circumstances that the SVR-based condition monitoring system was developed. To develop the system, three major factors need to be determined: the value of ε, the type of kernel function, and the training sample sets. As in Section 3, ε was set to 0 and the exponential RBF kernel function was chosen, with its σ set to 0.4. As for the third factor, the training sample sets, a range of data was acquired from the signal(s) according to the scan rate within each stroke. To eliminate meaningless data and to ensure effective coverage of the entire stroke, specific points were drawn: 512 points were selected for each stroke in the first experiment, and 1024 and 640 points in the second and third experiments, respectively. Further, because the dimension of the input vector has an obvious impact on the computational load of the system, the number of points was reduced while maintaining the crucial features of the processes, so as to guarantee the speed of the system. To this end, one point in two was chosen from the selected data of each stroke in the first experiment, and one in four in the second and third experiments. In the SVR model, the regression step is T = 10; hence, the model is as follows:

y_i = f(x_i) + ξ_i = f(x_{i−1}, x_{i−2}, …, x_{i−T}) + ξ_i,  i = 1, 2, 3, …, n,    (2.27)

where x is a point of the strain signal. A total of 130 sample sets was collected under normal and faulty conditions in each experiment; 30 sample sets produced under normal conditions and selected randomly served as the training sample sets for each experiment. The algorithm for training the SVR is Sequential Minimal Optimization (SMO), which has been well described by Platt [15]. The algorithm ensures that the solution of the quadratic programming optimization problem satisfies the time requirement of the system.
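As an illustration, the training pairs implied by Eq. (2.27) can be assembled as a sliding time-delay embedding of the strain signal; this helper is an assumption of this edition (T = 10 as in the text):

import numpy as np

def embed(signal, T=10):
    # Build inputs x_i = (x_{i-1}, ..., x_{i-T}) and targets y_i = x_i.
    s = np.asarray(signal, float)
    X = np.array([s[i - T:i][::-1] for i in range(T, len(s))])
    return X, s[T:]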
4.3 Experimental results
Three experiments were carried out in succession according to the procedure defined above. The distributions of the likelihood for the three experiments are illustrated in Figures 5(a), (b), and (c), respectively, where the solid line is the mean value of the likelihood, the broken line is the threshold, "*" represents the normal samples, and "•" represents the abnormal samples.
5 Discussion of Results

In summary, the results of the three experiments are presented in Table 3. In the first experiment, the sample sets produced under normal conditions numbered 60, and the remaining 70 sets were the result of misfeeds, slugs, and material either too thick or too thin. The second experiment consisted of 90 normal-condition sample sets, with 40 sets resulting from misfeeds, slugs, and double strokes. In the third experiment, faulty-condition sample sets were created by suspending the flow of lubricating oil; parts produced under such conditions are faulty and must be discarded, yet the strain signals carry no clear information about them.
Table 3. Experimental results using the SVR-based system

Experiment  Training sample sets  Testing: normal sets  Testing: abnormal sets  Success rate
1st         30                    30                    70                      99%
2nd         30                    60                    40                      94%
3rd         30                    70                    30                      89%
Figures 5(a), (b), and (c) show that the likelihood of the samples under normal conditions is tightly clustered in the first and second experiments, whereas the samples from abnormal conditions are widely distributed. Due to the poor SNR in the third experiment, the signal patterns of the different working conditions are similar, and this is reflected in the distribution of the likelihood. Nevertheless, the experimental results show that the system performs robustly: the approach extracts the invariant features of the strain signal under normal conditions. Moreover, for comparison, the authors also employed the backpropagation (BP) approach, a traditional ANN technique; a Hidden Markov Model approach, another modeling technique; and a support vector machine (SVM) approach, using the same samples in each experiment. Table 4 presents the results [16, 17].
Fig. 5. Experimental results: (a) top left: results for Experiment 1; (b) top right: results for Experiment 2; (c) bottom: results for Experiment 3

Table 4. Experimental results for fault classification using other approaches

Source of training sample   Approach                  Success rate
Samples in Experiment 1     BP network (two layers)   93%
Samples in Experiment 1     Hidden Markov Model       92%
Samples in Experiment 1     Support Vector Machine    99%
Samples in Experiment 2     BP network (two layers)   84%
Samples in Experiment 2     Support Vector Machine    96%
Samples in Experiment 3     BP network (two layers)   Failed (
Samples in Experiment 3     Support Vector Machine

Σ_{i=1}^{N} α_i w_i = 0,  α_i > 0    (2.11)
Note that in eq. (2.11) the coefficients αi cannot be equal to zero so as to guarantee that the origin is not on the facets (boundary) of the convex hull.
3 Qualitative Test of Form-Closure Grasps

The qualitative test is the problem of checking whether a grasp achieves form closure. The problem is defined as follows:

Problem 1: Given the contact positions of the n fingers on the surface of the grasped object and their corresponding surface normal vectors, check whether the given grasp is form-closure.

One possible method for the qualitative test is to construct the convex hull of the primitive contact wrenches. However, constructing a convex hull in the six-dimensional wrench space is not a trivial task, according to results in Computational Geometry. For the sake of computational efficiency, it is necessary to develop an approach that avoids constructing the convex hull. In this section, a simple and efficient qualitative check algorithm is presented, based on the following observation:

Theorem 1: Suppose that an n-finger frictional/frictionless grasp is given. Denote the convex hull of the primitive contact wrenches by H(W), and assume that point P is an interior point of H(W). The ray from point P through the origin O of the wrench space intersects the boundary of the convex hull H(W) in exactly one point Q. The grasp is form-closure if and only if the distance between points P and Q is strictly larger than the distance between points P and O.

Proof: For a convex hull H(W), the definition of an interior point P means that there always exists an r such that the spherical neighborhood of P, B(P, r) = {x : ‖x − P‖ < r}, lies entirely inside the convex hull. Since H(W) is a convex region, any ray emanating from its interior point P intersects the boundary of H(W) in exactly one point; denote the intersection of H(W) with the ray PO by Q. The point Q lies on a facet of the convex hull H(W), i.e., on the boundary, and any point on the segment PQ except the endpoint Q is an interior point of the convex hull. Hence, if the distance between P and the origin O is strictly smaller than that between P and Q, the origin O must be an interior point, so there always exists a spherical neighborhood B(O, r) of the origin strictly contained in H(W), and the grasp is form-closure; otherwise O is not an interior point and the grasp is not form-closure.
This theorem is illustrated in Fig. 3. In Fig. 3(a), the origin O is on the segment PQ and hence the grasp is form-closure; in Fig. 3(b), the origin O does not lie on segment PQ and thus the grasp is not a form-closure grasp. It should be emphasized that a single interior point P is sufficient for the check, and that P must not lie on the boundary of H(W). Obviously, a strictly positive combination of the primitive contact wrenches is an interior point of the convex hull H(W); therefore, we can choose P as the centroid of the primitive contact wrenches:

P = (1/N) Σ_{i=1}^{N} w_i    (2.12)
where N denotes the number of the primitive contact wrenches. To locate the intersection point Q of H(W) with the ray from the interior point P to the origin O of the wrench space R⁶, we first detect the facet E of H(W) intersected by the ray PO, and then calculate the intersection of that facet with the ray PO. The problem of detecting the facet is closely related to the ray-shooting problem for a convex hull:

Definition 3 (Ray-shooting problem): Let M be a given set of points in R^d. Assume that the convex hull H(M) contains the origin. Given a ray emanating from the origin, find the facet of H(M) intersected by this ray.

Fig. 3 A form-closure grasp (a) and a non-form-closure grasp (b)

It is well known in Computational Geometry that the ray-shooting problem can be transformed to a linear programming problem based on the duality between a convex hull and a convex polytope. Assume that the convex hull H(M) contains the origin. According to well-known results in Computational Geometry [17], the convex hull H(M) can be dually transformed to the convex polytope CT(M) defined by

vᵀx ≤ 1,  ∀v ∈ M,

and vice versa, by virtue of a dual transformation T that maps a point vᵀ = (v₁, v₂, …, v_d) in R^d to the hyperplane:

v₁x₁ + v₂x₂ + ⋯ + v_d x_d = 1    (2.13)
This mapping is illustrated in Fig. 4. By the transformation T, a point v of the convex hull H(M) is transformed to a hyperplane T(v). The transformation T is reversible and has the following important properties:

• A vertex of the convex hull H(M) is transformed to a facet of the convex polytope CT(M), and vice versa. For instance, in Fig. 4 the vertex D is transformed to the facet T(D).
• A facet of the convex hull H(M) is transformed to a vertex of the convex polytope CT(M), and vice versa. For example, in Fig. 4 the facet DC of the convex hull is transformed to the vertex at the intersection of the facets T(D) and T(C).
• Points strictly inside the convex hull H(M) are transformed to redundant hyperplanes of the convex polytope CT(M), and vice versa. For example, in Fig. 4 the interior point G of the convex hull is transformed to the redundant hyperplane T(G).

Note that the ray-shooting problem assumes that the convex hull contains the origin. By applying a coordinate translation by −P to all points in R⁶, we readily move the point P, which lies strictly inside the convex hull, to the origin. After the coordinate translation, the convex hull H(W) is dual to the convex polytope defined by
Fig. 4 The duality between a convex hull and a convex polytope (the figure shows the convex hull H(M) with vertices A, B, C, D and interior point G, together with the dual convex polytope CT(M) bounded by the hyperplanes T(A), T(B), T(C), T(D), with T(G) a redundant hyperplane)
(w_i − P)ᵀx ≤ 1,  i = 1, 2, …, N.    (2.14)
Denote the direction vector of the ray PO by t. Based on the duality between the convex hull and the convex polytope, the ray-shooting problem is equivalent to maximizing the objective function

z = tᵀx    (2.15)

subject to the constraints in (2.14). Suppose that the optimal solution of this linear programming problem is e = (e₁, e₂, …, e₆)ᵀ. According to the duality, the facet E of H(W) intersected by the ray PO is

e₁x₁ + e₂x₂ + ⋯ + e₆x₆ = 1    (2.16)
Then, the intersection point Q of H(W) with the ray PO is the intersection of the hyperplane defined by (2.16) with the ray PO. To summarize the discussions above, the qualitative test of a form-closure grasp can be performed by the following algorithm: Algorithm 1
1) Calculate all the primitive contact wrenches w_i corresponding to the given grasp.
2) Calculate the interior point P using Eq. (2.12).
3) By linear programming, calculate the optimal point eᵀ = (e₁, e₂, …, e₆) that maximizes the function z in Eq. (2.15) subject to the constraints in Eq. (2.14). The hyperplane T(e) corresponding to the optimal solution, eᵀx = 1, is the facet of H(W) intersected by the ray PO.
4) Calculate the intersection Q of the ray PO with the hyperplane T(e). The point Q is the intersection of H(W) with the ray PO.
5) If the distance between points P and Q is larger than that between points P and O, the grasp is form-closure; otherwise, it is not.
6) The algorithm ends.

Fig. 5(b) plots the computation time of the algorithm against the number m of segments linearizing a friction cone, for the 4-finger frictional grasp on a rectangular box shown in Fig. 5(a); the times were obtained on a Sun Ultra I170 using the C programming language. The grasp is not form-closure when the friction coefficient is 0.3; it becomes a form-closure grasp when the friction coefficient is increased to 0.5. The results show that the qualitative check is performed efficiently: for example, when m equals 100, which implies a reasonably accurate linearization of the friction cone, the computation time is 0.45 seconds for a friction coefficient of 0.5.
Fig. 5 A 4-finger grasp on a rectangular box (a) and the results of its qualitative test (b): computation time (ms) versus the number m of linearizing segments (0-400), for friction coefficients 0.3 and 0.5
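A compact sketch of Algorithm 1 using an off-the-shelf LP solver; the wrench matrix layout, the Python/SciPy realization, and the function name are assumptions of this edition (the authors report a C implementation):

import numpy as np
from scipy.optimize import linprog

def is_form_closure(W):
    # W: N x 6 matrix whose rows are the primitive contact wrenches w_i.
    P = W.mean(axis=0)                    # interior point P, Eq. (2.12)
    t = -P                                # direction of the ray P -> O
    # After translating by -P, the dual polytope is (w_i - P)^T x <= 1,
    # Eq. (2.14); maximize z = t^T x, Eq. (2.15) (linprog minimizes, so
    # the objective is negated, and default nonnegativity bounds removed).
    res = linprog(c=-t, A_ub=W - P, b_ub=np.ones(len(W)),
                  bounds=[(None, None)] * W.shape[1])
    if not res.success:
        return False                      # degenerate hull: treat as failure
    e = res.x                             # facet E: e^T (x - P) = 1, Eq. (2.16)
    s = 1.0 / float(e @ t)                # ray parameter of Q = P + s*t
    Q = P + s * t
    # Theorem 1: form closure iff ||PQ|| > ||PO||.
    return np.linalg.norm(Q - P) > np.linalg.norm(P)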
4 Grasp Synthesis in the Discrete Domain

Compared to the qualitative test, a more challenging and important problem is how to find a form-closure grasp on a given object; this problem cannot be avoided in the motion planning of dexterous tasks. In this section, we address the grasp synthesis problem in a discrete domain, i.e., we solve the problem of finding an n-finger form-closure grasp on an object represented by a set of discrete
points. It is assumed that the number of discrete points is large enough to yield the specified accuracy of approximation. The problem is defined as follows:

Problem 2: Given a set Ω of surface points representing a 3-D object of arbitrary geometry and their corresponding surface normals, find an n-finger frictionless or frictional form-closure grasp in the point set Ω.

A simple, efficient, and complete algorithm is presented for solving Problem 2 based on the ray-shooting technique. To better explain the algorithm, the description focuses on the problem of finding a 7-finger frictionless grasp; the extension of the algorithm to frictional grasps is discussed in the last subsection of this section.

4.1 Algorithm Outline
The basic idea of the algorithm is to combine a local heuristic search with a recursive problem decomposition strategy. First, seven points are randomly selected from the point set and checked for form closure. If the selected points achieve a form-closure grasp, the algorithm finishes; otherwise, a local heuristic search is executed until a form-closure grasp is found or a local minimum is encountered. The local search algorithm searches for a form-closure grasp in the direction that reduces the distance between the convex hull of the primitive contact wrenches and the origin of the wrench space. In case the distance reaches a local minimum during the search, the point set is divided into subsets by a separating hyperplane in the wrench space, and several sub-problems defined on the subsets, which have fewer points than the original set, are generated according to the existence conditions of form-closure grasps. To handle this process effectively, we adopt a search tree whose root represents the original problem; the sub-problems are represented as child nodes of the root. This process is carried out recursively on every node of the search tree until a form-closure grasp is found or all the nodes have been processed. An essential step in the proposed algorithm is the qualitative test of a form-closure grasp, for which the ray-shooting based algorithm presented in the previous section is used. Beyond the qualitative test itself, an important property is that the qualitative test algorithm detects the facet of the convex hull closest to the origin of the wrench space; this closest facet is used to guide the direction of the local search algorithm.

4.2 Local Heuristic Search
The local search algorithm iteratively searches for finger positions that yield form closure under the guidance of a heuristic. The local search starts from a random selection of an initial grasp and searches for a form-closure grasp in the direction that reduces the distance between the convex hull of the primitive wrenches and the origin of the wrench space. The key idea is to move the convex hull gradually closer to the origin of the wrench space through an iterative process, as illustrated in Fig. 6.
Fig. 6 The local heuristic search process
We consider each of the given points in the set Ω to be connected with four neighboring points on the surface of the object; these connections define the four possible motions of a finger at an iteration of the search. To simplify the problem, we assume that at each iteration only one finger can be moved to a neighboring position. Consequently, for a frictionless grasp with 7 contacts, there are in total 28 candidate grasps to choose from at every iteration. As observed in the qualitative test, it is desirable to move the convex hull H(W) towards the origin O of the wrench space R⁶ until the origin O lies completely inside the convex hull. Therefore, we take the distance ‖QO‖ (or ‖PO‖) as the heuristic and select the candidate grasp with the minimum distance. The local search algorithm is described in detail as follows (see the sketch after this list):

1) k = 1; arbitrarily choose an initial grasp G_k = {r_i}, i = 1, 2, …, 7, where all r_i ∈ Ω. Calculate the primitive contact wrenches w_ik.
2) Check whether G_k is form-closure using the qualitative test algorithm; if so, return G_k.
3) For each grasping position r_i, locate the four adjacent positions ζ_i^l ∈ Ω, l = 1, 2, 3, 4; generate all 28 combinations of candidate grasps CG_i^l.
4) For each candidate grasp CG_i^l, calculate the corresponding ‖QO(CG_i^l)‖.
5) Find the candidate grasp CG* with the minimum value of ‖QO‖. If ‖QO(CG*)‖ is not strictly smaller than ‖QO_k‖, report that a local minimum is encountered and return G_k. Otherwise, set k = k + 1; update G_k = CG*, w_ik(CG*), and ‖QO_k‖ = ‖QO(CG*)‖; go back to 2).

Note that the local search terminates in two situations: when a form-closure grasp is obtained in step 2), or when a local minimum is encountered in step 5). In case a local minimum is encountered, the algorithm needs to select another initial grasp and start another round of the local search. In the next subsection, we propose a recursive problem decomposition strategy that divides the problem into sub-problems with fewer points, so as to overcome the local minimum problem and at the same time improve computational efficiency.
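The loop of steps 1)-5), in the same illustrative Python style as the earlier sketches; the surface connectivity `neighbors`, the heuristic `qo_dist` (computing ‖QO‖ via the ray-shooting machinery above), and the test `is_closure` are assumed to be supplied by the caller:

def local_search(G0, neighbors, qo_dist, is_closure):
    G = list(G0)                              # step 1: initial grasp
    d = qo_dist(G)
    while True:
        if is_closure(G):                     # step 2: qualitative test
            return G, True
        cands = [G[:i] + [q] + G[i + 1:]      # step 3: 7 x 4 = 28 moves
                 for i in range(len(G)) for q in neighbors(G[i])]
        best = min(cands, key=qo_dist)        # steps 4-5: pick min ||QO||
        if qo_dist(best) >= d:                # no strict improvement ->
            return G, False                   # local minimum encountered
        G, d = best, qo_dist(best)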
Fig. 7 The separating hyperplane used in the problem decomposition
4.3 Recursive Problem Decomposition
To cope with the local minimum problem, a recursive problem decomposition strategy is employed to divide the problem on the original point set Ω into sub-problems on subsets. The division is based on an important observation: to achieve a form-closure grasp, the primitive contact wrenches corresponding to the seven contact points must not all lie on one side of any hyperplane passing through O. Given this observation, we can drastically reduce the number of eligible candidate grasps by discarding those grasps whose wrenches all lie on one side of a specific hyperplane, called the separating hyperplane. Recall that in the qualitative test the hyperplane detected is the facet E of H(W) intersected by the ray PO. Here, we select as the separating hyperplane a hyperplane Y parallel to the facet E obtained in the local-minimum iteration (Fig. 7):
e₁x₁ + e₂x₂ + ⋯ + e₆x₆ = 0    (2.17)
The hyperplane divides the wrench space into two half-spaces, denoted Y⁺ and Y⁻, respectively: Y⁺ is the half-space e₁x₁ + e₂x₂ + ⋯ + e₆x₆ ≥ 0, and Y⁻ is the half-space e₁x₁ + e₂x₂ + ⋯ + e₆x₆ < 0. As shown in Fig. 7, the separating hyperplane Y divides the original point set Ω into two subsets:

Ω(Y⁻) = {r_j ∈ Ω | w_j ∈ Y⁻}    (2.18)

and

Ω(Y⁺) = {r_j ∈ Ω | w_j ∈ Y⁺}    (2.19)
Since in a form-closure grasp the seven points cannot all belong to subset Ω(Y⁻) or all to subset Ω(Y⁺), the points of a form-closure grasp must be selected from both of these subsets. This gives rise to a division of the problem on the original set into problems on the subsets, based on the existence conditions of form-closure grasps. Six sub-problems need to be considered, namely grasps consisting of:

• one grasp point from subset Ω(Y⁻) and six grasp points from subset Ω(Y⁺),
• two grasp points from subset Ω(Y⁻) and five grasp points from subset Ω(Y⁺),
• three grasp points from subset Ω(Y⁻) and four grasp points from subset Ω(Y⁺),
• four grasp points from subset Ω(Y⁻) and three grasp points from subset Ω(Y⁺),
• five grasp points from subset Ω(Y⁻) and two grasp points from subset Ω(Y⁺), and
• six grasp points from subset Ω(Y⁻) and one grasp point from subset Ω(Y⁺).

The sub-problems obtained are represented as child nodes of the search tree whose root represents the problem on the original set (Fig. 8). The pair (i, j) in a node denotes that i and j grasp points must be selected from subsets Ω(Y⁻) and Ω(Y⁺), respectively, in order to obtain a form-closure grasp in the corresponding subsets of points. From the six child nodes, we select one and carry out the local search for a form-closure grasp within its subset of points. When a new local minimum is encountered, the separating hyperplane is employed to further divide the subset into three or four subsets; based on the existence condition, a new set of sub-problems is generated and represented as child nodes of the selected node. This process is performed recursively until a form-closure grasp is found or all the nodes have been explored. Note that no child is generated for a node if none of the combinations of the subsets divided by the separating hyperplane satisfies the existence condition of form-closure grasps.
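For concreteness, the frictionless decomposition step might be sketched as follows; these are hypothetical helpers of this edition, with each point's single wrench deciding its side of the hyperplane of Eq. (2.17):

def decompose(points, wrenches, e, n_fingers=7):
    # Split Omega by the side of the separating hyperplane e^T w = 0
    # on which each point's wrench falls, then enumerate the (i, j) splits.
    neg = [p for p, w in zip(points, wrenches) if w @ e < 0]    # Omega(Y-)
    pos = [p for p, w in zip(points, wrenches) if w @ e >= 0]   # Omega(Y+)
    children = [(i, n_fingers - i) for i in range(1, n_fingers)]
    return neg, pos, children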
To determine which child should be traversed first, we consider that a node whose eligible contact points are more evenly distributed over the subsets has a higher chance of containing form-closure grasps. Therefore, we define a heuristic function as follows:
Fig. 8 The search tree generated (the root "Original Problem" with child nodes (1,6), (2,5), (3,4), (4,3), (5,2), (6,1))

f(⟨a₁ a₂ ⋯ a_s⟩) = Σ_{i=1}^{s} (7/s − a_i)²    (2.20)

where s is the number of subsets obtained in the decomposition and a_i denotes the number of fingers (contact points) in subset i of the points. For example, for the left-most node in Fig. 8, a₁ and a₂ are 1 and 6, respectively. A node with a smaller f value is traversed before one with a larger f value.
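Eq. (2.20) in code form, with a small worked example (the function name is illustrative):

def node_score(a, n_fingers=7):
    # Eq. (2.20): spread of the finger counts a_i over the s subsets.
    s = len(a)
    return sum((n_fingers / s - ai) ** 2 for ai in a)

# node_score((1, 6)) = 12.5 > node_score((3, 4)) = 0.5,
# so the (3,4) child is traversed before the (1,6) child.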
4.4 Extension to Frictional Grasps
The difference between the frictional and frictionless cases is that a contact point gives rise to m primitive contact wrenches, so the primitive contact wrenches corresponding to a grasp point may lie on both sides of the separating hyperplane. This requires us to divide the original point set Ω into three subsets:

Ω(Y⁻) = {r_i ∈ Ω | w_ij ∈ Y⁻, for all j = 1, 2, …, m}    (2.21)

Ω(Y⁺) = {r_i ∈ Ω | w_ij ∈ Y⁺, for all j = 1, 2, …, m}    (2.22)

and

Ω(Y⁰) = {r_i ∈ Ω | r_i ∉ Ω(Y⁻), r_i ∉ Ω(Y⁺)}    (2.23)

where w_ij denotes the primitive contact wrenches. Ω(Y⁻) is the subset of contact points whose m primitive contact wrenches lie entirely in the half-space Y⁻; Ω(Y⁺) is the subset of contact points whose m primitive contact wrenches lie entirely in the half-space Y⁺; and Ω(Y⁰) is the subset of contact points whose m primitive contact wrenches lie on both sides of the separating hyperplane. Since in a form-closure grasp the n grasp points cannot all belong to subset Ω(Y⁻) or all to subset Ω(Y⁺), the problem on the original set again divides into problems on the subsets, based on the existence conditions of form-closure grasps. Grasp points of form-closure grasps must be selected from the subsets as stated in one of the following combinations:

• Ω(Y⁻), Ω(Y⁺) and Ω(Y⁰),
• Ω(Y⁻) and Ω(Y⁺) only,
• Ω(Y⁻) and Ω(Y⁰) only,
• Ω(Y⁺) and Ω(Y⁰) only, and
• Ω(Y⁰) only.

To illustrate the idea, consider a 3-finger frictional grasp. Eight sub-problems need to be considered, namely grasps consisting of:

• one grasp point from each of the subsets Ω(Y⁻), Ω(Y⁺) and Ω(Y⁰),
• two grasp points from subset Ω(Y⁻) and one from subset Ω(Y⁺),
• one grasp point from subset Ω(Y⁻) and two from subset Ω(Y⁺),
• two grasp points from subset Ω(Y⁻) and one from subset Ω(Y⁰),
• one grasp point from subset Ω(Y⁻) and two from subset Ω(Y⁰),
• two grasp points from subset Ω(Y⁺) and one from subset Ω(Y⁰),
• one grasp point from subset Ω(Y⁺) and two from subset Ω(Y⁰), and
• all three grasp points from subset Ω(Y⁰).
3-D Grasp Analysis and Synthesis Using the Ray-Shooting Technique
97 97
node denotes that i, j and k grasp points must be selected from subsets Ω(Y − ) , Ω(Y + ) and Ω(Y 0 ) , respectively. From the eight child nodes, we select one by a similar heuristics and carry out the local search of a form-closure grasp within the subset of points. When a new local minimum is encountered, the separating hyperplane is employed to further divide the subset into subsets. Based on the existence condition, a new set of subproblems are generated and represented as child nodes of the node selected. This process is performed recursively until a form-closure grasp is found or all the nodes have been explored. It should be noted that no child would be generated for a node if none of the combination of the subsets divided by the separating hyperplane satisfies the existence condition of form-closure grasps. 4.5 Completeness and Complexity
Although the algorithm employs a heuristics to guide the local search and selection of the sub-problems, it always finds a form-closure grasp or reports no solution after all the nodes in the search tree have been visited. This can be explained in the following. The algorithm divides using the separating hyperplane the problem into problems in subsets of the points when the distance between the convex hull and the origin of the wrench space reaches a local minimum. The division makes the numbers of points in the subsets smaller than those in the original point sets. The search reaches a leave node when the division results in zero subset or all the points lie on one side of the separating hyperplane. In both cases, it is guaranteed that no form-closure grasp exists in the subsets. The algorithm will always terminate with reporting no solution after all the leave nodes are visited. Therefore, the proposed algorithm is complete as it always finds a solution if there is and otherwise reports no solution. Then, consider the computational complexity of the algorithm. The complexity depends on the number of nodes of the search tree. Proposition 1: Denote by n the number of fingers/contact points. In frictionless cases, the number of child nodes of each node is not more than 2 n − 2 ; In frictional case, the number of each node is not more than 3n-2. This can be simply proven. The separating hyperplane divides the subset for each finger into at most 2 subsets in frictionless case. According to the existence conditions, there are at most 2 n − 2 sub-problems to be considered. Therefore, there are not more than 2 n − 2 children for each node in the search tree. In frictional case, the separating hyperplane divides a subset into at most 3 subsets, so there are not more than 3n − 2 children for each node. Since the search tree generates new nodes whenever a local minimum is detected, the computational efficiency is strongly dependent of the number of local minimum points. Denote by K the total number of the local minima in the configuration space of grasps. Note that the number K is determined by the surface geometry of the object. Once the object is specified, the number K is a constant. For
For simplicity, we assume that the local minima are uniformly distributed over the configuration space. The average number of attraction points for each local minimum is $N^n/K$, where N is the total number of discrete geometric points on the object surface. The corresponding number of points in the subset of each finger is $N/\sqrt[n]{K}$. In other words, if there are fewer than $N/\sqrt[n]{K}$ points in the subset of each finger, we can assume that there is only one minimum point (the global minimum) in the subset. When ||QO|| is used as the heuristic, the local search process will definitely find a solution if form-closure grasps exist in the subsets, because the distance converges to its minimum value, which is negative. Therefore,

Proposition 2: Assume that the K local minimum points are uniformly distributed in the configuration space. If there is a form-closure grasp in the given discrete point set, the number of nodes generated in the search tree is $O\left(\frac{K \ln K}{n}\right)$ when the search tree is expanded with the breadth-first strategy.

Proof: For simplicity, assume that at each decomposition the separating hyperplane divides the points in each subset equally, i.e., the two new subsets obtained after a division contain the same numbers of points. In the frictionless case, the level l of the search tree at which the subsets contain fewer points than the average number $N/\sqrt[n]{K}$ of attraction points of each local minimum is obtained from

$$\frac{N}{2^{l}} = \frac{N}{\sqrt[n]{K}} \;\;\Rightarrow\;\; l = \frac{\ln K}{n \ln 2} \qquad (2.24)$$

If breadth-first search is adopted, the total number L of nodes generated in the search tree satisfies

$$L \le \frac{\ln K}{n \ln 2}\,(2^{n}-2)^{l} \le \frac{\ln K}{n \ln 2}\left(K - \sqrt[n]{K}\right) = O\left(\frac{K \ln K}{n}\right) \qquad (2.25)$$
A similar result can be obtained for frictional grasps. It should be noted that the number of nodes is independent of the number of points representing the object when a solution exists. The complexity of the tree is mainly determined by the surface geometry of the object and the number of fingers. The total computational time is equal to the product of the number of nodes and the cost of calling the local search process. Note that the local search algorithm is efficient because it is essentially a gradient-based algorithm, and the ray-shooting based qualitative test algorithm takes time linear in the number of primitive contact wrenches.

The problem is more complicated when no form-closure grasp exists in the given problem. Theoretically, if the corresponding grasp is not form-closure at the global minimum point in the subsets, no further investigation is necessary. However, since no method is available to check whether the minimum reached by the local search process is the global one in the subsets, it is necessary to further divide the subsets. From the nature of the strategy adopted in the algorithm, it is reasonable to consider that most of the points lie on one side of the separating hyperplane obtained at the global minimum. This implies that the algorithm can detect the non-existence of a solution in the subsets after a few further decompositions. A weaker result can therefore be stated: the number of nodes generated in the search tree when there is no solution is similar to that when a solution exists. When the distance ||PO|| is used as the heuristic, the complexity is similar; however, a rigorous proof is difficult and remains to be established. Although the discussion above is based on assumptions, this algorithm works more efficiently than the exhaustive search algorithm, whose time is $O(N^n)$. Compared to other heuristic approaches, the algorithm is complete.

4.6 Implementation
We have implemented the proposed algorithm in MATLAB. Two numerical examples demonstrate the usefulness and efficiency of the algorithm. The first example concerns the computation of a 7-finger frictionless grasp, and the second addresses the computation of frictional grasps. In both examples, we used the model of a turbine airfoil (Fig. 10) with 1546 predetermined candidate locations and surface normals. Turbine airfoils are good examples of 3-D workpieces with complex geometry. The geometric shape of an airfoil is primarily defined by its aerodynamics; however, its geometric representation in a CAD system is usually approximated by parametric surfaces such as B-splines, and only a dense set of surface points is defined exactly, as calculated in the aerodynamic analysis. In order to minimize the effects of the geometric approximation, the airfoil is required to be fixtured or grasped at some of these precise surface locations during its manufacture and inspection. The point-set constraint is therefore imposed by practical conditions related to functional and/or manufacturing requirements.

In the first example, seven frictionless fingers are to grasp the airfoil. The algorithm takes 68 iterations in total to obtain a form-closure grasp. The initial grasp, selected randomly and shown in Fig. 11(a), is non-form-closure. After 62 iterations, the algorithm encounters a local minimum of the performance measure ||PO||. A separating hyperplane is defined to divide the candidate contact points into two subsets, and the algorithm performs the local search process in the state <3 4>. A form-closure final grasp is then found after 6 more iterations (Fig. 11(b)). Fig. 12 plots the distance difference ||PO|| - ||PQ|| and the distance ||PO|| against the iteration number.

In the second example, we consider the problem of finding a frictional grasp on the airfoil. Assume that the number of fingers is 4. The friction cone at each contact point is linearized by a polyhedral convex cone with 20 segments (i.e., m = 20). The algorithm is required to obtain a 4-finger form-closure grasp on the airfoil model in the presence of friction. A friction coefficient µ = 0.2 exists between the
object and each finger. The initial grasp, selected randomly, is non-form-closure (Fig. 13(a)). The algorithm performs the local search, obtaining trajectories of finger motions in the direction that decreases ||PO||. In the 62nd iteration, the distance between P and O reaches a local minimum value (Fig. 14). The search tree is then generated and the child (1,1,2) is adopted. A new grasp is then selected from points in the subsets corresponding to node (1,1,2), and another round of local search is executed. Finally, a form-closure grasp is obtained at the 71st iteration (Fig. 13(b)). These results confirm the efficiency of the proposed algorithm.
Fig. 10 An airfoil approximated by 1546 discrete points
Fig. 11 Frictionless grasp: the initial grasp (a) and the final grasp (b)
Fig. 12 Frictionless grasp: plots of the distances ||PO||-||PQ|| and ||PO||
Fig. 13 Frictional grasp: The initial grasp (a) and the final grasp (b)
5 Automatic Fixture Layout Design

A problem closely related to multi-fingered grasping is automatic fixture layout design, which is to find a set of fixturing points (locators and clamps) that firmly holds a workpiece. A firm hold implies that the fixturing points must form a
form-closure. For the sake of safety during the fabrication process, all contacts in fixture layout design are conservatively considered frictionless, and thus the problem is similar to the synthesis problem of frictionless form-closure grasps. Since seven frictionless contact points are sufficient and necessary to form a form-closure, we suppose that the number of fixturing points to be found is 7. This section shows that the ray-shooting formulation can also be applied to automatic fixture layout design on polyhedral workpieces. The problem addressed here is defined as follows:

Problem 3: Given a geometric model of a polyhedral workpiece with well-defined surface normal vectors and vertex information, (1) find an eligible set of fixturing surfaces on which seven fixturing locations satisfying the form-closure property exist; (2) find seven fixturing points on the eligible set of surfaces which ensure the total constraint and an accurate localization of the workpiece.
Fig. 14 Frictional Grasps: Plots of the distances ||PO|| and ||QO||
5.1 Eligibility Set of Fixturing Surfaces
Fixturing surfaces are the surfaces of a workpiece used to locate and clamp it; the functional fixturing elements, including locators and clamps, are in contact with these surfaces. Besides some trivial requirements, such as avoiding machining surfaces and requiring surfaces with accessible normal directions, the primary concern when selecting the fixturing surfaces is whether possible fixturing points yielding form-closure exist on them. A set of fixturing surfaces is said to be eligible if there exist fixturing points on them yielding form-closure.
On a set of eligible surfaces, it is always possible to find seven fixturing points ensuring form-closure. A simple condition is presented here to check the eligibility of a set of surfaces. A polyhedral workpiece is comprised of convex as well as concave faces; note that a concave surface can always be decomposed into convex ones. The position vector $r_i$ of a contact point on a convex surface can be represented as a convex combination of the vertices of the surface. Denote by $v_{ij}$ the coordinates of vertex j on the i-th surface and by $c_{ij}$ the corresponding multiplier. Then the position $r_i$ of the i-th fixturing point on the surface is a convex combination of the $v_{ij}$, i.e.,

$$r_i = \sum_{j=1}^{t_i} c_{ij}\, v_{ij} \qquad (2.26)$$

where $t_i$ is the number of vertices of the i-th convex surface. The positive multipliers $c_{ij}$ satisfy

$$\sum_{j=1}^{t_i} c_{ij} = 1, \qquad c_{ij} > 0 \qquad (2.27)$$
Note that the $c_{ij}$ are chosen to be strictly positive to ensure that the calculated fixturing points lie inside the surfaces and not at the vertices or on the edges. The $c_{ij}$ are the variables of interest, as they define the location of each fixturing contact. The wrench applied to the object by the i-th contact can then be rewritten as

$$w_i = \begin{pmatrix} n_i \\ r_i \times n_i \end{pmatrix} = \begin{pmatrix} \sum_{j=1}^{t_i} c_{ij}\, n_i \\ \sum_{j=1}^{t_i} c_{ij}\, v_{ij} \times n_i \end{pmatrix} = \sum_{j=1}^{t_i} c_{ij} \begin{pmatrix} n_i \\ v_{ij} \times n_i \end{pmatrix} \qquad (2.28)$$

Let

$$b_{ij} \triangleq \begin{pmatrix} n_i \\ v_{ij} \times n_i \end{pmatrix} \qquad (2.29)$$

which is called the vertex contact wrench here. Then introduce a matrix B:

$$B = \left( b_{11}, b_{12}, \ldots, b_{1t_1}, \ldots, b_{k1}, b_{k2}, \ldots, b_{kt_k} \right) \qquad (2.30)$$
where k is the number of convex surfaces of the workpiece.

Proposition 3: Assume that k convex surfaces of a workpiece are given. The k surfaces are eligible if and only if the convex hull of the corresponding vertex contact wrenches $b_{ij}$ strictly contains the origin of $R^6$.

Proof: The existence of form-closure fixturing points on the surfaces implies the existence of strictly positive $\alpha_i$ such that
$$\sum_{i=1}^{k} \alpha_i w_i = 0, \qquad \sum_{i=1}^{k} \alpha_i = 1, \qquad \alpha_i > 0 \;\;\forall i \qquad (2.31)$$
Substituting eq. (2.28) into eq. (2.31) results in
$$\sum_{i=1}^{k} \alpha_i \sum_{j=1}^{t_i} c_{ij}\, b_{ij} = 0 \qquad (2.32)$$
Let $\beta_{ij} = \alpha_i c_{ij}$; thus
$$\sum_{i=1}^{k} \sum_{j=1}^{t_i} \beta_{ij}\, b_{ij} = 0 \qquad (2.33)$$
Note that

$$\sum_{i=1}^{k} \sum_{j=1}^{t_i} \beta_{ij} = 1, \qquad \beta_{ij} > 0 \qquad (2.34)$$
This means that the convex hull of the points $b_{ij}$ strictly contains the origin of $R^6$.

From this proposition, by testing whether the convex hull of the points in B contains the origin, we can readily check the eligibility of a set of surfaces. For convenience, in the remainder of this section we use $b_i$ with a single subscript i, instead of $b_{ij}$, to denote the i-th column vector of the wrench matrix B.

To find a set of eligible surfaces, we can apply the grasp synthesis algorithm presented in the previous section; the problem is similar to that of finding a frictional form-closure grasp. Denote the set of convex surfaces of the object by Ω. First, we randomly select 7 surfaces and check whether the convex hull of their vertex wrenches contains the origin of the wrench space. If the convex hull strictly contains the origin, a set of eligible surfaces has been found; otherwise, we apply the local search procedure, reducing the distance between the convex hull and the origin, until a set of eligible surfaces is found or a local minimum is encountered. When a local minimum is encountered, the separating hyperplane is used to divide the surfaces in the set Ω into subsets, and the sub-problems in the subsets are represented as child nodes of the root of the search tree. Note that the vertex wrenches of a surface may spread over both sides of the separating hyperplane, so it is necessary to divide the surface set Ω into three subsets, as in the frictional grasp case. The same procedure is applied to the child nodes recursively until a set of eligible surfaces is detected or all the nodes of the search tree have been processed.
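As a concrete illustration of this eligibility test, the sketch below assembles the vertex contact wrenches of eqs. (2.29)-(2.30) and decides Proposition 3 with a small linear program: the origin lies strictly inside the convex hull of the columns of B exactly when strictly positive multipliers β with Bβ = 0 and Σβ = 1 exist. This is a minimal sketch of ours using SciPy, not the authors' implementation; the function names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def vertex_contact_wrenches(surfaces):
    """Columns b_ij = (n_i, v_ij x n_i) of the matrix B in eq. (2.30).
    surfaces: list of (n, V) pairs, n an inward unit normal (3,) and
    V a (t_i, 3) array of the vertices of one convex surface."""
    cols = [np.hstack([n, np.cross(v, n)]) for n, V in surfaces for v in V]
    return np.array(cols).T                       # shape (6, N)

def surfaces_eligible(B, tol=1e-9):
    """Proposition 3: maximize t subject to B@beta = 0, sum(beta) = 1 and
    beta_l >= t; the surfaces are eligible iff the optimum t is positive."""
    N = B.shape[1]
    c = np.zeros(N + 1)
    c[-1] = -1.0                                  # linprog minimizes, so -t
    A_eq = np.zeros((7, N + 1))
    A_eq[:6, :N] = B                              # B @ beta = 0
    A_eq[6, :N] = 1.0                             # sum(beta) = 1
    b_eq = np.zeros(7)
    b_eq[6] = 1.0
    A_ub = np.hstack([-np.eye(N), np.ones((N, 1))])   # t - beta_l <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(N), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * N + [(None, 1)])
    return bool(res.success and -res.fun > tol)
```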
5.2 Optimal Fixturing Points

The next step is to find fixturing points on the set of eligible surfaces. Usually, automated fixturing point determination needs to perform an exhaustive search to
reach an optimal design solution. We present here a method to calculate the optimal fixturing points under a performance index. A given system of wrenches achieves form-closure when the following equations are satisfied:
$$\sum_{i=1}^{7} w_i = \sum_{i=1}^{7} \alpha_i \sum_{j=1}^{t_i} c_{ij}\, b_{ij} = 0 \qquad (2.35)$$
using the following variables:

$$\beta_l = \alpha_i c_{ij}, \qquad i = 1,2,\ldots,7, \quad j = 1,2,\ldots,t_i, \quad l = j + (i-1)\,t_i \qquad (2.36)$$
The form-closure constraints can be formulated as:

$$\sum_{l=1}^{N} \beta_l b_l = 0, \qquad \sum_{l=1}^{N} \beta_l = 1, \qquad \beta \in R^N, \;\; \beta_l > 0 \qquad (2.37)$$
It should be noted that all the constraints are linear and involve $N = \sum_{i=1}^{7} t_i$ variables.
Besides the stability constraint in determining fixturing points, another important factor to consider is the positioning accuracy of the workpiece provided by the fixturing points [30]. The positioning errors of the fixturing points result in a localization error of the workpiece. To ensure a minimum influence of the positioning errors of the fixturing points on the positioning accuracy of the workpiece, the fixturing points should be located as far away as possible from the center of mass of the workpiece. We therefore consider maximizing the Euclidean distance from each fixturing point to the center of mass of the workpiece, i.e.,
$$\text{Maximize:} \quad \sum_{i=1}^{7} \alpha_i^2\, r_i^T r_i \qquad (2.38)$$
The above equation can be rewritten as:

$$\sum_{i=1}^{7} \alpha_i^2 \sum_{j=A}^{B} \sum_{k=C}^{D} c_j c_k\, v_j^T v_k = \sum_{i=1}^{7} \sum_{j=A}^{B} \sum_{k=C}^{D} \beta_j \beta_k\, v_j^T v_k \qquad (2.39)$$
where

$$A:\; j = 1 + \sum_{z=1}^{i-1} t_z, \qquad B:\; \sum_{z=1}^{i} t_z, \qquad C:\; k = 1 + \sum_{z=1}^{i-1} t_z, \qquad D:\; \sum_{z=1}^{i} t_z$$
Thus we obtain a Quadratic Programming (QP) problem subject to the constraints of eq. (2.37). Maximizing the objective function has the effect of maximizing the $r_i$ and forcing the $\alpha_i$ to tend toward equal values for all i. The former means minimization
of workpiece locating errors, and the latter implies an even distribution of the fixturing forces. Finally, the variables of interest, the multipliers $c_{ij}$, can be obtained as follows:
$$c_{ij} = \frac{\beta_l}{\sum_{k=C}^{D} \beta_k} \qquad (2.40)$$
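To make the optimization step concrete, the following sketch (our own illustration, not the authors' MATLAB code) poses the problem of eqs. (2.37)-(2.39) with a general-purpose solver. Since a convex quadratic is maximized over a polytope, a local solver such as SLSQP returns only a locally optimal β, from which the multipliers $c_{ij}$ are recovered via eq. (2.40); all names here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_fixture_multipliers(B, V, blocks, eps=1e-6):
    """Locally solve the QP of eqs. (2.37)-(2.39).
    B      : (6, N) vertex contact wrenches (columns b_l),
    V      : (N, 3) stacked vertex coordinates v_l,
    blocks : per-surface index arrays giving the ranges A..B (= C..D)."""
    N = V.shape[0]
    G = V @ V.T                                   # Gram matrix v_j^T v_k

    def neg_objective(beta):                      # maximize -> minimize -f
        return -sum(beta[idx] @ G[np.ix_(idx, idx)] @ beta[idx]
                    for idx in blocks)

    cons = [{'type': 'eq', 'fun': lambda b: B @ b},          # eq. (2.37)
            {'type': 'eq', 'fun': lambda b: b.sum() - 1.0}]
    res = minimize(neg_objective, np.full(N, 1.0 / N), method='SLSQP',
                   bounds=[(eps, 1.0)] * N, constraints=cons)
    return res.x                  # beta; recover c_ij through eq. (2.40)
```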
It should be noted that the objective function employed here may sometimes produce fixtures with poor sensitivities for certain workpieces, but in most cases it produces good fixtures that reduce the localization errors of workpieces. Fig. 15 shows two views of the fixturing points found by this algorithm on a polyhedral part. The part has eight surfaces, two of which are concave; after decomposing the two concave surfaces, there are fourteen convex surfaces. The vertices and normal vector of each surface are the input parameters. Note that surfaces serving other purposes, e.g., machining features or supporting, should be eliminated from the input set; in this case, the large top face shown in Fig. 15(a) is assumed to be the surface to be machined, so it is removed from the input surface set. The algorithm first randomly picks three surfaces as the initial set, and the final eligible surface set, comprising seven surfaces, is obtained in 0.33 seconds of CPU time on a SPARC Station 4 using MATLAB. Next, quadratic programming is employed to calculate proper positions for the fixturing points on the final surface set; it takes 0.44 seconds of CPU time to calculate the positions of the seven fixturing points. This example confirms the effectiveness and efficiency of the algorithm.
Fig. 15 Fixture locations found on a polyhedral part
6 Conclusion

In this chapter, we presented a ray-shooting based formulation for the stability analysis and synthesis of 3-D form-closure grasps. First, it was shown that the qualitative test of a 3-D form-closure grasp can be transformed into a ray-shooting problem. Then, based on this transformation, we proposed an effective heuristic for guiding the local search for form-closure grasps. A complete and efficient algorithm was developed for 3-D form-closure grasp synthesis in the discrete domain by combining the local search with a recursive problem decomposition strategy. Furthermore, we extended the ray-shooting based formulation to automatic fixture layout design on polyhedral workpieces. Examples have been given to demonstrate the effectiveness and efficiency of the algorithms developed on the basis of this new formulation. It is important to note that this formulation provides a good example of linking techniques in Computational Geometry with problems in mechanics. The work presented is based on point contacts between the object and the robot fingers, so the results may not apply to soft fingers or other kinds of finger models. One important piece of future work is the generalization of the methods to various finger models; other future work includes the optimization of form-closure grasps and planning for dexterous manipulation tasks.
References

[1] A. Bicchi, "On the closure properties of robotic grasping," International Journal of Robotics Research, vol. 14, no. 4, pp. 319-334, 1995.
[2] M. Buss, H. Hashimoto, and J. Moore, "Dexterous hand grasping force optimization," IEEE Transactions on Robotics and Automation, vol. 12, no. 3, pp. 406-418, 1996.
[3] M. R. Cutkosky, "On grasp choice, grasp model, and the design of hands for manufacturing task," IEEE Trans. on Robotics and Automation, vol. 5, no. 3, pp. 269-279, 1989.
[4] D. Ding, Y. H. Liu, Y. T. Shen, and G. L. Xiang, "An efficient algorithm for computing a 3D form-closure grasp," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robotics and Systems, 2000, vol. 2, pp. 1223-1228.
[5] C. Ferrari and J. F. Canny, "Planning optimal grasps," in Proc. IEEE Int. Conf. on Robotics and Automation, pp. 2290-2295, 1992.
[6] J. R. Kerr and B. Roth, "Analysis of multi-fingered hands," Journal of Robotics Research, vol. 4, no. 4, 1986.
[7] L. Han, J. C. Trinkle, and Z. Li, "Grasp analysis as linear matrix inequality problems," IEEE Transactions on Robotics and Automation, vol. 16, pp. 663-674, Dec. 2000.
[8] M. L. Lam, D. Ding, and Y. H. Liu, "Grasp planning with kinematic constraints," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2001, vol. 2, pp. 943-948.
[9] Z. Li, P. Hsu and S. Sastry, "Grasping and coordinated manipulation by a multifingered robot hand," Int. Journal of Robotics Research, pp. 33-50, 1989.
[10] Y. H. Liu, "Qualitative test and force optimization of 3-D frictional form-closure grasps using linear programming," IEEE Transactions on Robotics and Automation, vol. 15, pp. 163-173, Feb. 1999.
[11] Y. H. Liu, "Computing n-finger form-closure grasps on polygonal objects," International Journal of Robotics Research, vol. 18, no. 2, pp. 149-158, 2000.
[12] H. Maekawa, K. Komoriya, and K. Tanie, "Manipulation of an unknown object by multi-fingered hands with rolling contact using tactile feedback," in Proc. IEEE Int. Conf. on Intelligent Robots and Systems, pp. 1877-1882, 1992.
[13] R. Mattikalli, D. Baraff and P. Khosla, "Finding all stable orientations of assemblies with friction," IEEE Trans. on Robotics and Automation, vol. 12, no. 2, pp. 290-301, 1996.
[14] X. Markenscoff, L. Ni, and C. H. Papadimitriou, "The geometry of grasping," International Journal of Robotics Research, vol. 9, no. 1, pp. 61-74, 1990.
[15] B. Mishra, J. T. Schwartz, and M. Sharir, "On the existence and synthesis of multifinger positive grips," Algorithmica, Special Issue: Robotics, vol. 2, no. 4, pp. 541-558, 1987.
[16] D. J. Montana, "The condition for contact grasp stability," in Proc. IEEE Int. Conf. on Robotics and Automation, 1991, vol. 1, pp. 412-417.
[17] K. Mulmuley, Computational Geometry: an Introduction through Randomized Algorithms. Englewood Cliffs, NJ: Prentice Hall, 1994.
[18] R. M. Murray, Z. Li, and S. S. Sastry, A Mathematical Introduction to Robotic Manipulation. Orlando, FL: CRC, 1994.
[19] V.-D. Nguyen, "The synthesis of stable force-closure grasps," A. I. Laboratory, MIT, Cambridge, MA, Tech. Rep. No. 905, 1986.
[20] V.-D. Nguyen, "Constructing force-closure grasps," International Journal of Robotics Research, vol. 7, no. 3, pp. 3-16, 1988.
[21] J. Ponce and B. Faverjon, "On computing three-finger force-closure grasps of polygonal objects," IEEE Transactions on Robotics and Automation, vol. 11, pp. 868-881, Dec. 1995.
[22] J. Ponce, S. Sullivan, A. Sudsang, J.-D. Boissonnat and J.-P. Merlet, "On computing four-finger equilibrium and force-closure grasps of polyhedral objects," International Journal of Robotics Research, vol. 16, no. 1, pp. 11-35, 1997.
[23] F. Reuleaux, The Kinematics of Machinery. New York: Macmillan, 1876.
[24] R. C. Brost and K. Y. Goldberg, "A complete algorithm for designing planar fixtures using modular components," IEEE Trans. on Robotics and Automation, vol. 12, no. 1, pp. 31-46, 1996.
[25] J. K. Salisbury and B. Roth, "Kinematic and force analysis of articulated hands," ASME J. Mechanisms, Transmissions, and Automation in Design, vol. 105, pp. 33-41, 1982.
[26] J. C. Trinkle, "On the stability and instantaneous velocity of grasped frictionless objects," IEEE Transactions on Robotics and Automation, vol. 8, pp. 560-572, Oct. 1992.
[27] M. Umetsu and K. Oniki, "Compliant motion control of arm-hand system," in Proc. JSME Int. Conf. on Advanced Mechanics, pp. 429-432, 1993.
[28] A. S. Wallack and J. F. Canny, "Modular fixture design for generalized polyhedra," in Proc. IEEE Int. Conf. on Robotics and Automation, pp. 830-837, 1996.
[29] A. S. Wallack and J. F. Canny, "Planning for modular hybrid fixtures," in Proc. IEEE Int. Conf. on Robotics and Automation, pp. 520-527, 1994.
[30] M. Y. Wang, "An optimum design for 3-D fixtures synthesis in a point set domain," IEEE Transactions on Robotics and Automation, vol. 16, pp. 839-846, Dec. 2000.
[31] J. D. Wolter and J. C. Trinkle, "Automatic selection of fixture points for frictionless assemblies," in Proc. IEEE Int. Conf. on Robotics and Automation, pp. 528-534, May 1994.
[32] T. Yoshikawa and K. Nagai, "Evaluation and determination of grasping forces for multifingered hands," in Proc. IEEE Int. Conf. on Robotics and Automation, pp. 245-251, 1987.
[33] H. Zhang, K. Tanie, and H. Maekawa, "Dextrous manipulation planning by graph transformation," in Proc. IEEE Int. Conf. on Robotics and Automation, pp. 3055-3060, 1996.
Laser Visual Sensing and Process Control in Robotic Arc Welding of Titanium Alloys

H. Luo and X.Q. Chen

Singapore Institute of Manufacturing Technology, 71 Nanyang Drive, Singapore 638075
[email protected]
Abstract. Sensor errors arise when detecting the weld seam profile of titanium alloys with laser-based vision sensors, because the surface of such a material is highly reflective to laser light. In this study, the correct weld seam profile is established from the distorted raw data by signal processing. Furthermore, a fully automatic seam tracking and parameter adjusting system is developed. This system features training and saving of the template, auto-finding of the starting welding point, auto-calibration, detection of seam position and groove dimensions, tracking control and on-line welding parameter adjustment. The system is capable of tracking V-groove, fillet, lap and butt joints of titanium alloys with high accuracy, better than 0.4 mm.
1 Introduction

Titanium, the fourth most abundant metal in the earth's crust, features low density, high strength and excellent resistance to corrosion. Titanium alloys have a fascinating range of applications and are becoming increasingly important in a growing spectrum of industrial sectors. These applications play a key part in modern life, and there is huge potential for their expanded use in fields such as aerospace, automotive, construction and medicine. Obviously, high-quality welding of titanium alloys is one of the critical processing technologies that determine their applicability. In order to achieve good welding quality, seam tracking is needed to align the torch along the welding seam despite distortion or poor jig fixing. Various approaches for seam tracking during arc welding, such as through-the-arc sensing [5-7,10], ultrasonic sensing [1] and vision-based sensing [2-4,8,9,11], have been extensively explored. Welding parameters also need to be controlled and adjusted according to weld groove profiles. Among the various seam-tracking methods, the laser-based vision sensor is the most promising and most widely used technology due to its high speed, high accuracy and high robustness to the welding environment. However, laser-based visual sensors are very sensitive to the material surface, and titanium alloys are highly reflective
metals. Problems therefore arise in seam tracking and parameter control during arc welding of titanium alloys. In this study, two laser-based sensors are investigated to obtain the weld seam point and the weld groove profile, and a titanium alloy welding specification database is set up. The weld seam point is used to move the robot to position the torch along the seam, while the weld groove profile is used to extract welding parameters from the database to ensure welding quality.
2 System Design and Experimental Set-Up

The experimental set-up consisted of a 5-axis AC servo driven robot, a welding torch, a pulsed power source (Fronius TT2000) and a laser-based vision sensor: either a Meta 3D camera or a Keyence position-sensitive-detector (PSD) displacement sensor. The PC interface inside the power source enabled the PC to control and adjust the welding parameters. The sensor head was mounted in front of the welding torch, separated from it by a look-ahead distance (LAD). The host computer contained a frame grabber board or a data acquisition board and an 8-axis MEI motion controller card, and served as the image/data acquisition and robotic control system. Figure 1 is a block diagram showing the various components of the system and their relationships. Figure 2 shows the set-up of the 5-axis AC servo driven welding robot; it consists of three Cartesian axes (X, Y, Z) and two rotary axes to control the torch orientation.
Fig. 1. Block diagram of the robotic welding system
Fig. 2. Experimental Set-up
The CPU sends signals to the MEI card, instructing it to move the welding torch and the camera/sensor. The camera/sensor attached to the welding torch captures images/profiles of the weld groove, and sends them to the frame grabber/data acquisition card. Subsequently, the images/profiles are processed to detect the seam point and the width and length of the weld groove. The seam point is passed to the seam-tracking controller to align the torch in relation to the weld seam, while the weld groove dimensions are passed to the welding specification database to extract the welding parameters.
3 Sensing Algorithms for Seam Point and Weld Groove Dimensions

Since the surface of titanium alloys is very reflective to laser light, sensing errors arise when a laser-based vision system is used.

3.1 Laser Scanning Sensing

There are two PSD detectors (Y-direction and Z-direction) inside the Keyence PSD sensor. As the red laser scans 30~40 mm across the work-piece in 20 ms, the coordinate values (Y in points, Z in voltage) of 1000 points across the work-piece are acquired. The laser then returns to its starting point in 10 ms, which ends a measurement cycle. With the Keyence PSD laser displacement sensor, a precise weld groove profile was achieved when the titanium plates were sandblasted. The V-groove profile is shown in Figure 3.
Fig. 3. V-groove profile of sandblasted titanium plates
However, when this sensor was used on the very shiny surface of titanium plates, distorted signals were captured in the profile data. Figure 4 and Figure 5 show the V-groove profile without and with a gap between the titanium plates, respectively.
Fig. 4. V-groove profile without a gap
It was found that even though the sensed profile was distorted by the reflective surface, points A and B are correct. The distance between points A and C, or that between points B and D, is exactly the height of the weld groove, and point E is the center of the groove. Therefore, the weld groove profile can be constructed from the distorted data; Figure 6 shows the results. The seam point and groove dimensions were then available from the profile.

3.2 Structured Laser Vision System

The Meta 3D camera was used for this development work. It consists of two laser diodes and a CCD camera, and operates on the principle of active triangulation: a sheet of laser light is projected onto the work-piece and viewed by the camera at a known angle. The image resulting from the intersection of this light sheet with the
work-piece comprises two curved or straight-line stripe segments corresponding to the two pieces of metal to be joined. The camera is located immediately in front of the welding torch. Each stripe produces the weld profile image.
Fig. 5. V-groove profile with a gap
Fig. 6. Weld profile constructed from corrupted data
For the Meta sensor, since the surface of titanium alloys is very reflective to the laser light, two reflection lines were seen in the image shown in Figure 7.
Fig. 7. 768 by 572 pixel weld groove image with origin (0, 0) at top-left corner
A distinctive shape was observed inside the image. Because of its difference from the rest of the image, it was trained and saved as a template. Then, a feature-finding tool using a normalized correlation engine was used to locate the given feature within a new image. Once executed, it indicates whether it has successfully found its model by setting the "found" property to true, and the X and Y coordinates of the identified feature can be read in pixels. It should be noted that the X coordinate of the image represents the Z-axis (vertical axis) of the real world. Furthermore, four lines in the image can be extracted through pixel-intensity finding and curve-fitting techniques, and their intersection points are found by solving the four equations representing the four lines. The weld groove dimensions can therefore be obtained.
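The feature-finding tool referred to here is part of a commercial vision package, so only its behavior is described in the text. As a rough stand-in, a normalized-correlation search can be sketched with OpenCV as below; the function name and threshold are our own assumptions, not the package actually used in this work.

```python
import cv2

def find_feature(image, template, threshold=0.8):
    """Locate a trained template in a grayscale image by normalized
    correlation.  Returns (found, x, y) with the top-left pixel
    coordinates of the best match."""
    scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    return max_val >= threshold, max_loc[0], max_loc[1]
```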
4 Seam Tracking and Control through Structured-Laser Vision

The implementation of seam tracking and welding parameter control consists of the following procedures:
• Training and saving of the feature template;
• Auto-finding of the starting welding point;
• Auto-calibration;
• Detection of seam position and groove dimensions;
• Tracking control and parameter adjustment.
The detailed flow chart is shown in Figure 8. Some key steps in the seam tracking and parameter control process are explained in detail below.

4.1 Region of Interest (ROI) Auto Placement

After loading the trained feature, the feature-finding tool uses it to locate the feature within an image. If the feature is found, a result appears as a point indicating where the feature is located in the inspection image. Consequently, a training region of interest (ROI) and an inspection ROI are placed with their centers at that point. After placing the ROIs, the feature-finding tool re-trains the feature specified by the training ROI as the new feature for the subsequent processes, such as calibration and seam tracking. In these processes, only the inspection ROI is searched for the feature, significantly reducing image-processing time.

4.2 Auto Calibration and Start Point Finding

The feature-finding algorithm returns the position of the trained feature in the acquired image in pixel coordinates. Hence, it is necessary to find the actual size with respect to the real world. The purpose of calibration is to synchronize the movement of the 3-axis robot to the work piece; it is also used to correlate the pixel size of the image to real-world units in millimeters. In
Fig. 8. Flow chart of seam tracking and process control
order to achieve high reliability, it is always necessary to calibrate every job to obtain the calibration factor. The calibration factor (CF) is the number of pixels per millimeter of drift. The calibration factors of the Y-axis and Z-axis are obtained by first finding the pixel coordinates of the trained feature in the acquired image as the reference point (ref_pixel). The torch is then shifted to the left and right by 1 mm from its current position to capture the respective pixel coordinates in the acquired images (left_pixel and right_pixel). The calibration factor for the Y-axis is calculated by the following formula:

CF for Y-axis = abs[(left_pixel - ref_pixel) + (ref_pixel - right_pixel)]/2

The Z-axis calibration is similar, except that the torch is shifted up and down by 1 mm to capture the pixel coordinates (up_pixel and down_pixel). The calibration factor for the Z-axis is calculated by:

CF for Z-axis = abs[(up_pixel - ref_pixel) + (ref_pixel - down_pixel)]/2

As the torch moves along the X-axis, the first feature-finding point is where the sensor is at the starting point of the weld seam (X-direction), while the torch is still the look-ahead distance (LAD) behind. If the sensor is at the center of the weld seam, the Y-coordinate returned by the feature-finding tool should be 384, which is half of the image length in pixels. If the sensor is not at the center of the weld seam, the robot Y-axis is moved until the Y-coordinate reaches that value. Since the welding torch is aligned with the sensor, after moving the X-axis by the LAD the torch should be at the center of the weld groove. For the Z-axis, the robot can be moved up or down until it reaches the desired Z-position, where the X-coordinate should return the value equal to [desired arc length * CF(Z)] in pixels. After these automatic adjustments, the found point is the start of the weld seam (X-direction) and the center of the weld groove (Y-direction), at the desired arc length above the work piece; this is the very start point of the welding process. During the welding process, the robot moves in the X-direction at the welding speed, the arc length is kept constant, and the torch is kept at the center of the groove despite any work-piece distortion.
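In code, the two calibration formulae reduce to a single helper applied per axis; the following is a minimal sketch of ours with hypothetical names.

```python
def calibration_factor(ref_pixel, minus_pixel, plus_pixel):
    """Pixels per millimetre of drift on one axis.
    minus_pixel / plus_pixel: feature coordinates after shifting the
    torch by -1 mm and +1 mm from the reference pose."""
    return abs((minus_pixel - ref_pixel) + (ref_pixel - plus_pixel)) / 2.0

# Y-axis: cf_y = calibration_factor(ref_pixel, left_pixel, right_pixel)
# Z-axis: cf_z = calibration_factor(ref_pixel, up_pixel, down_pixel)
```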
4.3 Missing Point Algorithm

With the feature-finding tool, some seam points go undetected; these are missing points. In this study, a two-point linear interpolation method was first used to find the undetected seam points between the previous and the following detected seam points. This algorithm can handle any straight or tilted path, and can also compute the coordinates of several consecutive missing points. Figure 9 shows three consecutive missing points between two detected points. The Y-coordinate of each missing point can be computed by the following equation, irrespective of whether the seam path has a positive or negative gradient:

Mn = P1 + n*(P2 - P1)/(number of missing points + 1)

However, if P1 and P2 lie on a curved path, as shown in Figure 10, the computed missing point is M1* based on the linear interpolation, while the actual
seam point is M1. This tracking error is due to the fact that only two points are used in the linear interpolation algorithm.
Fig. 9. Three consecutive missing points
Fig. 10. Wrong computation of seam point (M1*)
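The two-point interpolation rule above is straightforward to implement; the sketch below (ours, with hypothetical names) also reproduces the three-missing-point case of Fig. 9.

```python
def interpolate_missing(p1, p2, num_missing):
    """Linear interpolation of undetected seam points between detected
    points p1 and p2: M_n = P1 + n*(P2 - P1)/(num_missing + 1).
    Valid for straight or tilted paths with any sign of gradient."""
    step = (p2 - p1) / (num_missing + 1)
    return [p1 + n * step for n in range(1, num_missing + 1)]

# Three missing points between Y = 0 mm and Y = 4 mm:
# interpolate_missing(0.0, 4.0, 3)  ->  [1.0, 2.0, 3.0]
```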
The occurrence of this error is very rare, as the interval between two points is very small. For instance, given an image acquisition and processing period of 0.5 s and a welding speed of 4 mm/s, the interval between two points is 2 mm. Over such a short interval, two-point linear interpolation works very well. In high-speed welding applications, more points are needed for the interpolation, and a curve-fitting method can be adopted to reduce this computation error.

4.4 Seam Path Conversion Algorithm

An essential part of this study is to acquire the seam path and convert it into the actual welding path. Once the welding process starts, the X-axis begins to move from the starting point to the ending position specified by the welding length. There is always a LAD between the laser beam and the welding torch, which causes difficulty in controlling the correction of the Y- and Z-axis movement. Hence, a seam path conversion algorithm is used to convert the seam path to the actual welding path, taking into account the correct start
welding position and synchronizing the movement of the welding torch in the Y- and Z-axes with the LAD. The main problem in this algorithm is associated with the LAD: as the tip of the torch lags behind the laser beam by the LAD, the seam points just captured by the sensor head cannot immediately be used to move the welding torch. Hence, a delay is required before moving the welding torch to the position computed from the current seam point. This delay is known as the look-ahead time, which can be calculated by the following equation:

Look-ahead time (s) = look-ahead distance (mm) / welding speed (mm/s)

From the look-ahead time, we can determine the number of look-ahead frames required to execute the control. It is much easier to use look-ahead frames, because all the computation is based on the frame rate (number of frames captured per second). Hence, the number of look-ahead frames is computed as follows:

Look-ahead frames = look-ahead time (s) * frame rate (frames/s)

The system tracks the number of frames captured upon commencing the welding process. Once the number of frames captured equals the number of look-ahead frames, the welding machine is switched on. Next, we need to compute the deviation in the Y- and Z-axes for the seam point found at the instant the image is acquired. Figure 11 shows the diagram for calculating the actual correction distance.
Fig. 11. Calculation diagram for actual correcting distance
Assume the number of look-ahead frames is 4. When the torch is at position 1, the sensor is looking at position 5. At this position, the system has already
found the seam point at position 5, which is shifted by a deviation Δ to the left of the current torch position. Before the sensor reaches position 5, it has already captured the seam points at locations 2, 3 and 4. Hence, the system shifts the torch to the left by Δ when it moves forward to position 2. At the same time, the sensor at position 6 is also shifted by Δ to the left. Therefore, the sensor only detects the shift of the seam path relative to its own already-shifted position, and the system would misinterpret this as no deviation at position 6, since the detected seam point is the same as the previous one. However, the torch should still be shifted by Δ when moving from position 5 to position 6, because when the torch was at location 2 the sensor had already been shifted by Δ. In order to eliminate this problem, any shift of the torch away from the reference position (horizontal line) in the Y-axis is accumulated, and this accumulation is added to the deviation of the seam point found. The actual deviation at a particular seam point can be computed by the following equation:

Shift_dev[k] (pixels) = result returned by feature find[k] (pixels) + Accumulative_Error[k-1] (pixels) - Shift_dev[k-1] (pixels)

The accumulative error is computed by the next equation:

Accumulative_Error[k] (pixels) = Accumulative_Error[k-1] (pixels) + Shift_dev[k] (pixels)
5 Results Extensive welding trials were carried out to study the overall performance of the seam tracking and parameter controller. The seam-tracking controller produced satisfactory results with good tracking accuracy of less than 0.4 mm. Figure 12 shows a zigzag V-groove path. The laser vision sensor tracked the actual seam accurately without prior knowledge of the weld preparation. A weld sample of straight line welding is shown in Figure 13. The laser vision sensor captured the weld seam, and extracted groove features. Subsequently, correct welding parameters were inferred from the process database accordingly. Desired welds can be formed even in the presence of groove variations.
121 121
Laser Visual Sensing and Process Control in Robotic Arc Welding
☎
✁
!
☎
!
!
✄
✁
!
✄
!
✰
✯
✮
★
✭
✤
✬
✫
★
✪
✦
✩ !
✤
✥
✤
✂
✁
!
✂
!
!
★
✧
✦
✥
✤
✣
✢
✜
✜
✛
✁
!
✚
✙
!
✆
✆
✝
✆
✆
✞✆
✞✝
✆✟
✟✝
✠✆
✠✝
✡✆
✡✝
✆
✝
☛✆
☛✝
✆☞
☞✝
✆✌
✌✝
✆
✝
✍
✍
✆
✆
✆
✝
✞✆
✝
✝
✎
✏
✑
✒
✝
✓
✔
✕ ✖
✗
✆
✆
✆
✆
✆
✘
Fig. 12. Zigzag V-groove seam profile
Fig. 13. Titanium weld sample
6 Conclusions

The following conclusions are drawn from this study:
1. The high reflectiveness of the surface of titanium alloys causes distortion of the range data acquired by laser-based sensors. The correct weld seam profile can be successfully recovered from the distorted raw sensor data.
2. A laser-based camera system has been successfully deployed in a fully automated welding seam tracking system, with a feature-finding technique to track the weld seam reliably. This system features training and saving of the template, auto-calibration, detection of seam position and groove dimensions, tracking control and on-line welding parameter adjustment. It is able to find the starting welding point automatically, and recovers missing seam points through two-point linear interpolation during the welding process.
3. The system is capable of tracking V-groove, fillet, lap and butt joints of titanium alloys with high accuracy, better than 0.4 mm.
References

1. Ajay Mahajan, Fernando Figueroa (1997) Intelligent seam tracking using ultrasonic sensors for robotic welding. Robotica, Vol. 15 (3): 275-281
2. Chen X Q, Huang S S (1993) An investigation into vision system for automatic seam tracking of narrow gap welding process. Chinese Journal of Mechanical Engineering, Vol. 29 (3): 8-12
3. Chen X Q, Lucas J (1989) Sensors for narrow gap welding. Proceedings of Welding Production Procedures Being Technology - Arc Welding 1989, DVS Berichte No. 127, pp 101-105
4. Chen X Q, Lucas J (1989) A fast vision system for control of narrow gap TIG welding. International Conference on Advances in Joining and Cutting Processes, Harrogate, UK, pp 8-1 to 8-9
5. Cook G E (1986) Computer-based analysis of arc welding signals for tracking and process control. IEEE Transactions on Industrial Electronics, Vol. 34: 1512-1518
6. Dilthey U, Stein L (1994) Through-the-arc-sensing: A multipurpose low-cost sensor for arc welding automation. Welding in the World, Vol. 34: 165-171
7. Dilthey U, Stein L, Oster M (1996) Through-the-arc-sensing: a universal and multipurpose sensor for arc welding automation. International Journal for the Joining of Materials, Vol. 8 (1): 6-12
8. Kakizaki Takao, Arakawa Ken-ichi, et al. (1998) Development of a high-speed visual sensing system for robotic seam tracking in automobile body sealing. NTT R&D, Vol. 47 (7): 813-818
9. Karsten Haug, Guenter Pritschow (1998) Robust laser-stripe sensor for automated weld-seam-tracking in the shipbuilding industry. IECON Proceedings of Industrial Electronics Conference, Vol. 2, IEEE Comp Soc, Los Alamitos, CA, USA, pp 1236-1241
10. Wang J Y, Mohanamurthy P H, Fong M K, Devanathan R, Chen X Q, Chan S P (2000) Development of a closed-loop through-the-arc sensing controller for seam tracking in gas metal arc welding. Gintic Technical Report AT/00/013/AMP
11. Yu Je-Yong, Na Suck-Joo (1998) Study on vision sensors for seam tracking of height-varying weldment. Part 2: Applications. Mechatronics, Vol. 8 (1): 21-36
Intelligent Technologies for Robotic Welding

S.B. Chen,¹ T. Qiu,¹ T. Lin,¹ L. Wu,² J.S. Tian,² W.X. Lv,² and Y. Zhang³

1. Institute of Welding Engineering, Shanghai Jiao Tong University, Shanghai 200030, PR China. Email: [email protected]
2. Welding Department, Harbin Institute of Technology
3. Dept. of Mechanical Engineering, Qinghua University
[email protected]
Abstract. This paper discusses intelligent technologies for robotic welding, including computer vision sensing, automatic programming of the weld path and technical parameters, seam guiding and tracking, and intelligent control of weld pool dynamics and quality. A welding robot system with such artificial intelligence functions can realize collision-free path planning for complex curved seams, detection of the welding surroundings by laser scanning, identification of the weld seam start and guiding of the weld torch, seam tracking, and real-time control of the pulsed GTAW pool dynamic process by computer vision sensing and intelligent technology. These separate intelligent units and subsystems were integrated into an intelligentized welding flexible manufacturing cell (IWFMC). Based on discrete event control theory and the Petri net modeling method, the related domination strategies were designed to supervise the IWFMC.
1 Introduction

Although robotic welding increases productivity, improves the welder's working conditions, ensures stable welding quality and realizes welding automation for small batches of products [34,36,21,9], the effective application of welding robots in practical production is still obstructed by the complexity and uncertainty of the welding process [1,4,14]. Recently, developing intelligentized technology for welding robots has become an attractive research field and an inevitable trend [7,6,35]. At present, welding robots serving in practical production are still of the teaching-and-playback type, and often cannot meet the quality and diversification requirements of welding production, because this type of robot has no adaptive ability to cope with changing circumstances and uncertain disturbances during the welding process. In practical production, welding conditions often change; for example, errors in the pre-machining and fitting of the work-piece result in differences of gap size and
position, and changes of work-piece heat dissipation during the welding process bring about weld distortion and variations in penetration. In order to overcome or restrain the various uncertain influences on welding quality, an effective approach is to develop and improve intelligent technologies for welding robots, such as vision sensing, automatic programming, guiding and tracking, and real-time intelligent control of the welding process [2,10].

Obtaining sufficiently precise sensing information from the welding process under massive disturbances is one of the key technologies for intelligent welding robots. During robotic welding, intelligent sensing focuses mainly on tracking the seam and monitoring the weld pool features, and vision sensing technology has attracted considerable attention in recent years. Generally, vision sensing in welding can be divided into active and passive methods according to the imaging light source. The active vision sensing method requires a laser or another auxiliary light source to illuminate the welding region and to restrain welding arc interference when imaging the weld pool; Refs. [22,3,23,8] reported laser sensor systems designed for tracking the seam and detecting the weld pool. The main problem of active vision sensing systems is that they are too expensive for practical welding applications. Without an auxiliary light source, the passive vision sensing method utilizes only the arc itself to illuminate the welding region. Using a filter to remove disturbances, a charge coupled device (CCD) camera captures weld pool images in a selected narrow spectral range for extracting pool features, so the passive vision method has been considered a practical technique for monitoring the weld pool. Refs. [38,15,25] investigated passive vision sensing systems for tracking the seam and monitoring the weld pool.

As is well known, the arc welding process involves many complex factors, such as material metallurgy, heat conduction and physical-chemical reactions, and it exhibits typical multi-variable coupling, nonlinear, time-varying, random and uncertain properties, so that it is very difficult to model and control the welding dynamics by classical linear system theory. In recent years, some intelligent methods have been introduced to the welding process: Refs. [17,28,11] investigated the application of fuzzy reasoning to welding quality control, and Refs. [12,29,40,24,37] presented research results on artificial neural networks for modeling and control of the welding process. Although some significant results were obtained in earlier research on vision sensing, modeling and intelligent control of weld quality and pool dynamics [27,26], these intelligent technologies have still not been realized effectively in welding robot systems, owing to the complexity of the welding process and the real-time and openness limitations of robot systems. Therefore, developing intelligentized technology to improve current teaching-and-playback welding robots is necessary and urgent in order to satisfy the high-quality and flexibility requirements of welding products and the trend of advanced manufacturing.
2 Intelligentized Welding Robot Systems

The principal scheme of the intelligentized welding robot system is shown in Fig. 1. It contains a 9-freedom welding flexible mechanical platform with a 6-freedom manipulator and a 3-freedom positioner; an automatic programming and simulation subsystem for welding tasks; a welding seam guiding and tracking unit; an intelligent real-time control unit with vision sensing and modeling and control of weld pool dynamic features; and a central control unit for harmonizing the separate units. In Fig. 1, Wp is the welding power source, Sp is the signal processing interface for laser scanning, and IP is the image processing interface for the weld pool; WPPC is the control computer for the welding power source, TPPC is the control computer for the seam tracking process, VPPC is the process control computer for vision sensing and intelligent control of weld pool dynamics, MPPC is the control computer for coordinating the welding flexible mechanical platform with the 6-freedom manipulator and 3-freedom positioner, and CCPC is the central control computer for the whole intelligentized welding robot system.

Based on the structural functions of intelligent robot systems [16,30], the system in Fig. 1 can be divided into three functional levels: the executive level, the collaborative level and the intelligent level. The executive level contains the 9-freedom flexible platform, sensors, controllers and welding power source. The collaborative level includes the functional control computers. The intelligent level includes the index command input of the human-computer interface and welding task programming and simulation; it administrates the whole system by means of the CCPC. The system in Fig. 1 is named an intelligentized welding flexible manufacturing cell (IWFMC).
Fig. 1 The system scheme of intelligentized welding flexible manufacturing cell (IWFMC)
A photograph of the flexible platform with the 6-freedom manipulator and 3-freedom positioner, designed by ourselves, is shown in Fig. 2. Owing to its open framework and maintainability, the platform has become usable equipment for realizing various intelligent technique experiments. Basic mechanical experiments on the platform validated that the system can realize satisfactory coordinated movement, repeated positioning precision, and continuous locus and velocity precision for robotic welding tasks.
Fig. 2 The photo of the 9-freedom flexible platform
3 Automatic Programming and Simulation for Robotic Welding Task

Off-line programming and simulation is a key technology for improving the flexibility of welding robots [13,5,19]. The automatic programming and simulation subsystem in Fig. 1 includes CAD techniques for the features of the welding robot and work-piece, and path and parameter planning for the robotic welding process.

3.1 Robotic welding off-line programming and simulation system based on dynamic graphics

The module framework of the designed off-line planning system is shown in Fig. 3. Besides the basic functional modules, some advanced modules, such as work-piece feature CAD modeling and welding parameter and path programming, were integrated into the system to improve programming efficiency. Moreover, the system includes programming and simulation of an arc welding robot with vision and tracking sensors to verify the reachability of robotic movements. In order to obtain the features of various welding work-pieces, it is necessary to establish a basic modeler for identifying the geometric features of the object, such as joints and intersections between sides and planes. AutoCAD is used in the system to represent welding work-piece features.
Fig. 3 Framework of off-line programming and simulation
An identification of the welding seam features where one pipe passes through another is shown in Fig. 4.
Fig. 4 Identified welding seam features where one pipe passes through another
3.2 Welding parameter planning for the robot systems

In order to improve the intelligence of the programming system, our studies have combined rule-based reasoning (RBR), case-based reasoning (CBR) and artificial neural networks (ANN) with welding parameter planning, so that the off-line planning of robotic welding parameters can meet special requirements in practical production. The framework of the welding parameter programmer for the robot system is shown in Fig. 5.
Fig. 5 The framework of the welding parameter programmer

3.3 Path planning technology for the arc welding robot

Path planning technology for the arc welding robot [34] involves the following two aspects: collision-free path planning and redundancy robotic path planning. Collision-free path planning considers the weld workpiece and its surroundings, searching for an optimal welding path in the manipulator space with respect to weld quality. Redundancy robotic path planning focuses on the self-motion of the arc welding robot, searching for an optimal kinematic and welding-pose path in joint space.
The collision-free path planning of the arc welding robot:
1. Collision-free path planning was expressed as an optimization of welding torch poses; an optimization model was presented to avoid collision paths, and a collision-checking algorithm was developed in the system.
2. A heuristic searching technique was used to solve the collision-free path planning of the arc welding robot, using the designed production system and a so-called quality-step searching method.
3. Based on the AutoCAD software, a simulation system for collision-free path planning was developed.
Redundancy robotic path planning:
1. An index function synthesizing multiple performance measures was presented for redundancy robotic path planning.
2. Using a genetic algorithm [19], a coordinating control algorithm was developed for the redundant manipulator system consisting of the 6-freedom robot and the 3-freedom positioner; a sketch of this idea follows the list.
3. Based on the AutoCAD and 3DS software, a simulation system for redundancy robotic path planning was developed.
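As a rough illustration of item 2 above, the sketch below runs a tiny genetic algorithm over the nine joint variables of the robot-plus-positioner system. The fitness function, population settings and joint limits are hypothetical stand-ins; the paper's actual multi-performance index and kinematics are not given in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N_JOINTS, POP, GENS = 9, 60, 200     # 6-freedom robot + 3-freedom positioner

def fitness(q):
    """Hypothetical multi-performance index: penalize joint-limit
    proximity and deviation from a nominal welding pose."""
    limit_penalty = np.sum(np.maximum(np.abs(q) - 2.5, 0.0) ** 2)
    pose_error = np.sum(q ** 2)       # nominal pose taken as zero here
    return -(pose_error + 10.0 * limit_penalty)

pop = rng.uniform(-3.0, 3.0, size=(POP, N_JOINTS))
for _ in range(GENS):
    scores = np.array([fitness(q) for q in pop])
    elite = pop[np.argsort(scores)[-POP // 2:]]               # selection
    parents = elite[rng.integers(0, len(elite), (POP, 2))]
    w = rng.random((POP, 1))
    pop = w * parents[:, 0] + (1 - w) * parents[:, 1]         # crossover
    pop += rng.normal(0.0, 0.05, pop.shape)                   # mutation

best = pop[np.argmax([fitness(q) for q in pop])]
print("best joint configuration:", np.round(best, 3))
```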
4 Laser Scanning Sensing, Tracking and Real-Time Control of the Welding Robot Moving Locus

During the robotic welding process, welding conditions are influenced by many factors, such as machining and assembly errors and distortion from heating and residual stress, so it is difficult for a teach-and-playback robot to satisfy weld quality requirements when the welding conditions change abnormally. Using the laser sensor described below, the IWFMC can autonomously control and correct the robot's manipulations to improve its adaptability, even without off-line programming or teach-and-playback.

4.1 The laser scanning sensor for weld seam tracking

The main performance parameters of the laser scanning sensor designed for weld seam tracking in the IWFMC are as follows:
1. Volume: 145 mm × 120 mm × 60 mm
2. Weight: 600 g
3. Scanning angle scope: ±7.5°
4. Scans per second: 11.6
5. Data capture rate: 4000 points/s
6. Transverse measuring precision: 0.15 mm
7. Longitudinal measuring precision: 0.1 mm
8. Suitable for various work-piece surfaces
9. Strong interference resistance: no influence beyond 30 mm from the laser scanning plane under a 200 A GTAW arc
This sensor can capture the position of the torch relative to the weld seam cross-section in real time and obtain a high-precision profile of the seam cross-section. Autonomous control of the robot's moving locus is one of the basic indexes of an intelligentized robot; realizing it relaxes the consistency requirements on work-pieces and the dependence on welding clamps. In the IWFMC, the moving locus of the welding robot can be regulated in real time from the laser scanning sensing information, independent of off-line path programming or teach-and-playback.

4.2 Laser scanning-based identification of some typical joints

Using the above three-dimensional laser scanning system, the detected results for several typical seams, such as a V-type groove, a lap joint and a butt seam, are shown in Fig. 6. The reticles in Fig. 6 indicate the welding position determined from the measured information.
Fig. 6 Detected results for typical seams: (a) V-type groove joint with a small gap; (b) lap joint; (c) butt seam with a 1.5 mm gap; (d) butt seam with a 0.15 mm gap
Since the identification of a butt seam is affected by the gap size, and the characteristic differences between a small-gap and a no-gap butt are usually not evident, it is very difficult to detect and identify the position of these joints. In general, butt seams vary so much that they are easily confused with fillet and corner joints owing to their similarity. Therefore, some expert rules were used to determine the seam features.
5 Welding Robot Systems with Vision Sensing and Intelligent Control of Pulsed GTAW Pool Dynamics

Up to now, no welding robot has been provided with a function for real-time sensing and control of weld pool dynamics. It is therefore significant to realize an intelligentized robot system with vision sensing and real-time control of the weld pool process.
Fig. 7 The weld pool visual sensing and laser tracking sensor system: (a) photograph of the visual sensor; (b) structure diagram of the visual sensor (CCD camera, composed filter system, reflectors, motor, torch, welding/travel direction, workpiece); (c) installation of the visual sensor and tracking sensor
5.1 The vision sensing of the pool dynamic process

The photograph and the structure diagram of the visual sensor are shown in Fig. 7(a) and (b). The visual sensor on the right and the laser scanning sensor on the left were fixed to the end-effector of the welding robot. In order to decrease the influence of torch movement on the pool images, a two-step reflecting light path system was adopted. Weld pool images and arc light entered the CCD camera through a composed filter system consisting of a neutral density filter and a narrow band filter. The vision sensor, the image acquisition board, the computer for image processing and for control of weld penetration and pool size, the welding power source and the related interfaces compose the intelligent control subsystem for the weld pool dynamic process.

5.2 Extracting dynamic features of pulsed GTAW pool images

Calibration of the image sensing system is omitted here for brevity.
Weld Pool Characteristic Parameter Definition. In order to investigate weld penetration and weld quality, two weld pool parameters, the topside maximum width Wfmax and the topside half length Lfmax (measured from the arc center), were defined as shown in Fig. 8.

Fig. 8. Definition of characteristic parameters of the weld pool during pulsed GTAW
Image Processing and Feature Extraction from the Weld Pool Images. Under the conditions in Table 1, a welding experiment was conducted on the robotic system; typical images are shown in Fig. 9.
Using the image processing algorithms designed in this paper, the characteristic parameters of the weld pools were extracted. Usually, the robot controller cannot ensure that the angle between the viewing direction and the welding travel direction stays constant along a complicated seam; moreover, this angle varies during robot welding. Therefore, the algorithm for weld pool image processing must remain applicable as the observation direction changes during robotic arc welding.

Table 1. The experiment conditions of pulsed GTAW

Pulse frequency f, Hz: 1            Arc length l, mm: 3
Pulse duty ratio, %: 45             T-pole diameter, mm: 2.4
Peak current Ip, A: 130             T-pole angle, °: 30
Base current Ib, A: 60              Argon flow rate L, l/min: 8.0
Traveling speed Vw, mm/s: 2.0       Workpiece size, mm×mm×mm: 250×50×2
Fig. 9(a) is an original weld pool image captured at the base-current time, and Fig. 9(b) is the image after smoothing and noise filtering. Point C in Fig. 9(c) was determined as the arc center, and point T as the tail of the weld pool. Using edge extraction algorithms, the boundary points were determined as in Fig. 9(d); Fig. 9(e) shows the pool boundary approximated by a quartic polynomial. The pool characteristic parameters were determined from Fig. 9(f). The detailed image processing algorithms are omitted here; a minimal sketch of the boundary-fitting step follows Fig. 9.
Fig. 9 Image processing and feature extraction for the robotic pulsed GTAW pool: (a)-(f) successive processing stages
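As a minimal sketch of the boundary-fitting step of Fig. 9(e)-(f), the snippet below fits a quartic polynomial to hypothetical boundary points and reads off the two characteristic parameters of Fig. 8. The synthetic data and the use of the widest point as the arc-center reference are assumptions for illustration; in the real system the points come from the processed CCD image.

```python
import numpy as np

# Hypothetical boundary points: x along the welding direction (mm),
# y the half-width of the pool boundary (mm), with some noise.
x = np.linspace(0.0, 8.0, 40)
y = 0.5 * x * (8.0 - x) / 8.0 + np.random.normal(0, 0.02, x.size)

quartic = np.poly1d(np.polyfit(x, y, deg=4))   # quartic boundary fit

xs = np.linspace(x.min(), x.max(), 500)
half_width = quartic(xs)
W_fmax = 2.0 * half_width.max()                # topside maximum width
L_fmax = xs.max() - xs[half_width.argmax()]    # widest point to tail
print(f"Wfmax = {W_fmax:.2f} mm, Lfmax = {L_fmax:.2f} mm")
```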
5.3 Controller for weld pool dynamic features during pulsed GTAW

Owing to the complexity of weld pool dynamics during pulsed GTAW, we designed a neuron self-learning Proportional-Summational-Differential (PSD) controller for the pulsed GTAW process, shown in Fig. 10. The main merits of the PSD controller are that it needs no model of the complicated process and that it is adaptive. Using the measured errors between the actual outputs and the expected outputs, the PSD controller realizes self-learning control of the process. The controlled variables of the system are the maximum width and half length of the weld pool; the outputs of the PSD controller are the pulse duty ratio and the peak current, respectively. The output of a neuron self-learning PSD controller is
$$\Delta u(t) = \omega_1(t)\,x_1(t) + \omega_2(t)\,x_2(t) + \omega_3(t)\,x_3(t) \qquad (1)$$
Fig. 10 Diagram of a neuron self-learning PSD control system for pulsed GTAW
where $u(t)$ is the control input and $x_i(t)$ ($i=1,2,3$) are the neuron inputs, i.e., $x_1(t) = e(t)$, $x_2(t) = \Delta e(t)$ and $x_3(t) = \Delta^2 e(t)$. The $\omega_i(t)$ ($i=1,2,3$) are the weighting coefficients corresponding to $x_i(t)$; they are modified during the neuron self-learning process according to the learning rule

$$\omega_i(t+1) = (1-m)\,\omega_i(t) + d\,r_i(t) \qquad (2)$$

where $m, d > 0$, $d$ is the learning ratio and $r_i(t)$ is defined as

$$r_i(t) = z(t)\,u(t)\,[e(t) + \Delta e(t)] \qquad (3)$$

where $z(t)$ is a teaching signal. To ensure convergence and robustness of the learning algorithm, the following learning rules are adopted; together they form the control law of the single-neuron PSD controller:

$$\Delta u(t) = K \sum_{i=1}^{3} \omega_i'(t)\,x_i(t), \qquad \omega_i'(t) = \omega_i(t) \Big/ \sum_{j=1}^{3} \lvert \omega_j(t) \rvert \qquad (4)$$

with

$$\begin{cases} \omega_1(t+1) = \omega_1(t) + d_I\,z(t)\,u(t)\,[e(t)+\Delta e(t)] \\ \omega_2(t+1) = \omega_2(t) + d_P\,z(t)\,u(t)\,[e(t)+\Delta e(t)] \\ \omega_3(t+1) = \omega_3(t) + d_D\,z(t)\,u(t)\,[e(t)+\Delta e(t)] \end{cases}$$

The parameters $d_P$, $d_I$ and $d_D$ are tuned according to the actual system.
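As a rough illustration of Eqs. 1-4, the sketch below implements a single-neuron self-learning PSD loop in Python against a toy first-order plant. The plant model, the initial weights, the gains and the use of the error as the teaching signal z(t) are all assumptions for the demonstration, not values from the paper.

```python
import numpy as np

class NeuronPSD:
    """Single-neuron self-learning PSD controller after Eqs. 1-4."""
    def __init__(self, K=0.5, d_i=0.02, d_p=0.02, d_d=0.01):
        self.K = K
        self.d = np.array([d_i, d_p, d_d])     # learning ratios
        self.w = np.array([0.3, 0.3, 0.3])     # assumed initial weights
        self.e1 = self.e2 = self.u = 0.0       # e(t-1), e(t-2), u(t-1)

    def step(self, e, z):
        x = np.array([e, e - self.e1, e - 2 * self.e1 + self.e2])
        self.w += self.d * z * self.u * (e + x[1])    # learning rule
        w_norm = self.w / np.sum(np.abs(self.w))      # normalization
        self.u += self.K * np.dot(w_norm, x)          # Eq. 4
        self.e2, self.e1 = self.e1, e
        return self.u

ctl, y, ref = NeuronPSD(), 0.0, 5.0      # ref: desired pool width
for _ in range(200):
    e = ref - y
    u = ctl.step(e, z=e)                 # error used as teaching signal
    y += 0.1 * (u - y)                   # toy first-order plant
print(f"steady output ~ {y:.2f} (reference {ref})")
```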
5.4 Experiment on PSD control of weld pool dynamics during the robotic welding process

In order to verify the effectiveness of the PSD controller in the robot system, butt welding by robotic pulsed GTAW was conducted. The workpiece was a dumbbell-shaped mild steel plate of 2 mm thickness; its size and shape are shown in Fig. 11. The varying curves of the peak current and the maximum width of the weld pool are shown in Fig. 12. The peak value and duty ratio of the pulse current were regulated by the neuron self-learning PSD controller during the welding process. Fig. 13 shows photographs of the controlled work-piece.
Fig. 11 Shape and the size of the work-piece
Fig. 12 Curves of neuron self-learning PSD control of bead width during robotic pulsed GTAW
(a) Topside
(b) Backside
Fig. 13 Photographs of the PSD-controlled weld work-piece
6 Integration of the Intelligentized Welding Robot System

The intelligentized welding robot system has been integrated to uniformly administer the robotic coordinated movement, the path and parameter planning and simulation, seam guiding and tracking, and the visual sensing and intelligent control of weld pool dynamic features. The integrated system can be called a welding flexible manufacturing cell with some primary intelligent functions, or a primary intelligent welding flexible manufacturing cell (IWFMC).

6.1 The integrated structure design of the IWFMC

Based on the task characteristics of arc welding robot systems, the IWFMC was designed as a star-shaped network, as shown in Fig. 14. It includes a robotic movement trajectory control unit (RCU), a weld seam tracking unit (STU), a welding pool penetration control unit (PPU) and a robotic welding power unit (WPU). Communication and information exchange among these units are handled by the center monitoring computer (CMC).
Fig. 14 The net topology structure of IWFMC
The information flow in the IWFMC system can be described as follows:
1. Based on network communication techniques, the robot controller can download and run the welding task program in real time from the CMC; during the welding process, the robot controller regulates the joint driving signals using the tracking sensing information, while the penetration unit provides the weld pool states and the welding current and speed information to the welding power source and the robot controller through the CMC.
2. The network system of the IWFMC was designed in a hierarchical control mode, combined with the sensing system to realize shared control, fusion control and centralized management of the sensing information flow. The CMC can display, supervise and correct the welding process, the path and parameter planning, the multimedia graphical simulation, the data from the welding expert system, and so on.

6.2 Petri net model for optimization of material flow and information flow in IWFMC
The Petri net method from discrete event dynamic system (DEDS) theory provides a powerful token formalism for the modeling, analysis, performance evaluation and control of event elements and their logical sequencing [18,31,39,20]. Using Petri nets to optimize and plan the material flow and information flow in a welding flexible manufacturing cell/system is a significant research direction [33,32]. Discrete-event and Petri net models are well suited to small-batch, high-variety welding production on a Welding Flexible Manufacturing Cell (WFMC) and can improve both WFMC efficiency and welding quality. In order to assure weld quality and satisfy the intelligence requirements of flexible welding manufacturing, a Petri net model combining sensing and executing information during the robotic welding process has been established from the following rules:
Rule 1: if the input buffer is empty, then put a workpiece into the input buffer;
Rule 2: if the positioner is vacant, then move the workpiece from the input buffer to the positioner;
Rule 3: if the tracking sensor sends a regulating signal, then adjust the robot's running locus;
Rule 4: if the pool features are abnormal, then regulate the welding current and speed;
Rule 5: if welding is finished and the output buffer is empty, then move the workpiece to the output buffer.
Based on the above rules, an automatic Petri net (APN) model for the IWFMC has been established as shown in Fig. 15. Two types of directed arc are added to the common Petri net for the APN model: inhibitor arcs and enabling arcs. The APN model is activated by exterior triggering events, e.g., input information from sensors and action information from executors. The APN model contains 18 places (ps1, ps2, p1, p2, ..., p16) and 18 transitions (t1, t2, ..., t18), and its marks are explained as follows: ps1/ps2: input/output sink; p1/p2: workpiece moving permitted/prohibited (from sink to buffer); p3/p4: positioner idle/busy; p5/p6: weld pool sensor (regulating signal)/(no regulating signal); p7/p8: tracking sensor (regulating signal)/(no regulating signal); p9/p10: workpiece moving permitted/prohibited (from buffer to sink); p11/p16: input/output buffer states; p12: workpiece ready; p13/p14: welding process start/end; p15: post-processing; w1/w2: weight values of the inhibitor arcs.
A running process of the APN system can be described as follows. At the start of the welding process, if the input buffer (p11) is empty, a workpiece is moved from the input sink to the input buffer; if the positioner is vacant, the workpiece is moved from the input buffer to the positioner (p12); if the workpiece is ready, welding starts (p13). During the welding process, if the weld pool sensor outputs regulating signals, the welding parameters are regulated to control weld penetration (t14); if the seam tracking sensor outputs regulating signals, the manipulator is regulated to correct the moving locus (t15). After welding ends, the necessary post-processing is performed (p15); at the same time, if the output buffer is empty (p16), the welded workpiece is unloaded to the output buffer for delivery to the output sink. Thus the whole welding process is completed.
Fig. 15 Automatic Petri net model of IWFMC
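To make the token flow concrete, the following is a minimal sketch of how a few of the APN firing rules above can be simulated. The place names echo the list above, but the wiring is a simplified assumption rather than the full 18-place, 18-transition net of Fig. 15.

```python
# Minimal marked Petri net: a transition fires when every input place
# holds a token; firing moves tokens from input places to output places.
marking = {"p11_in_buffer": 1, "p3_positioner_idle": 1,
           "p12_ready": 0, "p13_welding": 0, "p14_done": 0,
           "p16_out_buffer_empty": 1, "p15_post": 0}

transitions = {
    "load_positioner": (["p11_in_buffer", "p3_positioner_idle"], ["p12_ready"]),
    "start_weld":      (["p12_ready"], ["p13_welding"]),
    "finish_weld":     (["p13_welding"], ["p14_done"]),
    "unload":          (["p14_done", "p16_out_buffer_empty"], ["p15_post"]),
}

def fire(name):
    ins, outs = transitions[name]
    if all(marking[p] > 0 for p in ins):     # enabled?
        for p in ins:
            marking[p] -= 1
        for p in outs:
            marking[p] += 1
        return True
    return False

for t in ["load_positioner", "start_weld", "finish_weld", "unload"]:
    print(t, "fired" if fire(t) else "blocked")
```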
6.3 Functional realization of the composite IWFMC system

Based on the IWFMC system with exterior sensors, a running flow chart of the IWFMC with the welding automation Petri net (WAPN) model is shown in Fig. 16.
The work-piece photos of the visual tracking and coordinating control experiments on the IWFMC are shown in Fig. 17.

Fig. 16 Running flow chart of the IWFMC system with WAPN (system startup; set welding state parameters; download the off-line program; guide to the seam start (TPPC, MPPC); run the tracking system and process control (WPPC, VPPC) under CCPC monitoring with WAPN; turn off the systems on reaching the seam terminal)
Fig. 17 (a) Tracking experiment
Fig. 17 (b) Coordinating experiment
Fig. 17 Work-piece photos of the experiments on IWFMC
7 Conclusions

The current teach-and-playback welding robots need to be improved with intelligentized welding techniques for effective application in practical production. The intelligentized welding techniques for robot systems include vision sensing, guiding and tracking of the weld seam; programming of path, pose and welding parameters; knowledge modeling and intelligent real-time control of the welding dynamics; optimizing supervision of the welding flexible manufacturing cell/system; and so on. This paper has presented a primary intelligent welding robot system with separate sensing and control subsystems, the so-called intelligentized welding flexible manufacturing cell (IWFMC). The intelligent technologies contained in this system will be a useful foundation for developing a new type of intelligent welding robot in the future.
Acknowledgments

This work was supported by the National Natural Science Foundation of China, No. 59635160, and the Science and Technology Committee of Shanghai, China, No. 021111116.
References

1. B.F. Kuvin. Welding for Space. Welding Journal. 1992, 71(3): 41-44
2. Bob Irving. Sensor and Control Continue to Close the Loop in Arc Welding. Welding Journal. 1999, 78(5): 31-36
3. C.G. Morgan, J.S. Bromley and P.G. Davey. Visual Guidance Techniques for Robot Arc Welding. Proceedings of the 3rd Conference on Robotic Vision Sense Control, 1983: pp. 615-619
4. E.J. Morgan-Warren. Remote Repair Welding in Nuclear Reactors. Welding & Metal Fabrication. 1989, (8): 111-116
5. G. Grunwald. Modeling and Planning for Sensor Based Intelligent Robot Systems (Task-Directed Planning of Cooperating Sensors: A Framework). Machine Perception and Artificial Intelligence. 1995, 21: 143-159
6. Gerhard Teubel. Experience + Application Up-Date: Automation of A.W.-Operations Using Robot-Technology. Welding in the World. 1994, 34: 75-84
7. J.D. Lane. Robotic Welding State of the Art. Robotic Welding—International Trends in Manufacturing Technology. IFS (Publications) Ltd, UK. 1987: 1-10
8. J.E. Agapakis and J. Bolstad. Vision Sensing and Processing System for Monitoring and Control of Welding and Other High Luminosity Processes. International Robots & Vision Automation Conference, 1991: pp. 23-29
9. J.L. Pan. A survey of welding sciences in 21st century. Proceedings of the 9th Chinese Welding Conference, Tianjin, China, Oct. 1999, (1): D-001-D-017
10. J.W. Barratt, W.F. Clocksin, P.G. Davey and A.R. Vialer. Visually Guided Robot Arc-Welding of Thin Sheet Pressings. Proc. of the 12th Int. Symp. on Industrial Robots. IFS (Publications) Ltd, UK. 1987: 225-230
11. J.W. Kim and S.J. Na. A Self-Organizing Fuzzy Control Approach to Arc Sensor for Weld Joint Tracking in Gas Metal Arc Welding of Butt Joints. Welding Journal. 1993, Vol. 72(1): pp. 60s-66s
12. K. Andersen and G.E. Cook. Gas Tungsten Arc Welding Process Control Using Artificial Neural Networks. Proceedings of the 3rd International Conference on Trends in Welding Research, Gatlinburg, Tennessee, USA, 1-5 June 1992: pp. 135-142
13. Krister Brink, et al. Increased Autonomy in Industrial Robotic Systems: A Framework. Journal of Robotic Systems. 1997, 19: 357-373
14. K. Masubuchi, A.V. Gaudiano and I.J. Reynolds. Technologies and Practices of Underwater Welding. Proceedings of The International Conference on Underwater Welding. Norway, 1983, (6): 49-70
15. K. Oshima and M. Morita. Sensing and Digital Control of Weld Pool in Pulsed MIG Welding. Transactions of the Japan Welding Society. 1992, Vol. 23(4): pp. 36-42
16. K.P. Valavanis, G.N. Saridis. Intelligent Robotics System: Theory, Design and Applications. Boston, 1992: 12-18
17. L. Dimiter. Adaptive Robot Under Fuzzy Control. Fuzzy Sets and Systems. 1985, Vol. 17(1): pp. 23-28
18. L.E. Holloway, et al. Survey of Petri Net Methods for Controlled Discrete Event Systems. Discrete Event Dynamic Systems: Theory and Applications. 1997, 7(2): 151-190
19. L. Wu, K. Cui, S.B. Chen. Redundancy Coordination of Multiple Robotic Devices for Welding through Genetic Algorithm. Robotica, 2000, 18(6): 669-676
20. M.C. Zhou. Petri Net Synthesis and Analysis of A Flexible Manufacturing System Cell. IEEE Transactions on Systems, Man and Cybernetics. 23(2): 523-531
21. Paul Drews, Gunther Starke. Welding in The Century of Information Technology. Welding in the World. 1994, 34: 1-20
22. R.J. Beatlie, S.K. Cheng and P.S. Logue. The Use of Vision Sensors in Multipass Welding Applications. Welding Journal. 1988, Vol. 67(11): pp. 28-33
23. R. Kovacevic, Y.M. Zhang and L. Li. Monitoring of Weld Joint Penetration Based on Weld Pool Geometrical Appearance. Welding Journal. 1996, Vol. 75(10): pp. 317s-329s
24. R. Kovacevic and Y.M. Zhang. Neuro-fuzzy Model-Based Weld Fusion State Estimation. IEEE Transactions on Control Systems Technology. 1997, Vol. 5(4): pp. 30-42
25. R.W. Richardson and D.A. Gutow. Coaxial Arc Weld Pool Viewing for Process Monitoring and Control. Welding Journal. 1984, 63(3): 43-50
26. S.B. Chen, D.B. Zhao, L. Wu and Y.J. Lou. Intelligent Methodology for Sensing, Modeling and Control of Pulsed GTAW: Part 2—Butt Joint Welding. Welding Journal, 2000, 79(6): 164s-174s
27. S.B. Chen, Y.J. Lou, L. Wu and D.B. Zhao. Intelligent Methodology for Sensing, Modeling and Control of Pulsed GTAW: Part 1—Bead-on-Plate Welding. Welding Journal, 2000, 79(6): 151s-163s
28. S. Murakami. Weld-line Tracking Control of Arc Welding Robot Using Fuzzy Logic Controller. Fuzzy Sets and Systems. 1989, Vol. 32(2): pp. 31-36
29. T.G. Lim and H.S. Cho. Estimation of Weld Pool Sizes in GMA Welding Process Using Neural Networks. Journal of Systems and Control Engineering. 1993, Vol. 207(1): pp. 15-26
30. Timothy Hebert, et al. A Real-Time, Hierarchical, Sensor-Based Robotic System Architecture. Journal of Intelligent and Robotic Systems. 1998, 21: 1-27
31. T.O. Boucher, et al. Petri Net Control on Automated Manufacturing System Cell. Advanced Manufacturing Engineering. 1990, 2(3): 323-331
32. T. Qiu, S.B. Chen, Y.T. Wang, L. Wu. Information Flow Analysis and Petri Net-based Modeling for Welding Flexible Manufacturing Cell. SPIE Intl. Symposium on Intelligent Systems and Advanced Manufacturing. Vol. 4192: 449-456, Nov. 6-8, 2000, Boston
33. T. Qiu, S.B. Chen, Y.T. Wang, L. Wu. Modeling and Control of Welding Flexible Manufacturing Cell Using Petri Net. Journal of Materials Science & Technology, 2001, 17(1): 181-182
34. Trailer Manufacturer Depends on Robotic Welding to Boost Production. Welding Journal. 1995, 74(7): 49-51
35. U. Dilthey, L. Stein. Robot System for Arc Welding—Current Position and Future Trends. Welding & Cutting. 1992, (8): E150-E152
36. West Carrillton. Robots Assure Quality for Auto Seat Manufacturer. Welding Journal. 1995, 74(8): 57-59
37. Y. Kaneko, T. Iisaka and K. Oshima. Neuro-Fuzzy Control of the Weld Pool in Pulsed MIG Welding. Quarterly Journal of the Japan Welding Society. 1994, Vol. 12(3): pp. 374-378
38. Y. Li and J.E. Middle. Machine Vision Analysis of Welding Region and Its Application to Seam Tracking in Arc Welding. Journal of Engineering Manufacture. 1993, No. 5: pp. 275-283
39. Y. Narahari, et al. A Petri Net Approach to the Modeling and Analysis of Flexible Manufacturing System. Annals of Operations Research. 1985, (3): 449-472
40. Y. Suga and M. Naruse. Application of Neural Network to Visual Sensing of Weld Line and Automatic Tracking in Robot Welding. Welding in the World. 1994, 34: pp. 275-284
The Seam Tracking System for Submerged Arc Welding

Hua Zhang, Xuelei Ding, Maohua Chen, Benqi Da, and Chunhua Zou

Mechatronic Institute, Nanchang University, Nanchang 330029, P.R. China
[email protected]
Abstract. The structure and principle of a seam tracking system for submerged arc welding are introduced in this paper. The method of seam recognition with computer vision under changing environmental light is the main subject of the research. Intelligent control for seam tracking has been realized using a self-modulated fuzzy control scheme. The system was tested in the production of spiral submerged arc welded pipe for outside welding, and preliminary results were obtained.
1 Introduction

Since submerged arc welding has the advantages of high efficiency, good seam formation and freedom from arc light interference, it is widely applied in the petrochemical industry and in pressure vessel manufacture. Submerged arc welding is adopted in the key engineering project "gas transportation from west to east" in China. Because tracking accuracy is influenced by a number of factors, such as changes of pipe diameter, variation of the image, flux leaking and so on, these factors are not easy to control. So far no effective approach to seam tracking for outside welding has been proposed, and it is still performed under manual control. In practice, workers can only judge the seam position by checking whether a light spot lies exactly in the middle of the seam, and tracking accuracy suffers from the workers' eye fatigue when they watch the light point for a long time. How to overcome these disadvantages, increase tracking accuracy, reduce work intensity, and replace manual control with automatic control is the primary problem for researchers to solve.
2 Structure and Principle of the Seam Tracking System

2.1 Framework of the system

The standard parameters of this project are: pipe: Φ1016×14.6; welding speed: 0.9-1.5 m/min; welding voltage: 32-38 V; welding current: 390-1100 A; pipe thickness: 7-10 mm.
The hardware of the seam tracking system consists of a cross slide, an industrial control computer, an interface circuit, an image acquisition card (VIDEO-PCI-XR), a CCD (Charge Coupled Device) sensor, an AC4064 I/O board, and so on. The framework of the system is illustrated in Fig. 1.

Fig. 1. Framework of the seam tracking system of submerged arc welding for outside welding (welding pipe -> vision sensor (CCD) -> image acquisition -> image processing -> intelligent control -> I/O interface circuit -> cross slide (torch))
2.2 Principle of the system

The CCD sensor converts seam images into video signals, and the image acquisition card converts the video signals into digital images in the computer. The images then undergo filtering, binarization and Hough transform in software programmed with VC++ 6.0, and the seam error is obtained at the end of this processing. The seam error is fed to the intelligent controller, and a digital control quantity is computed by the control algorithm. The cross slide is adjusted by this quantity, which is converted to an analogue quantity through the I/O interface circuit. Precise tracking and error correction are thus achieved.
3 Method of Seam Recognition

3.1 Seam analyzing

First, the processing region is determined before image preprocessing. There is no need to process the whole image, because doing so slows real-time processing by occupying more memory and wasting processing time. The purpose of the processing is seam recognition. A region of 300×200 pixels containing the seam information is selected as the object of image processing; this saves memory space, enhances anti-interference capability and speeds up image processing. This region contains all the information needed for seam recognition, and system performance is enhanced by this selection.

3.2 Process of filtering
To make it easy to differentiate the feature line from the background and to reinforce the feature line, a filtering process is applied. A 3×3 template is used in this process; it is given in the following table.

Table 1. A template to reinforce the vertical orientation
-1   2   -1
-1   2   -1
-1   2   -1
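Applied as a convolution kernel, the template of Table 1 strengthens vertical lines and suppresses horizontal structure. A minimal sketch follows, with a synthetic frame standing in for a CCD image:

```python
import numpy as np
from scipy.ndimage import convolve

# The Table-1 template: responds strongly to vertical bright lines.
template = np.array([[-1, 2, -1],
                     [-1, 2, -1],
                     [-1, 2, -1]], dtype=float)

image = np.random.randint(0, 50, (200, 300)).astype(float)  # stand-in frame
image[:, 150] = 255.0                    # synthetic vertical seam line

filtered = convolve(image, template, mode="nearest")
print("strongest column response at x =", filtered.sum(axis=0).argmax())
```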
3.3 Process of binarization
A linear transformation similar to binarization is applied: the gray level of the background is set to zero and the gray level of the feature line to 255. Let f(x,y) denote the gray level of the original image. An appropriate threshold value T is obtained according to some criterion, and the gray level g(x,y) of the processed image after segmentation is

$$g(x,y) = \begin{cases} 255, & f(x,y) \ge T \\ 0, & f(x,y) < T \end{cases} \qquad (1)$$
Threshold selection. To distinguish the object and extract seam information from a complex background, the selection of the threshold plays a key role in threshold segmentation. If the threshold is selected too large, the object is incorrectly judged as background; if too small, the opposite occurs. There are many methods of choosing a threshold; the common ones are the histogram threshold partitioning approach, the two-dimensional maximal entropy partitioning approach, the fuzzy threshold partitioning approach, and so on. An iterative threshold segmentation approach is adopted in this paper, as follows:
(A) Determine the minimal gray level ZL and the maximal gray level ZK, and take the initial threshold as
$$T_K = \frac{Z_L + Z_K}{2} \qquad (2)$$
(B) T_K divides the image into two parts, the object and the background. Determine the mean gray levels Z_O and Z_B of the two parts:

$$Z_O = \frac{\sum_{Z(i,j) < T_K} Z(i,j)\,N(i,j)}{\sum_{Z(i,j) < T_K} N(i,j)} \qquad (3)$$

$$Z_B = \frac{\sum_{Z(i,j) > T_K} Z(i,j)\,N(i,j)}{\sum_{Z(i,j) > T_K} N(i,j)} \qquad (4)$$

where Z(i,j) is the gray level of point (i,j) and N(i,j) is the weight of point (i,j); generally N(i,j) = 1.0.
(C) Determine the new threshold T_{K+1}:

$$T_{K+1} = \frac{Z_O + Z_B}{2} \qquad (5)$$
(D) If the difference between T_{K+1} and T_K is small enough (below a preset tolerance), T_K is taken as the final threshold of this image; otherwise, set K ← K+1 and go on carrying out process (B).
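The four steps (A)-(D) can be sketched compactly as follows; the convergence tolerance eps and the synthetic gray-level data are assumptions for the demonstration:

```python
import numpy as np

def iterative_threshold(img, eps=0.5):
    """Iterative threshold selection (steps A-D), with N(i,j) = 1."""
    t = (img.min() + img.max()) / 2.0           # step A
    while True:
        obj, bkg = img[img < t], img[img >= t]  # step B: split at T_K
        z_o = obj.mean() if obj.size else t
        z_b = bkg.mean() if bkg.size else t
        t_new = (z_o + z_b) / 2.0               # step C
        if abs(t_new - t) < eps:                # step D: converged?
            return t_new
        t = t_new

img = np.concatenate([np.random.normal(40, 10, 5000),     # dark background
                      np.random.normal(200, 15, 500)])    # bright seam line
print(f"selected threshold: {iterative_threshold(img):.1f}")
```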
3.4 Hough Transform
Seam information of the butt joint is extracted by the Hough transform after binarization. Its obvious advantage is high interference immunity. The essence of the Hough transform is to map many points in one coordinate system to one point in another: multiple points representing a straight line (or another regular curve) in the image coordinate system are mapped to a single point in a parameter coordinate system, and that point corresponds to the parameters of the original line. Applying the Hough transform thus classifies points and congregates all the points belonging to one line, so that detecting a two-dimensional line feature is converted into searching for a maximum in a one-dimensional array. Good results can be achieved by this approach even when there are only a few feature points of the butt joint, and it has proved highly reliable and stable when the butt joint parameters change greatly.
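A minimal sketch of the voting scheme just described is given below, using the normal parameterization ρ = x·cosθ + y·sinθ; the grid sizes and the synthetic seam pixels are illustrative:

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=400.0):
    """Accumulate votes in (theta, rho) space; the peak cell gives the
    dominant line's parameters (rho = x*cos(theta) + y*sin(theta))."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = ((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), idx] += 1
    ti, ri = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[ti], (ri / (n_rho - 1)) * 2 * rho_max - rho_max

# Synthetic seam pixels: a near-vertical line plus scattered noise.
seam = [(150 + np.random.randint(-1, 2), y) for y in range(0, 200, 2)]
noise = [(np.random.randint(0, 300), np.random.randint(0, 200)) for _ in range(40)]
theta, rho = hough_lines(seam + noise)
print(f"detected line: theta = {np.degrees(theta):.1f} deg, rho = {rho:.1f}")
```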
Result of seam recognition
It is complex in the producing spot of submerged arc welding for spiratron. There are a great number of interferential factors and these factors are not stable extremely. So the primary task is to remove these disturbances.
(a) Original image (b) filtering result (c) binarization result (d) Hough transform result Fig. 2. the process of image processing
Fig.2.(a) is the picture before image processing, which shows that there are a lot of disturbances on the surface of work piece. According to above described 3×3 filtering template, the vertical orientation characteristic is emphasized and the horizontal direction disturbances are removed effectively on the bottom of the image, shown in Fig.2.(b). A method of selecting threshold automatically is adopted in binarization, so seam feature changed constantly in the process of seam tracking can be highlighted. In Fig.2.(c), background and seam object are distinguished obviously after binarization which is the foundation of Hough transform. It is shown in Fig.2.(c) that there is a little disturbance still. Fig.2.(d) shows that disturbance has been removed and seam has been recognized through Hough transform. Adjust torch and CCD before image processing. The center of image is exactly oriented just before the torch. The center point represents the position of torch. At this time the distance from this point to seam line is seam error, which is one of the controller inputs. In this example the result indicates that the error is 12 pixels after image processing, which is equal to 0.6mm as the resolution ratio of CCD is 0.05 mm/pixel. This time the real error is 0.8mm through accurate measurement. So the processing error is 0.2 mm, the precision of recognition meets the requirement of seam tracking.
4 Fuzzy Controller of the Seam Tracking System

The welding process is subject to a great number of influencing factors, which are highly nonlinear and non-deterministic. It is so complex that classical control theory applied to the automatic tracking system is not satisfactory. In recent years fuzzy control has increasingly improved the regulating speed and tracking accuracy of seam tracking control systems, but the application range of simple fuzzy control is limited: a sufficiently high regulating speed can be achieved when the error is small, yet the regulating speed cannot be adapted to the change of the error when the error is large. It can be concluded that simple fuzzy control still has difficulty meeting the requirements of welding practice.

Fuzzy-P control is therefore adopted in the seam tracking control system in this paper. The concept of Fuzzy-P is as follows: proportional control is used to improve the response speed and reduce the regulating time when the error is large; fuzzy control is used to improve the stability and control precision of the response and to decrease system overshoot when the error is small. Switching between the two control approaches is realized by designating a threshold EP in advance:

    proportional control if |E| ≥ EP;  fuzzy control if 0 ≤ |E| < EP

where |E| is the absolute value of the error. A great number of experiments showed that the tracking effect is satisfactory when the partition threshold EP is 1 mm, so EP = 1 mm is selected.

The key issue in fuzzy control theory is to establish the rules of fuzzy control. In the formula method of fuzzy control, a universal mathematical model is

$$U = \alpha E + (1 - \alpha)\,EC \qquad (6)$$

where E (or e) is the fuzzy quantity of the error, EC (or ec) is the fuzzy quantity of the error variance ratio, U is the output fuzzy quantity and α is the adjustment factor, which reflects the weights of e and ec and varies from 0 to 1. When e is relatively large, the primary task of the control system is to remove e; α should then be increased to enlarge the weight of e in the control rules and improve the dynamic characteristics of the system. On the contrary, when e is small, the primary task is to restrain system overshoot so as to stabilize the system as soon as possible; α should then be decreased, and correspondingly 1 − α increased, to enlarge the weight of ec in the control rules. The rules of fuzzy control can thus be modified by amending α, and once α is fixed, the rules of fuzzy control are defined.

The seam error and the working state vary rapidly owing to the influence of the welding conditions, so fixed fuzzy control rules cannot meet the demands of seam tracking. In practice, α is obtained through table lookup according to E and EC; when α is so defined, online self-modulation of the control rules is realized. Fig. 3 describes the principle of the self-modulated fuzzy control seam tracking system.
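A minimal sketch of the Fuzzy-P switching law described above is given below. The proportional gain and the linear rule for self-modulating α are illustrative stand-ins: the paper obtains α from a lookup table on E and EC, which is not reproduced in the text.

```python
def fuzzy_p_control(e, ec, Ep=1.0, Kp=2.0):
    """Fuzzy-P switching controller (cf. Eq. 6). Above the partition
    threshold Ep, plain proportional control; below it, a weighted
    law u = alpha*e + (1 - alpha)*ec with alpha self-modulated from
    the error magnitude (a stand-in for the paper's lookup table)."""
    if abs(e) >= Ep:
        return Kp * e                      # proportional branch
    alpha = min(abs(e) / Ep, 1.0)          # larger error -> heavier weight on e
    return alpha * e + (1.0 - alpha) * ec

# one control step: 0.6 mm error, error change rate -0.2 mm/step
print(fuzzy_p_control(0.6, -0.2))
```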
Fig. 3. Structure of the self-modulated fuzzy control seam tracking system (the error e and its rate of change ec = de/dt, scaled by K1 and K2, are fuzzified to E and EC; the inference engine, with online parameter self-tuning driven by checking and comparison, feeds the defuzzification stage, whose output is scaled by K3 and applied to the controlled object)
5 Conclusions

This system responds quickly and can recognize the seam under almost all kinds of disturbance, with a high accuracy rate. The resolution of the CCD is 0.05 mm/pixel. The application of the self-modulated fuzzy controller improves the control characteristics. The system meets the needs of automatic positioning and automatic tracking and is easy to operate. Preliminary results have been obtained at the production site; they show that the tracking accuracy of this system can meet the requirements of outside welding for spiral pipes.
Visual Measurement of Path Accuracy for Laser Welding Robots

Dong Du, Bo Sui, Y. F. He, Qiang Chen, and Hua Zhang

Department of Mechanical Engineering, Tsinghua University, Beijing 100084, P.R. China
[email protected]
Abstract. A sensing method based on binocular vision theory is presented to measure the path accuracy of robots for laser welding. The method uses two cameras fixed on the end-effector of the industrial robot to capture images of two marked-lines in the workspace. Based on line matching technology and stereo vision theory, a mathematical model is set up to calculate the spatial coordinates of the two cameras from the captured images. The method is used to measure the path accuracy of a 6-DOF industrial robot. The results indicate that it meets the requirements of continuous path measurement of industrial robots and that the detection can be accomplished easily, with little reference datum needed.
1 Introduction

Industrial robots have been widely used in laser welding applications recently. In these applications, the path accuracy of the robot is the key problem, but few types of equipment can measure the 3-D location of the end-effector while the robot is running, and the available equipment is very expensive. An instrument that can test the variation of path accuracy for industrial robots in a simple way needs to be developed.

Two kinds of measuring methods, contact and non-contact, have been used to measure the position accuracy of industrial robots. The contact method cannot perform continuous path measurement of dynamic trajectory accuracy, as its measuring speed and measuring scope are very limited. Non-contact methods operate in two modes, active and passive. An active measuring system sends a signal such as infrared, laser or ultrasonic to the object and then computes the location of the object by detecting the returned signal [1,2]. Such a system requires a very complex, expensive and hard-to-operate device, so the passive method has been considered recently [3-5].

In this article, a binocular vision system using two cameras fixed on the end-effector of the industrial robot has been set up to measure the path accuracy of industrial robots. To build this system, stereovision theory and the analytic geometry equations in 3-D space have been studied. The projective plane equation of a spatial line imaged by a CCD (Charge Coupled Device) camera is then introduced to derive the mathematical model of the system, which is based on line-matching technology. The path accuracy of an industrial robot has been measured simply and conveniently using this system, and the measuring results meet the demands of continuous path measurement for the laser-welding robot.
2 Camera Projection Model

The working process of a CCD camera is usually described by the pinhole model (Fig. 1). Three coordinate systems are used in this model: the world coordinate system (XW, YW, ZW), the camera coordinate system (xc, yc, zc) and the image pixel coordinate system (u, v). The relationship between the retinal image coordinates and the world coordinates can be written as Eq. 2.1:

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = [\,L \cdot F \;\; 0\,]\, M_1 \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = M X_W \qquad (2.1)$$

where

$$M_1 = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}, \quad M = [\,L \cdot F \;\; 0\,] \cdot M_1, \quad F = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad L = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

Here dx and dy are the physical distances between two adjacent pixels, and (u0, v0) is the pixel corresponding to the optical axis. R and t are the rotation and translation matrices mapping from the world coordinate system to the camera coordinate system. M1 is a 4×4 matrix mapping from the world coordinate system to the camera coordinate system, and M is the perspective projection matrix, which accounts for both the extrinsic and the intrinsic parameters of the camera [3].
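The sketch below assembles the projection matrix M of Eq. 2.1 and projects one world point, assuming illustrative values for the intrinsic parameters f, dx, dy, (u0, v0) and a trivial extrinsic pose:

```python
import numpy as np

# Assumed intrinsics: focal length f (m), pixel pitches dx, dy (m),
# principal point (u0, v0); extrinsics: rotation R, translation t.
f, dx, dy, u0, v0 = 0.05, 1e-5, 1e-5, 400.0, 300.0
L = np.array([[1/dx, 0, u0], [0, 1/dy, v0], [0, 0, 1.0]])
F = np.diag([f, f, 1.0])
R, t = np.eye(3), np.array([0.0, 0.0, 1.0])

M1 = np.block([[R, t[:, None]], [np.zeros((1, 3)), np.ones((1, 1))]])
M = np.hstack([L @ F, np.zeros((3, 1))]) @ M1     # 3x4 projection matrix

Xw = np.array([0.1, 0.05, 2.0, 1.0])              # homogeneous world point
uvw = M @ Xw
u, v = uvw[:2] / uvw[2]                           # divide out z_c
print(f"pixel coordinates: ({u:.1f}, {v:.1f})")
```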
Fig. 1. Pinhole model (world axes XW, YW, ZW and camera axes XC, YC, ZC)
3 Visual Measurement Model for Path Accuracy

Two cameras, both focused on the lines marked in the world coordinate system, are used in this measuring instrument. When the robot moves, the relative position of the marked-lines and the cameras changes, which changes the images captured by the CCDs at a fixed sampling rate. Consequently, estimating the path accuracy amounts to calculating the relative position change over one sampling cycle.

3.1 Projection model of a spatial line
Fig. 2 shows the relationship between a spatial line and its projection in the image plane.

Fig. 2. A spatial line L and its projection l: the projection plane W, with normal vector N, passes through the optical center O, the line L and its image l
Suppose the projection l of a spatial line L in the image plane is expressed by the equation

$$K U = 0$$

where $K = (k_1, k_2, k_3)$ and $U = (u, v, 1)^T$. With reference to Eq. 2.1, premultiplying by the vector K gives

$$[\,k_1 \; k_2 \; k_3\,] \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = [\,K \cdot L \cdot F \;\; 0\,] \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = 0 \qquad (3.1)$$

Let S denote the product of K, L and F; then Eq. 3.1 can be written as

$$S R X_W + S t = 0 \qquad (3.2)$$

Eq. 3.2 describes the projection plane W, which is constructed by the optical center O of the camera lens, the space line L and its projection line l. The normal vector N of the plane W can also be obtained from this equation: it is a 3×1 vector whose direction numbers are given by the product of S and R. This equation is the key to calibrating the camera.

3.2 End-effector position estimation
The projection of two space lines in two CCD cameras is shown in Fig. 3.
Fig. 3. Binocular visual measurement model
Assume CCD1 and CCD2 are rigidly fixed on the end-effector of the robot. The rotation and translation from the world coordinate system to the two camera coordinate systems are described by (R1, t1) and (R2, t2) respectively, and from the CCD1 coordinate system to the CCD2 coordinate system by (R3, t3). Then, because

$$\begin{cases} X_{CCD1} = R_1 X_W + t_1 \\ X_{CCD2} = R_2 X_W + t_2 \\ X_{CCD2} = R_3 X_{CCD1} + t_3 \end{cases}$$

the relationship between (R1, t1), (R2, t2) and (R3, t3) is

$$R_2 = R_3 R_1, \qquad t_2 = R_3 t_1 + t_3 \qquad (3.3)$$

There are two marked-lines L1 and L2 in the world coordinate system, which are linearly independent; their direction numbers are (m1, n1, p1) and (m2, n2, p2), respectively. The parameters of the first sample point (R0, t0) and the world coordinates of the extreme points A1 and A2 have already been determined by the calibration procedure. Let (∆Ri, ∆ti) denote the rotation and translation of the end-effector from sampling point i to sampling point i+1 in the world coordinate system. Then the parameters (Ri+1, ti+1) at point i+1 can be calculated from the following relationships:

CCD1: $R_{i+1,1} = \Delta R_i\, R_{i,1}$, $\; t_{i+1,1} = \Delta t_i + t_{i,1}$ (3.4)

CCD2: $R_{i+1,2} = R_3 R_{i+1,1} = R_3 \Delta R_i\, R_{i,1}$, $\; t_{i+1,2} = R_3 t_{i+1,1} + t_3 = R_3 \Delta t_i + R_3 t_{i,1} + t_3$ (3.5)

Estimating the path accuracy of the robot amounts to calculating the numerical values of the rotation and translation matrices (∆Ri, ∆ti). To achieve this, some geometric constraints must be considered.

1) Perpendicularity constraint: the inner product of the normal vector N and the corresponding marked-line is nil, as shown in the binocular model of Fig. 3. For example, the vector Ni+1,11 is perpendicular to the vector L1, so the inner product Ni+1,11 L1 is zero:

$$\begin{cases} N_{i+1,11} L_1 = S_{i+1,11}\, R_{i+1,1} L_1 = 0 \\ N_{i+1,21} L_1 = S_{i+1,21}\, R_{i+1,2} L_1 = 0 \\ N_{i+1,12} L_2 = S_{i+1,12}\, R_{i+1,1} L_2 = 0 \\ N_{i+1,22} L_2 = S_{i+1,22}\, R_{i+1,2} L_2 = 0 \end{cases} \;\Rightarrow\; \begin{cases} S_{i+1,11}\, \Delta R_i R_{i,1} L_1 = 0 \\ S_{i+1,21}\, R_3 \Delta R_i R_{i,1} L_1 = 0 \\ S_{i+1,12}\, \Delta R_i R_{i,1} L_2 = 0 \\ S_{i+1,22}\, R_3 \Delta R_i R_{i,1} L_2 = 0 \end{cases} \qquad (3.6)$$

2) In-plane constraint: the extreme points A1 (XWA1) and A2 (XWA2) must satisfy the equation of the projection plane constructed from the marked-line L and its projection l:

$$\begin{cases} S_{i+1,11} R_{i+1,1} X_{WA1} + S_{i+1,11}\, t_{i+1,1} = 0 \\ S_{i+1,12} R_{i+1,1} X_{WA2} + S_{i+1,12}\, t_{i+1,1} = 0 \\ S_{i+1,21} R_{i+1,2} X_{WA1} + S_{i+1,21}\, t_{i+1,2} = 0 \\ S_{i+1,22} R_{i+1,2} X_{WA2} + S_{i+1,22}\, t_{i+1,2} = 0 \end{cases} \;\Rightarrow\; \begin{cases} S_{i+1,11}\, \Delta R_i R_{i,1} X_{WA1} + S_{i+1,11} (t_{i,1} + \Delta t_i) = 0 \\ S_{i+1,12}\, \Delta R_i R_{i,1} X_{WA2} + S_{i+1,12} (t_{i,1} + \Delta t_i) = 0 \\ S_{i+1,21}\, R_3 \Delta R_i R_{i,1} X_{WA1} + S_{i+1,21} [R_3 (t_{i,1} + \Delta t_i) + t_3] = 0 \\ S_{i+1,22}\, R_3 \Delta R_i R_{i,1} X_{WA2} + S_{i+1,22} [R_3 (t_{i,1} + \Delta t_i) + t_3] = 0 \end{cases} \qquad (3.7)$$

From Eq. 3.6 and Eq. 3.7 we obtain eight linearly independent equations in which only the matrices ∆Ri and ∆ti are unknown. In these equations, Si+1 is composed of the matrix K and the intrinsic parameters of the CCD; the former is obtained from the image recognition of the lines, and the latter from the calibration of the CCD. If γ, β, θ represent the rotations about the X, Y and Z axes, then ∆Ri and ∆ti can be written as

$$\Delta R_i = \begin{bmatrix} c\gamma\, c\beta & c\gamma\, s\beta\, s\theta - s\gamma\, c\theta & c\gamma\, s\beta\, c\theta + s\gamma\, s\theta \\ s\gamma\, c\beta & s\gamma\, s\beta\, s\theta + c\gamma\, c\theta & s\gamma\, s\beta\, c\theta - c\gamma\, s\theta \\ -s\beta & c\beta\, s\theta & c\beta\, c\theta \end{bmatrix}, \qquad \Delta t_i = \begin{bmatrix} \Delta t_{i,x} \\ \Delta t_{i,y} \\ \Delta t_{i,z} \end{bmatrix} \qquad (3.8)$$

where s- and c- abbreviate sine and cosine. There are six unknown variables to be calculated from these eight equations; the least squares method is used to obtain an optimized value. Thus the deviation from sampling point i to sampling point i+1 is obtained.
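To show how Eqs. 3.6-3.8 turn into a least squares problem, a sketch follows. The packing of the four S vectors and the ordering of the residuals are our assumptions about the setup; the measured quantities (the S vectors from line recognition and calibration, R3, t3, the previous pose, the line directions and the extreme points) must be supplied from the system.

```python
import numpy as np
from scipy.optimize import least_squares

def dR(g, b, th):
    """Rotation increment built from the three angles of Eq. 3.8."""
    cg, sg, cb, sb = np.cos(g), np.sin(g), np.cos(b), np.sin(b)
    ct, st = np.cos(th), np.sin(th)
    return np.array([[cg*cb, cg*sb*st - sg*ct, cg*sb*ct + sg*st],
                     [sg*cb, sg*sb*st + cg*ct, sg*sb*ct - cg*st],
                     [-sb,   cb*st,            cb*ct]])

def residuals(p, S11, S12, S21, S22, R3, t3, Ri1, ti1, L1, L2, XA1, XA2):
    """The eight constraints of Eqs. 3.6-3.7 in the six unknowns
    p = (gamma, beta, theta, dtx, dty, dtz)."""
    DR, dt = dR(*p[:3]), p[3:]
    R_new, t_new = DR @ Ri1, ti1 + dt
    return np.array([
        S11 @ (R_new @ L1), S21 @ (R3 @ R_new @ L1),   # Eq. 3.6
        S12 @ (R_new @ L2), S22 @ (R3 @ R_new @ L2),
        S11 @ (R_new @ XA1 + t_new),                   # Eq. 3.7
        S12 @ (R_new @ XA2 + t_new),
        S21 @ (R3 @ (R_new @ XA1 + t_new) + t3),
        S22 @ (R3 @ (R_new @ XA2 + t_new) + t3)])

# With the measured S11..S22, R3, t3, Ri1, ti1, L1, L2, XA1, XA2 in hand:
# sol = least_squares(residuals, x0=np.zeros(6),
#                     args=(S11, S12, S21, S22, R3, t3, Ri1, ti1,
#                           L1, L2, XA1, XA2))
```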
4 Experiment and Results

An experiment has been carried out to prove the feasibility of this measuring method. The experimental system is made up of a calibration base, two CCD cameras (resolution 800×600), some fixing tools and an industrial robot, as shown in Fig. 4. Two lenses, whose focal lengths are about 50 mm and object field angles about 5.5 degrees, are used in the cameras. Two unparallel lines are marked on the calibration base; these lines are very bright against the black background and are easy to image with the cameras. The precision of the system was tested to be better than ±0.2 mm. In a common laser-welding application, the deviation of the welding path must not exceed ±0.5 mm, so this system can be used to measure the path accuracy of industrial robots used in laser-welding applications.

A 6-DOF (degree of freedom) articulated robot, whose reach radius is about 1300 mm and maximum payload about 6 kg, was measured. The measuring results are shown in Figs. 5 and 6 (the figures of rotation deviation along the y-axis and z-axis are elided). The results show that the higher the moving speed, the worse the path accuracy of the robot; the path position deviation of this robot exceeds 4.05 mm. This measuring result was verified as credible by a contact measurement method.

Fig. 4. System setup (industrial robot with CCD cameras #1 and #2 on fixing tools, viewing the marked-lines)
Fig. 5. Measuring results of position error of industrial robot in different moving velocities
Fig. 6. Measuring results of orientation error (rotation deviation α along the x-axis, in degrees) at different moving velocities (11.22, 45.00 and 90.00 m/min)
5 Conclusion

A measurement method of path accuracy for laser welding robots has been presented. Using this method, the path accuracy of industrial robots can be obtained easily and conveniently. The measurement of a robot moving along a straight-line path was performed successfully. The experimental results demonstrate that the method is capable of measuring the path accuracy of laser-welding robots.
Acknowledgements

This work was supported by the Teaching and Research Award Program for Outstanding Young Teachers in Higher Education Institutions of MOE, P.R.C., and the National Natural Science Foundation of China (No. 59975050).
References

1. Shirinzadeh Bijan. (1998) Laser-interferometry-based tracking for dynamic measurement. Industrial Robot, 25(1): 35-41
2. Yuan Jing, Yu S L. (1999) End-effector position-orientation measurement. IEEE Transactions on Robotics and Automation, 15(3): 592-595
3. Ma Songde. (1996) A self-calibration technique for active vision systems. IEEE Transactions on Robotics and Automation, 12(1): 114-120
4. Motta JMST, Carvalho GC, McMaster RS. (2001) Robot calibration using a 3D vision-based measurement system with a single camera. Robotics and Computer Integrated Manufacturing, 17(6): 487-497
5. Gordon Wells, Carme Torras. (2001) Assessing image features for vision-based robot positioning. Journal of Intelligent and Robotic Systems, 7(1): 95-118
Map Building and Localization for Autonomous Mobile Robots

Y.L. Ip, A.B. Rad, and Y.K. Wong

Department of Electrical Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong SAR, China
[email protected]
Abstract. In this chapter, we present our recent work on map building and localization algorithms for autonomous mobile robots. We start with segment-based map building via an enhanced adaptive fuzzy clustering technique. Based on this algorithm, we develop a dynamic segment-based algorithm in a Bayesian state framework. A localization algorithm is developed in the second part of the chapter: a model-based localization algorithm based on a fuzzy-tuned extended Kalman filter. These algorithms are tested in real-time applications and scenarios.
1 Introduction

Many industrial applications require autonomous mobile robots to perform their tasks in partially known or unknown environments with little a priori information. This requirement dictates that the mobile robot be "aware" of its surroundings: it should be able to build an initial map, to maintain and update the map, and finally to use this map to determine its position. The purpose of this chapter is to discuss these issues and suggest some algorithms for map building and localization. The problem of simultaneous localization and map building (SLAM) will not be discussed here.

Both map building and localization algorithms rely on the perception device, which could be in the form of vision, radar, laser or ultrasonic sensors. We limit our discussion to ultrasonic perception devices; the algorithms described here can be extended to the other perception devices without much difficulty.

There are several approaches to the map building problem, among them occupancy grid and geometric maps. Occupancy grid maps are generated from stochastic estimates of the occupancy state of an object in a given cell. They are rather easy to construct and maintain with reasonable computing power in a small workspace; in large environments, however, the storage space and the processing of the maps become matters of concern. This study focuses on the geometric methodology. Models based on geometric boundaries approximate the location, orientation and size of the boundaries and are applicable to environments
susceptible to inaccurate and/or noisy data. The object boundaries can be approximated with straight lines or curves by additional processing of the sensor measurements.

The rest of this chapter is organized as follows. Fuzzy clustering and its associated algorithms are reviewed in Section 2. A segment-based map building algorithm is reported in Section 3, and this algorithm is extended to the dynamic case in Section 4. In Section 5, we discuss a localization method based on a fuzzy-tuned extended Kalman filter. Experimental results are included in Section 6, and the chapter is concluded in Section 7.
2 Fuzzy Clustering Algorithms

The fuzzy clustering algorithms have been used in many applications involving data segmentation. The basic fuzzy clustering algorithm is the fuzzy c-means (FCM) method developed by Bezdek (Bezdek 1981). FCM is aimed at minimizing the sum of the squared distances between the data points and the c cluster centers Vi. The objective function to be minimized is $J_m : M_{fc} \times R^{cp} \to R_+$,

$$J_m(\{U_{ik}\}, \{V_i\}) = \sum_{i=1}^{c} \sum_{k=1}^{n} (U_{ik})^m (d_{ik})^2 \;\Big|\; \sum_{i=1}^{c} U_{ik} = 1, \; k = 1, 2, \ldots, n \qquad (1)$$

where Mfc is the fuzzy partition space; U ∈ Mfc is a fuzzy c-partition of X; V = {V1, V2, ..., Vc} are the cluster center vectors, each Vi a cluster center of p features, Vi ∈ ℜ^p; X is the matrix of feature vectors, X = {X1, X2, ..., Xn}, Xi ∈ ℜ^p; and dik is given by

$$(d_{ik})^2 = \lVert x_k - v_i \rVert^2 \qquad (2)$$

Here, n is the total number of feature vectors (data points) and c is the number of cluster centers. The index i denotes a cluster, the index k denotes one of the n data points, and p is the dimension of the ℜ space. The exponent m ∈ [1, ∞) is used to adjust the weighting effect of the membership value. The cluster centers vi can be obtained from

$$v_i = \frac{\sum_{k=1}^{n} (u_{ik})^m x_k}{\sum_{k=1}^{n} (u_{ik})^m}, \quad i = 1, 2, \ldots, c \qquad (3)$$

The fuzzy membership matrix U can be obtained from

$$u_{ik} = \frac{1}{\sum_{j=1}^{c} \left( \dfrac{d_{ik}}{d_{jk}} \right)^{\frac{2}{m-1}}}, \quad i = 1, 2, \ldots, c; \; k = 1, 2, \ldots, n \qquad (4)$$

with uik = 1 and ujk = 0 for j ≠ i if dik = 0.
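A compact sketch of the FCM iteration of Eqs. 1-4 is given below, with Euclidean distances and synthetic two-dimensional points standing in for sonar returns:

```python
import numpy as np

def fcm(X, c, m=2.0, u_stop=1e-4, seed=0):
    """Fuzzy c-means: alternate the center update (Eq. 3) and the
    membership update (Eq. 4) until U changes by less than u_stop."""
    rng = np.random.default_rng(seed)
    U = np.full((c, len(X)), 1.0 / c)
    while True:
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)          # Eq. 3
        d = np.maximum(np.linalg.norm(X[None] - V[:, None], axis=2), 1e-12)
        U_new = 1.0 / ((d[:, None] / d[None]) ** (2.0 / (m - 1))).sum(axis=1)
        if np.abs(U_new - U).max() < u_stop:                  # stopping test
            return U_new, V
        U = U_new

# two synthetic point clusters standing in for sonar returns
X = np.vstack([np.random.normal([0, 0], 0.3, (50, 2)),
               np.random.normal([4, 1], 0.3, (50, 2))])
U, V = fcm(X, c=2)
print("cluster centers:\n", np.round(V, 2))
```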
The cluster center matrix V is randomly initialized and the fuzzy partition matrix U is created by using Eq. 4. The initialization of V is done by randomly selecting a value for each feature of a cluster center vi within the range of values exhibited by the feature in the data set. The stopping condition is set to ustop. The theoretical details and the proofs of the above are well documented and can be found in (Bezdek 1981). This FCM can not be used directly for line segment detection in a data set. It is more suitable for feature detection of linear shapes. For our purpose, it is best to directly detect the line segment within the data set space. There are several variations of FCM that allows detection of line segments, such as: Fuzzy c-elliptotypes (FCE) algorithm (Bezdek 1981; Runkler and Palm 1996), Adaptive fuzzy clustering (AFC) algorithm (Dave 1989; Phansalkar and Dave 1997). 2.1
2.1 Noise Clustering
Noise is a common problem in many environments. The data set obtained by the ultrasonic sensors includes an arbitrary distribution of noise, which creates numerous problems for the minimization of squared-error type algorithms. Noise clustering (NC) was proposed in (Dave 1991) to alleviate this problem. In the NC algorithm, the noise is considered a separate cluster which contains the data that cannot be classified into the other clusters. The noise prototype is defined as a universal entity that is always at the same distance from every point in the data set. From (Dave 1991), the noise distance δ is defined by the following equation:
$$\delta^2 = \zeta\left[\frac{\sum_{i=1}^{c-1}\sum_{k=1}^{n}(d_{ik})^2}{n(c-1)}\right] \tag{5}$$

where ζ is the value of the multiplier used to obtain δ from the average of distances. Based on Eq. 5, δ can be recalculated at each iteration of the sequence. The details of the EAFC algorithm along with the NC technique are discussed in the next section.
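As a small illustration, Eq. 5 translates directly into code (a sketch; the array layout, with the noise cluster excluded from the c−1 rows of squared distances, is our assumption):

```python
import numpy as np

def noise_distance(d2, zeta):
    """Noise distance of Eq. 5. d2: (c-1, n) array of squared distances d_ik^2
    from the c-1 'good' cluster prototypes to the n data points."""
    c_minus_1, n = d2.shape
    return np.sqrt(zeta * d2.sum() / (n * c_minus_1))
```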
2.2 Enhanced Adaptive Fuzzy Clustering (EAFC) Algorithm Integrated with the NC Technique

The main feature of our proposed approach is to divide the AFC algorithm into two phases. Phase I is a standard FCM iterative process, whereas phase II is an AFC with the cluster centers initialized by the last cluster center values from phase I. The function of phase I is to select the cluster centers within the data set by the standard FCM algorithm. After the cluster centers in the data set have been obtained, phase II (AFC) is used to calculate the line segments. The previously calculated cluster centers are used for the initialization of the fuzzy partition matrix of AFC to obtain the
clusters with linear prototypes (line segments in our case). In this algorithm, the distance norm is Euclidean. Let $u_{stop\_I}$ and $u_{stop\_II}$ be the stopping conditions in phases I and II, respectively. The outline of the proposed algorithm is as follows:

Phase I
1. Fix c, 2 ≤ c < n; fix 1 ≤ m < ∞; initialize the cluster center matrix V by using randomly selected entries in the data set; initialize the fuzzy partition matrix U(ρ = 0) with Eq. 4.
2. Increment ρ = ρ + 1. Compute the cluster centers $v_i$ by Eq. 3.
3. Update the membership matrix U(ρ) by using Eq. 4, with $d_{ik}$ calculated by Eq. 2.
4. If |U(ρ) − U(ρ−1)| < $u_{stop\_I}$ then stop, else repeat steps 2-4.

Phase II
1. Use the same c as selected in Phase I; fix 1 ≤ m < ∞; initialize the cluster center matrix V and the fuzzy partition matrix U by using the results from Phase I; initialize the mixing coefficients $\alpha_i$, 0 < $\alpha_i$ < 1.
2.-4. Update the linear prototypes, the distances (including the noise distance of Eq. 5) and the membership matrix according to the AFC with NC update equations; the details are given in (Ip et al. 2002).
5. If |U(ρ) − U(ρ−1)| < $u_{stop\_II}$ then stop, else repeat steps 2-5.
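The AFC update equations themselves are in (Ip et al. 2002), but the end product of phase II — a linear prototype of the kind stored for each extracted segment in Section 3 — can be sketched as a membership-weighted eigenvector fit (an illustration under our own assumptions, not the chapter's exact update):

```python
import numpy as np

def line_prototype(X, u, m=2.0):
    """Fit a linear prototype to 2-D points X (n, 2) weighted by memberships u (n,).
    Returns the unit eigenvector phi, the two segment end-points and the center,
    i.e. the ingredients of B_i = {phi, (x_S, y_S), (x_E, y_E), (x_V, y_V)}."""
    w = u ** m
    center = (w[:, None] * X).sum(axis=0) / w.sum()           # cluster center (x_V, y_V)
    d = X - center
    # Weighted scatter matrix; its dominant eigenvector is the segment direction.
    scatter = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0)
    vals, vecs = np.linalg.eigh(scatter)
    phi = vecs[:, np.argmax(vals)]
    t = d @ phi                                                # projections onto the line
    return phi, center + t.min() * phi, center + t.max() * phi, center
```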
3 Segment-Based Map Building via EAFC with NC for a Static Environment

In the above section, we provided a brief outline of the enhanced adaptive fuzzy clustering algorithm with the noise clustering approach (EAFC with NC). The main contribution of this algorithm is to extract line segments from noisy sonar measurements. In this section, we introduce how to obtain the environment model in real time while the robot travels. The algorithm is based on the recursive updating of the line segments to extract the features in the map, and proceeds by the following sequence of steps, which analyzes the uncertainty in the sensor information at different levels:

• As the mobile robot moves in the environment, the consecutive observations obtained by the sonar sensors are given by

$$S^n = \{(x_1^n, y_1^n), (x_2^n, y_2^n), \dots, (x_k^n, y_k^n)\}, \qquad n = 1, 2, \dots, 16 \tag{12}$$

Here, n denotes the sonar sensor (16 in total) and k is the number of measurements at each sampling interval.
• The world environment is modeled in two dimensions and divided into square cells $C_{(X,Y)}$ (local models) (see Fig. 1) to reduce the complexity of the data structure. During robot sampling, the sonar measurements of the environment are transformed to the global coordinate system. At the same time, the sonar data points are grouped into the related cells. The cell $C_{(X,Y)}$ is defined by

$$C_{(X,Y)} = \{(x_1, y_1), (x_2, y_2), \dots, (x_\ell, y_\ell), B_1, B_2, \dots, B_i\} \tag{13}$$

where X and Y are the cell location along the global-frame X-axis and Y-axis, respectively, and the index ℓ denotes the number of sensor measurements within the cell. The ith line segment extracted from the corresponding sonar data points is given by

$$B_i = \{\phi, (x_S, y_S), (x_E, y_E), (x_V, y_V)\} \tag{14}$$

where φ is the unit eigenvector of the segment; the end-points of the segment, $(x_S, y_S)$ and $(x_E, y_E)$, are obtained by the EAFC with NC approach; and $(x_V, y_V)$ is the cluster center of this segment. The cell $C_{(X,Y)}$ contains a discrete sample of the outline of the feature within the cell.
Fig. 1. a) Grouping the sensor measurements in cell $C_{(X,Y)}$. b) Updating the sensor measurements and line segments simultaneously.
• The line segments in each cell $C_{(X,Y)}$ are extracted by the EAFC with NC and stored in $B_i$.
• Finally, the similar line segments within a cell are combined by the compatible line segment merging technique (Ip et al. 2002), proposed in the authors' previous work. The contribution of the compatible line segment merging technique is to merge similar basic segments into a single line segment. The basic segments are discontinuous outlines of the object boundaries in the vicinity of the mobile robot trajectory (Fig. 1). In order to improve the quality of the map, the line segments representing the same boundary are merged by evaluating the relation between the eigenvectors of the segments within the cell.

In the proposed map building algorithm, the compatible line segment merging technique is implemented in two routines: "local grouping" and "global grouping". When two segments (i.e. two clusters in a cell) are extracted, we apply the compatible line segment merging technique to merge those segments; this routine is called "local grouping". In the "global grouping" routine, the cells are combined by the compatible line segment merging technique, using the previously updated cells as centers and merging the updated cells with their surrounding cells. The two routines are shown in Fig. 2. The merged segments and the locations of the cells are stored in the "global buffer" for display and global map information. However, the segments in the cells are still stored in the local cell memory. This information is saved for the next update.
Fig. 2. The operation of “Local Grouping” and “Global Grouping” routines.
The previous cell information is used for initialization in the EAFC with NC. Since the EAFC with NC segment extraction scheme was proposed in the authors' previous work, its theoretical details and discussion are well documented and can be found in (Ip et al. 2002).
3.1 Outline of the proposed On-line Segment-based Map Building algorithm
Task 1: Initialization. Fix the number of clusters c and the stopping conditions $u_{stop\_I}$ and $u_{stop\_II}$ for each cell. The size of each cell must also be fixed.
Task 2: Obtain the sonar sensor measurements $S^n$ from the environment and transform them to the global coordinate system.
Task 3: Group the sensor measurements into the related cells $C_{(X,Y)}$.
Task 4: Extract the line segments in each cell by the EAFC with NC approach and store them in the form of $B_i$ in Eq. 14. The information in $B_i$ (i.e. the cluster centers $(x_V, y_V)$ and mixing coefficients α) is used for the initialization of the EAFC with NC.
Task 5: "Local grouping" routine: combine the similar cluster segments in each cell by the compatible line segment merging technique. Store all the updated cells in the buffer for "global grouping". Repeat Tasks 2 to 5 for the next sampling interval. We use $T_{local}$ to denote the sonar sensor sampling interval.
Task 6: "Global grouping" routine: we use $T_{global}$ ($T_{global} > T_{local}$) to denote the "global grouping" sampling interval. We use the updated cells (stored in the buffer in Task 5) as centers to merge the surrounding cells. The merged segments and cell locations are stored in the "global buffer". Once the mobile robot finishes the navigation in the workspace, the final map information is selected from each cell and the "global buffer". If cells overlap, we select the segments in the "global buffer" instead of those in the cells. (A sketch of Tasks 2 and 3 is given below.)
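A minimal sketch of Tasks 2 and 3 — the coordinate transform and the cell grouping — follows; the pose convention (x, y, heading), the cell size and the dictionary-based cell store are our own assumptions:

```python
import math
from collections import defaultdict

CELL_SIZE = 1.0  # assumed cell side length (m); the chapter leaves this a design parameter

def to_global(pose, r, beam_angle):
    """Task 2: transform one sonar range reading r into the global frame.
    pose: (x, y, heading) of the robot; beam_angle: sensor axis w.r.t. heading."""
    x, y, heading = pose
    a = heading + beam_angle
    return (x + r * math.cos(a), y + r * math.sin(a))

def group_into_cells(points, cells=None):
    """Task 3: group global measurement points into square cells C(X, Y)."""
    cells = cells if cells is not None else defaultdict(list)
    for gx, gy in points:
        cells[(int(gx // CELL_SIZE), int(gy // CELL_SIZE))].append((gx, gy))
    return cells
```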
4 Dynamic Segment-Based Map Building via a Bayesian State Estimation Framework

In the previous section, our proposed segment-based map building via EAFC with NC for a static environment was introduced. In this section, we extend the proposed map building algorithm to handle a dynamic environment. The dynamic case in this paper is defined as follows: the objects in the modeled environment are not fixed and can be relocated by humans.
4.1 Estimating the state of segments
We use segment-based map building with sonar sensors to extract the objects in the environment, and we propose using the Bayesian update rule to estimate and update the state of the segments. Each extracted segment is divided into j parts. Each part is assigned a probability function of its state $s(B_{ij})$, conditioned on r observations (sonar sensor measurements) within the Bayesian framework. $B_{ij}$ denotes the jth part of the ith extracted segment in the related cell $C_{(X,Y)}$. The state variable $s(B_{ij})$ associated with a part of segment $B_{ij}$ is defined as a discrete random variable with two states, present and absent, denoted PRESENT and ABSENT. Since the states are exclusive and exhaustive, $P[s(B_{ij}) = \mathrm{PRESENT}] + P[s(B_{ij}) = \mathrm{ABSENT}] = 1$. To determine how a sensor reading is used to estimate the state of a part of a segment, we start by applying Bayes' theorem (Berger 1985) to a part of segment $B_{ij}$:

$$P[s(B_{ij}) = \mathrm{PRESENT} \mid r] = \frac{p[r \mid s(B_{ij}) = \mathrm{PRESENT}]\,P[s(B_{ij}) = \mathrm{PRESENT}]}{\sum_{s(B_{ij})} p[r \mid s(B_{ij})]\,P[s(B_{ij})]} \tag{15}$$

where $P[s(B_{ij}) = \mathrm{PRESENT}]$ is the prior presence probability of the part of segment $B_{ij}$, and $P[s(B_{ij}) = \mathrm{PRESENT} \mid r]$ is the new presence probability of the part of segment $B_{ij}$ based on the r observations. In Eq. 15, the term $p[r \mid s(B_{ij}) = \mathrm{PRESENT}]$ corresponds to the sonar sensor pdf, which will be discussed in the following section. In order to estimate the state of the complete segment, the average over the j parts of the segment states is calculated as shown below:

$$P[s(B_i) = \mathrm{PRESENT} \mid r] = \frac{\sum_j P[s(B_{ij}) = \mathrm{PRESENT} \mid r]}{j} \tag{16}$$

The value of j can be determined by

$$j \approx \frac{\text{length of segment } B_i}{\sigma} \tag{17}$$

where σ is the range variance of the sonar sensors; here, we use 40 mm. The minimum value of j is fixed at 1.
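A sketch of the update of Eqs. 15-17 is given below; the two pdf callbacks stand in for the fuzzy-tuned sonar model of the next section, and their names are our own:

```python
def bayes_update(prior, r, pdf_present, pdf_absent):
    """One Bayesian update of P[s(B_ij) = PRESENT] from an observation r (Eq. 15)."""
    num = pdf_present(r) * prior
    den = num + pdf_absent(r) * (1.0 - prior)
    return num / den if den > 0.0 else prior

def segment_presence(part_probs):
    """Average the j part states to score the whole segment (Eq. 16)."""
    return sum(part_probs) / len(part_probs)

def num_parts(segment_length, sigma=40.0):
    """Number of parts j from the segment length (Eq. 17); minimum fixed at 1."""
    return max(1, round(segment_length / sigma))
```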
4.2 Fuzzy tuned sonar sensor model
The sonar pdf is further tuned to accommodate a dynamic environment. A similar idea was proposed in the authors' previous work (Chow et al. 2002) for grid-based map building. Here we propose an enhanced and modified version of the tunable sonar pdf model. We select a zero-mean Gaussian distribution to represent the sonar
sensor range readings. Only radial uncertainty is considered. The pdf model is as follows:

$$p(r \mid z) = \begin{cases} k_\varepsilon, & \text{if } x < r \text{ and } k_o \exp\!\left(\dfrac{-(r-z)^2}{\sigma_r^2}\right) \le k_\varepsilon \\[4pt] k_o \exp\!\left(\dfrac{-(r-z)^2}{\sigma_r^2}\right), & \text{if } x < r \text{ and } k_o \exp\!\left(\dfrac{-(r-z)^2}{\sigma_r^2}\right) > k_\varepsilon, \text{ or if } x > r \text{ and } k_o \exp\!\left(\dfrac{-(r-z)^2}{\sigma_r^2}\right) > 0.5 \\[4pt] 0.5, & \text{otherwise} \end{cases} \tag{18}$$

where
x = the distance from the sensor;
$k_\varepsilon$ = a parameter that corresponds to the empty space and is tuned by a fuzzy model;
$k_o$ = a parameter that corresponds to the occupied space and is tuned by a fuzzy model;
$\sigma_r^2$ = the variance of the radial probability;
r = the range measurement of the sonar sensor;
z = the true parameter-space range value.

To extend the derivation to two spatial dimensions, consider the example of a range sensor characterized by Gaussian uncertainty in both the radial and angular directions. This can be represented as

$$p(r \mid z, \theta) = \begin{cases} 1 - (1-k_\varepsilon)\exp\!\left(\dfrac{-\theta^2}{\sigma_\theta^2}\right), & \text{if } x < r,\ p(r \mid z) = k_\varepsilon \text{ and } (1-k_\varepsilon)\exp\!\left(\dfrac{-\theta^2}{\sigma_\theta^2}\right) > 0.5 \\[4pt] 1 - \left(1 - k_o \exp\!\left(\dfrac{-(r-z)^2}{\sigma_r^2}\right)\right)\exp\!\left(\dfrac{-\theta^2}{\sigma_\theta^2}\right), & \text{if } x < r,\ k_\varepsilon < p(r \mid z) < 0.5 \text{ and } \left(1 - k_o \exp\!\left(\dfrac{-(r-z)^2}{\sigma_r^2}\right)\right)\exp\!\left(\dfrac{-\theta^2}{\sigma_\theta^2}\right) > 0.5 \\[4pt] k_o \exp\!\left(\dfrac{-(r-z)^2}{\sigma_r^2}\right)\exp\!\left(\dfrac{-\theta^2}{\sigma_\theta^2}\right), & \text{if } p(r \mid z) > 0.5 \text{ and } k_o \exp\!\left(\dfrac{-(r-z)^2}{\sigma_r^2}\right)\exp\!\left(\dfrac{-\theta^2}{\sigma_\theta^2}\right) > 0.5 \\[4pt] 0.5, & \text{otherwise} \end{cases} \tag{19}$$

where
θ = the azimuth angle measured with respect to the beam central axis;
$\sigma_\theta^2$ = the variance of the angular probability.

In this algorithm, $k_\varepsilon$ and $k_o$ are tuned by the fuzzy systems described below (see Figs. 4 and 5). In both fuzzy systems, the fuzzy model includes two inputs and a single output with a singleton fuzzifier, product inference engine and center-average defuzzifier (Wang 1997). The tuning strategy is based on the following four variables, whose geometric representation is shown in Fig. 3:
• $P_{maxB}$: the highest probability among the extracted segments within the sensor cone.
• $P_{*B}$: the probability of the nearest segment within the sonar cone.
• dr: the distance between the range reading and the extracted segment with the highest probability within the sensor cone (i.e. the range of the sensor minus the location of the extracted segment with the highest probability within the sensor cone).
• ∆dr: the change of dr (i.e. $\Delta dr = dr_t - dr_{t-1}$, where t is the sampling time).
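A direct transcription of the radial model of Eq. 18, as reconstructed above, reads as follows (a sketch; the argument order and names are ours):

```python
import math

def sonar_pdf_radial(x, r, z, k_eps, k_o, sigma_r2):
    """Radial sonar pdf p(r|z) of Eq. 18. x: distance from the sensor,
    r: range reading, z: true parameter-space range, sigma_r2: radial variance."""
    g = k_o * math.exp(-((r - z) ** 2) / sigma_r2)
    if x < r:
        # In front of the reading: floor at the (fuzzy-tuned) empty-space value.
        return k_eps if g <= k_eps else g
    if x > r and g > 0.5:
        # Behind the reading: keep the Gaussian tail while it exceeds "unknown".
        return g
    return 0.5  # otherwise: unknown
```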
Fig. 3. a) Basic condition for tuning $k_\varepsilon$ and $k_o$. b) Condition for tuning $k_\varepsilon$. c) Condition for tuning $k_o$.
The specific criteria for tuning these two parameters are discussed below. For the first criterion, the acute angle $\theta_{oi}$ between the extracted line segment $B_i$ and a valid sonar beam should lie between 90° and $\theta_{limit}$, where $\theta_{limit}$ is the angle limit. Since half of the valid sonar beam angle is 12.5°, $\theta_{limit}$ is selected as about 77.5°. Fig. 3(a) shows a situation that satisfies the first criterion. If the angle $\theta_{oi}$ exceeds this limit, the sonar data point is not used to update the probability of the line (or cell). The parameter $k_\varepsilon$ corresponds to the probability that a cell is empty (or that the segment is absent, in our case). In the traditional approach, this parameter does not exist and is implicitly set near zero. Therefore, when the detected range of the sensor is much larger than the object's actual location, the probability of the corresponding extracted segment may be reduced to near zero at once during the Bayesian update. In order to avoid this problem, it is suggested to increase $k_\varepsilon$ when the range of the sensor measurement is much larger than the extracted segment location, and vice versa. The corresponding membership functions and fuzzy rule base are shown in Figs. 4 and 5, respectively. This fuzzy system is only activated when the detected range of the sensor measurement is larger than the distance between the corresponding extracted segment and the position of the sensor (i.e. dr is positive)
and the highest probability of the corresponding extracted segment is larger than $P_{th}$ (i.e. $P_{maxB} > P_{th}$); otherwise $k_\varepsilon$ is kept constant.

Fig. 4. The membership functions (over the inputs $P_{maxB}$, $P_{*B}$, dr and ∆dr) and the output consequence fuzzy sets for tuning a) $k_o$ and b) $k_\varepsilon$.
The parameter $k_o$ corresponds to the probability that the cell is occupied (or that the segment is present, in our case). In conventional approaches, this parameter is fixed. Therefore, when the object is moved towards the sensor, the probability cannot rise again. In our approach, $k_o$ is tuned by the fuzzy system: it is suggested to increase $k_o$ when the object moves towards the mobile robot (or sonar sensor). The corresponding membership functions and fuzzy rule base are shown in Figs. 4 and 5, respectively. This fuzzy system is only activated when the detected range of the sensor lies between the location of the sensor and the existing extracted
segment (i.e. dr is negative) and $P_{*B}$ is lower than 0.5; otherwise $k_o$ is kept fixed.
Fig. 5. a) Fuzzy rule table corresponding to $k_o$. b) Fuzzy rule table corresponding to $k_\varepsilon$.
4.3 Outline for updating the state of an extracted segment
Note that the input and output membership functions in Figs. 4 and 5 are tuned by simulation trials. All newly extracted segments are initialized with a probability state of 0.5. A step-by-step procedure for updating the state of an extracted segment is as follows:
1. Obtain the sonar sensor measurements, use the kinematic transformation to obtain their global locations and store them in the cell $C_{(X,Y)}$.
2. Update and extract the segment in the corresponding cell by the EAFC with NC for all new sonar measurements, provided the total number of data points in the related cell exceeds 5 (a design choice).
3. Locate the extracted segment with the highest probability within the new sensor measurement cone, and obtain dr.

Control and Stabilization of the Inverted Pendulum

$$\big[\ddot\theta + \sin\theta > 0 \text{ if } \dot\theta < 0\big] \quad \text{else} \quad \big[\ddot\theta + \sin\theta < 0\big], \qquad \forall\theta,\ -\pi < \theta < \pi;\ \forall\theta \neq 0 \tag{60}$$
Thus, solving for θ̈ in the global nonlinear system given by (37) yields

$$\ddot\theta = \frac{(M + m' + m)(g + \ddot y)\sin\theta - m\,l\,\sin\theta\cos\theta\,\dot\theta^2 - F_x\cos\theta}{(M + m')\,l + m\,l\,\sin^2\theta} \tag{61}$$

which substituted into (59) gives
$$\dot V(x_1, x_2) = \frac{-\dot\theta\left\{F_x\cos\theta - \left[(M + m' + m)(g + \ddot y) + (M + m')\,l + m\,l\,\sin^2\theta - m\,l\,\cos\theta\,\dot\theta^2\right]\sin\theta\right\}}{(M + m')\,l + m\,l\,\sin^2\theta} \tag{62}$$
Note that in step (61) we have, again, made use of the control strategy based on considering the vertical acceleration as an exogenous variable in the pendulum dynamics, which is the most feasible procedure for controlling the system when simultaneously applying horizontal and vertical forces. From (62), we propose the control law
$$F_x = \left[(M + m' + m)(g + \ddot y) + (M + m')\,l + m\,l\,\sin^2\theta - m\,l\,\cos\theta\,\dot\theta^2\right]\operatorname{tg}\theta + k_d\,\dot\theta \tag{63}$$
which, with $k_d > 0$, completely guarantees the negative definiteness of the first derivative of the Lyapunov function for −π < θ < π:
$$\dot V(x_1, x_2) = \frac{-k_d\,\dot\theta^2}{(M + m')\,l + m\,l\,\sin^2\theta} \tag{64}$$
Note that, for the values of interest of the variable, the control law (63) can be expressed as a conventional PD control law:

$$F_x = k_p\operatorname{tg}\theta + k_d\,\dot\theta \approx k_p\,\theta + k_d\,\dot\theta \tag{65}$$
In fact, the control law (63) is a refinement of the stability conditions that we obtained in the ODE analysis and with Lyapunov's indirect method (see expressions (43) and (56)-(57), respectively). Thus, by applying Lyapunov's direct method, we have arrived at a more precise and refined control law that guarantees global IP stability. Consequently, the first step in the actual stabilization of the IP is to design a quantitative control law by introducing specific performance indices such as rise time, settling time, percent overshoot, bandwidth, etc. Afterwards, the global stability of the IP is guaranteed by additionally constraining the designed control law to satisfy the condition given by (63).
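As an illustration of the closed loop defined by (61) and (65), the following sketch integrates the pendulum angle under the PD law; the parameter values mirror the Appendix simulations, while the gains and the simple Euler integrator are our own illustrative choices:

```python
import math

M, m_platform, m, l, g = 2.0, 0.2, 0.1, 0.5, 9.81  # masses (kg), length (m), per the Appendix
kp, kd = 40.0, 8.0                                  # illustrative PD gains, not from the chapter

def theta_ddot(theta, theta_dot, Fx, y_ddot=0.0):
    """Angular acceleration from Eq. 61, with the vertical acceleration y_ddot
    treated as an exogenous input (m_platform plays the role of m')."""
    num = ((M + m_platform + m) * (g + y_ddot) * math.sin(theta)
           - m * l * math.sin(theta) * math.cos(theta) * theta_dot ** 2
           - Fx * math.cos(theta))
    return num / ((M + m_platform) * l + m * l * math.sin(theta) ** 2)

def simulate(theta0, dt=1e-3, T=3.0):
    """Euler integration of the loop closed by the PD law of Eq. 65."""
    th, thd = theta0, 0.0
    for _ in range(int(T / dt)):
        Fx = kp * math.tan(th) + kd * thd  # Eq. 65; tg(theta) ~ theta near upright
        thd += theta_ddot(th, thd, Fx) * dt
        th += thd * dt
    return th, thd  # should approach (0, 0) for moderate initial deviations
```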
4 Concluding Remarks Although the use of high-frequency oscillating vertical forces for IP stabilization is a well-known technique, the application of generic vertical forces to stabilize the IP has not been, theoretically and practically, fully developed to date. In this chapter, the novel idea of controlling and stabilizing the IP via vertical forces has been introduced and thoroughly analyzed. After having established the dynamic equations of the IP with a generic vertical force applied to its base, we studied IP control and stabilization. The final conclusion is that the vertical force has an excellent and fast stabilization effect, although at the cost of maintaining the IP in free-fall. After this preliminary analysis, the chapter approaches its main contribution, namely, the combination of the customary horizontal force with the vertical force. Roughly speaking, the horizontal force permits a direct stabilization of IP by means of a feedback control action, while the vertical force significantly improves IP stabilization, mainly due to its fast response to external perturbations of the IP equilibrium state. The theoretical analysis of the combined forces has been developed for both the ODE and the variable state representations. In particular, the necessary and sufficient conditions of the local stability of the IP controlled by a PD algorithm have been obtained. Furthermore, by applying Lyapunov’s direct method, the control law, which turns out to be a PD-like feedback action that guarantees the global stability of the IP, has also been obtained. As a general concluding remark, the chapter has demonstrated the excellent properties of the vertical force as regards the stabilization of the inverted pendulum.
Acknowledgments The idea of stabilizing the inverted pendulum via a vertical force originated from endless discussions with my father, Prof. Dario Maravall-Casesnoves of the Royal Academy of Sciences, Madrid. The control law defined by expressions (46) and (47) was proposed by Javier Alonso-Ruiz. Special thanks are due to Prof. C. Zhou of the Singapore Polytechnic for very fruitful discussions and insightful comments and for his invitation to write this chapter.
References

1. Khalil HK (1996) Nonlinear systems. Second edition. Prentice-Hall, Upper Saddle River, NJ
2. Nelson J, Kraft LG (1994) Real-time control of an inverted pendulum system using complementary neural network and optimal techniques. Proc. American Control Conference, 2553-2554
3. Zhou C, Ruan D (2002) Fuzzy control rules extraction from perception-based information using computing with words. Information Sciences 142: 275-290
4. Ghosh BK, Xi N, Tarn TJ (1999) Control in robotics and automation: sensor-based integration. Academic Press, San Diego, CA
5. Maravall D, de Lope J (2002) A reinforcement learning method for dynamic obstacle avoidance in robotic mechanisms. In: Ruan D, D´hondt P, Kerre EE (eds) Computational intelligent systems for applied research. World Scientific, Singapore, 485-494
6. Schaal S (1997) Learning from demonstration. In: Mozer MC, Jordan M, Petsche T (eds) Advances in neural information processing systems 9. MIT Press, Cambridge, MA, 1040-1046
7. Sprenger B, Kucera L, Mourad S (1998) Balancing of an inverted pendulum with a SCARA robot. IEEE/ASME Trans. Mechatronics 3 (2): 91-97
8. Schreiber G, Ott C, Hirzinger G (2001) Interactive redundant robotics: control of the inverted pendulum with nullspace motion. Proc. IROS 2001, 158-164
9. Chung CY (2002) Balancing of an inverted pendulum with a kinematically redundant robot. Int. J General Systems 31 (1): 1-15
10. Bloch AM, Leonard NE, Marsden JE (2000) Controlled Lagrangians and the stabilization of mechanical systems I: the first matching theorem. IEEE Trans. Automatic Control 45 (12): 2253-2270
11. Bloch AM, Chang DE, Leonard NE, Marsden JE (2000) Controlled Lagrangians and the stabilization of mechanical systems II: potential shaping. IEEE Trans. Automatic Control 46 (10): 1556-1570
12. Stephenson A (1908) On induced stability. Philosophical Magazine 15: 233-236
13. Kapitsa PL (1951) Dynamic stability of a pendulum with a vibrating point of suspension. Zh. Ehksp. Teor. Fiz. 21 (5): 588-598
14. Fliess M, Levine J, Martin P (1995) Flatness and defect of non-linear systems: introductory theory and examples. Int. J Control 61 (6): 1327-1361
15. Furuta K, Yamamoto M, Kobayashi S (1992) Swing-up control of inverted pendulum using pseudo-state feedback. J Systems Control Eng. 206 (6): 263-269
16. Widjaja M, Yurkovich S (1995) Intelligent control for swing up and balancing of an inverted pendulum system. Proc. IEEE Conf. Control Applications, 534-542
17. Fantoni I, Lozano R (2001) Non-linear control for underactuated mechanical systems. Springer, Berlin
18. Meerkov S (1980) Principle of vibrational control: theory and applications. IEEE Trans. Automatic Control 25: 755-762
19. Bogaevski V, Povzner A (1991) Algebraic methods in nonlinear perturbation theory. Springer-Verlag, New York
20. Hillsley KL, Yurkovich S (1993) Vibration control of a two link flexible robot arm. Dynamics and Control 3 (3): 261-280
21. Acheson D (1993) A pendulum theorem. Proc. Royal Society of London, series A 443: 239-245
22. Baillieul J, Lehman B (1996) Open-loop control using oscillatory inputs. In: Levine WS (ed) The control handbook. CRC Press & IEEE Press, Boca Raton, FL, 967-980
23. Dimeo RM, Thomopoulos SCA (1994) Novel control of an inverted pendulum. Proc. American Control Conference, 2185-2189
24. Wu Q, Thornton-Trump AB, Sepehri N (1998) Lyapunov stability control of constrained inverted pendulums with general base point motion. Int. J Non-linear Mechanics 33: 801-818
25. Wu Q (1999) Lyapunov's stability of constrained inverted pendulums. Proc. American Control Conference, 293-297
26. Raya A, Maravall D (2001) Contribution to the control and stabilization of the pole-cart system. In: Moreno-Diaz R, Buchberger B, Freire JL (eds) Computer aided systems theory. LNCS 2178, Springer, Berlin, 437-449
27. Jose JV, Saletan EJ (1998) Classical dynamics: a contemporary approach. Cambridge University Press, Cambridge
Appendix. Experimental Results

J. Alonso-Ruiz and D. Maravall
Department of Artificial Intelligence, Faculty of Computer Science, Universidad Politécnica de Madrid, Madrid 28660, Spain

We briefly present some of the experimental results obtained from computer simulations, in which IP stabilization via the combination of horizontal and vertical forces is investigated and compared with a single horizontal force. We have considered the following values of the system parameters: cart mass, 2 kg; platform mass, 0.2 kg; pendulum mass and length, 0.1 kg and 0.5 m, respectively. Unless otherwise indicated, in all figures distances are in meters (m), time in seconds (s), forces in newtons (N) and angular displacements in radians. In all the reported examples, the respective PD algorithm gains have been chosen to optimize the usual performance indices (rise time, overshoot peak and settling time), either for the combined horizontal and vertical forces or for the single horizontal force. Furthermore, for the combined case, the stability conditions obtained in the theoretical analysis (see expressions (43), (56), (57) and (63)) have been applied. Figure 4 shows the IP trajectories for initial deviations from 5º to 30º in 5º steps.
Fig. 4. Instances of IP stabilization with (a) a combination of horizontal and vertical forces and with (b) a single horizontal force.
Note the significant improvement achieved by the addition of the vertical force, which makes the IP stabilize much faster. In particular, the greater the IP's initial deviation, the stronger the positive effect of the vertical component. It is also interesting to compare the control efforts, so the respective horizontal force profiles are depicted in Figure 5. Note the ±20 N restriction on the range of the applied forces. The same initial deviations as in Figure 4 have been considered.
Fig. 5. Profiles of the horizontal forces for (a) the combined case and (b) the single horizontal case. The greater the initial deviation, the stronger the applied force.
Note that the reductions in the control effort achieved with the combination of horizontal and vertical forces are significant. Of course, in these cases there is an additional control effort produced by the vertical force, although its respective energy cost is comparatively negligible, because the platform mass is small in comparison to the cart mass. Figure 6 shows the vertical forces profiles and the respective vertical displacements of the platform.
Fig. 6. Vertical forces profiles and platform vertical displacements for the same range of initial deviations as above.
Vision Based Motion Planning of Humanoid Robots

M. Fikret Ercan and Changjiu Zhou
School of Electrical and Electronic Engineering, Singapore Polytechnic, 500 Dover Road, Singapore 139651, Republic of Singapore
http://www.robo-erectus.org
Abstract. Vision is the most effective form of sensing for humans in understanding their environment, recognizing the objects around them and navigating from one point to another. Humanoid robots mimic human behaviour and are mainly intended for applications where robots need to work in an environment designed for humans. Naturally, computer vision serves as their dominant sensing unit. Computer vision is a powerful tool: in addition to steering the robot, it can also play a key role in achieving the robot's intelligence and its interaction with the outside world. This paper discusses various aspects of exploiting computer vision in humanoid robots.
1 Introduction

In the past, robots were predominantly developed and utilized in factories for manufacturing and transportation purposes. However, a new generation of robots has recently begun to emerge: service robots that cooperate with people and assist them in their everyday tasks. Many wheeled mobile robots, operating in indoor or outdoor environments, have been studied extensively for this purpose. Lately, researchers have become interested in developing human-like robots named humanoids. The main objective of realizing a human-like robot is to develop robots that can operate in an environment designed for humans. Ideally, it is expected that such a robot would be a replica of humans with motion, sensing, reasoning and socializing capabilities, and even emotional responses. Research in this field has been promoted by well-respected researchers [3], though it was initially perceived as an overly ambitious challenge. Recent developments, however, demonstrate that the emergence of humanoid robots in our daily life is not too far away. Significant examples are the P2, P3 and Asimo robots developed by Honda [12]. With such astounding demonstrations, research in humanoid robots is gaining popularity and attracting many research institutions [21, 18, 16]. For instance, the humanoid SIG is designed to study the integration of perceptual information to control the robot [18]. The prototype H7 was developed by Kawada Industries to provide an experimental research
platform for full-body integrated sensing and control [16]. Recently, Sony announced a new biped robot [27]. Sony's SDR-4X is 50 cm tall, weighs 6.5 kg, and uses a pair of 64-bit processors. It has speech recognition, speech synthesis, and stereo CCD vision sensors to provide face recognition and distance measuring capabilities. Earlier work in humanoid robots was mainly concerned with elemental functions such as bipedal walking, control algorithms for smooth movement, and multi-sensor fusion. A significant amount of the work reported in the literature addresses bipedal locomotion, and many methods have been introduced for controlling robot walking (see for instance [15]). With the improvements made in the stable bipedal walking of humanoids, attempts to make them autonomous become viable. Integrating a vision-based perception system can advance robot locomotion and behaviour. Using computer vision in robotics is not new, and there are many vision-guided mobile robots reported in the literature [20, 6]. These systems generally utilize dedicated hardware, especially to rapidly extract useful information, such as depth information from stereovision [21], obstacles or landmarks, to steer the robot reliably. However, in the case of humanoids, integrating a sophisticated computer vision unit does not only assist in steering the robot but also allows it to interpret visual data to recognize its environment, plan a proper motion and interact (even socialize [2]) with its environment autonomously. The capability of a robot to understand its environment is a key parameter in achieving these goals. A well-known approach for this purpose is model-based processing of image data [20, 28]. It is a goal-oriented approach and requires knowledge about the robot's environment through partial modelling of the unchanging parts of that environment. Basically, the following objectives are to be achieved:
• Localizing the robot by analyzing the scene and finding a match of the current scene with those stored in a database.
• Detecting obstacles on a mobile robot's path. The knowledge-based vision system is used to detect unexpected object features in a scene. This is achieved by finding mismatches with the world model defined in the system.
• Planning robot navigation by referring to the correspondence between image features and expected landmark locations.

Perception is achieved by a hypothesize-and-test method, where the model-based system produces hypotheses about the objects in the scene and their relationships with other objects and verifies them against the sensory input. Recently, there have been other innovative approaches in vision-guided systems that differ significantly from the classical sensory input, analysis and action type of control scheme. Instead, a neural network, which acts as a controller, is placed between the input pixels and the motor drivers (see for instance [1, 22, 3]). This method proposes that a robot will develop a proper behaviour by progressively learning its environment and training its neural network controller. An apparent major drawback is the time-consuming training process of the network. This is an active research area, though its application to robots operating in an uncontrolled environment is not immediate; current research demonstrates solutions to rather simple problem cases.
This paper is mainly focused on the application of vision in humanoid robots, in particular its perception and motion planning aspects. In the following section, various features of computer vision are discussed. Integrating a computer vision system into a humanoid robot is elaborated in Section 3, where the underlying research issues are presented. Section 4 describes a humanoid robot and its vision system developed at our institution.
2 Challenges of Computer Vision

The governing human sense is vision; therefore, harnessing an artificial counterpart will help robots attain autonomy. A truly autonomous system must possess the decision-making ability (intelligence) to work in a non-deterministic world, like a human being. Reliable decision-making is highly dependent on the information collected from sensors. As images of the surrounding environment provide the most valuable information, many robots employ a vision-based sensory unit [3, 20]. However, there are two main factors governing a robust vision system, and they can be broadly categorized as software and hardware issues.

2.1 Software issues

Unlike the human vision system, which is endowed with natural mechanisms to filter out and reduce the burden of irrelevant information, a vision-guided robot must be able to extract useful information from the input images. However, there is no single or standard solution to achieve this goal. Usually a set of image-processing algorithms is selected for the application at hand, and they are acquired from a wide selection of fields such as signal processing, statistics, and artificial intelligence. There is no unambiguous prescription for how algorithms are selected; they are usually organized in a hierarchical order, and the types of image processing can be broadly classified into three levels considering the complexity and data types involved in the operations [28]. Algorithms at the lower level of image processing deal with extracting basic object features, such as corners, size, shape, orientation, linear line segments, etc.; the operations mainly deal with arrays of numerical information. The objective of the intermediate-level algorithms is to group the features extracted from the image; the output of intermediate-level algorithms is no longer pixels but symbols that represent the object features found in the image. At the higher level, hypotheses regarding the elements in the scene are generated and verified against the detected features, and finally information about the scene is collected for interpretation. Operations here are model-directed and usually utilize spatial relations between object features. Planning and initiating an action, based on the scene analysis, are also conducted at the high level. The control of the computer vision operations is in a bottom-up, top-down or hybrid manner. The bottom-up approach requires a complete set of correct features to
carry out interpretation, and this is not possible in practice. On the other hand, top-down analysis is model-guided: the features being sought trigger the operations. The hybrid approach combines the two, where bottom-up feature extraction is performed first, followed by a top-down verification and matching phase. The selection of algorithms and the control of vision operations are user-dependent and they are not perfect. Furthermore, due to occlusions of objects, noise, and shadows, low-level image processing may not produce a good result. Consequently, more intelligent algorithms are needed for the high-level processing to be able to reason with incomplete data. An additional pre-processing step is needed in humanoid robotics applications to compensate for the errors that occur in camera position at every step of the robot.
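As a toy illustration of the three processing levels and their changing data types (numeric arrays at the bottom, symbols at the top), consider the following skeleton; every operation here is a deliberately crude stand-in, chosen only to show the hierarchy:

```python
import numpy as np

def low_level(img):
    """Low level: pixel arrays in, basic features out (a crude finite-difference
    gradient edge map; the threshold value is arbitrary)."""
    gy, gx = np.gradient(img.astype(np.float32))
    return np.hypot(gx, gy) > 30.0

def intermediate_level(edges):
    """Intermediate level: group pixel features into symbols (here, only the
    vertical extent of the edge pixels, standing in for real grouping)."""
    rows = np.nonzero(edges.any(axis=1))[0]
    return [("edge_band", int(rows[0]), int(rows[-1]))] if rows.size else []

def high_level(symbols, model=("edge_band",)):
    """High level: verify symbolic hypotheses against a (trivial) scene model."""
    return [s for s in symbols if s[0] in model]
```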
216216M.F. Ercan and C. Zhou
dled by each cluster [7]. A major shortcoming of these systems is their huge hardware complexity that resulted from efforts to achieve flexibility. Recent systems favour dedicated image-processing units based on VLSI large processor arrays [8, 23], or a network of DSPs [9,11]. Recently, FPGA arrays become popular due to their easy hardware configurability [29]. Problem size for a humanoid robot operating at indoors can be regarded as medium-scale considering the data size and the processing requirements. Basic architectural requirements from vision hardware are the same, but the system should be feasible and hold a low hardware complexity so that it can be realized easily. The other concern in humanoids is apparently the physical limitations. The hardware included to the system has to be compact and power consumption should be minimal. This implies that achieving a compact vision hardware using off the shelve components is rather difficult task.
3 Employing Computer Vision in Humanoid Robots We have reasoned that computer vision can provide significant information to a humanoid about its environment. In order a humanoid to have visual sensing various computer vision areas have to be integrated. We can categorize them as navigation, interaction and coordination. In the following, we will elaborate these areas. 3.1 Navigation Computer vision techniques in this category, deal with gathering information about the outside world by recognition and interpretation of the environment, extracting range information and tracking objects or landmarks. This information is then used to guide the robot such a way that it can negotiate outside world autonomously. There are many techniques fall into this category, though, we will point out three well-known methods: model based scene interpretation (or analysis), stereo depth analysis and active vision. Model based scene analysis: The major goal of the vision is to detect and locate landmarks and obstacles to plan navigation. Interpretation system generates a semantic and symbolic description of a scene in terms of detected objects and their spatial relations. Model based scene analysis would be the most suitable tool for humanoids operating indoor environments. The world model predefined to the robot and its navigation is goal directed. Hence, in each step robot is able to localize itself and how far from the goal by comparing the model and real scene. There are many significant work has been done in this field (see for instance Sandakly et al.[26] as an example). Model-based object recognition or, more generally, scene interpretation is a two-part process. The first phase is to generate a sequence of hypotheses on object identities and poses, and the second phase is to evaluate them based on the object models. The overall structure of computations includes
algorithms from the three image processing levels described earlier in Section 2. The major difficulties involved in scene interpretation are in selecting and tuning the image-processing algorithms that deliver object features effectively, in optimizing the processing time by selecting relevant data from the scene, and in the uncertainties and imprecision of the models and the data gathered from the scene. Among them, the most difficult issue to deal with is the uncertainties, and many studies using Bayesian probability, evidence theory and fuzzy sets have been presented in the literature [4]. For visual analysis of natural scenes, which include incompletely visible objects in an uncontrolled context, it is a highly intricate task to optimize the match of a model to the data obtained from the scene. Local optimization techniques will usually get stuck in local optima, while techniques similar to exhaustive search are limited by time and memory constraints. Consequently, in practice a critical aspect of many object recognition problems is to develop a clever search method.

Stereo Analysis and 3D modelling: Humanoids, akin to any autonomous robot, have to interpret the 3D structure of their environment. There are two main approaches to obtaining the range information of objects or constructing a 3D map of a robot's environment. One method is using active sensors such as sonar, radar, or laser range finders: the sensors generate signals, and the signals bouncing off the objects provide range information. The second method is using stereovision. Cameras are placed with a fixed baseline, and depth can be obtained by computing the disparity at each point in the image pairs. Apparently a key problem in stereovision is to find corresponding points in the stereo images. Corresponding points are the projections of a single point in the 3D world onto the different image spaces; the difference in the position of corresponding points in their respective images is called disparity. Many algorithms have been presented for the solution; they are generally based on a window-based correlation technique (a sketch is given below). The performance of these algorithms depends on selecting a proper window size, which is not trivial; adaptive techniques were introduced by Kanade et al. [17]. Apparently most of the humanoids presented in the literature employ stereovision to identify the robot's environment. In some applications, stereovision was employed to track a target and to detect mobile objects [18, 21]. An interesting application of stereovision in humanoid robots is demonstrated in [21], where stereovision is used to estimate the grade of the ground; the purpose of that study is to calculate the proper posture of the humanoid so that it can walk not only on flat ground but also on uneven terrain such as slopes. Another particular application of stereovision in humanoids would be to leap over an obstacle. This task is particular to humanoid robots, and its solution requires an utterly different approach from the conventional methods used for obstacle avoidance in wheeled mobile robots.
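A sketch of the window-based correlation just mentioned (sum-of-squared-differences matching on rectified grayscale pairs; the brute-force search and the fixed window are for clarity only, and real systems use the adaptive or hardware-accelerated variants cited above):

```python
import numpy as np

def disparity_ssd(left, right, window=5, max_disp=32):
    """Per-pixel disparity by SSD window matching on a rectified pair.
    Depth then follows from Z = f * b / d for focal length f and baseline b."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.int32)
    L, R = left.astype(np.float32), right.astype(np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = L[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(max_disp):
                cand = R[y - half:y + half + 1, x - d - half:x - d + half + 1]
                ssd = float(((patch - cand) ** 2).sum())
                if ssd < best:
                    best, best_d = ssd, d
            disp[y, x] = best_d
    return disp
```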
Active Vision: The ability to actively control camera parameters is vital to achieving robust, real-time perception for a robot interacting with a complex, dynamic world. Active vision systems have mechanisms that can actively control camera parameters such as orientation, focus, zoom and aperture in response to the requirements of the task and external stimuli. More broadly, active vision incorporates attention, that is, selective sensing in space, resolution and time, by modifying physical camera parameters or the way the image data is being processed. An active vision system actively extracts the information it needs to solve a task. For instance, a camera-equipped robot can change the direction its camera is looking at, or change the zoom. In other words, an active vision system always interacts with the world with a tight coupling between perception and action. The processing is also tied closely to the activities it supports, such as navigation or manipulation, allowing simplified control algorithms and scene representations. In this sense, the concept of active vision differs from the traditional method of finding detailed reconstructions of the external world. The benefits of active vision have been of primary interest in a variety of humanoid projects. A well-known application of active vision is the Kismet humanoid robot, which socializes with humans by analyzing their behaviours [2].

3.2 Gesture recognition

Many robots developed today do not interact with people. As humanoids start appearing in our daily life, for example in health-care, nursing, and entertainment, human-robot interaction will be vital. Non-expert users will operate these robots, so a natural interface that makes instructing them as simple as possible would be very useful. In this respect, gesture recognition, face recognition and facial gesture recognition come in handy for humanoid robots.

Gesture recognition: Gestures are an important aspect of human interaction, both interpersonally and in the context of man-machine interfaces. There are many aspects to the modelling and recognition of human gesture, as gestures can be expressed through hands, faces, or the entire body. Gesture recognition is valuable in applications requiring human/robot interaction, as it can provide a redundant form of communication or help to clarify spoken commands. For example, the user may say "left" while simultaneously pointing to the left with a hand gesture; the robot needs to recognize only one of the two commands. This is beneficial in situations where speech may be distorted or suppressed by environmental noise. Furthermore, iconic information can easily be given to the robot: for instance, instead of giving the coordinates of the target point to which the robot should move, the user can simply point to that spot. Motion gestures provide additional freedom in the design of robot interaction with gestures. In addition, they reduce the chance of accidentally classifying arm poses as gestures, hence they appear better suited for human-robot interaction than static pose gestures. While the area of gesture recognition is relatively new, there has been a great deal of activity in the last few years. Some of the vision-based interfaces that allow people to instruct mobile robots via arm gestures can be found in the references [10, 19].

Face Recognition: Over the years, a vast amount of research has been devoted to face recognition and facial expressions [30]. The performance of these algorithms, such as eigenfaces, depends on accurate detection of faces, as they assume that face images are normalized in terms of scale and orientation. Numerous studies have demonstrated face detection techniques based on, for example, motion detection or skin colour.
Colour provides a computationally efficient yet effective method that is robust under rotations in depth and partial occlusions. A neural network based algorithm was also demonstrated by Rowley et al. [25], where a neural network examines small windows of an image and decides whether each window contains a face. In a recent study, a model-based approach using the Hausdorff distance has been demonstrated, where face recognition is realized without a face detection step [14]. The detection, recognition and model-based coding of faces have potential applications in humanoid robots, as they interact with humans; it is therefore necessary to detect faces and facial gestures and react accordingly. For example, the application presented in [8] demonstrates a robot's interaction through facial recognition: the robot interprets the head gesture of nodding as "Yes" and shaking as "No". Face recognition also endows the robot with the capability of identifying people, which can be practical for security/surveillance applications or for developing personalized robots.

3.3 Coordination

Human beings easily use their eyes to guide their arms and hands to reach objects and to perform any particular pose. A robot that can imitate such hand-eye coordination needs the sophisticated skills of perception, spatial reasoning, and communication [13]. This task requires a combination of computer vision methods, stereo and active vision components in particular. A grasp reflex would be one of the main objectives: the relative positions of the hand and the target have to be observed, and the motion is refined until the object is grasped. A major challenge is the dynamic nature of the robot's interaction with its environment and with humans. It is impossible to program the robot anticipating every possible situation; hence the common consensus is that this goal is achieved through robot learning (see for instance Cambron et al. [5]).

3.4 Key research areas

The major areas of visual perception and intelligence envisioned for humanoid studies discussed above are still open to further study. Although each topic is a major research area by itself, a key research area is their integration. In terms of full-scale computer vision applications, few systems have been deployed in the literature, and so far fairly limited progress has been achieved. This is mainly due to a lack of robustness and of methods for integrating the various techniques into full systems. The construction of systems gradually provides the insight needed to formulate basic methods for the design, analysis, implementation, and evaluation of operational and robust systems that have a wide domain of application.
4 Design of the Humanoid Robo-Erectus

Currently, we are developing humanoid robots in our research lab [24]. The ultimate goal of this project is to build humanoid robots that can play a soccer game akin to humans. The idea of soccer-playing robots was introduced in the early 1990s; the basic idea behind this initiative was to advance the artificial intelligence, control, and robotics fields, with soccer playing used as a test bed representing the dynamic real world. The response from the rest of the scientific world was strong, and the Robot World Cup Initiative, RoboCup, was started. The first Robot World Cup, RoboCup-97, was held in Nagoya, Japan, in August 1997, and included the participation of more than 40 teams. Since then, RoboCup has become an annual event. The vision is to develop a humanoid team which will have the potential of defeating a real World Cup winning team of humans within fifty years. This challenge poses the full set of problems in the artificial intelligence field. The first step of the project was to build a robust humanoid robot that can perform a bipedal walk for straight and curved motions and in-place turns. The second stage of the project includes identifying the goal, objects and teammates and performing basic moves such as kicking, passing and shooting the ball. The third stage of the project is the acquisition of a team strategy. Currently, we have developed a number of robots with different specifications. These humanoids can perform a steady bipedal walk, turns, and ball-kicking types of motion. The first version of the robot had 10 DOF, while the latest version has 22 DOF.

4.1 Structure of the humanoid

The task of soccer playing requires a humanoid to perform various behaviours and motion types, some of them fairly complex. For the successful realization of the first stage of the project, we had to consider all these motions and provide the humanoid with a high DOF. The first stage of the project was completed successfully and demonstrated in the RoboCup humanoid walk league in 2002. Fig. 1 shows our humanoid robot at work. Three control systems were developed for these robots: PC-based, microcontroller-based, and PDA-based controllers. The main motivation behind the PDA-based controller was to reduce the cost of the humanoid robot and make it more portable and compact. The latest version of the humanoid includes all the limbs and neck joints. In Fig. 2, we present the organization of the basic joints and the sensor positions on the robot.

4.2 Humanoid vision unit and related tasks

The basic structure of the humanoid robot is stable and flexible. Currently, we are tackling the challenges of the second stage of the project, which requires a vision-
Fig. 1. Robo Erectus performing a steady walk
based unit to be incorporated into the system. Our immediate goal is to perform the tasks described in the RoboCup humanoid league, which are briefly:
1. Performing a steady walk and making a U-turn at a designated landmark.
2. Detecting a ball and kicking it.
3. Executing a penalty kick against a goalkeeper.

These objectives can comfortably be accommodated via a vision system. Fig. 3 depicts a block diagram of the vision unit's place in the robot's control structure. As shown in this figure, the vision unit provides feedback to both the local and global action planning components of the controller. The first task concerns navigation, and the algorithms discussed earlier in Section 3.1 need to be incorporated. Here, the vision unit provides two kinds of information to the global motion and path planning units: the scene interpretation and the stereo depth information. The scene interpretation reports the spatial relationships between the objects, and between the robot and the objects. Here, the whereabouts of the landmark and the robot's spatial relation to it are vital to performing a proper walk and U-turn around the landmark. The stereo depth information is used to provide 3D proximity data to the robot. We employ a knowledge-based top-down approach for the algorithm, as the robot operates in a predefined environment. Our method is similar to that discussed in [20], though in the recognition phase we also utilize a color parameter. The second and third tasks are more complex, as they require detecting a ball, approaching it and kicking. For a penalty kick against a goalkeeper, the robot needs to develop a strategy, and it should have reasonable control over kicking the ball in a certain direction. The techniques used here fall under the coordination topic discussed earlier in Section 3.3. Here, the challenges lie more in the local motion-planning unit, and we are basically dealing with two situations, static and dynamic. In the static
Fig. 2. The humanoid structure and sensor structure. (Figure annotations: accelerometers and gyros are mounted to provide body posture information about the roll, pitch and yaw axes; the vision system is mounted at shoulder level; four force-sensing sensors are fitted under each foot.)
case, the robot negotiates static objects and landmarks. For instance, the robot needs to detect the boundaries of the soccer field, the position of the ball, etc. In order to detect the ball, we use color and shape analysis (a sketch of this test is given after the task list below). The goal tracking function should be considered as a higher level of the basic locomotion control system, in order to implement a hierarchical autonomous locomotion control system as shown in Fig. 4. For the static case, planning has two major steps: the first is to determine the path to the goal, and the second is to calculate the width of each step and the turning angle Θ. In the dynamic case, we deal with moving objects such as the goalkeeper; in this problem both the robot and the target are in motion. The moving goal tracking function consists of the following tasks:
1. Detection and 3D position measurement of the goal in camera coordinates. As the humanoid and the goal are both moving, the goal's color rather than its shape is utilized to detect it.
2. Planning of the footstep for one step. For the path and footstep planning, we use the world coordinate system, which is more convenient for acquiring knowledge of the goal motion and the map information.
3. Camera posture control with self-motion compensation. In order to keep the goal in the field of view, the camera posture is controlled by the pan and tilt joints of the neck. This can be perceived as an application of active vision in humanoids: small deviations in the robot's gazing point due to the walking motion are compensated by using the head and neck joints. The robot has to keep its dynamic stability; however, unlike a wheeled robot, the humanoid's body does not move smoothly along the moving direction. Another practical issue is the vertical position of the camera. Currently, the cameras are fixed on the robot's shoulders, which
result in rotation of the input images by up to 12 degrees during the robot's bipedal walking. To compensate for this rotation, a camera self-motion compensation system is needed. For this purpose we are experimenting with two approaches. The first uses feedback from the gyroscope integrated into the humanoid's body and corrects the camera position with the tilt joint; the gyroscope gives a measure of the rectification needed to correct the error in camera alignment caused by the robot's body movement. The second approach fixes a landmark of the world model in the scene and counter-rotates the image pixels by computing the optical flow between two consecutive image frames. This method is implemented purely in software but is computationally demanding.
4. In addition to the calculations made for path planning, step size and turning angle, in the dynamic case we also calculate the speed of the walk in order to cope with the dynamics of the moving goal.
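As an illustration of the color-and-shape ball test referred to above, a minimal sketch using OpenCV follows. The HSV thresholds, minimum blob area and circularity bound are illustrative assumptions, not values calibrated on Robo-Erectus.

```python
import cv2
import numpy as np

# Hypothetical HSV range for an orange ball; real values must be calibrated.
BALL_HSV_LO = np.array([5, 120, 120])
BALL_HSV_HI = np.array([20, 255, 255])

def find_ball(bgr_frame):
    """Return (cx, cy, radius) of the most ball-like blob, or None."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, BALL_HSV_LO, BALL_HSV_HI)        # color analysis
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        area = cv2.contourArea(c)
        if area < 50:                                        # reject small noise
            continue
        (cx, cy), r = cv2.minEnclosingCircle(c)
        circularity = area / (np.pi * r * r + 1e-9)          # shape analysis
        if circularity > 0.7 and (best is None or area > best[3]):
            best = (cx, cy, r, area)
    return None if best is None else best[:3]
```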
Fig. 3. A block diagram of vision and other sensory input to the robot control mechanism. (Diagram components: path planning, gait synthesis, global motion planning, local motion planning, humanoid robot, joint controller, potentiometers/encoders, range sensors/gyros/force sensors, and vision.)
4.3 Architecture of the vision unit

The humanoid project is open-ended, and the future tasks of the humanoid will be more and more complex. The current vision system utilizes a single CPU, which will be a limitation for these tasks. Hence, we have to take the hardware aspect of the vision system into account and provide a dedicated architecture for it. Our goal is to design a system with minimal hardware complexity while satisfying the basic architectural requirements demanded of a computer vision system in general. For this purpose, we are currently building a multiprocessor unit with the latest parallel DSPs from Analog Devices. This class of processors provides the computational flexibility sought for the system, as they can efficiently perform both low-level number-crunching algorithms and high-level algorithms that require the manipulation of data structures. The hardware configuration for data communication is also made flexible
by the latest FPGA devices. The processors of the system can be partitioned, and pipelining of vision operations can be realized easily, since FPGAs are used at the data communication front end. Each DSP holds its own memory, and four of them share a common memory for rapid data transfer. The vision architecture has two separate units: a pool of processors and an image capture unit. A block diagram of this system is presented in Fig 5.
Fig. 4. Path and footstep planning for humanoid turning
The structure of the processing unit is modular, and it is built by integrating basic building modules. In each module, one of the four DSPs acts as a controller or master processor for the others. In addition to parallel computing, this unit is also aimed at multiprogramming: modules carry out vision tasks independently, which leads to a two-way communication structure between the modules as well as between the processors of each module.
Intra-layer communication: although the vision system is developed from building blocks available on the market, we organize these basic modules and their communication hierarchy. The first level of communication is the interaction between the processors of a module, for which a hybrid communication platform is employed: in each module, processors can communicate with each other directly via a communication link or by using a memory shared among them. When processing images on a parallel architecture, collective communication operations, such as broadcast, scatter and gather, are needed very often, and they are time-consuming. Shared memory is suitable for collective communication, whereas direct links suit data exchange between sub-programs running on individual processors.
Inter-layer communication: this mechanism is required for transferring results from one module to another. Communication using shared memory between modules is likely to be a bottleneck for performance and would limit flexibility. Therefore, we directly connected processors in successive modules.
Vision Based Motion Planning of Humanoid Robots
225 225
The inter-layer communication allows pipelining of vision tasks, which enhances computation performance.
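To make the inter-layer pipelining concrete, the following is a minimal sketch of vision stages chained as independently running modules, in the spirit of the design described above; the stage functions and queue sizes are hypothetical placeholders, not the actual DSP firmware.

```python
from multiprocessing import Process, Queue

def run_stage(stage_fn, q_in, q_out):
    """Generic module: consume items from q_in, emit results to q_out."""
    while True:
        item = q_in.get()
        if item is None:              # poison pill terminates the pipeline
            q_out.put(None)
            break
        q_out.put(stage_fn(item))

def extract_features(frame):          # placeholder low-level stage
    return {"frame": frame, "features": []}

def interpret_scene(tokens):           # placeholder high-level stage
    return {"scene": tokens}

if __name__ == "__main__":
    q0, q1, q2 = Queue(4), Queue(4), Queue(4)
    stages = [Process(target=run_stage, args=(extract_features, q0, q1)),
              Process(target=run_stage, args=(interpret_scene, q1, q2))]
    for p in stages:
        p.start()
    for frame in range(8):             # multiple frames keep both stages busy
        q0.put(frame)
    q0.put(None)
    while (result := q2.get()) is not None:
        print(result)
    for p in stages:
        p.join()
```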
Fig. 5. The architecture of the latest vision unit of the humanoid robot
The scene interpretation algorithm utilizes these modules simultaneously. The pipelining effect is achieved when we process multiple frames. The flow of operations is bottom-up in the first phase and top-down in the second. In the first phase, mainly low-level image processing algorithms are performed for feature extraction, and features are grouped to form tokens or vectors that represent objects. In the second phase, matching of objects against the model base is performed and, using knowledge (the world model) about the spatial relations of objects, a description of the scene is built. At this stage, hypotheses about missing object features or scene parameters are also generated and verified by further processing of the input image.
5 Conclusions

In this paper, we have discussed issues concerning humanoid robot vision. The desired intelligence and human-like behavior of humanoids is strongly dependent
on the capability of the vision systems embedded in their design. Whilst it may seem over-ambitious to incorporate so many different aspects of vision into a humanoid robot, the desired goal can only be achieved when various visual operations are employed simultaneously. In this paper, we have also briefly introduced the humanoid robot developed at our institution. The ultimate goal of this humanoid robot is to play a soccer game; this task is used as a test bed as it requires all the major techniques discussed above to be incorporated.
Acknowledgments

This project is supported by the Singapore Polytechnic R&D fund and the Singapore Tote Board fund. The authors wish to acknowledge the many members of the ARICC Centre who have contributed to this effort.
References

[1] S. Baluja, "Evolution of an Artificial Neural Network Based Autonomous Land Vehicle Controller," IEEE Transactions on Systems, Man and Cybernetics-Part B, vol. 26, pp. 450-463, 1996.
[2] C. Breazeal, "Sociable Machines: Expressive Social Exchange Between Humans and Robots," Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, PhD Thesis, May 2000.
[3] R.A. Brooks, C. Breazeal, M. Marjanovic, B. Scassellati and M.M. Williamson, "The Cog Project: Building a Humanoid Robot," IARP First International Workshop on Humanoid and Human Friendly Robotics, 1998.
[4] T. Caelli and W. F. Bischof, Machine Learning and Image Interpretation, New York, Plenum Press, 1997.
[5] M. E. Cambron and R.A. Peters II, "Learning Sensory Motor Coordination for Grasping by a Humanoid Robot," Proceedings of the 2000 IEEE International Conference on Systems, Man and Cybernetics, vol. 5, pp. 870-875, 2000.
[6] K. H. Chen and W. H. Tsai, "Vision-Based Autonomous Land Vehicle Guidance in Outdoor Road Environments Using Combined Line and Road Following Techniques," Journal of Robotic Systems, vol. 14, pp. 711-728, 1997.
[7] A.N. Choudhary, J.H. Patel and N. Ahuja, "NETRA: A Hierarchical and Partitionable Architecture for Computer Vision Systems," IEEE Trans. on Parallel and Distributed Systems, vol. 4, pp. 1092-1104, 1993.
[8] M. Doi, M. Nakakita, Y. Aoki and S. Hashimoto, "Real-time Vision System for Autonomous Mobile Robot," IEEE Int'l Workshop on Robot and Human Interactive Communication, pp. 442-449, 2001.
[9] M.F. Ercan and Y.F. Fung, "The Design and Evaluation of a Multiprocessor System for Computer Vision," Microprocessors and Microsystems, vol. 24, pp. 365-377, 2000.
[10] R. E. Kahn, M. J. Swain, P. N. Prokopowicz and R. J. Firby, "Real-Time Gesture Recognition Using the Perseus Architecture," University of Chicago, Computer Science Technical Report No. 96-04, March 1996.
[11] D.M. Harvey, S. P. Kshirsagar and C.A. Hobson, "Low Cost Scaleable Parallel Image Processing System," Microprocessors and Microsystems, vol. 25, pp. 143-157, 2001.
[12] http://www.honda-p3.com/
[13] R. Jarvis, "Interactive Hand/Eye Coordination between a Human and a Humanoid - A Proposal," International Conference on Intelligent Robots and Systems, pp. 1741-1747, 2000.
[14] O. Jesorsky, K.J. Kirchberg and R. W. Frischholz, "Robust Face Detection Using Hausdorff Distance," Lecture Notes in Computer Science, vol. 2091, pp. 90-95, 2001.
[15] S. Kajita and K. Tani, "Study of Dynamic Biped Locomotion on Rugged Terrain: Theory and Basic Experiment," Int'l Conference on Advanced Robotics, pp. 741-746, 1991.
[16] http://www.dh.aist.go.jp/h6/H6_H7.html
[17] T. Kanade and M. Okutomi, "A Stereo Matching Algorithm with an Adaptive Window: Theory and Experiment," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, pp. 920-932, 1994.
[18] H. Kitano, H. G. Okuno, K. Nakadai, T. Sabisch and T. Matsui, "Design and Architecture of SIG the Humanoid: An Experimental Platform for Integrated Perception in RoboCup Humanoid Challenge," Proceedings of the International Conference on Intelligent Robots and Systems, pp. 181-190, 2000.
[19] D. Kortenkamp, E. Huber and R. P. Bonasso, "Recognizing and Interpreting Gestures on a Mobile Robot," Proceedings of AAAI-96, 1996.
[20] A. Kosaka and A.C. Kak, "Fast Vision-Guided Mobile Robot Navigation Using Model-based Reasoning and Prediction of Uncertainties," Computer Vision and Image Processing: Image Understanding, vol. 56, pp. 271-329, 1992.
[21] M. Kumagai and T. Emura, "Vision based Walking of Human Type Biped Robot on Undulating Ground," Int'l Conference on Intelligent Robots and Systems, pp. 1352-1357, 2000.
[22] S. Nolfi and D. Floreano, Evolutionary Robotics: The Biology, Intelligence, and Technology of Self-Organizing Machines, MIT Press, London, 2000.
[23] S. Okazaki, Y. Fujita and N. Yamashita, "A Compact Real-time Vision System Using Integrated Memory Array Processor Architecture," IEEE Transactions on Circuits and Systems for Video Technology, vol. 5, pp. 446-452, 1995.
[24] http://www.robo-erectus.org
[25] H. A. Rowley, S. Baluja and T. Kanade, "Neural Network-Based Face Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, pp. 23-38, 1998.
[26] F. Sandakly and G. Giraudon, "3D Scene Interpretation for a Mobile Robot," Robotics and Autonomous Systems, vol. 21, pp. 399-414, 1997.
[27] http://www.sony.net/SonyInfo/News/Press/200203/02-0319E/
[28] C.C. Weems, "Architectural Requirements of Image Understanding with Respect to Parallel Processing," Proceedings of the IEEE, vol. 79, pp. 537-547, 1991.
[29] J. Woodfill and B. V. Herzen, "Real-Time Stereo Vision on the PARTS Reconfigurable Computer," IEEE Symposium on FPGAs for Custom Computing Machines, April 1997.
[30] W. Zhao, R. Chellappa, A. Rosenfeld and J. Phillips, "Face Recognition: A Literature Survey," Technical Report, University of Maryland, No. CS-TR-4167R, 2002.
Evolution of Locomotion Controllers for Legged Robots

A. Boeing and T. Bräunl

Centre for Intelligent Information Processing, University of Western Australia, Perth, Australia
[email protected]
Abstract. The construction of a locomotion controller for legged robots is a difficult and time-consuming process. This task can be automated through the use of evolutionary techniques and an appropriately chosen control system. The presented approach utilizes a genetic algorithm that evolves the parameters of a spline controller. The controller outputs its control signals to a virtual robot in a mechanical simulator, enabling the robot designer to preview the robot's locomotion patterns. This approach has been shown to produce walking patterns for a number of robots with differing morphology. Extensions to the basic spline controller allow various forms of sensory information to be encoded, enabling the robot to maneuver over irregular terrain. The resulting system has been shown to be a simple and efficient method for automated controller design.
1 Legged Robots

Legged robots exhibit a number of advantages for locomotion [1]. The mobility offered by legged vehicles is far greater than that of wheeled or tracked vehicles, as they are not limited to paved terrain, and this increased mobility allows a far larger range of applications for legged vehicles. Another incentive for exploring the use of legs for locomotion is that it provides insight into the systems responsible for human and animal locomotion. Humans are capable of complex movements whilst maintaining orientation, balance and speed. Robots that could mimic human movements could seamlessly integrate with the human world, enlarging the number of possible applications for legged vehicles. This makes the study of bipedal locomotion particularly attractive. Although there are a multitude of existing locomotion control techniques and well-described design processes, the automated generation of these controllers provides significant advantages. Often, the design process is quite complex and time-consuming to perform, and requires the control system to be completely redesigned for small alterations to the robot [2]. Furthermore, humans often have difficulty understanding which sensors to incorporate to provide the best possible feedback to the robot. Designers are often biased towards feedback sensors
that are present in humans, such as vision and touch. These senses are not necessarily the best sensors for the desired application. By evolving the locomotion controller, the robot designer is relieved of the controller design process, and the control system becomes more flexible, as the robot can improve its controller to cope with environmental or structural changes. Allowing the control system to evolve enables the controller to utilize sensory inputs that would normally be disregarded by human designers. The resulting controllers are more adaptive to the robot's environment. They are also more robust, more flexible, and usually provide superior performance to human-designed controllers [2].
2 Control Architectures

There are a number of control systems that are applicable to the objective of robot locomotion, ranging from simple oscillators [3] to neural networks [2] to simple assembly programs [4]. The simplest oscillators consist of a set of sinusoidal function generators whose outputs are combined to generate the control signal for an actuator. These systems can represent a range of outputs by altering the phase and amplitude of the sinusoids [3]. However, they are generally incapable of expressing the complexity required for sustained locomotion [5]. Thus, more complicated forms of control are desirable.

2.1 Neural Networks

Neural networks are a popular form of controller for robot locomotion [6,7,8,9,10]. These controllers complement the biological systems believed to be responsible for walking movement in humans and other animals, and this form of control allows the application of knowledge learned from neuroscientists studying rhythmical controllers in animals [8]. Neural networks consist of a set of interconnected processing-element nodes whose functionality is based on the animal neuron [11]. Neurons process information by summing the signals that appear at the node's inputs. Each of the input signals is multiplied by a weight to simulate different input strengths. The weighted sum is then passed through an activation function, which produces an output if the transformed sum passes a calculated threshold [11]. Traditionally, artificial neurons have been idealized for the sake of mathematical tractability [7]. One of the simplest, and most commonly implemented, neuron models is the sigmoidal neuron [6]. This neuron is a simple extension of the binary neuron model that expresses a continuous, softened step-function. The equations governing the output of the sigmoidal neuron are given below:
y_i = Σ_{j=1}^{n} w_{ji} S_j    (2.1)

S_i = 1 / (1 + e^{-y_i})    (2.2)
Fig. 1. Sigmoid Function
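As a small worked example of Eqs. (2.1) and (2.2), the sketch below synchronously updates a fully connected network of sigmoidal neurons; the network size, weights and initial activations are arbitrary example values.

```python
import numpy as np

def step(W, S):
    """One synchronous update: y_i = sum_j w_ji * S_j, S_i = 1/(1+e^{-y_i})."""
    y = W.T @ S                 # W[j, i] stores the weight w_ji
    return 1.0 / (1.0 + np.exp(-y))

rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0, size=(4, 4))   # example 4-neuron network
S = np.full(4, 0.5)                     # arbitrary initial activations
for _ in range(5):
    S = step(W, S)
print(S)
```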
There are many possible extensions to this model, and many have been applied to the area of robotic locomotion.

2.2 Splines

Splines are piecewise polynomial functions expressed by a set of control points [12]. There are many different forms of splines, each with its own attributes. Two properties are desirable for a spline to possess: continuity, so that the generated control signal translates into smooth velocity and acceleration changes; and locality of the control points, so that altering one control point has limited influence on the overall shape of the spline. The Hermite spline is expressed by the equations given in (2.3). The curve generated from a Hermite spline passes through the control points that define it, so a set of predetermined points can be smoothly interpolated by simply setting them as the control points of the spline. Hermite splines feature both locality and continuity; their disadvantage is that the control-point tangent values must be specified. The function used to interpolate the control points, given starting point p1, ending point p2, tangent values t1 and t2, and interpolation parameter s, is shown below:
f(s) = h_1 p_1 + h_2 p_2 + h_3 t_1 + h_4 t_2    (2.3)

where

h_1 = 2s^3 - 3s^2 + 1
h_2 = -2s^3 + 3s^2
h_3 = s^3 - 2s^2 + s
h_4 = s^3 - s^2
0 ≤ s ≤ 1
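Equation (2.3) transcribes directly into code; in the sketch below the segment endpoints and tangents are arbitrary example values.

```python
def hermite(p1, p2, t1, t2, s):
    """Cubic Hermite interpolation between p1 and p2 with tangents t1, t2, 0 <= s <= 1."""
    h1 = 2*s**3 - 3*s**2 + 1
    h2 = -2*s**3 + 3*s**2
    h3 = s**3 - 2*s**2 + s
    h4 = s**3 - s**2
    return h1*p1 + h2*p2 + h3*t1 + h4*t2

# Sample one segment from p1=0.0 to p2=1.0 with example tangent values.
curve = [hermite(0.0, 1.0, 0.5, -0.5, i / 10) for i in range(11)]
```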
3 Evolution

Evolutionary algorithms are a form of search and optimization algorithm which makes use of some of the principles of organic evolution [13]. These types of algorithms typically specify what is to be done, rather than how the task should be accomplished [13]. Although evolutionary algorithms are only a basic approximation to the biological evolutionary process, they have been proven to provide a powerful means of problem solving [6,7,3,9,14].

3.1 Evolving Controllers

Genetic algorithms have been applied to the evolution of neural controllers for robot locomotion by numerous researchers [6,7,3,9,14]. Lewis et al. [9] successfully generated locomotion patterns for a hexapod robot using a simple traditional genetic algorithm with one-point crossover and mutation. Ijspeert [14] evolved a controller for a simulated salamander using an enhanced genetic algorithm. The large number of experiments in this area clearly indicates that genetic algorithms are capable of evolving neural controllers which can describe legged robot locomotion [6]. The genetic programming approach has also been shown to successfully generate locomotion patterns for various control strategies. Banzhaf et al. [15] demonstrated the pure genetic programming approach to develop assembly programs for robot control. The system was then expanded to control a hexapod robot [16] using a B-Spline based approach. Lewis also applied genetic programming to his neural controller [10]. This demonstrated that both the genetic programming approach and the genetic algorithm approach should be capable of evolving adequate control systems for legged locomotion [10,14]. Parker et al. [17] explored the use of cyclic genetic algorithms for locomotion control of a small hexapod robot. Their system demonstrated that the cyclic nature
needed to generate the oscillatory motions necessary for legged robot locomotion could be abstracted from the control system and encoded into the genetic algorithm's chromosomes.

3.2 Genetic Algorithm

A common implementation of an evolutionary algorithm is the Genetic Algorithm (GA) [13]. As with most evolutionary algorithms, the Genetic Algorithm is based on Darwin's theory of natural selection, ensuring the survival of the fittest. The genetic algorithm operates on a set of encoded variables representing the parameters of a potential solution to a problem. The parameters (or genes) are combined together to form a string of values, referred to as a chromosome [18]. Each of these possible solutions is then assigned a fitness value according to how well it solves the problem. The better solutions are then selected to "reproduce" with other solutions, generating a new set of chromosomes which inherit features from the chromosomes they were created from. The least fit (worst) solutions are less likely to be selected for reproduction, and thus are eventually removed from the set of chromosomes on which the algorithm operates. The basic methodology for the genetic algorithm is as follows:
1. Randomly initialize a population of chromosomes.
2. While the terminating criterion has not been satisfied:
a) Evaluate the fitness of each chromosome.
b) Remove the lower-fitness individuals.
c) Generate new individuals, determined by a certain selection scheme, utilizing the selected operators.
Each iteration of these steps creates a new population of chromosomes. The total set of chromosomes at one iteration of the algorithm is known as a generation. As the algorithm progresses, it searches through the solution space, refining the solutions to find one which fulfills (or comes as close as possible to fulfilling) the desired criteria, as described by the fitness function.

3.3 Operators

The operators determine the method by which one or more chromosomes are combined to produce a new chromosome. Traditional schemes utilize only two operators: mutate and crossover [18]. Crossover takes two individuals and divides the string into two portions at a randomly selected point inside the encoded bit string. This produces two "head" segments and two "tail" segments. The two tail segments are then interchanged, resulting in two new chromosomes where the bit string preceding the selected bit position belongs to one parent and the remaining portion belongs to the other parent. The mutate operator randomly selects one bit in the chromosome string and inverts its value. Extensions to these operators are possible by interpreting the chromosome as a string of non-binary values (such as 8-bit integer values, or 32-bit floating point
values). Two operators commonly used that rely on this interpretation of the chromosome are the non-binary average and the non-binary creep operators [19]. Non-binary average interprets the chromosome as a string of higher-cardinality symbols and calculates the arithmetic average of the two chromosomes to produce the new individual. Likewise, non-binary creep treats the chromosome as a string of higher-cardinality symbols and increments or decrements the values of these strings by a small randomly generated amount [19].

3.4 Selection Schemes

In the natural world, the organisms which reproduce the most before dying will have the greatest influence on the next generation. In order to simulate this effect in the genetic algorithm, a selection scheme is used. The selection scheme determines which individuals of a given population will contribute to form the new individuals for the next generation. Roulette wheel selection and tournament selection are two commonly used selection schemes [18]. In roulette wheel selection, the chance for a chromosome to reproduce is proportional to its fitness: if the fitness value returned for one chromosome is twice as high as that for another, it is twice as likely to reproduce. Although genetic algorithms will converge to a solution if all of the chromosomes reproduce, it has been shown that duplicating unchanged copies of chromosomes into the next generation generally gives a significant increase in the convergence rate towards the solution. The attributes of the implemented genetic algorithm's operators are given in Table 1 (a sketch of these operators follows the table).

Table 1. Operator attributes
Operator name          Probability of use
Crossover              35%
Mutate                 10%
Random Replacement     10%
Inversion              5%
Average                30%
Creep                  10%
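A minimal sketch of these operators on byte-string chromosomes, selected with the probabilities of Table 1, is given below; the chromosome interpretation (one 8-bit gene per value) follows Section 5.1, but the creep step size and helper names are assumptions.

```python
import random

OPS = {"crossover": 0.35, "mutate": 0.10, "replace": 0.10,
       "inversion": 0.05, "average": 0.30, "creep": 0.10}

def crossover(a, b):
    """One-point crossover: swap tails at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(a, _):
    """Flip one random bit within one 8-bit gene."""
    c = a[:]
    i = random.randrange(len(c))
    c[i] ^= 1 << random.randrange(8)
    return c

def replace(a, _):
    """Random replacement: draw a fresh random chromosome."""
    return [random.randrange(256) for _ in a]

def inversion(a, _):
    """Reverse a randomly chosen slice of the chromosome."""
    i, j = sorted(random.sample(range(len(a)), 2))
    return a[:i] + a[i:j][::-1] + a[j:]

def average(a, b):
    """Non-binary average: arithmetic mean of corresponding genes."""
    return [(x + y) // 2 for x, y in zip(a, b)]

def creep(a, _):
    """Non-binary creep: nudge one gene by a small random amount."""
    c = a[:]
    i = random.randrange(len(c))
    c[i] = min(255, max(0, c[i] + random.randint(-5, 5)))
    return c

OPERATORS = {"crossover": crossover, "mutate": mutate, "replace": replace,
             "inversion": inversion, "average": average, "creep": creep}

def make_child(parent1, parent2):
    name = random.choices(list(OPS), weights=list(OPS.values()))[0]
    return OPERATORS[name](parent1, parent2)
```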
3.5 Staged Evolution

A number of possibilities exist for enhancing the performance of a genetic algorithm. Staged evolution is based on the concept of behavioral memory, and increases the convergence rate by introducing a staged set of manageable challenges [10]. Initially limiting the search to a subset of the full solution space enables an
approximate solution to be determined. Incrementally increasing the complexity of the problem then enlarges the solution space, providing the possibility of increased performance as further refinements of the solution become possible. Applying this strategy to a particular problem requires that the task can be split into smaller sub-tasks which can be solved in order to contribute to the overall solution.

3.6 Fitness Functions

Each problem to be solved requires a unique fitness function describing that problem. Given a particular chromosome, a fitness function must return a numerical value indicating the appropriateness of the solution with respect to the overall goal [20]. For some applications, such as function optimization problems, the fitness function simply returns the value of the function itself. However, for many applications there is no straightforward performance measurement of the goal, and thus it must be expressed as a combination of the desired factors. The fitness function used to evaluate the robot during the initial phases of evolution was computed purely from the distance the robot traveled forward, minus the distance that the robot's center of gravity lowered. During later phases of evolution, the average speed at which the robot moved and the distance that the robot wavered from the desired path were incorporated. Finally, the distance of the robot at termination from its desired terminating point was taken into consideration.
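A sketch of such a staged, composite fitness function is given below; the weighting coefficients and the simulator state fields (distance_forward, cg_drop, and so on) are hypothetical, chosen only to mirror the factors just listed.

```python
def fitness(sim, stage):
    """Composite walking fitness per evolution stage (weights are assumptions).

    sim is assumed to expose: distance_forward, cg_drop, avg_speed,
    path_deviation and end_point_error, all measured over one trial.
    """
    f = sim.distance_forward - sim.cg_drop          # initial-phase criterion
    if stage >= 2:                                  # later phases add speed/path terms
        f += 0.5 * sim.avg_speed - 0.3 * sim.path_deviation
    if stage >= 3:                                  # final phase penalizes end error
        f -= 0.2 * sim.end_point_error
    return f
```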
4 Simulation

There are a number of advantages to evolving a system in a simulated environment. Firstly, the risk of damaging the robot hardware during evolution is removed. Secondly, simulation can be performed in far less time than is required for a robot to physically perform the desired gait. Simulators also provide the experimenter with the ability to easily modify the environment and to model the interaction between objects. Dynamechs [21], the simulation library utilized, is a free, extensible and portable mechanical environment simulator. Dynamechs calculates all environment variables and all forces generated by the actuators, in order to describe the movement of all bodies in the system. The robots are expressed as multiple chains linked to a mobile base, described with modified Denavit-Hartenberg parameters. The simulation library allows configurable integration step sizes and offers a number of integrators, from first-order Euler to fifth-order Runge-Kutta, allowing the accuracy and speed of the simulation to be configured. The fitness evaluation functions needed to operate the genetic algorithm were added to the package, along with the simulated spline control system and the necessary sensors.
5 Spline Control

Although a neural controller offers a number of advantages for control, a spline controller provides a simple and fast method for evolving control. The implemented spline controller consists of a set of joined Hermite splines. One set contains robot initialization information, to move the joints into the correct positions and enable a smooth transition from the robot's standing state to a travelling state. The second set of splines contains the cyclic information for the robot's gait. Each spline can be defined by a variable number of control points, with variable degrees of freedom. A pair of start and cyclic splines corresponds to the set of control signals required to drive one actuator within the robot. Cubic Hermite splines were implemented in the controller as they offer a number of desirable characteristics over other splines. The major advantage of the Hermite spline is that the curve passes through all of its control points; as the position and tangent are explicitly specified at the ends of each segment, it is easy to create a continuous curve. An example of a simple spline controller is illustrated in Figure 2. Each spline indicates the controller's output value for one actuator.
Fig. 2. Evolved spline controller for a three-jointed limb
The number of control points required for this controller is given by equation (5.1).
a · (i + c)    (5.1)

where a is the number of actuators, i is the number of initialization control points, and c is the number of cyclic control points.
In order to incorporate sensor feedback information into the spline controller, another dimension was added, resulting in a set of control surfaces for each actuator's cyclic information. The initialization splines were not extended, as it was assumed that any robot would be set up on flat terrain. However, the initialization
splines can easily be extended to contain information dependent on the terrain's gradient, through a method similar to that used to extend the cyclic splines. The number of control points required for the extended controller is given by Eq. (5.2); extending the controller in this form significantly increases the number of control points required.

a · (i + c · s)    (5.2)

where s is the number of control points used to describe the feedback.
Fig. 3. Spline controller for a single actuator with feedback
The example controller used an inclinometer to provide the sensory feedback. Thus the controller's output at any stage is expressed in terms of the current cycle time and the angle read from the inclinometer. This allows the controller to correct for undesired tilting during normal walking, and also allows movement over rough terrain. Figure 3 illustrates a resulting control surface for one actuator.

5.1 Spline Controller Encoding

In order to evolve the spline controller with a genetic algorithm, the controller's parameters need to be encoded into chromosome representations. To support staged evolution, the controller needs to be designed such that the evolution can proceed in this manner. The encoded spline controller treats each control-point value as an 8-bit fixed-point number. In the initial phase
of evolution, the control-point locations within the walking cycle time were equally distributed. This provided each control point with only one degree of freedom, which reduced the available solution space but also significantly reduced the complexity of the chromosome required to describe the controller. In the following stage of evolution, the equal-distribution time constraint was removed, providing the control points with an additional dimension of freedom. Finally, the tangent values for the control points in the spline were added to the chromosome, allowing final refinement of the solution.
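The bookkeeping of the encoding can be sketched as follows; the gene-to-value mapping and the output range are assumptions, while the size formulas are exactly Eqs. (5.1) and (5.2).

```python
def controller_size(a, i, c, s=None):
    """Number of encoded control points: a*(i+c), or a*(i+c*s) with feedback."""
    return a * (i + c) if s is None else a * (i + c * s)

def decode(chromosome, lo=-1.0, hi=1.0):
    """Map 8-bit genes (0..255) to fixed-point control values in [lo, hi]."""
    return [lo + (hi - lo) * g / 255.0 for g in chromosome]

# Six actuators with four initialization and four cyclic points -> 48 genes;
# adding a four-point feedback dimension -> 120 genes (cf. Section 6).
assert controller_size(6, 4, 4) == 48
assert controller_size(6, 4, 4, s=4) == 120
```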
6 Results

A number of spline controllers have been evolved for the control of various legged robots. An example of a spline controller which generated locomotion without feedback is shown in Figure 4. The walk was maintained for 10 seconds before the robot collapsed.
Fig. 4. Walking without Feedback
Fig. 5. Genetic algorithm fitness for the extended spline controller (top fitness and average fitness versus generation)
The fitness progression for the control parameters evolved for the gait depicted in Figure 6 is illustrated above. The best and average fitness values increase linearly during the initial phases of evolution; as the algorithm begins to converge towards an appropriate solution, the fitness values for the population increase rapidly. The algorithm terminates once it has refined the solution, after 500 generations. Incorporating sensory information to provide feedback to the control algorithm overcame the difficulties in maintaining the robot's walk: the addition of feedback allowed continuous walking to be achieved. However, integrating feedback significantly increases the number of parameters required in the controller. The controller utilized for the gait generation in Figure 4 consists of four initialization control points and four cyclic control points, with six controlled actuators; thus, the total chromosome size representing the complete controller is 48 bytes during its first phase of evolution. The expansion of this controller to enable feedback that samples the sensor with four control points results in an encoded controller size of 120 bytes. This represents a significant increase in the solution space that the genetic algorithm needs to search. Seeding the genetic algorithm with the previously evolved splines resulted in uniform spline surfaces that were capable of expressing the initial movements required in bipedal locomotion. Approaching the extended controller through the staged evolutionary method allowed the controller to be developed in under one hundred hours on an 800 MHz Pentium 3 system running Windows NT. The use of an inclinometer also allowed the robot to detect alterations in the terrain; thus, the resulting controller was capable of expressing bipedal gaits which allowed the robot to maneuver over non-uniform terrain.
Fig. 6. Walking over terrain

Gaits for non-bipedal legged robots were also successfully evolved utilizing the spline control system. The most successful gait evolved for the tripod robot is illustrated in Figure 7. The robot achieves forward motion by thrusting its rear leg towards the ground and lifting its forelimbs. The robot then gallops with its fore limbs to produce a dynamic gait.
Fig. 7. Tripod Gait
7 Conclusion and Outlook

The proposed evolutionary control system was shown to generate locomotion patterns for a bipedal robot. A system for integrating sensory information into a spline controller was presented, and the possible improvements to a robot's control system when provided with sensory feedback were demonstrated. Future extensions of the system could provide a method for directly downloading the control system to a physical biped robot. Further evolution of the control system on the physical robot could then be used to overcome discrepancies between the simulated and physical worlds. Another useful extension would be to evolve the robot's morphology as well as the controller. This would allow the system to suggest structural alterations to robots, which could assist the robot in achieving its desired purpose. Providing control over the placement of sensors on the robot would allow the evolutionary system to optimize the sensor locations so as to provide the best possible feedback to the controller. This should result in further improvements to the adaptability and robustness of the generated control system.
References

[1] M. H. Raibert, Legged Robots That Balance, Cambridge, MA: The MIT Press, 1986.
[2] G. S. Hornby, S. Takamura, J. Yokono, O. Hanagata, T. Yamamoto and M. Fujita, "Evolving Robust Gaits with AIBO," IEEE International Conference on Robotics and Automation, pp. 3040-3045, 2000.
[3] J. Ventrella, "Explorations in the Emergence of Morphology and Locomotion Behavior in Animated Characters," in R.A. Brooks and P. Maes (eds.), Artificial Life IV, pp. 436-441, MIT Press, Cambridge, MA, 1994.
[4] J. Busch, J. Ziegler, C. Aue, A. Ross, D. Sawitzki and W. Banzhaf, "Automatic Generation of Control Programs for Walking Robots Using Genetic Programming," European Conference on Genetic Programming (EuroGP), 2002.
[5] D. Arnold, "Evolution of Legged Locomotion," MSc thesis, Simon Fraser University, School of Computing Science, 1997.
[6] M. Krueger, "Genetic Algorithms can Evolve Neural Network Controllers for Robot Locomotion," BE thesis, University of Western Australia, 2001.
[7] R. Reeve, "Generating walking behaviors in legged robots," Ph.D. thesis, University of Edinburgh, 1999.
[8] P. Wallen, O. Ekeberg, A. Lansner, L. Brodin, H. Traven and S. Grillner, "A computer based model for realistic simulations of neural networks. II. The segmental network generating locomotor rhythmicity in the lamprey," Journal of Neurophysiology, pp. 1939-1950, 1992.
[9] M. Lewis, A. Fagg and G. Bekey, "Genetic Algorithms for Gait Synthesis in a Hexapod Robot," in Recent Trends in Mobile Robots, World Scientific, New Jersey, pp. 317-331, 1994.
[10] M. Lewis, A. Fagg and A. Solidum, "Genetic Programming Approach to the Construction of a Neural Network for Control of a Walking Robot," IEEE International Conference on Robotics and Automation (Nice, France), pp. 2618-2623, 1992.
[11] K. Gurney, Neural Nets, UCL Press Ltd, 2002.
[12] R. H. Bartels, J. C. Beatty and B. A. Barsky, An Introduction to Splines for Use in Computer Graphics and Geometric Models, Morgan Kaufmann, 1987.
[13] J.H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, 1975.
[14] A. J. Ijspeert, "Evolution of neural controllers for salamander-like locomotion," Proceedings of Sensor Fusion and Decentralised Control in Robotics Systems II, pp. 168-179, 1999.
[15] P. Nordin and W. Banzhaf, "An On-Line Method to Evolve Behaviour and to Control a Miniature Robot in Real Time with Genetic Programming," Adaptive Behaviour, vol. 5, no. 2, pp. 107-140, 1997.
[16] J. Ziegler and W. Banzhaf, "Evolution of Robot Leg Movements in a Physical Simulation," in K. Berns and R. Dillmann (eds.), Proceedings of the Fourth International Conference on Climbing and Walking Robots (CLAWAR), Professional Engineering Publishing, 2001.
[17] G.B. Parker, D.W. Braun and I. Cyliax, "Learning Gaits for the Stiquito," Proceedings of the 8th International Conference on Advanced Robotics (ICAR'97), pp. 285-290, 1997.
[18] D. Beasley, D. R. Bull and R. R. Martin, "An Overview of Genetic Algorithms: Part 1, Fundamentals," University Computing, vol. 15, no. 2, pp. 58-69, 1993.
[19] D. Beasley, D. R. Bull and R. R. Martin, "An Overview of Genetic Algorithms: Part 2, Research Topics," University Computing, vol. 15, no. 4, pp. 170-181, 1993.
[20] F. Busetti, "Genetic algorithms overview." Available online: http://citeseer.nj.nec.com/464346.html [Accessed: 28th October 2002].
[21] DynaMechs (Dynamics of Mechanisms): A Multibody Dynamic Simulation Library. [Online] http://dynamechs.sourceforge.net/ [Accessed: 7th March 2003].
Gait Planning for Soccer-Playing Humanoid Robots

Zhe Tang 1,2, Changjiu Zhou 2, and Zengqi Sun 1

1 Department of Computer Science and Technology, Tsinghua University, Beijing, P.R. China
2 School of Electrical and Electronic Engineering, Singapore Polytechnic, 500 Dover Road, Singapore
http://www.robo-erectus.org
Abstract. Gait planning for humanoid walking and penalty kicking, the two fundamental tasks for the current humanoid soccer competition, is addressed in this paper. First, an overview of some basic research issues in the field of humanoid robotics is presented. Humanoid walking gait planning in both joint space and Cartesian space is then discussed in detail so as to make a clear comparison. By introducing some new concepts such as maximum kicking range and effective kicking range, a novel gait planning method for humanoid penalty kicking is also presented. All the proposed gait synthesizing methods have been successfully implemented on a soccer-playing humanoid robot, named Robo-Erectus, developed at Singapore Polytechnic.
1 Introduction

The desired goals of humanoid robotics research are to develop robots that are able to coexist and collaborate with humans, to replace humans in some of their roles in the interaction between humans and robots, and to extend human capabilities in interacting with environments [1]. Motion planning for a humanoid robot is therefore not limited to walking, but also includes many other tasks, such as kicking, running and jumping. Gait synthesis has been a very important part of the work in humanoid robot motion planning. Aiming at participating in robot soccer competitions [2,3], we have developed a series of humanoid robots named Robo-Erectus in the ARICC Centre of Singapore Polytechnic [4]. Humanoid walking and penalty kicking (PK) are two basic tasks for the current humanoid soccer-playing games. We will focus on some basic research issues in the field of gait synthesis for soccer-playing humanoids in this paper. Vukobratovic et al. [5,6] have considered locomotion directly. The fundamental aspect of their approach is the adaptation of human walking data to prescribe
motion of the lower limbs and the application point of the resultant of the reaction forces between the lower limbs and the walking surface. Furthermore, repeatability conditions are added to impose further symmetry conditions on the motion. Vukobratovic and his colleagues adopted a two-stage control synthesis. In the first stage, the control synthesis has to ensure the system's motion along the exact nominal trajectories, calculated in advance, in the absence of any disturbance. In the second stage, external perturbations are involved, and the nominal motion should be realized with the assistance of an additional control force. In this paper, we will only focus on the first stage, designing a desired trajectory for the humanoid robot without considering external perturbations. Furusho and Sano [7] achieved smooth 3D walking through sensor-based control on a biped with nine links. They considered the motion in the lateral plane as a regulator problem with two equilibrium states. In the sagittal plane, where the walking control was based on the body speed in forward movement, they made the body movement close to the desired smooth speed by controlling the ankle torque of the support leg. The sole and ankle driving actuators underwent a force/torque feedback control based on the sensor information, and an inverted pendulum mode was adopted. Mita et al. [8] used a linear optimal state feedback regulator to control a seven-link biped. Their work proved the usefulness of typical modern control theory for studies of biped locomotion. Miura et al. [9] approximated the biped motion of the single-support phase as an inverted pendulum, and also gave trajectory planning and stability analyses in their work; furthermore, their methods were implemented on their two biped robots. Zheng and Hemami [10] addressed the impact effects of a biped's contact with the environment. They showed that the impact may cause large internal impulsive forces in the body, and that the biped can control its initial posture in order to reduce the impulsive component of the internal constraint forces. They also proposed a linear velocity and nonlinear position feedback control method to slow down the biped motion so that it finally reaches a static posture after hitting the ground. In addition, Zheng and Shen [11] developed a biped robot, SD-2, which could climb sloping surfaces using static stabilization criteria. Using a force sensor underneath the foot, their robot was able to detect the transition of the supporting terrain from a flat floor to a sloping surface of up to 10 deg. They were the first to propose a biped walking algorithm for a sloped surface. Shih [12,13] proposed a gait synthesis for ascending and descending stairs for his 7-DOF biped. The efficiency of the gait synthesis was supported by its implementation on the biped. However, his method is only applicable to his specific biped, with variable-length legs and a translatable balance weight in the body. Furusho and Masubuchi [14] proposed some novel approaches to synthesize gaits for a five-element biped. They adopted a hierarchical control structure for lower-level control, and presented a reduced-order model of biped locomotion based on the concept of local feedback. The results were verified experimentally using a walking machine, and periodicity was demonstrated for several walking patterns. Miyazaki and Arimoto [15] divided biped locomotion into two modes, a slow mode and a fast mode, and obtained the joint motion trajectories in the two modes respectively.
They used the inverted pendulum model to approximate the biped's global locomotion. Their control algorithm was verified by
computer simulation. Kajita and Yamaura [16] used the term "potential energy conserving orbit" to describe a particular class of biped motion trajectories based on an ideal biped model, and proposed such an orbit. Huang et al. [17] presented a practical method for biped trajectory planning using third-order spline interpolation; however, they did not consider energy consumption, and most of the interpolation positions were decided by experience or by trial and error. The rest of this paper is organized as follows. In the following section, for the gait synthesis of humanoid walking, we propose a novel approach to generate trajectories in both Cartesian space and joint space from a robot manipulator perspective. We clarify the difference between traditional robot manipulators and humanoid robot "manipulators", and also compare the gait planning in the two different spaces. In Section 3, we give a detailed gait synthesis for the humanoid penalty kick, including kicking cycles and some basic research issues in penalty kicking. We also propose some new concepts, such as maximum kicking range and effective kicking range. Finally, some remarks on gait planning for soccer-playing humanoid robots are given in Section 4.
2 Gait Synthesis for Humanoid Walking

2.1 Classification

Different gait planning methods have been developed for humanoid walking pattern generation.
1. Control goal based methods. Researchers have addressed a wide variety of control goals for humanoid walking, such as stable walking on uneven terrain [17], energy-efficient walking [16], and walking on ascending and descending stairs [12,13].
2. Control model based methods. There are model-based and model-free walking planning schemes. The model-based method generates a walking gait from a mathematical model [5-17]. The model-free method usually makes use of intelligent techniques, such as machine learning [18-24], genetic algorithms [25,26], neural networks [22,23,27] and fuzzy control [19-21].
3. Stability criteria based methods. There are two different kinds of walking for humanoid robots, namely dynamic walking and static walking. Most current dynamic walking schemes use the zero moment point (ZMP) as the stability criterion [1,17,19,20,21,24,27]. Static walking [11,12,13,22,23] is implemented in slow movement and uses the center of gravity (CG) as the stability criterion.
4. Trajectory generation space based methods. A humanoid walking gait can be planned in either joint space or Cartesian space. Both Cartesian space walking gait planning [12,13,17] and joint space walking gait planning [18,28] will be discussed in this paper, together with a clear explanation and comparison of the two.
2.2 Manipulator

Although trajectory generation, one of the most important issues in humanoid robotics, is different from path planning for a manipulator with a fixed base [5], the two share many vital attributes. Both the manipulator and the humanoid robot obey the same Newton-Euler dynamic equation:
τ = D(q)q̈ + C(q, q̇) + g(q)    (2.1)
where q is the n × 1 vector of generalized joint displacements, τ is the n × 1 vector of applied generalized forces, D(q) is the n × n mass matrix, C(q, q̇) is an n × 1 vector of centrifugal and Coriolis terms, and g(q) is an n × 1 vector of gravity terms (for details of the parameters please refer to [14]). A humanoid robot can be considered as one or several independent manipulators under certain conditions. In this way, many theories for robotic manipulators can also be used for humanoid robots. In this paper, we will demonstrate how manipulator motion planning methods can be used for humanoid motion planning. There are two spaces for trajectory generation in manipulator motion: Cartesian space and joint space [29,30,31]. We will present the details of trajectory generation for humanoid robot walking in both spaces. One of the two main differences between the two cases is that the normal robot manipulator assumes a static base frame (i.e. world coordinate system) which never moves, whereas humanoid robot motion involves the movement of every joint, so in Cartesian space we have to find or assume a base frame in order to apply the manipulator methods. The other difference is that humanoid robots introduce unpowered degrees of freedom (DOF), which greatly affect the kinematics of the humanoid robot [5].

2.3 Stability Criterion

There are two kinds of humanoid robot stability: static stability and dynamic stability. Static stability neglects the dynamics of the robot and takes the centre of gravity (CG) as the stability criterion, so it is applicable only if the robot moves very slowly. Most research has adopted dynamic stability. The most common dynamic criteria are the center of pressure (CP), the zero moment point (ZMP) and the foot rotation indicator (FRI), which are briefly described as follows [32]:
1. The CP is the point on the foot/ground surface where the net ground reaction force actually acts.
2. The ZMP is the point on the floor at which the moment generated by the reaction force and the reaction torque is balanced.
3. The FRI is the point on the floor/ground surface, inside or outside the base of support, where the net ground reaction force would have to act to keep the foot stationary.
Under a different definition, the ZMP is identical to the CP. The ZMP was originally introduced for humanoid robot stability in 1969 [6]. Using this criterion has yielded many impressive results [1,19,20,21,27,28]. We adopted ZMP control for our humanoid walking gait synthesis. The ZMP can be calculated from the following equations:
x_zmp = [Σ_{i=1}^{n} m_i(z̈_i + g)x_i − Σ_{i=1}^{n} m_i ẍ_i z_i − Σ_{i=1}^{n} I_iy Ω̈_iy] / [Σ_{i=1}^{n} m_i(z̈_i + g)]    (2.2)

y_zmp = [Σ_{i=1}^{n} m_i(z̈_i + g)y_i − Σ_{i=1}^{n} m_i ÿ_i z_i − Σ_{i=1}^{n} I_ix Ω̈_ix] / [Σ_{i=1}^{n} m_i(z̈_i + g)]

where m_i is the mass of link i, I_ix and I_iy are the inertial components, Ω̈_ix and Ω̈_iy are the absolute angular acceleration components around the x-axis and the y-axis at the center of gravity of link i, g is the gravitational acceleration, (x_zmp, y_zmp, 0) is the coordinate of the ZMP, and (x_i, y_i, z_i) is the coordinate of the mass center of link i in an absolute Cartesian coordinate system. The angular accelerations Ω̈_ix and Ω̈_iy are small and have little effect on the ZMP position compared with the link accelerations, so these two terms are normally neglected.

2.4 Walking Phases

Humanoid robot walking includes two walking phases, namely the double-support phase and the single-support phase [5], while for each leg the walking process is composed of a stance phase and a swing phase. The double-support phase is the overlap period of the two legs' stance phases. During the double-support phase, the CG or ZMP moves from one foot to the other; this motion is achieved mainly in the lateral plane, and to simplify our analysis our motion planning is restricted to the sagittal plane. Fig. 1 shows a walking cycle starting from the kth step, where Tc is the period of one walking step, Td is the duration of the double-support phase, and kTc + Tm corresponds to the instant when the swing foot reaches its highest point.

2.5 Trajectory Generation in Cartesian Space

Humanoid walking is a periodic motion which alternates between the double-support phase and the single-support phase. During the double-support phase, both of the humanoid robot's feet are in contact with the ground. During the single-support phase, only one foot is in contact with the ground to support the humanoid, while the other leg swings forward. Our previous work [33] shows that the humanoid robot is prone
to falling at the switching points between the double-support phase and the single-support phase. In order to control a humanoid robot's movement, we should first generate the trajectories of the humanoid's joints. The trajectories should have first-order and second-order derivative continuity: first-order derivative continuity guarantees smoothness of the joint velocities, while second-order continuity guarantees smoothness of the joint accelerations, or torques. There are two ways to represent the joint positions, in Cartesian space or in joint space, and current joint trajectory generation falls into these two spaces. Furthermore, it can be proved that first-order and second-order derivative continuity in Cartesian space guarantees first-order and second-order derivative continuity in joint space, and vice versa, because the conversion between the two spaces is a one-to-one mapping. In fact, any trajectory generated in Cartesian space has to be converted to joint space to control each joint.
Fig. 1. Walking phase.
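Returning briefly to the stability criterion of Section 2.3, the ZMP equations (2.2) transcribe directly into code once the angular-acceleration terms are neglected as noted there; the per-link arrays below are example placeholders.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def zmp(m, x, y, z, ddx, ddy, ddz):
    """ZMP from per-link masses, positions and accelerations (angular terms dropped)."""
    denom = np.sum(m * (ddz + G))
    x_zmp = (np.sum(m * (ddz + G) * x) - np.sum(m * ddx * z)) / denom
    y_zmp = (np.sum(m * (ddz + G) * y) - np.sum(m * ddy * z)) / denom
    return x_zmp, y_zmp

# Example with two placeholder links at rest: the ZMP reduces to the CG projection.
m = np.array([2.0, 3.0])
x, y, z = np.array([0.0, 0.1]), np.zeros(2), np.array([0.2, 0.5])
print(zmp(m, x, y, z, np.zeros(2), np.zeros(2), np.zeros(2)))
```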
The trajectory generation in Cartesian space for the humanoid robot can be summarized as follows (the humanoid robot model is shown in Fig. 2; this simplified model neglects the foot and hip, and considers only one leg, since the other leg is symmetric):
(1) Define the position of the hip (xh(t), yh(t)) at some given time points, then generate the trajectory of the hip using polynomial interpolation.
(2) Define the position of the ankle (xa(t), ya(t)) at the switching points and at some other interpolation points that constrain the walking pattern. From these positions, the trajectory is generated using polynomial interpolation.
(3) Based on the above positions of the hip and ankle, the knee position (xk(t), yk(t)) is obtained using the following geometric formulas:
x_k = x_a + l_2 cos{ tan⁻¹[(y_h − y_a)/(x_h − x_a)] − cos⁻¹[ (l_2² + (x_h − x_a)² + (y_h − y_a)² − l_1²) / (2 l_2 √((x_h − x_a)² + (y_h − y_a)²)) ] }    (2.3)

y_k = y_a + l_2 sin{ tan⁻¹[(y_h − y_a)/(x_h − x_a)] − cos⁻¹[ (l_2² + (x_h − x_a)² + (y_h − y_a)² − l_1²) / (2 l_2 √((x_h − x_a)² + (y_h − y_a)²)) ] }
(4) From the positions of the joints, the joint angles can be obtained from the following inverse kinematics equation.
θ h = − tan −1 [ ( yk − yh ) ( xk − xh )] ± π / 2
(2.4)
θ a = −(π / 2 ± tan −1 [ ( yk − ya ) ( xk − xa )]) θk = θa + θh
Fig. 2. Simplified model of the humanoid robot
For the humanoid robot with a variable-length leg[12,13,34,35], there is no need generate knee joint trajectory. This kind of robot simplifies control architecture at the expense of more complex mechanical structure.
248
248
Z. Tang, C. Zhou, and Z. Sun
2.6 Trajectory Generation in Joint Space
For the trajectory generation in joint space, we take the two legs as two manipulators. Then the trajectory generation for humanoid motion can be considered as manipulator trajectory generation. The first step for manipulator planning is to define the base frame. Based on humanoid walking phases, we define the two manipulations as follows: (1) During the double-support phase and left-leg stance, the left foot is the base frame of the left manipulator, and the hip is the base frame of the right manipulator. (2) During the right-leg stance phase, the right foot is the base frame of the right manipulator and the hip is the left manipulator’s base frame. We name the foot base frame as the first base frame (BF1), and the hip base frame as the second base frame (BF2). The corresponding legs using the frame are named as the first-leg and the second-leg respectively. The joint angles are defined as following:
' qasi * θ ai (t ) = (qami *q ) aei ' qhsi * θ hi (t ) = (qhmi * ) qhei
t = kTc
(2.5)
t = kTc + Tm t = (k + 1)* Tc t = kTc t = kTc + Tm t = (k + 1)* Tc
where i = 1, 2 .Based on the above analysis, θ as1 = θ ae 2 , θ hs1 = θ he 2 ,
θ as1 = θ ae 2 ,
and θ hs 2 = θ he1 . Furthermore, because the motion of the humanoid robot proceeds to double-support phase in the lateral plane after the single-support phase, the angle velocity in sagittal plane is supposed to be zero, at the start and end of the double support-phase:
θ$ai (kTc) = θ$ai ((k + 1)Tc) = 0
(2.6)
θ$hi (kTc) = θ$hi ((k + 1)Tc) = 0 where i = 1, 2 . From the above via points and start and end conditions, smooth trajectories of the hip and two ankles in joint space can be generated using the third order Spline interpolation. These trajectories can achieve first and second derivate continuity in joint space.
Gait Planning for Soccer-Playing Humanoid Robots
249 249
2.7 Experiment Results
To synthesize a gait for controlling a real humanoid robot, we have developed a gait generator platform see Fig.4 using Visual C++ under Windows 2000. This generator decomposes the humanoid motion into three planes: sagittal, frontal and transverse. Most of humanoid research assumes that the motions in the three planes are independent, which greatly simplifies humanoid motion analysis. The humanoid robot parameters and other values are shown in Fig. 5 and Table1. !
✁
Fig. 3. Base frame
2.7.1 Planning in Cartesian Space Let the foot landing angle be Qf = 5 deg and leaving angle be Qb=5 deg respectively. We set the walking step length Ds = 5cm, Hao = 2cm, Lao = 4 cm, and walking cycle 2*Tc = 1.8s, a walking cycle consists of two steps, doublesupport phase Td = 0.2s. The ankle position at the following via points are defined, After defining the via points position, the ankle trajectory is generated by the third-order Spline interpolation. The hip trajectory can be obtained in a similar way. Then the knee position can be obtained from Eq.(2.3).
Table 1. Robot parameters Length (cm)
Lhip
Lthigh
Lshank
Lan
Laf
Lab
10
15
15
0
1.5
1.5
250
250
Z. Tang, C. Zhou, and Z. Sun
Table 2. Walking parameters Time (s)
Td
Tm
Tc
0.2
0.6
1
Fig. 4. Simulator interface
Fig. 5 Robot parameters and coordinate
Gait Planning for Soccer-Playing Humanoid Robots
251 251
2.7.2 Planning in Joint Space When the hip and ankle positions at the via points are given, then by Eq.(2.4) the joint angles can be obtained as shown in Table 3. A smooth joint angle trajectory in the joint space is generated using the third order Spline interpolation. However, the definition of the via points is not a trivial task, we normally get the positions through trial-error method and experience. Table 3. Via points value (rad) qas1
qam1
qae1
qhs1
qhm1
qhe1
0.559
0.906
-.02
0.038
0.446
0.643
2.7.3 Comparison between Two Spaces The comparison of experiment results are shown in Figs.6 to 17. The motion planning in the joint space is simpler and computationally less expensive than the motion planning in the Cartesian space, because motion planning in the Cartesian space requires solution of inverse kinematics equation Eq.(2.4). Therefore, the motion planning in joint space is more suitable for on-line trajectory generation. However, the joint space motion planning does not provide straight line motion of the end-effector. Therefore, it is not applicable for humanoid kicking, because the kicking trajectory must be a straight line. From Figs.6 and 7, it can be seen that the density variance of the stick for Cartesian walking planning is smaller than that in the joint space. This indicates that the liner accelerations of every link are smaller. As a result, the trajectory of ZMP is smoother. This is also demonstrated in Figs. 16 and 17. Although the trajectory planning scheme in the joint space makes the joint angular velocity smoother, the liner velocities of every robot link contribute much more to the dynamic walking stability. From this point of view, the walking planning in the Cartesian space is desirable. However, one of drawback for motion planning in the Cartesian space is that it is difficult to predict the joint’s angular velocity during on-line trajectory planning in the Cartesian space[30].
Fig. 6 Consecutive walking for planning in the Cartesian space
252
Z. Tang, C. Zhou, and Z. Sun
252
Fig. 7 Consecutive walking for planning in the joint space
1.3
1.1 Ankle Knee Hip
1.1
0.9
1
0.8
0.9
0.7
0.8
0.6
0.7
0.5
0.6
0.4
0.5
0.3
0.4
0.3
Ankle Knee Hip
1
Angle(rad)
Angle(rad)
1.2
0.2
0
0.1
0.2
0.3
0.4
0.5 t(d)
0.6
0.7
0.8
0.9
0.1
1
Fig. 8. Joint angle for planning in the Cartesian space
0
0.2
0.4
0.6
0.8
1 t(s)
1.2
1.4
1.6
10 X-axis Z-axis
X-axis Z-axis 9
9
8
8
7
7
Ankle Position(cm)
Ankle Position(cm)
2
Fig. 9. Joint angle for planning in the joint space
10
6
5
4
6
5
4
3
3
2
2
1
1
0
1.8
0
0.2
0.4
0.6
0.8
1 t(s)
1.2
1.4
1.6
1.8
2
Fig. 10. Ankle position for planning in the Cartesian space
0
0
0.2
0.4
0.6
0.8
1 t(s)
1.2
1.4
1.6
1.8
2
Fig.11. Ankle position for planning in the joint space
253
Gait Planning for Soccer-Playing Humanoid Robots 18
16 X-axis Z-axis
16
14
14
12 Knee Position(cm)
Knee Position(cm)
X-axis Z-axis
12
10
10
8
8
6
6
0
0.2
0.4
0.6
0.8
1 t(s)
1.2
1.4
1.6
1.8
4
2
Fig. 12. Knee position for planning in the Cartesian space
0
0.2
0.4
0.6
0.8
1 t(s)
1.2
1.4
1.6
2
30 X-axis Z-axis
X-axis Z-axis
25
25
20
20 Hip Position(cm)
Hip Position(cm)
1.8
Fig. 13. Knee position for planning in the joint space
30
15
15
10
10
5
5
0
253
0
0.2
0.4
0.6
0.8
1 t(s)
1.2
1.4
1.6
1.8
0
2
Fig. 14. Hip position for planning in the Cartesian space
0
0.2
0.4
0.6
0.8
1 t(s)
1.2
1.4
1.6
1.8
2
Fig. 15. Hip position for planning in the joint space
2
2 ZMP CG
ZMP CG 0
-2
-2
-4
-4
Y(cm)
Y(cm)
0
-6
-6
-8
-8
-10
-10
-12
5
6
7
8
9
10 X(cm)
11
12
13
14
15
Fig. 16. Fig. X ZMP and CG for planning in the Cartesian space
-12
2
4
6
8
10
12
14
X(cm)
Fig. 17. Fig. X ZMP and CG for planning in the joint space
16
254
254
Z. Tang, C. Zhou, and Z. Sun
3 Humanoid Penalty Kicking For kicking planning in this paper, we assume that the supporting foot keeps static while the kicking leg moves towards the ball. Furthermore, the velocity of the kicking leg at the start and end points is zero, so the moments from reciprocating links are negligible. In the following discussion, CG is used as criteria to evaluate humanoid static stability. Furthermore, humanoid penalty kicking requires that the kicking foot trajectory should be straight in the kicking range, which cannot achieve in joint space planning. Therefore, humanoid penalty kicking planning is conducted in the Cartesian space. 3.1 Basic Research Issues of Humanoid Kicking
3.1.1 Kicking a Ball There are many different methods for kicking a ball. As shown in Fig.18, different kicking directions and different kicking points on the ball may result in different kicking effects. Kicking in a horizontal direction is not the optimal way. But accurate kicking requires accurate information on both the ball and robot localization. To simplify the problem, we only consider kicking in horizontal direction in this paper.
Fig. 18. Kicking a ball
3.1.2 Max Kicking Range and Effective Kicking Range To achieve a longer kicking range, the humanoid robot’s hip should be kept as low as possible, because at the max kicking range point, the kicking leg stretches straight (Fig. 19). The lower the Hip Height (HH) is, the wider the max kicking range is. But the HH is subjected to the constraint that the CG should lie in the
Gait Planning for Soccer-Playing Humanoid Robots
255 255
supporting foot, because if HH is lowered, the CG moves forward. So, there exists a threshold value for the Lowest Hip Height (LHH). Based on the above analysis, the kicking leg speed is zero at the maximum kicking range point. If kicking the ball at that point, there is no momentum for the ball. We need to define another range, effective kicking range (see Fig. 20). To kick the ball into the goal, the minimum ball movement distance is required, and hence the minimum momentum for the foot can be defined. Then there exists a minimum velocity (Vmin) for the kicking foot to kick the ball into the goal. As indicated in Fig. 21, where Ts is the time when the kicking foot enters kicking range. The kicking range, in which the kicking foot velocity is greater than Vmin, is defined as effective kicking range. This effective kicking range is decided by many factors, e.g. the minimum ball movement distance, the floor property (friction) and the foot trajectory planning.
Fig. 19. Max kicking range and lowest hip height
3.2 Gait Planning for Penalty Kicking
3.2.1 Kicking Cycles The proposed kicking action can be decomposed into three steps (Fig. 22): (1) Leaning the body and bending the legs, from t = T0 to T1, (2) Lifting kicking leg, t = T1 to T2, (3) Kicking, t = T2 to T3. For the first step, leaning is to move the center of gravity (CG) to the supporting leg, bending for lifting kicking leg. From the previous discussion, to get a wider kicking range the robot needs to bend as low as possible. At the same time, the robot should lean in the frontal plane in order to move the CG to the supporting foot. During the second step, the kicking leg should lift as high as possible at same time the CG must keep within the supporting area. This position
256
256
Z. Tang, C. Zhou, and Z. Sun
can be obtained using the similar method as described above. For the kicking cycle, the ankle trajectory of kicking foot is first to be generated. At the start and end of a cycle, the kicking leg velocity is zero. This condition can be used to balance the robot.
Fig. 20. Kicking range
Fig. 22. Kicking cycle
Fig. 21. Kicking range vs. velocity
3.2.2 Ankle Trajectory for Kicking Spline interpolation is utilized to generate ankle trajectory of the kicking leg. In the kicking range, the kicking foot position in z-axis should be equal to ball radius. And we define the kicking range starts from zero point. And we assume Ts as the time for the kicking leg swing from the highest point to kicking range (xa(t) = 0). (xa(t), ya(t), za(t) ) is the position of ankle. During the kicking phase, the ankle should meet the following constraints:
Gait Planning for Soccer-Playing Humanoid Robots
' Hx * x a ( t ) = ( Lx * Rx )
t = T2
257 257
(3.1)
t = T2 + Ts t = T3
' Hz * za (t ) = (Rball + Lan *R + Lan ) ball
(3.2)
t = T2 t = T2 + Ts T2 + Ts < t ≤ T3
where ( Hx , Hy , Hz ) is the position of the kicking foot at highest point. Lx is the ankle’s x value at the point the foot enters the kicking range, Rball is the radius of the ball. Because we assume the hip keeps static in this paper, once the ankle trajectory is defined, the knee trajectory is also defined. From Eq.(3.2), it can be seen that za(t) is a constant during T2 + Ts < t ≤ T3 , this condition guarantees the kicking foot in the horizental direction. 3.3 Experiment
The kicking cycle time for this experiment is shown in Table 4. Table 4. Kicking cycle time Time (s)
T0
T1
T2
T3
Ts
0
0.3
1.8
3.3
0.9
Let Hz = 15cm, Lx = 0, Rball = 5cm, and LHH=88cm. According to the constraints (Eqs.(3.1) and (3.2)) and the parameters given in Table 1 and Table 4, an ankle trajectory can be generated as shown in Fig. 23. The kicking foot velocity is given in Fig. 24. Based on this velocity, the effective kicking range can be derived. Fig. 25 shows the three joints angular trajectories. The CG position is given in Fig. 26, which always lies in the supporting foot area, i.e. Lab −a3[(2θ##1 + θ##2 )s2 + (2θ#1θ#2 + θ#22 )c2 ](i = 1,2) and K2 >> a3 (θ##1s2 −θ#12c2 ) , for the purpose of simplicity in the following, let $ K = diag ( K1 , K 2 ) instead of K . matrix, K the system stiffness matrix, g ,
The end trajectory of manipulator can be expressed as a set of , where
y = Φ 0θ + Φ f q
(2.6)
Φ 0 = [l1 l2 ] l1 and l2 are the length of two links, respectively. Φ f = [φ11 (l ) φ 21 (l )] , in general, for the n-th modal where
✁
φin ( x) = l ( F1,n sin d n χ + F2,n sinh d n χ + F3,n cos d n χ + F4,n cosh d n χ ) here
χ = x/l ✂
1 d n ≈ (n + )π , F j ,n ≈ (−1) j +1 (8M t l 2 + ρl 3 ) −1/ 2 . 4
,
278278X. Dai, L. Sun, and H. Cai
3 Inverse Dynamics Control Based on Singular Perturbation Denoting q with q = µ ξ , here µ = Kˆ . From the principle of singular perturbation, Eqs. 2.1 and 2.2 can be rewritten as 2
−1
2
(3.1) Mθ## + g + Mµξ 2 + f1 ( µξ ) + f 2 (θ , µξ ) = T Mµ 2ξ## + Cµξ# + ξ + Mθ## + g + f 2 (θ , µξ ) = 0 (3.2) Let µ = 0 and take into account both f1 and f 2 visibly involving q , Eqs. (3.1) and (3.2) are reduced to
## Mθ + g = T ξ + Mθ## + g = 0
ξ
(3.3) (3.4)
q can be obtained. y of y can be gained, too. These are
can be resolved from Eqs. (3.3) and (3.4). Furthermore
From Eq. (2.6), the approximated value given by
$ q ≈ µ 2ξ = − K −1T y = Φ θ − Φ Kˆ −1T 0
(3.5) (3.6)
f
yd is known, substitute y in Eq. (3.6) with yd . Hence, one equation containing T as variable is Φ f Kˆ −1T = Φ 0θ − yd (3.7) Since the desired end trajectory of manipulator
Considering the desired motion direction of the manipulator end effector, we
LT = 0 , where L ∈ R 2×1 . Now, from the above −1 #Φ f Kˆ −1 ' #Φ 0θ − yd ' T =$ (3.8) ( $ ( 0 ) % L ) % Up to now, T is still not suitable for control, because there're a last unknown variable θ in Eq. (3.8). Next, discuss how to replace θ with its approximated value θˆ . have
Substituting Eq. (3.8) in Eq. (3.3), we have −1
−1
#Φ f Kˆ −1 ' # yd ' #Φ f Kˆ −1 ' #Φ 0 ' # # (3.9) Mθ = $ ( $ (−g ( $ (θ − $ % L ) %0) % L ) %0) ˆ . Because of M varying slowly, so replace it with a constant matrix M ˆ || , so ignore it for brevity, Eq. 3.9 becomes Notice that || g || 0 fixed. Then the wavelet decomposition is
f = # cm ,n ( f )ψ m,n . For some very special choices of
ψ
and
a 0 > 1, b0 > 0 , the functions ψ
2
constitute an orthonormal basis for L (R). In particular, if we choose
m, n
a0 = 2 ,
b0 = 1 , then there exists ψ , with good time-frequency localization properties such that ψ
2
m,n
becomes an orthonormal basis for L (R) and thus,
c m ,n ( f ) =< ψ m ,n , f >= .ψ m ,n ( x) f ( x)dx . In image compression, we are dealing with sampled data that are discrete in time. We would like to have discrete representations of time and frequency, which is called the discrete wavelet transform (DWT). 2.1 Discrete Wavelet Transform For wavelets generated by a scaling function ϕ (x ) using a multiresolution analysis (see [3] for examples), we can write c m , n ( f ) =< ψ m , n , f >= ψ m ,n ( x) f ( x )dx as the following algorithm.
.
c m ,n ( f ) = # g 2 n −k a m −1,k ( f ) k
a m ,n ( f ) = # h2 n− k a m −1,k ( f )
(2.1)
k
where
g l = (−1) l h−l +1 and hn = 21 / 2 . ϕ (2 x − n)ϕ ( x)dx. When f is
given in sampled form, then we can assume those samples as the highest order resolution approximation coefficient a 0 ,n . Equation (2.1) gives the coding algorithm on these sampled values with low-pass filter h and high-pass filter g. For orthonormal wavelet bases, these filters give exact reconstruction by the following equation:
a m −1,n ( f ) = # [h2 n −l a m ,n ( f ) + g 2 n−l c m ,n ( f )]. n
284284D. Hong, M. Barrett, and P. Xiao
Compact Supported Orthogonal Wavelets In this section, we will determine the filter coefficients for compact supported orthogonal wavelets. We begin with the scaling function
φ (t ) = # hk 2φ (2t − k ). k
By integrating both sides, we have
.
∞
−∞
φ (t )dt = .
∞
−∞
#h
k
2φ (2t − k )dt .
k
Substituting x = 2k − t and dx = 2dt on the right hand side, we get
.
∞
−∞
1 2
∞
φ (t )dt = # hk 2 . φ ( x) dx = # hk −∞
k
k
1 2
.
∞
−∞
φ ( x)dx .
Then, divide both sides by the integral and we have
#h
k
= 2.
(2. 2)
k
We use the orthogonality condition on the scaling function to get another condition on {hk }:
. | φ (t ) |
2
dt = . # hk 2φ ( 2t − k )# hm 2φ ( 2t − m) dt k
m
= ## hk hm 2 . φ ( 2t − k )φ ( 2t − m) dt k
m
= ## hk hm . φ ( x − k )φ ( x − m)dx , k
m
where in the last equation, we substituted 2t by x . The integral on the right hand side is zero except k = m . When k = m , we obtain
#h
2 k
= 1.
(2. 3)
k
By using the orthogonality of the translates of scaling function we have
. φ (t )φ (t − m)dt = δ Substituting
ϕ (t ) '
m
.
by scaling function, we get
2 2' 2φ ( 2t − k )3 (# hk 2φ (2t − 2m − l )3 dt k 4 4) k = ## hk hl 2 . φ ( 2t − k )φ (2t − 2m − l ) dt.
. ()# h
k
k
l
By substituting x = 2t , we get
. φ (t )φ (t − m)dt = ## h h . φ ( x − k )φ ( x − 2m − l )dx k
k
l
l
Wavelets and Biorthogonal Wavelets for Image Compression
285 285
= ## hk hl δ k −( 2 m +l ) = # hk hk − 2 m . k
l
k
Therefore, we have equation
#h h k
= δm .
k −2 m
(2. 4)
k
By using equations (2.2)-(2.4), we can generate filter coefficients for scaling function. For k = 2 , from equations (2.2) and (2.3), we have h0 + h1 = 2
The only solution is For
h0 = h1 =
h02 + h12 = 1. 1
, which is the Haar scaling function.
2
k = 4 , from equations (2.2)-(2.4), we have h0 + h1 + h2 = 2
h02 + h12 + h22 = 1 h0 h2 + h1 h3 = 0 . The solutions to these equations include the 4-tap (filter length is 4) Daubechies scaling function. 1− 3 1+ 3 , 3+ 3 , 3− 3 , h0 = h1 = . h3 = h2 = 4 2 4 2 4 2 4 2 We know the wavelet function is
ψ (t ) = # wk 2φ (2t − k ). k
If the wavelet is orthogonal to the scaling function at the same scale, we have
. φ (t − k )ψ (t − m)dt = 0. Then, we can obtain the wavelet filter coefficients from the scaling filter coefficients [1].
wk = ±(−1) k hN −k .
Haar Wavelet The Haar scaling function is defined as:
* 1 0 ≤ x = 2 − m / 2 . φ m , n ( x) f ( x) dx c m ,n ( f ) =< ψ m ,n , f >= 2 − m / 2 . ψ m ,n ( x ) f ( x ) dx and reconstruction is
f = # < ψ m ,n , f > ψ~m ,n . m,n
The following figure shows the relation between the filter structure and wavelets functions.
~ H (n)
H (n) X
X
~ G ( n)
G (n)
Fig. 4. Filter structure and the associating wavelets
For symmetric filters, the exact reconstruction condition on equation (2.5) can be represented by ~ ~ H (ξ ) H (ξ ) + H (ξ + π ) H (ξ + π ) = 1 ,
~
where H (ξ ) = 2 −1 / 2
~
#h e
− jn ξ
n
n
Together with the divisibility of
and H (ξ ) = 2 −1 / 2
#h e n
~ h and h given in
− jnξ
.
n
~ H and H , respectively, by (1 + e − jξ ) k and
~
(1 + e − jξ ) k
We have [4]
' l −1 $ l − 1 + ~ H (ξ ) H (ξ ) = cos(ξ / 2) 2l (# %% p ) p =0 & where R (ξ ) is an odd polynomial in
2 p/ 00. sin(ξ / 2) 2 p + sin(ξ / 2) 2l R (ξ )3 , 1 4 ~ cos(ξ ) and 2l = k + k .
(2. 6)
Spline Filters
~ ~ R ≡ 0 with H (ξ ) = cos(ξ / 2) k e − jkξ / 2 where k = 0 if k is ~ even, k = 1 if k is odd. We have Let us choose
Wavelets and Biorthogonal Wavelets for Image Compression ~ ' l −1 $ l − 1 + H (ξ ) = cos(ξ / 2) 2l − k e jkξ / 2 (# %% ) p =0 & p
Then, we get the corresponding function
~
φ
Hn ~ Hn
~ k = 4, k = 2 . Table 2 list the
~ H n and H n (see [3]). Table 2. Filter Coefficients with
n
2 p/ 00. sin(ξ / 2) 2 p 3 . 1 4
which is a B-spline function.
We show one examples from this family with their coefficients
289 289
0
±1
±2
±3
45/64
19/64
-1/8
-3/64
1/2
1/4
0
0
~ k = 4, k = 2
±4 3/128 0
Fig. 5. Spline examples with
~ l=k=k =4
A Spline Variant with Less Dissimilar Lengths We choose R ≡ 0 , and factor the right hand side of equation (2.6) to break the polynomial into a product of two polynomials in sin(ξ / 2) . Allocate the
290290D. Hong, M. Barrett, and P. Xiao
~
polynomials to H and H respectively to make the length of h and as possible.
~ h as close
The following example is the shortest one in this family (shortest h and corresponds to l = k = 4 [3]. Table 3. Filter coefficients with
n Hn ~ Hn
~ h ). It
l=k=4
0
±1
±2
±3
±4
0.602949
0.266864
-0.078223
-0.016864
0.026749
0.557543
0.295636
-0.028772
-0.045636
0
2.2 Comparison of Wavelet Properties The following Table 4 shows the property comparison of three kinds of wavelets. Haar and Daubechies wavelets have orthogonality, which has some nice features. 1) The scaling function and wavelet function are the same for both forward and inverse transform. 2) The correlations in the signal between different subspaces are removed. Among the three kinds of wavelets, the Haar wavelet transform is the simplest one to implement, and it is the fastest. The major disadvantage of the Haar wavelet is its discontinuity, which makes it difficult to simulate a continuous signal. Daubechies found the first continuous orthogonal compact support wavelet. Note that this family of wavelets is not symmetric. The advantage of symmetry is that the corresponding wavelet transform can be implemented using mirror boundary
Property
Table 4. Property comparison of three kinds of wavelets Haar Dau. Dau. Orthogonal Spline Yes No Yes
Explicit Function Orthogonal Symmetric Continuous Compacted support Maximum regularity for order L Shortest scaling function for order L
Yes Yes No Yes
Yes No Yes Yes
No Yes Yes Yes
No
No
Yes
Yes
No
Yes
Biorthogonal
Wavelets and Biorthogonal Wavelets for Image Compression
291 291
conditions to reduce boundary artifacts. That is why we introduce the biorthogonal spline wavelet. For the biorthogonal spline wavelet, the scaling function is a Bspline. The B-spline of degree N is the shortest possible scaling function of order N-1 and B-splines are the smoothest scaling functions for a filter of a given length [12]. Since splines are piecewise polynomial, they are easy to manipulate. For example, it is simple to get spline derivatives and integrals. Next, we discuss how to apply wavelet transform to image compression.
3 Wavelets Applied in Image Compression In order to compare wavelet methods, a software tool coded in Visual C++ called MinImage is used. MinImage was originally created to test Haar and Daubechies wavelets [5]. We add some new functionality to it in order to support other wavelet types. Also, the EZW coding algorithm is implemented to achieve better compression results. In the following, we discuss some implementation details of MinImage.
3.1 Baseline Schema The wavelet image compressor, MinImage, is designed for compressing either 24bit true color or 8-bit gray scale digital images. It was originally created to test Haar wavelet using subband coding. To compare different wavelet types, other wavelet types including Daubechies orthogonal and birothogonal spline wavelets were implemented. Also, the original subband coding was changed to EZW coding to obtain better compression results. A very useful property of MinImage is that different degrees of compression and quality of the image can be obtained by adjusting the compression parameters through the interface. The user can trade off between the compressed image file size and the image quality. The user can also apply different wavelets to different kind of images to achieve the best compression results. Figure 6 is the baseline structure of MinImage compressing sechema.
Sampled
Preprocessor
Decompression
DWT
Compressed Image
Fig. 6. The baseline schema of MinImage
EZW Coding
Entropy Coding
292292D. Hong, M. Barrett, and P. Xiao
For more details on the Preprocessor, see [5]. In the rest of the chapter, we will focus on implementation of discrete wavelet transform (DWT) and EZW coding.
3.2 Discrete Wavelet Transform (DWT) The discrete wavelet transform usually is implemented by using a hierarchical filter structure. It is applied to image blocks generated by the preprocessor. Figure 7 shows the 1-D wavelet decomposition (forward transform) and composition (inverse transform) algorithms. original signal x[0..n-1], n = 2k
normalize x[i ] =
x[ i ] n
,
i
=
h=n
nDecompositionStep = 0
h>1
n
y nDecompositionStep ++
nDecompositionStep > nMaxDecompostionStep
n DecompositionStep( x[0..h-1] )
h=h/2
wavelet coefficients x[0..n-1] Fig. 7. The 1-D wavelet decomposition algorithm in MinImage
y
Wavelets and Biorthogonal Wavelets for Image Compression
wavelet coefficients x[0..n-1], n=2k nStart = n i=0
n i> i = = 0
y nStart = 2 h= nStart
n
h >
293 293
294294D. Hong, M. Barrett, and P. Xiao
As we know, images are 2-D signals. A simple way to perform wavelet decomposition on an image is to alternate between operations on the rows and columns. First, wavelet decomposition is performed on the pixel values in each row of the image. Then, wavelet decomposition is performed to each column of the previous result. The process is repeated to perform the complete wavelet decomposition. The following is the 2-D wavelet decomposition algorithm. PROCUDURE 2D-Decomposition( ImageData[ 0..n-1, 0..n-1 ] ) FOR row = 0 TO n-1 DO FOR col = 0 TO n-1 DO ImageData[ row, col ] / = n h=n WHILE h > 1 DO FOR row = 0 TO h-1 DO DecompositionStep( ImageData[ row, 0..h-1 ] ) END FOR FOR col = 0 TO h-1 DO DecompositionStep( ImageData[ 0..h-1, col ] ) END FOR h=h/2 END WHILE END FOR 0 END FOR 1END PROCEDURE
3.3 Embedded Zerotree Wavelet (EZW) Coding After the 2-D wavelet decomposition, the wavelet transform blocks contain the wavelet coefficients. Then, we use Embedded Zerotree Wavelet coding to code the transformed wavelet coefficients. EZW coding was designed by Shapiro [10] to use with wavelet transforms. In fact, EZW coding is more like a quantization method. This method is based on progressive encoding to compress an image into a bit stream. The coding process is implemented by passes. For each pass, a threshold is chosen against which all the wavelet coefficients are measured. If a wavelet coefficient is larger than the threshold, it is encoded and removed from the image; if it is smaller, it is left for the next pass. When all the wavelet coefficients have been visited, the threshold is lowered, and the image is scanned again to add more detail to the already encoded image. This process is repeated until the stopping threshold has been reached. A zerotree is a quad-tree of which all nodes are equal to or smaller than the root. The tree is coded with a single symbol and reconstructed by the decoder as a quad-tree filled with zeroes. EZW coding is based on the observation that wavelet
Wavelets and Biorthogonal Wavelets for Image Compression
295 295
coefficients decrease with scale. When the coefficients are scanned, if the root is smaller than a threshold, it is highly probable that all the coefficients in the quadtree will be smaller than the threshold. If this is the case, then the whole tree can be coded with a single zerotree symbol.
3.4 Entropy Coding Entropy coding is the last stage of MinImage to compress the EZW coded data to a final wavelet image file by using a lossless method. A wrapper class is written to apply the Zlib compression engine to perform this task. Zlib is a free compression library. It compresses the source data by LZ77 (Lempel-Ziv1977) and/or Huffman entropy encoding. Its deflation algorithms are similar with those used by PKZIP (an MSDOS based software by PKWARE, Inc.).
4 Experiments In this section, we analyze the compression results produced by MinImage. These include the subjective and objective qualities of reconstructed images for different wavelet types, timing of composition and decomposition for different wavelet types, and timing of EZW coding algorithm for different compression ratios. PSNR (peak-signal-to-noise-ratio) is the most commonly used value to evaluate the objective image compression quality. It is calculated by the following equation:
PSNR( dB) = 10 log10
where
δ d2
x 2peak
δ d2
,
is called MSE (mean-squared-error) and is calculated by the
following equation:
δ d2 =
1 N
N
# (x
n
− yn ) 2 .
n =1
4.1 Objective Quality To compare the objective quality for different wavelet types, we use the standard testing image (see Figure 9). Table 5 illustrates one set of compression results.
296296D. Hong, M. Barrett, and P. Xiao
Fig. 9. Lenna 256*256 Table 5. Comparison of compression results by using different wavelets Wavelet Haar D4 D6 D8 D 10 SP 2_2 SP 2_4 SP 4_4
Compression Ratio 9.177 % 8.613 % 8.469 % 8.426 % 8.336 % 8.321 % 8.278 % 8.235 %
Compressed File size 18048 bytes 16938 bytes 16709 bytes 16570 bytes 16393 bytes 16365 bytes 16280 bytes 16196 bytes
PSNR 35.230 dB 36.515 dB 35.246 dB 36.946 dB 35.643 dB 36.962 dB 37.152 dB 37.644 dB
From the above table, we observe that biorthogonal spline wavelets achieve better compression results than orthogonal wavelets. Haar wavelet has the worst result.
4.2 Subjective Quality To compare the subjective quality for different wavelet types, we choose five different standard testing images. All compressed images have the same compression ratio (1%). For each image, we present the reconstructed images in random order to some volunteers to evaluate the quality based on the original image (graded as 10). The following table 6 shows the result, which is conformed to the objective result. Haar wavelet obtains the worse subjective quality because of its discontinuity, which results in the block artifact in reconstructed images. It is hard to tell the difference of reconstructed images compressed by different Daubechies orthogonal wavelets. Birorthogonal spline wavelets achieve the best subjective quality because those reconstructed images are smoother than the others.
Wavelets and Biorthogonal Wavelets for Image Compression
297 297
Table 6. Comparison of subjective quality Wavelet Haar D4 D6 D8 D 10 SP 2_2 SP 2_4 SP 4_4
Average grade 3.2 5.5 5.4 5.4 5.6 5.9 6.0 6.2
Table 7. Comparison of compression results by using different quantization methods
UQ
Compression Ratio 9.177 %
Compressed File size 18048 bytes
EZW
8.788 %
17282 bytes
UQ
8.613 %
16938 bytes
EZW
8.443 %
16605 bytes
UQ
8.469 %
16709 bytes
EZW
8.058 %
15874 bytes
UQ
8.426 %
16570 bytes
EZW
8.396 %
16511 bytes
UQ
8.336 %
16393 bytes
EZW
7.942 %
15619 bytes
UQ
14.550 %
28615 bytes
EZW
14.510 %
28536 bytes
UQ
12.917 %
25403 bytes
EZW
12.756 %
25086 bytes
Wavelet
Quantization
Haar
D4
D6
D8
D 10
SP 2_2
SP 4_4
PSNR 35.230 dB 35.710 dB 36.515 dB 36.433 dB 35.246 dB 36.580 dB 36.946 dB 37.140 dB 35.643 dB 36.381 dB 40.650 dB 41.190 dB 40.710 dB 41.271 dB
298298D. Hong, M. Barrett, and P. Xiao
4.3 Quantization Methods Originally MinImage used Uniform Quantization to code the transformed wavelet coefficients [5]. EZW coding algorithm is adopted to achieve better compression result. Table 7 illustrates the compression results for these two different methods. From this table, we can verify that EZW coding obtains better objective compression results than Uniform Quantization method for each of the wavelet types.
4.4 JPEG Vs. MinImage JPEG stands for Joint Photographic Experts Group, the original name of the committee that wrote the standard. JPEG is a standardized image compression standard which uses the DCT (Discrete Cosine Transform). As we discussed above, wavelet based compression method should obtain better compression ratios than the DCT based compression method. The following test confirms this result. The standard 512*512 grayscale Lenna image (see Figure 11) is used for this test. The MinImage files on the right side of Figure 10 are compressed by using the SP4_4 wavelet and the EZW coding. The left side images of Figure 10 are compressed by using the JPEG standard algorithm called CJPEG, version 6 (2Aug-95) by Thomas G. Lane (Copyright (C) 1995) [7]. Table 8 lists results for four different compression ratios. Table 8. Comparison of compression results between JPEG and MinImage Compression Method JPEG
Compression Ratio 1.60 %
Compressed File size 4213 bytes
MinImage
1.19 %
3129 bytes
JPEG
3.07 %
8078 bytes
MinImage
2.97 %
7812 bytes
JPEG
6.36 %
16752 bytes
MinImage
6.06 %
15944 bytes
JPEG
12.40%
32640 bytes
MinImage
12.39 %
32618 bytes
PSNR 21.93 dB 28.20 dB 30.40 dB 32.05 dB 34.74 dB 35.24 dB 37.81 dB 38.47 dB
Wavelets and Biorthogonal Wavelets for Image Compression
CJPEG Compressed File Size:4213 bytes PSNR:21.93dB (a)
299 299
MinImage Compressed File Size:3219 bytes PSNR:28.20dB (b)
CJPEG MinImage Compressed File Size:8078 bytes, PSNR:30.40dB Compressed File Size:7812 bytes, PSNR:32.05dB (c) (d) Fig. 10. Comparison of JPEG and MinImage
From the Table 8, we can see that MinImage has better objective (PSNR) quality for all different compression ratios. Since JPEG applies DCT on 8*8 blocks, the block artifact (see image (a) in Figure 10) can be seen from the reconstructed image, especially for low quality image.
300300D. Hong, M. Barrett, and P. Xiao
Fig. 11. Original 512*512 grayscale Lenna
4.5 Timing Analysis In this section, we show the decomposition and composition time comparison (see Figure 12), EZW coding and decoding time comparison, and EZW coding time with different compression ratio (see Figure 13). All timing data were collected from a Dell Pentium III desktop machine running on Windows NT 4.0. Comparison of Decomposition and Composition Time 1.2 Time (Secs)
1 0.8 Decomposition Composition
0.6 0.4 0.2
H a
6
4 D
D
8 D
D 10
Sp 2_ 2
Sp 2_ 4
Sp 4_ 4
0
Wavelet Type
Fig. 12. Comparison of Decomposition and Composition Time
Wavelets and Biorthogonal Wavelets for Image Compression
301 301
Since the wavelet decomposition and composition times are entirely determined by the filter length, Haar wavelet (2 taps) takes the least time and Daubechies 10tap orthogonal wavelet takes the most time.
Comparison of EZW Coding and Decoding Time (SP4_4)
Time (Secs)
100 80 60
EZW Coding Time EZW Decoding Time
40 20 0 0.75
1.5
3
6
12
Compression Ratio (%)
Fig. 13. Comparison of EZW Coding and Decoding Time
Figure 13 shows that the EZW coding and decoding take about the same time for same the compression ratio. However, when the compression ratio increases, the EZW coding and decoding time will increase significantly. To explain this, let us recall the EZW coding algorithm. When the stop threshold is high, for every main loop, the transformed wavelet coefficients maintain a good zerotree structure, which makes EZW coding algorithm efficient. On the other hand, with a lower stop threshold, a lot of time will be wasted to check the potential zerotree structures, which are not most of the time for a small threshold. Since the EZW coding takes so much time, especially for higher compression ratios, the total compression time will be dominated by the compression ratio. From Figure 14 we can see that there is not much difference in the EZW coding time between different wavelet types for a same compression ratio.
5 Conclusion and Final Remarks This paper introduced some orthogonal and biorthogonal wavelet filters and introduced the EZW coding method for image compression. A software tool called MinImage is used to compare different wavelet methods. The Haar wavelet transform is the simplest one to implement, and it is the fastest. However, because of its discontinuity, it is not optimal to simulate a continuous signal. Based on our experiments, Haar wavelet obtained the worst compression result, which proves the above statement.
302302D. Hong, M. Barrett, and P. Xiao
100 90 80 70 60 50 40 30 20 10 0
0.75% 1.50% 3.00% 6%
H a
4 D
6 D
D8
D 10
12%
Sp 2_ 2
Sp 4_ 4 Sp _2 _4
Coding Time (Secs)
Comparison of EZW Coding Time
Wavelet Type
Fig. 14. Comparison of EZW Coding time with different wavelets and different compression ratios Daubechies found the first continuous orthogonal compactly supported wavelet. Note that this family of wavelets is not symmetric. The advantage of symmetry is that the corresponding wavelet transform can be implemented using a mirror boundary condition, which reduces boundary artifacts. It was proved that no wavelet could be compactly supported, symmetric, continuous and orthogonal simultaneously. To get the symmetry property, we introduced biorthogonal spline wavelet by relaxing the orthogonal condition. For the biorthogonal spline wavelet, the scaling function is a B-spline. The B-spline of degree N is the shortest possible scaling function of order N-1 and B-splines are the smoothest scaling functions for a filter of a given length. Our experiment also proved that birothogonal spline wavelet outperformed the Daubechies orthogonal wavelet. Compared with the current JPEG image compression standard, MinImage obtains better compression results. However, the run time of MinImage increases significantly to achieve a high quality compression ratio because of the EZW coding algorithm. As an application, MinImage gives the user an easier way to do a variety of tests about wavelet image compressions. It could be improved in several ways. We end the paper with the following remarks. 1. To get the symmetric wavelets, one solution is to relax the orthogonal condition. We derived the biorthogonal spline wavelet, which gives better compression results than orthogonal wavelets for symmetric images. Another solution is using the called multiwavlets [11]. The basic idea is to use a vector of scaling functions to generate multiwavelets with orthogonality, symmetry, continuity, and compact support properties at the same time. In [6], some properties of multiwavelets with multiplicity
Wavelets and Biorthogonal Wavelets for Image Compression
2.
3.
4.
303 303
four are discussed. MinImage could be more powerful if multiwavelets transform methods are added. The EZW coding method [13] is one of the more efficient quantization and coding methods for wavelet coefficients. However, MinImage could be more powerful if some more efficient quantization methods, such as space-frequency quantization [14] or context-based entropy coding [2], can be implemented. The entropy coding library, Zlib, is imported to MinImage. Since it uses traditional coding methods, it may not be the best coding method for EZW coded wavelet coefficients. A basic introduction on image compression from discrete cosine transform (DCT) to wavelets can be found in [9].
References 1. 2.
3. 4. 5. 6. 7. 8.
9. 10. 11. 12. 13. 14.
Burrus CS, Gopinath RA, Guo H (1998) Introduction to Wavelets and Wavelet Transfroms. Prentice Hall, Englewood Cliffs, NJ. Chrysafis C, Ortega A (1997) Efficient context-based entropy coding for lossy wavelet image compression. DCC, Data Compression Conference, Snowbird, UT, March, pp. 25 - 27. Daubechies I (1992) Ten Lectures on Wavelets. SIAM, Philadelphia, Pennsylvania. Dunn S (1999) Digital Color. http://davis.wpi.edu/~matt/courses/color. Gu H, Hong D, Barrett M (2003) Image Compression Using the Haar Wavelet Transform. J Computational Analysis and Applications 5: 45-75. Hong D, Wu A (2001) Orthogonal Multivavelets of Multiplicity Four, Computer and Mathematics with Applications 20:1153-1169. McCrosky C (1999) Demo of wavelet compression and JPEG. http://www.cs.usask.ca/faculty/mccrosky/dwt-demo/index.html. Morlet J (1983) Sampling theory and wave propagation. NATO ASI Series, Vol. 1, Issues in Acoustic signal/Image processing and recognition, Berlin, pp. 233261. Saha S (2000) Image Compression - from DCT to Wavelets. http://www.acm.org/crossroads/xrds6-3/sahaimgcoding.html. Shapiro JM (1993) Embedded Image Coding Using Zerotrees of Wavelet Coefficients. IEEE Transactions on Signal Processing 41:3445-3462. Strela V (1996) Multiwavelets: Theory and Application. Ph.D. thesis, Massatchusets Institute of Technology. Unser M (1999) Splines, A Perfect Fit For Signal/Image Processing. IEEE Signal Processing Magazine 16:22-38. Valens C (1999) EZW Coding. http://perso.wanadoo.fr/polyvalens/clemens/ezw. Xiong Z, Ramchandran K, Orchard MT (1997) Wavelet packet image coding using space-frequency quantization. IEEE Trans. Image Processing 7:892-898.
Adaptive Neuro-Fuzzy Control of Systems with Unknown Time Delay F.H. Ho, A.B Rad, Y.K. Wong, and W.L. Lo Department of Electrical Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong SAR, China. [email protected]
Abstract. We present an adaptive fuzzy logic controller, which learns a lowerorder model of the system via an on-line Neural Network (NN) identification algorithm. The identification is based on the estimation of parameters of a First-OrderPlus-Dead-Time (FOPDT) model. The outputs of the NN are three parameters: gain, apparent time delay and the dominant time constant. By combining this algorithm with a fuzzy logic controller with rotating rule-table, an adaptive controller is obtained which -with very little a priori knowledge- can compensate systems with unknown time delay. The simplicity and feasibility of the scheme for realtime control provides a new approach for a variety of real-time applications. Simulation and experimental results are included to demonstrate the adaptive property of the proposed scheme.
1 Introduction Identification and control of time delay systems have attracted a lot of attention due to their theoretical and practical significance [7]. Many processes exhibit delay characteristics in their mathematical formulation. Also, systems that are described by rational transfer functions are generally represented by lower-order transfer functions for the purpose of controller design [13]. Model-based control literature is abundant of studies regarding their modeling, identification and control [11]. Well-known solutions to time delay compensation are control methodologies such as the Smith predictor [16] and Internal Model Control (IMC) [6]. In contrast, one may notice that despite their importance, Soft-Computing [20] research has not paid much attention to such systems. This may be due to the fact that the soft-computing methodologies are essentially model-free approaches and the knowledge of the time delay is only implicitly required. This argument is only valid if the system under control contains a time delay much less than the dominant time constant of the system. However, for effective control of systems for which the ratio of time delay to the time constant is equal to or greater than unity, and in particular the dominant time delay systems, explicit knowledge of the delay is required. In such instances, the control strategy should be able to predict the T.-J. Tarn et al. (Eds.): Robotic Welding, Intelligence and Automation, LNCIS 299, pp. 304−326, 2004. Springer-Verlag Berlin Heidelberg 2004
Adaptive Neuro-Fuzzy Control of Systems with Unknown Time Delay
305 305
time delay and hence adjust the control signal accordingly. Neural networks architecture using the Nonlinear Auto-Regressive with eXogenous (NARX) model has been suggested to identify systems with time delay [23]. Another approach using a PID Neural Network has also been suggested in [18]. The problem is also addressed by fuzzy neural network [3] and fuzzy control structure [10]. However, these methods, except [23], require the actual knowledge of the system time delay. In this paper, we present a novel adaptive controller for systems with unknown time delay. As with standard adaptive controllers, an on-line parameter estimation algorithm is integrated with a control design methodology [1]. In the proposed design, an algorithm via neural networks estimates the parameters of a First Order Plus Dead Time (FOPDT) model of a higher-order system from input/output data. Based on the information obtained from this model, in particular, the estimated time delay- the rule-table of a fuzzy controller will be adjusted on-line. The rest of this paper is organized as follow: Section 2 is devoted to the idea of approximating a high-order system with a FOPDT model using neural networks. In Section 3, two adaptive designs based on the PID and a novel Fuzzy Logic Controller (FLC) are derived. Simulations of time responses for different systems and the two controllers are given in Section 4. Stability analysis is performed in Section 5.Experimental results will be included in Section 6. Finally, the paper is concluded in Section 7.
2 Lower Order Approximation of Systems with Neural Networks The study will be restricted to single-input / single-output (SISO) systems. It is well known that a FOPDT model, with the transfer function given by Eq. 1, can approximate many higher order systems [13]: b s m + b1 s m −1 + ... + b m − Ts K ⋅ e −θ ⋅ s Y (s) = 0 n e ≈ − n 1 τs + 1 U (s) s + a1 s + ... + a n
(1)
Here, K is the system gain, τ is the dominant time constant, θ is the apparent dead-time and m≤ n. Y(s) and U(s) are the Laplace transformed output and input signals respectively. The parameters the parameters K, τ and θ can be determined by various methods [15,17,19]. Smith's method [15] is a popular approach whereby the parameters are determined from the process reaction curve by injecting a step signal to the system. Although, this method is convenient and accurate, however, it should be carried out off-line and it is very sensitive to noise. Other methods also suffer from the same drawback and hence cannot be used in the design of an adaptive controller since their parameters cannot be updated on-line.
306306F.H. Ho et al.
2.1 Online Approximating Approach The proposed approach is conceptually simple and is inspired by the process reaction curve method. The block diagram of the on-line approximating approach is shown in Figure 1. As shown in this figure, the on-line identification consists of a standard neural network and a module called the model generator. The control signal u(t) is applied to the high-order system, the neural network, and the FOPDT model generator at the same time. The outputs of the neural network are the three FOPDT parameters, namely, the gain ( K ), the time constant (τ ) and the time delay (θ ). These three parameters are fed to the first-order plus dead time model generator to get the output of the model. The error between the output of the plant and the output of the model is used to train the weights of the neural networks. The training process tends to force the output of the FOPDT model generator to approximate the output of the system. Thus, the inputs of the FOPDT model generator are the approximating parameters of the first-order model for the high-order system. The output of the FOPDT model is expected to match the output of the high-order system after the neural network converges. The transfer function of the FOPDT model generator is rewritten below: Ym (s) K ⋅ e −θ ⋅ s = U (s) s +1
(2)
τ
According to the convolution theorem, the output of the FOPDT model generator can be obtained in the time domain as: K ⋅ e− θ ⋅ s y (t ) = L- 1[ ⋅ U ( s)] m τs + 1
(3)
where L-1 stands for inverse Laplace transform. Therefore, we get: y
m
(t ) = K ⋅ u (t − θ ) − τ
dy
m dt
(t )
(4)
where, u(x) is the input of the FOPDT model generator. The model output ym(t) depends on the parameters K, τ and θ at each time instant. The neural network architecture used for plant modeling is a three-layer feedforward network. The basic structure, having one hidden layer with sigmoid function, is known to be powerful enough to produce an arbitrary mapping among variables [4]. Thus, a three layer network is usually used for control applications. The activation function used here is the standard sigmoid function with range between 0 and 1. To train the above neural network, a direct learning strategy is employed for on-line training. As the desired output of the neural network is unknown, the FOPDT model generator is considered as an additional but not modifiable layer of the neural network. The Back-Propagation (BP) algorithm [5] is used to update the weights of the neural network. The algorithm consists of two passes, forward pass and backward pass. The calculation of the forward pass and updating the connection weights from the input layer to hidden layer are the same as those in the standard BP algorithm. To up date the connection weights from the hidden layer to the
Adaptive Neuro-Fuzzy Control of Systems with Unknown Time Delay
307 307
output layer, the momentum technique [5] is employed. The weight adjustment for each iteration is derived below. The error function E is define by: 1 r (5) E = $ ( y − ym )2 2 t =1
u
y System error K
✝
Num
K
÷ Den
τ θ
Neural Network
τ
⋅ ✁
✂
!
✄
delay
θ
✆
ym
ym ☎
X
+ ✝
Num
÷Den
FOPDT Model Generator
Fig. 1. The architecture of the on-line lower-order modeling of high-order systems
where r is the number of input/output pairs available for training the network, y and ym are the output of the plant and the output at the FOPDT model at any time instant t. Within each time interval from t to t+1, the BP algorithm is used to update the connection weights according to the following relationship: (6) ∂E W
ij
( t + 1) = W
ij
(t ) − η ⋅
∂W
ij
(t )
+ α ⋅ ∆W
ij
(t )
where η and α are the learning rate and the momentum factor respectively. ∆Wij is the amount of the previous weight change. Using the chain rule, one has ∂E ∂W
ij
(t )
=
∂E ∂y
m
= − ( y (t ) − y m ( t )) ⋅
(t )
⋅
∂y ∂φ
m n
(t ) (t )
⋅
∂φ ∂W
n ij
(t ) (t )
∂ y m (t ) ⋅ X i ( t ) ⋅ (1 − X i (t )) ⋅ X j ( t ) ∂φ n ( t )
(7)
where Xi is the output of the ith node of the output layer; Xj is the input vector of the nodes of the jth output layer; Yn(t) is a 3x1 input vector of the FOPDT model (the output vector of the neural network):
308308F.H. Ho et al. ∂y m ( t ) ( ∂y m ( t ) ∂y m ( t ) ∂y m ( t ) / = , , ∂φ n (t ) )* ∂K ∂τ ∂θ 01
T
(8)
To find the partial derivatives of the output ym(t) of the model generator (FOPDT) w.r.t. gain (K), dominant time constant (T) and apparent dead-time (τ), respectively.
∂ y m (t ) e −θ ⋅s = L -1[ u ( s )] ∂K τ ⋅ s +1 ∂y m (t ) − sKe −θ ⋅s u ( s )] = L -1[ ∂τ (τs + 1) 2 ∂ y m (t ) − sKe − θ S u ( s )] = L -1 [ ∂θ τs + 1
(9) (10) (11)
3 Design of the Adaptive Controller Three designs are derived and their performances are compared with each other. The first one is based on the PID control algorithm. The second and third are implementations of Fuzzy Logic Controllers (FLC). It should be noted that the identification part is the same in all these algorithms. 3.1 On Line PID Tuning Control Method Using NN The control structure for the on-line PID tuning is shown in Figure 2. There are two parts in the control structure of the on line PID tuning method. The first part, which was described in the previous section, is the approximation of high order systems with FOPDT using neural networks, and the second part is the design of the PID controller. The parameters of the PID controller can be obtained from the corresponding parameters of the estimated FOPDT by neural networks. We have used the Ziegler-Nichols ultimate cycle tuning method [24] to compute the parameters of the PID controller: Kp = 0.6Ku Ti = 0.5Tu Td = 0.125Tu
(12)
Here, Kp, Ti, Td, Ku and Tu are the proportional gain, integral time constant, derivative time constant, the ultimate gain and the ultimate period respectively. The ultimate gain and the ultimate period are calculated from the FOPDT model of the high order plant [22]. The output of the PID controller is in the form of u (t ) = K P ( e (t ) +
dy f 1 ) + e ( t ) dt − T d ⋅ Ti dt
(12)
Adaptive Neuro-Fuzzy Control of Systems with Unknown Time Delay
e ( t ) = r ( t ) − y ( t ),
y
f
(s) =
309 309 (13)
1 y(s) T 1+ d s 10
where u(t), y(t), r(t), yf(t) are the controller output, process output, set-point, and filtered derivative respectively.
Intput
PID Controller
u Output
PROCESS
r
y
y +
adjust Kp,Ti & Td
u Tuning Algorithm Based on First Order Plus Dead-time Model
e
K
Neural Network
Τ
τ
Model Generator
ym
error for training
Fig. 2. Overall structure of the auto-tuning PID controller
3.2 Adaptive Fuzzy Logic Controller (AFLC) for Time Delay Systems Tanaka and Sano [9] suggested that rotating the rule-base of a fuzzy controller, in effect, reduces the number of fuzzy rules and has the property of time delay compensation. Here, we have extended this concept to design an adaptive fuzzy controller, which has the capability of time delay compensation. The complete algorithm is included in the appendix. The control structure for the adaptive FLC is shown in Figure 3. This structure is the same as the PID controller except that it is replaced by a fuzzy controller. The control structure consists of two parts: the first part is the approximation of the high-order system with FOPDT model using neural networks, and the second part is the design of the FLC. The scaling factors of the FLC are also determined on-line based on gain, phase margin and FOPDT model [21]. The parameters of the controller are in the form of ρ (14) G = e
r
(15) Gde =
1 , % 4ω 2θ & 2ω p − p + 1 -∆t & π τ . '
Ge
310310F.H. Ho et al.
Gu =
ω pτ Am KG de
(16)
( 4 − 2α )
(17)
α ≅ max(Ge | e |, G de | ∆ |)
Intput
Fuzzy Controller
u Output
PROCESS
r
y
y + u K
Rotation of the Rule-base via Dead-time Model
e
Neural Network
Τ
τ
Model Generator
ym
error for training
Fig. 3. Overall structure of the on line tuning parameters FLC
where θ is the dead time, τ is the time constant and K is the process gain of the FOPDT model. r is the set-point change range. Ge , Gde and Gu are the scaling factors for error, change of error and the control signal respectively. The parameter ρ is a coefficient for the criterion of the Integral Time Absolute Error (ITAE). In this paper, ρ is assigned to 0.2, which gives the system possible compromise between ITAE index to set-point change and load disturbance. The ωp is the phase crossover frequency and is computed from: π Amφm + Am ( Am − 1) (18) 2 ωp = ( Am2 − 1)τ where Am and φm are gain and phase margin respectively. The gain and phase margin pair used in this paper is (3,450) because this pair of gain and phase margins gives good response for both set-point change and load disturbance [21]. Two adaptive fuzzy controllers are implemented. The difference between the two is how the scaling factors are set. In the first algorithm (AFLC1), the scaling factors are set manually during the initialization. In the second algorithm (AFLC2), the scaling factors are set on-line. The implementation of the two adaptive FLC is as follows: 1. 2.
Estimate the First-Order-Plus-Dead-Time (FOPDT) model by NN. Calculate the error and change of error between set-point and the system output.
Adaptive Neuro-Fuzzy Control of Systems with Unknown Time Delay
3. 4. 5. 6. 7. 8. 9. 10. 11.
311 311
Calculate the input and output-scaling factor from equation (14-16) [only for AFLC2*] Calculate the Φ0 from equation (A4). Calculate the time delay compensation parameter ∆Φ from equation (A5). Estimate the final angle of rotation of control rule from equation (A3). Calculate z and dz from the coordinate transformation equation (A2). Convert v and dv to fuzzy from. Evaluate fuzzy control value. Compute the crisp value by defuzzification. Deterimine the process output by the control value.
4 Simulation Results To show the adaptive behavior of the algorithm, let us consider two processes as: Process I
Process II
Y ( s ) 1.5 e −2.5s = U (s) (1 + s) 2 1 − 1 .4 s Y (s) = U (s) (1 + s ) 3
The first process is a time delay system and the second is a non-minimum phase system. First, adaptive control of Process I was simulated for t = 150 s, after which the system was changed from Process I to Process II. Subsequently, the system was switched back to Process I. We simulated three controllers: adaptive PID (APID), the adaptive fuzzy controller with off-line setting of the scaling factors (AFLC1) and the adaptive fuzzy controller with on-line setting of the scaling factors (AFLC2). In the simulation, the set-point was a square wave with amplitude 0.5 and a period of 80 s. Gaussian noise with zero mean and variance 0.01 was injected at the output of the system. We employed a fourth-order Runge-Kutta numerical integration algorithm for all time responses, with an integration interval of 0.1 s; the neural network used the same time interval for updating its parameters. The simulation proceeded as follows: the PID controller was initialized with Kp = 1, Ti = 1000 and Td = 0.0. The architecture of the NN was (1,8,3), and the fuzzy controller used fixed scaling factors Ge = 0.43, Gde = 0.6, Gu = 0.6. The adaptive fuzzy controller was initialized with Ge = 0.0, Gde = 0.0, Gu = 0.0, and the architecture of its NN was (1,16,3). For all controllers a bias term of 0.5 was added to all hidden nodes; the number of hidden units was determined by trial and error. The connection weights were randomly initialized. The learning rate and the momentum parameter were set to 0.8 and 0.1 respectively.

Figures 4-6 show the overall performance of the three controllers. In these figures, the set-point and the output, the control signal, and the estimated gain, apparent time delay and dominant time constant are shown in the top, middle and bottom curves respectively. It should be noted that the gains of Process I and Process II are different (1.5 and 1), since some adaptive controllers cannot cope with a change in the steady-state gain of the controlled system; the proposed method successfully tracks this dynamic change. These simulation studies demonstrate the adaptive property of the proposed algorithm. Throughout all these system changes the neural networks converged, the estimated FOPDT parameters converged to their steady-state values, and all three controllers provided stable and robust control. Tables 1 and 2 show the FOPDT model parameters obtained by several other methods, such as Smith's [14], minimized error [8] and Tsang and Rad [19], together with the corresponding ultimate gain and ultimate period, for Processes I and II respectively. It should be noted that the parameters from all methods except the proposed one were obtained off-line, from noise-free open-loop excitation with a unit step. Furthermore, the values quoted for the proposed algorithm are based on the last measurement before each system change, not on an average value.

Fig. 4. Simulation results of the PID controller
Table 1. Parameters obtained from different methods for Process I

Method                      K     τ     θ     Ku    Tu
Smith method [18]           1.50  1.65  3.00  1.06  8.38
Minimized-error [8]         1.50  1.46  3.11  0.98  8.44
Tsang-Rad [19]              1.50  1.50  3.09  1.00  8.45
Proposed method for PID     1.49  1.40  3.14  0.97  8.44
Proposed method for FLC     1.51  1.39  3.09  0.96  8.32
Process I (actual)          -     -     -     1.04  8.44
Table 2. Parameters obtained from different methods for Process II
Method                      K     τ     θ     Ku    Tu
Smith method [15]           1.00  1.89  2.43  1.93  7.22
Minimized-error [8]         1.00  1.67  2.55  1.74  7.35
Tsang-Rad [19]              1.00  1.78  2.44  1.85  8.45
Proposed method for PID     0.99  1.28  2.72  1.49  7.38
Proposed method for FLC     1.05  1.25  2.71  1.40  7.33
Process II (actual)         -     -     -     1.54  6.83
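The simulation protocol described above (square-wave set-point with period 80 s, zero-mean Gaussian output noise of variance 0.01, fourth-order Runge-Kutta integration at 0.1 s) can be reproduced in a few lines of Python. The sketch below simulates Process I only and uses a plain proportional controller as a stand-in for the adaptive schemes, which are not reproduced here:

```python
import numpy as np

dt, T_end = 0.1, 400.0
delay_steps = int(2.5 / dt)            # 2.5 s dead time of Process I
rng = np.random.default_rng(0)

def f(x, u):                           # states of 1.5/(1+s)^2: x1' = x2, x2' = -2x2 - x1 + 1.5u
    return np.array([x[1], -2.0 * x[1] - x[0] + 1.5 * u])

x = np.zeros(2)
u_buf = np.zeros(delay_steps)          # circular buffer holding the delayed input
y_log = []
for k in range(int(T_end / dt)):
    t = k * dt
    sp = 0.5 * np.sign(np.sin(2 * np.pi * t / 80.0))   # square wave, period 80 s
    y = x[0] + rng.normal(0.0, np.sqrt(0.01))          # output plus measurement noise
    u = 0.8 * (sp - y)                                 # stand-in proportional controller
    ud = u_buf[k % delay_steps]                        # input delayed by 2.5 s
    u_buf[k % delay_steps] = u
    k1 = f(x, ud); k2 = f(x + 0.5*dt*k1, ud)
    k3 = f(x + 0.5*dt*k2, ud); k4 = f(x + dt*k3, ud)
    x = x + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)           # fourth-order Runge-Kutta step
    y_log.append(y)
```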
Fig. 5a. Simulation results of the AFLC1 (top: set-point and process output; middle: control signal; bottom: first-order model parameters θ, K and τ; all versus t/s over 0-400 s)
Fig. 5b. Simulation results of the AFLC1 (top: set-point and process output of the FOPDT auto-tuning FLC; middle: control signal; bottom: first-order model parameters θ, K and τ; all versus t/s over 0-400 s)
Fig. 6. Simulation results of the AFLC2 (top: set-point and process output; middle: control signal; bottom: first-order model parameters θ, K and τ; all versus t/s over 0-400 s)
5 Experimental Studies

5.1 Plant Description

A laboratory-scale process control unit (PCU) from Bytronic [2] was used in this experiment. The system rig consists of a sump, a pump, and manual/computer-controlled diverter and drain valves. The sump water is pumped through the pipeline and the manual flow control valve to the process tank, and is fed back to the sump via the drain valve, completing the cycle. The rig can be used for level, temperature and flow control. The objective of our study was to control the water flow by manipulating the pump voltage. Figure 7 shows the schematic diagram of the process rig. A first-order with time delay model was identified off-line as

G(s) = y(s)/u(s) = 1.5 e^(−0.25s) / (0.5s + 1)   (16)
where y(s) and u(s) are the Laplace transforms of the flow rate and the pump drive voltage respectively.

Fig. 7. Bytronic process control unit (schematic: sump, pump, gate and priming valves, manual and solenoid diverter and drain valves, process tank with stirrer, heater and cooler, level switch, and flow-rate and temperature displays)
5.2 Experimental Results

The purpose of the experiment was to demonstrate the performance of the algorithm on a real system. The plant (Bytronic process control rig) can be represented by a FOPDT model at nominal operating points; however, it is inherently nonlinear due to the dead-zones of the pump and the flow meter. The set-point was chosen as a staircase signal with amplitudes 0.4 l/min, 0.8 l/min and 1.2 l/min and a period of 40 s. The sample time was chosen to be 0.05 s. The on-line process computer sampled the water flow rate at every sample interval, and the system parameters were identified by the NN from the pump voltage and the water flow rate. Due to the dead zone in the motor dynamics, the pump drive voltage was limited to a minimum of 0.8 V and a maximum of 5 V. Three controllers were tested:
Adaptive PID (APID), the Adaptive Fuzzy Controller with off-line setting of the scaling factors (AFLC1), and the Adaptive Fuzzy Controller with on-line setting of the scaling factors (AFLC2). The experiment proceeded as follows: the PID controller was initialized with Kp = 1, Ti = 1000 and Td = 0.0, and the fuzzy controller used fixed scaling factors Ge = 0.43, Gde = 0.6, Gu = 0.6. The adaptive fuzzy controller was initialized with Ge = 0.0, Gde = 0.0, Gu = 0.0. The architecture of the NN was (1,8,3). For all controllers a bias term of 0.5 was added to all hidden nodes. The connection weights were randomly initialized. The learning rate and the momentum parameter were set to 0.8 and 0.1 respectively. Figures 8-10 show the performance of the proposed algorithm for these controllers on the real system. It can be seen that the performance of the ZN-PID control system is not very satisfactory, since the derivative action amplifies the noise of the inherently noisy flow control system; such a system is better controlled by an adaptive PI controller. Both FLC algorithms performed better than the conventional controller in set-point tracking as well as noise rejection. Moreover, although the model of the plant changed at the different set-points, the control system maintained a very acceptable performance.
6 Conclusions

In this paper, a new adaptive fuzzy control algorithm is presented. The control algorithm is based on the parameters of a FOPDT model, which are obtained using neural networks. Time delay compensation is achieved by rotating the rule-base based on the estimated parameters, in particular the time delay. The performance of the adaptive FLC is compared with a conventional ZN-PID controller. Both simulation and experimental results show that the proposed adaptive control system has superior performance in terms of set-point tracking and noise rejection. The simplicity of the scheme for real-time control provides a new approach for implementing neuro-fuzzy applications in a variety of on-line industrial control problems. The results presented clearly demonstrate the adaptive property of the proposed method.
Acknowledgements

The authors gratefully acknowledge the financial support of the Hong Kong Polytechnic University.
Fig. 8. Experimental results of the PID controller (top: set-point and process output; middle: controller output; bottom: first-order model parameters K, τ and θ; all versus t/s over 0-200 s)
Fig. 9. Experimental results of the AFLC1 (top: set-point and process output; middle: control signal; bottom: first-order model parameters K, τ and θ; all versus t/s over 0-200 s)
Fig. 10. Experimental results of the AFLC2 (top: set-point and process output; middle: control signal; bottom: first-order model parameters K, τ and θ; all versus t/s over 0-200 s)
References

[1] Astrom, K.J., Wittenmark, B., Adaptive Control, Addison-Wesley Publishing Company, Inc., Second Edition, 1995.
[2] Bytronic (United Kingdom), User Manual for the Bytronic Process Control Rig, 1994.
[3] Chen, S.B., Wu, L., and Wang, Q.L., "Self-learning fuzzy neural networks for control of uncertain systems with time delays," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 27, no. 1, pp. 142-148, 1997.
[4] Cui, X., Shin, K.G., "Direct control and coordination using neural networks," IEEE Transactions on Systems, Man, and Cybernetics, vol. 23, no. 3, pp. 686-697, 1993.
[5] Freeman, J.A., Skapura, D.M., Neural Networks: Algorithms, Applications and Programming Techniques, Addison-Wesley Publishing Company, Inc., 1991.
[6] Garcia, C.E., "Internal Model Control - 1. A unifying review and some new results," Industrial and Engineering Chemistry, Process Design and Development, vol. 21, pp. 308-323, 1982.
[7] Gawthrop, P.J., Nihtila, M.T., and Rad, A.B., "Recursive parameter estimation of continuous-time systems with unknown time delay," Control-Theory and Advanced Technology (C-TAT), vol. 5, no. 3, pp. 227-248, 1989.
[8] Sundaresan, K.R., and Krishnaswamy, P.R., "Estimation of time delay and time constant parameters in time, frequency and Laplace domains," The Canadian Journal of Chemical Engineering, vol. 56, April 1978.
[9] Tanaka, K., Sano, M., "Parameter adjustment laws of fuzzy PI controllers for first-order lag systems with dead-time," in R. Lowen and M. Roubens (Eds.): Fuzzy Logic: State of the Art, pp. 531-540, Kluwer Academic Press, 1993.
[10] Li, H.X., and Tso, S.K., "Higher-order fuzzy control structure for higher-order or time delay systems," IEEE Transactions on Fuzzy Systems, vol. 7, no. 5, pp. 540-552, 1999.
[11] Marshall, J.E., Time Delay Systems: Stability and Performance Criteria with Applications, Ellis Horwood, New York, 1992.
[12] Rad, A.B., "Self-tuning control of systems with unknown time delay - A continuous-time approach," Control-Theory and Advanced Technology, vol. 1, no. 1, March, pp. 479-497, 1994.
[13] Rad, A.B., Lo, W.L., and Tsang, K.M., "Self-tuning PID controller using Newton-Raphson search method," IEEE Transactions on Industrial Electronics, vol. 44, no. 5, pp. 717-725, 1997.
[14] Smith, C.L., Digital Computer Process Control, International Textbook Company, United States, 1972.
[15] Smith, C.L., Corripio, A.B., Principles and Practice of Automatic Process Control, John Wiley and Sons, Inc., Second Edition, 1997.
[16] Smith, O.J.M., "Closer control of loops with dead time," Chemical Engineering Progress, vol. 53, no. 5, pp. 217-219, 1957.
[17] Sundaresan, K.R., Krishnaswamy, P.R., "Estimation of time delay and time constant parameters in time, frequency, and Laplace domains," The Canadian Journal of Chemical Engineering, vol. 56, April, pp. 257-262, 1978.
[18] Shu, H.L., Pi, Y.G., "PID neural networks for time delay systems," Computers and Chemical Engineering, vol. 24, pp. 859-862, 2000.
[19] Tsang, K.M. and Rad, A.B., "A new approach to auto-tuning of PID controllers," International Journal of Systems Science, vol. 26, no. 3, pp. 639-658, 1995.
[20] Tzafestas, S.G., Soft Computing in Systems and Control Technology, World Scientific Publishing, Singapore, 1999.
[21] Xu, J.X., Liu, C., Hang, C.C., "Tuning of fuzzy PI controllers based on gain/phase margin specifications and ITAE index," ISA Transactions, vol. 35, pp. 79-91, 1996.
[22] Yu, C.C., Autotuning of PID Controllers, Springer, London, 1999.
[23] Yu, D.L., Gomm, J.B. and Williams, D., "Neural model input selection for a MIMO chemical process," Engineering Applications of Artificial Intelligence, vol. 13, pp. 15-23, 2000.
[24] Ziegler, J.G. and Nichols, N.B., "Optimum settings for automatic controllers," Transactions of the ASME, vol. 64, pp. 759-768, 1942.
Appendix A

The controller is restricted to Single-Input/Single-Output (SISO) systems. It is well known that a higher-order system can be approximated by a first-order plus dead time (FOPDT) model of the form

Y(s)/U(s) = [ b0 s^m + b1 s^(m−1) + ... + bm ] / [ s^n + a1 s^(n−1) + ... + an ] · e^(−Ts) ≈ K e^(−θs) / (τs + 1)   (A1)

Here K, τ and θ are the gain, the dominant time constant and the apparent dead-time respectively, and m ≤ n. U(s) and Y(s) are the Laplace-transformed input and output signals, T is the system time delay, and ai and bj are the coefficients of the system transfer function. In this paper we design a time delay compensated controller specially suited to the FOPDT model. The compensation is realized by a linear operator transformation of the system error and its derivative, which achieves two effects: first, the number of control rules is reduced; second, the fuzzy logic controller becomes capable of time delay compensation. Consider a standard fuzzy rule table (Fig. A1) in symmetric form, with the initial z and dz axes along the two diagonals. A linear operator A: e → z takes the vector e and rotates both the fuzzy rule table and the vector z clockwise by an angle Φ. The matrix of A is given in Eq. (A2).
Fig. A1. Illustration of the rotating rule table (7 × 7 symmetric rule table over the labels NL, NM, NS, ZO, PS, PM, PL on the e and ė axes, with the z and dz axes along its diagonals and an initial rotation Φ0 = 45°)
A = [  cos Φ   sin Φ
      −sin Φ   cos Φ ],   z = Ae   (A2)
where e = [e, ė] ∈ R² and z = [z, dz] ∈ R². After this initialization of the rule-table, one diagonal of the rule-table gives the same control action for any dz; in other words, the output of the fuzzy controller depends only on z (Table A1), and hence the number of rules is reduced.

Table A1. New rotated rule table

z (any dz):  NL  NM  NS  ZO  PS  PM  PL
output:      NL  NM  NS  ZO  PS  PM  PL
The angle of rotation Φ is the sum of two angles Φ0 and ΔΦ:

Φ = Φ0 + ΔΦ   (A3)

where Φ0 is the angle of rotation based on the first-order model and ΔΦ is the angle for time delay compensation. If the higher-order system is modelled by a first-order model with delay, three parameters are obtained: gain, time constant and time delay. The angle of rotation Φ0 based on the first-order model is obtained using the time constant of the first-order model without delay:

Φ0 = tan⁻¹( −k1 τ / (k2 τ − 1 − K f(e)) )   (A4)
where k1 and k2 are positive constant parameters chosen to make the closed-loop reference response stable; in this paper k1 = k2 = 2. The function f(e) is the control value and K is the static gain of the process. The proof is given in Appendix B. The parameter Φ0 produces the rotation without considering the time delay, while ΔΦ provides the time delay compensation based on the first-order model with time delay. Simulation results confirm that the FOPDT model parameters τ and θ influence ΔΦ, i.e. ΔΦ = g(τ, θ). The function g for time delay compensation is approximated by a fuzzy system. Table A2 shows the time delay compensation parameter ΔΦ, as the consequent part, for 32 pairs of τ and θ obtained by computer simulation using the quickest response with 20 percent overshoot as the performance index. The i-th rule of the fuzzy system has the following form:

Rule i: IF τ is Ai1 AND θ is Ain THEN ΔΦ is ci

where Aij, i = 1, 2, ..., m, j = 1, 2, ..., n are the fuzzy sets.
Table A2. Rule table for the time delay compensation parameter ΔΦ

τ \ θ     1      3      5      7
1         0.00   0.099  0.317  0.42
3         0.351  0.537  0.597  0.632
5         0.466  0.589  0.636  0.666
7         0.505  0.611  0.622  0.683
9         0.526  0.624  0.667  0.695
11        0.539  0.634  0.676  0.704
13        0.549  0.642  0.684  0.711
15        0.557  0.648  0.690  0.717
Given the input data τ and θ, using a singleton fuzzifier, product fuzzy inference and a weighted average defuzzifier, the output of the fuzzy system is inferred as

ΔΦ = [ Σ_{i=1}^{m} ci Π_{j=1}^{n} μ_{Aij}(xj) ] / [ Σ_{i=1}^{m} Π_{j=1}^{n} μ_{Aij}(xj) ]   (A5)
The implementation of the FLC is as follows (a code sketch is given after the list):

1. Initialize the two axes z and dz along the diagonals of the rule table.
2. Calculate the error and error derivative between the set-point and the system output.
3. Calculate Φ0 from equation (A4).
4. Calculate the time delay compensation parameter ΔΦ from equation (A5).
5. Estimate the final angle of rotation of the control rules from equation (A3).
6. Calculate z and dz from the coordinate transformation equation (A2).
7. Convert z and dz to fuzzy form.
8. Evaluate the fuzzy control value.
9. Compute the crisp value by defuzzification.
10. Determine the process output by the control value.
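The following Python sketch illustrates the Appendix A computations: Φ0 from Eq. (A4), ΔΦ by product inference and weighted averaging over the Table A2 consequents (Eq. (A5)), and the clockwise rotation of Eq. (A2). The Gaussian grid memberships and their widths are illustrative assumptions; the paper does not specify the input fuzzy sets used for τ and θ.

```python
import numpy as np

TAUS   = np.array([1., 3., 5., 7., 9., 11., 13., 15.])
THETAS = np.array([1., 3., 5., 7.])
DPHI   = np.array([[0.000, 0.099, 0.317, 0.420],      # Table A2 consequents c_i,
                   [0.351, 0.537, 0.597, 0.632],      # rows over tau, columns over theta
                   [0.466, 0.589, 0.636, 0.666],
                   [0.505, 0.611, 0.622, 0.683],
                   [0.526, 0.624, 0.667, 0.695],
                   [0.539, 0.634, 0.676, 0.704],
                   [0.549, 0.642, 0.684, 0.711],
                   [0.557, 0.648, 0.690, 0.717]])

def mu(grid, v, width=2.0):              # assumed Gaussian membership over a grid
    return np.exp(-((v - grid) / width) ** 2)

def delta_phi(tau, theta):
    # Eq. (A5): singleton fuzzifier, product inference, weighted average
    w = np.outer(mu(TAUS, tau), mu(THETAS, theta))
    return float((w * DPHI).sum() / w.sum())

def phi0(K, tau, f_e, k1=2.0, k2=2.0):
    # Eq. (A4): rotation angle contributed by the first-order (no-delay) model
    return np.arctan(-k1 * tau / (k2 * tau - 1.0 - K * f_e))

def rotate(e, de, phi):
    # Eq. (A2): clockwise rotation of the (e, de) vector
    c, s = np.cos(phi), np.sin(phi)
    return c * e + s * de, -s * e + c * de
```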
Appendix B

Proof of Lemma 1: Consider the first-order model without delay

K / (τs + 1)   (B1)

and the closed-loop error dynamics governed by

ë + k1 ė + k2 e = 0   (B2)

where k1 and k2 are positive constants, so that e(t) → 0 as t → ∞. The closed-loop transfer function is

Y(s)/R(s) = KF / (τs + 1 + KF)

where F = f(e, Φ). From τsY + Y + KFY = KFR and Y = R − e,

τs(R − e) + (R − e) + KF(R − e) = KFR

so that, for a constant reference, −τė − e − KFe = 0, i.e.

ė = −(1/τ) e (1 + KF),   ë = −(1/τ) ė (1 + KF) = −k1 ė − k2 e.

Substituting into (B2) gives ė (k2 τ − KF − 1) = −k1 τ e, and hence

tan Φ0 = ė / e = −k1 τ / (k2 τ − 1 − KF).
Appendix C

Ẋm(t) = A Xm(t) + B u(t − T),   ym(t) = C Xm(t) + D u(t − T)

Xm(t) = Φ(t − t0) Xm(t0) + ∫_{t0}^{t} Φ(t − λ) B u(λ − T) dλ   (C1)

ym(t) = C Φ(t − t0) Xm(t0) + C ∫_{t0}^{t} Φ(t − λ) B u(λ − T) dλ + D u(t − T),   Φ(t) = e^{At}

Assuming that all initial states are zero and D = 0, the output equation becomes

ym(t) = ∫_{t0}^{t} h(t − λ, p) u(λ − T) dλ   (C2)

Gm(s) = e^{−sT} L[h(t, p)] = e^{−sT} C (sI − A)^{−1} B = e^{−sT} [ b0 s^m + b1 s^{m−1} + ... + bm ] / [ s^n + a1 s^{n−1} + ... + an ] = e^{−sT} Bm(s)/Am(s)

where T is the model time delay and h(t, p) = L^{−1}[Gm(s)] is the impulse response of Gm(s). The parameter vector is defined as p = [a1 a2 ... an b0 b1 ... bm]. The partial derivatives of the model output with respect to the time delay and the model parameters are obtained as follows:

∂ym/∂T = ∂/∂T ∫_{t0}^{t} h(t − λ, p) u(λ − T) dλ = ∫_{t0}^{t} h(t − λ, p) ∂u(λ − T)/∂T dλ = ∫_{t0}^{t} [ −h(t − λ, p) u̇(λ − T) ] dλ = −L^{−1}[ s e^{−sT} Gm(s) U(s) ]   (C3)

∂ym/∂ai = ∂/∂ai ∫_{t0}^{t} h(t − λ, p) u(λ − T) dλ = ∫_{t0}^{t} [ ∂h(t − λ, p)/∂ai ] u(λ − T) dλ   (C4)

L[∂h/∂ai] = ∂/∂ai ∫_0^∞ h(t, p) e^{−st} dt = ∂Gm(s)/∂ai,  so that  ∂h/∂ai = L^{−1}[ ∂Gm(s)/∂ai ]   (C5)

and similarly ∂h/∂bi = L^{−1}[ ∂Gm(s)/∂bi ]. Hence

∂ym/∂ai = L^{−1}[ ∂Gm(s)/∂ai U(s) ] = L^{−1}[ −e^{−sT} s^{n−i} Bm(s)/[Am(s)]² U(s) ]   (C6)

∂ym/∂bi = L^{−1}[ ∂Gm(s)/∂bi U(s) ] = L^{−1}[ e^{−sT} s^{m−i}/Am(s) U(s) ]   (C7)

∂ym/∂T = L^{−1}[ −s e^{−sT} Bm(s)/Am(s) U(s) ]   (C8)

The control signal u(t) is filtered by the filter functions in Eqs. (C6)-(C8) to obtain the partial derivatives of ym(t) with respect to the various model parameters.
Adaptive Robust Fuzzy Tracking Control for a Class of Nonlinear Systems

Yansheng Yang¹ and Changjiu Zhou²

¹ Navigation College, Dalian Maritime University, Dalian 116026, P.R. China
² School of Electrical and Electronic Engineering, Singapore Polytechnic, 500 Dover Road, Singapore 139651, Republic of Singapore
[email protected]
Abstract. The tracking control problem for a class of nonlinear systems with an uncertain system function and an uncertain gain function, both of which are unstructured state-dependent (non-repeatable) unknown nonlinear functions, is investigated in this paper. A fuzzy logic system is used to approximate the lumped non-repeatable state-dependent uncertain function; the fuzzy system then serves as the upper bound of the uncertainty, and a novel adaptive robust fuzzy tracking control (ARFTC), which can evaluate the bound parameter of the uncertainty on-line, is presented. The proposed algorithm has two advantages: (1) the adaptive mechanism with one learning parameter can be obtained by use of the Lyapunov approach; (2) the possible controller-singularity problem met in some existing adaptive control schemes based on feedback linearization techniques is removed. Finally, the proposed algorithm is verified through simulation.
1 Introduction

In recent years, there have been significant research efforts on the adaptive tracking control of nonlinear systems. A control scheme via feedback linearization for nonlinear systems has been proposed in [1]. The fundamental idea of feedback linearization is to transform a nonlinear system dynamic into a linear one, so that linear control or fuzzy logic control (FLC) techniques can be used to acquire the desired performance [1]-[3]. FLCs have well proven their broad potential in numerous practical industrial applications, have attracted a great deal of public attention, and much work has been published in the field. FLC systems have the advantages that no formal mathematical model is needed and that system uncertainties can be coped with; however, an analytical understanding of this well-demonstrated success is lacking. When designing controllers for an uncertain system, if the structure of the uncertainties is known and their upper bound can be obtained, it is understood that there exists a class of continuous-time robust controllers that ensure convergence of the state to an arbitrarily small
neighborhood of the origin in a finite period of time [4]. Unfortunately, in the industrial control environment, some controlled systems are not only nonlinear but are also characterized by unstructured state-dependent (non-repeatable) unknown functions. Furthermore, as the design of FLCs involves a great deal of heuristics, many theoretical questions regarding various analytical properties of FLCs (e.g. stability, robustness) remain difficult to solve. A solution to the above-mentioned problems was first presented by Wang [5]: fuzzy systems were proposed to uniformly approximate the uncertain nonlinear functions in the designed system, using the universal approximation properties of certain classes of fuzzy systems presented in Lee [6], Ying [7], and Wang and Mendel [8]. Using a Lyapunov-based learning law, an adaptive fuzzy control strategy that ensures the stability of the overall system was developed. However, the control method described in Wang [3] has two substantial drawbacks: (1) the method is restricted to plants where the Lie derivative L_Δg L_f0^{n−1} h(x) is constant, which excludes many systems of practical importance; (2) many parameters need to be tuned in the learning laws when there are many state variables in the designed system and many rule bases are used in the fuzzy system that approximates the nonlinear and uncertain dynamics.

In this paper, an adaptive robust fuzzy tracking control (ARFTC) scheme, which can evaluate the bound parameters of the lumped non-repeatable state-dependent nonlinear function on-line, is presented. The main features of the proposed control law can be viewed from two aspects: (1) the Lie derivative L_Δg L_f0^{n−1} h(x) need not be constant, so the prerequisite knowledge of the controlled system is relaxed; (2) no matter how many rules are used in the fuzzy system, only one parameter needs to be adapted on-line, so the computational load of the scheme is reduced.

The rest of this paper is organized as follows. In Section 2, we give the preliminaries for the configuration of MISO fuzzy systems and the approximation theorem. Section 3 presents the problem formulation for the description of a class of nonlinear systems and the tracking control problem. In Section 4, a systematic procedure for the synthesis of the adaptive robust fuzzy tracking control (ARFTC) is developed. In Section 5, a simulation example is given to illustrate the effectiveness of the proposed ARFTC for tracking control of an inverted pendulum system. Concluding remarks are given in Section 6.
2 Preliminaries

The basic configuration of a fuzzy logic system consists of a set of fuzzy IF-THEN rules and a fuzzy inference engine. The fuzzy inference engine uses the fuzzy IF-THEN rules to perform a mapping from an input linguistic vector x to an output linguistic variable y. In this paper, we consider a fuzzy system that uniformly approximates a continuous multi-dimensional function y = f(x), where x = (x1, x2, ..., xr)^T is the input vector with r independent variables. The domain of xi is θi = [ai, bi]; it follows that the domain of x is Θ = θ1 × θ2 × ... × θr = [a1, b1] × [a2, b2] × ... × [ar, br].
follows that the domain of x is Θ = θ1 × θ 2 × # ×θ r = [ a1 , b1 ] × [ a2 , b2 ] × # × [ ar , br ] In order to construct a fuzzy system, the interval [ ai , bi ] is divided into N subintervals ai = C0i < C1i < # < CN −1i < CNi = bi , 1≤ i ≤ r On each interval θ i (1 ≤ i ≤ r ) , N + 1 continuous input fuzzy sets, denoted by A ( 0 ≤ j ≤ N ) , are defined to fuzzify xi . The membership function of Aij is dei j
noted by µ ij ( xi ) , which can be represented by triangular, trapezoid, generalized bell or Gaussian type and so on. Generally, the fuzzy system can be constructed by the K ( K > 1) fuzzy rules as follows: Rl IF x1 is Ahl1 AND x2 is Ahl2 AND …… AND xr is Ahl r !
THEN y is Bhl1h2 #hr , l = 1, 2,# , K where Bhl1h2 #hr is a singleton output fuzzy set for the output variable y . The membership function of Bhl1h2 #hr is 1 only at y = σ l (an arbitrary constant) and 0 at other position. The product fuzzy inference is employed to evaluate the ANDs in the fuzzy rules. After being defuzzified by a typical center average defuzzifier, the output of the fuzzy system is K
f̂(x, σ) = [ Σ_{l=1}^{K} σl ( μ^l_{h1}(x1) μ^l_{h2}(x2) ... μ^l_{hr}(xr) ) ] / [ Σ_{l=1}^{K} ( μ^l_{h1}(x1) μ^l_{h2}(x2) ... μ^l_{hr}(xr) ) ] = σ^T ξ(x)   (1)

where

ξl(x) = μ^l_{h1}(x1) μ^l_{h2}(x2) ... μ^l_{hr}(xr) / Σ_{l=1}^{K} ( μ^l_{h1}(x1) μ^l_{h2}(x2) ... μ^l_{hr}(xr) )

is called a fuzzy basis function. When the membership functions μi^j(xi) in ξl(x) are of a given type, ξl(x) is a known continuous function. When the fuzzy system is employed to approximate a continuous function, two questions of interest arise: (1) does there exist a fuzzy system able to approximate any nonlinear function to arbitrary accuracy? (2) how are the parameters of such a fuzzy system determined, if it exists? For the first question, the following lemma is proven by Wang [3].

Lemma 2.1. Suppose that the input universe of discourse U is a compact set in R^r. Then, for any given real continuous function f(x) on U and any ε > 0, there exists a fuzzy system in the form of expression (1) such that
sup_{x∈U} | f̂(x, σ) − f(x) | ≤ ε   (2)

Remark 2.1. If f̂(x, σ) is used to estimate the function f(x), there will be N^n unknown constants to be adjusted.
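As an illustration of the construction in (1), the short Python sketch below builds the normalized fuzzy basis functions ξl(x) for a small two-input system with Gaussian input fuzzy sets, product inference and center-average defuzzification; the centers and widths are arbitrary choices made here for the example.

```python
import itertools
import numpy as np

def gaussian_mfs(centers, width):
    # one Gaussian membership function per center
    return [lambda v, c=c: np.exp(-((v - c) / width) ** 2) for c in centers]

def basis(x, mfs_per_input):
    # one rule per combination of input fuzzy sets; xi_l is the normalized
    # product of memberships (the fuzzy basis function)
    prods = np.array([np.prod([mf(xi) for mf, xi in zip(combo, x)])
                      for combo in itertools.product(*mfs_per_input)])
    return prods / prods.sum()

# two inputs, three fuzzy sets each -> K = 9 rules
mfs = [gaussian_mfs([-1.0, 0.0, 1.0], 0.5)] * 2
sigma = np.zeros(9)                     # rule consequents sigma_l (to be adapted)
xi = basis([0.2, -0.4], mfs)
f_hat = sigma @ xi                      # Eq. (1): f_hat(x, sigma) = sigma^T xi(x)
```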
3 Problem Formulation
3.1 Description of uncertain nonlinear systems
Consider an uncertain nonlinear dynamic system of the form

ζ̇(t) = f0(ζ) + Δf(ζ, w) + [ g0(ζ) + Δg(ζ, w) ] u(t)
y(t) = h(ζ)   (3)
where ζ ∈ R^n is the state, u ∈ R is the control input, y ∈ R is the output of the system, and w is the model uncertainty, belonging to a compact set, which includes the parameter and disturbance uncertainties of the system. f0(ζ) and g0(ζ) are known smooth vector fields in a neighborhood of the origin ζ = 0, with f0(0) = 0 and g0(0) ≠ 0. Δf(ζ, w) is the uncertain system function and Δg(ζ, w) is the uncertain control gain function, both of which are continuous functions of the state ζ. For the purpose of designing the controller, an input-state linearization of the system is performed first, for which the following assumptions are introduced.

Assumption 3.1. The distributions

Di = span{ g0, ad_{f0} g0, ..., ad^i_{f0} g0 },  0 ≤ i ≤ n − 1

are involutive and of constant rank i + 1 in U1.

Assumption 3.2. The uncertain function Δf(ζ, w) satisfies the strict-triangle condition

[ Δf(ζ, w), Di ] ∈ Di,  0 ≤ i ≤ n − 2.

Finding a diffeomorphism x = φ(ζ) = [ h, L_{f0} h, ..., L^{n−1}_{f0} h ]^T according to Assumptions 3.1 and 3.2, Eq. (3) can be transformed into
ẋi = x_{i+1} + Δi,  1 ≤ i ≤ n − 1
ẋn = L^n_{f0} h(φ^{−1}(x)) + L_{g0} L^{n−1}_{f0} h(φ^{−1}(x)) u + Δn
y = x1   (4)

where Δi = L_{Δf} L^{i−1}_{f0} h(φ^{−1}(x)) and Δn = L_{Δf} L^{n−1}_{f0} h(φ^{−1}(x)) + L_{Δg} L^{n−1}_{f0} h(φ^{−1}(x)) u. In Eq. (4), the Δi, i = 1, 2, ..., n, are the functions related to the uncertainties of the system. If the Δi, i = 1, 2, ..., n − 1, do not act within the channels implicit in the control input, i.e. the uncertainties are unmatched, it is difficult to design a controller for the uncertain system. Therefore, the following assumption is introduced in this paper.

Assumption 3.3. The uncertainties of the system satisfy the matched condition Δi = 0, i = 1, 2, ..., n − 1.

According to Assumption 3.3, the uncertain nonlinear system (4) can be rewritten as

ẋi = x_{i+1},  1 ≤ i ≤ n − 1
ẋn = f0(x) + Δf(x, w) + [ g0(x) + Δg(x, w) ] u
y = x1   (5)
where x = [x1, x2, ..., xn]^T, f0(x) = L^n_{f0} h, Δf(x, w) = L_{Δf} L^{n−1}_{f0} h, g0(x) = L_{g0} L^{n−1}_{f0} h and Δg(x, w) = L_{Δg} L^{n−1}_{f0} h. A further assumption is introduced as follows.

Assumption 3.4. The uncertain control gain function Δg(x, w) is confined within a certain range:

0 < bmin ≤ 1 + g0^{−1}(x) Δg(x, w) ≤ bmax   (6)
where bmin and bmax are the lower and upper bound parameters respectively. Assumption 3.4 implies that the smooth function g0(x) + Δg(x, w) is strictly either positive or negative. In the following, without loss of generality, we assume 1 + g0^{−1}(x) Δg(x, w) ≥ bmin for all x ∈ R^n. Assumption 3.4 is reasonable because g0(x) + Δg(x, w) being away from zero is the controllability condition of system (5). It should be emphasized that the lower bound bmin is only required for analytical purposes; its true value need not be known.

3.2 Adaptive fuzzy tracking control problem
In this paper, the control objective is to force y(t) to track a given reference signal yd(t) while keeping the states and control bounded, i.e. the output tracking error e1(t) = y(t) − yd(t) should be small. The reference signal yd(t) is assumed to be bounded with bounded derivatives up to the n-th order for all t ≥ 0, and yd^{(n)}(t) is piecewise continuous. Let ν = [ yd, yd^{(1)}, ..., yd^{(n−1)} ]^T, so that ν is bounded, and let e = x − ν. Eq. (5) can then be written as

ėi = e_{i+1},  1 ≤ i ≤ n − 1
ėn = f0(x) + Δf(x, w) − yd^{(n)}(t) + [ g0(x) + Δg(x, w) ] u(t)   (7)

For Eq. (7), we apply the following transformation of the control input:

u(t) = g0^{−1}(x) [ −f0(x) + v(t) ]   (8)
where v(t ) is a new control input. From observation of Eq. (8), the control scheme proposed here consists of two terms: one is − g 0−1 ( x) f 0 ( x) which is used to cancel the nonlinearity in the system and the other is g 0−1 ( x)v(t ) which is mainly used to compensate the effect of uncertainty in the system. Substituting Eq. (8) into Eq. (7), we have *e&i = ei +1 , 1 ≤ n ≤ n − 1 +& (n) ,en = F ( x, w) − yd (t ) + G ( x, w ) v(t )
(9)
where F(x, w) = Δf(x, w) − g0^{−1}(x) Δg(x, w) f0(x), G(x, w) = 1 + E(x, w) and E(x, w) = g0^{−1}(x) Δg(x, w). The problem of adaptive fuzzy tracking control design for v(t) has recently received renewed attention (see, e.g., Wang [3], Fischle and Schroder [9], Su and Stepanenko [10], Yang, Zhou and Jia [11]). If fuzzy systems are used to approximate F(x, w) and G(x, w), then an "integral" control law is necessary, i.e. a dynamic feedback

χ̇ = ϖ(χ, ξ(z), e),  ϖ(0, 0, 0) = 0   (10)

v(t) = αr(χ, ξ(z), e),  αr(0, 0, 0) = 0   (11)

where ξ(z) are the known fuzzy basis functions, χ ∈ R^{nχ}, and ϖ(·) and αr(·) are smooth functions on R^{nχ} × R^K × R^n. An important quality of such a control law is that the dimension nχ of χ should be "as small as possible", and in particular not dependent on the dimension of the state. However, all the conventional adaptive fuzzy tracking control laws available in the literature have the property that the dimension nχ of χ is 2N^n when the fuzzy system (cf. Remark 2.1) is used to estimate the functions F(x, w) and G(x, w); that is, nχ depends strongly on the dimension n of the state. In such a case, the learning time tends to become unacceptably long for systems of higher order. For the conventional adaptive fuzzy tracking control laws there is
another main difficulty, which comes from the uncertainty G(x, w): it is usually approximated by a fuzzy system Ĝ(x, σ), and consequently the estimate Ĝ(x, σ) must be kept away from zero to avoid a possible singularity problem. In this paper, we develop a novel adaptive robust fuzzy tracking controller that does not require estimating the unknown function G(x, w), and therefore avoids the possible controller-singularity problem usually met in traditional adaptive fuzzy tracking control laws. It results in all the solutions of the closed-loop system (9), (10) and (11) being uniformly ultimately bounded; furthermore, the tracking error satisfies lim_{t→∞} e(t) = 0. The outstanding feature of the algorithm proposed in this paper is that the dimension nχ of χ is only one, no matter how many states of the system are investigated and how many rules of the fuzzy system are used.
4 Design of the Adaptive Robust Fuzzy Tracking Controller

To simplify the design procedure of the adaptive robust fuzzy tracking controller (ARFTC), we define a variable transformation for Eq. (9):

S(e, t) = en + k1 e1 + ... + k_{n−1} e_{n−1} = en + Σ_{i=1}^{n−1} ki ei   (12)

where the ki are chosen such that s^n + k_{n−1} s^{n−1} + ... + k1 = 0 is a stable polynomial. Hence we get

Ṡ(e, t) = ėn + Σ_{i=1}^{n−1} ki e_{i+1}   (13)
Substituting Eq. (9) into Eq. (13), we have

ėi = e_{i+1},  1 ≤ i ≤ n − 2
ė_{n−1} = −k1 e1 − ... − k_{n−1} e_{n−1} + S
Ṡ = F(x, w) + [ 1 + E(x, w) ] v(t) − yd^{(n)}(t) + Σ_{i=1}^{n−1} ki e_{i+1}   (14)
Let ζ = [e1, e2, ..., e_{n−1}]^T. Multiplying Eq. (12) by k_{n−1} and substituting it into Eq. (14), we get

ζ̇ = Aζ + BS
Ṡ = F(x, w) − yd^{(n)}(t) + [ 1 + E(x, w) ] v(t) + k_{n−1} S − Cζ   (15)
where

A = [  0    1    0   ...   0
       0    0    1   ...   0
       :    :    :         :
      −k1  −k2  −k3  ...  −k_{n−1} ],   B = [0, 0, ..., 0, 1]^T,
C^T = [ k_{n−1} k1,  k_{n−1} k2 − k1,  ...,  k_{n−1} − k_{n−2} ].

Since A is stable, the positive definite solution P = P^T of the Lyapunov equation

A^T P + PA + Q = 0   (16)

exists, where Q > 0 is chosen by the designer. Construct the ARFTC as

v = v_equ + v_N   (17)

where v_equ denotes a certainty-equivalent controller and v_N a nonlinear controller. We define v_equ as

v_equ = −(k_{n−1} + k/2) S   (18)

where k > 0 is chosen by the designer. Substituting Eq. (18) into Eq. (15), we can rewrite Eq. (15) as

ζ̇ = Aζ + BS
Ṡ = −(1/2) k S + F′(x, ζ, yd^{(n)}, w) + ( 1 + E(x, w) ) v_N   (19)
where F′(x, ζ, yd^{(n)}, w) = F(x, w) + B^T Pζ − Cζ − yd^{(n)}(t). When the controller is designed by the feedback linearization technique, the most commonly used control structure is v_N = −F′(x)/(1 + E(x)) if there are no uncertain parameters in the system. When there are uncertainties in the system, the nonlinearities F′(x, ζ, yd^{(n)}, w) and 1 + E(x, w) are unknown, and many adaptive control schemes have been developed in which the unknown function 1 + E(x, w) is approximated by a function approximator 1 + Ê(x, w); consequently, 1 + Ê(x, w) must be kept away from zero to avoid a possible singularity problem. In this paper, we propose a method of designing the adaptive robust controller without estimating 1 + Ê(x, w), so that we not only reduce the number of parameters that need to be adapted on-line but also avoid the possible controller-singularity problem usually met in feedback linearization designs when adaptive fuzzy control is applied. For Eq. (19), the uncertainties F′ and E are bounded functions in control engineering practice; hence there exists an unknown lower bound for E. If Δf and Δg are
non-repeatable functions that can be considered continuous, then the uncertain function F′ is also continuous. In order to design the control law, a fuzzy system is employed to approximate the uncertain function F′. By Lemma 2.1, F′ can be approximated by a fuzzy system, and we obtain the bound function

| F′(z) | ≤ σ^T ξ(z) + ε + D

where z = [x, ζ]^T, ξ(z) = [ξ1(z), ξ2(z), ..., ξK(z)]^T is the known fuzzy basis function vector, σ is the weight parameter vector of the fuzzy system, ε is a parameter representing the approximation accuracy, and | yd^{(n)}(t) | ≤ D. Since σ, D and ε are all unknown, the above expression can be rewritten as
(20)
where ψ ( z ) = 1 + ξ1 ( z ) + ξ 2 ( z ) + # + ξ K ( z )
is a known function and
T
θ = max [ε + D, σ 1 ,# , σ K ] . And E is a bound function which holds Assumption 2.4. The nonlinear controller vN is expressed as follows ( θψ ( z ) S ˆ vN = −θψ ( z ) ( (21) θψ ( z ) S + ς e − at where ς > 0 and a > 0 will be specified by the designer. For the parameter θ , let θˆ be the estimate of θ . Consider a constraint region Ω for approximation parameter θˆ for all t ≥ 0 to be contained inside given conθ
straint region Ωθ , the parameter update laws with projection algorithms are necessary. Here, the smooth projection idea proposed by Khalil [12] is used to tackle this problem. Suppose the region Ωθ is a convex hypercube. That is, consider ( ( Ω 01 = θˆ 0 ≤ θ ≤ c and Ωθ 1 = θˆ 0 ≤ θ ≤ c + δ where all the values c and
{
}
{
}
δ > 0 are chosen by the designer. A smooth projection algorithm with respect to θˆ can be obtained as & -*lφ , if θˆ > c and φ > 0 θˆ = + (22) -,lφ , otherwise $ c − θˆ / where l denotes the adaptive gain, φ = ψ ( z ) S and φ = %1 + 0φ . % δ 01 & Theorem 4.1 For the uncertain nonlinear system (3) subject to Assumptions 3.1 to 3.4, suppose that the uncertain function F ' ( z ) is approximated by the fuzzy system, the bound function in form (20) is available, an adaptive robust fuzzy tracking control scheme u (t ) is selected in Eqs. (8), (18) and (21) and the bound
parameter θ̂ of the uncertainty is estimated by Eq. (22). Then the system tracking error converges to zero asymptotically, i.e. lim_{t→∞} ( y(t) − yd(t) ) = 0.

Proof: Choose the Lyapunov function candidate

V(ζ, S, θ̃) = (1/2) ζ^T Pζ + (1/2) S² + (1/2) bmin l^{−1} θ̃²

where θ̃ = θ̂ − θ. Differentiating V along the trajectory of Eq. (19) and substituting Eqs. (18) and (20), we get

V̇ = −(1/2) ζ^T Qζ − (1/2) k S² + S ( F′ + (1 + E) v_N ) + bmin l^{−1} θ̃ (dθ̃/dt)
  ≤ −(1/2) ζ^T Qζ − (1/2) k S² + bmin l^{−1} θ̃ (dθ̃/dt) + S ( F′ − (1 + E) θ̂ ψ(z) · θ̂ ψ(z) S / ( θ̂ ψ(z) |S| + ς e^{−at} ) )

Substituting Eq. (20) into the above, we get

V̇ ≤ −(1/2) ζ^T Qζ − (1/2) k S² + bmin l^{−1} θ̃ ( l ψ(z) |S| − dθ̂/dt ) + bmin ς e^{−at}   (23)

From the adaptive law [12] we have l^{−1} θ̃ ( l ψ(z) |S| − dθ̂/dt ) ≤ 0 and θ̂ ∈ Ωθ1 for all t ≥ 0 if θ̂(0) ∈ Ω01. Then Eq. (23) becomes

V̇ ≤ −(1/2) ζ^T Qζ − (1/2) k S² + bmin ς e^{−at} ≤ −Vn + bmin ς e^{−at}

where Vn = (1/2) ζ^T Qζ + (1/2) k S². Then we get

V(t) − V(0) ≤ −∫₀ᵗ Vn dt + bmin ς ( 1 − e^{−at} ) / (2a)

Therefore

0 ≤ ∫₀ᵗ Vn dt ≤ V(0) + bmin ς ( 1 − e^{−at} ) / (2a)

and

lim_{t→∞} ∫₀ᵗ Vn dt ≤ V(0) + bmin ς / (2a) < ∞.

Since Vn is a uniformly continuous function, by Barbalat's lemma [4], lim_{t→∞} Vn = 0. This implies lim_{t→∞} ζ = 0 and lim_{t→∞} S = 0 by the definition of Vn. As a result, lim_{t→∞} ei(t) = 0, i = 1, 2, ..., n − 1, and lim_{t→∞} S = 0, which implies

lim_{t→∞} ( y(t) − yd(t) ) = 0.

So the desired result has been obtained.
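The update law (22) is straightforward to implement; the following is a minimal Python sketch (the Euler discretization, and the default parameter values taken from the Section 5 example, are assumptions made here for illustration):

```python
def theta_hat_rate(theta_hat, psi_z, S, l=1.0, c=0.2, delta=0.01):
    """Right-hand side of the smooth projection law (22)."""
    phi = psi_z * S                             # phi = psi(z) * S
    if theta_hat > c and phi > 0:
        phi *= 1.0 + (c - theta_hat) / delta    # smoothly projected phi-bar
    return l * phi                              # d(theta_hat)/dt

# Euler step with step size h:
# theta_hat += h * theta_hat_rate(theta_hat, psi_z, S)
```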
5 Design of the ARFTC for an Inverted Pendulum System

To demonstrate the effectiveness of the proposed algorithm, an inverted pendulum system is used for simulation. Fig. 1 shows the plant, composed of a pole and a cart; the cart moves on rail tracks in the horizontal direction. The control objective is to balance the pole starting from an arbitrary condition by supplying a suitable force to the cart. The same case study is given in Hwang [13]. The dynamic equations are
Fig. 1. An inverted pendulum system (pole angle θ, angular velocity θ̇, applied force u)

ẋ1 = x2
ẋ2 = f(x, w) + g(x, w) u(t)   (24)

where

f(x, w) = [ g sin x1 − m l x2² cos x1 sin x1 / (mc + m) ] / [ l ( 4/3 − m cos² x1 / (mc + m) ) ] + w(t)

g(x, w) = [ cos x1 / (mc + m) ] / [ l ( 4/3 − m cos² x1 / (mc + m) ) ]
Here mc denotes the mass of the cart, m the mass of the pole, 2l the length of the pole, and u the applied force; x1 = θ is the angular position from the equilibrium position and x2 = θ̇. The trajectory planning problem for a weight-lifting operation is considered in this paper, and the desired angle trajectory is assumed to be yd(t) = 0.1 sin(2t). We use the following parameters for simulation: mc = 1 kg, m = 0.1 kg, l = 0.5 m, and the disturbance is taken as w(t) = 0.2 sin(2t). Define five fuzzy sets for each of x1, x2 and e1, with labels A^i_{h1} (NL), A^i_{h2} (NM), A^i_{h3} (ZE), A^i_{h4} (PM), A^i_{h5} (PL), characterized by the membership functions

μ_{A1}(xi) = exp( −( (xi + π/6) / (π/24) )² ),  μ_{A2}(xi) = exp( −( (xi + π/12) / (π/24) )² ),
μ_{A3}(xi) = exp( −( xi / (π/24) )² ),  μ_{A4}(xi) = exp( −( (xi − π/12) / (π/24) )² ),
μ_{A5}(xi) = exp( −( (xi − π/6) / (π/24) )² ),  i = 1, 2, 3   (25)

Let us set

f0(x) = 0,  g0(x) = 1,  Δf(x, w) = f(x1, x2, w),  Δg(x, w) = g(x1, x2, w) − 1   (26)

With e1(t) = x1(t) − yd(t), e2(t) = x2(t) − ẏd(t), k1 = 5 and Q = 10, we obtain P = 1 and

v_equ = −(5 + 0.5k) S
where S = e2 + 5 e1, and

Ṡ = −0.5 k S + F(x, e1, w) + (1 + E) v_N

where F(x, e1, w) = f(x, w) + 5 e1 + g(x, w) − ẏd(t) and E = g(x, w) − 1. Since g(x, w) has a lower bound and F(x, e1, w) is a continuous (if complicated) function, F can be approximated by a fuzzy system. Assume that the fuzzy system is constructed from the following 75 fuzzy IF-THEN rules:

R1: IF x1 is PL AND x2 is PL AND e1 is PL THEN y is α1
...
R75: IF x1 is NL AND x2 is NL AND e1 is NL THEN y is α75

The fuzzy system can then be written as

F(x, e1, w) = Σ_{i=1}^{75} αi ξi(x, e1) + ε   (27)

so the bound function can be written as | F(x, e1) | ≤ θ ψ(x, e1). Accordingly, the adaptive robust fuzzy tracking control scheme for the inverted pendulum system is

u(t) = −(5 + 0.5k) S − θ̂ ψ(x, e1) · θ̂ ψ(x, e1) S / ( θ̂ ψ(x, e1) |S| + 5 e^{−0.01t} )   (28)

where k = 180 and θ̂ is calculated by Eq. (22), with l = 1, c = 0.2, δ = 0.01, φ = l ψ(x, e1) S and ψ(x, e1) = 1 + Σ_{i=1}^{75} | ξi(x, e1) |. For the simulation, we choose the initial conditions x1(0) = 0.1, x2(0) = 0 and θ̂(0) = 0. The simulation results are shown in Fig. 2 and Fig. 3.
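The complete closed loop (plant (24), controller (28) and the adaptation law (22)) is simple to simulate. The sketch below is an illustration, not the authors' code: the forward-Euler integration, the rule-base enumeration (all 5 × 5 × 5 membership combinations, whereas the text quotes 75 rules) and the resulting ψ(x, e1) are assumptions made here.

```python
import numpy as np

G, MC, M, L = 9.8, 1.0, 0.1, 0.5            # gravity, cart mass, pole mass, half-length
CENTERS = np.array([-np.pi/6, -np.pi/12, 0.0, np.pi/12, np.pi/6])

def plant(x, u, t):
    """Pendulum dynamics (24) with disturbance w(t) = 0.2 sin(2t)."""
    s1, c1 = np.sin(x[0]), np.cos(x[0])
    den = L * (4.0/3.0 - M * c1**2 / (MC + M))
    f = (G * s1 - M * L * x[1]**2 * c1 * s1 / (MC + M)) / den + 0.2 * np.sin(2*t)
    g = (c1 / (MC + M)) / den
    return np.array([x[1], f + g * u])

def psi(x1, x2, e1):
    """psi = 1 + sum of the normalized rule firing strengths (constant 2 here)."""
    mf = lambda v: np.exp(-((v - CENTERS) / (np.pi/24))**2)
    w = np.einsum('i,j,k->ijk', mf(x1), mf(x2), mf(e1)).ravel()
    return 1.0 + np.abs(w / w.sum()).sum()

dt, k, c, delta, l, th = 1e-3, 180.0, 0.2, 0.01, 1.0, 0.0
x = np.array([0.1, 0.0])
for i in range(int(20.0 / dt)):
    t = i * dt
    e1, e2 = x[0] - 0.1*np.sin(2*t), x[1] - 0.2*np.cos(2*t)
    S = e2 + 5.0 * e1
    p = psi(x[0], x[1], e1)
    u = -(5.0 + 0.5*k)*S - th*p * (th*p*S) / (th*p*abs(S) + 5.0*np.exp(-0.01*t))  # Eq. (28)
    phi = p * S
    if th > c and phi > 0:                   # smooth projection, Eq. (22)
        phi *= 1.0 + (c - th) / delta
    th += dt * l * phi                       # adapt the bound estimate
    x += dt * plant(x, u, t)                 # forward-Euler step
```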
Fig. 2. Simulation results for Control algorithm in Eq. (28). (Position of the inverted pendulum system. Solid line: actual position, Dashed line: reference position)
Fig. 3. Control force
Before presenting the advantages of the ARFTC developed in this paper, we briefly review the control law proposed in Wang [3]:

u = uc + us = (1 / ĝ(x, θ2)) [ −f̂(x, θ1) + ÿd(t) + k^T e ]
    − I1 sgn(e^T PB) (1 / g_L(x)) [ | f̂(x, θ1) | + f_U(x) + | ĝ(x, θ2) uc | + g_U(x) | uc | ]   (29)

where k = [2, 1]^T, | f(x) | ≤ f_U(x), g_L(x) ≤ g(x) ≤ g_U(x), B = [0, 1]^T and Q = diag[10, 10]. Solving the Lyapunov equation, we obtain
P = [ 15  5
       5  5 ]

where I1 = 1 if Ve > V̄ (a constant specified by the designer), I1 = 0 if Ve ≤ V̄, and Ve = (1/2) e^T Pe.

Fig. 4. Simulation results for the adaptive parameter θ̂

Fig. 5. Simulation results for tracking error

Defining five fuzzy sets, the same as those in Eq. (25), for each of x1 and x2, with twenty-five fuzzy rules for each of the systems f̂(x, θ1) and ĝ(x, θ2), and using the singleton fuzzy model, product inference and center-average defuzzification, the functions f(x) and g(x) can be approximated by the fuzzy systems f̂(x, θ1) = θ1^T ξ1(x) and ĝ(x, θ2) = θ2^T ξ2(x), where
θ1 = [θ1^1, θ1^2, ..., θ1^25]^T ∈ R^25,  ξ1(x) = [ξ1^1(x), ξ1^2(x), ..., ξ1^25(x)]^T ∈ R^25,
θ2 = [θ2^1, θ2^2, ..., θ2^25]^T ∈ R^25,  ξ2(x) = [ξ2^1(x), ξ2^2(x), ..., ξ2^25(x)]^T ∈ R^25,

with components

ξ1^1(x) = μ_{h1}^1(x1) μ_{h2}^1(x2) / D1,  ξ1^2(x) = μ_{h1}^1(x1) μ_{h2}^2(x2) / D1,
ξ1^3(x) = μ_{h1}^1(x1) μ_{h2}^3(x2) / D1,  ...,  ξ1^25(x) = μ_{h1}^5(x1) μ_{h2}^5(x2) / D1,

where

D1 = Σ_{i=1}^{5} Σ_{j=1}^{5} μ_{h1}^i(x1) μ_{h2}^j(x2)

and the construction of ξ2(x) is similar to that of ξ1(x). In Wang [3], the following adaptive law is proposed to adjust the parameter vector θ1:

θ̇1 = −Γ1 e^T PB ξ1(x),  if ( |θ1| < M_f ) or ( |θ1| = M_f and e^T PB θ1^T ξ1(x) ≥ 0 )
θ̇1 = P{ −Γ1 e^T PB ξ1(x) },  if ( |θ1| = M_f and e^T PB θ1^T ξ1(x) < 0 )   (30)

where the projection operation P{·} is defined as

P{ −Γ1 e^T PB ξ1(x) } = −Γ1 e^T PB ξ1(x) + Γ1 e^T PB θ1 θ1^T ξ1(x) / |θ1|².

And in Wang [3], the following adaptive law is used to adjust the parameter vector θ2:

θ̇2 = −Γ2 e^T PB ξ2(x) uc,  if e^T PB ξ2(x) uc < 0
θ̇2 = 0,  if e^T PB ξ2(x) uc ≥ 0   (31)
For the parameters M_f, V̄, Γ1 and Γ2, please refer to Wang [3]. The simulation results are shown in Fig. 6 to Fig. 8. Fig. 5 shows the tracking error obtained with the proposed ARFTC, and Fig. 8 shows the tracking error obtained with the controller of Eq. (29). From these results we can see that the control performance of the ARFTC is better than that of the controller in Eq. (29). Hence the ARFTC exhibits the advantages described in Section 4: only one function needs to be approximated by a fuzzy system, and no matter how many states of the system are investigated and how many rules of the fuzzy system are used, only one parameter needs to be adapted
on-line in the ARFTC. In contrast, for the traditional methodology (e.g. the control law proposed in Wang [3]), even with five fuzzy sets for each state variable and the singleton fuzzy model described above, 50 parameters need to be adapted on-line for the fuzzy systems f̂(x, θ1) and ĝ(x, θ2) when the fuzzy logic controller is implemented; moreover, the number of parameters to be adapted grows exponentially with the number of state variables or fuzzy sets. The computational complexity is lessened dramatically and the learning time reduced vastly when using the ARFTC developed in this paper. The ARFTC therefore has the potential to provide uncomplicated, reliable and easy-to-understand solutions for a large variety of nonlinear control tasks, even for higher-order systems.
Fig. 6. Simulation results for Control algorithm in Eq. (29). (Position of the inverted pendulum system: Solid line: actual position, Dashed line: reference position)
Fig. 7. Control force
Fig. 8. Simulation results for tracking error
6 Conclusions

In this paper, the tracking control problem has been considered for a class of nonlinear uncertain systems with an unknown system function and an unknown gain function. Fuzzy logic systems have been used to approximate the lumped non-repeatable state-dependent nonlinear function, with the fuzzy system serving as the upper bound function of the uncertainty, and an adaptive robust fuzzy tracking control (ARFTC) algorithm that guarantees asymptotic convergence of the system tracking error to zero has been proposed. The outstanding features of the proposed algorithm are that it avoids the possible controller-singularity problem of some existing adaptive control schemes based on feedback linearization, and that its adaptive mechanism has a minimal learning parameterization: no matter how many states of the system are investigated and how many rules of the fuzzy system are used, only one parameter needs to be adapted on-line, so the computational load is reduced and the algorithm is convenient to implement in engineering practice. Finally, to examine the efficiency of the proposed algorithm, it has been applied to an inverted pendulum system, and the simulation results have been compared with those obtained using the algorithm of Wang [3]. The simulation results show the effectiveness of the proposed ARFTC control scheme.
Acknowledgments This work was supported in part by the Research Fund for the Doctorate Program of Higher Education under Grant No. 20020151005, the Science Foundation under Grant No. 95-06-02-22 and the Young Investigator Foundation under Grant No. 95-05-05-31 of the Ministry of Communications of P. R. China.
References 1. 2. 3. 4. 5. 6. 7. 8.
9. 10. 11. 12. 13.
Sastry, S. and Isidori. A., Adaptive control of linearization systems. IEEE Trans. on Automatic Control. Vol. 34, pp.1123-1131, 1989. Isidori, A., Nonliear Control Systems. 2nd ed. Berlin: Springer-Verlay, 1995. Wang, L. X., A Course in Fuzzy Systems and Control. New York: Prentice-Hall Inc., 1997. Slotine, J. E., and Li, W., Applied Nonlinear Control. New York: Prentice-Hall, Inc., 1990. Wang, L. X., Fuzzy systems are universal approximators. In proc. IEEE. International conf. On Fuzzy systems, San Diego, CA, pp.1163-1170, 1992 Lee, C. C., Fuzzy logic in control systems: fuzzy logic controller—Parts I&II. IEEE. Trans. Syst. Man. Cyber. Vol. 20, pp. 404-435, 1990, Ying, H., Sufficient conditions on general fuzzy systems as function approximators. Automatica. Vol. 30, No. 4, pp. 521-525, 1994. Wang, L. X. and Mendel, J. M., Fuzzy basis functions universal approximation, and orthogonal least-squares learning. IEEE Trans. on Neural Networks. Vol. 3, pp. 807-814, 1992. Fischle K. And Schroder D., An improved stable adaptive fuzzy control method. IEEE Trans. on Fuzzy Systems. Vol. 7, No. 1, pp. 27-40, 1999. Su, C. Y. And Stepanenko, Y., Adaptive control of a class of nonlinear systems with fuzzy logic. IEEE Trans. Fuzzy Systems. Vol. 2, pp.285-294, 1994. Yang, Y. S., Zhou, C. and Jia, X. L., Robust adaptive fuzzy control and its application to ship roll stabilization. Information Sciences. Vol. 142, pp. 177-194, 2002. Khalil, H. K., Adaptive output feedback control of nonlinear systems represented by input-output models. IEEE Trans. on Automatic Control. Vol. 41, pp. 177-188, 1996. Hwang, G. C. and Lin, S. C., A stability approach to fuzzy control design for nonlinear systems. Fuzzy Sets and Systems. Vol. 40, pp. 279-287, 1990.
A Complete Algorithm for Attribute Reduction

B. Wang¹ and S.B. Chen²

¹ School of Materials Science & Eng., Shanghai Jiao Tong Univ., Shanghai 200030, P.R. China
² School of Materials Science & Eng., Shanghai Jiao Tong Univ., Shanghai 200030, P.R. China
[email protected]
Abstract. Rough set theory is an effective mathematical tool for dealing with vagueness and uncertainty, and has been applied in a variety of fields such as data mining, pattern recognition and process control. Finding high-performance algorithms for attribute reduction is therefore significant. Most published algorithms are incomplete, and work on complete algorithms has been comparatively scarce. This paper presents a complete algorithm based on the principle of the discernibility matrix and discusses its time complexity; a proof of the completeness of the algorithm is also given. Finally, the validity of the algorithm is demonstrated on two classical databases from the UCI repository.
1 Introduction

Rough set theory [3], introduced by Pawlak in 1982, is an effective mathematical tool for dealing with vagueness and uncertainty. It has been applied in a variety of fields such as data mining, pattern recognition and process control [4]. Attribute reduction is significant in most applications, so it has been a major concern in research on rough set theory. Let S = ⟨U, C, D⟩ be a decision table, where U is the universe and the disjoint attribute sets C and D are referred to as the condition and decision attributes respectively; U, C and D must be non-empty, finite sets. According to the definition given by Pawlak [3], P ⊆ C is a reduction of C if P satisfies the following two conditions: (1) POS_P(D) = POS_C(D); (2) POS_{P−{a}}(D) ≠ POS_C(D) for each a ∈ P. In [3], Pawlak presented the original algorithm for attribute reduction, whose time complexity is O(2^n), where n = |C|; it is thus only a theoretical algorithm and of very limited use in actual applications. The output of that algorithm, a subset of C, satisfies both conditions above. Some attribute
reduction algorithms with polynomial-time complexity have been introduced, notably those in [1, 2, 5]. These algorithms are more practical, but their outputs satisfy only the first condition above. If the output of an attribute reduction algorithm satisfies both conditions, we say the algorithm is complete; if its output satisfies only the first condition, we say it is incomplete. What we want are complete algorithms with polynomial-time complexity. In [6], the authors first introduced a complete algorithm that requires a total order over the set of condition attributes to be defined beforehand; the order can be considered prior knowledge, which allows the experience of experts to be fully used, and the output of the algorithm is unique for a given order of attributes. However, that algorithm is limited when no prior knowledge is available. The same authors also presented an optimal reduction algorithm, which is likewise complete and whose time complexity is not lower than O(m³n²) in the worst case. To the authors' knowledge, no other complete algorithm with polynomial-time complexity has been published. In this paper, an algorithm based on the principle of the discernibility matrix is introduced. We show that it is a complete attribute reduction algorithm with polynomial-time complexity, O(m²n²) in the worst case. The validity of the algorithm is demonstrated on two classical databases from the UCI repository. This paper is organized as follows. In Section 2, some concepts and definitions are introduced. The algorithm and the analysis of its time complexity are given in Section 3. In Section 4, we show that the algorithm is complete. In Section 5, we give the results of experiments, and the conclusion is in Section 6.
2 Concepts and Definitions

Let S = ⟨U, C, D⟩ be a decision table and suppose |U| = n. The discernibility matrix of S, denoted by M_S, is an n × n matrix defined by

c_ij = { a ∈ C | f(x_i, a) ≠ f(x_j, a) and w(x_i, x_j) = 1 },  i, j = 1, 2, ..., n,

where

w(x_i, x_j) = 1, if x_i ∈ POS_C(D) and x_j ∉ POS_C(D);
w(x_i, x_j) = 1, if x_i ∉ POS_C(D) and x_j ∈ POS_C(D);
w(x_i, x_j) = 1, if x_i, x_j ∈ POS_C(D) and (x_i, x_j) ∉ IND(D);
w(x_i, x_j) = 0, otherwise.
Table 1. An example of a decision table

U    a   b   c   d
x1   2   2   0   1
x2   1   2   0   0
x3   1   2   0   1
x4   0   0   0   0
x5   1   0   1   0
x6   2   0   1   1
Table 2. The discernibility matrix

       1           2           3           4          5           6
1      ∅          {a}         {a}        {a, b}    {a, b, c}     ∅
2     {a}          ∅           ∅         {a, b}     {b, c}     {a, b, c}
3     {a}          ∅           ∅         {a, b}     {b, c}     {a, b, c}
4    {a, b}      {a, b}      {a, b}       ∅          ∅         {a, c}
5   {a, b, c}    {b, c}      {b, c}       ∅          ∅          {a}
6      ∅        {a, b, c}   {a, b, c}   {a, c}      {a}          ∅
For example, Table 1 is a decision table and Table 2 is its discernibility matrix. It is necessary to give the following definitions before discussing our algorithm in detail. Let M_S be the discernibility matrix of a decision table S. The set of non-empty entries of M_S is called the discernibility set of S, denoted by $S. Obviously, the discernibility set of a decision table is a collection of subsets of condition attributes. We associate with it the subset Core($S) = {a : {a} ∈ $S}, called the core of $S. Let B ⊆ C. The B-elimination set of $S is A⁻_S(B) = {A' : A ∈ $S, A' = A − B and A' ≠ ∅}, and the B-selection set of $S is A⁺_S(B) = {A ∈ $S : A ∩ B = ∅}. Clearly, the B-elimination set and the B-selection set are again collections of subsets of condition attributes. To illustrate these definitions, take the discernibility matrix in Table 2: $S = {{a}, {a, b}, {a, b, c}, {b, c}, {a, c}} and Core($S) = {a}. Suppose B = {a}. Then A⁻_S(B) = {{b}, {b, c}, {c}} and A⁺_S(B) = {{b, c}}.
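Expressed in code, these definitions are one-liners. The following is a minimal Python sketch (the function names and types are ours, not the paper's) of the core, the B-elimination set and the B-selection set, checked against the example above.

    from typing import FrozenSet, Set

    DiscSet = Set[FrozenSet[str]]   # a discernibility set: a family of attribute subsets

    def core(ds: DiscSet) -> Set[str]:
        """Core: attributes occurring as singleton entries of the discernibility set."""
        return {a for entry in ds if len(entry) == 1 for a in entry}

    def elimination(ds: DiscSet, b: Set[str]) -> DiscSet:
        """B-elimination set: delete the attributes in B from every entry, drop emptied entries."""
        return {entry - b for entry in ds if entry - b}

    def selection(ds: DiscSet, b: Set[str]) -> DiscSet:
        """B-selection set: keep only the entries disjoint from B."""
        return {entry for entry in ds if not entry & b}

    # The running example of Table 2:
    S = {frozenset(x) for x in ({"a"}, {"a", "b"}, {"a", "b", "c"}, {"b", "c"}, {"a", "c"})}
    assert core(S) == {"a"}
    assert elimination(S, {"a"}) == {frozenset({"b"}), frozenset({"b", "c"}), frozenset({"c"})}
    assert selection(S, {"a"}) == {frozenset({"b", "c"})}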
3 The Complete Algorithm and Its Time Complexity
In this section, we introduce a complete attribute reduction algorithm and analyze its time complexity. The input of the algorithm is the discernibility set $S of a given decision table S.

Algorithm 3.1
(1) R ⇐ ∅, P ⇐ ∅, T ⇐ ∅;
(2) $ ⇐ $S;
(3) If $ = ∅, stop; otherwise continue;
(4) P ⇐ Core($);
(5) If P = ∅, go to Step 6; otherwise go to Step 7;
(6) Select one attribute a in C − (R ∪ T) randomly, $ ⇐ A⁻({a}), T ⇐ T ∪ {a}, and go to Step 3;
(7) R ⇐ R ∪ P, $ ⇐ A⁺(R), and go to Step 3.

Here A⁻ and A⁺ are taken with respect to the current $. When the algorithm stops, R is its output, namely the attribute reduction. Fig. 1 is the flow chart of Algorithm 3.1.
Fig. 1. The flow chart of Algorithm 3.1
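Continuing the sketch begun in Section 2 and reusing its core, elimination and selection helpers, the loop below follows the seven steps of Algorithm 3.1; the arbitrary pick in Step 6 means that, exactly as in the paper, different runs may return different (equally valid) reducts.

    def algorithm_3_1(C: set, ds) -> set:
        """Complete attribute reduction. C: set of condition attributes;
        ds: the discernibility set $S. Returns the reduct R."""
        R, T = set(), set()            # Step 1
        cur = set(ds)                  # Step 2: $ <- $S
        while cur:                     # Step 3: stop when $ is empty
            P = core(cur)              # Step 4
            if P:                      # Step 5 -> Step 7: absorb the core, then select
                R |= P
                cur = selection(cur, R)
            else:                      # Step 5 -> Step 6: eliminate one unused attribute
                a = next(iter(C - (R | T)))
                cur = elimination(cur, {a})
                T.add(a)
        return R

For the discernibility set of Table 2 with C = {'a', 'b', 'c'}, this returns {'a', 'c'} or {'a', 'b'} depending on which attribute is eliminated; both outputs satisfy the two conditions of Section 1.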
Suppose |U| = n and |C| = m, where m and n are positive integers. In the worst case, |$| = (n² − n)/2 = N. The time complexity of computing Core($) is O(mN), and the time complexities of computing A⁻(B) and A⁺(B) for B ⊆ C are also O(mN). Algorithm 3.1 repeats at most m times, so its time complexity is O(m(mN + mN)) = O(2m²(n² − n)/2) = O(m²n²) in the worst case. It is clear that Algorithm 3.1 is faster than the optimal reduction algorithm [6] in the worst case.
4 The Completeness of the Algorithm
In this section, we show that Algorithm 3.1 is complete, namely, that its output satisfies the two conditions presented in Section 1. It has been proved that those two conditions are equivalent to the following two conditions [6]: (1) for ∀A ∈ $S, A ∩ P ≠ ∅; (2) for ∀a ∈ P, ∃A ∈ $S such that A ∩ (P − {a}) = ∅. To prove the completeness of the algorithm, it suffices to prove that its output, that is R, satisfies these two conditions (a checking sketch is given after Table 3). To this end, it is worth establishing some properties of the algorithm. Steps 3 to 7 of the algorithm form a loop. Since U and C are finite sets, the algorithm obviously terminates, so we can safely suppose it iterates k times, where k is a positive integer. If Step 6 is executed during an iteration, we call it an elimination iteration; otherwise Step 7 is executed and we call it a selection iteration. In an elimination iteration one attribute is added to T (the set of attributes eliminated by the algorithm), and in a selection iteration some attributes are added to R. We use the sequence $0, $1, ..., $i, ..., $k−1, $k to indicate how $ changes while the algorithm runs, where i = 0, ..., k: the first iteration changes $ from $0 into $1, and the ith iteration changes $ from $i−1 into $i. Accordingly, the sequence R0, R1, ..., Rk indicates the changes of R, and the sequence T0, T1, ..., Tk indicates the changes of T. We have $0 = $S, $k = ∅, R0 = ∅, Rk = R (the output of the algorithm), T0 = ∅, Tk = T, and if i ≤ j then Ri ⊆ Rj and Ti ⊆ Tj. It is necessary to point out that C − R = T does not always hold. Let C = {a, b, c, d, e, f} and let the input be $S = {{a, b, d}, {a, c, d}, {b, c, d}, {a, b, c}, {e, f}}. The output of the algorithm is R = {c, d, f}. Table 3 shows how the algorithm runs.
Table 3. Running process of the algorithm

i   $i                                                      Ri          Ti
0   {{a, b, d}, {a, c, d}, {b, c, d}, {a, b, c}, {e, f}}    ∅           ∅
1   {{b, d}, {c, d}, {b, c, d}, {b, c}, {e, f}}             ∅           {a}
2   {{d}, {c, d}, {c}, {e, f}}                              ∅           {a, b}
3   {{e, f}}                                                {c, d}      {a, b}
4   {{f}}                                                   {c, d}      {a, b, e}
5   ∅                                                       {c, d, f}   {a, b, e}
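As mentioned above, the run in Table 3 can be confirmed by checking the two equivalent conditions directly; the helper below is our own verification sketch, not part of the paper.

    def is_complete_reduct(R: set, ds) -> bool:
        """Condition (1): R meets every entry of $S.
        Condition (2): for each a in R, some entry is missed by R - {a}."""
        cond1 = all(entry & R for entry in ds)
        cond2 = all(any(not (entry & (R - {a})) for entry in ds) for a in R)
        return cond1 and cond2

    example = {frozenset(x) for x in
               ({"a", "b", "d"}, {"a", "c", "d"}, {"b", "c", "d"},
                {"a", "b", "c"}, {"e", "f"})}
    assert is_complete_reduct({"c", "d", "f"}, example)   # the run shown in Table 3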
Proposition 1 If the ith iteration is an elimination iteration, then for every A ∈ $i−1 there exists A' ∈ $i such that A' ⊆ A, where i = 1, ..., k.
Proposition 2 If the ith iteration is a selection iteration, then A ∩ Ri ≠ ∅ for every A ∈ $i−1 with A ∉ $i, where i = 1, ..., k.
Proposition 3 For every A ∈ $i−1, if there is no A' ∈ $i such that A' ⊆ A, then A ∩ Ri ≠ ∅, where i = 1, ..., k.
Proposition 4 For every A ∈ $i with i < j, if there is no A' ∈ $j such that A' ⊆ A, then A ∩ Rj ≠ ∅, where i, j = 1, ..., k.
Proposition 1 and Proposition 2 are easy to prove. By Proposition 1 and Proposition 2 we can show that Proposition 3 holds, and Proposition 4 then follows.
Theorem 1 Let $S and R be the input and the output of Algorithm 3.1 respectively. Then A ∩ R ≠ ∅ for ∀A ∈ $S.
Proof Let the algorithm iterate k times, where k ∈ Z and k ≥ 0. If k = 0, the theorem holds trivially. If k > 0, we have $0 = $S and $k = ∅. For ∀A ∈ $S, A ∈ $0 holds. Note that $k = ∅, so ¬∃A' ∈ $k such that A' ⊆ A. By Proposition 4, A ∩ Rk ≠ ∅, namely A ∩ R ≠ ∅.
Proposition 5 For ∀A ∈ $i, ∃A' ∈ $0 such that A ⊆ A' and A' − A ⊆ Ti, where i = 0, ..., k.
Proposition 6 T ∩ R = ∅.
Both Proposition 5 and Proposition 6 are obvious.
Theorem 2 Let $S and R be the input and the output of Algorithm 3.1 respectively. For ∀a ∈ R, ∃A ∈ $S such that A ∩ (R − {a}) = ∅.
Proof Let the algorithm iterate k times, where k ∈ Z and k ≥ 0. If k = 0, the theorem holds trivially. If k > 0, we have $0 = $S and $k = ∅.

Robust Stability of Interval Lur'e Systems

J. T. Sun

Proof (of Theorem 1) Take the Lyapunov function V(x) = xᵀPx. Along the trajectories of the system (2.1),

V̇(x) = −[xᵀ, ωᵀ] Σ₁ [xᵀ, ωᵀ]ᵀ + 2ωᵀz − 2εxᵀPx.

If Σ₁ > 0 for all t ≥ 0 and Ψ(t, z) belongs to the sector [0, +∞) (i.e. ωᵀz ≤ 0), then we have

V̇ ≤ −2εV.                                              (2.5)
Integrating (2.5) from 0 to t yields

∫₀ᵗ V̇(x(s)) ds ≤ −2ε ∫₀ᵗ V(x(s)) ds,

so V(x(t)) ≤ V(x(0)) exp(−2εt). From V(x) ≥ ρ_min(P)‖x‖², we obtain

‖x(t)‖ ≤ (V(x(0))/ρ_min(P))^(1/2) exp(−εt).
This inequality indicates that the interval Lur'e system (2.1) is globally exponentially stable with a damping factor ε. The proof of this theorem is completed.
Remark 1 Theorem 1 establishes a relationship between the stability of the symmetric interval matrix Σ₁ and the global exponential stability of the interval Lur'e system.
Remark 2 The symmetric interval matrix L[A̲, Ā] = {A = (aᵢⱼ)ₙ×ₙ : a̲ᵢⱼ ≤ aᵢⱼ ≤ āᵢⱼ, aᵢⱼ = aⱼᵢ, i, j = 1, 2, ..., n}, where A̲ = (a̲ᵢⱼ)ₙ×ₙ and Ā = (āᵢⱼ)ₙ×ₙ, is stable if and only if the vertex set L_V[A̲, Ā] = {A = (aᵢⱼ)ₙ×ₙ : aᵢⱼ = a̲ᵢⱼ or āᵢⱼ, aᵢⱼ = aⱼᵢ, i, j = 1, ..., n} is stable. The proof is similar to that of Shi and Gao (1986).
Remark 3 If M[A̲, Ā] = (mᵢⱼ)ₙ×ₙ, with mᵢⱼ = āᵢⱼ for i = j and mᵢⱼ = max(|a̲ᵢⱼ|, |āᵢⱼ|) for i ≠ j, is stable, then the symmetric interval matrix L[A̲, Ā] is stable.
Theorem 2 The interval Lur'e system consisting of (2.1) and the feedback (2.2) belonging to the sector [0, +∞) is globally exponentially stable with a damping factor ε if there exists a symmetric positive-definite matrix P > 0 such that
[ H₁₁            Cᵀ − PB₀         ]
[ C − B₀ᵀP       Dᵀ + D + F₂ᵀF₂  ]  ≥ 0,                 (2.6)

where H₁₁ = −PA₀ − A₀ᵀP − 2εP + P(E₁E₁ᵀ + E₂E₂ᵀ)P + F₁ᵀF₁.
Proof Consider the equivalent system (2.4) and note

Y₀ = [ −PA₀ − A₀ᵀP − 2εP    Cᵀ − PB₀ ]
     [ C − B₀ᵀP             Dᵀ + D   ].

Since, for the uncertainty blocks Δ₁, Δ₂ with ‖Δ₁‖ ≤ 1 and ‖Δ₂‖ ≤ 1,

2(ξ₁ᵀPE₁)(Δ₁F₁ξ₁) + 2(ξ₁ᵀPE₂)(Δ₂F₂ξ₂)
  ≤ ξ₁ᵀPE₁E₁ᵀPξ₁ + ξ₁ᵀF₁ᵀF₁ξ₁ + ξ₁ᵀPE₂E₂ᵀPξ₁ + ξ₂ᵀF₂ᵀF₂ξ₂
  = [ξ₁ᵀ, ξ₂ᵀ] [ H₁₁    0     ] [ξ₁]
               [ 0      F₂ᵀF₂ ] [ξ₂],

where H₁₁ = P(E₁E₁ᵀ + E₂E₂ᵀ)P + F₁ᵀF₁, we hence have

Y₀ + [ H₁₁    0     ] > 0.
     [ 0      F₂ᵀF₂ ]

By the Lyapunov function V(x) = xᵀPx and a procedure similar to the proof of Theorem 1, we obtain that the interval Lur'e system (2.1) is globally exponentially stable with a damping factor ε. The proof of this theorem is completed.
Lemma 1 If A ∈ N[A̲, Ā] and (1 − λ)(A̲ − Ā) ≤ ΔA = A − [λA̲ + (1 − λ)Ā] ≤ λ(Ā − A̲) = λT with λ ∈ [0, 1], then for any positive number ε,

PΔA + ΔAᵀP ≤ εPᵀP + nε⁻¹[max(λ, 1 − λ)]² diag[(Ā − A̲)ᵀ(Ā − A̲)].

Proof Since |Δaⱼₖ| ≤ max(λ, 1 − λ) tⱼₖ for all j, k,

xᵀ{εPᵀP + nε⁻¹[max(λ, 1 − λ)]² diag(TᵀT)}x
  = εxᵀPᵀPx + (n[max(λ, 1 − λ)]²/ε) Σⱼ,ₖ xⱼ tⱼₖ tⱼₖ xⱼ
  = εxᵀPᵀPx + ([max(λ, 1 − λ)]²/2ε) Σᵢ,ⱼ,ₖ (xᵢ tᵢₖ tᵢₖ xᵢ + xⱼ tⱼₖ tⱼₖ xⱼ)
  ≥ εxᵀPᵀPx + (1/2ε) Σᵢ,ⱼ,ₖ (xᵢ Δaᵢₖ Δaᵢₖ xᵢ + xⱼ Δaⱼₖ Δaⱼₖ xⱼ)
  ≥ εxᵀPᵀPx + (1/ε) Σᵢ,ⱼ,ₖ xᵢ Δaᵢₖ Δaⱼₖ xⱼ
  = εxᵀPᵀPx + ε⁻¹xᵀΔAᵀΔAx ≥ 2xᵀΔAᵀPx = xᵀ(PΔA + ΔAᵀP)x,

the lemma holds.
Theorem 3 The interval Lur'e system consisting of (2.1) and the feedback (2.2) belonging to the sector [0, +∞) is globally exponentially stable with a damping factor ε if there exist a symmetric positive-definite matrix P > 0, scalars λᵢ ∈ [0, 1] (i = 1, 2) and λ > 0 such that
[ T₁₁           Cᵀ − PB₁ ]
[ C − B₁ᵀP      T₂₂      ]  ≥ 0,                          (2.7)
where
T₁₁ = −PA₁ − A₁ᵀP − 2εP + (λ + 1)PP + nλ⁻¹[max(λ₁, 1 − λ₁)]² diag[(Ā − A̲)ᵀ(Ā − A̲)],
T₂₂ = Dᵀ + D + m[max(λ₂, 1 − λ₂)]² diag[(B̄ − B̲)ᵀ(B̄ − B̲)].

Proof Note

Y₁ = [ −PA₁ − A₁ᵀP − 2εP    Cᵀ − PB₁ ]
     [ C − B₁ᵀP             Dᵀ + D   ],

where A₁ = λ₁A̲ + (1 − λ₁)Ā and B₁ = λ₂B̲ + (1 − λ₂)B̄. Since

[xᵀ, ωᵀ] [ K₁₁      PΔB ] [x]
         [ ΔBᵀP     0   ] [ω]
  = xᵀ(PΔA + ΔAᵀP)x + 2ωᵀΔBᵀPx
  ≤ xᵀ{λPᵀP + nλ⁻¹[max(λ₁, 1 − λ₁)]² diag[(Ā − A̲)ᵀ(Ā − A̲)]}x + ωᵀΔBᵀΔBω + xᵀPPx
  ≤ xᵀ{(λ + 1)PᵀP + nλ⁻¹[max(λ₁, 1 − λ₁)]² diag[(Ā − A̲)ᵀ(Ā − A̲)]}x
     + ωᵀ{m[max(λ₂, 1 − λ₂)]² diag[(B̄ − B̲)ᵀ(B̄ − B̲)]}ω
  = [xᵀ, ωᵀ] [ L₁₁   0   ] [x]
             [ 0     L₂₂ ] [ω],

where K₁₁ = PΔA + ΔAᵀP, L₁₁ = (λ + 1)PᵀP + nλ⁻¹[max(λ₁, 1 − λ₁)]² diag[(Ā − A̲)ᵀ(Ā − A̲)] and L₂₂ = m[max(λ₂, 1 − λ₂)]² diag[(B̄ − B̲)ᵀ(B̄ − B̲)], we hence have

Y₁ + [ PΔA + ΔAᵀP    PΔB ]  ≥ 0,
     [ ΔBᵀP          0   ]

and by a procedure similar to the proof of Theorem 1 we can obtain that the interval Lur'e system (2.1) is globally exponentially stable with a damping factor ε. The proof of the theorem is completed.
Remark 4 (2.6) and (2.7) are bilinear matrix inequalities (BMIs) in P; properties and computational methods for BMIs were presented by Goh et al. (1994).
3 Numerical example
We consider an uncertain system (2.1) with

A̲ = [ −0.06    0.97 ]      Ā = [  0.06    1.03 ]
    [ −2.04   −3.06 ],         [ −1.97   −2.03 ],

B̲ = [ 0.9 ]       B̄ = [ 1.1 ]
    [ 0.9 ],          [ 1.1 ],

C = (1  1) and D = 0.98. The nonlinear feedback is set as

Ψ(t, z(t)) = z(t){1 − sin[z(t)]};                         (3.8)

this nonlinear function belongs to the sector [0, +∞).
If we set ε = 0.5, λ = 1 and λ₁ = λ₂ = 0.5, there exists a positive-definite matrix

P = [ 4    1  ]
    [ 1   0.5 ]

which satisfies the condition in Theorem 3; hence the interval Lur'e system consisting of (2.1) and the feedback (3.8) belonging to the sector [0, +∞) is globally exponentially stable with a damping factor 0.5.
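For readers who want to check such conditions numerically, the sketch below assembles the matrix of (2.7) for the data above and inspects its eigenvalues. It assumes numpy; the entry-by-entry layout of A̲, Ā, B̲, B̄ follows our reading of the example and should be treated as an assumption rather than the authors' exact values.

    import numpy as np

    # Example data as read above; the placement of the interval-bound entries is
    # an assumption of this sketch, not a verbatim quotation of the paper.
    Al = np.array([[-0.06, 0.97], [-2.04, -3.06]])   # lower bound of A
    Au = np.array([[0.06, 1.03], [-1.97, -2.03]])    # upper bound of A
    Bl = np.array([[0.9], [0.9]])                    # lower bound of B
    Bu = np.array([[1.1], [1.1]])                    # upper bound of B
    C = np.array([[1.0, 1.0]])
    D = np.array([[0.98]])
    P = np.array([[4.0, 1.0], [1.0, 0.5]])
    eps, lam, lam1, lam2 = 0.5, 1.0, 0.5, 0.5
    n, m = 2, 1

    A1 = lam1 * Al + (1.0 - lam1) * Au               # A1, B1 as defined in Theorem 3
    B1 = lam2 * Bl + (1.0 - lam2) * Bu
    dA, dB = Au - Al, Bu - Bl

    T11 = (-P @ A1 - A1.T @ P - 2.0 * eps * P + (lam + 1.0) * P @ P
           + (n / lam) * max(lam1, 1.0 - lam1) ** 2 * np.diag(np.diag(dA.T @ dA)))
    T22 = D.T + D + m * max(lam2, 1.0 - lam2) ** 2 * np.diag(np.diag(dB.T @ dB))
    M = np.block([[T11, C.T - P @ B1], [C - B1.T @ P, T22]])

    # Condition (2.7) asks for M >= 0, i.e. no negative eigenvalues.
    print(np.linalg.eigvalsh(M))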
References
[1] Vidyasagar M (1993) Nonlinear Systems Analysis. Prentice-Hall, London
[2] Grujic LJT, Petkovski DJ (1987) On robustness of Lur'e systems with multiple nonlinearities. Automatica 23: 327-334
[3] Tesi A, Vicino A (1991) Robust absolute stability of Lur'e control systems in parameter space. Automatica 27: 147-151
[4] Dahleh M, Tesi A, Vicino A (1993) On the robust Popov criterion for interval Lur'e systems. IEEE Transactions on Automatic Control 38: 1400-1405
[5] Mori T, Nishimura T, Kuroe Y, Kokame H (1995) Comments on "On the robust Popov criterion for interval Lur'e systems". IEEE Transactions on Automatic Control 40: 136-137
[6] Wada T, Ikeda M, Ohta Y, Siljak DD (1996) Parametric absolute stability of multivariable Lur'e systems: an LMI condition and application to polytopic systems. Proceedings of the 13th IFAC World Congress, San Francisco, California, USA, pp 19-24
[7] Wada T, Ikeda M, Ohta Y, Siljak DD (1998) Parametric absolute stability of Lur'e systems. IEEE Transactions on Automatic Control 43: 1649-1653
[8] Boyd S, El Ghaoui L, Feron E, Balakrishnan V (1994) Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia, Pennsylvania
[9] Konishi K, Kokame H (1996) Robust stability of Lur'e systems with time-varying uncertainties: a linear matrix inequality approach. Int. J. of Systems Science 30: 3-9
[10] Gu K, Zohdy MA, Loh NK (1990) Necessary and sufficient condition of quadratic stability of uncertain linear systems. IEEE Transactions on Automatic Control 35: 601-604
[11] Goh KC, Turan L, Safonov MG (1994) Bilinear matrix inequality properties and computational methods. Proc. of Conf. on ACC, Baltimore, pp 850-855
[12] Goh KC, Safonov MG, Papavassilopoulos GP (1994) A global optimization approach for the BMI problem. Proc. of Conf. on Decision and Control, Tokyo, pp 2009-2014
[13] Shi Z, Gao W (1986) A necessary and sufficient condition for the positive-definiteness of interval symmetric matrices. Int. J. of Control 43: 325-328
[14] Wu FX, Shi ZK, Dai GZ (1999) H∞ robust control for interval systems. Acta Automatica Sinica 25: 705-708
[15] Sun JT, Deng FQ, Liu YQ (2001) Robust absolute stability of general interval Lur'e type nonlinear control systems. J. of Syst. Engineering and Electronics 12: 46-52
Thresholding Segmentation and Classification in Automated Ultrasonic Testing Image of Electrical Contact

H. D. Chen, Z. J. Cao, Y. W. Wang, and J. Xue

Welding Research Institute, School of Materials Sci. & Eng., Xi'an Jiaotong Univ., Xi'an 710049, P.R. China
[email protected]
Abstract. This paper is concerned with the thresholding segmentation of ultrasonic C-scan images of defects using artificial intelligence and support vector machine techniques. A wavelet-based method is used to remove image noise, and segmentation techniques based on thresholding are applied to segment the defect in the ultrasonic testing image. Experimental results show that this segmentation technique performs well in finding defects in C-scan images of electrical contacts.
1 Introduction
The increasing use of electrical contacts requires that reliable non-destructive evaluation (NDE) methods be available for the detection of defects and damage and for the assessment of the quality of these materials. Ultrasonic C-scan is a method commonly used in NDE, based on the propagation of ultrasonic waves through the material. The waves penetrate the material, bounce at the bottom (or at the defect, if one exists) and return to the transducer. The time at which the first echo arrives determines the position of the bottom of the joint or of the defect. The signal is then digitized along the plate and visualized as a gray-level image. The main difficulty is then to extract the defects from the background in C-scan images [1]. The images obtained are far from ideal. Not only do they contain sensor noise, which is typically present in any kind of image acquisition, but they are also corrupted by non-uniform illumination due to the image acquisition apparatus; for example, the specimen might not be perfectly horizontal when the ultrasonic transducer scans it. The wavelet transform has the ability of a "digital microscope": once ultrasonic testing images are acquired, the original image with its attached noise can be decomposed by the wavelet transform. Since the noise frequency spectrum differs from the frequency spectrum of the defect reflection signal, little
useful information is carried by the sub-band images with a low signal-to-noise ratio, and these can be eliminated during image reconstruction; a large amount of useful information is carried by the sub-band images with a high signal-to-noise ratio, from which the defect image can be obtained with an appropriate defect-inspection algorithm. The purpose of image segmentation is to separate the defect image area from the whole image, which is important for subsequent processing. Therefore, research on ultrasonic image segmentation techniques is significant for defect detection in automated ultrasonic imaging systems for welds. With the rapid development of electronic and computer technology, more is required of the ultrasonic testing of joints: one needs not only the ratio of brazed area (RBA), but also the defect size, defect location and defect type, in order to carry out quantitative and qualitative analysis. Thus research into ultrasonic testing image segmentation methods is essential to defect identification in welds [2]. In spite of all modern automated inspection methods, defect classification is still a difficult task. Much work, essentially based on pattern recognition techniques and artificial intelligence methods using neural networks, has been published on this topic. The paper is organized as follows. In Section 2, image techniques and the methods that have been applied to this type of image are discussed; next, an overview of the implemented system is presented and the way it works is introduced. Some theoretical explanations of the methods and techniques are then given, and the segmentation results produced by the proposed method are shown. In Section 6 we conclude that this segmentation technique performs well in finding defects in C-scan images of electrical contacts.
2 Image Processing Technique
The image techniques utilized for C-scan image segmentation can be divided into the following groups: threshold-based segmentation [3, 4], statistical methods for image segmentation, and region-growing methods. In the category of threshold-based segmentation, the use of iterative thresholding, histogram analysis and morphological operations has been proposed; some of these techniques are followed by some refinement of the segmented zones. Statistical methods represent another important category, and most of the approaches proposed in it combine statistical classification with different image processing techniques in order to segment the C-scan images; the use of statistical methods in texture analysis has also been proposed. The region-growing techniques applied to C-scan images represent the final category. All these techniques have their advantages and drawbacks, which is why more research is demanded in this domain; combinations of the existing methods or new image processing techniques may lead to better results. The segmentation problems widely encountered by researchers are the noise introduced with the acquisition of the image, the overlapping
intensities (different defect structures have different characteristics, which results in various signal intensities, and these intensities can overlap), and the partial volume effect.
3 Image Segmentation System
The system is now introduced; the theoretical implications are explained in the next section. Fig. 1 shows the system used to segment and detect the defect in C-scan images.
Fig. 1. System diagram
The C-scan image acquisition module is the first step in the image segmentation system. In this step it is very likely that noise will be introduced, so the quality of the image may be decreased. That is why the next step deals with the enhancement of the image, improving its quality and performing a de-noising operation. The de-noising method is wavelet-based and is described in the theoretical section. After de-noising, the image enters the classifier module, which discriminates according to statistical parameters that describe the image; in this step the classifier, a support vector machine, should also be able to distinguish between defect and background. The last step is the segmentation module, which detects the defect; it uses prior knowledge from the classification step in an adaptive histogram threshold for better defect segmentation. After a C-scan image has passed through the whole system, the result is the segmentation of the existing defect.
4 Theoretical Backgrounds
4.1 Image De-noising [5]
An important part of any image processing system is the pre-processing phase. This phase can involve contrast enhancement techniques or methods for removing noise. The method used to perform this task here is wavelet-based. Lately, wavelets have become a popular tool in many research domains, image de-noising being only one of them. First, a wavelet transform is applied to the image; then the wavelet coefficients obtained are manipulated so that the noisy points are
eliminated; finally, the inverse wavelet transform is performed to reconstruct the image. The manipulation of the wavelet coefficients in a procedure of this type is based on a classification, which is often binary: the coefficients are divided into two groups. The first group contains important, regular coefficients, while the other group consists of coefficients catalogued as "too noisy". These two groups are then processed in an appropriate way; noisy coefficients are often replaced by zero. To perform this classification, a decision is needed on the threshold that distinguishes between these two classes.
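As one concrete rendering of this binary classification of coefficients, the sketch below uses the PyWavelets package with a soft universal threshold on the detail sub-bands; the wavelet family, decomposition level and threshold rule are our illustrative choices, since the paper does not name the ones it used.

    import numpy as np
    import pywt

    def wavelet_denoise(img, wavelet="db2", level=2):
        """Decompose, shrink 'too noisy' detail coefficients, and reconstruct."""
        coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=level)
        # Noise scale estimated from the finest diagonal sub-band (a common heuristic).
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(np.asarray(img).size))
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(denoised, wavelet)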
4.2 Classification
4.2.1 Statistical parameters
A region containing a particular defect shows a wide variety in its gray level. The question is how to capture this variety so as to distinguish between different defects. Here, the original C-scan image is split into small windows and each of them is characterized using statistical measures. This area, also called a window, can be varied in size to capture samples at the different scales to be found. Note that the precision with which the boundary between regions is known is still a function of the window used; this is always true, so there is an advantage in methods that allow the use of small windows. The mean gray level and the standard deviation are known as statistical moments: the mean is related to the first moment, and the standard deviation depends on the second moment. There are others, and these can also be used to characterize defect areas. In general, for the nth central moment we have

μₙ = (1/N) Σᵢ₌₁ᴺ (xᵢ − x̄)ⁿ,                               (1)

where N is the number of data points and n is the order of the moment. The skewness can then be defined as

γ₁ = μ₃ / μ₂^(3/2),                                       (2)

and the kurtosis as

γ₂ = μ₄ / μ₂².                                            (3)
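A direct implementation of these window statistics might look as follows; the window size and the non-overlapping tiling are our choices for illustration.

    import numpy as np

    def window_features(win):
        """Mean, standard deviation, skewness and kurtosis of one gray-level
        window, following Eqs. (1)-(3)."""
        x = np.asarray(win, dtype=float).ravel()
        mu = x.mean()
        m2 = ((x - mu) ** 2).mean()   # second central moment
        m3 = ((x - mu) ** 3).mean()
        m4 = ((x - mu) ** 4).mean()
        sd = np.sqrt(m2)
        skew = m3 / sd ** 3 if sd > 0 else 0.0
        kurt = m4 / sd ** 4 if sd > 0 else 0.0
        return np.array([mu, sd, skew, kurt])

    def sliding_features(img, w=16):
        """One feature vector per non-overlapping w-by-w window."""
        img = np.asarray(img)
        return np.array([
            window_features(img[r:r + w, c:c + w])
            for r in range(0, img.shape[0] - w + 1, w)
            for c in range(0, img.shape[1] - w + 1, w)
        ])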
4.2.2 Support Vector Machines
Support Vector Machines (SVMs) are a technique introduced by Vapnik in 1995 [6, 7]. SVMs are theoretically well founded and have proved very efficient in classification tasks. The advantages of such classifiers are that they are independent of the dimensionality of the feature space and that the results obtained are very accurate, although the training time is high. Support Vector Machines are feedforward networks with a single layer of nonlinear units. Their design resembles that of generalized radial basis function (GRBF) networks, takes good generalization performance as an objective, and for that reason follows the principle of structural risk minimization, which is rooted in VC-dimension theory. The solution for a typical two-dimensional case has the form shown in Fig. 2. Those training points for which equality holds in the separating-plane constraint yᵢ(xᵢ·w + b) − 1 ≥ 0 (∀i), i.e. those which wind up lying on one of the hyper-planes H₁, H₂, and whose removal would change the solution found, are called Support Vectors (SVs) and are indicated in Fig. 2 by the extra circles.
Fig. 2. Linear separating hyper-planes for the separable case, the support vectors are circled
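Training such a classifier on the window features can be sketched as follows with scikit-learn; the RBF kernel, its parameters and the synthetic stand-in data are our assumptions, not the authors' settings.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Toy stand-in for the real training set: one row of window features
    # (mean, std, skewness, kurtosis) per window; label 1 = defect, 0 = background.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X_train, y_train)
    print("accuracy:", clf.score(X_test, y_test))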
4.3 Thresholding and Morphological Operations
According to the histogram of the C-scan image, the threshold to be used for segmentation can be determined. For a pixel to remain in the image after thresholding, its intensity value must be greater than the threshold value. Thresholding is the most common segmentation technique and has received great interest in the literature.
Another image processing technique used is opening, which is a morphological operation. Morphological operations are well-known techniques for the segmentation and enhancement of images and also for recognition purposes. Opening is used to eliminate small details and smooth the image boundaries.
4.4 Segmentation
The purpose of ultrasonic image segmentation is to separate the defect area from the background. If the amplitude of the reflection wave signal is large, the gray-scale value of the corresponding pixel in the image is large. Because the gray-scale value of a defect area differs from that of a well-welded area, image segmentation can be used to extract the defect image. The gray-scale range of the ultrasonic image is 0-255. Suppose the gray-scale value of a pixel in the original image is T(x, y); after thresholding segmentation it becomes Tt(x, y), where

Tt(x, y) = 1, if T(x, y) ≥ S;  Tt(x, y) = 0, if T(x, y) < S,        (4)

and S (0 < S < 255) is the threshold. The image after the thresholding process is a binary image. If the selected threshold is higher than the correct one, some defect area will be taken for welded area and the calculated welded area will be larger than the actual one; vice versa, it will be smaller than the actual one. Therefore the key to image segmentation is threshold selection. As noted in Section 2, the segmentation techniques used in image processing can be divided into threshold-based segmentation, statistical methods and region-growing methods, and threshold-based segmentation is widely used. In ultrasonic imaging testing, a threshold-based method is used here to segment the defect area in the ultrasonic image. There are two ways to choose the threshold: iterative thresholding, which is independent of the image, or an image histogram-based threshold. In order to extract the defect objects of interest from the image, the threshold has to be chosen in such a way that it separates those objects from the rest; one way to choose it is to use the histogram of the image. In this paper we performed experiments with histogram-based thresholds: according to the histogram of the C-scan image, the threshold to be used for segmentation can be determined.
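A histogram-based threshold combined with a morphological opening can be sketched as follows. Otsu's rule is used here as one common histogram-based choice, since the paper does not commit to a specific rule; scipy is assumed for the opening, which is applied before thresholding as in the experiments of Section 5.

    import numpy as np
    from scipy import ndimage

    def segment_defects(img):
        """Grey-level opening to suppress small details, then threshold by Eq. (4)."""
        img = np.asarray(img)
        opened = ndimage.grey_opening(img, size=(3, 3))
        hist, _ = np.histogram(opened, bins=256, range=(0, 256))
        p = hist / max(hist.sum(), 1)
        omega = np.cumsum(p)
        mu = np.cumsum(p * np.arange(256))
        # Otsu: choose S maximizing the between-class variance of the histogram.
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega) + 1e-12)
        S = int(np.argmax(sigma_b))
        return opened >= S            # Tt(x, y) = 1 iff T(x, y) >= S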
5 Experimental Results
The training of the support vector machine network was done with 40 images, 20 of an artificial-flaw workpiece and 20 of an actual workpiece. The classification accuracy is around 70%, but it is clear that it could be improved by training the network with more images. The idea of using such a network
was to build it and test it in order to use it in the future for classification of the images. The first step was the pre-processing phase of the images, to remove the noise that may be introduced during image acquisition; an example of image de-noising is shown in Fig. 3. The next step after classification is the segmentation of the C-scan image. The segmentation is threshold-based, but the threshold was applied after an opening transform: the opening transform was used to enhance the defect, and then the threshold was applied. Fig. 4 shows the result after segmentation.
Fig. 3. Experimental results of switch contact C-scan image: (a) image before de-noising; (b) image after wavelet de-noising
Fig. 4. Segmentation result of the C-scan image
6 Conclusions
An image segmentation approach based on wavelet de-noising and SVM feature extraction and classification has been developed to detect flaws automatically in NDE images. The implemented system is used for detecting defects in ultrasonic C-scan images. The system involves two major modules, one that performs classification and one that segments the defects from the images; the classification module can be used either before or after the segmentation process, depending on what the system is used for. In the segmentation phase the system performed well and detected the defects in the images, although no extra information was available that could be very useful in detecting some flaw patterns. Overall, the system performed well, but there is still much to improve for the completeness of the knowledge base.
7 Future Works
Work on this system will be continued, and some future research directions are presented here. First, we hope to obtain more artificial-defect images as well as actual ones. Once these images have been acquired, we want to use features extracted from the images in order to obtain an accurate classification. In our opinion, classification would be better performed after segmentation, with the segmentation results used in the classification.
References
1. D. E. Bray and R. K. Stanley, Nondestructive Evaluation: A Tool for Design, Manufacturing and Service. New York: McGraw-Hill, 1989.
2. D. T. Brown and G. Green, "Development of an ultrasonic system for submarine hull acoustic cladding inspection," NDT&E Int., vol. 30, no. 3, pp. 123-133, 1997.
3. P. Sahoo, S. Soltani and A. Wong, "A survey of thresholding techniques," Comput. Vision Graph., vol. 41, no. 3, pp. 233-260, 1988.
4. J. Kapur, P. Sahoo and A. Wong, "A new method for gray-level picture thresholding using the entropy of the histogram," Comput. Vision Graph., vol. 29, no. 2, pp. 273-285, 1985.
5. L. D. Wu, Computer Vision. Shanghai: Fudan University Press, 1993.
6. V. N. Vapnik, The Nature of Statistical Learning Theory. New York: Springer-Verlag, 1995.
7. V. N. Vapnik, "An overview of statistical learning theory," IEEE Trans. Neural Networks, vol. 10, no. 5, pp. 998-1000, 1999.
Author Index
M. Barrett, 281
A. Boeing, 228
T. Bräunl, 228
H. Cai, 276
Z. J. Cao, 372
H. Q. Chen, 372
M. Chen, 144
Q. Chen, 152
S. B. Chen, 25, 56, 123, 345, 353
W. Chen, 353
X. Chen, 110
B. Da, 144
X. Dai, 276
D. Ding, 80
X. Ding, 144
D. Du, 152
S. Emil, 1
M. F. Ercan, 212
G. Fang, 263
M. Ge, 63
X. Gu, 358
Y. F. He, 152
F. H. Ho, 304
J. Hoffmann, 14
D. Hong, 281
Y. L. Ip, 161
U. Jasnau, 14
M. L. Lam, 80
T. Lin, 123, 353
Y. H. Liu, 80
W. L. Lo, 304
Y. J. Lou, 25
H. Luo, 110
W. X. Lv, 123
D. Maravall, 190
T. Qiu, 123
A. B. Rad, 161, 304
P. Seyffarth, 14
B. Sui, 152
D. Sun, 358
J. Sun, 365
L. Sun, 276
Z. Sun, 241
Z. Tang, 241
J. S. Tian, 123
B. Wang, 345
Y. W. Wang, 372
Y. K. Wong, 161, 304
L. Wu, 25, 56, 123
P. Xiao, 281
Y. Xu, 63
Z. Xu, 263
J. Xue, 372
Y. Yang, 327
F. Ye, 358
J. Q. Yi, 56
H. Zhang, 144, 152
Y. Zhang, 123
D. B. Zhao, 25, 56
C. Zhou, 212, 241, 327
C. Zou, 144